Thursday, January 28, 2016

Simple checkpoints for a Test Manager to self-assess cost of quality: am I betting too high or too low?

Cost of quality is relative. There is no absolute high or low; it varies from project to project. However, there are a few simple checkpoints you can run through to gauge where you stand in delivering quality assurance through testing (a rough scoring sketch follows the list).
  1. Study your project plan for the critical path and understand which testing tasks occupy it. The more testing sits on the critical path, the higher the cost of quality is likely to be.
  2. Understand the stakeholders involved in the project; this will help you understand their views on quality and their risk appetite. The weaker their understanding of quality and the lower their risk appetite, the higher the cost of quality.
  3. Understand your operating model - have you put your best model and team into action? The fewer operating levers you invoke, the higher the cost of quality.
  4. Understand the system being built. The less critical it is, the lower the cost of poor quality.
  5. Understand the delivery processes. The less stable and mature they are, the higher the cost of quality spent on controlling them.
  6. Study the history of past failures and look for a prevention plan. The more past failures without a prevention plan, the higher the cost of quality through failure costs.
  7. Understand the tools landscape. The better your team is equipped with the right tools, the lower the cost of quality.
  8. Understand the project governance. The poorer the governance, the greater the cost leaks.
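
One rough way to make the self-assessment concrete is sketched below in Python: rate each checkpoint from 1 (favourable) to 5 (unfavourable) and add the ratings up. The checkpoint names, the 1-5 scale and the score bands are assumptions made purely for illustration, not a standard model.

    # Illustrative cost-of-quality self-assessment: rate each checkpoint
    # 1 (favourable) to 5 (unfavourable); a higher total suggests a higher
    # likely cost of quality. Names, scale and bands are assumptions.
    CHECKPOINTS = [
        "testing on the critical path",
        "stakeholder quality understanding and risk appetite",
        "operating model levers invoked",
        "criticality of the system",
        "stability and maturity of delivery processes",
        "past failures without a prevention plan",
        "availability of the right tools",
        "strength of project governance",
    ]

    def coq_risk_score(ratings):
        """ratings: dict mapping each checkpoint to a 1..5 rating."""
        if set(ratings) != set(CHECKPOINTS):
            raise ValueError("rate every checkpoint exactly once")
        if not all(1 <= r <= 5 for r in ratings.values()):
            raise ValueError("ratings must be between 1 and 5")
        score = sum(ratings.values())                # 8 (best) .. 40 (worst)
        band = ("low" if score <= 16 else
                "moderate" if score <= 28 else "high")
        return score, band

    print(coq_risk_score({c: 3 for c in CHECKPOINTS}))  # -> (24, 'moderate')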

Saturday, January 23, 2016

Cloud Testing - Just Hype?

In the 1990s, when the internet was first reaching homes, people were hesitant to give even their date of birth and gender in their email account profiles. We have come a long way. With social media in the picture, millennials now post a great deal of private information for public view, in a world of increasingly insecure technology. Software is going through a similar transformation with cloud computing. Companies no longer confine themselves to in-house capabilities for running the IT shop under a tight-security mindset; serious thinking about leveraging external computing power beyond the data centre walls is going on in every IT strategy discussion. Cloud computing is becoming the norm. Let us look briefly at what this means for the software testing world.


Cloud testing is a broad term used across the testing community, yet an exact definition is notably absent or poorly explained. While the definition of cloud computing itself is much debated, let us look at how a test manager needs to adapt to the new realm. Cloud computing is the use of public or private infrastructure to host software, or to run a platform or an environment, on an on-demand, pay-as-you-go basis. Cloud testing, then, primarily means any of the following, and more.


  • Testing a product that is built as SaaS (Software as a Service)
  • Testing a product that is developed and released in a PaaS (Platform as a Service)
  • Testing a product to be hosted in IaaS (Infrastructure as a Service)
  • Testing an in-house product in cloud-hosted test environments (non-production IaaS)
  • Migration of an in-house product onto XaaS (X being Software, Platform or Infrastructure)
  • Migration of software/platform/infrastructure from one cloud service provider to another
  • Integration of an in-house software product with cloud-hosted business services
  • Testing Cloud Management products


Perhaps nothing much changes for the tester, as the requirements, design and environment usage will factor in cloud specifics anyway. System testing could remain straightforward, much as a functional tester does not care whether the code is Java or .NET as long as the software meets the business requirements. That said, there are some additional dimensions the testing community has to think through.


Performance testing - while infrastructure in the cloud is scalable, it scales only on demand and only up to what has been subscribed for; the service provider may allocate no more than what is signed for. The real advantage is not necessarily scalability itself but the ability to scale up at short notice. As a testing team, challenge the NFRs and test against them so that capacity is rightly sized with the cloud service provider, along with the anticipated capacity to be invoked on demand.
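
As a small illustration of that sizing exercise, the sketch below checks whether an anticipated peak load fits within subscribed capacity plus the on-demand ceiling, and whether the provider's scale-up lead time beats the load ramp. All figures and parameter names are hypothetical.

    # Hypothetical cloud capacity check for performance test planning.
    def capacity_check(peak_tps, subscribed_tps, burst_tps,
                       scale_up_minutes, ramp_minutes):
        """peak_tps: anticipated peak throughput from the NFRs;
        subscribed_tps: standing subscription; burst_tps: on-demand headroom;
        scale_up_minutes: provider lead time to bring burst capacity live;
        ramp_minutes: how quickly the real load ramps to its peak."""
        if peak_tps <= subscribed_tps:
            return "peak fits within subscribed capacity"
        if peak_tps > subscribed_tps + burst_tps:
            return "peak exceeds the on-demand ceiling: revisit NFRs or contract"
        if scale_up_minutes > ramp_minutes:
            return ("burst capacity arrives too late: provider scales in "
                    "%d min but load peaks in %d min"
                    % (scale_up_minutes, ramp_minutes))
        return "peak fits, provided on-demand capacity is invoked in time"

    # Example: 900 tps peak vs 600 subscribed + 500 burst; the provider
    # needs 15 minutes to scale while the load ramps up in 10.
    print(capacity_check(900, 600, 500, 15, 10))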


Security and compliance testing - the testing team should have a good hold on the security-related NFRs. With access to the cloud service provider's contract clauses, it is essential to inspect them against those NFRs. Can production data reside in another country where the service provider's data centre is located? Who (at the cloud service provider) has access to sensitive production data? Is it compliant with local law?

Operational Acceptance Testing - how do the requirements address the availability, business continuity and maintainability of the software in the cloud?


Regression testing - how will upgrades and other software maintenance happen after go-live? How will the cloud service provider maintain and upgrade its capacity?


Test Management - how will efficient deployment schedules and highly reliable test environments positively impact test execution? While cloud-based test environments and release management reduce capex, shorten time to market and bring flexibility, these gains may not directly influence testing itself, and senior management's expectations have to be managed accordingly.


The good news is that cloud computing is not as disruptive for the testing community as it is for development or infrastructure teams. The test philosophy remains the same. While it is easy for new companies to jump straight into cloud solutions, the picture is murkier for existing companies deciding whether to migrate or integrate their legacy systems into the cloud. However, they may be forced to do so as more and more enterprise software enters IT strategy and providers offer lucrative on-demand cloud solutions; at least, the IT industry ostensibly appears to be pointing in this direction.

Tuesday, January 19, 2016

Shift left or right?

Most test managers are familiar with implementing 'shift left' approaches in a test strategy to deliver upfront quality. Typical ways of shifting left are applying inspection techniques to the review of requirements and design artifacts, and doing white-box testing alongside development while the components are being built. Shifting left improves time to market by pulling the test schedule to the left, and adds value by identifying defects as early as possible, thereby reducing the cost of quality.

Test managers also need to be conscious of what to test late in the game: 'shifting right'. In particular, for regression testing of common or shared components, it makes sense to test as late as possible so that the code going live is regressed against the latest merged code base. In non-functional testing, it is better to run soak tests as late as possible. Striking the right balance between what to test early and what to test late elevates an ordinary test manager into a master test strategist.

Monday, January 18, 2016

Software testing and Software Quality Assurance

Software testing is the process of verifying and validating that the software developed meets its desired objective. Though the testing process runs across the SDLC - unit tests by dev teams, inspections and dynamic testing by test teams - it is important to understand that testing alone will not bring in software quality. Software quality should be managed by top management through quality assurance across all the processes that make up the software development system: requirements gathering, design, coding, release, testing and more.

Large projects with multiple releases should have a documented quality assurance methodology covering how the processes will be executed, how they will be measured for performance, how they will be controlled for variation, and how they will be continuously improved and optimized for project objectives through increased cooperation and collaboration between processes. Software testing is just one process in the software development system, and testing alone will not improve software quality. Ensuring the overall quality of a software project needs constant focus on process definition, assurance and control in order to improve productivity, reduce waste and churn out highly predictable, consistent, quality output.

The cost of poorly performing processes is very high. Why does a software project spend 20-30% of its budget on testing? The need for software testing increases where there is a poor requirements/design review process, poor coding standards, poor unit testing, poor architecture standards, no traceability from requirements to code, and so on. A software supplier should study the client's quality assurance approach, understand how its processes will cooperate and communicate with the client's processes, and explain to the client how those processes will be controlled and improved for better performance. That can become a differentiator against competitors. Now we can see why combined dev-and-test projects run by suppliers are more successful than dev-only or test-only projects: the opportunities for processes to cooperate and collaborate are far higher.

Software testing budgets are a price of non-conformance. The cost of software testing can be reduced only through conformance to a well-managed process model.

Saturday, January 16, 2016

Implementation knowledge for Test Managers

In the arena of Formula One, thousands of hours of work and millions in spend go into product design, development and testing. Product testing happens not only so the driver can perform, but also for the 22-member pit stop crew who have to deliver a three-second operation that can decide whether a race is won or lost. This is no different from the implementation phase of a software project, where a tested product lands on the race track: the PROD environment.

There can be several known unknowns driving a test strategy during the planning phase, one of the most important being the implementation strategy itself. The key is to understand the implementation approach, its hot spots and quality risks, and to embrace them by incorporating them into the test strategy as early as possible. Quality issues during the implementation phase are not only costly but painful, because the development and test teams have become lean by then (not from tiredness, but by design!). Let us see why and how the testing team can advocate for quality in implementation, the little-known unknown at the time of project planning.

Functional and non-functional requirements describe the product's expected behaviour once it is up and running, but there may be very little documentation of the product's journey to that destination. Product quality is experienced by the customer from the moment it leaves the testing shop. The product's ability to be deployed into production, before it is 100% live, has to be of good quality and needs to be tested with the same importance as the product's capabilities. This becomes complex in staged implementations, where the product is rolled out in waves to manage the business impact in a controlled manner.
Below are some checkpoints for this context that the testing team can apply while shaping a test strategy:

  • Understand the implementation strategy - if it is a staged roll-out, what are the quality risks at each stage?
  • Understand the code drops that lead to system test - what are the success criteria of each code drop, and how do they differ from the implementation stages?
  • Understand the NFRs and challenge them for implementation-specific call-outs such as data migration, production downtime, business thresholds and limitations in live systems - which tasks can run in parallel and which cannot?
  • Find the hidden risks - are there any open questions left by designers and architects, to be resolved during implementation testing based on system test results?
  • Who is going to do the actual implementation, and have they signed off on the implementation strategy?
  • Have the test environment and test data teams understood the implementation strategy?
  • Have the dev teams understood the implementation strategy, and are they aware of any implementation-specific deliverables (e.g. a custom data load tool)? Does such a deliverable need to be tested, even though it will not be part of the product in live?

A testing team that challenges the project early along these dimensions not only helps reduce the cost of poor quality but also drives continuous improvement of the solution, making it more effective. Untested or under-tested implementation procedures and tools can sometimes pull a product out of go-live, leading to a complicated rollback and leaving the current system with issues. We really don't want to see a mechanic struggling with a faulty wheel gun while trying to remove a wheel, or a refuelling rig failing to pump 10 litres per second, while the media broadcast the scene to millions of television viewers around the world. The abandonment of pit-stop refuelling from the 2010 season is indeed a continuous improvement in product safety. Yes, the testing team can influence implementation by asking risk questions early in the project and devising a risk mitigation strategy.

Software test productivity

To put it simply, about 90% of the software testing services business, a multi-billion dollar industry, is handled by a few top global service providers. They are in the strongest position to improve testing standards, and they carry the greatest obligation to do so.

With no standard defining a proper testing unit of measure, the definition of productivity gets diluted, and the sizing and throughput of the work cannot be quantified for comparison. Eventually the customer buying the service either signs up for whatever feels right or makes decisions based on their own knowledge, and sometimes on poor experience with previous suppliers, which only moves the problem from one landscape to another rather than fixing it. Moreover, with no consistent benchmarks it becomes difficult for a customer to compare suppliers' offerings on value and determine who is good.

Ongoing competition between service providers only confuses the customer, while the need of the hour is structured communication and cooperation between suppliers to elevate the standard of work. Only a consortium (like those in the high-tech or drug industries, for instance) can streamline and define the basics of software testing service quantification and give clarity to customers. Customers don't ask for innovation; it is the producer's responsibility to be innovative in identifying customers' problems and solving them through win-win outcomes.

Knowledge of variations for a Test Manager

Think of software testing as a system whose components are processes, people and technology. It is essential for a test manager to understand the system and the variations associated with its components. Every system has a level of complexity and emits its own behaviours. When it comes to the testing phase, a test manager becomes effective when he understands the variations in the processes, so that he can measure, monitor and control them to achieve the goal.

Some example variations in a typical test phase are:

  • What is the ideal defect-fix turnaround time to assume when planning the daily run plan? (The dev team's experience and how loaded they are with other activities are some of the variations.)
  • How effective are the testers at finding valid defects? (The tester's knowledge of the product under test and the defect rejection rate in previous cycles are variations.)
  • What is the expected defect leakage across test cycles? (Initial cycles may leak heavily while the team is still gaining knowledge.)
  • How closely can the test bed behave like the real solution? (The level of stubbing, downsized servers and license restrictions are variations.)
  • What is a tester's daily productivity? (Reliance on automation, tester experience and night-shift working are variations.)
  • What is the psychology of the people? (Morale is a variation; there can be degraded quality of fixes or tests in the final cycles of very large projects due to over-stretched teams. Faced with a doubtful defect, one tester may 'Pass' a test while another 'Fails' it - that largely depends on personality.)

Without understanding these variations, goals set on test metrics may accomplish nothing. A goal outside the control limits cannot be met without changing the system itself. Project management has to be mindful of what the system can produce and set realistic goals that account for variation, as the sketch below illustrates.
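
As a minimal sketch of that last point, take defect-fix turnaround times from recent cycles, estimate the natural control limits (the classic mean plus or minus three sigma), and check whether a proposed goal lies within what the current system can deliver. The data and the 8-hour target below are made up purely for illustration.

    import statistics

    # Defect-fix turnaround times (hours) from recent cycles; made-up data.
    turnaround_hours = [26, 31, 24, 35, 29, 40, 27, 33, 30, 28]

    mean = statistics.mean(turnaround_hours)
    sigma = statistics.stdev(turnaround_hours)     # sample standard deviation
    ucl = mean + 3 * sigma                         # upper control limit
    lcl = max(0.0, mean - 3 * sigma)               # lower control limit
    print("system delivers about %.1f h, control limits [%.1f, %.1f]"
          % (mean, lcl, ucl))

    goal = 8  # hypothetical target: every defect fixed within 8 hours
    if goal < lcl:
        print("a %d h goal is outside the control limits: "
              "it cannot be met without changing the system" % goal)
    else:
        print("a %d h goal is within what the system can produce" % goal)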