

Agile Testing: Key Points for Unlearning

27 January 2012

Madhu Expedith
Agile Everyware LLC

"He who knows to unlearn, learns best."

— Anonymous

When quality assurance teams and their management first put Agile practices to work, they face a significant impediment: unlearning the mind-set and habits that years of traditional practice have instilled in them.

The following are some of the key aspects that need to be unlearned before attempting to deploy Agile practices, from a QA perspective:

  • The testing team needs to be independent and independently empowered in order to be effective.
  • Without a separate test strategy and test plan, it's tough to manage testing.
  • The V-model for verification and validation cannot be applied in an Agile sprint.
  • Independent testing teams don't do white-box testing.
  • The value of testing is realized only when defects are logged.
  • Automation is optional and is required only when regression testing is needed.
  • Testing nonfunctional aspects, such as performance of the system, is not possible in a sprint.
  • Testing must follow planning, specification, execution, and completion sequentially.
  • We don't have to write new test cases for detected defects.
  • Poorly written code is not the testing team's focus, as long as the code addresses the required functionality.
  • Test-process improvement models do not address aspects of Agile testing.

Let's look at these assertions one by one.

The testing team needs to be independent and independently empowered in order to be effective.

Traditionally, testing teams have followed different organizational styles: having no independent testers while developers perform the testing, having independent testers within the development teams, having independent testing performed by a separate division within the organization, or even outsourcing independent testing. Often the testing team would like to be empowered and report directly to a senior project manager rather than to the development or the technical lead. The logic, or at least the perceived logic, is to allow the testing team to report and escalate technical defects without potential inhibitions from the technical lead.

The Agile testing mind-set change that's required is that the testers are an integral part of an Agile team. Their focus is to deliver a quality shippable product at the end of each sprint and to achieve the "done" state for the backlog items committed without any technical debt. The testers report to the Agile team and are accountable to the product owner or the business.

Without a separate test strategy and test plan, it's tough to manage testing.

A test strategy document can typically be defined at the organizational level, the division or portfolio level, or even at the product level. The test strategy seldom needs to be defined for each project, unless the project is large and its duration spans many years. The project-specific test approach is documented in the test plan for the project.

In the case of Agile projects, the test approach can be documented in the release plan, and the sprint-specific testing activities can be defined during sprint planning. A separate test plan may not be required. However, having a test strategy at a level higher than the project could be useful, especially when the organization is undergoing a transformation to Agile. The test strategy can define the Agile testing practices and techniques to be followed across the organization or division; subsequently, Agile teams can adopt one or more of these practices while defining the test approach in the release plan for the particular project.

The V-model for verification and validation cannot be applied in an Agile sprint.

Within an Agile sprint, verification and validation are addressed by adopting Agile practices, such as verifying that the INVEST criteria for documenting requirements are followed, creating and reviewing evocative documentation and simple design, reviewing visual models, holding daily stand-up meetings, reviewing radiator boards, following continuous integration, refactoring, running automated development tests and automated acceptance tests, holding focused reviews, and enhancing communication by having the product owner and customer on the team.
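To make "automated acceptance tests" concrete, here is a minimal sketch in Python. The user story, the `apply_discount` function, and the acceptance criteria are invented for this example; they are not from the article.

```python
# Hypothetical backlog item: "As a member, I get 10% off my order."
# Prices are in whole cents to avoid floating-point rounding issues.

def apply_discount(price_cents: int, customer_is_member: bool) -> int:
    """Members get 10% off; non-members pay full price."""
    if customer_is_member:
        return price_cents - price_cents // 10
    return price_cents

# Acceptance criteria written as executable checks: the story counts as
# "done" only when these pass in the continuous-integration build.
def test_member_gets_ten_percent_off():
    assert apply_discount(10000, customer_is_member=True) == 9000

def test_non_member_pays_full_price():
    assert apply_discount(10000, customer_is_member=False) == 10000

if __name__ == "__main__":
    test_member_gets_ten_percent_off()
    test_non_member_pays_full_price()
    print("all acceptance checks passed")
```

Run on every build, such checks verify the backlog item continuously rather than in a separate staged test phase.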

The following figure shows an Agile V-model for verification and validation, as compared to the traditional V-model:

Independent testing teams don't do white-box testing.

Independent testing teams traditionally focus on black-box testing, possibly shrugging off any responsibility related to low-level testing. However, in Agile projects, testers play a significant role in automated development and acceptance tests. Agile testing is continuous and seldom staged. Agile testers need to understand the design and code-level aspects in order to effectively perform testing for a sprint. While the developers take the lead in unit testing, the Agile testing team shadows the low-level testing efforts and leads the automation aspect.

The value of testing is realized only when defects are logged.

While the value of testing lies in the early detection of defects and in ensuring that the shippable product is of good quality, Agile teams need to unlearn the defect numbers-game mind-set. Teams may perceive that a higher count of detected defects indicates better performance by the testing team. As a result, many cosmetic defects get logged.

This mind-set needs to be managed. The Agile testing team directly contributes to the "done" state of the product backlog item, which essentially means that a backlog item cannot be considered done unless it passes testing. Agile testing teams must make use of radiator boards to effectively radiate the status of the backlog items.

Automation is optional and is required only when regression testing is needed.

Automation is not optional; it's an essential aspect, especially when the business is trying to improve the time to market for its products. Agile teams working at peak velocity adopt such practices as continuous integration, automated development tests, and automated acceptance tests. Without automation and application of tools, the team cannot achieve the desired agility.
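As a sketch of what an automated regression suite looks like, here is a minimal example using Python's standard `unittest` module. The shopping-cart function is hypothetical, invented purely for illustration.

```python
import unittest

def add_item(cart: dict, sku: str, qty: int = 1) -> dict:
    """Hypothetical function under test: add qty of a SKU to a cart."""
    cart[sku] = cart.get(sku, 0) + qty
    return cart

class CartRegressionTests(unittest.TestCase):
    """Runs unattended on every continuous-integration build."""

    def test_add_new_item(self):
        self.assertEqual(add_item({}, "A1"), {"A1": 1})

    def test_add_existing_item_increments_quantity(self):
        self.assertEqual(add_item({"A1": 1}, "A1", qty=2), {"A1": 3})

if __name__ == "__main__":
    # exit=False so the script continues after reporting results.
    unittest.main(argv=["cart_tests"], exit=False)
```

Because the suite runs without manual effort, regression coverage grows sprint by sprint instead of becoming a bottleneck before each release.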

Testing nonfunctional aspects, such as performance of the system, is not possible in a sprint.

Sometimes it may not be possible to perform testing of nonfunctional aspects, such as system performance, within a sprint. However, this can be addressed by having a separate release sprint during release planning. The release sprint can address the required nonfunctional testing and also perform a cycle of acceptance testing to ensure that the system works after any defect fixes. Rigorous integration testing may not be required if the system was continuously integrated and tested by leveraging automation.
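Where a lightweight performance check can fit inside a sprint's automated suite, it might look like the following sketch. The operation under test, the workload size, and the two-second budget are all assumptions made up for this example.

```python
import time

def build_report(rows: int) -> list:
    """Stand-in for the operation whose performance is being checked."""
    return [i * i for i in range(rows)]

def test_report_builds_within_budget():
    start = time.perf_counter()
    result = build_report(100_000)
    elapsed = time.perf_counter() - start
    assert len(result) == 100_000
    # The budget is a made-up service-level target for illustration.
    assert elapsed < 2.0, f"performance budget exceeded: {elapsed:.3f}s"

if __name__ == "__main__":
    test_report_builds_within_budget()
```

Full-scale load testing would still belong in the release sprint; a check like this simply catches gross performance regressions early, within the sprint.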

Testing must follow planning, specification, execution, and completion sequentially.

The aspects of planning, specification, execution, and completion are highly relevant in Agile testing. However, we need to understand that Agile testing is continuous, not staged. While one backlog item may be marked "done," another item could be in its specification stages. Some teams follow the practice of updating a backlog item as "done" only when the test cases are automated for the backlog item.

We don't have to write new test cases for detected defects.

Traditionally, it hasn't been the practice for a test team to go back and specify a test case for a detected defect, especially for defects found during exploratory testing. One key pain point is the process of re-baselining the test case document and running around for signatures, since this is a change from the planned baseline. However, adapting to change is one of the Agile framework's foundational aspects. In Agile testing, new test cases are specified for detected defects that don't already have an associated test case, and each such test case is subsequently included in the automated test suite.
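As a sketch of pinning a defect with a new automated test case: suppose exploratory testing found that a quantity field containing a thousands separator crashed a parser. The function, the defect ID, and the scenario are all hypothetical.

```python
def parse_quantity(text: str) -> int:
    """Parse a quantity field.

    Fix for hypothetical defect DEF-1234: accept thousands
    separators such as "1,000" instead of raising ValueError.
    """
    return int(text.replace(",", ""))

def test_def_1234_thousands_separator_regression():
    # Added after the defect was fixed; it now runs in every build,
    # so the defect cannot silently reappear.
    assert parse_quantity("1,000") == 1000
    assert parse_quantity("42") == 42

if __name__ == "__main__":
    test_def_1234_thousands_separator_regression()
```

The defect found by exploration becomes a permanent, executable part of the regression suite rather than a one-off observation in a report.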

Poorly written code is not the testing team's focus, as long as the code addresses the required functionality.

This is related to the point above ("Independent testing teams don't do white-box testing"): independent test teams traditionally focus only on black-box testing and may be unconcerned with the quality of the code as long as the code performs the required functionality. But the value added by the testing team can be significant if it can provide early feedback and identify technical debt by focusing on code-level aspects during verification and during validation or testing. This is one of the key Agile-testing mind-set changes required of a new Agile tester.

Test-process improvement models do not address aspects of Agile testing.

In fact, we do have the ability to measure and improve Agile testing, using standard industry models. Test-process improvement methods such as TPI NEXT advocate business-driven test process improvement in an Agile environment by prioritizing the key areas of focus. This facilitates an Agile testing mind-set by mapping Agile principles with specific, prioritized areas. TPI NEXT also provides specific "improvement suggestions" for the checkpoints in priority areas of Agile testing.


Although the task of testing is, in principle, not very different across Waterfall, iterative, and Agile approaches, the Agile mind-set and its testing practices provide effective new means to achieve the desired results. The agility lies in the Agile practices rather than in the overarching process itself.

Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.



Amit Garg, CSM, 1/28/2012 12:14:09 PM
Many think of agile sprints as a 'mini' Waterfall. Testing especially is perceived as functional and nonfunctional testing stacked in stages within a sprint, rather than continuous. The Agile V-model summed it all up well. The unlearning is the key...
Sanjay Sudhakar, CSM, 1/29/2012 9:15:22 PM
I personally faced challenges when my team did not unlearn the defect numbers-game mind-set, which affected the entire team's balance. If this is managed well, together with QA folks who are creative in performing exploratory testing, the quality of the deliverable can be extremely good.
Sanjay Sudhakar, CSM, 1/29/2012 9:15:36 PM
Great article, especially for those who perceived agile testing as a separate activity. As Amit said, the Agile V-model summed it all up well.
Subramani Dharmar, CSM, 1/30/2012 10:30:07 AM
Good article to know the things that we need to unlearn before moving from waterfall model of testing to Agile Testing.
Madhu Expedith, CSM, 2/7/2012 2:13:28 PM
Posting a comment from a colleague, Sathianarayanan:

One of the crucial aspects for a team implementing an Agile process is getting the test team's buy-in. We faced severe difficulties in this area, as they were resistant to changing their existing processes, simply saying, "This will not work for us. Let the dev team follow their own processes and we will adhere only to the timelines." Finally, higher management had to step in to enforce this.
Dr. Sanjeev Raman, ACP, SPC, SA, SPM, SP, CCA, AKT, MGT 3.0, CSP,CSD,CSM,CSPO,REP, 2/8/2012 7:10:18 PM
I managed a project for a major retail client headquartered on the West Coast. Our goal was to release a dozen retail e-commerce sites in multiple languages. When I took the project over from a previous PM, I was forced to work with their offshore QA model. To make a long story short, I agree with every point you made in this article.

I was given an onsite QA Lead to be on my "Scrum" team, and he was responsible for coordinating all the QA testing for each sprint. Ironically, he was a certified scrum master (CSM), but he insisted on a black-box, manual testing approach and only committed to doing QA when we were "code complete" for each sprint. Given that their QA team outnumbered our Scrum team 3 to 1, we could not fix all the bugs within the time-boxed sprint to complete the stories, which put us in a very difficult situation. Another issue with their testing approach was that they gave no consideration to troubleshooting the root causes of the defects, nor did they believe in automation. Their attitude was that the "developers" had to figure it all out, fix everything, and give us a clean build to retest. Most of our requirements were around data and permutations, and they insisted on testing each of the 6,000 permutations manually by scaling up more people offshore. (I brought in a highly skilled data analyst and we automated the testing within a week, but faced strong opposition in the process.)

I raised all these concerns to the Director of QA and how we need to be more agile (and even outlined a transitional plan). But, given the bureaucracy and her wanting to do empire building, she was not interested in my concerns.

In the end, as I was leaving, they brought in a certified scrum coach (CSC) to work with a new PM. The last I checked, the only retail e-commerce sites to go live were the ones I managed when I was there, and it's been several months since I left...
Madhu Expedith, CSM, 2/12/2012 11:49:12 PM
Thanks for all your comments. QA organizations strive to be on the critical path for the minimum time span. The testing team will potentially be on the critical path for the longest duration if the traditional mind-set is not unlearned, and this may cause significant delays in the release of the software.
Bill Rinko-Gay, CSM, 2/16/2012 8:39:28 AM
As a QA manager with over 30 years' experience, I find the entire development team needs to stop thinking of QA as a separate activity. Within the Scrum team, quality output is part of the definition of done. The team is as much on the hook for writing code that can be tested automatically as it is for automating the tests (note, I am not dividing the team into developers and testers on purpose). There may need to be testing when the work of multiple teams is integrated, but that is still part of the whole process, not a separate test activity. One of my greatest success stories came about in a non-Agile world only when the Development manager recognized his role in good quality assurance, and re-architected his entire code base for testability. We went from being unable to produce a quality release in 9 months to producing high-quality releases every 6 weeks, and that without Agile. Just imagine what an Agile team can do when it doesn't remove QA from the development domain.
Bruno Eduardo Andrade de Carvalho, CSM, 2/25/2012 3:49:28 PM
Very good article,
Dan Babington, CSM, 3/10/2012 6:15:36 AM

An interesting article. I have a question, though: in this article you are effectively eliminating the pure-play tester (i.e., one without any development skills, or with development skills that are not appropriate to the current project). This type of tester would typically use manual scripts (e.g., a Word document) for their testing. Have I misunderstood your article, and if not, how would you look to incorporate that type of individual?
Andreea TOMOIAGA, CSM, 3/13/2012 8:04:36 AM
Hi Madhu, I found this article very interesting and enjoyed the parallel with the V Model.

Hi Dan, I think you raise an interesting point here. From what I have seen in my experience so far, individuals with testing/QA but no coding experience can contribute significantly in a Scrum team that uses XP practices, by being involved in establishing, together with domain experts and the product owner, the acceptance criteria for the product backlog items. From the beginning, the individual with testing/QA experience brings value by contributing to a detailed examination and definition of how the product should work. What scenarios are left out? Are some cases missed by the domain experts?

If, in later stages, that individual wants to contribute to white-box testing by iteratively acquiring coding skills, then so much the better, since the whole philosophy of Agile encourages skills development, so that team members acquire other skills on top of their "core" ones through collaboration with the team, in this case becoming "generalizing specialists."
Madhu Expedith, CSM, 3/15/2012 12:26:19 AM
Hello Dan and Andreea, thanks for your comments. In Agile projects where three or four sprints are included in a release, automation significantly reduces the time needed for testing, so QA remains on the critical path for a shorter duration (especially in cases where you have a separate release sprint). You just have to ask a couple of questions: 1. What if the QA team can create automated test cases in the time it takes to create manual ones? Note: even if QA takes slightly longer to create automated test cases than manual ones, there are still advantages, such as less time spent on test execution and fewer errors in test-case design (because a wrong test case also fails fast :-)). 2. Do we have enough time from business users to execute a full-fledged traditional UAT? Or would you rather review the acceptance test cases with business users and have those test cases automated along with the other test cases?
Andreea TOMOIAGA, CSM, 3/15/2012 3:59:55 AM
Hi Madhu, thank you for the comment. I fully agree with the necessity of automating testing: although it requires effort at the beginning, in the long run it saves time. By UAT I do not necessarily refer to business users but rather to on-site experts, as presented in the XP methodology. In the absence of enough time, an active review is what I would agree with: a sketch of the tests to be reviewed, then actively filled in and commented on. Otherwise it is easy to deliver something that the client/end user does not really want.
Dele Oluwole, CSP,CSM, 9/2/2012 5:58:40 PM
Nice article Madhu, but note that the list of assertions in Agile Testing is inexhaustible. The key ones you missed are:

1. Tester MUST test high priority user stories first
2. Everyone is Responsible for Testing and Quality
3. Every user story MUST include acceptance criteria that can be used as the base for developing test cases
4. Defects should only be raised and recorded when they are not going to be fixed immediately
5. Test cases, code, and data SHOULD be treated with the same importance as production code
Madhu Expedith, CSM, 9/13/2012 10:35:23 AM
Thanks, Dele, for your feedback. As you rightly said, "the list of assertions in Agile Testing is inexhaustible." Having said that, I would like to state that if there are two ways to solve a problem, it is not necessary that one of them must be wrong.
A couple of comments:
"1. Tester MUST test high priority user stories first": When the focus is on ordering/prioritization, and when it is done right, the Define/Build/Test cycle inherently takes care of this. Thanks for emphasizing.
"2. Everyone is Responsible for Testing and Quality": Agree with you again. However, there can be two schools of thought here as well... (a) the people who build the system test the system; (b) you cannot solve problems with the same thought process that created them in the first place, and therefore another pair of eyes more often than not improves quality. It is not about which is right or wrong but what works for the respective team.
Leenu Chawla, CSP,CSM, 11/5/2012 2:57:40 AM
Hi Madhu!!
The article is very interesting; as is truly said, "unlearning is the key."
The defect numbers game is the most harmful at times and has to be sorted out as quickly as possible in every Agile project.
Glen Wang, CSM, 2/18/2013 3:21:45 AM
Agile doesn't change testing itself; it changes the relationship between testers and designers.
Madhu Expedith, CSM, 4/5/2013 5:03:04 PM
It provides the right objectives for the tester: focus on delivering working software, not just on breaking the software, so that teams don't end up in a scenario where testing did a great job but the project was a failure.
Kwaku Ampomah, CSPO, 11/26/2013 11:18:42 AM
Thank you for such an interesting and enlightening article. Everything that you wrote is what I try to instill in the teams I work with. Well written - Nice one
Sujith Phalgunan, CSM, 12/9/2013 4:55:29 AM
Bang on, Madhu!! The article and the comments. A good read for a tester at any level transitioning to an agile project. Managing agile projects, I have noticed that the mind-set of a tester needs a paradigm shift in comparison to a developer's, and articles like these help reinforce this. Such testers should be given a lot of support and confidence initially by the whole team to carry on.
Madhu Expedith, CSM, 12/23/2013 2:41:41 AM
Thanks Kwaku and Sujith
I would like to add one more tip that may help with the transition: Observe whether the person playing the role of the Test Manager is involved technically or is acting as a Resource Manager managing testers and test analysts. If you have a Test Manager who is involved technically, there is a high probability that the transition will be a lot smoother, with less resistance.
Parashuram Bellikatti, CSM, 1/20/2015 12:57:35 AM
Nice article. I believe that the entire development team needs to consider QA as an integral activity.
Quality is everyone's business, and it should be clearly included as part of the definition of done.
Scott Sherwood, CSM, 10/14/2015 9:16:58 AM
A tester working in an agile environment does need a different mind-set, as they will need to work in a different, more approachable, and integrated way with the rest of the team. Based on experience, we've put together a short list of ways that you can integrate software testers into a team when working with Scrum.

It all comes down to being involved at every stage, and this applies to testers, designers, UX, or anyone who is contributing toward the sprint.
