This article is born out of the need to establish that due diligence is required even in the face of the self-organization and independence that Agile principles hinge on.
I want to start by quoting one of the pillars of the Agile community. Mike Cohn laid it out in his book Succeeding with Agile (2010): "Continuous integration refers to integrating new or changed code into an application as soon as possible and then testing the application to make sure nothing has been broken . . . usually achieved with the help of a tool or script . . . and to run a suite of regression tests over the entire application. . . . Other artifacts will exist: Test Plans, executable test cases. . . ."
I also want to define, in layman's terms, some software testing concepts that are relevant to my argument:
- Integration testing: testing continuously evolving modules as they are integrated (with a defined expected outcome)
- User acceptance testing: testing by a user representative (with a defined expected outcome)
- Sanity testing: testing after multiple builds (with a defined expected outcome)
- Smoke testing: testing after every build (with a defined expected outcome)
- Exploratory testing: unplanned and unscripted testing (no defined expected outcome)
- Regression testing: end-to-end testing done to ensure that nothing is broken across the integrated modules
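The common thread in these definitions is whether a test has a defined expected outcome. A scripted check makes that outcome explicit before the test is ever run. The sketch below is a minimal illustration in Python; the `apply_discount` function and its rules are hypothetical, invented only to show the idea.

```python
# A hypothetical function under test. Its expected behavior is agreed
# on in advance -- the opposite of exploratory testing.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Scripted tests: each assertion records a defined expected outcome
# before the test is run, so a failure is unambiguous.
def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0   # 10% off 200 is 180
    assert apply_discount(99.99, 0) == 99.99    # no discount, no change
    assert apply_discount(80.0, 100) == 0.0     # full discount

if __name__ == "__main__":
    test_apply_discount()
    print("all scripted checks pass")
```

An exploratory session on the same function would instead probe it interactively, with no assertions written down in advance; that is exactly the distinction the definitions above turn on.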
In an organization where I recently trained Agile testers, I was confronted by a senior tester who vehemently believed that the only testing required on Agile projects is exploratory testing. She claimed to have worked in Agile environments for more than nine years, and that in every organization she had worked for, when it was time to test, only exploratory testing was done.
There are many misconceptions like this about Agile, and I want to seize this opportunity to present an argument.
How can an application that has been continuously integrated be verified and validated without historical knowledge of the integrated modules and a defined expected behavior after integration? Clearly, scripting tests to verify the expected behavior of an application after code compilation is inevitable in both traditional (sequential) and Agile methods. So is Agile an easy way out of effective, robust, result-oriented testing?
The obvious point that must be set straight is that Agile development, and frameworks like Scrum and Extreme Programming, support continuous integration testing. These frameworks, where adopted, are not known for uncertainty and ambiguity. Testing in Agile has certainly evolved over the years, but that is no pretext to lower the quality assurance expectations of an Agile project.
Even though a Scrum project doesn't require upfront analysis or design, as all work occurs in the repeated cycle of sprints, planning is nonetheless required before committing to a sprint estimate.
In Scrum, the adaptive nature of intentional and emergent technical practices has given rise to many misconceptions and misconstrued ideas that must be refuted at every opportunity.
Let me begin my argument with what is expected of Agile testers. Testers are not out for perfect requirements; rather, they are out to ensure conformance to users' needs, especially in Scrum, where predicting all user needs is known to be impossible.
Unlike testers in a traditional software development environment, testers in Scrum cannot wait for the delivery of a perfect requirements document and then confirm that the system does everything it says. High priority is given to talking about requirements and engaging with the product owner rather than writing about them. No longer will testers sit idle waiting for requirements; instead, they follow up on validation criteria in order to know what is expected of a new feature. Where Scrum is adopted, testers have to become more proactive, enthusiastic, and forward-thinking in their handling of requirements.
Agile testers have become more skillful as they interact more with programmers to understand code functionality, delivery, and integration into the existing application and environment. Technical practices such as test-driven development, pair programming, and test automation have sharpened Agile testers and put testing at center stage in Agile activities.
From the outset, testing on Agile projects has moved from mere verification and validation to being a way of building quality into the product. This is largely because testing has become a central practice, integral to the development process rather than coming after it.
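To make the first of those practices concrete, test-driven development inverts the usual order: the test, with its defined expected outcome, is written before the code it exercises. The `slugify` function below is hypothetical, a minimal sketch of the red-green rhythm rather than any project's real code.

```python
import re

# Step 1 (red): the test is written first, before slugify() exists,
# and initially fails. It encodes the expected behavior up front.
def test_slugify():
    assert slugify("Agile Testing!") == "agile-testing"
    assert slugify("  Hello   World ") == "hello-world"

# Step 2 (green): the simplest implementation that makes the test pass.
def slugify(text):
    """Lowercase the text and join its alphanumeric words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

if __name__ == "__main__":
    test_slugify()
    print("red-green cycle complete: tests pass")
```

The point for testers is that the assertions double as a specification: the expected outcome exists before any code does, which is why TDD draws testers into the heart of development rather than leaving them at its tail end.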
Integration testing is vital to any Agile project, while user acceptance testing allows the product owner to validate and approve the product before deployment.
Exploratory testing, therefore, can be integral to Agile testing, but not vice versa: while it has its place in Agile development, it certainly cannot be the whole exercise.
Integration, regression, and automated testing are popular in Agile because, done this way, testing can largely guarantee a high level of confidence in a product being developed incrementally and iteratively. Testing is done repeatedly, not just once, by way of product integration, regression, and automation, and this approach steadily builds confidence in the product. Testing in traditional (sequential) methods does not have this luxury, as the testing window opens only once in the development cycle; defects not discovered at that stage find their way into production, raising the overhead cost.
Since products are developed incrementally and iteratively, it is imperative that the modules developed in every iteration are tested together to ensure that they actually integrate, for the overall product benefit and business value. Regression (end-to-end) testing is necessary to ensure that nothing in the latest iteration has broken the product as a whole. On some projects it will also be necessary to validate individual (system) functionality before integration testing.
Toward the last iteration of a release, user acceptance testing, or acceptance testing, is necessary to ensure the validation of the product and to be sure it meets the requirement criteria or validation criteria of the user. The verification of functionalities by the Agile tester is not enough until validation is done by or for the product owner/user.
The paradigm in software development has now shifted, elevating testing activities to a place of prominence where no product is committed to use until it is validated. Testing activities in an Agile environment therefore cannot be all exploratory, though exploratory testing is part of the whole process of quality assurance. The evangelists of exclusive Agile exploratory testing should note that "a Scrum team will require a suitable automated testing environment regardless of whether it also does continuous integration" (Cohn 2010, p. 163).
Granted, not many Agile projects will require just two phases, such as integration and regression. But it is definitely not only exploratory testing that is needed, as is erroneously believed in some quarters.
We should also note that:
- In Extreme Programming, the customer defines the tests.
- In Scrum, the product owner defines the acceptance criteria, i.e., the expected outcome or validation criteria. The tester therefore has to verify every feature in the user story against those criteria.
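Such acceptance criteria translate directly into executable checks. The sketch below assumes a hypothetical user story about shipping costs; both the criterion and the `shipping_cost` function are invented for illustration.

```python
# Hypothetical acceptance criterion from the product owner:
#   "Given a cart totaling more than $50, when the order is placed,
#    then shipping is free; otherwise shipping costs $5."

def shipping_cost(cart_total):
    """Hypothetical feature under test."""
    return 0.0 if cart_total > 50.0 else 5.0

# The tester verifies the feature against exactly the stated criterion,
# including the boundary the criterion implies.
def test_free_shipping_over_fifty():
    assert shipping_cost(60.0) == 0.0   # over $50: free shipping
    assert shipping_cost(50.0) == 5.0   # exactly $50: not free
    assert shipping_cost(20.0) == 5.0   # under $50: $5 shipping

if __name__ == "__main__":
    test_free_shipping_over_fifty()
    print("acceptance criterion verified")
```

Each assertion is traceable back to a clause of the product owner's criterion, which is what it means for a tester to verify a feature against validation criteria rather than explore it ad hoc.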
Not all about exploratory testing
It is imperative to state in clear terms why Agile testing cannot be all about exploratory testing. It is unequivocally the case that:
- You cannot estimate your time for exploratory testing, i.e., assign points realistically.
- You cannot plan for exploratory testing, as you do not have defined expected results.
- There is no defined scope for exploratory testing.
- The tester, product owner, and Scrum team are not in control.
- There is no measure of progress, as testers cannot determine when testing is enough.
As I round off this article, I would like to leave you with a quote:
Exploratory testing is also known as ad hoc testing. Unfortunately, ad hoc is too often synonymous with sloppy and careless work. So, in the early 1990s a group of test methodologists (now calling themselves the Context-Driven School) began using the term "exploratory" instead. With this new terminology, first published by Cem Kaner in his book Testing Computer Software, they sought to emphasize the dominant thought process involved in unscripted testing, and to begin to develop the practice into a teachable discipline.
— James Bach, Exploratory Testing Explained, v.1.3, 4/16/03
It is the responsibility of the tester (and the Agile/Scrum team) to ensure that acceptance testing is in line with the expectations of the product owner. If we agree that there is an expectation, then we have to design test cases (even if minimal) that verify the specified acceptance criteria. Customers, stakeholders, and product owners all have expectations and a return on investment that can only be met by a Scrum team driven by a goal of high quality, and that team must have a tester (or testers) as its navigator.