As we know, "Agile testing" is not a completely different testing procedure but a software testing practice that follows the principles of the Agile life cycle. How? Its most salient aspect is that it emphasizes testing, and close work with the end users (or at least with the story owners), throughout the project.
Agile testing involves testing as early as possible. Testing early is one of the success factors for any Agile development, as long as the development setup supports it by reliably delivering builds to the testing team. As Agile testing matures, it is becoming more and more integrated throughout the project lifecycle, with each feature "fully tested" as it's developed, rather than most of the testing coming at the end of development.
To elaborate: I've worked in situations in which each development team includes quality-assurance (QA) members at a 4:1 developer-to-tester ratio, so a typical eight-member team has two testers. We used CruiseControl.Net to run continuous integration (CI) and ensure that the QA members on the team always had a buildable solution (in Agile terminology, a "potentially shippable product") to test—even if we were still in a development environment. So the development work flowed continuously: check in code, let the CI server build it, and hand the resulting build to QA within the same sprint.
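To make that concrete, here is a minimal sketch of what such a CruiseControl.Net setup might look like. The project name, solution file, and polling interval are illustrative assumptions, not taken from any real project:

```xml
<!-- Minimal ccnet.config sketch (names and paths are hypothetical) -->
<cruisecontrol>
  <project name="MyProduct">
    <!-- Poll source control every 60 seconds for new check-ins -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- Build the solution; a broken build never reaches the QA members -->
      <msbuild>
        <projectFile>MyProduct.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>
```

The point is simply that every check-in triggers a build, so QA is always testing against the latest buildable state of the product.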
Now, assume the above situation repeats itself every sprint (this could mean every two to four weeks). Agile testing will need to validate one or more of the "new software modules" from the customer's perspective during each of these individual cycles. It will also need to consider how and when to handle regression before the eventual release. Thus, testing is no longer a phase; rather, it blends with development, and "continuous testing" becomes the mantra, and the only way to ensure continuous progress and eventual success.
In such a challenging environment (imagine in particular a multiteam, cross-location situation), where new requirements are implemented and thousands of lines of code (K-LOCs) are checked in, demanding both ad-hoc and regression testing, a ten-to-twelve-hour day often seems insufficient. This can eventually lead to churn and attrition on the team. How do we handle this?
- One simple solution could be to increase the head count—add more resources to manage QA requirements. On time-and-materials (T&M) projects this might be an interesting proposition, but the client obviously won't be fond of such an option, and on a fixed-price project it's not a worthy solution for the vendor either.
- A better alternative is to streamline a few processes so that life becomes relatively simpler for testers. A few important considerations:
- Involve QA at the beginning of requirement finalization, so that QA members get the maximum possible visibility into the requirements.
- Introduce accountability from a quality perspective: designate an in-development test lead, a test-case-writing lead, a story owner, and a business analyst (who works in sync with the story owner to define acceptance criteria).
- As QA starts preparing test cases, involve the customer in reviewing them. This helps ensure the completeness of the test cases, and the additional review also weeds out redundant cases and steps.
- Enforce standard checklist-based acceptance criteria. This could be the starting point for the tester.
- Standardize all nonfunctional quality criteria (usability, performance, memory usage) across the application and document them for easy reference. Communicate them to story owners so those criteria are actually referenced in each applicable user story.
- Mark well-defined dependencies on each user story so that the corresponding test lead can take due measures during testing (or while defining the testing strategy).
- Use lightweight documentation styles/tools (for example, simply use the "Description" tab in TFS to define acceptance criteria rather than attaching multiple documents).
- Capture test scenarios as part of the requirement item for exploratory testing (one could create a product backlog item, link individual stories, then identify testing scenarios for each).
- Leverage these story items for multiple purposes (for example, in TFS you may simply link as child/parent).
- Consider having a QA Scrum once every one or two weeks, as well as a QA Retrospective in which testers across teams meet to align test activities. (One advantage of having a separate test team in one room is that communication between the testers is good.)
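The checklist-based acceptance criteria and standardized nonfunctional criteria above can even be expressed as data and checked mechanically. Here is a minimal sketch (all names and criteria are hypothetical) of a gate that flags stories missing any standard criterion before a tester picks them up:

```python
# Standard quality criteria every user story should address (illustrative set)
STANDARD_CRITERIA = {"functional", "usability", "performance", "memory"}

def missing_criteria(story_criteria):
    """Return the standard criteria a story has not yet documented."""
    return STANDARD_CRITERIA - set(story_criteria)

# A hypothetical story with only partially documented criteria
story = {
    "id": 101,
    "title": "Search by customer name",
    "criteria": ["functional", "usability"],
}

gaps = missing_criteria(story["criteria"])
print(sorted(gaps))  # ['memory', 'performance']
```

A check like this could run when a story is marked "ready for test," giving testers a consistent starting point instead of rediscovering the criteria per story.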
Now, what should the Agile work items look like? Let's check a typical flow:
Typically, as a tester logs a bug, he or she can simply link it to the corresponding story item, so bugs logged against a particular piece of functionality can easily be collated by pulling the data associated with a PBI or story.
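That collation step is essentially a group-by over the bug-to-story links. A small sketch (the data and field names are made up, standing in for whatever your tracker exposes) of how bugs roll up under their linked stories:

```python
from collections import defaultdict

# Hypothetical bugs, each linked to its parent story (PBI) by story_id
bugs = [
    {"id": 1, "title": "Crash on empty search", "story_id": 101},
    {"id": 2, "title": "Wrong sort order",      "story_id": 101},
    {"id": 3, "title": "Login timeout",         "story_id": 102},
]

def bugs_by_story(bug_list):
    """Group bug IDs by their linked story so per-feature quality is visible."""
    grouped = defaultdict(list)
    for bug in bug_list:
        grouped[bug["story_id"]].append(bug["id"])
    return dict(grouped)

print(bugs_by_story(bugs))  # {101: [1, 2], 102: [3]}
```

In TFS the same result comes from the child/parent links mentioned earlier; the sketch just shows what the query is doing conceptually.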
When to handle "Regression Testing"?
Generally, under Agile testing, each new piece of functionality is tested as the sprint progresses. Typically, toward the end of the sprint, a small window is kept for a short regression test before moving to the next sprint. Often Agile teams implement a BVT (build verification testing) routine in which a standard set of verification steps, cutting across the application, is performed to ensure application stability and functioning. If possible, automate this routine and integrate it into the CI server to make the release criteria even more stringent.
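An automated BVT routine can be as simple as a fixed list of cross-cutting smoke checks that the CI server runs after every build. A minimal sketch, with the individual checks stubbed out (real ones would hit the application):

```python
# Each check is a placeholder for a real cross-cutting smoke test
def check_homepage_loads():
    return True  # e.g., an HTTP request against the deployed build

def check_login_works():
    return True  # e.g., log in with a known test account

def check_search_returns_results():
    return True  # e.g., run a canned search and verify non-empty results

BVT_SUITE = [check_homepage_loads, check_login_works, check_search_returns_results]

def run_bvt(checks):
    """Run every verification step; return the names of any that failed."""
    return [check.__name__ for check in checks if not check()]

failed = run_bvt(BVT_SUITE)
print("build OK" if not failed else f"build unstable: {failed}")
```

The CI server would mark the build unstable (and keep it away from QA) whenever `run_bvt` reports any failures.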
Also, on projects running over several sprints, it's standard practice to have a "code hardening" or "release" sprint to verify the overall functionality of the application, mostly from an integration point of view. Ideally, this should not stretch past 30 to 45 days, assuming enough care was taken during the individual sprints to catch defects.