Sometimes, when testing user stories in Scrum, there's a final Waterfall interaction to deal with. The scenario I present here is a Scrum process followed by a set of sequential phases at the end of the process that (re)test the whole developed functionality. These sequential phases are mandatory in our organization, which follows a Waterfall process for product releases. So, for the moment at least, we have to deal with this, and my experience is that we aren't alone. We call this scenario of interaction with Scrum the "Waterfall-at-end" (see Michele Sliger, "Bridging the Gap: Agile Projects in the Waterfall Enterprise," Better Software, July/August 2006, 26-31). I think this scenario is common because the adoption of Scrum is often incremental inside an organization, and both approaches can coexist for a while.
Below is a graphic depiction of this scenario:
I'd like to discuss how testing has been integrated into our process, then analyze the objections that are sometimes raised against that testing choice. I hope this can help people in the same situation we're in, and that it can start a discussion about the best testing approaches for this situation.
The functionalities we develop are made up of many user stories. For each user story, at the beginning of the process, all team members together identify what we call "test ideas." Test ideas can be seen as acceptance criteria for the user story. Generally, every test idea is translated into a manual test, with a series of steps to follow to complete the test scenario. These manual tests are written and run for every user story.
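To make the idea concrete, here is a minimal sketch of how a test idea might become an executable check. The user story ("a user can request a password reset"), the `request_reset` function, and its behavior are all hypothetical, invented for illustration; they are not part of our actual product.

```python
# Hypothetical user story: "As a user, I can request a password reset by email."
# Each test below expresses one test idea (acceptance criterion) as a check.

def request_reset(email, known_users):
    """Toy stand-in for a real reset service (an assumption, not a real API)."""
    if "@" not in email:
        return {"ok": False, "reason": "invalid-email"}
    # Always report success so callers can't probe which emails are registered.
    return {"ok": True, "mail_sent": email in known_users}

def test_valid_user_receives_mail():
    # Test idea: a registered user actually gets the reset mail.
    result = request_reset("ann@example.com", known_users={"ann@example.com"})
    assert result["ok"] and result["mail_sent"]

def test_unknown_user_gets_no_hint():
    # Test idea: the response must not reveal whether an address exists.
    result = request_reset("bob@example.com", known_users={"ann@example.com"})
    assert result["ok"] and not result["mail_sent"]

def test_malformed_email_rejected():
    # Test idea: obviously invalid input is refused up front.
    result = request_reset("not-an-email", known_users=set())
    assert not result["ok"]
```

In our process these checks are mostly manual scripts rather than code, but the shape is the same: one user story, several small test ideas, each verifiable on its own.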
The problem compared to a full Scrum approach is that, given the Waterfall-at-end interaction, we are obliged to rerun the tests for the whole functionality to qualify the product before going to production.
There are several valid objections to the scenario, and they are interrelated:
- Creating and running tests for every user story is inefficient because we have to rerun them for the whole functionality. It's better to create tests on the whole functionality at the end of the Scrum process.
- We spend a great deal of time creating tests for every user story when, instead, we could create one single test grouping more user stories.
- We spend too much time running the same tests: We run tests for each user story (during Scrum), then for the whole functionality once delivered (during Waterfall-at-end), and then once in the QA phase (during Waterfall-at-end).
Testing each user story separately is, for me, the basis of the Agile process, even in a Waterfall-at-end scenario like the one described. Integrating testing into the process itself is something we should do for any software development process, not only in Agile or Scrum. This approach respects one of the fundamental principles of modern quality management: "Quality is built in, not inspected in."
The reason to respect this principle even when Scrum interacts with a Waterfall process is that testing late in the process brings well-known problems (see Mike Cohn, Succeeding with Agile: Software Development Using Scrum. Addison-Wesley, 2011, 308-309). It's also crucial that the team not come to see quality as something separate from development, something to assign to a single person on the team (the QA specialist). It's important to keep a whole-team approach (Lisa Crispin and Janet Gregory, Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley, 2009).
Let's analyze the objections more in detail:
1) Creating and running tests for every user story is inefficient because we have to rerun them for the whole functionality. It's better to create tests on the whole functionality at the end of the Scrum process.
It's true that it can take more time to create tests for every user story than to create tests for the whole functionality. But what I've noticed is that when tests are created per user story, we tend to test more. I think that's a logical consequence of splitting the functionality into more manageable parts (user stories): test ideas come more naturally, and in greater numbers, when we're dealing with a single user story than when considering the whole functionality at once.
So even if we may spend more time with this approach, we test more (and end up with more tests). To me this is not a waste of time but proof that integrating testing from the beginning of the process cultivates more thorough testing. Development is done with all these test ideas in mind, which helps produce better quality, because in order to finish a story, all its tests have to pass.
2) We spend a great deal of time creating tests for every user story when, instead, we could create one single test grouping more user stories.
It's true that sometimes, especially with small user stories, it seems like a waste of time to write a test when we know that two user stories later we'll have to test another part of the functionality. Why not just create a single test? After all, the Waterfall-at-end phase could use a single test (perhaps grouping two or three user stories).
I think this mind-set may sometimes reflect user stories that weren't well crafted, or tests that were poorly designed. Even if a test is written for a small user story and we have to modify it when another story arrives, the impact shouldn't be high if the story was created in the right way and the test was well designed.
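One way to keep that modification impact low is to put the shared setup in one place, so each story's test contains only its own assertions. The sketch below assumes a hypothetical shopping-cart feature; every name in it (`build_cart`, `apply_discount`, the story IDs) is invented for illustration.

```python
# Sketch of a test layout where a later story doesn't force rewrites
# of an earlier story's test: shared setup lives in one helper.

def build_cart(items):
    """Shared setup used by every story's tests; change it in one place."""
    return {"items": list(items), "total": sum(price for _, price in items)}

def apply_discount(cart, percent):
    """Hypothetical feature delivered by the second story."""
    cart = dict(cart)
    cart["total"] = round(cart["total"] * (1 - percent / 100), 2)
    return cart

# Story 1's test asserts only what story 1 delivered: the cart total.
def test_cart_totals_items():
    cart = build_cart([("book", 10.0), ("pen", 2.5)])
    assert cart["total"] == 12.5

# Story 2's test reuses the same setup and adds only its own assertion,
# leaving story 1's test untouched.
def test_discount_reduces_total():
    cart = apply_discount(build_cart([("book", 10.0)]), percent=20)
    assert cart["total"] == 8.0
```

With this shape, the arrival of the discount story costs one new test, not a rewrite of the existing one; the same principle applies to well-structured manual test scripts.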
And what if, instead, we create a single test (grouping several user stories) and then, for some reason, the functionality isn't fully developed? Isn't it always better to have a set of tests reflecting the functionality developed so far?
3) We spend too much time running the same tests: We run tests for each user story (during Scrum), then for the whole functionality once delivered (during Waterfall-at-end), and then once in the QA phase (during Waterfall-at-end).
This is the easiest objection to respond to because the answer is: Yes, it's true!
I simply agree, and I don't see a better solution than introducing test automation. But sometimes you have to deal with organizational culture and barriers to the successful adoption of automated tests. I see this as technical debt: our team has to deal with it, introducing the change and showing that Agile testing principles can work here. And I can say that it's not easy.
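A small sketch of why automation dissolves this particular objection: if each story's checks are registered once, the "whole functionality" pass at Waterfall-at-end is just rerunning the union of them, at no extra manual cost. The registry mechanism and the story IDs below are illustrative inventions, not a real framework; in practice a test runner's tagging features would play this role.

```python
# Each user story registers its checks once; the Waterfall-at-end
# regression pass simply reruns everything that was registered.

STORY_TESTS = {}

def story_test(story_id):
    """Decorator tagging a check with the user story it belongs to."""
    def register(fn):
        STORY_TESTS.setdefault(story_id, []).append(fn)
        return fn
    return register

@story_test("US-101")
def check_login_returns_token():
    response = {"token": "abc"}  # placeholder for a real call
    assert "token" in response

@story_test("US-102")
def check_logout_clears_session():
    session = {}  # placeholder for a real call
    assert session == {}

def run_regression():
    """Run every registered check: the 'whole functionality' pass."""
    failures = []
    for story, checks in STORY_TESTS.items():
        for check in checks:
            try:
                check()
            except AssertionError:
                failures.append((story, check.__name__))
    return failures
```

The same suite then serves all three runs in the objection (per story, whole functionality, QA phase); only the manual repetition disappears.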
I hope other readers have ideas about how to deal with this testing issue. I'd enjoy an online discussion.