When we're working on software projects and following an Agile method, e.g., Scrum, our usual focus is on how we communicate with customers about the actual requirements, and how we manage (or empower) the Agile team so that deliverable software can be put out at the end of each iteration.
However, what about the other important perspective of making good software, i.e., testing, or quality assurance? I know the industry already has theories and practices about Agile testing. Unit tests are a very good example. By spending some effort writing unit tests while implementing new features, we can identify more problems earlier and save much more time debugging strange defects at later phases, when different modules/features are integrated. Another fantastic attribute of unit testing is that once the test cases are implemented, they're easy to run over and over again, making regression testing a piece of cake. Combined with a build tool such as Maven, unit testing becomes an important cornerstone of continuous integration.
I believe the emergence of unit testing is one of the most revolutionary events in software engineering in the past 20 years. It made people change their minds about testing: whether it must be manual, and why it can't wait until all parts of the software are done. There are two key points about unit testing:
- Use automated, runnable test code.
- Test as early as possible.
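The two points above can be illustrated with a minimal sketch using Python's standard `unittest` module (the function under test, `total_cents`, is a hypothetical example, not from any real project): the tests are automated, runnable code, they can be written as soon as the feature exists, and rerunning them later is exactly the cheap regression testing described above.

```python
import unittest

# Hypothetical code under test: a tiny price calculator.
def total_cents(unit_cents: int, quantity: int) -> int:
    """Return the total price in cents for `quantity` items."""
    return unit_cents * quantity

class TotalCentsTest(unittest.TestCase):
    """Automated, repeatable checks; rerunning them is regression testing."""

    def test_multiplies_unit_price_by_quantity(self):
        self.assertEqual(total_cents(200, 3), 600)

    def test_zero_quantity_costs_nothing(self):
        self.assertEqual(total_cents(200, 0), 0)

if __name__ == "__main__":
    unittest.main()
```

In a Maven-style setup the same idea applies with JUnit: the build tool runs every test on every build, which is what makes the test suite a cornerstone of continuous integration.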
Then, is unit testing enough? (Note that when I say unit testing, I'd really like to expand that to code-level testing, by which I mean all test cases written in the same programming language as the software being tested. It doesn't have to be a unit test; it can also be, for example, an integration test that relies on some platform-dependent test framework.) Although I am a big fan of unit testing, I still don't think it's enough for creating good software, even in an Agile project. Unit testing has limitations. For example, writing test code to check a software UI takes a lot of time, and sometimes it's practically impossible: the UI can be so complicated that the test code would be even more complicated. In such cases, manual testing can be more reliable and easier.
So why should Agile testing be only automated? For those areas where manual testing fits better, why can't we adapt traditional manual testing to Agile projects? The key point is to do that testing as early as possible, alongside all the automated tests we already run.
In my practice, I call this "real-time free testing."
The background of the real-time free test is that my Scrum team already had several means of testing: code-level tests, continuous integration tests, and manual acceptance tests with predefined test cases. Even with all of those, I still found some things really hard to manage, especially the acceptance tests. For a specific user story, the acceptance test has to wait until the user story is finished, or even until other related user stories are finished. This and other constraints always delay the acceptance test a little; its execution usually has to happen in the next iteration. Defects found during the acceptance test then become "extra" work in that iteration and complicate its planning.
So how can we test (manually) as much as possible and leave fewer defects in the iteration deliverables? Here is how I apply the real-time free test:
Generally, test as early as possible.
- Any increment that can be tested manually should be tested as soon as it's done, whether or not there's an acceptance test case for it.
- This is usually planned at the Daily Scrum, when teammates communicate about what was done yesterday. The real-time free tester would say, "OK, I can test that ASAP."
- Communication is important between the increment maker (user story developer) and the real-time free tester about what can be tested and how to test it.
- All problems found during real-time free testing are reported to the increment maker directly, and the increment maker can decide how to fix them and communicate that in the next Daily Scrum, or even immediately.
- The real-time free tester does not have to be a professional verification engineer. He or she can be another developer who happens to have no user story to work on. (I usually assigned this job to an intern.)
- There is no rule about how to do a real-time free test. All that is needed is an understanding of the user stories. Actually, it's preferable to have someone test as a normal end user, since an end user doesn't have test cases and test steps in his or her mind.
The actual results of running real-time free tests were surprisingly good. In almost every retrospective, I got positive feedback about it, because it helped find minor issues quickly, some of which weren't even covered by the acceptance cases.
I have a metaphor about acceptance testing and real-time free testing: Software testing is like coloring or painting a piece of white paper. The aim is to color as much of the paper as possible, not leaving any empty spots (test coverage). Doing the normal acceptance test is similar to coloring the paper line by line. Doing real-time free testing is similar to painting without any constraints — that is, doodle as you wish, draw point by point randomly.
If the piece of paper you have to color is big, the difference between the two methods could be huge. However, if the paper is small enough, the difference can be small enough to be ignored.