It's the most ignored and misunderstood engineering practice within the most famous software management framework: unit testing in Scrum.
For developers, it's boring, tedious, and time-consuming. There's no time in sprints for this extra activity, and testing anything is not their job.

For the product owner, unit testing will delay release dates, may increase the number of sprints needed to complete all the story points in the product backlog, and thus adds cost to the project. Performing the task doesn't deliver anything extra to customers, so they won't pay for the effort or, with some unit testing techniques, for the extra code written for it.

Others argue that unit tests are an extra piece of code to maintain, so why duplicate the effort of testing before handing the code to the test team, when the test team needs to run its test cases anyway? They'll catch all the bugs. Code review can substitute for the task, and it will catch code bugs too. Some people also confuse unit testing with the functional testing a developer does in the development environment to check whether the planned functionality has been implemented and is working correctly.
In order to make our sprints successful, i.e., to deliver what's been promised on time, we take shortcuts with unit testing. The implications of sacrificing unit testing may not always be immediately visible, but by doing so we lose the chance to benefit from the value it can add to the project.
In Scrum, requirement, design, and major code changes are inevitable, and accommodating such changes is the crux of the methodology. We'll never have 100 percent complete, frozen requirements and design documents up front; they evolve as the sprints progress. So writing good, strong unit test cases that cover everything based on these documents is a major challenge, and as requirements and design change, we'll need to change all those nicely written unit tests. Because Scrum is built to absorb requirement changes from the customer during development, previously written code must change too, and refactoring comes into the picture as we develop the product in successive sprints. In maintenance projects, the code base keeps growing over time, and changes to it can break existing code. To confirm that we haven't broken anything, we need to do regression testing using unit tests.
So why do we engage in unit testing?
Unit tests are the best documentation for your code, especially when you're working on a code base written by someone else and don't know why a method or class was written or what it's supposed to do. If unit tests are in place, you can refer to them for all these details: they show how the code is designed to be used.
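As a sketch of what "tests as documentation" means in practice, here is a small hypothetical class and a test suite for it (the `ShoppingCart` name and its methods are illustrative, not from any real project). A newcomer who reads only the test names and bodies learns how the class is meant to be used, including which inputs it rejects.

```python
import unittest

# Hypothetical class under test, standing in for unfamiliar code
# whose intent we want the tests to document.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add_item(self, name, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

class ShoppingCartTest(unittest.TestCase):
    """Reading these tests tells a newcomer how the class is used."""

    def test_total_is_zero_for_empty_cart(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add_item("pen", 2)
        cart.add_item("pad", 3)
        self.assertEqual(cart.total(), 5)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            ShoppingCart().add_item("pen", -1)
```

Each test name states one fact about the class, so together they read like a usage manual that is guaranteed to stay accurate as long as the tests pass.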
Unit tests act as proof that the code that's been written is working correctly. They don't replace acceptance testing but supplement it.
Bugs caught earlier are easy to fix and economical for the project. Unit tests help keep trivial bugs from reaching final builds or even the field. Bugs caught in the field are, of course, always difficult and expensive to locate and fix. Unit testing saves the hours, effort, and money needed to find and fix such problems.
Code that is unit testable integrates easily with the rest of the system. Unit-testable code is cohesive and loosely coupled, so it's easy to maintain in the long run.
When should we perform unit testing?
With every code change released into the code repository, we should release the corresponding unit tests as well. This way we can keep a check on unit test coverage. In general, the number of unit tests should match the number of public methods exposed in our project; that is, at least one unit test per public method.
Write unit tests as you code: once you've added a public method to your class, add a unit test for it right after its implementation, rather than developing the whole module and only then probing all the public methods and writing tests for them.
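The habit can be sketched like this, assuming a hypothetical `slugify` helper (the name and behavior are illustrative): the public function and its test are written and committed together, not after the whole module is finished.

```python
import unittest

# Hypothetical public function, just implemented.
def slugify(title):
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Written immediately after slugify(), before moving on
# to the next method in the module.
class SlugifyTest(unittest.TestCase):
    def test_slugify_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Scrum Unit Testing"),
                         "scrum-unit-testing")
```

Committing the pair together also makes the "one test per public method" check above easy to enforce in review.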
Always keep unit tests up to date. When moving, modifying, or deleting any code in your project, make the required changes in the corresponding unit tests.
When fixing a bug, write a unit test for it so that you can rerun these tests and keep the bug from reappearing in the future.
How should we perform unit testing?
How? No, in Scrum we can't tell a team how to do things. The team needs to decide this based on its capacity, capabilities, and project needs. It can decide how to implement unit testing during planning meetings, so that team members can convince all the stakeholders and also plan the implementation in the most beneficial way.
Still, I must admit that the developer within me wants to underline the technicalities of unit testing, so here are some general guidelines to consider when implementing unit testing in your projects — though this is not a comprehensive list.
- We should keep our unit tests in separate class files, apart from the implementation files. This automatically keeps our test cases from touching the private methods and properties of the implemented class. By segregating classes and unit tests into different files, we retain the option of changing the internal implementation of our classes whenever the need arises.
- Every unit test should be independent of the other unit tests. They should not need to run in a particular order, nor should one unit test perform setup or initialization work for another.
- In order to make our unit tests an excellent source of documentation for our implemented methods, every unit test should cover only one aspect of our code at a time. So we may need to write multiple unit tests in order to fully cover all the features of one method.
- Unit tests should probe the core behavior of each component added to our project. They should cover normal conditions (testing the method with correct input values) as well as unanticipated conditions (for example, feeding in bad input values or values outside specified boundary conditions). Unit tests should verify that proper error handling is in place to tackle such unforeseen or incorrect conditions.
- Create simulations, mock objects, or fake data to feed our unit tests in order to test the implemented components. This helps when testing complex systems with many components, as well as components that interface with third-party code.
- Unit tests need to execute the code in order to test how it's working. That is, if we're testing a method, we need to call that method directly in our unit test.
- A unit test case should verify that the code is working as it's expected to work, so the test must check its results. We can use assertion methods in our tests to verify those results properly.
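Several of the guidelines above (independent tests, one aspect per test, mocking a third-party dependency, direct calls, and assertions) can be sketched together in one small suite. The `PriceConverter` class and its `get_rate` collaborator are hypothetical names invented for this example; the mock comes from Python's standard `unittest.mock` module.

```python
import unittest
from unittest.mock import Mock

# Hypothetical component that depends on an external rate service.
class PriceConverter:
    def __init__(self, rate_service):
        self._rates = rate_service

    def convert(self, amount, currency):
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return amount * self._rates.get_rate(currency)

class PriceConverterTest(unittest.TestCase):
    def setUp(self):
        # A fresh mock per test keeps the tests independent:
        # no test relies on state left behind by another.
        self.rates = Mock()
        self.converter = PriceConverter(self.rates)

    def test_converts_using_current_rate(self):
        # One aspect: the happy path with a correct input value.
        self.rates.get_rate.return_value = 0.5
        self.assertEqual(self.converter.convert(10, "EUR"), 5.0)

    def test_negative_amount_raises(self):
        # One aspect: error handling for bad input.
        with self.assertRaises(ValueError):
            self.converter.convert(-1, "EUR")

    def test_rate_service_is_queried_for_currency(self):
        # One aspect: the collaboration with the mocked service.
        self.rates.get_rate.return_value = 1.0
        self.converter.convert(1, "USD")
        self.rates.get_rate.assert_called_once_with("USD")
```

Each test calls the method under test directly, checks exactly one behavior, and ends in an assertion, so a failure points straight at the aspect that broke.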