Continuous improvement is a key principle of Scrum. Yet, though most Scrum teams conduct a sprint review and a retrospective at the end of each sprint, too many teams fail to implement the improvements they identify. In my experience, there are two common causes for this:
- Teams don't have or maintain explicit standards for their product and process; and
- Teams don't use the product backlog to schedule improvements.
Set a Dynamic Standard
A definition of done in Scrum specifies general acceptance criteria to which all implemented features must adhere. This ensures that everyone shares the same understanding of what it means for a feature to be "done." Team members need these criteria to know when their work is finished, and the product owner and stakeholders need them to know what they can expect from the features delivered by the team. That does not mean, however, that the initial definition of done is the one you will want to use throughout the project's lifecycle. In fact, one reason an explicit definition of done is so important is that it makes clear not only what is included in each feature, but also what is not. Some of the things not included may be planned as additions later in the project.
Take, for example, a project we'll call BookCellar. The BookCellar team was starting to work on the implementation of a new system to sell books online. In a session with the team to establish the definition of done, Michele, the product owner, indicated that she cared most about two things:
- Getting a few essential user stories done during the first sprint so that some of the more skeptical stakeholders would feel more at ease; and
- Ensuring that, under the expected average production load, at least 95% of the transactions would complete within a second.
Sam the ScrumMaster spoke up. "Michele, we think we can get all the essential user stories you mentioned done this sprint. But we cannot get them all done and also create the performance tests necessary to ensure that they are hitting the 95%-in-under-a-second target. I'd like to propose that we focus first on getting the features into the hands of the stakeholders and save the performance testing for a future sprint."
"Sam," Michele replied, "That makes me really nervous. On my previous project, we missed our deadline because we waited until the end to run performance tests. We ran out of time to implement the necessary performance optimizations."
Sam nodded. "You're right. We don't want to wait that long. What if we add an item to the product backlog to implement performance testing as part of continuous integration? This backlog item could be picked up right after the first user stories are demonstrated. After that, we'll extend the definition of done to include the performance requirement."
Michele agreed. She liked that, as a product owner, she would be in control of when to schedule the introduction of continuous performance tests. To be able to prioritize the backlog item properly, she wanted to know how much work it would be compared to the user stories. The team subsequently estimated the new backlog item in a planning poker session.
In this example, both Sam and Michele acknowledged at the start of the project that the definition of done would not be static. They both, in fact, agreed to explicitly postpone an improvement to the standard. They added the planned improvement to the backlog and estimated and prioritized it just like any other backlog item.
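The kind of continuous performance check Sam proposes could be sketched roughly as follows. The request function, sample size, and checkout scenario here are hypothetical stand-ins; a real BookCellar build would drive the actual system under its expected production load:

```python
import time

def measure_latencies(request_fn, n=200):
    """Call request_fn n times and record each call's duration in seconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)
    return latencies

def p95(latencies):
    """Return the 95th-percentile latency from a list of samples."""
    return sorted(latencies)[int(len(latencies) * 0.95) - 1]

def fake_checkout_request():
    """Stand-in for a real HTTP call to a BookCellar transaction endpoint."""
    time.sleep(0.001)

# Michele's requirement: under the expected load, at least 95% of
# transactions complete within a second.
latencies = measure_latencies(fake_checkout_request)
assert p95(latencies) < 1.0, "95% of transactions must complete within a second"
```

Run as part of every continuous integration build, a check like this fails the build as soon as a change pushes the 95th-percentile latency over the agreed limit, instead of leaving the discovery to the end of the project.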
Plan for Improvements
The product owner is expected to balance the requirements of all the stakeholders. These stakeholders can include end users, an operations department, a marketing department and so on, but remember that the team itself is also a stakeholder. Therefore, when the team identifies an improvement that will require the team's time, that improvement should be added to the product backlog, estimated, and prioritized, just like any other planned work for the team.
When proposing to add improvement work to the backlog, a team has to discuss the costs and the benefits with the product owner. An example of an improvement that usually originates from the team is the implementation of automated functional tests. Such improvements can lead to higher quality delivered features and increased velocity. Another example might be a big refactoring to improve the architecture of a system, allowing for faster implementation of additional features.
Many teams identify such improvements and then plan to do them alongside "normal" work from the product backlog. This usually fails. Teams seldom find time for this "extra" work: the focus is on items from the sprint backlog, which consume all the time a team has. If, on the other hand, the proposed improvement is added to the product backlog, it will be estimated and prioritized with all other work. Once it is picked up in a sprint, the team will focus on it and get it done.
Inspect and Adapt - Again and Again
Once a definition of done has been established, it should be frequently inspected and adapted to reflect what is happening on the project. Let's return to our fictional BookCellar Project for another example.
The BookCellar team and its stakeholders were fairly happy. After a number of sprints, the team had implemented and demonstrated the most essential features. The stakeholders were thrilled to see so much functionality so early in the project and were delighted that they were able to provide feedback that could actually move the product closer to their true wants and needs.
During the last sprint review, however, a nasty issue had popped up. The team had been demonstrating the system using Mozilla Firefox, because this was their favorite web browser for development of the system. One of the stakeholders invited to this particular review, realizing that the majority of customers would not be using Firefox, asked the team to show some screens in Internet Explorer. This turned out to be pretty disappointing. In Internet Explorer, the layout of the screen was a mess.
After the sprint review, the product owner realized that she knew how to get this right. She remembered how the performance test requirement had been introduced into the definition of done. She defined a backlog item to improve the application so that it would comply with the style guide in the following browsers: Firefox 3.0 & 3.5, Internet Explorer 6, 7 & 8, Safari 3 & 4, and Google Chrome 3 & 4.
An ambitious definition of done can come at a cost, not only in reaching but also in maintaining it. This is especially true if manual tests are required to check if a feature is "done." Test automation, therefore, often becomes a necessity in Scrum projects, as teams must prove that the latest feature is done and that the previously completed features aren't broken. Because these tests happen at the end of every sprint, they are more work-intensive than in a non-iterative project, where regression tests are only done after longer periods of time. Without test automation, testing all features every sprint often becomes too great a burden.
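As a sketch of what such test automation might look like, here is a minimal automated regression check in Python. The `search_books` function and the catalog data are hypothetical stand-ins for a feature completed in an earlier sprint; the point is that checks like these can be re-run every sprint at almost no cost, where repeating them manually could not:

```python
# Hypothetical feature from an earlier sprint: keyword search over the
# book catalog.
def search_books(catalog, keyword):
    """Return catalog titles containing the keyword, case-insensitively."""
    return [title for title in catalog if keyword.lower() in title.lower()]

# Regression checks re-run every sprint to prove the old feature still
# works alongside the new ones.
def test_search_matches_case_insensitively():
    catalog = ["Clean Code", "Code Complete", "The Pragmatic Programmer"]
    assert search_books(catalog, "code") == ["Clean Code", "Code Complete"]

def test_search_of_empty_catalog_returns_nothing():
    assert search_books([], "anything") == []

# In a real project a test runner such as pytest would collect and run
# these; here we simply call them.
test_search_matches_case_insensitively()
test_search_of_empty_catalog_returns_nothing()
```

Once a suite like this exists, proving that previously completed features still work is a matter of running the build, not of repeating every manual check at the end of each sprint.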
Even with the best test automation, as the project goes on it may become too costly to automate all tests for "done." Sometimes, as the team inspects its velocity and retrospective outcomes, the team may learn that the definition of done needs to be relaxed. Let's look back at our BookCellar team for an example of how this might occur.
The BookCellar team picked up the new backlog item to support this large set of browsers in the next sprint. After that sprint, the team proudly updated its definition of done to state that the system should display correctly in all of these browsers.
A few sprints later, however, the team had failed for two sprints in a row to finish the stories to which it had committed. In a retrospective, the team found that manually testing the growing system for so many browsers had become a bottleneck that was slowing them down. The team discussed whether these tests could be automated but found no solutions.
The ScrumMaster asked the Product Owner whether she had statistics about which browsers were most often used by the users who visited the current website. Based on these statistics, the team and product owner changed the definition of done, limiting the browsers to only Firefox 3.5 and Internet Explorer 7 & 8.
This change proved to be worthwhile: the velocity of the team increased considerably. Around the time of the scheduled date for the first release, the product owner decided that enough features had been implemented to constitute a good release. She decided that her highest priority was to have the team look at those features in the browsers that had been excluded from the definition of done. She added an item to the product backlog to test for the excluded browsers and fix the most critical issues that the team found. The product then went live successfully, displaying perfectly in the most common browsers and acceptably in the rest.
A definition of done is an agreement between the product owner and the team. As these examples illustrate, there is a trade-off between the cost of an ambitious definition of done and its benefit. The right balance can be found by inspecting the results of each change to the definition of done and then adapting it again based on what is learned.
Create Working Agreements
In addition to establishing a dynamic definition of done, I strongly recommend creating a set of working agreements. Working agreements are rules that team members agree to follow to help them to work together effectively. These rules are established by the team members themselves. Within the boundaries prescribed by Scrum, a team is tasked with discovering the most effective way to work in its specific context. Therefore, while developing a product, a Scrum team also develops the process to build it. A team improves its process by continuously inspecting and adapting. Just as a definition of done contains standards for the product, working agreements help to standardize the team's process.
Besides being a good way to document a process, establishing working agreements is a positive way to help to form a team out of a group of individuals. Each member of the newly formed team exposes to the others what he thinks are important aspects of working together on the project. This allows team members to learn more about each other’s attitudes towards issues crucial for successful teamwork.
In our BookCellar example, Sam, the ScrumMaster, worked on building his team before it had even looked at the first item on the product backlog. His first order of business was to organize a team session to create an initial set of working agreements. Sam asked everyone to propose topics to discuss. Once the topics were listed, the team voted on its top five:
- Time of the daily scrum
- Testing
- Code reviews
- Version control
- Continuous integration
The first topic was the time of the daily Scrum. After some discussion, the team members agreed to do the daily Scrum at 9 AM. They even decided that there would be a punishment for being late: late arrivals had to buy the team cake or ice cream.
Testing was the second topic. The tester on the team had previously taken part in several agile projects. He proposed the following working agreement: For each user story, the tester would work together with a programmer to automate a basic test scenario. After that, the tester would create more extensive tests while the user story was being implemented. The other team members agreed, so Sam added this working agreement to the list.
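The tester's proposal amounts to writing a basic automated scenario before the story is implemented. A minimal sketch, using a hypothetical "add a book to the shopping cart" user story and a hypothetical `Cart` class, might look like this:

```python
# Hypothetical basic test scenario, written by the tester and a
# programmer before the user story "As a customer, I can add a book
# to my shopping cart" is implemented.
def test_add_book_to_cart():
    cart = Cart()
    cart.add("978-0132350884", quantity=2)
    assert cart.total_items() == 2

# Minimal implementation written afterwards to make the scenario pass.
class Cart:
    def __init__(self):
        self.items = []  # list of (isbn, quantity) pairs

    def add(self, isbn, quantity=1):
        self.items.append((isbn, quantity))

    def total_items(self):
        return sum(quantity for _, quantity in self.items)

test_add_book_to_cart()
```

The basic scenario pins down what "done" means for the story up front; the tester's more extensive tests can then grow alongside the implementation.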
The team continued through the other topics, producing working agreements for code reviews, version control, and continuous integration builds. All code should be reviewed by another team member, unless it was written while pair programming. Code should be checked in regularly, at least once a day, with sensible check-in comments. When the CI build breaks, the person who broke it should immediately drop everything else and focus on repairing the build. Sam wrote all the working agreements on a big sheet of paper and hung it in a prominent position in the team room.
The meeting turned out to be a success. As it closed, team members said they expected the agreements to really help them work together effectively. Something else positive had happened as well: the team members had learned to appreciate each other's viewpoints on subjects that mattered to them, and realized that they had gotten to know one another much better.
Working Agreements Are Dynamic, Too
Scrum teams should evaluate the way they work in every retrospective. Changing working agreements can be a good way of implementing process improvements that are identified in retrospective meetings. If a team is not explicit about its agreements on how to work together, it is harder to change them.
A change to the working agreements is only a first step. Inventing more and more rules is simple; that does not mean the rules will be followed. If a rule the team still regards as useful is not being followed, the team must look for ways to encourage adherence to it. Let's see how our BookCellar team dealt with a failure to adhere to its working agreements.
Early in the project, the team found itself failing to meet sprint goals. In the retrospective, the team members discussed why two of the user stories they had committed to during the sprint had not been completed. It turned out that these user stories, unlike all the others, had not started with an automated test scenario. As a countermeasure, the team created a new rule: in the sprint backlog, each item would include a task to create a basic test scenario. This would remind those unfamiliar with test-first development to always create a test before beginning work.
Following rules is not a goal in and of itself. Instead, teams establish working agreements to achieve goals such as maintaining quality or improving velocity. After establishing or changing working agreements, the team needs to inspect whether the anticipated improvements have actually taken place. If not, the reasons need to be analyzed. Is the rule being followed? Can we think of something that makes it easier to follow the rule? Does following the rule actually lead to the expected improvement? Based on this analysis, the set of working agreements should be adapted.
Our BookCellar team discovered another problem during its retrospectives: late arrivals at the daily scrum. One team member showed up late especially often. He explained that he tried hard to be on time, but always had to take his children to school before going to work. He was usually able to make it on time, but sometimes bad traffic delayed him. He was becoming irritated at having to buy treats for being late, especially since he was making the effort to be there. The team came up with a solution: they changed the time of the daily scrum to 10 AM and dropped the ice cream rule. They had no more trouble with latecomers.
In this article, I have demonstrated how to use the definition of done and working agreements as a basis for continuous improvement in Scrum projects, as illustrated in the figure below.
In successful Scrum projects, both the product under development and the process used to build it should be continuously inspected and adapted. Inspection is facilitated in Scrum by sprint reviews and retrospective meetings, and by the general transparency Scrum provides, such as the measurement of velocity, which makes a team's productivity visible. Adaptations can be implemented through changes to the definition of done and the working agreements; if they require significant team effort, they should be estimated and prioritized on the product backlog just like a product feature.