

Modifying Behavior with Sprint Data

12 April 2016

Scott Lively

Can I tell you a secret? One of my biggest pet peeves is not completing stories in the sprint during which we've agreed to do them. Carrying them over is like having that itch you cannot scratch. The closure offered by delivering them at the end in a potentially shippable product increment is satisfying. Having to carry them over has consequences: We have to continue working on them. We can’t pick something else up. We are potentially blocked from shipping an increment. That last is the worst. Our stakeholders love when we deliver new increments. If we have planned correctly, it solves their most current problem in a very short amount of time. If we fail, stakeholders don’t get what they need, when they need it.

End-of-sprint data plays an important role in helping us understand where we have failed and what we need to do to improve. For this article, we are interested in the percentage of story points coming from new content vs. points from carryover content, and the percentage of points added after sprint planning. I want to focus on the first part (new content vs. carryover), its root causes, and its relationship to the second part (stories added after sprint planning).
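The two metrics above can be computed from a simple log of sprint items. Here is a minimal sketch; the field names (`points`, `carryover`, `late_adder`) and the sample data are illustrative assumptions, not the author's actual tooling.

```python
# Sketch of the two end-of-sprint metrics described above.
# Field names and data are illustrative assumptions.

def sprint_metrics(items):
    """Return (% of points from carryover, % of points added after planning)."""
    total = sum(i["points"] for i in items)
    carryover = sum(i["points"] for i in items if i["carryover"])
    late = sum(i["points"] for i in items if i["late_adder"])
    return 100 * carryover / total, 100 * late / total

items = [
    {"points": 5, "carryover": True,  "late_adder": False},
    {"points": 8, "carryover": False, "late_adder": False},
    {"points": 3, "carryover": False, "late_adder": True},
    {"points": 4, "carryover": False, "late_adder": False},
]
pct_carry, pct_late = sprint_metrics(items)
print(f"carryover: {pct_carry:.0f}%  late-added: {pct_late:.0f}%")
```

Tracking these two numbers sprint over sprint is what makes the patterns discussed next visible.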

Two general patterns emerge from looking at our end-of-sprint data over the last nine months. First, since we committed ourselves to getting work done within a sprint, our percentage of carryover points has generally decreased. To reach this point, we made a few simple (although not necessarily easy) changes: Most simply, we decided that lots of carryover was not acceptable and was holding us back. It kept us from deploying incrementally, and it made us unpredictable. We also started writing smaller, better stories; by limiting the scope of stories, we can better estimate what we can complete within a sprint. We learned, changed our behavior, and adapted.

Second, we see fluctuation from sprint to sprint in the amount of carryover we experience. I think there are several root causes for this. These provide areas for our continued improvement.
  1. Story scope: We have done a better job of reining in the scope of individual user stories. Playing regular sprint poker has helped us with this. However, sometimes we still miss the mark. Our most recent instance of this was a missed requirement that we needed to implement quickly. Its scope turned out to be larger than anticipated, and our definition of what needed to be done was inadequate. This led to misunderstanding in implementation and failed tests. We would have been better served to have taken more time understanding what needed to be done and breaking a big story down into smaller ones that could be taken up by more team members. Lesson learned (again).
  2. Cross-training: Great Scrum teams have all of the skills they need to deliver a potentially shippable product sprint after sprint. In general, we have this. However, we are fragile in this area. We could be stronger with respect to testing skills. If our primary testers are pulled into field support, we need to support each other by stepping in to fill the testing gap during that sprint. We also need to continue to become fluent in all of the technologies our product uses. Often, we see server-side experts struggle with client-side code. Similarly, client experts may not be as fluent with server code. Pairing helps bridge this gap. Communication helps. Also, see #3 below.
  3. Asking for help: We operate in a “Scrumban” environment. We plan our sprints carefully, but we also find that team members are able to pull in small items along the way. This is motivating because, ideally, the team completes what it signed up for plus some additional content that provides extra value or reduces pain for a stakeholder.
However, there is a balancing dynamic at work here. Look at the figures below. The top panel shows the measured data, aggregated across the first three sprints in our current release; the bottom panel represents a more “ideal” state. The x-axis shows whether content was added after sprint planning or not. The blue bars represent items that were not carried over from a previous sprint; the red bar represents carryover. The issue occurs when comparing the red bar to the blue bar to its right (“late adder – no/carryover – yes” vs. “late adder – yes/carryover – no”). We should be decreasing the percentage of points coming from these two in favor of increasing the height of the “late adder – no/carryover – no” bar, as the bottom panel shows. Note that the absolute percentages shown in the bottom panel are for illustration only; the point is the relative contribution of each bar.
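The four-bar breakdown behind those panels can be sketched as a grouping of points by the (late adder, carryover) pair. The numbers below are made up for illustration, in the spirit of the "ideal" panel; they are not the team's measured data.

```python
from collections import defaultdict

# Sketch of the four-bar breakdown: percentage of points grouped by
# (added after planning?, carried over?). Sample data is illustrative.

def bar_breakdown(items):
    buckets = defaultdict(int)
    for i in items:
        buckets[(i["late_adder"], i["carryover"])] += i["points"]
    total = sum(buckets.values())
    return {k: 100 * v / total for k, v in buckets.items()}

items = [
    {"points": 10, "late_adder": False, "carryover": False},  # planned, new
    {"points": 4,  "late_adder": False, "carryover": True},   # planned carryover
    {"points": 5,  "late_adder": True,  "carryover": False},  # pulled in mid-sprint
    {"points": 1,  "late_adder": True,  "carryover": True},
]
for (late, carry), pct in sorted(bar_breakdown(items).items()):
    print(f"late adder: {late!s:5}  carryover: {carry!s:5}  {pct:.0f}%")
```

The improvement goal described above is to shift weight out of the `(False, True)` and `(True, False)` buckets and into `(False, False)`: planned content, finished in the sprint it was planned for.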



Operationally, we need to make sure that our priority is on finishing what we commit to first. The hurdle is in reaching out for help when we are stuck. We are a team of extremely smart people. Development problems are personal challenges. However, we need to continue to recognize that we are here to support each other. Asking for help is a good thing. It’s the behavior we want to encourage. It is not symptomatic of being unable to meet a personal challenge. It’s not an intellectual insult. We need to start asking the question, “Before I pull something else in off of the backlog, can I help someone else complete what we committed to?”
  4. Come to the “alter”: We are experimenting with a mid-sprint ceremony we call “come to the alter.” (Yes, “alter” is spelled correctly.) This is a mid-sprint assessment of how likely it is that we will complete everything we signed up for in sprint planning. Based on the assessment, we might alter the sprint plan. For example, if we are pulled into more technical support than we anticipated, we may be short on resources for testing or for specialized protocol work that not everyone is capable of doing. In that case, we might pull stories out of the sprint in order to not leave them half-implemented in the code. Alternatively, we might split a large story and do a piece of it so that we can deliver some value to stakeholders. Ideally, there are no alterations. However, the “real world” does not always make that possible. Most importantly, it gives the team another explicit opportunity to step in and help each other. If we internalize that stories are in jeopardy, someone may step up with a clever way of reducing the risk. Enabling flexibility allows us to focus and be sustainable.
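One way to seed that mid-sprint conversation is a rough feasibility check: compare remaining committed points against projected capacity for the rest of the sprint. This is a minimal sketch under assumed inputs (remaining points, days left, recent throughput, a safety buffer), not the team's actual ceremony.

```python
# Rough mid-sprint feasibility check, as one might run before a
# "come to the alter" discussion. Inputs and buffer are assumptions.

def sprint_at_risk(points_remaining, days_remaining, points_per_day, buffer=1.1):
    """True if remaining work exceeds projected capacity (with a safety buffer)."""
    projected = days_remaining * points_per_day
    return points_remaining > projected / buffer

# Example: 12 points left, 5 days remaining, team averaging 2 points/day.
# Projected capacity 10 / 1.1 ~ 9.1 points, so the sprint is at risk.
print(sprint_at_risk(12, 5, 2.0))
```

A `True` result is not a verdict; it is a prompt to alter the plan deliberately, whether by asking for help, splitting a story, or pulling one out of the sprint.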
The bottom line is that we need to continue measuring how we do sprint over sprint. Further, we need to be careful about the behaviors we reinforce. We need to balance adding value for one stakeholder by pulling an issue into a sprint vs. harming another stakeholder by not finishing a story we committed to. In general, we should bias our behavior toward the stakeholder we made the commitment to during sprint planning.

Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.
