Backlog Grooming: Part 1

3 July 2014


The practice of writing user stories is immensely popular among Agile teams and is often the default technique for product backlog creation. Although it is simple in concept, many Agile teams struggle to put the technique into practice effectively and will too often miss out on the benefits that make user stories an effective alternative or complement to other requirements-analysis tools.

The root cause of much strife for Agile teams can be traced to inadequate user stories. This article attempts to highlight the overlooked aspects of user stories that, if they were considered more carefully, would yield better results. This article also borrows from Alistair Cockburn's work on use cases.

The struggle teams have with user stories often comes from two failure patterns: muscle memory and prescriptive Agile.

Muscle memory is the phenomenon that occurs when overdeveloped skills are applied to situations where undeveloped or undiscovered skills may be more effective. In those contexts the person or team may think they are applying the new technique, but in reality it is applied in name only: The dominant, previously well-developed techniques or skills are being practiced under a new name. The analogy is derived from physical training or rehabilitation, where a dominant set of muscles continues to be used across different exercises regardless of the muscle group being targeted. The only way to eliminate the overcompensation of the dominant muscles is to isolate the target muscle group so that the dominant group has no way to contribute to the activity.

The other failure pattern is prescriptive Agile. Prescriptive Agile occurs when well-meaning practitioners follow popular Agile methods dogmatically. Prescriptive Agile is a contradiction in terms, and practitioners of prescriptive Agile need to revisit the opening statement of the Agile Manifesto. Manifestos are short and powerful. The words in the Agile Manifesto have been chosen very carefully, and alteration of the text would yield an entirely different meaning. The opening statement of the Agile Manifesto states that, "We are uncovering better ways of developing software by doing it and helping others do it."

Had the words "are uncovering" been replaced with "have uncovered," the Manifesto would have had an entirely different meaning. Agile methods would have been filed away in a cabinet labeled "Best Practices." What happens to a best practice when you've uncovered a better one? Craig Larman, in his book Scaling Lean and Agile, points out that there are no such things as best practices; there are only appropriate practices at the appropriate time. Experienced Agile practitioners will know that doing Agile "properly" by following a prescription of best practices is itself an oxymoron.

User stories are one such "best practice" that has suffered from both muscle memory and prescriptive Agile. Muscle memory has led to the technique being used only partially, and prescriptive Agile has led teams into thinking that it is the only technique they should use to capture requirements.

Muscle memory is the main contributor to the common malpractice of a business analyst writing user stories. You may be thinking, "What's wrong with that?" Well, pretty much everything is wrong with business analysts writing user stories. This is part of a larger anti-pattern called "Water-Scrum-Fall," which Forrester Research has said has become the de facto approach for most Agile teams.

Water-Scrum-Fall begins when business analysts meet with stakeholders to gather and analyze requirements. Their job is to understand the requirements better than anyone else, even though they may not be the ones actually implementing or testing them. They produce mountains of requirements to eventually feed to the development teams. They usually work on these requirements well ahead of any implementation. In fact, since often the purpose of the exercise is to find out exactly how much effort is required to do the work, they will do this before any team has been formed to implement it. This is the beginning of the "Water" part of Water-Scrum-Fall.

Once someone in the organization has estimated the effort to implement the analyzed backlog and a budget has been secured to deliver it, we begin the development iterations. This often starts with Sprint Zero. Sprint Zero is a way for teams to concede to the myth that they cannot possibly produce anything of direct value to the customer in the iteration. Instead, design and architecture work is done during the sprint, and no user stories are attempted or even looked at by the development team. In some cases, especially for larger projects, there are several Sprint Zeroes -- or sprints that produce nothing that is directly valuable to the end user.

This is the first fallacy of Water-Scrum-Fall. A sprint, by definition, is a timeboxed period in which activities from across the software development life cycle occur and the team aims to produce working software by the end. Muscle memory makes people invent terms like "analysis sprint," "design sprint" (Sprint Zero), or "regression sprint," when really what they mean is analysis phase, design phase, or regression phase. They are applying new terms to old practices.

What do the business analysts do during the design phase, or Sprint Zero? They develop more undeveloped user stories. Where are the testers at this time? Conveniently, they haven't got anything to test, which allows them to do regression testing on another project or triage the defect list of the previous disaster.

Eventually the implementation team begins to look at the stories to code up. They will be seeing many of these requirements for the first time and, to no surprise, many of the stories have insufficient detail to implement or are too difficult to be worth the effort, given the chosen architecture. Some of those stories go back to the business analyst for rework. In the meantime, the developers crack on with the stories that are good enough to develop, while the unfinished requirements are thrown back and forth over the fence.

What is meant by "fence" here is not the walls or physical rooms between departments -- Agile teams often embrace colocation -- but the time zones between people working in the same space. The time zone is created when business analysts are working on user stories that will be developed in the future, developers are implementing stories that will be tested in the future, and testers are testing work that was developed in the past. The time zones create a virtual fence because the focus of each team member is different. The fence becomes visible during the Daily Scrum, when people zone out while others talk about a piece of work that is most likely not relevant to everyone. In fact, most of the work mentioned is not relevant to anyone beyond the person mentioning it, because the lack of collaboration on the highest-priority items means that there are as many different work items in progress as there are individuals in the stand-up.

The final stage in Water-Scrum-Fall is of course "Fall." In this phase you recognize that the release day is imminent and you need to de-scope a load of user stories that the business analysts put together beautifully months ago, and focus on defects. With all the user stories that are still in progress and that are not shippable, for every fix a new defect is found. The regression phase begins with a massive triaging effort. Depending on people's will to live, defects start sliding up and down the priority scale from P1 to P5. Sometimes P1 is just not urgent enough, so teams will introduce a "P Zero." Of course nothing falls off the other end of the list, because you have a spreadsheet that can hold 65,000 rows, or bug-tracking software that can store low-priority defects indefinitely. Also, teams will often have a dedicated test manager whose full-time job is to sift through this.

This sequential-delivery approach is little more than Waterfall, cleverly disguised with new Agile and Scrum terms such as sprint, user story, and stand-up meeting.

The time zones seriously limit the degree of collaboration on user stories. James Coplien, in his book Lean Architecture (coauthored with Gertrud Bjørnvig), states that the mantra for Lean, Scrum, and other Agile methods inspired by Lean is "everybody, all together, from early on." The fundamental malpractice that results from a sequential approach, rather than a simultaneous one, is that the business analysts end up writing user stories.

What makes user stories different from other requirements-gathering techniques is that creating them is a collaborative exercise. You don't just "write" user stories -- it is not a writing exercise -- it is a collaboration exercise involving the stakeholder and everyone who will be involved in bringing that story to deployment on a live system. In fact, user stories go well beyond being just another format for documenting requirements: They are a full technique that involves a written format, a conversation, and a set of tests.

The 3 C's of a user story

The written format, conversation, and tests are collectively known as the 3 C's of a user story: Card, Conversation, and Confirmation.

Card

The Card aspect of a user story is the written statement about what a user wants and why. This follows the all-too-familiar format: "As a <role>, I would like to <action> so that I can <goal>." One of the biggest mistakes people make is that they consider this the user story. It isn't; it only represents one-third of a user story. This statement, written on a card, is a promissory note for a future conversation. User stories have often been called "a promise for a future conversation."

A second problem with the application of this format comes from prescriptive Agile. Inexperienced practitioners eager to embrace Agile "best" practices become slaves to the format. This is apparent when every single user story begins with "As a user. . . ." If there is only one type of user of the system, then surely it is not important to mention the user in every story. One of the advantages of the format is brevity. Failure to recognize this results in extra words that obscure the most important part of the story. And what is the most important part of the statement? If mentioning the user is inconsequential, then what remains is the action and the goal.

Let us examine which of the two is more important. The user has a job to do -- a goal. There are many ways to ensure that the user can achieve that goal; some require more effort, some less. This is a fact we are too often unwilling to admit until dire necessity forces us to think of innovative ways to reduce the effort. The reduction of the effort is proof that it was a form of waste. Skeptics will still point out that the "hack," as it were, has created additional long-term risks. Experience tells me, however, that working software is full of hacks. I've introduced hacks that have been in the software for five or ten years and are still generating the revenue that pays everyone's salaries. How much more time needs to elapse before we admit that the better architecture wasn't absolutely vital in the first place? These systems with suboptimal architecture are far more valuable than many systems I have worked on that never saw the light of day because architects, analysts, and developers were adamant about building something that was scalable, extensible, maintainable, and gold-platable -- but not yet usable. After several Sprint Zeroes, these projects were pulled.

The means to achieve the goal is secondary to the goal itself. In fact, users may sometimes not know the best way to achieve their goal because they are not aware of the technology's limitations or possibilities. Without a valuable goal, the story may as well be ripped up or replaced by a goal of higher priority.

To demonstrate, victims of prescriptive Agile practices will see stories such as:
  • As a user, I would like to log in so that I can use the system.
  • As a user, I would like to log out so that no one else can steal my session.
  • As a user, I would like to reuse my profile from one session to the next.
Even in these short statements, there is so much redundancy and obvious information that two words could suffice:
  • Log in
  • Log out
  • Reuse profile
The action in these statements is obvious, and one and the same as the goal, so there is no need to mention both. Mentioning the user also has no value.

These statements say everything that needs to be said and can be written on a 3x5 index card that can be seen by everyone from across a room.

The spreadsheet effect
Part of an Agile business analyst's job is to mine value. Many business analysts who work in functional departments, even small 1- or 2-person departments, instead focus on mining information. To get a ton of steel, mining companies need to extract 4 tons of ore. To get 1 ounce of gold, you need 40 tons of ore. Information mining is much the same. To get to the valuable nuances of a user's goal, intention, and benefit requires a lot of conversation and study. The information that one sifts through during the process is an obstacle and can often obscure what is valuable.

What one writes on a card by hand tends to be only what is indispensable. You are less likely to add extra words at the expense of visibility.

Spreadsheets and templates make words cheap and easy to write. They tend to add to the amount of information that one needs to sift through.

Finally, you cannot arrange a user story in a spreadsheet as you can on a wall, in a way that will add context and enhance the team conversation that follows.

Conversation

The means, or action, used to achieve a user story's goal can be discovered through the conversation that follows, by revisiting the original need that led to the story's creation.

The second "C," Conversation, ensures that everyone involved in implementing the user story understands the purpose for its existence. Eventually, through conversation, everyone involved in implementing the story will synchronize their mental models of what the user story is meant to achieve. Defects occur when users, sponsors, developers, testers, and business analysts don't synchronize their mental models and the requirement outcomes are unclear. Defects can also occur when the technology causes and effects of the implementation are unclear. The implementation teams' ultimate aim is to implement the feature with minimal effort. Simplicity, as the Agile manifesto states, is the "maximization of work not done." It is vital that teams be completely transparent and communicative about the risk and effort in achieving the user goal. Off-the-shelf platforms that are developed mainly through customization can yield considerable productivity with little effort. But the remaining effort to get the feature exactly as the customer has required may be disproportionally large. Customers who have asked for the feature may want to reconsider whether it is needed at all, once they see the cost. When the team understands the goal, they may be able to propose an alternative, more palatable solution given the technology or platform's constraints. This is why the conversation, the second of the three C's, is so important. The increase in understanding of the technology causes and effects and the desired outcomes is a means for tackling complexity.

Technical stories
If you can't have a conversation with a user about a story, then it is probably not a user story. Yet you will often see technical people who are, again, victims of prescriptive Agile, being slaves to the format with stories such as:
  • As a developer, I would like to use a unit of work in code so that I can avoid writing transactions in my databases.
  • As an architect, I would like to ensure that the calculation modules can be exposed as an API via REST.
  • As a developer, I would like to rewrite the calculations module in Scala because it is the preferred language for doing complex calculations (and I would like to put Scala on my CV).
Users and product owners cannot appreciate, prioritize, or even understand what is being said in these stories. Lean thinking, the inspiration for many Agile practices, defines value from the customer's perspective. If it is not perceived by the customer as valuable, it is not valuable, and they cannot prioritize it. With further scrutiny, sometimes one may even expose it as a form of waste. However, software cannot exist in a vacuum, and there is a certain amount of plumbing or infrastructure that much of our software depends on.

A key insight is that these efforts are the means to a goal and not the goal itself. If the development team can figure out a way to eliminate the effort altogether through a better choice of technology, technique, or by changing the product owner's mind, then the team has effectively proven that it was a form of waste. For those nonfunctional elements that are proven to be absolutely essential, it is important to work on them iteratively and incrementally throughout the project as a means to a user goal.

The Agile Manifesto states, "Continuous attention to technical excellence and good design enhances agility." Technical excellence and good design are first-class concerns in all Agile methods, but the key word in the statement is continuous. The idea that you can get the architecture right at the start of the project, without validating that architecture through the delivery of valuable software, leads to the Sprint Zero or Sprint Minus-One malpractice of Water-Scrum-Fall.

Remember, the Agile Manifesto has three other statements that relate to this idea:
  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Working software is the primary measure of progress.
Finally, it is worth repeating: Simplicity -- the art of maximizing the amount of work not done -- is essential.

Software infrastructure is essential but must be minimal. And its place is not in technical stories but in the delivery of a user story that provides real value to the user. When technical debt and architecture are addressed as a concern separate from the functional requirements, those work items slowly slip down the list of priorities and collect at the bottom, displaced by more urgent functionality.

Of course, this will lead to another problem: The user stories will become too large to deliver in a sprint, i.e., they will become epics.

Size matters, but not as much as collaboration
It is well known that one of the qualities that makes user stories different from other requirements-capturing techniques is that they are small. This is absolutely correct, but it is a secondary concern. The more important concern is that they are quick to deliver. The lead time and cycle time to deliver a story are what can make it an epic. Time is directly related to size, but there is an even greater influence on time: collaboration.

Teams should explore the limits of collaboration in their environment. An eight-day piece of work could, in principle, be delivered seven days sooner if we put eight people on the problem. Collaboration and swarming around the highest-priority item should be the default way of working until it is proven (or very likely) that adding one more person will actually slow the work down.

Delivering stories quickly is more important than creating stories that are small. Collaboration can play an essential role in reducing lead and cycle time.

Confirmation

The final "C" in the three C's of a user story is the Confirmation. While the opening statement is the promise of a future conversation, the confirmation in the form of an acceptance criteria closes that conversation and acts as a small contract between the development team and the customer or product owner.

The acceptance criteria are often written on the back of the card. This contract is the means to synchronize the customer's and the developers' mental models of what must be delivered. However, as with many contracts, there are often arguments that stem from varying interpretations of the contract's articles.

The problem is that human creativity, imagination, and cognitive bias resulting from previous experiences mean we each shape our mental models differently from each other. The solution, then, is to take out the human element and let a computer arbitrate. This can only happen if the acceptance criteria and the user story statement are so clear that even a computer can determine whether it is being met.

By implementing the acceptance criteria as an acceptance test that can be executed by a computer in a continuous or nightly build, the questions of what was done and when it was done become trivial to answer.
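
As a concrete illustration (not taken from the article), here is a minimal sketch of what the acceptance criteria for the "Log in" card might look like as an automated test that a continuous or nightly build can run. The AuthService class is a hypothetical in-memory stand-in for the real system under test.

```python
# A sketch of acceptance criteria expressed as executable tests.
# AuthService is a hypothetical in-memory stand-in for the system under test.
import unittest


class AuthService:
    """Hypothetical in-memory stand-in for the system under test."""

    def __init__(self):
        self._users = {"alice": "s3cret"}
        self._sessions = set()

    def log_in(self, username, password):
        if self._users.get(username) == password:
            self._sessions.add(username)
            return True
        return False

    def is_logged_in(self, username):
        return username in self._sessions


class LogInAcceptanceTest(unittest.TestCase):
    """Each test states one acceptance criterion from the back of the card."""

    def setUp(self):
        self.auth = AuthService()

    def test_valid_credentials_start_a_session(self):
        self.assertTrue(self.auth.log_in("alice", "s3cret"))
        self.assertTrue(self.auth.is_logged_in("alice"))

    def test_invalid_credentials_are_rejected(self):
        self.assertFalse(self.auth.log_in("alice", "wrong"))
        self.assertFalse(self.auth.is_logged_in("alice"))


if __name__ == "__main__":
    unittest.main()
```

Because each criterion is stated in terms a computer can verify, "done" stops being a matter of opinion.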

We make investments all the time when we purchase a mobile phone plan, take out a gym membership, or sign up for a credit card that provides us with loyalty points. We don't really examine the value of those investments until we commit to them by signing the contract. When we reach for the pen to sign, many of us will hesitate and take the time to quickly read through parts of the vast amount of fine print. This is to ensure that our mental model is in sync with the mental model of the person or company that wrote the contract.

The automated acceptance test is effectively the signing of the contract, and it marks the end of the conversation that began with the statement written on the front of the user story card. Writing an acceptance test in no uncertain terms, and in a manner so simple that even a computer can verify it, is an effective way to guard against scope creep. It also provides a means for the team to stay focused on the value proposition of the user's goal. Finally, it is an effective way to communicate that intent across distributed development teams when there are few opportunities to communicate face to face.

The practice of developers swarming on the single highest-priority item is rare but easy to do. The practice of business analysts, developers, and testers all swarming on the single highest-priority item has been, until recently, just an ideal, rare in practice. The reason is that, intuitively, most people believe the sequence of analysis, design, development, and testing is a necessary and logical progression. How can all of these activities be done at the same time, or in any other order? At best, it would seem, all we can do is compress that sequence of activities into a shorter period of time, such as a sprint.

This has resulted in another anti-pattern, called Mini-Waterfall. Mini-Waterfall perpetuates the problem of time zones; it only makes those time zones smaller. The reality for most teams that practice Mini-Waterfall is that work in progress almost always carries over into the next sprint. A developer or business analyst believes they have made progress at the beginning of and during the sprint, but they don't really know until a tester validates it. When a defect is raised at the end of the sprint and the sprint expires, the backlog item remains not done. There is no time to fix it, and it cannot be demonstrated in the sprint review or demo. Applying the Agile Manifesto's measure of progress -- "Working software is the primary measure of progress" -- little or no progress has been made, despite well-written and well-thought-out user stories and lots of code.

Behavior-driven development
New techniques allow us to avoid testing at the end of a sprint by treating it as a form of waste. If we succeed in eliminating end-of-sprint testing altogether, then we have demonstrated that it was indeed a form of waste that provides no value as perceived by the customer. We do this by applying the Lean principle of building quality in, rather than checking for it afterward.

The great leap forward that allows testers, developers, business analysts, and product owners to work collaboratively and simultaneously on a story is acceptance test-driven development (ATDD). A popular form of ATDD is behavior-driven development (BDD). BDD allows testers, analysts, product owners, and developers to sit at the same screen and write an acceptance test in plain English. The plain English effectively makes this an executable specification rather than a test. When this is done, one will see that analysis, test creation, code design, and sign-off all happen at the same time, during the conversation. The developer still needs to implement enough code to make the test pass, but much of the design work has already been done during the conversation. The difficult part of writing code is designing it -- not typing it.
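
To make that concrete, here is a minimal sketch of what such an executable specification could look like. It assumes the Python behave library purely as an example -- the article does not prescribe a tool -- and AuthService is again a hypothetical stand-in for the system under test.

```python
# features/steps/login_steps.py -- step definitions for a plain-English
# specification that might live in features/login.feature:
#
#   Feature: Log in
#     Scenario: Valid credentials start a session
#       Given a registered user "alice" with password "s3cret"
#       When "alice" logs in with password "s3cret"
#       Then "alice" has an active session
#
from behave import given, when, then


class AuthService:
    """Hypothetical in-memory stand-in for the system under test."""

    def __init__(self):
        self._users, self._sessions = {}, set()

    def register(self, username, password):
        self._users[username] = password

    def log_in(self, username, password):
        ok = self._users.get(username) == password
        if ok:
            self._sessions.add(username)
        return ok

    def is_logged_in(self, username):
        return username in self._sessions


@given('a registered user "{username}" with password "{password}"')
def step_registered_user(context, username, password):
    context.auth = AuthService()
    context.auth.register(username, password)


@when('"{username}" logs in with password "{password}"')
def step_log_in(context, username, password):
    context.result = context.auth.log_in(username, password)


@then('"{username}" has an active session')
def step_has_session(context, username):
    assert context.result is True
    assert context.auth.is_logged_in(username)
```

The plain-English feature text is what the whole team writes together; the step definitions are the thin glue developers add to make it executable, and running it in the build nightly or on every commit keeps the specification and the software in sync.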

The fact that everyone agrees at the same time on what is being delivered leaves little risk of rework. If the team collectively misses a detail, a new story can be raised, but the previous story remains done if the computer says so, because it has passed its tests.

This collaboration, facilitated by the right tools, has allowed teams to eliminate testing as a separate, end-of-sprint activity altogether.

The second part of this article will look at several techniques to bridge the gap between specifier and implementer, as well as ways to measure the confidence the team has in that bridge.





Comments

Karim Harbott, CSP,CSM, 7/4/2014 6:54:12 AM
Great job Abid. I enjoyed it.

I hear a lot about 'investing in good user stories'; which generally translates to super-detailed user stories. This is generally a pattern of organisations where the PO is not part of the Team and collaboration is replaced by big up-front analysis. A good analyst can capture absolutely everything perfectly on paper right? :-)
Syed Ali, CSP,CSM, 7/8/2014 12:26:06 PM
Very well written and informative.
Akmal Nasimov, CSM, 7/10/2014 9:31:08 PM
Good article but towards the end it introduces a development style that is not explained in detail. Can you elaborate on this part, "BDD allows tester, analyst, product owners, and developers to sit at the same screen and write an acceptance test in plain English. The plain English effectively makes this an executable specification rather than a test. When this is done, one will see that during the conversation, analysis, test creation, code design, and sign-off is being done all at the same time. Indeed, the developer still needs to implement enough code to make the test pass, but much of the design work is already done during the conversation."

How is this different from sprint planning part 2 when developers are talking to each other, PO, and testers to task out each story? Who is responsible for leading this meeting, developers, PO, or testers? How's this acceptance test different from regular test cases that are prepared for each story by testers?


Thanks!
