

Why Acceptance Criteria Are Needed Before User Stories Can Be Relatively Sized

Acceptance criteria and user story points

9 June 2014

John Hill
Sony Pictures Entertainment (Through Randstad)

In August of 2003, Bill Wake used the INVEST acronym to describe the characteristics of a good user story on the XP123 blog:

I (= Independent)
N (= Negotiable)
V (= Valuable)
E (= Estimable)
S (= Small)
T (= Testable)

I only intend to deal with the last letter above, the "T" for "testable." Bill says:

A good story is testable. Writing a story card carries an implicit promise: "I understand what I want well enough that I could write a test for it." Several teams have reported that by requiring customer tests before implementing a story, the team is more productive. "Testability" has always been a characteristic of good requirements; actually writing the tests early helps us know whether this goal is met. If a customer doesn't know how to test something, this may indicate that the story isn't clear enough, or that it doesn't reflect something valuable to them, or that the customer just needs help in testing.

For me, user stories are not "done" (or even ready for estimation) until proper acceptance criteria are in place. I don't intend to discuss what constitutes proper acceptance criteria here; for those interested in that separate topic, see Walter Jackson's article, "What Characteristics Make Good Agile Acceptance Criteria?" My intention here is instead to emphasize one point: acceptance criteria must be in place before story points are assigned to a user story. I can't overemphasize how crucial this is before user stories are relatively sized.

Bob Galen says that he normally looks for "three to five, but no more than eight crisply defined tests that are the conditions of acceptance for a user story." In my experience, if more than eight criteria must be satisfied before an individual user story can be accepted (and is "done"), it might not be possible to complete that story in a single two-week sprint. I also sometimes find it nearly impossible to estimate story points for a user story without the acceptance criteria in place. I've worked with teams that perform the relative sizing exercise (using story points) for a group of backlog stories before adding the necessary acceptance criteria. Then, when the acceptance criteria are added later, they discover that the story is in fact too large to fit in a sprint and must be split (or that the story was actually considerably smaller than estimated once the criteria are understood).
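To make the "three to five crisply defined tests" idea concrete, here is a minimal sketch of what acceptance criteria for a hypothetical story ("As a user, I can request a password reset by email") might look like once translated into executable checks. The story, the function name, and the status strings are all invented for illustration; they are not from the article.

```python
# Hypothetical story: "As a user, I can request a password reset by email."
# Each assertion below corresponds to one crisply defined acceptance criterion.

def request_password_reset(email, registered_emails):
    """Return a status string for a password-reset request (illustrative only)."""
    if "@" not in email:
        return "invalid-email"      # malformed input is rejected outright
    if email not in registered_emails:
        return "unknown-account"    # unknown accounts receive no reset link
    return "reset-link-sent"        # known accounts receive a reset link

registered = {"alice@example.com"}

# Criterion 1: a registered user receives a reset link.
assert request_password_reset("alice@example.com", registered) == "reset-link-sent"
# Criterion 2: an unregistered address does not.
assert request_password_reset("bob@example.com", registered) == "unknown-account"
# Criterion 3: a malformed address is rejected.
assert request_password_reset("not-an-email", registered) == "invalid-email"
```

Three criteria like these are usually enough to reveal whether a story is really one sprint's worth of work or an epic in disguise.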

The practice of developing acceptance criteria for user stories after sizing them with story points can be costly and wasteful. My advice is to coach teams to properly define the acceptance criteria for a user story before sizing the story. It should be a standard practice that teams never size user stories without proper acceptance criteria already being in place.

Galen, R. Scrum Product Ownership: Balancing Value from the Inside Out, 2nd ed. Robert Galen and RGCC, LLC, 2013, Appendix A.

Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.



Daniel Lynn, CSP,CSM,REP, 6/9/2014 11:20:40 AM
This is an incredibly good point. I've been on teams that have said, "Let's just size it quick and we'll iron out the acceptance criteria over the next few days." Invariably this leads to confusion about the story and often to incorrect story sizes. Acceptance criteria also lead into a conversation that gets everyone on the same page and may result in the further breakdown of the story, as you mentioned.

If I may, I did want to point out one exception I've found. I worked with a team that would do very rough sizing during release planning, often covering 20 to 30 stories. In these cases, no acceptance criteria were created, but all of the estimates were assumed to be incredibly rough. It was helpful for the PO and managers to see how progress was trending, but all stories would get acceptance criteria and be potentially re-estimated before being brought into a sprint.
John Hill, CSP,CSM,CSPO, 6/9/2014 4:30:23 PM
Daniel: Thanks for your comment. I must confess that I still trust certain experienced teams to perform relative sizing with very little information. These teams are usually very good at this exercise and since it works for them I go with the flow. Thanks again, John H.
Jeff Kosciejew, CSP,CSM,CSPO, 6/9/2014 5:28:37 PM
Hey John. Nice article. A couple of questions come to mind - hoping you might comment with your thoughts.

There's a cost to spending time writing AC and tests for stories. And if you're writing them too early, there's a good chance you're writing them before the PO has decided whether they actually want to build that feature. By ballparking an estimate, you're helping your PO determine the cost, value, and potential ROI of a feature before investing any significant time.

There's also a great discussion to be had about the value of estimation in the first place. Is it really adding value? Without trying to derail the valid points you're making, it's worth a thought as to the value being delivered.
My question is: if I'm estimating work to help my PO prioritize what features they want, do I want to spend time writing detailed tests for PBIs that may not be a priority? Or can I get more value by providing quick, relative buckets, knowing that when I come to actually develop what's been prioritized, I'm going to have some that I estimated at 5 which are really an 8, or maybe even a 13... but I'm also going to have some that I originally estimated at 5 which are in fact only a 2 or 3. Or maybe I'm estimating in days... Either way, some will be bigger than I originally guessed, while some will be smaller (although my guess, based on observation, is that most will end up being larger rather than smaller!).

Before having a discussion of how to estimate, and the effort of generating requirements/tests up front, I'd love to know your thoughts on the value it delivers in the first place.
John Hill, CSP,CSM,CSPO, 6/10/2014 9:10:34 AM
Hi Jeff,
As a coach, I would prevent teams from spending time "writing detailed tests for PBI's that may not be a priority," unless that backlog item had been selected for the next iteration during sprint planning. Acceptance criteria are much lighter weight than detailed test cases (although they must be translatable into test cases when the time is appropriate). Acceptance criteria are instead intended to represent the "conditions of satisfaction" for the product owner, the business, and other stakeholders. They should be relatively high-level, focusing on the intent of the story (the "what" and the "why") while avoiding the details (the "how") found in test cases. They must also be expressed clearly, in simple language the customer would use, just like the user story, without ambiguity as to what the expected outcome is: what is acceptable and what is not acceptable (or not needed). They should be written in minutes, not hours.

Regarding the "value" of estimation (in addition to the points made in the article you cited), Tobias Mayer says things like: "Estimation has been the bane of the developer's life…Story points obfuscate rather than clarify. They increase waste rather than reduce it… Estimation as commonly practiced is mostly (if not always) useless. We cannot promise time, cost and scope, and it always results in pain if we deny that. Estimation for learning though, that's helpful. We estimate all the time anyway, so let's make it explicit, and do it mindfully."

Bottom line: estimates are a condition of reality for those of us developing software. As much as Tobias doesn't like estimating with story points, it's much better than estimating task hours, which are not persistent (i.e., task estimates are based on the individuals doing the work and must often be re-estimated, again and again, when team membership changes before work begins). I do find value in relative estimation for teams that persist over time and achieve a reliable average velocity, which aids release planning tremendously (with accurate story point estimates and a reliable average velocity in place, product owners are better able to predict when specific functionality will become available). Despite this, however, I've heard of teams that don't provide estimates of any type to their management. They also often disobey some of the Scrum "rules" (e.g., they may not hold a daily Scrum, might not use burndown charts, might not hold retrospectives, etc.). If teams can be successful without following certain Agile conventions, I'm fine with that as well (since it results from the Agile "inspect and adapt" premise). Thanks again for this thought-provoking dialogue. John H.
Jeff Kosciejew, CSP,CSM,CSPO, 6/10/2014 3:14:32 PM
Thanks, John. Great points for consideration. Somehow, I managed to post my response again today. Not sure how I did that. But, appreciate your thoughts.
Gene Gendel, CEC,CTC,CSP,CSM,CSPO, 6/10/2014 8:45:09 PM
I think that the notion of priority would not hurt in this conversation. The higher the priority, the higher the chance that a story will be scheduled for the next sprint. It is the stories that are upcoming for the next sprint that should be looked at more closely and, subsequently, estimated more carefully. For such stories, NOT having well-defined acceptance criteria would be risky.
Now, I would not be breaking news to anyone by saying that teams are frequently asked to estimate a backlog in more depth than just the top crust. Why? Well, strategic forecasting, budget planning…all that good stuff. So, is this a good enough reason to start digging through the entire backlog and try estimating vague stories without AC? Maybe…but how? One by one? If we do so, we will be introducing variability (a margin of error with every new estimate) that will eventually make our overall estimate very inaccurate. Besides, by the time a team gets to faraway stories, the PO may change their mind, or something else will happen that renders those stories obsolete. Therefore, it would be advisable to use a relative estimation approach for work that is far away (on the scale of priority). Specifically, as epics get decomposed into stories, a team may try to keep the relationship between the smaller stories and the overarching epic. There is a pretty simple yet very reliable technique: estimate just a handful of stories precisely, based on clear AC, then roll them up into their overarching epics and continue comparing (relative estimation) at the epic level. This saves a huge amount of time and, frankly…I will pause here…REDUCES variability. Donald Reinertsen refers to this as variability pooling. I would like to suggest his book The Principles of Product Development Flow: Second Generation Lean Product Development.
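Gene's variability-pooling point can be illustrated with a small simulation. The story sizes, error spread, and counts below are all made up for illustration; the point is only that independent estimation errors partially cancel when items are aggregated, so the spread of a rolled-up total grows with the square root of the number of stories rather than linearly.

```python
import random
import statistics

random.seed(1)

# Suppose each of 25 vague stories has a true size of 5 points, but any
# individual estimate is off by a random error of up to +/-3 points.
def noisy_estimate(true_size=5, spread=3):
    return true_size + random.uniform(-spread, spread)

totals = []
for _ in range(10_000):
    # Estimate every story one by one and sum the results.
    totals.append(sum(noisy_estimate() for _ in range(25)))

true_total = 5 * 25
sd_of_sum = statistics.stdev(totals)

# Independent errors partially cancel: the per-story error spread is about
# 1.7 points (35% of a 5-point story), but the summed estimate's spread is
# only about 8.7 points (roughly 7% of the 125-point total).
print(f"true total {true_total}, spread of summed estimate ~ {sd_of_sum:.1f}")
```

This is why comparing at the epic level, after rolling up a few well-understood stories, can stay useful even when individual faraway stories are vague.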
Will be happy to share further thoughts if it is of interest...And thanks for the great post and comments.
Lawrence Apke, CSP,CSM, 6/18/2014 9:57:53 AM

I would like to present another perspective. Relative sizing as expressed by story points is used to give the business an idea of what the team is capable of doing over the long term. These are not meant to be high-fidelity estimates. Over time the team will "find" a velocity that hovers around an average with some standard deviation. This average velocity, along with the attendant deviation, can be used to show our business partners what the team could do over time (assuming no change from the present).
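The average-velocity-with-deviation idea sketches out in a few lines. The sprint velocities and remaining backlog size below are invented numbers, purely for illustration of the arithmetic.

```python
import statistics

# Hypothetical completed-story-point totals for the last six sprints.
velocities = [21, 18, 25, 19, 23, 20]

avg = statistics.mean(velocities)    # long-run expectation (21.0 here)
sd = statistics.stdev(velocities)    # sprint-to-sprint variation

remaining_points = 120               # hypothetical sized backlog

# A rough forecast range: best case burns points at (avg + sd) per sprint,
# worst case at (avg - sd) per sprint.
optimistic = remaining_points / (avg + sd)
pessimistic = remaining_points / (avg - sd)

print(f"average velocity {avg:.1f} +/- {sd:.1f} points per sprint")
print(f"roughly {optimistic:.1f} to {pessimistic:.1f} sprints to finish")
```

A range like this is exactly the kind of low-fidelity, trend-level signal Lawrence describes: useful for business planning, without pretending the individual story estimates are precise.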

This means that story pointing must be done for the entire backlog as soon as possible. Going into detailed acceptance criteria first is a missed opportunity to give early feedback to the business about the ROI of a story. In other words, how can I know my ROI if I don't know the I? Holding relative story sizing back while we drive out acceptance criteria details does not provide timely feedback. In addition, a great number of stories, due to scope changes, never see the light of day. The time spent driving out acceptance criteria for these is waste.

If you have not been exposed to them before, I encourage you and others to research relative story sizing techniques like silent grouping, which has been used very effectively to provide relative sizing quickly.
John Hill, CSP,CSM,CSPO, 6/19/2014 2:35:47 PM

Thanks for this thought-stimulating comment!

My recommendation is largely for new teams, especially when the conversations during the relative sizing exercise show that the team doesn't really understand the story well enough to provide an estimate relative to other stories. Teams that have been together a long time can look at a new story and immediately see how it sizes relative to another story without the need for acceptance criteria, since the criteria for the new story are likely comparable to the criteria for the already-sized story.

I'd also like to clarify two other points:

1. Writing acceptance criteria should not take a long time (several minutes should be all that's needed for an entire story, unless it begins to become an epic, which is another good reason for having acceptance criteria in place before new teams do any estimating).

2. Stories should never be estimated too far in advance, especially if they may "never see the light of day". Bottom line, no one can convince me that new teams can properly size user stories without understanding what constitutes "acceptance", or more specifically, what is required for the story to be "done".

Thanks again!

John H.
Jay Brummett, CSP,CSM, 10/26/2017 12:42:17 PM
Hey Agilists!

I was trained and raised up at Nike, and our teams didn't use acceptance criteria at all. Has this become the standard now? I usually coach my teams to work away from tasks, hours, pre-assignment and acceptance criteria.

