Are We There Yet?

One team's path to "Done Nirvana"

16 February 2007

Aaron Ruhnow

I am the technical lead (and now also ScrumMaster) on a development team that spent most of 2006 incorporating various agile practices into our daily routine. We had traditionally followed a waterfall process, so it was a little difficult at first to get the team members (and management) to digest some of the new agile methods (Two-week iterations? Developers writing tests? No Gantt charts?)—we truly were stepping through the looking glass.

With all these new practices came a lot of uncertainty. We had new tools—new to us, at least—such as FitNesse and NUnit. We had new practices, including pattern-based design, TDD, and automated builds. We had new processes and even a strange, new rhythm. Many of us were used to being assigned a set of tasks and then spending weeks writing requirements. Some time later we would code them. (We may or may not have been involved in estimating how long these tasks would take to complete.) Making the shift to short iterations was quite a change. Having the developers themselves write up and estimate the tasks in one half- to two-day increments was also a shift, but was a bit easier to digest because the developers quickly found they could make more accurate estimates this way. It was often difficult to focus on the essentials in the midst of all of this newness.

To gain this focus, the team started to use a task board and task cards to track progress within each iteration. However, we often struggled to understand when a task, and then a story, was really done. We had learned of the “done, done, done” mantra (coded, tested, approved by the product owner), but that wasn’t specific enough for us. We needed more guidance to know when we were actually there. For instance, in our daily standups, a typical discussion would go something like this:

Dev 1: “I finished coding the ‘Copy’ function for the story.”

Dev 2: “Did you unit test it?”

Dev 1: “Uh…I think I wrote a few.”

Dev 3: “Are the Fit tests done?”

Dev 1: “Oh yeah, I forgot about Fit.”

We needed a better definition—one we could all agree on and understand.

About that time, I attended a Certified ScrumMaster class. During the class, another student mentioned that her team had a list of criteria defining done. Intrigued, I asked if she could email it to me. She told me they didn't have it written down; it was just understood within the team. (Guess they were already in "Done Nirvana.") After some more prodding, she did agree to write the criteria down just for me. From her list we created the first draft of our definition of done:

A story is complete when:

  1. Coded/implemented
  2. Peer reviewed (pair programming counts as peer review)
  3. Code is run against current version in source control
  4. Code is commented in source control and checked in
  5. Code is commented with VB Commenter on Public/Friend methods
  6. Story/use case manual test plan updated
  7. Fit test written (with help of SQA person)
  8. UML diagram updated
  9. Unit tests written and passing
  10. 90 percent code coverage achieved
  11. Build and package changes are communicated to build master (e.g., introducing a new file)
  12. Task list hours are updated and task is closed out
  13. All to-do items in code are completed

We changed her list somewhat to emphasize several coding qualities that needed reinforcement within our team. During iteration planning and task generation, we now make separate task cards for steps like writing unit tests (before coding, of course), making Fit tests, and so on.
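The list above can be treated as data, not just wallpaper. As a purely illustrative sketch (not something the article's team built), here is how the definition of done might be encoded so a story's remaining items are visible at a glance; the criteria come from the list, while the class and method names are assumptions:

```python
# Hypothetical sketch: the team's "definition of done" as a checklist,
# with per-story tracking of which criteria are still outstanding.

DONE_CRITERIA = [
    "Coded/implemented",
    "Peer reviewed",
    "Run against current version in source control",
    "Commented and checked in to source control",
    "Public/Friend methods commented",
    "Manual test plan updated",
    "Fit test written",
    "UML diagram updated",
    "Unit tests written and passing",
    "90 percent code coverage achieved",
    "Build/package changes communicated to build master",
    "Task hours updated and task closed out",
    "All to-do items in code completed",
]

class Story:
    def __init__(self, name):
        self.name = name
        self.completed = set()

    def mark(self, criterion):
        # Reject criteria that are not on the agreed list.
        if criterion not in DONE_CRITERIA:
            raise ValueError(f"Unknown criterion: {criterion}")
        self.completed.add(criterion)

    def remaining(self):
        # Preserve the list's order so the "next" items are obvious.
        return [c for c in DONE_CRITERIA if c not in self.completed]

    def is_done(self):
        return not self.remaining()

story = Story("Copy function")
story.mark("Coded/implemented")
story.mark("Unit tests written and passing")
print(story.is_done())        # still False: eleven criteria remain
print(story.remaining()[:2])  # the next items still owed
```

A standup question like "Did you unit test it?" then becomes a lookup rather than a memory test.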

The list is still posted in our project room, but I guess we, too, have now reached "Done Nirvana," because the definition of done is now in our heads and, for the most part, captured in individual task cards. The list now makes good wallpaper—and serves the role of looking good for managers who happen to enter our project room. I suppose it may also be helpful for newbies.

Maybe our list, as my fellow student’s did, can start your team down the path to “Done Nirvana.” But for now, this story is done.



Anonymous, 2/27/2007 11:09:14 AM
I liked your idea/definition of "Story Done" very much. In our approach, similar to this, we use checkpoints to ensure we are doing all the right tasks in the right way to meet the desired sprint/release goal. We generally have the following checkpoints:

  - Preparation checkpoint (at least 7 days before Hot House (Sprint 0))
  - Targets-agreed checkpoint
  - Sprint kickoff checkpoint
  - Story completion checkpoint (similar to your "Story Done")
  - Sprint end checkpoint (target-confirmed checkpoint)
  - Pre-release checkpoint (go/no-go decision)

We found this discipline difficult to adopt at the beginning, but it is working fine for us now because it has become a habit.

Best Regards
Anonymous, 2/28/2007 1:27:39 PM
Aaron, thanks for posting this article. The Dev 1/2/3 dialog illustrates something that isn't obvious to people at first: the robust definition of "done" recommended by Scrum is best enforced by a close-knit team with open communications. Attempts to tack quality on through formal design reviews, methodologies, or tools generally fail. People think they need a heavier, more formal process to get quality. But it turns out formality is an impediment to the open communication we need for high quality products.
Anonymous, 3/3/2007 2:30:17 PM
Yes, I agree. Less is often more when it comes to producing high-quality products. Quality is a frame of mind more than it is a process; the team needs to adopt a zero-defect mindset.
Adam Wernig, CSM, CSPO, 6/22/2007 12:06:33 PM
I find it very difficult for our QA members to say a task is done. They always leave time remaining because they figure they do not know if they are going to encounter bugs. The QA team also has very general tasks such as prep, test, regression/re-test. This drives me crazy but they cannot seem to come up with more specific testable items. Any ideas?
Steve Donahue, CSM, 1/7/2008 2:23:03 PM
We, too, have struggled with our definition of "Done", and I'd like to comment on how we address the issue of QA members having trouble saying a task is done. We plan a sprint much like the article says, with tasks for defining acceptance tests and executing them, and when issues arise from the testing (e.g., bugs), we add those as sprint backlog items. The acceptance testing task is done when it has achieved some level of coverage; if there are blocking bugs, those by definition have to be addressed before the testing task can in good conscience be closed. If there are other bugs not blocking the testing coverage, those are prioritized and the important ones are addressed as separate sprint backlog items; the remainder are deferred or deleted (and documented as known issues if they affect the user experience). In this way, the team knows the current status of all tasks and bugs, and the QA folks aren't pressured to declare a task done when status is clearly unknown.
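The closure rule in the comment above is concrete enough to sketch in code. Purely as an illustration (the field names, priority scheme, and coverage threshold are all assumptions, not from the comment): an acceptance-testing task may close only when target coverage is reached and no blocking bugs remain; other bugs are either promoted to sprint backlog items or deferred.

```python
# Hypothetical sketch of the testing-task closure rule: coverage target
# met AND no blocking bugs => the task can close; remaining bugs are
# split into new backlog items (important) and deferred/known issues.

def triage(bugs, coverage, target_coverage=0.9):
    """Return (can_close_testing_task, new_backlog_items, deferred)."""
    blocking = [b for b in bugs if b["blocking"]]
    important = [b for b in bugs
                 if not b["blocking"] and b["priority"] == "high"]
    deferred = [b for b in bugs
                if not b["blocking"] and b["priority"] != "high"]
    can_close = coverage >= target_coverage and not blocking
    return can_close, important, deferred

bugs = [
    {"id": 1, "blocking": True,  "priority": "high"},
    {"id": 2, "blocking": False, "priority": "high"},
    {"id": 3, "blocking": False, "priority": "low"},
]
print(triage(bugs, coverage=0.95))  # bug 1 blocks: task cannot close yet
```

The point of making the rule explicit is the comment's own: QA is never pressured to declare a task done while its status is genuinely unknown.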
Chris Sims, CST, CSP, CSM, CSPO, 10/12/2008 7:57:52 PM
Excellent post! In fact, I just quoted your list in this article on InfoQ:

It will be on the front page on Monday.


Francis Shigley, CSM, 8/28/2009 5:02:33 AM
Good Article. Having a definition of "done" that everyone agrees to and understands is essential.
Scott Green, CSPO, 2/16/2010 6:48:37 AM
My team has a similar list but I would add one additional and (for us) critical item--Functional Review. For every story in a sprint, we have a task called, "Functional Review." This is where the lead developer and quality engineer for a story demo the functionality to me--the Product Owner. All the items in the "done list" can be complete, but if the functionality doesn't meet my expectations, then we adjust.
