Agile Smells: Lack of Progress

Part One: Failings in Backlog Management

15 October 2007

Mark Randolph
Echota Technologies

Problem

Lack of Progress 

Importance

Vital, if meeting schedules or budgets is critical

Symptoms

These are smells that indicate lack of progress:

  • A backlog that never contracts, or keeps growing instead of shrinking
  • Too much work in progress
  • Partially completed features that never quite get finished
  • “It is 90 percent complete” and has been for a month
  • “Completed” features that subsequently require extensive revision or repair
  • “Completed” features that are waiting on uncompleted features
  • Stakeholders complaining about lack of progress
  • Missed deliveries
  • Unsolicited upper management attention

What’s wrong? What will you do?

Discussion

The smells cited above result from failures in one or more of these basic Scrum elements:

  • Sprint backlog management
  • Having a clear definition of each proposed feature
  • Consistent enforcement of feature completion

This article (part one of a three-part series on progress smells) analyzes symptoms and suggests remedies for failings in backlog management. Part two will address smells emanating from feature definition. Part three will offer simple rules for obtaining a defect-free product. As the case history below illustrates, if you do just these things right, good things will happen, regardless of whether other agile or extreme programming techniques are used.

All three parts presume an individual project already underway and failings that are within the ScrumMaster’s ability to fix once found. In other words, they assume that there is full external support and no irresolvable external meddling (which brings to mind the joke beginning, “Now, assuming a spherical chicken…”).

Analysis

The goal of analysis is to determine whether good Scrum practices are being followed. Where they are not, the aim is to identify which deficiencies are most worthwhile and easiest to fix.

Diagnosing backlog management.  A “no” answer to any of the following questions suggests starting with corrections to backlog management:

1. Is a list of potential work (the “backlog”) maintained?
2. Is the backlog kept up to date, including associated burndown and velocity charts?
3. Are features assigned to the current active sprint only at the beginning of the sprint?
4. Do the features selected ensure a “potentially shippable product?”
5. Are bugs excluded from the backlog?
6. Once begun, is a feature completed?

(If you answered “no” to the first question, then go to jail, go directly to jail…)

Failures in feature definition.  These questions reveal shortcomings in feature definition:

  • How are features described? As user stories? Test cases? Functional specifications?
  • Is the product owner directly involved in feature definition? If not, is there an effective customer proxy and advocate?
  • Is the customer describing the business problem and not stipulating features?
  • Are features ranked by value to the customer so that the biggest payoffs come first?
  • Are bugs excluded from the backlog?
  • Are features the principal items tracked on the backlog?

While features may not be the only constituents of a backlog, something is smelly if they are less than 80 percent of backlog items. Most, if not all, of the other items should be legitimate “non-functional” requirements, that is, requirements externally imposed, such as, “The customer must be able to use databases from vendor XYZ.” Bugs are not features, but symptoms of incomplete features (more on this below).
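
As an illustration of that 80 percent heuristic, here is a minimal sketch in Python of a backlog composition check. The item categories and the backlog structure are my own invention for the example, not taken from any particular tool:

    from collections import Counter

    def audit_backlog(items):
        """Warn when features fall below 80 percent of backlog items."""
        kinds = Counter(kind for _, kind in items)
        total = sum(kinds.values())
        feature_share = kinds["feature"] / total if total else 0.0
        if feature_share < 0.8:
            print(f"Smelly: only {feature_share:.0%} of items are features")
        if kinds["bug"]:
            print(f"Smelly: {kinds['bug']} bug item(s) tracked on the backlog")
        return feature_share

    # A hypothetical backlog for the HR example discussed later.
    backlog = [
        ("Display static benefits page", "feature"),
        ("E-mail announcements to employees", "feature"),
        ("Support databases from vendor XYZ", "non-functional"),
        ("Fix crash on empty vacation request", "bug"),
    ]
    audit_backlog(backlog)

Run against the sample backlog, the check flags both the low feature share and the bug masquerading as a backlog item.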

Failures in feature completion.  If completion criteria are defined and enforced, the goal of “zero-defect code” is approachable.  The following questions help diagnose failures to deliver stuff that works:

  • Is the build/test/deploy cycle clearly defined? Do all developers understand and adhere to it?
  • What does “done” mean anyway?  Are completion criteria explicitly stated?
  • Is completion defined as the existence of concrete artifacts (tested archived code, executed test cases) that are easily demonstrated and audited?
  • Is testing integral to feature completion and not a separate activity?
  • Are completion criteria enforced? Is there independent evaluation of completion?
  • Are bland assurances like, “Yes, it’s done,” rejected in the absence of verification?
  • Do your build/test environments help or hinder enforcing code completion?

Taste, preference, experience, and company policy influence much of the last point, but these would be useful questions to ask:

  • Is there a protected source code repository separate from development environments?
  • Is there automated support for source code management?
  • Are final builds performed by someone other than the developer, using only source from the repository?
  • Do system integration and/or acceptance test environments exist independently of development environments?
  • Are automated builds used?
  • Is automated testing used?

Be sure to ask if Test Driven Development (TDD) has been considered or is in use, because that will come up again in Part Three: Achieving a Defect-Free Product.
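
For readers unfamiliar with TDD, here is a minimal sketch of its test-first rhythm using Python’s standard unittest module. The vacation accrual function is a hypothetical example, not code from any project described here:

    import unittest

    def vacation_days(years_of_service):
        """Accrued vacation: 10 base days, plus one per year after the first."""
        if years_of_service < 0:
            raise ValueError("years_of_service must be non-negative")
        return 10 + max(0, years_of_service - 1)

    class VacationDaysTest(unittest.TestCase):
        # In TDD these tests are written first, fail ("red"), and then
        # drive the implementation until they pass ("green").
        def test_first_year(self):
            self.assertEqual(vacation_days(1), 10)

        def test_accrual(self):
            self.assertEqual(vacation_days(5), 14)

        def test_rejects_negative_service(self):
            with self.assertRaises(ValueError):
                vacation_days(-1)

    if __name__ == "__main__":
        unittest.main()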

Remedies

The remedies offered here in part one and those to be presented in the next two parts work together to deliver:

  • A backlog with early visible progress, early visible value, and on-time completion
  • Features of great value to the customer that are easy to construct
  • A “zero-defect” product

Each of those results relies on a corresponding attitude a team can begin to embrace now:

  • Commitment at all times to a “potentially shippable product”
  • A willingness to ask, “If it does not deliver useful working code, what good is it?”
  • The realization that bugs are not inevitable

At each opportunity, but particularly at Sprint retrospectives, a ScrumMaster can explain and reinforce these attitudes. Then during daily scrums, the ScrumMaster can enforce their practice. The result is a team that willingly adopts, then conscientiously follows, good Scrum practices. 

That said, the remainder of this first article, Part One: Failings in Backlog Management, discusses corrections for backlog management weaknesses.

Healthy Backlog Management

Aiming for a potentially shippable product means that a team strives to deliver, at the end of each sprint, a complete, genuinely useful, and visible capability that the customer could use in some manner, or at least see demonstrated. This requires the following:

  • Use features—not tasks, not components, not bugs—as backlog constituents.
  • Rank features in the backlog by value to the customer, and not by any other criteria.
  • Add potential features to the backlog at any time, but do not add features to an active sprint during the sprint.
  • To the extent possible, finish what you start and don’t shuffle priorities during a sprint.
  • Leave no bug behind; fix them as you find them.

While features don’t have to be worked on in strict order of their value to the customer, you must have a good reason for violating that priority. Ideally, each feature selected for a sprint meets these criteria (a selection sketch in code follows the lists below):

  • It is defined as a feature (i.e. not a component, not an activity, and not a bug fix)
  • It can be completed—truly completed—in the time available
  • No feature relies on any other feature that will be incomplete at sprint end (no external dependencies)
  • The feature is more valuable than anything else remaining on the backlog

The result:

  • Early visible progress
  • Avoidance of “completed” features that cannot be deployed
  • Reduced risk of subsequent integration problems
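
Here is the selection sketch promised above: a minimal, greedy pass in Python over a value-ranked backlog that honors the four criteria. The field names and the summed-estimate capacity model are illustrative assumptions, not a prescribed format:

    def select_sprint(backlog, capacity):
        """Greedily fill a sprint from a backlog ranked by customer value."""
        ranked = sorted(backlog, key=lambda f: f["value"], reverse=True)
        selected, chosen_names = [], set()
        for feature in ranked:
            fits = feature["estimate"] <= capacity
            # Reject anything that depends on a feature not already chosen.
            deps_met = all(d in chosen_names
                           for d in feature.get("depends_on", []))
            if fits and deps_met:
                selected.append(feature)
                chosen_names.add(feature["name"])
                capacity -= feature["estimate"]
        return selected

    backlog = [
        {"name": "static benefits site", "value": 9, "estimate": 5},
        {"name": "employee database", "value": 6, "estimate": 8},
        {"name": "e-mail announcements", "value": 7, "estimate": 6,
         "depends_on": ["employee database"]},
    ]
    print([f["name"] for f in select_sprint(backlog, capacity=12)])
    # ['static benefits site']: the e-mail feature is skipped because
    # its dependency did not make the sprint.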

As an example, consider an HR project that requires (1) a front-end website, (2) a network connection to a back-end e-mail system, and (3) a database to store accumulated results. If only one of the following can be delivered in three weeks, which is more valuable to the HR department?

  • Displaying a static description of benefits to all employees (i.e. a static website)
  • A polished database to store vacation schedules for unidentified employees who cannot submit requests anyway

I take door number one. The next increment of value might be a primitive database containing no more than a list of current employees that HR can begin to populate, followed by the ability to e-mail benefit announcements to those employees. If I were HR, I’d really like to reach employees now, rather than schedule vacations for employees unable to submit requests until later.

Look at what else is gained: early demonstration of integration between the selected web, database, and network technologies. (Whew! I’m glad we found that problem now! Remember the BustaGut project where the regional power grid collapsed when our server went live?)

In classic project management, scope creep occurs when changes are made to previously accepted requirements.  Scrum controls scope creep by postponing final selection and definition of features until the last possible moment. That moment is when features of the greatest current value to the customer come off the release backlog to be worked on in the current sprint.

Bugs should not be tracked on the backlog, but fixed in the iteration in which they are found. And all work stops until the bug is fixed. Yes, this policy really works, and it works well for many reasons, but in his 1994 book, Debugging the Development Process, Steve Maguire gave what remains the most succinct justification of the zero-defect strategy: “It takes so much less time than letting even a single bug slip by and find its way into the product’s master sources.” [1]

Case History

When Cassandra spurned Apollo, she was cursed by him to be able to foretell the future, but have no one believe her.  My boss came to me one Thursday and said he wanted to start using Scrum the following Monday. I was an untrained ScrumMaster. We had never used Scrum before and my organization had great difficulty hitting ship dates. Projects ended when we ran out of time or we ran out of money, not when the product was either feature rich or bug free. Our customer’s attitude was, “Deliver these features by this date or we take our business and walk!” We couldn’t even find all of our source code!

Three weeks into that first eight-week Scrum project, I predicted development would come in two days ahead of schedule, completely tested, documented, and with code archived.  We hit that ship date. Everything predicted had come true. This Scrum stuff really worked. 

Napoleon defined luck as preparation meeting opportunity:

  • The customer had handed us—disguised as an ultimatum—a stable feature backlog fitting the time available.
  • My project manager was open minded and cooperative (I can never thank her enough).
  • I had been studying agile programming on my own, and Scrum in particular.

We decided there would be one release in eight weeks. Our sprints would be one-week long. We would hold daily scrums, during which we would each answer three questions: What was completed? What would be done that day? What were obstacles to faster progress?

In addition, I advocated defining and enforcing criteria for feature completion. Much to my astonishment and relief, the team readily agreed:

  • The developer would perform unit testing and commit the code to the repository.
  • Another developer would perform an independent evaluation, either a test or a code review.
  • The code would be run as part of an ongoing system integration test.

We ran into interesting problems. 

The team perceived the daily scrums as status meetings, and would veer away from the three questions into problem solving. More than one team member questioned the value of daily scrums.  Persistent, consistent, and patient enforcement managed, but never completely overcame, this tendency.  I don’t think the team members realized how each benefited from the flexible cooperation that emerged from fluid communication. For example, one member expressed confusion over a C++ feature, so another offered to share a clarification immediately after the daily scrum, which then moved on. Without that casual ten-second exchange, I suspect four to six hours would have been spent puzzling out a solution. Yet the conversation happened so quickly, the team never noticed. I now call such moments to a team’s attention at retrospectives, and occasionally receive appreciative nods when I do. Or not. But the moments are real. I see them.

We had not done code reviews before, and bad practices used by weaker developers dazed me:

  • Method definitions running for hundreds of lines
  • Profligate use of global variables
  • Hard-coding directory and file names
  • No bounds checking, off-by-one errors, unhandled exceptions, etc.

The difference between weak and strong programmers was glaring. On the positive side, when our senior architect performed the code walk-throughs, he was able to communicate design intent, and because of his stature and credibility, the better developers readily accepted his direction and the weaker ones his correction.

Software metrics have held special interest for me over a twenty-five-year career, yet I had never seen a technique as precise, accurate, and easy to apply as burndown and velocity charts. On this project, we evaluated several commercial tracking tools, but finally stuck with a simple spreadsheet. We have since adopted Trac [2], an open source application, as our backlog management tool. I prefer it to a spreadsheet because:

  • Features can be organized into releases and sprints
  • Team members can manage their own tickets
  • Trac integrates with Subversion [3], our source management tool, making evaluations easier
  • Data for burndown and velocity charts accumulates automatically

It remains my strong opinion that burndown and velocity charts are all that are required to run a successful Scrum project.
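
For the curious, the arithmetic behind those two charts is trivial, which is part of their appeal. Here is a minimal sketch in Python, with made-up numbers rather than data from our project:

    def burndown(total_points, completed_per_sprint):
        """Remaining work after each sprint; the plotted burndown line."""
        remaining, left = [total_points], total_points
        for done in completed_per_sprint:
            left -= done
            remaining.append(left)
        return remaining

    def velocity(completed_per_sprint):
        """Average points completed per sprint; used to forecast the end."""
        return sum(completed_per_sprint) / len(completed_per_sprint)

    completed = [8, 10, 9]               # points finished in sprints 1-3
    print(burndown(80, completed))       # [80, 72, 62, 53]
    v = velocity(completed)              # 9.0 points per sprint
    print(f"About {(80 - sum(completed)) / v:.1f} sprints of work remain")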

Admittedly, our project was greatly simplified because feature definition was not required. Rather than believe that makes our first experience with Scrum invalid, I offer it as a demonstration of how important complete and stable feature definition can be. All we had done was:

  • Properly managed the sprint backlog
  • Held a clear definition of each proposed feature
  • Consistently enforced feature completion

It was that simple. 

Looking Ahead to Parts Two and Three

The analysis presented here helps sort progress smells that emanate from any of three sources:

  • Poor backlog management
  • Lack of feature definition
  • Tolerance of bugs

Part one addressed remedies for weak backlog management. Part two first poses and then answers questions related to feature definition:

  • What is and is not a feature?
  • Does it really matter if I use user stories, use cases, or functional specs?
  • How do I refuse to use IEEE Standard 830 and keep my job?

Part three defends the assertion that defect free products are feasible by answering questions like:

  • Why don’t bugs belong on the backlog?
  • What are the limits to testing?
  • What does “done” mean anyway?

Resources

[1] Steve Maguire. Debugging the Development Process. Microsoft Press, 1994. ISBN 1556156502. Page 128.

[2] Trac. Edgewall Software. http://trac.edgewall.org/

[3] Subversion. Tigris.org. http://subversion.tigris.org/

 


Comments

Trevor Donarski, CSM, 4/8/2008 3:29:53 PM
I really enjoyed this article and found that it had a lot of great information. I was wondering when we could expect parts 2 and 3?

Thanks,

Trevor
Martin Bernd Schmeil, CSM, 8/14/2008 4:22:19 PM
If you're going to use Trac, you might want to take a look at the Agilo plugin http://www.agile42.com/cms/pages/download/
