Many traditional waterfall projects rely on a set of calculations called “Earned Value” and “Schedule Performance Index” to indicate whether their project is on schedule. These metrics can be misleading and often mask some of the schedule delays they are designed to predict and prevent. Scrum provides project stakeholders with more accurate expectations regarding schedule, scope, and budget than traditional “Earned Value” and “Schedule Performance Index” metrics can provide for waterfall projects.
Where did the metrics originate?
In 1967 the Department of Defense (DOD) introduced a set of thirty-five Cost/Schedule Control System Criteria (C/SCSC). Implementing these criteria caused complex modifications to management control systems for companies doing business with the DOD. This overall “system” and its many byproducts are now generally accepted across a variety of non-military industries (“It must be valid if the military requires it”). It has become so mainstream that its logic is built into Microsoft Project and other scheduling software.
In Put Earned Value (C/SCSC) Into Your Management Control System, Quentin W. Fleming defines five metrics required for the C/SCSC system to work (BCWS, BCWP, ACWP, SV, and CV). Of these five, only the first two definitions pertain to this discussion:
- The Plan or Budgeted Cost for Work Scheduled (BCWS)—may be called work scheduled, which is synonymous with budget or plan.
- Earned Value or Budgeted Cost for Work Performed (BCWP)—may be referred to as work performed, which is synonymous with the term Earned Value.
By dividing the work performed (BCWP) by the work scheduled (BCWS) at a specific point in time, you allegedly determine how well the actual work is tracking to the scheduled work: the Schedule Performance Index (SPI). To determine whether or not a waterfall project is “on track,” SPI is measured religiously at specified intervals (usually monthly) using this formula: SPI = BCWP/BCWS. Any time the SPI is less than one, the schedule is deemed to be in trouble. If, for example, 5,000 hours of work had been scheduled (BCWS) for the period being evaluated and only 4,500 hours had been completed (BCWP), the SPI would be .9 (4,500/5,000 = .9). This means that the project is 10 percent behind schedule. If, on the other hand, 4,500 hours had been planned and 5,000 hours had been completed, the SPI would be about 1.11 (5,000/4,500 ≈ 1.11), implying that the project is roughly 11 percent ahead of schedule.
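The SPI arithmetic above can be sketched in a few lines. This is a minimal illustration, not part of any C/SCSC tooling; the function name is mine, while the variable names (bcwp, bcws) and the period figures follow Fleming's terminology and the example above.

```python
# Minimal sketch of the SPI calculation described above.

def spi(bcwp: float, bcws: float) -> float:
    """Schedule Performance Index = work performed / work scheduled."""
    return bcwp / bcws

# 5,000 hours scheduled, only 4,500 completed: 10 percent behind schedule.
assert spi(bcwp=4500, bcws=5000) == 0.9

# 4,500 hours scheduled, 5,000 completed: roughly 11 percent ahead.
assert round(spi(bcwp=5000, bcws=4500), 2) == 1.11
```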
What’s Misleading about These Metrics?
The problem with these metrics is that traditional waterfall projects include many tasks that must be completed before any software is delivered. Consider a ten-month project whose definition phase represents 30 percent of the total work. After the definition phase is finished, 30 percent of the budget has been spent and 30 percent of the project is complete, yet there is no working software. In fact, the project will continue to consume measurable costs until the software is delivered (near the planned end of the project). If the total project was budgeted at $6 million and 30 percent of the time is required for specifications, you have just spent $1.8 million (30 percent of $6 million) for a pile of specifications, meeting minutes, approval forms, memos, and so on (see Figure 1).
Figure 1. Ten-month (60,000-hour) waterfall project at 30 percent complete.
Another problem with these metrics is that SPI and Earned Value calculations often mask real issues, delaying the discovery of problems until late in the development cycle. Both calculations show that acceptable progress is being made over time, without ensuring that the software will even work. If significant design problems are discovered late in the project schedule, typical responses include scheduling unplanned re-design, re-coding, and re-testing (usually at the expense of previously planned QA). Project managers also struggle to explain why projects are at risk late in the cycle, especially after submitting acceptable monthly SPI and Earned Value (EV) reports along the way. If problems do occur, project managers typically try to re-baseline the project with a new end date in order to again report acceptable SPI and EV metrics. This makes everything look good again on paper, with no real benefit (and a delayed release date). There is a better way.
How Does Scrum Avoid Those Problems?
Scrum delivers working software sooner and provides more accurate predictions of schedule, budget, and scope. Under Scrum, we’d plan the same ten-month project by dividing the work into ten thirty-day sprints (see Figure 2). At the end of sprint #3 (30 percent into the project), we would have completed three iterations of development at a cost of $1.8 million. In the same time as a waterfall project and at the same cost, rather than a pile of specifications and approval forms, we would have produced three deliveries of demonstrated, shippable product. This is significant.
Figure 2. Ten-month (60,000-hour) Scrum project at 30 percent complete.
Figure 3 compares the two approaches (Scrum and waterfall) on the same fictional project. Notice that while both projects estimate the same number of hours to completion, only the Scrum project delivers working software every thirty days. Only the Scrum project monitors its schedule by the work delivered rather than through a sometimes misleading calculation. Finally, only on the Scrum project does the value accrue over time as each iteration’s software is delivered, rather than by a measure of how much of the planned work has been completed, regardless of whether any software is actually being delivered.
Figure 3. Waterfall “Schedule Performance Index” and “Earned Value” calculations vs. Scrum monitoring (using the same 10-Month (60,000-hour) project).
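The contrast the comparison draws can be sketched as a toy model of the article's fictional $6 million, ten-month project. The linear earned-value accrual and the one-increment-per-sprint delivery rate are simplifying assumptions of mine, not data from the source.

```python
# Toy model of the waterfall-vs-Scrum comparison on the fictional project.
# Assumptions: EV accrues linearly with planned work; Scrum ships one
# increment per thirty-day sprint.

BUDGET = 6_000_000
MONTHS = 10

def waterfall_earned_value(month: int) -> float:
    """EV accrues as planned work completes, whether or not software ships."""
    return BUDGET * month / MONTHS

def waterfall_deliveries(month: int) -> int:
    """One big-bang delivery at the end of the project."""
    return 1 if month >= MONTHS else 0

def scrum_deliveries(month: int) -> int:
    """One demonstrated, shippable increment per sprint."""
    return month

# At the 30 percent mark both projects have "earned" $1.8M,
# but only the Scrum project has shipped anything.
assert waterfall_earned_value(3) == 1_800_000
assert waterfall_deliveries(3) == 0
assert scrum_deliveries(3) == 3
```

The point of the sketch is that the waterfall metric rises steadily regardless of deliveries, while the Scrum column counts only working software.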
What Conclusions Can We Draw?
SPI is a poor barometer for measuring the health of a project schedule. “Earned Value” calculations create the false perception that actual “value” is being created during the early to middle phases of a project, when no software is produced. Scrum’s simple inspect-and-adapt mechanism, tied to thirty-day time-boxed delivery cycles, provides more reliable project status than traditional DOD metrics.