
As Managers, How Do We Measure a Scrum Team's Effectiveness?

21 July 2014

Michael Glick
AOL, Inc.


Since its inception in 1993, Scrum has been embraced by software development teams across the globe as an effective, efficient method for completing projects. The reasons for its wide adoption vary, but common themes include how readily it embraces change and how it empowers teams (i.e., "bottom-up" versus "top-down" management).

With any software development process, it is imperative for us as managers to keep a consistent, comprehensive, and accurate pulse on the team's progress. Even though the group dynamics of a Scrum-adopting team (versus a legacy process) have changed, the need to monitor the team's successes -- and areas for improvement -- has not. Establishing and maintaining metrics on the team's key performance indicators (KPIs) is by no means a new concept. This is especially the case for software development teams, where technology fuels a trove of relevant data.

Given such a wide range of data points, data-hungry development managers might be inclined to monitor a myriad of KPIs and tie them to team objectives. This can spiral out of control, leading the team to chase irrelevant metrics and to change its behavior simply because those metrics are being watched, a phenomenon known as the Hawthorne Effect. Scrum-Agile guru Mike Cohn once stated, "Don't go overboard and ambitiously commit to collecting 50 different data points for your team. Collect one. Use it. Then pick another." In other words, managers should first identify which areas the team needs to improve, and then determine which KPIs could highlight those areas.

So one might ask: As managers of a Scrum-adopting organization, which metrics can provide such an accurate pulse? Listed below are some of the KPIs I rely upon for periodic analysis (two brief calculation sketches follow the list).
  • Velocity
    • The sum of story points completed ("done") per sprint, per team.
    • Benefit: Shows the team's capacity for work on a sprint-by-sprint basis.
  • Story cycle time
    • The average number of days user stories spend between "committed" and "done," on a sprint-by-sprint basis.
    • Benefit: Shows how quickly the team completes user stories.
  • Customer acceptance testing (CAT) cycle time
    • The average number of days user stories remain in a "done" state before being accepted, on a sprint-by-sprint basis.
    • Benefit: Shows the throughput of the product owner's execution of CAT.
  • Estimation accuracy
    • Story cycle times aggregated into story-point buckets by each completed user story's estimate, tracked over time (monthly).
    • Benefit: Shows how long it actually takes to complete a user story versus what was estimated.
  • Defects per release cycle
    • The number of defects logged per release cycle, broken down by defect type (IAT, regression, production, post-release).
    • Benefit: Shows the overall quality of the team's work.
  • Defects per story point
    • The ratio of defects to velocity per sprint.
    • Benefit: Shows the team's average quality per story point.
  • Test cases run
    • The number of test cases executed per sprint, broken down by test type (manual/automated). Can be charted against the sprint's total defect count to show the test cases' effectiveness.
    • Benefit: Shows how much testing is performed during each sprint.
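To make the first three definitions concrete, here is a minimal sketch, in Python, of how they might be computed from raw sprint data. The Story record, its field names, and the sample dates are illustrative assumptions, not the schema of any particular tracking tool:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Story:
    # Hypothetical story record; the fields are assumptions for this sketch.
    points: int       # story-point estimate
    committed: date   # pulled into the sprint ("committed")
    done: date        # met the Definition of Done
    accepted: date    # accepted by the product owner (CAT complete)

def velocity(sprint: list[Story]) -> int:
    """Sum of story points completed ("done") in the sprint."""
    return sum(s.points for s in sprint)

def story_cycle_time(sprint: list[Story]) -> float:
    """Average days a story spends between "committed" and "done"."""
    return sum((s.done - s.committed).days for s in sprint) / len(sprint)

def cat_cycle_time(sprint: list[Story]) -> float:
    """Average days a story sits "done" before product owner acceptance."""
    return sum((s.accepted - s.done).days for s in sprint) / len(sprint)

# Invented sample data for a single sprint.
sprint = [
    Story(3, date(2014, 7, 1), date(2014, 7, 4), date(2014, 7, 5)),
    Story(5, date(2014, 7, 1), date(2014, 7, 9), date(2014, 7, 11)),
    Story(2, date(2014, 7, 2), date(2014, 7, 3), date(2014, 7, 4)),
]
print(velocity(sprint))                    # 10
print(round(story_cycle_time(sprint), 1))  # 4.0
print(round(cat_cycle_time(sprint), 1))    # 1.3
```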
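Estimation accuracy and defects per story point reduce to similarly small aggregations. A sketch under the same caveat (every number below is invented for illustration):

```python
from collections import defaultdict

# Hypothetical (estimate, actual cycle time in days) pairs for completed stories.
completed = [(2, 2), (2, 3), (3, 4), (5, 9), (5, 7), (8, 15)]

# Estimation accuracy: bucket actual cycle times by story-point estimate,
# then compare each bucket's average against what was expected for that size.
buckets = defaultdict(list)
for estimate, days in completed:
    buckets[estimate].append(days)
for estimate in sorted(buckets):
    avg = sum(buckets[estimate]) / len(buckets[estimate])
    print(f"{estimate}-point stories: {avg:.1f} days on average")

# Defects per story point: ratio of defects logged to velocity for one sprint.
defects_logged = 6    # defects found during the sprint (invented)
sprint_velocity = 25  # story points completed in the same sprint (invented)
print(f"defects per story point: {defects_logged / sprint_velocity:.2f}")  # 0.24
```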
As managers of Scrum teams, which metrics do you use to check the pulse of your organization?






Comments

Zach Bonaker, CSP,CSM,CSPO, 7/22/2014 10:25:39 AM
Hi Michael - thanks for contributing your article!

I certainly won't tell you your use of KPIs is incorrect or inappropriate. However, I don't find value in team-level metrics, at least when described as "key performance indicators."

I prefer a PDCA approach - develop a hypothesis about my team(s), determine the data needed to test the hypothesis, check/test, then act. In an Agile environment, I like data to help drive decisions, not measure performance.

Some thoughts:
Velocity - a lightweight forecasting tool to benefit the business, not measure a team. Credit Big Visible: "Velocity is like a helium balloon. It will rise on its own when nothing is holding it down!" Velocity as a KPI is easily gamed and not portable between teams.

Story Cycle Time: Perhaps one team simply decomposes stories to a smaller size... in the end, if teams are delivering quality increments, is reducing this time necessary?

Estimation Accuracy - estimates are guesses, and wasteful ones at that. Relative estimation is nice because it makes those guesses useful. But given that estimates are wrong (they're guesses), why measure how much less wrong they are among teams?

Defects Per Story Point: if a relative estimation scale is unique to a team, I simply don't understand what value this metric would have to either the business or a team.

The point I'm trying to convey - and not convince you of - is that metrics are just data. In the words of a brilliant researcher, Dr. Robert Briggs, "Data have no meaning without respect to the theory from which they spring."

In other words, if you're not starting with a hypothesis and selecting appropriate data to test it, you're just capturing meaningless data and allowing them to create bias.
Tim Baffa, CSM, 7/22/2014 4:47:52 PM
I would never advocate using any of the proposed metrics across teams, but as a reflective exercise for each team on its own productivity and accuracy, I think some of these metrics (story cycle time, DoR before acceptance into a sprint, estimation accuracy, defects per story point) have value.
Crystal Bishop, CSM,CSPO, 7/27/2014 9:33:40 PM
I have to admit, I flinched a bit reading this article.

I agree with Tim Baffa that these kinds of KPIs might have merit for an individual team, IF they are collected and used for sprint retrospectives. Over time, as the team matures, they would most likely no longer find value in these KPIs.

The problem I see with someone outside the team collecting and reviewing these KPIs is that it leads upper management to believe all Scrum teams are alike, and it will invariably lead to comparing Scrum teams against each other.

Almost all of the data points mentioned are not only subjective between teams; they are also subjective within the team itself and WILL change over time. That is the nature of the empirical process.

A story point in sprint 1 may not have the same 'size' as a story point in sprint 5, and may be different again in sprint 10, and so on. There is also the possibility of the team deciding to use another estimation method altogether five sprints in!
