Agile Metrics

Running Tested Features

9 June 2014

Raghu Angara, CSP
Infosys Technologies

Introduction

Metrics mean different things to different people, organizations, and cultures. The underlying focus of measurement, however, is whether working software actually exists and is demonstrably suitable for its intended purpose. Because Agile is an empirical process, its metrics follow suit: progress is demonstrated at the end of every iteration and potentially shippable increment (PSI).

Metrics are powerful tools for planning, inspecting, adapting, and understanding progress over time. Several metrics contribute to the success of Agile projects. We will not discuss all of them here; instead, we will focus on the one metric that not only gives a clear and detailed picture of project health but also encourages higher productivity within the team.

Running Tested Features

In general terms, the Running Tested Features (RTF) metric measures how many high-risk, high-business-value working features have been delivered for deployment. It rewards software that works and that delivers the most features possible per dollar of investment. Let's start with a small comparison: Waterfall versus Agile.

In projects run with traditional Waterfall, the RTF value would be zero for the first several months of planning and analysis. This would be followed by work on infrastructure, framework, and architecture; RTF would still be zero. Agile projects, however, are timeboxed to, say, two weeks, with each iteration delivering one or more running tested features in order of customer priority. During the iteration there is a little analysis, a little design, some development and testing, and a little documentation if necessary.

While both projects might finish at the same time, the Agile project will have delivered more value much earlier than the Waterfall project. It will also have identified and mitigated project risk much earlier in the cycle, keeping technical debt at a manageable level.

In terms of productivity, measuring RTF is a quick way to see the state of the team. A healthy Agile team should be able to consistently deliver a set of stories over time, with any unexpected challenges or risks averaging out against features that turn out to be easier than expected.

So, what is RTF? According to Ron Jeffries in his article "A Metric Leading to Agility" (http://xprogramming.com/articles/jatrtsmetric/), RTF is defined as follows (a small counting sketch appears after the list):
  • The desired software is broken down into features/stories.
  • These features/stories are part of what needs to be delivered.
  • Each named feature/story has one or more automated acceptance tests.
  • When the tests work, the feature/story is implemented as desired.
  • The idea is to measure, at every moment in the project, how many features/stories pass all their acceptance tests and are known to be working.
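
To make the last point concrete, here is a minimal counting sketch in Python. The feature names and test results are invented for illustration; a real project would pull them from its test runner or continuous integration server. A feature counts toward RTF only while every one of its acceptance tests passes.

    # Minimal sketch: RTF is the number of features whose automated
    # acceptance tests ALL currently pass. The data below is hypothetical.
    acceptance_results = {
        "search-by-keyword": [True, True, True],   # all pass -> counts
        "export-to-csv":     [True, False],        # one failure -> excluded
        "login-with-sso":    [True, True],         # all pass -> counts
    }

    def running_tested_features(results):
        """Count features for which every acceptance test passes."""
        return sum(1 for tests in results.values() if tests and all(tests))

    print(running_tested_features(acceptance_results))  # prints 2

Recomputing this count every day from the test runner's report gives the RTF trend discussed later.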

Components

Measuring RTF, however, is not quite that simple. Many smaller components affect it: defects, tests, cycle times, code coverage, and so on. The individual numbers for these components play a big role in measuring RTF. Figure 1 below shows how these components need to be interpreted and corrected for the team to be successful; the direction of each arrow signifies the desired measurement goal for that component.


Figure 1: Factors impacting RTF

Running Tested Features should show linear growth from day one until the end of the project, and the team needs to deliver and measure RTF every week.

Running: These are features running as part of a single integrated software product. A feature (and, by extension, the software) is either running or it is not; there is no middle ground. For measuring RTF, all features should be integrated and running.

Note that in a complicated environment there will be several back-end and external systems on which the features being built depend. For automated testing, mocks, stubs, or facades may be used in the interim.
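
As a hedged illustration of that interim approach, the sketch below uses Python's unittest.mock to stand in for an external payment system so a feature's acceptance test can run unattended. The checkout function and the gateway's charge call are hypothetical stand-ins for real feature code and a real back-end.

    # Sketch: replacing an external back-end with a test double so the
    # acceptance test runs without the real system being available.
    from unittest.mock import Mock

    def checkout(gateway, amount):
        """Hypothetical feature code: charge the customer via the gateway."""
        receipt = gateway.charge(amount)
        return receipt["status"] == "approved"

    def test_checkout_charges_customer():
        gateway = Mock()
        gateway.charge.return_value = {"status": "approved"}  # canned reply
        assert checkout(gateway, 42.00)
        gateway.charge.assert_called_once_with(42.00)

    test_checkout_charges_customer()
    print("checkout acceptance test passed against the stubbed gateway")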

Tested: Features continually pass all their tests and function in the manner defined by the customer-approved specification.

Features: Real end-user features, pieces of the customer-given requirements, not technical features. The detailed specifications for these features are captured as user stories.
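
To see what this looks like in practice, here is a small pytest-style sketch of an acceptance test for a hypothetical story, "As a shopper, I can search the catalog by keyword." The catalog data and search_catalog function are invented for illustration.

    # Hypothetical acceptance test tied to a named end-user story.
    CATALOG = ["red bicycle", "blue scooter", "red wagon"]

    def search_catalog(keyword):
        """Toy implementation of the 'search by keyword' feature."""
        return [item for item in CATALOG if keyword in item]

    def test_search_returns_only_matching_items():
        assert search_catalog("red") == ["red bicycle", "red wagon"]

    test_search_returns_only_matching_items()  # pytest discovers this by name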

Visual RTF

Can we identify a good RTF metric when we see it? Let's view two different RTF graphs. The x-axis represents the days of the sprint. The y-axis represents the features completed and ready for deployment.


The "RTF growth: Desirable" graph shows linear growth in Running Tested Features and all the signs of a healthy project.

The "RTF growth: Undesirable" graph shows a project with unhealthy signs. The frequent dips indicate a project in trouble; drops in RTF can result from changing requirements, failing tests, or both, and many of the contributing factors identified in Figure 1 may be at play.
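
Teams that want to draw these graphs themselves need very little tooling. The sketch below assumes matplotlib is available; the daily counts are invented to mimic the two shapes described above.

    # Sketch: plotting RTF per sprint day. Counts are invented to show a
    # healthy linear climb versus the dips of a troubled project.
    import matplotlib.pyplot as plt

    days = list(range(1, 11))
    desirable   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # steady linear growth
    undesirable = [1, 2, 1, 3, 2, 4, 3, 3, 5, 4]   # frequent dips

    plt.plot(days, desirable, marker="o", label="RTF growth: Desirable")
    plt.plot(days, undesirable, marker="x", label="RTF growth: Undesirable")
    plt.xlabel("Sprint day")
    plt.ylabel("Features completed and ready for deployment")
    plt.legend()
    plt.savefig("rtf_growth.png")  # or plt.show() for interactive use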

Ensuring steady growth of RTF

In the words of Ron Jeffries, "To keep a single metric (RTF) looking good demands that a team become both Agile and productive."

Other factors that contribute to linear RTF growth include the following:
  • Refactoring frequently.
  • Avoiding "monkey clicking" (unstructured manual clicking through the UI) in place of automated tests.
  • Running the tests on every commit to the source code management system.
Some additional pointers to check off (a minimal commit-hook sketch follows the list):
  • Run acceptance tests on features as soon as they are implemented.
  • Have developers write the unit tests.
  • Have testers write the feature-level tests.
  • Testers may help developers evaluate their unit test coverage.
  • Developers may help testers write the trickier feature tests.
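
Short of a full continuous integration server, one lightweight way to honor the run-tests-on-every-commit pointer is a git pre-commit hook that runs the suite and blocks the commit on failure. The sketch below is a hypothetical hook body, saved as .git/hooks/pre-commit and made executable, and assumes pytest as the team's test runner.

    #!/usr/bin/env python3
    # Hypothetical pre-commit hook: run the test suite and abort the
    # commit if any test fails. Swap in your team's own test runner.
    import subprocess
    import sys

    result = subprocess.run(["pytest", "-q"])
    if result.returncode != 0:
        print("Tests failed -- commit aborted; fix the build first.")
        sys.exit(1)  # non-zero exit makes git abort the commit
    sys.exit(0)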

Conclusion

For organizations that need to measure progress on their Agile projects accurately and regularly, nothing paints a clearer picture than the Running Tested Features (RTF) metric. Several factors contribute to linear, healthy RTF growth, while others are detrimental to it. RTF will not only help measure project success but will also help create self-contained teams fully capable of organizing themselves for higher productivity.
 

Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.




Comments

David Grant, CSP,CSM, 6/9/2014 2:50:09 AM
Nice article. I think that -- like all metrics -- RTF needs to be interpreted carefully. Removing a feature should be a perfectly reasonable thing to do in a project if it leads to greater customer satisfaction.

Also, could you link to Ron Jeffries's article (http://xprogramming.com/articles/jatrtsmetric/) to save people Googling for it?

Jayaprakash Prabhakar, CSPO, 6/10/2014 3:59:14 AM
Good article, Raghu! We know this; teams just don't measure it. Today, teams count even engineering stories and automation work (for existing regression tests) as delivered features, which is not a good practice: it hides the true value the team brings to the customer.
We should measure RTF as defined, sprint over sprint, to see how agile we are.

@David - Link was pretty detailed and clarified few questions I had. Thanks for sharing it!
