
Agile Performance Appraisals

03/14/2012 by Arpit Gautam

We all do the crazy exercise of rating team members every year. And, looking back at a software development industry that's almost 50 years old, we know certain things for sure: Software is built by teams, not individuals. Moreover, each individual needs to collaborate actively to produce quality software. This means that everyone on the team needs to take collective ownership and help one another, because the motive is not to be a hero but to build an end product of the utmost quality and predictability.

Thanks to Scrum, we do this every day and produce good software. But we also incubate the evil of rating team members, which is anti-Agile (see "Tracking Individual Performances in Scrum"). We know that it is insane to compare the work of two individuals, or of two teams, yet we do it. Individual performance assessments are a reality. At the same time, Scrum supports the doctrine of cross-functional, self-managed teams in which individuals help each other complete the tasks needed to deliver value. The ownership of completing user stories lies not with any one individual but with the complete, collective team.

This is at odds with the appraisal process, because that process is almost always centered on how an individual is performing. Inevitably, a team using Scrum has to quantify and produce metrics around an individual's performance. This raises a handful of questions:

  • Can we attribute the success of a sprint (and/or its user stories) to any one individual, when the whole team has completed it?
  • How can we quantify and take into account the fact that most developers sometimes help each other complete a certain task or a user story?
  • How do we encourage team members to help each other selflessly?
  • To summarize, can we measure individual velocity?

There are a few ways one can address this:

Use velocity to measure individual performance. Can we find the velocity of a developer that will tell us how much he or she has achieved in a year? As we all know, velocity is a measure of how much a team can achieve in a particular sprint. It can vary across teams. Finding it for every individual in the team is difficult, if not impossible, as user stories are completed by teams and not individuals. So velocity is a team thing; we should leave it at that.
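As a rough illustration (the sprint data below is hypothetical, not from the article), team velocity is just the sum of story points for the stories completed in a sprint; no per-person term falls out of it, because each story is only "done" as a whole:

    # A minimal sketch of why velocity is a team-level number (hypothetical data).
    completed_stories = [
        {"story": "checkout flow", "points": 8, "worked_on_by": ["asha", "ben", "chen"]},
        {"story": "search filter", "points": 5, "worked_on_by": ["ben", "chen"]},
        {"story": "audit logging", "points": 3, "worked_on_by": ["asha"]},
    ]

    # Team velocity is well defined: sum the points of the completed stories.
    team_velocity = sum(s["points"] for s in completed_stories)
    print(f"Team velocity this sprint: {team_velocity}")  # 16

    # An "individual velocity" is not well defined: any split of a story's
    # points among the people who worked on it is arbitrary.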

Produce a task matrix for every individual. Can we track tasks completed by an individual and find out what percentage of a user story is completed by him or her? If we knew that, it would be easy to know who has produced the most output within a team. Unfortunately (or fortunately!), individuals do help each other in tasks. So just because someone is completing more tasks doesn't mean he or she is "better" than the others at the job.
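A quick sketch (again with hypothetical names and data) makes the flaw concrete: a per-person tally records who closed each task, not who helped get it done:

    from collections import Counter

    # Hypothetical task log: who formally closed each completed task.
    closed_tasks = ["asha", "asha", "ben", "asha", "chen", "ben"]

    task_counts = Counter(closed_tasks)
    print(task_counts)  # Counter({'asha': 3, 'ben': 2, 'chen': 1})

    # The tally is blind to pairing, reviews, and unblocking, so a count
    # of 3 for asha does not mean asha outperformed chen.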

Give ratings to teams instead of individuals. What if we rate teams and appraise them instead of individuals? Now the problem boils down to how to compare two teams. Velocity and acceleration are, again, not our friends here. Comparing teams is only possible if we can measure equivalent user stories and find out how the teams are handling those, which is difficult given the divergent nature of projects in an organization.

Handle it with Agile. This is by far the most practical way of assessing an individual. Let's jot down an overview of how we'd handle the process in an Agile way:

  1. Determine goals for every individual. These goals need to center on the individual: for example, how can he or she achieve these goals and become better at software creation? When we set each goal, we need to identify a nonambiguous acceptance test for it, and a way to demonstrate whether or not it is met. Take, for example, a goal of automating acceptance tests for the product. If automation reaches 60 percent, we will say the goal is met; if it is between 60 and 80 percent, the person has exceeded expectations; if it is between 80 and 100 percent, the person has far exceeded expectations. The thing to notice here is that it looks like an individual goal at first glance, but it does require collaboration, so either everyone will meet it or no one will. Therefore, it's a good contender as a goal for every developer on the team.
  2. Create an appraisal backlog for the individual, with goals and acceptance criteria for each goal. Capture the information about how and why the tests for these goals are failing at this point in time.
  3. The product owner for this backlog should be the mentor for the individual. He or she will prioritize the goals at this stage.
  4. Now the mentor needs to assign story points to various goals. For example, the acceptance test goal discussed above can be assigned a value of 8 on a scale of 11 points, as it requires quite a bit of effort.
  5. The mentor is free to add more goals to an individual's goal backlog after due discussion with him or her.
  6. The mentor and individual need to identify a sprint length, after which they will review the appraisal backlog and have a demo. Most organizations already do this exercise and have one-on-one meetings once every month or two. The different thing we need to do is to qualify each goal against its nonambiguous acceptance criteria and produce a rating for it. So, for example, after a month the acceptance test automation is reviewed and it's discovered that only 50 percent automation has been achieved so far. The mentor can then talk to the individual(s) to determine whether the goal is too difficult, whether they are facing any blockers in fulfilling it, or whether an individual is simply unable to meet it. The mentor can then play the role of enabler and help the team member(s) achieve the goal, or revise the goal, decreasing its weight and relaxing its acceptance criteria to make it more achievable. This makes the goal easier but less rewarding.
  7. Next sprint's backlog will be derived from the outcome of this sprint. If an individual has failed to meet a 5-story-point goal, he or she won't be given an 8-point goal in the next sprint but rather one of 5 or fewer points. This will ultimately help his or her overall growth.
  8. In the demo, make the acceptance tests pass and qualify the goal as done. Additionally, you could have optional criteria for exceeding or far exceeding the goal (note that we determined a percentage range of automation when we discussed the goal in the first step).
  9. In the final performance review, one could look at the matrix of goals and qualified acceptance criteria. This matrix should capture the information from every iteration/month.
  10. As we saw with the earlier approaches, it is extremely difficult to compare one project to another. In the same way, it is not practical to compare two individuals on the basis of the features they helped add to a product. This analysis should be done on the basis of the goals set, and these goals should not be based on sprint work alone.
  11. Count the number of goals met/exceeded/far exceeded. This matrix can then be used to calculate performance ratings in a transparent way (see the sketch after this list).
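To make the mechanics concrete, here is a minimal sketch in Python of the appraisal backlog described above. All names, story-point weights, and completion figures are hypothetical assumptions for illustration; the rating bands are the ones from the acceptance-test example in step 1.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Goal:
        # One item in an individual's appraisal backlog (illustrative model).
        name: str
        story_points: int      # weight assigned by the mentor (step 4)
        completion_pct: float  # measured against the acceptance test (step 6)

        def rating(self) -> str:
            # Bands from the acceptance-test example in step 1.
            if self.completion_pct >= 80:
                return "far exceeded"
            if self.completion_pct > 60:
                return "exceeded"
            if self.completion_pct >= 60:
                return "met"
            return "not met"

    # Hypothetical appraisal backlog for one developer after a review cycle.
    backlog = [
        Goal("automate acceptance tests", story_points=8, completion_pct=75),
        Goal("reduce build time", story_points=5, completion_pct=60),
        Goal("mentor a junior developer", story_points=3, completion_pct=40),
    ]

    # Step 11: count goals met/exceeded/far exceeded for a transparent rating.
    tally = Counter(goal.rating() for goal in backlog)
    print(tally)  # Counter({'exceeded': 1, 'met': 1, 'not met': 1})

The point of the model is only that each rating traces back to a stated acceptance criterion, so the final tally is auditable rather than subjective.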

Scrum is hard, and we have just made the appraisal process more difficult and demanding by using it. If used and adapted as necessary, however, this process should take care of various issues we encounter in the appraisal process. Just as in other Scrum endeavors, the mentor/product owner has a huge role to play, as we are making him or her the owner of the individual's performance. Going Agile, though, should keep these goals aligned with the business and its changes, and hence should give individuals realistic chances to grow and to measure their annual performance.

To summarize, we are acknowledging that the business changes and that performance goals must change accordingly. Making the goal evaluation cycle iterative and adaptive makes the outcome more predictable and, indeed, makes the process possible.