Should We Standardize Scrum?

A Thought on Scrum

6 August 2013

Srinath Chandrasekharan
HCL Technologies


In the last few years, Agile, and especially Scrum, has moved from the periphery of software development to the mainstream. Many organizations have already started to adopt Agile for all of their projects. Many of those who haven’t have plans to do so in the future.

In this scenario, it is natural to look at the challenges of scaling, and many books have been written on this topic. In my experience, scaling Scrum really means two types of scaling:
  1. In the first scenario, the number of teams adopting Scrum or Agile in an organization increases. Each team may be relatively small (10 to 20 people), and each is, by and large, independent in its way of working and does not usually touch other applications or products. An example is an IT services vendor with many clients, each of whom wants to do Scrum/Agile in one form or another.
  2. In the second scenario, large teams (50+) adopt Agile. These teams work on a single application or product and are usually broken down by module or skill (database, designers, architects, etc.), and they typically have a fairly large amount of integration work. Such projects can be found in a product organization or in an organization that wants to enable its operations using IT: for example, a bank, a health care clinic, or a marketing company. In many cases, the IT vendors mentioned in scenario 1 help these organizations.
Most of what has been written about scaling addresses scenario 2, because those projects are far more complex and usually operate at an enterprise level.

Still another scenario, in which many individual teams adopt Agile, poses a different set of challenges, for which I feel there is not much content available. Hence I would like to share my thoughts on it, based on my own experience.

In my organization, we have clients:
  • With different levels of maturity in Agile practices.
  • With different experiences of what works and what does not.
Each of them evolves a set of practices or defines a process suited to its way of working. I have seen that many of our clients do not have a common Scrum process implemented across their landscape. At the same time, as an IT vendor with experience across domains, technologies, cultures, and time zones, we are expected to have a well-defined process built from the best practices across that spectrum, one that provides answers to many of the situations our clients face.

As examples, I have seen clients working on the same floor but having:
  • Different tools (despite same technology being used).
  • Different team compositions (from completely onsite to a mix of offshore and onsite).
  • Different sprint lengths.
While there is nothing wrong with this way of working, there is usually an expectation from senior management (both client and vendor) to compare the health of different teams in terms of product quality, process effectiveness, profitability, etc. As one person asked me recently, "How do I know which team is doing well compared to others?" Those familiar with Waterfall can come up with metrics to compare projects and so get an idea of which team is doing well and which is not. However, there is no single common set of metrics that can be used for Agile projects.

Due to this need for comparison, we came up with some metrics related to defects, code coverage, stories delivered against stories planned, client satisfaction with the software delivered, stories accepted versus stories rejected in sprints, etc. These data points gave us some sense of progress, while still not completely answering the question of comparison across projects.
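As an illustration only (not the author's actual tooling), the data points above can be rolled up into simple per-sprint ratios so that numbers from different teams are at least on the same scale. The function name, team names, and figures below are all hypothetical:

```python
# Hypothetical sketch: summarize one sprint's data points as comparable ratios.
# The metric names mirror those mentioned in the article; the numbers are invented.

def sprint_health(planned, delivered, accepted, defects, coverage):
    """Return simple ratio-based health indicators for a single sprint."""
    return {
        # Stories delivered against stories planned.
        "delivery_ratio": delivered / planned if planned else 0.0,
        # Stories accepted versus stories delivered in the sprint.
        "acceptance_ratio": accepted / delivered if delivered else 0.0,
        # Defects normalized by delivered stories, so team sizes matter less.
        "defects_per_story": defects / delivered if delivered else 0.0,
        # Code coverage is already a ratio; passed through as-is.
        "code_coverage": coverage,
    }

# Two hypothetical teams with different sprint lengths and sizes:
team_a = sprint_health(planned=10, delivered=8, accepted=7, defects=4, coverage=0.72)
team_b = sprint_health(planned=12, delivered=12, accepted=9, defects=6, coverage=0.65)

for name, stats in (("Team A", team_a), ("Team B", team_b)):
    print(name, {k: round(v, 2) for k, v in stats.items()})
```

Even a sketch like this shows why the comparison stays incomplete: the ratios say nothing about story size, sprint length, or when defects were found, which is exactly where the questions below arise.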

However, we faced many challenges in collecting these data points. Some of the questions we answered only after a lot of struggle and discussion within the teams are:
  • When do you measure defects? Within the sprint or after it is delivered? What if a defect is reported a few months after the release? Can we tag it back to a sprint and see its effectiveness?
  • Is code coverage a good indicator of quality?
  • If the client tests the software at the release level, it is sometimes difficult to differentiate acceptance from rejection at the sprint level. So should we track metrics at the release level instead?
  • How do we measure the success of sprints when the feedback is received late (after a few sprints)?
Each team has its own solution that is correct in its context. In addition to this, we are also expected, as an organization, to give ready-made answers to questions like:
  • What's the ideal combination for a distributed team?
  • Which roles should be present at onsite and which ones at offshore?
  • What tools should be used?
  • What will be the team's velocity (especially when it comes to RFPs)?
In such a situation, the question that comes up is: Should we standardize the Scrum practices and make sure that every team follows a written process so the comparisons can be easy? This seems against the very idea of being Agile, where we allow teams to take decisions and react in a way they think is best for them, both operationally and in terms of process.

The approach we are adopting is to strike a balance between the two, which is quite hard to achieve. The biggest pitfall to avoid is creating practices that are set in stone. Even if we do write down documented processes, there should be a focused, periodic (maybe monthly) review of them, with modifications based on feedback from the teams implementing them.


Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.



Comments

Bhoodev Singh, CSP,CSM,CSPO, 8/6/2013 2:03:11 AM
It may not work for client-based projects because of tons of reasons like compliance, cost, their own processes and tools, etc. Apparently, it's a common practice for internal projects, so I would be surprised if it does not work for your internal projects.
