What’s in a Name?

Could the Language We Use be Holding Us Back?

6 March 2014

Karim Harbott
McKinsey & Company


Over the last few years, Agile has acquired a lot of unwanted baggage. Some of that baggage is a result of botched transformation attempts. Some is from the preexisting organizational dysfunctions that a transition to Agile merely uncovers. Some is from a lack of understanding, and some is just down to people's unwillingness to change. However it has come about, the term "Agile" has -- in some circles -- become a loaded one.

Despite my years of involvement with Agile, it is not my passion. Agile is a set of values, tools, and practices that allows me to pursue my real passion: delivering high-quality software quickly and efficiently to realize business value. That is what drives me. That is what excites me. If there were a better way to achieve that, I would switch to it tomorrow. In short, for me, Agile is a means, not an end.

In my experience working on multiple Agile transformations, I have come to understand the following: Senior executives do not care about Agile. They do not care about tools, techniques, or processes. What they care about are outcomes and capabilities. When asked what is wrong with their software projects, they do not tell you that they are not Agile enough. They tell you one or more -- and sometimes all -- of the following: There is a lack of speed to market. Projects do not deliver what the users need. Software quality is low. There is little or no visibility into progress and risks.

A consultant who aims to sell Agile is likely to fail. A consultant who has a plan to eliminate these business issues, and who communicates in a way that the business can understand, will almost certainly get the required buy-in from leadership. Every transformation must have a "why." Why are we making these often-painful changes? If the answer is "to become Agile," then motivating people to change will prove an uphill battle. If the answer is to get to a place where we can deliver faster, where we can deliver what the users need, where high quality is sacrosanct, and where there is absolute visibility into progress and risks, I do not know an executive who would not jump on board.

Let's take a look at how we can address some of the issues.

Lack of speed to market

This is by far the most common complaint. Good ideas take far too long to become value-creating features. In today's lightning-fast digital world, companies sometimes release features multiple times per day. It is no longer good enough to have a great idea and then spend 6 to 12 months in a darkened room building it out. Your competitors will either beat you to market or the idea will lose its relevance as the world moves on. A short lead time between the "lightbulb moment" and real users interacting with the feature is a huge competitive advantage.

There are a few ways to increase the speed to market. The easiest win is to identify a minimum viable product (MVP) and to release just that. Various studies indicate that vast swathes of features go unused. The most famous of these, from the Standish Group, suggests that 45 percent of features are never used and 19 percent are rarely used. That is, 64 percent of the effort, cost, and time is spent building features that add little value. Whatever the actual numbers may be, it is clear that, for most projects, the majority of the business value lies in the top 30 to 40 percent of the features. Given this knowledge, if you do nothing else but prioritize wisely, the time to market can be reduced by 60 to 70 percent. That is a breathtakingly simple way to reduce a 12-month project to 4 to 5 months, with minimal reduction in benefits and zero disruption.
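To make that arithmetic concrete, here is a minimal sketch in Python. It assumes the two Standish figures above, a hypothetical 12-month project, and effort roughly proportional to the number of features; the numbers are illustrative, not a forecast.

    # Rough MVP arithmetic, using the Standish figures cited above.
    # All numbers are illustrative assumptions.
    never_used = 45    # percent of features never used
    rarely_used = 19   # percent of features rarely used

    low_value_scope = never_used + rarely_used   # 64 percent of the feature list
    mvp_scope = 100 - low_value_scope            # 36 percent remains in the MVP

    original_duration_months = 12                # hypothetical project length
    mvp_duration = original_duration_months * mvp_scope / 100

    print(f"Scope cut: {low_value_scope}%")
    print(f"MVP duration: roughly {mvp_duration:.1f} months instead of {original_duration_months}")

Run as written, this prints a scope cut of 64 percent and an MVP of roughly 4.3 months, which is where the "4 to 5 months" figure above comes from.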

So why is this not standard practice? One reason is that with the Waterfall model, requirements become a shopping list of everything that may possibly be of use at some point, now or in the future. These exhaustive requirements are born of a fear that if something is left out at the start, there will never be another chance to ask for it. One technique to address this issue is an incremental delivery model, in which chunks of fully functioning, fully tested features are released in small batches. It is much like building a house room by room instead of laying all the foundations, then building all the walls, and so on; one could move in after just the first room is complete. In this model, the most valuable features can be released quickly and begin to realize the business benefit much earlier. We can then work on the next batch of features. Features can be added and reprioritized at any time. This gives much greater freedom to business owners and reduces the need to specify absolutely everything up front. The MVP becomes the sole focus of the first release.

Not delivering what the user needs

Software exists for those who use it. If we -- as software development professionals -- are not delivering what the users need, then we are failing. Even if we deliver on time. Even if we deliver on budget. Even if we deliver what was asked for. If we do not deliver what the user needs, we have failed them. Now, I understand that this does not sound very fair. We can only deliver what the users ask for, right? Not exactly. It is not easy to specify a software solution perfectly the first time. It is even harder for people who do not build software for a living. It is for us to guide them toward a solution that fulfills their needs.

The Waterfall method makes this very difficult. Requirements are gathered at the start of the project and passed from function to function in big documents that few people read. The solution is delivered 12 to 18 months later, and we hope that we got it right and that the landscape has not changed. Some people run user acceptance testing (UAT) at the end of a project, but, in reality, that is far too late. There is rarely enough time to make any changes, and it becomes a mere formality, a box-ticking exercise.

A word that goes hand in hand with incremental is iterative. Features are built and delivered every 5 to 10 days, and we show the customers what we have built. They get the opportunity to use the features, provide feedback, think of more features they would like, and shape the software to their vision in an ongoing conversation. It is like a laser-guided missile, always self-correcting to remain on target. The target here is a set of features that delights the users and fulfills their needs completely. Working in this iterative and incremental way, involving your users and incorporating their feedback, you are far more likely to deliver what they actually want and need, as opposed to what they initially asked for.

Low quality

Defects, bugs, glitches. Software is full of them. In the best case, a bug is merely annoying to the user. In the worst case, it costs millions of pounds or even lives. Bugs also consume engineering time that could go into building new features; instead, it goes into fixing defects in old ones. This is one of the things that slow projects down toward the end. It is called failure demand, and it arises from a failure to build the feature properly the first time. So-called time-saving measures end up costing far more time than they save. Quality cannot be added at the end of the process by an external quality team undertaking a testing phase. It must be an integral part of every activity. It must be baked into the product from the start.

To do this, we need a relentless focus on technical excellence. Code should be written with tests in mind. That may mean a tester and a developer pairing; it may mean writing the tests first. Automated unit tests can help to make sure that each component of the code does what it should. Automated acceptance tests can do the same at a functional level. In effect, the code tests itself. These techniques test the code at different levels to provide extra confidence. Adopting the mind-set that a feature is not complete until all of its tests are in place and passing stops bugs from building up. It helps us find issues early so we do not store up bugs, and therefore risk, until the end of the project -- or, even worse, miss them altogether. We can also be confident that new features have not broken old ones, or at least be alerted instantly when they do, so we can fix them quickly.
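As a minimal sketch of that test-first mind-set, here is a self-contained Python example using the standard unittest module. The discount feature and its rules are hypothetical; the point is that the tests define "complete" before the code does.

    # A hypothetical discount feature, written test-first with Python's
    # built-in unittest module. The feature is not "done" until all of
    # these tests are in place and passing.
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Return the price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTests(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percentage_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Run under a continuous integration server, a suite like this is what alerts the team instantly when a new feature breaks an old one.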

Poor visibility of progress and risks

This is one of the most overlooked issues in software today. Management and business owners need to know the actual state of a project. I have seen countless Waterfall projects reporting as "green" right through requirements gathering, analysis, design, and development. Then, after 10 months of being "green," testing starts and the team discovers some serious issues. All the risk has been deferred and is now biting back. This usually requires significant rework, and deadlines typically slip by a few months.

While the slipping of the delivery date is an inconvenience, it is the lateness of the report that causes the real damage: by then it is too late for anyone to respond or take corrective action. It is absurd to report that you are 50 percent through a project and on track before a line of code has been written. This breeds distrust. There can be little faith that the next time a project has a status of "green," everything is as rosy as the project manager's reports suggest. This is an extremely unhealthy situation.

Instead of measuring progress by which documents have been created, we can use tested, working software as the main measure of progress. If we have built half of the features, tested them at a unit and functional level, and verified our architecture by running load and capacity tests, we can be fairly confident that we are 50 percent through the project. And because we are dealing with issues as we go, we will be able to report schedule over- or underruns after a few weeks rather than a few months. We uncover the nasty surprises early. This transparency allows for much more effective decision making.
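Here is a minimal sketch of what that measure looks like in practice, with hypothetical feature names and statuses: progress counts only features that are built and passing their tests, not documents produced.

    # Progress measured by tested, working software.
    # Feature names and statuses are hypothetical.
    features = [
        {"name": "login",         "built": True,  "tests_passing": True},
        {"name": "search",        "built": True,  "tests_passing": True},
        {"name": "checkout",      "built": True,  "tests_passing": False},
        {"name": "order history", "built": False, "tests_passing": False},
    ]

    done = sum(1 for f in features if f["built"] and f["tests_passing"])
    percent = 100 * done // len(features)
    print(f"Progress: {done} of {len(features)} features done ({percent}%)")

By this measure, the hypothetical project above is 50 percent done, however many documents have been signed off.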

Conclusion

There is a vast array of tools and techniques to mitigate the main causes of software project failure, and, as you will no doubt have noticed, many of them are instantly recognizable as Agile. I have covered only a few here. There is a great deal involved in a transformation, but that detail is best saved for work with the technical teams. When dealing with leadership, stick to solutions to the common problems they encounter with software projects. Focus on outcomes, and talk in a language that the business understands.

The lexicon of Agile and Scrum can, at times, appear a little esoteric. People do not necessarily understand reduced batch sizes, queuing theory, and continuous integration. They do, however, understand small increments of working software, removing bottlenecks, and high quality. It is vital that leadership buys into what we are doing; they must understand what we are doing and why. We must tailor the message to the audience. After all, which is a CIO more likely to adopt: an implementation of a Scrum/XP hybrid, or the pragmatic application of the practices outlined above to gain a competitive advantage? As with so many things, language and presentation matter.


Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.




Comments

Michael Zadda, CSM, 3/6/2014 11:14:54 AM
Well written! The solutions in the low-quality section echo the Extreme Programming methodology.
Gurpreet Singh, CSP,CSM, 3/6/2014 1:16:47 PM
Awesome piece of writing! "No methodology can ensure the success of any project; it is the team who does the actual ground work". Agile is just a means and not the end ... truly agreed! Bravo! A beer from my side!
