Do We Really Need a Defect Tracking System in Agile?

3 January 2014

Ashok Singh
Simple Agile Corp


Defects are synonymous with software development, even though they sit uneasily with quality assurance. Software engineers are probably the only professionals in the world who love to keep a list of their failures. Have you ever seen physicians, or any other professionals for that matter, flaunting a list of their glorious failures? We use Facebook, LinkedIn, and Twitter every day, but do we ever care about a defect so long as it doesn't impede our ability to function normally? Probably we don't, unless you are a heck of a nerd out to challenge those titans. Most of us log in to the platform, tweet, like, post, share, and log out after having some fun.

Defect management traces its roots to traditional methods. Traditionally, testers create spec defects, design defects, and code defects because they work off a set of deliverables handed off in each phase. That happens primarily because the test phase comes toward the end, after the development phase, where the rubber meets the road and stakeholders get their first glimpse of working software.

But in an Agile project there are no hand-offs and there is no set documentation. A self-organizing team works closely with the product owner and other stakeholders to deliver high-quality software.

So how should we define a defect in an Agile project? Is it a tester's (or developer's) perception of the problem, or is it the product owner's mandate for accepting a story?

A defect, in my honest opinion, is anything that defies the product owner's expectations or stops the product owner from committing the story to the real world; in other words, a violation of a critical component of "Done." When the product owner accepts the story, we conclude that it has met the Definition of Done, even if there are known problems with the working software. A tester's perception of the problem fizzles out in the face of customer need.

On the flip side, if the product owner is not satisfied because of the problems reported in the software, then the story does not meet the Definition of Done, and it requires immediate action before it can be shipped.

At a later date, if customers report a defect that either escaped the product owner's attention or was known when the product owner accepted the story as "Done," the report goes into the backlog and becomes a story to be prioritized just like all other stories. It then becomes the product owner's responsibility to get it prioritized and fixed in an upcoming iteration.

A strong self-organizing team understands the intent of the product and works closely with the product owner, so there should not be many escaped defects popping up in the backlog. But if you find the backlog littered with customer-reported defects, then it's time to bring the topic back to the retrospective table and look for dysfunctions within the team. The beauty of agility is that it never fails . . . it simply exposes the dysfunctions of the team.

I remember being assigned a critical bug-bash project at a major software company. I was given $1.5 million to create three Scrum teams and fix 3,300-plus defects that had survived many releases over more than five years. I started with the oldest defects and applied some logical reasoning to filter out the critical ones. I was not surprised to find that customers cared about only a handful of them (around 35, with workarounds), and most of those had been reported by customers themselves. We fixed the critical customer-impacting defects and then closed the rest in bulk.
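To make that triage concrete, here is a minimal sketch of the kind of filter it boiled down to. The record fields and the rule are purely illustrative (we did not literally run this script); the point is how little logic is needed to separate customer-impacting defects from the rest:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Defect:
        defect_id: int
        reported_on: date
        severity: str            # e.g., "critical", "major", "minor"
        customer_reported: bool

    def triage(defects: list[Defect]) -> tuple[list[Defect], list[Defect]]:
        """Split a stale defect pile into 'fix now' and 'close in bulk'."""
        fix, close = [], []
        for d in defects:
            # Only defects that demonstrably hurt customers earn a fix;
            # everything else is closed without fixing.
            if d.customer_reported and d.severity == "critical":
                fix.append(d)
            else:
                close.append(d)
        return fix, close

A filter this simple is what reduced 3,300-plus logged defects to the roughly 35 that customers actually cared about.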

So why did we have that mountain of moles (or defects)? Because the traditional model encourages testers to log defects liberally as a face-saving exercise, and that obscures reality. The saga of 3,300 defects never quantified the true customer experience, and even two years after we closed them outright (without fixing), those defects never resurfaced in any backlog anywhere.

Similarly, how critical is it to have a defect tracking system to store myriad test cases? Who would consume that data, or refer back to hundreds of test cases, after the story meets the product owner's expectations or the Definition of Done? I firmly believe that the back of a story's sticky note is the best place to write test cases. If we find ourselves needing more space than that, it tells me we are still fascinated by documentation and hesitant to break from the past on our way to adopting agility.

In essence, true agility promotes creating working software that meets customer expectations rather than creating a product that is a work of art.


Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.




Comments

Phillip Stiby, CSM, 1/6/2014 10:51:17 AM
Interesting. You do realise that most Scrum tools are in fact bug tracking tools in disguise: Jira and Target Process, to name two.

I'd say that bug tracking tools exist as a means to manage defects, very much like you can use a backlog.

However, Scrum is effective for developing and delivering value to a business. But when an organisation has a Service Level Agreement with a client to resolve specific defects in, say, 48 hours, that doesn't fit into a sprint cycle, and from a Scrum perspective it is bad to derail development and velocity by fixing critical bugs.

So in large enough organisations it's best to have a dedicated support team and platform, and to protect the development team from distraction.
