This is the "first-aid kit" of a friend of mine, an advanced consultant at a fast-growing IT consultancy. These are the tips and tricks he shared with me for helping teams that struggle with "time addiction."
Respect the "definition of ready"
Nothing uses up time like a task that is not ready yet. A task that is not ready is like a fish that has yet to be found in the lake. We can drink as much beer as we want sitting by the lake, while people keep passing by and asking how many we've caught. But really, time gets used up without anything getting done.
The definition of "ready" for us means that the task is timeboxed and, preferably, analyzed. Nothing goes into the sprint backlog without an estimate approved by the development team. Even when a critical issue is raised, it has to be timeboxed before it is added to, or swapped into, the sprint backlog.
Do a time attack on analysis
It is genuinely possible to get a high-priority defect that the developers cannot replicate. This has happened to us many times.
In this case we ask the customer to approve a timebox, for example two or four hours, for analyzing the defect. At the end of this timebox, the developer documents his findings in TFS or Jira and, if he still has not replicated the defect, asks the customer for approval to spend an additional X hours on further analysis. In this way the customer has visibility into the time spent on analysis. No customer has ever said "no" to this. Expectations are clear, and surprises on both sides are reduced.
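The bookkeeping behind this protocol can be sketched as a small helper. This is a hypothetical illustration, not a TFS or Jira feature; the class and field names are my own assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisTimebox:
    """Tracks customer-approved analysis hours for one defect."""
    defect_id: str
    approved_hours: float              # hours the customer has signed off on
    logged_hours: float = 0.0
    findings: list = field(default_factory=list)

    def log(self, hours: float, note: str) -> None:
        # Record time spent plus the finding written up in the tracker.
        self.logged_hours += hours
        self.findings.append(note)

    def needs_extension(self) -> bool:
        # Once the approved box is used up, stop and go back to the
        # customer before spending any more time.
        return self.logged_hours >= self.approved_hours

    def extend(self, extra_hours: float) -> None:
        # Call only after the customer approves more analysis time.
        self.approved_hours += extra_hours

# The customer approves a two-hour timebox for defect analysis.
box = AnalysisTimebox("DEF-101", approved_hours=2.0)
box.log(2.0, "Could not replicate on staging; suspect prod-only config.")
assert box.needs_extension()    # time to ask the customer for more hours
box.extend(2.0)                 # customer approves two more hours
assert not box.needs_extension()
```

The point of the sketch is the hard stop: no analysis time is spent beyond what the customer has explicitly approved.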
Watch for time bandits, aka change requests
Sometimes a customer comes up with a big project requirement in a single line. For example, "Convert our website into a mobile application" -- and they want a "quick" estimate! This is most common when the customer has more than one partner and wants to compare estimates before commissioning the project. To analyze the project, timebox the stories, and give the customer an estimate in hours, we need time.
Yes, we have to run more backlog refinement sessions. So initially we tell the customer that we will use a maximum of something like 30 hours for backlog refinement and will deliver the estimate in two days. Even if he doesn't like our estimate and chooses to drop the requirement, the customer should agree to pay for these initial 30 hours. And, again, no customer has ever said "no." It's better for them to risk 30 hours than 300. (Sometimes we have offered the customer those 30 hours for free, in the hope that he will approve starting the project.)
Beware of test iterations, another time burglar
Test iterations should be kept to a minimum, because they add heavily to the transaction cost of the sprints. Usually a defect is over-discussed by the customer beyond the natural limits of its priority, and we all start wondering how to avoid this situation in the future. Well, we never really have been able to avoid it; we just ended up creating more test iterations and making our process heavyweight.
We learned this the hard way. We learned to "do things right the first time" instead of "doing things many times." We make sure that part of our retrospective is to weed out those additional processes, promised to stakeholders during the review, that no longer add value. There should be zero rework by the development team and the product owner when it comes to testing. The right to decide on the delivery process stays with the team, not the stakeholders.
Beware of remote jockey machines
Sometimes, to win a new customer, the sales team promises that all development will be done in "high security." What they usually mean is a remote server in a highly secure hosting environment. This actually inflates the estimates, because the people doing the development usually work at a development center somewhere in Asia or Eastern Europe. They work on a remote machine at the customer's headquarters, maintained by a reputable blue-chip hosting partner. What we forget is that only two sessions are allowed on this machine at a time, while the rest of the developers sit waiting around somewhere.
Integrating and hosting the acceptance tests in the development center environment can save a lot of time. We did not compromise security. But wherever possible, we made sure that the necessary parties had enough access that delivering securely remained sensible and viable. Ten people cannot live in the same apartment just because it's highly secured.
Stop the bandwidth fasting
Usually the bandwidth of the development center will be low, especially when it is located in a remote place, and the process to expand bandwidth is tedious. But every Mbps of missing bandwidth multiplies the development cost. This is a main reason for high estimates from remote teams. People tend to behave in the way we measure them: if development teams are paid on a time-and-materials basis, then the more hours they invoice the customer, the higher the appraisal and rating they will be given. Measured this way, it's hard to get them to fix their own network.
We made network quality one of the IT governance goals with the remote partners. After six to eight months of this, the estimate overheads due to network issues disappeared as if by magic.
Split only when it makes sense
A test case can be split into ten test cases, adding ten times more time in coordination and management meetings. This looks like a good way to "administer with ease." But it is actually an "administration virus."
Never split a task without a reason. Never break a test case without a reason. Each time we break one, we might be mercilessly breaking a "one-piece flow" somewhere in the project, and that inflates the project's estimates at the end of the day. We also weeded out some processes, like creating defects on an issue that is not yet "done." Instead, we map every bug and every change request onto a "released or fixed" version. This way we will know, even after ten years, when each bug or change request was raised.
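The version-mapping idea can be shown with a minimal sketch. The issue IDs, version numbers, and field names below are made up for illustration; real trackers like Jira model this with their own fields:

```python
from datetime import date

# Hypothetical tracker records: every bug or change request carries the
# version in which it was raised and, later, the version that fixed it.
issues = {
    "BUG-4711": {
        "raised_in": "2.3.0",
        "raised_on": date(2014, 5, 2),
        "fixed_in": None,            # not yet released in any version
    },
}

def mark_fixed(issue_id: str, version: str) -> None:
    """Record the released version that contains the fix."""
    issues[issue_id]["fixed_in"] = version

mark_fixed("BUG-4711", "2.4.0")

# Even years later we can answer: when was this bug raised, and in
# which released version was it fixed?
record = issues["BUG-4711"]
assert record["raised_in"] == "2.3.0"
assert record["fixed_in"] == "2.4.0"
```

The design choice is that the mapping lives on the issue itself, so the history survives without any extra reporting process.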
Misunderstanding of pair programming
Pair programming means many different things to different teams. Interesting definitions I have heard so far include: "From 8 a.m. to 4 p.m. I will be programming on this PC, from 4 p.m. to 11 p.m. my colleague will program on the same PC, so we are pair programming." "After lunch, Mr. X usually makes some errors in his code, so I will sit by him and monitor him." "We make sure to wait with all our questions for our colleagues, so that after 3 p.m. we can pair program and clarify the errors. We usually wear headphones until 3 p.m., indicating that we don't want to be disturbed."
Awareness of, and workshops about, continuous delivery made a difference. The team was then able to write the test and the code for the same piece of functionality simultaneously, by two different team members at the same time. Setting clear expectations about pair programming was a success. The team's awareness of the value of completing an entire story, instead of cherry-picking tasks from various stories, took a lot of pressure off the product owner.
The misery of a single point of contact
One day a product owner called a developer to find out what was going on with a task. It took hardly three minutes. But the developer raised this as an impediment in the Daily Scrum, because he lost focus due to the "disturbance." The development team came up with the idea that, instead of the product owner "wasting time" talking to every developer, we would create a process: one person would talk to the product owner daily and communicate on behalf of all of us.
My question here is, how long does this "one person" take to communicate with the team? Almost the same time the product owner would have taken to communicate with the team directly. And the whole "cycle of communication" now took a full day, so the product owner was not able to make timely decisions. So really, what we "improved" here is that we added a new job role and a new process with plenty of room for mistakes and miscommunication, but no real value. Sprint retrospectives should carefully eliminate such unnecessary processes. The principle of inspect, adapt, and transform should be applied here.
Make the middle manager a "resting actor"
There is no faster way to make a team inefficient than to have several middle-level managers. Sometimes the job title makes no sense, e.g., "review coordinator," "lead system architect," "application process manager," "test director."
The team should have only three roles: ScrumMaster, product owner, development team. Simplicity and direct communication always save a lot of time. I am not against having specialists on teams. But middle-level management within a Scrum team creates unnecessary bottlenecks in continuous delivery.
Scrum teams do not report, they produce
Every team was producing two weekly status reports, one for development and another for service desk-related tasks. These reports covered the team's achievements, tasks accomplished, issues faced, risks expected to materialize, hours spent so far, and approvals of new estimates. Some of these items couldn't wait a week to be communicated to the customer.
The team's achievements, tasks accomplished, and hours spent so far are briefly documented and archived by the ScrumMaster in the minutes of the sprint review meeting, to help the product owner. Some service desk systems today show an Agile dashboard for each sprint that gives this exact picture; there is really no need to document it separately. The product owner can use it to report to IT governance, the steering committee, and so on. A voice recorder is also allowed in some companies. The issues and risks faced by the team should be raised to the product owner and ScrumMaster immediately; they should not wait for a report that hardly gets read. Any expected action should be added to the product backlog and prioritized just like other tasks. Lastly, the approval of new estimates should happen as part of the backlog refinement sessions.