Our team encountered two serious problems and learned from the mistakes behind them. We were biased toward the Waterfall model -- the only model we knew how to use to manage and run a software project.
Running a Waterfall model inside a sprint
When the sprint starts rolling, even on the first day, a developer picks up a user story and starts coding it. At the same time, the tester creates test scripts based on the test plan detailed during sprint planning.
On this Day One, the daily stand-up is normally skipped or takes less than 10 minutes, as the working group picks up the activities it agreed on during sprint planning. For a 30-day sprint, that leaves 29 daily stand-ups. In my experience, about 20 percent of them are highly effective, roughly 70 percent are spent merely tracking and describing progress status, and the remaining 10 percent are outright ineffective.
The effective 20 percent splits in two. The first burst comes at the start of the sprint, when the coder begins the first piece of work: he has made up his mind to stick with the promised schedule, and he holds to it 90 percent of the time. The second gear-up happens two or three days before the sprint ends, when fast-tracking -- working extra hard to deliver -- is on everyone's mind. Teams that have practiced Scrum for a while can recognize this pattern, as it is common in self-organized teams.
The 70 percent is the real threat, where we slip into a Waterfall model without meaning to -- through unexpected delays, sometimes traceable to a single resource. Once a delay appears, or the team's velocity swings back and forth, the shift affects the next step.
Say the developer finishes Task A two days late and therefore starts Task B two days late. The tester had no work for those two days. Over time, we see developers building function after function and testers testing feature after feature inside the sprint. We are running Waterfall without even recognizing it: each stage depends on the last, each team member waits for work, and each one stretches or delays what was promised.
Running work in parallel is genuinely complicated, so run a few trial sprints to find the team's velocity. Over time, this gives you the truth -- the facts about what can really be delivered as a team, not as individuals. The PDCA (plan-do-check-act) cycle exposes the real threat.
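Deriving velocity from trial sprints is simple arithmetic; the sprint figures below are invented for illustration, but the shape of the calculation is the point -- it is the team's completed work, averaged over sprints, not any individual's best day:

```python
# Hypothetical trial-sprint results (numbers assumed, not from the text):
# story points the whole team actually finished in each trial sprint.
completed_points = [21, 18, 24]

# The team's velocity is the average of what was really delivered.
velocity = sum(completed_points) / len(completed_points)
print(velocity)  # → 21.0
```

A few sprints' worth of real data like this is what lets planning rest on facts rather than on promises.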
A highly self-organized team delivers 100 percent of the planned results. The threat is not that the team produces less than 100 percent; the threat is when it produces more. The ScrumMaster adjusts the team's velocity, and it grows over a decent run. Most of the time, in my practice, I have seen teams take the best velocity for the sprint: during the 70 percent of Daily Scrums that are mere progress meetings, they pull in more user stories. Those extra stories may then be left without a single bit of work done, because test bugs had to be reworked or completed in full -- yet they still pull the velocity of the person, or the team, to the positive side. If, in the next sprint, the ScrumMaster plans around each resource's best case, working out the real value becomes even more cumbersome. It is fine as long as the velocity and the deliverable can be managed, but they have to be managed well every single sprint. The ScrumMaster learns to validate the real velocity only by running a few sprints and gaining experience with the team.
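The over-commitment trap above can be shown with a few assumed numbers: planning the next sprint on the team's best observed velocity, rather than a typical one, builds in work that is likely to be left untouched.

```python
# Assumed past-sprint results (illustrative only).
observed = [18, 24, 20]  # completed story points per sprint

best_bet = max(observed)                  # the tempting "best bet": 24
typical = sum(observed) // len(observed)  # a steadier planning basis: 20

# Points committed beyond the typical pace -- the stories most likely
# to sit untouched at the end of the sprint.
overcommit = best_bet - typical
print(best_bet, typical, overcommit)  # → 24 20 4
```

Planning on `typical` rather than `best_bet` is one concrete way to keep the inflated velocity from compounding sprint after sprint.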
Estimating features is an ongoing problem, since no two features are alike. When we spend time building a feature and labeling it "Done," we also take its measure to see whether it can be automated. Automating a feature normally takes about 5 percent of the estimated user-story size. This can be a killer: when we automate a feature and try to use it, the same 5 percent goes into integration, even if the feature is ready to plug and play. The only saving is for the testers, who no longer have to re-create the scripts they run.
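The percentages above work out as follows; the 40-hour story size is an assumption chosen purely to make the arithmetic concrete:

```python
# Worked example of the 5-percent figures (story size is assumed).
story_hours = 40                  # estimated user-story size in hours

automation = 0.05 * story_hours   # ~5% to automate the finished feature
integration = 0.05 * story_hours  # the same ~5% again at integration time

print(automation, integration)    # → 2.0 2.0
```

So a 40-hour story quietly costs about four extra hours -- and the payback arrives only when testers rerun the scripts instead of rewriting them.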
We have our user stories complete, with acceptance criteria and a proper definition of done; next we estimate the size, derive a duration (using the velocity), and draw up a schedule. We do these steps one after the other and continue until we exhaust our velocity. Here we are running a clear Waterfall model that cannot be stopped. Over time, the team realizes with a shock that it is not working iteratively. It has unknowingly become trapped in the Waterfall model, and the way out is to terminate the sprint and use the sprint review and retrospective to identify what to change.
Terminating a sprint is not as easy as it sounds, because we would be doing it in the 70 percent in-progress phase. Deliverables are half-done, and team members are waiting for their share of the work. It becomes a cumbersome battle within the team to deliver. The team's elasticity helps it carry over a few digestible portions of the delivery: if the sprint is terminated and has to restart, the work does not restart from scratch but from the parts that were half done.
If you are starting to follow Scrum, I would strongly suggest running two full cycles of demo sprints with the executing team, to understand how well Scrum works with your project and organization. Even before adopting Scrum, make sure you have a genuine need for it. Threats like these are lessons learned from my own practice.