Row, Row, Row the Scrum Boat Gently Down the Waterfall

How to use Scrum in Waterfall-like or unsynchronized environments

11 August 2016

Michael Kogan
Western Digital


For several years, I have been responsible for software systems and products that were either part of a larger system or interacted with other systems — both software and hardware — as part of solutions delivered to customers. When I started the transformation to Scrum, those other systems were still being developed using traditional Waterfall methods at worst, or a so-called "Scrumfall" at best. Either way, the teams were not part of a scaled Scrum framework (such as Nexus or SAFe), so the other systems either did not use sprints at all or ran sprints that were not synchronized with ours in any useful way.

My team and I had to find a way to keep our Scrum framework while adjacent systems either didn't use one, or did but weren't synchronized with ours. Along the way, I made several observations and gained insights that guided me, and I would like to share them in this article.

Whose product is it anyway?

First, and most obviously, we are not in total control of the whole solution, so a Potentially Shippable Product (PSP) can be tricky to define. We could say that our PSP is the part of the system that we own, but that is wrong: the product that is ultimately delivered to customers is composed of several parts, of which ours is only one. We need to come to terms with the fact that the PSP is not entirely ours; producing it is a joint effort by more than one team.

Crystallizing the interface

It often happens that, when our sprints or releases are being planned, we realize that the interfaces with the external systems are incomplete or missing altogether. At that point, it is important to understand that a user story that depends on an interface cannot be added to the sprint backlog until that interface is well defined. It does not have to be implemented in the other system, but it must be well defined. For example, a well-defined API can be a precondition in the Definition of Ready for a product backlog item.

To keep functionality slices thin, the product owner (PO) might negotiate the API with the external teams using a piecemeal approach; that is, defining the interface one piece at a time, in order of importance to the customer. Keep in mind that this kind of API crystallization takes time, so the PO and the team should start preplanning and inter-team engagement early in the current sprint to be ready for the planning meeting of the next one. Once the relevant piece of the interface is well defined, the corresponding user story can go into the sprint.
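As a sketch of what one crystallized "piece" of an interface might look like, consider a hypothetical gateway to an external device-management system. Only the call needed for the highest-priority user story is defined; later slices add more. All names here (DeviceGateway, DeviceStatus, get_status) are illustrative assumptions, not anything from the article:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class DeviceStatus:
    """Data shape agreed with the external team for this slice."""
    device_id: str
    online: bool
    firmware_version: str


class DeviceGateway(ABC):
    """Hypothetical contract agreed with the external team for this sprint.

    Only get_status() is crystallized so far; further calls (e.g., a
    reboot command) would be negotiated as later slices.
    """

    @abstractmethod
    def get_status(self, device_id: str) -> DeviceStatus:
        """Return the current status of a device, or raise KeyError."""
```

Capturing the agreed slice as an abstract class gives both teams a concrete artifact to review, and it is exactly what the mocked tests can later be written against.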

Verification and Definition of Done

At this point, one might ask: If we don't own the other side of the interface, how can we ever test our story and declare the item "done"? Well, if the interface is defined, you can test it comprehensively using software mocking techniques. There are quite a few mocking frameworks available these days, covering most, if not all, popular programming languages. Once the tests are in place, they should remain part of the automated test suite, verifying the correctness of the API usage in every build. And this brings me to the Definition of Done (DoD).
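As a minimal illustration of the mocking approach, here is a hedged sketch using Python's standard `unittest.mock` (one mocking framework among many). The function, the gateway interface, and the device names are all hypothetical; the point is that the external system is stood in for by a mock that honors the agreed interface, so our story can be tested before the other team delivers anything:

```python
from unittest.mock import Mock


def count_online(gateway, device_ids):
    """Hypothetical piece of our subsystem: it talks to the external
    system only through the agreed interface, i.e., a gateway object
    exposing get_status(device_id) -> object with an `online` attribute.
    """
    return sum(1 for d in device_ids if gateway.get_status(d).online)


# The external system is not implemented yet, so we stand in for it
# with a mock that behaves per the agreed interface definition.
gateway = Mock()
gateway.get_status.side_effect = lambda d: Mock(online=(d != "dev-3"))

assert count_online(gateway, ["dev-1", "dev-2", "dev-3"]) == 2

# The mock also lets us verify our side of the contract:
# the interface was called exactly once per device.
assert gateway.get_status.call_count == 3
```

Because the tests encode the interface definition, they double as a regression guard: if our code drifts away from the agreed API, the mocked suite fails in the next build.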

How do we define the DoD in this case? The challenge is settling the tension between a "vertical" slice of functionality, from the customer's perspective, and the fact that our team cannot verify that slice's completion in isolation. To resolve this, I suggest defining Done as the point at which we are ready to integrate with the other systems; that readiness is the only thing fully under our control. For example, the DoD could include completing and passing all the mocked tests.

Completing the PSP

Naturally, at the product level, a complete slice can only be declared done after all the subsystems have been integrated and all the system-level tests pass. In theory, this phase should be short and smooth; in reality, it is only as smooth as the interface definitions are accurate and stable, and their implementations robust. The API almost always changes before integration, as new insights are reached, so integration becomes the most challenging, surprising, and poorly estimated effort, following the Pareto principle: integration tends to take the lion's share of the PSP development effort.

To accommodate these constraints, a team should allocate time and resources for the integration effort up front, deviating from fundamental Scrum guidelines by allowing a gap between sprints. This approach will not work in every case, so the team can develop its own flavor of it during retrospectives.

Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.
