Although ours was an unusual team setup, distributed across on-site and offshore offices, the team delivered value every sprint. This setup helped us run successful sprints with an Agile mindset, the on-site stakeholders gave us their full support, and the team was highly motivated.
Here I highlight some of the factors that led to the success of the project.
We had a seven-member team of "pigs" (fully committed members). The highlight of the team was that its "chickens" (involved stakeholders) were 100 percent engaged.
The following chickens were on our team:
- Lead system architect (LSA) — on-site and offshore offices
- Lead business analyst (LBA) — on-site office
- ScrumMaster — on-site office
Initially, we had a two-week sprint that produced the following results:
- It seemed very short, with little time for development.
- We delivered less functionality, so the team was not comfortable with the outcome.
We then changed this to a four-week sprint, with the following results:
- The team seemed relaxed.
- No urgent matters were knocking at the door. On the surface, it seemed like a Waterfall model execution.
- The team geared up in the second week.
- A lot of work piled up on the third week!
- The team had to stretch to complete the work.
Consequently, we modified the duration to a three-week sprint and experienced the following results:
- The team was focused.
- The work moved forward smoothly.
- The team had enough time to develop and test code.
Everyone chose his or her stories from the sprint backlog. However, this did not go well, because stories were interdependent and touched the same code. To resolve the issue, we grouped the stories by function so that a team member could handle each story independently.
Here's what we accomplished during Sprint Zero:
- Set up the infrastructure.
- Optimized the workstations.
- Reviewed the project scope and had the product owner (PO) sign off on it.
- Sized the critical top-level items.
- Created a test strategy and related test plans.
Sprint: Day 1
We managed several activities on Day 1 of each sprint:
- Reviewed the product backlog.
- Set the sprint goal.
- Decomposed backlog items into tasks.
- Included traditional tasks:
  - Reviewed code.
  - Created unit test cases.
  - Reviewed/updated design documentation.
  - Created test scripts.
  - Managed defects.
  - Participated in release grooming.
- Business analyst (BA) and PO reviewed test cases.
- Updated the tracking tool daily to determine progress of burn-down charts.
- Ensured that design/code were reviewed in every sprint.
- Bugs were logged in a spreadsheet.
- When the code moved to the system integration environment, bugs were triaged with support from the BA and team and then logged into the tool.
- Completed premerge and postmerge testing.
- We held the grooming session two days a week.
- We used Planning Poker® cards to get alignment on story points.
These sessions helped the team understand the overall architectural road map and future plans. Colocating teams for at least the first two sprints is an added advantage; it builds trust.
- The entire team was available for release grooming meetings.
- We had frequent video conference calls to build relationships.
- The on-site LSA visited our offshore team for two sprints.
We had 100 percent attendance from all stakeholders at the daily stand-up. Each team member updated the tracker tool before attending the meeting, which gave everyone the collective status and more visibility into the progress made.
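The daily tracker updates fed our burn-down charts. As a rough sketch (in Python, with hypothetical task names and hours, not our actual tracking tool), the remaining-hours figure behind each burn-down data point can be computed like this:

```python
# Minimal sketch of a burn-down calculation; the tasks and logged
# hours below are hypothetical, not taken from our real tracker.

def remaining_hours(tasks):
    """Sum the remaining estimated hours across all tasks."""
    return sum(t["estimate"] - t["burned"] for t in tasks)

tasks = [
    {"name": "code review", "estimate": 8, "burned": 3},
    {"name": "unit tests", "estimate": 12, "burned": 12},
    {"name": "test scripts", "estimate": 6, "burned": 2},
]

print(remaining_hours(tasks))  # 9 hours left to burn down
```

Plotting this total against the sprint days produces the familiar burn-down line; a flat line for several days was our early warning that work was piling up.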
Sprint: Last day
We had a four-hour review/retrospective during the on-site overlap time.
- To ensure an effective sprint demo on the last day, we had mid-sprint demos.
- Development teams handled the demos individually.
- We had interesting and effective methods for retrospection. Some of them were:
- Mad, Sad, Glad game.
- One word / one liner about the sprint.
- Each team member highlighted the team's good qualities.
This fostered worthwhile collaboration with the on-site team.
We discussed velocity trends to determine how we could improve. We reestimated stories from past sprints and adjusted the velocity accordingly.
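The velocity bookkeeping behind those discussions is simple arithmetic: average the completed story points over the most recent sprints. A sketch with made-up numbers:

```python
# Hypothetical completed story points per sprint, oldest first.
completed_points = [18, 21, 24, 23]

def average_velocity(points, window=3):
    """Average velocity over the most recent `window` sprints."""
    recent = points[-window:]
    return sum(recent) / len(recent)

# Rolling average over the last three sprints, used for
# forecasting how much the team can pull into the next sprint.
print(average_velocity(completed_points))
```

Reestimating past stories, as we did, amounts to editing the entries in the list and recomputing; the rolling window keeps old, unrepresentative sprints from skewing the forecast.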
Our Definition of Done
We used the following checklist to determine whether we satisfied the Definition of Done:
- Completed code.
- Completed unit testing and secured sign-off.
- Completed code review.
- Completed test plans/scripts.
- Passed acceptance criteria.
- Fixed all Severity 1 issues.
- Met all usability guidelines.
- Ran the Performance Log Analyzer tool.
Issues we encountered
- We could not perform QA testing during the first two sprints because the third-party dependency services were not ready. The team stretched during the remaining sprints to complete this testing successfully.
- The two- and four-week sprints did not work well.
- We had dependencies with non-Agile teams.
We followed these best practices to ensure the project's success:
- Handle demos individually.
- Hold release planning two days a week to improve the product backlog.
- Conduct the retrospection in interesting ways.
- Hold efficient daily stand-ups with 100% attendance.
- Mandate performance testing at the end of every sprint.
- Require the daily logging of burned hours and status in the tracking tool.
- Have the offshore and on-site LSAs ensure that a code review is completed in every sprint.
- Mandate code merge every two weeks.
- Have BA or PO review test cases.
- Hold a triage meeting before issues are logged in the tracking tool.
- PEGA best practices:
- Establish naming conventions.
- Develop a code review checklist.
- Update history and comments.
- Use out-of-the-box functionality and avoid customizations to improve performance.
- Work on technical debt in every sprint.
- Refactor code.
- Implement continuous integration, which helps detect build issues immediately.
- Define stand-up/planning/review/retro/grooming meetings.
- Implement fixed merge cycles twice a month, which enables continuous integration.
- Run system integration testing at every sprint.
- Run regular design reviews to ensure adherence to best practices.
- Instead of component or layer teams, establish feature teams.
This reduces integration time, improves quality, and builds knowledge redundancy.
Ways to improve
Through our experience, we defined ways to improve our team and work processes:
- Combine Kanban board with Scrum.
- Follow extreme programming practices:
  - Claim collective code ownership through pair programming.
  - Practice test-driven development by using NUnit.
  - Create simple designs.
  - Avoid rework.
- Improve process cycle efficiency by brainstorming ways to increase the value-added steps and reduce the non-value-added steps.
- Have POs identify user stories well in advance; they need to be clear about the product vision for the current or next release.
- Identify dependencies/infrastructure requirements early in the project during elaboration.
- Ensure that all stakeholders from dependent teams are part of the Scrum calls.
- Improve on estimations by baselining/rebaselining user stories.
- Ensure that the ScrumMaster validates the team's maturity in estimating user stories.
- Keep additional complete stories ready in the product backlog.
- Understand velocity trends during retrospectives.
- Build an effective cross-functional team.
- Identify training gaps and plan the training well.
- Run many team-building activities.
- Encourage the team to embrace change and to take up more work.
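The process cycle efficiency mentioned in the list above is the ratio of value-added time to total lead time; increasing value-added steps and trimming non-value-added ones pushes the ratio up. A quick sketch with hypothetical numbers:

```python
def process_cycle_efficiency(value_added_hours, total_lead_hours):
    """Process cycle efficiency as a percentage:
    value-added time divided by total lead time."""
    return 100.0 * value_added_hours / total_lead_hours

# Hypothetical: 30 hours of value-added work (coding, testing)
# inside a 120-hour lead time that also includes waiting on
# environments, hand-offs, and rework.
print(process_cycle_efficiency(30, 120))  # 25.0 percent
```

Tracking this figure sprint over sprint makes the brainstorming concrete: eliminating 20 hours of waiting raises the same 30 value-added hours to 30 percent.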