Start measuring the team velocity with story points and cycle time.
- The core team was working well together at this point, and adding a scrum master helped the team stay on track. - And adding a release manager, with her motto of "always be shipping," kept stakeholders in the loop and showed progress. The scrum master solved many outstanding issues the team had run into. On the estimation front, our teams had trouble estimating how long features would take. Initially, tickets were estimated in hours, but the hour counts weren't very accurate unless the tasks were very small. - More than once, the team spent a lot of time bikeshedding over whether a specific large task would take 25 or 30 hours.
In the end, it was just a large task, and the up-front variance in that estimate didn't matter much. Instead of using hours, we pivoted to tee-shirt sizing as a first step. We used small, medium, large, and extra-large sizes for the tickets. XXL they saved for me. - [Instructor] Cards added to the to-do column would get an initial tee-shirt size as an estimate of how long the feature would take to complete. This was immensely helpful to Ernest and me in prioritizing the features in our backlog.
- [Instructor] Eventually, we moved to story points to estimate the effort required. We used a common technique, a modified Fibonacci sequence of one, three, five, eight, 13, 20, and 40 points, to size the tickets, and that ended up working well for our team. - [Instructor] Also, over time, everyone realized that three story points ended up being about a day's worth of work, and that was used as a guideline. - [Instructor] Pure Agilists might frown at this behavior, but it cleared up confusion in our org rather than adding more of it.
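The modified Fibonacci scale above can be sketched in a few lines of Python. This is a hypothetical helper, not part of the course's tooling; it just snaps a raw effort guess to the nearest value on the scale:

```python
# Hypothetical helper: snap a raw effort guess to the modified
# Fibonacci story-point scale mentioned above (1, 3, 5, 8, 13, 20, 40).
POINT_SCALE = [1, 3, 5, 8, 13, 20, 40]

def to_story_points(raw_estimate: float) -> int:
    """Return the scale value closest to a raw effort guess."""
    return min(POINT_SCALE, key=lambda p: abs(p - raw_estimate))

print(to_story_points(6))    # -> 5
print(to_story_points(16))   # -> 13
print(to_story_points(100))  # -> 40 (anything huge caps at the top)
```

The point of a coarse scale like this is exactly the anti-bikeshedding argument made earlier: the gaps between values widen as tasks get bigger, so there's nothing to argue about between "25" and "30."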
We also knew that it's more important to make Agile work for you than to blindly follow every instruction that purist Agile methodologies advocate. - That's right. Help me help you. On a different note, our business stakeholders kept asking us for progress updates during the sprint. - Yep, bi-weekly and monthly update emails weren't enough for them, and we needed a way to track our progress and update others more quickly. - This helped us figure out which metrics were the most important to monitor.
- By this point, we had a couple of teams following more of a scrum life cycle, while the Ops side was more comfortable with a Kanban approach. - We set a goal of monitoring progress and burn rates. For the sprint teams, we tracked the burn-down charts for each sprint, epic and release burn-down charts, and velocity. Let's look at what these are. - [Instructor] The sprint burn-down chart helped us visualize progress within a specific sprint and track the completion of committed work throughout the sprint. - [Instructor] We also watched for anti-patterns such as a team finishing too early in a sprint, missing forecasts, sudden drops versus a gradual burn-down, or sudden scope changes during sprints.
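A sprint burn-down like the one described can be sketched with made-up numbers. The assumption here is a 5-day sprint with 20 committed points; the "ideal" line is the straight-line burn a team would compare against:

```python
# Sketch with made-up numbers: remaining story points at the end of each
# day of a 5-day sprint, compared against an ideal linear burn-down.
committed = 20
remaining = [20, 18, 15, 15, 6, 0]  # day 0 through day 5

days = len(remaining) - 1
ideal = [committed - committed * d / days for d in range(days + 1)]

for d, (actual, target) in enumerate(zip(remaining, ideal)):
    flag = " <- behind ideal" if actual > target else ""
    print(f"day {d}: actual {actual:>2}, ideal {target:4.1f}{flag}")
```

A flat stretch (days 2 to 3 above) followed by a sudden drop is one of the anti-patterns mentioned: work being marked done in a burst rather than burning down gradually.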
- The epic and release burn-downs helped us track the overall progress of the project to make sure we would be able to release on time. We used these burn-down charts to look for progress being made across several iterations, to spot scope creep, and to verify that the team was actually shipping incremental releases during the development of some of the larger features. - [Instructor] The velocity charts allowed us to track the average amount of work the team completed during a sprint. This helped us forecast how much would actually get done over many sprints and how quickly the team could work through the backlog.
Initially, velocity was all over the place, but we noticed that after a few iterations it became more consistent. Tracking velocity allowed us to ask questions about sprint forecasting, development challenges, and whether any outside pressures were affecting our delivery. - For the teams doing Kanban, we tracked the following metrics: work in progress, or WIP, throughput, and lead and cycle times. - Above all, it was important to us that the team had a good flow, meaning that work was progressing in a steady and predictable way.
- [Instructor] We tracked and recorded the number of unfinished cards each week and ended up creating a cumulative flow diagram from this. Tracking WIP gave us an understanding of how much work was in process and, as a result, not yet providing value. - Next, tracking throughput gave us insight into how much work was being completed during an iteration. - Tracking throughput wasn't helpful by itself, but combined with lead time and cycle time, it gave us a better picture of the impact of our changes.
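The weekly card counts behind a cumulative flow diagram can be sketched as a simple tally. The board columns and snapshot data here are hypothetical, standing in for whatever tool the team actually used:

```python
# Sketch: weekly snapshots of which column each card sits in
# (hypothetical data), tallied into the counts behind a cumulative
# flow diagram. Cards in "doing" are the work-in-progress (WIP).
from collections import Counter

snapshots = {
    1: ["todo", "todo", "doing", "done"],
    2: ["todo", "doing", "doing", "done", "done"],
}

for week, columns in snapshots.items():
    counts = Counter(columns)
    print(f"week {week}: {dict(counts)}, WIP={counts['doing']}")
```

Plotting those per-column counts as stacked bands over time is the cumulative flow diagram; a widening "doing" band is the visual signal that WIP is piling up without delivering value yet.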
Lead time tracks the total time from when work is requested until it's delivered, while cycle time tracks the time we actually spend working on it once it's on our board. - Tracking these three metrics allowed us to optimize specific areas of our workflow and improve our overall efficiency. - The most important thing we learned from gathering and monitoring metrics is that we should use our metrics for good and not weaponize them. We'd heard horror stories from other organizations where teams were compared against each other using an arbitrary metric, leading to politicking and conflict.
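The lead-time versus cycle-time distinction comes down to which timestamp you subtract from. A minimal sketch with made-up dates for a single card:

```python
# Sketch with made-up dates: lead time runs from the original request
# to delivery; cycle time only from when the card entered "doing".
from datetime import date

requested = date(2024, 3, 1)   # stakeholder asked for the feature
started = date(2024, 3, 8)     # card pulled onto the board ("doing")
delivered = date(2024, 3, 15)  # shipped

lead_time = (delivered - requested).days
cycle_time = (delivered - started).days
print(f"lead time: {lead_time} days, cycle time: {cycle_time} days")
```

The gap between the two (7 days here) is time the request spent waiting before anyone worked on it, which is often the easiest place to improve flow.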
- Or the scenario where, because teams weren't hitting a specific metric, they had to stay late every night and work weekends for months. Nobody likes doing that. - Metrics inevitably reveal places we need to improve. More on that in our next video.