Monday, December 17, 2018

What the Military can teach us about Sprint Planning

As a coach, I've seen many Sprint Planning sessions. Some are more effective than others. When it comes to planning, the military has a long history and we can learn a few things from them.
Plans are worthless, but planning is everything
                                                                              - Dwight Eisenhower
I have always interpreted this quote by Eisenhower to mean that through the act of planning, we gain a shared understanding of what we are attempting to do; the plan itself may change, but the goal won't.

The military talks about "Commander's Intent" which looks at the mission, the desired end state, and the purpose of an operation. While this seems to be bigger than a sprint goal, a good sprint goal will help the team understand the end-state of the sprint.

If the team has a good sprint goal, it will help them deliver on the intent but still give some flexibility on how they deliver. Without following any specific military planning technique, here are some other aspects of a military plan that we can borrow in our sprint planning:

  • Resources & People: Do we have all the equipment we need? Are test environments ready? Do we have test data? Do we know who is going to be available? Any planned vacations or holidays that impact the sprint? I encourage my scrum masters to keep a spreadsheet of team availability so they can tailor the sprint's capacity to the available developers. 
  • Lessons Learned: Have we included kaizens from the last retrospective into our plan? Do we have them on the sprint backlog?
  • The Plan: The team should self-organize around the work that needs to be accomplished.
  • Contingencies: Once we have a plan, do we think about what could go wrong? The military calls this technique Red Teaming. What do we do if the test environment goes down? What if the snow predicted for the end of the week is worse than forecast?
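
On the capacity point above: a scrum master's availability spreadsheet boils down to simple arithmetic. Here is a minimal sketch in Python, where the team names, sprint length, and focus factor are illustrative assumptions rather than a prescribed formula:

```python
# Rough sprint capacity sketch: available days per person minus
# planned time off, scaled by a focus factor for meetings/interruptions.
SPRINT_DAYS = 10          # two-week sprint (assumption)
FOCUS_FACTOR = 0.8        # share of each day spent on sprint work (assumption)

days_off = {              # planned vacation/holidays per person (illustrative)
    "Dev A": 0,
    "Dev B": 2,
    "Dev C": 1,
}

# Ideal days each person can commit to this sprint
capacity = {
    person: (SPRINT_DAYS - off) * FOCUS_FACTOR
    for person, off in days_off.items()
}

team_capacity = sum(capacity.values())
print(capacity)          # per-person ideal days
print(team_capacity)     # total ideal days available this sprint
```

Even a sketch like this makes the conversation concrete: when Dev B takes two days off, the team can see exactly how much commitment to trim.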

Taking the extra time to discuss all these aspects of the plan builds a stronger shared understanding, which will help the team make better decisions when things don't go according to plan.

Tuesday, December 04, 2018

Getting Rusty

I was in the Moab, Utah area last weekend and got some mountain biking in. I haven't spent a lot of time mountain biking this year so I was a bit rusty. By the second day I was starting to get my rhythm but it took a little time.

Just like mountain biking, our agile skills can get rusty. I ran a Lean Coffee last week and before it I read a couple articles because I hadn't done a Lean Coffee in a while and wanted to make sure I didn't miss anything.

I've been working with some of my newer Scrum Masters on facilitation techniques, another skillset that can easily get rusty if you don't do enough of it.

One of my favorite facilitation tools is POWER:

Purpose - why are we having this meeting/workshop?

Outcomes - what do we expect to walk away with?

What's in it for me - why will participants want to attend, and what can they get out of it?

Engage - how will you as the facilitator engage the participants? Think about activities, items on the tables to play with, or even snacks.

Roles & Responsibilities - what can the participants do?

I like using this as a way to prepare for workshops so that I can make sure the workshop provides value to the participants. Using this keeps me from getting rusty on my facilitation techniques.

Thursday, November 29, 2018

If You're Going to Offshore, Get it Right

In general, I prefer having fully co-located teams but I've been working with a number of organizations that use off-shoring as part of their delivery model. Some have the right approach, others are missing the mark.

When one organization I worked with decided to move to a Scrum framework, they were also setting up an off-shore model with developers in India. In this case, they brought those developers to the U.S. to be part of the team formation process: learning the Scrum framework, setting up a team operating agreement, and so on.

When these individuals went back to India, they were set up with the right equipment: good audio and video capability that gave them a tele-presence ability. Each morning the US-based part of the team would go to a video conference room and spend the first part of their day with the India-based part of the team. They could see and hear each other well, share documents, and even walk through code together. It was a pretty effective approach.

Counter this with another client of mine. They also have India-based developers, but those developers have not had the opportunity to travel to the US. They don't have any real tele-presence, just conference calls and screen sharing. They don't really participate; they just listen in on discussions from a US-based conference room. Self-organization is also absent: they are assigned tasks by the lead developer, who is in the US. My observation is that they aren't getting much value out of this approach.

I'm a fan of the Media Richness Theory and have used it with my clients. I have also taken a page from Crew Resource Management (CRM) and their communications practices. One of my favorite assertive communications tools is SBAR (situation, background, assessment, recommendation). I have taught this technique to a number of teams as part of a focus on building up their teaming capability.

Given the choice, I would have all my teams co-located. When that isn't possible, I try to bring them together as often as possible and use good tele-presence tools when they are not together. Regardless of your model, you still have to teach them good communications and teaming techniques so they can be as effective as possible in any configuration.

Monday, September 24, 2018

How to Budget For Your Company’s Technical Debt

Guest post by Dr. Mik Kersten

While “technical debt” is a term that’s frequently used by technologists, the implication and understanding of it tends to be opaque to the business until it’s too late - just look at how Nokia lost the mobile market that it helped create.

The business and finance side of Nokia had the usual tools for assessing financial risks - but why do we not have an equivalent tool for the operational or existential risks when the debts come from the more intangible investment in technology?

What’s technical debt?
Technical debt refers to the refactoring "shortcuts" taken in IT to meet requirements like time to value (TtV) and speed-to-market. Technical debt is like cholesterol; the more it accumulates, the more it impedes the flow of value.
Legacy systems are a perfect example of technical debt. We are all too familiar with that system that everyone dreads to touch and hopes that it doesn’t malfunction because any modifications to improve its business value will cost time and money. Yet the longer you wait, the costlier it will get due to lack of knowledge and support.

Speed-to-market pressures also increase the debt – such as first-to-market, responding to time-critical customer needs, and faster customer feedback to improve performance and value. Compromises are made with the notion of dealing with the consequences later.

Sometimes it’s as simple as realizing the technology or architecture chosen for a particular product is no longer scaling and needs refactoring. All of these technical decisions impact delivery speed and must be managed to ensure any future changes or products are not delayed. Taking on technical debt is not necessarily a bad thing, as long as it is understood by the business decision makers who put in place a plan for that debt to be paid down.

Why should the business care – isn’t this a cost that IT manages?
Technical debt additionally impacts delivery teams by bloating their work-in-progress (WIP) with neglected work. Neglected work can impact a team’s ability to focus and complete value-adding work. Less completed work leads to longer time to delivery, lower product quality, and less value, impacting customer satisfaction and business performance.

How can IT make technical debt visible to the business?
In a project-oriented view, where changes to IT systems are just another initiative, it’s difficult to prioritize and fund critical changes that will improve the speed of future changes. Yet software needs to go through a period of refactoring to maintain performance.

A product-oriented view enables the business to understand how all work interlinks, providing the ability to predict and plan for the impact of technical debt. However, you can't measure what you can't see, so it's crucial to make technical debt visible. The Flow Framework – a new way of seeing, measuring, and managing product delivery – introduces two metrics that help increase the visibility of technical debt:

Flow Distribution
This metric shows the distribution of the different types of work the IT team has delivered, such as value-adding work like features and functionality, and revenue-protecting work like defect fixes and security-related work. The more new functionality the team delivers, the less time they have for those other types of work. It's important to keep an eye on the level of technical debt work in this equation. Has the amount of completed tech-debt work fallen over the last few releases? If so, that is a leading indicator of more defects and delays in future releases. In addition, the Flow Framework expands the notion of technical debt to include infrastructure debt (e.g., data centers and servers) and debt in the value streams themselves (e.g., lack of automation).

Flow Load
Flow Load is a Flow Framework metric that shows the amount of work a team or set of teams has taken on. How much work do they have, and what proportion of it is technical debt? Is a lot of technical debt sitting on the backlog, accepted but unfinished, because new work is taking precedence? Accumulating technical debt on the backlog can have an increasingly negative impact on a company and its products.
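
As a rough illustration of both metrics, here is a small Python sketch. The four work-type categories follow the Flow Framework's naming (features, defects, risks, debt), but the work items and counts are invented for the example; this is not Tasktop's actual tooling:

```python
from collections import Counter

# Work items tagged with a flow type. The categories mirror the Flow
# Framework; the data itself is made up for illustration.
completed = ["feature", "feature", "defect", "debt", "feature", "risk", "defect"]
in_progress = ["feature", "debt", "debt", "feature", "defect"]

def flow_distribution(items):
    """Share of each work type among a set of items."""
    counts = Counter(items)
    total = len(items)
    return {kind: count / total for kind, count in counts.items()}

flow_load = len(in_progress)                        # total WIP across the team
debt_share = flow_distribution(completed).get("debt", 0)

print(f"Flow load: {flow_load}")                    # 5 items in flight
print(f"Debt share of completed work: {debt_share:.0%}")  # 14%
```

Watching `debt_share` trend downward release over release is exactly the leading indicator described above.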

Budgeting for technical debt
There are two parts to budgeting for technical debt.

1)   Correlating the impact of technical debt with value-adding work can help the business understand the danger it poses to the bottom line. Time and resources must be set aside every financial year to tackle technical debt, as debt often accumulates due to a lack of funding and sponsorship.

2)   IT and the business should look at trends to determine the threshold at which action must be taken to "pay back" the debt. This is similar to the error budget in Site Reliability Engineering, which helps product development and reliability teams agree on the level of unreliability that can be tolerated.
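
The error-budget analogy can be made concrete with a little arithmetic. This sketch uses an availability SLO, the classic SRE formulation; the SLO value and request counts are illustrative assumptions:

```python
# SRE-style error budget: with a 99.9% availability SLO, the budget is
# the 0.1% of requests allowed to fail. The same trend-and-threshold
# idea can be applied to technical debt. Numbers are illustrative.
SLO = 0.999
total_requests = 1_000_000
failed_requests = 600

error_budget = (1 - SLO) * total_requests    # failures the SLO tolerates
budget_remaining = error_budget - failed_requests

print(f"Budget: {error_budget:.0f}, remaining: {budget_remaining:.0f}")
if budget_remaining < 0:
    # The agreed trigger: stop feature work, pay down reliability debt
    print("Budget exhausted: prioritize paying down debt")
```

The point of the analogy is the pre-agreed trigger: the business and IT decide in advance what level of accumulated debt forces pay-back work, rather than arguing about it release by release.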

If technical debt is not actively monitored, it will gradually impact the flow of value to customer-facing products. Neglect the build-up and a cardiac arrest is inevitable. Make sure technical debt is visible and measured so that the business and IT can team up to proactively tackle and reduce technical debt to ensure a healthy product portfolio that can sustain the business.
Dr. Mik Kersten is the CEO of Tasktop and author of Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework.

Wednesday, September 05, 2018

Interview at Agile2018

A long-time colleague and friend, Dave Prior, interviewed me at Agile2018 about my presentation on being an agile coach at Toyota. You can find it here.

Thursday, August 09, 2018

Agile 2018 wrap up

I'm on my way home after spending the week at Agile2018. It's been a great conference. I reconnected with old friends, met some new ones, and attended some great presentations.

Monday's keynote was Dom Price (@domprice) from Atlassian. He shared how Atlassian tracks team health via their playbook. One interesting statistic he shared was that 78% of people don't trust their teammates. He also hit a theme that was repeated by many of the speakers: focus on outcomes, not outputs.
Focus on outcomes, not outputs
Wednesday's keynote was a focus on metrics by Troy Magennis. At first I considered not attending, but I'm glad I did. Troy talked about data as a people problem. He said a good way to get "crappy" data is to embarrass people. The data isn't enough, we have to be able to tell a story with it.

There were a number of other sessions that I took something away from:

  • Scott Ambler talked about architecture. A key message was that there are no "best practices" and the approach you apply depends on the context.
  • Tricia Broderick gave a session on facilitation. A point that stuck with me was that we need to select mentoring, training, or coaching based on the situation we're in and our desired outcome.
  • David Bland gave a session on experiments as they relate to new products. He turned the cycle in Lean Startup around and said we should start with Learn. From there, we decide what we want to measure, and based on that we decide what to build. 
This is just a snapshot of the four days I attended. I have other notes and ideas of things I'm going to use when I get back to my "day job" of coaching. I recommend that anyone looking for an infusion of new ideas consider attending next year. 

Friday, August 03, 2018

Regression to the Mean or High Performing

I had an interesting conversation with one of my Product Owners this week. She thought that over time, planning poker values fell prey to regression toward the mean because human nature was such that people didn't want to stand out and therefore tried to give estimates in line with what they thought others would give.

I countered by saying that if people were afraid to state what they truly thought the value should be, there is probably a psychological safety issue going on. The purpose of using poker cards during the activity is to avoid anchoring: everyone being influenced by one person stating their estimate aloud. 
After the conversation, another idea crept into my head. I tell teams that small stories are better. I am even a proponent of #noestimates in the right situation. So from this perspective, as a team moves towards high performance, they will get good at vertical slicing and that will lead to smaller stories, and therefore smaller estimate values.

So in response to the original statement, I don't think that on a well-functioning team the estimates are prone to regression toward the mean. However, I can see it happening on a team that is dealing with psychological safety issues. The real test is to watch the velocity over time. If regression toward the mean were happening, the velocity would be dropping. If the team is healthy, they will have a steady or even increasing velocity.
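
One way to run that "real test" is to fit a simple least-squares trend line to velocity over recent sprints and look at the slope. A sketch in Python, with invented velocity numbers:

```python
# Least-squares slope of velocity over sprint number: a clearly negative
# slope would hint at the drift the Product Owner feared, while a flat
# or positive slope suggests a healthy team. Velocities are illustrative.
velocities = [21, 23, 22, 25, 24, 26]   # story points per sprint

n = len(velocities)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(velocities) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, velocities)) / \
        sum((x - mean_x) ** 2 for x in xs)

print(f"Velocity trend: {slope:+.2f} points per sprint")  # prints +0.89
```

A handful of sprints is a small sample, so treat the slope as a conversation starter, not a verdict; the direction of the trend matters more than its exact value.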