Monday, February 16, 2015

What good are story points and velocity in Scrum?

We use velocity to decide how many story points to take into the next sprint. When the team has taken in enough stories, and story points, to reach its average velocity, the sprint planning meeting can end.
Although this is a common approach, it is exactly how you should not use story points in Scrum. It leads to over-commitment and spillover (started but unfinished work) at the end of the sprint. Both of these are bad for performance. How should you use story points in planning? How do you create the Sprint Forecast? And what do you do if the team runs out of work?

The first thing to remember is that the Development Team is self-organizing. They have exclusive jurisdiction over how much work they take on. The Product Owner has final say over the ordering of items in the backlog, but nobody tells the Development Team how much work to take on! Not the Product Owner, not the Scrum Master, and certainly not the math!

As a Product Owner, I would use story points to help set medium and long-term expectations on what is really achievable. Wish and probable reality need to be more or less in sync with each other. If the disparity is too big, it's the Product Owner's job to fix the problem, and she has lots of options: less scope, simpler acceptance criteria, more time, more people, pivot, persevere, or even abandon.

As a Scrum Master, I would use velocity to identify a number of dysfunctions. A wavy burn-down chart is often a symptom of stories that are too big, excessive spillover, or poorly understood acceptance criteria (to name the most likely causes). A flattening burn-down chart is often a sign of technical debt. An accelerating burn-down chart may be a sign of management pressure to perform (story point inflation). A lack of a burn-down or velocity chart may be a sign of flying blind!

As a member of the Development Team, I would use the estimate in story points to help decide whether stories are ready to take into the sprint. An individual story should represent on average 10% or less of the team's capacity.
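The 10% guideline is easy to check at a glance. Here is a minimal Python sketch; the function name and the velocity figures are invented for this example, and a simple average of recent velocities stands in for "capacity":

```python
# Hypothetical helper: is a story small enough relative to team capacity?
def small_enough(story_points, average_velocity, max_fraction=0.10):
    """A single story should be at most ~10% of the team's capacity."""
    return story_points <= max_fraction * average_velocity

velocities = [28, 32, 30]                     # made-up recent sprint velocities
capacity = sum(velocities) / len(velocities)  # simple average as a capacity proxy

print(small_enough(3, capacity))  # a 3-point story vs. capacity 30 -> True
print(small_enough(8, capacity))  # an 8-point story is well over 10% -> False
```

A story that fails this check is a candidate for splitting before it is taken into the sprint.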

How to create the Sprint Forecast

How much work should the team take on in a sprint? As Scrum Master, I would ask the team: can you do the first story? Can you do the first and the second? Can you do the first, the second, and the third? Keep asking until the team hesitates. As soon as they hesitate, stop. That is the forecast.

Why should you stop at this point? Taking on more stories will add congestion and slow down the team. Think of the highway at rush hour. Do more cars on the road mean the traffic moves faster? Would be nice.

Why do you even make a forecast? Some projects say, let's just get into a state of flow and pull work as we are ready to take it. This can work too, but my own experience with that approach has been mixed. It is very easy to lose focus on getting things done and to lose the ability to predict what can be done over a longer period of time. So I believe Sprint Forecasts are useful because they help us inspect-and-adapt en route to our longer-term goal.

What about "yesterday's weather"? Can we use the results of the last sprint to reality check the forecast for this sprint? Sure! If your team promised 100 but only delivered 70 or less, this is a sign that they should not commit to more than 70, and quite probably less. I call this "throttling", and it is one of my 12 Tips for Product Owners who want better performance from their Scrum Teams. But yesterday's weather is not a target, it's a sanity check. If it becomes your target, it may be holding you down.

What if the team runs out of work?

On the one hand, this is easy. If the team runs out of work, they can just ask the Product Owner for more. A working agreement can streamline this process, for example: "Team, if you run out of work, you can:"

  • Take the top item from the product backlog.
  • Contact me (the Product Owner) if you get down to just one ready item in the backlog.
  • Implement your top-priority improvement to our code ("refactoring").

Implementing improvements from the last retrospective is usually a particularly good idea, unless you are very close to a release. These are investments in productivity that often pay huge dividends, surprisingly quickly!


3 comments:

Anonymous said...

Thank you Peter for your newest post!

First, I do agree with "(...) nobody tells the Development Team how much work to take on! Not the Product Owner, not the Scrum Master, and certainly not the math!"

But I think that [a: the max. number of hours a team can be productive during the next sprint] vs. [b: the average velocity over the last 3 sprints] can give a PO a good feeling for [c: what seems to be possible].

But this [c] IS NOT [d: what must be possible] for the next sprint. Estimation is still done by the dev. team and nobody else! And the commitment is still asked of the dev. team - "(...) keep asking until the team hesitates" - and, even more, requested.

This [c] may have an influence on the priorities the PO gives to some critical user stories, and may have an influence on a release plan - at least when it comes to a first "go live"/"go productive"/"go public" with a product.

The key thing here is surely to have a realistic [a], a good sense of what [b] means (is it going down, staying flat, or going up - why, and what does that mean?), and to have experience with "go public", or even better with "continuous deployment", while developing a product.

What do you think, Peter?

Peter said...

Dear Anonymous -- and I hope you're not that Anonymous ;-)

These topics -- how much value is there in an estimate, or what does the team really commit to in sprint planning -- have been the subject of much debate over the years.

A few years ago, Ken Schwaber created a storm by changing the result of Sprint Planning 1 from a commitment to a forecast. And the discussion around #noestimates has been especially entertaining!

The idea behind "forecast instead of commitment" is that the team commits to do its best. The team may not deliver all it forecast, but what it delivers will work (it's "done"), be on time (i.e. by the end of the sprint), and be on budget (no overtime or delays).

So Scrum has a default agreement on how the team should fail if it can't meet the commitment. I have found it helpful to discuss this agreement with the Stakeholders. Since the alternatives are usually bad for quality, cost, and release predictability, most POs and stakeholders go along with the standard agreement (although there have been some special cases, usually in the last sprint before an important deadline). Those that value scope over quality usually end up unhappy.

I have seen teams waste a lot of time doing detailed task estimates in sprint planning 2. It's hard work and the value add is small. So keep planning and estimating as lightweight as possible.

My own experience has been best when teams focus on getting things really done in one story before moving on to the next one. That means zero tolerance for bugs! This gives higher velocity and better release predictability than getting things sort of done and fixing the details later. It does require closer collaboration with the Product Owner and/or other stakeholders, but I find it gives better results.

To address your last question: I think this is a good conversation to have with the Product Owner. It builds trust and alignment on both values and priorities.



Glen B. Alleman said...

Any forecasting method, SP or "hog's heads" (a unit we used in an undergrad physics course to make the TAs do some of the work), must come with a variance.
The average is meaningless without the variance. And it's not the average that should be used anyway; it's the most likely value (the mode, the most recurring value).
All the examples of #NE using past-performance (empirical) data use the average. This is bad math at best and unreliable math at worst.
Next is the time series evolution of the past performance. This is easily determined with ARIMA and an R desktop environment.
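[The distinction between average, spread, and most-likely value can be sketched with Python's standard statistics module; the velocity history below is invented for illustration:

```python
import statistics

velocities = [30, 28, 30, 50, 30, 22, 30]  # made-up sprint-by-sprint history

mean = statistics.mean(velocities)     # the average everyone quotes
spread = statistics.stdev(velocities)  # without this, the mean says little
mode = statistics.mode(velocities)     # the most recurring ("most likely") value

print(mean, spread, mode)  # the mode (30) differs from the mean (~31.4)
```

One outlier sprint (50) pulls the mean above what the team most often achieves, which is exactly why the mode plus the variance tells a more honest story. - Ed.]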
The result of the "simple" approach is a disappointing forecast and the misunderstanding that estimates don't produce good results - blaming this on estimating in principle, rather than on the improper use of estimates.
I dislike platitudes, but: "It's a poor workman who blames his tools."
Learn to use the tools, and only then determine if they are applicable to the problem at hand - the "value at risk."