Tuesday, October 5, 2010

Getting to Good User Stories

User Stories have become so popular for identifying Product Backlog Items (PBIs) that the term 'Story' is often used as a synonym for PBI, regardless of whether the item in question is formulated as a user story or not. Not all user stories are created equal: good user stories can be turned into functionality rapidly and predictably by a good Scrum team. Bad stories clog up the works, and teams have trouble finishing them in the sprint. So what makes a good story, and how do you get it small enough to implement in a sprint?

A user story answers three questions: Who? What? and Why? A user story leaves open the questions How? and When? Mike Cohn popularized the canonical form of a user story: As a <class of user> I want <some function> to achieve <some purpose>. For example:
  • As a Job Hunter, I want to find and apply for interesting jobs, so I can find a good job and earn a living.
  • As an Internet shopper, I want to select books from a catalog and order them. (Technically this is not a user story, because there is no explicitly specified goal. Do we need to specify a goal here? Or is it self-explanatory?)
One starting point is the INVEST guideline for good stories. Let's see how these stories hold up on the INVEST scale:
  • Independent - the stories can be submitted to the team in any order - OK
  • Negotiable - how the stories are to be implemented is subject to discussion - OK
  • Valuable - the feature provides value to the customer or user - OK
  • Estimable - the team can estimate the effort involved - hmm, our sample stories are a bit vague
  • Small - they are definitely not small. They might be OK for planning two quarters from now, but there is no way they can be implemented in a sprint
  • Testable - yes and no. I would argue that yes, they are testable, but there is so much room for interpretation that it is impossible to say which tests are needed, and which are not, to confirm that these stories were implemented properly.
If you are in doubt as to whether your stories are good stories, ask yourself if the stories answer the three questions (Who-What-Why) and ask your team if the story satisfies the INVEST criteria.

So how do you make a story smaller? Here is a list of 13 patterns which I would recommend:
  1. Split on user roles/personae
  2. Split on conjunctions (and, or, etc.)
  3. Split on functional components
  4. Split on test cases
  5. Split on business priorities
  6. Split on business process alternatives
  7. Sequences - Build the pipeline one segment at a time
  8. Sequences - Bore a pilot hole and then make the tunnel bigger
  9. Split on non-functional requirements
  10. Separate Goal and Function
  11. Split on Data Types
  12. Split on Data Operations
  13. Split on Levels of Quality
This is a long list, so I will work through it in the coming entries and provide more details about what each pattern means.

Here are some patterns which I would not recommend employing on the product backlog entries:
  • Spiking
  • Layering
  • Development Process
Spiking is simply dividing the story into analysis and implementation, with a time box and a set of questions to answer about the story. It is a good engineering practice, but the analysis part delivers no value to the customer or user. So I would call it part of backlog grooming and insist that it fit into the 10% or so of team capacity that the team invests in getting backlog items "Ready" to implement.

Layering can be formulated as a 'user' story: "As a developer, I want a DB-Schema, so I can deliver value to the customer in a future sprint." Development Process is pretty much the same: "As a developer, I want a spec, so I can implement some value in a future sprint." Both of these stories fail the Valuable test: they are not valuable to the customer or user.

BTW - There are cases where the developer is a legitimate user, such as when the developer needs a certain functionality to identify and fix problems. But when the developer needs an artifact to deliver value in a future sprint, this is a warning sign that you are falling back into waterfall thinking.


Daniel Serodio said...

That's an interesting paradox. When you split a User Story, it may become testable and estimable, but OTOH it ceases to provide value to the user - so the PO can't prioritize them - and they're not independent anymore.

Peter said...

Hi Daniel,

It's not a paradox if you split user stories vertically: each story after splitting still has a user/customer level acceptance test associated with it.

The acceptance test demonstrates the "V" in INVEST. Obviously the value of a split story may be very small, but it's not zero.

If you split stories "horizontally", i.e. Specification, DB layer, XML layer, UI layer, then you are in the situation you describe. There is no acceptance test except for the final layer.
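To make the contrast concrete, here is a minimal sketch of what a user-level acceptance test for one vertically split slice of the Internet-shopper story might look like. The `Catalog` and `Order` classes and the book title are purely illustrative stand-ins, not a real shop API; the point is only that even a tiny slice can be verified from the user's perspective, end to end.

```python
# Illustrative stand-ins for the slice under test; not a real library.
class Catalog:
    def __init__(self, books):
        self._books = list(books)

    def find(self, title):
        # Return the matching title; raises StopIteration if absent.
        return next(b for b in self._books if b == title)


class Order:
    def __init__(self):
        self.items = []

    def add(self, book):
        self.items.append(book)


def test_shopper_can_order_a_catalog_book():
    # User-level check for the split story:
    # "As an Internet shopper, I want to add a book from the catalog to my order."
    catalog = Catalog(["Agile Estimating and Planning"])
    order = Order()
    order.add(catalog.find("Agile Estimating and Planning"))
    assert order.items == ["Agile Estimating and Planning"]
```

A horizontally split "DB layer" story admits no such test: nothing a shopper does can be checked until the final layer is in place.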

This brings us to a two-tiered concept of done. Level 1: the acceptance test for this story passes. Level 2: the collection of stories is useful enough to the customer to justify releasing it.

Level 2 done may be big enough that it cannot be accomplished in one sprint. So the Product Owner decides at the end of each sprint whether that level has been achieved.