
Prejudice: Scrum lacks an overview

Over the last few days, there has been a discussion on the Scrum mailing list about whether Scrum provides a good overview of the entire architecture, and whether it is possible to know if the project is on course to meet its short-term and long-term goals.

There are three questions here:
  1. What will the system do? (Functionality)
  2. How will the team implement the system? (Tasks)
  3. How will the features of the system be realized? (Architecture)
Under Scrum, the product backlog answers question 1 and only question 1. The product backlog is nothing other than a proposed feature list with an estimated effort and a priority associated with each feature.
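A product backlog of this kind can be sketched as a simple data structure. This is a minimal illustration, not part of any Scrum tool; the field names and sample stories are my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    """One entry in the product backlog: a feature, its cost, its importance."""
    story: str        # the user story, e.g. "a job hunter can upload his resume"
    estimate: float   # estimated effort for the feature (e.g. story points)
    priority: int     # lower number = the customer wants it sooner

# The backlog is nothing other than a prioritized list of such items.
backlog = [
    BacklogItem("search jobs by location", estimate=3, priority=2),
    BacklogItem("upload resume to the site", estimate=5, priority=1),
]
backlog.sort(key=lambda item: item.priority)  # highest priority first
```

Sorting by priority gives the product owner's answer to "what next?"; the estimates feed the progress measurement described below.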

The features are usually defined as "user stories" such as "a job hunter can upload his resume to the site to speed up applying for advertised jobs". Who wants it? What can he do? Why does he want it? As you get closer to implementing a particular story you may have to break it up into smaller stories, so the team can actually implement the individual functions in the course of one sprint. The fundamental message is that the team is delivering working functionality every month. The functionality may be a very small increment, but it is working and tested and of production quality.

The priorities define what the customer would like to have next.

So: you negotiate work between team and product owner purely on the basis of what the system will do. You also measure progress based on how much of the needed functionality has actually been implemented.

How do you measure progress? At the beginning of the project, sum the estimates for all the functionality required. At the end of each sprint, deduct the estimates for the functionality actually completed (adjust as well for changes in scope caused by adding, removing or re-estimating stories). Do not deduct anything for work partially completed (not done) or for work which does not provide actual functionality (e.g. a design document). Graph the estimated remaining effort as a function of time. This is the burn down chart. If you keep the sprint length constant, the slope of the chart (your velocity) will validate your estimate for the time needed to complete the project. Expressed mathematically:
  • estimatedSprintsRemaining = roundUp(functionalityRemaining / functionalityAccomplishedPerSprint)
  • estimatedTimeRemaining = estimatedSprintsRemaining * sprintDuration
If your estimates are right and your scope is constant, then estimatedSprintsRemaining should decrease by one every sprint. If not, there is a problem, and the product owner and ScrumMaster need to deal with it.
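The two formulas above can be sketched directly in code. This is a minimal illustration of the arithmetic, with function names and the sample figures (80 points remaining, 12 points per sprint, 4-week sprints) chosen for the example:

```python
import math

def estimated_sprints_remaining(functionality_remaining, functionality_per_sprint):
    """Round up: a partially needed sprint still costs a whole sprint."""
    if functionality_per_sprint <= 0:
        raise ValueError("velocity must be positive")
    return max(math.ceil(functionality_remaining / functionality_per_sprint), 0)

def estimated_time_remaining(sprints_remaining, sprint_duration):
    return sprints_remaining * sprint_duration

# 80 points of functionality left, team completing 12 points per sprint:
sprints = estimated_sprints_remaining(80, 12)      # -> 7 sprints
weeks = estimated_time_remaining(sprints, 4)       # -> 28 weeks (4-week sprints)
```

Recomputing this at the end of every sprint, from the updated remaining-functionality total, is exactly what the burn down chart shows graphically.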

So you see, at the planning and controlling level, we don't talk about "tasks" at all, only functionality.

Question 2 is the responsibility of the team.

At the start of each sprint, the team commits to a set of functions to be implemented by the end of the sprint. As a first step, the team decides what has to be done to implement the agreed-upon functionality. This is a list of tasks, and each task is estimated in hours. (The estimate at the user story level is usually more abstract.) This estimate is updated daily: how much work remains; how much has already been invested is not relevant! The daily totals are graphed to produce the sprint burn down chart. The slope of this graph tells you whether the team expects to complete all the functionality of the current sprint.
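The sprint burn down is just the daily sum of remaining hours per task. A minimal sketch, with invented task names and hour figures purely for illustration:

```python
# Remaining hours per task, re-estimated each day; invested hours are irrelevant.
daily_remaining = [
    {"write upload form": 8, "store file": 6, "acceptance test": 4},  # day 1
    {"write upload form": 4, "store file": 6, "acceptance test": 4},  # day 2
    {"write upload form": 0, "store file": 3, "acceptance test": 4},  # day 3
]

# One point per day on the sprint burn down chart.
burn_down = [sum(day.values()) for day in daily_remaining]
# burn_down -> [18, 14, 7]; a steady downward slope means the team is on track.
```

If the curve flattens or rises, the team sees it the same day and can act, rather than discovering the problem at the end of the sprint.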

Question 3 is also the responsibility of the team, but the level of quality you need to achieve depends on the larger goals of the system and is best agreed on with the product owner at the beginning of the project. If the system has an expected life of more than 6 months or so, the architecture will have to grow and change over time. Your engineering practices must support that. Otherwise your test and maintenance costs explode, and after a few years, the system is no longer maintainable and you have to build something new. (Explaining this to customers outside the IT industry can be difficult!)

Implementing architectural changes is called refactoring and being able to refactor reliably is an essential engineering discipline. To do this effectively requires extensive automated test suites -- "as-built documentation" -- not extensive blueprints written before anybody wrote a line of code.

Project Overview with Scrum

So through the Product Burn Down chart we review progress toward the overall goals, and through the Sprint Burn Down chart we evaluate progress toward the immediate goals on a daily basis. Taken together, these two charts forecast whether the project goals will be met.

A detailed architecture specification at the beginning is not helpful. The architecture needs to adapt over time. So keeping it minimal and flexible is usually advantageous.

