Using affinity estimating to choose the sessions for the Lean Agile Scrum Conference in Zurich

The LAS Conference Organizing Committee met last night to select the sessions for the LAS Conference in September. We had some 26 proposals from 19 speakers from Europe, North America and Asia. Everyone is under time pressure. How could we sift through all the proposals and agree on a program quickly and effectively?

While co-teaching a CSM Course with Peter Hundermark, I saw a new estimating technique (well, new to me ;-) called 'affinity estimating', an alternative to planning poker. I thought this would be useful for selecting the talks and tutorials, so we decided to give it a try.

  1. Everybody read all the submissions (46 pages!)
  2. For each submission, I created a card with the title of the submission. I used yellow cards for talks, blue cards for tutorials and green cards for submissions that could go either way. (Next time, I would also put the page number of the submission in the printout on each card.)
  3. I created column headers (on white cards): Great, Good, Maybe and Questionable.
First we discussed our criteria for a good presentation. We decided to rate the talks on their intrinsic quality first, and then, in a second step, apply the criteria necessary to create an attractive, balanced program. We decided we wanted 10 selections (for the program) and some alternates (just in case!). Then we agreed to try out affinity estimating to rate the submissions.

Affinity estimating is pretty simple: there is one column for each story size (e.g. 1, 2, 3, 5, 8, 13, 20, Too Big). To estimate a story, put its card down in a column; anybody can slide it to another column. If it stops moving, you have an estimate. If not, you need to discuss the story to come to a consensus. It can be very useful for (preliminary) estimates of large backlogs, because it is very fast and requires little discussion.
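For the programmatically inclined, the card-sliding loop can be sketched in a few lines of Python. This is purely illustrative: the function and parameter names are mine, not part of any tool, and in practice the whole thing happens silently with physical cards on a table.

```python
def affinity_estimate(stories, columns, propose_move, max_moves=5):
    """Illustrative sketch of the affinity-estimating loop.

    Each story starts in the first column. propose_move(story, column)
    stands in for the participants: it returns a new column if someone
    wants to slide the card, or None once nobody objects (consensus).
    A card that keeps moving is flagged for discussion at the end.
    """
    placement = {story: columns[0] for story in stories}  # initial placement
    to_discuss = []
    for story in stories:
        moves = 0
        while True:
            new_column = propose_move(story, placement[story])
            if new_column is None:       # card stopped moving: we have an estimate
                break
            placement[story] = new_column
            moves += 1
            if moves > max_moves:        # card won't settle: park it for discussion
                to_discuss.append(story)
                break
    return placement, to_discuss
```

The point of the real technique is exactly what the sketch cannot capture: many cards move in parallel, with almost no talking, which is why it is so much faster than estimating stories one at a time.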

So we set up our columns, put down 6 cards, and moved them around until they stabilized. If they didn't stabilize, we put them in a 'to be discussed' column to discuss at the end. When all 6 cards were placed, we put down 6 more, et cetera until all cards were placed.

At this point, we put all the 'maybe' and 'questionable' cards aside. We were left with 14 cards.

We discussed the criteria for creating the program. There were a number of points: the mix of management, technical, team and corporate topics, as well as Swiss and international speakers. Which topics did we think would be most important to the participants? We also wanted to ensure a balance among the various consulting companies. It wouldn't do to have one or two companies dominate the proceedings.

Next we set up the 10 slots (4 tutorials and 3 x 2 tracks) and used the same process to assign submissions to slots, looking to create the right balance. Again, moving cards around on the table was an easy way to identify which topics complemented each other, which were overlapping, etc.

Eventually we were happy and the cards stopped moving around. We wrote down the results and also took a picture of the final assignments.

That left four cards over. We decided that these would be our alternates (and we think we have enough good material for an evening event in October or November).

That was it. Four was a good number of participants in the meeting. We managed to stay in our timebox of 2 hours without really thinking about it.

And looking at the results, I think we have a good balance of interesting talks for the conference this fall (which I think is really a statement about the speakers and their submissions!).

P.S. What happens next for the speakers? I will contact them later this week, so we can start to prepare the program.
