Agile antipattern: Burndown “wall”

Does your team's iteration burndown chart (which gives credit only for completed stories) look like the one to the left? If so, there are a couple of possible explanations. Last week I blogged about how this could be a symptom of working on user stories that are too large. However, there is another possible explanation, and it is one that is FAR harder to solve. The culprit is pushing testing to the end of the iteration. The difference between the two is the steepness and abruptness of the wall: when testing is truly pushed to the end, no stories finish until the last couple of days of the iteration, while overly large stories push most completions into the last half of the iteration. See last week's entry on large stories for the subtle difference between the two burndown charts.

This is basically a version of the "code freeze during an iteration" antipattern. It is all too common for an organization that switches from waterfall to agile to keep testing the way it did previously, which generally means a handoff from development to testing during the iteration. Since waterfall tested large chunks of code, most organizations have developers write as much code as they can before sending a large chunk to testing, so development runs for as long as possible during the iteration. Stories can't be completed until they are tested, so all of the story completions happen in the last couple of days of the iteration. This forms the burndown "wall" shown in the sample burndown chart.

So what is wrong with having a burndown wall if everything gets completed? I'm going to surprise you and say there is still a LOT wrong with this structure, even when everything does get completed. Here is a list of issues you probably want to look at if this is your current situation:

  1. What are the developers doing while testing is going on?  They SHOULD NOT be working on the next iteration.
  2. If developers are spending their time fixing the defects found during testing, how do you guarantee that all the defects will get fixed and retested before the end of the iteration?
  3. What happens if testing runs up against a big problem and can’t finish testing the stories prior to the end of the iteration?
  4. If there is pressure to finish testing in time, do testers subconsciously cut down on the breadth or depth of their testing?
  5. What are the testers doing during the early part of the iteration when the developers are writing a pile of code?

These are real concerns, and in my experience one or more of them will cause significant issues in almost every iteration. If a team actually manages to overcome all of them in the majority of its iterations, there is still one more question to answer: is this the most efficient way to work?

Team after team after team has proven it is not. When teams switch to Acceptance Test-Driven Development (ATDD) with automated tests, they become much more efficient and gain the ability to accept stories every day. Accepting stories every day gets rid of the wall and produces a more classic-looking burndown chart, and knowing things are completed and working every day helps the entire team feel better about how they are performing.
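
For teams wondering what an automated acceptance test might look like in practice, here is a minimal sketch. The story ("orders of $100 or more get a 10% discount"), the function name, and the pytest-style tests are all hypothetical; the point is simply agreeing on executable examples with the product owner before the code is written, so the story can be accepted the moment the tests pass.

```python
# Minimal ATDD-style sketch (hypothetical story: "orders of $100 or more
# get a 10% discount"). In real ATDD the acceptance tests below are agreed
# on with the product owner and written BEFORE the implementation, then run
# automatically so the story can be accepted as soon as they pass.

def discounted_total(order_total: float) -> float:
    """Apply a 10% discount to orders of $100 or more."""
    if order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total


# Acceptance tests expressing the story's agreed examples (run with pytest).
def test_order_at_threshold_gets_discount():
    assert discounted_total(100.00) == 90.00

def test_order_below_threshold_pays_full_price():
    assert discounted_total(99.99) == 99.99
```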

Until next time, work at Making Agile a Reality® by integrating testing and development so stories can be accepted every single day, and see how much your team can improve!

Responses

  1. Great post.
    I saw a burndown like this on my last project in the very first sprint. It was due to testing, but not directly from the team (we did TDD before development and exploratory or manual testing after). Part of our definition of done included the product owner saying explicitly that they were happy with the story, rather than implicitly via automated acceptance tests. The sprint would otherwise have had a "normal" burndown, but because the PO had other items on their agenda this sign-off did not happen until the last day of the sprint, and we had a wall like your example. We made some progress in changing this pattern. Do you have any advice on getting the PO to agree that their acceptance tests are enough once automated?

  2. Mike, I think a PO probably should actually see it rather than relying on automation. Sorry to have to go against what you wanted me to say. However, I tell teams I train that an unavailable PO is the #1 cause of project failure. A better approach is to ask the PO about the risk involved: if he or she doesn't approve something on that last day, what happens? Is it ok for the work to spill over into the next iteration? I consider that a MUCH worse issue because it leads to all kinds of possible bad behavior on the part of the team and the PO.

    The goal of teams should be to have the ability to have a story accepted every day (they won't always use that ability, but they should have it). Whether this means the PO says automation is ok, uses a proxy PO, or is simply more available doesn't really matter to ME. It should matter to the team and the PO, though. Find a way to say "yes, we can accept something every day" and this problem (and many others) will go away.

  3. Hi Bob,

    I faced the very same issue when I introduced agile to a team for the first time. But instead of a burndown chart, we used a more Kanban-like story wall as our main information radiator, so instead of seeing a "cliff", we saw a big cluster of stories gathering in the "testing" column of our wall. And, of course, we had all the issues you mention in your post.

    For us, making the developers aware of their responsibility to drive a story to done did the trick. They had to make sure they got a sign-off from QA and finally from the PO. Before we changed to that model, they had a "commit and forget" mentality: "Someone will be there to tell me if there is a problem with my stuff…"

  4. Matthias, thanks for the story. It is all too common. It sounds like you weren't quite doing true Kanban, where a WIP limit on testing could have helped. I know Kanban is usually a pull system in which you can't pull more work into a state beyond its WIP limit, but a limit on pushing into a state can also be enforced; a rough sketch of that idea follows this comment. Limiting WIP in all states almost universally improves a team's performance.

    Yesterday in a talk at Google someone tweeted that Jeff Sutherland said “a team that concentrates on finishing ONE story before moving to the next will be twice as productive as other teams.” I thought that was rather enlightening since his expertise is in the area of hyper-productive agile teams.
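
To make the push-limit idea from that reply concrete, here is a minimal sketch; the column class, limit value, and method names are hypothetical and are not taken from any particular Kanban tool.

```python
# Minimal sketch of a push-limited Kanban column (hypothetical names).
# Instead of relying purely on pull, the column refuses new work once its
# WIP limit is reached, forcing the team to finish testing before starting more.

class Column:
    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.stories: list[str] = []

    def push(self, story: str) -> bool:
        """Accept a story only if the WIP limit has not been reached."""
        if len(self.stories) >= self.wip_limit:
            return False  # at the limit: finish something before pushing more
        self.stories.append(story)
        return True


testing = Column("testing", wip_limit=2)
print(testing.push("story-1"))  # True
print(testing.push("story-2"))  # True
print(testing.push("story-3"))  # False: limit reached, swarm on finishing work
```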