Agile antipattern: Code freezes during each iteration

Over the past 18 months I’ve encountered a number of teams where it is standard practice to have a code freeze late in the iteration.  The reason given is “to allow QA to test what we created during the iteration.” I’m sorry, but I have to be blunt here – this isn’t agile! It raises three main questions:

  1. What are the developers doing after the code freeze?
  2. What happens if defects are uncovered after the code freeze?
  3. Why could testing not be done earlier in the iteration?

The answers to these questions are often enlightening.  Let’s take them one at a time:

1.  What are developers doing after the code freeze?

I’ve heard two primary answers to this.  The first is that developers are fixing defects during that time.  When pressed, I usually find out these are defects from earlier iterations (although this isn’t always the case – more on that later).  If we are fixing defects from previous iterations while QA is working on testing the current iteration, how do we ever get caught up?  Maybe I’m being a bit dense, but I just don’t see the math working on this one.

The second most common answer is that developers are working ahead on things for the next iteration.  Put differently, developers are creating yet more untested code before the commitment for the current iteration has been met.  Let’s think about this model for just a minute.  Assume we are in iteration 1 of a 10-day iteration cycle, and the team does a code freeze on day 7.  Coding occurred for 7 days and testing will go for 3 days.  But the developers start work from the next iteration on day 7, so for the next iteration they get 3 days from this iteration and 7 from the following one, for a total of 10 coding days, followed by 3 testing days.  This cycle continues until the project completes.  Anyone feeling sick yet?  Again, being blunt, this is not agile; it is mini-waterfall, or what I prefer to call “wagile.”
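The drift described above reduces to a few lines of arithmetic. This is a toy model of the example in this section (a 10-day iteration with a freeze after day 7), not data from any real team:

```python
# Toy model of the "work ahead during the code freeze" pattern.
# The numbers (10-day iteration, freeze after day 7) come from the
# article's example and are assumptions, not measurements.

ITERATION_DAYS = 10
FREEZE_DAY = 7                             # code freeze at end of day 7
TEST_DAYS = ITERATION_DAYS - FREEZE_DAY    # 3 days of QA per iteration


def coding_days(iteration: int) -> int:
    """Days of coding feeding into a given iteration's testing window."""
    if iteration == 1:
        return FREEZE_DAY                  # 7 days, no carry-over yet
    # Every later iteration inherits the 3 "work ahead" days from the
    # previous iteration's freeze window on top of its own 7 days.
    return TEST_DAYS + FREEZE_DAY          # 3 + 7 = 10


for it in range(1, 5):
    print(f"iteration {it}: {coding_days(it)} coding days -> {TEST_DAYS} testing days")
```

Every iteration after the first carries 10 days of coding against the same 3 days of testing, which is why the testing debt never shrinks.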

2.  What happens if defects are discovered after the code freeze?

Again, in Family Feud style, the most common answer is that the defects are fixed in the following iteration.  This is a continuation of the first half of the previous answer.  How can the math ever work?

However, this isn’t always the answer.  Sometimes the answer is developers fix them as they are found.  This answer is MUCH less common, but does exist.  The problem with this answer is very simple: if it is possible for developers to fix defects in real time after the code freeze, why could this not be done earlier in the iteration?  Which leads to question 3…

3.  Why could testing not be done earlier in the iteration?

Two answers here are equally common.  The first I’ll cover is “it takes too much time/effort to move the code to our QA environment, so we only do it once per iteration.”  I understand the need for a QA environment for testing, but is it necessary for ALL testing?  In most of the cases I’ve seen, it is possible to do a tremendous amount of testing in a development environment to make sure everything is basically working before promotion to a QA environment.  The QA environment should really be renamed the Verification environment: ideally we use it to verify everything works in the stable environment and passes all tests just as it did in the development environment.  So step one is to do more testing in the development environment.  Step two is to use automation tools to build a stable environment quickly.  When pressed, most teams can create an automated process to do a bare-metal environment build within a short period of time, and it is worth the effort.  Once that exists, a new build can be deployed at any time – at least nightly.  Combine it with automated regression testing and really get some value from it!
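As a sketch of steps one and two above, a nightly pipeline can be as simple as running each stage in order and stopping at the first failure. The script names here are invented placeholders, not any real project’s tooling:

```python
# Minimal sketch of a nightly "build, provision, regression-test" loop.
# The three script names are hypothetical stand-ins for a team's own
# build script, automated environment build, and regression suite.
import subprocess

STEPS = [
    ["./build.sh"],            # assumed build script
    ["./provision_qa.sh"],     # assumed bare-metal environment build
    ["./run_regression.sh"],   # assumed automated regression suite
]


def run_step(cmd) -> int:
    """Run one pipeline step and return its exit code."""
    return subprocess.run(cmd).returncode


def nightly(run=run_step) -> bool:
    """Run every step in order; report success only if all succeed."""
    return all(run(cmd) == 0 for cmd in STEPS)
```

The `run` parameter is injectable so the loop itself can be exercised without the real scripts; the point is only that once this exists, “promoting to QA” stops being an excuse for a single freeze per iteration.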

The second answer I usually hear is “we don’t know what to test until the code is done anyway.”  Uh oh – can anyone even count how many ways this statement isn’t agile?  Let’s remember the agile process:

  1. Product Owner (or similar role) creates stories and some basic acceptance criteria.
  2. During iteration planning more acceptance criteria are created based on conversations between tester, developer and Product Owner.
  3. During iteration the developer and tester collaborate on their understanding so the code can be written and acceptance criteria can be turned into tests.
  4. During development the developer can (and should) access any acceptance tests already written to see how their code is doing.
  5. The developer isn’t “done” until the code passes all unit tests and acceptance tests.  Remember the definition of done!
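Steps 3 through 5 above can be sketched as executable acceptance tests the developer runs before calling a story “done.” The story, the `apply_discount` function, and the 10% member discount below are all invented for illustration:

```python
# Hypothetical acceptance tests derived from a story's acceptance
# criteria. The story ("members get 10% off") and the function name
# are assumptions made up for this sketch.

def apply_discount(total: float, is_member: bool) -> float:
    """Toy implementation the developer writes against the tests."""
    return round(total * 0.9, 2) if is_member else total


def test_member_gets_ten_percent_off():
    # Acceptance criterion 1: members pay 90% of the total.
    assert apply_discount(100.0, is_member=True) == 90.0


def test_non_member_pays_full_price():
    # Acceptance criterion 2: non-members pay the full total.
    assert apply_discount(100.0, is_member=False) == 100.0
```

Because the tests exist before the code is finished, the tester always knows what to test and the developer always knows when the story is actually done.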

When things are done in this manner the tester ALWAYS knows what to test because they are in close communication with the developer.  Collaboration is vital to agile success.  This is just one example driving it home.

The bottom line for me is there is no legitimate reason for having a code freeze during an iteration.  Teams need to either invest the time/effort to put together a system which doesn’t require a code freeze, or stop calling themselves agile.  This is a nasty antipattern which will cause confusion or worse.

Thank you for reading.  Until next time I’ll be Making Agile a Reality™ for my clients by helping them avoid code freezes during iterations.

Comments
  1. Amen!

    I consider this one of the many symptoms of mini-waterfall. I guess kayaking over a 25-foot waterfall every month is better than doing Niagara Falls in a barrel every 18 months, but I agree that it’s not what I’d call Agile.

    A related symptom that makes me crazy is having a “stabilization” period during the iteration. My solution: stop making things unstable all the time, and you won’t need that.

  2. Yes – you make a mockery of velocity calculations if you are adding further critical work to the backlog almost as fast as you burn it down! It’s one form of project “creep”, one of the parameters to my iterative development calculators, which you’ll find in the sidebar of my site. Do please give them a whirl and tell me what you think!

  3. Dear All,

    This post closely relates to one of the questions that I have about scrum (see Q1 below). I’ll post all 3 questions here, if anyone knows the answers, please let me know.

    Thank you for your help!!


    I am a programmer working at a company that uses scrum. I have worked here for 10 months now but I still do not understand some of the rules of scrum. I wonder if anyone here could answer my questions.

    Q1: During every sprint we are supposed to deliver fully implemented and QA’d features. In theory it means that during the last 3 days of a sprint a developer cannot work, because his work would not be tested (we have a large and complex app, and testing every feature takes lots of time). Also, at the beginning of a sprint, in theory QA does not do anything before the first new feature is implemented, since everything was perfect at the end of the previous sprint. Of course in practice things happen differently, but this is my understanding of what we try to achieve. What is the resolution? Is it a good approach to say that QA’ing a feature should be done in the next sprint, so no feature should be planned to complete within a single sprint?

    Q2: Our team is organized around a feature, not around technology, and we have a multi-layer application built on completely different technologies. Therefore in our team we have an SQL expert, some ASP.NET programmers, some C++ guys, a driver developer and some QA people (we have 11 members). We also have external dependencies, because we implement an add-on to a platform, so of course we depend on everything in the platform. Now when it comes to estimates, you can give 1 estimate (story points) to each PBI. But since we take on work based on priorities, it can happen that of 70 story points, which is our average velocity, 55 would have to be implemented by the single SQL guy while, e.g., the C++ guys do not do anything. In this situation, is there a problem with how the teams are organized, or should the PBIs be separated into “horizontal slices”? Or where is the mistake?

    Q3: What is the usual way to manage team dependencies? As I have said, we are developing an add-on to a large platform. Is it usual that every team changes code until the last minute, and then there is a hardening sprint and that’s it for testing? Or is it usual to turn to dependency/Gantt charts: PBIxxx has to be implemented before PBIyyy can be picked up? In this case, of course, PBIxxx and PBIyyy are not vertical slices.

  4. I would have to disagree with this post. “Code freeze” as a term has changed its meaning since the waterfall days, but because we have no better words, we still call the period where a deliverable is locked and no longer mutated a code freeze.

    Let’s see how we can answer the 3 questions.
    1. What are developers doing after the code freeze?
    Answer: Let’s say you have 10 stories of equal size and 10 developers; each developer would take 1 day to finish their own story. Then your QA would write test cases and automation for the 10 stories on the first day, and finish testing in the first 4 hours of the second day. Let’s further venture that you have absolutely no bugs whatsoever. Then effectively you have a two-day sprint, and you can ship your product by noon on the second day.
    Sounds freaking agile enough, right? This is, however, still an effective code freeze for the 4 hours QA is testing, because you are cutting the code based on the 10 stories that were developed.

    So what would the 10 developers work on during the second day? Well, they have to work ahead! And on the third day by 12:00pm, QA would be finishing up the tests again and be ready to ship the product.

    Now, if you absolutely have no code freeze, then by the shipping time of 12:01pm the 10 developers would have developed half of the next 10 stories. Well, are you going to ship those? Because you have no code freeze, I guess you would be content to ship the half-finished 10 stories from day 2, right? No, I don’t think you are advocating shipping half-baked code. Then for the original 10 stories finished on day 1, you are doing a code freeze on them; no matter what you call it, it’s a code freeze.

    We can simplify this math to an even smaller scale to make it easy. There is one story that takes 2 hours to code and 30 minutes to test, and then you ship it to production. OK… now you have a 30-minute code freeze on this single story in a sprint of 2.5 hours.
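The commenter’s smallest example works out as follows. This is just the arithmetic from the comment above, not project data:

```python
# The commenter's numbers: 2 hours of coding, 30 minutes of testing,
# shipped immediately afterwards.
code_hours = 2.0
test_hours = 0.5
sprint_hours = code_hours + test_hours        # the 2.5-hour "sprint"
freeze_fraction = test_hours / sprint_hours   # share of the sprint frozen

print(f"effective freeze: {freeze_fraction:.0%} of the sprint")  # 20%
```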

    2. What happens if defects are discovered after the code freeze?
    Fix it or roll back the feature based on risks

    3. Why could testing not be done earlier in the iteration?
    Answer – testing can be done as soon as code is checked in. Yep… and in the previous example, all code was checked in on day one and tests run for a few hours on day two. Gasp, what happened between the test runs? Didn’t we just have a code freeze on the 10 stories from day 1? What does a code freeze even mean here? Oh, it means dev can no longer change anything in the 10 stories from day 1 during the test unless a bug is discovered. But hey, we have no bugs ever! So we got a 4-hour code freeze and the product is ready to ship at 12:01pm.

    Now let’s address the automated testing part. I can write up the automation faster than anyone, but the fact is, if someone built a new calendar widget on the screen and I’m going to automate this sucker, it would take 4 hours after that calendar widget is checked in. Then I’m going to run the new automation tests with all the other automation suites, which would take 8 hours to run. Assuming developers have no bugs, I just got a 12-hour code freeze. If you have no code freeze on the calendar, dev would keep tweaking it left and right. Then I can only guarantee the version which I tested with my automation, not whatever was developed after my regression started. All of a sudden, my version of the code sounds like the version post code freeze! And the dev’s new version, written after the regression starting point, sounds like… future development!