

Why Crunch Mode Doesn't Work

In 2004, a blog post by the disgruntled spouse of an employee at an international electronic games company sparked a mountain of media coverage and online discussion. Evan Robinson picks up the mantle with an article for the IGDA on six reasons why crunch mode doesn't work:
  1. Productivity varies over the course of the workday, with the greatest productivity occurring in the first four to six hours. After enough hours, productivity approaches zero; eventually it becomes negative.
     
  2. Productivity is hard to quantify for knowledge workers.
     
  3. Five-day weeks of eight-hour days maximize long-term output in every industry that has been studied over the past century. What makes us think that our industry is somehow exempt from this rule?
     
  4. At 60 hours per week, the loss of productivity caused by working longer hours overwhelms the extra hours worked within a couple of months.
     
  5. Continuous work reduces cognitive function 25% for every 24 hours. Multiple consecutive overnighters have a severe cumulative effect.
     
  6. Error rates climb with hours worked, and especially with loss of sleep. Eventually the odds catch up with you, and catastrophe occurs. When schedules are tight and budgets are big, is this a risk you can really afford to take?
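The arithmetic behind points 1 and 4 can be sketched with a toy model. The specific numbers below (a 10% per-week productivity decay under sustained 60-hour weeks) are illustrative assumptions for the sketch, not figures from Robinson's article:

```python
# Toy model: cumulative output of a sustained crunch schedule vs. a
# normal schedule. Assumption (illustrative only): under repeated
# 60-hour weeks, average per-hour productivity falls by a fixed
# fraction each week.

def weekly_output(hours, productivity):
    """Effective output = hours worked x average per-hour productivity."""
    return hours * productivity

def crunch_output(hours=60, decay=0.10, weeks=12):
    """Cumulative output over `weeks` of crunch, with per-hour
    productivity decaying by `decay` each week from a baseline of 1.0."""
    total = 0.0
    productivity = 1.0
    for _ in range(weeks):
        total += weekly_output(hours, productivity)
        productivity *= (1.0 - decay)
    return total

normal_12_weeks = weekly_output(40, 1.0) * 12   # steady 40-hour weeks
crunch_12_weeks = crunch_output(60, 0.10, 12)   # decaying 60-hour weeks
```

Under these assumed parameters the 60-hour weeks out-produce the 40-hour weeks early on, but the weekly output of the crunch schedule drops below a normal week within about a month, and its cumulative output falls behind within a couple of months, which is the shape of the effect Robinson describes.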
Indeed, the evidence for the eight-hour day, five-day week has been around, and in practice, since 1926:
When Henry Ford famously adopted a 40-hour work week in 1926, he was bitterly criticized by members of the National Association of Manufacturers. But his experiments, which he'd been conducting for at least 12 years, showed him clearly that cutting the workday from ten hours to eight hours — and the work week from six days to five days — increased total worker output and reduced production cost. Ford spoke glowingly of the social benefits of a shorter work week, couched firmly in terms of how increased time for consumption was good for everyone. But the core of his argument was that reduced shift length meant more output.
So what elements of this end up affecting the software industry so much? Commonly, projects are planned on the flawed assumption that there is a fixed amount of work to be done - a common mistake known as the "lump-of-labour fallacy". Agile methodologies such as Scrum avoid making this assumption; while this doesn't eliminate the end-of-iteration crunch, it does cap the crunch time to a percentage of the iteration. Learning is often planned for inadequately, or not at all, yet can take up to 70% of the time to deliver a project (see "The Secret Sauce Of Software Development").

So, if we (as managers) know this is wrong, why does it keep happening? The author poses his viewpoint:
Managers decide to crunch because they want to be able to tell their bosses "I did everything I could." They crunch because they value the butts in the chairs more than the brains creating games. They crunch because they haven't really thought about the job being done or the people doing it. They crunch because they have learned only the importance of appearing to do their best instead of really doing their best. And they crunch because, back when they were programmers or artists or testers or assistant producers or associate producers, that was the way they were taught to get things done.
Esther Derby has a different viewpoint - that we fail to plan for what could go wrong:
We go through stages of understanding the problem—we gather requirements, develop analysis models, and then design software solutions. We develop plans to build and deploy the solution. We come up with a well-ordered set of actions that will lead us logically and inevitably to the goal.

And then we skip an important step. We don’t sit down and think about what could go wrong. We learn about weakness in our plan and design approach as we go. Discovering oversights by running into walls costs money, causes delays, and can compromise quality.
Inevitably, it seems that the factors that bring about crunch time are entirely human. What methods have readers found to combat the crunch-time phenomenon? Is it simply a human facet of engineering, or is it something that is wholly unnecessary?


Community comments

  • Planning for "what could go wrong"

    by Michael Neale,


    I think the reason there is this optimism, and this ignoring of what "could" go wrong, is that if we did add it all up, the risk and cost would pretty much freak everybody out and nothing would get done. It's more a case of people ignoring the risks and hoping it works out.

    It's not just software, either. Ask anyone who has built a house, renovated, built a bridge, a building, etc.

    This sort of problem is not unique to software (why do we assume that software projects are so much worse than in other "building" type industries?).

  • Bogus schedules, unknown output, and no risk management

    by Dean Schulze,


    I generally agree with this.

    A few thoughts.

    Most software schedules are falsehoods and everyone knows it. This is where the problem of crunch time starts.

    Very few managers know how to measure software output. They measure costs, but not output. This is what drove the offshoring phenomenon. They can't measure output for onshore or offshore projects, but they can measure costs, so they go with what is cheapest and hope they get a working project.

    Acknowledging risks gets you labeled a "negative thinker". You can't mitigate risks if you can't talk about them.

  • Link to IGDA Article, and Profitability

    by Peter Wagener,


    Maybe I missed it, but I couldn't find the link to the actual IGDA article. For others, here it is.

    Working as an hourly contractor, I am very careful about how much I work each week. I really try to keep it in the 40-45 hour range. I don't want to work too much, because I think it hurts my *profitability* in the long term. To wit:

    1. Working more than was promised in the contract, in order to meet a pressing deadline, tends to blow up my clients' budgets. This is more immediately noticeable for me (as a contractor) than most people I work with (salaried employees).

    2. Just by observing the results of my work over the years, I know that pushing myself to work more results in lower-quality code: More bugs, less documentation, more corner-cases that I failed to consider. As a result my client is less likely to want to retain me at a decent hourly rate.

    Making these observations at a personal level is easy to do; doing so at a group or corporate level is obviously more difficult. The article makes a compelling case for it, although I wonder if many software development companies will take the lessons to heart.

  • Re: Planning for

    by Adron Hall,


    "Its not just software either. Ask anyone who has built a house, renovated, built a bridge, a building etc."

    Absolutely wrong. Just check out the percentage of housing, skyscrapers, office complexes, bridges, rail construction, and other efforts that are finished on or before the scheduled completion date. There is a major difference between an industry that has about 30 years of experience and one that has been around for well over a thousand years. They are not, at this stage, directly comparable - maybe in a parable sense, but nothing more.

    "This sort of problem is not unique to software (why do we assume that software projects are so much worse then in other "building" type industries?)."

    It IS unique to software because of the ignored risks and ignored facets of the creative thought process required to build software. Agile is a prime example of just how different it is. Nobody builds a house, then says, alright, here's what I've got, what else do we need to do? Nope, everything is planned beforehand, and part of the schedule is built in as risk mitigation. Software doesn't have that. Risk mitigation is rarely spoken of, and even more rarely is it planned for accordingly.

  • Re: Bogus schedules, unknown output, and no risk management

    by Adron Hall,


    "Most software schedules are falsehoods and everyone knows it. This is where the problem of crunch time starts."

    It could not have been written better. Most software schedules are exactly that, falsehoods.

  • Agile process means less crunch time

    by Kirstan Vandersluis,


    Software development is hard to schedule accurately because user expectations evolve during the application building process. The analogy with home construction is a customer walking through a half-built house and saying, "now that I see the fireplace there, I can see it would be much better over here". A scenario for escalating, unpredictable costs, for sure. In software development, we allow users to make changes, recognizing they don't have the frame of reference that home customers have. If you've lived in a house your whole life and seen models of the features you're buying, you can be expected to order what you want. But if you're a manager trying to automate a complex business process, you don't have a firm grasp on what's even possible. It's understandable that you don't communicate your requirements until you start seeing what's possible in a working system. Requirements change, schedules slip, developers work overtime, costs escalate.

    Iterative development processes have evolved over the last 20 years to address this phenomenon. Today's agile processes work well in the face of dynamic requirements. Users see frequent releases of new functionality, and can make adjustments as warranted. Short term scheduling tends to be pretty accurate, though long term scheduling, and overall project cost may still not be as accurate as managers hope. But even if overall costs can't be accurately predicted, it is much more likely that the resulting system meets users' needs. And very often, intermediate releases begin providing real value, building credibility for the development group while helping justify costs.

    The result for developers is that the more predictable short term schedule usually means far less "crunch time".

  • What happened after the initial blog post....

    by Deborah (Hartmann) Preuss,


    I was looking for data on this issue and came across this:

    The writer of the original 2004 blog and her husband were plaintiffs in one of 3 lawsuits against that employer. In April 2006, the employer settled to the tune of $14.9 million.

  • Anyone got a link to the graph of GDP vs Workweek Length?

    by Deborah (Hartmann) Preuss,


    I saw a graph last year showing a dozen or more countries, plotting their average workweek length and GDP, suggesting an inverse relationship of sorts. Anyone got the link?

    While looking for that graph, I read a lot of interesting things. I'm in Canada, where we apparently have it better than the US. I was interested to read about such things as:

    • people penalized or fired in retaliation for their taking the full vacation allowed by company policies,

    • on average, Americans work 9 weeks more per year than peers in western Europe

    • many Americans get no paid vacation at all. Ack! In Canada, every full-timer has a minimum of 2 weeks (taken or paid out)!

    • the 1993 European "Working Time Directive," primarily about safeguarding workers' rights, includes a maximum 48-hour week, a rest period of 11 consecutive hours a day, and a rest break when the day is longer than six hours

    • TimeDay.org seems to have declared October 24th "Take Back Your Time" day

    There's more interesting history listed here by a group touting "Timesizing, not downsizing".

  • Re: Planning for

    by Evan Robinson,


    Except that the excuse of optimism only makes sense for the first project. After that, everybody knows that stuff goes wrong.

    You're right that the problem isn't unique to software, but I think the severity of the problem is larger in software. By some measures, 75% of software projects fail (IIRC) and the vast majority are late and over budget by huge amounts. I've done a 4 year remodel on a house, OK? I know that construction doesn't run to plan either, but it does a great deal better than software.

    The old saw about "if builders built buildings the way programmers build software, the first woodpecker to come along would destroy civilization" isn't quite right, but at least most buildings do what they're supposed to do -- they stand up and they keep the water and the wind out. We can't say that about most software projects -- not only are they late and over budget, they very often don't do what they're supposed to do.


    p.s. -- yes, I am the Evan Robinson that wrote the article.

  • Re: Bogus schedules, unknown output, and no risk management

    by Evan Robinson,


    Very few managers know how to measure software output. They measure costs, but not output. This is what drove the offshoring phenomenon. They can't measure output for onshore or offshore projects, but they can measure costs, so they go with what is cheapest and hope they get a working project.


    There's a pretty good argument that it's not just hard but impossible to meaningfully measure programmer output. The conventional source-level measures are all language dependent and easily game-able (and all, or nearly all, programmers like to game systems). Function point-like measures are incredibly heavy and often don't deal well with really interactive software (like, say, games). Feature level measures are so fuzzy as to be essentially useless. About the only output measure that's not complete shite is bugs (finding and fixing), and even that is about much more than just programmers -- the efficiency of your QA organization has a lot to do with it as well. And even if rate of bugs produced/fixed were fully measured, it doesn't tell you a lot about how fast your programmers are producing useful code.

    In many ways, software development is not a science but an art, and it's notoriously difficult to measure art.

