Key Takeaways
- The purpose of software engineering is avoiding failure more than aiming at success
- Software projects are unique; applying a recipe-based approach is dangerous, and the process must be tuned to fit the project
- Short iterations are fine in most cases, but can sometimes be dramatically counter-productive
- Testing is a noble aim, but one must decide how and what to test
- No rule should be inherited from a higher authority, guru, book, movement or methodology without extensive questioning
Darius Blasband has written the book The Rise and Fall of Software Recipes, which challenges the conventional wisdom of software engineering. He protests against the adoption of recipes and standards-based approaches and rails against the status quo. He calls himself a codeaholic who advocates for careful consideration of the specific context and the use of domain-specific languages wherever possible.
You can download two sample chapters: Introduction and Recipes.
InfoQ: Why did you write this book, what is the problem you are addressing?
Darius Blasband: I started making notes that eventually led to this book when on the road, spending hours and days in planes, trains and hotels, crossing multiple time zones to attend business meetings that sometimes did not even amount to an hour in total. A gigantic waste of time.
This was not just a way for me to keep busy.
The problem I am trying to address with this book is the lack of historical perspective in software engineering. New techniques, methodologies or programming languages are presented as the philosopher’s stone, without reflecting on the hundred similar best-things-since-sliced-bread that have come and gone over the last four decades.
Take the Agile movement for instance. It is presented as the ultimate solution to all problems related to software. If its self-serving prose is to be believed, no good software was ever written before the twenty-first century.
That is obviously (and fortunately) not true.
A minimal sense of perspective and history shows that many of the techniques Agile advocates have been around for a very long time, and the funky terminology and cult-like following aside, it contributes precious little to the state of the art in software development.
Another, just as important, manifestation of this lack of perspective: most software projects are architected to optimize development time, making the best use of the most advanced technology and the skills available.
Many mission-critical software systems must be kept alive for twenty years or more. They must survive numerous tectonic shifts, the demise of once all-powerful vendors, key people retiring together with their hard earned expertise, etc.
If one looks at the big picture, it is clear that optimizing development time at the expense of the – admittedly less exciting – concerns for the long-term exploitation of any serious software system amounts to a dramatic misallocation of resources.
InfoQ: Why were you the right person to write the book?
Darius Blasband: Am I really? I would not be so sure.
But if I must come with an answer, I’d say:
- Because no one else did.
- Because most people with 30 years of experience in industrial software development moved away from purely technical matters a long time ago, and technicalities matter, much more than common management wisdom would suggest
- Because I don’t answer to anyone (except for my wife, that goes without saying) and can question the status quo and flame the platitudes of the day with no impact on my professional life
- Because I have been fortunate enough to work on very different projects over the years: large and small, for small shops or for multinationals, mundane administrative applications or challenging one-of-a-kind cutting-edge projects.
InfoQ: What is a “software recipe” and what is the fundamental issue with them?
Darius Blasband: I refer to a software recipe as any methodological component, any definition of the software development process, any a priori subdivision into tasks that constrains the developers in the way they work, independently of the application at hand, the technical environment, performance constraints, etc.
Waterfall-inspired methodologies are recipes.
The Agile movement claims that what it advocates does not amount to a methodology, but they still insist on short iterations, continuous testability, etc. It is very prescriptive, and in my eyes, it definitely is a recipe as well.
To make a long story short, I’m not a fan of software recipes.
Software comes in many shapes and forms, and the environmental constraints can vary dramatically; so do the sizes of projects, their criticality, their interfaces with the external world, their sensitivity to time, to budget, etc.
In a nutshell, software projects are very different from each other. Applying a pre-munched rulebook across the board amounts to ignoring these differences, assuming they matter less than whatever similarity remains, an assumption that should not be made without extensive empirical validation.
And to say the least, this validation has rarely been conducted formally, and when it has, it has not been very conclusive.
InfoQ: Does your approach mean that there is no process or methodology which teams should follow? If so what do they do instead?
Darius Blasband: When it comes to software development, I am not a proponent of anarchy (even though it works better than many would fear). A project can be structured and tasks can be defined. In short, and to varying degrees, a process can be put in place.
This process must be tuned to fit the project, and should not come from a generic one-size-fits-all rule book:
- Short iterations are fine in most cases, but can sometimes be dramatically counter-productive. Piling up ladders week after week sounds like progress when one is oblivious to the final goal of landing on the moon.
- Testing is a noble aim, but one must decide how and what to test: components, system, performance, functionality, stability, robustness, etc. Short of that, people end up testing for the sake of testing, testing for the wrong things, and delivering little value in practice.
In short, one can define a process to suit the project at hand, where all decisions must be based on a rational assessment of the facts. Nothing should be done dogmatically. No rule should be inherited from a higher authority, guru, book, movement or methodology without extensive questioning.
If the answer to any question of why something is done goes along the lines of “Because the Agile manifesto says so”, you know you are in trouble.
InfoQ: What is a codeaholic and how is that different from a code & fix hacker?
Darius Blasband: A hacker focuses on getting to a working solution as fast as possible, knows all the tricks in and off the book, how to cut corners to achieve his/her goals. A hacker focuses on executability almost exclusively. “It just works” is his/her motto.
A codeaholic is driven by very different motivations.
He (or she) understands that code is written once and read many times, and will not sacrifice the latter for the instant gratification of the former. In a codeaholic’s eyes, the true value of a piece of software lies in its ability to be maintained, understood, modified.
Therefore, elegance, clarity of purpose and compliance with standards matter, because they make the code more predictable, manageable and understandable.
Getting the software to work and provide the required functionality with adequate performance is essential, but it is not the end of the story. It is merely the beginning. Making it manageable, readable, structured, elegant, is where the real work is, where the hard part really begins.
InfoQ: You make the statement that “The purpose of software engineering is avoiding failure more than aiming at success” – why, and what are the implications of this on the way we develop software?
Darius Blasband: As for the “why”, it comes from the intrinsic complexity of the trade. The number of moving parts is such that, left to their own devices, at least one will fail more often than not.
As an industry, we address this by a number of means:
- We use redundancy, to mitigate the effect of failures
- We overspend on testing, showing our – often justified – lack of trust in the quality of what development teams deliver
- When possible, we simplify the solution and cut down the scope, in effect reducing the number of moving parts
- We go for intermediate deliveries, so that at worst, even a project that runs late and over budget has something to show and has delivered something of value
- As odd as it sounds, we even go as far as educating our users in how tolerable some failures must be. (Would you be willing to buy a car from a dealer who warned you that it may not start a couple of times a year, but that’s fine because you can exit the car, lock it, go through the whole process of unlocking and entering it again, and then try to start the engine once more? And twice a year is no big deal after all. Stop whining.)
InfoQ: You advocate for Domain Specific Languages, how can these help and what are the implementation challenges and opportunities for DSLs?
Darius Blasband: It is fair to state that depending on who you talk to, the DSL concept can describe vastly different realities. Many will focus on what it is: whether it is an extension to an existing language, or how orthogonal it is in terms of how its concepts interact with each other.
My personal definition of a DSL is more about how it is being used: a DSL is a language that is managed/defined by a small group of people, who can tune it to fit their very specific needs.
To a degree, the size of the audience is a subjective matter: how small must a group be to qualify as small enough to be an audience for a DSL?
On the other hand, changeability is a more objective measure.
By allowing the DSL to be an adjustable rather than a fixed parameter, a DSL strategy allows a project to balance the amount of complexity that is put in the paradigm (the pre-defined capabilities of the DSL) against the part that is put in user code, written in the DSL.
It is an extreme form of divide and conquer.
This being said, I am deeply biased. I am a language/compiler person. Writing a DSL comes easier to me than to most.
But even for engineers without extensive language expertise, in any serious DSL project the effort required by the language infrastructure (lexical, syntactical, semantic analysis, etc.) is dwarfed by all the ancillary tasks: interfacing it with the rest of the infrastructure, providing the paradigm’s basic building blocks, supporting the users, etc.
The imbalance is such that cutting down on the language infrastructure, and extending an existing language instead, in effect turning the DSL into a mere library, is often a poor design decision.
Forgoing the ability to operate on the syntax limits one’s ability to make the DSL more flexible and more expressive.
Not only does syntax matter. It is also a relatively cheap and easy way to make a serious difference, and deliver on the DSL promise.
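To make the contrast with a plain library concrete, here is a minimal sketch in Python (not from the book; the discount rules, names and grammar are invented for illustration) of a standalone DSL: a few lines of domain text, a hand-written parser and a tiny interpreter. Owning the syntax is what lets the rules read like the domain instead of like API calls.

```python
import re

# The DSL source: two pricing rules written in domain terms.
RULES = """
discount 10% when total > 100
discount 25% when total > 500
"""

RULE_RE = re.compile(r"discount (\d+)% when total > (\d+)")

def parse(source):
    """Turn the DSL text into a list of (percentage, threshold) pairs."""
    rules = []
    for line in source.strip().splitlines():
        match = RULE_RE.fullmatch(line.strip())
        if match is None:
            raise SyntaxError(f"cannot parse rule: {line!r}")
        rules.append((int(match.group(1)), int(match.group(2))))
    return rules

def best_discount(rules, total):
    """Apply the most generous rule whose threshold is exceeded."""
    applicable = [pct for pct, threshold in rules if total > threshold]
    return max(applicable, default=0)

rules = parse(RULES)
print(best_discount(rules, 120))   # -> 10
print(best_discount(rules, 800))   # -> 25
```

Even in this toy, the parsing is the easy part; as noted above, in a serious DSL project the real effort goes into everything surrounding the language infrastructure.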
InfoQ: You make suggestions about debugging techniques in the book – what are some of the key points?
Darius Blasband: If limited to the process of extracting bugs from a system, debugging is an expensive activity. The wisdom one painstakingly acquires about why and how the actual behaviour differs from the expected one, vanishes as soon as the debugging session ends.
It is not only expensive: it is also frustrating, as one can always wonder how the bug crept into the system in the first place.
But it does not have to be frustrating. Debugging can be more than a process. It can be a deliverable. It can add to the value of the software, which then benefits from more than the mere correction of the defect.
The simplest way to increase value is to implement a policy that ensures that bugs are reproduced in a test case before any attempt at resolving them, so that they cannot reappear without being detected by running the test suite. Not only is the software better for having the bug removed, but the expected behaviour is now formally documented by an executable test case.
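As a hypothetical illustration of that policy (the function, the bug and all names are invented), the bug report is first captured as a test, which then stays in the suite as executable documentation of the expected behaviour:

```python
import unittest

def normalize_name(name):
    """Collapse all whitespace runs and title-case a customer name."""
    return " ".join(name.split()).title()

class NormalizeNameRegression(unittest.TestCase):
    # Hypothetical bug report: names containing non-breaking spaces were not
    # collapsed. The test is written first (and seen to fail), the fix is
    # applied, and the test then stays in the suite forever.
    def test_non_breaking_space_is_collapsed(self):
        self.assertEqual(normalize_name("ada\u00a0 lovelace"), "Ada Lovelace")

if __name__ == "__main__":
    unittest.main()
```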
But there is no such thing as a single best way to debug software. Each software developer has his/her own preferred tool or process to do so.
Some use all the bells and whistles of the sophisticated debuggers that are now available.
I am too much of a technophobe (and too lazy) to go through the process of even learning how to use these ever more complex debugging tools. I just can’t be bothered.
I use a much simpler technique, not quite as primitive as littering my code with print statements, but barely.
When dealing with a buggy piece of software, I add assertions (available in some form in virtually all languages today) that check for the conditions that represent the expected behaviour of the system. I iteratively reduce the scope of my bug (things are all right when entering it, and faulty when exiting it) by adding more and more precise assertions, until I find the source of the problem, and fix it.
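Here is a hypothetical sketch of what that looks like (the function and its invariants are invented for illustration): assertions pin down what must hold on entry, inside the loop and on exit, and the first assertion that fires narrows down where the defect must be.

```python
def moving_average(values, window):
    """Average of each sliding window over a list of numbers."""
    assert window > 0, "window must be positive"
    assert len(values) >= window, "not enough values for a single window"

    result = []
    running = sum(values[:window])
    result.append(running / window)
    for i in range(window, len(values)):
        running += values[i] - values[i - window]
        # Invariant added while hunting a (hypothetical) bug: the running sum
        # must always equal the sum of the current window, recomputed from
        # scratch. Exact equality assumes integer inputs.
        assert running == sum(values[i - window + 1 : i + 1]), \
            f"running sum drifted at index {i}"
        result.append(running / window)

    assert len(result) == len(values) - window + 1, "one average per window"
    return result

print(moving_average([3, 1, 4, 1, 5, 9], 3))  # -> [2.666..., 2.0, 3.333..., 5.0]
```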
The beauty of this approach lies in the way these assertions remain in the code.
Not only would a reappearance of the same or a similar defect be detected much faster; these assertions also improve the quality of the software by expressing formally a known property of the system, a piece of implicit knowledge of the developer, left behind for the future maintainers of the code.
And these assertions are infinitely more reliable than plain comments, which have a long-standing tradition of not being maintained as the code is updated, and of progressively drifting away from the code’s actual behavior. Assertions are executable, and tested for.
InfoQ: Do you have a single message that you want readers to get from the book?
Darius Blasband: Learn from history.
Machines get more powerful: storage, bandwidth and computing power are now as good as free, and new languages appear, but the most important ingredient barely evolves: we, as software developers, are not significantly smarter (or dumber, for that matter) than the generations that preceded us. We are very much Gaussian, almost depressingly so.
The industry as a whole has spent over thirty years looking for the philosopher’s stone, a way of organizing the development process, a methodology, a formal subdivision in tasks that would allow us to produce adequate results reliably and deterministically, in short, a software recipe.
It has failed, over and over again.
After thirty years, it is time to acknowledge that it always will, because it bumps into the element that has not evolved: us humans.
In my opinion, thinking otherwise today amounts to chasing unicorns, under the fallacious assumption that the fact that no one has ever seen one does not necessarily mean there aren’t any.
But some things just aren’t there.
Live with it.
About the Book Author
Darius Blasband has a master’s degree and a PhD from the Université Libre de Bruxelles (Belgium). His focus is legacy modernization, articulated around compilers for legacy languages. Darius is the founder and CEO of Raincode, the main designer and implementer of its core technology, and an acclaimed speaker in academic and industrial circles.