
Q&A on the Book Righting Software

Key Takeaways

  • Avoid designing against the requirements
  • Design to encapsulate volatility and changes
  • Provide composable design that can address all requirements
  • Design the project to build the system
  • Provide good options for management to choose

The book Righting Software by Juval Löwy provides a structured way to design a software system and the project to build it. Löwy explains why the common practice of designing the system against the requirements cannot work, and instead proposes volatility-based decomposition to encapsulate changes inside the system’s building blocks. Following the system design, Löwy explains how to design the project in order to provide decision makers with several viable options trading schedule, cost, and risk. The book offers case studies, guidelines and directives, and advice on how to present the design and how to earn the trust and respect of management.

InfoQ readers can download an extract of Righting Software.

InfoQ interviewed Juval Löwy about the drawbacks of functional or domain decomposition and the alternative volatility-based decomposition and composable design, the role of the core team in projects, doing estimations, and designing projects.

InfoQ: What made you decide to write this book?

Juval Löwy: The dark state of the industry. As many know, the software industry is in a deep crisis: most software projects fail to deliver on their commitments, and the projects that do make it are late, over budget, expensive, and rife with defects. Most development teams have reached the limit of their ability to make sense of the complexity. While this is not a new phenomenon, in recent years it has been getting worse. The root cause is poor design, be it of the software system itself or of the project used to build that system.

Over the past thirty years I have developed a series of ideas, techniques, rules, and guidelines on how to design software correctly, based on common engineering principles. I have taught these to thousands of architects the world over in my Master Classes, and have helped hundreds of companies repeatedly deliver great software. And yet these ideas have never been seen in print before. The world is now ready for the message, and I hope the book will bring about a software engineering renaissance.

InfoQ: For whom is the book intended?

Löwy: The primary audience of the book is software architects, because software architects are responsible for the engineering aspects of the project: maintainability, extensibility, reuse, affordability, and feasibility.

While this book targets software architects, it has a much broader appeal to all senior software professionals, such as project managers, quality control people, or those who wear multiple hats. Developers wanting to grow their skills also benefit greatly from the book. A common piece of feedback is "I wish I had read this years ago". Regardless of the reader’s current position, Righting Software is transformative, and the book will launch their career like nothing else ever will.

InfoQ: This book describes "The Method", an analysis and design technique. What does it look like?

Löwy: Righting Software presents the first principles in software engineering, as well as a comprehensive set of tools and techniques that apply to software systems and projects. I call this "The Method". The Method is actually a two-part formula: it is system design, plus project design. The Method is a structured way to design a software system and the project to build it.

With system design, The Method lays out a way of breaking down a system into small modular components. The Method offers guidelines for the structure, role, and semantics of the components and how these components should interact. The result is the architecture of the system.

With project design, The Method helps you provide management with several options for building the system. Each option is some combination of schedule, cost, and risk. Each option also serves as the system assembly instructions, and it sets up the project for execution and tracking.

To succeed, you must have both parts of The Method.

The Method actually starts by instructing what not to do – you must avoid functional decomposition; that is, reflecting the required functionality of the system in the structure of the architecture. The reason is simple: as the requirements change, with functional decomposition the design will have to change, inflicting horrendous cost and pain on all involved. Unfortunately, almost all software systems today are designed using some flavor of functional decomposition such as domain decomposition. The correct way of designing the system is to decompose based on volatility – you identify areas of change in the system, and encapsulate these in services. You then provide the required behaviors by integrating these areas of volatility. Now when the requirements change, the changes are contained and not spread across the system.
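To make the contrast concrete, here is a minimal Python sketch (my own hypothetical example, not code from the book). The volatile decision, which notification channel to use, is encapsulated behind one contract, so a change in requirements replaces the contents of one vault rather than rippling through the clients:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: the choice of notification channel is an area of
# volatility, so it is encapsulated behind a single service contract.
class NotificationService(ABC):
    @abstractmethod
    def notify(self, recipient: str, message: str) -> None:
        ...

class EmailNotification(NotificationService):
    def notify(self, recipient: str, message: str) -> None:
        print(f"email to {recipient}: {message}")

class SmsNotification(NotificationService):
    def notify(self, recipient: str, message: str) -> None:
        print(f"SMS to {recipient}: {message}")

# The client integrates the encapsulated volatility. It never branches on a
# concrete channel, so adding a channel does not ripple across the system.
def close_order(order_id: int, notifier: NotificationService) -> None:
    notifier.notify("customer@example.com", f"order {order_id} closed")

close_order(42, EmailNotification())
close_order(42, SmsNotification())
```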

With project design, The Method prescribes how to build the project network of activities, how to identify the critical path, and how to calculate the duration of the project. You then assign resources against the project network and find the cost of the project. From the available time between activities you can also calculate the risk of the project using my risk models. Since even a simple project can have several possible options that trade time, cost, and risk, The Method shows how to model these options and how to narrow them down to several good, viable options for managers to choose from.
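As a rough illustration of the mechanics only (a sketch over made-up activity data, not The Method's actual risk models), the duration of a small project network falls out of a forward pass along the dependencies:

```python
# Each activity: (duration in days, list of predecessor activities).
# The data is invented for illustration; keys are listed in topological order.
activities = {
    "requirements": (5, []),
    "architecture": (10, ["requirements"]),
    "service_a":    (15, ["architecture"]),
    "service_b":    (20, ["architecture"]),
    "integration":  (10, ["service_a", "service_b"]),
}

finish = {}
for name, (duration, preds) in activities.items():
    finish[name] = duration + max((finish[p] for p in preds), default=0)

print(f"project duration: {max(finish.values())} days")  # 45

# The critical path is the chain with no slack: requirements -> architecture
# -> service_b -> integration. service_a has 5 days of float between activities.
```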

InfoQ: What are the challenges and drawbacks of functional or domain decomposition?

Löwy: I have already mentioned the obvious drawback: functional decomposition cannot handle change, because by definition the change is not in any one place. That is just the beginning. Functional decomposition creates either an explosion of little components (one per functionality) with extremely expensive integration costs, or several huge components (ugly dumping grounds of functionalities) with impossibly high internal complexity. Functional decomposition pollutes the clients with business logic, because it is up to the clients to invoke the functionalities. Over time, the system therefore migrates into the clients, and developers have to maintain the business logic in multiple places. This makes adding new clients or changing existing clients notoriously difficult. Functional decomposition creates multiple points of entry where you have to worry about security and scalability. Functional decomposition prevents reuse, because the functionalities are tightly coupled to each other; you can never just pick up a component from one system and drop it into another. Finally, due to the high degree of internal complexity, functional decomposition leads to untestable systems and, therefore, low quality.

But the real challenge with functional decomposition is its allure. It looks so simple, so intuitive, so straightforward: just add the A component, the B component, and the C component. Developers, managers, and customers just cannot resist the temptation of supposedly adding value with design on the one hand, while getting away with it effortlessly on the other. Since there is no free lunch in the universe, everyone ends up paying for it many times over.

InfoQ: How does volatility-based decomposition work?

Löwy: As I said, with volatility-based decomposition you base your building blocks on areas of change in the required behavior of the system. You start thinking of your system as a series of vaults, where each vault (as a component of the architecture) encapsulates some volatility. Now when a change happens, it is contained inside one of the vaults. Whatever was inside the vault may be destroyed by the change, but there are no painful side effects or expensive interactions across the system.

This is actually a universal principle of good design, and the advantages of volatility-based decomposition are not limited to software systems. Consider your own body. A functional decomposition of your body would have components for every task you are required to do, from driving, to programming, to changing a light bulb; yet your body does not have any such components. You accomplish a task such as programming by integrating areas of volatility. For example, your heart provides an important service for your system: pumping blood. Pumping blood has enormous volatility to it: high and low blood pressure, pulse rate, activity level (sitting or running), and so on. Yet all that volatility is encapsulated behind a service called the heart. Would you be able to program if you had to care about the volatility involved in pumping blood? You can also integrate external areas of encapsulated volatility into your implementation. Your computer is different from any other computer, yet all that volatility is encapsulated. As long as the computer can send a signal to the screen, you do not care what happens behind the graphics port. You perform the task of programming by integrating encapsulated areas of volatility, some internal, some external. You can reuse the same areas of volatility (such as the heart) while performing other functionalities such as driving. There is simply no other way of designing and building a viable system. You can replace or upgrade your computer with no ill side effects to your heart or car. Decomposing based on volatility is the essence of system design. All well-designed systems, software and physical alike, encapsulate their volatility inside the system’s building blocks.

InfoQ: What benefits does volatility-based decomposition bring?

Löwy: The obvious benefit is the ability to handle changes. During development, and certainly during maintenance, requirements always change. That is actually a very good thing, because the change in requirements is what keeps the software industry going. Unlike functional decomposition, which maximizes the cost of changes (because they are spread all over a very complex system), volatility-based decomposition minimizes the cost of handling them. You have contained the changes and are able to respond quickly to changes in the business landscape. That is the essence of agility that all businesses crave and that functional decomposition always fails to deliver.

InfoQ: How can people develop skills for identifying areas of volatility?

Löwy: There are two levels of skill. At the first level, there are generic techniques you can leverage to identify areas of volatility. For example, you can always ask what could change along the two axes of time and customers. Even if the system is presently perfectly aligned with a particular customer’s needs, over time that customer’s business context will change, and with it the customer’s requirements and expectations of the system. If you recognize that change in advance, you can encapsulate it and prepare for when it happens. Similarly, if you could freeze time and examine your customer base, are all your customers using the system in exactly the same way? What are some of them doing that is different from the others? Again, identify those volatilities and encapsulate them. You can also try to design the system for your competitor (or another division in your company) and identify the things you would do differently, as well as the things that pertain to the nature of the business, which are not volatile.

The second level of skill is even more powerful. While volatility-based decomposition is a universal principle of good design for all types of systems, software architects only have to design software systems. It turns out that software systems share common areas of volatility. Over the years I have found these common areas of volatility in hundreds of systems. Furthermore, there are typical interactions, constraints, and run-time relationships between these common areas of volatility. If you recognize these and practice applying them, you can produce a correct system architecture quickly, efficiently, and effectively. What the book provides is a classification of the areas of volatility and guidelines for the interaction and operational patterns. Having a clear, consistent structure for the components in your architecture and their relationships is not just a good starting point; it is essential for communication: you can easily convey your design intent to other architects or developers. Even communicating with yourself in this way is very valuable, as it helps clarify your own thoughts.

InfoQ: In the book you state that one should never design against the requirements. Can you elaborate on why?

Löwy: This is more than a statement; it is the design Prime Directive. It is the only way to handle the unavoidable and highly welcome changes to the requirements. I know it goes contrary to what most have been taught and have practiced. Any attempt at designing against the requirements will always guarantee pain, because when the requirements change, so will your design, which is extremely expensive in time and cost. People may even be fully aware that their design cannot work and has never worked, but lacking alternatives they resort to the one option they know: designing against the requirements. All the other problems of functional decomposition we discussed earlier pale in comparison to the inability to handle change. Designing against the requirements guarantees the inability to respond quickly to changes.

InfoQ: What should people do instead?

Löwy: The solution is what I call composable design. You need to identify the smallest set of building blocks that you can put together to satisfy all requirements. By "all" I mean present and future, known and unknown requirements. Nothing less will do, or you will find yourself unable to handle a future change. The good news is that most requirements are just variations of other requirements. There are actually two types of requirements: core use cases that represent the essence of the business, and everything else, the "fluff", such as the happy case, the sad case, and the incomplete case. It turns out that the nature of the business hardly ever changes. If you have the smallest set of components that you can put together to satisfy the core use cases, then all the other use cases represent just different interactions between the same building blocks. Now when the requirements change, your design does not. This again is a universal design principle, and a simple example is the design of the human body. Homo sapiens appeared on the plains of Africa 200,000 years ago, when the requirements did not include being a software architect. How can you possibly fulfill the requirements for a software architect today while using the body of a hunter-gatherer? The answer is that while you are using the same components as a prehistoric man, you are integrating them in different ways. The single core use case has not changed: survive.
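A minimal sketch of that idea (my own hypothetical example, not from the book): the same handful of services satisfies different use cases through different interactions, so a new requirement changes the composition, not the components:

```python
# Hypothetical building blocks of an ordering system.
def validate(order):
    print("validating", order)

def reserve(order):
    print("reserving stock for", order)

def bill(order):
    print("billing", order)

def refund(order):
    print("refunding", order)

# The core use case integrates the blocks one way...
def place_order(order):
    validate(order)
    reserve(order)
    bill(order)

# ...and a later requirement is just a new interaction of the same blocks.
def cancel_order(order):
    validate(order)
    refund(order)

place_order("order-42")
cancel_order("order-42")
```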

One last point: when I say the smallest set of components, there is another key observation. One god component is obviously too small a number, and a component per requirement is obviously too large. The smallest set of services required in a typical software system is about 10 services, as an order of magnitude. This particular order of magnitude is another universal design concept. How many internal components (heart, liver, etc.) does your body have, as an order of magnitude? Your car? Your laptop? For each, the answer is about 10, because of combinatorics. If the system supports the required behaviors by combining the 10 or so components, a staggering number of combinations becomes possible, even without allowing repetition of participating components or partial sets. As a result, a relatively small number of valid internal components can support an astronomical number of possible use cases.
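The arithmetic behind that claim is easy to check (a back-of-the-envelope sketch, not a formula from the book): even ordering just the full set of 10 components, with no repetition and no partial sets, already yields millions of possibilities:

```python
from math import comb, factorial, perm

n = 10  # components in a typical system, as an order of magnitude

print(factorial(n))                              # 3628800 orderings of all n components
print(sum(perm(n, k) for k in range(1, n + 1)))  # 9864100 if partial sets also count
print(sum(comb(n, k) for k in range(1, n + 1)))  # 1023 unordered subsets
```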

InfoQ: Switching gears to the other part of your methodology, project design: what roles does a project core team have, and how do these roles collaborate?

Löwy: We have focused so far on architects, yet architects cannot work in isolation. The project needs three logical roles: a project manager, a product manager, and an architect. These three roles are what I call the core team. The project manager shields the team from the organization and the noise it creates, tracks progress, and reports status to management. The product manager encapsulates the customers, acting as a proxy for them. The architect is the technical manager, acting as the design lead, the process lead, and the technical lead of the project. The architect not only designs the system, but also sees it through development. The architect needs to work with the product manager to produce the system design and with the project manager to produce the project design.

The mission of the core team is to reliably answer the questions of how long the project will take and how much it will cost. Project design provides these answers, but project design in turn requires the architecture. In this respect, the architecture is merely a means to an end: project design. Since the architect needs to work with the product manager on the architecture and with the project manager on the project design, the project requires the core team from the beginning. Let me reiterate that while the architect collaborates with the project and product managers, the architect is the owner of both the system design and the project design.

InfoQ: How can we do estimation better?

Löwy: There are quite a few effective estimation techniques for individual activities, and I describe these in the book. However, while you can always estimate better, the objective is to get estimations that are just good enough. For that you need to focus on accuracy rather than precision. It is fairly easy and quick to provide accurate estimations such as 5, 10, or 15 days. There are also simple techniques for addressing uncertainty that yield excellent results, such as orders of magnitude or high/low/expected values. An important point is that while you estimate individual activities, you calculate the duration, cost, and risk of the project. These calculations are often very precise, and combined with multiple project design options they will maximize your ability to meet your commitments, often with very high correlation to reality.
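For instance, a high/low/expected estimate can be collapsed into a single accurate number with the classic three-point (PERT-style) weighted average; the weights below are the conventional ones, not necessarily the book's:

```python
def three_point_estimate(low: float, expected: float, high: float) -> float:
    """Classic PERT-style weighted mean: favors the expected value while
    letting the low/high spread pull the estimate toward reality."""
    return (low + 4 * expected + high) / 6

# A hypothetical activity: 5 days best case, 10 expected, 20 worst case.
print(three_point_estimate(5, 10, 20))  # 10.83..., reported as an accurate 11 days
```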

In addition, with good project design, estimations and specific resources are secondary. The topology of the project network (which derives from the architecture) dictates the duration of the project, not the capabilities of the developers or, to a point, the variation in individual estimations. As long as the estimations are more or less correct, it does not matter if the real durations are somewhat larger or smaller. In a decent-size project you will have dozens of activities whose individual estimations may be off in either direction; overall, these offsets tend to cancel each other out. You can further compensate for the unforeseen by designing the project for the correct level of risk, a key technique in The Method.
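That cancellation effect is easy to demonstrate with a toy simulation (assumed noise levels, not a model from the book): activities that are each off by up to 30% produce a project-level error of only a few percent:

```python
import random

random.seed(7)
ACTIVITIES, TRIALS, ESTIMATE = 30, 10_000, 10.0  # 30 activities, 10 days each

total_error = 0.0
for _ in range(TRIALS):
    # Each activity's actual duration is off by up to +/-30% in either direction.
    actual = sum(ESTIMATE * random.uniform(0.7, 1.3) for _ in range(ACTIVITIES))
    total_error += abs(actual - ACTIVITIES * ESTIMATE) / (ACTIVITIES * ESTIMATE)

print(f"average project-level error: {total_error / TRIALS:.1%}")  # roughly 2-3%
```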

InfoQ: In the book you suggest describing options from which management can choose, enabling objective evaluation of those options. Can you elaborate on why this is important?

Löwy: A key concept in project design is that there is no "THE Project"; there are only options. This is just life; you do not reside in the only house you could possibly live in. Given a set of constraints on budget, commute time, schools, taste, environment, risk, and so on, you choose a house that addresses all of these dimensions, likely out of a small set of good candidates that offer different combinations. The same is true for anything else, such as employment or even choosing a spouse.

When you design the project, you must provide management with several viable options trading schedule, cost, and risk. These options allow management and other decision makers to choose up front the solution that best fits their needs and expectations. Devising these options is a highly engineered design task. I say it is "engineered" not just because of the design and calculations involved, but because engineering is all about tradeoffs and accommodating reality. Projects designed to meet an aggressive schedule will cost more and be far riskier and more complex than projects designed to reduce cost and minimize risk. Project design narrows this spectrum of near-countless possibilities to several good project design options, such as the least expensive way to build the system, the fastest way to deliver the system, the safest way of meeting your commitments, and even the best combination of schedule, cost, and risk.
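As a toy illustration of that narrowing (hypothetical numbers, and a plain Pareto filter rather than The Method's actual techniques), any option that is beaten on schedule, cost, and risk at once is simply not worth presenting:

```python
# Hypothetical options: (name, duration in months, cost in person-months, risk 0-1).
options = [
    ("compressed",  9, 60, 0.75),
    ("balanced",   12, 48, 0.50),
    ("low-cost",   15, 40, 0.45),
    ("padded",     18, 45, 0.25),
    ("bloated",    18, 70, 0.60),
]

def dominated(a, b):
    """True if b is at least as good as a on every axis and strictly better on one."""
    return all(y <= x for x, y in zip(a[1:], b[1:])) and b[1:] != a[1:]

viable = [a for a in options if not any(dominated(a, b) for b in options)]
print([name for name, *_ in viable])  # "bloated" drops out; the rest are real tradeoffs
```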

You must provide an environment in which managers can make good decisions. It is crucial to present them with only good options. Whichever option they do choose will then be a good decision.

InfoQ: What are your suggestions for designing projects?

Löwy: People tend to focus on the technical aspects of project design: the numbers, charts, and techniques. Having practiced project design for decades, I find that it is actually a mindset, not just an expertise. You should not simply calculate the risk or the cost and try to meet your commitments. You must strive for complete superiority over every aspect of the project. You should prepare mitigations for everything the project can throw at you, which requires going beyond the mechanics and the numbers. You should adopt a holistic approach that involves your personality and attitude, how you interact with management and the developers, and a recognition of the effect that design has on the development process and the product life cycle. The project design ideas of Righting Software open a portal to a parallel level of excellence in software engineering. It is up to you to keep improving, to develop your own style, and to adapt.

About the Book Author

Juval Löwy is the founder of IDesign and a master software architect. Over the past twenty years Löwy has led the industry in architecture and project design, with some of his ideas, such as microservices, serving as the foundation of current software design and development. Löwy has helped countless companies deliver quality software on schedule and on budget, and has mentored generations of architects across the globe, sharing his insights, techniques, and breakthroughs. Löwy participated in Microsoft's internal strategic design reviews and is a frequent speaker at major international software development conferences. Löwy has published several bestsellers; his most recent book is Righting Software (Addison-Wesley, 2019). Löwy conducts Master Classes teaching the skills required of modern software architects and how to take an active role in design, process, and technology. Microsoft recognized Löwy as a Software Legend, one of the world's top experts and industry leaders.
