
The Fundamental Truth behind Successful Development Practices: Software is Synthetic


Key Takeaways

  • "Synthetic" means to find truth through experience, using empirical practices to build understanding.
  • Software systems demand synthetic problem solving - we test, observe, and experiment to understand their correctness and value.
  • The history of building software systems is a history of formalizing synthetic ways of working into successful development practices.
  • A variety of synthetic management approaches have emerged to pair with the synthetic practices used to build software systems.
  • Synthetic management contrasts with the analytic approaches needed to run a business, creating new kinds of constraints on the flow of value.


How the Map Problem Was Solved

Long before machine learning algorithms beat the world champions of Go, even before searching algorithms beat the grandmasters of Chess, computers took their first intellectual victory against the power of the human mind when they solved the Map Problem.

The solution marked the first time a significant result in pure mathematics was obtained using a computer. It shocked the world, upending a sacred belief in the supremacy of the human mind.

The Map Problem asks how many colors are needed to color a map so that regions with a common boundary will have different colors.

Simple to explain, easy to verify visually, but difficult to prove formally, the Map Problem captured the imagination of the world in many ways: at first for the elegant simplicity of its statement, then for over a hundred years for the elusiveness of its solution, and finally for the ugly, inelegant approach used to produce the result.

The problem is foundational in graph theory, since a map can be turned into a planar graph that we can analyze. To solve the problem, a discharging method can be used to produce an unavoidable set of configurations - a set such that every map must contain at least one of them - and each configuration can then be tested to see how many colors it actually needs.

If we can test every configuration in the unavoidable set and find that only four colors are required, then we have the answer. But reducing and testing configurations by hand is extremely laborious, perhaps even barbaric. When the Map Problem was first proposed in 1852, this approach could not possibly be considered.

Of course, as computing power grew, a number of people caught on to the fact that the problem was ripe for exploitation. We can create algorithms that have the computer do the work for us - we can programmatically see if any counterexample exists where a map requires more than four colors. In the end, that’s exactly what happened. And so we celebrated: four colors suffice! Problem solved?
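
To make the flavor of that programmatic approach concrete, here is a minimal sketch in Python - the regions and borders are invented for illustration, and the real proof was vastly larger. The point is the method: we don't derive the answer, we have the computer try colorings and observe which ones work.

# A backtracking search for a map coloring: rather than reasoning our way to
# an answer, we program the computer to try colorings and observe the result.
# The map below (a hub bordered by a ring of five regions) is invented.

def color_map(adjacency, num_colors):
    """Return a coloring where no bordering regions share a color, or None."""
    regions = list(adjacency)
    coloring = {}

    def assign(i):
        if i == len(regions):
            return True  # every region colored without conflict
        region = regions[i]
        for color in range(num_colors):
            if all(coloring.get(neighbor) != color
                   for neighbor in adjacency[region]):
                coloring[region] = color
                if assign(i + 1):
                    return True
                del coloring[region]  # backtrack and try the next color
        return False

    return coloring if assign(0) else None

# A small "map": a central region bordered by a ring of five others.
adjacency = {
    "Hub": {"R1", "R2", "R3", "R4", "R5"},
    "R1": {"Hub", "R2", "R5"},
    "R2": {"Hub", "R1", "R3"},
    "R3": {"Hub", "R2", "R4"},
    "R4": {"Hub", "R3", "R5"},
    "R5": {"Hub", "R4", "R1"},
}

print(color_map(adjacency, 3))  # None - we observe that three colors fail
print(color_map(adjacency, 4))  # a valid four-coloring, found by exhaustion

Run it and three colors fail while four succeed - not because we derived it, but because we watched the search exhaust the alternatives.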

These are not the proofs you are looking for

Mathematics is analytical, logical. When examining a proof, we want to be able to work through the solution. With the Map Problem, mathematicians desired just such a proof, something beautiful, something that reveals a structure to explain the behavior of the system, something that gives an insight into the inner workings of the problem.

When the solution was presented at the 1976 American Mathematical Society summer meeting, the packed room was nonplussed. They did not get something beautiful - they instead got printouts. Hundreds and hundreds of pages of printouts, the results of thousands of tests.

Was this even a proof? The conference was divided. Detractors said the solution was not a proof at all, but rather a "demonstration". They said it was an affront to the discipline of mathematics to use the scientific method rather than pure logic. Many were unprepared to relinquish their life’s work to a machine.

The anxiety over the solution to the Map Problem highlights a basic dichotomy in how we go about solving problems: the difference between analytic and synthetic approaches.

The solution to the Four Color Theorem was not analytic: it was synthetic - it arrived at truth by observation, exhaustively working through all the discrete cases. It was an empirically-derived answer that used the capability of computers to work through every possibility. Analysis was used to prove the reducibility of the graphs, but the solution was produced from the programmatic observation of every configuration in the unavoidable set: encoded experiences, artificial as they may be. There is no proof in the traditional sense - if you want to check the work, you need to check the code.

But this is exactly how we solve all problems in code. We use a dual-synthetic approach of creation and verification: we synthesize our code, and then prove it synthetically. We leverage the unique capabilities afforded by the medium of code to test, observe, experience, and "demonstrate" our results as fast as possible. This is software.

Software is synthetic

Look across the open plan landscape of any modern software delivery organization and you will find signs of it, this way of thinking that contrasts sharply with the analytic roots of technology.

Near the groves of standing desks, across from a pool of information radiators, you might see our treasured artifacts - a J-curve, a layered pyramid, a bisected board - set alongside inscriptions of productive principles. These are reminders of agile training past, testaments to the teams that still pay homage to the provided materials, having decided them worthy and made them their own.

What makes these new ways of working so successful in software delivery? The answer lies in this fundamental yet uncelebrated truth - that software is synthetic.

Software systems are creative compounds, emergent and generative; the product of elaborate interactions between people and technology. These are not the orderly, analytic worlds that our school-age selves expected to find. They are so full of complexity and uncertainty that we have to arrive at solutions in a different way.

How "work" works in these synthetic systems is different. We use different approaches, ones that control the "synthetic-ness" and use it to our advantage. When we learn to see this - when we shift our mental models about what the path to certainty looks like - we start to make better decisions about how to organize ourselves, what practices to use, and how to be effective with them.

We are accustomed to working on systems that yield to analysis, and where we find determinism in the physical world, we can get real value from deep examination and formal verification. Since we can’t use a pull request to change the laws of physics (and a newer, better version of gravity will not be released mid-cycle), it’s worth spending the time to plan, analyze, and think things through. Not to mention the significant barriers to entry - there is a cost to building things with physical materials that demands certainty before committing.

Software, being artificial, offers a unique ability to get repeatable value from the uncertainty of open-ended synthesis, but only if we are willing to abandon our compulsion to analyze.

There is no other medium in the world with such a direct and accessible capability for generating value from emergent experiences. In code we weave together ideas and intentions to solve our very real problems, learning and uncovering possibilities with each thread, allowing knowledge to emerge and pull us in different directions.

The synthetic way of working shapes the daily life of development, starting with the most basic action of writing code and moving all the way up into the management of our organizations. To understand this, let’s work our way through the process.

Will it work?

Acts of creation are processes of synthesis, and software is primarily a creative act: the practice of bringing together millions of discrete elements to make something new and different, with both beauty and function.

We start with a set of discrete primitives, elements, and structures. We use structure and flow to bring them together into algorithms. In this coupling we produce new elements, discrete in their own right, gathering new properties and capabilities. Arranging algorithms, we create runtimes and libraries, scripts and executables. The joining of these brings forth systems - programs and applications, infrastructure and environments. From these we make products, give birth to businesses, create categories, and build entire industries.

Though our systems are cobbled together from the complex synthesis of millions of tiny pieces, they hang together under the gravity of people and their intentions. "As a user ..." But those intentions may come into conflict, and the conflicts only become visible when we put things together.

Our understanding of the systems we are building emerges and changes throughout the process of creation (this is the root of much technical debt, by the way). We accept that we cannot truly know our system until we see it: we learn from the understanding that rises bit by bit, commit by commit, as we watch our ideas coming to life. "Will it work?" To answer this question, we write the code, we run it, we test it: we go and see.

This process of creation demands verification through experience. This is how we find the truth of our synthesis. With computers we can do this continuously, we can automate it. In the medium of code, the "experiencing" is cheap and readily available to us. Validating our assumptions is as simple as running the code. This recursive recourse to experience is the fundamental organizing principle in our process of creation.

In synthetic systems, the first thing we need to do is to experience our work, to learn from our synthesis. As we give instructions to the system, we need to give the system the ability to teach us. The greater the uncertainty of our synthesis, the more we have to learn from our experiences.

So we optimize our daily routines for this kind of learning. Test-driven development, fast feedback, the test pyramid, pair programming: these are all cornerstones of good software development because they get us closer to experiencing the changes we make, in different ways, within the context of the whole system. Any practice that sharpens or adds dimensions to how we experience our changes will find itself in good company here.
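
As a small illustration of what "experiencing our changes" looks like at this scale, here is a test-first sketch. The shipping_cost function and its pricing rules are hypothetical: the tests encode our intentions, and running them is how we learn whether the synthesis works.

# A test-driven sketch: the tests are written first to encode our intentions,
# and running them is how we experience whether the code actually works.
# The shipping_cost function and its pricing rules are invented.

import unittest

def shipping_cost(weight_kg, express=False):
    """Flat rate up to 1 kg, then 2.00 per extra kg; express doubles it."""
    base = 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)
    return base * 2 if express else base

class ShippingCostTest(unittest.TestCase):
    def test_flat_rate_up_to_one_kg(self):
        self.assertEqual(shipping_cost(0.5), 5.0)

    def test_per_kg_pricing_over_one_kg(self):
        self.assertEqual(shipping_cost(3), 9.0)  # 5.00 + 2 extra kg * 2.00

    def test_express_doubles_the_total(self):
        self.assertEqual(shipping_cost(0.5, express=True), 10.0)

if __name__ == "__main__":
    unittest.main()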

Does it still work?

As a system grows to include large numbers of components and contributors, its dynamics become increasingly unpredictable. When we make a change in a complex system there is no amount of analysis that will allow us to determine with certainty whether the change will work exactly as intended.

In complex systems we are always walking at the edge of our understanding. We are working our way through a complicated networked system, a web of converging intentions, and changes to one part routinely have unintended consequences elsewhere. Not only do we need to test our changes, but we need to know if we broke anything else. "Does it still work?"

To answer this question, we push the synthetic approach across the delivery lifecycle and into adjacent domains.

We externalize our core workflow, using support systems to validate and maintain the fidelity of our intentions as our work moves through the value stream. We create proxies for our experience - systems that "do the experiencing" for us and report back the results. We use continuous integration to verify our intentions and get a shared view of our code running as a change to the entire system. We use continuous delivery to push our changes into production-like, source-controlled environments. And we use our pipelines to maintain constancy and provide fast feedback as our work moves into progressively larger boundaries of the system.
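
A toy sketch of that idea, with stage commands and paths as placeholders rather than any particular CI product's syntax: a pipeline is essentially a loop that exercises the change against progressively larger boundaries of the system and stops the moment an experience contradicts our intentions.

# A toy pipeline runner: each stage is a proxy for experience, exercising the
# change against a larger boundary of the system and reporting back.
# The commands and paths below are placeholders.

import subprocess
import sys

STAGES = [
    ("build",             ["python", "-m", "compileall", "src"]),
    ("unit tests",        ["python", "-m", "pytest", "tests/unit"]),
    ("integration tests", ["python", "-m", "pytest", "tests/integration"]),
    ("deploy to staging", ["python", "deploy.py", "--env", "staging"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(command).returncode != 0:
            # Fail fast: the earlier we learn the change doesn't work,
            # the cheaper the lesson.
            print(f"pipeline stopped: '{name}' failed")
            sys.exit(1)
    print("all stages passed: as far as we can observe, it still works")

if __name__ == "__main__":
    run_pipeline()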

The last decade has seen an ever-expanding set of these practices that externalize and proxy our individual experiences: test automation, repeatable builds, continuous integration, continuous delivery, and so on. We put many of these under the banner of DevOps, which we can understand simply as synthetic thinking working its way down the value chain. The concept has proven itself so useful that our industry has developed a compulsion to extend the concept further and further: DevSecOps, DevTestOps, BizDevOps, NetDevOps, etc.

And it makes sense why we would want to create these new ontologies, these combined categories of understanding. We want to build a single view of what it means to be "done" or to "test" or to "create value", one that combines the different concepts that our disciplines use to experience the world. These are examples of our industry looking to extend what is intuitively useful: the capability to apply synthetic thinking across the software value stream.

Why doesn’t it work?

Once we have our software running in production, we need to extend our synthetic way of working one more time - into the observability of our system. Our primary goal is to build systems that are designed to discover problems early and often so we can learn and improve.

As our system grows, services grind together like poorly fit gears. Intentions conflict, assumptions unravel, and the system begins to operate in increasingly unexpected ways. Rather than spending intellectual capital trying to predict all possible failure modes, we have learned to use practices that allow us to see deeply into the system, detect anomalies, run experiments, and respond to failure.

Observability is required when it becomes difficult to predict the behavior of a system and how users will be impacted by changes; we need it to ensure that our view of system phenomena aligns with what customers actually experience.

Making the system observable involves a practice of combining context, information, and specific knowledge about the system to create the conditions for understanding. The investment we make is in our capability to integrate knowledge - the collection of logs, metrics, events, traces and so on, correlated to their semantics and intent. We approach these problems empirically too, using experiments to discover, hypothesize and validate our intuitions.
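
As a small illustration of that knowledge-integration investment (event names and fields here are invented), structured events correlated by a shared trace identifier let us later join logs, metrics, and traces into one picture of what the system actually did:

# Structured, correlated events: every event carries the trace_id of the
# request that produced it, so telemetry can later be joined into one picture.
# Event names, fields, and the checkout flow are invented for the sketch.

import json
import time
import uuid

def emit(event, **fields):
    record = {"ts": time.time(), "event": event, **fields}
    print(json.dumps(record))  # stand-in for a real telemetry backend

def handle_checkout(cart_total):
    trace_id = str(uuid.uuid4())
    start = time.monotonic()
    emit("checkout.started", trace_id=trace_id, cart_total=cart_total)
    try:
        if cart_total <= 0:
            raise ValueError("empty cart")
        emit("payment.charged", trace_id=trace_id, amount=cart_total)
    except ValueError as exc:
        emit("checkout.failed", trace_id=trace_id, reason=str(exc))
        raise
    finally:
        emit("checkout.latency", trace_id=trace_id,
             seconds=time.monotonic() - start)

handle_checkout(42.50)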

Planning for failure involves an equally synthetic set of practices. The field of resilience engineering has been gaining momentum in the software industry, asking: how can we build systems that allow us to cope with surprise and learn from our environment? We can build our systems to allow emergent behavior to teach us, while we build our organizations with the adaptive capacity to respond and change.

Chaos engineering also embraces the experimental approach - rather than trying to decompose the system and decipher all its possible faults, let’s get our hands dirty. Let’s inject faults into the system and watch how it fails. Inspect, adapt, repeat. This is an experience-driven approach that has seen a lot of success in building scaled software systems.
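
Here is a minimal fault-injection sketch in that spirit; the failure rate, the delay, and the fetch_profile stand-in are all invented. We wrap a dependency so it sometimes fails or slows down, then run the experiment and watch how callers cope.

# A minimal fault-injection experiment: wrap a dependency so it randomly
# fails or slows down, then observe how the system behaves under that chaos.

import random
import time

def chaotic(failure_rate=0.3, max_delay=0.05):
    """Decorator that randomly injects latency and errors into a call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            time.sleep(random.uniform(0, max_delay))     # injected latency
            if random.random() < failure_rate:
                raise ConnectionError("injected fault")  # injected failure
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaotic()
def fetch_profile(user_id):
    return {"id": user_id, "name": "..."}

# Run the experiment and watch how the system fails.
successes = failures = 0
for _ in range(50):
    try:
        fetch_profile(42)
        successes += 1
    except ConnectionError:
        failures += 1
print(f"{successes} ok, {failures} failed under injected chaos")

In a real system, the interesting part is what the numbers reveal about retries, timeouts, and fallbacks - whether the adaptive capacity we believe we have actually shows up under injected failure.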

With operational practices like these, we embrace the fact that our systems are complex and our work will always carry dark debt, no matter how good the code coverage is. Our best recourse is to give ourselves the ability to react quickly to unexpected events or cascading changes.

Synthetic Management

At this point, it should come as no surprise that the synthetic nature of our systems also needs to push its way up and into our management stack. Our work requires a dynamic, experience-driven management model.

We know this dragon well. It has been articulated in many forms over the years: Big Ball of Mud, The Mythical Man Month, Worse is Better, the Agile Manifesto; each is an evolution of the elements required to accommodate the fundamental truth of our work - that synthetic work needs synthetic management.

In the earliest days of programming, we were forced to use analytic patterns to solve coding problems. The first computers were so expensive and inaccessible that programmers had to work through their algorithms with pencil and paper before trying anything out (think, Knuth’s Art of Computer Programming). The practice of running experiments to see what works was simply not an option.

Just as programmers had to do the analysis to get as close to a solution as possible before committing it to code, managers had to do the analysis to ensure that their incredibly constrained resources were used most efficiently. The pattern of working that developed involved a lot of planning with a big design up front, and that pattern still persists to this day, despite being irrational in the world of cheap computing power.

Synthetic thinking asks us to set aside those traditional roots and leave behind the historical memory of how to solve problems with computers. Instead of formal proofs, we prove things by seeing them work, inside the context of the systems they are supposed to work in.

The management practices we use in software have come to de-prioritize that classic analytic approach (think, Project Management). We understand that analysis can only take us so far before we reach the edge of understanding, cut off from emergent knowledge, collective learning, and systemic properties.

Where analysis begets planning, synthesis demands experience. Agile practices are predictably useful because in software development there is no substitute for experience. Our systems, environments, organizations, and markets are constantly changing, and so our teams need to be equipped with the same responsive and adaptive capacities that we expect our systems to have.

To manage a software system, we adopt flexible approaches that allow us to experience and learn: the inspecting and adapting of Scrum; the sensing and responding of Cynefin; the observing and orienting of OODA - these practices all embrace the non-linear nature of our systems, and the synthetic approach to building understanding within them.

In concert with the evolution of our technical ontologies, we are creating more expansive management ontologies. We have been rethinking what it means to be a ‘team’ as we bring together the organizational and cultural elements of our end-to-end value delivery process. A software system is a symmathesy, in which mutual learning is done through emergent social networks. We recognize that synthetic work necessitates a diverse, collectively learning system to draw full value from experiential knowledge and subjective meaning.

As an industry, we are defining and refining our own set of "Synthetic Management" approaches that allow our teams to close the gap between the act of creation and the moment of experience, connecting people to their work and to the contexts they inhabit. The next challenge, then, is bringing our work into the orbit of a powerful force that we often call, simply, "the business".

New Kinds of Constraints

Our fundamental truth is contradicted by the need for business to be stable and analytic, calculating and certain. The business is driven by money and metrics, contracts and deadlines, users and platforms. To deliver value to customers, we need to continuously interface with this contrary, yet complementary, set of intentions.

The tension between the synthetic nature of software and the analytic needs of the business creates constraints that put pressure on our work. The constraints can be valuable or pernicious, depending on how we design our organizational system. They demand trade-offs, sacrifices and debts, and these can be managed intentionally, or left ad-hoc and made cumbersome. It is not the tension itself, but rather the mismanagement of it, that creates problems.

All too often, organizational systems are not optimized to transfer knowledge between contexts - it becomes a painful exercise of extraction. We are asked to invest in work that provides the analytic side with confidence, but the synthetic side with nothing. We are forced to work inside opaque reporting abstractions that do not map to our own work structures. These organizations inadvertently put developers under constant pressure to compromise on synthetic values.

Meanwhile, successful organizations actively facilitate flow between the analytic and synthetic contexts: interfaces are created from which the business can gather needed information; social learning eases reporting requirements and routes around organizational hierarchies; useful abstractions like OKRs provide direction without disrupting execution. We optimize the organizational system to create generative feedback loops, allowing us to easily share our learning and diffuse knowledge appropriately.

The ways that we manage constraints across the Janus-face of business are critical to how we build dynamic learning organizations, which depend on a balanced flow of information and knowledge. But with an understanding of the analytic-synthetic dichotomy in hand, we can think more deeply about how to be effective.

While there is more investigation to be done around how to manage these constraints to flow across the intellectual boundaries of the business, new ways of working have already emerged to help us.

New Ways of Working

The solution to the Map Problem was impactful because it changed the world’s understanding of what we consider to be "verifiable information". It showed us a new frontier of ways to discover things about the world, and it did so by using the newly emerging dual-synthetic approach of solving problems in the medium of code.

Similarly, the synthetic management practices born in software have opened up new frontiers in how we manage a business. We see greater openness to different ideas, and the digital world demands it: at the center of work now sits a complex system, one that pushes us to extend new ways of working right across the business. The pendulum has swung away from managing software like we manage the business, towards managing the business like we manage software.

Significant momentum is growing to soften the analytic sides of the business, align it in a way that interfaces more easily with synthetic thinking. Beyond Budgeting moves us past the constraints of quarterly budgets, Design Thinking popularizes empirical techniques for discovery and experiential learning, Business Agility seeks to align business operations with the nature of complex adaptive systems, and the Teal movement is gaining traction as an alternative way to manage a business as a self-organizing, collective-learning system.

If we create organizations in which we are able to harness the power of emergent knowledge - if we are successful in designing a synthetic management system for our business - we are rewarded. But if we spend too long lost in the paralysis of analysis, we fail.

As we break the management molds to deliver software products, we get the opportunity to break them elsewhere. With software we have demonstrated success with practices that don’t fit the analytic style. Will they work elsewhere? There’s only one way to find out: go and see.

About the Author

John Rauser is a software engineering manager at Cisco Cloud Security, based out of Vancouver, Canada. His team is building the next generation of network and security products as cloud services. John has spent the last 10 years working in a variety of different roles across the spectrum of IT, from sysadmin to technology manager, network engineer to infosec lead, developer to engineering manager. John is passionate about synthetic management, new ways of working, and putting theory into practice. He speaks regularly at local and international conferences and writes for online publications.


Community comments

  • What about formal reasoning

    by Barillet Cyril,


    Hello.

    Firstly, thank you for your post. I think that it is very interesting.

    Secondly, sorry for my English in the following part.

    I would like to focus on this sentence: "Software systems demand synthetic problem solving - we test, observe, and experiment to understand their correctness and value." I propose the following sentence instead: "The majority of software systems use synthetic problem solving."
    Indeed, there are several methods, such as proof by induction (www.khanacademy.org/computing/ap-computer-scien...) or reductio ad absurdum, that are used to validate the correctness of an algorithm.
    Do we need to consider these methods synthetic?
    If the answer is no, then is the "synthetic" aspect you describe here the result of the need to build faster and faster and reduce time to market (with acceptance of risks), or is it the result of missing tools (methods to prove correctness)?

    My goal here is just to understand whether there is truly no alternative to "software systems demand synthetic problem solving", and why we do not think about these alternatives.

    Kind regards.

  • Re: What about formal reasoning

    by John Rauser,


    Hi Barillet, thank you for reading the article and providing thoughtful feedback. In my opinion, the problem with formally proving code is a scaling issue.

    I am not aware of any organizational system that permits a sustainable, maintainable practice for formally proving code across a large codebase over an extended period of time. On the other hand, there are many such organizational systems for the empirical approach. Why would that be?

    I suggest that it's because formal (analytic) proof cannot scale with a synthetic system. The proof lives outside the code, outside the system. It does not "take advantage" of the medium of code. The medium of code favours "programmable" proof that lives with the code and executes with the code. Tests are easy to implement as part of the system, and so they have become the primary method of demonstrating "correctness". Software is synthetic and therefore lends itself to such synthetic proof :)

