
How Will Evolutionary Architecture Evolve?



Rebecca Parsons examines some possible futures for the principles and practices of Evolutionary Architecture.


Dr. Rebecca Parsons is Thoughtworks' CTO. She recently co-authored the book Building Evolutionary Architectures with colleagues Neal Ford and Pat Kua. Before Thoughtworks, she worked as an assistant professor of computer science at the University of Central Florida, after completing a Director's Postdoctoral Fellowship at the Los Alamos National Laboratory.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Parsons: My name is Rebecca Parsons. I'm here to talk to you about, effectively, the evolution of evolutionary architecture. This track is about architecture in 2025. I'd like to talk about how evolutionary architecture might change over the next few years. First, I want to point out a couple of things. I remember, several years ago, a journalist told me that they wanted me to make a 10-year prediction for what technology would be like. This was in the year 2017, right around the 10-year anniversary of the iPhone. If you think about all of the change that occurred in those 10 years, the idea that this journalist wanted me to predict technology a decade out struck me as preposterous. At least this is only two years away, but I will admit, my crystal ball is still broken. What I want to try to do during this talk is, first, give you a brief introduction to the underlying principles and definitions of evolutionary architecture, to provide a little context for when I start to talk about how things might actually be different.

Key Ideas and Questions

First, the definition: what is an evolutionary architecture? Evolutionary architecture supports guided, incremental change across multiple dimensions. There are lots of important words in that definition, and we'll talk about them. The first is, why is it called evolutionary? When my colleague Neal Ford and I first started talking about this, I sat in on one of his early talks. He was calling it emergent architecture. Neal and I had a very robust discussion about why that was a really bad name. When you hear the word emergent, it just seems like, ok, I'm just doing what I can to survive, and some things are happening. I am perfectly comfortable and happy with the notion of emergent design, because regardless of the system, you and I are probably going to agree on what constitutes good code and bad code. That's not the same for an architecture. That first part of the definition, guided, is what we're trying to capture: being able to say what constitutes good for this architecture. That's where we introduce the notion of a fitness function. A fitness function is an objective characterization of how well a particular system reflects a desired behavioral characteristic. The single most important thing about a fitness function is that you and I will never disagree on what the computation results in. Something like maintainable can't be a fitness function. Something like respects the naming standard and coding standard, and has a cyclomatic complexity less than x: those we would all agree on, and those, arguably, do apply when thinking about maintainability. This fitness function mechanism is a key part of how we specify what we want this architecture to achieve. What are the important characteristics? What are the levels that we need to support?
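As a concrete illustration of that objectivity (the names, threshold, and input shape below are my own assumptions, not from the talk), a cyclomatic-complexity fitness function might be sketched like this:

```python
# Minimal sketch of an objective fitness function: two engineers running it on
# the same measurements will always get the same answer. The threshold and the
# shape of the input are illustrative assumptions, not a standard API.

def complexity_fitness(measurements, threshold=10):
    """Return (passed, offenders) for a cyclomatic-complexity fitness function.

    `measurements` maps function name -> cyclomatic complexity, e.g. as
    reported by a static-analysis tool.
    """
    offenders = {name: c for name, c in measurements.items() if c > threshold}
    return (len(offenders) == 0, offenders)

# A codebase where one function exceeds the agreed limit fails objectively.
measured = {"parse_order": 4, "route_message": 12, "format_report": 7}
passed, offenders = complexity_fitness(measured, threshold=10)
print(passed)     # False
print(offenders)  # {'route_message': 12}
```

Contrast this with "maintainable": there is no computation two people are guaranteed to agree on, which is exactly why it cannot be a fitness function on its own.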

The second is, how do we actually operationalize this? How do we make this an incremental improvement? There are two aspects to this: how do we get the new functionality in place, and how do we provide the mechanism on the path to production? It's the second one that I want to focus on here. Many times, when I've been on stage talking about evolutionary architecture, people will come up afterwards and whisper, don't you think you're being professionally irresponsible to talk about evolutionary architecture? Because architecture, it's the rock, it's the foundation. How in the world can you evolve something that is so fundamental? What people are starting to recognize, though, is that they don't really have a choice. The technology ecosystem is changing so rapidly that they have to do something about it. One of the important practices and enablers for evolutionary architecture is the level of rigor and automation that comes along with continuous delivery, and ultimately, continuous deployment. If you don't have those kinds of mechanisms, you don't have enough de-risking in your deployment pipeline to take on some of these tasks. We need to be conscious of what we're actually taking on when we say, we're going to allow you to evolve your architecture. Because although I do firmly believe we have to be able to evolve that architecture, the people weren't wrong; you can disrupt a lot by making some of these fundamental architectural changes. There's a reason some people define architecture as the things that are hard to change. Much like the motivation behind XP, and agile more broadly, the idea behind evolutionary architecture is, let's try to get good at changing those things that are hard to change. Just because they're hard to change doesn't mean you won't have to change them.

Multiple Dimensions (-ilities)

Finally, let's think about these multiple dimensions. Here is a list, from Wikipedia several years ago, of a whole bunch of -ilities. One of the things that we have to grapple with as architects is the fact that you cannot be ideal on all of those -ilities, because many of them are mutually exclusive: you cannot maximize one without negatively impacting another. You have to decide, what is the right balance I want between these things? Each system is different. Each organization is different. Each application and domain have different architectural requirements, and we should not be using the same architectural standards across every single application. That holds for evolvability as well. There may be systems, a one-off for example, where you don't care about evolvability. You probably don't care about maintainability, or many of the other -ilities. It may be that you've got a dataset to process that is only ever going to be used once, but it's massive. Maybe you're going to take a lot of time to optimize the runtime performance of that thing, even though you're going to throw it away. Not all systems need to be evolvable. If they are, and I would assert that many of the enterprise applications out there need a certain degree of evolvability, then this is what we want to be looking at.

Principles of Evolutionary Architecture

Evolutionary architecture has some underlying principles. I'm going to first review the principles. Then for each one of these principles, I'm going to speculate about what things might look like in a few years. The first principle is the last responsible moment. We want to delay decisions as long as possible, so that we have as much information as possible. You will need to trade that off: what are the architectural decisions, the development decisions, that you're making before you've made this other architectural decision, and what might the consequences be? You determine the last responsible moment by looking at how a particular decision tracks to your fitness functions, those -ilities and those values that you've decided are critical for the success of your system. Because fundamentally, if a particular architectural characteristic is not terribly important, then you shouldn't worry too much about decisions in that realm. One of my favorite examples of this is a trading system I worked on early in my Thoughtworks career. When you hear trading, everybody thinks low latency, high throughput, performance is king. In this particular one, they never in their wildest dreams assumed they would ever have more than 100 or so transactions a day, maybe 200 a day: not an hour, not a minute, a day. What they really cared about was never losing a message. We focused our efforts on ensuring that we understood the communication hierarchy, and then the synchronization points between the various systems that were scattered across the globe. Those were the -ilities that mattered to us, and so that's where we focused our attention.

The second principle: architect and develop for evolvability. If you've decided that evolvability is a first-class citizen for you, it's going to impact not only the way you write the code, but how you structure the code. I'll start with develop for evolvability. The single most important thing here is, how easy is it for you to understand the code that's there? Readability is key. That's where software quality metrics come in. We want to look at this and say, how easy is it for me to pick up a new piece of code and understand what it does? Because if I don't understand what it does, then I'm not going to be able to evolve it. Architecting for evolvability takes into account different aspects. How do we divide up our system? Where do we draw our boundaries? This is where we think about coupling and cohesion, and all of those architectural terms. Fundamentally, the place to start is to divide up your systems around the concepts that are relevant in the business. Personally, I believe one of the reasons our first attempt at service-oriented architectures failed is that we were drawing the boundaries around systems more than we were drawing the boundaries around concepts. Study up on your domain-driven design and apply it to your architecture.

The third principle is Postel's Law. Simply put, be generous in what you receive and be cautious in what you send. As accepting as you are in what you receive, make sure you don't open yourself up to a security hole. The more you can adhere to that principle, the more you can make yourself impervious to all but truly breaking changes. If all you need is a postcode, don't validate the address. That way, if I decide I want to split the address into two lines, you don't have to change, because you're not paying attention to any part of that message that you don't need. Next, architect for testability. One of the things that we have found is that how testable something is, is a pretty good indication of how well you've drawn your boundaries, and how well you understand where those boundaries are. If you focus on testability at all levels of the test pyramid, you'll end up with a better design and a better architecture.
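The postcode example can be sketched as a tolerant reader. This is my own minimal illustration, assuming a JSON message; the point is that the consumer reads only the one field it depends on:

```python
import json

def extract_postcode(message: str) -> str:
    """Tolerant reader: read only the field we need and ignore everything else.

    Because we never inspect the address lines, the sender can restructure
    them freely without breaking us; only removing the postcode would.
    """
    payload = json.loads(message)
    return payload.get("address", {}).get("postcode", "")

# Original message shape, with a single address line ...
v1 = '{"address": {"line": "1 Main St, Springfield", "postcode": "12345"}}'
# ... and a later shape where the sender split the address into two lines.
v2 = '{"address": {"line1": "1 Main St", "line2": "Springfield", "postcode": "12345"}}'

# The consumer is unaffected by the non-breaking change.
assert extract_postcode(v1) == extract_postcode(v2) == "12345"
```

The generosity has limits, as noted above: the reader still parses the payload through a proper JSON parser rather than, say, regex-scraping raw input, which is one way of staying liberal without opening a security hole.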

Finally, the dreaded people issue: Conway's Law. So many people try to get around Conway's Law, but it's always going to win. Conway's Law effectively says that any system an organization builds is going to reflect the communication structures, I normally say communication dysfunctions, of the organization that builds it. If you want a three-stage pipeline, have three groups. If you have four groups, you're never going to get a three-stage pipeline. Those are the principles as of today. They have stayed relatively constant over time; the actual principles haven't changed much.

What Will Be Different in a Couple Years' Time?

What do we think will be different in a couple of years' time? My first wild speculation here is that I don't think there's any indication yet that there's a principle out there that we're missing. I'd be happy to debate it with people. I actually think the principles themselves are pretty solid. I also really don't think there's going to be much change from the perspective of how the principles influence our way of doing architecture. Postel's Law, and the last responsible moment, I don't think either one of those are going to be radically different in 2025. They're still going to have the same impact. We're still going to be asking the same questions. We might be dealing with different issues relative to these principles, but I don't think the principles and our approaches are going to change fundamentally in the next couple of years.

Architect for Evolvability

For the purposes of this presentation, I want to separate out thinking about architecting and thinking about developing. First off, architecting for evolvability. I think we're going to continue to see innovations in architecture. If you think about all of the innovations that occurred, and the adoption of innovations that really weren't that popular, I think a lot of that has been enabled by continuous delivery. Because continuous delivery gives us that stable path to production, which allows us to do things that we simply couldn't do before. Think about going back in time to the late '90s, and telling somebody that they need to deploy a microservices architecture, which meant that you had all of these different processes, and that means I need 70 Oracle licenses. It simply wasn't practical. You couldn't have somebody manually type in all of the stuff to deploy some of these architectures. They were just too complex. I think we're going to continue to see innovations in that space. What they're going to look like, I don't know.

One of the areas where I think we are going to see more activity, broadening out from where they have application right now, is these mixed physical-digital systems. We've had intelligent manufacturing. We've had robots in factories. We've had intelligent warehousing. I even worked in intelligent warehousing in my first full-time job out of university, back before fire was invented, of course. We've had these systems for a long time, but they're starting to permeate more fields. We have things like autonomous vehicle technology, which is a fascinating combination of machine learning systems, and sensors, and radar, all combined into one. How we create these systems is going to make us think about architecture in a different way. When we think about architecture in a different way, we might end up having different aspects of architecture that we have to take into account.

What does it mean to evolve a platform? I remember, years ago, having a conversation with a colleague shortly after Martin published the book on "Domain-Specific Languages" that I worked on with him. One of the things he was struggling with was, what is the relationship between agile and incremental design and a domain-specific language? Because aren't you laying things out when you decide what the language is going to look like? How would you actually go about evolution? There are similar kinds of questions to be asked about platforms. There's a level of abstraction that's introduced there. When we talk about the drivers for evolutionary architecture, a lot of it has to do with business regulations changing, and business model lifetimes, and the expectations of users changing. That's very much in the outer realm. The platform is supposed to provide an enabling characteristic for building all of those rapidly changing applications. As more platforms come into being, being able to evolve the platform will be important as well. Then you have all kinds of questions around API breaking changes, and all of those kinds of things. We might want to think about what it means to evolve things in a slightly different way when we're talking about platforms.

Finally, what about the evolution of augmented reality, virtual reality systems, the metaverse? I think we have to step back a little bit, because in many cases, what does it mean to solve a business problem in augmented reality? People talk about virtual storefronts, and try-ons, and all of that, but what about applying for a mortgage? We have to think a lot about what it means to have these experiences, and, because of the level of immersion, the way things change within that world might have a greater impact than a traditional software change would. I think we're going to be thinking differently about how we architect systems when what we're dealing with is, effectively, an artificial world.

Develop for Evolvability

What about developing for evolvability? What does that look like? I have to start with AI assistants. We've been experimenting with the coding aspects of ChatGPT, and we've got Copilot out there. I think these things are going to continue to evolve and continue to impact the way we develop. We really haven't talked that much about how this intersects with our ability to develop for evolvability. Next, what is the impact of low-code and no-code platforms on evolvability? In theory, these should be easier to develop with; that's the whole spiel behind them. Given that these platforms very much have a sweet spot, how can we distinguish between the kinds of changes that will be easy to make and the kinds of changes that will be a challenge? You have to understand a whole lot about how the low-code platform actually works to be able to answer a question like that.

What about fitness functions for new languages, or languages in new settings, or compliance? We've already started to see innovation around compliance as code and fitness functions that actually encode our compliance and regulatory expectations. I think we're going to continue to see an increase in regulatory scrutiny, and what impact is that going to have on our fitness functions? I think we're going to see some creativity around, what are some new kinds of fitness functions we can construct to allow us to adapt to these changing environments? Possibly better metrics that help us get a better handle on, what does it mean for code to be readable? We've struggled for a long time on, how do we get good code metrics? How do we deal with the fact that there's essential complexity in all problems? It's not like we can say, nothing's more complex than this, because maybe the problem itself is more complex than that. Perhaps in the next couple of years, we'll start to see a bit more innovation around that as well. That might be wishful thinking, but it would be nice.
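Compliance as code is not spelled out in any one form here, but as a hedged sketch, a regulatory expectation such as "personal data must be encrypted at rest" can be turned into an executable fitness function. The inventory structure below is invented for illustration:

```python
# Sketch of a compliance fitness function. The inventory structure is an
# assumption made for this example; a real setup would pull it from cloud
# provider APIs or a configuration-management database.

def encryption_compliance(datastores):
    """Every datastore holding personal data must be encrypted at rest."""
    violations = [
        ds["name"]
        for ds in datastores
        if ds["holds_personal_data"] and not ds["encrypted_at_rest"]
    ]
    return (len(violations) == 0, violations)

inventory = [
    {"name": "orders-db", "holds_personal_data": True, "encrypted_at_rest": True},
    {"name": "clickstream", "holds_personal_data": True, "encrypted_at_rest": False},
    {"name": "public-assets", "holds_personal_data": False, "encrypted_at_rest": False},
]
ok, violations = encryption_compliance(inventory)
print(ok)          # False
print(violations)  # ['clickstream']
```

Run in a deployment pipeline, a check like this turns a regulatory expectation into the same kind of objective, repeatable signal as any other fitness function.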

Architect for Testability

What about testability? I'm from Thoughtworks; of course I have to talk about testability. I think the first thing is that we are going to see an increased reliance on testing in production, on more dynamic fitness functions, which monitor systems as they are running. I don't know if you all caught it, but if you look back at that slide with all of the -ilities, when we pulled that list, observability was not on it. That's clearly something that is becoming more important as our systems and our architectures get more complex. I think we're going to see a shift in emphasis in our testing regimens. I think we're also going to see a broadening of the suite of fitness functions and more creativity around some of these fitness functions. We've already seen some remarkable examples of how people have crafted fitness functions to fit a particular problem. I think we're going to continue to see a lot of those coming out as well. You might wonder why I put this under testability. If you think about it, many of these fitness functions mirror things that look like tests. If you've ever run a performance test to see how much throughput you've got, you've run a fitness function. Fitness functions, to me, fit squarely within how we think about testability.

I think we're also going to see increased use of AI in testing, particularly married with more dynamic testing. We might see continued innovation in things like artificial immune systems, self-healing systems that allow a running system to detect when something's going wrong, guard against particular threats, and counter them. There's a lot left to be explored here. I think we'll see that becoming more common in a broader range of systems than it currently is. Finally, I believe we're going to see more innovation in what it means to test a machine learning model. What does it mean to test when you've got reinforcement learning going on, which is, effectively, a model being able to update itself? There's a lot of thinking that still needs to be done around how we actually manage some of those things. We've made significant advances already, but there's still a lot to be done.

Conway's Law

Finally, Conway's Law. I think the big question around Conway's Law is, how is it going to be impacted by our new remote and hybrid ways of working? Much of the research about how we use Conway's Law, the inverse or reverse Conway maneuver (this is the kind of system I want, so I'm going to reorganize my teams accordingly), has been done in the context of primarily colocated teams. Or, if the teams are distributed, you've still got a colocated team within each location. The dynamic is very clearly different when you have hybrid or fully remote teams, where individuals are sitting in their own rooms, like I am now, coding. That is one thing with respect to Conway's Law that I think we're going to have to reexamine. It's not like we can suddenly say there's going to be a component of our architecture for every individual developer scattered around; that's just impractical. What does it mean instead? How do we properly support teams in this hybrid or fully remote environment?

Practices of Evolutionary Architecture

Now let's talk about practices. How do we actually do stuff? The evolutionary architecture of today has four practices that I normally try to highlight. The first is evolutionary database design. We keep telling Pramod Sadalage that he ought to put out a new version of the book, "Refactoring Databases," and just swap the title and subtitle, and turn it into, "Evolutionary Database Design: Refactoring Databases." DBAs, fundamentally, are one of the few roles within the software development lifecycle that I feel have a legitimate complaint about incremental design and deployment, because of data migration. This book, this approach to refactoring databases gets us around that. Neal and I often say, if you take one thing away from this talk, take away the notion that you can, in fact, more easily migrate data using this evolutionary database design approach. Contract testing, this is another way that you can allow teams to be as independent of each other as possible. Because if I understand the assumptions you're making of me and you understand the assumptions I'm making of you, then I can change whatever I want, as long as I don't break your test. You can change whatever you want, whenever you want, as long as you don't break a test. That's a very important practice to allow systems to evolve more quickly because you have that signal that says, time for a conversation. Otherwise, I can just cheerfully ignore you.
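The contract-testing idea can be sketched minimally (the field names below are invented; real teams often use a tool such as Pact). The consumer records only the assumptions it actually relies on, and the provider verifies them in its own build:

```python
# The consumer publishes only what it actually depends on. Extra provider
# fields are deliberately not checked: the consumer promises to ignore them,
# so the provider can evolve freely until a recorded assumption breaks.

CONSUMER_CONTRACT = {
    "endpoint": "/accounts/42",
    "required_fields": {"id": int, "postcode": str},
}

def provider_honors(response: dict, contract: dict) -> bool:
    """True if every field the consumer relies on exists with the right type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract["required_fields"].items()
    )

# The provider added a field and renamed internals; the contract still holds,
# so no conversation is needed yet.
response = {"id": 42, "postcode": "12345", "newly_added_field": "ok"}
print(provider_honors(response, CONSUMER_CONTRACT))  # True

# Dropping the postcode breaks a recorded assumption: time for a conversation.
print(provider_honors({"id": 42}, CONSUMER_CONTRACT))  # False
```

The failing check is the signal described above: teams stay independent right up until a recorded assumption breaks, and only then do they need to talk.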

Third, choreography. This one is a little more controversial because, much like microservices, it sets a high bar in terms of complexity. There are problems that simply do not occur if you use an orchestration model with an orchestrator, as opposed to a choreography model. If evolvability is one of your critical -ilities, using a choreographed approach to the interaction of your various small-c components, the different parts of your architecture, gives you much more flexibility than if you have an orchestrator. You do pay a cost in terms of complexity, and that needs to be taken into account. Finally, continuous delivery. To me, this is a critical underpinning practice. What you don't want to be doing is making major changes to your architecture when you're not sure if the configuration parameters are still what you expect them to be. You want to be able to quickly diagnose, roll back, and understand what happened. That's very difficult to do if you've got some poor soul there at 3:00 on a Sunday morning, trying to type in everything just right. No matter how good your runbook is, nobody is at their best at 3:00 on a Sunday morning. They're just not.
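To make the choreography idea concrete (all names here are invented for illustration), here is a tiny choreographed sketch: each component subscribes to events independently, so adding a new component requires no change to an orchestrator or to the existing subscribers.

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process event bus standing in for real messaging infrastructure."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
log = []

# Each component reacts on its own; no central orchestrator scripts the flow.
bus.subscribe("order_placed", lambda o: log.append(f"billing charged {o}"))
bus.subscribe("order_placed", lambda o: log.append(f"warehouse picked {o}"))
# The evolvability win: a new notifications component touches no existing code.
bus.subscribe("order_placed", lambda o: log.append(f"email sent for {o}"))

bus.publish("order_placed", "order-7")
print(log)
```

The complexity cost is visible even in this toy: no single place in the code states the end-to-end flow, which is exactly the class of problem an orchestrator avoids.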

Practices Will Evolve but Not Radically Change

How are these practices going to be different in 2025? What kinds of things might we see? I don't think these practices in and of themselves are going to change that much. With respect to evolutionary database design, we're probably going to continue to see innovations in persistence frameworks, but the fundamental principles of how you approach refactoring databases have, so far at least, morphed quite nicely across the different persistence paradigms. Even though we're going to see innovations there, I think the fundamental practice of refactoring databases and evolutionary database design will be maintained. With respect to contract testing, there might be some assistance we can get from some of these AI tools in terms of helping to develop the contracts. In my experience, some of the best enterprise architects were so good at their job because they really understood how the different systems depended on each other, and those hidden assumptions that you never really knew were there. They knew where the bodies were buried in the architecture. Sometimes those things are really hard to elicit. It may be that as we get better AI tools and code analysis tools, it will be easier to extract those relationships to construct those contract tests. Again, fundamentally, the premise of why you do contract testing remains the same.

Similarly, I think we'll continue to see innovations in how we can make it easier to take advantage of choreography. Some element of it truly is essential complexity; as I said before, there are errors that just can't happen if you have an orchestrator. Some of the other complexity we might be able to ameliorate with tools, potentially with some AI support. Finally, in the realm of continuous delivery, as we continue to innovate new systems, we have to think about, what is the right automation? What are the right tools to support continuous delivery of this new thing? I think we're going to continue to see people figuring out additional ways to provide proper telemetry to understand the business value of new features, or to provide tools that allow more readily for rollback. There are lots of different things involved in a true continuous delivery, or continuous deployment, setting, which is the next stage of maturity. I think we're going to see some innovation with respect to some of those tools.


Fundamentally, evolutionary architecture is going to evolve. The whole premise of evolutionary architecture is that the technology ecosystem, as well as the expectation ecosystem, the regulatory ecosystem, and business models, will continue to evolve. Particularly as it relates to the technology ecosystem, that's going to require our approach to evolutionary architecture to evolve. I really don't think it's likely to be a revolution, though, particularly since we're talking about 2025. That's not that far away. I might be singing a different tune if all of a sudden quantum computing comes onto the scene, or we truly get an artificial general intelligence. My expectation is that although the pace of change is significant, within the next two years we're unlikely to see something that will cause a revolution in our approach to evolutionary architecture.

There very well might be new -ilities, though. One of the things that I tell people about evolutionary architecture is that you should, on a fairly regular basis, scan your list of fitness functions and the prioritization you have on the different -ilities to determine whether they are still valid. I think over the next couple of years, it's quite possible there will be new aspects, new architectural characteristics, where we'll have to say, ok, where does this slot in? As I pointed out earlier, that list I showed you from Wikipedia from several years ago didn't even have observability on it. It had operability. It had some other things, but it specifically did not have observability. I think we're going to continue to see different -ilities arise as we have different approaches, different perspectives on our systems, and different kinds of systems. We might have something at least closer to a self-driving car within a couple of years. We might have vastly improved systems for digital twins. That will probably bring us some new -ilities. My crystal ball is still broken.




Recorded at:

Sep 20, 2023