Migrating Monoliths to Microservices with Decomposition and Incremental Changes


Key Takeaways

  • Microservices migrations are not a trivial change. You have to understand whether they will really solve your problem; otherwise, you might create a tangled entity that might kill you.
  • There are different types of monoliths, and some of them might work well and be enough for business needs. Monoliths are not an enemy that must be killed.
  • Microservices are about independent deployability. There are decomposition and incremental-change patterns that can help you evaluate and migrate to a microservices architecture.
  • When you start to work with microservices, you will realize there are some really complex challenges as a consequence. Microservices shouldn't be a default choice; you've got to think carefully about whether or not they're right for you.

This article is based on a transcript from Sam’s presentation at QCon London, as captured by Leandro Guimarães, and reviewed by Sam.

 

At QCon London, I spoke about monolith decomposition patterns and how we get to microservices. I like to compare them to nasty jellyfish because they're sort of these tangled entities that also sting us and might kill us. That has a lot in common with the average enterprise microservice migration. 

A number of organisations are going through some sort of digital transformation. Scratch the surface of any current digital transformation and we’ll find microservices. We know that digital transformation is a big thing because any airport lounge right now has adverts of major IT consultancies selling you on digital transformation, be it Deloitte, DXC, Accenture, or whoever else. Microservices are all the rage.

When I talk about microservices, though, I focus on the outcome rather than on the technology we use to implement them. There are many reasons why we might pick a microservice architecture, but the one I keep coming back to is the property of independent deployability. There's a piece of functionality, a change that we want to make to how our system behaves. We want to get that change out as quickly as possible.

Figure 1: A simple diagram illustrating the microservices approach.

Compare these microservices architectures to the monolith. We have this vision of the monolith as a single, impenetrable block to which we can make no changes. A monolith has come to be considered the worst thing in our lives, a millstone around our necks. I think that's grossly unfair. Ultimately, the term “monolith” in the last two or three years has replaced the term we used before, which was “legacy”. This is a fundamental problem, because some people are starting to see any monolith as legacy and therefore something to be removed. I think that's deeply inappropriate.

Types of monoliths

Monoliths come in multiple shapes and sizes. When I talk about a monolithic application, I talk about the monolith primarily as a unit of deployment. We can think about the classical monolith, which is all the code packaged in a single process. Maybe it’s a WAR file in Tomcat, maybe a PHP-based application, but all of the code is packaged together in a single deployable unit, which talks to a database. 

This monolith type can be considered a simple distributed system. A distributed system is one that consists of multiple computers that talk to each other over a non-local network. In this situation, all our code is packaged in a single process and, importantly, all our data lives in one giant database which runs on a different machine. Having all of our data in a single database is something that can cause us much pain in the future.

Figure 2: The modular monolith.

We can also consider a variation of this single-process monolith called the modular monolith. The modular monolith uses cutting-edge ideas (from the early 1970s!) around structured programming, which some of us are still getting to grips with decades later. As shown in Figure 2, we have broken down our single-process monolithic application into modules. If we get our module boundaries right, we can work on each module independently. The process of deployment, though, remains an inherently statically linked approach: we have to link all modules to make a deployment. Think of a Ruby application assembled from lots of gem files, a .NET application built from NuGet packages, or a Java application put together from JAR files via Maven.

While we still have a monolithic deployment, a modular monolith has some significant benefits to it. Breaking our code down into those modules does give us a degree of independent working. It can make it easy for different teams to work together and to approach and address different aspects of the system. I think this is a highly underrated option. The problem with this is that people tend to be poor at defining module boundaries — more to the point, even if they are good at defining module boundaries, they are not good at having the discipline to maintain those boundaries. Unfortunately, sensible concepts of structured programming or modularization tend to descend into that “ball of mud” problem.
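One way to give those boundaries some teeth is to let the toolchain enforce them. As a minimal sketch, not a prescription (the module and package names here are invented), the Java Platform Module System only exposes the packages a module explicitly exports, so reaching into another module's internals fails at compile time rather than slowly descending into the ball of mud:

```java
// module-info.java for a hypothetical "invoicing" module in a modular monolith.
// Only the exported package is visible to other modules; everything else stays
// hidden, so boundary violations fail at compile time rather than in review.
module shop.invoicing {
    exports shop.invoicing.api;      // the module's public contract
    // shop.invoicing.internal is deliberately not exported
    requires shop.notifications;     // an explicit, declared dependency
}
```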

Many organizations I work with would be better off with a modular monolith than with microservice architecture. I’ve told half of my clients over the last three years, “Microservices are not for you.” Some of those clients even listened to me. For many of them, a good way to define module boundaries would be good enough for their purposes. They get a much simpler distributed system and a degree of independent, autonomous working. 

Figure 3: A variation of a modular monolith.

We have variations on the modular monolith. The one in Figure 3 looks a bit odd, but it is something that I’ve proposed a number of times, especially for startups, who I often think would be better off deferring microservices. In Figure 3, we've taken the modular monolith and broken down the single monolithic database that backs it, so that we're storing and managing the data for each module in isolation. 

While this looks odd, this ultimately is a hedging architecture. It is a recognition that one of the most difficult things in decomposing a monolithic architecture is dealing with the data tier. To come up in advance with what we think are going to be our separate databases linked to those modules should make it easier to migrate to separate microservices later. If I'm working on module C, I have full ownership and control over the data associated with module C. When module C becomes a separate service, I should have an easier time to migrate it.

An old colleague of mine when I was still working at ThoughtWorks, Peter Gillard-Moss, first showed me this pattern. He came up with it for an internal system we were working on. He said, “I think this could work. We're not sure if we want to do services, so maybe it should be a monolith.”

I said, “Look, give it a go. See what happens.” I spoke to Peter last year, about six years on, and ThoughtWorks still haven't changed the architecture. It's still running quite happily. They have different people working with different modules, and having the data separated even at that level gives them significant benefits.

Figure 4: The distributed monolith. (Cue ominous music.)

Now we come to the worst of the monoliths, the distributed monolith. Our application code is now running on separate processes that communicate with each other. For whatever reason, we have to deploy the entire system as a unit in a lockstep release. Often this can occur because we've gotten our service boundaries wrong. We're smearing business logic all over different layers. We've not listened to the messages around coupling and cohesion and now our invoicing logic is in 15 different places across our services stack. We're having to coordinate changes between multiple teams to get anything done. Lots of crosscutting changes in an organization are often a sign that either the organizational boundaries or the service boundaries are in the wrong place.

The problem with a distributed monolith is that it is inherently a more distributed system, with all the associated design, runtime, and operational challenges, yet we still have the coordination activities that a monolith demands. I want to deploy my thing live, but I can't. I've got to wait till you've done your change, but you can't do your change because you're waiting on somebody else. Now, we agree: “Okay, well, on 5 July, we're all going to go live. Is everybody ready? Three, and two, and one, and deploy.” Of course, it always all goes fine. We never have any issues with these types of systems.

If an organization has a full-time release-coordination manager or another job along those lines, chances are it has a distributed monolith. Coordinating lockstep deployments of distributed systems is not fun. We end up with a much higher cost of change. The scopes of deployments are much larger. We have more to go wrong. We also have this inherent coordination activity, probably not only around the release activity but also around the general deployment activity. 

Even a cursory examination of lean manufacturing teaches that reducing handoffs is key to optimizing throughput. Waiting for somebody else to do something for me creates waste. It creates bottlenecks in our throughput. To ship software more quickly, reducing handoffs and reducing coordination are key. Distributed monoliths, unfortunately, tend to create environments that force that coordination to happen. 

Sometimes, our problem is not where the service boundaries are. Sometimes, it can start purely from how we do our software development. Some people fundamentally misunderstand the release train. The release train was always considered to be a remedial release technique, not an aspirational activity. We would pick something like a release train to help an organization move to continuous delivery. The concept of a release train is that on a regular basis, maybe every four weeks, all of the software that's ready goes out the door. If our software isn't ready, it is bumped to the next release train. For many organizations, this is a step forward. We are supposed to reduce the intervals between release trains and eventually get rid of the train altogether. All too many organizations, though, adopt the release train and never move on.

When a bunch of teams all work toward the same release train, all the software that’s ready ships as the release train leaves — and suddenly we have lots of services being deployed at once. This is a real issue. When practicing the release train, one of the important things is, at the very least, to break those release trains down so that they are team release trains. Allow separate teams to schedule when their own trains leave the station. Ultimately, we should get rid of these trains. They are supposed to be only a step towards continuous delivery.

Unfortunately, some excellent efforts at marketing agile have codified the release train as the ultimate way to deliver software. We know they've done that, because the laminated SAFe diagrams hanging in many corporate organizations have the phrase “release train” printed right on them. This is not good. Whatever other problems you might have with SAFe, the release train is a remedial technique. It's training wheels on our bike. We're supposed to be moving forward to continuous delivery. The problem, if we stick with release trains for too long, is that we will end up with a distributed monolith for our architecture, because we get used to deploying all of our services together. Be aware of that. It may not happen overnight. We might start off with an architecture that could support independent deployments, but if we stick with release trains for too long, our architecture will start to coalesce around those release practices.

Ultimately, the distributed monolith is a problem because it has all the complexity of a distributed system as well as the downsides of a single unit of deployment. We should want to move past it to better ways of working. The distributed monolith is a tricky thing and there's a lot of advice out there about how to deal with it. Sometimes, the right answer is to merge it back into a single-process monolith. But if we have a distributed monolith today, our best course is to work out why we have that and to start movement towards making parts of our architecture independently deployable before adding any new services. Adding new services to that mix is likely going to make our world much more difficult.

How to migrate a monolith to a microservices architecture

We use microservice architecture for its property of independent deployability. We want to be able to deploy a change in a service to production without changing anything else. This is the golden rule of microservices. In a presentation or article, that seems really easy. In real life, it's a lot more difficult to make happen, especially given that most people don't start from scratch. The vast majority have a system that they feel is too big, and they want to break it into smaller pieces. They wonder where to start.

Domain-driven design (DDD) has some great ways to help us find our service boundaries. When I'm working with organizations that are looking at microservice migration, we often start with performing a DDD modeling exercise on the existing monolithic application architecture. We do that to figure out what is happening inside the monolith and to determine the units of work from a business-domain point of view.

Although a monolith might seem like a giant box, when we apply DDD and project a logical model onto that monolith, we realize that the insides are organized into things like order management, PDF rendering, client notifications, and more. While the code is probably not organized around these concepts, from the point of view of our users or of a business-domain model, these concepts exist in the code. For reasons I won't go into now, those business-domain boundaries, often called “bounded contexts” in DDD speak, become our units of decomposition.

Figure 5: Finding the units of decomposition and dependencies in a monolith.

The first thing to do is to ask where we start, what things can we prioritize, and what are our units of work. In the initial monolith in Figure 5, we've got some order management, invoicing, and notifications. A DDD modelling exercise will give us a sense of how these things are related. Hopefully, we’ll come up with a directed acyclic graph of dependencies between these different pieces of functionality. (If we get a cyclic graph of dependencies, we have to do some more work.) In this monolith, we can see lots of things that depend on the ability to send notifications to our customers. That seems to be a core part of our domain.

Checkpoint: Are microservices the right solution for my problems?

We can start asking questions, like what we should extract first. I can look at it purely through this lens. We might see that notifications are used by lots of things — and if microservices are better, then extracting something that's used by lots of parts of my system will make more things better. Maybe we should start there. But look at all of those inbound dependencies. Because so many things are calling out to notifications, it will be difficult to detangle this, to rip it out of the existing monolithic architecture. Concepts like invoicing or order management in that monolithic system seem to be more self-contained. They're likely going to be easier things to decompose. And deciding on which piece to start with speaks fundamentally to an incremental approach to decomposition. 

Before anything, take to heart that the monolith is not the enemy. I want you to really reason about that. People see any monolithic system as being a problem. One of the most concerning things I've seen over the last couple of years is that microservices seem for many now to be the default choice.

Some of you may remember an old saying: "Nobody ever got fired for buying IBM." The idea was that because everybody else was buying IBM, you might as well buy IBM too — if the things you bought didn't work for you, it couldn't be your fault because everybody's doing it. You didn't have to stick your head above the parapet. Now that everyone's doing microservices, we have the same problem. Everyone is clamoring for microservices. That’s good for me: I write books on the subject. It might not be good for you.

Fundamentally, it comes down to what problems we're trying to solve. What are we trying to achieve that our current architecture doesn't let us do? Maybe microservices are the answer, or maybe something else is. It's crucial that we understand what it is we're trying to achieve because without that comprehension, it's going to be difficult for us to establish how to migrate our system. What we're doing is going to change how we decompose a system and how we prioritize that work.

The metaphor I use for microservice migrations is that it's not like a switch. There’s no on/off toggle. It's more like turning a dial. In adopting microservices, we turn up that dial and add one or two services. We want to see how a service works for us, if it gives us what we need, if it solves our problems. And if it does and we like it, we can keep turning that dial. 

What I see a lot of people do, though, is to crank that dial around, add 500 services, then plug in headphones and check the volume. That's a great way to blow your eardrums. We just don't know the problems we're going to face, the things that aren't going to hit us on a developer’s laptop. They're going to hit in production. When we’ve gone from a monolithic system to 500 services all at once, all the issues hit us all at once. Regardless of whether we want to end up with one, two, or five services or we want to be like Monzo and have 800 or 1,500 services, starting with a small turn of the dial is important. We need to pick a few services to start our migration. We get those running in production, learn from that experience, and bring that learning forward as quickly as possible. By turning this dial gradually, creating and releasing new microservices in an incremental fashion, we can better detect and handle the issues as they arise. The problems each project faces will vary based on so many different factors.

We want to extract some functionality from our monolithic system, have it talk to and integrate with the remaining monolith, and do that as quickly as possible. We don't want to do big-bang rewrites anymore. When we used to deploy software every year to our users, we had a 12-month window in which we could say, “We've treated our existing system so badly that now it's impossible to work with, but we've got 12 months until the next release. If we try, we can completely rewrite the system, and we won't make any of the mistakes we made in the past, and we'll have all the existing functionality and a lot more functionality besides, and it's all going to be fine.” 

That was never true when we were releasing software every year. I don't know how we justify it now when people expect software to be released monthly, weekly, or daily. To paraphrase Martin Fowler, "If you do a big-bang rewrite, the only thing you're certain of is a big bang." I love explosions in action films, but not in my IT projects. We need to think a bit differently about how we make these changes.

Deploying a first microservice from a monolith 

I'm a big fan of incremental evolution of architectures. We shouldn't see our architectures as fixed. We need patterns that help us move monoliths toward microservices in incremental ways. 

One of the first application patterns to look at is the strangler fig, named after a plant that takes root in the canopy of trees and sends tendrils down around a tree, wrapping itself around the trunk. By itself, a strangler fig couldn't get up into the forest canopy to get enough sunlight, so instead of growing up from a sapling like a normal tree, it wraps around an existing structure. It relies on the existing height and strength of the tree. Over time, as these figs mature and become bigger, they may be able to stand by themselves. If the underlying tree dies and rots away, the fig tree is left with a hollow column in the middle. These things look like wax dripped around other trees — really disturbing-looking stuff.

But this idea is useful as a pattern for application migration strategy. We take an existing system that does all the things we want it to, our existing monolithic application, and we start to wrap our new system around it. In our case, that's going to be our microservice architecture. There are two keys to implementing a strangler fig application. The first is asset capture, the process of identifying which functionality we're going to migrate to a microservice architecture. Then we need to be able to divert calls. The calls that used to go to the monolithic application are going to have to be diverted to where the new functionality lives. If the functionality hasn't been migrated, those calls are not diverted; it's pretty straightforward. 

Some people get confused about how to move functionality. If we’re really lucky, we might be able simply to copy the code. If the code for our invoicing service is in a nice box called “invoicing” in our monolithic code base, we can cut and paste it into our new service. I would argue that if that's the state of your code base, you probably don't need any help. More likely, we're going to have to scurry through the system, trying to collect all the bits of invoicing. We are probably going to do some pre-refactoring exercises. Maybe we can reuse that code, but in that case, it's going to be copy and paste, not cut and paste. We want to leave the functionality in the monolith for reasons I'll come back to. More often, people will do some rewriting.

There are lots of different ways to implement the strangler fig. Let’s look at a simple one that uses an old-fashioned bit of HTTP.

Say we have a monolithic system driven via HTTP. This could be a headless application. We could be intercepting calls with an API boundary underneath the user interface. What we need is something that can allow us to redirect calls, so we're going to make use of some kind of HTTP proxy. The reason HTTP works so well as a protocol for these kinds of architectures is because it's extremely amenable to transparent redirection of calls. A call over HTTP can be diverted to lots of different places. Loads of software out there can do this for you, and it's extremely simple.

Figure 6: The HTTP proxy intercepts calls to the monolith, adding a network hop.

The first thing to do is put a proxy between the upstream traffic and the downstream monolithic system, and nothing else. We would deploy this proxy into production. At this point, it is diverting no calls. We can find out whether or not it works, in production. One of the things to worry about is the quality of our network, because we've added a network hop. Calls used to go straight into our monolithic system but now pass via our proxy. Latency is the killer in these situations. The diversion through a proxy should add an overhead of only a few milliseconds to existing calls; less than 10 milliseconds would be great. If it adds something like 200 milliseconds of latency for one extra network hop, we're going to need to pause our microservice migration because we've got other big issues that need to be solved first.
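To make this concrete, here is a deliberately minimal sketch of such a diverting proxy, written with the JDK's built-in HTTP server and client. In practice you would reach for an off-the-shelf proxy such as NGINX; the hostnames, the path prefix, and the single boolean toggle here are all illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StranglerProxy {
    // Flip this (or read it from config) once the new service is ready.
    static final boolean INVOICING_MIGRATED = false;
    static final String MONOLITH = "http://monolith.internal:8080";
    static final String INVOICING = "http://invoicing.internal:8081";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer proxy = HttpServer.create(new InetSocketAddress(8000), 0);
        proxy.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            // Asset capture: only calls to the migrated functionality divert;
            // everything else continues to flow to the monolith untouched.
            String target = (INVOICING_MIGRATED && path.startsWith("/invoicing"))
                    ? INVOICING : MONOLITH;
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(target + path))
                        .method(exchange.getRequestMethod(),
                                HttpRequest.BodyPublishers.ofByteArray(
                                        exchange.getRequestBody().readAllBytes()))
                        .build();
                // Forward the call and relay the response (headers omitted for brevity).
                HttpResponse<byte[]> response =
                        client.send(request, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(response.statusCode(), response.body().length);
                exchange.getResponseBody().write(response.body());
            } catch (InterruptedException e) {
                exchange.sendResponseHeaders(502, -1);
            }
            exchange.close();
        });
        proxy.start();
    }
}
```

Reverting a problematic migration is then just flipping the toggle back, which is exactly the fast remediation path described below.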

With a functioning proxy in place, we next work on our new invoicing service. We deploy it into production. We can do that safely even though it’s not yet fully functional, because it’s not yet being used. We need to separate the ideas of deployment and use in our heads. As we start with microservices, we want to be deploying functionality into production on a regular basis to make sure that our deployment mechanism works. We can test the new service in isolation as we add the functionality. It's not released to our users yet, but it's in the production environment. We can hook it up to our dashboards, we can make sure the log aggregation is working, or whatever else we want to do. 

The key thing is that it operates on one service. We can even break down the extraction of that one service into lots of little steps: getting the skeleton service up, implementing the methods, testing it in production, and then deploying the release. When we're ready, when we think the functionality is equivalent to the old system, we just reconfigure the proxy to divert calls from the old monolith functionality to our new microservice. 

At this stage, you might think you should now remove the old functionality from the monolith. Don’t do this yet! If we have a problem with the new microservice in production, we've got an extremely fast remediation technique: we just revert the proxy configuration to divert the traffic back to the monolith with the original functionality. For this to work, though, we do have to consider the role of data; we’ll come back to that shortly.

We want the extraction of this functionality into a microservice to be a true refactoring, changing the structure of the code but not its behavior. The microservice should be functionally equivalent to the corresponding functionality in the monolith. We should be able to switch between them until we're happy that the microservice is working properly.

If we want to retain the ability to toggle between implementations, it’s important that we don’t add new functionality or change existing functionality until the migration is complete and we no longer need that toggle.

This simple strangler fig technique works surprisingly well in a large number of situations. This example used HTTP but I've seen this work with FTP too. I've done this with message interceptors. I've done this with uploading fixed files: we insert the fixed file, strip out the stuff that we want for our new service, and pass the rest on. 

Figure 7: A directed acyclic graph of a microservices architecture.

Using the “branch by abstraction” pattern to evolve a monolith migration

The strangler fig works quite well for something like invoicing or order management, pieces of functionality that sit higher up in our call stack, as depicted in the graph of dependencies inside our monolith in Figure 7. But no call enters the monolithic system for something like the ability to reward points for loyalty or the ability to send notifications to customers. The call that comes into the monolith is “place order” or “pay invoice”. Only as a side effect of those operations might we award points or send an email. As a result, we can't intercept calls to, say, loyalty or notifications outside the perimeter of our monolith. We have to do that inside the monolith. 

Imagine we're going to extract notifications. We have to extract that piece of functionality and intercept those inbound links in an incremental way so that we don’t break the rest of the system.

A technique called “branch by abstraction” can work well. Branch by abstraction is a pattern often discussed in the context of trunk-based development, which is a good way to develop software. Branch by abstraction is also useful as a pattern in this context. We create a space in our existing monolithic system where two implementations of the same piece of functionality can coexist. In many ways, this is an example of the Liskov substitution principle. This is a separate implementation of exactly the same abstraction. For this example, we're going to extract the notifications functionality from the existing code.

Figure 8: Branch by abstraction used to migrate to a microservice.

The notifications code is scattered all over our system. The first thing we want to do is gather all of that notifications functionality for our new service. We're going to hide the service behind the abstraction point. We want our invoicing code and orders code to access this functionality via a clear abstraction point. Initially, we have one implementation of that notification abstraction: one that encapsulates all the existing notification-related functionality living inside the monolith. All of our calls — to SMTP libraries, to Twilio, to send SMSes — get bundled into this implementation.
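As a minimal sketch of that abstraction point (the interface and class names are invented for illustration), the calling code depends only on the interface, while the first implementation is simply the monolith's existing notification code gathered into one place:

```java
// Notifications.java — the abstraction point. Invoicing and orders code
// sees only this interface, never the machinery behind it.
public interface Notifications {
    void sendEmail(String to, String subject, String body);
    void sendSms(String phoneNumber, String text);
}
```

```java
// InProcessNotifications.java — the first implementation wraps what the
// monolith already does today: existing SMTP and SMS code, collected
// from across the code base.
public class InProcessNotifications implements Notifications {
    @Override
    public void sendEmail(String to, String subject, String body) {
        // ... the existing SMTP-library calls, moved here unchanged
    }

    @Override
    public void sendSms(String phoneNumber, String text) {
        // ... the existing Twilio/SMS calls, moved here unchanged
    }
}
```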

At this point, all we've done is created a nice abstraction point in our code. We could stop here. We've clarified our code base and made it more testable, which already are improvements. This is a good, old-fashioned bit of refactoring. We've also created, though, an opportunity to change the implementation of notifications that invoicing or orders use. We could do this refactoring effort over days or weeks while doing other stuff like actually shipping features. 

Next, we start creating our new implementation of the notifications service. This is going to be split into two bits. We've got the implementation of the new interface that lives inside the monolith, but that is just going to be client code that calls out to the other bit: our new notification microservice. We can safely deploy these implementations because, again, they're not yet being used. We're integrating our code more frequently, reducing the merge effort, and making sure that everything works.
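The client half of that pairing can be as thin as this hedged sketch, assuming the new microservice exposes a simple JSON-over-HTTP endpoint (the base URL, paths, and hand-rolled JSON are placeholders; a real client would use a proper JSON library and handle escaping):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The same abstraction, now backed by the new notification microservice.
public class NotificationsServiceClient implements Notifications {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI baseUri; // e.g. http://notifications.internal:8082

    public NotificationsServiceClient(URI baseUri) {
        this.baseUri = baseUri;
    }

    @Override
    public void sendEmail(String to, String subject, String body) {
        post("/emails", "{\"to\":\"" + to + "\",\"subject\":\"" + subject
                + "\",\"body\":\"" + body + "\"}"); // naive JSON, for brevity
    }

    @Override
    public void sendSms(String phoneNumber, String text) {
        post("/sms", "{\"to\":\"" + phoneNumber + "\",\"text\":\"" + text + "\"}");
    }

    private void post(String path, String json) {
        try {
            HttpRequest request = HttpRequest.newBuilder(baseUri.resolve(path))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();
            client.send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception e) {
            throw new RuntimeException("notification service call failed", e);
        }
    }
}
```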

Once the pairing of our new service-calling implementation inside the monolith and our notifications service outside the monolith works, all we need to do is switch the implementation of the abstraction we're using. We can use feature toggles, text files, specific tools, or whatever we want for this. We haven't removed the old functionality yet, so if we have a problem, we can flick that toggle back and revert to the old functionality. Again, the migration of this one service is broken down into lots of smaller steps and we're trying to get to production as quickly as possible in all of them.
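The switch itself can then be as small as this sketch, with the toggle read from wherever you keep configuration (an environment variable here, purely for illustration):

```java
import java.net.URI;

public class NotificationsFactory {
    // Flicking this toggle back instantly reverts to the monolith's code,
    // because the old implementation has deliberately not been deleted yet.
    public static Notifications create() {
        boolean useService = Boolean.parseBoolean(
                System.getenv().getOrDefault("NOTIFICATIONS_VIA_SERVICE", "false"));
        return useService
                ? new NotificationsServiceClient(URI.create("http://notifications.internal:8082"))
                : new InProcessNotifications();
    }
}
```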

Once everything is working, we can choose to clean up the code. We could remove the feature flag once we no longer need it or we could even remove the old code. It's now easy to remove the old code because we've just spent some time putting all of that code into one nice box. We delete that class and it's gone. We’ve made the monolith smaller and everyone feels good about themselves. 

Validating microservices migration using parallel run

In terms of restructuring or refactoring code, I strongly recommend Working Effectively with Legacy Code, by Michael Feathers. His definition of legacy code is code without tests. The book has loads of great ideas about how to find and create those abstractions in a code base without disrupting the existing system. Even if you don't go to microservices, just creating that abstraction point is probably going to leave your code in a better, more testable state.

I've emphasized that it's good not to remove the old implementations too quickly. There are benefits to having both implementations at the same time. It opens up interesting approaches to how we deploy and roll out our software. When a call comes into the abstraction point, it could trigger calls to both implementations. This is called a parallel run. This can help us make sure that our new microservice-based implementation is functionally equivalent. We execute both copies of that functionality, then compare the results.

For the comparison, we have to designate only one implementation as our source of truth, because we wouldn't want the side effects of both to take effect: in the case of notifications, for example, that could result in sending two emails when we want to send only one. A parallel run is a useful, direct live comparison, not just of functional equivalence but also of acceptable non-functionals. We test not only whether we created the right email or sent it to the correct dummy SMTP server, but also whether the new service responds as quickly and within an acceptable error rate.

Normally, the older functionality would be our trusted implementation, whose results we are going to use. We run them side by side for a period of time and if we are getting acceptable results in the new implementation, we eventually can get rid of the old one.
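A minimal sketch of what that comparison can look like in code follows (the names are illustrative, and it assumes the functionality returns a comparable result rather than firing side effects; for something like notifications you would point the candidate at a dummy SMTP server):

```java
import java.util.function.Supplier;

// The trusted implementation remains the source of truth; the candidate
// runs alongside it, and mismatches or slowness are recorded, never served.
public class ParallelRun<T> {
    public T run(Supplier<T> trusted, Supplier<T> candidate) {
        T control = trusted.get();          // this result is what callers get
        try {
            long start = System.nanoTime();
            T trial = candidate.get();      // the new code path, observed only
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (!control.equals(trial)) {
                System.err.printf("parallel-run mismatch: %s vs %s%n", control, trial);
            }
            System.err.printf("candidate took %d ms%n", elapsedMs);
        } catch (RuntimeException e) {
            // A failing candidate must never break the user-facing path.
            System.err.println("candidate failed: " + e);
        }
        return control;
    }
}
```

A call site might then read `new ParallelRun<Invoice>().run(() -> monolithInvoicing.generate(order), () -> invoicingService.generate(order))`, where both implementations are hypothetical stand-ins for the two sides of the abstraction.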

GitHub do this. They have created a library called GitHub Scientist, which is a little Ruby library for wrapping different abstractions and scoring them. We can use this to do a live comparison wherever we’re refactoring critical code paths in an application. GitHub Scientist has been ported to a bunch of different languages, inexplicably including three different ports for Perl: clearly, parallel runs are a big thing in the Perl community. There's loads of good advice out there on how to do parallel runs inside your application.

Separating deployment from release: the fundamental change

Fundamentally, we need to separate the idea of deployment from the idea of release. Traditionally, we would consider these two activities to be one and the same: deploying our software was the same as releasing it to our production users. This is why everyone is scared of anything happening in production, and it's how production becomes a gated environment.

We can separate these two concepts. The act of deploying something into production is not the same as releasing it to our users. This idea underpins what people are now calling “progressive delivery”, which is an umbrella term for a bunch of different techniques including canary releasing, blue/green deployments, dark launching, etc. We can get our software out quickly, but we don't have to expose it to any customers. We can move it to production, test it there, and bear any pain ourselves.

If we separate deployment from release, deployment has so much less risk. We become braver about making changes. We'll be able to make more frequent releases and those releases will have much lower risk. 

James Governor, co-founder of RedMonk, has a nice overview of progressive delivery over on the company’s blog. Look into progressive delivery, but the most important takeaway is that the act of deployment is not the same thing as the act of release, and you can control how that release activity happens.

Migrating simple data access in a microservices approach

We have our existing monolithic application and our data locked away in our system, as shown in Figure 9. We've decided to extract our invoicing functionality, but it needs to access the data. 

Figure 9: Accessing old data from the new microservice.

Option one is to just directly access the monolith’s data. If we’re still testing and switching between invoicing live in the monolith and invoicing live in the microservice, we want data compatibility and consistency across those two implementations and this will give it to us. This is acceptable for a short period of time but it’s contrary to one of the golden rules of databases: thou shalt not share databases. This is not something to rely on for the long term because of the fundamental coupling issues that it causes. We want to maintain independent deployability.

Figure 10: Directly accessing old data from the new microservice.

In Figure 10, we have a shipping service and our database, and we’ve allowed somebody else to access our data. We've exposed our internal implementation details to an external party. It makes it hard for the developer of the shipping service to know what can safely change. There is no separation between what is shared and what is hidden.

In the 1970s, David Parnas developed this concept called “information hiding”, which is how we think about modular decomposition. We want to hide as much information as possible inside the boundary of a module or of a microservice. If we create a well-defined service interface for sharing data instead of directly exposing our database, that interface grants the developer of our shipping service an explicit understanding about the contract and what they can expose to the outside world. As long as the developer maintains that contract, they can do whatever they want in the shipping service. This is about allowing independent evolution and development of these services. Don't do direct database access, except in an extremely limited number of circumstances.
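As a small illustrative sketch of information hiding (the names are invented), the shipping service's contract might be nothing more than this, with every table, column, and ORM choice hidden behind it:

```java
// The explicit, deliberately small contract the shipping service shares.
// Consumers depend on this interface, never on the shipping database, so
// the schema behind it can change freely without breaking anyone.
public interface ShippingInfo {
    DeliveryStatus statusFor(String orderId);

    enum DeliveryStatus { PENDING, DISPATCHED, DELIVERED }
}
```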

Moving away from direct access, we've got two options: either we want to access somebody else’s data or we want to keep our own data. Let's imagine for this example that we have decided that the new invoicing microservice is good enough to be our source of truth.

At this moment, if the data that we want to use is somebody else's data then that probably means the data belongs to the monolith and we have to ask the monolith for it. We create some kind of explicit service interface on the monolith itself — in our example, an API — and we can fetch the data we want.

Figure 11: The new microservice uses an explicit service interface on the monolith.

We’re the invoicing service, not the order service, but we might need, say, orders data. The orders functionality lives in the monolith, so we go there to get that data. This scheme makes us define service interfaces on the monolith as we expose different sets of data, and as we do so, we see the shape of other entities emerging from the monolith. We might discover an order service waiting to erupt from inside the monolith like the chestburster in Alien, although in this context, the monolith would be played by John Hurt and it would die.

The other option for data is our service’s own data — in this example, invoicing data inside the monolith’s database. At this point, we have got to move the data over to an invoicing database, and this is really hard. Taking data out of an existing system, especially a relational database, causes a lot of pain. I'm going to look at a quick example with the challenges it can create. I'm going to throw us right into the deep end, which is how we deal with joins.

The final challenge: Dealing with join operations

Figure 12: The monolith that sells compact discs online.

Figure 12 depicts an existing monolithic application that sells compact discs online. (You can tell how long I've been using this example.) The catalog functionality knows how much something costs and stores information in the line-items table. Finance functionality manages our financial transactions and stores data in its ledger table. One of the things that we need to do is to generate a list of our top 10 albums sold each week. That's a straightforward join operation in this situation: we do a select on our ledger table, limited to the top 10 rows, to get the list of best-selling IDs, and join to the line-items table for the details of what we sold.

Figure 13: The microservices architecture that sells compact discs online.

When we move to the microservices world, we need to do that join operation in the application tier. We pull financial transactions from the finance database. Information about the items we sell lives in the catalog database. To generate that top-10 list, we’re going to have to pull the IDs of the best sellers from the ledger table and then go to the catalog microservice and ask for information about the items we sold. Our join operations, which we used to do in the relational tier, move to the application tier. 

This can become horrific in terms of things like latency. Instead of doing a single round trip for the join operation, we now make one call to the finance service to pull the top 10 IDs, then another to the catalog service to ask about those 10 IDs, then the catalog service queries the catalog database for those 10 IDs, and then we get the response. Figure 14 illustrates the concern.
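In code, the change looks something like this hedged sketch (the client interfaces stand in for whatever wraps each service's API; all the names are invented):

```java
import java.util.List;
import java.util.Map;

// What used to be one SELECT ... JOIN ... LIMIT 10 round trip is now two
// remote calls, with the correlation done here in the application tier.
public class BestSellersReport {
    public record Album(String id, String title, String artist) {}

    public interface FinanceClient {
        List<String> topSellingIds(int limit);            // call #1: finance service
    }

    public interface CatalogClient {
        Map<String, Album> albumsByIds(List<String> ids); // call #2: catalog service
    }

    private final FinanceClient finance;
    private final CatalogClient catalog;

    public BestSellersReport(FinanceClient finance, CatalogClient catalog) {
        this.finance = finance;
        this.catalog = catalog;
    }

    public List<Album> topTen() {
        List<String> topIds = finance.topSellingIds(10);
        Map<String, Album> albums = catalog.albumsByIds(topIds);
        return topIds.stream().map(albums::get).toList();
    }
}
```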

Figure 14: The microservices architecture leads to more hops and latency.

We haven't even touched on issues like the lack of data integrity in this situation, or how a relational database can enforce things like referential integrity.

Conclusion

If you want to dive deeper into the issues around things like handling latency and data consistency, these are outlined in depth in my book Monolith to Microservices. 

Whether or not you decide to go ahead with your own migration to microservices, I urge you to think carefully about what you are doing and why. Don’t get fixated on the activity of creating microservices. Instead, be clear-minded about the outcome you are trying to achieve. What outcome do you think microservices will bring? Focus on that instead — you may well find that you can achieve that same outcome without going into the complicated world of microservices in the first place.

About the Author

Sam Newman is an independent consultant specializing in helping people ship software fast. He has worked extensively with the cloud, continuous delivery, and microservices, and is especially preoccupied with understanding how to more easily deploy working software into production. For the last few years, he has been exploring the capabilities of microservice architectures.

