Komand Principal Engineer Sean Kelly on Microservice Fallacies

Sean Kelly, a Principal Engineer at Komand, has written an article on the back of a lightning talk he gave at the Boston Golang meetup last year, in which he spoke about his experiences with microservices. He starts by setting out what his audience should expect to take away from listening to (or reading) what he has to say:

I’m going to cover a few of the major fallacies and “gotchas” of the Microservices movement, coming from someone who worked at a company that also got swept up in the idea that breaking apart a legacy monolithic application was going to save the day. While I don’t want the takeaway of this blog post to be “Microservices == Bad”, ideally anyone reading this should walk away with a series of issues to think about when deciding if the move to a Microservice based architecture is right for them.

Of course, no discussion about microservices can begin without at least trying to define what they are or are not. However, as Kelly points out, there is no "perfect" or agreed definition:

What this actually means in practice is that a microservice only deals with as limited an area of the domain as possible, so that it does as few things as necessary to serve its defined purpose in your stack.

Most microservice practitioners use REST (over HTTP) or some RPC protocol for communication with and between microservices. The result, Kelly observes, is that developers think this is a pretty simple thing to do and ...

we’ll just wrap tiny pieces of the domain in a REST API of some kind, and we’ll just have everyone talk to each other over the network.

Unfortunately, and this is the meat of what Kelly has to say, in his experience this leads to five "truths" about microservices which are not always quite so "true". The first is that they will lead to cleaner code. However, as he then says, “You don’t need to introduce a network boundary as an excuse to write better code”.

The simple fact of the matter is that microservices, nor any approach for modeling a technical stack, are a requirement for writing cleaner or more maintainable code. It is true that since there are less pieces involved, your ability to write lazy or poorly thought out code decreases, however this is like saying you can solve crime by removing desirable items from store fronts. You haven’t fixed the problem, you’ve simply removed many of your options.

Kelly goes on to suggest that architecting around coarser-grained "logical" services might be a better approach, at least initially, one benefit over going straight to microservices being less reliance on the network for communication. He notes that this very closely resembles a Service Oriented Architecture, which of course raises another unanswered question: is there a difference between SOA and microservices? We have covered a number of discussions on this topic in the past.
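
Kelly's article contains no code, but the idea of a "logical" service can be sketched in Go (the language of the meetup where the talk was given). The package, type and method names below are illustrative assumptions rather than anything from his post; the point is that the service boundary is an interface and a package, not a network call:

```go
// Package billing is a hypothetical "logical" service: a coarse-grained
// module with a narrow, explicit interface that lives in the same process
// as the rest of the application. Callers depend on the Service interface,
// not on the implementation, and no network hop is involved.
package billing

import (
	"context"
	"errors"
	"fmt"
)

// Invoice is the only type callers of this logical service need to know about.
type Invoice struct {
	ID     string
	Amount int64 // cents
	Paid   bool
}

// Service is the boundary of the logical service: the rest of the monolith
// talks to billing only through this interface.
type Service interface {
	CreateInvoice(ctx context.Context, customerID string, amount int64) (Invoice, error)
	MarkPaid(ctx context.Context, invoiceID string) error
}

// inMemoryService is a trivial in-process implementation; a real one would
// wrap a database, but calling it is still a plain function call.
type inMemoryService struct {
	invoices map[string]Invoice
	nextID   int
}

// New returns an in-process implementation of the billing Service.
func New() Service {
	return &inMemoryService{invoices: make(map[string]Invoice)}
}

func (s *inMemoryService) CreateInvoice(ctx context.Context, customerID string, amount int64) (Invoice, error) {
	s.nextID++
	inv := Invoice{ID: fmt.Sprintf("%s-%d", customerID, s.nextID), Amount: amount}
	s.invoices[inv.ID] = inv
	return inv, nil
}

func (s *inMemoryService) MarkPaid(ctx context.Context, invoiceID string) error {
	inv, ok := s.invoices[invoiceID]
	if !ok {
		return errors.New("billing: unknown invoice")
	}
	inv.Paid = true
	s.invoices[invoiceID] = inv
	return nil
}
```

The same interface could later be implemented by an HTTP or RPC client if the module is ever carved out into its own process, which is essentially the migration path Kelly recommends at the end of his article.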

On to the next "truth" about microservices, which for Kelly is that "It’s easy to write things that only have one purpose".

While it might seem simple at the outset, most domains (especially in newer companies which need to prototype, pivot, and generally re-define the domain itself many times) do not lend themselves to being neatly carved into little boxes. Often times, a given piece of the domain needs to reach out and get data about other parts to do its job correctly. This becomes even more complex when it needs to delegate the responsibility of writing data outside of its own domain. Once you’ve broken out of your own area of influence, and need to involve others in the request flow to store and modify data, you’re in the land of Distributed Transactions (sometimes known as Sagas).

OK, we'll ignore the fact that whilst Sagas are a form of distributed transaction, not all distributed transactions are Sagas! The core of Kelly's argument is that there can be a lot of complexity involved in wrapping multiple remote services into a single request: can they be called in parallel, or do they need to be invoked sequentially? What happens when one of them fails part-way through? We heard about some of these complexities recently from others, such as Alvaro Videla.
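
Kelly does not spell out what such a workflow looks like in code, but a minimal Saga-style sketch in Go (our illustration, with hypothetical step names, not his) shows where the complexity creeps in: each remote call can fail independently, so every step that succeeded before the failure needs a compensating action:

```go
// Package orders sketches a Saga-style workflow: a sequence of remote calls,
// each paired with a compensating action that undoes its effect. The step
// implementations (reserve stock, charge a card, and so on) are assumed to
// make network calls to other services and are not shown here.
package orders

import "context"

// step couples a forward action with the compensation to run if a later
// step in the saga fails.
type step struct {
	run        func(ctx context.Context) error
	compensate func(ctx context.Context) error
}

// runSaga executes the steps in order. On the first failure it walks back
// through the steps that already succeeded, running their compensations in
// reverse order, then returns the original error.
func runSaga(ctx context.Context, steps []step) error {
	completed := make([]step, 0, len(steps))
	for _, s := range steps {
		if err := s.run(ctx); err != nil {
			// Best-effort rollback: a production implementation also has to
			// deal with compensations that fail, retries, and idempotency.
			for i := len(completed) - 1; i >= 0; i-- {
				_ = completed[i].compensate(ctx)
			}
			return err
		}
		completed = append(completed, s)
	}
	return nil
}
```

Even this toy version raises the questions above: which steps can safely run in parallel, what happens if a compensation itself fails, and how is each step made idempotent so it can be retried?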

Kelly's third "truth" is "They’re faster than monoliths":

This is a tough one to dispel because in truth you often can make individual systems faster by paring down the number of things they do, or the number of dependencies they load up, etc etc. But ultimately, this is a very anecdotal claim. While I have no doubt folks who pivoted to microservices saw individual code paths isolated inside of those services speed up, understand that you’re also now adding the network in-between many of your calls. The network is never as fast as co-resident code calls, although often times it can be “fast enough”.

Kelly believes that in at least some of the cases where people report that microservices made things faster, other factors were at play, specifically new languages or technology stacks adopted during the re-architecture or re-implementation. Had these been applied to the original monolith, its performance might well have improved too.
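
Kelly's point about network overhead is easy to demonstrate. The rough Go sketch below is our illustration, not his: it compares calling a function directly with calling the same logic behind a loopback HTTP server, so even without any real network distance the HTTP round trip and serialisation add measurable cost:

```go
// A rough, unscientific comparison of an in-process call with the same logic
// exposed over loopback HTTP.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

// greet stands in for a tiny piece of business logic.
func greet(name string) string { return "hello " + name }

func main() {
	const n = 10000

	// Direct, in-process calls.
	start := time.Now()
	for i := 0; i < n; i++ {
		_ = greet("world")
	}
	fmt.Println("in-process:", time.Since(start))

	// The same logic behind a loopback HTTP endpoint.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, greet(r.URL.Query().Get("name")))
	}))
	defer srv.Close()

	start = time.Now()
	for i := 0; i < n; i++ {
		resp, err := http.Get(srv.URL + "/?name=world")
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
	}
	fmt.Println("over HTTP :", time.Since(start))
}
```

For a trivial function like this the per-call gap on loopback is typically several orders of magnitude; across a real network, with larger payloads to serialise, it only widens, which is why "fast enough" is the honest framing.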

"Truth" number four is, "It’s easy for engineers to not all work in the same codebase". However, for Kelly the follow up to this could be “A bunch of engineers working in isolated codebases leads to ‘Not my problem’ syndrome”, with a microservices architecture possibly leading to other problems that dwarf any possible benefits.

The biggest is simply that to do anything, you have to run an ever-increasing number of services to make even the smallest of changes. This means you have to invest time and effort into building and maintaining a simple way for engineers to run everything locally. [...] Additionally, it also makes writing tests more difficult, as to write a proper set of integrations tests means understanding all of the different services a given interaction might invoke, capturing all of the possible error cases, etc etc.
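
One common mitigation, which we should stress is not something Kelly prescribes, is to stub the downstream services in tests. The Go sketch below uses hypothetical names and stands up a fake dependency with httptest, exercising just one of the error cases the quote alludes to; the burden is that every dependency, and every failure mode, needs a stub along these lines:

```go
// An integration-style test for a (hypothetical) order handler that calls an
// inventory service over HTTP. The dependency is stubbed with httptest so
// that one specific failure mode, the inventory service being unavailable,
// can be exercised.
package orders

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestCreateOrder_InventoryUnavailable(t *testing.T) {
	// Stand-in for the real inventory service, hard-wired to fail.
	inventory := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "inventory unavailable", http.StatusServiceUnavailable)
	}))
	defer inventory.Close()

	if err := createOrder(inventory.URL, "order-123"); err == nil {
		t.Fatal("expected an error when the inventory service is unavailable")
	}
}

// createOrder is a minimal stand-in for the code under test so that the
// example is self-contained; the real handler would do considerably more.
func createOrder(inventoryURL, orderID string) error {
	resp, err := http.Get(inventoryURL + "/reserve?order=" + orderID)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("reserving stock for %s failed: %s", orderID, resp.Status)
	}
	return nil
}
```

Multiply that by the number of services a single user interaction touches, and the testing cost Kelly describes becomes clear.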

For his fifth and final "truth", Kelly has "It’s the simplest way to handle autoscaling, plus Docker is in here somewhere".

It’s not incorrect to say that packaging your services as discrete units which you then scale via something like Docker is a good approach for horizontal scalability. However, it’s incorrect to say that you can only do this with something like a microservice. Monolithic applications work with this approach as well. [...] while a microservice approach guides you into this approach from the get go, you can apply the exact same method of scaling your stack to a more monolithic process as well.

Kelly finishes his lightning talk, and hence his article, with a discussion of when developers should consider using microservices. He starts with the importance of developers and architects understanding ...

... the domain you’re working in. If you can’t understand it, or if you’re still trying to figure it out, microservices could do more harm than good. But if you have a deep understanding, then you know where the boundaries are, what the dependencies are, so a microservices approach could be the right move.

He goes on to discuss the importance of also understanding your workflows (another reference to distributed transactions). Beyond understanding the workflows, being able to monitor them is extremely important, probably even more so than in a monolithic implementation, because it can be harder to determine precisely which microservice is the cause of a bottleneck or a failure. In a sense, what Kelly suggests about understanding your architecture before leaping into microservices is similar to what others have said in the past, such as Simon Brown when talking about Distributed Balls of Mud:

If you're building a monolithic system and it's turning into a big ball of mud, perhaps you should consider whether you're taking enough care of your software architecture. Do you really understand what the core structural abstractions are in your software? Are their interfaces and responsibilities clear too? If not, why do you think moving to a microservices architecture will help? Sure, the physical separation of services will force you to not take some shortcuts, but you can achieve the same separation between components in a monolith.

But back to Kelly. Given all of this, what would be his recommendation to those considering microservices?

If you were to ask me, I’d advocate for building “Internal” services via cleanly defined modules in code, and carve them out into their own distinct services if a true need arises over time. This approach isn’t necessarily the only way to do it, and it also isn’t a panacea against bad code on its own. But it will get you further, faster, than trying to deal with a handful or more microservices before you’re ready to do so.
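
Kelly's suggested path, clean internal modules first and extraction only when a genuine need arises, maps naturally onto an interface-shaped boundary. As a hedged sketch building on the hypothetical billing module above, the HTTP-backed client below exposes the same method set; in a real codebase the interface and types would be shared, so callers would not change when the module becomes a separate service:

```go
// Package billinghttp is a hypothetical HTTP-backed counterpart to the
// in-process billing module sketched earlier. Once the interface and types
// are shared, swapping it in is the "carve them out into their own distinct
// services" step: only the wiring changes, not the callers.
package billinghttp

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// Invoice mirrors the type from the in-process sketch; in a real codebase it
// would be shared rather than duplicated.
type Invoice struct {
	ID     string `json:"id"`
	Amount int64  `json:"amount"`
	Paid   bool   `json:"paid"`
}

// Client talks to the extracted billing service over HTTP. The routes below
// are assumptions made for the sake of the example.
type Client struct {
	BaseURL string
	HTTP    *http.Client
}

func (c *Client) httpClient() *http.Client {
	if c.HTTP != nil {
		return c.HTTP
	}
	return http.DefaultClient
}

func (c *Client) CreateInvoice(ctx context.Context, customerID string, amount int64) (Invoice, error) {
	body, err := json.Marshal(map[string]any{"customer_id": customerID, "amount": amount})
	if err != nil {
		return Invoice{}, err
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.BaseURL+"/invoices", bytes.NewReader(body))
	if err != nil {
		return Invoice{}, err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := c.httpClient().Do(req)
	if err != nil {
		return Invoice{}, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return Invoice{}, fmt.Errorf("billing: unexpected status %d", resp.StatusCode)
	}
	var inv Invoice
	return inv, json.NewDecoder(resp.Body).Decode(&inv)
}

func (c *Client) MarkPaid(ctx context.Context, invoiceID string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.BaseURL+"/invoices/"+invoiceID+"/pay", nil)
	if err != nil {
		return err
	}
	resp, err := c.httpClient().Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("billing: unexpected status %d", resp.StatusCode)
	}
	return nil
}
```

The design choice is that the decision to cross the network lives in the wiring, not in the callers, which keeps the option of extracting (or re-absorbing) a service relatively cheap.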
