
The Long History of Microservices


Microservices have a much longer history than many believe, and neither was SOA invented in the 90s. We have been working with the core ideas behind services for five decades, Greg Young explained at the recent Microservices Conference in London, in a presentation on the ideas that have led to today's microservices.

Young refers to Martin Fowler for some major characteristics of a microservice: most importantly, the ability to replace services in a system independently of each other, organization around business capabilities, and the use of smart endpoints and dumb pipes. All of these, Young notes, are also aspects of SOA done right.

Young notes that in the original object orientation model from the 1970s, an object is a little computer that you send messages to in order to tell it to do things. The Actor model from the same period is based on similar concepts, an Actor being a small computer you send messages to through its mailbox. Both are predecessors of the core aspects of microservices; we may change our tooling or message transports, but the ideas are still the same. Today we say that SOA has failed while microservices will succeed, but Young claims that nothing is wrong with the basic concepts of SOA. The good aspects of microservices were already there in SOA.
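The "little computer with a mailbox" idea can be sketched in a few lines. This is a minimal, illustrative actor, not anything from Young's talk: a private queue serves as the mailbox, and a dedicated thread processes one message at a time.

```python
import queue
import threading

class Actor:
    """A minimal actor sketch: a private mailbox plus a thread
    that handles one message at a time."""
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message):
        # The only way to talk to the actor is through its mailbox.
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:  # poison pill: stop the actor
                break
            self.handler(message)

results = []
greeter = Actor(lambda msg: results.append(f"hello, {msg}"))
greeter.send("world")
greeter.send(None)
greeter.thread.join()
print(results)  # ['hello, world']
```

Note that nothing here depends on where the actor runs: the mailbox could equally be backed by a network transport, which is the continuity Young points to between the Actor model and microservices.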

Looking at what we have learned over the last 50 years, Young refers to the first law of distributed computing, which states that we should not distribute a system unless we really need to. There is nothing wrong with building services and then hosting them on the same server, or even in the same process. He claims that most systems, especially smaller business systems, do not need to be distributed for scalability, although they may be distributed for availability.

What we want is isolation between services. When running each service in its own process on the same server, we are guaranteed that the contract is followed. One step towards more isolation is to run each service inside a Docker container; this gives somewhat more isolation because fewer things are shared between the services. A further step is to run one node per microservice.
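The point about hosting services together without losing their boundaries can be sketched as follows. The service names and payloads are illustrative assumptions, not from the talk; the idea is that each service is a smart endpoint exchanging plain serialized messages (a "dumb pipe"), so the same contract would hold if a service later moved to its own process, container, or node.

```python
import json

class OrderService:
    def handle(self, request: str) -> str:
        # Contract: JSON string in, JSON string out. The contract is
        # unchanged whether the caller is in-process or over a network.
        order = json.loads(request)
        return json.dumps({"order_id": order["id"], "status": "accepted"})

class BillingService:
    def handle(self, request: str) -> str:
        payload = json.loads(request)
        return json.dumps({"order_id": payload["order_id"], "billed": True})

# Both "services" hosted in one process, wired by a dumb pipe:
# one service's serialized output is the next service's input.
orders, billing = OrderService(), BillingService()
accepted = orders.handle(json.dumps({"id": 42}))
invoice = billing.handle(accepted)
print(invoice)  # {"order_id": 42, "billed": true}
```

Swapping the direct call for an HTTP request or a message queue changes the isolation level, and its cost, without changing either service's contract.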

One reason for choosing a certain level of isolation is dealing with correlated failures. When running all microservices inside the same process, all services are killed if that process is restarted. By running each service in its own process, only one service is killed if that process is restarted, but restarting the server still kills all services. Running each service on its own server, or two instances on two servers, allows servers to be restarted with less effect on the availability of each service.

Each of these layers of isolation also has a cost associated with it. Running multiple processes on the same server is easier to deal with than, for example, one node per service. No single level is correct and the others wrong; it is all about trade-offs. By increasing your level of isolation, you also increase your cost and complexity. Young paraphrases Simon Brown:

If you can’t manage building a monolith inside a single process, what makes you think putting a network in the middle is going to help?

For Young, one advantage of these different levels of isolation is that we don’t have to decide upfront, and we don’t have to use the same level in production as in development; he notes that this is a main benefit of a microservices-style architecture.

Next year’s Microservices Conference in London is scheduled for November 6-7, 2017.
