
The Benefits of Microservices

Gene Kim (moderator), Gary Gruver, Andrew Phillips and Randy Shoup discussed some of the benefits of microservices in a recent online panel.

During the online panel Exploring the Uncharted Territory of Microservices, organized by XebiaLabs, the following participants discussed the benefits of microservices: Gene Kim, author, researcher and former CTO at Tripwire (moderator); Gary Gruver, President at Practical Large Scale Agile LLC and former Director of Engineering for the LaserJet Enterprise at HP; Randy Shoup, Consulting CTO at Randy Shoup Consulting and former Director of Engineering for Cloud Computing at Google; and Andrew Phillips, VP of Product Management at XebiaLabs. We have extracted the main ideas, paraphrasing them in short form here.

Gruver said that microservices enable architects to build large systems composed of small services, each encapsulating the functionality corresponding to a single feature. This contrasts with large, monolithic enterprise applications and has benefits in terms of managing the teams working on such projects, dealing with code changes, and handling release cycles.

Large monolithic codebases can have a deferred feedback loop; Gruver mentioned the case of a breaking code change that took six months to be discovered and one week to be fixed.

Shoup considered that some of the most successful Internet companies (Amazon, eBay, Google, Netflix, Twitter, etc.) have moved to microservices because “the monolith grows and there is so much coordination overhead and so much fear of making change that everything slows down.”

Shoup added that microservices, Agile and DevOps are facets of the same idea: breaking what we do into small, manageable pieces and doing those independently in parallel, enabling large companies to move at the speed of small ones. Organizing the company and the development teams around microservices results in very small autonomous teams of 3-5 people, each responsible for one or several microservices. These teams make their own technology and methodology choices. Each developer feels responsible for a small, highly specialized codebase, gets to know it very well, and becomes highly efficient at improving it or fixing defects.

The Google Megastore service is developed by a small team of only six people and serves the needs of several other services, including Google App Engine and Google Cloud Datastore. The latter is one of the largest NoSQL services in the world and yet has a team of only 6-8 people, according to Shoup.

The risk associated with code changes is not linear in the number of modified lines of code. In a monolithic application, as the number of code changes grows, the risk rises exponentially because of all the dependencies that tend to build up in such systems over time. In the microservices world, using a continuous release cycle, changes are quite small, and developers can review them rapidly and fix any defects found, reducing the risk of a deployment. This results in higher velocity with lower associated risk, added Shoup.

Gruver continued by saying that, in contrast, managing and coordinating a release for a large monolithic codebase containing thousands of changes made by hundreds of developers is daunting: one needs to make sure everything is in place and everything is in sync.

Phillips remarked that, like Agile, DevOps and Continuous Delivery, microservices are not a silver bullet for everything; they work in a certain context, and there is a cost associated with using such an approach. There is still room for monolithic applications in enterprises. Nonetheless, large codebases can be problematic; Phillips mentioned a Microsoft Word bug that has been around since 1995 and that nobody has managed to track down.

Another benefit of microservices mentioned by Phillips is that smaller codebases help developers focus and build a closer, more empathetic relationship with the users of their product, leading to better motivation and clarity in their work. The closer relationship with users shortens the feedback loop, revealing faster which features should be implemented and which defects have appeared.

The panelists also addressed the issues related to turning a monolithic application into a microservices-based one. Gruver suggested: don’t re-architect everything, but start small by finding a meaningful business feature that would benefit from being implemented as a microservice, implement it, and test the system against it. Check how it integrates with the rest of the process and how continuous integration works for it. If it works well, remove the old code and start using the new service. The litmus test for success is whether the organization can move faster by developing and releasing microservices independently. If that goal is not achieved, then there is no point in introducing a microservice architecture.

Shoup shared his experience at eBay, which re-architected its system by ordering the functionality based on revenue, starting with the most profitable feature and working down through features with lower financial impact. Some functionality may never get re-architected because its impact is so small that it is not worth doing. When re-architecting a component, the first step is to separate it from the rest of the system by introducing an interface, then changing the implementation behind the interface as desired. The process can then be repeated with other components.
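The interface-first step Shoup describes might look like this in practice (a minimal Python sketch with hypothetical names such as `CatalogService`; this is not eBay's actual code):

```python
from abc import ABC, abstractmethod

# Step 1: introduce an interface in front of the legacy component,
# so callers depend on the abstraction rather than the implementation.
class CatalogService(ABC):
    @abstractmethod
    def get_item(self, item_id: str) -> dict: ...

# The existing monolith code, wrapped behind the new interface.
class LegacyCatalog(CatalogService):
    def __init__(self, db):
        self._db = db  # the monolith's shared data structure

    def get_item(self, item_id):
        return self._db[item_id]

# Step 2: swap in the new implementation without touching callers.
class CatalogMicroservice(CatalogService):
    def __init__(self, client):
        self._client = client  # e.g. an HTTP client for the new service

    def get_item(self, item_id):
        return self._client.get(f"/items/{item_id}")
```

Callers written against `CatalogService` keep working unchanged while the implementation behind the interface is replaced, which is what makes the process repeatable component by component.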

Phillips considered it important to have a suite of tests when re-architecting a component, running them against the new interface both before and after the new implementation is in place, to make sure the code changes do not modify the system’s behavior.
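Such a suite can be parameterized over the implementation so that the very same checks run before and after the swap (an illustrative Python sketch; the pricing lookup and its names are hypothetical):

```python
# A behavioral suite written against the interface, not the implementation.
def behavioral_suite(get_price):
    assert get_price("book") == 10       # known item
    assert get_price("unknown") is None  # missing item

# Old implementation: a direct lookup inside the monolith.
PRICES = {"book": 10}

def legacy_get_price(sku):
    return PRICES.get(sku)

# New implementation behind the same interface; in reality this would
# call the new microservice, simulated here with the same data.
def microservice_get_price(sku):
    return PRICES.get(sku)

behavioral_suite(legacy_get_price)        # green before the swap
behavioral_suite(microservice_get_price)  # still green after the swap
```

If both runs pass, the re-architecture has preserved the observable behavior Phillips is concerned about.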

Gruver added that he would not start a re-architecting process without having a suite of automated tests. Before re-architecting anything or implementing microservices, make sure you understand what business problem you are trying to solve. Then set objectives and goals, and hold yourself accountable to them.

Phillips mentioned that high coupling between services is worse than duplicating data between them, so he would rather have two copies of the same data than one copy and two services that end up being merged into one. (Some consider data duplication between services an anti-pattern. See Udi Dahan on Defining Service Boundaries, Ed.)

Answering a question from the participants, Shoup said that re-architecting a monolithic application into a microservices-based one may require pulling data out of an RDBMS and moving it into NoSQL data stores. One approach is to design microservices around domain entities persisted in private NoSQL databases accessible only through the microservice’s interface. There should be only one writable copy of the data per entity, but there can be multiple cached read-only copies spread throughout the system for performance reasons.
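The single-writer rule Shoup describes can be sketched as follows (a hypothetical Python example; the service names and the push-based cache update are assumptions for illustration, not a prescribed design):

```python
class InventoryService:
    """Owns the single writable copy of the inventory entity."""

    def __init__(self):
        self._store = {}   # private datastore; only this service writes it
        self._caches = []  # read-only copies held by other services

    def subscribe(self, cache):
        # Another service registers a dict it will treat as read-only.
        self._caches.append(cache)

    def update_stock(self, sku, qty):
        self._store[sku] = qty
        for cache in self._caches:  # propagate to the cached copies
            cache[sku] = qty

    def get_stock(self, sku):
        return self._store.get(sku)


# A consuming service keeps a cached read-only copy for fast local reads,
# never writing to it directly.
checkout_cache = {}
inventory = InventoryService()
inventory.subscribe(checkout_cache)
inventory.update_stock("sku-1", 5)
```

All writes go through the owning service's interface, while consumers read from local copies that the owner keeps up to date.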

Regarding the issue of managing the large number of interconnections between services, Shoup answered that Amazon and Google do not have an API gateway or an ESB controlling the graph of relationships, because from a single service’s perspective things are very simple: a microservice does not need to be aware of all the interactions between all the other microservices, only of those with its own clients and with the services it depends upon. The relationship with a client may include authorization and possibly managing a quota of requests served per second, while the upstream relationship simply means making a request to another service. The communication channel must be standardized, though, so the services can understand each other, possible choices being RESTful web APIs, RPC, or others. There can be both a synchronous mechanism and an asynchronous one. He mentioned that Google uses for this purpose an RPC protocol which has just been open sourced.
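The per-client request quota mentioned above can be sketched as a simple fixed-window rate limiter (an illustrative Python example of one way a service might enforce a quota; this is an assumption, not how Amazon or Google implement it):

```python
import time

class RequestQuota:
    """Fixed-window limiter: at most max_per_second requests per window."""

    def __init__(self, max_per_second, clock=time.monotonic):
        self.max = max_per_second
        self.clock = clock
        self.window_start = clock()
        self.count = 0

    def allow(self):
        now = self.clock()
        if now - self.window_start >= 1.0:  # start a new one-second window
            self.window_start, self.count = now, 0
        if self.count < self.max:
            self.count += 1
            return True
        return False  # quota exhausted for this window
```

A service would consult such a limiter for each client before serving a request, rejecting or queueing calls that exceed the agreed quota.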

In terms of governance, Shoup recommended that each team be responsible for its service(s), including the operational effort involved, and have SLAs agreed both upstream and downstream establishing the functional requirements.
