
Lessons Learned Adopting Microservices at Gilt, Hailo and nearForm


This article is an extensive interview with representatives from Gilt, Hailo and nearForm on their microservices adoption process, the technologies they used, and the benefits and difficulties of implementing microservices.

If we were to consider Gartner’s Hype Cycle, microservices are perhaps just before the Peak of Inflated Expectations. There is a good number of early adopters, and microservices are well covered in the specialized media, including here on InfoQ. Successful implementations – Amazon, Google, Netflix, etc. – have demonstrated that this approach is viable and worth considering.

Today we are presenting an interview with three companies – Gilt, Hailo and nearForm – who share their experience of building a microservices platform from scratch or re-architecting a monolithic platform by gradually introducing microservices. The interviewees are: Adrian Trenaman, SVP of Engineering at Gilt, Feidhlim O'Neill, VP of Platform and Technical Operations for Hailo, and Richard Rodger, CTO of nearForm.

InfoQ:  Please tell us about the microservices adoption process in your company. Why microservices? What technologies have you used to implement them? How long did it take?

Adrian Trenaman: Adoption is high, now at approx. 300 services. Adoption was driven by an organization structure of autonomous, KPI-driven teams, supported by integrated tooling that makes it easy to create and deploy services. Adoption was also spurred by the adoption of Scala as a 'new way' to write services.

We had a number of large monolithic applications and services. It was becoming increasingly hard to innovate quickly as multiple teams committed to the same codebase and competed for test and deployment windows. Adopting a microservices architecture offered a vision of smaller, easy-to-understand units of deployment that teams could deploy at will.

We are using Scala, SBT, Zookeeper, Zeus (Riverbed) Transaction Manager, Postgres, RDS, Mongo, Java, Ruby, Backbone, Kafka, RabbitMQ, Kinesis, Akka, Actors, Gerrit, Opengrok, Jenkins, REST, and Apidoc.

The adoption process took 1.5-2 years, and is ongoing.

Feidhlim O'Neill: Hailo went through a replatforming exercise and our new platform was built from the ground up using microservices. Microservices were just emerging as a viable software architecture, and we felt they supported how we wanted to work.

We trialed a number of technologies and ultimately decided on a combination of what we knew (Cassandra, Zookeeper, etc.) and some new technologies. Selecting golang as our primary language was one of the riskiest choices but has paid off. From project kick off to the first components live was about 6 months. The full migration was around 12 months.

Richard Rodger: We are an enterprise Node.js consultancy (one of the largest!), so we were naturally drawn towards the microservice style, as it is a natural fit for the lightweight and network-friendly nature of Node.js. We began to adopt it after inviting Fred George, one of the earliest advocates, to speak at one of our meetups. We found him to be inspirational. As we began to adopt microservices, we tried out a number of approaches. In some sense, there is a tiering to the architecture, in that many adoptees are simply splitting large web apps into lots of little web apps, whereas people like Fred are going fully asynchronous for each unit of business logic. We have run all these variants in production, and what we have found is that this choice is not as important as it looks on the surface. More important is to provide a message transportation layer between services that abstracts this question away. Then you have the freedom to arrange communications between your services as appropriate, whilst ensuring that your developers do not have to worry about the transport layer, or the evils of service discovery.

We use the microservice architecture for a very simple reason: we can build better systems and deliver them more quickly. It is much easier to deal with changing requirements, before and after go-live, because you only change small pieces at a time, rather than making high-risk full redeployments. Microservices are easy to specify and test. If you think about it, they are black boxes that react to certain messages, and possibly emit certain other messages. This is a very clean interface that you can define and test very clearly. Scaling is much easier. The whole application does not have to be performant, only the parts that are the bottlenecks. And you can scale those by adding more instances of a given service, a service that, by definition, is stateless and therefore easy to scale linearly. Finally, microservices make project management so much easier. Each microservice should take about a week to write. This gives you nice easy blocks of effort to work with. If a developer makes a mess, you can, literally, genuinely, throw the code away and start again. It’s only a week of work. This means that technical debt accumulates much more slowly. And that helps you move faster.

We build using Node.js, and it really is perfect for microservices. For communication between services, we have an abstraction layer that gives us the flexibility we need. As we’ve grown, we’ve found the need to build some services in other languages. This can happen for many reasons: performance, integration, or simply the availability of talent, both internally and at our clients. We’ve defined the abstraction layer as a simple protocol so that it’s easy to add new services in other languages. Calling it a protocol is almost too much, in fact – it’s really just the exchange of JSON documents, embellished with some pattern matching. For message transport we’ve used everything from point-to-point HTTP and TCP, to web sockets, to messaging systems, even Redis pub/sub.
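The protocol Rodger describes – JSON documents routed by pattern matching – can be sketched in a few lines of Node.js. This is an illustrative toy, not nearForm's actual implementation; the `add`/`act` function names and the checkout messages are invented for the example:

```javascript
// Minimal pattern-matched message routing: a message is a plain JSON
// document, and a pattern is a subset of properties it must contain.
const handlers = [];

// Register a handler for any message whose properties include `pattern`.
function add(pattern, handler) {
  handlers.push({ pattern, handler });
}

// Find the first handler whose pattern matches the message and invoke it.
function act(msg) {
  const entry = handlers.find(({ pattern }) =>
    Object.keys(pattern).every((k) => msg[k] === pattern[k])
  );
  if (!entry) throw new Error('no handler for: ' + JSON.stringify(msg));
  return entry.handler(msg);
}

// Two "services", each owning one small unit of business logic.
add({ role: 'checkout', cmd: 'total' }, (msg) =>
  msg.items.reduce((sum, item) => sum + item.price, 0)
);
add({ role: 'checkout', cmd: 'count' }, (msg) => msg.items.length);

const items = [{ price: 10 }, { price: 5 }];
console.log(act({ role: 'checkout', cmd: 'total', items })); // 15
```

Because the caller only emits a JSON document, a service written in Go or Scala can participate simply by matching the same patterns over the wire.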

It took us about two years of learning and adoption to fully develop a high-performing approach. These days there’s so much more reference material, books, case studies and conference talks, so that time is much shorter. In fact, speaking of books, our team is writing at least two on the subject of microservices, so look out for those later this year.

InfoQ: When does it make sense to do microservices?

Adrian Trenaman:

When you can isolate a piece of domain functionality that a single service can 'own'. 

When the service can fully own read and write access to its own data store.

When multiple teams are contributing to a monolithic system but keep stepping on each other's toes.

When you want to implement continuous deployment.

When you favor an 'emergent' architecture rather than a top-down design.

Feidhlim O'Neill: We really wanted to have parallel 'mission' product development teams that were fully independent. By decomposing our business logic into hundreds of microservices we are able to sustain parallel changes across multiple business lines.

Richard Rodger: Whenever you prefer to keep your business open, even if that means losing a percentage of revenue due to errors. It’s funny how the world of enterprise software seems to glorify the notion of absolute correctness. Every database and application should only have ACID transactions. And then, when you ask the leadership of those organizations which they prefer, you find that keeping the shop doors open is much more important. For example, consumer barcodes are not always accurate – the price at the till does not match the price on the label. Supermarkets somehow seem to stay open.

The microservice architecture values availability over consistency. It keeps your site, mobile app or service up and running. There will be errors in some percentage of the data. You get to tune that percentage by increasing capacity, but you never get away from it completely. If your business can tolerate errors, then microservices are for you.

Obviously, there are systems that need to be 100% accurate. And the best way to achieve this is with large-scale (and expensive) monoliths, both in terms of software and hardware. Financial, medical, and real-time systems are obvious examples. But a large amount of software is pointlessly slow and expensive to build simply because we aren’t paying attention to business realities.

InfoQ: What are some of the difficulties in implementing microservices?

Adrian Trenaman:

Hard to replicate production into a staging environment - you need to either test in production or invest in sandbox / stage automation.

Ownership - you end up with a lot of services: as teams change, services can become orphaned.

Performance - the call stack can become complex, with cycles and redundant calls. Solve with lambda-architectural approaches.

Deployment - you need to have clear, consistent technology on how to continuously deploy software.

Client Dependencies - avoid writing service clients that pull in a large number of dependent libraries, which can lead to conflicts. Also, rolling out en-masse changes to those libraries is time-consuming.

Audit & Alerting - you need to move towards tracking and auditing business metrics rather than just low-level performance metrics. 

Reporting - having decentralized your data, your data team will probably need you to send data to the data warehouse for analysis. You need to look at real-time data transports to get the data 'out' of your service.

Accidental complexity - the complexity of the system moves out of the code and gets lost in the white space: the interconnectivity between services.

Feidhlim O'Neill: The temptation to go live before you have all the automation and tooling complete. For example, debugging 200 services without the right tools at the macro ('what changed?') and micro (trace) levels is nigh on impossible. You need to factor in automation and tooling from day one, not as an afterthought.

Richard Rodger: Not abstracting your message transportation will end in tears. Typically people start out by writing lots of small web servers, a sort of mini-SOA with JSON, and then run into problems with dependency management, and the really nasty one, service discovery. If your service needs to know where any other services are on the network, you’re heading for a world of pain. Why? You’ve just replicated a monolith, but now your function calls are HTTP calls. Not a huge win once things get big. Instead, think messages first. Your service sends messages out into the world, but does not know or care who will get them. Your service receives messages from the world, but does not know or care who sent them. It’s up to you as architect to make this a reality, but it’s not that hard. Even if you are doing point-to-point behind the scenes for performance reasons, still make sure your service code does not know this, by writing a library to serve as an abstraction layer.
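The "messages first" discipline above can be made concrete with a thin transport abstraction. The sketch below is hypothetical (not nearForm's library): service code calls only `send` and `listen`, and the transport – here an in-process fan-out standing in for HTTP, TCP or Redis pub/sub – can be swapped behind that interface without the services noticing.

```javascript
// A transport is anything with publish/subscribe. Service code never
// learns who receives its messages or who sent the ones it handles.
function inProcessTransport() {
  const subscribers = [];
  return {
    publish(msg) { subscribers.forEach((fn) => fn(msg)); },
    subscribe(fn) { subscribers.push(fn); },
  };
}

// The abstraction layer: services see only send() and listen().
function bus(transport) {
  return {
    send: (msg) => transport.publish(msg),
    listen: (fn) => transport.subscribe(fn),
  };
}

// A stock service reacts to messages without knowing their origin...
const b = bus(inProcessTransport());
const audit = [];
b.listen((msg) => {
  if (msg.role === 'stock' && msg.cmd === 'reserve') {
    audit.push(`reserved ${msg.qty} of ${msg.sku}`);
  }
});

// ...and the caller emits into the world without knowing who listens.
b.send({ role: 'stock', cmd: 'reserve', sku: 'ABC-1', qty: 2 });
console.log(audit); // [ 'reserved 2 of ABC-1' ]
```

Replacing `inProcessTransport` with a networked implementation leaves every service untouched, which is the point of the abstraction.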

InfoQ: What are the benefits of implementing them?

Adrian Trenaman:

Faster time to market.

Continuous deployment.

Easy-to-understand components (notwithstanding that the complexity sometimes just moves elsewhere in the system).

Easy to create, easy to tear down (although you need to have a 'clean shop' mentality).

Feidhlim O'Neill: Parallel continuous deployments and ease of refactoring services.

Richard Rodger: The big benefit is speed of execution. You and your team will deliver faster on an ongoing basis, because you have reduced deployment risk (it’s easy to roll back: just stop the new service!), removed the need for big refactoring (it’s only a week of code), and removed hard-coded dependencies on language platforms or even things like databases.

The other benefit is that you have less need of project management ceremony. Microservice systems suffer so much less from the standard pathologies of software development that strict development processes are not as effective in ensuring delivery. It’s easy to see why high levels of unit test coverage are a must for monoliths. Or that pair-programming is going to help. Or any of the Agile techniques. The costs of technical debt in a monolith are so much more expensive, so it makes sense for the team to be micro-managed. In the microservice world, because the basic engineering approach is just much better suited to under-specified and rapidly changing requirements, you have less need for control. Again, one week of bad code won’t kill, and you’ll see it right away.

InfoQ: How does microservices compare to a traditional SOA system?

Adrian Trenaman: For me, microservices is just taking SOA further, adapting the concept to avoid monolithic services/codebases and focus on delivering continuous innovation to production across multiple teams.

Feidhlim O'Neill: Decomposing the business logic into independent services is probably the main takeaway. Someone once described the microservice architecture to me as an SOA design pattern, and I guess that makes a lot of sense. There are lots of similarities, monolith vs. micro being the main difference.

Richard Rodger: It’s a radically different approach. There’s no concept of strict schemas. There’s an insistence on small services. There’s a recognition that the edges are smart and the network dumb, so complexity does not build up in weird places. You don’t have to deal with versioning issues. Why? You run new and old versions of a service together, at the same time, and gradually migrate over, all the while watching your performance and correctness measures. The lack of strict schemas is exactly what makes this possible.
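The side-by-side versioning Rodger describes can be sketched as a router that sends a growing fraction of traffic to the new version. This is a toy illustration with invented service names, not a production migration tool; the two tax rates are made up:

```javascript
// Old and new versions of a pricing service run at the same time.
const serviceV1 = (msg) => ({ version: 1, total: msg.amount * 1.2 });
const serviceV2 = (msg) => ({ version: 2, total: msg.amount * 1.21 }); // new tax rule

// Route a fraction of messages to the new version; dial the fraction
// up gradually while watching performance and correctness measures.
function migratingRouter(oldSvc, newSvc, fractionToNew) {
  return (msg) => (Math.random() < fractionToNew ? newSvc : oldSvc)(msg);
}

// Start with 5% of traffic on v2; roll back by setting the fraction to 0.
const route = migratingRouter(serviceV1, serviceV2, 0.05);
route({ amount: 100 });
```

Because both versions consume the same loosely structured JSON messages, no schema migration gates the cut-over.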

Gilt, Hailo, nearForm and several other companies will be present at the Microservices Day in New York on July 13, sharing their experience of adopting and running a microservices architecture.

About the Interviewees

Adrian Trenaman is VP of Engineering at Gilt. Ade is an experienced, outspoken software engineer, communicator and leader with more than 20 years of experience working with technology teams throughout Europe, the US and Asia in diverse industries such as financial services, telecom, retail, and manufacturing. He specializes in high-performance middleware, messaging and application development, and is pragmatic, hard-working, collaborative and results-oriented. In the past, he has held the positions of CTO of Gilt Japan, Tech Lead at Gilt Groupe Ireland, Distinguished Consultant at FuseSource, Progress Software and IONA Technologies, and Lecturer at the National University of Ireland in Maynooth. He became a committer for the Apache Software Foundation in 2010, has acted as an expert reviewer to the European Commission, and has spoken at numerous tech events. Ade holds a Ph.D. in Computer Science from the National University of Ireland, Maynooth, a Diploma in Business Development from the Irish Management Institute, and a BA (Mod. Hons) in Computer Science from Trinity College, Dublin.


Richard Rodger is a technology entrepreneur who has been involved in the Irish Internet industry since its infancy. Richard founded an internet start-up in 2003. He subsequently joined the Telecommunication Software and Systems Group (TSSG) and became CTO of one of its successful spin-off companies, FeedHenry Ltd. More recently, he became CTO and founder of nearForm. Richard holds degrees in Computer Science (WIT), and Mathematics and Philosophy (Trinity College Dublin). Richard is a regular conference speaker and a thought leader on system architectures using Node.js. Richard is the author of Mobile Application Development in the Cloud (Wiley). He tweets at @rjrodger.


Feidhlim O'Neill has spent over 20 years working in a variety of tech companies, in the UK and US, from startups to Nasdaq-100 companies. He spent 10 years at Yahoo! in a variety of senior positions in service and infrastructure engineering. Feidhlim works at Hailo, where he oversees their new golang microservices platform built on AWS.
