Application Integration for Microservices Architectures: A Service Mesh Is Not an ESB

Key Takeaways

  • Integration of APIs, services, data, and systems has long been one of the most challenging yet most essential requirements in the context of enterprise software application development.
  • We used to integrate all of these disparate applications in point-to-point style, which was later replaced by the ESB (Enterprise Service Bus) style, alongside the Service Oriented Architecture (SOA). 
  • As the popularity of microservices and "cloud native" architectures grows, the concept of a "service mesh" has emerged. The key idea with a service mesh is to keep all the business logic code as part of the service, while offloading the network communication logic to the inter-service communication infrastructure.
  • Since a service mesh offers some of the capabilities that are part of ESBs, there is a misconception that this is a distributed ESB, which also takes care of application integration. That is not correct. 
  • A service mesh is only meant to be used as infrastructure for communicating between services, and developers should not be building any business logic inside the service mesh. Other frameworks and libraries can be used to implement cloud-native enterprise application integration patterns.

Integration of APIs, services, data, and systems has long been one of the most challenging yet most essential requirements in the context of enterprise software application development.

We used to integrate all of these disparate applications in point-to-point style, which was later replaced by enterprise service buses (ESBs) alongside service-oriented architecture (SOA).

However, in modern microservices and cloud-native architecture, we barely talk about application integration anymore. That doesn’t mean that all these modern architectures have solved all the challenges of enterprise application integration.

Application integration challenges have stayed pretty much the same, but the way we solve them has changed.

From ESB to smart endpoints and dumb pipes

Most enterprises that adopted SOA used an ESB as the central bus to connect and integrate all the disparate APIs, services, data, and systems.

If a given business use case required talking to different entities in the organization, it was the job of the ESB to plumb all these entities and create composite functionality.

Hence, ESB solutions are typically powerhouses of built-in integration capabilities, such as connectors to disparate systems and APIs, message routing, transformation, resilient communication, persistence, and transactions.

Figure 1: Using ESB for integration

However, microservices architecture replaces the ESB with smart endpoints and dumb pipes, meaning that your microservices have to take care of all the application integration themselves.

The obvious tradeoff of decentralizing the smart ESB is that the code complexity of your microservices increases dramatically, as they have to cater to these application integrations in addition to the service's business logic. For example, figure 2 shows several microservices (B, F, and G) that act as smart endpoints, containing both the logic for communicating with multiple other services and their own business logic.

Figure 2: Microservices inter-service communication and composition

One of the other challenges with microservices architecture is how to build the commodity features that are not part of the service's business logic, such as resilient communication, transport-level security, and publishing statistics and tracing data to an observability tool. Each service has to support these commodity features as part of its service logic. Implementing all of this in every microservice is overwhelmingly complex, and the effort required grows further when our microservices are written in multiple (polyglot) languages. A service mesh can solve this problem.
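
To make that burden concrete, here is a minimal sketch, in plain Java (JDK 11+ java.net.http), of the kind of hand-rolled resilience code that ends up inside each service in the absence of a mesh. The service name inventory-service, the endpoint path, and the retry policy are illustrative assumptions, not part of any particular framework.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    // Hand-rolled resilience inside a service: retries with backoff and timeouts
    // for a call to a peer service. "inventory-service" is an assumed name.
    public class InventoryClient {

        private final HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        public String fetchStock(String itemId) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://inventory-service/stock/" + itemId))
                    .timeout(Duration.ofSeconds(3))
                    .GET()
                    .build();

            int attempts = 0;
            while (true) {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    if (response.statusCode() == 200) {
                        return response.body();
                    }
                    throw new RuntimeException("Unexpected status " + response.statusCode());
                } catch (Exception e) {
                    attempts++;
                    if (attempts >= 3) {
                        throw e;                   // give up after three attempts
                    }
                    Thread.sleep(200L * attempts); // crude linear backoff between retries
                }
            }
        }
    }

Multiply this by every outbound call, every commodity concern (security, metrics, tracing), and every implementation language in a polyglot system, and the scale of the duplication becomes clear.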

Figure 3: A service mesh in action

The key idea of a service mesh is to keep all the business-logic code as part of the service while offloading the network-communication logic to the inter-service communication infrastructure. When using a service mesh, a given microservice won’t directly communicate with the other microservices. Rather, all service-to-service communications will take place via an additional software component, running out of process, called the service-mesh proxy or sidecar proxy. A sidecar process is colocated with the service in the same virtual machine (VM) or pod (Kubernetes). The sidecar-proxy layer is known as the data plane. All these sidecar proxies are controlled via the control plane. This is where all the configuration related to inter-service communications is applied.
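
For contrast, a sketch of the same call when a sidecar proxy is in place (again with an assumed service name): the service addresses its peer by its logical name and makes a plain call, while retries, timeouts, mutual TLS, and telemetry are applied by the data plane according to control-plane configuration rather than being coded in the service.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // With a sidecar proxy handling the commodity concerns, the service code
    // is reduced to a plain call to the logical service name; resilience,
    // security, and telemetry are configured in the mesh, not written here.
    public class InventoryClientWithMesh {

        private final HttpClient client = HttpClient.newHttpClient();

        public String fetchStock(String itemId) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://inventory-service/stock/" + itemId))
                    .GET()
                    .build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }
    }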

A service mesh is not for application integration

Since a service mesh offers some of the capabilities that are part of ESBs, there is a misconception that it is a distributed ESB that also takes care of application integration. That is not correct. A service mesh is only meant to be used as infrastructure for communicating between services, and we shouldn’t be building any business logic inside it. Suppose that you have three microservices called X, Y, and Z, which communicate in request/response style with X talking to both Y and Z in order to implement its business functionality (see figure 4). The composition business logic should be part of microservice X’s code, and the service mesh sidecar shouldn’t contain anything related to that composition logic.
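
As a rough illustration (the service names, paths, and response format here are hypothetical), the composition logic that belongs inside microservice X might look like the following. Everything in it is business-level code for X; none of it should migrate into the sidecar.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch of the composition logic inside microservice X: call Y and Z in
    // request/response style and combine the two results into one response.
    public class ServiceX {

        private final HttpClient client = HttpClient.newHttpClient();

        public String composeOrderView(String orderId) throws Exception {
            String order   = get("http://service-y/orders/" + orderId);
            String payment = get("http://service-z/payments/" + orderId);
            // Business-level composition: this combination rule is X's logic,
            // not something a sidecar proxy should know about.
            return "{ \"order\": " + order + ", \"payment\": " + payment + " }";
        }

        private String get(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .GET()
                    .build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }
    }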

Figure 4: Service composition logic versus service mesh

Similarly, for any service that uses event-driven communication, the service code should handle all the business-logic details (and it’s also worth mentioning that service-mesh implementations are yet to fully support event-driven architecture). So, even if we run our microservices or cloud-native applications on top of a service mesh, the integration of those services or applications is still essential. Application integration is one of the most critical yet largely concealed requirements in the modern microservices and cloud-native architecture era.

Integration in microservices and cloud-native apps

In the context of microservices and cloud-native apps, application integration, or building smart endpoints, is all about integrating microservices, APIs, data, and systems. These integration requirements range from integrating several microservices to integrating with monolithic subsystems to create anti-corruption layers. A closer look at application-integration requirements in microservices and cloud-native applications reveals the following key capabilities that we need in an application-integration framework:

  • The integration runtime must be cloud native, able to run smoothly within Docker/Kubernetes and provide seamless integration with the cloud-native ecosystem.
  • It needs service orchestration/active compositions, so that a given service contains the logic that invokes multiple other services to compose a business functionality.
  • It needs service choreography/reactive compositions, so that inter-service communication takes place via asynchronous event-driven communication and no central service contains the service-interaction logic.
  • It must have built-in abstractions for a wide range of messaging protocols (HTTP, gRPC, GraphQL, Kafka, NATS, AMQP, FTP, SFTP, WebSockets, TCP).
  • It must support forking, joining, splitting, looping, and aggregation of messages or service calls (a minimal fork/join sketch follows this list).
  • It needs store-and-forward, persistent delivery, and idempotent messaging techniques.
  • It must have message-type mapping and transformations.
  • It must integrate with SaaS (e.g., Salesforce), proprietary (e.g., SAP), and legacy systems.
  • It must support business-logic-oriented routing of messages.
  • It must support distributed transactions with compensations.
  • It must support long-running workflows.
  • It must provide anti-corruption layers to bridge microservices and monolithic subsystems.
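
As a small sketch of the forking, joining, and aggregation capability referenced in the list above, the following uses only JDK facilities (java.net.http and CompletableFuture); the quote-service endpoints and the JSON-array aggregation are assumptions. A dedicated integration framework would offer this as a first-class scatter-gather construct instead of hand-written plumbing.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    // Fork several service calls, join the results, and aggregate them.
    // The quote-service endpoints are hypothetical.
    public class QuoteAggregator {

        private final HttpClient client = HttpClient.newHttpClient();

        public String aggregateQuotes(String productId) {
            List<String> endpoints = List.of(
                    "http://quote-service-a/quotes/" + productId,
                    "http://quote-service-b/quotes/" + productId,
                    "http://quote-service-c/quotes/" + productId);

            // Fork: fire all requests in parallel.
            List<CompletableFuture<String>> calls = endpoints.stream()
                    .map(url -> client.sendAsync(
                            HttpRequest.newBuilder().uri(URI.create(url)).GET().build(),
                            HttpResponse.BodyHandlers.ofString())
                            .thenApply(HttpResponse::body))
                    .collect(Collectors.toList());

            // Join and aggregate: wait for every response and merge the bodies.
            return CompletableFuture.allOf(calls.toArray(new CompletableFuture[0]))
                    .thenApply(v -> calls.stream()
                            .map(CompletableFuture::join)
                            .collect(Collectors.joining(",", "[", "]")))
                    .join();
        }
    }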

All these capabilities are common in any microservices or cloud-native application, but building them from scratch can be a daunting task. This is why it's really important to carefully analyze these integration capabilities when we build microservices or cloud-native applications, and to pick the right technology or framework based on the integration requirements. For example, if we need to build a service that has complex orchestration logic, then we should select the integration framework or technology that makes it easy to write those kinds of compositions. If we want to build a service that is long running and has compensation capabilities, then we need to select a framework that has built-in support for workflows and compensations (in the InfoQ article "Events, Flows and Long-Running Services: A Modern Approach to Workflow Automation", Martin Schimak and Bernd Rücker provide a great in-depth analysis of the current state of workflow technologies for cloud-native architectures).

Although application integration has largely been neglected by most microservices experts, authors such as Christian Posta (former chief architect at Red Hat and field CTO at Solo.io) have emphasized its importance, for example in Posta's blog post "Application Safety and Correctness Cannot Be Offloaded to Istio or Any Service Mesh". Bilgin Ibryam has written about how application-integration architecture has evolved from SOA to cloud-native architecture in his InfoQ article "Microservices in a Post-Kubernetes Era", in which he emphasizes the decentralization of application integration in cloud-native architecture and how application integration is being built on top of the service mesh.

Development and integration in the CNCF landscape

The Cloud Native Computing Foundation (CNCF) is at the forefront of building microservices and cloud-native applications, and it aims to build sustainable ecosystems and foster a community around a constellation of high-quality projects that orchestrate containers as part of a microservices architecture. The CNCF hosts projects composed of open-source technologies and frameworks that can implement different aspects of microservice or cloud-native architecture. It is interesting to see where these application-integration technologies fit into their technology stack.

The CNCF's recommended path through the cloud-native landscape has an App Definition and Development section, but no category dedicated to application integration. Given the importance of application integration, however, we could see it enter the CNCF landscape in the future. Figure 5 includes application-integration technologies under App Definition and Development.

Figure 5: Application integration in a future CNCF landscape

Technologies for application integration

Although there are quite a lot of monolithic application-integration technologies available, most are not suitable for cloud-native or microservices architectures. Only a handful of existing integration providers have implemented cloud-native variants of their products and tools.

There are dedicated integration frameworks that facilitate all of the common integration patterns in the application-integration space. Most of these technologies are inherited from conventional ESB-based integration, but they have been adapted to work natively in cloud-native architectures:

  • Apache Camel/Camel-K is one of the most popular open-source integration frameworks, and the Camel-K project offers seamless support for the Kubernetes ecosystem on top of the Camel runtime (a minimal route sketch follows this list).
  • WSO2 Micro Integrator is a cloud-native variant of the open-source WSO2 Enterprise Integrator platform. Micro Integrator offers a lightweight integration runtime that natively works in the Kubernetes ecosystem.
  • Although Spring Integration doesn’t have a dedicated runtime to work on Kubernetes, it works well for building application integrations in cloud-native architecture.
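
To give a flavor of this category, here is a minimal, hypothetical Apache Camel route written in the Java DSL, assuming the Camel core runtime and its timer and log components are on the classpath. A real integration would replace the timer and log endpoints with protocol components such as HTTP, Kafka, or file endpoints.

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    // A minimal Camel route: trigger on a timer, set a message body, and log it.
    public class MinimalCamelRoute {

        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("timer:poll?period=5000")           // fire every 5 seconds
                            .setBody(constant("order-sync tick"))
                            .to("log:integration");          // write to the log component
                }
            });
            context.start();
            Thread.sleep(20000);                             // let the route run briefly
            context.stop();
        }
    }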

Some of the application development frameworks also cater to the application-integration requirements:

  • Spring Boot is not an integration framework per se but it has many substantial capabilities required for application integration.
  • Vert.x is a toolkit for building reactive cloud-native applications, which can also be used for application integration.
  • Micronaut is a modern, JVM-based, full-stack framework for building modular and easily testable microservice and serverless applications. There are quite a few integration abstractions built into the framework, and it avoids the complexities of conventional frameworks such as Spring.
  • Programming languages such as Go, JavaScript/Node.js, etc., have certain application-integration features built in or available as libraries. There are emerging new languages such as Ballerina that offer integration abstractions as part of the language.
  • Quarkus is a new Kubernetes-native Java stack that has been tailored for GraalVM and OpenJDK HotSpot, assembled from the best-of-breed Java libraries and standards. It’s a combination of multiple application development libraries such as RESTeasy, Camel, Netty, etc.

Conclusion

With the decomposition of monolithic applications into microservices and cloud-native applications, the need to connect these applications is becoming increasingly challenging. The services and applications are dispersed across the network and connected via disparate communication structures. Realizing any business use case requires integrating the microservices, which needs to be done as part of the service implementation logic. As a result, cloud-native application integration is one of the most critical yet largely concealed requirements in the modern era of microservices and cloud-native architecture.

The service-mesh pattern overcomes some of the challenges of integrating microservices, but it only offers the commodity features of inter-service communication, which are independent of the business logic of the service; any application-integration logic related to the business use case should therefore still be implemented at each service level. Accordingly, it is important to select the most appropriate development technology for building integration-savvy services and to minimize the development time required to weave services together. Several frameworks and technologies are emerging to fulfill these application-integration needs in the cloud-native landscape, and we need to evaluate them against each specific use case.

About the Author

Kasun Indrasiri is the director of Integration Architecture at WSO2 and is an author/evangelist on microservices architecture and enterprise-integration architecture. He wrote the books Microservices for the Enterprise (Apress) and Beginning WSO2 ESB (Apress). He is an Apache committer and has worked as the product manager and an architect of WSO2 Enterprise Integrator. He has presented at the O'Reilly Software Architecture Conference, GOTO Chicago 2019, and most WSO2 conferences. He attends most of the San Francisco Bay Area microservices meetups and founded the Silicon Valley Microservices, APIs, and Integration meetup, a vendor-neutral microservices meetup in the Bay Area.

Community comments

  • BPM

    by Andy Leung,

    Not sure if you know: a service mesh is basically an unorganized way of routing business logic around. We are putting heavy lifting into each sidecar; not only is it not easy to maintain, it is also not easy to understand the system from the start, because the sidecar controls not only how a service consumes other services but also how it is consumed. That's exactly why, back in 2005 or earlier, to make ESB and SOA truly successful, we needed to group this inter-service communication logic together as business processes, because we can't know the system without understanding the business. And thus BPM was integrated with the ESB. A BPM drives business logic at a high level and leaves all service-centric processing to each service node attached to the ESB. But it failed because each node attached to the ESB was designed to be big and heavy. And most of the time the ESB was based on SOAP back then, so performance was never great. To improve it, simply design each service to be small and adopt a RESTful ESB.

  • Good Premise, Not So Sure About Conclusion

    by Andy Hitchman,

    The service mesh should support integration between services, but I don't see why it needs to handle varied protocols, or the higher order messaging patterns you describe, or even transactions.

    How about keeping it simple? Keep all behaviour in your service. Use a library like 0MQ for messaging. Draw your config from k8s, or etcd, or similar.

  • Re: BPM

    by Kasun Indrasiri,

    I think there is a lot of misconception around the service mesh concept. As I've mentioned in the article above, a service mesh is not meant to be used to distribute business logic (so what you have stated here is incorrect in my view). Rather, it should be used as the communication infrastructure between the services. All the logic related to the business use case is implemented as part of the service logic itself.
    Regarding BPM and ESB: BPM is mostly used for long-running, stateful processes, while ESB logic was primarily stateless business logic (orchestrations). The problem with ESB or BPM is the centralization of the business logic into a monolithic bus. With microservices, we break that centralized business logic into multiple services and they communicate with each other over the network. A service mesh is an infrastructure that helps you simplify that inter-service communication.

  • Re: Good Premise, Not So Sure About Conclusion

    by Kasun Indrasiri,

    Yeah, proxying messages flowing through different protocols may sound like overkill. In fact, most of the existing service-mesh implementations only support protocols such as HTTP, gRPC, etc. Other protocols such as AMQP, Kafka, etc. are not supported by the service mesh, and they are meant to be handled in the way that you have mentioned. However, that makes life a bit harder for developers, as they need to support all the built-in capabilities of the service mesh (e.g., metrics, tracing, resiliency) as part of the code they write to integrate with a given protocol that the service mesh doesn't support.
