How Unnecessary Complexity Gave the Service Mesh a Bad Name

Key Takeaways

  • There is immense value in adopting a service mesh, but it must be done in a lightweight manner to avoid unnecessary complexity.
  • Take a pragmatic approach when implementing a service mesh by aligning with the core features of the technology, and watching out for distractions.
  • Some of the core features of service mesh include standardized monitoring, automatic encryption and identity, smart routing, reliable retries, and network extensibility.
  • Service meshes can offer powerful features, but these can be distractions from the core benefits and should not be seen as primary reasons to implement a service mesh.
  • Some notable distractions that may not be necessary for your initial implementation include complex control planes, multi-cluster support, Envoy, WASM, and A/B Testing.
     

Service meshes are a hot topic in the world of Kubernetes, but many would-be adopters have left disillusioned. Service mesh adoption has been limited by overwhelming complexity and a seemingly endless array of vendor solutions. After navigating this space myself, I've found that there is immense value in adopting a service mesh, but it must be done in a lightweight manner to avoid unnecessary complexity. And despite the general disillusionment, the future of the service mesh remains bright.

Learning on the Job

My entry into the world of service meshes started with my role as a cloud architect at a long-standing Fortune 500 technology company. At the beginning of our service mesh journey, I had the benefit of many strong engineers by my side, but most had little to no experience in cloud development. Our organization was born before the cloud, and it was taking time to fully realize the cloud’s value. Our legacy lines of business primarily focused on the hardware elements of the technology stack, and cloud decisions were initially driven by the processes developed to ship hardware or to deliver firmware and drivers for this hardware. 

As this organization went through its “digital transformation,” it became increasingly reliant on delivering high-quality software services and gradually developed better methodologies. But as the cloud architect, I was still navigating business processes that prioritized hardware, and engineering teams with disparate skill sets, processes, and beliefs. In time, my team and I became proficient and successful at migrating .NET applications to Linux, adopting Docker, moving to AWS, and following the best practices that go along with these changes, such as continuous integration, automated deployments, immutable infrastructure, infrastructure as code, and monitoring. But challenges still existed.

During this time we started to split our application into a set of microservices. At first, this was a slow transformation, but eventually the approach caught on and developers started to prefer building new services over adding to existing ones. Those of us on the infrastructure team saw this as a success. The only problem was that the number of networking-related issues was skyrocketing; the developers were looking to us for answers, and we weren't ready to effectively combat the onslaught.

The Service Mesh to the Rescue

I first heard of the service mesh in 2015 when I was tinkering with service discovery tools and looking for easy ways to integrate with Consul. I loved the idea of offloading app responsibilities to a “sidecar” container and found some tools that could help do this. Around this time, Docker had a feature called “linking” that let you place two applications in a shared networking space so they could communicate via localhost. This feature provided an experience similar to what we now have inside a Kubernetes pod: Two services, built independently, could be composed at deployment time to enable some additional capabilities.

I have always jumped at the opportunity to solve big problems with simple solutions, and the power of these new capabilities struck me immediately. While the tooling was built to integrate with Consul, in practice it could do anything you wanted. It was a layer of the infrastructure that we owned and could use to solve problems once for everyone.

One concrete example of this came early in our adoption process. At the time, we were working on standardizing logging output across many different services. By adopting service mesh and this new design pattern, we were able to trade our people problem—getting devs to standardize their logs—for a technical problem—passing all service traffic through a proxy that could do the logging for them. This was a major step forward for our team.
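
To make the pattern concrete, here is a minimal sketch of the idea in Go, not our actual implementation: a sidecar proxy that forwards traffic to the co-located application and emits one standardized log line per request. The ports and log fields are illustrative assumptions.

    // sidecar.go: a minimal logging-sidecar sketch (illustrative only).
    // Assumes the application listens on localhost:8080 and the sidecar
    // is exposed on :9090; both ports are hypothetical.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    func main() {
        app, err := url.Parse("http://127.0.0.1:8080") // the co-located application
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(app)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            proxy.ServeHTTP(w, r)
            // One standardized log line per request, emitted by the
            // infrastructure layer instead of by each application team.
            log.Printf("method=%s path=%s duration=%s", r.Method, r.URL.Path, time.Since(start))
        })

        log.Fatal(http.ListenAndServe(":9090", handler))
    }

Because the proxy sits in front of every service, the log format is decided once, at the infrastructure layer, rather than negotiated with each team.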

Our implementation of a service mesh was very pragmatic and aligned well with the core features of the technology. However, much of the marketing hype can focus on lesser-needed edge cases, and it is important to be able to identify those distractions when evaluating if a service mesh is right for you.

Core Features

The core features that a service mesh can deliver fall into four key areas of responsibility: observability, security, connectivity, and reliability. These features include:

Standardized Monitoring

One of the biggest wins we achieved, and the simplest to adopt, was standardized monitoring. It has a very low operational cost and can be made to fit into whatever monitoring system you are using. It enables organizations to capture all of their HTTP or gRPC metrics and store them in a standard way across the system. This controls complexity and alleviates burdens on application teams, which no longer need to implement Prometheus metric endpoints or standardize log formats. It also enables users to get an unbiased view of their application's golden signals.
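
To give a sense of what “standardized” means here, the sketch below records the same request counter and latency histogram for every service the proxy fronts, using the Prometheus Go client. The metric names and port are illustrative assumptions, not those of any particular mesh.

    // metrics.go: a sketch of uniform golden-signal metrics at the proxy layer.
    package main

    import (
        "net/http"
        "time"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // Hypothetical metric names; every service gets these for free.
    var (
        requests = promauto.NewCounterVec(prometheus.CounterOpts{
            Name: "proxy_http_requests_total",
            Help: "Requests handled by the proxy.",
        }, []string{"path"})
        latency = promauto.NewHistogramVec(prometheus.HistogramOpts{
            Name: "proxy_http_request_duration_seconds",
            Help: "Request latency as observed by the proxy.",
        }, []string{"path"})
    )

    // instrument wraps any handler with the standard metrics.
    func instrument(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            next.ServeHTTP(w, r)
            requests.WithLabelValues(r.URL.Path).Inc()
            latency.WithLabelValues(r.URL.Path).Observe(time.Since(start).Seconds())
        })
    }

    func main() {
        http.Handle("/metrics", promhttp.Handler()) // scraped by Prometheus
        http.Handle("/", instrument(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok")) // stand-in for the proxied application
        })))
        http.ListenAndServe(":9090", nil)
    }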

Automatic Encryption and Identity

Certificate management is very hard to get right. If an organization hasn't already invested in this, they should find a mesh to do it for them. Certificate management requires the maintenance of complex infrastructure code with huge security implications. Meshes, by contrast, integrate with orchestration systems to establish the identity of each workload, which can then be used to enforce policy when needed. This allows for a very strong security posture, equivalent to or better than that delivered by a feature-rich CNI like Calico or Cilium.
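
For a sense of what the mesh automates away, here is a bare sketch of the server side of mutual TLS in Go. A mesh provisions, rotates, and binds these certificates to workload identity automatically; none of that appears in this fragment, and the file paths are placeholders.

    // mtls.go: the kind of TLS plumbing a mesh handles automatically.
    // Certificate paths are placeholders; issuance, rotation, and identity
    // verification are all left out, which is exactly the hard part.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        caPEM, err := os.ReadFile("ca.pem") // trust anchor for client certs
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over mTLS\n"))
        })

        server := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                ClientCAs:  pool,
                ClientAuth: tls.RequireAndVerifyClientCert, // enforce client identity
            },
        }
        // Serve with the workload's own cert and key (placeholder paths).
        log.Fatal(server.ListenAndServeTLS("cert.pem", "key.pem"))
    }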

Smart Routing

Smart routing is another feature that enables meshes to “do the right thing” when sending requests. Use cases include:

  1. Optimizing traffic using a latency weighting algorithm
  2. Topology-aware routing to increase performance and reduce costs
  3. Timing out a request based on the likelihood it will succeed
  4. Integrating with orchestration systems for IP resolution instead of relying on DNS
  5. Transport upgrading, such as HTTP to HTTP/2

These features may not strike the average person as exciting, but they fundamentally add value over time.
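
To illustrate the first use case in the list above, here is a hedged sketch of latency-weighted endpoint selection, combining an exponentially weighted moving average (EWMA) of observed latency with a “power of two choices” pick. Real meshes refine this considerably; the addresses and constants are made up.

    // balancer.go: a sketch of latency-weighted endpoint selection.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    type endpoint struct {
        addr string
        ewma float64 // exponentially weighted moving average latency, in ms
    }

    // observe folds a new latency sample into the endpoint's EWMA.
    func (e *endpoint) observe(latency time.Duration, alpha float64) {
        sample := float64(latency.Milliseconds())
        e.ewma = alpha*sample + (1-alpha)*e.ewma
    }

    // pick chooses two endpoints at random and returns the lower-latency
    // one; "power of two choices" keeps selection cheap but latency-aware.
    func pick(eps []*endpoint) *endpoint {
        a, b := eps[rand.Intn(len(eps))], eps[rand.Intn(len(eps))]
        if a.ewma <= b.ewma {
            return a
        }
        return b
    }

    func main() {
        eps := []*endpoint{
            {addr: "10.0.0.1:8080", ewma: 12},
            {addr: "10.0.0.2:8080", ewma: 45},
            {addr: "10.0.0.3:8080", ewma: 20},
        }
        eps[1].observe(30*time.Millisecond, 0.3) // fold in a fresh sample
        for i := 0; i < 5; i++ {
            fmt.Println("routing to", pick(eps).addr)
        }
    }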

Reliable Retries

Retrying requests in distributed systems can be cumbersome; however, it is almost always required. Distributed systems often convert one client request into many more requests downstream, which greatly increases the likelihood of “tail” scenarios such as anomalous failed requests. The simplest mitigation is to retry failed requests.

The difficulty comes from avoiding “retry storms” or a “retry DDoS,” in which a system in a degraded state triggers retries, increasing load and further decreasing performance as the retries pile up. A naive implementation won't take this scenario into account, as doing so may require integrating with a cache or other communication system to know whether a retry is worth performing. A service mesh can avoid it by placing a bound on the total number of retries allowed throughout the system, often called a retry budget. The mesh can also report on these retries as they occur, potentially alerting you to system degradation before your users even notice.
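
As a sketch of how such a bound works, the fragment below implements a simplified retry budget: retries are permitted only while they remain a small fraction of recent traffic, so a degraded system cannot amplify its own load. The ratio and the accounting here are illustrative assumptions, not any particular mesh's algorithm.

    // retrybudget.go: a simplified retry budget; thresholds are illustrative.
    package main

    import (
        "fmt"
        "sync"
    )

    type retryBudget struct {
        mu       sync.Mutex
        requests int     // total requests seen in the current window
        retries  int     // retries issued in the current window
        ratio    float64 // max retries as a fraction of requests, e.g. 0.2
    }

    // onRequest records a normal request.
    func (b *retryBudget) onRequest() {
        b.mu.Lock()
        defer b.mu.Unlock()
        b.requests++
    }

    // canRetry permits a retry only while the budget holds, preventing
    // retry storms when the whole system is degraded.
    func (b *retryBudget) canRetry() bool {
        b.mu.Lock()
        defer b.mu.Unlock()
        if float64(b.retries) < b.ratio*float64(b.requests) {
            b.retries++
            return true
        }
        return false
    }

    func main() {
        budget := &retryBudget{ratio: 0.2}
        for i := 0; i < 10; i++ {
            budget.onRequest()
        }
        for i := 0; i < 5; i++ {
            fmt.Println("retry allowed:", budget.canRetry()) // only 2 of 5 pass
        }
    }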

Network Extensibility

Perhaps the best attribute of a service mesh is its extensibility. It offers an added layer of adaptability to take on whatever IT throws at it next. The design pattern of sidecar proxies is another exciting and powerful feature, even if it is sometimes oversold and over-engineered to do things users and tech aren’t quite ready for. While the community waits to see which service mesh “wins,” a reflection of the over-hyped orchestration wars before it, we will inevitably see more purpose-built meshes in the future and, likely, more end-users building their own control planes and proxies to satisfy their use cases.

Service Mesh Distractions

The value of a platform or infrastructure-controlled layer cannot be overstated. However, navigating the service mesh world taught me that one major challenge to entry is simply that the core problems a service mesh solves are often not even a focal point of the communication from most service mesh projects! 

Instead, much of the communication from service mesh projects is around features that sound powerful or exciting but are, in the end, distractions. This includes:

Powerful (read: “complex”) Control Planes

It is incredibly difficult to run complex software well, which is why so many organizations use the cloud to offload that burden to fully managed services. So why would a service mesh project make us responsible for operating such complex systems? The complexity of a system is not an asset; it's a liability. Yet most projects tout their feature set and configurability.

Multi-cluster Support

Multi-cluster is a hot topic right now, and eventually most teams will be running multiple Kubernetes clusters. The major pain point of multi-cluster is that your Kubernetes-managed network is cut in two. A service mesh helps address this Kubernetes scale-out issue, but it doesn't ultimately enable anything new. Yes, multi-cluster support is necessary, but its promises vis-à-vis the service mesh are over-promoted.

Envoy

Envoy is a great tool, but it’s being presented as some sort of standard, which is problematic. Envoy is one of many out-of-the-box proxies you could base a service mesh platform on. But there is nothing inherently special about Envoy that makes it the right choice. Adopting Envoy opens a set of significant questions for your organization, including:

  • Runtime cost and performance (all those filters add up!)
  • Compute resource requirements and how that scales with load
  • How to debug errors or unexpected behavior
  • How your mesh interacts with Envoy and what the configuration lifecycle is
  • Time to operational maturity (this may take longer than you expect)

The choice of proxy in a service mesh should be an implementation detail, not a product requirement.

WASM

I am a huge fan of WebAssembly (WASM), having used it successfully to build frontend apps in Blazor. However, using WASM to customize service mesh proxy behavior saddles you with a brand-new software lifecycle that is completely orthogonal to your existing one. If your organization is not ready to build, test, deploy, maintain, monitor, roll back, and version code that affects every request running through its system, then you're not ready for WASM.

A/B Testing

I didn't realize until it was too late that A/B testing is actually an application-level concern. It's fine to provide primitives at the infrastructure layer to enable it, but there is no easy way to fully automate the level of A/B testing most organizations need. Oftentimes the application will need to define custom metrics that constitute a positive signal for the test. If an organization wants to invest in A/B testing at the service mesh level, here is what a solution will need to support:

  1. Fine control over deployment and rollback since it’s likely multiple different “tests” will be going on at the same time
  2. Ability to capture custom metrics that the system is aware of and can make decisions based on
  3. Exposing controls for the direction of traffic based on characteristics of the request, which may include parsing the entire request body

This is a lot to implement and no service mesh does this out of the box. Ultimately our organization opted for a feature-flagging solution outside of the mesh, which accomplished this with great success and minimal effort.
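
For contrast, here is a hedged sketch of what the application-level alternative can look like: a deterministic cohort check in the request path, with the experiment percentage standing in for a real feature-flag provider. The header name and bucketing scheme are hypothetical, not our product's API.

    // flags.go: a sketch of application-level A/B gating via feature flags.
    package main

    import (
        "fmt"
        "hash/fnv"
        "net/http"
    )

    // inCohort deterministically buckets a user into an experiment cohort
    // by hashing the user ID, so the same user always sees the same variant.
    func inCohort(userID string, percent uint32) bool {
        h := fnv.New32a()
        h.Write([]byte(userID))
        return h.Sum32()%100 < percent
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        userID := r.Header.Get("X-User-ID") // hypothetical identity header
        if inCohort(userID, 10) {           // 10% of users get the new path
            fmt.Fprintln(w, "variant B")
            return
        }
        fmt.Fprintln(w, "variant A")
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }

Because the application owns the cohort decision, it can also emit the custom success metrics the experiment needs, which is precisely what the mesh cannot do for you.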

Where Did We End Up?

Ultimately the challenges we faced were not unique to the service mesh. The organization we worked for had a set of constraints that required us to be pragmatic about the problems we solved and how we solved them. The problems we faced included: 

  • A large organization with lots of developers of varying skill sets
  • Generally immature cloud and SaaS capabilities
  • Processes optimized for non-cloud software
  • Fragmented software engineering methods and beliefs
  • Limited resources
  • Aggressive deadlines 

In short, we had few people, a lot of problems, and the need to show value quickly. We had to support developers who were not primarily web or cloud devs, and we needed to scale to support large engineering organizations with disparate methods and processes for doing cloud work. We needed to focus the majority of our efforts on solving fundamental problems low on the maturity curve.

In the end, when faced with our own service mesh decision, we decided to build on the Linkerd service mesh as it most closely aligned with our priorities: low operational costs (both compute and human), low cognitive overhead, a supportive community, and transparent stewardship—while meeting our feature requirements and budget. Having spent a short stint on the Linkerd steering committee (they love honest feedback and community engagement), I learned how closely it aligned with my own engineering principles. Linkerd recently reached graduated status at the CNCF, which was a long time coming, underscoring the maturity of the project as well as its wide adoption.

About the Author

Chris Campbell has been a software engineer and architect for over a decade, working with multiple teams and organizations to adopt cloud native technology and best practices. He’s split his time between working with business leaders on adopting software delivery strategies that accelerate business and working with engineering teams to deliver scalable cloud infrastructure. He’s most interested in technology that improves developer productivity and experience.
