Decoupling in Cloud Era: Building Cloud Native Microservices with Spring Cloud Azure

Key Takeaways

  • Cloud native applications should exploit the full advantages of the cloud, rather than simply being migrated to the cloud
  • Microservices and cloud native go hand in hand: microservices are designed to run in cloud computing environments
  • Centralized configuration, service discovery, asynchronous messaging, and distributed tracing are core microservice infrastructure
  • Spring Cloud provides common microservice patterns and abstractions without locking you into a specific implementation
  • Spring Cloud Azure follows the abstractions provided by Spring Cloud and provides seamless integration with Azure services

For the past decade, Spring has been famous for its dependency injection feature, which helps Java developers build loosely coupled systems. Put simply, developers just need to focus on the abstraction provided by an interface, and an instance of a concrete implementation will be ready for use. As the cloud has become increasingly popular in recent years, how to exploit the auto-scaling and auto-provisioning features provided by the cloud environment while staying loosely coupled from any specific cloud vendor has become an interesting challenge. That's where cloud native comes into play. Let's first look at what cloud native and microservices are.

Cloud native and microservices

What exactly is "Cloud Native"?

Even though I have seen this term many times, it's still not an easy question to answer. Cloud native is a lot more than just signing up with a cloud provider and using it to run your existing applications. It pushes you to rethink the design, implementation, deployment, and operation of your application. Let's look at two popular definitions first:

Pivotal, the software company that offers the popular Spring framework and a cloud platform, describes cloud native as:

Cloud native is an approach to building and running applications that fully exploit the advantages of the cloud computing model.

The Cloud Native Computing Foundation, an organization that aims to create and drive the adoption of the cloud-native programming paradigm, defines cloud-native as:

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

To summarize, a cloud native application should exploit the advantages of the cloud computing model, and microservices are one way to implement it. Maybe the following explanation makes this clearer:

A cloud native application is specifically designed for a cloud computing environment as opposed to simply being migrated to the cloud.

What exactly is "Microservice"?

Let's check Martin Fowler's definition of microservices first:

Microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

But I love another simple one:

Microservices are small, focused and autonomous services that work together.

Small and focused means single responsibility: one service just needs to do one thing well. Autonomous means fault tolerance: each service evolves and is deployed independently of the others.

The concept of microservices is nothing new, but it hadn't been very popular because it was hard to implement in traditional on-premises organizations. Now cloud-based microservices, which take full advantage of the scalability, reliability, and low maintenance costs provided by the cloud, have become much more popular.

Key benefits are outlined below:

  1. Resilience. One component's failure shouldn't take down the whole system. This is usually achieved by defining a clear service boundary.
  2. Scaling. You don't need to scale everything together when only one part is constrained in performance.
  3. Ease of deployment. Making a change to a single service can speed the release cycle and smooth the troubleshooting process.
  4. Composability. Since each service focuses on one thing, it's easy to reuse them like a Unix pipeline.
  5. Optimizing for replaceability. One small individual service is easier to replace with a better implementation or technology.

Spring Cloud

In order to implement a microservice architecture easily, the industry has identified some common patterns to help. Well-known patterns include centralized configuration management, service discovery, asynchronous messaging, and distributed tracing. Spring Cloud provides these patterns as usable building blocks and helps us follow cloud native best practices. Beyond this, Spring Cloud's unique value shows up in several aspects:

  1. Define a common abstraction for frequently used patterns. This is another beautiful application of Spring's decoupling philosophy. Each pattern is not tightly coupled to a concrete implementation. Take the config server as an example; you have the freedom to change the backend storage without affecting other services. Discovery and Stream follow the same approach.
  2. Modular components. At first impression, many people think of Spring Cloud as a fully-packaged solution. Actually, it's not an all-or-nothing solution. You can choose just one module and use it in one microservice; other services have the freedom to use any other framework. It's like Lego bricks; you can pick only the pieces you like, and the only thing you need to make sure of is that the pieces are compatible with the others.

Now let's look at how each Spring Cloud module fits into the microservice patterns.

Centralized config management via Spring Cloud Config

To meet the "store config in the environment" requirement in a microservice architecture, we need to put all services' configuration in one centralized place. To fulfill this, the following features are needed:

  1. Support multiple environments such as dev, test and prod, so that one package can be built for all environments.
  2. Transparent config fetching. The centralized config should be fetched automatically without any user coding.
  3. Automatic property refresh when a property changes. The service should be notified of such a change and reload the new property.
  4. Maintain change history and easily revert to older versions. This is a really useful feature for reverting mistaken changes in a production environment.

Spring Cloud Config supports all these features: use the single annotation @EnableConfigServer in the config service, and include the client starter in other services to enable the client side. For more details, please refer to the Spring Cloud Config doc.
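As a minimal sketch (assuming the spring-cloud-config-server dependency is on the classpath and a Git repository backs the configuration; the class name and repository URI below are hypothetical), the server side can be as small as this:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// A single annotation turns this Boot application into a centralized config service.
// The backing repository is pointed to by spring.cloud.config.server.git.uri,
// e.g. https://github.com/your-org/config-repo (placeholder).
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}

Client services then only need the config client starter and a spring.cloud.config.uri (or a discovery-based lookup) pointing at this server.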

Service Discovery via Spring Cloud Discovery

Service discovery plays an important role in most distributed systems and service-oriented architectures. The problem seems simple at first: how do clients know the IP and port for a service that could exist on multiple hosts? Things get more complicated as you start deploying more services in a dynamic cloud environment.

In reality, this is usually done in two steps:

  • Service Registration - The process of a service registering its location in a central registry. It usually registers its host and port and sometimes authentication credentials, protocols, versions and environment details.
  • Service Discovery - The process of a client application querying the central registry to learn the location of services.

In choosing a service discovery solution, several aspects should be considered:

  • Fault Tolerance - What happens when a registered service fails? Sometimes it is unregistered immediately in a graceful shutdown, but most of the time we need a timeout mechanism, with services continuously sending heartbeats to prove liveness. Besides this, clients also need to be able to handle failed services by automatically retrying another instance.
  • Load Balancing - If multiple hosts are registered under a service, how do we balance the load across the hosts? Does load balancing happen on the registry side or the client side? Can we provide our own custom load balancing policy?
  • Integration Effort - How complicated is the integration process? Does it only involve new dependencies and/or configuration changes, or invasive discovery code? Sometimes a separate sidecar process is a good option when your language is unsupported.
  • Availability Concerns - Is the registry itself highly available? Can it be upgraded without any downtime? The registry should not be a single point of failure.

Spring Cloud provides a common abstraction for both registration and discovery, which means you just need to use @EnableDiscoveryClient to make it work. Examples of discovery implementations include Spring Cloud Discovery Eureka and Spring Cloud Discovery Zookeeper. You should choose a concrete implementation based on your use case. For more details, please refer to the Spring Cloud Discovery doc.
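As a minimal sketch (the service class name is hypothetical, and either the Eureka or the Zookeeper starter is assumed to be on the classpath), registering and discovering a service requires no registry-specific code:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// The same annotation works whether the registry is Eureka, Zookeeper or another
// implementation; which one is used is decided by the starter on the classpath.
@SpringBootApplication
@EnableDiscoveryClient
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

Other services can then resolve this service by its logical name, for example through a load-balanced RestTemplate or the DiscoveryClient interface, instead of hard-coded hosts and ports.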

Message-driven architecture via Spring Cloud Stream

Suppose we have some microservices that must communicate with each other. Obviously, the traditional synchronous way is blocking and hard to scale, and it can't survive in a sophisticated distributed environment, so asynchronous messaging is the right way to go. In the modern world, every request can be considered a message, and so various messaging middleware products have appeared, each with its own message format and API. It's a disaster to make all of them communicate with each other. Actually, solving this problem is easy: just define a unified message interface, then each middleware provides an adapter which knows how to convert between its message format and the standard one. Now you have grasped the core design principle of Spring Integration. Spring Integration is motivated by the following goals:

  1. Provide a simple model for implementing complex enterprise integration solutions.
  2. Facilitate asynchronous, message-driven behavior within a Spring-based application.
  3. Promote intuitive, incremental adoption for existing Spring users.

And guided by the following principles:

  1. Components should be loosely coupled for modularity and testability.
  2. The framework should enforce separation of concerns between business logic and integration logic.
  3. Extension points should be abstract in nature, but within well-defined boundaries to promote reuse and portability.

For more details, please refer to the Spring Integration doc.

However, Spring Integration still sits at a lower level and contains non-intuitive, confusing terminology. Its programming model isn't as easy to use as other Spring technologies. So Spring Cloud Stream was invented. It is based on the standard message format and the various adapters provided by Spring Integration, and works at a high-level binder abstraction to produce, process and consume messages in a much easier way. It looks like a Unix pipeline; you just need to worry about how to process messages, and messages will come and go as you expect. Spring Cloud Stream offers the following high-level features:

  1. Consumer groups. This concept was first introduced and popularized by Apache Kafka. It supports both publish-subscribe and competing-consumer queue semantics in one programming model.
  2. Partitioning. Based on a partition key provided by the user, messages produced with the same partition key are guaranteed to land in the same physical segment. This is critical in stateful processing, since related data needs to be processed together, for either performance or consistency reasons.
  3. Automatic content negotiation. Message types are converted automatically based on which type of message the consumer accepts.

For more details, please refer to the Spring Cloud Stream doc.
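As a minimal consumer sketch (using the annotation-based programming model current at the time of writing; the class name is hypothetical, and the binder is selected purely by the dependency on the classpath):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Binds the Sink.INPUT channel to whatever broker the chosen binder talks to.
// The destination and consumer group come from spring.cloud.stream.bindings.input.* properties.
@SpringBootApplication
@EnableBinding(Sink.class)
public class MessageConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(MessageConsumerApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        // Only the processing logic lives here; routing and serialization are handled by the binder.
        System.out.println("Received: " + payload);
    }
}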

Distributed tracing via Spring Cloud Sleuth and Zipkin

Under a microservice architecture, one external request might involve several internal service calls, and these services might be spread over many machines. Although most solutions implement centralized log storage and search, it is still hard to trace end-to-end transactions spanning multiple services. Figuring out how a request travels through the application means manually searching log keywords many times to find clues. This is really time-consuming and error-prone, especially when you don't have a full understanding of the microservice topology. What we actually need is to correlate and aggregate these logs in one place.

Spring Cloud Sleuth implements such correlation by introducing the concepts of span and trace. A span represents a basic unit of work, such as calling a service, and is identified by a span ID. A set of spans forms a tree-like structure called a trace. The trace ID remains the same as one microservice calls the next, and both IDs are included in each log entry. Furthermore, Sleuth automatically instruments common communication channels:

  • Requests over the Spring Cloud Stream binders we discussed before
  • HTTP headers received at Spring MVC controllers
  • Requests made with RestTemplate
  • ...and most other types of requests and replies inside the Spring ecosystem

With such raw data at hand, it's still hard to answer questions such as which microservice call consumes the most time. Zipkin provides a nice UI to help us visualize and understand the traces. You can have a Zipkin server ready for use with the single annotation @EnableZipkinServer. For more details, please refer to the Spring Cloud Sleuth doc.
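On the client side, a sketch of the configuration (assuming the spring-cloud-starter-zipkin dependency is present and a Zipkin server is already running; the URL and the Sleuth 2.x property names below are assumptions that differ in older releases):

# application.properties of an instrumented service
# Where the Zipkin server is reachable (9411 is Zipkin's conventional port)
spring.zipkin.base-url=http://localhost:9411
# Report every trace; lower this sampling rate in production to reduce overhead
spring.sleuth.sampler.probability=1.0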

Spring Cloud Azure

Spring Cloud provides great support for common patterns, but there is still a gap when implementing microservices on a specific cloud environment. Spring Cloud Azure follows the common abstractions provided by Spring Cloud, then takes one step further to provide automatic resource provisioning and auto-configuration of Azure service-specific properties. With this, users just need to understand the logical concepts of Azure services, without touching or suffering from low-level details of configuration and SDK APIs. Take Azure Event Hub as an example; you only need to know that it is a messaging service with a design similar to Kafka's, and then you can use the Spring Cloud Stream Binder for Event Hub to produce to and consume from it.

The design motivations are as follows:

  1. Seamless Spring Cloud integration with Azure. Users can easily adopt Azure services without having to modify existing code. Only dependencies and a few configuration settings are needed.
  2. Minimal configuration. This is done by taking advantage of Spring Boot auto-configuration to preconfigure default property values based on the Azure Resource Management API. Users can override these properties with their own.
  3. Automatic resource provisioning. If resources do not exist, our modules will create them under the user-specified subscription and resource group.
  4. No cloud vendor lock-in. With our modules, users can easily consume Azure services and benefit from the conveniences provided by Spring Cloud, without getting locked into one specific cloud vendor.

Auto config and resource provision with Azure Resource Manager

One of the things developers hate doing is configuration. Before configuring each property, developers need to go through the documentation and fully understand its meaning, then carefully copy the value from somewhere and paste it into the application's property file. Even then the process is not done; they also need to properly comment each property so that other developers know which one to change in which scenario and can avoid mistaken changes. That's the pain point we want to solve, so we built auto-configuration based on Spring Boot. If you want to use Event Hub, you don't need to understand what a connection string is; you just fill in the Event Hub namespace (like a Kafka cluster name) and the Event Hub name (like a Kafka topic name), and everything else is auto-configured. Of course, you still have the ability to provide customized configuration to override the defaults.
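As an illustrative sketch of what that configuration can look like (the property names follow the Spring Cloud Azure Event Hub binder samples of this generation and may differ between releases; all resource names below are placeholders):

# Credentials and target resource group used by the Azure Resource Manager integration
spring.cloud.azure.credential-file-path=my.azureauth
spring.cloud.azure.resource-group=my-resource-group
spring.cloud.azure.region=westus

# Only the logical concepts: the namespace (like a Kafka cluster) and a storage
# account for checkpoints; connection strings are resolved automatically
spring.cloud.azure.eventhub.namespace=my-eventhub-namespace
spring.cloud.azure.eventhub.checkpoint-storage-account=my-checkpoint-storage

# Standard Spring Cloud Stream bindings: the destination is the Event Hub name (like a Kafka topic)
spring.cloud.stream.bindings.input.destination=my-eventhub
spring.cloud.stream.bindings.input.group=my-consumer-group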

One great benefit of the cloud is a programmable API to create and query the resources you own. This is the key to automation. Based on Azure Resource Manager, Spring Cloud Azure provides automatic resource provisioning. "Resource" covers a broader range than you might expect; one example is a consumer group of an Event Hub. When you add a new service acting as a new consumer group, you really don't want to create that consumer group manually.

Spring Cloud Stream Binder with Event Hub

We have talked about the benefits of Spring Cloud Stream. Suppose you have used this project but now want to migrate to Azure. You might already be using the Kafka or RabbitMQ binder, but Azure doesn't provide managed Kafka or RabbitMQ offerings of that kind. So how could you migrate with little effort? Actually, you don't care which messaging middleware you're using; you just want something that meets similar functional and performance requirements. So you can simply change the dependency from the Kafka binder to the Event Hub binder, without any code change, for a smooth cloud migration. If you want to take full advantage of the Event Hub binder, you need to know about the following features:

Consumer Group

Event Hub provides support for lightweight consumer groups similar to Apache Kafka's, but with slightly different logic. While Kafka stores all committed offsets in the broker, with Event Hub you have to store the offsets of processed messages yourself. The Event Hub SDK provides the ability to store such offsets inside an Azure Storage Account.

Partitioning Support

Event Hub provides a concept of physical partitions similar to Kafka's. But unlike Kafka's automatic rebalancing between consumers and partitions, Event Hub uses a kind of preemptive mode. The storage account acts as a lease store to determine which partition is owned by which consumer. When a new consumer starts, it will try to steal some partitions from the most heavily loaded consumer to balance the workload.

Checkpoint support

In a distributed publish-subscribe messaging system, there are three messaging semantics: at-least-once, at-most-once, and exactly-once. For now, we are only considering the consumer side:

  • At least once: The consumer receives and processes the message. It doesn't send an acknowledgement to the broker until the message has been processed successfully. If for some reason the processing isn't finished, e.g. the consumer node goes down, the same message will be reprocessed (as the next available event in the partition), which ensures at-least-once consumption. In this case, the consumer may process the same message more than once (until it is successfully processed). The binder's manual checkpoint mode provides the capability to checkpoint manually after processing the message.
  • At most once: In this mode, the consumer receives the message and sends an acknowledgement to the message broker immediately, then starts processing. If the consumer goes down during processing, the message will not be reprocessed, since the broker thinks the consumer has already received it. In this case, the consumer processes each message at most once, but it may miss some messages due to processing failures. This is the default batch checkpoint mode supported by the Event Hub binder.
  • Exactly once: On top of at-least-once semantics, we can use a unique message ID to deduplicate already-processed messages, or we can make message processing idempotent. Learn more about exactly-once semantics.

By exposing a Checkpointer through a custom message header, the Event Hub binder can support these different message-consuming semantics, as the sketch below shows.
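A rough sketch of manual checkpointing (modeled on the binder samples; the handler class name is hypothetical, the consumer binding is assumed to have its checkpoint mode set to MANUAL, and exact package names vary between binder versions):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.handler.annotation.Header;

import com.microsoft.azure.spring.integration.core.AzureHeaders;
import com.microsoft.azure.spring.integration.core.api.Checkpointer;

// In manual checkpoint mode the binder places a Checkpointer in a message header,
// so the application decides when the offset is committed (at-least-once semantics).
@EnableBinding(Sink.class)
public class ManualCheckpointHandler {

    @StreamListener(Sink.INPUT)
    public void handle(String message,
                       @Header(AzureHeaders.CHECKPOINTER) Checkpointer checkpointer) {
        System.out.println("Processing: " + message);
        // Acknowledge only after processing succeeds; if the process dies before
        // this line, the message is redelivered from the last checkpoint.
        checkpointer.success();
    }
}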

For more details, please refer to the Spring Cloud Stream Event Hub binder doc. You can also follow the sample to try it.

Spring resource with Azure Storage blob

Spring Resource provides a common interface for manipulating stream-based resources, such as UrlResource, ClassPathResource and FileSystemResource. It's obvious that an Azure Storage blob is a good fit for this as a BlobResource. With this resource, all implementation details are hidden, and missing files will be created automatically.
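As a minimal sketch (the container and blob names, and the controller class, are placeholders; the azure-blob:// protocol is resolved by the Spring Cloud Azure storage starter):

import java.nio.charset.StandardCharsets;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.Resource;
import org.springframework.util.StreamUtils;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// The blob is injected like any other Spring Resource; the starter maps the
// azure-blob:// protocol onto the storage account configured for the application.
@RestController
public class BlobController {

    @Value("azure-blob://my-container/my-blob.txt")
    private Resource blobResource;

    @GetMapping("/blob")
    public String readBlob() throws Exception {
        // Read the blob through the standard Resource API
        return StreamUtils.copyToString(blobResource.getInputStream(), StandardCharsets.UTF_8);
    }
}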

For more details, please refer to the Spring Resource with Azure Storage doc. You can also follow the sample to try it.

Spring Cloud Azure Playground: One-click run microservice

Although Spring Cloud provides complete support for building microservices, for new users who want to build a runnable set of microservices it's still a challenging task. The steps include:

  • Set up the module and dependencies for each microservice. Although Spring Initializr can help with this, it's still a huge effort since the number of microservices might be large.
  • Ensure the dependencies and versions across all services are compatible.
  • Configure properties for each service, some of which are interrelated. This is error-prone when done manually.
  • Common infrastructure services provided by Spring Cloud have their own annotations and configuration needed to make services run. You need to follow the official samples to get all of these right.
  • Many users want to run these services locally with Docker. Manually writing Dockerfiles is time-consuming and requires a deep understanding of the relationships among the microservices.

To solve the issues above, we built Spring Cloud Azure Playground to help users do this easily. The supported features include:

  • Based on Spring Initializr, it generates all services at once.
  • Each microservice contains the needed dependencies, annotations, configuration and sample code. One command runs everything, based on the generated Dockerfiles.
  • Users can customize a service's name and port to avoid conflicts with other services running locally.
  • Options to push to GitHub or download the project. GitHub makes it easy to share with team members.

You can try this on https://aka.ms/springcloud.

Spring Cloud Azure on Github

This project is open source. You can contribute and submit issues here. Please star it if you like it.

About the Author

Zhongwei Zhu is a software engineer with deep experience designing and architecting distributed systems, big data solutions and microservices. He currently works for Microsoft as the main contributor to Spring Cloud Azure, focusing on improving the Java experience on Azure.
