
A Cloud-Native Architecture for a Digital Enterprise


Key Takeaways

  • By becoming a digital enterprise, a company can integrate and expose its business capabilities as APIs by digitizing an entire value chain.
  • API-led integration provides a platform to enable enhanced digital experiences for consumers. Agility, flexibility, and scalability are key to becoming a successful digital enterprise.
  • Cloud-native applications are all about dynamism; microservice architecture is critical to accomplish this goal. Combining cloud-native technologies with an API-led integration platform helps to increase productivity by having agility, flexibility, and scalability through automation and services.
  • This article describes a vendor/technology-neutral reference architecture for a cloud-native digital enterprise that can be mapped onto different cloud-native platforms (Kubernetes and service mesh), cloud providers (Microsoft Azure, Amazon AWS, and Google GCP), and infrastructure services.
     

In an era of digital transformation, (digital) enterprises are looking for fast innovation through effective collaboration to deliver more value to their customers, with dramatically less effort.

An enterprise in any sector can integrate, expose, and monetize its business capabilities by digitizing entire value chains. To stay ahead of the competition, enterprises expose integrated business functionalities as APIs, which are the products of the 21st century.

Every company is looking for an API-led integration platform to enable enhanced digital experiences for its consumers. Apart from integration and API platforms, these architectures should be able to provide agility, flexibility, and scalability.

This is where cloud-native technologies come into the picture. Combining cloud-native technologies with an API-led integration platform helps to increase productivity by having agility, flexibility, and scalability through automation and services.

What is cloud native?

Cloud native is a term that describes technologies used to create, deploy, and operate applications in scalable environments such as public, private, and hybrid clouds. It also describes the characteristics of applications built specifically for scalability.

Cloud native has its own foundation: the Cloud Native Computing Foundation (CNCF). It aims to build sustainable ecosystems and foster communities to support the growth and health of cloud-native, open-source software.

Figure 1—Cloud-native reference architecture by the CNCF

Figure 1 illustrates the reference architecture presented by the CNCF. Each layer has its own specialized cloud-native software stacks, many of which are governed by the CNCF.

The infrastructure layer represents the actual computing resources and the provisioning layer covers host management activities such as installing and setting up operating systems.

The runtime layer mainly consists of the container runtime. The container runtime interface (CRI) allows users to plug in different container runtime implementations; Docker is the most widely used. Similarly, the container network interface (CNI) provides an API for plugging in different container network implementations, and the container storage interface (CSI) provides a common standard for connecting container orchestration platforms to persistent storage.

The container orchestration and management layer helps to manage large numbers of containerized application deployments across multiple container host machines. Cloud Foundry, Mesos, Nomad, and Kubernetes are popular container orchestrators in the cloud-native space.

Cloud-native application developers are mainly engaged with the functionality of the application definition layer, which defines application composition, application-specific configurations, deployment properties, image repositories, continuous integration/continuous delivery, etc.

Characteristics of a Cloud-Native Digital Enterprise

Cloud-native applications are all about dynamism, and microservice architecture (MSA) is critical to accomplish this goal. MSA helps to divide and conquer by deploying smaller services focusing on well-defined scopes. These smaller services need to integrate with different software as a service (SaaS) endpoints, legacy applications, and other microservices to deliver business functionalities. While microservices expose their capabilities as simple APIs, ideally, consumers should access these as integrated, composite APIs to align with business requirements. A combination of API-led integration platform and cloud-native technologies helps to provide secured, managed, observed, and monetized APIs that are critical for a digital enterprise.

Figure 2—A reference cloud-native architecture for a digital enterprise

The infrastructure and orchestration layers represent the same functionality that we discussed in the cloud-native reference architecture. Cloud Foundry, Nomad, and Kubernetes are some examples of current industry-leading container orchestration platforms. Istio and Linkerd are two leading service meshes built on top of these orchestration platforms. OpenFaaS, Knative, AWS Lambda, Azure Functions, Google Functions, and Oracle Functions are a few examples of functions-as-a-service (FaaS) platforms. [Editor's note: these lists of products were revised after initial publication for correctness and clarity.]

Each microservice or serverless function is developed by a small team with the freedom of choosing appropriate technologies. Digital enterprises can have in-house or cloud orchestration platforms to deploy these MSA-based applications, and, if enterprises use serverless functions, then it is recommended to use a FaaS platform provided by a well-known cloud provider.

Once the microservices are defined and implemented, they should be bundled with all their dependencies and shipped as container images. Docker is the most popular container image format. These images should be stored in a registry from which other developers, as well as runtime environments, can pull them and create containers.

Figure 3—Container image creation

The container orchestration platform schedules and creates a container (runtime) in a worker node. Each container gets its own IP address, storage, and namespace with the allocated CPU and memory resources. In addition to all application dependencies, environment-specific properties such as configurations, certificates, and credentials should be associated with the container runtime.

Figure 4—Configs, credential, and certificate association with the container

Microservices communicate with each other to complete a given business task. As the number of services grows, a discovery service is needed so that each service can be reached by a unique name (its service name), much as a domain name service (DNS) resolves hostnames.

Also, a main benefit of MSA is that it is easy to scale. When scaling out, ingress traffic should be routed to each container with a suitable load-balancing mechanism; to do this, each application should have a proper load balancer bound to its service name.
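The interplay between service names and load balancing can be sketched in a few lines. The following is a minimal, illustrative Python model, not a real API: the class, service name, and addresses are hypothetical, and real platforms delegate this work to DNS and the orchestrator's service proxy.

```python
class ServiceRegistry:
    """Toy discovery service: maps a service name to the container
    instances behind it and load-balances lookups round-robin."""

    def __init__(self):
        self._instances = {}  # service name -> list of container addresses
        self._counters = {}   # service name -> next round-robin index

    def register(self, name, address):
        """Record a new container instance under the given service name."""
        self._instances.setdefault(name, []).append(address)
        self._counters.setdefault(name, 0)

    def resolve(self, name):
        """Return the next instance for the service, round-robin."""
        instances = self._instances.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name}")
        i = self._counters[name] % len(instances)
        self._counters[name] = i + 1
        return instances[i]


registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
# Successive resolutions of "orders" alternate between the two instances.
```

The point of the sketch is that callers only ever see the stable service name; which container actually answers is decided at resolution time.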

Figure 5—Scaling, load balancing, and service name resolving

Health checks and auto-healing are other important features of cloud-native orchestration platforms. Orchestration platforms run a health-check probe against each container and can auto-heal it if something is wrong.
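As a rough illustration of the auto-heal decision, here is a minimal Python sketch. The consecutive-failure threshold mirrors Kubernetes-style liveness probes, where a single failed check is tolerated and only repeated failures trigger a restart; the class and its names are illustrative, not a real API.

```python
class LivenessProbe:
    """Toy model of an orchestrator's liveness probe: restart a
    container only after `failure_threshold` consecutive failed
    checks, so that a transient blip does not cause a restart."""

    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self._consecutive_failures = 0
        self.restarts = 0

    def observe(self, healthy):
        """Record one probe result and auto-heal if the threshold is hit."""
        if healthy:
            self._consecutive_failures = 0
        else:
            self._consecutive_failures += 1
            if self._consecutive_failures >= self.failure_threshold:
                # Auto-heal: restart the container and reset the counter.
                self.restarts += 1
                self._consecutive_failures = 0
```

With a threshold of 2, the sequence fail, ok, fail, fail produces exactly one restart: the first failure is forgiven once a healthy check follows it.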

Individual microservices that are deployed as containers should be able to scale in and out depending on load spikes. Running unnecessary containers wastes computing resources, while running too few containers can cause service downtime. Container orchestrators monitor these load spikes and can remove or add containers to scale in and out. CPU usage, memory usage, and in-flight request counts (such as the load balancer routing queue) are a few well-known metrics that feed scale-in and scale-out decisions.
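The scaling arithmetic itself is simple. The sketch below implements the proportional rule that Kubernetes' Horizontal Pod Autoscaler documents (desired = ceil(current replicas × observed metric / target metric)), clamped to configured bounds; the function name and default bounds here are illustrative.

```python
import math


def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Proportional autoscaling decision: scale the replica count by
    the ratio of observed load to target load, clamped to the
    configured [min_replicas, max_replicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))


# 4 replicas at 90% CPU against a 60% target scale out to 6 replicas.
desired_replicas(4, current_metric=90, target_metric=60)
```

The same formula drives scale-in: 4 replicas at 30% against a 60% target shrink to 2, and extreme spikes are capped by the maximum bound so the platform never over-provisions unboundedly.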

Figure 6—Autoscaling

MSA produces frequent releases, and these need to be rolled out into production seamlessly. Even with thorough testing, we sometimes need to roll back to a stable state due to a late-found error. To mitigate such situations, we should have well-defined deployment strategies. Six Strategies for Application Deployment by Etienne Tremel, a software engineer at Container Solutions, explains well-established deployment practices in the cloud-native industry.

API Gateway

Microservices aren't designed from an end-user's point of view, where users want to access the system according to their business needs. To expose system functionality as business APIs, these microservices need to integrate with different SaaS endpoints, legacy applications, and other microservices to perform the defined business functionality. Integrations are often supported by enterprise service bus (ESB) functionality such as routing, transformations, orchestration, aggregation, and resilience patterns.

However, in general, an ESB is a monolithic system and doesn’t fit well with MSA. Alternatively in MSA, integration and aggregation can be done in another microservice to expose meaningful APIs to consumers. These APIs should be secured, managed, observed, and monetized. This requires a governance model with policy enforcement. This is where API gateways are important.
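To make the aggregation idea concrete, here is a minimal Python sketch of such an integration microservice. The orders and customers backends are hypothetical stand-ins: in a real deployment these two functions would be HTTP calls to separate microservices resolved by service name.

```python
# Hypothetical backends; in practice these would be HTTP requests
# to the "orders" and "customers" microservices.
def fetch_order(order_id):
    return {"id": order_id, "customer_id": 42, "total": 99.5}


def fetch_customer(customer_id):
    return {"id": customer_id, "name": "Ada"}


def order_summary(order_id):
    """Composite API: aggregate two microservice responses into the
    single business-oriented view the consumer actually asked for."""
    order = fetch_order(order_id)
    customer = fetch_customer(order["customer_id"])
    return {
        "order_id": order["id"],
        "customer_name": customer["name"],
        "total": order["total"],
    }
```

The consumer sees one meaningful API call, while the integration service hides the fan-out to, and the internal shapes of, the underlying microservices.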

An API gateway can be used as a policy enforcement point of API governance while working in sync with control and management plane components like lifecycle management, traffic control, policy control, and identity and access management (IAM). The following are the key functionalities of an API gateway:

  • Authentication and Security enforce standard authentication and security across all microservices.
  • API rate limiting protects the backend microservice by controlling requests that go over the limit.
  • Dynamic API discovery and routing enables discovery and routing to application developers.
  • API load balance and failover enables scalability and high availability.
  • API shaping optimizes bandwidth usage and enhances the user experience.
  • API composition aggregates multiple microservice responses and creates a single composite API response.
  • API mediation and transformation integrates legacy systems by using mediation and message transformation.
  • Response caching enhances performance.
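Of these, rate limiting is the easiest to show in miniature. Below is a token-bucket sketch in Python, one common way gateways implement per-consumer limits; the capacity and refill rate are illustrative parameters, and time is passed in explicitly to keep the example deterministic.

```python
class TokenBucket:
    """Toy gateway rate limiter: each consumer gets a bucket of
    `capacity` tokens refilled at `rate` tokens per second; a request
    that finds the bucket empty is rejected, protecting the backend."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Admit one request at time `now` (seconds) if a token is free."""
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(capacity=2, rate=1.0)
# Two immediate requests pass, a third burst request is rejected,
# and a request one second later passes once a token has refilled.
```

A production gateway keeps one such bucket per API key or consumer, which is how the "requests that go over the limit" in the list above are identified and rejected.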

Aligning with the MSA, API governance can be achieved via three main API gateway deployment patterns—Centralized/Shared, Private Jet, and Sidecar.

Centralized/Shared Gateway

Figure 7—Centralized/Shared API Gateway

The shared cluster of API gateways handles all API requests, both internal and external. The API gateway cluster can be scaled horizontally, with load distributed among all the API gateway containers. A shared API gateway adds an additional hop to inter-microservice communication. The same gateway cluster can manage both external and internal APIs, or a dedicated API gateway layer can manage external traffic.

Private Jet Gateway

Figure 8—Private Jet API Gateway

In this pattern, each individual microservice has a dedicated API gateway. This provides maximum security and guarantees resource allocation for API execution. A single private jet API gateway can also be attached to a cluster of microservices of the same type. Load balancing and failover features are necessary here and fit naturally into this scenario. A private jet API gateway can itself be scaled independently. Like the centralized API gateway, this pattern adds one network hop to inter-microservice communication.

Sidecar Gateway

Figure 9—Sidecar API Gateway

The sidecar pattern avoids the additional external network hop required in the centralized and private jet gateway patterns, replacing it with a local network call. Sidecars are heavily used in service mesh architectural patterns. Offloading all service-to-service communication concerns, such as discovery, reliable delivery, routing, failover, and load balancing, into a mesh sidecar frees developers to focus on business functionality.

A sidecar API gateway pattern can be used wherever you want a service mesh architecture.

Figure 10—Gateway convergence

Technology is evolving in a way that all types of gateways, such as API gateways, ingress gateways, service mesh gateways, and micro integrators, are merging into a single, all-in-one gateway. However, even if these gateways merge into a single gateway concept, depending on the use case and the requirement, in some cases, it is good to use multiple gateways to have a clean and scalable architecture.

Control and Management Plane

API gateways are the interception point for enforcing policies and capturing stats and metrics, which can then be analyzed to find out how APIs are behaving. Managing these APIs is a necessity in today's digital economy. Control and management planes should provide API management capabilities such as:

  • Design and lifecycle management
  • Control access and security enforcement
  • Scalability and API traffic management

Unlike in a monolithic architecture, auditing and tracing are hard problems in decentralized architectures such as MSA. Having the necessary interceptors to collect metrics, stats, and data is critical. The control and management planes should have capable analytics tools and engines to analyze this collected data and generate business intelligence reports. These reports can be used to monetize business capabilities by combining them with the defined business plans.

An externally accessible self-service developer portal helps application developers or API users easily discover these APIs and use them with a well-defined business plan. The developer experience is key to the adoption and success of your APIs, and having a feedback mechanism, such as customer ratings and forums, is key for a developer portal.

Comprehensive observability systems and business insight reports help to get an understanding of how these APIs are behaving. These dashboards and reports can be used by both business and operations leaders to gain a 360-degree view of their digital business.

To Read Further

If you want to learn more about cloud-native digital enterprise architecture, read my paper about a vendor/technology-neutral reference architecture for a cloud-native digital enterprise. The architecture defined in this paper can be mapped into different cloud-native platforms (Kubernetes and service mesh), different cloud providers (Microsoft Azure, Amazon AWS, and Google GCP), and infrastructure services. These reference implementations will be covered in separate papers.

Conclusion

By becoming a digital enterprise, companies in any sector can integrate and expose their business capabilities as APIs. These APIs should be secured, managed, observed, and monetized. An API-led integration platform is essential for digital enterprises whether they start with a greenfield or brownfield implementation.

Cloud-native technologies are critical to accomplishing agility. Containers and orchestration platforms help to have a scalable system by providing the required abstractions, automation, and operational tools.

Combining cloud-native technologies with an API-led integration platform creates an effective architecture for a digital enterprise, increasing productivity by providing agility, flexibility, and scalability through automation and services.

About the Author

Lakmal Warusawithana is the Senior Director - Developer Relations at WSO2. In 2005, Lakmal co-founded thinkCube, a pioneer in developing the next generation of collaborative cloud computing products tailored toward telecom operators. He oversaw the overall engineering process, giving special attention to scalability and service delivery of thinkCube solutions. Prior to co-founding thinkCube, Lakmal spent four years at ITABS, a company that specialized in Linux-based server deployments with a custom, easy-to-use server management interface.
