Adoption of Cloud Native Architecture, Part 2: Stabilization Gaps and Anti-Patterns

Key Takeaways

  • Anti-patterns related to monolithic apps come in three flavors: client/server (monolith) apps migrated to a cloud platform as is, monolithic applications with bundled services, and monolithic applications artificially broken down into microservices (the Distributed Monolith).
  • A proper service interaction and governance model, using either service orchestration or choreography techniques and a service mesh solution, helps avoid the Death Star anti-pattern.
  • Development teams should strive for the ideal zone by properly managing technology use/reuse as well as dependencies on those technologies (loose coupling).
  • There are three main capabilities we typically need in distributed computing environments when using microservices: resilience, security, and observability.
  • Digital transformation to cloud-native platforms typically occurs in three different areas: application, platform, and process.


In the first part of this article series, we discussed various architecture paradigms and application hosting models, from mainframes to the recent cloud-native platforms. Each new hosting model offered in the industry initially results in unintended consequences such as architecture deviations and anti-patterns leading to significant technical debt. But over time, new design patterns evolve to address these gaps.

In this article, we'll discuss some of the anti-patterns when using microservices architecture in applications. We'll also talk about the architecture adoption pendulum, which describes how IT teams need to balance architecture and technology stability, by taking measures to not reinvent the wheel in every application and at the same time resisting the tendency to arbitrarily reuse technologies and application frameworks.

Anti-patterns due to stabilization gaps

Anti-patterns can occur in any phase of the software development and deployment lifecycle, when we knowingly or by mistake use the technologies and best practices in an inappropriate context. The same is true when using microservices and container-based technologies. If we are not careful, the applications end up suffering from the architecture gaps and anti-patterns.

Following are some of the anti-patterns that architecture and Dev teams need to be careful about:

  • Monolithic Hell (including Distributed Monolith),
  • Death Star,
  • Jenga Tower (also known as Logo Slide or Frankenstein), and
  • Square Wheel.

Let’s delve into each of these anti-patterns and how they impact cloud-native adoption efforts.

Monolithic Hell

The Monolithic Hell anti-pattern is common when adopting microservices and cloud-native technologies in our applications. Monolithic apps, if migrated to cloud platforms as is without proper due diligence and refactoring where applicable, can result in multiple issues and lead to anti-patterns.

There are a few different variations of monolithic hell when apps are deployed to a cloud platform. We’ll review three in this article, as shown in Figure 1:

  • client/server (monolith) apps migrated to a cloud platform as is,
  • a monolithic application with bundled services, and
  • a monolithic application artificially broken down into microservices (Distributed Monolith).


Figure 1. Monolithic app flavors

Let’s see how each of these different flavors impacts the application support in the areas of deployment, monitoring, and scalability.

Monolithic application scenario 1: Client/server application migrated as is

A monolithic application is the most common anti-pattern scenario in cloud-native adoption. Teams try to migrate their current client/server applications designed for monolithic application servers like WebLogic, WebSphere, and IIS. These applications tend to be large in scope, typically containing hundreds of thousands, if not millions, of lines of code. They’re also tightly coupled with each business function they support. They typically lack clear boundaries of business functionality, also known as bounded contexts in domain-driven design.

There is nothing wrong with monolithic apps in general if the different business functions they support are closely related to each other and they all need to be called in the same transactional context. These different functions also should have the same lifecycle in terms of enhancements and production deployments.

But if an application or system needs to support business functions that are not closely related to each other, have different lifecycles of changes, or have different performance and scalability needs, then monolithic applications become a challenge.

Application development and support start becoming overhead and a burden when the business needs change at different paces or in different parts of the system. Having a single app responsible for multiple business functions means that anytime we need to deploy enhancements or a new version of a specific function, we must shut down the whole application, apply the new feature, and restart the application. This can cause outages and disrupt other functions in the application.

This is why microservices design is not a technology-based concern but a business-driven one. Best practices like business-capability-driven architecture and domain-driven design help identify the right candidates in a system to implement as microservices. This approach also helps identify or define the boundaries of these microservices.
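To make the idea concrete, here is a minimal sketch of bounded contexts in code. The service names, methods, and in-memory datastores are invented for illustration; the point is that each business capability sits behind a narrow public interface and owns its own data, so either context could later become a separate microservice:

```python
# Hypothetical sketch: two bounded contexts (Orders and Billing), each with
# its own datastore. Cross-context interaction goes only through the other
# context's public interface, never through its internal data.

class BillingService:
    def __init__(self):
        self._invoices = {}          # Billing's own datastore

    def create_invoice(self, order_id, amount):
        self._invoices[order_id] = {"amount": amount, "paid": False}
        return order_id

class OrderService:
    def __init__(self, billing: BillingService):
        self._orders = {}            # Orders' own datastore
        self._billing = billing      # depends on Billing's interface only

    def place_order(self, order_id, amount):
        self._orders[order_id] = {"amount": amount}
        # The only coupling to Billing is this public call.
        self._billing.create_invoice(order_id, amount)
        return order_id

billing = BillingService()
orders = OrderService(billing)
orders.place_order("o-1", 99.0)
```

Because the boundary is explicit, splitting Orders and Billing into independently deployed services later is a packaging change, not a redesign.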

The need for good interface design is very important when we have a system with several microservices which need to interact with each other to fulfill a business use case.
In Figure 2 (as discussed in Jonas Bonér’s Reactive Microsystems: The Evolution of Microservices at Scale), we can see a single application with three components (modules) that has been migrated to the cloud platform as is, with no design or code changes, whose maintenance in the cloud incurs a technical debt in the long term.

 
Figure 2. Monolithic app migrated to cloud as is

To summarize this anti-pattern, below are the advantages and limitations of pushing a client/server (monolith) application to a cloud platform without any changes.

Pros:

  • No additional training required other than the familiarity with the cloud platform.
  • Self-contained, isolated apps.

Cons:

  • Common components packaged and deployed multiple times as separate instances.
  • Component changes cause all apps to be redeployed as a whole.
  • Big-bang deployments.
  • Large footprint per app.
  • High cost of change.

Large self-contained applications are not ideal for cloud adoption due to the cost of maintaining those apps after deployment as well as the inability to take advantage of cloud services like resiliency at the right level of granularity.

Leadership insight

Monolithic Hell can occur under the radar as organizations race to move legacy apps to the cloud with the shortest time and least effort possible. The anti-pattern emerges as a consequence of such migrations.

Leaders should have architecture controls in place to look for the signs described in the Cons list above, as early as possible. For example, in the "client/server monolith moved as is" scenario just described, leaders can detect the Monolithic Hell anti-pattern by examining the planned business functions and cadence of deployments to determine if the application to be moved to the cloud will still operate as one large black box.

Jon Moore discussed in this article how to implement effective architecture controls in large organizations.

It’s important to note that "monolith" is not a term based on the size of the deployment artifact. It usually refers to the grouping of non-similar business functions into a single application.

Sometimes, we see different business functions (or services) packaged into a single application for the sake of convenience or because of operational constraints, such as an Ops team preferring to support a few applications in production, where a single app with different services seems to make more sense than deploying each service as a separate application.

Monolithic application scenario 2: Apps with bundled services

The second scenario is when different services are bundled into one single application that is then deployed to the cloud platform.
Even though the services support different business functions, the team nevertheless bundles those services into a single deployment artifact. There are a few reasons why this approach is followed.

It's easier to deploy and monitor a single application from an operational standpoint. This is a preferred option in organizations that have not fully embraced DevOps practices. Many smaller applications, i.e. microservices, are seen as incurring too much overhead compared to one large application and are discouraged by operations teams.
But there are several challenges to this approach. It loses scalability at the level of individual services, and it lacks the flexibility to deploy new versions of each service without having to bring down the other services bundled in the single application.

It’s worth the effort for organizations with any applications that fall into this category to separate them into several smaller applications and deploy them to the cloud platform independently.

Jonas Bonér discussed in detail applications that have heterogeneous business functions bundled into a single app.

This anti-pattern is represented in Figure 3, which illustrates an array of different services that don’t share a common business domain, context, or lifecycle, but which are artificially packaged together into a single application regardless.

Figure 3. Monolithic app with bundled services

In summary, this anti-pattern comes with the following pros and cons.

Pros:

  • Ease of deployment because of fewer (i.e. single) applications to deploy.
  • Fewer applications to maintain and monitor.
  • Less surface area of the system to support and secure.

Cons:

  • Not easy to deploy any modifications or enhancements to a specific business function without adversely impacting every other business function bundled in the single monolithic application.
  • Longer deployment cycles due to the dependencies of services on the single monolithic application that contains them.
  • If each business function (i.e. microservice) requires a different level of security or scalability needs, the whole application, along with all bundled services, will need to accommodate the same requirements and SLAs.

Practices such as CI/CD, automation of the build, and deployment to the cloud platform can help avoid this anti-pattern. Even though microservices increase the overall surface area of the system and number of applications to deploy and maintain, the automation helps minimize manual deployments of each microservice.

Leadership insight

Leaders should resist the pressure to bundle services into large applications to simplify operations. Instead, they should pursue modern DevOps principles across the entire organization.

Microservices do not need to be forced into a single application for the sake of operations. Leaders should champion changes in the entire organization to allow for groups of microservices to be supported as a logical application while allowing for deployment, scalability, versioning, and load balancing of individual microservices.

When designing systems with different microservices bundled into one deployable unit (for convenience or other reasons), don’t share the same database between those microservices. Each service should still have its own datastore to manage the state. This way, the services can easily be separated out at a later date.
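As a hedged sketch of the "bundled but separable" guidance above (the services and their stores are hypothetical), two services can ship in one deployable unit while each keeps a private datastore, so they remain easy to extract into independent deployments later:

```python
# Hypothetical sketch: two services deployed together as one unit, but each
# owning a separate datastore. No shared tables means the services can be
# split out into independent applications later without data untangling.

class InventoryService:
    def __init__(self):
        self._store = {}             # private to Inventory

    def set_stock(self, sku, qty):
        self._store[sku] = qty

    def stock(self, sku):
        return self._store.get(sku, 0)

class PricingService:
    def __init__(self):
        self._store = {}             # private to Pricing; never shared

    def set_price(self, sku, price):
        self._store[sku] = price

    def price(self, sku):
        return self._store[sku]

# Bundled into one "application" for operational convenience...
inventory, pricing = InventoryService(), PricingService()
inventory.set_stock("sku-1", 5)
pricing.set_price("sku-1", 19.99)
# ...but state stays separated, so each service remains extractable.
```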

Monolithic application scenario 3: Distributed monoliths

The third scenario in microservices anti-patterns is the Distributed Monolith, also known as a Microlith. This is the other extreme of the previous anti-pattern. In this scenario, a single monolithic application is artificially broken down into multiple different services and deployed as separate apps -- but all these services are still tightly coupled in terms of dependencies and lifecycle management. They all need to be called in the same transactional context, as in the first scenario we discussed above. In a way, this is the "worst of both worlds" microservices anti-pattern.

Figure 4 depicts services C1, C2, and C3, which are all dependent on each other but are packaged and deployed as individual services. This is worse than the other two anti-patterns because it not only increases the overall surface area of the system with multiple applications, it also provides no flexibility or loose coupling in managing the lifecycle of these apps independently, which is one of the main advantages of transitioning to microservices architecture.

Figure 4. A distributed monolith

Identify the bounded context in business domains to define the boundaries for each microservice. We'll discuss how business and technical capabilities should be managed to help with the scope, design and interfaces of microservices in more detail later in the article.

To summarize, the Distributed Monolith anti-pattern comes with the following pros and cons.

Pros:

  • Smaller applications to deploy.
  • Relatively more independence in managing each business function separately.

Cons:

  • Dependent services are deployed separately, causing more challenges in terms of managing the interaction between those services running in different run-time contexts.
  • None of the services can be modified or upgraded without adversely impacting the rest of the services that are logically part of the same application.

Leadership insight

Distributed monoliths are hard to spot. They usually develop out of the technical team’s good intentions to break down a large application into microservices, and to the untrained eye, it may well look like a sound cloud-native practice.

The defense against the distributed monolith is solid architecture governance and formal design reviews where teams can be alerted to the practice before progressing too far with development.

Death Star

There are three main capabilities we typically need in distributed computing environments when using microservices:

  • resilience,
  • security, and
  • observability.

Let’s briefly look at each of these capabilities and see how an ungoverned/undergoverned microservices architecture poses challenges.

Resilience

  • Traffic management and routing is not centralized.
  • Service discovery needs to be implemented by each application.
  • Other networking requirements like location transparency and load balancing become complicated.
  • Microservices-related requirements like service retry, timeout, and circuit-breaker policies are not centrally managed and are enforced in a decentralized manner.
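To illustrate the kind of resilience logic each application ends up carrying when these policies are not centrally managed, here is a minimal circuit-breaker sketch (the class and thresholds are invented for illustration, not taken from any particular library): after a number of consecutive failures the circuit "opens" and calls fail fast until a cool-down period elapses:

```python
import time

# Minimal circuit-breaker sketch. Without a centralized, configuration-based
# solution (e.g. a service mesh), every application re-implements or embeds
# logic like this for each downstream dependency.

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures   # consecutive failures before opening
        self.reset_after = reset_after     # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None
        self.clock = clock

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0                  # any success resets the count
        return result
```

A caller wraps each remote invocation in `breaker.call(...)`; once the downstream service has failed `max_failures` times in a row, further calls fail immediately instead of waiting on a dead dependency.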

Security

Sometimes we need to secure the interaction between two microservices; for example, only the microservices that present an authorized security certificate (TLS) can call a service. This is not possible in an architecture where any microservice can willy-nilly call any other microservice. Securing microservices communication is critical in cloud-native applications.

Observability

Observability includes capabilities like application and system monitoring and distributed tracing to find out which specific components of a system are running slower or are experiencing failure. This has become one of the most important capabilities in microservices-based architectures.
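The core mechanism behind distributed tracing is simple to sketch: every inbound request gets (or keeps) a trace ID, which is forwarded on each downstream call so that spans recorded by different services can be stitched together later. The services, header name, and in-memory span store below are all hypothetical stand-ins:

```python
import uuid

# Hypothetical trace-propagation sketch. SPANS stands in for a real tracing
# backend; the "x-trace-id" header name is an assumption for this example.

SPANS = []

def record_span(service, operation, trace_id):
    SPANS.append({"service": service, "op": operation, "trace": trace_id})

def handle_checkout(headers):
    # Reuse the caller's trace ID if present, otherwise start a new trace.
    trace_id = headers.get("x-trace-id") or uuid.uuid4().hex
    record_span("checkout", "POST /checkout", trace_id)
    # Forward the same trace ID on the downstream call.
    call_payment({"x-trace-id": trace_id})
    return trace_id

def call_payment(headers):
    record_span("payment", "POST /charge", trace_id=headers["x-trace-id"])

trace = handle_checkout({})   # no inbound trace header: a new trace begins
```

Because both spans share one trace ID, a tracing backend can reconstruct the request's path across services and pinpoint where it slowed down or failed.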

We will look at these three capabilities in detail one more time in part three of this article series and explain how a solution like a service mesh can help in all three areas.
Without such a centralized, configuration-based solution, each application needs to implement the code or use embedded libraries to enable these features.
The Death Star architecture has been well documented as a microservices anti-pattern in which APIs call APIs that call still more APIs. Arnon Rotem-Gal-Oz, in his book SOA Patterns, discussed the SOA anti-pattern called "The Knot", which is another name for the Death Star pattern.

Figure 5. The Death Star anti-pattern

The Death Star anti-pattern evolves as organizations transition their application architectures to leverage more and more microservices and container technologies without a good service interaction or governance strategy. In this scenario, any service can invoke any other service in the organization without orchestration capabilities in the middle, such as proper traffic management (ingress/egress), security between services, and, last but not least, comprehensive monitoring and observability. The more microservices there are, the bigger the Death Star and its unintended consequences and architecture debt.

As we can see in Figure 5, several organizations experienced the Death Star anti-pattern and had to come up with other patterns such as service registry/discovery, traffic routing/splitting/throttling, mTLS-based service authentication, service graphs, and distributed tracing to address the challenges posed by the lack of services governance.
As described in articles "Virtual Panel: Microservices Interaction and Governance Model - Orchestration v Choreography" and "Virtual Panel: Microservices Communication and Governance Using Service Mesh", a proper service interaction model, using either service orchestration or choreography techniques and definitely a service-mesh solution, will help avoid this anti-pattern.
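A choreography approach can be sketched in a few lines. In this hypothetical example (the event bus, event names, and services are invented for illustration), services subscribe to domain events rather than calling each other point to point, so the publisher never accumulates direct dependencies on its consumers and the call graph cannot grow into a Death Star:

```python
from collections import defaultdict

# Hypothetical choreography sketch: a shared event bus decouples publishers
# from subscribers. The order service publishes an event; it neither knows
# nor cares which other services react to it.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
audit_log = []

# Downstream services react independently to the same event.
bus.subscribe("order.placed", lambda e: audit_log.append(("billing", e["id"])))
bus.subscribe("order.placed", lambda e: audit_log.append(("shipping", e["id"])))

def place_order(order_id):
    # No direct calls to billing or shipping: just one published event.
    bus.publish("order.placed", {"id": order_id})

place_order("o-42")
```

Adding a third consumer (say, notifications) is a new subscription, not a change to the order service, which is exactly the coupling property the Death Star lacks.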

Let’s look at some of the pros (not many) and cons of this approach.

Pros:

  • Not really an advantage, but this model makes it easier initially to deploy microservices even though long-term support is going to be a challenge.
  • In Dev teams’ defense, this anti-pattern typically occurs over a long period of deploying microservices without a good governance plan.

Cons:

  • No centralized policy enforcement for any security requirements.
  • No good way to monitor the calls between microservices.

Leadership insight

Death Star is the natural result of developing microservices when APIs are allowed to call other APIs without an orchestration or choreography strategy.
In other words, if your organization is developing and deploying microservices at full speed and these services are allowed to call each other without a proper mediation component in the architecture, it’s just a matter of time before the Death Star manifests and becomes evident.

Jenga Tower and Square Wheel

The rapid evolution of modern architecture and cloud-native design inspired the open-source community and specialized software vendors to write modularized and reusable software like never before.

The industry can take advantage of a vast array of ingenious solutions offered as tools, modules, services, frameworks, and all sorts of components. Many of them are available for free or come as part of a license and subscriptions to software vendors. Some of these are offered in packages and are designed to complement each other, offering basic capabilities that would take a long time to build from scratch.

Organizations have been responding to this evolution by moving away from "not invented here" (NIH) and "do it yourself" (DIY) anti-patterns towards the "best of breed" (BOB) architecture model.

BOB architectures embrace the diversity of tools offered by vendors and the open-source community in order to obtain the best possible implementation of the basic design patterns needed to support their systems. It is not uncommon to see modern systems running on frameworks such as Spring Boot using a variety of add-ins like Spring Cloud, Spring Data, and Spring AOP. Modern architectures will combine other technologies like Kafka and Spark to achieve complementary capabilities.

However, breaking away from NIH and DIY anti-patterns and towards BOB must be done with caution, as moving the pendulum from one extreme to the other could cause two undesired effects: Jenga Tower and Square Wheel.

Jenga Tower/Logo Slide/Frankenstein

Nitin Borwankar, who wrote the first of the 97 Things Every Software Architect Should Know, points out that using the latest technologies just because we can -- or, in his humorous example, to include them in our resumes -- is a practice that should be avoided as it brings unintended consequences.

Organizations often put their best technologists in charge of defining the tech stack for a system and somewhat expect them to take advantage of the newest and greatest available tools. The imbalance happens when this expectation becomes much more important than the consequences of using too many tools, some of which were never intended to be used together or were never tested in the same ecosystem.

We usually see this anti-pattern manifest as a Logo Slide: a tech-stack slide full of logos and abbreviations that receives a place of prominence in architecture documents boasting about the abundance of the latest technologies combined to produce the ultimate tech stack.

At a superficial level, the abundance of latest and greatest tech on the Logo Slide could convey the impression of a highly innovative and cutting-edge tech stack. After all, who doesn’t like the idea of working with fancy and powerful tools?

The problem comes as pure math: the more black-box tools in the stack, the more things will need to be stitched together, configured, and supported. This undesired effect inspired the other name of the anti-pattern: Frankenstein. Just as the fictional monster had large stitches gruesomely holding it together, a high number of tools that work well independently may, when forced into the same ecosystem, display gruesome artificial stitching that will spread throughout the entire stack.

We have been observing an escalation of malfunction and incidents caused by tools that work well separately but, when combined to support a specific business need, do not perform as a coherent and stable ecosystem.

There are also hidden costs of maintenance and support that usually manifest themselves after the solution delivery. Vendors and the open-source community have been updating their products more and more often, and each individual tool needs to be maintained and patched after every release. Regression and integration tests with the other tools must be done to ensure the updates have not adversely affected the integration with the ecosystem. The more tools we use in the apps, the more integration, tests, patches, and exponentially increasing updates we need. This scenario of constant change in the toolset explains the Jenga Tower name. Just like the game in which any move can bring the whole tower down, each individual tool’s release has the potential to bring the whole stack down if it disturbs its relationship with the ecosystem.
Another trap in the arbitrary pursuit of tools happens when multiple tools are used to provide a small subset of their capabilities. This usually comes up as "we use this for that, and that for that, and that for that ..."

In this scenario, multiple tools are chosen but only the capabilities in which they are the best of breed are used, and the result could be a system that, for example, uses 20% of five tools’ capabilities instead of 80%-100% of the capabilities of one tool that might not be as attractive.

The end result resembles a Jenga Tower whose blocks are piled up together at the cost of fragility.

Leadership insight

Leaders can spot the Jenga Tower anti-pattern when the ecosystem that composes the business application seems to have too many moving parts. Some of the signs to watch for are:

  • subcomponents that pass QA but fail integration tests as there are too many other components to integrate with;
  • cumbersome and time-consuming integration tests;
  • too many components that need constant patching, causing instability in the business app; and
  • too many bugs that cannot be fixed because they are inside black-box components, requiring the development of custom workarounds.

Leaders should foster the use of the latest modern and solid technologies but should also introduce focused technical leadership over the integration of the parts. Do not expect that because components work well independently they will automatically compose a great run-time ecosystem.

Architecture reviews should pay special attention to this anti-pattern to avoid validating the integrated solution on the merits of the individual components only. The stack needs to be reviewed, validated, and approved as a whole.

Square Wheel

Square Wheel happens when a tool of preference can be used to provide parts of the capabilities needed but the tool ultimately compromises the sanity of the architecture or causes unintended side effects.

This can be caused by another anti-pattern, the Golden Hammer, in which architects and teams have a tool of preference that they are well versed in and know how to use to produce the necessary results, but at the cost of an architectural compromise. For example, data engineers could choose to encapsulate business logic in stored procedures if an RDBMS is their tool of preference.

Another cause could be related to the Jenga Tower and it comes from the appetite to use a tool either because it’s the latest and greatest or because the team wants to claim that they have used such a tool.

But the most common appearance of Square Wheel comes when the tool does part of the work needed and it does it well but at a cost, since the tool cannot be decoupled from unintended behavior and work needs to be done to eliminate or mask the unintended behavior.

Regardless of the cause, the Square Wheel anti-pattern refers to a tool that either does a poor job at the intended core capability or delivers that capability with detrimental consequences.

Examples include:

  • enabling verbose tracing on an entire database to track historical transactions, and
  • using AOP to inject business logic.

Leadership insight

The Square Wheel anti-pattern usually emerges from technical teams that are either untrained, and thus tend to choose the same tools to solve different problems without going through the architectural due diligence of assessing specialized tools, or technical teams that are led by strong technologists who tend to favor experimentation over stability of the resulting ecosystem.

Regardless of the cause, organizations must foster strong architecture-review practices to ensure the best choices of tools for all use cases and that the ecosystem resulting from the combination of all selected tools will be stable and maintainable.

The following table summarizes the anti-patterns we have discussed so far.

Monolithic Hell
  • Characteristics: Monolithic apps are migrated to cloud platforms as is, without any refactoring.
  • How to avoid it: Have architecture controls in place that look for the anti-pattern by examining the planned business functions and cadence of deployments to determine whether the application to be moved to the cloud will still operate as one large black box.

Distributed Monolith
  • Characteristics: Also known as a Microlith; a single monolithic application is artificially broken down into multiple services and deployed as separate apps, but all these services remain tightly coupled.
  • How to avoid it: Establish solid architecture governance and formal design reviews where teams can be alerted to the anti-pattern before progressing too far with development.

Death Star
  • Characteristics: Evolves as organizations transition their application architectures to leverage more and more microservices and container technologies without good service orchestration.
  • How to avoid it: Design patterns like service mesh and sidecar can help manage the interaction between microservices.

Jenga Tower/Logo Slide/Frankenstein
  • Characteristics: An ultimate tech stack assembled from an abundance of the latest technologies, boasted about in architecture documentation full of logos and abbreviations.
  • How to avoid it: Foster the use of the latest modern technologies, but introduce focused technical leadership over the integration of the parts. Architecture reviews should avoid validating the integrated solution on the merits of the individual components only.

Square Wheel
  • Characteristics: A tool that either does a poor job at the intended core capability or delivers that capability with detrimental consequences.
  • How to avoid it: Foster strong architecture-review practices to ensure the best choice of tools and that the ecosystem resulting from the combination of all selected tools is stable and maintainable.


The microservices and cloud architecture anti-patterns we discussed affect the way development teams design, code, and deploy their services to production environments.

We conclude that new and emerging architecture models, if not adopted properly, could lead to architecture and design deviations and anti-patterns.

Architecture adoption pendulum

Getting the architecture right is like finding the balance of a swinging pendulum. If we are not careful, the software we develop can end up at one of the two extremes of the architecture pendulum: control of destiny and speed.

Control of destiny

On one hand, as shown on the left side of the diagram in Figure 6, development teams could start writing software components from scratch for everything they need, instead of using or reusing existing solutions.

This is commonly known as "reinventing the wheel" (RTW), NIH, or DIY software development. In the architecture adoption pendulum, we call this the "anti-patterns zone" (APZ). In this zone, teams would like to have more control of software development with no dependencies on other teams that may be creating common services for the application development team to consume.

This can be identified by the following characteristics:

  • Every project is a green field.
  • Technologies, not architecture, drive everything.
  • There is a lot of code to maintain in the long run.
  • There is no agility because each team is working in isolation and potentially creating the whole solution from scratch.

These challenges can be addressed by following practices like these:

  • Manage software as a product and establish organization-level product backlogs, roadmaps, and feature-delivery timelines, instead of at the individual team or project level.
  • Prioritize the product features based on what’s best for the whole organization, not what’s best for a specific team.
  • Leverage projects as means/vehicles to deliver the product features to production.
  • Identify dependencies between different projects earlier in the lifecycle and address any delivery risks as early in the process as possible.
  • Foster cross-team collaboration and timely communication, both essential for an agile and value-driven software delivery process.

Figure 6. Architecture adoption pendulum

Speed

On the other hand, we have seen development teams using the latest technologies arbitrarily without much thought into the overall strategic architecture and long-term consequences of using the technology for technology’s sake.

As the industry changes faster and faster, the pendulum has swung from the left side all the way to the right. This approach also falls into the APZ because teams there tend to try different technologies together, even though those technologies may not be compatible with each other.

Ultimately, these diverse technologies may not provide a cohesive software architecture and will make the overall system brittle in terms of extensibility and prone to security vulnerabilities. This is where anti-patterns like Square Wheel, Jenga Tower/Logo Slide, and Frankenstein emerge.

Following are the characteristics that you may see in projects that fall into this APZ.

  • Projects are used as R&D labs to experiment with a bunch of technologies because the development teams want to show senior management that they are using cutting-edge technologies.
  • Teams play with technologies for technology’s sake.
  • There is too much experimentation without any business value.

Architecture Goldilocks zone: Best of both worlds

As we can see in Figure 6, it’s critical to attain a balance between the two extremes. In this ideal zone, teams can enjoy architecture and technology stability by properly managing the technology use/reuse as well as dependencies on those technologies (loose coupling).

We call this the Architecture adoption Goldilocks zone. Some of the big rules that help with achieving this balance are:

  • use and reuse of best-of-breed technologies and frameworks;
  • single and centralized implementation for each business and technical capability; and
  • service-layer abstraction for commonly used business and technical functions like authentication, user authorization, data caching, customer notifications, and so on.
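To make the last rule concrete, here is a minimal Python sketch of a service-layer abstraction for the caching capability. All class and function names are hypothetical; the point is that client code depends on the abstraction, so a Redis- or Memcached-backed implementation could later be swapped in without touching the callers.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional


class CacheService(ABC):
    """Abstraction over the caching capability. Applications depend on
    this interface rather than on a specific caching product."""

    @abstractmethod
    def get(self, key: str) -> Optional[Any]: ...

    @abstractmethod
    def put(self, key: str, value: Any) -> None: ...


class InMemoryCache(CacheService):
    """Simple dict-backed implementation; a Redis-backed class could
    replace it behind the same interface."""

    def __init__(self) -> None:
        self._store: Dict[str, Any] = {}

    def get(self, key: str) -> Optional[Any]:
        return self._store.get(key)

    def put(self, key: str, value: Any) -> None:
        self._store[key] = value


def lookup_customer(cache: CacheService, customer_id: str) -> str:
    """Client code sees only the CacheService abstraction."""
    cached = cache.get(customer_id)
    if cached is not None:
        return cached
    record = f"customer-record-{customer_id}"  # stand-in for a real back-end fetch
    cache.put(customer_id, record)
    return record
```

The same pattern applies to the other common functions listed above, such as authentication or customer notifications: one centralized implementation behind a stable interface.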

In their book Continuous Delivery in Java, Abraham Marín-Pérez and Daniel Bryant describe the evolution of language, application architecture, and deployment platforms that propelled the goal of continuous delivery of business features to customers.

It's important to remember that today's shiny leading-edge technology could be tomorrow's technical and security debt. Best practices like cloud-native architecture and 12-factor apps help us get to and stay in the zone.

In future articles in this series, we'll explore more about this balance and the architectural and organizational changes that can accelerate cloud-native adoption.

Organizations need to find the right balance between creating DIY solutions and arbitrarily adopting the latest technologies for technology's sake. A cloud-native common services platform helps to achieve this balance. Such an approach consists of different types of services for addressing business, technical, and platform infrastructure requirements while providing a good level of abstraction to the client applications.

Common services platform

A common services-based development platform can help organizations successfully adopt a cloud-native architecture strategy with the right balance of developing solutions from scratch and using outside technologies. This common platform uses the cloud hosting model as the foundation and provides just enough abstraction to the actual implementation of specific technologies.

The common development platform should allow for seamless changes in the future so that new capabilities can be plugged into the architecture without major design or code changes. Some of the big rules to look for in the architecture adoption zone are:

  • security,
  • stability, and
  • agility.

Cloud-native patterns and practices

The digital transformation to cloud-native platforms typically occurs in three different areas:

  • application,
  • platform, and
  • process.

Let’s discuss each of these areas in detail.

Application level

Development teams should embrace best practices like 12-factor apps and containerization when adopting cloud-native architecture in their organizations. Services-based solutions are the critical components of this transformation.
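As one concrete example of a 12-factor practice, factor III ("Config") keeps configuration in the environment rather than in code, so the same build artifact can run unchanged in every environment. A minimal Python sketch (the variable names and defaults are illustrative, not part of any standard) might look like:

```python
import os


def load_config() -> dict:
    """12-factor app, factor III: read configuration from environment
    variables so the same build runs in dev, test, and production."""
    return {
        # Connection string injected by the platform; local fallback for dev.
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        # Numeric settings arrive as strings and must be converted.
        "cache_ttl_seconds": int(os.environ.get("CACHE_TTL_SECONDS", "300")),
        # Feature flags as booleans parsed from "true"/"false".
        "feature_flag_new_ui": os.environ.get("FEATURE_NEW_UI", "false").lower() == "true",
    }
```

On a cloud platform, these variables would typically be supplied by the deployment manifest or the container orchestrator rather than by a local shell.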

Emerging microservices patterns like service mesh ease the digital transformation by offering essential capabilities like resiliency, service-level security, and observability out of the box.
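To give a feel for the resilience capability, here is a toy circuit breaker in Python. This is purely illustrative (all names are hypothetical): a service mesh provides this behavior transparently at the infrastructure level, so application code normally does not implement it by hand.

```python
import time


class CircuitBreaker:
    """Toy circuit breaker: after `max_failures` consecutive failures the
    circuit opens, and calls fail fast until `reset_timeout` has elapsed,
    protecting callers from a struggling downstream service."""

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

With a service mesh, an equivalent policy is declared in configuration and enforced by the sidecar proxies, keeping this logic out of the business code entirely.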

We’ll discuss the service-mesh architecture and how it can help with cloud-native adoption in Part 3 of this article series.

Platform level

The platform-level cloud-native transformation includes optimizing the deployment models and taking advantage of techniques like containerization and platform as a service (PaaS) using technologies like Kubernetes (K8s) and Pivotal Cloud Foundry (PCF).

An ideal cloud-native platform should support the following capabilities:

  • agile deployment of web applications and services without a lot of infrastructure overhead and delays,
  • run-time elasticity of services using auto scalability or similar resiliency mechanisms, and
  • end-to-end monitoring and alerting features for production support and troubleshooting in case any errors or outages occur.
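As an illustration of the elasticity point above, the core scaling rule documented for Kubernetes' Horizontal Pod Autoscaler can be sketched in a few lines of Python (the replica bounds here are hypothetical defaults, not platform values):

```python
import math


def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Scaling rule used by Kubernetes' Horizontal Pod Autoscaler:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 replicas averaging 90% CPU against a 60% target would scale out to 6 replicas, since 4 × 90 / 60 = 6.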

Deployment process

The third focus area is the deployment itself. The deployment processes include CI/CD automation and cloud-native DevOps.
CI/CD automation helps with faster and more frequent deployments of applications, services, and other platform resources like back-end services, in both test and production environments.

And cloud-native DevOps practices help with monitoring services in real time and providing the support necessary to meet the availability, performance, and SLA targets of the services hosted on the platform.

We’ll delve into cloud-native DevOps in more detail in Part 4 of this article series.

Conclusions

Anti-patterns may arise when an organization adopts a microservices architecture. IT teams need to balance the architecture adoption pendulum between reinventing the wheel for every application and arbitrarily reusing technologies and application frameworks for reuse's sake.

In the next article, we’ll dive into one of the most popular design patterns in microservices architecture: the service mesh. We’ll discuss what a service mesh is and how it provides cloud-native applications with capabilities like traffic management, security, and observability. A service mesh is built from a collection of design patterns, including Sidecars, Circuit Breakers, and Fault Injection, and we’ll examine these patterns in detail.

About the Authors

Srini Penchikala is a senior IT architect for Global Manufacturing IT at General Motors in Austin, Texas. He has over 25 years of experience in software architecture, design, and development, and has a current focus on cloud-native architectures, microservices and service mesh, cloud data pipelines, and continuous delivery. Penchikala is the co-creator and lead architect in implementing an enterprise cloud-native service-mesh solution in the organization. Penchikala wrote Big-Data Processing with Apache Spark and co-wrote Spring Roo in Action, from Manning. He is a frequent conference speaker, is a big-data trainer, and has published several articles on various technical websites.

Marcio Esteves is the director of applications development for Tokyo Marine HCC in Houston, Texas, where he leads solution architecture, QA, and development teams that collaborate across corporate and business IT to drive adoption of common technologies with a focus on revenue-generating, globally deployed, cloud-based systems. Previously, Esteves was chief architect for General Motors IT Global Manufacturing, leading architects and cloud-native engineers responsible for digital-transformation-leveraging technologies such as machine learning, big data, IoT, and AI/cloud-first microservices architectures. Esteves developed the vision and strategy and led the implementation of an enterprise cloud-native service-mesh solution at GM with auto-scalable microservices used by several critical business applications. He also serves as board technical advisor for VertifyData in downtown Austin.
