Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity


Key Takeaways

  • Architecture stabilization gaps and anti-patterns can emerge as part of a hasty microservices adoption.

  • Understanding the caveats and pitfalls of historic paradigm shifts should enable us to learn from previous mistakes and position our organizations to thrive on the latest technology waves.

  • It’s important to know the pros and cons of different architectural styles like monolithic apps, microservices, and serverless functions.

  • Architecture evolution follows a repeating cycle: an initial stage of not knowing the best practices of the new paradigm accelerates technical debt; as the industry develops new patterns to address the gaps, teams adopt the new standards and patterns.

  • Consider the architecture patterns as strategies that favor rapid technological evolution while protecting the business apps from volatility.
     

Technology trends such as microservices, cloud computing, and containerization have been advancing so quickly in recent years that most of these technologies are now part of the day-to-day duties of top IT engineers, architects, and leaders.

We live in a cloud-enabled world. However, being cloud-enabled does not mean being cloud-native. In fact, it’s not only possible but also dangerous to be cloud-enabled without being cloud-native.

Before we examine these trends and discuss what architectural and organizational changes corporations should implement to take full advantage of a cloud-enabled world, it is important to look at where we have been, where we are, and where we are going.

Understanding the caveats and pitfalls of the historic paradigm shifts should allow us to learn from previous mistakes and position our organizations to thrive on the latest waves of this technology.

Anti-patterns

As we briefly walk through this evolution, we’ll explore the concept of anti-patterns: common responses to recurring problems that are usually ineffective and risk being counterproductive.

Later articles in this series will describe these anti-patterns in more detail.

Architecture evolution

For the last 50 years or so, software architecture and application hosting models have experienced major transformation from mainframes to microservices and serverless.

Figure 1 shows this evolution of architecture models and the paradigms they promoted.

Figure 1: Architecture evolution from mainframe to cloud and microservices

Centralized

Back in the ’70s and ’80s, mainframes were the way of computing. Mainframes are based on a centralized data storage and computing model, with basic client terminals used for data entry and data display on primitive screens.

The original mainframe computers used punch cards, and most of the computation happened within batch processes. There was no online processing; nothing was handled in real time, so every request waited for a batch run.

Some evolution happened within the mainframe paradigm with the introduction of online processing and user interface terminals. The overall paradigm of a massive central unit of processing contained within the four walls of a single organization still had a "one size fits all" approach, however, and that was only partially able to supply the capabilities needed by most business applications.

Centralized -> decentralized

Client/server architecture put most of the logic on the server side and some of the processing on the client. Client/server was the first attempt in distributed computing to replace the mainframe as the primary hosting model for business applications.

In the first few years of this architecture, the development community was still writing software for client/server using the same procedural, single-tier principles it had used for mainframe development, which resulted in anti-patterns like spaghetti code and the blob. This organic growth of software also produced other anti-patterns like the big ball of mud. The industry had to find ways to stop teams from following these bad practices, which meant researching what it took to write sound client/server code.

This research effort mapped out several anti-patterns and best-practice design and coding patterns. It introduced a major improvement called object-oriented programming (OOP), with inheritance, polymorphism, and encapsulation features, along with paradigms to deal with decentralized data (as opposed to a mainframe with one version of the truth) and guidance for how the industry could cope with the new challenges.

The client/server model was based on three-tier architecture consisting of presentation (UI), business logic, and data tiers. But most of the applications were written using two-tier models with a thick client encapsulating all presentation, business, and data-access logic, directly hitting the database. Although the industry had started to discuss the need to separate presentation from business from data access, that practice didn’t really become vital until the advent of Internet-based applications.

In general, this model was an improvement over the mainframe’s limitations, but the industry soon ran into its own limits, such as the need to install the client application on every user’s computer and the inability to scale individual business functions at a fine-grained level.

Decentralized -> connected/shared (www)

During the mid ’90s, the Internet revolution occurred, and a completely new paradigm arrived with it. Web browsers became the client software, while web and application servers hosted all the processing and logic. The World Wide Web (www) paradigm promoted a true three-tier architecture with presentation (UI) code hosted on web servers, business logic (API) on application servers, and the data stored in database servers.

The development community started to migrate from thick (desktop) clients to thin (web) clients, driven mainly by ideas like service-oriented architecture (SOA) that reinforced the need for a three-tiered architecture and fueled by improvements to client-side technologies and the rapid evolution of web browsers. This move sped up time to market and required no installation of the client software. But developers were still creating software as tightly coupled designs, leading to jumble and other anti-patterns.

The industry in response came up with evolved three-tiered architectures and practices such as domain-driven design (DDD), enterprise integration patterns (EIP), SOA, and loosely coupled techniques.

VM-hosted -> cloud-hosted

The first decade of the 21st century saw a major transformation in application hosting when hosting became available as a service in the form of cloud computing. Application use cases requiring capabilities like distributed computing, networking, storage, compute, etc., became much easier to provision with cloud hosting, at a reasonable cost compared to traditional infrastructure. Consumers could also take advantage of the elasticity of resources to scale up and down based on demand, paying only for the storage and compute resources they used.

The elastic capabilities introduced in IaaS and PaaS allow for a single instance of a service to scale as needed, eliminating duplication of instances for the sake of scalability. However, these capabilities cannot compensate for the duplication of instances for other purposes, such as having multiple versions, or as a byproduct of monolith deployments.

The appeal of cloud-based hosting was that dev and ops teams no longer had to worry about server infrastructure. The cloud offered three different hosting options:

  • Infrastructure as a service (IaaS): Developers choose the server specifications to host their applications and the cloud provides the hardware, OS, and networking. This is the most flexible of all three flavors but does put some burden on the dev teams, which must specify the servers.
  • Platform as a service (PaaS): Developers only need to worry about their application and configuration. The cloud provider takes care of all the server infrastructure, network, and monitoring tasks.
  • Software as a service (SaaS): The cloud provider offers the actual applications hosted on the cloud so the client organizations can consume the application as a whole without responsibility even for the application code. This option provides the software services out of the box but it’s inflexible if the client needs to have any custom business functions outside of what’s offered by the provider.

PaaS became the sweet spot among the cloud options because it allows developers to host their own custom business application without having to worry about provisioning or maintaining the underlying infrastructure.

Even though cloud hosting encouraged modular application design and deployment, many organizations found it enticing to lift and shift legacy applications that had not been designed to work on an elastic distributed architecture directly to the cloud, resulting in a somewhat modern anti-pattern called "monolith hell".

To address these challenges, the industry came up with new architecture patterns like microservices and 12-factor apps.

Moving to the cloud also presented the industry with the challenge of managing application dependencies on third-party libraries and technologies. Developers started struggling with too many options and too few criteria for selecting third-party tools, and we started seeing some dependency hell.

Dependency hell can occur at different levels:

  • Libraries: Improper management of library dependencies (JARs in the Java world and DLLs in the .NET world) can lead to problems. For example, a typical Spring Boot app includes 140+ library JAR files; make sure you don’t package unnecessary libraries in your application.
  • Classes: Clarify all dependencies between objects inside the application. For example, a Controller class depends on a Business Service class, and the Service class in turn depends on a Repository class (see the sketch after this list). Take time to review dependencies within the application during code reviews and ensure that there are no incorrect dependencies.
  • Services: If you are using microservices in your system, verify that there are no direct dependencies between different services.
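
To make the class-level guidance concrete, the following is a minimal Java sketch of the layering described in the second bullet. The Order* names are hypothetical placeholders, not part of any specific framework.

```java
// Minimal sketch of the layered dependencies described above (hypothetical Order* names).
// The controller depends only on the service, and the service only on the repository;
// a controller that reaches directly into the repository is exactly the kind of
// incorrect dependency a code review should flag.
class OrderRepository {
    String findStatus(String orderId) {
        return "SHIPPED"; // stand-in for a real data-access call
    }
}

class OrderService {
    private final OrderRepository repository; // allowed: service -> repository

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    String orderStatus(String orderId) {
        return repository.findStatus(orderId);
    }
}

class OrderController {
    private final OrderService service; // allowed: controller -> service

    OrderController(OrderService service) {
        this.service = service;
    }

    String handleStatusRequest(String orderId) {
        return service.orderStatus(orderId); // never calls OrderRepository directly
    }
}
```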

Library-based dependency hell is a packaging challenge; the latter two are design challenges. A future article in this series will examine these dependency-hell scenarios in more detail and offer design patterns for avoiding their unintended consequences and preventing an uncontrolled proliferation of technologies.

Microservices: Fine-grained reusability

Software design practices like DDD and EIP have been available since around 2003, and some teams were already developing applications as modular services, but traditional infrastructure, like heavyweight J2EE application servers for Java applications and IIS for .NET applications, didn’t help with modular deployment.

With the emergence of cloud hosting, and especially of PaaS offerings like Heroku and Cloud Foundry, the developer community had everything it needed for true modular deployment of scalable business apps. This gave rise to the microservices evolution. Microservices offered the possibility of fine-grained, reusable functional and non-functional services.

Microservices gained popularity around 2013-2014. They are powerful and enable smaller teams to own the full-cycle development of specific business and technical capabilities. Developers can deploy or upgrade code at any time without adversely impacting other parts of the system (client applications or other services). The services can also be scaled up or down based on demand, at the individual service level.

A client application that needs a specific business function calls the appropriate microservice without requiring the developers to code the solution from scratch or to package the solution as a library in the application. The microservices approach encouraged contract-driven development between service providers and service consumers. This sped up overall development time and reduced dependencies among teams. In other words, microservices made the teams more loosely coupled and accelerated the development of solutions, which is critical for organizations, especially startups.
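
To illustrate that contract-driven style, here is a minimal sketch of a microservice endpoint. The customer-service example and the Spring Boot annotations are assumptions made for illustration, not a prescription from the architectures discussed here.

```java
// Minimal sketch of a contract-driven microservice endpoint (hypothetical customer service,
// Spring Boot assumed). Consumers depend only on the HTTP contract (path, verb, payload),
// not on the implementation, so the service can be redeployed or upgraded independently.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    // Contract: GET /customers/{id} returns a customer representation.
    @GetMapping("/customers/{id}")
    public Customer getCustomer(@PathVariable String id) {
        // Hypothetical lookup; a real service would delegate to a repository or downstream system.
        return new Customer(id, "ACTIVE");
    }

    public record Customer(String id, String status) {}
}
```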

Microservices also help establish clear boundaries between business processes and domains (e.g., customer versus order versus inventory). They can be developed independently within that vertical modularity known as the "bounded context" in the organization.

This evolution also accelerated the evolution of other good practices like DevOps, and it provided agility and faster time to market at the organization level. Each development team would own one or more microservices in its domain and be responsible for the whole process of designing, coding, deploying to production as well as post-production support and maintenance.

However, similar to the previous architecture models, the microservices approach ran into its own issues.

Legacy applications that had not been designed from the ground up as microservices started being cannibalized in attempts to force them into a microservices architecture, leading to the anti-pattern known as monolith hell. Other attempts artificially broke monolithic applications into several microservices even though the resulting microservices were not isolated in terms of functionality and still heavily depended on other microservices broken out of the same monolithic application. This is the anti-pattern called microliths.

It's important to note that monoliths and microservices are two different patterns, and the latter is not always a replacement for the former. If we are not careful, we can end up creating tightly coupled, intermingled microservices. The right option depends on the business and scalability requirements of an application’s functionality.

Another undesired side effect of the microservices explosion is the so-called "Death Star" anti-pattern. Microservices proliferation without a governance model in terms of service interaction and service-to-service security (authentication and authorization) often results in a situation where any service can willy-nilly call any other service. It also becomes a challenge to monitor how many services are being used by different client applications without decent coordination of those service calls.

Figure 2 shows how organizations like Netflix and Twitter ran into this nightmare scenario and had to come up with new patterns to cope with a "death by Death Star" problem.

Figure 2: Death Star architectures due to microservices explosion without governance

Although the examples depicted in figure 2 might look like extreme cases that only happen to giants, do not underestimate the exponential destructive power of cloud anti-patterns. The industry must learn how to operate a weapon that is massively larger than anything the world has seen before. "Great power involves great responsibility," said Franklin D. Roosevelt.

Emerging architecture patterns like service mesh, sidecar, service orchestration, and containers can be effective defense mechanisms against malpractices in the cloud-enabled world.

Organizations should understand these patterns and drive adoption sooner rather than later.

Quick tour through critical cloud-first design patterns

Service mesh

With the emergence of cloud platforms, and especially of container orchestration technologies like Kubernetes, the service mesh has been gaining attention. A service mesh is the bridge between application services that adds capabilities like traffic control, service discovery, load balancing, resilience, observability, security, and so on. It allows applications to offload these capabilities from application-level libraries and lets developers focus on business logic.
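
To show what that offloading can look like from the application’s side, here is a minimal Java sketch. The inventory-service name and the assumption of a Kubernetes/Istio-style sidecar are hypothetical illustrations, not a required setup.

```java
// Minimal sketch of a service call behind a service mesh (hypothetical inventory-service).
// The application issues a plain HTTP call and leaves retries, timeouts, mutual TLS,
// load balancing, and tracing to the sidecar proxy instead of wiring them into library code.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InventoryClient {

    private final HttpClient http = HttpClient.newHttpClient();

    public String getStockLevel(String sku) throws Exception {
        // "inventory-service" is a logical service name; the mesh resolves it and applies
        // whatever traffic policy has been configured (assumption: Kubernetes DNS plus a sidecar).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory-service/stock/" + sku))
                .GET()
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```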

Some service mesh technologies like Istio also support features like chaos injection so that developers can test the resilience and robustness of their application and its potentially dozens of interdependent microservices.

Service mesh fits nicely on top of platform as a service (PaaS) and container as a service (CaaS), and enhances the cloud-adoption experience with the above-mentioned common platform services.

A future article will delve into the service-mesh-based architectures with discussion on specific use cases and comparison of solutions with and without service mesh.

Serverless architecture

Another trend that has received a lot of attention in the last few years is serverless architecture, also known as serverless computing. Serverless goes a step further than the PaaS model in that it fully abstracts server infrastructure from the application developers.

In serverless, we write business services as functions and deploy those functions to the cloud infrastructure. Some examples of serverless technologies are AWS Lambda, Spring Cloud Function, Google Cloud Functions, and Microsoft Azure Functions.
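
Since Spring Cloud Function is one of the options listed above, here is a minimal sketch of a business function written that way. The notification example and its names are hypothetical; the same idea applies to the other platforms through their own programming models.

```java
// Minimal Spring Cloud Function sketch (hypothetical customer-notification function).
// The business logic is an ordinary java.util.function.Function bean; the adapter on the
// classpath decides whether it runs as an HTTP endpoint, an AWS Lambda, and so on.
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class NotificationFunctionApp {

    public static void main(String[] args) {
        SpringApplication.run(NotificationFunctionApp.class, args);
    }

    // Exposed by Spring Cloud Function under the bean name "notifyCustomer".
    @Bean
    public Function<String, String> notifyCustomer() {
        return customerId -> "Notification queued for customer " + customerId;
    }
}
```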

The serverless model sits in between PaaS and SaaS in the cloud-hosting spectrum, as shown in the diagram below.

Figure 3: Cloud computing, containers, service mesh, and serverless

In a conclusion similar to that of the monolith-versus-microservices discussion, not all solutions should be implemented as functions. Also, we should not replace all microservices with serverless functions, just as we shouldn’t replace or break down all monolithic apps into microservices. Only fine-grained business and technical functions, like user authentication or customer notification, should be designed as serverless functions.

Depending on the application’s functionality and its non-functional requirements, like performance, scalability, and transaction boundaries, we should choose the appropriate monolith, microservices, or serverless model for each specific use case. It’s typical to need all three of these patterns in a single solution architecture.

If not designed properly, serverless solutions can end up becoming nanoliths, where each function is tightly coupled with other functions or microservices and cannot operate independently.

Container technologies

Complementary trends like container technologies emerged around the same time as microservices to help deploy services and apps in environments that offer true isolation of business services and scalability at the individual-service level. Container technologies like Docker, containerd, rkt, and Kubernetes complement microservices development very well. Nowadays, we rarely mention microservices without containers, or vice versa.

Monolith versus microservice versus serverless

As mentioned earlier, it’s important to know the pros and cons of the three architectural styles: monolithic apps, microservices, and serverless functions. A written case study on monolith versus microservices describes in detail one decision to avoid microservices.

Table 1 highlights the high-level differences between these three options.

Architecture style: Monolith
When to use: Application has different modules that are completely dependent on each other from a transactional context and requires immediate consistency for all data operations.
When not to use: Application modules can be broken down into atomic business or technical functions.
Examples for use: ERP or CRM systems

Architecture style: Microservice
When to use: Application modules are independent of each other in their run-time lifecycle and for transaction management. Data operations in each module can be performed in a stateless manner. If there are any dependencies between the modules, they can still be loosely coupled with eventual-consistency support. (Note: Sometimes teams artificially break down related functions into microservices and experience the limitations of the microservices model.)
When not to use: Application modules cannot be independently deployed and used without hard dependencies on other modules.
Examples for use: Customer service, order service, inventory service

Architecture style: Serverless
When to use: Application modules can be broken down into single functions, business or technical, with complete independence and individual scalability policies. The application can be completely shut down when there's no traffic, and dev teams don't have to care about the underlying infrastructure.
When not to use: Jobs that run for extended periods of time, CRUD services, or stateful services.
Examples for use: Authentication, notification, event streaming

Table 1: Service architecture models and when to use or avoid them

Stabilization gaps

It’s important for us to keep an eye on the anti-patterns that may develop in our software architecture and code over time. Anti-patterns not only cause technical debt but, more importantly, could drive subject-matter experts out of the organization. An organization could find itself left with only the people who don’t care about architecture deviations or anti-patterns.

After the brief history above, let’s focus on the stabilization gaps and anti-patterns that can emerge as part of a hasty microservices adoption.

Specific factors like the team structure in an organization, the business domains, and the skillsets in a team determine which applications should be implemented as microservices and which should remain as monolith solutions. But we can look at some general considerations for choosing to design a solution as a microservice.

Eric Evans’s book, Domain-Driven Design (DDD), transformed how we develop software. Evans promoted the idea of looking at business requirements from a domain perspective rather than from one based on technology.

Microservices can be considered a derivation of the book’s aggregate pattern. But many software development teams are taking the microservices design concept to the extreme, attempting to convert all of their existing apps into microservices. This has led to anti-patterns like monolith hell, microliths, and others.

Following are some of the anti-patterns that architecture and dev teams need to be careful about:

  • monolith hell
  • microliths
  • Jenga tower
  • logo slide (also known as Frankenstein)
  • square wheel
  • Death Star

We’ll look in more detail at each of these anti-patterns in the next article.

Evolving architecture patterns

To close the stabilization gaps and address the anti-patterns found in different application hosting models, the industry has come up with evolved architecture patterns and best practices.

These architecture models, stabilization gaps, and patterns are summarized in the table below.

Hosting model: Centralized
Description: Central data storage and computing model. Client terminals only used for data entry and data display.
Stabilization gaps/anti-patterns: -
Patterns: -

Hosting model: Decentralized
Description: Client/server architecture with most of the application logic handled on the server side. Basic validation and some processing occur on the client.
Stabilization gaps/anti-patterns: Spaghetti code, jumble
Patterns: Object-oriented programming

Hosting model: Connected/shared
Description: Web applications promoting a three-tier architecture with presentation logic (UI) hosted on web servers, business logic (API) on application servers, and the data in database servers.
Stabilization gaps/anti-patterns: Jumble
Patterns: Domain-driven design, enterprise integration patterns, SOA

Hosting model: Cloud hosted
Description: Cloud-computing model that transformed how applications are hosted and scaled. Gave us a third option of "host on the cloud" in addition to the good old "buy versus build" options.
Stabilization gaps/anti-patterns: Monolith hell, dependency hell
Patterns: Microservices

Hosting model: Microservices
Description: Fine-grained service model that encapsulates specific business functions into lightweight apps (microservices) with clear boundaries between business domains.
Stabilization gaps/anti-patterns: Monolith hell, microliths, Death Star
Patterns: Service mesh, sidecar, service orchestration

Table 2: Application hosting models, anti-patterns, and patterns

Figure 4 shows all these architecture models, the stabilization gaps in the form of anti-patterns, and the evolved design patterns and best practices.

Figure 4: Architecture evolution and application-hosting models

What history teaches us

Figure 5 lists the steps of architecture evolution, including the initial stage of not knowing the best practices in the new paradigm, which accelerates the technical debt. As the industry develops new design patterns to address the stabilization gaps, teams adopt the new standards and patterns in their architecture.

Figure 5: Architecture models and adoption of new patterns

Business and technology

IT leaders must protect their investment against the ever-growing rapid transformation of technologies while providing a stable array of business applications running on a constantly evolving and optimizing technological foundation. IT executives across the globe have been dealing with this problem more and more frequently.

We should embrace the evolution of technology, but not at the price of constant instability in the apps that support the business. Disciplined, systematic architecture should be able to deliver just that. Consider the patterns discussed in this article series as strategies that favor rapid technological evolution while protecting the business apps from volatility. Let’s explore how that can be done in the next article.

Conclusions

The various hosting models, from mainframes to the recent cloud-native architecture, impact the way we develop, deploy, and maintain business apps. Each time the industry discovered a new hosting model, teams faced challenges in harvesting the full benefits of the architecture. This led to unintended consequences like architecture deviations and anti-patterns, which caused significant technical debt. Over time, new design patterns evolved to address the stabilization gaps opened by the new hosting model.

Technical-debt management plays a crucial role in the overall systems as well as in teams’ health. IT leaders who don’t deal with technical debt in a timely fashion risk creating software-related and organizational harm. Technical debt can feed on itself and create even more debt while causing the institutionalization of bad practices and repelling top talent.

When these signs are present, immediately stop and evaluate. Then take firm action.

Make sure you empower your teams to address technical debt in all its forms.

Future articles in this series will examine a common services platform that my organization developed during the adoption of microservices architecture. We’ll also discuss how the company leveraged different cloud-native architecture components like containers, PaaS, and service mesh.

The next article will dive into the anti-patterns that teams should be aware of and cloud-native design patterns they should adopt in their architectures. We'll discuss the details of adopting an enterprise cloud-native service-mesh strategy that'll help with a lot of these capabilities. Finally, we'll share several recommendations for architecture and organizations.


About the Authors

Srini Penchikala is a senior IT architect for Global Manufacturing IT at General Motors in Austin, Texas. He has over 25 years of experience in software architecture, design, and development, and has a current focus on cloud-native architectures, microservices & service mesh, cloud data pipelines, and continuous delivery. Penchikala is the co-creator and lead architect in implementing an enterprise cloud-native service-mesh solution in the organization. Penchikala wrote Big-Data Processing with Apache Spark and co-wrote Spring Roo in Action, from Manning. He is a frequent conference speaker, is a big-data trainer, and has published several articles on various technical websites.

Marcio Esteves is the Director of Applications Development for Tokyo Marine HCC in Houston, Texas, leading solution architecture, QA, and development teams that collaborate across corporate and business IT to drive adoption of common technologies, with a focus on revenue-generating, globally deployed, cloud-based systems. Previously, Esteves was the chief architect for General Motors IT Global Manufacturing, leading architects and cloud-native engineers responsible for digital transformation leveraging technologies such as machine learning, big data, IoT, and AI/cloud-first microservices architectures. Esteves developed the vision and strategy and led the implementation of an enterprise cloud-native service-mesh solution at GM, with auto-scalable microservices used by several critical business applications. He also serves as a board technical advisor for VertifyData in downtown Austin.

 
