
DevOps and Cloud InfoQ Trends Report – June 2022

Key Takeaways

  • Data Observability will help organizations better understand and troubleshoot their data-intensive systems.
  • Serverless Databases and "Serverless as a Baseline Expectation" are becoming table stakes for cloud services being used by developers. We see an increase in the offerings by public cloud providers and the adoption of serverless and distributed SQL databases for cloud-native applications.
  • FinOps practices continue to mature. Awareness of cloud spending has increased, and offerings have evolved to provide better insights. Furthermore, adoption of the FinOps Foundation's practices, combined with companies’ own practices, has increased as well, which is why we believe this topic should move to the early adopters stage.
  • eBPF and WASM are exciting new technologies being used to unlock new approaches to observability, monitoring, and security within the service mesh and ingress domains. We believe these are in the innovators stage.
  • Supply chain attacks have increased dramatically over the past year, and new innovations have been introduced to help combat this. We expect to see more companies improving how they manage their software supply chain as attacks continue to increase in complexity.
  • Low- or no-code platforms continue to mature, especially for internal tooling and automation uses. This maturing of the platforms in both usability and effectiveness will allow innovator companies to start to leverage this technology better.
  • We also see the trend of "Developer Experience as Decision Driver" gaining more traction, particularly within the cloud platform space. The role of "Platform Engineer" is emerging within many sizes of organizations to support the building of related platform abstractions, APIs, and tooling.

This article summarizes how the InfoQ Editors and friends of InfoQ currently see the “cloud computing and DevOps” space, which focuses on fundamental infrastructure and operational patterns, the realization of patterns in technology frameworks, and the design processes and skills that a software architect or engineer must cultivate.

Both InfoQ and the QCon conference series focus on topics that we believe fall into the “innovator, early adopter, and early majority stages” of the diffusion of technology, as defined in Geoffrey Moore’s book Crossing the Chasm. We try to identify ideas that fit into what Moore referred to as the early market, where “the customer base is made up of technology enthusiasts and visionaries who are looking to get ahead of either an opportunity or a looming problem.” We are also looking for ideas that are likely to “cross the chasm” to broader adoption. It is perhaps worth saying, in this context, that a technology’s exact position on the adoption curve can vary. For example, microservices are widely adopted among Bay Area companies but may be less widely adopted and perhaps less appropriate elsewhere.

In this edition of the cloud computing and DevOps trend report, one participant identified an emerging trend named “Data Observability.” With this emerging technology, organizations can fully understand the health of the data in their systems. It automates data monitoring along pipelines to facilitate the troubleshooting and prevention of data issues, so that data teams can improve their time to value and scale their productivity. In addition, with data observability, communication across teams improves through evidence-based information on data management ecosystems.

Furthermore, there is an increased awareness of the impact cloud computing has on the environment. Sustainability Accounting moves from “innovators” to “early adopters.” Microsoft, Google, and AWS operate large-scale data centers globally and thus consume vast amounts of energy. Yet, they are more energy-efficient than traditional data centers. An IDC report from last year estimates that between 2021 and 2024, the shift to cloud computing should prevent at least 629 million metric tons of CO2 emissions. Participants in this report have also identified this trend and covered the evolution of cloud services for carbon footprint awareness of workloads in news items.

In addition, there has been increased adoption of cross-cloud and cloud-native hybrid approaches. We believe this topic should move from the “innovators” to the “early adopters” stage of our graph. Public cloud providers continue to invest in offerings: Google with Anthos for Virtual Machines, Microsoft with Container App Services, and AWS with Outposts, adding two new form factors for space-constrained locations.

Support for cross-cloud application and deployment platforms also continues to increase. New CNCF sandbox projects like KubeVela and Porter simplify packaging and deploying application code across cloud environments. 

As noted by a few participants, eBPF and WASM are providing some exciting opportunities for monitoring, observability, and service mesh technologies. For example, Cilium is using the technology to produce a sidecar-less service mesh, although, as the article highlights, there is still debate about whether this is the best approach for all use cases. For observability, eBPF can improve the monitoring of networking on a server or container without the need to install numerous agents. For example, Teleport, an open-source multi-protocol identity-aware access proxy, uses eBPF hooks at the start of an SSH session to capture detailed audit logs.

Supply chain security has been a hot topic with several high-profile security events over the past year. A report from Aqua Security found that supply chain security attacks increased by 300% from 2020 to 2021 without a similar increase in the level of security. In response to the increase in attacks, the Open Source Security Foundation (OpenSSF) announced the Alpha-Omega Project to improve supply chain security across open source software (OSS) projects. The project will focus on improving the security posture of the most widely deployed and critical OSS projects. GitHub and Google released a new version of the OpenSSF’s Scorecards project. Scorecards is an automated security tool that identifies risky supply chain practices in open source projects. We expect to see more announcements and technologies released in this area over the coming year.

Several contributors called out that no-code or low-code solutions are becoming more prevalent in DevOps usages. As noted by Tracy Miranda, Head of Open Source at Chainguard, “with low code and no code, ‘citizen’ developers are empowered to build applications and help unclog always-under-pressure IT departments.” These solutions can be used to help push along improvements in internal tooling and automation. We expect to see more developments in this area as well. 

For context, here is what the topic graph looked like for the first half of 2021. The 2022 version is at the top of the article.

The following is a lightly edited copy of the corresponding internal chat log between several InfoQ cloud computing DevOps topic editors and InfoQ contributors, which provides more context for our recommended positioning on the adoption graph.

Lena Hall, Head of AWS Developer Relations:

Serverless as a Baseline Expectation: Early Adopters, to replace serverless databases. Serverless experiences are becoming a baseline expectation for cloud services among developers. Serverless capability is frequently associated with application development, as it initially offered serverless functions and logic execution. Later we started seeing serverless databases, and now we notice more products adding serverless functionality to meet customer expectations. Serverless is becoming a fundamental need rather than just a nice-to-have, enabling design for change and flexibility amid the dynamic, event-driven, microservice-oriented nature of modern cloud development. For example, we have recently seen Amazon Redshift add a serverless option to reduce the operational burden, remove the need to set up and manage infrastructure, and make it easier to get insights by querying data in the data warehouse. We also see serverless being adopted to improve the developer experience by deploying machine learning models for inference without configuring or managing infrastructure.

At the same time, the serverless capability of cloud services is also an essential step on the path to building shared self-serve infrastructure within organizations (which is one of the pillars of the Data Mesh concept). Shared self-serve infrastructure will rely on the self-service capabilities of products or services required for infrastructure provisioning. Still, the goal of shared self-serve infrastructure has a more extensive scope—reducing duplicated effort and skills needed to operate the data pipelines technology stack and infrastructure in each domain within an organization. For example, it will help provide an abstraction that individual data products can work with and include capabilities for observability, catalog and lineage, federated Identity management, scalability, unified access control, analytical engines, data storage, and others.

Developer Experience as Decision Driver: Early Adopters. Developer experience is becoming an increasingly crucial factor in developers’ decisions to advocate for adopting a specific technology. A lack of good developer experience may mean a risk or a bottleneck for significant capabilities, like scalability, automation, speed of innovation, or even security. Many developer communities prefer niche, highly specialized cloud computing providers (for example, Vercel, Netlify, Stripe, etc.) when they see a superior developer experience for their workloads and the ability to deploy instantly and scale automatically with no supervision. Top cloud computing leaders can learn from smaller players and invest heavily in developer experience to continue meeting customer expectations. We are starting to see more progress and prioritization in this area; one example is AWS Proton, which automates container and serverless deployment management, bringing together infrastructure as code, CI/CD pipelines, and observability self-service capabilities without requiring expertise in each of the underlying services involved. Another major area of developer experience we should watch is bridging the gap between local development environments and production cloud deployments.

Data Observability: Innovators. Data Observability is an emerging technology that helps us better understand and troubleshoot our data-intensive systems. We have been familiar with the concept of observability for a while, as there has been a lot of work done by wonderful companies like Honeycomb; Charity Majors has some of the best resources on it. The concept of data observability was first coined in 2019 by Barr Moses of Monte Carlo Data. To quote her, “Data Observability is an organization’s ability to fully understand the health of the data in their systems.” Data downtime refers to periods when data is partial, erroneous, missing, or otherwise inaccurate. Introducing data observability eliminates data downtime by applying best practices learned from DevOps to data pipeline observability. It uses automated monitoring, alerting, and triaging to identify and evaluate data quality and discoverability issues, which leads to healthier pipelines and more productive teams.
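To make the idea concrete, the automated monitoring described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation; the thresholds and the `check_table_health` helper are hypothetical, and real data-observability platforms learn expected freshness and volume from historical pipeline behavior rather than hard-coding them.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds; real platforms infer these from pipeline history.
FRESHNESS_SLA = timedelta(hours=2)   # data must land at least every 2 hours
MIN_EXPECTED_ROWS = 1000             # volume floor for this table

def check_table_health(last_loaded_at, row_count, now=None):
    """Return a list of 'data downtime' alerts for one pipeline table."""
    now = now or datetime.now(timezone.utc)
    alerts = []
    if now - last_loaded_at > FRESHNESS_SLA:
        alerts.append("freshness: table has not loaded within its SLA")
    if row_count < MIN_EXPECTED_ROWS:
        alerts.append("volume: row count below expected minimum")
    return alerts

# Example: a table that last loaded 3 hours ago with too few rows
stale = datetime.now(timezone.utc) - timedelta(hours=3)
print(check_table_health(stale, row_count=250))
```

A healthy table returns no alerts; a table that has missed its freshness window or lost volume surfaces a "data downtime" alert that can be triaged by the owning data team.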

Democratized AI for developers: Early Adopters. We see more adoption and availability of pre-trained models in the cloud that developers can use to address common use cases without machine learning expertise. This removes the barrier for a bigger audience of developers to start benefiting from rather complex technology; good examples are Amazon AI services like Rekognition, and Azure Cognitive Services.

Renato Losio, Principal Cloud Architect at Funambol and AWS Data Hero:

FinOps: Early Adopters. Almost unknown a year ago, the term “FinOps” became very popular, most of the time to indicate practices that companies were already doing under other names.

Serverless databases: Maybe still Early Adopters, or already Early Majority, but becoming a mandatory offering for any data-related service. In the last 12 months, we have seen a big jump in the offering and adoption of serverless and distributed SQL databases for cloud-native applications. Serverless options are now available in the data warehouse and analytics space. Some providers, including AWS, are cutting corners on serverless concepts (for example, no data API, or no scale to zero) to offer elasticity to a broader audience and range of workloads.
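As a rough illustration of why scale to zero matters for these databases, compare the cost model of an always-on provisioned instance with a serverless one. This is a simplified sketch with hypothetical prices and helper functions (`provisioned_cost`, `serverless_cost`), not any provider's actual billing formula; the minimum-capacity case reflects offerings that, as noted above, do not scale fully to zero.

```python
def provisioned_cost(hours, hourly_rate):
    # A provisioned instance bills for every hour, idle or not.
    return hours * hourly_rate

def serverless_cost(active_hours, hourly_rate, min_capacity_hours=0.0):
    # A true scale-to-zero service bills only for active hours; offerings
    # that keep a minimum capacity warm bill that floor as well.
    return max(active_hours, min_capacity_hours) * hourly_rate

# Hypothetical workload: active 2 hours/day over a 30-day month at $0.50/hour
month_hours, active = 30 * 24, 2 * 30
print(provisioned_cost(month_hours, 0.50))            # 360.0
print(serverless_cost(active, 0.50))                  # 30.0 (scale to zero)
print(serverless_cost(active, 0.50, month_hours / 4)) # 90.0 (capacity floor)
```

For a spiky workload, the gap between the first and second numbers is the elasticity benefit; the third shows how a minimum-capacity floor erodes it.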

Edge Computing: Now Early Majority; the options are everywhere, often used without users even knowing it. Edge computing has added new options, delivering data processing, analysis, and storage close to end users. Cloud providers have extended their infrastructure and services to more locations, integrating on-premises data centers to support new scenarios at the edge.

Sustainability Accounting: Early Adopters, with software-as-a-service solutions from most cloud providers to record and reduce deployments’ environmental impact. It is often simply a proxy for cost management/reductions with a high risk of greenwashing.

Chaos Engineering Practices: Early Majority. More tools are available, even as a service, with larger or more significant companies finally doing it, or at least pretending to do it in some form.

Steef-Jan Wiggers, Technical Integration Architect at HSO and Microsoft Azure MVP:

FinOps: From Innovators to Early Adopters. I, too, agree with Renato. More companies have become aware of their cloud spending and have adopted the practices from the FinOps Foundation. Moreover, public cloud vendors are expanding and enhancing their cost management services: AWS with AWS Billing Conductor (part of the AWS Cloud Financial Management solution) and the AWS Management Console home page; Google with Suspend/Resume of Compute Engine and the Unattended Project Recommender; and, finally, Microsoft. Besides the public cloud vendors, many other commercial third-party offerings are available, like Cast AI, CloudZero, and Yotascale.
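The cost awareness these tools provide boils down to attributing spend to the teams that incur it. Below is a minimal showback sketch with a hypothetical `showback` helper and invented billing line items; real FinOps tooling works from detailed provider billing exports and enforced tagging policies.

```python
from collections import defaultdict

def showback(line_items):
    """Group raw billing line items into per-team totals; untagged
    resources fall into an 'unallocated' bucket for follow-up."""
    totals = defaultdict(float)
    for item in line_items:
        team = item.get("tags", {}).get("team", "unallocated")
        totals[team] += item["cost"]
    return dict(totals)

# Invented line items, loosely shaped like a cloud billing export
bill = [
    {"service": "compute", "cost": 120.0, "tags": {"team": "payments"}},
    {"service": "storage", "cost": 40.0,  "tags": {"team": "payments"}},
    {"service": "compute", "cost": 75.0,  "tags": {"team": "search"}},
    {"service": "network", "cost": 15.0},  # missing tags
]
print(showback(bill))
```

The size of the "unallocated" bucket is itself a common FinOps health metric, since untagged spend cannot be driven down by any one team.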

The latest State of FinOps Report 2021 presented a clear takeaway: cloud financial management (i.e., FinOps) has become a mainstream practice in organizations across all sizes of cloud spend.

Next to FinOps, Sustainability Accounting also moves from Innovators to Early Adopters. Public Cloud Providers Microsoft, Google, and AWS are vast energy consumers due to their global spread of data centers. They, and many others, signed the climate pledge. To make customers aware of the carbon footprint of their workloads, AWS released a Customer Carbon Footprint Tool, and Google previewed the Carbon Footprint service. At the same time, Microsoft already had a Sustainability Calculator out since 2020.

Cross-Cloud/Cloud-Native Hybrid Approaches: Also from Innovators to Early Adopters. Public cloud providers continue to invest in these offerings. For instance, Google released Anthos for Virtual Machines in preview, Microsoft moved Container App Services to General Availability (GA), and AWS added two new Outposts form factors for space-constrained locations.

Data Ops: Data Ops was getting a lot of traction last year, especially in data governance, data lineage, data quality, and data catalog tooling. Azure Purview, recently rebranded to Microsoft Purview, is a good example; it includes data governance from Microsoft Data and AI, and compliance and risk management from Microsoft Security. The service is complemented by identity and access management, threat protection, cloud security, endpoint management, and privacy management capabilities. In my view, it is still in the “Early Adopters” stage.

Daniel Bryant, Director of DevRel @ Ambassador Labs | InfoQ News Manager | QCon PC:

Platform Engineering, Golden Paths, and Developer Experience: Building on Lena’s comments about developer experience becoming a decision driver, I’m also seeing this in the platform space. Cloud-native engineers are increasingly becoming “full cycle developers” who need to code, ship, and run applications. They are also being told to “shift left the ilities,” like security, reliability, extensibility, etc., and engineers can’t do this without supporting tools, abstractions, and platforms. The role of “Platform Engineer” is emerging within organizations of all sizes to build these supporting platforms.

From a technical perspective, the Spotify team has long talked about how they create “Golden Paths,” and Netflix has a similar approach with “Paved Roads.” The Team Topologies authors have covered these organizational and cultural aspects, and the popularity of the book is going from strength to strength. For readers interested, I covered this topic in more depth in my recent KubeCon EU talk “From Kubernetes to PaaS to … Err, What’s Next?”

eBPF and Wasm—A Proxy extension showdown? As mentioned by others here, eBPF and Wasm are two hot technologies that, although fundamentally different, compete across many overlapping use cases. One use case that came to the forefront at KubeCon EU was implementing extensibility within network proxies. The Cilium folks have done great work with eBPF to provide service mesh-like networking extensions in the kernel, and the Google and Envoy teams have made a lot of progress adding extensibility via Wasm plugins to Envoy Proxy.

I’ve joked that this is a “showdown” when, in reality, these are two complementary technologies that should provide a lot of benefits for end users of service meshes and other cloud networking solutions. Hopefully, this is a precursor to some of the currently complicated networking stacks getting pushed down into the platform, which brings me to my next point.

Simplifying K8s Networking—Envoy Gateway: Another exciting announcement in the cloud networking space at KubeCon EU was the Envoy Gateway project. This builds on the work undertaken by the Emissary-ingress, Project Contour, and Kubernetes Gateway API communities, with the goal of simplifying ingress within Kubernetes. This drive to simplification is part of a large effort underway in the space.

Secure Software Supply Chain: Two initiatives in this space, Software Bills of Materials (SBOMs) and Supply-chain Levels for Software Artifacts (SLSA), are getting a lot of attention. I’ve personally learned a lot from Kelsey Hightower about the need for these projects. A lot of vendors are currently emerging in this space, as the tech is complicated to get right and the stakes are high; just look at the increasing supply chain hacks. As much as build-time security is important, we’re also starting to see resurgent interest in runtime security detection, e.g., the Falco Project (which, coincidentally, uses eBPF).
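As a small illustration of why SBOMs help here: once a build emits a component inventory, checking it against known-bad dependency versions is straightforward. The deny-list, the `flag_components` helper, and the document below are all hypothetical; real pipelines query a vulnerability database such as OSV rather than hard-coding versions.

```python
import json

# Hypothetical deny-list; real tooling queries a vulnerability database.
KNOWN_BAD = {("log4j-core", "2.14.1"), ("event-stream", "3.3.6")}

def flag_components(sbom_json):
    """Return (name, version) pairs in a CycloneDX-style SBOM document
    that match the deny-list."""
    sbom = json.loads(sbom_json)
    return [(c["name"], c["version"])
            for c in sbom.get("components", [])
            if (c["name"], c["version"]) in KNOWN_BAD]

# Invented SBOM fragment, loosely shaped like CycloneDX JSON
sbom_doc = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "guava", "version": "31.1-jre"},
    ],
})
print(flag_components(sbom_doc))
```

The point is that an SBOM turns "are we affected by this CVE?" from an archaeology exercise into a lookup, which is exactly what the Log4Shell response exposed as missing in many organizations.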

Feynman Zhou, CNCF & CDF Ambassador, Product Manager:

New topics in Early Innovators

  • Software secure supply chain (DevSecOps)
  • Application definition and orchestration (new application frameworks are evolving in the CNCF sandbox, such as KubeVela and Porter)
  • Low-code platform
  • WebAssembly

New topics and changes in Early Adopters

  • Policy as Code (OPA and Kyverno are modernizing Kubernetes policy management)
  • Cross-cloud/Cloud-native hybrid approaches (Hybrid cloud and multi-cloud are a new trend in enterprise architecture)
  • Service Mesh
  • Service Proxy

Moving to Early Majority

  • Cloud FaaS/BaaS should be renamed to FaaS/BaaS (Cloud-agnostic serverless platforms are emerging, such as OpenFaaS, OpenFunction, Knative, Kubeless, etc.)
  • Chaos engineering practice

Mostafa Radwan, Principal @ CloudRoads | DevOps Editor:


Adding Quantum Cloud Computing

Almost all of the major cloud providers today offer quantum computing services as part of their cloud offerings, either via direct access to quantum computers (e.g., IBM) or via simulators (e.g., Amazon Braket).

Adding eBPF

There are many applications of eBPF including the monitoring and debugging of applications. 

Early Adopters

Moving FinOps to Early Adopters from Innovators

More professionals are being trained on the principles and practices of FinOps. It’s not yet widely adopted, but many organizations are looking into it or hiring FinOps professionals to be part of their cloud operations organization.

Moving Cross-Cloud/Multi-Cloud to Early Adopters from Innovators

Organizations large and small have realized they don’t want cloud lock-in and prefer to harness the unique features each cloud provider offers that optimize for security and cost. Also, some enterprises are considering or already using cloud-native clusters that span multiple clouds.

Shaaron Alvares, Enterprise Agility @ Salesforce | Agile & DevOps Transformation Leader:

I looked at the graph from last year, and I don’t see that much has changed. But there are two trends that could be in the Innovators or Early Adopters stage:

Touchless (or no-code) IT-Ops instead of no-Ops

Touchless (or no-code) Automation Software 

And although I’m not sure we can call it a practice, DevOps DORA metrics could be in the Early Adopters stage.
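For readers unfamiliar with them, two of the four DORA metrics (deployment frequency and change failure rate) can be computed directly from a deployment log. This is a minimal sketch with an invented `dora_metrics` helper and synthetic data, not the DORA research program's own tooling.

```python
def dora_metrics(deployments):
    """Compute two of the four DORA metrics from a deployment log.
    Each entry is (day_of_period, caused_failure)."""
    total = len(deployments)
    failures = sum(1 for _, failed in deployments if failed)
    days = max(day for day, _ in deployments) if deployments else 0
    return {
        "deployment_frequency_per_day": total / days if days else 0.0,
        "change_failure_rate": failures / total if total else 0.0,
    }

# Hypothetical 10-day period with 20 deployments, 2 of which caused incidents
log = [(d % 10 + 1, d in (3, 7)) for d in range(20)]
print(dora_metrics(log))
```

The remaining two DORA metrics, lead time for changes and time to restore service, need commit and incident timestamps rather than a simple deployment count.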

Matt Campbell, Lead Editor, DevOps | VP, Cloud Platform @ D2L:

As noted by some other folks, supply chain security has seen a lot of development over the past year. The large increase in attacks has led to several new developments, and adoption has been quick as organizations move to combat these new threats.

The movement toward codifying everything continues to expand. Documentation as code (or continuous documentation) is seeing some new developments with companies wanting to streamline the documentation process and ensure critical documents are kept up to date. I see this in the Early Adopter phase.

SLOs as a tool for communicating outcomes and goals have continued to see adoption over the past year. The increased understanding of the limitations of traditional incident reporting metrics (such as incident counts or MTT* metrics) means that alternatives are needed. I expect to see the various observability platforms start to add SLO-creation and tracking tooling soon. I see this as a late Innovator topic for now.
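Part of the appeal of SLOs over raw incident counts is that they translate directly into an error budget. Here is a minimal sketch of that arithmetic with a hypothetical `error_budget` helper; observability platforms would track this continuously over a rolling window rather than as a one-off calculation.

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Given an availability SLO (e.g., 0.999), report how much of the
    period's error budget has been consumed."""
    allowed_failures = (1 - slo_target) * total_requests
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "allowed_failures": allowed_failures,
        "budget_consumed": consumed,          # 1.0 means budget exhausted
        "budget_remaining": max(0.0, 1.0 - consumed),
    }

# Hypothetical month: 1M requests against a 99.9% SLO, 400 of them failed
print(error_budget(0.999, 1_000_000, 400))
```

A team that has burned most of its budget slows down risky releases; one with budget to spare can ship faster, which is a far more actionable signal than an incident count.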

A new trend that is exciting is what The VOID is currently doing in aggregating software incident reports into one location. This allows for interesting data analysis of these reports, which will hopefully lead to findings and improvements that increase software resiliency. I would put this industry-aggregated incident analysis into the Innovator box.
