
Software Architecture and Design InfoQ Trends Report—April 2021


Key Takeaways

  • In a cloud-native world, architects are reprioritizing the -ilities they consider most important. Innovative architects are designing for resilience, observability, portability, and sustainability.
  • Dapr and the Open Application Model are two ways to make building distributed systems easier, and it will be interesting to watch how they are adopted in the future.
  • The pendulum seems to be coming to rest, after swinging to extremes between monoliths and microservices. As a result, architects are relying on well-established patterns and designs that focus on high cohesion and low coupling, regardless of the underlying technology.
  • In fully-remote work environments, architects are finding new ways to communicate with their teams, and finding replacements for the water-cooler chats that were useful for gathering knowledge.
  • The next generation of GraphQL features, notably GraphQL Federation and GraphQL microservices, shows where companies with strong GraphQL adoption may go next.

Each year, the InfoQ editors discuss the current state of software architecture and design, and identify the key trends to watch, resulting in this report and the accompanying graph. This year, we’re letting you listen in to the discussion, on an episode of the InfoQ podcast. The editors were joined by Holly Cummins, an innovation leader on IBM’s corporate strategy team, and a previous speaker at QCon.

Designing for ___

We start by looking at which “-ilities” are most important to architects. A software architect is responsible for the cross-cutting concerns and making sure that individual components of a large system can work together seamlessly to meet overall objectives. In 2021, four areas we feel architects are concerned with are designing for resilience, designing for observability, designing for portability, and designing for sustainability.

Designing for resilience is vital for modern, distributed systems, where any individual component could fail but the overall system should remain available. In many ways, the ideas being implemented are not new; they are just becoming more important as distributed systems and modular architectures become more common. Daniel Bryant referred to the work done by David Parnas in the 1970s, and Michael Nygard’s more recent book, Release It!, as good sources for ideas regarding circuit breakers, timeouts, retries, and other fundamental requirements for a resilient system. What is new is finding ways to solve those problems across a system, such as using a cloud-native service mesh, or even building on a framework such as Dapr.
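As a minimal illustration of these patterns, not tied to any particular framework or mesh, the Python sketch below wraps a downstream HTTP call with a timeout, bounded retries, and a naive circuit breaker; the service URL and thresholds are hypothetical.

    import time
    import requests  # any HTTP client would do

    class CircuitOpenError(Exception):
        """Raised when the breaker is open and calls are short-circuited."""

    class CircuitBreaker:
        def __init__(self, failure_threshold=3, reset_timeout=30.0):
            self.failure_threshold = failure_threshold
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None

        def call(self, fn, *args, **kwargs):
            # While the breaker is open, fail fast instead of piling up requests.
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_timeout:
                    raise CircuitOpenError("circuit open; failing fast")
                self.opened_at = None  # half-open: allow one trial call
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0
            return result

    breaker = CircuitBreaker()

    def fetch_inventory():
        # Hypothetical downstream service; the timeout bounds how long we wait.
        for attempt in range(3):  # simple bounded retry with linear backoff
            try:
                response = breaker.call(
                    requests.get, "http://inventory.local/items", timeout=2.0
                )
                return response.json()
            except CircuitOpenError:
                raise
            except requests.RequestException:
                time.sleep(0.5 * (attempt + 1))
        raise RuntimeError("inventory service unavailable after retries")

In practice a service mesh or resilience library would apply these policies declaratively; the point is that the behavior is applied consistently across the system rather than reinvented in each component.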

Further down the adoption curve, there has been a steady increase in the adoption of asynchronous programming techniques and event-driven architectures. This adoption is the result of both a lower barrier to entry for implementing asynchronous patterns and a built-in benefit of increased system resilience.
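As a rough sketch of that asynchronous, event-driven style, the snippet below uses Python's asyncio with an in-process queue standing in for a real broker such as Kafka; the event shape and handler are illustrative only.

    import asyncio

    async def handle_order_placed(event: dict) -> None:
        # Each consumer reacts to events independently; a slow or failing
        # handler does not block the producer, which aids resilience.
        await asyncio.sleep(0.1)  # stand-in for I/O such as a database write
        print(f"reserved stock for order {event['order_id']}")

    async def consume(queue: asyncio.Queue) -> None:
        while True:
            event = await queue.get()
            try:
                await handle_order_placed(event)
            finally:
                queue.task_done()

    async def main() -> None:
        queue = asyncio.Queue()  # a real system would use Kafka, SQS, etc.
        consumer = asyncio.create_task(consume(queue))
        for order_id in (1, 2, 3):
            await queue.put({"type": "OrderPlaced", "order_id": order_id})
        await queue.join()
        consumer.cancel()

    asyncio.run(main())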

However, the flip side of event-driven architectures, and asynchronous systems in general, is that they are still difficult to reason about and understand, which ties into the rise of designing for observability. Very often, observability is seen as a run-time need: can we tell if the system is behaving as expected? But for architects, observability is becoming increasingly important as a design-time need: can we understand all the interactions occurring within a complex system?

In 2021, the innovators are finding ways to provide both the run-time and design-time observability benefits almost automatically. Removing the burden on developers to manually implement observability non-functional requirements makes it less likely that a key component will be missing from the big picture of the system. That, in turn, allows observability data to be used to create accurate, living architecture diagrams and a corresponding mental model of the system.
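OpenTelemetry is one common route to this kind of near-automatic instrumentation. The sketch below uses its Python SDK with a console exporter standing in for a real backend; it is a minimal illustration under those assumptions, not a recommendation of a particular stack.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire up tracing once; in production the exporter would point at a
    # collector or vendor backend rather than the console.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")

    def place_order(order_id: str) -> None:
        # Nested spans record the interactions between components; collected
        # across services, they can be assembled into a living system diagram.
        with tracer.start_as_current_span("place_order") as span:
            span.set_attribute("order.id", order_id)
            with tracer.start_as_current_span("charge_payment"):
                pass  # call the payment service here
            with tracer.start_as_current_span("reserve_inventory"):
                pass  # call the inventory service here

    place_order("abc-123")

Auto-instrumentation agents can emit similar spans for common frameworks without any of this code, which is what makes the design-time picture nearly free.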

Another focus area for architects is designing for portability, whether that's for multi-cloud or hybrid-cloud. In most cases, there is no reason for architects to design for the lowest common denominator just to enable true multi-cloud portability or to avoid vendor lock-in. However, especially after corporate acquisitions, CTOs are more likely to have systems that run in separate hosting environments, including AWS, Azure, GCP, and on-prem. When making decisions regarding standardization, architects need to pick their battles.

Holly Cummins has spoken on the subject of designing for sustainability, which is one of the new innovator trends. This is emerging because people are realizing the software industry is responsible for a level of carbon usage comparable to the aviation industry. Some of this is almost directly measurable, as the bill for compute usage is highly correlated to energy consumption. Where CTOs and architects can have an impact is by either reducing unnecessary compute usage or utilizing more sustainable cloud hosting options. Some cloud data centers run on 100% green energy, while data centers in Virginia are powered by coal. If latency is less of a concern, then it may make sense to host in Iceland instead of Virginia. While many companies look to reduce their hosting costs purely for economic reasons, some are choosing to make sustainability a priority, and architecting and deploying their systems accordingly.

Correctly-built distributed systems

The topic of microservices has steadily moved across the trends graph and has been categorized as a late majority trend for some time, as it has become easier to build distributed systems. However, we're continuing to see some pushback against the overuse of microservices as an attempt to solve all problems. In some cases, this has led to major reversals, such as going back to a monolith. As the pendulum stops swinging, it seems we're finally settling on a sane approach for most systems.

Some of the trends around building distributed systems, or modular monoliths, all come back to fundamental architectural principles, such as high cohesion and low coupling. Domain-Driven Design, while considered a late majority trend, continues to be emphasized by architects looking for good guidance on context mapping and identifying boundaries within a system. Similarly, the C4 model can be very useful to create a hierarchical set of architecture diagrams to help understand your system.
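As a tiny, hypothetical sketch of what low coupling between bounded contexts can look like in a modular monolith (the module and function names below are invented for illustration):

    # orders/api.py: the only module other contexts are allowed to import;
    # the orders context keeps its models and repositories private.
    from dataclasses import dataclass

    @dataclass
    class OrderSummary:
        order_id: str
        total: float

    def place_order(customer_id: str, items: list[dict]) -> OrderSummary:
        """Public operation of the orders context; callers never touch its tables."""
        total = sum(item["price"] * item["qty"] for item in items)
        # ...persist via the context's own repository, publish an OrderPlaced event...
        return OrderSummary(order_id="ord-1", total=total)

    # billing/service.py: a different bounded context depends only on the
    # published interface, never on the orders context's internal storage:
    # from orders.api import place_order

The same boundary could later become a service boundary, which is why context mapping matters regardless of whether the system is deployed as a monolith or as microservices.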

Data architecture

InfoQ is continuing to see innovation in the overlap between software architecture and data architecture. Data mesh, added to the graph last year, remains an innovator trend this year. It's joined by data gateways, which are somewhat like API gateways but focus on the data aspect. As microservices have led to a polyglot persistence layer, data gateways offer abstractions, security, scaling, federation, and contract-driven development features.

The role of the architect

We continue to look at the role software architects play in their organizations. Beyond the traditional “boxes and arrows” responsibilities, architects are serving as technical leaders and mentors to other team members. Architects also need to be able to communicate with many audiences, described by Gregor Hohpe as riding the architect elevator—talking to the CTO and other executives, then traveling down to the engine room to work with the developers.

For many teams, communication styles were heavily disrupted by the pandemic and by many companies adopting a long-term remote working strategy. Architects have lost the ability to learn by osmosis, simply by sitting in the same room as the developers and overhearing conversations. One helpful side effect is that this has led to more written communication, whether in IM chat rooms or architecture decision records, and those records stay up to date because teams regularly refer to them. The leading architects are finding ways to turn the constraints of a fully remote team to their advantage, and creating better software designs because of it.

Other topics

Dapr and the Open Application Model (OAM) were both introduced by Microsoft in late 2019. OAM is a specification for defining cloud-native applications and focuses on the application, rather than the container or orchestrator. Similarly, Dapr is a framework that has pluggable components meant to make cloud-native development easier. Although Microsoft was involved in their creation, both are open source projects, work on any cloud provider, and Dapr may become a CNCF project. Both Dapr and OAM have yet to see major adoption and are therefore clearly innovator trends to keep an eye on.
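To give a feel for Dapr's pluggable building blocks, the sketch below saves and reads state through the Dapr sidecar's HTTP API; the sidecar port and the statestore component name follow common Dapr defaults but should be treated as assumptions for any given environment.

    import requests

    DAPR_PORT = 3500      # default Dapr sidecar HTTP port (may differ per deployment)
    STORE = "statestore"  # name of the configured state store component (assumed)
    BASE = f"http://localhost:{DAPR_PORT}/v1.0/state/{STORE}"

    def save_order(order_id: str, status: str) -> None:
        # The state API takes a list of key/value pairs; the backing store
        # (Redis, Cosmos DB, etc.) is a swappable component, not a code concern.
        resp = requests.post(BASE, json=[{"key": order_id, "value": {"status": status}}])
        resp.raise_for_status()

    def get_order(order_id: str) -> dict:
        resp = requests.get(f"{BASE}/{order_id}")
        resp.raise_for_status()
        return resp.json()

    save_order("1001", "shipped")
    print(get_order("1001"))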

WebAssembly is another innovator trend. For architects, it will be interesting to see if it is used as just a supplement to web frameworks and mobile development, or if systems will be designed with WebAssembly in mind, and how that will manifest.

A final note about GraphQL, which crossed the chasm on the trends graph last year. Since then there has been innovation, particularly at Netflix, around the next generation of GraphQL functionality, notably GraphQL Federation and GraphQL microservices. Just as the sprawl created by microservices led to new patterns for managing that sprawl, companies that have invested heavily in GraphQL now need GraphQL Federation to help manage the resulting complexity. This isn't a problem every company will run into, but it is worth knowing about and watching where it goes in the future.

About the Authors

Thomas Betts is the Lead Editor for Architecture and Design at InfoQ, and a Sr. Principal Software Engineer at Blackbaud. For over two decades, his focus has always been on providing software solutions that delight his customers. He has worked in a variety of industries, including retail, finance, health care, defense and travel. Thomas lives in Denver with his wife and son, and they love hiking and otherwise exploring beautiful Colorado.

Daniel Bryant is the Director of Dev Rel at Ambassador Labs, and is the News Manager at InfoQ, and Chair for QCon London. His current technical expertise focuses on ‘DevOps’ tooling, cloud/container platforms and microservice implementations. Daniel is a leader within the London Java Community (LJC), contributes to several open source projects, writes for well-known technical websites such as InfoQ, O'Reilly, and DZone, and regularly presents at international conferences such as QCon, JavaOne, and Devoxx.

Holly Cummins is an innovation leader in IBM Corporate Strategy, and spent several years as a consultant in the IBM Garage. As part of the Garage, she delivers technology-enabled innovation to clients across various industries, from banking to catering to retail to NGOs. Holly is an Oracle Java Champion, IBM Q Ambassador, and JavaOne Rock Star. She co-authored Manning's Enterprise OSGi in Action.

Eran Stiller is the CTO and Co-Founder of CodeValue. As CodeValue’s CTO, Eran designs, implements, and reviews various software solutions across multiple business domains. With many years of experience in software development and architecture and a track record of public speaking and community contribution, Microsoft recognized Eran as a Microsoft Regional Director (MRD) since 2018 and as a Microsoft Most Valuable Professional (MVP) on Microsoft Azure since 2016.

This article is a summary of the AI, ML, and Data Engineering InfoQ Trends 2022 podcast and highlights the key trends and technologies in those areas.

In this annual report, the InfoQ editors discuss the current state of AI, ML, and data engineering and what emerging trends you as a software engineer, architect, or data scientist should watch. We curate our discussions into a technology adoption curve with supporting commentary to help you understand how things are evolving.

In this year’s podcast, the InfoQ editorial team was joined by external panelist Dr. Einat Orr, co-creator of the open source project LakeFS, co-founder and CEO at Treeverse, and a speaker at the recent QCon London conference.

The following sections in the article summarize some of these trends and where different technologies fall in the technology adoption curve.

The Rise of Natural Language Understanding and Generation

We see Natural Language Understanding (NLU) and Natural Language Generation (NLG) technologies as early adopter trends. The InfoQ team has published about recent developments in this area, including Baidu’s Enhanced Language RepresentatioN with Informative Entities (ERNIE), Meta AI’s SIDE, and Tel-Aviv University’s Standardized CompaRison Over Long Language Sequences (SCROLLS).

We have also published several NLP-related developments such as Google Research team’s Pathways Language Model (PaLM), EleutherAI’s GPT-NeoX-20B, Meta’s Anticipative Video Transformer (AVT), and BigScience Research Workshop’s T0 series of NLP models.

Deep Learning: Moving to Early Majority

Last year, as we saw more companies using deep learning algorithms, we moved deep learning from the innovator to the early adopter category. Since last year, deep learning solutions and technologies have been widely used in organizations, so we are moving it from early adopter to early majority category.

There were several publications on this topic as podcasts (Codeless Deep Learning and Visual Programming), articles (Institutional Incremental Learning based Deep Learning Systems, Loosely Coupled Deep Learning Serving, and Accelerating Deep Learning with Apache Spark and NVIDIA GPUs) as well as news items including BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) from BigScience research workshop, Google AI’s deep learning language model called Minerva and OpenAI’s open-source framework called Video PreTraining (VPT).

Vision Language Models

Interesting developments in image-processing-related AI models include DeepMind’s Flamingo, an 80B-parameter vision-language model (VLM) that combines separately pre-trained vision and language models and answers users’ questions about input images and videos.

Google’s Brain team has announced Imagen, a text-to-image AI model that can generate photorealistic images of a scene given a textual description.

Digital assistants, another interesting technology, are also now in the early majority category.

Streaming Data Analytics: IoT and Real-Time Data Ingestion

Streaming-first architectures and streaming data analytics have seen increasing adoption across companies, especially for IoT and other real-time data ingestion and processing applications.

Sid Anand’s presentation on building & operating high-fidelity data streams and Ricardo Ferreira’s talk on building value from data in-motion by transitioning from batch to stream-based data processing are excellent examples of how stream-based processing has become a must-have in strategic data architectures. Also, Chris Riccomini, in his article The Future of Data Engineering, discussed the important role stream processing plays in overall data engineering programs.

Chip Huyen spoke at last year’s QCon Plus online conference on Streaming-First Infrastructure for Real-Time ML and highlighted the advantages of a streaming-first infrastructure for real-time and continual machine learning, the benefits of real-time ML, and the challenges of implementing real-time ML.

As a reflection of this trend, streaming data analytics and technologies such as Spark Streaming have moved to the late majority category. The same applies to Data Lake as a Service, which gained further adoption last year with products like Snowflake.
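As a minimal sketch of the streaming-first style described in the talks above, the PySpark Structured Streaming job below reads IoT sensor events from Kafka and maintains a windowed average per device; the broker address, topic, and schema are assumptions for illustration, and the Kafka source requires the spark-sql-kafka package.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json, window
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("iot-stream").getOrCreate()

    schema = StructType([
        StructField("device_id", StringType()),
        StructField("temperature", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Read sensor events continuously from Kafka (broker and topic are assumptions).
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "sensor-readings")
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Maintain a rolling average temperature per device over 1-minute windows.
    averages = (
        events.withWatermark("event_time", "2 minutes")
        .groupBy(window(col("event_time"), "1 minute"), col("device_id"))
        .avg("temperature")
    )

    query = averages.writeStream.outputMode("update").format("console").start()
    query.awaitTermination()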

AI/ML Infrastructure: Building for Scale

Highly scalable, resilient, distributed, secure, and performant infrastructure can make or break the AI/ML strategy in an organization. Without a good infrastructure as the foundation, no AI/ML program can be successful in the long term. 

At this year’s GTC conference, NVIDIA announced their next-generation processors for AI computing, the H100 GPU and the Grace CPU Superchip.

Resource negotiators like YARN and container orchestration technologies like Kubernetes are also now in the late majority category. Kubernetes has become the de facto standard for cloud platforms, and multi-cloud computing is gaining attention for deploying applications to the cloud. Technologies like Kubernetes can be the enablers for automating the complete lifecycle of AI/ML data pipelines, including the production deployment and post-production support of the models.

We also have a few new entrants in the Innovators category. These include cloud-agnostic computing for AI, Knowledge Graphs, AI pair programmers (like GitHub Copilot), and Synthetic Data Generation.

Knowledge Graphs continue to leave a large footprint in the enterprise data management landscape with real-world applications for different use cases including data governance.

ML-Powered Coding Assistants: GitHub Copilot

GitHub Copilot, announced last year, is now ready for prime time. Copilot is an AI-powered service that helps developers write new code by analyzing existing code and comments. It improves overall developer productivity by generating basic functions so that developers don't have to write them from scratch. Copilot is the first of many solutions we expect to see that help with AI-based pair programming and automate many steps of the software development lifecycle.

Nikita Povarov, in the article AI for Software Developers: a Future or a New Reality, wrote about the role of AI developer tools. AI developers may attempt to use algorithms to augment programmers’ work and make them more productive; in the software development context, we’re clearly seeing AI both performing human tasks and augmenting programmers’ work.

Synthetic Data Generation: Protecting User Privacy

On the data engineering side, synthetic data generation is another area that has been gaining a lot of attention and interest since last year. Synthetic data generation tools help create safe, synthetic versions of business data while protecting customer privacy.

Technologies like SageMaker Ground Truth from AWS now let users create labeled synthetic data. Ground Truth is a data labeling service that can produce millions of automatically labeled synthetic images.
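As a toy illustration of the general idea, and not a stand-in for purpose-built services such as Ground Truth, the sketch below samples a synthetic table that mimics only the per-column distributions of a real one; the column names and values are invented, and real tools model joint distributions and privacy guarantees far more carefully.

    import numpy as np
    import pandas as pd

    def synthesize(real: pd.DataFrame, n: int, seed: int = 42) -> pd.DataFrame:
        """Sample a synthetic table that mimics per-column distributions only."""
        rng = np.random.default_rng(seed)
        synthetic = {}
        for column in real.columns:
            series = real[column]
            if pd.api.types.is_numeric_dtype(series):
                # Model numeric columns as normal; real tools fit richer, joint models.
                synthetic[column] = rng.normal(series.mean(), series.std(), size=n)
            else:
                # Resample categorical columns according to observed frequencies.
                freqs = series.value_counts(normalize=True)
                synthetic[column] = rng.choice(freqs.index.to_numpy(), size=n, p=freqs.values)
        return pd.DataFrame(synthetic)

    real_orders = pd.DataFrame({
        "order_total": [12.5, 80.0, 44.2, 19.9],
        "country": ["US", "DE", "US", "FR"],
    })
    print(synthesize(real_orders, n=1000).head())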

Data quality is critical for AI/ML applications throughout their lifecycle. Dr. Einat Orr spoke at the QCon London conference on Data Versioning at Scale and discussed the importance of data quality and of versioning large data sets. Version control of data lets us reproduce a set of results, provides better lineage between the input and output data sets of a process or model, and supplies the information needed for auditing.

At the same conference, Ismaël Mejía talked about how to apply open source APIs and open standards to more recent data management methodologies around operations, data sharing, and data products, enabling us to create and maintain resilient and reliable data architectures.

In another article, Building End-to-End Field-Level Lineage for Modern Data Systems, the authors discuss data lineage as a critical component of root cause and impact analysis for data pipelines. To better understand the relationship between source and destination objects in the data warehouse, data teams can use field-level lineage. Automating lineage creation and abstracting metadata down to the field level cuts down on the time and resources required to conduct root cause analysis.

The early adopter category also has new entries: Robotics, Virtual Reality and related technologies (VR/AR/MR/XR), as well as MLOps.

MLOps: Combining ML and DevOps Practices

MLOps has been getting a lot of attention as companies look to bring to machine learning the same discipline and best practices that DevOps offers in the software development space.

At the QCon Plus conference, Francesca Lazzeri spoke about MLOps as the most important piece in the enterprise AI puzzle. She discussed how MLOps empowers data scientists and app developers to bring machine learning models to production. MLOps enables you to track, version, audit, certify, and reuse every asset in your machine learning lifecycle, and provides orchestration services to streamline managing that lifecycle.

MLOps is really about bringing together people, processes, and platforms to automate machine learning-infused software delivery and also provide continuous value to our users.

She also wrote about what you should know before deploying ML applications in production. Key takeaways include using open source technologies for model training, deployment, and fairness, and automating the end-to-end ML lifecycle with machine learning pipelines.
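As a small, hypothetical illustration of the pipeline idea (here using scikit-learn, which is our choice for the sketch; production MLOps stacks add experiment tracking, orchestration, and deployment on top):

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A pipeline captures preprocessing and the model as one versionable asset,
    # which is what makes it easy to track, reuse, and redeploy in an MLOps flow.
    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    pipeline.fit(X_train, y_train)
    print("holdout accuracy:", pipeline.score(X_test, y_test))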

Monte Zweben talked about Unified MLOps, which brings together core components like feature stores and model deployment.

Other key trends discussed in the podcast are:

  • In AI/ML applications, the transformer is still the architecture of choice.
  • ML models continue to get bigger, supporting billions of parameters (GPT-3, EleutherAI’s GPT-J and GPT-Neo, Meta’s OPT model).
  • Open source image-text data sets for training models like CLIP and DALL-E are enabling data democratization, giving more people the power to take advantage of these models and datasets.
  • Future robotics and virtual reality applications are likely to be implemented mostly in the metaverse.
  • AI/ML compute tasks will benefit from the infrastructure and cloud computing innovations like multi-cloud and cloud-agnostic computing.

For more information, check out the 2022 AI, ML, and Data Engineering podcast recording and transcript as well as the AI, ML & Data Engineering content on InfoQ.
