
Architecture in a Flow of AI-Augmented Change


Key Takeaways

  • Three years into the AI revolution, enterprises are grappling with an interesting paradox. Despite racing to adopt AI, most organizations remain trapped in pilot purgatory. The disconnect isn’t technological; it’s organizational and cultural.
  • AI is a force multiplier. In organizations with clear domain ownership, AI augments the organization within well-defined boundaries, enabling semi-autonomous decisions safely.   
  • If your team is organized and works well together, AI acts as a turbo-boost, propelling projects to completion. The same is true in reverse for the more traditional bureaucratic "dysfunctional" organizations; AI will amplify the dysfunction.
  • As organizations move from AI pilots to continuously evolving implementations, architects will need to rapidly translate business needs into trustworthy solutions. Fast flow becomes essential, enabled by clear domain boundaries, aligned value streams, and streamlined team interactions.
  • The potential for AI to improve organizational flows requires architects and teams to organically form new ways of working, enabling faster design enhancements at scale while encouraging collaboration across teams.

This article was written by participants of the online InfoQ Certified Architect Program. It represents the capstone of their work, reflecting the cohort's collective learnings on the intersection of AI and modern software architecture.

Introduction: The paradox

The numbers tell an interesting story. McKinsey’s State of AI report reveals that 72% of organizations have adopted AI in at least one business function. The pace of investment is staggering, with hundreds of billions of dollars flowing into AI research. Enterprises are allocating unprecedented budgets to AI initiatives. Yet in reality, most organizations are failing to capture meaningful value.

Consider the contrast. Uber’s Michelangelo platform (Wang, Kai, et al.) was built over several years as the company’s ML platform, enabling development teams to own their domains and rapidly deploy solutions from route optimization to demand forecasting. As AI advanced, Uber leveraged GenAI to accelerate development. With these organizational and technical foundations in place, Uber was well positioned for the AI boom.

Meanwhile, most large enterprises that have adopted AI in the last couple of years are unable to progress past the pilot phase. These enterprises have access to the same technology and knowledge, but lack the organizational structure and platform to harness the value from AI. Gartner's 2025 research shows that only 45% of organizations with high AI maturity maintain projects for more than three years. Furthermore, the Gartner survey found that 57% of high-maturity organizations are ready to adopt new AI solutions compared with only 14% of low-maturity organizations.

What we talk about vs what actually matters

AI adoption has proceeded at diverse paces across sectors and across different types of work within organizations, as evidenced in McKinsey’s State of AI report. In software platforms and engineering, AI adoption has accelerated significantly.

As the development and DevOps community embraces AI, the amplification effect is evident, with Google’s DORA State of AI-assisted Software Development report highlighting measurable gains from AI adoption, including improvements in:

  • Individual effectiveness
  • Shift to more valuable work
  • Product performance
  • Code quality
  • Organizational performance

By contrast, despite these technical advances, organizational impacts remain inconsistent. JetBrains’ State of Developer Ecosystem survey notes that more than 50% of developers spend between 20% and 60% of their time in meetings, work-related chats, and emails. Communication and alignment remain core challenges. The tension: producing results faster doesn’t necessarily mean a faster path to more business value.

Spending more effort on understanding value streams and business context is a path to delivering higher value. Architects play a key role, bridging the gap between organizational purpose and development speed. Architects help teams design resilient, scalable systems, translating business intent into robust technical implementations.

Ultimately, what actually matters isn’t just how fast we code or deploy, but how effectively we integrate continuous enhancements into business operations and the organization.

Journey to apply AI in a continuous flow of change

It is clear that Artificial Intelligence (AI), particularly Generative AI and AI agents, is motivating organizations to rethink how they adapt to continuous change. However, this doesn’t imply that existing conventional approaches to defining architecture are obsolete. Instead, the journey involves augmenting established architecture practices with AI to achieve a continuous flow between business context, design, and delivery.

Anthropic’s Economic Index report classifies users’ patterns of interaction with Claude into two categories, based on a dataset of millions of anonymized Claude conversations: automation (where AI completes tasks without significant user input) and augmentation (where user and AI collaborate to complete tasks). In less than two years, the share of automation interactions has grown from 41% to 49%, while the share of augmentation interactions has decreased from 55% to 47%.

This is an interesting proxy for applying AI to software-related work: while AI tools and agentic AI evolve fast, users increasingly rely on AI for more sophisticated organizational tasks. In technology-driven organizations, it can be inferred that while AI has increased individual productivity for discrete tasks (automation), the larger efficiency gains in organizational processes (linked to augmentation) face further roadblocks: data readiness and business-domain contextual information. In CGI’s architecture engagements, these challenges routinely surface as organizations attempt to scale AI beyond isolated pilots, underscoring the importance of domain clarity and context engineering.

Architects support organizations in their move from individual productivity to overall operational efficiency. This requires architectural effort to secure the organization’s assets: ensuring data availability, setting clear business-context boundaries, and realigning processes. As observed, these are human, organizational, or informational requirements rather than purely matters of modernizing the technology stack. Emerging business requirements will demand consistency and clarity for AI agents or AI-supported development teams to achieve organization-level improvements. In this context, architecture becomes more about enabling the flow and co-evolution of humans working alongside AI agents.

Why does fast flow come first?

As articulated before, AI flows into an organization like water through existing channels. Where pathways are clear and direct, flow accelerates. Where paths twist, narrow, or dead-end, it creates floods and bottlenecks. Fast flow in the context of an organization is the ability to react to changes quickly and course-correct based on early feedback. Fast flow focuses on system velocity, the ability to gather, understand, and transform business needs into delivered solutions. This flow must align with actual user needs and the business strategy, not just technical efficiency. Organizations achieving fast flow share four characteristics:

  1. Business alignment
  2. Clear domain boundaries
  3. Managed cognitive load
  4. Optimized interaction patterns

Strategic alignment ensures AI solves real problems. Before considering the technology, fast-flow organizations maintain tight coupling between business strategy, users' needs, and technical domains. Teams understand why their work matters to users and how it drives business value. When these organizations adopt AI, it is targeted at specific user pains and strategic business objectives. Oftentimes, organizations deploy AI to "modernize" without a clear connection to user outcomes. They may achieve technical success, but business failure.

Clear domain boundaries enable AI autonomy within a business context. When teams own distinct domains that map to business capabilities and user needs, AI can operate safely within those boundaries. This alignment means AI decisions inherently reflect business rules and user needs. Without this clarity, technical boundaries don’t match business boundaries, and the misalignment amplifies ambiguity and complexity in the system. From a Domain-Driven Design perspective, to enable AI-supported organizational efficiency, architects may need to update the interaction patterns of bounded contexts, review key processes in the core business subdomain, or optimize processes in supporting business subdomains.

Optimizing interaction patterns maintains user focus across boundaries. In fast-flow organizations, team interactions are deliberate and are reevaluated often. Without this discipline, you’ll often see many teams running into the same issues, solving the same problems, and building the same solutions. To combat this, you’ll also often see organizations try to force everyone to talk to everyone, leading to chaos. Fast-flow organizations adopt the Team Topologies framework, enabling teams to work together and communicate effectively with each other when necessary.

Activating the flow

To illustrate the flow of applying AI in architecture, we describe one organization’s journey to leverage AI for generating specifications for new solutions. The team didn’t start with this conclusion in mind; what follows is the story of getting started, activating the flow, and evolving the architectural approach as they progressed.

The following case study serves as proof of architectural flow in action. In one CGI initiative, the objective was to translate AI use cases into AI-enabled implementations at scale across a range of business solutions.

The team started with strategic guidance to move beyond workshops and accelerate turning business use cases for applying AI into Proof of Concept (PoC) implementations. To this end, the team introduced a cohort-based program to advance these AI use cases, collaborating with teams to elaborate on the business value, user journey, and potential AI use cases. Iterating very quickly, the team moved user interface (UI) prototypes into PoC implementations.

Business owners were able to quickly visualize the potential impact and benefit of AI themselves through the UI prototypes and also see how other teams were thinking. Beyond the invaluable collaboration experience, developers gained a first visual target for their implementation results. The regular positive feedback confirmed that the program was resonating with contributors. The diagram below shows the iterative cycle and the input/output of the cohort-based process: defining the user journey, prototyping, and evolving AI-assisted specifications.

To get started, the team surveyed business owners to confirm the priority themes along with user stories using a consistent MadLib format as listed below (based on Next Best Action / Recommendation):

For <user persona>, who needs to <what are they trying to do?>, they do <what steps are completed?>. If instead, our solution recommended <what is the next best action, decision, or other outcome?> then we could <what is the key benefit or value that results?>
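As a concrete illustration, the MadLib above can be treated as a fill-in template. The sketch below renders it in Python; the function and field names are illustrative assumptions, not part of the program's actual tooling.

```python
# Minimal sketch: rendering the Next Best Action MadLib as a template.
# Field names are illustrative assumptions, not the program's actual tooling.

MADLIB = (
    "For {persona}, who needs to {goal}, they do {current_steps}. "
    "If instead, our solution recommended {next_best_action} "
    "then we could {key_benefit}."
)

def fill_user_story(persona: str, goal: str, current_steps: str,
                    next_best_action: str, key_benefit: str) -> str:
    """Render one user story from the MadLib template."""
    return MADLIB.format(persona=persona, goal=goal,
                         current_steps=current_steps,
                         next_best_action=next_best_action,
                         key_benefit=key_benefit)

story = fill_user_story(
    persona="a field technician",
    goal="diagnose a recurring equipment fault",
    current_steps="search past work orders manually",
    next_best_action="the most likely root cause and repair procedure",
    key_benefit="cut mean time to repair",
)
print(story)
```

Holding every business owner to one shared template like this keeps story submissions comparable across cohorts.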

Teams were invited based on the strategic theme set for a cohort focus. During the first workshop, the cohort elaborated on their selected user story and defined the user journey. Over time, cohorts pivoted towards using natural conversation to describe their context instead of submitting input; hence, user stories evolved to be generated as the business owner narrated a short story. One consideration that influenced the techniques applied throughout the cohort was retaining focus within a boundary, balancing a compelling result against a short timeframe.

To this end, the first workshop focused the cohort on defining user journey maps using a concise layout (see image below), set against the backdrop of a value stream map (triggering event, Step 1, Step 2, …, up to Step 6, end event with result or outcome) with swimlanes aligned to a storyboard (Persona, Actions, Sentiment, Pain points, Opportunities, Value delivered).
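One way to picture that layout in code is a small data model: a trigger, up to six steps, and an end event, with the storyboard swimlanes attached to each step. This is a hedged sketch with assumed field names, not the team's actual workshop template.

```python
# Hedged sketch of the concise user-journey layout: a trigger, at most
# six steps, and an end event, with storyboard swimlanes per step.
# All names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class JourneyStep:
    action: str                     # what the persona does at this step
    sentiment: str                  # e.g. "frustrated", "confident"
    pain_points: list[str] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)

@dataclass
class JourneyMap:
    persona: str
    trigger: str                    # triggering event
    steps: list[JourneyStep]
    end_event: str                  # end event, result, or outcome
    value_delivered: str

    def __post_init__(self):
        # The layout caps the map at six steps to keep PoC scope bounded.
        if len(self.steps) > 6:
            raise ValueError("journey map limited to six steps")

journey = JourneyMap(
    persona="Claims adjuster",
    trigger="New claim submitted",
    steps=[JourneyStep(action="Review claim details", sentiment="neutral")],
    end_event="Claim triaged",
    value_delivered="Faster triage decision",
)
```

The hard cap on steps mirrors the workshop's deliberate scoping: a journey that needs more than six steps is a signal to narrow the boundary.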

With the context and boundaries set for the PoC scope, UX designers then collaborated with the cohort to quickly design UI mock-ups and clickable prototypes. The designers presented to the cohort in a cross-over ceremony, from which a lean, cross-functional squad (full-stack developer, data scientist, prompt engineer, and AI solution architect) then progressed the PoCs using alternative models, algorithms, and approaches. The closing ceremony demonstrated feasible options for each of the PoC results. The lean squad focused on applying AI to directly solve business challenges and, given the cohort’s priorities, on a more direct connection to delivering business value.

The team staggered delivery, completing two to three UI prototypes or AI PoC implementations each month. They ran a number of these cohorts, gathering invaluable feedback and continually evolving their tactics as they progressed. However, the next challenge was progressing these early-stage solutions toward production-ready implementations.

Sure enough, the first question posed by the implementing team was: "Where is the specification?". While the PoCs were documented, they were not considered a design specification that another team could implement. This challenge is common in CGI’s transformation programs as well: PoCs create clarity but often lack the structured specifications required for downstream delivery teams.

That said, over the course of six months, the team accumulated over 50 hours of design sessions, UI prototype walk-throughs, and PoC implementation demonstrations. To avoid the penalty of going back and engaging analysts and designers to elaborate on the intended solution, the team instead applied context engineering, working with foundation models (achieving parity between Gemini and GPT).

The hypothesis: Given a knowledge base of curated design patterns and a standard output, the team can generate consistent user stories and use-case specifications from a conversation transcript or a workshop mural. The results were considered directionally correct by the business owners.
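Mechanically, that hypothesis amounts to prompt assembly: ground the model in the curated pattern documents, then append the transcript and the required output sections. The sketch below illustrates the idea; the prompt wording and section names are assumptions, not the team's actual templates.

```python
# Hedged sketch of the context-engineering step: combine a curated
# knowledge base of design patterns with a workshop transcript to prompt
# a foundation model for a draft specification. Prompt wording and
# section names are illustrative assumptions.

SPEC_SECTIONS = [
    "User stories",
    "Normal flows and exceptions",
    "System components and dependencies",
    "Acceptance criteria",
    "Security & data-privacy checklist",
]

def build_spec_prompt(pattern_docs: list[str], transcript: str) -> str:
    """Assemble one prompt: grounding patterns first, then the transcript."""
    context = "\n\n".join(pattern_docs)
    sections = "\n".join(f"- {s}" for s in SPEC_SECTIONS)
    return (
        "You are drafting a use-case specification.\n"
        "Ground every statement in the design patterns below.\n\n"
        f"## Curated design patterns\n{context}\n\n"
        f"## Workshop transcript\n{transcript}\n\n"
        f"## Produce these sections\n{sections}\n"
    )

prompt = build_spec_prompt(
    ["Pattern: Next Best Action - recommend the highest-value step given context."],
    "Business owner: our agents spend hours triaging tickets by hand.",
)
```

Putting the curated patterns ahead of the transcript is the key design choice: the knowledge base, not the conversation, anchors the generated specification.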

The generated specification outlined typical normal flows and exceptions, along with design elements such as system components and dependencies, with preliminary structure and interaction diagrams. With iteration, even journey maps were included, with sentiment scores assigned based on key terms used in the story narration. The team extended the specification to include acceptance criteria, initial test cases, and the start of checklist questions for Security, Data Privacy, among other concerns, to be answered as part of the implementation.

The foundation for generating these specifications was a knowledge base centered around an implementation design pattern. The team extended the traditional Design Pattern format to include checklist questions so that generated specifications included other factors to consider, including:

  • Adoption & Business value. Ensures the solution delivers measurable business outcomes and is embraced through clear value alignment and user adoption strategies.
  • User Experience. Emphasizes considerations for implementing the solution so that user interactions are intuitive, transparent, and trustworthy, enhancing user confidence, engagement, and satisfaction throughout the user’s journey.
  • Operational factors. Focuses on maintaining reliable, scalable, and explainable AI systems through robust monitoring, performance, and lifecycle management.
  • Compliance & Governance. Embeds ethical, legal, and audit frameworks to ensure responsible, transparent, and compliant AI operations.
  • Data Sources & Integration. Defines considerations such as data quality, accessibility, and access/connectivity to the data required to power and sustain the solution effectively.
  • Security & Data Privacy. Shifts to the far left, ensuring that security and data-privacy considerations are factored in from the outset.
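One way to picture the extended pattern format is as a record that pairs the traditional pattern fields with checklist questions grouped by the six areas above. The sketch below uses assumed field names for illustration only.

```python
# Minimal sketch of the extended design-pattern record: traditional
# pattern fields plus checklist questions grouped by consideration area.
# Field names are illustrative assumptions.

from dataclasses import dataclass, field

CHECKLIST_AREAS = [
    "Adoption & Business value",
    "User Experience",
    "Operational factors",
    "Compliance & Governance",
    "Data Sources & Integration",
    "Security & Data Privacy",
]

@dataclass
class ImplementationPattern:
    name: str
    archetype: str                  # e.g. "Conversational Agents"
    problem: str
    solution: str
    checklists: dict[str, list[str]] = field(default_factory=dict)

    def open_questions(self) -> list[str]:
        """Flatten checklist questions, in area order, for a generated spec."""
        return [q for area in CHECKLIST_AREAS
                  for q in self.checklists.get(area, [])]

pattern = ImplementationPattern(
    name="Ticket triage recommender",
    archetype="Hyper-personalization & Next Best Action / Recommendation",
    problem="Agents triage tickets manually",
    solution="Recommend the next best action from similar past tickets",
    checklists={
        "Adoption & Business value": ["Which KPI measures triage-time reduction?"],
        "Security & Data Privacy": ["Is ticket text scrubbed of PII before inference?"],
    },
)
```

Because the checklist questions live on the pattern itself, any specification generated from the catalogue inherits them automatically rather than relying on reviewers to remember them.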

From the outset, the implementation design patterns have been catalogued and organized by archetypes aligned to the AI use cases that business owners ranked as priorities, including:

  • Archetype 1: Conversational Agents
  • Archetype 2: AI-Powered Anomaly Detection
  • Archetype 3: Hyper-personalization & Next Best Action / Recommendation
  • Archetype 4: Intelligent workflow automation
  • Archetype 5: Multimodal & Omnichannel AI
  • Archetype 6: Predictive Maintenance and Safety Surveillance
  • … <other pattern archetypes will emerge as the flow continues>
  • Default Pattern Implementation Considerations

Based on the intent to continue evolving these AI implementation design patterns, the team introduced design pattern archetypes as outlined in the following diagram. The visual provides a useful classification of the archetypes by architectural concerns (UX & Interactions, Monitoring & Sensing, and Business Process & Rules).

As the team builds up the catalogue, it can now generate a starting point for a comprehensive specification from a workshop discussion, including acceptance criteria and "shifting to the far left" factors to consider, such as security, data privacy, and even a legal point of view. That said, the team’s focus is on creating a robust context of architecture knowledge, not on developing tools.

AI and AI agents are continuously advancing and evolving. While the team is examining how it can forward-generate UI mock-ups from the specifications, the interest from an architecture point of view is grounded in a well-curated knowledge base. The team gains confidence that it is on the right track, given the emergence of Specification-Driven Development (SDD), as summarized in understanding SDD, and approaches such as Spec Kit - AI-Powered SDD toolkit, among others. This journey is by no means complete. The ways that AI can be used in architecture will continue to evolve, changing the architect’s way of working.

The previous case study is well-suited to "greenfield" situations, given the space for architects and teams to ground and guide the application of AI when activating new solutions or innovations to existing systems. That said, organizations sustain a well-established landscape of systems that support core business capabilities. These systems form a mosaic of implementations, each with a different lifecycle. For systems that have been purpose-built over time, it is likely that the most recent views of the architecture are enshrined in code.

Certainly, the rise of coding assistants, copilots, and coding agents has caught the attention of developers, who are increasingly using these aids in their day-to-day activities. From code completion and documentation generation to the emergence of more advanced capabilities across the software development lifecycle (SDLC), the flow of change in the SDLC continues unabated.

The opportunity to apply AI for architecture in this brownfield of existing solutions is broad and multifaceted. The potential use cases for applying AI are wide-ranging, and AI can essentially be used at various stages to help architects bridge organizational business capabilities with technical implementations. Some examples include, among many others:

  • Using AI in support of architecture assessments. Assessing various aspects of architectural decisions is a well-suited class of use case for applying context engineering with advanced foundation models (e.g., Gemini, GPT, Claude, among others). More robust trade-off results (for example, scanning technology options to fill a capability gap within the architecture) will need to be grounded in key architecture principles and other architecturally significant requirements. Furthermore, few-shot prompting techniques will help target the trade-off analysis by providing examples of what is considered good, neutral, or bad. These tactics can be applied to a range of proposed architectural trade-offs (for example: given a proposed solution implementation, is it better suited to monolithic vs microservices, event-based vs API-based, etc.).
  • Using AI for generating prescribed architecture. One realistic brownfield challenge within larger organizations (applications developed in a different era) is undocumented architectural drift over time, or the lack of documentation and architecture diagrams altogether. These applications are mostly large-scale monolithic applications, and the people with application knowledge have most often either transferred or left the company. Therefore, manually interacting with the application is a risky, time-consuming exercise. In this situation, AI can help evaluate the application and generate the necessary documentation, such as operation manuals, architecture diagrams, data models, and dependencies.
  • AI Agents that propose updates to the Architecture and Design. In DevSecOps, AI agents are increasingly deployed to ingest a steady stream of runtime events. These agents may be monitoring security vulnerabilities in deployed solutions or runtime logs to reduce event noise and redundant ticket creation. By analyzing recurring vulnerabilities or runtime patterns, there is a point at which agents can make recommendations to improve the design.

Conclusion: Path forward

Artificial Intelligence won’t create organizational capability on its own. Humans need to ground it with organizational knowledge before AI can be harnessed to amplify what is already well established. The difference between success and stagnation won’t rest solely on model or tool selection and infrastructure. It will also be dependent on whether the organization has the architectural foundations in place to align with the fast flow of AI evolution. Clear domains, managed cognitive load, and aligned value streams allow AI to enhance delivery rather than add complexity. In this sense, architects and architecture become the enablers of AI, not bystanders.

As we’ve seen in one case study, AI’s promise emerges when architects shift from controlling outcomes to curating context, framing the semantics, boundaries, and guardrails that make AI and eventual AI-agent autonomy safe.

The real question isn’t how to adopt AI, but how to evolve our architectures to support a continuous flow of AI-augmented change.
