
AI Is Amplifying Software Engineering Performance, Says the 2025 DORA Report


Artificial intelligence is rapidly reshaping the way software is built, but its impact is more nuanced than many organizations expected. The 2025 DevOps Research and Assessment (DORA) report, titled State of AI-Assisted Software Development, finds that AI does not automatically improve software delivery performance. Instead, it acts as a multiplier of existing engineering conditions, strengthening high-performing teams while exposing weaknesses in organizations with fragmented processes and poorly structured development systems.

The research draws on survey responses from nearly 5,000 technology professionals and more than 100 hours of qualitative interviews. Its central conclusion is clear: the success of AI in software engineering depends less on the sophistication of the tools and more on the strength of the organizational systems surrounding them. Engineering culture, platform capabilities, development workflows, and internal knowledge systems ultimately determine whether AI improves productivity and delivery outcomes or simply accelerates complexity.

One of the most striking findings of the report is how quickly AI tools have become embedded in the daily workflow of developers. Approximately ninety percent of developers now report using some form of AI assistance in their work. Around two-thirds say they rely heavily on these tools for tasks such as writing code, generating documentation, debugging problems, or exploring unfamiliar frameworks. Many developers also report measurable productivity improvements, with a large proportion indicating that AI helps them solve problems faster and write code more efficiently.

Despite these gains, the report highlights a persistent tension between productivity and trust. While developers frequently rely on AI to accelerate development tasks, a significant portion remain cautious about the accuracy and reliability of AI-generated code. This hesitation reflects broader concerns around maintainability, correctness, and long-term system stability. In practice, AI often increases the volume and speed of code production, but without the appropriate engineering discipline, these gains may not translate into improved software delivery performance.

This dynamic leads to one of the report's most important conclusions: AI amplifies the quality of the engineering system it operates within. Organizations with mature DevOps practices, well-defined development workflows, and strong platform capabilities are far more likely to convert AI-driven productivity gains into measurable improvements in delivery performance. In contrast, organizations with fragmented tooling, unclear processes, or inconsistent development practices may experience the opposite effect. In these environments, AI can accelerate the creation of technical debt, increase code review complexity, and introduce instability into already fragile systems.

To help organizations understand how to successfully integrate AI into their development environments, the report introduces the DORA AI Capabilities Model. Rather than focusing on specific tools or technologies, this framework identifies a set of organizational capabilities that enable AI to deliver meaningful value.

The first of these capabilities involves establishing a clear organizational strategy for AI adoption. Organizations that succeed with AI tend to define explicit policies and guidelines around how AI tools should be used, governed, and integrated into engineering workflows. This clarity helps teams adopt AI consistently while reducing the risks associated with uncontrolled experimentation.

Another important capability is the presence of a healthy data ecosystem. AI tools rely heavily on access to reliable and well-structured information, particularly internal documentation, architectural knowledge, and historical development data. When this information is scattered or poorly maintained, AI tools struggle to generate meaningful assistance.

Closely related to this is the accessibility of internal knowledge. Organizations that maintain high-quality documentation, searchable knowledge repositories, and structured internal data enable AI systems to become far more effective assistants to developers. In these environments, AI tools can provide contextual recommendations that align with the organization's architecture, coding standards, and operational practices.
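To illustrate why accessible internal knowledge matters, here is a minimal sketch of the kind of context retrieval that AI assistants depend on: ranking internal documents against a query so the most relevant ones can be supplied to a model. This is not code from the report; all document names and contents are hypothetical examples.

```python
# Minimal sketch: rank internal documents against a query so relevant
# context can be handed to an AI assistant. Document names and contents
# are hypothetical examples.

def score(doc_text: str, query: str) -> int:
    """Count how many query terms appear in the document (case-insensitive)."""
    words = doc_text.lower().split()
    return sum(1 for term in query.lower().split() if term in words)

def top_context(docs: dict[str, str], query: str, k: int = 2) -> list[str]:
    """Return the names of the k best-matching internal documents."""
    ranked = sorted(docs, key=lambda name: score(docs[name], query), reverse=True)
    return ranked[:k]

docs = {
    "deploy-runbook": "deployment pipeline rollback steps for the payments service",
    "style-guide": "coding standards naming conventions review checklist",
    "adr-0042": "architecture decision record event driven messaging",
}
print(top_context(docs, "deployment rollback steps"))  # → ['deploy-runbook', 'style-guide']
```

The point of the sketch is structural: if documentation is scattered or unsearchable, there is nothing meaningful for a step like this to retrieve, and the assistant's suggestions lose organizational context.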

The report also emphasizes that foundational engineering practices remain critical. Mature version control workflows, disciplined code review processes, and consistent development standards form the backbone of effective AI-assisted engineering. Rather than replacing these practices, AI depends on them. Without them, the increased speed of development can quickly create operational risk.

User-centric development is another factor strongly associated with successful AI adoption. Teams that maintain a strong focus on user outcomes, rather than purely technical outputs, tend to integrate AI more effectively into their workflows. This orientation ensures that AI accelerates the delivery of meaningful features rather than simply increasing the volume of code produced.

Platform engineering also emerges as a critical enabler. Internal platforms that standardize development environments, deployment pipelines, and infrastructure services allow AI tools to operate within a consistent and predictable ecosystem. This consistency makes it easier for developers to integrate AI suggestions into their workflows without introducing additional complexity or operational risk.

Finally, the report reinforces the importance of working in small batches. Smaller, incremental changes improve code review quality, reduce deployment risk, and make it easier to maintain system stability. When AI tools generate large or complex code changes, these practices become even more important for maintaining control over the development process.
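One way teams operationalize the small-batch practice is with a simple guardrail that flags changesets too large to review effectively. The sketch below is a hypothetical illustration, not a tool or threshold from the report; the 400-line limit is an assumed example value.

```python
# Sketch of a small-batch guardrail: flag changesets whose total churn
# (added + removed lines) exceeds a review-friendly threshold.
# The 400-line limit is a hypothetical example, not a DORA figure.

def changed_lines(diff: str) -> int:
    """Count added/removed lines in a unified diff, ignoring file headers."""
    return sum(
        1
        for line in diff.splitlines()
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    )

def within_small_batch(diff: str, limit: int = 400) -> bool:
    """Return True if the change is small enough for effective review."""
    return changed_lines(diff) <= limit

sample_diff = """\
--- a/app.py
+++ b/app.py
+import logging
-print("debug")
+logging.info("start")
"""
print(changed_lines(sample_diff), within_small_batch(sample_diff))  # → 3 True
```

A check like this could run in CI or as a pre-merge hook; the value is less in the specific threshold than in making batch size visible before AI-generated changes reach review.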

The research also highlights the growing importance of platform teams in the era of AI-assisted development. Organizations that invest in platform engineering capabilities, such as shared tooling, standardized environments, and well-defined developer workflows, tend to experience significantly better outcomes when introducing AI tools. Platforms provide the structured foundation that allows AI to scale across teams while maintaining consistency and reliability.

Without this foundation, AI adoption can create new forms of complexity. Developers may generate larger pull requests, introduce inconsistent coding patterns, or rely on AI suggestions that do not align with established architectural standards. Over time, these challenges can slow delivery and increase operational risk.

The report also examines the potential impact of AI on system stability. While AI clearly accelerates development productivity, it can also encourage rapid experimentation and larger changesets. This dynamic may increase the likelihood of defects, deployment failures, or operational instability if not properly managed. As a result, organizations adopting AI must strengthen, not relax, their engineering discipline.

Beyond the technical aspects of AI adoption, the report places significant emphasis on the human and cultural dimensions of engineering systems. Teams that successfully integrate AI tend to foster strong collaboration between developers, platform teams, and security specialists. They invest in training programs that help developers understand how to use AI effectively, and they establish communities of practice where engineers can share insights and best practices.

In contrast, organizations that allow AI adoption to emerge purely through grassroots experimentation often struggle to scale its benefits. Individual teams may achieve productivity gains, but these improvements remain isolated and fail to translate into broader organizational performance.

The findings suggest that the future of high-performing engineering organizations will be defined not only by DevOps maturity but by the integration of AI-assisted workflows, platform engineering, and strong developer experience practices. Together, these elements form the foundation of a new generation of engineering systems designed to support both human and machine collaboration.

For technology leaders, the message of the report is both encouraging and cautionary. AI has the potential to significantly accelerate software development and improve the developer experience. However, these benefits are not automatic. Organizations must first build strong engineering foundations, invest in platform capabilities, and cultivate a culture that supports disciplined experimentation.

In the end, the report's central insight is simple but powerful. Artificial intelligence will not fix broken engineering systems. But for organizations that have already built strong foundations, it may become one of the most powerful accelerators of engineering performance yet.
