On a recent AI DevOps Podcast, Paul Duvall discussed how agentic AI patterns are reinforcing core engineering discipline as the capability of modern models increases. He also shared his repository of agentic AI engineering patterns, where he is documenting and evolving practices for AI-assisted software development.
Duvall, author of Continuous Integration: Improving Software Quality and Reducing Risk, positioned his collection of patterns as an exploration of how established engineering practices are being adapted through hands-on use of agentic AI in client work. He emphasised grounding AI-generated output in shared patterns, stating that "engineering practices are becoming even more relevant when you have AI generating code."
Given the volume of code generated by AI, Duvall emphasised the continued importance of trunk-based development, committing early and often, and automated testing, explaining that these practices become essential for maintaining quality as the rate of change increases.
Duvall also described a shift in how developers interact with code, observing that he is "not reviewing every line of code now" when working with AI-generated output, as the volume of change makes this increasingly impractical. Instead, Duvall emphasised relying on automated validation and agentic guardrails, including codified skills that allow agents to review and refine their own output.
Duvall also discussed how approaches such as specification-driven development are evolving existing engineering practice. His repository includes examples of agent-readable specifications for an AWS IAM policy generation scenario, defining expected behaviour, constraints, and acceptance criteria up front so that agents can generate and validate output against a clear specification. Describing how familiar test-first patterns are being adapted to guide AI-assisted workflows, he said:
I'm literally... replicating what we did with Agile and XP... it literally says red, green, refactor... I go through that process.
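As an illustration of this specification-driven approach, a minimal sketch might define acceptance criteria up front and check an agent-generated IAM policy against them. The spec fields, constraint names, and validator below are illustrative assumptions, not taken from Duvall's repository:

```python
import json

# Hypothetical acceptance criteria an agent-readable spec might define
# up front for an IAM policy generation task (illustrative only):
# least privilege, no wildcard actions, and an explicit resource scope.
SPEC = {
    "intent": "Allow read-only access to a single S3 bucket",
    "constraints": {
        "forbid_wildcard_actions": True,
        "required_resource_prefix": "arn:aws:s3:::example-bucket",
    },
}

def validate_policy(policy: dict, spec: dict) -> list[str]:
    """Check an AI-generated policy against the spec's constraints,
    returning a list of violations (empty means the policy passes)."""
    violations = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if spec["constraints"]["forbid_wildcard_actions"]:
            violations.extend(
                f"wildcard action: {a}" for a in actions if "*" in a
            )
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        prefix = spec["constraints"]["required_resource_prefix"]
        violations.extend(
            f"out-of-scope resource: {r}"
            for r in resources
            if not r.startswith(prefix)
        )
    return violations

# An agent-generated candidate policy, checked against the spec:
candidate = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::example-bucket",
                 "arn:aws:s3:::example-bucket/*"]
  }]
}""")
print(validate_policy(candidate, SPEC))  # → [] when all constraints hold
```

In a red-green-refactor loop, checks like these play the role of the failing test: the agent iterates on the policy until the violation list is empty.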
Duvall also highlighted challenges earlier in the agentic lifecycle, particularly around defining intent. He noted that while AI tools can generate code quickly, vague or underspecified inputs often lead to inconsistent or unpredictable results. This has led to an increased focus on driving agents with clearer specifications, including structured prompts that describe intent through role, context, and constraints, alongside specification-driven development and acceptance tests derived from defined behaviour. He noted that "if you don't fully describe what the intent is" you will "have random results."
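A structured prompt of the kind Duvall describes can be sketched as a small template. The function and field names below are illustrative assumptions, not his exact format:

```python
# A minimal sketch of a structured prompt that makes intent explicit
# through role, context, and constraints (field names are illustrative,
# not taken from Duvall's patterns repository).
def build_prompt(role: str, context: str,
                 constraints: list[str], task: str) -> str:
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    role="Senior AWS security engineer",
    context="Service needs read-only access to one S3 bucket",
    constraints=["least privilege", "no wildcard actions",
                 "output valid IAM policy JSON only"],
    task="Generate the IAM policy",
)
print(prompt)
```

The point of the structure is that each field forces an otherwise-implicit piece of intent to be stated, leaving less room for the "random results" Duvall warns about.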
A similar focus on clearer specifications was recently discussed on the DevSecOps Talks podcast with Paul Stack, System Initiative's director of product, who works on SWAMP, an agentic open-source platform for automating and validating infrastructure. Stack described how he was restructuring development processes around agents, even to the point of refusing pull requests in favour of GitHub Issue-based workflows that feed into specification-driven development. He said:
We do not accept pull requests... if you have a design... open an issue and we'll interactively walk through this and we'll design it together.
Appearing on Scott Hanselman's podcast, Gergely Orosz, author of The Pragmatic Engineer newsletter, discussed an open-source project that refrains from merging pull requests in favour of "remixing", where contributed PRs are rebuilt by agents in line with project standards. Contrasting this with autonomous agents using fully automated "Ralph loops," where subagents iteratively refine solutions until requirements are met, Hanselman acknowledged that although architectural and design "taste" is important in critical systems, the mentality of an "infinitely patient junior engineer" can be well suited to toil.
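The "Ralph loop" pattern can be sketched as a simple iterate-until-green control loop: attempt the task, run acceptance checks, and feed failures back into the next attempt until everything passes or a budget runs out. The `ralph_loop` and `toy_agent` names below are illustrative stand-ins, not part of any real framework:

```python
# A minimal sketch of a "Ralph loop" (illustrative, not a real
# framework): an agent repeatedly attempts a task, its output is
# checked against acceptance tests, and the failures are fed back
# into the next attempt until all checks pass or a budget runs out.
from typing import Callable, Optional

def ralph_loop(
    agent: Callable[[str, list[str]], str],
    task: str,
    checks: list[Callable[[str], Optional[str]]],
    max_iterations: int = 10,
) -> str:
    failures: list[str] = []
    for _ in range(max_iterations):
        output = agent(task, failures)
        # Each check returns a failure message, or None on success.
        failures = [msg for check in checks if (msg := check(output))]
        if not failures:
            return output  # all acceptance checks passed
    raise RuntimeError(f"budget exhausted; remaining failures: {failures}")

# Stand-in "agent" that improves once it sees feedback; a real loop
# would call a model with the failures appended to its context.
def toy_agent(task: str, failures: list[str]) -> str:
    return "hello world" if failures else "hello"

checks = [lambda out: None if "world" in out else "missing 'world'"]
print(ralph_loop(toy_agent, "greet", checks))  # → hello world
```

The iteration budget is what keeps the "infinitely patient junior engineer" from looping forever on a task it cannot satisfy.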
Stack also emphasised the importance of providing accurate architectural patterns and practices so that agents can "produce the code in a way that was coherent with your codebase," alongside defining architecture, constraints, and testing expectations up front. Echoing InfoQ's reporting on Boris Cherny's agentic workflow, Stack said that he also uses Claude's "plan mode" to review intent before execution, helping avoid "AI horror stories."
Duvall also pointed to the importance of shifting right and extending these feedback loops into production. He described how observability, telemetry, and even tests in production can be used to shorten feedback cycles, interpreting live signals and feeding them back into the development lifecycle. Looking ahead, he suggested that AI may result in smaller, more focused teams, describing a move towards a "one pizza team" as coordination overhead reduces and automation increases.
Duvall suggested that, as with earlier shifts in engineering, quality is increasingly achieved through automation rather than human inspection. He said:
You're putting in mechanisms... such that the code is reviewed... but it might not be reviewed literally by you every single time.
Duvall and Stack both highlighted that AI-assisted development requires a mix of shift-left practices and shift-right feedback, where behavioural definitions and production state become part of the validation process. Duvall also noted the advantages of AI analysing production telemetry more expansively to identify patterns and surface issues earlier.
Duvall's repository is continuously updated and defines structured patterns with maturity levels across development, security, and operational scenarios. The patterns include specification-driven development, codified rules and architectural constraints, atomic decomposition with parallel agents, and observable development for workflows with automated traceability.
Acknowledging the shift beyond code-centric development, Orosz reflected that engineering identity and practice will move up a level, beyond the code itself. He said:
I think there is something much more than coding that makes us special and I think we should cultivate that.