AI Architecture Content on InfoQ
-
Private AI Compute Enables Google Inference with Hardware Isolation and Ephemeral Data Design
Google announced Private AI Compute, a system designed to process AI requests using Gemini cloud models while aiming to keep user data private. The announcement positions Private AI Compute as Google's approach to addressing privacy concerns while providing cloud-based AI capabilities, building on what the company calls privacy-enhancing technologies it has developed for AI use cases.
-
Amazon Adds A2A Protocol to Bedrock AgentCore for Interoperable Multi-Agent Workflows
Amazon announced support for the Agent-to-Agent (A2A) protocol in Amazon Bedrock AgentCore Runtime, enabling communication between agents built on different frameworks. The protocol allows agents developed with Strands Agents, OpenAI Agents SDK, LangGraph, Google ADK, or Claude Agents SDK to "share context, capabilities, and reasoning in a common, verifiable format."
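To make the interoperability claim concrete, the sketch below builds an A2A-style agent card and a JSON-RPC send request as plain Python dictionaries. The field names follow the public A2A specification as generally documented; the agent name, URL, and skill are invented for illustration and are not part of the AgentCore announcement.

```python
import json

# Hypothetical agent card an A2A server publishes (e.g. at
# /.well-known/agent.json) so agents on other frameworks can
# discover its capabilities. All concrete values are invented.
agent_card = {
    "name": "invoice-extractor",
    "description": "Extracts line items from invoice documents.",
    "url": "https://agents.example.com/invoice-extractor",
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "extract-line-items",
            "name": "Extract line items",
            "description": "Returns structured line items for an invoice.",
        }
    ],
}

# A JSON-RPC request another agent could send to invoke the skill,
# carrying its message as typed "parts" in a shared format.
send_request = {
    "jsonrpc": "2.0",
    "id": "req-1",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Extract items from invoice INV-42."}],
        }
    },
}

wire_payload = json.dumps(send_request)
```

Because both sides speak this common JSON-RPC shape, an agent built with LangGraph can call one built with the OpenAI Agents SDK without either knowing the other's internals.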
-
Kimi's K2 Open-Source Language Model Supports Dynamic Resource Availability and New Optimizer
Kimi released K2, a Mixture-of-Experts large language model with 32 billion activated parameters and 1.04 trillion total parameters, trained on 15.5 trillion tokens. The release introduces MuonClip, a new optimizer that builds on the Muon optimizer by adding a QK-clip technique designed to address training instability, which the team reports resulted in "zero loss spike" during pre-training.
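The QK-clip idea can be sketched numerically: when a head's largest pre-softmax attention logit exceeds a threshold, the query and key projection weights are rescaled so the logits shrink back under it. The snippet below is a simplified single-head NumPy sketch of that mechanism; the threshold value and function names are illustrative, not Kimi's implementation, which applies the clip per head during training.

```python
import numpy as np

def qk_clip(W_q, W_k, X, tau=100.0):
    """Rescale W_q and W_k so the max attention logit stays below tau.

    Simplified single-head sketch of the QK-clip idea: the shrink
    factor is split evenly between the query and key projections.
    """
    Q, K = X @ W_q, X @ W_k
    s_max = np.abs(Q @ K.T).max()            # largest pre-softmax logit
    if s_max > tau:
        gamma = np.sqrt(tau / s_max)         # split factor across Q and K
        W_q, W_k = W_q * gamma, W_k * gamma  # logits scale by gamma**2 = tau/s_max
    return W_q, W_k

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))
W_q = rng.normal(size=(16, 16)) * 5.0        # inflated weights to force clipping
W_k = rng.normal(size=(16, 16)) * 5.0
W_q, W_k = qk_clip(W_q, W_k, X)
clipped_max = np.abs((X @ W_q) @ (X @ W_k).T).max()
```

Capping the logits this way bounds the inputs to the softmax, which is the kind of instability the team credits for the reported "zero loss spike" pre-training run.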
-
Anthropic Adds Sandboxing and Web Access to Claude Code for Safer AI-Powered Coding
Anthropic released sandboxing capabilities for Claude Code and launched a web-based version of the tool that runs in isolated cloud environments. The company introduced these features to address security risks that arise when Claude Code writes, tests, and debugs code with broad access to developer codebases and files.
-
New Claude Haiku 4.5 Model Promises Faster Performance at One-Third the Cost
Anthropic released Claude Haiku 4.5, making the model available to all users as its latest entry in the small, fast model category. The company positions the new model as delivering performance levels comparable to Claude Sonnet 4, which launched five months ago as a state-of-the-art model, but at "one-third the cost and more than twice the speed."
-
QCon London 2026 Announces Tracks: AI Engineering, Building Teams, Tech of Finance, and More
The QCon London 2026 tracks are live: 15 practitioner-curated deep dives on AI adoption, resilient architectures, distributed systems, performance, modern languages, data, security, and Staff+ leadership, rooted in real production lessons.
-
Inside the Architectures Powering Modern AI Systems: QCon San Francisco 2025
Senior engineers face fast-moving AI adoption without clear patterns. QCon SF 2025 brings real-world lessons from teams at Netflix, Meta, Intuit, Anthropic & more, showing how to build reliable AI systems at scale. Early bird ends Nov 11.
-
Open Practices for Architecture and AI Adoption
Andrea Magnorsky presented Byte-Sized Architecture at Cloud Native Summit 2025 as a format for building shared understanding through small, recurring workshops. Ahilan Ponnusamy and Andreas Spanner discussed the Technology Operating Model for AI adoption. Both approaches drew on the Open Practice Library for human-centred collaboration and for driving architectural evolution.
-
How Netflix is Reimagining Data Engineering for Video, Audio, and Text
Netflix has introduced a new engineering specialization, Media ML Data Engineering, alongside a Media Data Lake designed to handle video, audio, text, and image assets at scale. Early results include richer ML models trained on standardized media, faster evaluation cycles, and deeper insights into creative workflows.
-
“A Security Nightmare”: Docker Warns of Risks in MCP Toolchains
A new blog post from Docker warns that AI-powered developer tools built on the Model Context Protocol (MCP) are introducing critical security vulnerabilities — including real-world cases of credential leaks, unauthorized file access, and remote code execution.
-
Databricks Agent Bricks Automates Enterprise AI Development with TAO and ALHF Methods
Databricks introduced Agent Bricks, a new product that automates how enterprises develop domain-specific agents. The automated workflow includes generating task-specific evaluations and LLM judges for quality assessment, creating synthetic data that resembles customer data to supplement agent learning, and searching across optimization techniques to refine agent performance.
-
Amazon Launches Bedrock AgentCore for Enterprise AI Agent Infrastructure
Amazon announced the preview of Amazon Bedrock AgentCore, a collection of enterprise-grade services that help developers deploy and operate AI agents at scale across frameworks and foundation models. The platform addresses infrastructure challenges developers face when building production AI agents.
-
GitHub Unveils Prototype AI Agent for Autonomous Bug Fixing
GitHub has unveiled a prototype AI coding agent that autonomously identifies bugs and proposes fixes via pull requests, marking a shift toward more independent code maintenance. Leveraging semantic analysis and vulnerability libraries, the tool aims to reduce developers' workload, allowing them to prioritize complex problem-solving.
-
AWS Introduces Open Source Model Context Protocol Servers for ECS, EKS, and Serverless
AWS has launched open-source Model Context Protocol (MCP) servers on GitHub to support AI-assisted development within Amazon ECS, EKS, and serverless environments. These specialized servers provide developers with real-time, context-specific insights, improving application deployment, troubleshooting, and operational efficiency.
-
HashiCorp Releases Terraform MCP Server for AI Integration
HashiCorp has released the Terraform MCP Server, an open-source implementation of the Model Context Protocol designed to improve how large language models interact with infrastructure as code.
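To make the protocol concrete, the sketch below shows the JSON-RPC shapes an MCP client exchanges with a server like this one: an initialize handshake, a tools/list discovery call, and a tools/call invocation. The method names come from the MCP specification; the tool name, arguments, and protocol-version string are hypothetical stand-ins, not actual Terraform MCP Server details.

```python
import itertools
import json

_ids = itertools.count(1)

def rpc(method, params=None):
    """Build a JSON-RPC 2.0 request in the shape MCP uses."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# 1. Handshake: the client announces its protocol version and identity.
init = rpc("initialize", {
    "protocolVersion": "2024-11-05",   # illustrative version string
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
})

# 2. Discovery: ask the server which tools it exposes.
list_tools = rpc("tools/list")

# 3. Invocation: call a hypothetical provider-docs lookup tool.
call = rpc("tools/call", {
    "name": "search_provider_docs",    # invented tool name for illustration
    "arguments": {"query": "aws_s3_bucket"},
})

payloads = [json.dumps(m) for m in (init, list_tools, call)]
```

An LLM client drives this exchange to discover what the server can do and then invoke it, which is how a model ends up able to answer infrastructure-as-code questions through the Terraform MCP Server.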