OpenAI Content on InfoQ
-
OpenAI Introduces Websocket-Based Execution Mode to Reduce Latency in Agentic Workflows
OpenAI has introduced a WebSocket-based execution mode for its Responses API, aimed at coding agents and real-time AI systems. By replacing repeated HTTP request-response cycles with a persistent connection, the update reduces latency by up to 40 percent and improves streaming, tool execution, and multi-step orchestration in production-scale AI systems.
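The latency argument can be sketched with toy numbers: over HTTP, every round trip in a multi-step workflow pays connection setup (TCP and TLS handshakes), while a persistent WebSocket pays that cost once. The figures below are illustrative assumptions, not OpenAI's measurements.

```python
# Illustrative sketch: per-request connection setup vs. one persistent
# connection. HANDSHAKE_MS and MESSAGE_MS are assumed values for the
# example, not benchmarks from the announcement.

HANDSHAKE_MS = 60   # assumed TCP + TLS setup cost per new connection
MESSAGE_MS = 90     # assumed processing + transfer cost per round trip

def http_total(n_requests: int) -> int:
    """Total latency when every request opens a fresh connection."""
    return n_requests * (HANDSHAKE_MS + MESSAGE_MS)

def websocket_total(n_requests: int) -> int:
    """Total latency when one persistent connection is reused."""
    return HANDSHAKE_MS + n_requests * MESSAGE_MS
```

For a 10-step agentic workflow under these assumptions, the persistent connection saves the nine repeated handshakes, which is where the bulk of the reported improvement would come from in handshake-heavy workloads.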
-
Mistral Adds Remote Agents and Work Mode to Le Chat
Mistral has released Mistral Medium 3.5, a 128-billion parameter model designed to handle instruction following, reasoning, and coding within a single system, and introduced new cloud-based agent capabilities in its Vibe and Le Chat products.
-
OpenAI Extends the Responses API to Serve as a Foundation for Autonomous Agents
OpenAI announced that it is extending the Responses API to make it easier for developers to build agentic workflows, adding support for a shell tool, a built-in agent execution loop, a hosted container workspace, context compaction, and reusable agent skills.
-
HubSpot’s Sidekick: Multi-Model AI Code Review with 90% Faster Feedback and 80% Engineer Approval
HubSpot engineers introduced Sidekick, an internal AI-powered code review system that analyzes pull requests using large language models and filters feedback through a secondary “judge agent.” The system reduced time to first feedback on pull requests by about 90 percent and is now used across tens of thousands of internal pull requests.
-
AWS Launches Managed OpenClaw on Lightsail amid Critical Security Vulnerabilities
AWS launched managed OpenClaw on Lightsail for AI agent deployment while security concerns mount. The 250k-star GitHub project is affected by CVE-2026-25253, which enables one-click RCE, with 17,500+ vulnerable instances exposed. Bitdefender found 20% of ClawHub skills to be malicious. The AWS blueprint provides automated hardening but does not address the project's architectural security limits.
-
OpenAI Secures AWS Distribution for Frontier Platform in $110B Multi-Cloud Deal
OpenAI's $110B multi-cloud deal makes AWS the exclusive third-party distributor for the Frontier agent platform, introducing an architectural split: Azure retains stateless API exclusivity, while AWS gains stateful runtime environments via Bedrock. The deal expands the existing $38B AWS agreement by $100B and commits 2GW of Trainium capacity.
-
Anthropic Study: AI Coding Assistance Reduces Developer Skill Mastery by 17%
Anthropic research shows developers using AI assistance scored 17% lower on comprehension tests when learning new coding libraries, though productivity gains were not statistically significant. Those who used AI for conceptual inquiry scored 65% or higher, while those delegating code generation to AI scored below 40%.
-
OpenAI Introduces Harness Engineering: Codex Agents Power Large‑Scale Software Development
OpenAI introduces Harness Engineering, an AI-driven methodology where Codex agents generate, test, and deploy a million-line production system. The platform integrates observability, architectural constraints, and structured documentation to automate key software development workflows.
-
OpenAI Launches Frontier, a Platform to Build, Deploy, and Manage AI Agents across the Enterprise
OpenAI Frontier is an enterprise platform for building, deploying, and managing AI agents, designed to make them reliable, scalable, and integrated into real company systems and workflows.
-
OpenAI Publishes Codex App Server Architecture for Unifying AI Agent Surfaces
OpenAI has recently published a detailed architecture description of the Codex App Server, a bidirectional protocol that decouples the Codex coding agent's core logic from its various client surfaces. The App Server now powers every Codex experience, including the CLI, the VS Code extension, and the web app, through a single, stable API.
-
Agoda’s API Agent Converts Any API to MCP with Zero Code and Deployments
Agoda engineers developed API Agent, enabling a single MCP server to access any internal REST or GraphQL API with zero code and zero deployments. The system reduces overhead from multiple APIs, supports AI-assisted queries, and uses in-memory SQL post-processing for safe, scalable data handling across internal services.
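The core idea, deriving an agent-callable tool from an API's own metadata rather than hand-writing an MCP server per API, can be sketched as follows. The function name, schema shape, and parameter format below are illustrative assumptions, not Agoda's implementation or the MCP wire format.

```python
# Hypothetical sketch: turn one REST endpoint description into a generic
# tool descriptor an agent could invoke. Everything here is illustrative.

def endpoint_to_tool(method: str, path: str, params: dict[str, str]) -> dict:
    """Derive a tool descriptor from an endpoint's method, path, and
    parameter types, so no per-API code or deployment is needed."""
    return {
        "name": f"{method.lower()}_{path.strip('/').replace('/', '_')}",
        "description": f"{method.upper()} {path}",
        "parameters": {
            "type": "object",
            "properties": {name: {"type": typ} for name, typ in params.items()},
        },
    }
```

Because the descriptor is computed from metadata at request time, adding a new internal API would require registering its spec rather than shipping new server code, which is the "zero code, zero deployments" property the article highlights.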
-
OpenAI Scales Single Primary PostgreSQL Instance to Millions of Queries per Second for ChatGPT
OpenAI described how it scaled PostgreSQL to support ChatGPT and its API platform, handling millions of queries per second for hundreds of millions of users. By running a single-primary PostgreSQL deployment on Azure with nearly 50 read replicas, optimizing query patterns, and offloading write-heavy workloads to sharded systems, OpenAI maintained low-latency reads while managing write pressure.
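The single-primary pattern the article describes can be sketched as a small query router: writes go to the one primary, reads are spread round-robin across replicas. The class and method names are illustrative, not OpenAI's code, and real routing must also handle replication lag and read-your-writes consistency.

```python
# Minimal sketch of single-primary / many-replica routing, assuming a
# simple keyword check to classify statements as reads or writes.
import itertools

class QueryRouter:
    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin read fan-out

    def route(self, sql: str) -> str:
        """Return the host that should execute this statement."""
        first = sql.lstrip().split()[0].upper()
        is_write = first in {"INSERT", "UPDATE", "DELETE"}
        return self.primary if is_write else next(self._replicas)
```

With nearly 50 replicas behind such a router, read traffic scales horizontally while the primary only absorbs writes, which is why offloading write-heavy workloads to separate sharded systems becomes the remaining lever for write pressure.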
-
Xcode 26.3 Brings Integrated Agentic Coding for Anthropic Claude Agent and OpenAI Codex
The latest release of Xcode, Xcode 26.3, extends support for coding agents, such as Anthropic's Claude Agent and OpenAI's Codex, helping developers tackle complex tasks and improve their productivity.
-
Vercel Introduces Skills.sh, an Open Ecosystem for Agent Commands
Vercel has released Skills.sh, an open-source tool designed to provide AI agents with a standardized way to execute reusable actions, or skills, through the command line.
-
OpenAI Begins Article Series on Codex CLI Internals
OpenAI recently published the first in a series of articles detailing the design and functionality of their Codex software development agent. The inaugural post highlights the internals of the Codex harness, the core component in the Codex CLI.