Generative AI Content on InfoQ
-
Teleport Report Finds Over-Privileged AI Systems Linked to Fourfold Rise in Security Incidents
Enterprises that grant excessive access permissions to AI systems experience 4.5 times as many security incidents as those that do not, according to The 2026 State of AI in Enterprise Infrastructure Security, a report published by infrastructure identity company Teleport. The study found that identity management hasn't kept up with AI adoption in production systems.
-
QCon London 2026: AI Agents Write Your Code. What’s Left for Humans?
Hannah Foxwell began her QCon London 2026 talk by noting that the long-sought velocity in development has arrived, but the industry is unsure how to use it. She set aside the technical details of agentic coding, focusing instead on its implications for the people working with these systems.
-
Vercel Releases JSON-Render: a Generative UI Framework for AI-Driven Interface Composition
Vercel has open-sourced json-render, a framework that enables AI models to create structured user interfaces from natural language prompts. Released under the Apache 2.0 license, it supports multiple frontend frameworks and features a catalog of components defined by developers. Community feedback includes both support and skepticism, highlighting its differences from existing standards.
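The core idea behind this style of generative UI can be sketched in a few lines: the model emits a JSON tree, and a renderer maps each node to a component registered in a developer-defined catalog, so the model cannot invent arbitrary components. The catalog names and renderer below are illustrative assumptions, not json-render's actual API.

```python
import json

# Hypothetical component catalog: each entry maps a component type to a
# render function. Only registered types can appear in the model's output.
CATALOG = {
    "card": lambda props, children: f"<div class='card'>{children}</div>",
    "text": lambda props, children: f"<p>{props.get('value', '')}</p>",
}

def render(node):
    kind = node["type"]
    if kind not in CATALOG:
        # Unknown types are rejected rather than rendered.
        raise ValueError(f"unknown component: {kind}")
    children = "".join(render(child) for child in node.get("children", []))
    return CATALOG[kind](node.get("props", {}), children)

# A JSON spec such as a model might produce from a natural language prompt.
spec = json.loads(
    '{"type": "card", "children": '
    '[{"type": "text", "props": {"value": "Hello"}}]}'
)
html = render(spec)
```

Constraining generation to a catalog is what distinguishes this approach from letting a model emit raw markup: the developer keeps control over which components exist and how they render.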
-
QCon London 2026: Refreshing Stale Code Intelligence
At QCon London 2026, Jeff Smith discussed the growing mismatch between AI coding models and real-world software development. While AI tools are enabling developers to generate code faster than ever, Smith argued that the models themselves are increasingly “stale” because they lack the repository-specific knowledge required to produce production-ready contributions.
-
Where Do Humans Fit in AI-Assisted Software Development?
An article on Martin Fowler’s blog by Kief Morris examines the role of humans in AI-assisted software engineering, arguing developers are unlikely to move fully “out of the loop.” Instead, teams may work “on the loop,” designing tests, specifications, and feedback mechanisms to guide AI agents, as industry discussions focus on how such systems should be verified and governed.
-
QCon London 2026: Reliable Retrieval for Production AI Systems
At QCon London 2026, Lan Chu, AI tech lead at Rabobank, shared lessons from deploying a production AI search system used internally by more than 300 users over a corpus of 10,000 documents. Her experience shows that most RAG failures stem from indexing and retrieval rather than from the language model itself.
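The point that retrieval, not generation, is the usual failure mode can be made concrete with a tiny recall check: if the chunk containing the answer never reaches the top-k results, no language model can recover. This is an illustrative sketch, not Rabobank's system; a toy bag-of-words "embedding" stands in for a real dense encoder.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; production systems use dense encoders.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank chunks by similarity to the query; return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

corpus = [
    "Mortgage rates are reviewed quarterly by the risk team.",
    "Employees can book annual leave through the HR portal.",
    "The cafeteria serves lunch between noon and two.",
]
# Recall@k check: does the chunk that answers the query make the top k?
hits = retrieve("how are mortgage rates reviewed", corpus)
```

Measuring retrieval quality this way, before involving the model at all, isolates indexing and chunking problems that would otherwise be blamed on generation.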
-
QCon AI Boston’s Early Program Focuses on the Engineering Work behind Production AI
As teams move AI from pilots to production, the hard problems shift from demos to dependability. The first confirmed talks for QCon AI Boston (June 1–2) focus on context engineering, agent explainability, reasoning beyond basic RAG, evaluation, governance, and platform infrastructure needed to run AI reliably under real-world constraints.
-
MongoDB Introduces Embedding and Reranking API on Atlas
MongoDB has recently announced the public preview of its Embedding and Reranking API on MongoDB Atlas. The new API gives developers direct access to Voyage AI’s search models within the managed cloud database, enabling them to create features such as semantic search and AI-powered assistants within a single integrated environment, with consolidated monitoring and billing.
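The embed-then-rerank pattern the API supports follows a common two-stage shape: a cheap similarity search recalls a candidate set, then a slower, more precise reranker reorders it. The sketch below uses trivial local scoring functions as stand-ins; the actual Voyage AI models are served by the Atlas API, and none of these function names come from MongoDB's interface.

```python
def vector_score(query, doc):
    # Stage 1 stand-in: fast, coarse relevance via token overlap.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def rerank_score(query, doc):
    # Stage 2 stand-in: costlier, finer relevance; here, an exact-phrase
    # match earns a bonus the coarse stage cannot see.
    score = vector_score(query, doc)
    if query.lower() in doc.lower():
        score += 1.0
    return score

def search(query, docs, recall_n=3, top_k=1):
    # Recall a candidate set cheaply, then rerank only those candidates.
    candidates = sorted(
        docs, key=lambda d: vector_score(query, d), reverse=True
    )[:recall_n]
    return sorted(
        candidates, key=lambda d: rerank_score(query, d), reverse=True
    )[:top_k]

docs = [
    "semantic layer and search index internals",
    "a short semantic search primer",
    "billing dashboard for the managed cloud database",
]
best = search("semantic search", docs)
```

Note that the first two documents tie in stage 1, and the reranker breaks the tie in favor of the exact-phrase match; that reordering of a recalled candidate set is exactly what a reranking model adds over embedding search alone.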
-
Cloudflare's Matrix Homeserver Demo Sparks Debate over AI-Generated Code Claims
A Cloudflare blog post claiming a "production-grade" Matrix homeserver on Workers didn't survive community scrutiny. Missing federation, incomplete encryption, and TODO comments in authentication logic pointed to unreviewed AI output. Matrix's Matthew Hodgson welcomed the effort but noted the implementation "doesn't yet constitute a functional Matrix server."
-
Tracking and Controlling Data Flows at Scale in GenAI: Meta’s Privacy-Aware Infrastructure
Meta has revealed how it scales its Privacy-Aware Infrastructure (PAI) to support generative AI development while enforcing privacy across complex data flows. Using large-scale lineage tracking, PrivacyLib instrumentation, and runtime policy controls, the system enables consistent privacy enforcement for AI workloads like Meta AI glasses without introducing manual bottlenecks.
-
Human-Centred AI for SRE: Multi-Agent Incident Response without Losing Control
A growing body of recent research and industry commentary suggests that a shift in how organisations approach site reliability engineering is underway. Rather than handing the pager to a machine, teams are designing multi-agent AI systems that work alongside on-call engineers, narrowing the search space and automating the tedious steps while leaving judgment calls to humans.
-
Cloudflare Year in Review: AI Bots Crawl Aggressively, Post-Quantum Encryption Hits 50%, Go Doubles
Cloudflare has recently published the sixth edition of its Radar Year in Review. The results reveal 19% yearly growth in global internet traffic, Googlebot dominance, increasing crawl-to-refer ratios, and broad adoption of post-quantum encryption. Over 20% of automated API requests were made by Go-based clients, almost doubling adoption over the previous year.
-
SIMA 2 Uses Gemini and Self-Improvement to Generalize across Unseen 3D and Photorealistic Worlds
Google DeepMind researchers introduced SIMA 2 (Scalable Instructable Multiworld Agent), a generalist agent built on the Gemini foundation model that can understand and act across multiple 3D virtual game environments. The SIMA 2 architecture uses a Gemini Flash-Lite model trained on a mixture of gameplay and Gemini pretraining data.
-
Target Improves Add to Cart Interactions by 11 Percent with Generative AI Recommendations
Target has deployed GRAM, a GenAI-powered accessory recommendation system for the Home category, using large language models to prioritize product attributes and capture aesthetic cohesion. The system helps shoppers find compatible accessories, integrates human-in-the-loop curation, and achieved measurable improvements in engagement and conversion.
-
Meta Details GEM Ads Model Using LLM-Scale Training, Hybrid Parallelism, and Knowledge Transfer
Meta released details about its Generative Ads Model (GEM), a foundation model designed to improve ads recommendation across its platforms. The model addresses core challenges in recommendation systems (RecSys) by processing billions of daily user-ad interactions where meaningful signals such as clicks and conversions are very sparse.