Large Language Models Content on InfoQ
-
GitHub Expands Copilot Ecosystem with AgentHQ
GitHub has announced AgentHQ, a new addition to its platform that aims to unify the fragmented landscape of AI tools within the software development process.
-
Android GenAI Prompt API Enables Natural Language Requests with Gemini Nano
The ML Kit GenAI Prompt API, now available in alpha, enables Android developers to send natural language and multimodal requests to Gemini Nano running on-device, extending the text summarization and image description capabilities introduced with the initial GenAI release.
-
Cursor 2.0 Expands Composer Capabilities for Context-Aware Development
Cursor has launched version 2.0 of its AI-driven code editor, featuring Composer, a new model that enables developers to write and modify code through natural language interaction.
-
Apple Releases Pico-Banana-400K Dataset to Advance Text-Guided Image Editing
Pico-Banana-400K is a curated dataset of 400,000 images developed by Apple researchers to make it easier to create text-guided image editing models. The images were generated by using Google's Nano-Banana to modify real photographs from the Open Images collection, and were then filtered with Gemini-2.5-Pro for overall quality and prompt compliance.
-
Inside the Architectures Powering Modern AI Systems: QCon San Francisco 2025
Senior engineers face fast-moving AI adoption without clear patterns. QCon SF 2025 brings real-world lessons from teams at Netflix, Meta, Intuit, Anthropic & more, showing how to build reliable AI systems at scale. Early bird ends Nov 11.
-
The Architectural Shift: AI Agents Become Execution Engines While Backends Retreat to Governance
A fundamental shift in enterprise software architecture is emerging as AI agents transition from assistive tools to operational execution engines, with traditional application backends retreating to governance and permission management roles. This transformation is accelerating across sectors, with 40% of enterprise applications expected to include autonomous agents by 2026.
-
NVIDIA Introduces OmniVinci, a Research-Only LLM for Cross-Modal Understanding
NVIDIA has introduced OmniVinci, a large language model designed to understand and reason across multiple input types — including text, vision, audio, and even robotics data. The project, developed by NVIDIA Research, aims to push machine intelligence closer to human-like perception by unifying how models interpret the world across different sensory streams.
-
Anthropic Introduces Skills for Custom Claude Tasks
Anthropic has unveiled a new feature called Skills, designed to let developers extend Claude with modular, reusable task components.
-
DeepSeek AI Unveils DeepSeek-OCR: Vision-Based Context Compression Redefines Long-Text Processing
DeepSeek AI has developed DeepSeek-OCR, an open-source system that uses optical 2D mapping to compress long text passages. This approach aims to improve how large language models (LLMs) handle text-heavy inputs.
-
Google Research Open-Sources the Coral NPU Platform to Help Build AI into Wearables and Edge Devices
Coral NPU is an open-source full-stack platform designed to help hardware engineers and AI developers overcome the barriers to integrating AI into wearables and edge devices, including performance, fragmentation, and user trust.
-
Google Introduces LLM-Evalkit to Bring Order and Metrics to Prompt Engineering
Google has introduced LLM-Evalkit, an open-source framework built on Vertex AI SDKs, designed to make prompt engineering for large language models less chaotic and more measurable. The lightweight tool aims to replace scattered documents and guess-based iteration with a unified, data-driven workflow.
-
Researchers Introduce ACE, a Framework for Self-Improving LLM Contexts
Researchers from Stanford University, SambaNova Systems, and UC Berkeley have proposed Agentic Context Engineering (ACE), a new framework designed to improve large language models (LLMs) through evolving, structured contexts rather than weight updates. The method, described in a paper, seeks to make language models self-improving without retraining them.
-
Hugging Face Introduces RTEB, a New Benchmark for Evaluating Retrieval Models
Hugging Face has unveiled the Retrieval Embedding Benchmark (RTEB), a new framework for assessing embedding models' real-world retrieval accuracy. By combining public and private datasets, RTEB narrows the "generalization gap" between benchmark scores and production performance, helping ensure models perform reliably across critical sectors. Now live and open to collaboration, RTEB aims to set a community standard in AI retrieval evaluation.
-
10 AI-Related Standout Sessions at QCon San Francisco 2025
Join us at QCon San Francisco 2025 (Nov 17–21) for a three-day deep dive into the future of software development, exploring AI’s transformative impact. As a program committee member, I’m excited to showcase tracks that tackle real-world challenges, featuring industry leaders and sessions on AI, LLMs, and engineering mindsets. Don’t miss out!
-
Paper2Agent Converts Scientific Papers into Interactive AI Agents
Stanford's Paper2Agent framework revolutionizes research by transforming static papers into interactive AI agents that execute analyses and respond to queries. Leveraging the Model Context Protocol, it simplifies reproducibility and enhances accessibility, empowering users with dynamic, autonomous tools for deeper scientific exploration and understanding.