-
IBM Cloud Code Engine Serverless Fleets with GPUs for High-Performance AI and Parallel Computing
IBM Cloud Code Engine's new Serverless Fleets target compute-intensive AI and parallel-computing workloads. With integrated GPU support and a fully managed, pay-as-you-go model, fleets run large-scale jobs without requiring developers to provision or operate the underlying infrastructure, keeping the focus on the workload rather than on capacity management.
-
Hugging Face Introduces RTEB, a New Benchmark for Evaluating Retrieval Models
Hugging Face has unveiled the Retrieval Embedding Benchmark (RTEB), a new benchmark for assessing the real-world retrieval accuracy of embedding models. By combining public and private datasets, RTEB narrows the "generalization gap" between leaderboard scores and production behavior, helping ensure models perform reliably in critical sectors. The benchmark is live now, and Hugging Face is inviting community collaboration to establish it as a shared standard for retrieval evaluation.
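To make the evaluation concrete, the following is a minimal sketch of the kind of accuracy metric a retrieval benchmark like RTEB reports, recall@k over an embedded corpus. It is not RTEB's own harness; the model name, toy corpus, and relevance labels are illustrative assumptions.

```python
# Minimal recall@k sketch for an embedding retrieval model. The model, corpus,
# and ground-truth labels below are placeholders, not part of RTEB itself.
import numpy as np
from sentence_transformers import SentenceTransformer  # any embedding model under test

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = ["invoice payment terms", "patient discharge summary", "terraform state locking"]
queries = ["how do I lock terraform state?"]
relevant = {0: {2}}  # query index -> set of relevant corpus indices (ground truth)

doc_emb = model.encode(corpus, normalize_embeddings=True)
query_emb = model.encode(queries, normalize_embeddings=True)

k = 2
scores = query_emb @ doc_emb.T              # cosine similarity (embeddings are normalized)
top_k = np.argsort(-scores, axis=1)[:, :k]  # indices of the k best-scoring documents per query

recall_at_k = np.mean([
    len(set(top_k[i]) & relevant[i]) / len(relevant[i]) for i in range(len(queries))
])
print(f"recall@{k} = {recall_at_k:.2f}")
```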
-
Widespread Adoption of Agentic AI in Testing Organizations, but Leadership Lags in Understanding
Nearly all software testing teams are either using or plan to use agentic AI, but many leaders admit they lack a clear grasp of testing realities, according to a recent survey of 400 testing executives and engineering leaders.
-
HashiCorp Warns Traditional Secret Scanning Tools Are Falling Behind
HashiCorp has issued a warning that traditional secret scanning tools are failing to keep up with the realities of modern software development. In a new blog post, the company argues that post-commit detection and brittle pattern matching leave dangerous gaps in coverage.
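As a rough illustration of the pattern-matching approach the post calls brittle, the sketch below scans files for a few well-known token shapes. The regexes are illustrative rather than an exhaustive or recommended ruleset, and the script only sees whatever text it is handed after the fact, which is exactly the coverage gap HashiCorp describes.

```python
# A deliberately simple pattern-based secret scanner: it only matches known token
# shapes and runs post hoc over files it is given. Patterns are illustrative only.
import re
import sys

PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+]{20,}['\"]"),
}

def scan_file(path: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every match in the file."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((lineno, name))
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, rule in scan_file(path):
            print(f"{path}:{lineno}: possible {rule}")
```

Anything that does not match one of these shapes, such as rotated token formats or internal credentials, passes through silently, which is the gap the blog post argues pre-commit and context-aware detection should close.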
-
10 AI-Related Standout Sessions at QCon San Francisco 2025
Join us at QCon San Francisco 2025 (Nov 17–21) for a three-day deep dive into the future of software development, exploring AI’s transformative impact. As a program committee member, I’m excited to showcase tracks that tackle real-world challenges, featuring industry leaders and sessions on AI, LLMs, and engineering mindsets. Don’t miss out!
-
Paper2Agent Converts Scientific Papers into Interactive AI Agents
Stanford's Paper2Agent framework converts static research papers into interactive AI agents that can run the paper's analyses and answer questions about them. Built on the Model Context Protocol (MCP), it exposes a paper's methods as callable tools, making results easier to reproduce and explore without re-implementing the underlying code.
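The underlying pattern is an MCP server whose tools wrap a paper's code. The sketch below uses the official MCP Python SDK to expose a single hypothetical analysis function; it is a hand-written stand-in, not output generated by Paper2Agent.

```python
# Minimal sketch of the pattern Paper2Agent relies on: a paper's analysis exposed
# as a tool on an MCP server so an agent can call it. The tool below is a
# hypothetical stand-in for whatever method a given paper implements.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("paper-methods")

@mcp.tool()
def run_differential_expression(counts_csv: str, group_a: str, group_b: str) -> str:
    """Run the paper's differential-expression analysis on a counts matrix."""
    # A real Paper2Agent-style server would call the repository's actual code;
    # this stub just echoes the request so the sketch stays self-contained.
    return f"compared {group_a} vs {group_b} using {counts_csv}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable client
```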
-
Genkit Extension for Gemini CLI Brings Framework-Aware AI Assistance to the Terminal
Google has released a Genkit extension for the Gemini CLI that brings framework-aware AI assistance directly to the terminal. The extension supports Genkit application development with context-aware code generation, debugging help, and best-practice guidance, all without leaving the command line.
-
GitHub MCP Registry Offers a Central Hub for Discovering and Deploying MCP Servers
GitHub has recently launched its Model Context Protocol (MCP) Registry, designed to help developers discover and use AI tools directly from within their working environment. The registry currently lists over 40 MCP servers from Microsoft, GitHub, Dynatrace, Terraform, and many others.
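Once a server has been found in the registry, a client still connects to it over MCP. The sketch below uses the MCP Python SDK's stdio client to launch a server and list its tools; the launch command and package name are placeholders, not real registry entries.

```python
# Minimal sketch of connecting to an MCP server and enumerating its tools using
# the official MCP Python SDK. The command and package name are hypothetical.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "@example/some-mcp-server"],  # placeholder, not a registry listing
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```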
-
Seed4J 2.0 Delivers a Migration from JHipster Lite
The release of Seed4J 2.0 delivers a migration from JHipster Lite 1.35.0. Seed4J is a “modular code generator that helps developers bootstrap their applications with clarity, structure, and purpose.” Pascal Grimaud, creator of Seed4J and former co-leader of JHipster, spoke to InfoQ about this migration.
-
OpenAI Adds Full MCP Support to ChatGPT Developer Mode
OpenAI has rolled out full Model Context Protocol (MCP) support in ChatGPT, bringing developers a long-requested feature: the ability to use custom connectors for both read and write actions directly inside chats. The feature, now in beta under Developer Mode, effectively turns ChatGPT into a programmable automation hub capable of interacting with external systems or internal APIs.
-
Java News Roundup: Jakarta Query and Spring Milestones, Open Liberty, Camel, Quarkus, Grails
This week's Java roundup for October 6th, 2025, features news highlighting: milestone releases of Jakarta Query 1.0, Spring AI 1.1 and Spring Batch 6.0; the October 2025 edition of Open Liberty; point releases of Quarkus, Apache Camel and JetBrains Ktor.
-
OpenAI Study Investigates the Causes of LLM Hallucinations and Potential Solutions
In a recent research paper, OpenAI suggested that the tendency of LLMs to hallucinate stems from the way standard training and evaluation methods reward guessing over acknowledging uncertainty. According to the study, this insight could pave the way for new techniques to reduce hallucinations and build more trustworthy AI systems, but not all agree on what hallucinations are in the first place.
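The paper's core argument reduces to expected-value arithmetic: under accuracy-only grading a wrong guess costs nothing, so guessing always beats abstaining, whereas a scoring rule that penalizes wrong answers makes "I don't know" the better choice once confidence is low. The snippet below works through that comparison with illustrative numbers.

```python
# Worked example of the incentive the paper describes. With binary (accuracy-only)
# grading, guessing has non-negative expected value at any confidence level, so a
# model is never rewarded for abstaining. Penalizing wrong answers flips that once
# confidence drops low enough. Numbers are illustrative only.
def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of answering when the model is right with probability p_correct."""
    return p_correct * 1.0 + (1 - p_correct) * (-wrong_penalty)

for p in (0.9, 0.5, 0.2):
    binary = expected_score(p, wrong_penalty=0.0)     # accuracy-only grading
    penalized = expected_score(p, wrong_penalty=1.0)  # wrong answers cost a point
    abstain = 0.0                                     # "I don't know" scores zero
    print(f"p={p}: binary guess={binary:+.2f}, "
          f"penalized guess={penalized:+.2f}, abstain={abstain:+.2f}")
```

Even at 20% confidence, guessing scores +0.20 under binary grading versus 0 for abstaining, while the penalized rule makes the same guess worth -0.60, which is the shift in evaluation incentives the study proposes.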
-
AWS Introduces ECS Managed Instances for Containerized Applications
AWS recently announced Amazon ECS Managed Instances, a new feature in ECS designed to simplify the deployment of containerized applications on EC2 instances. The service automatically manages instance provisioning, scaling, and maintenance, thereby reducing the operational overhead associated with maintaining container infrastructure.
-
Claude Sonnet 4.5 Tops SWE-Bench Verified, Extends Coding Focus beyond 30 Hours
Anthropic has released Claude Sonnet 4.5, which it describes as its most capable coding model to date. The model tops SWE-Bench Verified, sustains focus on multi-step coding tasks for more than 30 hours, and achieves a 98.7% score on Anthropic's safety evaluations. It is positioned as a drop-in replacement for earlier Sonnet models, pairing the capability gains with stronger safety behavior.
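In practice, "drop-in replacement" means swapping the model identifier in an existing Messages API call and changing nothing else. The sketch below assumes the claude-sonnet-4-5 alias; confirm the exact identifier against Anthropic's model documentation.

```python
# Minimal sketch of a drop-in model swap: the only change to an existing Anthropic
# Messages API call is the model identifier. The "claude-sonnet-4-5" alias is an
# assumption here; check Anthropic's docs for the exact id.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # previously e.g. "claude-sonnet-4-0"
    max_tokens=1024,
    messages=[{"role": "user", "content": "Refactor this function to be iterative."}],
)
print(response.content[0].text)
```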
-
PlanetScale Extends Database Platform to PostgreSQL
PlanetScale has announced the general availability of its managed sharded Postgres service, built for performance and reliability on AWS or Google Cloud. The launch extends PlanetScale's offerings to PostgreSQL users, adding to the company's existing popular MySQL-based platform built on top of Vitess.