AI, ML & Data Engineering Content on InfoQ
-
Mistral Releases Devstral, an Open-Source LLM for Software Engineering Agents
Mistral AI announced the release of Devstral, a new open-source large language model designed to improve the automation of software engineering workflows, particularly in complex coding environments that require reasoning across multiple files and components.
-
Uber Completes Massive Kubernetes Migration for Microservices and Large-Scale Compute Workloads
Uber has successfully completed a large Kubernetes migration, transitioning its entire compute platform from Apache Mesos to Kubernetes across multiple data centers and cloud environments.
-
Google Enhances LiteRT for Faster On-Device Inference
The new release of LiteRT, formerly known as TensorFlow Lite, introduces a new API to simplify on-device ML inference, enhanced GPU acceleration, support for Qualcomm NPU (Neural Processing Unit) accelerators, and advanced inference features.
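To give a sense of what on-device inference looks like with LiteRT's Python bindings, here is a minimal sketch using the Interpreter API carried over from TensorFlow Lite; the ai_edge_litert package name, model file, and tensor shapes are assumptions for illustration rather than details from the announcement.

```python
# Minimal LiteRT inference sketch (package name and model path are assumptions).
import numpy as np
from ai_edge_litert.interpreter import Interpreter  # Interpreter API inherited from TF Lite

# Load a hypothetical quantized model exported as a .tflite file.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)
```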
-
Redis Returns to Open Source under AGPL License: Is It Too Late?
Redis 8 has recently hit general availability, switching to the AGPLv3 license. A year after leaving its open-source roots to challenge cloud service providers, and following the creation of the Valkey fork, Redis has rehired its creator and moved back to an open-source license.
-
HashiCorp Releases Terraform MCP Server for AI Integration
HashiCorp has released the Terraform MCP Server, an open-source implementation of the Model Context Protocol designed to improve how large language models interact with infrastructure as code.
-
Prime Intellect Releases INTELLECT-2: a 32B Parameter Model Trained via Decentralized Reinforcement Learning
Prime Intellect has released INTELLECT-2, a 32-billion-parameter language model trained using fully asynchronous reinforcement learning across a decentralized network of compute contributors. In contrast to traditional centralized training, INTELLECT-2 is developed on permissionless infrastructure where rollout generation, policy updates, and training are distributed and loosely coupled.
-
Gemma 3 Supports Vision-Language Understanding, Long Context Handling, and Improved Multilinguality
Google’s generative artificial intelligence (AI) model Gemma 3 supports vision-language understanding, long context handling, and improved multilinguality. In a recent blog post, the Google DeepMind and AI Studio teams discussed the new features in Gemma 3. The model also features KV-cache memory reduction, a new tokenizer, better performance, and higher-resolution vision encoders.
-
OpenAI Launches Codex Software Engineering Agent Preview
OpenAI has launched Codex, a research preview of a cloud-based software engineering agent designed to automate common development tasks such as writing code, debugging, testing, and generating pull requests. Integrated into ChatGPT, Codex runs each assignment in a secure sandbox environment preloaded with the user's codebase and configured to reflect their development setup.
-
Windsurf Launches SWE-1 Family of Models for Software Engineering
Windsurf has introduced its first set of SWE-1 models, aimed at supporting the full range of software engineering tasks, not just code generation. The lineup consists of three models: SWE-1, SWE-1-lite, and SWE-1-mini, each designed for specific scenarios.
-
OpenAI’s Stargate Project Aims to Build AI Infrastructure in Partner Countries Worldwide
OpenAI has announced a new initiative called "OpenAI for Countries" as part of its Stargate project, aiming to help nations develop AI infrastructure based on democratic principles. This expansion follows the company's initial $500 billion investment plan for AI infrastructure in the United States.
-
Llama 4 Scout and Maverick Now Available on Amazon Bedrock and SageMaker JumpStart
AWS recently announced the availability of Meta's latest foundation models, Llama 4 Scout and Llama 4 Maverick, in Amazon Bedrock and Amazon SageMaker JumpStart. Both models provide multimodal capabilities and use a mixture-of-experts architecture.
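For context, a hedged sketch of calling one of these models through the Bedrock Converse API with boto3; the model identifier below is a placeholder and should be checked against the Bedrock console or documentation.

```python
# Hypothetical sketch: invoking Llama 4 Scout via the Amazon Bedrock Converse API.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="us.meta.llama4-scout-17b-instruct-v1:0",  # placeholder model ID, verify in the console
    messages=[
        {"role": "user", "content": [{"text": "Summarize the mixture-of-experts architecture."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

# The assistant's reply comes back as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```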
-
Mistral Unveils Medium 3: Enterprise-Ready Language Model
Mistral AI has unveiled Mistral Medium 3, a mid-sized language model aimed at enterprises seeking a balance of cost-efficiency, strong performance, and flexible deployment options. The model is now available through Mistral’s platform and Amazon SageMaker, with further releases planned for IBM watsonx, Azure AI Foundry, Google Cloud Vertex AI, and NVIDIA NIM.
-
CMU Researchers Introduce LegoGPT: Building Stable LEGO Structures from Text Prompts
Researchers at Carnegie Mellon University have introduced LegoGPT, a system that generates physically stable and buildable LEGO® structures from natural language descriptions. The project combines large language models with engineering constraints to produce designs that can be assembled manually or by robotic systems.
-
Google Cloud Enhances AI/ML Workflows with Hierarchical Namespace in Cloud Storage
On March 17, 2025, Google Cloud introduced a hierarchical namespace (HNS) feature in Cloud Storage, aiming to optimize AI and machine learning (ML) workloads by improving data organization, performance, and reliability.
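A rough sketch of opting into the feature at bucket creation time with the Python client follows, assuming a recent google-cloud-storage release that exposes a hierarchical_namespace_enabled property; the bucket name and location are placeholders.

```python
# Sketch: creating a Cloud Storage bucket with hierarchical namespace (HNS) enabled.
# Assumes a recent google-cloud-storage client; property names should be verified against its docs.
from google.cloud import storage

client = storage.Client()

bucket = client.bucket("my-ml-training-data")  # placeholder bucket name
bucket.hierarchical_namespace_enabled = True   # opt in to HNS at creation time
bucket.iam_configuration.uniform_bucket_level_access_enabled = True  # HNS requires uniform access
new_bucket = client.create_bucket(bucket, location="us-central1")    # placeholder location

print(new_bucket.hierarchical_namespace_enabled)
```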
-
Anthropic Introduces Web Search Functionality for Claude Models
Anthropic has announced the addition of web search capabilities to its Claude models, available via the Anthropic API. This update enables Claude to access current information from the web, allowing developers to create applications and AI agents that provide up-to-date insights.
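To show how a developer might enable this, here is a hedged sketch using the Anthropic Python SDK; the web search tool type string, model alias, and max_uses value are assumptions chosen to illustrate the shape of the request, not confirmed details.

```python
# Hedged sketch: asking Claude a question with server-side web search enabled.
# Tool type and model name are assumptions; check Anthropic's documentation for current values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # placeholder model alias
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    messages=[{"role": "user", "content": "What did the most recent Kubernetes release change?"}],
)

# Print only the text blocks; tool-use and search-result blocks are skipped.
for block in response.content:
    if block.type == "text":
        print(block.text)
```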