Retrieval-Augmented Generation Content on InfoQ
Microsoft Introduces CoRAG: Enhancing AI Retrieval with Iterative Reasoning
Microsoft AI has introduced Chain-of-Retrieval Augmented Generation (CoRAG), a new AI framework designed to enhance Retrieval-Augmented Generation (RAG) models. Unlike traditional RAG systems, which rely on a single retrieval step, CoRAG enables iterative search and reasoning, allowing AI models to refine their retrievals dynamically before generating answers.
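The iterative retrieve-and-refine loop described above can be sketched in a few lines of Python. This is a toy illustration of the idea, not Microsoft's implementation: `retrieve` is a hypothetical lexical retriever, and the query-rewriting step (which CoRAG delegates to an LLM) is approximated by appending retrieved text to the question.

```python
def retrieve(query, corpus):
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score > 0][:2]

def corag_answer(question, corpus, max_steps=3):
    """Iteratively retrieve, accumulate evidence, and refine the query."""
    evidence, query = [], question
    for _ in range(max_steps):
        new = [d for d in retrieve(query, corpus) if d not in evidence]
        if not new:  # retrieval has converged; stop early
            break
        evidence.extend(new)
        # In CoRAG an LLM rewrites the query from the evidence so far;
        # here we simply append retrieved text to broaden the next search.
        query = question + " " + " ".join(new)
    return evidence

docs = [
    "CoRAG performs chain-of-retrieval reasoning",
    "RAG retrieves documents before generation",
    "Iterative reasoning refines retrieval queries",
]
print(corag_answer("how does CoRAG refine retrieval", docs))
```

The key contrast with single-shot RAG is the loop: each round's evidence reshapes the next query, so documents missed by the initial question can still be found.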
Micronaut Framework 4.7.0 Provides Integration with LangChain4j and Graal Languages
The Micronaut Foundation released Micronaut Framework 4.7.0 in December 2024, four months after version 4.6.0. This version provides LangChain4j support for integrating LLMs into Java applications. The new Micronaut Graal Languages module provides integration with Graal-based dynamic languages; for example, the Micronaut GraalPy feature enables interaction with Python.
AMD and Johns Hopkins Researchers Develop AI Agent Framework to Automate Scientific Research Process
Researchers from AMD and Johns Hopkins University have developed Agent Laboratory, an artificial intelligence framework that automates core aspects of the scientific research process. The system uses large language models to handle literature reviews, experimentation, and report writing, producing both code repositories and research documentation.
Amazon Bedrock Introduces Multi-Agent Systems (MAS) with Open Source Framework Integration
Amazon Web Services has released a multi-agent collaboration capability for Amazon Bedrock, introducing a framework for deploying and managing multiple AI agents that collaborate on complex tasks. The system enables specialized agents to work together under a supervisor agent's coordination, addressing challenges developers face with agent orchestration in distributed AI systems.
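The supervisor pattern described above can be illustrated with a minimal sketch. The agent names and routing logic here are hypothetical placeholders, not the Bedrock API: a supervisor function dispatches each task to a specialized agent and collects the results.

```python
def billing_agent(task):
    """Specialist agent for billing-related tasks (illustrative)."""
    return f"billing handled: {task}"

def support_agent(task):
    """Specialist agent for general support tasks (illustrative)."""
    return f"support handled: {task}"

AGENTS = {"billing": billing_agent, "support": support_agent}

def supervisor(tasks):
    """Dispatch each (topic, task) pair to the matching specialist agent."""
    results = []
    for topic, task in tasks:
        agent = AGENTS.get(topic, support_agent)  # fall back to a default agent
        results.append(agent(task))
    return results

print(supervisor([("billing", "refund order 42"), ("support", "reset password")]))
```

In a real deployment the supervisor would itself be an LLM-backed agent that decides routing from the task content; the fixed topic lookup here only shows the coordination structure.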
Google Vertex AI Provides RAG Engine for Large Language Model Grounding
Vertex AI RAG Engine is a managed orchestration service that aims to make it easier to connect large language models (LLMs) to external data sources, so that they stay more up-to-date, generate more relevant responses, and hallucinate less.
RAG-Powered Copilot Saves Uber 13,000 Engineering Hours
Uber recently detailed how it built Genie, an AI-powered on-call copilot designed to improve the efficiency of on-call support engineers. Genie leverages Retrieval-Augmented Generation (RAG) to provide accurate real-time responses and significantly enhance the speed and effectiveness of incident response. Since its launch, Genie has answered over 70,000 questions, saving 13,000 engineering hours.
RAG-Based Ingestion for Generative AI Applications with Logic Apps Standard in Public Preview
Microsoft has released built-in actions for document parsing and chunking in Logic Apps Standard, now in public preview. These low-code actions transform structured and unstructured data into AI-ready formats, streamlining RAG-based ingestion workflows for generative AI applications and improving searchability and knowledge management.
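Logic Apps exposes parsing and chunking as low-code actions rather than code, but the kind of transformation a chunking step applies can be illustrated with a toy fixed-size splitter. The chunk size and overlap values below are illustrative, not Logic Apps defaults.

```python
def chunk_text(text, size=40, overlap=10):
    """Split text into overlapping windows of at most `size` characters."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    # Overlap lets a sentence cut at a boundary still appear whole in one chunk.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Unstructured documents must be split into chunks before embedding."
pieces = chunk_text(doc)
assert all(len(p) <= 40 for p in pieces)
```

Production chunkers typically split on semantic boundaries (sentences, headings) rather than raw character counts, but the overlap idea carries over.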
Anthropic Unveils Contextual Retrieval for Enhanced AI Data Handling
Anthropic has announced Contextual Retrieval, a technique that improves how AI systems interact with extensive knowledge bases. It addresses the problem of context loss in Retrieval-Augmented Generation (RAG) systems by enriching text chunks with contextual information before they are embedded or indexed.
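The enrichment step can be sketched as follows. Anthropic's technique uses an LLM to generate a chunk-specific situating sentence; in this minimal sketch, `summarize_context` is a hypothetical placeholder that returns a fixed template so the example stays runnable.

```python
def summarize_context(document_title, chunk):
    """Stand-in for the LLM call that situates a chunk within its document."""
    return f"From the document '{document_title}':"

def contextualize_chunks(document_title, chunks):
    """Return chunks enriched with situating context, ready for embedding."""
    return [f"{summarize_context(document_title, c)} {c}" for c in chunks]

chunks = [
    "Revenue grew 3% over the previous quarter.",
    "Operating costs fell slightly.",
]
enriched = contextualize_chunks("ACME Q2 2024 Report", chunks)
print(enriched[0])
# → From the document 'ACME Q2 2024 Report': Revenue grew 3% over the previous quarter.
```

The point of the prepended context is that an isolated chunk like "Revenue grew 3%..." becomes retrievable for queries mentioning the company or period, which the bare chunk never names.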