Artificial Intelligence Content on InfoQ
-
Reducing False Positives in Retrieval-Augmented Generation (RAG) Semantic Caching: a Banking Case Study
In this article, author Elakkiya Daivam discusses why Retrieval-Augmented Generation (RAG) and semantic caching techniques are powerful levers for reducing false positives in AI-powered applications. She shares insights from a production-grade evaluation in which 1,000 query variations were tested across seven bi-encoder models.
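As a rough illustration of the idea (not the article's evaluated setup), the sketch below checks a semantic cache with a bi-encoder and a cosine-similarity threshold; the model name and threshold are assumptions.

```python
# Minimal semantic-cache lookup sketch; the model name and threshold are
# illustrative assumptions, not the settings evaluated in the article.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any bi-encoder works here
cache = {}  # cached query text -> previously generated answer

def cached_answer(query: str, threshold: float = 0.85):
    """Return a cached answer only if a stored query is semantically close enough."""
    if not cache:
        return None
    query_vec = model.encode(query, convert_to_tensor=True)
    cached_queries = list(cache.keys())
    cached_vecs = model.encode(cached_queries, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, cached_vecs)[0]
    best_idx = int(scores.argmax())
    # A threshold that is too loose is exactly what produces false positives:
    # "close my savings account" must not match "close my credit card".
    if float(scores[best_idx]) >= threshold:
        return cache[cached_queries[best_idx]]
    return None
```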
-
Training Data Preprocessing for Text-to-Video Models
In this article, author Aleksandr Rezanov discusses data preparation for generative text-to-video models, work aimed at accelerating video generation services for TV series and films. He explains how the data is prepared and how the process can serve as a starting point for building custom datasets for proprietary models.
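As a rough, generic illustration (not the author's pipeline), the sketch below filters clip metadata by duration, resolution, and caption length, the kind of first pass such preprocessing often starts with; the fields and cut-offs are assumptions.

```python
# Generic clip-filtering sketch; the metadata fields and cut-offs below are
# illustrative assumptions, not the pipeline described in the article.
from dataclasses import dataclass

@dataclass
class Clip:
    path: str
    duration_s: float
    width: int
    height: int
    caption: str

def keep(clip: Clip) -> bool:
    """Drop clips that are too short, too low-resolution, or poorly captioned."""
    return (
        2.0 <= clip.duration_s <= 20.0
        and clip.width >= 512
        and clip.height >= 512
        and len(clip.caption.split()) >= 5
    )

raw_clips = [
    Clip("a.mp4", 8.0, 1280, 720, "a cat jumps onto a wooden kitchen table"),
    Clip("b.mp4", 0.7, 640, 360, "blurry"),
]
dataset = [c for c in raw_clips if keep(c)]  # keeps only "a.mp4"
```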
-
Building a RAG Application with Spring Boot, Spring AI, MongoDB Atlas Vector Search, and OpenAI
The RAG paradigm combines generative models with business data to produce accurate, contextualised responses. This article shows how to integrate Spring Boot, Spring AI, MongoDB Atlas Vector Search, and OpenAI into a flexible pipeline that transforms the way businesses access and create value from data, with applications ranging from finance and healthcare to customer service.
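This is not the article's Spring AI and MongoDB code: the sketch below is a language-agnostic Python rendering of the same retrieve-augment-generate flow, with a brute-force in-memory search standing in where Atlas Vector Search would sit; the model names and sample documents are assumptions.

```python
# Minimal retrieve-augment-generate sketch using the OpenAI Python client.
# A real deployment would store doc_vecs in a vector index (e.g. Atlas
# Vector Search) instead of the brute-force cosine search used here.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "Refunds are processed within 5 business days.",
    "Premium accounts include 24/7 phone support.",
]
doc_vecs = [d.embedding for d in client.embeddings.create(
    model="text-embedding-3-small", input=docs).data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str) -> str:
    q_vec = client.embeddings.create(model="text-embedding-3-small",
                                     input=[question]).data[0].embedding
    context = max(zip(doc_vecs, docs), key=lambda p: cosine(q_vec, p[0]))[1]
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
    )
    return reply.choices[0].message.content
```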
-
A Plan-Do-Check-Act Framework for AI Code Generation
AI code generation tools promise faster development but often create quality issues, integration problems, and delivery delays. A structured Plan-Do-Check-Act cycle can maintain code quality while still leveraging AI capabilities. Through working agreements, structured prompts, and continuous retrospection, teams retain accountability for their code while guiding AI to produce tested, maintainable software.
-
Exploring the Unintended Consequences of Automation in Software
This article lays out some common assumptions and misconceptions about automation and its role in software (and software incidents), what our research has found about how automation shows up in software incidents, and some ideas on how to design automated tools that better help people handle those incidents.
-
Disaggregation in Large Language Models: the Next Evolution in AI Infrastructure
Large Language Model (LLM) inference faces a fundamental challenge: the same hardware that excels at processing input prompts struggles with generating responses, and vice versa. Disaggregated serving architectures solve this by separating these distinct computational phases, delivering throughput improvements and better resource utilization while reducing costs.
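As a toy illustration only (not a serving framework), the sketch below separates prefill and decode into two worker pools that hand off a stand-in KV cache through a queue; real systems do this across GPU pools with an RPC or RDMA transfer.

```python
# Toy simulation of disaggregated serving: prefill and decode run in
# separate worker pools and hand off the KV cache between them.
import queue
import threading
import time

prefill_q = queue.Queue()
decode_q = queue.Queue()

def prefill_worker():
    # Compute-bound phase: run the full prompt through the model once,
    # producing the attention KV cache.
    while True:
        req_id, prompt = prefill_q.get()
        kv_cache = f"kv({prompt})"        # stand-in for real attention state
        decode_q.put((req_id, kv_cache))  # hand the cache off to the decode pool

def decode_worker():
    # Memory-bandwidth-bound phase: generate tokens one at a time from the cache.
    while True:
        req_id, kv_cache = decode_q.get()
        print(req_id, "decoding from", kv_cache)

threading.Thread(target=prefill_worker, daemon=True).start()
threading.Thread(target=decode_worker, daemon=True).start()
prefill_q.put(("req-1", "Explain disaggregated LLM serving"))
time.sleep(0.5)  # give the workers time to run before the script exits
```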
-
InfoQ AI, ML and Data Engineering Trends Report - 2025
This InfoQ Trends Report offers readers a comprehensive overview of emerging trends and technologies in AI, ML, and Data Engineering. It summarizes the views of the InfoQ editorial team and external guests on current trends in AI and ML technologies, and what to look out for over the next 12 months.
-
Virtual Panel: How Software Engineers and Team Leaders Can Excel with Artificial Intelligence
Artificial intelligence is impacting the individual work of software developers, how professionals work together in teams, and how software teams are being managed. In this panel, we'll discuss how artificial intelligence is reshaping software development, and what mindset and skills are required for software developers and engineering leaders to become adaptable and resilient in the age of AI.
-
Effective Practices for Architecting a RAG Pipeline
Hybrid search, smart chunking, and domain-aware indexing are key to building effective RAG pipelines. Context window limits and prompt quality critically affect LLM response accuracy. This article provides lessons learned from setting up a RAG pipeline.
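As a minimal illustration of two of these practices, the sketch below shows overlap-based chunking and a blended hybrid-search score; the chunk sizes and weighting constant are illustrative choices, not the article's settings.

```python
# Illustrative sketches of overlap-based chunking and hybrid scoring.

def chunk(text: str, size: int = 400, overlap: int = 80) -> list[str]:
    """Fixed windows with overlap, so facts spanning a boundary still
    appear intact in at least one chunk."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def hybrid_score(keyword_score: float, vector_score: float, alpha: float = 0.4) -> float:
    """Blend lexical (e.g. BM25) and semantic (embedding) relevance so that
    exact identifiers and paraphrases both rank well."""
    return alpha * keyword_score + (1 - alpha) * vector_score
```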
-
How Causal Reasoning Addresses the Limitations of LLMs in Observability
Large language models excel at converting observability telemetry into clear summaries but struggle with accurate root cause analysis in distributed systems. LLMs often hallucinate explanations and confuse symptoms with causes. This article explains how causal reasoning models built on Bayesian inference can offer more reliable incident diagnosis.
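As a toy illustration of the Bayesian idea (not the article's model), the sketch below ranks candidate root causes by how well they explain the observed symptoms; the priors and likelihoods are made-up numbers.

```python
# Toy Bayesian root-cause scoring: rank candidate causes by how well they
# explain observed symptoms, rather than by surface correlation.
# All probabilities below are illustrative assumptions.

priors = {"db_overload": 0.05, "bad_deploy": 0.10, "network_partition": 0.02}

# P(symptom | cause): how strongly each cause predicts what we observe.
likelihoods = {
    "db_overload": {"api_latency_high": 0.9, "error_rate_high": 0.4},
    "bad_deploy": {"api_latency_high": 0.5, "error_rate_high": 0.8},
    "network_partition": {"api_latency_high": 0.7, "error_rate_high": 0.9},
}

def posterior(symptoms: list[str]) -> dict[str, float]:
    scores = {}
    for cause, prior in priors.items():
        p = prior
        for s in symptoms:
            p *= likelihoods[cause].get(s, 0.01)
        scores[cause] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}  # normalise to probabilities

print(posterior(["api_latency_high", "error_rate_high"]))
```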
-
MCP: the Universal Connector for Building Smarter, Modular AI Agents
In this article, the authors discuss the Model Context Protocol (MCP), an open standard designed to connect AI agents with the tools and data they need. They also cover how MCP empowers agent development and its adoption in leading open-source frameworks.
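As a minimal sketch, assuming the official Python MCP SDK's FastMCP helper, the snippet below exposes a single made-up tool that any MCP-compliant agent could call; the tool itself is not from the article.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper;
# the exchange-rate tool is a made-up example for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("banking-tools")

@mcp.tool()
def get_exchange_rate(base: str, quote: str) -> float:
    """Return the current exchange rate for a currency pair."""
    rates = {("USD", "EUR"): 0.92}  # stub data standing in for a real source
    return rates.get((base, quote), 1.0)

if __name__ == "__main__":
    mcp.run()  # exposes the tool over MCP so a compliant agent can discover and call it
```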
-
The Virtual Think Tank: Using LLMs to Gain a Multitude of Perspectives
The virtual think tank leverages LLMs to simulate diverse stakeholder and expert perspectives, enabling architects to explore trade-offs, challenge biases, and refine decisions. By prompting the model with personas of real industry experts, the method fosters rich, contextual debates, offering a scalable, low-cost alternative to a traditional think tank.
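As a small sketch of the persona-prompting idea, the snippet below asks the same architectural question under several personas using the OpenAI Python client; the model name and persona list are illustrative assumptions, not the article's prompts.

```python
# Persona-prompting sketch: pose one question to several simulated experts
# and compare the answers. Model name and personas are assumptions.
from openai import OpenAI

client = OpenAI()
personas = [
    "a pragmatic SRE focused on operability",
    "a security architect focused on threat modelling",
    "a CFO focused on total cost of ownership",
]

question = "Should we adopt an event-driven architecture for order processing?"

for persona in personas:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"You are {persona}. Argue strictly from that viewpoint."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {persona} ---\n{reply.choices[0].message.content}\n")
```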