InfoQ Homepage AI, ML & Data Engineering Content on InfoQ
-
Eclipse LMOS: Launching AI Agents across Europe at Breakneck Speed
In this talk, the authors share some of our company’s key learnings in developing customer-facing LLM-powered applications deployed across Europe. They used multi-agent architecture and systems design to create an open-source set of tools, a framework, and a full-fledged platform to accelerate the development of AI agents. This is a summary of a presentation from InfoQ Dev Summit Boston 2024.
-
Building Trust in AI: Security and Risks in Highly Regulated Industries
Explore the transformative power of responsible AI across industries, emphasizing security, MLOps, and compliance. As AI drives innovation—from predicting hurricanes to enhancing legal workflows—organizations must prioritize ethical practices, transparency, and robust governance to safeguard sensitive data while navigating an evolving regulatory landscape.
-
Launching GenAI Productivity Tools: Insights and Lessons
In this article, based on a talk at QCon San Francisco 2024, author Mandy Gu shares some of the ways her company uses GenAI to enhance productivity and the lessons they learned along the way, including failed bets and features that were rolled back because of low user adoption. Most important, they learned to focus on building tools that were aligned with business goals.
-
Prompt Injection for Large Language Models
This article will cover two common attack vectors against large language models and tools based on them, prompt injection and prompt stealing. We will additionally introduce three approaches to make your LLM-based systems and tools less vulnerable to this kind of attacks and review their benefits and limitations, including fine-tuning, adversarial detectors, and system prompt hardening.
-
The End of the Bronze Age: Rethinking the Medallion Architecture
A shift left approach to data processing relies on data products that form the basis of data communication across the business. This addresses many flaws in traditional data processing and makes data more relevant, complete, and trustworthy.
-
Elevate Developer Experience with Generative AI Capabilities on AWS
This is a summary of a talk I gave at InfoQ Dev Summit Munich 2024. I discussed the transformative potential of generative AI in enhancing developer experiences, particularly through AWS. I’ll introduce key tools like Amazon Bedrock, Code Review Assistant, Agentic Code Generation, and Code Summarization in this article.
-
A Framework for Building Micro Metrics for LLM System Evaluation
LLM accuracy is a challenging topic to address and is much more multi-dimensional than a simple accuracy score. Denys Linkov introduces a framework for creating micro metrics to evaluate LLM systems, focusing on goal-aligned metrics that improve performance and reliability. By adopting an iterative "crawl, walk, run" methodology, teams can incrementally develop observability.
-
Navigating Responsible AI in the FinTech Landscape
Explore the dynamic intersection of responsible AI, regulation, and ethics in the FinTech sector. This article highlights key challenges and innovative practices as organizations navigate compliance with evolving guidelines like the EU AI Act. Discover how to balance transparency, efficiency, and risk management for sustainable AI growth in your business.
-
Architectural Intelligence – the Next AI
Architectural Intelligence is the ability to look beyond AI hype and identify real AI components. Determining how, where, and when to use AI elements comes down to traditional trade-off analysis. Like any technology, AI can be used creatively, but inappropriately. Identify if AI makes sense for your use case, then work to use it effectively to meet your needs.
-
Efficient Resource Management with Small Language Models (SLMs) in Edge Computing
Small Language Models (SLMs) bring AI inference to the edge without overwhelming the resource-constrained devices. In this article, author Suruchi Shah dives into how SLMs can be used in edge computing applications for learning and adapting to patterns in real-time, reducing the computational burden and making edge devices smarter.
-
Being a Responsible Developer in the Age of AI Hype
Justin Sheehy emphasizes that AI is code, not magic, and warns against inflated claims about AI capabilities. He urges developers to approach AI with healthy skepticism, seeking verifiable evidence and focusing on ethical practices, including addressing bias, privacy, and data integrity. Clear communication about AI’s limitations and accountable use are essential to prevent hype and misuse.
-
Virtual Panel: What to Consider When Adopting Large Language Models
Four experts discuss some issues people should think about when adopting LLMs and how they can make the best choice for their specific use case. Topics include how to choose between an API-based vs. self-hosted LLM, when to fine-tune an LLM, how to mitigate LLM risks, and what non-technical changes organizations need to make when adopting LLMs.