AI, ML & Data Engineering Content on InfoQ
-
Nvidia Ingest Aims to Make it Easier to Extract Structured Information from Documents
Nvidia Ingest is a new microservice aimed at processing document content and extracting metadata into a well-defined JSON schema. Ingest can process PDF, Word, and PowerPoint documents and extract structured information from tables, charts, images, and text using optical character recognition.
-
Microsoft Research AI Frontiers Lab Launches AutoGen v0.4 Library
Microsoft Research’s AI Frontiers Lab has announced the release of AutoGen version 0.4, an open-source framework designed to build advanced AI agent systems. This version marks a complete redesign of the AutoGen library, focusing on code quality, robustness, usability, and the scalability of agent workflows.
-
DeepSeek Open-Sources DeepSeek-V3, a 671B Parameter Mixture of Experts LLM
DeepSeek open-sourced DeepSeek-V3, a Mixture-of-Experts (MoE) LLM containing 671B parameters. It was pre-trained on 14.8T tokens using 2.788M GPU hours and outperforms other open-source models on a range of LLM benchmarks, including MMLU, MMLU-Pro, and GPQA.
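The top-k expert routing that underlies Mixture-of-Experts models can be sketched in a few lines of NumPy. This is an illustrative toy, not DeepSeek's actual implementation: a router scores each token against every expert, only the k highest-scoring experts run for that token, and their outputs are combined using the softmaxed router scores.

```python
import numpy as np

def moe_layer(x, expert_weights, router_weights, k=2):
    """Route each token in x to its top-k experts.

    x:              (tokens, d_model) input activations
    expert_weights: list of (d_model, d_model) matrices, one per expert
    router_weights: (d_model, num_experts) router projection
    """
    logits = x @ router_weights                 # (tokens, num_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, topk[t]]
        gates = np.exp(sel - sel.max())
        gates /= gates.sum()                    # softmax over selected experts only
        for gate, e in zip(gates, topk[t]):
            out[t] += gate * (x[t] @ expert_weights[e])
    return out

rng = np.random.default_rng(0)
d, n_exp = 8, 4
x = rng.normal(size=(3, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_exp)]
router = rng.normal(size=(d, n_exp))
y = moe_layer(x, experts, router, k=2)
print(y.shape)  # (3, 8)
```

The point of the sparse routing is that only k of the experts' parameters are active per token, which is how a 671B-parameter model can keep per-token compute far below that of a dense model of the same size.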
-
Google Releases Experimental AI Reasoning Model
Google has introduced Gemini 2.0 Flash Thinking Experimental, an AI reasoning model available in its AI Studio platform.
-
Google Vertex AI Provides RAG Engine for Large Language Model Grounding
Vertex AI RAG Engine is a managed orchestration service aimed at making it easier to connect large language models (LLMs) to external data sources, so that they stay up to date, generate more relevant responses, and hallucinate less.
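The grounding pattern such a service orchestrates can be illustrated independently of Vertex AI: retrieve the documents most similar to the query, then prepend them to the prompt so the model answers from that context rather than from its parameters alone. Below is a minimal sketch using bag-of-words cosine similarity; all function names are illustrative, not Vertex AI API calls, and a production system would use embeddings and a vector index instead.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(d.lower().split())), d) for d in corpus]
    return [d for s, d in sorted(scored, reverse=True)[:k] if s > 0]

def build_grounded_prompt(query, corpus):
    """Prepend retrieved context so the LLM answers from it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Vertex AI RAG Engine is a managed orchestration service.",
    "Apache Hudi is a transactional data lake platform.",
    "RAG connects LLMs to external data sources.",
]
prompt = build_grounded_prompt("What does RAG connect LLMs to?", corpus)
print(prompt)
```

Because the retrieved passages are injected at query time, the model's answers can reflect data that changed after training, which is the "up-to-date" property the RAG Engine targets.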
-
Apache Hudi 1.0 Now Generally Available
The Apache Software Foundation has recently announced the general availability of Apache Hudi 1.0, the transactional data lake platform with support for near real-time analytics. Initially introduced in 2017, Apache Hudi provides an open table format optimized for efficient writes in incremental data pipelines and fast query performance.
-
Major LLMs Have the Capability to Pursue Hidden Goals, Researchers Find
Researchers at AI safety firm Apollo Research found that AI agents may covertly pursue misaligned goals and hide their true objectives. Known as in-context scheming, this behavior does not appear to be accidental: the LLMs explicitly reason about deceptive strategies and consider them viable.
-
Microsoft Research Introduces AIOpsLab: a Framework for AI-Driven Cloud Operations
Microsoft Research unveiled AIOpsLab, an open-source framework designed to advance the development and evaluation of AI agents for cloud operations. The tool provides a standardized and scalable platform to address challenges in fault diagnosis, incident mitigation, and system reliability within complex cloud environments.
-
Shaping an Impactful Data Product Strategy
Lior Barak and Gaëlle Seret advocate proactive, business-focused strategies for data engineering. Barak proposes a 3-year roadmap using his Data Ecosystem Vision Board to align teams on strategic capabilities and measure ROI, cost, and impact. Seret promotes a "data as a product" approach, co-creating visions with stakeholders and evolving shared taxonomies to ensure long-term alignment.
-
HuatuoGPT-o1: Advancing Complex Medical Reasoning with AI
Researchers from The Chinese University of Hong Kong, Shenzhen, and the Shenzhen Research Institute of Big Data have introduced HuatuoGPT-o1, a medical large language model (LLM) designed to improve reasoning in complex healthcare scenarios.
-
Google Releases PaliGemma 2 Vision-Language Model Family
Google DeepMind released PaliGemma 2, a family of vision-language models (VLM). PaliGemma 2 is available in three different sizes and three input image resolutions and achieves state-of-the-art performance on several vision-language benchmarks.
-
Nvidia Announces Arm-Powered Project Digits, Its First Personal AI Computer
Capable of running 200B-parameter models, Nvidia Project Digits packs the new Nvidia GB10 Grace Blackwell chip to allow developers to fine-tune and run AI models on their local machines. Starting at $3,000, Project Digits targets AI researchers, data scientists, and students, allowing them to create models on a desktop system and then deploy them on cloud or data center infrastructure.
-
Google Expands Gemini Code Assist with Support for Atlassian, GitHub, and GitLab
Google recently announced support for third-party tools in Gemini Code Assist, including Atlassian Rovo, GitHub, GitLab, Google Docs, Sentry, and Snyk. The private preview enables developers to test the integration of widely-used software tools with the personal AI assistant directly within the IDE.
-
Nvidia Nemotron Models Aim to Accelerate AI Agent Development
Nvidia has launched Llama Nemotron large language models (LLMs) and Cosmos Nemotron vision language models (VLMs) with a special emphasis on agent-powered workflows such as customer support, fraud detection, and product supply chain optimization. Models in the Nemotron family come in Nano, Super, and Ultra sizes to better fit the requirements of diverse systems.
-
Netflix Enhances Metaflow with New Configuration Capabilities
Netflix has introduced a significant enhancement to its Metaflow machine learning infrastructure: a new Config object that brings powerful configuration management to ML workflows. This addition addresses a common challenge faced by Netflix's teams, which manage thousands of unique Metaflow flows across diverse ML and AI use cases.
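The pattern behind such a configuration object can be sketched without Metaflow itself: load configuration from a file once at startup, then expose it as read-only attributes to every step of a workflow. The class below is an illustrative stand-in built on the standard library only; its name and interface are hypothetical, not Metaflow's actual `Config` API.

```python
import json
from types import MappingProxyType

class FlowConfig:
    """Load a JSON config once and expose it read-only, as a stand-in
    for the kind of workflow configuration object described above."""

    def __init__(self, raw: dict):
        # MappingProxyType prevents steps from mutating shared config.
        self._data = MappingProxyType(dict(raw))

    @classmethod
    def from_json(cls, text: str) -> "FlowConfig":
        return cls(json.loads(text))

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails,
        # so config keys read like plain attributes.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

cfg = FlowConfig.from_json('{"model": "xgboost", "learning_rate": 0.1}')
print(cfg.model, cfg.learning_rate)  # xgboost 0.1
```

Centralizing configuration this way is what lets thousands of similar flows share one parameterized definition instead of near-duplicate code.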