Machine Learning Content on InfoQ
-
Rhymes AI Unveils Aria: Open-Source Multimodal Model with Development Resources
Rhymes AI has introduced Aria, an open-source multimodal native Mixture-of-Experts (MoE) model capable of processing text, images, video, and code effectively. In benchmarking tests, Aria has outperformed other open models and demonstrated competitive performance against proprietary models such as GPT-4o and Gemini-1.5.
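The announcement does not detail Aria's routing scheme, but the Mixture-of-Experts idea itself can be illustrated with a minimal top-k gating sketch. Everything below (the gate weights, the toy experts, the two-dimensional input) is illustrative only and is not Aria's actual architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x to the top_k experts with the highest gate score,
    then combine their outputs weighted by renormalized gate probabilities.
    Only the selected experts are evaluated, which is the source of the
    efficiency gain in MoE models."""
    # One gate score per expert: a simple linear projection of the input.
    scores = [sum(w * xi for w, xi in zip(gw, x)) for gw in gate_weights]
    probs = softmax(scores)
    # Pick the top_k experts by gate probability.
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    # Weighted combination of only the selected experts' outputs.
    return sum(probs[i] / norm * experts[i](x) for i in top)

# Example: 3 toy experts, route a 2-dim input to the top-2 of them.
experts = [lambda x: 10.0, lambda x: 20.0, lambda x: 30.0]
gates = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
output = moe_forward([1.0, 0.0], experts, gates, top_k=2)
```

In a real MoE transformer the experts are feed-forward sub-networks inside each layer and the gate is learned jointly with them; the sketch only shows the routing arithmetic.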
-
AI and ML Tracks at QCon San Francisco 2024 – a Deep Dive into GenAI & Practical Applications
At QCon San Francisco 2024, explore two AI/ML-focused tracks highlighting real-world applications and innovations. Learn from industry experts on deploying LLMs, GenAI, and recommendation systems, gaining practical strategies for integrating AI into software development.
-
NVIDIA Unveils NVLM 1.0: Open-Source Multimodal LLM with Improved Text and Vision Capabilities
NVIDIA unveiled NVLM 1.0, an open-source multimodal large language model (LLM) that performs strongly on both vision-language and text-only tasks. NVLM 1.0 shows improvements in text-based tasks after multimodal training, standing out among current models. The model weights are now available on Hugging Face, with the training code set to be released shortly.
-
Hugging Face Upgrades Open LLM Leaderboard v2 for Enhanced AI Model Comparison
Hugging Face has recently released Open LLM Leaderboard v2, an upgraded version of its benchmarking platform for large language models. Hugging Face created the Open LLM Leaderboard to provide a standardized evaluation setup for reference models, ensuring reproducible and comparable results.
-
Meta Releases Llama 3.2 with Vision, Voice, and Open Customizable Models
Meta recently announced Llama 3.2, the latest version of its open-source language model, which includes vision, voice, and open customizable models. This is the first multimodal version of the model, allowing users to interact with visual data in new ways, such as identifying objects in photos or editing images with natural-language commands.
-
OpenAI Releases Stable Version of .NET Library with GPT-4o Support and API Enhancements
OpenAI has released the stable version of its official .NET library, following June's beta launch. Available as a NuGet package, it supports the latest models like GPT-4o and GPT-4o mini, and the full OpenAI REST API. The release includes both sync and async APIs, streaming chat completions, and some key breaking changes for improved API consistency.
-
PyTorch Conference 2024: PyTorch 2.4/Upcoming 2.5, and Llama 3.1
The PyTorch Conference 2024, held by The Linux Foundation, showcased groundbreaking advancements in AI, featuring insights on PyTorch 2.4, Llama 3.1, and open-source projects like OLMo. Key discussions on LLM deployment, ethical AI, and innovative libraries like Torchtune and TorchChat emphasized collaboration and responsible practices in the evolving landscape of generative AI.
-
Microsoft Launches Azure AI Inference SDK for .NET
Microsoft launched Azure AI Inference SDK for .NET, streamlining access to generative AI models in the Azure AI Studio model catalog. This catalog includes models from providers like Azure OpenAI Service, Mistral, Meta, Cohere, NVIDIA, and Hugging Face, organized into three collections: Curated by Azure AI, Azure OpenAI Models, and Open Models from Hugging Face Hub.
-
AWS Announces General Availability of EC2 P5e Instances, Powered by NVIDIA H100 Tensor Core GPUs
Amazon Web Services (AWS) has launched EC2 P5e instances featuring NVIDIA H100 Tensor Core GPUs, substantially boosting AI and HPC performance. With enhanced memory bandwidth, these instances reduce latency for real-time applications. Ideal for tasks like LLM training and simulations, they offer improved scalability and cost-efficiency, making them pivotal for modern cloud computing.
-
Leveraging the Transformer Architecture for Music Recommendation on YouTube
Google has described an approach that applies transformer models, the architecture that ignited the current generative AI boom, to music recommendation. The approach, currently being applied experimentally on YouTube, aims to build a recommender that understands sequences of user actions while listening to music, to better predict user preferences based on their context.
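The core mechanism a transformer brings to this task is attention: scoring each past user action against the current context and blending their representations accordingly. The sketch below shows that arithmetic with hand-made two-dimensional vectors; in Google's actual system the query, keys, and values would come from learned embeddings of the listening history, and the specific numbers here are purely illustrative:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention: score each past action (key) against
    the current context (query), then return a weighted mix of the
    actions' value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of value vectors, one output component at a time.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy history of two past actions (e.g. two tracks the user played).
history_keys = [[1.0, 0.0], [0.0, 1.0]]   # embeddings used for scoring
history_vals = [[1.0, 0.0], [0.0, 1.0]]   # what each action contributes
context = [1.0, 0.0]                      # current user context
mix = attend(context, history_keys, history_vals)
```

Because the context vector points toward the first action, the first action receives the larger attention weight and dominates the output mix; a recommender would feed such mixed representations forward to rank candidate tracks.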
-
Pinterest Modernises Machine Learning Infrastructure with Ray
Pinterest, the visual discovery platform, has revealed details about its journey to modernise its machine learning infrastructure using Ray, an open-source distributed computing framework. In a recent blog post, the company shared insights into the challenges faced and solutions implemented as they integrated Ray into their large-scale production environment.
-
Meta Releases Llama 3.1 405B, Largest Open-Source Model to Date
Meta recently unveiled its latest language model, Llama 3.1 405B. This AI model is the largest of the new Llama models, which also include 8B and 70B versions. With 405 billion parameters, trained on 15 trillion tokens using 16,000 GPUs, Llama 3.1 405B offers a range of impressive features.
-
AWS Introduces Amazon Q Developer in SageMaker Studio to Streamline ML Workflows
AWS announced that Amazon SageMaker Studio now includes Amazon Q Developer as a new capability. This generative AI-powered assistant is built natively into SageMaker’s JupyterLab experience and provides recommendations for the best tools for each task, step-by-step guidance, code generation, and troubleshooting assistance.
-
Amazon SageMaker Now Offers Managed MLflow Capability for Enhanced Experiment Tracking
AWS has announced the general availability of a managed MLflow capability in Amazon SageMaker. MLflow is an open-source tool commonly used for managing ML experiments. Users can now compare model performance, parameters, and metrics across experiments in the MLflow UI, keep track of their best models in the MLflow Model Registry, and automatically register them as SageMaker models.
-
Apple WWDC: iOS 18 and Apple Intelligence Announcements
At WWDC 2024, Apple unveiled "Apple Intelligence," a suite of AI features coming to iOS 18, iPadOS 18, and macOS Sequoia. Apple's aim with Apple Intelligence is to seamlessly integrate AI into the core of the iPhone, iPad, and Mac experience.