Machine Learning Content on InfoQ
-
Cloudflare's Journey in ML and AI: MLOps Platform and Best Practices
Cloudflare's blog described its MLOps platform and best practices for running Artificial Intelligence (AI) deployments at scale. Cloudflare's products, including WAF attack scoring, bot management, and global threat identification, rely on constantly evolving Machine Learning (ML) models. These models are pivotal in enhancing customer protection and augmenting support services.
-
Google Launches New Multi-Modal Gemini AI Model
On December 6, Alphabet released the first phase of its next-generation AI model, Gemini. The effort was overseen by Alphabet CEO Sundar Pichai and driven by Google DeepMind. Gemini is the first model to outperform human experts on MMLU (Massive Multitask Language Understanding), one of the most popular benchmarks for testing the performance of language models.
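As an illustration only (not part of the announcement), here is a minimal sketch of calling a Gemini model through the google-generativeai Python SDK; the model name "gemini-pro" and the placeholder API key are assumptions:

```python
# Minimal sketch: querying a Gemini model via the google-generativeai SDK.
# The SDK choice, model name, and API key are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize the MMLU benchmark in one sentence.")
print(response.text)
```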
-
Anthropic Announces Claude 2.1 LLM with Wider Context Window and Support for AI Tools
According to Anthropic, the newest version of Claude delivers many “advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and our new beta feature: tool use.” Anthropic also announced reduced pricing across models to improve cost efficiency for its customers.
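A minimal sketch of how the system-prompt and long-context capabilities might be exercised through Anthropic's Python SDK; the prompt and token budget are illustrative assumptions:

```python
# Minimal sketch: calling Claude 2.1 with a system prompt via the Anthropic
# Python SDK. Prompt content and max_tokens are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    system="You are a concise technical assistant.",       # system prompt
    messages=[{"role": "user", "content": "Summarize this long report: ..."}],
)
print(message.content[0].text)
```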
-
Grafana Cloud Kubernetes Monitoring with Machine Learning Predictions
Managing cloud costs can be challenging as Kubernetes fleets scale. To address this issue, Grafana Cloud has introduced a cost-monitoring feature within Kubernetes Monitoring. In particular, Grafana Cloud’s Kubernetes Monitoring now offers ML predictions for CPU and memory usage.
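Grafana does not publish the prediction models it uses, but the general idea of forecasting resource usage from a recent time series can be sketched as follows (all numbers are made up):

```python
# Illustrative only: a toy extrapolation of CPU usage from recent samples.
# This is not Grafana Cloud's implementation; it just shows the concept of
# predicting future resource consumption from an observed series.
import numpy as np

cpu_usage = np.array([0.41, 0.44, 0.47, 0.52, 0.55, 0.58])  # hypothetical hourly CPU cores used
hours = np.arange(len(cpu_usage))

slope, intercept = np.polyfit(hours, cpu_usage, deg=1)      # fit a linear trend
forecast = slope * np.arange(len(cpu_usage), len(cpu_usage) + 24) + intercept

print(f"Predicted CPU cores in 24 hours: {forecast[-1]:.2f}")
```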
-
Mojo Language SDK Available: Mojo Driver, VS Code extension, and Jupyter Kernel
The Mojo SDK is available for developers. It contains the Mojo driver, a Visual Studio Code extension, and a Jupyter kernel. For now, the SDK is available for macOS and Linux.
-
OpenAI Announces New Models and APIs at First Developer Day Conference
OpenAI announced additions and price reductions across its platform at its first Developer Day. The updates include the introduction of a new GPT-4 Turbo model, an Assistants API, and multimodal capabilities, among others.
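A minimal sketch of calling the GPT-4 Turbo preview model with the OpenAI Python SDK (v1 client); the example prompt is illustrative:

```python
# Minimal sketch: chat completion against the GPT-4 Turbo preview model
# ("gpt-4-1106-preview") announced at Developer Day, using the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Name one announcement from OpenAI's first Developer Day."}],
)
print(completion.choices[0].message.content)
```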
-
Microsoft Releases DeepSpeed-FastGen for High-Throughput Text Generation
Microsoft has announced the alpha release of DeepSpeed-FastGen, a system designed to improve the deployment and serving of large language models (LLMs). DeepSpeed-FastGen is a synergistic composition of DeepSpeed-MII and DeepSpeed-Inference and is based on the Dynamic SplitFuse technique. The system currently supports several model architectures.
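A minimal sketch of serving a model with DeepSpeed-MII, the front end used by DeepSpeed-FastGen; the model identifier is an assumption and a CUDA GPU is required:

```python
# Minimal sketch: non-persistent DeepSpeed-MII pipeline backed by
# DeepSpeed-FastGen. Model name and generation settings are assumptions.
import mii

pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")   # loads the model onto the GPU
responses = pipe(["DeepSpeed is"], max_new_tokens=64)
print(responses[0].generated_text)
```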
-
Seven Essential Tracks at QCon London 2024: GenAI, FinTech, Platform Engineering & More!
InfoQ’s international software development conference, QCon London, returns on April 8-10, 2024. The conference will feature 15 carefully curated tracks and 60 technical talks over 3 days.
-
Ethical Machine Learning with Explainable AI and Impact Analysis
As more decisions are made or influenced by machines, there’s a growing need for a code of ethics for artificial intelligence. The main question is, “I can build it, but should I?” Explainable AI can provide checks and balances for fairness and explainability, and engineers can analyze the systems' impact on people's lives and mental health.
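As one concrete illustration of explainable AI (the article does not prescribe a specific tool), a SHAP-based sketch of per-feature contributions to a classifier's predictions:

```python
# Illustrative sketch of one explainable-AI technique: SHAP values showing
# how much each feature contributed to a model's prediction. The dataset,
# model, and library choice are assumptions for illustration.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer over the predicted probability of the positive class
explainer = shap.Explainer(lambda data: model.predict_proba(data)[:, 1], X)
shap_values = explainer(X.iloc[:5])   # contributions for five predictions
print(shap_values[0].values)          # per-feature contributions for the first one
```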
-
PyTorch 2.1 Release Adds Automatic Dynamic Shape Support and Distributed Training Enhancements
PyTorch Conference 2023 presented an overview of PyTorch 2.1. ExecuTorch was introduced to enhance PyTorch's performance on mobile and edge devices. The conference also focused on community, with new members added to the PyTorch Foundation and a Docathon announced.
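A minimal sketch of the automatic dynamic shape behavior in torch.compile: the same compiled function is reused across different input sizes rather than being recompiled for each one (the function itself is made up):

```python
# Minimal sketch: torch.compile with automatic dynamic shapes. After the
# first recompile triggered by a new input size, the compiled kernel is
# generalized over the varying dimension instead of recompiling per shape.
import torch

@torch.compile
def scaled_sum(x: torch.Tensor) -> torch.Tensor:
    return (x * 2.0).sum()

for n in (8, 16, 32):                 # varying input sizes
    print(scaled_sum(torch.randn(n)))
```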
-
Google Cloud Ops Agent Can Now Monitor Nvidia GPUs
Google Cloud announced that Ops Agent, the agent for collecting telemetry from Compute Engine instances, can now collect and aggregate metrics from NVIDIA GPUs on VMs.
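As an illustration, the collected metrics could be read back from Cloud Monitoring with the google-cloud-monitoring Python client; the metric type shown is an assumption, so check the Ops Agent documentation for the exact names in your project:

```python
# Illustrative sketch: querying a GPU utilization metric reported by Ops Agent
# from Cloud Monitoring. Project ID and metric type are assumptions.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project = "projects/my-gcp-project"  # hypothetical project ID

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    end_time={"seconds": now},
    start_time={"seconds": now - 3600},  # last hour
)

results = client.list_time_series(
    request={
        "name": project,
        "filter": 'metric.type = "agent.googleapis.com/gpu/utilization"',  # assumed metric name
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    print(series.resource.labels.get("instance_id", "?"), series.points[0].value.double_value)
```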
-
Defensible Moats: Unlocking Enterprise Value with Large Language Models at QCon San Francisco
In a recent presentation at QCon San Francisco, Nischal HP discussed the challenges enterprises face when building LLM-powered applications using APIs alone. These challenges include data fragmentation, the absence of a shared business vocabulary, privacy concerns regarding data, and diverse objectives among stakeholders.
-
Canonical Launches Charmed MLFlow to Simplify Management and Maintenance of ML Workflows
Based on the open-source MLflow platform, Canonical's Charmed MLflow aims to simplify the management of machine learning workflows and artifacts by using an alternative packaging system and orchestration engine.
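A minimal sketch of the MLflow tracking workflow that Charmed MLflow is meant to manage; the tracking URI is a hypothetical placeholder for a Charmed MLflow endpoint:

```python
# Minimal sketch: logging a run to an MLflow tracking server. The URI,
# experiment name, and logged values are placeholders for illustration.
import mlflow

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # hypothetical endpoint
mlflow.set_experiment("demo-experiment")

with open("model_card.md", "w") as f:       # small artifact to log
    f.write("# Demo model\n")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_artifact("model_card.md")
```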
-
Unpacking How Ads Ranking Works @ Pinterest: Aayush Mudgal at QCon San Francisco
At QCon San Francisco, Aayush Mudgal gave a talk on Pinterest's ads ranking strategy. Pinterest performs both candidate retrieval and ranking, supported by user interaction data and what the user is currently viewing. They use neural networks to create embeddings for ads and users, so that ads whose embeddings are close to a user's embedding are likely to be relevant. They train and deploy models on a daily basis.
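An illustrative sketch of the embedding-similarity idea from the talk, with made-up vectors: candidate ads are scored by how close their embeddings are to the user embedding:

```python
# Illustrative only: score candidate ads by cosine similarity between ad
# embeddings and a user embedding. Dimensions and vectors are made up.
import numpy as np

rng = np.random.default_rng(0)
user_embedding = rng.normal(size=64)
ad_embeddings = rng.normal(size=(1000, 64))   # 1,000 candidate ads

# Cosine similarity between the user and every candidate ad
scores = ad_embeddings @ user_embedding / (
    np.linalg.norm(ad_embeddings, axis=1) * np.linalg.norm(user_embedding)
)
top_ads = np.argsort(scores)[::-1][:10]       # ten most relevant candidates
print(top_ads)
```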
-
Grafana Introduces ML Tool Sift to Improve Incident Response
Grafana Labs has introduced "Sift," a feature for Grafana Cloud designed to enhance incident response management (IRM) by automating system checks and expediting issue resolution. Sift automates various aspects of incident investigation and surfaces potential issues within Kubernetes environments, helping engineers focus on resolving incidents.