Large Language Models: Content on InfoQ
-
Google Open Sources 27B Parameter Gemma 2 Language Model
Google DeepMind recently open-sourced Gemma 2, the next generation of their family of small language models. Google made several improvements to the Gemma architecture and used knowledge distillation to give the models state-of-the-art performance: Gemma 2 outperforms other models of comparable size and is competitive with models 2x larger.
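Gemma 2's exact training recipe is not described here, but the core idea of knowledge distillation — training a small student model to match a larger teacher's softened output distribution rather than just hard labels — can be sketched with stdlib Python. The function names, logits, and temperature value below are illustrative, not taken from Google's implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    the core term of a knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
aligned = distillation_loss(teacher, [2.0, 1.0, 0.1])      # student matches teacher
mismatched = distillation_loss(teacher, [0.1, 1.0, 2.0])   # student disagrees
```

A student whose logits reproduce the teacher's distribution incurs zero loss, while disagreement is penalized — which is why distillation lets a smaller model approach the quality of one much larger.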
-
Amazon Q Apps Aim to Simplify the Creation of Generative AI Apps for the Enterprise
As part of its Amazon Q Business offering, Amazon Q Apps enables the creation of generative AI-powered apps that integrate enterprise data and can be shared securely within an organization. Alongside general availability, Amazon announced new APIs for Amazon Q Apps and more granular data source definitions.
-
Amazon SageMaker Now Offers Managed MLflow Capability for Enhanced Experiment Tracking
AWS has announced the general availability of a managed MLflow capability in Amazon SageMaker. MLflow is an open-source tool commonly used for managing ML experiments. Users can now compare model performance, parameters, and metrics across experiments in the MLflow UI, keep track of their best models in the MLflow Model Registry, and automatically register them as SageMaker models.
-
Amazon Brings AI Assistant to Software Development as Part of Amazon Q Suite
Amazon has recently released Amazon Q Developer Agent, an AI-powered assistant that uses natural language input from developers to generate features, bug fixes, and unit tests within an integrated development environment (IDE). It employs large language models and generative AI to understand a developer's natural language request, and then generate the necessary code changes.
-
Xcode 16 Brings Predictive Code Completion Using Custom Model
At WWDC 2024, Xcode and Swift Playgrounds senior manager Ken Orr presented the most salient features of Xcode 16, the upcoming version of the IDE, including predictive code completion powered by a custom model, along with many bug fixes and improvements.
-
Mistral Introduces AI Code Generation Model Codestral
Mistral AI has unveiled Codestral, its first code-focused AI model. Codestral assists developers with coding tasks, offering efficient and accurate code generation.
-
Meta Open-Sources MEGALODON LLM for Efficient Long Sequence Modeling
Researchers from Meta, University of Southern California, Carnegie Mellon University, and University of California San Diego recently open-sourced MEGALODON, a large language model (LLM) with an unlimited context length. MEGALODON has linear computational complexity and outperforms a similarly-sized Llama 2 model on a range of benchmarks.
-
Slack Combines ASTs with Large Language Models to Automatically Convert 80% of 15,000 Unit Tests
Slack's engineering team recently published how it used a large language model (LLM) to automatically convert 15,000 unit and integration tests from Enzyme to React Testing Library (RTL). By combining Abstract Syntax Tree (AST) transformations and AI-powered automation, Slack's innovative approach resulted in an 80% conversion success rate, significantly reducing the manual effort required.
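Slack's actual pipeline operated on JavaScript test files (the mechanical passes in codemods of the jscodeshift style), but the division of labor — let a deterministic AST transform handle the predictable rewrites, and reserve the LLM for the cases it cannot pattern-match — can be illustrated with Python's stdlib `ast` module. The `shallow`/`render` names below are hypothetical stand-ins echoing the Enzyme-to-RTL migration, not Slack's code.

```python
import ast

class RenameCall(ast.NodeTransformer):
    """Mechanically rewrite calls to one helper into calls to another --
    the kind of deterministic rewrite an AST codemod handles before an
    LLM is asked to convert the remaining, harder cases."""

    def __init__(self, old_name, new_name):
        self.old_name = old_name
        self.new_name = new_name

    def visit_Call(self, node):
        self.generic_visit(node)  # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == self.old_name:
            node.func = ast.Name(id=self.new_name, ctx=ast.Load())
        return node

source = "wrapper = shallow(App)\nassert wrapper is not None\n"
tree = ast.parse(source)
tree = RenameCall("shallow", "render").visit(tree)
converted = ast.unparse(ast.fix_missing_locations(tree))
```

Because the transform walks the syntax tree rather than matching strings, it rewrites only genuine call sites and leaves identifiers in strings or comments untouched — which is what makes this class of conversion safe to run across thousands of files.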
-
AI and Software Development: Preview of Sessions at InfoQ Events
Explore the transformative impact of AI on software development at InfoQ's upcoming events. Senior software developers will share practical applications and ethical considerations of AI technology through technical talks.
-
OpenAI Publishes GPT Model Specification for Fine-Tuning Behavior
OpenAI recently published their Model Spec, a document that describes rules and objectives for the behavior of their GPT models. The spec is intended for use by data labelers and AI researchers when creating data for fine-tuning the models.
-
Cloudflare AI Gateway Now Generally Available
Cloudflare has recently announced that AI Gateway is now generally available. Described as a unified interface for managing and scaling generative AI workloads, AI Gateway allows developers to gain visibility and control over AI applications.
-
JLama: The First Pure Java Model Inference Engine Implemented With Vector API and Project Panama
Karpathy's roughly 700-line llama2.c inference engine demystified how developers can interact with LLMs. Even before that, JLama began its journey toward becoming the first pure-Java inference engine for any Hugging Face model, from Gemma to Mixtral. Leveraging the new Vector API and a PanamaTensorOperations class with a native fallback, the library is available on Maven Central.
-
Recap of Google I/O 2024: Gemini 1.5, Project Astra, AI-powered Search Engine
Google recently hosted its annual developer conference, Google I/O 2024, where numerous announcements were made regarding Google’s apps and services. As anticipated, AI was a focal point of the event, being incorporated into almost all Google products. Here is a summary of the major announcements from the event.
-
Google Brings Gemini Nano to Chrome to Enable On-Device Generative AI
At its Google I/O 2024 developer conference, Google announced it is working to make support for on-device large language models a reality by bringing the smallest of its Gemini models, Gemini Nano, to Chrome.
-
OpenAI Announces New Flagship Model GPT-4o
OpenAI recently announced the latest version of their GPT AI foundation model, GPT-4o. GPT-4o is faster than the previous version of GPT-4 and has improved capabilities in handling speech, vision, and multilingual tasks, outperforming all models except Google's Gemini on several benchmarks.