Artificial Intelligence Content on InfoQ
-
How a Software Architect Uses Artificial Intelligence in His Daily Work
Software architects and system architects will not be replaced anytime soon by generative artificial intelligence (AI) or large language models (LLMs), Avraham Poupko said. They will be replaced by software architects who know how to leverage generative AI and LLMs, and just as importantly, know how NOT to use generative AI.
-
Latin America Launches Latam-GPT to Improve AI Cultural Relevance
Latin America is advancing in the development of artificial intelligence with the creation of Latam-GPT, a language model designed to better represent the history, culture, and linguistic diversity of the region.
-
UC Berkeley's Sky Computing Lab Introduces Model to Reduce AI Language Model Inference Costs
UC Berkeley's Sky Computing Lab has released Sky-T1-32B-Flash, an updated reasoning language model that addresses the common issue of AI overthinking. The model, developed through the NovaSky (Next-generation Open Vision and AI) initiative, "slashes inference costs on challenging questions by up to 57%" while maintaining accuracy across mathematics, coding, science, and general knowledge domains.
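As an illustration, a released checkpoint like this would typically be loaded with the transformers library. This is a minimal sketch only: the repository id below is an assumption based on the NovaSky naming, and a 32B model needs substantial GPU memory or quantization to run.

```python
# Illustrative sketch: loading the released checkpoint with transformers.
# The repo id is assumed from the NovaSky naming and may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Flash"   # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens, not the prompt
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```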
-
OpenAI Cancels o3 Release and Announces Roadmap for GPT-4.5 and GPT-5
OpenAI is restructuring its AI strategy to focus solely on GPT-5, consolidating capabilities like reasoning, voice synthesis, and deep research into one unified model. The shift aims to simplify product offerings and enhance user experience, with tiered subscription levels offering varying degrees of intelligence. As competition heats up, the success of GPT-5 will be pivotal for OpenAI’s future.
-
OpenAI Features New o3-mini Model on Microsoft Azure OpenAI Service
OpenAI has launched the advanced o3-mini model via Microsoft Azure, enhancing AI applications with improved cost efficiency, faster performance, and adjustable reasoning capabilities. Designed for complex tasks, it supports structured outputs and backward compatibility. With widespread access, o3-mini empowers developers to drive innovation across various industries.
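A minimal sketch of the "adjustable reasoning" knob with the openai Python SDK's AzureOpenAI client: the endpoint, deployment name, and API version below are placeholders, and the reasoning_effort values shown are the standard low/medium/high settings for o-series models.

```python
# Minimal sketch: calling an o3-mini deployment on Azure OpenAI.
# Endpoint, key, deployment name, and API version are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-12-01-preview",   # assumed; use your resource's version
)

response = client.chat.completions.create(
    model="o3-mini",                    # your deployment name
    reasoning_effort="medium",          # "low" | "medium" | "high"
    messages=[{"role": "user", "content": "Plan a migration to microservices."}],
)
print(response.choices[0].message.content)
```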
-
OpenEuroLLM: Europe’s New Initiative for Open-Source AI Development
A consortium of 20 European research institutions, companies, and EuroHPC centers has launched OpenEuroLLM, an initiative to develop open-source, multilingual large language models (LLMs). Coordinated by Jan Hajič and co-led by Peter Sarlin, the project aims to provide transparent and compliant AI models for commercial and public sector applications.
-
Hugging Face Expands Serverless Inference Options with New Provider Integrations
Hugging Face has integrated four serverless inference providers (Fal, Replicate, SambaNova, and Together AI) directly into its model pages. These providers are also integrated into Hugging Face's client SDKs for JavaScript and Python, allowing users to run inference on various models with minimal setup.
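A minimal sketch of how this looks from the Python SDK (huggingface_hub), assuming a Hugging Face token with inference access; the provider and model id chosen here are illustrative, and any model the selected provider serves would work.

```python
# Minimal sketch: routing a chat completion through a third-party
# serverless provider via the Hugging Face hub client.
# Provider, model id, and token are illustrative placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="together",      # e.g. "fal-ai", "replicate", "sambanova", "together"
    api_key="hf_xxx",         # Hugging Face token
)

response = client.chat_completion(
    model="deepseek-ai/DeepSeek-R1",   # any model the provider serves
    messages=[{"role": "user", "content": "Summarize serverless inference in one sentence."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```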
-
Block Launches Open-Source AI Framework Codename Goose
Block’s Open Source Program Office has launched Codename Goose, an open-source, non-commercial AI agent framework designed to automate tasks and integrate seamlessly with existing tools. Goose provides users with a flexible, on-machine AI assistant that can be customized through extensions, enabling developers and other professionals to enhance their productivity.
-
AMD and Johns Hopkins Researchers Develop AI Agent Framework to Automate Scientific Research Process
Researchers from AMD and Johns Hopkins University have developed Agent Laboratory, an artificial intelligence framework that automates core aspects of the scientific research process. The system uses large language models to handle literature reviews, experimentation, and report writing, producing both code repositories and research documentation.
-
DeepSeek Releases Another Open-Source AI Model, Janus Pro
DeepSeek has released Janus-Pro, an updated version of its multimodal model, Janus. The new model improves training strategies, data scaling, and model size, enhancing multimodal understanding and text-to-image generation.
-
OpenAI Presents Research on Inference-Time Compute to Improve AI Security
OpenAI presented Trading Inference-Time Compute for Adversarial Robustness, a research paper that investigates the relationship between inference-time compute and the robustness of AI models against adversarial attacks.
-
Amazon Bedrock Introduces Multi-Agent Systems (MAS) with Open Source Framework Integration
Amazon Web Services has released a multi-agent collaboration capability for Amazon Bedrock, introducing a framework for deploying and managing multiple AI agents that collaborate on complex tasks. The system enables specialized agents to work together under a supervisor agent's coordination, addressing challenges developers face with agent orchestration in distributed AI systems.
-
Using Machine Learning on Microcontrollers: Decreasing Memory and CPU Usage to Save Power and Cost
According to Eirik Midttun, artificial intelligence (AI) and machine learning (ML) are useful tools for interpreting sensor data, especially when the input is complex, such as vibration, voice, and vision. The main challenges of using machine learning on microcontrollers are the constraints on available computing power and the cost-related requirements that come with microcontroller-based designs.
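One common technique for fitting models into a microcontroller's memory and CPU budget (illustrative only, not taken from the talk) is post-training integer quantization. The sketch below uses the TensorFlow Lite converter on a small, hypothetical sensor-classification model; the layer sizes and calibration data are placeholders.

```python
# Illustrative sketch: post-training int8 quantization of a small Keras
# model so it fits a microcontroller's flash/RAM budget (TFLite Micro).
import numpy as np
import tensorflow as tf

# Hypothetical model: classify windows of 3-axis accelerometer samples.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 3)),
    tf.keras.layers.Conv1D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Calibration samples; real sensor windows would go here.
    for _ in range(100):
        yield [np.random.rand(1, 128, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)   # deployable with TensorFlow Lite for Microcontrollers
```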
-
Microsoft Research Unveils rStar-Math: Advancing Mathematical Reasoning in Small Language Models
Microsoft Research unveiled rStar-Math, a framework that demonstrates the ability of small language models (SLMs) to achieve mathematical reasoning capabilities comparable to, and in some cases exceeding, larger models like OpenAI's o1-mini. This is accomplished without distillation from more advanced models, representing a novel approach to enhancing the reasoning capabilities of small language models.
-
Microsoft Research Introduces AIOpsLab: a Framework for AI-Driven Cloud Operations
Microsoft Research unveiled AIOpsLab, an open-source framework designed to advance the development and evaluation of AI agents for cloud operations. The tool provides a standardized and scalable platform to address challenges in fault diagnosis, incident mitigation, and system reliability within complex cloud environments.