Google Content on InfoQ
-
Google Stax Aims to Make AI Model Evaluation Accessible for Developers
Google Stax is a framework designed to replace subjective evaluations of AI models with an objective, data-driven, and repeatable process for measuring model output quality. Google says this will allow AI developers to tailor the evaluation process to their specific use cases rather than relying on generic benchmarks.
-
Google's Agent Development Kit for Java Adds Integration with LangChain4j
The latest release of the Agent Development Kit for Java, version 0.2.0, significantly expands its capabilities through integration with the LangChain4j LLM framework, opening it up to all the large language models that framework supports.
-
Google Introduces VaultGemma: An Experimental Differentially Private LLM
VaultGemma is a 1B-parameter, Gemma 2-based LLM that Google trained from scratch using differential privacy (DP) to prevent the model from memorizing and later regurgitating training data. While still a research model, VaultGemma could enable applications in healthcare, finance, legal, and other regulated sectors.
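As a rough sketch of how such a model might be tried out, assuming VaultGemma is published on Hugging Face under an id such as google/vaultgemma-1b (an assumption here; check the model card for the actual identifier and access terms), loading it with the Transformers library looks like any other Gemma-family checkpoint:

```python
# Sketch: loading VaultGemma with Hugging Face Transformers.
# The model id "google/vaultgemma-1b" is an assumption; the checkpoint may be
# gated and require accepting a license on Hugging Face first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/vaultgemma-1b"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Differential privacy affects training, not inference: generation works as usual.
inputs = tokenizer("Summarize the key points of a patient intake form:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```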
-
Google Brings Optimized Resource Shrinking to Latest Android Gradle Plugin
The latest version of the Android Gradle Plugin (AGP) introduces an optimized resource-shrinking approach that unifies code optimization and resource shrinking, achieving, according to Google, up to a 50% reduction in app size for apps that share significant resources and code across different form factors.
-
Google DeepMind Launches EmbeddingGemma, an Open Model for On-Device Embeddings
Google DeepMind has introduced EmbeddingGemma, a 308M parameter open embedding model designed to run efficiently on-device. The model aims to make applications like retrieval-augmented generation (RAG), semantic search, and text classification accessible without the need for a server or internet connection.
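A minimal sketch of on-device semantic search with EmbeddingGemma, assuming it is exposed through the sentence-transformers library under a Hugging Face id such as google/embeddinggemma-300m (an assumption; check the official model card for the exact identifier and any license gating):

```python
# Sketch: local embeddings and a toy semantic search with sentence-transformers.
# The model id "google/embeddinggemma-300m" is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("google/embeddinggemma-300m")  # assumed identifier

docs = [
    "Jetpack Compose adds new APIs for shadows and 2D scrolling.",
    "Spanner's columnar engine accelerates analytical queries on live data.",
]
query = "What speeds up OLAP queries?"

doc_embeddings = model.encode(docs)    # one vector per document
query_embedding = model.encode(query)  # single query vector

# Rank documents by cosine similarity, entirely on-device.
scores = util.cos_sim(query_embedding, doc_embeddings)
print(scores)
```

Because the model is small enough to run locally, the same pattern can back RAG retrieval or text classification without sending data to a server.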
-
Google Spanner Unifies OLTP and OLAP with Columnar Engine
Google Spanner now features a columnar engine, allowing its distributed database to handle both OLTP and OLAP workloads on a single platform. This hybrid architecture eliminates the need for separate data warehouses and ETL pipelines. The engine's columnar storage and vectorized execution accelerate analytical queries up to 200x on live data, which is especially beneficial for AI applications.
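As an illustration of the hybrid workload described above, the sketch below runs an analytical aggregation over live transactional data using the google-cloud-spanner Python client. The instance, database, and schema names are placeholders, and the columnar engine is assumed to accelerate such scans transparently, without changes to the SQL:

```python
# Sketch: an OLAP-style aggregation against live Spanner data.
# "my-instance", "orders-db", and the Orders table are placeholders.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("my-instance")   # placeholder instance
database = instance.database("orders-db")   # placeholder database

sql = """
    SELECT product_id, SUM(quantity) AS units_sold
    FROM Orders
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY product_id
    ORDER BY units_sold DESC
    LIMIT 10
"""

# Read-only snapshot: the analytical scan does not block OLTP writes.
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql(sql):
        print(row)
```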
-
Android Studio Narwhal Extends Gemini AI Capabilities
The latest Android Studio Narwhal 3 Feature Drop introduces enhancements aimed at boosting developer productivity, including support for resizable Compose previews, new app Backup & Restore tools, and expanded Gemini capabilities such as automatic code generation from UI screenshots.
-
Google Launches Gemini 2.5 Flash Image with Advanced Editing and Consistency Features
Google released Gemini 2.5 Flash Image (nicknamed nano-banana), its newest image generation and editing model. The system introduces several upgrades over earlier Flash models, including character consistency across prompts, multi-image fusion, precise prompt-based editing, and integration of world knowledge for semantic understanding.
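A hedged sketch of calling the model from Python with the google-genai SDK; the model id gemini-2.5-flash-image-preview and the GEMINI_API_KEY environment variable are assumptions, so consult the official documentation for the current names:

```python
# Sketch: prompt-based image generation with the google-genai Python SDK.
# Model id and API-key environment variable are assumptions.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed model id
    contents="A watercolor painting of a banana wearing a tiny astronaut helmet",
)

# The response can mix text and inline image parts; save any returned images.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data is not None:
        with open(f"image_{i}.png", "wb") as f:
            f.write(part.inline_data.data)
    elif part.text:
        print(part.text)
```

For editing, the same generate_content call can be given an existing image alongside a textual instruction as additional content parts.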
-
Google Cloud Unveils New Data Security Posture Management Offering in Preview
Google Cloud has unveiled its new Data Security Posture Management (DSPM) offering in preview, aimed at strengthening data governance, privacy, and compliance. The service provides visibility into sensitive data, helping organizations identify risks and enforce controls, and integrates with the Security Command Center to address cloud data security challenges.
-
Google Veles is a New Open-Source Secret Scanner Powering GCP
Google Veles is a newly released open-source secret scanner, launched as part of Google's broader OSV-SCALIBR (Software Composition Analysis LIBRary) ecosystem. Veles integrates seamlessly with other OSV-SCALIBR tools and also powers secret scanning in Google Cloud, while remaining available as a standalone module.
-
Gemini 2.5 Deep Think Parallelizes Creative Problem-Solving
Available as part of the Google AI Ultra subscription, Gemini 2.5 Deep Think is a model designed for creative problem-solving through parallel thinking techniques and extended inference time.
-
Jetpack Compose Enhances Scrolling, Lazy Lists, and More
The latest release of Jetpack Compose, named as usual after the month of its release, adds new APIs for rendering shadows and 2D scrolling, improves lazy list performance, and more.
-
Google Launches Jules, an Asynchronous Coding Agent Powered by Gemini 2.5
Google has moved Jules, its asynchronous, agent-based coding assistant, out of beta and into general availability, positioning it as a tool for developers who want to offload routine programming tasks. Powered by the Gemini 2.5 Pro model, Jules is designed to handle a wide range of coding activities, from writing tests and building new features to fixing bugs or generating audio changelogs.
-
Google Cloud Launches 'Cloud Setup' to Streamline Foundational Infrastructure
Google Cloud has launched Google Cloud Setup, a streamlined service for creating secure, best-practice cloud environments. Offering guided workflows for proof-of-concept, production, and enhanced-security needs, the tool reduces manual effort and enables application deployment in minutes rather than days, with configuration that follows built-in best practices and provides cost-effective access.
-
Google Launches LangExtract, a Python Library for Structured Data Extraction from Unstructured Text
Google has introduced LangExtract, an open-source Python library designed to help developers extract structured information from unstructured text using large language models such as the Gemini models.
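A minimal sketch of the library's few-shot extraction style; the parameter and class names used here (lx.extract, ExampleData, Extraction) follow LangExtract's published examples but may differ between versions, and a Gemini API key is assumed to be configured in the environment:

```python
# Sketch: structured extraction with LangExtract, guided by one few-shot example.
# Assumes a Gemini API key is available to the library via the environment.
import langextract as lx

prompt = "Extract medication names and their dosages in order of appearance."

examples = [
    lx.data.ExampleData(
        text="The patient was given 250 mg of amoxicillin.",
        extractions=[
            lx.data.Extraction(
                extraction_class="medication",
                extraction_text="amoxicillin",
                attributes={"dosage": "250 mg"},
            )
        ],
    )
]

result = lx.extract(
    text_or_documents="She takes 10 mg of lisinopril daily.",
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash",  # assumed model id
)

# Each extraction carries its class, the source text span, and any attributes.
for extraction in result.extractions:
    print(extraction.extraction_class, extraction.extraction_text, extraction.attributes)
```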