Google Cloud Content on InfoQ
-
Google Cloud Introduces Bigtable Tiered Storage
Google Cloud recently introduced the preview of Bigtable tiered storage. The new feature allows developers to manage both hot and cold data within a single Bigtable instance, optimizing costs while maintaining access to all data.
-
Google Cloud KMS Launches Post-Quantum KEM Support to Combat "Harvest Now, Decrypt Later" Threat
Google Cloud's Key Management Service now supports post-quantum Key Encapsulation Mechanisms (KEMs), addressing future threats from quantum computing. The update helps organizations defend against "Harvest Now, Decrypt Later" attacks and preserve long-term data confidentiality.
-
Google Cloud Outlines Key Strategies for Securing Remote MCP Servers
Google Cloud published a guide that lays out strategies for securing remote Model Context Protocol (MCP) server deployments, particularly in contexts where AI systems depend on external tools, databases, and APIs.
-
Google Introduces LLM-Evalkit to Bring Order and Metrics to Prompt Engineering
Google has introduced LLM-Evalkit, an open-source framework built on Vertex AI SDKs, designed to make prompt engineering for large language models less chaotic and more measurable. The lightweight tool aims to replace scattered documents and guess-based iteration with a unified, data-driven workflow.
-
Terraform Google Cloud Provider 7.0 Reaches General Availability
HashiCorp has released version 7.0 of the Terraform provider for Google Cloud, introducing security-focused improvements such as ephemeral resources, write-only attributes, and stricter validation. The update enhances secret handling and reliability but introduces breaking changes requiring careful migration.
-
Google Cloud Observability Adopts OpenTelemetry Protocol for Native Trace Ingestion
Google Cloud has announced native support for the OpenTelemetry Protocol (OTLP) in its Cloud Trace service, marking a significant step toward vendor-neutral observability infrastructure. The new capability allows developers to send trace data directly using OTLP through the telemetry.googleapis.com endpoint, eliminating the need for vendor-specific exporters and custom data transformations.
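For illustration, a minimal sketch of what a direct OTLP export might look like with the OpenTelemetry Python SDK. Only the telemetry.googleapis.com endpoint comes from the announcement; the /v1/traces path, the OAuth-token header wiring, and the x-goog-user-project header are assumptions, not confirmed details.

```python
# Sketch: export spans over OTLP/HTTP to Google Cloud's telemetry endpoint.
# Auth handling is simplified; the token below is short-lived and would need
# refreshing in a long-running service (assumption, not from the announcement).
import google.auth
import google.auth.transport.requests
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Obtain Application Default Credentials and an access token (assumed auth flow).
credentials, project_id = google.auth.default()
credentials.refresh(google.auth.transport.requests.Request())

exporter = OTLPSpanExporter(
    endpoint="https://telemetry.googleapis.com/v1/traces",  # endpoint from the announcement; path assumed
    headers={
        "authorization": f"Bearer {credentials.token}",
        "x-goog-user-project": project_id,  # assumed header for project attribution
    },
)

# Standard OpenTelemetry SDK setup: provider + batch processor + exporter.
provider = TracerProvider(resource=Resource.create({"service.name": "demo-service"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("demo")
with tracer.start_as_current_span("sample-operation"):
    pass  # application work here
```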
-
Google Cloud Run Now Offers Serverless GPUs for AI and Batch Processing
Google Cloud has launched NVIDIA GPU support for Cloud Run, enhancing its serverless platform with scalable, cost-efficient GPU resources. This upgrade enables rapid AI inference and batch processing, featuring pay-per-second billing and automatic scaling to zero. Developers can attach GPUs without managing infrastructure, making advanced AI applications faster and more accessible to deploy.
-
Google Cloud Enhances AI/ML Workflows with Hierarchical Namespace in Cloud Storage
On March 17, 2025, Google Cloud introduced a hierarchical namespace (HNS) feature in Cloud Storage, aiming to optimize AI and machine learning (ML) workloads by improving data organization, performance, and reliability.
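As a rough sketch, creating an HNS-enabled bucket with the Python client might look like the following. The hierarchical_namespace_enabled property, the uniform bucket-level access pairing, and the bucket name reflect assumptions about recent google-cloud-storage client versions rather than details from the announcement.

```python
# Sketch: create a Cloud Storage bucket with hierarchical namespace enabled.
# HNS is assumed to be a creation-time choice that cannot be toggled later.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-hns-bucket")  # hypothetical bucket name

bucket.hierarchical_namespace_enabled = True  # assumed property name in recent client versions
bucket.iam_configuration.uniform_bucket_level_access_enabled = True  # assumed HNS prerequisite

client.create_bucket(bucket, location="us-central1")
```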
-
Google Cloud Announces Rapid Storage for Millisecond-Latency Workloads
At the recent Google Cloud Next 2025, the cloud provider announced Rapid Storage, a new Cloud Storage zonal bucket designed to deliver consistent single-digit millisecond data access for frequently accessed data and latency-sensitive applications. The new storage class provides under 1ms random read and write latency, 20x faster data access, and 6 TB/s of throughput.
-
Google Cloud WAN Aims to Transform Enterprise Networking
Google has launched Cloud WAN, a managed wide-area network solution built on its global network of 202 points of presence and over two million miles of fiber. It promises secure, high-performance connectivity at lower cost, addressing the complexities of modern enterprise networking. With faster speeds and significant TCO savings, Cloud WAN also integrates with existing connectivity providers.
-
Google Cloud Announces Firestore with MongoDB Compatibility
During the recent Google Cloud Next 2025, the cloud provider announced the preview of Firestore with MongoDB compatibility. The new option provides the MongoDB API and query language for storing and querying semi-structured JSON data in Google Cloud’s real-time document database.
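To illustrate what the compatibility layer implies for client code, a hypothetical pymongo session against a Firestore database might look like the following. The connection URI format, names, and omitted authentication are invented placeholders; only the availability of the MongoDB API and query language comes from the announcement.

```python
# Sketch: use a standard MongoDB driver against a Firestore database running
# in MongoDB compatibility mode. URI and names below are hypothetical.
from pymongo import MongoClient

# Hypothetical connection string; real deployments would include auth options.
uri = "mongodb://UID.LOCATION.firestore.goog:443/DB?tls=true"
client = MongoClient(uri)

db = client["orders"]
collection = db["invoices"]

# Familiar MongoDB operations over semi-structured JSON documents.
collection.insert_one({"customer": "acme", "total": 42.50, "status": "open"})
open_invoices = list(collection.find({"status": "open"}))
print(open_invoices)
```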
-
Gemini to Arrive On-Premises with Google Distributed Cloud
Google's Gemini models are coming on-premises with their launch on Google Distributed Cloud (GDC) in Q3 2025. Through a partnership with NVIDIA, organizations will be able to run advanced AI while meeting strict compliance and data-residency requirements. With flexible infrastructure and secure environments, Gemini enables real-time insights for data-driven decision-making across industries.
-
Google Cloud Introduces Multi-Cluster Orchestrator for Cross-Region Kubernetes Workloads
Google Cloud has announced the launch of Multi-Cluster Orchestrator (MCO), a new solution designed to simplify the deployment and management of Kubernetes workloads across multiple clusters spanning different regions. The tool aims to address challenges organizations face when operating applications across geographically distributed environments.
-
QCon London 2025: Distributed Event-Driven Architectures across Multi-Cloud Boundaries
At QCon London 2025, Teena Idnani from Microsoft addressed the rise of multi-cloud adoption, noting that 89% of organizations embrace this strategy. Using the fictional FinBank as a running example, she showcased practical strategies for addressing challenges around latency, resilience, event ordering, and duplication, emphasizing the importance of security, observability, and continuous team education.
-
Optimize AI Workloads: Google Cloud’s Tips and Tricks
Google Cloud has announced a suite of new tools and features designed to help organizations reduce the cost and improve the efficiency of AI workloads across their cloud infrastructure. The announcement comes as enterprises increasingly seek ways to optimize spending on AI initiatives while maintaining performance and scalability.