Google Cloud Unveils AlloyDB AI: Transforming PostgreSQL with Advanced Vector Embeddings and AI

At the recent Google Cloud Next conference, Google announced AlloyDB AI in preview as an integral part of AlloyDB for PostgreSQL. The capability lets developers build generative AI (gen AI) applications that combine large language models (LLMs) with their real-time operational data, through built-in, end-to-end support for vector embeddings.

Earlier, the company launched support for the pgvector extension on Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL, bringing vector search operations to the managed databases and allowing developers to store vector embeddings generated by LLMs and perform similarity searches. AlloyDB AI builds on the basic vector support available in standard PostgreSQL, giving developers the ability, according to the company, "to create and query embeddings to find relevant data with just a few lines of SQL — no specialized data stack required, and no moving data around."
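
To illustrate what that looks like in practice, the following is a minimal sketch using the standard pgvector extension that AlloyDB builds on; the table, column names, and three-dimensional vectors are illustrative placeholders rather than anything AlloyDB-specific:

    -- Enable the pgvector extension and store embeddings alongside the data
    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE documents (
      id        bigserial PRIMARY KEY,
      content   text,
      embedding vector(3)   -- real embeddings typically have hundreds of dimensions
    );

    INSERT INTO documents (content, embedding)
    VALUES ('AlloyDB brings vector search to PostgreSQL', '[0.11, 0.32, 0.57]');

    -- Similarity search: order rows by cosine distance to a query embedding
    SELECT id, content
    FROM documents
    ORDER BY embedding <=> '[0.10, 0.30, 0.60]'
    LIMIT 5;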

In addition, AlloyDB AI brings a few other new capabilities into AlloyDB that can help developers incorporate their real-time data into gen AI applications:

  • Enhanced vector support that is faster than standard PostgreSQL queries, through tight integration with the AlloyDB query-processing engine (see the index sketch after this list). In addition, the company introduced quantization techniques based on its ScaNN technology that, when enabled, support four times more vector dimensions and a three-fold reduction in storage space.
  • Access to local models in AlloyDB and remote models hosted in Vertex AI, including custom and pre-trained models. Developers can train and fine-tune models with the data stored in AlloyDB and then deploy them as endpoints on Vertex AI.
  • Integrations with the AI ecosystem, including Vertex AI Extensions (coming later this year) and LangChain, which will offer the ability to call remote models in Vertex AI using SQL for low-latency, high-throughput augmented transactions, for use cases such as fraud detection.
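
As a sketch of the faster vector search mentioned in the first item, the statement below creates an approximate-nearest-neighbor index with the standard pgvector ivfflat index type on the illustrative documents table from the earlier example; AlloyDB's ScaNN-based, quantized indexes are a separate, AlloyDB-specific mechanism whose configuration is not shown here:

    -- Approximate index to speed up cosine-distance searches (standard pgvector)
    CREATE INDEX ON documents
      USING ivfflat (embedding vector_cosine_ops)
      WITH (lists = 100);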

Andi Gutmans, GM & VP of Engineering, Google Cloud Databases, wrote in a Google blog post:

AlloyDB AI allows users to easily transform their data into vector embeddings with a simple SQL function for in-database embeddings generation, and runs vector queries up to 10 times faster than standard PostgreSQL. Integrations with the open-source AI ecosystem and Google Cloud’s Vertex AI platform provide an end-to-end solution for building gen AI applications.
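
The in-database embedding generation Gutmans refers to is exposed as a SQL function in the AlloyDB AI preview. The sketch below is an assumption based on Google's documentation at the time of the announcement; the function name, signature, and model identifier ('textembedding-gecko@001', which returns 768-dimensional embeddings) may change:

    -- Generate an embedding inside the database (AlloyDB AI preview;
    -- function name and model identifier are assumptions and may change)
    SELECT embedding('textembedding-gecko@001',
                     'AlloyDB brings vector search to PostgreSQL');

    -- The returned array can then be cast to pgvector's vector type and stored
    -- in a vector(768) column sized to match the model's output.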

A respondent on a Reddit thread asked, based on Gutmans' statement, whether Google is trying to "embrace, extend, and extinguish" (EEE) PostgreSQL with AlloyDB AI, with another answering:

I think what you're trying to say is that just because someone - especially [large company] - tries to improve/integrate popular open projects doesn't mean it's always EEE.

Which I doubt EEE is purposeful the majority of the time initially, even if has the potential to become that later. In the case of Google, I think this would be a case of "how do we add value to our product to sell" followed by "this feature costs us too much resources to maintain, let's cut it and focus on [new feature]"

In addition, other database and public cloud providers already support vector embeddings, including MongoDB, DataStax's Cassandra-based database service Astra, open-source PostgreSQL (via pgvector), and Azure Cognitive Search. The latter recently added a preview capability for indexing, storing, and retrieving vector embeddings from a search index.

Lastly, AlloyDB AI is available in AlloyDB on Google Cloud and AlloyDB Omni at no additional cost. The pricing details of AlloyDB are available on the pricing page.
