
Enhancing AI Capabilities: Google Cloud Integrates Vector Search in Managed Databases

Google Cloud recently added support for the pgvector extension on Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL. The extension brings vector search operations to the managed databases, allowing developers to store vector embeddings generated by large language models (LLMs) and perform similarity searches.

Cloud SQL and AlloyDB can now be paired with generative AI services on Vertex AI, helping create AI-enabled applications that are aware of the application and user state. Sandhya Ghai, senior product manager at Google, and Bala Narasimhan, product manager at Google, explain:

Vector embeddings are numerical representations typically used to transform complex user-generated content like text, audio, and video into a form that can be easily stored, manipulated, and indexed. These representations are generated by embeddings models such that, if two pieces of content are semantically similar, their respective embeddings are located near each other in the embedding vector space. Vector embeddings are then indexed and used to efficiently filter data based on similarity.

For example, developers can use Vertex AI’s pre-trained models across text and images to generate embeddings and store and index them in a database, simplifying the search for similar records.
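As a toy illustration of how "nearby" embeddings capture semantic similarity (with hand-picked three-dimensional vectors standing in for real model-generated embeddings), cosine similarity can be used to compare vectors in the embedding space:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: "cat" and "kitten" should be closer to each
# other than either is to "airplane".
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
airplane = [0.0, 0.2, 0.95]

print(cosine_similarity(cat, kitten))    # close to 1.0
print(cosine_similarity(cat, airplane))  # much smaller
```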

The pgvector extension can now be installed within an existing database using the CREATE EXTENSION command. Developers can then create a table with a vector column and insert embeddings into it:


postgres=> CREATE EXTENSION vector;

postgres=> CREATE TABLE embeddings(
  	id INTEGER,
  	embedding vector(3)
	);

postgres=> INSERT INTO embeddings
           	VALUES
               	(1, '[1, 0, -1]'),
               	(2, '[1, 1, 1]'),
               	(3, '[1, 1, 50]');


The new feature can also help developers leverage pre-trained LLMs, as Ghai and Narasimhan explain:

One thing to note about LLMs is that they have no concept of state. (...) Embeddings allow you to store large contexts such as documentation or long-term chat histories in your database and filter them to find the most relevant information. You can then feed the most relevant pieces of chat history or documentation to the model to simulate long-term memory and business-specific knowledge.
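A minimal sketch of this retrieval pattern, assuming the embeddings have already been computed and stored (the data and helper names here are hypothetical, not part of any Google Cloud API): rank stored document chunks by distance to the query embedding and pass the closest ones to the model as context.

```python
import math

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical store of document chunks with precomputed embeddings.
documents = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.1]),
    ("return window", [0.8, 0.2, 0.1]),
]

def top_k_context(query_embedding, k=2):
    # Keep the k chunks whose embeddings are nearest to the query.
    ranked = sorted(documents,
                    key=lambda d: euclidean_distance(d[1], query_embedding))
    return [text for text, _ in ranked[:k]]

# A query about refunds retrieves the refund/return chunks, which would
# then be prepended to the LLM prompt as business-specific context.
context = top_k_context([0.85, 0.15, 0.05])
print(context)
```

In a real application the database would perform this ranking server-side via pgvector's distance operators, rather than in application code.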

Google Cloud released a Colab notebook and a video showing how to build AI-powered apps using pgvector, the open-source framework LangChain, and LLMs. Demonstrating how to add generative AI features to a sample Python application, Saket Saurabh, senior software engineer at Google, writes:

The pgvector extension also introduces new operators for performing similarity matches on vectors, allowing you to find vectors that are semantically similar. Two such operators are:
‘<->’: returns the Euclidean distance between the two vectors. (...)
‘<=>’: returns the cosine distance between the two vectors.
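In plain Python, the two operators correspond to the following distance functions (a sketch applied to the sample vectors from the table above, not the extension's own implementation):

```python
import math

def l2_distance(a, b):
    # Equivalent of pgvector's '<->' operator: Euclidean distance.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    # Equivalent of pgvector's '<=>' operator: 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

v1, v2 = [1, 0, -1], [1, 1, 1]
print(l2_distance(v1, v2))      # sqrt(5) ~= 2.236
print(cosine_distance(v1, v2))  # 1.0 (the vectors are orthogonal)
```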

Google Cloud is not the only cloud provider targeting vector databases in recent months: Amazon RDS for PostgreSQL now supports the pgvector extension, while Microsoft has shown how Azure Data Explorer (ADX) can be used as a vector database and has discussed several connectors to vector databases.
