
Build powerful gen AI applications with Firestore vector similarity search



Creating innovative AI-powered solutions for use cases such as product recommendations and chatbots often requires vector similarity search, or vector search for short. At Google Cloud Next ‘24, we announced Firestore vector search in preview, which uses exact K-nearest neighbor (KNN) search. Developers can now perform vector search on transactional Firestore data without the hassle of copying it to a separate vector search solution, maintaining operational simplicity and efficiency.

Developers can now use Firestore vector search with popular orchestration frameworks such as LangChain and LlamaIndex through native integrations. We’ve also launched a new Firestore extension that automatically computes vector embeddings for your data and creates web services that make it easier to perform vector searches from a web or mobile application.

In this blog, we’ll discuss how developers can get started with Firestore’s new vector search capabilities.

How to use KNN vector search in Firestore

The first step in using vector search is to generate vector embeddings. Embeddings are representations of different kinds of data (text, images, video, and so on) in a continuous vector space, and they capture semantic or syntactic similarities between the entities they represent. Embeddings can be calculated using a service such as the Vertex AI text-embeddings API.
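As a rough sketch of what an embedding call looks like from Node.js, the snippet below targets the Vertex AI predict endpoint. The model name, region, and response shape shown here are assumptions based on the public Vertex AI documentation, not part of this post; verify them against the current API reference before relying on them.

```javascript
// Hypothetical sketch of calling the Vertex AI text-embeddings API from
// Node.js. The endpoint, model name ("textembedding-gecko"), and response
// shape are assumptions; check the Vertex AI docs for current values.

// Build the JSON payload the predict endpoint expects: one instance per text.
function buildEmbeddingRequest(texts) {
  return { instances: texts.map((text) => ({ content: text })) };
}

// Sketch of the HTTP call (requires Node 18+ for global fetch and a valid
// OAuth access token; projectId and the model name are placeholders).
async function embedTexts(texts, projectId, accessToken) {
  const url =
    `https://us-central1-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/us-central1/publishers/google/models/textembedding-gecko:predict`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildEmbeddingRequest(texts)),
  });
  const body = await response.json();
  // Each prediction is assumed to carry an `embeddings.values` array of
  // 768 numbers, matching the index dimension used later in this post.
  return body.predictions.map((p) => p.embeddings.values);
}
```

The returned arrays are what you would store in Firestore as vector values in the next step.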

Once the embeddings are generated, you can store them in Firestore using one of the supported SDKs. For example, suppose you’ve used your favorite embedding model to generate an embedding for the data in the “description” field of the “beans” collection. You can add that embedding as a vector value in the “embedding_field” field by running the following command with the Node.js SDK:

```javascript
const db = new Firestore();
let collectionRef = db.collection("beans");
await collectionRef.add({
  name: "Kahawa coffee beans",
  type: "arabica",
  description: "Information about the Kahawa coffee beans.",
  embedding_field: FieldValue.vector([0.1, 0.3, ..., 0.2]), // a vector with 768 dimensions
});
```

Alternatively, rather than calling the embedding generation service from your application for a field, you can also automate the generation of the vector embeddings based on field values in your document and your favorite embedding model by using the Firestore vector search extension.

The next step is to create a Firestore KNN vector index on the “embedding_field” field where the vector embeddings are stored. During the preview release, you will need to create the index using the gcloud command-line tool.

Continuing with our example, this is how you would create a Firestore KNN vector index:

```shell
gcloud alpha firestore indexes composite create \
  --collection-group=beans \
  --query-scope=COLLECTION \
  --field-config field-path=embedding_field,vector-config='{"dimension":"768", "flat": "{}"}'
```

Once you have added all the vector embeddings and created the vector index, you are ready to run the K-nearest neighbor search. You then use the “find_nearest” call to pass the query embedding to compare against the stored embeddings, and to specify the distance measure you want to use.
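Firestore computes the distance measure server-side, but it can help to see the math each one implies. The helpers below are illustrative only; they assume the three measures Firestore's preview documentation lists (EUCLIDEAN, COSINE, and DOT_PRODUCT) and are not part of any SDK.

```javascript
// Illustrative only: the math behind the distance measures Firestore's
// vector search supports. Firestore evaluates these server-side; these
// helpers are not SDK functions, just the underlying formulas.

function dotProduct(a, b) {
  // Sum of pairwise products; larger means more aligned (for DOT_PRODUCT).
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

function euclideanDistance(a, b) {
  // Straight-line distance: square root of the sum of squared differences.
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function cosineDistance(a, b) {
  // 1 - cosine similarity: 0 for identical directions, up to 2 for opposite.
  const norm = (v) => Math.sqrt(dotProduct(v, v));
  return 1 - dotProduct(a, b) / (norm(a) * norm(b));
}
```

Which measure to choose depends on your embedding model; many text-embedding models are trained with cosine similarity in mind, while Euclidean distance is a common general-purpose default.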

In our example, to do a KNN search on the “embedding_field” in the “beans” collection using the EUCLIDEAN distance measure, you run the following query:

```typescript
collectionRef = db.collection("beans");
let vectorQuery: VectorQuery = collectionRef.findNearest(
  "embedding_field",
  FieldValue.vector([0.4, 0.1, ..., 0.3]), // a vector with 768 dimensions
  {
    limit: 5,
    distanceMeasure: "EUCLIDEAN",
  }
);
await vectorQuery.get();
```

How to use pre-filtering with KNN vector search

One of the key benefits of Firestore’s KNN vector search is that it can be combined with other query predicates, such as equality conditions, to pre-filter the data set to only the vectors you want to search against. This reduces the search space, yielding faster and more relevant results.

To pre-filter and run the vector search on the filtered data set, first create a composite index using the gcloud command-line tool, including the fields you want to pre-filter on along with the vector field.

For example, to pre-filter on the “type” field in our “beans” collection before a KNN vector search, create a Firestore KNN composite vector index using the command below:

```shell
gcloud alpha firestore indexes composite create \
  --collection-group=beans \
  --query-scope=COLLECTION \
  --field-config=order=ASCENDING,field-path="type" \
  --field-config field-path=embedding_field,vector-config='{"dimension":"768", "flat": "{}"}'
```

Once the index is created, you can do a KNN search with the pre-filter, as in the example below:

```javascript
collectionRef = db.collection("beans");
vectorQuery = collectionRef
  .where("type", "==", "arabica")
  .findNearest("embedding_field", FieldValue.vector([0.4, 0.1, ..., 0.3]), {
    limit: 5,
    distanceMeasure: "EUCLIDEAN",
  });
await vectorQuery.get();
```

Ecosystem integrations

To help application developers build retrieval-augmented generation (RAG) solutions quickly and efficiently using vector search and foundation models, Firestore KNN vector search now integrates with the LangChain Vector Store and LlamaIndex. These integrations give workflows orchestrated with LangChain or LlamaIndex access to accurate, reliable information stored in Firestore, enhancing the credibility and trustworthiness of large language model (LLM) responses. They also enable better contextual understanding by pulling contextual information from Firestore, resulting in highly relevant, personalized responses tailored to customer needs.

For more information about these integrations, see the Firestore for LangChain (or Datastore for LangChain) documentation and the LlamaIndex-Firestore integration website.

As indicated earlier, we also announced the availability of a new Firebase extension that enables developers to use their favorite embedding model to automatically compute and store embeddings for a given field of a Firestore document. The extension also makes it easier to perform vector similarity searches by generating an embedding from a query value for input into vector search. For more information, see the Firestore vector search extension web page.

Pricing

Firestore customers are charged for the KNN vector index entries read during the search, and for document reads only on the documents the query returns. For detailed pricing, please refer to the pricing page.
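To make the billing shape concrete, the tiny calculator below models the two components described above. The rates used in the usage example are placeholders invented for illustration, not Firestore's actual prices; consult the pricing page for real numbers.

```javascript
// Hypothetical cost sketch for a KNN vector query. The billing shape,
// per this post, is: index entries read during the search, plus document
// reads for the documents the query actually returns. Any rates passed in
// are caller-supplied placeholders, not Firestore's published prices.
function estimateQueryCost(indexEntriesRead, documentsReturned, rates) {
  return (
    indexEntriesRead * rates.perIndexEntryRead +
    documentsReturned * rates.perDocumentRead
  );
}
```

For example, a query that scans 10,000 index entries and returns 5 documents would cost `estimateQueryCost(10000, 5, rates)` under whatever per-unit rates apply; note how the index-entry term, not the document-read term, dominates as the collection grows.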

Next steps

To learn more about Firestore and its vector search, check out the following resources:


Thanks to Minh Nguyen, Senior Product Manager Lead for Firestore for his contributions to this blog post.
