Build generative AI applications with similarity search in Cloud SQL for MySQL



Generative AI is transforming application development across industries as developers build brand new user experiences that weren’t possible before. We’re already seeing customers like Linear build amazing new AI-powered applications with Google Cloud databases. Recently, we announced that you can now also use Cloud SQL for MySQL to perform similarity searches by indexing and searching for vector embeddings generated by your favorite large language model (LLM). Cloud SQL now allows you to store vectors in the same Cloud SQL for MySQL instance you’re already using, and then search against your vector store using either an exact nearest neighbor (KNN) or approximate nearest neighbor (ANN) search.

Vector search in Cloud SQL for MySQL is built on Google’s open-source ScaNN libraries, which support multiple ANN index types: Tree-AH, Tree-SQ, and Brute Force with autotuning. Cloud SQL also supports multiple distance measures, such as cosine, L2, and dot product. Combining your vector store with your operational data lets you create more meaningful and relevant experiences by augmenting vector search with real-time data. Let’s dig into how you can use this capability, currently in preview! Fill out our preview form to request access.
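To make the three distance measures concrete, here is a minimal plain-Python sketch of how each is typically defined. The function names are our own for illustration; they are not part of the Cloud SQL API.

```python
import math


def l2_distance(a: list, b: list) -> float:
    """Euclidean (L2) distance: straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def dot_product_distance(a: list, b: list) -> float:
    """Dot-product distance: the negated dot product, so smaller means more similar."""
    return -sum(x * y for x, y in zip(a, b))


def cosine_distance(a: list, b: list) -> float:
    """Cosine distance: 1 minus cosine similarity; ignores vector magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)


if __name__ == "__main__":
    u, v = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
    print(l2_distance(u, v))      # distance between orthogonal unit vectors
    print(cosine_distance(u, v))
```

Which measure fits best depends on your embedding model; many text-embedding models are trained with cosine similarity in mind, since it ignores vector magnitude.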

Let’s imagine you’re building a new website for a library system that helps library patrons pick out e-books that they might enjoy. Users will tell you an e-book they like, and then your website finds other e-books that they might also enjoy. You’ve just upgraded the backend of your website to Cloud SQL for MySQL’s new Enterprise Plus edition, because your library serves customers in a large city who use library services around-the-clock, and you wanted to make sure your website is taking advantage of Enterprise Plus edition’s 99.99% SLA for high availability and up to 3x higher read throughput.

The journey begins by enabling a new MySQL flag called cloudsql_vector. In order to use similarity search, you’ll need to turn your data into embeddings (vectors) and store these vectors in the catalog table. The next sections are easy-to-follow steps that guide you through:

  1. Getting embeddings
  2. Storing and indexing those embeddings
  3. Performing similarity searches

Let’s assume that your catalog table is called library_catalog and includes details like book titles, descriptions, and copies available. You want to update your library’s catalog to include vector embeddings for all items currently in circulation. You can add a new column to store these vectors — let’s call the column item_embedding, with a vector data type. To do this, update your table like this:

```sql
ALTER TABLE library_catalog ADD COLUMN item_embedding VECTOR(3)
USING VARBINARY;
```

Next, generate a vector embedding for each item currently in your catalog. Use your favorite LLM for this; for example, you could use Vertex AI’s pre-trained text embeddings model to create embeddings based on each item’s description in your catalog. The example below uses the textembedding-gecko@001 model.

```python
from vertexai.language_models import TextEmbeddingModel


def text_embedding() -> list:
    """Text embedding with a Large Language Model."""
    model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
    embeddings = model.get_embeddings(["What is life?"])
    for embedding in embeddings:
        vector = embedding.values
        print(f"Length of Embedding Vector: {len(vector)}")
    return vector


if __name__ == "__main__":
    text_embedding()
```

Then, update that item’s row to store its new vector embedding.

```sql
-- Replace '[x,y,z]' with the vector embedding returned by the
-- model API call from the Python example above
UPDATE library_catalog
SET item_embedding = string_to_vector('[x,y,z]')
WHERE id = 1;
```
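In practice you would run that UPDATE from application code, passing the embedding your model returned. Here is a hypothetical sketch of the glue: a helper (our own naming, not a Cloud SQL API) that renders a Python list of floats as the '[x,y,z]' text literal string_to_vector() expects, plus a parameterized statement you could execute with any MySQL driver.

```python
# Hypothetical helpers for writing an embedding back to the catalog table.
# The function names and parameterized-query style are our own choices.


def to_vector_literal(embedding: list) -> str:
    """Render [0.1, 0.2, 0.3] as the string '[0.1,0.2,0.3]'."""
    return "[" + ",".join(str(x) for x in embedding) + "]"


def build_update(item_id: int, embedding: list):
    """Return a parameterized UPDATE statement and its bind values."""
    sql = (
        "UPDATE library_catalog "
        "SET item_embedding = string_to_vector(%s) "
        "WHERE id = %s;"
    )
    return sql, (to_vector_literal(embedding), item_id)


if __name__ == "__main__":
    sql, params = build_update(1, [0.1, 0.2, 0.3])
    print(params[0])  # [0.1,0.2,0.3]
```

Using bind parameters rather than string interpolation keeps the vector literal safely quoted regardless of its contents.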

Once you’ve updated all of the items in your catalog, you can add an index to your catalog to make it easy to perform similarity searches against items in the catalog. We support three different index types:

  • Tree-SQ: This is the default algorithm, as it requires less memory and supports persistence across restarts, minimizing operational toil. It also features slightly better recall than Tree-AH due to reduced compression, but at some additional compute cost.
  • Tree-AH: Ideal for applications that prioritize speed and compute efficiency at the cost of additional memory with slight operational overhead.
  • Brute-Force: Suitable when the distance measure needs to be precise at the cost of speed.

You can also find the exact nearest neighbors (KNN) by performing a brute-force search against your vectors without adding an index. When you do create an index, we’ll build a Tree-SQ index by default.

```sql
CALL mysql.create_vector_index(
  'vectorIndex', 'library_catalog', 'item_embedding',
  'table_size=<num of rows>');
```

Now, when a customer comes to the library website, they get suggestions of books similar to their favorite book, combined with a filter on what items are currently available for check out. For example, your website could run this query to get books similar to one that the customer says they like:

```sql
-- Replace '[x,y,z]' with the vector embedding for the
-- query / search terms returned by the model API call
-- from the Python example above
SELECT book_title, num_available
FROM library_catalog
WHERE NEAREST(item_embedding) TO (string_to_vector('[x,y,z]'));
```

If you wanted to get the exact nearest items in your catalog, you could instead search for:

```sql
SELECT book_title, num_available,
       vector_distance(item_embedding, string_to_vector('[x,y,z]')) AS dist
FROM library_catalog
WHERE num_available > 0
ORDER BY dist
LIMIT 10;
```
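Conceptually, that exact (brute-force) KNN query computes the distance from the query vector to every available item and keeps the k closest. A minimal sketch in plain Python, assuming L2 distance and an illustrative row layout of our own devising:

```python
import math

# Sketch of what the exact KNN query does: filter on availability,
# score every remaining row by distance to the query vector, sort,
# and keep the top k. Assumes L2 distance; row layout is illustrative.


def knn(rows, query, k=10):
    """rows: (book_title, num_available, embedding) tuples."""
    available = [r for r in rows if r[1] > 0]            # WHERE num_available > 0
    scored = [
        (title, num, math.dist(emb, query))              # vector_distance(...)
        for title, num, emb in available
    ]
    scored.sort(key=lambda r: r[2])                      # ORDER BY dist
    return scored[:k]                                    # LIMIT k


if __name__ == "__main__":
    catalog = [
        ("Book A", 2, [1.0, 0.0, 0.0]),
        ("Book B", 0, [1.0, 0.1, 0.0]),  # filtered out: no copies available
        ("Book C", 5, [0.0, 1.0, 0.0]),
    ]
    print(knn(catalog, [1.0, 0.0, 0.0], k=2))
```

Because every row is scored, this is exact but scales linearly with table size; the ANN indexes above trade a little recall for much faster lookups on large catalogs.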

And this is just the start. Watch the video below to see this in action; you can follow this example, or customize it for retail, healthcare, financial services, and more. Interested in trying the preview? Fill out this form and we’ll be in touch!
