Amazon Titan Text Embeddings V2 now available in Amazon Bedrock, optimized for improving RAG


The Amazon Titan family of models, available exclusively in Amazon Bedrock, is built on top of 25 years of Amazon expertise in artificial intelligence (AI) and machine learning (ML) advancements. Amazon Titan foundation models (FMs) offer a comprehensive suite of pre-trained image, multimodal, and text models accessible through a fully managed API. Trained on extensive datasets, Amazon Titan models are powerful and versatile, designed for a range of applications while adhering to responsible AI practices.

The latest addition to the Amazon Titan family is Amazon Titan Text Embeddings V2, the second-generation text embeddings model from Amazon now available within Amazon Bedrock. This new text embeddings model is optimized for Retrieval-Augmented Generation (RAG). It is pre-trained on 100+ languages and on code.

Amazon Titan Text Embeddings V2 now lets you choose the size of the output vector (256, 512, or 1,024). Larger vectors capture more detail but increase computation time; shorter vectors are less detailed but improve response time. Using smaller vectors also reduces your storage costs and the latency to search and retrieve document extracts from a vector database. We measured the accuracy of the vectors generated by Amazon Titan Text Embeddings V2 and observed that vectors with 512 dimensions keep approximately 99 percent of the accuracy provided by vectors with 1,024 dimensions, and vectors with 256 dimensions keep 97 percent. This means you can save 75 percent in vector storage (going from 1,024 down to 256 dimensions) while keeping approximately 97 percent of the accuracy provided by the larger vectors.
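
To make that tradeoff concrete, here is a quick back-of-the-envelope sketch in Swift. The one-million-chunk corpus size and the 4 bytes per dimension (float32 storage) are assumptions for illustration, not figures from the service:

// Rough storage estimate per vector size.
// The corpus size and float32 storage are assumptions for illustration.
let chunks = 1_000_000
let bytesPerDimension = 4 // float32

for dimensions in [1_024, 512, 256] {
    let gigabytes = Double(chunks * dimensions * bytesPerDimension) / 1_000_000_000
    let savings = 100 - (100 * dimensions / 1_024)
    print("\(dimensions) dims: \(gigabytes) GB, \(savings)% less storage than 1,024 dims")
}

Running it shows roughly 4.1 GB at 1,024 dimensions versus about 1.0 GB at 256 dimensions, the 75 percent saving mentioned above.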

Amazon Titan Text Embeddings V2 also offers improved unit vector normalization, which helps improve accuracy when measuring vector similarity. You can choose between normalized and unnormalized versions of the embeddings based on your use case (normalized is more accurate for RAG use cases). Normalizing a vector means scaling it to a unit length, or magnitude of 1. This ensures that all vectors have the same scale and contribute equally during vector operations, preventing some vectors from dominating others simply because of their larger magnitudes.
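
If you want to see what that means in code, here is a minimal, framework-free Swift sketch of unit normalization; it illustrates the concept only and is not what the service does internally:

// Scale a vector so its magnitude (Euclidean length) becomes 1.
func normalize(_ vector: [Double]) -> [Double] {
    let magnitude = vector.map { $0 * $0 }.reduce(0, +).squareRoot()
    guard magnitude > 0 else { return vector }
    return vector.map { $0 / magnitude }
}

let raw = [3.0, 4.0]       // magnitude 5
let unit = normalize(raw)  // [0.6, 0.8], magnitude 1
print(unit)

With every vector at magnitude 1, similarity comparisons depend only on direction, so no single vector dominates because of its length.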

This new text embeddings model is well suited to a variety of use cases. It can help you perform semantic searches on documents, for example, to detect plagiarism. It can classify data into labels based on learned representations, for example, to categorize movies into genres. It can also improve the quality and relevance of retrieved or generated search results, for example, recommending content based on interests using RAG.

How embeddings help to improve the accuracy of RAG
Imagine you have a superpowered research assistant for a large language model (LLM). LLMs are like those brainiacs who can write different creative text formats, but their knowledge comes from the massive datasets they were trained on. This training data might be a bit outdated or lack the specific details your application needs.

This is where RAG comes in. RAG acts like your assistant, fetching relevant information from a custom source, like a company knowledge base. When the LLM needs to answer a question, RAG provides the most up-to-date information to help it generate the best possible response.

To find the most up-to-date information, RAG uses embeddings. Imagine these embeddings (or vectors) as super-condensed summaries that capture the key idea of a piece of text. A high-quality embeddings model, such as Amazon Titan Text Embeddings V2, can create these summaries accurately, like a great assistant who can quickly grasp the important points of each document. This ensures RAG retrieves the most relevant information for the LLM, leading to more accurate and on-point answers.

Think of it like searching a library. Each page of the book is indexed and represented by a vector. With a bad search system, you might end up with a pile of books that aren’t quite what you need. But with a great search system that understands the content (like a high-quality embeddings model), you’ll get exactly what you’re looking for, making the LLM’s job of generating the answer much easier.
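
As a toy illustration of that retrieval step, the sketch below ranks a few made-up document vectors against a query vector using a dot product (which equals cosine similarity when the vectors are normalized). The document IDs and the roughly unit-length 3-dimensional vectors are invented for the example:

// Dot product of two vectors of equal length.
func dot(_ a: [Double], _ b: [Double]) -> Double {
    zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
}

// Made-up, roughly unit-length vectors standing in for real embeddings.
let documentVectors: [String: [Double]] = [
    "returns-policy": [0.12, 0.91, 0.39],
    "shipping-rates": [0.83, 0.21, 0.52],
]
let queryVector = [0.15, 0.88, 0.45]

let ranked = documentVectors
    .map { (id: $0.key, score: dot(queryVector, $0.value)) }
    .sorted { $0.score > $1.score }

print(ranked.first?.id ?? "no match")  // the extract RAG would hand to the LLM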

Amazon Titan Text Embeddings V2 overview
Amazon Titan Text Embeddings V2 is optimized for high accuracy and retrieval performance at smaller dimensions for reduced storage and latency. We measured that vectors with 512 dimensions maintain approximately 99 percent of the accuracy provided by vectors with 1024 dimensions. Those with 256 dimensions offer 97 percent of the accuracy.

  • Max tokens – 8,192
  • Languages – 100+ in pre-training
  • Fine-tuning supported – No
  • Normalization supported – Yes
  • Vector size – 256, 512, 1,024 (default)

How to use Amazon Titan Text Embeddings V2
It’s very likely you will interact with Amazon Titan Text Embeddings V2 indirectly, through Knowledge Bases for Amazon Bedrock. Knowledge Bases takes care of the heavy lifting to create a RAG-based application. However, you can also use the Amazon Bedrock Runtime API to invoke the model directly from your code. Here is a simple example in the Swift programming language (just to show that you can use any programming language, not only Python):

import Foundation
import AWSBedrockRuntime 

let text = "This is the text to transform in a vector"

// create an API client
let client = try BedrockRuntimeClient(region: "us-east-1")

// create the request 
let request = InvokeModelInput(
   accept: "application/json",
   body: """
   {
      "inputText": "\(text)",
      "dimensions": 256,
      "normalize": true
   }
   """.data(using: .utf8), 
   contentType: "application/json",
   modelId: "amazon.titan-embed-text-v2:0")

// send the request 
let response = try await client.invokeModel(input: request)

// decode the response
let output = String(data: response.body!, encoding: .utf8)

print(output ?? "")

The model takes three parameters in its payload; a sketch that builds the same payload with Codable follows the list:

  • inputText – The text to convert to embeddings.
  • normalize – A flag indicating whether or not to normalize the output embeddings. It defaults to true, which is optimal for RAG use cases.
  • dimensions – The number of dimensions the output embeddings should have. Three values are accepted: 256, 512, and 1024 (the default value).
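
String interpolation is fine for a short demo, but it produces invalid JSON as soon as the input text contains quotation marks or newlines. A more robust sketch, assuming a helper struct of my own (TitanEmbeddingRequest is not an SDK type), builds the same body with Codable:

import Foundation

// Hypothetical helper type; the field names mirror the payload parameters listed above.
struct TitanEmbeddingRequest: Encodable {
    let inputText: String
    let dimensions: Int
    let normalize: Bool
}

let payload = TitanEmbeddingRequest(
    inputText: "This is the text to transform in a vector",
    dimensions: 256,
    normalize: true)

// Data ready to pass as the body of InvokeModelInput.
let body = try JSONEncoder().encode(payload)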

I added the dependency on the AWS SDK for Swift in my Package.swift, then typed swift run to build and run this code. It prints the following output (truncated to keep it brief):

{"embedding":[-0.26757812,0.15332031,-0.015991211...-0.8203125,0.94921875],
"inputTextTokenCount":9}

As usual, do not forget to enable access to the new model in the Amazon Bedrock console before using the API.

Amazon Titan Text Embeddings V2 will soon be the default embeddings model proposed by Knowledge Bases for Amazon Bedrock. Your existing knowledge bases created with the original Amazon Titan Text Embeddings model will continue to work without changes.

To learn more about the Amazon Titan family of models, view the following video:

The new Amazon Titan Text Embeddings V2 model is available today in Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) AWS Regions. Check the full Region list for future updates.

To learn more, check out the Amazon Titan in Amazon Bedrock product page and pricing page. Also, do not miss this blog post to learn how to use Amazon Titan Text Embeddings models. You can also visit our community.aws site to find deep-dive technical content and to discover how our Builder communities are using Amazon Bedrock in their solutions.

Give Amazon Titan Text Embeddings V2 a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

-- seb
