You can use retrieval-augmented generation (RAG) to refine and improve the output of a large language model (LLM) without retraining the model. However, many data sources contain sensitive information, such as personally identifiable information (PII), that the LLM and its applications should not require or disclose, yet sometimes do. Sensitive information disclosure is one of the OWASP 2025 Top 10 Risks & Mitigations for LLMs and Gen AI Apps. To mitigate this risk, OWASP recommends data sanitization, access control, and encryption.

This post shows how HashiCorp Vault’s transit secrets engine can be configured to encrypt and protect sensitive data before sending it to an Amazon Bedrock Knowledge Base created by Terraform...
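As a minimal sketch of the Vault side of that setup, the transit secrets engine and a named encryption key can be managed with the hashicorp/vault Terraform provider. The mount path and key name below (`transit`, `bedrock-pii`) are illustrative assumptions rather than values from the article, and the Bedrock Knowledge Base resources themselves are omitted here.

```hcl
terraform {
  required_providers {
    vault = {
      source = "hashicorp/vault"
    }
  }
}

# Vault address and token are typically supplied via VAULT_ADDR / VAULT_TOKEN.
provider "vault" {}

# Enable the transit secrets engine, Vault's "encryption as a service" backend.
resource "vault_mount" "transit" {
  path = "transit"
  type = "transit"
}

# Named key used to encrypt sensitive fields before documents are written
# to the data source that the Bedrock Knowledge Base ingests.
resource "vault_transit_secret_backend_key" "pii" {
  backend          = vault_mount.transit.path
  name             = "bedrock-pii"
  deletion_allowed = false
}
```

At ingestion time, the application would call the transit engine's encrypt endpoint for this key (for example, transit/encrypt/bedrock-pii) so that only ciphertext reaches the knowledge base's data source, and decryption stays limited to callers authorized by Vault policy.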

View the full article
