Showing results for tags 'hcp vault'.

Found 16 results

  1. Vault is a secrets management platform that provides encrypted storage for long-lived secrets, identity brokerage using ephemeral credentials, and encryption as a service. Unless you're using HashiCorp Cloud Platform to host Vault (which is always recommended if you can support it), deploying and running Vault clusters will likely be a manual process: any time a server needs to be restarted, an engineer has to log in and restart the service. This is what orchestrators like HashiCorp Nomad and Kubernetes were built to automate. While Kubernetes has a wide array of components to manage additional use cases, Nomad is mainly focused on scheduling and cluster management. It's simple to run, lightweight, and supports running VMs, containers, raw executables, JAR files, QEMU workloads, and more with custom task driver plugins. By running Vault as a Nomad job (Nomad's term for workloads), operators can manage and schedule Vault servers with a low-complexity architecture.

This post shows how to deploy and configure Vault servers on Nomad using HCP Terraform. Secrets consumption will be demonstrated using the Nomad and Vault CLIs to show the underlying workflows. The Terraform code will be split in two, with separate configuration for the infrastructure and the Vault deployment, so that the states for these workspaces can be managed separately and dependency outputs can be shared between them.

Deployment architecture

This deployment architecture requires five virtual machines (VMs): one is the Nomad server, and the other four are the Nomad clients that run the Vault servers, including a backup server for Vault. These VMs will be deployed to Amazon EC2 instances and will all live in the same virtual private cloud (VPC) subnet.

HCP Terraform and directory setup

Because this approach splits the architecture into multiple workspaces, you need to configure remote backends for each HCP Terraform workspace so that output dependencies can be shared between them. To create these workspaces, create a directory structure that contains a folder for each workspace. The directory structure should look like this:

```
├── 1-nomad-infrastructure
├── 2-nomad-configuration
├── 3-nomad-example-job-deployment

3 directories
```

The remote backend is HCP Terraform. To create the remote backends, create a file called backend.tf in each of the directories; a shell script can create the directory structure and write the relevant backend.tf files in all of the directories (a sketch of a minimal backend.tf appears after the network code below).

Networking for the Nomad cluster

To create the infrastructure for Nomad, navigate to the 1-nomad-infrastructure directory. First, set up your AWS Terraform provider in a file called provider.tf. Once the provider is configured, you're ready to deploy a VPC and a subnet. To do this, there is another file in the same directory, called network.tf, which contains the code below:

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway   = true
  enable_vpn_gateway   = false
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}
```

This code deploys all the resources required for a fully functional network, including resources for a working AWS VPC, associated subnets, and NAT gateways. It uses the community AWS VPC Terraform module, available on the Terraform Registry.
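As mentioned above, each workspace directory needs its own backend.tf. Here is a minimal sketch of what one might contain, assuming HCP Terraform as the remote backend; the organization and workspace names are placeholders, not values from the post:

```hcl
terraform {
  cloud {
    # Placeholder: use your own HCP Terraform organization name.
    organization = "my-hcp-terraform-org"

    workspaces {
      # One workspace per directory, e.g. "1-nomad-infrastructure".
      name = "1-nomad-infrastructure"
    }
  }
}
```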
Configuration of Nomad servers

Before you can write the Terraform code to deploy the five VMs, you need to write some shell scripts that configure the servers during their deployment. The first is for the Nomad server, servers.sh (referenced by compute.tf below):

```bash
#! /bin/bash -e

# Install Nomad
sudo apt-get update && \
sudo apt-get install wget gpg coreutils -y
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install nomad -y

# Create Nomad directory.
mkdir -p /etc/nomad.d

# Nomad configuration files
cat <<EOF > /etc/nomad.d/nomad.hcl
log_level = "DEBUG"
data_dir  = "/etc/nomad.d/data"

server {
  enabled          = true
  bootstrap_expect = ${NOMAD_SERVER_COUNT}

  server_join {
    retry_join = ["provider=aws tag_value=${NOMAD_SERVER_TAG} tag_key=${NOMAD_SERVER_TAG_KEY}"]
  }
}

autopilot {
  cleanup_dead_servers      = true
  last_contact_threshold    = "200ms"
  max_trailing_logs         = 250
  server_stabilization_time = "10s"
  enable_redundancy_zones   = false
  disable_upgrade_migration = false
  enable_custom_upgrades    = false
}
EOF

cat <<EOF > /etc/nomad.d/acl.hcl
acl = {
  enabled = true
}
EOF

systemctl enable nomad
systemctl restart nomad
```

This script does a number of things to configure the Nomad server:

1. Installs the Nomad binary.
2. Creates the Nomad directory that contains everything it needs to function.
3. Creates a Nomad configuration file for the server and places it in the Nomad directory created in step 2. This configuration uses a feature called cloud auto-join, which looks for pre-specified tags on the VM and automatically joins any VMs with these tags to a Nomad cluster.
4. Enables access control lists (ACLs) for Nomad.
5. Starts the Nomad service.

This script runs on the Nomad server VM when deployed, using cloud-init. Notice the script contains three template variables: ${NOMAD_SERVER_COUNT}, ${NOMAD_SERVER_TAG_KEY}, and ${NOMAD_SERVER_TAG}. The values of these variables will be rendered by Terraform. To deploy this VM and run this script, use the file named compute.tf, which is in the directory mentioned above:

```hcl
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}
```

The code above uses a data source to locate the Amazon Machine Image (AMI) used to deploy the VM instance. The code below creates a security group attached to the VM, allowing SSH access to it on port 22:

```hcl
resource "aws_security_group" "ssh" {
  vpc_id = module.vpc.vpc_id
  name   = "allow_ssh"

  ingress {
    from_port   = 22
    protocol    = "tcp"
    to_port     = 22
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ssh"
  }
}
```

The following code creates another security group that allows access to Nomad's default port, 4646.
This is attached to the Nomad server VM:

```hcl
resource "aws_security_group" "nomad" {
  vpc_id = module.vpc.vpc_id
  name   = "nomad_port"

  ingress {
    from_port   = 4646
    protocol    = "tcp"
    to_port     = 4648
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "nomad"
  }
}
```

The next piece of code creates a security group to allow egress connections outside of the network:

```hcl
resource "aws_security_group" "egress" {
  vpc_id = module.vpc.vpc_id
  name   = "egress"

  egress {
    from_port   = 0
    protocol    = "-1"
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "egress"
  }
}
```

The code below creates the final security group of this walkthrough, which allows access to Vault on its default port of 8200. This will be attached to the Nomad clients that will run Vault:

```hcl
resource "aws_security_group" "vault" {
  vpc_id = module.vpc.vpc_id
  name   = "vault"

  ingress {
    from_port   = 8200
    protocol    = "tcp"
    to_port     = 8201
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "vault"
  }
}
```

The code below creates an IP address for the Nomad server in the first resource and associates it with the Nomad server VM in the second. Terraform will not perform the association until the Nomad server VM has been deployed. The IP address is deployed first because it is used in the Nomad config file that is deployed as part of the VM provisioning, to enable the OIDC discovery URL on the Nomad server.

```hcl
resource "aws_eip" "nomad_server" {
  tags = {
    Name = "Nomad Server"
  }
}

resource "aws_eip_association" "nomad_server" {
  instance_id   = aws_instance.nomad_servers.id
  allocation_id = aws_eip.nomad_server.id
}
```

The other prerequisite for deploying the VM for the Nomad server is an SSH key pair, which enables authentication to the VM when connecting via SSH:

```hcl
resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = file(var.ssh_key)
}
```

The code below deploys the VM for the Nomad server:

```hcl
resource "aws_instance" "nomad_servers" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
  subnet_id     = module.vpc.public_subnets.0
  key_name      = aws_key_pair.deployer.key_name

  user_data = templatefile("./servers.sh", {
    NOMAD_SERVER_TAG     = "true"
    NOMAD_SERVER_TAG_KEY = "nomad_server"
    NOMAD_SERVER_COUNT   = 1
    NOMAD_ADDR           = aws_eip.nomad_server.public_ip
  })

  vpc_security_group_ids = [
    aws_security_group.ssh.id,
    aws_security_group.egress.id,
    aws_security_group.nomad.id
  ]

  lifecycle {
    ignore_changes = [
      user_data,
      ami
    ]
  }

  tags = {
    Name         = "Nomad Server"
    nomad_server = true
  }
}
```

Some points to note about this code:

- The resource uses cloud-init to deploy the script to the VM, installing the dependent packages and configuring the server for Nomad. The script is rendered using the templatefile function, which populates the variables in the script template with the values specified in the resource:
  - NOMAD_SERVER_TAG: for cloud auto-join
  - NOMAD_SERVER_TAG_KEY: for cloud auto-join
  - NOMAD_SERVER_COUNT: to specify how many servers Nomad expects to join the cluster
  - NOMAD_ADDR: to configure Nomad's OIDC discovery URL
- The data source used to obtain the AMI always fetches the latest version, which means Terraform would otherwise want to redeploy the VM whenever a newer AMI is published. The lifecycle block therefore ignores changes to the AMI ID (and to user_data).
- It associates the security groups created above with the VM.
- It also adds the SSH key pair to the VM to aid in SSH authentication, which is useful for troubleshooting.
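Once the apply completes, a quick manual check over SSH can confirm the cloud-init script worked. This is a hedged sketch; the key path and public IP are placeholders for your own values:

```bash
# SSH to the Nomad server using the key pair and EIP created above
ssh -i ~/.ssh/deployer-key ubuntu@<nomad-server-public-ip>

# Confirm the Nomad service started and a leader was elected
systemctl status nomad
nomad server members   # the server should appear as alive/leader
```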
Configuring Nomad clients for the Vault servers

To deploy the Nomad clients for Vault, you take a similar approach to deploying the server. The main differences are the cloud-init script and the number of servers deployed. The script below (clients.sh) is used to configure each Nomad client:

```bash
#! /bin/bash -e

# Install the CNI plugins
curl -L https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz -o /tmp/cni.tgz
mkdir -p /opt/cni/bin
tar -C /opt/cni/bin -xzf /tmp/cni.tgz

# Install Nomad
sudo apt-get update && \
sudo apt-get install wget gpg coreutils -y
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install nomad -y

# Create Nomad directory.
mkdir -p /etc/nomad.d

# Create Vault directory.
mkdir -p /etc/vault.d

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl -y
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

# Install Java
sudo apt install default-jre -y

# Nomad configuration files
cat <<EOF > /etc/nomad.d/nomad.hcl
log_level = "DEBUG"
data_dir  = "/etc/nomad.d/data"

client {
  enabled    = true
  node_pool  = "vault-servers"
  node_class = "vault-servers"

  server_join {
    retry_join = ["${NOMAD_SERVERS_ADDR}"]
  }

  host_volume "/etc/vault.d" {
    path      = "/etc/vault.d"
    read_only = false
  }
}

plugin "docker" {
  config {
    allow_privileged = true
  }
}

autopilot {
  cleanup_dead_servers      = true
  last_contact_threshold    = "200ms"
  max_trailing_logs         = 250
  server_stabilization_time = "10s"
  enable_redundancy_zones   = false
  disable_upgrade_migration = false
  enable_custom_upgrades    = false
}
EOF

cat <<EOF > /etc/nomad.d/acl.hcl
acl = {
  enabled = true
}
EOF

systemctl enable nomad
systemctl restart nomad
```

The script above is similar to that of the Nomad server. One key difference is the Nomad client configuration file, which includes the following configuration items:

- Node pool: This Nomad feature groups pieces of compute infrastructure together. In this case, you want dedicated servers to run your Vault cluster to reduce the security blast radius. Grouping these servers lets you specify which node pool the Vault servers should run on and create policies around it to prevent other Nomad jobs from being deployed to these client nodes.
- Host volume: Vault can store encrypted secrets that other authorized Nomad jobs can retrieve, which makes Vault a stateful workload that requires persistent storage. Host volumes expose a volume on the VM to Nomad and let you mount the volume into a Nomad job, so a restarted job can still access its data.
- Docker plugin: The Docker plugin is enabled because Vault will run in a container as a Nomad job. This can ease the upgrade path: changing the tag on the image is often all that's needed. (A hypothetical job fragment showing how a workload consumes these settings follows below.)
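To illustrate how a workload consumes the node pool and host volume defined above, here is a hypothetical Nomad job fragment. It is a sketch only; the job, group, task, and volume names, the mount destination, and the image tag are all assumptions, not code from the post (the actual Vault job is covered in Part 2):

```hcl
# Hypothetical fragment: schedule a Dockerized Vault task onto the
# "vault-servers" node pool and mount the host volume declared in clients.sh.
job "vault" {
  datacenters = ["dc1"]          # assumed default datacenter
  node_pool   = "vault-servers"  # matches the client's node_pool

  group "vault" {
    volume "vault_data" {
      type      = "host"
      source    = "/etc/vault.d" # must match the host_volume label above
      read_only = false
    }

    task "vault" {
      driver = "docker"

      volume_mount {
        volume      = "vault_data"
        destination = "/vault/data" # assumed mount point inside the container
      }

      config {
        image = "hashicorp/vault:1.16" # illustrative tag
      }
    }
  }
}
```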
Before deploying the Nomad clients, you need to perform a quick health check on the Nomad server to ensure it is available for clients to join. To check this, use TerraCurl to make an API call to the Nomad server and check its status:

```hcl
resource "terracurl_request" "nomad_status" {
  method         = "GET"
  name           = "nomad_status"
  response_codes = [200]
  url            = "http://${aws_eip.nomad_server.public_ip}:4646/v1/status/leader"
  max_retry      = 4
  retry_interval = 10

  depends_on = [
    aws_instance.nomad_servers,
    aws_eip_association.nomad_server
  ]
}
```

This checks that Nomad has an elected leader in the cluster and expects a 200 response. If it does not get the desired response, TerraCurl will retry every 10 seconds, up to a maximum of 4 retries. This prevents potential race conditions between the provisioning of the Nomad server and the clients.

Now you're ready to deploy the Nomad client VMs. This is similar to the server deployed before, with a few key differences:

- Vault requires more compute power, so the instance type is bigger.
- It uses the count feature because you need three Vault nodes.
- The script rendered by the templatefile function needs only one variable value this time (the Nomad server address).

```hcl
resource "aws_instance" "nomad_clients" {
  count         = 3
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.medium"
  subnet_id     = module.vpc.public_subnets.0
  key_name      = aws_key_pair.deployer.key_name

  associate_public_ip_address = true

  user_data = templatefile("./clients.sh", {
    NOMAD_SERVERS_ADDR = "${aws_instance.nomad_servers.private_ip}"
  })

  vpc_security_group_ids = [
    aws_security_group.ssh.id,
    aws_security_group.egress.id,
    aws_security_group.nomad.id,
    aws_security_group.vault.id
  ]

  tags = {
    Name         = "Vault on Nomad Client ${count.index + 1}"
    nomad_server = false
  }

  lifecycle {
    ignore_changes = [
      user_data,
      ami
    ]
  }

  depends_on = [
    terracurl_request.nomad_status
  ]
}
```

Configuration of the Nomad client for the Vault backup server

The next piece of infrastructure to deploy is the VM used as the Vault backup server, which makes backups of the Vault cluster. It's best practice to store backups away from the Vault cluster, so create a separate node pool for the backup server. The vault-backup-server.sh script below, located in the same directory you've been working in so far, configures it:

```bash
#! /bin/bash -e

# Install the CNI plugins
curl -L https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz -o /tmp/cni.tgz
mkdir -p /opt/cni/bin
tar -C /opt/cni/bin -xzf /tmp/cni.tgz

# Install Nomad
sudo apt-get update && \
sudo apt-get install wget gpg coreutils -y
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install nomad -y

# Create Nomad directory.
mkdir -p /etc/nomad.d

# Install Vault
sudo apt-get install vault -y

# Create Vault directory.
mkdir -p /etc/vault.d

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl -y
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

# Install Java
sudo apt install default-jre -y

# Nomad configuration files
cat <<EOF > /etc/nomad.d/nomad.hcl
log_level = "DEBUG"
data_dir  = "/etc/nomad.d/data"

client {
  enabled    = true
  node_pool  = "vault-backup"
  node_class = "vault-backup"

  server_join {
    retry_join = ["${NOMAD_SERVERS_ADDR}"]
  }

  host_volume "vault_vol" {
    path      = "/etc/vault.d"
    read_only = false
  }
}

plugin "docker" {
  config {
    allow_privileged = true
  }
}

autopilot {
  cleanup_dead_servers      = true
  last_contact_threshold    = "200ms"
  max_trailing_logs         = 250
  server_stabilization_time = "10s"
  enable_redundancy_zones   = false
  disable_upgrade_migration = false
  enable_custom_upgrades    = false
}
EOF

cat <<EOF > /etc/nomad.d/acl.hcl
acl = {
  enabled = true
}
EOF

systemctl restart nomad
```

Next, you need to actually deploy the VM using the code below, which is added to the compute.tf file:

```hcl
resource "aws_instance" "nomad_clients_vault_backup" {
  count         = 1
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.medium"
  subnet_id     = module.vpc.public_subnets.0
  key_name      = aws_key_pair.deployer.key_name

  associate_public_ip_address = true

  user_data = templatefile("./vault-backup-server.sh", {
    NOMAD_SERVERS_ADDR = "${aws_instance.nomad_servers.private_ip}"
  })

  vpc_security_group_ids = [
    aws_security_group.ssh.id,
    aws_security_group.egress.id,
    aws_security_group.nomad.id,
    aws_security_group.vault.id
  ]

  tags = {
    Name         = "Vault backup server"
    nomad_server = false
  }

  lifecycle {
    ignore_changes = [
      user_data,
      ami
    ]
  }

  depends_on = [
    terracurl_request.nomad_status
  ]
}
```

Nomad ACL configuration

As part of the Nomad server configuration deployed by the cloud-init script, ACLs were enabled. This means an access token is required before any actions can be performed. Because this is a new install of Nomad, no token exists yet. To bootstrap Nomad's ACL system with the initial management token, you can use Nomad's API and TerraCurl. TerraCurl is useful in this scenario because the Nomad Terraform provider does not support this bootstrapping functionality. There is a file in the 1-nomad-infrastructure directory called nomad.tf that includes the following code:

```hcl
resource "terracurl_request" "bootstrap_acl" {
  method         = "POST"
  name           = "bootstrap"
  response_codes = [200, 201]
  url            = "http://${aws_instance.nomad_servers.public_ip}:4646/v1/acl/bootstrap"
}
```

This code makes a POST API call to the Nomad server using the public IP address that was assigned to the VM during creation. It uses Terraform's interpolation to join that address to the Nomad API endpoint /v1/acl/bootstrap. The resource tells Terraform to expect either a 200 or 201 response code from the API call, or Terraform will fail. The response body from the API call is stored in state.
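The outputs in the next step reference a nomad_acl_token.terraform resource, which lives in nomad.tf but is not shown in the post. Here is a hedged sketch of what it might look like: the jsondecode of the TerraCurl response body and the token name are assumptions based on the post's description, not confirmed code:

```hcl
# Hypothetical sketch: use the bootstrap token from the TerraCurl response
# to configure the Nomad provider, then mint a dedicated management token
# for Terraform to use going forward.
provider "nomad" {
  address   = "http://${aws_eip.nomad_server.public_ip}:4646"
  secret_id = jsondecode(terracurl_request.bootstrap_acl.response).SecretID
}

resource "nomad_acl_token" "terraform" {
  name = "terraform"
  type = "management"
}
```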
Terraform outputs

To provide some of the computed values to other workspaces, you need to output them. To do this, create a file called outputs.tf in the same directory as above and insert the following code:

```hcl
output "nomad_server_public_ip" {
  value = aws_eip.nomad_server.public_ip
}

output "nomad_server_private_ip" {
  value = aws_instance.nomad_servers.private_ip
}

output "nomad_clients_private_ips" {
  value = aws_instance.nomad_clients.*.private_ip
}

output "nomad_clients_public_ips" {
  value = aws_instance.nomad_clients.*.public_ip
}

output "terraform_management_token" {
  value     = nomad_acl_token.terraform.secret_id
  sensitive = true
}

output "nomad_ui" {
  value = "http://${aws_eip.nomad_server.public_ip}:4646"
}
```

The directory structure should now look like this:

```
├── 1-nomad-infrastructure
│   ├── anonymous-policy.hcl
│   ├── backend.tf
│   ├── client-policy.hcl
│   ├── clients.sh
│   ├── compute.tf
│   ├── network.tf
│   ├── nomad-client-vault-backup.sh
│   ├── nomad.tf
│   ├── outputs.tf
│   ├── providers.tf
│   ├── servers.sh
│   └── variables.tf
├── 2-nomad-configuration
│   └── backend.tf
└── 3-nomad-example-job-deployment
    └── backend.tf

3 directories, 14 files
```

Now you can run terraform plan and terraform apply to create these resources.

In the next blog

This blog post showed how to deploy the infrastructure required to run Vault on Nomad. It covered some Terraform directory structure concepts and how they relate to workspaces, as well as deploying and configuring Nomad and Nomad ACLs. Part 2 of this blog series will look at deploying Vault as a Nomad job and configuring it, while Part 3 will explore deploying some automation to assist in the day-to-day operations of Vault.

View the full article
  2. Secrets sync, now generally available in Vault Enterprise 1.16, is a new feature that helps organizations manage secrets sprawl by centralizing the governance and control of secrets stored in other secret managers.

Shift-left trends have led organizations to distribute their secrets across multiple secret managers, CI/CD tools, and platforms to bring them closer to developers for easy usage. This proliferation of storage locations complicates secrets management, limiting visibility, fostering inconsistent management, and compounding challenges with governance and compliance. Secrets management doesn't live up to its full potential unless it is centralized and managed on one platform. Secrets sync represents another step toward that vision, helping organizations resolve secrets-management fragmentation by providing a single management plane that controls the distribution of secrets for last-mile usage. In this post, we'll dive deeper into how secrets sync works and its benefits.

How does secrets sync work?

Secrets sync lets users manage multiple external secrets managers, which are called destinations in Vault. Supported destinations include:

- AWS Secrets Manager
- Google Cloud Secret Manager
- Microsoft Azure Key Vault
- GitHub Actions
- Vercel

Engineering and security teams can generate, update, delete, rotate, and revoke secrets from Vault's user interface, API, or CLI and have those changes synchronized to and from external secrets managers for use by cloud-hosted applications. Secrets sync lets organizations manage sync granularity by supporting secret access via paths and keys, so organizations can remain consistent with their existing operations (a CLI sketch of the workflow follows the list below).

Additionally, secrets sync supports alternative authentication methods for organizations that don't support or allow personal access tokens or long-lived credentials. Supported alternative authentication methods include:

- GitHub App for GitHub destinations
- Google Kubernetes Engine workload identity federation for Google Cloud destinations
- STS Assume Role for AWS destinations
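As a hedged sketch of the workflow, destinations and secret associations are managed through Vault's sys/sync API. The destination name, mount, and secret name below are placeholders, and the exact parameter set should be confirmed against the secrets sync documentation:

```shell
# Create an AWS Secrets Manager destination (placeholder credentials)
vault write sys/sync/destinations/aws-sm/my-aws-dest \
    access_key_id="$AWS_ACCESS_KEY_ID" \
    secret_access_key="$AWS_SECRET_ACCESS_KEY" \
    region="us-east-1"

# Associate a KV v2 secret with the destination so it begins syncing
vault write sys/sync/destinations/aws-sm/my-aws-dest/associations/set \
    mount="secret" \
    secret_name="my-app-api-key"
```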
Benefits of secrets sync

As more and more organizations adopt a multi-cloud approach, they face challenges around isolated secrets management, compliance, and reporting tools, as well as protecting expanded attack surfaces. Isolated secrets management solutions unify secrets only across services specific to their own platform. That cannot provide a complete solution for multi-cloud environments, nor address the secrets management complexities that come with secret sprawl, multi-cloud adoption, and large-scale SaaS usage. Benefits of secrets sync include:

- Maintain a centralized secrets management interface: Centralized secrets management is better secrets management. Instead of context switching between multiple cloud solutions and risking breaches via human error, secrets are all synced back to Vault to be managed and monitored there.
- Better governance: Give security and compliance stakeholders one solution to implement and govern security best practices, as well as monitor compliance. A single management plane makes governance teams more productive and their goals more achievable.
- Higher developer productivity: Syncing secrets to a single management plane also makes development teams more productive. There's no longer a need to interface with one cloud vendor's key manager when deploying to that cloud, and another key manager when working in another cloud.
- Central visibility of secrets activity across teams: Once secrets are synced and centralized, Vault operators can audit in one place. Track when, by whom, and where secrets are modified or accessed, with advanced filtering and storing capabilities.
- Last-mile secrets availability for developers: Centralize secrets in one solution while syncing secrets to existing platforms that may require use of the local cloud service provider's secrets manager (e.g. AWS Secrets Manager, Azure Key Vault, etc.).

How HashiCorp resolves secret sprawl

Resolving secrets sprawl requires a comprehensive approach governing people, processes, and technology. Secrets sync is a powerful tool to assist organizations in managing secret sprawl. It is supported on Vault Enterprise, as well as our multi-tenant SaaS solution, HCP Vault Secrets. Additionally, HCP Vault Radar helps platform engineering and security teams reduce the risk of secret sprawl by detecting unmanaged, hard-coded, and leaked secrets through regular scans of the data sources developers use. When an unsecure secret is detected, Vault Radar supports multiple remediation workflows to secure the organization's technology stacks.

To get started with HashiCorp Vault, visit the Vault product page. To learn more about what's new in Vault Enterprise, go to the Vault Enterprise release page. Please contact us if you'd like to discuss your secrets management journey.

- Secrets sync documentation
- Solutions to secret sprawl
- Vault Enterprise secrets sync tutorial
- Centralize control of external secrets managers with Vault Enterprise secrets sync (video)

View the full article
  3. Most Kubernetes resources and workloads reference the Kubernetes Secret object for credentials, API tokens, certificates, and other confidential data. Kubernetes stores secrets unencrypted by default and requires role-based access control (RBAC) rules to ensure least-privilege access, but it does not offer a straightforward method for tracking a secret's lifecycle and distribution. Rather than store secrets in Kubernetes, you can use a centralized secrets management solution like HashiCorp Cloud Platform (HCP) Vault Secrets to audit and manage secrets.

This post demonstrates how to use the Vault Secrets Operator (VSO) to retrieve dynamic secrets from HCP Vault Secrets and write them to a Kubernetes Secret for other workloads and resources to reference. HCP Vault Secrets stores a set of secrets and supports the management and audit of their lifecycle and distribution. The demo app repository for this tutorial uses HashiCorp Terraform to write the secrets into HCP Vault Secrets, deploy the Vault Secrets Operator to the Kubernetes cluster, and deploy the custom resources that synchronize a secret for Argo CD to create a private repository. This workflow minimizes the need to refactor applications to access a secrets manager directly, by using native Kubernetes Secrets.

Store secrets in HCP Vault Secrets

HCP Vault Secrets enables you to manage the lifecycle of credentials and track their usage. For example, a GitOps tool like Argo CD requires credentials to access a private repository on GitHub. These credentials may include a private key, username and password, or token that allows Argo CD to read from a repository. If you set up the GitHub App, you need to store the application identifier and private key for Argo CD to use. Storing the credentials in HCP Vault Secrets and installing the Vault Secrets Operator ensures the credentials get synchronized with a Kubernetes Secret, which Argo CD will reference.

The configuration example below uses the HCP provider for Terraform to store the GitHub App's private key, application identifier, and application installation identifier in an HCP Vault Secrets application named argocd:

```hcl
resource "hcp_vault_secrets_app" "argocd" {
  app_name    = "argocd"
  description = "Secrets related to running Argo CD on Kubernetes"
}

resource "hcp_vault_secrets_secret" "argocd_github_app_id" {
  app_name     = hcp_vault_secrets_app.argocd.app_name
  secret_name  = "githubAppID"
  secret_value = var.argocd_github_app.id
}

resource "hcp_vault_secrets_secret" "argocd_github_app_installation_id" {
  app_name     = hcp_vault_secrets_app.argocd.app_name
  secret_name  = "githubAppInstallationID"
  secret_value = var.argocd_github_app.installation_id
}

resource "hcp_vault_secrets_secret" "argocd_github_app_private_key" {
  app_name     = hcp_vault_secrets_app.argocd.app_name
  secret_name  = "githubAppPrivateKey"
  secret_value = base64decode(var.argocd_github_app.private_key)
}

resource "hcp_vault_secrets_secret" "argocd_github_url" {
  app_name     = hcp_vault_secrets_app.argocd.app_name
  secret_name  = "url"
  secret_value = var.argocd_github_app.url
}
```

After applying the Terraform configuration, you can access the secrets in HCP Vault Secrets under the argocd application. You then need to synchronize these secrets from the argocd HCP Vault Secrets application into Kubernetes for Argo CD to reference.
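To sanity-check what was stored, you could read the application back with the HCP provider's data source. This is a sketch under the assumption that the hcp_vault_secrets_app data source exposes a secrets map; verify the attribute name against the provider documentation:

```hcl
# Read back the "argocd" app to verify the writes (sketch, not from the post).
data "hcp_vault_secrets_app" "argocd" {
  app_name = hcp_vault_secrets_app.argocd.app_name
}

output "argocd_secret_keys" {
  # Expose only the key names; the map itself is sensitive.
  value     = keys(data.hcp_vault_secrets_app.argocd.secrets)
  sensitive = true
}
```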
Install Vault Secrets Operator

Vault Secrets Operator synchronizes secrets from HCP Vault Secrets or Vault into Kubernetes Secrets. The Operator handles the creation of custom resources that define the authentication to, and retrieval of, secrets from HCP Vault Secrets. Here's how to install Vault Secrets Operator using its Helm chart:

```shell
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install -n vault-secrets-operator --create-namespace \
    vault-secrets-operator hashicorp/vault-secrets-operator
```

The operator's Helm chart includes a set of custom resource definitions for authenticating to HCP Vault Secrets.

Authenticate to HCP Vault Secrets from Kubernetes

The Vault Secrets Operator needs a service principal to authenticate to HCP Vault Secrets. You can use Terraform to create a separate service principal with the viewer role (read-only access) that gives the operator read access to the secret:

```hcl
resource "hcp_service_principal" "argocd" {
  name = "argocd"
}

resource "hcp_service_principal_key" "argocd" {
  service_principal = hcp_service_principal.argocd.resource_name
}

resource "hcp_project_iam_binding" "argocd" {
  project_id   = var.hcp_project_id
  principal_id = hcp_service_principal.argocd.resource_id
  role         = "roles/viewer"
}
```

Save the HCP principal's client ID and key to a Kubernetes Secret in the argocd namespace:

```yaml
apiVersion: v1
data:
  clientID: REDACTED
  clientSecret: REDACTED
kind: Secret
metadata:
  name: hvs-service-principal
  namespace: argocd
type: Opaque
```

The Vault Secrets Operator refers to the HCPAuth resource to authenticate to an HCP project with the read-only service principal you created in the argocd namespace:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: HCPAuth
metadata:
  name: default
  namespace: argocd
spec:
  method: servicePrincipal
  organizationID: HCP_ORG_ID
  projectID: HCP_PROJECT_ID
  servicePrincipal:
    secretRef: hvs-service-principal
```

After deploying the HCPAuth resource to the cluster, you can define a resource to synchronize secrets from HCP Vault Secrets to Kubernetes.

Sync secrets from HCP Vault Secrets

Use the HCPVaultSecretsApp resource to define the secrets VSO synchronizes from HCP Vault Secrets to Kubernetes. You can define a destination Kubernetes Secret for the credentials, and each key in HCP Vault Secrets will map to a key in the secret. If the name of an HCP Vault Secrets key does not match the Kubernetes secret key a workload requires, you can configure transformations for each key in the secret. VSO will also refresh the secret on an interval defined in the refreshAfter attribute.

For example, Argo CD creates private repositories by scanning for Kubernetes Secrets with the argocd.argoproj.io/secret-type label. The HCPVaultSecretsApp resource for this tutorial's GitHub repository includes that label in the destination. It also reads each secret from HCP Vault Secrets and maps it to the keys required by Argo CD, such as githubAppID and githubAppPrivateKey. The repository secret for Argo CD also requires the type key, which is set to git.
Here is the complete HCPVaultSecretsApp resource:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: HCPVaultSecretsApp
metadata:
  name: github-creds
  namespace: argocd
spec:
  appName: argocd
  destination:
    create: true
    labels:
      argocd.argoproj.io/secret-type: repo-creds
      hvs: "true"
    name: github-creds
    overwrite: false
    transformation:
      templates:
        githubAppID:
          name: githubAppID
          text: '{{- get .Secrets "githubAppID" -}}'
        githubAppInstallationID:
          name: githubAppInstallationID
          text: '{{- get .Secrets "githubAppInstallationID" -}}'
        githubAppPrivateKey:
          name: githubAppPrivateKey
          text: '{{- get .Secrets "githubAppPrivateKey" -}}'
        type:
          name: type
          text: git
        url:
          name: url
          text: '{{- get .Secrets "url" -}}'
  hcpAuthRef: default
  refreshAfter: 1h
```

While the key names in HCP Vault Secrets do match the required keys for Argo CD, the transformations demonstrate the value of re-mapping secrets and adding required fields, such as type. In general, use the transformation field to create a Kubernetes Secret that conforms to the expected schema of any resource that uses it. Once you apply the resource, VSO creates a Kubernetes Secret named github-creds with the fields and values defined in the transformation:

```shell
$ kubectl get secrets -n argocd github-creds -o yaml
apiVersion: v1
data:
  _raw: REDACTED
  githubAppID: ODU4OTMx
  githubAppInstallationID: NDg2Mzg5OTI=
  githubAppPrivateKey: REDACTED
  type: Z2l0
  url: REDACTED
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: secret-sync
    app.kubernetes.io/managed-by: hashicorp-vso
    app.kubernetes.io/name: vault-secrets-operator
    argocd.argoproj.io/secret-type: repo-creds
    hvs: "true"
  name: github-creds
  namespace: argocd
  ownerReferences:
  - apiVersion: secrets.hashicorp.com/v1beta1
    kind: HCPVaultSecretsApp
    name: github-creds
    uid: 729d7860-0065-4802-b892-dffbe15bbffb
type: Opaque
```

Argo CD recognizes the secret because of the argocd.argoproj.io/secret-type: repo-creds label and creates a repository resource linked to the repository URL and GitHub App.
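To confirm that VSO reconciled the resource, you can inspect the custom resource and its events; this sketch uses standard kubectl commands, though the exact event text varies by VSO version:

```shell
# Inspect the custom resource and its reconciliation events
kubectl describe hcpvaultsecretsapp github-creds -n argocd

# Watch for the synced Kubernetes Secret to appear or refresh
kubectl get secret github-creds -n argocd --watch
```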
To verify any changes or review access to the secret, use the activity logs for the application in HCP Vault Secrets. The activity logs indicate that the argocd service principal used by VSO has listed secrets under the argocd application.

If you create a new GitHub App and update the secrets in HCP Vault Secrets, VSO updates the github-creds secret with the new application IDs and private keys the next time it refreshes the secret. Argo CD then updates the repository resource to use the new secrets without disrupting the repository connection. If you need to make changes to many GitHub Apps or credentials, you can update them all in HCP Vault Secrets without searching through Kubernetes namespaces and clusters.

Learn more

If you currently store credentials in Kubernetes Secrets, you can copy them to HCP Vault Secrets and create resources to synchronize them into your Kubernetes cluster. This process avoids significant refactoring of Kubernetes workloads and lets you manage and track secret lifecycles in a central location. While the example demonstrates how to synchronize secrets for Argo CD, you can use this pattern for other Kubernetes workloads and resources. To get started, you can:

- Sign up for HCP to start using HCP Vault Secrets.
- Review our documentation to learn more about HCP Vault Secrets and check out our tutorials on using HCP Vault Secrets with Kubernetes.
- Find a complete list of available sources for secrets syncing in the Vault Secrets Operator documentation.
- Learn how to use Vault to manage API tokens for the Terraform Cloud Operator.

View the full article
  4. We are pleased to announce the general availability of HashiCorp Vault 1.16. HashiCorp Vault provides secrets management, data encryption, identity management, and other workflow capabilities for applications on any infrastructure. Vault 1.16 focuses on enhancements to Vault's performance and administration, as well as support for new integrations and workflows. Key additions in this release include:

- Secrets sync
- Customizable GUI banners
- Proactive event notifications
- PKI secrets engine support for Enrollment over Secure Transport (EST) (beta)
- Plugin workload identity
- Audit log filtering
- Vault Secrets Operator enhancements
- Vault Enterprise Long-Term Support (LTS)

Let's take a closer look.

Secrets sync (Enterprise, GA)

Vault Enterprise secrets sync supports one-way synchronization of Vault's KV v2 secrets with five external destinations:

- AWS Secrets Manager
- Microsoft Azure Key Vault
- Google Cloud Secret Manager
- GitHub Action Secrets
- Vercel Project Environment Variables

Some managed services require users to fetch secrets from the native cloud service provider's secrets manager. To solve this, secrets sync synchronizes secrets to the five destinations listed above. This capability lets Vault remain the system of record for these secrets, caching them as read-only values in the external system; users can then read the secrets from the external secret manager. Based on customer feedback from the beta release, today's GA release includes feature and performance improvements along with significant UI enhancements that provide a richer management experience. For example, it is now possible to create and configure a sync destination entirely within the user interface.

Customizable GUI banners (Enterprise)

This new capability addresses feedback from customers who want to make announcements to their internal Vault users with important information such as scheduled maintenance or known issues. It allows admins to configure custom messages that display on the login page, both before and after login, and how long those messages live is also configurable. The ability to make proactive announcements promises to reduce the support load on operators by keeping relevant announcements inside the context of Vault, rather than in external channels.

Proactive event notifications (Enterprise)

Customers want to know immediately when certain events occur, such as when a secret is created, modified, or deleted. Applications and external integrations that rely on Vault for secrets management also need to be alerted when a secret in Vault is updated. This new event notification system allows subscribers to register to be notified of changes to their KV v2 or database secrets. At release, subscribers can include destinations such as the Vault agent (to allow for static secret caching), Vault subsystems such as plugins, or another Vault instance. We plan to support more secrets engines in the future.
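As a hedged sketch of the subscriber workflow, the Vault CLI can stream matching events over a websocket. The event type shown is one of the documented KV v2 event types, but confirm the exact command and required token capabilities against the Vault 1.16 documentation:

```shell
# Stream KV v2 write events; each matching change arrives as a JSON
# event until the command is interrupted.
vault events subscribe kv-v2/data-write
```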
PKI support for IoT/EST-based devices (beta)

In response to customer demand for automated certificate enrollment and management using the Enrollment over Secure Transport (EST) protocol, this release includes a beta capability to natively support EST in Vault PKI. EST is an industry-standard protocol for securing devices based on PKI certificates. Vault customers can take advantage of this capability to manage large-scale certificate issuance for EST-capable devices such as IoT devices, network devices, routers, etc.

Plugin workload identity (Enterprise)

This new plugin capability helps eliminate long-lived cloud provider credentials for auth methods, secrets engines, and secrets sync. Admins can now use workload identity federation (WIF) to establish trust relationships and enable the exchange of identity from one system to another. For example, a Vault-created identity token can be exchanged for AWS credentials, provided there is a trust relationship between Vault and AWS. This promises to eliminate a class of concerns around providing security credentials to Vault plugins.

Customized audit log filtering (Enterprise)

In response to customer requests for greater customization of audit logs to support compliance needs, audit logs are now tunable: customers can specify which fields they want to log and where to send those logs based on their content. This new capability lets customers collect specific events to better meet their compliance reporting needs.

Vault Secrets Operator (VSO) enhancements

Since the release of Vault Secrets Operator for Kubernetes in June 2023, we have continued to enhance its ability to connect Vault directly to native Kubernetes secrets. The recent VSO 0.5 release included two feature enhancements based on customer requests:

- A rich templating library to transform secrets from Vault or HCP Vault Secrets before storing them in the destination Kubernetes Secret. This gives customers the flexibility to fine-tune secrets to match required formats.
- To enable upgradability for Vault Enterprise customers via OperatorHub in Red Hat OpenShift, VSO now provides seamless upgrades with the Operator Lifecycle Manager (OLM).

Note: VSO supports multiple recent Vault releases and is released asynchronously from Vault.

Vault Enterprise Long-Term Support (LTS)

For customers who are comfortable doing minor version upgrades but can't fit in a major version jump more than once every two years, LTS provides a way to stay on a single major version and receive critical bug and security fixes after the first year. Vault Enterprise joins other HashiCorp products in offering LTS, with the following key characteristics:

- Extended maintenance: Two years of critical fixes provided via minor releases
- Efficient upgrades: Support for direct upgrades from one LTS release to the next, reducing major version upgrade risk and improving operational efficiency

LTS is available now to all Vault Enterprise customers with self-managed deployments. To upgrade your Vault Enterprise deployment to an LTS version (1.16), refer to Vault's upgrade documentation. If you currently have Vault Enterprise 1.16 deployed, you're already running a maintained LTS version and no further action is required at this time. For more information, visit the LTS documentation page.

Upgrade details

This release also includes more new features, workflow enhancements, general improvements, and bug fixes. All of these updates can be found in the Vault 1.16 changelog. Please visit the Vault release highlights page for step-by-step tutorials demonstrating the new features. As always, we recommend upgrading and testing new releases in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault discussion forum.

As a reminder, if you believe you have found a security issue in Vault, please responsibly disclose it by emailing security@hashicorp.com; do not use the public issue tracker. For more information, please consult our security policy and our PGP key.
For more information about HCP Vault and Vault Enterprise, visit the HashiCorp Vault product page. View the full article
  5. We are pleased to announce a Long-Term Support (LTS) release program for HashiCorp Vault Enterprise, starting with the 1.16 series. Going forward, the first major Vault release of each calendar year will be an LTS release.

The challenge: balancing operational overhead with timely upgrades

Vault's Enterprise product support policy has traditionally maintained N-2 releases, meaning that bugs and vulnerabilities continue to be addressed in the current major version and the previous two. For example, once 1.16 is generally available, 1.15 and 1.14 will keep receiving patches, but 1.13 will no longer receive updates and is effectively no longer a supported version.

The N-2 window makes it challenging for some customers to stay up to date, as it requires updating to new major versions at least once a year. Customers often use Vault in mission-critical workflows and may not have the resources to perform minimal-downtime upgrades (such as autopilot upgrades or standard operating procedure upgrades) or to prepare the testing necessary to certify a new version of Vault. However, customers who don't upgrade miss out on critical fixes and security patches once they're outside the N-2 window.

The solution: Long-Term Support releases

After receiving significant customer interest in longer support windows, we are happy to announce Long-Term Support (LTS) for Vault Enterprise. LTS helps customers reduce the time and effort required to upgrade their releases, lowering risk and improving operational efficiency by providing a single major version that receives critical bug and security fixes for up to two years. Vault Enterprise joins other HashiCorp commercial products in offering LTS, with the following key characteristics:

- Extended maintenance: An additional year of minor releases containing fixes for critical bugs and security vulnerabilities, for a total of two years
- Efficient upgrades: Support for direct upgrades from one LTS release to the next, to support variable customer timelines and reduce operational overhead

Getting started with Vault Enterprise LTS

The Vault Enterprise 1.16 release series is the first long-term support release of self-managed Vault. To upgrade your Vault Enterprise deployment to an LTS version (1.16.X), refer to Vault's upgrade documentation. Once you're running a maintained version of Vault Enterprise LTS, HashiCorp recommends upgrading once a year to the next LTS version; the next LTS Vault release will be 1.19. This upgrade pattern ensures your organization is always operating a maintained release, minimizes major version upgrades, and maximizes predictability for planning purposes. For more information, refer to the Vault Enterprise LTS documentation and to HashiCorp's multi-product LTS statement.

Next steps for HashiCorp Vault

Get started with Vault through our many tutorials for both beginners and advanced users. Learn more about Vault Enterprise's capabilities by starting a free trial.

View the full article
  6. The HashiCorp Terraform Cloud Operator for Kubernetes continuously reconciles infrastructure resources using Terraform Cloud. When you use the operator to create a Terraform Cloud workspace, you must reference a Terraform Cloud API token stored in a Kubernetes secret. Rather than hard-coding these secrets, you can better secure them by storing and managing them in a centralized secrets manager like HashiCorp Vault. In this approach, you need to synchronize secrets revoked and created by Vault into Kubernetes. An operator like the Vault Secrets Operator (VSO) can retrieve secrets from an external secrets manager and store them in a Kubernetes secret for workloads to use.

This post demonstrates how to use VSO to retrieve dynamic secrets from Vault and write them to a Kubernetes secret for the Terraform Cloud Operator to reference when creating a workspace. While the example focuses on Terraform Cloud API tokens, you can extend this workflow to any Kubernetes workload or custom resource that requires a secret from Vault.

Install Vault and operators

The Terraform Cloud Operator requires a user or team API token with permissions to manage workspaces, plan and apply runs, and upload configurations. While you can manually generate a token in the Terraform Cloud UI, configure Vault to issue API tokens for Terraform Cloud instead. The Terraform Cloud secrets engine for Vault handles the issuance and revocation of different kinds of API tokens in Terraform Cloud. Vault manages the token's lifecycle and audits its usage and distribution once you reference it in the Terraform Cloud Operator.

The demo repository for this post sets up the required infrastructure resources, including:

- A Vault cluster on HCP Vault
- A Kubernetes cluster on AWS

After provisioning infrastructure resources, the demo repo installs Helm charts for Vault, the Terraform Cloud Operator, and the Vault Secrets Operator in their own namespaces using Terraform. If you do not use Terraform, install each Helm chart by CLI. First, install the Vault Helm chart. If applicable, update the values to reference an external Vault cluster:

```shell
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install vault hashicorp/vault
```

Install the Helm chart for the Terraform Cloud Operator with its default values:

```shell
$ helm install terraform-cloud-operator hashicorp/terraform-cloud-operator
```

Install the Helm chart for VSO with a default Vault connection to your Vault cluster:

```shell
$ helm install vault-secrets-operator hashicorp/vault-secrets-operator \
    --set defaultVaultConnection.enabled=true \
    --set defaultVaultConnection.address=$VAULT_ADDR
```

Any custom resources created by VSO will use the default Vault connection. If you have different Vault clusters, you can define a VaultConnection custom resource and reference it in upstream dependencies. After installing Vault and the operators, configure the Kubernetes authentication method in Vault. This ensures VSO can use Kubernetes service accounts to authenticate to Vault.

Set up secrets in Vault

After installing a Vault cluster and operators into Kubernetes, set up the secrets engines for your Kubernetes application. The Terraform Cloud Operator needs a Terraform Cloud API token with permissions to create projects and workspaces and upload Terraform configuration. On the Terraform Cloud Free tier, you can generate a user token with administrative permissions or a team token for the "owners" team to create workspaces and apply runs.
To further secure the operator's access to Terraform Cloud, upgrade to a plan that supports teams. Then create a team, associate a team token with it, and scope the token's access to a Terraform Cloud project. This ensures the Terraform Cloud Operator has sufficient access to create workspaces and upload configuration in a given project without giving it access to an entire organization.

Configure the Terraform Cloud secrets engine for Vault to handle the lifecycle of the Terraform Cloud team API token. The demo repo uses Terraform to enable the backend. Pass in an organization or user token with permissions to create other API tokens:

```hcl
resource "vault_terraform_cloud_secret_backend" "apps" {
  backend     = "terraform"
  description = "Manages the Terraform Cloud backend"
  token       = var.terraform_cloud_root_token
}
```

Create a role for each Terraform Cloud team that needs to use the Terraform Cloud Operator, and pass the team ID to the role to configure the secrets engine to generate team tokens:

```hcl
resource "vault_terraform_cloud_secret_role" "apps" {
  backend      = vault_terraform_cloud_secret_backend.apps.backend
  name         = "payments-app"
  organization = var.terraform_cloud_organization
  team_id      = "team-*******"
}
```

Build a Vault policy that allows read access to the secrets engine credentials endpoint and role. The policy body was truncated in the original post; the heredoc below is a minimal sketch consistent with that description:

```hcl
resource "vault_policy" "terraform_cloud_secrets_engine" {
  name   = "terraform_cloud-secrets-engine-payments-app"
  policy = <<EOT
# Minimal sketch: read access to the team-token credentials endpoint.
path "terraform/creds/payments-app" {
  capabilities = ["read"]
}
EOT
}
```

The Terraform Cloud Operator needs the Terraform Cloud team token to create workspaces, upload configurations, and start runs. However, you may also want to pass secrets to workspace variables. For example, a Terraform module may need a username and password to configure HCP Boundary. You can store the credentials in Vault's key-value secrets engine and configure a Vault policy to read the static secrets (a sketch of syncing such a static secret follows below).

After setting up policies to read the required secrets, create a Vault role for the Kubernetes authentication method, which allows the terraform-cloud service account to authenticate to Vault and retrieve the Terraform Cloud token:

```hcl
resource "vault_kubernetes_auth_backend_role" "terraform_cloud_token" {
  backend                          = "kubernetes"
  role_name                        = "payments-app"
  bound_service_account_names      = ["terraform-cloud"]
  bound_service_account_namespaces = ["payments-app"]
  token_ttl                        = 86400

  token_policies = [
    vault_policy.terraform_cloud_secrets_engine.name,
  ]
}
```

Refer to the complete repo to configure the Terraform Cloud secrets engine and store static secrets for the Terraform Cloud workspace variables.
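For the static credentials mentioned above, a VaultStaticSecret resource can mirror a KV v2 entry into a Kubernetes Secret for use as workspace variables. This is a hypothetical sketch, not code from the demo repo; the resource name, mount, and path are assumptions:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: boundary-creds          # hypothetical name
  namespace: payments-app
spec:
  type: kv-v2
  mount: secret                 # assumed KV v2 mount
  path: payments-app/boundary   # assumed secret path
  refreshAfter: 60s
  destination:
    create: true
    name: boundary-creds
  vaultAuthRef: terraform-cloud
```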
Sync secrets from Vault to Kubernetes

The Terraform Cloud Operator includes a custom resource to create workspaces and define workspace variables, but dynamic variables must refer to values stored in a Kubernetes Secret or ConfigMap. Use VSO to synchronize secrets from Vault into native Kubernetes secrets. The demo repo for this post retrieves the Terraform Cloud team token and static credentials and stores them as a Kubernetes secret.

VSO uses a Kubernetes service account linked to the Kubernetes authentication method role in Vault. First, deploy a service account and service account token for terraform-cloud to the payments-app namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: terraform-cloud
  namespace: payments-app
---
apiVersion: v1
kind: Secret
metadata:
  name: terraform-cloud-token
  namespace: payments-app
type: kubernetes.io/service-account-token
```

Then, configure a VaultAuth resource for VSO to use the terraform-cloud service account and authenticate to Vault using the kubernetes mount path and the payments-app role defined for the authentication method. The configuration shown here sets the Vault namespace to admin for an HCP Vault cluster:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: terraform-cloud
  namespace: payments-app
spec:
  method: kubernetes
  mount: kubernetes
  namespace: admin
  kubernetes:
    role: payments-app
    serviceAccount: terraform-cloud
    audiences:
      - vault
```

To sync the Terraform Cloud team token required by the Terraform Cloud Operator to a Kubernetes secret, define a VaultDynamicSecret resource to retrieve the credentials. VSO uses this resource to retrieve credentials from the terraform/creds/payments-app path in Vault and creates a Kubernetes secret named terraform-cloud-team-token with the token value. The resource refers to the VaultAuth resource for authentication to Vault:

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: terraform-cloud-team-token
  namespace: payments-app
spec:
  mount: terraform
  path: creds/payments-app
  destination:
    create: true
    name: terraform-cloud-team-token
    type: Opaque
  vaultAuthRef: terraform-cloud
```

When you apply these manifests to your Kubernetes cluster, VSO retrieves the Terraform Cloud team token and stores it in a Kubernetes secret. The operator's logs show the handling of the VaultAuth resource and the synchronization of the VaultDynamicSecret:

```shell
$ kubectl logs -n vault-secrets-operator $(kubectl get pods \
    -n vault-secrets-operator \
    -l app.kubernetes.io/instance=vault-secrets-operator -o name)
2024-03-14T16:38:47Z DEBUG events Successfully handled VaultAuth resource request {"type": "Normal", "object": {"kind":"VaultAuth","namespace":"payments-app","name":"terraform-cloud","uid":"e7c0464e-9ce8-4f3f-953a-f8eb10853001","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"331817"}, "reason": "Accepted"}
2024-03-14T16:38:47Z DEBUG events Secret synced, lease_id="", horizon=0s {"type": "Normal", "object": {"kind":"VaultDynamicSecret","namespace":"payments-app","name":"terraform-cloud-team-token","uid":"d1563879-41ee-4817-a00b-51fe6cff7e6e","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"331826"}, "reason": "SecretSynced"}
```

Verify that the Kubernetes secret terraform-cloud-team-token contains the Terraform Cloud team token:

```shell
$ kubectl get secrets -n payments-app \
    terraform-cloud-team-token -o jsonpath='{.data.token}' | base64 -d
******.****.*****
```

Create a Terraform Cloud workspace using secrets

You can now configure other Kubernetes resources to reference the secret synchronized by VSO.
For the Terraform Cloud Operator, deploy a Workspace resource that references the Kubernetes secret with the team token:

apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: payments-app-database
  namespace: payments-app
spec:
  organization: hashicorp-stack-demoapp
  project:
    name: payments-app
  token:
    secretKeyRef:
      name: terraform-cloud-team-token
      key: token
  name: payments-app-database
  ## workspace variables omitted for clarity

The team token has administrator access to create and update workspaces in the “payments-app” project in Terraform Cloud. You can use a similar approach to pass Kubernetes secrets as workspace variables.

Deploy a Module resource to apply a Terraform configuration in a workspace. The resource references a module source, variables to pass to the module, and outputs to extract. The Terraform Cloud Operator uploads a Terraform configuration to the workspace defining the module:

apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: database
  namespace: payments-app
spec:
  organization: hashicorp-stack-demoapp
  token:
    secretKeyRef:
      name: terraform-cloud-team-token
      key: token
  destroyOnDeletion: true
  module:
    source: "joatmon08/postgres/aws"
    version: "14.9.0"
  ## module variables omitted for clarity

Terraform Cloud will start a run to apply the configuration in the workspace.

Rotate the team API token

Terraform Cloud allows only one active team token at a time. As a result, the Terraform Cloud secrets engine does not assign leases to team tokens and requires manual rotation. However, Terraform Cloud does allow issuance of multiple user tokens. The secrets engine assigns leases to user API tokens and will rotate them dynamically.

To rotate a team token, run a Vault command to rotate the role for a team token in Terraform Cloud:

$ vault write -f terraform/rotate-role/payments-app

VSO must update the Kubernetes secret with the new token when the team token is rotated. Edit a field in the VaultDynamicSecret resource, such as renewalPercent, to force VSO to resynchronize:

$ kubectl edit VaultDynamicSecret terraform-cloud-team-token -n payments-app
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  annotations:
  ## omitted
spec:
  ## omitted
  renewalPercent: 60
  vaultAuthRef: terraform-cloud

VSO recognizes the new team token in Vault and reconciles it with the Kubernetes secret:

$ kubectl logs -n vault-secrets-operator $(kubectl get pods \
    -n vault-secrets-operator \
    -l app.kubernetes.io/instance=vault-secrets-operator -o name)
2024-03-18T16:10:19Z INFO Vault secret does not support periodic renewal/refresh via reconciliation {"controller": "vaultdynamicsecret", "controllerGroup": "secrets.hashicorp.com", "controllerKind": "VaultDynamicSecret", "VaultDynamicSecret": {"name":"terraform-cloud-team-token","namespace":"payments-app"}, "namespace": "payments-app", "name": "terraform-cloud-team-token", "reconcileID": "3d0a15f1-0edf-450b-8be1-6319cd3b2d02", "podUID": "4eb7f16a-cfcb-484e-b3da-54ddbfc6a6a6", "requeue": false, "horizon": "0s"}
2024-03-18T16:10:19Z DEBUG events Secret synced, lease_id="", horizon=0s {"type": "Normal", "object": {"kind":"VaultDynamicSecret","namespace":"payments-app","name":"terraform-cloud-team-token","uid":"f4f0483c-895d-4b05-894c-24fdb1518489","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"1915673"}, "reason": "SecretRotated"}

Note that this manual workflow for rotating tokens applies specifically to team and organization tokens generated by the Terraform Cloud secrets engine. User tokens have leases, which VSO handles automatically. VSO also supports the rotation of credentials for static roles in database secrets engines. Set the allowStaticCreds attribute in the VaultDynamicSecret resource for VSO to synchronize changes to static roles.

Learn more

As shown in this post, rather than store Terraform Cloud API tokens as secrets in Kubernetes, you can manage the tokens with Vault and use the Vault Secrets Operator to synchronize them to Kubernetes secrets for the Terraform Cloud Operator to use. By managing the Terraform Cloud API token in Vault, you can audit its usage and handle its lifecycle in one place. In general, the pattern of synchronizing to a Kubernetes secret allows any permitted Kubernetes custom resource or workload to use the secret while Vault manages its lifecycle. As a result, you can track the usage of secrets across your Kubernetes workloads without refactoring applications already using Kubernetes secrets.

Learn more about the Vault Secrets Operator in our VSO documentation. If you want to further secure your secrets in Kubernetes, check out our blog post comparing three methods to inject secrets from Vault into Kubernetes workloads. If you support a GitOps workflow in your organization and want to empower teams to deploy infrastructure resources using Kubernetes, review our documentation on the Terraform Cloud Operator to deploy and manage infrastructure resources through modules. Refer to GitHub for a complete example provisioning a database and other infrastructure resources. View the full article
  7. HCP Vault Radar is HashiCorp’s new secret scanning product that expands Vault’s secret-lifecycle management use cases to include the discovery and prioritization of unmanaged secrets. In January, we announced a limited beta program for HCP Vault Radar. Today, we’re announcing that HCP Vault Radar is entering limited availability. The limited availability phase signals that HCP Vault Radar is ready for production workloads while the product continues to onboard additional data sources.

Reduce the risk of secret sprawl

While IT organizations aim to securely store and manage every secret, developers often hard-code secrets into repositories, share secrets on messaging platforms, and include secrets in wikis or tickets. Sometimes, machines leak plaintext secrets in log data when errors occur. These unmanaged secrets pose a threat to organizations’ security posture if compromised. HCP Vault Radar helps DevOps and security teams reduce the risk associated with secret sprawl by detecting unmanaged and leaked secrets, as well as those that have been hard-coded or reside in plaintext, so teams can take appropriate actions to remediate them. HCP Vault Radar scans for secrets like usernames and passwords, API keys, and tokens in popular developer tools and repositories.

Prioritize remediation effectively

Secret scanning is helpful only if it separates signal from noise. As a result, secret scanning tools should help you determine whether exposed secrets are low-risk or significant threats. That’s why HCP Vault Radar has invested heavily in reducing false positives and helping users effectively prioritize remediation efforts. To rank the severity of an exposed secret, HCP Vault Radar uses a combination of data points to answer the following questions:

Was the secret found in the latest version of the code or document?
Does the content have high entropy or randomness that likely indicates it’s a secret?
Is the secret currently active and in use?
Is this secret currently live in Vault?

Version history

Knowing whether the secret was discovered in the latest version of a file is important for remediation prioritization, as well as governance. If the potential secret was discovered in a current file, HCP Vault Radar assigns it a higher priority because the finding has likely not been evaluated before and is therefore more likely to be a secret. Governance and compliance teams are also interested in secrets discovered in historical versions of a file. They need to understand when the secret was published and why it wasn’t remediated. Depending on their findings, they may need to adjust their workflows or determine whether ignore rules need to be introduced or adjusted.

String randomness

Another way HCP Vault Radar reduces false positives is by evaluating the entropy (randomness) of content. Relying on pattern-matching text searches alone increases the risk of missing secrets that don’t adhere to a particular format. Entropy algorithms are highly effective at identifying random or complex strings that frequently indicate the content is a secret. Evaluating string literals in code for entropy allows HCP Vault Radar to identify potentially suspicious strings in any format. HCP Vault Radar employs security scanning best practices by combining pattern matching with entropy analysis to prevent secrets from making it to production.

Activeness checks

Credentials currently in use present a bigger threat than those that were created for test purposes and never worked or aren’t active.
When HCP Vault Radar finds a credential, it calls out to the associated application to check whether the secret is still active. Active credentials are marked as high risk within the prioritization portal. Today, HCP Vault Radar can test for:

Google Cloud API keys
Amazon Web Services (AWS) credentials
Personal access tokens for GitHub
JSON web tokens (JWT)

Vault correlation

To further support prioritization, HCP Vault Radar can check whether a leaked secret is also stored in Vault. Most credentials in Vault are used in critical production environments, so HCP Vault Radar gives exposed secrets a higher severity score when they are also found in Vault’s key-value stores.

Scan top developer tools in the cloud or on-premises

HCP Vault Radar supports secret scanning from the SaaS HashiCorp Cloud Platform as well as an agent-based variant that allows for on-premises and self-managed scanning. Results from all sources are integrated into the HCP dashboard for a streamlined user experience when prioritizing remediation of unmanaged secrets. HCP Vault Radar also includes a CLI for point-in-time, on-demand use cases. It is best leveraged for non-continuous or non-rule-based scans of a particular file or folder’s contents. Once the CLI scan is completed, the results are uploaded to the HCP portal to take advantage of Vault Radar’s prioritization and remediation workflow functionality. HCP Vault Radar customers can scan the following data sources (Terraform Cloud, Terraform Enterprise, and Slack are newly available):

Git-based version control systems (GitHub, GitLab, Bitbucket, etc.)
AWS Parameter Store
Server file directory structures
Confluence
HashiCorp Vault
Amazon S3
Terraform Cloud
Terraform Enterprise
JIRA
Docker images
Slack

Remediate unmanaged secrets

In addition to secret scanning, HCP Vault Radar supports a robust set of remediation workflows via ticketing and alerting solutions. HCP Vault Radar includes integrations with the following alerting and ticketing solutions for further security incident response:

Microsoft Teams
Slack (Integration Demo)
PagerDuty
Splunk
JIRA
ServiceNow

These integrations leverage tools already common in the incident response workflows of DevOps, platform engineering, and security teams. HCP Vault Radar transmits all of the information necessary to remediate its findings, including:

Author
Location
Activeness
Whether the secret is in the current version of a document or in its history
Whether the secret is in plaintext

The secret itself is never exposed in the HCP console. Instead, the user is provided a link to the location where the secret can be found, investigated, and remediated when necessary.

Prevent secret leakage

In addition to integrating with alerting and ticketing tools to remediate findings that have already been published, HCP Vault Radar integrates with GitHub to prevent possible secrets from being merged into the main branch in the first place. The GitHub app will alert you to any sensitive data found in the pull request, including both the tip of the pull request and the history of any commits. The alert includes details on what type of secret was found and where it was found. Pull request checks allow HCP Vault Radar to perform a scan against a GitHub pull request when the HCP Vault Radar GitHub application is installed for a repository. The HCP Vault Radar application for GitHub will run for every pull request and for every new commit to any open pull request. By default, the HCP Vault Radar GitHub application does not block any pull requests.
Our usage statistics show more than 80% of warnings about secrets in pull requests are fixed prior to merging into the main branch, so a non-blocking workflow tends to be a better fit for developer needs. However, organizations can configure GitHub to block the pull request from being merged if the check fails due to sensitive data being discovered.

Learn more about HCP Vault Radar

HCP Vault Radar is an exciting new addition to HashiCorp Vault’s secret lifecycle management capabilities that helps enterprises reduce risk associated with credential exposure. Discovery of unmanaged secrets and subsequent remediation workflows further differentiate Vault’s secrets lifecycle management offering by enabling organizations to take a proactive approach to remediation before a data breach occurs. To learn more, check out these resources:

HCP Vault Radar product documentation
Solutions to secret sprawl blog post
New HCP Vault Secrets, Radar, and other features fight secret sprawl blog post
Contact HashiCorp sales
Demo

View the full article
  8. Many organizations rely on X.509 public key infrastructure (PKI) certificates to protect their services and workloads. This standard supports a wide range of configurable fields and extensions, making it highly flexible and allowing for a high degree of customization. Many of these fields are relevant for a large number of use cases and are natively supported by HashiCorp Vault. However, due to the sheer number of fields defined in the X.509 standard, there were cases where a field was not implemented in, and therefore could not be validated by, Vault. To address these use cases, we have introduced support for Certificate Issuance External Policy Services (CIEPS) in Vault Enterprise 1.15.

Issuing a PKI certificate

Issuing a PKI certificate with Vault is simple. Once a certificate authority (CA) has been established and configured with a role, authenticated clients can submit certificate issuance or signing requests against that role. The role defines the allowed values in the certificate and the request is compared against those values. If the values match, a certificate is issued or signed. If they do not match, the request is rejected. This workflow is suitable for all requests that include fields and extensions natively supported by Vault. In scenarios where customers require certificates to be issued with custom fields, Vault offers the option to use the sign-verbatim method, described in the next section.

Challenges with sign-verbatim requests

Issuing or signing a certificate with the sign-verbatim method allows the request to use arbitrary fields and extensions that are not supported by the role-based process outlined above. However, because Vault does not validate those custom fields, the use of sign-verbatim requires a different validation approach.

HashiCorp Sentinel is a policy as code framework that allows the definition of fine-grained, logic-based policies. It can be used to provide additional validation for requests made to endpoints and entities in Vault. When defined for a particular endpoint, the Sentinel policy evaluates all requests to that endpoint and determines whether to allow or reject them based on the result of the policy check. With Sentinel, specific fields can be denied or required. When fields are defined as required in the Sentinel policy, the client has to know and specify all required fields in the request. Missing fields lead to the request being denied, and resolving the issue requires clients to have knowledge of very specific PKI fields, which may not be easy to find. In addition, Sentinel operates on a deny/accept basis and lacks the ability to modify requests.

Another possible approach when issuing or signing certificates verbatim would be to set up a validation service in front of Vault. With this approach, the client communicates with and authenticates to the validation service instead of directly to Vault. The validation service then validates incoming requests and forwards valid requests to the sign-verbatim endpoint. However, in order to do this, the validation service needs to authenticate to Vault, which introduces an additional authentication and authorization step. Clients who need to use this validation service and Vault for other use cases, such as role-based certificate request validation, would need to manage authentication information for both services. Similar to the Sentinel approach, a validation service running in front of Vault cannot modify certificate signing requests (CSRs).
Even if the request could be modified, it would happen outside of Vault, which leaves the question of how those modifications could be audited. In addition, in order to remain compatible with Vault, the validation service would need to support the ACME protocol and any protocols that may be added to Vault in the future. Both approaches, sign-verbatim with Sentinel or with a validation service, are good for allowing and rejecting requests. However, they do not provide the ability to modify requests, and they lack auditability. As a result, the sign-verbatim solution is not user-friendly and is not popular among organizations that use Vault.

Vault’s Certificate Issuance External Policy Service

To address these challenges, we designed and implemented a new Vault Enterprise feature called CIEPS in HashiCorp Vault 1.15. CIEPS is part of the PKI secrets engine and offers a new mechanism that allows customers to use and validate any field specified in the X.509 standard. This enables a high degree of customizability to address less-common use cases. With CIEPS, customers can build a custom policy service that runs outside of Vault. In the service, custom issuance policies can be specified. A Vault operator can then configure the PKI secrets engine to refer to this custom policy service, letting Vault handle communication with the service.

When Vault receives a request that requires validation by the external policy service, it forwards the request to the custom policy service for validation and approval. The external policy service processes the request and, based on the defined policies, determines whether the request should be approved or denied. When the request is approved, Vault updates select fields if necessary and then issues the certificate. If the request is denied, the policy service can specify a custom error message in its response to Vault. Vault includes the error message in its response to the original request, allowing a better user experience with customized error messages instead of generic, uninformative ones.

Once everything has been set up, clients can authenticate to and request certificates from Vault. The interaction between Vault and the external policy service is not visible to the client. For the client, Vault’s behavior remains unchanged and the certificate request workflow remains the same. The only difference is the API endpoint to which the request is sent in Vault. No additional authentication or authorization step is necessary. This lets the client interact only with Vault as the central platform, instead of having to target an additional service. The external policy service can also modify the original requests if necessary. Since it is directly integrated with Vault, all requests and responses are part of Vault’s audit logs, ensuring complete auditability of the issuance process.

The benefits of using an external policy service with Vault include:

Auditability: As Vault is handling the communication and requests to the service, all requests and responses between Vault and the external policy service are tracked in Vault’s audit logs.
User experience: Clients interact only with Vault, the request workflow stays the same, and error messages are customizable.
Certificate modification: The original request can be customized according to the defined policies.
Flexibility: The external policy service uses a customer-defined external service for validation and is therefore more flexible than the role-based system.
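To make the difference concrete, here is a minimal Vault CLI sketch contrasting the two issuance paths. The mount, role, policy, and domain names are hypothetical, and the external-policy endpoint shown is an assumption; verify the exact path against the CIEPS documentation for your Vault Enterprise version:

# Role-based issuance: validated against the role's allowed parameters
$ vault write pki/issue/web-servers common_name=app.example.com

# CIEPS-backed issuance: forwarded to the configured external policy service
$ vault write pki/external-policy/issue/default common_name=app.example.com

In both cases the client authenticates to Vault the same way; only the endpoint differs.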
CIEPS in Vault PKI: Adding flexibility

With CIEPS, organizations now have granular control and the ability to create flexible certificate structures that meet their PKI requirements, while having HashiCorp Vault validate and issue them. CIEPS allows for the use and validation of any field or extension specified in the X.509 standard, addressing use cases that are not natively supported and therefore were previously not possible in Vault. Additionally, the workflow for end users remains the same, ensuring a consistent user experience.

Interested in learning more about the Certificate Issuance External Policy Service? Check out our hands-on tutorial, which includes an example service to get you started. To learn more about how HCP Vault and Vault Enterprise can help with your PKI requirements, contact HashiCorp sales.

HashiCorp Vault PKI resources

PKI design considerations: Read this technical guide on how to build modern PKI with HashiCorp Vault.
What is public key infrastructure (PKI): See how PKI governs the provisioning of digital certificates to protect sensitive data, establish digital identities, and secure communications.
What is ACME PKI? Learn about the ACME protocol for PKI, the common problems it solves, and why it should be part of your certificate management roadmap.
PKI hosting: Cloud-based PKI vs. self-managed: Discover the benefits and differences between hosting PKI workloads in the cloud versus on-premises.
X.509 certificate management with Vault: A look at practical public key certificate management in HashiCorp Vault using dynamic secrets rotation.

View the full article
  9. PCI DSS (Payment Card Industry Data Security Standard) is a global standard that establishes technical and operational criteria for protecting payment data. The PCI Security Standards Council announced PCI DSS v4.0 on March 31, 2022. Now organizations have until March 31, 2025 to comply with the new standard. HashiCorp Vault can play a significant role in helping your organization attain PCI DSS certification by providing secure and compliant management of sensitive data, including cardholder data. This post will give a brief overview of the areas where Vault can help your organization comply with the new standards in time... View the full article
  10. HashiCorp Vault scans for skeletons in your code closet. View the full article
  11. Today at HashiConf, we are pleased to announce the alpha program for HashiCorp Cloud Platform (HCP) Vault Radar, HCP Vault Secrets general availability, secrets sync beta for Vault Enterprise, and HashiCorp Vault 1.15. These new capabilities help organizations secure their applications and services as they leverage a cloud operating model to power their shift to the cloud. Enabling a cloud operating model helps organizations cut costs, reduce risks, and increase the speed at which developers build and deploy secure applications. The new capabilities boost Vault’s focus on helping organizations use identity to achieve their security goals by:

Centrally managing and enforcing access to secrets and systems based on trusted sources of application and user identity.
Eliminating credential sprawl by identifying static secrets hardcoded throughout complex systems and tooling across your entire cloud estate.
Reducing the manual overhead and risk associated with managing access to infrastructure resources such as SSH and VPNs, as well as applications and services.
Automatically implementing authentication and authorization mechanisms to ensure only authorized services can communicate with one another.

View the full article
  12. Today at HashiConf, we are pleased to announce the general availability of HCP Vault Secrets, a new software-as-a-service (SaaS) offering of HashiCorp Vault that focuses on secrets management. Released in beta earlier this year, HCP Vault Secrets lets users onboard quickly and is free to get started. The general availability release of HCP Vault Secrets builds on the beta release with production-ready secrets management capabilities, additional secrets sync destinations, and multiple consumption tiers. During the public beta period, we worked on improvements and additions to HCP Vault Secrets. Focused on secrets management for developers, these additions will help our users to:

Boost security across clouds and machines: Centralize where secrets are stored and minimize context switching between multiple solutions to reduce the risk of breaches caused by human error.
Increase productivity: Improve security posture without expending additional time and effort.
Enhance visibility of secrets activity across teams: Understand when secrets are modified or accessed — including by whom, when, and from where — with advanced filtering and storage.
Comply with security best practices: Eliminate manual upgrade requirements with fully managed deployment to keep your instance up to date and in line with security best practices.
Last-mile secrets availability for developers: Centralize secrets in HCP Vault Secrets while syncing secrets to existing platforms and tools so developers can access secrets when and where they need them.

View the full article
  13. At HashiDays in June, we announced the public beta for a new offering on the HashiCorp Cloud Platform: HCP Vault Secrets is a powerful new tool designed to identify, control, and remediate secrets sprawl and centralize secrets management by synchronizing secrets across platforms. Secrets are unlike traditional credentials because they are leveraged by developers, applications, services, infrastructure, and platforms to establish trusted identities. As organizations distribute their workloads across more platforms, they lose centralized control over identity security and become more exposed to secrets sprawl. This post reviews the secrets sync beta feature released as part of Vault Enterprise 1.15 and discusses how it will help organizations corral secrets sprawl and regain centralized control and visibility of their secrets... View the full article
  14. HashiCorp is pleased to announce the general availability of Vault 1.13. Vault provides secrets management, data encryption, and identity management for any application on any infrastructure. Vault 1.13 focuses on Vault’s core secrets workflows as well as team workflows, integrations, and visibility. The key features in this release include improvements to:

Multi-namespace access workflows
Azure auth method
Google Cloud secrets engine
KMIP secrets engine
MFA login
Vault Agent
Certificate revocation for cross-cluster management

Additional new features include:

Event-based notifications (alpha)
Vault Operator (beta)
HCP link for self-managed Vault (private beta)
PKI health checks
Managed Transit keys

»Multi-namespace access improvements

When customers have secrets distributed across multiple (independent) namespaces, their applications need to authenticate to Vault multiple times, creating an unnecessary burden. Additionally, customers using Vault Agent need to run separate Vault Agent instances to talk to each namespace. Vault 1.13 includes namespace improvements that alleviate these challenges by enabling a single agent instance to fetch secrets across multiple namespaces.

»Microsoft Azure auth improvements

Prior to Vault 1.13, only Microsoft Azure virtual machines could leverage the Azure auth method to authenticate to Vault. Vault 1.13 introduces the ability to authenticate from Azure Functions and the Azure App Service.

»Google Cloud secrets engine improvements

To avoid exhausting Google Cloud’s 10-key limit when the same service account is used with 10 or more Vault roles, Vault 1.13 improves the Google Cloud secrets engine integration by using its own service account with Google Cloud’s account impersonation feature to generate access tokens.

»Event-based notifications (alpha)

Vault 1.13 introduces an alpha release of a lightweight event-based notification feature that can be used to notify downstream applications of changes to Vault secrets. Applications and external entities can subscribe to events via a WebSockets API.

»Undo logs

We identified a race condition associated with Vault’s replication, triggered when frequent updates to a set of keys occasionally result in a Merkle diff or sync loop. A fix for this condition was introduced in Vault 1.12, but it was disabled by default. In Vault 1.13, the fix is enabled by default.

»MFA login improvements

Vault 1.10 introduced Login MFA, a standardized configuration for integration with Duo, PingIdentity, Okta, and TOTP, but some customers have found UX challenges with these configurations. With the improvements introduced in 1.13, customers will be able to migrate to Login MFA more easily, and Login MFA will be easier to configure and debug. Prior to 1.13, customers had to specify an MFA ID, a long UUID, for each MFA provider, which can be cumbersome for customers with many clusters. 1.13 introduces the MFA name as a human-readable alias. Passcodes were also not being used consistently across methods; there is now consistent behavior across the CLI and API, regardless of method.

»Vault Operator (beta)

Kubernetes applications using Vault for secrets management have had the option of leveraging a sidecar or the CSI secrets store provider to inject secrets into files. This created a number of challenges. First, these approaches required applications to be modified if you wanted the ability to read from a file.
Furthermore, applications need to be aware of when credentials have been modified so that they can be re-read from the file. Vault 1.13 introduces the Vault Operator to address these challenges. The Operator will allow customers to natively sync secrets from Vault to Kubernetes clusters. The Vault Operator will not be immediately available with the GA launch of 1.13; it is expected to become available in late March.

»Vault Agent improvements

Vault 1.13 includes several enhancements to the Vault Agent:

Users can get started with Vault Agent without needing to set up auth methods. This feature is for learning and testing and is not recommended for production use.
Listeners for the Vault Agent can now be configured with a role of “metrics only” so that a service can be configured to listen on a particular port for the purpose of metrics collection only.
The Vault Agent can now read configurations from multiple files.
The Vault Agent persists logging when there is a mismatch between the agent and server.

»HCP link for self-managed Vault (private beta)

In Vault 1.13 and the HashiCorp Cloud Platform (HCP), we’ve introduced a feature enabling active connections between self-managed Vault clusters and HCP. The feature is similar to the Consul global management plane. You sign up for free on HCP to use it, and you can connect self-managed clusters into one platform and access applications built on top of HCP, such as the operations monitoring and usage insights features currently in private beta. We’ve chosen to add these capabilities to HCP specifically because adding data processing functionality to the Vault binary could increase memory consumption. To participate in this private beta, you must first upgrade to Vault 1.13. However, the upgrade doesn’t need to occur in your production environments: simply upgrade a pre-production environment to Vault 1.13 and you can start testing. If you would like to participate in the beta, please contact our product team by emailing siranjeevi.dheenadhayalan@hashicorp.com.

»Certificate revocation for cross-cluster management

To improve the cross-cluster certificate management user experience, Vault 1.13 extends revocation capability support (including certificate revocation lists (CRLs) and the online certificate status protocol (OCSP)) across multiple Vault clusters for the same PKI mount.

»PKI health checks

Vault 1.13 introduces PKI health check CLI commands to help organizations troubleshoot issues and avoid downtime. The following configurations are covered:

CA CRL validity
Role permissions
Audit for potentially unsafe parameters/APIs
Certificate volume

»Enhanced KMIP with PKCS#11 operations

In Vault 1.13 we have added support in the KMIP secrets engine (KMIP Asymmetric Key Lifecycle and Advanced Cryptographic Server profiles) for key PKCS#11 operations such as signature (sign/verify), random number generation (RNG), and MAC sign/verify. These should enable better KMIP/PKCS#11 integrations for Vault with a broader base of devices and software.

»Managed Transit keys

Customers with particularly conservative risk profiles require their keys to be created and stored within hardware security modules (HSMs) or an external key management system (KMS). We want to make it easier for organizations to leverage Vault across all sections of their infrastructure, including HSMs and KMS. Vault 1.13 introduces support for offloading Transit key cryptographic operations (encrypt, decrypt, MAC sign/verify, RNG generation) from Transit to an HSM or cloud KMS.
Organizations can now generate new Transit keys and key pairs using key material in an external HSM or KMS, and then use them for encryption and signatures on secrets in Vault. These features make it possible to centralize secrets and encryption management in Vault even in cases where Vault doesn’t support certain cryptographic algorithms (such as SEED/ARIA) required by an organization’s compliance policies.

»Enable rotate-root in Azure auth method

While the rotate-root endpoint is available in the Azure secrets engine, it had not been implemented in our Azure auth method. Customers who wanted to rotate the client secrets in their Azure auth method mounts on a routine basis had to do so manually: first rotate the client secret in Azure, then manually update the Azure auth configuration. At scale, this is impractical. Vault 1.13 adds rotate-root capabilities to the Azure auth method, allowing users to generate a new client secret for the root account defined in the config of Azure auth method mounts. The generated value will be known only by Vault.

»Upgrade details

This release also includes additional new features, workflow enhancements, general improvements, and bug fixes. The Vault 1.13 changelog lists all the updates. Please visit the Vault Release Highlights page for step-by-step tutorials demonstrating the new features.

Vault 1.13 introduces significant new functionality. As such, please review the Upgrading Vault page, as well as the Feature Deprecation Notice and Plans page, for further details. As always, we recommend upgrading and testing new releases in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault discussion forum.

As a reminder, if you believe you have found a security issue in Vault, please responsibly disclose it by emailing security@hashicorp.com — do not use the public issue tracker. For more information, please consult our security policy and our PGP key.

For more information about Vault Enterprise, visit hashicorp.com/products/vault. View the full article
  15. The introduction of 5G networking and its accompanying Service-Based Architecture (SBA) control plane brings a noteworthy shift: Instead of a traditional design consisting of proprietary signaling between physical, black-box components, SBA uses a commodity-like, microservice implementation that is increasingly cloud native, relying on standard RESTful APIs to communicate. This requires a reset in how carriers implement security, one where proven cloud concepts will likely play a significant role. This post will show how the HashiCorp suite of products, especially HashiCorp Vault’s PKI functionality, are well suited for SBAs and cater to a variety of 5G core use cases, with tight Kubernetes integrations and a focus on zero trust networking. These tools provide a strong foundation for 5G environments because many of the constructs included in SBA resemble a modern, zero trust service mesh. Vault in particular offers full PKI management and a low-resistance path for service providers seeking to minimize the software development effort required to achieve mTLS compliance.

»The New Face of Telecom Networking

The 3GPP standards body mandates a 5G mobile packet core based on discrete software components known as Network Functions (NF). The specifications clearly articulate requirements for NF communication pathways (known as reference points), protocols, service-based interfaces (SBI), and critically, how these network channels are secured.

SBI representation of a 5G service-based architecture

Orchestration platforms have opened up powerful integration, scaling, and locality opportunities for hosting and managing these NFs that were not possible in earlier manifestations of cellular technology. A mature 5G core could span multiple datacenters and public cloud regions, and scale to thousands of worker nodes. An entire Kubernetes cluster, for example, may be dedicated to the requirements of a single NF: internally, a function may consist of many pods, deployments, services, and other Kubernetes constructs. The SBI itself could be any network interface associated with an NF that is attached to the control plane network for the purpose of consuming and/or providing a service in accordance with the specification. The 5G SBA also brings new security challenges and opportunities.

»Securing Network Function Communication

Security architecture and procedures for 5G System (3GPP TS 33.501) is the document of record that details various security-related requirements within 5G SBA. Section 13.1.0 states:

All network functions shall support mutually authenticated TLS and HTTPS as specified in RFC 7540 [47] and RFC 2818 [90]. The identities in the end entity certificates shall be used for authentication and policy checks. Network functions shall support both server-side and client-side certificates. TLS client and server certificates shall be compliant with the SBA certificate profile specified in clause 6.1.3c of TS 33.310 [5].

mTLS is a fundamental requirement within the 5G SBA for securing SBI flows at the authentication level. But what about authorization? One NF in particular is especially crucial in the context of security: the Network Repository Function (NRF) is responsible for dynamically registering all SBA components as they come online, acting as a kind of service discovery mechanism that can be queried in order to locate healthy services.
In addition, the NRF has universal awareness of which functions should be permitted to freely communicate, issuing appropriately scoped OAuth2 tokens to each entity. These tokens authorize network flows between NFs, further securing the fabric.

NF authentication and authorization flow

There are two modes of service-to-service communication described in the 3GPP specifications. In the Direct Communication mode, NFs engage in service discovery and inter-function network operations as explained above. However, in the Indirect Communication mode, a Service Control Proxy (SCP) may optionally intercept flows and even broker discovery requests with the NRF on behalf of a consumer. Various SCP implementations can augment SBA service networking by introducing intelligent load balancing and failover, policy-based controls, and monitoring.

»If it Looks Like a Mesh, Walks Like a Mesh…

To summarize, the 5G SBA includes a number of broad technology constructs:

Microservice architecture based on Kubernetes
Hybrid-cloud/multi-cloud capabilities
Service discovery and load balancing
Network authentication via mTLS
OAuth2 token-based authorization
Optional proxy-based mode (policy and telemetry)

If this is starting to sound familiar, you’re not alone. While the indirect communication mode is optional (and does not specify a sidecar proxy), these elements combined closely resemble a modern, zero trust service mesh. Intentional or not, this emergent pattern could evolve towards the same architectural trends, platforms, and abstractions being adopted elsewhere in modern software. To that end, HashiCorp’s enterprise products cater to a variety of core 5G use cases, with tight Kubernetes integrations and a keen focus on zero trust networking:

HashiCorp Terraform: Builds reliable multi-cloud infrastructure and deploys complex workloads to Kubernetes using industry-standard infrastructure as code practices
HashiCorp Consul: Discovers services and secures networks through identity-based authorization
HashiCorp Vault: Protects sensitive data and delivers automated PKI at scale to achieve mTLS for authenticated SBI communications

HashiCorp Vault in particular presents an attractive solution for easily securing SBI flows with mTLS authentication. Vault is a distributed, highly available secrets management platform that can span multiple private and public cloud regions, accommodating a wide variety of SBA consumer personas and environments. Several replication options offer robust disaster recovery features, as well as increased performance through horizontal scaling.

Vault high-availability architecture

»Certificate Lifecycle Automation with Vault

The PKI functionality of Vault (one of many secret engines available) is powerful, comprehensive, and simple to implement. Vault supports an arbitrary number of Certificate Authorities (CAs) and Intermediates, which can be generated internally or imported from external sources such as hardware security modules (HSMs). Fully automated cross-signing capabilities create additional options for managing 5G provider trust boundaries and network topologies. Access to Vault itself must be authenticated. Thankfully, this is a Kubernetes-friendly operation that permits straightforward integration options for container-based NF workloads. Supported authentication methods include all of the major public cloud machine-identity systems, a per-cluster native Kubernetes option, and JWT-based authentication that incorporates seamlessly with the OIDC provider built into Kubernetes.
The JWT-based method is capable of scaling to support many clusters in parallel, utilizing the service account tokens that are projected to pods by default. Once successfully authenticated to Vault, a policy attached to the auth method dictates the client’s ability to access secrets within an engine. These policies can be highly granular based on a number of parameters, such as the client’s JWT token claims, Microsoft Azure Managed Identity, AWS Identity and Access Management (IAM) role, and more.

Vault logical flow from authentication to secret consumption

If a policy grants access to a PKI secrets engine, the client may request a certificate specifying certain parameters in the API request payload, such as:

Common name
Subject alternative names (SANs)
IP SANs
Expiry time

The allowed parameters of the request are constrained by a role object configured against the PKI engine, which outlines permitted domain names, maximum TTL, and additional enforcements for the associated certificate authority. An authenticated, authorized, and valid request results in the issuance of a certificate and private key, delivered back to the client in the form of a JSON payload, which can then be parsed and rendered to the pod filesystem as specified by the NF application’s requirements and configuration.

The processes described to authenticate and request certificates can be executed by API call from the primary container, an init container, or any of a number of custom solutions. To reduce the burden of developing unique strategies for each NF, organizations may instead choose to leverage the Vault Agent Injector for Kubernetes to automate the distribution of certificates. This solution consists of a mutating admission controller that intercepts lifecycle events and modifies the pod spec to include a Vault Agent sidecar container. Once configured, standard pod annotations can be used by operations teams to manage the PKI lifecycle, ensuring that certificates and private keys are rendered to appropriate filesystem locations, and are renewed prior to expiry, without ever touching the NF application code. The agent is additionally capable of executing arbitrary commands or API calls upon certificate renewal, which can be configured to include reloading a service or restarting a pod. The injector provides a low-resistance path for service providers seeking to minimize the software development effort required to achieve mTLS compliance.

Vault JWT Auth Method with Kubernetes as OIDC provider

Vault also integrates with Jetstack cert-manager, which grants the ability to configure Vault as a ClusterIssuer in Kubernetes and subsequently deliver certificates to Ingresses and other cluster objects. This approach can be useful if the SBI in question specifies a TLS-terminating Ingress Controller. Software vendors building 5G NFs may alternatively decide to incorporate Vault into their existing middleware or configuration automation via a more centralized API integration. For example, a service may already be in place to distribute certificates to pods within the NF ecosystem that have interfaces on the SBI message bus. This solution might rely on a legacy certificate procurement protocol such as CMPv2. Replacing this mechanism with simple HTTP API calls to Vault would not only be a relatively trivial effort, it would be a shift very much in the spirit of the 3GPP inclination towards standard RESTful, HTTP-based communications, and broader industry trends.
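For illustration, such a request is a single write to the PKI issue endpoint. Here is a minimal sketch using the Vault CLI (an equivalent HTTP API call exists); the role name, hostnames, IP, and TTL are hypothetical:

$ vault write pki/issue/sbi-services \
    common_name=nrf.5gc.example.internal \
    alt_names=nrf.svc.cluster.local \
    ip_sans=10.0.0.10 \
    ttl=24h

The JSON response contains the certificate, issuing CA chain, and private key, ready to be parsed and rendered to the pod filesystem by a sidecar, init container, or middleware service.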
»Working Together to Make 5G More Secure

HashiCorp accelerates cloud transformation for Telcos pursuing automation, operational maturity, and compliance for 5G networks. Join the HashiCorp Telco user group to stay up to date with recent developments, blog posts, talk tracks, and industry trends. Reach out to the HashiCorp Telco team at telco@hashicorp.com. View the full article
  16. We are pleased to announce the general availability of HashiCorp Vault 1.12. Vault provides secrets management, data encryption, and identity management for any application on any infrastructure. Vault 1.12 focuses on improving Vault’s core workflows as well as adding new features such as Redis and Amazon ElastiCache secrets engines, a new PKCS#11 provider, improved Transform secrets engine usability, updated resource quotas, expanded PKI revocation and telemetry capabilities, and much more. Key features and improvements in Vault 1.12 include:

PKCS #11 provider (Vault Enterprise): Added the Vault PKCS#11 provider, which enables the Vault KMIP secrets engine to be used via PKCS#11 calls. The provider supports a subset of key generation, encryption, decryption, and key storage operations.
Transparent Data Encryption for Oracle (Enterprise): Support for Vault to manage encryption keys for Transparent Data Encryption (TDE) with Oracle servers.
Transform secrets engine (Vault Enterprise): Added the ability to import externally generated keys for bring-your-own-key (BYOK) workflows, added MSSQL external storage support, and added support for encryption key auto-rotation via an auto_rotate_period option.
Resource quotas: Enhanced path and role-based resource quotas with support for API path suffixes and auth mount roles. For example, a trailing wildcard * can be added as part of the path, so auth/token/create* would match both auth/token/create and auth/token/create-orphan but not auth/token/lookup-self.
Versioned plugins: Added the concept of versions to plugins, making plugins “version-aware” and enabling release standardization and a better user experience when installing and upgrading plugins.
Namespace custom metadata (Vault Enterprise): Support for specifying custom metadata on namespaces was added. The new vault namespace patch command can be used to update existing namespaces with custom metadata as well.
OIDC provider interface update (UI): Our design and user research teams gathered community feedback and simplified the setup experience for using Vault as an OIDC provider. With just a few UI clicks, users can now have a default OIDC provider configured and ready to go.
Okta number challenge interface update (UI): Added support for Okta’s number challenge to the Vault UI. This enables users to complete the Okta number challenge from a UI, CLI, and API.
PKI secrets engine: We are improving Vault’s PKI engine revocation capabilities by adding support for the Online Certificate Status Protocol (OCSP) and a delta certificate revocation list (CRL) to track changes to the main CRL. These changes offer significant performance and data transfer improvements to revocation workflows.
PKI secrets engine telemetry: Support for additional telemetry metrics for better insights into certificate usage via the count of stored and revoked certificates. Vault’s tidy function was also enhanced with additional metrics that reflect the remaining stored and revoked certificates.
Redis secrets engine: Added a new database secrets engine that supports the generation of static and dynamic user roles and root credential rotation on a standalone Redis server. Huge thanks to Francis Hitchens, who contributed a repository to HashiCorp.
Amazon ElastiCache secrets engine: Added a new database secrets engine that generates static credentials for existing managed roles in Amazon ElastiCache.
LDAP secrets engine: Added a new LDAP secrets engine that unifies the user experience between the Active Directory (AD) secrets engine and OpenLDAP secrets engine. This new engine supports all implementations from both of the engines mentioned above (AD, LDAP, and RACF) and brings dynamic credential capabilities for users relying on Active Directory.
KMIP secrets engine (Vault Enterprise): Added support to the KMIP secrets engine for the operations and attributes in the Baseline Server profile, in addition to the already supported Symmetric Key Lifecycle Server and Basic Cryptographic Server profiles.

This release also includes additional new features, workflow enhancements, general improvements, and bug fixes. The Vault 1.12 changelog lists all the updates. Please visit the Vault Release Highlights page for step-by-step tutorials demonstrating the new features.

»PKI Secrets Engine Improvements

We are improving the Vault PKI engine’s revocation capabilities by adding support for the Online Certificate Status Protocol (OCSP) and a delta CRL to track changes to the main CRL. These enhancements significantly streamline the PKI engine, making the certificate revocation semantics easier to understand and manage. Additionally, support for automatic CRL rotation and periodic tidy operations helps reduce operator burden, alleviate the demand on cluster resources during periods of high revocation, and ensure clients are always served valid CRLs. Finally, support for bring-your-own-cert (BYOC) allows revocation of no_store=true certificates and, for proof-of-possession (PoP), allows end users to safely revoke their own certificates (with the corresponding private key) without operator intervention.

PKI and managed key support for RSA-PSS signatures: Since its initial release, Vault’s PKI secrets engine supported only RSA-PKCS#1v1.5 (public key cryptographic standards) signatures for issuers and leaves. To conform with guidance from the National Institute of Standards and Technology (NIST) around key transport and for compatibility with newer hardware security module (HSM) firmware, we have included support for RSA-PSS (probabilistic signature scheme) signatures. See the section on PSS Support in the PKI documentation for limitations of this feature.

PKI telemetry improvements: This release adds additional telemetry to Vault’s PKI secrets engine, enabling customers to gather better insights into certificate usage via the count of stored and revoked certificates. Additionally, the Vault tidy function is enhanced with additional metrics that reflect the remaining stored and revoked certificates.

Google Cloud Key Manager support: Managed keys let Vault secrets engines (currently PKI) use keys stored in cloud KMS systems for cryptographic operations like certificate signing. Vault 1.12 adds support for Google Cloud KMS to the managed key system, where previously only AWS, Microsoft Azure, and PKCS#11 HSMs were supported.

For more information, please see the PKI Secrets Engine documentation.

»PKCS #11 Provider

Software solutions often require cryptographic objects such as keys or X.509 certificates. Some external software must also perform operations including key generation, hashing, encryption, decryption, and signing. HSMs are traditionally used as a secure option but can be expensive and challenging to operationalize. Vault Enterprise 1.12 is a PKCS#11 2.40 compliant provider, extended profile. PKCS#11 is the standard protocol supported for integrating with HSMs.
It also has the operational flexibility and advantages of software for key generation, encryption, and object storage operations. The PKCS#11 provider in Vault 1.12 supports a subset of key generation, encryption, decryption, and key storage operations.

Protecting sensitive data at rest is a fundamental task for database administrators that enables many organizations to follow industry best practices and comply with regulatory requirements. With this feature, administrators of Oracle databases will now also be able to enable Transparent Data Encryption (TDE) for Oracle. TDE for Oracle performs real-time data and log file encryption and decryption transparently to end-user applications.

For more information, please see the PKCS#11 provider documentation.

»Transform Secrets Engine Enhancements

Transform is a Vault Enterprise feature that lets Vault use data transformations and tokenization to protect secrets residing in untrusted or semi-trusted systems. This includes protecting compliance-regulated data such as social security numbers and credit card numbers. Oftentimes, data must reside within file systems or databases for performance but must be protected in case the system in which it resides is compromised. Transform is built for these kinds of use cases. With this release, we added the ability to import externally generated keys for BYOK workflows, MSSQL external storage support, and support for encryption key auto-rotation via an auto_rotate_period option.

Bring your own key (BYOK): Added the ability to import externally generated keys to support use cases where there is a need to bring in an existing key from an HSM or other outside system. In release 1.11, we introduced BYOK support to Vault, enabling customers to import existing keys into the Vault Transit secrets engine and enabling secure and flexible Vault deployments. We are extending that support to the Vault Transform secrets engine in this release.

MSSQL support: An MSSQL store is now available to be used as an external storage engine with tokenization in the Transform secrets engine. Refer to the following documents: Transform Secrets Engine (API), Transform Secrets Engine, and Tokenization Transform for more information.

Key auto rotation: Periodic rotation of encryption keys is a recommended key management practice for a good security posture. In Vault 1.10, we added support for auto key rotation in the Transit secrets engine. In Vault 1.12, the Transform secrets engine has been enhanced to let users set the rotation policy during key creation in a time interval, which will cause Vault to automatically rotate the Transform keys when the time interval elapses. Refer to the Tokenization Transform and Transform Secrets Engine (API) documentation for more information.

For more information, please see the Transform Secrets Engine documentation.

»Other Vault 1.12 Features

Many new features in Vault 1.12 have been developed over the course of the 1.11.x releases. You can learn more about how to use these features in our detailed, hands-on HashiCorp Vault guides. You can consult the changelog for full details, but here are a few of the larger changes and deprecation notices:

Terraform provider for Vault: The Terraform provider uses Vault’s sys/seal-status endpoint to get the Vault server’s version, and then determines the correct features available for use.
Vault usage metrics: Enhanced the /sys/internal/counters API to support setting the end_date to the current month.
When this is done, the new_clients field will have the approximate number of new clients that came in for the current month.
Licensing enhancement (Vault Enterprise): Updated license termination behavior so that production licenses no longer have a termination date, which makes Vault more robust for Vault Enterprise customers.
AAD Graph on Azure Secrets Engine removed: We added a use_microsoft_graph_api configuration parameter for using the Microsoft Graph API, since the Azure Active Directory API is being removed.
X.509 certificates with SHA-1 signatures support removed: Please migrate off SHA-1 for certificate signing. Go (Golang) version 1.18 removes support for SHA-1 by default; however, you can set a Go environment variable to restore SHA-1 support if you need to continue using SHA-1 (supported until Go 1.19).
Standalone database engines impacted experience: If you use any standalone database engines, please migrate away from their usage. With this release, Vault will log error messages and shut down, and any attempts to add new mounts will result in an error. Please migrate to database secrets engines.
AppID impacted experience: If you use AppID, please migrate away from its usage. With this release, Vault will log error messages and shut down. Please migrate to the AppRole auth method.

»Upgrade Details

Vault 1.12 introduces significant new functionality. As such, please review the Upgrading Vault page, as well as the Feature Deprecation Notice and Plans page, for further details. As always, we recommend upgrading and testing new releases in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault discussion forum.

As a reminder, if you believe you have found a security issue in Vault, please responsibly disclose it by emailing security@hashicorp.com — do not use the public issue tracker. For more information, please consult our security policy and our PGP key.

For more information about Vault Enterprise, visit hashicorp.com/products/vault. You can download the open source version of Vault at vaultproject.io. We hope you enjoy HashiCorp Vault 1.12. View the full article