Search the Community
Showing results for tags 'secrets management'.
HCP Vault: Running Vault on HashiCorp Nomad, Part 1
HashiCorp posted a topic in Infrastructure-as-Code
Vault is a secrets management platform that provides encrypted storage for long lived secrets, identity brokerage using ephemeral credentials, and encryption as a service. Unless you’re using HashiCorp Cloud Platform to host Vault (which is always recommended if you can support it), deploying and running Vault clusters will likely be a manual process. Any time a server needs to be restarted, an engineer would need to login and restart the service. This is what orchestrators like HashiCorp Nomad and Kubernetes were built to automate. While Kubernetes has a wide array of components to manage additional use cases, Nomad is mainly focused on scheduling and cluster management. It’s simple to run, lightweight, and supports running VMs, containers, raw executables, JAR files, Qemu workloads, and more with custom task driver plugins. By running Vault as a Nomad job (Nomad’s term for workloads), operators can manage and schedule Vault servers with a low-complexity architecture. This post shows how to deploy and configure Vault servers on Nomad using HCP Terraform. The secrets consumption will be done using the Nomad and Vault CLI’s, respectively, to show the underlying workflows. The Terraform code will be split in two, with separate configuration for the infrastructure and the Vault deployment. This is done to manage the states for these workspaces separately and share dependency outputs between them. Deployment architecture This deployment architecture requires five virtual machines (VMs) — one is the Nomad server, and the other four are the Nomad clients that run the Vault servers, including a backup server for Vault. These VMs will be deployed to Amazon EC2 instances. The VMs will all live in the same virtual private cloud (VPC) subnet. HCP Terraform and directory setup Because this approach splits the architecture into multiple workspaces, you need to configure remote backends for each HCP Terraform workspace so that output dependencies can be shared between them. To create these workspaces, create a directory structure that contains a folder for each workspace. The directory structure should look like this: ├── 1-nomad-infrastructure ├── 2-nomad-configuration ├── 3-nomad-example-job-deployment 3 directories The remote backend is HCP Terraform. To create the remote backends, create a file called backend.tf in each of the directories. Here is a shell script that will create the directory structure and write the relevant backend.tf files in all of the directories. Networking for the Nomad cluster To create the infrastructure for Nomad, navigate to the 1-nomad-infrastructure directory. First, set up your AWS Terraform provider. Here is the provider.tf code. Once the provider is configured, you’re ready to deploy a VPC and a subnet. To do this, there is another file in the same directory, called network.tf, which contains the code below: module "vpc" { source = "terraform-aws-modules/vpc/aws" name = "my-vpc" cidr = "10.0.0.0/16" azs = ["eu-west-1a"] private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"] public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"] enable_nat_gateway = true enable_vpn_gateway = false enable_dns_support = true enable_dns_hostnames = true tags = { Terraform = "true" Environment = "dev" } }This code deploys all the resources required for a fully functional network, including resources for a working AWS VPC, associated subnets, and NAT gateways. 
It uses the community Terraform module called the AWS VPC Terraform module, available on the Terraform Registry. Configuration of Nomad servers Before you can write the Terraform code to deploy the five VMs, you need to write some shell scripts to configure the servers during their deployment as a prerequisite. The first is for the Nomad server called nomad-server.sh: #! /bin/bash -e # Install Nomad sudo apt-get update && \ sudo apt-get install wget gpg coreutils -y wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list sudo apt-get update && sudo apt-get install nomad -y # Create Nomad directory. mkdir -p /etc/nomad.d # Nomad configuration files cat < /etc/nomad.d/nomad.hcl log_level = "DEBUG" data_dir = "/etc/nomad.d/data" server { enabled = true bootstrap_expect = ${NOMAD_SERVER_COUNT} server_join { retry_join = ["provider=aws tag_value=${NOMAD_SERVER_TAG} tag_key=${NOMAD_SERVER_TAG_KEY}"] } } autopilot { cleanup_dead_servers = true last_contact_threshold = "200ms" max_trailing_logs = 250 server_stabilization_time = "10s" enable_redundancy_zones = false disable_upgrade_migration = false enable_custom_upgrades = false } EOF cat < /etc/nomad.d/acl.hcl acl = { enabled = true } EOF systemctl enable nomad systemctl restart nomadThis script does a number of things to configure the Nomad server: Installs the Nomad binary. Creates the Nomad directory that contains everything it needs to function. Creates a Nomad configuration file for the server and places it in the Nomad directory created in step 2. This configuration uses a feature called cloud auto-join that looks for pre-specified tags on the VM and automatically joins any VMs with these tags to a Nomad cluster. Enables access control lists (ACLs) for Nomad. Starts the Nomad service. This script runs on the Nomad server VM when deployed using cloud-init. Notice the script contains three variables for cloud auto-join: ${NOMAD_SERVER_COUNT}, ${NOMAD_SERVER_TAG_KEY}, and ${NOMAD_SERVER_TAG}. The values of these variables will be rendered by Terraform. To deploy this VM and run this script, use the file named compute.tf, which is in the directory mentioned above: data "aws_ami" "ubuntu" { most_recent = true filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"] } filter { name = "virtualization-type" values = ["hvm"] } owners = ["099720109477"] # Canonical }The code above uses a data source to locate the Amazon Machine Image (AMI) used to deploy the VM instance. The code below creates a security group attached to the VM, allowing SSH access to it on port 22: resource "aws_security_group" "ssh" { vpc_id = module.vpc.vpc_id name = "allow_ssh" ingress { from_port = 22 protocol = "tcp" to_port = 22 cidr_blocks = [ "0.0.0.0/0" ] } tags = { Name = "allow_ssh" } }The following code creates another security group that allows access to Nomad’s default port, which is port 4646. 
This is attached to the Nomad server VM: resource "aws_security_group" "nomad" { vpc_id = module.vpc.vpc_id name = "nomad_port" ingress { from_port = 4646 protocol = "tcp" to_port = 4648 cidr_blocks = [ "0.0.0.0/0" ] } tags = { Name = "nomad" } }The next piece of code creates a security group to allow egress connections outside of the network: resource "aws_security_group" "egress" { vpc_id = module.vpc.vpc_id name = "egress" egress { from_port = 0 protocol = "-1" to_port = 0 cidr_blocks = [ "0.0.0.0/0" ] } tags = { Name = "egress" } }The code below creates the final security group of this walkthrough that allows access to Vault on its default port of 8200. This will be attached to the Nomad clients that will run Vault: resource "aws_security_group" "vault" { vpc_id = module.vpc.vpc_id name = "vault" ingress { from_port = 8200 protocol = "tcp" to_port = 8201 cidr_blocks = [ "0.0.0.0/0" ] } tags = { Name = "vault" } }The code below creates an IP address for the Nomad server in the first resource and associates it with the Nomad server VM in the second. Terraform will not perform the association until the Nomad server VM has been deployed. The IP address is deployed first because it is used in the Nomad config file that is deployed as part of the VM provisioning to enable the OIDC discovery URL on the Nomad server. resource "aws_eip" "nomad_server" { tags = { Name = "Nomad Server" } } resource "aws_eip_association" "nomad_server" { instance_id = aws_instance.nomad_servers.id allocation_id = aws_eip.nomad_server.id }The other prerequisite for deploying the VM for the Nomad server is an SSH key pair, which enables authentication to the VM when connecting via SSH: resource "aws_key_pair" "deployer" { key_name = "deployer-key" public_key = file(var.ssh_key) } The code below deploys the VM for the Nomad server: resource "aws_instance" "nomad_servers" { ami = data.aws_ami.ubuntu.id instance_type = "t3.micro" subnet_id = module.vpc.public_subnets.0 key_name = aws_key_pair.deployer.key_name user_data = templatefile("./servers.sh", { NOMAD_SERVER_TAG = "true" NOMAD_SERVER_TAG_KEY = "nomad_server" NOMAD_SERVER_COUNT = 1 NOMAD_ADDR = aws_eip.nomad_server.public_ip }) vpc_security_group_ids = [ aws_security_group.ssh.id, aws_security_group.egress.id, aws_security_group.nomad.id ] lifecycle { ignore_changes = [ user_data, ami ] } tags = { Name = "Nomad Server" nomad_server = true } }Some points to note about this code: This resource usescloud-init to deploy the script to the VM that installs the dependent packages and configure the server for Nomad. The script is rendered using the templatefile function and populates the variable values in the script template with the values specified in the above resource: NOMAD_SERVER_TAG: for cloud auto-join NOMAD_SERVER_TAG_KEY: for cloud auto-join NOMAD_SERVER_COUNT: to specify how many servers Nomad expects to join the cluster NOMAD_ADDR: to configure Nomad’s OIDC discovery URL The data source used to obtain the AMI will always fetch the latest version. This means the VM could be deployed more if desired. The lifecycle block ignores changes specifically related to the AMI ID. It associates the security groups created above to the VM. It also adds the SSH key pair to the VM to aid in SSH authentication. This is useful for troubleshooting. Configuring Nomad clients for the Vault servers To deploy the Nomad clients for Vault, you take a similar approach to deploying the server. The main difference is the cloud-init script and the number of servers deployed. 
The script below (client.sh) is used to configure each Nomad client: #! /bin/bash -e # Install the CNI Plugins curl -L https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz -o /tmp/cni.tgz mkdir -p /opt/cni/bin tar -C /opt/cni/bin -xzf /tmp/cni.tgz # Install Nomad sudo apt-get update && \ sudo apt-get install wget gpg coreutils -y wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list sudo apt-get update && sudo apt-get install nomad -y # Create Nomad directory. mkdir -p /etc/nomad.d # Create Vault directory. mkdir -p /etc/vault.d # Add Docker's official GPG key: sudo apt-get update sudo apt-get install ca-certificates curl -y sudo install -m 0755 -d /etc/apt/keyrings sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc sudo chmod a+r /etc/apt/keyrings/docker.asc # Add the repository to Apt sources: echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update # Install Docker sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y # Install Java sudo apt install default-jre -y # Nomad configuration files cat < /etc/nomad.d/nomad.hcl log_level = "DEBUG" data_dir = "/etc/nomad.d/data" client { enabled = true node_pool = "vault-servers" node_class = "vault-servers" server_join { retry_join = ["${NOMAD_SERVERS_ADDR}"] } host_volume "/etc/vault.d" { path = "/etc/vault.d" read_only = false } } plugin "docker" { config { allow_privileged = true } } autopilot { cleanup_dead_servers = true last_contact_threshold = "200ms" max_trailing_logs = 250 server_stabilization_time = "10s" enable_redundancy_zones = false disable_upgrade_migration = false enable_custom_upgrades = false } EOF cat < /etc/nomad.d/acl.hcl acl = { enabled = true } EOF systemctl enable nomad systemctl restart nomadThe script above is similar to that of the Nomad server. One key difference is the Nomad client configuration file, which includes the following configuration items: Node pool: This Nomad feature can group pieces of compute infrastructure together. In this case, you want dedicated servers to run your Vault cluster to reduce the security blast radius. Grouping these servers lets you specify what node pool the Vault servers should run on and create policies around this to prevent other Nomad jobs from being deployed to these client nodes. Host volume: Vault can store encrypted secrets that other authorized Nomad jobs can retrieve. This means that Vault is a stateful workload that requires persistent storage. Host volumes expose a volume on the VM to Nomad and lets you mount the volume to a Nomad job. This means that if a job is restarted, it can still access its data. Docker plugin: The Docker plugin is enabled because Vault will be run in a container as a Nomad job. This has the potential to ease the upgrade paths by simply changing the tag on the image used. Before deploying the Nomad clients, you need to perform a quick health check on the Nomad server to ensure it is available for clients to join it. 
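If you want to run this health check by hand first, for example while troubleshooting, the leader endpoint that the Terraform check below relies on can be queried directly with curl. This is a minimal sketch, assuming the Nomad server's Elastic IP is reachable on port 4646; the IP address shown is only a placeholder:

# Point curl at the Nomad server (placeholder IP for the Elastic IP)
export NOMAD_ADDR="http://203.0.113.10:4646"

# A healthy cluster returns the leader's RPC address, e.g. "10.0.101.15:4647"
curl -s "$NOMAD_ADDR/v1/status/leader"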
To check this, use TerraCurl to make an API call to the Nomad server to check its status: resource "terracurl_request" "nomad_status" { method = "GET" name = "nomad_status" response_codes = [200] url = "http://${aws_eip.nomad_server.public_ip}:4646/v1/status/leader" max_retry = 4 retry_interval = 10 depends_on = [ aws_instance.nomad_servers, aws_eip_association.nomad_server ] }This checks that Nomad has an elected leader in the cluster and expects a 200 response. If it does not get the desired response, TerraCurl will retry every 10 seconds for a maximum of 4 retries. This prevents potential race conditions between the Nomad server and the clients’ provisioning. Now you’re ready to deploy the Nomad client VMs. This is similar to the server deployed before, with a few key differences: Vault requires more compute power so the instance type is bigger. It uses the count feature because you need three Vault nodes. The script rendered by the templatefile function needs only one variable value this time (the Nomad server address). resource "aws_instance" "nomad_clients" { count = 3 ami = data.aws_ami.ubuntu.id instance_type = "t3.medium" subnet_id = module.vpc.public_subnets.0 key_name = aws_key_pair.deployer.key_name associate_public_ip_address = true user_data = templatefile("./clients.sh", { NOMAD_SERVERS_ADDR = "${aws_instance.nomad_servers.private_ip}" }) vpc_security_group_ids = [ aws_security_group.ssh.id, aws_security_group.egress.id, aws_security_group.nomad.id, aws_security_group.vault.id ] tags = { Name = "Vault on Nomad Client ${count.index + 1}" nomad_server = false } lifecycle { ignore_changes = [ user_data, ami ] } depends_on = [ terracurl_request.nomad_status ] }Configuration of the Nomad client for the Vault backup server The next piece of infrastructure to deploy is the VM used as the Vault backup server. The server makes backups of the Vault cluster. It’s best practice to store backups away from the Vault cluster, so create a separate node pool for the backup server. Run the vault-backup-server.sh script below, which is located in the same directory you’ve been working in so far: #! /bin/bash -e # Install the CNI Plugins curl -L https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz -o /tmp/cni.tgz mkdir -p /opt/cni/bin tar -C /opt/cni/bin -xzf /tmp/cni.tgz # Install Nomad sudo apt-get update && \ sudo apt-get install wget gpg coreutils -y wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list sudo apt-get update && sudo apt-get install nomad -y # Create Nomad directory. mkdir -p /etc/nomad.d # Install Vault sudo apt-get install vault -y # Create Vault directory. mkdir -p /etc/vault.d # Add Docker's official GPG key: sudo apt-get update sudo apt-get install ca-certificates curl -y sudo install -m 0755 -d /etc/apt/keyrings sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc sudo chmod a+r /etc/apt/keyrings/docker.asc # Add the repository to Apt sources: echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ $(. 
/etc/os-release && echo "$VERSION_CODENAME") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update # Install Docker sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y # Install Java sudo apt install default-jre -y # Nomad configuration files cat < /etc/nomad.d/nomad.hcl log_level = "DEBUG" data_dir = "/etc/nomad.d/data" client { enabled = true node_pool = "vault-backup" node_class = "vault-backup" server_join { retry_join = ["${NOMAD_SERVERS_ADDR}"] } host_volume "vault_vol" { path = "/etc/vault.d" read_only = false } } plugin "docker" { config { allow_privileged = true } } autopilot { cleanup_dead_servers = true last_contact_threshold = "200ms" max_trailing_logs = 250 server_stabilization_time = "10s" enable_redundancy_zones = false disable_upgrade_migration = false enable_custom_upgrades = false } EOF cat < /etc/nomad.d/acl.hcl acl = { enabled = true } EOF systemctl restart nomadNext, you need to actually deploy the VM using the code below, which is added to the compute.tf file: resource "aws_instance" "nomad_clients_vault_backup" { count = 1 ami = data.aws_ami.ubuntu.id instance_type = "t3.medium" subnet_id = module.vpc.public_subnets.0 key_name = aws_key_pair.deployer.key_name associate_public_ip_address = true user_data = templatefile("./vault-backup-server.sh", { NOMAD_SERVERS_ADDR = "${aws_instance.nomad_servers.private_ip}" }) vpc_security_group_ids = [ aws_security_group.ssh.id, aws_security_group.egress.id, aws_security_group.nomad.id, aws_security_group.vault.id ] tags = { Name = "Vault backup server" nomad_server = false } lifecycle { ignore_changes = [ user_data, ami ] } depends_on = [ terracurl_request.nomad_status ] }Nomad ACL configuration As part of the Nomad server configuration deployed by the cloud-init script, ACLs were enabled. This means that an access token is required before any actions can be performed. As this is a new install of Nomad, a token does not exist yet. To bootstrap Nomad’s ACL system with the initial management token, you can use Nomad’s API and TerraCurl. TerraCurl is useful in this scenario because the Nomad Terraform provider does not support this bootstrapping functionality. You can write the TerraCurl resource to bootstrap the Nomad ACL system. There is a file in the 1-nomad-infrastructure directory called nomad.tf that includes the following code: resource "terracurl_request" "bootstrap_acl" { method = "POST" name = "bootstrap" response_codes = [200, 201] url = "http://${aws_instance.nomad_servers.public_ip}:4646/v1/acl/bootstrap" }This code makes a POST API call to the Nomad server using the public IP address that was assigned to the VM during creation. It uses Terraform’s interpolation and joins that to the Nomad API endpoint /v1/acl/bootstrap to make the call. This resource tells Terraform to expect either a 200 or 201response code from the API call or Terraform will fail. The response body from the API call is stored in state. Terraform outputs In order to provide some of the computed values to other workspaces, you need to output them. 
To do this, create a file called outputs.tf in the same directory as above and insert the following code: output "nomad_server_public_ip" { value = aws_eip.nomad_server.public_ip } output "nomad_server_private_ip" { value = aws_instance.nomad_servers.private_ip } output "nomad_clients_private_ips" { value = aws_instance.nomad_clients.*.private_ip } output "nomad_clients_public_ips" { value = aws_instance.nomad_clients.*.public_ip } output "terraform_management_token" { value = nomad_acl_token.terraform.secret_id sensitive = true } output "nomad_ui" { value = "http://${aws_eip.nomad_server.public_ip}:4646" }The directory structure should now look like this: ├── 1-nomad-infrastrcuture │ ├── anonymous-policy.hcl │ ├── backend.tf │ ├── client-policy.hcl │ ├── clients.sh │ ├── compute.tf │ ├── network.tf │ ├── nomad-client-vault-backup.sh │ ├── nomad.tf │ ├── outputs.tf │ ├── providers.tf │ ├── servers.sh │ └── variables.tf ├── 2-nomad-configuration │ ├── backend.tf └── 3-nomad-job-example-deployment └── backend.tf 3 directories, 14 files Now you can run terraform plan and terraform apply to create these resources. In the next blog This blog post showed how to deploy the infrastructure required to run Vault on Nomad. It covered some Terraform directory structure concepts and how they relate to workspaces. It also covered deploying and configuring Nomad, as well as Nomad ACLs. Part 2 of this blog series will look at deploying Vault as a Nomad job and configuring it, while Part 3 will explore deploying some automation to assist in the day-to-day operations of Vault. View the full article -
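For reference, here is one way the later workspaces can consume the outputs defined above. This is a sketch only, assuming both workspaces live in the same HCP Terraform organization; the organization name is a placeholder, and Part 2 of the series covers the actual Vault deployment:

# In 2-nomad-configuration: read outputs from the infrastructure workspace
data "terraform_remote_state" "nomad_infrastructure" {
  backend = "remote"

  config = {
    organization = "example-org"        # placeholder HCP Terraform organization
    workspaces = {
      name = "1-nomad-infrastructure"
    }
  }
}

# Example usage: configure the Nomad provider from those outputs
provider "nomad" {
  address   = data.terraform_remote_state.nomad_infrastructure.outputs.nomad_ui
  secret_id = data.terraform_remote_state.nomad_infrastructure.outputs.terraform_management_token
}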
Kubernetes has transformed container Orchestration, providing an effective framework for delivering and managing applications at scale. However, efficient storage management is essential to guarantee the dependability, security, and efficiency of your Kubernetes clusters. Benefits like data loss prevention, regulations compliance, and maintaining operational continuity mitigating threats underscore the importance of security and dependability. This post will examine the best practices for the top 10 Kubernetes storage, emphasizing encryption, access control, and safeguarding storage components. Kubernetes Storage Kubernetes storage is essential to contemporary cloud-native setups because it makes data persistence in containerized apps more effective. It provides a dependable and scalable storage resource management system that guarantees data permanence through migrations and restarts of containers. Among other capabilities, persistent Volumes (PVs) and Persistent Volume Claims (PVCs) give Kubernetes a versatile abstraction layer for managing storage. By providing dynamic provisioning of storage volumes catered to particular workload requirements, storage classes further improve flexibility. Organizations can build and manage stateful applications with agility, scalability, and resilience in various computing settings by utilizing Kubernetes storage capabilities. 1. Data Encryption Sensitive information kept in Kubernetes clusters must be protected with data encryption. Use encryption tools like Kubernetes Secrets to safely store sensitive information like SSH keys, API tokens, and passwords. Encryption both in transit and at rest is also used to further protect data while it is being stored and transmitted between nodes. 2. Use Secrets Management Tools Steer clear of hardcoding private information straight into Kubernetes manifests. Instead, use powerful secrets management solutions like Vault or Kubernetes Secrets to securely maintain and distribute secrets throughout your cluster. This guarantees that private information is encrypted and available only to approved users and applications. 3. Implement Role-Based Access Control (RBAC) RBAC allows you to enforce fine-grained access controls on your Kubernetes clusters. Define roles and permissions to limit access to storage resources using the least privilege concept. This lowers the possibility of data breaches and unauthorized access by preventing unauthorized users or apps from accessing or changing crucial storage components. 4. Secure Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) Ensure that claims and persistent volumes are adequately secured to avoid tampering or unwanted access. Put security rules in place to limit access to particular namespaces or users and turn on encryption for information on persistent volumes. PVs and PVCs should have regular audits and monitoring performed to identify and address any security flaws or unwanted entry attempts. 5. Enable Network Policies To manage network traffic between pods and storage resources, use Kubernetes network policies. To guarantee that only authorized pods and services may access storage volumes and endpoints, define firewall rules restricting communication to and from storage components. This reduces the possibility of data exfiltration and network-based assaults and prevents unauthorized network access. 6. Enable Role-Based Volume Provisioning Utilize Kubernetes’ dynamic volume provisioning features to automate storage volume creation and management. 
To limit users’ ability to build or delete volumes based on their assigned roles and permissions, utilize role-based volume provisioning. This guarantees the effective and safe allocation of storage resources and helps prevent resource abuse. 7. Utilize Pod Security Policies To specify and implement security restrictions on pods’ access to storage resources, implement pod security policies. To manage pod rights, host resource access, and storage volume interactions, specify security policies. By implementing stringent security measures, you can reduce the possibility of privilege escalation, container escapes, and illegal access to storage components. 8. Regularly Update and Patch Kubernetes Components Monitor security flaws by regularly patching and updating Kubernetes components, including storage drivers and plugins. Keep your storage infrastructure safe from new attacks and vulnerabilities by subscribing to security advisories and adhering to best practices for Kubernetes cluster management. 9. Monitor and Audit Storage Activity To keep tabs on storage activity in your Kubernetes clusters, put extensive logging, monitoring, and auditing procedures in place. To proactively identify security incidents or anomalies, monitor access logs, events, and metrics on storage components. Utilize centralized logging and monitoring systems to see what’s happening with storage in your cluster. 10. Conduct Regular Security Audits and Penetration Testing Conduct comprehensive security audits and penetration tests regularly to evaluate the security posture of your Kubernetes storage system. Find and fix any security holes, incorrect setups, and deployment flaws in your storage system before hackers can exploit them. Work with security professionals and use automated security technologies to thoroughly audit your Kubernetes clusters. Considerations Before putting suggestions for Kubernetes storage into practice, take into account the following: Evaluate Security Requirements: Match storage options with compliance and corporate security requirements. Assess Performance Impact: Recognize the potential effects that resource usage and application performance may have from access controls, encryption, and security rules. Identify Roles and Responsibilities: Clearly define who is responsible for what when it comes to managing storage components in Kubernetes clusters. Plan for Scalability: Recognize the need for scalability and the possible maintenance costs related to implementing security measures. Make Monitoring and upgrades a Priority: To ensure that security measures continue to be effective over time, place a strong emphasis on continual monitoring, audits, and upgrades. Effective storage management is critical for ensuring the security, reliability, and performance of Kubernetes clusters. By following these ten best practices for Kubernetes storage, including encryption, access control, and securing storage components, you can strengthen the security posture of your Kubernetes environment and mitigate the risk of data breaches, unauthorized access, and other security threats. Stay proactive in implementing security measures and remain vigilant against emerging threats to safeguard your Kubernetes storage infrastructure effectively. The post Mastering Kubernetes Storage: 10 Best Practices for Security and Efficiency appeared first on Amazic. View the full article
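To make practices 3 and 5 from the list above concrete, here is a minimal sketch of a namespaced Role that grants read-only access to PersistentVolumeClaims, plus a NetworkPolicy that only lets backup pods reach the pods serving storage. All names, namespaces, and labels are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-reader              # illustrative name
  namespace: storage-demo       # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-storage-access
  namespace: storage-demo
spec:
  podSelector:
    matchLabels:
      app: storage-service      # pods that expose the storage endpoint
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backup       # only backup pods may connect
      ports:
        - protocol: TCP
          port: 443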
Tagged with: kubernetes, storage (and 11 more)
Most Kubernetes resources and workloads reference the Kubernetes Secret object for credentials, API tokens, certificates, and other confidential data. Kubernetes stores secrets unencrypted by default and requires role-based access control (RBAC) rules to ensure least-privilege access. However, it does not offer a straightforward method for tracking the lifecycle and distribution of the secret. Rather than store secrets in Kubernetes, you can use a centralized secrets management solution like HashiCorp Cloud Platform (HCP) Vault Secrets to audit and manage secrets. This post demonstrates how to use the Vault Secrets Operator (VSO) to retrieve dynamic secrets from HCP Vault Secrets and write them to a Kubernetes Secret for other workloads and resources to reference. HCP Vault Secrets stores a set of secrets and supports the management and audit of their lifecycle and distribution. The demo app repository for this tutorial uses HashiCorp Terraform to write the secrets into HCP Vault Secrets, deploy the Vault Secrets Operator to the Kubernetes cluster, and deploy the custom resources to synchronize a secret for Argo CD to create a private repository. This workflow minimizes the need to refactor applications to access a secrets manager directly by using native Kubernetes Secrets. Store secrets in HCP Vault Secrets HCP Vault Secrets enables you to manage the lifecycle of credentials and track their usage. For example, a GitOps tool like Argo CD requires credentials to access a private repository on GitHub. These credentials may include a private key, username and password, or token to allow Argo CD to read from a repository. If you set up the GitHub App, you need to store the application identifier and private key for Argo CD to use. Storing the credentials in HCP Vault Secrets and installing the Vault Secrets Operator ensures the credentials get synchronized with a Kubernetes Secret, which Argo CD will reference. The configuration example below uses the HCP provider for Terraform to store the GitHub App’s private key, application identifier, and application installation identifier in a HCP Vault Secrets application named argocd: resource "hcp_vault_secrets_app" "argocd" { app_name = "argocd" description = "Secrets related to running Argo CD on Kubernetes" } resource "hcp_vault_secrets_secret" "argocd_github_app_id" { app_name = hcp_vault_secrets_app.argocd.app_name secret_name = "githubAppID" secret_value = var.argocd_github_app.id } resource "hcp_vault_secrets_secret" "argocd_github_app_installation_id" { app_name = hcp_vault_secrets_app.argocd.app_name secret_name = "githubAppInstallationID" secret_value = var.argocd_github_app.installation_id } resource "hcp_vault_secrets_secret" "argocd_github_app_private_key" { app_name = hcp_vault_secrets_app.argocd.app_name secret_name = "githubAppPrivateKey" secret_value = base64decode(var.argocd_github_app.private_key) } resource "hcp_vault_secrets_secret" "argocd_github_url" { app_name = hcp_vault_secrets_app.argocd.app_name secret_name = "url" secret_value = var.argocd_github_app.url }After applying the Terraform configuration, you can access the secrets in HCP Vault Secrets under the argocd application. You then need to synchronize these secrets from the argocd HCP Vault Secrets application into Kubernetes for Argo CD to reference. Install Vault Secrets Operator Vault Secrets Operator helps synchronize secrets from HCP Vault Secrets or Vault into Kubernetes Secrets. 
The Operator handles the creation of custom resources that define the authentication to, and retrieval of, secrets from HCP Vault Secrets. Here’s how to install Vault Secrets Operator using its Helm chart: $ helm repo add hashicorp https://helm.releases.hashicorp.com$ helm install -n vault-secrets-operator --create-namespace \ vault-secrets-operator hashicorp/vault-secrets-operatorThe operator’s Helm chart includes a set of custom resource definitions for authenticating to HCP Vault Secrets. Authenticate to HCP Vault Secrets from Kubernetes The Vault Secrets Operator needs a service principal to authenticate to HCP Vault Secrets. You can use Terraform to create a separate service principal with the viewer role (read-only access) that gives the operator read access to the secret: resource "hcp_service_principal" "argocd" { name = "argocd" } resource "hcp_service_principal_key" "argocd" { service_principal = hcp_service_principal.argocd.resource_name } resource "hcp_project_iam_binding" "argocd" { project_id = var.hcp_project_id principal_id = hcp_service_principal.argocd.resource_id role = "roles/viewer" }Save the HCP principal’s client ID and key to a Kubernetes Secret in the argocd namespace: apiVersion: v1 data: clientID: REDACTED clientSecret: REDACTED kind: Secret metadata: name: hvs-service-principal namespace: argocd type: OpaqueThe Vault Secrets Operator refers to the HCPAuth resource to authenticate to an HCP project with the read-only service principal you created in the argocd namespace: apiVersion: secrets.hashicorp.com/v1beta1 kind: HCPAuth metadata: name: default namespace: argocd spec: method: servicePrincipal organizationID: HCP_ORG_ID projectID: HCP_PROJECT_ID servicePrincipal: secretRef: hvs-service-principalAfter deploying the HCPAuth resource to the cluster, you can now define a resource to synchronize secrets from HCP Vault Secrets to Kubernetes. Sync secrets from HCP Vault Secrets Use the HCPVaultSecretsApp resource to define the secrets VSO synchronizes from Vault to Kubernetes. You can define a destination Kubernetes Secret for the credentials and each key in HCP Vault Secrets will map to a key in the secret. If the name of the HCP Vault Secrets key does not match the required Kubernetes secret key you need for a workload, you can configure transformations for each key in the secret. VSO will also refresh the secret on an interval defined in the refreshAfter attribute. For example, Argo CD creates private repositories by scanning for Kubernetes Secrets with the argocd.argoproj.io/secret-type label. The HCPVaultSecretsApp resource for this tutorial’s GitHub repository includes that label in the destination. It also reads each secret from HCP Vault Secrets and maps it to the keys required by Argo CD, such as githubAppID and githubAppPrivateKey. The repository secret for Argo CD also requires the type key, which is set to git. 
apiVersion: secrets.hashicorp.com/v1beta1 kind: HCPVaultSecretsApp metadata: name: github-creds namespace: argocd spec: appName: argocd destination: create: true labels: argocd.argoproj.io/secret-type: repo-creds hvs: "true" name: github-creds overwrite: false transformation: templates: githubAppID: name: githubAppID text: '{{- get .Secrets "githubAppID" -}}' githubAppInstallationID: name: githubAppInstallationID text: '{{- get .Secrets "githubAppInstallationID" -}}' githubAppPrivateKey: name: githubAppPrivateKey text: '{{- get .Secrets "githubAppPrivateKey" -}}' type: name: type text: git url: name: url text: '{{- get .Secrets "url" -}}' hcpAuthRef: default refreshAfter: 1hWhile the key names in HCP Vault Secrets do match the required keys for Argo CD, you add the transformations to demonstrate the value of re-mapping secrets and adding required fields, such as the type. In general, use the transformation field to create a Kubernetes Secret that conforms to the expected schema of any resource that uses it. Once you apply the resource, VSO creates a Kubernetes Secret named github-creds with the fields and values defined in the transformation. $ kubectl get secrets -n argocd github-creds -o yamlapiVersion: v1 data: _raw: REDACTED githubAppID: ODU4OTMx githubAppInstallationID: NDg2Mzg5OTI= githubAppPrivateKey: REDACTED type: Z2l0 url: REDACTED kind: Secret metadata: labels: app.kubernetes.io/component: secret-sync app.kubernetes.io/managed-by: hashicorp-vso app.kubernetes.io/name: vault-secrets-operator argocd.argoproj.io/secret-type: repo-creds hvs: "true" name: github-creds namespace: argocd ownerReferences: - apiVersion: secrets.hashicorp.com/v1beta1 kind: HCPVaultSecretsApp name: github-creds uid: 729d7860-0065-4802-b892-dffbe15bbffb type: OpaqueArgo CD recognizes the secret because of the argocd.argoproj.io/secret-type: repo-credslabel. It creates a repository resource linked to the repository URL and GitHub App. To verify any changes or review access to the secret, you can use the activity logs for the application in HCP Vault Secrets. The activity logs in HCP Vault Secrets indicate that the argocd service principal used by VSO has listed secrets under the argocd application. If you create a new GitHub App and update the secrets in HCP Vault Secrets, VSO updates the github-creds secret with the new application IDs and private keys the next time it refreshes the secret. Argo CD updates the repository resource to use the new secrets without disrupting the repository connection. If you need to make changes to many GitHub Apps or credentials, you can update them all in HCP Vault Secrets without searching through Kubernetes namespaces and clusters. Learn more If you currently store credentials in Kubernetes Secrets, you can copy them to HCP Vault Secrets and create resources to synchronize them into your Kubernetes cluster. This process avoids significant refactoring of Kubernetes workloads and lets you manage and track the secret lifecycles in a central location. While the example demonstrates how to synchronize secrets for Argo CD, you can use this pattern for other Kubernetes workloads and resources. To get started, you can: Sign up for HCP to start using HCP Vault Secrets. Review our documentation to learn more about HCP Vault Secrets and check out our tutorials on using HCP Vault Secrets with Kubernetes. Find a complete list of available sources for secrets syncing in the Vault Secrets Operator documentation. Learn how to use Vault to manage API tokens for the Terraform Cloud Operator. 
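Although Argo CD discovers github-creds purely through its label, the same synced Secret can be consumed by any ordinary workload with a standard secretKeyRef, which is the "no refactoring" benefit mentioned at the start of this post. A minimal sketch; the Deployment name, image, and environment variable are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: repo-tool                       # illustrative workload
  namespace: argocd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: repo-tool
  template:
    metadata:
      labels:
        app: repo-tool
    spec:
      containers:
        - name: repo-tool
          image: example/repo-tool:1.0  # illustrative image
          env:
            - name: GITHUB_APP_ID
              valueFrom:
                secretKeyRef:
                  name: github-creds    # Secret maintained by the Vault Secrets Operator
                  key: githubAppID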
View the full article
Explore how Akeyless Vaultless Secrets Management integrates with the Kubernetes Secrets Store CSI Driver to enhance security and streamline secrets management in your Kubernetes environment. The post Enhancing Kubernetes Secrets Management with Akeyless and CSI Driver Integration appeared first on Akeyless. View the full article
Tagged with: kubernetes, secrets (and 4 more)
AWS Systems Manager Parameter Store Overview: Part of AWS Systems Manager, it offers secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values. Features: Secure storage of secrets and configuration data, integration with AWS Identity and Access Management (IAM) for access control, history tracking of parameter changes, and the ability to reference AWS Secrets Manager secrets. AWS Secrets Manager Overview: Specifically designed for managing secrets, AWS Secrets Manager enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Features: Secret rotation with built-in support for RDS, DocumentDB, and Redshift, automatic integration with AWS services, encryption using AWS Key Management Service (KMS), and detailed audit trails via AWS CloudTrail. AWS Config Overview: Provides a detailed view of the configuration of AWS resources in your account, including how resources are related to one another and how they were configured in the past. This service facilitates compliance auditing, security analysis, resource change tracking, and troubleshooting. Features: Continuous monitoring and history of AWS resource configurations, predefined rules for evaluating the configurations of AWS resources, and the ability to create custom rules. AWS AppConfig Overview: Part of AWS Systems Manager, AppConfig helps you manage, validate, and deploy application configurations. It allows you to separate your application code from its configuration, helping you increase application resilience and deployment agility. Features: Validation of configurations against a schema or a set of tests before deployment, deployment strategies for controlled rollout, and integration with AWS CloudWatch for monitoring. Key Practices for Managing Configurations and Secrets in AWS: Principle of Least Privilege: Ensure that only necessary permissions are granted for reading and writing secrets or configurations. Secrets Rotation: Use AWS Secrets Manager for automated secrets rotation, which is crucial for maintaining security. Environment Separation: Separate your environments (e.g., production, development, and testing) within your AWS account to prevent accidental access or changes to sensitive information. Audit and Monitoring: Utilize AWS CloudTrail and AWS Config for auditing and monitoring changes to configurations and secrets. This can help in identifying unauthorized access or non-compliant configurations. Encryption: Use encryption for data in transit and at rest. AWS services like Secrets Manager and Parameter Store encrypt the data at rest using KMS. Google Cloud: Secret Manager: Google Cloud’s Secret Manager provides a secure and convenient way to store and manage API keys, passwords, certificates, and other sensitive data. It’s designed to centralize the storage of application secrets, which helps in maintaining a consistent and secure environment across your applications. Cloud Key Management Service (KMS): While primarily focused on managing cryptographic keys, Google Cloud KMS can also be used for storing small amounts of encrypted data such as secrets or configuration values. Azure: Azure Key Vault: Azure Key Vault is a tool for securely storing and accessing secrets, such as API keys, passwords, certificates, or cryptographic keys. 
It offers secure secret management and key storage capabilities, ensuring that sensitive information is protected and accessible only to authorized applications and users. Azure App Configuration: Specifically designed for managing application settings and feature flags, Azure App Configuration provides a centralized service to manage application settings and control their access. It can be used in conjunction with Azure Key Vault for a comprehensive solution to configuration and secrets management. The post Secrets Manager: Managing configurations, secrets, and parameters in AWS, Azure and Google Cloud appeared first on DevOpsSchool.com. View the full article
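To ground the AWS services described above, here is a short sketch using the AWS CLI to store and retrieve a secret in Secrets Manager and an encrypted parameter in Systems Manager Parameter Store. The names and values are illustrative; equivalent operations exist in the Google Cloud and Azure CLIs:

# Store a secret in AWS Secrets Manager (name and value are illustrative)
aws secretsmanager create-secret --name prod/app/db-password --secret-string 'example-password'

# Store an encrypted configuration value in Parameter Store
aws ssm put-parameter --name /prod/app/log-level --type SecureString --value 'info'

# Retrieve both at deploy time
aws secretsmanager get-secret-value --secret-id prod/app/db-password --query SecretString --output text
aws ssm get-parameter --name /prod/app/log-level --with-decryption --query Parameter.Value --output text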
Unlike traditional credentials that are used by an individual to gain access to a particular system, secrets are leveraged by developers and their code to access applications, services, infrastructure, and by platforms to establish trusted identities. They can include usernames and passwords, API tokens, TLS certificates, keys, and more. As explained in the video below, however, as organizations distribute their workloads across more platforms they can lose centralized control over identity security and become more exposed to secret sprawl: New secrets challenges Many organizations' technical landscapes are evolving rapidly. For some, that means the adoption of different cloud infrastructure services, CI/CD platforms, and SaaS products. While these technologies promise to boost flexibility, speed, and efficiency, they can also result in a fragmented approach to security and the management of secrets. These distributed environments can increase the spread and risks of secret sprawl. To resolve various degrees of secret sprawl, four solutions are required: A centralized secrets management platform A way to scan and locate all of the secrets in your IT environments A secrets management platform that can synchronize secrets from external secrets managers A set of controls and encryption to limit access to secrets This post shares examples for each of the four solutions to help your organization get secret sprawl under control with the help of a centrally managed, auditable platform. Adopt a centralized secrets management platform Having an organization’s secrets siloed does more harm than good. Every additional copy of a secret created opens another window for a secret to be leaked and a subsequent attack. A critical starting point in resolving secret sprawl is to adopt a centralized system where you can store, organize, manage, and ultimately protect all your secrets. There’s no way to build solid governance, auditing, and security around organizational access to secrets if you don’t centralize your management through one control plane. Without a centralized control plane, you can’t easily scale security practices, so you’ll be constantly reinventing the security wheel in different corners of your organization. This fragmentation can also bring significant costs, including an increased threat of a significant security breach. The first step in resolving secret sprawl is to adopt a centralized system to store, organize, manage, and ultimately protect all your secrets. HashiCorp Vault brought secrets management into the mainstream as a tool category and has had the industry’s broadest adoption for years. Vault is an identity-based secrets and encryption management system that provides encryption services gated by authentication and authorization methods to ensure secure, auditable and restricted access to secrets. It is used to secure, store, generate, rotate, and protect secrets and other sensitive data using a UI, CLI, or API. Centralize secrets with synchronization integrations Some organizations store secrets in multiple solutions across different cloud service providers (AWS Secrets Manager, Microsoft Azure Key Vault, Google Cloud Secret Manager, GitHub, and others). 
While this may meet the requirement of bringing all secrets under management, there’s still a degree of secrets sprawl when security and site reliability engineering (SRE) teams must learn and use multiple solutions, each with their own nuances, in order to perform security reviews, implement organization-wide policies, carry out incident response tasks, or perform company audits. The solution is to have a central, cloud-agnostic secrets management platform that can synchronize secrets produced in multiple cloud-specific secrets managers and also push its own secrets out to those solutions. Vault’s secrets sync feature allows users to synchronize secrets when and where they require them, continually synchronizing secrets from Vault to external secrets managers so they are always up to date. With secret sync, Vault is able to: Offer complete visibility to satisfy your organization’s compliance and governance requirements. Easily manage, provision, rotate, and revoke secrets for external secrets managers using Vault’s UI, CLI, or API. Provide granular access controls and policies to determine who has access to which secrets across all secrets managers in an organization. Vault supports secrets sync with common secrets storage locations, including: AWS Secrets Manager Microsoft Azure Key Vault Google Cloud Secret Manager GitHub Vercel Vault has multiple platform and hosting options that allow organizations to take advantage of secrets sync, including HCP Vault, HCP Vault Secrets, and Vault Enterprise. By adopting one of these Vault offerings, organizations can resolve challenges created by secrets management tooling fragmentation and eliminate the need for context switching among multiple secrets management platforms. Find secrets with scanning A key piece of any secrets centralization plan is being able to locate API keys, database credentials, security certificates, passwords or publicly identifiable information (PII) across potentially very large IT estates and complex software supply chains. This is where secret scanning tools are required. HCP Vault Radar automates the initial scanning and ongoing detection and identification of unmanaged secrets so that they can either be revoked or migrated into Vault. HCP Vault Radar uses 300+ scanning algorithms to scan through several data sources, including: Git providers AWS Parameter Store Confluence Amazon S3 Local folders HCP Vault Radar also integrates with Vault Enterprise to scan supported data sources for the presence of leaked secrets currently in Vault that are actively being used. Using additional metadata from the scan, Radar will give the secrets it discovers a risk rating to prioritize which ones may need immediate attention. Together, these features give organizations a more complete view of their secrets, allowing teams to find and manage all secrets and manage risk. HCP Vault Radar is currently in preview. Organizations interested in testing it can request to be a part of our early access program. HCP Vault Radar is scheduled to be released in beta in January 2024 and we anticipate general availability later in 2024. Watch the video below to learn more: Limiting access to secrets Another important method to mitigate secret sprawl is to limit access to the usable secret. There are two common tactics used to restrict access and usability of secrets to limit sprawl. Encryption An effective secret management platform should provide encryption for its secrets storage. 
Encryption secures your data in such a way that it’s useless to anyone who doesn’t have the decryption key. Should the encrypted secret be obtained by a threat actor it can’t be used to gain access to systems, APIs, or infrastructure. Vault provides a security barrier for all requests made to the API & data tier. The security barrier automatically encrypts all data leaving Vault using a 256-bit Advanced Encryption Standard (AES) cipher in the Galois Counter Mode (GCM) with 96-bit nonces. The nonce is randomly generated for every encrypted object. When data is read from the security barrier, the GCM authentication tag is verified during the decryption process to detect any tampering. Least privileged access To maintain the privacy of your sensitive data, you must control access to it with effective controls. Use granular controls to grant secrets access to users, and don’t give a user access to a secret that they don’t need. Temporary access for temporary usage should also be the standard procedure. Ensure that your organization strictly limits unrestricted access or root users. Even when you are working with developers, restrict their access to the areas they are working on, so they don’t have a free pass to all your secrets or inadvertently leak them. Get started Vault has multiple platform and hosting options to help your organization resolve secrets sprawl. HCP Secrets: HCP Cloud Secrets is a multi-tenant SaaS solution focused on secrets management use cases, ease of adoption, and secret sync to resolve sprawl. This option has a low barrier of entry that helps to improve developer agility and can immediately benefit an organization's security posture. HCP Vault: HCP Vault is similar to HCP Cloud Secrets in that it is a SaaS-based solution. However, HCP Vault is a dedicated solution that services a broader set of use cases including secrets management, secrets sync (beta), PKI certificate management, and encryption. Vault Enterprise: Vault Enterprise is a self-managed platform servicing the widest range of security use cases including secrets management (static and dynamic), secrets sync (beta), PKI certificate management, encryption, and advanced data protection. For more information, view a tutorial of Vault Enterprise secret sync and AWS Secrets Manager. HCP Vault Radar: HCP Vault Radar automates the initial scanning and ongoing detection and identification of unmanaged secrets so that they can either be revoked or migrated into Vault. HCP Vault Radar is currently in an early adoption alpha program. To learn more or to be considered for the early adoption program, click here. Resolving secrets sprawl requires a comprehensive approach covering people, processes, and technology. HashiCorp Vault can be a valuable technology within that approach, but for the best results it should be integrated into a broader compliance strategy that includes training, regular audits, and ongoing monitoring of your systems and processes. To get started with HashiCorp Vault, visit the Vault product page. To learn more about what’s new in Vault Enterprise, go to the Vault Enterprise release page. Please contact us if you’d like to discuss your secrets management journey. View the full article
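As a small illustration of the least-privilege guidance above, a Vault policy can scope a team to a single KV path with read-only capabilities, and tokens bound to that policy can be kept short-lived. This is a sketch only; the mount, path, and policy name are illustrative:

# payments-read.hcl: read-only access to one application's secrets (KV v2 mounted at secret/)
path "secret/data/payments/*" {
  capabilities = ["read"]
}

path "secret/metadata/payments/*" {
  capabilities = ["list"]
}

# Load the policy and issue a temporary token bound to it:
#   vault policy write payments-read payments-read.hcl
#   vault token create -policy=payments-read -ttl=1h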
Forum Statistics
67.7k Total Topics
65.6k Total Posts