Showing results for tags 'secrets'.

1. Secrets sync, now generally available in Vault Enterprise 1.16, is a new feature that helps organizations manage secrets sprawl by centralizing the governance and control of secrets stored in other secret managers. Shift-left trends have pushed organizations to distribute their secrets across multiple secret managers, CI/CD tools, and platforms to bring them closer to developers for easy use. This proliferation of secret storage locations complicates secrets management: it limits visibility, fosters inconsistent practices, and compounds governance and compliance challenges.

Secrets management doesn't live up to its full potential unless it is centralized and managed on one platform. Secrets sync is another step toward that vision: it helps organizations resolve secrets-management fragmentation by providing a single management plane and controlling the distribution of secrets for last-mile usage. In this post, we'll dive deeper into how secrets sync works and its benefits.

How does secrets sync work?

First, let's look at a short demo of how secrets sync works:

Secrets sync lets users manage multiple external secrets managers, which are called destinations in Vault. Supported destinations include:

- AWS Secrets Manager
- Google Cloud Secret Manager
- Microsoft Azure Key Vault
- GitHub Actions
- Vercel

Engineering and security teams can generate, update, delete, rotate, and revoke secrets from Vault's user interface, API, or CLI and have those changes synchronized to and from external secrets managers for use by your cloud-hosted applications. Secrets sync lets organizations manage sync granularity by supporting secret access via paths and keys, so organizations can remain consistent with their existing operations. Additionally, secrets sync supports alternative authentication methods for organizations that don't support or allow personal access tokens or long-lived credentials.
Supported alternative authentication methods include:

- GitHub App for GitHub destinations
- Google Kubernetes Engine workload identity federation for Google Cloud destinations
- STS AssumeRole for AWS destinations

Benefits of secrets sync

As more and more organizations adopt a multi-cloud approach, they face challenges around isolated secrets management, compliance, and reporting tools, as well as protecting an expanded attack surface. Isolated secrets management solutions focus on unifying secrets only within their own platform, so they cannot provide a complete answer for multi-cloud environments, the complexity that comes with secret sprawl, or large-scale SaaS usage. Benefits of secrets sync include:

Maintain a centralized secrets management interface: Centralized secrets management is better secrets management. Instead of context switching between multiple cloud solutions and risking breaches via human error, secrets are all synced back to Vault to be managed and monitored there.

Better governance: Give security and compliance stakeholders one solution to implement and govern security best practices, as well as monitor compliance. A single management plane makes governance teams more productive and their goals more achievable.

Higher developer productivity: Syncing secrets to a single management plane also makes development teams more productive. There's no longer a need to interface with one cloud vendor's key manager when deploying to that cloud and another key manager when working in a different cloud.

Central visibility of secrets activity across teams: Once secrets are synced and centralized, Vault operators can audit in one place. Track when, by whom, and where secrets are modified or accessed, with advanced filtering and storing capabilities.
Last-mile secrets availability for developers: Centralize secrets in one solution while syncing them to existing platforms that may require use of the local cloud provider's secrets manager (e.g. AWS Secrets Manager, Azure Key Vault).

How HashiCorp resolves secret sprawl

Resolving secret sprawl requires a comprehensive approach governing people, processes, and technology. Secrets sync is a powerful tool to help organizations manage secret sprawl. It is supported on Vault Enterprise, as well as our multi-tenant SaaS solution, HCP Vault Secrets. Additionally, HCP Vault Radar helps platform engineering and security teams reduce the risk of secret sprawl by detecting unmanaged, hard-coded, and leaked secrets in the data sources developers use every day. When an insecure secret is detected, Vault Radar supports multiple remediation workflows to secure the organization's technology stacks.

To get started with HashiCorp Vault, visit the Vault product page. To learn more about what's new in Vault Enterprise, go to the Vault Enterprise release page. Please contact us if you'd like to discuss your secrets management journey.

- Secrets sync documentation
- Solutions to secret sprawl
- Vault Enterprise secrets sync tutorial
- Centralize control of external secrets managers with Vault Enterprise secrets sync (video)

View the full article
2. With the advent of containerization, software developers could build, deploy, and scale applications in a transformative way. Docker quickly became the leading containerization platform and remains one of the most common container image formats used today, and Docker Hub is one of the most prominent places for developers and enterprises to publish and distribute Docker images. With popularity comes greater attention from cybercriminals: Cybernews recently reported that 5,500 of 10,000 public Docker images examined contained more than 48,000 sensitive secrets, a mix of harmless and potentially vulnerable API keys. This report illustrates why it's imperative that security and platform teams know the most common attack vectors for their Docker containers and understand how to close them. This post provides a brief checklist of the attack vectors into your Docker containers that originate specifically from exposed secrets.

Docker and exposed secrets

Let's quickly examine the relationship between the container runtime and the registry. When we spin up a container, an image is pulled from the registry via APIs and deployed.

The high number of secrets in the Cybernews report is attributed to developers reusing packages from a registry that contain sensitive secrets. Secrets are commonly found in the container image metadata: the environment variables and the filesystem. Source code leakage can also allow attackers to generate new valid tokens that provide unauthorized system access.

Attack surface

An attack surface is the collection of all vulnerable points an attacker can use to enter the target system. Attackers skillfully exploit these vulnerable points in technology and human behavior to access sensitive assets. We need to understand two Docker concepts as we continue this discussion:

Filesystem: In Docker, each layer can contain directory changes.
The most commonly used filesystem, OverlayFS, enables Docker to overlay these layers to create a unified filesystem for a container.

Layers: Docker images are created in layers; each command in the Dockerfile corresponds to a layer.

With that context, let's analyze how exposed secrets affect these Docker image attack vectors.

Docker image layers

Secrets explicitly declared in the Dockerfile or in build arguments can easily be recovered via the Docker image history command:

docker image history <image>

This represents one of the simplest ways for an attacker to capitalize on a secret.

Filesystem

This Dockerfile demonstrates a scenario where sensitive data, an SSH private key and a secrets.txt file, are added to the container's filesystem and later removed:

FROM nginx:latest

# Copy in SSH private key, then delete it; this is INSECURE,
# the secret will still be in the image.
COPY id_rsa .
RUN rm -r id_rsa

ARG DB_USERNAME
ENV DB_USERNAME=$DB_USERNAME
ARG DB_PASSWORD
ENV DB_PASSWORD=$DB_PASSWORD
ARG API_KEY
ENV API_KEY=$API_KEY

# Expose secrets via a publicly accessible endpoint (insecure practice)
RUN echo "DB_USERNAME=$DB_USERNAME" > /usr/share/nginx/html/secrets.txt
RUN echo "DB_PASSWORD=$DB_PASSWORD" >> /usr/share/nginx/html/secrets.txt
RUN echo "API_KEY=$API_KEY" >> /usr/share/nginx/html/secrets.txt
RUN rm /usr/share/nginx/html/secrets.txt

CMD ["nginx", "-g", "daemon off;"]

Docker uses layer caching, so the secret is still available in one of the layers. An internal attacker can also extract individual layers of a Docker image, stored as tar files in registries, to uncover hidden secrets. After creating a Dockerfile like the one above, developers mistakenly pass the secrets as build arguments when creating the image:
docker build \
  --build-arg DB_USERNAME=root \
  --build-arg DB_PASSWORD=Xnkfnbgf \
  --build-arg API_KEY=PvL4FjrrSXyT7qr \
  -t myapp:1.0 .

While convenient, this is not secure, because the arguments are embedded in the image. A simple docker history --no-trunc <image> can expose the secret values. Developers should use multi-stage builds or a secrets manager instead.

Environment variables

Apart from Docker image access, unauthorized access to the source code behind a Docker image opens additional attack vectors. .env files are primarily used to store secrets such as API tokens, database credentials, and other secrets an application needs. When attackers gain access to the secrets in a .env file, they can make whatever unauthorized accesses those secrets allow.

Dockerfile

A Dockerfile is a standard file containing the instructions used to build an image. Hard-coding secrets in a Dockerfile creates a significant attack surface: an attacker who reads it sees the hard-coded secrets, the base image, the list of dependencies, and critical file locations. Developers should reference variables from an appropriate secrets manager instead.

docker-compose.yml

Docker Compose defines networks, services, and storage volumes. An attacker who views the file can understand the application architecture and exploit or disrupt its operation.

services:
  web:
    image: my-web-app:latest
    ports:
      - "80:80"
    networks:
      - app-network
  db:
    image: postgres:latest
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: example_db_password
networks:
  app-network:
volumes:
  db-data:

In the docker-compose.yml above, the Postgres database password is hardcoded. It can easily be read with the docker exec command, as shown below:

docker exec -it <container> env

Apart from secrets, an attacker can also analyze the volume mappings to identify potential points of weakness.
If they discover that the database volume (db-data) is also mounted to the host filesystem, they could perform a container breakout attack, gaining access to the underlying host system.

CI/CD config files

CI/CD configuration files such as .gitlab-ci.yml, azure-pipelines.yml, Jenkinsfile, etc., contain instructions for building, testing, and deploying applications. The logs generated by a CI/CD pipeline can contain debugging and logging information; if a developer includes a logging statement that inadvertently prints a sensitive secret, it can lead to unauthorized exposure and compromise. Such leaks need to be detected so that developers can fix their source code. Developers also tend to leave registry login credentials in CI/CD configuration files. Consider the following .gitlab-ci.yml:

variables:
  DOCKER_IMAGE_TAG: latest
  DOCKER_REGISTRY_URL: registry.example.com
  DOCKER_IMAGE_NAME: my-docker-image
  DOCKER_REGISTRY_USER: adminuser          # should use $CI_REGISTRY_USER
  DOCKER_REGISTRY_PASSWORD: secretpassword # should use $CI_REGISTRY_PASSWORD

# Jobs
build:
  stage: build
  image: image_name:stable
  script:
    - docker build -t $DOCKER_REGISTRY_URL/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD $DOCKER_REGISTRY_URL
    - docker push $DOCKER_REGISTRY_URL/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG

In the configuration above, the developer logs in to the Docker registry with a hardcoded username and password, leading to unwarranted secret exposure. A good development practice is to integrate the CI/CD environment with a secrets manager such as HashiCorp Vault.

Detecting secrets

Older or unused Docker images can contain unpatched vulnerabilities or outdated dependencies, posing security risks to the enterprise. Regularly scanning and removing unused images helps mitigate these risks by reducing the attack surface and ensuring that only secure images are deployed.
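The core of this kind of detection boils down to pattern matching over build and config files. As a rough stdlib-only sketch (not how any particular scanner is implemented; the patterns and sample contents are made up for illustration), flagging credential-like assignments in a Dockerfile or CI config might look like:

```python
import re

# Credential-looking keys assigned a literal value. A real scanner would
# add entropy checks and provider-specific token patterns.
PATTERN = re.compile(
    r"\b(?:ENV|ARG)?\s*([A-Z_]*(?:PASSWORD|SECRET|TOKEN|API_?KEY)[A-Z_]*)\s*[:=]\s*(\S+)",
    re.IGNORECASE,
)

def scan(text: str) -> list[tuple[int, str]]:
    """Return (line_number, key) pairs for hardcoded credential assignments."""
    findings = []
    for n, line in enumerate(text.splitlines(), start=1):
        m = PATTERN.search(line)
        if m and not m.group(2).startswith("$"):  # $VAR references are fine
            findings.append((n, m.group(1)))
    return findings

sample = """\
FROM nginx:latest
ENV DB_PASSWORD=Xnkfnbgf
ENV APP_MODE=production
DOCKER_REGISTRY_PASSWORD: secretpassword
SAFE_PASSWORD: $CI_REGISTRY_PASSWORD
"""
print(scan(sample))  # flags lines 2 and 4; the $CI_REGISTRY_PASSWORD reference passes
```

Naive matching like this produces false positives, which is why production scanners pair patterns with validity checks against the issuing provider.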
Enterprises also need to actively use secret scanners to detect secrets in Docker images, whether they're stored in Docker Hub, JFrog Artifactory, AWS ECR, or any other repository. HCP Vault Radar can meet these requirements and is an excellent choice since it's an add-on to the most popular secrets manager: HashiCorp Vault. Vault Radar analyzes the contents of each layer described in this post to identify secrets in software packages and dependencies.

Learn more

Vault Radar can scan your container images and other destinations such as source code, productivity applications like Jira, Confluence, and Slack, Terraform variables, server directories, and more. When it detects leaked secrets, it offers options to remediate them and enhance your security posture. You can sign up now for an HCP Vault Radar test run to detect secret sprawl in your enterprise, or learn more about HashiCorp Vault and Vault Radar on our homepage.

This post was originally published on Dev.to. View the full article
3. Explore how Akeyless Vaultless Secrets Management integrates with the Kubernetes Secrets Store CSI Driver to enhance security and streamline secrets management in your Kubernetes environment. The post Enhancing Kubernetes Secrets Management with Akeyless and CSI Driver Integration appeared first on Akeyless. View the full article
4. Researchers have discovered a new side-channel vulnerability in Apple's M-series processors that they claim could be used to extract secret keys from Mac devices while they perform cryptographic operations. Academic researchers from the University of Illinois Urbana-Champaign, University of Texas at Austin, Georgia Institute of Technology, University of California, University of Washington, and Carnegie Mellon University explained in a research paper that the vulnerability, dubbed GoFetch, lies in the chips' data memory-dependent prefetcher (DMP), an optimization mechanism that predicts the memory addresses of data that running code is likely to access in the near future. Because the data is loaded in advance, the chip gains performance. However, since the prefetcher makes predictions based on previous access patterns, it also creates changes in state that attackers can observe and use to leak sensitive information.

GoFetch risk

The vulnerability is not unlike the one abused in the Spectre/Meltdown attacks, which also observed the data the chips loaded in advance to improve performance. The researchers note that this vulnerability is essentially unpatchable, since it derives from the design of the M-series chips themselves. Instead of a patch, the only thing developers can do is build defenses into third-party cryptographic software. The caveat with this approach is that it could severely hinder the processors' performance on cryptographic operations. Apple has so far declined to discuss the researchers' findings; any performance hits would only be visible during cryptographic operations. While the vulnerability itself might not affect the average user, a future patch that hurts device performance just might. Those interested in reading about GoFetch in depth should check out the research paper.
Via Ars Technica

View the full article
5. Millions of secrets and authentication keys were leaked on GitHub in 2023, with the majority of developers failing to revoke them even after being notified of the mishap, new research claims. A report from GitGuardian, a company that helps developers secure their software development with automated secrets detection and remediation, found that in 2023 GitHub users accidentally exposed 12.8 million secrets in more than 3 million public repositories. These secrets include account passwords, API keys, TLS/SSL certificates, encryption keys, cloud service credentials, OAuth tokens, and more.

Slow response

During development, many IT pros hardcode authentication secrets to make their lives easier, but often forget to remove them before publishing the code on GitHub. Should any malicious actor discover these secrets, they would have easy access to private resources and services, which can result in data breaches and similar incidents. India was the country from which most leaks originated, followed by the United States, Brazil, China, France, and Canada. The vast majority of the leaks came from the IT industry (65.9%), followed by education (20.1%). The remaining 14% was split between science, retail, manufacturing, finance, public administration, healthcare, entertainment, and transport.

Making the mistake of hardcoding secrets can happen to anyone; what happens afterward is perhaps even more worrying. Just 2.6% of the secrets are revoked within the hour, and practically everything else (91.6%) remains valid after five days, when GitGuardian stops tracking their status. To make matters worse, the project sent 1.8 million emails warning developers and companies of its findings, and just 1.8% of recipients responded by removing the secrets from their code. Riot Games, GitHub, OpenAI, and AWS were listed as the companies with the best response mechanisms.
Via BleepingComputer

View the full article
6. In the realm of containerized applications, Kubernetes reigns supreme. But with great power comes great responsibility, especially when it comes to safeguarding sensitive data within your cluster. Terraform, the infrastructure-as-code darling, offers a powerful solution for managing Kubernetes Secrets securely and efficiently. This blog delves beyond the basics, exploring advanced techniques and considerations for leveraging Terraform to manage your Kubernetes Secrets.

Understanding Kubernetes Secrets

Kubernetes Secrets provide a mechanism to store and manage sensitive information such as passwords, API keys, and tokens used by applications within the cluster. These secrets are not exposed directly in the container image; instead, they are injected into pods at runtime. View the full article
7. Accidental leaks of API keys, tokens, and other secrets risk security breaches, reputation damage, and legal liability at a mind-boggling scale. In just the first eight weeks of 2024, GitHub detected over 1 million leaked secrets on public repositories. That's more than a dozen accidental leaks every minute. Since last August, all GitHub cloud users could opt in to secret scanning push protection, which automatically blocks commits when a secret is detected. Now, we've enabled secret scanning push protection by default for all pushes to public repositories.

What's changing

This week, we began the rollout of push protection for all users. This means that when a supported secret is detected in any push to a public repository, you will have the option to remove the secret from your commits or, if you deem the secret safe, bypass the block. It might take a week or two for this change to apply to your account; you can verify status and opt in early in your code security and analysis settings.

How will this change benefit me?

Leaked secrets can pose a risk to reputation and revenue, and even create legal exposure, which is why GitHub Advanced Security customers scan more than 95% of pushes to private repositories. As champions for the open source community, we believe that public repositories, and your reputation as a coder, are worth protecting too.

Do I have a choice?

Yes. Even with push protection enabled, you have the choice to bypass the block. Although we don't recommend it, you can also disable push protection entirely in your user security settings. However, since you always retain the option to bypass the block, we recommend that you leave push protection enabled and make exceptions on an as-needed basis.

What about private repositories?

If your organization is on the GitHub Enterprise plan, you can add GitHub Advanced Security to keep secrets out of private repositories as well.
You'll also get all of the other secret scanning features, along with code scanning, AI-powered autofix code suggestions, and other static application security testing (SAST) features as part of a comprehensive DevSecOps platform.

Learn more about secret scanning

GitHub secret scanning guards over 200 token types and patterns from more than 180 service providers, and boasts the industry's highest precision and lowest rate of false positives.1 Together, we can keep secrets from leaking on public repositories.

- Learn more about secret scanning
- Learn more about push protection for users
- Join the discussion within GitHub Community

Notes

1. A Comparative Study of Software Secrets Reporting by Secret Detection Tools, Setu Kumar Basak et al., North Carolina State University, 2023.
8. Kubernetes is an open-source tool used to run and manage containerized applications inside a cluster. It performs various tasks to control, run, and secure an application's credentials through Secrets and Ingress. An Ingress manages the application's incoming traffic and handles SSL termination, while Secrets store confidential information and TLS certificates for the application. This post will illustrate:

- What are Kubernetes Secrets?
- Prerequisite: Generate a private key and certificate
- How to create a TLS secret in Kubernetes
- How to create a secret through a YAML file
- How to embed a secret with a Kubernetes pod
- Conclusion

What are Kubernetes Secrets?

Secrets are a Kubernetes resource used to store confidential information such as user login credentials, keys, certificates, or tokens. Secrets can be created individually and attached to pods. They keep developers from putting confidential data in code and provide an extra layer of security. Different kinds of secrets can be created and used; the most common are:

Generic Secret: stores basic information such as passwords, tokens, API keys, OAuth keys, and so on.

TLS Secret: stores private keys and certificates signed by a CA. To secure applications running inside Kubernetes and communication within the cluster, users typically create TLS secrets and embed them in pods.

Docker Registry: stores Docker registry credentials so images can easily be pulled from the registry.

Prerequisite: Generate Private Key and Certificate

To create the certificate and private key, use OpenSSL to generate a private key and a CSR (certificate signing request), then use the CSR to generate a self-signed or CA certificate. To use OpenSSL commands on Windows, install Git.
For this purpose, follow our linked "Install git on Windows" article. After installing Git, follow the instructions below to generate a private key and signed certificate.

Step 1: Launch Git Bash Terminal

Search for "Git Bash" in the Start menu and launch the terminal. To check the current directory, use the "pwd" command:

pwd

Currently, we are working in the %USERPROFILE% directory.

Step 2: Create New Directory

Create a new directory to hold the certificates and private key, then navigate into it with the "cd" command:

mkdir cert
cd cert

Step 3: Generate Private Key

Generate the private key with the command below. Here, the generated key is saved to "mycert.key":

openssl genpkey -algorithm RSA -out mycert.key

Step 4: Generate CSR

Generate the CSR (certificate signing request) needed to obtain a signed certificate:

openssl req -new -key mycert.key -out mycert.csr

Step 5: Generate Certificate

Lastly, use the private key and CSR to create a certificate, saved in "mycert.crt" and valid for 365 days:

openssl x509 -req -in mycert.csr -signkey mycert.key -out mycert.crt -days 365

After generating the TLS certificate, you can create the TLS secret by following the next section.

How to Create Secret TLS in Kubernetes?

To secure the application and communication within and outside the Kubernetes cluster, TLS (Transport Layer Security) certificates are essential for encrypting data. Kubernetes Secrets let us attach a TLS certificate to running pods through a TLS secret. To create one, go through the following instructions.

Step 1: Start Minikube Cluster

To start the minikube cluster, first launch Windows PowerShell as administrator.
After that, create and run the cluster using the "minikube start" command:

minikube start

Step 2: Get Nodes

List the Kubernetes nodes to check whether the cluster has started:

kubectl get nodes

Step 3: Create Secret TLS

Create the TLS secret using the "kubectl create secret <secret-type> <secret-name> --cert=<path-to-tls-certificate> --key=<path-to-private-key>" command. The secret type can be "generic", "tls", or "docker-registry"; to create a TLS secret, set the type to "tls":

kubectl create secret tls demo-secret --cert=C:\Users\Dell\cert\mycert.crt --key=C:\Users\Dell\cert\mycert.key

Step 4: Get Secrets

For confirmation, list the Kubernetes secrets:

kubectl get secret

Here, you can see we have created a "demo-secret" containing "2" data values.

Step 5: Describe Secret

To view how data is stored in the secret, describe it using the "kubectl describe secret <secret-name>" command:

kubectl describe secret demo-secret

The values are stored in bytes and, unlike Kubernetes ConfigMaps, cannot be viewed directly.

How to Create a Secret TLS Through a Yaml File?

To create a TLS secret through a YAML file, first create a "secret.yml" file, add the base64-encoded certificate to the "tls.crt" key, and add the base64-encoded key to the "tls.key" key. For demonstration, follow the steps below.
Step 1: Create Yaml File

Create a file named "secret.yml" and paste in the given code, replacing the "tls.crt" and "tls.key" values with your own base64-encoded certificate and key:

apiVersion: v1
kind: Secret
metadata:
  name: mytls-secret
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: "base64 encoded cert"
  tls.key: "base64 encoded key"

Step 2: Create a Secret

Now, apply the secret YAML file through the "kubectl apply -f <path-to-secret.yml>" command:

kubectl apply -f secret.yml

The output shows that we have successfully created "mytls-secret" from the YAML file.

Note: View TLS Certificate and Private Key

To view the base64-encoded certificate for use in the YAML file, run the "cat <path-to-certificate file> | base64" command in the Git Bash terminal; do the same for the key:

cat mycert.crt | base64
cat mycert.key | base64

How to Embed Secret TLS With Kubernetes Pod?

After creating the TLS secret, you can embed it in a Kubernetes pod. To do so, use the following instructions.

Step 1: Create Yaml File

Make a file named "pod.yml" and paste the snippet below into it:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: html-cont
      image: rafia098/html-img:1.0
      envFrom:
        - secretRef:
            name: demo-secret

In the snippet above:

- "kind" specifies the Kubernetes resource being created.
- "name" under "metadata" sets the pod name.
- "containers" holds the container information.
- "name" under "containers" sets the container name.
- "image" provides the application image used to create and start the application inside the container.
- "envFrom" sets environment variables from other Kubernetes resources. Here, "secretRef" provides a secret reference to embed the TLS secret in the pod; specify the secret's name in its "name" key.
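The base64 step is essentially all kubectl does for you when it builds the secret. As an illustrative sketch (the placeholder PEM strings stand in for the real mycert.crt and mycert.key contents), here is how the data section of a kubernetes.io/tls Secret manifest can be assembled in Python:

```python
import base64

def _b64(s: str) -> str:
    """Kubernetes expects Secret 'data' values to be base64-encoded."""
    return base64.b64encode(s.encode()).decode()

def tls_secret_manifest(name: str, cert_pem: str, key_pem: str) -> dict:
    """Build a kubernetes.io/tls Secret manifest as a plain dict."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": "default"},
        "type": "kubernetes.io/tls",
        "data": {"tls.crt": _b64(cert_pem), "tls.key": _b64(key_pem)},
    }

# Placeholder contents; in practice read these from mycert.crt / mycert.key.
manifest = tls_secret_manifest(
    "mytls-secret",
    "-----BEGIN CERTIFICATE-----...",
    "-----BEGIN PRIVATE KEY-----...",
)
print(manifest["data"]["tls.crt"])
```

Serializing this dict to YAML (or JSON) produces a file equivalent to the secret.yml above, which also explains why "kubectl describe secret" shows only byte counts: the stored values are opaque encoded blobs.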
Step 2: Create or Upgrade the Pod

Next, open the folder where the "pod.yml" file was created, then apply the YAML file to create or reconfigure the pod using the "kubectl apply" command:

cd C:\Users\Dell\Documents\Kubernetes\Secret
kubectl apply -f pod.yml

Step 3: Access Kubernetes Pods

For verification, list the Kubernetes pods:

kubectl get pod

Here, you can see we have created the "demo-pod" successfully.

Step 4: Describe the Pod

To check whether the pod has embedded the TLS secret, describe the pod using the command below:

kubectl describe pod demo-pod

The output shows that the TLS secret was successfully embedded in the pod. We have covered how to create a TLS secret and embed it in a Kubernetes application running in a pod.

Conclusion

To create a TLS secret in Kubernetes, first create the signed TLS certificate and private key. After that, start the Kubernetes cluster and run the "kubectl create secret <secret-type> <secret-name> --cert=<path-to-tls-certificate> --key=<path-to-private-key>" command. You can also create the TLS secret from a YAML manifest. This post has illustrated how to create a TLS secret and how to embed it in a running application or pod. View the full article
9. One of the most popular cloud-native PaaS (Platform as a Service) products in Microsoft Azure is Azure App Service. It enables you to easily deploy and host web and API applications in Azure. The service supports configuring App Settings and Connection Strings within the Azure App Service instance. Depending on who has access […] The article Terraform: Deploy Azure App Service with Key Vault Secret Integration appeared first on Build5Nines. View the full article
  10. AWS Secrets Manager serves as a centralized and user-friendly solution for effectively handling access to all your secrets within the AWS cloud environment. It simplifies the process of rotating, maintaining, and recovering essential items such as database credentials and API keys throughout their lifecycle. A solid grasp of the AWS Secrets Manager concept is a valuable asset on the path to becoming an AWS Certified Developer. In this blog, you are going to see how to retrieve secrets that exist in AWS Secrets Manager with the help of AWS Lambda in a virtual lab setting. Let's dive in!

What is Secrets Manager in AWS?

AWS Secrets Manager is a tool that assists in safeguarding confidential information required to access your applications, services, and IT assets. This service makes it simple to regularly change, oversee, and access things like database credentials and API keys securely. As an AWS Secrets Manager example, users and applications can retrieve these secrets using specific APIs, eliminating the need to store sensitive data in plain text within the code. This enhances security and simplifies the management of secret information.

AWS Secrets Manager Pricing

AWS Secrets Manager operates on a pay-as-you-go basis, where your costs are determined by the number of secrets you store and the API calls you make. The service is transparent, with no hidden fees or requirements for long-term commitments. Additionally, there is a 30-day AWS Secrets Manager free tier period, which begins when you store your initial secret, allowing you to explore AWS Secrets Manager without any charges. Once the free trial period ends, you will be billed at a rate of $0.40 per secret per month, and $0.05 for every 10,000 API calls.

AWS Secrets Manager vs. Parameter Store

What are AWS Lambda functions?

AWS Lambda is a service for creating applications that eliminates the need to manually set up or oversee servers. 
AWS Lambda functions frequently require access to sensitive information like certificates, API keys, or database passwords. It's crucial to keep these secrets separate from the function code to prevent exposing them in the source code of your application. By using an external secrets manager, you can enhance security and avoid unintentional exposure. Secrets managers offer benefits like access control, auditing, and the ability to manage secret rotation. It's essential not to store secrets in Lambda configuration environment variables, as these can be seen by anyone with access to view the function's configuration settings.

Architecture Diagram for Retrieving Secrets in AWS Secrets Manager with AWS Lambda

When Lambda invokes your function for the first time, it creates a runtime environment. First, it runs the function's initialization code, which includes everything outside of the main handler. After that, Lambda executes the function's handler code, which receives the event payload and processes your application's logic. For subsequent invocations, Lambda can reuse the same runtime environment.

To access secrets, you have a couple of options. One way is to retrieve the secret during each function invocation from within your handler code. This ensures you always have the most up-to-date secret, but it can lead to longer execution times and higher costs, as you're making a call to Secrets Manager every time. There may also be additional costs associated with retrieving secrets from Secrets Manager.

Another approach is to retrieve the secret during the function's initialization process. This means you fetch the secret once when the runtime environment is set up, and then you can reuse that secret during subsequent invocations, improving cost efficiency and performance. The Serverless Land pattern example demonstrates how to retrieve a secret during the initialization phase using Node.js and top-level await. 
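The initialization-phase retrieval described above can be sketched in Python (the lab later in this article uses Python 3.8). The secret ID, its JSON key/value layout, and the helper names here are illustrative assumptions, not code from the article:

```python
import json
import os

_cached_secret = None  # survives across invocations while the runtime environment is warm


def parse_secret(secret_string):
    """Secrets Manager returns key/value secrets as a JSON string."""
    return json.loads(secret_string)


def get_secret(secret_id=None):
    """Fetch the secret once per runtime environment, then reuse the cached copy."""
    global _cached_secret
    if _cached_secret is None:
        import boto3  # deferred import so the module also loads where boto3/AWS is absent
        client = boto3.client("secretsmanager")
        resp = client.get_secret_value(SecretId=secret_id or os.environ["SECRET_ID"])
        _cached_secret = parse_secret(resp["SecretString"])
    return _cached_secret


def handler(event, context):
    creds = get_secret()  # cache hit on every warm invocation
    return {"statusCode": 200, "user": creds["username"]}
```

Note that the cached copy can go stale if the secret rotates while the runtime environment stays warm.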
If the secret might change between invocations, make sure your handler can verify the secret's validity and, if necessary, retrieve the updated secret.

Another way to optimize this process is to use Lambda extensions. These extensions can fetch secrets from Secrets Manager, cache them, and automatically refresh the cache on a specified time interval. The extension retrieves the secret from Secrets Manager before the initialization process and provides it via a local HTTP endpoint. Your function can then get the secret from this local endpoint, which is faster than retrieving it directly from Secrets Manager. Moreover, you can share the extension among multiple functions, reducing code duplication. The extension takes care of refreshing the cache at the right intervals to ensure that your function always has access to the most recent secret, which enhances reliability.

Guidelines to Retrieve Secrets Stored in AWS Secrets Manager with AWS Lambda

To retrieve the secrets kept in AWS Secrets Manager with the help of AWS Lambda, you can follow these guided instructions:

First, you need to access the Whizlabs Labs library. Click on guided labs on the left side of the lab's homepage and enter the lab name in the search lab tab. Once you have found the guided lab for the topic you entered, click on it to see the lab overview section. Upon reviewing the lab instructions, you may initiate the lab by selecting the "Start Lab" option located on the right side of the screen. The tasks involved in this guided lab are as follows:

Task 1: Sign in to the AWS Management Console

Start by accessing the AWS Management Console and set the region to N. Virginia. Ensure that you do not edit or remove the 12-digit Account ID in the AWS Console. Copy your username and password from the Lab Console, then paste them into the IAM Username and Password fields in the AWS Console. Afterward, click the 'Sign in' button. 
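The Lambda extension flow described at the start of this section reads the secret from a local HTTP endpoint rather than calling Secrets Manager directly. A minimal sketch, assuming the AWS Parameters and Secrets Lambda Extension layer is attached to the function (port 2773 and the token header are that extension's documented defaults, but verify them for your deployment):

```python
import json
import os
import urllib.parse
import urllib.request

# The extension listens on localhost; 2773 is its default port.
PORT = os.environ.get("PARAMETERS_SECRETS_EXTENSION_HTTP_PORT", "2773")


def extension_url(secret_id):
    """Build the local endpoint URL for a given secret ID."""
    quoted = urllib.parse.quote(secret_id, safe="")
    return f"http://localhost:{PORT}/secretsmanager/get?secretId={quoted}"


def get_secret_via_extension(secret_id):
    """Read a cached secret from the extension's local HTTP endpoint."""
    req = urllib.request.Request(
        extension_url(secret_id),
        # The extension authenticates callers with the function's session token.
        headers={"X-Aws-Parameters-Secrets-Token": os.environ["AWS_SESSION_TOKEN"]},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["SecretString"]
```

Because the call stays on the loopback interface, it avoids a network round trip to Secrets Manager on every invocation, while the extension refreshes its cache on its configured interval.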
Task 2: Create a Lambda Function

Navigate to the Lambda service. Create a new Lambda function named "WhizFunction" with the runtime set to Python 3.8. Configure the function's execution role to use the existing role named "Lambda_Secret_Access." Adjust the function's timeout to 2 minutes.

Task 3: Write a Lambda Function to Hard-Code Access Keys

Develop a Lambda function that creates a DynamoDB table and inserts items; this code will include hard-coded access keys. Download the code provided in the lab document. Replace the existing code in the Lambda function "WhizFunction" with the code from "Code1" in the downloaded zip file. Make sure to change the AWS Access Key and AWS Secret Access Key as instructed in the lab document. Deploy the code and configure a test event named "WhizEvent." Now click the save button, then click the test button to execute the code. The test creates a DynamoDB table with some data fields.

Task 4: View the DynamoDB Table in the Console

Access the DynamoDB service by searching for it in the top left corner. In the "Tables" section, you will find a table named "Whizlabs_stud_table1." You can view the items within the table by selecting the table and clicking "Explore table items."

Task 5: Write Lambda Code to Return Table Data

Modify the Lambda function "WhizFunction" to retrieve data from the DynamoDB table. Replace the existing code with the code from "Code2" in the lab document, making the necessary AWS Access Key and AWS Secret Access Key changes. Deploy the code and execute a test to enable the Lambda function to return data from the table.

Task 6: Create a Secret to Store Access Keys

Access AWS Secrets Manager and make sure you are in the N. Virginia Region. 
Create a new secret by specifying it as "Other Type of Secret." Enter the Access Key and Secret Access Key as key-value pairs. Choose the default encryption key. Name the secret "whizsecret" and proceed with the default settings. Review and store the secret, and copy the Secret ARN for later use.

Task 7: Write a Lambda Function to Create DynamoDB Items Using Secrets Manager

Modify the Lambda function to create a new DynamoDB table and insert items by retrieving the access keys from Secrets Manager. Replace the code with the code from "Code3" in the lab document, updating the Secret ARN. Deploy the code and run a test to create the DynamoDB table and items securely.

Task 8: View the DynamoDB Table in the Console

Access the DynamoDB service. In the "Tables" section, you will find a table named "Whizlabs_stud_table2." To view the items, select the table and click "Explore table items."

Task 9: Write Lambda Code to View Table Items Using Secrets Manager

Modify the Lambda function to fetch table items securely using the access and secret keys stored in Secrets Manager. Replace the code with the code from "Code4" in the lab document, updating the Secret ARN. Deploy the code and execute a test to securely access and view the table items.

Task 10: Clean Up AWS Resources

Finally, delete the Lambda function "WhizFunction" and both DynamoDB tables. Delete the secret "whizsecret" from AWS Secrets Manager, scheduling its deletion with a waiting period of 7 days to ensure cleanup. Finally, end the lab by signing out of the AWS Management Console.

Also Read: Free AWS Developer Associate Exam Questions

FAQs

How much does the AWS Secrets Manager parameter store cost?

Parameter Store doesn't incur any extra costs. However, there is a maximum limit of 10,000 parameters that you can store.

What can be stored in AWS Secrets Manager?

AWS Secrets Manager serves as a versatile solution for storing and managing a variety of sensitive information. 
This includes, but is not limited to, database credentials, application credentials, OAuth tokens, API keys, and various other secrets essential for different aspects of your operations. It's important to note that several AWS services seamlessly integrate with Secrets Manager to securely handle and utilize these confidential data points throughout their entire lifecycle.

What is the length limit for AWS Secrets Manager?

In the Secrets Manager console, data is stored as a JSON structure of key/value pairs that can be easily parsed by a Lambda rotation function. A secret value can range from 1 character to 65,536 characters. Also, it's important to note that tag key names in Secrets Manager are case-sensitive.

What are the benefits of AWS Secrets Manager?

Secrets Manager provides a secure way to save and oversee your credentials. It makes the process of modifying or rotating your credentials easy, without requiring any complex code or configuration adjustments. Instead of embedding credentials directly in your code or configuration files, you can opt to store them safely using Secrets Manager.

What is the best practice for AWS Secrets Manager?

You can adhere to the below-listed AWS Secrets Manager best practices to store secrets more effectively:

- Make sure that the AWS Secrets Manager service applies encryption for data at rest by using Key Management Service (KMS) Customer Master Keys (CMKs).
- Ensure that automatic rotation is turned on for your Secrets Manager secrets.
- Confirm that the rotation schedule for Secrets Manager is set up correctly.

Conclusion

Hope this blog equips you with the knowledge and skills to effectively manage secrets within AWS, ensuring the protection of your critical data. Following the above AWS Secrets Manager tutorial steps can help you securely access the sensitive information stored in Secrets Manager using AWS Lambda. 
You can also opt for AWS Sandbox to play around with the AWS platform. View the full article
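Task 7's pattern (retrieving the stored access keys from "whizsecret" and using them for DynamoDB) can be sketched as below. The key names inside the secret and the table name are assumptions based on the lab description, not the actual "Code3" file:

```python
import json


def creds_from_secret(secret_string):
    """Map the secret's key/value pairs onto boto3 credential arguments."""
    pairs = json.loads(secret_string)
    return {
        "aws_access_key_id": pairs["AccessKey"],
        "aws_secret_access_key": pairs["SecretAccessKey"],
    }


def handler(event, context):
    import boto3  # deferred so the sketch can be imported without boto3 installed

    # Retrieve the access keys from Secrets Manager instead of hard-coding them.
    sm = boto3.client("secretsmanager")
    secret = sm.get_secret_value(SecretId="whizsecret")["SecretString"]

    # Use the retrieved keys for the DynamoDB client.
    dynamodb = boto3.client("dynamodb", region_name="us-east-1",
                            **creds_from_secret(secret))
    items = dynamodb.scan(TableName="Whizlabs_stud_table2")["Items"]
    return {"count": len(items)}
```

In production, prefer granting the Lambda execution role direct DynamoDB permissions over storing long-lived access keys; the lab stores keys only to contrast hard-coding with Secrets Manager retrieval.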
  11. If you are deploying your application to Azure from Azure Pipelines, you might want to leverage the ability to do so without using secrets, thanks to Workload identity federation. In this article, I will demonstrate how to automate the configuration of your Azure DevOps project, with everything pre-configured to securely deploy applications to Azure... View the full article
  12. In the current digital era, handling sensitive data like passwords, API keys, and other credentials is vital to safeguarding an organization's infrastructure. Outdated methods of storing and managing secrets, including hardcoding them in configuration files or employing version control systems, no longer offer adequate protection against contemporary cyber threats. In response, Amazon Web Services (AWS) presents AWS Secrets Manager, a secure solution that offers scalability when managing, rotating, and retrieving secrets. This article delves into the primary advantages of utilizing AWS Secrets Manager for secure configuration management. View the full article
  13. In my previous tutorials, we looked at Azure Key Vault and Google Secret Manager: How to Handle Secrets With Azure Key Vault: In this piece, we had a look at the Zero Trust security strategy, how to put it into practice to secure applications and data, and how secrets managers can help to achieve the Zero Trust goal. We also included a tutorial on Kubernetes/SPC to use secrets from secret managers. How to Handle Secrets With Google Secret Manager: In this piece, we did a tutorial on using secrets from secret managers in your CI workflows (GitHub Actions). If you haven't read them yet, please give them a quick read, because even if you are not an Azure or a GCP user, they still might be worth reading. View the full article
  14. Today at HashiConf, we are pleased to announce the alpha program for HashiCorp Cloud Platform (HCP) Vault Radar, HCP Vault Secrets general availability, secrets sync beta for Vault Enterprise, and HashiCorp Vault 1.15. These new capabilities help organizations secure their applications and services as they leverage a cloud operating model to power their shift to the cloud. Enabling a cloud operating model helps organizations cut costs, reduce risks, and increase the speed at which developers build and deploy secure applications. The new capabilities boost Vault's focus on helping organizations use identity to achieve their security goals by:

- Centrally managing and enforcing access to secrets and systems based on trusted sources of application and user identity.
- Eliminating credential sprawl by identifying static secrets hardcoded throughout complex systems and tooling across your entire cloud estate.
- Reducing manual overhead and risk associated with managing access to infrastructure resources like SSH and VPNs, as well as applications and services.
- Automatically implementing authentication and authorization mechanisms to ensure only authorized services can communicate with one another.

View the full article
  15. Today at HashiConf, we are pleased to announce the general availability of HCP Vault Secrets, a new software-as-a-service (SaaS) offering of HashiCorp Vault that focuses on secrets management. Released in beta earlier this year, HCP Vault Secrets lets users onboard quickly and is free to get started. The general availability release of HCP Vault Secrets builds on the beta release with production-ready secrets management capabilities, additional secrets sync destinations, and multiple consumption tiers. During the public beta period, we worked on improvements and additions to HCP Vault Secrets. Focused on secrets management for developers, these additions will help our users to:

- Boost security across clouds and machines: Centralize where secrets are stored and minimize context switching between multiple solutions to reduce the risk of breaches caused by human error.
- Increase productivity: Improve security posture without expending additional time and effort.
- Enhance visibility of secrets activity across teams: Understand when secrets are modified or accessed — including by whom, when, and from where — with advanced filtering and storage.
- Comply with security best practices: Eliminate manual upgrade requirements with fully managed deployment to keep your instance up to date and in line with security best practices.
- Last-mile secrets availability for developers: Centralize secrets in HCP Vault Secrets while syncing secrets to existing platforms and tools so developers can access secrets when and where they need them.

View the full article
  16. Pulumi previewed a tool that enables DevOps teams to unify environments, secrets and configuration (ESC) management. View the full article
  17. At HashiDays in June, we announced the public beta for a new offering on the HashiCorp Cloud Platform: HCP Vault Secrets is a powerful new tool designed to identify, control, and remediate secrets sprawl and centralize secrets management by synchronizing secrets across platforms. Secrets are unlike traditional credentials because they are leveraged by developers, applications, services, infrastructure, and platforms to establish trusted identities. As organizations distribute their workloads across more platforms they lose centralized control over identity security and become more exposed to secrets sprawl. This post reviews the secrets sync beta feature released as part of Vault Enterprise 1.15 and discusses how it will help organizations corral secrets sprawl and regain centralized control and visibility of their secrets... View the full article
  18. You can use your Yubikey to remember and type an arbitrary string, as well as using it as an OTP generator and a secure store for your SSH key. We use this so that we don’t have to remember our 1Password secret keys… View the full article
  19. Managing sensitive information, such as API keys, database passwords, or encryption keys, is a critical aspect of infrastructure and application security. AWS Secrets Manager is a service that helps you protect and manage your application's secrets, and Terraform is a powerful tool for provisioning and managing infrastructure. In this guide, we'll explore how to retrieve secrets from AWS Secret Manager and use them securely in your Terraform configurations. Click Here To Read More
  20. HashiCorp this week acquired BluBracket to add a set of static secrets discovery tools to its portfolio. View the full article
  21. In March, we introduced the beta version of the HashiCorp Vault Secrets Operator for Kubernetes. Today, the Operator has reached general availability. We received a great deal of feedback from our user community that helped us identify and prioritize features for the Vault Secrets Operator GA. This post covers the functionality of the Vault Operator and reviews the new features released along with GA... View the full article
  22. GitGuardian has expanded its ability to secure code repositories by providing deeper integration with GitHub. Ziad Ghalleb, product marketing manager for GitGuardian, said the results of security scans are now provided in the context of pull requests alongside suggestions for remediating issues. The company also expanded developer onboarding options by adding an application programming interface […] The post GitGuardian Tightens Integration With GitHub to Secure Secrets appeared first on DevOps.com. View the full article
  23. We are pleased to announce the general availability of HashiCorp Vault 1.11. Vault provides secrets management, data encryption, and identity management for any application on any infrastructure. Vault 1.11 focuses on improving Vault’s core workflows and making key features production-ready. In this release, Vault adds a new Kubernetes secrets engine to dynamically generate credentials, improves the KV (key-value) secrets engine’s usability, adds support in the PKI engine for non-disruptive rotation, enables bring your own key (BYOK) for Transit, and brings many other improvements. Key features and improvements include:

- Kubernetes secrets engine: A new secrets engine that can dynamically generate Kubernetes service account tokens, service accounts, role bindings, and roles.
- Integrated storage autopilot (for Vault Enterprise): Autopilot is now able to perform seamless automated upgrades and includes support for redundancy zones to improve cluster resiliency.
- Vault Agent: Updated consul-template includes an opt-in pkiCert option to prevent consul-template from re-fetching PKI certificates on reload or restart.
- Transit secrets engine: The ability to import externally generated keys to support use cases where there is a need to bring in an existing key from a hardware security module (HSM) or other outside system.
- PKI secrets engine: Support for non-disruptive intermediate and root certificate rotation. This introduces /keys and /issuers endpoints to allow import, generation, and configuration of any number of keys or issuers within a PKI mount, giving operators the ability to rotate certificates in place without affecting existing client configurations. Also adds support for a CPS URL in custom policy identifiers when generating certificates using the PKI engine.
- Terraform provider for Vault: New documentation and feature enhancements in the Terraform provider for the PKI secrets engine, along with support for specifying a namespace within a resource or data source.
- Entropy augmentation: Updated sys/tools/random and transit/random endpoints to support a user-defined random byte source from an HSM.
- Google Cloud auth method: A custom_endpoint option so that the Google service endpoints used by the underlying client can be customized to support both public and private services.
- User interface (UI) updates: UI support to configure login multi-factor authentication (MFA) using time-based one-time passwords (TOTP), Duo, Okta, and PingIdentity.
- Snowflake database secrets engine: Support to manage RSA key-pair credentials for dynamic and static Snowflake database users.
- Consul secrets engine: Support for templating policy on node_identities and service_identities to be set on Consul token creation.
- KMIP secrets engine (for Vault Enterprise): Support for import, query, encryption, and decryption operations. (Please refer to the Supported KMIP Operations for a complete list.)
- Transform secrets engine (for Vault Enterprise): A convergent tokenization mode and a tokenization lookup feature.
- Vault usage metrics: The ability to export the unique client count aggregate for a selected billing period (in technical preview), plus a UI update to view changes to client counts month over month.

This release also includes additional new features, workflow enhancements, general improvements, and bug fixes. The Vault 1.11 changelog and release notes list all the updates. Please visit the Vault HashiCorp Learn page for step-by-step tutorials demonstrating the new features… View the full article
  24. The continuous integration and continuous delivery (CI/CD) pipeline is a fundamental component of the software delivery process for DevOps teams. The pipeline leverages automation and continuous monitoring to enable seamless delivery of software. With continuous automation, it’s important to ensure security for every step of the CI/CD pipeline. Sensitive information like access credentials is often […] View the full article
  25. Field level encryption (FLE) allows developers to selectively encrypt specific data fields. It helps protect sensitive data and enhances the security of communication between client apps and servers. Pairing an FLE-capable database with a KMIP provider offers the highest level of security and control. The Key Management Interoperability Protocol (KMIP) standard is a widely adopted approach to handle cryptographic workloads and secrets management for enterprise infrastructure such as databases, network storage, and virtual and physical servers. HashiCorp Vault, being a KMIP compliant Key Management Server (KMS), enables organizations to perform cryptographic operations for their apps and services. With MongoDB releasing client-side field level encryption with KMIP support, customers are now able to use Vault’s KMIP secrets engine to supply the encryption keys. This allows customers to be in full control of their keys... View the full article