Showing results for tags 'hsms'.

Found 24 results

  1. HSM Integration refers to the process of incorporating a Hardware Security Module (HSM) into an organization’s IT and security infrastructure. HSMs are physical devices designed to secure digital keys and perform cryptographic operations, such as encryption, decryption, and digital signing, in a tamper-resistant environment. This integration is pivotal for enhancing the security of sensitive data […] The post What is HSM Integration? appeared first on Akeyless.
  2. In High Demand - How Thales and DigiCert Protect Against Software Supply Chain Attacks (madhav, Tue, 04/16/2024 - 05:25)

Software supply chain attacks have been rapidly increasing in the past few years. Also called backdoor attacks, they cleverly exploit third-party software vulnerabilities to access an organization’s systems and data. These infiltrations tend to be very lucrative for criminals and devastating to businesses, as a single breach can impact thousands of victims in a rapid domino effect. However, the combined partnership of Thales and DigiCert offers solutions to help protect against these security risks. By 2025, Gartner predicts that 45% of organizations worldwide will have experienced attacks on their software supply chains. The news has already captured some very high-profile incidents, including attacks on an American retailer, a software vendor, and more recently a multinational investment and financial services bank.

How are the hackers getting in?

Attacks are most commonly executed by hijacking software updates, undermining code signing, and compromising open-source code. Hackers tend to target software with poor security. In most cases, criminals sneak in behind legitimate processes to get unrestricted access to an organization's software ecosystem and wreak havoc. Examples include covertly inserting malware or manipulating unprotected code-signing keys.

How can organizations protect their data?

Fortunately, there are several preventive measures organizations can take, recommended by the Cybersecurity and Infrastructure Security Agency (CISA). These include the following four best practices:

1 – Protect software releases with Code Signing

Most modern software is a compilation of code and packages from multiple sources. This may include open-source and third-party code and various libraries, in addition to multiple possible components from internal and external DevOps and Continuous Integration/Delivery (CI/CD) teams. Multiple security measures need to be integrated and automated during each build and release cycle. Tampering, such as inserting malware, can occur at any point during this process. Tampering can involve inserting third-party code (often coming from open-source software components), manipulating the organization’s proprietary source code, or even manipulating the artifacts that are used to build the software.

Software releases can be protected by code signing. This framework uses digital certificates and the public key infrastructure (PKI) to sign program files so that users can authoritatively identify the publisher of the file and verify that the file hasn’t been tampered with or accidentally modified. Organizations can stop application and software tampering by using DigiCert Code Signing Certificates to ensure their downloaded files are published as intended. Additionally, software developers can use DigiCert Extended Validation (EV) code signing certificates to digitally sign apps, drivers, and software programs as a way for end users to verify that the code they receive has not been altered or compromised by a third party.

2 – Protect Private Keys

The most significant issue with code signing is protecting the private keys associated with the code signing certificates. If the keys are compromised, the certificate and software can no longer be fully trusted. To protect keys, organizations generally choose between software and hardware protections. However, due to several high-profile hacks into systems using software-only protections, the trend is increasingly toward more robust hardware-based solutions. For example, both the National Institute of Standards and Technology (NIST) and the Certification Authority/Browser Forum (CA/B Forum) recommend using hardened cryptographic hardware products, such as Hardware Security Modules (HSMs), to protect keys as a security best practice.
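Practices 1 and 2 can be illustrated with a minimal sketch: an artifact digest is signed with a key that never leaves a protected module, and any tampering with the artifact invalidates the signature. This toy uses a symmetric HMAC purely as a simplified stand-in; real code signing uses asymmetric keys and CA-issued X.509 certificates, and a real HSM enforces the key boundary in hardware.

```python
import hashlib
import hmac
import secrets

class ToySigningHsm:
    """Conceptual stand-in for an HSM-backed signer: the key is generated
    inside the object and is never exported; callers get operations only."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # lives only inside this "module"

    def sign(self, artifact: bytes) -> bytes:
        # Sign a digest of the artifact, as code-signing tools do.
        digest = hashlib.sha256(artifact).digest()
        return hmac.new(self._key, digest, hashlib.sha256).digest()

    def verify(self, artifact: bytes, signature: bytes) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(self.sign(artifact), signature)

hsm = ToySigningHsm()
release = b"app-v1.0 binary contents"
sig = hsm.sign(release)
print(hsm.verify(release, sig))                   # True: untampered release
print(hsm.verify(b"app-v1.0 with implant", sig))  # False: tampering detected
```

Because callers can only ask the object to sign or verify, compromising an application that uses it does not expose the key material itself, which is the core property an HSM provides.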
For example, Thales Luna Hardware Security Modules (HSMs) are intrusion-resistant, tamper-evident, FIPS-validated, and Common Criteria EAL 4+ certified hardware devices that provide a secure root of trust for code signing. For organizations needing flexible deployment models, Luna HSMs are among the few available on-premises, as-a-service, or in a hybrid model. Luna HSMs can also be used to securely generate, store, and manage the private keys used in code-signing applications. They also provide strict access control over the use of keys (which must be kept inside the HSM to perform the code signature), as well as a secure audit log to track access. All of these protections ultimately ensure the private code signing keys aren’t stolen or misused.

3 – Proactively detect software vulnerabilities

To minimize code signing process vulnerabilities, centralized solutions that combine advanced binary analysis and threat detection are recommended. DigiCert Software Trust Manager improves software integrity through deep binary static analysis, verifying that software is free from known threats like malware, software implants, software tampering, and exposed secrets before the final software is code signed. Additionally, DigiCert Software Trust Manager works in conjunction with both on-premises Luna HSMs and Luna Cloud HSM services (available on Thales Data Protection on Demand) to ensure customers can protect their data on-premises, in the cloud, or across hybrid environments.

4 – Reduce risk with SBOMs

Following several major software supply chain attacks, Executive Order 14028 was issued, outlining the value of a Software Bill of Materials (SBOM) for improving cybersecurity. A Software Bill of Materials is a list of components attached to the software as a nested inventory. It lists every piece of code that makes up the full software package, so you can know what to trust and more easily trace and eliminate vulnerabilities or malware.
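The nested-inventory idea can be sketched in a few lines of Python: a toy SBOM (the component names are hypothetical) is walked recursively to trace every dependency path that leads to a known-vulnerable component.

```python
# Toy nested SBOM: each component may itself contain components.
sbom = {
    "name": "shop-app", "version": "2.1.0",
    "components": [
        {"name": "web-framework", "version": "4.2", "components": [
            {"name": "log-lib", "version": "1.8", "components": []},
        ]},
        {"name": "payment-sdk", "version": "3.0", "components": [
            {"name": "log-lib", "version": "1.8", "components": []},
        ]},
    ],
}

def find_paths(node, target, path=()):
    """Return every dependency path from the root to a component named target."""
    path = path + (f'{node["name"]}@{node["version"]}',)
    hits = [path] if node["name"] == target else []
    for child in node.get("components", []):
        hits.extend(find_paths(child, target, path))
    return hits

# If "log-lib" is announced as vulnerable, the SBOM shows both paths to it:
for p in find_paths(sbom, "log-lib"):
    print(" -> ".join(p))
```

This is exactly the triage an SBOM enables: instead of asking "do we ship log-lib anywhere?", you can enumerate every consuming component and patch or replace each one.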
Leveraging automated scanning tools that can also generate SBOMs, such as DigiCert Software Trust Manager, can greatly benefit organizations seeking to apply protective and compliance measures more rapidly while minimizing oversight risk.

Thales and DigiCert Architecture Overview

Conclusion

While the threat of software supply chain attacks looms large on the horizon, there are several ways to defend against these breaches. Together, Thales and DigiCert can help protect against software supply chain attacks by ensuring software integrity for organizations. To learn more, please join our upcoming webinar on April 23rd.

Melody Wood | Partner Solutions Marketing Manager, Thales

The post In High Demand – How Thales and DigiCert Protect Against Software Supply Chain Attacks appeared first on Security Boulevard.
  3. Protecting people, data, and critical assets is a crucial responsibility for modern organizations, yet conventional security approaches often struggle to address the escalating velocity, breadth, and intricacy of modern cyberattacks. Bolting on more new security products is simply not a viable long-term strategy. What organizations need from their security solutions is a convergence of essential capabilities that brings simplicity, streamlines operations, and enhances efficiency and effectiveness. Today at Google Cloud Next, we are announcing innovations across our security portfolio that are designed to deliver stronger security outcomes and enable every organization to make Google a part of their security team.

Increasing speed and productivity with Gemini in Security

Generative AI offers tremendous potential to tip the balance in favor of defenders, and we continue to infuse AI-driven capabilities into our products. Today we’re announcing the following new AI capabilities:

Gemini in Security Operations is coming to the entire investigation lifecycle, building on our December GA of natural language search and case summaries in Chronicle. A new assisted investigation feature, generally available at the end of this month, will guide analysts through their workflow wherever they are in Chronicle Enterprise and Chronicle Enterprise Plus. Gemini recommends actions based on the context of an investigation, and can run searches and create detection rules to improve response times. Plus, analysts can now ask Gemini for the latest threat intelligence from Mandiant directly in-line — including any indicators of compromise found in their environment — and Gemini will navigate users to the most relevant pages in the integrated platform for deeper investigation. Gemini in Security Operations allows users to quickly investigate incidents and alerts using conversational chat in Chronicle.

“Gemini in Security Operations is enabling us to enhance the efficiency of our Cybersecurity Operations Center program as we continue to drive operational excellence,” said Ronald Smalley, senior vice president of Cybersecurity Operations, Fiserv. “Detection engineers can create detections and playbooks with less effort, and security analysts can find answers quickly with intelligent summarization and natural language search. This is critical as SOC teams continue to manage increasing data volumes and need to detect, validate, and respond to events faster.”

Gemini in Threat Intelligence now offers conversational search across Mandiant’s vast and growing repository of threat intelligence directly from frontline investigations — a grounded experience, now in preview. Plus, VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform — a feature that’s generally available now. Gemini in Security Command Center now offers preview features that let security teams search for threats and other security events using natural language. It can also provide summaries of critical- and high-priority misconfiguration and vulnerability alerts, and summarize attack paths to help understand cloud risks for remediation.

We are also infusing AI into many of our cloud platform’s security services. Today, we’re announcing previews of new capabilities in Gemini Cloud Assist, including:

  • IAM Recommendations, which can provide straightforward, contextual recommendations to remove roles from over-permissioned users or service accounts to help uplevel IAM posture and reduce risk exposure.
  • Key Insights, which can provide assistance during encryption key creation based on its understanding of your data, your encryption preferences, and your compliance needs.
  • Confidential Computing Insights, which can recommend options for adding confidential computing protection for your most sensitive workloads based on your data and your compute usage.

Delivering a new frontline of defense for the enterprise

Chrome Enterprise Premium is a new offering that redefines, simplifies, and strengthens endpoint security. It brings together the most popular and trusted enterprise browser with Google’s advanced security capabilities, including threat and data protection, Zero Trust access controls, enterprise policy controls, and security insights and reporting. With Chrome Enterprise Premium, which is generally available today, hundreds of millions of enterprise users can get additional protection delivered instantly where they do their work every day.

“With Chrome Enterprise Premium, we have confidence in Google’s security expertise, including Project Zero’s cutting-edge security research and fast security patches. We set up data loss prevention restrictions and warnings for sharing sensitive information in applications like Generative AI platforms and noticed a noteworthy 50% reduction in content transfers,” said Nick Reva, head of corporate security engineering, Snap Inc.

Turning intelligence into action

Our focus on intelligence-driven outcomes continues with the launch of Applied Threat Intelligence in Google Security Operations. Applied Threat Intelligence takes our industry-leading global threat visibility and automatically applies it to each customer’s unique environment. It can help security operations teams uncover more threats with less effort and use the most up-to-date threat intelligence to address them before they create damage or loss.

Managing cloud risk

Security Command Center Enterprise, the industry’s first cloud risk-management solution that fuses proactive cloud security and enterprise security operations, is now generally available. This new solution offers security teams a single view of their posture controls, active threats, cloud identities, data, and more, while integrating remediation and issue accountability into end-to-end workflows. Mandiant Hunt for Security Command Center Enterprise is now in preview, and offers on-demand human expertise that can become an extension of internal security operations teams. Hundreds of elite-level analysts and researchers are available on call to proactively find elusive threats in organizations’ SCC environments.

New security capabilities in our trusted cloud

We continue our regular delivery of new security controls and capabilities on our cloud platform to help organizations meet evolving policy, compliance, and business objectives. Today we’re announcing the following updates:

For Identity and Access Management:
  • Privileged Access Manager (PAM), now available in preview, is designed to help mitigate risks tied to privileged access misuse or abuse. PAM can help customers shift from always-on standing privileges toward on-demand access with just-in-time, time-bound, and approval-based access elevations.
  • Principal Access Boundary (PAB) is a new, identity-centric control now in preview. It can empower security administrators to enforce restrictions on IAM principals so that they can only access authorized resources within a specific defined boundary.

For Network Security:
  • Cloud NGFW Enterprise is now generally available. Our cloud-first next-generation firewall (NGFW) includes threat protection powered by Palo Alto Networks with a unique distributed architecture that can provide granular control at the workload level.
  • Cloud Armor Enterprise, now generally available, offers a pay-as-you-go model that includes advanced network DDoS protection, web application firewall capabilities, network edge policy, adaptive protection, and threat intelligence to help protect your cloud applications and services.

For Data Security:
  • Confidential Accelerators: Confidential VMs on Intel TDX are now in preview and available on the C3 machine series. For AI and ML workloads, we support Intel AMX, which provides CPU-based acceleration by default on C3 series Confidential VMs. In addition, Confidential Computing will also be coming to A3 VMs with NVIDIA H100 GPUs in preview later this year. With these announcements, our Confidential Computing portfolio now spans Intel, AMD, and NVIDIA hardware.
  • Sensitive Data Protection integration with Cloud SQL is now generally available, and is deeply integrated into the Security Command Center Enterprise risk engine. This powerful combination can pinpoint high-value assets, analyze vulnerabilities in databases, and simulate real-world attack scenarios, enabling you to proactively address risks and safeguard data.
  • Key management with Autokey is now in preview. Autokey simplifies creating and managing customer-managed encryption keys (CMEK) by ensuring you use the right key type for each resource, reducing complexity and management overhead. Plus, Autokey can help you adhere to industry best practices for data security.
  • Expanded regions for bare metal HSM deployments allow you to deploy your own HSMs in PCI-compliant facilities alongside your Google Cloud workloads.

For our Regulated Cloud offerings:
  • Regional Controls for Assured Workloads is now in preview and is available in 32 cloud regions in 14 countries. The Regional Controls package can enforce data residency for customer content at rest, and offers administrative access transparency as well as compliant service restriction and monitoring. Regional Controls are available at no additional cost.
  • Audit Manager is now in preview. Audit Manager can help customers drastically simplify their compliance audit process by automating control verification with proof of compliance for their workloads and data on Google Cloud.

Take your next security steps with Google Cloud

Google’s decade of AI innovation, coupled with our security expertise, means we are strongly positioned to help you protect your users and brand by becoming an integral part of your security team. For more on our Next ‘24 announcements, you can watch our security spotlight and check out the many great security breakout sessions at Google Cloud Next — live or on-demand.
  4. What is Voltage SecureData?

Voltage SecureData is a comprehensive data security platform offering end-to-end encryption, tokenization, and data masking across diverse environments. It safeguards sensitive data like PII, financial information, and intellectual property at rest, in transit, and in use, empowering organizations to comply with data privacy regulations and minimize the risk of data breaches. Here’s what makes Voltage SecureData stand out:

  • Format-Preserving Encryption (FPE): Encrypts data while maintaining its original format and data types, eliminating the need for schema changes and simplifying integration with existing applications.
  • Multiple Tokenization Methods: Offers versatile tokenization options to replace sensitive data with non-sensitive surrogates, facilitating safe data sharing and analysis.
  • Granular Data Security Policies: Define precise data protection rules based on data types, users, applications, and workflows for optimized security.
  • Centralized Key Management: Securely manage encryption keys in a centralized location, ensuring consistent data protection and simplified key rotation.
  • Extensive Platform Integrations: Integrates seamlessly with diverse applications, databases, cloud platforms, and security tools for a unified data security ecosystem.

Top 10 use cases of Voltage SecureData

  1. Protecting Data at Rest: Encrypt databases, files, and cloud storage to safeguard sensitive information against unauthorized access.
  2. Securing Data in Transit: Encrypt data transfers across networks and cloud applications to prevent data breaches during transmission.
  3. Data Anonymization and Pseudonymization: Tokenize sensitive data before analysis or sharing to comply with data privacy regulations and minimize risk.
  4. Compliance with Data Privacy Regulations: Achieve compliance with GDPR, CCPA, HIPAA, and other regulations by ensuring appropriate data protection controls.
  5. Securing Cloud Applications: Protect sensitive data within cloud-based applications like Salesforce, Office 365, and AWS while maintaining functionality.
  6. Data Masking for Testing and Development: Mask sensitive data in test and development environments to prevent misuse and protect real-world data.
  7. Data Loss Prevention (DLP): Control data movement and prevent unauthorized data exfiltration through robust DLP capabilities.
  8. Securing Big Data and Analytics: Protect sensitive data within big data platforms and analytics workflows to facilitate secure data analysis.
  9. Data Breach Protection: Minimize the impact of data breaches by encrypting sensitive data, making it unusable to attackers.
  10. Identity and Access Management (IAM): Integrate with IAM solutions to enforce fine-grained access controls and protect data from unauthorized users.

Voltage SecureData offers a flexible and scalable solution for organizations of all sizes to address their unique data security challenges. By prioritizing data protection and adopting robust security measures, Voltage SecureData helps organizations build trust, mitigate risk, and ensure compliance.

What are the features of Voltage SecureData?

Encryption:
  • Format-Preserving Encryption (FPE): Encrypts data while maintaining its original format, allowing for seamless integration with existing applications and processes.
  • Strong Encryption Algorithms: Supports industry-standard encryption algorithms like AES, DES, and RSA for robust data protection.
  • Multi-Level Encryption: Encrypts data at multiple levels (file, field, record, or column) for granular control and enhanced security.

Tokenization:
  • Multiple Tokenization Methods: Offers a variety of tokenization techniques to suit different use cases, including static, dynamic, and vaultless tokenization.
  • Tokenization Vault: Securely stores and manages tokens for efficient data retrieval and de-tokenization.
  • Tokenization for Data Sharing and Analytics: Allows secure sharing and analysis of sensitive data without compromising privacy.

Data Masking:
  • Dynamic Data Masking: Masks sensitive data in real time, protecting it from unauthorized access during use.
  • Data Masking for Non-Production Environments: Safeguards sensitive data in test and development environments while maintaining functionality.
  • Reversible and Irreversible Data Masking: Offers options for both reversible and irreversible masking based on data privacy requirements.

Key Management:
  • Centralized Key Management: Manages encryption keys securely in a centralized location for simplified administration and policy enforcement.
  • Hardware Security Module (HSM) Support: Integrates with HSMs for enhanced key protection and compliance with industry standards.
  • Key Rotation and Lifecycle Management: Automates key rotation and lifecycle processes to maintain security and compliance.

Data Security Policies:
  • Granular Policy Enforcement: Defines and enforces data security policies based on data types, users, applications, and locations for precise control.
  • Policy-Based Encryption and Tokenization: Automatically encrypts or tokenizes data based on predefined policies to streamline data protection.
  • Auditing and Reporting: Tracks data access and usage for compliance reporting and security analysis.

Integration and Deployment:
  • Extensive Platform Integrations: Works seamlessly with various databases, applications, cloud platforms, and security tools.
  • Flexible Deployment Options: Available as on-premises, cloud-based, or hybrid solutions to fit diverse IT environments.
  • APIs for Custom Integrations: Provides APIs for seamless integration with custom applications and workflows.

Additional Features:
  • Data Loss Prevention (DLP) Capabilities: Identifies and protects sensitive data at rest, in transit, and in use to prevent data leaks.
  • Identity and Access Management (IAM) Integration: Works with IAM solutions to enforce access controls and protect data from unauthorized users.
  • Data Governance and Compliance: Facilitates compliance with data privacy regulations like GDPR, CCPA, and HIPAA.
  • Threat Monitoring and Detection: Detects and alerts on potential security threats to safeguard sensitive data.

How does Voltage SecureData work, and what is its architecture?

Here’s a breakdown of Voltage SecureData’s architecture and how it safeguards your sensitive data:

1. Data Identification and Classification:
  • Discovers sensitive data: Voltage SecureData scans your databases, files, and applications to identify sensitive information like PII, financial data, and trade secrets.
  • Classifies data: It categorizes sensitive data according to predefined data types, compliance requirements, and organizational policies for targeted protection.

2. Policy-Based Protection:
  • Define granular policies: Create and enforce data security policies that specify which data to protect, how to protect it (encryption, tokenization, or masking), and who can access it.
  • Automated enforcement: Policies are automatically applied to data in real time, ensuring consistent protection across your environment.

3. Encryption Engine:
  • Secures data at rest: Encrypts sensitive data within databases, files, and cloud storage using strong encryption algorithms, rendering it unreadable without the decryption key.
  • Protects data in transit: Encrypts data transfers across networks and between applications to prevent interception and unauthorized access.
  • Format-preserving encryption (FPE): Preserves the original format of encrypted data, allowing for seamless integration with existing applications and processes.

4. Tokenization Engine:
  • Replaces sensitive data with tokens: Uses tokenization to replace sensitive data with non-sensitive surrogate values (tokens), safeguarding the original data while enabling safe sharing and analysis.
  • Secure token vault: Stores and manages tokens in a secure vault, ensuring only authorized users can access the original data through de-tokenization.

5. Data Masking Engine:
  • Dynamically masks sensitive data: Obscures sensitive data in real time, protecting it from unauthorized viewing during use in applications or testing environments.
  • Preserves functionality: Masks data while maintaining its format and usability for testing and development purposes.

6. Centralized Key Management:
  • Securely stores and manages encryption keys: Houses encryption keys in a centralized, secure location, ensuring access control and simplified key lifecycle management.
  • Hardware security module (HSM) support: Integrates with HSMs for enhanced key protection and compliance with industry standards.

7. Integration and Extensibility:
  • Seamless integration with diverse platforms: Connects with databases, applications, cloud platforms, and security tools through APIs and connectors for unified data protection.
  • Flexible deployment options: Deploys on-premises, in the cloud, or in hybrid environments to align with your IT infrastructure.

8. Monitoring and Reporting:
  • Tracks data access and usage: Monitors data access and usage activity for compliance reporting and security analysis.
  • Generates detailed reports: Produces comprehensive reports on data protection status, policy compliance, and potential security incidents.

9. Administration and Management:
  • Centralized management console: Provides a centralized console for managing data security policies, encryption keys, tokenization vaults, and user access.
  • Role-based access controls: Enforces granular access controls to manage who can access and manage sensitive data.
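The tokenization engine and vault described above can be sketched in miniature (illustrative only, not Voltage's implementation): sensitive values are swapped for random surrogate tokens, and only the vault can map a token back to the original value.

```python
import secrets

class TokenVault:
    """Toy vault-based tokenizer: random surrogate tokens, two-way lookup."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Return the same token for repeat values, so joins and analytics
        # on tokenized data still work without exposing the original.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)  # random, carries no information
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault holder can recover the original value.
        return self._token_to_value[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
print(t.startswith("tok_"))                        # True: a surrogate, not the PAN
print(vault.detokenize(t))                         # original recovered by the vault
print(vault.tokenize("4111-1111-1111-1111") == t)  # True: stable mapping
```

A real tokenization service adds access control around `detokenize`, durable storage for the vault, and (in vaultless schemes) derives tokens cryptographically instead of storing a table.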
By combining these components and functionalities, Voltage SecureData creates a robust and adaptable framework for safeguarding sensitive data across your organization, empowering you to meet compliance requirements and mitigate data security risks effectively.

How to install Voltage SecureData?

Exact installation steps depend on your specific environment and require product expertise, but the general process and key considerations below can guide you:

1. Planning and Assessment:
  • Evaluate needs: Analyze your organization’s data security requirements, compliance needs, and existing infrastructure to determine the most suitable Voltage SecureData solution and deployment model.
  • Consult experts: Engage with Voltage SecureData representatives or certified partners for guidance on best practices, licensing models, and configuration options.

2. System Requirements:
  • Review prerequisites: Ensure your systems meet the minimum hardware and software requirements, including compatible operating systems, database versions, and supported platforms.

3. Installation Options:
  • Choose a deployment model: Select between on-premises installation (on your own servers or virtual machines) or cloud-based deployment (hosted by Voltage SecureData).

4. Installation Steps (General Outline):
  • Obtain installation files: Acquire the necessary software components from Voltage SecureData.
  • Set up infrastructure: Prepare servers, databases, and network components according to Voltage SecureData’s guidelines.
  • Install core components: Follow Voltage SecureData’s instructions to install the Voltage SecureData Server, Key Management Server, and associated agents.
  • Configure settings: Tailor Voltage SecureData to your specific needs, including data sources, security policies, user permissions, and alerts.

5. Agent Deployment:
  • Install agents: Deploy Voltage SecureData agents on file servers, endpoints, databases, and applications to monitor data flow and enforce protection.

6. Integration and Testing:
  • Connect with other tools: Integrate Voltage SecureData with your existing security tools (e.g., SIEM, firewalls) for a comprehensive security ecosystem.
  • Run thorough tests: Conduct rigorous testing to ensure Voltage SecureData functions correctly and aligns with your security policies.

7. Training and Ongoing Maintenance:
  • Educate users: Train your team on Voltage SecureData’s features, functionalities, and best practices for effective use.
  • Regular updates: Maintain Voltage SecureData with regular software updates and security patches to address vulnerabilities and enhance capabilities.

Essential Reminders:
  • Voltage expertise: Involve Voltage SecureData professionals or certified partners for assistance with installation, configuration, and ongoing support.
  • Documentation: Refer to Voltage SecureData’s comprehensive documentation for detailed instructions and troubleshooting guidance.
  • Security best practices: Adhere to security best practices during installation and configuration to protect sensitive data.

Basic Tutorials of Voltage SecureData: Getting Started

The following step-by-step tutorials provide clear instructions and text descriptions to guide you through the basics of Voltage SecureData.

1. Installation and Configuration:
  • Download the Voltage SecureData Client: Obtain the installation file from the official Voltage website or your company’s IT administrator.
  • Run the installer: Follow the on-screen prompts to complete the installation process.
  • Configure the client: Access the Voltage SecureData settings to specify your organization’s Voltage server address, your user credentials, and any additional security policies or preferences.

2. Protecting Data:
  • Right-click on a file or folder: Choose “Protect with Voltage SecureData” from the context menu.
  • Assign access permissions: Specify who can access the protected data and what actions they can perform (view, edit, print, etc.).
  • Apply protection: The client encrypts the data and applies access controls.

3. Accessing Protected Data:
  • Double-click on a protected file: The Voltage SecureData Client prompts you for authentication.
  • Enter your credentials: Provide your username and password, or use other authentication methods like two-factor authentication.
  • View and work with the data: Once authenticated, you can access and use the protected data as usual, within the authorized permissions.

4. Sharing Protected Data:
  • Use the “Share” option: Within the Voltage SecureData Client, select the file or folder you want to share and click the “Share” button.
  • Specify recipients and permissions: Indicate who should receive access and what actions they can perform.
  • Send share notification: The recipients receive a notification with instructions on how to access the protected data.

5. Managing Policies and Settings:
  • Access the Voltage SecureData Admin Console: This web-based interface allows administrators to manage users and groups, set access control policies, monitor usage logs, and configure security settings.

The post What is Voltage SecureData and use cases of Voltage SecureData? appeared first on DevOpsSchool.com.
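The format-preserving encryption idea that runs through this item can be illustrated with a deliberately insecure toy: a keyed, position-dependent digit substitution that keeps length and separators intact, so the output still passes card-number format checks and fits existing schemas. Real FPE uses the NIST-standardized FF1 mode of AES; this sketch only demonstrates the format-preservation property.

```python
import hashlib

def toy_fpe(digits: str, key: bytes) -> str:
    """Toy format-preserving transform: digits map to digits, separators
    pass through, so the output has exactly the shape of the input.
    NOT cryptographically secure -- for illustration only."""
    out = []
    for i, d in enumerate(digits):
        if d.isdigit():
            # Position-dependent keyed offset keeps the digit alphabet.
            h = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
            out.append(str((int(d) + h[0]) % 10))
        else:
            out.append(d)  # hyphens and other separators are unchanged
    return "".join(out)

def toy_fpe_dec(digits: str, key: bytes) -> str:
    """Inverse transform: subtract the same keyed offset."""
    out = []
    for i, d in enumerate(digits):
        if d.isdigit():
            h = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
            out.append(str((int(d) - h[0]) % 10))
        else:
            out.append(d)
    return "".join(out)

card = "4111-1111-1111-1111"
enc = toy_fpe(card, b"demo-key")
print(len(enc) == len(card))                       # True: length preserved
print(all(c.isdigit() or c == "-" for c in enc))   # True: still card-shaped
print(toy_fpe_dec(enc, b"demo-key") == card)       # True: round-trips
```

Because the ciphertext is "still a card number", no database column types, validators, or downstream applications need to change, which is exactly the integration benefit FPE is sold on.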
  5. AWS Private Certificate Authority (AWS Private CA) launches the Connector for Active Directory (AD). The Connector for AD allows you to use AWS Private CA as a drop-in replacement for your self-managed enterprise certificate authorities without the need to deploy, patch, or update local agents or proxy servers. Enterprises that use AD to manage Windows environments can reduce their private certificate authority (CA) costs and complexity. You can help meet your security and compliance goals by using AWS Private CA, a fully-managed service, which stores CA private keys in FIPS 140 validated hardware security modules (HSMs). View the full article
  7. Authors: Fabian Kammel (Edgeless Systems), Mikko Ylinen (Intel), Tobin Feldman-Fitzthum (IBM) In this blog post, we will introduce the concept of Confidential Computing (CC) to improve any computing environment's security and privacy properties. Further, we will show how the Cloud-Native ecosystem, particularly Kubernetes, can benefit from the new compute paradigm. Confidential Computing is not a new concept in the cloud-native world. The Confidential Computing Consortium (CCC) is a project community in the Linux Foundation that has already worked on Defining and Enabling Confidential Computing. In the Whitepaper, they provide a great motivation for the use of Confidential Computing: Data exists in three states: in transit, at rest, and in use. …Protecting sensitive data in all of its states is more critical than ever. Cryptography is now commonly deployed to provide both data confidentiality (stopping unauthorized viewing) and data integrity (preventing or detecting unauthorized changes). While techniques to protect data in transit and at rest are now commonly deployed, the third state - protecting data in use - is the new frontier. Confidential Computing aims to primarily solve the problem of protecting data in use by introducing a hardware-enforced Trusted Execution Environment (TEE).

Trusted Execution Environments

For more than a decade, Trusted Execution Environments (TEEs) have been available in commercial computing hardware in the form of Hardware Security Modules (HSMs) and Trusted Platform Modules (TPMs). These technologies provide trusted environments for shielded computations. They can store highly sensitive cryptographic keys and carry out critical cryptographic operations such as signing or encrypting data. TPMs are optimized for low cost, allowing them to be integrated into mainboards and act as a system's physical root of trust.
To keep the cost low, TPMs are limited in scope, i.e., they provide storage for only a few keys and are capable of just a small subset of cryptographic operations. In contrast, HSMs are optimized for high performance, providing secure storage for far more keys and offering advanced physical attack detection mechanisms. Additionally, high-end HSMs can be programmed so that arbitrary code can be compiled and executed. The downside is that they are very costly. A managed CloudHSM from AWS costs around $1.50 / hour or ~$13,500 / year. In recent years, a new kind of TEE has gained popularity. Technologies like AMD SEV, Intel SGX, and Intel TDX provide TEEs that are closely integrated with userspace. Rather than low-power or high-performance devices that support specific use cases, these TEEs shield normal processes or virtual machines and can do so with relatively low overhead. These technologies each have different design goals, advantages, and limitations, and they are available in different environments, including consumer laptops, servers, and mobile devices. Additionally, we should mention ARM TrustZone, which is optimized for embedded devices such as smartphones, tablets, and smart TVs, as well as AWS Nitro Enclaves, which are only available on Amazon Web Services and have a different threat model compared to the CPU-based solutions by Intel and AMD. IBM Secure Execution for Linux lets you run your Kubernetes cluster's nodes as KVM guests within a trusted execution environment on IBM Z series hardware. You can use this hardware-enhanced virtual machine isolation to provide strong isolation between tenants in a cluster, with hardware attestation about the (virtual) node's integrity.

Security properties and feature set

In the following sections, we will review the security properties and additional features these new technologies bring to the table.
Only some solutions will provide all properties; we will discuss each technology in further detail in its respective section. The Confidentiality property ensures that information cannot be viewed while it is in use in the TEE. This provides us with the highly desired feature to secure data in use. Depending on the specific TEE used, both code and data may be protected from outside viewers. The differences between TEE architectures, and how they are used in a cloud-native context, are important considerations when designing end-to-end security for sensitive workloads with a minimal Trusted Computing Base (TCB) in mind. CCC has recently worked on a common vocabulary and supporting material that helps to explain where confidentiality boundaries are drawn with the different TEE architectures and how that impacts the TCB size. Confidentiality is a great feature, but an attacker can still manipulate or inject arbitrary code and data for the TEE to execute and, therefore, easily leak critical information. Integrity guarantees a TEE owner that neither code nor data can be tampered with while running critical computations. Availability is a basic property often discussed in the context of information security. However, this property is outside the scope of most TEEs. Usually, they can be controlled (shut down, restarted, …) by some higher-level abstraction. This could be the CPU itself, the hypervisor, or the kernel. This is to preserve the overall system's availability, not the availability of the TEE itself. When running in the cloud, availability is usually guaranteed by the cloud provider in terms of Service Level Agreements (SLAs) and is not cryptographically enforceable. Confidentiality and Integrity by themselves are only helpful in some cases. For example, consider a TEE running in a remote cloud. How would you know the TEE is genuine and running your intended software? It could be an imposter stealing your data as soon as you send it over.
This fundamental problem is addressed by Attestability. Attestation allows us to verify the identity, confidentiality, and integrity of TEEs based on cryptographic certificates issued from the hardware itself. This feature can also be made available to clients outside of the confidential computing hardware in the form of remote attestation. TEEs can hold and process information that predates or outlives the trusted environment. That could mean across restarts, different versions, or platform migrations. Therefore, Recoverability is an important feature. Data and the state of a TEE need to be sealed before they are written to persistent storage to maintain confidentiality and integrity guarantees. The access to such sealed data needs to be well-defined. In most cases, the unsealing is bound to a TEE's identity, ensuring that recovery can only happen in the same confidential context. This does not have to limit the flexibility of the overall system. AMD SEV-SNP's migration agent (MA) allows users to migrate a confidential virtual machine to a different host system while keeping the security properties of the TEE intact.

Feature comparison

The following sections dive a little deeper into the specific implementations, compare supported features, and analyze their security properties.

AMD SEV

AMD's Secure Encrypted Virtualization (SEV) technologies are a set of features to enhance the security of virtual machines on AMD's server CPUs. SEV transparently encrypts the memory of each VM with a unique key. SEV can also calculate a signature of the memory contents, which can be sent to the VM's owner as an attestation that the initial guest memory was not manipulated. The second generation of SEV, known as Encrypted State or SEV-ES, provides additional protection from the hypervisor by encrypting all CPU register contents when a context switch occurs.
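The sealing idea described above can be sketched in a few lines of code. The following is a toy Python illustration, not any vendor's API: the hardware secret, the HMAC-based key derivation, and the XOR-keystream cipher are all stand-ins chosen for readability. The point it demonstrates is that the sealing key is derived from the TEE's measurement, so data sealed under one enclave identity cannot be unsealed by code with a different measurement:

```python
import hashlib
import hmac

HARDWARE_SECRET = b"fused-into-the-cpu"  # stand-in for a per-chip secret

def sealing_key(measurement: bytes) -> bytes:
    # The key is bound to the TEE's identity: a different measurement
    # derives a completely different key.
    return hmac.new(HARDWARE_SECRET, measurement, hashlib.sha256).digest()

def _keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256 (illustrative only).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(measurement: bytes, plaintext: bytes) -> bytes:
    key = sealing_key(measurement)
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()  # integrity guarantee
    return tag + ciphertext

def unseal(measurement: bytes, sealed: bytes) -> bytes:
    key = sealing_key(measurement)
    tag, ciphertext = sealed[:32], sealed[32:]
    if not hmac.compare_digest(tag, hmac.new(key, ciphertext, hashlib.sha256).digest()):
        raise ValueError("wrong TEE identity or tampered data")
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))

state = seal(b"enclave-v1", b"db-encryption-key")
assert unseal(b"enclave-v1", state) == b"db-encryption-key"
```

An enclave with a different measurement (say, b"enclave-v2") fails the integrity check and cannot recover the state, which is exactly why real TEEs pair sealing with migration or upgrade policies such as SEV-SNP's migration agent.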
The third generation of SEV, Secure Nested Paging or SEV-SNP, is designed to prevent software-based integrity attacks and reduce the risk associated with compromised memory integrity. The basic principle of SEV-SNP integrity is that if a VM can read a private (encrypted) memory page, it must always read the value it last wrote. Additionally, by allowing the guest to obtain remote attestation statements dynamically, SNP enhances the remote attestation capabilities of SEV. AMD SEV has been implemented incrementally. New features and improvements have been added with each new CPU generation. The Linux community makes these features available as part of the KVM hypervisor and for host and guest kernels. The first SEV features were discussed and implemented in 2016 - see AMD x86 Memory Encryption Technologies from the 2016 Usenix Security Symposium. The latest big addition was SEV-SNP guest support in Linux 5.19. Confidential VMs based on AMD SEV-SNP have been available in Microsoft Azure since July 2022. Similarly, Google Cloud Platform (GCP) offers confidential VMs based on AMD SEV-ES.

Intel SGX

Intel's Software Guard Extensions (SGX) have been available since 2015, introduced with the Skylake architecture. SGX is an instruction set that enables users to create a protected and isolated process called an enclave. It provides a reverse sandbox that protects enclaves from the operating system, firmware, and any other privileged execution context. The enclave memory cannot be read or written from outside the enclave, regardless of the current privilege level and CPU mode. The only way to call an enclave function is through a new instruction that performs several protection checks. Its memory is encrypted. Tapping the memory or connecting the DRAM modules to another system will yield only encrypted data. The memory encryption key randomly changes every power cycle. The key is stored within the CPU and is not accessible.
Since the enclaves are process isolated, the operating system's libraries are not usable as is; therefore, SGX enclave SDKs are required to compile programs for SGX. This also implies applications need to be designed and implemented to consider the trusted/untrusted isolation boundaries. On the other hand, applications get built with a very minimal TCB. An emerging approach to easily transition to process-based confidential computing and avoid the need to build custom applications is to utilize library OSes. These OSes facilitate running native, unmodified Linux applications inside SGX enclaves. A library OS intercepts all application requests to the host OS and processes them securely without the application knowing it's running in a TEE. The 3rd generation Xeon CPUs (aka Ice Lake Server - "ICX") and later generations switched to a technology called Total Memory Encryption - Multi-Key (TME-MK) that uses AES-XTS, moving away from the Memory Encryption Engine that the consumer and Xeon E CPUs used. This increased the possible enclave page cache (EPC) size (up to 512GB/CPU) and improved performance. More info about SGX on multi-socket platforms can be found in the Whitepaper. A list of supported platforms is available from Intel. SGX is available on Azure, Alibaba Cloud, IBM, and many more.

Intel TDX

Where Intel SGX aims to protect the context of a single process, Intel's Trust Domain Extensions protect a full virtual machine and are, therefore, most closely comparable to AMD SEV. As with SEV-SNP, guest support for TDX was merged in Linux Kernel 5.19. However, hardware support will land with Sapphire Rapids during 2023: Alibaba Cloud provides invitational preview instances, and Azure has announced its TDX preview opportunity.

Overhead analysis

The benefits that Confidential Computing technologies provide via strong isolation and enhanced security to customer data and workloads are not free.
Quantifying this impact is challenging and depends on many factors: the TEE technology, the benchmark, the metrics, and the type of workload all have a huge impact on the expected performance overhead. Intel SGX-based TEEs are hard to benchmark, as shown by different papers. The chosen SDK/library OS, the application itself, as well as the resource requirements (especially large memory requirements) have a huge impact on performance. A single-digit percentage overhead can be expected if an application is well suited to run inside an enclave. Confidential virtual machines based on AMD SEV-SNP require no changes to the executed program and operating system and are a lot easier to benchmark. A benchmark from Azure and AMD shows that SEV-SNP VM overhead is <10%, sometimes as low as 2%. Although there is a performance overhead, it should be low enough to enable real-world workloads to run in these protected environments and improve the security and privacy of our data.

Confidential Computing compared to FHE, ZKP, and MPC

Fully Homomorphic Encryption (FHE), Zero Knowledge Proof/Protocol (ZKP), and Multi-Party Computations (MPC) are all forms of encryption or cryptographic protocols that offer similar security guarantees to Confidential Computing but do not require hardware support. Fully (also partially and somewhat) homomorphic encryption allows one to perform computations, such as addition or multiplication, on encrypted data. This provides the property of encryption in use but does not provide integrity protection or attestation like confidential computing does. Therefore, these two technologies can complement each other. Zero Knowledge Proofs or Protocols are a privacy-preserving technique (PPT) that allows one party to prove facts about their data without revealing anything else about the data. ZKP can be used instead of or in addition to Confidential Computing to protect the privacy of the involved parties and their data.
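The "computation on encrypted data" property of homomorphic encryption is easy to demonstrate with the Paillier cryptosystem, a partially (additively) homomorphic scheme. The following is a minimal, deliberately insecure Python sketch: the tiny hard-coded primes are chosen only to keep the arithmetic readable. It shows that multiplying two ciphertexts yields an encryption of the sum of the plaintexts, without either plaintext ever being decrypted:

```python
import math
import random

# Toy Paillier parameters -- far too small for real use.
p, q = 17, 19
n = p * q                      # public modulus (message space is 0..n-1)
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # with g = n + 1, L(g^lam mod n^2) = lam mod n

def encrypt(m: int) -> int:
    assert 0 <= m < n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be invertible modulo n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n  # the L(x) = (x - 1) / n function
    return (L * mu) % n

a, b = 12, 30
c_sum = (encrypt(a) * encrypt(b)) % n2  # homomorphic addition of ciphertexts
assert decrypt(c_sum) == a + b          # 42, computed on encrypted values
```

Real deployments use key sizes of thousands of bits and carefully vetted libraries; note also that nothing here attests *where* or *by whom* the computation ran, which is the gap Confidential Computing fills.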
Similarly, Multi-Party Computation enables multiple parties to work together on a computation, i.e., each party contributes its data to the result without leaking it to any other parties.

Use cases of Confidential Computing

The presented Confidential Computing platforms show that both the isolation of a single container process (and, therefore, minimization of the trusted computing base) and the isolation of a full virtual machine are possible. This has already enabled a lot of interesting and secure projects to emerge:

Confidential Containers

Confidential Containers (CoCo) is a CNCF sandbox project that isolates Kubernetes pods inside of confidential virtual machines. CoCo can be installed on a Kubernetes cluster with an operator. The operator will create a set of runtime classes that can be used to deploy pods inside an enclave on several different platforms, including AMD SEV, Intel TDX, Secure Execution for IBM Z, and Intel SGX. CoCo is typically used with signed and/or encrypted container images which are pulled, verified, and decrypted inside the enclave. Secrets, such as image decryption keys, are conditionally provisioned to the enclave by a trusted Key Broker Service that validates the hardware evidence of the TEE prior to releasing any sensitive information. CoCo has several deployment models. Since the Kubernetes control plane is outside the TCB, CoCo is suitable for managed environments. CoCo can be run in virtual environments that don't support nesting with the help of an API adaptor that starts pod VMs in the cloud. CoCo can also be run on bare metal, providing strong isolation even in multi-tenant environments.

Managed confidential Kubernetes

Azure and GCP both support the use of confidential virtual machines as worker nodes for their managed Kubernetes offerings. Both services aim for better workload protection and security guarantees by enabling memory encryption for container workloads.
However, they don't seek to fully isolate the cluster or workloads against the service provider or infrastructure. Specifically, they don't offer a dedicated confidential control plane or expose attestation capabilities for the confidential cluster/nodes. Azure also enables Confidential Containers in their managed Kubernetes offering. They support creating these containers based on Intel SGX enclaves and on AMD SEV-based VMs.

Constellation

Constellation is a Kubernetes engine that aims to provide the best possible data security. Constellation wraps your entire Kubernetes cluster into a single confidential context that is shielded from the underlying cloud infrastructure. Everything inside is always encrypted, including at runtime in memory. It shields both the worker and control plane nodes. In addition, it already integrates with popular CNCF software such as Cilium for secure networking and provides extended CSI drivers to write data securely.

Occlum and Gramine

Occlum and Gramine are examples of open source library OS projects that can be used to run unmodified applications in SGX enclaves. They are member projects under the CCC, but similar projects and products maintained by companies also exist. With these libOS projects, existing containerized applications can be easily converted into confidential computing enabled containers. Many curated prebuilt containers are also available.

Where are we today? Vendors, limitations, and FOSS landscape

As we hope you have seen from the previous sections, Confidential Computing is a powerful new concept to improve security, but we are still in the (early) adoption phase. New products are starting to emerge to take advantage of the unique properties. Google and Microsoft are the first major cloud providers to have confidential offerings that can run unmodified applications inside a protected boundary.
Still, these offerings are limited to compute, while end-to-end solutions for confidential databases, cluster networking, and load balancers have to be self-managed. These technologies provide opportunities to bring even the most sensitive workloads into the cloud and enable them to leverage all the tools in the CNCF landscape.

Call to action

If you are currently working on a high-security product that struggles to run in the public cloud due to legal requirements, or are looking to bring the privacy and security of your cloud-native project to the next level: reach out to all the great projects we have highlighted! Everyone is keen to improve the security of our ecosystem, and you can play a vital role in that journey.

Confidential Containers
Constellation: Always Encrypted Kubernetes
Occlum
Gramine
CCC also maintains a list of projects

View the full article
  7. Today, AWS is announcing a new service, AWS Payment Cryptography. This service simplifies your implementation of cryptography operations used to secure data in payment processing applications for debit, credit, and stored-value cards in accordance with various payment card industry (PCI), network, and American National Standards Institute (ANSI) standards and rules. Financial service providers and processors can replace their on-premises hardware security modules (HSMs) with this elastic service and move their payments-specific cryptography and key management functions to the cloud. View the full article
  8. Cryptography is everywhere in our daily lives. If you’re reading this blog, you’re using HTTPS, an extension of HTTP that uses encryption to secure communications. On AWS, multiple services and capabilities help you manage keys and encryption, such as:

- AWS Key Management Service (AWS KMS), which you can use to create and protect keys to encrypt or digitally sign your data.
- AWS CloudHSM, which you can use to manage single-tenant hardware security modules (HSMs). HSMs are physical devices that securely protect cryptographic operations and the keys used by these operations. HSMs can help you meet your corporate, contractual, and regulatory compliance requirements.

With CloudHSM, you have access to general-purpose HSMs. When payments are involved, there are specific payment HSMs that offer capabilities such as generating and validating the personal identification number (PIN) and the security code of a credit or debit card. Today, I am happy to share the availability of AWS Payment Cryptography, an elastic service that manages payment HSMs and keys for payment processing applications in the cloud. Applications using payment HSMs have challenging requirements because payment processing is complex, time sensitive, and highly regulated and requires the interaction of multiple financial service providers and payment networks. Every time you make a payment, data is exchanged between two or more financial service providers and must be decrypted, transformed, and encrypted again with a unique key at each step. This process requires highly performant cryptography capabilities and key management procedures between each payment service provider. These providers might have thousands of keys to protect, manage, rotate, and audit, making the overall process expensive and difficult to scale.
To add to that, payment HSMs historically employ complex and error-prone processes, such as exchanging keys in a secure room using multiple hand-carried paper forms, each with separate key components printed on them.

Introducing AWS Payment Cryptography

AWS Payment Cryptography simplifies your implementation of cryptographic functions and key management used to secure data in payment processing in accordance with various payment card industry (PCI) standards. With AWS Payment Cryptography, you can eliminate the need to provision and manage on-premises payment HSMs and use the provided tools to avoid error-prone key exchange processes. For example, with AWS Payment Cryptography, payment and financial service providers can begin development within minutes and plan to exchange keys electronically, eliminating manual processes. To provide its elastic cryptographic capabilities in a compliant manner, AWS Payment Cryptography uses HSMs with PCI PTS HSM device approval. These capabilities include encryption and decryption of card data, key creation, and PIN translation. AWS Payment Cryptography is also designed in accordance with PCI security standards such as PCI DSS, PCI PIN, and PCI P2PE, and it provides evidence and reporting to help meet your compliance needs. You can import and export symmetric keys between AWS Payment Cryptography and on-premises HSMs under key encryption keys (KEKs) using the ANSI X9 TR-31 protocol. You can also import and export symmetric KEKs with other systems and devices using the ANSI X9 TR-34 protocol, which allows the service to exchange symmetric keys using asymmetric techniques. To simplify moving consumer payment processing to the cloud, existing card payment applications can use AWS Payment Cryptography through the AWS SDKs. In this way, you can use your favorite programming language, such as Java or Python, instead of vendor-specific ASCII interfaces over TCP sockets, as is common with payment HSMs.
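As a companion to the CLI walkthrough that follows, here is a hedged sketch of what the same flow could look like with the AWS SDK for Python (boto3). The parameter shapes mirror the CLI; treat the client and method names as assumptions to verify against the boto3 reference for the payment-cryptography services before relying on them:

```python
# Hypothetical boto3 sketch -- verify client/method names against the SDK docs.
# The attribute shapes mirror the AWS CLI example in this walkthrough.

CVK_ATTRIBUTES = {
    "KeyAlgorithm": "TDES_2KEY",
    "KeyUsage": "TR31_C0_CARD_VERIFICATION_KEY",
    "KeyClass": "SYMMETRIC_KEY",
    "KeyModesOfUse": {"Generate": True, "Verify": True},
}

def create_cvk() -> str:
    # Imported lazily so the attribute definitions above are usable without the SDK.
    import boto3
    control_plane = boto3.client("payment-cryptography")  # control plane: keys, aliases
    key = control_plane.create_key(Exportable=False, KeyAttributes=CVK_ATTRIBUTES)
    return key["Key"]["KeyArn"]

def generate_cvv2(key_arn: str, pan: str, expiry: str) -> str:
    import boto3
    data_plane = boto3.client("payment-cryptography-data")  # data plane: crypto operations
    result = data_plane.generate_card_validation_data(
        KeyIdentifier=key_arn,
        PrimaryAccountNumber=pan,
        GenerationAttributes={"CardVerificationValue2": {"CardExpiryDate": expiry}},
    )
    return result["ValidationData"]
```

The split into two clients reflects the service's two endpoints: one for key and alias management, one for the cryptographic data operations themselves.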
Access can be authorized using AWS Identity and Access Management (IAM) identity-based policies, where you can specify which actions and resources are allowed or denied and under which conditions. Monitoring is important to maintain the reliability, availability, and performance needed by payment processing. With AWS Payment Cryptography, you can use Amazon CloudWatch, AWS CloudTrail, and Amazon EventBridge to understand what is happening, report when something is wrong, and take automatic actions when appropriate. Let’s see how this works in practice.

Using AWS Payment Cryptography

Using the AWS Command Line Interface (AWS CLI), I create a double-length 3DES key to be used as a card verification key (CVK). A CVK is a key used for generating and verifying card security codes such as CVV, CVV2, and similar values. Note that there are two commands for the CLI (and similarly two endpoints for API and SDKs):

- payment-cryptography for control plane operations such as listing and creating keys and aliases.
- payment-cryptography-data for cryptographic operations that use keys, for example, to generate PIN or card validation data.
Creating a key is a control plane operation:

aws payment-cryptography create-key \
    --no-exportable \
    --key-attributes KeyAlgorithm=TDES_2KEY, \
        KeyUsage=TR31_C0_CARD_VERIFICATION_KEY, \
        KeyClass=SYMMETRIC_KEY, \
        KeyModesOfUse='{Generate=true,Verify=true}'

{
  "Key": {
    "KeyArn": "arn:aws:payment-cryptography:us-west-2:123412341234:key/42cdc4ocf45mg54h",
    "KeyAttributes": {
      "KeyUsage": "TR31_C0_CARD_VERIFICATION_KEY",
      "KeyClass": "SYMMETRIC_KEY",
      "KeyAlgorithm": "TDES_2KEY",
      "KeyModesOfUse": { "Encrypt": false, "Decrypt": false, "Wrap": false, "Unwrap": false, "Generate": true, "Sign": false, "Verify": true, "DeriveKey": false, "NoRestrictions": false }
    },
    "KeyCheckValue": "B2DD4E",
    "KeyCheckValueAlgorithm": "ANSI_X9_24",
    "Enabled": true,
    "Exportable": false,
    "KeyState": "CREATE_COMPLETE",
    "KeyOrigin": "AWS_PAYMENT_CRYPTOGRAPHY",
    "CreateTimestamp": "2023-05-26T14:25:48.240000+01:00",
    "UsageStartTimestamp": "2023-05-26T14:25:48.220000+01:00"
  }
}

To reference this key in the next steps, I can use the Amazon Resource Name (ARN) as found in the KeyArn property, or I can create an alias. An alias is a friendly name that lets me refer to a key without having to use the full ARN. I can update an alias to refer to a different key. When I need to replace a key, I can just update the alias without having to change the configuration or the code of your applications. To be recognized easily, alias names start with alias/.
For example, the following command creates the alias alias/my-key for the key I just created:

aws payment-cryptography create-alias --alias-name alias/my-key \
    --key-arn arn:aws:payment-cryptography:us-west-2:123412341234:key/42cdc4ocf45mg54h

{
  "Alias": {
    "AliasName": "alias/my-key",
    "KeyArn": "arn:aws:payment-cryptography:us-west-2:123412341234:key/42cdc4ocf45mg54h"
  }
}

Before I start using the new key, I list all my keys to check their status:

aws payment-cryptography list-keys

{
  "Keys": [
    {
      "KeyArn": "arn:aws:payment-cryptography:us-west-2:123412341234:key/42cdc4ocf45mg54h",
      "KeyAttributes": {
        "KeyUsage": "TR31_C0_CARD_VERIFICATION_KEY",
        "KeyClass": "SYMMETRIC_KEY",
        "KeyAlgorithm": "TDES_2KEY",
        "KeyModesOfUse": { "Encrypt": false, "Decrypt": false, "Wrap": false, "Unwrap": false, "Generate": true, "Sign": false, "Verify": true, "DeriveKey": false, "NoRestrictions": false }
      },
      "KeyCheckValue": "B2DD4E",
      "Enabled": true,
      "Exportable": false,
      "KeyState": "CREATE_COMPLETE"
    },
    {
      "KeyArn": "arn:aws:payment-cryptography:us-west-2:123412341234:key/ok4oliaxyxbjuibp",
      "KeyAttributes": {
        "KeyUsage": "TR31_C0_CARD_VERIFICATION_KEY",
        "KeyClass": "SYMMETRIC_KEY",
        "KeyAlgorithm": "TDES_2KEY",
        "KeyModesOfUse": { "Encrypt": false, "Decrypt": false, "Wrap": false, "Unwrap": false, "Generate": true, "Sign": false, "Verify": true, "DeriveKey": false, "NoRestrictions": false }
      },
      "KeyCheckValue": "905848",
      "Enabled": true,
      "Exportable": false,
      "KeyState": "DELETE_PENDING"
    }
  ]
}

As you can see, there is another key I created before, which has since been deleted. When a key is deleted, it is marked for deletion (DELETE_PENDING). The actual deletion happens after a configurable period (by default, 7 days). This is a safety mechanism to prevent the accidental or malicious deletion of a key. Keys marked for deletion are not available for use but can be restored.
In a similar way, I list all my aliases to see which keys they refer to:

aws payment-cryptography list-aliases

{
  "Aliases": [
    {
      "AliasName": "alias/my-key",
      "KeyArn": "arn:aws:payment-cryptography:us-west-2:123412341234:key/42cdc4ocf45mg54h"
    }
  ]
}

Now, I use the key to generate a card security code with the CVV2 authentication system. You might be familiar with CVV2 numbers that are usually written on the back of a credit card. This is the way they are computed. I provide as input the primary account number of the credit card, the card expiration date, and the key from the previous step. To specify the key, I use its alias. This is a data plane operation:

aws payment-cryptography-data generate-card-validation-data \
    --key-identifier alias/my-key \
    --primary-account-number=171234567890123 \
    --generation-attributes CardVerificationValue2={CardExpiryDate=0124}

{
  "KeyArn": "arn:aws:payment-cryptography:us-west-2:123412341234:key/42cdc4ocf45mg54h",
  "KeyCheckValue": "B2DD4E",
  "ValidationData": "343"
}

I take note of the three digits in the ValidationData property. When processing a payment, I can verify that the card data value is correct:

aws payment-cryptography-data verify-card-validation-data \
    --key-identifier alias/my-key \
    --primary-account-number=171234567890123 \
    --verification-attributes CardVerificationValue2={CardExpiryDate=0124} \
    --validation-data 343

{
  "KeyArn": "arn:aws:payment-cryptography:us-west-2:123412341234:key/42cdc4ocf45mg54h",
  "KeyCheckValue": "B2DD4E"
}

The verification is successful, and in return I get back the same KeyCheckValue as when I generated the validation data.
As you might expect, if I use the wrong validation data, the verification is not successful, and I get back an error:

aws payment-cryptography-data verify-card-validation-data \
    --key-identifier alias/my-key \
    --primary-account-number=171234567890123 \
    --verification-attributes CardVerificationValue2={CardExpiryDate=0124} \
    --validation-data 999

An error occurred (com.amazonaws.paymentcryptography.exception#VerificationFailedException) when calling the VerifyCardValidationData operation: Card validation data verification failed

In the AWS Payment Cryptography console, I choose View Keys to see the list of keys. Optionally, I can enable more columns, for example, to see the key type (symmetric/asymmetric) and the algorithm used. I choose the key I used in the previous example to get more details. Here, I see the cryptographic configuration, the tags assigned to the key, and the aliases that refer to this key. AWS Payment Cryptography supports many more operations than the ones I showed here. For this walkthrough, I used the AWS CLI. In your applications, you can use AWS Payment Cryptography through any of the AWS SDKs.

Availability and Pricing

AWS Payment Cryptography is available today in the following AWS Regions: US East (N. Virginia) and US West (Oregon). With AWS Payment Cryptography, you only pay for what you use based on the number of active keys and API calls with no up-front commitment or minimum fee. For more information, see AWS Payment Cryptography pricing. AWS Payment Cryptography removes your dependencies on dedicated payment HSMs and legacy key management systems, simplifying your integration with AWS native APIs. In addition, by operating the entire payment application in the cloud, you can minimize round-trip communications and latency. Move your payment processing applications to the cloud with AWS Payment Cryptography. — Danilo View the full article
  9. AWS Key Management Service (KMS) announced today that the hardware security modules (HSMs) used in the service were awarded Federal Information Processing Standards (FIPS) 140-2 Security Level 3 certification from the U.S. National Institute of Standards and Technology (NIST). The FIPS 140 program validates areas related to the secure design and implementation of a cryptographic module, including the correctness of cryptographic algorithm implementations and tamper resistance/response. AWS KMS HSMs have been certified under FIPS 140-2 overall Security Level 2 continuously since 2017. This new certification gives customers assurance that all cryptographic operations involving their keys in AWS KMS happen within an HSM certified at FIPS 140-2 Security Level 3. View the full article
  10. HashiCorp is pleased to announce the general availability of Vault 1.13. Vault provides secrets management, data encryption, and identity management for any application on any infrastructure. Vault 1.13 focuses on Vault’s core secrets workflows as well as team workflows, integrations, and visibility.

The key features in this release include improvements to:
- Multi-namespace access workflows
- Azure auth method
- Google Cloud secrets engine
- KMIP secrets engine
- MFA login
- Vault Agent
- Certificate revocation for cross-cluster management

Additional new features include:
- Event-based notifications (alpha)
- Vault Operator (beta)
- HCP link for self-managed Vault (private beta)
- PKI health checks
- Managed Transit keys

»Multi-namespace access improvements
When customers have secrets distributed across multiple (independent) namespaces, their applications need to authenticate multiple times to Vault, creating an unnecessary burden. Additionally, customers using Vault Agent need to run separate Vault Agent instances to talk to each namespace. Vault 1.13 includes namespace improvements to alleviate these challenges by enabling a single agent instance to fetch secrets across multiple namespaces.

»Microsoft Azure auth improvements
Prior to Vault 1.13, only Microsoft Azure virtual machines could leverage the Azure auth method to authenticate to Vault. Vault 1.13 introduces the ability to authenticate to Azure Functions and the Azure App Service.

»Google Cloud secrets engine improvements
To avoid exhausting Google Cloud’s 10-key limit when the same service account is used with 10 or more Vault roles, Vault 1.13 improves the Google Cloud secrets engine integration by using its own service account with Google Cloud’s account impersonation feature to generate access tokens.

»Event-based notifications (alpha)
Vault 1.13 introduces an alpha release of a lightweight event-based notification feature that can be used to notify downstream applications of changes to Vault secrets.
Applications and external entities can subscribe to events via a WebSocket API. »Undo logs We identified a race condition in Vault's replication: frequent updates to a set of keys could occasionally result in a Merkle diff or sync loop. A fix shipped in Vault 1.12 but was disabled by default; Vault 1.13 enables it by default. »MFA login improvements Vault 1.10 introduced Login MFA, a standardized configuration for integrating with Duo, PingIdentity, Okta, and TOTP, but some customers found UX challenges with these configurations. The improvements in 1.13 make it easier to migrate to Login MFA, and make Login MFA easier to configure and debug. Prior to 1.13, customers had to specify an MFA ID (a long UUID) for each MFA provider, which can be cumbersome for customers with many clusters; 1.13 introduces MFA name as a human-readable alias. Passcodes were also not handled consistently across methods; behavior is now consistent across the CLI and API, regardless of method. »Vault Operator (beta) Kubernetes applications using Vault for secrets management have had the option of leveraging a sidecar or the CSI secrets store provider to inject secrets into files. This approach created several challenges: applications had to be modified to read secrets from a file, and they needed to detect when credentials changed so the file could be re-read. Vault 1.13 introduces the Vault Operator to address these challenges by allowing customers to natively sync secrets from Vault to Kubernetes clusters. The Vault Operator will not be immediately available with the GA launch of 1.13; it is expected to become available in late March. »Vault Agent improvements Vault 1.13 includes several enhancements to the Vault Agent. 
- Users can get started with Vault Agent without needing to set up auth methods. This feature is for learning and testing and is not recommended for production use.
- Listeners for the Vault Agent can now be configured with a "metrics only" role, so that a service can listen on a particular port for metrics collection only.
- The Vault Agent can now read configurations from multiple files.
- The Vault Agent persists logging when there is a mismatch between the agent and server.

»HCP link for self-managed Vault (private beta) In Vault 1.13 and the HashiCorp Cloud Platform (HCP), we've introduced a feature enabling active connections between self-managed Vault clusters and HCP. The feature is similar to the Consul global management plane. You sign up for free on HCP, connect self-managed clusters into one platform, and access applications built on top of HCP, such as the operations monitoring and usage insights features currently in private beta. We chose to add these capabilities to HCP because adding data processing functionality to the Vault binary could increase memory consumption. To participate in this private beta you must first upgrade to Vault 1.13, but the upgrade doesn't need to occur in your production environments: simply upgrade a pre-production environment to Vault 1.13 and you can start testing. If you would like to participate in the beta, please contact our product team by emailing siranjeevi.dheenadhayalan@hashicorp.com. »Certificate revocation for cross-cluster management To improve the cross-cluster certificate management user experience, Vault 1.13 extends revocation capability support (including certificate revocation lists (CRLs) and online certificate status protocol (OCSP)) across multiple Vault clusters for the same PKI mount. »PKI health checks Vault 1.13 introduces PKI health check CLI commands to help organizations troubleshoot issues and avoid downtime. 
The following checks are included:

- CA and CRL validity
- Role permissions
- Audit of potentially unsafe parameters
- API certificate volume

»Enhanced KMIP with PKCS#11 operations In Vault 1.13 we have added support in the KMIP secrets engine (KMIP Asymmetric Key Lifecycle and Advanced Cryptographic Server profiles) for key PKCS#11 operations such as signing (sign/verify), random number generation (RNG), and MAC sign/verify. These should enable better KMIP/PKCS#11 integrations for Vault with a broader base of devices and software. »Managed Transit keys Customers with particularly conservative risk profiles require their keys to be created and stored within hardware security modules (HSMs) or an external KMS. We want to make it easier for organizations to leverage Vault across all sections of their infrastructure, including HSMs and KMS. Vault 1.13 introduces support for offloading Transit key cryptographic operations (encrypt, decrypt, MAC sign/verify, RNG generation) from Transit to an HSM or cloud KMS. Organizations can now generate new Transit keys and key pairs using key material in an external HSM or KMS, and then use them for encryption and signatures on secrets in Vault. These features make it possible to centralize secrets and encryption management in Vault even in cases where Vault doesn't support certain cryptographic algorithms (such as SEED/ARIA) required by an organization's compliance policies. »Enable rotate-root in Azure auth method While the rotate-root endpoint is available in the Azure secrets engine, it had not been implemented in our Azure auth method. Customers who want to rotate the client secrets of Azure auth method mounts on a routine basis have had to do so manually: rotate the client secret in Azure, then update the Azure auth configuration. At scale, this is impractical. 
Vault 1.13 adds rotate-root capabilities to the Azure auth method, allowing users to generate a new client secret for the root account defined in the config of Azure auth method mounts. The generated value will be known only by Vault. »Upgrade details This release also includes additional new features, workflow enhancements, general improvements, and bug fixes. The Vault 1.13 changelog lists all the updates. Please visit the Vault Release Highlights page for step-by-step tutorials demonstrating the new features. Vault 1.13 introduces significant new functionality. As such, please review the Upgrading Vault page, as well as the Feature Deprecation Notice and Plans page for further details. As always, we recommend upgrading and testing new releases in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault discussion forum. As a reminder, if you believe you have found a security issue in Vault, please responsibly disclose it by emailing security@hashicorp.com — do not use the public issue tracker. For more information, please consult our security policy and our PGP key. For more information about Vault Enterprise, visit hashicorp.com/products/vault. View the full article
  11. AWS re:Invent is in full swing this week in Las Vegas. HashiCorp has a big presence at the event, with breakout sessions, expert talks, and product demos. As AWS re:Invent dominates the tech headlines, we wanted to reflect on our current project collaborations with AWS and the state of HashiCorp security and networking initiatives with AWS. That includes securing workloads in EKS with HashiCorp Vault, Vault Lambda Extension Caching, Vault + AWS XKS, updates on HashiCorp Consul on AWS, and more. »HashiCorp and AWS Security HashiCorp Vault provides the foundation for modern cloud security. Vault was purpose-built in the cloud era to authenticate to and access multiple clouds, systems, and endpoints, and to centrally store, access, and deploy secrets (API keys, credentials, etc.). It also provides a simple workflow to encrypt data in flight and at rest. Vault centrally manages and enforces access to secrets and systems based on trusted sources of application and user identity. With Vault and the HashiCorp model of zero trust security, organizations can manage their transition to AWS while maintaining the level of security they need: one that trusts nothing and authenticates and authorizes everything. Specific HashiCorp-AWS security developments in the last year include: »HashiCorp and AWS Secure Workloads in EKS with Vault HashiCorp partnered with AWS to make it easier to use Vault, our enterprise secrets management solution, on AWS. EKS Blueprints is a new open source project that aims to make it easier and faster for customers to adopt Amazon Elastic Kubernetes Service (EKS). As part of the EKS Blueprints launch, AWS and HashiCorp partnered to build an add-on repository that lets you enable and start up Vault instances inside EKS: you can access Vault in EKS with one command. 
»Vault Lambda Extension Caching With the arrival of the Vault AWS Lambda extension in 2020, practitioners who had standardized on HashiCorp Vault for secrets management and AWS Lambda as their serverless compute environment no longer had to make their Lambda functions Vault-aware. The extension retrieves the specified secret from a Vault cluster and presents it to the Lambda function. This year we announced a new caching feature that can be added to Lambda and Vault infrastructure: Vault Lambda extension caching. This extension can cache the tokens and leased secrets proxied through the agent, including the auto-auth token. This allows for easier access to Vault secrets for edge applications, reduces the I/O burden for basic secrets access for Vault clusters, and allows for secure local access to leased secrets for the life of a valid token. »Vault + AWS XKS AWS External Key Store (XKS) is a new capability in AWS Key Management Service (AWS KMS) that allows customers to protect their data in AWS using cryptographic keys held inside on-premises hardware security modules (HSMs), software security modules (SSMs) like Vault, or other key managers outside of AWS. This integration mimics existing support for AWS CloudHSM within KMS, except that the customer-controlled key manager resides outside of an AWS datacenter. For regulatory and compliance reasons, some enterprises have a need to move their encryption key material and encryption operators completely outside of AWS infrastructure. When Vault is running outside of AWS infrastructure, it can effectively serve as a software security module (SSM) to store and manage this root of trust for a customer’s AWS account. For a more detailed overview of the external key store capabilities, please see the External Key Store (XKS) announcement on the AWS News Blog. 
»HashiCorp Boundary at AWS re:Inforce Earlier this year at AWS' security conference, AWS re:Inforce, HashiCorp presented a new way to safeguard who and what has access to applications, systems, and endpoints with the beta release of HCP Boundary. HCP Boundary is now generally available and provides an easy way to securely access critical systems with fine-grained authorizations based on trusted identities. Boundary on the HashiCorp Cloud Platform (HCP) provides a fully managed, single workflow to securely connect to hosts and critical systems across Kubernetes clusters, cloud service catalogs, and on-premises infrastructure. »HashiCorp Wins AWS Security Partner of the Year in North America Amazon Web Services has named HashiCorp the winner of its Security Partner of the Year in North America award, validating HashiCorp's vision for delivering zero trust security to cloud infrastructure. The Security Partner of the Year award recognizes top partners with the AWS Security Competency and affirms HashiCorp as a partner that has proven customer success stories securing every stage of cloud adoption, from initial migration through ongoing day-to-day management. HashiCorp is also one of the 2022 Regional and Global AWS Partner Award winners, with which AWS recognizes leaders around the globe playing a key role in helping customers drive innovation and build solutions on AWS. Announced at AWS re:Invent, the AWS Partner Awards recognize AWS partners whose business models have embraced specialization, innovation, and collaboration over the past year, and whose models continue to evolve and thrive on AWS as they work with customers. »HashiCorp and AWS Networking HashiCorp Consul is a cloud services networking platform that helps discover, securely connect, and improve the resiliency and visibility of services across AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon EKS, AWS Fargate, Amazon Elastic Container Service (Amazon ECS), and AWS Lambda. 
Consul enables services like these to automatically find each other, and enables secure connections between specific services according to security policies. Specific HashiCorp-AWS networking developments in the last year include: »Consul on AWS Updates This year, HashiCorp Consul on Amazon ECS added support for multi-tenancy, AWS Identity and Access Management (IAM), and mesh gateways. AWS Lambda updates, which include Consul mesh services invoking AWS Lambda functions (now generally available), and AWS Lambda functions accessing Consul mesh services (in beta), help organizations interested in serverless computing remove the barrier to adoption due to the difficulty of integrating these workloads into the service mesh. This release means service mesh users can now have consistent workflows for encrypted communications flowing between mesh services and Lambda functions. At AWS re:Invent, HashiCorp helped highlight Comcast's journey to service networking with HashiCorp Consul and AWS during a speaker session. Comcast's architecture includes multiple on-premises datacenters and cloud services, including Amazon ECS, AWS Fargate, AWS Lambda, Amazon EC2 VMs, on-premises Kubernetes, and on-premises VMs. The multinational telecommunications conglomerate adopted Consul because it flexibly supports Comcast’s cloud and on-premises workloads in multiple AWS regions, as well as the company’s own datacenters. Consul helps manage this complexity while scaling with resiliency. »HashiCorp Consul on AWS Resources Modern infrastructure may require that services run in different networks, runtimes, or compute solutions, such as Amazon EC2, Amazon EKS, AWS Fargate, Amazon ECS, or AWS Lambda. To support these services, HashiCorp provides tutorials and documentation on how to run Consul on Kubernetes, VMs and AWS services including Amazon ECS, and AWS Lambda. 
We have a large number of resources that can help you learn how to use Consul to securely connect your services on AWS:

- Get Started with Consul on Kubernetes
- Get Started with Consul on VMs
- Consul with ECS Workloads
- Consul with Lambda Workloads
- Consul Cluster Peering on Kubernetes in AWS
- Consul Learn Lab: Deploy Resilient Applications with Service Mesh and AWS Lambda

»A Cloud-Managed Zero Trust Security Solution HashiCorp Cloud Platform (HCP) is a fully managed platform available for HashiCorp Terraform, Vault, Consul, Boundary, Waypoint, and Packer. This year, HashiCorp announced the industry's first zero trust security solution fully deployed on the cloud, combining HCP Vault, HCP Consul, and HCP Boundary to secure applications, networks, and people, delivered on AWS. To learn more about Vault, Boundary, or Consul, visit our product pages on HashiCorp.com and read our getting started tutorials on HashiCorp Developer. And if you're attending AWS re:Invent, please stop by our booth (#3410) to chat with our technical experts, take in a product demo, and learn how companies are accelerating their cloud journey with HashiCorp and AWS. View the full article
  12. I am excited to announce the availability of AWS Key Management Service (AWS KMS) External Key Store. Customers who have a regulatory need to store and use their encryption keys on premises or outside of the AWS Cloud can now do so. This new capability allows you to store AWS KMS customer managed keys on a hardware security module (HSM) that you operate on premises or at any location of your choice. At a high level, AWS KMS forwards API calls to securely communicate with your HSM. Your key material never leaves your HSM. This solution allows you to encrypt data with external keys for the vast majority of AWS services that support AWS KMS customer managed keys, such as Amazon EBS, AWS Lambda, Amazon S3, Amazon DynamoDB, and over 100 more services. There is no change required to your existing AWS services’ configuration parameters or code. This helps you unblock use cases for a small portion of regulated workloads where encryption keys should be stored and used outside of an AWS data center. But this is a major change in the way you operate cloud-based infrastructure and a significant shift in the shared responsibility model. We expect only a small percentage of our customers to enable this capability. The additional operational burden and greater risks to availability, performance, and low latency operations on protected data will exceed—for most cases—the perceived security benefits from AWS KMS External Key Store. Let me dive into the details. A Brief Recap on Key Management and Encryption When an AWS service is configured to encrypt data at rest, the service requests a unique encryption key from AWS KMS. We call this the data encryption key. To protect data encryption keys, the service also requests that AWS KMS encrypts that key with a specific KMS customer managed key, also known as a root key. Once encrypted, data keys can be safely stored alongside the data they protect. This pattern is called envelope encryption. 
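As a rough illustration of the envelope-encryption pattern just described, here is a minimal Python sketch. It is a simplification, not how AWS KMS is implemented: a toy XOR keystream stands in for a real AEAD cipher such as AES-GCM, and an in-memory dict stands in for the HSM-held root key.

```python
import hashlib
import secrets

ROOT_KEYS = {"root-key-1": secrets.token_bytes(32)}  # stand-in for the HSM-held root key

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream. NOT secure;
    a real implementation would use an AEAD cipher such as AES-GCM."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def encrypt_envelope(root_key_id: str, plaintext: bytes) -> dict:
    data_key = secrets.token_bytes(32)                # unique data key per object
    ciphertext = _keystream_xor(data_key, plaintext)  # encrypt data with the data key
    wrapped = _keystream_xor(ROOT_KEYS[root_key_id], data_key)  # models "KMS Encrypt"
    # The envelope stores the encrypted data alongside its encrypted data key.
    return {"root_key_id": root_key_id, "wrapped_key": wrapped, "ciphertext": ciphertext}

def decrypt_envelope(envelope: dict) -> bytes:
    root_key = ROOT_KEYS[envelope["root_key_id"]]
    data_key = _keystream_xor(root_key, envelope["wrapped_key"])  # models "KMS Decrypt"
    return _keystream_xor(data_key, envelope["ciphertext"])
```

The point of the structure is that only the wrapped (encrypted) data key is stored with the data; recovering the plaintext always requires a call back to whoever holds the root key.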
Imagine an envelope that contains both the encrypted data and the encrypted key that was used to encrypt that data. But how do we protect the root key? Protecting the root key is essential, as it allows decryption of all the data keys it has encrypted. The root key material is securely generated and stored in a hardware security module, a piece of hardware designed to store secrets. It is tamper-resistant and designed so that the key material never leaves the secured hardware in plain text. AWS KMS uses HSMs that are certified under the FIPS 140-2 Cryptographic Module Validation Program. You can choose to create root keys tied to data classification, to individual AWS services, to a project tag, or to each data owner; each root key is unique to an AWS Region. AWS KMS calls the root keys customer managed keys when you create and manage the keys yourself. They are called AWS managed keys when they are created on behalf of an AWS service that encrypts data, such as Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (RDS), or Amazon DynamoDB. For simplicity, let's call them KMS keys. These are the root keys, the ones that never leave the secured HSM environment. All KMS encryption and decryption operations happen in the secured environment of the HSM. The XKS Proxy Solution When configuring AWS KMS External Key Store (XKS), you are replacing the KMS key hierarchy with a new, external root of trust. The root keys are now all generated and stored inside an HSM you provide and operate. When AWS KMS needs to encrypt or decrypt a data key, it forwards the request to your vendor-specific HSM. All AWS KMS interactions with the external HSM are mediated by an external key store proxy (XKS proxy), a proxy that you provide and manage. The proxy translates generic AWS KMS requests into a format that the vendor-specific HSMs can understand. 
The HSMs that XKS communicates with are not located in AWS data centers. To provide customers with a broad range of external key manager options, AWS KMS developed the XKS specification with feedback from several HSM, key management, and integration service providers, including Atos, Entrust, Fortanix, HashiCorp, Salesforce, Thales, and T-Systems. For information about availability, pricing, and how to use XKS with solutions from these vendors, consult the vendor directly. In addition, we will provide a reference implementation of an XKS proxy that can be used with SoftHSM or any HSM that supports a PKCS #11 interface. This reference implementation XKS proxy can be run as a container, is built in Rust, and will be available via GitHub in the coming weeks. Once you have completed the setup of your XKS proxy and HSM, you can create a corresponding external key store resource in KMS. You create keys in your HSM and map these keys to the external key store resource in KMS. Then you can use these keys with AWS services that support customer keys or your own applications to encrypt your data. Each request from AWS KMS to the XKS proxy includes meta-data such as the AWS principal that called the KMS API and the KMS key ARN. This allows you to create an additional layer of authorization controls at the XKS proxy level, beyond those already provided by IAM policies in your AWS accounts. The XKS proxy is effectively a kill switch you control. When you turn off the XKS proxy, all new encrypt and decrypt operations using XKS keys will cease to function. AWS services that have already provisioned a data key into memory for one of your resources will continue to work until either you deactivate the resource or the service key cache expires. For example, Amazon S3 caches data keys for a few minutes when bucket keys are enabled. 
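To make the secondary authorization layer and kill switch concrete, here is a hypothetical sketch of the checks an XKS proxy could apply before forwarding a request to the HSM. The metadata field names and ARNs are invented for the example and do not reflect the exact XKS wire format.

```python
PROXY_ENABLED = True  # the "kill switch": set to False to refuse all operations

# KMS key ARN -> AWS principals allowed to use it through this proxy
ALLOWED_PRINCIPALS = {
    "arn:aws:kms:us-east-1:111122223333:key/example-xks-key": {
        "arn:aws:iam::111122223333:role/app-role",
    },
}

def authorize(request_metadata: dict) -> bool:
    """Return True if an encrypt/decrypt request may proceed to the HSM."""
    if not PROXY_ENABLED:
        return False  # kill switch engaged: refuse every new operation
    allowed = ALLOWED_PRINCIPALS.get(request_metadata.get("kmsKeyArn"), set())
    return request_metadata.get("awsPrincipalArn") in allowed
```

Because this check runs in your network, it operates independently of the IAM policies already evaluated on the AWS side; flipping `PROXY_ENABLED` to `False` models the kill-switch behavior described above.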
The Shift in Shared Responsibility Under standard cloud operating procedures, AWS is responsible for maintaining the cloud infrastructure in operational condition. This includes, but is not limited to, patching the systems, monitoring the network, designing systems for high availability, and more. When you elect to use XKS, there is a fundamental shift in the shared responsibility model. Under this model, you are responsible for maintaining the XKS proxy and your HSM in operational condition. Not only do they have to be secured and highly available, but also sized to sustain the expected number of AWS KMS requests. This applies to all components involved: the physical facilities, the power supplies, the cooling system, the network, the server, the operating system, and more. Depending on your workload, AWS KMS operations may be critical to operating services that require encryption for your data at rest in the cloud. Typical services relying on AWS KMS for normal operation include Amazon Elastic Block Store (Amazon EBS), Lambda, Amazon S3, Amazon RDS, DynamoDB, and more. In other words, it means that when the part of the infrastructure under your responsibility is not available or has high latencies (typically over 250 ms), AWS KMS will not be able to operate, cascading the failure to requests that you make to other AWS services. You will not be able to start an EC2 instance, invoke a Lambda function, store or retrieve objects from S3, connect to your RDS or DynamoDB databases, or any other service that relies on AWS KMS XKS keys stored in the infrastructure you manage. As one of the product managers involved in XKS told me while preparing this blog post, “you are running your own tunnel to oxygen through a very fragile path.” We recommend only using this capability if you have a regulatory or compliance need that requires you to maintain your encryption keys outside of an AWS data center. Only enable XKS for the root keys that support your most critical workloads. 
Not all your data classification categories will require external storage of root keys. Keep the data set protected by XKS to the minimum needed to meet your regulatory requirements, and continue to use AWS KMS customer managed keys, fully under your control, for the rest. Some customers for whom external key storage is not a compliance requirement have also asked for this feature in the past, but they all ended up accepting one of the existing AWS KMS options for cloud-based key storage and usage once they realized that the perceived security benefits of an XKS-like solution didn't outweigh the operational cost. What Changes and What Stays the Same? I tried to summarize the changes for you.

What is identical to standard AWS KMS keys:
- The supported AWS KMS APIs and key identifiers (ARN) are identical. AWS services that support customer managed keys will work with XKS.
- The way to protect and monitor access from the AWS side is unchanged. XKS uses the same IAM policies and the same key policies. API calls are logged in AWS CloudTrail, and AWS CloudWatch has the usage metrics.
- The pricing is the same as for other AWS KMS keys and API operations.

What is changing:
- XKS does not support asymmetric or HMAC keys managed in the HSM you provide.
- You now own the concerns of availability, durability, performance, and latency boundaries of your encryption key operations.
- You can implement another layer of authorization, auditing, and monitoring at the XKS proxy level, since XKS resides in your network.
- While the KMS price stays the same, your expenses are likely to go up substantially to procure an HSM and maintain your side of the XKS-related infrastructure in operational condition.

An Open Specification For those strictly regulated workloads, we are developing XKS as an open interoperability specification. Not only have we collaborated with the major vendors I mentioned already, but we also opened a GitHub repository with the following materials:

- The XKS proxy API specification. This describes the format of the generic requests KMS sends to an XKS proxy and the responses it expects. Any HSM vendor can use the specification to create an XKS proxy for their HSM.
- A reference implementation of an XKS proxy that implements the specification. This code can be adapted by HSM vendors to create a proxy for their HSM.
- An XKS proxy test client that can be used to check if an XKS proxy complies with the requirements of the XKS proxy API specification.

Other vendors, such as Salesforce, have announced their own XKS-compatible solutions, allowing their customers to choose their own key management system and plug it into the solution of their choice. Pricing and Availability External Key Store is provided at no additional cost on top of AWS KMS. AWS KMS charges $1 per root key per month, no matter where the key material is stored: on KMS, on CloudHSM, or on your own on-premises HSM. For a full list of Regions where AWS KMS XKS is currently available, visit our technical documentation. If you think XKS will help you to meet your regulatory requirements, have a look at the technical documentation and the XKS FAQ. -- seb
View the full article
  13. The introduction of 5G networking and its accompanying Service-Based Architecture (SBA) control plane brings a noteworthy shift: instead of a traditional design consisting of proprietary signaling between physical, black-box components, SBA uses a commodity-like, microservice implementation that is increasingly cloud native, relying on standard RESTful APIs to communicate. This requires a reset in how carriers implement security, one where proven cloud concepts will likely play a significant role. This post will show how the HashiCorp suite of products, especially HashiCorp Vault's PKI functionality, is well suited to SBAs and caters to a variety of 5G core use cases, with tight Kubernetes integrations and a focus on zero trust networking. These tools provide a strong foundation for 5G environments because many of the constructs included in SBA resemble a modern, zero trust service mesh. Vault in particular offers full PKI management and a low-resistance path for service providers seeking to minimize the software development effort required to achieve mTLS compliance. »The New Face of Telecom Networking The 3GPP standards body mandates a 5G mobile packet core based on discrete software components known as Network Functions (NFs). The specifications clearly articulate requirements for NF communication pathways (known as reference points), protocols, service-based interfaces (SBIs), and, critically, how these network channels are secured.

SBI representation of a 5G service-based architecture

Orchestration platforms have opened up powerful integration, scaling, and locality opportunities for hosting and managing these NFs that were not possible in earlier manifestations of cellular technology. A mature 5G core could span multiple datacenters and public cloud regions, and scale to thousands of worker nodes. 
An entire Kubernetes cluster, for example, may be dedicated to the requirements of a single NF: internally, a function may consist of many pods, deployments, services, and other Kubernetes constructs. The SBI itself could be any network interface associated with an NF that is attached to the control plane network for the purpose of consuming and/or providing a service in accordance with the specification. The 5G SBA also brings new security challenges and opportunities. »Securing Network Function Communication Security architecture and procedures for 5G System (3GPP TS 33.501) is the document of record that details various security-related requirements within the 5G SBA. Section 13.1.0 states:

"All network functions shall support mutually authenticated TLS and HTTPS as specified in RFC 7540 [47] and RFC 2818 [90]. The identities in the end entity certificates shall be used for authentication and policy checks. Network functions shall support both server-side and client-side certificates. TLS client and server certificates shall be compliant with the SBA certificate profile specified in clause 6.1.3c of TS 33.310 [5]."

mTLS is a fundamental requirement within the 5G SBA for securing SBI flows at the authentication level. But what about authorization? One NF in particular is especially crucial in the context of security: the Network Repository Function (NRF) is responsible for dynamically registering all SBA components as they come online, acting as a kind of service discovery mechanism that can be queried in order to locate healthy services. In addition, the NRF has universal awareness of which functions should be permitted to freely communicate, issuing appropriately scoped OAuth2 tokens to each entity. These tokens authorize network flows between NFs, further securing the fabric.

NF authentication and authorization flow

There are two modes of service-to-service communication described in the 3GPP specifications. 
In the Direct Communication mode, NFs engage in service discovery and inter-function network operations as explained above. In the Indirect Communication mode, however, a Service Communication Proxy (SCP) may optionally intercept flows and even broker discovery requests with the NRF on behalf of a consumer. Various SCP implementations can augment SBA service networking by introducing intelligent load balancing and failover, policy-based controls, and monitoring. »If it Looks Like a Mesh, Walks Like a Mesh… To summarize, the 5G SBA includes a number of broad technology constructs:

- Microservice architecture based on Kubernetes
- Hybrid-cloud/multi-cloud capabilities
- Service discovery and load balancing
- Network authentication via mTLS
- OAuth2 token-based authorization
- Optional proxy-based mode (policy and telemetry)

If this is starting to sound familiar, you're not alone. While the Indirect Communication mode is optional (and does not specify a sidecar proxy), these elements combined closely resemble a modern, zero trust service mesh. Intentional or not, this emergent pattern could evolve toward the same architectural trends, platforms, and abstractions being adopted elsewhere in modern software. To that end, HashiCorp's enterprise products cater to a variety of core 5G use cases, with tight Kubernetes integrations and a keen focus on zero trust networking:

- HashiCorp Terraform: builds reliable multi-cloud infrastructure and deploys complex workloads to Kubernetes using industry-standard infrastructure as code practices
- HashiCorp Consul: discovers services and secures networks through identity-based authorization
- HashiCorp Vault: protects sensitive data and delivers automated PKI at scale to achieve mTLS for authenticated SBI communications

HashiCorp Vault in particular presents an attractive solution for easily securing SBI flows with mTLS authentication. 
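As a sketch of the OAuth2 token-based authorization element above, an NF producer might validate an NRF-issued access token roughly like this. The claim names are illustrative and simplified from the 3GPP token profile, and JWT signature verification is omitted for brevity.

```python
import time

def is_request_authorized(claims: dict, producer_nf: str, service: str) -> bool:
    """Decide whether a consumer's access token permits invoking `service`
    on `producer_nf`. `claims` is the already-verified token payload."""
    if claims.get("exp", 0) <= time.time():
        return False                                   # token has expired
    if producer_nf not in claims.get("audience", []):
        return False                                   # token issued for another NF
    # The NRF scopes each token to the specific services a consumer may invoke.
    return service in claims.get("scope", "").split()
```

In a real deployment the producer would first verify the token's signature against the NRF's public key before trusting any of these claims.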
Vault is a distributed, highly available secrets management platform that can span multiple private and public cloud regions, accommodating a wide variety of SBA consumer personas and environments. Several replication options offer robust disaster recovery features, as well as increased performance through horizontal scaling.

Vault high-availability architecture

»Certificate Lifecycle Automation with Vault

The PKI functionality of Vault (one of many secrets engines available) is powerful, comprehensive, and simple to implement. Vault supports an arbitrary number of Certificate Authorities (CAs) and intermediates, which can be generated internally or imported from external sources such as hardware security modules (HSMs). Fully automated cross-signing capabilities create additional options for managing 5G provider trust boundaries and network topologies. Access to Vault itself must be authenticated. Thankfully, this is a Kubernetes-friendly operation that permits straightforward integration options for container-based NF workloads. Supported authentication methods include all of the major public cloud machine-identity systems, a per-cluster native Kubernetes option, and JWT-based authentication that integrates seamlessly with the OIDC provider built into Kubernetes. The JWT-based method is capable of scaling to support many clusters in parallel, utilizing the service account tokens that are projected to pods by default. Once successfully authenticated to Vault, a policy attached to the auth method dictates the client's ability to access secrets within an engine. These policies can be highly granular based on a number of parameters, such as the client's JWT token claims, Microsoft Azure Managed Identity, AWS Identity and Access Management (IAM) role, and more.
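As a concrete sketch of that JWT login exchange: the endpoint path and body shape below follow Vault's documented JWT auth API, but the mount path ("jwt") and role name are hypothetical, deployment-specific values. A client posts the projected service account token to Vault and reads a client token back from the response.

```python
def build_jwt_login(jwt: str, role: str, mount: str = "jwt") -> tuple[str, dict]:
    """Build the Vault API path and JSON body for a JWT auth login.
    The mount path ("jwt") and role name are deployment-specific."""
    return f"/v1/auth/{mount}/login", {"jwt": jwt, "role": role}

def client_token(login_response: dict) -> str:
    """Pull the client token out of a Vault login response."""
    return login_response["auth"]["client_token"]

# In a pod, the projected token would be read from
# /var/run/secrets/kubernetes.io/serviceaccount/token; a placeholder is used here.
path, body = build_jwt_login("<projected-service-account-jwt>", "nf-workload")
print(path)  # /v1/auth/jwt/login

# Shape of a (heavily truncated) successful login response:
sample_response = {"auth": {"client_token": "hvs.EXAMPLE", "lease_duration": 3600}}
print(client_token(sample_response))  # hvs.EXAMPLE
```

The returned client token is then presented on subsequent requests, where the policy attached to the auth method governs which secrets engines it may reach.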
Vault logical flow from authentication to secret consumption

If a policy grants access to a PKI secrets engine, the client may request a certificate specifying certain parameters in the API request payload, such as:

- Common name
- Subject alternative names (SANs)
- IP SANs
- Expiry time

The allowed parameters of the request are constrained by a role object configured against the PKI engine, which outlines permitted domain names, maximum TTL, and additional enforcements for the associated certificate authority. An authenticated, authorized, and valid request results in the issuance of a certificate and private key, delivered back to the client in the form of a JSON payload, which can then be parsed and rendered to the pod filesystem as specified by the NF application's requirements and configuration. The processes described to authenticate and request certificates can be executed by API call from the primary container, an init container, or any of a number of custom solutions. To reduce the burden of developing unique strategies for each NF, organizations may instead choose to leverage the Vault Agent Injector for Kubernetes to automate the distribution of certificates. This solution consists of a mutating admission controller that intercepts lifecycle events and modifies the pod spec to include a Vault Agent sidecar container. Once configured, standard pod annotations can be used by operations teams to manage the PKI lifecycle, ensuring that certificates and private keys are rendered to appropriate filesystem locations and renewed prior to expiry, without ever touching the NF application code. The agent is additionally capable of executing arbitrary commands or API calls upon certificate renewal, which can be configured to include reloading a service or restarting a pod. The injector provides a low-resistance path for service providers seeking to minimize the software development effort required to achieve mTLS compliance.
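Those request parameters map directly onto a small JSON payload. The sketch below builds an issue request and renders the response to the filesystem; the mount path (pki), role name (nf-sbi), and hostnames are hypothetical, though the field names match Vault's PKI issue API.

```python
def issue_request(common_name, alt_names=None, ip_sans=None, ttl="24h"):
    """Build the API path and body for a certificate request against a
    hypothetical 'pki' mount and 'nf-sbi' role."""
    body = {"common_name": common_name, "ttl": ttl}
    if alt_names:
        body["alt_names"] = ",".join(alt_names)  # comma-separated DNS SANs
    if ip_sans:
        body["ip_sans"] = ",".join(ip_sans)
    return "/v1/pki/issue/nf-sbi", body

def render_to_files(response, cert_path, key_path):
    """Write the certificate and private key from an issue response to disk,
    as an NF pod might before starting its SBI listener."""
    data = response["data"]
    with open(cert_path, "w") as f:
        f.write(data["certificate"])
    with open(key_path, "w") as f:
        f.write(data["private_key"])

path, body = issue_request("amf.5gc.example.internal",
                           alt_names=["namf-comm.5gc.example.internal"],
                           ip_sans=["10.0.0.7"])
print(path)  # /v1/pki/issue/nf-sbi
```

A real client would POST `body` to `path` with its Vault token and pass the parsed JSON response to `render_to_files`; the Vault Agent Injector described above performs this same sequence on the application's behalf.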
Vault JWT Auth Method with Kubernetes as OIDC provider

Vault also integrates with Jetstack cert-manager, which grants the ability to configure Vault as a ClusterIssuer in Kubernetes and subsequently deliver certificates to Ingresses and other cluster objects. This approach can be useful if the SBI in question specifies a TLS-terminating Ingress Controller. Software vendors building 5G NFs may alternatively decide to incorporate Vault into their existing middleware or configuration automation via a more centralized API integration. For example, a service may already be in place to distribute certificates to pods within the NF ecosystem that have interfaces on the SBI message bus. Such a solution might rely on a legacy certificate procurement protocol such as CMPv2. Replacing this mechanism with simple HTTP API calls to Vault would not only be a relatively trivial effort, it would also be a shift very much in the spirit of the 3GPP inclination towards standard RESTful, HTTP-based communications, and of broader industry trends.

»Working Together to Make 5G More Secure

HashiCorp accelerates cloud transformation for telcos pursuing automation, operational maturity, and compliance for 5G networks. Join the HashiCorp Telco user group to stay up to date with recent developments, blog posts, talk tracks, and industry trends. Reach out to the HashiCorp Telco team at telco@hashicorp.com. View the full article
  14. We are very excited to announce the general availability of Azure Payment HSM, a bare metal infrastructure-as-a-service (IaaS) offering that enables customers to have native access to payment HSMs in the Azure cloud. With Azure Payment HSM, customers can seamlessly migrate PCI workloads to Azure and meet the stringent security, audit compliance, low-latency, and high-performance requirements of the Payment Card Industry (PCI). The Azure Payment HSM service empowers service providers and financial institutions to accelerate their payment system's digital transformation strategy and adopt the public cloud.

"Payment HSM support in the public cloud is one of the most significant hurdles to overcome in moving payment systems to the public cloud. While there are many different solutions, none can meet the stringent requirements required for a payment system. Microsoft, working with Thales, stepped up to provide a payment HSM solution that could meet the modernization ambitions of ACI Worldwide's technology platform. It has been a pleasure working with both teams to bring this solution to reality." —Timothy White, Chief Architect, Retail Payments and Cloud

Service overview

The Azure Payment HSM solution is delivered using the Thales payShield 10K payment HSM, which offers single-tenant HSMs and full remote management capabilities. The service is designed to enable total customer control, with strict role and data separation between Microsoft and the customer. HSMs are provisioned and connected directly to the customer's virtual network, and the HSMs are under the customer's sole administrative control. Once allocated, Microsoft's administrative access is limited to "Operator" mode, and full responsibility for configuration and maintenance of the HSM and software falls upon the customer. When the HSM is no longer required and the device is returned to Microsoft, customer data is erased to ensure privacy and security.
The solution comes with the Thales payShield premium package license and enhanced support plan, with a direct relationship between the customer and Thales.

Figure 1: After the HSM is provisioned, the device is connected directly to a customer's virtual network with full remote HSM management capabilities through Thales payShield Manager and TMD.

The customer can quickly add more HSM capacity on demand and subscribe to the highest performance level (up to 2,500 CPS) for mission-critical payment applications with low latency. The customer can upgrade or downgrade the HSM performance level based on business needs without interrupting HSM production usage. HSMs can be easily provisioned as a pair of devices and configured for high availability. Azure remains committed to helping customers achieve compliance with the Payment Card Industry's leading compliance certifications. Azure Payment HSM is certified across stringent security and compliance requirements established by the PCI Security Standards Council (PCI SSC), including PCI DSS, PCI 3DS, and PCI PIN. Thales payShield 10K HSMs are certified to FIPS 140-2 Level 3 and PCI HSM v3. Azure Payment HSM customers can significantly reduce their compliance time, effort, and cost by leveraging the shared responsibility matrix from Azure's PCI Attestation of Compliance (AOC).

Typical use cases

Financial institutions and service providers in the payment ecosystem, including issuers, service providers, acquirers, processors, and payment networks, will benefit from Azure Payment HSM. Azure Payment HSM enables a wide range of use cases, such as payment processing, which allows card and mobile payment authorization and 3D-Secure authentication; payment credential issuing for cards, wearables, and connected devices; securing keys and authentication data; and sensitive data protection for point-to-point encryption, security tokenization, and EMV payment tokenization.
Get started

Azure Payment HSM is available at launch in the following regions: East US, West US, South Central US, Central US, North Europe, and West Europe.

As Azure Payment HSM is a specialized service, customers should ask their Microsoft account manager and CSA to send the request via email.

Learn more about Azure Payment HSM:
- Azure Payment HSM
- Azure Payment HSM documentation
- Thales payShield 10K
- Thales payShield Manager
- Thales payShield Trusted Management Device

To download PCI certification reports and shared responsibility matrices:
- Azure PCI PIN AOC
- Azure PCI DSS AOC
- Azure PCI 3DS AOC

View the full article
  15. The HashiCorp Vault partner ecosystem continues to show strong growth as we added 19 new HashiCorp Cloud Platform (HCP), Enterprise, and OSS integrations this past fiscal quarter.

»New HCP Vault Verified Integrations

HCP Vault is a fully managed platform operated by HashiCorp, allowing organizations to get Vault up and running quickly to secure applications and protect sensitive data. The HCP Vault Verified badge indicates a product has been verified to work with HCP Vault. We are pleased to announce five new integrations have now been verified to work with HCP Vault:

»Cockroach Labs

Cockroach Labs completed two HCP Vault integrations this quarter. The first validation is the HCP Vault & CockroachDB Certificate Management integration, which manages certificates used by CockroachDB Self-Hosted via the HCP Vault PKI secrets engine and Vault Agent. The second Cockroach integration is the HCP Vault & CockroachDB Encryption-at-Rest integration, which uses the Transit secrets engine in HCP Vault to provide externally managed encryption keys for use as the store key for CockroachDB's enterprise encryption-at-rest.

»Dynatrace

The Dynatrace HCP Vault & Synthetic Monitoring integration uses HCP Vault to store usernames and passwords for use in synthetic monitors that test API endpoints and websites.

»ForgeRock

ForgeRock also completed two HCP Vault integrations this past quarter. The first integration is HCP Vault & ForgeRock Authentication, which authenticates into HCP Vault using single sign-on with ForgeRock as an identity provider. The second integration is HCP Vault & ForgeRock Secrets, which uses HCP Vault as a secret store to manage secrets in ForgeRock Access Management.

»New Vault Enterprise Integrations

Nine new Vault Enterprise integrations were added this past quarter:

»Cockroach Labs

Cockroach Labs has completed three enterprise integrations this quarter to add to their growing portfolio.
Adding to the enterprise versions of the Vault & CockroachDB Certificate Management and Vault & CockroachDB Encryption-at-Rest integrations validated with HCP Vault above, Cockroach Labs has also completed the Vault & CockroachDB Dedicated CMEK integration, which enables support for Customer Managed Encryption Keys (CMEK) from CockroachDB Dedicated by managing keys in AWS and GCP KMS from Vault Enterprise's Key Management secrets engine.

»Crypto4A

The QxEDGE Hybrid Security Platform (HSP) has been validated to work with Vault's new managed keys feature, which delegates the handling, storage, and interaction with private key material to a trusted external KMS. These managed keys can be used in Vault's PKI secrets engine to offload PKI operations to the HSM.

»Dynatrace

The Vault Enterprise integration with Dynatrace uses Vault to store usernames and passwords for use in synthetic monitors that test API endpoints and websites.

»ForgeRock

The Vault & ForgeRock Secrets Enterprise integration utilizes HashiCorp Vault as a secret store to manage secrets in ForgeRock Access Management.

»Futurex

The Vectera Plus, KMES Series 3, and VirtuCrypt cloud HSMs have been validated to work with Vault's new managed keys feature, which delegates the handling, storage, and interaction with private key material to a trusted external KMS. These managed keys can be used in Vault's PKI secrets engine to offload PKI operations to the HSM.

»Securosys

The Securosys HSM has been validated to work with Vault's new managed keys feature, which delegates the handling, storage, and interaction with private key material to a trusted external KMS. These managed keys can be used in Vault's PKI secrets engine to offload PKI operations to the HSM.

»Utimaco

The Utimaco HSM has been validated to work with Vault's new managed keys feature, which delegates the handling, storage, and interaction with private key material to a trusted external KMS.
These managed keys can be used in Vault's PKI secrets engine to offload PKI operations to the HSM.

»New Vault OSS Integrations

We also added five new open source Vault integrations to our ecosystem:

»BigID

The Vault & BigID integration retrieves credentials from Vault to authenticate BigID connections to any data source using usernames and passwords.

»ForgeRock

The Vault & ForgeRock Authentication integration authenticates into Vault using single sign-on with ForgeRock as an identity provider.

»Kaleido

The Kaleido Vault & EthSign secrets engine enables support for creating secp256k1 keys to sign transactions for submission to any Ethereum-based blockchain with an API interface.

»Palo Alto Networks

Secure, store, and tightly control access to tokens, passwords, certificates, encryption keys, and other sensitive data using HashiCorp Vault within Palo Alto Networks XSOAR pipelines. This integration supports the use of Vault namespaces.

»Learn More

The HashiCorp Vault Integration Program allows partners to integrate their products to work with HashiCorp Vault (both the open source and Enterprise versions) or HashiCorp Cloud Platform (HCP) Vault. Learn more at https://www.vaultproject.io/docs/partnerships. As a fully managed service, HCP Vault is the easiest way to secure, store, and tightly control access to tokens, passwords, certificates, encryption keys, and other sensitive data. For more information about HCP Vault and pricing, please visit the HCP product page or sign up through the HCP portal.
Find more information on past Vault integrations here:

- Cribl, MongoDB, and Thales Highlight New HCP and Enterprise Vault Integrations
- MongoDB Field Level Encryption with HashiCorp Vault KMIP Secrets Engine
- Red Hat, Datadog, and More Partners Add Vault Ecosystem Integrations
- HashiCorp Vault Surpasses 100 Integrations with 75 Partners
- HashiCorp Vault Integrates with ServiceNow for Credential Management
- GitHub, F5, and Okta Among New HCP Vault Integrations
- HashiCorp Releases Identity-based Security as a Service on the HashiCorp Cloud Platform

View the full article
  16. We are pleased to announce the general availability of HashiCorp Vault 1.12. Vault provides secrets management, data encryption, and identity management for any application on any infrastructure. Vault 1.12 focuses on improving Vault's core workflows as well as adding new features such as Redis and Amazon ElastiCache secrets engines, a new PKCS#11 provider, improved Transform secrets engine usability, updated resource quotas, expanded PKI revocation and telemetry capabilities, and much more. Key features and improvements in Vault 1.12 include:

- PKCS#11 provider (Vault Enterprise): Added the Vault PKCS#11 provider, which enables the Vault KMIP secrets engine to be used via PKCS#11 calls. The provider supports a subset of key generation, encryption, decryption, and key storage operations.
- Transparent Data Encryption for Oracle (Enterprise): Support for Vault to manage encryption keys for Transparent Data Encryption (TDE) with Oracle servers.
- Transform secrets engine (Vault Enterprise): Added the ability to import externally generated keys for bring-your-own-key (BYOK) workflows, added MSSQL external storage support, and added support for encryption key auto-rotation via an auto_rotate_period option.
- Resource quotas: Enhanced path- and role-based resource quotas with support for API path suffixes and auth mount roles. For example, a trailing wildcard * can be added as part of the path, so auth/token/create* would match both auth/token/create and auth/token/create-orphan but not auth/token/lookup-self.
- Versioned plugins: Added the concept of versions to plugins, making plugins "version-aware" and enabling release standardization and a better user experience when installing and upgrading plugins.
- Namespace custom metadata (Vault Enterprise): Support for specifying custom metadata on namespaces was added. The new vault namespace patch command can be used to update existing namespaces with custom metadata as well.
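The trailing-wildcard matching described for the enhanced resource quotas can be sketched as a simple prefix rule. This is an illustration of the documented matching behavior, not Vault's actual implementation:

```python
def quota_matches(quota_path: str, request_path: str) -> bool:
    """Match a quota path against a request path.
    A single trailing '*' acts as a prefix wildcard; otherwise exact match."""
    if quota_path.endswith("*"):
        return request_path.startswith(quota_path[:-1])
    return request_path == quota_path

print(quota_matches("auth/token/create*", "auth/token/create"))         # True
print(quota_matches("auth/token/create*", "auth/token/create-orphan"))  # True
print(quota_matches("auth/token/create*", "auth/token/lookup-self"))    # False
```

This mirrors the example in the release notes: the quota applies to both the create and create-orphan endpoints but leaves lookup-self unconstrained.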
- OIDC provider interface update (UI): Our design and user research teams gathered community feedback and simplified the setup experience for using Vault as an OIDC provider. With just a few UI clicks, users can now have a default OIDC provider configured and ready to go.
- Okta number challenge interface update (UI): Added support for Okta's number challenge to the Vault UI. This enables users to complete the Okta number challenge from a UI, CLI, and API.
- PKI secrets engine: We are improving Vault's PKI engine revocation capabilities by adding support for the Online Certificate Status Protocol (OCSP) and a delta certificate revocation list (CRL) to track changes to the main CRL. These changes offer significant performance and data transfer improvements to revocation workflows.
- PKI secrets engine telemetry: Support for additional telemetry metrics for better insights into certificate usage via the count of stored and revoked certificates. Vault's tidy function was also enhanced with additional metrics that reflect the remaining stored and revoked certificates.
- Redis secrets engine: Added a new database secrets engine that supports the generation of static and dynamic user roles and root credential rotation on a standalone Redis server. Huge thanks to Francis Hitchens, who contributed a repository to HashiCorp.
- Amazon ElastiCache secrets engine: Added a new database secrets engine that generates static credentials for existing managed roles in Amazon ElastiCache.
- LDAP secrets engine: Added a new LDAP secrets engine that unifies the user experience between the Active Directory (AD) secrets engine and OpenLDAP secrets engine. This new engine supports all implementations from both of the engines mentioned above (AD, LDAP, and RACF) and brings dynamic credential capabilities for users relying on Active Directory.
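Conceptually, the delta CRL mentioned above lets clients fetch only the serials revoked since the last full CRL build and merge them locally, rather than re-downloading the entire list. The sketch below reduces CRLs to sets of serial numbers purely for illustration; real CRLs are signed X.509 structures with their own extensions:

```python
def effective_revoked(base_crl: set, delta_crl: set) -> set:
    """Combine a base CRL with a delta CRL listing serials revoked since the
    base was published. Plain sets of serial-number strings stand in for the
    signed X.509 CRL structures a real client would parse and verify."""
    return base_crl | delta_crl

base = {"1A", "2B", "3C"}   # serials on the last full CRL
delta = {"4D"}              # revoked since that CRL was built
print(sorted(effective_revoked(base, delta)))  # ['1A', '2B', '3C', '4D']
```

The data transfer saving comes from the delta being much smaller than the base during bursts of revocation between full CRL rebuilds.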
- KMIP secrets engine (Vault Enterprise): Added support to the KMIP secrets engine for the operations and attributes in the Baseline Server profile, in addition to the already supported Symmetric Key Lifecycle Server and Basic Cryptographic Server profiles.

This release also includes additional new features, workflow enhancements, general improvements, and bug fixes. The Vault 1.12 changelog lists all the updates. Please visit the Vault Release Highlights page for step-by-step tutorials demonstrating the new features.

»PKI Secrets Engine Improvements

We are improving the Vault PKI engine's revocation capabilities by adding support for the Online Certificate Status Protocol (OCSP) and a delta CRL to track changes to the main CRL. These enhancements significantly streamline the PKI engine, making the certificate revocation semantics easier to understand and manage. Additionally, support for automatic CRL rotation and periodic tidy operations helps reduce operator burden, alleviate the demand on cluster resources during periods of high revocation, and ensure clients are always served valid CRLs. Finally, support for bring-your-own-cert (BYOC) allows revocation of no_store=true certificates and, for proof-of-possession (PoP), allows end users to safely revoke their own certificates (with the corresponding private key) without operator intervention.

PKI and managed key support for RSA-PSS signatures: Since its initial release, Vault's PKI secrets engine supported only RSA-PKCS#1v1.5 (public key cryptographic standards) signatures for issuers and leaves. To conform with guidance from the National Institute of Standards and Technology (NIST) around key transport and for compatibility with newer hardware security module (HSM) firmware, we have included support for RSA-PSS (probabilistic signature scheme) signatures. See the section on PSS Support in the PKI documentation for limitations of this feature.
PKI telemetry improvements: This release adds additional telemetry to Vault's PKI secrets engine, enabling customers to gather better insights into certificate usage via the count of stored and revoked certificates. Additionally, the Vault tidy function is enhanced with additional metrics that reflect the remaining stored and revoked certificates.

Google Cloud Key Manager support: Managed keys let Vault secrets engines (currently PKI) use keys stored in cloud KMS systems for cryptographic operations like certificate signing. Vault 1.12 adds support for Google Cloud KMS to the managed key system, where previously only AWS, Microsoft Azure, and PKCS#11 HSMs were supported. For more information, please see the PKI Secrets Engine documentation.

»PKCS#11 Provider

Software solutions often require cryptographic objects such as keys or X.509 certificates, and some external software must also perform operations including key generation, hashing, encryption, decryption, and signing. HSMs are traditionally used as a secure option but can be expensive and challenging to operationalize. Vault Enterprise 1.12 is a PKCS#11 v2.40 compliant provider (extended profile). PKCS#11 is the standard protocol supported for integrating with HSMs, and Vault offers the operational flexibility and advantages of software for key generation, encryption, and object storage operations. The PKCS#11 provider in Vault 1.12 supports a subset of key generation, encryption, decryption, and key storage operations. Protecting sensitive data at rest is a fundamental task for database administrators that enables many organizations to follow industry best practices and comply with regulatory requirements. Because of this feature, administrators of Oracle databases will now be able to enable Transparent Data Encryption (TDE) for Oracle. TDE for Oracle performs real-time data and log file encryption and decryption transparently to end-user applications.
For more information, please see the PKCS#11 provider documentation.

»Transform Secrets Engine Enhancements

Transform is a Vault Enterprise feature that lets Vault use data transformations and tokenization to protect secrets residing in untrusted or semi-trusted systems. This includes protecting compliance-regulated data such as social security numbers and credit card numbers. Oftentimes, data must reside within file systems or databases for performance but must be protected in case the system in which it resides is compromised. Transform is built for these kinds of use cases. With this release, we added the ability to import externally generated keys for BYOK workflows, MSSQL external storage support, and support for encryption key auto-rotation via an auto_rotate_period option.

- Bring your own key (BYOK): Added the ability to import externally generated keys to support use cases where there is a need to bring in an existing key from an HSM or other outside system. In release 1.11, we introduced BYOK support to Vault, enabling customers to import existing keys into the Vault Transit secrets engine and enabling secure and flexible Vault deployments. We are extending that support to the Vault Transform secrets engine in this release.
- MSSQL support: An MSSQL store is now available to be used as an external storage engine with tokenization in the Transform secrets engine. Refer to the following documents: Transform Secrets Engine (API), Transform Secrets Engine, and Tokenization Transform for more information.
- Key auto-rotation: Periodic rotation of encryption keys is a recommended key management practice for a good security posture. In Vault 1.10, we added support for auto key rotation in the Transit secrets engine. In Vault 1.12, the Transform secrets engine has been enhanced to let users set the rotation policy during key creation as a time interval, which will cause Vault to automatically rotate the Transform keys when the time interval elapses.
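The auto_rotate_period behavior amounts to a due-date check against the time of the last rotation. A minimal sketch of that policy, illustrating the semantics only (not Vault's internal scheduler):

```python
from datetime import datetime, timedelta

def rotation_due(last_rotated: datetime, auto_rotate_period: timedelta,
                 now: datetime) -> bool:
    """Return True once the configured rotation interval has elapsed
    since the key was last rotated."""
    return now - last_rotated >= auto_rotate_period

last = datetime(2022, 10, 1)
period = timedelta(days=30)  # hypothetical auto_rotate_period of 30 days
print(rotation_due(last, period, datetime(2022, 10, 20)))  # False: 19 days in
print(rotation_due(last, period, datetime(2022, 11, 1)))   # True: 31 days in
```

When the check fires, Vault creates a new key version so new encryptions use fresh material while older versions remain available for decryption.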
Refer to the Tokenization Transform and Transform Secrets Engine (API) documentation for more information.

For more information, please see the Transform Secrets Engine documentation.

»Other Vault 1.12 Features

Many new features in Vault 1.12 have been developed over the course of the 1.11.x releases. You can learn more about how to use these features in our detailed, hands-on HashiCorp Vault guides. You can consult the changelog for full details, but here are a few of the larger changes and deprecation notices:

- Terraform provider for Vault: The Terraform provider uses Vault's sys/seal-status endpoint to get the Vault server's version, and then determines the correct features available for use.
- Vault usage metrics: Enhanced the /sys/internal/counters API to support setting the end_date to the current month. When this is done, the new_clients field will have the approximate number of new clients that came in for the current month.
- Licensing enhancement (Vault Enterprise): Updated license termination behavior so that production licenses no longer have a termination date, which makes Vault more robust for Vault Enterprise customers.
- AAD Graph on Azure secrets engine removed: We added a use_microsoft_graph_api configuration parameter for using Microsoft Graph API, since the Azure Active Directory API is being removed.
- X.509 certificates with SHA-1 signatures support removed: Please migrate off SHA-1 for certificate signing. Go (Golang) version 1.18 removes support for SHA-1 by default; however, you can set a Go environment variable to restore SHA-1 support if you need to continue using SHA-1 (supported until Go 1.19).
- Standalone database engines impacted experience: If you use any standalone database engines, please migrate away from their usage. With this release, Vault will log error messages and shut down, and any attempts to add new mounts will result in an error. Please migrate to the database secrets engines.
- AppID impacted experience: If you use AppID, please migrate away from its usage. With this release, Vault will log error messages and shut down. Please migrate to the AppRole auth method.

»Upgrade Details

Vault 1.12 introduces significant new functionality. As such, please review the Upgrading Vault page, as well as the Feature Deprecation Notice and Plans page, for further details. As always, we recommend upgrading and testing new releases in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault discussion forum. As a reminder, if you believe you have found a security issue in Vault, please responsibly disclose it by emailing security@hashicorp.com — do not use the public issue tracker. For more information, please consult our security policy and our PGP key. For more information about Vault Enterprise, visit hashicorp.com/products/vault. You can download the open source version of Vault at vaultproject.io. We hope you enjoy HashiCorp Vault 1.12. View the full article
  17. This blog post has been co-authored by Darius Ryals, General Manager of Partner Promises and Azure Chief Information Security Officer.

Today we're announcing that Azure Payment HSM has achieved Payment Card Industry Personal Identification Number (PCI PIN) certification, making Azure the first hyperscale cloud service provider to obtain this certification. Financial technology has rapidly disrupted the payments industry, and securing payment transactions is of the utmost importance. Azure helps customers secure their critical payment infrastructure in the cloud and streamlines global payments security compliance. Azure remains committed to helping customers achieve compliance with the Payment Card Industry's leading compliance certifications.

Enhanced security and compliance through Azure Payment HSM

Azure Payment HSM is a bare metal infrastructure as a service (IaaS) that provides cryptographic key operations for real-time payment transactions in Azure. The service empowers financial institutions and service providers to accelerate their digital payment strategy through the cloud. Azure Payment HSM is certified across stringent security and compliance requirements established by the PCI Security Standards Council (PCI SSC), including PCI DSS, PCI 3DS, and PCI PIN, and offers HSMs certified to FIPS 140-2 Level 3 and PCI HSM v3. Azure Payment HSM enables a wide range of use cases. These include payment processing, which allows card and mobile payment authorization and 3D-Secure authentication; payment credential issuing for cards, wearables, and connected devices; securing keys and authentication data for POS, mPOS, remote key loading, PIN generation, and PIN routing; and sensitive data protection for point-to-point encryption, security tokenization, and EMV payment tokenization. Azure Payment HSM is designed to meet the low-latency and high-performance requirements of mission-critical payment applications.
The service is composed of single-tenant HSMs offering customers complete remote administrative control and exclusive access. HSMs are provisioned and connected directly to users' virtual networks, and the HSMs are under users' sole administrative control. HSMs can be easily provisioned as a pair of devices and configured for high availability. Azure Payment HSM provides great benefits both for payment HSM users with a legacy on-premises HSM footprint and for new payment ecosystem entrants who may choose a cloud-native approach from the outset. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM.

Leverage the Azure Payment HSM PCI PIN certification

PINs are used to verify cardholder identity during online and offline payment card transactions. The PCI PIN Security Standard contains requirements for the secure management, processing, and transmission of PIN data and applies to merchants and service providers that store, process, transmit, or can impact the security of PIN data. Azure Payment HSM customers can reduce their compliance burden by leveraging Azure's PCI PIN Attestation of Compliance (AOC), which addresses Azure's portion of responsibility for each PCI PIN requirement and contains the list of certified Azure regions. The Azure Payment HSM Shared Responsibility Matrix is also available to help customers significantly reduce time, effort, and cost during their own PCI PIN assessments by simplifying the compliance process.

Learn more

When moving payment systems to the cloud, payment security must comply with the Payment Card Industry's mandated requirements without exception. Financial institutions and service providers in the payment ecosystem, including issuers, service providers, acquirers, processors, and payment networks, would benefit from Azure Payment HSM.
To learn how Microsoft Azure capabilities can help, see the resources below:

Azure Payment HSM
Azure Payment HSM documentation
Azure PCI PIN AOC
Azure PCI DSS AOC
Azure PCI 3DS AOC

View the full article
  18. Encryption and data protection are major requirements for customers moving their workloads to the cloud. To meet these requirements, organizations often invest a great deal of time in protecting sensitive data in cloud-based databases, driven mostly by government regulations, compliance, and the organization's own requirement to have data protected at rest. As Customer Engineers on the Security and Compliance technology team in Google Cloud, we engage both executive and technical stakeholders to help customers build secure deployments that enable their digital transformation on our Cloud platform. As Google Cloud continues its efforts to be the industry's most trusted cloud, we're taking steps to help customers better understand the encryption options available to protect workloads on our platform. In this post, we provide a guide to accelerate your design considerations and decision making when securely migrating or building databases with the various encryption options supported on Google Cloud.

Managing data at rest with encryption on Google Cloud

When you move data to Google Cloud, you can choose from databases that are simple to use and operate, without cumbersome maintenance tasks and operational overhead. Google Cloud keeps the databases highly available and updated, while your IT team can focus on delivering innovations and your end users enjoy reliable services. Additionally, you inherit security controls such as encryption of data at rest by default, which can help simplify your security implementations. For most organizations, encryption is one piece of a broader security strategy. Encryption adds a layer of defense in depth for protecting data and provides an important mechanism for how Google helps ensure the privacy of data. Encryption ensures that if data accidentally falls into an attacker's hands, they cannot access it without also having access to the encryption keys.
Our platform offers data-at-rest encryption by default, ensuring that all data stored within the cloud is encrypted with Google-managed keys.

Management options for encryption keys

Google-managed keys: All data stored within Google Cloud is encrypted at rest using the same hardened key management systems that we use for our own encrypted data. These key management systems provide strict access controls and auditing, and they encrypt user data at rest using the AES-256 encryption standard. No setup, configuration, or management is required. Google-managed keys are an appropriate choice if you don't have specific requirements related to compliance or locality of cryptographic materials.

Customer-managed keys: Customer-managed encryption keys (CMEK) offer the ability to protect your databases with encryption keys you control and manage. Using CMEK gives you control over more aspects of the lifecycle and management of your keys, such as key rotation, defining access control policies, auditing and logging, and enforcing data locality or residency requirements. CMEK is supported on Cloud Key Management Service, Cloud Hardware Security Module, and Cloud External Key Manager.

Encryption options for Google Cloud databases

In addition to the default security controls inherited on Google Cloud, we believe customers should have options to choose the level of protection over data stored in the cloud. We've developed database products integrated with our encryption capabilities that enable you to control your data and provide expanded granularity into when and how data is accessed.

Google's default encryption: Customers' content stored on our platform is encrypted at rest, without any action from customers, using multiple encryption mechanisms. Data for storage is split into chunks, and each chunk is encrypted with a unique data encryption key.
The data encryption keys are protected with key encryption keys (KEKs) and stored centrally in Google's KMS, a repository built specifically for storing keys.

Cloud Key Management Service (Cloud KMS) provides the capability to manage cryptographic keys in a central cloud service, either for direct use or for use by other cloud resources such as databases and datastores. Cloud KMS combines secure, highly available infrastructure with the ability not only to create keys of various types and strengths, but also to keep keys exclusively within the Google Cloud region with which the data is associated.

Cloud Hardware Security Module (Cloud HSM) enables you to generate encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs. The service is fully managed, so you can protect your most sensitive workloads without worrying about the operational overhead of managing an HSM cluster. Google manages the HSM hardware, automatically scales based on your use, and spares you the complexity of managing and using HSM-backed keys in production. For example, you can encrypt data in Cloud SQL tables using a Cloud HSM key that you manage and whose lifecycle you control.

Cloud External Key Manager (Cloud EKM) gives you ultimate control over the keys and encrypted data at rest within Google Cloud resources such as Cloud SQL, Cloud Spanner, and others. Cloud EKM enables you to use keys managed in a supported key management system external to Google to protect data within Google Cloud. It's important to note that for this option, externally managed keys are never cached or stored within Google Cloud. Whenever Google Cloud needs to decrypt data, it communicates directly with the external key manager. In addition to Cloud EKM, customers may leverage Key Access Justifications to understand why their externally hosted keys are being requested to decrypt data.
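As a sketch of the Cloud HSM option described above, the following gcloud commands create a key ring and an HSM-protected CMEK with automatic rotation. All resource names, the project, and the region are placeholders, and the flags should be checked against the current gcloud reference before use:

```shell
# Create a key ring in the region where the database lives (names are illustrative).
gcloud kms keyrings create db-keyring --location=us-east1

# Create an HSM-protected key with a 90-day automatic rotation period.
gcloud kms keys create db-cmek \
    --location=us-east1 \
    --keyring=db-keyring \
    --purpose=encryption \
    --protection-level=hsm \
    --rotation-period=90d \
    --next-rotation-time=2025-01-01T00:00:00Z
```

The resulting key can then be referenced by its full resource name wherever a database service accepts a CMEK.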
Here's a look at the encryption options for the database services that Google Cloud offers:

Database Platform     | Google Cloud Database Service    | Encryption Options Supported
----------------------|----------------------------------|-------------------------------------------------------------
Microsoft SQL Server  | Cloud SQL for SQL Server         | Google default encryption, Cloud KMS, Cloud HSM, Cloud EKM
MySQL                 | Cloud SQL for MySQL              | Google default encryption, Cloud KMS, Cloud HSM, Cloud EKM
PostgreSQL            | Cloud SQL for PostgreSQL         | Google default encryption, Cloud KMS, Cloud HSM, Cloud EKM
MongoDB               | MongoDB Atlas                    | Google default encryption, Cloud KMS, Cloud HSM
Apache HBase          | Cloud Bigtable                   | Google default encryption, Cloud KMS, Cloud HSM, Cloud EKM
PostgreSQL            | Cloud Spanner for PostgreSQL     | Google default encryption, Cloud KMS, Cloud HSM, Cloud EKM
Google Standard SQL   | Cloud Spanner                    | Google default encryption, Cloud KMS, Cloud HSM, Cloud EKM
Redis                 | Memorystore for Redis            | Google default encryption, Cloud KMS, Cloud HSM
Firestore             | Firestore                        | Google default encryption
Oracle Database       | Bare Metal Solution for Oracle   | Customer-owned key management system

For more information on key management on GCP, read our KMS Deep Dive Whitepaper.
  19. Data security is a huge part of an organization's security posture. Encryption is a core control for data security, and Google Cloud offers multiple encryption options for data at rest, in transit, and even in use. Let's shed some light on each of these.

Encryption at rest by default

To help protect your data, Google encrypts data at rest, ensuring that it can only be accessed by authorized roles and services, with audited access to the encryption keys. Data is encrypted prior to being written to disk. Here is how:

1. Data is first "chunked" - broken up into pieces - and each chunk is encrypted with its own data encryption key.
2. Each data encryption key is wrapped using a key encryption key.
3. The encrypted chunks and wrapped encryption keys are then distributed across Google's storage infrastructure.
4. If a chunk of data is updated, it is encrypted with a new key rather than by reusing the existing key.
5. When data needs to be retrieved, the process repeats in reverse.

As a result, if an attacker were to compromise an individual key or gain physical access to storage, they would still be unable to read customer data, as they would need to identify all the data chunks in an object, retrieve them, and retrieve the associated encryption keys.

Encryption in transit by default

All communications over the Internet to Google Cloud require properly terminated TLS connections. Encryption in transit protects your data if communications are intercepted while data moves between your site and the cloud provider or between two services. This protection is achieved by encrypting the data before transmission, authenticating the endpoints, and decrypting and verifying the data on arrival. For example, Transport Layer Security (TLS) is often used to encrypt data in transit for transport security, and Secure/Multipurpose Internet Mail Extensions (S/MIME) is often used for email message security.
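The chunk-and-wrap flow described above can be sketched in a few lines of Python. This is a toy illustration only: the hash-based XOR keystream stands in for AES-256 so the sketch stays dependency-free, the tiny chunk size is an assumption for readability, and all function names are our own.

```python
# Toy sketch of envelope encryption at rest: chunk the data, encrypt each chunk
# with its own data encryption key (DEK), wrap each DEK with a key encryption
# key (KEK). NOT real AES-256 - the cipher below is for illustration only.
import hashlib
import os

CHUNK_SIZE = 16  # real chunks are far larger; small here for readability

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def encrypt_at_rest(plaintext: bytes, kek: bytes):
    """Split into chunks; encrypt each chunk with a fresh DEK; wrap the DEK with the KEK."""
    stored = []
    for i in range(0, len(plaintext), CHUNK_SIZE):
        chunk = plaintext[i:i + CHUNK_SIZE]
        dek = os.urandom(32)  # unique data encryption key per chunk
        stored.append((_keystream_xor(dek, chunk),   # encrypted chunk
                       _keystream_xor(kek, dek)))    # DEK wrapped by the KEK
    return stored

def decrypt_at_rest(stored, kek: bytes) -> bytes:
    """Reverse the process: unwrap each DEK with the KEK, then decrypt its chunk."""
    plaintext = bytearray()
    for enc_chunk, wrapped_dek in stored:
        dek = _keystream_xor(kek, wrapped_dek)
        plaintext.extend(_keystream_xor(dek, enc_chunk))
    return bytes(plaintext)
```

Note how compromising a single wrapped DEK reveals only one chunk, which is the property the paragraph above relies on.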
Encryption in use: Confidential Computing

Confidential Computing adds a "third pillar" that protects your data in memory from compromise or exfiltration by encrypting data while it is being processed. You can encrypt your data in use with Confidential VMs and Confidential GKE Nodes. This builds on the protections Shielded VMs offer against rootkits and bootkits. Main memory encryption is performed using dedicated hardware within the on-die memory controllers. Each controller includes a high-performance AES engine. The AES engine encrypts data as it is written to DRAM or shared between sockets, and decrypts it when data is read. Google does not have access to the encryption key.

At-rest encryption options

While in some cases the encryption by default might be all you need, Google Cloud provides other options for customers based on their trust level and business needs.

Customer-Supplied Encryption Keys (CSEK)

If you need to operate with minimal trust, you can use Customer-Supplied Encryption Keys (CSEK), which enable you to maintain your own separate root of trust and push keys at time of use to Google Cloud via an API. Those keys are stored in RAM only for the time required to perform the specific operation. With CSEK, the burden and responsibility of protecting, and not losing, keys falls on you. Google has no way to recover your data if your keys are inadvertently deleted or lost. It is very easy to get this wrong, so if you use CSEK you need to be exceedingly careful and must also invest in your own key distribution system to push keys to Google at the rate your applications use them.

Key Management Service (Cloud KMS)

Another option is Cloud Key Management Service, which enables you to leverage our globally scalable key management system while maintaining control of key operations, including full audit logging of your keys.
This solution alleviates the need for you to create your own key distribution system while still enabling you to control the visibility of your keys. With Cloud KMS, keys created and maintained in Cloud KMS are used as the key encryption keys in place of Google's default key encryption keys.

Hardware security modules (Cloud HSM)

You can also optionally store keys in Cloud HSM, a cloud-hosted hardware security module service that allows you to host encryption keys and perform cryptographic operations in a cluster of FIPS 140-2 Level 3 certified HSMs. Google manages the HSM cluster for you, so you don't need to worry about clustering, scaling, or patching. Because Cloud HSM uses Cloud KMS as its front end, you can leverage all the conveniences and features that Cloud KMS provides.

Cloud External Key Manager (Cloud EKM)

With Cloud EKM you can use encryption keys that you manage within a supported external key management partner to protect data within Google Cloud. Here is how it works:

1. First, you create or use an existing key in a supported external key management partner system. This key has a unique URI.
2. Next, you grant your Google Cloud project access to use the key in the external key management partner system.
3. In your Google Cloud project, you create a Cloud EKM key using the URI for the externally managed key.

The Cloud EKM key and the external key management partner key work together to protect your data. The external key is never exposed to Google.

Other data security services

Apart from data encryption, some other services that come in handy for data security in Google Cloud are:

VPC Service Controls, which mitigates data exfiltration risks by isolating multi-tenant services.
Data Loss Prevention, which helps discover, classify, and protect sensitive data. Let's cover this in the next blog.

For a more in-depth look into how encryption at rest and in transit works across our various services, check out the whitepapers. For more #GCPSketchnote, follow the GitHub repo.
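The Cloud EKM steps above map onto gcloud roughly as follows. The key ring, key name, region, and external key URI are all placeholders, and the exact flags should be verified against the current gcloud reference, since the EKM surface has evolved over time:

```shell
# Step 3: create a Cloud EKM key that will point at the externally managed key.
# (Steps 1 and 2 happen in the external key manager's own console or API.)
gcloud kms keys create my-ekm-key \
    --location=us-east1 \
    --keyring=ekm-keyring \
    --purpose=encryption \
    --protection-level=external \
    --skip-initial-version-creation

# Attach the external key's unique URI as the first key version.
gcloud kms keys versions create \
    --key=my-ekm-key \
    --location=us-east1 \
    --keyring=ekm-keyring \
    --external-key-uri="https://ekm.example.com/v0/keys/my-external-key" \
    --primary
```

Because only the URI is registered, the external key material itself never leaves the partner system, which is exactly the property the text above highlights.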
For similar cloud content follow me on Twitter @pvergadia and keep an eye out on thecloudgirl.dev
  20. Cloud Spanner is Google Cloud's fully managed relational database that offers unlimited scale, high performance, strong consistency across regions, and high availability (up to a 99.999% availability SLA). In addition, enterprises trust Spanner because it provides security, transparency, and complete data protection to its customers. To give enterprises greater control of how their data is secured, Spanner recently launched customer-managed encryption keys (CMEK). CMEK enables customers to manage encryption keys in Cloud Key Management Service (KMS). From a security standpoint, Spanner already offers, by default, encryption for data in transit via its client libraries and for data at rest using Google-managed encryption keys. Customers in regulated industries such as financial services, healthcare and life sciences, and telecommunications need control of the encryption keys to meet their compliance requirements. With the launch of CMEK support for Spanner, you now have complete control of the encryption keys and can run workloads that require the highest level of security and compliance. You can also protect database backups with CMEK. Spanner also provides VPC Service Controls support and has the compliance certifications and necessary approvals to be used for workloads requiring ISO 27001, 27017, 27018, PCI DSS, SOC 1|2|3, HIPAA, and FedRAMP.

Spanner integrates with Cloud KMS to offer CMEK support, enabling you to generate, use, rotate, and destroy cryptographic keys in Cloud KMS. Customers who need an increased level of security can choose hardware-protected encryption keys, hosting keys and performing cryptographic operations in FIPS 140-2 Level 3 validated Hardware Security Modules (HSMs). CMEK capability in Spanner is available in all Spanner regions and in select multi-regions that support KMS and HSM.

How to use CMEK with Spanner

To use CMEK for a Spanner database, specify the KMS key at the time of database creation.
The key must be in the same location as the Spanner instance (regional or multi-regional). Spanner can access the key on the user's behalf after the user grants the Cloud KMS Encrypter/Decrypter role to a Google-managed Cloud Spanner service account. Once a database with CMEK is created, access to it via APIs, DDL, and DML is the same as for a database using Google-managed encryption keys. You can see the details of the encryption type and encryption key on the database overview page. Spanner calls KMS in each zone of an instance configuration about every five minutes to ensure that the key for the Spanner database is still valid. Customers can audit the Spanner requests made to KMS on their behalf in the Logs Viewer if they enable logging for the Cloud KMS API in their project.

Access Approval support for Spanner

In addition to security controls, customers need complete visibility and control over how their data is used. Customers today use Cloud Spanner audit logs to record the admin and data access activities of members in their Google Cloud organization, and they enable Access Transparency logs to record the actions taken by Google personnel. Access Transparency provides near real-time logs in which Google support and engineering personnel log a business justification (including, in some scenarios, a reference to support tickets) for any access to a customer's data. Expanding on this, Spanner has launched support for Access Approval in Preview. With Access Approval, a customer blocks administrative access to their data by Google personnel and requires their own explicit approval before such access can proceed. This is an additional layer of control on top of the transparency provided by Access Transparency logs. Access Approval also provides a historical view of all requests that were approved, dismissed, or expired.
To use Access Approval, customers first enable Access Transparency from the console for their organization; Access Approval can then be enabled from the console as well. With Access Approval, users receive an email or Pub/Sub message with an access request that they are able to approve. Using the information in the message, they can use the Google Cloud Console or the Access Approval API to approve the access.

Learn more

Spanner bills a CMEK-enabled database the same as any other Spanner database. Customers are billed for Cloud KMS use (the cost of the key and of cryptographic operations) whenever Spanner uses the key for encryption or decryption. We expect this cost to be minimal; see KMS pricing for details. To learn more about CMEK, see the documentation. To get started with Spanner, create an instance or try it out with a Spanner Qwiklab.
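The CMEK setup described earlier in this section maps onto gcloud roughly as follows. The project number, instance, key ring, and key names are placeholders, and the Spanner service-agent address and flags should be verified against current documentation:

```shell
# Grant the Google-managed Spanner service account use of the KMS key.
gcloud kms keys add-iam-policy-binding my-spanner-key \
    --location=us-central1 \
    --keyring=spanner-keyring \
    --member="serviceAccount:service-123456789@gcp-sa-spanner.iam.gserviceaccount.com" \
    --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"

# Create the database, naming the CMEK at creation time, as the text above
# notes the key must be specified when the database is created.
gcloud spanner databases create example-db \
    --instance=test-instance \
    --kms-key=projects/my-project/locations/us-central1/keyRings/spanner-keyring/cryptoKeys/my-spanner-key
```

After creation, reads and writes against the database are unchanged; only the key backing the at-rest encryption differs.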
  21. The 2020 IoT Signals Report reveals that 95 percent of IoT projects fail at proof of concept (PoC), with a significant portion failing due to the inability to scale, despite the many claims touting zero-touch provisioning. Imagine the following winning alternative instead: an IoT solution builder receives a batch of devices from its original equipment manufacturer (OEM), and all they do is power them on to get the following:

Automatic and secure onboarding to a production certificates provider.
Receipt of device operational certificate credentials.
Automatic provisioning to cloud application services.
Automation of credentials renewal and lifecycle management.

Moreover, this seamless process is the same for all devices, whether in PoC or the millionth in production, and the best part is that setup requires only three simple one-time non-engineering actions by the solution builder. This is exactly what we've accomplished with partners, and we present the how here as a blueprint.

Figure 1: Seamlessly and securely deploy at scale from a one-time setup in three simple steps - a solution blueprint for zero-touch provisioning

For this ease, all the solution builder does for setup is create an account with the managed credential provider, deliver device customization instructions to the OEM, and register an attestation certificate with Azure Device Provisioning Service (DPS). They perform each of these actions only once to enable a zero-touch provisioning experience that holds both for the PoC and for production deployments at scale. What the solution builder may not and need not know is that behind this simplicity lie complex integrations: an interplay of multiple certificate credentials in a trust delegation that accommodates the multi-custodial nature of the device manufacturing value chain, security hardening to resist tampering, and priming for automated renewal and lifecycle management of operational credentials.
True scalable zero-touch provisioning can result only after these complex integrations occur; otherwise, the burden falls on the internet of things (IoT) solution builder, hence the observed high failure rate. But why is zero-touch provisioning so elusive? Simply put, most claims of zero-touch provisioning rest on a narrow understanding of the provisioning needs of IoT devices. This is not a criticism but rather an observation that might be indicative of evolution and maturity in IoT practices. A true solution will never emerge without a proper understanding of the problem space. A holistic view of IoT provisioning must recognize that IoT projects exist in phases and must consider these phases when designing a zero-touch provisioning experience. For illustrative simplicity, let's break a project down into three phases - evaluation, deployment, and operational - knowing one could get even more granular.

The evaluation phase

The evaluation phase kickstarts every project and entails the creation of a PoC. It is characterized by the solution builder having full control of the development environment and working with pre-existing devices in unitary quantities. By virtue of this full control of the development environment, provisioning entails embedding a credential into the device. The builder can take comfort in the security because only they have knowledge of the credential, and they alone have physical possession of the device.

The deployment phase

Next comes the deployment phase, which entails device manufacturing at production scale. This phase expands the development environment into an ecosystem of device manufacturing and supply chain partners. It also expands device quantities by several orders of magnitude. A clear characteristic of the deployment phase is a shift of control from full ownership by the solution builder to shared ownership with partners.
Security demands strong actions to protect confidential information within the solution by preventing the accidental sharing of information, allowing for increased trust in partner interactions. To uphold security and trust, provisioning must entail multiple credentials to compartmentalize knowledge amongst partners, a trust delegation scheme as the device changes custody, and security hardening to help prevent tampering.

The operational phase

The operational phase returns control to the IoT solution builder and entails solution operation and lifecycle management of credentials and devices. The role of provisioning in this phase is the setup that divorces the value chain of manufacturing partners to focus on operation (hence how the solution builder regains control), provisions operational credentials, and enables lifecycle management actions such as renewal, revocation, and retirement. Provisioning of IoT devices is therefore a complex undertaking in security and in building trust within an open ecosystem. Consequently, getting provisioning right demands a comprehensive understanding of the multi-faceted nature of the problem and acknowledgment that a complete solution will require several fields of expertise. Sadly, most claims of zero-touch provisioning only address the needs of the evaluation phase and ignore the needs of the deployment and operational phases that are requisite for at-scale production. It is no wonder the experience of zero-touch provisioning is elusive.

Call in the experts

Complex problems are best solved by domain experts. Solving for zero-touch provisioning requires expertise from many domains, chief among them expertise in operating public key infrastructures (PKI), hardening security, and customizing devices in a standard device manufacturing and acquisition process. Expertly operating a PKI is a fundamental requirement of zero-touch provisioning.
A PKI service suitable for onboarding and operating IoT devices at scale needs, among many attributes, to be highly available, provide global coverage, enable certificate audits, and deliver lifecycle management actions such as renewal and revocation. Above all, the PKI service should assist in achieving data sovereignty goals. An expertly operated PKI is important for many reasons. First, the underlying asymmetric-key cryptography provides the foundation for a zero-trust model of partner collaboration within a device's value chain. The fact that each partner holds a private key that they never share provides the basis for exclusive trust. Secondly, PKI enables IoT to benefit from decades of enterprise IT experience in the issuance and lifecycle management of certificate-based device credentials. Certificate-based credentials are valued over alternative forms of credentials because they also build on asymmetric-key cryptography to enforce a zero-trust model of computing in IoT. Operating a PKI builds on these two concepts and carries heavy responsibilities that only dedicated experts can deliver. Certificate Authorities (CAs) possess the requisite expertise from long practice in IT.

Security hardening complements a well-planned and structured PKI in resisting tampering. A solution is not secure without a countermeasure to subversion, which is the goal of tamper resistance. Tamper resistance derives from a very special class of integrated circuits whose primary goal is to operate normally or fail predictably under all adversity, be it physical, environmental, or networked. The result is mitigation against subversion, hijack, infiltration, and exfiltration. These tamper-resistant integrated circuits are commonly known as hardware security modules, or simply HSMs. The well-honed art of producing and prescribing the proper application of HSMs demands expertise confined to only a subset of semiconductor manufacturers.
Device personalization through customization is the final element in achieving secured zero-touch provisioning and demands the domain expertise of the OEM. The OEM must work in concert with the PKI and HSM providers to assure certain goals. First, that trust initiates at and properly transits the various custodians in the device manufacturing value chain. Second, that the device is customized to the solution builder's specifications and seamlessly connects to the right cloud solutions. Third, that the device automatically onboards and transitions into operational states, complete with proper credential provisioning and lifecycle management. Fourth, that the device is hardened against impersonation. Finally, that the device procurement process remains simple. Delivering secured devices with simplicity is a difficult balance that commands expertise and experience. Finally, it takes the right IoT product base, with features purposefully designed in, to leverage expertise from the various domains, exclusively through the use of standards where available. The IoT Identity Service security subsystem for Azure IoT Edge accomplishes this goal.

The blueprint

For this blueprint we allied with the Certificate Authority (CA) and PKI services provider Global Sign, the semiconductor manufacturer and HSM maker Infineon Technologies, and the OEM and edge device integrator Eurotech. The technical integration builds on the modular IoT Identity Service security subsystem of Azure IoT Edge, where the domain experts leveraged features such as the built-in client for the IETF RFC 7030 Enrollment over Secure Transport (EST) standard for certificate requests, the ISO/IEC 11889 Trusted Platform Module (TPM) and PKCS#11 interface standards for HSM integration, and the modularity of the security subsystem to accommodate the diversity of existing device manufacturing flows, which is a very important consideration.
The goal is not to disrupt decades-old existing manufacturing supply chains but to build on their respective experiences. This allied integration spares the IoT solution builder from delving into the requisite domain expertise and assures a solution that is secured by default. The result is a device highly customized for the IoT solution builder, who need not do more on receipt than turn it on.

Figure 2: Integrated trust from TPM to cloud for security and integrity from supply chain to services

The blueprint is thus about domain experts allying to solve the problem for the IoT solution builder, and in doing so it assures proper application of various technologies for a comprehensive solution to zero-touch provisioning at scale. For this integration, trust truly initiates from the source of the value chain, which is the Infineon Technologies TPM. For example, Global Sign can authoritatively verify that the target TPM is in fact one manufactured by Infineon Technologies, because of prior cross-signing of TPM manufacturing certificates as part of pre-verification before issuing operational certificates.

Figure 3: The IoT device identity lifecycle involves multiple credentials

This alliance of partners has composed a joint whitepaper that outlines the security and engineering principles underlying this solution, in the spirit of presenting a blueprint for replication.

Why standardization is important

Zero-touch provisioning is a difficult problem that truly calls for standardization. The difficulty might stem from several causes, but an obvious one is how to build a standard solution on a very diverse base of manufacturing flows without coercing expensive restructuring and retooling. No problem lasts forever, and someday a standard will emerge. Until then, why not build on existing standards (such as TPM, X.509, PKCS#11, and EST), manufacturing flows, and value chains to create microcosms of technology alignment and pragmatically solve a clear and present problem?
This is the essence of the blueprint, which in addition to providing a pragmatic solution for the moment is a call to the rest of the industry to unite in standardization.

Bringing IoT solutions to production

Many solutions that claim zero-touch provisioning in IoT lead to failures in PoC because they fail to solve the challenges that underlie IoT provisioning at scale. The right solution requires a comprehensive undertaking that must employ expertise from several domains to overcome complex challenges and deliver secured and seamless zero-touch provisioning at scale. Complex problems of this nature are often solved by uniting forces in standardization. However, many consortia have been at this problem for several years without tangible results, presumably because of the high risk of forcing highly diverse device manufacturing flows into untenably expensive restructuring for compliance. This blog offers a comprehensive solution to zero-touch provisioning by an alliance of experts, presented here as a blueprint that builds on existing experiences and manufacturing flows to raise the success rate of IoT solutions going into production.

To all the domain experts in the IoT value chain: this is a call to recognize the shared responsibility requisite of secured IoT solution deployments. We all win when the solution builder is successful, so let us all team up in alliances to bring about truly secured and comprehensive zero-touch provisioning in production at scale, or simply join us in Azure. It is the blueprint for success. To all IoT solution builders: ask your OEM partners to align with partners and deliver devices with the pre-integrations described in this blueprint to help simplify the experience of securely scaling a solution from PoC to production.

Learn more

About the principles at play in the joint whitepaper by this expert alliance.
From an OEM perspective in this press release by Eurotech.
From a Certificate Authority perspective in this blog by Global Sign.
From a TPM manufacturer perspective in this Infineon Technologies blog.

View the full article
  22. As we discussed in “The Cloud trust paradox: To trust cloud computing more, you need the ability to trust it less” and hinted at in “Unlocking the mystery of stronger security key management,” there are situations where the encryption keys must be kept away from the cloud provider's environment. While we argue that these are rare, they absolutely do exist. Moreover, when these situations materialize, the data in question or the problem being solved is typically hugely important. Here are three patterns where keeping the keys off the cloud may in fact be truly necessary or may outweigh the benefits of cloud-based key management.

Scenario 1: The last data to go to the cloud

As organizations migrate data processing workloads to the cloud, there is usually a pool of data “that just cannot go.” It may be the data that is most sensitive, most strictly regulated, or subject to the toughest internal security control requirements. Examples of such highly sensitive data vary by industry and even by company. One global organization states that if they presented the external key approach to any regulator in the world, they would expect approval due to their robust key custody processes. Another organization was driven by their interpretation of PCI DSS and internal requirements to maintain control of their own master keys in FIPS 140-2 Level 3 HSMs that they own and operate. This means that risk, compliance, or policy reasons make it difficult if not impossible to send this data set to the public cloud provider for storage or processing. This use case often applies to a large organization that is heavily regulated (financial services, healthcare, and manufacturing come to mind). It may be data about specific “priority” patients or data related to financial transactions of a specific kind. However, the organization may be willing to migrate this data set to the cloud as long as it is encrypted and they have sole possession of the encryption keys.
Thus, a specific decision to migrate may be made based on a combination of risk assessment, trust, and auditor input. Or, customer key possession may be justified by the customer's interpretation of specific compliance mandates. Now, some of you may say, “but we have data that really should never go to the cloud.” This may indeed be the case, but there is also general acceptance that digital transformation projects require the agility of the cloud, so an acceptable, if not entirely agreeable, solution must be found.

Scenario 2: Regional regulations and concerns

As cloud computing evolves, regional requirements are playing a larger role in how organizations migrate to the cloud and operate workloads in public cloud. This scenario focuses on a situation where an organization in one country wants to use a cloud based in a different country, but is not comfortable with the provider having access to the encryption keys for all stored data. Note that if the unencrypted data is processed in the same cloud, the provider will have access to the data at some point anyway. Some of these organizations may be equally uncomfortable with keys stored in any cryptographic device (such as an HSM) under the logical or physical control of the cloud provider. They reasonably conclude that such an approach is not really Hold Your Own Key (HYOK). This may be due to the regulations they are subject to, concerns about government access, or all of the above. Furthermore, regulators in Europe, Japan, India, Brazil and other countries are considering or strengthening mandates for keeping unencrypted data and/or encryption keys within their boundaries. Examples include specific industry mandates (such as TISAX in Europe) that either state or imply that the cloud provider cannot have access to data under any circumstances, which may necessitate ensuring the provider has no way to access the encryption keys. 
However, preliminary data indicates that some regulators may accept models where the encryption keys are in the sole possession of the customer and located in the customer's country, and hence off the cloud provider's premises (while the encrypted data may reside elsewhere). Another variation is the desire to keep the keys for each country-specific data set in the respective country, under the control of that country's personnel or citizens. This may apply to banking data and would necessitate storing the encryption keys for each data set in each country. An example may be a bank that insists that all its encryption keys be stored under one particular mountain in Switzerland. Yet another example covers requirements (whether regulatory or internal) to have complete knowledge and control over who administers the keys, along with a local audit log of all key access activity. As Thomas Kurian states here, “data sovereignty provides customers with a mechanism to prevent the provider from accessing their data, approving access only for specific provider behaviors that customers think are necessary. Examples of customer controls provided by Google Cloud include storing and managing encryption keys outside the cloud, giving customers the power to only grant access to these keys based on detailed access justifications, and protecting data-in-use. With these capabilities, the customer is the ultimate arbiter of access to their data.” Therefore, this scenario allows organizations to utilize Google Cloud while keeping their encryption keys in the location of their choice, under their physical and administrative control.

Scenario 3: Centralized encryption key control

With this use case, there are no esoteric threats to discuss or obscure audit requirements to handle. The focus here is on operational efficiency. 
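The “detailed access justifications” idea in the quote above can be illustrated with a small sketch: an external key manager that approves or denies each key-access request based on the stated justification reason. The reason strings and the allow-list policy below are illustrative assumptions, not the actual Google Cloud Key Access Justifications API.

```python
# Hypothetical sketch of an external key manager gating key access by
# justification reason. Reason names and policy are illustrative only.

# The customer's policy: only access initiated by the customer is allowed.
ALLOWED_REASONS = {
    "CUSTOMER_INITIATED_ACCESS",   # customer reading their own data
    "CUSTOMER_INITIATED_SUPPORT",  # support case opened by the customer
}

def authorize_key_access(reason, policy=ALLOWED_REASONS):
    """Return True only if the stated justification is on the allow list."""
    return reason in policy

# The external key manager only unwraps the data key when authorized,
# making the customer the arbiter of access to their data at rest.
assert authorize_key_access("CUSTOMER_INITIATED_ACCESS")
assert not authorize_key_access("PROVIDER_INITIATED_OPERATION")
```

In a real deployment this check runs on the customer's side of the trust boundary, so the provider cannot bypass it; the sketch only shows the decision shape.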
As Gartner recently noted, the need to reduce the number of key management tools is a strong motivation for keeping all the keys within one system that covers multiple cloud and on-premises environments. It may sound like a cliché, but complexity is very much the enemy of security. Multiple “centralized” systems for any task—be it log management or encryption key management—add complexity and introduce new points where security can break. In light of this, the desire to use one system for the majority of encryption keys, cloud or not, is understandable. Given that few organizations today are 100% cloud-based for workloads that require encryption, the natural course of action is to keep all the keys on-premises. Additional benefits may stem from using the same vendor as an auxiliary access control and policy point. A single set of keys reduces complexity, and a properly implemented system with adequate security and redundancy outweighs the need for multiple systems. Another variant of this is the motivation to retain absolute control over data processing by controlling access to the encryption keys. After all, if a client can push a button and instantly cut off the cloud provider's key access, the data cannot possibly be accessed or stolen by anybody else. Finally, centralizing key management gives the cloud user a central location to enforce policies around access to keys, and hence access to data-at-rest.

Next steps

To summarize, these scenarios truly call for encryption keys that are both physically away from the cloud provider and outside its physical and administrative control. This means that a customer-managed HSM at the CSP location won't do. Please review “Unlocking the mystery of stronger security key management” for a broader review of key management in the cloud. Assess your data risks with regard to attackers, regulations, geopolitical concerns, etc. Understand the three scenarios discussed in this post and match your requirements to them. 
Apply threat-model thinking to your cloud data processing and determine whether you truly need to remove the keys from the cloud. Review the services covered by Google EKM and its partners for keeping encryption keys away from the cloud, on premises (Ionic, Fortanix, Thales, etc.).
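The external-key pattern these scenarios describe can be sketched in a few lines: the cloud side holds only a wrapped data encryption key (DEK), while the key encryption key (KEK) never leaves customer premises, so a customer can revoke provider access by refusing to unwrap. The sketch below is a toy model; the HMAC-keystream “wrap” is a stand-in for a real algorithm such as AES Key Wrap (RFC 3394) and must not be used in production.

```python
# Toy model of the Hold Your Own Key / external key manager (EKM) flow.
# The "wrap" here (HMAC-SHA256 keystream XOR) is a placeholder for a real
# key-wrapping algorithm; it only illustrates the trust boundary.
import hashlib
import hmac
import secrets

class CustomerKeyManager:
    """Runs on customer premises; the KEK never leaves this object."""

    def __init__(self):
        self._kek = secrets.token_bytes(32)  # key encryption key, held off-cloud

    def _keystream(self, nonce):
        # One 32-byte block; this toy wrap handles DEKs up to 32 bytes.
        return hmac.new(self._kek, nonce, hashlib.sha256).digest()

    def wrap(self, dek):
        nonce = secrets.token_bytes(16)
        ks = self._keystream(nonce)
        return nonce, bytes(a ^ b for a, b in zip(dek, ks))

    def unwrap(self, nonce, wrapped):
        ks = self._keystream(nonce)
        return bytes(a ^ b for a, b in zip(wrapped, ks))

# Cloud side: generate a DEK, have the customer's EKM wrap it, and keep
# only the wrapped form at rest. Without a live unwrap call to the
# customer's premises, the stored key material is useless.
ekm = CustomerKeyManager()
dek = secrets.token_bytes(32)
nonce, wrapped = ekm.wrap(dek)
assert wrapped != dek                      # stored form differs from the DEK
assert ekm.unwrap(nonce, wrapped) == dek   # round trip recovers the DEK
```

If the customer stops answering unwrap requests, every DEK (and therefore all data encrypted under it) becomes inaccessible, which is exactly the “push the button” control described in Scenario 3.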
  23. The adoption of cloud services has created a generational opportunity to meaningfully improve information security and reduce risk. As organizations move applications and data to the cloud, they can take advantage of the native security capabilities of their cloud platform. Done well, use of these engineered-in platform capabilities can simplify security to the extent that it becomes almost invisible to users, reducing operational complexity, favorably altering the balance of shared responsibility for customers, and decreasing the need for highly specialized security talent. At Google Cloud, we call the result Invisible Security, and it requires a foundation of innovative, powerful, best-in-class native security controls. Given the importance of these capabilities to our strategy, we are happy to announce today that Forrester Research has again named Google Cloud one of just two Leaders in The Forrester Wave™ Infrastructure-as-a-Service Platform Native Security (IPNS), Q4 2020 report, and rated Google Cloud highest among the providers evaluated in the current offering category. The report evaluates the native security capabilities and features of cloud infrastructure-as-a-service (IaaS) platform providers, such as storage and data security, identity and access management, network security, and hardware and hypervisor security. 
The report states that “Google has been steadily investing in its offering and has added many new security features, including Anthos (a service to manage non-Google public and private clouds) and Security Command Center Premium” and notes that the Google Cloud features of “data leak prevention (DLP) capabilities, integration support for external hardware security modules (HSMs), and third-party threat intelligence source integration are also nice.” The report also emphasizes the increasing importance of extending consistent security capabilities across hybrid and multi-cloud deployments, stating “vendors that can provide comprehensive IPNS, not only for their own platforms but also for competing public and private cloud and on premises workloads and platforms, position themselves to successfully evolve into their customers’ security central nervous systems” and notes in Google’s vendor profile that “Anthos is ahead of the competition when it comes to managing non-Google, third-party clouds.” In this Wave, Forrester evaluated seven cloud platforms against 29 criteria, looking at current offerings, strategy, and market presence. Of the seven vendors, Google Cloud scored highest overall in the current offering category, and received the highest score possible for its plans in the security posture management, hypervisor security, guest OS and container protection, and network security criteria. Further, Google Cloud had the highest possible score in the execution roadmap criterion. Google Cloud continues to redefine what’s possible in the cloud with unique security capabilities like External Key Manager, Key Access Justifications, Assured Workloads, Confidential VMs, Binary Authorization, IAM Recommender, and enabling a zero trust architecture for customers with BeyondCorp. 
Elaborating on Google Cloud’s roadmap, the report noted: “The vendor plans to: 1) invest in providing customers with digital sovereignty across data, operations and software in the cloud; 2) expand security for multicloud and cross-cloud environments; and 3) increase support for Zero Trust and identity-based and richer policy creation.” Google Cloud also received the highest possible score for the partner ecosystem strategy criterion. As further validation of the strength of our platform’s native capabilities, numerous Google Cloud security partners have chosen to take advantage of our platform to run and deliver their own security offerings: “At ForgeRock we help people safely access the connected world. We put a premium on security because our customers and our business depend on digital experiences that can withstand and prevent cyber attacks and bad actors,” said Fran Rosch, CEO of ForgeRock. “Our partnership with Google Cloud gives us access to unique security platform capabilities that help us meet customer needs and strengthens our position as a global identity and access management leader.” We are honored to be a Leader in The Forrester Wave™ IaaS Platform Native Security Q4 2020 report, and look forward to continuing to innovate and partner with you on ways to make your digital transformation journey safer and more secure. Download the full Forrester Wave™ IaaS Platform Native Security (IPNS), Q4 2020 report. You can get started for free with Google Cloud today.
  24. As more organizations undergo digital transformation, evolve their IT infrastructure, and migrate to public cloud, the role of digital certificates will grow—and grow a lot. Certificates and certificate authorities (CAs) play a key role both in modern IT models like DevOps and in the evolution of traditional enterprise IT. In August, we announced our Certificate Authority Service (CAS)—a highly scalable and available service that simplifies and automates the management and deployment of private CAs while meeting the needs of modern developers building and running modern systems and applications. Take a look at how easy it is to set up a CA in minutes! At launch, we showed how CAS allows DevOps security officers to focus on running the environment while offloading time-consuming and expensive infrastructure setup to the cloud. Moreover, as remote work continues to grow, it is bringing a rapid increase in zero trust network access (example), and the need to issue an increasing number of certificates for many types of devices and systems outside the DevOps environment. The challenge that emerged is that both the number of certificates and the rate of change went up. It is incredibly hard to support a large WFH workforce from a traditional on-premises CA, assuming your organization even has the “premises” where it can be deployed. To be better prepared for these new WFH-related scenarios, we are introducing a new Enterprise tier that is optimized for machine and user identity. These use cases tend to favor longer-lived certificates and require much more control over the certificate lifecycle (e.g., the ability to revoke a certificate when a user loses a device). This new tier complements the DevOps tier, which is optimized for high-throughput environments that tend to favor shorter-lived certificates (e.g., for containers, microservices, load balancers, etc.) at an exceptionally high QPS (number of certificates issued per second). 
Simply put, our goal with the new Enterprise tier is to make it easy to lift and shift your existing on-premises CA. Today CAS supports “bring your own root,” allowing an existing CA root of trust to continue serving as the root of trust for CAS. This gives you full control over your root of trust while offloading scaling and availability management to the cloud. It also gives you the freedom to move workloads across clouds without having to re-issue your PKI, and vastly reduces migration cost. Moreover, through our integration with widely deployed certificate lifecycle managers (e.g., Venafi and AppViewX), we have made the lift and shift of an existing CA to the cloud a breeze, so you can continue using the tooling you are familiar with and simply move your CA to the cloud. CAS leverages FIPS 140-2 Level 3 validated HSMs to protect private key material. With the two tiers of CAS (Enterprise and DevOps), you can now address all your certificate needs (whether for your DevOps environments or for your corporate machine and user identity) in one place. This is great news for the security engineers and CA admins in your environment, as they can now use a single console to manage the certificates in the environment, create policies, audit, and react to security incidents. Visibility and expiration have always been the two biggest issues in PKI, and with CAS and our partner solutions you can solve these issues in one place. So whether you are at the beginning of your journey with certificates and CAs, or have an existing CA that has reached its limit in addressing the surge in demand (whether due to WFH or your new DevOps environment), CA Service can deliver a blend of performance, convenience, and ease of deployment and operation, with the security and trust benefits of Google Cloud. CAS is available in preview for all customers to try. 
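The expiration problem mentioned above is easy to make concrete: whatever issues the certificates, something still has to watch their notAfter dates and raise a flag before they lapse. A minimal sketch follows; the inventory format (name, expiry-datetime pairs) is a hypothetical stand-in for what a real deployment would pull from the CA or from a lifecycle manager such as Venafi or AppViewX.

```python
# Minimal sketch of certificate expiry monitoring: flag every certificate
# in an inventory that expires within a warning window. The inventory
# format is hypothetical; real data would come from the CA or a
# certificate lifecycle manager.
from datetime import datetime, timedelta

def expiring_soon(inventory, now, window_days=30):
    """Return names of certificates whose notAfter falls inside the window
    (including certificates that have already expired)."""
    cutoff = now + timedelta(days=window_days)
    return [name for name, not_after in inventory if not_after <= cutoff]

now = datetime(2020, 12, 1)
inventory = [
    ("lb-frontend", datetime(2020, 12, 10)),   # expires in 9 days
    ("build-signer", datetime(2021, 6, 1)),    # comfortably valid
]
assert expiring_soon(inventory, now) == ["lb-frontend"]
```

Short-lived DevOps-tier certificates make such a check run constantly at high volume, while longer-lived Enterprise-tier certificates make each missed expiry more consequential, which is why centralizing this visibility in one place matters for both tiers.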
Call to action: Review the CAS video “Securing Applications with Private CAs and Certificates” at Google Cloud Security Talks. Review “Introducing CAS: Securing applications with private CAs and certificates” for other CAS use cases, such as support for DevOps environments. Try Certificate Authority Service for your organization. Related Article Introducing CAS: Securing applications with private CAs and certificates Certificate Authority Service (CAS) is a highly scalable and available service that simplifies and automates the management and deploymen... Read Article