Showing results for tags 'security'.
-
As the cybersecurity threat landscape continues to evolve globally, organizations operating in the financial sector are seeing regulations shift to address the associated risks, and none may prove more impactful than the European Union's (EU) Digital Operational Resilience Act (DORA). This regulation aims to strengthen the operational resilience of financial entities (FEs) and their third-party information and communication technology (ICT) providers. Here, we'll cover what DORA is, why it matters and how Snowflake can help support your DORA compliance obligations.

The second batch of DORA policy products, which aims to strengthen oversight and risk management for third-party ICT providers, is currently under development. Snowflake will provide more information once the second batch of policy requirements is established in July 2024.

DORA: Building a More Secure Financial System

DORA, enacted in January 2023, moves beyond reactive measures, requiring FEs and their service providers to proactively identify vulnerabilities, prevent disruptions and plan for swift recovery from incidents. DORA mandates a five-step lifecycle approach:

- Identify: pinpoint critical functions vulnerable to disruptions.
- Assess: evaluate the potential risks associated with those functions.
- Prevent: implement robust measures to safeguard these functions.
- Respond: develop clear plans for effectively handling incidents and minimizing their impact.
- Recover: establish processes for rapid recovery after incidents to ensure business continuity.

This translates to several key requirements and their associated benefits:

- More stringent technical and process-oriented security measures, enhancing protection for both FEs and their ICT third-party providers.
- Identification of potential threats through data analysis and risk assessment processes to shift toward more proactive risk management.
- Collaboration between European Supervisory Authorities (ESA) and national competent authorities to promote consistent enforcement of cybersecurity rules and a more resilient financial ecosystem.

Five Core Pillars of DORA

1. ICT Risk Management Framework and Governance
FEs' leadership teams must define a risk management strategy covering both their internally managed critical systems and the risks associated with their ICT providers. This strategy must incorporate business impact analyses as well as backup and recovery plans in the event of a security incident or loss of access to data.

2. ICT Incident Reporting
FEs must establish and implement a management process for monitoring, managing, logging, classifying and reporting ICT-related incidents. Depending on the severity of the incident, DORA specifies the incident notification timelines, forms and reporting requirements to both Competent Authorities (CA) and affected clients and partners.

3. Digital Operational Resilience Testing
FEs must test their ICT risk management framework periodically to evaluate the strength of their procedures and processes against any vulnerabilities. The results of these tests, and any improvement plans for identified vulnerabilities, must be reported to the CA if requested.

4. Managing Third-Party Risk
FEs shall manage ICT third-party risk as an integral component of ICT risk within their ICT risk management framework, and must negotiate appropriate contractual arrangements when outsourcing functions to their ICT third-party service providers. ESA will designate critical ICT providers in January 2025. All designated critical ICT third-party service providers will be subject to direct oversight by ESA.

5. Information Sharing Arrangements
FEs are encouraged to exchange cyberthreat and intelligence information among themselves, and to collectively leverage their individual knowledge and practical experience at strategic, tactical and operational levels. Participating in information-sharing arrangements helps FEs better assess, monitor, defend against and respond to cyberthreats.

How Can Snowflake Help?

The Snowflake Data Cloud can be a valuable tool for FEs to achieve compliance with DORA and strengthen their overall operational resilience through robust security and advanced data management capabilities. When leveraged appropriately, Snowflake empowers FEs to safeguard their sensitive financial data in compliance with their legal obligations.

Data Encryption: Snowflake encrypts data at rest using AES 256-bit (or stronger) encryption and uses Transport Layer Security (TLS) 1.2 (or later) for data in transit. Snowflake's Bring Your Own Key (BYOK) model, known as Tri-Secret Secure, lets customers maintain complete control over their encryption keys, adding an extra layer of security.

Access Control: Snowflake allows customers to define granular permissions for user roles, minimizing the risk of unauthorized access to sensitive data. Additionally, data can be classified and tagged based on its level of sensitivity, confidentiality or importance to the organization. This prioritizes security measures and simplifies data discovery.

Data Governance: Snowflake also offers a comprehensive set of data governance features, including data masking, support for external tokenization and historical logging of user access. These features further enhance the protection of customers' sensitive data.

Data Resiliency: Snowflake understands the importance of data resiliency. Built-in fault tolerance and data replication support continuous access to your data, even during hardware failures. Data is automatically replicated across different availability zones within the same region; if there's an issue, the system automatically fails over to another zone, minimizing downtime. Snowflake also offers advanced account replication and failover features (available in Business Critical and Enterprise editions) that allow customers to replicate their entire Snowflake account, including databases and metadata, to a separate account in a different region, providing a complete disaster recovery solution. Replication is configurable, allowing customers to recover their data to a specific point in time if necessary. By combining industry-leading security features with robust disaster recovery options, Snowflake provides a comprehensive solution for safeguarding your sensitive financial data.

Third-Party Monitoring: Snowflake has an established vendor risk assessment program, which evaluates the operational resilience of its sub-processors annually and on an ad hoc basis. Snowflake customers may subscribe to receive advance notifications of new sub-processors.

Proactive Security: Snowflake conducts frequent vulnerability scans and engages third-party security firms to conduct penetration testing of its platform.
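To make the access control and governance capabilities above concrete, here is a minimal, hypothetical sketch of how a customer might scope access to a sensitive table and mask one of its columns. The role, database, schema, and column names are illustrative rather than taken from the post, and the statements assume a Snowflake edition that includes dynamic data masking; run the script with the SnowSQL CLI or paste the SQL into a worksheet.

# Hypothetical least-privilege role plus column masking for a finance table.
cat > dora_controls.sql <<'SQL'
CREATE ROLE IF NOT EXISTS risk_analyst;
GRANT USAGE ON DATABASE fin_db TO ROLE risk_analyst;
GRANT USAGE ON SCHEMA fin_db.reporting TO ROLE risk_analyst;
GRANT SELECT ON TABLE fin_db.reporting.transactions TO ROLE risk_analyst;

-- Dynamic data masking: only a designated compliance role sees raw account numbers.
CREATE MASKING POLICY IF NOT EXISTS mask_account_number AS (val STRING)
  RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('COMPLIANCE_ADMIN') THEN val ELSE '****MASKED****' END;

ALTER TABLE fin_db.reporting.transactions
  MODIFY COLUMN account_number SET MASKING POLICY mask_account_number;
SQL

# Execute against your account (replace the placeholders with your own values).
snowsql -a <account_identifier> -u <admin_user> -f dora_controls.sql

Scoping grants to a role rather than to individual users, and attaching the masking policy to the column itself, keeps the controls auditable in one place regardless of which client or tool queries the data.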
Snowflake also integrates with popular Security Information and Event Management (SIEM) systems, allowing Snowflake customers to centralize security monitoring and receive alerts of suspicious activity. In the event of a security incident, Snowflake will provide its customers with timely information about the nature and consequences of the incident, the measures being taken to mitigate it, and the status of its investigation, as described in Snowflake's Security Addendum.

* * *

By leveraging Snowflake's capabilities, FEs gain a strong partner in navigating DORA's requirements, empowering them to build a more secure and trusted financial landscape. To learn more about our commitments, please contact Snowflake or reach out directly to your Snowflake Account Team for early access to guidance material.

This blog post is provided for informational purposes only, with the understanding that it shall not create any legally binding representations or other obligations on Snowflake or constitute legal advice. Snowflake serves a variety of customers with organization-specific deployment models and regulatory compliance demands, and you are responsible for making your own independent assessment of the information contained herein and ensuring your own compliance with all applicable laws and regulations. You should consult with your legal advisors for any requirements associated with the compliance posture of your organization. Snowflake may update the information provided in this document from time to time without notice.

The post How the EU's Digital Operational Resilience Act (DORA) Aims To Strengthen Operational Resilience in Financial Services appeared first on Snowflake.
-
The construction of big data applications based on open source software has become increasingly straightforward since the advent of projects like Data on EKS, an open source project from AWS that provides blueprints for building data and machine learning (ML) applications on Amazon Elastic Kubernetes Service (Amazon EKS). In the realm of big data, securing data in cloud applications is crucial. This post explores the deployment of Apache Ranger for permission management within the Hadoop ecosystem on Amazon EKS. We show how Ranger integrates with Hadoop components like Apache Hive, Spark, Trino, YARN, and HDFS, providing secure and efficient data management in a cloud environment. Join us as we navigate these advanced security strategies in the context of Kubernetes and cloud computing.

Overview of solution

The Amber Group's Data on EKS Platform (DEP) is a Kubernetes-based, cloud-centered big data platform that revolutionizes the way we handle data in EKS environments. Developed by Amber Group's Data Team, DEP integrates with familiar components like Apache Hive, Spark, Flink, Trino, HDFS, and more, making it a versatile and comprehensive solution for data management and BI platforms. The following diagram illustrates the solution architecture.

Effective permission management is crucial for several key reasons:

- Enhanced security – With proper permission management, sensitive data is only accessible to authorized individuals, thereby safeguarding against unauthorized access and potential security breaches. This is especially important in industries handling large volumes of sensitive or personal data.
- Operational efficiency – By defining clear user roles and permissions, organizations can streamline workflows and reduce administrative overhead. This simplifies managing user access, saves time for data security administrators, and minimizes the risk of configuration errors.
- Scalability and compliance – As businesses grow and evolve, a scalable permission management system helps with smoothly adjusting user roles and access rights. This adaptability is essential for maintaining compliance with data privacy regulations like GDPR and HIPAA, making sure that the organization's data practices are legally sound and up to date.
- Addressing big data challenges – Big data comes with unique challenges, like managing large volumes of rapidly evolving data across multiple platforms. Effective permission management helps tackle these challenges by controlling how data is accessed and used, providing data integrity and minimizing the risk of data breaches.

Apache Ranger is a comprehensive framework designed for data governance and security in Hadoop ecosystems. It provides a centralized framework to define, administer, and manage security policies consistently across various Hadoop components. Ranger specializes in fine-grained access control, offering detailed management of user permissions and auditing capabilities. Ranger's architecture is designed to integrate smoothly with various big data tools such as Hadoop, Hive, HBase, and Spark. The key components of Ranger include:

- Ranger Admin – This is the central component where all security policies are created and managed. It provides a web-based user interface for policy management and an API for programmatic configuration.
- Ranger UserSync – This service is responsible for syncing user and group information from a directory service like LDAP or AD into Ranger.
- Ranger plugins – These are installed on each component of the Hadoop ecosystem (like Hive and HBase). Plugins pull policies from the Ranger Admin service and enforce them locally.
- Ranger Auditing – Ranger captures access audit logs and stores them for compliance and monitoring purposes. It can integrate with external tools for advanced analytics on these audit logs.
- Ranger Key Management Store (KMS) – Ranger KMS provides encryption and key management, extending Hadoop's HDFS Transparent Data Encryption (TDE).

The following flowchart illustrates the priority levels for matching policies. The priority levels are as follows:

- A deny list takes precedence over an allow list.
- A deny-list exclude has a higher priority than a deny list.
- An allow-list exclude has a higher priority than an allow list.

Our Amazon EKS-based deployment includes the following components:

- S3 buckets – We use Amazon Simple Storage Service (Amazon S3) for scalable and durable Hive data storage.
- MySQL database – The database stores Hive metadata, facilitating efficient metadata retrieval and management.
- EKS cluster – The cluster consists of three distinct node groups: platform, Hadoop, and Trino, each tailored for specific operational needs.
- Hadoop cluster applications – These applications include HDFS for distributed storage and YARN for managing cluster resources.
- Trino cluster application – This application enables us to run distributed SQL queries for analytics.
- Apache Ranger – Ranger serves as the central security management tool for access policy across the big data components.
- OpenLDAP – This is integrated as the LDAP service to provide a centralized user information repository, essential for user authentication and authorization.
- Other cloud services resources – Other resources include a dedicated VPC for network security and isolation.

By the end of this deployment process, we will have realized the following benefits:

- A high-performing, scalable big data platform that can handle complex data workflows with ease
- Enhanced security through centralized management of authentication and authorization, provided by the integration of OpenLDAP and Apache Ranger
- Cost-effective infrastructure management and operation, thanks to the containerized nature of services on Amazon EKS
- Compliance with stringent data security and privacy regulations, due to Apache Ranger's policy enforcement capabilities

Deploy a big data cluster on Amazon EKS and configure Ranger for access control

In this section, we outline the process of deploying a big data cluster on Amazon EKS and configuring Ranger for access control. We use AWS CloudFormation templates for quick deployment of a big data environment on Amazon EKS with Apache Ranger. Complete the following steps:

Upload the provided template to AWS CloudFormation, configure the stack options, and launch the stack to automate the deployment of the entire infrastructure, including the EKS cluster and Apache Ranger integration. After a few minutes, you'll have a fully functional big data environment with robust security management ready for your analytical workloads, as shown in the following screenshot.

On the AWS web console, find the name of your EKS cluster. In this case, it's dep-demo-eks-cluster-ap-northeast-1. For example:

aws eks update-kubeconfig --name dep-eks-cluster-ap-northeast-1 --region ap-northeast-1

## Check pod status.
kubectl get pods --namespace hadoop
kubectl get pods --namespace platform
kubectl get pods --namespace trino

After Ranger Admin is successfully forwarded to port 6080 of localhost, go to localhost:6080 in your browser. Log in with the user name admin and the password you entered earlier. By default, two policies have already been created, Hive and Trino, with all access granted to the LDAP user you created (depadmin in this case). The LDAP user sync service is also set up and will automatically sync all users from the LDAP service created in this template.

Example permission configuration

In a practical application within a company, permissions for tables and fields in the data warehouse are divided based on business departments, isolating sensitive data for different business units. This provides data security and the orderly conduct of daily business operations. The following screenshots show an example business configuration.

The following is an example of an Apache Ranger permission configuration.

The following screenshots show users associated with roles.

When performing data queries, using Hive and Spark as examples, we can demonstrate the comparison before and after permission configuration. The following screenshot shows an example of Hive SQL (running on Superset) with privileges denied. The following screenshot shows an example of Spark SQL (running in an IDE) with privileges denied. The following screenshot shows an example of Spark SQL (running in an IDE) with permissions granted.

Based on this example, and considering your enterprise requirements, it becomes feasible and flexible to manage permissions in the data warehouse effectively.

Conclusion

This post provided a comprehensive guide to permission management in big data, particularly within the Amazon EKS platform using Apache Ranger, equipping you with the essential knowledge and tools for robust data security and management. By implementing the strategies and understanding the components detailed in this post, you can effectively manage permissions and implement data security and compliance in your big data environments.

About the Authors

Yuzhu Xiao is a Senior Data Development Engineer at Amber Group with extensive experience in cloud data platform architecture. He has many years of experience in AWS Cloud platform data architecture and development, primarily focusing on efficiency optimization and cost control of enterprise cloud architectures.

Xin Zhang is an AWS Solutions Architect, responsible for solution consulting and design based on the AWS Cloud platform. He has rich experience in R&D and architecture practice in the fields of system architecture, data warehousing, and real-time computing.
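As a supplement to the walkthrough above, the two steps it references but does not spell out, forwarding the Ranger Admin UI to localhost:6080 and sanity-checking the preconfigured policies, might look roughly like the sketch below. The service name, namespace, and password placeholder are assumptions about the DEP charts rather than values taken from the post; adjust them to your deployment.

# Expose Ranger Admin locally (service name/namespace are assumed; confirm with `kubectl get svc -A`).
kubectl port-forward svc/ranger-admin 6080:6080 --namespace platform

# Optional sanity check: list the Hive/Trino policies through Ranger's public REST API.
curl -u admin:'<your-admin-password>' http://localhost:6080/service/public/v2/api/policy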
-
Photo by Gabriel Heinzer on Unsplash

We're excited about the upcoming Ubuntu 24.04 LTS release, Noble Numbat. Like all Ubuntu releases, Ubuntu 24.04 LTS comes with 5 years of free security maintenance for the main repository. Support can be expanded for an extra 5 years, and to include the universe repository, via Ubuntu Pro. Organisations looking to keep their systems secure without needing a major upgrade can also get the Legacy Support add-on to extend that support beyond the 10 years. Combined with the enhanced security coverage provided by Ubuntu Pro and Legacy Support, Ubuntu 24.04 LTS provides a secure foundation on which to develop and deploy your applications and services in an increasingly risky environment. In this blog post, we will look at some of the enhancements and security features included in Noble Numbat, building on those available in Ubuntu 22.04 LTS.

Unprivileged user namespace restrictions

Unprivileged user namespaces are a widely used feature of the Linux kernel, providing additional security isolation for applications, and are often employed as part of a sandbox environment. They allow an application to gain additional permissions within a constrained environment, so that a more trusted part of an application can then use these additional permissions to create a more constrained sandbox environment within which less trusted parts can be executed. A common use case is the sandboxing employed by modern web browsers, where the (trusted) application itself sets up the sandbox in which it executes the untrusted web content. However, by providing these additional permissions, unprivileged user namespaces also expose additional attack surfaces within the Linux kernel, and there has been a long history of (ab)use of unprivileged user namespaces to exploit various kernel vulnerabilities.

The most recent interim release of Ubuntu, 23.10, introduced the ability to restrict the use of unprivileged user namespaces to only those applications which legitimately require such access. In Ubuntu 24.04 LTS, this feature has been improved both to cover additional applications, within Ubuntu and from third parties, and to provide better default semantics. In Ubuntu 24.04 LTS, the use of unprivileged user namespaces is allowed for all applications, but access to any additional permissions within the namespace is denied. This allows more applications to gracefully handle this default restriction whilst still protecting against the abuse of user namespaces to gain access to additional attack surfaces within the Linux kernel.

Binary hardening

Modern toolchains and compilers have gained many enhancements to be able to create binaries that include various defensive mechanisms. These include the ability to detect and avoid various possible buffer overflow conditions as well as the ability to take advantage of modern processor features like branch protection for additional defence against code reuse attacks. The GNU C library, used as the cornerstone of many applications on Ubuntu, provides runtime detection of, and protection against, certain types of buffer overflow cases, as well as certain dangerous string handling operations, via the _FORTIFY_SOURCE macro. FORTIFY_SOURCE can be specified at various levels providing increasing security features, ranging from 0 to 3.
Modern Ubuntu releases have all used FORTIFY_SOURCE=2, which provided a solid foundation by including checks on string handling functions like sprintf(), strcpy() and others to detect possible buffer overflows, as well as format-string vulnerabilities via the %n format specifier in various cases. Ubuntu 24.04 LTS enables additional security features by increasing this to FORTIFY_SOURCE=3. Level three greatly enhances the detection of possible dangerous use of a number of other common memory management functions including memmove(), memcpy(), snprintf(), vsnprintf(), strtok() and strncat(). This feature is enabled by default in the gcc compiler within Ubuntu 24.04 LTS, so that all packages in the Ubuntu archive which are compiled with gcc, as well as any applications compiled with gcc on Ubuntu 24.04 LTS, receive this additional protection.

The Armv8 hardware architecture (provided by the "arm64" software architecture on Ubuntu) provides hardware-enforced pointer authentication and branch target identification. Pointer authentication provides the ability to detect malicious stack buffer modifications which aim to redirect pointers stored on the stack to attacker-controlled locations, whilst branch target identification is used to track certain indirect branch instructions and the possible locations which they can target. By tracking such valid locations, the processor can detect possible malicious jump-oriented programming attacks which aim to use existing indirect branches to jump to other gadgets within the code. The gcc compiler supports these features via the -mbranch-protection option. In Ubuntu 24.04 LTS, the dpkg package now enables -mbranch-protection=standard, so that all packages within the Ubuntu archive enable support for these hardware features where available.

AppArmor 4

The aforementioned unprivileged user namespace restrictions are all backed by the AppArmor mandatory access control system. AppArmor allows a system administrator to implement the principle of least authority by defining which resources an application should be granted access to and denying all others. AppArmor consists of a userspace package, which is used to define the security profiles for applications and the system, as well as the AppArmor Linux Security Module within the Linux kernel, which provides enforcement of the policies.

Ubuntu 24.04 LTS includes the latest AppArmor 4.0 release, providing support for many new features, such as specifying allowed network addresses and ports within the security policy (rather than just high-level protocols), or various conditionals to allow more complex policy to be expressed. An exciting new development provided by AppArmor 4 in Ubuntu 24.04 LTS is the ability to defer access control decisions to a trusted userspace program. This allows quite advanced decision making to be implemented, taking into account the greater context available within userspace, or even interacting with the user or system administrator in real time. For example, the experimental snapd prompting feature takes advantage of this work to allow users to exercise direct control over which files a snap can access within their home directory. Finally, within the kernel, AppArmor has gained the ability to mediate access to user namespaces as well as the io_uring subsystem, both of which have historically provided additional kernel attack surfaces to malicious applications.
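For readers who want to check these restrictions on a running system, a minimal sketch from a shell is shown below. The post itself does not name the exact knobs, so the sysctl used here is an assumption based on how recent Ubuntu releases expose the AppArmor-backed user namespace restriction; verify the name on your own install.

# Reports 1 when unprivileged user namespace use is mediated by AppArmor
sysctl kernel.apparmor_restrict_unprivileged_userns

# Summarise which AppArmor profiles are loaded and whether they are in enforce or complain mode
sudo aa-status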
Disabling of old TLS versions

The use of cryptography for private communications is the backbone of the modern internet. The Transport Layer Security (TLS) protocol has provided confidentiality and integrity for internet communications since it was first standardised in 1999 with TLS 1.0. This protocol has undergone various revisions since that time to introduce additional security features and avoid various security issues inherent in the earlier versions of the standard. Given the wide range of TLS versions and options supported by each, modern internet systems use a process of auto-negotiation to select an appropriate combination of protocol version and parameters when establishing a secure communications link. In Ubuntu 24.04 LTS, TLS 1.0, TLS 1.1 and DTLS 1.0 are all forcefully disabled (for any applications that use the underlying openssl or gnutls libraries) to ensure that users are not exposed to possible TLS downgrade attacks which could expose their sensitive information.

Upstream kernel security features

Linux kernel v5.15 was used as the basis for the Linux kernel in the previous Ubuntu 22.04 LTS release. This provided a number of kernel security features, including core scheduling, kernel stack randomisation and unprivileged BPF restrictions, to name a few. Since that time, the upstream Linux kernel community has been busy adding additional kernel security features. Ubuntu 24.04 LTS includes the v6.8 Linux kernel, which provides the following additional security features.

Intel shadow stack support

Modern Intel CPUs support an additional hardware feature aimed at preventing certain types of return-oriented programming (ROP) and other attacks that target the malicious corruption of the call stack. A shadow stack is a hardware-enforced copy of the stack return address that cannot be directly modified by the CPU. When the processor returns from a function call, the return address from the stack is compared against the value from the shadow stack; if the two differ, the process is terminated to prevent a possible ROP attack. Whilst compiler support for this feature has been enabled for userspace packages since Ubuntu 19.10, it could not be utilised until it was also supported by the kernel and the C library. Ubuntu 24.04 LTS includes this additional support for shadow stacks, allowing the feature to be enabled when desired by setting the GLIBC_TUNABLES=glibc.cpu.hwcaps=SHSTK environment variable.

Secure virtualisation with AMD SEV-SNP and Intel TDX

Confidential computing represents a fundamental departure from the traditional threat model, where vulnerabilities in the complex codebase of privileged system software like the operating system, hypervisor, and firmware pose ongoing risks to the confidentiality and integrity of both code and data. Likewise, unauthorised access by a malicious cloud administrator could jeopardise the security of your virtual machine (VM) and its environment. Building on the innovation of Trusted Execution Environments at the silicon level, Ubuntu Confidential VMs aim to restore your control over the security assurances of your VMs. For the x86 architecture, both AMD and Intel processors provide hardware features (named AMD SEV-SNP and Intel TDX respectively) to support running virtual machines with memory encryption and integrity protection. They ensure that the data contained within the virtual machine is inaccessible to the hypervisor and hence the infrastructure operator.
Support for using these features as a guest virtual machine was introduced in the upstream Linux kernel version 5.19. Thanks to Ubuntu Confidential VMs, a user can make use of compute resources provided by a third party whilst maintaining the integrity and confidentiality of their data through the use of memory encryption and other features. On the public cloud, Ubuntu offers the widest portfolio of confidential VMs, building on these hardware features with offerings available across Microsoft Azure, Google Cloud and Amazon AWS.

For enterprise customers seeking to harness confidential computing within their private data centres, a fully enabled software stack is essential. This stack encompasses both the guest side (kernel and OVMF) and the host side (kernel-KVM, QEMU, and Libvirt). Currently, the host-side patches are not yet upstream. To address this, Canonical and Intel have forged a strategic collaboration to empower Ubuntu customers with an Intel-optimised TDX Ubuntu build. This offering includes all necessary guest and host patches, even those not yet merged upstream, starting with Ubuntu 23.10 and extending into 24.04 and beyond. The complete TDX software stack is accessible through this GitHub repository. This collaborative effort enables our customers to promptly leverage the security assurances of Intel TDX. It also serves to narrow the gap between silicon innovation and software readiness, a gap that grows as Intel continues to push the boundaries of hardware innovation with 5th Gen Intel Xeon Scalable processors and beyond.

Strict compile-time bounds checking

Similar to the hardening of binaries within the libraries and applications distributed in Ubuntu, the Linux kernel itself has gained enhanced support for detecting possible buffer overflows at compile time via improved bounds checking of the memcpy() family of functions. Within the kernel, the FORTIFY_SOURCE macro enables various checks in memory management functions like memcpy() and memset() by checking that the size of the destination object is large enough to hold the specified amount of memory, and if not, aborting the compilation process. This helps to catch various trivial memory management issues, but previously it was not able to properly handle more complex cases, such as when an object is embedded within a larger object. This is quite a common pattern within the kernel, so the changes introduced in the upstream 5.18 kernel version to enumerate and fix various such cases greatly improve this feature. Now the compiler is able to detect and enforce stricter checks when performing memory operations on sub-objects to ensure that other object members are not inadvertently overwritten, avoiding an entire class of possible buffer overflow vulnerabilities within the kernel.

Wrapping up

Overall, the vast range of security improvements that have gone into Ubuntu 24.04 LTS greatly improves on the strong foundation provided by previous Ubuntu releases, making it the most secure release to date. Additional features within the kernel and userspace, and across the distribution as a whole, combine to address entire vulnerability classes and attack surfaces. With up to 12 years of support, Ubuntu 24.04 LTS provides the best and most secure foundation on which to develop and deploy Linux services and applications. Expanded Security Maintenance, kernel livepatching and additional services are all provided to Ubuntu Pro subscribers to enhance the security of their Ubuntu deployments.
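Circling back to the binary hardening section above, the following rough sketch shows how the default toolchain on a 24.04 LTS system can turn a classic overflow into a clean abort at run time. The file name is illustrative, the dpkg-buildflags output is only what is typically expected, and the exact compiler or runtime messages may differ on your machine; the commands assume nothing beyond a stock gcc and glibc.

# Inspect the fortification flag injected into package builds (typically -D_FORTIFY_SOURCE=3 on 24.04)
dpkg-buildflags --get CPPFLAGS

# A deliberately unsafe program: strcpy() into an 8-byte buffer
cat > overflow-demo.c <<'EOF'
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char buf[8];
    if (argc < 2)
        return 0;
    strcpy(buf, argv[1]);   /* the fortified build rewrites this as a checked copy */
    printf("%s\n", buf);
    return 0;
}
EOF

gcc -O2 overflow-demo.c -o overflow-demo
./overflow-demo "a-string-much-longer-than-eight-bytes"
# Expected result: the process aborts with a "buffer overflow detected" error
# instead of silently corrupting the stack.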
-
Kubernetes has transformed container orchestration, providing an effective framework for delivering and managing applications at scale. However, efficient storage management is essential to guarantee the dependability, security, and efficiency of your Kubernetes clusters. Benefits like data loss prevention, regulatory compliance, and maintaining operational continuity while mitigating threats underscore the importance of security and dependability. This post will examine the top 10 Kubernetes storage best practices, emphasizing encryption, access control, and safeguarding storage components.

Kubernetes Storage

Kubernetes storage is essential to contemporary cloud-native setups because it makes data persistence in containerized apps more effective. It provides a dependable and scalable storage resource management system that guarantees data permanence through migrations and restarts of containers. Among other capabilities, Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) give Kubernetes a versatile abstraction layer for managing storage. By providing dynamic provisioning of storage volumes catered to particular workload requirements, storage classes further improve flexibility. Organizations can build and manage stateful applications with agility, scalability, and resilience in various computing settings by utilizing Kubernetes storage capabilities.

1. Data Encryption

Sensitive information kept in Kubernetes clusters must be protected with data encryption. Use encryption tools like Kubernetes Secrets to safely store sensitive information like SSH keys, API tokens, and passwords. Encryption both in transit and at rest is also used to further protect data while it is being stored and transmitted between nodes.

2. Use Secrets Management Tools

Steer clear of hardcoding private information straight into Kubernetes manifests. Instead, use powerful secrets management solutions like Vault or Kubernetes Secrets to securely maintain and distribute secrets throughout your cluster. This guarantees that private information is encrypted and available only to approved users and applications.

3. Implement Role-Based Access Control (RBAC)

RBAC allows you to enforce fine-grained access controls on your Kubernetes clusters. Define roles and permissions to limit access to storage resources using the least-privilege concept. This lowers the possibility of data breaches and unauthorized access by preventing unauthorized users or apps from accessing or changing crucial storage components.

4. Secure Persistent Volumes (PVs) and Persistent Volume Claims (PVCs)

Ensure that persistent volumes and claims are adequately secured to avoid tampering or unwanted access. Put security rules in place to limit access to particular namespaces or users, and turn on encryption for information on persistent volumes. PVs and PVCs should be regularly audited and monitored to identify and address any security flaws or unwanted access attempts.

5. Enable Network Policies

To manage network traffic between pods and storage resources, use Kubernetes network policies. To guarantee that only authorized pods and services may access storage volumes and endpoints, define firewall rules restricting communication to and from storage components. This reduces the possibility of data exfiltration and network-based attacks and prevents unauthorized network access.

6. Enable Role-Based Volume Provisioning

Utilize Kubernetes' dynamic volume provisioning features to automate storage volume creation and management.
To limit users' ability to create or delete volumes based on their assigned roles and permissions, utilize role-based volume provisioning. This guarantees the effective and safe allocation of storage resources and helps prevent resource abuse.

7. Utilize Pod Security Policies

To specify and implement security restrictions on pods' access to storage resources, implement pod security policies. To manage pod rights, host resource access, and storage volume interactions, specify security policies. By implementing stringent security measures, you can reduce the possibility of privilege escalation, container escapes, and illegal access to storage components.

8. Regularly Update and Patch Kubernetes Components

Guard against security flaws by regularly patching and updating Kubernetes components, including storage drivers and plugins. Keep your storage infrastructure safe from new attacks and vulnerabilities by subscribing to security advisories and adhering to best practices for Kubernetes cluster management.

9. Monitor and Audit Storage Activity

To keep tabs on storage activity in your Kubernetes clusters, put extensive logging, monitoring, and auditing procedures in place. To proactively identify security incidents or anomalies, monitor access logs, events, and metrics on storage components. Utilize centralized logging and monitoring systems to see what's happening with storage in your cluster.

10. Conduct Regular Security Audits and Penetration Testing

Conduct comprehensive security audits and penetration tests regularly to evaluate the security posture of your Kubernetes storage system. Find and fix any security holes, incorrect setups, and deployment flaws in your storage system before attackers can exploit them. Work with security professionals and use automated security tools to thoroughly audit your Kubernetes clusters.

Considerations

Before putting these Kubernetes storage suggestions into practice, take into account the following:

- Evaluate Security Requirements: Match storage options with compliance and corporate security requirements.
- Assess Performance Impact: Recognize the potential effects that access controls, encryption, and security rules may have on resource usage and application performance.
- Identify Roles and Responsibilities: Clearly define who is responsible for managing storage components in Kubernetes clusters.
- Plan for Scalability: Recognize the need for scalability and the possible maintenance costs related to implementing security measures.
- Make Monitoring and Upgrades a Priority: To ensure that security measures continue to be effective over time, place a strong emphasis on continual monitoring, audits, and upgrades.

Effective storage management is critical for ensuring the security, reliability, and performance of Kubernetes clusters. By following these ten best practices for Kubernetes storage, including encryption, access control, and securing storage components, you can strengthen the security posture of your Kubernetes environment and mitigate the risk of data breaches, unauthorized access, and other security threats. Stay proactive in implementing security measures and remain vigilant against emerging threats to safeguard your Kubernetes storage infrastructure effectively.

The post Mastering Kubernetes Storage: 10 Best Practices for Security and Efficiency appeared first on Amazic.
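Returning to practices 2 and 3 from the list above, here is a minimal sketch that stores a credential as a Secret instead of hardcoding it in a manifest, and scopes read-only access to PVCs with a namespaced Role and RoleBinding. Every name used (namespace, secret, role, user) is an illustrative assumption, not a value from the post.

# Create an isolated namespace and store a database password as a Secret
kubectl create namespace analytics
kubectl create secret generic db-credentials \
  --namespace analytics \
  --from-literal=password='S3curePassw0rd'

# Grant one user read-only access to PersistentVolumeClaims in that namespace only
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-reader
  namespace: analytics
rules:
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pvc-reader-binding
  namespace: analytics
subjects:
- kind: User
  name: data-engineer@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pvc-reader
  apiGroup: rbac.authorization.k8s.io
EOF

Using a namespaced Role rather than a ClusterRole keeps the grant confined to a single team's namespace, which is the least-privilege pattern the RBAC practice above recommends.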
-
Google Cloud Next made a big splash in Las Vegas this week! From our opening keynote showcasing incredible customer momentum to exciting product announcements, we covered how AI is transforming the way that companies work. You can catch up on the highlights in our 14 minute keynote recap! Developers were front and center at our Developer keynote and in our buzzing Innovators Hive on the Expo floor (which was triple the size this year!). Our nearly 400 partner sponsors were also deeply integrated throughout Next, bringing energy from the show floor to sessions and evening events throughout the week. Last year, we talked about the exciting possibilities of generative AI, and this year it was great to showcase how customers are now using it to transform the way they work. At Next ‘24, we featured 300+ customer and partner AI stories, 500+ breakout sessions, hands-on demos, interactive training sessions, and so much more. It was a jam-packed week, so we’ve put together a summary of our announcements which highlight how we’re delivering the new way to cloud. Read on for a complete list of the 218 (yes, you read that right) announcements from Next ‘24: Gemini for Google Cloud We shared how Google's Gemini family of models will help teams accomplish more in the cloud, including: 1. Gemini for Google Cloud, a new generation of AI assistants for developers, Google Cloud services, and applications. 2. Gemini Code Assist, which is the evolution of the Duet AI for Developers. 3. Gemini Cloud Assist, which helps cloud teams design, operate, and optimize their application lifecycle. 4. Gemini in Security Operations, generally available at the end of this month, converts natural language to new detections, summarizes event data, recommends actions to take, and navigates users through the platform via conversational chat. 5. Gemini in BigQuery, in preview, enables data analysts to be more productive, improve query performance and optimize costs throughout the analytics lifecycle. 6. Gemini in Looker, in private preview, provides a dedicated space in Looker to initiate a chat on any topic with your data and derive insights quickly. 7. Gemini in Databases, also in preview, helps developers, operators, and database administrators build applications faster using natural language; manage, optimize and govern an entire fleet of databases from a single pane of glass; and accelerate database migrations. Customer Stories We shared new customer announcements, including: 8. Cintas is leveraging Google Cloud’s gen AI to develop an internal knowledge center that will allow its customer service and sales employees to easily find key information. 9. Bayer will build a radiology platform that will help Bayer and other companies create and deploy AI-first healthcare apps that assist radiologists, ultimately improving efficiency and diagnosis turn-around time. 10. Best Buy is leveraging Google Cloud’s Gemini large language model to create new and more convenient ways to give customers the solutions they need, starting with gen AI virtual assistants that can troubleshoot product issues, reschedule order deliveries, and more. 11. Citadel Securities used Google Cloud to build the next generation of its quantitative research platform that increased its research productivity and price-performance ratio. 12. 
Discover Financial is transforming customer experience by bringing gen AI to its customer contact centers to improve agent productivity through personalized resolutions, intelligent document summarization, real-time search assistants, and enhanced self-service options. 13. IHG Hotels & Resorts is using Gemini to build a generative AI-powered chatbot to help guests easily plan their next vacation directly in the IHG Hotels & Rewards mobile app. 14. Mercedes-Benz will expand its collaboration with Google Cloud, using our AI and gen AI technologies to advance customer-facing use cases across e-commerce, customer service, and marketing. 15. Orange is expanding its partnership with Google Cloud to deploy generative AI closer to Orange’s and its customers’ operations to help meet local requirements for trusted cloud environments and accelerate gen AI adoption and benefits across autonomous networks, workforce productivity, and customer experience. 16. WPP will leverage Google Cloud’s gen AI capabilities to deliver personalization, creativity, and efficiency across the business. Following the adoption of Gemini, WPP is already seeing internal impacts, including real-time campaign performance analysis, streamlined content creation processes, AI narration, and more. 17. Covered California, California’s health insurance marketplace, will simplify the healthcare enrollment process using Google Cloud’s Document AI, enabling the organization to verify more than 50,000 healthcare documents with a 84% verification rate per month. Workspace and collaboration The next wave of innovations and enhancements are coming to Google Workspace: 18. Google Vids, a key part of our Google Workspace innovations, is a new AI-powered video creation app for work that sits alongside Docs, Sheets and Slides. Vids will be released to Workspace Labs in June. 19. Gemini is coming to Google Chat in preview, giving you an AI-powered teammate to summarize conversations, answer questions, and more. 20. The new AI Meetings and Messaging add-on is priced at $10 per user, per month, and includes: Take notes for me, now in preview, translate for me, coming in June, which automatically detects and translates captions in Meet, with support for 69 languages, and automatic translation of messages and on-demand conversation summaries in Google Chat, coming later this year. 21. Using large language models, Gmail can now block an additional 20% more spam and evaluate 1,000 times more user-reported spam every day. 22. A new AI Security add-on allows IT teams to automatically classify and protect sensitive files in Google Drive, and is available for $10 per user, per month. 23. We’re extending DLP controls and classification labels to Gmail in beta. 24. We’re adding experimental support for post-quantum cryptography (PQC) in client-side encryption with our partners Thales and Fortanix. 25. Voice prompting and instant polish in Gmail: Send emails easily when you’re on the go with voice input in Help me write, and convert rough notes to a complete email with one click. 26. A new tables feature in Sheets (generally available in the coming weeks) formats and organizes data with a sleek design and a new set of building blocks — from project management to event planning templates witautomatic alerts based on custom triggers like a change in a status field. 27. Tabs in Docs (generally available in the coming weeks) allow you to organize information in a single document rather than linking to multiple documents or searching through Drive. 28. 
Docs now supports full-bleed cover images that extend from one edge of your browser to the other; generally available in the coming weeks. 29. Generally available in the coming weeks, Chat will support increased member capacity of up to 500,000 in spaces. 30. Messaging interoperability for Slack and Teams is now generally available through our partner Mio. AI infrastructure 31. The Cloud TPU v5p GA is now generally available. 32. Google Kubernetes Engine (GKE) now supports Cloud TPU v5p and TPU multi-host serving, also generally available. 33. A3 Mega compute instance powered by NVIDIA H100 GPUs offers double the GPU-to-GPU networking bandwidth of A3, and will be generally available in May. 34. Confidential Computing is coming to the A3 VM family, in preview later this year. 35. The NVIDIA Blackwell GPU platform will be available on the AI Hypercomputer architecture in two configurations: NVIDIA HGX B200 for the most demanding AI, data analytics, and HPC workloads; and the liquid-cooled GB200 NVL72 GPU for real-time LLM inference and training massive-scale models. 36. New caching capabilities for Cloud Storage FUSE improve training throughput and serving performance, and are generally available. 37. The Parallelstore high-performance parallel filesystem now includes caching in preview. 38. Hyperdisk ML in preview is a next-generation block storage service optimized for AI inference/serving workloads. 39. The new open-source MaxDiffusion is a new high-performance and scalable reference implementation for diffusion models. 40. MaxText, a JAX LLM, now supports new LLM models including Gemma, GPT3, LLAMA2 and Mistral across both Cloud TPUs and NVIDIA GPUs. 41. PyTorch/XLA 2.3 will follow the upstream release later this month, bringing single program, multiple data (SPMD) auto-sharding, and asynchronous distributed checkpointing features. 42. For Hugging Face PyTorch users, the Hugging Face Optimum-TPU package lets you train and serve Hugging Face models on TPUs. 43. Jetstream is a new open-source, throughput- and memory-optimized LLM inference engine for XLA devices (starting with TPUs); it supports models trained with both JAX and PyTorch/XLA, with optimizations for popular open models such as Llama 2 and Gemma. 44. Google models will be available as NVIDIA NIM inference microservices. 45. Dynamic Workload Scheduler now offers two modes: flex start mode (in preview), and calendar mode (in preview). 46. We shared the latest performance results from MLPerf™ Inference v4.0 using A3 virtual machines (VMs) powered by NVIDIA H100 GPUs. 47. We shared performance benchmarks for Gemma models using Cloud TPU v5e and JetStream. 48. We introduced ML Productivity Goodput, a new metric to measure the efficiency of an overall ML system, as well as an API to integrate into your projects, and methods to maximize ML Productivity Goodput. Vertex AI 49. Gemini 1.5 Pro is now available in public preview in Vertex AI, bringing the world’s largest context window to developers everywhere. 50. Gemini 1.5 Pro on Vertex AI can now process audio streams including speech, and the audio portion of videos. 51. Imagen 2.0, our family of image generation models, can now be used to create short, 4-second live images from text prompts. 52. Image editing is generally available in Imagen 2.0, including inpainting/outpainting and digital watermarking powered by Google DeepMind’s SynthID. 53. We added CodeGemma, a new model from our Gemma family of lightweight models, to Vertex AI. 54. 
Vertex AI has expanded grounding capabilities, including the ability to directly ground responses with Google Search, now in public preview. 55. Vertex AI Prompt Management, in preview, helps teams improve prompt performance. 56. Vertex AI Rapid Evaluation, in preview, helps users evaluate model performance when iterating on the best prompt design. 57. Vertex AI AutoSxS is now generally available, and helps teams compare the performance of two models. 58. We expanded data residency guarantees for data stored at-rest for Gemini, Imagen, and Embeddings APIs on Vertex AI to 11 new countries: Australia, Brazil, Finland, Hong Kong, India, Israel, Italy, Poland, Spain, Switzerland, and Taiwan. 59. When using Gemini 1.0 Pro and Imagen, you can now limit machine-learning processing to the United States or European Union. 60. Vertex AI hybrid search, in preview, integrates vector-based and keyword-based search techniques to ensure relevant and accurate responses for users. 61. The new Vertex AI Agent Builder, in preview, lets developers build and deploy gen AI experiences using natural language or open-source frameworks like LangChain on Vertex AI. 62. Vertex AI includes two new text embedding models in public preview: the English-only text-embedding-preview-0409, and the multilingual text-multilingual-embedding-preview-0409 Core infrastructure Thomas with the Google Axion chip 63. We expanded Google Cloud’s compute portfolio, with major product releases spanning compute and storage for general-purpose workloads, as well as for more specialized workloads like SAP and high-performance databases. 64. Google Axion is our first custom Arm-based CPU designed for the data center, and will be in preview in the coming months. 65. Now in preview, the Compute Engine C4 general-purpose VM provides high performance paired with a controlled maintenance experience for your mission-critical workloads. 66. The general-purpose N4 machine series is built for price-performance with Dynamic Resource Management, and is generally available. 67. C3 bare-metal machines, available in an upcoming preview, provide workloads with direct access to the underlying server’s CPU and memory resources. 68. New X4 memory-optimized instances are now in preview, through this interest form. 69. Z3 VMs are designed for storage-dense workloads that require SSD, and are generally available. 70. Hyperdisk Storage Pools Advanced Capacity, in general availability, and Advanced Performance in preview, allow you to purchase and manage block storage capacity in a pool that’s shared across workloads. 71. Coming to general availability in May, Hyperdisk Instant Snapshots provide near-zero RPO/RTO for Hyperdisk volumes. 72. Google Compute Engine users can now use zonal flexibility, VM family flexibility, and mixed on-demand and spot consumption to deploy their VMs. As part of Google Distributed Cloud (GDC) offering, we announced: 73. A generative AI search packaged solution powered by Gemma open models will be available in preview in Q2 2024 on GDC to help customers retrieve and analyze data at the edge or on-premises. 74. GDC has achieved ISO27001 and SOC2 compliance certifications. 75. A new managed Intrusion Detection and Prevention Solution (IDPS) integrates Palo Alto Networks threat prevention technology with GDC, and is now generally available. 76. GDC Sandbox, in preview, helps application developers build and test services designed for GDC in a Google Cloud environment, without needing to navigate the air-gap and physical hardware. 77. 
A preview GDC storage flexibility feature can help you grow your storage independent of compute, with support for block, file, or object storage. 78. GDC can now run in disconnected mode for up to seven days, and offers a suite of offline management features to help ensure deployments and workloads are accessible and working while they are disconnected; this capability is generally available. 79. New Managed GDC Providers who can sell GDC as a managed service include Clarence, T-Systems, and WWT.and a new Google Cloud Ready — Distributed Cloud badge signals that a solution has been tuned for GDC. 80. GDC servers are now available with an energy-efficient NVIDIA L4 Tensor Core GPU. 81. Google Distributed Cloud Hosted (GDC Hosted) is now authorized to host Top Secret and Secret missions for the U.S. Intelligence Community, and Top Secret missions for the Department of Defense (DoD). From our Google Cloud Networking family, we announced: 82. Gemini Cloud Assist, in preview, provides AI-based assistance to solve a variety of networking tasks such as generating configurations, recommending capacity, correlating changes with issues, identifying vulnerabilities, and optimizing performance. 83. Now generally available, the Model as a Service Endpoint solution uses Private Service Connect, Cloud Load Balancing, and App Hub lets model creators own the model service endpoint to which application developers then connect. 84. Later this year, Cloud Load Balancing will add enhancements for inference workloads: Cloud Load Balancing with custom metrics, Cloud Load Balancing for streaming inference, and Cloud Load Balancing with traffic management for AI models. 85. Cloud Service Mesh is a fully managed service mesh that combines Traffic Director’s control plane and Google’s open-source Istio-based service mesh, Anthos Service Mesh. A service-centric Cross-Cloud Network delivers a consistent, secure experience from any cloud to any service, and includes the following enhancements: 86. Private Service Connect transitivity over Network Connectivity Center, available in preview this quarter, enables services in a spoke VPC to be transitively accessible from other spoke VPCs. 87. Cloud NGFW Enterprise (formerly Cloud Firewall Plus), now GA, provides network threat protection powered by Palo Alto Networks, plus network security posture controls for org-wide perimeter and Zero Trust microsegmentation. 88. Identity-based authorization with mTLS integrates the Identity-Aware Proxy with our internal application Load Balancer to support Zero Trust network access, including client-side and soon, back-end mutual TLS. 89. In-line network data-loss prevention (DLP), in preview soon, integrates Symantec DLP into Cloud Load Balancers and Secure Web Proxy using Service Extensions. 90. Partners Imperva, HUMAN Security, Palo Alto Networks and Traceable are integrating their advanced web protection services into Service Extensions, as are web services providers Cloudinary, Nagra, Queue-it, and Datadog. 91. Service Extensions now has a library of code examples to customize origin selection, adjust headers, and more. 92. Private Service Connect is now fully integrated with Cloud SQL, and generally available. There are many improvements to our storage offerings: 93. Generate insights with Gemini lets you use natural language to analyze your storage footprint, optimize costs, and enhance security across billions of objects. It is available now through the Google Cloud console as an allowlist experimental release. 94. 
Google Cloud NetApp Volumes is expanding to 15 new Google Cloud regions in Q2’24 (GA) and includes a number of enhancements: dynamically migrating files by policy to lower-cost storage based on access frequency (in preview Q2’24); increasing Premium and Extreme service levels up to 1PB in size, with throughput performance up to 3X (preview Q2’24). NetApp Volumes also includes a new Flex service level enabling volumes as small as 1GiB. 95. Filestore now supports single-share backup for Filestore Persistent Volumes and GKE (generally available) and NFS v4.1 (preview), plus expanded Filestore Enterprise capacity up to 100TiB. For Cloud Storage: 96. Cloud Storage Anywhere Cache now uses zonal SSD read cache across multiple regions within a continent (allowlist GA). 97. Cloud Storage soft delete protects against accidental or malicious deletion of data by preserving deleted items for a configurable period of time (generally available). 98. The new Cloud Storage managed folders resource type allows granular IAM permissions to be applied to groups of objects (generally available). 99. Tag-based at-scale backup helps manage data protection for Compute Engine VMs (generally available). 100. The new high-performance backup option for SAP HANA leverages persistent disk (PD) snapshot capabilities for database-aware backups (generally available). 101. As part of Backup and DR Service Report Manager, you can now customize reports with data from Google Cloud Backup and DR using Cloud Monitoring, Cloud Logging, and BigQuery (generally available). Databases 102. Database Studio, a part of Gemini in Databases, brings SQL generation and summarization capabilities to our rich SQL editor in the Google Cloud console, as well as an AI-driven chat interface. 103. Database Center lets operators manage an entire fleet of databases through intelligent dashboards that proactively assess availability, data protection, security, and compliance issues, as well as with smart recommendations to optimize performance and troubleshoot issues. 104. Database Migration Service is also integrated with Gemini in Databases, including assistive code conversion (e.g., from Oracle to PostgreSQL) and explainability features. Likewise, AlloyDB gains a lot of new functionality: 105. AlloyDB AI lets gen AI developers build applications that accurately query data with natural language, just like they do with SQL; available now in AlloyDB Omni. 106. AlloyDB AI now includes a new pgvector-compatible index based on Google’s approximate nearest neighbor algorithms, or ScaNN; it’s available as a technology preview in AlloyDB Omni. 107. AlloyDB model endpoint management makes it easier to call remote Vertex AI, third-party, and custom models; available in AlloyDB Omni today and soon on AlloyDB in Google Cloud. 108. AlloyDB AI “parameterized secure views” secures data based on end-users’ context; available now in AlloyDB Omni. Bigtable, which turns 20 this year, got several new features: 109. Bigtable Data Boost, a pre-GA offering, delivers high-performance, workload-isolated, on-demand processing of transactional data, without disrupting operational workloads. 110. Bigtable authorized views, now generally available, allow multiple teams to leverage the same tables and securely share data directly from the database. 111. New Bigtable distributed counters in preview process high-frequency event data like clickstreams directly in the database. 112. 
Bigtable large nodes, the first of several workload-optimized node shapes, offer more performance stability at higher server utilization rates, and are in private preview. Memorystore for Redis Cluster, meanwhile: 113. Now supports both AOF (Append Only File) and RDB (Redis Database)-based persistence and has new node shapes that offer better performance and cost management. 114. Offers ultra-fast vector search, now generally available. 115. Includes new configuration options to tune max clients, max memory, max memory policies, and more, now in preview. Firestore users, take note: 116. Gemini Code Assist now incorporates assistive capabilities for developing with Firestore. 117. Firestore now has built-in support for vector search using exact nearest neighbors, the ability to automatically generate vector embeddings using popular embedding models via a turn-key extension, and integrations with popular generative AI libraries such as LangChain and LlamaIndex. 118. Firestore Query Explain in preview can help you troubleshoot your queries. 119. Firestore now supports Customer Managed Encryption Keys (CMEK) in preview, which allows you to encrypt data stored at-rest using your own specified encryption key. 120. You can now deploy Firestore in any available supported Google Cloud region, and Firestore’s Scheduled Backup feature can now retain backups for up to 98 days, up from seven days. 121. Cloud SQL Enterprise Plus edition now offers advanced failover capabilities such as orchestrated switchover and switchback. Data analytics 122. BigQuery is now Google Cloud’s single integrated platform for data to AI workloads, with BigLake, BigQuery’s unified storage engine, providing a single interface across BigQuery native and open formats for analytics and AI workloads. 123. BigQuery offers improved Iceberg support, with DDL, DML and high-throughput support in preview, while BigLake now supports the Delta file format, also in preview. 124. BigQuery continuous queries are in preview, providing continuous SQL processing over data streams, enabling real-time pipelines with AI operators or reverse ETL. The above-mentioned Gemini in BigQuery enables all manner of new capabilities and offerings: 125. New BigQuery integrations with Gemini models in Vertex AI support multimodal analytics and vector embeddings, and fine-tuning of LLMs. 126. BigQuery Studio provides a collaborative data workspace, the choice of SQL, Python, Spark or natural language directly, and new integrations for real-time streaming and governance; it is now generally available. 127. The new BigQuery data canvas provides a notebook-like experience with embedded visualizations and natural language support courtesy of Gemini. 128. BigQuery can now connect models in Vertex AI with enterprise data, without having to copy or move data out of BigQuery. 129. You can now use BigQuery with Gemini 1.0 Pro Vision to analyze both images and videos by combining them with your own text prompts using familiar SQL statements. 130. Column-level lineage in BigQuery and expanded lineage capabilities for Vertex AI pipelines will be in preview soon. Other updates to our data analytics portfolio include: 131. Apache Kafka for BigQuery as a managed service is in preview, to enable streaming data workloads based on open source APIs. 132. A serverless engine for Apache Spark integrated within BigQuery Studio is now in preview. 133. Dataplex features expanded data-to-AI governance capabilities in preview.
Developers & operators. Gemini Code Assist includes several new enhancements: 134. Full codebase awareness, in preview, uses Gemini 1.5 Pro to make complex changes, add new features, and streamline updates to your codebase. 135. A new code transformation feature available today in Cloud Workstations and Cloud Shell Editor lets you use natural language prompts to tell Gemini Code Assist to analyze, refactor, and optimize your code. 136. Gemini Code Assist now has extended local context, automatically retrieving relevant local files from your IDE workspace and displaying references to the files used. 137. With code customization in private preview, Gemini Code Assist lets you integrate private codebases and repositories for hyper-personalized code generation and completions, and connects to GitLab, GitHub, and Bitbucket source-code repositories. 138. Gemini Code Assist extends to Apigee and Application Integration in preview, to access and connect your applications. 139. We extended our partnership with Snyk to Gemini Code Assist, letting you learn about vulnerabilities and common security topics right within your IDE. 140. The new App Hub provides an accurate, up-to-date representation of deployed applications and their resource dependencies. Integrated with Gemini Cloud Assist, App Hub is generally available. Users of our Cloud Run and Google Kubernetes Engine (GKE) runtime environments can look forward to a variety of features: 141. Cloud Run application canvas lets developers generate, modify and deploy Cloud Run applications with integrations to Vertex AI, Firestore, Memorystore, and Cloud SQL, as well as load balancing and Gemini Cloud Assist. 142. GKE now supports container and model preloading to accelerate workload cold starts. 143. GPU sharing with NVIDIA Multi-Process Service (MPS) is now offered in GKE, enabling concurrent processing on a single GPU. 144. GKE supports GCS FUSE read caching, now generally available, using a local directory as a cache to accelerate repeat reads for small and random I/Os. 145. GKE Autopilot mode now supports NVIDIA H100 GPUs, TPUs, reservations, and Compute Engine committed use discounts (CUDs). 146. Gemini Cloud Assist in GKE is available to help with optimizing costs, troubleshooting, and synthetic monitoring. Cloud Billing tools help you track and understand Google Cloud spending, pay your bill, and optimize your costs; here are a few new features: 147. Support for Cloud Storage costs at the bucket level and storage tags is included out of the box with Cloud Billing detailed data exports to BigQuery. 148. A new BigQuery data view for FOCUS allows users to compare costs and usage across clouds. 149. You can now convert cost management reports into BigQuery billing queries right from the Cloud Billing console. 150. A new Cloud FinOps Anomaly Detection feature is in private preview. 151. FinOps hub is now generally available, adds support to view top savings opportunities, and a preview of our FinOps hub dashboard lets you analyze costs by project, region, or machine type. 152. A new CUD Analysis solution is available across Google Compute Engine resource families including TPU v5e, TPU v5p, A3, H3, and C3D. 153. There are new spend-based CUDs available for Memorystore, AlloyDB, BigTable, and Dataflow. Security. Building on natural language search and case summaries in Chronicle, Gemini in Security Operations is coming to the entire investigation lifecycle, including: 154.
A new assisted investigation feature, generally available at the end of this month, that guides analysts through their workflow in Chronicle Enterprise and Chronicle Enterprise Plus. 155. The ability to ask Gemini for the latest threat intelligence from Mandiant directly in-line — including any indicators of compromise found in their environment. 156. Gemini in Threat Intelligence, in public preview, allows you to tap into Mandiant’s frontline threat intelligence using conversational search. 157. VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform; generally available now. 158. Gemini in Security Command Center now lets security teams search for threats and other security events using natural language in preview, provides summaries of critical- and high-priority misconfiguration and vulnerability alerts, and summarizes attack paths. 159. Gemini Cloud Assist also helps with security tasks, via: IAM Recommendations, which can provide straightforward, contextual recommendations to remove roles from over-permissioned users or service accounts; Key Insights, which help during encryption key creation based on its understanding of your data, your encryption preferences, and your compliance needs; and Confidential Computing Insights, which recommends options for adding confidential computing protection to sensitive workloads based on your data and your compute usage. Other security news includes: 160. The new Chrome Enterprise Premium, now generally available, combines the popular browser with Google threat and data protection, Zero Trust access controls, enterprise policy controls, and security insights and reporting. 161. Applied threat intelligence in Google Security Operations, now generally available, automatically applies global threat visibility to each customer’s unique environment. 162. Security Command Center Enterprise is now generally available and includes Mandiant Hunt, now in preview. 163. Identity and Access Management Privileged Access Manager (PAM), now available in preview, provides just-in-time, time-bound, and approval-based access elevations. 164. Identity and Access Management Principal Access Boundary (PAB) is a new, identity-centered control now in preview that enforces restrictions on IAM principals. 165. Cloud Next-Gen Firewall (NGFW) Enterprise is now generally available, including threat protection from Palo Alto Networks. 166. Cloud Armor Enterprise is now generally available and offers a pay-as-you-go model that includes advanced network DDoS protection, web application firewall capabilities, network edge policy, adaptive protection, and threat intelligence. 167. Sensitive Data Protection integration with Cloud SQL is now generally available, and is deeply integrated into the Security Command Center Enterprise risk engine. 168. Key management with Autokey is now in preview, simplifying the creation and management of customer-managed encryption keys (CMEK). 169. Bare metal HSM deployments in PCI-compliant facilities are now available in more regions. 170. Regional Controls for Assured Workloads is now in preview and is available in 32 cloud regions in 14 countries. 171. Audit Manager automates control verification with proof of compliance for workloads and data on Google Cloud, and is in preview. 172. Advanced API Security, part of Apigee API Management, now offers shadow API detection in preview. As part of our Confidential Computing portfolio, we announced: 173.
Confidential VMs on Intel TDX are now in preview and available on the C3 machine series with Intel TDX. For AI and ML workloads, we support Intel AMX, which provides CPU-based acceleration by default on C3 series Confidential VMs. 174. Confidential VMs on general-purpose N2D machine series with AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) are now in preview. 175. Live Migration on Confidential VMs is now in general availability on N2D machine series across all regions. 176. Confidential VMs on the A3 machine series with NVIDIA Tensor Core H100 GPUs will be in private preview later this year. Migration 177. The Rapid Migration Program (RaMP) now covers migration and modernization use cases that span across applications and the underlying infrastructure, data and analytics. For example, as part of RaMP for Storage: Storage egress costs from Amazon S3 to Google Cloud Storage are now completely free. Cloud Storage's client libraries for Python, Node.js, and Java now support parallelization of uploads and downloads from client libraries. Migration Center also includes several excellent new additions: 178. Migration use case navigator, for mapping out how to migrate your resources (servers, databases, data warehouses, etc.) from on-prem and other clouds directly into Google Cloud, including new Cloud Spend Estimators for rapid TCO assessments of on-premises VMware and Exadata environments. 179. Database discovery and assessment for Microsoft SQL Server, PostgreSQL and MySQL to Cloud SQL migrations. Google Cloud VMware Engine, an integrated VMware service on Google Cloud now offers: 180. The intent to support VMware Cloud Foundation License Portability 181. General availability of larger instance type (ve2-standard-128) offerings. 182. Networking enhancements including next-gen VMware Engine Networking, automated zero-config VPC peering, and Cloud DNS for workloads. 183. Terraform Infrastructure as Code Automation. Migrate to Virtual Machines helps teams migrate their workloads. Here’s what we announced: 184. A new Disk Migration solution for migrating disk volumes to Google Cloud. 185. Image Import (preview) as a managed service. 186. BIOS to UEFI Conversion in preview, which automatically converts bootloaders to the newer UEFI format. 187. Amazon Linux Conversion in preview, for converting Amazon Linux to Rocky Linux in Google Compute Engine. 188. CMEK support, so you maintain control over your own encryption keys. When replatforming VMs to containers in GKE or Cloud Run, there’s: 189. The new Migrate to Containers (M2C) CLI, which generates artifacts that you can deploy to either GKE or Cloud Run. 190. M2C Cloud Code Extension, in preview, which migrates applications from VMs to containers running on GKE directly in Visual Studio. Here are the enhancements to our Database Migration Service: 191. Database Migration Service now offers AI-powered last-mile code conversion from Oracle to PostgreSQL. 192. Database Migration Service now performs migration from SQL Server (on any platform) to Cloud SQL for SQL Server, in preview. 193. In Datastream, SQL Server as a source for CDC performs data movement to BigQuery destinations. Migrating from a mainframe? Here are some new capabilities: 194. The Mainframe Assessment Tool (MAT) now powered by gen AI analyzes the application codebase, performing fit assessment and creating application-level summarization and test cases. 195. Mainframe Connector sends a copy of your mainframe data to BigQuery for off-mainframe analytics. 196. 
G4 refactors mainframe application code (COBOL, RPG, JCL, etc.) and data from their original state/programming language to a modern stack (Java). 197. Dual Run lets you run a new system side by side with your existing mainframe, duplicating all transactions and checking for completeness, quality and effectiveness of the new solution. Partners & ecosystem 198. Partners showcased more than 100 solutions that leverage Google AI on the Next ‘24 show floor. 199. We announced the 2024 Google Cloud Partner of the Year winners. 200. Gemini models will be available in the SAP Generative AI Hub. 201. GitLab announced that its authentication, security, and CI/CD integrations with Google Cloud are now in public beta for customers. 202. Palo Alto Networks named Google Cloud its AI provider of choice and will use Gemini models to improve threat analysis and incident summarization for its Cortex XSIAM platform. 203. Exabeam is using Google Cloud AI to improve security outcomes for customers. 204. Global managed security services company Optiv is expanding support for Google Cloud products. 205. Alteryx, Dynatrace, and Harness are launching new features built with Google Cloud AI to automate workflows, support data governance, and enable users to better observe and manage the data. 206. A new Generative AI Services Specialization is available for partners who demonstrate the highest level of technical proficiency with Google Cloud gen AI. 207. We introduced new Generative AI Delivery Excellence and Technical Bootcamps, and advanced Challenge Labs in generative AI. 208. The Google Cloud Ready - BigQuery initiative has 21 new partners: Actable, AgileData, Amplitude, Boostkpi, CaliberMind, Calibrate Analytics, CloudQuery, DBeaver, Decube, DinMo, Estuary, Followrabbit, Gretel, Portable, Precog, Retool, SheetGo, Tecton, Unravel Data, Vallidio, and Vaultree. 209. The Google Cloud Ready - AlloyDB initiative has six new partners: Boostkpi, DBeaver, Estuary, Redis, Thoughtspot, and SeeBurger. 210. The Google Cloud Ready - Cloud SQL initiative has five new partners: BoostKPI, DBeaver, Estuary, Redis, and Thoughtspot. 211. CrowdStrike is integrating its Falcon Platform with Google Cloud products. Members of our Google for Startups program, meanwhile, will be interested to learn that: 212. The Google for Startups Cloud Program has a new partnership with the NVIDIA Inception startup program. The benefits include providing Inception members with access to Google Cloud credits, go-to-market support, technical expertise, and fast-tracked onboarding to Google Cloud Marketplace. 213. As part of the NVIDIA Inception partnership, Google for Startups Cloud Program members can join NVIDIA Inception and gain access to technological expertise, NVIDIA Deep Learning Institute course credits, NVIDIA hardware and software, and more. Eligible members of the Google for Startups Cloud Program also can participate in NVIDIA Inception Capital Connect, a platform that gives startups exposure to venture capital firms interested in the space. 214. The new Google for Startups Accelerator: AI-First program for startups building AI solutions based in the U.S. and Canada has launched, and its cohort includes 15 AI startups: Aptori, Augmend, Backpack Healthcare, BrainLogic AI, Cicerai, CLIKA, Easel AI, Findly, Glass Health, Kodif, Liminal, mbue, Modulo Bio, Rocket Doctor, and Sibli. 215.
The Startup Learning Center provides startups with curated content to help them grow with Google Cloud, and will be launching an offering for startup developers and future founders via Innovators Plus in the coming months. Finally, Google Cloud Consulting has the following services to help you build out your Google Cloud environment: 216. Google Cloud Consulting is offering no-cost, on-demand training to top customers through Google Cloud Skills Boost, including new gen AI skill badges: Prompt Design in Vertex AI, Develop Gen AI Apps with Gemini and Streamlit, and Inspect Rich Documents with Gemini Multimodality and Multimodal RAG. 217. The new Isolator solution protects healthcare data used in collaborations between parties using a variety of Google Cloud technologies including Chrome Enterprise Premium, VPC Service Controls, Chrome Enterprise, and encryption. 218. Google Cloud Consulting’s Delivery Navigator is now generally available to all Google Cloud qualified services partners. Phew. What a week! On behalf of Google Cloud, we’re so grateful you joined us at Next ‘24, and can’t wait to host you again next year back in Las Vegas at the Mandalay Bay on April 9-11, 2025! View the full article
Tagged with: google cloud next, google gemini (and 6 more)
-
There is no AI without data Artificial intelligence is the most exciting technology revolution of recent years. Nvidia, Intel, AMD and others continue to produce faster and faster GPUs, enabling larger models and higher throughput in decision-making processes. Outside of the immediate AI hype, one area still remains somewhat overlooked: AI needs data (find out more here). First and foremost, storage systems need to provide high-performance access to ever-growing datasets, but more importantly they need to ensure that this data is securely stored, not just for the present, but also for the future. There are multiple different types of data used in typical AI systems: Raw and pre-processed data Training data Models Results All of this data takes time and computational effort to collect, process and output, and as such needs to be protected. In some cases, like telemetry data from a self-driving car, this data might never be reproducible. Even after training data is used to create a model, its value is not diminished; improvements to models require consistent training data sets so that any adjustments can be fairly benchmarked. Raw, pre-processed, training and results data sets can contain personally identifiable information, and as such steps need to be taken to ensure they are stored in a secure fashion. Beyond the moral responsibility of safely storing data, there can also be significant penalties associated with data breaches. Challenges with securely storing AI data We covered many of the risks associated with securely storing data in this blog post. The same risks apply in an AI setting as well. After all, machine learning is another application that consumes storage resources, albeit sometimes at a much larger scale. AI use cases are relatively new; however, the majority of modern storage systems, including open source solutions like Ceph, have mature features that can be used to mitigate these risks. Physical theft thwarted by data-at-rest encryption Any disk used in a storage system could theoretically be lost due to theft, or when returned for warranty replacement after a failure event. By using at-rest encryption, every byte of data stored on a disk, whether spinning media or flash, is useless without the cryptographic keys needed to decrypt it, thus protecting sensitive data and proprietary models created after hours or even days of processing. Strict access control to keep out uninvited guests A key tenet of any system design is ensuring that users (real people, or headless accounts) have access only to the resources they need, and that at any time that access can easily be removed. Storage systems like Ceph use their own access control mechanisms and also integrate with centralised auth systems like LDAP to allow easy access control. Eavesdropping defeated by in-flight encryption There is nothing worse than someone listening in to a conversation that they should not be privy to. The same thing can happen in computer networks too. By employing encryption on all network flows, both client-to-storage and internal storage system networks, no data can be leaked to third parties eavesdropping on the network. Recover from ransomware with snapshots and versioning It seems like every week another large enterprise has to disclose a ransomware event, where an unauthorised third party has taken control of their systems and encrypted the data.
Not only does this lead to downtime, but also to the possibility of having to pay a ransom for the decryption key to regain control of systems and access to data. AI projects often represent a significant investment of both time and resources, so having an initiative undermined by a ransomware attack could be highly damaging. Using point-in-time snapshots or versioning of objects can allow an organisation to revert to a previous non-encrypted state, and potentially resume operations sooner. Learn more Ceph is one storage solution that can be used to store various AI datasets, and is not only scalable to meet performance and capacity requirements, but also has a number of features to ensure data is stored securely. Find out more about how Ceph solves AI storage challenges here. Additional resources What is Ceph? Blog : Ceph storage for AI Webinar : AI storage with Ceph White paper – A guide to software-defined storage for enterprises Explore Canonical’s AI solutions View the full article
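To make the snapshot and versioning recovery pattern described above concrete, here is a minimal sketch (not taken from the Ceph post itself) that enables S3 object versioning on a bucket served by an S3-compatible gateway such as Ceph RGW, then restores an object from a prior version after an incident. The endpoint URL, credentials, bucket and key names are placeholder assumptions.
import boto3
# Placeholder endpoint and credentials for an S3-compatible gateway (e.g., Ceph RGW).
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.internal:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
BUCKET = "training-data"
KEY = "datasets/telemetry-2024-03.parquet"
# Turn on object versioning so overwrites (including malicious encryption) keep prior versions.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
# After an incident, inspect the stored versions of the affected object...
versions = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY).get("Versions", [])
for v in versions:
    print(v["VersionId"], v["LastModified"], "latest" if v["IsLatest"] else "")
# ...then copy a known-good version (chosen from the listing above) back over the latest one.
GOOD_VERSION_ID = "REPLACE_WITH_KNOWN_GOOD_VERSION_ID"
s3.copy_object(
    Bucket=BUCKET,
    Key=KEY,
    CopySource={"Bucket": BUCKET, "Key": KEY, "VersionId": GOOD_VERSION_ID},
)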
Tagged with: data storage, storage (and 4 more)
-
We’re all already cyborgs. Whether you’re wearing glasses, sensing the dimensions of your car as you’re parking, or even “feeling” the texture of your food as you skewer it with a fork, you are directly experiencing the mind’s ability to extend itself and enmesh itself with technology – no cyberpunk implants required. Because we’re cyborgs, we have a natural inclination to pull tools and technology closer to ourselves, which has played out in the rise of everything from the Walkman to wearable fitness trackers to, most recently, the Apple Vision Pro. But today’s smart devices do more than just expand our minds. Sitting as they do at the intersection of the physical and digital worlds, they generate data that combine to describe almost every aspect of a person’s online and offline life, creating precisely the kind of comprehensive asset that hungry advertisers, governments, and even career cybercriminals dream of. Wearable technology has vast promise, but it comes with unprecedented risks to our privacy. Only by understanding the dimensions of those risks can we preserve the benefits of wearables while promoting safety and transparency. Wearable smart devices make us vulnerable by collecting health-related information, biometric data, location data, and more, while often commingling those data points with common digital information, including contact information, purchase history, or browsing behavior somewhere along the line. The hybrid nature of the data and devices ensures that the risks they pose are just as multifaceted, but it’s possible to think of them in three broad categories: 1. Technological Technological vulnerabilities are typically most people’s first thought when assessing the potential dangers of new tech. It’s a good instinct – in many respects hardware and software security constitute the first and last line of defense for user data. It’s therefore necessary to pay close attention to the attack surface that wearables present, as it’s larger and more complex than one might expect. First, there’s the device itself, which could be vulnerable to proximity-based attacks via however it communicates with the outside world. From there, wearables might transmit data to an intermediary device like a smartphone, which itself is vulnerable, after which the data makes its way to permanent, centralized storage on proprietary servers. Each step in this process can be attacked in creative ways via hardware and software, and it’s increasingly likely that bad actors will try due to the richness of the target. Thankfully, standards such as Bluetooth and WiFi have robust security mechanisms to prevent most such attacks, but they’re not perfect. Healthcare data breaches more than doubled from 2013 to 2023, and it’s likely that this trend will be reflected in healthcare-adjacent data, too. 2. Regulatory As is so often the case, privacy protections have failed to keep pace with advancements in technology, and what protections do exist are piecemeal and surprisingly narrow. Most Americans have a vague sense that their healthcare data is protected (and they’re vaguely correct), while an informed minority know the Health Insurance Portability and Accountability Act (HIPAA) exists to safeguard healthcare data. What’s not commonly understood is that HIPAA applies only to healthcare providers and insurers, and only for personally identifiable records used by those entities. 
This means there’s a potential regulatory distinction between healthcare data and biometric data produced by a fitness tracker even if the data point being tracked is identical. Generalized privacy regulations attempt to fill in this gap, but they’re mostly case-by-case. While the EU has one standard (GDPR), as does Canada (PIPEDA), the United States has a state-by-state patchwork of uneven regulation that remains difficult to navigate. The Federal Trade Commission has also tried to backstop health data privacy, citing both GoodRx and BetterHelp in 2023 alone. Absent more specific privacy protections, however, this type of enforcement will necessarily come after privacy has been violated, and almost always on the basis of “deceptive business practices” rather than due to inherent biometric data safeguards. 3. Educational Just as regulators trail technology, so too does consumer understanding of what’s being tracked, how data can be used, and by whom, all of which are necessary to give informed consent. First and foremost, people need to get into the habit of thinking about everything on their wearables as potentially valuable data. Your daily step count, your heart rate, your sleep quality – all of the fun and useful insights your wearables generate – begin painting a comprehensive picture of you that can seriously erode individual privacy, and it’s all above-board. This kind of data tracking becomes even more impactful when you think about today’s most powerful devices. The Apple Vision Pro by default knows where you are, what you’re browsing, the features of your environment, and even where you’re looking and how you move your body. So much data aggregation allows for deep, profound inferences about individuals that could be used (hopefully not misused) in ways ranging from anodyne to alarming: more targeted ads based on implied preferences; increased insurance premiums due to lifestyle choices or poor treatment compliance; hacker groups revealing someone’s house is empty in real time; the list goes on. Data rollups Data rollups aren't confined to devices as powerful as the Apple Vision Pro, either. Consumers need to be made aware of how big tech companies can connect their individual dots across multiple devices and services. For example, consumers are broadly aware Google has location data from Android phones along with search and browsing history, but fewer know that Google acquired Fitbit in 2021, thereby making all Fitbit-generated data a de facto part of the Google ecosystem. There’s nothing intrinsically wrong with this, but consumers require an ecosystem-level understanding of the entities controlling their data to make informed choices. None of this is to say that the situation is beyond repair. In fact, we have to fix it so that we can safely enjoy the benefits of life-changing technology. In order to do that we need solutions that are as comprehensive as the problems. After all, privacy is a team sport. Comprehensive security First, we need to embrace more comprehensive security and default encryption at every step, on every device, for all data. Blockchains have much to offer in terms of restricting device access, securing data, and leveraging decentralized infrastructure in order to reduce the honeypot effect of vast data troves.
Second, as noted above, informed visibility is a strict prerequisite for informed consent, so consumers must demand – and privacy-conscious companies must embrace – absolute transparency in terms of what data is collected, how it’s used (and by whom), and with what other data might it be commingled. It’s even possible to envision a world in which companies disclose the information they’re looking to derive based on the data points they’re aggregating, and consumers in turn have the ability to accept or reject the proposition. That leads us to the final piece, which is nuanced control of one’s data. Among its many flaws, the standard model of extracting data from users generally presents them with binary choices: consent and participation, or opting out entirely. Consumers need finer-grained control over what data they share and for what purposes it may be used rather than being strong-armed into an all-or-nothing model. Once again, they should demand it, and privacy-conscious companies can earn immense goodwill by giving it to them. Ultimately there’s nothing to be gained by assuming that privacy is doomed to become a quaint notion from a bygone era. We should instead be hopeful that we can find a balance between the unprecedented benefits of wearable technology and the risks they pose to privacy, but we can’t afford to wait around for regulators. Instead, it’s incumbent upon everyday people to educate themselves on threats to their privacy and speak up not just in favor of better regulation, but in defense of their right to own and control the data they’re creating. If enough people say it, the industry has to listen. We've listed the best smart home devices. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
-
X, formerly Twitter, has extended support for passkeys as a login option for iPhone users across the globe, the company has announced. Passkeys support was introduced by X earlier this year, but the option was limited to iOS users based in the United States. Now anyone on the social media platform can use them. Passkeys are both easier to use and more secure than passwords because they let users sign in to apps and sites the same way they unlock their devices: With Face ID, Touch ID, or a device passcode. Passkeys are also resistant to online attacks like phishing, making them more secure than things like SMS one-time codes. Apple integrated passkeys into iOS in 2022 with the launch of iOS 16, and the feature is also available in iPadOS 16.1 and later as well as macOS Ventura and later. To set up passkeys in X on iPhone, follow these steps: Log in to the X app. Click Your account in the navigation bar. Select Settings and privacy, then click Security and account access, then Security. Under Additional password protection, click Passkey. Enter your password when prompted. Select Add a passkey and follow the prompts. Update: Passkeys is now available as a login option for everyone globally on iOS! Try it out. https://t.co/v1LyN0l8wF — Safety (@Safety) April 8, 2024 X is just one of several companies to implement support for passkeys in recent months, with other supporting apps and websites including Google, PayPal, Best Buy, eBay, Dashlane, and Microsoft. Tags: Twitter, Passkeys. This article, "X Rolls Out Passkeys Support to iPhone Users Worldwide" first appeared on MacRumors.com Discuss this article in our forums View the full article
-
5G technology impacts not just our daily lifestyle but the Internet of Things (IoT) as well. The world of 5G is not only being transformed by hyper-connectivity; its future also hinges on a critical element: IoT security. While 5G has remarkable speed and capacity, it also provides a large attack surface. Unlike […] The post Impact of IoT Security for 5G Technology appeared first on Kratikal Blogs. The post Impact of IoT Security for 5G Technology appeared first on Security Boulevard. View the full article
-
In short, API security testing involves the systematic assessment of APIs to identify vulnerabilities, coding errors, and other weaknesses that could be exploited by malicious actors. Application Programming Interfaces, or APIs, provide much of the communication layer between applications that house an organization’s critical customer and company information, and API security testing is essential to […] The post What is API Security Testing? appeared first on Cequence Security. The post What is API Security Testing? appeared first on Security Boulevard. View the full article
-
Picus Security today added an artificial intelligence (AI) capability to enable cybersecurity teams to automate tasks via a natural language interface. The capability, enabled by OpenAI, leverages the existing knowledge graph technologies from Picus Security. Dubbed Picus Numi AI, the capability makes use of a large language model (LLM) developed by OpenAI to.. The post Picus Security Melds Security Knowledge Graph with Open AI LLM appeared first on Security Boulevard. View the full article
-
On February 29, I was honored to serve as the moderator for a panel on “The Rise of AI and its Impact on Corporate Security” at the 2024 Ontic Summit. The panel not only provided me with a reason to focus my own thoughts on the topic, but to also learn from the insights of the… The post Implications of AI for Corporate Security appeared first on Ontic. The post Implications of AI for Corporate Security appeared first on Security Boulevard. View the full article
-
Security researchers have found a relatively easy and cheap way to clone the keycards used on three million Saflok electronic RFID locks in 13,000 hotels and homes all over the world. The keycard and lock manufacturer, Dormakaba, has been notified, and it is currently working to replace the vulnerable hardware - but it’s a long, tedious process, which is not yet done. Although the flaws were first discovered back in 2022, the researchers have now disclosed more information on them, dubbed “Unsaflok”, in order to raise awareness. Cheap card cloning The flaws were discovered at a private hacking event set up in Las Vegas, where different research teams competed to find vulnerabilities in a hotel room and all devices inside. A team, consisting of Lennert Wouters, Ian Carroll, rqu, BusesCanFly, Sam Curry, shell, and Will Caruana, focused their attention on the Dormakaba Saflok electronic locks for hotel rooms. Soon enough, they found two flaws which, when chained together, allowed them to open the doors with a custom-built keycard. First, they needed access to any card from the premises. That could be the card to their own room. Then, they reverse-engineered the Dormakaba front desk software and lock programming device, which allowed them to spoof a working master key that can open any room on the property. Finally, to clone the cards, they needed to break into Dormakaba’s key derivation function. To forge the keycards, the team used a MIFARE Classic card, a commercial card-writing tool, and an Android phone with NFC capabilities. All of this costs just a few hundred dollars, it was said. With their custom-built keycard, the team would be able to access more than three million locks, installed in 13,000 hotels and homes all over the world. Following the publication of the findings, Dormakaba released a statement to the media, saying the vulnerability affects the Saflok System 6000, Ambiance, and Community systems. It added that there is no evidence of these flaws ever being exploited in the wild. Via BleepingComputer More from TechRadar Pro This nasty new Android malware can easily bypass Google Play security — and it's already been downloaded thousands of times. Here's a list of the best firewalls around today. These are the best endpoint security tools right now. View the full article
-
Apple today released iOS 17.4.1 and iPadOS 17.4.1, minor updates to the iOS 17 and iPadOS 17 operating systems. The new software comes a couple of weeks after Apple released iOS 17.4 and iPadOS 17.4 with app changes in the European Union, new emoji, and more. iOS 17.4.1 and iPadOS 17.4.1 can be downloaded on eligible iPhones and iPads over-the-air by going to Settings > General > Software Update. For customers who are still on iOS 16, Apple has also released an iOS 16.7.7 security update. According to Apple's release notes, the iOS 17.4.1 update includes important security updates and bug fixes. Apple will likely begin testing iOS 17.5 in the near future, with betas expected to come out in the next two weeks. Related Roundups: iOS 17, iPadOS 17. Related Forums: iOS 17, iPadOS 17. This article, "Apple Releases iOS 17.4.1 and iPadOS 17.4.1 With Bug Fixes and Security Improvements" first appeared on MacRumors.com Discuss this article in our forums View the full article
-
We are excited to announce the release of the Databricks AI Security Framework (DASF) version 1.0 whitepaper! The framework is designed to improve... View the full article
Tagged with: databricks, ai (and 1 more)
-
In the realm of containerized applications, Kubernetes reigns supreme. But with great power comes great responsibility, especially when it comes to safeguarding sensitive data within your cluster. Terraform, the infrastructure-as-code darling, offers a powerful solution for managing Kubernetes Secrets securely and efficiently. This blog delves beyond the basics, exploring advanced techniques and considerations for leveraging Terraform to manage your Kubernetes Secrets. Understanding Kubernetes Secrets Kubernetes Secrets provides a mechanism to store and manage sensitive information like passwords, API keys, and tokens used by your applications within the cluster. These secrets are not directly exposed in the container image and are instead injected into the pods at runtime. View the full article
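The excerpt above focuses on managing Secrets with Terraform; as a rough sketch of the object being managed rather than the article's Terraform code, the snippet below creates an equivalent Kubernetes Secret through the official Kubernetes Python client. The secret name, namespace, and values are made up for illustration.
from kubernetes import client, config
# Load credentials from the local kubeconfig (inside a cluster, load_incluster_config() would be used).
config.load_kube_config()
core_v1 = client.CoreV1Api()
# An Opaque Secret holding example database credentials; string_data is encoded by the API server.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials", namespace="default"),
    type="Opaque",
    string_data={
        "username": "app-user",
        "password": "replace-me",  # in practice, inject from a vault or CI variable, never hard-code
    },
)
core_v1.create_namespaced_secret(namespace="default", body=secret)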
Tagged with: kubernetes, k8s (and 2 more)
-
In the rapidly evolving landscape of Kubernetes, security remains at the forefront of concerns for developers and architects alike. Kubernetes 1.25 brings significant changes, especially in how we approach pod security, an area critical to the secure deployment of applications. This article dives deep into the intricacies of Pod Security Admission (PSA), the successor to Pod Security Policies (PSP), providing insights and practical guidance to harness its potential effectively. Understanding Pod Security Admission With the deprecation of Pod Security Policies in previous releases, Kubernetes 1.29 emphasizes Pod Security Admission (PSA), a built-in admission controller designed to enforce pod security standards at creation and modification time. PSA introduces a more streamlined, understandable, and manageable approach to securing pods, pivotal for protecting cluster resources and data. View the full article
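As a small illustration of the enforcement model described above, the sketch below labels a namespace so that the built-in Pod Security Admission controller enforces the restricted profile. It uses the Kubernetes Python client, and the namespace name is an assumption.
from kubernetes import client, config
config.load_kube_config()
core_v1 = client.CoreV1Api()
# Pod Security Admission is driven by namespace labels: enforce blocks violating pods,
# while warn and audit only surface violations. "restricted" is the most hardened built-in profile.
psa_labels = {
    "pod-security.kubernetes.io/enforce": "restricted",
    "pod-security.kubernetes.io/enforce-version": "latest",
    "pod-security.kubernetes.io/warn": "restricted",
    "pod-security.kubernetes.io/audit": "restricted",
}
# Merge-patch the labels onto an existing namespace (name assumed for illustration).
core_v1.patch_namespace(
    name="team-apps",
    body={"metadata": {"labels": psa_labels}},
)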
-
Author: Sascha Grunert Seccomp stands for secure computing mode and has been a feature of the Linux kernel since version 2.6.12. It can be used to sandbox the privileges of a process, restricting the calls it is able to make from userspace into the kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a node to your Pods and containers. But distributing those seccomp profiles is a major challenge in Kubernetes, because the JSON files have to be available on all nodes where a workload can possibly run. Projects like the Security Profiles Operator solve that problem by running as a daemon within the cluster, which makes me wonder which part of that distribution could be done by the container runtime. Runtimes usually apply the profiles from a local path, for example: apiVersion: v1 kind: Pod metadata: name: pod spec: containers: - name: container image: nginx:1.25.3 securityContext: seccompProfile: type: Localhost localhostProfile: nginx-1.25.3.json The profile nginx-1.25.3.json has to be available in the root directory of the kubelet, appended by the seccomp directory. This means the default location for the profile on-disk would be /var/lib/kubelet/seccomp/nginx-1.25.3.json. If the profile is not available, then runtimes will fail on container creation like this: kubectl get pods NAME READY STATUS RESTARTS AGE pod 0/1 CreateContainerError 0 38s kubectl describe pod/pod | tail Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 117s default-scheduler Successfully assigned default/pod to 127.0.0.1 Normal Pulling 117s kubelet Pulling image "nginx:1.25.3" Normal Pulled 111s kubelet Successfully pulled image "nginx:1.25.3" in 5.948s (5.948s including waiting) Warning Failed 7s (x10 over 111s) kubelet Error: setup seccomp: unable to load local profile "/var/lib/kubelet/seccomp/nginx-1.25.3.json": open /var/lib/kubelet/seccomp/nginx-1.25.3.json: no such file or directory Normal Pulled 7s (x9 over 111s) kubelet Container image "nginx:1.25.3" already present on machine The major obstacle of having to manually distribute the Localhost profiles will lead many end-users to fall back to RuntimeDefault or even running their workloads as Unconfined (with disabled seccomp). CRI-O to the rescue The Kubernetes container runtime CRI-O provides various features using custom annotations. The v1.30 release adds support for a new set of annotations called seccomp-profile.kubernetes.cri-o.io/POD and seccomp-profile.kubernetes.cri-o.io/<CONTAINER>. Those annotations allow you to specify: a seccomp profile for a specific container, when used as: seccomp-profile.kubernetes.cri-o.io/<CONTAINER> (example: seccomp-profile.kubernetes.cri-o.io/webserver: 'registry.example/example/webserver:v1') a seccomp profile for every container within a pod, when used without the container name suffix but the reserved name POD: seccomp-profile.kubernetes.cri-o.io/POD a seccomp profile for a whole container image, if the image itself contains the annotation seccomp-profile.kubernetes.cri-o.io/POD or seccomp-profile.kubernetes.cri-o.io/<CONTAINER>. CRI-O will only respect the annotation if the runtime is configured to allow it, as well as for workloads running as Unconfined. All other workloads will still use the value from the securityContext with a higher priority. 
The annotations alone will not help much with the distribution of the profiles, but the way they can be referenced will! For example, you can now specify seccomp profiles like regular container images by using OCI artifacts: apiVersion: v1 kind: Pod metadata: name: pod annotations: seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2 spec: … The image quay.io/crio/seccomp:v2 contains a seccomp.json file, which contains the actual profile content. Tools like ORAS or Skopeo can be used to inspect the contents of the image: oras pull quay.io/crio/seccomp:v2 Downloading 92d8ebfa89aa seccomp.json Downloaded 92d8ebfa89aa seccomp.json Pulled [registry] quay.io/crio/seccomp:v2 Digest: sha256:f0205dac8a24394d9ddf4e48c7ac201ca7dcfea4c554f7ca27777a7f8c43ec1b jq . seccomp.json | head { "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 38, "defaultErrno": "ENOSYS", "archMap": [ { "architecture": "SCMP_ARCH_X86_64", "subArchitectures": [ "SCMP_ARCH_X86", "SCMP_ARCH_X32" # Inspect the plain manifest of the image skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq . { "schemaVersion": 2, "mediaType": "application/vnd.oci.image.manifest.v1+json", "config": { "mediaType": "application/vnd.cncf.seccomp-profile.config.v1+json", "digest": "sha256:ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356", "size": 3, }, "layers": [ { "mediaType": "application/vnd.oci.image.layer.v1.tar", "digest": "sha256:92d8ebfa89aa6dd752c6443c27e412df1b568d62b4af129494d7364802b2d476", "size": 18853, "annotations": { "org.opencontainers.image.title": "seccomp.json" }, }, ], "annotations": { "org.opencontainers.image.created": "2024-02-26T09:03:30Z" }, } The image manifest contains a reference to a specific required config media type (application/vnd.cncf.seccomp-profile.config.v1+json) and a single layer (application/vnd.oci.image.layer.v1.tar) pointing to the seccomp.json file. But now, let's give that new feature a try! Using the annotation for a specific container or whole pod CRI-O needs to be configured adequately before it can utilize the annotation. To do this, add the annotation to the allowed_annotations array for the runtime. This can be done by using a drop-in configuration /etc/crio/crio.conf.d/10-crun.conf like this: [crio.runtime] default_runtime = "crun" [crio.runtime.runtimes.crun] allowed_annotations = [ "seccomp-profile.kubernetes.cri-o.io", ] Now, let's run CRI-O from the latest main commit. This can be done by either building it from source, using the static binary bundles or the prerelease packages. To demonstrate this, I ran the crio binary from my command line using a single node Kubernetes cluster via local-up-cluster.sh. 
Now that the cluster is up and running, let's try a pod without the annotation running as seccomp Unconfined: cat pod.yaml apiVersion: v1 kind: Pod metadata: name: pod spec: containers: - name: container image: nginx:1.25.3 securityContext: seccompProfile: type: Unconfined kubectl apply -f pod.yaml The workload is up and running: kubectl get pods NAME READY STATUS RESTARTS AGE pod 1/1 Running 0 15s And no seccomp profile got applied if I inspect the container using crictl: export CONTAINER_ID=$(sudo crictl ps --name container -q) sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp null Now, let's modify the pod to apply the profile quay.io/crio/seccomp:v2 to the container: apiVersion: v1 kind: Pod metadata: name: pod annotations: seccomp-profile.kubernetes.cri-o.io/container: quay.io/crio/seccomp:v2 spec: containers: - name: container image: nginx:1.25.3 I have to delete and recreate the Pod, because only recreation will apply a new seccomp profile: kubectl delete pod/pod pod "pod" deleted kubectl apply -f pod.yaml pod/pod created The CRI-O logs will now indicate that the runtime pulled the artifact: WARN[…] Allowed annotations are specified for workload [seccomp-profile.kubernetes.cri-o.io] INFO[…] Found container specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io/container=quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Retrieved OCI artifact seccomp profile of len: 18853 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer And the container is finally using the profile: export CONTAINER_ID=$(sudo crictl ps --name container -q) sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp | head { "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 38, "architectures": [ "SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32" ], "syscalls": [ { The same would work for every container in the pod, if users replace the /container suffix with the reserved name /POD, for example: apiVersion: v1 kind: Pod metadata: name: pod annotations: seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2 spec: containers: - name: container image: nginx:1.25.3 Using the annotation for a container image While specifying seccomp profiles as OCI artifacts on certain workloads is a cool feature, the majority of end users would like to link seccomp profiles to published container images. This can be done by using a container image annotation; instead of being applied to a Kubernetes Pod, the annotation is some metadata applied at the container image itself. For example, Podman can be used to add the image annotation directly during image build: podman build \ --annotation seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 \ -t quay.io/crio/nginx-seccomp:v2 . 
The pushed image then contains the annotation: skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2 | jq '.annotations."seccomp-profile.kubernetes.cri-o.io"' "quay.io/crio/seccomp:v2" If I now use that image in an CRI-O test pod definition: apiVersion: v1 kind: Pod metadata: name: pod # no Pod annotations set spec: containers: - name: container image: quay.io/crio/nginx-seccomp:v2 Then the CRI-O logs will indicate that the image annotation got evaluated and the profile got applied: kubectl delete pod/pod pod "pod" deleted kubectl apply -f pod.yaml pod/pod created INFO[…] Found image specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Retrieved OCI artifact seccomp profile of len: 18853 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Created container 116a316cd9a11fe861dd04c43b94f45046d1ff37e2ed05a4e4194fcaab29ee63: default/pod/container id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer export CONTAINER_ID=$(sudo crictl ps --name container -q) sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp | head { "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 38, "architectures": [ "SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32" ], "syscalls": [ { For container images, the annotation seccomp-profile.kubernetes.cri-o.io will be treated in the same way as seccomp-profile.kubernetes.cri-o.io/POD and applies to the whole pod. In addition to that, the whole feature also works when using the container specific annotation on an image, for example if a container is named container1: skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2-container | jq '.annotations."seccomp-profile.kubernetes.cri-o.io/container1"' "quay.io/crio/seccomp:v2" The cool thing about this whole feature is that users can now create seccomp profiles for specific container images and store them side by side in the same registry. Linking the images to the profiles provides a great flexibility to maintain them over the whole application's life cycle. Pushing profiles using ORAS The actual creation of the OCI object that contains a seccomp profile requires a bit more work when using ORAS. I have the hope that tools like Podman will simplify the overall process in the future. Right now, the container registry needs to be OCI compatible, which is also the case for Quay.io. CRI-O expects the seccomp profile object to have a container image media type (application/vnd.cncf.seccomp-profile.config.v1+json), while ORAS uses application/vnd.oci.empty.v1+json per default. To achieve all of that, the following commands can be executed: echo "{}" > config.json oras push \ --config config.json:application/vnd.cncf.seccomp-profile.config.v1+json \ quay.io/crio/seccomp:v2 seccomp.json The resulting image contains the mediaType that CRI-O expects. ORAS pushes a single layer seccomp.json to the registry. The name of the profile does not matter much. CRI-O will pick the first layer and check if that can act as a seccomp profile. Future work CRI-O internally manages the OCI artifacts like regular files. This provides the benefit of moving them around, removing them if not used any more or having any other data available than seccomp profiles. 
This enables future enhancements in CRI-O on top of OCI artifacts, but also allows thinking about stacking seccomp profiles as part of having multiple layers in an OCI artifact. The limitation that it only works for Unconfined workloads for v1.30.x releases is something different CRI-O would like to address in the future. Simplifying the overall user experience by not compromising security seems to be the key for a successful future of seccomp in container workloads. The CRI-O maintainers will be happy to listen to any feedback or suggestions on the new feature! Thank you for reading this blog post, feel free to reach out to the maintainers via the Kubernetes Slack channel #crio or create an issue in the GitHub repository. View the full article
-
iPhone and iPad owners may want to update to iOS 17.4 and iPadOS 17.4 in the near future, as the updates address two security vulnerabilities that may have been exploited to gain access to user devices. In the security support document for the updates, Apple says that it "is aware of a report" that RTKit and kernel vulnerabilities may have been exploited by bad actors. Impact: An attacker with arbitrary kernel read and write capability may be able to bypass kernel memory protections. Apple is aware of a report that this issue may have been exploited. Apple fixed the memory corruption issue with improved validation to patch the security hole. iOS 17.4 and iPadOS 17.4 also address an Accessibility vulnerability and an issue with Safari Private Browsing that could allow locked tabs to be briefly visible while switching tab groups. The software updates were released this morning and are available on eligible iPhones and iPads by going to Settings > General > Software Update. Related Roundups: iOS 17, iPadOS 17. Related Forums: iOS 17, iPadOS 17. This article, "Make Sure to Update: iOS 17.4 and iPadOS 17.4 Fix Two Major Security Vulnerabilities" first appeared on MacRumors.com Discuss this article in our forums View the full article
-
There are several steps involved in implementing a data pipeline that integrates Apache Kafka with AWS RDS and uses AWS Lambda and API Gateway to feed data into a web application. Here is a high-level overview of how to architect this solution: 1. Set Up Apache Kafka Apache Kafka is a distributed streaming platform that is capable of handling trillions of events a day. To set up Kafka, you can either install it on an EC2 instance or use Amazon Managed Streaming for Kafka (Amazon MSK), which is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data. View the full article
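As a minimal sketch of the first step described above, a Python producer publishing JSON events to a Kafka topic could look like the following. The broker address, topic name, and event shape are assumptions, and the downstream RDS, Lambda, and API Gateway pieces are covered in the full article.
import json
from datetime import datetime, timezone
from kafka import KafkaProducer  # from the kafka-python package
# Point at your Kafka brokers; for Amazon MSK this would be the cluster's bootstrap string.
producer = KafkaProducer(
    bootstrap_servers=["broker-1.example.internal:9092"],
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)
# Publish an example event that a downstream consumer could later persist into RDS.
event = {
    "order_id": "o-1001",
    "amount": 42.50,
    "created_at": datetime.now(timezone.utc).isoformat(),
}
producer.send("orders", value=event)
producer.flush()  # block until the message is actually delivered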
-
Today, AWS announces the expansion in the log coverage support for Amazon Security Lake, which includes Amazon Elastic Kubernetes Service (Amazon EKS) audit logs. This enhancement allows you to automatically centralize and normalize your Amazon EKS audit logs in Security Lake, making it easier to monitor and investigate potential suspicious activities in your Amazon EKS clusters. View the full article
-
Docker Desktop 4.28 introduces updates to file-sharing controls, focusing on security and administrative ease. Responding to feedback from our business users, this update brings refined file-sharing capabilities and path allow-listing, aiming to simplify management and enhance security for IT administrators and users alike. Along with our investments in bringing access to cloud resources within the local Docker Desktop experience with Docker Build Cloud Builds view, this release provides a more efficient and flexible platform for development teams. Introducing enhanced file-sharing controls in Docker Desktop Business As we continue to innovate and elevate the Docker experience for our business customers, we’re thrilled to unveil significant upgrades to the Docker Desktop’s Hardened Desktop feature. Recognizing the importance of administrative control over Docker Desktop settings, we’ve listened to your feedback and are introducing enhancements prioritizing security and ease of use. For IT administrators and non-admin users, Docker now offers the much-requested capability to specify and manage file-sharing options directly via Settings Management (Figure 1). This includes: Selective file sharing: Choose your preferred file-sharing implementation directly from Settings > General, where you can choose between VirtioFS, gRPC FUSE, or osxfs. VirtioFS is only available for macOS versions 12.5 and above and is turned on by default. Path allow-listing: Precisely control which paths users can share files from, enhancing security and compliance across your organization. Figure 1: Display of Docker Desktop settings enhanced file-sharing settings. We’ve also reimagined the Settings > Resources > File Sharing interface to enhance your interaction with Docker Desktop (Figure 2). You’ll notice: Clearer error messaging: Quickly understand and rectify issues with enhanced error messages. Intuitive action buttons: Experience a smoother workflow with redesigned action buttons, making your Docker Desktop interactions as straightforward as possible. Figure 2: Displaying settings management in Docker Desktop to notify business subscribers of their access rights. These enhancements are not just about improving current functionalities; they’re about unlocking new possibilities for your Docker experience. From increased security controls to a more navigable interface, every update is designed with your efficiency in mind. Refining development with Docker Desktop’s Builds view update Docker Desktop’s previous update introduced Docker Build Cloud integration, aimed at reducing build times and improving build management. In this release, we’re landing incremental updates that refine the Builds view, making it easier and faster to manage your builds. New in Docker Desktop 4.28: Dedicated tabs: Separates active from completed builds for better organization (Figure 3). Build insights: Displays build duration and cache steps, offering more clarity on the build process. Reliability fixes: Resolves issues with updates for a more consistent experience. UI improvements: Updates the empty state view for a clearer dashboard experience (Figure 4). These updates are designed to streamline the build management process within Docker Desktop, leveraging Docker Build Cloud for more efficient builds. Figure 3: Dedicated tabs for Build history vs. Active builds to allow more space for inspecting your builds. Figure 4: Updated view supporting empty state — no Active builds. 
To explore how Docker Desktop and Docker Build Cloud can optimize your development workflow, read our Docker Build Cloud blog post. Experience the latest Builds view update to further enrich your local, hybrid, and cloud-native development journey.
These Docker Desktop updates support improved platform security and a better user experience. By introducing more granular file-sharing controls, we aim to give developers a more straightforward administration experience and a more secure environment. As we move forward, we remain dedicated to refining Docker Desktop to meet the evolving needs of our users and organizations, enhancing their development workflows and their agility to innovate.
Join the conversation and make your mark
Dive into the dialogue and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community's and customers' needs.
Learn more
Authenticate and update to receive the newest Docker Desktop features per your subscription level.
New to Docker? Create an account.
Read our latest blog post on synchronized file shares.
Read about what rolled out in Docker Desktop 4.27, including synchronized file shares, Docker Init GA, a private marketplace for extensions, Moby 25, support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta.
Learn about Docker Build Cloud.
Subscribe to the Docker Newsletter.
View the full article
Tagged with: docker, docker desktop (and 3 more)
-
Docker images play a pivotal role in containerized application deployment. They encapsulate your application and its dependencies, ensuring consistent and efficient deployment across various environments. However, security is a paramount concern when working with Docker images. In this guide, we will explore security best practices for Docker images to help you create and maintain secure images for your containerized applications.
1. Introduction
The Significance of Docker Images
Docker images are at the core of containerization, offering a standardized approach to packaging applications and their dependencies. They allow developers to work in controlled environments and empower DevOps teams to deploy applications consistently across various platforms. However, the advantages of Docker images come with security challenges, making it essential to adopt best practices to protect your containerized applications. View the full article
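As a small, hedged example of turning such practices into checks, the sketch below uses the docker Python SDK to flag two common image issues: running as root and relying on a mutable latest tag. The image name is a placeholder, and these two checks are illustrative rather than a complete security audit.

```python
# Illustrative sketch: flag two common Docker image security issues with the
# docker Python SDK -- running as root and relying on a mutable "latest" tag.
# The image reference is a placeholder; this is not a full security audit.
import docker

client = docker.from_env()
image = client.images.get("example-app:latest")  # placeholder image reference

config = image.attrs.get("Config", {})
user = (config.get("User") or "").strip()

if user in ("", "root", "0"):
    print("WARNING: image runs as root; prefer a dedicated non-root USER.")
else:
    print(f"OK: image runs as non-root user '{user}'.")

if any(tag.endswith(":latest") for tag in image.tags):
    print("WARNING: image tagged 'latest'; pin versions or digests for reproducible deploys.")
```

Checks like these are easy to wire into CI alongside a dedicated scanner, so that images violating baseline policies never reach a registry.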
Tagged with: security, docker images (and 1 more)
-
PlayStation account owners will soon be able to start using a passkey as an alternative to a password when logging into a PlayStation account on the web, in an app, or on a PlayStation device. Passkey integration is set to be introduced at some point today, and users will be able to log in and authenticate their accounts with Face ID, Touch ID, or a device passcode on an iPhone. Passkeys are considered more convenient and secure than a traditional password, with sign-ins streamlined through biometric authentication. Passkeys are resistant to online attacks such as phishing because there is no password to steal and no one-time SMS code that can be intercepted. Apple has supported passkeys since 2022, and they are available on iOS 16 and later, iPadOS 16 and later, and macOS Ventura and later. Many companies have implemented support for passkeys, including Twitter, Google, PayPal, Best Buy, Microsoft, and eBay. This article, "PlayStation Adds Support for Passkeys as Password Alternative," first appeared on MacRumors.com. View the full article
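For readers curious why passkeys resist phishing, here is a deliberately simplified Python sketch of the public-key challenge-response idea behind WebAuthn: the private key never leaves the device, and the server stores only a public key and verifies a signature over a fresh challenge. Real passkey deployments use platform authenticators and the full WebAuthn protocol, not toy code like this.

```python
# Simplified illustration of the public-key challenge-response idea behind
# passkeys (WebAuthn). The private key stays on the "device"; the "server"
# stores only the public key, so there is no shared secret to phish.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device creates a keypair and shares only the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random, single-use challenge...
challenge = os.urandom(32)

# ...the device signs it locally (gated by Face ID, Touch ID, or a passcode)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature against the stored public key.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Challenge verified: no password or SMS code was ever transmitted.")
```

Because the challenge is fresh each time and only a signature travels over the wire, a phishing site has nothing reusable to capture.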
-
Forum Statistics
67.4k Total Topics
65.3k Total Posts