Showing results for tags 'security'.

  1. Photo by Gabriel Heinzer on Unsplash

We're excited about the upcoming Ubuntu 24.04 LTS release, Noble Numbat. Like all Ubuntu releases, Ubuntu 24.04 LTS comes with five years of free security maintenance for the main repository. That coverage can be extended by a further five years, and expanded to include the universe repository, via Ubuntu Pro. Organisations that want to keep their systems secure without a major upgrade can also purchase the Legacy Support add-on to extend coverage beyond those ten years. Combined with the enhanced security coverage of Ubuntu Pro and Legacy Support, Ubuntu 24.04 LTS provides a secure foundation on which to develop and deploy your applications and services in an increasingly risky environment. In this blog post, we look at some of the security enhancements and features in Noble Numbat, building on those available in Ubuntu 22.04 LTS.

Unprivileged user namespace restrictions

Unprivileged user namespaces are a widely used Linux kernel feature that provides additional security isolation for applications, and they are often employed as part of a sandbox environment. They allow an application to gain additional permissions within a constrained environment, so that a more trusted part of the application can use those permissions to create a tighter sandbox in which less trusted parts are then executed. A common use case is the sandboxing employed by modern web browsers, where the (trusted) application itself sets up the sandbox in which it executes the untrusted web content. However, by granting these additional permissions, unprivileged user namespaces also expose extra attack surface within the Linux kernel, and there is a long history of their (ab)use to exploit various kernel vulnerabilities.

The most recent interim release of Ubuntu, 23.10, introduced the ability to restrict the use of unprivileged user namespaces to only those applications which legitimately require such access. In Ubuntu 24.04 LTS, this feature has been improved to cover additional applications, both within Ubuntu and from third parties, and to provide better default semantics. In Ubuntu 24.04 LTS, all applications may create an unprivileged user namespace, but access to any additional permissions within that namespace is denied by default. This allows more applications to handle the default restriction gracefully, whilst still protecting against the abuse of user namespaces to reach additional attack surfaces within the Linux kernel.
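
To see how this restriction behaves on a particular machine, the short Python sketch below forks a child that tries to create an unprivileged user namespace and reports the outcome. It is a minimal illustration only: it assumes Python 3.12 or later (for os.unshare and os.CLONE_NEWUSER), and the sysctl path it reads is the Ubuntu-specific AppArmor knob as documented for 23.10/24.04, so treat that path as an assumption rather than a guaranteed interface.

```python
# Minimal sketch: probe unprivileged user namespace behaviour on this host.
# Assumes Linux and Python 3.12+ (os.unshare / os.CLONE_NEWUSER); the sysctl
# path below is Ubuntu's AppArmor restriction knob and is an assumption here.
import os

SYSCTL = "/proc/sys/kernel/apparmor_restrict_unprivileged_userns"

def read_sysctl(path: str) -> str:
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "unavailable"

if __name__ == "__main__":
    print("apparmor_restrict_unprivileged_userns:", read_sysctl(SYSCTL))
    pid = os.fork()
    if pid == 0:
        try:
            # Try to enter a new, unprivileged user namespace.
            os.unshare(os.CLONE_NEWUSER)
            print("child: user namespace created "
                  "(additional capabilities inside it may still be denied)")
        except OSError as exc:
            print(f"child: unshare(CLONE_NEWUSER) denied: {exc}")
        os._exit(0)
    os.waitpid(pid, 0)
```
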
Binary hardening

Modern toolchains and compilers have gained many enhancements that allow them to create binaries with various built-in defensive mechanisms. These include the ability to detect and avoid certain buffer overflow conditions, as well as support for modern processor features such as branch protection, which provides additional defence against code-reuse attacks.

The GNU C library, the cornerstone of most applications on Ubuntu, provides runtime detection of, and protection against, certain types of buffer overflow, as well as certain dangerous string-handling operations, via the _FORTIFY_SOURCE macro. _FORTIFY_SOURCE can be set to levels from 0 to 3, each providing increasing security checks. Recent Ubuntu releases have used FORTIFY_SOURCE=2, which provided a solid foundation by adding checks on string-handling functions such as sprintf() and strcpy() to detect possible buffer overflows, as well as format-string vulnerabilities via the %n format specifier in various cases. Ubuntu 24.04 LTS increases this to FORTIFY_SOURCE=3, which greatly enhances the detection of dangerous uses of a number of other common memory management functions, including memmove(), memcpy(), snprintf(), vsnprintf(), strtok() and strncat(). This level is enabled by default in the gcc compiler in Ubuntu 24.04 LTS, so every package in the Ubuntu archive compiled with gcc, and any application compiled with gcc on Ubuntu 24.04 LTS, receives this additional protection.

The Armv8-A hardware architecture (used by the "arm64" architecture on Ubuntu) provides hardware-enforced pointer authentication and branch target identification. Pointer authentication detects malicious stack modifications that aim to redirect pointers stored on the stack to attacker-controlled locations, whilst branch target identification tracks certain indirect branch instructions and the locations they may validly target. By tracking valid targets, the processor can detect jump-oriented programming attacks, which aim to chain existing indirect branches together to reach other gadgets within the code. The gcc compiler supports these features via the -mbranch-protection option, and in Ubuntu 24.04 LTS the dpkg package now enables -mbranch-protection=standard, so that all packages within the Ubuntu archive take advantage of these hardware features where available.

AppArmor 4

The unprivileged user namespace restrictions described above are backed by the AppArmor mandatory access control system. AppArmor allows a system administrator to implement the principle of least authority by defining which resources an application may access and denying everything else. It consists of a userspace package, used to define the security profiles for applications and the system, and the AppArmor Linux Security Module within the Linux kernel, which enforces those policies.

Ubuntu 24.04 LTS includes the latest AppArmor 4.0 release, which supports many new features, such as specifying allowed network addresses and ports within a security policy (rather than just high-level protocols), and various conditionals that allow more complex policy to be expressed. An exciting new development in AppArmor 4 on Ubuntu 24.04 LTS is the ability to defer access-control decisions to a trusted userspace program. This allows far more advanced decision making, taking into account the greater context available in userspace, or even interacting with the user or system administrator in real time. For example, the experimental snapd prompting feature uses this work to give users direct control over which files a snap can access within their home directory. Finally, within the kernel, AppArmor has gained the ability to mediate access to user namespaces and to the io_uring subsystem, both of which have historically exposed additional kernel attack surface to malicious applications.
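
From an application's point of view, AppArmor confinement is visible through procfs. The sketch below simply reports the confinement label of the current process; the two paths it tries (the newer /proc/self/attr/apparmor/current and the legacy /proc/self/attr/current) are assumptions based on recent Ubuntu kernels, and an unconfined process typically reports the label "unconfined".

```python
# Minimal sketch: report the AppArmor confinement label of the current process.
# The procfs paths are assumptions based on recent Ubuntu kernels.
from pathlib import Path

CANDIDATES = (
    "/proc/self/attr/apparmor/current",  # newer, AppArmor-specific location
    "/proc/self/attr/current",           # legacy LSM location
)

def apparmor_label() -> str | None:
    for path in CANDIDATES:
        try:
            # Labels look like "unconfined" or "/usr/sbin/cupsd (enforce)".
            return Path(path).read_text().strip().rstrip("\x00")
        except OSError:
            continue
    return None

if __name__ == "__main__":
    label = apparmor_label()
    print("AppArmor label:", label or "not available (AppArmor may be disabled)")
```
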
Disabling of old TLS versions

The use of cryptography for private communications is the backbone of the modern internet. The Transport Layer Security (TLS) protocol has provided confidentiality and integrity for internet communications since it was first standardised in 1999 as TLS 1.0. The protocol has undergone various revisions since then to add security features and address issues inherent in earlier versions of the standard. Given the wide range of TLS versions and the options each supports, modern systems auto-negotiate an appropriate combination of protocol version and parameters when establishing a secure connection. In Ubuntu 24.04 LTS, TLS 1.0, TLS 1.1 and DTLS 1.0 are forcefully disabled (for any application that uses the underlying openssl or gnutls libraries), ensuring that users are not exposed to possible TLS downgrade attacks which could expose their sensitive information.
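
As an illustration of what applications can do on their side, the Python sketch below uses the standard-library ssl module to pin a TLS 1.2 floor and report the negotiated protocol; on Ubuntu 24.04 LTS the underlying libraries refuse TLS 1.0/1.1 regardless of such application settings. The host name is only an example.

```python
# Minimal sketch: open a TLS client connection with an explicit minimum
# protocol version and report what was negotiated. On Ubuntu 24.04 LTS the
# system TLS libraries already reject TLS 1.0/1.1 for applications using them.
import socket
import ssl

def connect_tls(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    # Pin the floor to TLS 1.2 at the application level as well.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

if __name__ == "__main__":
    print("negotiated:", connect_tls("ubuntu.com"))  # example host
```
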
Upstream kernel security features

Linux kernel v5.15 formed the basis of the kernel in the previous Ubuntu 22.04 LTS release, and provided a number of kernel security features including core scheduling, kernel stack randomisation and unprivileged BPF restrictions, to name a few. Since then, the upstream Linux kernel community has been busy adding further security features. Ubuntu 24.04 LTS includes the v6.8 Linux kernel, which provides the following additional protections:

Intel shadow stack support

Modern Intel CPUs support an additional hardware feature aimed at preventing certain types of return-oriented programming (ROP) and other attacks that maliciously corrupt the call stack. A shadow stack is a hardware-enforced copy of the stack return address that cannot be directly modified by the application. When the processor returns from a function call, the return address on the stack is compared against the value on the shadow stack; if the two differ, the process is terminated to prevent a possible ROP attack. Whilst compiler support for this feature has been enabled for userspace packages since Ubuntu 19.10, it could not be used until it was also supported by the kernel and the C library. Ubuntu 24.04 LTS includes that support, so shadow stacks can be enabled when desired by setting the GLIBC_TUNABLES=glibc.cpu.hwcaps=SHSTK environment variable.

Secure virtualisation with AMD SEV-SNP and Intel TDX

Confidential computing represents a fundamental departure from the traditional threat model, in which vulnerabilities in the complex codebase of privileged system software like the operating system, hypervisor and firmware pose ongoing risks to the confidentiality and integrity of both code and data, and in which unauthorised access by a malicious cloud administrator could jeopardise the security of your virtual machine (VM) and its environment. Building on the innovation of Trusted Execution Environments at the silicon level, Ubuntu Confidential VMs aim to restore your control over the security assurances of your VMs.

For the x86 architecture, both AMD and Intel processors provide hardware features (AMD SEV-SNP and Intel TDX respectively) to support running virtual machines with memory encryption and integrity protection, ensuring that the data within a virtual machine is inaccessible to the hypervisor and hence to the infrastructure operator. Support for using these features as a guest virtual machine was introduced in upstream Linux kernel version 5.19. Thanks to Ubuntu Confidential VMs, a user can make use of compute resources provided by a third party whilst maintaining the integrity and confidentiality of their data through memory encryption and other features. On the public cloud, Ubuntu offers the widest portfolio of confidential VMs, building on these hardware features with offerings available across Microsoft Azure, Google Cloud and Amazon Web Services.

For enterprise customers seeking to harness confidential computing within their private data centres, a fully enabled software stack is essential. This stack encompasses both the guest side (kernel and OVMF) and the host side (kernel-KVM, QEMU and libvirt). Currently, the host-side patches are not yet upstream. To address this, Canonical and Intel have forged a strategic collaboration to provide Ubuntu customers with an Intel-optimised TDX Ubuntu build. This offering includes all the necessary guest and host patches, even those not yet merged upstream, starting with Ubuntu 23.10 and extending into 24.04 and beyond. The complete TDX software stack is accessible through this github repository. This collaborative effort enables customers to promptly leverage the security assurances of Intel TDX, and it narrows the gap between silicon innovation and software readiness, a gap that grows as Intel continues to push the boundaries of hardware innovation with 5th Gen Intel Xeon Scalable processors and beyond.

Strict compile-time bounds checking

Similar to the hardening of the libraries and applications distributed in Ubuntu, the Linux kernel itself has gained enhanced support for detecting possible buffer overflows at compile time via improved bounds checking of the memcpy() family of functions. Within the kernel, the FORTIFY_SOURCE macro enables checks in memory management functions such as memcpy() and memset(), verifying that the destination object is large enough to hold the specified amount of memory and aborting the compilation if not. This catches many trivial memory management issues, but previously it could not properly handle more complex cases, such as an object embedded within a larger object, which is quite a common pattern within the kernel. The changes introduced in the upstream 5.18 kernel to enumerate and fix such cases greatly improve this feature: the compiler can now detect and enforce stricter checks when performing memory operations on sub-objects, ensuring that other members of the enclosing object are not inadvertently overwritten and so avoiding an entire class of possible buffer overflow vulnerabilities within the kernel.

Wrapping up

Overall, the vast range of security improvements in Ubuntu 24.04 LTS builds on the strong foundation of previous Ubuntu releases, making it the most secure release to date. Features within the kernel, in userspace and across the distribution as a whole combine to address entire vulnerability classes and attack surfaces. With up to 12 years of support, Ubuntu 24.04 LTS provides the best and most secure foundation to develop and deploy Linux services and applications. Expanded Security Maintenance, kernel livepatching and additional services are all provided to Ubuntu Pro subscribers to further enhance the security of their Ubuntu deployments. View the full article

  2. Kubernetes has transformed container orchestration, providing an effective framework for delivering and managing applications at scale. However, efficient storage management is essential to guarantee the dependability, security, and efficiency of your Kubernetes clusters. Benefits such as preventing data loss, meeting regulatory compliance requirements, and maintaining operational continuity underscore the importance of secure, dependable storage. This post examines the top 10 best practices for Kubernetes storage, emphasizing encryption, access control, and safeguarding storage components.

Kubernetes Storage

Kubernetes storage is essential to contemporary cloud-native setups because it makes data persistence in containerized apps more effective. It provides a dependable and scalable storage resource management system that guarantees data permanence through container migrations and restarts. Among other capabilities, Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) give Kubernetes a versatile abstraction layer for managing storage. Storage classes further improve flexibility by providing dynamic provisioning of storage volumes tailored to particular workload requirements. By utilizing Kubernetes storage capabilities, organizations can build and manage stateful applications with agility, scalability, and resilience across diverse computing environments.

1. Data Encryption

Sensitive information kept in Kubernetes clusters must be protected with encryption. Use mechanisms such as Kubernetes Secrets to store sensitive information like SSH keys, API tokens, and passwords, and enable encryption both at rest and in transit to protect data while it is stored and while it moves between nodes.

2. Use Secrets Management Tools

Steer clear of hardcoding private information directly into Kubernetes manifests. Instead, use dedicated secrets management solutions such as HashiCorp Vault, or Kubernetes Secrets, to securely maintain and distribute secrets throughout your cluster. This ensures that private information is encrypted and available only to approved users and applications.

3. Implement Role-Based Access Control (RBAC)

RBAC allows you to enforce fine-grained access controls on your Kubernetes clusters. Define roles and permissions that limit access to storage resources following the principle of least privilege. This prevents unauthorized users or applications from accessing or changing crucial storage components, lowering the risk of data breaches and unauthorized access. The sketch after this section shows how a Secret and a least-privilege Role might be created programmatically.
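
As a concrete illustration of practices 2 and 3, the hedged Python sketch below uses the official kubernetes client library to create a Secret and a namespaced Role/RoleBinding granting one service account read-only access to that Secret and to PersistentVolumeClaims. The namespace, resource names, and service account are hypothetical placeholders, and the snippet assumes cluster credentials come from your local kubeconfig.

```python
# Sketch only: create a Secret plus a least-privilege Role/RoleBinding for
# storage-related resources. Names, namespace and service account are
# illustrative placeholders; requires the `kubernetes` Python client and a
# kubeconfig with sufficient rights.
from kubernetes import client, config

NAMESPACE = "storage-demo"          # hypothetical namespace
SERVICE_ACCOUNT = "backup-agent"    # hypothetical workload identity

def main() -> None:
    config.load_kube_config()       # or config.load_incluster_config() in a pod
    core = client.CoreV1Api()
    rbac = client.RbacAuthorizationV1Api()

    # Practice 2: keep credentials in a Secret instead of hardcoding them in manifests.
    core.create_namespaced_secret(
        namespace=NAMESPACE,
        body=client.V1Secret(
            metadata=client.V1ObjectMeta(name="db-credentials"),
            string_data={"username": "app", "password": "change-me"},
            type="Opaque",
        ),
    )

    # Practice 3: a Role that only allows reading that Secret and listing PVCs.
    rbac.create_namespaced_role(
        namespace=NAMESPACE,
        body=client.V1Role(
            metadata=client.V1ObjectMeta(name="storage-reader"),
            rules=[
                client.V1PolicyRule(
                    api_groups=[""],
                    resources=["secrets"],
                    resource_names=["db-credentials"],
                    verbs=["get"],
                ),
                client.V1PolicyRule(
                    api_groups=[""],
                    resources=["persistentvolumeclaims"],
                    verbs=["get", "list", "watch"],
                ),
            ],
        ),
    )

    # Bind the Role to the workload's service account only.
    rbac.create_namespaced_role_binding(
        namespace=NAMESPACE,
        body=client.V1RoleBinding(
            metadata=client.V1ObjectMeta(name="storage-reader-binding"),
            role_ref=client.V1RoleRef(
                api_group="rbac.authorization.k8s.io",
                kind="Role",
                name="storage-reader",
            ),
            subjects=[
                # Named V1Subject in older releases of the Python client.
                client.RbacV1Subject(
                    kind="ServiceAccount",
                    name=SERVICE_ACCOUNT,
                    namespace=NAMESPACE,
                ),
            ],
        ),
    )

if __name__ == "__main__":
    main()
```
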
4. Secure Persistent Volumes (PVs) and Persistent Volume Claims (PVCs)

Ensure that persistent volumes and claims are adequately secured against tampering or unwanted access. Put security rules in place to limit access to particular namespaces or users, and turn on encryption for data stored on persistent volumes. Audit and monitor PVs and PVCs regularly to identify and address security flaws or unwanted access attempts.

5. Enable Network Policies

Use Kubernetes network policies to manage network traffic between pods and storage resources. Define rules restricting communication to and from storage components so that only authorized pods and services may access storage volumes and endpoints. This prevents unauthorized network access and reduces the possibility of data exfiltration and network-based attacks.

6. Enable Role-Based Volume Provisioning

Utilize Kubernetes' dynamic volume provisioning features to automate storage volume creation and management. Use role-based volume provisioning to limit users' ability to create or delete volumes based on their assigned roles and permissions. This ensures the effective and safe allocation of storage resources and helps prevent resource abuse.

7. Utilize Pod Security Policies

Implement pod security policies to specify and enforce restrictions on pods' access to storage resources. Define policies that manage pod privileges, host resource access, and storage volume interactions. Stringent policies reduce the possibility of privilege escalation, container escapes, and illegal access to storage components. (Note that PodSecurityPolicy was removed in Kubernetes 1.25; on current clusters, apply equivalent restrictions using Pod Security Admission and the Pod Security Standards.)

8. Regularly Update and Patch Kubernetes Components

Stay ahead of security flaws by regularly patching and updating Kubernetes components, including storage drivers and plugins. Subscribe to security advisories and follow best practices for Kubernetes cluster management to keep your storage infrastructure protected against new attacks and vulnerabilities.

9. Monitor and Audit Storage Activity

Put extensive logging, monitoring, and auditing procedures in place to keep tabs on storage activity in your Kubernetes clusters. Monitor access logs, events, and metrics on storage components to proactively identify security incidents or anomalies. Use centralized logging and monitoring systems for visibility into storage activity across your cluster.

10. Conduct Regular Security Audits and Penetration Testing

Conduct comprehensive security audits and penetration tests regularly to evaluate the security posture of your Kubernetes storage system. Find and fix security holes, misconfigurations, and deployment flaws before attackers can exploit them. Work with security professionals and use automated security tooling to audit your Kubernetes clusters thoroughly.

Considerations

Before putting these Kubernetes storage recommendations into practice, take the following into account:

  • Evaluate security requirements: match storage options to compliance and corporate security requirements.
  • Assess performance impact: understand the effect that encryption, access controls, and security policies may have on resource usage and application performance.
  • Identify roles and responsibilities: clearly define who is responsible for managing storage components in Kubernetes clusters.
  • Plan for scalability: account for scalability needs and the possible maintenance costs of the security measures you adopt.
  • Prioritize monitoring and upgrades: place a strong emphasis on continual monitoring, audits, and upgrades so that security measures remain effective over time.

Effective storage management is critical for ensuring the security, reliability, and performance of Kubernetes clusters. By following these ten best practices for Kubernetes storage, including encryption, access control, and securing storage components, you can strengthen the security posture of your Kubernetes environment and mitigate the risk of data breaches, unauthorized access, and other security threats. Stay proactive in implementing security measures and remain vigilant against emerging threats to safeguard your Kubernetes storage infrastructure effectively. The post Mastering Kubernetes Storage: 10 Best Practices for Security and Efficiency appeared first on Amazic. View the full article

  3. Google Cloud Next made a big splash in Las Vegas this week! From our opening keynote showcasing incredible customer momentum to exciting product announcements, we covered how AI is transforming the way that companies work. You can catch up on the highlights in our 14 minute keynote recap! Developers were front and center at our Developer keynote and in our buzzing Innovators Hive on the Expo floor (which was triple the size this year!). Our nearly 400 partner sponsors were also deeply integrated throughout Next, bringing energy from the show floor to sessions and evening events throughout the week. Last year, we talked about the exciting possibilities of generative AI, and this year it was great to showcase how customers are now using it to transform the way they work. At Next ‘24, we featured 300+ customer and partner AI stories, 500+ breakout sessions, hands-on demos, interactive training sessions, and so much more. It was a jam-packed week, so we’ve put together a summary of our announcements which highlight how we’re delivering the new way to cloud. Read on for a complete list of the 218 (yes, you read that right) announcements from Next ‘24: Gemini for Google Cloud We shared how Google's Gemini family of models will help teams accomplish more in the cloud, including: 1. Gemini for Google Cloud, a new generation of AI assistants for developers, Google Cloud services, and applications. 2. Gemini Code Assist, which is the evolution of the Duet AI for Developers. 3. Gemini Cloud Assist, which helps cloud teams design, operate, and optimize their application lifecycle. 4. Gemini in Security Operations, generally available at the end of this month, converts natural language to new detections, summarizes event data, recommends actions to take, and navigates users through the platform via conversational chat. 5. Gemini in BigQuery, in preview, enables data analysts to be more productive, improve query performance and optimize costs throughout the analytics lifecycle. 6. Gemini in Looker, in private preview, provides a dedicated space in Looker to initiate a chat on any topic with your data and derive insights quickly. 7. Gemini in Databases, also in preview, helps developers, operators, and database administrators build applications faster using natural language; manage, optimize and govern an entire fleet of databases from a single pane of glass; and accelerate database migrations. Customer Stories We shared new customer announcements, including: 8. Cintas is leveraging Google Cloud’s gen AI to develop an internal knowledge center that will allow its customer service and sales employees to easily find key information. 9. Bayer will build a radiology platform that will help Bayer and other companies create and deploy AI-first healthcare apps that assist radiologists, ultimately improving efficiency and diagnosis turn-around time. 10. Best Buy is leveraging Google Cloud’s Gemini large language model to create new and more convenient ways to give customers the solutions they need, starting with gen AI virtual assistants that can troubleshoot product issues, reschedule order deliveries, and more. 11. Citadel Securities used Google Cloud to build the next generation of its quantitative research platform that increased its research productivity and price-performance ratio. 12. 
Discover Financial is transforming customer experience by bringing gen AI to its customer contact centers to improve agent productivity through personalized resolutions, intelligent document summarization, real-time search assistants, and enhanced self-service options. 13. IHG Hotels & Resorts is using Gemini to build a generative AI-powered chatbot to help guests easily plan their next vacation directly in the IHG Hotels & Rewards mobile app. 14. Mercedes-Benz will expand its collaboration with Google Cloud, using our AI and gen AI technologies to advance customer-facing use cases across e-commerce, customer service, and marketing. 15. Orange is expanding its partnership with Google Cloud to deploy generative AI closer to Orange's and its customers' operations to help meet local requirements for trusted cloud environments and accelerate gen AI adoption and benefits across autonomous networks, workforce productivity, and customer experience. 16. WPP will leverage Google Cloud's gen AI capabilities to deliver personalization, creativity, and efficiency across the business. Following the adoption of Gemini, WPP is already seeing internal impacts, including real-time campaign performance analysis, streamlined content creation processes, AI narration, and more. 17. Covered California, California's health insurance marketplace, will simplify the healthcare enrollment process using Google Cloud's Document AI, enabling the organization to verify more than 50,000 healthcare documents with an 84% verification rate per month. Workspace and collaboration The next wave of innovations and enhancements is coming to Google Workspace: 18. Google Vids, a key part of our Google Workspace innovations, is a new AI-powered video creation app for work that sits alongside Docs, Sheets and Slides. Vids will be released to Workspace Labs in June. 19. Gemini is coming to Google Chat in preview, giving you an AI-powered teammate to summarize conversations, answer questions, and more. 20. The new AI Meetings and Messaging add-on is priced at $10 per user, per month, and includes: Take notes for me, now in preview; Translate for me, coming in June, which automatically detects and translates captions in Meet with support for 69 languages; and automatic translation of messages and on-demand conversation summaries in Google Chat, coming later this year. 21. Using large language models, Gmail can now block 20% more spam and evaluate 1,000 times more user-reported spam every day. 22. A new AI Security add-on allows IT teams to automatically classify and protect sensitive files in Google Drive, and is available for $10 per user, per month. 23. We're extending DLP controls and classification labels to Gmail in beta. 24. We're adding experimental support for post-quantum cryptography (PQC) in client-side encryption with our partners Thales and Fortanix. 25. Voice prompting and instant polish in Gmail: Send emails easily when you're on the go with voice input in Help me write, and convert rough notes to a complete email with one click. 26. A new tables feature in Sheets (generally available in the coming weeks) formats and organizes data with a sleek design and a new set of building blocks — from project management to event planning templates with automatic alerts based on custom triggers like a change in a status field. 27. Tabs in Docs (generally available in the coming weeks) allow you to organize information in a single document rather than linking to multiple documents or searching through Drive. 28.
Docs now supports full-bleed cover images that extend from one edge of your browser to the other; generally available in the coming weeks. 29. Generally available in the coming weeks, Chat will support increased member capacity of up to 500,000 in spaces. 30. Messaging interoperability for Slack and Teams is now generally available through our partner Mio. AI infrastructure 31. The Cloud TPU v5p GA is now generally available. 32. Google Kubernetes Engine (GKE) now supports Cloud TPU v5p and TPU multi-host serving, also generally available. 33. A3 Mega compute instance powered by NVIDIA H100 GPUs offers double the GPU-to-GPU networking bandwidth of A3, and will be generally available in May. 34. Confidential Computing is coming to the A3 VM family, in preview later this year. 35. The NVIDIA Blackwell GPU platform will be available on the AI Hypercomputer architecture in two configurations: NVIDIA HGX B200 for the most demanding AI, data analytics, and HPC workloads; and the liquid-cooled GB200 NVL72 GPU for real-time LLM inference and training massive-scale models. 36. New caching capabilities for Cloud Storage FUSE improve training throughput and serving performance, and are generally available. 37. The Parallelstore high-performance parallel filesystem now includes caching in preview. 38. Hyperdisk ML in preview is a next-generation block storage service optimized for AI inference/serving workloads. 39. The new open-source MaxDiffusion is a new high-performance and scalable reference implementation for diffusion models. 40. MaxText, a JAX LLM, now supports new LLM models including Gemma, GPT3, LLAMA2 and Mistral across both Cloud TPUs and NVIDIA GPUs. 41. PyTorch/XLA 2.3 will follow the upstream release later this month, bringing single program, multiple data (SPMD) auto-sharding, and asynchronous distributed checkpointing features. 42. For Hugging Face PyTorch users, the Hugging Face Optimum-TPU package lets you train and serve Hugging Face models on TPUs. 43. Jetstream is a new open-source, throughput- and memory-optimized LLM inference engine for XLA devices (starting with TPUs); it supports models trained with both JAX and PyTorch/XLA, with optimizations for popular open models such as Llama 2 and Gemma. 44. Google models will be available as NVIDIA NIM inference microservices. 45. Dynamic Workload Scheduler now offers two modes: flex start mode (in preview), and calendar mode (in preview). 46. We shared the latest performance results from MLPerf™ Inference v4.0 using A3 virtual machines (VMs) powered by NVIDIA H100 GPUs. 47. We shared performance benchmarks for Gemma models using Cloud TPU v5e and JetStream. 48. We introduced ML Productivity Goodput, a new metric to measure the efficiency of an overall ML system, as well as an API to integrate into your projects, and methods to maximize ML Productivity Goodput. Vertex AI 49. Gemini 1.5 Pro is now available in public preview in Vertex AI, bringing the world’s largest context window to developers everywhere. 50. Gemini 1.5 Pro on Vertex AI can now process audio streams including speech, and the audio portion of videos. 51. Imagen 2.0, our family of image generation models, can now be used to create short, 4-second live images from text prompts. 52. Image editing is generally available in Imagen 2.0, including inpainting/outpainting and digital watermarking powered by Google DeepMind’s SynthID. 53. We added CodeGemma, a new model from our Gemma family of lightweight models, to Vertex AI. 54. 
Vertex AI has expanded grounding capabilities, including the ability to directly ground responses with Google Search, now in public preview. 55. Vertex AI Prompt Management, in preview, helps teams improve prompt performance. 56. Vertex AI Rapid Evaluation, in preview, helps users evaluate model performance when iterating on the best prompt design. 57. Vertex AI AutoSxS is now generally available, and helps teams compare the performance of two models. 58. We expanded data residency guarantees for data stored at-rest for Gemini, Imagen, and Embeddings APIs on Vertex AI to 11 new countries: Australia, Brazil, Finland, Hong Kong, India, Israel, Italy, Poland, Spain, Switzerland, and Taiwan. 59. When using Gemini 1.0 Pro and Imagen, you can now limit machine-learning processing to the United States or European Union. 60. Vertex AI hybrid search, in preview, integrates vector-based and keyword-based search techniques to ensure relevant and accurate responses for users. 61. The new Vertex AI Agent Builder, in preview, lets developers build and deploy gen AI experiences using natural language or open-source frameworks like LangChain on Vertex AI. 62. Vertex AI includes two new text embedding models in public preview: the English-only text-embedding-preview-0409, and the multilingual text-multilingual-embedding-preview-0409. Core infrastructure (photo: Thomas with the Google Axion chip) 63. We expanded Google Cloud's compute portfolio, with major product releases spanning compute and storage for general-purpose workloads, as well as for more specialized workloads like SAP and high-performance databases. 64. Google Axion is our first custom Arm-based CPU designed for the data center, and will be in preview in the coming months. 65. Now in preview, the Compute Engine C4 general-purpose VM provides high performance paired with a controlled maintenance experience for your mission-critical workloads. 66. The general-purpose N4 machine series is built for price-performance with Dynamic Resource Management, and is generally available. 67. C3 bare-metal machines, available in an upcoming preview, provide workloads with direct access to the underlying server's CPU and memory resources. 68. New X4 memory-optimized instances are now in preview, through this interest form. 69. Z3 VMs are designed for storage-dense workloads that require SSD, and are generally available. 70. Hyperdisk Storage Pools Advanced Capacity, in general availability, and Advanced Performance in preview, allow you to purchase and manage block storage capacity in a pool that's shared across workloads. 71. Coming to general availability in May, Hyperdisk Instant Snapshots provide near-zero RPO/RTO for Hyperdisk volumes. 72. Google Compute Engine users can now use zonal flexibility, VM family flexibility, and mixed on-demand and spot consumption to deploy their VMs. As part of the Google Distributed Cloud (GDC) offering, we announced: 73. A generative AI search packaged solution powered by Gemma open models will be available in preview in Q2 2024 on GDC to help customers retrieve and analyze data at the edge or on-premises. 74. GDC has achieved ISO27001 and SOC2 compliance certifications. 75. A new managed Intrusion Detection and Prevention Solution (IDPS) integrates Palo Alto Networks threat prevention technology with GDC, and is now generally available. 76. GDC Sandbox, in preview, helps application developers build and test services designed for GDC in a Google Cloud environment, without needing to navigate the air-gap and physical hardware. 77.
A preview GDC storage flexibility feature can help you grow your storage independent of compute, with support for block, file, or object storage. 78. GDC can now run in disconnected mode for up to seven days, and offers a suite of offline management features to help ensure deployments and workloads are accessible and working while they are disconnected; this capability is generally available. 79. New Managed GDC Providers who can sell GDC as a managed service include Clarence, T-Systems, and WWT, and a new Google Cloud Ready — Distributed Cloud badge signals that a solution has been tuned for GDC. 80. GDC servers are now available with an energy-efficient NVIDIA L4 Tensor Core GPU. 81. Google Distributed Cloud Hosted (GDC Hosted) is now authorized to host Top Secret and Secret missions for the U.S. Intelligence Community, and Top Secret missions for the Department of Defense (DoD). From our Google Cloud Networking family, we announced: 82. Gemini Cloud Assist, in preview, provides AI-based assistance to solve a variety of networking tasks such as generating configurations, recommending capacity, correlating changes with issues, identifying vulnerabilities, and optimizing performance. 83. Now generally available, the Model as a Service Endpoint solution uses Private Service Connect, Cloud Load Balancing, and App Hub to let model creators own the model service endpoint to which application developers then connect. 84. Later this year, Cloud Load Balancing will add enhancements for inference workloads: Cloud Load Balancing with custom metrics, Cloud Load Balancing for streaming inference, and Cloud Load Balancing with traffic management for AI models. 85. Cloud Service Mesh is a fully managed service mesh that combines Traffic Director's control plane and Google's open-source Istio-based service mesh, Anthos Service Mesh. A service-centric Cross-Cloud Network delivers a consistent, secure experience from any cloud to any service, and includes the following enhancements: 86. Private Service Connect transitivity over Network Connectivity Center, available in preview this quarter, enables services in a spoke VPC to be transitively accessible from other spoke VPCs. 87. Cloud NGFW Enterprise (formerly Cloud Firewall Plus), now GA, provides network threat protection powered by Palo Alto Networks, plus network security posture controls for org-wide perimeter and Zero Trust microsegmentation. 88. Identity-based authorization with mTLS integrates the Identity-Aware Proxy with our internal application Load Balancer to support Zero Trust network access, including client-side and, soon, back-end mutual TLS. 89. In-line network data-loss prevention (DLP), in preview soon, integrates Symantec DLP into Cloud Load Balancers and Secure Web Proxy using Service Extensions. 90. Partners Imperva, HUMAN Security, Palo Alto Networks and Traceable are integrating their advanced web protection services into Service Extensions, as are web services providers Cloudinary, Nagra, Queue-it, and Datadog. 91. Service Extensions now has a library of code examples to customize origin selection, adjust headers, and more. 92. Private Service Connect is now fully integrated with Cloud SQL, and generally available. There are many improvements to our storage offerings: 93. Generate insights with Gemini lets you use natural language to analyze your storage footprint, optimize costs, and enhance security across billions of objects. It is available now through the Google Cloud console as an allowlist experimental release. 94.
Google Cloud NetApp Volumes is expanding to 15 new Google Cloud regions in Q2’24 (GA) and includes a number of enhancements: dynamically migrating files by policy to lower-cost storage based on access frequency (in preview Q2’24); increasing Premium and Extreme service levels up to 1PB in size, with throughput performance up to 3X (preview Q2’24). NetApp Volumes also includes a new Flex service level enabling volumes as small as 1GiB. 95. Filestore now supports single-share backup for Filestore Persistent Volumes and GKE (generally available) and NFS v4.1 (preview), plus expanded Filestore Enterprise capacity up to 100TiB. For Cloud Storage: 96. Cloud Storage Anywhere Cache now uses zonal SSD read cache across multiple regions within a continent (allowlist GA). 97. Cloud Storage soft delete protects against accidental or malicious deletion of data by preserving deleted items for a configurable period of time (generally available). 98. The new Cloud Storage managed folders resource type allows granular IAM permissions to be applied to groups of objects (generally available). 99. Tag-based at-scale backup helps manage data protection for Compute Engine VMs (generally available). 100. The new high-performance backup option for SAP HANA leverages persistent disk (PD) snapshot capabilities for database-aware backups (generally available). 101. As part of Backup and DR Service Report Manager, you can now customize reports with data from Google Cloud Backup and DR using Cloud Monitoring, Cloud Logging, and BigQuery (generally available). Databases 102. Database Studio, a part of Gemini in Databases, brings SQL generation and summarization capabilities to our rich SQL editor in the Google Cloud console, as well as an AI-driven chat interface. 103. Database Center lets operators manage an entire fleet of databases through intelligent dashboards that proactively assess availability, data protection, security, and compliance issues, as well as with smart recommendations to optimize performance and troubleshoot issues. 104. Database Migration Service is also integrated with Gemini in Databases, including assistive code conversion (e.g., from Oracle to PostgreSQL) and explainability features. Likewise, AlloyDB gains a lot of new functionality: 105. AlloyDB AI lets gen AI developers build applications that accurately query data with natural language, just like they do with SQL; available now in AlloyDB Omni. 106. AlloyDB AI now includes a new pgvector-compatible index based on Google’s approximate nearest neighbor algorithms, or ScaNN; it’s available as a technology preview in AlloyDB Omni. 107. AlloyDB model endpoint management makes it easier to call remote Vertex AI, third-party, and custom models; available in AlloyDB Omni today and soon on AlloyDB in Google Cloud. 108. AlloyDB AI “parameterized secure views” secures data based on end-users’ context; available now in AlloyDB Omni. Bigtable, which turns 20 this year, got several new features: 109. Bigtable Data Boost, a pre-GA offering, delivers high-performance, workload-isolated, on-demand processing of transactional data, without disrupting operational workloads. 110. Bigtable authorized views, now generally available, allow multiple teams to leverage the same tables and securely share data directly from the database. 111. New Bigtable distributed counters in preview process high-frequency event data like clickstreams directly in the database. 112. 
Bigtable large nodes, the first of several workload-optimized node shapes, offer more performance stability at higher server utilization rates, and are in private preview. Memorystore for Redis Cluster, meanwhile: 113. Now supports both AOF (Append Only File) and RDB (Redis Database)-based persistence and has new node shapes that offer better performance and cost management. 114. Offers ultra-fast vector search, now generally available. 115. Includes new configuration options to tune max clients, max memory, max memory policies, and more, now in preview. Firestore users, take note: 116. Gemini Code Assist now incorporates assistive capabilities for developing with Firestore. 117. Firestore now has built-in support for vector search using exact nearest neighbors, the ability to automatically generate vector embeddings using popular embedding models via a turn-key extension, and integrations with popular generative AI libraries such as LangChain and LlamaIndex. 118. Firestore Query Explain in preview can help you troubleshoot your queries. 119. Firestore now supports Customer Managed Encryption Keys (CMEK) in preview, which allows you to encrypt data stored at-rest using your own specified encryption key. 120. You can now deploy Firestore in any available supported Google Cloud region, and Firestore's Scheduled Backup feature can now retain backups for up to 98 days, up from seven days. 121. Cloud SQL Enterprise Plus edition now offers advanced failover capabilities such as orchestrated switchover and switchback. Data analytics 122. BigQuery is now Google Cloud's single integrated platform for data to AI workloads, with BigLake, BigQuery's unified storage engine, providing a single interface across BigQuery native and open formats for analytics and AI workloads. 123. BigQuery now offers improved Iceberg, DDL, DML and high-throughput support in preview, while BigLake now supports the Delta file format, also in preview. 124. BigQuery continuous queries are in preview, providing continuous SQL processing over data streams, enabling real-time pipelines with AI operators or reverse ETL. The above-mentioned Gemini in BigQuery enables all manner of new capabilities and offerings: 125. New BigQuery integrations with Gemini models in Vertex AI support multimodal analytics and vector embeddings, and fine-tuning of LLMs. 126. BigQuery Studio provides a collaborative data workspace, the choice of SQL, Python, Spark or natural language directly, and new integrations for real-time streaming and governance; it is now generally available. 127. The new BigQuery data canvas provides a notebook-like experience with embedded visualizations and natural language support courtesy of Gemini. 128. BigQuery can now connect models in Vertex AI with enterprise data, without having to copy or move data out of BigQuery. 129. You can now use BigQuery with Gemini 1.0 Pro Vision to analyze both images and videos by combining them with your own text prompts using familiar SQL statements. 130. Column-level lineage in BigQuery and expanded lineage capabilities for Vertex AI pipelines will be in preview soon. Other updates to our data analytics portfolio include: 131. Apache Kafka for BigQuery as a managed service is in preview, to enable streaming data workloads based on open source APIs. 132. A serverless engine for Apache Spark integrated within BigQuery Studio is now in preview. 133. Dataplex features expanded data-to-AI governance capabilities in preview.
Developers & operators Gemini Code Assist includes several new enhancements: 134. Full codebase awareness, in preview, uses Gemini 1.5 Pro to make complex changes, add new features, and streamline updates to your codebase. 135. A new code transformation feature available today in Cloud Workstations and Cloud Shell Editor lets you use natural language prompts to tell Gemini Code Assist to analyze, refactor, and optimize your code. 136. Gemini Code Assist now has extended local context, automatically retrieving relevant local files from your IDE workspace and displaying references to the files used. 137. With code customization in private preview, Gemini Code Assist lets you integrate private codebases and repositories for hyper-personalized code generation and completions, and connects to GitLab, GitHub, and Bitbucket source-code repositories. 138. Gemini Code Assist extends to Apigee and Application Integration in preview, to access and connect your applications. 139. We extended our partnership with Snyk to Gemini Code Assist, letting you learn about vulnerabilities and common security topics right within your IDE. 140. The new App Hub provides an accurate, up-to-date representation of deployed applications and their resource dependencies. Integrated with Gemini Cloud Assist, App Hub is generally available. Users of our Cloud Run and Google Kubernetes Engine (GKE) runtime environments can look forward to a variety of features: 141. Cloud Run application canvas lets developers generate, modify and deploy Cloud Run applications with integrations to Vertex AI, Firestore, Memorystore, and Cloud SQL, as well as load balancing and Gemini Cloud Assist. 142. GKE now supports container and model preloading to accelerate workload cold starts. 143. GPU sharing with NVIDIA Multi-Process Service (MPS) is now offered in GKE, enabling concurrent processing on a single GPU. 144. GKE now supports GCS FUSE read caching, generally available, using a local directory as a cache to accelerate repeat reads for small and random I/Os. 145. GKE Autopilot mode now supports NVIDIA H100 GPUs, TPUs, reservations, and Compute Engine committed use discounts (CUDs). 146. Gemini Cloud Assist in GKE is available to help with optimizing costs, troubleshooting, and synthetic monitoring. Cloud Billing tools help you track and understand Google Cloud spending, pay your bill, and optimize your costs; here are a few new features: 147. Support for Cloud Storage costs at the bucket level and storage tags is included out of the box with Cloud Billing detailed data exports to BigQuery. 148. A new BigQuery data view for FOCUS allows users to compare costs and usage across clouds. 149. You can now convert cost management reports into BigQuery billing queries right from the Cloud Billing console. 150. A new Cloud FinOps Anomaly Detection feature is in private preview. 151. FinOps hub is now generally available, adds support to view top savings opportunities, and a preview of our FinOps hub dashboard lets you analyze costs by project, region, or machine type. 152. A new CUD Analysis solution is available across Google Compute Engine resource families including TPU v5e, TPU v5p, A3, H3, and C3D. 153. There are new spend-based CUDs available for Memorystore, AlloyDB, Bigtable, and Dataflow. Security Building on natural language search and case summaries in Chronicle, Gemini in Security Operations is coming to the entire investigation lifecycle, including: 154.
A new assisted investigation feature, generally available at the end of this month, guides analysts through their workflow in Chronicle Enterprise and Chronicle Enterprise Plus. 155. The ability to ask Gemini for the latest threat intelligence from Mandiant directly in-line — including any indicators of compromise found in their environment. 156. Gemini in Threat Intelligence, in public preview, allows you to tap into Mandiant's frontline threat intelligence using conversational search. 157. VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform; generally available now. 158. Gemini in Security Command Center now lets security teams search for threats and other security events using natural language in preview, provides summaries of critical- and high-priority misconfiguration and vulnerability alerts, and summarizes attack paths. 159. Gemini Cloud Assist also helps with security tasks, via: IAM Recommendations, which can provide straightforward, contextual recommendations to remove roles from over-permissioned users or service accounts; Key Insights, which helps during encryption key creation based on its understanding of your data, your encryption preferences, and your compliance needs; and Confidential Computing Insights, which recommends options for adding confidential computing protection to sensitive workloads based on your data and your compute usage. Other security news includes: 160. The new Chrome Enterprise Premium, now generally available, combines the popular browser with Google threat and data protection, Zero Trust access controls, enterprise policy controls, and security insights and reporting. 161. Applied threat intelligence in Google Security Operations, now generally available, automatically applies global threat visibility to each customer's unique environment. 162. Security Command Center Enterprise is now generally available and includes Mandiant Hunt, now in preview. 163. Identity and Access Management Privileged Access Manager (PAM), now available in preview, provides just-in-time, time-bound, and approval-based access elevations. 164. Identity and Access Management Principal Access Boundary (PAB) is a new, identity-centered control now in preview that enforces restrictions on IAM principals. 165. Cloud Next-Gen Firewall (NGFW) Enterprise is now generally available, including threat protection from Palo Alto Networks. 166. Cloud Armor Enterprise is now generally available and offers a pay-as-you-go model that includes advanced network DDoS protection, web application firewall capabilities, network edge policy, adaptive protection, and threat intelligence. 167. Sensitive Data Protection integration with Cloud SQL is now generally available, and is deeply integrated into the Security Command Center Enterprise risk engine. 168. Key management with Autokey is now in preview, simplifying the creation and management of customer encryption keys (CMEK). 169. Bare metal HSM deployments in PCI-compliant facilities are now available in more regions. 170. Regional Controls for Assured Workloads is now in preview and is available in 32 cloud regions in 14 countries. 171. Audit Manager automates control verification with proof of compliance for workloads and data on Google Cloud, and is in preview. 172. Advanced API Security, part of Apigee API Management, now offers shadow API detection in preview. As part of our Confidential Computing portfolio, we announced: 173.
Confidential VMs on Intel TDX are now in preview and available on the C3 machine series with Intel TDX. For AI and ML workloads, we support Intel AMX, which provides CPU-based acceleration by default on C3 series Confidential VMs. 174. Confidential VMs on general-purpose N2D machine series with AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) are now in preview. 175. Live Migration on Confidential VMs is now in general availability on N2D machine series across all regions. 176. Confidential VMs on the A3 machine series with NVIDIA Tensor Core H100 GPUs will be in private preview later this year. Migration 177. The Rapid Migration Program (RaMP) now covers migration and modernization use cases that span across applications and the underlying infrastructure, data and analytics. For example, as part of RaMP for Storage: Storage egress costs from Amazon S3 to Google Cloud Storage are now completely free. Cloud Storage's client libraries for Python, Node.js, and Java now support parallelization of uploads and downloads from client libraries. Migration Center also includes several excellent new additions: 178. Migration use case navigator, for mapping out how to migrate your resources (servers, databases, data warehouses, etc.) from on-prem and other clouds directly into Google Cloud, including new Cloud Spend Estimators for rapid TCO assessments of on-premises VMware and Exadata environments. 179. Database discovery and assessment for Microsoft SQL Server, PostgreSQL and MySQL to Cloud SQL migrations. Google Cloud VMware Engine, an integrated VMware service on Google Cloud now offers: 180. The intent to support VMware Cloud Foundation License Portability 181. General availability of larger instance type (ve2-standard-128) offerings. 182. Networking enhancements including next-gen VMware Engine Networking, automated zero-config VPC peering, and Cloud DNS for workloads. 183. Terraform Infrastructure as Code Automation. Migrate to Virtual Machines helps teams migrate their workloads. Here’s what we announced: 184. A new Disk Migration solution for migrating disk volumes to Google Cloud. 185. Image Import (preview) as a managed service. 186. BIOS to UEFI Conversion in preview, which automatically converts bootloaders to the newer UEFI format. 187. Amazon Linux Conversion in preview, for converting Amazon Linux to Rocky Linux in Google Compute Engine. 188. CMEK support, so you maintain control over your own encryption keys. When replatforming VMs to containers in GKE or Cloud Run, there’s: 189. The new Migrate to Containers (M2C) CLI, which generates artifacts that you can deploy to either GKE or Cloud Run. 190. M2C Cloud Code Extension, in preview, which migrates applications from VMs to containers running on GKE directly in Visual Studio. Here are the enhancements to our Database Migration Service: 191. Database Migration Service now offers AI-powered last-mile code conversion from Oracle to PostgreSQL. 192. Database Migration Service now performs migration from SQL Server (on any platform) to Cloud SQL for SQL Server, in preview. 193. In Datastream, SQL Server as a source for CDC performs data movement to BigQuery destinations. Migrating from a mainframe? Here are some new capabilities: 194. The Mainframe Assessment Tool (MAT) now powered by gen AI analyzes the application codebase, performing fit assessment and creating application-level summarization and test cases. 195. Mainframe Connector sends a copy of your mainframe data to BigQuery for off-mainframe analytics. 196. 
196. G4 refactors mainframe application code (COBOL, RPG, JCL, etc.) and data from their original state/programming language to a modern stack (Java).
197. Dual Run lets you run a new system side by side with your existing mainframe, duplicating all transactions and checking for completeness, quality and effectiveness of the new solution.
Partners & ecosystem
198. Partners showcased more than 100 solutions that leverage Google AI on the Next ‘24 show floor.
199. We announced the 2024 Google Cloud Partner of the Year winners.
200. Gemini models will be available in the SAP Generative AI Hub.
201. GitLab announced that its authentication, security, and CI/CD integrations with Google Cloud are now in public beta for customers.
202. Palo Alto Networks named Google Cloud its AI provider of choice and will use Gemini models to improve threat analysis and incident summarization for its Cortex XSIAM platform.
203. Exabeam is using Google Cloud AI to improve security outcomes for customers.
204. Global managed security services company Optiv is expanding support for Google Cloud products.
205. Alteryx, Dynatrace, and Harness are launching new features built with Google Cloud AI to automate workflows, support data governance, and enable users to better observe and manage the data.
206. A new Generative AI Services Specialization is available for partners who demonstrate the highest level of technical proficiency with Google Cloud gen AI.
207. We introduced new Generative AI Delivery Excellence and Technical Bootcamps, and advanced Challenge Labs in generative AI.
208. The Google Cloud Ready - BigQuery initiative has 21 new partners: Actable, AgileData, Amplitude, Boostkpi, CaliberMind, Calibrate Analytics, CloudQuery, DBeaver, Decube, DinMo, Estuary, Followrabbit, Gretel, Portable, Precog, Retool, SheetGo, Tecton, Unravel Data, Vallidio, and Vaultree.
209. The Google Cloud Ready - AlloyDB initiative has six new partners: Boostkpi, DBeaver, Estuary, Redis, Thoughtspot, and SeeBurger.
210. The Google Cloud Ready - Cloud SQL initiative has five new partners: BoostKPI, DBeaver, Estuary, Redis, and Thoughtspot.
211. Crowdstrike is integrating its Falcon Platform with Google Cloud products.
Members of our Google for Startups program, meanwhile, will be interested to learn that:
212. The Google for Startups Cloud Program has a new partnership with the NVIDIA Inception startup program. The benefits include providing Inception members with access to Google Cloud credits, go-to-market support, technical expertise, and fast-tracked onboarding to Google Cloud Marketplace.
213. As part of the NVIDIA Inception partnership, Google for Startups Cloud Program members can join NVIDIA Inception and gain access to technological expertise, NVIDIA Deep Learning Institute course credits, NVIDIA hardware and software, and more. Eligible members of the Google for Startups Cloud Program also can participate in NVIDIA Inception Capital Connect, a platform that gives startups exposure to venture capital firms interested in the space.
214. The new Google for Startups Accelerator: AI-First program for startups building AI solutions based in the U.S. and Canada has launched, and its cohort includes 15 AI startups: Aptori, Augmend, Backpack Healthcare, BrainLogic AI, Cicerai, CLIKA, Easel AI, Findly, Glass Health, Kodif, Liminal, mbue, Modulo Bio, Rocket Doctor, and Sibli.
215. The Startup Learning Center provides startups with curated content to help them grow with Google Cloud, and will be launching an offering for startup developers and future founders via Innovators Plus in the coming months.
Finally, Google Cloud Consulting has the following services to help you build out your Google Cloud environment:
216. Google Cloud Consulting is offering no-cost, on-demand training to top customers through Google Cloud Skills Boost, including new gen AI skill badges: Prompt Design in Vertex AI, Develop Gen AI Apps with Gemini and Streamlit, and Inspect Rich Documents with Gemini Multimodality and Multimodal RAG.
217. The new Isolator solution protects healthcare data used in collaborations between parties using a variety of Google Cloud technologies, including Chrome Enterprise Premium, VPC Service Controls, Chrome Enterprise, and encryption.
218. Google Cloud Consulting’s Delivery Navigator is now generally available to all Google Cloud qualified services partners.
Phew. What a week! On behalf of Google Cloud, we’re so grateful you joined us at Next ‘24, and we can’t wait to host you again next year, back in Las Vegas at the Mandalay Bay on April 9-11, 2025! View the full article
  4. There is no AI without data
Artificial intelligence is the most exciting technology revolution of recent years. Nvidia, Intel, AMD and others continue to produce faster and faster GPUs, enabling larger models and higher throughput in decision-making processes. Outside of the immediate AI hype, one area still remains somewhat overlooked: AI needs data (find out more here). First and foremost, storage systems need to provide high-performance access to ever-growing datasets, but more importantly they need to ensure that this data is securely stored, not just for the present, but also for the future.
There are multiple different types of data used in typical AI systems:
Raw and pre-processed data
Training data
Models
Results
All of this data takes time and computational effort to collect, process and output, and as such needs to be protected. In some cases, like telemetry data from a self-driving car, this data might never be reproducible. Even after training data is used to create a model, its value is not diminished; improvements to models require consistent training data sets so that any adjustments can be fairly benchmarked. Raw, pre-processed, training and results data sets can contain personally identifiable information, and as such steps need to be taken to ensure that it is stored in a secure fashion. Beyond the moral responsibility of safely storing data, there can be significant penalties associated with data breaches.
Challenges with securely storing AI data
We covered many of the risks associated with securely storing data in this blog post. The same risks apply in an AI setting as well. After all, machine learning is another application that consumes storage resources, albeit sometimes at a much larger scale. AI use cases are relatively new; however, the majority of modern storage systems, including open source solutions like Ceph, have mature features that can be used to mitigate these risks.
Physical theft thwarted by data at rest encryption
Any disk used in a storage system could theoretically be lost due to theft, or when returned for warranty replacement after a failure event. By using at-rest encryption, every byte of data stored on a disk, spinning media or flash, is useless without the cryptographic keys needed to decrypt the data. This protects sensitive data and proprietary models created after hours or even days of processing.
Strict access control to keep out uninvited guests
A key tenet of any system design is ensuring that users (real people, or headless accounts) have access only to the resources they need, and that at any time that access can easily be removed. Storage systems like Ceph use their own access control mechanisms and also integrate with centralised auth systems like LDAP to allow easy access control.
Eavesdropping defeated by in-flight encryption
There is nothing worse than someone listening in on a conversation that they should not be privy to. The same thing can happen in computer networks too. By encrypting all network flows, both client-to-storage and internal storage system traffic, no data can be leaked to third parties eavesdropping on the network.
Recover from ransomware with snapshots and versioning
It seems like every week another large enterprise has to disclose a ransomware event, where an unauthorised third party has taken control of their systems and encrypted the data.
Not only does this lead to downtime, but it can also mean paying a ransom for the decryption key to regain control of systems and access to data. AI projects often represent a significant investment of both time and resources, so having an initiative undermined by a ransomware attack could be highly damaging. Using point-in-time snapshots or versioning of objects can allow an organisation to revert to a previous, non-encrypted state and potentially resume operations sooner.
Learn more
Ceph is one storage solution that can be used to store various AI datasets; it is not only scalable to meet performance and capacity requirements, but also has a number of features to ensure data is stored securely. Find out more about how Ceph solves AI storage challenges, or learn more about Ceph here.
Additional resources
What is Ceph?
Blog: Ceph storage for AI
Webinar: AI storage with Ceph
White paper: A guide to software-defined storage for enterprises
Explore Canonical’s AI solutions
View the full article
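To make the encryption-at-rest and snapshot protections described above concrete, here is a minimal sketch using standard Ceph tooling; the device path, pool, image, and snapshot names are illustrative assumptions, and exact flags can vary between Ceph releases.
# Create an OSD with data-at-rest encryption (dm-crypt); the device path is a placeholder
ceph-volume lvm create --data /dev/sdb --dmcrypt
# Take a point-in-time snapshot of an RBD image holding a training dataset
rbd snap create ai-pool/training-data@known-good
# After a ransomware incident, roll the image back to the known-good snapshot
rbd snap rollback ai-pool/training-data@known-good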
  5. We’re all already cyborgs. Whether you’re wearing glasses, sensing the dimensions of your car as you’re parking, or even “feeling” the texture of your food as you skewer it with a fork, you are directly experiencing the mind’s ability to extend itself and enmesh itself with technology – no cyberpunk implants required. Because we’re cyborgs, we have a natural inclination to pull tools and technology closer to ourselves, which has played out in the rise of everything from the Walkman to wearable fitness trackers to, most recently, the Apple Vision Pro. But today’s smart devices do more than just expand our minds. Sitting as they do at the intersection of the physical and digital worlds, they generate data that combine to describe almost every aspect of a person’s online and offline life, creating precisely the kind of comprehensive asset that hungry advertisers, governments, and even career cybercriminals dream of. Wearable technology has vast promise, but it comes with unprecedented risks to our privacy. Only by understanding the dimensions of those risks can we preserve the benefits of wearables while promoting safety and transparency. Wearable smart devices make us vulnerable by collecting health-related information, biometric data, location data, and more, while often commingling those data points with common digital information, including contact information, purchase history, or browsing behavior somewhere along the line. The hybrid nature of the data and devices ensures that the risks they pose are just as multifaceted, but it’s possible to think of them in three broad categories: 1. Technological Technological vulnerabilities are typically most people’s first thought when assessing the potential dangers of new tech. It’s a good instinct – in many respects hardware and software security constitute the first and last line of defense for user data. It’s therefore necessary to pay close attention to the attack surface that wearables present, as it’s larger and more complex than one might expect. First, there’s the device itself, which could be vulnerable to proximity-based attacks via however it communicates with the outside world. From there, wearables might transmit data to an intermediary device like a smartphone, which itself is vulnerable, after which the data makes its way to permanent, centralized storage on proprietary servers. Each step in this process can be attacked in creative ways via hardware and software, and it’s increasingly likely that bad actors will try due to the richness of the target. Thankfully, standards such as Bluetooth and WiFi have robust security mechanisms to prevent most such attacks, but they’re not perfect. Healthcare data breaches more than doubled from 2013 to 2023, and it’s likely that this trend will be reflected in healthcare-adjacent data, too. 2. Regulatory As is so often the case, privacy protections have failed to keep pace with advancements in technology, and what protections do exist are piecemeal and surprisingly narrow. Most Americans have a vague sense that their healthcare data is protected (and they’re vaguely correct), while an informed minority know the Health Insurance Portability and Accountability Act (HIPAA) exists to safeguard healthcare data. What’s not commonly understood is that HIPAA applies only to healthcare providers and insurers, and only for personally identifiable records used by those entities. 
This means there’s a potential regulatory distinction between healthcare data and biometric data produced by a fitness tracker even if the data point being tracked is identical. Generalized privacy regulations attempt to fill in this gap, but they’re mostly case-by-case. While the EU has one standard (GDPR), as does Canada (PIPEDA), the United States has a state-by-state patchwork of uneven regulation that remains difficult to navigate. The Federal Trade Commission has also tried to backstop health data privacy, citing both GoodRx and BetterHelp in 2023 alone. Absent more specific privacy protections, however, this type of enforcement will necessarily come after privacy has been violated, and almost always on the basis of “deceptive business practices” rather than due to inherent biometric data safeguards.
3. Educational
Just as regulators trail technology, so too does consumer understanding of what’s being tracked, how data can be used, and by whom, all of which are necessary to give informed consent. First and foremost, people need to get into the habit of thinking about everything on their wearables as potentially valuable data. Your daily step count, your heart rate, your sleep quality – all of the fun and useful insights your wearables generate – begin painting a comprehensive picture of you that can seriously erode individual privacy, and it’s all above-board. This kind of data tracking becomes even more impactful when you think about today’s most powerful devices. The Apple Vision Pro by default knows where you are, what you’re browsing, the features of your environment, and even where you’re looking and how you move your body. So much data aggregation allows for deep, profound inferences about individuals that could be used (hopefully not misused) in ways ranging from anodyne to alarming: more targeted ads based on implied preferences; increased insurance premiums due to lifestyle choices or poor treatment compliance; hacker groups revealing someone’s house is empty in real time; the list goes on.
Data rollups
Data rollups aren't confined to devices as powerful as the Apple Vision Pro, either. Consumers need to be made aware of how big tech companies can connect their individual dots across multiple devices and services. For example, consumers are broadly aware Google has location data from Android phones along with search and browsing history, but fewer know that Google acquired Fitbit in 2021, thereby making all Fitbit-generated data a de facto part of the Google ecosystem. There’s nothing intrinsically wrong with this, but consumers require an ecosystem-level understanding of the entities controlling their data to make informed choices. None of this is to say that the situation is beyond repair. In fact, we have to fix it so that we can safely enjoy the benefits of life-changing technology. In order to do that we need solutions that are as comprehensive as the problems. After all, privacy is a team sport.
Comprehensive security
First, we need to embrace more comprehensive security and default encryption at every step, on every device, for all data. Blockchains have much to offer in terms of restricting device access, securing data, and leveraging decentralized infrastructure in order to reduce the honeypot effect of vast data troves.
Second, as noted above, informed visibility is a strict prerequisite for informed consent, so consumers must demand – and privacy-conscious companies must embrace – absolute transparency in terms of what data is collected, how it’s used (and by whom), and with what other data might it be commingled. It’s even possible to envision a world in which companies disclose the information they’re looking to derive based on the data points they’re aggregating, and consumers in turn have the ability to accept or reject the proposition. That leads us to the final piece, which is nuanced control of one’s data. Among its many flaws, the standard model of extracting data from users generally presents them with binary choices: consent and participation, or opting out entirely. Consumers need finer-grained control over what data they share and for what purposes it may be used rather than being strong-armed into an all-or-nothing model. Once again, they should demand it, and privacy-conscious companies can earn immense goodwill by giving it to them. Ultimately there’s nothing to be gained by assuming that privacy is doomed to become a quaint notion from a bygone era. We should instead be hopeful that we can find a balance between the unprecedented benefits of wearable technology and the risks they pose to privacy, but we can’t afford to wait around for regulators. Instead, it’s incumbent upon everyday people to educate themselves on threats to their privacy and speak up not just in favor of better regulation, but in defense of their right to own and control the data they’re creating. If enough people say it, the industry has to listen. We've listed the best smart home devices. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
  6. X, formerly Twitter, has extended support for passkeys as a login option for iPhone users across the globe, the company has announced. Passkeys support was introduced by X earlier this year, but the option was limited to iOS users based in the United States. Now anyone on the social media platform can use them. Passkeys are both easier to use and more secure than passwords because they let users sign in to apps and sites the same way they unlock their devices: With Face ID, Touch ID, or a device passcode. Passkeys are also resistant to online attacks like phishing, making them more secure than things like SMS one-time codes. Apple integrated passkeys into iOS in 2022 with the launch of iOS 16, and it is also available in iPadOS 16.1 and later as well as macOS Ventura and later. To set up passkeys in X on ‌iPhone‌, follow these steps: Log in to the X app. Click Your account in the navigation bar. Select Settings and privacy, then click Security and account access, then Security. Under Additional password protection, click Passkey. Enter your password when prompted. Select Add a passkey and follow the prompts. Update: Passkeys is now available as a login option for everyone globally on iOS! Try it out.https://t.co/v1LyN0l8wF — Safety (@Safety) April 8, 2024 X is just one of several companies to implement support for passkeys in recent months, with other supporting apps and websites including Google, PayPal, Best Buy, eBay, Dashlane, and Microsoft.Tags: Twitter, Passkeys This article, "X Rolls Out Passkeys Support to iPhone Users Worldwide" first appeared on MacRumors.com Discuss this article in our forums View the full article
  7. 5G technology impacts not just our daily lives but the Internet of Things (IoT) as well. The world of 5G is not only being transformed by hyper-connectivity; its future also hinges on a critical element: IoT security. While 5G has remarkable speed and capacity, it also presents a large attack surface. Unlike […] The post Impact of IoT Security for 5G Technology appeared first on Kratikal Blogs. The post Impact of IoT Security for 5G Technology appeared first on Security Boulevard. View the full article
  8. In short, API security testing involves the systematic assessment of APIs to identify vulnerabilities, coding errors, and other weaknesses that could be exploited by malicious actors. Application Programming Interfaces, or APIs, provide much of the communication layer between applications that house an organization’s critical customer and company information, and API security testing is essential to […] The post What is API Security Testing? appeared first on Cequence Security. The post What is API Security Testing? appeared first on Security Boulevard. View the full article
  9. Picus Security today added an artificial intelligence (AI) capability to enable cybersecurity teams to automate tasks via a natural language interface. The capability, enabled by OpenAI, leverages the existing knowledge graph technologies from Picus Security. Dubbed Picus Numi AI, the capability makes use of a large language model (LLM) developed by OpenAI to.. The post Picus Security Melds Security Knowledge Graph with Open AI LLM appeared first on Security Boulevard. View the full article
  10. On February 29, I was honored to serve as the moderator for a panel on “The Rise of AI and its Impact on Corporate Security” at the 2024 Ontic Summit. The panel not only provided me with a reason to focus my own thoughts on the topic, but to also learn from the insights of the… The post Implications of AI for Corporate Security appeared first on Ontic. The post Implications of AI for Corporate Security appeared first on Security Boulevard. View the full article
  11. Security researchers have found a relatively easy and cheap way to clone the keycards used on three million Saflok electronic RFID locks in 13,000 hotels and homes all over the world. The keycard and lock manufacturer, Dormakaba, has been notified, and it is currently working to replace the vulnerable hardware - but it’s a long, tedious process, which is not yet done. Although the flaws were first discovered back in 2022, the researchers have now disclosed more information on them, dubbed “Unsaflok”, in order to raise awareness.
Cheap card cloning
The flaws were discovered at a private hacking event set up in Las Vegas, where different research teams competed to find vulnerabilities in a hotel room and all devices inside. A team consisting of Lennert Wouters, Ian Carroll, rqu, BusesCanFly, Sam Curry, shell, and Will Caruana focused their attention on the Dormakaba Saflok electronic locks for hotel rooms. Soon enough, they found two flaws which, when chained together, allowed them to open the doors with a custom-built keycard. First, they needed access to any card from the premises. That could be the card to their own room. Then, they reverse-engineered the Dormakaba front desk software and lock programming device, which allowed them to spoof a working master key that can open any room on the property. Finally, to clone the cards, they needed to break into Dormakaba’s key derivation function. To forge the keycards, the team used a MIFARE Classic card, a commercial card-writing tool, and an Android phone with NFC capabilities. All of this costs just a few hundred dollars, it was said. With their custom-built keycard, the team would be able to access more than three million locks installed in 13,000 hotels and homes all over the world. Following the publication of the findings, Dormakaba released a statement to the media, saying the vulnerability affects the Saflok System 6000, Ambiance, and Community systems. It added that there is no evidence of these flaws ever being exploited in the wild. Via BleepingComputer
More from TechRadar Pro
This nasty new Android malware can easily bypass Google Play security — and it's already been downloaded thousands of times
Here's a list of the best firewalls around today
These are the best endpoint security tools right now
View the full article
  12. Apple today released iOS 17.4.1 and iPadOS 17.4.1, minor updates to the iOS 17 and iPadOS 17 operating systems. The new software comes a couple of weeks after Apple released iOS 17.4 and iPadOS 17.4 with app changes in the European Union, new emoji, and more. iOS 17.4.1 and iPadOS 17.4.1 can be downloaded on eligible iPhones and iPads over-the-air by going to Settings > General > Software Update. For customers who are still on iOS 16, Apple has also released an iOS 16.7.7 security update. According to Apple's release notes, the iOS 17.4.1 update includes important security updates and bug fixes. Apple will likely begin testing iOS 17.5 in the near future, with betas expected to come out in the next two weeks.Related Roundups: iOS 17, iPadOS 17Related Forums: iOS 17, iPadOS 17 This article, "Apple Releases iOS 17.4.1 and iPadOS 17.4.1 With Bug Fixes and Security Improvements" first appeared on MacRumors.com Discuss this article in our forums View the full article
  13. We are excited to announce the release of the Databricks AI Security Framework (DASF) version 1.0 whitepaper! The framework is designed to improve... View the full article
  14. In the realm of containerized applications, Kubernetes reigns supreme. But with great power comes great responsibility, especially when it comes to safeguarding sensitive data within your cluster. Terraform, the infrastructure-as-code darling, offers a powerful solution for managing Kubernetes Secrets securely and efficiently. This blog delves beyond the basics, exploring advanced techniques and considerations for leveraging Terraform to manage your Kubernetes Secrets. Understanding Kubernetes Secrets Kubernetes Secrets provides a mechanism to store and manage sensitive information like passwords, API keys, and tokens used by your applications within the cluster. These secrets are not directly exposed in the container image and are instead injected into the pods at runtime. View the full article
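As a quick, hedged illustration of the objects discussed above, the sketch below shows a Kubernetes Secret and a pod that consumes it at runtime; the names and values are hypothetical, and the same Secret object can be declared through Terraform's Kubernetes provider (its kubernetes_secret resource) instead of being applied by hand.
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials        # hypothetical name
type: Opaque
stringData:                    # stringData accepts plain text; Kubernetes stores it base64-encoded
  DB_PASSWORD: "change-me"     # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.25.3
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:          # injected at runtime, never baked into the image
          name: app-credentials
          key: DB_PASSWORD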
  15. In the rapidly evolving landscape of Kubernetes, security remains at the forefront of concerns for developers and architects alike. Kubernetes 1.25 brings significant changes, especially in how we approach pod security, an area critical to the secure deployment of applications. This article dives deep into the intricacies of Pod Security Admission (PSA), the successor to Pod Security Policies (PSP), providing insights and practical guidance to harness its potential effectively. Understanding Pod Security Admission With the deprecation of Pod Security Policies in previous releases, Kubernetes 1.29 emphasizes Pod Security Admission (PSA), a built-in admission controller designed to enforce pod security standards at creation and modification time. PSA introduces a more streamlined, understandable, and manageable approach to securing pods, pivotal for protecting cluster resources and data. View the full article
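As a brief illustration of how PSA is switched on in practice, the namespace sketch below uses the standard pod-security.kubernetes.io labels so the built-in admission controller enforces the restricted Pod Security Standard and also audits and warns on it; the namespace name is an illustrative assumption.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                   # illustrative namespace
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Also surface violations as warnings and audit log entries
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted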
  16. Author: Sascha Grunert Seccomp stands for secure computing mode and has been a feature of the Linux kernel since version 2.6.12. It can be used to sandbox the privileges of a process, restricting the calls it is able to make from userspace into the kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a node to your Pods and containers. But distributing those seccomp profiles is a major challenge in Kubernetes, because the JSON files have to be available on all nodes where a workload can possibly run. Projects like the Security Profiles Operator solve that problem by running as a daemon within the cluster, which makes me wonder which part of that distribution could be done by the container runtime. Runtimes usually apply the profiles from a local path, for example: apiVersion: v1 kind: Pod metadata: name: pod spec: containers: - name: container image: nginx:1.25.3 securityContext: seccompProfile: type: Localhost localhostProfile: nginx-1.25.3.json The profile nginx-1.25.3.json has to be available in the root directory of the kubelet, appended by the seccomp directory. This means the default location for the profile on-disk would be /var/lib/kubelet/seccomp/nginx-1.25.3.json. If the profile is not available, then runtimes will fail on container creation like this: kubectl get pods NAME READY STATUS RESTARTS AGE pod 0/1 CreateContainerError 0 38s kubectl describe pod/pod | tail Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 117s default-scheduler Successfully assigned default/pod to 127.0.0.1 Normal Pulling 117s kubelet Pulling image "nginx:1.25.3" Normal Pulled 111s kubelet Successfully pulled image "nginx:1.25.3" in 5.948s (5.948s including waiting) Warning Failed 7s (x10 over 111s) kubelet Error: setup seccomp: unable to load local profile "/var/lib/kubelet/seccomp/nginx-1.25.3.json": open /var/lib/kubelet/seccomp/nginx-1.25.3.json: no such file or directory Normal Pulled 7s (x9 over 111s) kubelet Container image "nginx:1.25.3" already present on machine The major obstacle of having to manually distribute the Localhost profiles will lead many end-users to fall back to RuntimeDefault or even running their workloads as Unconfined (with disabled seccomp). CRI-O to the rescue The Kubernetes container runtime CRI-O provides various features using custom annotations. The v1.30 release adds support for a new set of annotations called seccomp-profile.kubernetes.cri-o.io/POD and seccomp-profile.kubernetes.cri-o.io/<CONTAINER>. Those annotations allow you to specify: a seccomp profile for a specific container, when used as: seccomp-profile.kubernetes.cri-o.io/<CONTAINER> (example: seccomp-profile.kubernetes.cri-o.io/webserver: 'registry.example/example/webserver:v1') a seccomp profile for every container within a pod, when used without the container name suffix but the reserved name POD: seccomp-profile.kubernetes.cri-o.io/POD a seccomp profile for a whole container image, if the image itself contains the annotation seccomp-profile.kubernetes.cri-o.io/POD or seccomp-profile.kubernetes.cri-o.io/<CONTAINER>. CRI-O will only respect the annotation if the runtime is configured to allow it, as well as for workloads running as Unconfined. All other workloads will still use the value from the securityContext with a higher priority. 
The annotations alone will not help much with the distribution of the profiles, but the way they can be referenced will! For example, you can now specify seccomp profiles like regular container images by using OCI artifacts: apiVersion: v1 kind: Pod metadata: name: pod annotations: seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2 spec: … The image quay.io/crio/seccomp:v2 contains a seccomp.json file, which contains the actual profile content. Tools like ORAS or Skopeo can be used to inspect the contents of the image: oras pull quay.io/crio/seccomp:v2 Downloading 92d8ebfa89aa seccomp.json Downloaded 92d8ebfa89aa seccomp.json Pulled [registry] quay.io/crio/seccomp:v2 Digest: sha256:f0205dac8a24394d9ddf4e48c7ac201ca7dcfea4c554f7ca27777a7f8c43ec1b jq . seccomp.json | head { "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 38, "defaultErrno": "ENOSYS", "archMap": [ { "architecture": "SCMP_ARCH_X86_64", "subArchitectures": [ "SCMP_ARCH_X86", "SCMP_ARCH_X32" # Inspect the plain manifest of the image skopeo inspect --raw docker://quay.io/crio/seccomp:v2 | jq . { "schemaVersion": 2, "mediaType": "application/vnd.oci.image.manifest.v1+json", "config": { "mediaType": "application/vnd.cncf.seccomp-profile.config.v1+json", "digest": "sha256:ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356", "size": 3, }, "layers": [ { "mediaType": "application/vnd.oci.image.layer.v1.tar", "digest": "sha256:92d8ebfa89aa6dd752c6443c27e412df1b568d62b4af129494d7364802b2d476", "size": 18853, "annotations": { "org.opencontainers.image.title": "seccomp.json" }, }, ], "annotations": { "org.opencontainers.image.created": "2024-02-26T09:03:30Z" }, } The image manifest contains a reference to a specific required config media type (application/vnd.cncf.seccomp-profile.config.v1+json) and a single layer (application/vnd.oci.image.layer.v1.tar) pointing to the seccomp.json file. But now, let's give that new feature a try! Using the annotation for a specific container or whole pod CRI-O needs to be configured adequately before it can utilize the annotation. To do this, add the annotation to the allowed_annotations array for the runtime. This can be done by using a drop-in configuration /etc/crio/crio.conf.d/10-crun.conf like this: [crio.runtime] default_runtime = "crun" [crio.runtime.runtimes.crun] allowed_annotations = [ "seccomp-profile.kubernetes.cri-o.io", ] Now, let's run CRI-O from the latest main commit. This can be done by either building it from source, using the static binary bundles or the prerelease packages. To demonstrate this, I ran the crio binary from my command line using a single node Kubernetes cluster via local-up-cluster.sh. 
Now that the cluster is up and running, let's try a pod without the annotation running as seccomp Unconfined: cat pod.yaml apiVersion: v1 kind: Pod metadata: name: pod spec: containers: - name: container image: nginx:1.25.3 securityContext: seccompProfile: type: Unconfined kubectl apply -f pod.yaml The workload is up and running: kubectl get pods NAME READY STATUS RESTARTS AGE pod 1/1 Running 0 15s And no seccomp profile got applied if I inspect the container using crictl: export CONTAINER_ID=$(sudo crictl ps --name container -q) sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp null Now, let's modify the pod to apply the profile quay.io/crio/seccomp:v2 to the container: apiVersion: v1 kind: Pod metadata: name: pod annotations: seccomp-profile.kubernetes.cri-o.io/container: quay.io/crio/seccomp:v2 spec: containers: - name: container image: nginx:1.25.3 I have to delete and recreate the Pod, because only recreation will apply a new seccomp profile: kubectl delete pod/pod pod "pod" deleted kubectl apply -f pod.yaml pod/pod created The CRI-O logs will now indicate that the runtime pulled the artifact: WARN[…] Allowed annotations are specified for workload [seccomp-profile.kubernetes.cri-o.io] INFO[…] Found container specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io/container=quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Retrieved OCI artifact seccomp profile of len: 18853 id=26ddcbe6-6efe-414a-88fd-b1ca91979e93 name=/runtime.v1.RuntimeService/CreateContainer And the container is finally using the profile: export CONTAINER_ID=$(sudo crictl ps --name container -q) sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp | head { "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 38, "architectures": [ "SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32" ], "syscalls": [ { The same would work for every container in the pod, if users replace the /container suffix with the reserved name /POD, for example: apiVersion: v1 kind: Pod metadata: name: pod annotations: seccomp-profile.kubernetes.cri-o.io/POD: quay.io/crio/seccomp:v2 spec: containers: - name: container image: nginx:1.25.3 Using the annotation for a container image While specifying seccomp profiles as OCI artifacts on certain workloads is a cool feature, the majority of end users would like to link seccomp profiles to published container images. This can be done by using a container image annotation; instead of being applied to a Kubernetes Pod, the annotation is some metadata applied at the container image itself. For example, Podman can be used to add the image annotation directly during image build: podman build \ --annotation seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 \ -t quay.io/crio/nginx-seccomp:v2 . 
The pushed image then contains the annotation: skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2 | jq '.annotations."seccomp-profile.kubernetes.cri-o.io"' "quay.io/crio/seccomp:v2" If I now use that image in an CRI-O test pod definition: apiVersion: v1 kind: Pod metadata: name: pod # no Pod annotations set spec: containers: - name: container image: quay.io/crio/nginx-seccomp:v2 Then the CRI-O logs will indicate that the image annotation got evaluated and the profile got applied: kubectl delete pod/pod pod "pod" deleted kubectl apply -f pod.yaml pod/pod created INFO[…] Found image specific seccomp profile annotation: seccomp-profile.kubernetes.cri-o.io=quay.io/crio/seccomp:v2 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Pulling OCI artifact from ref: quay.io/crio/seccomp:v2 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Retrieved OCI artifact seccomp profile of len: 18853 id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer INFO[…] Created container 116a316cd9a11fe861dd04c43b94f45046d1ff37e2ed05a4e4194fcaab29ee63: default/pod/container id=c1f22c59-e30e-4046-931d-a0c0fdc2c8b7 name=/runtime.v1.RuntimeService/CreateContainer export CONTAINER_ID=$(sudo crictl ps --name container -q) sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp | head { "defaultAction": "SCMP_ACT_ERRNO", "defaultErrnoRet": 38, "architectures": [ "SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32" ], "syscalls": [ { For container images, the annotation seccomp-profile.kubernetes.cri-o.io will be treated in the same way as seccomp-profile.kubernetes.cri-o.io/POD and applies to the whole pod. In addition to that, the whole feature also works when using the container specific annotation on an image, for example if a container is named container1: skopeo inspect --raw docker://quay.io/crio/nginx-seccomp:v2-container | jq '.annotations."seccomp-profile.kubernetes.cri-o.io/container1"' "quay.io/crio/seccomp:v2" The cool thing about this whole feature is that users can now create seccomp profiles for specific container images and store them side by side in the same registry. Linking the images to the profiles provides a great flexibility to maintain them over the whole application's life cycle. Pushing profiles using ORAS The actual creation of the OCI object that contains a seccomp profile requires a bit more work when using ORAS. I have the hope that tools like Podman will simplify the overall process in the future. Right now, the container registry needs to be OCI compatible, which is also the case for Quay.io. CRI-O expects the seccomp profile object to have a container image media type (application/vnd.cncf.seccomp-profile.config.v1+json), while ORAS uses application/vnd.oci.empty.v1+json per default. To achieve all of that, the following commands can be executed: echo "{}" > config.json oras push \ --config config.json:application/vnd.cncf.seccomp-profile.config.v1+json \ quay.io/crio/seccomp:v2 seccomp.json The resulting image contains the mediaType that CRI-O expects. ORAS pushes a single layer seccomp.json to the registry. The name of the profile does not matter much. CRI-O will pick the first layer and check if that can act as a seccomp profile. Future work CRI-O internally manages the OCI artifacts like regular files. This provides the benefit of moving them around, removing them if not used any more or having any other data available than seccomp profiles. 
This enables future enhancements in CRI-O on top of OCI artifacts, but also allows thinking about stacking seccomp profiles as part of having multiple layers in an OCI artifact. The limitation that it only works for Unconfined workloads for v1.30.x releases is something different CRI-O would like to address in the future. Simplifying the overall user experience by not compromising security seems to be the key for a successful future of seccomp in container workloads. The CRI-O maintainers will be happy to listen to any feedback or suggestions on the new feature! Thank you for reading this blog post, feel free to reach out to the maintainers via the Kubernetes Slack channel #crio or create an issue in the GitHub repository. View the full article
  17. iPhone and iPad owners may want to update to iOS 17.4 and iPadOS 17.4 in the near future, as the updates address two security vulnerabilities that may have been exploited to gain access to user devices. In the security support document for the updates, Apple says that it "is aware of a report" that RTKit and kernel vulnerabilities may have been exploited by bad actors.Impact: An attacker with arbitrary kernel read and write capability may be able to bypass kernel memory protections. Apple is aware of a report that this issue may have been exploited.Apple fixed the memory corruption issue with improved validation to patch the security hole. iOS 17.4 and iPadOS 17.4 also address an Accessibility vulnerability and an issue with Safari Private Browsing that could allow locked tabs to be briefly visible while switching tab groups. The software updates were released this morning and are available on eligible iPhones and iPads by going to Settings > General > Software Update.Related Roundups: iOS 17, iPadOS 17Related Forums: iOS 17, iPadOS 17 This article, "Make Sure to Update: iOS 17.4 and iPadOS 17.4 Fix Two Major Security Vulnerabilities" first appeared on MacRumors.com Discuss this article in our forums View the full article
  18. There are several steps involved in implementing a data pipeline that integrates Apache Kafka with AWS RDS and uses AWS Lambda and API Gateway to feed data into a web application. Here is a high-level overview of how to architect this solution: 1. Set Up Apache Kafka Apache Kafka is a distributed streaming platform that is capable of handling trillions of events a day. To set up Kafka, you can either install it on an EC2 instance or use Amazon Managed Streaming for Kafka (Amazon MSK), which is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data. View the full article
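As a small, hedged sketch of step 1, the commands below use the stock Kafka CLI to create and verify a topic for the pipeline; the broker addresses and topic name are placeholders, and an Amazon MSK cluster using TLS or IAM authentication would need additional client properties.
# Placeholder bootstrap brokers for the MSK or self-managed cluster
export BOOTSTRAP="b-1.example.kafka.us-east-1.amazonaws.com:9092,b-2.example.kafka.us-east-1.amazonaws.com:9092"
# Create a topic for the change events feeding the RDS/Lambda consumers
kafka-topics.sh --create --topic rds-change-events --partitions 3 --replication-factor 2 --bootstrap-server "$BOOTSTRAP"
# Confirm the topic exists before wiring up producers and consumers
kafka-topics.sh --describe --topic rds-change-events --bootstrap-server "$BOOTSTRAP"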
  19. Today, AWS announces the expansion in the log coverage support for Amazon Security Lake, which includes Amazon Elastic Kubernetes Service (Amazon EKS) audit logs. This enhancement allows you to automatically centralize and normalize your Amazon EKS audit logs in Security Lake, making it easier to monitor and investigate potential suspicious activities in your Amazon EKS clusters. View the full article
  20. Docker Desktop 4.28 introduces updates to file-sharing controls, focusing on security and administrative ease. Responding to feedback from our business users, this update brings refined file-sharing capabilities and path allow-listing, aiming to simplify management and enhance security for IT administrators and users alike. Along with our investments in bringing access to cloud resources within the local Docker Desktop experience with Docker Build Cloud Builds view, this release provides a more efficient and flexible platform for development teams. Introducing enhanced file-sharing controls in Docker Desktop Business As we continue to innovate and elevate the Docker experience for our business customers, we’re thrilled to unveil significant upgrades to the Docker Desktop’s Hardened Desktop feature. Recognizing the importance of administrative control over Docker Desktop settings, we’ve listened to your feedback and are introducing enhancements prioritizing security and ease of use. For IT administrators and non-admin users, Docker now offers the much-requested capability to specify and manage file-sharing options directly via Settings Management (Figure 1). This includes: Selective file sharing: Choose your preferred file-sharing implementation directly from Settings > General, where you can choose between VirtioFS, gRPC FUSE, or osxfs. VirtioFS is only available for macOS versions 12.5 and above and is turned on by default. Path allow-listing: Precisely control which paths users can share files from, enhancing security and compliance across your organization. Figure 1: Display of Docker Desktop settings enhanced file-sharing settings. We’ve also reimagined the Settings > Resources > File Sharing interface to enhance your interaction with Docker Desktop (Figure 2). You’ll notice: Clearer error messaging: Quickly understand and rectify issues with enhanced error messages. Intuitive action buttons: Experience a smoother workflow with redesigned action buttons, making your Docker Desktop interactions as straightforward as possible. Figure 2: Displaying settings management in Docker Desktop to notify business subscribers of their access rights. These enhancements are not just about improving current functionalities; they’re about unlocking new possibilities for your Docker experience. From increased security controls to a more navigable interface, every update is designed with your efficiency in mind. Refining development with Docker Desktop’s Builds view update Docker Desktop’s previous update introduced Docker Build Cloud integration, aimed at reducing build times and improving build management. In this release, we’re landing incremental updates that refine the Builds view, making it easier and faster to manage your builds. New in Docker Desktop 4.28: Dedicated tabs: Separates active from completed builds for better organization (Figure 3). Build insights: Displays build duration and cache steps, offering more clarity on the build process. Reliability fixes: Resolves issues with updates for a more consistent experience. UI improvements: Updates the empty state view for a clearer dashboard experience (Figure 4). These updates are designed to streamline the build management process within Docker Desktop, leveraging Docker Build Cloud for more efficient builds. Figure 3: Dedicated tabs for Build history vs. Active builds to allow more space for inspecting your builds. Figure 4: Updated view supporting empty state — no Active builds. 
To explore how Docker Desktop and Docker Build Cloud can optimize your development workflow, read our Docker Build Cloud blog post. Experience the latest Builds view update to further enrich your local, hybrid, and cloud-native development journey. These Docker Desktop updates support improved platform security and a better user experience. By introducing more detailed file-sharing controls, we aim to provide developers with a more straightforward administration experience and secure environment. As we move forward, we remain dedicated to refining Docker Desktop to meet the evolving needs of our users and organizations, enhancing their development workflows and agility to innovate. Join the conversation and make your mark Dive into the dialogue and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs. Learn more Authenticate and update to receive the newest Docker Desktop features per your subscription level. New to Docker? Create an account. Read our latest blog on synchronized file shares. Read about what rolled out in Docker Desktop 4.27, including synchronized file shares, Docker Init GA, a private marketplace for extensions, Moby 25, support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta. Learn about Docker Build Cloud. Subscribe to the Docker Newsletter. View the full article
  21. Because of the critical nature of the DevOps pipeline, security is becoming a top priority. Here's how to integrate DevSecOps.View the full article
  22. Docker images play a pivotal role in containerized application deployment. They encapsulate your application and its dependencies, ensuring consistent and efficient deployment across various environments. However, security is a paramount concern when working with Docker images. In this guide, we will explore security best practices for Docker images to help you create and maintain secure images for your containerized applications. 1. Introduction The Significance of Docker Images Docker images are at the core of containerization, offering a standardized approach to packaging applications and their dependencies. They allow developers to work in controlled environments and empower DevOps teams to deploy applications consistently across various platforms. However, the advantages of Docker images come with security challenges, making it essential to adopt best practices to protect your containerized applications. View the full article
  23. PlayStation account owners will soon be able to start using a passkey as an alternative to a password when logging into a PlayStation account on the web, in an app, or on a PlayStation device. Passkey integration is set to be introduced at some point today, and users will be able to log in and authenticate their accounts with Face ID, Touch ID, or a device passcode on an iPhone. Passkeys are considered more convenient and secure than a traditional password, with sign-ins streamlined through biometric authentication. Passkeys are resistant to online attacks such as phishing because there's no password to steal and no one-time SMS code that can be intercepted. Apple has supported passkeys since 2022, and passkeys are available on iOS 16 and later, iPadOS 16 and later, and macOS Ventura and later. Many companies have been implementing support for passkeys, including Twitter, Google, PayPal, Best Buy, Microsoft, and eBay.Tag: Passkeys This article, "PlayStation Adds Support for Passkeys as Password Alternative" first appeared on MacRumors.com Discuss this article in our forums View the full article
  24. Developing and running secure Docker applications demands a strategic approach, encompassing considerations like avoiding unnecessary bloat in images and access methods. One crucial aspect to master in Docker development is understanding image layering and optimization. Docker images are constructed using layers, each representing specific changes or instructions in the image’s build process. In this article, we’ll delve into the significance of Docker image layering, the importance of choosing minimal base images, and practical approaches like multi-stage builds. Additionally, we’ll discuss the critical practices of running applications as non-root users, checking images for vulnerabilities using tools like Docker Scout, and implementing Docker Content Trust for image integrity. This comprehensive guide aims to equip developers and operators with actionable insights to enhance the security and efficiency of Docker applications.
Understanding Docker image layering
Before we jump into Docker security aspects, we need to understand Docker image layering and optimization. For a better understanding, let’s consider this Dockerfile, retrieved from a sample repository. It’s a simple React program that prints “Hello World.” The core code uses React, a JavaScript library for building user interfaces. Docker images comprise layers, and each layer represents a set of file changes or instructions in the image’s construction. These layers are stacked on each other to form the complete image (Figure 1). To combine them, a “unioned filesystem” is created, which basically takes all of the layers of the image and overlays them together. These layers are immutable. When you’re building an image, you’re simply creating new filesystem diffs, not modifying previous layers.
Figure 1: Visual representation of layers in a Docker image.
When you build a Docker image, each instruction in your Dockerfile creates a new layer. Layers are cached, so if you make a change in your code and rebuild the image, only the layers affected by that change will be recreated, saving time and bandwidth. This layering system makes images efficient to use. You might notice that there are two COPY instructions (as shown in Figure 1). The first COPY instruction copies only package.json (and potentially package-lock.json) into the image. The second COPY instruction copies the remaining application code (excluding files already copied in the first COPY command). If only application code changes, the first two layers are cached, avoiding re-downloading and reinstalling dependencies, which can significantly speed up builds.
1. Choose a minimal base image
Docker Hub has millions of images, and choosing the right image for your application is important. It is always better to choose a minimal base image with a small size, as slimmer images contain fewer dependencies, resulting in a smaller attack surface. Not only does a smaller image improve your image security, but it also reduces the time needed to pull and push images, optimizing the overall development lifecycle.
Figure 2: Example of Docker images with different sizes.
As depicted in Figure 2, we opted for the node:21.6-alpine3.18 image due to its smaller footprint. We selected the Alpine image for our Node application below because it omits additional tools and packages present in the default Node image. This decision aligns with good security practices, as it minimizes the attack surface by eliminating unnecessary components for running your application.
# Use the official Node.js image with Alpine Linux as the base image FROM node:21.6-alpine3.18 # Set the working directory inside the container to /app WORKDIR /app # Copy package.json and package-lock.json to the working directory COPY package*.json ./ # Install Node.js dependencies based on the package.json RUN npm install # Copy all files from the current directory to the working directory in the container COPY . . # Expose port 3000 EXPOSE 3000 # Define the command to run your application when the container starts CMD ["npm", "start"] 2. Use multi-stage builds Multi-stage builds offer a great way to streamline Docker images, making them smaller and more secure. They allow us to trim down a hefty 1.9 GB image to a lean 140 MB by using different build stages. In this approach, we leverage multiple FROM statements and carefully pick only the necessary pieces from one stage to another. We have converted our Dockerfile to a multi-stage one (Figure 3). In the first stage, we use a Node.js image to build the app, manage dependencies, and create application files (see the Dockerfile below). In the second stage, we copy the lightweight files generated in the first step and use Nginx to run them. We skip the build tool required to build the app in the final stage. This is why the final image is small and suitable for the production environment. Also, this is a great representation that we don’t need the heavyweight system on which we build; we can copy them to a lighter runner to run the app. Figure 3: High-level representation of Docker multi-stage build. # Stage 1: Build the application FROM node:21.6-alpine3.18 AS builder # Set the working directory for the build stage WORKDIR /app # Copy package.json and package-lock.json COPY package*.json ./ # Install dependencies RUN npm install # Copy the application source code into the container COPY . . # Build the application RUN npm run build # Stage 2: Create the final image FROM nginx:1.20 # Set the working directory within the container WORKDIR /app # Copy the built application files from the builder stage to the nginx html directory COPY --from=builder /app/build /usr/share/nginx/html # Expose port 80 for the web server EXPOSE 80 # Start nginx in the foreground CMD ["nginx", "-g", "daemon off;"] You can access this Dockerfile directly from a repository on GitHub. 3. Check your images for vulnerabilities using Docker Scout Let’s look at the following multi-stage Dockerfile: # Stage 1: Build the application FROM node:21.6-alpine3.18 AS builder # Set the working directory for the build stage WORKDIR /app # Copy package.json and package-lock.json COPY package*.json ./ # Install dependencies RUN npm install # Copy the application source code into the container COPY . . # Build the application RUN npm run build # Stage 2: Create the final image FROM nginx:1.20 # Set the working directory within the container WORKDIR /app # Copy the built application files from the builder stage to the nginx html directory COPY --from=builder /app/build /usr/share/nginx/html # Expose port 80 for the web server EXPOSE 80 # Start nginx in the foreground CMD ["nginx", "-g", "daemon off;"] You can run the following command to build a Docker image: docker build -t react-app-multi-stage . -f Dockerfile.multi Once the build process is complete, the CLI lets you view a summary of image vulnerabilities and recommendations. That’s what Docker Scout is all about. 
The tail of the build output already points you in this direction:

=> exporting to image                                              0.0s
=> => exporting layers                                             0.0s
=> => writing image sha256:f348bcb19411fa1c4abf2e682f3dded7963c0c0c9b39c31804df5cd0e0f185d9  0.0s
=> => naming to docker.io/library/react-node-app                   0.0s

View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/sci2bo7xihgwnfihigd8x9uh1

What's Next?
  View a summary of image vulnerabilities and recommendations → docker scout quickview

Docker Scout analyzes the contents of container images and generates a report of the packages and vulnerabilities it detects, helping users identify and remediate issues. Docker Scout image analysis is more than point-in-time scanning; the analysis is reevaluated continuously, meaning you don't need to re-scan the image to see an updated vulnerability report. If your base image has a security concern, Docker Scout will check for updates and patches and suggest how to replace the image. If issues exist in other layers, Docker Scout will show precisely where they were introduced and make recommendations accordingly (Figure 4).

Figure 4: How Docker Scout works.

Docker Scout uses Software Bills of Materials (SBOMs) and cross-references them with streaming Common Vulnerabilities and Exposures (CVE) data to surface vulnerabilities, and potential remediation recommendations, as soon as possible. An SBOM is a nested inventory: a list of the ingredients that make up software components. Docker Scout is built on a streaming, event-driven data model, providing actionable CVE reports. Once the SBOM has been generated, Docker Scout automatically checks it against new CVEs as they are published, so you see updates for new CVEs without re-scanning artifacts.

After building the image, we open Docker Desktop (ensure you have the latest version installed), analyze the vulnerabilities, and fix them. We can also use Docker Scout from the Docker CLI, but Docker Desktop gives you a better way to visualize the results.

Select Docker Scout from the sidebar and choose the image; here, we have chosen react-app-multi-stage, which we built just now. As you can see, Scout immediately shows the vulnerabilities and their severity. We can select View packages and CVEs next to it to take a deeper look and get recommendations (Figure 5).

Figure 5: Docker Scout tab in Docker Desktop.

A window then opens showing a detailed report of the vulnerabilities and a layer-by-layer breakdown (Figure 6).

Figure 6: Detailed report of vulnerabilities.

To get recommendations for fixing the image vulnerabilities, select Recommended Fixes in the top-right corner, and a dialog box opens with the recommended fixes. As shown in Figure 7, it recommends upgrading Nginx from version 1.20 to 1.24, which has fewer vulnerabilities and fixes all of the critical and high-severity issues. It is also worth noting that even though version 1.25 was available, Scout still recommends 1.24, because 1.25 carries critical vulnerabilities that 1.24 does not.

Figure 7: Recommendation tab for fixing vulnerabilities in Docker Desktop.

Now we rebuild our image, changing the base image of the final stage to the recommended version 1.24 (Figure 8), which fixes those vulnerabilities.

Figure 8: Advanced image analysis with Docker Scout.

The key features and capabilities of Docker Scout include:

Unified view: Docker Scout provides a single view of your application's dependencies from all layers, allowing you to easily understand your image composition and identify remediation steps.
Event-driven vulnerability updates: Docker Scout uses an event-driven data model to continuously detect and surface vulnerabilities, ensuring that analysis is always up to date and based on the latest CVEs.

In-context remediation recommendations: Docker Scout provides integrated recommendations visible in Docker Desktop, suggesting remediation options for base image updates as well as dependency updates within your application code layers.

Note that Docker Scout is available through multiple interfaces, including the Docker Desktop and Docker Hub user interfaces, as well as a web-based user interface and a command-line interface (CLI) plugin. Users can view and interact with Docker Scout through these interfaces to gain a deeper understanding of the composition and security of their container images.

4. Use Docker Content Trust

Docker Content Trust (DCT) lets you sign and verify Docker images, ensuring they come from trusted sources and haven't been tampered with. The signature acts like a digital seal of approval for an image, whether it is produced by a person or by an automated process. To enable Docker Content Trust, follow these steps:

Initialize Docker Content Trust

Before you can sign images, ensure that Docker Content Trust is enabled. Open a terminal and run the following command:

export DOCKER_CONTENT_TRUST=1

Sign the Docker image

Build and sign the image with the following commands:

docker build -t <your_namespace>/node-app:v1.0 .
docker trust sign <your_namespace>/node-app:v1.0

...
v1.0: digest: sha256:5fa48a9b4e52a9d9681a5786b4885be080668d06019e91eece6dfded5a0f8a47 size: 1986
Signing and pushing trust metadata
Enter passphrase for <namespace> key with ID 96c9857:
Successfully signed docker.io/<your_namespace>/node-app:v1.0

Push the signed image to a registry

You can push the signed Docker image to a registry with:

docker push <your_namespace>/node-app:v1.0

Verify the signature

To verify the signature of an image, use the following command:

docker trust inspect --pretty <your_namespace>/node-app:v1.0

Signatures for <your_namespace>/node-app:v1.0

SIGNED TAG   DIGEST                                            SIGNERS
v1.0         5fa48a9b4e52a9d968XXXXXX19e91eece6dfded5a0f8a47   <your_namespace>

List of signers and their keys for <your_namespace>/node-app:v1.0

SIGNER       KEYS
ajeetraina   96c985786950

Administrative keys for <your_namespace>/node-app:v1.0

  Repository Key: 47214511f851e28018a7b0443XXXXXXc7d5846bf6f7
  Root Key:       52bae142a9ac98a473c5275bXXXXXX2f4f5068081d567903dd

By following these steps, you've enabled Docker Content Trust for your Node.js application, signing and verifying the image to enhance security and ensure the integrity of your containerized application throughout its lifecycle.

5. Practice least privilege

Security is crucial in containerized environments. Embracing the principle of least privilege ensures that Docker containers operate with only the permissions they actually need, reducing the attack surface and mitigating potential security risks. Let's explore specific best practices for achieving least privilege in Docker.

Run as non-root user

We minimize potential risks by running applications without unnecessary root privileges. Many applications don't need root at all, so in the Dockerfile we can run the application inside the container as a non-root system user with limited privileges, improving security and adhering to the principle of least privilege.
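As a quick check (a sketch using the image name from earlier in this article), you can inspect which user an image is configured to run as; empty output means containers from that image will run as root by default:

# Print the configured user of the image; empty output means root
docker inspect --format '{{.Config.User}}' react-app-multi-stage

The multi-stage Dockerfile below switches the final Nginx stage to the unprivileged nginx user that ships with the official Nginx image: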
# Stage 1: Build the application
FROM node:21.6-alpine3.18 AS builder

# Set the working directory for the build stage
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the application source code into the container
COPY . .

# Build the application
RUN npm run build

# Stage 2: Create the final image
FROM nginx:1.20

# Set the working directory within the container
WORKDIR /app

# Set ownership and permissions for the nginx user
RUN chown -R nginx:nginx /app && \
    chmod -R 755 /app && \
    chown -R nginx:nginx /var/cache/nginx && \
    chown -R nginx:nginx /var/log/nginx && \
    chown -R nginx:nginx /etc/nginx/conf.d

# Give the nginx user ownership of the PID file
RUN touch /var/run/nginx.pid && \
    chown -R nginx:nginx /var/run/nginx.pid

# Switch to the nginx user
USER nginx

# Copy the built application files from the builder stage to the nginx html directory
COPY --from=builder /app/build /usr/share/nginx/html

# Expose port 80 for the web server
EXPOSE 80

# Start nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]

If we are using Node as the final base image (Figure 9), we can instead add USER node to our Dockerfile to run the application as a non-root user. The node user is created within the official Node image with restricted permissions, unlike the root user, which has full control over the system. By default, the Docker Node image includes this non-root node user, which you can use to avoid running your application container as root.

Figure 9: Images tab in Docker Desktop.

Limit capabilities

Limiting Linux kernel capabilities is crucial for controlling the privileges available to containers. Docker runs with a restricted set of capabilities by default, and you can tighten security further by dropping all capabilities and adding back only the ones your application requires. Capabilities are a runtime setting rather than something a Dockerfile can express, so let's take our simple Hello World React containerized app, keep its Dockerfile unchanged, and apply the least-privilege flags when the container is started:

FROM node:21.6-alpine3.18

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "start"]

# Drop all capabilities and add back only CHOWN at runtime
docker run --cap-drop all --cap-add CHOWN node-app

Add the no-new-privileges flag

Running containers with the --security-opt=no-new-privileges flag prevents privilege escalation through setuid or setgid binaries. Setuid and setgid binaries let a user run an executable with the file-system permissions of the executable's owner or group, respectively (and, on directories, setgid changes how group ownership is inherited). This flag ensures that the container's privileges cannot be escalated during runtime:

docker run --security-opt=no-new-privileges node-app

Disable inter-container communication

Inter-container communication (icc) is enabled by default in Docker, allowing containers to communicate over the default docker0 bridge network. docker0 bridges your containers' networks (or any Compose networks) to the host's main network interface, meaning your containers can reach the network and you can reach the containers. Disabling icc enhances security by requiring communication between containers to be defined explicitly, for example via user-defined networks or legacy --link options. Note that icc is a daemon-level setting rather than a docker run flag; it can be turned off by setting "icc": false in /etc/docker/daemon.json or by starting the daemon with:

dockerd --icc=false
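Putting these runtime controls together, a sketch of a more locked-down invocation of the same container (the flags are the ones discussed above; the port mapping is illustrative) might look like this:

# Run the app with all capabilities dropped except CHOWN and with privilege escalation disabled
docker run -d \
  --cap-drop all \
  --cap-add CHOWN \
  --security-opt=no-new-privileges \
  -p 3000:3000 \
  node-app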
Use Linux Security Modules

When you're running applications in Docker containers, you want to make sure they're as secure as possible. One way to do this is by using Linux Security Modules (LSMs), such as seccomp, AppArmor, or SELinux. These tools provide additional layers of protection for Linux systems and containerized applications by controlling which actions a container can perform on the host system:

Seccomp is a Linux kernel feature that allows a process to make a one-way transition into a "secure" state in which it is restricted to a reduced set of system calls. Restricting the system calls a process can make reduces its attack surface and the potential impact if it is compromised.

AppArmor confines individual programs to predefined rules, specifying their allowed behavior and limiting access to files and resources.

SELinux enforces mandatory access control policies, defining rules for interactions between processes and system resources to mitigate the risk of privilege escalation and enforce least-privilege principles.

By leveraging these LSMs, administrators can strengthen the security posture of their systems and applications, safeguarding against various threats and vulnerabilities. For our simple Hello World React application, Docker applies its default seccomp profile unless it is overridden with the --security-opt option. This flexibility enables administrators to define security policies explicitly based on their specific requirements, as demonstrated in the following command:

docker run --rm -it --security-opt seccomp=/path/to/seccomp/profile.json node-app

Customize seccomp profiles

Customizing seccomp profiles at runtime offers several benefits:

Flexibility: By separating the seccomp configuration from the Dockerfile, you can adjust the security settings without modifying the image itself. This approach allows for easier experimentation and iteration.

Granular control: Custom seccomp profiles let you define precisely which system calls are permitted or denied within your containers. This level of granularity allows you to tailor the security settings to the specific requirements of your application.

Security compliance: In environments with strict security requirements, custom seccomp profiles can help ensure compliance by enforcing tighter restrictions on containerized processes.

Limit container resources

In Docker, containers may consume CPU and RAM up to whatever the host kernel scheduler allows. While this flexibility makes for efficient resource utilization, it also introduces potential risks:

Security breaches: If a container is compromised, attackers could exploit its unrestricted access to host resources for malicious activities, for instance mining cryptocurrency or running other nefarious workloads.

Performance bottlenecks: Resource-intensive containers can monopolize system resources, leading to performance degradation or service outages across your applications.

To mitigate these risks effectively, it's crucial to establish clear resource limits for your containers:

Allocate resources wisely: Assign specific amounts of CPU and RAM to each container to ensure fair distribution and prevent resource dominance.

Enforce boundaries: Set hard limits that containers cannot exceed, containing potential damage and thwarting resource-exhaustion attacks.

Promote harmony: Efficient resource management ensures stability, allowing containers to operate smoothly and fulfill their tasks without contention.
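As a sketch of what such limits can look like in a single run command (the specific values are illustrative rather than recommendations, and --pids-limit is one way to cap the number of processes), the individual flags are discussed below:

# Cap the container at half a CPU core, 256 MB of RAM, and 100 processes
docker run -d \
  --cpus="0.5" \
  --memory="256m" \
  --pids-limit=100 \
  node-app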
For example, to limit CPU usage on its own, you can run the container with:

docker run -it --cpus=".5" node-app

This command limits the container to using only 50% of a single CPU core. Remember, setting resource limits isn't just about efficiency; it's a vital security measure that safeguards your host system and promotes harmony among your containerized applications.

To help prevent denial-of-service (DoS) attacks, limiting resources such as memory, CPU, file descriptors, and processes is crucial. Docker provides mechanisms to set these limits for individual containers, for example:

--restart=on-failure:<number_of_restarts>
--ulimit nofile=<number>
--ulimit nproc=<number>

By diligently adhering to these least-privilege principles, you can establish a robust security posture for your Docker containers.

6. Choose the right base image

Finding the right image can seem daunting with more than 8.3 million repositories on Docker Hub. Two beacons can help guide you toward safe waters: Docker Official Images (DOI) and Docker Verified Publisher (DVP) badges.

Docker Official Images (marked by a blue badge shield) offer a curated set of open source and drop-in solution repositories. These are your go-to for common bases like Ubuntu, Python, or Nginx. Imagine them as trusty ships, built with quality materials and regularly inspected for seaworthiness.

Docker Verified Publisher images (signified by a gold check mark) come from organizations that have teamed up with Docker to offer high-quality images. Docker verifies the authenticity and security of their content, giving you extra peace of mind. Think of them as sleek yachts, built by experienced shipwrights and certified by maritime authorities.

Remember that Docker Official Images are a great starting point for common needs, while Verified Publisher images offer an extra layer of trust and security for crucial projects.

Conclusion

Optimizing Docker images for security involves a multifaceted approach, addressing image size, access controls, and vulnerability management. By understanding Docker image layering and leveraging practices such as choosing minimal base images and employing multi-stage builds, developers can significantly enhance efficiency and security. Running applications with least privilege, monitoring vulnerabilities with tools like Docker Scout, and implementing content trust further fortify the containerized ecosystem.

As the Docker landscape evolves, staying informed about best practices and adopting proactive security measures is paramount. This guide serves as a valuable resource, helping developers and operators navigate Docker security with confidence and ensuring their applications are not only functional but also resilient to potential threats.

Learn more

Subscribe to the Docker Newsletter.
Get the latest release of Docker Desktop.
Get started with Docker Scout.
Vote on what's next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.

View the full article
  25. Organizations can maintain their DevOps momentum while protecting the software supply chain by shifting security left. View the full article