Showing results for tags 'amazon eks'.

  1. This is a guest post by Pranav Kapoor, Head of DevOps at Upstox, co-authored with Jayesh Vartak, Solutions Architect at AWS, and Jitendra Shihani, Technical Account Manager (TAM) at AWS.

Upstox is India's largest investech, a multi-unicorn valued at $3.5 billion. It allows you to buy and sell stocks, mutual funds, and derivatives, and is loved and trusted by over 12 million customers. It is backed by Mr. Ratan Tata and Tiger Global and is the official partner for the Tata IPL (Indian Premier League).

Upstox experienced 10x growth during the pandemic, with the number of users increasing from 1 million to over 10 million in 2022. To sustain this exponential growth and prepare for future expansion, Upstox set high standards for running its trading platform, covering availability, scalability, security, operational efficiency, and cost optimization:

- Availability: Raise the targeted availability SLA from 99.9% to at least 99.99%.
- Scalability: Reduce scaling lag and provide superior performance even during sudden bursts of traffic, such as market opening hours, budget day announcements, or market news.
- Security: Implement more guardrails to improve the security posture, with a focus on data privacy, handling of Personally Identifiable Information (PII), and storing and processing customer data in a highly secure way, while streamlining auditing and compliance processes.
- Operational efficiency: Create new infrastructure or environments in an automated way using Infrastructure as Code (IaC), and incorporate chaos engineering into the release lifecycle.
- Cost optimization: Reduce infrastructure and operational cost without compromising availability, security, and scalability.

To meet these targets, Upstox embarked on the journey of building a next-generation platform called "Greenfield". Upstox chose Amazon Elastic Kubernetes Service (Amazon EKS) as the core compute platform for Greenfield to leverage the benefits of containers, whereas the earlier platform ran on Amazon Elastic Compute Cloud (Amazon EC2). In this post, we share the tenets followed to build Greenfield, its core differentiators, and the outcomes.

Upstox Greenfield philosophy

To build a future-proof architecture that can handle further exponential growth in traffic and remain flexible enough to evolve over time, we used the following tenets:

Security: Security is "job zero" at AWS and Upstox. It is the top priority in every aspect of the platform: least-privilege access, mandatory encryption for data at rest and in transit, no sharing of SSH keys, and mandatory AWS Identity and Access Management (IAM) role-based access (no sharing of IAM access keys and secret access keys).

Customer experience: The architecture focuses on availability, performance, scalability, and resiliency to deliver the best customer experience. We set the simple principle that any server belonging to any service can be terminated at any point in time without impacting the customer experience.

Smart defaults: Simplicity is key to the long-term success of the platform. Kubernetes offers a wide range of capabilities, but using all of them can quickly make the platform complex. Therefore, we decided to keep the platform simple yet powerful by selecting only a few Kubernetes capabilities, and we followed the principle of building services with smart defaults. The idea is that all services come with default settings so that new users can easily run their applications without getting into the complexities of the platform, while advanced users can customize the defaults for their use cases. For example, the default minimum number of pods is two, and zone topology spread constraints distribute the pods across multiple Availability Zones (AZs), so the application is highly available by default. To avoid a single point of failure, application teams cannot override the minimum number of pods to less than two; they can only raise it. Keeping the platform simple, with smart defaults, has enabled all teams, such as development, QA, InfoSec, and Operations, to focus on understanding and leveraging a few powerful capabilities and to become proficient in them. A minimal sketch of such a default is shown below.
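To illustrate the smart-defaults tenet, here is a minimal sketch of what a default workload template along these lines could look like: two replicas spread across Availability Zones with a topology spread constraint. All names and images are placeholders, not Upstox's actual configuration.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service                 # placeholder service name
spec:
  replicas: 2                           # platform default: at least two pods
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # spread pods across AZs
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: example-service
      containers:
        - name: app
          image: registry.example.com/example-service:latest   # placeholder image

Application teams inherit these defaults and can only tighten them (for example, raising replicas above two), never relax them below the platform minimum.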
NoOps: The goal is to eliminate or minimize manual activities as much as possible. This involves having a continuous integration/continuous deployment (CI/CD) pipeline not only for applications but also for infrastructure (as IaC). A test-first approach, along with chaos engineering, is also vital to achieving NoOps. The objective is to empower teams with a self-service, automated platform and keep human activities to a minimum. For example, if a team wants to deploy an application and InfoSec needs to approve it, the only human touch-point should be the InfoSec team, eliminating the touch-point with the DevSecOps team. To keep the focus on automating everything, the DevSecOps team is composed mostly of developers, so that they spend most of their time automating the platform rather than on day-to-day operations.

Cost optimized: We wanted to build a cost-optimized platform. Along with the core cost optimization levers, such as right sizing, auto scaling, AMD-powered instances, AWS Graviton, Spot Instances, Savings Plans, and Reserved Instances, the platform also focuses on leveraging service features (such as S3 Intelligent-Tiering) and re-architecting applications for cost optimization.

Approach

It took about a year to build the next-generation trading platform. In the planning phase, we evaluated the application portfolio across various dimensions, such as criticality, dependencies, complexity, containerization effort, testing effort, current cost, and roadmap. Based on this evaluation, we categorized the applications into five buckets:

- Two representative applications for identified capabilities
- Two mission-critical applications
- Top cost-contributing applications
- Remaining to-be-containerized applications
- Applications to be retained (not to be containerized)

Then, we migrated the applications in phases as follows:

- Phase 1 – Two representative applications for identified capabilities: In the first phase, we selected a few representative applications that were good candidates for migration to Amazon EKS. This migration was successfully completed, allowing us to validate critical features such as gRPC, websockets, and load balancing. It helped the team learn Docker, Kubernetes, and Amazon EKS. The team understood the nuances and became confident in migrating and running containers.
- Phase 2 – Two mission-critical applications: In the second phase, we focused on the top two mission-critical applications. These applications provide the exchange's data feed to end users in real time and also power the charts and graphs. They are critical for end users to make trading decisions. The team resolved all the challenges in migrating and running these mission-critical applications and learned the specifics of running them on Amazon EKS. This experience streamlined the later phases, as the majority of the use cases for forthcoming applications had already been addressed through the initial two representative and two mission-critical applications.
- Phase 3 – Top cost-contributing applications: In the third phase, we picked up the top cost-contributing applications, including complex applications. There were multiple benefits: apart from realizing the cost savings early, we increased the agility, resiliency, scalability, and performance of these applications.
- Phase 4 – Remaining to-be-containerized applications: In the fourth phase, we migrated the rest of the applications to Amazon EKS. With the learnings and insights gained in the earlier phases, we were able to expedite the migration process significantly.

We evaluated the applications for the fifth phase and concluded that containerization did not present a favorable cost-benefit ratio, due to factors such as the effort involved relative to the advantages and the future strategic plans for these applications. Therefore, we retained these applications as is.

Greenfield differentiation

Scaling based on traffic pattern: Upstox did an in-depth analysis of daily traffic patterns. On a typical day, there is an exponential spike in traffic during the market's opening hour, approximately from 9:00 AM to 9:30 AM, followed by another surge before market close. Traffic varies throughout trading hours and is very low during non-trading hours. This traffic pattern is shown in the following diagram (Scaling based on traffic pattern). On a non-typical day, however, traffic varies based on market news and events such as budget day announcements. Considering these aspects, Upstox implemented scaling along multiple dimensions: time-based, request-based, and memory/CPU-based. To further reduce or eliminate scaling lag, we leveraged the technique "Eliminate Kubernetes node scaling lag with pod priority and over-provisioning", sketched below.
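The referenced over-provisioning technique is commonly implemented with a low-priority "placeholder" deployment of pause pods that reserve spare capacity; when real workloads scale out, the scheduler evicts the placeholders immediately and the node autoscaler replaces the capacity in the background. Here is a minimal sketch under that assumption, with sizes purely illustrative rather than Upstox's actual settings.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1                        # lower than any real workload, so these pods are evicted first
globalDefault: false
description: "Placeholder pods that keep warm capacity for sudden traffic spikes"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2                    # amount of headroom to keep warm; tune per cluster
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "1"           # each placeholder reserves 1 vCPU and 1 GiB of headroom
              memory: 1Gi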
Application Load Balancer pre-warming: Because traffic spikes sharply in a short span of time during market opening hours, the Application Load Balancer (ALB) also needs to scale accordingly. The ALB can scale to handle up to double the traffic within the next five minutes, but the increase during market opening hours is much higher than that. To address this challenge, Upstox has been using ALB pre-warming, in which higher capacity (LCUs) is pre-provisioned to handle the sudden spike in traffic.

Karpenter: Upstox implemented the Karpenter autoscaler instead of Cluster Autoscaler to scale the Amazon EKS worker nodes in line with the traffic patterns. We used the following key features of Karpenter (a NodePool sketch follows this list):

- Instance type selection: We used a broad set of EC2 instance types to mitigate the risk of instance unavailability and to optimize cost.
- Node recycling: The Amazon EKS cluster's worker nodes use Amazon EKS optimized Amazon Linux AMIs to run containers securely and performantly. To make sure that the worker nodes are always up to date with the latest security patches and fixes, we adopted a routine cycle of replacing them with the most recent Amazon EKS optimized Amazon Machine Images (AMIs). By replacing the worker node instead of doing in-place OS updates of the existing node, we aligned with the best practice of treating infrastructure as immutable. We used the spec.disruption.expireAfter NodePool parameter to continuously replace nodes with the latest Amazon EKS optimized AMI.
- Consolidation: We further optimized cost by scaling in the worker nodes in line with the load. We used the spec.disruption.consolidationPolicy NodePool parameter to downsize over-provisioned nodes and reduce the number of nodes as load decreases and pods are removed.
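A minimal Karpenter NodePool sketch showing where these two disruption parameters live is below; the API version, requirements, and values are illustrative assumptions, not Upstox's actual configuration.

apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # broad instance-type selection to reduce the risk of unavailability
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
  disruption:
    consolidationPolicy: WhenUnderutilized   # scale in and repack nodes as load drops
    expireAfter: 168h                        # recycle nodes weekly onto the latest EKS optimized AMI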
Security: The Greenfield platform is highly secure, with a focus on data security, data privacy, access control, and compliance. Upstox leveraged the AWS Encryption SDK to manage PII data, and used AWS Systems Manager extensively as follows:

- Systems Manager Session Manager: We managed SSH access using Session Manager and followed the least-privilege principle using IAM policies and roles, eliminating the sharing of SSH keys.
- Systems Manager Patch Manager: We used Patch Manager to periodically apply patches to the EC2 instances outside of the Amazon EKS clusters.
- Systems Manager Compliance: We used Compliance to generate audit reports, such as patch compliance data.

Additional cost optimizations: To further optimize cost, we followed these advanced cost optimization techniques:

- Right sizing: As we containerized the applications, based on load and performance testing, we chose the right size and type of EC2 instances to get the best performance at the lowest cost.
- AMD-powered instances for applications: AMD-powered EC2 instances can run general purpose, memory intensive, burstable, compute intensive, and graphics intensive workloads at a significant price advantage relative to comparable offerings. By adopting AMD-powered EC2 instances for Amazon EKS, Upstox achieved a cost reduction of approximately 40-45% in the AWS Mumbai Region.
- Spot Instances: EC2 Spot Instances provide up to 90% cost savings compared to On-Demand Instances. Upstox migrated the Amazon EKS dev and staging environments to Spot Instances, with fallback to On-Demand Instances if Spot Instances were unavailable. This resulted in an overall 70% cost savings for the dev and staging environments.
- Graviton instances for managed services: The AWS Graviton processor offers up to 40% price-performance benefits, and most managed services can be updated to use it. We therefore leveraged Graviton for managed services such as Amazon ElastiCache and Amazon Aurora.
- Amazon EBS gp3: Amazon EBS gp3 volumes offer up to 20% lower cost than previous-generation gp2 volumes and also allow scaling performance independently of storage capacity. We migrated all EBS volumes from gp2 to gp3, lowering cost by approximately 18% and getting better performance.
- Amazon S3: With exponential growth in the business, data volumes and the associated cost also increased significantly.
Upstox leveraged S3 Storage Lens to optimize the Amazon Simple Storage Service (Amazon S3) cost, as outlined in the case study "Upstox Saves $1 Million Annually Using Amazon S3 Storage Lens".

Architectural roadmap

As part of the architecture evolution, Upstox is considering the following roadmap:

- Multi-architecture CPU: Upstox is considering multi-architecture CPUs, x86 and Arm, for all of its workloads by leveraging Graviton instances along with AMD instances. This diversification provides a wider selection of instances and mitigates the risk of instance shortages: for example, if AMD instances are not available, the workload scales using Graviton instances, and the other way around. In turn, this improves resiliency while keeping costs optimized.
- Auto scaling for databases: Currently, databases are provisioned for peak capacity and there is no auto scaling. Upstox is planning to implement auto scaling for database read replicas so that the number of read replicas is automatically adjusted based on load, further optimizing cost.
- Combination of On-Demand and Spot Instances for production workloads: Currently, the production environment uses On-Demand Instances; the plan is to use a combination of On-Demand and Spot Instances, which can further reduce cost.

Conclusion

In this post, we showed the detailed journey of Upstox in developing the Greenfield platform, a transformative project that has significantly enhanced customer experience, agility, security, and operational efficiency, all while reducing costs. The platform has not only shortened the release lifecycle, enabling faster delivery of new use cases, but also lowered operational costs despite handling increased volumes. This was achieved without compromising on security, scalability, or performance. The Upstox initiative stands as a testament to how thoughtful innovation and strategic investment in technology can lead to substantial business benefits.

View the full article
  2. The construction of big data applications based on open source software has become increasingly uncomplicated since the advent of projects like Data on EKS, an open source project from AWS to provide blueprints for building data and machine learning (ML) applications on Amazon Elastic Kubernetes Service (Amazon EKS). In the realm of big data, securing data on cloud applications is crucial. This post explores the deployment of Apache Ranger for permission management within the Hadoop ecosystem on Amazon EKS. We show how Ranger integrates with Hadoop components like Apache Hive, Spark, Trino, Yarn, and HDFS, providing secure and efficient data management in a cloud environment. Join us as we navigate these advanced security strategies in the context of Kubernetes and cloud computing.

Overview of solution

The Amber Group's Data on EKS Platform (DEP) is a Kubernetes-based, cloud-centered big data platform that revolutionizes the way we handle data in EKS environments. Developed by Amber Group's Data Team, DEP integrates with familiar components like Apache Hive, Spark, Flink, Trino, HDFS, and more, making it a versatile and comprehensive solution for data management and BI platforms. The following diagram illustrates the solution architecture.

Effective permission management is crucial for several key reasons:

- Enhanced security – With proper permission management, sensitive data is only accessible to authorized individuals, thereby safeguarding against unauthorized access and potential security breaches. This is especially important in industries handling large volumes of sensitive or personal data.
- Operational efficiency – By defining clear user roles and permissions, organizations can streamline workflows and reduce administrative overhead. This system simplifies managing user access, saves time for data security administrators, and minimizes the risk of configuration errors.
- Scalability and compliance – As businesses grow and evolve, a scalable permission management system helps with smoothly adjusting user roles and access rights. This adaptability is essential for maintaining compliance with various data privacy regulations like GDPR and HIPAA, making sure that the organization's data practices are legally sound and up to date.
- Addressing big data challenges – Big data comes with unique challenges, like managing large volumes of rapidly evolving data across multiple platforms. Effective permission management helps tackle these challenges by controlling how data is accessed and used, providing data integrity and minimizing the risk of data breaches.

Apache Ranger is a comprehensive framework designed for data governance and security in Hadoop ecosystems. It provides a centralized framework to define, administer, and manage security policies consistently across various Hadoop components. Ranger specializes in fine-grained access control, offering detailed management of user permissions and auditing capabilities. Ranger's architecture is designed to integrate smoothly with various big data tools such as Hadoop, Hive, HBase, and Spark. The key components of Ranger include:

- Ranger Admin – This is the central component where all security policies are created and managed. It provides a web-based user interface for policy management and an API for programmatic configuration.
- Ranger UserSync – This service is responsible for syncing user and group information from a directory service like LDAP or AD into Ranger.
- Ranger plugins – These are installed on each component of the Hadoop ecosystem (like Hive and HBase). Plugins pull policies from the Ranger Admin service and enforce them locally.
- Ranger Auditing – Ranger captures access audit logs and stores them for compliance and monitoring purposes. It can integrate with external tools for advanced analytics on these audit logs.
- Ranger Key Management Store (KMS) – Ranger KMS provides encryption and key management, extending Hadoop's HDFS Transparent Data Encryption (TDE).

The following flowchart illustrates the priority levels for matching policies. The priority levels are as follows:

- Deny list takes precedence over allow list
- Deny list exclude has a higher priority than deny list
- Allow list exclude has a higher priority than allow list

Our Amazon EKS-based deployment includes the following components:

- S3 buckets – We use Amazon Simple Storage Service (Amazon S3) for scalable and durable Hive data storage
- MySQL database – The database stores Hive metadata, facilitating efficient metadata retrieval and management
- EKS cluster – The cluster is comprised of three distinct node groups: platform, Hadoop, and Trino, each tailored for specific operational needs
- Hadoop cluster applications – These applications include HDFS for distributed storage and YARN for managing cluster resources
- Trino cluster application – This application enables us to run distributed SQL queries for analytics
- Apache Ranger – Ranger serves as the central security management tool for access policy across the big data components
- OpenLDAP – This is integrated as the LDAP service to provide a centralized user information repository, essential for user authentication and authorization
- Other cloud services resources – Other resources include a dedicated VPC for network security and isolation

By the end of this deployment process, we will have realized the following benefits:

- A high-performing, scalable big data platform that can handle complex data workflows with ease
- Enhanced security through centralized management of authentication and authorization, provided by the integration of OpenLDAP and Apache Ranger
- Cost-effective infrastructure management and operation, thanks to the containerized nature of services on Amazon EKS
- Compliance with stringent data security and privacy regulations, due to Apache Ranger's policy enforcement capabilities

Deploy a big data cluster on Amazon EKS and configure Ranger for access control

In this section, we outline the process of deploying a big data cluster on AWS EKS and configuring Ranger for access control. We use AWS CloudFormation templates for quick deployment of a big data environment on Amazon EKS with Apache Ranger. Complete the following steps:

Upload the provided template to AWS CloudFormation, configure the stack options, and launch the stack to automate the deployment of the entire infrastructure, including the EKS cluster and Apache Ranger integration. After a few minutes, you'll have a fully functional big data environment with robust security management ready for your analytical workloads, as shown in the following screenshot.

On the AWS web console, find the name of your EKS cluster. In this case, it's dep-demo-eks-cluster-ap-northeast-1. For example:

aws eks update-kubeconfig --name dep-eks-cluster-ap-northeast-1 --region ap-northeast-1

## Check pod status.
kubectl get pods --namespace hadoop
kubectl get pods --namespace platform
kubectl get pods --namespace trino

After Ranger Admin is successfully forwarded to port 6080 of localhost, go to localhost:6080 in your browser. Log in with user name admin and the password you entered earlier.

By default, you have already created two policies, Hive and Trino, and granted all access to the LDAP user you created (depadmin in this case). Also, the LDAP user sync service is set up and will automatically sync all users from the LDAP service created in this template.

Example permission configuration

In a practical application within a company, permissions for tables and fields in the data warehouse are divided based on business departments, isolating sensitive data for different business units. This provides data security and the orderly conduct of daily business operations. The following screenshots show an example business configuration.

The following is an example of an Apache Ranger permission configuration. The following screenshots show users associated with roles.

When performing data queries, using Hive and Spark as examples, we can demonstrate the comparison before and after permission configuration. The following screenshot shows an example of Hive SQL (running on Superset) with privileges denied. The following screenshot shows an example of Spark SQL (running in an IDE) with privileges denied. The following screenshot shows an example of Spark SQL (running in an IDE) with permissions granted.

Based on this example and considering your enterprise requirements, it becomes feasible and flexible to manage permissions in the data warehouse effectively.

Conclusion

This post provided a comprehensive guide on permission management in big data, particularly within the Amazon EKS platform using Apache Ranger, that equips you with the essential knowledge and tools for robust data security and management. By implementing the strategies and understanding the components detailed in this post, you can effectively manage permissions, implementing data security and compliance in your big data environments.

About the Authors

Yuzhu Xiao is a Senior Data Development Engineer at Amber Group with extensive experience in cloud data platform architecture. He has many years of experience in AWS Cloud platform data architecture and development, primarily focusing on efficiency optimization and cost control of enterprise cloud architectures.

Xin Zhang is an AWS Solutions Architect, responsible for solution consulting and design based on the AWS Cloud platform. He has rich experience in R&D and architecture practice in the fields of system architecture, data warehousing, and real-time computing.

View the full article
  3. Amazon CloudWatch Container Insights with Enhanced Observability for EKS now auto-discovers critical health metrics from your AWS accelerators (Trainium and Inferentia), AWS high performance network adapters (Elastic Fabric Adapter), and NVIDIA GPUs. You can visualize these out-of-the-box metrics in curated Container Insights dashboards to help monitor your accelerated infrastructure and optimize your AI workloads for operational excellence. View the full article
  4. Today, we are excited to announce that customers can now use Apache Livy to submit their Apache Spark jobs to Amazon EMR on EKS, in addition to using the StartJobRun API, Spark Operator, Spark Submit, and Interactive Endpoints. With this launch, customers can use a REST interface to easily submit Spark jobs or snippets of Spark code and retrieve results synchronously or asynchronously, while continuing to get all of the Amazon EMR on EKS benefits, such as the EMR-optimized Spark runtime, an SSL-secured Livy endpoint, and a programmatic setup experience. View the full article
  5. This post was coauthored by Venkatesh Nannan, Sr. Engineering Manager at Rippling.

Introduction

Rippling is a workforce management system that eliminates the friction of running a business, combining HR, IT, and Finance apps on a unified data platform. Rippling's mission is to free up intelligent people to work on hard problems.

Existing stack

Rippling uses a modular monolith architecture with different Docker entrypoints for multiple services and background jobs. These components are managed within a single, large, multi-tenant production cluster on Amazon Elastic Kubernetes Service (Amazon EKS) per region, at a scale of over 1000 nodes. Rippling's infra stack consists of:

- Karpenter for cluster autoscaling – a flexible, high-performance Kubernetes cluster autoscaler making sure of optimal compute capacity.
- Horizontal Pod Autoscaler for scaling Kubernetes pods based on demand.
- KEDA, an event-driven autoscaler for scaling background job processing containers based on event volume.
- IAM Roles for Service Accounts (IRSA), which provide temporary AWS Identity and Access Management (IAM) credentials to Kubernetes pods, enabling access to AWS resources such as Amazon Simple Storage Service (Amazon S3) buckets.
- Argo CD, an open-source, GitOps continuous delivery tool, which deploys applications and add-on software to the Kubernetes cluster.
- AWS Load Balancer Controller, which exposes Kubernetes services to end users.
- TargetGroupBinding Custom Resource, which binds pods to Application Load Balancer (ALB) target groups.
- Amazon EKS managed node groups spanning multiple Availability Zones (AZs).

In addition to these technologies, we were using the Cilium CNI for controlling network traffic between pods. However, we were running into challenges with this part of our stack, so we decided to look for alternatives.

Figure 1: High level architecture of Rippling

Challenges

As Amazon EKS version 1.23 approached end of life, upgrading to v1.27 became imperative. However, during our initial attempts at upgrading to v1.24 in our non-production environment, we encountered a significant hurdle: new nodes running Cilium failed to join the cluster, increasing our downtime and requiring operational work on the CNI plugin. As a company, we prioritize using managed services to streamline operations and focus on adding value to our business. This Kubernetes upgrade task gave us an opportunity to look at alternatives that would be easier to maintain. We saw that AWS had just announced VPC CNI support for Kubernetes network policies using eBPF. We realized that migrating to this solution would enable us to replace our third-party networking add-on and rely solely on VPC CNI for both cluster networking and network policy implementation. This change would help reduce the overhead of managing operational software needed for cluster networking.

Introduction of Amazon VPC CNI support for network policies

When AWS announced VPC CNI support for Kubernetes network policies using eBPF, we wanted to use the Amazon VPC CNI to secure the traffic in our Kubernetes clusters and simplify our EKS cluster management and operations. As network policy agents are bundled in existing VPC CNI pods, we would no longer need to run additional daemon pods and network policy controllers on the worker nodes. We followed the blue-green cluster upgrade strategy and were able to safely migrate the traffic from the old cluster to the new cluster with minimal risk of breaking existing workloads.
Planning the migration

We did an inventory of the applied network policies in our existing cluster and the various ingress/egress features used. This helped us identify deviations from upstream Kubernetes network policies, which is necessary for migrating, as Amazon VPC CNI supports only the upstream Kubernetes network policies as of this writing. Rippling was not using advanced features from our third-party network policy engine, such as global network policies, DNS-based policy rules, or rule priority. Therefore, we did not need Custom Resource Definition (CRD) transformations going into the migration process.

AWS recommends converting third-party NetworkPolicy CRDs to Kubernetes Network Policy resources and testing the converted policies in a separate test cluster before migrating from a third-party engine to the VPC CNI network policy engine in production. To assist in the migration process, AWS has developed a tool called K8s Network Policy Migrator that converts existing supported Calico/Cilium network policy CRDs to Kubernetes native network policies. After conversion, you can directly test the converted network policies on your new clusters running the VPC CNI network policy controller. The tool is designed to help streamline the migration process and make sure of a smooth transition.

Picking a migration strategy

There are broadly two strategies to migrate the CNI plugin in an EKS cluster: (1) in-place and (2) blue-green. The in-place strategy replaces an existing third-party CNI plugin with the VPC CNI plugin with network policy support in an existing EKS cluster. This would entail the following steps:

- Creating a new label "cni-plugin=3p" on the existing Amazon EKS managed node groups and Karpenter NodePool resources.
- Updating the existing third-party CNI DaemonSet to schedule CNI pods on those labeled nodes.
- Deploying the Amazon EKS add-on version of Amazon VPC CNI and scheduling its pods to nodes without the "cni-plugin" label. At this point the existing nodes have third-party CNI plugin pods and not the VPC CNI pods.
- Launching new Amazon EKS managed node groups and Karpenter NodePool resources without the "cni-plugin=3p" label so that VPC CNI pods can be scheduled to those nodes.
- Draining and deleting the existing Amazon EKS managed node groups and Karpenter NodePool resources to move the workloads to the new worker nodes with VPC CNI.
- Finally, deleting the third-party CNI and associated network policy controllers from the cluster.

As you can see, this process is involved, needs careful orchestration, and is more prone to errors that impact application availability. The second approach is the blue-green strategy, in which a new EKS cluster is launched with the VPC CNI plugin and the workloads are then migrated to it. This approach is safer since it can be rolled back and provides the ability to test the setup in isolation before routing live production traffic. Therefore, we chose the blue-green strategy for our migration.

Migration

As part of the blue-green strategy, we created a new EKS cluster with the Amazon VPC CNI and enabled network policy support by customizing the VPC CNI Amazon EKS add-on configuration. We also deployed the Argo CD agent on the cluster and bootstrapped it using Argo CD's app-of-apps pattern to deploy the applications into the cluster. Network policies were also deployed to the cluster using Argo CD; a representative policy is sketched below. This was tested in a non-production environment to migrate from the third-party CNI to VPC CNI and to make sure that applications and services passed functional tests.
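For illustration, here is a minimal example of the kind of upstream Kubernetes NetworkPolicy that the VPC CNI network policy agent enforces; the names, namespace, and port are hypothetical rather than Rippling's actual policies.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api        # hypothetical policy name
  namespace: payroll                 # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api                       # policy applies to the API pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend          # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080

Because this is plain upstream NetworkPolicy, the same manifest works unchanged whether it is enforced by a third-party engine or by the VPC CNI network policy agent, which is what makes the blue-green cutover straightforward.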
Then we could safely migrate the traffic from the old cluster to the new cluster without risk by leveraging the same strategy in the production environment.

Lessons learned

- Amazon VPC CNI uses the VPC IP space to assign IP addresses to Kubernetes pods. This led us to realize our existing VPCs were not properly sized to meet the growing number of pods. We added a permitted secondary CIDR block, 100.64.0.0/10, to the VPC and configured the VPC CNI custom networking feature to assign those IP addresses to the pods. This proactive measure makes sure of scalability as our infrastructure expands, mitigating concerns about IP address exhaustion.
- Leveraging automation and Infrastructure as Code (IaC) is recommended, especially as we are replicating existing clusters and migrating the workloads to them.

Conclusion

In this post, we discussed how Rippling migrated from a third-party CNI to the Amazon VPC CNI in its Amazon EKS clusters and enabled network policy support to secure pod-to-pod communications. Rippling used the blue-green strategy for the migration to minimize the application impact and safely cut over the traffic to the new cluster. This migration helped Rippling use the native features offered by AWS and reduced the burden of managing operational software in our EKS clusters.

Venkatesh Nannan, Sr. Engineering Manager – Infrastructure at Rippling

Venkatesh Nannan is a seasoned Engineering leader with expertise in building scalable cloud-native applications, specializing in backend development and infrastructure architecture.

View the full article
  6. Starting today, customers can receive granular cost visibility for Amazon Elastic Kubernetes Service (Amazon EKS) in the AWS Cost and Usage Reports (CUR), enabling you to analyze, optimize, and chargeback cost and usage for your Kubernetes applications. With AWS Split Cost Allocation Data for Amazon EKS, customers can now allocate application costs to individual business units and teams based on how Kubernetes applications consume shared EC2 CPU and memory resources. View the full article
  7. Amazon CloudWatch Container Insights now offers observability for Windows containers running on Amazon Elastic Kubernetes Service (EKS), and helps customers collect, aggregate, and summarize metrics and logs from their Windows container infrastructure. With this support, customers can monitor utilization of resources such as CPU, memory, disk, and network, as well as get enhanced observability such as container-level EKS performance metrics, Kube-state metrics and EKS control plane metrics for Windows containers. CloudWatch also provides diagnostic information, such as container restart failures, for faster problem isolation and troubleshooting for Windows containers running on EKS. View the full article
  8. We are excited to announce that Amazon EMR on EKS simplified the authentication and authorization user experience by integrating with Amazon EKS's improved cluster access management controls. With this launch, Amazon EMR on EKS will use EKS access management controls to automatically obtain the necessary permissions to run Amazon EMR applications on the EKS cluster. View the full article
  9. Introduction

Snapchat is an app that hundreds of millions of people around the world use to communicate with their close friends. The app is powered by microservice architectures deployed in Amazon Elastic Kubernetes Service (Amazon EKS) and datastores such as Amazon CloudFront, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and Amazon ElastiCache. This post explains how Snap builds its microservices, leveraging Amazon EKS with AWS Identity and Access Management (IAM). It also discusses how Snap protects its Kubernetes resources with the intelligent threat detection offered by GuardDuty, augmented with Falco and in-house tooling, to secure Snap's cloud-native service mesh platform.

The following figure (Figure 1) shows the main data flow when users send and receive snaps. To send a snap, the mobile app calls the Snap API Gateway, which routes the call to a Media service that persists the sender's message media in Amazon S3 (Steps 1-3). Next, the Friend microservice validates the sender's permission to Snap the recipient user by querying the messaging core service (MCS), which checks whether the recipient is a friend (Steps 4-6), and the conversation is stored in Snap DB, powered by DynamoDB (Steps 7-8). To receive a snap, the mobile app calls MCS to get the message metadata, such as the pointer to the media file, and calls the Media microservice to load the media file from the content system that persists user data, powered by CloudFront and Amazon S3 (Steps 9-11).

Figure 1: Snap's end-to-end data flow when users send and receive snaps

The original Snap service mesh design included a single tenant microservice per EKS cluster. Snap discovered, however, that managing thousands of clusters added an operational burden as microservices grew. Additionally, they discovered that many environments were underused and unnecessarily consuming AWS account resources such as IAM roles and policies. This required enabling microservices to share clusters and redefining tenant isolation to meet security requirements. Finally, Snap wanted to limit access to microservice data and storage while keeping them centralized in a network account meshed with Google's cloud.

The following figure illustrates Kubernetes-based microservices, Friends and Users, deployed in Amazon EKS or Google Kubernetes Engine (GKE). Snap users reach Snap's API Gateway through Envoy. Switchboard, Snap's mesh service configuration panel, updates Edge Envoy endpoints with available microservice resources after deploying them.

Figure 2 – Snap's high mesh design

Bootstrap

The purpose of this stage is the preparation and implementation of a secure multi-cloud compute provisioning system. Snap uses Kubernetes clusters as a construct that defines an environment that hosts one or more microservices, such as Friends and Users in the first figure. Snap's security bootstrap includes three layers when designing a Kubernetes-based multi-cloud: authentication, authorization, and admission control.

Snap uses IAM roles for Kubernetes service accounts (IRSA) to provision fine-grained service identities for microservices running in shared EKS clusters, allowing access to AWS services such as Amazon S3 and DynamoDB; a minimal sketch follows. For operator access scoped to the Kubernetes namespace, Snap built a tool to manage Kubernetes RBAC that maps Kubernetes roles to IAM, allowing developers to perform service operations following the principle of least privilege.
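As a hedged illustration of the IRSA pattern described above (the account ID, role, names, and namespace are placeholders, not Snap's actual configuration), a service account is annotated with an IAM role and referenced by the workload's pod spec:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: friends-service                # hypothetical microservice identity
  namespace: friends                   # hypothetical tenant namespace
  annotations:
    # IAM role assumed via the cluster's OIDC provider; it grants only the
    # S3/DynamoDB permissions this microservice needs
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/friends-service-role
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: friends-service
  namespace: friends
spec:
  replicas: 2
  selector:
    matchLabels:
      app: friends-service
  template:
    metadata:
      labels:
        app: friends-service
    spec:
      serviceAccountName: friends-service   # pods receive temporary credentials for the role above
      containers:
        - name: app
          image: registry.example.com/friends-service:latest   # placeholder image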
Beyond RBAC and IRSA, Snap wanted to impose deployment validations, such as making sure containers are instantiated from approved image registries (such as Amazon Elastic Container Registry (Amazon ECR), or images built and signed by approved CI systems), as well as preventing containers from running with elevated permissions. To accomplish this, Snap built admission controller webhooks.

Build-time

Snap believes in empowering its engineers to architect microservices autonomously within Kubernetes constructs. Snap's goal was to maximize Amazon EKS security benefits while abstracting Kubernetes semantics. Specifically: 1/ the security of the cloud, safeguarding the infrastructure that runs Amazon EKS, object stores (Amazon S3), data stores (KeyDB, ElastiCache, and DynamoDB), and the network that interconnects them. The security in the cloud includes 2/ protecting the Kubernetes cluster API server and etcd from malicious access, and finally, 3/ protecting Snap's applications' RBAC, network policies, data encryption, and containers.

Switchboard – Snap's mesh service configuration panel

Snap built a configuration hub called Switchboard to provide a single control panel across AWS and GCP to create Kubernetes clusters. Microservice owners can define environments in regions with specific compute types offered by cloud providers. Switchboard also enables service owners to follow approval workflows to establish trust for service-to-service authentication and to specify public routes and other service configurations. It allows service owners to manage service dependencies and traffic routes between Kubernetes clusters. Switchboard presents a simplified configuration model based on the environments. It manages metadata, such as the service owner, team email, and on-call paging information.

Snap wanted to empower tenants to control access to microservice data objects (images, audio, and video) and metadata stores such as databases and cache stores, so they deployed the data stores in separate data accounts controlled by IAM roles and policies in those accounts. Snap needed to centralize microservice network paths to mesh with GCP resources. Therefore, Snap deployed the microservices in a centralized account, using IAM roles for service accounts that assume roles in the tenants' data AWS accounts. The following figure shows how multiple environments (Kubernetes clusters) host three tenants using two different IRSA roles. Friends' Service A can read and write to a dedicated DynamoDB table deployed in a separate AWS account. Similarly, MCS' Service B can get and cache sessions or friends in ElastiCache.

Figure 2 – Snap's high mesh design

One of Snap's design principles was to maximize autonomy while maintaining the desired level of isolation between environments, all while minimizing operational overhead. Snap chose Kubernetes service accounts as the minimal isolation level. Amazon EKS support for IRSA allowed Snap to leverage OIDC to simplify the process of granting IAM permissions to application pods. Snap also uses RBAC to limit access to Kubernetes cluster resources and secure cluster user authentication. Snap is considering adopting Amazon EKS Pod Identities to reuse associations when running the same application in multiple clusters, by applying identical associations to each cluster without modifying the role trust policy.

Deployment-time

Cluster access by human operators

AWS IAM users and roles are currently managed by Snap, which generates policies based on business requirements. Operators use Switchboard to request access to their microservice. Switchboard maps the IAM user to a cluster RBAC policy that grants access to Kubernetes objects; a namespace-scoped sketch of such a mapping follows.
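The mapping Switchboard maintains can be pictured as a namespace-scoped Role and RoleBinding like the following. The group name, namespace, and verbs are assumptions for illustration; in practice the IAM identity is tied to the Kubernetes group through the cluster's authentication configuration (for example, the aws-auth ConfigMap or EKS access entries).

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: friends-operator               # hypothetical role for the Friends service team
  namespace: friends
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "deployments"]
    verbs: ["get", "list", "watch"]    # read-only service operations
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: friends-operator-binding
  namespace: friends
subjects:
  - kind: Group
    name: friends-operators            # group the IAM principal is mapped to
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: friends-operator
  apiGroup: rbac.authorization.k8s.io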
Snap is evaluating AWS IAM Identity Center to let Switchboard federate AWS Single Sign-On (SSO) with a central identity provider (IdP), enabling cluster operators to have least-privilege access using cluster RBAC policies enforced through AWS IAM.

Isolation strategy

Snap chose to isolate Kubernetes cluster resources by namespace, achieving isolation by limiting container permissions with IAM roles for service accounts and CNI network policies. In addition, Snap provisions a separate pod identity for add-ons such as the CNI, Cluster Autoscaler, and Fluentd. Add-ons use separate IAM policies via IRSA rather than overly permissive EC2 instance IAM roles.

Network partitioning

Snap's mesh defines rules that restrict or permit network traffic between microservice pods with Amazon VPC CNI network policies. Snap wanted to minimize the IP exhaustion caused by IPv4 address space limitations at its massive scale. Instead of working around IPv4 limitations using Kubernetes IPv4/IPv6 dual-stack, Snap wanted to migrate gracefully to IPv6. Snap can connect IPv4-based Amazon EKS clusters to IPv6 clusters using Amazon EKS IPv6 support and the Amazon VPC CNI.

Container hardening

Snap built an admission controller webhook to audit and enforce pod security context, preventing containers from running with elevated permissions (RunAs) or accessing volumes at the cluster or namespace level. Snap validates that workloads don't use configurations that break container isolation, such as hostIPC, hostNetwork, and hostPort.

Figure 4 – Snap's admission controller service
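As an illustration of the constraints such an admission webhook enforces, the following is a hedged sketch of a pod spec that would pass those checks; the image and names are placeholders, not Snap's actual policy definitions.

apiVersion: v1
kind: Pod
metadata:
  name: hardened-example               # illustrative only
  namespace: friends
spec:
  hostNetwork: false                   # settings the webhook rejects when set to true
  hostIPC: false
  hostPID: false
  containers:
    - name: app
      image: registry.example.com/app:latest    # must come from an approved registry
      securityContext:
        runAsNonRoot: true             # no elevated RunAs permissions
        allowPrivilegeEscalation: false
        privileged: false
        capabilities:
          drop: ["ALL"]
      ports:
        - containerPort: 8080          # no hostPort mapping; traffic enters through a Service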
Network policies

Kubernetes network policies enable you to define and enforce rules for traffic flow between pods. Policies act as a virtual firewall, which allows you to segment and secure your cluster by specifying network traffic rules for pods, namespaces, IP addresses, and ports. Amazon EKS extends and simplifies native support for network policies in the Amazon VPC CNI and Amazon EC2 security groups and network access control lists (NACLs) through the upstream Kubernetes Network Policy API.

Run-time

Audit logs

Snap needs to audit system activities to enhance compliance, intrusion detection, and policy validation: tracking unauthorized access, policy violations, suspicious activities, and incident responses. Snap uses Amazon EKS control plane logging, which ingests API server, audit, authenticator, controller manager, and scheduler logs into CloudWatch. It also uses AWS CloudTrail for cross-AWS-service access and Fluentd to ingest application logs into CloudWatch and Google's operations suite.

Runtime security monitoring

Snap has begun using GuardDuty EKS Protection. This helps Snap monitor EKS cluster control plane activity by analyzing Amazon EKS audit logs to identify unauthorized and malicious access patterns. This functionality, combined with their admission controller events, provides coverage of cluster changes. For runtime monitoring, Snap uses the open source Falco agent to monitor EKS workloads in the Snap service mesh. GuardDuty findings are contextualized by Falco rules based on container running processes. This context helps to identify the cluster tenants with whom to triage the findings. Falco agents support Snap's runtime monitoring goals and deliver consistent reporting. Snap complements GuardDuty with Falco to ensure changes are not made to a running container, by monitoring and analyzing container syscalls (container drift detection rule).

Conclusion

Snap's cloud infrastructure has evolved from running a monolith inside Google App Engine to microservices deployed in Kubernetes across AWS and GCP. This streamlined architecture helped improve Snapchat's reliability. Snap's Kubernetes multi-tenant vision needed abstraction of cloud provider security semantics, such as AWS security features, to comply with strict security and privacy standards. This post reviewed the methods and systems used to implement a secure compute and data platform on Amazon EKS and Amazon data stores, including bootstrapping, building, deploying, and running Snap's workloads. Snap is not stopping here. Learn more about Snap and our collaboration with Snap.

View the full article
  10. Introduction

Quora is a leading Q&A platform with a mission to share and grow the world's knowledge, serving hundreds of millions of users worldwide every month. Quora uses machine learning (ML) to generate a custom feed of questions, answers, and content recommendations based on each user's activity, interests, and preferences. ML drives targeted advertising on the platform, where advertisers use Quora's vast user data and sophisticated targeting capabilities to deliver highly personalized ads to the audience. Moreover, ML plays a pivotal role in maintaining high-quality content for users by effectively filtering spam and moderating content. Quora launched Poe, a generative artificial intelligence (AI) based chatbot app, by leveraging different Large Language Models (LLMs) and offering fast and accurate responses. Poe aims to simplify the user experience and provide continuous back-and-forth dialogue while integrating with the major LLMs and other generative AI models.

Quora successfully modernized its model serving with NVIDIA Triton Inference Server (Triton) on Amazon Elastic Kubernetes Service (Amazon EKS). This move enabled a small team of ML engineers to manage, operate, and enhance model serving efficiently. This post delves into the design decisions, the benefits of running NVIDIA Triton Server on Amazon EKS, and how Quora reduced model serving latency by three times and model serving cost by 23%.

Previous model serving architecture

Quora was running its model serving in hybrid mode, where around half of the models were hosted on TensorFlow Serving (TFS) and the other half were hosted on a custom Python engine. The custom Python engine supported different model frameworks, such as PyTorch, XGBoost, Microsoft LightGBM, and scikit-learn, whereas TFS was used only for the TensorFlow model framework.

Figure 1: Previous model serving architecture

Challenges with the previous model serving architecture

The custom Python engine uses Apache Thrift, whereas TFS uses the gRPC framework. Maintaining different frameworks for implementing and managing remote procedure calls (RPC) in the model serving architecture added significant complexity. The existing system faced challenges with using GPUs effectively for serving, which led to unnecessary resource waste and increased costs. Furthermore, both had limited support for GPU optimization techniques, which restricted model performance and efficiency. There was a pressing need at Quora to serve recommendation models with large embeddings on GPUs instead of CPUs to improve cost.

Limitations of the custom Python engine

- Performance: Models deployed on the custom Python engine, which used Apache Thrift for RPC communication, encountered high latency that impacted model performance. On certain occasions, response time could soar up to 1500 milliseconds (ms), in stark contrast to the anticipated latency of 50 ms.
- Service mesh integration: Quora uses the Istio service mesh. gRPC natively supports HTTP/2 and integrates seamlessly with service mesh technologies, which provide ease of support for features such as traffic mirroring and rate limiting. Apache Thrift does not support HTTP/2 and is not natively integrated with the Istio service mesh.
- High-traffic management: Custom Python engine models faced challenges in handling high-traffic scenarios due to limitations in the engine's client-side rate limiting mechanism. gRPC integrates seamlessly with server-side mesh-based rate limiting solutions, providing a much more robust and scalable way to manage surges in traffic and maintain system stability.
This method has been particularly effective for making sure of smooth operation during spikes in queries per second (QPS). The significant disparity in response times across different models underscores the need for an optimized solution to enhance overall model serving performance and to meet specific latency and throughput requirements, particularly in critical use cases such as ads ranking and user feed. Quora was looking for a new model serving solution that resolves the preceding challenges and also supports multiple ML frameworks, such as ONNX and TensorRT.

Solution overview

Overview of NVIDIA Triton Inference Server

NVIDIA Triton Inference Server is an open-source software solution purpose-built for serving ML models. It optimizes the deployment of models in production by maximizing hardware use, supporting multiple frameworks, and providing a range of flexible serving options.

Why did Quora select NVIDIA Triton Inference Server on Amazon EKS?

To improve performance and optimize the cost of its model serving, Quora investigated various software and hardware, aiming to reduce latency and increase model throughput. Quora eventually selected NVIDIA Triton Inference Server due to its potential to meet the challenges in its model serving infrastructure. Triton is designed to effectively utilize GPUs for serving a wide variety of models, and its flexible deployment options made it an optimal choice for modernizing Quora's model serving. The reasons for choosing Triton include:

- Multiple ML frameworks: Triton supports multiple ML frameworks, such as TensorFlow, PyTorch, ONNX, TensorRT, OpenVINO, HugeCTR, and FIL (Forest Inference Library). The broad framework support facilitates the migration of all models from the current custom Python engine to Triton.
- HTTP/gRPC endpoints: Triton provides HTTP/gRPC endpoints for model serving, which simplifies integration with Quora's existing Istio service mesh.
- High performance: Triton quickly and efficiently processes requests, making it perfect for applications requiring low latency. It includes essential features such as rate limiting, status and health checks, dynamic batching, and concurrent model execution.
- Scalability: It can easily scale up to handle large workloads and is designed to handle multiple models and data sources. Additionally, it supports a wide range of hardware (such as GPUs and CPUs), multi-node deployment, model versioning, and ensemble model handling. This makes it easy to deploy models on different hardware configurations.
- Managed observability: Integration with Prometheus and Grafana for metrics, tools that are already in use at Quora for monitoring ML systems.
- Recommendation model serving on GPUs: NVIDIA Merlin HugeCTR (Huge Click-Through-Rate) is a GPU-accelerated deep neural network (DNN) training and inference framework designed for efficiently serving recommendation models with large embeddings on NVIDIA GPUs.
- Auto-tuning tools for model optimization: Model Analyzer assesses runtime performance and suggests optimized configurations (batch size, instance group, CPU, memory, etc.), and Model Navigator automates the transition of models from source to the optimal format and configuration for Triton deployment.

Walkthrough

The following walkthrough guides you through this solution.

Architecture of running NVIDIA Triton Server on Amazon EKS

Quora chose gRPC as the standard client communication framework and Triton as the serving engine for all ML models.
There is a separate namespace for training and for model serving in the Amazon EKS cluster. Within model serving, separate node groups are used for the CPU-based models and the GPU-based models. Quora decided to move all new ML models onto the following architecture:

Figure 2: Modernized model serving

Migration to NVIDIA Triton Server on Amazon EKS

The existing ML model serving architecture was designed to accommodate multiple ML serving engines, such as the custom Python engine and TFS. The following steps were performed to add Triton Server to the model serving architecture and migrate GPU models to Triton:

- Generate stubs for the gRPC service: Quora chose to use the gRPC framework with Triton. To generate the stubs necessary for RPC communication between the server and client sides, we followed the HTTP/REST and gRPC protocol and used Triton's protobuf specification to generate these stubs.
- Set up NVIDIA Triton on Amazon EKS as the serving server:
  - Customize the NVIDIA base image with the ONNX framework: NVIDIA provides pre-built Docker containers for the NVIDIA Triton Inference Server, which are available in its NGC Catalog. However, to tailor the Triton container to our specific environment, we followed the instructions detailed in Triton's customization guide. This process included selecting the particular framework that our environment needs (for example, ONNX) and installing any additional libraries required by our models. To accommodate the variety of our models based on different frameworks, we built multiple Triton packages.
  - Add Triton-specific model configurations: Triton requires specific configuration details, such as the model's name, version, and procedures for preprocessing inputs and post-processing outputs. Triton is added as the third engine in the model serving architecture to incorporate Triton-specific settings within the existing model configuration. These configurations are serialized into the pbtxt file, which serves as the required model configuration in the model repository for Triton deployment.
  - Prepare the model to deploy on Triton: We took an existing PyTorch model, converted it to the ONNX format, and uploaded it to an Amazon Simple Storage Service (Amazon S3) model repository. We used the MLflow model registry for model versioning and incorporated Triton packages into our Continuous Integration/Continuous Deployment (CI/CD) pipeline.

With these steps, we successfully integrated the NVIDIA Triton Inference Server into the model serving architecture.
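To make the moving parts concrete, here is a hedged sketch of what a Triton deployment on an EKS GPU node group can look like: an init container stages the model repository from Amazon S3 into a local path, and the Triton container serves it over HTTP, gRPC, and metrics ports. Every name, label, image tag, and bucket below is an assumption for illustration, not Quora's actual configuration.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: triton-gpu                     # hypothetical deployment name
  namespace: model-serving             # hypothetical serving namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: triton-gpu
  template:
    metadata:
      labels:
        app: triton-gpu
    spec:
      nodeSelector:
        node-group: gpu                # assumed label on the GPU node group
      volumes:
        - name: model-repository
          emptyDir: {}
      initContainers:
        - name: fetch-models
          image: amazon/aws-cli:2.15.0
          # Stage model artifacts locally before Triton starts; credentials come from the
          # pod's IAM role for service accounts, and the bucket name is a placeholder
          command: ["aws", "s3", "sync", "s3://example-model-bucket/models/", "/mnt/models/"]
          volumeMounts:
            - name: model-repository
              mountPath: /mnt/models
      containers:
        - name: triton
          image: nvcr.io/nvidia/tritonserver:24.01-py3   # example Triton release
          args: ["tritonserver", "--model-repository=/mnt/models"]
          ports:
            - containerPort: 8000      # HTTP
            - containerPort: 8001      # gRPC
            - containerPort: 8002      # Prometheus metrics
          resources:
            limits:
              nvidia.com/gpu: 1        # one GPU per replica
          volumeMounts:
            - name: model-repository
              mountPath: /mnt/models

Staging the repository locally, rather than pointing Triton directly at S3, matches the lesson noted later in this post about placing model files under a predefined local path such as /mnt/models/.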
Notably, significant enhancements have been observed in Transformer and BERT-based models, such as DeBERTa, RoBERTa, XLM-RoBERTa, and E5 Text Embedding, with latency reductions exceeding seven times. The improved performance comes from conversion to the ONNX format, model quantization from FP32 to FP16 (which reduces model size and memory usage), the use of ONNX Runtime as the inference backend engine, and the use of gRPC as the communication framework.
Cost savings: The GPU model serving cost is reduced by 52%, which leads to 23% overall savings in model serving. The primary contributors to the cost savings are the conversion to ONNX and model quantization. With smaller models, Quora could increase throughput by two times and GPU utilization by three times, which ultimately improves efficiency and cuts down cost.
GPU use: The adoption of the ONNX runtime improved GPU use from 40% to 80%, leading to two times serving efficiency.
Unified RPC framework: The new setup promotes a unified framework by migrating all models to use gRPC and service mesh functionalities. This unification simplifies client-side RPC support and streamlines operations.
More time to focus on innovation: With Amazon EKS, engineers don't need to spend time on undifferentiated infrastructure management. It helps reduce operational burden, such as on-call pages. This allows ML engineers to dedicate more time to experimentation, training, and serving new models for an improved customer experience.
Lessons learned
Adopting new technologies can be a challenging journey, often fraught with unexpected obstacles and setbacks. Here are some of the lessons we learned:
ONNX as a preferred exchange format: Quora found ONNX to be an ideal open standard format for model serving. It's designed for interoperability, making it a perfect choice when working with models trained with various frameworks. After training an ML model in PyTorch or TensorFlow, we could easily convert it to ONNX and apply post-training quantization. This process led to significant improvements in latency and efficiency.
gRPC as the communication framework: Quora's experience has shown gRPC to be a reliable RPC framework offering improved performance and reliability.
Remote model repository feature in Triton: Although Triton supports a remote model repository in Amazon S3, our testing indicated that this feature did not function as anticipated. We recommend incorporating a step to fetch the model files from Amazon S3 and place them into a predefined local path, such as /mnt/models/ (a minimal fetch step is sketched below). This method guarantees the availability of model files at a recognized location, a critical need for Triton backends such as the python_backend, which requires a Python runtime and libraries, or the hugectr_backend, which requires access to embedding files.
Support of multi-ML frameworks: NVIDIA Triton Inference Server supports multiple frameworks, such as PyTorch, TensorFlow, TensorRT, or ONNX Runtime, with different hardware.
Amazon EKS as an ML service: Quora needed an extensible, self-serving ML service based on a microservice architecture that helps ML engineers iterate quicker before deploying the model. Ideally, this service should support various training and serving environments, essentially being a truly framework-agnostic training and model serving service. We found Amazon EKS to be the most suitable service for this purpose.
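One practical way to implement the local-path recommendation above is to sync the model repository from Amazon S3 into /mnt/models/ before the Triton container starts (for example, from an init container). The following is a minimal sketch of that fetch step; the bucket name and model layout are placeholder assumptions for illustration, not Quora's actual repository:
# Assumed placeholders: a bucket holding exported models and a <model>/<version> layout.
MODEL_BUCKET=my-model-repository
MODEL_NAME=text-embedding
# Copy the model files into the local path that Triton reads its model repository from.
aws s3 sync "s3://${MODEL_BUCKET}/${MODEL_NAME}/" "/mnt/models/${MODEL_NAME}/"
# Triton expects <model-name>/config.pbtxt plus <model-name>/<version>/ containing the model file.
ls -R /mnt/models/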
Conclusion
In this post, we showed how Quora modernized its model serving with NVIDIA Triton Inference Server on Amazon EKS, which provided a strong foundation for flexible, reliable, and efficient model serving. This service reduced model serving complexity, which enabled Quora to quickly adapt to changing business requirements. The key factors that drove the modernization decisions were the ability to support multiple ML frameworks, scale the model serving with effective compute resource management, increase system reliability, and reduce the cost of operations. The modernized model serving on Amazon EKS also decreased the ongoing operational support burden for engineers, and the scalability of the design improved customer experience and opened up opportunities for innovation and growth. We're excited to share our learnings with the wider community through this post, and to support other organizations that are starting their model serving journey or looking to improve their existing model serving pipelines. As part of our experience, we highly recommend modernizing your model serving with NVIDIA Triton on Amazon EKS. View the full article
  11. Today, Amazon EKS announces general availability of extended support for Kubernetes versions. You can run EKS clusters using any Kubernetes version for up to 26 months from the time the version becomes available on EKS. Extended support for Kubernetes versions is available for Kubernetes versions 1.21 and higher. For a full list of versions and support dates, see here. View the full article
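To see where your own clusters sit in these support windows, you can list each cluster's Kubernetes version with the AWS CLI and compare it against the version calendar. A minimal sketch, which simply iterates over whatever clusters exist in the current Region:
# Print the Kubernetes version of every EKS cluster in the current Region.
for cluster in $(aws eks list-clusters --query 'clusters[]' --output text); do
  version=$(aws eks describe-cluster --name "$cluster" --query 'cluster.version' --output text)
  echo "$cluster is running Kubernetes $version"
done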
  12. Introduction We are excited to announce general availability of Amazon Linux 2023 (AL2023) on Amazon Elastic Kubernetes Service (Amazon EKS). AL2023 is the next generation of Amazon Linux from Amazon Web Services (AWS) and is designed to provide a secure, stable, high-performance environment to develop and run your cloud applications. The standard AL2023-based EKS-optimized Amazon Machine Image (AMI) can be used with Karpenter, managed node groups (MNG), and self-managed nodes in all AWS Regions. It can be used on Amazon EKS versions 1.25 or greater in standard support and Amazon EKS versions 1.23 and 1.24 in extended support. The standard AL2023-based EKS-optimized AMI is available for both x86 and ARM64 variants. Accelerated AL2023 AMIs will be released at a later date. If you would like to continue to use the accelerated AMI, you should do so by using the AL2-based accelerated AMI. The standard AL2023-based EKS-optimized AMI helps you increase security and improve application performance. AL2023 includes preconfigured security policies to help you adhere to common industry guidelines that can be confirmed at launch time or run time. By default, any Amazon EC2 instances launched with the AL2023-based EKS-optimized AMI will automatically use Security Enhanced Linux (SELinux), Open Secure Sockets Layer version 3 (OpenSSL 3), and Instance Metadata Service Version 2 (IMDSv2). To learn more about how these protocols will help improve your security posture, see Comparing AL2 to Amazon Linux 2023. From a performance standpoint, the standard AL2023-based EKS-optimized AMI optimizes boot time to reduce the time from instance launch to running your applications. These optimizations span the Amazon Linux kernel and beyond. AL2023 offers an integrated experience with many AWS-specific tools such as Systems Manager and the AWS Command Line Interface (AWS CLI). The standard AL2023-based EKS-optimized AMI also uses Control Group version 2 (cgroup v2), which is a Linux kernel feature that hierarchically organizes processes and distributes system resources between them. Cgroup v2 offers several improvements such as a single unified hierarchy design in API, safer sub-tree delegation to containers, enhanced resource allocation management, and isolation across multiple resources. Together, these improvements help with unifying accounting of different memory allocations such as network memory and kernel memory. Cgroup v2 became available on Kubernetes starting with version 1.25, and now by default comes with AL2023. Before upgrading to AL2023 While there are security and performance benefits, there are also several package changes and we recommend you to test applications thoroughly before upgrading applications in production environments. For a list of all package changes in AL2023, refer to Package changes in Amazon Linux 2023. In addition to these changes in AL2023, you should be aware of the following: AL2023 introduces a new node initialization process nodeadm that uses a YAML configuration schema. If you are using self-managed nodes or MNG with a custom launch template, you will now need to provide additional cluster metadata explicitly when creating a new node group. 
An example of the minimum required parameters is as follows, where apiServerEndpoint, certificateAuthority, and the service cidr are now required:
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster
    apiServerEndpoint: https://example.com
    certificateAuthority: Y2VydGlmaWNhdGVBdXRob3JpdHk=
    cidr: 10.100.0.0/16
In AL2, the metadata from these parameters was discovered from the Amazon EKS DescribeCluster API call as part of the bootstrap.sh script. With AL2023, this behavior has changed since the additional API call risks throttling during large node scale ups. This change does not affect customers that are using MNG without a custom launch template or Karpenter. For more information on certificateAuthority and the service cidr, see DescribeCluster.
Docker is not supported on AL2023 for any supported Amazon EKS version. Support for Docker has ended and been removed from Amazon EKS version 1.24 or greater on AL2. For more information on deprecation, see Amazon EKS ended support for Dockershim.
VPC CNI v1.16.2 or greater is required for AL2023.
AL2023 requires IMDSv2 by default. IMDSv2 has several benefits that help improve security posture. It uses a session-oriented authentication method that requires the creation of a secret token in a simple HTTP PUT request to start the session. A session's token can be valid for anywhere between 1 second and 6 hours. For more information on how to transition from IMDSv1 to IMDSv2, see Transition to using Instance Metadata Service Version 2 and Get the full benefits of IMDSv2 and disable IMDSv1 across your AWS infrastructure. If you would like to use IMDSv1, you can still do so by manually overriding the settings using instance metadata option launch properties. Note: For IMDSv2, the default hop count for MNG is set to 1. This means that containers will not have access to the node's credentials via IMDS; as noted above, this can be changed by overriding the instance metadata options in your launch template.
Using the standard AL2023-based Amazon EKS Amazon Machine Image
The Amazon EKS-optimized Amazon Linux 2023 AMI is built on top of Amazon Linux 2023, and is configured to serve as the base image for Amazon EKS nodes. The AMI is configured to work with Amazon EKS and includes the following components:
kubelet
AWS IAM Authenticator
containerd
To retrieve the AMI ID, you will need to query the AWS Systems Manager Parameter Store API. To do this, you need to first determine the AWS Region where your node will be deployed and the Kubernetes version of the cluster your node will join. You can then run the following AWS CLI command to retrieve the appropriate AL2023 AMI ID. At launch, the AL2023 AMI comes in two flavors: x86 standard and arm64 standard.
AL2023_x86_64_STANDARD: 'amazon-linux-2023/x86_64/standard'
AL2023_ARM_64_STANDARD: 'amazon-linux-2023/arm64/standard'
Replace the AWS Region and Kubernetes version as appropriate. Note, you must be logged into the AWS CLI using an IAM principal that has the ssm:GetParameter IAM permission to retrieve the Amazon EKS-optimized AMI metadata.
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.29/amazon-linux-2023/x86_64/standard/recommended/image_id --region region-code --query "Parameter.Value" --output text
For managed node groups (MNG)
You can create a new MNG using the CreateNodeGroup Amazon EKS API and specifying the AMI family type, either AL2023_x86_64_STANDARD or AL2023_ARM_64_STANDARD.
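For example, the CreateNodeGroup call above can be made from the AWS CLI. This is a rough sketch; the cluster name, node role ARN, and subnet IDs are placeholders you would replace with your own values:
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name al2023-mng \
  --ami-type AL2023_x86_64_STANDARD \
  --node-role arn:aws:iam::111122223333:role/eksNodeRole \
  --subnets subnet-0abc1234 subnet-0def5678 \
  --scaling-config minSize=1,maxSize=3,desiredSize=2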
The new node group will be created with the latest AL2023 AMI. If you would like to use a specific AMI version, you can specify the AL2023 Amazon EKS-optimized AMI ID in a custom launch template. If you are using the Amazon EKS console to create a new MNG, you can select Amazon Linux 2023 from the drop-down menu for AMI type.
If you have an existing MNG, you can upgrade to AL2023 by performing either an in-place upgrade or a blue/green upgrade, depending on how you are using a launch template:
If you are using a custom AMI with an MNG and you are specifying the AMI ID, you can perform an in-place upgrade by swapping the AMI ID in the launch template. You should ensure that your applications and any user data transfer over to AL2023 first, before performing this upgrade strategy.
If you are using MNG with either the standard launch template or with a custom launch template that does not specify the AMI ID, you are required to upgrade using a blue/green strategy since, at this time, you cannot edit the amiType in the MNG. A blue/green upgrade is an alternative strategy that is more involved, since a new node group will be created with AL2023 as the AMI type. You will need to ensure that the new node group is carefully configured so that all custom user data from the AL2 node group is compatible with the new operating system. Once the new node group is ready, nodes in the old node group can be cordoned and drained so that pods are scheduled on the new node group. For more on custom user data, see Customizing managed nodes with launch templates.
Starting with Amazon EKS version 1.30, new MNGs will default to using the AL2023 Amazon EKS-optimized AMI. You can continue to use AL2 by choosing it as the AMI type when creating a new MNG.
For Karpenter
Karpenter users who want to use AL2023 should set the EC2NodeClass amiFamily field to AL2023. By default, Drift is enabled in Karpenter. This means that once the amiFamily field has been changed, Karpenter will detect that the Karpenter-provisioned nodes are using the EKS-optimized AMI for the previous AMI family. Karpenter will then automatically cordon, drain, and replace those nodes with the new AMI.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  # Required, resolves a default ami and userdata
  amiFamily: AL2023
Conclusion
The Amazon EKS-optimized AL2023 AMI helps you improve the performance and security posture of your applications. The EKS-optimized AL2023 AMI is available today for MNG, Karpenter, and self-managed nodes. You can also customize your EKS-optimized AMIs using the packer build steps listed in the amazon-eks-ami GitHub repo. To learn more about using Amazon Linux 2023 with Amazon EKS, see Amazon EKS-optimized Amazon Linux AMIs. For accelerated workloads, the Accelerated AL2023-based AMIs will be released at a later date. If you would like to continue to use the accelerated AMI, you should do so by using the AL2-based accelerated AMI. View the full article
  13. Analyze the traffic patterns on any public-facing website or web app, and you'll notice connection requests from all over the world. Apart from the intended traffic, a typical web application responds to requests from bots, health checks, and various attempts to circumvent security and gain unauthorized access. In addition to impacting your customer's experience, these requests can also increase your AWS spend. Today, business-critical web apps rely on web application firewalls to block unwanted traffic and protect apps from common vulnerabilities. But many customers struggle with the complexity when it comes to implementing an effective web application firewall. AWS Web Application Firewall (AWS WAF) and AWS Firewall Manager are designed to make it easy for you to protect your web applications and APIs from common web exploits that can disrupt services, increase resource usage, and put data at risk. This post describes how to use AWS WAF and AWS Firewall Manager to protect web-based workloads that run in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. AWS WAF gives you control over the type of traffic that reaches your web applications. It allows you to monitor HTTP(S) requests to web applications and protect them against DDoS attacks, bots, and common attack patterns, such as SQL injection or cross-site scripting. If you are unfamiliar with web application firewalls (WAFs), you'd be pleased to learn that you don't have to be a regular expressions wrangler or an expert in firewall rules to use AWS WAF. It comes with a set of AWS-managed rules, so you don't have to write filtering logic to protect against common application vulnerabilities or unwanted traffic. AWS WAF integrates with Amazon CloudFront, Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync. If you already use an ALB as an ingress for your Kubernetes-hosted applications, you can add a web application firewall to your apps within a few minutes. Customers that operate in multiple AWS accounts can use AWS Organizations and AWS Firewall Manager to control AWS WAF rules in multiple accounts from a single place. AWS Firewall Manager monitors for new resources or accounts created to ensure they comply with a mandatory set of security policies. It is a best practice to run EKS clusters in dedicated VPCs, and Firewall Manager can ensure your WAF rules get applied across accounts, wherever your applications run.
Solution
This post demonstrates how to implement a web application firewall using AWS WAF to protect applications running on EKS. We will start by creating an EKS cluster and deploying a sample workload. The sample application that we will use for this walkthrough is a web-based application that we'll expose using an Application Load Balancer. We'll then create a Kubernetes ingress and associate an AWS WAF web access control list (web ACL) with an ALB in front of the ingress.
Prerequisites
You will need the following to complete the tutorial:
AWS CLI version 2
eksctl
kubectl
Helm
Git
Create an EKS cluster
Let's start by setting a few environment variables:
WAF_AWS_REGION=us-west-2 #Change this to match your region
WAF_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
WAF_EKS_CLUSTER_NAME=waf-eks-sample
Create a cluster using eksctl:
Note: This may take approx. 15 mins to create the cluster.
eksctl create cluster \
  --name $WAF_EKS_CLUSTER_NAME \
  --region $WAF_AWS_REGION \
  --managed
Store the cluster's VPC ID in an environment variable as we will need it for the next step:
WAF_VPC_ID=$(aws eks describe-cluster \
  --name $WAF_EKS_CLUSTER_NAME \
  --region $WAF_AWS_REGION \
  --query 'cluster.resourcesVpcConfig.vpcId' \
  --output text)
Install the AWS Load Balancer Controller
The AWS Load Balancer Controller is a Kubernetes controller that runs in your EKS cluster and handles the configuration of Network Load Balancers and Application Load Balancers on your behalf. It allows you to configure load balancers declaratively, in the same manner as you handle the configuration of your application. Install the AWS Load Balancer Controller by running these commands:
## Associate OIDC provider
eksctl utils associate-iam-oidc-provider \
  --cluster $WAF_EKS_CLUSTER_NAME \
  --region $WAF_AWS_REGION \
  --approve
## Download the IAM policy document
## Create an IAM policy
WAF_LBC_IAM_POLICY=$(aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy-WAFDEMO \
  --policy-document file://iam-policy.json)
## Get IAM Policy ARN
WAF_LBC_IAM_POLICY_ARN=$(aws iam list-policies \
  --query "Policies[?PolicyName=='AWSLoadBalancerControllerIAMPolicy-WAFDEMO'].Arn" \
  --output text)
## Create a service account
eksctl create iamserviceaccount \
  --cluster=$WAF_EKS_CLUSTER_NAME \
  --region $WAF_AWS_REGION \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --override-existing-serviceaccounts \
  --attach-policy-arn=${WAF_LBC_IAM_POLICY_ARN} \
  --approve
## Add the helm repo and install the AWS Load Balancer Controller
helm repo add eks https://aws.github.io/eks-charts && helm repo update
helm install aws-load-balancer-controller \
  eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=$WAF_EKS_CLUSTER_NAME \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set vpcId=$WAF_VPC_ID \
  --set region=$WAF_AWS_REGION
Verify that the controller is installed:
kubectl get deployment -n kube-system aws-load-balancer-controller
Deploy the sample app
We will use a sample application called Yelb for this demo. It provides an Angular 2-based UI that will represent a real-world application for this post. Here's a high-level architectural view of Yelb:
Clone the repository and deploy Yelb in your EKS cluster:
git clone https://github.com/aws/aws-app-mesh-examples.git
cd aws-app-mesh-examples/walkthroughs/eks-getting-started/
kubectl apply -f infrastructure/yelb_initial_deployment.yaml
Check the deployed resources within the yelb namespace:
kubectl get all -n yelb
Note: The Postgres database that Yelb uses is not configured to use a persistent volume.
Expose Yelb using an ingress
Let's create a Kubernetes ingress to make Yelb available publicly. The AWS Load Balancer Controller will associate the ingress with an Application Load Balancer.
cat << EOF > yelb-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: yelb.app
  namespace: yelb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb # Updated method to attach ingress class
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: yelb-ui
                port:
                  number: 80
EOF
kubectl apply -f yelb-ingress.yaml
Test the application by sending a request using curl or by using a web browser to navigate to the URL.
It may take some time for the load balancer to become available. Use the command below to confirm:
kubectl wait -n yelb ingress yelb.app --for=jsonpath='{.status.loadBalancer.ingress}' && \
YELB_URL=$(kubectl get ingress yelb.app -n yelb \
  -o jsonpath="{.status.loadBalancer.ingress[].hostname}")
You can obtain the URL using the Kubernetes API and also navigate to the site by entering the URL:
echo $YELB_URL
Add a web application firewall to the ingress
Now that our sample application is functional, let's add a web application firewall to it. The first thing we need to do is create an AWS WAF web ACL. In AWS WAF, a web access control list or a web ACL monitors HTTP(S) requests for one or more AWS resources. These resources can be an Amazon API Gateway, AWS AppSync, Amazon CloudFront, or an Application Load Balancer. Within an AWS WAF web ACL, you associate rule groups that define the attack patterns to look for in web requests and the action to take when a request matches the patterns. Rule groups are reusable collections of rules. You can use managed rule groups offered and maintained by AWS and AWS Marketplace sellers. When you use managed rules, AWS WAF automatically updates your WAF rules regularly to ensure that your web apps are protected against newer threats. You can also write your own rules and use your own rule groups.
Create an AWS WAF web ACL:
WAF_WACL_ARN=$(aws wafv2 create-web-acl \
  --name WAF-FOR-YELB \
  --region $WAF_AWS_REGION \
  --default-action Allow={} \
  --scope REGIONAL \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=YelbWAFAclMetrics \
  --description 'WAF Web ACL for Yelb' \
  --query 'Summary.ARN' \
  --output text)
echo $WAF_WACL_ARN
Store the AWS WAF web ACL's Id in an environment variable, as it is required for updating the AWS WAF web ACL in the upcoming steps:
WAF_WAF_ID=$(aws wafv2 list-web-acls \
  --region $WAF_AWS_REGION \
  --scope REGIONAL \
  --query "WebACLs[?Name=='WAF-FOR-YELB'].Id" \
  --output text)
Update the ingress and associate this AWS WAF web ACL with the ALB that the ingress uses:
cat << EOF > yelb-ingress-waf.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: yelb.app
  namespace: yelb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/wafv2-acl-arn: ${WAF_WACL_ARN}
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: yelb-ui
                port:
                  number: 80
EOF
kubectl apply -f yelb-ingress-waf.yaml
By adding the alb.ingress.kubernetes.io/wafv2-acl-arn annotation to the ingress, AWS WAF now inspects incoming traffic. However, it's not blocking any traffic yet. Before we send a request to our sample app using curl, let's wait for the load balancer to become ready for traffic:
kubectl wait -n yelb ingress yelb.app --for=jsonpath='{.status.loadBalancer.ingress}'
Now let's send traffic to our sample app:
curl $YELB_URL
You should see a response from Yelb's UI server.
Enable traffic filtering in AWS WAF
We have associated the ALB that our Kubernetes ingress uses with an AWS WAF web ACL. Every request that's handled by our sample application's Yelb pods goes through AWS WAF for inspection. The AWS WAF web ACL is currently allowing every request to pass because we haven't configured any AWS WAF rules. In order to filter out potentially malicious traffic, we have to specify rules. These rules will tell AWS WAF how to inspect web requests and what to do when it finds a request that matches the inspection criteria.
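Optionally, before adding rules, you can confirm that the web ACL is associated with the ALB that the ingress created. A quick read-only check with the AWS CLI (it only lists associations and does not change anything):
# List the resources (here, the ALB) currently associated with the web ACL.
aws wafv2 list-resources-for-web-acl \
  --web-acl-arn $WAF_WACL_ARN \
  --resource-type APPLICATION_LOAD_BALANCER \
  --region $WAF_AWS_REGION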
AWS WAF Bot Control is a managed rule group that provides visibility and control over common and pervasive bot traffic to web applications. The Bot Control managed rule group has been tuned to detect various types of bots seen on the web. It can also detect requests that are generated from HTTP libraries, such as libcurl. Since our sample workload isn't popular enough to attract malicious traffic, let's use curl to generate bot-like traffic. Once enabled, we expect users who are accessing our application from a web browser like Firefox or Chrome to be allowed in, whereas traffic generated from curl would be blocked. While Bot Control has been optimized to minimize false positives, we recommend that you deploy Bot Control in count mode first and review CloudWatch metrics and AWS WAF logs to ensure that you are not accidentally blocking legitimate traffic. You can use the Labels feature within AWS WAF to customize how Bot Control behaves. Based on labels generated by Bot Control, you can have AWS WAF take an alternative action, such as sending customized responses back to the client. Customers use custom responses to override the default response, which is 403 (Forbidden), for block actions when they'd like to send a nondefault status code, serve a static error page back to the client, or redirect the client to a different URL by specifying a 3xx redirection status code.
Create a rules file:
cat << EOF > waf-rules.json
[
  {
    "Name": "AWS-AWSManagedRulesBotControlRuleSet",
    "Priority": 0,
    "Statement": {
      "ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesBotControlRuleSet"
      }
    },
    "OverrideAction": {
      "None": {}
    },
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "AWS-AWSManagedRulesBotControlRuleSet"
    }
  }
]
EOF
Update the AWS WAF web ACL with the rule:
aws wafv2 update-web-acl \
  --name WAF-FOR-YELB \
  --scope REGIONAL \
  --id $WAF_WAF_ID \
  --default-action Allow={} \
  --lock-token $(aws wafv2 list-web-acls \
    --region $WAF_AWS_REGION \
    --scope REGIONAL \
    --query "WebACLs[?Name=='WAF-FOR-YELB'].LockToken" \
    --output text) \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=YelbWAFAclMetrics \
  --region $WAF_AWS_REGION \
  --rules file://waf-rules.json
Press q to exit the NextLockToken section. After waiting about 10 seconds, test the rule by sending a request:
curl $YELB_URL
As you can see, the application is no longer accessible from the terminal. Now let's open the same URL in a web browser, where you should still see the Yelb UI:
echo http://$YELB_URL
Note that we added the AWSManagedRulesBotControlRuleSet rule group to the AWS WAF web ACL (see the configuration file waf-rules.json). This rule group contains rules to block and manage requests from bots, as described in the AWS WAF documentation. AWS WAF blocks the requests we send using curl because the AWS WAF web ACL rules are configured to inspect and block requests with user agent strings that don't seem to be from a web browser.
AWS WAF logging and monitoring
Network security teams require AWS WAF logging to meet their compliance and auditing needs. AWS WAF provides near-real-time logs through Amazon Kinesis Data Firehose. AWS WAF logs each request along with information such as timestamp, header details, and the action of the rule that matched. Customers can integrate AWS WAF logs with security information and event management (SIEM) solutions or other log analysis tools for debugging and forensics.
You can enable access logging in AWS WAF, save AWS WAF logs to Amazon S3, and use Amazon Athena to query WAF logs without creating servers. AWS WAF also allows you to redact certain fields during logging, which is helpful if your requests contain sensitive information that should not be logged. After implementing an AWS WAF, it is critical to regularly review your applications’ traffic to develop a baseline understanding of its traffic patterns. Application and security teams should review AWS WAF metrics and dimensions to ensure that the web ACL rules block requests that can potentially compromise the application’s security and availability. AWS Shield Advanced and WAF AWS Shield Advanced subscribers can also engage the AWS Shield response team during an active DDoS attack. The AWS Shield Response team helps you analyze suspicious activity and assists you in mitigating the issue. The mitigation often involves updating or creating AWS WAF rules and AWS WAF web ACLs in your account. AWS Firewall Manager AWS Firewall Manager enables customers that operate multiple AWS accounts to centrally manage their web ACL. It simplifies administration and maintenance tasks across multiple accounts and resources for a variety of protections, including AWS WAF, AWS Shield Advanced, Amazon VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall. If you’d like to use AWS Firewall Manager to centralize the control of AWS WAF in multiple AWS accounts, you’d also need: AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an organization and Enabling all features in your organization. A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the Firewall Manager administrator for Firewall Manager. This gives the account permission to deploy security policies across the organization. AWS Config: You must enable AWS Config for all of the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, use the Enable AWS Config template from the StackSets sample templates. You can associate Firewall Manager with either a management account or a member account that has appropriate permissions as a delegated administrator. AWS Organizations’ documentation includes more information about using Firewall Manager with AWS Organizations. 
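As a concrete illustration of the logging options described above, the following sketch enables web ACL logging to an S3 bucket. The bucket name is a hypothetical example; AWS WAF requires log destinations whose names start with aws-waf-logs-, and the bucket also needs a policy that allows AWS log delivery to write to it (see the AWS WAF logging documentation):
# Hypothetical destination bucket; WAF log destinations must begin with aws-waf-logs-.
aws s3api create-bucket \
  --bucket aws-waf-logs-yelb-demo-${WAF_ACCOUNT_ID} \
  --region $WAF_AWS_REGION \
  --create-bucket-configuration LocationConstraint=$WAF_AWS_REGION
# Turn on logging for the web ACL created earlier in this walkthrough.
aws wafv2 put-logging-configuration \
  --logging-configuration "ResourceArn=${WAF_WACL_ARN},LogDestinationConfigs=arn:aws:s3:::aws-waf-logs-yelb-demo-${WAF_ACCOUNT_ID}" \
  --region $WAF_AWS_REGION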
Cleanup
Use the following commands to delete resources created during this post:
kubectl delete ingress yelb.app -n yelb
aws wafv2 delete-web-acl --id $WAF_WAF_ID --name WAF-FOR-YELB --scope REGIONAL \
  --lock-token $(aws wafv2 list-web-acls \
    --region $WAF_AWS_REGION \
    --scope REGIONAL \
    --query "WebACLs[?Name=='WAF-FOR-YELB'].LockToken" \
    --output text) \
  --region $WAF_AWS_REGION
helm delete aws-load-balancer-controller -n kube-system
eksctl delete iamserviceaccount \
  --cluster $WAF_EKS_CLUSTER_NAME \
  --region $WAF_AWS_REGION \
  --namespace kube-system \
  --name aws-load-balancer-controller
aws iam detach-role-policy \
  --policy-arn $WAF_LBC_IAM_POLICY_ARN \
  --role-name $(aws iam list-entities-for-policy --policy-arn $WAF_LBC_IAM_POLICY_ARN --query 'PolicyRoles[0].RoleName' --output text)
aws iam delete-policy \
  --policy-arn $WAF_LBC_IAM_POLICY_ARN
kubectl patch targetgroupbinding k8s-yelb-yelbui-87f2ba1d97 -n yelb --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
kubectl patch svc yelb-ui -n yelb --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
kubectl delete ns yelb
eksctl delete cluster --name $WAF_EKS_CLUSTER_NAME --region $WAF_AWS_REGION
Conclusion
This post demonstrates how to protect your web workloads using AWS WAF. Amazon EKS customers benefit from AWS WAF-provided AWS Managed Rules to add a web application firewall to web apps without learning how to write AWS WAF rules. Additionally, AWS WAF Bot Control gives you visibility and control over common and pervasive bot traffic that can consume excess resources, skew metrics, cause downtime, or perform other undesired activities. We recommend implementing AWS WAF and testing its effectiveness by conducting penetration tests regularly to identify gaps in your AWS WAF rules. The Guidelines for Implementing AWS WAF whitepaper provides a detailed implementation guide for anyone looking to protect web applications. View the full article
  14. Today, we are announcing the availability of the fourth quarterly update for Amazon Linux 2023, AL2023.4, as well as the Amazon Linux 2023 EKS optimized AMI. Amazon EKS customers can now use the standard AL2023-based EKS optimized Amazon Machine Image (AMI) with Managed Node Groups, self-managed nodes, and Karpenter; it is available across all supported versions of Amazon EKS. To learn more about using Amazon Linux 2023 with EKS, see Amazon EKS optimized Amazon Linux AMIs. View the full article
  15. Video encoding and transcoding are critical workloads for media and entertainment companies. Delivering high-quality video content to viewers across devices and networks needs efficient and scalable encoding infrastructure. As video resolutions continue to increase to 4K and 8K, GPU acceleration is essential to real-time encoding workflows where parallel encoding tasks are necessary. Although encoding on the CPU is possible, this is better suited to smaller-scale sequential encoding tasks or where encoding speed is less of a concern. AWS offers GPU instance families, such as G4dn, G5, and G5g, which are well suited for these real-time encoding workloads. Modern GPUs offer users thousands of shading units and the ability to process billions of pixels per second. Running a single encoding job on the GPU often leaves resources under-used, which presents an optimization opportunity. By running multiple processes simultaneously on a single GPU, processes can be bin-packed and use a fraction of the GPU. This practice is known as fractionalization. This post explores how to build a video encoding pipeline on AWS that uses fractional GPUs in containers using Amazon Elastic Kubernetes Service (Amazon EKS). By splitting the GPU into fractions, multiple encoding jobs can share the GPU concurrently. This improves resource use and lowers costs. This post also looks at using Bottlerocket and Karpenter to achieve fast scaling of heterogeneous encoding capacity. With Bottlerocket's image caching capabilities, new instances can start up rapidly to handle spikes in demand. By combining fractional GPUs, containers, and Bottlerocket on AWS, media companies can achieve the performance, efficiency, and scale they need for delivering high-quality video streams to viewers. The examples in this post use the following software versions:
Amazon EKS version 1.28
Karpenter version 0.33.0
Bottlerocket version 1.16.0
To view and deploy the full example, see the GitHub repository.
Configuring GPU time-slicing in Amazon EKS
The concept of sharing or time-slicing a GPU is not new. To achieve maximum use, multiple processes can be run on the same physical GPU. By using as much of the available GPU capacity as possible, the cost per streaming session decreases. Therefore, the density – the number of simultaneous transcoding or encoding processes – is an important dimension for cost-effective media-streaming. With the popularity of Kubernetes, GPU vendors have invested heavily in developing plugins to make this process easier. Some of the benefits of using Kubernetes over running processes directly on the virtual machine (VM) are:
Resiliency – By using Kubernetes daemonsets and deployments, you can rely on Kubernetes to automatically restart any crashed or failed tasks.
Security – Network policies can be defined to prevent inter-pod communication. Additionally, Kubernetes namespaces can be used to provide additional isolation. This is useful in multi-tenant environments for software vendors and Software-as-a-Service (SaaS) providers.
Elasticity – Kubernetes deployments allow you to easily scale out and scale in based on changing traffic volumes. Event-driven autoscaling, such as with KEDA, allows for responsive provisioning of additional resources. Tools such as the Cluster Autoscaler and Karpenter automatically provision compute capacity based on resource use.
A device plugin is needed to expose the GPU resources to Kubernetes.
It is the device plugin's primary job to make the details of the available GPU resources visible to Kubernetes. Multiple plugins are available for allocating fractions of a GPU in Kubernetes. In this post, the NVIDIA device plugin for Kubernetes is used as it provides a lightweight mechanism to expose the GPU resources. As of version 0.12, this plugin supports time-slicing. Additional wrappers for the device plugin are available, such as the NVIDIA GPU Operator for Kubernetes, which provide further management and monitoring capabilities if needed. To configure the NVIDIA device plugin with time-slicing, the following steps should be followed.
Remove any existing NVIDIA device plugin from the cluster:
kubectl delete daemonset nvidia-device-plugin-daemonset -n kube-system
Next, create a ConfigMap to define how many "slices" to split the GPU into. The number of slices needed can be calculated by reviewing the GPU use for a single task. For example, if your workload uses at most 10% of the available GPU, you could split the GPU into 10 slices. This is shown in the following example config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config-all
data:
  any: |-
    version: v1
    flags:
      migStrategy: none
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 10
kubectl create -n kube-system -f time-slicing-config-all.yaml
Finally, deploy the latest version of the plugin, using the created ConfigMap:
helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
  --version=0.14.1 \
  --namespace kube-system \
  --create-namespace \
  --set config.name=time-slicing-config-all
If the nodes in the cluster are inspected, they show an updated GPU resource limit, despite only having one physical GPU:
Capacity:
  cpu:                8
  ephemeral-storage:  104845292Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32386544Ki
  nvidia.com/gpu:     10
  pods:               29
As the goal is to bin-pack as many tasks onto a single GPU as possible, it is likely the max pods setting is hit next. On the machine used in this post (g4dn.2xlarge) the default max pods is 29. For testing purposes, this is increased to 110 pods; 110 is the maximum recommended for nodes smaller than 32 vCPUs. To increase this, the following steps need to be followed.
Pass the max-pods flag to the kubelet in the node bootstrap script:
/etc/eks/bootstrap.sh my-cluster --use-max-pods false --kubelet-extra-args '--max-pods=110'
When using Karpenter for auto-scaling, the NodePool resource definition passes this configuration to new nodes:
kubelet:
  maxPods: 110
The number of pods is now limited by the maximum Elastic Network Interfaces (ENIs) and IP addresses per interface. See the ENI documentation for the limits for each instance type. The formula is: number of ENIs × (number of IPv4 addresses per ENI − 1) + 2. To increase the max pods per node beyond this, prefix delegation must be used. This is configured using the following command:
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
For more details on prefix delegation, see Amazon VPC CNI increases pods per node limits.
Amazon EC2 instance type session density
The next decision is which instance type to use. GPU instances are often in high demand because of their use in both Media and Machine Learning (ML) workloads. It is a best practice to diversify across as many instance types as you can in all the Availability Zones (AZs) in an AWS Region.
At the time of writing, the three current generation NVIDIA GPU-powered instance families most used for Media workloads are G4dn, G5, and G5g. The latter uses an ARM64 CPU architecture with AWS Graviton 2 processors. The examples used in the post use 1080p25 (1080p resolution and 25 frames per second) as the frame-rate profile. If you are using a different resolution or framerate, your results will vary. To test this, ffmpeg was run in the container using h264 hardware encoding with CUDA using the following arguments:
ffmpeg -nostdin -y -re -vsync 0 -c:v h264_cuvid -hwaccel cuda -i <input_file> -c:v h264_nvenc -preset p1 -profile:v baseline -b:v 5M -an -f rtp -payload_type 98 rtp://192.168.58.252:5000?pkt_size=1316
The key options used in this example are as follows, and you may want to change these based on your requirements:
`-re`: Read input at the native frame rate. This is particularly useful for real-time streaming scenarios.
`-c:v h264_cuvid`: Use NVIDIA CUVID for decoding.
`-hwaccel cuda`: Specify CUDA as the hardware acceleration API.
`-c:v h264_nvenc`: Use NVIDIA NVENC for video encoding.
`-preset p1`: Set the encoding preset to "p1" (you might want to adjust this based on your requirements).
`-profile:v baseline`: Set the H.264 profile to baseline.
`-b:v 5M`: Set the video bitrate to 5 Mbps.
To view the full deployment definition, see the GitHub repository. All instances were using NVIDIA driver version 535 and CUDA version 12.2. Then, the output was monitored on a remote instance using the following command:
ffmpeg -protocol_whitelist file,crypto,udp,rtp -i input.sdp -f null -
Concurrent Sessions | Average g4dn.2xlarge FPS | Average g5g.2xlarge FPS | Average g5.2xlarge FPS
26 | 25 | 25 | 25
27 | 25 | 25 | 25
28 | 25 | 25 | 25
29 | 23 | 24 | 25
30 | 23 | 24 | 25
31 | 23 | 23 | 25
32 | 22 | 23 | 24
33 | 22 | 21 | 23
35 | 21 | 20 | 22
40 | 19 | 19 | 19
50 | 12 | 12 | 15
The maximum concurrent sessions at which the desired framerate was consistently achieved (highlighted in green in the original post) were 28 for g4dn.2xlarge and g5g.2xlarge and 31 for g5.2xlarge.
G4dn.2xlarge
The T4 GPU in the g4dn instance has a single encoder, which means that the encoder consistently reaches capacity at around 28 concurrent jobs. On a 2xl, there is still spare VRAM, CPU, and memory available at this density. This spare capacity could be used to encode additional sessions on the CPU, run the application pods, or the instance could be scaled down to a smaller instance size. Besides monitoring the FPS, the stream can be manually monitored using ffplay or VLC. Note that although additional sessions can be run beyond the preceding numbers, frame rate drops become more common. Eventually, the GPU becomes saturated and CUDA memory exceptions are thrown, causing the container to crash and restart. The following stream quality was observed when manually watching the stream through VLC:
25-28 sessions – high quality, minimal drops in frame rate, optimal viewing experience
>=30 sessions – some noticeable drops in frame rate and resolution
>=50 sessions – frequent stutters and heavy artifacts, mostly unwatchable (at this density CPU, memory, and network could all become bottlenecks)
G5g.2xlarge
The Graviton-based instance performs nearly identically to the G4dn. This is expected as the T4g GPU in the G5g instance has similar specifications to the T4 GPU. The key difference is that the G5g uses ARM-based Graviton 2 processors instead of x86. This means the G5g instances have approximately 25% better price/performance than the equivalent G4dn.
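To find the sustainable session density for your own frame-rate profile, it helps to watch encoder load on the node while sessions are added. A minimal sketch using nvidia-smi; the query field names are those supported by recent drivers and may differ slightly between driver versions:
# Sample GPU utilization, memory utilization, and NVENC encoder statistics every 5 seconds.
nvidia-smi \
  --query-gpu=utilization.gpu,utilization.memory,encoder.stats.sessionCount,encoder.stats.averageFps \
  --format=csv -l 5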
Because the G5g is ARM-based, deploying ffmpeg in a containerized environment means that multi-arch container images can be built to target both x86 and ARM architectures. Using hardware encoding with h264 and CUDA works well using cross-compiled libraries for ARM.
G5.2xlarge
The G5 instances use the newer A10G GPU. This adds an additional 8GB of VRAM and doubles the memory bandwidth compared to the T4, up to 600 GB/s, thanks to PCIe Gen4. This means it can produce lower latency, higher resolution video. However, it still has one encoder. When running concurrent rendering jobs, the bottleneck is the encoder capacity. The higher memory bandwidth allows a couple of extra concurrent sessions, but the density that can be achieved is similar. This does mean it is possible to achieve the same density at a slightly higher framerate or resolution. The cost per session for each instance is shown in the following table (based on on-demand pricing in the US-East-1 Region):
Instance Type | Cost per hour ($) | Max sessions at 1080p25 | Cost per session per hour ($)
G4dn | 0.752 | 28 | 0.027
G5 | 1.212 | 31 | 0.039
G5g | 0.556 | 28 | 0.02
By mixing different instance families and sizes and deploying across all AZs in a Region or multiple Regions, you can improve your resiliency and scalability. This also allows you to unlock the maximum spot discount by choosing a "price-capacity-optimized" model if your application is able to gracefully handle spot interruptions.
Horizontal node auto-scaling
As media-streaming workloads fluctuate with viewing habits, it's important to have elastically scalable rendering capacity. The more responsively additional compute capacity can be provisioned, the better the user experience. This also optimizes the cost by reducing the need to provision for peak. Note that this section explores scaling of the underlying compute resources, not auto-scaling the workloads themselves. The latter is covered in the Horizontal Pod Autoscaler documentation. Container images needing video drivers or frameworks are often large, typically ranging from 500MiB – 3GiB+. Fetching these large container images over the network can be time intensive. This impairs the ability to scale responsively to sudden changes in activity. There are some tools that can be leveraged to make scaling more responsive:
Karpenter – Karpenter allows for scaling using heterogeneous instance types. This means pools of G4dn, G5, and G5g instances can be used, with Karpenter picking the most cost effective to place the pending pods. As the resource type used by the device plugin presents as a standard GPU resource, Karpenter can scale based on this resource. As of writing, Karpenter does not support scaling based on custom resources. Initially, nodes are launched with the default one GPU resource until the node is properly labelled by the device plugin. During spikes in scaling, nodes may be over-provisioned until Karpenter reconciles the workload.
Bottlerocket – Bottlerocket is a minimal container OS that only contains the software needed to run container images. Due to this smaller footprint, Bottlerocket nodes can start faster than general-purpose Linux distributions in some scenarios; see the following table for a comparison of this:
Stage | General-purpose Linux elapsed time, g4dn.xlarge (s) | Bottlerocket elapsed time, g4dn.xlarge (s) | Bottlerocket elapsed time, g5g.xlarge (s)
Instance Launch | 0 | 0 | 0
Kubelet Starting | 33.36 | 17.5 | 16.54
Kubelet Started | 37.36 | 21.25 | 19.85
Node Ready | 51.71 | 34.19 | 32.38
Caching container images on the node.
When using Bottlerocket, container images are read from the data volume. This data volume is initialized on instance boot. This means that container images can be pre-fetched or cached onto an Amazon Elastic Block Store (Amazon EBS) volume, rather than pulled over the network. This can lead to considerably faster node start times. For a detailed walkthrough of this process, see Reduce container startup time on Amazon EKS with Bottlerocket data volume. As an additional scaling technique, the cluster can be over-provisioned. Having an additional warm pool of nodes available for scheduling allows for sudden spikes while the chosen autoscaling configuration kicks in. This is explored in more detail in Eliminate Kubernetes node scaling lag with pod priority and over-provisioning. By using a multi-arch container build, multiple Amazon Elastic Compute Cloud (Amazon EC2) instance types can be targeted using the same NodePool configuration in Karpenter. This allows for cost-effective scaling of resources. The example workload was built using the following command:
docker buildx build --platform "linux/amd64,linux/arm64" --tag ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/ffmpeg:1.0 --push . -f Dockerfile
This allows for a NodePool defined in Karpenter as follows:
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["g5g.2xlarge", "g4dn.2xlarge", "g5.2xlarge"]
      nodeClassRef:
        name: default
      kubelet:
        maxPods: 110
In this NodePool, all three instance types are available for Karpenter to use. Karpenter can choose the most efficient, regardless of processor architecture, as we are using the multi-arch image built previously in the deployment. The capacity type uses spot instances to reduce cost. If the workload cannot tolerate interruptions, then spot can be removed and only on-demand instances are provisioned. This would work with any supported operating system. To make Karpenter use Bottlerocket-based Amazon Machine Images (AMIs), the corresponding EC2NodeClass is defined as follows:
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: Bottlerocket
  ...
This automatically selects the latest AMI in the specified family, in this case Bottlerocket. For the full example and more details on this configuration, see the example in the GitHub repository.
Conclusion
By leveraging fractional GPUs, container orchestration, and purpose-built OS and instance types, media companies can achieve up to 95% better price-performance. The techniques covered in this post showcase how AWS infrastructure can be tailored to deliver high density video encoding at scale. With thoughtful architecture decisions, organizations can future-proof their workflows and provide exceptional viewing experiences as video continues evolving to higher resolutions and bitrates. To start optimizing your workloads, experiment with different instance types and OS options such as Bottlerocket. Monitor performance and cost savings as you scale out encoding capacity. Use AWS's flexibility and purpose-built tools to future-proof your video pipeline today. View the full article
  16. Starting today, you can use private cluster endpoints with AWS Batch on Amazon Elastic Kubernetes Service (Amazon EKS). You can bring existing private Amazon EKS clusters and create a compute environment on AWS Batch. This setup enables AWS Batch jobs to run on Amazon EKS clusters that use private endpoints. View the full article
  17. Today, AWS announces the expansion in the log coverage support for Amazon Security Lake, which includes Amazon Elastic Kubernetes Service (Amazon EKS) audit logs. This enhancement allows you to automatically centralize and normalize your Amazon EKS audit logs in Security Lake, making it easier to monitor and investigate potential suspicious activities in your Amazon EKS clusters. View the full article
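Security Lake collects these audit logs as its own source; separately from that, you can check or enable audit logging on the EKS control plane itself. A minimal sketch with the AWS CLI, where the cluster name is a placeholder:
# Enable the audit log type on the EKS control plane for a cluster.
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'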
  18. Today, we are announcing general availability of Amazon Linux 2023 (AL2023) on Amazon Elastic Kubernetes Service (EKS). AL2023 is the next generation of Amazon Linux from Amazon Web Services and is designed to provide a secure, stable, high-performance environment to develop and run your cloud applications. EKS customers can enjoy the benefits of AL2023 by using the standard AL2023-based EKS optimized Amazon Machine Image (AMI) with Managed Node Groups, self-managed nodes, and Karpenter. View the full article
  19. Introduction On October 4, 2023, Amazon Elastic Kubernetes Service (Amazon EKS) announced the public preview of extended support for Kubernetes versions, which gives you an additional 12 months of support for Kubernetes minor versions. Today, we are announcing the pricing for extended support. Amazon EKS clusters running on a Kubernetes version in the extended support window will be charged a total of $0.60 per cluster per hour, effective from the April 2024 billing cycle (starting April 1, 2024). Pricing for clusters running Kubernetes versions in standard support is unchanged — you will continue to pay $0.10 per cluster per hour. This pricing covers the cost to run the Kubernetes control plane. You pay for the AWS resources (e.g., Amazon Elastic Compute Cloud [Amazon EC2] instances, AWS Fargate pods, Amazon Elastic Block Store [Amazon EBS] volumes, or AWS Outposts capacity) that you create to run your Kubernetes worker nodes or support other cluster functions. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments Extended version support in Amazon EKS is available for Kubernetes versions 1.23 and higher. You can refer to the Amazon EKS version release calendar to see the standard support and extended support windows for each version. What is Amazon EKS extended support for Kubernetes versions? Amazon EKS extended support for Kubernetes versions lets you use a Kubernetes minor version for up to 26 months from the time the version is generally available from Amazon EKS. Amazon EKS versions in extended support receive ongoing security patches for the Kubernetes control plane managed by EKS. Additionally, EKS will release critical patches for the Amazon VPC CNI, kube-proxy, and CoreDNS add-ons, AWS-published EKS Optimized Amazon Machine Images (AMIs) for Amazon Linux, Bottlerocket, Windows, and EKS Fargate nodes. (Note: support for AWS-published EKS Optimized AMIs for Windows will be available for Kubernetes versions 1.24 and higher.) AWS backs all EKS versions in both standard and extended support with full technical support. Extended support for Kubernetes versions is available in all AWS Regions where Amazon EKS is available, including AWS GovCloud (US) Regions. When is an Amazon EKS version in standard or extended support? Standard support begins when a version becomes available in Amazon EKS, and continues for 14 months — the same as upstream Kubernetes support window for minor versions. Extended support in Amazon EKS begins immediately at the end of standard support and continues for 12 months. Kubernetes versions 1.23 and higher are eligible for extended support from Amazon EKS. How much does extended support cost? The cost to run an Amazon EKS cluster is based on the Kubernetes minor version of the cluster control plane. When your Amazon EKS cluster runs a Kubernetes version that is in standard support, you pay $0.10 per cluster per hour. When your Amazon EKS cluster runs a Kubernetes version in extended support, you pay $0.60 per cluster per hour. Here are a few scenarios to better explain how extended support works: If you have a large fleet of Amazon EKS clusters running on different Kubernetes versions, then clusters that are running a version in standard support will be billed at $0.10 per cluster per hour, and clusters running a version in extended support will be billed at $0.60 per cluster per hour. If you create a new Amazon EKS cluster using a Kubernetes version that is in extended support, then you will pay $0.60 per hour. 
If you are running an Amazon EKS cluster using a Kubernetes version in extended support, and you upgrade the control plane for that cluster to a Kubernetes version in standard support, then you will pay $0.60 per hour for the time the cluster was running an extended support version prior to the upgrade, and you will pay $0.10 per hour after the upgrade. Here are some examples to explain billing for Amazon EKS with the standard and extended support windows:
Example 1: Let's assume that you create an Amazon EKS cluster on a Kubernetes version the day Amazon EKS releases it, and that you run the cluster for the next 26 months without upgrading the control plane version. You will pay $0.10 per hour during the first 14 months when the version is in standard support. After 14 months, the Kubernetes version transitions to extended support. You will now pay $0.60 per hour for the remaining 12 months. During the 26-month period, on average, you will pay $0.33 per hour to run this cluster.
Support type | Usage (months) | Price (per cluster per hour)
Standard | 14 | $0.10
Extended | 12 | $0.60
Average hourly cost for 26 months of support: $0.33
Example 2: Let's assume that you create an Amazon EKS cluster using a Kubernetes version that is 4 months into the standard support window (i.e., there are 10 months left in the version's standard support window). At the end of standard support for this version, you decide to use extended support for 6 months before upgrading your cluster to the next version, which is in standard support. During the 16 months for which you've run this cluster, you pay $0.10 per hour for the first 10 months, and $0.60 per hour for the remaining 6 months. On average, you're paying $0.29 per hour to run this cluster for 16 months. (Note: When you upgrade your cluster to a Kubernetes version in standard support, your billing will return to $0.10 per cluster per hour).
Support type | Usage (months) | Price (per cluster per hour)
Standard | 10 | $0.10
Extended | 6 | $0.60
Average hourly cost for 16 months of support: $0.29
Example 3: Let's assume you quickly adopt new versions and follow a regular upgrade cadence for your Amazon EKS clusters, so you don't use a Kubernetes version beyond its 14-month standard support window. In this case, there are no changes from Amazon EKS billing today. You will continue to pay $0.10 per hour for your cluster.
Support type | Usage (months) | Price (per cluster per hour)
Standard | 14 | $0.10
Extended | 0 | $0.60
Average hourly cost for 14 months of support: $0.10
The following tables summarize the end of support dates and pricing for Kubernetes versions available today on Amazon EKS:
Support type | Duration | Price (per cluster per hour)
Standard | 14 months starting from the date a version is generally available on Amazon EKS | $0.10
Extended | 12 months starting from the date a version reaches the end of standard support in Amazon EKS | $0.60
Kubernetes version | Standard support pricing window | Extended support pricing window
1.23 | August 2022 – October 2023 | October 2023 – October 2024
1.24 | November 2022 – January 2024 | January 2024 – January 2025
1.25 | February 2023 – May 2024 | May 2024 – May 2025
1.26 | April 2023 – June 2024 | June 2024 – June 2025
1.27 | May 2023 – July 2024 | July 2024 – July 2025
1.28 | September 2023 – November 2024 | November 2024 – November 2025
* See the Amazon EKS Kubernetes calendar for exact dates. This calendar is regularly updated and should be considered the source of truth for exact and final version support dates.
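If one of your clusters is in extended support and you want to move back to the standard-support rate, the control plane can be upgraded one minor version at a time. A minimal sketch; the cluster name and target version are placeholders, and node groups and add-ons must be upgraded separately:
# Upgrade the control plane by exactly one minor version above its current version.
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.28
# Track the update until its status is Successful.
aws eks list-updates --name my-cluster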
Next Steps

Extended support for Kubernetes versions is available to all Amazon EKS customers today as a preview and at no additional cost. Effective from the April 2024 billing cycle (starting April 1, 2024), your Amazon EKS clusters that are running a Kubernetes version in extended support will be charged $0.60 per cluster per hour. Remember that you can upgrade your cluster at any time to a version in standard support. Pricing for standard support is unchanged and will remain $0.10 per cluster per hour. Please use the Amazon EKS version calendar to see the latest dates for version support. View the full article
  20. In this post, we demonstrate how to build and deploy a Kafka cluster on Amazon Elastic Kubernetes Service (Amazon EKS) using Data on EKS (DoEKS). DoEKS is an open-source project that empowers customers to expedite their data platform development on Kubernetes. DoEKS provides templates that incorporate best practices around security, performance, and cost optimization, among others, and that can be used as is or fine-tuned to meet more unique requirements. View the full article
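For orientation, DoEKS patterns are generally deployed as Terraform blueprints. The sketch below is illustrative only: the repository URL is the public Data on EKS project, but the pattern directory shown is an assumption and may not match the current repository layout.

# Clone the Data on EKS repository and deploy a pattern with Terraform
git clone https://github.com/awslabs/data-on-eks.git
cd data-on-eks/streaming/kafka   # illustrative path; check the repository for the current Kafka pattern location
terraform init
terraform apply --auto-approve   # review the plan before approving in a real environment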
  21. Amazon Elastic Kubernetes Service (Amazon EKS) customers can now leverage EC2 security groups to secure applications in clusters that use the Internet Protocol version 6 (IPv6) address space. View the full article
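Security groups for pods are typically expressed through a SecurityGroupPolicy custom resource handled by the VPC resource controller. The following is a minimal sketch; the namespace, labels, and security group ID are placeholders.

# Apply a SecurityGroupPolicy that attaches a security group to matching pods (names and sg- ID are placeholders)
cat <<'EOF' | kubectl apply -f -
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: example-sg-policy
  namespace: example-app
spec:
  podSelector:
    matchLabels:
      app: example-app
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0
EOF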
  22. Amazon Elastic Kubernetes Service (EKS) now surfaces cluster-related health issues in the EKS console and API, providing administrators enhanced visibility into the health of their clusters. Cluster health status information helps customers to quickly diagnose, troubleshoot, and remedy issues with their clusters, enabling them to run more up-to-date and secure application environments. View the full article
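For reference, cluster health information can also be read programmatically; a minimal sketch (the cluster name and region are placeholders), assuming the health field is populated for your cluster:

# Inspect any health issues EKS reports for a cluster (name and region are placeholders)
aws eks describe-cluster --name my-cluster --region us-west-2 \
  --query 'cluster.health.issues' --output json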
  23. Post co-written by Shahar Azulay, CEO and Co-Founder at GroundCover

Introduction

The abstraction introduced by Kubernetes allows teams to easily run applications at varying scale without worrying about resource allocation, autoscaling, or self-healing. However, abstraction isn't without cost: it adds complexity and makes it harder to track down the root cause of problems that Kubernetes users experience. To mitigate these issues, detailed observability into each application's state is key, but it can be challenging. Users have to ensure they're exposing the right metrics, emitting actionable logs, and instrumenting their application's code with specific client-side libraries to collect spans and traces. As if that weren't hard enough on its own, gaining such visibility into the many small, distributed, interdependent pieces of code that comprise a modern Kubernetes microservices environment becomes a much harder task at scale. A new emerging technology called eBPF (Extended Berkeley Packet Filter) is on its way to relieving many of these problems. Referred to as an invaluable technology by many industry leaders, eBPF allows users to trace any type of event regarding their application performance, such as network operations, directly from the Linux kernel, with minimal performance overhead and without configuring sidecars or agents. The eBPF sensor is out-of-band to the application code, which means zero code changes or integrations and immediate time to value across your stack. The result is granular visibility into what's happening within an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. In this post, we'll cover what eBPF is, why it's important, and what eBPF-based tools are available for you to get visibility into your Amazon EKS applications. We'll also review how Groundcover uses eBPF to enable its cost-efficient architecture.

In today's fast-paced world of software development, Kubernetes has emerged as a game-changer, offering seamless scalability and resource management for applications. However, with this abstraction comes the challenge of maintaining comprehensive observability in complex microservices environments. In this blog post, we'll explore how eBPF is revolutionizing Kubernetes observability on Amazon EKS. Amazon EKS optimized Amazon Machine Images (AMIs) fully support eBPF, which empowers users with unparalleled insights into their applications.

Solution overview

The need for observability

As applications scale and interdependencies grow, gaining detailed insights into their state becomes vital for effective troubleshooting. Kubernetes users face the daunting task of instrumenting applications, collecting metrics, and emitting actionable logs to track down issues. This becomes even more challenging in modern microservices environments, where numerous small, distributed, independent pieces of code interact with each other.

Introducing eBPF: A game-changer for Kubernetes observability

In the quest for enhanced observability, eBPF emerges as an invaluable technology. Unlike traditional observability approaches, eBPF empowers users to trace any type of event regarding their application performance, such as network operations, directly from the Linux kernel, with minimal performance overhead and without configuring sidecars or agents.

The out-of-band advantage of eBPF

The advantage of eBPF lies in its out-of-band approach to observability.
Out-of-band means that eBPF collects data without being part of the application's code. One advantage is that no code change is needed, and installation can be done at the instance level without handling configuration per deployed application. Another is that it eliminates unexpected effects of the observability stack on your time-critical application code, because all data collection happens outside the application process. Users can gain granular visibility into applications deployed on their Kubernetes clusters without instrumenting their code or integrating any third-party libraries. By tracing at the kernel level, eBPF enables users to analyze the inner workings of their applications with immediate time to value, full coverage, and unprecedented depth.

eBPF-based tools for Amazon EKS observability

A plethora of eBPF-based tools has emerged to provide comprehensive observability into Amazon EKS applications. These tools offer tracing capabilities for various events, including network operations, system calls, and even custom application events. For instance, BCC is a toolkit that helps simplify eBPF bootstrapping and development and includes several useful tools for tasks like network traffic analysis and resource utilization profiling. Another example is bpftrace, which is a little more focused on one-liners and short scripts for quick insights. These tools were built to be ad hoc, so you can run them directly on any Linux machine and get real-time value. Amazon EKS users who want to inspect their Kubernetes environment with eBPF tools should check out Inspektor Gadget, which manages the packaging, deployment, and execution of eBPF programs in a Kubernetes cluster, including many based on BCC tools. In the following section, we'll dive deeper into some of the open source projects that use eBPF to collect insight data about applications.

Walkthrough

Caretta

Caretta helps teams instantly create a visual network map of the services running in their Kubernetes cluster. Caretta uses eBPF to collect data in an efficient way and is equipped with a Grafana Node Graph dashboard to quickly display a dynamic map of the cluster. The main idea behind Caretta is gaining a clear understanding of the inter-dependencies between the different workloads running in the cluster. Caretta maps all service interactions and their traffic rates, leveraging Kubernetes APIs (Application Programming Interfaces) to create a clean and informative map of any Kubernetes cluster that can be used for on-demand granular observability, cost optimization, and security insights. This allows teams to quickly reach insights such as identifying central points of failure or pinpointing security anomalies.

Installing Caretta

Installing Caretta is as simple as installing a Helm chart onto your Amazon EKS cluster:

helm repo add groundcover https://helm.groundcover.com/
helm install caretta --namespace caretta --create-namespace groundcover/caretta
kubectl port-forward -n caretta pods/<caretta-grafana POD NAME> 3000:3000

Once installed, Caretta provides a full network dependency map that captures the Kubernetes service interactions. The map is also interleaved with Prometheus metrics that Caretta exposes to measure the total throughput of each link observed since launching the program. That information, scraped and consolidated by a Prometheus agent, can be easily analyzed with standard queries such as sorting, calculating rates, filtering by namespace and time range, and, of course, visualizing it as a network map.
The following is an example of a metric exposed by a Caretta agent, with its related labels, which provide granular insight into the network traffic captured by eBPF:

caretta_links_observed{client_kind="Deployment",client_name="checkoutservice",client_namespace="demo-ng",link_id="198768460",role="server", server_port="3550"}

This is useful for detecting unknown dependencies or network interactions, and also for spotting bottlenecks, such as hot zones that handle large volumes of network data and might be the root cause of problems in your Amazon EKS cluster. But what can we do when network monitoring isn't enough and application-level API monitoring is where the problem lies?

Hubble

Hubble is a network observability and troubleshooting component within Cilium (which is also based on eBPF for networking). Hubble uses eBPF to gain deep visibility into network traffic and to collect fine-grained telemetry data within the Linux kernel. By attaching eBPF programs to specific network events, Hubble can capture data such as packet headers, network flows, latency metrics, and more. It provides a powerful and low-overhead mechanism for monitoring and analyzing network behavior in real time. With the help of eBPF, Hubble can perform advanced network visibility tasks, including flow-level monitoring, service dependency mapping, network security analysis, and troubleshooting. It then aggregates this data and presents it to the user through the Command Line Interface (CLI) or UI. Hubble enables platform teams to gain insights into network communications within their cloud-native environments and gives developers the ability to understand how their applications communicate without first becoming networking experts. Just like Caretta, Hubble knows how to create service dependency maps and metrics about the network connections it tracks with eBPF. Metrics like requests per second and packet drops are captured by Hubble and can be used to detect issues hiding at the network layer of your Amazon EKS environment. Hubble also provides Layer 7 metrics by tracking HTTP and DNS connections inside the cluster using eBPF. Here, metrics like request rate, latency, and error rate kick in, unlocking application-level monitoring. You can use Hubble to detect application bugs, identify which high-latency APIs slow down the application, or observe slow performance degradation that might be lurking. Installing Hubble requires installation of Cilium, which is documented in the Cilium getting started guide and is out of scope of this post.
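As a rough illustration only (assuming Cilium is installed with Hubble enabled and the Hubble CLI can reach hubble-relay; the namespace is a placeholder), Layer 7 flows can be inspected from a workstation like this:

# Forward hubble-relay locally, then watch HTTP flows in a namespace (namespace is a placeholder)
cilium hubble port-forward &
hubble observe --namespace demo-ng --protocol http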
Groundcover: Pioneering cost-efficient Amazon EKS observability

Groundcover, a next-generation observability company, is an example of how future observability platforms will leverage eBPF as their main data collection sensor. Groundcover, focused on cloud-native environments, utilizes eBPF to create a full-stack observability platform that provides instant value without compromising on scale, granularity, or cost. Its eBPF sensor was built with a performance-first mindset and harnesses kernel resources to create a cost-efficient architecture for Amazon EKS observability. By collecting all its observability data using eBPF, Groundcover eliminates the need for intrusive code changes and manual labor, and the need to deploy multiple external agents. This streamlined approach not only enhances the coverage and depth of observability but also optimizes costs by reducing performance overhead.

Conclusion

In this post, we showed you what eBPF is and how the open-source eBPF tool ecosystem can provide visibility into your Amazon EKS applications. Customers can try out this technology on their own with our example demonstrating how it translates into next-generation observability platforms. As the demand for Kubernetes observability continues to grow, eBPF has emerged as a transformative technology, redefining how we monitor applications in Amazon EKS clusters. With its unparalleled performance and seamless integration into the Linux kernel, eBPF offers granular visibility without disrupting existing application code. Through eBPF-based tools, developers and operations teams can now troubleshoot and optimize their applications effortlessly, making Kubernetes observability more accessible and efficient. As more businesses embrace eBPF's capabilities, we can expect organizations to migrate to Kubernetes with more confidence, assured of the observability coverage they expect. This enables users to unleash the full potential of their applications on Amazon EKS and focus on building great products on top of Kubernetes. With eBPF's bright future ahead, Groundcover and other industry leaders are paving the way for a new era of Kubernetes observability.

Shahar Azulay, groundcover

Shahar, CEO and cofounder of groundcover, is a serial R&D leader. Shahar brings experience in the worlds of cybersecurity and machine learning, having worked as a leader at companies such as Apple, DayTwo, and Cymotive Technologies. Shahar spent many years in the Cyber division of the Israeli Prime Minister's Office and holds three degrees, in Physics, Electrical Engineering, and Computer Science. He strives to bring the technological learnings from this rich background to today's cloud native battlefield in the sharpest, most innovative form, to make the world of dev a better place. View the full article
  24. Introduction

The adoption and large-scale growth of Kubernetes in recent years have resulted in businesses deploying multiple Amazon Elastic Kubernetes Service (Amazon EKS) clusters to support their growing number of microservice-based applications. The Amazon EKS clusters are usually deployed in separate Amazon Virtual Private Clouds (Amazon VPCs) and often in separate AWS accounts. A common challenge we hear from our customers is that their clients or applications need to communicate with the Amazon EKS Kubernetes API across individual VPC and account-level boundaries without routing traffic over the Internet. In this post, we'll explore a blueprint for using AWS PrivateLink as a solution to enable cross-VPC and cross-account private access to the Amazon EKS Kubernetes API.

Amazon VPC connectivity options

There are a number of AWS native networking services that offer connectivity across VPCs and across accounts without exposing your traffic to the public Internet. These include:

VPC Peering – which creates a connection between two or more VPCs that allows endpoints to communicate as if they were on the same network. It can connect VPCs that are in different accounts and in different Regions.

AWS Transit Gateway – which acts as a highly scalable cloud router connecting multiple VPCs in a Region with a hub-and-spoke architecture. Transit Gateway works across accounts using AWS Resource Access Manager (AWS RAM).

AWS PrivateLink – which provides private connectivity to a service in one VPC that can be consumed from a VPC in the same account or from another account. PrivateLink supports private access to customer application services, AWS Marketplace partner services, and certain AWS services.

VPC Peering and Transit Gateway are great options for cross-VPC and cross-account connectivity when you want to integrate the VPCs into a larger virtual network. However, this may not be an option for some customers due to security concerns, compliance requirements, or overlapping CIDR blocks.

AWS PrivateLink

AWS PrivateLink is a purpose-built technology designed to enable private connectivity between VPCs and supported AWS services in a highly available, reliable, and secure manner. A service powered by PrivateLink is created directly inside your VPC as an interface VPC endpoint. For each subnet (from your VPC) you choose for your endpoint, an Elastic Network Interface (ENI) is provisioned with a private IP address from the subnet's address range. This keeps your traffic on the AWS backbone network and removes the need for Internet and NAT gateways to access supported AWS services. AWS PrivateLink offers several advantages when it comes to securely connecting services and resources across different VPCs and accounts:

PrivateLink provides a higher level of security and isolation by enabling private connectivity directly between VPCs without exposing traffic to the public Internet. This ensures that sensitive data and services remain within the AWS network, reducing potential attack vectors.

Setting up and managing PrivateLink is often simpler and more straightforward than VPC Peering and Transit Gateway, which can be complex to configure, especially in larger and more complex network architectures.

PrivateLink can be a preferred choice for organizations with strict compliance requirements, as it helps keep data within the AWS network, which reduces the risk of data exposure.

PrivateLink can handle overlapping IP address ranges between VPCs.
It simplifies network design in situations where address conflicts might occur.

AWS PrivateLink and Amazon EKS

Private access to the Amazon EKS management API is available with AWS PrivateLink. The EKS management API allows you to provision and manage the lifecycle of EKS clusters, EKS managed node groups, and EKS add-ons. Each EKS cluster you provision has a separate API (i.e., the Kubernetes API) that's accessed using tools like kubectl and Helm to provision and manage the lifecycle of Kubernetes resources in that cluster. An EKS cluster's Kubernetes API is publicly available on the Internet by default. You can enable private access to the Kubernetes API by configuring your EKS cluster's endpoint access. While the EKS management API is available as a PrivateLink endpoint, the Kubernetes API isn't, and it has historically required configuring VPC Peering or Transit Gateway to enable cross-VPC and cross-account access.

AWS PrivateLink for the Amazon EKS Kubernetes API

Native AWS PrivateLink support for the Amazon EKS Kubernetes API isn't currently available, though there is an open roadmap request. However, customers can create a custom AWS PrivateLink powered service, known as a VPC endpoint service, for any application or service available behind a Network Load Balancer (NLB). To create a PrivateLink powered endpoint service for the EKS Kubernetes API, you need to create an NLB that targets the Kubernetes control plane ENIs that EKS creates in your subnets. In addition, there are two challenges that need to be addressed:

The EKS ENIs for the Kubernetes control plane are deleted and recreated in response to a number of events, including, but not limited to, Kubernetes version upgrades, EKS platform upgrades, control plane autoscaling, enabling secret encryption, and associating/disassociating an OIDC identity provider. The NLB target group must be updated each time the ENIs are created and deleted.

A new DNS hostname is created for the PrivateLink interface VPC endpoint that isn't included in the EKS Kubernetes API TLS certificate. Client connections to the Kubernetes API using the PrivateLink endpoint DNS name fail with a certificate error.

Solution overview

To address the challenge of the deleted and recreated Amazon EKS ENIs behind the NLB target group, a combination of Amazon EventBridge and AWS Lambda is used to respond to these changes and automatically update the NLB target group. Two EventBridge rules are created to process these events with Lambda functions: the first rule responds to the CreateNetworkInterface API call and adds the newly created EKS ENIs to the NLB target group; the second rule uses the EventBridge Scheduler to run every 15 minutes and removes targets for ENIs that have been deleted. To address the challenge of the Kubernetes API certificate error, an Amazon Route 53 private hosted zone is created in the client VPC to override the DNS of the EKS API server endpoint. An alias record matching the DNS name of the EKS Kubernetes API server is added to the private hosted zone and routes traffic to the AWS PrivateLink interface VPC endpoint in the client VPC. This eliminates certificate errors and enables secure and reliable communication with the EKS cluster. This post introduces the Amazon EKS Blueprints PrivateLink Access pattern, and the architecture is shown in the following diagram. PrivateLink is used to create a secure, private connection between the Client VPC and a private EKS cluster in the Service Provider VPC.
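For orientation, the two PrivateLink building blocks that the blueprint automates look roughly like the following if created by hand with the AWS CLI. This is a minimal sketch: the load balancer ARN, VPC, subnet, security group, and service name values are placeholders, and the blueprint's Terraform handles all of this for you.

# Service provider side: expose the internal NLB as a VPC endpoint service (load balancer ARN is a placeholder)
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/eks-api-nlb/0123456789abcdef \
  --no-acceptance-required

# Client side: create an interface VPC endpoint to that service (all IDs and the service name are placeholders)
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.vpce.us-west-2.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0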
A few key points to highlight:

The Amazon Elastic Compute Cloud (Amazon EC2) instance in the Client VPC can access the private EKS cluster as if it were in its own VPC, without the use of an Internet gateway, NAT device, VPN connection, or AWS Direct Connect.

The Client VPC has overlapping IPs with the Service Provider VPC. AWS PrivateLink makes it easy to publish an API or application endpoint between VPCs, including those that have overlapping IP address ranges.

To connect through AWS PrivateLink, an interface VPC endpoint is created in the Client VPC, which establishes connections between the subnets in the Client VPC and Amazon EKS in the Service Provider VPC using network interfaces. The Service Provider VPC exposes a PrivateLink endpoint service pointing to the internal NLB that targets the Kubernetes control plane ENIs.

Walkthrough

This post walks you through the following steps:

Download and configure the PrivateLink Access Blueprint
Deploy the PrivateLink Access Blueprint
Test access to the EKS Kubernetes API server endpoint
End-to-end testing with kubectl

Prerequisites

We need a few tools to deploy our blueprint. Ensure you have each of the following tools in your working environment:

AWS Command Line Interface (AWS CLI) v2
Session Manager plugin for the AWS CLI
Terraform
kubectl
jq

1. Download the PrivateLink Access Blueprint

In a terminal session, run the following commands to download the repo for the PrivateLink Access Blueprint:

git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
cd terraform-aws-eks-blueprints/patterns/privatelink-access

2. Deploy the PrivateLink Access Blueprint

Run the following steps (in the order shown) to initialize the Terraform repository and to create and configure the private NLB in the Service Provider VPC and the Amazon EventBridge rules before the rest of the resources are created. This ordered execution ensures that the NLB, Lambda functions, and EventBridge rules are in place and ready to capture creation of (and/or changes to) the EKS Kubernetes API endpoints (i.e., ENIs) while the cluster is being created (or upgraded). Note: The EKS Kubernetes API endpoint is temporarily made public so that Terraform has access to update the aws-auth ConfigMap. The Client Amazon EC2 instance's AWS Identity and Access Management (IAM) role is configured with administrative access for testing.

terraform init
terraform apply -target=module.eventbridge -target=module.nlb --auto-approve
terraform apply --auto-approve

Once the pattern has successfully deployed, you'll be provided with multiple output values. Copy and save them to use later in the post. Review the Terraform output value for cluster_endpoint_private; it should look similar to the snippet below:

aws eks update-cluster-config \
  --region us-west-2 \
  --name privatelink-access \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

Copy the command and run it in a terminal session to take the cluster API endpoint private as originally intended. The command causes the cluster to update, which takes a few minutes to complete. You can check on the status of the cluster by running the following command:

aws eks describe-cluster --name privatelink-access | jq -r '.cluster.status'

While the cluster is updating, the output will be UPDATING; once the update completes, the output will be ACTIVE.
3. Test access to the EKS Kubernetes API server endpoint

Of the other Terraform output values, the value ssm_test is provided to aid in quickly testing the connectivity from the Client EC2 instance to the private EKS cluster via PrivateLink. Copy the output value, which looks like the snippet shown below (as an example), and paste it into your terminal to execute and check the connectivity. If configured correctly, the value returned should be ok. A successful test validates that:

The private hosted zone override for the EKS API server endpoint worked
PrivateLink access worked through the NLB all the way to the private API endpoints of the EKS cluster, both of which are private within the Service Provider VPC
The EKS Kubernetes API server endpoint is ready and functional

Note: The following snippet is shown only as an example.

COMMAND="curl -ks https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.gr7.us-west-2.eks.amazonaws.com/readyz"
COMMAND_ID=$(aws ssm send-command --region region-name \
  --document-name "AWS-RunShellScript" \
  --parameters "commands=[$COMMAND]" \
  --targets "Key=instanceids,Values=i-XXXXXXXXXXXXXXXXX" \
  --query 'Command.CommandId' \
  --output text)
aws ssm get-command-invocation --region region-name \
  --command-id $COMMAND_ID \
  --instance-id i-XXXXXXXXXXXXXXXXX \
  --query 'StandardOutputContent' \
  --output text

4. End-to-end testing with kubectl

On the Client EC2 instance, creating the kubeconfig file and interacting with the EKS cluster's API server with kubectl should work seamlessly without any additional configuration, because the IAM role associated with the Client EC2 instance is:

added to the aws-auth ConfigMap and mapped to the Kubernetes group system:masters (cluster admin role), and
granted, through the attached IAM policies, the permissions eks:DescribeCluster and eks:ListClusters

Follow the steps below to access the cluster's API server with kubectl from the Client EC2 instance.

Log into the Client EC2 instance

Start a new AWS Systems Manager session (SSM session) on the Client EC2 instance using the provided ssm_start_session output value. It should look similar to the following code snippet (shown as an example). Copy the output value and paste it into your terminal to execute. Your terminal will now be connected to the Client EC2 instance. Note: The following snippet is shown only as an example.

aws ssm start-session --region us-west-2 --target i-XXXXXXXXXXXXXXXXX

Update kubeconfig

On the Client EC2 machine, run the following command to update the local ~/.kube/config file and enable access to the cluster:

aws eks update-kubeconfig --region us-west-2 --name privatelink-access

Test complete access with kubectl

Test access by listing the pods running on the cluster:

kubectl get pods -A

The output should look similar to the sample shown below:

NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-4f8g8             1/1     Running   0          1m
kube-system   coredns-6ff9c46cd8-59sqp   1/1     Running   0          1m
kube-system   coredns-6ff9c46cd8-svnpb   1/1     Running   0          2m
kube-system   kube-proxy-mm2zc           1/1     Running   0          1m

Cleaning up

Before we destroy or tear down all the resources created, we need to ensure that the cluster state is restored so Terraform can do a complete cleanup. This means making the cluster API endpoint public again.
Review the output value for cluster_endpoint_public; it should look similar to the snippet below:

aws eks update-cluster-config \
  --region us-west-2 \
  --name privatelink-access \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

Copy the command and run it in a terminal session to make the cluster API endpoint public, and wait until the operation completes. Once you've verified that the cluster API endpoint is public, proceed to clean up all the AWS resources created by Terraform by running the following command:

terraform destroy --auto-approve

Conclusion

In this post, we described an innovative blueprint for harnessing AWS PrivateLink as a solution to enable private, cross-VPC, and cross-account access to the Amazon EKS Kubernetes API. The solution combines PrivateLink, EventBridge, Lambda functions, and a Route 53 private hosted zone, built as part of the EKS Blueprints, to provide an effective way to ensure private access to the EKS Kubernetes API, enhancing security and compliance for businesses running microservices across multiple VPCs and accounts.

Reference

De-mystifying cluster networking for Amazon EKS worker nodes View the full article
  25. Amazon GuardDuty has incorporated new machine learning techniques to more accurately detect anomalous activities indicative of threats to your Amazon Elastic Kubernetes Service (Amazon EKS) clusters. This new capability continuously models Kubernetes audit log events from Amazon EKS to detect highly suspicious activity such as unusual user access to Kubernetes secrets that can be used to escalate privileges, and suspicious container deployments with images not commonly used in the cluster or account. The new threat detections are available for all GuardDuty customers that have GuardDuty EKS Audit Log Monitoring enabled. View the full article
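For reference, the prerequisite mentioned above can be checked from the CLI. The sketch below is a best-effort illustration; the region is a placeholder, and the data-sources shorthand should be verified against the current GuardDuty CLI reference before use.

# Find the detector in this Region and check whether EKS Audit Log Monitoring is enabled (region is a placeholder)
DETECTOR_ID=$(aws guardduty list-detectors --region us-west-2 --query 'DetectorIds[0]' --output text)
aws guardduty get-detector --region us-west-2 --detector-id "$DETECTOR_ID" \
  --query 'DataSources.Kubernetes.AuditLogs.Status' --output text

# Enable it if needed (shape shown as an assumption; verify against the CLI reference)
aws guardduty update-detector --region us-west-2 --detector-id "$DETECTOR_ID" \
  --data-sources 'Kubernetes={AuditLogs={Enable=true}}'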