Showing results for tags 'eks'.

  1. Amazon CloudWatch Container Insights with Enhanced Observability for EKS now auto-discovers critical health and performance metrics from your NVIDIA GPUs and delivers them in automatic dashboards to enable faster problem isolation and troubleshooting for your AI/ML workloads. Container Insights with Enhanced Observability delivers out-of-the-box trends and patterns on your infrastructure health and removes the overhead of manual dashboard and alarm setup, saving you time and effort. View the full article
  2. Starting today, you can use private cluster endpoints with AWS Batch on Amazon Elastic Kubernetes Service (Amazon EKS). You can bring existing private Amazon EKS clusters and create a compute environment on AWS Batch. This setup enables AWS Batch to run Amazon EKS jobs on clusters with private endpoints. View the full article
  3. Today, AWS announces expanded log coverage support for Amazon Security Lake, which now includes Amazon Elastic Kubernetes Service (Amazon EKS) audit logs. This enhancement allows you to automatically centralize and normalize your Amazon EKS audit logs in Security Lake, making it easier to monitor and investigate potentially suspicious activities in your Amazon EKS clusters. View the full article
  4. Today, we are announcing general availability of Amazon Linux 2023 (AL2023) on Amazon Elastic Kubernetes Service (EKS). AL2023 is the next generation of Amazon Linux from Amazon Web Services and is designed to provide a secure, stable, high-performance environment to develop and run your cloud applications. EKS customers can enjoy the benefits of AL2023 by using the standard AL2023-based EKS optimized Amazon Machine Image (AMI) with Managed Node Groups, self-managed nodes, and Karpenter. View the full article
  5. This post is co-written with Rivlin Pereira, Staff DevOps Engineer at VMware. Introduction: VMware Tanzu CloudHealth is the cloud cost management platform of choice for more than 20,000 organizations worldwide that rely on it to optimize and govern the largest and most complex multi-cloud environments. In this post, we will talk about how VMware Tanzu CloudHealth migrated their container workloads from self-managed Kubernetes on Amazon EC2 to Amazon Elastic Kubernetes Service (Amazon EKS). We will discuss lessons learned and how the migration helped achieve the eventual goal of making cluster deployments fully automated with a one-click solution that is scalable and secure, and of reducing the overall operational time spent managing these clusters. This migration led them to scale their production cluster footprint from 2,400 pods running in a kOps (short for Kubernetes Operations) cluster on Amazon Elastic Compute Cloud (Amazon EC2) to over 5,200 pods on Amazon EKS. The Amazon EKS cluster footprint has also grown from a handful of clusters just after the migration to 10 clusters in total across all environments, and growing. Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS Cloud and in on-premises data centers. In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks. Previous self-managed K8s clusters and related challenges: The self-managed Kubernetes clusters were deployed using kOps. These clusters required significant Kubernetes operational knowledge and maintenance time. While the clusters were quite flexible, the VMware Tanzu CloudHealth DevOps team was responsible for the inherent complexity, including custom Amazon Machine Image (AMI) creation, security updates, upgrade testing, control-plane backups, cluster upgrades, networking, and debugging. 
The clusters grew significantly and encountered limits that could not be corrected without significant downtime, which is when the team considered moving to a managed offering. Key drivers to move to a managed solution: The VMware Tanzu CloudHealth DevOps team had the following requirements for Amazon EKS clusters: Consistently reproducible and deployed with a one-click solution for automated Infrastructure-as-Code (IaC) deployment across environments. Consistent between workloads. Deployable in multiple regions. Services can migrate from the old clusters to the new clusters with minimal impact. New clusters provide more control over roles and permissions. New cluster lifecycle tasks (i.e., creation, on-boarding users, and cluster upgrades) reduce operational load. Key technical prerequisite evaluation: We will discuss a couple of technical aspects customers should evaluate in order to avoid surprises during the migration. Amazon EKS uses upstream Kubernetes; therefore, applications that run on Kubernetes should run natively on Amazon EKS without modification. Key technical considerations are discussed in Migrating from self-managed Kubernetes to Amazon EKS?; here are the ones the VMware team evaluated and for which they implemented the required changes: Kubernetes versions: The VMware team was running Kubernetes version 1.16 on kOps. For the Amazon EKS migration, the team started with version 1.17, and post-migration they have upgraded to 1.24. Security: Authentication for the Amazon EKS cluster: The kOps clusters were configured to use Google OpenID for identity and authentication. Amazon EKS supports both OpenID Connect (OIDC) identity providers and AWS Identity and Access Management (AWS IAM) as methods to authenticate users to your cluster. To take advantage of Amazon EKS support for AWS IAM for identity and authentication, VMware made user configuration and authentication workflow changes to access the new clusters. 
Please see Updating kubeconfig for more information. AWS IAM roles for service accounts: VMware had configured AWS IAM roles for pods using kube2iam on the kOps self-managed clusters. With this setup, pod-level permissions were granted by IAM via a proxy agent that had to run on every node. This kOps setup resulted in issues at scale. Amazon EKS enables a different approach: AWS permissions are granted directly to pods per service account via a mutating webhook on the control plane. Communication for identity, authentication, and authorization happens only with the AWS API endpoints and the Kubernetes API, eliminating any proxy agent requirement. Review Introducing fine-grained IAM roles for service accounts for more information. The migration to IAM roles for service accounts (IRSA) on Amazon EKS fixed the issues encountered with kube2iam when running at larger scales and has other benefits: Least privilege: By using the IAM roles for service accounts feature, they no longer need to grant extended permissions to the worker node IAM role so that pods on that node can call AWS APIs. You can scope IAM permissions to a service account, and only pods that use that service account have access to those permissions. Credential isolation: A container can only retrieve credentials for the IAM role that is associated with the service account to which it belongs. A container never has access to credentials that are intended for another container that belongs to another pod. Auditability: Access and event logging is available through AWS CloudTrail to help ensure retrospective auditing. Networking: VMware had set up the kOps clusters using Calico as an overlay network. In Amazon EKS, they decided to implement the Amazon VPC CNI plugin for Kubernetes, as it assigns each pod an IP from the VPC Classless Inter-Domain Routing (CIDR) range. This is accomplished by adding a secondary IP to an elastic network interface on the EC2 node. 
Each Amazon EC2 node type supports a set number of elastic network interfaces (ENIs) and a corresponding number of secondary IPs assignable per ENI. Each EC2 instance starts with a single ENI attached and adds ENIs as required by pod assignment. VPC and subnet sizing: VMware created a single VPC with a /16 CIDR range in production to deploy the Amazon EKS cluster. For the development and staging environments, they created multiple Amazon EKS clusters in a single VPC with a /16 CIDR to save on IP space. For each VPC, private and public subnets were created, and the Amazon EKS clusters were created in the private subnets. A NAT gateway was configured for outbound public access. Subnets were also appropriately tagged for internal use. Tooling to create Amazon EKS clusters: VMware reviewed AWS recommended best practices for cluster configuration. For cluster deployment, a common practice is IaC, and there are several options, such as CloudFormation, eksctl (the official CLI tool of Amazon EKS), the AWS Cloud Development Kit (AWS CDK), and third-party solutions like Terraform. They decided to automate the deployment of the Amazon EKS cluster using a combination of community Terraform modules and some Terraform modules developed in-house. Customers can also check Amazon EKS Blueprints for cluster creation. Amazon EKS node groups (managed/unmanaged): Amazon EKS allows the use of both managed and self-managed node groups. Managed node groups offer significant advantages at no extra cost, including offloading OS updates and security patching by using the Amazon EKS optimized AMI, where Amazon EKS is responsible for building patched versions of the AMI when bugs or issues are reported. Amazon EKS follows the shared responsibility model for Common Vulnerabilities and Exposures (CVEs) and security patches on managed node groups; it is the customer's responsibility to deploy these patched AMI versions to their managed node groups. 
Other features of managed node groups include automatic node draining via the Kubernetes API during terminations or updates, respect for pod disruption budgets, and automatic labeling to enable Cluster Autoscaler. Unless there is a specific configuration that cannot be fulfilled by a managed node group, the recommendation is to use managed node groups. Please note that Cluster Autoscaler is not enabled for you by default on Amazon EKS and has to be deployed by the customer. VMware used managed node groups for the migration to Amazon EKS. Solution overview: Migration execution: With an architecture defined, the next step was to create the AWS infrastructure and execute the migration of workloads from the self-managed Kubernetes clusters to Amazon EKS. Using IaC, a parallel set of environments was provisioned for the Amazon EKS clusters alongside the existing kOps infrastructure. This allowed any necessary changes to be made to the Kubernetes manifests while retaining the capability to deploy changes to the existing infrastructure as needed. Figure a. Pre Cut-over. Walkthrough: Once the infrastructure was provisioned, changes were made to the manifests to align with Amazon EKS 1.17 and the particular integrations that would be required. For example, the annotations to enable IRSA were added alongside the existing kube2iam metadata to allow the workloads to be deployed in both sets of infrastructure in parallel. Kube2iam on the kOps cluster provided AWS credentials by redirecting Amazon EC2 metadata API traffic from Docker containers to an agent container running on each instance, which called the AWS API to retrieve temporary credentials and returned them to the caller. This function was enabled via an annotation on the pod specifications. 
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: role-arn
    iam.amazonaws.com/external-id: external-id

To configure a pod to use IAM roles for service accounts, the service account was annotated instead of the pod:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME

After testing was performed in a pre-production environment, the workloads were promoted to the production environment, where further validation testing was completed. At this stage, it was possible to start routing traffic to the workloads running on Amazon EKS. In this particular architecture, this was accomplished by re-configuring the workloads consuming the APIs to incrementally route a certain percentage of traffic to the new API endpoints running on Amazon EKS. This allowed the performance characteristics of the new infrastructure to be validated gradually as traffic increased, and preserved the ability to rapidly roll back the change if issues were encountered. Figure b. Partial Cut-over. Once production traffic was entirely routed to the new infrastructure and confidence was established in the stability of the new system, the original kOps clusters could be decommissioned and the migration completed. Figure c. Full Cut-over. Lessons learned: The following takeaways can be drawn from this migration experience: Adequately plan for heterogeneous worker node instance types. VMware started with a memory-optimized Amazon EC2 instance family for their cluster node group, but as the workloads run on Amazon EKS diversified, along with their compute requirements, it became clear that they needed to offer other instance types. This led to dedicated node groups for specific workload profiles (e.g., for compute-heavy workloads). It has also led VMware to investigate Karpenter, an open-source, flexible, high-performance Kubernetes cluster autoscaler built by AWS. 
It helps improve application availability and cluster efficiency by rapidly launching right-sized compute resources in response to changing application load. Design VPCs to match the requirements of Amazon EKS networking. The initial VPC architecture implemented by VMware was adequate to allow the number of workloads on the cluster to grow, but over time the number of available IPs became constrained. This was resolved by monitoring the available IPs and configuring the VPC CNI with some optimizations for their architecture. You can review the recommendations for sizing VPCs for Amazon EKS in the best practices guide. As Amazon EKS clusters grow, optimizations will likely have to be made to core Kubernetes and third-party components. For example, VMware had to optimize the configuration of Cluster Autoscaler for performance and scalability as the number of nodes grew. Similarly, it was necessary to leverage NodeLocal DNSCache to reduce the pressure on CoreDNS as the number of workloads and pods increased. Using automation and infrastructure-as-code is recommended, especially as Amazon EKS cluster configuration becomes more complex. VMware took the approach of provisioning the Amazon EKS clusters and related infrastructure using Terraform, and ensured that Amazon EKS upgrade procedures were considered. Conclusion: In this post, we walked you through how VMware Tanzu CloudHealth (formerly CloudHealth) migrated their container workloads from self-managed Kubernetes clusters running on kOps to AWS-managed Amazon EKS, with the eventual goal of making cluster deployments a fully automated, one-click, scalable, and secure solution that reduced the overall operational time spent managing these clusters. We walked you through important technical prerequisites to consider for a migration to Amazon EKS, some challenges that were encountered during or after the migration, and lessons learned. 
We encourage you to evaluate Amazon EKS for migrating workloads from kOps to a managed offering. Rivlin Pereira, VMware Tanzu Division: Rivlin Pereira is a Staff DevOps Engineer in the VMware Tanzu Division. He is very passionate about Kubernetes and works on the CloudHealth platform, building and operating cloud solutions that are scalable, reliable, and cost-effective. View the full article
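The ENI and secondary-IP limits described in the post determine how many pods the VPC CNI can place on a node. As a minimal Python sketch (not from the original post), the commonly published formula in the VPC CNI's default mode is maxPods = ENIs × (IPv4 addresses per ENI − 1) + 2; the m5.large figures below come from AWS's published ENI limits, and other instance types have different values:

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """Max pods schedulable under the VPC CNI's default (non-prefix-delegation) mode.

    Each ENI reserves its primary IP for the node itself, so only
    (ips_per_eni - 1) secondary IPs per ENI are assignable to pods;
    the +2 accounts for pods that use host networking (e.g., kube-proxy).
    """
    return enis * (ips_per_eni - 1) + 2

# m5.large supports 3 ENIs with 10 IPv4 addresses each
print(max_pods(3, 10))  # → 29
```

This kind of arithmetic is one reason the /16 VPC CIDRs mentioned above can still run out of IPs: every pod consumes a routable VPC address.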
  6. Amazon Elastic Kubernetes Service (EKS) customers can now leverage EC2 security groups to secure applications in clusters using the Internet Protocol version 6 (IPv6) address space. View the full article
  7. Amazon Elastic Kubernetes Service (EKS) now surfaces cluster-related health issues in the EKS console and API, providing administrators enhanced visibility into the health of their clusters. Cluster health status information helps customers to quickly diagnose, troubleshoot, and remedy issues with their clusters, enabling them to run more up-to-date and secure application environments. View the full article
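The EKS API exposes the cluster health information described above in the `health.issues` field of the `DescribeCluster` response. A small sketch of flattening that field for display; the helper function and sample response are illustrative, not part of the announcement:

```python
def summarize_health_issues(cluster: dict) -> list[str]:
    """Flatten an EKS cluster's 'health' block into readable strings."""
    issues = cluster.get("health", {}).get("issues", [])
    return [
        f"{i['code']}: {i['message']} ({', '.join(i.get('resourceIds', []))})"
        for i in issues
    ]

# In practice this dict would come from:
#   boto3.client("eks").describe_cluster(name="my-cluster")["cluster"]
sample = {
    "name": "my-cluster",
    "health": {"issues": [{"code": "Ec2SubnetNotFound",
                           "message": "Subnet not found",
                           "resourceIds": ["subnet-0abc"]}]},
}
print(summarize_health_issues(sample))
```

An empty list from this helper corresponds to a healthy cluster in the console view.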
  8. Today, we are excited to introduce the EKS Developers Workshop, a comprehensive and beginner-friendly workshop designed specifically for developers embarking on their Kubernetes and Amazon Elastic Kubernetes Service (Amazon EKS) journey. This new workshop augments the existing EKS Workshop for cluster operators by focusing on developers and the unique tools and processes they utilize in the Kubernetes lifecycle. It provides an ideal starting point for those new to Kubernetes and Amazon EKS, emphasizing real-world coding practices and Kubernetes integrations. What is the EKS Developers Workshop? The EKS Developers Workshop differentiates itself from the standard EKS Workshop by focusing on the foundational steps leading up to and including Kubernetes. It offers a unique pathway for refactoring existing applications for containerized and Kubernetes environments. Throughout the workshop, we use the FastAPI Book Management Application, which serves as the practical example for applying the concepts learned in the workshop. Key elements of the workshop include: Application refactoring: Learn about the types of Kubernetes resources you’ll deploy in the workshop, strategies for refactoring applications according to The Twelve-Factor App methodology, and how to set up your local development environment. Containerization made simple: Learn the ins and outs of building and running cost-optimized containers, uploading images to and integrating with Amazon Elastic Container Registry (Amazon ECR), then create a multi-architecture container image using Finch or Docker, which you’ll reference in your Kubernetes workloads. Kubernetes at your pace: Learn how to create a local cluster using minikube. Deploy to Kubernetes, referencing your container image in Amazon ECR, then access and monitor Kubernetes resources through the minikube dashboard. 
Then do some load testing and right-size your pods using minikube and the Metrics Server, updating your Kubernetes manifests to include optimal resource limits and requests. Hands-on with Amazon EKS: Learn how to set up an Amazon EKS cluster using eksctl (with AWS Fargate or Managed Node Groups as the compute option), and configure traffic with the AWS Load Balancer Controller (LBC). Set up persistent data storage using either the Amazon Elastic File System (Amazon EFS) or the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) drivers. Deploy to Amazon EKS, referencing the container image in Amazon ECR, then monitor your workloads using the Amazon EKS Console. Next, implement distributed tracing with AWS Distro for OpenTelemetry (ADOT), featuring AWS X-Ray. Finalize your setup by migrating to Amazon Aurora PostgreSQL, and integrate with AWS Secrets Manager for robust security. In the coming weeks, we’ll be launching a new lab exercise that shows how to automatically build and deploy Kubernetes workloads to Amazon EKS using Amazon CodeCatalyst, a fully managed, unified software development service that aims to streamline the entire software development lifecycle. Get started with the EKS Developers Workshop Currently featuring a Python-focused curriculum, the workshop offers a self-paced, comprehensive guide through the entire Kubernetes lifecycle. In response to community input, future iterations will explore additional programming languages. This workshop provides an end-to-end journey through Kubernetes—from crafting multi-architecture container images and grasping Kubernetes concepts to integrating with Amazon EKS and various AWS services. Each chapter incrementally builds on the last, offering a transparent and cohesive learning experience using the FastAPI Book Management Application. If you want to stop the workshop at any time, we include clean-up steps for each chapter. 
Create an AWS account with administrative permissions at https://aws.amazon.com/resources/create-account/. Access the workshop at https://developers.eksworkshop.com/. Tenets of the EKS Developers Workshop Under the hood, we’re proud of our thriving community of experts, including a number of members of the Amazon EKS Support Desk, who guide the workshop’s evolution. As part of our ongoing commitment to provide a comprehensive and enriching learning experience, the Amazon EKS Developers Workshop is built on foundational tenets that ensure its relevance, effectiveness, and user-centric approach. These tenets guide every aspect of the workshop and are tailored specifically for Amazon EKS Developers. Tenets of the EKS Developers Workshop: Kubernetes beginner-friendly: Our goal is to employ Kubernetes and container tools that are straightforward and easy to grasp, while demystifying fundamental Kubernetes concepts for newcomers embarking on their Kubernetes journey. Comprehensive transparency: We strive to reveal the inner workings, from the application level to containers and Kubernetes, enabling developers to understand and fully adopt our practices. Customer-centric focus: We are committed to actively utilizing insights from the feedback survey to guide and refine our workshop’s future development, ensuring it continually meets and exceeds user needs. If you’re interested in contributing to the workshop, then we welcome your expertise and ideas. Visit our contribution guidelines at Authoring Guide for Contributors to learn how you can be a part of this collaborative project. Your contributions can help us expand and refine the workshop, making it an even more valuable resource for the Kubernetes and Amazon EKS community. AWS Launch Team The EKS Developers Workshop got started based on my personal journey of developing the FastAPI Book Management Application and deploying it to Kubernetes with minikube, then Amazon EKS. 
As we unveil the workshop, it’s crucial to acknowledge the incredible efforts of other contributors to this launch. This team, comprising the Steering Committee and Working Groups, has been instrumental in shaping the workshop. Their collective vision and hard work have been pivotal in creating a workshop that truly resonates with the developer community. We are immensely grateful to each member for their contributions and continued support in this journey. Steering committee: A special thanks to Joe North for his insights as our project’s Containers & Kubernetes Architect, and Smruti Tripathy for leading AWS Integrations Architecture. Working groups: Specialized working groups, chaired by dedicated leaders like Dola Krishnudu Battula, Asiel Bencomo, Kenichiro Hiraiwa, and Joe North, have worked tirelessly to ensure the workshop’s material is both cutting-edge and practical, alongside their maintainers who contributed to the workshop, James Gaines, Bhavesh Dave, Shamanth Devagari, Asiel Bencomo, Premdass Ravidass, Sahil Sethi, Jan Klotter, Deepankar Tiwari, Sanketh Jain, Udit Sidana, and Pankaj Walke. Conclusion The EKS Developers Workshop stands as a gateway for developers to immerse themselves in the world of Kubernetes and Amazon EKS. It’s tailored to provide a seamless, step-by-step learning experience, making it an invaluable resource for any developer looking to refine their skills in application refactoring, containerization, and cloud-based Kubernetes deployments. Join us at https://developers.eksworkshop.com to embark on your path to mastering Kubernetes on Amazon EKS. View the full article
  9. Amazon GuardDuty has incorporated new machine learning techniques to more accurately detect anomalous activities indicative of threats to your Amazon Elastic Kubernetes Service (Amazon EKS) clusters. This new capability continuously models Kubernetes audit log events from Amazon EKS to detect highly suspicious activity such as unusual user access to Kubernetes secrets that can be used to escalate privileges, and suspicious container deployments with images not commonly used in the cluster or account. The new threat detections are available for all GuardDuty customers that have GuardDuty EKS Audit Log Monitoring enabled. View the full article
  10. Amazon CloudWatch Container Insights now delivers enhanced observability for Amazon Elastic Kubernetes Service (EKS) with out-of-the-box detailed health and performance metrics, including container level EKS performance metrics, Kube-state metrics and EKS control plane metrics for faster problem isolation and troubleshooting. View the full article
  11. Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate provides serverless compute for containerized workloads that run on Kubernetes. By eliminating the need for infrastructure management with AWS Fargate, customers can avoid the operational overhead of scaling, patching, and securing instances. AWS Fargate provides a secure and controlled environment for container execution. Consequently, customers are not allowed to grant extra privileges to running containers. As a result, traditional methods for enhancing visibility and ensuring container runtime security will not work. This post demonstrates the use of Aqua’s Cloud Native Security Platform on AWS Fargate to deliver runtime security without requiring added privileges. Aqua’s platform is compatible with containers deployed on various infrastructures, such as Amazon Elastic Container Service (Amazon ECS) and Amazon EKS. This post will focus on Amazon EKS. The container runtime security element of Aqua’s platform, the MicroEnforcer, is an agent that can be added to Kubernetes pods and can run unprivileged on AWS Fargate. Aqua’s platform injects the MicroEnforcer into a Kubernetes pod and enforces runtime security without the user having to make changes to the application or their deployment specifications. These runtime protection capabilities are delivered as part of a comprehensive cloud-native security platform spanning vulnerability management, cloud security posture management, supply chain security, Kubernetes security and assurance, and Center for Internet Security (CIS) benchmarking. Aqua Security is an AWS Advanced Technology Partner with the AWS Containers Competency. They provide highly integrated security controls that customers use to build full code-to-production security across their continuous integration/continuous deployment (CI/CD) pipeline, with an orchestration layer and runtime environments... View the full article
  12. When the margin for error is razor thin, it is best to assume that anything that can go wrong will go wrong. AWS customers are increasingly building resilient workloads that continue to operate while tolerating faults in their systems. When customers build mission-critical applications on AWS, they have to make sure that every piece of their system is designed in such a way that the system continues to work while things go wrong. AWS customers have applied the principle of design for failure to build scalable mission-critical systems that meet the highest standards of reliability. The best practices established in the AWS Well-Architected Framework have allowed teams to improve systems continuously while minimizing business disruptions. Let’s look at a few key design principles we have seen customers use to operate workloads that cannot afford downtime... View the full article
  13. WebSocket is a common communication protocol used in web applications to facilitate real-time bi-directional data exchange between client and server. However, when the server has to maintain a direct connection with the client, it can limit the server’s ability to scale down when there are long-running clients. This scale down can occur when nodes are underutilized during periods of low usage. In this post, we demonstrate how to redesign a web application to achieve auto scaling even for long-running clients, with minimal changes to the original application... View the full article
  14. AWS Fargate is a serverless compute engine for running Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic Container Service (Amazon ECS) workloads without managing the underlying infrastructure. AWS Fargate makes it easy to provision and scale secure, isolated, and right-sized compute capacity for containerized applications. As a result, teams are increasingly choosing AWS Fargate to run workloads in Kubernetes clusters. It is a common practice for multiple teams to share a single Kubernetes cluster. In such cases, cluster administrators often need to allocate cost based on a team’s resource usage. Amazon EKS customers can deploy the Amazon EKS optimized bundle of Kubecost for cluster cost visibility when using Amazon EC2. However, in this post, we show you how to analyze the costs of running workloads on EKS Fargate using the data in the AWS Cost and Usage Report (CUR). Using Amazon QuickSight, you can visualize your AWS Fargate spend and allocate cost by cluster, namespace, and deployment... View the full article
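The cost-allocation idea above boils down to grouping CUR line items by cluster and namespace before visualizing them. A minimal sketch of that aggregation step; the field names here are illustrative stand-ins, not the exact CUR column schema:

```python
from collections import defaultdict

def allocate_fargate_cost(rows):
    """Sum unblended cost per (cluster, namespace).

    `rows` mimics records selected from the AWS Cost and Usage Report;
    the keys are simplified, illustrative names rather than real CUR columns.
    """
    totals = defaultdict(float)
    for r in rows:
        totals[(r["cluster"], r["namespace"])] += r["unblended_cost"]
    return dict(totals)

rows = [
    {"cluster": "prod", "namespace": "team-a", "unblended_cost": 0.40},
    {"cluster": "prod", "namespace": "team-b", "unblended_cost": 0.25},
    {"cluster": "prod", "namespace": "team-a", "unblended_cost": 0.10},
]
print(allocate_fargate_cost(rows))
```

In the post's actual pipeline this grouping would be expressed as a QuickSight or Athena query over the CUR data rather than in application code.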
  15. Karpenter is an open-source cluster autoscaler that provisions right-sized nodes in response to unschedulable pods based on aggregated CPU, memory, volume requests, and other Kubernetes scheduling constraints (e.g., affinities and pod topology spread constraints), which simplifies infrastructure management. In this post, we’ll describe the mechanism for patching Kubernetes worker nodes provisioned with Karpenter through a gated Karpenter feature called Drift. If you have many worker nodes across multiple Amazon EKS clusters, then this mechanism can help you continuously patch at scale… View the full article
  16. Today, we’re announcing the preview of Amazon Elastic Kubernetes Service (EKS) extended support for Kubernetes versions. You can now run Amazon EKS clusters on a Kubernetes version for up to 26 months from the time the version is generally available on Amazon EKS. Extended Support is available as a free preview for all Amazon EKS customers, starting today with Kubernetes version 1.23… View the full article
  17. In this post, we’ll illustrate an enterprise IT scenario in which VPCs are overseen by a central network team, including configuration of VPC resources such as IP allocation, route policies, internet gateways, NAT gateways, security groups, peering, and on-premises connectivity. The network account, which serves as the owner of the centralized VPC, shares subnets with a participant application account managed by a platform team, both of which are part of the same organization. In this use case, the platform team owns the management of Amazon EKS cluster. We’ll also cover the key considerations of using shared subnets in Amazon EKS... View the full article
  18. The Amazon Elastic Kubernetes Service (Amazon EKS) team is pleased to announce support for Kubernetes version 1.28 for Amazon EKS and Amazon EKS Distro. Amazon EKS Anywhere (release 0.18.0) also supports Kubernetes 1.28. The theme for this version was chosen as a play on words that combines plant and Kubernetes to evoke the image of a garden. Hence, the fitting release name, Planternetes. In their official release announcement, the Kubernetes release team said this of the release, “people behind this release come from a wide range of backgrounds.” View the full article
  19. Today, we are excited to announce that Amazon EMR on EKS now supports managed Apache Flink, available in public preview. With this launch, customers who already use EMR can run their Apache Flink applications alongside other types of applications on the same Amazon EKS cluster, helping improve resource utilization and simplify infrastructure management. Customers who already run big data frameworks on Amazon EKS can now let Amazon EMR automate provisioning and management. View the full article
  20. Amazon GuardDuty announces a new capability in GuardDuty EKS Runtime Monitoring that lets you selectively configure which Amazon Elastic Kubernetes Service (Amazon EKS) clusters should be monitored for threat detection. Previously, configurability was available at the account level only. With this added cluster-level configurability, customers can now selectively monitor individual EKS clusters, or continue to use account-level configurability to monitor all EKS clusters in a given account and Region. View the full article
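The GuardDuty documentation describes cluster-level selection as tag-driven: a `GuardDutyManaged` tag on the EKS cluster opts it in or out of runtime monitoring. The sketch below only builds the tagging request dictionary (the shape accepted by boto3's `eks.tag_resource`); the tag key and the inclusion/exclusion semantics are taken from the GuardDuty docs and should be verified against the current version, and actually applying the tag requires AWS credentials, which is why only the request construction is shown.

```python
# Hypothetical sketch: opting an EKS cluster in or out of GuardDuty EKS
# Runtime Monitoring via the GuardDutyManaged cluster tag (tag key per the
# GuardDuty docs; verify the semantics for your GuardDuty configuration).
def monitoring_tag_request(cluster_arn, monitored):
    """Build the kwargs for eks.tag_resource() to set the monitoring tag."""
    return {
        "resourceArn": cluster_arn,
        "tags": {"GuardDutyManaged": "true" if monitored else "false"},
    }
```

With credentials in place you would pass the result straight through, e.g. `boto3.client("eks").tag_resource(**monitoring_tag_request(arn, False))` to exclude one cluster while account-level monitoring stays on.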
  21. We are excited to announce support for Amazon Linux 2023 (AL2023) on Amazon EMR on EKS. Customers can now use AL2023 as the operating system, together with Java 17 as the Java runtime, to run Spark workloads on Amazon EMR on EKS. This provides customers a secure, stable, high-performance environment for developing and running their applications, and gives them access to the latest enhancements in the kernel, toolchain, glibc, OpenSSL, and other system libraries and utilities. View the full article
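On EMR on EKS, the environment a Spark job runs in is selected by the release label on the `StartJobRun` request. The sketch below builds such a request in the shape accepted by boto3's `emr-containers` client `start_job_run` call; the release label value, cluster ID, role ARN, and entry point are illustrative assumptions, and the exact label that ships AL2023 with Java 17 should be taken from the EMR on EKS release notes.

```python
# Hypothetical sketch: an EMR on EKS StartJobRun request. The releaseLabel
# chooses the runtime environment (e.g., an AL2023/Java 17 release); all
# concrete values here are placeholders.
def build_start_job_run(virtual_cluster_id, role_arn, entry_point, release_label):
    """Build kwargs for boto3 emr-containers start_job_run()."""
    return {
        "virtualClusterId": virtual_cluster_id,
        "executionRoleArn": role_arn,
        "releaseLabel": release_label,  # assumption: pick an AL2023-based label
        "jobDriver": {
            "sparkSubmitJobDriver": {
                "entryPoint": entry_point,  # e.g. an S3 path to the Spark app
            }
        },
    }
```

Submitting it would then be `boto3.client("emr-containers").start_job_run(**build_start_job_run(...))` once credentials and a virtual cluster exist.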
  22. Apache Spark revolutionized big data processing with its distributed computing capabilities, which enabled efficient data processing at scale. It offers the flexibility to run on traditional Central Processing Units (CPUs) as well as specialized Graphics Processing Units (GPUs), which provide distinct advantages for various workloads. As the demand for faster and more efficient machine learning (ML) workloads grows, specialized hardware acceleration becomes crucial. This is where NVIDIA GPUs and the Compute Unified Device Architecture (CUDA) come into the picture. To further enhance the capabilities of NVIDIA GPUs within the Spark ecosystem, NVIDIA developed Spark-RAPIDS, an extension library that uses the RAPIDS libraries built on CUDA to enable high-performance data processing and ML training on GPUs. By combining Spark's distributed computing framework with the parallel processing power of GPUs, Spark-RAPIDS significantly improves the speed and efficiency of analytics and ML workloads... View the full article
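Concretely, Spark-RAPIDS plugs into Spark as a driver/executor plugin selected through Spark configuration. The sketch below collects the relevant settings as a plain dict and renders them as `spark-submit --conf` arguments; the plugin class name follows the NVIDIA spark-rapids documentation, while the GPU resource amounts are illustrative assumptions to be tuned per cluster.

```python
# A minimal sketch (not the article's code) of the Spark configuration that
# activates the Spark-RAPIDS plugin so SQL/DataFrame work runs on NVIDIA GPUs.
def rapids_conf():
    return {
        "spark.plugins": "com.nvidia.spark.SQLPlugin",  # load the RAPIDS SQL plugin
        "spark.rapids.sql.enabled": "true",             # route supported SQL ops to the GPU
        "spark.executor.resource.gpu.amount": "1",      # assumption: 1 GPU per executor
        "spark.task.resource.gpu.amount": "0.25",       # assumption: 4 tasks share a GPU
    }

def to_submit_args(conf):
    """Render a conf map as spark-submit --conf arguments."""
    return [f"--conf {key}={value}" for key, value in sorted(conf.items())]
```

The same key/value pairs could equally be fed to a `SparkSession.builder.config(...)` chain instead of `spark-submit`.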
  23. While Amazon ECS and Amazon EKS serve a similar purpose, they have several fundamental differences. Here's what you should know. View the full article
  24. This post demonstrates a proof-of-concept implementation that uses Kubernetes to execute code in response to an event. View the full article
  25. We are excited to announce that Amazon EMR on EKS now supports programmatic execution of Jupyter notebooks when running interactive workloads via managed endpoints. Amazon EMR on EKS enables customers to run open-source big data frameworks such as Apache Spark on Amazon EKS. Amazon EMR on EKS customers can set up and use a managed endpoint (available in preview) to run interactive workloads using integrated development environments (IDEs) such as EMR Studio. View the full article