Showing results for tags 'amazon ecs'.

Found 10 results

  1. We are excited to announce that AWS Fargate for Windows containers on Amazon ECS has reduced infrastructure pricing by up to 49%. Fargate simplifies the adoption of modern container technology for Amazon ECS customers by making it even easier to run their Windows containers on AWS. With Fargate, customers no longer need to set up Auto Scaling groups or manage host instances for their applications. View the full article
  2. Amazon Elastic Container Service (Amazon ECS) launches support for configuring timeouts for service-to-service communication with its networking capability, ECS Service Connect. This feature enables you to set custom timeouts for Amazon ECS services running with Service Connect, supporting applications that serve long-running requests. Amazon ECS is a fully managed container orchestration service that makes it easier for you to deploy, manage, and scale containerized applications. Customers can use the ECS Service Connect capability to easily configure service discovery, connectivity, and traffic observability for services running in Amazon ECS. This helps you build applications faster by letting you focus on the application code and not on your networking infrastructure. View the full article
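For illustration, here is a hedged sketch of what setting a Service Connect timeout can look like with the AWS CLI. The cluster, service, namespace, and port names are placeholders, and the timeout field names are based on the Service Connect configuration schema; confirm them against the current Amazon ECS API reference before relying on them.

cat << 'EOF' > service-connect.json
{
  "enabled": true,
  "namespace": "demo-namespace",
  "services": [
    {
      "portName": "api",
      "clientAliases": [ { "port": 8080 } ],
      "timeout": {
        "idleTimeoutSeconds": 3600,
        "perRequestTimeoutSeconds": 120
      }
    }
  ]
}
EOF

aws ecs update-service \
  --cluster demo-cluster \
  --service demo-service \
  --service-connect-configuration file://service-connect.json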
  3. Today, Amazon Elastic Container Services (Amazon ECS) announced managed instance draining, a new capability that facilitates graceful shutdown of workloads deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances by safely stopping and rescheduling workloads to other, non-terminating instances. This capability enables customers to simplify infrastructure maintenance workflows, such as rolling out a new Amazon Machine Image (AMI) version, without needing to build custom solutions to gracefully shutdown instances without disrupting their workloads. View the full article
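As a rough, hedged sketch, managed instance draining is enabled on a capacity provider backed by an Auto Scaling group. The capacity provider name below is a placeholder and the managedDraining parameter name is an assumption, so check it against the current Amazon ECS CLI reference.

aws ecs update-capacity-provider \
  --name demo-ec2-capacity-provider \
  --auto-scaling-group-provider managedDraining=ENABLED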
  4. Introduction

We have observed a growing adoption of container services among both startups and established companies. This trend is driven by the ease of deploying applications and migrating from on-premises environments to the cloud. One platform of choice for many of our customers is Amazon Elastic Container Service (Amazon ECS). The powerful simplicity of Amazon ECS allows customers to scale from managing a single task to overseeing their entire enterprise application portfolio and to reach thousands of tasks. Amazon ECS eliminates the management overhead associated with running your own container orchestration service.

When working with customers, we have observed that there is a valuable opportunity to enhance the utilization of Amazon ECS events. Lifecycle events offer troubleshooting insights by linking service events with metrics and logs. Amazon ECS displays only the latest 100 events, which makes it difficult to review them retrospectively. Using Amazon CloudWatch Container Insights resolves this by storing Amazon ECS lifecycle events in an Amazon CloudWatch log group. This integration lets you analyze events retroactively, enhancing operational efficiency. Amazon EventBridge is a serverless event bus that connects applications seamlessly. Along with Container Insights, Amazon ECS can serve as an event source while Amazon CloudWatch Logs acts as the target in Amazon EventBridge. This enables post-incident analysis using Amazon CloudWatch Logs Insights. This post explains how to analyze Amazon ECS service events delivered through Container Insights, Amazon EventBridge, or both, using Amazon CloudWatch Logs Insights queries. These queries can significantly enhance your development and operational workflows.

Prerequisites

To work through the techniques presented in this guide, you need the following in your account:

An Amazon ECS cluster with an active workload.
Amazon EventBridge configured to stream events to Amazon CloudWatch Logs directly, or Amazon ECS CloudWatch Container Insights enabled.

Here is an elaborated guide to set up Amazon EventBridge to stream events to Amazon CloudWatch Logs or Container Insights.

Walkthrough

Useful lifecycle event patterns

The events that Amazon Elastic Container Service (Amazon ECS) emits can be categorized into four groups:

Container instance state change events – These events are triggered when there is a change in the state of an Amazon ECS container instance. This can happen for various reasons, such as starting or stopping a task, upgrading the Amazon ECS agent, or other scenarios.
Task state change events – These events are emitted whenever there is a change in the state of a task, such as when it transitions from pending to running or from running to stopped. Additionally, events are triggered when a container within a task stops or when a termination notice is received for AWS Fargate Spot capacity.
Service action events – These events provide information about the state of the service and are categorized as info, warning, or error. They are generated when the service reaches a steady state, when the service consistently cannot place a task, when the Amazon ECS APIs are throttled, or when there are insufficient resources to place a task.
Service deployment state change events – These events are emitted when a deployment is in progress, completed, or failed. They are typically triggered by the circuit breaker logic and rollback settings.
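For readers who have not yet wired events into a log group, here is a hedged sketch of the EventBridge setup referenced in the prerequisites. The rule and log group names are placeholders, and the log group also needs a resource policy that allows events.amazonaws.com to deliver events (the console adds this automatically; with the CLI you can use aws logs put-resource-policy).

# Create a destination log group for ECS lifecycle events (placeholder name)
aws logs create-log-group --log-group-name /aws/events/ecs-lifecycle-events

# Match the four ECS event categories described above
aws events put-rule \
  --name ecs-lifecycle-events \
  --event-pattern '{"source":["aws.ecs"],"detail-type":["ECS Task State Change","ECS Service Action","ECS Deployment State Change","ECS Container Instance State Change"]}'

# Send matched events to the log group (replace the region and account ID)
aws events put-targets \
  --rule ecs-lifecycle-events \
  --targets 'Id=ecs-events-log-group,Arn=arn:aws:logs:us-east-1:111122223333:log-group:/aws/events/ecs-lifecycle-events'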
For a more detailed explanation and examples of these events and their potential use cases, please refer to the Amazon ECS events documentation. Let's dive into some real-world examples of how to use events for operational support. We've organized these examples into four categories based on event patterns: Task Patterns, Service Action Patterns, Service Deployment Patterns, and ECS Container Instance Patterns. Each category includes common use cases and demonstrates specific queries and results.

Running an Amazon CloudWatch Logs Insights query

Follow these steps to run the Amazon CloudWatch Logs Insights queries covered in later sections of this post (a CLI alternative appears at the end of this item):

Open the Amazon CloudWatch console, choose Logs, and then choose Logs Insights.
Choose the log groups containing Amazon ECS events and performance logs to query.
Enter the desired query and choose Run to view the results.

Task event patterns

Scenario 1: In this scenario, the operations team needs to investigate the cause of HTTP 5XX (server-side) errors observed in their environment. To do so, they want to confirm whether an Amazon ECS task correctly followed its intended task lifecycle. The team suspects that a task's lifecycle events might be contributing to the 5XX errors, and they need to narrow down the exact source of these issues to implement effective troubleshooting and resolution.

Required query

Query inputs: detail.containers.0.taskArn: intended task ARN

fields time as Timestamp, `detail-type` as Type, detail.lastStatus as `Last Status`, detail.desiredStatus as `Desired Status`, detail.stopCode as StopCode, detail.stoppedReason as Reason | filter detail.containers.0.taskArn = "arn:aws:ecs:us-east-1:111122223333:task/CB-Demo/6e81bd7083ad4d559f8b0b147f14753f" | sort @timestamp desc | limit 10

Result: Let's see how service events can aid in confirming the task lifecycle. From the results, we can see that the Last Status of the task progressed as follows:

PROVISIONING > PENDING > ACTIVATING > RUNNING > DEACTIVATING > STOPPING > DEPROVISIONING > STOPPED

This conforms to the documented task lifecycle flow: the task was first DEACTIVATED and then STOPPED. We can also see that the stoppage of this task was initiated by the scheduler (ServiceSchedulerInitiated) because of the reason Task failed container health checks. Similarly, the query can also fetch the lifecycle details of a task failing load balancer health checks; the result will look like the following. In the query below, replace detail.containers.0.taskArn with the intended task ARN:

fields time as Timestamp, `detail-type` as Type, detail.lastStatus as `Last Status`, detail.desiredStatus as `Desired Status`, detail.stopCode as StopCode, detail.stoppedReason as Reason | filter detail.containers.0.taskArn = "arn:aws:ecs:us-east-1:111122223333:task/CB-Demo/649e1d63f0db482bafa0087f6a3aa5ed" | sort @timestamp desc | limit 10

Let's see an example of another task that was stopped manually by calling StopTask: the action was UserInitiated and the reason is Task stopped by user. In both cases we can see how the Desired Status (irrespective of who initiated the stop) drives the Last Status of the task. Task lifecycle diagram for reference.

Scenario 2: Consider a scenario where you encounter frequent task failures within a service and need a way to diagnose the root causes behind these issues. Tasks might be terminating for various reasons, such as resource limitations or application errors.
To address this, you can query for the stop reasons for all tasks in the service to uncover underlying issues.

Required query

Query inputs: detail.group: your intended service name

filter `detail-type` = "ECS Task State Change" and detail.desiredStatus = "STOPPED" and detail.group = "service:circuit-breaker-demo" |fields detail.stoppingAt as stoppingAt, detail.stoppedReason as stoppedReason,detail.taskArn as Task | sort @timestamp desc | limit 200

TIP: If you have service auto scaling enabled and there are frequent scaling events for the service, you can add another filter to the above query to exclude scaling-related events and focus solely on other stop reasons:

filter `detail-type` = "ECS Task State Change" and detail.desiredStatus = "STOPPED" and detail.stoppedReason not like "Scaling activity initiated by" and detail.group = "service:circuit-breaker-demo" |fields detail.stoppingAt as stoppingAt, detail.stoppedReason as stoppedReason,detail.taskArn as Task | sort @timestamp desc | limit 200

Result: In the results, we can see the task stop reasons for tasks within the service, along with their respective task IDs. By analyzing these stop reasons, you can identify the specific issues leading to task terminations. Depending on the stop reasons, potential solutions might involve application tuning, adjusting resource allocations, optimizing task definitions, or fine-tuning scaling strategies.

Scenario 3: Consider a scenario where your security team needs critical information about the usage of specific network interfaces, MAC addresses, or attachment IDs. It's important to note that Amazon ECS automatically provisions and deprovisions elastic network interfaces (ENIs) when tasks start and stop. However, once a task is stopped, there are no readily available records or associations to trace back to a specific task ID using the Elastic Network Interface (ENI) ID or the Media Access Control (MAC) address assigned to the ENI. This poses a challenge in meeting the security team's request for such data, as the automatic nature of ENI management in Amazon ECS may limit historical tracking capabilities for these identifiers.
Required query

Query inputs: detail.attachments.1.details.1.value: intended ENI ID. Additionally, replace the task ARN and cluster ARN details.

fields @timestamp, `detail.attachments.1.details.1.value` as ENIId,`detail.attachments.1.status` as ENIStatus, `detail.lastStatus` as TaskStatus | filter `detail.attachments.1.details.1.value` = "eni-0e2b348058ae3d639" | parse @message "arn:aws:ecs:us-east-1:111122223333:task/CB-Demo/*\"" as TaskId | parse @message "arn:aws:ecs:us-east-1:111122223333:cluster/*\"," as Cluster | parse @message "service:*\"," as Service | display @timestamp, ENIId, ENIStatus, TaskId, Service, Cluster, TaskStatus

To look up by MAC address instead, replace the value of detail.attachments.1.details.2.value with the intended MAC address:

fields @timestamp, `detail.attachments.1.details.1.value` as ENIId, `detail.attachments.1.details.2.value` as MAC ,`detail.attachments.1.status` as ENIStatus, `detail.lastStatus` as TaskStatus | filter `detail.attachments.1.details.2.value` = '12:eb:5f:5a:83:93' | parse @message "arn:aws:ecs:us-east-1:111122223333:task/CB-Demo/*\"" as TaskId | parse @message "arn:aws:ecs:us-east-1:111122223333:cluster/*\"," as Cluster | parse @message "service:*\"," as Service | display @timestamp, ENIId, MAC, ENIStatus, TaskId, Service, Cluster, TaskStatus

Result: Querying by ENI ID, in the results we can see the details of the task, service, and cluster for which the ENI was provisioned, along with the state of the task to correlate. Just like with the ENI, we can query by MAC address and retrieve the same details.

Service action event patterns

Scenario 4: You may encounter a situation where you need to identify and prioritize resolution for services with the highest number of faults. To achieve this, you want to query and determine the top N services that are experiencing issues.

Required query:

filter `detail-type` = "ECS Service Action" and @message like /(?i)(WARN)/ | stats count(detail.eventName) as countOfWarnEvents by resources.0 as serviceArn, detail.eventName as eventFault | sort countOfWarnEvents desc | limit 20

Result: By filtering for WARN events and aggregating service-specific occurrences, you can pinpoint the services that require immediate attention and prioritize resolution efforts. For example, the service ecsdemo-auth-no-sd in this case is facing the SERVICE_TASK_START_IMPAIRED error. This ensures that you can focus your resources on mitigating the most impactful issues and enhancing the overall reliability of your microservices ecosystem.

Service deployment event patterns

Scenario 5: Since every Amazon ECS service event comes with an event type of INFO, WARN, or ERROR, we can use this as a search pattern to analyze our workloads for troubled services.

Required query:

fields @timestamp as Time, `resources.0` as Service, `detail-type` as `lifecycleEvent`, `detail.reason` as `failureReason`, @message | filter `detail.eventType` = "ERROR" | sort @timestamp desc | display Time, Service, lifecycleEvent, failureReason | limit 100

Result: In the results below, the ecsdemo-backend service is failing to successfully deploy tasks, which activates the Amazon ECS circuit breaker mechanism that stops the deployment of the service. Using the expand arrow to the left of the table, we can get more details about the event.

Scenario 6: In this scenario, you have received a notification from the operations team indicating that, following a recent deployment to an Amazon ECS service, the previous version of the application is still visible.
They are experiencing a situation where the new deployment did not replace the old one as expected, leading to confusion and potential issues. The operations team seeks to understand the series of events that occurred during the deployment process to determine what went wrong, identify the source of the issue, and implement the necessary corrective measures to ensure a successful deployment.

Required query

Query inputs: resources.0: intended service ARN

fields time as Timestamp, detail.deploymentId as DeploymentId , detail.eventType as Severity, detail.eventName as Name, detail.reason as Detail, `detail-type` as EventType | filter `resources.0` ="arn:aws:ecs:us-east-1:12345678910:service/CB-Demo/circuit-breaker-demo" | sort @timestamp desc | limit 10

Result: Let's analyze the service events to understand what went wrong during the deployment. By examining the sequence of events, a clear timeline emerges:

The service was initially in a steady state (line 7) and there was a good deployment (ecs-svc/6629184995452776901 in line 6).
A new deployment (ecs-svc/4503003343648563919) occurs, possibly with a code bug (line 5).
Tasks from this deployment fail to start (line 3).
This problematic deployment triggers the circuit breaker logic, which initiates a rollback to the previously known good deployment (ecs-svc/6629184995452776901 in line 4).
The service eventually returns to a steady state (lines 1 and 2).

This sequence of events not only provides a chronological view of what happened but also offers specific insights into the deployments involved and the potential reasons for the issue. By analyzing these service events, the operations team can pinpoint the problematic deployment (i.e., ecs-svc/4503003343648563919) and investigate further to identify and address the underlying code issues, ensuring a more reliable deployment process in the future.

ECS container instance event patterns

Scenario 7: You want to track the history of Amazon ECS agent updates for container instances in the cluster. A trackable history ensures compliance with security standards by verifying that the agent has the necessary patches and updates installed, and it also allows for the verification of rollbacks in the event of problematic updates. This information is valuable for operational efficiency and service reliability.

Required query:

fields @timestamp, detail.agentUpdateStatus as agentUpdateStatus, detail.containerInstanceArn as containerInstanceArn,detail.versionInfo.agentVersion as agentVersion | filter `detail-type` = "ECS Container Instance State Change" | sort @timestamp desc | limit 200

Result: Initially, the container instance operated with ECS agent version 1.75.0. At sequence 9, an Update Agent operation was initiated, indicating the presence of a new Amazon ECS agent version. After a series of update actions, the agent update successfully concluded at sequence 1. This information offers a clear snapshot of the version transition and update procedure, underlining the importance of tracking Amazon ECS agent updates to ensure the security, reliability, and functionality of the ECS cluster.

Cleaning up

Once you've finished exploring the sample queries, ensure you disable any Amazon EventBridge rules and Amazon ECS CloudWatch Container Insights so that you do not incur further cost.
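A hedged CLI sketch of that cleanup, using the placeholder rule name from the earlier EventBridge sketch and the CB-Demo cluster used in these examples:

# Stop forwarding ECS events to CloudWatch Logs
aws events disable-rule --name ecs-lifecycle-events

# Turn off Container Insights for the cluster
aws ecs update-cluster-settings --cluster CB-Demo --settings name=containerInsights,value=disabled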
Conclusion

In this post, we've explored ways to harness the full potential of Amazon ECS events, a valuable resource for troubleshooting. Amazon ECS provides useful information about tasks, services, deployments, and container instances. Analyzing ECS events in Amazon CloudWatch Logs enables you to identify patterns over time, correlate events with other logs, discover recurring issues, and conduct various forms of analysis. We've outlined straightforward yet powerful methods for searching and utilizing Amazon ECS events. This includes tracking the lifecycle of tasks to swiftly diagnose unexpected stoppages, identifying tasks with specific network details to bolster security, pinpointing problematic services, understanding deployment issues, and ensuring the Amazon ECS agent is up to date for reliability. This broader perspective on your system's operations equips you to proactively address problems, gain insights into your container performance, facilitate smooth deployments, and fortify your system's security.

Additional references

Now that we have covered the basics of these lifecycle events, let's look at best practices for querying these lifecycle events in the Amazon CloudWatch Logs Insights console for troubleshooting purposes. To learn more about the Amazon CloudWatch query domain-specific language (DSL), visit the documentation (CloudWatch Logs Insights query syntax). You can further set up anomaly detection by further processing Amazon ECS events with Amazon EventBridge, which is explained in detail in Amazon Elastic Container Service Anomaly Detector using Amazon EventBridge. View the full article
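As mentioned in the walkthrough above, the same Logs Insights queries can also be run from the AWS CLI instead of the console. A hedged sketch follows; the log group name is a placeholder and the time range covers the last hour.

QUERY_ID=$(aws logs start-query \
  --log-group-name /aws/events/ecs-lifecycle-events \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'filter `detail-type` = "ECS Task State Change" and detail.desiredStatus = "STOPPED" | fields detail.stoppedReason as stoppedReason, detail.taskArn as Task | sort @timestamp desc | limit 20' \
  --query 'queryId' --output text)

aws logs get-query-results --query-id $QUERY_ID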
  5. Customers running applications with more than one container on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate can now leverage Seekable OCI (SOCI) to lazily load specific container images within the Amazon ECS task definition. This eliminates the need to generate SOCI indexes for smaller container images within the task definition, while still getting the benefits of SOCI with larger container images, improving the overall application deployment and scale-out time. View the full article
  6. Introduction

Amazon Elastic Container Service (Amazon ECS) is a container orchestration service that manages the lifecycle of billions of application containers on AWS every week. One of the core goals of Amazon ECS is to remove overhead burden from human operators. Amazon ECS watches over your application containers 24/7 and can respond to unexpected changes faster and better than any human can. Amazon ECS reacts to undesired changes, such as application crashes and hardware failures, by continuously attempting to self-heal your application container deployments back to your desired state. There are also external factors, such as traffic spikes, that can cause an application brownout, which can be more challenging to handle. This post dives deep into recent changes to how Amazon ECS handles task health issues and task replacement, and how these changes increase the availability of your Amazon ECS orchestrated applications.

Task health evaluation

Amazon ECS evaluates the health of a task based on a few criteria.

First, for a task to be healthy, all containers that are marked as essential must be running. Every Amazon ECS task must have at least one essential container. As a best practice, containers run a single application process, and if that process ends because of a critical runtime exception, then the container stops. If that stopped container was marked as essential, then the entire task is considered unhealthy and the task must be replaced.

You can use the Amazon ECS task definition to configure an optional internal health check command that the Amazon ECS agent runs inside the container periodically. This command is expected to return a zero exit code to indicate success. If it returns a non-zero exit code, that indicates failure: the container is considered unhealthy, an unhealthy essential container causes the task to be considered unhealthy, and Amazon ECS replaces the task.

You can use the Amazon ECS service to configure attachments between your application container and other AWS services. For example, you can connect your container deployment to an Elastic Load Balancer (ELB) or AWS Cloud Map. These services perform their own external health checks. For example, the ELB periodically attempts to open a connection to your container and send a test request. If it isn't possible to open that connection, your container returns an unexpected response, or your container takes too long to respond, then the ELB considers the target container unhealthy. Amazon ECS also considers this external health status when deciding whether an Amazon ECS task is healthy or unhealthy. An unhealthy ELB health check causes the task to be replaced.

For a task to be healthy, all sources of health status must evaluate as healthy. If any of the sources return an unhealthy status, then the Amazon ECS task is considered unhealthy and it will be replaced.

Task replacement behavior

Replacing an Amazon ECS task happens in two main circumstances:

During a fresh deployment triggered by the UpdateService API call. Any existing tasks that are part of the previous deployment must be replaced by new tasks that are part of the new deployment.
When an existing task inside an active deployment becomes unhealthy. Unhealthy tasks must be replaced in order to maintain the desired count of healthy tasks.
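For reference, here is a hedged sketch of the optional internal health check described above, as it appears inside a container definition in a task definition. The endpoint, intervals, and thresholds are assumptions for illustration only; a zero exit code from the command marks the container healthy and a non-zero exit code marks it unhealthy.

"healthCheck": {
  "command": ["CMD-SHELL", "curl -f http://localhost/healthz || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3,
  "startPeriod": 10
}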
From early on in the history of Amazon ECS, the behavior of task replacement during rolling deployments has been configurable using two properties of the Amazon ECS service (see the CLI sketch at the end of this item):

maximumPercent – This controls how many additional tasks Amazon ECS can launch above the service's desired count. For example, if the maximumPercent is 200% and the desired count for the service is eight tasks, then Amazon ECS can launch additional tasks up to a total of 16 tasks.
minimumHealthyPercent – This controls the percentage that an Amazon ECS service is allowed to go below the desired count during a deployment. For example, if minimumHealthyPercent is 75% and the desired count for the service is eight tasks, then Amazon ECS can stop two tasks, reducing the service deployment down to six running tasks.

The maximumPercent and minimumHealthyPercent have functioned for many years as efficient controls for fine-tuning the behavior of rolling deployments when running Amazon ECS tasks on Amazon Elastic Compute Cloud (Amazon EC2) capacity. However, these deployment controls don't make as much sense in a world where more and more Amazon ECS users are choosing serverless AWS Fargate capacity. In most cases, modern applications don't require Amazon ECS to go below the desired count of running tasks during a rolling deployment or to reduce the number of additional tasks being launched during a rolling deployment, because AWS Fargate utilization isn't constrained by how many underlying Amazon EC2 instances you have registered into your cluster.

Additionally, the maximumPercent and minimumHealthyPercent controls were originally ignored when it came to replacing unhealthy tasks. If tasks became unhealthy, then your service's desired count could dip well below the threshold defined by minimumHealthyPercent. For example, if you were running eight tasks and four of them became unhealthy, then Amazon ECS would terminate the four unhealthy tasks and launch four replacement tasks. The number of running tasks would temporarily dip to 50% of the desired count.

Updates to how Amazon ECS replaces unhealthy tasks

As of October 20, 2023, Amazon ECS now uses your maximumPercent whenever possible when replacing unhealthy tasks. Let's look at a few scenarios to understand how this works.

Crashing tasks

You're running a service with a desired count of eight tasks and a maximum percent of 200%. Four of your eight tasks encounter critical runtime exceptions. Their processes crash and exit, which causes an essential container to exit. Amazon ECS observes that four of the eight tasks have gone unhealthy because their essential container exited. Unfortunately, Amazon ECS can't avoid the healthy percentage dipping below 100% because the unhealthy container crashed. The running task count dips to 50% of the desired count briefly, but Amazon ECS launches four replacement tasks as quickly as possible to bring the number of running tasks back up to the desired count of eight tasks.

Frozen tasks

You're running a service with a desired count of eight tasks and a maximum percent of 200%. Because of an endless loop in your code, four of your eight tasks freeze up, but the processes stay running. The attached load balancer that is sending health check requests to the service observes that the target container is no longer responsive to health check requests, so it marks the target as unhealthy. Amazon ECS considers those four frozen tasks to be unhealthy. The maximum percent for the service allows it to go up to 16 tasks.
Amazon ECS launches four additional replacement tasks for the four unhealthy tasks, making a total of 12 running tasks. Once the four additional tasks have become healthy, Amazon ECS stops the four unhealthy tasks, which brings the running task count back down to the desired count of eight tasks.

Overburdened tasks

You're running a service with a desired count of eight tasks and a maximum percent of 150%. The service has auto scaling rules attached to it. It also has a load balancer attached to it, and a large spike of traffic arrives via the load balancer. The spike of traffic is so large that response time from the tasks rises dramatically. As a result of the high response time, the load balancer health check fails and the ELB marks all eight targets as unhealthy. The ELB fails open and continues distributing traffic to all the targets, as there are no healthy targets in the load balancer. Amazon ECS observes that all eight tasks are unhealthy. As a result, Amazon ECS wants to replace these unhealthy tasks. The maximum percent of 150% allows the service to go up to 12 running tasks. Therefore, Amazon ECS avoids stopping the unhealthy running tasks immediately. Instead, it launches four replacement tasks in parallel with the existing eight unhealthy tasks. Fortunately, these four additional tasks give the ELB more targets to distribute traffic across, and all 12 of the running tasks stabilize in health as they are now able to handle the incoming traffic without timing out. Amazon ECS observes that there are now 12 healthy running tasks. Simultaneously, an Application Auto Scaling rule has kicked in based on seeing high CPU utilization by the original eight running tasks. The rule has updated the desired count for the Amazon ECS service from eight running tasks to 10 running tasks. Therefore, Amazon ECS only stops two of the 12 healthy running tasks, which reduces the task count back down to its current desired count of 10 running tasks.

Limited maximum percent

You're running a service with a desired count of eight tasks, and because of downstream limits or infrastructure constraints you have set a maximum percent of 100%. This doesn't allow Amazon ECS to launch any additional tasks in parallel with your eight running tasks. If a task from this deployment freezes, or becomes overburdened and starts failing health checks, then Amazon ECS needs to replace it. Amazon ECS stops the unhealthy task first, then launches a replacement task after the unhealthy task has been stopped. This means the running task count still temporarily dips below the desired count.

Task fails health checks during a rolling deployment

You're running a service with a desired count of eight tasks and a maximum percent of 150%. You're doing a rolling deployment to update your running tasks to be based off of a new task definition. Because the maximum percent is 150%, this allows Amazon ECS to launch additional tasks in parallel with your currently running tasks. The rolling deployment has already triggered four additional task launches. The service currently has 12 running tasks: eight old tasks and four new tasks. During this rolling deployment, some of the old tasks begin failing a health check due to an unexpected bug. Because there's an active rolling deployment occurring, Amazon ECS resorts to terminating the unhealthy tasks immediately and replacing them with instances of the new task as quickly as possible.
During a rolling deployment, Amazon ECS always tries to replace failing tasks with tasks from the new active deployment.

Health checks and responsive absorption of workload spikes

Previously, Amazon ECS always stopped unhealthy tasks first, then launched a replacement task. This behavior made sense in a world where tasks were bin-packed densely onto a statically sized cluster of Amazon EC2 instances that had no room to launch a replacement task without stopping an existing task. But more modern container workloads are now running on serverless AWS Fargate capacity. There's no need to stop an unhealthy running task to make room for its replacement, as AWS Fargate can supply as much on-demand container capacity as needed. Additionally, many customers of Amazon ECS on Amazon EC2 are now using Amazon ECS capacity providers to launch additional Amazon EC2 instances on demand, rather than deploying to statically sized clusters of Amazon EC2 instances. Therefore, Amazon ECS now prioritizes using the maximumPercent for a service, and whenever possible it keeps unhealthy tasks running until after their replacements have become healthy.

Additionally, the new Amazon ECS task replacement behavior helps prevent runaway task termination. In some cases, a large workload spike could cause a few tasks from the deployment to become unhealthy, which triggered their replacement. However, when Amazon ECS stopped unhealthy tasks in order to launch a replacement, the load balancer would shift more workload onto the remaining healthy tasks, which caused them to go unhealthy. In quick succession, all healthy tasks would be overwhelmed with workload, causing a cascade of runaway health check failures until every task had gone unhealthy. Eventually, Application Auto Scaling rules would kick in and scale up the deployment to a large enough size to handle the workload. But in most cases, a traffic spike causes the load balancer health checks to fail before it triggers aggregate resource consumption-based auto scaling. Auto scaling rules need to observe at least one minute of high average resource utilization before they react by scaling out the container deployment, whereas an overburdened task may begin failing load balancer health checks immediately.

In the scenario where your tasks are unhealthy because they are dealing with a large spike of incoming workload, the new task replacement behavior of Amazon ECS dramatically improves the availability and reliability of your service. Amazon ECS catches health check failures and proactively launches a parallel replacement task that can help absorb the incoming workload spike before auto scaling rules even trigger. Once auto scaling rules trigger, the replacement task and the original task are both retained, if they are both healthy and if they fulfill the current desired task count of the service.

Conclusion

In this post, we explained the new Amazon ECS behavior when handling unhealthy tasks. As more customers adopt Amazon ECS for their mission-critical applications, we are always happy to tackle challenging new orchestration problems at scale. This updated task replacement behavior is designed to help serve the needs of customers both small and large. It helps keep your container deployments online and available, even in adverse circumstances such as application failure or traffic spikes. Please visit the Amazon ECS public roadmap for more info on additional upcoming features for Amazon ECS or to create your own issue to request a change or new feature.
For more info on Amazon ECS scheduler behavior, see the official documentation, under Service Scheduler Concepts. View the full article
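To make the maximumPercent and minimumHealthyPercent controls referenced above concrete, here is a hedged AWS CLI sketch; the cluster and service names are placeholders.

aws ecs update-service \
  --cluster demo-cluster \
  --service demo-service \
  --deployment-configuration maximumPercent=200,minimumHealthyPercent=100

With these values, a service with a desired count of eight tasks can temporarily run up to 16 tasks while replacements are launched, and rolling deployments are not allowed to drop below eight healthy tasks.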
  7. Introduction

Data scientists and engineers have made Apache Airflow a leading open-source tool to create data pipelines due to its active open-source community, familiar Python development as Directed Acyclic Graph (DAG) workflows, and an extensive library of pre-built integrations. Amazon Managed Workflows for Apache Airflow (MWAA) is a managed service for Apache Airflow that makes it easy to run Airflow on AWS without the operational burden of having to manage the underlying infrastructure. While business needs demand scalability, availability, and security, Airflow development often doesn't require full production-ready infrastructure. Many DAGs are written locally, and when doing so, developers need to be assured that these workflows function correctly when they're deployed to their production environment. To that end, the MWAA team created an open-source local-runner that uses many of the same library versions and runtimes as MWAA in a container that can run in a local Docker instance, along with utilities that can test and package Python requirements. There are times when a full MWAA environment isn't required, but a local Docker container doesn't have access to the AWS resources needed to properly develop and test end-to-end workflows. In that case, the answer may be to run local-runner in a container on AWS; by running on the same configuration as MWAA, you can closely replicate your production MWAA environment in a lightweight development container. This post covers launching MWAA local-runner containers on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.

Prerequisites

This tutorial assumes you have an existing Amazon MWAA environment and wish to create a development container with a similar configuration. If you don't already have an MWAA environment, then you can follow the quick start documentation here to get started.

Docker on your local desktop.
AWS Command Line Interface (AWS CLI).
Terraform CLI (only if using Terraform).

Walkthrough

1. Clone the local-runner repository, set the environment variables, and build the image

We'll start by pulling the latest Airflow version of the Amazon MWAA local-runner to our local machine. Note: Replace <your_region> with your region and <airflow_version> with the version specified here.

git clone https://github.com/aws/aws-mwaa-local-runner.git
cd aws-mwaa-local-runner
export ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export REGION=<your_region>
export AIRFLOW_VERSION=<airflow_version>
./mwaa-local-env build-image

Note: We're expressly using the latest version of the Amazon MWAA local-runner as it supports the functionality needed for this tutorial.

2. Push your local-runner image to Amazon ECR

aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com
aws ecr create-repository --repository-name mwaa-local-runner --region $REGION
export AIRFLOW_IMAGE=$(docker image ls | grep amazon/mwaa-local | grep $AIRFLOW_VERSION | awk '{ print $3 }')
docker tag $AIRFLOW_IMAGE $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/mwaa-local-runner
docker push $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/mwaa-local-runner

Modify the MWAA execution role

For this example, we enable an existing MWAA role to work with Amazon ECS Fargate. As an alternative, you may also create a new task execution role. From the Amazon MWAA console, select the link of the environment whose role you wish to use for your Amazon ECS Fargate local-runner instance.
Scroll down to Permissions and select the link to open the Execution role.
Select the Trust relationships tab.
Choose Edit trust policy.
Under Statement -> Principal -> Service, add ecs-tasks.amazonaws.com.

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ecs-tasks.amazonaws.com", "airflow.amazonaws.com", "airflow-env.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }

6. Select Update policy.
7. Choose the Permissions tab.
8. Select the link to the MWAA-Execution-Policy.
9. Choose Edit policy.
10. Choose the JSON tab.
11. In the Statement section describing logs permissions, under Resource, add arn:aws:logs:us-east-1:012345678910:log-group:/ecs/mwaa-local-runner-task-definition:*, where 012345678910 is replaced with your account number and us-east-1 is replaced with your region.

{ "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:CreateLogGroup", "logs:PutLogEvents", "logs:GetLogEvents", "logs:GetLogRecord", "logs:GetLogGroupFields", "logs:GetQueryResults" ], "Resource": [ "arn:aws:logs:us-east-1:012345678910:log-group:/ecs/mwaa-local-runner-task-definition:*", "arn:aws:logs:us-east-1:012345678910:log-group:airflow-MWAA-Demo-IAD-1-*" ] },

12. We also want to add permissions that allow us to execute commands on the container and pull the image from Amazon ECR.

{ "Effect": "Allow", "Action": [ "ssmmessages:CreateControlChannel", "ssmmessages:CreateDataChannel", "ssmmessages:OpenControlChannel", "ssmmessages:OpenDataChannel" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" }

Note: Ensure that your private subnets have access to AWS Systems Manager (SSM) via an internet gateway or a PrivateLink endpoint to "com.amazonaws.us-east-1.ssmmessages" in order to enable command execution.

13. Choose Review policy.
14. Choose Save changes.

The creation of the Aurora PostgreSQL Serverless instance and the Amazon ECS resources can be done using either AWS CloudFormation or Terraform, as per the following sections. To create the required resources, clone the aws-samples/amazon-mwaa-examples repository.

git clone https://github.com/aws-samples/amazon-mwaa-examples.git

Take note of the variables from the existing MWAA environment needed to create the Amazon ECS environment (i.e., security groups, subnet IDs, Virtual Private Cloud (VPC) ID, and execution role).

$ export MWAAENV=test-MwaaEnvironment
$ aws mwaa get-environment --name $MWAAENV --query 'Environment.NetworkConfiguration' --region $REGION
{ "SecurityGroupIds": [ "sg-12345" ], "SubnetIds": [ "subnet-12345", "subnet-56789" ] }
$ aws mwaa get-environment --name $MWAAENV --query 'Environment.ExecutionRoleArn'
"arn:aws:iam::123456789:role/service-role/MwaaExecutionRole"

AWS CloudFormation

Navigate to the ECS CloudFormation directory:

$ cd amazon-mwaa-examples/usecases/local-runner-on-ecs-fargate/cloudformation

Update the AWS CloudFormation template input parameters file parameter-values.json in your favorite code editor (e.g., VS Code).
{ "Parameters": { "ECSClusterName": "mwaa-local-runner-cluster", "VpcId": "your-mwaa-vpc-id", "ECRImageURI" : "123456789.dkr.ecr.us-east-1.amazonaws.com/mwaa-local-runner:latest", "SecurityGroups" : "sg-security-group-id", "PrivateSubnetIds" : "subnet-mwaapvtsubnetid1,subnet-mwaapvtsubnetid2", "PublicSubnetIds" : "subnet-mwaapublicsubnetid1,subnet-mwaapublicsubnetid2", "S3BucketURI" : "s3://your-mwaa-bucket-path", "ECSTaskExecutionRoleArn": "arn:aws:iam::123456789:role/service-role/mwaaExecutionRoleName", "AssignPublicIpToTask" : "yes" } } [Optional] Additional AWS CloudFormation template input parameter values can be overridden in either template directly (mwaa-ecs-on-fargate.yml) or supplied in input parameter file in step # 2. Deploy the AWS CloudFormation template. $ aws cloudformation deploy \ --stack-name mwaa-ecs-sandbox \ --region $REGION --template-file mwaa-on-ecs-fargate.yml \ --parameter-overrides file://parameter-values.json \ --capabilities CAPABILITY_IAM Where … Stack-name – AWS CloudFormation Stack name is e.g., mwaa-ecs-sandbox Region – where you want to install the stack. It can be sourced from env variable or replaced with the value e.g., ap-east-2, us-west-2 Template-file – CF template name in subfolder mwaa-on-ecs-fargate.yml Parameter – overrides is updated input parameter file with your environment values in step 2 It takes time (up to 40 minutes) to create required Amazon ECS and Amazon Relational Database Service (RDS) resources before showing output on successful completion as … Waiting for changeset to be created.. Waiting for stack create/update to complete Successfully created/updated stack - mwaa-ecs-sandbox To test validate the deployed environment, lets get the output parameters AWS CloudFormation template generated including Load Balancer with AWS CloudFormation describe command as: $ aws cloudformation describe-stacks --stack-name mwaa-ecs-sandbox --query 'Stacks[0].Outputs[*]' [ { "OutputKey": "LoadBalancerURL", "OutputValue": "mwaa-LoadB-S3WM6Y7GE1WA-18678459101.us-east-1.elb.amazonaws.com", "Description": "Load Balancer URL" }, { "OutputKey": "DBClusterEP", "OutputValue": "database-mwaa-local-runner.cluster-ckxppcrgfesp.us-east-1.rds.amazonaws.com", "Description": "RDS Cluster end point" } ] To test validate the local runner on Amazon ECS Fargate, go to Access Airflow Interface Step below after the Terraform steps. Terraform Navigate to the ECS Terraform directory: $ cd amazon-mwaa-examples/usecases/local-runner-on-ecs-fargate/terraform/ecs Create the tfvars file that contains all the required parameters. Replace all the parameters with the required parameters for your configuration. $ cat <<EOT>> terraform.tfvars assign_public_ip_to_task = true ecs_task_execution_role_arn = "arn:aws:iam::123456789:role/ecsTaskExecutionRole" elb_subnets = ["subnet-b06911ed", "subnet-f3bf01dd"] image_uri = "123456789.dkr.ecr.us-east-1.amazonaws.com/mwaa-local-runner:latest" mwaa_subnet_ids = ["subnet-b06911ed", "subnet-f3bf01dd"] region = "us-east-1" s3_dags_path = "s3://airflow-mwaa-test/DAG/" s3_plugins_path = "s3://airflow-mwaa-test/plugins.zip" s3_requirements_path = "s3://airflow-mwaa-test/requirements.txt" vpc_id = "vpc-e4678d9f" vpc_security_group_ids = ["sg-ad76c8e5"] EOT Initialize the Terraform modules and plan the environment to create the RDS Aurora Serverless database. The subnet IDs and security group IDs of your environment can be retrieved from the previous step. Note: Make use of the existing MWAA Environment subnets, VPC, and security groups. 
The security group also needs to allow traffic to itself, and it needs to allow traffic from your local machine on port 80 to access the load balancer URL.

$ terraform init
$ terraform plan

Once the plan has succeeded, create the resources using the variables used in the previous step.

$ terraform apply -auto-approve
...
...
Outputs:
database_name = "AirflowMetadata"
db_passsword = <sensitive>
loadbalancer_url = "mwaa-local-runner-alb-552640779.us-east-1.elb.amazonaws.com"
rds_endpoint = "database-mwaa-local-runner.cluster-cqvb75x52nu8.us-east-1.rds.amazonaws.com"

Note: You may face the error create: ExpiredToken: The security token included in the request is expired │ status code: 403. If you do face this error, untaint the RDS resource and re-apply.

Access the Airflow user interface

Direct your browser to the Application Load Balancer (ALB) URL from the AWS CloudFormation/Terraform output, being sure to preface it with http (mwaa-local-runner-alb-552640779.us-east-1.elb.amazonaws.com/home). Note: If you chose an internal ALB, you'll need to be on your VPC private subnet via VPN or similar. When presented with the Airflow user interface, provide the username admin and the default password specified as test1234. You are now in a standard Airflow deployment that closely resembles the configuration of MWAA using local-runner.

Updating the environment

When you stop and restart the Amazon ECS Fargate task, the dags, plugins, and requirements will be re-initialized. This can be done through a forced update:

$ aws ecs update-service \
  --service mwaa-local-runner-service \
  --cluster mwaa-local-runner-cluster \
  --region $REGION \
  --force-new-deployment

If you wish to do so without restarting the task, you may run the command directly via execute-command. If this is your first time running execute-command, then you need to update the service to allow this functionality:

$ aws ecs update-service \
  --service mwaa-local-runner-service \
  --cluster mwaa-local-runner-cluster \
  --region $REGION \
  --enable-execute-command \
  --force-new-deployment

When the AWS Fargate task resumes availability, we need to know the task ID:

$ aws ecs list-tasks \
  --cluster mwaa-local-runner-cluster \
  --region $REGION

This returns a JSON string that contains an ARN with the unique task ID in the format:

{ "taskArns": [ "arn:aws:ecs:us-east-1:012345678910:task/mwaa-local-runner-cluster/11aa22bb33cc44dd55ee66ff77889900" ] }

In this case it is 11aa22bb33cc44dd55ee66ff77889900, which we'll use in the next command:

$ aws ecs execute-command \
  --region $REGION \
  --cluster mwaa-local-runner-cluster \
  --task 11aa22bb33cc44dd55ee66ff77889900 \
  --command "/bin/bash" \
  --interactive

Note: You may need to install the Session Manager plugin in order to execute commands via the AWS CLI.

At this point you can run any activities you wish, such as the s3 sync command to update your dags:

$ aws s3 sync --exact-timestamps --delete $S3_DAGS_PATH /usr/local/airflow/dags

Or view your scheduler logs:

$ cd /usr/local/airflow/logs/scheduler/latest;cat *

When complete, type exit to return to your terminal.

Cleaning up

If no longer needed, be sure to delete your AWS Fargate cluster, task definitions, ALB, Amazon ECR repository, Aurora RDS instance, and any other items you do not wish to retain.

With AWS CloudFormation, delete the stack:

$ aws cloudformation delete-stack --stack-name mwaa-ecs-sandbox

With Terraform, run:

$ terraform destroy

Important: Terminating resources that aren't actively being used reduces costs and is a best practice.
Not terminating your resources can result in additional charges.

Conclusion

In this post, we showed you how to run the Amazon MWAA open-source local-runner container image on Amazon ECS with AWS Fargate to provide a development and testing environment, using Amazon Aurora Serverless v2 as the database backend and execute-command on the AWS Fargate task to interact with the system. To learn more about Amazon MWAA, visit the Amazon MWAA documentation. For more blog posts about Amazon MWAA, please visit the Amazon MWAA resources page. View the full article
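As a supplement to the walkthrough above, here is a hedged sketch for confirming that the local-runner service has a running task before browsing to the ALB URL; the cluster and service names are those used in that walkthrough.

$ aws ecs describe-services \
  --cluster mwaa-local-runner-cluster \
  --services mwaa-local-runner-service \
  --region $REGION \
  --query 'services[0].{status:status,desired:desiredCount,running:runningCount}'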
  8. Introduction

Designing and maintaining secure user management, authentication, and other related features for applications is not an easy task. Amazon Cognito takes care of this work, which allows developers to focus on building the core business logic of the application. Amazon Cognito provides user management, authentication, and authorization for applications where users can log in directly or through their pre-existing social or corporate credentials. Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy for customers to deploy, manage, and scale their container-based applications. When building with Amazon ECS, it is common to use an Application Load Balancer (ALB) for application high availability and other features like SSL/TLS offloading, host-based routing, and other application-aware traffic handling. Another benefit of using the ALB with Amazon ECS is that the ALB has built-in support for Amazon Cognito. When setting up the ALB, you can choose whether you want incoming user traffic to be redirected to Amazon Cognito for authentication. By building secure containerized applications using Amazon ECS, and using the ALB and its Amazon Cognito integration, you get the benefits of the ease of container orchestration plus user authentication and authorization.

Flow of how the Application Load Balancer authenticates users using Amazon Cognito

For an application fronted by an ALB that integrates with Amazon Cognito and has been set up to authenticate users, the following stepwise flow describes what happens when a user attempts to access the application. For more information, see the example built by the AWS Elastic Load Balancing Demos. You need to understand what the ALB is doing to secure user access with Amazon Cognito:

A user sends a request to the application fronted by the ALB, which has a set of rules that it evaluates for all traffic to determine what action to carry out.
The rule (such as a path-based rule matching all traffic for /login), when matched, triggers the authentication action on the ALB. The ALB then inspects the user's HTTP payload for an authentication cookie. Because this is the user's first visit, this cookie isn't present.
The ALB doesn't see any cookie and redirects the user to the configured Amazon Cognito authorization endpoint.
The user is presented with an authentication page from Amazon Cognito, where the user inputs their credentials.
Amazon Cognito redirects the user back to the ALB and passes an authorization code to the user in the redirect URL.
The load balancer takes this authorization code and makes a request to Amazon Cognito's token endpoint.
Amazon Cognito validates the authorization code and presents the ALB with an ID and access token.
The ALB forwards the access token to Amazon Cognito's user info endpoint.
Amazon Cognito's user info endpoint presents the ALB with user claims.
The ALB redirects the user who is trying to access the application (step 1) to the same URL while inserting the authentication cookie in the redirect response.
The user makes the request to the ALB with the cookie; the ALB validates it and forwards the request to the ALB's target. The ALB inserts information (such as user claims, the access token, and the subject field) into a set of X-AMZN-OIDC-* HTTP headers passed to the target.
The target generates a response and forwards it to the ALB.
The ALB sends the response to the authenticated user.
When the user makes subsequent HTTP requests, the flow goes through steps 9–11. If the user makes a new request without the authentication cookie, it goes through steps 1–11. For more information, see the authentication flow between the ALB and Amazon Cognito.

Solution overview

You will use a PHP application built for demonstration purposes. The application is published and verified in the public Docker Hub. We use and configure Amazon Route 53 for Domain Name System (DNS) handling and AWS Certificate Manager (ACM) to provision Transport Layer Security (TLS) certificates. Amazon Cognito handles the authentication flows and Amazon ECS handles the container scheduling and orchestration. The following solution architecture diagram presents an overview of the solution.

Prerequisites

To complete this tutorial you need the following tools, which can be installed with the links:

aws cli v2: The AWS Command Line Interface (AWS CLI) is an open source tool that allows you to interact with AWS services using commands in your command-line shell.
ecs-cli: The Amazon Elastic Container Service (Amazon ECS) CLI provides high-level commands to simplify creating, updating, and monitoring tasks and clusters from a local development environment.

Environment

In this post, I used AWS Cloud9 as an Integrated Development Environment (IDE) to configure the settings in this tutorial. You can use AWS Cloud9 or your own IDE. The commands used were tested using Amazon Linux 2 running in the AWS Cloud9 environment. Follow the linked steps to install and configure AWS Cloud9: create a workspace to deploy this solution, which includes creating an AWS Identity and Access Management (IAM) role that will be attached to the workspace instance.

Launch the base infrastructure platform that the resources reside in

The Amazon ECS cluster needs to be launched into a Virtual Private Cloud (VPC) infrastructure. To create this infrastructure, you use an AWS CloudFormation template that automates the creation of the platform. Download the zip file that contains an AWS CloudFormation yaml file: codebuild-vpc-cfn.yaml. Once deployed, the following resources are created in your AWS account: a Virtual Private Cloud (VPC), an internet gateway, two public subnets, two private subnets, two Network Address Translation (NAT) gateways, and one security group. To launch the stack, follow these steps:

Sign in to the AWS Management Console in your Region of choice (you will see the Region list in the top right-hand corner) and search for the AWS CloudFormation service in the Console search bar.
Choose Create Stack and select with new resources (standard).
To specify the template, choose upload a template file and upload the previously downloaded codebuild-vpc-cfn.yaml file.
To create the stack and configure stack options, choose Next. Enter ecsplatform for the stack name and ecsplatform for EnvironmentName. Choose Next.
Leave the rest of the default settings and choose Next. Choose Create Stack.

When CloudFormation has finished deploying the resources, the status is CREATE_COMPLETE.
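If you prefer the AWS CLI over the console for this step, a hedged equivalent is shown below; the parameter name comes from the console steps above, and CAPABILITY_IAM is included only in case the template creates IAM resources.

aws cloudformation deploy \
  --stack-name ecsplatform \
  --template-file codebuild-vpc-cfn.yaml \
  --parameter-overrides EnvironmentName=ecsplatform \
  --capabilities CAPABILITY_IAM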
Next, on your AWS Cloud9 workspace terminal, set the following environment variables:

AUTH_ECS_REGION=eu-west-1 <-- Change to the region you used in your CloudFormation configuration
AUTH_ECS_CLUSTER=ecsauth
AUTH_ECS_VPC=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='VPC'].OutputValue" --output text)
AUTH_ECS_PUBLICSUBNET_1=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PublicSubnet1'].OutputValue" --output text)
AUTH_ECS_PUBLICSUBNET_2=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PublicSubnet2'].OutputValue" --output text)
AUTH_ECS_PRIVATESUBNET_1=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnet1'].OutputValue" --output text)
AUTH_ECS_PRIVATESUBNET_2=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnet2'].OutputValue" --output text)
AUTH_ECS_SG=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='NoIngressSecurityGroup'].OutputValue" --output text)
AUTH_ECS_DOMAIN=www.example.com <-- Change to a domain name you want to use for this solution

You will set additional variables later, but these are enough to begin building your solution.

Configure the security group rules needed for web traffic access

When users access the ALB, the security group attached to it needs to allow ingress port 443 (HTTPS) traffic. In addition, when the ALB forwards the web traffic to the Amazon ECS tasks, there needs to be an ingress rule attached to the Amazon ECS container instances that allows port 80 (HTTP) traffic. You can achieve this access with the following:

aws ec2 authorize-security-group-ingress \
  --group-id $AUTH_ECS_SG \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id $AUTH_ECS_SG \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0

Create a public Application Load Balancer

As described earlier, the ALB receives and terminates all client requests and validates authentication using Amazon Cognito. The ALB also handles TLS offloading: the TLS certificates for the domain name are deployed on it. To create the ALB, run the following:

AUTH_ECS_ALBARN=$(aws elbv2 create-load-balancer --name $AUTH_ECS_CLUSTER --subnets $AUTH_ECS_PUBLICSUBNET_1 $AUTH_ECS_PUBLICSUBNET_2 --security-groups $AUTH_ECS_SG --query 'LoadBalancers[0].LoadBalancerArn' --output text)
AUTH_ECS_ALB_DNS=$(aws elbv2 describe-load-balancers --load-balancer-arns $AUTH_ECS_ALBARN --query 'LoadBalancers[0].DNSName' --output text)

Configure a Domain Name System

Clients will need a domain name that points to the ALB to type into their browsers. In this post, the Domain Name System (DNS) name is registered using the DNS resolution service, Amazon Route 53. You configure your domain name (such as www.example.com), which is known as the record and is placed in a Route 53 hosted zone.

Configure both the Route 53 hosted zone and record

If you already have a Route 53 public hosted zone for the apex domain and this is the location where you plan to add the record, then you will set its hosted zone ID (AUTH_ECS_R53HZ). For more information, see the hosted zone ID documentation. The first command line shown below demonstrates how to identify a hosted zone ID. You can substitute example.com for your apex domain name.
The other commands create a record that points to the ALB.

AUTH_ECS_R53HZ=$(aws route53 list-hosted-zones-by-name --dns-name example.com --query 'HostedZones[0].Id' --output text | grep -o '/hostedzone/.*' | cut -b 13-27)

cat << EOF > dnsrecord.json
{
  "Comment": "CREATE a DNS record that points to the ALB",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "$AUTH_ECS_DOMAIN",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "$AUTH_ECS_ALB_DNS"
          }
        ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets --hosted-zone-id $AUTH_ECS_R53HZ --change-batch file://dnsrecord.json

Request a public certificate

To ensure that web traffic sent by clients to the ALB is encrypted, integrate an AWS Certificate Manager (ACM) certificate into the ALB's listener. This ensures that the ALB serves HTTPS traffic and that communication from clients to the ALB is encrypted. Public SSL/TLS certificates provisioned through AWS Certificate Manager are free; you pay only for the AWS resources you create to run your application.

Provision an ACM certificate

AUTH_ECS_ACM_CERT_ARN=$(aws acm request-certificate \
  --domain-name $AUTH_ECS_DOMAIN \
  --validation-method DNS \
  --region $AUTH_ECS_REGION \
  --query 'CertificateArn' \
  --output text)

When you create an SSL/TLS certificate using ACM, it tries to confirm that you are the owner of the domain name before fully provisioning the certificate for you to use. One method of confirmation is DNS validation. With this method, ACM generates a CNAME record (a name and value pair) that you must add to your Route 53 hosted zone. To add the ACM validation record to your Route 53 hosted zone:

cat << EOF > acm_validate_cert_dns.json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "$(aws acm describe-certificate --certificate-arn $AUTH_ECS_ACM_CERT_ARN --query 'Certificate.DomainValidationOptions[].ResourceRecord[].Name' --output text)",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "$(aws acm describe-certificate --certificate-arn $AUTH_ECS_ACM_CERT_ARN --query 'Certificate.DomainValidationOptions[].ResourceRecord[].Value' --output text)"
          }
        ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id $AUTH_ECS_R53HZ \
  --change-batch file://acm_validate_cert_dns.json

It takes some time for the certificate status to change from 'Pending validation'. Once the status shows 'Issued' in the ACM console, you can use the certificate.

Create an HTTPS listener and listener rule on the ALB

Now that you have created the ALB and a certificate, configure the HTTPS listener to accept incoming HTTPS requests from clients and terminate them. Integrate the certificate into the listener and add a default action on the ALB:

cat << EOF > listener-defaultaction.json
[
  {
    "Type": "redirect",
    "RedirectConfig": {
      "Protocol": "HTTPS",
      "Port": "443",
      "Host": "$AUTH_ECS_DOMAIN",
      "StatusCode": "HTTP_301"
    }
  }
]
EOF

AUTH_ECS_ALBLISTENER=$(aws elbv2 create-listener \
  --load-balancer-arn $AUTH_ECS_ALBARN \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=$AUTH_ECS_ACM_CERT_ARN \
  --ssl-policy ELBSecurityPolicy-2016-08 \
  --default-actions file://listener-defaultaction.json \
  --query 'Listeners[0].ListenerArn' \
  --output text)

Create an Amazon Cognito user pool

As previously described, Amazon Cognito provides user management, authentication, and authorization for applications where users can log in directly or through their pre-existing social or corporate credentials.
Create a user pool, which is a user directory in Amazon Cognito that lets clients access the website. Clients sign in with their credentials before they get access to the site. To fully configure Amazon Cognito for integration with the ALB, create a user pool, a user pool application client, and a user pool domain. The following steps show you how to accomplish these tasks.

Create an Amazon Cognito user pool

AUTH_COGNITO_USER_POOL_ID=$(aws cognito-idp create-user-pool \
  --pool-name ${AUTH_ECS_CLUSTER}_Pool \
  --username-attributes email \
  --username-configuration=CaseSensitive=false \
  --region $AUTH_ECS_REGION \
  --query 'UserPool.Id' \
  --auto-verified-attributes email \
  --output text)

Create an Amazon Cognito user pool application client

AUTH_COGNITO_USER_POOL_CLIENT_ID=$(aws cognito-idp create-user-pool-client \
  --client-name ${AUTH_ECS_CLUSTER}_AppClient \
  --user-pool-id $AUTH_COGNITO_USER_POOL_ID \
  --generate-secret \
  --allowed-o-auth-flows "code" \
  --allowed-o-auth-scopes "openid" \
  --callback-urls "https://${AUTH_ECS_DOMAIN}/oauth2/idpresponse" \
  --supported-identity-providers "COGNITO" \
  --allowed-o-auth-flows-user-pool-client \
  --region $AUTH_ECS_REGION \
  --query 'UserPoolClient.ClientId' \
  --output text)

Create an Amazon Cognito user pool domain

AUTH_COGNITO_USER_POOL_ARN=$(aws cognito-idp describe-user-pool --user-pool-id $AUTH_COGNITO_USER_POOL_ID --query 'UserPool.Arn' --output text)

AUTH_COGNITO_DOMAIN=authecsblog$(whoami)

aws cognito-idp create-user-pool-domain \
  --user-pool-id $AUTH_COGNITO_USER_POOL_ID \
  --region $AUTH_ECS_REGION \
  --domain $AUTH_COGNITO_DOMAIN

Create and configure a target group for the ALB

Create a target group for the ALB. The target group is used to route requests to the Amazon ECS tasks. When the ALB receives HTTPS traffic from web clients, it routes the requests to the target group (after authentication of the client has occurred) for a web response. (Amazon ECS tasks are registered to the target group in the later section "Configuring the ECS service".)

Create an empty target group:

AUTH_ECS_ALBTG=$(aws elbv2 create-target-group \
  --name ${AUTH_ECS_CLUSTER}-tg \
  --protocol HTTP \
  --port 80 \
  --target-type instance \
  --vpc-id $AUTH_ECS_VPC \
  --query 'TargetGroups[0].TargetGroupArn' \
  --output text)

Host-based routing and an authentication rule on the ALB

The ALB routes requests based on the host name in the HTTP Host header. It is possible to configure multiple domains that all point to a single ALB, because the ALB can route requests based on the incoming host header and forward each request to the right target group for handling. You can configure an authentication rule, which tells the ALB what to do with incoming requests. In this post, we want requests to be authenticated first and, if authentication succeeds, forwarded to the target group we created earlier.
Configure host-based routing and an authentication rule on the ALB

cat << EOF > actions-authenticate.json
[
  {
    "Type": "authenticate-cognito",
    "AuthenticateCognitoConfig": {
      "UserPoolArn": "$AUTH_COGNITO_USER_POOL_ARN",
      "UserPoolClientId": "$AUTH_COGNITO_USER_POOL_CLIENT_ID",
      "UserPoolDomain": "$AUTH_COGNITO_DOMAIN",
      "SessionCookieName": "AWSELBAuthSessionCookie",
      "Scope": "openid",
      "OnUnauthenticatedRequest": "authenticate"
    },
    "Order": 1
  },
  {
    "Type": "forward",
    "TargetGroupArn": "$AUTH_ECS_ALBTG",
    "Order": 2
  }
]
EOF

cat << EOF > conditions-hostrouting.json
[
  {
    "Field": "host-header",
    "HostHeaderConfig": {
      "Values": ["$AUTH_ECS_DOMAIN"]
    }
  }
]
EOF

aws elbv2 create-rule \
  --listener-arn $AUTH_ECS_ALBLISTENER \
  --priority 20 \
  --conditions file://conditions-hostrouting.json \
  --actions file://actions-authenticate.json

Amazon ECS configuration

The ALB and Amazon Cognito are now configured to process incoming requests and authentication. Next, you configure Amazon ECS to orchestrate and deploy running tasks that generate responses to clients' web requests. An Amazon ECS cluster is a logical grouping of tasks or services. Amazon ECS container instances are part of the Amazon ECS infrastructure, registered to a cluster, that the Amazon ECS tasks run on. Two t3.small Amazon ECS instances will be configured to run the tasks. Amazon ECS will run and maintain two tasks, configured based on the parameters and settings contained in the task definition (a JSON text file). For more information on Amazon ECS basics, constructs, and orchestration, read the Amazon ECS components documentation.

Configure the Amazon ECS CLI

The Amazon ECS CLI is the tool you use to configure and launch the Amazon ECS components. To set it up, follow these steps:

The Amazon ECS CLI needs a CLI profile. To proceed, generate an access key ID and secret access key using the AWS credentials documentation.
Set the $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY variables to the copied values generated by AWS IAM.

Configure the Amazon ECS CLI for a CLI profile

ecs-cli configure profile --profile-name profile_name --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY

Create the Amazon ECS cluster

Create the Amazon ECS cluster, which consists of two t3.small instances deployed in the VPC and residing in the two private subnets from earlier. For the instance role, use the AWS IAM role created when configuring the AWS Cloud9 environment workspace (ecsworkshop-admin). The first command creates a key pair and the second command configures the Amazon ECS cluster. The key pair is useful if you need to SSH into the Amazon ECS instances for troubleshooting.

Configure the Amazon EC2 key pair and bring up the ECS cluster

aws ec2 create-key-pair \
  --key-name $AUTH_ECS_CLUSTER \
  --key-type rsa \
  --query "KeyMaterial" \
  --output text > $AUTH_ECS_CLUSTER.pem

ecs-cli up --instance-role ecsworkshop-admin --cluster $AUTH_ECS_CLUSTER --vpc $AUTH_ECS_VPC --subnets $AUTH_ECS_PRIVATESUBNET_1,$AUTH_ECS_PRIVATESUBNET_2 --port 443 --region $AUTH_ECS_REGION --keypair $AUTH_ECS_CLUSTER --size 2 --instance-type t3.small --security-group $AUTH_ECS_SG --launch-type EC2

Cluster creation takes some time; when the underlying AWS CloudFormation stack is fully deployed, the ecs-cli output shows 'Cluster creation succeeded'.
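Before moving on, you can optionally confirm that both container instances have registered with the new cluster. This is a brief sketch using standard AWS CLI calls (the cluster name comes from the $AUTH_ECS_CLUSTER variable set earlier); the exact ARNs and counts in the output will vary with your account.

# List the container instances registered to the new cluster (expect two ARNs)
aws ecs list-container-instances --cluster $AUTH_ECS_CLUSTER --region $AUTH_ECS_REGION

# Confirm the cluster is ACTIVE and reports two registered instances
aws ecs describe-clusters --clusters $AUTH_ECS_CLUSTER --region $AUTH_ECS_REGION \
  --query 'clusters[0].[status,registeredContainerInstancesCount]'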
Configure the AWS Region and ECS cluster name using the configure command:

ecs-cli configure --region $AUTH_ECS_REGION --cluster $AUTH_ECS_CLUSTER --default-launch-type EC2 --config-name $AUTH_ECS_CLUSTER

The EC2 launch type, with Amazon ECS instances, is created and launched in your VPC. If you prefer not to manage the underlying instances hosting the tasks, then the Fargate launch type is the option to use. Fargate is the serverless way to host your Amazon ECS workloads.

Create the ECS service

The ecs-cli compose service up command creates the Amazon ECS service and tasks from a Docker Compose file (ecsauth-compose.yml) that you create. The service is configured to use the ALB that you created earlier, and the command also creates a task definition. The Docker Compose file contains the configuration settings that the Amazon ECS service is spun up with. This includes the Docker image to pull and use, the ports to expose on the Amazon ECS instance, and the port forwarding to the Amazon ECS task. In this post, we configured it to use the AWS-published sample demo PHP application verified and published to Docker Hub. Also, Transmission Control Protocol (TCP) port 80 is opened on the Amazon ECS instance, and traffic received on this port is forwarded to the task on TCP port 80.

Configuring the ECS service

cat << EOF > ecsauth-compose.yml
version: '2'
services:
  web:
    image: amazon/amazon-ecs-sample
    ports:
      - "80:80"
EOF

ecs-cli compose --project-name ${AUTH_ECS_CLUSTER}service --file ecsauth-compose.yml service up --target-group-arn $AUTH_ECS_ALBTG --container-name web --container-port 80 --role ecsServiceRole

ecs-cli compose --project-name ${AUTH_ECS_CLUSTER}service --file ecsauth-compose.yml service scale 2

Testing the solution end to end

We now have the working components of the solution. To test the solution end to end, navigate to the HTTPS site of the domain name you used in your browser (such as https://www.example.com).

echo $AUTH_ECS_DOMAIN

The sequence of events that follows is as described in the flow of how the ALB authenticates users using Amazon Cognito (section "Flow of how Application Load Balancer authenticates users using Amazon Cognito"). After redirection by the ALB to the configured Amazon Cognito domain's login page (a hosted UI provided by Amazon Cognito), enter your credentials. Since this is the first time the page is accessed, we sign up as a new user. Amazon Cognito stores this information in the user pool. If you navigate to the Amazon Cognito user pool console afterwards, you will see this new user. After you sign in, the ALB redirects you to the landing page of the sample PHP application, shown in the diagram below.

User claims encoding and security

In this post, we configured the target group to use HTTP, because the ALB has handled the TLS offloading. However, for enhanced security, you should restrict traffic reaching the Amazon ECS instances to only the load balancer by using the security group. After the load balancer authenticates a user successfully, it passes the user's claims to the target. If you inspect traffic forwarded to the sample application through custom HTTP header logging in your access logs, you can see three HTTP headers. These headers contain information about the user claims and are signed by the ALB with a signature and algorithm that you can verify. The three HTTP headers are the following:

x-amzn-oidc-accesstoken: The access token from the token endpoint, in plain text.
x-amzn-oidc-identity: The subject field (sub) from the user info endpoint, in plain text.
x-amzn-oidc-data: The user claims, in JSON Web Token (JWT) format.

From the information encoded in x-amzn-oidc-data, it is possible to extract information about the user. The following is an example Python 3.x snippet that decodes the payload portion of x-amzn-oidc-data to reveal the user claims passed by Amazon Cognito. The headers object and the region value come from the incoming request context.

import jwt
import requests
import base64
import json

# Step 1: Get the key ID (kid field) from the JWT headers
encoded_jwt = headers.dict['x-amzn-oidc-data']
jwt_headers = encoded_jwt.split('.')[0]
decoded_jwt_headers = base64.b64decode(jwt_headers)
decoded_jwt_headers = decoded_jwt_headers.decode("utf-8")
decoded_json = json.loads(decoded_jwt_headers)
kid = decoded_json['kid']

# Step 2: Get the public key from the regional endpoint
url = 'https://public-keys.auth.elb.' + region + '.amazonaws.com/' + kid
req = requests.get(url)
pub_key = req.text

# Step 3: Decode the payload to get the user claims
payload = jwt.decode(encoded_jwt, pub_key, algorithms=['ES256'])

Cleanup

Now that you are done building and testing the solution, run the following commands to clean up all the resources:

aws elbv2 delete-load-balancer \
  --load-balancer-arn $AUTH_ECS_ALBARN

aws ecs delete-service --cluster $AUTH_ECS_CLUSTER --service ${AUTH_ECS_CLUSTER}service --force

containerinstance1=$(aws ecs list-container-instances --cluster $AUTH_ECS_CLUSTER --query 'containerInstanceArns[0]' --output text)
containerinstance2=$(aws ecs list-container-instances --cluster $AUTH_ECS_CLUSTER --query 'containerInstanceArns[1]' --output text)

aws ecs deregister-container-instance \
  --cluster $AUTH_ECS_CLUSTER \
  --container-instance $containerinstance1 \
  --force

aws ecs deregister-container-instance \
  --cluster $AUTH_ECS_CLUSTER \
  --container-instance $containerinstance2 \
  --force

aws ecs delete-cluster --cluster $AUTH_ECS_CLUSTER

aws ecs deregister-task-definition --task-definition ${AUTH_ECS_CLUSTER}service:1

aws acm delete-certificate --certificate-arn $AUTH_ECS_ACM_CERT_ARN

aws route53 delete-hosted-zone --id $AUTH_ECS_R53HZ

aws cognito-idp delete-user-pool-domain \
  --user-pool-id $AUTH_COGNITO_USER_POOL_ID \
  --domain $AUTH_COGNITO_DOMAIN

aws cognito-idp delete-user-pool --user-pool-id $AUTH_COGNITO_USER_POOL_ID

aws elbv2 delete-target-group \
  --target-group-arn $AUTH_ECS_ALBTG

aws cloudformation delete-stack \
  --stack-name amazon-ecs-cli-setup-$AUTH_ECS_CLUSTER

aws cloudformation delete-stack \
  --stack-name ecsplatform

aws ec2 delete-key-pair --key-name $AUTH_ECS_CLUSTER

Conclusion

In this post, we showed you how to authenticate users accessing your containerized application without writing authentication code, using the ALB's built-in integration with Amazon Cognito. Maintaining and securing user management and authentication is offloaded from the application, which allows you to focus on building core business logic into the application. You don't need to worry about platform tasks for managing, scheduling, and scaling containers for the web traffic, because Amazon ECS handles all of that. View the full article
  9. By using cloud platforms, we can take advantage of different resource configurations and compute capacities. However, deploying containerized applications on cloud platforms can be quite challenging, especially for new users with little experience on a given platform. Because each platform may provide its own APIs, orchestrating the deployment of a containerized application can become a hassle.

Docker Compose is a very popular tool for managing containerized applications deployed on Docker hosts. Its popularity is perhaps due to the simplicity of defining an application and its components in a Compose file and the compact set of commands for managing its deployment. Since cloud platforms for containers emerged, deploying a Compose application to them has been a much-requested feature among developers who use Docker Compose for local development.

In this blog post, we discuss how to use Docker Compose to deploy containerized applications to Amazon ECS. We aim to show that the transition from deploying to a local Docker environment to deploying to Amazon ECS is effortless, with the application managed in the same way in both environments.

Requirements

In order to exercise the examples in this blog post, the following tools need to be installed locally:

Windows and macOS: install Docker Desktop
Linux: install Docker Engine and Compose CLI
To deploy to Amazon ECS: an AWS account

For deploying a Compose file to Amazon ECS, we rely on the new Docker Compose implementation embedded into the Docker CLI binary. Therefore, we are going to run docker compose commands instead of docker-compose. For local deployments, both implementations of Docker Compose should work. If you find a missing feature that you use, report it on the issue tracker.

Throughout this blog post, we discuss how to:

Build and ship a Compose application. We exercise how to run an application defined in a Compose file locally and how to build and ship its images to Docker Hub to make them accessible from anywhere.
Create an ECS context to target Amazon ECS.
Run the Compose application on Amazon ECS.

Build and ship a Compose application

Let us take an example application with the following structure:

$ tree myproject/
myproject/
├── backend
│   ├── Dockerfile
│   ├── main.py
│   └── requirements.txt
├── compose.yaml
└── frontend
    ├── Dockerfile
    └── nginx.conf

2 directories, 6 files

The content of the files can be found here. The Compose file defines only two services:

$ cat compose.yaml
services:
  frontend:
    build: frontend
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: backend

Deploying this file locally on a Docker engine is quite straightforward:

$ docker compose up -d
[+] Running 3/3
 ⠿ Network "myproject_default"     Created   0.5s
 ⠿ Container myproject_backend_1   Started   0.7s
 ⠿ Container myproject_frontend_1  Started   1.4s

Check that the application is running locally:

$ docker ps
CONTAINER ID   IMAGE                COMMAND                    CREATED         STATUS         PORTS                NAMES
eec2dd88fd67   myproject_frontend   "/docker-entrypoint...."   4 seconds ago   Up 3 seconds   0.0.0.0:80->80/tcp   myproject_frontend_1
2c64e62b933b   myproject_backend    "python3 /app/main.py"     4 seconds ago   Up 3 seconds                        myproject_backend_1

Query the frontend:

$ curl localhost:80
    ##         .
    ## ## ##        ==
 ## ## ## ## ##    ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
 \    \         __/
  \____\_______/

Hello from Docker!
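For reference, the backend image is built from the Dockerfile in the backend directory; its start command is visible in the COMMAND column of the docker ps output above. The actual file is in the linked sources; purely as an illustrative sketch (the base image and pip step are assumptions on our part), a minimal version might look like this:

# Hypothetical sketch of backend/Dockerfile; the real file is in the linked sample repository
FROM python:3.9-slim
WORKDIR /app
# Install the Python dependencies listed in the sample's requirements.txt
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application entrypoint and run it, matching the COMMAND shown by docker ps
COPY main.py .
CMD ["python3", "/app/main.py"]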
To remove the application:

$ docker compose down
[+] Running 3/3
 ⠿ Container myproject_frontend_1  Removed   0.5s
 ⠿ Container myproject_backend_1   Removed   10.3s
 ⠿ Network "myproject_default"     Removed   0.4s

In order to deploy this application on ECS, we need the images for the application frontend and backend stored in a public image registry such as Docker Hub. This enables the images to be pulled from anywhere. To upload the images to Docker Hub, we set the image names in the Compose file as follows:

$ cat compose.yaml
services:
  frontend:
    image: myhubuser/starter-front
    build: frontend
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    image: myhubuser/starter-back
    build: backend

Build the images with Docker Compose:

$ docker compose build
[+] Building 1.2s (16/16) FINISHED
 => [myhubuser/starter-front internal] load build definition from Dockerfile   0.0s
 => => transferring dockerfile: 31B                                            0.0s
 => [myhubuser/starter-back internal] load build definition from Dockerfile    0.0s
...

In the build output, we can see that each image has been named and tagged according to the image field in the Compose file. Before pushing the images to Docker Hub, make sure you are logged in:

$ docker login
...
Login Succeeded

Push the images:

$ docker compose push
[+] Running 0/16
 ⠧ Pushing Pushing frontend: f009a503aca1 Pushing [===========================================...   2.7s
...

The images should now be stored in Docker Hub.

Create an ECS Docker context

To make Docker Compose target the Amazon ECS platform, we first need to create a Docker context of the ECS type. A Docker context is a mechanism that redirects commands to different Docker hosts or cloud platforms. We assume at this point that we have AWS credentials set up in the local environment for authenticating with the ECS platform. To create an ECS context, run the following command:

$ docker context create ecs myecscontext
? Create a Docker context using: [Use arrows to move, type to filter]
  An existing AWS profile
  AWS secret and token credentials
> AWS environment variables

Depending on our familiarity with the AWS credentials setup and the AWS tools in use, we are prompted to choose among three context setups. To skip the details of AWS credential setup, we choose the option of using environment variables.

$ docker context create ecs myecscontext
? Create a Docker context using: AWS environment variables
Successfully created ecs context "myecscontext"

This requires having AWS_ACCESS_KEY and AWS_SECRET_KEY set in the local environment when running Docker commands that target Amazon ECS. The current context in use is marked by * in the output of the context listing:

$ docker context ls
NAME           TYPE   DESCRIPTION                               DOCKER ENDPOINT
default *      moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
myecscontext   ecs    credentials read from environment

To make all subsequent commands target Amazon ECS, make the newly created ECS context the one in use by running:

$ docker context use myecscontext
myecscontext
$ docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT
default          moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
myecscontext *   ecs    credentials read from environment

Run the Compose application on Amazon ECS

An alternative to having it as the context in use is to set the context flag for all commands targeting ECS.

WARNING: Check in advance the cost that the ECS deployment may incur for two ECS services, load balancing (ALB), Cloud Map (DNS resolution), etc.
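As mentioned above, instead of switching the current context you can pass the context explicitly on each invocation. A brief sketch, assuming the myecscontext context created earlier:

# Target Amazon ECS for a single command without changing the current context
docker --context myecscontext compose ps

# The current context (for example, the local default) stays untouched for other commands
docker context ls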
For the following commands, we keep the ECS context as the current context in use. Before running commands on ECS, make sure the AWS account credentials grant access to manage resources for the application as detailed in the documentation. We can now run a command to check that we can successfully access ECS:

$ AWS_ACCESS_KEY="*****" AWS_SECRET_KEY="******" docker compose ls
NAME   STATUS

Export the AWS credentials to avoid setting them for every command:

$ export AWS_ACCESS_KEY="*****"
$ export AWS_SECRET_KEY="******"

To deploy the sample application to ECS, we can run the same command as in the local deployment:

$ docker compose up
WARNING services.build: unsupported attribute
WARNING services.build: unsupported attribute
[+] Running 18/18
 ⠿ myproject                       CreateComplete   206.0s
 ⠿ FrontendTCP80TargetGroup        CreateComplete   0.0s
 ⠿ CloudMap                        CreateComplete   46.0s
 ⠿ FrontendTaskExecutionRole       CreateComplete   19.0s
 ⠿ Cluster                         CreateComplete   5.0s
 ⠿ DefaultNetwork                  CreateComplete   5.0s
 ⠿ BackendTaskExecutionRole        CreateComplete   19.0s
 ⠿ LogGroup                        CreateComplete   1.0s
 ⠿ LoadBalancer                    CreateComplete   122.0s
 ⠿ Default80Ingress                CreateComplete   1.0s
 ⠿ DefaultNetworkIngress           CreateComplete   0.0s
 ⠿ BackendTaskDefinition           CreateComplete   2.0s
 ⠿ FrontendTaskDefinition          CreateComplete   3.0s
 ⠿ FrontendServiceDiscoveryEntry   CreateComplete   1.0s
 ⠿ BackendServiceDiscoveryEntry    CreateComplete   2.0s
 ⠿ BackendService                  CreateComplete   65.0s
 ⠿ FrontendTCP80Listener           CreateComplete   3.0s
 ⠿ FrontendService                 CreateComplete   66.0s

Docker Compose converts the Compose file to a CloudFormation template defining a set of AWS resources. Details on the resource mapping can be found in the documentation. To review the generated CloudFormation template, we can run:

$ docker compose convert
WARNING services.build: unsupported attribute
WARNING services.build: unsupported attribute
AWSTemplateFormatVersion: 2010-09-09
Resources:
  BackendService:
    Properties:
      Cluster:
        Fn::GetAtt:
          - Cluster
          - Arn
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 100
...

To check the state of the services, we can run:

$ docker compose ps
NAME                                              SERVICE    STATUS    PORTS
task/myproject/8c142dea1282499c83050b4d3e689566   backend    Running
task/myproject/a608f6df616e4345b92a3d596991652d   frontend   Running   mypro-LoadB-1ROWIHLNOG5RZ-1172432386.eu-west-3.elb.amazonaws.com:80->80/http

Similarly to the local run, we can query the frontend of the application:

$ curl mypro-LoadB-1ROWIHLNOG5RZ-1172432386.eu-west-3.elb.amazonaws.com:80
    ##         .
    ## ## ##        ==
 ## ## ## ## ##    ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
 \    \         __/
  \____\_______/

Hello from Docker!

We can retrieve logs from the ECS containers by running the compose logs command:

$ docker compose logs
backend    |  * Serving Flask app "main" (lazy loading)
backend    |  * Environment: production
backend    |    WARNING: This is a development server. Do not use it in a production deployment.
backend    |    Use a production WSGI server instead.
...
frontend   | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
frontend   | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
frontend   | /docker-entrypoint.sh: Configuration complete; ready for start up
frontend   | 172.31.22.98 - - [02/Mar/2021:08:35:27 +0000] "GET / HTTP/1.1" 200 212 "-" "ELB-HealthChecker/2.0" "-"
backend    | 172.31.0.11 - - [02/Mar/2021 08:35:27] "GET / HTTP/1.0" 200 -
backend    | 172.31.0.11 - - [02/Mar/2021 08:35:57] "GET / HTTP/1.0" 200 -
frontend   | 172.31.22.98 - - [02/Mar/2021:08:35:57 +0000] "GET / HTTP/1.1" 200 212 "-" "curl/7.75.0" "94.239.119.152"
frontend   | 172.31.22.98 - - [02/Mar/2021:08:35:57 +0000] "GET / HTTP/1.1" 200 212 "-" "ELB-HealthChecker/2.0" "-"

To terminate the Compose application and release the AWS resources, run:

$ docker compose down
[+] Running 2/4
 ⠴ myproject              DeleteInProgress User Initiated   8.5s
 ⠿ DefaultNetworkIngress  DeleteComplete                    1.0s
 ⠿ Default80Ingress       DeleteComplete                    1.0s
 ⠴ FrontendService        DeleteInProgress                  7.5s
...

The Docker documentation provides several examples of Compose files, supported features, and details on how to deploy and update a Compose application running in ECS. The following features are discussed in detail:

use of private images
service discovery
volumes and secrets definition
AWS-specific service properties for auto-scaling, IAM roles, and load balancing
use of existing AWS resources

Summary

We have covered the transition from local deployment of a Compose application to deployment on Amazon ECS. We used a minimal, generic example to demonstrate the Docker Compose cloud capability. For a better understanding of how to update the Compose file and use specific AWS features, the documentation provides many more details.

Resources:

Docker Compose embedded in the Docker CLI: https://github.com/docker/compose-cli/blob/main/INSTALL.md
Compose to ECS support: https://docs.docker.com/cloud/ecs-integration/
ECS-specific Compose examples: https://docs.docker.com/cloud/ecs-compose-examples/
Deploying Docker containers to ECS: https://docs.docker.com/cloud/ecs-integration/
Sample used to demonstrate Compose commands: https://github.com/aiordache/demos/tree/master/ecsblog-demo

The post Docker Compose: From Local to Amazon ECS appeared first on Docker Blog. View the full article
  10. In July we announced a new strategic partnership with Amazon to integrate the Docker experience you already know and love with Amazon Elastic Container Service (ECS) and AWS Fargate. Over the last couple of months we have worked with the community on the beta experience in Docker Desktop Edge. Today we are excited to bring this experience to our entire community in Docker Desktop stable, version 2.3.0.5. You can watch Carmen Puccio (Amazon) and me (Docker) walk through the original demo in the recording of our latest webinar here.

What started off in the beta as a Docker plugin experience, docker ecs, has been pulled into Docker directly as a familiar docker compose flow. This is just the beginning, and we could use your input, so head over to the Docker Roadmap and let us know what you want to see as part of this integration.

There is no better time to try it. Grab the latest Docker Desktop Stable, then check out my example application, which walks you through everything you need to know to deploy a Python application locally in development and then directly to Amazon ECS, in minutes, not hours.

Want more? Join us this Wednesday, September 16th at 10am Pacific, when Jonah Jones (Amazon), Peter McKee (Docker), and I continue the discussion on Docker Run, our YouTube channel, with a live Q&A from our last webinar. We will be answering the top questions from the webinar and from the live audience. DockTalk Q&A: From Docker Straight to AWS

The post ICYMI: From Docker Straight to AWS Built-in appeared first on Docker Blog. View the full article