Showing results for tags 'amazon cognito'.

Found 20 results

  1. AWS has launched a feature for Amazon Cognito customers that reduces the time spent securing Amazon API Gateway APIs with fine-grained access control from weeks to days. The feature leverages Amazon Verified Permissions to manage and evaluate granular security policies that reference user attributes and groups. With a few clicks, you can enforce that only users in authorized Amazon Cognito groups have access to the application’s APIs. For example, if you are building a loan processing application, you can secure it by restricting access to the “approve_loan” API to users in the “loan_officers” group. You can implement even more fine-grained authorization, without making any code changes, by updating the underlying Cedar policy so that only “loan_officers” above “Director” level can approve loans. View the full article
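The group-and-attribute decision described in the loan example can be expressed in plain code. The sketch below is illustrative only: it mimics, in Python, the kind of rule a Cedar policy evaluated by Amazon Verified Permissions might encode; the "level" ranking and user records are hypothetical, not part of the announcement.

```python
# Illustrative sketch: the decision a Cedar policy might express for the loan
# example ("loan_officers" above "Director" level may approve loans).
# The level ranking below is a hypothetical attribute, not a Cognito built-in.
LEVEL_RANK = {"Analyst": 0, "Manager": 1, "Director": 2, "VP": 3}

def can_approve_loan(user: dict) -> bool:
    """Allow approve_loan only for loan_officers above Director level."""
    in_group = "loan_officers" in user.get("groups", [])
    above_director = LEVEL_RANK.get(user.get("level"), -1) > LEVEL_RANK["Director"]
    return in_group and above_director

print(can_approve_loan({"groups": ["loan_officers"], "level": "VP"}))       # True
print(can_approve_loan({"groups": ["loan_officers"], "level": "Manager"}))  # False
```

With Verified Permissions, this logic would live in the Cedar policy rather than in application code, which is why the announcement stresses that tightening the rule needs no code change.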
  2. Amazon Cognito has added three features for customers using the SAML standard for federation. Customers can use Amazon Cognito user pools to send signed SAML authentication requests, require encrypted responses from a SAML identity provider, and use identity provider-initiated single sign-on (SSO) for SAML federation. View the full article
  3. Amazon Cognito now supports provisioned capacity for customers who need higher request limits for APIs used for authentication, user management, and other operations. Customers can request limits higher than the defaults provided by Cognito in any of nine API categories, including User Authentication, User Creation, User Federation, User Read, and User Token. For a complete list of API categories, and the API operations in them, refer to the documentation. Provisioned capacity will be charged based on the desired request-per-second (RPS) increment and the duration (expressed as a portion of a month). View the full article
  4. Amazon Cognito identity pools now enable you to manage quotas for commonly used operations to create and retrieve identities and manage tags for identity pools. This update makes it simple to view your quota usage and to better plan and architect your solution. For example, you can now see the quotas for APIs such as GetId and TagResource in the Service Quotas console. By leveraging AWS Service Quotas, you can quickly understand your applied service quota values for these identity pool APIs. View the full article
  5. Amazon Cognito user pools now support the ability to enrich access tokens with custom attributes in the form of OAuth 2.0 scopes and claims. You can make application-specific advanced authorization decisions using custom attributes in the access token. This feature also allows you to personalize end-user experiences and improve customer engagement. View the full article
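An enriched Cognito access token is a JWT, so the custom scopes and claims land in its base64url-encoded payload. The sketch below decodes a toy, locally constructed token to read a hypothetical custom claim (`tenant_id`) and scope; in a real application you must verify the token's signature against the user pool's JWKS before trusting any claim, which is omitted here.

```python
import base64, json

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature (illustration only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token whose payload carries a hypothetical custom claim and scope.
payload = {"sub": "user-123", "scope": "openid tenant/read", "tenant_id": "acme"}
body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJub25lIn0.{body}.sig"

claims = jwt_claims(token)
print(claims["tenant_id"])  # acme
```

Your backend can base authorization decisions on such claims (for example, only serving rows whose tenant matches `tenant_id`) without an extra lookup against the user directory.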
  6. Introduction

     Kubecost provides real-time cost visibility and insights for teams using Kubernetes. It has an intuitive dashboard to help you understand and analyze the costs of running your workloads in a Kubernetes cluster. Kubecost is built on OpenCost, which was recently accepted as a Cloud Native Computing Foundation (CNCF) Sandbox project, and is actively supported by AWS.

     Amazon EKS optimized bundle of Kubecost

     Last year, Amazon Elastic Kubernetes Service (Amazon EKS) announced the availability of an Amazon EKS-optimized bundle of Kubecost for cluster cost visibility. The bundle is available to customers free of charge and includes Kubecost troubleshooting support. Kubernetes platform administrators and finance leaders can use Kubecost to visualize a breakdown of their Amazon EKS charges, allocate costs, and charge them back to organizational units (e.g., application teams). Kubecost gives internal teams and business units transparent and accurate cost data based on the AWS bill. Customers can also get personalized suggestions for cost optimization tailored to their infrastructure environment and usage patterns.

     Using Kubecost’s intuitive dashboard, customers can monitor, analyze, and allocate cluster costs. When customers deploy Kubecost in a cluster, the dashboard is secured by NGINX basic authentication, which isn’t recommended in production environments. This post shows how to make the dashboard accessible to external audiences, such as finance leaders, and secure access using Amazon Cognito.

     Solution overview

     We make the Kubecost dashboard accessible outside the cluster by exposing it through an ingress, which uses an Application Load Balancer (ALB). By integrating Amazon Cognito with the ALB, the solution adds support for authenticating and authorizing users of the Kubecost dashboard. To learn more about how ALB and Cognito integrate, see How to use Application Load Balancer and Amazon Cognito to authenticate users for your Kubernetes web apps.
     In this post, we use the secure ingress-auth-cognito EKS Blueprints pattern to set up:

       • An Application Load Balancer, Amazon Cognito, and a Transport Layer Security (TLS) certificate from AWS Certificate Manager (ACM) with an Amazon Route 53 hosted zone to authenticate users to Kubecost
       • Deployment of the Kubecost application using the Kubecost add-on for EKS CDK Blueprints
       • A Kubernetes Ingress with annotations for Amazon Cognito and a TLS certificate (from ACM) for securely authenticating users to Kubecost

     Customers can use this pattern to manage multiple clusters across environments with GitOps. See Continuous Deployment and GitOps delivery with Amazon EKS Blueprints and ArgoCD to learn about GitOps-driven delivery using EKS Blueprints patterns. The secure ingress-auth-cognito Cloud Development Kit (CDK) pattern includes an Amazon EKS cluster configuration, a compute capacity configuration, and the add-ons required by Kubecost.

     Prerequisites

     You need the following to complete the steps in this post:

       • AWS Command Line Interface (AWS CLI) version 2
       • AWS CDK version 2.80.0 or later
       • Node version 20.0.0 or later
       • NPM version 8.19.2 or later
       • kubectl version 1.24 or later
       • Git
       • An Amazon Route 53 public hosted zone

     Let’s start by setting a few environment variables:

     ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
     export AWS_REGION=${AWS_REGION:=us-west-2}

     Clone the cdk-eks-blueprints-patterns repository and install its dependency packages. The repository contains CDK v2 code written in TypeScript.

     git clone https://github.com/aws-samples/cdk-eks-blueprints-patterns.git
     cd cdk-eks-blueprints-patterns
     npm install
     make build

     The secure ingress-auth-cognito EKS Blueprints pattern is at lib/secure-ingress-auth-cognito/index.ts. In this file, you can find the blueprint definition, with all of the configurations above, built using the blueprints.EksBlueprint.builder() method.

     Bootstrap CDK

     The first step of any CDK deployment is bootstrapping the environment.
     Bootstrapping provisions the resources the AWS CDK needs before you can deploy AWS CDK applications into an AWS environment (an AWS environment is a combination of an AWS account and Region). If you already use CDK in a Region, you don’t need to repeat the bootstrapping process. Execute the command below to bootstrap the AWS environment in your Region:

     cdk bootstrap aws://$ACCOUNT_ID/$AWS_REGION

     Deploy Kubecost with secured access

     In this solution, we’ll allow access to the Kubecost dashboard based on user email addresses. You can control access to the dashboard by allow-listing an entire domain or individual email addresses. Users are required to sign up before they can access the Kubecost dashboard. The pre sign-up Lambda trigger only allows sign-ups when the user’s email domain matches the allow-listed domains. When users sign up, Amazon Cognito sends a verification code to their email address. Users have to verify their email address (using the one-time code) before they get access to the dashboard.

     First, we’ll create an AWS Systems Manager (SSM) parameter to store the email domains that users can use to sign up. Next, we’ll create an environment variable to store the domain name that hosts the Kubecost dashboard. The email domain and the domain used to host the Kubecost dashboard can be the same or different. For example, you may choose to host the dashboard at kubecost.myorg.mycompany.com and use email@mycompany.org to log in to the dashboard.
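The pre sign-up trigger behavior just described can be sketched as follows. This is a minimal Python sketch, not the pattern's actual code (which lives in the repository); the ALLOWED_DOMAINS value mirrors the SSM parameter created in the next step.

```python
import os

# Minimal sketch of a Cognito pre sign-up Lambda trigger that rejects
# sign-ups from non-allow-listed email domains. Illustrative only.
os.environ.setdefault("ALLOWED_DOMAINS", "emaildomain1.com,emaildomain2.com")

def handler(event, context=None):
    allowed = os.environ["ALLOWED_DOMAINS"].split(",")
    email = event["request"]["userAttributes"]["email"]
    domain = email.rsplit("@", 1)[-1].lower()
    if domain not in allowed:
        # Raising an error from a pre sign-up trigger blocks the sign-up.
        raise Exception(f"Sign-up not allowed for domain {domain}")
    return event  # returning the event unchanged lets the sign-up proceed

handler({"request": {"userAttributes": {"email": "user@emaildomain1.com"}}})
```

Cognito surfaces the raised error message to the user on the hosted UI, which is the "error" behavior demonstrated later in the post when signing up from a non-allow-listed domain.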
     Create the parameter below, with the allowed email domains, in the AWS Systems Manager Parameter Store:

     export SSM_PARAMETER_KEY="/secure-ingress-auth-cognito/ALLOWED_DOMAINS"
     export SSM_PARAMETER_VALUE="emaildomain1.com,emaildomain2.com"
     aws ssm put-parameter \
       --name "$SSM_PARAMETER_KEY" \
       --value "$SSM_PARAMETER_VALUE" \
       --type "String" \
       --region $AWS_REGION

     If you’d like to limit access to the dashboard by email address, you can also create a parameter to store allowed email addresses and add logic to the pre-authentication Lambda trigger as shown here.

     Next, create a secret in AWS Secrets Manager that you’ll use to access ArgoCD. The argo-admin-password secret must be defined as plain text (not key/value):

     aws secretsmanager create-secret --name argo-admin-secret \
       --description "Admin Password for ArgoCD" \
       --secret-string "password123$" \
       --region $AWS_REGION

     The CDK code expects the allowed domain and subdomain names in the CDK context file (cdk.json). Create two environment variables. The PARENT_HOSTED_ZONE variable contains the name of your Route 53 public hosted zone. The DEV_SUBZONE_NAME variable will be the address of your Kubecost dashboard.
     Generate the cdk.json file:

     PARENT_HOSTED_ZONE=mycompany.a2z.com
     DEV_SUBZONE_NAME=kubecost.mycompany.a2z.com
     cat << EOF > cdk.json
     {
       "app": "npx ts-node dist/lib/common/default-main.js",
       "context": {
         "parent.hostedzone.name": "${PARENT_HOSTED_ZONE}",
         "dev.subzone.name": "${DEV_SUBZONE_NAME}"
       }
     }
     EOF

     Run the command below from the root of the repository to deploy the solution:

     make pattern secure-ingress-cognito deploy secure-ingress-blueprint

     This blueprint deploys the following:

       • An Amazon Virtual Private Cloud (Amazon VPC) with public and private subnets, Network Address Translation (NAT) gateways in each Availability Zone (AZ), and an Internet Gateway
       • An Amazon EKS cluster with the following Kubernetes add-ons: Metrics Server, Cluster Autoscaler, Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) Driver, AWS Load Balancer Controller, Amazon VPC CNI, ExternalDNS, Kubecost, and Argo CD
       • An Amazon Cognito user pool, user pool client, and domain, plus pre sign-up and pre-authentication Lambda triggers that run custom logic to validate users before allowing them to sign up or authenticate
     Once the deployment is complete, you will see output similar to the following in your terminal:

     Outputs:
     secure-ingress-blueprint.secureingressblueprintClusterNameD6A1BE5C = secure-ingress-blueprint
     secure-ingress-blueprint.secureingressblueprintConfigCommandD0275968 = aws eks update-kubeconfig --name secure-ingress-blueprint --region us-west-2 --role-arn arn:aws:iam::<ACCOUNT ID>:role/secure-ingress-blueprint-secureingressblueprintMas-XXXXXXXXXX
     secure-ingress-blueprint.secureingressblueprintGetTokenCommand21BE2184 = aws eks get-token --cluster-name secure-ingress-blueprint --region us-west-2 --role-arn arn:aws:iam::<ACCOUNT ID>:role/secure-ingress-blueprint-secureingressblueprintMas-XXXXXXXXXX
     Stack ARN: arn:aws:cloudformation:us-west-2:<ACCOUNT ID>:stack/secure-ingress-blueprint/XXXXXXXXXX

     To update your Kubernetes configuration for the new cluster, copy and run the aws eks update-kubeconfig command (the second command in the output) in your terminal, or retrieve and run it directly:

     export EKS_KUBECONFIG=$(aws cloudformation describe-stacks \
       --stack-name secure-ingress-blueprint \
       --query "Stacks[0].Outputs[?starts_with(OutputKey, 'secureingressblueprintConfigCommand')].OutputValue" \
       --region $AWS_REGION \
       --output text)
     eval $EKS_KUBECONFIG

     Validate access to your Amazon EKS cluster by listing all namespaces with kubectl:

     kubectl get namespace

     You should see the following namespaces in the cluster:

     NAME              STATUS   AGE
     argocd            Active   30m
     default           Active   39m
     external-dns      Active   30m
     kube-node-lease   Active   39m
     kube-public       Active   39m
     kube-system       Active   39m
     kubecost          Active   30m

     The stack deploys Kubecost resources in the kubecost namespace.
     kubectl -n kubecost get all

     NAME                                                             READY   STATUS    RESTARTS   AGE
     pod/kubecost-cost-analyzer-84d5775f7b-zg8mq                      2/2     Running   0          88m
     pod/kubecost-cost-analyzer-grafana-69d77ccd6d-9r8rc              2/2     Running   0          88m
     pod/kubecost-cost-analyzer-kube-state-metrics-789fc978c8-ch8lb   1/1     Running   0          88m
     pod/kubecost-cost-analyzer-prometheus-node-exporter-w9w75        1/1     Running   0          88m
     pod/kubecost-cost-analyzer-prometheus-server-6dc99564bf-mz9nw    2/2     Running   0          88m

     NAME                                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
     service/kubecost-cost-analyzer-cost-analyzer              ClusterIP   172.20.193.130   <none>        9003/TCP,9090/TCP   88m
     service/kubecost-cost-analyzer-grafana                    ClusterIP   172.20.143.32    <none>        80/TCP              88m
     service/kubecost-cost-analyzer-kube-state-metrics         ClusterIP   172.20.165.147   <none>        8080/TCP            88m
     service/kubecost-cost-analyzer-prometheus-node-exporter   ClusterIP   None             <none>        9100/TCP            88m
     service/kubecost-cost-analyzer-prometheus-server          ClusterIP   172.20.54.102    <none>        80/TCP              88m

     NAME                                                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
     daemonset.apps/kubecost-cost-analyzer-prometheus-node-exporter    1         1         1       1            1           <none>          88m

     NAME                                                        READY   UP-TO-DATE   AVAILABLE   AGE
     deployment.apps/kubecost-cost-analyzer                      1/1     1            1           88m
     deployment.apps/kubecost-cost-analyzer-grafana              1/1     1            1           88m
     deployment.apps/kubecost-cost-analyzer-kube-state-metrics   1/1     1            1           88m
     deployment.apps/kubecost-cost-analyzer-prometheus-server    1/1     1            1           88m

     Testing the authentication

     Point your browser to the URL you associated with the DEV_SUBZONE_NAME key in the CDK context to access the Kubecost dashboard. The value is also stored as an environment variable:

     echo $DEV_SUBZONE_NAME

     Your browser will be redirected to an Amazon Cognito hosted user interface (UI) sign-in page. Since this is your first time accessing the application, select Sign up. The pre sign-up AWS Lambda trigger for the Amazon Cognito user pool is configured to allow users to register only from allow-listed email domains.
     The allow-listed email domains are configured as an environment variable in the AWS Lambda function. Let’s try signing up a new user with an email address whose domain is not part of the allow list. You’ll get an error, since the domain is not allow-listed. Now let’s sign up as a new user with one of the allow-listed email domains. This time, you’ll get a prompt to confirm your account. Get the verification code sent to your email and confirm your account. After verifying your email address, sign in to access the Kubecost dashboard. Once you sign in, the ALB will redirect you to the Kubecost dashboard.

     Cleaning up

     You will continue to incur costs until you delete the infrastructure that you created for this post. Use the command below to delete the resources created during this post:

     make pattern secure-ingress-cognito destroy secure-ingress-blueprint

     Conclusion

     In this post, we showed you how to secure the Kubecost dashboard while making it accessible to users who don’t have access to the Kubernetes cluster. We used an ALB to expose the dashboard and secured access using Cognito. We also created a record in Route 53 so users can easily access the dashboard, and used a Cognito user pool to store user information. If you already have an identity provider that supports OpenID Connect (OIDC) or SAML 2.0, you can integrate it with Cognito to skip the separate sign-up and sign-in for the Kubecost dashboard. View the full article
  7. This post describes how to use Amazon Cognito to authenticate users for web apps running in an Amazon Elastic Kubernetes Services (Amazon EKS) cluster. View the full article
  8. Amazon Cognito now offers a new console experience that makes it even easier for customers to manage Amazon Cognito identity pools and add federated sign-in so that customers’ applications can get access to AWS resources. Customers who wish to opt in to the new, streamlined experience can do so by navigating to the Amazon Cognito console. View the full article
  9. If your decentralized application (dApp) must interact directly with AWS services like Amazon S3 or Amazon API Gateway, you must authorize your users by granting them temporary AWS credentials. This solution uses Amazon Cognito in combination with your users’ digital wallets to obtain valid Amazon Cognito identities and temporary AWS credentials for your users. It also demonstrates how to use Amazon API Gateway to secure and proxy API calls to third-party Web3 APIs.

     In this blog, you will build a fully serverless decentralized application (dApp) called “NFT Gallery”. This dApp permits users to look up their own non-fungible tokens (NFTs), or any other NFT collection on the Ethereum blockchain, using the HTTP APIs of one of two Web3 providers: Alchemy or Moralis. These APIs help you integrate Web3 components into any web application without blockchain technical knowledge or access.

     Solution overview

     The user interface (UI) of your dApp is a single-page application (SPA) written in JavaScript using ReactJS, NextJS, and Tailwind CSS. The dApp interacts with Amazon Cognito for authentication and authorization, and with Amazon API Gateway to proxy data from the backend Web3 providers’ APIs.

     Figure 1. Architecture diagram showing the authentication and API request proxy solution for Web3

     Prerequisites

       • Install Node.js, yarn or npm, and the AWS Serverless Application Model Command Line Interface (AWS SAM CLI) on your computer.
       • Have an AWS account and the proper AWS Identity and Access Management (IAM) permissions to deploy the resources required by this architecture.
       • Install a digital wallet extension in your browser and connect it to the Ethereum blockchain. Metamask is a popular digital wallet.
       • Get an Alchemy account (free) and an API key for the Ethereum blockchain. Read the Alchemy Quickstart guide for more information.
       • Sign up for a Moralis account (free) and API key. Read the Moralis Getting Started documentation for more information.
     Using the AWS SAM framework

     You’ll use AWS SAM as your framework to define, build, and deploy your backend resources. AWS SAM is built on top of AWS CloudFormation and enables developers to define serverless components using a simpler syntax.

     Walkthrough

     Clone this GitHub repository.

     Build and deploy the backend

     The source code has two top-level folders:

       • backend: contains the AWS SAM template template.yaml. Examine the template.yaml file for more information about the resources deployed in this project.
       • dapp: contains the code for the dApp.

     1. Go to the backend folder and copy the prod.parameters.example file to a new file called prod.parameters. Edit it to add your Alchemy and Moralis API keys.

     2. Run the following command to process the SAM template (review the sam build Developer Guide):

     sam build

     3. You can now deploy the SAM template by running the following command (review the sam deploy Developer Guide):

     sam deploy --parameter-overrides $(cat prod.parameters) --capabilities CAPABILITY_NAMED_IAM --guided --confirm-changeset

     4. SAM will ask you some questions and generate a samconfig.toml file containing your answers. You can edit this file afterwards as desired. Future deployments will use the .toml file and can be run using sam deploy. Don’t commit the samconfig.toml file to your code repository, as it contains private information.

     Your CloudFormation stack should be deployed after a few minutes. The Outputs show the resources that you must reference in the web application located in the dapp folder.

     Run the dApp

     You can now run your dApp locally.

     1. Go to the dapp folder and copy the .env.example file to a new file named .env. Edit this file to add the backend resource values needed by the dApp, following the instructions in the .env.example file.

     2. Run the following command to install the JavaScript dependencies:

     yarn

     3. Start the development web server locally by running:

     yarn dev

     Your dApp should now be accessible at http://localhost:3000.
     Deploy the dApp

     The SAM template creates an Amazon S3 bucket and an Amazon CloudFront distribution, ready to serve your single-page application (SPA) on the internet. You can access your dApp from the internet at the URL of the CloudFront distribution, which is visible in your CloudFormation stack’s Outputs tab in the AWS Management Console, or in the output of the sam deploy command.

     For now, your S3 bucket is empty. Build the dApp for production and upload the code to the S3 bucket by running these commands:

     cd dapp
     yarn build
     cd out
     aws s3 sync . s3://${BUCKET_NAME}

     Replace ${BUCKET_NAME} with the name of your S3 bucket.

     Automate deployment using SAM Pipelines

     SAM Pipelines automatically generates deployment pipelines for serverless applications. If changes are committed to your Git repository, it automates the deployment of your CloudFormation stack and dApp code. With SAM Pipelines, you can choose a Git provider like AWS CodeCommit, and a build environment like AWS CodePipeline, to automatically provision and manage your deployment pipeline. It also supports GitHub Actions. Read more about the sam pipeline bootstrap command to get started.

     Host your dApp using the InterPlanetary File System (IPFS)

     IPFS is a good solution for hosting dApps in a decentralized way. An IPFS gateway can serve as the origin of your CloudFront distribution and serve IPFS content over HTTP. dApps are often hosted on IPFS to increase trust and transparency. With IPFS, your web application’s source code and assets are not tied to a DNS name or a specific HTTP host; they live independently on the IPFS network. Read more about hosting a single-page website on IPFS, and how to run your own IPFS cluster on AWS.
     Secure authentication and authorization

     In this section, we’ll demonstrate how to:

       • Authenticate users via their digital wallet using an Amazon Cognito user pool
       • Protect your API Gateway from the public internet by authorizing access for both authenticated and unauthenticated users
       • Call the Alchemy and Moralis third-party APIs securely using API Gateway HTTP passthrough and AWS Lambda proxy integrations
       • Use the JavaScript Amplify Libraries to interact with Amazon Cognito and API Gateway from your web application

     Authentication

     Your dApp is usable by both authenticated and unauthenticated users. Unauthenticated users can look up NFT collections, while authenticated users can also look up their own NFTs. In your dApp, there is no login/password combination or Identity Provider (IdP) in place to authenticate your users. Instead, users connect their digital wallet to the web application.

     To capture users’ wallet addresses and grant them temporary AWS credentials, you can use an Amazon Cognito user pool and an Amazon Cognito identity pool. You can create a custom authentication flow by implementing an Amazon Cognito custom authentication challenge, which uses AWS Lambda triggers. This challenge requires your users to sign a generated message using their digital wallet. If the signature is valid, it confirms that the user owns the wallet address. The wallet address is then used as the user identifier in the Amazon Cognito user pool.

     Figure 2 details the Amazon Cognito authentication process. Three Lambda functions perform the different authentication steps.

     Figure 2. Amazon Cognito authentication process

     To define the authentication success conditions, the Amazon Cognito user pool calls the “Define auth challenge” Lambda function (defineAuthChallenge.js). To generate the challenge, Amazon Cognito calls the “Create auth challenge” Lambda function (createAuthChallenge.js); in this case, it generates a random message for the user to sign.
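The two triggers just described can be sketched as follows. This is a minimal Python sketch for illustration only; the project implements these triggers in JavaScript, and the event shapes below are condensed from the Cognito custom-auth trigger contract.

```python
import secrets

def create_auth_challenge(event):
    # "Create auth challenge": generate a random message for the user to sign.
    # The public parameters go to the client; the private copy is kept by
    # Cognito so the answer can be verified later.
    message = f"Sign this message to log in: {secrets.token_hex(16)}"
    event["response"]["publicChallengeParameters"] = {"message": message}
    event["response"]["privateChallengeParameters"] = {"message": message}
    return event

def define_auth_challenge(event):
    # "Define auth challenge": issue tokens only once a previous
    # custom-challenge round was answered correctly.
    session = event["request"].get("session", [])
    passed = any(s.get("challengeResult") for s in session)
    event["response"]["issueTokens"] = passed
    event["response"]["failAuthentication"] = False
    if not passed:
        event["response"]["challengeName"] = "CUSTOM_CHALLENGE"
    return event
```

The third trigger, described next, verifies the wallet signature over the generated message; that step requires Ethereum signature recovery and is not sketched here.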
     Amazon Cognito forwards the challenge to the dApp, which prompts the user to sign the message using their digital wallet and private key. The dApp then returns the signature to Amazon Cognito as the challenge response. To verify that the user’s wallet actually signed the message, Amazon Cognito forwards the user’s response to the “Verify auth challenge response” Lambda function (verifyAuthChallengeResponse.js). If the signature is valid, Amazon Cognito authenticates the user and creates a new identity in the user pool with the wallet address as the username.

     Finally, Amazon Cognito returns a JWT to the dApp containing multiple claims, one of them being cognito:username, which contains the user’s wallet address. These claims are passed to your AWS Lambda events and Amazon API Gateway mapping templates, allowing your backend to securely identify the user making the API requests.

     Authorization

     Amazon API Gateway offers multiple ways of authorizing access to an API route. This example showcases three different authorization methods:

       • AWS_IAM: Authorization with IAM roles. IAM roles grant access to specific API routes or any other AWS resources. The IAM role assumed by the user is granted by the Amazon Cognito identity pool.
       • COGNITO_USER_POOLS: Authorization with an Amazon Cognito user pool. API routes are protected by validating the user’s Amazon Cognito token.
       • NONE: No authorization. API routes are open to the public internet.

     API Gateway backend integrations

     HTTP proxy integration

     The HTTP proxy integration method allows you to proxy HTTP requests to another API. The requests and responses can pass through as-is, or you can modify them on the fly using mapping templates. This method is a cost-effective way to secure access to any third-party API, because your third-party API keys are stored in your API Gateway and not in the frontend application. You can also activate caching on API Gateway to reduce the number of API calls made to the backend APIs.
     Caching increases performance, reduces cost, and helps control usage. Inspect the GetNFTsMoralisGETMethod and GetNFTsAlchemyGETMethod resources in the SAM template to understand how you can use mapping templates to modify the headers, path, or query string of your incoming requests.

     Lambda proxy integration

     API Gateway can use AWS Lambda as a backend integration. Lambda functions enable you to implement custom code and logic before returning a response to your dApp. In the backend/src folder, you will find two Lambda functions:

       • getNFTsMoralisLambda.js: calls the Moralis API and returns the raw response
       • getNFTsAlchemyLambda.js: calls the Alchemy API and returns the raw response

     To access your authenticated user’s wallet address from your Lambda function code, read the cognito:username claim as follows:

     var wallet_address = event.requestContext.authorizer.claims["cognito:username"];

     Using Amplify Libraries in the dApp

     The dApp uses the AWS Amplify JavaScript Libraries to interact with the Amazon Cognito user pool, the Amazon Cognito identity pool, and Amazon API Gateway. With the Amplify Libraries, you can interact with the Amazon Cognito custom authentication flow, get AWS credentials for your frontend, and make HTTP API calls to your API Gateway endpoint.

     The Amplify Auth library is used to perform the authentication flow: to sign up, sign in, and respond to the Amazon Cognito custom challenge. Examine the ConnectButton.js and user.js files in the dapp folder. To make API calls to your API Gateway, use the Amplify API library. Examine the api.js file in the dApp to understand how you can make API calls to different API routes; note that some are protected by AWS_IAM authorization and others by COGNITO_USER_POOLS. Based on their current authentication status, your users will automatically assume the CognitoAuthorizedRole or CognitoUnAuthorizedRole IAM role referenced in the Amazon Cognito identity pool.
     AWS Amplify automatically uses the credentials associated with your IAM role when calling an API route protected by the AWS_IAM authorization method. The Amazon Cognito identity pool allows anonymous users to assume the CognitoUnAuthorizedRole IAM role. This allows secure access to your API routes, or any other AWS services you configure, even for anonymous users, so your API routes are not publicly available to the internet.

     Cleaning up

     To avoid incurring future charges, delete the CloudFormation stack created by SAM. Run the sam delete command, or delete the CloudFormation stack directly in the AWS Management Console.

     Conclusion

     In this blog, we’ve demonstrated how to use different AWS managed services to run and deploy a decentralized web application (dApp) on AWS. We’ve also shown how to integrate securely with Web3 providers’ APIs, like Alchemy or Moralis. You can use an Amazon Cognito user pool to create a custom authentication challenge and authenticate users with a cryptographically signed message, and you can secure access to third-party APIs using API Gateway while keeping your secrets safe on the backend. Finally, you’ve seen how to host a single-page application (SPA) using Amazon S3 and Amazon CloudFront as your content delivery network (CDN). View the full article
  10. Amazon Cognito identity pools now publishes data events to AWS CloudTrail logs. Customers now have greater visibility into access-related activities for both guest and authenticated users of their applications. Administrators can now configure Amazon CloudWatch Alarms to monitor specific activity on Amazon Cognito identity pools and react based on automated workflows. Customers can record data events in AWS CloudTrail and gain better insight into the identity providers leveraged by users to access AWS resources with Amazon Cognito identity pools. AWS CloudTrail may charge for recording data events. View the full article
  11. You can now activate deletion protection for your Amazon Cognito user pools. When you configure a user pool with deletion protection, the pool cannot be deleted by any user. Deletion protection is now active by default for new user pools created through the AWS Console. You can activate or deactivate deletion protection for an existing user pool in the AWS Console, with the AWS Command Line Interface, or through the API. Deletion protection prevents you from requesting the deletion of a user pool unless you first modify the pool and deactivate deletion protection. View the full article
  12. Amazon Cognito hosted UI now enables end users to register their own authenticator apps. Customers can now enable users to self-enroll in either SMS-based one-time passwords (OTP) or a time-based one-time password (TOTP) authenticator app. Administrators no longer have to initiate end-user enrollment when using TOTP with the hosted UI. With this new addition, developers using the hosted UI have the same level of security as before, but without having to develop any custom code, enabling them to focus on improving their application. Administrators will now spend less time onboarding end users to a higher level of authentication assurance. End users of the application also have the convenience of adding their own authenticator apps and leveraging multi-factor authentication (MFA) when accessing applications that use the Cognito hosted UI. Customers can benefit from a higher level of authentication for their applications at no additional cost. View the full article
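For context on what a TOTP authenticator app actually computes after enrollment: it derives a short-lived code from the shared secret using the algorithm standardized in RFC 6238 (HOTP over a time counter). A minimal sketch, checked against the RFC's published test vector:

```python
import hashlib, hmac, struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 TOTP code (SHA-1), as an authenticator app does."""
    counter = int(for_time) // step          # 30-second time steps
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 Appendix B test vector: 8-digit code at t=59 for this ASCII secret.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Cognito stores the shared secret at enrollment and runs the same computation server-side to validate the 6-digit code the user types in.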
  13. You can now enable AWS WAF protections for Amazon Cognito, making it even easier to protect Amazon Cognito user pools and hosted UI from common web exploits. View the full article
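In addition to the console workflow, an existing WAFv2 web ACL can be attached to a user pool from the CLI. A sketch; both ARNs are placeholders, and the command requires an AWS account:

```shell
# Sketch: associate a regional WAFv2 web ACL with a Cognito user pool.
aws wafv2 associate-web-acl \
  --web-acl-arn arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/example/abcd1234 \
  --resource-arn arn:aws:cognito-idp:eu-west-1:111122223333:userpool/eu-west-1_EXAMPLE
```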
  14. Introduction

Designing and maintaining secure user management, authentication, and other related features for applications is not an easy task. Amazon Cognito takes care of this work, which allows developers to focus on building the core business logic of the application. Amazon Cognito provides user management, authentication, and authorization for applications where users can sign in directly or through their pre-existing social or corporate credentials.

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy for customers to deploy, manage, and scale their container-based applications. When building on Amazon ECS, it is common to use an Application Load Balancer (ALB) for application high availability and for features like SSL/TLS offloading, host-based routing, and other application-aware traffic handling. Another benefit of using the ALB with Amazon ECS is that the ALB has built-in support for Amazon Cognito: when setting up the ALB, you can choose whether incoming user traffic should be redirected to Amazon Cognito for authentication. By building secure containerized applications on Amazon ECS, and using the ALB and its Amazon Cognito integration, you get both the ease of container orchestration and user authentication and authorization.

Flow of how Application Load Balancer authenticates users using Amazon Cognito

For an application fronted by an ALB that integrates with Amazon Cognito and has been set up to authenticate users, the following stepwise flow describes what happens when a user attempts to access the application. For more information, see the example built by the AWS Elastic Load Balancing Demos. It helps to understand what the ALB does to secure user access with Amazon Cognito:

1. A user sends a request to the application fronted by the ALB, which has a set of rules that it evaluates for all traffic to determine what action to carry out.
2. A matching rule (such as a path-based rule covering all traffic for /login) triggers the authentication action on the ALB. The ALB inspects the user's HTTP payload for an authentication cookie. Because this is the user's first visit, this cookie isn't present.
3. Seeing no cookie, the ALB redirects the user to the configured Amazon Cognito authorization endpoint.
4. The user is presented with an authentication page from Amazon Cognito, where the user inputs their credentials.
5. Amazon Cognito redirects the user back to the ALB and passes an authorization code to the user in the redirect URL.
6. The load balancer takes this authorization code and makes a request to Amazon Cognito's token endpoint.
7. Amazon Cognito validates the authorization code and presents the ALB with an ID and access token.
8. The ALB forwards the access token to Amazon Cognito's user info endpoint, which presents the ALB with user claims.
9. The ALB redirects the user who is trying to access the application (step 1) to the same URL while inserting the authentication cookie in the redirect response.
10. The user makes the request to the ALB with the cookie; the ALB validates it and forwards the request to the ALB's target. The ALB inserts information (such as user claims, the access token, and the subject field) into a set of X-AMZN-OIDC-* HTTP headers sent to the target.
11. The target generates a response and forwards it to the ALB, which sends the response to the authenticated user.

When the user makes subsequent requests, the flow goes through steps 9–11. If the user makes a new request without the authentication cookie, it goes through steps 1–11. For more information, see the authentication flow between the ALB and Amazon Cognito.

Solution overview

You will use a PHP application built for demonstration purposes. The application is published and verified in the public Docker Hub.
We use and configure Amazon Route 53 for Domain Name System (DNS) handling and AWS Certificate Manager (ACM) to provision Transport Layer Security (TLS) certificates. Amazon Cognito handles the authentication flows and Amazon ECS handles the container scheduling and orchestration. The following solution architecture diagram presents an overview of the solution.

Prerequisites

To complete this tutorial, you need the following tools, which can be installed with the links:

AWS CLI v2: The AWS Command Line Interface (AWS CLI) is an open source tool that allows you to interact with AWS services using commands in your command-line shell.
ecs-cli: The Amazon Elastic Container Service (Amazon ECS) CLI provides high-level commands to simplify creating, updating, and monitoring tasks and clusters from a local development environment.

Environment

In this post, I used AWS Cloud9 as an Integrated Development Environment (IDE) to configure the settings in this tutorial. You can use AWS Cloud9 or your own IDE. The commands used were tested on Amazon Linux 2 running in the AWS Cloud9 environment. Follow the linked steps to install and configure AWS Cloud9: create a workspace to deploy this solution, which includes creating an AWS Identity and Access Management (IAM) role that will be attached to the workspace instance.

Launch the base infrastructure platform that the resources reside in

Amazon ECS needs to be launched into a Virtual Private Cloud (VPC). To create this infrastructure, you use an AWS CloudFormation template that automates the creation of the platform. Download the zip file that contains an AWS CloudFormation YAML file: codebuild-vpc-cfn.yaml. Once deployed, the following resources are created in your AWS account: a Virtual Private Cloud (VPC), an internet gateway, two public subnets, two private subnets, two Network Address Translation (NAT) gateways, and one security group.
To launch the stack, follow these steps:

1. Sign in to the AWS Management Console in your Region of choice (the Region list is in the top right-hand corner).
2. Search for the AWS CloudFormation service in the Console search bar.
3. Choose Create Stack and select With new resources (standard).
4. To specify the template, choose Upload a template file and upload the previously downloaded codebuild-vpc-cfn.yaml file, then choose Next.
5. Enter ecsplatform for the stack name and ecsplatform for EnvironmentName. Choose Next.
6. Leave the rest of the default settings and choose Next, then choose Create Stack.

When CloudFormation has finished deploying the resources, the status is CREATE_COMPLETE. Next, on your AWS Cloud9 workspace terminal, set the following environment variables:

AUTH_ECS_REGION=eu-west-1 # change to the Region you used in your CloudFormation configuration
AUTH_ECS_CLUSTER=ecsauth
AUTH_ECS_VPC=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='VPC'].OutputValue" --output text)
AUTH_ECS_PUBLICSUBNET_1=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PublicSubnet1'].OutputValue" --output text)
AUTH_ECS_PUBLICSUBNET_2=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PublicSubnet2'].OutputValue" --output text)
AUTH_ECS_PRIVATESUBNET_1=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnet1'].OutputValue" --output text)
AUTH_ECS_PRIVATESUBNET_2=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnet2'].OutputValue" --output text)
AUTH_ECS_SG=$(aws cloudformation describe-stacks --stack-name ecsplatform --query "Stacks[0].Outputs[?OutputKey=='NoIngressSecurityGroup'].OutputValue" --output text)
AUTH_ECS_DOMAIN=www.example.com # change to a domain name you want to use for this solution

You will set additional variables later, but these are enough to begin building your solution.

Configure the security group rules needed for web traffic access

When users access the ALB, the security group attached to it needs to allow ingress port 443 (HTTPS) traffic. In addition, when the ALB forwards the web traffic to the Amazon ECS tasks, there needs to be an ingress rule attached to the Amazon ECS container instances that allows ingress port 80 (HTTP) traffic. You can achieve this access with the following:

aws ec2 authorize-security-group-ingress \
  --group-id $AUTH_ECS_SG \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id $AUTH_ECS_SG \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0

Create a public Application Load Balancer

As described earlier, the ALB will receive and terminate all client requests and validate them for authentication using Amazon Cognito. The ALB also handles TLS offloading; the TLS certificate for the domain name will be deployed on it. To create the ALB, run the following:

AUTH_ECS_ALBARN=$(aws elbv2 create-load-balancer --name $AUTH_ECS_CLUSTER --subnets $AUTH_ECS_PUBLICSUBNET_1 $AUTH_ECS_PUBLICSUBNET_2 --security-groups $AUTH_ECS_SG --query 'LoadBalancers[0].LoadBalancerArn' --output text)

AUTH_ECS_ALB_DNS=$(aws elbv2 describe-load-balancers --load-balancer-arns $AUTH_ECS_ALBARN --query 'LoadBalancers[0].DNSName' --output text)

Configure a Domain Name System

Clients need a domain name that points to the ALB to type into their browsers. In this post, the Domain Name System (DNS) name is registered using the DNS resolution service, Amazon Route 53. You configure your domain name (such as www.example.com), known as the record, and place it in a Route 53 hosted zone.
Configure both the Route 53 hosted zone and record

If you already have a Route 53 public hosted zone for the apex domain and this is where you plan to add the record, set its hosted zone ID (AUTH_ECS_R53HZ). For more information, see the hosted zone ID documentation. The first command below shows how to identify a hosted zone ID; substitute your apex domain name for example.com. The other commands create a record that points to the ALB.

AUTH_ECS_R53HZ=$(aws route53 list-hosted-zones-by-name --dns-name example.com --query 'HostedZones[0].Id' --output text | grep -o '/hostedzone/.*' | cut -b 13-27)

cat << EOF > dnsrecord.json
{
  "Comment": "CREATE a DNS record that points to the ALB",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "$AUTH_ECS_DOMAIN",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "$AUTH_ECS_ALB_DNS" }
        ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets --hosted-zone-id $AUTH_ECS_R53HZ --change-batch file://dnsrecord.json

Request a public certificate

To ensure that web traffic sent by clients to the ALB is encrypted, integrate an AWS Certificate Manager (ACM) certificate into the ALB's listener. This ensures that the ALB serves HTTPS traffic and that communications from clients to the ALB are encrypted. Public SSL/TLS certificates provisioned through AWS Certificate Manager are free; you pay only for the AWS resources you create to run your application.

Provision an ACM certificate

AUTH_ECS_ACM_CERT_ARN=$(aws acm request-certificate \
  --domain-name $AUTH_ECS_DOMAIN \
  --validation-method DNS \
  --region $AUTH_ECS_REGION \
  --query 'CertificateArn' \
  --output text)

When you create an SSL/TLS certificate using ACM, it will try to confirm that you're the owner of the domain name before fully provisioning the certificate for you to use. One method of confirmation is DNS validation.
With this method, ACM creates CNAME records that you must add to your Route 53 hosted zone. To add the ACM CNAME records:

cat << EOF > acm_validate_cert_dns.json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "$(aws acm describe-certificate --certificate-arn $AUTH_ECS_ACM_CERT_ARN --query 'Certificate.DomainValidationOptions[].ResourceRecord[].Name' --output text)",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "$(aws acm describe-certificate --certificate-arn $AUTH_ECS_ACM_CERT_ARN --query 'Certificate.DomainValidationOptions[].ResourceRecord[].Value' --output text)" }
        ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id $AUTH_ECS_R53HZ \
  --change-batch file://acm_validate_cert_dns.json

It takes some time for the certificate to move from 'Pending validation' to 'Issued'. Once the status shows 'Issued' in the ACM console, you can use the certificate.

Create an HTTPS listener and listener rule on the ALB

You have now created the ALB and a certificate. Configure the HTTPS listener to accept incoming HTTPS requests from clients and terminate them.
You integrate the certificate into the listener and add a default rule action on the ALB:

cat << EOF > listener-defaultaction.json
[
  {
    "Type": "redirect",
    "RedirectConfig": {
      "Protocol": "HTTPS",
      "Port": "443",
      "Host": "$AUTH_ECS_DOMAIN",
      "StatusCode": "HTTP_301"
    }
  }
]
EOF

AUTH_ECS_ALBLISTENER=$(aws elbv2 create-listener \
  --load-balancer-arn $AUTH_ECS_ALBARN \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=$AUTH_ECS_ACM_CERT_ARN \
  --ssl-policy ELBSecurityPolicy-2016-08 \
  --default-actions file://listener-defaultaction.json \
  --query 'Listeners[0].ListenerArn' \
  --output text)

Create an Amazon Cognito user pool

As previously described, Amazon Cognito provides user management, authentication, and authorization for applications where users can sign in directly or through their pre-existing social or corporate credentials. Create a user pool, which is a user directory in Amazon Cognito that lets clients access the website; clients sign in with their credentials before they get access to the site. To fully configure Amazon Cognito for integration with the ALB, create a user pool, a user pool application client, and a user pool domain. The following steps show you how to accomplish these tasks.
Create an Amazon Cognito user pool

AUTH_COGNITO_USER_POOL_ID=$(aws cognito-idp create-user-pool \
  --pool-name ${AUTH_ECS_CLUSTER}_Pool \
  --username-attributes email \
  --username-configuration=CaseSensitive=false \
  --region $AUTH_ECS_REGION \
  --query 'UserPool.Id' \
  --auto-verified-attributes email \
  --output text)

Create an Amazon Cognito user pool application client

AUTH_COGNITO_USER_POOL_CLIENT_ID=$(aws cognito-idp create-user-pool-client \
  --client-name ${AUTH_ECS_CLUSTER}_AppClient \
  --user-pool-id $AUTH_COGNITO_USER_POOL_ID \
  --generate-secret \
  --allowed-o-auth-flows "code" \
  --allowed-o-auth-scopes "openid" \
  --callback-urls "https://${AUTH_ECS_DOMAIN}/oauth2/idpresponse" \
  --supported-identity-providers "COGNITO" \
  --allowed-o-auth-flows-user-pool-client \
  --region $AUTH_ECS_REGION \
  --query 'UserPoolClient.ClientId' \
  --output text)

Create an Amazon Cognito user pool domain

AUTH_COGNITO_USER_POOL_ARN=$(aws cognito-idp describe-user-pool --user-pool-id $AUTH_COGNITO_USER_POOL_ID --query 'UserPool.Arn' --output text)

AUTH_COGNITO_DOMAIN="authecsblog$(whoami)"

aws cognito-idp create-user-pool-domain \
  --user-pool-id $AUTH_COGNITO_USER_POOL_ID \
  --region $AUTH_ECS_REGION \
  --domain $AUTH_COGNITO_DOMAIN

Create and configure a target group for the ALB

The target group is used to route requests to the Amazon ECS tasks. When the ALB receives HTTPS traffic from web clients, it routes the requests (after the client has been authenticated) to the target group for a web response. (Amazon ECS tasks are registered to the target group in the later section "Configuring the ECS service".)
Create an empty target group:

AUTH_ECS_ALBTG=$(aws elbv2 create-target-group \
  --name ${AUTH_ECS_CLUSTER}-tg \
  --protocol HTTP \
  --port 80 \
  --target-type instance \
  --vpc-id $AUTH_ECS_VPC \
  --query 'TargetGroups[0].TargetGroupArn' \
  --output text)

Host-based routing and an authentication rule on the ALB

The ALB routes requests based on the host name in the HTTP Host header. It is possible to configure multiple domains that all point to a single ALB, because the ALB can route requests based on the incoming host header and forward them to the right target group for handling. You can also configure an authentication rule, which tells the ALB what to do with incoming requests. In this post, we want requests to be authenticated first and, if successful, forwarded to the target group we created earlier.

Configure host-based routing and an authentication rule on the ALB

cat << EOF > actions-authenticate.json
[
  {
    "Type": "authenticate-cognito",
    "AuthenticateCognitoConfig": {
      "UserPoolArn": "$AUTH_COGNITO_USER_POOL_ARN",
      "UserPoolClientId": "$AUTH_COGNITO_USER_POOL_CLIENT_ID",
      "UserPoolDomain": "$AUTH_COGNITO_DOMAIN",
      "SessionCookieName": "AWSELBAuthSessionCookie",
      "Scope": "openid",
      "OnUnauthenticatedRequest": "authenticate"
    },
    "Order": 1
  },
  {
    "Type": "forward",
    "TargetGroupArn": "$AUTH_ECS_ALBTG",
    "Order": 2
  }
]
EOF

cat << EOF > conditions-hostrouting.json
[
  {
    "Field": "host-header",
    "HostHeaderConfig": {
      "Values": ["$AUTH_ECS_DOMAIN"]
    }
  }
]
EOF

aws elbv2 create-rule \
  --listener-arn $AUTH_ECS_ALBLISTENER \
  --priority 20 \
  --conditions file://conditions-hostrouting.json \
  --actions file://actions-authenticate.json

Amazon ECS configuration

The ALB and Amazon Cognito are now configured for processing incoming requests and authentication. Next you will configure Amazon ECS to orchestrate and deploy tasks that generate responses to clients' web requests. An Amazon ECS cluster is a logical grouping of tasks or services.
Amazon ECS instances are part of the Amazon ECS infrastructure registered to a cluster that the Amazon ECS tasks run on. Two t3.small Amazon ECS instances will be configured to run the tasks. Amazon ECS will run and maintain two tasks, configured from the parameters and settings contained in the task definition (a JSON text file). For more information on Amazon ECS basics, constructs, and orchestration, read the Amazon ECS components documentation.

Configure the Amazon ECS CLI

The Amazon ECS CLI is the tool you use to configure and launch the Amazon ECS components. To set it up, follow these steps: the Amazon ECS CLI needs a CLI profile, so generate an access key ID and secret access key using the AWS credentials documentation, then set the $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY variables to the values generated by AWS IAM.

Configure the Amazon ECS CLI for a CLI profile

ecs-cli configure profile --profile-name profile_name --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY

Create the Amazon ECS cluster

Create the Amazon ECS cluster, which consists of two t3.small instance types deployed in the VPC and residing in the two private subnets from earlier. For the instance role, use the AWS IAM role created when configuring the AWS Cloud9 environment workspace (ecsworkshop-admin). The first command creates a key pair and the second command configures the Amazon ECS cluster. The key pair is useful if you need to SSH into the Amazon ECS instances for troubleshooting.
Configure the Amazon EC2 key pair and bring up the ECS cluster

aws ec2 create-key-pair \
  --key-name $AUTH_ECS_CLUSTER \
  --key-type rsa \
  --query "KeyMaterial" \
  --output text > $AUTH_ECS_CLUSTER.pem

ecs-cli up --instance-role ecsworkshop-admin --cluster $AUTH_ECS_CLUSTER --vpc $AUTH_ECS_VPC --subnets $AUTH_ECS_PRIVATESUBNET_1,$AUTH_ECS_PRIVATESUBNET_2 --port 443 --region $AUTH_ECS_REGION --keypair $AUTH_ECS_CLUSTER --size 2 --instance-type t3.small --security-group $AUTH_ECS_SG --launch-type EC2

Cluster creation takes some time; when fully deployed, the AWS CloudFormation stack will output 'Cluster creation succeeded'. Configure the AWS Region and ECS cluster name using the configure command:

ecs-cli configure --region $AUTH_ECS_REGION --cluster $AUTH_ECS_CLUSTER --default-launch-type EC2 --config-name $AUTH_ECS_CLUSTER

The EC2 launch type, with Amazon ECS instances, is created and launched in your VPC. If you prefer not to manage the underlying instances hosting the tasks, the Fargate launch type is the option to use; Fargate is the serverless way to host your Amazon ECS workloads.

Create the ECS service

The ecs-cli compose service up command creates the Amazon ECS service and tasks from a Docker Compose file (ecsauth-compose.yml) that you create; it also creates a task definition. The service is configured to use the ALB that you created earlier. The Docker Compose file contains the configuration settings the Amazon ECS service is spun up with: the Docker image to pull and use, the ports to expose on the Amazon ECS instance, and the port forwarding to the Amazon ECS task. In this post, we configured it to use the AWS-published sample demo PHP application verified and published on Docker Hub. Transmission Control Protocol (TCP) port 80 will be opened on the Amazon ECS instance, and traffic received on this port will be forwarded to the task on TCP port 80.
Configuring the ECS service

cat << EOF > ecsauth-compose.yml
version: '2'
services:
  web:
    image: amazon/amazon-ecs-sample
    ports:
      - "80:80"
EOF

ecs-cli compose --project-name ${AUTH_ECS_CLUSTER}service --file ecsauth-compose.yml service up --target-group-arn $AUTH_ECS_ALBTG --container-name web --container-port 80 --role ecsServiceRole

ecs-cli compose --project-name ${AUTH_ECS_CLUSTER}service --file ecsauth-compose.yml service scale 2

Testing the solution end to end

We now have the working components of the solution. To test it end to end, navigate in your browser to the HTTPS site of the domain name you used (such as https://www.example.com).

echo $AUTH_ECS_DOMAIN

The sequence of events that follows is as described in the earlier section "Flow of how Application Load Balancer authenticates users using Amazon Cognito". After the ALB redirects you to the configured Amazon Cognito domain's login page (a UI hosted by Amazon Cognito), enter your credentials. Since this is the first time the page is accessed, sign up as a new user; Amazon Cognito stores this information in the user pool, and if you navigate to the Amazon Cognito user pool console afterwards, you'll see this new user. After you sign in, the ALB redirects you to the landing page of the sample demonstration PHP application, which is shown in the diagram below.

User claims encoding and security

In this post, we configured the target group to use HTTP because the ALB has handled the TLS offloading. However, for enhanced security, you should use the security group to restrict the traffic reaching the Amazon ECS instances to only the load balancer. After the load balancer authenticates a user successfully, it passes the user's claims to the target. If you inspect traffic forwarded to the sample demonstration application through custom HTTP header logging in your access logs, you can see three HTTP headers.
These headers contain information about the user claims and are signed by the ALB with a signature and algorithm that you can verify. The three HTTP headers are:

x-amzn-oidc-accesstoken: The access token from the token endpoint, in plain text.
x-amzn-oidc-identity: The subject field (sub) from the user info endpoint, in plain text.
x-amzn-oidc-data: The user claims, in JSON Web Token (JWT) format.

From the information encoded in x-amzn-oidc-data, it is possible to extract information about the user. The following is an example Python 3.x snippet that decodes the payload portion of x-amzn-oidc-data to reveal the user claims passed by Amazon Cognito.

import jwt
import requests
import base64
import json

# 'headers' is your web framework's request-header mapping, and
# 'region' is the Region of your load balancer, e.g. 'eu-west-1'

# Step 1: Get the key id from the JWT headers (the kid field)
encoded_jwt = headers['x-amzn-oidc-data']
jwt_headers = encoded_jwt.split('.')[0]
decoded_jwt_headers = base64.b64decode(jwt_headers)
decoded_jwt_headers = decoded_jwt_headers.decode("utf-8")
decoded_json = json.loads(decoded_jwt_headers)
kid = decoded_json['kid']

# Step 2: Get the public key from the regional endpoint
url = 'https://public-keys.auth.elb.' + region + '.amazonaws.com/' + kid
req = requests.get(url)
pub_key = req.text

# Step 3: Get the payload
payload = jwt.decode(encoded_jwt, pub_key, algorithms=['ES256'])

Cleanup

Now that you are done building and testing the solution, you can clean up all the resources by running the following commands:

aws elbv2 delete-load-balancer \
  --load-balancer-arn $AUTH_ECS_ALBARN

aws ecs delete-service --cluster $AUTH_ECS_CLUSTER --service ${AUTH_ECS_CLUSTER}service --force

containerinstance1=$(aws ecs list-container-instances --cluster $AUTH_ECS_CLUSTER --query 'containerInstanceArns[0]' --output text)

containerinstance2=$(aws ecs list-container-instances --cluster $AUTH_ECS_CLUSTER --query 'containerInstanceArns[1]' --output text)

aws ecs deregister-container-instance \
  --cluster $AUTH_ECS_CLUSTER \
  --container-instance $containerinstance1 \
  --force

aws ecs deregister-container-instance \
  --cluster $AUTH_ECS_CLUSTER \
  --container-instance $containerinstance2 \
  --force

aws ecs delete-cluster --cluster $AUTH_ECS_CLUSTER

aws ecs deregister-task-definition --task-definition ${AUTH_ECS_CLUSTER}service:1

aws acm delete-certificate --certificate-arn $AUTH_ECS_ACM_CERT_ARN

aws route53 delete-hosted-zone --id $AUTH_ECS_R53HZ

aws cognito-idp delete-user-pool-domain \
  --user-pool-id $AUTH_COGNITO_USER_POOL_ID \
  --domain $AUTH_COGNITO_DOMAIN

aws cognito-idp delete-user-pool --user-pool-id $AUTH_COGNITO_USER_POOL_ID

aws elbv2 delete-target-group \
  --target-group-arn $AUTH_ECS_ALBTG

aws cloudformation delete-stack \
  --stack-name amazon-ecs-cli-setup-$AUTH_ECS_CLUSTER

aws cloudformation delete-stack \
  --stack-name ecsplatform

aws ec2 delete-key-pair --key-name $AUTH_ECS_CLUSTER

Conclusion

In this post, we showed you how to authenticate users accessing your containerized application without writing authentication code, using the ALB's built-in integration with Amazon Cognito.
Maintaining and securing user management and authentication is offloaded from the application, which allows you to focus on building core business logic into the application. You don’t need to worry about platform tasks for managing, scheduling, and scaling containers for the web traffic because Amazon ECS handles all of that. View the full article
  15. Amazon Cognito now enables application developers to propagate the IP address as part of the caller context data in unauthenticated calls to Amazon Cognito. When Amazon Cognito's Advanced Security Features (ASF) are enabled, this feature improves risk calculation and the resulting authentication decisions performed in flows such as sign-up, account confirmation, and password change. Prior to this change, the end user's IP address was not available in unauthenticated calls if these calls were initiated behind a proxy. With this new feature, developers who build identity micro-services, authentication modules, or identity proxies can leverage APIs to gain visibility into the client's IP address and utilize it in other security applications to better understand the risk of a particular user activity. View the full article
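A sketch of how a proxy might forward the original client IP on an unauthenticated call. The client ID, credentials, and IP are placeholders, and the --user-context-data shape is an assumption based on the CLI's UserContextData support (requires an AWS account):

```shell
# Sketch: an identity proxy passing the end user's real IP to Cognito.
aws cognito-idp initiate-auth \
  --auth-flow USER_PASSWORD_AUTH \
  --client-id 1example23456789 \
  --auth-parameters USERNAME=user@example.com,PASSWORD=Example123! \
  --user-context-data IpAddress=203.0.113.10
```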
  16. By default, Amazon Cognito refresh tokens expire 30 days after a user signs in to a user pool. When you create an app, you can set the app's refresh token expiration to any value between 60 minutes and 10 years. Amazon Cognito now enables you to revoke refresh tokens in real time so that those refresh tokens cannot be used to generate additional access tokens. View the full article
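Revocation is a single API call. A sketch; the token and client ID are placeholders, and the command requires an AWS account:

```shell
# Sketch: revoke a refresh token so it can no longer mint new access tokens.
# Add --client-secret if your app client has one.
aws cognito-idp revoke-token \
  --token "$REFRESH_TOKEN" \
  --client-id 1example23456789
```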
  17. Amazon Cognito now supports SMS Sandbox in Amazon SNS. Amazon Cognito makes it easy to add authentication, authorization, and user management to your web and mobile apps. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect. View the full article
  18. Amazon Cognito Identity Pools now enables you to use attributes from social and corporate identity providers to make access control decisions and simplify permissions management to AWS resources. View the full article
  19. Amazon Cognito User Pools now enables you to manage quotas for commonly used operation categories, such as user creation and user authentication, as well as view quotas and usage levels in the AWS Service Quotas dashboard or in CloudWatch metrics. This update makes it simple to view your quota usage and to request rate increases for multiple APIs in the same category. For example, you can now see the aggregated limit for a single “UserCreation” category, which includes SignUp, AdminCreateUser, ConfirmSignUp, and AdminConfirmSignUp. You can check whether the existing quotas meet your operational needs in the Service Quotas console or CloudWatch metrics, and refer to the documentation to learn how the API operations are mapped to the new categories. View the full article
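The same quotas can be listed from the CLI. A sketch, assuming cognito-idp as the Service Quotas service code (requires an AWS account):

```shell
# Sketch: list Cognito user pool quota names and values.
aws service-quotas list-service-quotas \
  --service-code cognito-idp \
  --query 'Quotas[].{Name:QuotaName,Value:Value}' \
  --output table
```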
  20. Amplify Admin UI now supports importing existing Amazon Cognito User Pools and Identity Pools. This means you can link your Cognito User Pool and Identity Pool resources to your Amplify app to take advantage of authorization scenarios for your data model, and manage users and groups directly from the Admin UI. View the full article