Search the Community
Showing results for tags 'auth'.
-
With Kubernetes 1.30, we (SIG Auth) are moving Structured Authentication Configuration to beta. Today's article is about authentication: finding out who's performing a task, and checking that they are who they say they are. Check back in tomorrow to find out what's new in Kubernetes v1.30 around authorization (deciding what someone can and can't access).

Motivation

Kubernetes has had a long-standing need for a more flexible and extensible authentication system. The current system, while powerful, has some limitations that make it difficult to use in certain scenarios. For example, it is not possible to use multiple authenticators of the same type (e.g., multiple JWT authenticators) or to change the configuration without restarting the API server. The Structured Authentication Configuration feature is the first step towards addressing these limitations and providing a more flexible and extensible way to configure authentication in Kubernetes.

What is structured authentication configuration?

Kubernetes v1.30 builds on the experimental support for configuring authentication from a file, which was added as alpha in Kubernetes v1.29. At this beta stage, Kubernetes only supports configuring JWT authenticators, which serve as the next iteration of the existing OIDC authenticator. A JWT authenticator authenticates Kubernetes users using JWT-compliant tokens: it attempts to parse a raw ID token and verify that it was signed by the configured issuer. The Kubernetes project added configuration from a file so that it can provide more flexibility than using command line options (which continue to work, and are still supported). Supporting a configuration file also makes it easy to deliver further improvements in upcoming releases.
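For intuition about what "parsing a raw ID token" involves: a JWT is three base64url-encoded segments (header, payload, signature) joined by dots, and its claims can be read with nothing but the standard library. The sketch below is illustrative only (the issuer URL and claims are made up); it deliberately performs no signature verification — checking the signature against the issuer's public keys is exactly what the JWT authenticator adds on top of this step:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload (claims) of a JWT without verifying its signature."""
    payload_b64 = token.split(".")[1]
    # base64url data must be padded to a multiple of 4 characters before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token just to demonstrate the decoding step.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(
    json.dumps({"iss": "https://issuer1.example.com", "sub": "jane"}).encode()
).rstrip(b"=").decode()
token = f"{header}.{claims}."

print(decode_jwt_claims(token))  # {'iss': 'https://issuer1.example.com', 'sub': 'jane'}
```

Because the payload is merely encoded, not encrypted, anyone can read the claims; only the signature check establishes that the configured issuer actually minted the token.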
Benefits of structured authentication configuration

Here's why using a configuration file to configure cluster authentication is a benefit:

- Multiple JWT authenticators: You can configure multiple JWT authenticators simultaneously. This allows you to use multiple identity providers (e.g., Okta, Keycloak, GitLab) without needing an intermediary like Dex that handles multiplexing between multiple identity providers.
- Dynamic configuration: You can change the configuration without restarting the API server. This allows you to add, remove, or modify authenticators without disrupting the API server.
- Any JWT-compliant token: You can use any JWT-compliant token for authentication. This allows you to use tokens from any identity provider that supports JWT. The minimum valid JWT payload must contain the claims documented on the structured authentication configuration page in the Kubernetes documentation.
- CEL (Common Expression Language) support: You can use CEL to determine whether the token's claims match the user's attributes in Kubernetes (e.g., username, group). This allows you to use complex logic to determine whether a token is valid.
- Multiple audiences: You can configure multiple audiences for a single authenticator. This allows you to use the same authenticator for multiple audiences, such as using a different OAuth client for kubectl and dashboard.
- Identity providers that don't support OpenID Connect discovery: You can use identity providers that don't support OpenID Connect discovery. The only requirement is to host the discovery document at a different location than the issuer (such as locally in the cluster) and specify the issuer.discoveryURL in the configuration file.

How to use Structured Authentication Configuration

To use structured authentication configuration, you specify the path to the authentication configuration using the --authentication-config command line argument in the API server.
The configuration file is a YAML file that specifies the authenticators and their configuration. Here is an example configuration file that configures two JWT authenticators:

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
# Someone with a valid token from either of these issuers could authenticate
# against this cluster.
jwt:
- issuer:
    url: https://issuer1.example.com
    audiences:
    - audience1
    - audience2
    audienceMatchPolicy: MatchAny
  claimValidationRules:
  - expression: 'claims.hd == "example.com"'
    message: "the hosted domain name must be example.com"
  claimMappings:
    username:
      expression: 'claims.username'
    groups:
      expression: 'claims.groups'
    uid:
      expression: 'claims.uid'
    extra:
    - key: 'example.com/tenant'
      expression: 'claims.tenant'
  userValidationRules:
  - expression: "!user.username.startsWith('system:')"
    message: "username cannot use reserved system: prefix"
# second authenticator that exposes the discovery document at a different location
# than the issuer
- issuer:
    url: https://issuer2.example.com
    discoveryURL: https://discovery.example.com/.well-known/openid-configuration
    audiences:
    - audience3
    - audience4
    audienceMatchPolicy: MatchAny
  claimValidationRules:
  - expression: 'claims.hd == "example.com"'
    message: "the hosted domain name must be example.com"
  claimMappings:
    username:
      expression: 'claims.username'
    groups:
      expression: 'claims.groups'
    uid:
      expression: 'claims.uid'
    extra:
    - key: 'example.com/tenant'
      expression: 'claims.tenant'
  userValidationRules:
  - expression: "!user.username.startsWith('system:')"
    message: "username cannot use reserved system: prefix"

Migration from command line arguments to configuration file

The Structured Authentication Configuration feature is designed to be backwards-compatible with the existing approach, based on command line options, for configuring the JWT authenticator. This means that you can continue to use the existing command-line options to configure the JWT authenticator.
However, we (Kubernetes SIG Auth) recommend migrating to the new configuration file-based approach, as it provides more flexibility and extensibility.

Note: If you specify --authentication-config along with any of the --oidc-* command line arguments, this is a misconfiguration. In this situation, the API server reports an error and then immediately exits. If you want to switch to using structured authentication configuration, you have to remove the --oidc-* command line arguments and use the configuration file instead.

Here is an example of how to migrate from the command-line flags to the configuration file:

Command-line arguments

--oidc-issuer-url=https://issuer.example.com
--oidc-client-id=example-client-id
--oidc-username-claim=username
--oidc-groups-claim=groups
--oidc-username-prefix=oidc:
--oidc-groups-prefix=oidc:
--oidc-required-claim="hd=example.com"
--oidc-required-claim="admin=true"
--oidc-ca-file=/path/to/ca.pem

There is no equivalent in the configuration file for the --oidc-signing-algs flag. For Kubernetes v1.30, the authenticator supports all the asymmetric algorithms listed in oidc.go.

Configuration file

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://issuer.example.com
    audiences:
    - example-client-id
    certificateAuthority: <value is the content of file /path/to/ca.pem>
  claimMappings:
    username:
      claim: username
      prefix: "oidc:"
    groups:
      claim: groups
      prefix: "oidc:"
  claimValidationRules:
  - claim: hd
    requiredValue: "example.com"
  - claim: admin
    requiredValue: "true"

What's next?

For Kubernetes v1.31, we expect the feature to stay in beta while we get more feedback. In the coming releases, we want to investigate:

- Making distributed claims work via CEL expressions.
- Egress selector configuration support for calls to issuer.url and issuer.discoveryURL.

You can learn more about this feature on the structured authentication configuration page in the Kubernetes documentation.
You can also follow along with KEP-3331 to track progress across the coming Kubernetes releases.

Try it out

In this post, I have covered the benefits that the Structured Authentication Configuration feature brings in Kubernetes v1.30. To use this feature, you must specify the path to the authentication configuration using the --authentication-config command line argument. From Kubernetes v1.30, the feature is in beta and enabled by default. If you want to keep using command line arguments instead of a configuration file, those will continue to work as-is.

We would love to hear your feedback on this feature. Please reach out to us in the #sig-auth-authenticators-dev channel on Kubernetes Slack (for an invitation, visit https://slack.k8s.io/).

How to get involved

If you are interested in getting involved in the development of this feature, sharing feedback, or participating in any other ongoing SIG Auth projects, please reach out in the #sig-auth channel on Kubernetes Slack. You are also welcome to join the SIG Auth meetings, held every other Wednesday.
-
Navigating the shift to passwordless authentication via digital certificates demands a visionary approach that considers the immediate benefits while strategically planning for future scalability and adaptability. The post Mapping Your Path to Passwordless appeared first on Security Boulevard.
-
In phpMyAdmin, the authentication type determines how users are authenticated when they try to access the phpMyAdmin interface. This setting is configured in the config.inc.php file of your phpMyAdmin installation, using the $cfg['Servers'][$i]['auth_type'] directive. There are several authentication types available in phpMyAdmin:

- config: This is the simplest method, where the username and password are stored in the config.inc.php file. This method is not recommended for servers accessible from the Internet due to security concerns, as it does not prompt the user for a username or password.
- cookie: This method uses cookies for authentication. The user is presented with a login screen to enter their username and password. This method is more secure than config and is commonly used for Internet-facing servers.
- http: With this method, HTTP authentication is used. The web server manages the login process, and the login dialogue is presented by the browser itself. This method also allows for the use of the web server's authentication modules.
- signon: This advanced authentication method allows for single sign-on capabilities. It uses a session-based mechanism and requires additional scripting to integrate phpMyAdmin logins with other application logins.

Choosing the right authentication type depends on your specific needs, including the level of security required and whether the phpMyAdmin installation is exposed to the Internet. For most users, cookie authentication provides a good balance between convenience and security.

Reference: https://howtolamp.com/lamp/phpmyadmin/4.2/authentication-modes/

The post What are the Authentication type in phpmyadmin which for config.inc.php appeared first on DevOpsSchool.com.
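As a concrete illustration, a minimal config.inc.php fragment selecting cookie authentication could look like the following. The server index, host, and secret value are placeholders; note that cookie authentication also requires $cfg['blowfish_secret'] to be set, since phpMyAdmin uses it to encrypt the login cookie:

```php
<?php
// config.inc.php (fragment) -- hypothetical minimal example
$i = 1;                                          // first server definition
$cfg['Servers'][$i]['auth_type'] = 'cookie';     // present a login screen, store session in a cookie
$cfg['Servers'][$i]['host']      = 'localhost';  // MySQL/MariaDB server to connect to
$cfg['blowfish_secret'] = 'replace-with-a-32-char-random-secret'; // required for cookie auth
```

Switching to 'http' or 'signon' is a matter of changing the auth_type value and supplying the extra settings those modes need.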
-
Millions of secrets and authentication keys were leaked on GitHub in 2023, with the majority of developers not caring to revoke them even after being notified of the mishap, new research has claimed. A report from GitGuardian, a project that helps developers secure their software development with automated secrets detection and remediation, claims that in 2023, GitHub users accidentally exposed 12.8 million secrets in more than 3 million public repositories. These secrets include account passwords, API keys, TLS/SSL certificates, encryption keys, cloud service credentials, OAuth tokens, and similar.

Slow response

During the development stage, many IT pros would hardcode different authentication secrets in order to make their lives easier. However, they often forget to remove the secrets before publishing the code on GitHub. Thus, should any malicious actors discover these secrets, they would get easy access to private resources and services, which can result in data breaches and similar incidents. India was the country from which most leaks originated, followed by the United States, Brazil, China, France, and Canada. The vast majority of the leaks came from the IT industry (65.9%), followed by education (20.1%). The remaining 14% was split between science, retail, manufacturing, finance, public administration, healthcare, entertainment, and transport.

Making a mistake and hardcoding secrets can happen to anyone - but what happens after is perhaps even more worrying. Just 2.6% of the secrets are revoked within the hour - practically everything else (91.6%) remains valid even after five days, when GitGuardian stops tracking their status. To make matters worse, the project sent 1.8 million emails to different developers and companies, warning them of its findings, and just 1.8% responded by removing the secrets from the code. Riot Games, GitHub, OpenAI, and AWS were listed as companies with the best response mechanisms.
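Automated secrets detection of the kind GitGuardian performs is, at its core, pattern matching over source text. As a greatly simplified illustration (real scanners combine hundreds of provider-specific patterns with entropy and context checks; the two regexes here are just examples), a minimal scanner might look like:

```python
import re

# Hypothetical, simplified detection patterns. Real tools use far more
# patterns plus entropy analysis to cut false positives.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"""(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['"]([A-Za-z0-9/+_-]{16,})['"]"""
    ),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the given source text."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # oops: hardcoded credential'
print(scan(sample))  # [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

Running a check like this in CI before code reaches a public repository is far cheaper than revoking and rotating a leaked credential afterwards.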
Via BleepingComputer
-
By leveraging the wide array of public images available on Docker Hub, developers can accelerate development workflows, enhance productivity, and, ultimately, ship scalable applications that run like clockwork. When building with public content, acknowledging the potential operational risks associated with using that content without proper authentication is crucial. In this post, we will describe best practices for mitigating these risks and ensuring the security and reliability of your containers.

Import public content locally

There are several advantages to importing public content locally. Doing so improves the availability and reliability of your public content pipeline and protects you from failed CI builds. By importing your public content, you can easily validate, verify, and deploy images to help run your business more reliably. For more information on this best practice, check out the Open Container Initiative's guide on Consuming Public Content.

Configure Artifact Cache to consume public content

Another best practice is to configure Artifact Cache to consume public content. Azure Container Registry's (ACR) Artifact Cache feature allows you to cache your container artifacts in your own Azure Container Registry, even for private networks. This approach limits the impact of rate limits and dramatically increases pull reliability when combined with geo-replicated ACR, allowing you to pull artifacts from the region closest to your Azure resource. Additionally, ACR offers various security features, such as private networks, firewall configuration, service principals, and more, which can help you secure your container workloads. For complete information on using public content with ACR Artifact Cache, refer to the Artifact Cache technical documentation.

Authenticate pulls with public registries

We recommend authenticating your pull requests to Docker Hub using subscription credentials.
Docker Hub offers developers the ability to authenticate when building with public library content. Authenticated users also have access to pull content directly from private repositories. For more information, visit the Docker subscriptions page. Microsoft Artifact Cache also supports authenticating with other public registries, providing an additional layer of security for your container workloads.

Following these best practices when using public content from Docker Hub can help mitigate security and reliability risks in your development and operational cycles. By importing public content locally, configuring Artifact Cache, and setting up preferred authentication methods, you can ensure your container workloads are secure and reliable.

Learn more about securing containers:

- Try Docker Scout to assess your images for security risks.
- Looking to get up and running? Use our Quickstart guide.
- Have questions? The Docker community is here to help.
- Subscribe to the Docker Newsletter to stay updated with Docker news and announcements.

Additional resources for improving container security for Microsoft and Docker customers:

- Visit Microsoft Learn.
- Read the introduction to Microsoft's framework for securing containers.
- Learn how to manage public content with Azure Container Registry.
-
HashiCorp Nomad supports JWT authentication methods, which allow users to authenticate into Nomad using tokens that can be verified via public keys. Primarily, JWT auth methods are used for machine-to-machine authentication, while OIDC auth methods are used for human-to-machine authentication. This post explains how JWT authentication works and how to set it up in Nomad using a custom GitHub Action. The GitHub Action will use built-in GitHub identity tokens to obtain a short-lived Nomad token with limited permissions.

How JWT-based authentication works

The first step in JWT-based authentication is the JSON Web Token (JWT) itself. JWTs are encoded pieces of JSON that contain information about the identity of some workload or machine. JWT is a generic format, but for authentication, JWTs will sometimes conform to the more specific OIDC spec and include keys such as "sub", "iss", or "aud". This example JWT decodes to the following JSON:

{
  "jti": "eba60bec-a4e4-4787-9b16-20bed89d7092",
  "sub": "repo:mikenomitch/nomad-gha-jwt-auth:ref:refs/heads/main:repository_owner:mikenomitch:job_workflow_ref:mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main:repository_id:621402301",
  "aud": "https://github.com/mikenomitch",
  "ref": "refs/heads/main",
  "sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "repository": "mikenomitch/nomad-gha-jwt-auth",
  "repository_owner": "mikenomitch",
  "repository_owner_id": "2732204",
  "run_id": "5173139311",
  "run_number": "31",
  "run_attempt": "1",
  "repository_visibility": "public",
  "repository_id": "621402301",
  "actor_id": "2732204",
  "actor": "mikenomitch",
  "workflow": "Nomad GHA Demo",
  "head_ref": "",
  "base_ref": "",
  "event_name": "push",
  "ref_type": "branch",
  "workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "job_workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "job_workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "runner_environment": "github-hosted",
  "iss": "https://token.actions.githubusercontent.com",
  "nbf": 1685937407,
  "exp": 1685938307,
  "iat": 1685938007
}

(Note: If you ever want to decode or encode a JWT, jwt.io is a good tool.)

This specific JWT contains information about a GitHub workflow, including an owner, a GitHub Action name, a repository, and a branch. That is because it was issued by GitHub and is an identity token, meaning it is supposed to be used to verify the identity of this workload. Each run in a GitHub Action can be provisioned with one of these JWTs. (More on how they can be used later in this blog post.) Importantly, aside from the information in the JSON, JWTs can be signed with a private key and verified with a public key. It is worth noting that while they are signed, their contents are still decodable by anybody, just not verified. The public keys for JWTs can sometimes be found at idiomatically well-known URLs, such as JSON Web Key Set (JWKS) URLs. For example, these GitHub public keys can be used to verify their identity tokens.

JWT authentication in Nomad

Nomad can use external JWT identity tokens to issue its own Nomad ACL tokens with the JWT auth method. In order to set this up, Nomad needs:

- Roles and/or policies that define access based on identity
- An auth method that tells Nomad to trust JWTs from a specific source
- A binding rule that tells Nomad how to map information from that source into Nomad concepts, like roles and policies

Here's how to set up authentication in Nomad to achieve the following rule: I want any repo using an action called Nomad JWT Auth to get a Nomad ACL token that grants the action permissions for all the Nomad policies assigned to a specific role for their GitHub organization.
Tokens should be valid for only one hour, and the action should be valid only for the main branch. That may seem like a lot, but with Nomad JWT authentication, it's actually fairly simple. In older versions of Nomad, complex authentication like this was impossible. This forced administrators into using long-lived tokens with very high levels of permissions. If a token was leaked, admins would have to manually rotate all of their tokens stored in external stores. This made Nomad less safe and harder to manage. Now, tokens can be short-lived, and after a one-time setup with identity-based rules, users don't have to worry about managing Nomad tokens for external applications.

Setting up JWT authentication

To set up the authentication, start by creating a simple policy that has write access to the namespace "app-dev" and another policy that has read access to the default namespace.

Create a namespace called app-dev:

nomad namespace apply "app-dev"

Write a policy file called app-developer.policy.hcl:

namespace "app-dev" {
  policy = "write"
}

Then create it with this CLI command:

nomad acl policy apply -description "Access to app-dev namespace" app-developer app-developer.policy.hcl

Write a policy file called default-read.policy.hcl:

namespace "default" {
  policy = "read"
}

Then create it in the CLI:

nomad acl policy apply -description "Read access to default namespace" default-read default-read.policy.hcl

Next, create roles that have access to these policies. Often these roles are team-based, such as "engineering" or "ops", but in this case, create a role with the name "org-" plus our GitHub organization's name: mikenomitch. Repositories in this organization should be able to deploy to the "app-dev" namespace, and we should be able to set up a GitHub Action to deploy them on merge.
Give this role access to the two new policies:

nomad acl role create -name="org-mikenomitch" -policy=app-developer -policy=default-read

Now, create a file defining an auth method for GitHub in auth-method.json:

{
  "JWKSURL": "https://token.actions.githubusercontent.com/.well-known/jwks",
  "ExpirationLeeway": "1h",
  "ClockSkewLeeway": "1h",
  "ClaimMappings": {
    "repository_owner": "repo_owner",
    "repository_id": "repo_id",
    "workflow": "workflow",
    "ref": "ref"
  }
}

Then create it with the CLI:

nomad acl auth-method create -name="github" -type="JWT" -max-token-ttl="1h" -token-locality=global -config "@auth-method.json"

This tells Nomad to expect JWTs from GitHub, to verify them using the public key found at JWKSURL, and to map key-value pairs found in the JWT to new names. This allows binding rules to be created using these values.

A binding rule sets up the complex auth logic requirements stated in a block quote earlier in this post:

nomad acl binding-rule create \
  -description 'repo name mapped to role name, on main branch, for "Nomad JWT Auth" workflow' \
  -auth-method 'github' \
  -bind-type 'role' \
  -bind-name 'org-${value.repo_owner}' \
  -selector 'value.workflow == "Nomad JWT Auth" and value.ref == "refs/heads/main"'

The selector field tells Nomad to match only JWTs with certain values in the ref and workflow fields. The bind-type and bind-name fields tell Nomad to map JWTs that match this selector to specific roles. In this case, they refer to roles with a name matching the GitHub organization name. If you wanted more granular permissions, you could match role names to repository IDs using the repo_id field.

So, the JWTs for repositories in the mikenomitch organization are given an ACL token with the role org-mikenomitch, which in turn grants access to the app-developer and default-read policies.

Nomad auth with a custom GitHub Action

Now you're ready to use a custom GitHub Action to authenticate into Nomad.
This will expose a short-lived Nomad token as an output, which can be used by another action that uses simple bash to deploy any files in the ./nomad-jobs directory to Nomad. The code for this action is very simple: it just calls Nomad's /v1/acl/login endpoint, specifying the GitHub auth method and passing in the GitHub Action's JWT as the login token. (See the code.)

To use this action, just push to GitHub with the following file at .github/workflows/github-actions-demo.yml:

name: Nomad JWT Auth
on:
  push:
    branches:
      - main
      - master
env:
  PRODUCT_VERSION: "1.7.2"
  NOMAD_ADDR: "https://my-nomad-addr:4646"
jobs:
  Nomad-JWT-Auth:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup `nomad`
        uses: lucasmelin/setup-nomad@v1
        id: setup
        with:
          version: ${{ env.PRODUCT_VERSION }}
      - name: Auth Into Nomad
        id: nomad-jwt-auth
        uses: mikenomitch/nomad-jwt-auth@v0.1.0
        with:
          url: ${{ env.NOMAD_ADDR }}
          caCertificate: ${{ secrets.NOMAD_CA_CERT }}
        continue-on-error: true
      - name: Deploy Jobs
        run: for file in ./nomad-jobs/*; do NOMAD_ADDR="${{ env.NOMAD_ADDR }}" NOMAD_TOKEN="${{ steps.nomad-jwt-auth.outputs.nomadToken }}" nomad run -detach "$file"; done

Now you have a simple CI/CD flow on GitHub Actions set up. This does not require manually managing tokens and is secured via identity-based rules and auto-expiring tokens.

Possibilities for JWT authentication in Nomad

With the JWT auth method, you can enable efficient workflows for tools like GitHub Actions, simplifying management of Nomad tokens for external applications. Machine-to-machine authentication is an important function in cloud infrastructure, yet implementing it correctly requires understanding several standards and protocols. Nomad's introduction of JWT authentication methods provides the necessary building blocks to make setting up machine-to-machine auth simple.
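The login exchange the custom action performs against Nomad's ACL API can be sketched in a few lines of Python. This is a hypothetical stand-in for the action's JavaScript; the request and response shapes follow Nomad's /v1/acl/login endpoint, with TLS setup and error handling omitted:

```python
import json
import urllib.request

def nomad_jwt_login(nomad_addr: str, auth_method: str, jwt_token: str) -> str:
    """Exchange an external JWT for a short-lived Nomad ACL token via /v1/acl/login."""
    body = json.dumps({"AuthMethodName": auth_method, "LoginToken": jwt_token}).encode()
    req = urllib.request.Request(
        f"{nomad_addr}/v1/acl/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The response describes the issued ACL token; SecretID is the value
        # later exported as NOMAD_TOKEN for `nomad run`.
        return json.load(resp)["SecretID"]
```

In the workflow above, the returned SecretID is what gets exposed as the nomadToken output and passed to subsequent `nomad run` invocations.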
This auth method extends the authentication methods made available in Nomad 1.5, which introduced SSO and OIDC support. As organizations move towards zero trust security, Nomad users now have more choices when implementing access to their critical infrastructure. To learn more about how HashiCorp provides a solid foundation for companies to safely migrate and secure their infrastructure, applications, and data as they move to a multi-cloud world, visit our zero trust security page. To try the feature described in this post, download the latest version of HashiCorp Nomad.
-
The rise of open source software has led to more collaborative development, but it's not without challenges. While public container images offer convenience and access to a vast library of prebuilt components, their lack of control and potential vulnerabilities can introduce security and reliability risks into your CI/CD pipeline. This blog post delves into best practices that your teams can implement to mitigate these risks and maintain a secure and reliable software delivery process. By following these guidelines, you can leverage the benefits of open source software while safeguarding your development workflow.

1. Store local copies of public containers

To minimize risks and improve security and reliability, consider storing local copies of public container images whenever feasible. The Open Containers Initiative offers guidelines on consuming public content, which you can access for further information.

2. Use authentication when accessing Docker Hub

For secure and reliable CI/CD pipelines, authenticating with Docker Hub instead of using anonymous access is recommended. Anonymous access exposes you to security vulnerabilities and increases the risk of hitting rate limits, hindering your pipeline's performance. The specific authentication method depends on your CI/CD infrastructure and Google Cloud services used. Fortunately, several options are available to ensure secure and efficient interactions with Docker Hub.

3. Use Artifact Registry remote repositories

Instead of directly referencing Docker Hub repositories in your build processes, opt for Artifact Registry remote repositories for secure and efficient access. This approach leverages Docker Hub access tokens, minimizing the risk of vulnerabilities and facilitating a seamless workflow. Detailed instructions on configuring this setup can be found in the following Artifact Registry documentation: Configure remote repository authentication to Docker Hub.

4.
Use Google Cloud Build to interact with Docker images

Google Cloud Build offers robust authentication mechanisms to pull Docker Hub images seamlessly within your build steps. These mechanisms are essential if your container images rely on external dependencies hosted on Docker Hub. By implementing these features, you can ensure secure and reliable access to the necessary resources while streamlining your CI/CD pipeline.

Implementing the best practices outlined above offers significant benefits for your CI/CD pipelines. You'll achieve a stronger security posture and reduced reliability risks, ensuring smooth and efficient software delivery. Additionally, establishing robust authentication controls for your development environments prevents potential roadblocks that could arise later in production. As a result, you can be confident that your processes comply with or surpass corporate security standards, further solidifying your development foundation.

Learn more

Visit the following product pages to learn more about the features that assist you in implementing these steps:

- Take control of your supply chain with Artifact Registry remote and virtual repositories
- Analyze images to prioritize and remediate software supply chain issues with Docker Scout
- Artifact Registry Product Page
- Google Cloud Build Product Page
-
-
If your decentralized application (dApp) must interact directly with AWS services like Amazon S3 or Amazon API Gateway, you must authorize your users by granting them temporary AWS credentials. This solution uses Amazon Cognito in combination with your users' digital wallet to obtain valid Amazon Cognito identities and temporary AWS credentials for your users. It also demonstrates how to use Amazon API Gateway to secure and proxy API calls to third-party Web3 APIs. In this blog, you will build a fully serverless decentralized application (dApp) called "NFT Gallery". This dApp permits users to look up their own non-fungible tokens (NFTs) or any other NFT collections on the Ethereum blockchain using the HTTP APIs of one of two Web3 providers: Alchemy or Moralis. These APIs help integrate Web3 components in any web application without blockchain technical knowledge or access.

Solution overview

The user interface (UI) of your dApp is a single-page application (SPA) written in JavaScript using ReactJS, NextJS, and Tailwind CSS. The dApp interacts with Amazon Cognito for authentication and authorization, and with Amazon API Gateway to proxy data from the backend Web3 providers' APIs.

Architecture diagram

Figure 1. Architecture diagram showing authentication and API request proxy solution for Web3

Prerequisites

- Install Node.js, yarn or npm, and the AWS Serverless Application Model Command Line Interface (AWS SAM CLI) on your computer.
- Have an AWS account and the proper AWS Identity and Access Management (IAM) permissions to deploy the resources required by this architecture.
- Install a digital wallet extension on your browser and connect to the Ethereum blockchain. MetaMask is a popular digital wallet.
- Get an Alchemy account (free) and an API key for the Ethereum blockchain. Read the Alchemy Quickstart guide for more information.
- Sign up for a Moralis account (free) and API key. Read the Moralis Getting Started documentation for more information.
Using the AWS SAM framework

You'll use AWS SAM as your framework to define, build, and deploy your backend resources. AWS SAM is built on top of AWS CloudFormation and enables developers to define serverless components using a simpler syntax.

Walkthrough

Clone this GitHub repository.

Build and deploy the backend

The source code has two top-level folders:

- backend: contains the AWS SAM template template.yaml. Examine the template.yaml file for more information about the resources deployed in this project.
- dapp: contains the code for the dApp

1. Go to the backend folder and copy the prod.parameters.example file to a new file called prod.parameters. Edit it to add your Alchemy and Moralis API keys.

2. Run the following command to process the SAM template (review the sam build Developer Guide):

sam build

3. You can now deploy the SAM template by running the following command (review the sam deploy Developer Guide):

sam deploy --parameter-overrides $(cat prod.parameters) --capabilities CAPABILITY_NAMED_IAM --guided --confirm-changeset

4. SAM will ask you some questions and will generate a samconfig.toml file containing your answers. You can edit this file afterwards as desired. Future deployments will use the .toml file and can be run using sam deploy. Don't commit the samconfig.toml file to your code repository, as it contains private information.

Your CloudFormation stack should be deployed after a few minutes. The Outputs should show the resources that you must reference in your web application, located in the dapp folder.

Run the dApp

You can now run your dApp locally.

1. Go to the dapp folder and copy the .env.example file to a new file named .env. Edit this file to add the backend resource values needed by the dApp. Follow the instructions in the .env.example file.

2. Run the following command to install the JavaScript dependencies:

yarn

3. Start the development web server locally by running:

yarn dev

Your dApp should now be accessible at http://localhost:3000.
Deploy the dApp

The SAM template creates an Amazon S3 bucket and an Amazon CloudFront distribution, ready to serve your single-page application (SPA) on the internet. You can access your dApp from the internet at the URL of the CloudFront distribution, which is visible in the Outputs tab of your CloudFormation stack in the AWS Management Console, or in the output of the sam deploy command.

For now, your S3 bucket is empty. Build the dApp for production and upload the code to the S3 bucket by running these commands:

cd dapp
yarn build
cd out
aws s3 sync . s3://${BUCKET_NAME}

Replace ${BUCKET_NAME} with the name of your S3 bucket.

Automate deployment using SAM Pipelines

SAM Pipelines automatically generates deployment pipelines for serverless applications. When changes are committed to your Git repository, it automates the deployment of your CloudFormation stack and dApp code. With SAM Pipelines, you can choose a Git provider like AWS CodeCommit and a CI/CD system like AWS CodePipeline to automatically provision and manage your deployment pipeline. GitHub Actions is also supported. Read more about the sam pipeline bootstrap command to get started.

Host your dApp using the InterPlanetary File System (IPFS)

IPFS is a good solution for hosting dApps in a decentralized way. An IPFS gateway can serve as the origin for your CloudFront distribution and serve IPFS content over HTTP. dApps are often hosted on IPFS to increase trust and transparency: with IPFS, your web application source code and assets are not tied to a DNS name and a specific HTTP host; they live independently on the IPFS network. Read more about hosting a single-page website on IPFS, and how to run your own IPFS cluster on AWS.
Secure authentication and authorization

In this section, we’ll demonstrate how to:

- Authenticate users via their digital wallet using an Amazon Cognito user pool
- Protect your API Gateway from the public internet by authorizing access for both authenticated and unauthenticated users
- Call the Alchemy and Moralis third-party APIs securely using API Gateway HTTP passthrough and AWS Lambda proxy integrations
- Use the JavaScript Amplify Libraries to interact with Amazon Cognito and API Gateway from your web application

Authentication

Your dApp is usable by both authenticated and unauthenticated users. Unauthenticated users can look up NFT collections, while authenticated users can also look up their own NFTs. In your dApp, there is no login/password combination or identity provider (IdP) in place to authenticate your users. Instead, users connect their digital wallet to the web application.

To capture users’ wallet addresses and grant them temporary AWS credentials, you can use an Amazon Cognito user pool and an Amazon Cognito identity pool. You can create a custom authentication flow by implementing an Amazon Cognito custom authentication challenge, which uses AWS Lambda triggers. This challenge requires your users to sign a generated message using their digital wallet. A valid signature confirms that the user owns the wallet address, which is then used as the user identifier in the Amazon Cognito user pool.

Figure 2 details the Amazon Cognito authentication process. Three Lambda functions perform the different authentication steps.

Figure 2. Amazon Cognito authentication process

To define the authentication success conditions, the Amazon Cognito user pool calls the “Define auth challenge” Lambda function (defineAuthChallenge.js). To generate the challenge, Amazon Cognito calls the “Create auth challenge” Lambda function (createAuthChallenge.js). In this case, it generates a random message for the user to sign.
Amazon Cognito forwards the challenge to the dApp, which prompts the user to sign the message using their digital wallet and private key. The dApp then returns the signature to Amazon Cognito as a response. To verify that the user’s wallet actually signed the message, Amazon Cognito forwards the user’s response to the “Verify auth challenge response” Lambda function (verifyAuthChallengeResponse.js). If the signature is valid, Amazon Cognito authenticates the user and creates a new identity in the user pool with the wallet address as the username.

Finally, Amazon Cognito returns a JWT to the dApp containing multiple claims, one of them being cognito:username, which holds the user’s wallet address. These claims are passed to your AWS Lambda event and Amazon API Gateway mapping templates, allowing your backend to securely identify the user making the API requests.

Authorization

Amazon API Gateway offers multiple ways of authorizing access to an API route. This example showcases three different authorization methods:

- AWS_IAM: Authorization with IAM roles. IAM roles grant access to specific API routes or any other AWS resources. The IAM role assumed by the user is granted by the Amazon Cognito identity pool.
- COGNITO_USER_POOLS: Authorization with an Amazon Cognito user pool. API routes are protected by validating the user’s Amazon Cognito token.
- NONE: No authorization. API routes are open to the public internet.

API Gateway backend integrations

HTTP proxy integration

The HTTP proxy integration method allows you to proxy HTTP requests to another API. The requests and responses can pass through as-is, or you can modify them on the fly using mapping templates. This method is a cost-effective way to secure access to any third-party API, because your third-party API keys are stored in API Gateway and not in the frontend application. You can also activate caching on API Gateway to reduce the number of API calls made to the backend APIs.
This increases performance, reduces cost, and controls usage. Inspect the GetNFTsMoralisGETMethod and GetNFTsAlchemyGETMethod resources in the SAM template to understand how you can use mapping templates to modify the headers, path, or query string of your incoming requests.

Lambda proxy integration

API Gateway can use AWS Lambda as a backend integration. Lambda functions enable you to implement custom code and logic before returning a response to your dApp. In the backend/src folder, you will find two Lambda functions:

- getNFTsMoralisLambda.js: calls the Moralis API and returns the raw response
- getNFTsAlchemyLambda.js: calls the Alchemy API and returns the raw response

To access your authenticated user’s wallet address from your Lambda function code, read the cognito:username claim as follows:

const wallet_address = event.requestContext.authorizer.claims["cognito:username"];

Using Amplify Libraries in the dApp

The dApp uses the AWS Amplify JavaScript Libraries to interact with the Amazon Cognito user pool, the Amazon Cognito identity pool, and Amazon API Gateway. With the Amplify Libraries, you can interact with the Amazon Cognito custom authentication flow, get AWS credentials for your frontend, and make HTTP API calls to your API Gateway endpoint.

The Amplify Auth library performs the authentication flow: use it to sign up, sign in, and respond to the Amazon Cognito custom challenge. Examine the ConnectButton.js and user.js files in the dapp folder.

To make API calls to your API Gateway, use the Amplify API library. Examine the api.js file in the dApp to understand how to call different API routes. Note that some are protected by AWS_IAM authorization and others by COGNITO_USER_POOLS. Based on the current authentication status, your users automatically assume the CognitoAuthorizedRole or CognitoUnAuthorizedRole IAM roles referenced in the Amazon Cognito identity pool.
AWS Amplify automatically uses the credentials associated with your IAM role when calling an API route protected by the AWS_IAM authorization method.

The Amazon Cognito identity pool allows anonymous users to assume the CognitoUnAuthorizedRole IAM role. This allows secure access to your API routes, or any other AWS services you have configured, even for anonymous users, without making those API routes publicly available to the internet.

Cleaning up

To avoid incurring future charges, delete the CloudFormation stack created by SAM. Run the sam delete command, or delete the CloudFormation stack directly in the AWS Management Console.

Conclusion

In this blog, we’ve demonstrated how to use different AWS managed services to run and deploy a decentralized web application (dApp) on AWS, and how to integrate securely with Web3 providers’ APIs like Alchemy and Moralis. You can use an Amazon Cognito user pool to create a custom authentication challenge and authenticate users with a cryptographically signed message, and you can secure access to third-party APIs using API Gateway while keeping your secrets safe on the backend. Finally, you’ve seen how to host a single-page application (SPA) using Amazon S3, with Amazon CloudFront as your content delivery network (CDN). View the full article
-
Hi. I have a problem with parsing a token for authorization in ArgoCD. The Bearer token is passed to me in a header from a third-party OAuth2 provider (not GitHub, Google, etc.). How can I parse this token and use it for ArgoCD server authorization? In my configuration I don't set "issuer:" because my OAuth2 provider (a custom one we run ourselves) automatically transmits the JWT to argocd-server. Any idea how I can resolve this issue?
-
Styra, Inc. today launched an authorization service based on the Open Policy Agent (OPA) software that can be invoked via an application programming interface (API). Torin Sandall, vice president of open source for Styra, said the Styra Run cloud service will make it much simpler to embed enterprise-grade authorization capabilities within applications. Today, developers spend […] The post Styra Unfurls Cloud Service for Implementing Compliance-as-Code appeared first on DevOps.com. View the full article
-
Amazon OpenSearch Service now supports tag-based authorization for HTTP methods, making it easier for you to manage access control for data read and write operations. You can use Identity policies in AWS Identity and Access Management (IAM) to define permissions for read and write HTTP methods, allowing coarse-grained access control of data on your Amazon OpenSearch Service domains. View the full article
-
AWS Amplify Flutter introduces support for creating customizable authentication flows using Amazon Cognito Lambda triggers. With this functionality, developers can set up customizations for the login experience in their Flutter apps, such as creating OTP login flows or adding CAPTCHA. View the full article
-
Today’s applications use a great many login and authentication methods and workflows. Here, I’ll share the most relevant and proven authentication workflows, which you can use as a basis for architecting and designing an authentication system for traditional web applications, single-page applications and native mobile applications. Authentication Workflows for Traditional Web Applications Traditional web applications […] View the full article
-
Amazon Cognito now enables application developers to propagate the IP address as part of the caller context data in unauthenticated calls to Amazon Cognito. When Amazon Cognito’s Advanced Security Features (ASF) are enabled, this feature improves risk calculation and the resulting authentication decisions performed in flows such as sign-up, account confirmation, and password change. Prior to this change, the end user’s IP address was not available in unauthenticated calls if these calls were initiated behind a proxy. With this new feature, developers who build identity microservices, authentication modules, or identity proxies can use APIs to gain visibility into the client’s IP address and utilize it in other security applications to better understand the risk of a particular user activity. View the full article
-
Amazon Cognito now supports SMS Sandbox in Amazon SNS. Amazon Cognito makes it easy to add authentication, authorization, and user management to your web and mobile apps. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect. View the full article
-
AWS Single Sign-On (SSO) now enables you to secure user access to AWS accounts and business applications using multi-factor authentication (MFA) with FIDO-enabled security keys, such as YubiKey, and built-in biometric authenticators, such as Touch ID on Apple MacBooks and facial recognition on PCs. With this release, AWS SSO now supports the Web Authentication (WebAuthn) specification to provide strongly attestable and phishing-resistant authentication across all supported browsers, using interoperable FIDO2 and U2F authenticators. View the full article
-
Amazon Cognito User Pools now enables you to manage quotas for commonly used operation categories, such as user creation and user authentication, as well as view quotas and usage levels in the AWS Service Quotas dashboard or in CloudWatch metrics. This update makes it simple to view your quota usage of, and request rate increases for, multiple APIs in the same category. For example, you can now see the aggregated limit for a single “UserCreation” category, which includes SignUp, AdminCreateUser, ConfirmSignUp, and AdminConfirmSignUp. You can check whether the existing quotas meet your operational needs in the Service Quotas console or CloudWatch metrics. Refer to the documentation to learn how the API operations are mapped to the new categories. View the full article
-
Amplify Admin UI now supports importing existing Amazon Cognito User Pools and Identity Pools. This means you can link your Cognito User Pool and Identity Pool resources to your Amplify app to take advantage of authorization scenarios for your data model, and manage users and groups directly from the Admin UI. View the full article