-
HashiCorp Nomad supports JWT authentication methods, which let users authenticate to Nomad using tokens that can be verified via public keys. JWT auth methods are primarily used for machine-to-machine authentication, while OIDC auth methods are used for human-to-machine authentication. This post explains how JWT authentication works and how to set it up in Nomad using a custom GitHub Action. The GitHub Action uses GitHub's built-in identity tokens to obtain a short-lived Nomad token with limited permissions.

How JWT-based authentication works

The first step in JWT-based authentication is the JSON Web Token (JWT) itself. JWTs are encoded pieces of JSON that carry information about the identity of some workload or machine. JWT is a generic format, but for authentication, JWTs will sometimes conform to the more specific OIDC spec and include claims such as "sub", "iss", or "aud". This example JWT decodes to the following JSON:

{
  "jti": "eba60bec-a4e4-4787-9b16-20bed89d7092",
  "sub": "repo:mikenomitch/nomad-gha-jwt-auth:ref:refs/heads/main:repository_owner:mikenomitch:job_workflow_ref:mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main:repository_id:621402301",
  "aud": "https://github.com/mikenomitch",
  "ref": "refs/heads/main",
  "sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "repository": "mikenomitch/nomad-gha-jwt-auth",
  "repository_owner": "mikenomitch",
  "repository_owner_id": "2732204",
  "run_id": "5173139311",
  "run_number": "31",
  "run_attempt": "1",
  "repository_visibility": "public",
  "repository_id": "621402301",
  "actor_id": "2732204",
  "actor": "mikenomitch",
  "workflow": "Nomad GHA Demo",
  "head_ref": "",
  "base_ref": "",
  "event_name": "push",
  "ref_type": "branch",
  "workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "job_workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "job_workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "runner_environment": "github-hosted",
  "iss": "https://token.actions.githubusercontent.com",
  "nbf": 1685937407,
  "exp": 1685938307,
  "iat": 1685938007
}

(Note: if you ever want to decode or encode a JWT, jwt.io is a good tool.)

This specific JWT contains information about a GitHub workflow, including an owner, a GitHub Action name, a repository, and a branch. That is because it was issued by GitHub as an identity token, meaning it is meant to verify the identity of this workload. Each run in a GitHub Action can be provisioned with one of these JWTs. (More on how they can be used later in this blog post.)

Importantly, aside from the information in the JSON, JWTs can be signed with a private key and verified with a public key. Note that while they are signed, their contents are still decodable by anybody; signing makes them verifiable, not secret. The public keys for verifying JWTs can often be found at idiomatic "well-known" URLs, such as JSON Web Key Set (JWKS) URLs. For example, these GitHub public keys can be used to verify GitHub's identity tokens.

JWT authentication in Nomad

Nomad can use external JWT identity tokens to issue its own Nomad ACL tokens with the JWT auth method. In order to set this up, Nomad needs:

- Roles and/or policies that define access based on identity
- An auth method that tells Nomad to trust JWTs from a specific source
- A binding rule that tells Nomad how to map information from that source onto Nomad concepts, like roles and policies

Here's how to set up authentication in Nomad to achieve the following rule:

"I want any repo using an action called Nomad JWT Auth to get a Nomad ACL token that grants the action permissions for all the Nomad policies assigned to a specific role for their GitHub organization. Tokens should be valid for only one hour, and the action should be valid only for the main branch."

That may seem like a lot, but with Nomad JWT authentication, it's actually fairly simple. In older versions of Nomad, complex authentication like this was impossible, which forced administrators into using long-lived tokens with very high levels of permissions. If a token was leaked, admins had to manually rotate all of the tokens stored in external stores. This made Nomad less safe and harder to manage. Now tokens can be short-lived, and after a one-time setup with identity-based rules, users don't have to worry about managing Nomad tokens for external applications.

Setting up JWT authentication

To set up the authentication, start by creating a simple policy that has write access to the namespace "app-dev" and another policy that has read access to the default namespace.

Create a namespace called app-dev:

nomad namespace apply "app-dev"

Write a policy file called app-developer.policy.hcl:

namespace "app-dev" {
  policy = "write"
}

Then create it with this CLI command:

nomad acl policy apply -description "Access to app-dev namespace" app-developer app-developer.policy.hcl

Write a policy file called default-read.policy.hcl:

namespace "default" {
  policy = "read"
}

Then create it in the CLI:

nomad acl policy apply -description "Read access to default namespace" default-read default-read.policy.hcl

Next, create roles that have access to these policies. Often these roles are team-based, such as "engineering" or "ops", but in this case, create a role named "org-" plus our GitHub organization's name: mikenomitch. Repositories in this organization should be able to deploy to the "app-dev" namespace, and we should be able to set up a GitHub Action to deploy them on merge.
Give this role access to the two new policies:

nomad acl role create -name="org-mikenomitch" -policy=app-developer -policy=default-read

Now, create a file defining an auth method for GitHub in auth-method.json:

{
  "JWKSURL": "https://token.actions.githubusercontent.com/.well-known/jwks",
  "ExpirationLeeway": "1h",
  "ClockSkewLeeway": "1h",
  "ClaimMappings": {
    "repository_owner": "repo_owner",
    "repository_id": "repo_id",
    "workflow": "workflow",
    "ref": "ref"
  }
}

Then create it with the CLI:

nomad acl auth-method create -name="github" -type="JWT" -max-token-ttl="1h" -token-locality=global -config "@auth-method.json"

This tells Nomad to expect JWTs from GitHub, to verify them using the public keys at the JWKSURL, and to map key-value pairs found in the JWT to new names. This allows binding rules to be created using these values.

A binding rule sets up the auth logic requirements stated in the block quote earlier in this post:

nomad acl binding-rule create \
  -description 'repo name mapped to role name, on main branch, for "Nomad JWT Auth" workflow' \
  -auth-method 'github' \
  -bind-type 'role' \
  -bind-name 'org-${value.repo_owner}' \
  -selector 'value.workflow == "Nomad JWT Auth" and value.ref == "refs/heads/main"'

The selector field tells Nomad to match only JWTs with certain values in the ref and workflow fields. The bind-type and bind-name fields tell Nomad to map JWTs that match this selector to specific roles; in this case, roles whose names match the GitHub organization name. If you wanted more granular permissions, you could instead match role names to repository IDs using the repo_id field.

So, the JWTs for repositories in the mikenomitch organization are given an ACL token with the role org-mikenomitch, which in turn grants access to the app-developer and default-read policies.

Nomad auth with a custom GitHub Action

Now you're ready to use a custom GitHub Action to authenticate into Nomad.
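At its core, that authentication is a single HTTP POST to Nomad's /v1/acl/login endpoint. A minimal Python sketch of the request body the action sends (the JWT value here is a placeholder; in a real workflow it is GitHub's identity token for the run):

```python
import json

# Placeholder values: the real JWT comes from GitHub's OIDC token endpoint
# during the workflow run, and the auth method name matches the one
# created above with `nomad acl auth-method create -name="github"`.
auth_method_name = "github"
github_jwt = "eyJhbGciOiJSUzI1NiJ9.example-payload.example-signature"

# Body for POST ${NOMAD_ADDR}/v1/acl/login; on success, Nomad responds
# with an ACL token whose secret is in the "SecretID" field.
login_body = json.dumps({
    "AuthMethodName": auth_method_name,
    "LoginToken": github_jwt,
})
print(login_body)
```

Nomad verifies the token against the JWKS URL from the auth method, applies the binding rule, and returns a short-lived ACL token.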
The action exposes a short-lived Nomad token as an output, which can be used by another step that uses simple bash to deploy any files in the ./nomad-jobs directory to Nomad. The code for this action is very simple: it just calls Nomad's /v1/acl/login endpoint, specifying the GitHub auth method and passing the GitHub Action's JWT as the login token. (See the code.)

To use this action, push to GitHub with the following file at .github/workflows/github-actions-demo.yml:

name: Nomad JWT Auth
on:
  push:
    branches:
      - main
      - master
env:
  PRODUCT_VERSION: "1.7.2"
  NOMAD_ADDR: "https://my-nomad-addr:4646"
jobs:
  Nomad-JWT-Auth:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup `nomad`
        uses: lucasmelin/setup-nomad@v1
        id: setup
        with:
          version: ${{ env.PRODUCT_VERSION }}
      - name: Auth Into Nomad
        id: nomad-jwt-auth
        uses: mikenomitch/nomad-jwt-auth@v0.1.0
        with:
          url: ${{ env.NOMAD_ADDR }}
          caCertificate: ${{ secrets.NOMAD_CA_CERT }}
        continue-on-error: true
      - name: Deploy Jobs
        run: for file in ./nomad-jobs/*; do NOMAD_ADDR="${{ env.NOMAD_ADDR }}" NOMAD_TOKEN="${{ steps.nomad-jwt-auth.outputs.nomadToken }}" nomad run -detach "$file"; done

Now you have a simple CI/CD flow on GitHub Actions. It does not require manually managing tokens and is secured via identity-based rules and auto-expiring tokens.

Possibilities for JWT authentication in Nomad

With the JWT auth method, you can enable efficient workflows for tools like GitHub Actions, simplifying the management of Nomad tokens for external applications. Machine-to-machine authentication is an important function in cloud infrastructure, yet implementing it correctly requires understanding several standards and protocols. Nomad's JWT authentication methods provide the necessary building blocks to make setting up machine-to-machine auth simple.
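To make the binding rule configured earlier concrete, its selector and bind-name template behave roughly like the sketch below. This is purely illustrative (Nomad evaluates rules internally), using the claim names produced by the ClaimMappings in auth-method.json:

```python
def apply_binding_rule(claims):
    """Illustrative only: combine the binding rule's selector with its
    bind-name template to decide which role a JWT login is bound to."""
    selector_matches = (
        claims.get("workflow") == "Nomad JWT Auth"
        and claims.get("ref") == "refs/heads/main"
    )
    if not selector_matches:
        return None  # no role is bound; the login grants nothing
    # bind-name 'org-${value.repo_owner}' interpolates a mapped claim
    return "org-" + claims["repo_owner"]

print(apply_binding_rule({
    "workflow": "Nomad JWT Auth",
    "ref": "refs/heads/main",
    "repo_owner": "mikenomitch",
}))
```

A matching token from the mikenomitch organization is bound to the org-mikenomitch role; a token from a feature branch, or from a differently named workflow, gets no role at all.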
This auth method extends the authentication methods made available in Nomad 1.5, which introduced SSO and OIDC support. As organizations move towards zero trust security, Nomad users now have more choices when implementing access to their critical infrastructure. To learn more about how HashiCorp provides a solid foundation for companies to safely migrate and secure their infrastructure, applications, and data as they move to a multi-cloud world, visit our zero trust security page. To try the feature described in this post, download the latest version of HashiCorp Nomad. View the full article
-
HashiConf Digital June 2020: Nomad News in Review
Hashicorp posted a topic in Infrastructure-as-Code
It’s been a busy year for the HashiCorp Nomad team with releases packed full of some of our most requested features. In the first post of this two-part recap, I’ll reflect on the announcements and talks from HashiConf Digital June. In the second part, I’ll review some key features in Nomad 0.12 and share our team’s must-see highlights to prepare you for HashiConf Digital this upcoming October! ... Register now at hashiconf.com/digital-october! View the full article -
Last year we wrote A Kubernetes User's Guide to HashiCorp Nomad — a resource for learning the equivalent terminologies, comparisons, and differentiations between HashiCorp Nomad and Kubernetes. As a follow-up, we also wrote A Kubernetes User's Guide to HashiCorp Nomad Secret Management, which compared Kubernetes and Nomad's integration workflows with HashiCorp Vault. This year, we've designed a companion cheat sheet that condenses some of the concept comparisons made in those guides and adds original content. This new, one-page reference PDF starts with a list of Kubernetes commands and their Nomad equivalents, followed by a list of Kubernetes concepts and their Nomad equivalents.

»The Cheat Sheet

Below is an image of the Kubernetes to Nomad cheat sheet and a PDF download link (no registration required):

»Contribute on the GitHub Page

Along with the PDF cheat sheet, we've also started a GitHub page for collecting Nomad cheat sheets. The Kubernetes-to-Nomad commands and concepts cheat sheets are the first two resources added to this repository, which is now open for edits and additions by the community. So if you have your own set of common Kubernetes commands that you've translated into Nomad (or vice versa), or you think there's an error that needs to be fixed, you can easily put in a pull request on the GitHub page. And if you have an idea for another Nomad cheat sheet and you'd like to submit some of it, go ahead and write a pull request for that as well.

This cheat sheet and its various iterations were created by Michael Schurter, Kerim Satirli, Erik Veld, Taylor Dolezal, Jacquie Grindrod, Andrei Burd, Charlie Voiselle, Amier Chery, Kristopher Hughes, and Paul Burt. View the full article
-
We are pleased to announce the public beta of HashiCorp Nomad 1.0. Nomad is a simple and flexible orchestrator to deploy and manage containers and non-containerized applications across on-premises and cloud environments at scale. Nomad is widely adopted and used in production by organizations like Cloudflare, Roblox, Q2, Pandora, and more... View the full article
-
Recently we announced that Nomad now supports running Consul Connect ingress gateways. For the past year, Nomad has been incrementally improving its first-class integration with Consul's service mesh. Whether through the use of sidecar proxies like Envoy or by embedding the Connect native client library, Nomad supports running tasks that can communicate with other components of a Consul service mesh quickly and securely. Now with support for Consul ingress gateways, Nomad users are able to provide access to Connect-enabled services from outside the service mesh.

Overview

Ingress gateways enable ingress traffic from services running outside of the Consul service mesh to services inside the mesh. An ingress gateway is a special type of proxy that is registered into Consul as a service with its kind set to ingress-gateway. Ingress gateways provide a dedicated entry point for outside traffic and apply the proper traffic management policies for how requests to mesh services are handled. With this latest release, Nomad can now be used not only to deploy ingress gateway proxy tasks, but to configure them too. Using this feature, Nomad job authors can enable applications external to the Consul service mesh to access Connect-enabled services. The ingress gateway configuration enables defining one or more listeners that map to a set of backing services.

Service Configuration

There is a new gateway parameter available for services with a connect stanza defined at the group level in a Nomad job specification. Inside this stanza are parameters for configuring the underlying Envoy proxy as well as the configuration entry that is used to establish the gateway configuration in Consul.
service {
  gateway {
    proxy {
      // envoy proxy configuration
      // https://www.nomadproject.io/docs/job-specification/gateway#proxy-parameters
    }
    ingress {
      // consul configuration entry
      // https://www.nomadproject.io/docs/job-specification/gateway#ingress-parameters
    }
  }
}

The proxy stanza is used to define configuration for the underlying Envoy proxy that Nomad will run as a task in the same task group as the service definition. This configuration becomes part of the service registration for the service registered on behalf of the ingress gateway.

The ingress stanza represents the ingress-gateway configuration entry that Consul uses to manage the proxy's listeners. A listener declares the port, protocol (tcp or http), and each Consul service which may respond to incoming requests. When listening on http, a service may be configured with a list of hosts that specify which requests will match the service.

ingress {
  listener {
    port     = 8080
    protocol = "tcp"
    service {
      name = "uuid-api"
    }
  }
}

If the task group containing the ingress gateway definition is configured for bridge networking, Nomad will automatically reconfigure the proxy options to work from inside the group's network namespace for the defined listeners, e.g.:

envoy_gateway_no_default_bind = true
envoy_gateway_bind_addresses "uuid-api" {
  address = "0.0.0.0"
  port    = 8080
}

Task

Nomad and Consul leverage Envoy as the underlying proxy implementation for ingress gateways. The Nomad task group that defines the ingress service does not require any tasks to be defined; Nomad will derive the task from the service configuration and inject it into the task group automatically during job creation.

Discover

To enable easier service discovery, Consul provides a new DNS subdomain for each service fronted by an ingress gateway. To find ingress-enabled services, use:

<service>.ingress.<domain>

By default, <domain> is simply consul.
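The naming scheme can be captured in a tiny helper. The function below is hypothetical, shown only to make the pattern concrete; the optional datacenter segment follows Consul's usual DNS scheme:

```python
def ingress_dns_name(service, domain="consul", datacenter=None):
    """Build the <service>.ingress.<domain> DNS name for a service
    fronted by an ingress gateway, optionally qualified by datacenter."""
    parts = [service, "ingress"]
    if datacenter is not None:
        parts.append(datacenter)
    parts.append(domain)
    return ".".join(parts)

print(ingress_dns_name("uuid-api"))
print(ingress_dns_name("uuid-api", datacenter="dc1"))
```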
To test that an ingress gateway is working, the dig command can be used to look up the DNS entry of a service from Consul directly, e.g.:

dig @127.0.0.1 -p 8600 <service>.ingress.consul SRV

Example

The following job specification demonstrates using an ingress gateway as a method of plain HTTP ingress for our UUID generator API Connect-native sample service. This open source example is designed to be runnable if Consul, Nomad, and Docker are already configured.

job "ig-bridge-demo" {
  datacenters = ["dc1"]

  # This group will have a task providing the ingress gateway automatically
  # created by Nomad. The ingress gateway is based on the Envoy proxy being
  # managed by the docker driver.
  group "ingress-group" {

    network {
      mode = "bridge"

      # This example will enable tcp traffic to access the uuid-api connect
      # native example service by going through the ingress gateway on port 8080.
      # The communication between the ingress gateway and the upstream service
      # occurs through the mTLS protected service mesh.
      port "api" {
        static = 8080
        to     = 8080
      }
    }

    service {
      name = "my-ingress-service"
      port = "8080"

      connect {
        gateway {

          # Consul gateway [envoy] proxy options.
          proxy {
            # The following options are automatically set by Nomad if not
            # explicitly configured when using bridge networking.
            #
            # envoy_gateway_no_default_bind = true
            # envoy_gateway_bind_addresses "uuid-api" {
            #   address = "0.0.0.0"
            #   port    = <associated listener.port>
            # }
            #
            # Additional options are documented at
            # https://www.nomadproject.io/docs/job-specification/gateway#proxy-parameters
          }

          # Consul Ingress Gateway Configuration Entry.
          ingress {
            # Nomad will automatically manage the Configuration Entry in Consul
            # given the parameters in the ingress block.
            #
            # Additional options are documented at
            # https://www.nomadproject.io/docs/job-specification/gateway#ingress-parameters
            listener {
              port     = 8080
              protocol = "tcp"
              service {
                name = "uuid-api"
              }
            }
          }
        }
      }
    }
  }

  # The UUID generator from the Connect-native demo is used as an example service.
  # The ingress gateway above makes access to the service possible over tcp port 8080.
  group "generator" {

    network {
      mode = "host"
      port "api" {}
    }

    service {
      name = "uuid-api"
      port = "${NOMAD_PORT_api}"

      connect {
        native = true
      }
    }

    task "generate" {
      driver = "docker"

      config {
        image        = "hashicorpnomad/uuid-api:v3"
        network_mode = "host"
      }

      env {
        BIND = "0.0.0.0"
        PORT = "${NOMAD_PORT_api}"
      }
    }
  }
}

You can run this example by saving it as ingress-gateway.nomad and running the commands:

consul agent -dev
sudo nomad agent -dev-connect
nomad job run ingress-gateway.nomad

Once running, the ingress gateway will be available on port 8080 of the node that the ingress gateway service is running on. The UUID generator service will be listening on a dynamically allocated port chosen by Nomad. Because the UUID generator service is in the Connect service mesh, it is not possible to connect to it directly; it will reject any connection without a valid mTLS certificate.

In most environments Consul DNS will be configured so that applications can easily discover Consul services. We can use curl and dig to simulate what an application accessing a service through the ingress gateway would look like:

$ curl $(dig +short @127.0.0.1 -p 8600 uuid-api.ingress.dc1.consul. ANY):8080
c8bfae29-3683-4b19-89dd-fbfbe691a6e7

Limitations

At the moment, Envoy is the only proxy implementation that can be used by Nomad and Consul as an ingress gateway. When being used as an ingress gateway, Nomad will launch Envoy using the docker task driver, as there is not yet support for manually specifying the proxy task.
Ingress gateways are configured using Consul configuration entries, which are global in scope across federated Consul clusters. When multiple Nomad regions define an ingress gateway under a particular service name, each region will rewrite the ingress-gateway configuration entry in Consul for that service. In practice, typical individual ingress gateway service definitions will be the same across Nomad regions, so the extra writes turn into no-ops.

When running the ingress gateway in host-networking mode, the Envoy proxy creates a default administration HTTP listener that is bound to localhost. There is no way to disable or secure the Envoy administration listener (envoy/2763). Any other process able to connect to localhost on the host machine will be able to access the Envoy configuration through its administration listener, including Service Identity Tokens for the proxy when Consul ACLs are in use.

Conclusion

In this blog post, we shared an overview of Nomad's Consul ingress gateway integration and how it can be used to configure and run Consul ingress gateways on Nomad. Using this integration, job specification authors can easily create endpoints for external applications to make requests to Connect-enabled services that would otherwise be accessible only through the Consul service mesh. For more information about ingress gateways, see the Consul ingress gateway documentation. For more information about Nomad, please visit our docs. View the full article
-
Today we’re pleased to announce the release of a suite of Terraform modules in the public registry that provide an implementation of the Reference Architecture for Consul, Nomad, and Vault. You can use these modules in AWS Cloud. They represent a straightforward way to stand up a working product cluster. If you navigate to the HashiCorp products section of the Terraform registry and scroll down, you'll see the "Modules Maintained By HashiCorp" section shown above.

What Modules Are Available?

This initial release contains modules for the open source versions of Consul, Nomad, and Vault for AWS. This combination of products and platform was chosen in light of the fact that AWS is the cloud of choice across much of the industry, and these three products have broad practitioner support and adoption.

What Are These Modules?

These modules are opinionated implementations of the product reference architectures for Vault, Consul, and Nomad. You can drop them into existing Terraform setups or use them to compose entirely new infrastructure in Terraform. Each module is composed in such a way that you can get started quickly by supplying a few values as variables. Other than these values, which are specific to your environment, the module contains defaults that bring up the appropriate infrastructure for each product in accordance with the recommendations of the HashiCorp Enterprise Architecture group. For the inexperienced practitioner, this greatly accelerates the start time: spinning up the infrastructure lets you get started with the product rather than having to learn and understand the details of configuration. This straightforward approach is also intended to help you experiment by making it simple to bring up a functional version of the product for demonstration purposes, or for internal benchmarking by an organization looking to make sure that introducing HashiCorp products is not also introducing overhead.
While full integration of a product into an existing infrastructure might require more specific configuration, these modules allow the swift setup of test or development environments that adhere to HashiCorp best practices in the operation of our products.

What About Flexibility?

The HashiCorp way has always been to provide you with as much flexibility as possible; we make tools, not prescriptions. These modules don't change that; rather, they're a new way of expressing it. If you are a more experienced practitioner looking for flexibility and the ability to control configuration in a more manual fashion, we still offer our previous modules for Consul, Nomad, and Vault. These new modules join our previous offerings in the registry and are intended to function as quickstarts for new practitioners and as reference material accompanying our HashiCorp Learn site.

What's Next?

We believe that remixing and collaboration make us better, and that's why we've invested in these open source modules. As the maintainers, we are sharing these in the hope that you will find them helpful, whether as implementation tools, references, or templates. We also invite feedback and contributions, both on the modules already released and on our future work in this regard. We especially hope new practitioners will find these modules helpful, and we'll be working toward smoothing any rough edges in the new practitioner experience in these and the future modules we release. View the full article
-