Showing results for tags 'github actions'.

Found 16 results

  1. GitHub Actions lets you automate workflows directly from within a GitHub repository. Each workflow is stored as a YAML definition file in the repository's .github/workflows directory and can be configured to perform a variety of build and release steps. On […] The article GitHub Actions: Commit and Push Changes Back to Repository appeared first on Build5Nines. View the full article
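A minimal sketch of the pattern this article describes, a workflow that commits generated changes back to its own repository. The regenerated file and commit message are placeholders; the one real requirement is that the built-in GITHUB_TOKEN has contents: write permission.

```yaml
name: commit-generated-changes
on:
  workflow_dispatch: {}

permissions:
  contents: write   # lets the built-in GITHUB_TOKEN push commits

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Regenerate a file (placeholder for a real build step)
        run: date > generated.txt
      - name: Commit and push if anything changed
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add -A
          git diff --cached --quiet || { git commit -m "chore: update generated files"; git push; }
```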
  2. Automotive software development moves to the cloud

We are at an inflection point for automotive embedded development to move to the cloud. In an era where software has not just eaten the world but is continuously redefining it through AI, the cloud emerges not just as a platform but as the foundational fabric for software engineering. With AI's increasing demand for computational power driving unprecedented changes in silicon, both at the edge and in the cloud, the need for agile, scalable, and continuously optimized development environments has never been more critical. As the home of the world's developers, GitHub is the platform on which to build the next generation of automotive and embedded development environments in the cloud.

Traditional embedded development challenges

Improving the developer experience is at the heart of what GitHub does. We're dedicated to making coding as smooth as possible by reducing unnecessary complexity, and the traditional process for developers working with embedded systems has plenty of friction to remove. Historically, embedded software development has been very hardware-dependent, with developers maintaining some combination of test hardware connected to their development machines or an in-house testing farm. There weren't many alternatives because so much was proprietary.

In recent years, a series of technical advancements have significantly influenced the foundational architectures within the field, even though many traditional methods and operational processes remain in use. Key developments include the adoption of more powerful multipurpose processors, the establishment of open standards for the lower-level software stack (such as SOAFEE.io for cloud-native architecture at the edge), and the increased reliance on open source resources, facilitating reuse across different domains. These innovations have given developers the opportunity to fundamentally rethink their approaches to development, enabling more efficient and flexible strategies. As the pace of these technical trends and foundational changes increases, teams are finding it increasingly difficult to deliver application commitments without bearing the significant cost of maintaining these in-house development and test environments.

See how Scalable Open Architecture For Embedded Edge (SOAFEE), an industry-led collaboration between companies across the automotive and technology sectors, is working to radically simplify vehicle software solutions.

Virtualization for embedded and automotive development

While virtualization has become a cornerstone of enterprise development, its integration into embedded systems has proceeded at a more cautious pace. The complexities inherent in embedded systems (spanning a vast array of processors, operating systems, and specialized software) pose unique challenges not encountered in the more homogeneous environments of data centers and IT networks. Embedded systems require a nuanced approach to virtualization that goes beyond simply accommodating mainstream operating systems like Windows and Linux on standard Intel architectures.

In a significant development that reflects the evolving landscape of embedded systems, in March 2024 Arm unveiled its new Automotive Enhanced (AE) processors. These cutting-edge processors are designed to boost AI capabilities within the automotive sector while ensuring ISA (Instruction Set Architecture) compatibility.
This advancement is poised to revolutionize the way applications are developed and deployed, enabling developers to create software in the cloud and seamlessly transition it to the edge, such as in vehicles, without extensive reconfiguration or modification. This leap forward promises to accelerate the time-to-market for new applications, bridging the gap between cloud development environments and the nuanced world of embedded systems. It also exemplifies how advancements in processor technology and virtualization are converging to address the unique challenges of embedded development, paving the way for more integrated and efficient systems across industries. Developers will be able to write, build, and test code in the cloud and then run their applications in virtualized environments with digital twins that mirror their processor targets, even if those targets haven't yet been delivered in silicon.

Cloud-based continuous integration platform

Continuous integration (CI), a cornerstone of agile methodologies for over two decades, automates the build, test, and deployment processes. This automation accelerates feedback loops, enabling timely verification that the software meets the intended requirements. It also minimizes integration risks and enhances the early detection of defects and security vulnerabilities. While surveys indicate that many embedded development teams have adopted CI as a practice, managing development environments across multiple hardware configurations and deployment targets is costly and complex. Implementing CI/CD in a cloud environment brings the well-established advantages of cloud computing to embedded engineering teams, significantly enhancing their ability to deliver high-quality products within tight market timelines:

Enhanced scalability. Cloud-based CI allows teams to dynamically allocate resources and optimize compute spend. Teams can execute workloads in parallel to support multiple hardware and software configurations simultaneously. Developers can also collaborate across geographic regions or even across organizational boundaries within the supply chain.

Reduced complexity. Standardizing on cloud-based CI reduces environment setup and teardown times and promotes consistency. Workflows can easily be shared across teams.

Improved quality. When compute resources are too constrained or managing the CI environment is brittle, teams may over-optimize locally for too narrow a slice of the development process. Reducing this friction, and thereby tightening end-to-end feedback loops, can improve quality.

To deliver cloud-based embedded developer environments for design and build time that feed into the runtime virtualized and simulated targets, GitHub needed to update our infrastructure. In October 2023, GitHub announced native Arm64 support for our hosted CI/CD workflow engine, GitHub Actions. Supporting this platform is important because Arm's family of processor designs is central to many uses in the embedded and automotive world, and it promises to free embedded developers from being tied to the desktop. By moving jobs to the cloud, development teams will be able to focus more on coding and less on infrastructure management. We also recently announced the public beta of GPU hosted runners, which will enable teams building machine learning models to do complete application testing, including the ML components, within GitHub Actions.
Conclusion

The convergence of cloud technologies, advanced virtualization, and cutting-edge processor innovations represents a transformative shift in automotive software development. To further advance and support these transformations across the industry, GitHub has recently joined SOAFEE.io, while maintaining our membership in the Connected Vehicle Systems Alliance (COVESA) and supporting Microsoft's commitment to the Eclipse Software Defined Vehicle project.

GitHub Enterprise Cloud, along with Arm's latest AE processors, heralds a new era where development and testing transcend traditional boundaries, leveraging the cloud's vast resources for more efficient, scalable, and flexible software creation. This paradigm shift towards cloud-based development and virtualized testing environments not only addresses the complexities and limitations of embedded system design but also dramatically reduces the overhead associated with physical hardware dependencies. By enabling developers to seamlessly transition applications from the cloud to the edge without extensive rework, the automotive industry stands on the brink of a significant acceleration in innovation and time-to-market for new technologies.

GitHub's introduction of native Arm64 support and the public beta of GPU hosted runners on its CI/CD platform, GitHub Actions, further underscores this transition. These advancements ensure that the embedded and automotive development communities can fully harness the cloud's potential, facilitating a shift from local, hardware-constrained development processes to a more agile, cloud-centric approach. As a result, developers can focus more on innovation and less on the intricacies of hardware management, propelling the automotive sector into a future where software development is more integrated, dynamic, and responsive to the rapidly evolving demands of technology and consumers. This transition signifies a leap forward in how automotive software is developed and reflects a broader trend towards the cloud as the backbone of modern software engineering across industries.

Learn more about GitHub-hosted runners, and look for the public beta of Arm-hosted runners coming later this year. A job targeting an Arm-hosted runner might look like the sketch below this item.
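A minimal sketch of targeting an Arm-hosted runner from a workflow. The runner label is an assumption: Arm runner labels (for example, ubuntu-24.04-arm) depend on your plan and on how the runner was configured, so substitute the label your organization actually defines.

```yaml
name: arm64-build
on: [push]

jobs:
  build:
    # Hypothetical label; Arm-hosted runner labels vary by plan and configuration.
    runs-on: ubuntu-24.04-arm
    steps:
      - uses: actions/checkout@v4
      - name: Confirm we are on an Arm machine
        run: uname -m   # expect aarch64
```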
  3. AWS CodeBuild now supports managed GitHub Actions self-hosted runners. Customers can configure their CodeBuild projects to receive GitHub Actions workflow job events and run them on ephemeral CodeBuild hosts. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. View the full article
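To route a workflow job to CodeBuild, the job's runs-on label encodes the CodeBuild project name plus the run ID and attempt. A minimal sketch, assuming a hypothetical project named my-runner-project and the label format AWS documented at launch:

```yaml
name: codebuild-runner-demo
on: [push]

jobs:
  test:
    # "my-runner-project" is a placeholder for your CodeBuild project name.
    runs-on: codebuild-my-runner-project-${{ github.run_id }}-${{ github.run_attempt }}
    steps:
      - uses: actions/checkout@v4
      - name: Run the test suite (placeholder command)
        run: make test
```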
  4. Learn how to automate machine learning training and evaluation using scikit-learn pipelines, GitHub Actions, and CML. View the full article
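A minimal sketch of the workflow shape such a setup typically uses: train on push, then let CML post the metrics back as a comment. The train.py script and metrics file are hypothetical placeholders, and the CML command shown (cml comment create) reflects the current CLI, which may differ across CML versions.

```yaml
name: train-model
on: [push]

jobs:
  train:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # lets CML comment on the pull request
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - uses: iterative/setup-cml@v2
      - name: Train and evaluate (train.py is a hypothetical script)
        run: |
          pip install scikit-learn pandas
          python train.py --metrics-out metrics.txt
      - name: Publish metrics as a comment
        env:
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          cat metrics.txt >> report.md
          cml comment create report.md
```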
  5. Implementing Continuous Integration/Continuous Deployment (CI/CD) for a Python application using Django involves several steps to automate testing and deployment processes. This guide will walk you through setting up a basic CI/CD pipeline using GitHub Actions, a popular CI/CD tool that integrates seamlessly with GitHub repositories. Step 1: Setting up Your Django Project Ensure your Django project is in a Git repository hosted on GitHub. This repository will be the basis for setting up your CI/CD pipeline. View the full article
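A minimal sketch of the kind of starter pipeline such a guide describes: run the Django test suite on every push and pull request. It assumes a requirements.txt at the repository root and a standard manage.py layout.

```yaml
name: django-ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt   # assumes requirements.txt at the repo root
      - name: Run the Django test suite
        run: python manage.py test
```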
  6. HashiCorp Nomad supports JWT authentication methods, which allow users to authenticate into Nomad using tokens that can be verified via public keys. Primarily, JWT auth methods are used for machine-to-machine authentication, while OIDC auth methods are used for human-to-machine authentication. This post explains how JWT authentication works and how to set it up in Nomad using a custom GitHub Action. The GitHub Action will use built-in GitHub identity tokens to obtain a short-lived Nomad token with limited permissions.

How JWT-based authentication works

The first step in JWT-based authentication is the JSON Web Token (JWT) itself. JWTs are encoded pieces of JSON that contain information about the identity of some workload or machine. JWT is a generic format, but for authentication, JWTs will sometimes conform to the more specific OIDC spec and include keys such as "sub", "iss", or "aud". This example JWT decodes to the following JSON:

```json
{
  "jti": "eba60bec-a4e4-4787-9b16-20bed89d7092",
  "sub": "repo:mikenomitch/nomad-gha-jwt-auth:ref:refs/heads/main:repository_owner:mikenomitch:job_workflow_ref:mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main:repository_id:621402301",
  "aud": "https://github.com/mikenomitch",
  "ref": "refs/heads/main",
  "sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "repository": "mikenomitch/nomad-gha-jwt-auth",
  "repository_owner": "mikenomitch",
  "repository_owner_id": "2732204",
  "run_id": "5173139311",
  "run_number": "31",
  "run_attempt": "1",
  "repository_visibility": "public",
  "repository_id": "621402301",
  "actor_id": "2732204",
  "actor": "mikenomitch",
  "workflow": "Nomad GHA Demo",
  "head_ref": "",
  "base_ref": "",
  "event_name": "push",
  "ref_type": "branch",
  "workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "job_workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "job_workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "runner_environment": "github-hosted",
  "iss": "https://token.actions.githubusercontent.com",
  "nbf": 1685937407,
  "exp": 1685938307,
  "iat": 1685938007
}
```

(Note: If you ever want to decode or encode a JWT, jwt.io is a good tool.)

This specific JWT contains information about a GitHub workflow, including an owner, a GitHub Action name, a repository, and a branch. That is because it was issued by GitHub and is an identity token, meaning it is supposed to be used to verify the identity of this workload. Each run in a GitHub Action can be provisioned with one of these JWTs. (More on how they can be used later in this blog post.)

Importantly, aside from the information in the JSON, JWTs can be signed with a private key and verified with a public key. It is worth noting that while they are signed, their contents are still decodable by anybody, just not verifiable by anybody. The public keys for JWTs can often be found at idiomatically well-known URLs, such as JSON Web Key Set (JWKS) URLs. For example, these GitHub public keys can be used to verify its identity tokens.

JWT authentication in Nomad

Nomad can use external JWT identity tokens to issue its own Nomad ACL tokens with the JWT auth method.
In order to set this up, Nomad needs:

- Roles and/or policies that define access based on identity
- An auth method that tells Nomad to trust JWTs from a specific source
- A binding rule that tells Nomad how to map information from that source into Nomad concepts, like roles and policies

Here's how to set up authentication in Nomad to achieve the following rule: any repo using an action called Nomad JWT Auth gets a Nomad ACL token that grants the action permissions for all the Nomad policies assigned to a specific role for its GitHub organization. Tokens should be valid for only one hour, and the action should be valid only for the main branch.

That may seem like a lot, but with Nomad JWT authentication, it's actually fairly simple. In older versions of Nomad, complex authentication like this was impossible. This forced administrators into using long-lived tokens with very high levels of permissions. If a token was leaked, admins would have to manually rotate all of their tokens stored in external stores. This made Nomad less safe and harder to manage. Now tokens can be short-lived, and after a one-time setup with identity-based rules, users don't have to worry about managing Nomad tokens for external applications.

Setting up JWT authentication

To set up the authentication, start by creating a simple policy that has write access to the namespace "app-dev" and another policy that has read access to the default namespace.

Create a namespace called app-dev:

```shell
nomad namespace apply "app-dev"
```

Write a policy file called app-developer.policy.hcl:

```hcl
namespace "app-dev" {
  policy = "write"
}
```

Then create it with this CLI command:

```shell
nomad acl policy apply -description "Access to app-dev namespace" app-developer app-developer.policy.hcl
```

Write a policy file called default-read.policy.hcl:

```hcl
namespace "default" {
  policy = "read"
}
```

Then create it in the CLI:

```shell
nomad acl policy apply -description "Read access to default namespace" default-read default-read.policy.hcl
```

Next, create roles that have access to these policies. Often these roles are team-based, such as "engineering" or "ops", but in this case, create a role whose name is "org-" followed by our GitHub organization's name: mikenomitch. Repositories in this organization should be able to deploy to the "app-dev" namespace, and we should be able to set up a GitHub Action to deploy them on merge. Give this role access to the two new policies:

```shell
nomad acl role create -name="org-mikenomitch" -policy=app-developer -policy=default-read
```

Now, create a file defining an auth method for GitHub in auth-method.json:

```json
{
  "JWKSURL": "https://token.actions.githubusercontent.com/.well-known/jwks",
  "ExpirationLeeway": "1h",
  "ClockSkewLeeway": "1h",
  "ClaimMappings": {
    "repository_owner": "repo_owner",
    "repository_id": "repo_id",
    "workflow": "workflow",
    "ref": "ref"
  }
}
```

Then create it with the CLI:

```shell
nomad acl auth-method create -name="github" -type="JWT" -max-token-ttl="1h" -token-locality=global -config "@auth-method.json"
```

This tells Nomad to expect JWTs from GitHub, to verify them using the public key at JWKSURL, and to map key-value pairs found in the JWT to new names, which allows binding rules to be created against those values.
A binding rule sets up the complex auth logic requirements stated earlier in this post:

```shell
nomad acl binding-rule create \
  -description 'repo name mapped to role name, on main branch, for "Nomad JWT Auth" workflow' \
  -auth-method 'github' \
  -bind-type 'role' \
  -bind-name 'org-${value.repo_owner}' \
  -selector 'value.workflow == "Nomad JWT Auth" and value.ref == "refs/heads/main"'
```

The selector field tells Nomad to match JWTs only with certain values in the ref and workflow fields. The bind-type and bind-name fields tell Nomad to map JWTs that match this selector to specific roles. In this case, they refer to roles that have a name matching the GitHub organization name. If you wanted more granular permissions, you could match role names to repository IDs using the repo_id field.

So, the JWTs for repositories in the mikenomitch organization are given an ACL token with the role org-mikenomitch, which in turn grants access to the app-developer and default-read policies.

Nomad auth with a custom GitHub Action

Now you're ready to use a custom GitHub Action to authenticate into Nomad. This will expose a short-lived Nomad token as an output, which can be used by another step that uses simple bash to deploy any files in the ./nomad-jobs directory to Nomad.

The code for this action is very simple: it just calls Nomad's /v1/acl/login endpoint, specifying the GitHub auth method and passing in the GitHub Action's JWT as the login token. (See the code.)

To use this action, just push to GitHub with the following file at .github/workflows/github-actions-demo.yml:

```yaml
name: Nomad JWT Auth
on:
  push:
    branches:
      - main
      - master
env:
  PRODUCT_VERSION: "1.7.2"
  NOMAD_ADDR: "https://my-nomad-addr:4646"
jobs:
  Nomad-JWT-Auth:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup `nomad`
        uses: lucasmelin/setup-nomad@v1
        id: setup
        with:
          version: ${{ env.PRODUCT_VERSION }}
      - name: Auth Into Nomad
        id: nomad-jwt-auth
        uses: mikenomitch/nomad-jwt-auth@v0.1.0
        with:
          url: ${{ env.NOMAD_ADDR }}
          caCertificate: ${{ secrets.NOMAD_CA_CERT }}
        continue-on-error: true
      - name: Deploy Jobs
        run: for file in ./nomad-jobs/*; do NOMAD_ADDR="${{ env.NOMAD_ADDR }}" NOMAD_TOKEN="${{ steps.nomad-jwt-auth.outputs.nomadToken }}" nomad run -detach "$file"; done
```

Now you have a simple CI/CD flow on GitHub Actions that does not require manually managing tokens and is secured via identity-based rules and auto-expiring tokens.

Possibilities for JWT authentication in Nomad

With the JWT auth method, you can enable efficient workflows for tools like GitHub Actions, simplifying management of Nomad tokens for external applications. Machine-to-machine authentication is an important function in cloud infrastructure, yet implementing it correctly requires understanding several standards and protocols. Nomad's introduction of JWT authentication methods provides the necessary building blocks to make setting up machine-to-machine auth simple. This auth method extends the authentication methods made available in Nomad 1.5, which introduced SSO and OIDC support.

As organizations move towards zero trust security, Nomad users now have more choices when implementing access to their critical infrastructure. To learn more about how HashiCorp provides a solid foundation for companies to safely migrate and secure their infrastructure, applications, and data as they move to a multi-cloud world, visit our zero trust security page.
To try the feature described in this post, download the latest version of HashiCorp Nomad. View the full article
  7. This post was contributed by Ethan Heilman, CTO at BastionZero.

OpenPubkey is the web's new technology for adding public keys to standard single sign-on (SSO) interactions with identity providers that speak OpenID Connect (OIDC). OpenPubkey works by essentially turning an identity provider into a certificate authority (CA): a trusted entity that issues certificates that cryptographically bind an identity with a cryptographic public key. With OpenPubkey, any OIDC-speaking identity provider can bind public keys to identities today.

OpenPubkey is newly open-sourced through a collaboration of BastionZero, Docker, and the Linux Foundation. We'd love for you to try it out, contribute, and build your own use cases on it. You can check out the OpenPubkey repository on GitHub.

In this article, our goal is to show you how to use OpenPubkey to bind public keys to workload identities. We'll concentrate on GitHub Actions workloads, because this is what is currently supported by the OpenPubkey open source project. We'll also briefly cover how Docker is using OpenPubkey with GitHub Actions to sign Docker Official Images and improve supply chain security.

What's an ID token?

Before we start, let's review the OpenID Connect protocol. Identity providers that speak OIDC are usually called OpenID Providers, but we will just call them OPs in this article.

OIDC has an important artifact called an ID token. A user obtains an ID token after they complete their single sign-on to their OP. They can then present the ID token to a third-party service to prove that they have properly been authenticated by their OP. The ID token includes the user's identity (such as their email address) and is cryptographically signed by the OP. The third-party service can validate the ID token by querying the OP's JSON Web Key Set (JWKS) endpoint, obtaining the OP's public key, and then using the OP's public key to validate the signature on the ID token.

How do GitHub Actions obtain ID tokens?

So far, we've been talking about human identities (such as email addresses) and how they are used with ID tokens. But our focus in this article is on workload identities, and GitHub has a nice way to assign ID tokens to GitHub Actions. Here's how it works: GitHub runs an OpenID Provider. When a new GitHub Action is spun up, GitHub first assigns it a fresh API key and secret. The GitHub Action can then use its API key and secret to authenticate to GitHub's OP. GitHub's OP can validate this API key and secret (because it knows that it was assigned to the new GitHub Action) and then provide the GitHub Action with an OIDC ID token. This GitHub Action can now use this ID token to identify itself to third-party services.

When interacting with GitHub's OP, Docker uses the job_workflow_ref claim in the ID token as the workflow's "identity." This claim identifies the location of the file that the GitHub Action is built from, so it allows the verifier to identify the file that generated the workflow and thus also understand and check the validity of the workflow itself. Here's an example of how the claim could be set:

job_workflow_ref = octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main

Other claims in the ID tokens issued by GitHub's OP can be useful in other use cases. For example, there is a field called actor (with a companion actor_id), which identifies the person who kicked off the GitHub Action.
This could be useful for checking that the workload was kicked off by a specific person. (It's less useful when the workload was started by an automated process.) GitHub's OP supports many other useful fields in the ID token. You can learn more about them in the GitHub OIDC documentation.

Creating a PK token for workloads

Now that we've seen how to identify workloads using GitHub's OP, we will see how to bind that workload identity to its public key with OpenPubkey. OpenPubkey does this with a cryptographic object called the PK token.

To understand how this process works, let's go back and look at how GitHub's OP implements the OIDC protocol. The ID tokens generated by GitHub's OP have a field called audience. Importantly, the audience field is chosen by the OIDC client that requests the ID token. When GitHub's OP creates the ID token, it includes the audience along with the other fields (like job_workflow_ref and actor) that the OP signs when it creates the ID token.

So, in OpenPubkey, the GitHub Action workload runs an OpenPubkey client that first generates a new public-private key pair. Then, when the workload authenticates to GitHub's OP with OIDC, it sets the audience field equal to the cryptographic hash of the workload's public key along with some random noise. Now the ID token contains the GitHub OP's signature on the workload's identity (the job_workflow_ref field and other relevant fields) and on the hash of the workload's public key. This is most of what we need to have GitHub's OP bind the workload's identity and public key.

In fact, the PK token is a JSON Web Signature (JWS) which roughly consists of:

- The ID token, including the audience field, which contains a hash of the workload's public key.
- The workload's public key.
- The random noise used to compute the hash of the workload's public key.
- A signature, under the workload's public key, of all the information in the PK token. (This signature acts as a cryptographic proof that the user has access to the user-held secret signing key that is certified in the PK token.)

The PK token can then be presented to any OpenPubkey verifier, which uses OIDC to obtain the GitHub OP's public key from its JWKS endpoint. The verifier then verifies the ID token using the GitHub OP's public key and verifies the rest of the fields in the PK token using the workload's public key. Now the verifier knows the public key of the workload (as identified by its job_workflow_ref or other fields in the ID token) and can use this public key for whatever cryptography it wants to do.

Can you use ephemeral keys with OpenPubkey?

Yes! An ephemeral key is a key that is only used once. Ephemeral keys are nice because there is no need to store the private key anywhere, which improves security and reduces operational overhead. Here's how to do this with OpenPubkey: choose a public-private key pair, authenticate to the OP to obtain a PK token for the public key, sign your object using the private key, and finally throw away the private key.

One-time-use PK token

We can take this a step further and ensure the PK token may only be associated with a single signed object. Here's how it works. To start, we take a hash of the object to be signed.
Then, when the workload authenticates to GitHub's OP, it sets the audience claim equal to the cryptographic hash of the following items:

- The public key
- The hash of the object to be signed
- Some random noise

Finally, the OpenPubkey verifier obtains the signed object and its one-time-use PK token, and then validates the PK token by additionally checking that the hash of the signed object is included in the audience claim. Now you have a one-time-use PK token. You can learn more about this feature of OpenPubkey in the repo.

How will Docker use OpenPubkey to sign Docker Official Images?

Docker will be using OpenPubkey with GitHub Actions workloads to sign Docker Official Images. Every Docker Official Image will be created using a GitHub Action workload. The workload creates a fresh ephemeral public-private key pair, obtains the PK token for the public key via OpenPubkey, and finally signs the image using the private key. The private key is then deleted, and the image, its signature, and the PK token are made available on the Docker Hub container registry. This approach is nice because it doesn't require the signer to maintain or store the private key. Docker's container signing use case also relies heavily on The Update Framework (TUF), another Linux Foundation open source project. Read "Signing Docker Official Images Using OpenPubkey" for more details on how it all works.

What else can you do with OpenPubkey and GitHub Actions workloads?

Check out the following ideas on how to put OpenPubkey and GitHub Actions to work for you.

Signing private artifacts with a one-time key

Consider signing artifacts that will be stored in a private repository. You can use OpenPubkey if you want to have a GitHub Action cryptographically sign an artifact using a one-time-use key. A nice thing about this approach is that it doesn't require you to expose information in a public repository or transparency log; instead, you only need to post the artifact, its signature, and its PK token in the private repository. This capability is useful for private code repositories or internal build systems where you don't want to reveal to the world what is being built, by whom, when, or how frequently. If relevant, you could also consider using the actor and actor_id claims to bind the human who builds a particular artifact to the signed artifact itself.

Authenticating workload-to-workload communication

Suppose you want one workload (call it Bob) to process an artifact created by another workload (call it Alice). If the Alice workload is a GitHub Action, the artifact it creates could be signed using OpenPubkey and passed on to the Bob workload, which uses an OpenPubkey verifier to verify it using the GitHub OP's public key (which it would obtain from the GitHub OP's JWKS URL). This approach might be useful in a multi-stage CI/CD process.

And other things, too!

These are just strawman ideas. The whole point of this post is for you to try out OpenPubkey, contribute, and build your own use cases on it.

Other technical issues we need to think about

Before we wrap up, we need to discuss a few technical questions.

Aren't ID tokens supposed to remain private?

You might worry about applications of OpenPubkey where the ID token is broadly exposed to the public inside the PK token. For example, in the Docker Official Image signing use case, the PK tokens are made available to the public in the Docker Hub container registry.
If the ID token is broadly exposed to the public, there is a risk that the ID token could be replayed and used for unauthorized access to other services. For this reason, we have a slightly different PK token for applications where the PK token is made broadly available to the public. For those applications, OpenPubkey strips the OP's signature from the ID token before including it in the PK token. The OP's signature is replaced with a Guillou-Quisquater (GQ) non-interactive proof of knowledge of an RSA signature (also known as a "GQ signature"). Now the ID token cannot be replayed against other services, because the OP's signature is removed, and an ID token without a signature is useless.

So, in applications where the PK token must be broadly exposed to the public, the PK token is a JSON Web Signature which consists of:

- The ID token, excluding the OP's signature
- A GQ signature on the ID token
- The user's public key
- The random noise used to compute the hash of the user's public key
- A signature, under the user's public key, of all the information in the PK token

The GQ signature allows the client to prove that the ID token was validly signed by the identity provider (IdP) without revealing the OP's signature. The OpenPubkey client generates the GQ signature to cryptographically prove that it knows the OP's signature on the ID token, while still keeping the OP's signature secret. GQ signatures only work with RSA, but this is fine because every OpenID Connect provider is required to support RSA.

Because GQ signatures are larger and slower than regular signatures, we recommend using them only for use cases where the PK token must be made broadly available to the public. BastionZero's infrastructure access use case does not use GQ signatures because it does not require the PK token to be made public. Instead, the user only exposes their PK token to the target (e.g., server, container, cluster, database) that they want to access; this is the same way an ID token is usually exposed with OpenID Connect. GQ signatures might also not be necessary when authenticating workload-to-workload communications; if the Alice workload is passing the signed artifact and its PK token to the Bob workload only, there is less of a concern about the PK token being broadly available to the public.

What happens when the OP rotates its OpenID Connect key?

OPs have OpenID Connect signing keys that change over time (e.g., every two weeks). What happens if we need to use a PK token after the OP rotates its OpenID Connect key?

For some use cases, the lifetime of a PK token is short. With BastionZero's infrastructure access use case, for instance, a PK token will not be used for longer than 24 hours. In this use case, these timing problems are solved by (1) having the user re-authenticate to the IdP and create a new PK token whenever the IdP rotates its key, and (2) having the OpenPubkey verifier check that the client also has a valid OIDC refresh token along with the PK token whenever the ID token expires.

For other use cases, the PK token has a long life, so we do need to worry about the OP rotating its OpenID Connect keys. With Docker's container signing use case, this problem is solved by having TUF additionally store a historical log of the OP's signing keys. Anyone can keep a historical log of the OP public keys for use after they expire. In fact, we envision a future where OPs might keep this historical log themselves.

That's it for now! You can check out the OpenPubkey repo on GitHub.
We'd love for you to join the project, contribute, and identify other use cases where OpenPubkey might be useful.

Learn more

- Read "Signing Docker Official Images Using OpenPubkey."
- Get the latest release of Docker Desktop.
- Vote on what's next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.

View the full article
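As an aside for readers who want to experiment: the raw material OpenPubkey builds on, a GitHub Actions ID token with a caller-chosen audience, can be fetched from inside any job. A minimal sketch, where the audience value is a placeholder standing in for the hash of a fresh public key plus noise that a real OpenPubkey client would compute:

```yaml
name: fetch-id-token
on: [push]

jobs:
  get-id-token:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required to request an OIDC ID token
      contents: read
    steps:
      - name: Request an ID token with a custom audience
        # "sha256:EXAMPLE_PUBKEY_HASH" is a hypothetical placeholder value.
        run: |
          curl -sS -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=sha256:EXAMPLE_PUBKEY_HASH" \
            | jq -r '.value' > id_token.jwt
          wc -c id_token.jwt   # the signed JWT, ready to verify against GitHub's JWKS
```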
  8. As part of our continued efforts to improve the security of the software supply chain and increase trust in the container images developers create and use every day, Docker has begun migrating its Docker Official Images (DOI) builds to the GitHub Actions platform. Leveraging the GitHub Actions hosted, ephemeral build platform enables the creation of secure, verifiable images with provenance and SBOM attestations signed using OpenPubkey and the GitHub Actions OIDC provider.

DOI currently supports up to nine architectures for a wide variety of images, more than any other collection of images. As we increase the trust in the DOI catalog, we will spread the work over three phases. In the first phase, only Linux/AMD64 and Linux/386 images will be built on GitHub Actions. For the second phase, we eagerly anticipate the availability of GitHub Actions Arm-based hosted runners next year to add support for additional Arm architectures. In the final phase, we will investigate using GitHub Actions self-hosted runners to cover any image architectures not supported by GitHub Actions hosted runners.

In addition to using GitHub Actions, the new DOI signing approach requires establishing a root of trust that identifies who should be signing Docker Official Images. We are working with various relevant communities, such as the Open Source Security Foundation (OpenSSF, a Linux Foundation project), the CNCF TUF (The Update Framework) and in-toto projects, and the OCI technical community, to establish and distribute this trust root using TUF.

To ensure smooth and rapid developer adoption, we will integrate DOI TUF+OpenPubkey signing and verification into the container toolchain. These pluggable integrations will enable developers to seamlessly verify DOI signatures, ensuring the integrity and origin of these fundamental artifacts. Soon, verifying your DOI base image signatures will be integrated into the Build and push Docker images GitHub Action for a more streamlined workflow.

What's next

Looking forward, Docker will continue to develop and extend the TUF+OpenPubkey signing approach to make it more widely useful, enhancing and simplifying trust bootstrapping, signing, and verification. As a next step, we plan to work with Docker Verified Publishers (DVP) and Docker-Sponsored Open Source (DSOS) to expand signing support to additional Docker Trusted Content. Additionally, plans are in place to offer an integration of Docker Hub with GitHub Actions OIDC, allowing developers to push OCI images directly to Docker Hub using their GitHub Actions OIDC identities.

Learn more

- OpenPubkey FAQ
- Signing Docker Official Images Using OpenPubkey
- Docker Official Image Signing based on OpenPubkey and TUF

View the full article
  9. GitHub Actions is relatively new to the world of automation and Continuous Integration (CI). Providing ‘CI as a Service,’ GitHub Actions has many differences from its traditional rival platforms. In this post, we explore the differences between GitHub Actions and traditional build servers. We also look at whether GitHub Actions is a suitable option for building and testing your code. View the full article
  10. Although relatively new to the world of continuous integration (CI), GitHub Actions has seen GitHub's strong community build useful tasks that plug right into your repository. Actions let you run non-standard tasks to help you test, build, and push your work to your deployment tools. View the full article
  11. HashiCorp Terraform infrastructure deployments can always be run manually, but implementing Continuous Integration and Continuous Deployment (CI/CD) with GitHub Actions streamlines and automates the Terraform infrastructure-as-code (IaC) deployment workflow. Developers use GitHub Actions to automate the build and deployment of an application's code, and the same […] The article Terraform: GitHub Actions Automated Deployment appeared first on Build5Nines. View the full article
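A minimal sketch of such a pipeline (not the article's exact workflow): provider and backend credentials are omitted and would normally be supplied via OIDC federation or repository secrets.

```yaml
name: terraform-deploy
on:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Initialize the working directory
        run: terraform init
      - name: Plan the change set
        run: terraform plan -out=tfplan
      - name: Apply the saved plan (main branch only, per the trigger above)
        run: terraform apply -auto-approve tfplan
```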
  12. Palo Alto Networks has added support for GitHub Actions, GitLab Runners, CircleCI and Argo Workflows to Checkov, an open source tool that scans programmatically provisioned infrastructure for misconfigurations. Guy Eisenkot, senior director of product at Bridgecrew by Prisma Cloud at Palo Alto Networks, said the goal is to make it easier to secure configurations created […] The post Palo Alto Networks Extends Checkov Tool for Securing Infrastructure appeared first on DevOps.com. View the full article
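Checkov itself is a Python CLI, so one way to wire it into any of the CI systems named above is a plain install-and-scan step. A minimal GitHub Actions sketch (the scan directory and Python version are arbitrary choices):

```yaml
name: checkov-scan
on: [push, pull_request]

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Scan IaC for misconfigurations
        run: |
          pip install checkov
          checkov -d .   # recursively scans Terraform, CloudFormation, Kubernetes, etc.
```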
  13. Join us for episode 1 in our series DevOps for Java Shops! In this episode, Brian Benz walks us through how to deploy a Java application to Azure App Service using GitHub Actions. Brian also covers feature flags! View the full article
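A minimal sketch of the deployment shape the episode covers, assuming a Maven project, a service principal stored in an AZURE_CREDENTIALS secret, and a hypothetical App Service name (my-java-app):

```yaml
name: deploy-java-app
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: "17"
      - name: Build the application
        run: mvn -B package
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}   # service principal JSON stored as a secret
      - uses: azure/webapps-deploy@v3
        with:
          app-name: my-java-app    # hypothetical App Service name
          package: target/*.jar
```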
  14. Docker is happy to announce the GA of our V2 GitHub Action. We've been working with @crazy-max over the last few months, along with getting feedback from the wider community, on how we can improve our existing GitHub Action. We have now moved from our single action to a clearer division and an advanced set of options that not only let you build and push but also support features like multiple architectures and build cache... The post Docker V2 Github Action is Now GA appeared first on Docker Blog. View the full article
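A minimal sketch of the v2 action with the multi-architecture and cache options the post mentions. The image name and registry cache ref are placeholders, and era-appropriate v1 tags are assumed for the companion setup actions:

```yaml
name: docker-build
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-qemu-action@v1      # emulation for non-native platforms
      - uses: docker/setup-buildx-action@v1
      - uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          push: true
          platforms: linux/amd64,linux/arm64
          tags: user/app:latest                # placeholder image name
          cache-from: type=registry,ref=user/app:buildcache
          cache-to: type=registry,ref=user/app:buildcache,mode=max
```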
  15. Many organizations adopt DevOps practices to innovate faster by automating and streamlining the software development and infrastructure management processes. Beyond cultural adoption, DevOps also suggests following certain best practices, and Continuous Integration and Continuous Delivery (CI/CD) is among the most important ones to start with. CI/CD practice reduces the time it takes to release new software updates by automating deployment activities. Many tools are available to implement this practice, and although AWS has a set of native tools to help achieve your CI/CD goals, it also offers flexibility and extensibility for integrating with numerous third-party tools.

In this post, you will use GitHub Actions to create a CI/CD workflow and AWS CodeDeploy to deploy a sample Java SpringBoot application to Amazon Elastic Compute Cloud (Amazon EC2) instances in an Auto Scaling group.

GitHub Actions is a feature on GitHub's popular development platform that helps you automate your software development workflows in the same place that you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and then combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.

Solution overview

The solution utilizes the following services:

- GitHub Actions – Workflow orchestration tool that will host the pipeline.
- AWS CodeDeploy – AWS service to manage deployment on the Amazon EC2 Auto Scaling group.
- AWS Auto Scaling – AWS service to help maintain application availability and elasticity by automatically adding or removing Amazon EC2 instances.
- Amazon EC2 – Destination compute server for the application deployment.
- AWS CloudFormation – AWS infrastructure-as-code (IaC) service used to spin up the initial infrastructure on the AWS side.
- IAM OIDC identity provider – Federated authentication service to establish trust between GitHub and AWS, allowing GitHub Actions to deploy on AWS without maintaining AWS secrets and credentials.
- Amazon Simple Storage Service (Amazon S3) – Stores the deployment artifacts.

The following flow describes the architecture of the solution:

1. The developer commits code changes from their local repo to the GitHub repository. In this post, the GitHub Action is triggered manually, but this can be automated.
2. The GitHub Action triggers the build stage.
3. GitHub's OpenID Connect (OIDC) provider uses tokens to authenticate to AWS and access resources.
4. The GitHub Action uploads the deployment artifacts to Amazon S3.
5. The GitHub Action invokes CodeDeploy.
6. CodeDeploy triggers the deployment to Amazon EC2 instances in the Auto Scaling group.
7. CodeDeploy downloads the artifacts from Amazon S3 and deploys them to the Amazon EC2 instances.

Prerequisites

Before you begin, you must complete the following prerequisites:

- An AWS account with permissions to create the necessary resources.
- A GitHub account with permissions to configure GitHub repositories, create workflows, and configure GitHub secrets.
- A Git client to clone the provided source code.

Steps

The following steps provide a high-level overview of the walkthrough:

1. Clone the project from the AWS code samples repository.
2. Deploy the AWS CloudFormation template to create the required services.
3. Update the source code.
4. Set up GitHub secrets.
5. Integrate CodeDeploy with GitHub.
6. Trigger the GitHub Action to build and deploy the code.
7. Verify the deployment.

Download the source code

Clone the source code repository aws-codedeploy-github-actions-deployment:

git clone https://github.com/aws-samples/aws-codedeploy-github-actions-deployment.git

Create an empty repository in your personal GitHub account. To create a GitHub repository, see Create a repo. Clone this repo to your computer (you can ignore the warning about cloning an empty repository):

git clone https://github.com/<username>/<repoName>.git

Copy the code; we need contents from the hidden .github folder for the GitHub Actions to work:

cp -r aws-codedeploy-github-actions-deployment/. <new repository> e.g. GitActionsDeploytoAWS

Now you should have the following folder structure in your local repository:

- The .github folder contains the actions defined in the YAML file.
- The aws/scripts folder contains code to run at the different deployment lifecycle events.
- The cloudformation folder contains the template.yaml file to create the required AWS resources.
- spring-boot-hello-world-example is a sample application used by the GitHub Actions to build and deploy.
- The root of the repo contains appspec.yml, which is required by CodeDeploy to perform the deployment on Amazon EC2. Find more details here.

The following commands make sure that your remote repository points to your personal GitHub repository:

git remote remove origin
git remote add origin <your repository url>
git branch -M main
git push -u origin main

Deploy the CloudFormation template

To deploy the CloudFormation template, complete the following steps:

1. Open the AWS CloudFormation console. Enter your account ID, user name, and password.
2. Check your Region; this solution uses us-east-1.
3. If this is a new AWS CloudFormation account, select Create New Stack. Otherwise, select Create Stack.
4. Select Template is Ready, then select Upload a template file.
5. Select Choose File and navigate to the template.yml file in your cloned repository at aws-codedeploy-github-actions-deployment/cloudformation/template.yaml.
6. Select the template.yml file, and select Next.
7. In Specify Stack Details, add or modify the values as needed:
   - Stack name = CodeDeployStack
   - VPC and Subnets = (pre-populated for you; you can change these values if you prefer to use your own subnets)
   - GitHubThumbprintList = 6938fd4d98bab03faadb97b34396831e3780aea1
   - GitHubRepoName = the name of the personal GitHub repository you created
8. On the Options page, select Next.
9. Select the acknowledgement box to allow the creation of IAM resources, and then select Create.

It will take CloudFormation approximately 10 minutes to create all of the resources. The stack creates the following:

- Two Amazon EC2 Linux instances with the Tomcat server and CodeDeploy agent installed
- An Auto Scaling group with an internet-facing Application Load Balancer
- A CodeDeploy application and deployment group
- An Amazon S3 bucket to store build artifacts
- An IAM OIDC identity provider
- An instance profile for Amazon EC2
- A service role for CodeDeploy
- Security groups for the ALB and Amazon EC2

Update the source code

1. On the AWS CloudFormation console, select the Outputs tab. Note the Amazon S3 bucket name and the ARN of the GitHub IAM role; we will use these in the next steps.
2. Update the Amazon S3 bucket name in the workflow file deploy.yml.
Navigate to /.github/workflows/deploy.yml from your project root directory. Replace ##s3-bucket## with the name of the Amazon S3 bucket created previously, and replace ##region## with your AWS Region.

3. Update the Amazon S3 bucket name in after-install.sh. Navigate to aws/scripts/after-install.sh. This script copies the deployment artifact from the Amazon S3 bucket to the Tomcat webapps folder.

Remember to save all of the files and push the code to your GitHub repo. Verify that you're in your git repository folder by running the following command:

git remote -v

You should see your remote branch address, which is similar to the following:

username@3c22fb075f8a GitActionsDeploytoAWS % git remote -v
origin git@github.com:<username>/GitActionsDeploytoAWS.git (fetch)
origin git@github.com:<username>/GitActionsDeploytoAWS.git (push)

Now run the following commands to push your changes:

git add .
git commit -m "Initial commit"
git push

Set up GitHub secrets

The GitHub Actions workflows must access resources in your AWS account. Here we are using an IAM OpenID Connect identity provider and an IAM role with IAM policies to access CodeDeploy and the Amazon S3 bucket. OIDC lets your GitHub Actions workflows access resources in AWS without needing to store the AWS credentials as long-lived GitHub secrets; only the role ARN is stored as a GitHub secret within your GitHub repository, under Settings > Secrets. For more information, see "GitHub Actions secrets".

1. Navigate to your GitHub repository and select the Settings tab.
2. Select Secrets on the left menu bar.
3. Select Actions under Secrets, then select New repository secret.
4. Enter the secret name as IAMROLE_GITHUB, and enter the value as the ARN of GitHubIAMRole, which you copied from the CloudFormation output section.

Integrate CodeDeploy with GitHub

For CodeDeploy to be able to perform deployment steps using scripts in your repository, it must be integrated with GitHub. The CodeDeploy application and deployment group are already created for you; use these in the next step:

- CodeDeploy application = CodeDeployAppNameWithASG
- Deployment group = CodeDeployGroupName

To link a GitHub account to an application in CodeDeploy, follow the instructions on this page up to step 10. You can cancel the process after completing step 10; you don't need to create a deployment.

Trigger the GitHub Actions workflow

Now you have the required AWS resources and have configured GitHub to build and deploy the code to Amazon EC2 instances. The GitHub Actions defined in GITHUBREPO/.github/workflows/deploy.yml let us run the workflow. The workflow is currently set up to be run manually:

1. Go to your GitHub repo and select the Actions tab.
2. Select the Build and Deploy link, and select Run workflow.
3. After a few seconds, the workflow will be displayed. Select Build and Deploy.

You will see two stages:

- Build and Package. This stage builds the sample SpringBoot application, generates the war file, and then uploads it to the Amazon S3 bucket. You should be able to see the war file in the Amazon S3 bucket.
- Deploy. In this stage, the workflow invokes the CodeDeploy service and triggers the deployment.

Verify the deployment

Log in to the AWS Console and navigate to the CodeDeploy console. Select the application name and deployment group; you will see the status as Succeeded if the deployment is successful. Then point your browser to the URL of the Application Load Balancer.
Note: You can get the URL from the Outputs section of the CloudFormation stack or from the Amazon EC2 console under Load Balancers.

Optional – Automate the deployment on Git push

The workflow can be automated by changing the following lines of code in your .github/workflows/deploy.yml file.

From:

workflow_dispatch: {}

To:

#workflow_dispatch: {}
push:
  branches: [ main ]
pull_request:

GitHub Actions will then automatically run the workflow on every push or pull request on the main branch. After testing the end-to-end flow manually, you can enable the automated deployment.

Clean up

To avoid incurring future charges, you should clean up the resources that you created:

1. Empty the Amazon S3 bucket.
2. Delete the CloudFormation stack (CodeDeployStack) from the AWS console.
3. Delete the GitHub secret (IAMROLE_GITHUB): go to the repository settings on the GitHub page, select Secrets under Actions, select IAMROLE_GITHUB, and delete it.

Conclusion

In this post, you saw how to leverage GitHub Actions and CodeDeploy to securely deploy a Java SpringBoot application to Amazon EC2 instances behind an AWS Auto Scaling group. You can further add other stages to your pipeline, such as testing and security scanning, and this solution can be used for other programming languages. A condensed sketch of the deploy workflow's shape appears below this item.

About the authors

Mahesh Biradar is a Solutions Architect at AWS. He is a DevOps enthusiast and enjoys helping customers implement cost-effective architectures that scale. Suresh Moolya is a Cloud Application Architect with Amazon Web Services. He works with customers to architect, design, and automate business software at scale on AWS cloud.

View the full article
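A condensed sketch of what such a deploy.yml can look like (this is not the sample repository's exact file): the application and deployment group names come from the CloudFormation stack above, the role ARN is read from the IAMROLE_GITHUB secret, and the build and artifact-upload steps are elided.

```yaml
name: Build and Deploy
on:
  workflow_dispatch: {}

permissions:
  id-token: write   # required for GitHub-to-AWS OIDC federation
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.IAMROLE_GITHUB }}   # ARN from the CloudFormation outputs
          aws-region: us-east-1
      # ... build the war file and upload it to the artifact S3 bucket here ...
      - name: Trigger the CodeDeploy deployment
        run: |
          aws deploy create-deployment \
            --application-name CodeDeployAppNameWithASG \
            --deployment-group-name CodeDeployGroupName \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }}
```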
  16. Many organizations adopt DevOps practices to innovate faster by automating and streamlining the software development and infrastructure management processes. Beyond cultural adoption, DevOps also suggests following certain best practices, and Continuous Integration and Continuous Delivery (CI/CD) is among the most important ones to start with. CI/CD practice reduces the time it takes to release new software updates by automating deployment activities. Many tools are available to implement this practice. Although AWS has a set of native tools to help achieve your CI/CD goals, it also offers flexibility and extensibility for integrating with numerous third-party tools. In this post, you will use GitHub Actions to create a CI/CD workflow and AWS CodeDeploy to deploy a sample Java SpringBoot application to Amazon Elastic Compute Cloud (Amazon EC2) instances in an Auto Scaling group... View the full article