Showing results for tags 'hcp packer'.
-
HCP Packer is a powerful tool for managing the lifecycle of image artifacts at scale across any cloud or on-premises environment. We are excited to announce the addition of Packer version and plugin version tracking, now available in HCP Packer and the latest version of Packer Community Edition (1.10.1+). With these additions, users can now quickly check the versions of Packer Community Edition or associated plugins used while creating a build artifact. This enhancement lays the foundation for a secure build pipeline and helps organizations ensure they are leveraging the latest Packer features.

Artifact governance challenges

As the security demands on the software supply chain grow, organizations increasingly recognize the governance of their base images and build artifacts as a pivotal concern. Without provenance and a clear lineage of where and how each artifact was built, organizations face heightened security threats from unverified software components. Organizations must ensure they employ only trusted artifacts, validated at each stage of their lifecycle, to maintain the integrity and security of their software supply chain. It can be difficult to verify an artifact's legitimacy and compliance without proper visibility into its creation pipeline.

Improving build visibility

HCP Packer plays a crucial role in the software supply chain by managing the resources at the foundation of infrastructure pipelines: image artifacts. Through proper image management, organizations can shift their security left and address risks earlier in the infrastructure deployment process. With the addition of Packer version and plugin version tracking, users can now see which version of Packer Community Edition or plugins were used for each of their artifacts, directly in the HashiCorp Cloud Platform (HCP). This enhancement marks another step towards complete artifact provenance by providing users with more visibility into the tools used to create an artifact and allowing them to use this information for troubleshooting and risk mitigation.

Learn more

To learn more about HCP Packer, visit the HCP Packer introduction page on HashiCorp Developer. Get started with HCP Packer for free to track and manage artifacts across all your cloud environments. View the full article
-
HCP Packer webhooks now generally available
Hashicorp posted a topic in Infrastructure-as-Code
Today we are excited to announce the general availability of HCP Packer’s webhooks, first introduced at HashiConf in October 2023, which let users notify external systems about specific image events using automation. HCP Packer is a powerful tool that provides image lifecycle management at scale across any cloud and on-premises environment. The addition of webhooks helps organizations further streamline and secure image-related workflows across their multi-cloud infrastructure estate.

Previously, when a user completed specific actions in HCP Packer, they then needed to manually orchestrate various external workflows to achieve consistency across their infrastructure. For example, if a user revoked an artifact version, they would need to delete the image in the cloud provider and communicate this update to the appropriate team members. These manual steps added further complexity to image management workflows and opened organizations to security risks caused by human errors.

Webhooks for HCP Packer

A webhook is a lightweight, event-driven communication that allows two applications to send data to one another as soon as a specific event occurs. With the introduction of webhooks for HCP Packer, users can now implement automation when interacting with the HashiCorp Cloud Platform (HCP) for image lifecycle events such as:

Creation, completion, and deletion
Revocation and restoration
Scheduling and canceling a scheduled revocation
Assignment to a channel

These automation workflows can be set up and edited directly in HCP. Example workflows include initiating functional tests via HashiCorp Terraform Cloud after publishing a new image version, deleting the image or template in the cloud provider when the corresponding artifact version is deleted, and sending notifications to stakeholders when these events occur.

Webhook benefits

Webhooks bring two key benefits to HCP Packer:

Workflow automation: Webhooks let you automate processes triggered by specific events to reduce manual effort and increase velocity when integrating with existing external pipelines and Terraform Cloud.
Enhanced security: Through automation, organizations can mitigate the risk of human errors such as missed notifications and forgotten image management tasks that could lead to outdated and insecure images being left deployed.

Summary and resources

To learn more about using webhooks for HCP Packer, please refer to the documentation and demo video:

Create and manage HCP Packer webhooks
HCP Packer webhook events

Get started with HCP Packer for free to track and manage artifacts across all your cloud environments. View the full article
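For teams that manage HCP resources with Terraform, webhook registration can also be expressed as code. The sketch below uses the HCP provider's notifications webhook resource; the resource name, argument layout, and event source string are assumptions based on recent HCP provider releases, and the endpoint URL and variable are made up, so verify everything against the current provider documentation before relying on it.

resource "hcp_notifications_webhook" "packer_events" {
  # Assumed resource name and schema; verify against the HCP provider docs.
  name        = "packer-artifact-events"
  description = "Forward HCP Packer lifecycle events to an internal endpoint"

  config = {
    url      = "https://hooks.example.com/hcp-packer" # illustrative receiver URL
    hmac_key = var.webhook_hmac_key                   # hypothetical variable; used to sign payloads for verification
  }

  subscriptions = [{
    events = [{
      actions = ["*"]                      # all actions for the source
      source  = "hashicorp.packer.version" # assumed event source identifier
    }]
  }]
}

The receiving service can then validate the HMAC signature on each delivery before acting on the event, which keeps the automation path trustworthy.
-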
Predictable plugin loading in Packer 1.11
Hashicorp posted a topic in Infrastructure-as-Code
In HashiCorp Packer 1.11, now available as an alpha release, we are introducing a predictable approach to plugin loading. With this change, Packer will no longer load plugin binaries installed outside of its plugin directory, nor will Packer load plugin binaries without their respective SHA256SUM file. To aid this transition, Packer 1.11 includes tooling updates to simplify the plugin loading process.

What's changing in Packer 1.11?

The Packer team has been consistently working on reducing the pain points around plugin usage introduced with required_plugins in Packer 1.7. In user interviews, community forum posts, and various GitHub issues, we continue to hear about difficulties installing plugins into the correct directories, and about plugin developers facing challenges with the tooling to test their locally built binaries due to the various plugin loading options in Packer.

As it stands today, plugins following one of the two naming conventions, packer-plugin-happycloud or packer-plugin-happycloud_v0.0.1_x5.0_darwin_arm64, placed within one of the known directories will get automatically loaded by Packer. The discovery and loading of a plugin within a known directory is a feature that has been in place since the early days of Packer. But in Packer 1.7, with the introduction of required_plugins and packer init, the flexibility of automatic plugin discovery could be confusing to some.

Local plugin installation tools

In Packer 1.10 we introduced the ability to install a locally sourced plugin using packer plugins install --path, which made the use of locally installed plugins compatible with required_plugins and packer init. In Packer 1.10.2, we extended this behavior to support development plugin binaries — binaries that report "dev" as part of their plugin version. The Packer team decided on "dev" prereleases over "alpha", "beta", and "rc" for plugins to minimize the level of complexity around version pinning used for required_plugins and packer init.

Using development binaries with Packer

Plugins required through required_plugins have a version constraint that dictates the version of a plugin needed for executing a build. These constraints do not include development versions because installation through commands like packer init is unsupported. However, development plugins are now evaluated at runtime provided their version matches the constraints specified. For example:

amazon = {
  source  = "github.com/hashicorp/amazon"
  version = ">= 1.1.0"
}

Given the specified version constraint above, only versions greater than or equal to 1.1.0 will be considered. If you have a development binary (i.e. a manually built plugin binary) installed, Packer will use it if:

It is the highest compatible version installed.
There is no final plugin version with the same version number installed alongside it.

So assuming the following hierarchy:

/Users/dev/.packer.d/plugins
└── github.com
    └── hashicorp
        └── amazon
            ├── packer-plugin-amazon_v1.1.0_x5.0_darwin_arm64
            ├── packer-plugin-amazon_v1.1.0_x5.0_darwin_arm64_SHA256SUM
            ├── packer-plugin-amazon_v1.1.1-dev_x5.0_darwin_arm64
            └── packer-plugin-amazon_v1.1.1-dev_x5.0_darwin_arm64_SHA256SUM

Version 1.1.1-dev of the Amazon plugin will match the specified version constraint and be used for executing the Packer build. If, however, a 1.1.1 release version of the plugin is available, it will take precedence over the development binary:

/Users/dev/.packer.d/plugins
└── github.com
    └── hashicorp
        └── amazon
            ├── packer-plugin-amazon_v1.1.1-dev_x5.0_darwin_arm64
            ├── packer-plugin-amazon_v1.1.1-dev_x5.0_darwin_arm64_SHA256SUM
            ├── packer-plugin-amazon_v1.1.1_x5.0_darwin_arm64
            └── packer-plugin-amazon_v1.1.1_x5.0_darwin_arm64_SHA256SUM

Here, version 1.1.1 of the plugin will match the specified version constraint and be used for executing the Packer build.

Dropping support for legacy single-component plugins

In Packer 1.11.0, we removed Packer's ability to load single-component plugins. These are legacy plugins following the previously deprecated naming conventions packer-builder-happycloud or packer-provisioner-happycloud-shell; Packer now supports only multi-component plugins like the Docker plugin for Packer.

Stricter plugin loading

In Packer 1.11, we are dropping support for loading plugin binaries that only follow the naming convention packer-plugin-name. Packer will now load only plugins stored under PACKER_PLUGIN_PATH using the expected namespaced directory and SHA256SUM files. This change drops support for loading plugin binaries in Packer's executable directory or a template's current working directory:

/Users/dev/.packer.d/plugins
└── github.com
    └── hashicorp
        └── happycloud
            ├── packer-plugin-happycloud_v0.0.1_x5.0_darwin_arm64
            └── packer-plugin-happycloud_v0.0.1_x5.0_darwin_arm64_SHA256SUM

What does this mean for Packer users?

As Packer users, if your templates leverage the required_plugins Packer block and you're installing plugins via packer init, your workflows will continue working as they do today. If, however, you use plugins that live outside of Packer's known plugin directory or manually manage Packer plugin directories, you may need to change your plugin management workflow. Packer will no longer support the loading of plugin binaries installed alongside the Packer binary or in the current working directory. Instead of manually placing a downloaded binary into the executable or current working directory, we encourage you to run the command packer plugins install --path <path-to-downloaded-extracted-binary> github.com/hashicorp/happycloud to install the binary into a Packer-compatible path. Running the install command with the --path option will generate the associated SHA256SUM file for validating the locally installed plugin. If you prefer to manage the installation manually, you can do so, but you will be required to manually construct the namespaced sub-directories and SHA256SUM file. This may sound like a lot of work for installing a binary, but it provides a consistent manner for installing plugins, ensures Packer will load the correct binary at runtime, and makes all plugins compatible with required_plugins and packer init. HCL (HashiCorp Configuration Language) users can safely pin plugin versions and use dev prereleases without having to change any template, as the use of development binaries with required_plugins now works out of the box.

Installing development binaries in Packer

The changes mentioned in this blog post give plugin developers, and users who have to build their own versions of trusted plugins, the ability to use these binaries without conflicting with the Packer plugin pinning mechanism. Practitioners' use of manually built plugin binaries, what HashiCorp calls "development binaries", is a common practice given the open source nature of Packer plugins. In Packer 1.11, we've updated the plugin tooling to treat development binaries as first-class citizens in Packer. A full explanation of how to build development binaries has been documented within the Packer plugin scaffolding repository. Below is a general overview of the new workflow for using development binaries. As an example, to build a custom version of the Docker plugin and install it so Packer will be able to use it, you may follow these steps:

Clone the plugin's GitHub repository.
In the plugin directory root, run go build to build the plugin as a development binary.
Use the packer plugins install command to install the development binary.
Run a Packer build with the newly installed plugin.

~> git clone https://github.com/hashicorp/packer-plugin-docker.git
~> cd packer-plugin-docker
~> go build -ldflags="-X github.com/hashicorp/packer-plugin-docker/version.VersionPrerelease=dev" -o packer-plugin-docker-dev

# Let's validate that it's a development prerelease
~> ./packer-plugin-docker-dev describe
{"version":"1.0.10-dev","sdk_version":"0.5.2","api_version":"x5.0","builders":["-packer-default-plugin-name-"],"post_processors":["import","push","save","tag"],"provisioners":[],"datasources":[]}

# Let's install the development binary
~> packer plugins install --path packer-plugin-docker-dev github.com/hashicorp/docker
Successfully installed plugin github.com/hashicorp/docker from $HOME/Development/packer-plugin-docker/packer-plugin-docker-dev to ~/github.com/hashicorp/docker/packer-plugin-docker_v1.0.10-dev_x5.0_darwin_arm64

Note: For convenience, the Makefile within the Packer plugin scaffolding repository has been updated to automate building and installing development binaries via make dev.

Next steps

We invite you to test the Packer 1.11 alpha release, which is available on HashiCorp Releases. Please let us know how the new plugin loading experience works for you. If you encounter any issues or have any suggestions on how we can improve the loading experience, feel free to start a discussion on the Packer GitHub issue tracker or community forum. View the full article
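To round out the example, the template consuming that locally installed binary would pin the plugin with a required_plugins block like the one below. The version constraint is illustrative; per the loading rules described above, the 1.0.10-dev binary is only selected if it is the highest compatible version installed and no final release of the same version sits alongside it.

packer {
  required_plugins {
    docker = {
      # Matches the locally installed 1.0.10-dev binary as long as no final
      # release with an equal or higher version is installed alongside it.
      source  = "github.com/hashicorp/docker"
      version = ">= 1.0.0"
    }
  }
}
-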
In the ever-evolving landscape of IT infrastructure, the ability to create custom images efficiently and consistently is a game-changer. This is where HashiCorp Packer comes into play, a powerful tool that revolutionizes the image creation process across platforms such as AWS, Azure, and GCP, among others. This blog post, based on the HashiCorp Packer course offered by KodeKloud, serves as a comprehensive guide to mastering HashiCorp Packer... View the full article
-
HCP Packer is a powerful tool for tracking, governing, and managing image artifacts across multi-cloud environments. Today at HashiConf, we are introducing two new features for HCP Packer: project-level webhooks and streamlined run task reviews. Project-level webhooks allow users to notify external systems about specific HCP Packer events using automation. Streamlined run task reviews provide meaningful context on run task evaluations for the HCP Packer run task on HashiCorp Terraform Cloud, building on the new functionality released in September. These two additions help organizations improve the efficiency and security of image-related workflows across their multi-cloud infrastructure estate... View the full article
-
Since the beginning of the project, HashiCorp Packer has supported extending its capabilities through plugins. These plugins are built alongside community contributors and partners to help Packer support building images for many cloud providers and hypervisors. In the past, to help Packer users get up and running quickly, popular plugins were bundled into the main Packer binary. This had advantages, notably that users did not have to install plugins separately in order to use them. However, as the plugin system grew, bundling all plugins introduced maintenance issues... View the full article
-
In today’s multi-cloud world, images (such as AMIs for Amazon EC2, virtual machines, Docker containers, and more) lay the foundation for modern infrastructure, security, networking, and applications. Enterprises adopting multi-cloud typically start by using Terraform for centralized provisioning, but Terraform does not handle the details of image creation and management. In many organizations, the workflows in place to create and manage images are siloed, time-consuming, and complex, leading to slow spin-up times and human errors that pose security risks. Organizations need standard processes to ensure all images throughout their infrastructure estate are secure, compliant, and easily accessible... View the full article
-
-
We're excited to announce the version 2.0.0 release of the Packer Azure plugin, which enables users to build Azure virtual hard disks, managed images, and Compute Gallery (shared image gallery) images. The plugin is one of the most popular ways to build Azure Virtual Machine images and is used by Microsoft Azure via the Azure Image Builder.

For the past year, we have been tracking the changes to the Azure SDKs and keeping our eyes on the upcoming deprecations, which were sure to disrupt how Packer interacts with Azure. When we found that the version of the Azure SDK the Packer plugin was using would soon be deprecated, we began work to migrate to the Terraform-tested HashiCorp Go Azure SDK. The HashiCorp Go Azure SDK is generated from and based on the Azure API definitions to provide parity with the official Azure SDK — making it a near drop-in replacement for the Azure SDK, with the ability to resolve issues around auto-rest, polling, and API versioning.

Version 2.0.0 of the Packer Azure plugin addresses the known deprecations with minimal disruption to the user, introduces new highly requested features, and combines the stability of the Packer Azure plugin with the Terraform Azure provider. View the full article
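For templates managed with packer init, moving to the new major release is largely a matter of adjusting the plugin's version constraint. A minimal sketch is below; the constraint value is illustrative, and it is worth reviewing the plugin changelog for any breaking changes before upgrading.

packer {
  required_plugins {
    azure = {
      source  = "github.com/hashicorp/azure"
      version = ">= 2.0.0"
    }
  }
}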
-
HCP Packer, a powerful tool for tracking, governing, and managing image artifacts across multi-cloud environments, has just added audit logs. This enhancement provides a way for organizations to document and monitor user activity across HCP Packer and gain visibility into the activity occurring in the registry. Audit logs are now generally available for all Plus-tier artifact registries.

Understanding resource lifecycles is crucial to successfully implementing secure multi-cloud infrastructure. According to the 2023 HashiCorp State of the Cloud Strategy Survey, security ranks as the #1 factor in determining the success of an organization's multi-cloud strategy. But without granular data and effective monitoring capabilities, it can be difficult to uncover security and compliance risks or identify if a breach has occurred. A lack of historical user activity data can also prevent organizations from uncovering insights regarding workflow inefficiencies and wasteful cloud spend.

Introducing audit logs

Organizations can now gain complete visibility into their HCP Packer activity by relying on the audit log capabilities of the HashiCorp Cloud Platform (HCP). This offers insight into the lifecycles of their image iterations, buckets, and channels. This new functionality provides users with a description, metadata, and streaming capabilities for critical user activity information.

Audit log descriptions and metadata

HCP Packer's audit logs consist of a description (string) and a metadata field (JSON object/hashmap). All HCP Packer audit logs contain common metadata, which relays information such as audit log status, type of operation being performed, organization and project ID, timestamp, and who initiated the operation. In addition to the common fields, audit logs also send messages on specific operations taking place in HCP Packer, such as:

Buckets: Image bucket creation, deletion, updates, and labeling
Iterations: Image iteration creation, completion, revocation, deletion, and restoration
Builds: Build creation and updates
Channels: Channel creation, updates, deletion, and iteration assignments

For a full list of all audit log types, their descriptions, and metadata, please refer to the HCP Packer documentation. Below is an example of the metadata received when a user updates an image build:

{
  "action": "update",
  "actor": {
    "principal_id": "test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed",
    "service": {
      "id": "test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed",
      "name": "test-auditlogs",
      "user_managed": true
    },
    "type": "TYPE_SERVICE"
  },
  "bucket": {
    "id": "01GXXGSNEE1EMJEZ0TEH7KCQVX",
    "slug": "bucket-test"
  },
  "build": {
    "cloud_provider": "aws",
    "component_type": "aws",
    "id": "01H5APPBYYF4D0NMVZCRKR85E7",
    "images": [
      {
        "image_id": "ami-f2",
        "region": "us-west-2"
      }
    ],
    "labels": {
      "os": "ubuntu"
    },
    "status": "DONE"
  },
  "description": "Updated build",
  "iteration": {
    "fingerprint": "f14",
    "id": "01H5APNAK1BNEVMK3HPS7KZANV",
    "version": "v5"
  },
  "organization_id": "77f447d4-def0-46f2-bf09-6850d36745ed",
  "project_id": "a98c3c31-5760-4db1-b62b-0988080a66ad",
  "registry": {
    "id": "01GNZQS84K3PTGVVB2YY9R81BC"
  },
  "status": "OK",
  "timestamp": "2023-07-14T17:21:09Z"
}

Audit log streaming

HCP Packer's audit logs allow for near real-time streaming of audit events using industry-standard monitoring solutions such as Amazon CloudWatch or Datadog. Forwarding audit logs into these services allows organizations to observe higher-level correlations across all services and activities based on user or resource.
Simply select the appropriate provider, enter the configuration details, and begin monitoring the full request and response objects for every interaction with the resource.

Amazon CloudWatch

The log group and log streams are dynamically created and can be found in the Amazon CloudWatch console with the prefix "/hashicorp". This lets you clearly see audit logs coming from HashiCorp separately from other log inputs you may have in CloudWatch.

Datadog

Logs arrive within your Datadog environment a few minutes after Packer usage. Refer to the Datadog documentation for details on log exploration.

Key benefits of audit logs

Collecting activity data with HCP Packer's audit logs helps organizations improve their:

Visibility: by enabling administrators to actively monitor a stream of user activity and answer important questions such as who is changing what? When? And where?
Security: by providing access to a historical audit trail that enables security teams to ensure compliance standards are met in accordance with regulatory requirements

Summary and resources

For more information on audit logs for HCP Packer, please refer to the documentation:

Audit log descriptions and metadata
Audit log streaming

Get started with HCP Packer for free to track and manage artifacts across all your cloud environments. View the full article
-
HCP Packer, a powerful tool for tracking and managing image artifacts across multi-cloud environments, has just released two new features:

Restricted channels
Channel management with Terraform

Channels let you label and track image iterations and give you control over the delivery of your artifacts. These new additions provide further permissions control for your image channels and help enable a golden image pipeline with Terraform.

Artifact management with channels

Teams working in software development understand the importance of versioning and tagging applications to keep track of changes and ensure their code is up to date. Similarly, it is crucial to version and tag image artifacts to maintain up-to-date infrastructure. HCP Packer channels allow you to do just that. With channels, you can label image iterations to describe the quality and stability of a build. By assigning iterations human-readable names, downstream consumers can easily reference the images in Packer templates and Terraform configurations. As you release new image versions, the iteration associated with the channel is automatically updated. This makes it simple for consumers to reference the correct version from the registry without having to update their code and ensures that the latest image version is always in use.

Today we will cover five recent improvements to HCP Packer channels that give you further visibility and control, and simplify management of your artifact estate:

Channel overview page
Channel assignment history
Channel rollback
Restricted channels
Channel management with Terraform

Channel overview

For HCP Packer users to ensure their infrastructure is always up to date, they need visibility and control over all image artifacts. The overview page provides high-level info for your image channels in a centralized location. From here you can review details on an iteration, such as its status, image ancestry, and channel assignment history.

Channel assignment history

As images are published, assigned, and revoked over time, it is important to maintain visibility into their history. Channel assignment history provides a complete record of artifact activity in a channel. You can browse any existing bucket and select a channel to see exactly which iterations have been made available to downstream consumers. From here you can view each image iteration's channel history, the status of its parent image, and extended metadata. For Plus-tier HCP subscribers, the complete history of channel iterations is tracked and saved for a full year. This page provides further visibility to platform teams, allowing them to see when all iterations were assigned and by whom.

Channel rollback

Channel rollback builds on the availability of channel assignment history and provides quicker remediation of released artifacts. When revoking a currently assigned iteration, you can now choose to roll back channels to their previously assigned iteration. This also works with HCP Packer's inherited revocation to automatically roll back the channel assignments of any descendant images when a parent image is revoked. This workflow allows organizations to reduce their time to remediation during a security incident without impacting downstream provisioning processes.

Restricted channels

Image builders need to collaborate with other stakeholders to validate that new image iterations meet compliance and functionality requirements before releasing them to downstream consumers.
Restricted channels provide control over the release of images by providing a means to limit channel access for other collaborators. This granular permissions control lets you ensure only the necessary users have channel access and enables the least-privilege principle. This addition also helps streamline the image-validation process and prevents downstream consumers from using new image iterations before they have been validated and approved.

Better together: Channel management with Terraform

A golden image is an image on top of which developers can build applications, letting them focus on the application itself instead of system dependencies and patches. A typical golden image includes the most up-to-date common system, logging, and monitoring tools, security patches, and application dependencies. Traditionally, operations and security teams had to cross-reference spreadsheets, personally inform downstream developers, and manually update build files when they released new golden images. HCP Packer and Terraform Cloud's unified workflow enables users to simplify this process and create a successful golden image pipeline. The HCP Packer registry helps users track image metadata and storage location, and provides the correct image to developers automatically through Packer and Terraform integrations.

Previously, channel assignment relied on customers exiting Terraform to orchestrate API calls with custom scripting or workflow actions. This led to unnecessary friction during activation and difficulties incorporating HCP Packer into established infrastructure as code (IaC) workflows. With version 0.54 and newer of the HCP provider, you can now create, delete, and update channels directly from Terraform. This new feature deepens the integration of HCP Packer and Terraform Cloud, providing a consolidated and streamlined approach to artifact image management across the two products. Here's an example HCL snippet to update channels directly from Terraform:

resource "hcp_packer_channel" "staging" {
  name        = "staging"
  bucket_name = "alpine"

  iteration {
    id = "iteration-id"
  }
}

Summary and resources

HCP Packer's channels provide control and visibility over image artifacts and enable a golden image pipeline with Terraform. The recent improvements to channel management demonstrate HashiCorp's commitment to simplifying artifact management across multi-cloud environments and enabling platform teams to keep their infrastructure up to date. To learn more about HCP Packer's channels and artifact management, check out the following resources:

Documentation
Image channels
Channel resource in the HCP provider

Tutorials
Revoke an image and its descendants using inherited revocation
Control images with channels

Demo videos
Channel assignment history and automated rollback
Tracking golden image ancestry with HCP Packer

Get started with HCP Packer for free to track and manage artifacts across all your cloud environments. View the full article
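On the consumer side, a Terraform configuration typically resolves a channel to a concrete cloud image through the HCP provider's data sources, which pairs naturally with the hcp_packer_channel example above. The sketch below reflects the hcp_packer_iteration and hcp_packer_image data sources as documented for HCP provider releases in this timeframe; the bucket, channel, region, and instance values are illustrative, and attribute names should be confirmed against the current provider documentation.

data "hcp_packer_iteration" "alpine" {
  bucket_name = "alpine"
  channel     = "staging"
}

data "hcp_packer_image" "alpine_us_west_2" {
  bucket_name    = "alpine"
  cloud_provider = "aws"
  iteration_id   = data.hcp_packer_iteration.alpine.ulid
  region         = "us-west-2"
}

resource "aws_instance" "app" {
  # cloud_image_id is the provider-specific image ID (an AMI ID on AWS).
  ami           = data.hcp_packer_image.alpine_us_west_2.cloud_image_id
  instance_type = "t3.micro"
}

Because the data source reads the channel at plan time, promoting a new iteration to "staging" is all that is needed for the next Terraform run to pick up the new image.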
-
HCP Packer, a powerful tool for creating and tracking image artifacts across multi-cloud environments, has just released two new features: channel assignment history and channel rollback. These additions provide further visibility into image channels and enable a simple, one-click remediation workflow. This post will cover some challenges of artifact management and explain how HCP Packer's new features aim to solve them.

»Artifact management challenges

For HCP Packer users to successfully implement a golden image pipeline, they need to have visibility into all artifacts created. Currently, visibility into the lifecycle of iterations ends when they are removed from a channel. Administrators can't see which artifacts have been made available to downstream consumers and when. This lack of visibility makes it difficult to track the assignment history of iterations and roll back a channel without custom tooling. When a user tries to revoke an iteration currently assigned to one or more channels, it results in an error. This makes channel rollback a multi-step process prone to human error and often requires platform teams to update multiple channels before remediation can occur. Our latest features, channel assignment history and rollback, provide a complete record of artifact activity in a channel and a simple workflow for remediation.

»Channel assignment history

Users can now browse any existing bucket and select a channel to see exactly which iterations have been made available to downstream consumers. For Plus-tier subscribers, the complete history of channel iterations is tracked and saved for a full year. Users can click through to view each channel's history and extended metadata. This allows platform teams to see when a specific iteration was assigned and by whom.

»Channel rollback

Channel rollback builds on the availability of channel assignment history and aims to provide quicker remediation of released artifacts. Users can now automatically roll back channels to the previously assigned iteration when revoking a currently assigned iteration.

»Benefits

Greater visibility into artifacts: Users can now see when images were made available to downstream consumers and the user that assigned them. They can also easily view the status of parent images to ensure they are up to date. Visibility into the extended metadata for each channel assignment helps users ensure their internal compliance requirements are met. Access to channel assignment history provides a more complete view of image lifecycles throughout their artifact estate.

Efficient remediation workflows: Channel rollback provides a simpler remediation process with one-click access to assign a previous iteration. This approach requires less custom code to automate reliable rollback. This workflow is especially useful when revoking a currently published iteration, and it allows organizations to reduce their time to remediation during a security incident.

»Summary and resources

Channel assignment history and rollback build on HCP Packer's current capabilities to increase visibility into users' artifact estate and simplify remediation workflows. To learn more about channel assignment history and rollback, check out the image channels and revocation documentation, image revocation tutorial, and demo video. Get started with HCP Packer for free to track and manage artifacts across all your cloud environments. View the full article
-
We are excited to announce the release of Image Ancestry Tracking for HCP Packer, now generally available in the HashiCorp Cloud Platform (HCP). This new feature allows users to track the relationships between machine images and provides a workflow for revoking an image and all its descendants at once. This post will cover the challenges of image relationship management and the details of HCP Packer's new feature.

»Understanding Image Relationships

A typical approach for image management is to first build a set of common base or "golden" images for a given operating environment. These base images can be thought of as a parent. They contain the organization's standard configurations, such as security and compliance policies. Child images are then built from these base images to meet specific application needs.

»Image Tracking Challenges

Tracking the relationships between parent and child images can be difficult and often involves manual processes. This can lead to unclear parent-child dependencies and inconsistent statuses when remediating security or configuration issues in base images. Without manual tracking and intervention, child images could be left referencing out-of-date parent images. Currently, users can only trace and revoke one image iteration at a time if a vulnerability is found. There is no way to visualize the child images dependent on that image iteration. The impact of changing a base image may not be fully understood without details on its downstream dependencies.

»Introducing Image Ancestry Tracking

Image ancestry tracking gives users visibility into image relationships and remediates descendant images when a parent image is revoked.

»Track Parent-Child Relationships

Image ancestry makes it easy to track image dependencies and discover the correct images to use in deployments. Each image's parent-child relationship and status are now captured and displayed in your Packer registry. When a new base image is created, child images will indicate if they are out of date.

»Inherited Revocation

Image ancestry tracking can also ensure revocation across all descendant images. If a vulnerability or misconfiguration is identified in a base image, you can choose to revoke only the iteration or the iteration and all its descendants. This workflow is supported for both immediate and scheduled revocation.

»Ancestry Tracking Benefits

Ancestry tracking and inherited revocation enable safe and effective immutable infrastructure workflows.

»Increased Efficiency

Image ancestry details allow users to better understand the relationship between images. This visibility lets users quickly see the dependencies of parent images to monitor usage and gauge the impact of potential changes. Child images also show details about the parent image they are based on. This transparency helps streamline build and deployment processes.

»Reduced Risk

Ancestry tracking immediately prevents the use of all images descending from a revoked parent. This prevents child images from referencing a potentially vulnerable base image. Visibility into image status and dependencies also helps avoid missed child images when remediating security or configuration issues in base images.

»Immutable Deployment Processes

HCP Packer enables immutable application deployments by launching a set of new instances for each iteration instead of making changes to existing images. Ancestry tracking brings further visibility and control to these deployments to ensure consistent and reliable image management.

»Summary & Resources

Visibility into the relationships between images is crucial for efficient and secure infrastructure management. Ancestry tracking allows for quick reference of image dependencies or statuses and ensures revocation across descendant images. For more information on HCP Packer and Image Ancestry Tracking, check out our Ancestry and Revoke Images documentation along with this demo video. Get started with HCP Packer for free to begin tracking machine images across all your environments. View the full article
-
HashiCorp Packer is easy to use and automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.

Works Out of The Box

Out of the box, Packer comes with support to build images for Amazon EC2, CloudStack, DigitalOcean, Docker, Google Compute Engine, Microsoft Azure, QEMU, VirtualBox, VMware, and more. Support for more platforms is on the way, and anyone can add new platforms via plugins.
-
HCP Packer is a new suite of cloud-based services designed to enhance HashiCorp Packer workflows. The first service we're working on, announced today at HashiConf Europe, is the HCP Packer registry, a metadata tracking tool that aims to bridge image factories and image deployments. The service is currently in development on the HashiCorp Cloud Platform (HCP), and will be available as a beta in the coming months. Sign up to be part of the HCP Packer beta program.

Development and security teams will be able to collaborate via HCP Packer to create image management workflows that ensure images are up to date. While HCP Packer is not "Packer in the cloud," it is designed to support your existing workflows. Teams that use Packer and HashiCorp Terraform should find HCP Packer especially helpful. We have three initial goals for HCP Packer:

Simplify image lifecycle management. HCP Packer's registry aims to reduce the complexity around creating, tracking, and using image artifacts.
Centralize governance. You should be able to define your own image-release channels to ensure adoption of the latest approved builds.
Support your current workflows. We want HCP Packer to integrate seamlessly into your existing Packer and Terraform workflows.

If you want more insight and control over your images — not just in development but in production — consider giving HCP Packer a try.

»HCP Packer Use Cases

Many organizations have built Packer image pipelines where one team may create and maintain a base image. From there, that same team, or other teams, build layers on top of that base. The end result is a golden image with different installed packages and technologies to suit a particular need. If images are built in many stages, it can be tough to figure out which image was built from what. You can end up with complex dependency trees that you need to tease apart, and then keep each step in that dependency tree up to date. That's how the HCP Packer registry can help. We're envisioning a service that will:

Track these image dependencies for you — both what a given image depends on, and what images depend on a given source.
Track images built from the same Packer template across different clouds and versions of the template. That capability would let users see which images across clouds are functionally equivalent.
Act as a data source for both Packer and Terraform. This way, users can automatically pull a specific version of an image for the environment about to be provisioned.
Allow users to set mutable and customizable release channels for images. Packer and Terraform could then, for example, request the "prod-stable" version and get the latest image promoted to that channel, presumably after some form of internal validation.

That's only the beginning, of course. There are many directions that HCP Packer can go over time. In particular, we think there are exciting opportunities to improve governance and security workflows. Stay tuned.

»How will the HCP Packer beta work?

We expect HCP Packer to work with recent versions of Packer and your existing workflows. Once you've been granted access to the beta, you would just need to set a few environment variables, and Packer would send artifact information to the service, making it available for use. We also plan for there to be more advanced configuration options available within Packer templates. Sign up for more information about the upcoming beta launch and how to use the HCP Packer registry. View the full article
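For context on what that opt-in eventually looked like: in later Packer releases, a build block publishes metadata to the registry through an hcp_packer_registry block, with HCP credentials supplied via the HCP_CLIENT_ID and HCP_CLIENT_SECRET environment variables. Treat the snippet below as an illustrative sketch rather than the beta-era configuration described in this post; the bucket name, labels, and source name are made up.

build {
  sources = ["source.amazon-ebs.base"] # assumes a source named "base" defined elsewhere

  hcp_packer_registry {
    bucket_name = "ubuntu-base"
    description = "Org-standard hardened Ubuntu base image"

    bucket_labels = {
      "os"   = "ubuntu"
      "team" = "platform"
    }
  }
}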
-
Using Template Files with HashiCorp Packer
Hashicorp posted a topic in Infrastructure-as-Code
In HashiCorp Packer 1.7, we tagged HCL2 as stable and implemented HCL2-only functions. You can use one such function, the templatefile function, to build multiple operating systems with less duplication of configuration. Currently, you need to use the boot_command argument to configure an OS before you connect to the machine. You can use it with many builders, including the vmware-iso or virtualbox-iso builders. A boot_command mimics manual keystrokes and sends them at a regular cadence. You aggregate these keystrokes for installing and configuring packages in a preseed file. Packer enables sharing preseed files by making them available statically through an HTTP server. You can also access static files using CD files or a floppy. In this post, we'll use the http_content and the templatefile functions together to build preseed file templates for two Ubuntu images, one with HashiCorp Nomad and one with HashiCorp Consul.

»Templating a Preseed File

Say the file preseed.pkrtpl is your preseed template file, and you would like to be able to set a user's name, ID, and password and also the packages installed with it:

d-i apt-setup/universe boolean true
d-i pkgsel/include string %{ for install in installs ~}${install} %{ endfor }openssh-server cryptsetup build-essential libssl-dev libreadline-dev zlib1g-dev linux-source dkms nfs-common linux-headers-$(uname -r) perl cifs-utils software-properties-common rsync ifupdown
d-i passwd/user-fullname string ${user.name}
d-i passwd/user-uid string ${user.id}
d-i passwd/user-password password ${user.password}
d-i passwd/user-password-again password ${user.password}
d-i passwd/username string ${user.name}
choose-mirror-bin mirror/http/proxy string
d-i base-installer/kernel/override-image string linux-server
d-i clock-setup/utc boolean true
d-i clock-setup/utc-auto boolean true
d-i finish-install/reboot_in_progress note
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i mirror/country string manual
d-i mirror/http/directory string /ubuntu/
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/proxy string
d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select atomic
d-i partman-auto/method string lvm
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/confirm_write_new_label boolean true
d-i pkgsel/install-language-support boolean false
d-i pkgsel/update-policy select none
d-i pkgsel/upgrade select full-upgrade
d-i time/zone string UTC
d-i user-setup/allow-password-weak boolean true
d-i user-setup/encrypt-home boolean false
tasksel tasksel/first multiselect standard, server

This template file can be used to install binaries from a variable. In this example, we made it possible to configure the user settings and pass an arbitrary list of packages to install. This simplifies the build while making it more powerful.

Note: The .pkrtpl extension is a recommendation and not a requirement. It allows editors to recognize a Packer template file written in HCL2.
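Before looking at the full configuration, it helps to see how a template like this is consumed: the preseed template above is rendered with the templatefile function and served to the installer through the builder's http_content argument. A minimal, non-dynamic sketch is below; the source name is illustrative, the builder settings are omitted, and the full configuration that follows wires the same idea into a dynamic source block driven by a map of builds.

source "virtualbox-iso" "consul-ubuntu" {
  # ...ISO, boot, and SSH settings omitted...

  http_content = {
    # Render the preseed template with a user object and a list of packages to install.
    "/preseed.cfg" = templatefile("${path.root}/preseed.pkrtpl", {
      user     = { id = 1000, name = "bob", password = "s3cr2t" }
      installs = ["consul"]
    })
  }
}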
The following configuration file can then be defined:

variables {
  headless = true
}

source "virtualbox-iso" "base-ubuntu-amd64" {
  headless             = var.headless
  iso_url              = local.ubuntu_2010_iso_url
  iso_checksum         = "file:${local.ubuntu_2010_iso_checksum_url}"
  guest_os_type        = "Ubuntu_64"
  hard_drive_interface = "sata"
  ssh_wait_timeout     = "15m"
  boot_wait            = "5s"
}

locals {
  ubuntu_2010_dl_folder        = "http://cdimage.ubuntu.com/ubuntu/releases/18.04/release/"
  ubuntu_2010_iso_url          = "${local.ubuntu_2010_dl_folder}ubuntu-18.04.5-server-amd64.iso"
  ubuntu_2010_iso_checksum_url = "${local.ubuntu_2010_dl_folder}SHA256SUMS"

  builds = {
    consul = {
      user     = { id = 1000, name = "bob", password = "s3cr2t" }
      installs = ["consul"]
    }
    nomad = {
      user     = { id = 1000, name = "bob", password = "s3cr2t" }
      installs = ["nomad"]
    }
  }
}

build {
  name        = "ubuntu"
  description = <

      boot_command = [
        "", "", "",
        "/install/vmlinuz",
        " auto",
        " console-setup/ask_detect=false",
        " console-setup/layoutcode=us",
        " console-setup/modelcode=pc105",
        " debconf/frontend=noninteractive",
        " debian-installer=en_US.UTF-8",
        " fb=false",
        " initrd=/install/initrd.gz",
        " kbd-chooser/method=us",
        " keyboard-configuration/layout=USA",
        " keyboard-configuration/variant=USA",
        " locale=en_US.UTF-8",
        " netcfg/get_domain=vm",
        " netcfg/get_hostname=vagrant",
        " grub-installer/bootdev=/dev/sda",
        " noapic",
        " preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg",
        " -- ",
        ""
      ]
      output_directory = "virtualbox_iso_ubuntu_2010_amd64_${source.key}"
    }
  }

  provisioner "shell" {
    environment_vars = [
      "HOME_DIR=/home/vagrant"
    ]
    execute_command   = "echo '${local.builds[source.name].user.password}' | {{.Vars}} sudo -S -E sh -eux '{{.Path}}'"
    expect_disconnect = true
    inline = [
      "echo hello from the ${source.name} image",
      "${source.name} version"
    ]
  }
}

Now we can see that this has made it easier to maintain a preseed project. Pretty cool, right? Effectively, the templatefile function can be used in other scenarios, same for the http_content option. We could extend this by skipping the template file and setting everything from the map:

[...]
  http_content = {
    "/preseed.cfg" = <<EOF
d-i apt-setup/universe boolean true
d-i pkgsel/include string openssh-server cryptsetup build-essential libssl-dev libreadline-dev zlib1g-dev linux-source dkms nfs-common linux-headers-$(uname -r) perl cifs-utils software-properties-common rsync ifupdown consul
...
EOF
  }

»Conclusion

For more information on Packer's recent additions, review Packer's changelog. Lastly, if you have any issues, do not hesitate to open an issue in the Packer repository on GitHub. View the full article -
The HashiCorp Packer team is excited to announce the release of data sources, a new component type to fetch or compute data for use elsewhere in a Packer configuration. Starting in Packer 1.7.0, users and plugin developers can use data source plugins within HCL2-enabled build templates. As we continue to focus on favoring HCL2 over legacy JSON templates, data source components will be the first Packer feature exclusively available to HCL2.

Data sources in Packer function similarly to Terraform's data sources. Data source components fetch data from outside Packer and make detailed information about that data available to Packer HCL configuration blocks. A data source runs before any build. This allows build sources in your configuration to access the result of the data source.

»Amazon Web Services Data Sources

Packer 1.7.0 includes two data sources: the Amazon AMI data source and the Amazon Secrets Manager data source. The Amazon AMI data source filters images from the marketplace, similar to the source_ami_filter configuration. The Amazon Secrets Manager data source retrieves secrets for the build configuration, similar to the aws_secretsmanager configuration. Both configuration parameters will remain available for use in Packer build configuration. However, we encourage you to update your configuration to the new data source configuration for future stability. You can upgrade from legacy JSON to HCL2 with the hcl2_upgrade command. This will upgrade the source_ami_filter and aws_secretsmanager options to their respective data sources.

»Using Data Sources

You can reference a data source in locals and sources as data.<TYPE>.<NAME>.<ATTRIBUTE>. For example, you can use the Amazon Secrets Manager data source as a local variable to store the value and version of a secret.

data "amazon-secretsmanager" "basic-example" {
  name = "my_super_secret"
  key  = "my_secret_key"
}

# usage example of the data source output
locals {
  secret_value      = data.amazon-secretsmanager.basic-example.value
  secret_version_id = data.amazon-secretsmanager.basic-example.version_id
}

You can then use local.secret_value and local.secret_version_id anywhere in your configuration. For more information, review our documentation on using data sources.

»Developing a Data Source

You can write your own data source. Follow the instructions for Custom Data Sources.

»Upcoming Data Sources

In an upcoming Packer release, we plan to roll out two more data sources as a replacement for two of our existing functions:

Consul
Vault

If you have any questions or feedback on Packer data sources, you can share them in our Discuss forum or submit an issue. View the full article
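As a concrete example of the Amazon AMI data source feeding a build, the sketch below looks up a recent Canonical Ubuntu AMI and passes its ID to an amazon-ebs source; the filter values, owner ID, region, and names are illustrative rather than taken from this post.

data "amazon-ami" "ubuntu" {
  filters = {
    name                = "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*"
    root-device-type    = "ebs"
    virtualization-type = "hvm"
  }
  owners      = ["099720109477"] # Canonical
  most_recent = true
  region      = "us-east-1"
}

source "amazon-ebs" "example" {
  region        = "us-east-1"
  source_ami    = data.amazon-ami.ubuntu.id # resolved before the build starts
  instance_type = "t2.micro"
  ssh_username  = "ubuntu"
  ami_name      = "packer-data-source-demo"
}

build {
  sources = ["source.amazon-ebs.example"]
}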
-
Packer is an open source VM image automation tool from HashiCorp. It helps you automate the process of virtual machine image creation on cloud as well as on-prem virtualized environments.

Packer Use Cases

Following are the main use cases for Packer.

Golden Image Creation: With Packer, you can add all the configurations required for creating a golden VM image to be used across organizations.
Monthly VM Patching: You can integrate Packer in your monthly VM image patching pipeline.
Immutable Infrastructure: If you want to create an immutable infrastructure using VM images as a deployable artifact, you can use Packer in your CI/CD lifecycle.

Packer Tutorial For Beginners

In this beginner tutorial, we have covered the steps required to get started with packaging an AMI on AWS cloud.

Step 1: Download the latest Packer executable from the Packer downloads page: https://www.packer.io/downloads.html. It is available for Windows, Linux, and other Unix platforms. For Linux, you can get the download link from the download button and download it using wget.

wget https://releases.hashicorp.com/packer/1.7.0/packer_1.7.0_linux_amd64.zip

Step 2: Unzip the downloaded Packer package and set the path variable in ~/.bashrc

export PATH=$PATH:/path/to/packer

Reload the bashrc file:

source ~/.bashrc

Alternatively, you can move the Packer executable to the bin folder so that it is available in the path by default.

sudo mv packer /usr/local/bin/

Step 3: Verify the Packer installation by executing the packer command.

packer version

You should see the output as shown below.

Building Virtual Machine Image (AMI) Using Packer

Packer configuration templates are written in JSON format. A template has the following three main parts.

variables – Where you define custom variables.
builders – Where you mention all the required AMI parameters.
provisioners – Where you can integrate a shell script, Ansible play, or a Chef cookbook for configuring a required application in the AMI.

An example template for packaging an AWS AMI is given below.

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-west-1",
    "source_ami": "ami-sd3543ds",
    "instance_type": "t2.medium",
    "ssh_username": "ec2-user",
    "ami_name": "packer-demo {{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "demo-script.sh"
    }
  ]
}

In the above example configuration, we are passing the AWS access keys and secret keys as variables. It is not recommended to use access keys as variables. If you have the credentials in the ~/.aws/credentials file, you don't need to pass access keys as variables. Also, we have used a shell provisioner which calls a demo-script.sh file. Packer supports the following provisioners.

Ansible
Chef
Salt
Shell
PowerShell
Windows cmd
File – For copying files from a local directory to the VM image.

Building a Packer Template

To build a packer template, you just need to execute the build command with the JSON template. For example:

packer build apache.json

Before jumping into building the template, let's look at some more important concepts, and at the end we will try out an example with the shell provisioner.

Different Scenarios for Using Packer Variables

You can make use of Packer variables for dynamic configuration when packaging an AMI. Let's discuss these scenarios one by one.

Using Variables Within the Template

The variables block holds all the default variables within a template. An example is shown below.

"variables": {
  "instance_type": "t2.medium",
  "region": "us-west-1"
}

The declared variables can be accessed in other parts of the template using the "{{user `your-variable-name`}}" syntax. An example is shown below.

"instance_type": "{{user `instance_type`}}",
"region": "{{user `region`}}"

Using Environment Variables in Templates

Packer lets you use the system environment variables. First, you need to declare the environment variables in the variables section to use them in the other parts of the template. Let's say you want to use the SCRIPT_PATH environment variable that holds the path to a shell script that has to be used in the shell provisioner. You can declare that variable as shown below.

"variables": {
  "script_path": "{{env `SCRIPT_PATH`}}"
}

After the declaration, you can use the script_path variable in the provisioner as shown below.

"provisioners": [
  {
    "type": "shell",
    "script": "{{user `script_path` }}/demo-script.sh"
  }
]

Using Command-Line Variables

First, you need to declare the variable name in the variables block as shown below.

"app_name": "{{app_name_cmd_var}}"

You can pass variables during the run time using the -var flag followed by the variable declaration. You can use this variable in the template using the normal interpolation we used in the above examples. For example:

packer build -var 'app_name_cmd_var=apache' apache.json

Using a JSON File

You can pass a JSON file with a variables block to the build command as shown below.

packer build -var-file=variables.json apache.json

In the above example, variables.json is the variable file and apache.json is the packer template.

Packaging an Image

In this example, we will bake a t2.micro AMI using a shell provisioner. The shell script has the update and HTTPD install instructions. We assume that you have the AWS access keys and region set in the ~/.aws/credentials file. Here we are going to use the Oregon region and a Red Hat AMI with AMI id ami-6f68cf0f. Follow the steps given below to set up the project.

1. Create a folder called packer.

mkdir packer

2. Create a script file named demo-script.sh and copy the following contents into it.

#!/bin/bash
sudo yum -y update
sudo yum install -y httpd

The above script just updates the repository and installs httpd.

3. Create an httpd.json file with the following contents.

{
  "variables": {
    "ami_id": "ami-6f68cf0f",
    "app_name": "httpd"
  },
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "{{user `ami_id`}}",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "PACKER-DEMO-{{user `app_name` }}",
    "tags": {
      "Name": "PACKER-DEMO-{{user `app_name` }}",
      "Env": "DEMO"
    }
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "demo-script.sh"
    }
  ]
}

We have our template ready. The next step is to execute the template to package the AMI with the HTTPD installation.

4. Let's validate and inspect our template using the following commands.

packer validate httpd.json
packer inspect httpd.json

Note: If you are using command-line variables or a variable file, you should pass it while validating. An example is shown below.

packer validate -var='env=TEST' httpd.json

The above command should validate and inspect without any errors.

5. To build our new AMI, use the following command.

packer build httpd.json

The above command will build a new AMI. You can also debug the image creation. Check out this thread for packer debugging.

Few Tips

1. You can capture the output of the image build to a file using the following command.

packer build httpd.json 2>&1 | sudo tee output.txt

2. Then you can extract the new AMI id to a file named ami.txt using the following command.

tail -2 output.txt | head -2 | awk 'match($0, /ami-.*/) { print substr($0, RSTART, RLENGTH) }' > ami.txt

In this packer tutorial for beginners, we covered the basics of image creation. Let us know in the comment section if you face any issues in the setup. View the full article
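This tutorial uses the legacy JSON template format. Packer 1.5 and later also support HCL2, which newer releases favor, and the httpd example above translates roughly as follows. This is an untested sketch; the hcl2_upgrade command can generate an equivalent conversion directly from httpd.json, and the plugin version constraint is illustrative.

packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
  }
}

variable "ami_id" {
  type    = string
  default = "ami-6f68cf0f"
}

variable "app_name" {
  type    = string
  default = "httpd"
}

source "amazon-ebs" "httpd" {
  region        = "eu-west-1"
  source_ami    = var.ami_id
  instance_type = "t2.micro"
  ssh_username  = "ec2-user"
  ami_name      = "PACKER-DEMO-${var.app_name}"

  tags = {
    Name = "PACKER-DEMO-${var.app_name}"
    Env  = "DEMO"
  }
}

build {
  sources = ["source.amazon-ebs.httpd"]

  provisioner "shell" {
    script = "demo-script.sh"
  }
}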
-
Over 10,000 people watched our third annual global HashiTalks livestream, our first HashiTalks livestream to span 48 hours instead of the usual 24. Why 48 hours this time? We were so impressed by the extremely high number of quality talk submissions that we decided to expand the number we accepted this year and expand our hours accordingly. All of the HashiTalks 2021 livestream footage is available on YouTube, but we also made clips for some of the individual talks and have posted them in the HashiCorp Resource Library with full abstracts for each talk, along with slides for some. Today we wanted to highlight a few of those talks. In this first HashiTalks 2021 highlights blog, we're sharing a handful of talks about our oldest open source tools, HashiCorp Vagrant and Packer, and our newest projects, HashiCorp Boundary and Waypoint, as well as a few product-agnostic talks. Across all of these topics, the community had plenty of great new use cases and demos to share. More talks will be added to this blog next week.

»Boundary
Deploying Boundary in Azure with Terraform
HashiCorp Ambassador Ned Bellavance has built a reference architecture for deploying Boundary on Microsoft Azure for secure session management.
Using Boundary for Identity-Based Multi-Cloud Access
Another HashiCorp Ambassador, Jacob Mammoliti, shows how Boundary can be deployed in a multi-cloud or hybrid cloud environment and how users can leverage it for secure, fine-grained access to infrastructure across different clouds. Jacob has spoken at each of the last three worldwide HashiTalks.

»Packer
The Packer Plugin Repository, What's init?
Wilken Rivera, a HashiCorp software engineer on the Packer team, walks through two of the major changes in the new Packer 1.7 release: a new plugin repository and the packer init command.

»Vagrant
Getting your Python Development Environment Ready with Vagrant
Mario García shows you how to configure Python-based development environments with Vagrant.

»Waypoint
Building and Deploying Applications to Kubernetes with GitLab and Waypoint
A second talk by HashiCorp Ambassador Jacob Mammoliti walks through setting up a GitLab CI/CD pipeline to automatically build, deploy, and release an application to GKE with Waypoint.
I Just Want to Ship My Code. Waypoint, Nomad, and Other Things.
HashiCorp software engineer Michael Lange showcases three demos of the build, deploy, release workflow for a Node.js website using Kubernetes and Docker, Nomad and Docker, and Nomad and raw binaries, all driven by a Waypoint workflow.

»Product-Agnostic
Chaos, Creativity, and Cookies
Andrew VanLoo shares some thought-provoking tips about information theory and how it can help you be a better software engineer.
Measuring DevOps Success with Pipeline Analytics
Chris Riley teaches you how to build a strategy for measuring DevOps and how to use tools like DORA and Flow metrics as KPIs for success.
The Hardest Part of Operating a Service Mesh: Envoy Proxy
Christian Posta shares his observability, debugging, and tuning lessons for working with Envoy proxy.

»More Highlights
These were about half of the total HashiTalks that covered Boundary, Packer, Vagrant, or Waypoint topics. To find all of the talks, go to the HashiTalks schedule page to locate the time of day for any other talks you want to see.
Then head over to our livestream recordings on YouTube:
HashiTalks 2021: Day 1
HashiTalks 2021: Day 2
In the coming weeks, we'll post highlight roundup blogs for HashiTalk sessions covering our other products, HashiCorp Nomad, Consul, Vault, and Terraform. View the full article
-
Tagged with: hcp packer, vagrant (and 4 more)
-
In Packer, a component is a builder, provisioner, or post-processor. Packer has many built-in components, and historically many users of Packer have depended purely on the built-ins to run their builds. Plugins are standalone binaries that can supply extra, specialized components. Packer's main codebase loads and runs these plugins, which can then work together with the Packer built-ins to create highly customizable Packer builds. Packer plugins are a key feature that allows Packer to build images on almost any infrastructure type using a wide range of provisioning tools. As Packer has grown in adoption, it has become apparent that the reliance upon built-in components limits community developers who want to create their own builders, provisioners, and post-processors. Contributors who have gotten their community components merged into Packer must wait for a maintainer to review, merge, and release changes before their users can benefit from updates to their components. To support Packer's continued evolution and growing ecosystem, we are excited to announce the Packer Plugin SDK as part of the Packer v1.7.0 release (https://github.com/hashicorp/packer/blob/v1.7.0/CHANGELOG.md#170-february-17-2021). The SDK makes it easier for third-party developers to create, maintain, release, and share their components as plugins.

»Packer Plugin SDK
Previously, when developing a plugin, you had to use a number of convenience tools embedded within the Packer codebase, with little documentation. As a result, plugin development could be difficult to follow and many unused dependencies were imported. This complex set of embedded tools made the barrier to entry for creating and maintaining Packer plugins higher than it needed to be. The Packer Plugin SDK extracts the required plugin interfaces from the Packer repository into a standalone Go module. Packer plugins can now import the Packer Plugin SDK and use its API, which is explicitly available for Packer plugin functionality. We hope this change lowers the barrier to entry for creating Packer plugins. Packer Plugin SDK v0.1.0 is designed for compatibility with Packer v1.7.0. In future versions, the SDK will be versioned separately from the main Packer codebase. Improvements to the SDK will start from the 0.1.0 baseline and follow a semantic versioning scheme compatible with Go modules. The informal SDK within the core Packer repository has been removed. The new SDK offers some new features, including support for plugins that contain multiple components. A single plugin can bundle together builders, post-processors, and provisioners that are all specific to a certain technology. This feature will hopefully reduce the maintenance burden for plugin maintainers. Finally, the new SDK supports the new packer init command. Users can declare their desired plugins and plugin versions, and Packer will automatically download them.

»Upgrading Packer for SDK Support
If you only use components currently built into Packer, nothing changes for you. If you use community-built plugins, you will need to obtain a new version of the plugin when you upgrade to Packer v1.7.0. Previous versions of the plugin will not be compatible. If you maintain a community plugin, you will need to upgrade your plugin to use the SDK. We have an upgrade guide and a CLI tool to help you. If you're interested in getting started creating a plugin of your own, check out our updated documentation for writing plugins.
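As a rough sketch of how this fits together (the plugin name, source address, and version constraint below are illustrative examples, not recommendations from this post), an HCL2 template can declare the plugins it needs in a required_plugins block:

packer {
  required_plugins {
    amazon = {
      # illustrative constraint; pin to the release you actually need
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

Running packer init . in the template's directory then downloads and installs any declared plugins that are not already present, so a build can rely on a known set of plugin versions rather than manually installed binaries.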
To see a presentation and demo of the Packer Plugin Repository and the packer init command, watch our recent HashiTalk: The Packer Plugin Repository, What’s init? View the full article
-
Provisioners give Terraform practitioners a way to prepare their infrastructure for use by installing software and deploying applications. While there are several avenues for provisioning infrastructure deployed with Terraform, Packer and cloud-init give practitioners repeatability, either through pre-built images or through tooling built into most operating systems and cloud providers. Try our new tutorials to provision infrastructure following HashiCorp's recommended best practices.

Image Deployment with Packer
Packer is a HashiCorp tool that builds machine images. Packer allows you to pre-build golden images to deploy using Terraform, and it supports a number of provisioning processes. This tutorial teaches you how to create a Packer image with all of the common dependencies you would need for deploying a web application, and then deploy that image with Terraform.
Provision Infrastructure with Packer

Cloud-init deployment
Cloud-init is a standard configuration tool available on most Linux distributions and all major cloud providers. It allows you to provision instances with a common scripting and configuration format. You will create an instance with Terraform, passing a cloud-init script in your resource configuration (a minimal sketch follows below). Once you apply the Terraform configuration containing the cloud-init script, your instance will be created to your specifications and will be able to run the web application deployed within the script.
Provision Infrastructure with Cloud-init
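As a minimal sketch of what that resource configuration might look like (the provider, region, AMI ID, instance type, and file name below are placeholders rather than values from the tutorial):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-west-2" # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  # cloud-init reads this user data on first boot and applies the script
  user_data = file("${path.module}/cloud-init.yaml")
}

The important detail is the user_data argument: the instance's cloud-init service picks it up on first boot and runs the supplied configuration or script.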
To learn more about provisioning in Terraform, visit the full collection of tutorials on the HashiCorp Learn site. View the full article
-