Showing results for tags 'terraform'.

  1. In HashiCorp Terraform, data sources serve as a bridge between the Terraform configuration and external systems or information. Essentially, data sources allow Terraform to query external resources, such as cloud platforms, APIs, databases, or other systems, and use the retrieved information within the configuration. Unlike resources, which represent infrastructure components to be managed by Terraform, […] The article Terraform: How are Data Sources used? appeared first on Build5Nines. View the full article
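A minimal sketch of this pattern, assuming the AWS provider: a data source queries the AWS API for an existing AMI, and a managed resource consumes the result. The owner ID and name filter are illustrative, not taken from the article.

```hcl
# Query an external system (the AWS API) for an existing, unmanaged object.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # assumed owner account ID, for illustration only

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Reference the queried value from a resource that Terraform does manage.
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
}
```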
  2. You already know that Terraform is a popular open-source Infrastructure provisioning tool. And that AWS is one of the leading cloud providers with a wide range of services. But have you ever wondered how Terraform can help you better take advantage of the services AWS has to offer? This guide will explain how Terraform and AWS work together to give you insight and control over your cloud resources. Why Use Terraform with AWS?One of the main benefits of using Terraform with AWS is that it allows you to define your entire AWS infrastructure as code using HashiCorp Configuration Language (HCL). With Terraform configuration files called Terraform code, you can easily provision, change, and version control your AWS resources. This provides consistency and repeatability across your environment. Rather than manually making changes through the AWS Management Console, you can model your AWS setup, test changes locally, and roll out updates automatically. For a hands-on experience with Terraform, check out our Terraform Basics Training Course. Key Reasons to Adopt Terraform for AWSBelow are some of the reasons why you should adopt Terraform for AWS infrastructure management: IaC BenefitsTerraform enables you to treat your infrastructure as code. This approach has several benefits: Reproducibility: Defining your infrastructure in code makes it easy to recreate environments consistently.Version Control: Storing your infrastructure configuration in version-controlled repositories (e.g., Git) allows for collaboration and tracking of changes over time.Automation: It allows for the automation of resource provisioning, updates, and teardown.AWS-Specific BenefitsBroad Service Coverage: Terraform supports a wide range of AWS services, from EC2 instances to S3 buckets, RDS databases, and more.Multi-Region and Multi-Account Deployments: Easily deploy resources across different AWS regions and accounts.Immutable Infrastructure: Terraform encourages the use of immutable infrastructure patterns, promoting reliability and scalability.How Does Terraform Work with AWS?At its core, Terraform utilizes AWS APIs to dynamically provision and manage resources. When initializing a working directory, Terraform will download the AWS provider plugin which understands how to communicate with the various AWS services. The AWS provider contains APIs that map directly to the actual AWS APIs. So, for example, when you define an "aws_instance" resource, the provider knows that maps to the EC2 RunInstances API call. By abstracting the underlying AWS APIs, Terraform provides a declarative way to manage your entire AWS environment as code. The provider handles all the network calls and state synchronization behind the scenes. Getting Started with Terraform on AWS1. Install the Terraform CLI Terraform is distributed as a single binary file that can be downloaded and added to your system PATH. For Linux/Mac users, you can use the official HashiCorp releases and extract the zip file. On Windows, you can download the .zip from the releases and extract it to a directory in your PATH. For more details on how to install Terraform, check the Terraform doc. 2. Verifying the Install Test that Terraform is available by checking the version using this command: terraform -v You should get an output similar to this: Terraform v1.1.9 3. Configuring AWS Credentials Terraform supports different strategies for AWS authentication, such as static credentials, environment variables, or IAM roles. 
For automation, it is recommended that you use an IAM role attached to your EC2 instance. Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or create the credentials file at ~/.aws/credentials. 4. Creating the Main Configuration Initialize a new or empty Terraform working directory and create main.tf with your resources: terraform init touch main.tf Add a resource block for an EC2 instance specifying AMI, type, security groups, etc: resource "aws_instance" "example" { ami = "ami-0cff7568" instance_type = "t2.micro" vpc_security_group_ids = ["sg-1234567890abcdef0"] } This defines the infrastructure you want to create. 5. Validating and Applying Changes Run terraform plan to see the actions and changes before applying: terraform plan Then apply the changes: terraform apply Terraform will create the EC2 instance and all required dependencies. You can assess the instance on the AWS console. Adding Modules and Remote StateAs your infrastructure grows more complex, structure it using reusable Terraform modules. Modules define generic AWS patterns like a VPC, Auto Scaling Group, or RDS database that you can call multiple times. Also, ensure you manage those modules in version control along with your main configurations. You can read more about modules from this blog: Terraform Modules - Tutorials and Examples. For team collaboration, maintain a centralized state file to track resource lifecycles. Store the file remotely in S3 backed by DynamoDB for locking. This prevents state collisions and loss during runs. To solidify your understanding of Terraform and prepare for official certification, consider taking our course on Terraform Associate Certification: HashiCorp Certified. This course is designed to help you master Terraform and prepare for the official HashiCorp certification exam. Terraform in AWS Best PracticesFollow the following best practices to get the most out of Terraform in AWS. 1. Use an AWS Credential Profile Rather than hardcoding access keys and secret keys directly in your Terraform configuration, use a credential profile configured by one of the AWS SDKs. This avoids maintaining secrets in multiple locations and prevents accidental commits to version control.If you’re running Terraform from control servers, consider using an IAM instance profile for authentication.2. Break Up AWS Configurations When provisioning multiple services (EC2 instances, security boundaries, ECS clusters, etc.), avoid defining them all in a single configuration file. Instead, break them up into smaller, manageable chunks.Organize your configurations based on logical groupings or services to improve maintainability.3. Keep Secrets Secure If you need to store sensitive data or other information you don’t want to make public, use a terraform.tfvars file and exclude the file from version control (e.g., by using .gitignore).Avoid hardcoding secrets directly in your configuration files.4. Use Remote State Store your Terraform state remotely, ideally in an S3 bucket with versioning enabled. This ensures consistency and allows collaboration among team members.Remote state management provides better visibility into changes made to the infrastructure.5. Leverage Existing Modules Take advantage of shared and community modules. These pre-built modules save time and effort by providing reusable configurations for common AWS resources.Import existing infrastructure into Terraform to avoid re-creating everything from scratch.6. 
Consistent Naming Convention Adopt a consistent naming convention for your resources. Clear, descriptive names make it easier to manage and troubleshoot your infrastructure.Use meaningful prefixes or suffixes to differentiate between environments (e.g., dev-, prod-).7. Always Format and Validate Use Terraform’s built-in formatting (terraform fmt) and validation (terraform validate) tools. Consistent formatting improves readability, and validation catches errors early in the process.Common Use CasesBelow are some of Terraform’s common use cases in AWS: Web Applications Deployment: Deployment of web servers, load balancers, and databases.Dev/Test Environments Creation: Spinning up isolated environments for development and testing.CI/CD Pipelines Creation: Automating infrastructure provisioning as part of your deployment pipeline.Additional Features to KnowBelow are some advanced operations that you can perform when using Terraform in AWS: Data Sources: Terraform allows you to query existing AWS data, such as AMI IDs and security groups, before defining resources that depend on this data.Output Values: After applying changes, Terraform exposes attributes of resources, making them easily accessible for use in other parts of your infrastructure.Remote Backend: Terraform’s remote backend feature manages the state of your infrastructure and provides locking mechanisms to facilitate collaboration among multiple developers.SSH Bastion Host Module: For enhanced security, Terraform offers an SSH Bastion host module that secures access to EC2 instances.Custom IAM Roles and Policies: Terraform enables the provisioning of custom IAM roles and policies tailored to your infrastructure’s needs.Integration with Other Tools: Terraform’s module registry allows for seamless integration with a variety of other tools, expanding its functionality and utility.An alternative to Terraform when working with AWS is CloudFormation, a service that allows you to model and provision AWS resources in a repeatable and automated way. Read more about it in this blog: Terraform vs. CloudFormation: A Side-by-Side Comparison. Check out our Terraform + AWS Playground to start experimenting with automated infrastructure provisioning. ConclusionTerraform is a powerful tool for managing your infrastructure in AWS. It allows you to automate your deployments and maintain a consistent environment. It also supports other cloud providers, including Microsoft Azure, Google Cloud Platform (GCP), and many others. Join our Terraform Challenge to master how to provision and manage infrastructure using Terraform Sign up on KodeKloud for free now and learn how to use Terraform on the go. View the full article
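As a follow-up to the remote state recommendation above, here is a minimal sketch of an S3 backend with DynamoDB locking; the bucket, key, table, and region values are placeholders, not values from the guide.

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"      # placeholder bucket name
    key            = "prod/network/terraform.tfstate" # placeholder state path
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"           # table used for state locking
    encrypt        = true
  }
}
```

Run terraform init again after adding or changing a backend block so Terraform can migrate the existing state to the new backend.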
  3. When using HashiCorp Terraform as the Infrastructure as Code (IaC) tool of choice, it becomes critical to organize the Terraform code as the project grows more complex. One of the most common practices is to split the Terraform project from a single file (typically named main.tf) into multiple smaller files. This helps increase maintainability […] The article Terraform: Split main.tf into separate files appeared first on Build5Nines. View the full article
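A sketch of one common way to split a configuration, with illustrative file names; Terraform loads every *.tf file in the working directory, so the split is purely organizational.

```hcl
# providers.tf — terraform {} and provider blocks
# variables.tf — input variable declarations
# main.tf      — the resources themselves
# outputs.tf   — output values

# Example content for variables.tf
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

# Example content for outputs.tf (assumes an aws_instance.example resource in main.tf)
output "instance_id" {
  value = aws_instance.example.id
}
```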
  4. No-code provisioning gives organizations a self-service workflow in HCP Terraform (formerly Terraform Cloud) for application developers and others who need infrastructure but may not be familiar with Terraform or HashiCorp Configuration Language (HCL). Today, no-code provisioning adds the ability to perform module version upgrades as a generally available feature. No-code provisioning empowers cloud platform teams to publish approved infrastructure modules for push-button self-service, allowing stakeholders with infrastructure needs to provision those modules without having to manage Terraform configurations or learn the complexities of the underlying infrastructure and deployment processes. A more seamless experience for practitioners Originally, Terraform’s no-code provisioning restricted users to the module version with which they originally provisioned the workspace — they could change only variable inputs. This limitation kept users from accessing changes delivered in subsequent versions of the module unless they destroyed the workspace and deployed a fresh one. Module version upgrades for no-code workspaces address this issue by significantly reducing the friction when updating the no-code modules in an HCP Terraform workspace. Now, when an administrator or module owner updates the designated no-code ready module version, a notification about the change will appear in downstream workspaces that use the module, giving practitioners a seamless experience in receiving and applying upgrades to their workspaces. During the beta period, we collected a lot of great feedback from customers, which led directly to the general availability of module version upgrades. Reducing cloud spend with no-code provisioning HashiCorp’s 2023 State of Cloud Strategy Survey revealed that 90% of organizations face gaps in their cloud-related skill sets, and that was a primary cause of cloud waste for 43% of respondents. To combat this, organizations need to bridge the skills gap by abstracting error-prone manual tasks and continuously improving the developer experience. No-code Terraform modules help platform teams close these skills gaps, enabling application developers in multiple business units to provision their own infrastructure in minutes, without significant Terraform training. Administrators and module publishers can manage an allowlist of no-code ready modules for application developers, reducing failed infrastructure builds and costly misconfiguration errors. These approved, reusable no-code modules can be built with cost and security best practices in mind, reducing the occurrence of over-provisioned resources. Getting started with HCP Terraform module version upgrades Module version upgrades in HCP Terraform keep developers’ no-code workspaces up-to-date without them having to know Terraform or ask their platform team to update their infrastructure. For more details about the general availability of no-code module version upgrades, please review the documentation and attend our webinar series on cloud spending: Provisioning no-code infrastructure documentation Create and use no-code modules documentation Optimize cloud spend webinar series View the full article
  5. Terraform’s declarative approach allows for defining infrastructure as code (IaC), enabling teams to automate the deployment and management of resources across various cloud providers, including Microsoft Azure and Amazon Web Services (AWS). As infrastructure evolves, you may need to remove resources from the Terraform state that are no longer required. When you manually delete resources […] The article Terraform: Remove Resource from State File (.tfstate) appeared first on Build5Nines. View the full article
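A hedged sketch of how a resource is typically dropped from state without destroying it (not necessarily the exact steps from the article): either the terraform state rm command, or a declarative removed block on Terraform 1.7 and later. The resource address is illustrative.

```hcl
# Declarative approach (Terraform >= 1.7): on the next apply, forget the
# resource from state while leaving the real infrastructure untouched.
removed {
  from = aws_instance.legacy

  lifecycle {
    destroy = false
  }
}

# CLI alternative:
#   terraform state rm aws_instance.legacy
```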
  6. Cloud Computing has transformed the IT industry by simplifying IT infrastructure management. With Cloud Computing, organizations can easily provision and scale resources as needed without worrying about the underlying infrastructure. Two of the most commonly used tools for infrastructure management and provisioning are Ansible and Terraform. This article discusses what each of the two tools does, their key features, and how they compare in the IaC world. Understanding AnsibleAnsible is an open-source automation tool developed by Red Hat that simplifies complex IT tasks. Its agentless architecture automates configuration management and application deployment on remote machines using SSH and WinRM protocols. Ansible uses declarative language to define its desired state for any system. Instead of providing step-by-step instructions, users describe an end state they'd like their system to reach, leaving Ansible to determine the most efficient route toward that goal. This approach enhances simplicity and readability in Ansible's configuration files (called playbooks). Key Features of Ansible:Agentless Architecture: Ansible does not deploy agents, i.e., it does not require extra software on the target machines. This makes its setup easier and mitigates problems such as out-of-date agents, which are a common problem with agent-based solutions. YAML-based Playbooks: Ansible playbook scripts are written in YAML, making them easy to read for humans and understandable without much expertise. Playbooks define a set of tasks to be executed on managed nodes to achieve the desired state. Idempotent Execution: The Ansible tasks are idempotent, which means that applying the same configuration multiple times yields identical results as applying it just once. This ensures systems stay in their desired states even if repeated configurations are applied, helping prevent infrastructure configuration drift. Extensible: Ansible is highly extensible, supporting custom modules and plugins explicitly created to integrate seamlessly into existing infrastructure or workflows. This extensibility enables Ansible to meet individual users' requirements with ease. Integration: Ansible integrates easily with popular version control systems such as Git, enabling engineers to combine infrastructure configurations with application code to provide visibility and enable collaboration. To learn more about playbooks, check out this blog: What is Ansible Playbook and How to Write it? Understanding TerraformTerraform is a popular open-source tool developed by HashiCorp that enables users to manage infrastructure as code. It provides support for multiple cloud providers, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, and more. Terraform users write declarative configuration files in HCL to define and provision infrastructure resources like virtual machines, storage accounts, and network configurations. This makes it easier for teams to collaborate and manage infrastructure changes in a safe and consistent manner across different cloud providers. Key Features of Terraform:Infrastructure as Code: Terraform enables infrastructure to be defined using code, which can be versioned, shared, and reused like any other software artifact. This approach brings consistency and repeatability to infrastructure provisioning. Declarative Configuration: Similar to Ansible, Terraform follows a declarative approach. 
Users specify the desired state of their infrastructure in configuration files, and Terraform determines the actions necessary to reach that state. Provider Ecosystem: Terraform supports a vast ecosystem of providers, including major cloud providers like AWS, Azure, and Google Cloud and numerous third-party providers for services like Kubernetes, Docker, and more. This allows users to manage heterogeneous environments using a single tool. Plan and Apply Workflow: Terraform employs a two-step workflow consisting of "plan" and "apply" phases. During the "plan" phase, Terraform generates an execution plan describing its actions to achieve the desired state. In the "apply" phase, Terraform executes the plan, making the necessary changes to the infrastructure. State Management: Terraform maintains a state file that records the current state of the infrastructure. This state file is used to map real-world resources to the configuration, track dependencies, and plan future changes. Proper state management is crucial for ensuring the integrity of infrastructure changes. Ansible vs. Terraform: Key DifferencesNow that we have a basic understanding of Ansible and Terraform, let's compare them across several key dimensions: Use CasesAnsible excels in configuration management by automating the setup and maintenance of servers and infrastructure components at scale. Whether it's configuring software, adjusting system parameters, or managing file systems, Ansible simplifies the complexities associated with maintaining a large and diverse IT environment. Terraform focuses mainly on infrastructure provisioning and management. It is the best choice for defining, controlling, and managing cloud resources, such as infrastructure components and services from different providers. Terraform is usually used for situations where infrastructure is transient and needs to be provisioned dynamically. Language and SyntaxAnsible sets up playbooks in YAML, a format famous for its simplicity and readability. This makes playbooks easy to understand for both beginners and extensively experienced users. In Terraform, users define infrastructure using HCL or JSON. HCL not only handles infrastructure configuration situations but also provides features such as interpolation and resource blocks for defining resources. Execution ModelAnsible uses a push-based model, where the control node is responsible for transmitting commands and configurations via SSH to the targeted nodes. This model is perfect for orchestrating tasks in multiple systems and can even grow to thousands of nodes. Terraform uses a pull-based model, where each target node independently pulls its configuration from a source like a version control repository. This model allows organizations to maintain greater control over their infrastructure and ensures that changes are made consistently and reliably. Resource AbstractionAnsible splits infrastructure operations into individual tasks, which are then run sequentially, one after another, on the target nodes. Though Ansible offers you modules for managing cloud resources, network devices, and so on, it does not outline resource modeling as built-in as Terraform does. The Terraform stack uses a declarative configuration language that allows users to explicitly define dependencies, relationships, and provisioning logic. Adapting this approach helps manage complex IT infrastructure more flexibly and predictably. 
Ecosystem and IntegrationsAnsible leverages a comprehensive set of modules, roles, and integrations to make the configuration process even easier. It synchronizes perfectly with cloud services such as AWS, Azure, or Google Cloud. Terraform integration works by utilizing provider plugins, which are responsible for managing resources and communicating with the provider's API. When you define your infrastructure with Terraform, you specify the resources you want to create, and Terraform uses the provider plugin to create those resources in the corresponding cloud provider. It also supports modules, which are reusable pieces of infrastructure that can be shared across different projects, teams, and organizations. State ManagementAnsible does not keep a distinct state file. Rather, it utilizes the current state of target nodes while playbook execution is running. Although this makes management easier, there might be issues with monitoring and managing infrastructure variations over time. Terraform keeps a state file that shows the current state of the infrastructure. It uses this information to understand which resources have been created, updated, or destroyed during each run. This information allows Terraform to make intelligent decisions regarding which resources should be created, updated, or destroyed during future runs. Check out this blog to learn How to Manage Terraform State with Examples. Learning Curve and AdoptionAnsible's simplicity and agentless architecture make it relatively easy to learn, particularly for users with experience in YAML and basic scripting. The learning curve may steepen when dealing with complex playbooks and orchestration scenarios. Terraform's learning curve can be steeper due to its declarative language and the need to understand infrastructure concepts like state management and provider configurations. However, Terraform's comprehensive documentation and active community support help mitigate these challenges. Community and SupportAnsible benefits from a large and active community of users, contributors, and maintainers. The Ansible Galaxy repository hosts thousands of reusable roles and playbooks contributed by the community, making it easy to find and share automation solutions. Terraform has a vibrant community that actively contributes modules, plugins, and best practices. HashiCorp provides commercial support for Terraform through its enterprise offerings, along with extensive documentation and training resources. Choosing the Right ToolSelecting the right tool for infrastructure automation depends on various factors, including your organization's requirements, existing infrastructure, team expertise, and long-term goals. Here are the considerations to help you make an informed decision: Infrastructure Complexity: If your environment includes diverse infrastructure components, such as servers, networking equipment, and cloud resources, Ansible's versatility and simplicity may be advantageous. Cloud-native Environments: Terraform's IaC approach and provider ecosystem offer better integration and management capabilities for organizations heavily invested in cloud computing and containerization. Team Skills and Preferences: Consider your team's existing skills and familiarity with programming languages, configuration management tools, and cloud platforms. To minimize learning curves, choose a tool that aligns with your team's expertise and preferences. 
Automation Goals: Define your automation objectives, such as improving deployment speed, enhancing infrastructure reliability, or optimizing resource utilization. Evaluate how each tool addresses your specific requirements and fits into your workflows. Integration Requirements: Assess the need to integrate automation workflows with existing tools, processes, and third-party services. Look for tools that offer robust integration capabilities and support industry standards for seamless interoperability. Scalability and Maintainability: Consider each tool's scalability and maintainability, including support for version control and collaboration features. Choose a tool that can scale with your organization's growth and evolving infrastructure needs. ConclusionBoth Ansible and Terraform are powerful utilities for infrastructure automation. Ansible stands out in configuration management, application deployment, and general-purpose automation. Terraform, on the other hand, is particularly good in infrastructure provisioning using the IaC methodology. By explaining the anatomy of Ansible and Terraform and addressing their strengths and flaws, your skilled team can make the right decision for your success in DevOps and cloud computing. If you are looking to polish your Terraform skills in a real-world environment? Enroll in our Terraform Basics Training Course, which covers all of Terraform fundamentals. If you want to master Ansible, check out these KodeKloud courses: Learn Ansible Basics – Beginners CourseAnsible Advanced CourseView the full article
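To make the provider-plugin and desired-state points above concrete, here is a minimal Terraform sketch; the provider version constraint and bucket name are illustrative assumptions.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # illustrative version constraint
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Only the desired end state is declared here; terraform plan compares it
# against the state file to decide what to create, update, or destroy.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket" # placeholder name
}
```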
  7. HashiCorp Terraform is a great Infrastructure as Code (IaC) tool that lets you manage many resources efficiently. While you can write Terraform code for each individual resource, Terraform supports for_each loops and other programming constructs that enable more efficient resource management within a Terraform project. This article will show you […] The article Terraform: How to for_each through a list(objects) appeared first on Build5Nines. View the full article
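A minimal sketch of the pattern named in the title (not necessarily the article's own example): converting a list(object) variable into a map so it can drive for_each. The variable name and attributes are illustrative.

```hcl
variable "buckets" {
  type = list(object({
    name          = string
    force_destroy = bool
  }))
  default = [
    { name = "logs",    force_destroy = true },
    { name = "backups", force_destroy = false },
  ]
}

resource "aws_s3_bucket" "this" {
  # for_each accepts a map or set, so key the list by a unique attribute.
  for_each = { for b in var.buckets : b.name => b }

  bucket        = "example-${each.value.name}" # placeholder prefix
  force_destroy = each.value.force_destroy
}
```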
  8. Today, we are announcing the general availability of HashiCorp Terraform 1.8, which is ready for download and immediately available for use in Terraform Cloud. This version includes two new capabilities to improve the extensibility and flexibility of Terraform: provider-defined functions and refactoring across resource types.

Provider-defined functions

Terraform includes a wide selection of built-in functions to perform many common operations during provisioning. While they address many general use cases, there have been many requests from the community for more specialized functions and custom logic. With Terraform 1.8, we are excited to introduce provider-defined functions, which allow anyone in the community and HashiCorp’s partner ecosystem to extend the capabilities of Terraform. Provider-defined functions can be used in any Terraform expression, including input validation conditions, output values, local values, data sources, and resource blocks. Additionally, provider-defined functions can be used with checks and tests, which commonly require more complex business logic to write custom assertions that address unique validation scenarios.

Provider-defined functions are invoked with the syntax provider::<provider_name>::<function_name>([arguments]). Examples of available functions include rfc_3339_parse in v0.11 of the official time provider and direxists in v2.5 of the local provider. An initial set of functions are now available in the AWS, Google Cloud, and Kubernetes providers. For more details and examples, check out Terraform 1.8 provider functions for AWS, Google Cloud, and Kubernetes.

The latest version of the HashiCorp Terraform extension for Visual Studio Code also includes syntax highlighting and auto-completion support for provider-defined functions. To learn how to develop your own provider-defined functions, refer to the Functions section of the Terraform Plugin Framework documentation and try it yourself with the new Implement a function tutorial, part of the Custom framework providers collection.

Refactor across resource types

Refactoring code is a common practice for Terraform authors, whether it’s to break up a large configuration into multiple modules or simply to rename resources. Terraform provides two mechanisms to support refactoring operations while preserving the state of existing resources: the moved block introduced in Terraform 1.1 and the terraform state mv command. But there is another class of refactoring that involves changing the type of a resource. Changing the resource type previously required a multi-step operation to manually remove the resource from state without destroying it, update the code, and then re-import to the new resource type. In Terraform 1.8, supported resources can be moved between resource types with a new, faster, and less error-prone method. Some use cases for this method include:

  • Renaming a provider after an acquisition or rebrand
  • Splitting a resource into more specific types
  • API changes such as service renames or versioned resources
  • Cross-provider moves

Providers can add support for this capability by declaring which resources can be refactored between types. An example moved block might look like this:

# Old resource type (commented out)
# resource "myprovider_old_resource_type" "example" {
#   # resource attributes...
# }

# New resource type
resource "myprovider_new_resource_type" "example" {
  # resource attributes...
}

moved {
  from = myprovider_old_resource_type.example
  to   = myprovider_new_resource_type.example
}

Get started with Terraform 1.8

To learn more about these features and all of the enhancements in Terraform 1.8, review the full Terraform 1.8 changelog. To get started with HashiCorp Terraform:

  • Download Terraform 1.8
  • Sign up for a free Terraform Cloud account
  • Read the Terraform 1.8 upgrade guide
  • Get hands-on with tutorials at HashiCorp Developer

As always, this release wouldn't have been possible without the great community feedback we've received via GitHub issues and from our customers. Thank you! View the full article
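As a small illustration of the invocation syntax described above, here is a hedged sketch that calls the local provider's direxists function (one of the launch-day examples) from a variable validation; the version constraint is an assumption.

```hcl
terraform {
  required_version = ">= 1.8.0"
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.5.0" # direxists is noted above as arriving in v2.5
    }
  }
}

variable "config_dir" {
  type = string

  validation {
    # Provider-defined functions can be used in validation conditions.
    condition     = provider::local::direxists(var.config_dir)
    error_message = "config_dir must point to an existing directory."
  }
}
```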
  9. Today, we are announcing the general availability of provider-defined functions in the AWS, Google Cloud, and Kubernetes providers in conjunction with the HashiCorp Terraform 1.8 launch. This release represents yet another step forward in our unique approach to ecosystem extensibility. Provider-defined functions will allow anyone in the Terraform community to build custom functions within providers and extend the capabilities of Terraform. Introducing provider-defined functions Previously, users relied on a handful of built-in functions in the Terraform configuration language to perform a variety of tasks, including numeric calculations, string manipulations, collection transformations, validations, and other operations. However, the Terraform community needed more capabilities than the built-in functions could offer. With the release of Terraform 1.8, providers can implement custom functions that you can call from the Terraform configuration. The schema for a function is defined within the provider's schema using the Terraform provider plugin framework. To use a function, declare the provider as a required_provider in the terraform{} block: terraform { required_version = ">= 1.8.0" required_providers { time = { source = "hashicorp/local" version = "2.5.1" } } }Provider-defined functions can perform multiple tasks, including: Transforming existing data Parsing combined data into individual, referenceable components Building combined data from individual components Simplifying validations and assertions To access a provider-defined function, reference the provider:: namespace with the local name of the Terraform Provider. For example, you can use the direxists function by including provider::local::direxists() in your Terraform configuration. Below you’ll find several examples of new provider-defined functions in the officially supported AWS, Google Cloud, and Kubernetes providers. Terraform AWS provider The 5.40 release of the Terraform AWS provider includes its first provider-defined functions to parse and build Amazon Resource Names (ARNs), simplifying Terraform configurations where ARN manipulation is required. The arn_parse provider-defined function is used to parse an ARN and return an object of individual referenceable components, such as a region or account identifier. For example, to get the AWS account ID from an Amazon Elastic Container Registry (ECR) repository, use the arn_parse function to retrieve the account ID and set it as an output: # create an ECR repository resource "aws_ecr_repository" "hashicups" { name = "hashicups" image_scanning_configuration { scan_on_push = true } } # output the account ID of the ECR repository output "hashicups_ecr_repository_account_id" { value = provider::aws::arn_parse(aws_ecr_repository.hashicups.arn).account_id } Running terraform apply against the above configuration outputs the AWS Account ID: Apply complete! Resources: 2 added, 0 changed, 0 destroyed. Outputs: hashicups_ecr_repository_account_id = "751192555662" Without the arn_parse function, you would need to define and test a combination of built-in Terraform functions to split the ARN and reference the proper index or define a regular expression to match on a substring. The function handles the parsing for you in a concise manner so that you do not have to worry about doing this yourself. The AWS provider also includes a new arn_build function that builds an ARN from individual attributes and returns it as a string. 
This provider-defined function can create an ARN that you cannot reference from another resource. For example, you may want to allow another account to pull images from your ECR repository. The arn_build function below constructs an ARN for an IAM policy using an account ID: # allow another account to pull from the ECR repository data "aws_iam_policy_document" "cross_account_pull_ecr" { statement { sid = "AllowCrossAccountPull" effect = "Allow" principals { type = "AWS" identifiers = [ provider::aws::arn_build("aws", "iam", "", var.cross_account_id, "root"), ] } actions = [ "ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer", ] } }The arn_build function helps to guide and simplify the process of combining substrings to form an ARN, and it improves readability compared to using string interpolation. Without it, you'd have to look up the exact ARN structure in the AWS documentation and manually test it. Terraform Google Cloud provider The 5.23 release of the Terraform Google Cloud provider adds a simplified way to get regions, zones, names, and projects from the IDs of resources that aren’t managed by your Terraform configuration. Provider-defined functions can now help parse Google IDs when adding an IAM binding to a resource that’s managed outside of Terraform: resource "google_cloud_run_service_iam_member" "example_run_invoker_jane" { member = "user:jane@example.com" role = "run.invoker" service = provider::google::name_from_id(var.example_cloud_run_service_id) location = provider::google::location_from_id(var.example_cloud_run_service_id) project = provider::google::project_from_id(var.example_cloud_run_service_id) }The Google Cloud provider also includes a new region_from_zone provider-defined function that helps obtain region names from a given zone (e.g. “us-west1” from “us-west1-a”). This simple string processing could be achieved in multiple ways using Terraform’s built-in functions previously, but the new function simplifies the process: locals { zone = “us-central1-a” # ways to derive the region “us-central1” using built-in functions region_1 = join("-", slice(split("-", local.zone), 0, 2)) region_2 = substr(local.zone, 0, length(local.zone)-2) # our new region_from_zone function makes this easier! region_3 = provider::google::region_from_zone(local.zone) }Terraform Kubernetes provider The 2.28 release of the Terraform Kubernetes provider includes provider-defined functions for encoding and decoding Kubernetes manifests into Terraform, making it easier for practitioners to work with the kubernetes_manifest resource. Users that have a Kubernetes manifest in YAML format can use the manifest_decode function to convert it into a Terraform object. The example below shows how to use the manifest_decode function by referring to a Kubernetes manifest in YAML format embedded in the Terraform configuration: locals { manifest = <If you prefer to decode a YAML file instead of using an embedded YAML format, you can do so by combining the built-in file function with the manifest_decode function. 
$ cat manifest.yaml --- kind: Namespace apiVersion: v1 metadata: name: test labels: name: testresource "kubernetes_manifest" "example" { manifest = provider::kubernetes::manifest_decode(file("${path.module}/manifest.yaml")) }If your manifest YAML contains multiple Kubernetes resources, you may use the manifestdecodemulti function to decode them into a list which can then be used with the for_each attribute on the kubernetes_manifest resource: $ cat manifest.yaml --- kind: Namespace apiVersion: v1 metadata: name: test-1 labels: name: test-1 --- kind: Namespace apiVersion: v1 metadata: name: test-2 labels: name: test-2 resource "kubernetes_manifest" "example" { for_each = { for m in provider::kubernetes::manifest_decode_multi(file("${path.module}/manifest.yaml"))): m.metadata.name => m } manifest = each.value }Getting started with provider-defined functions Provider-defined functions allow Terraform configurations to become more expressive and readable by declaring practitioner intent and reducing complex, repetitive expressions. To learn about all of the new launch-day provider-defined functions, please review the documentation and changelogs of the aforementioned providers: Terraform AWS provider Terraform Google provider Terraform Kubernetes provider Review our Terraform Plugin Framework documentation to learn more about how provider-defined functions work and how you can make your own. We are thankful to our partners and community members for their valuable contributions to the HashiCorp Terraform ecosystem. View the full article
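The embedded-YAML manifest_decode example above appears truncated in this repost; the following is a minimal sketch of that pattern, assuming a simple Namespace manifest like the one used in the file-based example.

```hcl
locals {
  manifest = <<-EOT
    kind: Namespace
    apiVersion: v1
    metadata:
      name: test
      labels:
        name: test
  EOT
}

resource "kubernetes_manifest" "example" {
  manifest = provider::kubernetes::manifest_decode(local.manifest)
}
```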
  10. Recent enhancements in HashiCorp Terraform Cloud help simplify the user experience when working with projects. A new dedicated browsing experience provides better visibility and manageability for projects, and the ability to restrict version control system (VCS) connections to projects enables more fine-grained control to reduce risk. Project overview page As the popularity of projects has grown, customers have found that long project names don’t all fit in the sidebar of the workspaces view. Customers need a better browsing experience for projects that is not restricted to a view designed for browsing workspaces. To address this, we’re introducing a new project overview page to let users view and search all projects they have access to. This view also provides an overview of the number of teams and workspaces associated with each project. When you click into any project, a new dedicated page lists all resources in that project while providing key project details such as workspace name and health. From this page you can click on Settings to manage the project and the teams that have access to it. Scope VCS connections to a project Within a single Terraform Cloud organization, multiple version control system (VCS) connections can be defined and made available for linked workspaces. However, previously, there was no way to limit the scope of these connections, presenting a challenge for organizations with multiple VCS providers or segmented deployments within a provider, operated by different teams or business units. In these environments it is desirable, and often required, to limit end users to only the providers and data they need. This reduces the risk of mistakes and prevents the exposure of sensitive information from other teams. With our latest enhancement, administrators can now control the project scope of VCS connections. By default, each VCS connection is available to all workspaces in the organization. However, if you need to limit which projects can use repositories from a given VCS connection, administrators can now change this setting to limit the connection to only workspaces in the selected project(s). This helps organizations avoid the added overhead of maintaining multiple Terraform Cloud organizations just to isolate VCS environments. It also simplifies the end-user experience by adding another guardrail for safer self-service that limits each team to accessing only the version control providers they need to use. Get started with Terraform Cloud We’re working to ensure Terraform Cloud continues to deliver improvements that help customers have better visibility and control over their environment throughout their infrastructure lifecycle. To learn more about the new features described in this post, visit the Terraform guides and documentation on HashiCorp Developer. If you are new to Terraform, sign up for Terraform Cloud and get started for free today. View the full article
  11. Graphic created by Kevon Mayers Introduction Organizations often use Terraform Modules to orchestrate complex resource provisioning and provide a simple interface for developers to enter the required parameters to deploy the desired infrastructure. Modules enable code reuse and provide a method for organizations to standardize deployment of common workloads such as a three-tier web application, a cloud networking environment, or a data analytics pipeline. When building Terraform modules, it is common for the module author to start with manual testing. Manual testing is performed using commands such as terraform validate for syntax validation, terraform plan to preview the execution plan, and terraform apply followed by manual inspection of resource configuration in the AWS Management Console. Manual testing is prone to human error, not scalable, and can result in unintended issues. Because modules are used by multiple teams in the organization, it is important to ensure that any changes to the modules are extensively tested before the release. In this blog post, we will show you how to validate Terraform modules and how to automate the process using a Continuous Integration/Continuous Deployment (CI/CD) pipeline. Terraform Test Terraform test is a new testing framework for module authors to perform unit and integration tests for Terraform modules. Terraform test can create infrastructure as declared in the module, run validation against the infrastructure, and destroy the test resources regardless if the test passes or fails. Terraform test will also provide warnings if there are any resources that cannot be destroyed. Terraform test uses the same HashiCorp Configuration Language (HCL) syntax used to write Terraform modules. This reduces the burden for modules authors to learn other tools or programming languages. Module authors run the tests using the command terraform test which is available on Terraform CLI version 1.6 or higher. Module authors create test files with the extension *.tftest.hcl. These test files are placed in the root of the Terraform module or in a dedicated tests directory. The following elements are typically present in a Terraform tests file: Provider block: optional, used to override the provider configuration, such as selecting AWS region where the tests run. Variables block: the input variables passed into the module during the test, used to supply non-default values or to override default values for variables. Run block: used to run a specific test scenario. There can be multiple run blocks per test file, Terraform executes run blocks in order. In each run block you specify the command Terraform (plan or apply), and the test assertions. Module authors can specify the conditions such as: length(var.items) != 0. A full list of condition expressions can be found in the HashiCorp documentation. Terraform tests are performed in sequential order and at the end of the Terraform test execution, any failed assertions are displayed. Basic test to validate resource creation Now that we understand the basic anatomy of a Terraform tests file, let’s create basic tests to validate the functionality of the following Terraform configuration. This Terraform configuration will create an AWS CodeCommit repository with prefix name repo-. # main.tf variable "repository_name" { type = string } resource "aws_codecommit_repository" "test" { repository_name = format("repo-%s", var.repository_name) description = "Test repository." 
} Now we create a Terraform test file in the tests directory. See the following directory structure as an example: ├── main.tf └── tests └── basic.tftest.hcl For this first test, we will not perform any assertion except for validating that Terraform execution plan runs successfully. In the tests file, we create a variable block to set the value for the variable repository_name. We also added the run block with command = plan to instruct Terraform test to run Terraform plan. The completed test should look like the following: # basic.tftest.hcl variables { repository_name = "MyRepo" } run "test_resource_creation" { command = plan } Now we will run this test locally. First ensure that you are authenticated into an AWS account, and run the terraform init command in the root directory of the Terraform module. After the provider is initialized, start the test using the terraform test command. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass Our first test is complete, we have validated that the Terraform configuration is valid and the resource can be provisioned successfully. Next, let’s learn how to perform inspection of the resource state. Create resource and validate resource name Re-using the previous test file, we add the assertion block to checks if the CodeCommit repository name starts with a string repo- and provide error message if the condition fails. For the assertion, we use the startswith function. See the following example: # basic.tftest.hcl variables { repository_name = "MyRepo" } run "test_resource_creation" { command = plan assert { condition = startswith(aws_codecommit_repository.test.repository_name, "repo-") error_message = "CodeCommit repository name ${var.repository_name} did not start with the expected value of ‘repo-****’." } } Now, let’s assume that another module author made changes to the module by modifying the prefix from repo- to my-repo-. Here is the modified Terraform module. # main.tf variable "repository_name" { type = string } resource "aws_codecommit_repository" "test" { repository_name = format("my-repo-%s", var.repository_name) description = "Test repository." } We can catch this mistake by running the the terraform test command again. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... fail ╷ │ Error: Test assertion failed │ │ on tests/basic.tftest.hcl line 9, in run "test_resource_creation": │ 9: condition = startswith(aws_codecommit_repository.test.repository_name, "repo-") │ ├──────────────── │ │ aws_codecommit_repository.test.repository_name is "my-repo-MyRepo" │ │ CodeCommit repository name MyRepo did not start with the expected value 'repo-***'. ╵ tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... fail Failure! 0 passed, 1 failed. We have successfully created a unit test using assertions that validates the resource name matches the expected value. For more examples of using assertions see the Terraform Tests Docs. Before we proceed to the next section, don’t forget to fix the repository name in the module (revert the name back to repo- instead of my-repo-) and re-run your Terraform test. Testing variable input validation When developing Terraform modules, it is common to use variable validation as a contract test to validate any dependencies / restrictions. For example, AWS CodeCommit limits the repository name to 100 characters. 
A module author can use the length function to check the length of the input variable value. We are going to use Terraform test to ensure that the variable validation works effectively. First, we modify the module to use variable validation. # main.tf variable "repository_name" { type = string validation { condition = length(var.repository_name) <= 100 error_message = "The repository name must be less than or equal to 100 characters." } } resource "aws_codecommit_repository" "test" { repository_name = format("repo-%s", var.repository_name) description = "Test repository." } By default, when variable validation fails during the execution of Terraform test, the Terraform test also fails. To simulate this, create a new test file and insert the repository_name variable with a value longer than 100 characters. # var_validation.tftest.hcl variables { repository_name = “this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy” } run “test_invalid_var” { command = plan } Notice on this new test file, we also set the command to Terraform plan, why is that? Because variable validation runs prior to Terraform apply, thus we can save time and cost by skipping the entire resource provisioning. If we run this Terraform test, it will fail as expected. ❯ terraform test tests/basic.tftest.hcl… in progress run “test_resource_creation”… pass tests/basic.tftest.hcl… tearing down tests/basic.tftest.hcl… pass tests/var_validation.tftest.hcl… in progress run “test_invalid_var”… fail ╷ │ Error: Invalid value for variable │ │ on main.tf line 1: │ 1: variable “repository_name” { │ ├──────────────── │ │ var.repository_name is “this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy” │ │ The repository name must be less than or equal to 100 characters. │ │ This was checked by the validation rule at main.tf:3,3-13. ╵ tests/var_validation.tftest.hcl… tearing down tests/var_validation.tftest.hcl… fail Failure! 1 passed, 1 failed. For other module authors who might iterate on the module, we need to ensure that the validation condition is correct and will catch any problems with input values. In other words, we expect the validation condition to fail with the wrong input. This is especially important when we want to incorporate the contract test in a CI/CD pipeline. To prevent our test from failing due introducing an intentional error in the test, we can use the expect_failures attribute. Here is the modified test file: # var_validation.tftest.hcl variables { repository_name = “this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy” } run “test_invalid_var” { command = plan expect_failures = [ var.repository_name ] } Now if we run the Terraform test, we will get a successful result. ❯ terraform test tests/basic.tftest.hcl… in progress run “test_resource_creation”… pass tests/basic.tftest.hcl… tearing down tests/basic.tftest.hcl… pass tests/var_validation.tftest.hcl… in progress run “test_invalid_var”… pass tests/var_validation.tftest.hcl… tearing down tests/var_validation.tftest.hcl… pass Success! 2 passed, 0 failed. As you can see, the expect_failures attribute is used to test negative paths (the inputs that would cause failures when passed into a module). Assertions tend to focus on positive paths (the ideal inputs). 
For an additional example of a test that validates functionality of a completed module with multiple interconnected resources, see this example in the Terraform CI/CD and Testing on AWS Workshop. Orchestrating supporting resources In practice, end-users utilize Terraform modules in conjunction with other supporting resources. For example, a CodeCommit repository is usually encrypted using an AWS Key Management Service (KMS) key. The KMS key is provided by end-users to the module using a variable called kms_key_id. To simulate this test, we need to orchestrate the creation of the KMS key outside of the module. In this section we will learn how to do that. First, update the Terraform module to add the optional variable for the KMS key. # main.tf variable "repository_name" { type = string validation { condition = length(var.repository_name) <= 100 error_message = "The repository name must be less than or equal to 100 characters." } } variable "kms_key_id" { type = string default = "" } resource "aws_codecommit_repository" "test" { repository_name = format("repo-%s", var.repository_name) description = "Test repository." kms_key_id = var.kms_key_id != "" ? var.kms_key_id : null } In a Terraform test, you can instruct the run block to execute another helper module. The helper module is used by the test to create the supporting resources. We will create a sub-directory called setup under the tests directory with a single kms.tf file. We also create a new test file for KMS scenario. See the updated directory structure: ├── main.tf └── tests ├── setup │ └── kms.tf ├── basic.tftest.hcl ├── var_validation.tftest.hcl └── with_kms.tftest.hcl The kms.tf file is a helper module to create a KMS key and provide its ARN as the output value. # kms.tf resource "aws_kms_key" "test" { description = "test KMS key for CodeCommit repo" deletion_window_in_days = 7 } output "kms_key_id" { value = aws_kms_key.test.arn } The new test will use two separate run blocks. The first run block (setup) executes the helper module to generate a KMS key. This is done by assigning the command apply which will run terraform apply to generate the KMS key. The second run block (codecommit_with_kms) will then use the KMS key ARN output of the first run as the input variable passed to the main module. # with_kms.tftest.hcl run "setup" { command = apply module { source = "./tests/setup" } } run "codecommit_with_kms" { command = apply variables { repository_name = "MyRepo" kms_key_id = run.setup.kms_key_id } assert { condition = aws_codecommit_repository.test.kms_key_id != null error_message = "KMS key ID attribute value is null" } } Go ahead and run the Terraform init, followed by Terraform test. You should get the successful result like below. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass tests/var_validation.tftest.hcl... in progress run "test_invalid_var"... pass tests/var_validation.tftest.hcl... tearing down tests/var_validation.tftest.hcl... pass tests/with_kms.tftest.hcl... in progress run "create_kms_key"... pass run "codecommit_with_kms"... pass tests/with_kms.tftest.hcl... tearing down tests/with_kms.tftest.hcl... pass Success! 4 passed, 0 failed. We have learned how to run Terraform test and develop various test scenarios. In the next section we will see how to incorporate all the tests into a CI/CD pipeline. 
Terraform Tests in CI/CD Pipelines Now that we have seen how Terraform Test works locally, let’s see how the Terraform test can be leveraged to create a Terraform module validation pipeline on AWS. The following AWS services are used: AWS CodeCommit – a secure, highly scalable, fully managed source control service that hosts private Git repositories. AWS CodeBuild – a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages. AWS CodePipeline – a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. Amazon Simple Storage Service (Amazon S3) – an object storage service offering industry-leading scalability, data availability, security, and performance. Terraform module validation pipeline In the above architecture for a Terraform module validation pipeline, the following takes place: A developer pushes Terraform module configuration files to a git repository (AWS CodeCommit). AWS CodePipeline begins running the pipeline. The pipeline clones the git repo and stores the artifacts to an Amazon S3 bucket. An AWS CodeBuild project configures a compute/build environment with Checkov installed from an image fetched from Docker Hub. CodePipeline passes the artifacts (Terraform module) and CodeBuild executes Checkov to run static analysis of the Terraform configuration files. Another CodeBuild project is configured with Terraform from an image fetched from Docker Hub. CodePipeline passes the artifacts (repo contents) and CodeBuild runs Terraform commands to execute the tests. CodeBuild uses a buildspec file to declare the build commands and relevant settings. Here is an example of the buildspec files for both CodeBuild projects: # Checkov version: 0.1 phases: pre_build: commands: - echo pre_build starting build: commands: - echo build starting - echo starting checkov - ls - checkov -d . - echo saving checkov output - checkov -s -d ./ > checkov.result.txt In the above buildspec, Checkov is run against the root directory of the cloned CodeCommit repository. This directory contains the configuration files for the Terraform module. Checkov also saves the output to a file named checkov.result.txt for further review or handling if needed. If Checkov fails, the pipeline will fail. # Terraform Test version: 0.1 phases: pre_build: commands: - terraform init - terraform validate build: commands: - terraform test In the above buildspec, the terraform init and terraform validate commands are used to initialize Terraform and check that the configuration is valid. Finally, the terraform test command is used to run the configured tests. If any of the Terraform tests fail, the pipeline will fail. For a full example of the CI/CD pipeline configuration, please refer to the Terraform CI/CD and Testing on AWS workshop. The module validation pipeline mentioned above is meant as a starting point. In a production environment, you might want to customize it further by adding Checkov allow-list rules, linting, checks for Terraform docs, or pre-requisites such as building the code used in AWS Lambda. Choosing various testing strategies At this point you may be wondering when you should use Terraform tests or other tools such as Preconditions and Postconditions, Check blocks, or policy as code. The answer depends on your test type and use cases. 
Terraform test is suitable for unit tests, such as validating that resources are created according to the naming specification. Variable validations and Pre/Post conditions are useful for contract tests of Terraform modules, for example by providing an error warning when input variable values do not meet the specification. As shown in the previous section, you can also use Terraform test to ensure your contract tests are running properly. Terraform test is also suitable for integration tests where you need to create supporting resources to properly test the module functionality. Lastly, Check blocks are suitable for end-to-end tests where you want to validate the infrastructure state after all resources are generated, for example to test if a website is running after an S3 bucket configured for static web hosting is created. When developing Terraform modules, you can run Terraform test in command = plan mode for unit and contract tests. This allows the unit and contract tests to run faster and more cheaply since no resources are created. You should also consider the time and cost to execute Terraform test for complex or large Terraform configurations, especially if you have multiple test scenarios. Terraform test maintains one or more state files in memory for each test file. Consider how to re-use the module’s state when appropriate. Terraform test also provides test mocking, which allows you to test your module without creating the real infrastructure. Conclusion In this post, you learned how to use Terraform test and develop various test scenarios. You also learned how to incorporate Terraform test in a CI/CD pipeline. Lastly, we also discussed various testing strategies for Terraform configurations and modules. For more information about Terraform test, we recommend the Terraform test documentation and tutorial. To get hands-on practice building a Terraform module validation pipeline and Terraform deployment pipeline, check out the Terraform CI/CD and Testing on AWS Workshop. Authors Kevon Mayers Kevon Mayers is a Solutions Architect at AWS. Kevon is a Terraform Contributor and has led multiple Terraform initiatives within AWS. Prior to joining AWS, he worked as a DevOps Engineer and Developer, and before that with the GRAMMYs/The Recording Academy as a Studio Manager, Music Producer, and Audio Engineer. He also owns a professional production company, MM Productions. Welly Siauw Welly Siauw is a Principal Partner Solution Architect at Amazon Web Services (AWS). He spends his days working with customers and partners, solving architectural challenges. He is passionate about service integration and orchestration, serverless, and artificial intelligence (AI) and machine learning (ML). He has authored several AWS blog posts and actively leads AWS Immersion Days and Activation Days. Welly spends his free time tinkering with espresso machines and outdoor hiking. View the full article
  12. When working with Terraform, one common question that arises is whether to include the .terraform.lock.hcl file in the Git repository or leave it out by adding it to .gitignore. This decision impacts the version control practices and reproducibility of your infrastructure deployments. In this article, we’ll explore the contents of the .terraform.lock.hcl file, discuss why […] The article Should .terraform.lock.hcl file be added to .gitignore or committed to Git repo? appeared first on Build5Nines. View the full article
  13. The HashiCorp Terraform Cloud Operator for Kubernetes continuously reconciles infrastructure resources using Terraform Cloud. When you use the operator to create a Terraform Cloud workspace, you must reference a Terraform Cloud API token stored in a Kubernetes secret. One way to better secure these secrets instead of hard-coding them involves storing and managing secrets in a centralized secrets manager, like HashiCorp Vault. In this approach, you need to synchronize secrets revoked and created by Vault into Kubernetes. An operator like the Vault Secrets Operator (VSO) can retrieve secrets from an external secrets manager and store them in a Kubernetes secret for workloads to use. This post demonstrates how to use the Vault Secrets Operator (VSO) to retrieve dynamic secrets from Vault and write them to a Kubernetes secret for the Terraform Cloud Operator to reference when creating a workspace. While the example focuses on Terraform Cloud API tokens, you can extend this workflow to any Kubernetes workload or custom resource that requires a secret from Vault. Install Vault and operators The Terraform Cloud Operator requires a user or team API token with permissions to manage workspaces, plan and apply runs, and upload configurations. While you can manually generate a token in the Terraform Cloud UI, configure Vault to issue API tokens for Terraform Cloud. The Terraform Cloud secrets engine for Vault handles the issuance and revocation of different kinds of API tokens in Terraform Cloud. Vault manages the token’s lifecycle and audits its usage and distribution once you reference it in the Terraform Cloud Operator. The demo repository for this post sets up the required infrastructure resources, including a: Vault cluster on HCP Vault Kubernetes cluster on AWS After provisioning infrastructure resources, the demo repo installs Helm charts for Vault, Terraform Cloud Operator, and Vault Secrets Operator in their own namespaces using Terraform. If you do not use Terraform, install each Helm chart by CLI. First, install the Vault Helm chart. If applicable, update the values to reference an external Vault cluster: $ helm repo add hashicorp https://helm.releases.hashicorp.com $ helm install vault hashicorp/vaultInstall the Helm chart for the Terraform Cloud Operator with its default values: $ helm install terraform-cloud-operator hashicorp/terraform-cloud-operatorInstall the Helm chart for VSO with a default Vault connection to your Vault cluster: $ helm install vault-secrets-operator hashicorp/vault-secrets-operator \ --set defaultVaultConnection.enabled=true \ --set defaultVaultConnection.address=$VAULT_ADDRAny custom resources created by VSO will use the default Vault connection. If you have different Vault clusters, you can define a VaultConnection custom resource and reference it in upstream dependencies. After installing Vault and the operators, configure the Kubernetes authentication method in Vault. This ensures VSO can use Kubernetes service accounts to authenticate to Vault. Set up secrets in Vault After installing a Vault cluster and operators into Kubernetes, set up the secrets engines for your Kubernetes application. The Terraform Cloud Operator needs a Terraform Cloud API token with permissions to create projects and workspaces and upload Terraform configuration. On the Terraform Cloud Free tier, you can generate a user token with administrative permissions or a team token for the “owners” team to create workspaces and apply runs. 
To further secure the operator’s access to Terraform Cloud, upgrade to a plan that supports teams to secure the Terraform Cloud Operator’s access to Terraform Cloud. Then, create a team, associate a team token with it, and scope the token’s access to a Terraform Cloud project. This ensures that the Terraform Cloud Operator has sufficient access to create workspaces and upload configuration in a given project without giving it access to an entire organization. Configure the Terraform Cloud secrets engine for Vault to handle the lifecycle of the Terraform Cloud team API token. The demo repo uses Terraform to enable the backend. Pass in an organization or user token with permissions to create other API tokens. resource "vault_terraform_cloud_secret_backend" "apps" { backend = "terraform" description = "Manages the Terraform Cloud backend" token = var.terraform_cloud_root_token }Create a role for each Terraform Cloud team that needs to use the Terraform Cloud Operator. Then pass the team ID to the role to configure the secrets engine to generate team tokens: resource "vault_terraform_cloud_secret_role" "apps" { backend = vault_terraform_cloud_secret_backend.apps.backend name = "payments-app" organization = var.terraform_cloud_organization team_id = "team-*******" }Build a Vault policy that allows read access to the secrets engine credentials endpoint and role: resource "vault_policy" "terraform_cloud_secrets_engine" { name = "terraform_cloud-secrets-engine-payments-app" policy = <The Terraform Cloud Operator needs the Terraform Cloud team token to create workspaces, upload configurations, and start runs. However, you may also want to pass secrets to workspace variables. For example, a Terraform module may need a username and password to configure HCP Boundary. You can store the credentials in Vault’s key-value secrets engine and configure a Vault policy to read the static secrets. After setting up policies to read the required secrets, create a Vault role for the Kubernetes authentication method, which allows the terraform-cloud service account to authenticate to Vault and retrieve the Terraform Cloud token: resource "vault_kubernetes_auth_backend_role" "terraform_cloud_token" { backend = "kubernetes" role_name = "payments-app" bound_service_account_names = ["terraform-cloud"] bound_service_account_namespaces = ["payments-app"] token_ttl = 86400 token_policies = [ vault_policy.terraform_cloud_secrets_engine.name, ] }Refer to the complete repo to configure the Terraform Cloud secrets engine and store static secrets for the Terraform Cloud workspace variables. Sync secrets from Vault to Kubernetes The Terraform Cloud Operator includes a custom resource to create workspaces and define workspace variables. However, dynamic variables refer to values stored in a Kubernetes Secret or ConfigMap. Use VSO to synchronize secrets from Vault into native Kubernetes secrets. The demo repo for this post retrieves the Terraform Cloud team token and static credentials and stores them as a Kubernetes secret. VSO uses a Kubernetes service account linked to the Kubernetes authentication method role in Vault. 
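Note that the policy body of the vault_policy resource above is cut off in this excerpt. As a rough sketch only (the credentials path below is inferred from the terraform mount and payments-app role used in this post, not copied from the original article), a read-only policy for the secrets engine might look like this:

resource "vault_policy" "terraform_cloud_secrets_engine" {
  name = "terraform_cloud-secrets-engine-payments-app"

  # Assumed path: allows reading team tokens generated by the payments-app role
  policy = <<-EOT
    path "terraform/creds/payments-app" {
      capabilities = ["read"]
    }
  EOT
}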
First, deploy a service account and service account token for terraform-cloud to the payments-app namespace: apiVersion: v1 kind: ServiceAccount metadata: name: terraform-cloud namespace: payments-app --- apiVersion: v1 kind: Secret metadata: name: terraform-cloud-token namespace: payments-app type: kubernetes.io/service-account-tokenThen, configure a VaultAuth resource for VSO to use the terraform-cloud service account and authenticate to Vault using the kubernetes mount path and payments-app role defined for the authentication method. The configuration shown here sets Vault namespace to admin for your HCP Vault cluster: apiVersion: secrets.hashicorp.com/v1beta1 kind: VaultAuth metadata: name: terraform-cloud namespace: payments-app spec: method: kubernetes mount: kubernetes namespace: admin kubernetes: role: payments-app serviceAccount: terraform-cloud audiences: - vaultTo sync the Terraform Cloud team token required by the Terraform Cloud Operator to a Kubernetes secret, define a VaultDynamicSecret resource to retrieve the credentials. VSO uses this resource to retrieve credentials from the terraform/creds/payments-app path in Vault and creates a Kubernetes secret named terraform-cloud-team-token with the token value. The resource refers to VaultAuth for authentication to Vault: apiVersion: secrets.hashicorp.com/v1beta1 kind: VaultDynamicSecret metadata: name: terraform-cloud-team-token namespace: payments-app spec: mount: terraform path: creds/payments-app destination: create: true name: terraform-cloud-team-token type: Opaque vaultAuthRef: terraform-cloudWhen you apply these manifests to your Kubernetes cluster, VSO retrieves the Terraform Cloud team token and stores it in a Kubernetes secret. The Operator’s logs indicate the handling of the VaultAuth resource and synchronization of the VaultDynamicSecret: $ kubectl logs -n vault-secrets-operator $(kubectl get pods \ -n vault-secrets-operator \ -l app.kubernetes.io/instance=vault-secrets-operator -o name) 2024-03-14T16:38:47Z DEBUG events Successfully handled VaultAuth resource request {"type": "Normal", "object": {"kind":"VaultAuth","namespace":"payments-app","name":"terraform-cloud","uid":"e7c0464e-9ce8-4f3f-953a-f8eb10853001","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"331817"}, "reason": "Accepted"} 2024-03-14T16:38:47Z DEBUG events Secret synced, lease_id="", horizon=0s {"type": "Normal", "object": {"kind":"VaultDynamicSecret","namespace":"payments-app","name":"terraform-cloud-team-token","uid":"d1563879-41ee-4817-a00b-51fe6cff7e6e","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"331826"}, "reason": "SecretSynced"}Verify that the Kubernetes secret terraform-cloud-team-token contains the Terraform Cloud team token: $ kubectl get secrets -n payments-app \ terraform-cloud-team-token -o jsonpath='{.data.token}' | base64 -d ******.****.*****Create a Terraform Cloud workspace using secrets You can now configure other Kubernetes resources to reference the secret synchronized by VSO. 
For the Terraform Cloud Operator, deploy a Workspace resource that references the Kubernetes secret with the team token: apiVersion: app.terraform.io/v1alpha2 kind: Workspace metadata: name: payments-app-database namespace: payments-app spec: organization: hashicorp-stack-demoapp project: name: payments-app token: secretKeyRef: name: terraform-cloud-team-token key: token name: payments-app-database ## workspace variables omitted for clarityThe team token has administrator access to create and update workspaces in the “payments-app” project in Terraform Cloud. You can use a similar approach to passing Kubernetes secrets as workspace variables. Deploy a Module resource to apply a Terraform configuration in a workspace. The resource references a module source, variables to pass to the module, and outputs to extract. The Terraform Cloud Operator uploads a Terraform configuration to the workspace defining the module. apiVersion: app.terraform.io/v1alpha2 kind: Module metadata: name: database namespace: payments-app spec: organization: hashicorp-stack-demoapp token: secretKeyRef: name: terraform-cloud-team-token key: token destroyOnDeletion: true module: source: "joatmon08/postgres/aws" version: "14.9.0" ## module variables omitted for clarityTerraform Cloud will start a run to apply the configuration in the workspace. Rotate the team API token Terraform Cloud allows only one active team token at a time. As a result, the Terraform Cloud secrets engine does not assign leases to team tokens and requires manual rotation. However, Terraform Cloud does allow issuance of multiple user tokens. The secrets engine assigns leases to user API tokens and will rotate them dynamically. To rotate a team token, run a Vault command to rotate the role for a team token in Terraform Cloud: $ vault write -f terraform/rotate-role/payments-appVSO must update the Kubernetes secret with the new token when the team token is rotated. Edit a field in the VaultDynamicSecret resource, such as renewalPercent, to force VSO to resynchronize: $ kubectl edit VaultDynamicSecret terraform-cloud-team-token -n payments-app # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. 
# apiVersion: secrets.hashicorp.com/v1beta1 kind: VaultDynamicSecret metadata: annotations: ## omitted spec: ## omitted renewalPercent: 60 vaultAuthRef: terraform-cloudVSO recognizes the new team token in Vault and reconciles it with the Kubernetes secret: $ kubectl logs -n vault-secrets-operator $(kubectl get pods \ -n vault-secrets-operator \ -l app.kubernetes.io/instance=vault-secrets-operator -o name) 2024-03-18T16:10:19Z INFO Vault secret does not support periodic renewal/refresh via reconciliation {"controller": "vaultdynamicsecret", "controllerGroup": "secrets.hashicorp.com", "controllerKind": "VaultDynamicSecret", "VaultDynamicSecret": {"name":"terraform-cloud-team-token","namespace":"payments-app"}, "namespace": "payments-app", "name": "terraform-cloud-team-token", "reconcileID": "3d0a15f1-0edf-450b-8be1-6319cd3b2d02", "podUID": "4eb7f16a-cfcb-484e-b3da-54ddbfc6a6a6", "requeue": false, "horizon": "0s"} 2024-03-18T16:10:19Z DEBUG events Secret synced, lease_id="", horizon=0s {"type": "Normal", "object": {"kind":"VaultDynamicSecret","namespace":"payments-app","name":"terraform-cloud-team-token","uid":"f4f0483c-895d-4b05-894c-24fdb1518489","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"1915673"}, "reason": "SecretRotated"Note that this manual workflow for rotating tokens applies specifically to team and organization tokens generated by the Terraform Cloud secrets engine. User tokens have leases, which VSO handles automatically. VSO also supports the rotation of credentials for static roles in database secrets engines. Set the allowStaticCreds attribute in the VaultDynamicSecret resource for VSO to synchronize changes to static roles. Learn more As shown in this post, rather than store Terraform Cloud API tokens as secrets in Kubernetes, you can manage the tokens with Vault and use the Vault Secrets Operator to synchronize them to Kubernetes secrets for the Terraform Cloud Operator to use. By managing the Terraform Cloud API token in Vault, you can audit its usage and handle its lifecycle in one place. In general, the pattern of synchronizing to a Kubernetes secret allows any permitted Kubernetes custom resource or workload to use the secret while Vault manages its lifecycle. As a result, you can track the usage of secrets across your Kubernetes workloads without refactoring applications already using Kubernetes secrets. Learn more about the Vault Secrets Operator in our VSO documentation. If you want to further secure your secrets in Kubernetes, check out our blog post comparing three methods to inject secrets from Vault into Kubernetes workloads. If you support a GitOps workflow in your organization and want to empower teams to deploy infrastructure resources using Kubernetes, review our documentation on the Terraform Cloud Operator to deploy and manage infrastructure resources through modules. Refer to GitHub for a complete example provisioning a database and other infrastructure resources. View the full article
  14. Terraform from HashiCorp is provided as a command-line tool that must first be installed locally before execution. It’s easy to install, as the full tool is contained within a single executable. This makes it easy to put terraform in any folder on your machine for use. Although, to make it executable from anywhere on your […] The article Terraform: Install Latest Version on macOS and Linux appeared first on Build5Nines. View the full article
  15. The Developer Relations team at HashiCorp loves to hear from the community of certified users, learning about their motivations for becoming certified and how their certifications have impacted their careers. The first in a series, what follows is an interview with Mario Rodríguez Hernández, a technologist from the Canary Islands who’s worked in roles at the top, middle, and bottom of org charts across several companies. Mario is certified in Microsoft Azure, Amazon Web Services, Google Cloud, HashiCorp Terraform, Kubernetes, and many other technologies. Lauren Carey, HashiCorp Developer Relations: Can you tell us about yourself, your background, and your experience with HashiCorp? Mario Rodríguez Hernández: I am from Spain, from the Canary Islands, and I've worked in the IT field for more than 20 years. I started as a help agent, then I moved to system administrator, coding, and different Enterprise Resource Planning (ERP) tools like Microsoft Dynamics, BC and Finance & Operations, and other kinds of tools. Eventually, I moved up into leadership, working as a CIO at different companies in the Canary Islands, and managing 10 people. Eventually, though, I wanted to return to the more technical side of things. I was frustrated with the politics and the bureaucratic type of work I had to do. I wanted to feel the satisfaction of doing things again, of making things happen. So, that’s when I decided to pivot to a more technical role again. During the pandemic, I started studying cloud and DevOps, and I started to prepare for and take different certifications. I now have maybe 60+ certifications. I started with the Azure certification and moved to AWS, and I am now fully certified on Google Cloud. I also felt that Terraform and the infrastructure as code framework was an important piece of the DevOps and cloud-native fields, so I decided to go for the Terraform Associate certification as a part of my transition as well. Lauren: Tell us about your current role. Mario: Today is my first day at Minsait as a senior specialist in the fields of architecture and DevOps. Mainly, I work for utilities companies in Spain and out of Spain. Minsait is part of Indra Group, which is a big company in Spain. I work for customers in Argentina, Spain, Africa, and other places all over the world, mainly focusing on utility companies and electric companies. We have different products for utility and electrical companies, and we develop for the specific needs of the customer too. I mainly have the role of Cloud Architect, but I also participate in DevOps, pipelines, and creating infrastructure with Terraform and multiple clouds, like AWS and Azure. Today I am working on a project with Terraform to create infrastructure in AWS. Terraform is a well-known tool in the IaC field, and having a certification demonstrates to Minsait that I know how to use this tool. Lauren: What motivated you to earn the Terraform certification? Mario: I chose to study Terraform, specifically, because it is the number one product in the infrastructure as code (IaC) space around the world, so I knew it would open doors to help me make the career changes I described earlier. It implements the strategy of IaC in a tool. It helps you create infrastructure and allows you to read the infrastructure that other people have created. It gives you the opportunity to make versions of infrastructure, to experiment, to fail, and to roll back to other versions. 
It is a great tool to develop in, but also to work in with other people and to collaborate in, which is very important. Instead of creating infrastructure manually every time through a point-and-click cloud-vendor portal, you can write reusable code that automates this process quickly and easily. Lauren: What role did the certification play in your career move to a more technical role? Do you think that having the certification helped you get a job at a company? Mario: Of course. As I said before, I'm from the Canary Islands and compared to the Spanish market, it is a very, very small place. If you want to shine in the world market, you have to stand out from others, and the way that I found to stand out was with the certifications. As I said, before: I did have knowledge of other tools and systems, but I had no certifications. I know what I am doing, but others can’t be sure. A certification is a badge proving that you know the technology. It opened the door for me to find a job. All of the companies that gave me an interview talked to me about my certification. All of them. I'm pretty sure that if I didn’t have this certification, maybe half or more of these interviews would not have happened. Lauren: What was your experience of preparing for and taking the Terraform Associate exam and preparing for it. Did you use any of our materials? Did you use outside materials? Mario: I used both. I used HashiCorp’s materials, which I think are very good; the website, documentation, and sample questions are all very good. But I combined that with books related to the certification. I practiced, of course. I had my little practice exercises. But, mainly I think that it is a very clear tool, so it is easy to learn. It's very, very logical. If you have experience with another programming language, then it's very easy to transition to. If you know a little English, all of the terms are easy to learn. It's a perfect tool. Lauren: Do you feel more confident starting new projects because you have the certification? Mario: Having my Terraform certification makes my employer say, “Oh you have a Terraform certification, you are a Terraform expert, I have a couple of projects for you.” If I didn’t have this certification, it would be difficult to transition onto a project like that. So yes, certification is very important for the type of projects I want to work on. Also, my interviewer said, they didn’t have any Terraform specialists on the team, so it made me stand out that I had the certification and could be a Terraform expert for them. Lauren: Do you have any career plans that you can envision certifications fitting into? Mario: Now I am studying more Kubernetes and things like that. I recently passed the CKAD (Kubernetes Certified Application Developer) and am now already preparing for the CKA (Certified Kubernetes Administrator). When you get deeper into the cloud and start to study things like Kubernetes, you feel the need to dive more into microservices, service mesh, and how to manage secrets in applications. HashiCorp Vault and Consul are perfect for that. Those certifications are on my roadmap. I don't know if it will happen this year, because I set my goals at the beginning of the year, but I surely will add another certification from HashiCorp. Lauren: Did you know that we have a Terraform professional certification coming out? Our Professional certifications are live scenarios, so you're actually working in the application during the exam. 
So that'll be a good one to sort of, you know, take it to the next level. Mario: It’s perfect, because now I have expertise with Terraform. I'm doing real projects, so that’s perfect for me. Lauren: If someone asked your opinion on HashiCorp certifications, what would you say? Mario: I think it’s a great certification. And taking into account the prices of other certifications, it’s inexpensive, especially considering the value you get from it. There are other certifications on the market that are quite expensive, but really I think that the Terraform Associate exam is under-priced. It is a great offer for us in the community. I recommend studying Terraform, practicing with Terraform, and then trying to earn the certification. Lauren: What other thoughts or stories have we not covered yet that you think might be helpful for others to know? What's something that you want to see in this interview that I didn't cover? Mario: Oh, let me see, it's important to me that I'm a single person from a little island off of a little country, and you are talking to me from such a big company in the IT world all because I studied and I passed a certification. It means a lot to me. It serves as motivation for anyone that, if you want something, anything is possible. Lauren: Finally, how did you celebrate when you got your certification? Did you share it? Mario: I always share new certifications with my network on LinkedIn. I have my teammates and followers, and we share our certifications together. View the full article
  16. Cloud spending can be a big concern for financial operations (FinOps) teams. Google Cloud’s Active Assist tooling can help, generating cost savings recommendations for optimizing Google Cloud resources. Then there are tools like Recommendation Hub and FinOps Hub, which offer consolidated views into recommendations for project owners and billing administrators. These tools are a big step forward in making recommendations more actionable and accessible for teams. However, using them requires granting higher permissions in the console — something that could give more access than security teams are comfortable with. Also, some organizations require more flexibility to tailor the views and reports using information stored in customer-persisted labels or tags. To make these views easier to develop, we are pleased to announce an OSS Recommendations Dashboard, a Looker Studio dashboard powered by open-source infrastructure scripts. The main goal of the dashboard is to pinpoint savings opportunities organization-wide, and to enable default dashboards with filters for folders and projects, helping users focus on key savings areas. This setup simplifies team collaboration and allows organizations to take advantage of potential savings. Additionally, these filters let you customize dashboards for your unique organizational requirements. This release of the OSS Recommendations Dashboard focuses on four primary features: 1. Visibility across the organization Understanding the challenge administrators face in overseeing numerous projects, the dashboard offers a comprehensive view that encapsulates all savings opportunities throughout the organization. 2. Built-in filters for folders and projects By highlighting the organizational structure, built-in filters facilitate easier interaction with relevant teams, making it easier to identify and act upon savings opportunities. 3. Looker Studio for extendability Building on Looker Studio delivers customizable analytics and reporting tools so that administrators can tailor their dashboard to meet their unique needs. 4. Gamification Gamification elements help to enhance user engagement and motivation, making routine project management a more interactive and rewarding experience. The OSS Recommendations Dashboard also includes a week-by-week line chart that showcases projects that have successfully reduced their number of recommendations, and grouping recommendations by projects and folders is another feature that simplifies things for FinOps teams. The tool also ensures easy information access by allowing users to share dashboards and relevant data to a wider audience without having to navigate through excessive permissions. Architecturally, the OSS Recommendations Dashboard is built on Google Cloud and integrated with Active Assist. The solution taps into a mix of tools: Cloud Scheduler Cloud Workflows Cloud Asset Inventory Active Assist Recommendations BigQuery Looker Studio Terraform Cloud Asset Inventory and Active Assist Recommendations are the dashboard’s foundational data sources. Cloud Scheduler and Workflows serve as the orchestration mechanisms, ensuring timely and consistent extraction of data into BigQuery. BigQuery manages and processes this data, setting the stage for Looker Studio to present and visualize these insights. To streamline the deployment process, we've encapsulated this design within a convenient Terraform module. For more on deploying this solution, refer to this guide. 
Google Cloud’s Active Assist can help you optimize your cloud resource operations, and combined with the OSS Recommendations Dashboard it can have a profound impact on your FinOps team. By providing project-level visibility and an organization-wide perspective, the OSS Recommendations Dashboard helps FinOps teams gain better control over cloud spending. Moreover, because it’s built on Terraform, you can implement this solution promptly and start getting visibility into your cloud spending fast. If you’re eager for more insight into your Google Cloud spending and want to optimize your resources, the OSS Recommendations Dashboard provides not just clarity, but actionable intelligence. Dive in, explore, and let your organization bask in the newfound savings! View the full article
  17. In November 2023, we announced the general availability of the Terraform Cloud Operator for Kubernetes. The Terraform Cloud Operator streamlines infrastructure management, allowing platform teams to offer a Kubernetes-native experience for their users while standardizing on Terraform workflows. Today we are excited to announce the general availability of version 2.3 of the Terraform Cloud Operator, with the ability to initiate workspace runs declaratively. Introducing workspace run operations In previous versions of the Terraform Cloud Operator v2, the only way to start a run was by patching the restartedAt timestamp in the Module resource. But this approach was not intuitive, did not work for all types of workspaces and workflows, and did not allow users to control the type of run to perform. This challenge hindered migration efforts to the newest version of the Terraform Cloud Operator. . Now with version 2.3, users can declaratively start plan, apply, and refresh runs on workspaces. This enhances self-service by allowing developers to initiate runs on any workspace managed by the Operator, including VCS-driven workspaces. The Workspace custom resource in version 2.3 of the operator supports three new annotations to initiate workspace runs: workspace.app.terraform.io/run-new: Set this annotation to "true" to trigger a new run. workspace.app.terraform.io/run-type: Set to plan (default), apply, or refresh to control the type of run. workspace.app.terraform.io/run-terraform-version: Specifies the version of Terraform to use for a speculative plan run. For other run types, the workspace version is used. As an example, a basic Workspace resource looks like this: apiVersion: app.terraform.io/v1alpha2 kind: Workspace metadata: name: this spec: organization: kubernetes-operator token: secretKeyRef: name: tfc-operator key: token name: kubernetes-operatorUsing kubectl as shown here, annotate the above resource to immediately start a new apply run: kubectl annotate workspace this \ workspace.app.terraform.io/run-new="true" \ workspace.app.terraform.io/run-type=apply --overwriteThe annotation is reflected in the Workspace resource for observability: apiVersion: app.terraform.io/v1alpha2 kind: Workspace metadata: annotations: workspace.app.terraform.io/run-new: "true" workspace.app.terraform.io/run-type: apply name: this spec: organization: kubernetes-operator token: secretKeyRef: name: tfc-operator key: token name: kubernetes-operatorAfter the run is successfully triggered, the operator will set the run-new value back to "false". Learn more and get started HashiCorp works to continuously improve the Kubernetes ecosystem by enabling platform teams at scale. Learn more about the Terraform Cloud Operator by reading the documentation and the Deploy infrastructure with the Terraform Cloud Kubernetes Operator v2 tutorial. If you are completely new to Terraform, sign up for Terraform Cloud and get started using the Free offering today. View the full article
  18. Terraform, the leading IaC (Infrastructure as Code) orchestrator, was created 9 years ago by HashiCorp and is today considered the de facto tool for managing cloud infrastructure with code. What started as an open-source tool quickly became one of the largest software communities in the world, and for every problem you may encounter, someone has already found and published a solution. At the end of the day, DevOps managers are looking for a simple, predictable, drama-free way to manage their infrastructure, and this is probably why many have chosen Terraform, which is a well-known, well-established tool with a very large community. View the full article
  19. HashiCorp Terraform Cloud run tasks have long been a staple for securely sharing Terraform-related data with trusted integration partners. And with the newest enhancements, the benefits go even further. These improvements empower teams to seamlessly expand their use of essential third-party integrations, facilitating automation, configuration management, security, compliance, and orchestration tasks. Recent efforts by the HashiCorp Terraform team have focused on refining the process of associating run tasks within Terraform organizations, significantly reducing day-to-day overhead. Plus, the introduction of a new post-apply stage broadens the potential use cases for run tasks, offering even more value to users. Scoping organizational run tasks Initially, run tasks were tailored to meet the needs of teams provisioning infrastructure with Terraform Cloud. Recognizing the diversity of tools used in Terraform workflows, we integrated them seamlessly into Terraform Cloud as first-class run task integrations. This gave teams additional flexibility in selecting and managing run tasks for their workspaces. As run task adoption grows within organizations, platform operations teams face challenges in ensuring consistency across the organization. Managing individual run task assignments can become cumbersome, with platform teams striving for standardization across workspaces. To address this, we've introduced scopes to organizational run tasks in Terraform Cloud. This feature allows platform teams to define the scope of organizational run tasks, targeting them globally and specifying evaluation stages for enforcement. Organization-wide enforcement eliminates configuration burden and reduces the risk of compliance gaps as new workspaces are created. Multi-stage support further enhances the run task workflow, streamlining configuration and reducing redundant code when using the Terraform Cloud/Enterprise (tfe) provider for run task provisioning and management. Introducing post-apply run tasks Post-provisioning tasks are crucial for managing and optimizing infrastructure on Day 2 and beyond. These tasks include configuration management, monitoring, performance optimization, security management, cost optimization, and scaling to help ensure efficient, secure, and cost-effective operations. Recent discussions with customers underscored the need to securely integrate third-party tools and services into Terraform workflows after infrastructure is provisioned with Terraform Cloud. Post-provisioning processes often require manual intervention before systems or services are production-ready. While API-driven workflows can expedite post-provisioning, the lack of a common workflow poses implementation challenges. In response to these concerns, we've introduced a new post-apply stage to the run task workflow. This stage lets users seamlessly incorporate post-provisioning tasks that automate configuration management, compliance checks, and other post-deployment activities. The feature simplifies the integration of Terraform workflows with users' toolchains, prioritizing security and control. Refined user experience for run tasks As part of the implementation of run task scopes, we've extended support for multi-stage functionality to workspace run tasks. We also introduced two new views that offer users the flexibility to see the run tasks associated with their workspace. Now workspace administrators can choose to view their run task associations as a list or grouped by assigned stages. 
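As a hedged sketch of what run task provisioning with the tfe provider can look like (the resource names, organization, endpoint URL, and especially the stages argument below are assumptions; verify the exact attribute names against the tfe provider documentation for your version):

resource "tfe_organization_run_task" "security_scan" {
  organization = "my-org"                      # hypothetical organization
  name         = "security-scan"
  url          = "https://runtask.example.com" # hypothetical run task endpoint
  enabled      = true
}

resource "tfe_workspace_run_task" "security_scan" {
  workspace_id      = tfe_workspace.app.id     # assumes a tfe_workspace named "app" defined elsewhere
  task_id           = tfe_organization_run_task.security_scan.id
  enforcement_level = "mandatory"

  # Multi-stage assignment, including the new post-apply stage; this attribute
  # is an assumption here (older provider versions used a single "stage" argument).
  stages = ["post_plan", "post_apply"]
}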
Summary and resources The advancements in Terraform Cloud's run task workflow empower users to streamline infrastructure provisioning and management. You can elevate your workflow with scopes for organizational run tasks and harness the potential of the post-apply stage. To learn more, explore HashiCorp’s comprehensive run tasks documentation. Additionally, we provide a Terraform run task scaffolding project written in Go to help you write your own custom run task integration. If you're new to Terraform, sign up for Terraform Cloud today and start for free. View the full article
  20. In the realm of containerized applications, Kubernetes reigns supreme. But with great power comes great responsibility, especially when it comes to safeguarding sensitive data within your cluster. Terraform, the infrastructure-as-code darling, offers a powerful solution for managing Kubernetes Secrets securely and efficiently. This blog delves beyond the basics, exploring advanced techniques and considerations for leveraging Terraform to manage your Kubernetes Secrets. Understanding Kubernetes Secrets Kubernetes Secrets provides a mechanism to store and manage sensitive information like passwords, API keys, and tokens used by your applications within the cluster. These secrets are not directly exposed in the container image and are instead injected into the pods at runtime. View the full article
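The excerpt above stops before any configuration is shown. As a quick, hedged illustration of the pattern it describes (the resource and variable names here are placeholders, not from the original article), a Secret managed through the Terraform kubernetes provider might look like this:

variable "api_key" {
  type      = string
  sensitive = true
}

resource "kubernetes_secret" "app_credentials" {
  metadata {
    name      = "app-credentials" # placeholder name
    namespace = "default"
  }

  # The kubernetes provider base64-encodes these values when writing the Secret object.
  data = {
    api_key = var.api_key
  }

  type = "Opaque"
}

Keep in mind that any value written this way also lands in the Terraform state file, so the state itself must be stored and encrypted as carefully as the Secret.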
  21. Behave is a Python-based behavior-driven development (BDD) framework for writing human-readable tests that describe the expected behavior of software systems. Terraform, on the other hand, is an infrastructure as code (IaC) tool that streamlines the management of infrastructure by enabling developers to define resources and configurations in a declarative manner. By combining Behave's BDD approach with Terraform, you can ensure that infrastructure behaves as expected under various conditions. This integration facilitates early detection of issues and improves the reliability of infrastructure code. Using Behave for Terraform Testing Testing Terraform configurations with Behave involves a series of structured steps: View the full article
  22. HashiCorp Terraform is the world’s most widely used multi-cloud provisioning product. The Terraform ecosystem has notched more than 3,000 providers, 14,000 modules, and 250 million downloads. Terraform Cloud is the fastest way to adopt Terraform, providing everything practitioners, teams, and global businesses need to create and collaborate on infrastructure and manage risks for security, compliance, and operational constraints. This month, AWS AppFabric added support for Terraform Cloud, expanding an already long list of ways that Terraform can connect, secure and provision infrastructure with AWS. This post will explore the new AppFabric support and highlight two other key existing integrations: Dynamic provider credentials and AWS Service Catalog support for Terraform Cloud. AWS AppFabric support for Terraform Cloud AWS AppFabric now supports Terraform Cloud. IT administrators and security analysts can use AppFabric to quickly integrate with Terraform Cloud, aggregate enriched and normalized SaaS audit logs, and audit end-user access across their SaaS apps. This launch expands AWS AppFabric supported applications used across an organization. AWS AppFabric quickly connects SaaS applications, or data lakes like Amazon Security Lake. For Terraform Cloud users, this integration can accelerate time-to-market and help developers release new features to production faster with streamlined infrastructure provisioning and application delivery workflows. To learn more, visit the AWS AppFabric page and then check out how to connect AppFabric to your Terraform Cloud account. Dynamic credentials with the AWS provider Introduced early last year, Terraform Cloud's dynamic provider credentials let you establish a trust relationship between Terraform Cloud and AWS. They limit the blast radius of compromised credentials by using unique, single-use credentials for each Terraform run. Dynamic credentials also give you fine-grained control over the resources that each of your Terraform Cloud projects and workspaces can manage. Terraform Cloud supports dynamic credentials for AWS and Vault. To learn more, AWS and HashiCorp have since written a joint blog post on how to Simplify and Secure Terraform Workflows on AWS with Dynamic Provider Credentials and you can learn how to configure Dynamic Credentials with the AWS Provider at HashiCorp Developer. Terraform Cloud self-service provisioning with AWS Service Catalog In August 2023, AWS added AWS Service Catalog support for Terraform Cloud. This includes integrated access to key AWS Service Catalog features, including cataloging of standardized and pre-approved Terraform configurations, infrastructure as code templates, access control, resource provisioning with least-privilege access, versioning, sharing to thousands of AWS accounts, and tagging. By combining Terraform Cloud with AWS Service Catalog, we’re connecting the AWS Service Catalog interface that many customers already know, with the existing workflows and policy guardrails of Terraform Cloud. HashiCorp and AWS have since co-presented at HashiConf (Terraform Cloud self-service provisioning with AWS Service Catalog) and partnered on AWS’s blog post on How to Use AWS Service Catalog with HashiCorp Terraform Cloud, demonstrating the workflow for provisioning a new product and offering access to getting-started guides. 
Self-service infrastructure is no longer a dream Platform teams can use Terraform Cloud, HCP Waypoint, and the AWS Service Catalog to create simplified Terraform-based workflows for developers. Terraform modules can incorporate unit testing, built-in security, policy enforcement, and reliable version updates. Using these tools, platform teams can establish standardized workflows to deploy applications and deliver a smooth and seamless developer experience. Learn more by viewing AWS and HashiCorp’s recent Self-service infrastructure is no longer a dream talk from AWS re:Invent: View the full article
  23. Introduction Today customers want to reduce manual operations for deploying and maintaining their infrastructure. The recommended method to deploy and manage infrastructure on AWS is to follow Infrastructure-As-Code (IaC) model using tools like AWS CloudFormation, AWS Cloud Development Kit (AWS CDK) or Terraform. One of the critical components in terraform is managing the state file which keeps track of your configuration and resources. When you run terraform in an AWS CI/CD pipeline the state file has to be stored in a secured, common path to which the pipeline has access to. You need a mechanism to lock it when multiple developers in the team want to access it at the same time. In this blog post, we will explain how to manage terraform state files in AWS, best practices on configuring them in AWS and an example of how you can manage it efficiently in your Continuous Integration pipeline in AWS when used with AWS Developer Tools such as AWS CodeCommit and AWS CodeBuild. This blog post assumes you have a basic knowledge of terraform, AWS Developer Tools and AWS CI/CD pipeline. Let’s dive in! Challenges with handling state files By default, the state file is stored locally where terraform runs, which is not a problem if you are a single developer working on the deployment. However if not, it is not ideal to store state files locally as you may run into following problems: When working in teams or collaborative environments, multiple people need access to the state file Data in the state file is stored in plain text which may contain secrets or sensitive information Local files can get lost, corrupted, or deleted Best practices for handling state files The recommended practice for managing state files is to use terraform’s built-in support for remote backends. These are: Remote backend on Amazon Simple Storage Service (Amazon S3): You can configure terraform to store state files in an Amazon S3 bucket which provides a durable and scalable storage solution. Storing on Amazon S3 also enables collaboration that allows you to share state file with others. Remote backend on Amazon S3 with Amazon DynamoDB: In addition to using an Amazon S3 bucket for managing the files, you can use an Amazon DynamoDB table to lock the state file. This will allow only one person to modify a particular state file at any given time. It will help to avoid conflicts and enable safe concurrent access to the state file. There are other options available as well such as remote backend on terraform cloud and third party backends. Ultimately, the best method for managing terraform state files on AWS will depend on your specific requirements. When deploying terraform on AWS, the preferred choice of managing state is using Amazon S3 with Amazon DynamoDB. AWS configurations for managing state files Create an Amazon S3 bucket using terraform. Implement security measures for Amazon S3 bucket by creating an AWS Identity and Access Management (AWS IAM) policy or Amazon S3 Bucket Policy. Thus you can restrict access, configure object versioning for data protection and recovery, and enable AES256 encryption with SSE-KMS for encryption control. Next create an Amazon DynamoDB table using terraform with Primary key set to LockID. You can also set any additional configuration options such as read/write capacity units. Once the table is created, you will configure the terraform backend to use it for state locking by specifying the table name in the terraform block of your configuration. 
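As a minimal sketch of that backend configuration (the bucket, key, and table names below are placeholders; the post's own example values appear later in the walkthrough), the terraform block typically looks like this:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder S3 bucket for state
    key            = "env/dev/terraform.tfstate" # placeholder path to the state file
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"      # placeholder DynamoDB table used for state locking
  }
}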
For a single AWS account with multiple environments and projects, you can use a single Amazon S3 bucket. If you have multiple applications in multiple environments across multiple AWS accounts, you can create one Amazon S3 bucket for each account. In that Amazon S3 bucket, you can create appropriate folders for each environment, storing project state files with specific prefixes. Now that you know how to handle terraform state files on AWS, let’s look at an example of how you can configure them in a Continuous Integration pipeline in AWS. Architecture Figure 1: Example architecture on how to use terraform in an AWS CI pipeline This diagram outlines the workflow implemented in this blog: The AWS CodeCommit repository contains the application code The AWS CodeBuild job contains the buildspec files and references the source code in AWS CodeCommit The AWS Lambda function contains the application code created after running terraform apply Amazon S3 contains the state file created after running terraform apply. Amazon DynamoDB locks the state file present in Amazon S3 Implementation Pre-requisites Before you begin, you must complete the following prerequisites: Install the latest version of AWS Command Line Interface (AWS CLI) Install terraform latest version Install latest Git version and setup git-remote-codecommit Use an existing AWS account or create a new one Use AWS IAM role with role profile, role permissions, role trust relationship and user permissions to access your AWS account via local terminal Setting up the environment You need an AWS access key ID and secret access key to configure AWS CLI. To learn more about configuring the AWS CLI, follow these instructions. Clone the repo for complete example: git clone https://github.com/aws-samples/manage-terraform-statefiles-in-aws-pipeline After cloning, you could see the following folder structure: Figure 2: AWS CodeCommit repository structure Let’s break down the terraform code into 2 parts – one for preparing the infrastructure and another for preparing the application. Preparing the Infrastructure The main.tf file is the core component that does below: It creates an Amazon S3 bucket to store the state file. We configure bucket ACL, bucket versioning and encryption so that the state file is secure. It creates an Amazon DynamoDB table which will be used to lock the state file. It creates two AWS CodeBuild projects, one for ‘terraform plan’ and another for ‘terraform apply’. Note – It also has the code block (commented out by default) to create AWS Lambda which you will use at a later stage. AWS CodeBuild projects should be able to access Amazon S3, Amazon DynamoDB, AWS CodeCommit and AWS Lambda. So, the AWS IAM role with appropriate permissions required to access these resources are created via iam.tf file. Next you will find two buildspec files named buildspec-plan.yaml and buildspec-apply.yaml that will execute terraform commands – terraform plan and terraform apply respectively. Modify AWS region in the provider.tf file. Update Amazon S3 bucket name, Amazon DynamoDB table name, AWS CodeBuild compute types, AWS Lambda role and policy names to required values using variable.tf file. You can also use this file to easily customize parameters for different environments. With this, the infrastructure setup is complete. You can use your local terminal and execute below commands in the same order to deploy the above-mentioned resources in your AWS account. 
terraform init terraform validate terraform plan terraform apply Once the apply is successful and all the above resources have been successfully deployed in your AWS account, proceed with deploying your application. Preparing the Application In the cloned repository, use the backend.tf file to create your own Amazon S3 backend to store the state file. By default, it will have below values. You can override them with your required values. bucket = "tfbackend-bucket" key = "terraform.tfstate" region = "eu-central-1" The repository has sample python code stored in main.py that returns a simple message when invoked. In the main.tf file, you can find the below block of code to create and deploy the Lambda function that uses the main.py code (uncomment these code blocks). data "archive_file" "lambda_archive_file" { …… } resource "aws_lambda_function" "lambda" { …… } Now you can deploy the application using AWS CodeBuild instead of running terraform commands locally which is the whole point and advantage of using AWS CodeBuild. Run the two AWS CodeBuild projects to execute terraform plan and terraform apply again. Once successful, you can verify your deployment by testing the code in AWS Lambda. To test a lambda function (console): Open AWS Lambda console and select your function “tf-codebuild” In the navigation pane, in Code section, click Test to create a test event Provide your required name, for example “test-lambda” Accept default values and click Save Click Test again to trigger your test event “test-lambda” It should return the sample message you provided in your main.py file. In the default case, it will display “Hello from AWS Lambda !” message as shown below. Figure 3: Sample Amazon Lambda function response To verify your state file, go to Amazon S3 console and select the backend bucket created (tfbackend-bucket). It will contain your state file. Figure 4: Amazon S3 bucket with terraform state file Open Amazon DynamoDB console and check your table tfstate-lock and it will have an entry with LockID. Figure 5: Amazon DynamoDB table with LockID Thus, you have securely stored and locked your terraform state file using terraform backend in a Continuous Integration pipeline. Cleanup To delete all the resources created as part of the repository, run the below command from your terminal. terraform destroy Conclusion In this blog post, we explored the fundamentals of terraform state files, discussed best practices for their secure storage within AWS environments and also mechanisms for locking these files to prevent unauthorized team access. And finally, we showed you an example of how efficiently you can manage them in a Continuous Integration pipeline in AWS. You can apply the same methodology to manage state files in a Continuous Delivery pipeline in AWS. For more information, see CI/CD pipeline on AWS, Terraform backends types, Purpose of terraform state. Arun Kumar Selvaraj Arun Kumar Selvaraj is a Cloud Infrastructure Architect with AWS Professional Services. He loves building world class capability that provides thought leadership, operating standards and platform to deliver accelerated migration and development paths for his customers. His interests include Migration, CCoE, IaC, Python, DevOps, Containers and Networking. Manasi Bhutada Manasi Bhutada is an ISV Solutions Architect based in the Netherlands. She helps customers design and implement well architected solutions in AWS that address their business problems. She is passionate about data analytics and networking. 
Beyond work she enjoys experimenting with food, playing pickleball, and diving into fun board games. View the full article
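A quick editorial aside on the backend.tf file described in the walkthrough above: with the default values shown, a minimal S3 backend block generally looks like the sketch below. The dynamodb_table and encrypt arguments are assumptions about how the sample wires up locking and encryption; check the repository's backend.tf for the exact settings.

terraform {
  backend "s3" {
    bucket         = "tfbackend-bucket"  # default bucket name from the walkthrough
    key            = "terraform.tfstate" # object key for the state file
    region         = "eu-central-1"
    dynamodb_table = "tfstate-lock"      # assumed: DynamoDB table used for state locking
    encrypt        = true                # assumed: server-side encryption of the state object
  }
}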
24. The HashiCorp Terraform ecosystem continues to expand with new integrations that provide additional capabilities to Terraform Cloud, Enterprise, and Community edition users as they provision and manage their cloud and on-premises infrastructure. Terraform is the world's most widely used multi-cloud provisioning product. Whether you're deploying to Amazon Web Services (AWS), Microsoft Azure, Google Cloud, other cloud and SaaS offerings, or an on-premises datacenter, Terraform can be your single control plane, using infrastructure as code to provision and manage your entire infrastructure.

Terraform Cloud run tasks

Run tasks allow platform teams to easily extend the Terraform Cloud run lifecycle with additional capabilities offered by partner services.

Wiz

Wiz, makers of agentless cloud security and compliance for AWS, Azure, Google Cloud, and Kubernetes, launched a new integration with Terraform run tasks that ensures only secure infrastructure is deployed. Acting as a guardrail, it prevents insecure deployments by scanning against predefined security policies, helping to reduce the organization's overall risk exposure.

Terraform providers

We've also approved 17 new verified Terraform providers from 13 different partners:

AccuKnox

AccuKnox, maker of a zero trust CNAPP (Cloud-Native Application Protection Platform), has released the AccuKnox provider for Terraform, which allows for managing KubeArmor resources on Kubernetes clusters or host environments.

Chainguard

Chainguard, which offers Chainguard Images, a collection of secure minimal container images, released two Terraform providers: the Chainguard Terraform provider to manage Chainguard resources (IAM groups, identities, image repos, etc.) via Terraform, and the imagetest provider for authoring and executing tests using Terraform primitives, designed to work in conjunction with the Chainguard Images project.

Cisco Systems

Cisco delivers software-defined networking, cloud, and security solutions to help transform your business. Cisco DevNet has released two new providers for the Cisco Multicloud Defense and Cisco Secure Workload products. The Multicloud Defense provider is used to create and manage Multicloud Defense resources such as service VPCs/VNets, gateways, policy rulesets, address objects, service objects, and others. The Cisco Secure Workload provider can be used to manage the secure workload configuration when setting up workload protection policies for various environments.

Citrix

Citrix, maker of secure, unified digital workspace technology, developed a custom Terraform provider for automating Citrix product deployments and configurations. Using Terraform with the Citrix provider, users can manage Citrix products via infrastructure as code, giving greater efficiency and consistency in infrastructure management, as well as better reusability of infrastructure configuration.

Couchbase

Couchbase, which manages a distributed NoSQL cloud database, has released the Terraform Couchbase Capella provider to deploy, update, and manage Couchbase Capella infrastructure as code.

Genesis Cloud

Genesis Cloud offers accelerated cloud GPU computing for machine learning, visual effects rendering, big data analytics, and cognitive computing. The Genesis Cloud Terraform provider is used to interact with resources supported by Genesis Cloud via its public API.

Hund

Hund offers automated monitoring to provide companies with simplified product transparency, from routine maintenance to critical system failures.
The company recently published a new Terraform provider that offers resources and data sources to allow practitioners to manage objects on Hund's hosted status page platform. Managed objects can include components, groups, issues, templates, and more.

Mondoo

Mondoo creates an index of all cloud, Kubernetes, and on-premises resources to help identify misconfigurations, ensure security, and support auditing and compliance. The company has released a new Mondoo Terraform provider to allow Terraform to manage Mondoo resources.

Palo Alto Networks

Palo Alto Networks is a multi-cloud security company. It has released a new Terraform provider for Strata Cloud Manager (SCM) that focuses on configuring the unified networking security aspect of SCM.

Ping Identity

Ping Identity delivers identity solutions that enable companies to balance security and personalized, streamlined user experiences. Ping has released two Terraform providers: the PingDirectory Terraform provider supports the management of PingDirectory configuration, while the PingFederate Terraform provider supports the management of PingFederate configuration.

SquaredUp

SquaredUp manages a visualization platform to help enterprises build, run, and optimize complex digital services by surfacing data faster. The company has released a new SquaredUp Terraform provider to help bring unified visibility across teams and tools for greater insights and observability in your platform.

Traceable

Traceable is an API security platform that identifies and tests APIs, evaluates API risk posture, stops API attacks, and provides deep analytics for threat hunting and forensic research. The company recently released two integrations: a custom Terraform provider for AWS API Gateways and a Terraform Lambda-based resource provider. These providers allow the deployment of API security tooling to reduce the risk of API security events.

VMware

VMware offers a breadth of digital solutions that power apps, services, and experiences for their customers. The NSX-T VPC Terraform provider gives NSX VPC administrators a way to automate NSX's virtual private cloud to provide virtualized networking and security services.

Learn more about Terraform integrations

All integrations are available for review in the HashiCorp Terraform Registry. To verify an existing integration, please refer to our Terraform Cloud Integration Program. If you haven't already, try the free tier of Terraform Cloud to help simplify your Terraform workflows and management. View the full article
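As a reminder of how any of these verified providers is consumed once it is published to the Registry, the pattern is the standard required_providers block. The source address and resource type below are placeholders, not a real partner provider; substitute the values documented in the Terraform Registry for the provider you are using.

terraform {
  required_providers {
    # Placeholder source address; replace with the partner's actual
    # registry namespace and provider name.
    examplecloud = {
      source  = "example-org/examplecloud"
      version = "~> 1.0"
    }
  }
}

# Placeholder resource type; each partner provider documents its own
# resources and data sources in the Registry.
resource "examplecloud_widget" "demo" {
  name = "demo"
}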
25. The HashiCorp Terraform team has made a lot of progress over the past few months, simplifying IT operations, increasing developer velocity, and cutting costs for organizations. The new Terraform Cloud and Terraform Enterprise improvements — all now generally available — include:

- Test-integrated module publishing
- Explorer for workspace visibility
- Inactivity-based destruction for ephemeral workspaces
- Priority variable sets
- Resource replacement from the UI
- Auto-apply for run triggers
- Version constraints in the Terraform version selector

Test-integrated module publishing

Back in October 2023 at HashiConf, we released the beta version of test-integrated module publishing for Terraform Cloud, along with the Terraform test framework, to streamline module testing and publishing workflows. Now we are excited to announce the general availability of test-integrated module publishing. This new feature helps module authors and platform teams produce high-quality modules quickly and securely, with more control over when and how modules are published.

Since the beta launch, we have made several improvements. First, branch-based publishing and test integration are now compatible with all supported VCS providers in Terraform Cloud: GitHub, GitLab, Bitbucket, and Azure DevOps. Also, test results are now reported back to the connected repository as a VCS status check when tests are initiated by a pull request or merge, giving module developers immediate in-context feedback without leaving the VCS interface. Finally, to support customers publishing modules at scale, both the Terraform Cloud API and the provider for Terraform Cloud and Enterprise now support branch-based publishing and enablement for test-integrated modules, in addition to the UI-based publishing method.

Along with being generally available in Terraform Cloud, test-integrated module publishing is also available in the January 2024 (v202401-1) release of Terraform Enterprise, available now.

Explorer for workspace visibility

After announcing the beta version of the explorer for workspace visibility at HashiDays in May 2023, we have been receiving lots of feedback and making improvements. We are now excited to announce the general availability of the explorer for workspace visibility to help users ensure that their environments are secure, reliable, and compliant.

Since the beta launch, we've made enhancements that allow users to find, view, and use their important operational data from Terraform Cloud more effectively as they monitor workspace efficiency, health, and compliance. For example, we improved the query speed, added more workspace data, introduced CSV exports, and provided options for filtering and conditions. Popular uses of the explorer include tracking Terraform module and provider usage in workspaces, finding workspaces without a connected VCS repo, and identifying health issues like drifted workspaces and continuous validation failures. With the new public Explorer API, users can automate the integration of their data into visibility and reporting workflows outside of Terraform Cloud.

Inactivity-based destruction for ephemeral workspaces

Developer environments cost money to set up and run. If they are left running after developers have finished using them, your organization incurs unnecessary costs. Ephemeral workspaces in Terraform Cloud and Enterprise — workspaces that expire after a set time and automatically de-provision — are a way to solve this cost overrun.
However, it is sometimes hard to predict how much time an ephemeral workspace should be given to live. To give users a more dynamic mechanism for ephemeral workspace removal, we've introduced inactivity-based destruction for ephemeral workspaces in Terraform Cloud Plus and Terraform Enterprise (v202312-1). Users of those products can now set a workspace to "destroy if inactive", allowing administrators and developers to establish automated cleanup of workspaces that haven't been updated or altered within a specified time frame. This eliminates the need for manual cleanup, reducing wasted infrastructure costs and streamlining workspace management.

Priority variable sets to enforce variables across workspaces

Variable sets allow Terraform Cloud users to reuse both Terraform-defined and environment variables across certain workspaces or an entire organization. One of the core use cases for this feature is credential management, but variable sets can also manage anything that can be defined as a Terraform variable. When using variable sets for credential management, it is critical to ensure that these variables cannot be tampered with by end users.

Priority variable sets for Terraform Cloud and Terraform Enterprise (v202401-1) provide a convenient way to prevent the overwriting of infrastructure-critical variable sets, such as those used for credentials. Once the platform team has prioritized a variable set, even if a user has access to workspace variables or can modify a workspace's Terraform configuration, they still won't be able to override variables in that prioritized set. When creating a new variable set, check the "Prioritize the variable values in this variable set" box to make it a priority variable set.

Resource replacement from the UI

In the past, Terraform Cloud users could not use the UI to regenerate a damaged or degraded resource (or resources) for a VCS-connected workspace without switching to the CLI workflow, which was a tedious and error-prone manual process. In some cases, a remote object may become damaged or degraded in a way that Terraform cannot automatically detect. For example, if software running inside a virtual machine crashes but the virtual machine itself is still running, Terraform typically has no way to detect and respond to the problem because it manages the machine as a whole.

Now, if you know that an object is damaged, or if you want to force Terraform to replace it for any other reason, you can override Terraform's default behavior using the replace resources option, which instructs Terraform to replace the resource(s) you select. Users can now create a new run via the Terraform Cloud UI with the option to replace resources, in addition to the CLI and API approaches. The replacement workflow is also available in v202401-1 of Terraform Enterprise.

Auto-apply for run triggers

Run triggers let users connect two workspaces in Terraform Cloud so that runs are automatically queued in the downstream workspace when the parent workspace is successfully applied. This is commonly used in multi-tier infrastructure deployments where resources are split between multiple workspaces, or with shared infrastructure like networking or databases. In the past, runs initiated by a run trigger did not auto-apply; users had to manually confirm the pending run in each workspace individually. The new "auto-apply run triggers" option in the workspace settings allows workspace admins to choose whether to auto-approve runs initiated by a run trigger.
This setting is independent from the workspace auto-apply setting, providing more flexibility in defining workspace behavior. It offers an automated way to chain applies across workspaces, simplifying operations without human intervention. Auto-apply run triggers are now generally available in Terraform Cloud and Terraform Enterprise v202401-1.

Version constraints in the Terraform version selector

Each workspace in Terraform Cloud defines the version of Terraform used to execute runs. Previously, version constraints could be set via the workspaces API, but in the UI version selector the choices were limited to specific versions of Terraform or the "latest" option, which always selects the newest version. Users had to either manually update versions for each workspace or accept the risk of potential behavior changes in new versions.

Terraform Cloud and Enterprise (v202302-1) now have an updated Terraform version selector that includes version constraints, allowing workspaces to automatically pick up new patch releases while staying within the selected major or minor version. This provides a more seamless and flexible experience for users who rely on the web console and don't have direct API access.

Get started with Terraform Cloud

These Terraform Cloud and Enterprise enhancements represent a continued evolution aimed at helping customers maximize their infrastructure investments and accelerate application delivery. To learn more about these features, visit our Terraform guides and documentation on HashiCorp Developer. If you are new to Terraform, sign up for Terraform Cloud and get started for free today. View the full article
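For readers less familiar with Terraform's version constraint syntax, the sketch below shows the same constraint style as it appears in configuration files; the version numbers are illustrative, and the assumption that the workspace version selector accepts this exact string form is based on the feature description above rather than confirmed here.

terraform {
  # "~> 1.6.0" permits any 1.6.x patch release but excludes 1.7.0 and later.
  # Illustrative only; choose the constraint that matches your workspace's needs.
  required_version = "~> 1.6.0"
}

A workspace configured with a constraint like this would adopt new patch releases automatically while staying on the selected minor version, which matches the behavior the updated version selector is described as providing.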