Showing results for tags 'terraform'.

  1. HashiCorp Terraform is the world's most widely used multi-cloud provisioning product. The Terraform ecosystem has notched more than 3,000 providers, 14,000 modules, and 250 million downloads. Terraform Cloud is the fastest way to adopt Terraform, providing everything practitioners, teams, and global businesses need to create and collaborate on infrastructure and manage risks for security, compliance, and operational constraints. This month, AWS AppFabric added support for Terraform Cloud, expanding an already long list of ways that Terraform can connect, secure, and provision infrastructure with AWS. This post explores the new AppFabric support and highlights two other key existing integrations: dynamic provider credentials and AWS Service Catalog support for Terraform Cloud.

     AWS AppFabric support for Terraform Cloud

     AWS AppFabric now supports Terraform Cloud. IT administrators and security analysts can use AppFabric to quickly integrate with Terraform Cloud, aggregate enriched and normalized SaaS audit logs, and audit end-user access across their SaaS apps. This launch expands the list of applications AWS AppFabric supports across an organization. AppFabric quickly connects SaaS applications to security tools and data lakes like Amazon Security Lake. For Terraform Cloud users, this integration can accelerate time-to-market and help developers release new features to production faster with streamlined infrastructure provisioning and application delivery workflows. To learn more, visit the AWS AppFabric page and then check out how to connect AppFabric to your Terraform Cloud account.

     Dynamic credentials with the AWS provider

     Introduced early last year, Terraform Cloud's dynamic provider credentials let you establish a trust relationship between Terraform Cloud and AWS. They limit the blast radius of compromised credentials by using unique, single-use credentials for each Terraform run.
     Dynamic credentials also give you fine-grained control over the resources that each of your Terraform Cloud projects and workspaces can manage. Terraform Cloud supports dynamic credentials for AWS and Vault. To learn more, read the joint AWS and HashiCorp blog post, Simplify and Secure Terraform Workflows on AWS with Dynamic Provider Credentials, and learn how to configure dynamic credentials with the AWS provider at HashiCorp Developer.

     Terraform Cloud self-service provisioning with AWS Service Catalog

     In August 2023, AWS added AWS Service Catalog support for Terraform Cloud. This includes integrated access to key AWS Service Catalog features, including cataloging of standardized and pre-approved Terraform configurations, infrastructure as code templates, access control, resource provisioning with least-privilege access, versioning, sharing to thousands of AWS accounts, and tagging. By combining Terraform Cloud with AWS Service Catalog, we're connecting the AWS Service Catalog interface that many customers already know with the existing workflows and policy guardrails of Terraform Cloud. HashiCorp and AWS have since co-presented at HashiConf (Terraform Cloud self-service provisioning with AWS Service Catalog) and partnered on AWS's blog post, How to Use AWS Service Catalog with HashiCorp Terraform Cloud, demonstrating the workflow for provisioning a new product and offering access to getting-started guides.

     Self-service infrastructure is no longer a dream

     Platform teams can use Terraform Cloud, HCP Waypoint, and AWS Service Catalog to create simplified Terraform-based workflows for developers. Terraform modules can incorporate unit testing, built-in security, policy enforcement, and reliable version updates. Using these tools, platform teams can establish standardized workflows to deploy applications and deliver a smooth and seamless developer experience.
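     Mechanically, once the OIDC trust relationship is in place on the AWS side, enabling dynamic credentials comes down to two environment variables on each workspace. A minimal sketch using the tfe provider, where the `tfe_workspace.example` and `aws_iam_role.tfc_oidc_role` references are hypothetical placeholders for your own workspace and OIDC-trusting IAM role:

```hcl
# Hedged sketch: enable dynamic AWS credentials for one workspace.
# TFC_AWS_PROVIDER_AUTH and TFC_AWS_RUN_ROLE_ARN are the variable
# names documented for this feature; the resource references are
# placeholders, not part of the original post.
resource "tfe_variable" "enable_dynamic_creds" {
  key          = "TFC_AWS_PROVIDER_AUTH"
  value        = "true"
  category     = "env"
  workspace_id = tfe_workspace.example.id
}

resource "tfe_variable" "run_role_arn" {
  key          = "TFC_AWS_RUN_ROLE_ARN"
  value        = aws_iam_role.tfc_oidc_role.arn # role trusting Terraform Cloud's OIDC provider
  category     = "env"
  workspace_id = tfe_workspace.example.id
}
```

     With each run, Terraform Cloud exchanges its workload identity token for short-lived credentials scoped to that role, so no static AWS keys need to be stored in the workspace.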
     Learn more by viewing AWS and HashiCorp's recent Self-service infrastructure is no longer a dream talk from AWS re:Invent. View the full article
  2. The HashiCorp Terraform ecosystem continues to expand with new integrations that provide additional capabilities to Terraform Cloud, Enterprise, and Community edition users as they provision and manage their cloud and on-premises infrastructure. Terraform is the world's most widely used multi-cloud provisioning product. Whether you're deploying to Amazon Web Services (AWS), Microsoft Azure, Google Cloud, other cloud and SaaS offerings, or an on-premises datacenter, Terraform can be your single control plane, using infrastructure as code to provision and manage your entire infrastructure.

     Terraform Cloud run tasks

     Run tasks allow platform teams to easily extend the Terraform Cloud run lifecycle with additional capabilities offered by partner services.

     Wiz

     Wiz, maker of agentless cloud security and compliance for AWS, Azure, Google Cloud, and Kubernetes, launched a new integration with Terraform run tasks that ensures only secure infrastructure is deployed. Acting as a guardrail, it prevents insecure deployments by scanning against predefined security policies, helping to reduce the organization's overall risk exposure.

     Terraform providers

     We've also approved 17 new verified Terraform providers from 13 different partners:

     AccuKnox

     AccuKnox, maker of a zero trust CNAPP (Cloud Native Application Protection Platform), has released the AccuKnox provider for Terraform, which allows for managing KubeArmor resources on Kubernetes clusters or host environments.

     Chainguard

     Chainguard, which offers Chainguard Images, a collection of secure minimal container images, released two Terraform providers: the Chainguard provider to manage Chainguard resources (IAM groups, identities, image repos, etc.) via Terraform, and the imagetest provider for authoring and executing tests using Terraform primitives, designed to work in conjunction with the Chainguard Images project.
     Cisco Systems

     Cisco delivers software-defined networking, cloud, and security solutions to help transform your business. Cisco DevNet has released two new providers for the Cisco Multicloud Defense and Cisco Secure Workload products. The Multicloud Defense provider is used to create and manage Multicloud Defense resources such as service VPCs/VNets, gateways, policy rulesets, address objects, service objects, and others. The Cisco Secure Workload provider can be used to manage the Secure Workload configuration when setting up workload protection policies for various environments.

     Citrix

     Citrix, maker of secure, unified digital workspace technology, developed a custom Terraform provider for automating Citrix product deployments and configurations. Using Terraform with the Citrix provider, users can manage Citrix products via infrastructure as code, bringing greater efficiency and consistency to infrastructure management, as well as better reusability of infrastructure configuration.

     Couchbase

     Couchbase, which makes a distributed NoSQL cloud database, has released the Terraform Couchbase Capella provider to deploy, update, and manage Couchbase Capella infrastructure as code.

     Genesis Cloud

     Genesis Cloud offers accelerated cloud GPU computing for machine learning, visual effects rendering, big data analytics, and cognitive computing. The Genesis Cloud Terraform provider is used to interact with resources supported by Genesis Cloud via its public API.

     Hund

     Hund offers automated monitoring to provide companies with simplified product transparency, from routine maintenance to critical system failures. The company recently published a new Terraform provider that offers resources and data sources for managing objects on Hund's hosted status page platform. Managed objects can include components, groups, issues, templates, and more.
     Mondoo

     Mondoo creates an index of all cloud, Kubernetes, and on-premises resources to help identify misconfigurations, ensure security, and support auditing and compliance. The company has released a new Mondoo Terraform provider to allow Terraform to manage Mondoo resources.

     Palo Alto Networks

     Palo Alto Networks is a multi-cloud security company. It has released a new Terraform provider for Strata Cloud Manager (SCM) that focuses on configuring the unified networking security aspect of SCM.

     Ping Identity

     Ping Identity delivers identity solutions that enable companies to balance security and personalized, streamlined user experiences. Ping has released two Terraform providers: the PingDirectory provider supports the management of PingDirectory configuration, while the PingFederate provider supports the management of PingFederate configuration.

     SquaredUp

     SquaredUp offers a visualization platform to help enterprises build, run, and optimize complex digital services by surfacing data faster. The company has released a new SquaredUp Terraform provider to help bring unified visibility across teams and tools for greater insights and observability in your platform.

     Traceable

     Traceable is an API security platform that identifies and tests APIs, evaluates API risk posture, stops API attacks, and provides deep analytics for threat hunting and forensic research. The company recently released two integrations: a custom Terraform provider for AWS API Gateways and a Terraform Lambda-based resource provider. These providers allow the deployment of API security tooling to reduce the risk of API security events.

     VMware

     VMware offers a breadth of digital solutions that power apps, services, and experiences for their customers. The NSX-T VPC Terraform provider gives NSX VPC administrators a way to automate NSX's virtual private cloud to provide virtualized networking and security services.
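     Consuming any of these verified providers follows the same pattern as official ones: declare the registry source in a required_providers block and run terraform init. A sketch, using the Chainguard provider as an illustration (the source address shown is an assumption; confirm the exact namespace on the provider's Terraform Registry page):

```hcl
terraform {
  required_providers {
    # Assumed registry namespace -- check the provider's registry page
    chainguard = {
      source = "chainguard-dev/chainguard"
    }
  }
}

# Provider-specific configuration (authentication, endpoints) follows
# the documentation published with each provider in the registry.
provider "chainguard" {}
```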
     Learn more about Terraform integrations

     All integrations are available for review in the HashiCorp Terraform Registry. To verify an existing integration, please refer to our Terraform Cloud Integration Program. If you haven't already, try the free tier of Terraform Cloud to help simplify your Terraform workflows and management.
  3. The HashiCorp Terraform team has made a lot of progress over the past few months, simplifying IT operations, increasing developer velocity, and cutting costs for organizations. The new Terraform Cloud and Terraform Enterprise improvements, all now generally available, include:

       • Test-integrated module publishing
       • Explorer for workspace visibility
       • Inactivity-based destruction for ephemeral workspaces
       • Priority variable sets
       • Resource replacement from the UI
       • Auto-apply for run triggers
       • Version constraints in the Terraform version selector

     Test-integrated module publishing

     Back in October 2023 at HashiConf, we released the beta version of test-integrated module publishing for Terraform Cloud, along with the Terraform test framework, to streamline module testing and publishing workflows. Now we are excited to announce general availability of test-integrated module publishing. This new feature helps module authors and platform teams produce high-quality modules quickly and securely, with more control over when and how modules are published.

     Since the beta launch, we have made several improvements. First, branch-based publishing and test integration are now compatible with all supported VCS providers in Terraform Cloud: GitHub, GitLab, Bitbucket, and Azure DevOps. Also, test results are now reported back to the connected repository as a VCS status check when tests are initiated by a pull request or merge, giving module developers immediate in-context feedback without leaving the VCS interface. Finally, to support customers publishing modules at scale, both the Terraform Cloud API and the provider for Terraform Cloud and Enterprise now support branch-based publishing and enablement for test-integrated modules, in addition to the UI-based publishing method. Along with being generally available in Terraform Cloud, test-integrated module publishing is also available in the January 2024 (v202401-1) release of Terraform Enterprise.
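     The Terraform test framework that underpins test-integrated publishing is driven by .tftest.hcl files in the module repository. A minimal hedged sketch (the variable names and the resource_group_name output are hypothetical module inputs and outputs, not taken from a specific module):

```hcl
# tests/naming.tftest.hcl -- hypothetical module under test
variables {
  name     = "demo"
  location = "eastus"
}

run "resource_group_name_has_prefix" {
  # Evaluate assertions against a plan rather than real infrastructure
  command = plan

  assert {
    condition     = startswith(output.resource_group_name, "rg-")
    error_message = "Resource group names must start with the rg- prefix."
  }
}
```

     When such tests are triggered from a pull request, their pass/fail result is what gets reported back as the VCS status check.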
     Explorer for workspace visibility

     Since announcing the beta version of the explorer for workspace visibility at HashiDays in May 2023, we have been receiving lots of feedback and making improvements. We are now excited to announce general availability of the explorer for workspace visibility to help users ensure that their environments are secure, reliable, and compliant.

     Since the beta launch, we've made enhancements that allow users to find, view, and use their important operational data from Terraform Cloud more effectively as they monitor workspace efficiency, health, and compliance. For example, we improved the query speed, added more workspace data, introduced CSV exports, and provided options for filtering and conditions. Popular uses of the explorer include tracking Terraform module and provider usage in workspaces, finding workspaces without a connected VCS repo, and identifying health issues like drifted workspaces and continuous validation failures. With the new public Explorer API, users can automate the integration of their data into visibility and reporting workflows outside of Terraform Cloud.

     Inactivity-based destruction for ephemeral workspaces

     Developer environments cost money to set up and run. If they are left running after developers have finished using them, your organization incurs unnecessary costs. Ephemeral workspaces in Terraform Cloud and Enterprise (workspaces that expire after a set time and automatically de-provision) are a way to solve this cost overrun. However, it is sometimes hard to predict how much time you should give an ephemeral workspace to live. To give users a more dynamic mechanism for ephemeral workspace removal, we've introduced inactivity-based destruction for ephemeral workspaces in Terraform Cloud Plus and Terraform Enterprise (v202312-1).
     Users of those products can now set a workspace to "destroy if inactive", allowing administrators and developers to establish automated cleanup of workspaces that haven't been updated or altered within a specified time frame. This eliminates the need for manual cleanup, reducing wasted infrastructure costs and streamlining workspace management.

     Priority variable sets to enforce variables across workspaces

     Variable sets allow Terraform Cloud users to reuse both Terraform-defined and environment variables across certain workspaces or an entire organization. One of the core use cases for this feature is credential management, but variable sets can manage anything that can be defined as a Terraform variable. When using variable sets for credential management, it is critical to ensure that these variables cannot be tampered with by end users. Priority variable sets for Terraform Cloud and Terraform Enterprise (v202401-1) provide a convenient way to prevent the overwriting of infrastructure-critical variable sets, such as those used for credentials. Once the platform team has prioritized a variable set, even if a user has access to workspace variables or can modify a workspace's Terraform configuration, they still won't be able to override variables in that prioritized set. When creating a new variable set, check the "Prioritize the variable values in this variable set" box to make it a priority variable set.

     Resource replacement from the UI

     In the past, Terraform Cloud users were not able to use the UI to regenerate a damaged or degraded resource (or resources) for a VCS-connected workspace without switching to the CLI workflow, a tedious and error-prone manual process. In some cases, a remote object may become damaged or degraded in a way that Terraform cannot automatically detect.
     For example, if software running inside a virtual machine crashes but the virtual machine itself is still running, Terraform will typically have no way to detect and respond to the problem, because it only manages the machine as a whole. Now, if you know that an object is damaged, or if you want to force Terraform to replace it for any other reason, you can override Terraform's default behavior using the replace resources option to instruct Terraform to replace the resource(s) you select. Users can now create a new run via the Terraform Cloud UI with the option to replace resources, in addition to the CLI and API approaches. The replacement workflow is also available in v202401-1 of Terraform Enterprise.

     Auto-apply for run triggers

     Run triggers let users connect two workspaces in Terraform Cloud to automatically queue runs when the parent workspace is successfully applied. This is commonly used in multi-tier infrastructure deployments where resources are split between multiple workspaces, or with shared infrastructure like networking or databases. In the past, runs initiated by a run trigger did not auto-apply; users had to manually confirm the pending run in each workspace individually. The new "auto-apply run triggers" option in the workspace settings allows workspace admins to choose whether to auto-approve runs initiated by a run trigger. This setting is independent from the workspace auto-apply setting, providing more flexibility in defining workspace behavior, and it offers an automated way to chain applies across workspaces to simplify operations without human intervention. Auto-apply run triggers are now generally available in Terraform Cloud and Terraform Enterprise v202401-1.

     Version constraints in the Terraform version selector

     Each workspace in Terraform Cloud defines the version of Terraform used to execute runs.
     Previously, version constraints could be set via the workspaces API, but in the UI version selector the choices were limited to specific versions of Terraform or the "latest" option, which always selects the newest version. Users had to either manually update versions for each workspace or accept the risk of potential behavior changes in new versions. Terraform Cloud and Enterprise (v202401-1) now have an updated Terraform version selector that includes version constraints, allowing workspaces to automatically pick up patch releases while staying within the selected major or minor version. This provides a more seamless and flexible experience for users who rely on the web console and don't have direct API access.

     Get started with Terraform Cloud

     These Terraform Cloud and Enterprise enhancements represent a continued evolution aimed at helping customers maximize their infrastructure investments and accelerate application delivery. To learn more about these features, visit our Terraform guides and documentation on HashiCorp Developer. If you are new to Terraform, sign up for Terraform Cloud and get started for free today.
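     As a sketch of how a platform team might codify the priority variable set feature described above, the set itself can be managed with the provider for Terraform Cloud/Enterprise. This assumes the priority argument available in recent tfe provider versions; the organization name and var.aws_access_key_id are placeholders:

```hcl
resource "tfe_variable_set" "aws_credentials" {
  name         = "aws-credentials"
  organization = "my-org" # placeholder organization name
  global       = true
  priority     = true     # values here cannot be overridden at the workspace level
}

resource "tfe_variable" "access_key_id" {
  key             = "AWS_ACCESS_KEY_ID"
  value           = var.aws_access_key_id # placeholder input variable
  category        = "env"
  sensitive       = true
  variable_set_id = tfe_variable_set.aws_credentials.id
}
```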
  4. HashiCorp and Microsoft have partnered to create Terraform modules that follow Microsoft's Azure Well-Architected Framework and best practices. In a previous blog post, we demonstrated how to accelerate AI adoption on Azure with Terraform. This post covers how to use a simple three-step process to build, secure, and enable OpenAI applications on Azure with HashiCorp Terraform and Vault. The code for this demo can be found on GitHub. You can leverage the Microsoft application outlined in this post and Microsoft Azure Kubernetes Service (AKS) to integrate with OpenAI. You can also read more about how to deploy an application that uses OpenAI on AKS on the Microsoft website.

     Key considerations of AI

     The rise in AI workloads is driving an expansion of cloud operations. Gartner predicts that cloud infrastructure will grow 26.6% in 2024 as organizations deploying generative AI (GenAI) services look to the public cloud. To create a successful AI environment, orchestrating the seamless integration of artificial intelligence and operations demands a focus on security, efficiency, and cost control.

     Security

     Data integration, the bedrock of AI, not only requires the harmonious assimilation of diverse data sources but must also include a process to safeguard sensitive information. In this complex landscape, the deployment of public key infrastructure (PKI) and robust secrets management becomes indispensable, adding cryptographic resilience to data transactions and ensuring the secure handling of sensitive information. For more information on the HashiCorp Vault solution, see our use-case page on automated PKI infrastructure. Machine learning models, pivotal in anomaly detection, predictive analytics, and root-cause analysis, not only provide operational efficiency but also serve as sentinels against potential security threats.
     Automation and orchestration, facilitated by tools like HashiCorp Terraform, extend beyond efficiency to become critical components in fortifying against security vulnerabilities. Scalability and performance, guided by resilient architectures and vigilant monitoring, ensure adaptability to evolving workloads without compromising on security protocols.

     Efficiency and cost control

     In response, platform teams are increasingly adopting infrastructure as code (IaC) to enhance efficiency and help control cloud costs. HashiCorp products underpin some of today's largest AI workloads, using infrastructure as code to help eliminate idle resources and overprovisioning and to reduce infrastructure risk.

     Automation with Terraform

     This post delves into specific Terraform configurations tailored for application deployment within a containerized environment. The first step uses IaC principles to deploy infrastructure that can efficiently scale AI workloads, reduce manual intervention, and foster a more agile and collaborative AI development lifecycle on the Azure platform. The second step focuses on how to build security and compliance into an AI workflow. The final step shows how to manage application deployment on the newly created resources.

     Prerequisites

     For this demo, you can use either the Azure OpenAI service or the OpenAI service:

       • To use the Azure OpenAI service, enable it on your Azure subscription using the Request Access to Azure OpenAI Service form.
       • To use OpenAI, sign up on the OpenAI website.
     Step one: Build

     First, let's look at the Helm provider block in main.tf:

     provider "helm" {
       kubernetes {
         host                   = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.host
         username               = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.username
         password               = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.password
         client_certificate     = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.client_certificate)
         client_key             = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.client_key)
         cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.cluster_ca_certificate)
       }
     }

     This code uses information from the AKS resource to populate the details in the Helm provider, letting you deploy resources into AKS pods using native Helm charts. With this Helm chart method, you deploy multiple resources using Terraform in the helm_release.tf file. This file sets up HashiCorp Vault, cert-manager, and Traefik Labs' ingress controller within the pods. The Vault configuration shows the Helm set functionality to customize the deployment:

     resource "helm_release" "vault" {
       name  = "vault"
       chart = "hashicorp/vault"

       set {
         name  = "server.dev.enabled"
         value = "true"
       }

       set {
         name  = "server.dev.devRootToken"
         value = "AzureA!dem0"
       }

       set {
         name  = "ui.enabled"
         value = "true"
       }

       set {
         name  = "ui.serviceType"
         value = "LoadBalancer"
       }

       set {
         name  = "ui.serviceNodePort"
         value = "null"
       }

       set {
         name  = "ui.externalPort"
         value = "8200"
       }
     }

     In this demo, the Vault server is customized to run in dev mode, have a defined root token, and allow external access to the pod via a load balancer on a specific port. At this stage you should have created a resource group with an AKS cluster and a Service Bus established. If you want to log in to the Vault server at this point, use the EXTERNAL-IP load balancer address with port 8200 (like this: http://[EXTERNAL_IP]:8200/) and log in using AzureA!dem0.
     Step two: Secure

     Now that you have established a base infrastructure in the cloud and the microservices environment, you are ready to configure Vault resources to integrate PKI into your environment. This centers on the pki_build.tf.second file, which you need to rename to remove the .second extension so it is treated as a Terraform file. Since you are adding to the current infrastructure, perform a terraform apply to add the elements that set up Vault with a root certificate and issue it within the pod. To do this, use the Vault provider and configure it to define a mount point for the PKI, a root certificate, a role cert URL, an issuer, and the policy needed to build the PKI:

     resource "vault_mount" "pki" {
       path                      = "pki"
       type                      = "pki"
       description               = "This is a PKI mount for the Azure AI demo."
       default_lease_ttl_seconds = 86400
       max_lease_ttl_seconds     = 315360000
     }

     resource "vault_pki_secret_backend_root_cert" "root_2023" {
       backend     = vault_mount.pki.path
       type        = "internal"
       common_name = "example.com"
       ttl         = 315360000
       issuer_name = "root-2023"
     }

     Using the same Vault provider, you can also configure Kubernetes authentication to create a role named "issuer" that binds the PKI policy with a Kubernetes service account named issuer:

     resource "vault_auth_backend" "kubernetes" {
       type = "kubernetes"
     }

     resource "vault_kubernetes_auth_backend_config" "k8_auth_config" {
       backend         = vault_auth_backend.kubernetes.path
       kubernetes_host = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.host
     }

     resource "vault_kubernetes_auth_backend_role" "k8_role" {
       backend                          = vault_auth_backend.kubernetes.path
       role_name                        = "issuer"
       bound_service_account_names      = ["issuer"]
       bound_service_account_namespaces = ["default", "cert-manager"]
       token_policies                   = ["default", "pki"]
       token_ttl                        = 60
       token_max_ttl                    = 120
     }

     The role connects the Kubernetes service account, issuer, which is created in the default namespace, with the PKI Vault policy. The tokens returned after authentication are valid for 60 seconds (token_ttl = 60), up to a maximum of 120 seconds.
     The Kubernetes service account name, issuer, is created using the Kubernetes provider, discussed in step three below. These resources configure the model to use HashiCorp Vault to manage the PKI certification process, with Vault interacting with cert-manager to issue certificates to be used by the application.

     Step three: Enable

     The final stage requires another terraform apply, as you are again adding to the environment. You now use app_build.tf.third to build an application; rename app_build.tf.third to remove the .third extension so it is treated as a Terraform file. The code in app_build.tf uses the Kubernetes provider resource kubernetes_manifest. The manifest values are the HCL (HashiCorp Configuration Language) representation of a Kubernetes YAML manifest. (We converted an existing manifest from YAML to HCL to get the code needed for this deployment. You can do this using Terraform's built-in yamldecode() function or the HashiCorp tfk8s tool.) The code below is an example of a service manifest, converted using the tfk8s tool, that creates a service on port 80 to allow access to the store-admin app:

     resource "kubernetes_manifest" "service_tls_admin" {
       manifest = {
         "apiVersion" = "v1"
         "kind"       = "Service"
         "metadata" = {
           "name"      = "tls-admin"
           "namespace" = "default"
         }
         "spec" = {
           "clusterIP" = "10.0.160.208"
           "clusterIPs" = [
             "10.0.160.208",
           ]
           "internalTrafficPolicy" = "Cluster"
           "ipFamilies" = [
             "IPv4",
           ]
           "ipFamilyPolicy" = "SingleStack"
           "ports" = [
             {
               "name"       = "tls-admin"
               "port"       = 80
               "protocol"   = "TCP"
               "targetPort" = 8081
             },
           ]
           "selector" = {
             "app" = "store-admin"
           }
           "sessionAffinity" = "None"
           "type"            = "ClusterIP"
         }
       }
     }

     Putting it all together

     Once you've deployed all the elements and applications, you use the certificate stored in a Kubernetes secret to apply the TLS configuration to inbound HTTPS traffic.
     In the example below, you associate "example-com-tls" (which includes the certificate created by Vault earlier) with the inbound IngressRoute deployment using the Terraform manifest:

     resource "kubernetes_manifest" "ingressroute_admin_ing" {
       manifest = {
         "apiVersion" = "traefik.containo.us/v1alpha1"
         "kind"       = "IngressRoute"
         "metadata" = {
           "name"      = "admin-ing"
           "namespace" = "default"
         }
         "spec" = {
           "entryPoints" = [
             "websecure",
           ]
           "routes" = [
             {
               "kind"  = "Rule"
               "match" = "Host(`admin.example.com`)"
               "services" = [
                 {
                   "name" = "tls-admin"
                   "port" = 80
                 },
               ]
             },
           ]
           "tls" = {
             "secretName" = "example-com-tls"
           }
         }
       }
     }

     To test access to the OpenAI store-admin site, you need a domain name. You use an FQDN to access the site, which you are going to protect using the generated certificate and HTTPS. To set this up, access your AKS cluster. The Kubernetes command-line client, kubectl, is already installed in your Azure Cloud Shell. Enter:

     kubectl get svc

     You should get the following output:

     NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
     hello                      LoadBalancer   10.0.23.77     20.53.189.251   443:31506/TCP                94s
     kubernetes                 ClusterIP      10.0.0.1       <none>          443/TCP                      29h
     makeline-service           ClusterIP      10.0.40.79     <none>          3001/TCP                     4h45m
     mongodb                    ClusterIP      10.0.52.32     <none>          27017/TCP                    4h45m
     order-service              ClusterIP      10.0.130.203   <none>          3000/TCP                     4h45m
     product-service            ClusterIP      10.0.59.127    <none>          3002/TCP                     4h45m
     rabbitmq                   ClusterIP      10.0.122.75    <none>          5672/TCP,15672/TCP           4h45m
     store-admin                LoadBalancer   10.0.131.76    20.28.162.45    80:30683/TCP                 4h45m
     store-front                LoadBalancer   10.0.214.72    20.28.162.47    80:32462/TCP                 4h45m
     traefik                    LoadBalancer   10.0.176.139   20.92.218.96    80:32240/TCP,443:32703/TCP   29h
     vault                      ClusterIP      10.0.69.111    <none>          8200/TCP,8201/TCP            29h
     vault-agent-injector-svc   ClusterIP      10.0.31.52     <none>          443/TCP                      29h
     vault-internal             ClusterIP      None           <none>          8200/TCP,8201/TCP            29h
     vault-ui                   LoadBalancer   10.0.110.159   20.92.217.182   8200:32186/TCP               29h

     Look for the traefik entry and note its EXTERNAL-IP (yours will be different from the one shown above).
     Then, on your local machine, create a localhost entry for admin.example.com that resolves to that address. For example, on macOS you can use sudo nano /etc/hosts. If you need more help, search for how to create a localhost entry on your machine type. Now you can enter https://admin.example.com in your browser and examine the certificate. This certificate is built from a root certificate authority (CA) held in Vault (example.com) and is valid against this issuer (admin.example.com) to allow for secure access over HTTPS. To verify the right certificate is being issued, expand the certificate details in your browser and view the cert name and serial number, then check in Vault that the common name and serial numbers match. Terraform has configured all of the elements using the three-step approach shown in this post. To test the OpenAI application, follow Microsoft's instructions: skip to step 4 and use https://admin.example.com to access the store-admin, and the original store-front load balancer address to access the store-front.

     DevOps for AI app development

     To learn more and keep up with the latest trends in DevOps for AI app development, check out this Microsoft Reactor session with HashiCorp Co-Founder and CTO Armon Dadgar: Using DevOps and Copilot to simplify and accelerate development of AI apps. It covers how developers can use GitHub Copilot with Terraform to create code modules for faster app development. You can get started by signing up for a free Terraform Cloud account.
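     As a footnote to step three above, the YAML-to-HCL conversion does not have to be done ahead of time with tfk8s: Terraform's built-in yamldecode() function can read an existing manifest at plan time. A hedged sketch, where service.yaml is a hypothetical file path in the module directory:

```hcl
# Decode a Kubernetes YAML manifest directly into a kubernetes_manifest
# resource instead of hand-converting it to HCL with tfk8s.
resource "kubernetes_manifest" "from_yaml" {
  manifest = yamldecode(file("${path.module}/service.yaml"))
}
```

     The trade-off is that a tfk8s-converted block is plain HCL you can edit and diff in place, while yamldecode() keeps the YAML file as the source of truth.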
  5. We are excited to announce version 2.4, the latest version of the ServiceNow Service Catalog integration for HashiCorp Terraform. This iteration introduces custom workspace naming and tagging, which lets teams comply with their organizational naming conventions. Version 2.4 also introduces a new feature that enables the addition of Terraform Cloud tags during the workspace creation process. This post covers version 2.4's new features, including:

- Customizing workspace names
- Tagging Terraform workspaces

Customized workspace names

Previously, workspaces were automatically named using ServiceNow RITM ticket numbers. These non-descriptive names created confusion and a lack of clarity. Now users can customize workspace names and adhere to their organization's naming conventions, while preserving a link to the ServiceNow ticket. This provides the flexibility of adding a more descriptive name, which is prepended to the RITM ticket number upon ordering a particular Catalog Item.

Terraform Cloud workspace tags

Version 2.4 of the ServiceNow Service Catalog for Terraform enables workspace tagging. You can supply multiple tags in a comma-separated list, and the Service Catalog's backend script will parse them into separate tags and attach them to the workspace in Terraform Cloud. Some default Catalog Items let users update both the name and the tags on a previously created workspace. Tags not only provide contextual awareness, they also help admins organize, find, and filter workspaces more effectively in the Terraform Cloud or Terraform Enterprise interface, reducing the amount of time spent on repetitive manual tasks.

Key benefits

The general availability of the latest version of the ServiceNow Service Catalog for Terraform Cloud and Terraform Enterprise lets users effectively name and tag workspaces.
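The comma-separated tag handling described above can be sketched roughly as follows. This is an illustrative Python sketch of the behavior, not the actual ServiceNow backend script, and the function name is hypothetical:

```python
def parse_tags(raw: str) -> list[str]:
    # Hypothetical helper: split a comma-separated tag string,
    # trimming whitespace and dropping empty entries.
    return [tag.strip() for tag in raw.split(",") if tag.strip()]

# Tags entered in a single ServiceNow catalog field...
tags = parse_tags("env:prod, team:payments, costcenter:1234")
# ...become separate workspace tags in Terraform Cloud.
print(tags)  # → ['env:prod', 'team:payments', 'costcenter:1234']
```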
That brings two main benefits:

Improved efficiency: The automated, RITM-based naming of Terraform workspaces came with significant limitations. Now teams can customize workspace names during creation, improving clarity and user control. Teams running large production environments with many ServiceNow-provisioned workspaces can now work within their organization's naming conventions. Together, customizable workspace names and tagging represent a big step toward more meaningful code and fewer tedious tasks.

Reduced risk: Custom tagging helps admins group and better manage ServiceNow-provisioned workspaces, eliminating confusion and improving clarity. It also gives users a more efficient way to search and filter workspaces based on those tags.

Get started and try Terraform Cloud

Custom workspace naming and tagging are generally available today. With these updates, the ServiceNow Terraform Catalog becomes even more useful to organizations with many ServiceNow-provisioned workspaces, helping to streamline processes and promote broader adoption. Learn more by reading the ServiceNow Service Catalog documentation. Install the app to your ServiceNow instance from the ServiceNow Store. Get started with Terraform Cloud for free to begin provisioning and managing your infrastructure in any environment. Link your Terraform Cloud and HashiCorp Cloud Platform (HCP) accounts together for a seamless sign-in experience. View the full article
  6. In November 2023, we announced the general availability of the Terraform Cloud Operator for Kubernetes. The Terraform Cloud Operator streamlines infrastructure management, allowing platform teams to offer a Kubernetes-native experience for their users while standardizing Terraform workflows. It simplifies the management of Terraform Cloud workspaces and agent pools, ensuring efficiency and consistency across operations. Today we are excited to announce the general availability of project support in the latest version of the Terraform Cloud Operator, version 2.2.

Introducing project support

Previously, workspace creation using the operator was limited to the default project in Terraform Cloud. Users needed elevated permissions, which created security risks from overly broad access and hindered workspace self-management because of frequent dependencies on a central team. Now with project support, users can specify the project where a workspace will be created. This enhances self-service by allowing users to independently create and manage workspaces, and execute runs, within the context of their assigned project. The project name can now be set in the Workspace resource (example code). Project administrators can also use the new Project custom resource to create and manage projects and team access in the organization (example code).

Key benefits

The general availability of project support for the Terraform Cloud Operator brings two main benefits:

Improved efficiency: Projects streamline platform teams' ability to group related workspaces based on their organization's resource usage and ownership patterns (e.g. by teams, business units, or services). These workspace groupings reduce complexity when managing and organizing Terraform configurations.
Reduced risk: Instead of managing permissions for each workspace individually, you can group related workspaces into projects, then grant teams access to the project. Those permissions then apply to all workspaces under that project. This helps teams manage the workspaces they are responsible for while keeping their permissions confined to a project, rather than the whole organization, making it easier for organization owners to follow the principle of least privilege.

Learn more and get started

Take a deeper dive into the Terraform Cloud Operator and securely managing Kubernetes resources by signing up for the Multi-cloud Kubernetes with HashiCorp Terraform webinar. Learn more about project support for the Terraform Cloud Operator by reading the documentation. If you are completely new to Terraform, sign up for Terraform Cloud and get started using the Free offering today. View the full article
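As a rough sketch of the Project and Workspace custom resources described in the post above (the apiVersion, field names, and token secret reference here are assumptions based on the operator's v1alpha2 API; consult the linked example code for the authoritative schema):

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Project
metadata:
  name: platform-project
spec:
  organization: my-org              # assumed organization name
  token:
    secretKeyRef:
      name: tfc-operator            # Kubernetes secret holding a Terraform Cloud token
      key: token
  name: platform
---
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: app-workspace
spec:
  organization: my-org
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: app-dev
  project:
    name: platform                  # create the workspace in this project
```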
  7. We’re excited to announce that HashiCorp Terraform 1.7 is now generally available, ready for download, and available for use in Terraform Cloud. Terraform 1.7 features a new mocking capability for the Terraform test framework, a new method for removing resources from state, and an enhancement for config-driven import. These additions help Terraform developers more thoroughly test their modules and give operators safer and more efficient options for state manipulation.

Terraform test mocking

In Terraform 1.6 we introduced the Terraform testing framework, a native option to perform unit and integration testing of your Terraform code using the HashiCorp Configuration Language (HCL). Terraform 1.7 brings several improvements to the testing framework, highlighted by the new mocking feature. Previously, all tests were executed by making actual provider calls using either a plan or apply operation. This is a great way to observe the real behavior of a module. But it can also be useful to mock provider calls to model more advanced situations and to test without having to create actual infrastructure or requiring credentials. This can be especially useful with cloud resources that take a long time to provision, such as databases and higher-level platform services. Mocking can significantly reduce the time required to run a test suite with many different permutations, giving module authors the ability to thoroughly test their code without slowing down the development process. Test mocking adds powerful flexibility to module testing through two primary functions: mock providers and overrides.

Mock providers

A mocked provider or resource in a Terraform test will generate fake data for all computed attributes that would normally be provided by the underlying provider APIs. By employing aliases, mocked and real providers can be used together to create a flexible Terraform test suite for your modules.
The new mock_provider block defines a mock provider, and within this block you can specify values for computed attributes of resources and data sources. This example mocks the AWS provider and sets a specific value for the Amazon S3 bucket resource. Test runs using the mocked version of this provider will return the specified arn value for all S3 bucket resources instead of randomly generated fake data:

```hcl
mock_provider "aws" {
  mock_resource "aws_s3_bucket" {
    defaults = {
      arn = "arn:aws:s3:::test-bucket-name"
    }
  }
}

run "sets_bucket_name" {
  variables {
    bucket_name = "test-bucket-name"
  }

  # Validates a known attribute set in the resource configuration
  assert {
    condition     = output.bucket == "test-bucket-name"
    error_message = "Wrong bucket name"
  }

  # Validates a computed attribute using the mocked resource
  assert {
    condition     = output.arn == "arn:aws:s3:::test-bucket-name"
    error_message = "Wrong ARN value"
  }
}
```

Overrides

In addition to mocking whole providers, you can also override specific instances of resources, data sources, and modules. Override blocks can be placed at the root of a Terraform test file to apply to all test runs, or within an individual run block, and can be used with both real and mocked providers. Common use cases for overrides include cutting down test execution time for resources that take a long time to provision, child modules where you’re concerned only with simulating the outputs, and diversifying the attributes of a data source for various test scenarios.
This example overrides a module and mocks its output values:

```hcl
mock_provider "aws" {}

override_module {
  target = module.big_database
  outputs = {
    endpoint = "big_database.012345678901.us-east-1.rds.amazonaws.com:3306"
    db_name  = "test_db"
    username = "fakeuser"
    password = "fakepassword"
  }
}

run "test" {
  assert {
    condition     = module.big_database.username == "fakeuser"
    error_message = "Incorrect username"
  }
}
```

Learn more about test mocking

There’s much more you can do with the new mocking capabilities of the Terraform test framework to help enhance your testing and produce higher-quality modules. To learn more, check out the Mocks documentation, and try it out by following the updated Write Terraform tests tutorial. Along with test mocking, Terraform 1.7 includes several other enhancements to the test framework. You can now:

- Reference variables and run outputs in test provider blocks
- Use HCL functions in variable and provider blocks
- Load variable values for tests from *.tfvars files

For a deep dive on all things testing, check out the recently updated Testing HashiCorp Terraform blog post.

Config-driven remove

During the infrastructure lifecycle, it’s sometimes necessary to modify the state of a resource. The Terraform CLI has multiple commands related to state manipulation, but these all face similar challenges: they operate on only one resource at a time, must be performed locally with direct access to state and credentials, and they immediately modify the state. This is risky because it leaves the configuration and state out of sync, which can lead to accidental resource changes. That’s why in Terraform 1.1 we introduced the concept of config-driven refactoring with the moved block, and continued this with config-driven import in Terraform 1.5. Today with Terraform 1.7, this concept has again been extended with config-driven remove.
There are several reasons why you might need to remove a resource from state without actually destroying it:

- Moving resources between workspaces
- Cleaning up state after apply-time failures
- Refresh failures due to manual resource changes
- Provider deprecations and upgrades

As an alternative to the terraform state rm command, the removed block addresses all of these challenges. Just like the moved and import blocks, state removal can now be performed in bulk and is plannable, so you can be confident that the operation will have the intended effect before modifying state. Removed blocks have a simple syntax:

```hcl
removed {
  # The resource address to remove from state
  from = aws_instance.example

  # The lifecycle block instructs Terraform not to destroy the underlying resource
  lifecycle {
    destroy = false
  }
}
```

Config-driven remove is also compatible with all Terraform Cloud workflows, including VCS-driven workspaces. And soon, structured run output in Terraform Cloud will be able to visually render removal actions alongside other plan activity. Read more about using removed blocks with resources and using removed blocks with modules in the Terraform documentation, and try it out with the updated Manage resources in Terraform state tutorial.

Import block for_each

Terraform 1.7 also includes an enhancement for config-driven import: the ability to expand import blocks using for_each loops. Previously you could target a particular instance of a resource in the to attribute of an import block, but you had to write a separate import block for each instance. Now you can accomplish this with a single import block:

```hcl
locals {
  buckets = {
    "staging" = "bucket-demoapp-staging"
    "uat"     = "bucket-demoapp-uat"
    "prod"    = "bucket-demoapp-prod"
  }
}

import {
  for_each = local.buckets
  to       = aws_s3_bucket.example[each.key]
  id       = each.value
}

resource "aws_s3_bucket" "example" {
  for_each = local.buckets
  bucket   = each.value
}
```

This technique can also be used to expand imports across multiple module instances.
Learn more and see an example in the Import documentation.

Get started with Terraform 1.7

For more details and to learn about all of the enhancements in Terraform 1.7, please review the full HashiCorp Terraform 1.7 changelog. Additional resource links include:

- Download Terraform 1.7
- Sign up for a free Terraform Cloud account
- Read the Terraform 1.7 upgrade guide
- Get hands-on with tutorials at HashiCorp Developer

As always, this release wouldn't have been possible without all of the great community feedback we've received via GitHub issues and from our customers. Thank you! View the full article
  8. How do you know if you can run terraform apply to your infrastructure without negatively affecting critical business applications? You can run terraform validate and terraform plan to check your configuration, but will that be enough? Whether you’ve updated some HashiCorp Terraform configuration or adopted a new version of a module, you want to catch errors quickly before you apply any changes to production infrastructure. In this post, I’ll discuss some testing strategies for HashiCorp Terraform configuration and modules so that you can run terraform apply with greater confidence. As a HashiCorp Developer Advocate, I’ve compiled some advice to help Terraform users learn how infrastructure tests fit into their organization’s development practices, the differences in testing modules versus configuration, and approaches to manage the cost of testing. I’ve included a few testing examples that use Terraform’s native testing framework. No matter which tool you use, you can generalize the approaches outlined in this post to your overall infrastructure testing strategy. In addition to the testing tools and approaches in this post, you can find other perspectives and examples in the references at the end.

The testing pyramid

In theory, you might decide to align your infrastructure testing strategy with the test pyramid, which groups tests by type, scope, and granularity. The testing pyramid suggests that engineers write fewer tests in the categories at the top of the pyramid and more tests in the categories at the bottom. Higher-level tests in the pyramid take more time to run and cost more due to the higher number of resources you have to configure and create. In reality, your tests may not perfectly align with the pyramid shape. The pyramid offers a common framework to describe what scope a test can cover to verify configuration and infrastructure resources. I’ll start at the bottom of the pyramid with unit tests and work my way up to end-to-end tests.
Manual testing involves spot-checking infrastructure for functionality and can have a high cost in time and effort.

Linting and formatting

While not part of the test pyramid, you will often encounter tests that verify the hygiene of your Terraform configuration. Use terraform fmt -check and terraform validate to format and validate the correctness of your Terraform configuration. When you collaborate on Terraform, you may consider testing the Terraform configuration against a set of standards and best practices. Build or use a linting tool to analyze your Terraform configuration for specific best practices and patterns. For example, a linter can verify that your teammate defines a Terraform variable for an instance type instead of hard-coding the value.

Unit tests

At the bottom of the pyramid, unit tests verify individual resources and configurations for expected values. They should answer the question, “Does my configuration or plan contain the correct metadata?” Traditionally, unit tests should run independently, without external resources or API calls. For additional test coverage, you can use any programming language or testing tool to parse the Terraform configuration in HashiCorp Configuration Language (HCL) or JSON and check for statically defined parameters, such as provider attributes with defaults or hard-coded values. However, none of these tests verify correct variable interpolation, list iteration, or other configuration logic. As a result, I usually write additional unit tests to parse the plan representation instead of the Terraform configuration. Configuration parsing does not require active infrastructure resources or authentication to an infrastructure provider. However, unit tests against a Terraform plan require Terraform to authenticate to your infrastructure provider and make comparisons. These types of tests overlap with security testing done via policy as code, because you check attributes in Terraform configuration for the correct values.
For example, your Terraform module parses the IP address from an AWS instance’s DNS name and outputs a list of IP addresses to a local file. At a glance, you don’t know if it correctly replaces the hyphens and retrieves the IP address information.

```hcl
variable "services" {
  type = map(object({
    node = string
    kind = string
  }))
  description = "List of services and their metadata"
}

variable "service_kind" {
  type        = string
  description = "Service kind to search"
}

locals {
  ip_addresses = toset([
    for service, service_data in var.services :
    replace(replace(split(".", service_data.node)[0], "ip-", ""), "-", ".")
    if service_data.kind == var.service_kind
  ])
}

resource "local_file" "ip_addresses" {
  content  = jsonencode(local.ip_addresses)
  filename = "./${var.service_kind}.hcl"
}
```

You could pass an example set of services and run terraform plan to manually check that your module retrieves only the TCP services and outputs their IP addresses. However, as you or your team adds to this module, you may break the module’s ability to retrieve the correct services and IP addresses. Writing unit tests ensures that the logic of searching for services based on kind and retrieving their IP addresses remains functional throughout a module’s lifecycle. This example uses two sets of unit tests written in terraform test to check the logic generating the service’s IP addresses for each service kind.
The first set of tests verifies that the file contents will have two IP addresses for TCP services, while the second set checks that the file contents will have one IP address for the HTTP service:

```hcl
variables {
  services = {
    "service_0" = {
      kind = "tcp"
      node = "ip-10-0-0-0"
    },
    "service_1" = {
      kind = "http"
      node = "ip-10-0-0-1"
    },
    "service_2" = {
      kind = "tcp"
      node = "ip-10-0-0-2"
    },
  }
}

run "get_tcp_services" {
  variables {
    service_kind = "tcp"
  }

  command = plan

  assert {
    condition     = jsondecode(local_file.ip_addresses.content) == ["10.0.0.0", "10.0.0.2"]
    error_message = "Parsed `tcp` services should return 2 IP addresses, 10.0.0.0 and 10.0.0.2"
  }

  assert {
    condition     = local_file.ip_addresses.filename == "./tcp.hcl"
    error_message = "Filename should include service kind `tcp`"
  }
}

run "get_http_services" {
  variables {
    service_kind = "http"
  }

  command = plan

  assert {
    condition     = jsondecode(local_file.ip_addresses.content) == ["10.0.0.1"]
    error_message = "Parsed `http` services should return 1 IP address, 10.0.0.1"
  }

  assert {
    condition     = local_file.ip_addresses.filename == "./http.hcl"
    error_message = "Filename should include service kind `http`"
  }
}
```

I set some mock values for a set of services in the services variable. The tests include command = plan to check attributes in the Terraform plan without applying any changes. As a result, the unit tests do not create the local file defined in the module. The example demonstrates positive testing, where I test that valid input works as expected. Terraform’s testing framework also supports negative testing, where you might expect a validation to fail for an incorrect input. Use the expect_failures attribute to capture the error. If you do not want to use the native testing framework in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice to parse the plan representation in JSON and verify your Terraform logic.
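A negative test for the module above might look like the following sketch. It assumes a hypothetical validation rule on var.service_kind restricting it to "tcp" or "http", which the example module does not actually define:

```hcl
run "rejects_unknown_kind" {
  command = plan

  variables {
    service_kind = "udp"
  }

  # The plan is expected to fail validation on this variable,
  # so the test passes only if the validation error occurs.
  expect_failures = [
    var.service_kind,
  ]
}
```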
Besides testing attributes in the Terraform plan, unit tests can validate:

- Number of resources or attributes generated by for_each or count
- Values generated by for expressions
- Values generated by built-in functions
- Dependencies between modules
- Values associated with interpolated values
- Expected variables or outputs marked as sensitive

If you wish to unit test infrastructure by simulating a terraform apply without creating resources, you can choose to use mocks. Some cloud service providers offer community tools that mock the APIs for their service offerings. Beware that not all mocks accurately reflect the behavior and configuration of their target API. Overall, unit tests run very quickly and provide rapid feedback. As an author of a Terraform module or configuration, you can use unit tests to communicate the expected values of configuration to other collaborators in your team and organization. Since unit tests run independently of infrastructure resources, they have virtually zero cost to run frequently.

Contract tests

At the next level up from the bottom of the pyramid, contract tests check that a configuration using a Terraform module passes properly formatted inputs. Contract tests answer the question, “Does the expected input to the module match what I think I should pass to it?” Contract tests ensure that the contract between a Terraform configuration’s expected inputs to a module and the module’s actual inputs has not been broken. Most contract testing in Terraform helps the module consumer by communicating how the author expects someone to use their module. If you expect someone to use your module in a specific way, use a combination of input variable validations, preconditions, and postconditions to validate the combination of inputs and surface the errors.
For example, use a custom input variable validation rule to ensure that an AWS load balancer’s listener rule receives a valid integer range for its priority:

```hcl
variable "listener_rule_priority" {
  type        = number
  default     = 1
  description = "Priority of listener rule between 1 to 50000"

  validation {
    condition     = var.listener_rule_priority > 0 && var.listener_rule_priority < 50000
    error_message = "The priority of listener_rule must be between 1 to 50000."
  }
}
```

As a part of input validation, you can use Terraform’s rich language syntax to validate variables with an object structure to enforce that the module receives the correct fields. This module example uses a map to represent a service object and its expected attributes:

```hcl
variable "services" {
  type = map(object({
    node = string
    kind = string
  }))
  description = "List of services and their metadata"
}
```

In addition to custom validation rules, you can use preconditions and postconditions to verify specific resource attributes defined by the module consumer. For example, you cannot use a validation rule to check if address blocks overlap. Instead, use a precondition to verify that your IP addresses do not overlap with networks in HashiCorp Cloud Platform (HCP) and your AWS account:

```hcl
resource "hcp_hvn" "main" {
  hvn_id         = var.name
  cloud_provider = "aws"
  region         = local.hcp_region
  cidr_block     = var.hcp_cidr_block

  lifecycle {
    precondition {
      condition     = var.hcp_cidr_block != var.vpc_cidr_block
      error_message = "HCP HVN must not overlap with VPC CIDR block"
    }
  }
}
```

Contract tests catch misconfigurations in modules before applying them to live infrastructure resources. You can use them to check for correct identifier formats, naming standards, attribute types (such as private or public networks), and value constraints such as character limits or password requirements. If you do not want to use custom conditions in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice.
Maintain these contract tests in the module repository and, using a CI framework, pull them into each Terraform configuration that uses the module. When someone references the module in their configuration and pushes a change to version control, the contract tests run against the plan representation before you apply. Unit and contract tests may require extra time and effort to build, but they let you catch configuration errors before running terraform apply. For larger, more complex configurations with many resources, you should not manually check individual parameters. Instead, use unit and contract tests to quickly automate the verification of important configurations and set a foundation for collaboration across teams and organizations. Lower-level tests communicate system knowledge and expectations to teams that need to maintain and update Terraform configuration.

Integration tests

While you do not need to create external resources to run lower-level tests, the top half of the pyramid includes tests that require active infrastructure resources to run properly. Integration tests check that a module or configuration can create and configure resources on a real infrastructure provider. They answer the question, “Does this module or configuration create the resources successfully?” A terraform apply offers limited integration testing because it creates and configures resources while managing dependencies. You should write additional tests to check for configuration parameters on the active resource. In my example, I add a new terraform test to apply the configuration and create the file. Then, I verify that the file exists on my filesystem. The integration test creates the file using a terraform apply and removes the file after issuing a terraform destroy.
```hcl
run "check_file" {
  variables {
    service_kind = "tcp"
  }

  command = apply

  assert {
    condition     = fileexists("${var.service_kind}.hcl")
    error_message = "File `${var.service_kind}.hcl` does not exist"
  }
}
```

Should you verify every parameter that Terraform configures on a resource? You could, but it may not be the best use of your time and effort. Terraform providers include acceptance tests that verify resources properly create, update, and delete with the right configuration values. Instead, use integration tests to verify that Terraform outputs include the correct values or number of resources. They can also test infrastructure behavior that can only be verified after a terraform apply, such as invalid configurations, nonconformant passwords, or the results of for_each iteration. When choosing an integration testing framework outside of terraform test, consider the existing integrations and languages within your organization. Integration tests help you determine whether or not to update your module version and ensure modules run without errors. Since you have to set up and tear down the resources, integration tests can take 15 minutes or more to complete, depending on the resource. As a result, implement as much unit and contract testing as possible to fail quickly on wrong configurations instead of waiting for resources to create and delete.

End-to-end tests

After you apply your Terraform changes to production, you need to know whether or not you’ve affected end-user functionality. End-to-end tests answer the question, “Can someone use the infrastructure system successfully?” For example, application developers and operators should still be able to retrieve a secret from HashiCorp Vault after you upgrade the version. End-to-end tests can verify that changes did not break expected functionality. To check that you’ve upgraded Vault properly, you can create an example secret, retrieve the secret, and delete it from the cluster.
I usually write an end-to-end test using a Terraform check to verify that any updates I make to a HashiCorp Cloud Platform (HCP) Vault cluster return a healthy, unsealed status:

```hcl
check "hcp_vault_status" {
  data "http" "vault_health" {
    url = "${hcp_vault_cluster.main.vault_public_endpoint_url}/v1/sys/health"
  }

  assert {
    condition     = data.http.vault_health.status_code == 200 || data.http.vault_health.status_code == 473
    error_message = "${data.http.vault_health.url} returned an unhealthy status code"
  }
}
```

Besides a check block, you can write end-to-end tests in any programming language or testing framework. This usually includes an API call to check an endpoint after creating infrastructure. End-to-end tests usually depend on an entire system, including networks, compute clusters, load balancers, and more. As a result, these tests usually run against long-lived development or production environments.

Testing Terraform modules

When you test Terraform modules, you want enough verification to ensure a new, stable release of the module for use across your organization. To ensure sufficient test coverage, write unit, contract, and integration tests for modules. A module delivery pipeline starts with a terraform plan and then runs unit tests (and if applicable, contract tests) to verify the expected Terraform resources and configurations. Then, run terraform apply and the integration tests to check that the module can still run without errors. After running integration tests, destroy the resources and release a new module version. The Terraform Cloud private registry offers a branch-based publishing workflow that includes automated testing. If you use terraform test for your modules, the private registry automatically runs those tests before releasing a module. When testing modules, consider the cost and test coverage of module tests.
Conduct module tests in a different project or account so that you can independently track the cost of your module testing and ensure module resources do not overwrite environments. On occasion, you can omit integration tests because of their high financial and time cost. Spinning up databases and clusters can take half an hour or more. When you’re constantly pushing changes, you might even create multiple test instances. To manage the cost, run integration tests after merging feature branches, and select the minimum number of resources you need to test the module. If possible, avoid creating entire systems. Module testing applies mostly to immutable resources because of its create-and-delete sequence. The tests cannot accurately represent the end state of brownfield (existing) resources because they do not test updates. As a result, module testing provides confidence in the module’s successful usage, but not necessarily in applying module updates to live infrastructure environments.

Testing Terraform configuration

Compared to modules, Terraform configuration applied to environments should include end-to-end tests to check for end-user functionality of infrastructure resources. Write unit, integration, and end-to-end tests for the configuration of active environments. The unit tests do not need to cover the configuration in modules. Instead, focus on unit testing any configuration not associated with modules. Integration tests can check that changes successfully run in a long-lived development environment, and end-to-end tests verify the environment’s initial functionality. If you use feature branching, merge your changes and apply them to a production environment. In production, run end-to-end tests against the system to confirm system availability. Failed changes to active environments will affect critical business systems. In its ideal form, a long-running development environment that accurately mimics production can help you catch potential problems.
From a practical standpoint, you may not always have a development environment that fully replicates production because of cost concerns and the difficulty of replicating user traffic. As a result, you usually run a scaled-down version of production to save money. The difference between development and production will affect the outcome of your tests, so be aware of which tests are most important for flagging errors and which are too disruptive to run. Even if configuration tests have less accuracy in development, they can still catch a number of errors and help you practice applying and rolling back changes before production.

Conclusion

Depending on your system’s cost and complexity, you can apply a variety of testing strategies to Terraform modules and configuration. While you can write tests in your programming language or testing framework of choice, you can also use the testing frameworks and constructs built into Terraform for unit, contract, integration, and end-to-end testing.

Test type        | Use case               | Terraform configuration
Unit test        | Modules, configuration | terraform test
Contract test    | Modules                | Input variable validation, preconditions/postconditions
Integration test | Modules, configuration | terraform test
End-to-end test  | Configuration          | Check blocks

This post has explained the different types of tests, how you can apply them to catch errors in Terraform configurations and modules before production, and how to incorporate them into pipelines. Your Terraform testing strategy does not need to be a perfect test pyramid. At the very least, automate some tests to reduce the time you need to manually verify changes and check for errors before they reach production.

Check out our tutorial on how to Write Terraform tests to learn about writing Terraform tests for unit and integration testing and running them in the Terraform Cloud private module registry. For more information on using checks, Use checks to validate infrastructure offers a more in-depth example.
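Contract tests, mentioned above, are typically expressed as input variable validation so that bad inputs fail before any plan or apply. A minimal sketch, with a hypothetical bucket_name variable:

```hcl
# Hypothetical contract test: reject invalid inputs at plan time.
variable "bucket_name" {
  type = string

  validation {
    condition     = can(regex("^[a-z0-9-]{3,63}$", var.bucket_name))
    error_message = "Bucket names must be 3-63 characters of lowercase letters, digits, or hyphens."
  }
}
```

Preconditions and postconditions on resources serve the same contract-testing role for values that are only known during plan or apply.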
If you want to learn about writing tests for security and policy, review our documentation on Sentinel. View the full article
  9. How do you streamline the complex process of managing modern cloud infrastructure? The answer lies in the innovative realm of Infrastructure as Code (IaC) tools, particularly Terraform and Pulumi. Revolutionizing the way we approach cloud infrastructure, these tools shift the focus from traditional, manual management to a sophisticated, code-based methodology. This shift is not just a trend; it's a fundamental change in managing cloud architecture, offering unparalleled efficiency, consistency, and scalability. By automating infrastructure provisioning and management, IaC tools like Terraform and Pulumi have become essential in modern cloud environments. They foster rapid deployment, version control, and seamless scalability, all while minimizing human error. View the full article
  10. One of the most popular cloud-native, PaaS (Platform as a Service) products in Microsoft Azure is Azure App Service. It enables you to easily deploy and host web and API applications in Azure. The service supports ways to configure App Settings and Connection String within the Azure App Service instance. Depending on who has access […] The article Terraform: Deploy Azure App Service with Key Vault Secret Integration appeared first on Build5Nines. View the full article
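One common pattern the article points at is referencing a Key Vault secret from an App Service app setting. This is a sketch only: the resource names are assumptions, and it presumes the app's managed identity has been granted access to the vault.

```hcl
# Sketch (assumed names): an App Service app setting that resolves its value
# from Key Vault at runtime via a Key Vault reference.
resource "azurerm_linux_web_app" "example" {
  name                = "example-app"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  service_plan_id     = azurerm_service_plan.example.id

  site_config {}

  app_settings = {
    "DbPassword" = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.example.versionless_id})"
  }
}
```

Using the versionless secret URI means the app picks up new secret versions without a Terraform change.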
  11. HashiCorp Terraform, an open-source Infrastructure as Code (IaC) tool, enables easier infrastructure provisioning and management across all types of platforms. At the heart of Terraform’s effectiveness is its workflow, which consists of three main stages: Write, Plan, and Apply. This provides a structured process that ensures seamless creation, modification, and provisioning for managing […] The article Terraform Workflow Process Explained appeared first on Build5Nines. View the full article
  12. The popular HashiCorp Terraform, open-source, Infrastructure as Code (IaC) tool, empowers DevOps and SRE teams to manage and provision resources more efficiently. One of Terraform’s powerful features is the ability to import existing infrastructure into a Terraform project. This capability allows you to bring already-deployed resources under Terraform’s control, providing visibility, management, and automation. In […] The article Terraform: Import Existing Infrastructure appeared first on Build5Nines. View the full article
  13. HashiCorp Terraform provides a couple of functions for working with JSON: jsonencode and jsondecode, which grant the ability to encode and decode JSON. This can be a powerful tool for several scenarios where you may need to work with JSON data within a Terraform project. This article shows some simple examples […] The article Terraform: How to work with JSON (jsondecode, jsonencode, .tfvars.json) appeared first on Build5Nines. View the full article
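A brief sketch of both functions (the file name and keys are illustrative assumptions):

```hcl
# jsonencode: turn a Terraform object into a JSON string (e.g. an IAM policy).
# jsondecode: parse a JSON string back into a Terraform value.
locals {
  policy_json = jsonencode({
    Version   = "2012-10-17"
    Statement = [{ Effect = "Allow", Action = "s3:GetObject", Resource = "*" }]
  })

  settings = jsondecode(file("${path.module}/settings.json"))
}

output "environment" {
  value = local.settings.environment
}
```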
  14. At Dyte, we have recognized the potential of Terraform in streamlining our alert setup process. By adopting Terraform, we have empowered our engineering teams to set alerts for their respective services without relying on the SRE/DevOps team. Setting up alerts on New Relic can be tedious and repetitive, requiring manual effort. But with the advent of Terraform, New Relic has started supporting the creation of alerts by Terraform. View the full article
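A self-service alert definition of the kind described might look like the following sketch using the official New Relic Terraform provider. The policy name, NRQL query, and thresholds are illustrative assumptions, not Dyte's actual configuration:

```hcl
# Sketch: a New Relic alert policy and NRQL condition defined in Terraform.
resource "newrelic_alert_policy" "service" {
  name = "example-service-alerts"
}

resource "newrelic_nrql_alert_condition" "error_rate" {
  policy_id = newrelic_alert_policy.service.id
  name      = "High error rate"

  nrql {
    query = "SELECT percentage(count(*), WHERE error IS true) FROM Transaction WHERE appName = 'example-service'"
  }

  critical {
    operator              = "above"
    threshold             = 5
    threshold_duration    = 300
    threshold_occurrences = "all"
  }
}
```

Once templated like this, each team can copy and adjust the condition for its own service without SRE involvement.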
  15. Terraform, an open-source Infrastructure as Code (IaC) tool developed by HashiCorp, provides an effective means to define and provision infrastructure resources. You can automate the process of creating, editing, and deleting resources across cloud-based and on-premise environments using Terraform's powerful features. One of the critical features that make it so effective is state management. In this blog, we will delve into the significance of Terraform state, its management, and practical examples to illustrate its importance... View the full article
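Teams usually keep that state remote and locked rather than on a laptop. A common sketch uses the S3 backend with DynamoDB locking; the bucket, key, and table names below are assumptions for illustration:

```hcl
# Sketch: remote state in S3 with locking via DynamoDB (names are illustrative).
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```

Remote state lets teammates share one source of truth, and the lock table prevents two applies from racing.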
  16. In this article, we will discuss what Terraform is and how to install Terraform on various Linux distributions using HashiCorp repositories. What is Terraform? Terraform View the full article
  17. Infrastructure as Code (IaC) has become a cornerstone of modern cloud management, and HashiCorp Terraform is a powerful tool for achieving this. Terraform allows you to create reusable components called modules, enabling you to build consistent and scalable infrastructure in Azure. In this article, we’ll guide you through the process of creating and using your […] The article Terraform: Create your First Module appeared first on Build5Nines. View the full article
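Calling a module looks like the following sketch; the local path, input variables, and output name are hypothetical:

```hcl
# Sketch: consuming a local module (paths and variable names are illustrative).
module "network" {
  source = "./modules/network"

  vnet_name     = "example-vnet"
  address_space = ["10.0.0.0/16"]
}

output "vnet_id" {
  value = module.network.vnet_id
}
```

The module itself is just a directory of .tf files exposing variable and output blocks, which is what makes it reusable across environments.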
  18. Welcome back to the CNCF Tool Interviews Series. Today, we're taking an in-depth look at OpenTofu, a tool that's making significant strides in the Infrastructure as Code (IaC) domain within the open-source community. Let's get started... View the full article
  19. With the recent release of the official Red Hat Cloud Services Provider for Terraform, customers can now automate the provisioning of Red Hat OpenShift Service on AWS (ROSA) clusters with Terraform. Previously, automating the creation of a ROSA cluster required using the OpenShift Command Line Interface (CLI), either wrapping it in code or using additional tools to automate the necessary CLI commands. Now customers using Terraform can integrate ROSA cluster creation into their existing pipelines. In addition to the Red Hat Cloud Services (RHCS) Provider, Red Hat has made available the ROSA STS Terraform module. This gives customers the option to automate ROSA prerequisites, like operator IAM roles, policies, and identity providers, as a distinct step... View the full article
  20. Terraform has quickly become a go-to infrastructure as code (IaC) tool due to its powerful declarative syntax for provisioning and managing infrastructure efficiently. One key feature that distinguishes Terraform from its rivals is its module system. In this blog, we will explore the Terraform module in detail with some practical examples... View the full article
  21. HashiCorp tweaks Terraform with user interface changes and AI-infused testing. View the full article
  22. Today at HashiConf, we are excited to announce new capabilities for HashiCorp Terraform and Terraform Cloud to improve developer velocity, code quality, and infrastructure cost management. The new Terraform and Terraform Cloud announcements include the following:

    • Terraform test framework (GA) helps produce higher-quality modules
    • Test-integrated module publishing (beta) streamlines the testing and publishing process
    • Generated module tests (beta) help module authors get started in seconds
    • Enhanced editor validation in Visual Studio Code (GA) makes it easier to find and resolve errors
    • Stacks (private preview) simplifies infrastructure provisioning and management at scale
    • Ephemeral workspaces (GA) help optimize infrastructure spend

  View the full article
  23. Terraform is a powerful infrastructure as code (IaC) tool, but even experienced users sometimes encounter challenges with what seems like basic operations. One such operation is simple string concatenation, which can be perplexing if you’re not familiar with the specific syntax and functions available in Terraform. In this article, we’ll address a common issue that […] The article How to Perform Simple String Concatenation in Terraform appeared first on Build5Nines. View the full article
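The usual options for concatenation are interpolation and the built-in join and format functions. A brief sketch with hypothetical local values:

```hcl
# Sketch: three equivalent ways to concatenate strings in Terraform.
locals {
  env = "prod"
  app = "web"

  name_a = "${local.env}-${local.app}"           # string interpolation
  name_b = join("-", [local.env, local.app])     # join() over a list
  name_c = format("%s-%s", local.env, local.app) # printf-style format()
}
```

All three locals evaluate to the same "prod-web" string; interpolation is idiomatic for a couple of values, while join scales better to lists.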
  24. When working with HashiCorp Terraform, it’s important to understand how to interact with external systems and data. Terraform provides a mechanism to query and use information from external sources through data sources using a data block. In this article, we’ll explore what data sources are, provide a use case to illustrate their importance, and clarify […] The article Terraform: How are Data Sources used? appeared first on Build5Nines. View the full article
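A typical use is looking up an existing value, such as the latest machine image, and feeding it into a resource. The filter values and instance type below are illustrative assumptions:

```hcl
# Sketch: a data block querying the latest Ubuntu AMI, consumed by a resource.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
}
```

The data block reads information at plan time without managing or modifying the underlying object.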
  25. The post Terraform: Create Azure Windows VM with file, remote-exec & local-exec provisioner appeared first on DevOpsSchool.com. View the full article
  • Forum Statistics: 44.9k Total Topics, 44.7k Total Posts