Showing results for tags 'terraform cloud'.

Found 15 results

  1. Recent enhancements in HashiCorp Terraform Cloud help simplify the user experience when working with projects. A new dedicated browsing experience provides better visibility and manageability for projects, and the ability to restrict version control system (VCS) connections to projects enables more fine-grained control to reduce risk.

     Project overview page

     As the popularity of projects has grown, customers have found that long project names don’t all fit in the sidebar of the workspaces view. Customers need a better browsing experience for projects that is not restricted to a view designed for browsing workspaces. To address this, we’re introducing a new project overview page to let users view and search all projects they have access to. This view also provides an overview of the number of teams and workspaces associated with each project. When you click into any project, a new dedicated page lists all resources in that project while providing key project details such as workspace name and health. From this page you can click on Settings to manage the project and the teams that have access to it.

     Scope VCS connections to a project

     Within a single Terraform Cloud organization, multiple version control system (VCS) connections can be defined and made available for linked workspaces. However, previously, there was no way to limit the scope of these connections, presenting a challenge for organizations with multiple VCS providers or segmented deployments within a provider, operated by different teams or business units. In these environments it is desirable, and often required, to limit end users to only the providers and data they need. This reduces the risk of mistakes and prevents the exposure of sensitive information from other teams.

     With our latest enhancement, administrators can now control the project scope of VCS connections. By default, each VCS connection is available to all workspaces in the organization. However, if you need to limit which projects can use repositories from a given VCS connection, administrators can now change this setting to limit the connection to only workspaces in the selected project(s). This helps organizations avoid the added overhead of maintaining multiple Terraform Cloud organizations just to isolate VCS environments. It also simplifies the end-user experience by adding another guardrail for safer self-service that limits each team to accessing only the version control providers they need to use.

     Get started with Terraform Cloud

     We’re working to ensure Terraform Cloud continues to deliver improvements that help customers have better visibility and control over their environment throughout their infrastructure lifecycle. To learn more about the new features described in this post, visit the Terraform guides and documentation on HashiCorp Developer. If you are new to Terraform, sign up for Terraform Cloud and get started for free today. View the full article
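     For teams that manage Terraform Cloud itself as code, projects and the workspaces grouped under them can be declared with the HashiCorp tfe provider. The following is a minimal sketch, not code from the post; the organization, project, and workspace names are placeholders, and the per-project scoping of VCS connections described above is a connection setting whose provider support depends on your tfe provider version, so it is omitted here.

       # Minimal sketch using the HashiCorp "tfe" provider. Names are illustrative.
       provider "tfe" {
         # Assumes a TFE_TOKEN environment variable supplies credentials.
       }

       # A project used to group related workspaces, as described above.
       resource "tfe_project" "payments" {
         organization = "example-org"
         name         = "payments"
       }

       # A workspace placed into the project via project_id.
       resource "tfe_workspace" "payments_api" {
         organization = "example-org"
         name         = "payments-api"
         project_id   = tfe_project.payments.id
       }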
  2. The HashiCorp Terraform Cloud Operator for Kubernetes continuously reconciles infrastructure resources using Terraform Cloud. When you use the operator to create a Terraform Cloud workspace, you must reference a Terraform Cloud API token stored in a Kubernetes secret. Rather than hard-coding these secrets, you can better secure them by storing and managing them in a centralized secrets manager, like HashiCorp Vault. In this approach, you need to synchronize secrets created and revoked by Vault into Kubernetes. An operator like the Vault Secrets Operator (VSO) can retrieve secrets from an external secrets manager and store them in a Kubernetes secret for workloads to use. This post demonstrates how to use VSO to retrieve dynamic secrets from Vault and write them to a Kubernetes secret for the Terraform Cloud Operator to reference when creating a workspace. While the example focuses on Terraform Cloud API tokens, you can extend this workflow to any Kubernetes workload or custom resource that requires a secret from Vault.

     Install Vault and operators

     The Terraform Cloud Operator requires a user or team API token with permissions to manage workspaces, plan and apply runs, and upload configurations. While you can manually generate a token in the Terraform Cloud UI, this post configures Vault to issue API tokens for Terraform Cloud. The Terraform Cloud secrets engine for Vault handles the issuance and revocation of different kinds of API tokens in Terraform Cloud. Once you reference the token in the Terraform Cloud Operator, Vault manages its lifecycle and audits its usage and distribution.

     The demo repository for this post sets up the required infrastructure resources, including:

     - A Vault cluster on HCP Vault
     - A Kubernetes cluster on AWS

     After provisioning infrastructure resources, the demo repo installs Helm charts for Vault, the Terraform Cloud Operator, and the Vault Secrets Operator in their own namespaces using Terraform. If you do not use Terraform, install each Helm chart via the CLI. First, install the Vault Helm chart. If applicable, update the values to reference an external Vault cluster:

       $ helm repo add hashicorp https://helm.releases.hashicorp.com
       $ helm install vault hashicorp/vault

     Install the Helm chart for the Terraform Cloud Operator with its default values:

       $ helm install terraform-cloud-operator hashicorp/terraform-cloud-operator

     Install the Helm chart for VSO with a default Vault connection to your Vault cluster:

       $ helm install vault-secrets-operator hashicorp/vault-secrets-operator \
           --set defaultVaultConnection.enabled=true \
           --set defaultVaultConnection.address=$VAULT_ADDR

     Any custom resources created by VSO will use the default Vault connection. If you have different Vault clusters, you can define a VaultConnection custom resource and reference it in upstream dependencies. After installing Vault and the operators, configure the Kubernetes authentication method in Vault. This ensures VSO can use Kubernetes service accounts to authenticate to Vault.

     Set up secrets in Vault

     After installing a Vault cluster and operators into Kubernetes, set up the secrets engines for your Kubernetes application. The Terraform Cloud Operator needs a Terraform Cloud API token with permissions to create projects and workspaces and to upload Terraform configuration. On the Terraform Cloud Free tier, you can generate a user token with administrative permissions or a team token for the “owners” team to create workspaces and apply runs.
     To further secure the operator’s access to Terraform Cloud, upgrade to a plan that supports teams. Then, create a team, associate a team token with it, and scope the token’s access to a Terraform Cloud project. This ensures that the Terraform Cloud Operator has sufficient access to create workspaces and upload configuration in a given project without giving it access to the entire organization.

     Configure the Terraform Cloud secrets engine for Vault to handle the lifecycle of the Terraform Cloud team API token. The demo repo uses Terraform to enable the backend. Pass in an organization or user token with permissions to create other API tokens:

       resource "vault_terraform_cloud_secret_backend" "apps" {
         backend     = "terraform"
         description = "Manages the Terraform Cloud backend"
         token       = var.terraform_cloud_root_token
       }

     Create a role for each Terraform Cloud team that needs to use the Terraform Cloud Operator. Then pass the team ID to the role to configure the secrets engine to generate team tokens:

       resource "vault_terraform_cloud_secret_role" "apps" {
         backend      = vault_terraform_cloud_secret_backend.apps.backend
         name         = "payments-app"
         organization = var.terraform_cloud_organization
         team_id      = "team-*******"
       }

     Build a Vault policy that allows read access to the secrets engine credentials endpoint and role:

       resource "vault_policy" "terraform_cloud_secrets_engine" {
         name   = "terraform_cloud-secrets-engine-payments-app"
         policy = <

     (The policy body is truncated in this excerpt; a possible reconstruction appears a little further down, before the Kubernetes manifests.)

     The Terraform Cloud Operator needs the Terraform Cloud team token to create workspaces, upload configurations, and start runs. However, you may also want to pass secrets to workspace variables. For example, a Terraform module may need a username and password to configure HCP Boundary. You can store the credentials in Vault’s key-value secrets engine and configure a Vault policy to read the static secrets.

     After setting up policies to read the required secrets, create a Vault role for the Kubernetes authentication method, which allows the terraform-cloud service account to authenticate to Vault and retrieve the Terraform Cloud token:

       resource "vault_kubernetes_auth_backend_role" "terraform_cloud_token" {
         backend                          = "kubernetes"
         role_name                        = "payments-app"
         bound_service_account_names      = ["terraform-cloud"]
         bound_service_account_namespaces = ["payments-app"]
         token_ttl                        = 86400
         token_policies = [
           vault_policy.terraform_cloud_secrets_engine.name,
         ]
       }

     Refer to the complete repo to configure the Terraform Cloud secrets engine and store static secrets for the Terraform Cloud workspace variables.

     Sync secrets from Vault to Kubernetes

     The Terraform Cloud Operator includes a custom resource to create workspaces and define workspace variables. However, dynamic variables refer to values stored in a Kubernetes Secret or ConfigMap. Use VSO to synchronize secrets from Vault into native Kubernetes secrets. The demo repo for this post retrieves the Terraform Cloud team token and static credentials and stores them as a Kubernetes secret. VSO uses a Kubernetes service account linked to the Kubernetes authentication method role in Vault.
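     Before moving on to the Kubernetes manifests: the Vault policy shown earlier in this item is cut off in this excerpt. Based on the secrets engine mount (terraform) and role (payments-app) used elsewhere in this item, a minimal read-only policy might look like the following sketch; it is an assumption, not necessarily the post’s exact policy.

       # Sketch: grant read access to the Terraform Cloud secrets engine
       # credentials endpoint for the payments-app role.
       resource "vault_policy" "terraform_cloud_secrets_engine" {
         name   = "terraform_cloud-secrets-engine-payments-app"
         policy = <<EOT
       path "terraform/creds/payments-app" {
         capabilities = ["read"]
       }
       EOT
       }

     The VaultDynamicSecret resource defined next reads from this same terraform/creds/payments-app path.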
     First, deploy a service account and service account token for terraform-cloud to the payments-app namespace:

       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: terraform-cloud
         namespace: payments-app
       ---
       apiVersion: v1
       kind: Secret
       metadata:
         name: terraform-cloud-token
         namespace: payments-app
       type: kubernetes.io/service-account-token

     Then, configure a VaultAuth resource for VSO to use the terraform-cloud service account and authenticate to Vault using the kubernetes mount path and the payments-app role defined for the authentication method. The configuration shown here sets the Vault namespace to admin for your HCP Vault cluster:

       apiVersion: secrets.hashicorp.com/v1beta1
       kind: VaultAuth
       metadata:
         name: terraform-cloud
         namespace: payments-app
       spec:
         method: kubernetes
         mount: kubernetes
         namespace: admin
         kubernetes:
           role: payments-app
           serviceAccount: terraform-cloud
           audiences:
             - vault

     To sync the Terraform Cloud team token required by the Terraform Cloud Operator to a Kubernetes secret, define a VaultDynamicSecret resource to retrieve the credentials. VSO uses this resource to retrieve credentials from the terraform/creds/payments-app path in Vault and creates a Kubernetes secret named terraform-cloud-team-token with the token value. The resource refers to VaultAuth for authentication to Vault:

       apiVersion: secrets.hashicorp.com/v1beta1
       kind: VaultDynamicSecret
       metadata:
         name: terraform-cloud-team-token
         namespace: payments-app
       spec:
         mount: terraform
         path: creds/payments-app
         destination:
           create: true
           name: terraform-cloud-team-token
           type: Opaque
         vaultAuthRef: terraform-cloud

     When you apply these manifests to your Kubernetes cluster, VSO retrieves the Terraform Cloud team token and stores it in a Kubernetes secret. The operator’s logs indicate the handling of the VaultAuth resource and synchronization of the VaultDynamicSecret:

       $ kubectl logs -n vault-secrets-operator $(kubectl get pods \
           -n vault-secrets-operator \
           -l app.kubernetes.io/instance=vault-secrets-operator -o name)
       2024-03-14T16:38:47Z DEBUG events Successfully handled VaultAuth resource request {"type": "Normal", "object": {"kind":"VaultAuth","namespace":"payments-app","name":"terraform-cloud","uid":"e7c0464e-9ce8-4f3f-953a-f8eb10853001","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"331817"}, "reason": "Accepted"}
       2024-03-14T16:38:47Z DEBUG events Secret synced, lease_id="", horizon=0s {"type": "Normal", "object": {"kind":"VaultDynamicSecret","namespace":"payments-app","name":"terraform-cloud-team-token","uid":"d1563879-41ee-4817-a00b-51fe6cff7e6e","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"331826"}, "reason": "SecretSynced"}

     Verify that the Kubernetes secret terraform-cloud-team-token contains the Terraform Cloud team token:

       $ kubectl get secrets -n payments-app \
           terraform-cloud-team-token -o jsonpath='{.data.token}' | base64 -d
       ******.****.*****

     Create a Terraform Cloud workspace using secrets

     You can now configure other Kubernetes resources to reference the secret synchronized by VSO.
     For the Terraform Cloud Operator, deploy a Workspace resource that references the Kubernetes secret with the team token:

       apiVersion: app.terraform.io/v1alpha2
       kind: Workspace
       metadata:
         name: payments-app-database
         namespace: payments-app
       spec:
         organization: hashicorp-stack-demoapp
         project:
           name: payments-app
         token:
           secretKeyRef:
             name: terraform-cloud-team-token
             key: token
         name: payments-app-database
         ## workspace variables omitted for clarity

     The team token has administrator access to create and update workspaces in the “payments-app” project in Terraform Cloud. You can use a similar approach to pass Kubernetes secrets as workspace variables.

     Deploy a Module resource to apply a Terraform configuration in a workspace. The resource references a module source, variables to pass to the module, and outputs to extract. The Terraform Cloud Operator uploads a Terraform configuration to the workspace defining the module:

       apiVersion: app.terraform.io/v1alpha2
       kind: Module
       metadata:
         name: database
         namespace: payments-app
       spec:
         organization: hashicorp-stack-demoapp
         token:
           secretKeyRef:
             name: terraform-cloud-team-token
             key: token
         destroyOnDeletion: true
         module:
           source: "joatmon08/postgres/aws"
           version: "14.9.0"
         ## module variables omitted for clarity

     Terraform Cloud will start a run to apply the configuration in the workspace.

     Rotate the team API token

     Terraform Cloud allows only one active team token at a time. As a result, the Terraform Cloud secrets engine does not assign leases to team tokens and requires manual rotation. However, Terraform Cloud does allow issuance of multiple user tokens. The secrets engine assigns leases to user API tokens and will rotate them dynamically. To rotate a team token, run a Vault command to rotate the role for a team token in Terraform Cloud:

       $ vault write -f terraform/rotate-role/payments-app

     When the team token is rotated, VSO must update the Kubernetes secret with the new token. Edit a field in the VaultDynamicSecret resource, such as renewalPercent, to force VSO to resynchronize:

       $ kubectl edit VaultDynamicSecret terraform-cloud-team-token -n payments-app
       # Please edit the object below. Lines beginning with a '#' will be ignored,
       # and an empty file will abort the edit. If an error occurs while saving this file will be
       # reopened with the relevant failures.
       #
       apiVersion: secrets.hashicorp.com/v1beta1
       kind: VaultDynamicSecret
       metadata:
         annotations:
           ## omitted
       spec:
         ## omitted
         renewalPercent: 60
         vaultAuthRef: terraform-cloud

     VSO recognizes the new team token in Vault and reconciles it with the Kubernetes secret:

       $ kubectl logs -n vault-secrets-operator $(kubectl get pods \
           -n vault-secrets-operator \
           -l app.kubernetes.io/instance=vault-secrets-operator -o name)
       2024-03-18T16:10:19Z INFO Vault secret does not support periodic renewal/refresh via reconciliation {"controller": "vaultdynamicsecret", "controllerGroup": "secrets.hashicorp.com", "controllerKind": "VaultDynamicSecret", "VaultDynamicSecret": {"name":"terraform-cloud-team-token","namespace":"payments-app"}, "namespace": "payments-app", "name": "terraform-cloud-team-token", "reconcileID": "3d0a15f1-0edf-450b-8be1-6319cd3b2d02", "podUID": "4eb7f16a-cfcb-484e-b3da-54ddbfc6a6a6", "requeue": false, "horizon": "0s"}
       2024-03-18T16:10:19Z DEBUG events Secret synced, lease_id="", horizon=0s {"type": "Normal", "object": {"kind":"VaultDynamicSecret","namespace":"payments-app","name":"terraform-cloud-team-token","uid":"f4f0483c-895d-4b05-894c-24fdb1518489","apiVersion":"secrets.hashicorp.com/v1beta1","resourceVersion":"1915673"}, "reason": "SecretRotated"}

     Note that this manual workflow for rotating tokens applies specifically to team and organization tokens generated by the Terraform Cloud secrets engine. User tokens have leases, which VSO handles automatically. VSO also supports the rotation of credentials for static roles in database secrets engines. Set the allowStaticCreds attribute in the VaultDynamicSecret resource for VSO to synchronize changes to static roles.

     Learn more

     As shown in this post, rather than storing Terraform Cloud API tokens as hard-coded secrets in Kubernetes, you can manage the tokens with Vault and use the Vault Secrets Operator to synchronize them to Kubernetes secrets for the Terraform Cloud Operator to use. By managing the Terraform Cloud API token in Vault, you can audit its usage and handle its lifecycle in one place. In general, the pattern of synchronizing to a Kubernetes secret allows any permitted Kubernetes custom resource or workload to use the secret while Vault manages its lifecycle. As a result, you can track the usage of secrets across your Kubernetes workloads without refactoring applications that already use Kubernetes secrets.

     Learn more about the Vault Secrets Operator in our VSO documentation. If you want to further secure your secrets in Kubernetes, check out our blog post comparing three methods to inject secrets from Vault into Kubernetes workloads. If you support a GitOps workflow in your organization and want to empower teams to deploy infrastructure resources using Kubernetes, review our documentation on the Terraform Cloud Operator to deploy and manage infrastructure resources through modules. Refer to GitHub for a complete example provisioning a database and other infrastructure resources. View the full article
  3. In November 2023, we announced the general availability of the Terraform Cloud Operator for Kubernetes. The Terraform Cloud Operator streamlines infrastructure management, allowing platform teams to offer a Kubernetes-native experience for their users while standardizing on Terraform workflows. Today we are excited to announce the general availability of version 2.3 of the Terraform Cloud Operator, with the ability to initiate workspace runs declaratively.

     Introducing workspace run operations

     In previous versions of the Terraform Cloud Operator v2, the only way to start a run was by patching the restartedAt timestamp in the Module resource. This approach was not intuitive, did not work for all types of workspaces and workflows, and did not allow users to control the type of run to perform. This challenge hindered migration efforts to the newest version of the Terraform Cloud Operator. Now with version 2.3, users can declaratively start plan, apply, and refresh runs on workspaces. This enhances self-service by allowing developers to initiate runs on any workspace managed by the operator, including VCS-driven workspaces.

     The Workspace custom resource in version 2.3 of the operator supports three new annotations to initiate workspace runs:

     - workspace.app.terraform.io/run-new: Set this annotation to "true" to trigger a new run.
     - workspace.app.terraform.io/run-type: Set to plan (default), apply, or refresh to control the type of run.
     - workspace.app.terraform.io/run-terraform-version: Specifies the version of Terraform to use for a speculative plan run. For other run types, the workspace version is used.

     As an example, a basic Workspace resource looks like this:

       apiVersion: app.terraform.io/v1alpha2
       kind: Workspace
       metadata:
         name: this
       spec:
         organization: kubernetes-operator
         token:
           secretKeyRef:
             name: tfc-operator
             key: token
         name: kubernetes-operator

     Using kubectl as shown here, annotate the above resource to immediately start a new apply run:

       kubectl annotate workspace this \
         workspace.app.terraform.io/run-new="true" \
         workspace.app.terraform.io/run-type=apply --overwrite

     The annotation is reflected in the Workspace resource for observability:

       apiVersion: app.terraform.io/v1alpha2
       kind: Workspace
       metadata:
         annotations:
           workspace.app.terraform.io/run-new: "true"
           workspace.app.terraform.io/run-type: apply
         name: this
       spec:
         organization: kubernetes-operator
         token:
           secretKeyRef:
             name: tfc-operator
             key: token
         name: kubernetes-operator

     After the run is successfully triggered, the operator will set the run-new value back to "false".

     Learn more and get started

     HashiCorp works to continuously improve the Kubernetes ecosystem by enabling platform teams at scale. Learn more about the Terraform Cloud Operator by reading the documentation and the Deploy infrastructure with the Terraform Cloud Kubernetes Operator v2 tutorial. If you are completely new to Terraform, sign up for Terraform Cloud and get started using the Free offering today. View the full article
  4. HashiCorp Terraform Cloud run tasks have long been a staple for securely sharing Terraform-related data with trusted integration partners. And with the newest enhancements, the benefits go even further. These improvements empower teams to seamlessly expand their use of essential third-party integrations, facilitating automation, configuration management, security, compliance, and orchestration tasks. Recent efforts by the HashiCorp Terraform team have focused on refining the process of associating run tasks within Terraform organizations, significantly reducing day-to-day overhead. Plus, the introduction of a new post-apply stage broadens the potential use cases for run tasks, offering even more value to users.

     Scoping organizational run tasks

     Initially, run tasks were tailored to meet the needs of teams provisioning infrastructure with Terraform Cloud. Recognizing the diversity of tools used in Terraform workflows, we integrated them seamlessly into Terraform Cloud as first-class run task integrations. This gave teams additional flexibility in selecting and managing run tasks for their workspaces. As run task adoption grows within organizations, platform operations teams face challenges in ensuring consistency across the organization. Managing individual run task assignments can become cumbersome, with platform teams striving for standardization across workspaces.

     To address this, we've introduced scopes to organizational run tasks in Terraform Cloud. This feature allows platform teams to define the scope of organizational run tasks, targeting them globally and specifying evaluation stages for enforcement. Organization-wide enforcement eliminates configuration burden and reduces the risk of compliance gaps as new workspaces are created. Multi-stage support further enhances the run task workflow, streamlining configuration and reducing redundant code when using the Terraform Cloud/Enterprise (tfe) provider for run task provisioning and management.

     Introducing post-apply run tasks

     Post-provisioning tasks are crucial for managing and optimizing infrastructure on Day 2 and beyond. These tasks include configuration management, monitoring, performance optimization, security management, cost optimization, and scaling to help ensure efficient, secure, and cost-effective operations. Recent discussions with customers underscored the need to securely integrate third-party tools and services into Terraform workflows after infrastructure is provisioned with Terraform Cloud. Post-provisioning processes often require manual intervention before systems or services are production-ready. While API-driven workflows can expedite post-provisioning, the lack of a common workflow poses implementation challenges.

     In response to these concerns, we've introduced a new post-apply stage to the run task workflow. This stage lets users seamlessly incorporate post-provisioning tasks that automate configuration management, compliance checks, and other post-deployment activities. The feature simplifies the integration of Terraform workflows with users' toolchains, prioritizing security and control.

     Refined user experience for run tasks

     As part of the implementation of run task scopes, we've extended support for multi-stage functionality to workspace run tasks. We also introduced two new views that offer users the flexibility to see the run tasks associated with their workspace. Now workspace administrators can choose to view their run task associations as a list or grouped by assigned stages.
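     Since the post mentions provisioning run tasks with the Terraform Cloud/Enterprise (tfe) provider, here is a rough sketch of registering an organization run task and attaching it to a workspace. The endpoint URL, names, and workspace are placeholders, and support for global scoping and the new post-apply stage varies by tfe provider version, so this should be treated as indicative rather than exact.

       # Illustrative only: an organization-level run task attached to one workspace.
       data "tfe_workspace" "example" {
         organization = "example-org"
         name         = "example-workspace"
       }

       resource "tfe_organization_run_task" "security_scan" {
         organization = "example-org"
         name         = "security-scan"
         url          = "https://runtask.example.com/hook"   # placeholder integration endpoint
         enabled      = true
         description  = "Third-party security scan invoked during runs"
       }

       resource "tfe_workspace_run_task" "security_scan" {
         workspace_id      = data.tfe_workspace.example.id
         task_id           = tfe_organization_run_task.security_scan.id
         enforcement_level = "advisory"
         # post_plan is the traditional default stage; newer provider versions may
         # also accept the post-apply stage described above. Check the provider docs.
         stage             = "post_plan"
       }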
     Summary and resources

     The advancements in Terraform Cloud's run task workflow empower users to streamline infrastructure provisioning and management. You can elevate your workflow with scopes for organizational run tasks and harness the potential of the post-apply stage. To learn more, explore HashiCorp’s comprehensive run tasks documentation. Additionally, we provide a Terraform run task scaffolding project written in Go to help you write your own custom run task integration. If you're new to Terraform, sign up for Terraform Cloud today and start for free. View the full article
  5. HashiCorp Terraform is the world’s most widely used multi-cloud provisioning product. The Terraform ecosystem has notched more than 3,000 providers, 14,000 modules, and 250 million downloads. Terraform Cloud is the fastest way to adopt Terraform, providing everything practitioners, teams, and global businesses need to create and collaborate on infrastructure and manage risks for security, compliance, and operational constraints. This month, AWS AppFabric added support for Terraform Cloud, expanding an already long list of ways that Terraform can connect, secure, and provision infrastructure with AWS. This post explores the new AppFabric support and highlights two other key existing integrations: dynamic provider credentials and AWS Service Catalog support for Terraform Cloud.

     AWS AppFabric support for Terraform Cloud

     AWS AppFabric now supports Terraform Cloud. IT administrators and security analysts can use AppFabric to quickly integrate with Terraform Cloud, aggregate enriched and normalized SaaS audit logs, and audit end-user access across their SaaS apps. This launch expands the set of applications AWS AppFabric supports across an organization. AWS AppFabric quickly connects SaaS applications to each other or to data lakes like Amazon Security Lake. For Terraform Cloud users, this integration can accelerate time-to-market and help developers release new features to production faster with streamlined infrastructure provisioning and application delivery workflows. To learn more, visit the AWS AppFabric page and then check out how to connect AppFabric to your Terraform Cloud account.

     Dynamic credentials with the AWS provider

     Introduced early last year, Terraform Cloud's dynamic provider credentials let you establish a trust relationship between Terraform Cloud and AWS. They limit the blast radius of compromised credentials by using unique, single-use credentials for each Terraform run. Dynamic credentials also give you fine-grained control over the resources that each of your Terraform Cloud projects and workspaces can manage. Terraform Cloud supports dynamic credentials for AWS and Vault. To learn more, see the joint AWS and HashiCorp blog post on how to Simplify and Secure Terraform Workflows on AWS with Dynamic Provider Credentials, and learn how to configure dynamic credentials with the AWS provider at HashiCorp Developer.

     Terraform Cloud self-service provisioning with AWS Service Catalog

     In August 2023, AWS added AWS Service Catalog support for Terraform Cloud. This includes integrated access to key AWS Service Catalog features, including cataloging of standardized and pre-approved Terraform configurations, infrastructure as code templates, access control, resource provisioning with least-privilege access, versioning, sharing to thousands of AWS accounts, and tagging. By combining Terraform Cloud with AWS Service Catalog, we’re connecting the AWS Service Catalog interface that many customers already know with the existing workflows and policy guardrails of Terraform Cloud. HashiCorp and AWS have since co-presented at HashiConf (Terraform Cloud self-service provisioning with AWS Service Catalog) and partnered on AWS’s blog post on How to Use AWS Service Catalog with HashiCorp Terraform Cloud, demonstrating the workflow for provisioning a new product and offering access to getting-started guides.
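     Returning to dynamic provider credentials for a moment: on the Terraform Cloud side they are typically enabled with two workspace environment variables. A minimal sketch using the tfe provider follows; the AWS-side OIDC identity provider and IAM role are assumed to already exist, and the workspace name and role ARN are placeholders.

       # Sketch: enable AWS dynamic provider credentials on a workspace by setting
       # the environment variables Terraform Cloud checks during a run.
       data "tfe_workspace" "payments_api" {
         organization = "example-org"
         name         = "payments-api"
       }

       resource "tfe_variable" "enable_aws_dynamic_creds" {
         workspace_id = data.tfe_workspace.payments_api.id
         category     = "env"
         key          = "TFC_AWS_PROVIDER_AUTH"
         value        = "true"
       }

       resource "tfe_variable" "aws_run_role" {
         workspace_id = data.tfe_workspace.payments_api.id
         category     = "env"
         key          = "TFC_AWS_RUN_ROLE_ARN"
         value        = "arn:aws:iam::123456789012:role/terraform-cloud-run-role" # placeholder
       }

     Each run then exchanges its workload identity token for short-lived AWS credentials scoped to that IAM role, so no static access keys are stored in the workspace.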
     Self-service infrastructure is no longer a dream

     Platform teams can use Terraform Cloud, HCP Waypoint, and the AWS Service Catalog to create simplified Terraform-based workflows for developers. Terraform modules can incorporate unit testing, built-in security, policy enforcement, and reliable version updates. Using these tools, platform teams can establish standardized workflows to deploy applications and deliver a smooth and seamless developer experience. Learn more by viewing AWS and HashiCorp's recent Self-service infrastructure is no longer a dream talk from AWS re:Invent. View the full article
  6. In November 2023, we announced the general availability of the Terraform Cloud Operator for Kubernetes. The Terraform Cloud Operator streamlines infrastructure management, allowing platform teams to offer a Kubernetes-native experience for their users while standardizing Terraform workflows. It simplifies the management of Terraform Cloud workspaces and agent pools, ensuring efficiency and consistency across operations. Today we are excited to announce the general availability of project support in the latest version of the Terraform Cloud Operator, version 2.2.

     Introducing project support

     Previously, workspace creation using the operator was limited to the default project in Terraform Cloud. Users needed elevated permissions, which led to security risks from overly broad access and also hindered self-managed workspaces due to frequent dependencies on a central team. Now with project support, users can specify the project where a workspace will be created. This enhances self-service by allowing users to independently create and manage workspaces and execute runs within the context of their assigned project. The project name can now be set in the Workspace resource (example code). Also, project administrators can use the new Project custom resource to create and manage projects and team access in the organization (example code).

     Key benefits

     The general availability of project support for the Terraform Cloud Operator brings two main benefits:

     - Improved efficiency: Projects streamline platform teams' ability to group related workspaces based on their organization's resource usage and ownership patterns (e.g. by teams, business units, or services). These workspace groupings reduce complexity when managing and organizing Terraform configurations.
     - Reduced risk: Instead of managing permissions for each workspace individually, you can group related workspaces into projects, then grant teams access to the project. Those permissions then apply to all workspaces under that project. This helps teams manage the workspaces they are responsible for while keeping their permissions confined to a project rather than the whole organization, making it easier for organization owners to follow the principle of least privilege.

     Learn more and get started

     Take a deeper dive into the Terraform Cloud Operator and securely managing Kubernetes resources by signing up for the Multi-cloud Kubernetes with HashiCorp Terraform webinar. Learn more about project support for the Terraform Cloud Operator by reading the documentation. If you are completely new to Terraform, sign up for Terraform Cloud and get started using the Free offering today. View the full article
  7. Policies are rules that HashiCorp Terraform Cloud enforces at the Terraform run phase to help with security, compliance, and cost management. Policies can be defined in Terraform Cloud using the Sentinel and Open Policy Agent (OPA) policy as code frameworks. Today, we are excited to announce policy runtime version management, a new feature that addresses a critical challenge customers face when integrating policy as code with Terraform Cloud: it enables users to select a specific Sentinel or OPA runtime version for their policy sets. This policy runtime version pinning gives Terraform Cloud users more control, flexibility, and stability in their policy deployments, reducing the impact of version conflicts, unexpected upgrades, and bugs, and making policy enforcement more stable and efficient... View the full article
  8. Run tasks allow users to directly integrate third-party tools and services within their HashiCorp Terraform Cloud workspace. They are used to perform a wide range of operations, such as managing cost, security, and compliance, or enhancing workflows with custom logic. Today, we are excited to build on this functionality with the general availability of streamlined run task reviews, a new feature that accelerates run task evaluation by surfacing critical information directly in Terraform Cloud. Streamlined run task reviews are now available for use with the Palo Alto Prisma Cloud run task for Terraform Cloud... View the full article
  9. Earlier this year, we announced general availability of no-code provisioning for HashiCorp Terraform Cloud Plus customers. This deployment method gives organizations a self-service workflow in Terraform Cloud for application developers and others who need infrastructure but may not know Terraform or HashiCorp Configuration Language (HCL). No-code provisioning empowers cloud platform teams to set up their modules for push-button self-service, allowing stakeholders with infrastructure needs to provision those modules without having to manage Terraform configuration repos or write code. No-code provisioning provides a simpler, standardized way to provision with Terraform, bringing even more reusability and increasing provisioning velocity. However, the previous no-code provisioning release restricted users to the module version with which they originally provisioned their infrastructure; they could only change variable inputs. This limitation kept users from accessing changes delivered in subsequent versions of the module unless they destroyed the workspace and deployed a fresh one.

     Introducing module version upgrades

     Module version upgrades for no-code workspaces address this issue by significantly reducing the friction of updating the no-code modules in a Terraform Cloud workspace. Once an administrator or module owner updates the designated no-code ready version, a notification appears in downstream workspaces that use the module, giving practitioners a seamless experience in receiving and applying upgrades to their workspaces. The module version upgrade process works like this:

     1. Users see a notification in their workspace alerting them to the newly available version.
     2. From there they can initiate a plan, noting any changes that will take effect with the new module version.
     3. If the plan looks good, the user can apply it. If not, any problems can be communicated back to the module author before they impact the running infrastructure.
     4. Terraform Cloud updates the module version used in the workspace and applies any necessary changes to the resources based on the plan.

     Summary and resources

     Self-service workflows like no-code provisioning are becoming essential to scaling infrastructure operations. Module version upgrades keep developers' no-code workspaces up to date without them having to know Terraform or ask their platform team to update their infrastructure. To learn more about module version upgrades for no-code provisioning, please refer to module upgrades in the no-code provisioning documentation. You can get hands-on with the new feature in the updated no-code provisioning tutorial. Get started for free on Terraform Cloud to provision and manage all of your infrastructure. View the full article
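     For context on the platform-team side of this workflow, one common way to publish a module and mark it as no-code ready is through the tfe provider. This is only a rough sketch: the repository, organization, and variable are placeholders, and the exact attribute used to designate the no-code ready version depends on your tfe provider version.

       # Rough sketch: publish a module to the private registry and mark it no-code ready.
       variable "oauth_token_id" {
         type        = string
         description = "OAuth token ID of an existing VCS connection"
       }

       resource "tfe_registry_module" "vm" {
         organization = "example-org"
         vcs_repo {
           display_identifier = "example-org/terraform-aws-vm"   # placeholder repo
           identifier         = "example-org/terraform-aws-vm"
           oauth_token_id     = var.oauth_token_id
         }
       }

       resource "tfe_no_code_module" "vm" {
         organization    = "example-org"
         registry_module = tfe_registry_module.vm.id
         # The designated no-code ready version that downstream workspaces are
         # prompted to upgrade to is set in the registry; consult the tfe provider
         # docs for the attribute your provider version exposes (e.g. a version pin).
       }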
  10. Terraform Cloud (TFC) can help manage infrastructure as code (IaC) development for large enterprises. As the number of Google Cloud projects grows, managing access controls for Terraform Cloud projects and workspaces can become complex. Using Workload Identity Federation offers a solution that is more secure than Google Cloud service account keys and scales well to hundreds or even thousands of Google Cloud projects, TFC workspaces, and TFC projects... View the full article
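      On the Google Cloud side, Workload Identity Federation is typically set up with an identity pool and an OIDC provider that trusts Terraform Cloud's token issuer. The following sketch with the google provider is illustrative only: the project ID, pool and provider names, and the attribute condition are placeholders, and the claim names should be checked against HashiCorp's dynamic credentials documentation.

        # Sketch: a workload identity pool and OIDC provider trusting Terraform Cloud.
        resource "google_iam_workload_identity_pool" "tfc" {
          project                   = "my-gcp-project"
          workload_identity_pool_id = "terraform-cloud-pool"
        }

        resource "google_iam_workload_identity_pool_provider" "tfc" {
          project                            = "my-gcp-project"
          workload_identity_pool_id          = google_iam_workload_identity_pool.tfc.workload_identity_pool_id
          workload_identity_pool_provider_id = "terraform-cloud-oidc"

          oidc {
            issuer_uri = "https://app.terraform.io"
          }

          # Map Terraform Cloud's OIDC claims to Google attributes and restrict
          # federation to a single Terraform Cloud organization (illustrative).
          attribute_mapping = {
            "google.subject"                        = "assertion.sub"
            "attribute.terraform_organization_name" = "assertion.terraform_organization_name"
          }
          attribute_condition = "assertion.terraform_organization_name == \"example-org\""
        }

      The Terraform Cloud workspace then references this provider and a service account through the TFC_GCP_* environment variables described in the dynamic provider credentials documentation.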
  11. We are excited to announce the general availability of an organization-wide default execution mode, a new agent configuration method for HashiCorp Terraform Cloud that is also coming soon to Terraform Enterprise. Terraform Cloud facilitates the management and deployment of agents within secure, isolated network environments. With the introduction of self-hosted agents in Terraform Cloud, it became possible to deploy them in pools, creating a flexible architecture that allows a single organization to manage infrastructure across multiple private network segments... View the full article
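      To make the change concrete, the sketch below shows the per-workspace wiring that agent pools have traditionally required with the tfe provider; the new feature lets the agent execution mode be set once as an organization default instead (in the UI, or via the tfe provider's organization default settings resource if your provider version includes one). Names are placeholders.

        # Sketch: a self-hosted agent pool and a workspace pinned to it.
        resource "tfe_agent_pool" "private_network" {
          organization = "example-org"
          name         = "private-network-agents"
        }

        resource "tfe_agent_token" "private_network" {
          agent_pool_id = tfe_agent_pool.private_network.id
          description   = "Token for agents running in the private segment"
        }

        # Previously, every workspace needed execution_mode and agent_pool_id set
        # like this; an organization-wide default removes that per-workspace step.
        resource "tfe_workspace" "internal_app" {
          organization   = "example-org"
          name           = "internal-app"
          execution_mode = "agent"
          agent_pool_id  = tfe_agent_pool.private_network.id
        }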
  12. In today’s multi-cloud world, images (such as AMIs for Amazon EC2, virtual machines, Docker containers, and more) lay the foundation for modern infrastructure, security, networking, and applications. Enterprises adopting multi-cloud typically start by using Terraform for centralized provisioning, but Terraform does not handle the details of image creation and management. In many organizations, the workflows in place to create and manage images are siloed, time-consuming, and complex, leading to slow spin-up times and human errors that pose security risks. Organizations need standard processes to ensure all images throughout their infrastructure estate are secure, compliant, and easily accessible... View the full article
  13. We are excited to announce Drift Detection for Terraform Cloud, now in public beta for HashiCorp Terraform Cloud Business. Drift Detection provides continuous checks against infrastructure state to detect and notify when there are changes. This new feature continues to evolve the capabilities of HashiCorp Terraform to give operators continuous visibility into the state of their multi-cloud infrastructure. Drift Detection follows the announcements of Terraform 1.2 and Terraform Run Tasks, additional capabilities that increase efficiency while reducing risk related to security, compliance, and operational consistency. These new capabilities expand Terraform's support for "Day 2" operations for cloud infrastructure.

      As organizations migrate to the cloud, they need infrastructure automation to efficiently provision and manage their cloud resources. Typically, that consists of three distinct phases: adopting and establishing a provisioning workflow, standardizing the workflow, and operating and optimizing at scale across multi-cloud environments as well as private datacenters. Infrastructure as code, along with proper guardrails, is essential to this process. Terraform helps organizations with infrastructure automation and provides capabilities across all three of these phases... View the full article
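      Drift detection is enabled per workspace as part of workspace health checks. A hedged sketch with the tfe provider is shown below; the assessments_enabled attribute is how recent provider versions expose this and may postdate the public beta described here, so verify against your provider and plan tier.

        # Sketch: turn on health assessments (which include drift detection) for a
        # workspace on a Terraform Cloud tier that supports the feature.
        resource "tfe_workspace" "network" {
          organization        = "example-org"
          name                = "network-prod"
          assessments_enabled = true  # health assessments / drift detection
        }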
  14. We are pleased to announce the general availability of Consul-Terraform-Sync (CTS) 0.6. This release marks another step in the maturity of our larger Network Infrastructure Automation (NIA) solution. CTS combines the functionality of HashiCorp Terraform and HashiCorp Consul to eliminate manual ticket-based systems across on-premises and cloud environments. Its capabilities can be broken down into two parts:

      - For Day 0 and Day 1, teams use Terraform to quickly deploy network devices and infrastructure in a consistent and reproducible manner.
      - Once established, teams manage Day 2 networking tasks by integrating Consul's catalog to register services into the system via CTS. Whenever a change is recorded to the service catalog, CTS triggers a Terraform run that uses partner ecosystem integrations to automate updates and deployments for load balancers, firewall policies, and other service-defined networking components.

      This post covers the evolution of CTS and highlights the new features in CTS 0.6… View the full article
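      To illustrate the Day 2 loop described above, a CTS configuration pairs a Consul connection with tasks that run a Terraform module when watched services change. The sketch below is indicative only: the module source and service names are placeholders, and some field names have shifted across CTS releases, so check the CTS documentation for your version.

        # Indicative CTS configuration: watch Consul services and run a Terraform
        # module when their instances change.
        consul {
          address = "localhost:8500"
        }

        driver "terraform" {
          log = true
        }

        task {
          name        = "update-firewall-policy"
          description = "Re-render firewall rules when web/api instances change"
          module      = "example-org/cts-firewall-module/generic"   # placeholder module
          services    = ["web", "api"]
        }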
  15. Recently, we announced the general availability of HashiCorp Consul Service (HCS) on Azure, our first fully managed service for cloud networking automation. Customers can use HCS to discover and automate connections between services running in both Azure Kubernetes Service (AKS) clusters and Azure VMs without worrying about the operational burden of managing their own Consul deployment. HCS can be launched directly from the Azure Portal, but in this blog, we're going to show you how to manage HCS using Terraform Cloud.

      Why use Terraform Cloud instead of the Azure Portal?

      Terraform has become a de facto tool for many infrastructure workflows, and Terraform Cloud has emerged as a powerful way for organizations to enable greater collaboration for their operators. While one of the major benefits of HCS is its integration within the Azure environment, we know that many customers have spent a lot of time and effort building out a robust Terraform workflow. We don't want to make customers feel like their only options are to either change the way they are managing their environments or self-host a Consul deployment. Terraform Cloud can be used to manage both the deployment of HCS and the services that interact with it. Additionally, Terraform Cloud enables multiple operators to interact with the HCS environment while layering on additional guardrails to ensure that changes are applied in a consistent and compliant manner. View the full article
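      Whatever provider resources ultimately manage HCS, the configuration is wired to Terraform Cloud the same way as any other. A minimal sketch follows; the organization and workspace names are placeholders, and older Terraform versions use the remote backend instead of the cloud block.

        # Minimal sketch: run this configuration's plans and applies in Terraform
        # Cloud so multiple operators share state, run history, and guardrails.
        terraform {
          required_version = ">= 1.1.0" # the cloud block requires Terraform 1.1+

          cloud {
            organization = "example-org"

            workspaces {
              name = "hcs-on-azure"
            }
          }
        }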