Showing results for tags 'iac'.

  1. With Infrastructure as Code (IaC), every aspect of an organization's infrastructure is defined and managed through code. Automating infrastructure deployment and configuration in this way makes managing the organization's infrastructure much easier over time. Also, committing the IaC code that configures the infrastructure into source control brings change tracking and other benefits for the organization […] The article Benefits of Convention over Configuration for IaC Deployment Projects appeared first on Build5Nines. View the full article
  2. It’s often challenging to adopt modern DevOps practices around infrastructure-as-code (IaC). Here's how to make the journey smoother. View the full article
  3. At re:Invent in 2023, AWS announced Infrastructure as Code (IaC) support for Amazon CodeWhisperer. CodeWhisperer is an AI-powered productivity tool for the IDE and command line that helps software developers quickly and efficiently create cloud applications to run on AWS. The languages currently supported for IaC are YAML and JSON for AWS CloudFormation, TypeScript and Python for the AWS CDK, and HCL for HashiCorp Terraform. In addition to providing code recommendations in the editor, CodeWhisperer also features a security scanner that alerts the developer to potentially insecure infrastructure code and offers suggested fixes that can be applied with a single click. In this post, we will walk you through some common scenarios and show you how to get the most out of CodeWhisperer in the IDE. CodeWhisperer is supported by several IDEs, such as Visual Studio Code and JetBrains. For the purposes of this post, we'll focus on Visual Studio Code. There are a few things you need in order to follow along with the examples, listed in the prerequisites section below.

Prerequisites

    • An AWS Builder ID or an AWS Identity Center login controlled by your organization
    • A supported IDE, like Visual Studio Code
    • The AWS Toolkit IDE extension

Authenticate and Connect

CloudFormation

Now that you have the toolkit configured, open a new source file with the yaml extension. Since YAML files can represent a wide variety of different configuration file types, it helps to add the AWSTemplateFormatVersion: '2010-09-09' header to the file to let CodeWhisperer know that you are editing a CloudFormation file. Just typing the first few characters of that header is likely to result in a recommendation from CodeWhisperer. Press TAB to accept recommendations and Escape to ignore them.

[Image: AWSTemplateFormatVersion header]

If you have a good idea about the various resources you want to include in your template, include them in a top-level Description field. This will help CodeWhisperer understand the relationships between the resources you will create in the file. In the example below, we describe the stack we want as a "VPC with public and private subnets". You can be more descriptive if you want, using a multi-line YAML string to add more specific details about the resources you want to create.

[Image: Creating a CloudFormation template with a description]

After accepting that recommendation for the parameters, you can continue to create resources.

[Image: Creating CloudFormation resources]

You can also trigger recommendations with inline comments and descriptive logical IDs if you want to create one resource at a time. The more code you have in the file, the more CodeWhisperer will understand from context what you are trying to achieve.

CDK

It's also possible to create CDK code using CodeWhisperer. In the example below, we started with a CDK project using cdk init, wrote a few lines of code to create a VPC in a TypeScript file, and CodeWhisperer proposed some code suggestions based on what we started to write. After accepting a suggestion, you can customize the code to fit your needs. CodeWhisperer will learn from your coding style and make more precise suggestions as you add more code to the project.

[Image: Create a CDK stack]

With the professional version of CodeWhisperer, you can choose whether you want to get suggestions that include code with references. If you choose to get the references, you can find them in the Code Reference Log. These references let you know when a code recommendation was a near-exact match for code in an open source repository, allowing you to inspect the license and decide whether you want to use that code or not.

[Image: References]

Terraform HCL

After a close collaboration between teams at HashiCorp and AWS, Terraform's HashiCorp Configuration Language (HCL) is also supported by CodeWhisperer. CodeWhisperer recommendations are triggered by comments in the file. In this example, we repeat a prompt similar to the one we used with CloudFormation and CDK (a hand-written sketch of this kind of HCL appears after this list).

[Image: Terraform code suggestion]

Security Scanner

In addition to CodeWhisperer recommendations, the toolkit also includes a built-in security scanner. Considering that the resulting code can be edited and combined with other preexisting code, it's good practice to scan the final result to see if there are any best-practice security recommendations that can be applied. Expand the CodeWhisperer section of the AWS Toolkit to see the "Run Security Scan" button. Click it to initiate a scan, which might take up to a minute to run. In the example below, we defined an S3 bucket that can be read by anyone on the internet.

[Image: Security scanner]

Once the security scan completes, the code with issues is underlined and each suggestion is added to the 'Problems' tab. Click on any of those to get more details.

[Image: Scan results]

CodeWhisperer provides a clickable link to get more information about the vulnerability and what you can do to fix it.

[Image: Scanner link]

Conclusion

The integration of generative AI tools like Amazon CodeWhisperer is transforming the landscape of cloud application development. By supporting Infrastructure as Code (IaC) languages such as CloudFormation, CDK, and Terraform HCL, CodeWhisperer is expanding its reach beyond traditional development roles. This advancement is pivotal in merging runtime and infrastructure code into a cohesive unit, significantly enhancing productivity and collaboration in the development process. The inclusion of IaC enables a broader range of professionals, especially Site Reliability Engineers (SREs), to actively engage in application development, automating and optimizing infrastructure management tasks more efficiently. CodeWhisperer's capability to perform security scans on the generated code aligns with the critical objectives of system reliability and security, essential for both developers and SREs. By providing insights into security best practices, CodeWhisperer enables robust and secure infrastructure management on the AWS cloud. This makes CodeWhisperer not just a valuable tool for developers, but a comprehensive solution that bridges different technical disciplines, fostering a collaborative environment for innovation in cloud-based solutions.

Bio

Eric Beard is a Solutions Architect at AWS specializing in DevOps, CI/CD, and Infrastructure as Code, the author of the AWS Sysops Cookbook, and an editor for the AWS DevOps blog channel. When he's not helping customers design Well-Architected systems on AWS, he is usually playing or watching tennis.

Amar Meriche is a Sr Technical Account Manager at AWS in Paris. He helps his customers improve their operational posture through advocacy and guidance, and is an active member of the DevOps and IaC community at AWS. He's passionate about helping customers use the various IaC tools available at AWS following best practices. View the full article
  4. Businesses are increasingly depending on cloud-based services to improve efficiency, increase scalability, and streamline operations in a rapidly evolving digital age. The need for efficient resource management has multiplied as the cloud has become a crucial part of contemporary IT infrastructure. Enter Infrastructure as Code (IaC), a ground-breaking method for managing infrastructure that is fundamentally altering how we deploy and manage cloud resources. Infrastructure as Code has emerged as a pillar of contemporary cloud infrastructure management, allowing businesses to increase automation, efficiency, and scalability while lowering the operational risks and complexity associated with manual configuration... View the full article
  5. AWS Service Catalog customers can now create, distribute, and launch AWS resources that are configured using third-party Infrastructure as Code (IaC) tools such as Ansible, Chef, Pulumi, Puppet, and more. Within AWS Service Catalog, customers can use these IaC tools in addition to the previously supported AWS CloudFormation and HashiCorp Terraform Cloud configurations. View the full article
  6. HashiCorp Terraform is a versatile infrastructure-as-code tool that empowers users to define and provision infrastructure resources with ease using a declarative configuration language. While Terraform provides solutions for converting strings to lists, there are occasions where you'll need to do the opposite: convert a list into a string (a minimal join() sketch appears after this list). This can be particularly beneficial when configuring […] The article Terraform: Convert String to List (join function) appeared first on Build5Nines. View the full article
  7. In this article, I will demonstrate how to monitor Ansible Automation Platform (AAP) running on OpenShift, using user workload monitoring with Prometheus and Grafana... View the full article
  8. Infrastructure as Code (IaC) has revolutionized the way organizations provision and manage their infrastructure. By defining infrastructure through code, IaC offers automation, scalability, and consistency benefits. However, this newfound agility also brings security challenges. IaC security scanning is a critical practice that helps organizations identify and mitigate potential vulnerabilities in their infrastructure code. In this guide, we'll explore the importance of IaC security scanning, its benefits, best practices, and available tools. Click Here To Read More
  9. Managing sensitive information, such as API keys, database passwords, or encryption keys, is a critical aspect of infrastructure and application security. AWS Secrets Manager is a service that helps you protect and manage your application's secrets, and Terraform is a powerful tool for provisioning and managing infrastructure. In this guide, we'll explore how to retrieve secrets from AWS Secrets Manager and use them securely in your Terraform configurations (a minimal sketch appears after this list). Click Here To Read More
  10. AWS HealthImaging now supports AWS CloudFormation. With CloudFormation support, you can now use CloudFormation templates to create and delete your AWS HealthImaging resources. This helps you automate and standardize DevOps processes across your AWS accounts and AWS Regions for AWS HealthImaging. View the full article
  11. There are countless ways to deploy and operate enterprise applications today, and every team has their preferred toolstack. Standardizing around consistent deployment patterns can reduce operational costs by focusing skill sets on a smaller scope of technologies, and reduce operational complexity by allowing teams to re-use peripheral tools and workflows, such as monitoring tools and security scanning processes, across all internal applications. With that in mind, starting with v202309-1, HashiCorp has made operating Terraform Enterprise more flexible than ever. We are excited to announce that Terraform Enterprise now supports two new deployment options: Docker Engine and cloud-managed Kubernetes services (Amazon EKS, Microsoft Azure AKS, and Google Cloud GKE). This allows customers with a preference for Docker or Kubernetes to follow industry-standard patterns for deploying applications in these environments, and simplify overall operation of Terraform Enterprise. These new deployment options are enabled by the new simplified single-container architecture first introduced in Terraform Enterprise v202306-1 and enabled by default on v202309-1... View the full article
  12. Since the beginning of the project, HashiCorp Packer has supported extending its capabilities through plugins. These plugins are built alongside community contributors and partners to help Packer support building images for many cloud providers and hypervisors. In the past, to help Packer users get up and running quickly, popular plugins were bundled into the main Packer binary. This had advantages, notably that users did not have to install plugins separately in order to use them. However, as the plugin system grew, bundling all plugins introduced maintenance issues... View the full article
  13. Terraform is widely known for its ability to efficiently create, manage, and update infrastructure resources across cloud providers and on-premises environments. It provides the ability to create resources that depend on each other, and the depends_on meta-argument is a helpful feature for implementing such relationships in a systematic way (a minimal sketch appears after this list). This blog covers what Terraform depends_on is, its syntax, the best use cases, and the best practices to follow... View the full article
  14. HashiCorp and Microsoft have partnered to create Terraform modules that follow Microsoft's Azure Well-Architected Framework and best practices. In previous blog posts, we’ve demonstrated how to build a secure Azure reference architecture and deploy securely into Azure with HashiCorp Terraform and Vault, as well as how to manage post-deployment operations. This post looks at how HashiCorp and Microsoft have created building blocks that allow you to repeatedly, securely, and cost-effectively accelerate AI adoption on Azure with Terraform. Specifically, it covers how to do this by using Terraform to provision Azure OpenAI services... View the full article
  15. Ansible is an open-source software provisioning, configuration management, and application deployment tool. It uses a declarative language to describe the system's desired state and leverages SSH to communicate with the target machines and enforce that state on them. Environment variables are some of the most useful tools in dynamic environments. But what exactly are environment variables? They are key-value pairs that can influence the behavior of the tools that run in a system. They act as global settings that allow different processes to communicate and configure their behavior without modifying the code. This tutorial shows how to work with environment variables in Ansible using various methods and techniques... View the full article
  16. We're excited to announce the version 2.0.0 release of the Packer Azure plugin, which enables users to build Azure virtual hard disks, managed images, and Compute Gallery (shared image gallery) images. The plugin is one of the most popular ways to build Azure Virtual Machine images and is used by Microsoft Azure via the Azure Image Builder. For the past year, we have been tracking the changes to the Azure SDKs and keeping our eyes on the upcoming deprecations, which were sure to disrupt how Packer interacts with Azure. When we found that the version of the Azure SDK the Packer plugin was using would soon be deprecated, we began work to migrate to the Terraform-tested HashiCorp Go Azure SDK. The HashiCorp Go Azure SDK is generated from and based on the Azure API definitions to provide parity with the official Azure SDK, making it a near drop-in replacement with the ability to resolve issues around auto-rest, polling, and API versioning. Version 2.0.0 of the Packer Azure plugin addresses the known deprecations with minimal disruption to the user, introduces new highly requested features, and combines the stability of the Packer Azure plugin with the Terraform Azure provider... View the full article
  17. Google Cloud's flagship cloud conference, Google Cloud Next, is back and once again HashiCorp will be there in full force. (Although the conference passes are sold out, you can still watch all the great sessions from Next '23 on demand, at your convenience, with a free Digital Pass.) For both in-person and remote attendees, we're pleased to share the latest news on our long-standing relationship with Google Cloud and how we help organizations provision, secure, run, and connect applications running in Google Cloud. In this post, we'll share some highlights of our partnership and our plans for the event, Tuesday through Thursday, Aug. 29 - 31, in San Francisco. HashiCorp-Google Cloud developments this year include:

    • Google Cloud provider for Terraform passes 350 million downloads
    • Control and secure Terraform workflows on Google Cloud with dynamic provider credentials
    • Validate the health of Google Cloud infrastructure via continuous validation
    • Automate Terraform Cloud from Google Kubernetes Engine (GKE)
    • Create self-hosted Terraform Cloud agents on Google Cloud
    • Manage Google Cloud resources with Terraform and Infrastructure Manager
    • Automate networking across Google Cloud runtimes with HashiCorp Consul and Apigee

Google Cloud provider for Terraform surpasses 350 million downloads

As of the publication of this post, the download count for the Google Cloud Platform provider for Terraform stands at 359 million downloads, half of which occurred in the past 12 months. While hundreds of millions of downloads represent a major milestone, Google Cloud and HashiCorp continue to develop new integrations to help customers work faster, use more services and features, and provide developer-friendly ways to deploy cloud infrastructure.

Control and secure Terraform workflows on Google Cloud with dynamic provider credentials

Terraform Cloud's dynamic provider credentials let you establish a trust relationship between Terraform Cloud and Google Cloud. They limit the blast radius of compromised credentials by using unique, short-lived credentials for each Terraform run. Dynamic provider credentials also allow you to scope fine-grained control over the resources that each of your Terraform Cloud projects and workspaces can manage. When you use dynamic provider credentials, Terraform Cloud begins each run by authenticating with Google Cloud, passing it details about the workload, including your organization and workspace name. Your cloud provider then responds with temporary credentials that Terraform Cloud uses to provision your resources for the run. This workflow is based on the OpenID Connect (OIDC) protocol, an open standard for verifying identity across different systems. You can use Terraform Cloud's native OIDC integration with Google Cloud to get dynamic credentials for the Google provider in your Terraform Cloud runs. To get started, learn how to configure dynamic credentials with the Google Cloud provider.

Validate the health of Google Cloud infrastructure via continuous validation

The continuous validation feature in Terraform Cloud allows users to validate the health of their infrastructure beyond the initial provisioning. This helps users identify issues when they first appear and avoid situations where a change is noticed only once it causes a customer-facing problem. Users can add checks to their Terraform configuration using check blocks. Check blocks contain assertions that are defined with a custom condition expression and an error message. When the condition expression evaluates to true the check passes, but when the expression evaluates to false Terraform shows a warning message that includes the user-defined error message. Custom conditions can be created using data from Terraform providers' resources and data sources. Data can also be combined from multiple sources; for example, you can use checks to monitor expirable resources by comparing a resource's expiration date attribute to the current time returned by Terraform's built-in time functions. This guide provides multiple use cases of how to use Terraform check blocks and continuous validation with Google Cloud (a minimal check-block sketch appears after this list).

Automate Terraform Cloud from Google Kubernetes Engine (GKE)

The Terraform Cloud Operator for Kubernetes provides first-class integration between Kubernetes and Terraform Cloud by extending the Kubernetes control plane to enable lifecycle management of cloud and on-premises infrastructure. The operator provides a unified way to manage a Kubernetes application and its infrastructure dependencies through a single Kubernetes CustomResourceDefinition (CRD). After the infrastructure dependencies are created, pertinent information such as endpoints and credentials is returned from Terraform Cloud to Kubernetes. The Terraform Cloud Operator for Kubernetes helps automate the provisioning of infrastructure from Google Kubernetes Engine (GKE) and lets users manage Terraform Cloud with Kubernetes custom resources.

Create self-hosted Terraform Cloud agents on Google Cloud

Terraform Cloud agents allow Terraform Cloud to communicate with isolated, private, or on-premises infrastructure. By deploying lightweight agents within a specific network segment, you can establish a simple connection between your environment and Terraform Cloud that allows for provisioning operations and management. Google Cloud Terraform Cloud agents are Terraform modules that create self-hosted agents on Google Cloud. Using these Terraform modules, you can now quickly create and deploy agent pools for your Terraform Cloud workflows on Google Cloud. Google Cloud agents are available in the Terraform Registry now and include:

    • Terraform Cloud agents on Google Kubernetes Engine (GKE)
    • Managed instance groups using virtual machines
    • Instance groups using container virtual machines

Manage Google Cloud resources with Terraform and Infrastructure Manager

Google Cloud Infrastructure Manager (Infra Manager) automates the deployment and management of Google Cloud infrastructure resources using Terraform. Infra Manager allows you to use infrastructure as code to manage the lifecycle of Google Cloud resources. Infrastructure is defined declaratively in a Terraform blueprint that describes the end state of your infrastructure. You can version the Terraform blueprint, either in a public Git repository or in a Cloud Storage bucket, and use object versioning to version blueprints. To learn more, check out the newly published Terraform and Infrastructure Manager guide.

Automate networking across Google Cloud runtimes with Consul and Apigee

HashiCorp Consul is how teams automate networking across Google Cloud runtimes. Consul now includes several extensions for Envoy. Consul's Envoy extension capability allows operators to offload service-to-service authorization to external tools and platforms. This opens up more options for authorizing traffic based on additional conditions, such as allowing or denying requests based on business hours. Apigee's AuthZ integration is an example of using the external AuthZ extension. Apigee's implementation requires an API key to be passed between services in order to allow traffic. You can get started with the Consul AuthZ - Apigee repo. If you are using Apigee today or considering an Apigee deployment, check out how to use the Apigee Adapter for Envoy with an Apigee hybrid deployment.

Join us at Google Cloud Next and learn what's next after Next

If you are attending Google Next in person or following along virtually, please sign up for the Seamless Infrastructure Deployment and Management with Terraform session, where HashiCorp and Google will cover why Terraform is an integral component of many teams' infrastructure and applications on Google Cloud. The talk will focus on how Terraform is used to build and operate resources as infrastructure as code. You'll see Google Cloud projects that use Terraform as their foundation and learn Google's guidance on using Terraform to deliver the best user experience, time to value, and efficiency for Google Cloud customers. The session takes place on day two of Google Next, August 30th at 8 a.m. PT. And if you are on site, be sure to join us at booth #1645 for demos, meet 1:1 with our technical experts to learn more about our product suite, and check out the latest on HashiCorp integrations with Google Cloud. After Google Next, join our upcoming webinar series covering Google Cloud projects that use Terraform as their foundation, with guidance on using Terraform to deliver the best user experience, time to value, and efficiency. View the full article
  18. On August 10, 2023, HashiCorp announced that after ~9 years of Terraform being open source under the MPL v2 license, they were suddenly switching it to the non-open-source BSL v1.1 license. We believe the BSL license is a poison pill for Terraform that threatens the entire community and ecosystem, and in this blog post we'll introduce OpenTF, our plan for keeping Terraform open source, forever... Full blog post: https://blog.gruntwork.io/the-future-of-terraform-must-be-open-ab0b9ba65bca
  19. A community that embraced open source Terraform software has called upon HashiCorp to reverse license changes that limit software usage. View the full article
  20. In this workshop, you will learn the fundamentals of infrastructure-as-code through guided exercises. You will be introduced to Pulumi, an infrastructure-as-code platform, where you can use familiar programming languages to provision modern cloud infrastructure. This workshop is designed to help new users become familiar with the core concepts needed to effectively deploy resources on AWS. […] View the full article
  21. In this workshop, you will learn the fundamentals of infrastructure-as-code through guided exercises. You will be introduced to Pulumi, an infrastructure-as-code platform, where you can use familiar programming languages to provision modern cloud infrastructure. This workshop is designed to help new users become familiar with the core concepts needed to effectively deploy resources on AWS. […] View the full article
  22. WHY ANSIBLE?

Working in IT, you're likely doing the same tasks over and over. What if you could solve problems once and then automate your solutions going forward? Ansible is here to help.

COMPLEXITY KILLS PRODUCTIVITY

Every business is a digital business. Technology is your innovation engine, and delivering your applications faster helps you win. Historically, that required a lot of manual effort and complicated coordination. But today, there is Ansible: the simple, yet powerful IT automation engine that thousands of companies are using to drive complexity out of their environments and accelerate DevOps initiatives.

ANSIBLE LOVES THE REPETITIVE WORK YOUR PEOPLE HATE

No one likes repetitive tasks. With Ansible, IT admins can begin automating away the drudgery from their daily tasks. Automation frees admins up to focus on efforts that help deliver more value to the business by speeding time to application delivery and building on a culture of success. Ultimately, Ansible gives teams the one thing they can never get enough of: time, allowing smart people to focus on smart things. Ansible is a simple automation language that can perfectly describe an IT application infrastructure. It's easy to learn, self-documenting, and doesn't require a grad-level computer science degree to read. Automation shouldn't be more complex than the tasks it's replacing.
  23. In the context of cloud operations, the last decade was ruled by DevOps and infrastructure-as-code (IaC). But what is true DevOps? Is it developers running their own operations? Is it operators learning development skills to automate infrastructure provisioning using infrastructure-as-code (IaC) tools like Terraform? Though DevOps is widely understood to incorporate both of these, software […] The post Infrastructure-as-Code Pitfalls in Platform Engineering appeared first on DevOps.com. View the full article
  24. As a DevOps Engineer or Site Reliability Engineer (SRE), managing cloud infrastructure deployments is a critical aspect of your daily activities. It is vital to use tools that automate the provisioning and configuration of cloud infrastructure to achieve efficient and scalable infrastructure management. One of the best tools for this is HashiCorp Terraform, and as […] The article Why HashiCorp Terraform is Essential for SREs and DevOps Engineers appeared first on Build5Nines. View the full article
  25. HashiCorp Terraform is great for deploying any Microsoft Azure resource, and the same applies to deploying serverless compute with Azure Function Apps in the Microsoft Azure cloud (a minimal Terraform sketch appears after this list). Azure Function Apps are a very heavily used compute service in Microsoft Azure, and one that is in high demand for deployment automation by DevOps Engineers and Site […] The article Terraform: Deploy Azure Function App with Consumption Plan appeared first on Build5Nines. View the full article
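
The Terraform HCL example in item 3 describes triggering CodeWhisperer with a comment prompt such as "create a VPC with public and private subnets". The snippet below is not CodeWhisperer's verbatim suggestion, just a minimal hand-written sketch of the kind of HCL such a prompt typically produces; the CIDR ranges and names are illustrative assumptions.

```hcl
# create a VPC with a public and a private subnet
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = { Name = "example-vpc" } # hypothetical name
}

# Public subnet: instances launched here receive public IP addresses
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# Private subnet: no public IPs, reachable only from inside the VPC
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}
```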
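
For item 6, a minimal sketch of converting a Terraform list to a string with the built-in join() function; the variable name and values are hypothetical.

```hcl
variable "availability_zones" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

output "availability_zones_csv" {
  # join(separator, list) concatenates every element into a single string
  value = join(",", var.availability_zones) # => "us-east-1a,us-east-1b,us-east-1c"
}
```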
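
For item 9, a minimal sketch of reading a secret from AWS Secrets Manager with the AWS provider's data sources; the secret name and its JSON layout are assumptions for illustration.

```hcl
# Look up the current version of an existing secret
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/app/db-credentials" # hypothetical secret name
}

locals {
  # Assumes the secret stores a JSON object like {"username": "...", "password": "..."}
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)
}

# Reference local.db_creds.username and local.db_creds.password in resources
# (for example an aws_db_instance) instead of hard-coding credentials.
```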
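
For item 13, a minimal sketch of the depends_on meta-argument, expressing an ordering Terraform cannot infer from references alone; the AMI ID and bucket name are placeholders.

```hcl
resource "aws_s3_bucket" "assets" {
  bucket = "example-app-assets-0001" # hypothetical, must be globally unique
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t3.micro"

  # The instance's boot scripts fetch objects from the bucket, a relationship
  # Terraform cannot see in the configuration, so it is declared explicitly.
  depends_on = [aws_s3_bucket.assets]
}
```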
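
For the continuous-validation section of item 17, a minimal sketch of a Terraform check block (Terraform 1.5+), assuming the hashicorp/http provider v3 or later; the URL is a placeholder.

```hcl
check "app_health" {
  # Scoped data source: evaluated on plan/apply and during health assessments
  data "http" "health" {
    url = "https://app.example.com/healthz" # hypothetical endpoint
  }

  assert {
    condition     = data.http.health.status_code == 200
    error_message = "The application health endpoint did not return HTTP 200."
  }
}
```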
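
For item 25, a minimal sketch of an Azure Function App on a Consumption plan with Terraform, assuming the azurerm provider 3.x; all names and the region are placeholders.

```hcl
resource "azurerm_resource_group" "fn" {
  name     = "rg-func-demo" # hypothetical
  location = "westeurope"
}

resource "azurerm_storage_account" "fn" {
  name                     = "stfuncdemo0001" # hypothetical, must be globally unique
  resource_group_name      = azurerm_resource_group.fn.name
  location                 = azurerm_resource_group.fn.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_service_plan" "fn" {
  name                = "plan-func-demo"
  resource_group_name = azurerm_resource_group.fn.name
  location            = azurerm_resource_group.fn.location
  os_type             = "Linux"
  sku_name            = "Y1" # Y1 = Consumption (pay-per-execution) plan
}

resource "azurerm_linux_function_app" "fn" {
  name                       = "func-demo-0001" # hypothetical
  resource_group_name        = azurerm_resource_group.fn.name
  location                   = azurerm_resource_group.fn.location
  service_plan_id            = azurerm_service_plan.fn.id
  storage_account_name       = azurerm_storage_account.fn.name
  storage_account_access_key = azurerm_storage_account.fn.primary_access_key

  site_config {}
}
```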