Showing results for tags 'ci/cd'.

  1. (Graphic created by Kevon Mayers)

Introduction

Organizations often use Terraform modules to orchestrate complex resource provisioning and provide a simple interface for developers to enter the required parameters to deploy the desired infrastructure. Modules enable code reuse and give organizations a method to standardize deployment of common workloads such as a three-tier web application, a cloud networking environment, or a data analytics pipeline.

When building Terraform modules, it is common for the module author to start with manual testing. Manual testing is performed using commands such as terraform validate for syntax validation, terraform plan to preview the execution plan, and terraform apply followed by manual inspection of the resource configuration in the AWS Management Console. Manual testing is prone to human error, does not scale, and can result in unintended issues. Because modules are used by multiple teams in the organization, it is important to ensure that any changes to the modules are extensively tested before release. In this blog post, we will show you how to validate Terraform modules and how to automate the process using a Continuous Integration/Continuous Deployment (CI/CD) pipeline.

Terraform Test

Terraform test is a new testing framework that lets module authors perform unit and integration tests for Terraform modules. Terraform test can create infrastructure as declared in the module, run validations against that infrastructure, and destroy the test resources regardless of whether the test passes or fails. Terraform test will also warn you if any resources cannot be destroyed. Terraform test uses the same HashiCorp Configuration Language (HCL) syntax used to write Terraform modules, which reduces the burden on module authors to learn other tools or programming languages. Module authors run the tests using the terraform test command, which is available in Terraform CLI version 1.6 or higher.
Module authors create test files with the extension *.tftest.hcl. These test files are placed in the root of the Terraform module or in a dedicated tests directory. The following elements are typically present in a Terraform test file:

  • Provider block: optional, used to override the provider configuration, such as selecting the AWS region where the tests run.
  • Variables block: the input variables passed into the module during the test, used to supply non-default values or to override default values for variables.
  • Run block: used to run a specific test scenario. There can be multiple run blocks per test file; Terraform executes run blocks in order. In each run block you specify the Terraform command (plan or apply) and the test assertions. Module authors can specify conditions such as length(var.items) != 0. A full list of condition expressions can be found in the HashiCorp documentation.

Terraform tests are performed in sequential order, and at the end of the test execution any failed assertions are displayed.

Basic test to validate resource creation

Now that we understand the basic anatomy of a Terraform test file, let's create a basic test to validate the functionality of the following Terraform configuration, which creates an AWS CodeCommit repository with the name prefix repo-.

```hcl
# main.tf
variable "repository_name" {
  type = string
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("repo-%s", var.repository_name)
  description     = "Test repository."
}
```

Now we create a Terraform test file in the tests directory. See the following directory structure as an example:

├── main.tf
└── tests
    └── basic.tftest.hcl

For this first test, we will not perform any assertions beyond validating that the Terraform execution plan runs successfully. In the test file, we create a variables block to set the value for the variable repository_name.
We also add a run block with command = plan to instruct Terraform test to run terraform plan. The completed test should look like the following:

```hcl
# basic.tftest.hcl
variables {
  repository_name = "MyRepo"
}

run "test_resource_creation" {
  command = plan
}
```

Now we will run this test locally. First, ensure that you are authenticated into an AWS account, and run the terraform init command in the root directory of the Terraform module. After the provider is initialized, start the test using the terraform test command.

❯ terraform test
tests/basic.tftest.hcl... in progress
  run "test_resource_creation"... pass
tests/basic.tftest.hcl... tearing down
tests/basic.tftest.hcl... pass

Our first test is complete: we have validated that the Terraform configuration is valid and the resource can be provisioned successfully. Next, let's learn how to inspect the resource state.

Create resource and validate resource name

Re-using the previous test file, we add an assert block to check whether the CodeCommit repository name starts with the string repo-, and provide an error message if the condition fails. For the assertion, we use the startswith function. See the following example:

```hcl
# basic.tftest.hcl
variables {
  repository_name = "MyRepo"
}

run "test_resource_creation" {
  command = plan

  assert {
    condition     = startswith(aws_codecommit_repository.test.repository_name, "repo-")
    error_message = "CodeCommit repository name ${var.repository_name} did not start with the expected value of 'repo-****'."
  }
}
```

Now, let's assume that another module author modified the module by changing the prefix from repo- to my-repo-. Here is the modified Terraform module:

```hcl
# main.tf
variable "repository_name" {
  type = string
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("my-repo-%s", var.repository_name)
  description     = "Test repository."
}
```

We can catch this mistake by running the terraform test command again.

❯ terraform test
tests/basic.tftest.hcl... in progress
  run "test_resource_creation"... fail
╷
│ Error: Test assertion failed
│
│   on tests/basic.tftest.hcl line 9, in run "test_resource_creation":
│    9:     condition = startswith(aws_codecommit_repository.test.repository_name, "repo-")
│     ├────────────────
│     │ aws_codecommit_repository.test.repository_name is "my-repo-MyRepo"
│
│ CodeCommit repository name MyRepo did not start with the expected value 'repo-****'.
╵
tests/basic.tftest.hcl... tearing down
tests/basic.tftest.hcl... fail

Failure! 0 passed, 1 failed.

We have successfully created a unit test that uses assertions to validate that the resource name matches the expected value. For more examples of using assertions, see the Terraform Tests documentation. Before we proceed to the next section, don't forget to fix the repository name in the module (revert the prefix back to repo- instead of my-repo-) and re-run your Terraform test.

Testing variable input validation

When developing Terraform modules, it is common to use variable validation as a contract test to validate dependencies and restrictions. For example, AWS CodeCommit limits the repository name to 100 characters. A module author can use the length function to check the length of the input variable value. We are going to use Terraform test to ensure that the variable validation works effectively. First, we modify the module to use variable validation:

```hcl
# main.tf
variable "repository_name" {
  type = string

  validation {
    condition     = length(var.repository_name) <= 100
    error_message = "The repository name must be less than or equal to 100 characters."
  }
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("repo-%s", var.repository_name)
  description     = "Test repository."
}
```

By default, when variable validation fails during the execution of Terraform test, the test also fails. To simulate this, create a new test file and set the repository_name variable to a value longer than 100 characters.
```hcl
# var_validation.tftest.hcl
variables {
  repository_name = "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy"
}

run "test_invalid_var" {
  command = plan
}
```

Notice that in this new test file we also set the command to plan. Why is that? Because variable validation runs prior to terraform apply, we can save time and cost by skipping resource provisioning entirely. If we run this Terraform test, it will fail as expected.

❯ terraform test
tests/basic.tftest.hcl... in progress
  run "test_resource_creation"... pass
tests/basic.tftest.hcl... tearing down
tests/basic.tftest.hcl... pass
tests/var_validation.tftest.hcl... in progress
  run "test_invalid_var"... fail
╷
│ Error: Invalid value for variable
│
│   on main.tf line 1:
│    1: variable "repository_name" {
│     ├────────────────
│     │ var.repository_name is "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy"
│
│ The repository name must be less than or equal to 100 characters.
│
│ This was checked by the validation rule at main.tf:3,3-13.
╵
tests/var_validation.tftest.hcl... tearing down
tests/var_validation.tftest.hcl... fail

Failure! 1 passed, 1 failed.

For other module authors who might iterate on the module, we need to ensure that the validation condition is correct and will catch any problems with input values. In other words, we expect the validation condition to fail with the wrong input. This is especially important when we want to incorporate the contract test in a CI/CD pipeline. To prevent our test from failing when we intentionally supply an invalid input, we can use the expect_failures attribute.
Here is the modified test file:

```hcl
# var_validation.tftest.hcl
variables {
  repository_name = "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy"
}

run "test_invalid_var" {
  command = plan

  expect_failures = [
    var.repository_name
  ]
}
```

Now if we run the Terraform test, we will get a successful result.

❯ terraform test
tests/basic.tftest.hcl... in progress
  run "test_resource_creation"... pass
tests/basic.tftest.hcl... tearing down
tests/basic.tftest.hcl... pass
tests/var_validation.tftest.hcl... in progress
  run "test_invalid_var"... pass
tests/var_validation.tftest.hcl... tearing down
tests/var_validation.tftest.hcl... pass

Success! 2 passed, 0 failed.

As you can see, the expect_failures attribute is used to test negative paths (inputs that should cause failures when passed into a module), while assertions tend to focus on positive paths (the ideal inputs). For an additional example of a test that validates the functionality of a completed module with multiple interconnected resources, see this example in the Terraform CI/CD and Testing on AWS Workshop.

Orchestrating supporting resources

In practice, end users utilize Terraform modules in conjunction with other supporting resources. For example, a CodeCommit repository is usually encrypted using an AWS Key Management Service (KMS) key. The KMS key is provided by end users to the module using a variable called kms_key_id. To simulate this, we need to orchestrate the creation of the KMS key outside of the module. In this section we will learn how to do that. First, update the Terraform module to add an optional variable for the KMS key:

```hcl
# main.tf
variable "repository_name" {
  type = string

  validation {
    condition     = length(var.repository_name) <= 100
    error_message = "The repository name must be less than or equal to 100 characters."
  }
}

variable "kms_key_id" {
  type    = string
  default = ""
}

resource "aws_codecommit_repository" "test" {
  repository_name = format("repo-%s", var.repository_name)
  description     = "Test repository."
  kms_key_id      = var.kms_key_id != "" ? var.kms_key_id : null
}
```

In a Terraform test, you can instruct a run block to execute another helper module. The helper module is used by the test to create the supporting resources. We will create a sub-directory called setup under the tests directory with a single kms.tf file, and a new test file for the KMS scenario. See the updated directory structure:

├── main.tf
└── tests
    ├── setup
    │   └── kms.tf
    ├── basic.tftest.hcl
    ├── var_validation.tftest.hcl
    └── with_kms.tftest.hcl

The kms.tf file is a helper module that creates a KMS key and provides its ARN as an output value.

```hcl
# kms.tf
resource "aws_kms_key" "test" {
  description             = "test KMS key for CodeCommit repo"
  deletion_window_in_days = 7
}

output "kms_key_id" {
  value = aws_kms_key.test.arn
}
```

The new test uses two separate run blocks. The first run block (setup) executes the helper module to generate a KMS key; this is done by assigning command = apply, which runs terraform apply to create the key. The second run block (codecommit_with_kms) then uses the KMS key ARN output of the first run as the input variable passed to the main module.

```hcl
# with_kms.tftest.hcl
run "setup" {
  command = apply

  module {
    source = "./tests/setup"
  }
}

run "codecommit_with_kms" {
  command = apply

  variables {
    repository_name = "MyRepo"
    kms_key_id      = run.setup.kms_key_id
  }

  assert {
    condition     = aws_codecommit_repository.test.kms_key_id != null
    error_message = "KMS key ID attribute value is null"
  }
}
```

Go ahead and run terraform init, followed by terraform test. You should get a successful result like the one below.

❯ terraform test
tests/basic.tftest.hcl... in progress
  run "test_resource_creation"... pass
tests/basic.tftest.hcl... tearing down
tests/basic.tftest.hcl... pass
tests/var_validation.tftest.hcl... in progress
  run "test_invalid_var"... pass
tests/var_validation.tftest.hcl... tearing down
tests/var_validation.tftest.hcl... pass
tests/with_kms.tftest.hcl... in progress
  run "setup"... pass
  run "codecommit_with_kms"... pass
tests/with_kms.tftest.hcl... tearing down
tests/with_kms.tftest.hcl... pass

Success! 4 passed, 0 failed.

We have learned how to run Terraform test and develop various test scenarios. In the next section we will see how to incorporate all the tests into a CI/CD pipeline.

Terraform Tests in CI/CD Pipelines

Now that we have seen how Terraform test works locally, let's see how it can be leveraged to create a Terraform module validation pipeline on AWS. The following AWS services are used:

  • AWS CodeCommit – a secure, highly scalable, fully managed source control service that hosts private Git repositories.
  • AWS CodeBuild – a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.
  • AWS CodePipeline – a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
  • Amazon Simple Storage Service (Amazon S3) – an object storage service offering industry-leading scalability, data availability, security, and performance.

Terraform module validation pipeline

In the above architecture for a Terraform module validation pipeline, the following takes place:

  1. A developer pushes Terraform module configuration files to a Git repository (AWS CodeCommit).
  2. AWS CodePipeline begins running the pipeline. The pipeline clones the Git repo and stores the artifacts in an Amazon S3 bucket.
  3. An AWS CodeBuild project configures a compute/build environment with Checkov installed from an image fetched from Docker Hub. CodePipeline passes the artifacts (the Terraform module), and CodeBuild executes Checkov to run static analysis of the Terraform configuration files.
  4. Another CodeBuild project is configured with Terraform from an image fetched from Docker Hub. CodePipeline passes the artifacts (the repo contents), and CodeBuild runs the Terraform commands to execute the tests.

CodeBuild uses a buildspec file to declare the build commands and relevant settings. Here is an example of the buildspec files for both CodeBuild projects:

```yaml
# Checkov
version: 0.1

phases:
  pre_build:
    commands:
      - echo pre_build starting
  build:
    commands:
      - echo build starting
      - echo starting checkov
      - ls
      - checkov -d .
      - echo saving checkov output
      - checkov -s -d ./ > checkov.result.txt
```

In the above buildspec, Checkov is run against the root directory of the cloned CodeCommit repository, which contains the configuration files for the Terraform module. Checkov also saves the output to a file named checkov.result.txt for further review or handling if needed. If Checkov fails, the pipeline will fail.

```yaml
# Terraform Test
version: 0.1

phases:
  pre_build:
    commands:
      - terraform init
      - terraform validate
  build:
    commands:
      - terraform test
```

In the above buildspec, the terraform init and terraform validate commands initialize Terraform and check that the configuration is valid. Finally, the terraform test command runs the configured tests. If any of the Terraform tests fail, the pipeline will fail.

For a full example of the CI/CD pipeline configuration, please refer to the Terraform CI/CD and Testing on AWS Workshop. The module validation pipeline described above is meant as a starting point. In a production environment, you might want to customize it further by adding Checkov allow-list rules, linting, checks for Terraform docs, or prerequisites such as building the code used in AWS Lambda.
Choosing various testing strategies

At this point you may be wondering when you should use Terraform test versus other tools such as preconditions and postconditions, check blocks, or policy as code. The answer depends on your test types and use cases. Terraform test is suitable for unit tests, such as validating that resources are created according to the naming specification. Variable validations and pre/post conditions are useful for contract tests of Terraform modules, for example by returning an error message when input variable values do not meet the specification. As shown in the previous section, you can also use Terraform test to ensure your contract tests are working properly. Terraform test is also suitable for integration tests where you need to create supporting resources to properly test the module functionality. Lastly, check blocks are suitable for end-to-end tests where you want to validate the infrastructure state after all resources are created, for example to test whether a website is running after an S3 bucket configured for static web hosting is created.

When developing Terraform modules, you can run Terraform test in command = plan mode for unit and contract tests. This allows the unit and contract tests to run quicker and cheaper, since no resources are created. You should also consider the time and cost of executing Terraform test for complex or large Terraform configurations, especially if you have multiple test scenarios. Terraform test maintains one or more state files in memory for each test file, so consider how to re-use the module's state when appropriate. Terraform test also provides test mocking, which allows you to test your module without creating the real infrastructure.

Conclusion

In this post, you learned how to use Terraform test and develop various test scenarios. You also learned how to incorporate Terraform test into a CI/CD pipeline. Lastly, we discussed various testing strategies for Terraform configurations and modules.
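As a sketch of the mocking capability mentioned above: a mock_provider block in a test file lets run blocks apply against generated placeholder values instead of real AWS resources. This requires Terraform v1.7 or later, and the file below is a hypothetical addition to the module's tests directory, not part of the original walkthrough:

```hcl
# mocked.tftest.hcl — hypothetical example; requires Terraform v1.7+.
# The mocked "aws" provider returns synthetic values for computed
# attributes, so `command = apply` creates no real infrastructure.
mock_provider "aws" {}

variables {
  repository_name = "MyRepo"
}

run "test_resource_creation_mocked" {
  command = apply

  assert {
    # repository_name is derived from the input variable, so it is
    # computed accurately even with a mocked provider.
    condition     = startswith(aws_codecommit_repository.test.repository_name, "repo-")
    error_message = "CodeCommit repository name did not start with the expected prefix 'repo-'."
  }
}
```

Mocked runs are fast and free, which makes them a good fit for the unit-test stage of a pipeline; keep at least one unmocked integration test so real provider behavior is still exercised before release.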
For more information about Terraform test, we recommend the Terraform test documentation and tutorial. To get hands-on practice building a Terraform module validation pipeline and a Terraform deployment pipeline, check out the Terraform CI/CD and Testing on AWS Workshop.

Authors

Kevon Mayers – Kevon Mayers is a Solutions Architect at AWS. Kevon is a Terraform contributor and has led multiple Terraform initiatives within AWS. Prior to joining AWS he worked as a DevOps engineer and developer, and before that with the GRAMMYs/The Recording Academy as a studio manager, music producer, and audio engineer. He also owns a professional production company, MM Productions.

Welly Siauw – Welly Siauw is a Principal Partner Solution Architect at Amazon Web Services (AWS). He spends his days working with customers and partners, solving architectural challenges. He is passionate about service integration and orchestration, serverless, and artificial intelligence (AI) and machine learning (ML). He has authored several AWS blog posts and actively leads AWS Immersion Days and Activation Days. Welly spends his free time tinkering with espresso machines and hiking outdoors.

View the full article
  2. Good software engineering teams commit frequently and deploy frequently. Those are some of the main ideas behind continuous integration (CI) and continuous deployment (CD). Gone are the days of quarterly or yearly releases and long-lived feature branches! Today, we’ll show you how you can deploy your Heroku app automatically any time code is merged into your main branch by using GitLab CI/CD. View the full article
  3. The continuous integration/continuous delivery (CI/CD) pipeline represents the steps new software goes through before release. However, it can contain numerous vulnerabilities for hackers to exploit. 1. Vulnerabilities in the Code Many software releases get completed on such tight time frames that developers don’t have enough time to ensure the code is secure. Company leaders know frequent software updates tend to keep customers happy and can give people the impression that a business is on the cutting edge of technology. However, rushing new releases can have disastrous consequences that give hackers easy entry for wreaking havoc. View the full article
  4. CircleCI added a release orchestration capability to its namesake CI/CD platform to give developers more control over app deployments. View the full article
  5. Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. Forbes estimates that cloud budgets will break all previous records as businesses will spend over $1 trillion on cloud computing infrastructure in 2024. Since most application releases depend on cloud infrastructure, having good continuous integration and continuous delivery (CI/CD) pipelines and end-to-end observability becomes essential for ensuring highly available systems. By integrating observability tools in CI/CD pipelines, organizations can increase deployment frequency, minimize risks, and build highly available systems. Complementing these practices is site reliability engineering (SRE), a discipline ensuring system reliability, performance, and scalability. View the full article
  6. Implementing Continuous Integration/Continuous Deployment (CI/CD) for a Python application using Django involves several steps to automate testing and deployment processes. This guide will walk you through setting up a basic CI/CD pipeline using GitHub Actions, a popular CI/CD tool that integrates seamlessly with GitHub repositories. Step 1: Setting up Your Django Project Ensure your Django project is in a Git repository hosted on GitHub. This repository will be the basis for setting up your CI/CD pipeline. View the full article
  7. Walrus file is a new feature released in Walrus 0.5. It allows you to describe applications and configure infrastructure resources using a concise YAML. You can then execute walrus apply in the Walrus CLI or import it on the Walrus UI. This will submit the Walrus file to the Walrus server, which will deploy, configure, and manage applications and infrastructure resources. This makes it easy to reuse them across multiple environments. View the full article
  8. In the previous blog post of this series, we discussed the crucial role of Dynatrace as an orchestrator that steps in to stop the testing phase in case of any errors. Additionally, Dynatrace equips SREs and application teams with valuable insights powered by Davis® AI. In this blog post of the series, we will explore the use of Site Reliability Guardian (SRG) in more detail. SRG is a potent tool that automates the analysis of release impacts, ensuring validation of service availability, performance, and capacity objectives throughout the application ecosystem by examining the effect of advanced test suites executed earlier in the testing phase.

Validation stage overview

The validation stage is a crucial step in the CI/CD (Continuous Integration/Continuous Deployment) process. It involves carefully examining the test results from the previous testing phase. The main goal of this stage is to identify and address any issues or problems that were detected. Doing so reduces the risk of production disruptions and instills confidence in both SREs (Site Reliability Engineers) and end users. Depending on the outcome of the examination, the build is either approved for deployment to the production environment or rejected.

Challenges of the validation stage

In the validation phase, SREs face specific challenges that significantly slow down the CI/CD pipeline. Foremost among these is the complexity associated with data gathering and analysis. The burgeoning reliance on cloud technology stacks amplifies this challenge, creating hurdles due to budgetary constraints, time limitations, and the potential risk of human errors. Additionally, another pivotal challenge arises from the time spent on issue identification. Both SREs and application teams invest substantial time and effort in locating and rectifying software glitches within their local environments.
These prolonged processes not only strain resources but also introduce delays within the CI/CD pipeline, hampering the timely release of new features to end users.

Mitigate challenges with Dynatrace

With the support of Dynatrace Grail™, AutomationEngine, and the Site Reliability Guardian, SREs and application teams are assisted in making informed release decisions by utilizing telemetry observability and other insights. Additionally, the Visual Resolution Path within generated problem reports helps in reproducing issues in their environments. The Visual Resolution Path offers a chronological overview of events detected by Dynatrace across all components linked to the underlying issue. It incorporates the automatic discovery of newly generated compute resources and any static resources that are in play. This view seamlessly correlates crucial events across all affected components, eliminating the manual effort of sifting through various monitoring tools for infrastructure, process, or service metrics. As a result, businesses and SREs can redirect their manual diagnostic efforts toward fostering innovation.

Configure an action for the Site Reliability Guardian in the workflow. The action should focus on validating the guardian's adherence to the application ecosystem's specific objectives (SLOs). Additionally, align the action's validation window with the timeframe derived from the recently completed test events. As the action begins, the Site Reliability Guardian (SRG) evaluates the set objectives by analyzing the telemetry data produced during advanced test runs. At the same time, SRG uses DAVIS_EVENTS to identify any potential problems, which could result in one of two outcomes.

Outcome #1: Build promotion

Once the newly developed code is in line with the objectives outlined in the guardian, and assuming that Davis AI doesn't generate any new events, the SRG action activates the successful path in the workflow.
This path includes a JavaScript action called promote_jenkins_build, which triggers an API call to approve the build being considered, leading to the promotion of the build deployment to production.

Outcome #2: Build rejection

If Davis AI generates any issue events related to the wider application ecosystem, or if any of the objectives configured in the defined guardian are not met, the build rejection workflow is automatically initiated. This triggers the disapprove_jenkins_build JavaScript action, which leads to the rejection of the build. Moreover, by utilizing helpful service analysis tools such as Response Time Hotspots and Outliers, SREs can easily identify the root cause of any issues and save considerable time that would otherwise be spent on debugging or taking necessary actions. SREs can also make use of the Visual Resolution Path to recreate the issues in their setup or identify the events for different components that led to the issue.

In both scenarios, a Slack message is sent to the SREs and the impacted app team, capturing the build promotion or rejection. The telemetry data's automated analytics, powered by SRG and Davis AI, simplify the process of promoting builds. This approach effectively tackles the challenges that come with complex application ecosystems. Additionally, the integration of service tools and the Visual Resolution Path helps to identify and fix issues more quickly, resulting in an improved mean time to repair (MTTR).

Validation in the platform engineering context

Dynatrace, essential within the realm of platform engineering, streamlines the validation process, providing critical insights into performance metrics and automating the identification of build failures. By leveraging SRG and the Visual Resolution Path, along with Davis AI causal analysis, development teams can quickly pinpoint issues and rectify them, ensuring a fail-smart approach.
The integration of service analysis tools further enhances the validation phase by automating code-level inspections and facilitating timely resolutions. Through these orchestrated efforts, platform engineering promotes a collaborative environment, enabling more efficient validation cycles and fostering continuous enhancement in software quality and delivery. In conclusion, the integration of Dynatrace observability provides several advantages for SREs and DevOps, enabling them to enhance the key DORA metrics:

  • Deployment frequency: improved deployment rate through faster and more informed decision-making. SREs gain visibility into each stage, allowing them to build faster and promptly address issues using the Dynatrace feature set.
  • Change lead time: enhanced efficiency across stages with Dynatrace observability and security tools, leading to quicker postmortems and fewer interruption calls for SREs.
  • Change failure rate: reduction in incidents and rollbacks achieved by utilizing "Configuration Change" events or deployment and annotation events in Dynatrace. This enables SREs to allocate their time more effectively to proactively address actual issues instead of debugging underlying problems.
  • Time to restore service: while these proactive approaches can help improve deployment frequency and change lead time, telemetry observability data with Davis AI, the Dynatrace causation engine, can aid in improving time to restore service.

In addition, Dynatrace can leverage the events and telemetry data that it receives during the Continuous Integration/Continuous Deployment (CI/CD) pipeline to construct dashboards. By using JavaScript and DQL, these dashboards can help generate reports on the current DORA metrics. This method can be expanded to gain a better understanding of the SRG executions, enabling us to pinpoint the responsible guardians and the SLOs managed by various teams and identify any instances of failure.
Addressing such failures can lead to improvements and further enhance the DORA metrics. Below is a sample dashboard that provides insights into DORA and SRG execution.

In the next blog post, we'll discuss the integration of security modules into the DevOps process with the aim of achieving DevSecOps. Additionally, we'll explore incorporating Chaos Engineering during the testing stage to improve the overall reliability of the DevSecOps cycle. We'll ensure that these efforts don't affect the Time to Restore Service turnaround build time and examine how we can improve the fifth key DORA metric, Reliability.

What's next? Curious to see how it all works? Contact us to schedule a demo and we'll walk you through the various workflows, JavaScript tasks, and the dashboards discussed in this blog series. Contact Sales

If you're an existing Dynatrace Managed customer looking to upgrade to Dynatrace SaaS, see How to start your journey to Dynatrace SaaS. The post Automate CI/CD pipelines with Dynatrace: Part 4, Validation stage appeared first on Dynatrace news. View the full article
  9. Introduction

Today customers want to reduce manual operations for deploying and maintaining their infrastructure. The recommended method to deploy and manage infrastructure on AWS is to follow the Infrastructure-as-Code (IaC) model using tools like AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), or Terraform.

One of the critical components in terraform is managing the state file, which keeps track of your configuration and resources. When you run terraform in an AWS CI/CD pipeline, the state file has to be stored in a secured, common path to which the pipeline has access. You also need a mechanism to lock it when multiple developers on the team want to access it at the same time.

In this blog post, we will explain how to manage terraform state files in AWS, best practices for configuring them in AWS, and an example of how you can manage them efficiently in your Continuous Integration pipeline in AWS when used with AWS Developer Tools such as AWS CodeCommit and AWS CodeBuild. This blog post assumes you have a basic knowledge of terraform, AWS Developer Tools, and AWS CI/CD pipelines. Let's dive in!

Challenges with handling state files

By default, the state file is stored locally where terraform runs, which is not a problem if you are a single developer working on the deployment. Otherwise, it is not ideal to store state files locally, as you may run into the following problems:

When working in teams or collaborative environments, multiple people need access to the state file
Data in the state file is stored in plain text, which may contain secrets or sensitive information
Local files can get lost, corrupted, or deleted

Best practices for handling state files

The recommended practice for managing state files is to use terraform's built-in support for remote backends. These are:

Remote backend on Amazon Simple Storage Service (Amazon S3): You can configure terraform to store state files in an Amazon S3 bucket, which provides a durable and scalable storage solution.
Storing on Amazon S3 also enables collaboration, allowing you to share the state file with others.

Remote backend on Amazon S3 with Amazon DynamoDB: In addition to using an Amazon S3 bucket for managing the files, you can use an Amazon DynamoDB table to lock the state file. This allows only one person to modify a particular state file at any given time, helping to avoid conflicts and enabling safe concurrent access to the state file.

There are other options available as well, such as a remote backend on Terraform Cloud and third-party backends. Ultimately, the best method for managing terraform state files on AWS will depend on your specific requirements. When deploying terraform on AWS, the preferred choice for managing state is Amazon S3 with Amazon DynamoDB.

AWS configurations for managing state files

Create an Amazon S3 bucket using terraform. Implement security measures for the Amazon S3 bucket by creating an AWS Identity and Access Management (AWS IAM) policy or an Amazon S3 bucket policy. This way you can restrict access, configure object versioning for data protection and recovery, and enable AES256 encryption with SSE-KMS for encryption control.

Next, create an Amazon DynamoDB table using terraform with the primary key set to LockID. You can also set additional configuration options such as read/write capacity units. Once the table is created, configure the terraform backend to use it for state locking by specifying the table name in the terraform block of your configuration.

For a single AWS account with multiple environments and projects, you can use a single Amazon S3 bucket. If you have multiple applications in multiple environments across multiple AWS accounts, you can create one Amazon S3 bucket per account. In that Amazon S3 bucket, you can create appropriate folders for each environment, storing project state files with specific prefixes.
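The setup described above can be sketched in HCL. This is a minimal, hypothetical illustration, not the code from the post's repository: the bucket name my-tf-state-bucket, the table name tf-state-lock, and the state key are placeholders you would replace with your own values.

```hcl
# --- In the configuration that provisions the backend resources ---

# S3 bucket that stores the state files; versioning aids recovery
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-tf-state-bucket" # placeholder name
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB table used for state locking; the hash key must be named LockID
resource "aws_dynamodb_table" "tf_lock" {
  name         = "tf-state-lock" # placeholder name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# --- In the configuration whose state should live in this backend ---
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"
    key            = "env/dev/project.tfstate" # per-environment prefix
    region         = "eu-central-1"
    dynamodb_table = "tf-state-lock"
    encrypt        = true
  }
}
```

Note that the backend block cannot reference the resources directly; the bucket and table must already exist (typically provisioned from a separate, bootstrap configuration) before you run terraform init against this backend.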
Now that you know how to handle terraform state files on AWS, let's look at an example of how you can configure them in a Continuous Integration pipeline in AWS.

Architecture

Figure 1: Example architecture on how to use terraform in an AWS CI pipeline

This diagram outlines the workflow implemented in this blog:

The AWS CodeCommit repository contains the application code
The AWS CodeBuild job contains the buildspec files and references the source code in AWS CodeCommit
The AWS Lambda function contains the application code created after running terraform apply
Amazon S3 contains the state file created after running terraform apply
Amazon DynamoDB locks the state file present in Amazon S3

Implementation

Prerequisites

Before you begin, you must complete the following prerequisites:

Install the latest version of the AWS Command Line Interface (AWS CLI)
Install the latest version of terraform
Install the latest Git version and set up git-remote-codecommit
Use an existing AWS account or create a new one
Use an AWS IAM role with a role profile, role permissions, a role trust relationship, and user permissions to access your AWS account via your local terminal

Setting up the environment

You need an AWS access key ID and secret access key to configure the AWS CLI. To learn more about configuring the AWS CLI, follow these instructions. Clone the repo for the complete example:

git clone https://github.com/aws-samples/manage-terraform-statefiles-in-aws-pipeline

After cloning, you will see the following folder structure:

Figure 2: AWS CodeCommit repository structure

Let's break down the terraform code into two parts: one for preparing the infrastructure and another for preparing the application.

Preparing the Infrastructure

The main.tf file is the core component that does the following:

It creates an Amazon S3 bucket to store the state file. We configure the bucket ACL, bucket versioning, and encryption so that the state file is secure.
It creates an Amazon DynamoDB table which will be used to lock the state file.
It creates two AWS CodeBuild projects, one for terraform plan and another for terraform apply. Note: it also has a code block (commented out by default) to create the AWS Lambda function, which you will use at a later stage.

The AWS CodeBuild projects need access to Amazon S3, Amazon DynamoDB, AWS CodeCommit, and AWS Lambda, so the AWS IAM role with the appropriate permissions to access these resources is created via the iam.tf file. Next, you will find two buildspec files named buildspec-plan.yaml and buildspec-apply.yaml that execute the terraform commands terraform plan and terraform apply respectively.

Modify the AWS region in the provider.tf file. Update the Amazon S3 bucket name, Amazon DynamoDB table name, AWS CodeBuild compute types, and AWS Lambda role and policy names to the required values using the variable.tf file. You can also use this file to easily customize parameters for different environments.

With this, the infrastructure setup is complete. From your local terminal, execute the commands below in the same order to deploy the above-mentioned resources in your AWS account:

terraform init
terraform validate
terraform plan
terraform apply

Once the apply is successful and all the above resources have been deployed in your AWS account, proceed with deploying your application.

Preparing the Application

In the cloned repository, use the backend.tf file to create your own Amazon S3 backend to store the state file. By default, it has the values below, which you can override with your required values:

bucket = "tfbackend-bucket"
key    = "terraform.tfstate"
region = "eu-central-1"

The repository has sample Python code stored in main.py that returns a simple message when invoked. In the main.tf file, you can find the block of code below that creates and deploys the Lambda function using the main.py code (uncomment these code blocks).
data "archive_file" "lambda_archive_file" {
……
}

resource "aws_lambda_function" "lambda" {
……
}

Now you can deploy the application using AWS CodeBuild instead of running terraform commands locally, which is the whole point and advantage of using AWS CodeBuild. Run the two AWS CodeBuild projects to execute terraform plan and terraform apply again. Once successful, you can verify your deployment by testing the code in AWS Lambda.

To test the Lambda function (console):

Open the AWS Lambda console and select your function "tf-codebuild"
In the navigation pane, in the Code section, click Test to create a test event
Provide a name, for example "test-lambda"
Accept the default values and click Save
Click Test again to trigger your test event "test-lambda"

It should return the sample message you provided in your main.py file. In the default case, it displays the message "Hello from AWS Lambda !" as shown below.

Figure 3: Sample AWS Lambda function response

To verify your state file, go to the Amazon S3 console and select the backend bucket you created (tfbackend-bucket). It will contain your state file.

Figure 4: Amazon S3 bucket with terraform state file

Open the Amazon DynamoDB console and check your table tfstate-lock; it will have an entry with a LockID.

Figure 5: Amazon DynamoDB table with LockID

Thus, you have securely stored and locked your terraform state file using a terraform backend in a Continuous Integration pipeline.

Cleanup

To delete all the resources created as part of the repository, run the below command from your terminal:

terraform destroy

Conclusion

In this blog post, we explored the fundamentals of terraform state files, discussed best practices for their secure storage within AWS environments, and covered mechanisms for locking these files to prevent conflicting concurrent modifications. Finally, we showed you an example of how efficiently you can manage them in a Continuous Integration pipeline in AWS.
You can apply the same methodology to manage state files in a Continuous Delivery pipeline in AWS. For more information, see CI/CD pipeline on AWS, Terraform backend types, and Purpose of terraform state.

Arun Kumar Selvaraj

Arun Kumar Selvaraj is a Cloud Infrastructure Architect with AWS Professional Services. He loves building world-class capabilities that provide thought leadership, operating standards, and a platform to deliver accelerated migration and development paths for his customers. His interests include Migration, CCoE, IaC, Python, DevOps, Containers, and Networking.

Manasi Bhutada

Manasi Bhutada is an ISV Solutions Architect based in the Netherlands. She helps customers design and implement well-architected solutions in AWS that address their business problems. She is passionate about data analytics and networking. Beyond work she enjoys experimenting with food, playing pickleball, and diving into fun board games.

View the full article
  10. Hello Everyone

I am currently working on integrating automated rollbacks into our CI/CD pipeline to ensure a more robust deployment process. Our team is looking for the best methods and tools we can adopt to make this transition as smooth as possible. I came across this article (https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-rollback-and-redeploy.html), which provided some insights, but I'd love to hear from those of you who have hands-on experience in this area:

What strategies have you implemented for automated rollbacks?
How do you handle rollback complexities, especially when dealing with dependencies?
Are there any specific tools or platforms that you recommend?
What lessons have you learned and what pitfalls should we be aware of?

Your real-world experiences will supplement what we've learned from the literature and help us make more informed decisions. Thanks in advance!
  11. Jenkins is an open-source automation server widely used for building, testing, and deploying software projects. It provides a platform for continuous integration and continuous delivery (CI/CD), allowing development teams to automate various tasks in the software development lifecycle. View the full article
  12. We've talked about how Continuous Integration and Continuous Delivery (CI/CD) tools can be a source of secrets sprawl. While it's not as insecure as leaving them lying around in a publicly accessible file, CI/CD pipelines can be exploited in a number of ways, and I'm going to share a few with you. This article is not exhaustive. GitHub's Security Hardening Guide for GitHub Actions alone is 16 pages long if you try to print it. OWASP's Top 10 CI/CD Security Risks is 38 pages long. Protecting your CI/CD systems is not a trivial task, but it's an important one. To get you started, here's a quick read on five ways attackers can leverage your CI/CD to gain access to additional systems. View the full article
  13. CI/CD Explained CI/CD stands for continuous integration and continuous deployment, and together they form the backbone of modern-day DevOps practices. CI/CD is the process that allows software to be continuously built, tested, and delivered at a continuous cadence. In a rapidly evolving world with growing requirements, development and integration need to move at the same speed to ensure business delivery. What Is Continuous Integration? CI, or continuous integration, relies on automated builds and tests. Changes made by developers are stored in a source branch of a shared repository. Any changes committed to this branch go through builds and testing before merging. This ensures consistent quality checks on the code that gets merged. View the full article
  14. In today's fast-evolving technology landscape, the integration of Artificial Intelligence (AI) into Internet of Things (IoT) systems has become increasingly prevalent. AI-enhanced IoT systems have the potential to revolutionize industries such as healthcare, manufacturing, and smart cities. However, deploying and maintaining these systems can be challenging due to the complexity of the AI models and the need for seamless updates and deployments. This article is tailored for software engineers and explores best practices for implementing Continuous Integration and Continuous Deployment (CI/CD) pipelines for AI-enabled IoT systems, ensuring smooth and efficient operations. View the full article
  15. The post Best DevOps Tools in 2024 appeared first on DevOpsSchool.com. View the full article
  16. About cdCon + GitOpsCon: the event will foster collaboration, discussion, and knowledge sharing by bringing communities, vendors, and end users together to meet, discuss, collaborate, and start shaping the future of GitOps and CD. Details: https://events.linuxfoundation.org/cdcon-gitopscon/ Event Schedule: https://events.linuxfoundation.org/cdcon-gitopscon/program/schedule/
  17. Jenkins is a popular open-source CI/CD tool that helps automate various aspects of software development, including building, testing, and deploying applications. Jenkins is highly extensible, with over 1,000 available plugins that help it integrate with various third-party tools and technologies. Consider a scenario where you're working on a large software project with multiple developers. Testing each and every change manually is time-consuming and prone to human error. This is where Jenkins test cases come in handy. View the full article
  18. On the west coast of Canada, you will find Vancouver, British Columbia, home to the Canucks, breathtaking scenery, and the Granville Walk of Fame. You will also find the Vancouver Convention Center, which hosts some of the best views from any event space in the world. It was in this picturesque setting that the CD Foundation and OpenGitOps communities came together for a co-located event, cdCon + GitOpsCon 2023. These two communities are distinct but have aligned goals and visions for how DevOps needs to evolve. The CD Foundation acts as a host and incubator for open-source projects like Spinnaker and Jenkins, the newly graduated project Tekton, and the completely new cdEvents. They have a mission of defining continuous delivery best practices. OpenGitOps was started as a Cloud Native Computing Foundation working group with the goal of clearly defining a vendor-neutral, principle-led meaning of GitOps. View the full article
  19. Let's start with a story: Have you heard the news about CircleCI's breach? No, not the one where they accidentally leaked some customer credentials a few years back. This time, it's a bit more serious. It seems that some unauthorized individuals were able to gain access to CircleCI's systems, compromising the secrets stored in CircleCI. CircleCI advised users to rotate "any and all secrets" stored in CircleCI, including those stored in project environment variables or contexts. View the full article
  20. GitHub Actions is relatively new to the world of automation and Continuous Integration (CI). Providing ‘CI as a Service,’ GitHub Actions has many differences from its traditional rival platforms. In this post, we explore the differences between GitHub Actions and traditional build servers. We also look at whether GitHub Actions is a suitable option for building and testing your code. View the full article
  21. When it comes to CI/CD consulting services, following best practices is crucial. Ensure automation at every stage, from code integration to deployment, to achieve faster and more reliable software delivery. Implement comprehensive testing and monitoring, fostering a culture of continuous improvement. Efficient CI/CD processes empower your team to deliver high-quality software consistently, reducing errors and enhancing productivity.
  22. Although relatively new to the world of continuous integration (CI), GitHub’s adding of Actions has seen its strong community build useful tasks that plug right into your repository. Actions let you run non-standard tasks to help you test, build, and push your work to your deployment tools. View the full article
  23. Palo Alto Networks' Daniel Krivelevich shares a general three-step framework organizations can use to secure the CI/CD pipeline and surrounding areas. View the full article
  24. A GitOps tool like Argo CD can help centralize the automation, installation, and configuration of services onto multiple Kubernetes clusters. Rather than apply changes using a Kubernetes CLI or CI/CD, a GitOps workflow detects changes in version control and applies the changes automatically in the cluster. You can use a GitOps workflow to deploy and manage changes to a Consul cluster, while orchestrating the configuration of Consul service mesh for peering, network policy, and gateways. This approach to managing your Consul cluster and configuration has two benefits. First, a GitOps tool handles the order-of-operations and automation of cluster updates before configuration updates. Second, your Consul configuration uses version control as a source of truth that GitOps enforces across multiple Kubernetes clusters. This post demonstrates a GitOps workflow for deploying a Consul cluster, configuring its service mesh, and upgrading its server with Argo CD. Argo CD annotations for sync waves and resource hooks enable orchestration of Consul cluster deployment followed by service mesh configuration with Custom Resource Definitions (CRDs). Updating a Consul cluster on Kubernetes involves opening a pull request with changes to Helm chart values or CRDs and merging it. Argo CD synchronizes the configuration to match version control and handles the order of operations when applying the changes... View the full article