Search the Community
Showing results for tags 'ci/cd'.
-
Agile processes, prioritizing speed, quality, and efficiency, are gradually replacing traditional software development and deployment methods in the age of rapid software delivery. CI/CD has become a fundamental component of contemporary software development, allowing teams to automate the processes of building, testing, and deploying software. The pipeline, which carries code changes from development to production, is the central component of continuous integration and delivery (CI/CD). Recent studies like “State of DevOps” highlight that the most common DevOps practices are continuous integration (CI) and continuous delivery (CD), used by 80% of organizations. In addition to speeding up software delivery, automating the CI/CD process improves product quality by enabling frequent testing and feedback loops. However, rigorous preparation, strategy, and adherence to best practices are necessary for implementing efficient automation. This article explores 7 essential tactics and industry best practices for CI/CD pipeline automation. 1. Infrastructure as Code (IaC) Treating infrastructure like code is a cornerstone of contemporary DevOps methods. By expressing infrastructure requirements as code, teams can achieve consistency, reproducibility, and scalability across their environments. IaC tools like Terraform, CloudFormation, or Ansible are crucial for automating CI/CD pipelines. Strategy 1: Use declarative code to define the infrastructure needs for provisioning and configuring CI/CD pipeline resources, including deployment targets, build servers, and testing environments. Best Practice: To preserve version history and facilitate teamwork, save infrastructure code in version control repositories alongside application code. 2. Containerization Containerization has completely transformed software packaging and deployment, as best demonstrated by platforms like Docker. Containers encapsulate an application and its dependencies, providing consistency between environments. Using containerization in CI/CD pipeline automation facilitates smooth deployment and portability. Strategy 2: As part of the continuous integration process, build Docker images to produce lightweight, portable artifacts that can be reliably deployed in a range of environments. Best Practice: Use container orchestration systems such as Kubernetes to manage containerized applications in production and ensure scalability, robustness, and ease of deployment. 3. Test Automation One of the main components of CI/CD is automated testing, which helps teams to validate code changes quickly and accurately. Teams can reduce the risk of regressions and ensure software quality by automating unit, integration, and acceptance tests at different stages of the development cycle and catching errors early. Strategy 3: Include automated tests in the continuous integration pipeline to verify code changes immediately after each commit, giving engineers quick feedback. Best Practice: Follow the test pyramid: rely on many fast unit tests at the base and fewer, more comprehensive integration and end-to-end tests at the higher levels to balance speed and coverage. 4. Infrastructure Monitoring and Optimization Continuous monitoring of infrastructure performance is crucial for maintaining the reliability and efficiency of CI/CD pipelines. 
By leveraging monitoring tools such as Prometheus or Datadog, we can track resource utilization, identify issues, and optimize infrastructure configurations to enhance pipeline performance. Strategy 4: Implement automated infrastructure monitoring to track key performance metrics such as CPU usage, memory consumption, and network traffic, enabling proactive identification and resolution of issues that may impact performance. Best Practice: Utilize alerting mechanisms to notify teams of abnormal infrastructure behavior or performance degradation, facilitating rapid response and minimizing downtime in CI/CD pipelines. 5. Security Automation and Compliance Security is a paramount concern in any system, and integrating security practices into CI/CD pipelines is essential for mitigating risks and ensuring regulatory compliance. By automating security checks and compliance audits using tools like SonarQube or OWASP ZAP, we can detect vulnerabilities early in the development lifecycle and enforce security standards consistently. Strategy 5: Embed security scans and compliance checks into the CI/CD pipeline to automatically assess code quality, identify security vulnerabilities, and enforce security policies throughout the software delivery process. Best Practice: Integrate security testing tools with your version control system (such as Git) and your CI server (such as Jenkins) to perform automated code analysis on every commit, enabling developers to address security issues promptly and maintain a secure codebase. 6. Monitoring and Feedback Monitoring and feedback loops are essential to continuous integration and delivery (CI/CD) pipelines because they offer insight into the functionality and state of deployed applications. Teams can find bottlenecks, spot anomalies, and continuously increase the efficiency of the pipeline by gathering and evaluating metrics. Strategy 6: Instrument infrastructure and applications to record pertinent metrics and logs, allowing for proactive monitoring and troubleshooting. Best Practice: To guarantee that deployed applications fulfill performance and reliability requirements, incorporate monitoring and alerting technologies into the CI/CD pipeline. This will allow the pipeline to detect and respond to issues automatically. 7. Infrastructure Orchestration Orchestration is essential to automating CI/CD pipelines and controlling infrastructure as code. Orchestration tools such as Jenkins, CircleCI, or GitLab CI/CD drive delivery workflows by executing pipeline steps, managing dependencies, and coordinating parallel operations. Strategy 7: Use CI/CD orchestration technologies to automate pipeline steps, such as code compilation, testing, deployment, and release, to coordinate complex workflows smoothly. Best Practice: Defining pipeline stages and dependencies explicitly, optimizing execution order and resource use, and minimizing build times can foster a highly efficient CI/CD environment. Conclusion Automation lies at the core of successful CI/CD pipelines, enabling teams to deliver high-quality software quickly and at scale. By adopting the strategies and best practices outlined in this article, organizations can streamline their development workflows, reduce manual overhead, and foster a culture of collaboration and continuous improvement. Embracing automation accelerates time-to-market and enhances software delivery’s resilience, reliability, and overall quality in today’s fast-paced digital landscape. The post 7 Strategies and Best Practices for Automating CI/CD Pipelines appeared first on Amazic. 
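To make the strategies above concrete, here is a minimal sketch of one pipeline that ties several of them together: containerized builds (Strategy 2), automated tests (Strategy 3), and a security scan (Strategy 5). The GitHub Actions syntax, job names, image tag, and choice of Trivy as the scanner are all illustrative assumptions; the article does not prescribe specific tools.

```yaml
# Illustrative CI sketch (GitHub Actions syntax assumed); adapt names and tools to your stack.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Strategy 3: fast automated tests on every change for quick feedback.
      - name: Run unit tests
        run: make test   # placeholder for your project's test command

  build-and-scan:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Strategy 2: package the application as a portable container image.
      - name: Build container image
        run: docker build -t my-app:${{ github.sha }} .
      # Strategy 5: scan the image before it can be promoted further.
      - name: Scan image for known vulnerabilities
        run: >
          docker run --rm
          -v /var/run/docker.sock:/var/run/docker.sock
          aquasec/trivy:latest image my-app:${{ github.sha }}
```

A deployment stage (Strategy 7) would typically follow, gated on the default branch and driven by the same orchestration tool.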
View the full article
-
Jenkins is an open-source continuous integration tool that automates technical tasks such as software testing, building, and deployment. It is a Java-based tool, and as a DevOps engineer, knowing how to install and use Jenkins will save you time and resources. Jenkins supports numerous platforms, and this post focuses on installing it on Ubuntu 24.04. We will guide you through a step-by-step process to ensure you don’t get stuck. Let’s begin! Step-By-Step Installation of Jenkins on Ubuntu 24.04 The Jenkins repository is not included in Ubuntu 24.04. As such, we must fetch it and add it to our system. Again, we’ve mentioned that Jenkins is a Java-based tool. Therefore, you must have Java installed, and in this case, we will work with OpenJDK 11. Once you have the two prerequisites in place, installing Jenkins will be an easy task. Proceed with the below steps. Step 1: Install Java We must have a Java Runtime Environment before we can install and use Jenkins. However, not all Java versions are supported; recent Jenkins releases no longer run on Java 8, so install OpenJDK 11 or a newer supported version. Verify that you have installed the correct Java version. $ java -version If not installed, use the following command to install OpenJDK 11. $ sudo apt install openjdk-11-jdk Step 2: Fetch and Add the Jenkins Repository Jenkins is available as a stable or weekly version. This step requires us to download the Jenkins GPG key and then its software repository. After verification, we can then add the repository to our source list. First, let’s execute the following command to import the Jenkins GPG key. $ sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key The next task is adding the Jenkins repository by executing the following command. $ echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null Step 3: Install Jenkins After adding the stable Jenkins release to our source list, we can proceed with installing it, but first, let’s update the Ubuntu 24.04 repository to refresh the source list. $ sudo apt update Next, install Jenkins and ensure the installation completes without interruptions. $ sudo apt install jenkins -y Once installed, check the version to confirm that we managed to install it successfully. $ jenkins --version Step 4: Configure the Firewall We must modify our firewall to create a rule allowing Jenkins to communicate via port 8080. First, start the Jenkins service. $ sudo systemctl start jenkins $ sudo systemctl status jenkins Next, add a new UFW rule and check that your firewall is active. If the firewall is inactive, enable it. $ sudo ufw allow 8080 $ sudo ufw status Step 5: Configure Jenkins We will access Jenkins via a browser to set it up. On your browser tab, access the below URL. Be sure to add the correct IP or domain name of your server and port number 8080. http://ip_address:8080 You will get a window displaying the “Getting Started” information. On the page, find the path to the file containing the administrator password. Go back to your terminal and open the file using a text editor or a command such as “cat.” $ sudo cat /var/lib/jenkins/secrets/initialAdminPassword The administrator password will be displayed on your terminal. Copy the generated password and paste it into your browser in the “Administrator password” input box. At the bottom of the window, click on the Continue button. A new window will open. 
Click on the selected option to “Install suggested plugins.” Jenkins will initiate the setup. Once the process is complete, you will be prompted to create your administrator credentials. Type the admin username and password, then click the “Save and Continue” button. On the next window, note the Jenkins URL and click the “Save and Finish” button. That’s it. Jenkins is now installed and configured on your Ubuntu 24.04 system. Click on the “Start using Jenkins” button to open the Jenkins dashboard and start enjoying Jenkins. Conclusion Jenkins has numerous applications, especially for developers. If you use Ubuntu Noble Numbat, this post has shared a step-by-step guide on how to install Jenkins. Hopefully, you found this guide insightful and are now able to install Jenkins; a cloud-init sketch that automates the same steps on a fresh server follows below. View the full article
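For readers who provision build servers often, the same sequence can be expressed as a cloud-init user-data file applied when an Ubuntu 24.04 instance first boots. This is a sketch based on the commands in this post; it assumes cloud-init is available (it is on standard Ubuntu cloud images), and you still complete the browser-based setup afterwards.

```yaml
#cloud-config
# Sketch: automate the manual steps from this post on first boot (runcmd runs as root).
package_update: true
packages:
  - openjdk-11-jdk   # Java prerequisite (Step 1)
runcmd:
  # Step 2: add the Jenkins signing key and the stable repository
  - wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
  - echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | tee /etc/apt/sources.list.d/jenkins.list > /dev/null
  # Step 3: install Jenkins
  - apt-get update
  - apt-get install -y jenkins
  # Step 4: start the service and open port 8080
  - systemctl enable --now jenkins
  - ufw allow 8080
```

The initial administrator password is still read from /var/lib/jenkins/secrets/initialAdminPassword, exactly as described in Step 5.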
-
Cloud technologies are a rapidly evolving landscape. Securing cloud applications is everyone’s responsibility, meaning application development teams are expected to follow strict security guidelines from the earliest development stages and to ensure continuous security scanning throughout the whole application lifecycle. The rise of generative AI enables new innovative approaches for addressing longstanding challenges with reduced effort. This post showcases how engineering teams can automate efficient remediation of container CVEs (common vulnerabilities and exposures) early in their continuous integration (CI) pipeline. Using cloud services such as Amazon Bedrock, Amazon Inspector, AWS Lambda, and Amazon EventBridge, you can architect an event-driven serverless solution that automatically detects and patches container vulnerabilities. Using the power of generative AI and serverless technologies can help simplify what used to be a complex challenge. Overview The exponential growth of modern applications has enabled developers to build highly decoupled microservice-based architectures. However, the distributed nature of those architectures comes with a set of operational challenges. Engineering teams have always been responsible for various security aspects of their application environments, such as network security, IAM permissions, TLS certificates, and code vulnerability scanning. Addressing these aspects at the scale of dozens or hundreds of microservices requires a high degree of automation. Automation is imperative for efficient scaling as well as maintaining control and governance. Running applications in containers is a common approach for building microservices. It allows developers to have the same CI pipeline for their applications, regardless of whether they use Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), or AWS Lambda to run them. No matter which programming language you use for your application, the deployable artifact is a container image that commonly includes application code and its dependencies. It is imperative for application development teams to scan those images for vulnerabilities to ensure their safety prior to deploying them to cloud environments. Amazon Elastic Container Registry (Amazon ECR) is an OCI-compliant container registry that provides two types of scanning, basic and enhanced, powered by Amazon Inspector. The image scanning occurs after the container image is pushed to the registry. The basic scanning is triggered automatically when a new image is pushed, while the enhanced scanning runs continuously for images hosted in Amazon ECR. Both types of scans generate scan reports, but it is still the development team’s responsibility to act on them: read the report, understand the vulnerabilities, patch code, open a pull request, merge, and run CI again. The following steps illustrate how you can build an automated solution that uses the power of generative AI and event-driven serverless architectures to automate this process. The following sample solution uses the “in-context learning” approach, a technique that tailors AI responses to narrow scenarios. Used for CVE patching, the solution builds AI prompts based on the programming language in question and a previously generated example of what a PR might look like. 
This approach underscores a crucial point: for some narrow use cases, using a smaller large language model (LLM), such as Llama 2 13B, with an assisted prompt might yield results that are just as effective as those of a bigger LLM, such as Llama 2 70B. We recommend that you evaluate both few-shot prompts with smaller LLMs and zero-shot prompts with larger LLMs to find the model that works most efficiently for you. Read more about providing prompts and examples in the Amazon Bedrock documentation. Solution architecture Prior to packaging the application as a container, engineering teams should make sure that their CI pipeline includes steps such as static code scanning with tools like SonarQube or Amazon CodeGuru and image analysis with tools like Trivy or Docker Scout. Validating your code for vulnerabilities at this stage aligns with the shift-left mentality, and engineers should be able to detect and address potential threats in their code in the earliest stages of development. After the new application code is packaged and pushed to Amazon ECR, image scanning with Amazon Inspector is triggered; engineers can use any programming language supported by Amazon Inspector. As the scan runs, Amazon Inspector emits a finding event to EventBridge for each vulnerability detected. In more detail, the end-to-end flow works as follows. CI is triggered by a developer pushing new code to the shared code repository (this step is not implemented in the provided sample, and different engineering teams can use different tools for their CI pipeline). The application container image is built and pushed to Amazon ECR, which triggers Amazon Inspector automatically; note that you must first enable Amazon Inspector ECR enhanced scanning in your account. As Amazon Inspector scans the image, it emits findings as events to EventBridge, and each finding generates a separate event (see the example JSON payload of a finding event in the Inspector documentation). EventBridge is configured to invoke a Lambda function for each finding event, and the function aggregates the findings and updates an Amazon DynamoDB table with each finding’s information. Once Amazon Inspector completes the scan, it emits a scan-complete event to EventBridge, which calls the PR creation microservice, hosted as an Amazon ECS Fargate task, to start the PR generation process. The PR creation microservice clones the code repository to read the current dependency list, then retrieves the aggregated findings data from DynamoDB and builds a prompt using the dependency list, the findings data, and an in-context learning example based on previous scans. The microservice invokes Amazon Bedrock to generate the new PR content and, once it is generated, opens a new PR and pushes the changes upstream. Engineering teams validate the PR and merge it into the code repository. Over time, as engineering teams gain trust in the process, they might consider automating the merge step as well. Sample implementation Use the example project to replicate this solution in your AWS account. Follow the instructions in README.md for provisioning and testing the sample project using HashiCorp Terraform. Under the /apps directory of the sample project you should see two applications. The /apps/my-awesome-application intentionally contains a set of vulnerable dependencies. This application was used to create examples of what a PR should look like. Once the engineering team took this application through Amazon Inspector and Amazon Bedrock manually, a file containing this example was generated. 
See in_context_examples.py. Although it can be a one-time manual process, engineering teams can also periodically add more examples as they evolve and improve the generative AI model’s responses. The /apps/my-amazing-application is the actual application that the engineering team works on to deliver business value. They deploy this application several times a day to multiple environments, and they want to make sure that it doesn’t have vulnerabilities. Based on the in-context example created previously, they’re continuously using Amazon Inspector to detect new vulnerabilities, as well as Amazon Bedrock to automatically generate pull requests that patch those vulnerabilities. The following example shows a pull request generated when a member of the development team has introduced vulnerable dependencies. The pull request contains details about the packages with detected vulnerabilities and CVEs, as well as recommendations for how to patch them. Moreover, the pull request already contains an updated version of the requirements.txt file with the changes in place. The only thing left for the engineering team to do is review and merge the pull request. Conclusion This post illustrates a simple solution to address container image (OCI) vulnerabilities using AWS services such as Amazon Inspector, Amazon ECR, Amazon Bedrock, Amazon EventBridge, AWS Lambda, and Amazon Fargate. The serverless and event-driven nature of this solution helps ensure cost efficiency and minimal operational overhead. Engineering teams do not need to run additional infrastructure to implement this solution. Using generative AI and serverless technologies helps simplify what used to be a complex and laborious process. Having an automated workflow in place allows engineering teams to focus on delivering business value, thereby improving overall security posture without extra operational overhead. Check out the step-by-step deployment instructions and sample code for the solution discussed in this post in the GitHub repository. References https://aws.amazon.com/blogs/aws/amazon-bedrock-now-provides-access-to-llama-2-chat-13b-model/ https://docs.aws.amazon.com/bedrock/latest/userguide/general-guidelines-for-bedrock-users.html https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-a-prompt.html#few-shot-prompting-vs-zero-shot-prompting View the full article
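To make the event-driven wiring described above more concrete, here is a hedged sketch of the rule that routes Amazon Inspector findings to the aggregation Lambda function. The sample project provisions this with Terraform; the CloudFormation-style YAML below is only an illustration, the function name is hypothetical, and the detail-type value should be checked against the Inspector finding event documentation referenced above.

```yaml
# Illustration only: event routing fragment for the architecture above (not a full template).
# Assumption: enhanced-scanning findings arrive as "Inspector2 Finding" events from aws.inspector2.
Resources:
  InspectorFindingRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Route each Amazon Inspector finding to the aggregation function
      EventPattern:
        source:
          - aws.inspector2
        detail-type:
          - Inspector2 Finding
      Targets:
        - Id: finding-aggregator
          Arn: !GetAtt FindingAggregatorFunction.Arn   # hypothetical Lambda defined elsewhere

  AllowEventBridgeInvoke:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref FindingAggregatorFunction
      Principal: events.amazonaws.com
      SourceArn: !GetAtt InspectorFindingRule.Arn
```

The scan-complete event that starts the PR-creation task would be routed with a second rule of the same shape, filtered on the corresponding detail type.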
-
Graphic created by Kevon Mayers Introduction Organizations often use Terraform Modules to orchestrate complex resource provisioning and provide a simple interface for developers to enter the required parameters to deploy the desired infrastructure. Modules enable code reuse and provide a method for organizations to standardize deployment of common workloads such as a three-tier web application, a cloud networking environment, or a data analytics pipeline. When building Terraform modules, it is common for the module author to start with manual testing. Manual testing is performed using commands such as terraform validate for syntax validation, terraform plan to preview the execution plan, and terraform apply followed by manual inspection of resource configuration in the AWS Management Console. Manual testing is prone to human error, not scalable, and can result in unintended issues. Because modules are used by multiple teams in the organization, it is important to ensure that any changes to the modules are extensively tested before the release. In this blog post, we will show you how to validate Terraform modules and how to automate the process using a Continuous Integration/Continuous Deployment (CI/CD) pipeline. Terraform Test Terraform test is a new testing framework for module authors to perform unit and integration tests for Terraform modules. Terraform test can create infrastructure as declared in the module, run validation against the infrastructure, and destroy the test resources regardless if the test passes or fails. Terraform test will also provide warnings if there are any resources that cannot be destroyed. Terraform test uses the same HashiCorp Configuration Language (HCL) syntax used to write Terraform modules. This reduces the burden for modules authors to learn other tools or programming languages. Module authors run the tests using the command terraform test which is available on Terraform CLI version 1.6 or higher. Module authors create test files with the extension *.tftest.hcl. These test files are placed in the root of the Terraform module or in a dedicated tests directory. The following elements are typically present in a Terraform tests file: Provider block: optional, used to override the provider configuration, such as selecting AWS region where the tests run. Variables block: the input variables passed into the module during the test, used to supply non-default values or to override default values for variables. Run block: used to run a specific test scenario. There can be multiple run blocks per test file, Terraform executes run blocks in order. In each run block you specify the command Terraform (plan or apply), and the test assertions. Module authors can specify the conditions such as: length(var.items) != 0. A full list of condition expressions can be found in the HashiCorp documentation. Terraform tests are performed in sequential order and at the end of the Terraform test execution, any failed assertions are displayed. Basic test to validate resource creation Now that we understand the basic anatomy of a Terraform tests file, let’s create basic tests to validate the functionality of the following Terraform configuration. This Terraform configuration will create an AWS CodeCommit repository with prefix name repo-. # main.tf variable "repository_name" { type = string } resource "aws_codecommit_repository" "test" { repository_name = format("repo-%s", var.repository_name) description = "Test repository." 
} Now we create a Terraform test file in the tests directory. See the following directory structure as an example: ├── main.tf └── tests └── basic.tftest.hcl For this first test, we will not perform any assertion except for validating that the Terraform execution plan runs successfully. In the tests file, we create a variable block to set the value for the variable repository_name. We also added the run block with command = plan to instruct Terraform test to run Terraform plan. The completed test should look like the following: # basic.tftest.hcl variables { repository_name = "MyRepo" } run "test_resource_creation" { command = plan } Now we will run this test locally. First ensure that you are authenticated into an AWS account, and run the terraform init command in the root directory of the Terraform module. After the provider is initialized, start the test using the terraform test command. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass Our first test is complete: we have validated that the Terraform configuration is valid and the resource can be provisioned successfully. Next, let’s learn how to perform inspection of the resource state. Create resource and validate resource name Re-using the previous test file, we add the assert block to check whether the CodeCommit repository name starts with the string repo- and provide an error message if the condition fails. For the assertion, we use the startswith function. See the following example: # basic.tftest.hcl variables { repository_name = "MyRepo" } run "test_resource_creation" { command = plan assert { condition = startswith(aws_codecommit_repository.test.repository_name, "repo-") error_message = "CodeCommit repository name ${var.repository_name} did not start with the expected value of 'repo-****'." } } Now, let’s assume that another module author made changes to the module by modifying the prefix from repo- to my-repo-. Here is the modified Terraform module. # main.tf variable "repository_name" { type = string } resource "aws_codecommit_repository" "test" { repository_name = format("my-repo-%s", var.repository_name) description = "Test repository." } We can catch this mistake by running the terraform test command again. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... fail ╷ │ Error: Test assertion failed │ │ on tests/basic.tftest.hcl line 9, in run "test_resource_creation": │ 9: condition = startswith(aws_codecommit_repository.test.repository_name, "repo-") │ ├──────────────── │ │ aws_codecommit_repository.test.repository_name is "my-repo-MyRepo" │ │ CodeCommit repository name MyRepo did not start with the expected value 'repo-***'. ╵ tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... fail Failure! 0 passed, 1 failed. We have successfully created a unit test that uses assertions to validate that the resource name matches the expected value. For more examples of using assertions, see the Terraform Tests Docs. Before we proceed to the next section, don’t forget to fix the repository name in the module (revert the name back to repo- instead of my-repo-) and re-run your Terraform test. Testing variable input validation When developing Terraform modules, it is common to use variable validation as a contract test to validate any dependencies or restrictions. For example, AWS CodeCommit limits the repository name to 100 characters. 
A module author can use the length function to check the length of the input variable value. We are going to use Terraform test to ensure that the variable validation works effectively. First, we modify the module to use variable validation. # main.tf variable "repository_name" { type = string validation { condition = length(var.repository_name) <= 100 error_message = "The repository name must be less than or equal to 100 characters." } } resource "aws_codecommit_repository" "test" { repository_name = format("repo-%s", var.repository_name) description = "Test repository." } By default, when variable validation fails during the execution of Terraform test, the Terraform test also fails. To simulate this, create a new test file and insert the repository_name variable with a value longer than 100 characters. # var_validation.tftest.hcl variables { repository_name = "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy" } run "test_invalid_var" { command = plan } Notice that in this new test file we also set the command to plan. Why is that? Because variable validation runs prior to terraform apply, we can save time and cost by skipping resource provisioning entirely. If we run this Terraform test, it will fail as expected. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass tests/var_validation.tftest.hcl... in progress run "test_invalid_var"... fail ╷ │ Error: Invalid value for variable │ │ on main.tf line 1: │ 1: variable "repository_name" { │ ├──────────────── │ │ var.repository_name is "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy" │ │ The repository name must be less than or equal to 100 characters. │ │ This was checked by the validation rule at main.tf:3,3-13. ╵ tests/var_validation.tftest.hcl... tearing down tests/var_validation.tftest.hcl... fail Failure! 1 passed, 1 failed. For other module authors who might iterate on the module, we need to ensure that the validation condition is correct and will catch any problems with input values. In other words, we expect the validation condition to fail with the wrong input. This is especially important when we want to incorporate the contract test in a CI/CD pipeline. To prevent our test from failing when we intentionally introduce an invalid input, we can use the expect_failures attribute. Here is the modified test file: # var_validation.tftest.hcl variables { repository_name = "this_is_a_repository_name_longer_than_100_characters_7rfD86rGwuqhF3TH9d3Y99r7vq6JZBZJkhw5h4eGEawBntZmvy" } run "test_invalid_var" { command = plan expect_failures = [ var.repository_name ] } Now if we run the Terraform test, we will get a successful result. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass tests/var_validation.tftest.hcl... in progress run "test_invalid_var"... pass tests/var_validation.tftest.hcl... tearing down tests/var_validation.tftest.hcl... pass Success! 2 passed, 0 failed. As you can see, the expect_failures attribute is used to test negative paths (the inputs that would cause failures when passed into a module). Assertions tend to focus on positive paths (the ideal inputs). 
For an additional example of a test that validates functionality of a completed module with multiple interconnected resources, see this example in the Terraform CI/CD and Testing on AWS Workshop. Orchestrating supporting resources In practice, end-users utilize Terraform modules in conjunction with other supporting resources. For example, a CodeCommit repository is usually encrypted using an AWS Key Management Service (KMS) key. The KMS key is provided by end-users to the module using a variable called kms_key_id. To simulate this test, we need to orchestrate the creation of the KMS key outside of the module. In this section we will learn how to do that. First, update the Terraform module to add the optional variable for the KMS key. # main.tf variable "repository_name" { type = string validation { condition = length(var.repository_name) <= 100 error_message = "The repository name must be less than or equal to 100 characters." } } variable "kms_key_id" { type = string default = "" } resource "aws_codecommit_repository" "test" { repository_name = format("repo-%s", var.repository_name) description = "Test repository." kms_key_id = var.kms_key_id != "" ? var.kms_key_id : null } In a Terraform test, you can instruct the run block to execute another helper module. The helper module is used by the test to create the supporting resources. We will create a sub-directory called setup under the tests directory with a single kms.tf file. We also create a new test file for KMS scenario. See the updated directory structure: ├── main.tf └── tests ├── setup │ └── kms.tf ├── basic.tftest.hcl ├── var_validation.tftest.hcl └── with_kms.tftest.hcl The kms.tf file is a helper module to create a KMS key and provide its ARN as the output value. # kms.tf resource "aws_kms_key" "test" { description = "test KMS key for CodeCommit repo" deletion_window_in_days = 7 } output "kms_key_id" { value = aws_kms_key.test.arn } The new test will use two separate run blocks. The first run block (setup) executes the helper module to generate a KMS key. This is done by assigning the command apply which will run terraform apply to generate the KMS key. The second run block (codecommit_with_kms) will then use the KMS key ARN output of the first run as the input variable passed to the main module. # with_kms.tftest.hcl run "setup" { command = apply module { source = "./tests/setup" } } run "codecommit_with_kms" { command = apply variables { repository_name = "MyRepo" kms_key_id = run.setup.kms_key_id } assert { condition = aws_codecommit_repository.test.kms_key_id != null error_message = "KMS key ID attribute value is null" } } Go ahead and run the Terraform init, followed by Terraform test. You should get the successful result like below. ❯ terraform test tests/basic.tftest.hcl... in progress run "test_resource_creation"... pass tests/basic.tftest.hcl... tearing down tests/basic.tftest.hcl... pass tests/var_validation.tftest.hcl... in progress run "test_invalid_var"... pass tests/var_validation.tftest.hcl... tearing down tests/var_validation.tftest.hcl... pass tests/with_kms.tftest.hcl... in progress run "create_kms_key"... pass run "codecommit_with_kms"... pass tests/with_kms.tftest.hcl... tearing down tests/with_kms.tftest.hcl... pass Success! 4 passed, 0 failed. We have learned how to run Terraform test and develop various test scenarios. In the next section we will see how to incorporate all the tests into a CI/CD pipeline. 
Terraform Tests in CI/CD Pipelines Now that we have seen how Terraform Test works locally, let’s see how the Terraform test can be leveraged to create a Terraform module validation pipeline on AWS. The following AWS services are used: AWS CodeCommit – a secure, highly scalable, fully managed source control service that hosts private Git repositories. AWS CodeBuild – a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages. AWS CodePipeline – a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. Amazon Simple Storage Service (Amazon S3) – an object storage service offering industry-leading scalability, data availability, security, and performance. Terraform module validation pipeline In the above architecture for a Terraform module validation pipeline, the following takes place: A developer pushes Terraform module configuration files to a git repository (AWS CodeCommit). AWS CodePipeline begins running the pipeline. The pipeline clones the git repo and stores the artifacts to an Amazon S3 bucket. An AWS CodeBuild project configures a compute/build environment with Checkov installed from an image fetched from Docker Hub. CodePipeline passes the artifacts (Terraform module) and CodeBuild executes Checkov to run static analysis of the Terraform configuration files. Another CodeBuild project configured with Terraform from an image fetched from Docker Hub. CodePipeline passes the artifacts (repo contents) and CodeBuild runs Terraform command to execute the tests. CodeBuild uses a buildspec file to declare the build commands and relevant settings. Here is an example of the buildspec files for both CodeBuild Projects: # Checkov version: 0.1 phases: pre_build: commands: - echo pre_build starting build: commands: - echo build starting - echo starting checkov - ls - checkov -d . - echo saving checkov output - checkov -s -d ./ > checkov.result.txt In the above buildspec, Checkov is run against the root directory of the cloned CodeCommit repository. This directory contains the configuration files for the Terraform module. Checkov also saves the output to a file named checkov.result.txt for further review or handling if needed. If Checkov fails, the pipeline will fail. # Terraform Test version: 0.1 phases: pre_build: commands: - terraform init - terraform validate build: commands: - terraform test In the above buildspec, the terraform init and terraform validate commands are used to initialize Terraform, then check if the configuration is valid. Finally, the terraform test command is used to run the configured tests. If any of the Terraform tests fails, the pipeline will fail. For a full example of the CI/CD pipeline configuration, please refer to the Terraform CI/CD and Testing on AWS workshop. The module validation pipeline mentioned above is meant as a starting point. In a production environment, you might want to customize it further by adding Checkov allow-list rules, linting, checks for Terraform docs, or pre-requisites such as building the code used in AWS Lambda. Choosing various testing strategies At this point you may be wondering when you should use Terraform tests or other tools such as Preconditions and Postconditions, Check blocks or policy as code. The answer depends on your test type and use-cases. 
Terraform test is suitable for unit tests, such as validating that resources are created according to the naming specification. Variable validations and Pre/Post conditions are useful for contract tests of Terraform modules, for example by providing an error or warning when input variable values do not meet the specification. As shown in the previous section, you can also use Terraform test to ensure your contract tests are running properly. Terraform test is also suitable for integration tests where you need to create supporting resources to properly test the module functionality. Lastly, Check blocks are suitable for end-to-end tests where you want to validate the infrastructure state after all resources are generated, for example to test if a website is running after an S3 bucket configured for static web hosting is created. When developing Terraform modules, you can run Terraform test in command = plan mode for unit and contract tests. This allows the unit and contract tests to run quicker and cheaper since there are no resources created. You should also consider the time and cost to execute Terraform test for complex or large Terraform configurations, especially if you have multiple test scenarios. Terraform test maintains one or more state files in memory for each test file. Consider how to re-use the module’s state when appropriate. Terraform test also provides test mocking, which allows you to test your module without creating the real infrastructure. Conclusion In this post, you learned how to use Terraform test and develop various test scenarios. You also learned how to incorporate Terraform test in a CI/CD pipeline. Lastly, we also discussed various testing strategies for Terraform configurations and modules. For more information about Terraform test, we recommend the Terraform test documentation and tutorial. To get hands-on practice building a Terraform module validation pipeline and Terraform deployment pipeline, check out the Terraform CI/CD and Testing on AWS Workshop. Authors Kevon Mayers Kevon Mayers is a Solutions Architect at AWS. Kevon is a Terraform Contributor and has led multiple Terraform initiatives within AWS. Prior to joining AWS, he worked as a DevOps Engineer and Developer, and before that he worked with the GRAMMYs/The Recording Academy as a Studio Manager, Music Producer, and Audio Engineer. He also owns a professional production company, MM Productions. Welly Siauw Welly Siauw is a Principal Partner Solution Architect at Amazon Web Services (AWS). He spends his day working with customers and partners, solving architectural challenges. He is passionate about service integration and orchestration, serverless and artificial intelligence (AI) and machine learning (ML). He has authored several AWS blog posts and actively leads AWS Immersion Days and Activation Days. Welly spends his free time tinkering with espresso machines and outdoor hiking. View the full article
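The module validation pipeline described above is built from CodePipeline and CodeBuild, but the two validation stages translate directly to other CI systems. As an illustration only (not part of the original workshop), here is a sketch of an equivalent GitHub Actions workflow; the Terraform version, region, and OIDC role secret name are assumptions you would adapt.

```yaml
# Illustrative equivalent of the two buildspecs above: Checkov first, then terraform test.
name: terraform-module-validation
on:
  pull_request:

permissions:
  id-token: write   # needed for the assumed OIDC-based AWS credentials step below
  contents: read

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Checkov
        run: pip install checkov
      - name: Static analysis of the module
        run: checkov -d .

  terraform-test:
    needs: checkov
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: "1.6.6"   # assumption: any release with `terraform test` (1.6+)
      # Real credentials are required because the tests can provision resources.
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.TERRAFORM_TEST_ROLE_ARN }}   # hypothetical secret name
          aws-region: eu-central-1
      - run: terraform init
      - run: terraform validate
      - run: terraform test
```

As in the CodeBuild version, a failure in either job fails the pipeline and blocks the module release.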
-
Tagged with: ci/cd, test frameworks (and 2 more)
-
Good software engineering teams commit frequently and deploy frequently. Those are some of the main ideas behind continuous integration (CI) and continuous deployment (CD). Gone are the days of quarterly or yearly releases and long-lived feature branches! Today, we’ll show you how you can deploy your Heroku app automatically any time code is merged into your main branch by using GitLab CI/CD. View the full article
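As a small taste of what the article walks through, here is a minimal .gitlab-ci.yml sketch that pushes to Heroku whenever a commit lands on the default branch. The HEROKU_API_KEY and HEROKU_APP_NAME variables are assumptions you would define as masked CI/CD variables in the GitLab project settings.

```yaml
# Minimal sketch: deploy to Heroku via a git push when the default branch is updated.
stages:
  - deploy

deploy_to_heroku:
  stage: deploy
  image: alpine:3.19
  variables:
    GIT_DEPTH: "0"   # full clone; the push to Heroku needs the complete history
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  before_script:
    - apk add --no-cache git
  script:
    - git push https://heroku:${HEROKU_API_KEY}@git.heroku.com/${HEROKU_APP_NAME}.git HEAD:main
```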
-
The continuous integration/continuous delivery (CI/CD) pipeline represents the steps new software goes through before release. However, it can contain numerous vulnerabilities for hackers to exploit. 1. Vulnerabilities in the Code Many software releases get completed on such tight time frames that developers don’t have enough time to ensure the code is secure. Company leaders know frequent software updates tend to keep customers happy and can give people the impression that a business is on the cutting edge of technology. However, rushing new releases can have disastrous consequences that give hackers easy entry for wreaking havoc. View the full article
-
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. Forbes estimates that cloud budgets will break all previous records as businesses will spend over $1 trillion on cloud computing infrastructure in 2024. Since most application releases depend on cloud infrastructure, having good continuous integration and continuous delivery (CI/CD) pipelines and end-to-end observability becomes essential for ensuring highly available systems. By integrating observability tools in CI/CD pipelines, organizations can increase deployment frequency, minimize risks, and build highly available systems. Complementing these practices is site reliability engineering (SRE), a discipline ensuring system reliability, performance, and scalability. View the full article
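One lightweight way to start integrating observability into a pipeline is to record every production deployment as an event the monitoring platform can overlay on its metrics. The sketch below is purely illustrative: the endpoint, token, and payload are hypothetical placeholders, since each vendor exposes its own API for deployment events.

```yaml
# Illustrative sketch: publish a deployment marker to an observability backend after deploying.
# OBSERVABILITY_EVENTS_URL and OBSERVABILITY_TOKEN are hypothetical secrets, not a real vendor API.
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: ./scripts/deploy.sh   # placeholder for your deployment step
      - name: Record deployment event for observability
        if: success()
        env:
          OBSERVABILITY_EVENTS_URL: ${{ secrets.OBSERVABILITY_EVENTS_URL }}
          OBSERVABILITY_TOKEN: ${{ secrets.OBSERVABILITY_TOKEN }}
        run: |
          curl -sS -X POST "$OBSERVABILITY_EVENTS_URL" \
            -H "Authorization: Bearer $OBSERVABILITY_TOKEN" \
            -H "Content-Type: application/json" \
            -d "{\"service\":\"my-service\",\"version\":\"$GITHUB_SHA\",\"event\":\"deployment\"}"
```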
-
Implementing Continuous Integration/Continuous Deployment (CI/CD) for a Python application using Django involves several steps to automate testing and deployment processes. This guide will walk you through setting up a basic CI/CD pipeline using GitHub Actions, a popular CI/CD tool that integrates seamlessly with GitHub repositories. Step 1: Setting up Your Django Project Ensure your Django project is in a Git repository hosted on GitHub. This repository will be the basis for setting up your CI/CD pipeline. View the full article
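As a preview of where the guide ends up, here is a minimal workflow sketch that runs the Django test suite on every push and pull request. It assumes a requirements.txt at the repository root, a standard manage.py layout, and SQLite defaults; the article builds the full pipeline out step by step.

```yaml
# .github/workflows/django-ci.yml - minimal sketch; Python version and paths are assumptions.
name: django-ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run Django checks and tests
        run: |
          python manage.py check
          python manage.py test
```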
-
Walrus file is a new feature released in Walrus 0.5. It allows you to describe applications and configure infrastructure resources in a concise YAML file. You can then execute walrus apply in the Walrus CLI or import the file through the Walrus UI. This submits the Walrus file to the Walrus server, which deploys, configures, and manages the applications and infrastructure resources, making it easy to reuse them across multiple environments. View the full article
-
In the previous blog post of this series, we discussed the crucial role of Dynatrace as an orchestrator that steps in to stop the testing phase in case of any errors. Additionally, Dynatrace equips SREs and application teams with valuable insights powered by Davis® AI. In this blog post of the series, we will explore the use of Site Reliability Guardian (SRG) in more detail. SRG is a potent tool that automates the analysis of release impacts, ensuring validation of service availability, performance, and capacity objectives throughout the application ecosystem by examining the effect of advanced test suites executed earlier in the testing phase. Validation stage overview The validation stage is a crucial step in the CI/CD (Continuous Integration/Continuous Deployment) process. It involves carefully examining the test results from the previous testing phase. The main goal of this stage is to identify and address any issues or problems that were detected. Doing so reduces the risk of production disruptions and instills confidence in both SREs (Site Reliability Engineers) and end-users. Depending on the outcome of the examination, the build is either approved for deployment to the production environment or rejected. Challenges of the validation stage In the Validation phase, SREs face specific challenges that significantly slow down the CI/CD pipeline. Foremost among these is the complexity associated with data gathering and analysis. The burgeoning reliance on cloud technology stacks amplifies this challenge, creating hurdles due to budgetary constraints, time limitations, and the potential risk of human errors. Additionally, another pivotal challenge arises from the time spent on issue identification. Both SREs and application teams invest substantial time and effort in locating and rectifying software glitches within their local environments. These prolonged processes not only strain resources but also introduce delays within the CI/CD pipeline, hampering the timely release of new features to end-users. Mitigate challenges with Dynatrace With the support of Dynatrace Grail™, AutomationEngine, and the Site Reliability Guardian, SREs and application teams are assisted in making informed release decisions by utilizing telemetry observability and other insights. Additionally, the Visual Resolution Path within generated problem reports helps in reproducing issues in their environments. The Visual Resolution Path offers a chronological overview of events detected by Dynatrace across all components linked to the underlying issue. It incorporates the automatic discovery of newly generated compute resources and any static resources that are in play. This view seamlessly correlates crucial events across all affected components, eliminating the manual effort of sifting through various monitoring tools for infrastructure, process, or service metrics. As a result, businesses and SREs can redirect their manual diagnostic efforts toward fostering innovation. Configure an action for the Site Reliability Guardian in the workflow. The action should focus on validating the guardian’s adherence to the application ecosystem’s specific objectives (SLOs). Additionally, align the action’s validation window with the timeframe derived from the recently completed test events. As the action begins, the Site Reliability Guardian (SRG) evaluates the set objective by analyzing the telemetry data produced during advanced test runs. 
At the same time, SRG uses DAVIS_EVENTS to identify any potential problems, which could result in one of two outcomes. Outcome #1: Build promotion Once the newly developed code is in line with the objectives outlined in the Guardian, and assuming that Davis AI doesn’t generate any new events, the SRG action activates the successful path in the workflow. This path includes a JavaScript action called promote_jenkins_build, which triggers an API call to approve the build being considered, leading to the promotion of the build deployment to production. Outcome #2: Build rejection If Davis AI generates any issue events related to the wider application ecosystem, or if any of the objectives configured in the defined guardian are not met, the build rejection workflow is automatically initiated. This triggers the disapprove_jenkins_build JavaScript action, which leads to the rejection of the build. Moreover, by utilizing helpful service analysis tools such as Response Time Hotspots and Outliers, SREs can easily identify the root cause of any issues and save considerable time that would otherwise be spent on debugging or taking necessary actions. SREs can also make use of the Visual Resolution Path to recreate the issues on their setup or identify the events for different components that led to the issue. In both scenarios, a Slack message is sent to the SREs and the impacted app team, capturing the build promotion or rejection. The automated analytics of the telemetry data, powered by SRG and Davis AI, simplify the process of promoting builds. This approach effectively tackles the challenges that come with complex application ecosystems. Additionally, the integration of service tools and the Visual Resolution Path helps to identify and fix issues more quickly, resulting in an improved mean time to repair (MTTR). Validation in the platform engineering context Dynatrace, essential within the realm of platform engineering, streamlines the validation process, providing critical insights into performance metrics and automating the identification of build failures. By leveraging SRG and the Visual Resolution Path, along with Davis AI causal analysis, development teams can quickly pinpoint issues and rectify them, ensuring a fail-smart approach. The integration of service analysis tools further enhances the validation phase by automating code-level inspections and facilitating timely resolutions. Through these orchestrated efforts, platform engineering promotes a collaborative environment, enabling more efficient validation cycles and fostering continuous enhancement in software quality and delivery. In conclusion, the integration of Dynatrace observability provides several advantages for SREs and DevOps, enabling them to enhance the key DORA metrics: Deployment Frequency: Improved deployment rate through faster and more informed decision-making. SREs gain visibility into each stage, allowing them to build faster and promptly address issues using the Dynatrace feature set. Change Lead Time: Enhanced efficiency across stages with Dynatrace observability and security tools, leading to quicker postmortems and fewer interruption calls for SREs. Change Failure Rate: Reduction in incidents and rollbacks achieved by utilizing “Configuration Change” events or deployment and annotation events in Dynatrace. This enables SREs to allocate their time more effectively to proactively address actual issues instead of debugging underlying problems. 
Time to restore service: While these proactive approaches can help improve Deployment Frequency and Change Lead Time, telemetry observability data combined with the Dynatrace AI causation engine, Davis AI, can aid in improving Time to restore service. In addition, Dynatrace can leverage the events and telemetry data that it receives during the Continuous Integration/Continuous Deployment (CI/CD) pipeline to construct dashboards. By using JavaScript and DQL, these dashboards can help generate reports on the current DORA metrics. This method can be expanded to gain a better understanding of the SRG executions, enabling us to pinpoint the responsible guardians and the SLOs managed by various teams and identify any instances of failure. Addressing such failures can lead to improvements and further enhance the DORA metrics. Below is a sample dashboard that provides insights into DORA and SRG execution. In the next blog post, we’ll discuss the integration of security modules into the DevOps process with the aim of achieving DevSecOps. Additionally, we’ll explore the incorporation of Chaos Engineering during the testing stage to enhance the overall reliability of the DevSecOps cycle. We’ll ensure that these efforts don’t affect the Time to Restore Service turnaround build time and examine how we can improve the fifth key DORA metric, Reliability. What’s next? Curious to see how it all works? Contact us to schedule a demo and we’ll walk you through the various workflows, JavaScript tasks, and the dashboards discussed in this blog series. If you’re an existing Dynatrace Managed customer looking to upgrade to Dynatrace SaaS, see How to start your journey to Dynatrace SaaS. The post Automate CI/CD pipelines with Dynatrace: Part 4, Validation stage appeared first on Dynatrace news. View the full article
-
Introduction Today customers want to reduce manual operations for deploying and maintaining their infrastructure. The recommended method to deploy and manage infrastructure on AWS is to follow the Infrastructure-as-Code (IaC) model using tools like AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), or Terraform. One of the critical components in Terraform is managing the state file, which keeps track of your configuration and resources. When you run Terraform in an AWS CI/CD pipeline, the state file has to be stored in a secured, common location to which the pipeline has access. You need a mechanism to lock it when multiple developers in the team want to access it at the same time. In this blog post, we will explain how to manage Terraform state files in AWS, best practices for configuring them in AWS, and an example of how you can manage them efficiently in your Continuous Integration pipeline in AWS when used with AWS Developer Tools such as AWS CodeCommit and AWS CodeBuild. This blog post assumes you have a basic knowledge of Terraform, AWS Developer Tools, and AWS CI/CD pipelines. Let’s dive in! Challenges with handling state files By default, the state file is stored locally where Terraform runs, which is not a problem if you are a single developer working on the deployment. However, if you are not, storing state files locally is not ideal, as you may run into the following problems: when working in teams or collaborative environments, multiple people need access to the state file; data in the state file is stored in plain text and may contain secrets or sensitive information; and local files can get lost, corrupted, or deleted. Best practices for handling state files The recommended practice for managing state files is to use Terraform’s built-in support for remote backends. These are: Remote backend on Amazon Simple Storage Service (Amazon S3): You can configure Terraform to store state files in an Amazon S3 bucket, which provides a durable and scalable storage solution. Storing on Amazon S3 also enables collaboration by allowing you to share the state file with others. Remote backend on Amazon S3 with Amazon DynamoDB: In addition to using an Amazon S3 bucket for managing the files, you can use an Amazon DynamoDB table to lock the state file. This will allow only one person to modify a particular state file at any given time. It will help to avoid conflicts and enable safe concurrent access to the state file. There are other options available as well, such as a remote backend on Terraform Cloud and third-party backends. Ultimately, the best method for managing Terraform state files on AWS will depend on your specific requirements. When deploying Terraform on AWS, the preferred choice for managing state is Amazon S3 with Amazon DynamoDB. AWS configurations for managing state files Create an Amazon S3 bucket using Terraform. Implement security measures for the Amazon S3 bucket by creating an AWS Identity and Access Management (AWS IAM) policy or an Amazon S3 bucket policy. Thus you can restrict access, configure object versioning for data protection and recovery, and enable AES256 encryption with SSE-KMS for encryption control. Next, create an Amazon DynamoDB table using Terraform with the primary key set to LockID. You can also set any additional configuration options such as read/write capacity units. Once the table is created, you will configure the Terraform backend to use it for state locking by specifying the table name in the terraform block of your configuration. 
For a single AWS account with multiple environments and projects, you can use a single Amazon S3 bucket. If you have multiple applications in multiple environments across multiple AWS accounts, you can create one Amazon S3 bucket for each account. In that Amazon S3 bucket, you can create appropriate folders for each environment, storing project state files with specific prefixes. Now that you know how to handle Terraform state files on AWS, let’s look at an example of how you can configure them in a Continuous Integration pipeline in AWS. Architecture Figure 1: Example architecture on how to use Terraform in an AWS CI pipeline This diagram outlines the workflow implemented in this blog: the AWS CodeCommit repository contains the application code; the AWS CodeBuild job contains the buildspec files and references the source code in AWS CodeCommit; the AWS Lambda function contains the application code created after running terraform apply; Amazon S3 contains the state file created after running terraform apply; and Amazon DynamoDB locks the state file present in Amazon S3. Implementation Pre-requisites Before you begin, you must complete the following prerequisites: install the latest version of the AWS Command Line Interface (AWS CLI); install the latest Terraform version; install the latest Git version and set up git-remote-codecommit; use an existing AWS account or create a new one; and use an AWS IAM role with a role profile, role permissions, a role trust relationship, and user permissions to access your AWS account from your local terminal. Setting up the environment You need an AWS access key ID and secret access key to configure the AWS CLI. To learn more about configuring the AWS CLI, follow these instructions. Clone the repo for the complete example: git clone https://github.com/aws-samples/manage-terraform-statefiles-in-aws-pipeline After cloning, you will see the following folder structure: Figure 2: AWS CodeCommit repository structure Let’s break down the Terraform code into two parts: one for preparing the infrastructure and another for preparing the application. Preparing the Infrastructure The main.tf file is the core component that does the following: It creates an Amazon S3 bucket to store the state file. We configure the bucket ACL, bucket versioning, and encryption so that the state file is secure. It creates an Amazon DynamoDB table, which will be used to lock the state file. It creates two AWS CodeBuild projects, one for ‘terraform plan’ and another for ‘terraform apply’. Note: it also has the code block (commented out by default) to create the AWS Lambda function, which you will use at a later stage. The AWS CodeBuild projects should be able to access Amazon S3, Amazon DynamoDB, AWS CodeCommit, and AWS Lambda. So, the AWS IAM role with the appropriate permissions required to access these resources is created via the iam.tf file. Next, you will find two buildspec files named buildspec-plan.yaml and buildspec-apply.yaml that execute the terraform plan and terraform apply commands, respectively. Modify the AWS Region in the provider.tf file. Update the Amazon S3 bucket name, Amazon DynamoDB table name, AWS CodeBuild compute types, and AWS Lambda role and policy names to the required values using the variable.tf file. You can also use this file to easily customize parameters for different environments. With this, the infrastructure setup is complete. You can use your local terminal and execute the below commands in the same order to deploy the above-mentioned resources in your AWS account: 
With this, the infrastructure setup is complete. From your local terminal, execute the following commands in order to deploy the resources described above into your AWS account:

terraform init
terraform validate
terraform plan
terraform apply

Once the apply is successful and all of the above resources have been deployed in your AWS account, proceed with deploying your application.

Preparing the Application

In the cloned repository, use the backend.tf file to configure your own Amazon S3 backend for storing the state file. By default, it contains the values below, which you can override with your required values:

bucket = "tfbackend-bucket"
key    = "terraform.tfstate"
region = "eu-central-1"

The repository has sample Python code in main.py that returns a simple message when invoked. In the main.tf file, you will find the block of code below, which creates and deploys the Lambda function that uses the main.py code (uncomment these code blocks):

data "archive_file" "lambda_archive_file" {
  ……
}

resource "aws_lambda_function" "lambda" {
  ……
}
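For orientation, a pairing of this kind usually zips the Python source and hands it to the Lambda resource, roughly as in the sketch below. This is a generic illustration rather than the repository's actual code: the handler name, runtime, file paths, and the aws_iam_role.lambda_role reference are assumptions (only the function name tf-codebuild is taken from the console test later in this post).

data "archive_file" "lambda_archive_file" {
  type        = "zip"
  source_file = "${path.module}/main.py"   # assumed path to the sample code
  output_path = "${path.module}/main.zip"
}

resource "aws_lambda_function" "lambda" {
  function_name    = "tf-codebuild"  # name used in the console test below
  filename         = data.archive_file.lambda_archive_file.output_path
  source_code_hash = data.archive_file.lambda_archive_file.output_base64sha256
  handler          = "main.lambda_handler"  # assumes main.py defines lambda_handler(event, context)
  runtime          = "python3.12"           # assumed runtime
  role             = aws_iam_role.lambda_role.arn  # assumed to be created in iam.tf
}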
Now you can deploy the application using AWS CodeBuild instead of running the Terraform commands locally, which is the whole point and advantage of using AWS CodeBuild. Run the two AWS CodeBuild projects to execute terraform plan and terraform apply again. Once they succeed, you can verify your deployment by testing the code in AWS Lambda.

To test the Lambda function from the console:

Open the AWS Lambda console and select your function "tf-codebuild"
In the Code section, choose Test to create a test event
Provide a name, for example "test-lambda"
Accept the default values and choose Save
Choose Test again to trigger your "test-lambda" test event

It should return the sample message you provided in your main.py file. In the default case, it displays "Hello from AWS Lambda !" as shown below.

Figure 3: Sample AWS Lambda function response

To verify your state file, go to the Amazon S3 console and select the backend bucket you created (tfbackend-bucket); it will contain your state file.

Figure 4: Amazon S3 bucket with Terraform state file

Open the Amazon DynamoDB console and check your table tfstate-lock; it will have an entry with a LockID.

Figure 5: Amazon DynamoDB table with LockID

You have now securely stored and locked your Terraform state file using a Terraform backend in a Continuous Integration pipeline.

Cleanup

To delete all the resources created as part of this walkthrough, run the following command from your terminal:

terraform destroy

Conclusion

In this blog post, we explored the fundamentals of Terraform state files, discussed best practices for storing them securely in AWS environments along with mechanisms for locking them to prevent conflicting concurrent changes, and showed an example of how you can manage them efficiently in a Continuous Integration pipeline in AWS. You can apply the same methodology to manage state files in a Continuous Delivery pipeline in AWS. For more information, see CI/CD pipeline on AWS, Terraform backend types, and Purpose of Terraform state.

Arun Kumar Selvaraj

Arun Kumar Selvaraj is a Cloud Infrastructure Architect with AWS Professional Services. He loves building world-class capabilities that provide thought leadership, operating standards, and a platform to deliver accelerated migration and development paths for his customers. His interests include migration, CCoE, IaC, Python, DevOps, containers, and networking.

Manasi Bhutada

Manasi Bhutada is an ISV Solutions Architect based in the Netherlands. She helps customers design and implement well-architected solutions in AWS that address their business problems. She is passionate about data analytics and networking. Beyond work she enjoys experimenting with food, playing pickleball, and diving into fun board games.

View the full article
-
Tagged with: best practices, terraform (and 1 more)
-
Jenkins is an open-source automation server widely used for building, testing, and deploying software projects. It provides a platform for continuous integration and continuous delivery (CI/CD), allowing development teams to automate various tasks in the software development lifecycle. View the full article
-
We've talked about how Continuous Integration and Continuous Delivery (CI/CD) tools can be a source of secrets sprawl. While it's not as insecure as leaving them lying around in a publicly accessible file, CI/CD pipelines can be exploited in a number of ways, and I'm going to share a few with you. This article is not exhaustive. GitHub's Security Hardening Guide for GitHub Actions alone is 16 pages long if you try to print it. OWASP's Top 10 CI/CD Security Risks is 38 pages long. Protecting your CI/CD systems is not a trivial task, but it's an important one. To get you started, here's a quick read on five ways attackers can leverage your CI/CD to gain access to additional systems. View the full article
-
CI/CD Explained

CI/CD stands for continuous integration and continuous deployment, and together they form the backbone of modern DevOps practice. CI/CD is the process that allows software to be continuously built, tested, and delivered in an automated cadence. In a rapidly evolving world with ever-increasing requirements, development and integration need to move at the same speed to keep up with business delivery.

What Is Continuous Integration?

CI, or continuous integration, is built on automated tests and builds. Changes made by developers are committed to a source branch of a shared repository, and any changes committed to this branch go through builds and testing before merging. This ensures consistent quality checks on the code that gets merged. View the full article
-
In today's fast-evolving technology landscape, the integration of Artificial Intelligence (AI) into Internet of Things (IoT) systems has become increasingly prevalent. AI-enhanced IoT systems have the potential to revolutionize industries such as healthcare, manufacturing, and smart cities. However, deploying and maintaining these systems can be challenging due to the complexity of the AI models and the need for seamless updates and deployments. This article is tailored for software engineers and explores best practices for implementing Continuous Integration and Continuous Deployment (CI/CD) pipelines for AI-enabled IoT systems, ensuring smooth and efficient operations. View the full article
-
Tools and platforms form the backbone of seamless software delivery in the ever-evolving world of Continuous Integration and Continuous Deployment (CI/CD). For years, Jenkins has been the stalwart, powering countless deployment pipelines and standing as the go-to solution for many DevOps professionals. But as the tech landscape shifts towards cloud-native solutions, AWS CodePipeline emerges as a formidable contender. Offering deep integration with the expansive AWS ecosystem and the agility of a cloud-based platform, CodePipeline is redefining the standards of modern deployment processes. This article dives into the transformative power of AWS CodePipeline, exploring its advantages over Jenkins and showing why many are switching to this cloud-native tool.

Brief Background About CodePipeline and Jenkins

At its core, AWS CodePipeline is Amazon Web Services' cloud-native continuous integration and continuous delivery service, allowing users to automate the build, test, and deployment phases of their release process. Tailored to the vast AWS ecosystem, CodePipeline leverages other AWS services, making it a seamless choice for teams already integrated with AWS cloud infrastructure. It promises scalability, maintenance ease, and enhanced security, characteristics inherent to many managed AWS services. On the other side of the spectrum is Jenkins, an open-source automation server with a storied history. Known for its flexibility, Jenkins has garnered immense popularity thanks to its extensive plugin system. It's a tool that has grown with the CI/CD movement, evolving from a humble continuous integration tool to a comprehensive automation platform that can handle everything from build to deployment and more. Together, these two tools represent two distinct eras and philosophies in the CI/CD domain. View the full article
-
Tagged with: jenkins, cloud-native (and 1 more)
-
Today, AWS CodePipeline announces support for retrying a pipeline execution from the first action in a stage that failed. This launch provides another remediation option for a failed pipeline execution in addition to the existing option of retrying a failed pipeline execution from the failed action(s). View the full article
-
Best DevOps Tools in 2024
Tagged with: devops tools
DevOpsSchool posted a topic in DevOps & SRE General Discussion
The post Best DevOps Tools in 2024 appeared first on DevOpsSchool.com. View the full article
-
About: cdCon + GitOpsCon will foster collaboration, discussion, and knowledge sharing by bringing communities, vendors, and end users together to meet, discuss, collaborate, and start shaping the future of GitOps and CD. Details: https://events.linuxfoundation.org/cdcon-gitopscon/ Event schedule: https://events.linuxfoundation.org/cdcon-gitopscon/program/schedule/
-
Jenkins is a popular open-source CI/CD tool that helps automate various aspects of software development, including building, testing, and deploying applications. Jenkins is highly extensible, with over 1,000 available plugins that integrate with a wide range of third-party tools and technologies. Consider a scenario where you're working on a large software project with multiple developers: testing each and every change manually can be time-consuming and prone to human error. This is where Jenkins test cases come in handy. View the full article
-
Tagged with: testing, test cases (and 2 more)
-
On the west coast of Canada, you will find Vancouver, British Columbia, home to the Canucks, breathtaking scenery, and the Granville Walk of Fame. You will also find the Vancouver Convention Center, which hosts some of the best views from any event space in the world. It was in this picturesque setting that the CD Foundation and OpenGitOps communities came together for a co-located event, cdCon + GitOpsCon 2023. These two communities are distinct but have aligned goals and visions for how DevOps needs to evolve. The CD Foundation acts as a host and incubator for open-source projects like Spinnaker and Jenkins, the newly graduated project Tekton, and the completely new cdEvents. They have a mission of defining continuous delivery best practices. OpenGitOps was started as a Cloud Native Computing Foundation working group with the goal of clearly defining a vendor-neutral, principle-led meaning of GitOps. View the full article
-
Let's start with a story: Have you heard the news about CircleCI's breach? No, not the one where they accidentally leaked some customer credentials a few years back. This time, it's a bit more serious. It seems that some unauthorized individuals were able to gain access to CircleCI's systems, compromising the secrets stored in CircleCI. CircleCI advised users to rotate "any and all secrets" stored in CircleCI, including those stored in project environment variables or contexts. View the full article
-
30 Best DevOps Tools to Learn and Master In 2023: Git, Docker ... https://www.simplilearn.com/tutorials/devops-tutorial/devops-tools
-
Tagged with:
- devops
- git
- docker
- gitlab
- github
- bitbucket
- maven
- jenkins
- chef
- puppet
- ansible
- kubernetes
- slack
- signalfx
- raygun
- splunk
- selenium
- testing tools
- tools
- gremlin
- servicenow
- elk
- elasticsearch
- logstash
- kibana
- terraform
- phantom
- nagios
- vagrant
- sentry
- gradle
- eg enterprise
- ci/cd
- bamboo
- gitlab ci
- travis ci
- circleci
- codepipeline
- mercurial
- subversion
- soapui
- testcomplete
- zephyr
- prometheus
- datadog
- new relic
- zabbix
-
GitHub Actions is relatively new to the world of automation and Continuous Integration (CI). Providing ‘CI as a Service,’ GitHub Actions has many differences from its traditional rival platforms. In this post, we explore the differences between GitHub Actions and traditional build servers. We also look at whether GitHub Actions is a suitable option for building and testing your code. View the full article
-