Showing results for tags 'codepipeline'.

Found 12 results

  1. Tools and platforms form the backbone of seamless software delivery in the ever-evolving world of Continuous Integration and Continuous Deployment (CI/CD). For years, Jenkins has been the stalwart, powering countless deployment pipelines and standing as the go-to solution for many DevOps professionals. But as the tech landscape shifts towards cloud-native solutions, AWS CodePipeline emerges as a formidable contender. Offering deep integration with the expansive AWS ecosystem and the agility of a cloud-based platform, CodePipeline is redefining the standards of modern deployment processes. This article dives into the transformative power of AWS CodePipeline, exploring its advantages over Jenkins and showing why many are switching to this cloud-native tool.

Brief Background About CodePipeline and Jenkins
At its core, AWS CodePipeline is Amazon Web Services' cloud-native continuous integration and continuous delivery service, allowing users to automate the build, test, and deployment phases of their release process. Tailored to the vast AWS ecosystem, CodePipeline leverages other AWS services, making it a seamless choice for teams already integrated with AWS cloud infrastructure. It promises scalability, maintenance ease, and enhanced security, characteristics inherent to many managed AWS services. On the other side of the spectrum is Jenkins – an open-source automation server with a storied history. Known for its flexibility, Jenkins has garnered immense popularity thanks to its extensive plugin system. It's a tool that has grown with the CI/CD movement, evolving from a humble continuous integration tool to a comprehensive automation platform that can handle everything from build to deployment and more. Together, these two tools represent two distinct eras and philosophies in the CI/CD domain. View the full article
  2. Today, AWS CodePipeline announces support for retrying a pipeline execution from the first action in a stage that failed. This launch provides another remediation option for a failed pipeline execution in addition to the existing option of retrying a failed pipeline execution from the failed action(s). View the full article
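For reference, the retry option announced above maps to the RetryStageExecution API. Below is a minimal sketch using boto3; the pipeline name, stage name, and execution ID are placeholders, and retryMode would be set to FAILED_ACTIONS to keep the existing behavior of retrying only the failed actions.

import boto3

codepipeline = boto3.client("codepipeline")

# Retry the failed stage from its first action (placeholder names and IDs).
# Use retryMode="FAILED_ACTIONS" to retry only the actions that failed.
response = codepipeline.retry_stage_execution(
    pipelineName="my-pipeline",
    stageName="Deploy",
    pipelineExecutionId="12345678-1234-1234-1234-123456789012",
    retryMode="ALL_ACTIONS",
)
print(response["pipelineExecutionId"])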
  3. In a few days, I will board a plane towards the south. My tour around Latin America starts. But I won't be alone in this adventure, you can find some other News Blog authors, like Jeff or Seb, speaking at AWS Community Days and local events in Peru, Argentina, Chile, and Uruguay. If you see us, come and say hi. We would love to meet you.

Last Week's Launches
Here are some launches that got my attention during the previous week.
AWS AppSync now supports JavaScript for all resolvers in GraphQL APIs – Last year, we announced that AppSync supports JavaScript pipeline resolvers. And starting last week, developers can use JavaScript to write unit resolvers, pipeline resolvers, and AppSync functions that run on the AppSync JavaScript runtime.
AWS CodePipeline now supports GitLab – Now you can use your GitLab.com source repository to build, test, and deploy code changes using AWS CodePipeline, in addition to other providers like AWS CodeCommit, Bitbucket, GitHub.com, and GitHub Enterprise Server.
Amazon CloudWatch Agent adds support for OpenTelemetry traces and AWS X-Ray – With the new version of the agent, you can now collect metrics, logs, and traces with a single agent, not only for CloudWatch but also for OpenTelemetry and AWS X-Ray, simplifying the installation, configuration, and management of telemetry collection.
New instance types: Amazon EC2 M7a and Amazon EC2 Hpc7a – The new Amazon EC2 M7a is a general purpose instance type powered by 4th Gen AMD EPYC processors. In the announcement blog post, you can find all the specifics for this instance type. The new Amazon EC2 Hpc7a instances are also powered by 4th Gen AMD EPYC processors. These instance types are optimized for high performance computing, and Channy Yun wrote a blog post describing the different characteristics of the Amazon EC2 Hpc7a instance type.
AWS DeepRacer Educator Playbooks – Last week we introduced the AWS DeepRacer educator playbooks, a tool for educators to integrate foundational machine learning (ML) curriculum and labs into their classrooms. Educators can use these playbooks to easily upskill students in the basics of ML with autonomous vehicles.
For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

Other AWS News
Some other updates and news that you might have missed:
Guide for using AWS Lambda to process Apache Kafka Streams – Julian Wood just published the most complete guide you can find on how to use Lambda with Apache Kafka. If you are an Amazon Kinesis user, don't worry. We've got you covered with this video series, where you will find similar topics.
The Official AWS Podcast – Listen each week for updates on the latest AWS news and deep dives into exciting use cases. There are also official AWS podcasts in several languages. Check out the ones in French, German, Italian, and Spanish.
AWS Open-Source News and Updates – This is a newsletter curated by my colleague Ricardo to bring you the latest open source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:
AWS Hybrid Cloud & Edge Day (August 30) – Join a free-to-attend one-day virtual event to hear about the latest hybrid cloud and edge computing trends and emerging technologies, and learn best practices from AWS leaders, customers, and industry analysts. To learn more, see the detailed agenda and register now.
AWS Global Summits – The 2023 AWS Summits season is almost over, with the last two in-person events in Mexico City (August 30) and Johannesburg (September 26).
AWS re:Invent (November 27–December 1) – But don't worry, because re:Invent season is coming closer. Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Registration is now open.
AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Taiwan (August 26), Aotearoa (September 6), Lebanon (September 9), Munich (September 14), Argentina (September 16), Spain (September 23), and Chile (September 30). Check all the upcoming AWS Community Days here.
CDK Day (September 29) – A community-led, fully virtual event with tracks in English and in Spanish about CDK and related projects. Learn more on the website.
That's all for this week. Check back next Monday for another Week in Review! This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!
— Marcia
View the full article
  4. As of February 2022, the AWS Cloud spans 84 Availability Zones within 26 geographic Regions, with announced plans for more Availability Zones and Regions. Customers can leverage this global infrastructure to expand their presence to their primary users, satisfy data residency requirements, and implement disaster recovery strategies to ensure business continuity. Although a multi-Region architecture addresses these requirements, deploying and configuring consistent infrastructure stacks across multiple Regions can be challenging, because AWS Regions are designed to be autonomous in nature. Multi-Region deployments with Terraform and AWS CodePipeline can help customers with these challenges. In this post, we'll demonstrate best practices for multi-Region deployments using HashiCorp Terraform as infrastructure as code (IaC), with AWS CodeBuild and CodePipeline providing continuous integration and continuous delivery (CI/CD) for consistent, repeatable deployments into multiple AWS Regions and AWS accounts. We'll dive deep into the IaC deployment pipeline architecture and the best practices for structuring the Terraform project and configuration for multi-Region deployment across multiple AWS target accounts... View the full article
  5. As customers commit to a DevOps mindset and embrace a nearly continuous integration/continuous delivery model to implement change with higher velocity, assessing every change's impact on an application's resilience is key. This blog shows an architecture pattern for automating resiliency assessments as part of your CI/CD pipeline. By automatically running a resiliency assessment within CI/CD pipelines, development teams can fail fast and quickly understand if a change negatively impacts an application's resilience. The pipeline can stop the deployment into further environments, such as QA/UAT and Production, until the resilience issues have been addressed. AWS Resilience Hub is a managed service that gives you a central place to define, validate, and track the resiliency of your AWS applications. It is integrated with AWS Fault Injection Simulator (FIS), a chaos engineering service, to provide fault-injection simulations of real-world failures. Using AWS Resilience Hub, you can assess your applications to uncover potential resilience enhancements. This allows you to validate your application's recovery time objective (RTO) and recovery point objective (RPO) and optimize business continuity while reducing recovery costs. Resilience Hub also provides APIs for you to integrate its assessment and testing into your CI/CD pipelines for ongoing resilience validation (a sketch of this integration follows this item). AWS CodePipeline is a fully managed continuous delivery service for fast and reliable application and infrastructure updates. You can use AWS CodePipeline to model and automate your software release processes. This enables you to increase the speed and quality of your software updates by running all new changes through a consistent set of quality checks... View the full article
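As a rough illustration of that API integration, here is a hedged boto3 sketch a pipeline step could run to start an assessment and gate the deployment on the result. The application ARN, version, assessment name, polling interval, and exact response fields are assumptions for illustration, not values from the post.

import time
import boto3

resiliencehub = boto3.client("resiliencehub")

# Start an assessment against a published application version (placeholder ARN).
assessment = resiliencehub.start_app_assessment(
    appArn="arn:aws:resiliencehub:us-east-1:111122223333:app/example-app",
    appVersion="release",
    assessmentName="cicd-resiliency-check",
)["assessment"]

# Poll until the assessment finishes, then fail this pipeline step if the
# application does not meet its resiliency policy (RTO/RPO targets).
while True:
    result = resiliencehub.describe_app_assessment(
        assessmentArn=assessment["assessmentArn"]
    )["assessment"]
    if result["assessmentStatus"] in ("Success", "Failed"):
        break
    time.sleep(30)

if result["assessmentStatus"] != "Success" or result.get("complianceStatus") != "PolicyMet":
    raise SystemExit("Resiliency assessment did not meet the policy; stopping the pipeline.")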
  6. You can now use your GitHub Enterprise Server source repository to build, test, and deploy code changes using AWS CodePipeline. View the full article
  7. AWS CodePipeline Source Action now supports cloning of AWS CodeCommit repositories. With this improvement, when you define a Source Action, CodePipeline will clone the CodeCommit git repository to fetch the commit history and metadata. View the full article
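The clone behavior above is controlled by the source action's output artifact format. As a hedged sketch (the repository and branch names are placeholders), this is roughly how the source action fragment could look when defining a pipeline with boto3, with CODEBUILD_CLONE_REF selecting the full Git clone instead of the default CODE_ZIP archive:

# Source-stage action fragment for boto3's create_pipeline/update_pipeline.
# "CODEBUILD_CLONE_REF" passes a Git clone reference (full commit history and
# metadata) to downstream actions instead of the default "CODE_ZIP" snapshot.
source_action = {
    "name": "Source",
    "actionTypeId": {
        "category": "Source",
        "owner": "AWS",
        "provider": "CodeCommit",
        "version": "1",
    },
    "configuration": {
        "RepositoryName": "my-repo",   # placeholder repository name
        "BranchName": "main",          # placeholder branch name
        "OutputArtifactFormat": "CODEBUILD_CLONE_REF",
    },
    "outputArtifacts": [{"name": "SourceOutput"}],
}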
  8. In the post Using Custom Source Actions in AWS CodePipeline for Increased Visibility for Third-Party Source Control, we demonstrated using custom actions in AWS CodePipeline with a worker that periodically polls for jobs and processes them to fetch the artifact from the Git repository (a sketch of such a polling worker follows this item). View the full article
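For context, a custom-action job worker like the one described above talks to CodePipeline through its poll/acknowledge/report APIs. Below is a minimal, hedged sketch of that loop with boto3; the custom action type details (provider name, category, version) are placeholders for whatever action type you registered, and the artifact fetch and upload are elided.

import boto3

codepipeline = boto3.client("codepipeline")

def poll_once():
    # Ask CodePipeline for pending jobs for our registered custom source action.
    jobs = codepipeline.poll_for_jobs(
        actionTypeId={
            "category": "Source",
            "owner": "Custom",
            "provider": "MyGitProvider",  # placeholder custom provider name
            "version": "1",
        },
        maxBatchSize=1,
    )["jobs"]

    for job in jobs:
        # Claim the job so no other worker processes it.
        codepipeline.acknowledge_job(jobId=job["id"], nonce=job["nonce"])
        try:
            # ... fetch the artifact from the Git repository and upload it to the
            # S3 location described in job["data"]["outputArtifacts"] ...
            codepipeline.put_job_success_result(jobId=job["id"])
        except Exception as error:
            codepipeline.put_job_failure_result(
                jobId=job["id"],
                failureDetails={"type": "JobFailed", "message": str(error)},
            )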
  9. Researchers at Academic Medical Centers (AMCs) use programs such as Observational Health Data Sciences and Informatics (OHDSI) and Research Electronic Data Capture (REDCap) to interact with healthcare data. Our internal team at AWS has provided solutions such as OHDSI-on-AWS and REDCap environments on AWS to help clinicians analyze healthcare data in the AWS Cloud. Occasionally, these solutions break due to a change in some portion of the solution (e.g., updated services). The Automated Solutions Testing Pipeline enables our team to take a proactive approach to discovering these breaks and their causes in order to expedite the repair process. OHDSI-on-AWS provides these AMCs with the ability to store and analyze observational health data in the AWS Cloud. REDCap is a web application for managing surveys and databases with HIPAA-compliant environments. Using our solutions, these programs can be spun up easily on AWS infrastructure using AWS CloudFormation templates. Updates to AWS services and other program libraries can cause the CloudFormation template to fail during deployment. Other times, the outputs may not be operating correctly, or the template may not work in every AWS Region. This can create a negative customer experience. Some customers may discover this kind of break and decide not to move forward with using the solution. Other customers may not even realize the solution is broken, so they might be unknowingly working with an uncooperative environment. Furthermore, we cannot always provide fast support to the customers who contact us about broken solutions. To meet our team's needs and the needs of our customers, we decided to focus our efforts on taking a CI/CD approach to maintaining these solutions. We developed the Automated Testing Pipeline, which regularly tests solution deployment and changes to source files. This post shows the features of the Automated Testing Pipeline and provides resources to help you get started using it with your AWS account.

Overview of Automated Testing Pipeline Solution
The Automated Testing Pipeline solution as a whole is designed to automatically deploy CloudFormation templates, run tests against the deployed environments, send notifications if an issue is discovered, and allow for insightful testing data to be easily explored. CloudFormation templates to be tested are stored in an Amazon S3 bucket. Custom test scripts and TaskCat deployment configuration are stored in an AWS CodeCommit repository. The pipeline is triggered in one of three ways: an update to the CloudFormation template in S3, an Amazon CloudWatch Events rule, or an update to the testing source code repository. Once the pipeline has been triggered, AWS CodeBuild pulls the source code to deploy the CloudFormation template, test the deployed environment, and store the results in an S3 bucket. If any failures are discovered, subscribers to the failure topic are notified. The following diagram shows its overall architecture.

Diagram of Automated Testing Pipeline architecture

In order to create the Automated Testing Pipeline, two interns collaborated over the course of 5 weeks to produce the architecture and custom test scripts. We divided the work of constructing a serverless architecture and writing test scripts for the output URLs of the OHDSI-on-AWS and REDCap environments on AWS.
The following tasks were completed to build out the Automated Testing Pipeline solution:
  • Set up AWS IAM roles for accessing AWS resources securely
  • Create CloudWatch events to trigger AWS CodePipeline
  • Set up CodePipeline and CodeBuild to run TaskCat and testing scripts
  • Configure TaskCat to deploy CloudFormation solutions in various AWS Regions
  • Write test scripts to interact with CloudFormation solutions' deployed environments
  • Subscribe to receive emails detailing test results
  • Create a CloudFormation template for the Automated Testing Pipeline
The architecture can be extended to test any CloudFormation stack. For this particular use case, we wrote the test scripts specifically to test the URLs output by the CloudFormation solutions. The Automated Testing Pipeline has the following features:
  • Deployed in a single AWS Region, with the exception of the tested CloudFormation solution
  • Has a serverless architecture operating at the AWS Region level
  • Deploys a pipeline which can deploy and test the CloudFormation solution
  • Creates CloudWatch events to activate the pipeline on a schedule or when the solution is updated
  • Creates an Amazon SNS topic for notifying subscribers when there are errors
  • Includes code for running TaskCat and scripts to test solution functionality
  • Built automatically in minutes
  • Low in cost with free tier benefits
The pipeline is triggered automatically when an event occurs. These events include a change to the CloudFormation solution template, a change to the code in the testing repository, and an alarm set off by a regular schedule. Additional events can be added in the CloudWatch console. When the pipeline is triggered, the testing environment is set up by CodeBuild. CodeBuild uses a build specification file kept within our source repository to set up the environment and run the test scripts. We created a CodeCommit repository to host the test scripts alongside the build specification. The build specification includes commands to run TaskCat, an open-source tool for testing the deployment of CloudFormation templates. TaskCat provides the ability to test the deployment of the CloudFormation solution, but we needed custom test scripts to ensure that we can interact with the deployed environment as expected. If the template is successfully deployed, CodeBuild handles running the test scripts against the CloudFormation solution environment. In our case, the environment is accessed via URLs output by the CloudFormation solution. We used a Selenium WebDriver for interacting with the web pages given by the output URLs. This allowed us to programmatically navigate a headless web browser in the serverless environment and gave us the ability to use text output by JavaScript functions to understand the state of the test. You can see this interaction occurring in the code snippet below.
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.ui import WebDriverWait


def log_in(driver, user, passw, link, btn_path, title):
    """Enter username and password then submit to log in

    :param driver: webdriver for Chrome page
    :param user: username as String
    :param passw: password as String
    :param link: url for page being tested as String
    :param btn_path: xpath to submit button
    :param title: expected page title upon successful sign in
    :return: success String tuple if log in completed, failure description tuple String otherwise
    """
    try:
        # post username and password data
        driver.find_element_by_xpath("//input[ @name='username' ]").send_keys(user)
        driver.find_element_by_xpath("//input[ @name='password' ]").send_keys(passw)
        # click sign in button and wait for page update
        driver.find_element_by_xpath(btn_path).click()
    except NoSuchElementException:
        return 'FAILURE', 'Unable to access page elements'

    try:
        WebDriverWait(driver, 20).until(ec.url_changes(link))
        WebDriverWait(driver, 20).until(ec.title_is(title))
    except TimeoutException as e:
        print("Timeout occurred (" + str(e) + ") while attempting to sign in to " + driver.current_url)
        if "Sign In" in driver.title or "invalid user" in driver.page_source.lower():
            return 'FAILURE', 'Incorrect username or password'
        else:
            return 'FAILURE', 'Sign in attempt timed out'

    return 'SUCCESS', 'Sign in complete'

We store the test results in JSON format for ease of parsing. TaskCat generates a dashboard which we customize to display these test results. We are able to insert our JSON results into the dashboard in order to make it easy to find errors and access log files. This dashboard is a static html file that can be hosted on an S3 bucket. In addition, messages that provide a link to this dashboard are published to SNS topics whenever an error occurs.

Customized TaskCat dashboard

In true CI/CD fashion, this end-to-end design automatically performs tasks that would otherwise be performed manually. We have shown how deploying solutions, testing solutions, notifying maintainers, and providing a results dashboard are all actions handled entirely by the Automated Testing Pipeline.

Getting Started with the Automated Testing Pipeline
Prerequisite tasks to complete before deploying the pipeline:
  • Clone the repository found at this GitHub page
  • Create an EC2 key pair in the Region in which the CloudFormation solution will be deployed
Once the prerequisite tasks are completed, the pipeline is ready to be deployed. Detailed information about deployment, altering the source code to fit your use case, and troubleshooting issues can be found at the GitHub page for the Automated Testing Pipeline. For those looking to jump right into deployment, click the Launch Stack button below.
Tasks to complete after deployment:
  • Subscribe to the SNS topic for error messages
  • Update the code to match the parameters and CloudFormation template that were chosen (skip this step if you are testing OHDSI-on-AWS)
  • Upload the desired CloudFormation template to the created source S3 bucket
  • Push the source code to the created CodeCommit repository
After the code is pushed to the CodeCommit repository and the CloudFormation template has been uploaded to S3, the pipeline will run automatically. You can visit the CodePipeline console to confirm that the pipeline is running with an "in progress" status. You may desire to alter various aspects of the Automated Testing Pipeline to better fit your use case.
Listed below are some actions you can take to modify the solution to fit your needs:
  • Go to CloudWatch Events and update the rules for automatically starting the pipeline.
  • Scale out testing by providing custom testing scripts or altering the existing ones.
  • Test a different CloudFormation template by uploading it to the source S3 bucket created and configuring the pipeline accordingly. Custom test scripts will likely be required for this use case.

Challenges Addressed by the Automated Testing Pipeline
The Automated Testing Pipeline directly addresses the challenges we faced with maintaining our OHDSI and REDCap solutions. Additionally, the pipeline can be used whenever there is a need to test CloudFormation templates that are being used on a regular basis or are distributed to other users. Listed below is the set of specific challenges we faced maintaining CloudFormation solutions and how the pipeline addresses them. The desire to better serve our customers guided our decision to create the Automated Testing Pipeline. For example, we know that source code used to build the OHDSI-on-AWS environment changes on occasion. Some of these changes have caused the environment to stop functioning correctly. This left us with cases where our customers had to either open an issue on GitHub or reach out to AWS directly for support. Our customers depend on OHDSI-on-AWS functioning properly, so fixing issues is of high priority to our team. The ability to run tests regularly allows us to take action without depending on notice from our customers. Now, we can be the first ones to know if something goes wrong and get to fixing it sooner.
"This automation will help us better monitor the CloudFormation-based projects our customers depend on to ensure they're always in working order." — James Wiggins, EDU HCLS SA Manager

Cleaning Up
If you decide to quit using the Automated Testing Pipeline, follow the steps below to remove the resources associated with it in your AWS account:
  • Delete the CloudFormation solution root stack
  • Delete the pipeline CloudFormation stack
  • Delete the ATLAS S3 bucket if OHDSI-on-AWS was chosen
Deleting the pipeline CloudFormation stack handles removing the resources associated with its architecture. Depending on the CloudFormation template chosen for testing, additional resources associated with it may need to be removed. Visit our GitHub page for more information on removing resources.

Conclusion
The ability to continuously test preexisting solutions on AWS has great benefits for our team and our customers. The automated nature of this testing frees up time for us and our customers, and the dashboard makes issues more visible and easier to resolve. We believe that sharing this story can benefit anyone facing challenges maintaining CloudFormation solutions in AWS. Check out the Getting Started with the Automated Testing Pipeline section of this post to deploy the solution.

Additional Resources
More information about the key services and open-source software used in our pipeline can be found at the following documentation pages: AWS CloudFormation documentation, Amazon CloudWatch documentation, AWS CodePipeline documentation, AWS CodeBuild documentation, Amazon SNS documentation, AWS IAM documentation, TaskCat documentation, Selenium documentation, Boto3 documentation.

About the Authors
Raleigh Hansen is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. She is passionate about solving problems and improving upon existing systems. She also adores spending time with her two cats.
Dan Le is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. He is passionate about technology and enjoys doing art and music. View the full article
  10. AWS CodeStar Connections is a new feature that allows services like AWS CodePipeline to access third-party source code providers. For example, you can now seamlessly connect your Atlassian Bitbucket Cloud source repository to AWS CodePipeline. This allows you to automate the build, test, and deploy phases of your release process each time a code change occurs. This new feature is available in the following Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), and South America (São Paulo).
The practice of tracking and managing changes to code, or source control, is a foundational element of the development process. Therefore, source control management systems are an essential tool for any developer. In this post, we focus on one specific Git code management product: Atlassian Bitbucket. You can get started for free with Bitbucket Cloud. Atlassian provides detailed documentation on getting started with Bitbucket Cloud, which includes topics such as setting up a team, creating a repository, working with branches, and more. For more information, see Get started with Bitbucket Cloud.

Prerequisite
For this use case, you use a Bitbucket account, repository, and Amazon Simple Storage Service (Amazon S3) bucket that we have already created. To follow along, you should have the following:
  • A working knowledge of Git and how to fork or clone within your source provider
  • Familiarity with hosting a static website on Amazon S3
To follow along, you will need a sample page. Here is some simple HTML code that you can name index.html and add to your repo:

<html>
  <head>
    Example Header
  </head>
  <body>
    Example Body Text
  </body>
</html>

Solution overview
For this use case, you deploy a Hugo website from your Bitbucket Cloud repository to your S3 bucket using CodePipeline. You can then connect your Bitbucket Cloud account to your AWS account to deploy code natively. The walkthrough contains the following steps:
  • Set up CodeStar connections.
  • Add a deployment stage.
  • Use CI/CD to update your website.

Setting up CodeStar connections
When connecting CodePipeline to Bitbucket Cloud, it helps if you have already signed in to Bitbucket. After you sign in to Bitbucket Cloud, you perform the rest of the connection steps on the AWS Management Console.
  • On the console, search for CodePipeline.
  • Choose CodePipeline.
  • Choose Pipelines.
  • Choose Create pipeline.
  • For Pipeline name, enter a name.
  • For Service role, select New service role.
  • For Role name, enter a name for the service role.
  • Choose Next.
  • For Source provider, choose Bitbucket Cloud.
  • For Connection, choose Connect to Bitbucket Cloud.
  • For Connection name, enter a name.
  • For Bitbucket Cloud apps, choose Install a new app. If this isn't your first time making a connection, you can choose an existing connection.
  • Choose Connect.
  • Confirm you're logged in as the correct user and choose Grant access.
  • Choose Connect.
  • For Repository name, choose your repository.
  • For Branch name, choose your branch.
  • For Output artifact format, select CodePipeline default.
  • Choose Next.

Adding a deployment stage
Now that you have created a source stage, you can add a deployment stage.
  • On the Add build stage page, choose Skip build stage. For this use case, you skip the build stage, but if you need to build your own code, choose your build provider from the drop-down menu. You are prompted to confirm you want to skip the build stage. Choose Skip.
  • For Deploy provider, choose Amazon S3. If you have a different destination type or are hosting on traditional compute, you can choose other providers.
  • For Region, choose the Region your S3 bucket is in.
  • For Bucket, choose the bucket you are deploying to. Optionally, you can also choose a deploy path if you need to deploy to a sub-folder.
  • Select Extract file before deploy.
  • Choose Next.
  • Review your configuration and choose Create pipeline.
If the settings are correct, you see a green success banner and the initial deployment of your pipeline runs successfully. The following screenshot shows our first deployment. Now that the pipeline shows that the deployment was successful, you can check the S3 bucket to make sure the site is being hosted. You should see your static webpage, as in the following screenshot.

Using CI/CD to update our website
Now that you have created your pipeline, you can edit your website using your IDE, push the changes, and validate that those changes are automatically deployed to the website. For this step, I already cloned my repository and have it opened in my IDE.
  • Open your code in your preferred IDE.
  • Make the change to your code and push it to Bitbucket. The following screenshot shows that we updated the message that viewers see on our website and pushed our code.
  • Look at the pipeline and make sure your code is being processed. The following screenshot shows that the stages were successful and the pipeline processed the correct commit.
After your pipeline is successful, you can check the end result. The following screenshot shows our static webpage.

Clean up
If you created any resources during this walkthrough that you do not plan on keeping, make sure you clean them up to avoid incurring costs associated with the services.

Summary
Being able to let your developers use their repository of choice can be important in your transition to the cloud. CodeStar Connections makes it easy for you to set up Bitbucket Cloud as a source provider in the AWS Code Suite (see the sketch after this item for how a pipeline references such a connection). Get started building your CI/CD pipeline using Bitbucket Cloud and the AWS Code Suite. View the full article
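The console steps above create a connection that pipelines reference by ARN. As a hedged sketch (the connection ARN, repository, and branch are placeholders), a Bitbucket Cloud source action defined programmatically with boto3 would look roughly like this, using the CodeStarSourceConnection provider:

# Source-stage action fragment for boto3's create_pipeline/update_pipeline.
# The connection ARN comes from the CodeStar connection created in the console.
bitbucket_source_action = {
    "name": "Source",
    "actionTypeId": {
        "category": "Source",
        "owner": "AWS",
        "provider": "CodeStarSourceConnection",
        "version": "1",
    },
    "configuration": {
        "ConnectionArn": "arn:aws:codestar-connections:us-east-1:111122223333:connection/example",  # placeholder
        "FullRepositoryId": "my-workspace/my-repo",  # placeholder workspace/repository
        "BranchName": "main",                        # placeholder branch
    },
    "outputArtifacts": [{"name": "SourceOutput"}],
}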
  11. This post discusses the benefits of building an AWS CI/CD pipeline in AWS CodePipeline for multi-Region deployment, and how to build it. The CI/CD pipeline triggers on application code changes pushed to your AWS CodeCommit repository. This automatically feeds into AWS CodeBuild for static and security analysis of the CloudFormation template. Another CodeBuild instance builds the application to generate an AMI image as output. AWS Lambda then copies the AMI image to other Regions. Finally, AWS CloudFormation cross-region actions are triggered to provision the instance into target Regions based on the AMI image. The solution is based on a single pipeline with cross-region actions, which helps provision resources in the current Region and other Regions. This solution also lets you manage the complete CI/CD pipeline in one place, in one Region, and serves as a single point for monitoring and deployment changes. This incurs less cost because a single pipeline can deploy the application into multiple Regions. As a security best practice, the solution also incorporates static and security analysis using cfn-lint and cfn-nag. You use these tools to scan CloudFormation templates for security vulnerabilities. The following diagram illustrates the solution architecture.

Multi region AWS CodePipeline architecture

Prerequisites
Before getting started, you must complete the following prerequisites:
  • Create a repository in CodeCommit and provide access to your user
  • Copy the sample source code from GitHub into your repository
  • Create an Amazon S3 bucket in the current Region and each target Region for your artifact store

Creating a pipeline with AWS CloudFormation
You use a CloudFormation template for your CI/CD pipeline, which can perform the following actions:
  • Use a CodeCommit repository as the source code repository
  • Run static code analysis on the CloudFormation template to check against the resource specification and block provisioning if this check fails
  • Run security code analysis on the CloudFormation template to check against secure infrastructure rules and block provisioning if this check fails
  • Compile and unit test the application code to generate an AMI image
  • Copy the AMI image into target Regions for deployment
  • Deploy into multiple Regions using the CloudFormation template; for example, us-east-1, us-east-2, and ap-south-1
You use a sample web application to run through your pipeline, which requires Java and Apache Maven for compilation and testing. Additionally, it uses Tomcat 8 for deployment. The following table summarizes the resources that the CloudFormation template creates.
Resource Name | Type | Objective
CloudFormationServiceRole | AWS::IAM::Role | Service role for AWS CloudFormation
CodeBuildServiceRole | AWS::IAM::Role | Service role for CodeBuild
CodePipelineServiceRole | AWS::IAM::Role | Service role for CodePipeline
LambdaServiceRole | AWS::IAM::Role | Service role for Lambda function
SecurityCodeAnalysisServiceRole | AWS::IAM::Role | Service role for security analysis of provisioning CloudFormation template
StaticCodeAnalysisServiceRole | AWS::IAM::Role | Service role for static analysis of provisioning CloudFormation template
StaticCodeAnalysisProject | AWS::CodeBuild::Project | CodeBuild for static analysis of provisioning CloudFormation template
SecurityCodeAnalysisProject | AWS::CodeBuild::Project | CodeBuild for security analysis of provisioning CloudFormation template
CodeBuildProject | AWS::CodeBuild::Project | CodeBuild for compilation, testing, and AMI creation
CopyImage | AWS::Lambda::Function | Python Lambda function for copying AMI images into other Regions
AppPipeline | AWS::CodePipeline::Pipeline | CodePipeline for CI/CD

To start creating your pipeline, complete the following steps:
  • Launch the CloudFormation stack with the following link: Launch button for CloudFormation
  • Choose Next.
  • For Specify details, provide the following values:

Parameter | Description
Stack name | Name of your stack
OtherRegion1 | Input the target Region 1 (other than current Region) for deployment
OtherRegion2 | Input the target Region 2 (other than current Region) for deployment
RepositoryBranch | Branch name of the repository
RepositoryName | Repository name of the project
S3BucketName | Input the S3 bucket name for the artifact store
S3BucketNameForOtherRegion1 | Create a bucket in target Region 1 and specify the name for the artifact store
S3BucketNameForOtherRegion2 | Create a bucket in target Region 2 and specify the name for the artifact store

  • Choose Next.
  • On the Review page, select I acknowledge that this template might cause AWS CloudFormation to create IAM resources.
  • Choose Create.
Wait for the CloudFormation stack status to change to CREATE_COMPLETE (this takes approximately 5–7 minutes). When the stack is complete, your pipeline should be ready and running in the current Region. To validate the pipeline, check the images and EC2 instances running in the target Regions and also refer to the AWS CodePipeline execution summary shown below.

AWS CodePipeline Execution Summary

We will walk you through the following steps for creating a multi-region deployment pipeline:

1. Using CodeCommit as your source code repository
The deployment workflow starts by placing the application code in the CodeCommit repository. When you add or update the source code in CodeCommit, the action generates a CloudWatch event, which triggers the pipeline to run.

2. Static code analysis of CloudFormation template to provision AWS resources
Historically, AWS CloudFormation linting was limited to the ValidateTemplate action in the service API. This action tells you if your template is well-formed JSON or YAML, but doesn't help validate the actual resources you've defined. You can use a linter such as the cfn-lint tool for static code analysis to improve your AWS CloudFormation development cycle. The tool validates the provisioning CloudFormation template properties and their values (mappings, joins, splits, conditions, and nesting those functions inside each other) against the resource specification. This can cover the most common of the underlying service constraints and help encode some best practices.
The following rules cover underlying service constraints:
  • E2530 – Checks that Lambda functions have correctly configured memory sizes
  • E3025 – Checks that your RDS instances use correct instance types for the database engine
  • W2001 – Checks that each parameter is used at least once
You can also add this step as a pre-commit hook for your Git repository if you are using CodeCommit or GitHub. You provision a CodeBuild project for static code analysis as the first step in CodePipeline after source. This helps in early detection of any linter issues.

3. Security code analysis of CloudFormation template to provision AWS resources
You can use Stelligent's cfn_nag tool to perform additional validation of your template resources for security. The cfn-nag tool looks for patterns in CloudFormation templates that may indicate insecure infrastructure provisioning and validates against AWS best practices. For example:
  • IAM rules that are too permissive (wildcards)
  • Security group rules that are too permissive (wildcards)
  • Access logs that aren't enabled
  • Encryption that isn't enabled
  • Password literals
You provision a CodeBuild project for security code analysis as the second step in CodePipeline. This helps detect any insecure infrastructure provisioning issues.

4. Compiling and testing application code and generating an AMI image
Because you use a Java-based application for this walkthrough, you use Amazon Corretto as your JVM. Corretto is a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto comes with long-term support that includes performance enhancements and security fixes. You also use Apache Maven as a build automation tool to build the sample application, and the HashiCorp Packer tool to generate an AMI image for the application. You provision a CodeBuild project for compilation, unit testing, AMI generation, and storing the AMI ImageId in the Parameter Store, which the CloudFormation template uses in the next step of the pipeline.

5. Copying the AMI image into target Regions
You use a Lambda function to copy the AMI image into the target Regions so the CloudFormation template can use it to provision instances into those Regions in the next step of the pipeline. It also writes the target Region AMI ImageId into the target Region's Parameter Store (a hedged sketch of such a function follows this item).

6. Deploying into multiple Regions with the CloudFormation template
You use the CloudFormation template as a cross-region action to provision AWS resources into a target Region. CloudFormation uses the Parameter Store ImageId as a reference and provisions the instances into the target Region.

Cleaning up
To avoid additional charges, you should delete the following AWS resources after you validate the pipeline:
  • The cross-region CloudFormation stack in the target and current Regions
  • The main CloudFormation stack in the current Region
  • The AMI you created in the target and current Regions
  • The Parameter Store AMI_VERSION in the target and current Regions

Conclusion
You have now created a multi-region deployment pipeline in CodePipeline without having to worry about the mechanics of creating and copying AMI images across Regions. CodePipeline abstracts the creating and copying of the images in the background in each Region. You can now upload new source code changes to the CodeCommit repository in the primary Region, and changes deploy automatically to other Regions. Cross-region actions are very powerful and are not limited to deploy actions. You can also use them with build and test actions. View the full article
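To make step 5 concrete, here is a hedged sketch of what the copy-AMI Lambda logic might look like with boto3; the AMI name and Parameter Store parameter name are placeholders, not the exact values used in the post's sample code.

import boto3

def copy_ami_to_region(source_image_id, source_region, target_region):
    # Copy the AMI built by CodeBuild from the pipeline's Region into a target Region.
    ec2 = boto3.client("ec2", region_name=target_region)
    copied = ec2.copy_image(
        Name="app-ami",                  # placeholder AMI name
        SourceImageId=source_image_id,
        SourceRegion=source_region,
    )

    # Record the new ImageId in the target Region's Parameter Store so the
    # cross-region CloudFormation action can resolve it at deploy time.
    ssm = boto3.client("ssm", region_name=target_region)
    ssm.put_parameter(
        Name="/app/ami-id",              # placeholder parameter name
        Value=copied["ImageId"],
        Type="String",
        Overwrite=True,
    )
    return copied["ImageId"]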