Showing results for tags 'codepipeline'.

Found 11 results

  1. In a few days, I will board a plane towards the south. My tour around Latin America starts. But I won’t be alone in this adventure; you can find some other News Blog authors, like Jeff or Seb, speaking at AWS Community Days and local events in Peru, Argentina, Chile, and Uruguay. If you see us, come and say hi. We would love to meet you.

Last Week’s Launches
Here are some launches that got my attention during the previous week. AWS AppSync now supports JavaScript for all resolvers in GraphQL APIs – Last year, we announced that AppSync now supports JavaScript pipeline resolvers. And starting last week, developers can use JavaScript to write unit resolvers, pipeline resolvers, and AppSync functions that run on the AppSync JavaScript runtime. AWS CodePipeline now supports GitLab – Now you can use your GitLab.com source repository to build, test, and deploy code changes using AWS CodePipeline, in addition to other providers like AWS CodeCommit, Bitbucket, GitHub.com, and GitHub Enterprise Server. Amazon CloudWatch Agent adds support for OpenTelemetry traces and AWS X-Ray – With the new version of the agent you are now able to collect metrics, logs, and traces with a single agent, not only for CloudWatch but also for OpenTelemetry and AWS X-Ray, simplifying the installation, configuration, and management of telemetry collection. New instance types: Amazon EC2 M7a and Amazon EC2 Hpc7a – The new Amazon EC2 M7a is a general purpose instance type powered by 4th Gen AMD EPYC processors. In the announcement blog, you can find all the specifics for this instance type. The new Amazon EC2 Hpc7a instances are also powered by 4th Gen AMD EPYC processors. These instance types are optimized for high performance computing, and Channy Yun wrote a blog post describing the different characteristics of the Amazon EC2 Hpc7a instance type. AWS DeepRacer Educator Playbooks – Last week we introduced the AWS DeepRacer educator playbooks, a tool for educators to integrate foundational machine learning (ML) curriculum and labs into their classrooms. Educators can use these playbooks to easily upskill students in the basics of ML with autonomous vehicles. For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

Other AWS News
Some other updates and news that you might have missed: Guide for using AWS Lambda to process Apache Kafka Streams – Julian Wood just published the most complete guide you can find on how to use Lambda with Apache Kafka. If you are an Amazon Kinesis user, don’t worry. We’ve got you covered with this video series where you will find similar topics. The Official AWS Podcast – Listen each week for updates on the latest AWS news and deep dives into exciting use cases. There are also official AWS podcasts in several languages. Check out the ones in French, German, Italian, and Spanish. AWS Open-Source News and Updates – This is a newsletter curated by my colleague Ricardo to bring you the latest open source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events: AWS Hybrid Cloud & Edge Day (August 30) – Join a free-to-attend one-day virtual event to hear the latest hybrid cloud and edge computing trends and emerging technologies, and learn best practices from AWS leaders, customers, and industry analysts. To learn more, see the detailed agenda and register now.
AWS Global Summits – The 2023 AWS Summits season is almost over, with the last two in-person events in Mexico City (August 30) and Johannesburg (September 26). AWS re:Invent (November 27–December 1) – But don’t worry, because re:Invent season is getting closer. Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Registration is now open. AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Taiwan (August 26), Aotearoa (September 6), Lebanon (September 9), Munich (September 14), Argentina (September 16), Spain (September 23), and Chile (September 30). Check all the upcoming AWS Community Days here. CDK Day (September 29) – A community-led, fully virtual event with tracks in English and in Spanish about CDK and related projects. Learn more on the website. That’s all for this week. Check back next Monday for another Week in Review! This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS! — Marcia View the full article
  2. As of February 2022, the AWS Cloud spans 84 Availability Zones within 26 geographic Regions, with announced plans for more Availability Zones and Regions. Customers can leverage this global infrastructure to expand their presence to reach their target users, satisfy data residency requirements, and implement disaster recovery strategies to ensure business continuity. Although a multi-Region architecture addresses these requirements, deploying and configuring consistent infrastructure stacks across multiple Regions can be challenging, because AWS Regions are designed to be autonomous. Multi-Region deployments with Terraform and AWS CodePipeline can help customers with these challenges. In this post, we’ll demonstrate best practices for multi-Region deployments using HashiCorp Terraform as infrastructure as code (IaC), with AWS CodeBuild and AWS CodePipeline for continuous integration and continuous delivery (CI/CD), to achieve consistency and repeatability of deployments into multiple AWS Regions and AWS accounts. We’ll dive deep on the IaC deployment pipeline architecture and the best practices for structuring the Terraform project and configuration for multi-Region deployment across multiple target AWS accounts... View the full article
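The excerpt above describes the architecture only at a high level; as a rough illustration of one way a CodeBuild stage in such a pipeline could apply the same Terraform configuration once per target Region, here is a minimal buildspec sketch. The Terraform version, the TF_STATE_BUCKET variable, the per-Region envs/<region>.tfvars layout, and the Region list are assumptions for illustration only, not details from the original article.

version: 0.2

phases:
  install:
    commands:
      # install a pinned Terraform release (version is an assumption)
      - curl -sLo terraform.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
      - unzip -o terraform.zip -d /usr/local/bin
  build:
    commands:
      # re-initialize the backend per Region so each Region keeps its own state file,
      # then apply with a Region-specific variable file (assumed envs/<region>.tfvars layout)
      - |
        for region in us-east-1 eu-west-1; do
          terraform init -reconfigure \
            -backend-config="bucket=${TF_STATE_BUCKET}" \
            -backend-config="key=app/${region}/terraform.tfstate" \
            -backend-config="region=us-east-1"
          terraform apply -auto-approve -var="region=${region}" -var-file="envs/${region}.tfvars"
        done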
  3. As customers commit to a DevOps mindset and embrace a nearly continuous integration/continuous delivery model to implement change with a higher velocity, assessing the impact of every change on an application's resilience is key. This blog shows an architecture pattern for automating resiliency assessments as part of your CI/CD pipeline. By automatically running a resiliency assessment within CI/CD pipelines, development teams can fail fast and understand quickly whether a change negatively impacts an application's resilience. The pipeline can stop the deployment into further environments, such as QA/UAT and Production, until the resilience issues have been addressed. AWS Resilience Hub is a managed service that gives you a central place to define, validate, and track the resiliency of your AWS applications. It is integrated with AWS Fault Injection Simulator (FIS), a chaos engineering service, to provide fault-injection simulations of real-world failures. Using AWS Resilience Hub, you can assess your applications to uncover potential resilience enhancements. This allows you to validate your application's recovery time objective (RTO) and recovery point objective (RPO) and optimize business continuity while reducing recovery costs. Resilience Hub also provides APIs for you to integrate its assessment and testing into your CI/CD pipelines for ongoing resilience validation. AWS CodePipeline is a fully managed continuous delivery service for fast and reliable application and infrastructure updates. You can use AWS CodePipeline to model and automate your software release processes. This enables you to increase the speed and quality of your software updates by running all new changes through a consistent set of quality checks... View the full article
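As a rough sketch of what such an assessment stage might look like when driven from CodeBuild with the AWS CLI (the excerpt only mentions that Resilience Hub exposes APIs for this), the snippet below starts an assessment and fails the build if the policy is breached. The application ARN variable, the "release" app version, the polling interval, and the exact response field names are assumptions to verify against the Resilience Hub documentation.

version: 0.2

phases:
  build:
    commands:
      # start an assessment against the published app version (ARN variable and field names are assumptions)
      - |
        ASSESSMENT_ARN=$(aws resiliencehub start-app-assessment \
          --app-arn "$RESILIENCE_HUB_APP_ARN" \
          --app-version release \
          --assessment-name "pipeline-$CODEBUILD_BUILD_NUMBER" \
          --query 'assessment.assessmentArn' --output text)
      # poll until the assessment finishes
      - |
        STATUS=InProgress
        while [ "$STATUS" = "InProgress" ] || [ "$STATUS" = "Pending" ]; do
          sleep 30
          STATUS=$(aws resiliencehub describe-app-assessment \
            --assessment-arn "$ASSESSMENT_ARN" \
            --query 'assessment.assessmentStatus' --output text)
        done
      # a non-zero exit code here stops the pipeline before QA/UAT and Production
      - |
        COMPLIANCE=$(aws resiliencehub describe-app-assessment \
          --assessment-arn "$ASSESSMENT_ARN" \
          --query 'assessment.complianceStatus' --output text)
        test "$COMPLIANCE" = "PolicyMet"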
  4. You can now use your GitHub Enterprise Server source repository to build, test, and deploy code changes using AWS CodePipeline. View the full article
  5. AWS CodePipeline Source Action now supports cloning of AWS CodeCommit repositories. With this improvement, when you define a Source Action, CodePipeline will clone the CodeCommit Git repository to fetch the commit history and metadata. View the full article
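For reference, opting into this full-clone behaviour is a single property on the CodeCommit source action. The snippet below is a minimal CloudFormation sketch; the repository, branch, and artifact names are placeholders, and the downstream CodeBuild project also needs permission to pull from CodeCommit, which is not shown.

- Name: Source
  Actions:
  - Name: FetchSource
    ActionTypeId:
      Category: Source
      Owner: AWS
      Provider: CodeCommit
      Version: 1
    Configuration:
      RepositoryName: my-repo                       # placeholder
      BranchName: main                              # placeholder
      OutputArtifactFormat: CODEBUILD_CLONE_REF     # full clone instead of the default CODE_ZIP
    OutputArtifacts:
    - Name: SourceOutput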
  6. In the post Using Custom Source Actions in AWS CodePipeline for Increased Visibility for Third-Party Source Control, we demonstrated using custom actions in AWS CodePipeline together with a worker that periodically polls for jobs and processes them to fetch the artifact from the Git repository. View the full article
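The pattern referenced above combines a custom action type with a polling worker. As a hedged illustration of the first half of that pattern, this is roughly what registering a custom source action type looks like in CloudFormation; the provider name and the single Branch configuration property are invented placeholders, and the polling worker itself is not shown.

CustomSourceActionType:
  Type: 'AWS::CodePipeline::CustomActionType'
  Properties:
    Category: Source
    Provider: MyGitProvider          # placeholder name that the external worker polls for
    Version: '1'
    InputArtifactDetails:
      MinimumCount: 0
      MaximumCount: 0
    OutputArtifactDetails:
      MinimumCount: 1
      MaximumCount: 1
    ConfigurationProperties:
    - Name: Branch                   # placeholder configuration property
      Key: true
      Required: true
      Secret: false
      Type: String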
  7. Researchers at Academic Medical Centers (AMCs) use programs such as Observational Health Data Sciences and Informatics (OHDSI) and Research Electronic Data Capture (REDCap) to interact with healthcare data. Our internal team at AWS has provided solutions such as OHDSI-on-AWS and REDCap environments on AWS to help clinicians analyze healthcare data in the AWS Cloud. Occasionally, these solutions break due to a change in some portion of the solution (e.g., updated services). The Automated Solutions Testing Pipeline enables our team to take a proactive approach to discovering these breaks and their cause in order to expedite the repair process. OHDSI-on-AWS provides these AMCs with the ability to store and analyze observational health data in the AWS Cloud. REDCap is a web application for managing surveys and databases with HIPAA-compliant environments. Using our solutions, these programs can be spun up easily on AWS infrastructure using AWS CloudFormation templates. Updates to AWS services and other program libraries can cause the CloudFormation template to fail during deployment. Other times, the outputs may not be operating correctly, or the template may not work in every AWS Region. This can create a negative customer experience. Some customers may discover this kind of break and decide not to move forward with using the solution. Other customers may not even realize the solution is broken, so they might be unknowingly working with an uncooperative environment. Furthermore, we cannot always provide fast support to the customers who contact us about broken solutions. To meet our team's needs and the needs of our customers, we decided to focus our efforts on taking a CI/CD approach to maintain these solutions. We developed the Automated Testing Pipeline, which regularly tests solution deployment and changes to source files. This post shows the features of the Automated Testing Pipeline and provides resources to help you get started using it with your AWS account.

Overview of the Automated Testing Pipeline Solution
The Automated Testing Pipeline solution as a whole is designed to automatically deploy CloudFormation templates, run tests against the deployed environments, send notifications if an issue is discovered, and allow insightful testing data to be easily explored. CloudFormation templates to be tested are stored in an Amazon S3 bucket. Custom test scripts and TaskCat deployment configuration are stored in an AWS CodeCommit repository. The pipeline is triggered in one of three ways: an update to the CloudFormation template in S3, an Amazon CloudWatch Events rule, or an update to the testing source code repository. Once the pipeline has been triggered, AWS CodeBuild pulls the source code to deploy the CloudFormation template, test the deployed environment, and store the results in an S3 bucket. If any failures are discovered, subscribers to the failure topic are notified. The following diagram shows the overall architecture.

Diagram of Automated Testing Pipeline architecture

In order to create the Automated Testing Pipeline, two interns collaborated over the course of 5 weeks to produce the architecture and custom test scripts. We divided the work of constructing a serverless architecture and writing test scripts for the output URLs of OHDSI-on-AWS and REDCap environments on AWS.
The following tasks were completed to build out the Automated Testing Pipeline solution:
- Set up AWS IAM roles for accessing AWS resources securely
- Create CloudWatch Events rules to trigger AWS CodePipeline
- Set up CodePipeline and CodeBuild to run TaskCat and testing scripts
- Configure TaskCat to deploy CloudFormation solutions in various AWS Regions
- Write test scripts to interact with CloudFormation solutions’ deployed environments
- Subscribe to receive emails detailing test results
- Create a CloudFormation template for the Automated Testing Pipeline

The architecture can be extended to test any CloudFormation stack. For this particular use case, we wrote the test scripts specifically to test the URLs output by the CloudFormation solutions. The Automated Testing Pipeline has the following features:
- Deployed in a single AWS Region, with the exception of the tested CloudFormation solution
- Has a serverless architecture operating at the AWS Region level
- Deploys a pipeline which can deploy and test the CloudFormation solution
- Creates CloudWatch Events rules to activate the pipeline on a schedule or when the solution is updated
- Creates an Amazon SNS topic for notifying subscribers when there are errors
- Includes code for running TaskCat and scripts to test solution functionality
- Built automatically in minutes
- Low in cost with free tier benefits

The pipeline is triggered automatically when an event occurs. These events include a change to the CloudFormation solution template, a change to the code in the testing repository, and an alarm set off by a regular schedule. Additional events can be added in the CloudWatch console. When the pipeline is triggered, the testing environment is set up by CodeBuild. CodeBuild uses a build specification file kept within our source repository to set up the environment and run the test scripts. We created a CodeCommit repository to host the test scripts alongside the build specification. The build specification includes commands to run TaskCat — an open-source tool for testing the deployment of CloudFormation templates (a minimal buildspec sketch appears at the end of this item). TaskCat provides the ability to test the deployment of the CloudFormation solution, but we needed custom test scripts to ensure that we can interact with the deployed environment as expected. If the template is successfully deployed, CodeBuild handles running the test scripts against the CloudFormation solution environment. In our case, the environment is accessed via URLs output by the CloudFormation solution. We used a Selenium WebDriver for interacting with the web pages given by the output URLs. This allowed us to programmatically navigate a headless web browser in the serverless environment and gave us the ability to use text output by JavaScript functions to understand the state of the test. You can see this interaction occurring in the code snippet below.
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.ui import WebDriverWait


def log_in(driver, user, passw, link, btn_path, title):
    """Enter username and password then submit to log in

    :param driver: webdriver for Chrome page
    :param user: username as String
    :param passw: password as String
    :param link: url for page being tested as String
    :param btn_path: xpath to submit button
    :param title: expected page title upon successful sign in
    :return: success String tuple if log in completed, failure description String tuple otherwise
    """
    try:
        # post username and password data
        driver.find_element_by_xpath("//input[ @name='username' ]").send_keys(user)
        driver.find_element_by_xpath("//input[ @name='password' ]").send_keys(passw)
        # click sign in button and wait for page update
        driver.find_element_by_xpath(btn_path).click()
    except NoSuchElementException:
        return 'FAILURE', 'Unable to access page elements'
    try:
        WebDriverWait(driver, 20).until(ec.url_changes(link))
        WebDriverWait(driver, 20).until(ec.title_is(title))
    except TimeoutException as e:
        print("Timeout occurred (" + str(e) + ") while attempting to sign in to " + driver.current_url)
        if "Sign In" in driver.title or "invalid user" in driver.page_source.lower():
            return 'FAILURE', 'Incorrect username or password'
        else:
            return 'FAILURE', 'Sign in attempt timed out'
    return 'SUCCESS', 'Sign in complete'

We store the test results in JSON format for ease of parsing. TaskCat generates a dashboard which we customize to display these test results. We are able to insert our JSON results into the dashboard in order to make it easy to find errors and access log files. This dashboard is a static HTML file that can be hosted on an S3 bucket. In addition, messages are published to SNS topics whenever an error occurs, providing a link to this dashboard.

Customized TaskCat dashboard

In true CI/CD fashion, this end-to-end design automatically performs tasks that would otherwise be performed manually. We have shown how deploying solutions, testing solutions, notifying maintainers, and providing a results dashboard are all actions handled entirely by the Automated Testing Pipeline.

Getting Started with the Automated Testing Pipeline
Prerequisite tasks to complete before deploying the pipeline:
- Clone the repository found at this GitHub page
- Create an EC2 key pair in the Region corresponding to the Region in which the CloudFormation solution will be deployed

Once the prerequisite tasks are completed, the pipeline is ready to be deployed. Detailed information about deployment, altering the source code to fit your use case, and troubleshooting issues can be found at the GitHub page for the Automated Testing Pipeline. For those looking to jump right into deployment, click the Launch Stack button below. Tasks to complete after deployment:
- Subscribe to the SNS topic for error messages
- Update the code to match the parameters and CloudFormation template that were chosen (skip this step if you are testing OHDSI-on-AWS)
- Upload the desired CloudFormation template to the created source S3 bucket
- Push the source code to the created CodeCommit repository

After the code is pushed to the CodeCommit repository and the CloudFormation template has been uploaded to S3, the pipeline will run automatically. You can visit the CodePipeline console to confirm that the pipeline is running with an “in progress” status. You may want to alter various aspects of the Automated Testing Pipeline to better fit your use case.
Listed below are some actions you can take to modify the solution to fit your needs:
- Go to CloudWatch Events and update the rules for automatically starting the pipeline.
- Scale out testing by providing custom testing scripts or altering the existing ones.
- Test a different CloudFormation template by uploading it to the source S3 bucket created and configuring the pipeline accordingly. Custom test scripts will likely be required for this use case.

Challenges Addressed by the Automated Testing Pipeline
The Automated Testing Pipeline directly addresses the challenges we faced with maintaining our OHDSI and REDCap solutions. Additionally, the pipeline can be used whenever there is a need to test CloudFormation templates that are being used on a regular basis or are distributed to other users. Listed below is the set of specific challenges we faced maintaining CloudFormation solutions and how the pipeline addresses them. The desire to better serve our customers guided our decision to create the Automated Testing Pipeline. For example, we know that the source code used to build the OHDSI-on-AWS environment changes on occasion. Some of these changes have caused the environment to stop functioning correctly. This left us with cases where our customers had to either open an issue on GitHub or reach out to AWS directly for support. Our customers depend on OHDSI-on-AWS functioning properly, so fixing issues is of high priority to our team. The ability to run tests regularly allows us to take action without depending on notice from our customers. Now, we can be the first ones to know if something goes wrong and get to fixing it sooner. “This automation will help us better monitor the CloudFormation-based projects our customers depend on to ensure they’re always in working order.” — James Wiggins, EDU HCLS SA Manager

Cleaning Up
If you decide to quit using the Automated Testing Pipeline, follow the steps below to remove the resources associated with it from your AWS account:
- Delete the CloudFormation solution root stack
- Delete the pipeline CloudFormation stack
- Delete the ATLAS S3 bucket if OHDSI-on-AWS was chosen

Deleting the pipeline CloudFormation stack handles removing the resources associated with its architecture. Depending on the CloudFormation template chosen for testing, additional resources associated with it may need to be removed. Visit our GitHub page for more information on removing resources.

Conclusion
The ability to continuously test preexisting solutions on AWS has great benefits for our team and our customers. The automated nature of this testing frees up time for us and our customers, and the dashboard makes issues more visible and easier to resolve. We believe that sharing this story can benefit anyone facing challenges maintaining CloudFormation solutions in AWS. Check out the Getting Started with the Automated Testing Pipeline section of this post to deploy the solution.

Additional Resources
More information about the key services and open-source software used in our pipeline can be found at the following documentation pages:
- AWS CloudFormation documentation
- Amazon CloudWatch documentation
- AWS CodePipeline documentation
- AWS CodeBuild documentation
- Amazon SNS documentation
- AWS IAM documentation
- TaskCat documentation
- Selenium documentation
- Boto3 documentation

About the Authors
Raleigh Hansen is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. She is passionate about solving problems and improving upon existing systems. She also adores spending time with her two cats.
Dan Le is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. He is passionate about technology and enjoys doing art and music. View the full article
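As referenced earlier in this item, here is a minimal sketch of the kind of buildspec the testing CodeBuild project might use; the package list, the run_tests.py wrapper script, and the RESULTS_BUCKET variable are hypothetical names for illustration, not files from the actual repository.

version: 0.2

phases:
  install:
    commands:
      - pip install taskcat selenium        # TaskCat deploys the templates, Selenium drives the output URLs
  build:
    commands:
      - taskcat test run                    # deploys the CloudFormation template(s) defined in .taskcat.yml
      - python run_tests.py                 # hypothetical wrapper that calls log_in() and the other checks
  post_build:
    commands:
      - aws s3 cp results.json "s3://$RESULTS_BUCKET/results.json"   # RESULTS_BUCKET is an assumed variable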
  8. AWS CodeStar Connections is a new feature that allows services like AWS CodePipeline to access third-party source code providers. For example, you can now seamlessly connect your Atlassian Bitbucket Cloud source repository to AWS CodePipeline. This allows you to automate the build, test, and deploy phases of your release process each time a code change occurs. This new feature is available in the following Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), and South America (São Paulo). The practice of tracking and managing changes to code, or source control, is a foundational element of the development process. Therefore, source control management systems are an essential tool for any developer. In this post, we focus on one specific Git code management product: Atlassian Bitbucket. You can get started for free with Bitbucket Cloud. Atlassian provides detailed documentation on getting started with Bitbucket Cloud, which includes topics such as setting up a team, creating a repository, working with branches, and more. For more information, see Get started with Bitbucket Cloud.

Prerequisites
For this use case, you use a Bitbucket account, repository, and Amazon Simple Storage Service (Amazon S3) bucket that we have already created. To follow along, you should have the following:
- A working knowledge of Git and how to fork or clone within your source provider
- Familiarity with hosting a static website on Amazon S3

To follow along you will also need a sample page. Here is some simple HTML code that you can name index.html and add to your repo:

<html>
  <head>
    Example Header
  </head>
  <body>
    Example Body Text
  </body>
</html>

Solution overview
For this use case, you deploy a Hugo website from your Bitbucket Cloud repository to your S3 bucket using CodePipeline. You can then connect your Bitbucket Cloud account to your AWS account to deploy code natively. The walkthrough contains the following steps:
1. Set up CodeStar connections.
2. Add a deployment stage.
3. Use CI/CD to update your website.

Setting up CodeStar connections
When connecting CodePipeline to Bitbucket Cloud, it helps if you have already signed in to Bitbucket. After you sign in to Bitbucket Cloud, you perform the rest of the connection steps on the AWS Management Console.
- On the console, search for CodePipeline and choose CodePipeline.
- Choose Pipelines, then choose Create pipeline.
- For Pipeline name, enter a name.
- For Service role, select New service role.
- For Role name, enter a name for the service role.
- Choose Next.
- For Source provider, choose Bitbucket Cloud.
- For Connection, choose Connect to Bitbucket Cloud.
- For Connection name, enter a name.
- For Bitbucket Cloud apps, choose Install a new app. If this isn’t your first time making a connection, you can choose an existing connection.
- Choose Connect.
- Confirm you’re logged in as the correct user and choose Grant access.
- Choose Connect.
- For Repository name, choose your repository.
- For Branch name, choose your branch.
- For Output artifact format, select CodePipeline default.
- Choose Next.

Adding a deployment stage
Now that you have created a source stage, you can add a deployment stage.
On the Add build stage page, choose Skip build stage. For this use case, you skip the build stage, but if you need to build your own code, choose your build provider from the drop-down menu. You are prompted to confirm you want to skip the build stage; choose Skip. Then configure the deploy stage:
- For Deploy provider, choose Amazon S3. If you have a different destination type or are hosting on traditional compute, you can choose other providers.
- For Region, choose the Region your S3 bucket is in.
- For Bucket, choose the bucket you are deploying to. Optionally, you can also choose a deploy path if you need to deploy to a sub-folder.
- Select Extract file before deploy.
- Choose Next.

Review your configuration and choose Create pipeline. If the settings are correct, you see a green success banner and the initial deployment of your pipeline runs successfully. The following screenshot shows our first deployment. Now that the pipeline shows that the deployment was successful, you can check the S3 bucket to make sure the site is being hosted. You should see your static webpage, as in the following screenshot.

Using CI/CD to update our website
Now that you have created your pipeline, you can edit your website using your IDE, push the changes, and validate that those changes are automatically deployed to the website. For this step, I already cloned my repository and have it opened in my IDE.
- Open your code in your preferred IDE.
- Make the change to your code and push it to Bitbucket. The following screenshot shows that we updated the message that viewers see on our website and pushed our code.
- Look at the pipeline and make sure your code is being processed. The following screenshot shows that the stages were successful and the pipeline processed the correct commit.

After your pipeline is successful, you can check the end result. The following screenshot shows our static webpage.

Clean up
If you created any resources during this walkthrough that you do not plan on keeping, make sure you clean them up to avoid incurring costs for the associated services.

Summary
Being able to let your developers use their repository of choice can be important in your transition to the cloud. CodeStar connections makes it easy for you to set up Bitbucket Cloud as a source provider in the AWS Code Suite. Get started building your CI/CD pipeline using Bitbucket Cloud and the AWS Code Suite. View the full article
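The walkthrough above builds the connection through the console; for completeness, this is a hedged sketch of how the same Bitbucket Cloud source stage could be expressed in a CloudFormation pipeline definition once the connection exists. The connection ARN, repository ID, branch, and artifact name are placeholders.

- Name: Source
  Actions:
  - Name: BitbucketSource
    ActionTypeId:
      Category: Source
      Owner: AWS
      Provider: CodeStarSourceConnection
      Version: 1
    Configuration:
      ConnectionArn: arn:aws:codestar-connections:us-east-1:111111111111:connection/example-id   # placeholder
      FullRepositoryId: my-workspace/my-repo     # placeholder "workspace/repository"
      BranchName: main                           # placeholder
      OutputArtifactFormat: CODE_ZIP             # the "CodePipeline default" option chosen in the console
    OutputArtifacts:
    - Name: SourceOutput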
  9. This post discusses the benefits of, and how to build, a CI/CD pipeline in AWS CodePipeline for multi-Region deployment. The CI/CD pipeline triggers on application code changes pushed to your AWS CodeCommit repository. This automatically feeds into AWS CodeBuild for static and security analysis of the CloudFormation template. Another CodeBuild instance builds the application to generate an AMI image as output. AWS Lambda then copies the AMI image to other Regions. Finally, AWS CloudFormation cross-region actions are triggered to provision the instance into target Regions based on the AMI image. The solution is based on using a single pipeline with cross-region actions, which helps in provisioning resources in the current Region and other Regions. This solution also helps you manage the complete CI/CD pipeline in one place in one Region and serves as a single point for monitoring and deploying changes. This incurs less cost because a single pipeline can deploy the application into multiple Regions. As a security best practice, the solution also incorporates static and security analysis using cfn-lint and cfn-nag. You use these tools to scan CloudFormation templates for security vulnerabilities. The following diagram illustrates the solution architecture.

Multi-Region AWS CodePipeline architecture

Prerequisites
Before getting started, you must complete the following prerequisites:
- Create a repository in CodeCommit and provide access to your user
- Copy the sample source code from GitHub under your repository
- Create an Amazon S3 bucket in the current Region and each target Region for your artifact store

Creating a pipeline with AWS CloudFormation
You use a CloudFormation template for your CI/CD pipeline, which can perform the following actions:
- Use a CodeCommit repository as the source code repository
- Run static code analysis on the CloudFormation template to check against the resource specification and block provisioning if this check fails
- Run security code analysis on the CloudFormation template to check against secure infrastructure rules and block provisioning if this check fails
- Compile and unit test the application code to generate an AMI image
- Copy the AMI image into target Regions for deployment
- Deploy into multiple Regions using the CloudFormation template; for example, us-east-1, us-east-2, and ap-south-1

You use a sample web application to run through your pipeline, which requires Java and Apache Maven for compilation and testing. Additionally, it uses Tomcat 8 for deployment. The following table summarizes the resources that the CloudFormation template creates.
Resources created by the CloudFormation template (name, type, objective):
- CloudFormationServiceRole (AWS::IAM::Role) – Service role for AWS CloudFormation
- CodeBuildServiceRole (AWS::IAM::Role) – Service role for CodeBuild
- CodePipelineServiceRole (AWS::IAM::Role) – Service role for CodePipeline
- LambdaServiceRole (AWS::IAM::Role) – Service role for the Lambda function
- SecurityCodeAnalysisServiceRole (AWS::IAM::Role) – Service role for security analysis of the provisioning CloudFormation template
- StaticCodeAnalysisServiceRole (AWS::IAM::Role) – Service role for static analysis of the provisioning CloudFormation template
- StaticCodeAnalysisProject (AWS::CodeBuild::Project) – CodeBuild project for static analysis of the provisioning CloudFormation template
- SecurityCodeAnalysisProject (AWS::CodeBuild::Project) – CodeBuild project for security analysis of the provisioning CloudFormation template
- CodeBuildProject (AWS::CodeBuild::Project) – CodeBuild project for compilation, testing, and AMI creation
- CopyImage (AWS::Lambda::Function) – Python Lambda function for copying AMI images into other Regions
- AppPipeline (AWS::CodePipeline::Pipeline) – CodePipeline pipeline for CI/CD

To start creating your pipeline, complete the following steps:
1. Launch the CloudFormation stack with the following link: Launch button for CloudFormation.
2. Choose Next.
3. For Specify details, provide the following values:
   - Stack name – Name of your stack
   - OtherRegion1 – The target Region 1 (other than the current Region) for deployment
   - OtherRegion2 – The target Region 2 (other than the current Region) for deployment
   - RepositoryBranch – Branch name of the repository
   - RepositoryName – Repository name of the project
   - S3BucketName – The S3 bucket name for the artifact store
   - S3BucketNameForOtherRegion1 – Create a bucket in target Region 1 and specify the name for the artifact store
   - S3BucketNameForOtherRegion2 – Create a bucket in target Region 2 and specify the name for the artifact store
4. Choose Next.
5. On the Review page, select I acknowledge that this template might cause AWS CloudFormation to create IAM resources.
6. Choose Create.
7. Wait for the CloudFormation stack status to change to CREATE_COMPLETE (this takes approximately 5–7 minutes).

When the stack is complete, your pipeline should be ready and running in the current Region. To validate the pipeline, check the images and EC2 instances running in the target Regions, and also refer to the AWS CodePipeline execution summary shown below.

AWS CodePipeline Execution Summary

We will walk you through the following steps for creating a multi-Region deployment pipeline:

1. Using CodeCommit as your source code repository
The deployment workflow starts by placing the application code in the CodeCommit repository. When you add or update the source code in CodeCommit, the action generates a CloudWatch event, which triggers the pipeline to run.

2. Static code analysis of the CloudFormation template to provision AWS resources
Historically, AWS CloudFormation linting was limited to the ValidateTemplate action in the service API. This action tells you if your template is well-formed JSON or YAML, but doesn’t help validate the actual resources you’ve defined. You can use a linter such as the cfn-lint tool for static code analysis to improve your AWS CloudFormation development cycle. The tool validates the provisioning CloudFormation template properties and their values (mappings, joins, splits, conditions, and nesting those functions inside each other) against the resource specification. This can cover the most common underlying service constraints and help encode some best practices.
The following rules cover underlying service constraints:
- E2530 – Checks that Lambda functions have correctly configured memory sizes
- E3025 – Checks that your RDS instances use correct instance types for the database engine
- W2001 – Checks that each parameter is used at least once

You can also add this step as a pre-commit hook for your Git repository if you are using CodeCommit or GitHub. You provision a CodeBuild project for static code analysis as the first step in CodePipeline after source. This helps in early detection of any linter issues.

3. Security code analysis of the CloudFormation template to provision AWS resources
You can use Stelligent’s cfn_nag tool to perform additional validation of your template resources for security. The cfn-nag tool looks for patterns in CloudFormation templates that may indicate insecure infrastructure provisioning and validates against AWS best practices. For example:
- IAM rules that are too permissive (wildcards)
- Security group rules that are too permissive (wildcards)
- Access logs that aren’t enabled
- Encryption that isn’t enabled
- Password literals

You provision a CodeBuild project for security code analysis as the second step in CodePipeline. This helps detect any insecure infrastructure provisioning issues (a minimal buildspec sketch for these two analysis steps follows this item).

4. Compiling and testing application code and generating an AMI image
Because you use a Java-based application for this walkthrough, you use Amazon Corretto as your JVM. Corretto is a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto comes with long-term support that includes performance enhancements and security fixes. You also use Apache Maven as a build automation tool to build the sample application, and the HashiCorp Packer tool to generate an AMI image for the application. You provision a CodeBuild project for compilation, unit testing, AMI generation, and storing the AMI ImageId in the Parameter Store, which the CloudFormation template uses as the next step of the pipeline.

5. Copying the AMI image into target Regions
You use a Lambda function to copy the AMI image into target Regions so the CloudFormation template can use it to provision instances into those Regions as the next step of the pipeline. It also writes the target Region AMI ImageId into the target Region’s Parameter Store.

6. Deploying into multiple Regions with the CloudFormation template
You use the CloudFormation template as a cross-region action to provision AWS resources into a target Region. CloudFormation uses Parameter Store’s ImageId as a reference and provisions the instances into the target Region.

Cleaning up
To avoid additional charges, you should delete the following AWS resources after you validate the pipeline:
- The cross-region CloudFormation stack in the target and current Regions
- The main CloudFormation stack in the current Region
- The AMI you created in the target and current Regions
- The Parameter Store AMI_VERSION in the target and current Regions

Conclusion
You have now created a multi-Region deployment pipeline in CodePipeline without having to worry about the mechanics of creating and copying AMI images across Regions. CodePipeline abstracts the creating and copying of the images in the background in each Region. You can now upload new source code changes to the CodeCommit repository in the primary Region, and changes deploy automatically to other Regions. Cross-region actions are very powerful and are not limited to deploy actions. You can also use them with build and test actions. View the full article
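As a rough illustration of the static-analysis and security-analysis stages described in steps 2 and 3 above, the buildspec below runs cfn-lint and cfn_nag against a templates/ directory. The directory name, and combining both tools in a single project (the post uses two separate CodeBuild projects), are simplifications for illustration.

version: 0.2

phases:
  install:
    commands:
      - pip install cfn-lint                      # static analysis against the CloudFormation resource specification
      - gem install cfn-nag                       # security analysis (provides cfn_nag_scan)
  build:
    commands:
      - cfn-lint templates/*.yaml                 # non-zero exit blocks provisioning on linter errors
      - cfn_nag_scan --input-path templates/      # non-zero exit blocks provisioning on insecure patterns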
  10. Getting started with CI/CD to manage your AWS infrastructure is hard: You have to familiarize yourself with the available technologies. You have to create a Proof of Concept to show your team how it works. You have to convince your team to stop using the graphical Management Console. The last part is usually the hardest. That’s why I was looking for a way to introduce CI/CD while still allowing manual changes using the graphical Management Console. If you use CloudFormation to create resources for you, you should never make manual changes to the resources. Otherwise, you risk losing the infrastructure during the next update run of CloudFormation.

Manual changes through Parameter Store
AWS Systems Manager (SSM) Parameter Store is a good place to store the configuration of your application. Parameters can be organized like files in a folder-like structure. E.g., the parameter /application/stage/instancetype stores the value t2.micro. Parameter Store comes with a nice UI. CloudFormation templates can be parametrized. A CloudFormation parameter can look up the value from Parameter Store when you create or update a stack. But how do you trigger a stack update when the value of the parameter in Parameter Store changes? Luckily, Parameter Store parameter changes are published to CloudWatch Events. You can subscribe to those events and trigger a CodePipeline execution to update the CloudFormation stack. All of this is managed by CloudFormation. Read on if you want to learn how you can connect the pieces:
- Develop a CloudFormation template using the Parameter Store to start an EC2 instance
- Use CodePipeline to deploy a CloudFormation stack
- Listen to CloudWatch Events to trigger the pipeline on parameter changes

Simple CloudFormation template using the Parameter Store
First, you need the CloudFormation template that describes the EC2 instance. The instance type should be fetched from the /application/stage/instancetype parameter. CloudFormation integrates with Parameter Store using parameters as well. Don’t get confused: CloudFormation parameters and Parameter Store parameters are two different things. You use a CloudFormation parameter of type AWS::SSM::Parameter::Value<String> and set the value to the name of the Parameter Store parameter (e.g., /application/stage/instancetype). CloudFormation will then, on each create or update of the stack, ask Parameter Store for the current value. A second CloudFormation parameter ParentVPCStack is used to reference a CloudFormation stack that contains the VPC.

part 1 of infrastructure.yaml (GitHub)

---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Infrastructure'
Parameters:
  InstanceType:
    Description: 'Name of Parameter Store parameter to define the instance type.'
    Type: 'AWS::SSM::Parameter::Value<String>'
    Default: '/application/stage/instancetype'
  ParentVPCStack:
    Description: 'Stack name of parent VPC stack based on vpc/vpc-*azs.yaml template.'
    Type: String

The CloudFormation parameter can then be used as any other parameter. !Ref InstanceType returns the value of the parameter.

part 2 of infrastructure.yaml (GitHub)

Resources:
  SecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: !Ref 'AWS::StackName'
      SecurityGroupIngress: # [...]
      VpcId: # [...]
  VirtualMachine:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: 'ami-97785bed'
      InstanceType: !Ref InstanceType
      SecurityGroupIds:
      - !Ref SecurityGroup
      SubnetId: # [...]
That’s it: a CloudFormation template spinning up a single EC2 instance with the instance type coming from Parameter Store. I removed some parts (# [...]) to focus on the important parts. You can download the full infrastructure.yaml template on GitHub.

Simple CodePipeline to deploy a CloudFormation stack
Now it’s time to take care of the deployment pipeline. You need an S3 bucket (ArtifactsBucket) to store the artifacts that are moved through the pipeline. I also added the CodeCommit repository (CodeRepository) to the template to store the project’s source code. Finally, CodePipeline and CloudFormation need permissions (PipelineRole) to invoke the AWS API on your behalf to create the resources described in the CloudFormation templates. Since you can create any resource with CloudFormation, you most likely have to grant full permissions to create a stack. In this example, it should be possible to restrict it to certain EC2 actions, but you don’t necessarily know which API calls CloudFormation performs for you.

part 1 of pipeline.yaml (GitHub)

---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Pipeline'
Resources:
  CodeRepository:
    Type: 'AWS::CodeCommit::Repository'
    Properties:
      RepositoryName: !Ref 'AWS::StackName'
  ArtifactsBucket:
    DeletionPolicy: Retain
    Type: 'AWS::S3::Bucket'
    Properties: {}
  PipelineRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - 'cloudformation.amazonaws.com'
            - 'codepipeline.amazonaws.com'
          Action:
          - 'sts:AssumeRole'
      ManagedPolicyArns:
      - 'arn:aws:iam::aws:policy/AdministratorAccess'

The pipeline itself can now be described. In the first stage, the source code is fetched from the CodeCommit repository. After that, a VPC stack is created. The EC2 instance is later launched into the VPC that is created in the second stage. I reused the vpc-2azs.yaml template from our Free Templates for AWS CloudFormation collection to describe the VPC. That’s one of the big advantages of CloudFormation: once you have a template, you can reuse it as often as you want.

part 2 of pipeline.yaml (GitHub)

  Pipeline:
    Type: 'AWS::CodePipeline::Pipeline'
    Properties:
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactsBucket
      Name: !Ref 'AWS::StackName'
      RestartExecutionOnUpdate: true
      RoleArn: !GetAtt 'PipelineRole.Arn'
      Stages:
      - Name: Source
        Actions:
        - Name: FetchSource
          ActionTypeId:
            Category: Source
            Owner: AWS
            Provider: CodeCommit
            Version: 1
          Configuration:
            RepositoryName: !GetAtt 'CodeRepository.Name'
            BranchName: master
            PollForSourceChanges: false
          OutputArtifacts:
          - Name: Source
          RunOrder: 1
      - Name: VPC
        Actions:
        - Name: Deploy
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: 1
          Configuration:
            ActionMode: CREATE_UPDATE
            Capabilities: CAPABILITY_IAM
            RoleArn: !GetAtt 'PipelineRole.Arn'
            StackName: !Sub '${AWS::StackName}-vpc'
            TemplatePath: 'Source::vpc-2azs.yaml'
            OutputFileName: 'output.json'
          InputArtifacts:
          - Name: Source
          OutputArtifacts:
          - Name: VPC
          RunOrder: 1
      # template continues in a moment

There is one interesting concept that I need to explain. In the VPC stage, the VPC stack is deployed. The CloudFormation stack outputs are stored in a file called output.json, and this file is part of the VPC artifact. You can use the VPC artifact later to get access to the stack outputs from the VPC stack. In the third stage, the template containing the EC2 instance is used to create the infrastructure stack.
You now pass two input artifacts to the CloudFormation deployment: the Source artifact containing the source code from CodeCommit, and the VPC artifact containing the VPC stack outputs.

part 3 of pipeline.yaml (GitHub)

      # continued template
      - Name: Production
        Actions:
        - Name: DeployInfrastructure
          ActionTypeId: # [...]
          Configuration:
            # [...]
            TemplatePath: 'Source::infrastructure.yaml'
            TemplateConfiguration: 'Source::infrastructure.json'
          InputArtifacts:
          - Name: VPC
          - Name: Source
          RunOrder: 1

The infrastructure.json file wires the value from output.json together with the parameter of the infrastructure.yaml template.

infrastructure.json (GitHub)

{
  "Parameters": {
    "ParentVPCStack": {"Fn::GetParam": ["VPC", "output.json", "StackName"]}
  }
}

The pipeline is mostly done. One last thing is missing.

Listening to CloudWatch Events
Last but not least, you define a CloudWatch Events rule to trigger the pipeline whenever the Parameter Store parameter changes. And as always, you need to give AWS (to be more precise, CloudWatch Events) permission to execute the pipeline.

part 4 of pipeline.yaml (GitHub)

  PipelineTriggerRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - 'events.amazonaws.com'
          Action:
          - 'sts:AssumeRole'
      Policies:
      - PolicyName: 'codepipeline'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action: 'codepipeline:StartPipelineExecution'
            Resource: !Sub 'arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}'
  ParameterStorePipelineTriggerRule:
    Type: 'AWS::Events::Rule'
    Properties:
      EventPattern:
        source:
        - 'aws.ssm'
        'detail-type':
        - 'Parameter Store Change'
        resources:
        - !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/application/stage/instancetype'
      State: ENABLED
      Targets:
      - Arn: !Sub 'arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}'
        Id: pipeline
        RoleArn: !GetAtt 'PipelineTriggerRole.Arn'

There is one more thing to mention about CloudWatch Events: CodeCommit also publishes an event if the repository changes. The following event rule executes the pipeline whenever the source code changes in the repository.

part 5 of pipeline.yaml (GitHub)

  CodeCommitPipelineTriggerRule:
    Type: 'AWS::Events::Rule'
    Properties:
      EventPattern:
        source:
        - 'aws.codecommit'
        'detail-type':
        - 'CodeCommit Repository State Change'
        resources:
        - !GetAtt 'CodeRepository.Arn'
        detail:
          referenceType:
          - branch
          referenceName:
          - master
      State: ENABLED
      Targets:
      - Arn: !Sub 'arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}'
        Id: pipeline
        RoleArn: !GetAtt 'PipelineTriggerRole.Arn'

You can download the full pipeline.yaml template on GitHub.

Setup instructions
Have you installed and configured the AWS CLI?
Clone the example repository (git clone https://github.com/widdix/parameter-store-cloudformation-codepipeline.git) or download the ZIP file, then change into the directory:

cd parameter-store-cloudformation-codepipeline/

Create the parameter in Parameter Store:

aws ssm put-parameter --name '/application/stage/instancetype' --value 't2.micro' --type String

Create the pipeline stack with CloudFormation:

aws cloudformation create-stack --stack-name cloudonaut --template-body file://pipeline.yaml --capabilities CAPABILITY_IAM

Wait until the CloudFormation stack is created:

aws cloudformation wait stack-create-complete --stack-name cloudonaut

Push the files to the CodeCommit repository created by the pipeline stack (I don’t use git push here to skip the Git configuration):

COMMIT_ID="$(aws codecommit put-file --repository-name cloudonaut --branch-name master --file-content file://infrastructure.yaml --file-path infrastructure.yaml --query commitId --output text)"
COMMIT_ID="$(aws codecommit put-file --repository-name cloudonaut --branch-name master --parent-commit-id $COMMIT_ID --file-content file://infrastructure.json --file-path infrastructure.json --query commitId --output text)"
COMMIT_ID="$(aws codecommit put-file --repository-name cloudonaut --branch-name master --parent-commit-id $COMMIT_ID --file-content file://vpc-2azs.yaml --file-path vpc-2azs.yaml --query commitId --output text)"

Wait until the first pipeline run is finished:

open 'https://console.aws.amazon.com/codepipeline/home#/view/cloudonaut'

Visit the website exposed by the EC2 instance:

open "http://$(aws cloudformation describe-stacks --stack-name cloudonaut-infrastructure --query "Stacks[0].Outputs[0].OutputValue" --output text)"

Update the parameter value (t2.nano is outside the Free Tier, so expect charges of a few cents):

aws ssm put-parameter --name '/application/stage/instancetype' --value 't2.nano' --type String --overwrite

Wait until the second pipeline run is finished:

open 'https://console.aws.amazon.com/codepipeline/home#/view/cloudonaut'

Visit the website exposed by the EC2 instance again:

open "http://$(aws cloudformation describe-stacks --stack-name cloudonaut-infrastructure --query "Stacks[0].Outputs[0].OutputValue" --output text)"

Clean up instructions
Remove the CloudFormation stacks:

aws cloudformation delete-stack --stack-name cloudonaut-infrastructure
aws cloudformation wait stack-delete-complete --stack-name cloudonaut-infrastructure
aws cloudformation delete-stack --stack-name cloudonaut-vpc
aws cloudformation wait stack-delete-complete --stack-name cloudonaut-vpc
aws cloudformation delete-stack --stack-name cloudonaut
aws cloudformation wait stack-delete-complete --stack-name cloudonaut

Remove the S3 bucket prefixed with cloudonaut-artifactsbucket-, including all files:

open "https://s3.console.aws.amazon.com/s3/home"

Remove the Parameter Store parameter:

aws ssm delete-parameter --name '/application/stage/instancetype'

Summary
I like to combine the ease of Parameter Store with the benefits of CI/CD. Parameter Store can be turned into a graphical user interface to configure your infrastructure. This is very handy if you work in a team where not everyone is familiar with the concepts of CI/CD. Introducing Parameter Store to your team simplifies things. You can use a CI/CD approach to deploy infrastructure while the team can still use a graphical user interface to control some parameters without bypassing the CI/CD pipeline.
A few examples of parameters that I used in the past:
- The size of an Auto Scaling group, which allows you to manually start or shut down a fleet of EC2 instances through the pipeline
- The storage of your RDS database instance
- The number of nodes in an Elasticsearch cluster
- The instance type of EC2 instances managed by an Auto Scaling group

By the way, Parameter Store keeps a record of all changes to a parameter, which is handy if you need to record changes to your infrastructure. Unfortunately, CloudFormation does not support encrypted values from Parameter Store, which would be awesome for managing secrets such as database passwords. View the full article