Showing results for tags 'amazon ec2'.

Found 20 results

  1. This was a busy week for Amazon Bedrock, with many new features! Using GitHub Actions with AWS CodeBuild is now much easier, and Amazon Q in Amazon CodeCatalyst can now manage more complex issues. I was amazed to meet so many new and old friends at the AWS Summit London. To give you a quick glimpse, here's AWS Hero Yan Cui starting his presentation at the AWS Community stage.

     Last week's launches
     With so many interesting new features, I start with generative artificial intelligence (generative AI) and then move on to the other topics. Here's what got my attention:

     • Amazon Bedrock – For supported architectures such as Llama, Mistral, or Flan T5, you can now import custom models and access them on demand. Model evaluation is now generally available to help you evaluate, compare, and select the best foundation models (FMs) for your specific use case. You can now also access Meta's Llama 3 models.
     • Agents for Amazon Bedrock – Simplified agent creation and return of control: you can define an action schema and get control back to perform those actions without needing to create a specific AWS Lambda function. Agents also added support for Anthropic Claude 3 Haiku and Sonnet to help you build faster and more intelligent agents.
     • Knowledge Bases for Amazon Bedrock – You can now ingest data from up to five data sources and provide more complete answers. In the console, you can now chat with one of your documents without needing to set up a vector database (read more in this Machine Learning blog post).
     • Guardrails for Amazon Bedrock – The capability to implement safeguards based on your use cases and responsible AI policies is now available with new safety filters and privacy controls.
     • Amazon Titan – The new watermark detection feature is now generally available in Amazon Bedrock. You can use it to identify images generated by Amazon Titan Image Generator, thanks to an invisible watermark present in all images it generates.
     • Amazon CodeCatalyst – Amazon Q can now split complex issues into separate, simpler tasks that can then be assigned to a user or back to Amazon Q. CodeCatalyst now also supports approval gates within a workflow. Approval gates pause a workflow that is building, testing, and deploying code so that a user can validate whether it should be allowed to proceed.
     • Amazon EC2 – You can now remove an automatically assigned public IPv4 address from an EC2 instance. If you no longer need it (for example, because you are migrating to a private IPv4 address for SSH with EC2 Instance Connect), you can use this option to quickly remove the automatically assigned public IPv4 address and reduce your public IPv4 costs.
     • Network Load Balancer – Now supports Resource Map in the AWS Management Console, a tool that displays all your NLB resources and their relationships in a visual format on a single page. Application Load Balancer already supports Resource Map in the console.
     • AWS CodeBuild – Now supports managed GitHub Actions self-hosted runners. You can configure CodeBuild projects to receive GitHub Actions workflow job events and run them on CodeBuild ephemeral hosts.
     • Amazon Route 53 – You can now define a standard DNS configuration in the form of a Profile, apply it to multiple VPCs, and share it across AWS accounts.
     • AWS Direct Connect – Hosted connections now support capacities up to 25 Gbps, up from the previous maximum of 10 Gbps. Higher bandwidths simplify deployments of applications such as advanced driver assistance systems (ADAS), media and entertainment (M&E), artificial intelligence (AI), and machine learning (ML).
     • NoSQL Workbench for Amazon DynamoDB – A revamped operation builder user interface helps you better navigate, run operations, and browse your DynamoDB tables.
     • Amazon GameLift – Now supports, in preview, end-to-end development of containerized workloads, including deployment and scaling on premises, in the cloud, or in hybrid configurations. You can use containers for building, deploying, and running game server packages.

     For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

     Other AWS news
     Here are some additional projects, blog posts, and news items that you might find interesting:

     • GQL, the new ISO standard for graphs, has arrived – GQL, which stands for Graph Query Language, is the first new ISO database language since the introduction of SQL in 1987.
     • Authorize API Gateway APIs using Amazon Verified Permissions and Amazon Cognito – Externalizing authorization logic for application APIs can yield multiple benefits. Here's an example of how to use Cedar policies to secure a REST API.
     • Build and deploy a 1 TB/s file system in under an hour – A very nice walkthrough of something that, until recently, was far from easy to do.
     • Let's Architect! Discovering Generative AI on AWS – A new episode in this amazing series of posts: a broad introduction to the domain, followed by a mix of videos, blog posts, and hands-on workshops.
     • Building scalable, secure, and reliable RAG applications using Knowledge Bases for Amazon Bedrock – This post explores the new features (including AWS CloudFormation support) and how they align with the AWS Well-Architected Framework.
     • Using the unified CloudWatch Agent to send traces to AWS X-Ray – With added support for the collection of AWS X-Ray and OpenTelemetry traces, you can now provision a single agent to capture metrics, logs, and traces.
     • The executive's guide to generative AI for sustainability – A guide for implementing a generative AI roadmap within sustainability strategies.
     • AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community. Check out Ricardo's page for the latest updates.

     Upcoming AWS events
     Check your calendars and sign up for upcoming AWS events:

     • AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Singapore (May 7), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).
     • AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.
     • AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).
     • GOTO EDA Day London – Join us in London on May 14 to learn about event-driven architectures (EDA) for building highly scalable, fault-tolerant, and extensible applications. This conference is organized by GOTO, AWS, and partners.

     Browse all upcoming AWS-led in-person and virtual events and developer-focused events.

     That's all for this week. Check back next Monday for another Weekly Roundup!

     — Danilo

     This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
  2. Amazon Inspector now offers continuous monitoring of your Amazon EC2 instances for software vulnerabilities without installing an agent or additional software. Previously, Inspector relied on the widely deployed AWS Systems Manager (SSM) agent to assess your EC2 instances for third-party software vulnerabilities. With this expansion, Inspector offers two scan modes for EC2 scanning: hybrid and agent-based. In hybrid scan mode, Inspector relies on SSM agents to collect information from instances to perform vulnerability assessments, and automatically switches to agentless scanning for instances that do not have SSM agents installed or configured; for agentless scanning, Inspector takes snapshots of EBS volumes to collect the software application inventory from the instances. In agent-based scan mode, Inspector only scans instances that have an SSM agent installed and configured. New customers enabling EC2 scanning are configured in hybrid mode by default, while existing customers can migrate to hybrid mode by simply visiting the EC2 settings page within the Inspector console. Once enabled, Inspector automatically discovers all your EC2 instances and starts evaluating them for software vulnerabilities. View the full article
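     The same setting can be managed outside the console. A minimal sketch, assuming the Inspector2 UpdateConfiguration action and its ec2Configuration/scanMode parameter that accompanied this launch; verify the exact names against the current CLI reference before relying on them:

      # Switch the account's EC2 scanning to hybrid mode (assumed parameter shape).
      aws inspector2 update-configuration \
        --ec2-configuration '{"scanMode": "EC2_HYBRID"}'

      # Confirm the active scan mode.
      aws inspector2 get-configuration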
  3. The Amazon Time Sync Service now supports clock synchronization within microseconds of UTC on 87 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types in supported regions, including all C7i, M7i, R7i, C7a, M7a, R7a, and M7g instances. View the full article
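     On the client side, instances typically consume the Amazon Time Sync Service through chrony at the link-local endpoint 169.254.169.123. A minimal sketch of the documented client setup; note that microsecond-level accuracy additionally depends on running one of the supported instance types listed above:

      # Point chrony at the link-local Amazon Time Sync endpoint.
      echo "server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4" | sudo tee -a /etc/chrony.conf
      sudo systemctl restart chronyd

      # Inspect the measured offset; on supported instances it should settle well below a millisecond.
      chronyc tracking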
  4. Amazon Detective, a managed security service that helps analysts investigate potential security issues across AWS, has introduced a new feature to support investigating threats detected by Amazon GuardDuty's EC2 Runtime Monitoring capability. This expansion enhances Detective's ability to provide visualizations and context for investigating runtime threats targeting EC2 instances. View the full article
  5. We're just two days away from AWS Summit Sydney (April 10–11) and a month away from the AWS Summit season in Southeast Asia, starting with the AWS Summit Singapore (May 7) and the AWS Summit Bangkok (May 30). If you happen to be in Sydney, Singapore, or Bangkok around those dates, please join us.

     Last Week's Launches
     If you haven't read last week's Weekly Roundup yet, Channy wrote about the AWS Chips Taste Test, a new initiative from Jeff Barr as part of April Fools' Day. Here are some launches that caught my attention last week:

     • New Amazon EC2 G6 instances – We announced the general availability of Amazon EC2 G6 instances powered by NVIDIA L4 Tensor Core GPUs. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases. They deliver up to 2x higher performance for deep learning inference and graphics workloads compared to Amazon EC2 G4dn instances. To learn more, visit the Amazon EC2 G6 instance page.
     • Mistral Large is now available in Amazon Bedrock – Veliswa wrote about the availability of the Mistral Large foundation model as part of the Amazon Bedrock service. You can use Mistral Large to handle complex tasks that require substantial reasoning capabilities. In addition, Amazon Bedrock is now available in the Paris AWS Region.
     • Amazon Aurora zero-ETL integration with Amazon Redshift now in additional Regions – Zero-ETL integration announcements were my favourite launches last year. This integration simplifies the process of transferring data between the two services, allowing customers to move data between Amazon Aurora and Amazon Redshift without the need for manual extract, transform, and load (ETL) processes. With this announcement, zero-ETL integrations between Amazon Aurora and Amazon Redshift are now supported in 11 additional Regions.
     • Announcing AWS Deadline Cloud – If you're working in films, TV shows, commercials, games, or industrial design and handling complex rendering management for teams creating 2D and 3D visual assets, then you'll be excited about AWS Deadline Cloud. This new managed service simplifies the deployment and management of render farms for media and entertainment workloads.
     • AWS Clean Rooms ML is now generally available – Last year, I wrote about the preview of AWS Clean Rooms ML. In that post, I elaborated on a new capability of AWS Clean Rooms that helps you and your partners apply machine learning (ML) models on your collective data without copying or sharing raw data with each other. Now, AWS Clean Rooms ML is available for you to use.
     • Knowledge Bases for Amazon Bedrock now supports private network policies for OpenSearch Serverless – Exciting news if you're building with Amazon Bedrock: you can now implement Retrieval-Augmented Generation (RAG) with Knowledge Bases for Amazon Bedrock using Amazon OpenSearch Serverless (OSS) collections that have a private network policy.
     • Amazon EKS extended support for Kubernetes versions now generally available – If you're running Kubernetes version 1.21 or higher, extended support for Kubernetes lets you stay up to date with the latest Kubernetes features and security improvements on Amazon EKS.
     • AWS Lambda adds support for Ruby 3.3 – Coding in Ruby? AWS Lambda now supports Ruby 3.3 as a runtime. This update allows you to take advantage of the latest features and improvements in the Ruby language.
     • Amazon EventBridge console enhancements – The Amazon EventBridge console has been updated with new features and improvements, making it easier for you to manage your event-driven applications with a better user experience.
     • Private access to the AWS Management Console in commercial Regions – If you need to restrict access to personal AWS accounts from the company network, you can use AWS Management Console Private Access. With this launch, it is available in all commercial AWS Regions.

     From community.aws
     community.aws is a home for us builders to share what we learn while building on AWS. Here are my top three posts from last week:

     • 14 LLMs fought 314 Street Fighter matches. Here's who won by Banjo Obayomi
     • Build an AI image catalogue! – Claude 3 Haiku by Alan Blockley
     • Following the path of Architecture as Code by Christian Bonzelet

     Other AWS News
     Here are some additional news items, open-source projects, and Twitch shows that you might find interesting:

     • Build On Generative AI – Join Tiffany and Darko to learn more about generative AI, see their demos, and discuss different aspects of generative AI with the guest speakers. Streaming every Monday on Twitch, 9:00 AM US PT.
     • AWS open source news and updates – If you're looking for open-source projects and tools from the AWS community, please read the AWS open-source newsletter maintained by my colleague, Ricardo.

     Upcoming AWS events
     Check your calendars and sign up for these AWS events:

     • AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Amsterdam (April 9), Sydney (April 10–11), London (April 24), Singapore (May 7), Berlin (May 15–16), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Dubai (May 29), Thailand (May 30), Stockholm (June 4), and Madrid (June 5).
     • AWS re:Inforce – Explore cloud security in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania, for two-and-a-half days of immersive cloud security learning designed to help drive your business initiatives.
     • AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Poland (April 11), Bay Area (April 12), Kenya (April 20), and Turkey (May 18).

     You can browse all upcoming in-person and virtual events.

     That's all for this week. Check back next Monday for another Weekly Roundup!

     — Donnie

     This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
  6. We are excited to introduce a new Amazon EMR on EC2 feature that enables automatic graceful replacement of unhealthy core nodes to ensure continued optimal cluster operations and prevent data loss. Additionally, EMR on EC2 will publish CloudWatch events to provide visibility into node health and recovery actions. These improvements are available for all Amazon EMR releases. View the full article
  7. Effective April 1, 2024, AWS has extended per-second billing to Red Hat Enterprise Linux (RHEL)-based instances running on Amazon EC2. Customers pay for RHEL-based instances launched in On-Demand, Reserved, and Spot form in one-second increments, with a minimum of one minute. View the full article
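     The billing rule is simple to reason about: usage is metered in one-second increments with a 60-second floor per launch. An illustrative calculation (not an official pricing tool):

      # billed_seconds = max(runtime_seconds, 60)
      for runtime in 45 330 3605; do
        billed=$(( runtime < 60 ? 60 : runtime ))
        echo "runtime ${runtime}s -> billed ${billed}s"
      done
      # 45s bills as 60s (one-minute minimum); 330s and 3605s bill exactly as run.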
  8. AWS now provides customers with a new AWS managed policy for Microsoft Windows Volume Snapshot Copy Service (VSS) in Amazon Elastic Compute Cloud (Amazon EC2). With this policy, customers no longer have to configure individual permissions or create their own policies to manage VSS. Customers can simply use this new policy to ensure the necessary permissions are in place for creating application-consistent snapshots using the AWS VSS solution. View the full article
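     A minimal sketch of putting the managed policy to use by attaching it to an instance role. The role name is a placeholder, and the policy name AWSEC2VssSnapshotPolicy is my reading of the announcement, so verify the exact name in the IAM console:

      aws iam attach-role-policy \
        --role-name MyVssInstanceRole \
        --policy-arn arn:aws:iam::aws:policy/AWSEC2VssSnapshotPolicy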
  9. You can now set all new Amazon EC2 instance launches in your account to use Instance Metadata Service Version 2 (IMDSv2) by default. IMDSv2 is an enhancement that requires session-oriented requests to add defense in depth against unauthorized metadata access. To set your instances to IMDSv2-only, you previously had to use the IMDS Amazon Machine Image (AMI) property, set Instance Metadata Options during instance launch, or update instances after launch using the ModifyInstanceMetadataOptions API. View the full article
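     A short sketch of both approaches with the AWS CLI: the new Region-wide account default, and the older per-instance update it supersedes (the instance ID is a placeholder):

      # New: require IMDSv2 for all new instance launches in this Region.
      aws ec2 modify-instance-metadata-defaults --http-tokens required

      # Old per-instance approach: update one already-running instance.
      aws ec2 modify-instance-metadata-options \
        --instance-id i-0123456789abcdef0 \
        --http-tokens required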
  10. AWS Summit season is starting! I'm happy I will meet our customers, partners, and the press next week at the AWS Summit Paris and the week after at the AWS Summit Amsterdam. I'll show you how mobile application developers can use generative artificial intelligence (AI) to boost their productivity. Be sure to stop by and say hi if you're around. Now that my talks for the Summit are ready, I took the time to look back at the AWS launches from last week and write this summary for you.

      Last week's launches
      Here are some launches that got my attention:

      • AWS License Manager allows you to track IBM Db2 licenses on Amazon Relational Database Service (Amazon RDS) – I wrote about Amazon RDS when we launched IBM Db2 back in December 2023, and I told you that you must bring your own Db2 license. Starting today, you can track your Amazon RDS for Db2 usage with AWS License Manager. License Manager provides you with better control and visibility of your licenses to help you limit licensing overages and reduce the risk of non-compliance and misreporting.
      • AWS CodeBuild now supports custom images for AWS Lambda – You can now use container images stored in an Amazon Elastic Container Registry (Amazon ECR) repository for projects configured to run on Lambda compute. Previously, you had to use one of the managed container images provided by AWS CodeBuild. AWS managed container images include support for the AWS Command Line Interface (AWS CLI), Serverless Application Model, and various programming language runtimes.
      • AWS CodeArtifact package group configuration – Administrators of package repositories can now manage the configuration of multiple packages in one single place. A package group allows you to define how packages are updated by internal developers or from upstream repositories. You can now allow or block internal developers to publish packages, or allow or block upstream updates, for a group of packages. Read my blog post for all the details.
      • Return your Savings Plans – We have announced the ability to return Savings Plans within 7 days of purchase. Savings Plans is a flexible pricing model that can help you reduce your bill by up to 72 percent compared to On-Demand prices, in exchange for a one- or three-year hourly spend commitment. If you realize that the Savings Plan you recently purchased isn't optimal for your needs, you can return it and, if needed, repurchase another Savings Plan that better matches your needs.
      • Amazon EC2 Mac Dedicated Hosts now provide visibility into supported macOS versions – You can now view the latest macOS versions supported on your EC2 Mac Dedicated Host, which enables you to proactively validate whether your Dedicated Host can support instances with your preferred macOS versions.
      • Amazon Corretto 22 is now generally available – Corretto 22, an OpenJDK feature release, introduces a range of new capabilities and enhancements for developers. New features like stream gatherers and unnamed variables help you write code that's clearer and easier to maintain. Additionally, optimizations in garbage collection algorithms boost performance. Existing libraries for concurrency, class files, and foreign functions have also been updated, giving you a more powerful toolkit to build robust and efficient Java applications.
      • Amazon DynamoDB now supports resource-based policies and AWS PrivateLink – With AWS PrivateLink, you can simplify private network connectivity between Amazon Virtual Private Cloud (Amazon VPC), Amazon DynamoDB, and your on-premises data centers using interface VPC endpoints and private IP addresses. Resource-based policies, on the other hand, help you simplify access control for your DynamoDB resources. With resource-based policies, you can specify the AWS Identity and Access Management (IAM) principals that have access to a resource and what actions they can perform on it. You can attach a resource-based policy to a DynamoDB table or a stream. Resource-based policies also simplify cross-account access control for sharing resources with IAM principals of different AWS accounts (a CLI sketch follows this post).

      For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

      Other AWS news
      Here are some additional news items, open source projects, and Twitch shows that you might find interesting:

      • British Broadcasting Corporation (BBC) migrated 25 PB of archives to Amazon S3 Glacier – The BBC Archives Technology and Services team needed a modern solution to centralize, digitize, and migrate its 100-year-old flagship archives. It began using Amazon Simple Storage Service (Amazon S3) Glacier Instant Retrieval, an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. I did the math: you need 2,788,555 DVD discs to store 25 PB of data. Imagine a pile of DVDs reaching 41.8 kilometers (or 25.9 miles) tall! Read the full story.
      • Build On Generative AI – Season 3 of your favorite weekly Twitch show about all things generative AI is in full swing! Streaming every Monday, 9:00 AM US PT, my colleagues Tiffany and Darko discuss different aspects of generative AI and invite guest speakers to demo their work.
      • AWS open source news and updates – My colleague Ricardo writes this weekly open source newsletter in which he highlights new open source projects, tools, and demos from the AWS Community.

      Upcoming AWS events
      Check your calendars and sign up for these AWS events:

      • AWS Summits – As I wrote in the introduction, it's AWS Summit season again! The first one happens next week in Paris (April 3), followed by Amsterdam (April 9), Sydney (April 10–11), London (April 24), Berlin (May 15–16), and Seoul (May 16–17). AWS Summits are a series of free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS.
      • AWS re:Inforce – Join us for AWS re:Inforce (June 10–12) in Philadelphia, Pennsylvania. AWS re:Inforce is a learning conference focused on AWS security solutions, cloud security, compliance, and identity. Connect with the AWS teams that build the security tools and meet AWS customers to learn about their security journeys.

      You can browse all upcoming in-person and virtual events.

      That's all for this week. Check back next Monday for another Weekly Roundup!

      -- seb

      This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
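      For the DynamoDB launch above, a minimal sketch of attaching a resource-based policy with the AWS CLI; the table ARN and policy file are placeholders:

      aws dynamodb put-resource-policy \
        --resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/Orders \
        --policy file://table-policy.json

      # Read the policy back to confirm it was attached.
      aws dynamodb get-resource-policy \
        --resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/Orders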
  11. Starting today, customers can view the latest macOS versions supported on their EC2 Mac Dedicated Host, which enables them to proactively validate whether their Dedicated Host can support instances with their preferred macOS versions. Because each macOS version requires a minimum firmware version on the underlying Apple Mac to boot successfully, a host may not support booting the latest macOS versions if the firmware on the Apple Mac is outdated, which can happen if an allocated Mac Dedicated Host has remained idle for an extended period of time. To ensure support for the latest macOS versions, customers can launch and terminate instances on their allocated Mac Dedicated Host, which triggers the host sanitization workflow and updates the firmware on the underlying Apple Mac. A Dedicated Host with a long-running instance will be updated automatically when customers stop or terminate the running instance. View the full article
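     A minimal sketch of checking this from the CLI, assuming the DescribeMacHosts API that accompanied this launch; verify the exact output field names against the current CLI reference:

      aws ec2 describe-mac-hosts \
        --query 'MacHosts[].{Host:HostId,LatestSupportedMacOS:MacOSLatestSupportedVersions}'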
  12. Amazon EC2 now enables you to tag your Amazon Machine Images (AMIs) when you create an AMI from an EBS snapshot, or when you copy an AMI within the same or different AWS Regions. Tags are simple key-value pairs that you can assign to AWS resources such as AMIs to easily organize, search, and identify resources, create cost allocation reports, and control access. View the full article
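     A minimal sketch of tagging at copy time with the AWS CLI; the image ID, Regions, and tag values are placeholders, and the --tag-specifications support on copy-image is the capability this launch adds:

      aws ec2 copy-image \
        --source-image-id ami-0123456789abcdef0 \
        --source-region us-east-1 \
        --region eu-west-1 \
        --name my-copied-ami \
        --tag-specifications 'ResourceType=image,Tags=[{Key=team,Value=platform}]'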
  13. AWS announces the general availability of the Amazon EC2 m7i.metal-24xl instance for VMware Cloud on AWS. This offering features a storage architecture disaggregated from compute, supporting both Amazon FSx for NetApp ONTAP and VMware Cloud Flex Storage as primary storage options for customers. With this new offering, customers can now choose among three instance types for VMware Cloud on AWS: i3en.metal, i4i.metal, and m7i.metal-24xl. View the full article
  14. Amazon Relational Database Service (Amazon RDS) for SQL Server now supports db.t3.micro instances in all commercial regions and the AWS GovCloud (US) Regions. This provides you with more options in addition to the db.t2.micro instance in the current AWS Free Tier for new AWS customers. View the full article
  15. Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML. You can use EC2 Capacity Blocks to reserve GPU instances in an Amazon EC2 UltraCluster for a future date, for just the amount of time that you require to run your machine learning (ML) workloads. This is an innovative way to reserve capacity: you schedule GPU instances to be available on a future date, for only as long as you need them. View the full article
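     A sketch of the purchase flow with the AWS CLI: first search for an available offering, then buy it. The instance type, dates, and offering ID are placeholders:

      aws ec2 describe-capacity-block-offerings \
        --instance-type p5.48xlarge \
        --instance-count 1 \
        --capacity-duration-hours 24 \
        --start-date-range 2024-06-01T00:00:00Z \
        --end-date-range 2024-06-07T00:00:00Z

      aws ec2 purchase-capacity-block \
        --capacity-block-offering-id cbo-0123456789abcdef0 \
        --instance-platform Linux/UNIX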
  16. Amazon Web Services (AWS) offers a service called EC2 that can be used to launch virtual machines, known as instances, in the cloud. An EC2 instance works much like a local computer or laptop, with its own storage, operating system, and other configuration. Elastic Block Store (EBS) is the service used to allocate storage volumes to an EC2 instance for its data. This guide explains how to use Elastic Block Storage (EBS) volumes with EC2 instances.

      How to Use EBS Volumes with EC2 Instances?
      To use an EBS volume with an EC2 instance, visit the EC2 dashboard and follow these steps:

      • Click on the "Instances" page, then click the "Launch instances" button to head to the configuration page.
      • Type a name for the instance and select the "Amazon Linux" machine image from the "Quick Start" section.
      • Select the type of instance and click the "Create new key pair" link to create the key pair file that will be used when connecting to the instance. Enter a name for the key pair and select its format and type before clicking the "Create key pair" button.
      • Scroll down the configuration page to locate the "Configure storage" section, which defines the EBS storage attached to the instance.
      • Review the summary of the instance and hit the "Launch instance" button.
      • Click the "Volumes" link in the left panel of the EC2 dashboard, then click the "Create volume" button on the volumes page.
      • Change the size according to the needs of the EC2 instance. Make sure that the availability zone of the volume is the same as that of the instance.
      • Scroll down the page and click the "Create volume" button.
      • On the volumes page, click the "Actions" button to expand the menu and choose "Attach volume".
      • Choose the instance to which the volume should be attached and click the "Attach volume" button.
      • To verify that the volume is attached, select the instance and open the "Storage" section. Two volumes are displayed there: the first was created at the time of instance creation, and the other is the one attached afterward.

      That was all about using an EBS volume with an EC2 instance.

      Conclusion
      To use an EBS volume with an EC2 instance, head to the EC2 dashboard from the AWS Console and launch an EC2 instance. EBS storage is created while creating the instance to store the instance's data. After that, head to the volumes page and create an EBS volume in the same availability zone as the instance. Simply attach the volume to the EC2 instance to add to the existing storage space. This blog covered how to use EBS volumes with EC2 instances; a scriptable alternative is sketched below. View the full article
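     The console workflow above can also be scripted with the AWS CLI. A minimal sketch; the Availability Zone, instance ID, and device name are placeholders to replace with your own:

      # Create a 10 GiB gp3 volume in the same Availability Zone as the instance.
      AZ="us-east-1a"
      INSTANCE_ID="i-0123456789abcdef0"

      VOLUME_ID=$(aws ec2 create-volume \
        --availability-zone "$AZ" \
        --size 10 \
        --volume-type gp3 \
        --query 'VolumeId' --output text)

      # Wait until the volume is available, then attach it as /dev/sdf.
      aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
      aws ec2 attach-volume \
        --volume-id "$VOLUME_ID" \
        --instance-id "$INSTANCE_ID" \
        --device /dev/sdf

      # Verify: the volume now shows an attachment to the instance.
      aws ec2 describe-volumes --volume-ids "$VOLUME_ID" \
        --query 'Volumes[0].Attachments'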
  17. You can now use Credential Control Properties to more easily restrict the usage of your IAM Roles for EC2. View the full article
  18. Amazon Web Services (AWS) announces the general availability of Amazon EC2 I4i instances. Designed for storage I/O intensive workloads, I4i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 30% better compute price performance over I3 instances, and always-on memory encryption using Intel Total Memory Encryption (TME). View the full article
  19. Many organizations adopt DevOps practices to innovate faster by automating and streamlining the software development and infrastructure management processes. Beyond cultural adoption, DevOps also suggests following certain best practices, and Continuous Integration and Continuous Delivery (CI/CD) is among the important ones to start with. CI/CD practice reduces the time it takes to release new software updates by automating deployment activities. Many tools are available to implement this practice. Although AWS has a set of native tools to help achieve your CI/CD goals, it also offers flexibility and extensibility for integrating with numerous third-party tools.

      In this post, you will use GitHub Actions to create a CI/CD workflow and AWS CodeDeploy to deploy a sample Java Spring Boot application to Amazon Elastic Compute Cloud (Amazon EC2) instances in an Auto Scaling group.

      GitHub Actions is a feature on GitHub's popular development platform that helps you automate your software development workflows in the same place that you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and then combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

      AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.

      Solution Overview
      The solution utilizes the following services:

      • GitHub Actions – Workflow orchestration tool that will host the pipeline.
      • AWS CodeDeploy – AWS service to manage deployment on an Amazon EC2 Auto Scaling group.
      • AWS Auto Scaling – AWS service to help maintain application availability and elasticity by automatically adding or removing Amazon EC2 instances.
      • Amazon EC2 – Destination compute server for the application deployment.
      • AWS CloudFormation – AWS infrastructure-as-code (IaC) service used to spin up the initial infrastructure on the AWS side.
      • IAM OIDC identity provider – Federated authentication service to establish trust between GitHub and AWS, allowing GitHub Actions to deploy on AWS without maintaining AWS secrets and credentials.
      • Amazon Simple Storage Service (Amazon S3) – Used to store the deployment artifacts.

      The following diagram illustrates the architecture for the solution:

      1. The developer commits code changes from their local repo to the GitHub repository. In this post, the GitHub action is triggered manually, but this can be automated.
      2. The GitHub action triggers the build stage.
      3. GitHub's OpenID Connect (OIDC) provider uses tokens to authenticate to AWS and access resources.
      4. The GitHub action uploads the deployment artifacts to Amazon S3.
      5. The GitHub action invokes CodeDeploy.
      6. CodeDeploy triggers the deployment to Amazon EC2 instances in the Auto Scaling group.
      7. CodeDeploy downloads the artifacts from Amazon S3 and deploys them to the Amazon EC2 instances.

      Prerequisites
      Before you begin, you must complete the following prerequisites:

      • An AWS account with permissions to create the necessary resources.
      • A GitHub account with permissions to configure GitHub repositories, create workflows, and configure GitHub secrets.
      • A Git client to clone the provided source code.

      Steps
      The following steps provide a high-level overview of the walkthrough:

      1. Clone the project from the AWS code samples repository.
      2. Deploy the AWS CloudFormation template to create the required services.
      3. Update the source code.
      4. Set up GitHub secrets.
      5. Integrate CodeDeploy with GitHub.
      6. Trigger the GitHub action to build and deploy the code.
      7. Verify the deployment.

      Download the source code
      Clone the source code repository aws-codedeploy-github-actions-deployment:

      git clone https://github.com/aws-samples/aws-codedeploy-github-actions-deployment.git

      Create an empty repository in your personal GitHub account. To create a GitHub repository, see Create a repo. Clone this repo to your computer (you can ignore the warning about cloning an empty repository):

      git clone https://github.com/<username>/<repoName>.git

      Copy the code. We need the contents of the hidden .github folder for the GitHub actions to work:

      cp -r aws-codedeploy-github-actions-deployment/. <new repository>   # e.g. GitActionsDeploytoAWS

      Now you should have the following folder structure in your local repository. The .github folder contains actions defined in the YAML file. The aws/scripts folder contains code to run at the different deployment lifecycle events. The cloudformation folder contains the template.yaml file to create the required AWS resources. Spring-boot-hello-world-example is a sample application used by GitHub actions to build and deploy. The root of the repo contains appspec.yml. This file is required by CodeDeploy to perform the deployment on Amazon EC2. Find more details here.

      The following commands make sure that your remote repository points to your personal GitHub repository:

      git remote remove origin
      git remote add origin <your repository url>
      git branch -M main
      git push -u origin main

      Deploy the CloudFormation template
      To deploy the CloudFormation template, complete the following steps:

      1. Open the AWS CloudFormation console. Enter your account ID, user name, and password.
      2. Check your Region; this solution uses us-east-1.
      3. If this is a new AWS CloudFormation account, select Create New Stack. Otherwise, select Create Stack.
      4. Select Template is ready, then Upload a template file.
      5. Select Choose File and navigate to the template.yml file in your cloned repository at aws-codedeploy-github-actions-deployment/cloudformation/template.yaml.
      6. Select the template.yml file, and select Next.
      7. In Specify Stack Details, add or modify the values as needed:
         • Stack name = CodeDeployStack
         • VPC and Subnets = pre-populated for you; you can change these values if you prefer to use your own subnets
         • GitHubThumbprintList = 6938fd4d98bab03faadb97b34396831e3780aea1
         • GitHubRepoName = name of the GitHub personal repository which you created
      8. On the Options page, select Next.
      9. Select the acknowledgement box to allow for the creation of IAM resources, and then select Create.

      It will take CloudFormation approximately 10 minutes to create all of the resources. This stack creates the following resources:

      • Two Amazon EC2 Linux instances with the Tomcat server and CodeDeploy agent installed
      • An Auto Scaling group with an internet-facing Application Load Balancer
      • CodeDeploy application name and deployment group
      • Amazon S3 bucket to store build artifacts
      • Identity and Access Management (IAM) OIDC identity provider
      • Instance profile for Amazon EC2
      • Service role for CodeDeploy
      • Security groups for the ALB and Amazon EC2

      Update the source code
      On the AWS CloudFormation console, select the Outputs tab. Note the Amazon S3 bucket name and the ARN of the GitHub IAM role; we will use these in the next step.

      Update the Amazon S3 bucket in the workflow file deploy.yml:

      1. Navigate to /.github/workflows/deploy.yml from your project root directory.
      2. Replace ##s3-bucket## with the name of the Amazon S3 bucket created previously.
      3. Replace ##region## with your AWS Region.

      Update the Amazon S3 bucket name in after-install.sh: navigate to aws/scripts/after-install.sh. This script copies the deployment artifact from the Amazon S3 bucket to the Tomcat webapps folder.

      Remember to save all of the files and push the code to your GitHub repo. Verify that you're in your git repository folder by running the following command:

      git remote -v

      You should see your remote branch address, which is similar to the following:

      username@3c22fb075f8a GitActionsDeploytoAWS % git remote -v
      origin git@github.com:<username>/GitActionsDeploytoAWS.git (fetch)
      origin git@github.com:<username>/GitActionsDeploytoAWS.git (push)

      Now run the following commands to push your changes:

      git add .
      git commit -m "Initial commit"
      git push

      Setup GitHub Secrets
      The GitHub Actions workflows must access resources in your AWS account. Here we are using an IAM OpenID Connect identity provider and an IAM role with IAM policies to access CodeDeploy and the Amazon S3 bucket. OIDC lets your GitHub Actions workflows access resources in AWS without needing to store the AWS credentials as long-lived GitHub secrets. These credentials are stored as GitHub secrets within your GitHub repository, under Settings > Secrets. For more information, see "GitHub Actions secrets".

      1. Navigate to your GitHub repository and select the Settings tab.
      2. Select Secrets on the left menu bar, then Actions under Secrets.
      3. Select New repository secret.
      4. Enter the secret name 'IAMROLE_GITHUB', and enter as the value the ARN of GitHubIAMRole, which you copied from the CloudFormation output section.

      Integrate CodeDeploy with GitHub
      For CodeDeploy to be able to perform deployment steps using scripts in your repository, it must be integrated with GitHub. The CodeDeploy application and deployment group are already created for you. Please use these in the next step:

      • CodeDeploy application = CodeDeployAppNameWithASG
      • Deployment group = CodeDeployGroupName

      To link a GitHub account to an application in CodeDeploy, follow the instructions on this page up to step 10. You can cancel the process after completing step 10; you don't need to create a deployment.

      Trigger the GitHub Actions Workflow
      Now you have the required AWS resources and have configured GitHub to build and deploy the code to Amazon EC2 instances. The GitHub actions defined in GITHUBREPO/.github/workflows/deploy.yml let us run the workflow. The workflow is currently set up to be run manually. Follow these steps to run it manually:

      1. Go to your GitHub repo and select the Actions tab.
      2. Select the Build and Deploy link, and select Run workflow.
      3. After a few seconds, the workflow will be displayed. Then, select Build and Deploy.

      You will see two stages:

      1. Build and Package – builds the sample Spring Boot application, generates the war file, and then uploads it to the Amazon S3 bucket. You should be able to see the war file in the Amazon S3 bucket.
      2. Deploy – invokes the CodeDeploy service and triggers the deployment (a CLI equivalent is sketched after this post).

      Verify the deployment
      Log in to the AWS Console and navigate to the CodeDeploy console. Select the application name and deployment group. You will see the status as Succeeded if the deployment is successful. Point your browser to the URL of the Application Load Balancer.

      Note: You can get the URL from the output section of the CloudFormation stack or from the Amazon EC2 console under Load Balancers.

      Optional – Automate the deployment on git push
      The workflow can be automated by changing the following lines of code in your .github/workflow/deploy.yml file, from:

      workflow_dispatch: {}

      to:

      #workflow_dispatch: {}
      push:
        branches: [ main ]
      pull_request:

      This will be interpreted by GitHub Actions to automatically run the workflow on every push or pull request done on the main branch. After testing the end-to-end flow manually, you can enable the automated deployment.

      Clean up
      To avoid incurring future charges, you should clean up the resources that you created:

      1. Empty the Amazon S3 bucket.
      2. Delete the CloudFormation stack (CodeDeployStack) from the AWS console.
      3. Delete the GitHub secret ('IAMROLE_GITHUB'): go to the repository settings on the GitHub page, select Secrets under Actions, select IAMROLE_GITHUB, and delete it.

      Conclusion
      In this post, you saw how to leverage GitHub Actions and CodeDeploy to securely deploy a Java Spring Boot application to Amazon EC2 instances behind an AWS Auto Scaling group. You can further add other stages to your pipeline, such as testing and security scanning. Additionally, this solution can be used for other programming languages.

      About the Authors
      Mahesh Biradar is a Solutions Architect at AWS. He is a DevOps enthusiast and enjoys helping customers implement cost-effective architectures that scale. Suresh Moolya is a Cloud Application Architect with Amazon Web Services. He works with customers to architect, design, and automate business software at scale on AWS cloud. View the full article
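     For reference, the deployment that the workflow's Deploy stage triggers can also be started by hand with the AWS CLI, using the application and deployment group the post's CloudFormation stack creates. A minimal sketch; the bucket and key are placeholders for the artifact the Build stage uploaded:

      aws deploy create-deployment \
        --application-name CodeDeployAppNameWithASG \
        --deployment-group-name CodeDeployGroupName \
        --s3-location bucket=<your-artifact-bucket>,key=<artifact.zip>,bundleType=zip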
  20. IAM helps customers with capabilities to analyze access and achieve least privilege. When you are working on new permissions for your teams, you can use IAM Access Analyzer policy generation to create a policy based on your access activity and set fine-grained permissions. To analyze and refine existing permissions, you can use last accessed information to identify unused actions in your IAM policies and reduce access. When we launched action last accessed in 2020, we started with S3 management actions to help you restrict access to your critical business data. Now, IAM is increasing visibility into access history by extending last accessed information to Amazon EC2, AWS IAM, and AWS Lambda actions. This makes it easier for you to analyze access and reduce EC2, IAM, and Lambda permissions by providing the latest timestamp when an IAM user or role used an action. Using last accessed information, you can identify unused actions in your IAM policies and tighten permissions confidently. View the full article
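     A minimal sketch of pulling action-level last-accessed data with the AWS CLI: generate a report job for a role, then fetch the results. The role ARN is a placeholder:

      JOB_ID=$(aws iam generate-service-last-accessed-details \
        --arn arn:aws:iam::123456789012:role/MyAppRole \
        --granularity ACTION_LEVEL \
        --query 'JobId' --output text)

      # Poll for the report; it lists the latest timestamp each action was used.
      aws iam get-service-last-accessed-details --job-id "$JOB_ID"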