Showing results for tags 'ebs'.

Found 24 results

  1. AWS Backup announces support for restore testing of Amazon EBS Snapshots Archive. AWS Backup restore testing helps you perform automated, periodic restore tests of supported AWS resources that have been backed up. AWS Backup is a fully managed service that centralizes and automates data protection across AWS services and hybrid workloads. With this launch, AWS Backup customers can test recovery readiness to prepare for possible data loss events and measure restore job durations for Amazon EBS Snapshots Archive to satisfy compliance or regulatory requirements. View the full article
  2. AWS Systems Manager Fleet Manager now provides a new toolset that streamlines on-instance volume management by offering an easy, GUI-based way to manage EBS volumes on your Windows instances. With this new Fleet Manager capability, customers can readily browse the set of volumes attached to an instance, identify volume mount points in the instance file system, view metadata for attached disks, and mount or format unused EBS volumes. View the full article
  3. This week, it was really difficult to choose what to recap here because, as we’re getting closer to AWS re:Invent, service teams are delivering new capabilities at an incredible pace.

     Last week’s launches
     Here are some of the launches that caught my attention last week:
     • Amazon Aurora – Aurora MySQL zero-ETL integration with Amazon Redshift is now generally available. Get a walk-through in our AWS News Blog post. Here’s a recap of data integration innovations at AWS. Optimized reads for Aurora PostgreSQL provide up to 8x improved query latency and up to 30 percent cost savings for I/O-intensive applications. Here’s more of a deep dive from the AWS Database Blog.
     • Amazon EBS – You can now block public sharing of EBS snapshots. Read more about how that works in the launch post.
     • Amazon Data Lifecycle Manager – Support for pre- and post-script automation of EBS snapshots simplifies application-consistent snapshots. Here’s how to use it with Windows applications.
     • AWS Health – There’s now improved visibility into planned lifecycle events like end of standard support of a Kubernetes version in Amazon EKS, Amazon RDS certificate rotations, and end of support for other open source software. Here’s how it works.
     • Amazon CloudFront – Unified security dashboard to enable, monitor, and manage common security protections for your web applications directly from the CloudFront console. Read more at Introducing CloudFront Security Dashboard, a Unified CDN and Security Experience.
     • Amazon Connect – Reduced outbound telephony pricing across Europe and South America. It’s also easier now to deliver persistent chat experiences for end users.
     • AWS Lambda – Busy week for the Lambda team! There is now support for Amazon Linux 2023 as both a managed runtime and a container base image. More details in this Compute Blog post. There’s also enhanced auto scaling for Kafka event sources (the Compute Blog has a post with more details) and faster polling scale-up rate for Amazon SQS events when AWS Lambda functions are configured with SQS.
     • AWS CodeBuild – Now supports AWS Lambda compute to build and test software packages. Read about how it works in this post.
     • Amazon SQS – Now supports JSON protocol to reduce latency and client-side CPU usage. More in the launch post. There’s also a new integration for Amazon SQS in the Amazon EventBridge Pipes console (the week before that, Amazon Kinesis Data Streams was also integrated into the EventBridge Pipes console).
     • Amazon SNS – FIFO topics now support 3,000 messages per second by default.
     • Amazon EventBridge – There are 22 additional Amazon CloudWatch metrics to help you monitor the performance of your event buses. More info in this post from the AWS Compute Blog.
     • Amazon OpenSearch Service – Neural search makes it easier to create and manage semantic search applications.
     • Amazon Timestream – The UNLOAD statement simplifies exporting time-series data for additional insights.
     • Amazon Comprehend – New trust and safety features with toxicity detection and prompt safety classification. Read how to apply that to generative AI applications using LangChain.
     • AWS App Runner – Now available in London, Mumbai, and Paris AWS Regions.
     • AWS Application Migration Service – Support for AWS App2Container replatforming of .NET and Java based applications.
     • Amazon FSx for OpenZFS – Now available in ten additional AWS Regions with support for additional deployment types in seven Regions.
     • AWS Global Accelerator – There’s now IPv6 support for Network Load Balancer (NLB) endpoints. It was already available for Application Load Balancers (ALBs) and Amazon Elastic Compute Cloud (Amazon EC2) instances.
     • Amazon GuardDuty – New machine learning (ML) capability enhances threat detection for Amazon EKS.

     Other AWS news
     Some other news and blog posts that you might have missed:
     • AWS Local Zones Credit Program – If you have low-latency or data residency requirements for your application, our Local Zones Credit Program can get you started. Fill out our form to receive $500 in AWS credits and apply it to a Local Zones workload.
     • Amazon CodeWhisperer – Customizing coding companions for organizations and optimizing for sustainability.
     • Sharing what we have learned – Creating a correction of errors document to understand what went wrong and what would be done to prevent it from happening again.
     • Good tips for containers – Securing API endpoints using Amazon API Gateway and Amazon VPC Lattice.
     • Another post in this amazing series – Let’s Architect! Tools for developers.
     A few highlights from Community.AWS:
     • From MVC to Modern Web Frameworks
     • Reduce Stress and Get Your Fridays Back with Observability and OpenTelemetry
     • Sustainable Software Development Life Cycle (S-SDLC)
     Don’t miss the latest AWS open source newsletter by my colleague Ricardo.

     Upcoming AWS events
     Check your calendars and sign up for these AWS events:
     • AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Uruguay (November 14), Central Asia (Kazakhstan, Uzbekistan, Kyrgyzstan, and Mongolia on November 17–18), and Guatemala (November 18).
     • AWS re:Invent (November 27 – December 1) – Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Browse the session catalog and attendee guides and check out the highlights for generative AI. In the AWS re:Invent Builder Hub you can find developer-focused sessions, events, competitions, and content.
     Here you can browse all upcoming AWS-led in-person and virtual events and developer-focused events.
     And that’s all from me for this week. We’re now taking a break. The next weekly roundup will be after re:Invent!
     — Danilo
     This post is part of our Weekly Roundup series. Check back for a quick roundup of interesting news and announcements from AWS! View the full article
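     One launch from the list above, blocking public sharing of EBS snapshots, is an account-level, per-Region setting that can be turned on with a single API call. A minimal boto3 sketch; the state values shown are taken from the launch description and should be treated as assumptions to verify against the documentation:

```python
import boto3

ec2 = boto3.client('ec2')

# Block public sharing of EBS snapshots in this account and Region.
# 'block-all-sharing' also unshares snapshots that are already public;
# 'block-new-sharing' only prevents new public shares (assumed values).
resp = ec2.enable_snapshot_block_public_access(State='block-all-sharing')
print(resp['State'])

# Verify the current setting afterwards.
print(ec2.get_snapshot_block_public_access_state()['State'])
```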
  4. With just 41 days until AWS re:Invent 2023 opens, I’m doing my best to stay heads-down and focused on working with the entire AWS News Blog team to create plenty of awesome new posts for your reading pleasure! I’ll take a short break this morning to share some of the most exciting launches and other news from last week. Here we go!

     Last Week’s Launches
     Here are some of the launches that captured my attention:
     • Amazon EBS – The new Attached EBS Status Check CloudWatch metric lets you monitor the status of all of the Amazon Elastic Block Store (Amazon EBS) volumes attached to a particular Amazon Elastic Compute Cloud (Amazon EC2) instance, verifying that the volumes are reachable and able to complete I/O operations.
     • AWS Systems Manager – You can now enable AWS Systems Manager by default for all EC2 instances within an Organization. This lets you confirm that core Systems Manager capabilities are present on all new and existing instances.
     • Amazon EC2 – You can now set unused or obsolete AMIs to a disabled state. This makes the AMI private if it was previously shared, hides it from DescribeImages by default, and prevents new instances from being launched from it.
     • Amazon Textract – You can now use Custom Queries to adapt Textract’s Queries feature to improve extraction accuracy for business-specific documents. You upload sample documents, label the data, and generate an adapter, which you then use in calls to the AnalyzeDocument function.
     • Amazon OpenSearch Service – You can now create Search Pipelines for easier processing of queries and results. Each search pipeline can contain multiple processing steps: query rewriters, natural language processors, result rerankers, and filters; several standard processors are also included.
     • Amazon Linux 2023 – The latest quarterly release (AL2023.2) includes a core set of Ansible features as well as a curated set of community collections. It also includes Amazon Corretto 21 and many other new features and capabilities.
     • Amazon Rekognition – You can now train custom adapters to reduce the number of false positives and false negatives flagged by Amazon Rekognition, giving you the power to tailor the deep learning model to improve performance for your specific use case.
     • Amazon RDS – Amazon Relational Database Service (RDS) now supports PostgreSQL, MySQL, and MariaDB databases on M6in, M6idn, R6in, and R6idn database instances.
     • X in Y – We launched existing services and instance types in additional Regions: M6in and M6idn instances in Asia Pacific (Sydney) and Europe (Stockholm); C7gd, M7gd, and R7gd instances in Asia Pacific (Singapore, Tokyo); C7gd instances in Asia Pacific (Sydney); unified settings for the AWS Management Console in AWS GovCloud (US) Regions; AWS Direct Connect in Seoul, South Korea; AWS Global Accelerator in Hanoi, Vietnam (second location); Amazon FSx for NetApp ONTAP in Asia Pacific (Osaka); AWS Organizations Service Control Policies in AWS China Regions; AWS Verified Access in Asia Pacific (Singapore, Tokyo); private access to the AWS Management Console in Israel (Tel Aviv); and Amazon RDS Custom for Oracle in Asia Pacific (Jakarta).

     Other AWS News
     Here are some other blog posts and news items that you might like:
     • On the Community.AWS Blog, Seth Eliot listed Twelve Resilience Sessions at AWS re:Invent You Won’t Want to Miss, Brooke Jamieson explained How to Learn Generative AI from Scratch, and Daniel Wirjo shared some Patterns for Building Generative AI Applications on Amazon Bedrock.
     • On the AWS Insights blog, fellow news blogger Irshad Buchh explained why Two billion downloads of Terraform AWS Provider shows value of IaC for infrastructure management.
     • The AWS IoT Blog explained How to build a scalable, multi-tenant IoT SaaS platform on AWS using a multi-account strategy.
     • The Amazon SES Blog showed you how to Automate marketing campaigns with real-time customer data using Amazon Pinpoint.
     • The AWS Big Data Blog showed you how to Orchestrate Amazon EMR Serverless jobs with AWS Step Functions.
     • The AWS Compute Blog talked about Filtering events in Amazon EventBridge with wildcard pattern matching.
     • The AWS Storage Blog talked about Retaining Amazon EC2 AMI snapshots for compliance using Amazon EBS Snapshots Archive.
     • The AWS Architecture Blog talked about how Internet Travel Service ITS adopts microservices architecture for improved air travel search engine.
     Some other great sources of AWS news include the AWS Open Source Newsletter, AWS Graviton Weekly, AWS Cloud Security Weekly, and Last Week in AWS.

     Upcoming AWS Events
     Check your calendars and sign up for these AWS events:
     • AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Italy (October 18), UAE (October 21), Jaipur (November 4), Vadodara (November 4), and Brasil (November 4).
     • AWS Innovate: Every Application Edition – Join our free online conference to explore cutting-edge ways to enhance security and reliability, optimize performance on a budget, speed up application development, and revolutionize your applications with generative AI. Register for AWS Innovate Online Americas and EMEA on October 19 and AWS Innovate Online Asia Pacific & Japan on October 26.
     • AWS re:Invent (November 27 – December 1) – Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Browse the session catalog and attendee guides and check out the re:Invent highlights for generative AI.
     You can browse all upcoming in-person and virtual events.
     And that’s a wrap. Check back next Monday for another Weekly Roundup!
     — Jeff;
     This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
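     The AMI disable capability mentioned in the launches above is a single API call per image. A minimal boto3 sketch, using a hypothetical AMI ID; the DisableImage/EnableImage operations are described in the launch, so treat the exact call shape as an assumption to confirm against the EC2 API reference:

```python
import boto3

ec2 = boto3.client('ec2')

AMI_ID = 'ami-0123456789abcdef0'  # hypothetical ID for illustration

# Disabling makes the AMI private, hides it from DescribeImages by default,
# and prevents new instances from being launched from it.
ec2.disable_image(ImageId=AMI_ID)

# A disabled AMI can later be re-enabled if it turns out to still be needed.
ec2.enable_image(ImageId=AMI_ID)
```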
  5. Starting today, we are introducing a new Amazon CloudWatch metric called Attached EBS Status Check to monitor whether one or more Amazon EBS volumes attached to your EC2 instances are reachable and able to complete I/O operations. With this new metric, you can quickly detect and respond to any EBS impairments that may be impacting the performance of your applications running on Amazon EC2 instances. View the full article
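     A CloudWatch alarm is the usual way to react to this check automatically. A minimal boto3 sketch, assuming the check is published as the metric StatusCheckFailed_AttachedEBS in the AWS/EC2 namespace (the metric name is an assumption to verify in the CloudWatch console), with a hypothetical instance ID and SNS topic:

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='attached-ebs-impaired-i-0123456789abcdef0',
    Namespace='AWS/EC2',
    MetricName='StatusCheckFailed_AttachedEBS',  # assumed name for the new check
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # hypothetical
    Statistic='Maximum',
    Period=60,                     # evaluate one-minute data points
    EvaluationPeriods=3,           # alarm after three consecutive failures
    Threshold=1,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    TreatMissingData='missing',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:ebs-alerts'],  # hypothetical topic
)
```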
  6. Last week, there was some great reading about Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS) written by AWS tech leaders. Dr. Werner Vogels wrote Farewell EC2-Classic, it’s been swell, celebrating the 17 years of loyal duty of the original version that started what we now know as cloud computing. You can read how it made the process of acquiring compute resources simple, even though the stack running behind the scenes was incredibly complex. We have come a long way since 2006, and we’re not done innovating for our customers. As celebrated in this year’s AWS Storage Day, Amazon EBS was launched 15 years ago this month. James Hamilton, SVP and distinguished engineer at Amazon, wrote Amazon EBS at 15 Years, about how the service has evolved to handle over 100 trillion I/O operations a day and transfer over 13 exabytes of data daily. As Dr. Werner said in his piece, “it’s a reminder that building evolvable systems is a strategy, and revisiting your architectures with an open mind is a must.” Our innovation efforts driven by customer feedback continue today, and this week is no different.

     Last Week’s Launches
     Here are some launches that got my attention:
     • Renaming Amazon Kinesis Data Analytics to Amazon Managed Service for Apache Flink – You can now use Amazon Managed Service for Apache Flink, a fully managed and serverless service for building and running real-time streaming applications using Apache Flink. All your existing applications running in Kinesis Data Analytics will work as-is, without any changes. To learn more, see my blog post.
     • Extended Support for Amazon Aurora and Amazon RDS – You can now get more time for support, up to three years, for Amazon Aurora and Amazon RDS database instances running MySQL 5.7, PostgreSQL 11, and higher major versions. This will allow you time to upgrade to a new major version to help you meet your business requirements even after the community ends support for these versions.
     • Enhanced Starter Template for AWS Step Functions Workflow Studio – You can now use starter templates to streamline the process of creating and prototyping workflows swiftly, plus a new code mode, which enables builders to move easily between design and code authoring views. With the improved authoring experience in Workflow Studio, you can seamlessly alternate between a drag-and-drop visual builder experience and the new code editor so that you can pick your preferred tool to accelerate development. To learn more, see Enhancing Workflow Studio with new features for streamlined authoring in the AWS Compute Blog.
     • Email Delivery History for Every Email in Amazon SES – You can now troubleshoot individual email delivery problems, confirm delivery of critical messages, and identify engaged recipients on a granular, single-email basis. Email senders can investigate trends in delivery performance and see delivery and engagement status for each email sent using Amazon SES Virtual Deliverability Manager.
     • Response Streaming through Amazon SageMaker Real-time Inference – You can now continuously stream inference responses back to the client to help you build interactive experiences for various generative AI applications such as chatbots, virtual assistants, and music generators. For more details on how to use response streaming along with examples, see Invoke to Stream an Inference Response and How containers should respond in the AWS documentation, and Elevating the generative AI experience: Introducing streaming support in Amazon SageMaker hosting in the AWS Machine Learning Blog.
     For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

     Other AWS News
     Some other updates and news that you might have missed:
     • AI & Sports: How AWS & the NFL are Changing the Game – Over the last 5 years, AWS has partnered with the National Football League (NFL), helping fans better understand the game, helping broadcasters tell better stories, and helping teams use data to improve operations and player safety. Watch AWS CEO Adam Selipsky, former NFL All-Pro Larry Fitzgerald, and the NFL Network’s Cynthia Frelund during their earlier livestream discussing the intersection of artificial intelligence and machine learning in sports.
     • Amazon Bedrock Story from Amazon Science – This is a good article explaining the benefits of using Amazon Bedrock to build and scale generative AI applications with leading foundation models, including Amazon’s Titan FMs, which focus on responsible AI to avoid toxic content.
     • Amazon EC2 Flexibility Score – This is an open source tool developed by AWS to assess any configuration used to launch instances through an Auto Scaling group (ASG) against the recommended EC2 best practices. It converts best practice adoption into a “flexibility score” that can be used to identify, improve, and monitor the configurations.
     To learn more open source news and updates, see this newsletter curated by my colleague Ricardo to bring you the latest open source projects, posts, events, and more.

     Upcoming AWS Events
     Check your calendars and sign up for these AWS events:
     • AWS re:Invent – Ready to start planning your re:Invent? Browse the session catalog now. Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community.
     • AWS Global Summits – The last in-person AWS Summit will be held in Johannesburg on Sept. 26.
     • AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Aotearoa (Sept. 6), Lebanon (Sept. 9), Munich (Sept. 14), Argentina (Sept. 16), Spain (Sept. 23), and Chile (Sept. 30). Visit the landing page to check out all the upcoming AWS Community Days.
     • CDK Day – A community-led, fully virtual event on Sept. 29 with tracks in English and Spanish about CDK and related projects. Learn more at the website.
     You can browse all upcoming AWS-led in-person and virtual events, and developer-focused events such as AWS DevDay.
     — Channy
     This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
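     The SageMaker response streaming launch mentioned above is consumed from the client as an event stream of payload parts. A minimal boto3 sketch, with a hypothetical endpoint name and request body; the exact request/response format depends on the model container, so treat the JSON shape as an assumption:

```python
import boto3
import json

smr = boto3.client('sagemaker-runtime')

response = smr.invoke_endpoint_with_response_stream(
    EndpointName='my-streaming-endpoint',   # hypothetical endpoint name
    ContentType='application/json',
    Body=json.dumps({'inputs': 'Tell me about EBS snapshots.'}),  # assumed payload shape
)

# The body is an event stream; each event carries a chunk of the generated response.
for event in response['Body']:
    if 'PayloadPart' in event:
        print(event['PayloadPart']['Bytes'].decode('utf-8'), end='', flush=True)
```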
  7. In May 2019, Amazon Elastic Block Store (EBS) launched the ability for customers to take crash-consistent snapshots of all Amazon EBS volumes attached to an Amazon EC2 instance with a single API call. Now you can choose to take crash-consistent snapshots of a subset of Amazon EBS data volumes attached to an Amazon EC2 instance. You can also use Amazon Data Lifecycle Manager (DLM) to automate taking crash-consistent snapshots of the same subset of Amazon EBS volumes on a retention schedule defined by DLM policies. View the full article
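     A minimal boto3 sketch of a multi-volume, crash-consistent snapshot that skips the boot volume and excludes a specific data volume. The instance and volume IDs are hypothetical, and the ExcludeDataVolumeIds field is assumed from this announcement; verify it against the current CreateSnapshots API reference:

```python
import boto3

ec2 = boto3.client('ec2')

response = ec2.create_snapshots(
    Description='Crash-consistent snapshot of selected data volumes',
    InstanceSpecification={
        'InstanceId': 'i-0123456789abcdef0',                 # hypothetical instance
        'ExcludeBootVolume': True,                           # skip the root volume
        'ExcludeDataVolumeIds': ['vol-0123456789abcdef0'],   # assumed field from this launch
    },
    TagSpecifications=[{
        'ResourceType': 'snapshot',
        'Tags': [{'Key': 'Backup', 'Value': 'True'}],
    }],
)

for snap in response['Snapshots']:
    print(snap['SnapshotId'], snap['VolumeId'])
```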
  8. Amazon OpenSearch Service now supports the Amazon Elastic Block Store (Amazon EBS) gp3 (General Purpose SSD) volume type, in addition to the existing gp2, Magnetic, and Provisioned IOPS (io1) volumes. You can use gp3 volumes on our latest generation T3, R5, R6g, M5, M6g, C5, and C6g instance families. Amazon EBS gp3 enables customers to provision performance independently of storage capacity and provides better baseline performance at a 9.6% lower price point per GB than existing gp2 volumes on OpenSearch Service. In addition, with gp3 you now get denser storage on the R5, R6g, M5, and M6g instance families, which can help you further optimize your costs. View the full article
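     Switching an existing domain to gp3 is an EBS options update on the domain configuration. A minimal boto3 sketch with a hypothetical domain name and sizes; the Iops and Throughput fields for gp3 in EBSOptions are assumptions to verify against the OpenSearch Service API reference:

```python
import boto3

opensearch = boto3.client('opensearch')

opensearch.update_domain_config(
    DomainName='my-search-domain',   # hypothetical domain
    EBSOptions={
        'EBSEnabled': True,
        'VolumeType': 'gp3',
        'VolumeSize': 100,           # GiB per data node
        # Optional: provision performance independent of capacity (assumed fields).
        'Iops': 3000,
        'Throughput': 125,
    },
)
```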
  9. As your application needs change, Amazon EBS Elastic Volumes allows you to easily increase capacity, tune performance, and change the type of Amazon EBS volumes. Customers are using EBS Elastic Volumes to migrate to gp3 volumes and save up to 20% per GB compared to gp2 volumes. View the full article
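     With Elastic Volumes, migrating a volume from gp2 to gp3 is an in-place modification with no detach step. A minimal boto3 sketch, using a hypothetical volume ID:

```python
import boto3

ec2 = boto3.client('ec2')

VOLUME_ID = 'vol-0123456789abcdef0'  # hypothetical volume

# Change the volume type in place; size and attachment are untouched.
ec2.modify_volume(VolumeId=VOLUME_ID, VolumeType='gp3')

# Track the modification until it reaches 'optimizing' or 'completed'.
mods = ec2.describe_volumes_modifications(VolumeIds=[VOLUME_ID])
for mod in mods['VolumesModifications']:
    print(mod['ModificationState'], mod.get('Progress'))
```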
  10. You can now use AWS Identity and Access Management (IAM) condition keys to specify which resource types are permitted in the retention rules created for Recycle Bin. With Recycle Bin, you can retain deleted EBS snapshots and EBS-backed AMIs for a period of time so that you can recover them in the event of an accidental deletion. You can enable Recycle Bin for all or a subset of the snapshots or AMIs in your account by creating one or more retention rules. Each rule also specifies a retention period. A deleted EBS snapshot or deregistered AMI can be recovered from the Recycle Bin before the retention period expires. View the full article
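     Retention rules are created through the Recycle Bin (rbin) API; the IAM condition keys from this announcement then control which resource types a principal may put in such rules. A minimal boto3 sketch for a snapshot rule; the retention values are placeholders and the resource-type strings are assumptions to verify against the rbin API reference:

```python
import boto3

rbin = boto3.client('rbin')

# Retain deleted EBS snapshots for 7 days so they can be recovered.
rule = rbin.create_rule(
    ResourceType='EBS_SNAPSHOT',   # 'EC2_IMAGE' would cover EBS-backed AMIs (assumed values)
    RetentionPeriod={
        'RetentionPeriodValue': 7,
        'RetentionPeriodUnit': 'DAYS',
    },
    Description='Recycle Bin rule for accidental snapshot deletions',
)
print(rule['Identifier'])
```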
  11. You can now use Elastic Volumes to dynamically increase the capacity and tune the performance of your io2 Block Express volumes with no downtime or performance impact, in the same manner as other EBS volumes. Additionally, you can now create a fully initialized io2 Block Express volume from a Fast Snapshot Restore (FSR) enabled snapshot. Volumes that are created from FSR-enabled snapshots instantly deliver their provisioned performance. These features add to the capabilities of the highest-performance EBS volume type - io2 Block Express. View the full article
  12. AWS Compute Optimizer now analyzes additional Amazon EBS metrics to generate enhanced EC2 instance type recommendations. Enhanced recommendations are now available for Compute Optimizer and Cost Explorer Rightsizing Recommendations customers. View the full article
  13. The Amazon Elastic Block Store (EBS) CSI driver now supports creating volumes on worker nodes running in AWS Outposts subnets. View the full article
  14. December 11th, 2020 – You can now create Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes as small as 125 GB, allowing you to save up to 75% over the previous 500 GB minimum when creating volumes at this new minimum size. View the full article
  15. Amazon Elastic Block Store (EBS) io2 volumes are now supported for SAP workloads. View the full article
  16. AWS Compute Optimizer now supports IOPS and throughput-based EBS volume recommendations. View the full article
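     These recommendations can also be pulled programmatically once Compute Optimizer is opted in for the account. A minimal boto3 sketch with a hypothetical volume ARN; field names in the response are kept to the basics to stay close to what the API documents:

```python
import boto3

co = boto3.client('compute-optimizer')

resp = co.get_ebs_volume_recommendations(
    volumeArns=[
        'arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0',  # hypothetical
    ],
)

# Print the finding (e.g. optimized / not optimized) for each analyzed volume.
for rec in resp['volumeRecommendations']:
    print(rec['volumeArn'], rec['finding'])
```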
  17. Amazon Elastic Block Store (EBS) direct APIs now offer Federal Information Processing Standard (FIPS) 140-2 compliant endpoints in the US Commercial and Canada (Central) AWS Regions. FIPS 140-2 is a U.S. and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information. View the full article
  18. Amazon Data Lifecycle Manager (DLM) now supports the creation and retention of EBS-backed Amazon Machine Images (AMIs). In addition to defining policies that provide a simple, automated way to back up data stored on EBS volumes, you can now create policies targeting EC2 instances to create EBS-backed AMIs. With this feature, you no longer have to rely on custom scripts to manage your AMIs. Nor will you need to manually delete associated snapshots once an AMI has been de-registered. View the full article
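     A minimal boto3 sketch of a Data Lifecycle Manager policy that creates EBS-backed AMIs from tagged instances. The role ARN, target tag, and schedule values are placeholders, and the IMAGE_MANAGEMENT policy type is assumed from this announcement; confirm the exact policy shape in the DLM API reference:

```python
import boto3

dlm = boto3.client('dlm')

dlm.create_lifecycle_policy(
    ExecutionRoleArn='arn:aws:iam::111122223333:role/dlm-ami-role',  # hypothetical role
    Description='Daily EBS-backed AMIs for tagged instances',
    State='ENABLED',
    PolicyDetails={
        'PolicyType': 'IMAGE_MANAGEMENT',   # AMI policies, per the announcement (assumed value)
        'ResourceTypes': ['INSTANCE'],      # AMI policies target instances, not volumes
        'TargetTags': [{'Key': 'Backup', 'Value': 'True'}],
        'Schedules': [{
            'Name': 'DailyAMI',
            'CreateRule': {'Interval': 24, 'IntervalUnit': 'HOURS', 'Times': ['03:00']},
            'RetainRule': {'Count': 7},     # keep the 7 most recent AMIs; older ones are cleaned up
        }],
    },
)
```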
  19. Today, we are reducing the price of Amazon EBS Cold HDD (sc1) volumes by 40%, or an estimated $160 for each 16TB sc1 volume. View the full article
  20. Amazon CloudWatch Application Insights adds additional storage volume metrics to provide further insight into your storage performance and health, along with a new ability to monitor your API Gateway APIs. CloudWatch Application Insights is a capability that helps enterprise customers easily set up application monitoring and enhanced observability for AWS resources. The new Elastic Block Store (EBS) metrics provide further details on storage volumes, and the integration with the API Gateway service provides insight into the various API calls run through the gateway. View the full article
  21. Today AWS announced the availability of gp3, the next-generation General Purpose SSD volume type for Amazon Elastic Block Store (Amazon EBS), which enables customers to provision performance independently of storage capacity and offers up to a 20% lower price point per GB than existing gp2 volumes. With gp3 volumes, customers can scale IOPS (input/output operations per second) and throughput without needing to provision additional block storage capacity, and pay only for the resources they need. View the full article
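     With gp3, IOPS and throughput are independent request parameters when the volume is created. A minimal boto3 sketch; the size, IOPS, throughput, and Availability Zone are placeholders:

```python
import boto3

ec2 = boto3.client('ec2')

volume = ec2.create_volume(
    AvailabilityZone='us-east-1a',   # placeholder AZ
    Size=200,                        # GiB of storage
    VolumeType='gp3',
    Iops=6000,                       # provisioned independently of size
    Throughput=500,                  # MiB/s, also independent of size
    TagSpecifications=[{
        'ResourceType': 'volume',
        'Tags': [{'Key': 'Name', 'Value': 'gp3-example'}],
    }],
)
print(volume['VolumeId'])
```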
  22. Today, we are launching a tiered pricing structure for provisioned IOPS charges on io2 volumes. With this tiered pricing, we are reducing the price of provisioning peak IOPS (64,000 IOPS) on an io2 volume by 15%, or an estimated $608 per month. View the full article
  23. Amazon Web Services (AWS) announces the availability of Amazon EC2 R5b instances, which can utilize up to 60 Gbps of Amazon Elastic Block Store (EBS) bandwidth and 260K IOPS (I/O operations per second) for large relational database workloads. These instances offer significantly higher EBS performance across all instance sizes, ranging from 10 Gbps of EBS bandwidth on the smaller instance sizes to 60 Gbps on the largest instance size. R5b instances are powered by custom second-generation Intel® Xeon® Scalable processors (Cascade Lake) with a sustained all-core turbo frequency of 3.1 GHz. These new instances are designed for workloads such as relational databases and file systems that can take advantage of the improved EBS bandwidth and IOPS. View the full article
  24. It is very important to have data backups in the cloud for data recovery and protection. EBS snapshots play an important role in backing up your EC2 instance data (root volumes and additional volumes). Even though snapshots are considered a “poor man’s backup”, they give you a point-in-time backup and faster restore options to meet your RPO. Towards the end of the article, I have added some key snapshot features and some best practices for managing snapshots.

AWS EBS Snapshot Automation
Snapshots are the cheapest and easiest way to enable backups for your EC2 instances or EBS volumes. There are three ways to take automated snapshots:
     • EBS Lifecycle Manager
     • CloudWatch Events
     • Lambda functions
In this tutorial, I will guide you through automating EBS snapshot creation and deletion using all three approaches.

EBS Snapshot Automation with Lifecycle Manager
The EC2 lifecycle manager is native AWS functionality for managing the lifecycle of EBS volumes and snapshots. It is the quickest and easiest way to automate EBS snapshots. It works on the concept of tags: based on instance or volume tags, you can group EBS volumes and perform snapshot operations in bulk or for a single instance. Follow the steps below to set up a snapshot lifecycle policy.

Step 1: Tag your EC2 instances and volumes. The lifecycle manager works with instance and volume tags, so the instances and volumes must be tagged to identify the snapshot candidates. You can use the following tag on the instances and volumes that need automated snapshots: Key = Backup, Value = True.

Step 2: Find the EBS lifecycle manager to create a snapshot lifecycle policy. Head over to the EC2 dashboard and select the “Lifecycle Manager” option under the ELASTIC BLOCK STORE category. You will be taken to the lifecycle manager dashboard. Click the “Create Snapshot Lifecycle Policy” button.

Step 3: Add the EBS snapshot lifecycle policy rules. Enter the policy details and make sure you select the right tags for the volumes you want snapshotted. Note: you can add multiple tags to target specific volumes. Enter the snapshot schedule details based on your requirements. You can choose a retention type of either count or age; for regular backups, count is the ideal choice. Also apply proper tags to identify the snapshots. There are two optional parameters, for snapshot high availability and fast snapshot restore; you can choose these options for production volumes, but keep in mind that both will incur extra charges. Select an IAM role that has permission to create and delete snapshots. If you don’t have an IAM role, you can use the default role option and AWS will automatically create a role for snapshots, but I recommend creating a custom role and using it with the policy to keep track of IAM roles. Also select “enable policy” so the policy becomes active immediately after creation, then click create policy. The policy manager will now automatically create snapshots based on the schedules you have added.

Create EBS Volume Snapshots with CloudWatch Events
CloudWatch custom events and schedules can be used to create EBS snapshots. You can choose AWS service events for CloudWatch to trigger custom actions. To demonstrate this, I will use a CloudWatch schedule to create EBS snapshots. Follow the steps given below.

Step 1: Create a CloudWatch schedule. Head over to the CloudWatch service and click create a rule under the rule options. You can choose either a fixed schedule or a cron expression. Under targets, search for ec2 and select the “EC2 CreateSnapshot API Call” option. Get the volume ID from the EBS volume information, apply it to the Volume ID field, and click “Configure details”. Create more targets if you want to take snapshots of more volumes. Enter the rule name and description and click create rule. That’s it: based on the CloudWatch schedule, the snapshots will be created.

Automate EBS Snapshot Creation and Deletion with a Lambda Function
If you have a use case where the lifecycle manager does not meet your requirements, you can opt for Lambda-based snapshot creation. Most such use cases are unscheduled activities; one example is taking snapshots just before updating or upgrading stateful systems. You can have an automation that triggers a Lambda function to perform the snapshot action.

Getting Started with Lambda-Based EBS Snapshots
We will use Python 2.7 scripts, Lambda, an IAM role, and a CloudWatch event schedule for this setup. For this Lambda function to work, you need to create a tag named “backup” with the value true on all the instances you want backed up. To set up a Lambda function for creating automated snapshots, you need the following:
     • A snapshot creation Python script with the necessary parameters.
     • An IAM role with snapshot create, modify, and delete access.
     • A Lambda function.

Configure the Python Script
The following Python code creates snapshots of all the instances that have a tag named “backup”. Note: You can get all the code from here.

```python
import boto3
import collections
import datetime

ec = boto3.client('ec2')


def lambda_handler(event, context):
    # Find all instances that carry a "backup"/"Backup" tag.
    reservations = ec.describe_instances(
        Filters=[
            {'Name': 'tag-key', 'Values': ['backup', 'Backup']},
        ]
    ).get('Reservations', [])

    instances = sum([[i for i in r['Instances']] for r in reservations], [])

    print "Found %d instances that need backing up" % len(instances)

    to_tag = collections.defaultdict(list)

    for instance in instances:
        # Per-instance retention can be set with a "Retention" tag; default is 10 days.
        try:
            retention_days = [
                int(t.get('Value')) for t in instance['Tags']
                if t['Key'] == 'Retention'][0]
        except IndexError:
            retention_days = 10

        for dev in instance['BlockDeviceMappings']:
            if dev.get('Ebs', None) is None:
                continue
            vol_id = dev['Ebs']['VolumeId']
            print "Found EBS volume %s on instance %s" % (
                vol_id, instance['InstanceId'])

            snap = ec.create_snapshot(
                VolumeId=vol_id,
            )

            to_tag[retention_days].append(snap['SnapshotId'])

            print "Retaining snapshot %s of volume %s from instance %s for %d days" % (
                snap['SnapshotId'],
                vol_id,
                instance['InstanceId'],
                retention_days,
            )

    # Tag each new snapshot with the date on which it should be deleted.
    for retention_days in to_tag.keys():
        delete_date = datetime.date.today() + datetime.timedelta(days=retention_days)
        delete_fmt = delete_date.strftime('%Y-%m-%d')
        print "Will delete %d snapshots on %s" % (len(to_tag[retention_days]), delete_fmt)
        ec.create_tags(
            Resources=to_tag[retention_days],
            Tags=[
                {'Key': 'DeleteOn', 'Value': delete_fmt},
                {'Key': 'Name', 'Value': "LIVE-BACKUP"}
            ]
        )
```

You can also decide on the retention time for the snapshots. By default, the code sets the retention period to 10 days. If you want to reduce or increase the retention time, change the following parameter in the code: retention_days = 10. The script tags each snapshot with a “DeleteOn” key whose value is the deletion date calculated from the retention days; this is what later allows snapshots older than the retention time to be deleted.

Lambda Function to Automate Snapshot Creation
Now that we have our Python script ready for creating snapshots, it has to be deployed as a Lambda function. How you trigger the Lambda function depends entirely on your use case. For demo purposes, we will set up CloudWatch triggers to execute the Lambda function whenever a snapshot is required. Follow the steps given below to create the Lambda function.

Step 1: Head over to the Lambda service page and select “create lambda function”.
Step 2: Choose “Author from Scratch” and the Python 2.7 runtime. Also select an existing IAM role with snapshot create permissions. Click the “Create Function” button after filling in the details.
Step 3: On the next page, scroll down to the function code editor. Copy the Python script from the section above into the editor and save it. Once saved, click the “Test” button; it opens an event pop-up. Enter an event name and create it, then click “Test” again and you will see the code executing along with its logs. As per the code, it should create snapshots of all volumes of any instance that has a tag named “Backup: True”.
Step 4: Now you have a Lambda function ready to create snapshots. You have to decide which triggers should invoke the function. If you click the “Add Trigger” button on the function dashboard, it lists all the possible trigger options; configure one based on your use case. It can be an API Gateway call or a CloudWatch event trigger like the one explained above.

Automated Deletion of EBS Snapshots Using Lambda
We have seen how to create a Lambda function that creates snapshots of instances tagged with a “backup” tag. We cannot let the snapshots pile up over time; that is why we used the retention days in the Python code, which tags each snapshot with its deletion date. The deletion script scans for snapshots whose tag value matches the current date and deletes every snapshot that matches. This Lambda function runs every day to remove the old snapshots. Create a Lambda function with a CloudWatch event schedule of one day; you can follow the same steps explained above for creating the Lambda function. Here is the Python code for snapshot deletion:

```python
import boto3
import re
import datetime

ec = boto3.client('ec2')
iam = boto3.client('iam')


def lambda_handler(event, context):
    account_ids = list()
    try:
        """
        You can replace this try/except by filling in `account_ids` yourself.
        Get your account ID with:
        > import boto3
        > iam = boto3.client('iam')
        > print iam.get_user()['User']['Arn'].split(':')[4]
        """
        iam.get_user()
    except Exception as e:
        # Use the exception message to get the account ID the function executes under.
        account_ids.append(re.search(r'(arn:aws:sts::)([0-9]+)', str(e)).groups()[1])

    # Find snapshots whose DeleteOn tag matches today's date and delete them.
    delete_on = datetime.date.today().strftime('%Y-%m-%d')
    filters = [
        {'Name': 'tag-key', 'Values': ['DeleteOn']},
        {'Name': 'tag-value', 'Values': [delete_on]},
    ]
    snapshot_response = ec.describe_snapshots(OwnerIds=account_ids, Filters=filters)

    for snap in snapshot_response['Snapshots']:
        print "Deleting snapshot %s" % snap['SnapshotId']
        ec.delete_snapshot(SnapshotId=snap['SnapshotId'])
```

How to Restore an EBS Snapshot
You can restore a snapshot in two ways:
     • Restore an EBS volume from the snapshot.
     • Restore an EC2 instance from the snapshot.
While restoring a snapshot you can optionally change the volume size, disk type, and Availability Zone.

Restore an EBS Volume from a Snapshot
Step 1: Head over to snapshots, select the snapshot you want to restore, open the “Actions” dropdown, and click create volume.
Step 2: Fill in the required details and click the “create volume” option. That’s it: your volume will be created. You can mount this volume on the required instance to access its data.

Restore an EC2 Instance from a Snapshot
You can restore an EC2 instance in two simple steps: create an image (AMI) from the snapshot, then launch an instance from that AMI.
Step 1: Head over to snapshots, select the snapshot you want to restore, open the “Actions” dropdown, and click create image.
Step 2: Enter the AMI name and description, modify the required parameters, and click “Create Image” to register the AMI.
Step 3: Select AMIs from the left panel menu, select the AMI, and choose launch from the “Actions” dropdown. This takes you to the generic instance launch wizard, and you can launch the VM as you normally would for any EC2 instance.

EBS Snapshot Features
Following are the key features of EBS snapshots:
     • Snapshot backend storage is S3: whenever you take a snapshot, it gets stored in Amazon S3.
     • EBS snapshots are incremental: every time you request a snapshot of your EBS volume, only the data that changed on the disk (the delta) is copied to the new snapshot. So irrespective of the number of snapshots, you only pay for the changed data present in the volume, meaning unchanged data never gets duplicated between snapshots. For example, your disk can be 20 GB while your snapshot storage is 30 GB because of the changes recorded at every snapshot creation. You can read more about this here.

EBS Snapshot Best Practices
Following are some best practices you can follow to manage EBS snapshots:
     • Standard tagging: tag your EBS volumes with standard tags across all your environments. This enables well-managed snapshot lifecycle management with the lifecycle manager, and tags also help in tracking the cost associated with snapshots, since you can produce billing reports based on tags.
     • Application data consistency: for consistent snapshot backups, it is recommended to stop the I/O activity on your disk before performing the disk snapshot.
     • Simultaneous snapshot requests: snapshots do not affect disk performance; however, simultaneous snapshot requests could affect disk performance.
View the full article
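     The console restore steps in the tutorial above can also be scripted with boto3. A minimal sketch that restores a volume from a snapshot and attaches it to an instance; the snapshot ID, instance ID, Availability Zone, and device name are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client('ec2')

SNAPSHOT_ID = 'snap-0123456789abcdef0'   # hypothetical snapshot to restore

# Create a new volume from the snapshot (the console's "create volume" action).
volume = ec2.create_volume(
    SnapshotId=SNAPSHOT_ID,
    AvailabilityZone='us-east-1a',   # must match the AZ of the target instance
    VolumeType='gp3',                # the type can be changed during restore
)
volume_id = volume['VolumeId']

# Wait until the restored volume is available before attaching it.
ec2.get_waiter('volume_available').wait(VolumeIds=[volume_id])

# Attach the restored volume so its data can be mounted from the instance.
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId='i-0123456789abcdef0',   # hypothetical instance ID
    Device='/dev/sdf',
)
print(volume_id)
```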