Showing results for tags 'backups'.

  1. We're excited to announce a major enhancement to Google Cloud Backup and DR, making it simpler than ever to safeguard your critical Google Compute Engine virtual machines (VMs). You can now leverage the power of Google Cloud tags, including inheritance, to easily configure backup policies for Compute Engine VMs, ensuring consistent protection of your dynamic cloud environments.

The challenge of managing VM backups

Managing backups for a large number of Compute Engine VMs can be complex, especially when VMs are frequently added, removed, or modified. Manually assigning backup policies to individual VMs is time-consuming and can be error-prone, potentially leaving vital resources unprotected.

The power of tag-based backups

With our new tag-based backup approach, you can leverage the organizational power of tags to automate the protection of your Compute Engine VMs. Here's how it works:

  • Assign your tags: Apply meaningful tags to your organization, folders, projects, or directly to your VMs. These tags might reflect application names, environments (production, development, testing), or criticality levels.
  • Create tag-based policies: Within Google Backup and DR, you can create backup policies with your desired backup schedules, retention periods, and recovery regions. After policy creation, you can assign them to specific tags.
  • Automate protection: Any VM with a matching tag is automatically assigned the backup policy. New VMs inheriting those tags immediately gain protection.

Benefits of tag-based backups

  • Simplified management: Drastically reduce the administrative overhead of configuring VM backups at scale by updating the policy attached to the tag rather than each individual VM's backup.
  • Consistent protection: Ensure new VMs with tags inherited from the project, folder, or organization are automatically protected without manual intervention.
  • Flexibility: Adjust your backup strategies easily by modifying tags or creating new tag-based policies, including support for tags assigned using Terraform.

For example, you can easily tag all production projects with the tag backupdr-dynamicprotect:production. You could then create a backup policy that does the following:

  • Takes daily snapshots of VMs in those projects that inherit the backupdr-dynamicprotect:production tag
  • Retains backups for 30 days
  • Updates your Terraform script to include your new protection tag to ensure protection of every new Compute Engine instance that's created

Getting started

  • Review the Google Backup and DR documentation on tag-based backups.
  • Develop a tagging strategy for your Compute Engine VMs.
  • Create backup policies that target your chosen tags.
  • Attend ARC205, Protect your critical workloads and recover your most critical asset - your data, at Google Cloud Next.

View the full article
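As a minimal sketch of the tagging step described above, here is how the example backupdr-dynamicprotect:production tag might be created and bound with the gcloud CLI - the organization ID and project number are placeholders, and the exact flag shapes are worth verifying against current gcloud documentation (the policy-to-tag assignment itself happens inside Backup and DR):

# Create the tag key and value at the organization level (IDs are placeholders)
gcloud resource-manager tags keys create backupdr-dynamicprotect \
    --parent=organizations/123456789012
gcloud resource-manager tags values create production \
    --parent=123456789012/backupdr-dynamicprotect

# Bind the tag to a production project so its VMs inherit it
gcloud resource-manager tags bindings create \
    --tag-value=123456789012/backupdr-dynamicprotect/production \
    --parent=//cloudresourcemanager.googleapis.com/projects/PROJECT_NUMBER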
  2. When deploying ransomware on a target system, threat actors will almost always look to compromise the backups, too. Organizations that lose their backups end up paying a lot more in ransom demands, and losing even more in the recovery process, a new report from cybersecurity researchers Sophos claims, highlighting the importance of keeping backups safe. The company surveyed almost 3,000 IT and cybersecurity professionals whose organizations suffered a ransomware attack in 2023. Almost all (94%) respondents said the attackers went after their backups, too, rising to 99% in the state and local government, media, leisure, and entertainment sectors.

Higher demands

Organizations in the energy, oil and gas, and utilities sectors were most likely to lose their backups to ransomware (79%), followed by education (71%). Across all sectors, the researchers said, more than half (57%) of all compromise attempts were successful. As a result, the ransom demands grew. Victims whose backups were compromised received, on average, more than two times the ransom demand of those who kept their backups safe: the median ransom demand was around $2.3 million (backups compromised) versus $1 million (backups not compromised). What's more, organizations with compromised backups were almost twice as likely to pay the ransom as those with safe backups (67% compared to 36%). The median ransom payment for organizations with compromised backups was also double - $2 million versus $1.062 million. These firms were also unable to negotiate down the ransom payment, as the attackers were well aware of the strong position they held during the negotiations. "Backups are a key part of a holistic cyber risk reduction strategy," the researchers said. "If your backups are accessible online, you should assume that adversaries will find them. Organizations would be wise to take regular backups and store them in multiple locations; be sure to add MFA (multi-factor authentication) to your cloud backup accounts to help prevent attackers from gaining access; practice recovering from backups; and secure your backups." "Monitor for and respond to suspicious activity around your backups, as it may be an indicator that adversaries are attempting to compromise them."

More from TechRadar Pro

  • These two ransomware giants are joining forces to hit more victims across the world
  • Here's a list of the best firewalls around today
  • These are the best endpoint security tools right now

View the full article
  3. Sunday, March 31st, 2024, is World Backup Day — and some of the biggest storage manufacturers celebrate with great deals on SSDs and more. View the full article
  4. AWS now provides customers with a new AWS managed policy for the Microsoft Windows Volume Shadow Copy Service (VSS) in Amazon Elastic Compute Cloud (EC2). With this policy, customers no longer have to configure individual permissions or create their own policies to manage VSS. Customers can simply use this new policy to ensure the necessary permissions are in place for creating application-consistent snapshots using the AWS VSS solution. View the full article
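As a rough sketch, attaching the new managed policy to the role behind an EC2 instance profile could look like this - the role name is a placeholder, and the managed-policy name shown is an assumption to confirm against the announcement or the IAM console:

# Attach the AWS managed VSS policy (name assumed) to an instance-profile role
aws iam attach-role-policy \
    --role-name MyEC2VssRole \
    --policy-arn arn:aws:iam::aws:policy/AWSEC2VssSnapshotPolicy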
  5. With World Backup Day approaching, many organizations are paying closer attention to potential security threats and blind spots in their backup processes. The post CRM Backup Trends to Watch on World Backup Day appeared first on Security Boulevard. View the full article
  6. The post 8 Best Open-Source Disk Cloning & Backup Tools for Linux (2024) first appeared on Tecmint: Linux Howtos, Tutorials & Guides. Disk cloning is the process of copying data from one hard disk to another. While you can perform this task using copy-and-paste methods, it's important… View the full article
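For a sense of what these tools automate, the classic low-level approach is a block-for-block copy with dd - the device names here are examples only, and dd will silently overwrite the target disk, so verify both with lsblk first:

# Clone /dev/sda onto /dev/sdb block-for-block (destructive to /dev/sdb)
sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress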
  7. March 30 is World Backup Day. No, you don't get the day off. It's an initiative backed by some of the providers we recommend in our cloud backup guide, like Mega and Backblaze, and even Amazon, asking everyone - individuals and organizations alike - to make at least one backup of their precious data. At TechRadar Pro, we, and maybe you too, reader, believe that any person or business refusing to admit the mortality of their external hard drives and SSDs is possibly (definitely) from another planet. Backblaze data from 2021 suggests that 21% of people have never made a backup. This makes me weep, and so it falls to me to attempt to turn the tide. You can be the most careful person in the world, but your storage will still fail, eventually. The mechanical platters of a hard drive are more prone to failure than a solid state drive because, as the name implies, the latter has no moving parts. So, you can buy any combination of these until the end of time itself for an on-premise backup solution, but this poses three problems: 1) the expense, 2) the sheer amount of space this'll take up if you start putting those drives in servers, and 3) the relative lack of security of a purely on-premise storage configuration.

The 3-2-1 backup principle

Yes, for truly secure, preserved data, it's not enough to keep all of your storage devices on one site. The 3-2-1 backup principle, revered by such big names as IONOS and Seagate, suggests that you should have three copies of all your data at any one time, across two different types of media, and one of these backups should be held off-site. And, because data is truly mortal, you'll be replacing these backups and the kinds of media and devices that you keep them on forever, and you'll love it, because you value your data, right? This maintenance is one of the draining things about on-prem backups. The principle is decades-old, well-worn to the point that even we have published contributors claiming that 3-2-1 backups are out of date because the cloud is driving the obsolescence of ye olde tape media and compact discs - the things that immediately come to mind when thinking about different storage types. Well, sure. But that brings us to another of the big disadvantages of on-premise backups: if your business has the luxury of a second site to split backups between, that's fine, but if you're committed to 3-2-1, responsible data preservation, and circumstances mean that you don't have that luxury, how exactly do you make an off-site backup? In primordial times, this was some conundrum, but in the twenty-first century, I'd say that cloud backups can accommodate 3-2-1, no matter what the naysayers think.

Cloud backup and security

Cloud backup entails trusting your data to another company's servers, usually in some data center somewhere, and paying a monthly or annual fee for the privilege. In the short to medium term, this can make financial sense - or even beyond that, with lifetime cloud storage plans offering much the same but for a hefty one-off fee. The other big advantage of cloud backups is that they solve the problem of where you keep your off-site backup, because you've ceded control over that to another faceless company, which will have its own data loss prevention strategies, and backups of your backup.
Getting another company involved can be a blessing and a curse, though: we recognise that handing off your data, which may mean sensitive client data, to another company's servers may sound like, well, a bad idea. To head that off, a number of our recommended providers, like pCloud, MEGA, and Icedrive, offer end-to-end encryption, sometimes referred to as zero-knowledge or client-side encryption, meaning that the company handling your storage has no access to your files, nor the ability to view their metadata. It's a nice assurance to have in an age of well-justified fears of just how humanity can abuse the internet, and also a very recent phenomenon that many household names have only just started to take note of. Google Drive, for example, only permits end-to-end encryption for Google Workspace accounts belonging to organizations whose administrators have enabled it. Solo professionals looking to use Google Drive will have to rely on server-side encryption - which might protect your files from ne'er-do-wells hacking into Google Towers, but not from Google itself, or anyone with unauthorized access to your account. Google Drive also happens to be, for our and indeed your money, one of the cheapest cloud backup providers going, so that may be something to keep in mind.

Cloud storage vs backup

Another thing to think about is that Google's offering, for instance, is also known to many as a cloud storage provider, but that's not quite the same thing as a cloud backup provider. If a service lets you back up - ideally an entire drive, but at the absolute bare minimum a single folder on a device - to the cloud, that's what you want in this context. Cloud storage, meanwhile, is focused on keeping copies of specific files, not whole drives, and not all cloud storage services offer cloud backups.

Back it up, wrap it up

I wish I had a more in-depth, less snippy argument to present for backing up your data at all - I don't. Do you like having your stuff? Well then. But I do think that the argument for making cloud backups, not just during this momentous March but in general, is strong and clear. Cloud backups alleviate, if not remove completely, a whole lot of the obstacles that the 3-2-1 strategy presents, and the industry is far along enough that providers which aren't Google, Amazon or Microsoft are popping up left, right, and center, if that's a consideration. We can offer recommendations for cloud backup providers, but the choice, ultimately, is yours. Read provider websites to understand the features offered, and whether any one service is even fit for purpose before you buy in because, as with any business decision, it's important to do your research.

More from TechRadar Pro

  • Cloud backup: Our ultimate checklist to get you the perfect provider

View the full article
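For the off-site leg of 3-2-1, a cloud copy can be as simple as a scheduled sync job - shown below as an illustrative sketch using rclone, with the remote name and paths as placeholders for whichever provider you configure:

# Mirror a local data directory to a configured cloud remote
rclone sync /home/user/data mycloud:backups/data --progress

# Run it daily at 02:00 via cron
# 0 2 * * * rclone sync /home/user/data mycloud:backups/data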
  8. The post Duplicity – Create Encrypted Incremental Backups in Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides. Experience shows that you can never be too paranoid about system backups. When it comes to protecting and preserving precious data, it is best to… View the full article
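The tutorial has the details; for a flavor of the tool, a first backup and a restore might look like the following sketch (the source path, SFTP target, and restore directory are placeholders):

# Encrypted incremental backup of /home/user to a remote over SFTP
duplicity /home/user sftp://backup@backup.example.com//srv/backups/home

# Restore the most recent backup into a local directory
duplicity restore sftp://backup@backup.example.com//srv/backups/home /tmp/restored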
  9. A class-action lawsuit filed in early March 2024 accuses Apple of restricting certain files critical to cloud backups of its devices to its own iCloud platform, and of raising the price of the service to the point where it is 'generating almost pure profit'. The filing for Gamboa v. Apple Inc, submitted in the US District Court of the Northern District of California, would include a nationwide class of users impacted by the monopoly, and a class of Californians who claim to have been overcharged for an iCloud plan. We are not lawyers and make no claim to be scholars of California's corporation laws - however, with Bloomberg noting that iCloud gives Apple a 70% share of the cloud storage market, owing to the sheer ubiquity of its mobile devices, we think it's fair to question the fairness of locking backups to one service and trapping users in one ever-increasing pricing model.

iCloud's competition

iCloud's competitors include Amazon, Google and Microsoft, which all have cloud storage services available on iOS devices for the purposes of storing user data. The prospective lawsuit alleges, however, that requiring the use of iCloud for device backups makes maintaining accounts across multiple services - which may be cheaper, and have a more generous free cloud storage allowance than iCloud's 5GB - inconvenient. Apple has yet to respond to the filing, but it seems unlikely that it'll be able to convincingly argue that backup data specific to Apple devices is sensitive enough to require locking to iCloud when, in 2022, Apple settled another class-action lawsuit, Williams v. Apple Inc, for $14.8 million, allowing it to continue to deny that it breached its own terms and conditions by storing user data on servers belonging to its competitors.

More from TechRadar Pro

  • Apple's zero day threats doubled last year – three things IT must do now
  • We've also listed the best business smartphones right now
  • Microsoft is facing another major EU investigation - this time around blocking security software purchases

View the full article
  10. Amazon DocumentDB (with MongoDB compatibility) Elastic Clusters now support automated backups and the ability to copy snapshots. These new features enhance the resilience of your applications and help you meet the recovery objectives of your Elastic Clusters. View the full article
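A hedged sketch of the snapshot-copy half with the AWS CLI - the docdb-elastic subcommand and parameter names should be verified against the current CLI reference, and the ARN is a placeholder:

# Copy an Elastic Clusters snapshot, e.g. for longer retention or re-encryption
aws docdb-elastic copy-cluster-snapshot \
    --snapshot-arn arn:aws:docdb-elastic:us-east-1:111122223333:cluster-snapshot/my-snapshot \
    --target-snapshot-name my-snapshot-copy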
  11. In Kubernetes, persistent volumes were initially managed by in-tree plugins, but this approach hindered development and feature implementation, since in-tree plugins were compiled and shipped as part of the Kubernetes source code. To address this, the Container Storage Interface (CSI) was introduced, standardizing how storage systems are exposed to containerized workloads. CSI drivers for standard volumes like Google Cloud PersistentDisk were developed and are continuously evolving, and the implementation for in-tree plugins is being transitioned to CSI drivers. If you have a Google Kubernetes Engine (GKE) cluster that is still using in-tree volumes, follow the instructions below to learn how to migrate to CSI-provisioned volumes.

Why migrate?

There are various benefits to using the gce-pd CSI driver, including improved deployment automation, customer-managed keys, volume snapshots and more. In GKE version 1.22 and later, CSI Migration is enabled: existing volumes that use the gce-pd provisioner are managed through CSI drivers via transparent migration in the Kubernetes controller backend, and no changes are required to any StorageClass. However, you must use the pd.csi.storage.gke.io provisioner in the StorageClass to enable features like CMEK or volume snapshots.

An example of a StorageClass with the in-tree storage plugin versus the CSI driver:

apiVersion: storage.k8s.io/v1
kind: StorageClass
...
provisioner: kubernetes.io/gce-pd          <--- in-tree
provisioner: pd.csi.storage.gke.io         <--- CSI provisioner

[Please perform the below actions in your test/dev environment first]

Before you begin: To test migration, create a GKE cluster. Once the cluster is ready, check the provisioner of your default storage class. If it's already the CSI provisioner pd.csi.storage.gke.io, change it to gce-pd (in-tree) by following these instructions. Refer to this page if you want to deploy a stateful PostgreSQL database application in a GKE cluster; we will refer to this sample application throughout this blog. Again, make sure that a storage class (standard) with the gce-pd provisioner creates the volumes (PVCs) attached to the pods. As a next step, we will back up this application using Backup for GKE (BfG) and restore it while changing the provisioner from gce-pd (in-tree) to pd.csi.storage.gke.io (the CSI driver).

Create a backup plan

Please follow this page to ensure you have BfG enabled on your cluster. When you enable the BfG agent in your GKE cluster, BfG provides a CustomResourceDefinition that introduces a new kind of Kubernetes resource: the ProtectedApplication. For more on ProtectedApplication, please visit this page. A sample manifest file:

kind: ProtectedApplication
apiVersion: gkebackup.gke.io/v1alpha2
metadata:
  name: postgresql
  namespace: blog
spec:
  resourceSelection:
    type: Selector
    selector:
      matchLabels:
        app.kubernetes.io/name: postgresql-ha
  components:
    - name: postgresql
      resourceKind: StatefulSet
      resourceNames: ["db-postgresql-ha-postgresql"]
      strategy:
        type: BackupAllRestoreAll
        backupAllRestoreAll: {}

If the Ready to backup status shows as true, your application is ready for backup:

❯ kubectl describe protectedapplication postgresql
......
Status:
  Ready To Backup: true

Let's create a backup plan following these instructions. Up until now, we have only created a backup plan and haven't taken an actual backup. But before we start the backup process, we have to bring down the application.

Bring down the application

We have to bring down the application right before taking its backup (this is where the application downtime starts). We are doing this to prevent any data loss during the migration. The application is currently exposed via the service db-postgresql-ha-pgpool, which selects the database pods by their app.kubernetes.io/instance, app.kubernetes.io/name, and app.kubernetes.io/component labels. We'll patch this service by overriding those selectors with empty values so that no new request can reach the database. Save this file as patch.yaml and apply it using kubectl:

spec:
  selector:
    app.kubernetes.io/instance: ""
    app.kubernetes.io/name: ""
    app.kubernetes.io/component: ""

❯ kubectl patch service db-postgresql-ha-pgpool --patch-file patch.yaml
service/db-postgresql-ha-pgpool patched

You should no longer be able to connect to your app (i.e., the database).

Start a backup manually

Navigate to the GKE Console → Backup for GKE → Backup Plans and click Start a backup.

Restore from the backup

We will restore this backup to a target cluster. Note that you do have the option to select the same cluster as both source and target, but the recommendation is to use a new GKE cluster as your target cluster. The restore process completes in two steps:

  1. Create a restore plan
  2. Restore a backup using the restore plan

Create a restore plan

You can follow these instructions to create a restore plan. While adding the transformation rule(s), we will change the storage class from standard to standard-rwo: Add transformation rules → Add Rule (Rename a PVC's Storage Class). Please see this page for more details. Next, review the configuration and create the plan.

Restore backup using the (previously created) restore plan

When a backup is restored, the Kubernetes resources are re-created in the target cluster. Navigate to the GKE Console → Backup for GKE → BACKUPS tab to see the latest backup(s). Select the backup you took before bringing down the application to view its details and click SET UP A RESTORE. Fill in all the mandatory fields and click RESTORE. Once done, switch the context to the target cluster and see how BfG has restored the application successfully in the same namespace. The data was restored into new PVCs (verify with kubectl -n blog get pvc). Their storage class is gce-pd-gkebackup-de, which is a special storage class used to provision volumes from the backup. Getting the details of one of the restored volumes confirms that BfG has successfully changed the provisioner from in-tree to CSI: the new volumes are created by the CSI provisioner. Great!

Bring up the application

Let's patch the service db-postgresql-ha-pgpool back with the original selectors to bring our application up. Save this patch file as new_patch.yaml and apply it using kubectl. We are able to connect to our database application now.
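Recapping the verification step above, here is one concrete way to confirm the restored volumes are CSI-provisioned - the namespace follows the example in this post, the PV name is a placeholder, and the jsonpath field applies to CSI-backed PersistentVolumes:

# List the restored PVCs, then inspect the driver behind one of the bound PVs
kubectl -n blog get pvc
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.driver}'
# Expected output: pd.csi.storage.gke.io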
Note: This downtime will depend on your application size. For more information, please see this link.

Use it today

Backup for GKE can help you reduce the overhead of this migration with minimal downtime. It can also help you prepare for disaster recovery. View the full article
  12. The post How to Backup and Restore VMs in Proxmox first appeared on Tecmint: Linux Howtos, Tutorials & Guides. This is our fourth guide in the Proxmox series; in this tutorial, we will explore how to back up and restore VMs in Proxmox. As a… View the full article
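The guide covers the web UI; on the Proxmox command line, the equivalent operations are roughly the following sketch (the VM IDs, storage name, and dump filename are examples):

# Snapshot-mode backup of VM 100 to the 'local' storage
vzdump 100 --storage local --mode snapshot

# Restore the resulting archive as a new VM with ID 101
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_03_20-00_00_00.vma.zst 101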
  13. How to run a manual backup from the command line

Access the server's command line as the 'root' user via SSH or "Terminal" in WHM, then run the following command to initiate the backup process:

/usr/local/cpanel/bin/backup

Please note that it may be necessary to append the "--force" option if backups are not scheduled to run on the day you are using the command:

/usr/local/cpanel/bin/backup --force

The post Cpanel Cheatsheet appeared first on DevOpsSchool.com. View the full article
  14. Backup is defined as the process of creating copies of data and storing them in separate locations or mediums, while restore is defined as the process of retrieving the backed-up data and returning it to its original location or system or to a new one. In other words, backup is akin to data preservation, and restore is in essence data retrieval. View the full article
  15. Today, AWS Backup announces support for AWS CloudFormation resource exclusion, allowing you to exclude resources from your application backups. AWS Backup is a fully managed service that centralizes and automates data protection across AWS services and hybrid workloads. Now, when assigning resources to backup plans, you can exclude specific resources within your CloudFormation stacks, optimizing cost on non-critical resources. View the full article
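As a hedged sketch, excluding a resource while assigning a CloudFormation stack to a backup plan might look like this - the selection JSON shape (notably NotResources) should be checked against the current AWS Backup API reference, and every ID and ARN below is a placeholder:

# Assign a stack to a plan while excluding one non-critical resource
aws backup create-backup-selection \
    --backup-plan-id my-plan-id \
    --backup-selection '{
      "SelectionName": "stack-minus-noncritical",
      "IamRoleArn": "arn:aws:iam::111122223333:role/BackupRole",
      "Resources": ["arn:aws:cloudformation:us-east-1:111122223333:stack/my-app/*"],
      "NotResources": ["arn:aws:dynamodb:us-east-1:111122223333:table/scratch-table"]
    }'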
  16. This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS! A new week starts, and Spring is almost here! If you're curious about AWS news from the previous seven days, I've got you covered.

Last Week's Launches

Here are the launches that got my attention last week:

  • Amazon S3 – Last week was AWS Pi Day 2023, celebrating 17 years of innovation since Amazon S3 was introduced on March 14, 2006. For the occasion, the team released many new capabilities: S3 Object Lambda now provides aliases that are interchangeable with bucket names and can be used with Amazon CloudFront to tailor content for end users. S3 now supports datasets that are replicated across multiple AWS accounts with cross-account support for S3 Multi-Region Access Points. You can now create and configure replication rules to automatically replicate S3 objects from one AWS Outpost to another. Amazon S3 has also simplified private connectivity from on-premises networks: with private DNS for S3, on-premises applications can use AWS PrivateLink to access S3 over an interface endpoint, while requests from your in-VPC applications access S3 using gateway endpoints. We released Mountpoint for Amazon S3, a high performance open source file client. Read more in the blog. Note that Mountpoint isn't a general-purpose networked file system, and comes with some restrictions on file operations.
  • Amazon Linux 2023 – Our new Linux-based operating system is now generally available. Sébastien's post is full of tips and info.
  • Application Auto Scaling – Can now use arithmetic operations and mathematical functions to customize the metrics used with Target Tracking policies. You can use it to scale based on your own application-specific metrics. Read how it works with Amazon ECS services.
  • AWS Data Exchange for Amazon S3 is now generally available – You can now share and find data files directly from S3 buckets, without the need to create or manage copies of the data.
  • Amazon Neptune – Now offers a graph summary API to help understand important metadata about property graphs (PG) and resource description framework (RDF) graphs. Neptune added support for Slow Query Logs to help identify queries that need performance tuning.
  • Amazon OpenSearch Service – The team introduced security analytics that provides new threat monitoring, detection, and alerting features. The service now supports OpenSearch version 2.5, which adds several new features such as support for Point in Time Search and improvements to observability and geospatial functionality.
  • AWS Lake Formation and Apache Hive on Amazon EMR – Introduced fine-grained access controls that allow data administrators to define and enforce fine-grained table and column level security for customers accessing data via Apache Hive running on Amazon EMR.
  • Amazon EC2 M1 Mac Instances – You can now update guest environments to a specific or the latest macOS version without having to tear down and recreate the existing macOS environments.
  • AWS Chatbot – Now integrates with Microsoft Teams to simplify the way you troubleshoot and operate your AWS resources.
  • Amazon GuardDuty RDS Protection for Amazon Aurora – Now generally available to help profile and monitor access activity to Aurora databases in your AWS account without impacting database performance.
  • AWS Database Migration Service – Now supports validation to ensure that data is migrated accurately to S3, and can now generate an AWS Glue Data Catalog when migrating to S3.
  • AWS Backup – You can now back up and restore virtual machines running on VMware vSphere 8 and with multiple vNICs.
  • Amazon Kendra – There are new connectors to index documents and search for information across this new content: Confluence Server, Confluence Cloud, Microsoft SharePoint OnPrem, and Microsoft SharePoint Cloud. This post shows how to use the Amazon Kendra connector for Microsoft Teams.

For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

Other AWS News

A few more blog posts you might have missed:

  • Women founders Q&A – We're talking to six women founders and leaders about how they're making impacts in their communities, industries, and beyond.
  • What you missed at the 2023 IMAGINE: Nonprofit conference – Where hundreds of nonprofit leaders, technologists, and innovators gathered to learn and share how AWS can drive a positive impact for people and the planet.
  • Monitoring load balancers using Amazon CloudWatch anomaly detection alarms – The metrics emitted by load balancers provide crucial and unique insight into service health, service performance, and end-to-end network performance.
  • Extend geospatial queries in Amazon Athena with user-defined functions (UDFs) and AWS Lambda – Using a solution based on Uber's Hexagonal Hierarchical Spatial Index (H3) to divide the globe into equally-sized hexagons.
  • How cities can use transport data to reduce pollution and increase safety – A guest post by Rikesh Shah, outgoing head of open innovation at Transport for London.

For AWS open-source news and updates, here's the latest newsletter curated by Ricardo to bring you the most recent updates on open-source projects, posts, events, and more.

Upcoming AWS Events

Here are some opportunities to meet:

  • AWS Public Sector Day 2023 (March 21, London, UK) – An event dedicated to helping public sector organizations use technology to achieve more with less through the current challenging conditions.
  • Women in Tech at Skills Center Arlington (March 23, VA, USA) – Let's celebrate the history and legacy of women in tech.

The AWS Summits season is warming up! You can sign up here to know when registration opens in your area. That's all from me for this week. Come back next Monday for another Week in Review! — Danilo

View the full article
  17. AWS Backup now supports AWS PrivateLink for VMware workloads, providing direct access to AWS Backup from your VMware environment via a private endpoint within your virtual private network in a scalable manner. With this launch, you can now secure your network architecture by connecting to AWS Backup using private IP addresses in your Amazon Virtual Private Cloud (VPC), eliminating the need to use public IPs, firewall rules, or an Internet Gateway. AWS PrivateLink is available at a low per-GB charge for data processed and a low hourly charge for interface VPC endpoints. See AWS PrivateLink pricing for more information. View the full article
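A sketch of creating the interface endpoint with the AWS CLI - the service name shown is an assumption to confirm for your Region (VMware workloads may use a separate backup-gateway endpoint service), and the VPC, subnet, and security group IDs are placeholders:

# Create an interface VPC endpoint for AWS Backup
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0abc1234 \
    --service-name com.amazonaws.us-east-1.backup \
    --subnet-ids subnet-0abc1234 \
    --security-group-ids sg-0abc1234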
  18. Starting today, you can easily restore a new Amazon RDS for MySQL database instance from a backup of your existing MySQL 8.0 database, whether it's running on Amazon EC2 or outside of AWS. This is done by using Percona XtraBackup to create a backup of your existing MySQL database, uploading the resulting files to an Amazon S3 bucket, and then creating a new Amazon RDS DB instance through the RDS Console or AWS Command Line Interface (CLI). View the full article
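The CLI flow, sketched with placeholder names and values (see the restore-db-instance-from-s3 reference for the full set of flags):

# Create a new RDS for MySQL instance from Percona XtraBackup files in S3
aws rds restore-db-instance-from-s3 \
    --db-instance-identifier mydb-restored \
    --db-instance-class db.m5.large \
    --engine mysql \
    --source-engine mysql \
    --source-engine-version 8.0.28 \
    --s3-bucket-name my-xtrabackup-bucket \
    --s3-ingestion-role-arn arn:aws:iam::111122223333:role/rds-s3-ingest \
    --master-username admin \
    --master-user-password 'REPLACE_ME' \
    --allocated-storage 100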
  19. Starting today, Amazon Relational Database Service (RDS) for Oracle supports Amazon RDS Cross-Region Automated Backups. This feature extends the existing RDS backup functionality, giving you the ability to set up automatic replication of system snapshots and transaction logs from a primary AWS Region to a secondary AWS Region. View the full article
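A sketch with the AWS CLI, run against the destination Region - the ARN, retention period, and Regions are placeholders, and the flags are worth confirming in the RDS CLI reference:

# From the secondary Region, replicate automated backups of an Oracle instance
aws rds start-db-instance-automated-backups-replication \
    --region us-west-2 \
    --source-db-instance-arn arn:aws:rds:us-east-1:111122223333:db:my-oracle-db \
    --backup-retention-period 7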
  20. AWS CloudHSM automatically takes a backup of your HSM cluster once a day and whenever an HSM is added to or removed from your cluster. Until today, however, customers were responsible for deleting old backups. Deleting out-of-date backups is important to prevent inactive users and expired login credentials from being used to access sensitive data on the HSM. View the full article
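Cleaning up a stale backup is a single call (the backup ID is a placeholder; note that CloudHSM keeps deleted backups restorable for a short grace period):

# Delete an out-of-date cluster backup
aws cloudhsmv2 delete-backup --backup-id backup-abc123def456

# Undo the deletion within the grace period, if needed
aws cloudhsmv2 restore-backup --backup-id backup-abc123def456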
  21. You now can restore Amazon DynamoDB tables even faster when recovering from data loss or corruption. The increased efficiency of restores and their ability to better accommodate workloads with imbalanced write patterns reduce table restore times across base tables of all sizes and data distributions. To accelerate the speed of restores for tables with secondary indexes, you can exclude some or all secondary indexes from being created with the restored tables. View the full article
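A sketch of a restore that skips index creation - the table name and backup ARN are placeholders, and passing empty lists to the override flags excludes all secondary indexes (confirm the flag names in the DynamoDB CLI reference):

# Restore a table from backup without recreating any secondary indexes
aws dynamodb restore-table-from-backup \
    --target-table-name mytable-restored \
    --backup-arn arn:aws:dynamodb:us-east-1:111122223333:table/mytable/backup/01234567890 \
    --global-secondary-index-override '[]' \
    --local-secondary-index-override '[]'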
  22. AWS Backup now supports cross-account backup, enabling AWS customers to securely copy backups across accounts within their AWS Organizations. View the full article
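A sketch of a cross-account copy with placeholder ARNs - the destination vault lives in the other account, which must allow the copy through its vault access policy:

# Copy a recovery point to a vault owned by another account in the Organization
aws backup start-copy-job \
    --recovery-point-arn arn:aws:ec2:us-east-1::snapshot/snap-0abc1234 \
    --source-backup-vault-name Default \
    --destination-backup-vault-arn arn:aws:backup:us-east-1:999988887777:backup-vault:central-vault \
    --iam-role-arn arn:aws:iam::111122223333:role/BackupRole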
  23. You can now take automatic backups of your Amazon Elastic File System (Amazon EFS) file systems with AWS Backup in AWS Europe (Milan) and AWS Africa (Cape Town) regions, directly using the Amazon EFS console or API. Automatic backups for Amazon EFS further simplifies backup management of your file systems by enabling you to meet your business and regulatory backup compliance requirements. View the full article
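Enabling automatic backups on an existing file system is a single call against the EFS API (the file system ID is a placeholder):

# Turn on automatic backups for an EFS file system
aws efs put-backup-policy \
    --file-system-id fs-0abc1234 \
    --backup-policy Status=ENABLED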
  24. AWS Backup adds support for Amazon FSx file systems, automating policy-based backup and restore capabilities for Amazon FSx as well as streamlining compliance and data protection for Amazon FSx customers. You can now create, manage, and restore Amazon FSx backups directly from the AWS Backup console for both Amazon FSx for Windows File Server and Amazon FSx for Lustre file systems. View the full article
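An on-demand FSx backup through AWS Backup, sketched with placeholder ARNs (policy-based protection would instead assign the FSx resources to a backup plan):

# Take an on-demand backup of an FSx file system into a vault
aws backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn arn:aws:fsx:us-east-1:111122223333:file-system/fs-0abc1234 \
    --iam-role-arn arn:aws:iam::111122223333:role/BackupRole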
  25. AWS Backup is now available in 2 additional Regions: Cape Town (CPT) and Milan (MXP). View the full article