Showing results for tags 'amazon ebs'.
Found 3 results

  1. Introduction

Amazon Elastic Container Service (Amazon ECS) has enhanced its functionality by integrating support for attaching Amazon Elastic Block Store (Amazon EBS) volumes to Amazon ECS tasks. This feature simplifies using Amazon ECS and AWS Fargate with Amazon EBS. Amazon ECS facilitates seamless provisioning and attachment of EBS volumes to ECS tasks on both Fargate and Amazon Elastic Compute Cloud (Amazon EC2) platforms. In Amazon ECS tasks, you have the flexibility to select EBS volume attributes, such as size, type, IOPS, and throughput, tailoring the storage to the specific needs of your application. Additionally, the capability to create volumes from snapshots allows for the rapid deployment of new tasks with pre-populated data. With this feature, Amazon ECS streamlines the deployment of storage-heavy and data-intensive applications, such as ETL processes, media transcoding, and machine learning (ML) inference. For a comprehensive understanding of integrating Amazon ECS with Amazon EBS, see Channy Yun's launch post, which offers detailed guidance on getting started with this integration.

In this post, we discuss performance benchmarking results for Fargate tasks using EBS volumes. The goal is to assess the performance profiles of various EBS volume configurations under simulated workloads. The insights from this analysis can help you identify the optimal storage configurations for I/O-intensive workloads. For context, the data and observations presented in this post are specific to the Oregon Region, reflecting the state of Fargate's On-Demand data plane as observed in February 2024. Note that the situation might have changed, offering a different landscape today.
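Attaching a volume at launch comes down to a volume configuration passed alongside the task. As a hedged sketch of what that looks like with the AWS CLI (field names follow the launch documentation; the cluster, task definition, role ARN, and numbers here are hypothetical, and the task definition is assumed to declare the volume with configuredAtLaunch set to true):

$ aws ecs run-task \
    --cluster benchmark-cluster \
    --task-definition fio-benchmark \
    --launch-type FARGATE \
    --volume-configurations '[{
        "name": "datavol",
        "managedEBSVolume": {
          "volumeType": "gp3",
          "sizeInGiB": 2000,
          "iops": 16000,
          "throughput": 1000,
          "filesystemType": "xfs",
          "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole"
        }
    }]'
# (subnet and security-group network configuration omitted for brevity)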
EBS volume types

Amazon EBS offers a range of block storage volumes, leveraging both Solid State Drive (SSD) and Hard Disk Drive (HDD) technologies to cater to different workload requirements:

  • General Purpose SSD volumes (gp2 and gp3)
  • Provisioned IOPS SSD volumes (io1 and io2 Block Express)
  • Throughput Optimized HDD volumes (st1)
  • Cold HDD volumes (sc1)

General Purpose SSD volumes are the most commonly used block storage volumes. Backed by solid-state drives, they offer balanced performance for a broad range of transactional workloads, including boot volumes, medium-sized databases, and low-latency interactive applications. They strike an optimal balance between cost and performance, making them suitable for a variety of use cases that demand consistent, moderate IOPS with reliable throughput.

Provisioned IOPS SSD io1 and io2 volumes also use solid-state drives and are Amazon EBS's storage solutions for high-IOPS, low-latency needs. Both are tailored for critical applications that demand consistent, rapid access, delivering provisioned IOPS 99.9% of the time, which suits high-performance databases and applications. io2 differentiates itself by offering increased durability, larger capacity options, and consistent latency; each volume type serves distinct needs depending on the specific requirements of the workload, ensuring flexibility in choice.

Throughput Optimized HDD st1 volumes are designed to offer low-cost magnetic storage that prioritizes throughput over IOPS. These volumes align with the needs of workloads that benefit from large, sequential reads and writes, making them ideal for processes such as big data analytics, log processing, and data warehousing.

Cold HDD sc1 volumes, like st1 volumes, focus on throughput but at a more economical rate and with a lower performance ceiling. Best suited for less frequently accessed, sequential cold data, these volumes are the lowest-cost solution for storage needs that don't demand constant access.

Testing methodology

We tested each EBS volume type across multiple Fargate task sizes with XFS. The baseline EBS volume IOPS and throughput available to a Fargate task depend on the total CPU units you request, and the difference in storage performance is clear in the results. For example, tasks with 16 vCPUs provide higher IOPS and throughput than tasks with 0.25 vCPUs. To ensure a thorough examination, we explored a spectrum of Fargate task sizes, from 0.25 vCPUs up to 16 vCPUs, across the following configurations:

  • 0.25 vCPU | 1 GB
  • 0.5 vCPU | 2 GB
  • 1 vCPU | 4 GB
  • 2 vCPU | 6 GB
  • 4 vCPU | 8 GB
  • 8 vCPU | 16 GB
  • 16 vCPU | 32 GB

Our testing methodology for General Purpose SSD and Provisioned IOPS SSD volumes involved conducting 16 KB random read and write operations, adhering to the guidelines specified in the EBS documentation. For tasks equipped with Throughput Optimized HDD or Cold HDD volumes, we executed 1 MiB sequential read and write operations to better gauge performance under workload conditions typical for these storage types. We repeated each test three times and calculated mean values to ensure the reliability and accuracy of our performance measurements.
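The post does not publish its exact benchmark jobs; the following fio invocations are a hedged sketch consistent with the stated parameters (16 KB random I/O for the SSD volumes, 1 MiB sequential I/O for the HDD volumes). The /data mount path, queue depths, job counts, and runtimes are assumptions, not the authors' settings:

# 16 KB random writes against the EBS-backed XFS mount (gp2/gp3/io1/io2)
$ sudo fio --name=randwrite --filename=/data/testfile --size=10G \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=16k \
    --iodepth=64 --numjobs=4 --runtime=120 --time_based --group_reporting

# 1 MiB sequential reads (st1/sc1)
$ sudo fio --name=seqread --filename=/data/testfile --size=10G \
    --ioengine=libaio --direct=1 --rw=read --bs=1M \
    --iodepth=32 --numjobs=1 --runtime=120 --time_based --group_reporting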
General Purpose SSD – gp3 volumes

Given the versatility and price-to-performance ratio of gp3, we expect this volume type to be the most commonly used block storage for Fargate tasks. gp3 volumes deliver a baseline performance of 3,000 IOPS and 125 MiB/s at any volume size. Fargate supports gp3 volumes with up to 16,000 IOPS and 1,000 MiB/s of throughput. We performed tests on gp3 volumes configured with 2,000 GiB size, 16,000 IOPS, and 1,000 MiB/s throughput to guarantee maximum storage performance.

We learned that Fargate offers consistent IOPS performance across most task sizes. Tasks with 0.25 vCPU and 1 GB memory are the outlier in this group, as they do not deliver the maximum 16,000 IOPS. Task sizes from 1 vCPU onward achieve the maximum configured IOPS. Tasks with 0.25 vCPU couldn't go beyond 200 MiB/s and 150 MiB/s in read and write tests, respectively.

General Purpose SSD – gp2 volumes

We recommend customers opt for gp3 volumes over gp2 for several reasons. First, gp3 volumes allow IOPS to be provisioned independently of storage capacity, offering more flexibility. Second, they are more cost-effective, with a 20% lower price per GB than gp2 volumes. gp2 volume performance relies on a burst bucket model, in which the size of the volume dictates its baseline IOPS; this baseline determines the rate at which the volume accumulates I/O credits.

For those customers with specific needs, Fargate continues to support gp2 volumes. Our decision to include gp2 volumes in our benchmarking was straightforward, as our testing setup was already compatible. We benchmarked gp2 volumes at a size of 6,000 GiB. At this size, gp2 volumes achieve 16,000 IOPS, the maximum for gp2, because volume size proportionally determines the IOPS allocation. The IOPS performance on gp2 volumes was consistent across all task sizes except 0.5 and 0.25 vCPUs. Tasks with 1 vCPU and larger achieved the maximum provisioned 16,000 IOPS. Throughput performance on gp2 was very similar to gp3 volumes. The test results offer further evidence of why customers should prefer gp3 over gp2.
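To see how size dictates gp2 performance: gp2 provides 3 IOPS per GiB provisioned, floored at 100 IOPS and capped at 16,000. A quick sanity check of the 6,000 GiB figure used above:

# gp2 baseline IOPS = min(max(3 * size_GiB, 100), 16000)
$ awk -v size_gib=6000 'BEGIN {
    iops = 3 * size_gib;
    if (iops < 100)   iops = 100;
    if (iops > 16000) iops = 16000;
    printf "baseline IOPS for %d GiB gp2: %d\n", size_gib, iops
  }'
baseline IOPS for 6000 GiB gp2: 16000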
Provisioned IOPS SSD – io1 volumes

Amazon EBS io1 volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require low latency and have moderate durability requirements or include built-in application redundancy. io1 and io2 volume types provide higher throughput and IOPS than gp3 volumes. We performed tests on io1 volumes configured with 2,000 GiB size and 64,000 IOPS.

Only tasks with 8 or more vCPUs achieved more than 20,000 IOPS. Even though the io1 volumes attached to the tasks supported up to 64,000 IOPS, none of the tasks approached the maximum IOPS mark in our tests. Considering these results, gp3 may turn out to be more cost-effective storage for tasks with fewer than 8 vCPUs. Tasks with io1 volumes reported more I/O throughput than with gp3; for applications that need higher throughput and IOPS, io1 volumes are more suitable. All tasks except 0.25 vCPU achieved at least 300 MiB/s of throughput. Compare this to gp3, which achieved a maximum of 260 MiB/s.

Provisioned IOPS SSD – io2 Block Express volumes

Amazon EBS io2 Block Express offers the highest-performance block storage in the cloud, with 4x higher throughput, IOPS, and capacity than gp3 volumes, along with sub-millisecond latency. io2 Block Express is designed to provide 4,000 MB/s of throughput per volume, 256,000 IOPS per volume, up to 64 TiB of storage capacity, and 1,000 IOPS per GB, as well as 99.999% durability. The io2 volumes we used in benchmarking had 2,000 GiB size and 100,000 provisioned IOPS.

io2 volumes attained more IOPS on Fargate than io1 volumes on tasks with more vCPUs; however, the IOPS performance of io1 and io2 volumes is identical for tasks with fewer than 8 vCPUs. Even tasks with 8 and 16 vCPUs achieved only about 40,000 IOPS on the io2 volumes. Note that random write performance on tasks with io2 volumes was much higher than on io1, but this applies only to larger tasks. The throughput scaling with task size observed with io2 volumes is similar to that of io1, with io2 volumes demonstrating higher write throughput. In most scenarios, io2 is the more advantageous choice over io1: although both volume types start at the same price point, io2's tiered IOPS pricing model makes it more cost-effective for configurations requiring high IOPS.

Throughput Optimized HDD – st1

Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Like gp2, st1 uses a burst bucket model for performance: volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. st1 volumes provide burst throughput of up to 500 MiB/s. We configured st1 volumes with 13,000 GiB size, which results in a base throughput of 500 MiB/s.

Because st1 volumes are throughput optimized, throughput is a more appropriate measure of their performance; we've included IOPS results for consistency. To summarize, all tasks with over 1 vCPU attained about 500 IOPS. st1 offers consistent throughput across most task sizes: whereas io1 and io2 provide over 500 MiB/s of throughput only on tasks with 8 and 16 vCPUs, st1 offers about 500 MiB/s on most task sizes. This makes st1 better suited for workloads that need higher throughput with smaller task sizes.

Cold HDD – sc1 volumes

Cold HDD (sc1) volumes provide low-cost magnetic storage that, like st1, defines performance in terms of throughput rather than IOPS. sc1 volumes have lower throughput than st1, making them ideal for large, sequential cold-data workloads. sc1 (like gp2 and st1) also uses a burst bucket model, with volume size determining the baseline throughput. We maxed out the size of sc1 volumes at 16 TiB to guarantee the maximum baseline throughput of 192 MiB/s; all sc1 volumes have burst throughput of 250 MiB/s.

Our tests showed that sc1 volumes achieved about half the IOPS of st1 volumes. Once again, tasks with 1 vCPU or more had consistent IOPS performance. sc1 volumes also reported about half the throughput of st1 volumes. Given that an sc1 volume costs a third of a similarly sized st1 volume, sc1 volumes are great for workloads that need infrequent access to data.

Conclusion

This post reviewed Amazon EBS performance across different Fargate task sizes. For the majority of workloads on Fargate, gp3 volumes, aptly named for their general-purpose use, are appropriate. However, we advise against using io1 and io2 volumes for tasks requesting 0.25, 0.5, or 1 vCPU: these tasks lack the CPU cycles to take advantage of more than 30,000 IOPS and 300 MiB/s of throughput. These high-performance volumes are better reserved for workloads requiring significant IOPS and throughput. For tasks that perform sequential I/O, st1 volumes, or the more economical sc1 volumes, are also worth considering.

View the full article
  2. AWS Systems Manager Fleet Manager now provides a toolset that streamlines on-instance volume management with an easy, GUI-based way to manage EBS volumes on your Windows instances. With this new Fleet Manager capability, customers can readily browse the set of volumes attached to an instance, identify volume mount points in the instance file system, view metadata for attached disks, and mount as well as format unused EBS volumes.

View the full article
  3. With Amazon GuardDuty, you can monitor your AWS accounts and workloads to detect malicious activity. Today, we are adding to GuardDuty the capability to detect malware. Malware is malicious software that is used to compromise workloads, repurpose resources, or gain unauthorized access to data. When you have GuardDuty Malware Protection enabled, a malware scan is initiated when GuardDuty detects that one of your EC2 instances or container workloads running on EC2 is doing something suspicious. For example, a malware scan is triggered when an EC2 instance is communicating with a command-and-control server that is known to be malicious or is performing denial of service (DoS) or brute-force attacks against other EC2 instances.

GuardDuty supports many file system types and scans file formats known to be used to spread or contain malware, including Windows and Linux executables, PDF files, archives, binaries, scripts, installers, email databases, and plain emails.

When potential malware is identified, actionable security findings are generated with information such as the threat and file name, the file path, the EC2 instance ID, resource tags and, in the case of containers, the container ID and the container image used. GuardDuty supports container workloads running on EC2, including customer-managed Kubernetes clusters or individual Docker containers. If the container is managed by Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS), the findings also include the cluster name and the task or pod ID so application and security teams can quickly find the affected container resources.

As with all other GuardDuty findings, malware detections are sent to the GuardDuty console, pushed through Amazon EventBridge, routed to AWS Security Hub, and made available in Amazon Detective for incident investigation.

How GuardDuty Malware Protection Works

When you enable malware protection, you set up an AWS Identity and Access Management (IAM) service-linked role that grants GuardDuty permissions to perform malware scans. When a malware scan is initiated for an EC2 instance, GuardDuty Malware Protection uses those permissions to take a snapshot of the attached Amazon Elastic Block Store (EBS) volumes that are less than 1 TB in size and then restore the EBS volumes in an AWS service account in the same AWS Region to scan them for malware. You can use tagging to include or exclude EC2 instances from those permissions and from scanning. In this way, you don't need to deploy security software or agents to monitor for malware, and scanning the volumes doesn't impact running workloads. The EBS volumes in the service account and the snapshots in your account are deleted after the scan. Optionally, you can preserve the snapshots when malware is detected.

The service-linked role grants GuardDuty access to AWS Key Management Service (AWS KMS) keys used to encrypt EBS volumes. If the EBS volumes attached to a potentially compromised EC2 instance are encrypted with a customer-managed key, GuardDuty Malware Protection uses the same key to encrypt the replica EBS volumes as well. If the volumes are not encrypted, GuardDuty uses its own key to encrypt the replica EBS volumes and ensure privacy. Volumes encrypted with EBS-managed keys are not supported.

Security in the cloud is a shared responsibility between you and AWS. As a guardrail, the service-linked role used by GuardDuty Malware Protection cannot perform any operation on your resources (such as EBS snapshots and volumes, EC2 instances, and KMS keys) if they have the GuardDutyExcluded tag. Once you mark your snapshots with GuardDutyExcluded set to true, the GuardDuty service won't be able to access these snapshots. The GuardDutyExcluded tag supersedes any inclusion tag. Permissions also restrict how GuardDuty can modify your snapshots so that they cannot be made public while shared with the GuardDuty service account.

The EBS volumes created by GuardDuty are always encrypted. GuardDuty can use KMS keys only on EBS snapshots that have a GuardDuty scan ID tag. The scan ID tag is added by GuardDuty when snapshots are created after an EC2 finding. The KMS keys that are shared with the GuardDuty service account cannot be invoked from any other context except the Amazon EBS service. Once the scan completes successfully, the KMS key grant is revoked and the volume replica in the GuardDuty service account is deleted, making sure GuardDuty cannot access your data after completing the scan operation.
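Opting a specific resource out of scanning is just a tag operation. A minimal sketch with the AWS CLI (the snapshot ID here is hypothetical):

# Exclude a snapshot from GuardDuty Malware Protection
$ aws ec2 create-tags \
    --resources snap-0123456789abcdef0 \
    --tags Key=GuardDutyExcluded,Value=true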
Enabling Malware Protection for an AWS Account

If you're not using GuardDuty yet, Malware Protection is enabled by default when you activate GuardDuty for your account. Because I am already using GuardDuty, I need to enable Malware Protection from the console. If you're using AWS Organizations, your delegated administrator accounts can enable this for existing member accounts and configure whether new AWS accounts in the organization should be automatically enrolled.

In the GuardDuty console, I choose Malware Protection under Settings in the navigation pane. There, I choose Enable and then Enable Malware Protection. Snapshots are automatically deleted after they are scanned. In General settings, I have the option to retain in my AWS account the snapshots where malware is detected and have them available for further analysis. In Scan options, I can configure a list of inclusion tags, so that only EC2 instances with those tags are scanned, or exclusion tags, so that EC2 instances with tags in the list are skipped.
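The same setting can presumably be flipped from the CLI through the detector's data sources. A hedged sketch, using the JSON shape from the launch-era API (verify the exact field names against the current GuardDuty documentation):

$ DETECTOR_ID=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
$ aws guardduty update-detector --detector-id "$DETECTOR_ID" \
    --data-sources '{"MalwareProtection":{"ScanEc2InstanceWithFindings":{"EbsVolumes":true}}}'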
Testing Malware Protection GuardDuty Findings

To generate several Amazon GuardDuty findings, including the new Malware Protection findings, I clone the Amazon GuardDuty Tester repo:

$ git clone https://github.com/awslabs/amazon-guardduty-tester

First, I create an AWS CloudFormation stack using the guardduty-tester.template file. When the stack is ready, I follow the instructions to configure my SSH client to log in to the tester instance through the bastion host. Then, I connect to the tester instance:

$ ssh tester

From the tester instance, I start the guardduty_tester.sh script to generate the findings:

$ ./guardduty_tester.sh

***********************************************************************
* Test #1 - Internal port scanning                                    *
* This simulates internal reconaissance by an internal actor or an    *
* external actor after an initial compromise. This is considered a    *
* low priority finding for GuardDuty because its not a clear indicator*
* of malicious intent on its own.                                     *
***********************************************************************

Starting Nmap 6.40 ( http://nmap.org ) at 2022-05-19 09:36 UTC
Nmap scan report for ip-172-16-0-20.us-west-2.compute.internal (172.16.0.20)
Host is up (0.00032s latency).
Not shown: 997 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   closed http
5050/tcp closed mmcc
MAC Address: 06:25:CB:F4:E0:51 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 4.96 seconds

-----------------------------------------------------------------------

***********************************************************************
* Test #2 - SSH Brute Force with Compromised Keys                     *
* This simulates an SSH brute force attack on an SSH port that we     *
* can access from this instance. It uses (phony) compromised keys in  *
* many subsequent attempts to see if one works. This is a common      *
* techique where the bad actors will harvest keys from the web in     *
* places like source code repositories where people accidentally leave*
* keys and credentials (This attempt will not actually succeed in     *
* obtaining access to the target linux instance in this subnet)       *
***********************************************************************

2022-05-19 09:36:29 START
2022-05-19 09:36:29 Crowbar v0.4.3-dev
2022-05-19 09:36:29 Trying 172.16.0.20:22
2022-05-19 09:36:33 STOP
2022-05-19 09:36:33 No results found...
2022-05-19 09:36:33 START
2022-05-19 09:36:33 Crowbar v0.4.3-dev
2022-05-19 09:36:33 Trying 172.16.0.20:22
2022-05-19 09:36:37 STOP
2022-05-19 09:36:37 No results found...
2022-05-19 09:36:37 START
2022-05-19 09:36:37 Crowbar v0.4.3-dev
2022-05-19 09:36:37 Trying 172.16.0.20:22
2022-05-19 09:36:41 STOP
2022-05-19 09:36:41 No results found...
2022-05-19 09:36:41 START
2022-05-19 09:36:41 Crowbar v0.4.3-dev
2022-05-19 09:36:41 Trying 172.16.0.20:22
2022-05-19 09:36:45 STOP
2022-05-19 09:36:45 No results found...
2022-05-19 09:36:45 START
2022-05-19 09:36:45 Crowbar v0.4.3-dev
2022-05-19 09:36:45 Trying 172.16.0.20:22
2022-05-19 09:36:48 STOP
2022-05-19 09:36:48 No results found...
2022-05-19 09:36:49 START
2022-05-19 09:36:49 Crowbar v0.4.3-dev
2022-05-19 09:36:49 Trying 172.16.0.20:22
2022-05-19 09:36:52 STOP
2022-05-19 09:36:52 No results found...
2022-05-19 09:36:52 START
2022-05-19 09:36:52 Crowbar v0.4.3-dev
2022-05-19 09:36:52 Trying 172.16.0.20:22
2022-05-19 09:36:56 STOP
2022-05-19 09:36:56 No results found...
2022-05-19 09:36:56 START
2022-05-19 09:36:56 Crowbar v0.4.3-dev
2022-05-19 09:36:56 Trying 172.16.0.20:22
2022-05-19 09:37:00 STOP
2022-05-19 09:37:00 No results found...
2022-05-19 09:37:00 START
2022-05-19 09:37:00 Crowbar v0.4.3-dev
2022-05-19 09:37:00 Trying 172.16.0.20:22
2022-05-19 09:37:04 STOP
2022-05-19 09:37:04 No results found...
2022-05-19 09:37:04 START
2022-05-19 09:37:04 Crowbar v0.4.3-dev
2022-05-19 09:37:04 Trying 172.16.0.20:22
2022-05-19 09:37:08 STOP
2022-05-19 09:37:08 No results found...
2022-05-19 09:37:08 START
2022-05-19 09:37:08 Crowbar v0.4.3-dev
2022-05-19 09:37:08 Trying 172.16.0.20:22
2022-05-19 09:37:12 STOP
2022-05-19 09:37:12 No results found...
2022-05-19 09:37:12 START
2022-05-19 09:37:12 Crowbar v0.4.3-dev
2022-05-19 09:37:12 Trying 172.16.0.20:22
2022-05-19 09:37:16 STOP
2022-05-19 09:37:16 No results found...
2022-05-19 09:37:16 START
2022-05-19 09:37:16 Crowbar v0.4.3-dev
2022-05-19 09:37:16 Trying 172.16.0.20:22
2022-05-19 09:37:20 STOP
2022-05-19 09:37:20 No results found...
2022-05-19 09:37:20 START
2022-05-19 09:37:20 Crowbar v0.4.3-dev
2022-05-19 09:37:20 Trying 172.16.0.20:22
2022-05-19 09:37:23 STOP
2022-05-19 09:37:23 No results found...
2022-05-19 09:37:23 START
2022-05-19 09:37:23 Crowbar v0.4.3-dev
2022-05-19 09:37:23 Trying 172.16.0.20:22
2022-05-19 09:37:27 STOP
2022-05-19 09:37:27 No results found...
2022-05-19 09:37:27 START
2022-05-19 09:37:27 Crowbar v0.4.3-dev
2022-05-19 09:37:27 Trying 172.16.0.20:22
2022-05-19 09:37:31 STOP
2022-05-19 09:37:31 No results found...
2022-05-19 09:37:31 START
2022-05-19 09:37:31 Crowbar v0.4.3-dev
2022-05-19 09:37:31 Trying 172.16.0.20:22
2022-05-19 09:37:34 STOP
2022-05-19 09:37:34 No results found...
2022-05-19 09:37:35 START
2022-05-19 09:37:35 Crowbar v0.4.3-dev
2022-05-19 09:37:35 Trying 172.16.0.20:22
2022-05-19 09:37:38 STOP
2022-05-19 09:37:38 No results found...
2022-05-19 09:37:38 START
2022-05-19 09:37:38 Crowbar v0.4.3-dev
2022-05-19 09:37:38 Trying 172.16.0.20:22
2022-05-19 09:37:42 STOP
2022-05-19 09:37:42 No results found...
2022-05-19 09:37:42 START
2022-05-19 09:37:42 Crowbar v0.4.3-dev
2022-05-19 09:37:42 Trying 172.16.0.20:22
2022-05-19 09:37:46 STOP
2022-05-19 09:37:46 No results found...

-----------------------------------------------------------------------

***********************************************************************
* Test #3 - RDP Brute Force with Password List                        *
* This simulates an RDP brute force attack on the internal RDP port   *
* of the windows server that we installed in the environment. It uses *
* a list of common passwords that can be found on the web. This test  *
* will trigger a detection, but will fail to get into the target      *
* windows instance.                                                   *
***********************************************************************

Sending 250 password attempts at the windows server...
Hydra v9.4-dev (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).

Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2022-05-19 09:37:46
[WARNING] rdp servers often don't like many connections, use -t 1 or -t 4 to reduce the number of parallel connections and -W 1 or -W 3 to wait between connection to allow the server to recover
[INFO] Reduced number of tasks to 4 (rdp does not like many parallel connections)
[WARNING] the rdp module is experimental. Please test, report - and if possible, fix.
[DATA] max 4 tasks per 1 server, overall 4 tasks, 1792 login tries (l:7/p:256), ~448 tries per task
[DATA] attacking rdp://172.16.0.24:3389/
[STATUS] 1099.00 tries/min, 1099 tries in 00:01h, 693 to do in 00:01h, 4 active
1 of 1 target completed, 0 valid password found
Hydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2022-05-19 09:39:23

-----------------------------------------------------------------------

***********************************************************************
* Test #4 - CryptoCurrency Mining Activity                            *
* This simulates interaction with a cryptocurrency mining pool which  *
* can be an indication of an instance compromise. In this case, we are*
* only interacting with the URL of the pool, but not downloading      *
* any files. This will trigger a threat intel based detection.        *
***********************************************************************

Calling bitcoin wallets to download mining toolkits

-----------------------------------------------------------------------

***********************************************************************
* Test #5 - DNS Exfiltration                                          *
* A common exfiltration technique is to tunnel data out over DNS      *
* to a fake domain. Its an effective technique because most hosts     *
* have outbound DNS ports open. This test wont exfiltrate any data,   *
* but it will generate enough unusual DNS activity to trigger the     *
* detection.                                                          *
***********************************************************************

Calling large numbers of large domains to simulate tunneling via DNS

***********************************************************************
* Test #6 - Fake domain to prove that GuardDuty is working            *
* This is a permanent fake domain that customers can use to prove that*
* GuardDuty is working. Calling this domain will always generate the  *
* Backdoor:EC2/C&CActivity.B!DNS finding type                         *
***********************************************************************

Calling a well known fake domain that is used to generate a known finding

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> GuardDutyC2ActivityB.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11495
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;GuardDutyC2ActivityB.com. IN ANY

;; ANSWER SECTION:
GuardDutyC2ActivityB.com. 6943 IN SOA ns1.markmonitor.com. hostmaster.markmonitor.com. 2018091906 86400 3600 2592000 172800
GuardDutyC2ActivityB.com. 6943 IN NS ns3.markmonitor.com.
GuardDutyC2ActivityB.com. 6943 IN NS ns5.markmonitor.com.
GuardDutyC2ActivityB.com. 6943 IN NS ns7.markmonitor.com.
GuardDutyC2ActivityB.com. 6943 IN NS ns2.markmonitor.com.
GuardDutyC2ActivityB.com. 6943 IN NS ns4.markmonitor.com.
GuardDutyC2ActivityB.com. 6943 IN NS ns6.markmonitor.com.
GuardDutyC2ActivityB.com. 6943 IN NS ns1.markmonitor.com.

;; Query time: 27 msec
;; SERVER: 172.16.0.2#53(172.16.0.2)
;; WHEN: Thu May 19 09:39:23 UTC 2022
;; MSG SIZE rcvd: 238

*****************************************************************************************************

Expected GuardDuty Findings

Test 1: Internal Port Scanning
Expected Finding: EC2 Instance i-011e73af27562827b is performing outbound port scans against remote host 172.16.0.20
Finding Type: Recon:EC2/Portscan

Test 2: SSH Brute Force with Compromised Keys
Expecting two findings - one for the outbound and one for the inbound detection
Outbound: i-011e73af27562827b is performing SSH brute force attacks against 172.16.0.20
Inbound: 172.16.0.25 is performing SSH brute force attacks against i-0bada13e0aa12d383
Finding Type: UnauthorizedAccess:EC2/SSHBruteForce

Test 3: RDP Brute Force with Password List
Expecting two findings - one for the outbound and one for the inbound detection
Outbound: i-011e73af27562827b is performing RDP brute force attacks against 172.16.0.24
Inbound: 172.16.0.25 is performing RDP brute force attacks against i-0191573dec3b66924
Finding Type: UnauthorizedAccess:EC2/RDPBruteForce

Test 4: Cryptocurrency Activity
Expected Finding: EC2 Instance i-011e73af27562827b is querying a domain name that is associated with bitcoin activity
Finding Type: CryptoCurrency:EC2/BitcoinTool.B!DNS

Test 5: DNS Exfiltration
Expected Finding: EC2 instance i-011e73af27562827b is attempting to query domain names that resemble exfiltrated data
Finding Type: Trojan:EC2/DNSDataExfiltration

Test 6: C&C Activity
Expected Finding: EC2 instance i-011e73af27562827b is querying a domain name associated with a known Command & Control server
Finding Type: Backdoor:EC2/C&CActivity.B!DNS
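Findings can also be retrieved programmatically rather than in the console. A hedged sketch that filters for the Malware Protection finding type on EC2 (the criterion shape follows the GuardDuty FindingCriteria API; verify field casing against the current CLI):

$ aws guardduty list-findings --detector-id "$DETECTOR_ID" \
    --finding-criteria '{"Criterion":{"type":{"Eq":["Execution:EC2/MaliciousFile"]}}}'
# Pass the returned finding IDs to `aws guardduty get-findings` for full details.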
After a few minutes, the findings appear in the GuardDuty console. At the top, I see the malicious files found by the new Malware Protection capability. One of the findings is related to an EC2 instance, the other to an ECS cluster. First, I select the finding related to the EC2 instance. In the panel, I see the information on the instance and the malicious file, such as the file name and path. In the Malware scan details section, the Trigger finding ID points to the original GuardDuty finding that triggered the malware scan. In my case, the original finding was that this EC2 instance was performing RDP brute force attacks against another EC2 instance.

Here, I choose Investigate with Detective and, directly from the GuardDuty console, I go to the Detective console to visualize AWS CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow data for the EC2 instance, the AWS account, and the IP address affected by the finding. Using Detective, I can analyze, investigate, and identify the root cause of suspicious activities found by GuardDuty.

When I select the finding related to the ECS cluster, I have more information on the affected resource, such as the details of the ECS cluster, the task, the containers, and the container images.

Using the GuardDuty tester scripts makes it easier to test the overall integration of GuardDuty with other security frameworks you use so that you can be ready when a real threat is detected.

Comparing GuardDuty Malware Protection with Amazon Inspector

At this point, you might ask yourself how GuardDuty Malware Protection relates to Amazon Inspector, a service that scans AWS workloads for software vulnerabilities and unintended network exposure. The two services complement each other and offer different layers of protection:

  • Amazon Inspector offers proactive protection by identifying and remediating known software and application vulnerabilities that serve as an entry point for attackers to compromise resources and install malware.
  • GuardDuty Malware Protection detects malware that is found to be present on actively running workloads. At that point, the system has already been compromised, but GuardDuty can limit the duration of an infection and take action before a compromise results in a business-impacting event.

Availability and Pricing

Amazon GuardDuty Malware Protection is available today in all AWS Regions where GuardDuty is available, excluding the AWS China (Beijing), AWS China (Ningxia), AWS GovCloud (US-East), and AWS GovCloud (US-West) Regions. At launch, GuardDuty Malware Protection is integrated with these partner offerings: BitDefender, CloudHesive, Crowdstrike, Fortinet, Palo Alto Networks, Rapid7, Sophos, Sysdig, and Trellix.

With GuardDuty, you don't need to deploy security software or agents to monitor for malware. You pay only for the GB scanned in the file systems (not for the size of the EBS volumes) and for the EBS snapshots during the time they are kept in your account. All EBS snapshots created by GuardDuty are automatically deleted after they are scanned unless you enable snapshot retention when malware is found. For more information, see GuardDuty pricing and EBS pricing. Note that GuardDuty only scans EBS volumes less than 1 TB in size. To help you control costs and avoid repeated alarms, the same volume is not scanned more often than once every 24 hours.

Detect malicious activity and protect your applications from malware with Amazon GuardDuty.

— Danilo

View the full article