Showing results for tags 'ec2'.

  1. Starting today, EC2 Instance Connect is also available in Middle East (Bahrain), Asia Pacific (Jakarta), Africa (Cape Town), Middle East (UAE), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Osaka), Europe (Milan), Europe (Spain), Europe (Zurich), Asia Pacific (Melbourne) regions. Amazon EC2 Instance Connect is a simple and secure way to connect to your instances using Secure Shell (SSH). View the full article
  2. Amazon Web Services (AWS) provides a powerful cloud infrastructure for deploying applications, and it's common to have EC2 instances in private subnets for enhanced security. However, connecting to these instances can be challenging. In this guide, we'll explore best practices for securely and efficiently connecting to an EC2 instance in a private subnet on AWS. Read More Here
  3. You now have the option to delete an EC2 Fleet of type 'instant'. All running instances associated with the fleet will be terminated and the fleet will be deleted. View the full article
  4. Starting today, Amazon EC2 Is4gen and Im4gn instances, the latest generation storage-optimized instances, are available in the AWS Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (London) Regions. Is4gen and Im4gn instances are built on the AWS Nitro System and are powered by AWS Graviton2 processors. They feature up to 30TB of storage with the new AWS Nitro SSDs that are custom-designed by AWS to maximize the storage performance of I/O intensive workloads such as SQL/NoSQL databases, search engines, distributed file systems and data analytics which continuously read and write from the SSDs in a sustained manner. AWS Nitro SSDs enable up to 60% lower latency and up to 75% reduced latency variability in Im4gn and Is4gen instances compared to the third generation of storage optimized instances. These instances maximize the number of transactions processed per second (TPS) for I/O intensive workloads such as relational databases (e.g. MySQL, MariaDB, PostgreSQL), and NoSQL databases (KeyDB, ScyllaDB, Cassandra) which have medium-large size data sets and can benefit from high compute performance and high network throughput. They are also an ideal fit for search engines, and data analytics workloads that require very fast access to data sets on local storage. View the full article
  5. Amazon Nimble Studio adds support for on-demand Amazon Elastic Compute Cloud (EC2) G3 and G5 instances, allowing customers to utilize additional GPU instance types for their creative projects. Artists depend on a mix of CPUs, RAM, and GPUs for their creative needs. You can now access additional instance types such as the EC2 G3 and G5 instances (EC2 G5 instances utilize the NVIDIA A10G Tensor Core GPU), giving Nimble Studio customers greater flexibility to use the right resources for each project. View the full article
  6. Amazon Relational Database Service (Amazon RDS) for MariaDB now supports R5b database (DB) instances. R5b DB instances support up to 3x the I/O operations per second (IOPS) and 3x the bandwidth on Amazon Elastic Block Store (Amazon EBS) compared to the x86-based memory-optimized R5 DB instances. R5b DB instances are a great choice for IO-intensive DB workloads. View the full article
  7. You can now use the ‘Verified Provider’ label on the EC2 Console to pick public Amazon Machine Images (AMIs) that are owned by Amazon verified accounts. Previously, customers would need to check the owner IDs of AMIs that were publicly shared to identify the source of the AMI. IDs of verified sources were not always easily available. The new label on the console helps you easily identify trusted sources for publicly-shared AMIs. These trusted sources can be Amazon and its partners or AMI providers from AWS Marketplace. View the full article
  8. We are excited to launch two new features that help enforce access controls with Amazon EMR on EC2 clusters (EMR Clusters). These features are supported with jobs that are submitted to the cluster using the EMR Steps API. First is Runtime Role with EMR Steps. A Runtime Role is an AWS Identity and Access Management (IAM) role that you associate with an EMR Step. An EMR Step uses this role to access AWS resources. The second is integration with AWS Lake Formation to apply table and column-level access controls for Apache Spark and Apache Hive jobs with EMR Steps. View the full article
  9. AWS announces the general availability of Amazon EC2 R6a instances. Designed for memory-intensive workloads, R6a instances are built on the AWS Nitro System, which delivers almost all the compute and memory resources of the host hardware to your instances. R6a instances are powered by third-generation AMD EPYC processors with an all-core turbo frequency of up to 3.6 GHz. These memory-optimized instances, which are SAP certified, deliver up to 35% better compute price performance compared to R5a instances for a wide variety of workloads and offer 10% lower cost than comparable x86-based EC2 instances. View the full article
  10. Amazon SageMaker expands access to new ML instances so customers can deploy models on the best instance for their workloads. Now, customers can use ml.g5, ml.p4d, and ml.c6i instances for Asynchronous and Real-time model deployment options. View the full article
  11. Amazon Aurora now supports R6i instances powered by 3rd generation Intel Xeon Scalable processors. R6i instances are the 6th generation of Amazon EC2 memory optimized instances, designed for memory-intensive workloads. These instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. R6i instances are currently available when using Amazon Aurora PostgreSQL-Compatible Edition. View the full article
  12. Starting today, Amazon EC2 I4i Instances are available in additional Amazon Web Services (AWS) Regions - US West (N. California), Asia Pacific (Hong Kong, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, London, Paris). Designed for storage I/O intensive workloads, I4i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 30% better compute price performance over I3 instances, and always-on memory encryption using Intel Total Memory Encryption (TME). View the full article
  13. EC2 Auto Scaling now publishes a predictive scaling policy's forecasts as CloudWatch metrics, enabling you to analyze, monitor, and set alarms on the accuracy of predictive scaling. Predictive Scaling is a scaling policy that proactively increases the capacity of your Auto Scaling group ahead of predicted demand, improving the availability of your application while reducing the need to stay overprovisioned, which would otherwise increase your EC2 bill. Because predictive scaling only increases the capacity of your Auto Scaling groups, applying it to your current scaling configurations strictly enhances your application availability; an inaccurate prediction, however, can increase your cost. Now you can use the extensive set of CloudWatch features to measure the accuracy of predictions, view forecasts using the familiar CloudWatch graphs, and set automatic alarms and notifications when prediction errors exceed your desired levels. View the full article
  14. Amazon Elastic Compute Cloud (Amazon EC2) M1 Mac instances are now generally available (GA). Built on Apple Silicon Mac mini computers and powered by the AWS Nitro System, Amazon EC2 M1 Mac instances deliver up to 60% better price performance over x86-based EC2 Mac instances for building and testing iOS and macOS applications. You still enjoy the same elasticity, scalability, and reliability that the secure, on-demand AWS infrastructure has offered to millions of customers for more than a decade. EC2 M1 Mac instances also enable native Arm64 macOS environments for the first time on AWS to develop, build, test, deploy, and run applications for Apple devices. As a developer who is rearchitecting your macOS applications to natively support Apple Silicon Macs, you can now provision Arm64 macOS environments within minutes, dynamically scale capacity as needed, and benefit from pay-as-you-go pricing to enjoy faster builds and convenient distributed testing. To learn more or get started, see Amazon EC2 Mac Instances. View the full article
  15. AWS Launch Wizard now allows you to deploy SAP HANA in a scale-out architecture using Amazon EC2 x2idn and r6i instances. Customers can deploy up to 16 nodes (1 primary node and 15 secondary nodes) using these instance types. View the full article
  16. With the rise of cloud computing, more industries are migrating their workloads to cloud-based infrastructure. As a result of this trend, technologists need a mechanism to automate the deployment of instances and other cloud resources. Terraform is one such open-source tool that facilitates this process… View the full article
  17. Starting today, you can use Amazon EC2 placement groups to spread instances across distinct hosts on an AWS Outposts rack. Host-level spread placement groups distribute instances across hosts to reduce the likelihood of correlated failures, benefiting workloads that require High Availability (HA) like mission-critical databases. View the full article
  18. Amazon EC2 Auto Scaling now supports a higher default limit for Auto Scaling groups per account. Customers can now create up to 500 Auto Scaling Groups per account, an increase from 200. The limit increase enables customers to provision, manage, and scale EC2 instances for more applications per account. View the full article
  19. The Amazon Relational Database Service (Amazon RDS) Multi-AZ deployment option with one primary and two readable standby database (DB) instances across three Availability Zones (AZs) now supports M5d and R5d instances. This deployment option gives you up to 2x lower transaction commit latency, automated failovers typically under 35 seconds, and readable standby instances. View the full article
  20. We are excited to announce that the Amazon EC2 VT1 instances now support the AMD-Xilinx Video SDK 2.0, bringing support for Gstreamer, 10-bit HDR video, and dynamic encoder parameters. In addition to new features, this new version offers improved visual quality for 4k video, support for a newer version of FFmpeg (4.4), expanded OS/kernel support, and bug fixes. View the full article
  21. AWS announces the general availability of new memory-optimized Amazon EC2 R6id instances. R6id instances are powered by 3rd generation Intel Xeon Scalable Ice Lake processors, with an all-core turbo frequency of 3.5 GHz, up to 7.6 TB of local NVMe-based SSD block-level storage, and up to 15% better price performance than R5d instances. Furthermore, R6id instances also offer up to 58% higher TB storage per vCPU and 34% lower cost per TB and come with always-on memory encryption using Intel Total Memory Encryption (TME). View the full article
  22. Amazon AppStream 2.0 adds new instance sizes, stream.standard.xlarge and stream.standard.2xlarge, to the General Purpose instance family. stream.standard.xlarge offers 4 vCPUs and 16 GiB of memory, and stream.standard.2xlarge offers 8 vCPUs and 32 GiB of memory. These new sizes provide higher-performance compute, memory, and networking options for workloads that need more system resources to run effectively, such as integrated development environments, web servers, and code repositories. The new instance sizes are available across all AppStream fleet types: Always-On, On-Demand, and Elastic. View the full article
  23. We are excited to announce the following price reductions for Amazon EC2 instances running SLES. When you run SLES on Amazon EC2, you are charged one combined price for the Amazon EC2 infrastructure and the SUSE OS. View the full article
  24. Today, we are announcing support for Amazon Elastic Compute Cloud (EC2) Dedicated Hosts on AWS Outposts, which makes it easier for customers to bring their existing software licenses and workloads that require a dedicated physical server to their Outpost Racks. In addition, customers now have greater flexibility in instance type deployment and more granular placement control, all with consistent hybrid experience on AWS Outposts. View the full article
  25. Amazon Web Services (AWS) announces the general availability of new general purpose Amazon Elastic Compute Cloud (Amazon EC2) M6id instances. M6id instances are powered by third generation Intel Xeon Scalable processors (code name Ice Lake) with an all-core turbo frequency of 3.5 GHz, equipped with up to 7.6 TB of local NVMe-based solid state disk (SSD) block-level storage, and deliver up to 15% better price performance compared to M5d instances. Compared to previous generation instances, M6id instances offer up to 58% higher TB storage per vCPU and 34% lower cost per TB. M6id instances also come with always-on memory encryption by using Intel Total Memory Encryption (TME). Like all modern EC2 instances, M6id instances are built on AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers most of the compute and memory resources of the host hardware to your instances. M6id instances are ideal for workloads that require a balance of compute and memory resources along with high-speed, low-latency local block storage, including data logging and media processing. M6id instances will also benefit applications that need temporary storage of data, such as batch and log processing, and applications that need caches and scratch files. View the full article
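The runtime-role feature in item 8 amounts to attaching an IAM role ARN to the EMR AddJobFlowSteps call, so that the step assumes that role when accessing AWS resources. Below is a minimal sketch of such a request payload; the cluster ID, role ARN, bucket, and script path are hypothetical placeholders, and the resulting dict would be passed as keyword arguments to boto3's `emr_client.add_job_flow_steps(**request)`:

```python
# Sketch: build an AddJobFlowSteps request that attaches a runtime role
# to an EMR Step. All IDs and ARNs below are hypothetical placeholders.

def build_runtime_role_step_request(cluster_id, role_arn, step_args):
    """Return an AddJobFlowSteps request with a per-step execution role."""
    return {
        "JobFlowId": cluster_id,
        # The runtime role the step assumes when accessing AWS resources:
        "ExecutionRoleArn": role_arn,
        "Steps": [
            {
                "Name": "spark-job",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": step_args,
                },
            }
        ],
    }

request = build_runtime_role_step_request(
    "j-EXAMPLE12345",
    "arn:aws:iam::111122223333:role/emr-step-runtime-role",
    ["spark-submit", "s3://example-bucket/job.py"],
)
```

Because the role is scoped to the step rather than the cluster, different jobs submitted to the same EMR cluster can carry different permissions, which is what enables the Lake Formation table- and column-level controls mentioned in the same item.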
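Item 13 says predictive scaling forecasts are now published as CloudWatch metrics you can graph and alarm on. A sketch of a GetMetricData query for the load forecast follows; the namespace, metric name, and dimension names here are assumptions based on the announcement (check the metrics actually emitted for your Auto Scaling group), and the dict would be passed to `cloudwatch_client.get_metric_data(**query, StartTime=..., EndTime=...)` and compared against the actual load metric to gauge accuracy:

```python
# Sketch: build a GetMetricData query for a predictive scaling policy's
# load forecast. Namespace/metric/dimension names are assumptions based
# on the announcement, not confirmed values; the group and policy names
# are hypothetical placeholders.

def build_forecast_query(asg_name, policy_name):
    """Return a GetMetricData request body for the load forecast metric."""
    return {
        "MetricDataQueries": [
            {
                "Id": "forecast",
                "MetricStat": {
                    "Metric": {
                        "Namespace": "AWS/AutoScaling",  # assumed namespace
                        "MetricName": "PredictiveScalingLoadForecast",
                        "Dimensions": [
                            {"Name": "AutoScalingGroupName", "Value": asg_name},
                            {"Name": "PolicyName", "Value": policy_name},
                        ],
                    },
                    "Period": 3600,   # forecasts are hourly-grained
                    "Stat": "Average",
                },
            }
        ],
    }

query = build_forecast_query("web-asg", "predictive-policy")
```

Once the forecast series is retrievable like any other metric, a standard CloudWatch alarm on the difference between forecast and actual load gives the accuracy monitoring the item describes.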
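The host-level spread placement groups from item 17 are expressed through the EC2 CreatePlacementGroup API's spread strategy with a host-level spread setting. A minimal sketch of the request parameters (the group name is a hypothetical placeholder) that would be passed to boto3's `ec2_client.create_placement_group(**params)`:

```python
# Sketch: parameters for a host-level spread placement group, as used to
# spread instances across distinct hosts on an Outposts rack. The group
# name below is a hypothetical placeholder.

def build_host_spread_placement_group(group_name):
    """Return CreatePlacementGroup parameters for host-level spread."""
    return {
        "GroupName": group_name,
        "Strategy": "spread",
        # Spread across distinct hosts rather than distinct racks, so a
        # single host failure takes down at most one instance:
        "SpreadLevel": "host",
    }

params = build_host_spread_placement_group("outposts-db-spread")
```

Launching each node of a mission-critical database into this group keeps correlated host failures from taking out multiple replicas at once, which is the High Availability benefit the item highlights.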