Showing results for tags 'aws'.

  1. Welcome to April’s post announcing new training and certification updates — helping equip you and your teams with the skills to work with AWS services and solutions. This month, we launched 19 new digital training products on AWS Skill Builder, including two new AWS Digital Classroom courses (Digital Classroom – Developing Generative AI Applications on AWS and Digital Classroom – Security Engineering on AWS), nine new digital courses, two new AWS Partner courses (both include an Accreditation Badge to be awarded upon successful completion of the final assessment), and localized versions (Japanese, Korean, Simplified Chinese) of Exam Prep: AWS Certified Advanced Networking – Specialty. Missed our March course update? Check it out here.
AWS Digital Classroom
AWS Digital Classroom courses combine the depth of live classroom training with the convenience of digital learning. They provide comprehensive knowledge and skills enablement through recordings of expert instructors, demonstrations, hands-on labs, knowledge checks, and assessments. Enjoy flexibility in time, location, and pace, along with the ability to pause, rewind, and revisit content. AWS Digital Classroom is available as part of AWS Skill Builder Team subscription and AWS Skill Builder Individual annual subscription. This month we launched the following new courses:
• Digital Classroom – Developing Generative AI Applications on AWS
• Digital Classroom – Security Engineering on AWS
AWS Certification exam preparation and updates
Exam Prep Standard and Enhanced Courses for AWS Certified Advanced Networking – Specialty (ANS-C01) are now available in Japanese, Korean and Simplified Chinese (English is already available). The Standard version is available to anyone with an AWS Skill Builder account. Enhanced versions include additional prep materials such as hands-on exercises, flashcards, and additional exam-style questions, and are available with an AWS Skill Builder subscription.
Free digital courses on AWS Skill Builder
The following digital courses on AWS Skill Builder are free to all learners, along with 600+ free digital courses and learning plans.
Fundamental
• Amazon Connect Customer Profiles Fundamentals
• Amazon Connect Instance Fundamentals
• Amazon Connect Chat and Messaging Fundamentals
• AWS for RISE with SAP Introduction
• Modernizing SAP Workloads on AWS
• Amazon Q Introduction
• Amazon Q Business Getting Started
Intermediate
• Migration Evaluator – Gather Insights to Deliver a Migration Business Case
• Architecting SAP on AWS
For AWS Partners
Two new digital courses and badges are now available for AWS Partners. Upon successful completion of the final assessment for each of AWS Partner: Modernizing SAP Workloads on AWS (Technical) and AWS Partner: Architecting SAP Workloads on AWS (Technical), you will be awarded a Specialized Learner Badge.
Fundamental
• AWS Partner: Modernizing SAP Workloads on AWS (Technical)
Intermediate
• AWS Partner: Architecting SAP Workloads on AWS (Technical)
Digital Badges
• AWS Partner: Modernizing SAP (Specialized)
• AWS Partner: Architecting SAP (Specialized)
View the full article
  2. Process huge volumes of data with Python and DuckDB — An AWS S3 example. Continue reading on Towards Data Science » View the full article
  3. You already know that Terraform is a popular open-source infrastructure provisioning tool, and that AWS is one of the leading cloud providers with a wide range of services. But have you ever wondered how Terraform can help you better take advantage of the services AWS has to offer? This guide will explain how Terraform and AWS work together to give you insight and control over your cloud resources.
Why Use Terraform with AWS?
One of the main benefits of using Terraform with AWS is that it allows you to define your entire AWS infrastructure as code using HashiCorp Configuration Language (HCL). With Terraform configuration files, often called Terraform code, you can easily provision, change, and version control your AWS resources. This provides consistency and repeatability across your environment. Rather than manually making changes through the AWS Management Console, you can model your AWS setup, test changes locally, and roll out updates automatically. For a hands-on experience with Terraform, check out our Terraform Basics Training Course.
Key Reasons to Adopt Terraform for AWS
Below are some of the reasons why you should adopt Terraform for AWS infrastructure management:
IaC Benefits
Terraform enables you to treat your infrastructure as code.
This approach has several benefits:
• Reproducibility: Defining your infrastructure in code makes it easy to recreate environments consistently.
• Version Control: Storing your infrastructure configuration in version-controlled repositories (e.g., Git) allows for collaboration and tracking of changes over time.
• Automation: It allows for the automation of resource provisioning, updates, and teardown.
AWS-Specific Benefits
• Broad Service Coverage: Terraform supports a wide range of AWS services, from EC2 instances to S3 buckets, RDS databases, and more.
• Multi-Region and Multi-Account Deployments: Easily deploy resources across different AWS regions and accounts.
• Immutable Infrastructure: Terraform encourages the use of immutable infrastructure patterns, promoting reliability and scalability.
How Does Terraform Work with AWS?
At its core, Terraform uses the AWS APIs to dynamically provision and manage resources. When initializing a working directory, Terraform downloads the AWS provider plugin, which understands how to communicate with the various AWS services. The AWS provider contains resource types that map directly to the actual AWS APIs. So, for example, when you define an "aws_instance" resource, the provider knows that it maps to the EC2 RunInstances API call. By abstracting the underlying AWS APIs, Terraform provides a declarative way to manage your entire AWS environment as code. The provider handles all the network calls and state synchronization behind the scenes.
Getting Started with Terraform on AWS
1. Install the Terraform CLI
Terraform is distributed as a single binary that can be downloaded and added to your system PATH. On Linux/macOS, you can download the official HashiCorp release and extract the zip file. On Windows, download the .zip from the releases page and extract it to a directory in your PATH. For more details on how to install Terraform, check the Terraform docs.
2.
Verifying the Install
Test that Terraform is available by checking the version:
terraform -v
You should get output similar to this:
Terraform v1.1.9
3. Configuring AWS Credentials
Terraform supports different strategies for AWS authentication, such as static credentials, environment variables, or IAM roles. For automation, it is recommended that you use an IAM role attached to your EC2 instance. Otherwise, set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or create a credentials file at ~/.aws/credentials.
4. Creating the Main Configuration
Create a new working directory with a main.tf file:
touch main.tf
Add a resource block for an EC2 instance specifying the AMI, instance type, security groups, and so on:
resource "aws_instance" "example" {
  ami                    = "ami-0cff7568"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["sg-1234567890abcdef0"]
}
This defines the infrastructure you want to create. Then initialize the working directory so Terraform can download the AWS provider plugin:
terraform init
5. Validating and Applying Changes
Run terraform plan to review the planned actions and changes before applying:
terraform plan
Then apply the changes:
terraform apply
Terraform will create the EC2 instance and all required dependencies. You can access the instance on the AWS console.
Adding Modules and Remote State
As your infrastructure grows more complex, structure it using reusable Terraform modules. Modules define generic AWS patterns like a VPC, Auto Scaling group, or RDS database that you can call multiple times. Also, manage those modules in version control along with your main configurations. You can read more about modules in this blog: Terraform Modules - Tutorials and Examples. For team collaboration, maintain a centralized state file to track resource lifecycles. Store the file remotely in S3, backed by DynamoDB for locking. This prevents state collisions and loss during runs.
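The credential and remote-state setup described above can be sketched in HCL. This is a minimal example; the profile, bucket, and table names below are placeholders, not values from the original post:

```hcl
# Configure the AWS provider using a named credential profile
# from ~/.aws/credentials, so no access keys appear in the code.
provider "aws" {
  region  = "us-east-1"
  profile = "default" # placeholder profile name
}

# Store state remotely in S3, with a DynamoDB table for locking.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "prod/terraform.tfstate"    # path to the state file in the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # placeholder lock table name
    encrypt        = true                        # encrypt state at rest
  }
}
```

Note that backend blocks cannot reference variables, so these values must be literals, and you need to run terraform init again after adding or changing the backend.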
To solidify your understanding of Terraform and prepare for official certification, consider taking our course on Terraform Associate Certification: HashiCorp Certified. This course is designed to help you master Terraform and prepare for the official HashiCorp certification exam.
Terraform in AWS Best Practices
Follow these best practices to get the most out of Terraform in AWS.
1. Use an AWS Credential Profile
Rather than hardcoding access keys and secret keys directly in your Terraform configuration, use a credential profile configured by one of the AWS SDKs. This avoids maintaining secrets in multiple locations and prevents accidental commits to version control. If you’re running Terraform from control servers, consider using an IAM instance profile for authentication.
2. Break Up AWS Configurations
When provisioning multiple services (EC2 instances, security boundaries, ECS clusters, etc.), avoid defining them all in a single configuration file. Instead, break them up into smaller, manageable chunks. Organize your configurations based on logical groupings or services to improve maintainability.
3. Keep Secrets Secure
If you need to store sensitive data or other information you don’t want to make public, use a terraform.tfvars file and exclude it from version control (e.g., by using .gitignore). Avoid hardcoding secrets directly in your configuration files.
4. Use Remote State
Store your Terraform state remotely, ideally in an S3 bucket with versioning enabled. This ensures consistency and allows collaboration among team members. Remote state management provides better visibility into changes made to the infrastructure.
5. Leverage Existing Modules
Take advantage of shared and community modules. These pre-built modules save time and effort by providing reusable configurations for common AWS resources. Import existing infrastructure into Terraform to avoid re-creating everything from scratch.
6.
Consistent Naming Convention
Adopt a consistent naming convention for your resources. Clear, descriptive names make it easier to manage and troubleshoot your infrastructure. Use meaningful prefixes or suffixes to differentiate between environments (e.g., dev-, prod-).
7. Always Format and Validate
Use Terraform’s built-in formatting (terraform fmt) and validation (terraform validate) tools. Consistent formatting improves readability, and validation catches errors early in the process.
Common Use Cases
Below are some of Terraform’s common use cases in AWS:
• Web Application Deployment: Deploying web servers, load balancers, and databases.
• Dev/Test Environment Creation: Spinning up isolated environments for development and testing.
• CI/CD Pipelines: Automating infrastructure provisioning as part of your deployment pipeline.
Additional Features to Know
Below are some advanced capabilities available when using Terraform in AWS:
• Data Sources: Terraform allows you to query existing AWS data, such as AMI IDs and security groups, before defining resources that depend on this data.
• Output Values: After applying changes, Terraform exposes attributes of resources, making them easily accessible for use in other parts of your infrastructure.
• Remote Backend: Terraform’s remote backend feature manages the state of your infrastructure and provides locking mechanisms to facilitate collaboration among multiple developers.
• SSH Bastion Host Modules: For enhanced security, community modules are available that provision an SSH bastion host to secure access to EC2 instances.
• Custom IAM Roles and Policies: Terraform enables the provisioning of custom IAM roles and policies tailored to your infrastructure’s needs.
• Integration with Other Tools: Terraform’s module registry allows for seamless integration with a variety of other tools, expanding its functionality and utility.
An alternative to Terraform when working with AWS is CloudFormation, a service that allows you to model and provision AWS resources
in a repeatable and automated way. Read more about it in this blog: Terraform vs. CloudFormation: A Side-by-Side Comparison. Check out our Terraform + AWS Playground to start experimenting with automated infrastructure provisioning.
Conclusion
Terraform is a powerful tool for managing your infrastructure in AWS. It allows you to automate your deployments and maintain a consistent environment. It also supports other cloud providers, including Microsoft Azure, Google Cloud Platform (GCP), and many others. Join our Terraform Challenge to master provisioning and managing infrastructure with Terraform. Sign up on KodeKloud for free now and learn how to use Terraform on the go. View the full article
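The Data Sources and Output Values features mentioned in this post can be illustrated with a short HCL sketch. The AMI filter and resource names here are illustrative assumptions, not taken from the article:

```hcl
# Data source: look up the most recent Amazon Linux 2 AMI
# instead of hardcoding an AMI ID.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Resource that consumes the queried data.
resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t2.micro"
}

# Output value: expose the instance's public IP after apply.
output "web_public_ip" {
  value = aws_instance.web.public_ip
}
```

After terraform apply, the output can be read with terraform output web_public_ip, or referenced from other configurations via a terraform_remote_state data source.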
  4. We've just added new service status forums for Amazon Web Services (AWS) & Google Cloud Platform (GCP):
https://devopsforum.uk/forum/32-aws-service-status/
https://devopsforum.uk/forum/33-gcp-service-status/
We've also added a new 'Cloud Service Status' block on the right-hand side of the main homepage, showing the latest 3 statuses. We've also added an Azure service status forum, however so far there are no posts - apparently, if everything is running smoothly on Azure, the feed will be empty: https://azure.status.microsoft/en-gb/status/feed/
Here are some other status feeds for Azure services:
https://azure.status.microsoft/en-us/status/history/
https://feeder.co/discover/d3ca207d93/azure-microsoft-com-en-us-status
  5. AWS re:Invent is the world's largest, most comprehensive cloud computing event. This year, for the first time ever, re:Invent is available as a free 3-week virtual event. Full Details & Registration
  6. Last Friday was International Women’s Day (IWD), and I want to take a moment to appreciate the amazing ladies in the cloud computing space that are breaking the glass ceiling by reaching technical leadership positions and inspiring others to go and build, as our CTO Werner Vogels says. Last week’s launches Here are some launches that got my attention during the previous week. Amazon Bedrock – Now supports Anthropic’s Claude 3 Sonnet foundational model. Claude 3 Sonnet is two times faster and has the same level of intelligence as Anthropic’s highest-performing models, Claude 2 and Claude 2.1. My favorite characteristic is that Sonnet is better at producing JSON outputs, making it simpler for developers to build applications. It also offers vision capabilities. You can learn more about this foundation model (FM) in the post that Channy wrote early last week. AWS re:Post – Launched last week! AWS re:Post Live is a weekly Twitch livestream show that provides a way for the community to reach out to experts, ask questions, and improve their skills. The show livestreams every Monday at 11 AM PT. Amazon CloudWatch – Now streams daily metrics on CloudWatch metric streams. You can use metric streams to send a stream of near real-time metrics to a destination of your choice. Amazon Elastic Compute Cloud (Amazon EC2) – Announced the general availability of new metal instances, C7gd, M7gd, and R7gd. These instances have up to 3.8 TB of local NVMe-based SSD block-level storage and are built on top of the AWS Nitro System. AWS WAF – Now supports configurable evaluation time windows for request aggregation with rate-based rules. Previously, AWS WAF was fixed to a 5-minute window when aggregating and evaluating the rules. Now you can select windows of 1, 2, 5 or 10 minutes, depending on your application use case. AWS Partners – Last week, we announced the AWS Generative AI Competency Partners. 
This new specialization features AWS Partners that have shown technical proficiency and a track record of successful projects with generative artificial intelligence (AI) powered by AWS. For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page. Other AWS news Some other updates and news that you may have missed: One of the articles that caught my attention recently compares different design approaches for building serverless microservices. This article, written by Luca Mezzalira and Matt Diamond, compares the three most common designs for serverless workloads and explains the benefits and challenges of using one over the other. And if you are interested in the serverless space, you shouldn’t miss the Serverless Office Hours, which airs live every Tuesday at 10 AM PT. Join the AWS Serverless Developer Advocates for a weekly chat on the latest from the serverless space. The Official AWS Podcast – Listen each week for updates on the latest AWS news and deep dives into exciting use cases. There are also official AWS podcasts in several languages. Check out the ones in French, German, Italian, and Spanish. AWS Open Source News and Updates – This is a newsletter curated by my colleague Ricardo to bring you the latest open source projects, posts, events, and more. Upcoming AWS events Check your calendars and sign up for these AWS events: AWS Summit season is about to start. The first ones are Paris (April 3), Amsterdam (April 9), and London (April 24). AWS Summits are free events that you can attend in person and learn about the latest in AWS technology. GOTO x AWS EDA Day London 2024 – On May 14, AWS partners with GOTO bring to you the event-driven architecture (EDA) day conference. At this conference, you will get to meet experts in the EDA space and listen to very interesting talks from customers, experts, and AWS. You can browse all upcoming in-person and virtual events here. That’s all for this week. 
Check back next Monday for another Week in Review! — Marcia This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
  7. Amazon Web Services (AWS) has secured a nuclear-powered data center campus as part of a $650 million agreement with Texas-based electricity generation and transmission company Talen Energy. AWS’s acquisition includes the Cumulus data center complex, situated next to Talen’s 2.5 gigawatt Susquehanna nuclear power plant in the northwest of Pennsylvania. Cumulus, which claimed a 48-megawatt capacity when it opened last year, was already on track to expand nearly tenfold to 475 megawatts under Talen’s ownership. The Amazon takeover could spell out even more growth as the cloud giant looks to power its data centers with cleaner energy sources. AWS nuclear-powered data center Amazon’s payments are set to be staggered, with a $350 million lump sum due upon deal closure and a further $300 million awarded upon the completion of certain development milestones set to take place this year. As part of the agreement, Talen will continue to supply Amazon with direct access to power produced by its Susquehanna nuclear power plant, which could reach heights of 960 megawatts in the coming years. While nuclear energy may be a controversial power source, AWS’s acquisition aligns with its commitment to carbon-free and renewable energy sources. Data centers are already being scrutinized for their intensive energy and natural resource consumption, and AWS has plenty of them dotted all over the world. Amazon has also been busy snapping up other green energy opportunities, such as the Oregon-based wind farm that it signed a power purchase agreement for last month. Moreover, the substantial move could be one that keeps it ahead of key rivals. The likes of Microsoft and Google have also been busy transitioning to clean energy in recent months in a bid to reduce the environmental burden of data centers, with nuclear, wind, solar and geothermal plants all being considered. 
More from TechRadar Pro
• These are the best cloud hosting providers
• Microsoft could be planning to run future data centers with nuclear power
• Check out our roundup of the best cloud storage and best cloud backup tools
View the full article
  8. Amazon Web Services (AWS) is a popular cloud platform that provides a variety of services for developing, deploying, and managing applications. It is critical to develop good logging and monitoring practices while running workloads on AWS to ensure the health, security, and performance of your cloud-based infrastructure. In this post, we will look at the significance of logging and monitoring in AWS, the available options and best practices, and the prominent AWS services and tools that can help you achieve these goals.
The Importance of Logging and Monitoring in AWS
Before we dive into the technical aspects of logging and monitoring in AWS, it’s essential to understand why these activities are critical in a cloud-based environment. View the full article
  9. We’re back with the February course launches and certification updates from AWS Training and Certification to equip you and your teams with the skills to work with AWS services and solutions. This month we launched 22 new digital training products on AWS Skill Builder, including new AWS Digital Classroom courses for annual subscribers, five new solution assignments to build generative AI skills via AWS Cloud Quest, and new prep courses for the AWS Certified Data Engineer – Associate exam. Don’t forget to try the 7-day free trial of AWS Skill Builder Individual subscription* for access to our most immersive, hands-on trainings, including 195+ AWS Builder Labs, enhanced exam prep resources, AWS Cloud Quest, AWS Industry Quest, AWS Jam Journeys, and more. There’s something for every cloud learner, from brand new builder to experienced professional. *terms and conditions apply
New Skill Builder subscription features
The following new AWS Skill Builder features require an Individual or Team subscription. Individuals can try for free with a 7-day free trial.
AWS Digital Classroom
Get access to a catalog of AWS Classroom Training courses that have the flexibility of digital training with the depth of classroom training. Available with an annual subscription for individuals or teams, learn more about AWS Digital Classroom and subscribe today — and for a limited time, receive $150 off your annual Individual plan.
AWS Cloud Quest
AWS Cloud Quest has added five solution assignments to build practical generative AI skills within Machine Learning, Serverless Developer, Solutions Architect, and Security roles. Learn to generate images from text descriptions, create chatbots powered by large language models, use generative AI to build cloud infrastructure, and monitor compute resources using AI-generated code.
These hands-on assignments will teach you how to leverage services like Amazon CodeWhisperer, Amazon Lex V2, and Amazon SageMaker for applied generative AI and automation.
AWS Certification exam prep and updates
Three AWS Certifications are retiring
AWS Certification will retire three specialty certifications and their corresponding AWS Skill Builder Exam Prep trainings in April 2024: AWS Certified Data Analytics – Specialty on April 9, 2024; and AWS Certified Database – Specialty and AWS Certified: SAP on AWS – Specialty on April 30, 2024. If you plan to earn these credentials, be sure to take your exam prior to their retirement dates.
Exam prep resources
Exam Prep Standard Course: AWS Certified Data Engineer – Associate (DEA-C01 – English) (6 hours) is a free digital course designed to prepare you for the AWS Certified Data Engineer – Associate (DEA-C01) exam. During this course you’ll follow a step-by-step plan to prepare, gauging your understanding of topics and concepts from each task statement grouped by exam domains. Become an AWS Skill Builder subscriber and access an enhanced subscription-only 13-hour Exam Prep Enhanced Course: AWS Certified Data Engineer – Associate (DEA-C01 – English) that includes hands-on exercises and exam-style questions to reinforce your knowledge and identify learning gaps. You’ll explore learning strategies to identify incorrect responses to help you determine your readiness to take the exam with the AWS Certification Official Pretest.
Exam Prep Official Pretest: AWS Certified Data Engineer – Associate (DEA-C01) (2 hours) helps you prepare for the AWS Certified Data Engineer – Associate (DEA-C01) exam. Gain confidence going into exam day with an official, full-length pretest created by the experts at AWS. Take an AWS Certification Official Pretest to focus your preparation where you need it most, and assess your exam readiness.
Exam Prep Official Pretest: AWS Certified Cloud Practitioner (CLF-C02) is now available in French, Italian, German, Spanish-Spain, Traditional Chinese and Indonesian (English, Japanese, Korean, Portuguese, Simplified Chinese, and Spanish LatAm already available). Free digital courses on AWS Skill Builder The following digital courses are free within AWS Skill Builder, along with 600+ other digital courses, learning plans, and resources. Fundamental courses AWS Skill Builder Learner Guide (15 min.) teaches new users how to navigate through AWS Skill Builder and what content types are available to learners. AWS for SAP Fundamentals (45 min.) teaches you the essentials of SAP architecture and provides an understanding of various AWS adoption scenarios for SAP, licensing options, and the AWS support frameworks specific to SAP workloads on AWS. You’ll acquire a foundational knowledge of the basics involved in operating SAP in the AWS Cloud. AWS Mainframe Modernization Refactor with AWS Blu Age Getting Started (60 min.) teaches you the functionality, technical architecture, key use cases and cost structure of AWS Mainframe Modernization Refactor with AWS Blu Age. AWS Mainframe Modernization Replatform with Micro Focus Getting Started (60 min.) teaches you the functionality, technical architecture, key use cases and cost structure of AWS Replatform with Micro Focus. Intermediate courses Containerize and Run .NET Applications on Amazon EKS Windows Pods (2 hours) teaches you Kubernetes, an open-source system for automating deployment, scaling, and managing containerized applications. It also covers Amazon Elastic Kubernetes Service (Amazon EKS), a managed service to run a Kubernetes workload on AWS without the need to install, operate, and maintain your own Kubernetes cluster. Amazon QuickSight Advanced Business Intelligence Authoring (Part 1) (90 min.) teaches you how to author business intelligence experiences using Amazon QuickSight. 
In this first course of a two-part series, you’ll dive into advanced authoring capabilities in QuickSight, gain expertise in data connectivity, data preparation, and customized highly formatted dashboard building. Amazon QuickSight Advanced Business Intelligence Authoring (Part 2) (90 min.) teaches you how to author business intelligence experiences using Amazon QuickSight. In this second course of a two-part series, you’ll gain practical knowledge on building interactivity, including filters, actions, navigation, and sheets, QuickSight security, QuickSight Q, forecasting, paginated reporting, and data export. AWS Mainframe Modernization – Using MicroFocus Managed Runtime Environment (60 min.) teaches you to build an AWS Replatform with Micro Focus environment using an AWS CloudFormation template to deploy and test an application. AWS Mainframe Modernization – Using Refactor Tools (60 min.) teaches you to setup AWS Blu Insights and use code import and transformation features to refactor Mainframe application code. Amazon Timestream – Data Modeling Techniques (60 min.) teaches you about the significance of efficiently modeling data for your time series workloads using Amazon Timestream. You’ll be introduced to various Timestream features and how to use them for different scenarios. At the end of this course you’ll be able to implement high-performance data models for Amazon Timestream. AWS Training for Partners AWS Partner: SAP on AWS (Technical) (3.5 hours) teaches you key architecture patterns for SAP on AWS, with emphasis on designing, migrating, implementing, and managing SAP solutions. You’ll also gain an understanding of SAP HANA on AWS, and high availability and disaster recovery scenarios. Successfully complete the final assessment and you’ll earn a Credly Accreditation Badge. View the full article
  10. About
AWS re:Invent 2023 will be a five-day conference from November 27 to December 1, 2023. The event will continue to be hosted in Las Vegas, Nevada, across the following six venues along the Las Vegas Strip:
• Caesars Forum (Breakout sessions, content hub, meals)
• Encore (Breakout sessions, bookable meeting space)
• Mandalay Bay (Breakout sessions, registration, content hub, meals)
• MGM Grand (Breakout sessions, registration, content hub, meals)
• The Venetian (Breakout sessions, registration, Expo, keynotes, content hub, meals)
• Wynn (Breakout sessions, meals)
Prices
The full conference pass for AWS re:Invent 2023 costs $2,099. The virtual-only pass is free, which enables virtual attendees to watch Keynotes and Innovation talks. Both require registration with AWS re:Invent 2023 here.
Links
https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2023/
https://reinvent.awsevents.com/register/?trk=aws.amazon.com
https://www.aboutamazon.com/news/aws-reinvent-2023-live
https://www.aboutamazon.com/news/aws/what-is-aws-reinvent
https://www.spiceworks.com/tech/tech-general/articles/aws-reinvent-2023-guide/
  11. Stuck with making a choice between AWS and Google Cloud? Here is a round-up of both platforms and the factors that should inform your decision. View the full article
  12. Learning how various AWS services work can be a daunting task, especially for those who are new to the cloud. There are two dimensions of learning AWS: 1/ understanding what each AWS service does, and 2/ how to use each AWS service. Understanding how each AWS service works can generally be done by reading educational materials, such as AWS Whitepapers, or enrolling in a fundamental course on AWS Skill Builder. However, learning how to use an AWS service is a different challenge altogether. To understand how to use AWS, you need to get hands-on keyboard practice within an AWS account. However, if you choose to practice in your own AWS account, you may incur unexpected costs or forget to set up certain security guardrails. Particularly for those who are new to building on AWS, this presents a high barrier to entry. In today’s blog post, we will focus on how Solutions-Focused Immersion Days can help you on your AWS learning journey by providing an interactive learning environment and technical content taught directly by AWS Solutions Architects and specialists.
What is a Solutions-Focused Immersion Day?
AWS Solutions-Focused Immersion Days (SFIDs) are live, free virtual events designed for individuals who want to learn how to use AWS products and services or improve their existing AWS skills. Each SFID is between three to six hours in length and focuses on one or more related AWS services (see below for SFID options). SFIDs leverage a modular content format featuring both technical presentations and hands-on labs to help learners understand how to build, deploy, and operate infrastructure and applications in the cloud. There is a wide variety of topics covered at different expertise levels, allowing a broad audience from all different businesses and industries to benefit. These events are designed for a technical audience, including Infrastructure Administrators, Database Administrators, Developers, and Architects.
However, anyone is welcome to attend and learn new skills. SFID participants join the event through an online meeting platform and can engage live with the AWS Solutions Architects and subject matter experts. For each SFID, AWS moderators actively answer questions and address concerns. Why are SFIDs useful? SFIDs present curated, up-to-date information about AWS services, keeping your skills current as AWS services continually evolve. Additionally, each SFID’s content and labs follow the AWS Well-Architected Framework, ensuring you learn not only how to use each AWS service, but how to use it in accordance with best practices. Whether you are new to AWS or an experienced user, there is always something new you can learn from an SFID. Each SFID includes hands-on labs so you can experience building with AWS. These labs are done using AWS accounts provided to you at no charge for the duration of the SFID event. Additionally, these AWS accounts come pre-provisioned with security guardrails and limitations. During each lab session, an AWS Solutions Architect or subject matter expert shares their screen and guides you step by step through the process. You may either follow along in your provisioned AWS account or just observe the session and do the lab on your own time. These labs present an excellent opportunity to get hands-on experience with AWS services without fear of incurring unexpected costs or security risks. SFID topics and where to start Each SFID is built around a specific AWS technical topic. Every month, a series of SFIDs covers different solutions, and based on feedback from customer surveys, the most well-received and popular topics are brought back for future immersion days. If you have no prior experience with AWS, we recommend starting with the introductory SFID session called Introduction to AWS. 
This is a two-day technical workshop that gives high-level, hands-on training in core AWS concepts and services, including compute, monitoring, security, networking, and storage. Once you have some foundational knowledge of AWS and the cloud, there are a variety of more advanced SFIDs to choose from, which you can take in any order you wish. For reference, this is a list of SFIDs that have been conducted in the past and will likely return:

Networking:
  - Getting Started with Amazon Virtual Private Cloud (VPC)

Storage:
  - Data Protection of application resources using AWS Backup
  - Amazon Simple Storage Service (S3) Replication and Resiliency Workshop
  - Hybrid Cloud Storage with Storage Gateway
  - Getting Started with Amazon Storage – Simple Storage Service and Elastic File System
  - Disaster Recovery Strategies on AWS
  - Amazon FSx for NetApp ONTAP Storage Workshop
  - RDS SQL Server Deep Dive Immersion Day

Security:
  - Securing your applications at the edge with Web Application Firewall (WAF)
  - Introduction to AWS Identity and Access Management (IAM)
  - Getting Started with Monitoring and Cost

Data:
  - Building a Data Lake on AWS
  - Getting started with DynamoDB

Compute:
  - Getting Started with Amazon Elastic Compute Cloud (EC2) and Auto Scaling
  - Getting Started with Serverless
  - Simplify your data integration with AWS Glue

Earn digital badges as you complete SFIDs Once you start attending AWS technology learning events, including SFIDs, you can start participating in AWS Builder’s Quest, which provides recognition for those who attend and learn from SFIDs. As you complete SFIDs, you can earn digital badges. There are two types of digital badges earned in AWS Builder’s Quest: Learning Participant badges and Level badges. An AWS Builder’s Quest Learning Participant badge shows that you have successfully completed a formal technology learning event. After finishing each SFID and completing the event survey, you’ll earn a Learning Participant badge. 
As you accumulate Learning Participant badges, you will be awarded a Level badge. There are five Level badges: Bronze when you earn three Learning Participant badges; Silver for five; Gold for 10; Platinum for 15; and Diamond for 20. Builders can visually represent their achievements using these digital badges and showcase their learning journey on professional networks such as LinkedIn. AWS Builder’s Quest digital badges How to sign up Signing up for a Solutions-Focused Immersion Day is easy. Go to the website, find an event that interests you, and register through the event page. We hope to see you soon at an SFID and look forward to cheering you on in your AWS Cloud journey. View the full article
  13. By using Generative AI, developers can leverage pre-trained foundation models to gain insights on their code’s structure, the CodeGuru Reviewer recommendation and the potential corrective actions. For example, Generative AI models can generate text content, e.g., to explain a technical concept such as SQL injection attacks or the correct use of a given library. Once the recommendation is well understood, the Generative AI model can be used to refactor the original code so that it complies with the recommendation. The possibilities opened up by Generative AI are numerous when it comes to improving code quality and security. In this post, we will show how you can use CodeGuru Reviewer and Bedrock to improve the quality and security of your code. While CodeGuru Reviewer can provide automated code analysis and recommendations, Bedrock offers a low-friction environment that enables you to gain insights on the CodeGuru recommendations and to find creative ways to remediate your code... View the full article
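The pattern described above, feeding a CodeGuru Reviewer finding into a Bedrock model for explanation and remediation, can be sketched in a few lines of Python. This is a minimal illustration, not the post's actual code: `build_explain_prompt` and `ask_bedrock` are hypothetical helpers, the `inputText` request body is an assumption that varies by Bedrock model, and the model ID would need to be one you have access to.

```python
import json

def build_explain_prompt(code_snippet: str, recommendation: str) -> str:
    """Combine a CodeGuru Reviewer finding with the flagged code into a
    prompt asking the model to explain the issue and propose a fix."""
    return (
        "Explain the following code review finding and propose a fix.\n\n"
        f"Finding: {recommendation}\n\n"
        f"Code:\n{code_snippet}\n"
    )

def ask_bedrock(client, model_id: str, prompt: str):
    """Send the prompt to a Bedrock text model via a boto3
    'bedrock-runtime' client; response parsing differs per model."""
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps({"inputText": prompt}),
    )
    return json.loads(response["body"].read())

# Illustrative finding and snippet (not called against AWS here):
prompt = build_explain_prompt(
    'query = "SELECT * FROM users WHERE id = " + user_id',
    "Possible SQL injection: untrusted input concatenated into a query.",
)
print(prompt)
```

Once the model has explained the finding, a second prompt asking it to rewrite the snippet (e.g., using a parameterized query) covers the refactoring step the post describes.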
  14. Customers often ask for help with implementing Blue/Green deployments to Amazon Elastic Container Service (Amazon ECS) using AWS CodeDeploy. Their use cases usually involve cross-Region and cross-account deployment scenarios. These requirements are challenging enough on their own, but in addition to those, there are specific design decisions that need to be considered when using CodeDeploy. These include how to configure CodeDeploy, when and how to create CodeDeploy resources (such as Application and Deployment Group), and how to write code that can be used to deploy to any combination of account and Region. Today, I will discuss those design decisions in detail and how to use CDK Pipelines to implement a self-mutating pipeline that deploys services to Amazon ECS in cross-account and cross-Region scenarios. At the end of this blog post, I also introduce a demo application, available in Java, that follows best practices for developing and deploying cloud infrastructure using AWS Cloud Development Kit (AWS CDK)... View the full article
  15. Cloud-native application development in AWS often requires a complex, layered architecture with synchronous and asynchronous interactions between multiple components, e.g., API Gateway, microservices, serverless functions, and system-of-record integration. Performance engineering requires analyzing the performance and resiliency of each component and the interactions between them. While guidance is available on the technical implementation of individual components, e.g., for AWS API Gateway, Lambda functions, etc., you still need to understand and apply end-to-end best practices to meet performance requirements at the overall architecture level. This article provides some fine-grained mechanisms to improve the performance of a complex cloud-native architecture flow, curated from hands-on experience and lessons learned from real projects deployed in production. Mission-critical applications often have stringent nonfunctional requirements for concurrency in the form of transactions per second (henceforth called “tps”). A proven mechanism to validate the concurrency requirement is to conduct performance testing. View the full article
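When sizing a performance test against a tps target, a useful back-of-the-envelope relation is Little's Law: the number of requests in flight equals the arrival rate times the mean latency. A small sketch (the numbers are illustrative, not from the article):

```python
import math

def required_concurrency(target_tps: float, mean_latency_s: float) -> int:
    """Little's Law: in-flight requests = arrival rate x mean latency.
    Round up so the load generator can actually sustain the target."""
    return math.ceil(target_tps * mean_latency_s)

# Illustrative: sustaining 500 tps at 200 ms mean end-to-end latency
# requires about 100 concurrent requests in flight.
print(required_concurrency(500, 0.2))   # 100
```

The same relation works in reverse: if a load test at a fixed concurrency cannot reach the target tps, the measured latency tells you whether the bottleneck is the system under test or an undersized load generator.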
  16. Red Hat OpenShift Service on AWS (ROSA) is a fully managed turnkey application platform. It is jointly engineered and supported by Red Hat and AWS through Site Reliability Engineers so customers don’t have to worry about the complexity of infrastructure management. As an application platform running on AWS, a common use case is to connect an application to an AWS managed database. View the full article
  17. WebSocket is a common communication protocol used in web applications to facilitate real-time bi-directional data exchange between client and server. However, when the server has to maintain a direct connection with the client, it can limit the server’s ability to scale down when there are long-running clients. This scale down can occur when nodes are underutilized during periods of low usage. In this post, we demonstrate how to redesign a web application to achieve auto scaling even for long-running clients, with minimal changes to the original application... View the full article
  18. AWS Fargate is a serverless compute engine for running Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic Container Service (Amazon ECS) workloads without managing the underlying infrastructure. AWS Fargate makes it easy to provision and scale secure, isolated, and right-sized compute capacity for containerized applications. As a result, teams are increasingly choosing AWS Fargate to run workloads in Kubernetes clusters. It is a common practice for multiple teams to share a single Kubernetes cluster. In such cases, cluster administrators often need to allocate cost based on each team’s resource usage. Amazon EKS customers can deploy the Amazon EKS optimized bundle of Kubecost for cluster cost visibility when using Amazon EC2. However, in this post, we show you how to analyze the costs of running workloads on EKS Fargate using the data in the AWS Cost and Usage Report (CUR). Using Amazon QuickSight, you can visualize your AWS Fargate spend and allocate cost by cluster, namespace, and deployment... View the full article
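The core of the allocation the post builds in QuickSight is a group-by over CUR line items. A minimal Python sketch of that aggregation, with field names deliberately simplified for illustration (actual CUR columns such as unblended cost and resource tags have longer, slash-delimited names):

```python
from collections import defaultdict

def cost_by_namespace(line_items):
    """Sum Fargate cost line items per Kubernetes namespace.
    Each item is a dict with simplified, illustrative CUR fields."""
    totals = defaultdict(float)
    for item in line_items:
        ns = item.get("namespace", "unallocated")
        totals[ns] += item["unblended_cost"]
    return dict(totals)

# Illustrative line items, as if extracted from a CUR export:
items = [
    {"namespace": "team-a", "unblended_cost": 1.25},
    {"namespace": "team-b", "unblended_cost": 0.75},
    {"namespace": "team-a", "unblended_cost": 0.50},
]
print(cost_by_namespace(items))
```

The same grouping by cluster or deployment instead of namespace yields the other two breakdowns the post mentions.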
  19. About a year ago, we published a post on how to Optimize your Spring Boot application for AWS Fargate, where we went into different optimization techniques to speed up the startup time of Spring Boot applications on AWS Fargate. We started the post with “Fast startup times are key to quickly react to disruptions and demand peaks, and they can increase the resource efficiency”. Seekable OCI (SOCI) is a new and simple way to reduce startup times for Java workloads running on AWS Fargate. It can be combined with the earlier optimizations, or you can just use SOCI for a simple win. Customers running applications on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate can now use SOCI to lazily start containers, that is, start without waiting for the entire container image to be downloaded. SOCI starts your application immediately and downloads data from the container registry when the application requests it. This improves the overall container startup time. A great deep dive on SOCI and AWS Fargate can be found here. In this post, we’ll dive into techniques to optimize your Java applications using SOCI that don’t require you to change a single line of Java code. In our Spring Boot example application, this improves application startup time by about 25%, and this improvement should get bigger as the container image size grows. In addition, we’ll also take a closer look at benchmarks for two different frameworks and approaches. You don’t have to rebuild your images to use SOCI. However, during our tests we also identified optimizations for Java applications that only require small modifications to the build process and Dockerfile (i.e., the actual application doesn’t need any adjustments) and reduce the startup time of an AWS Fargate task even further. While we’re focused on Java applications today, we expect SOCI to be helpful in any case where customers deploy large container images. 
This testing was carried out with a sample application, and the layered-jar-with-SOCI approach may not improve launch times for all Spring Boot applications. We recommend testing this approach with your application and measuring the impact in your environment... View the full article
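The layered-jar build modification mentioned above can be sketched with Spring Boot's standard layertools jar mode, which splits the fat jar into layers so that rarely-changing dependencies land in separate image layers that lazy loading can skip or cache. This is a generic multi-stage Dockerfile sketch, not the post's actual build; the base image tag and jar path are assumptions you would adapt:

```dockerfile
# Stage 1: unpack the Spring Boot fat jar into layers
FROM eclipse-temurin:17-jre AS builder
WORKDIR /app
COPY target/app.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract

# Stage 2: copy layers individually, most stable first, so
# dependency layers are cached and reused across image builds
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```

A SOCI index would then be generated for the pushed image separately (e.g., with the soci-snapshotter tooling), which is what enables the lazy start on Fargate.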
  20. Let's walk through a more detailed step-by-step process with code for a more comprehensive API Gateway using YARP in ASP.NET Core. We'll consider a simplified scenario with two microservices: UserService and ProductService. The API Gateway will route requests to these services based on the path. Create two separate ASP.NET Core Web API projects for UserService and ProductService. Use the following commands... View the full article
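The path-based routing described above is usually expressed in YARP's appsettings.json. As a hedged illustration of the shape of that configuration (the route IDs, cluster names, and localhost ports are assumptions, not the article's values):

```json
{
  "ReverseProxy": {
    "Routes": {
      "users-route": {
        "ClusterId": "users-cluster",
        "Match": { "Path": "/users/{**catch-all}" }
      },
      "products-route": {
        "ClusterId": "products-cluster",
        "Match": { "Path": "/products/{**catch-all}" }
      }
    },
    "Clusters": {
      "users-cluster": {
        "Destinations": {
          "destination1": { "Address": "https://localhost:5001/" }
        }
      },
      "products-cluster": {
        "Destinations": {
          "destination1": { "Address": "https://localhost:5002/" }
        }
      }
    }
  }
}
```

With this in place, the gateway project only needs YARP registered in Program.cs (`AddReverseProxy().LoadFromConfig(...)` plus `MapReverseProxy()`), and requests under /users and /products are forwarded to the respective services.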
  21. We are excited to announce the availability of improved AWS Well-Architected Framework guidance. In this update, we have made changes across all six pillars of the framework: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability. In this release, we have made the implementation guidance for the new and updated best practices more prescriptive, including enhanced recommendations and steps on reusable architecture patterns targeting specific business outcomes in the Amazon Web Services (AWS) Cloud... View the full article
  22. AWS is pleased to announce an update to the AWS Well-Architected Framework, which will provide customers and partners with more prescriptive guidance on building and operating in the cloud, and enable them to stay up-to-date on the latest architectural best practices in a constantly evolving technological landscape. View the full article
  23. Introducing AWS Well-Architected Review Templates, designed to eliminate duplication and foster consistency across your workloads. With the Well-Architected Tool's latest feature, you can effortlessly craft review templates to answer questions, update notes, and even incorporate Custom Lenses. View the full article
  24. Amazon EventBridge rules now support wildcard filters, which enable you to match any character or sequence of characters within a string in your event payload. For example, you can use wildcards to match values that end in a specific file type in a directory, such as “dir/*.png”, or that contain a specific word, such as “*AcmeCorp*”. Support for wildcards allows you to more precisely specify the types of events you want to consume from an EventBridge Event Bus, opening up new use cases and helping to optimize your event consumers. View the full article
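The two example patterns above behave like glob-style string matching, where "*" stands for any sequence of characters. A rough local model of that semantics in Python, useful for sanity-checking a pattern before putting it in a rule (this is an approximation for illustration, not the EventBridge matching engine; EventBridge wildcard filters support only "*", not the "?" and "[...]" forms fnmatch also understands):

```python
from fnmatch import fnmatchcase

def matches_wildcard(value: str, pattern: str) -> bool:
    """Approximate EventBridge wildcard matching: '*' matches any
    sequence of characters, comparison is case-sensitive."""
    return fnmatchcase(value, pattern)

# In a rule, the pattern would appear inside the event pattern JSON,
# e.g. {"detail": {"object-key": [{"wildcard": "dir/*.png"}]}}.
print(matches_wildcard("dir/photo.png", "dir/*.png"))          # True
print(matches_wildcard("dir/photo.jpg", "dir/*.png"))          # False
print(matches_wildcard("orders-AcmeCorp-2023", "*AcmeCorp*"))  # True
```

Testing candidate patterns locally like this is cheaper than iterating on a deployed rule, though the rule's behavior on real events should still be verified.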
  25. OpenSearch Service 2.9 now comes with OpenSearch Service Integrations, which let customers take advantage of new schema standards such as OpenTelemetry and build dashboards based on an agreed-upon schema between your ingestion pipeline and OpenSearch Service. View the full article