Showing results for tags 'aws lambda'.

Found 16 results

  1. AWS Community Days conferences are in full swing with AWS communities around the globe. The AWS Community Day Poland was hosted last week with more than 600 cloud enthusiasts in attendance. Community speakers Agnieszka Biernacka, Krzysztof Kąkol, and others presented talks which captivated the audience and resulted in vibrant discussions throughout the day. My teammate, Wojtek Gawroński, was at the event and he’s already looking forward to attending again next year!

Last week’s launches
Here are some launches that got my attention during the previous week.

Amazon CloudFront now supports Origin Access Control (OAC) for Lambda function URL origins – Now you can protect your AWS Lambda URL origins by using Amazon CloudFront Origin Access Control (OAC) to only allow access from designated CloudFront distributions. The CloudFront Developer Guide has more details on how to get started using CloudFront OAC to authenticate access to Lambda function URLs from your designated CloudFront distributions.

AWS Client VPN and AWS Verified Access migration and interoperability patterns – If you’re using AWS Client VPN or a similar third-party VPN-based solution to provide secure access to your applications today, you’ll be pleased to know that you can now combine the use of AWS Client VPN and AWS Verified Access for your new or existing applications.

These two announcements related to Knowledge Bases for Amazon Bedrock caught my eye:

Metadata filtering to improve retrieval accuracy – With metadata filtering, you can retrieve not only semantically relevant chunks but a well-defined subset of those relevant chunks based on applied metadata filters and associated values.

Custom prompts for the RetrieveAndGenerate API and configuration of the maximum number of retrieved results – These are two new features which you can now choose as query options alongside the search type to give you control over the search results.
These are retrieved from the vector store and passed to the Foundation Models for generating the answer. For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
AWS open source news and updates – My colleague Ricardo writes this weekly open source newsletter in which he highlights new open source projects, tools, and demos from the AWS Community.

Upcoming AWS events
AWS Summits – These are free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Whether you’re in the Americas, Asia Pacific & Japan, or EMEA region, learn here about future AWS Summit events happening in your area.

AWS Community Days – Join an AWS Community Day event just like the one I mentioned at the beginning of this post to participate in technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from your area. If you’re in Kenya or Nepal, there’s an event happening in your area this coming weekend. You can browse all upcoming in-person and virtual events here.

That’s all for this week. Check back next Monday for another Weekly Roundup!

– Veliswa

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS. View the full article
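The two Knowledge Bases query options above can be combined in a single RetrieveAndGenerate request. The following is a hedged sketch of how the request body might be assembled for the bedrock-agent-runtime client; the knowledge base ID, model ARN, and the metadata key/value used in the filter are illustrative, not taken from the announcement:

```python
# Sketch: a RetrieveAndGenerate request combining the two new query options --
# a metadata filter plus a cap on the number of retrieved results.
# kb_id, model_arn, and the "department" metadata key are placeholder values.

def build_retrieve_and_generate_request(kb_id, model_arn, question):
    """Build a request body for bedrock-agent-runtime RetrieveAndGenerate."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {
                        # New option: maximum number of retrieved chunks.
                        "numberOfResults": 5,
                        # New option: only retrieve chunks whose metadata matches.
                        "filter": {
                            "equals": {"key": "department", "value": "finance"}
                        },
                    }
                },
            },
        },
    }

request = build_retrieve_and_generate_request(
    "KB123456", "arn:aws:bedrock:us-east-1::foundation-model/example", "What was Q4 revenue?"
)
# bedrock_agent_runtime.retrieve_and_generate(**request)  # would issue the call
```

The filter narrows retrieval to semantically relevant chunks whose metadata also matches, before the chunks are passed to the foundation model.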
  2. Starting today, customers can protect their AWS Lambda URL origins by using CloudFront Origin Access Control (OAC) to only allow access from designated CloudFront distributions. View the full article
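To get a feel for the feature, here is a minimal sketch of the origin access control configuration you might pass to CloudFront's CreateOriginAccessControl API (for example via boto3's `create_origin_access_control`); the OAC name is made up:

```python
# Sketch: an OAC configuration for a Lambda function URL origin, so that
# CloudFront signs origin requests and the function URL can require AWS_IAM
# auth, admitting only the designated distribution. Name is a placeholder.

def lambda_url_oac_config(name):
    """Return an OriginAccessControl config for a Lambda function URL origin."""
    return {
        "OriginAccessControlConfig": {
            "Name": name,
            "SigningProtocol": "sigv4",   # CloudFront signs requests with SigV4
            "SigningBehavior": "always",  # sign every request to the origin
            "OriginAccessControlOriginType": "lambda",
        }
    }

config = lambda_url_oac_config("my-lambda-url-oac")
# cloudfront.create_origin_access_control(**config)  # would create the OAC
```

After attaching the OAC to the distribution's origin, the Lambda function URL's auth type would be set to AWS_IAM so unsigned direct requests are rejected.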
  3. AWS AppSync is a fully managed service that enables developers to build digital experiences based on multiple data sources. With AppSync, you create GraphQL APIs that your applications interact with over the internet (public APIs) or inside your VPC (private APIs). A method of authorization is always required to access your AppSync API. Developers can choose from several authorization modes to authorize their requests based on their business requirements, including calling an AWS Lambda function to implement custom authorization. View the full article
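A Lambda authorizer for AppSync receives the caller's authorization token and returns an `isAuthorized` decision, optionally with a `resolverContext` that resolvers can read. The following is a minimal sketch; the token check is a placeholder for whatever validation scheme you actually use:

```python
# Sketch: a minimal AppSync Lambda authorizer. AppSync invokes the function
# with event["authorizationToken"]; the function returns an authorization
# decision. The hard-coded token set below is purely illustrative.

VALID_TOKENS = {"custom-token-123"}  # placeholder; validate for real in production

def handler(event, context):
    token = event.get("authorizationToken", "")
    authorized = token in VALID_TOKENS
    return {
        "isAuthorized": authorized,
        # Values exposed to resolvers via $ctx.identity.resolverContext
        "resolverContext": {"source": "lambda-authorizer"} if authorized else {},
        "ttlOverride": 300,  # cache this decision for 300 seconds
    }
```

AppSync caches the decision for `ttlOverride` seconds, so the function is not invoked on every request for the same token.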
  4. AWS Lambda now supports Amazon Managed Streaming for Apache Kafka (MSK) and self-managed Apache Kafka as event sources in the Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich) Regions, enabling customers to build serverless applications that process streaming data from Kafka event sources. View the full article
  5. AWS Lambda now supports creating serverless applications using Ruby 3.3. Developers can use Ruby 3.3 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available. View the full article
  6. A multi-account architecture on AWS is essential for enhancing security, compliance, and resource management by isolating workloads, enabling granular cost allocation, and facilitating collaboration across distinct environments. It also mitigates risks, improves scalability, and allows for advanced networking configurations. In a streaming architecture, you may have event producers, stream storage, and event consumers in a single account or spread across different accounts depending on your business and IT requirements. For example, your company may want to centralize its clickstream data or log data from multiple different producers across different accounts. Data consumers from marketing, product engineering, or analytics require access to the same streaming data across accounts, which requires the ability to deliver a multi-account streaming architecture. To build a multi-account streaming architecture, you can use Amazon Kinesis Data Streams as the stream storage and AWS Lambda as the event consumer. Amazon Kinesis Data Streams enables real-time processing of streaming data at scale. When integrated with Lambda, it allows for serverless data processing, enabling you to analyze and react to data streams in real time without managing infrastructure. This integration supports various use cases, including real-time analytics, log processing, Internet of Things (IoT) data ingestion, and more, making it valuable for businesses requiring timely insights from their streaming data. In this post, we demonstrate how you can process data ingested into a stream in one account with a Lambda function in another account. The recent launch of Kinesis Data Streams support for resource-based policies enables invoking a Lambda from another account. With a resource-based policy, you can specify AWS accounts, AWS Identity and Access Management (IAM) users, or IAM roles and the exact Kinesis Data Streams actions for which you want to grant access. 
After access is granted, you can configure a Lambda function in another account to start processing the data stream belonging to your account. This reduces cost and simplifies the data processing pipeline, because you no longer have to copy streaming data using Lambda functions in both accounts. Sharing access to your data streams or registered consumers does not incur additional charges to your account. Cross-account usage of Kinesis Data Streams resources will continue to be billed to the resource owners. In this post, we use Kinesis Data Streams with enhanced fan-out feature, empowering consumers with dedicated read throughput tailored to their applications. By default, Kinesis Data Streams offers shared read throughput of 2 MB/sec per shard across consumers, but with enhanced fan-out, each consumer can enjoy dedicated throughput of 2 MB/sec per shard. This flexibility allows you to seamlessly adapt Kinesis Data Streams to your specific requirements, choosing between enhanced fan-out for dedicated throughput or shared throughput according to your needs. Solution overview For our solution, we deploy Kinesis Data Streams in Account 1 and Lambda as the consumer in Account 2 to receive data from the data stream. The following diagram illustrates the high-level architecture. 
The setup requires the following key elements:

A Kinesis data stream in Account 1 and a Lambda function in Account 2
Kinesis Data Streams resource policies in Account 1, allowing a cross-account Lambda execution role to perform operations on the Kinesis data stream
A Lambda execution role in Account 2 and an enhanced fan-out consumer resource policy in Account 1, allowing the cross-account Lambda execution role to perform operations on the Kinesis data stream

For the setup, you use three AWS CloudFormation templates to create the key resources:

CloudFormation template 1 creates the following key resources in Account 1: a Kinesis data stream and a Kinesis data stream enhanced fan-out consumer.
CloudFormation template 2 creates the following key resources in Account 2: the consumer Lambda function and the consumer Lambda function execution role.
CloudFormation template 3 creates the following resource in Account 2: the consumer Lambda function event source mapping.

The solution supports single-Region deployment, and the CloudFormation templates must be deployed in the same Region across different AWS accounts. In this solution, we use Kinesis Data Streams enhanced fan-out, which is a best practice for deploying architectures requiring large throughput across multiple consumers. Complete the steps in the following sections to deploy this solution.

Prerequisites
You should have two AWS accounts and the required permissions to run a CloudFormation template to create the services mentioned in the solution architecture. You also need the AWS Command Line Interface (AWS CLI) installed, version 2.15 or above.

Launch CloudFormation template 1
Complete the following steps to launch the first CloudFormation template:

Sign in to the AWS Management Console as Account 1 and select the appropriate AWS Region.
Download and launch CloudFormation template 1 where you want to deploy your Kinesis data stream.
For LambdaConsumerAccountId, enter your Lambda consumer account ID and click Submit.
The CloudFormation template deployment will take a few minutes to complete. When the stack is complete, on the AWS CloudFormation console, navigate to the stack Outputs tab and copy the values of the following parameters:

KinesisStreamArn
KinesisStreamEFOConsumerArn
KMSKeyArn

You will need these values in later steps.

Launch CloudFormation template 2
Complete the following steps to launch the second CloudFormation template:

Sign in to the console as Account 2 and select the appropriate Region.
Download and launch CloudFormation template 2 where you want to host the Lambda consumer.
Provide the input parameters captured from the previous step: KinesisStreamArn, KinesisStreamEFOConsumerArn, and KMSKeyArn.

The CloudFormation template creates the following key resources: the Lambda consumer and the Lambda execution role. The Lambda function’s execution role is an IAM role that grants the function permission to access AWS services and resources. Here, you create a Lambda execution role that has the required Kinesis Data Streams and Lambda invocation permissions. The CloudFormation template deployment will take a few minutes to complete. When the stack is complete, on the AWS CloudFormation console, navigate to the stack Outputs tab and copy the values of the following parameters:

KinesisStreamCreateResourcePolicyCommand
KinesisStreamEFOConsumerCreateResourcePolicyCommand

Run the following AWS CLI commands in Account 1 using AWS CloudShell. We recommend CloudShell because it has the latest version of the AWS CLI, which avoids version-related failures.

KinesisStreamCreateResourcePolicyCommand – This creates the resource policy in Account 1 for the Kinesis data stream.
The following is a sample resource policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StreamEFOReadStatementID",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<AWS Lambda - Consumer account id>:role/kds-cross-account-stream-consumer-lambda-execution-role"
        ]
      },
      "Action": [
        "kinesis:DescribeStreamSummary",
        "kinesis:ListShards",
        "kinesis:DescribeStream",
        "kinesis:GetRecords",
        "kinesis:GetShardIterator"
      ],
      "Resource": "arn:aws:kinesis:<region id>:<Account 1 - Amazon KDS account id>:stream/kds-cross-account-stream"
    }
  ]
}

KinesisStreamEFOConsumerCreateResourcePolicyCommand – This creates the resource policy for the enhanced fan-out consumer for the Kinesis data stream in Account 1. The following is a sample resource policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ConsumerEFOReadStatementID",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<AWS Lambda - Consumer account id>:role/kds-cross-account-stream-consumer-lambda-execution-role"
        ]
      },
      "Action": [
        "kinesis:DescribeStreamConsumer",
        "kinesis:SubscribeToShard"
      ],
      "Resource": "arn:aws:kinesis:<region id>:<Account 1 - Amazon KDS account id>:stream/kds-cross-account-stream/consumer/kds-cross-account-stream-efo-consumer:1706616477"
    }
  ]
}

You can also access this policy on the Kinesis Data Streams console, under Enhanced fan-out, Consumer name, and Consumer sharing resource-based policy.

Launch CloudFormation template 3
Now that you have created resource policies in Account 1 for the Kinesis data stream and its enhanced fan-out consumer, you can create the Lambda event source mapping for the consumer Lambda function in Account 2. Complete the following steps:

Sign in to the console as Account 2 and select the appropriate Region.
Download and launch CloudFormation template 3 to update the stack you created using CloudFormation template 2.

The CloudFormation template creates the Lambda event source mapping.

Validate the solution
At this point, the deployment is complete.
A Kinesis data stream is available to consume the messages and a Lambda function receives these messages in the destination account. To send sample messages to the data stream in Account 1, run the following AWS CLI command using CloudShell:

aws kinesis put-record --stream-name kds-cross-account-stream --data sampledatarecord --partition-key samplepartitionkey3 --region <region id>

The Lambda function in Account 2 receives the messages, and you can verify this using Amazon CloudWatch Logs:

On the CloudWatch console, choose Log groups in the navigation pane.
Locate the log group /aws/lambda/kds-cross-account-stream-efo-consumer.
Choose Search log group to view the relevant log messages.

The following is an example message (truncated):

"Records": [
  {
    "kinesis": {
      "kinesisSchemaVersion": "1.0",
      "partitionKey": "samplepartitionkey3",
      "sequenceNumber": "49648798411111169765201534322676841348246990356337393698",
      "data": "sampledatarecord",
      "approximateArrivalTimestamp": 1706623274.658
    },

Clean up
It’s always a good practice to clean up all the resources you created as part of this post to avoid any additional cost. To clean up your resources, delete the respective CloudFormation stacks from Accounts 1 and 2, and stop the producer from pushing events to the Kinesis data stream. This makes sure that you are not charged unnecessarily.

Summary
In this post, we demonstrated how to configure a cross-account Lambda integration with Kinesis Data Streams using AWS resource-based policies. This enables processing of data ingested into a stream within one AWS account through a Lambda function located in another account. To support customers who use a Kinesis data stream in their central account and have multiple consumers reading data from it, we used the Kinesis Data Streams enhanced fan-out feature. To get started, open the Kinesis Data Streams console or use the new PutResourcePolicy API to attach a resource policy to your data stream or consumer.
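As a recap of what CloudFormation template 3 does in Account 2, the cross-account event source mapping can be sketched as a boto3 call. For an enhanced fan-out consumer, the EventSourceArn is the consumer ARN rather than the stream ARN; the ARNs and function name below are placeholders modeled on the walkthrough, and the BatchSize is an assumed value:

```python
# Sketch: parameters for Lambda's CreateEventSourceMapping call, as performed
# by template 3 in Account 2. ARNs and names below are placeholders.

def esm_params(consumer_arn, function_name):
    """Build CreateEventSourceMapping params for a Kinesis EFO consumer."""
    return {
        # For enhanced fan-out, point at the stream *consumer* ARN.
        "EventSourceArn": consumer_arn,
        "FunctionName": function_name,
        "StartingPosition": "LATEST",
        "BatchSize": 100,  # assumed; tune for your workload
    }

params = esm_params(
    "arn:aws:kinesis:us-east-1:111111111111:stream/kds-cross-account-stream"
    "/consumer/kds-cross-account-stream-efo-consumer:1706616477",
    "kds-cross-account-stream-efo-consumer",
)
# lambda_client.create_event_source_mapping(**params)  # run in Account 2
```

This call succeeds only after the resource policies from templates 1 and 2 grant the Account 2 execution role the stream and consumer permissions shown above.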
About the authors
Pratik Patel is a Sr. Technical Account Manager and streaming analytics specialist. He works with AWS customers and provides ongoing support and technical guidance to help plan and build solutions using best practices and proactively keep customers’ AWS environments operationally healthy. Amar is a Senior Solutions Architect at AWS in the UK. He works across power, utilities, manufacturing, and automotive customers on strategic implementations, specializing in using AWS streaming and advanced data analytics solutions to drive optimal business outcomes. View the full article
  7. AWS CodeBuild now supports using container images stored in Amazon ECR repository for projects configured to run on Lambda compute. Previously, you had to use one of the managed container images provided by AWS CodeBuild. AWS managed container images include support for AWS CLI, AWS SAM CLI, and various programming language runtimes. View the full article
  8. Connecting AWS Lambda to an AWS RDS instance allows you to build serverless applications that can interact with relational databases, thereby enabling you to manage database operations without provisioning or managing servers. This comprehensive guide walks you through the process of setting up AWS Lambda to connect to an RDS instance and write data to tables, step-by-step. Prerequisites Before we dive into the steps, ensure you have the following prerequisites covered: View the full article
  9. AWS Lambda now improves the responsiveness for configuring Event Source Mappings (ESMs) and Amazon EventBridge Pipes with event sources such as self-managed Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), Amazon DocumentDB, and Amazon MQ. This enhancement allows changes—such as updating, disabling, or deleting ESMs or Pipes—to take effect within 90 seconds, an improvement from the previous time frame of up to 15 minutes. View the full article
  10. AWS CodeBuild customers can now use AWS Lambda to build and test their software packages. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages. View the full article
  11. AWS Secrets Manager serves as a centralized and user-friendly solution for effectively handling access to all your secrets within the AWS cloud environment. It simplifies the process of rotating, maintaining, and recovering essential items such as database credentials and API keys throughout their lifecycle. A solid grasp of the AWS Secrets Manager concept is a valuable asset on the path to becoming an AWS Certified Developer. In this blog, you are going to see how to retrieve the secrets that exist in AWS Secrets Manager with the help of AWS Lambda in virtual lab settings. Let’s dive in!

What is Secrets Manager in AWS?
AWS Secrets Manager is a tool that assists in safeguarding confidential information required to access your applications, services, and IT assets. This service makes it simple to regularly change, oversee, and access things like database credentials and API keys securely. With AWS Secrets Manager, users and applications can retrieve these secrets using specific APIs, eliminating the necessity of storing sensitive data in plain text within the code. This enhances security and simplifies the management of secret information.

AWS Secrets Manager Pricing
AWS Secrets Manager operates on a pay-as-you-go basis, where your costs are determined by the number of secrets you store and the API calls you make. The service is transparent, with no hidden fees or requirements for long-term commitments. Additionally, there is a 30-day AWS Secrets Manager free tier period, which begins when you store your initial secret, allowing you to explore AWS Secrets Manager without any charges. Once the free trial period ends, you will be billed at a rate of $0.40 per secret each month, and $0.05 for every 10,000 API calls.

AWS Secrets Manager vs. Parameter Store

What are AWS Lambda functions?
AWS Lambda is a service for creating applications that eliminates the need to manually set up or oversee servers.
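The pay-as-you-go pricing above is easy to estimate. A small worked example, using the published rates of $0.40 per secret per month and $0.05 per 10,000 API calls (after the 30-day free tier):

```python
# Worked example of Secrets Manager monthly cost:
# $0.40 per secret per month + $0.05 per 10,000 API calls.

def monthly_cost(num_secrets, api_calls):
    """Estimated monthly Secrets Manager bill in USD (post free tier)."""
    return num_secrets * 0.40 + (api_calls / 10_000) * 0.05

# e.g., 10 secrets with 1 million API calls in a month:
# 10 * $0.40 + 100 * $0.05 = $9.00
cost = monthly_cost(10, 1_000_000)
```

Note this covers only Secrets Manager itself; KMS charges for a customer managed key, if you use one, are billed separately.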
AWS Lambda functions frequently require access to sensitive information like certificates, API keys, or database passwords. It’s crucial to keep these secrets separate from the function code to prevent exposing them in the source code of your application. By using an external secrets manager, you can enhance security and avoid unintentional exposure. Secrets managers offer benefits like access control, auditing, and the ability to manage secret rotation. It’s essential not to store secrets in Lambda configuration environment variables, as these can be seen by anyone with access to view the function’s configuration settings.

Architecture diagram for retrieving secrets in AWS Secrets Manager with AWS Lambda
When Lambda invokes your function for the first time, it creates a runtime environment. First, it runs the function’s initialization code, which includes everything outside of the main handler. After that, Lambda executes the function’s handler code, which receives the event payload and processes your application’s logic. For subsequent invocations, Lambda can reuse the same runtime environment.

To access secrets, you have a couple of options. One way is to retrieve the secret during each function invocation from within your handler code. This ensures you always have the most up-to-date secret, but it can lead to longer execution times and higher costs, as you’re making a call to Secrets Manager every time. There may also be additional costs associated with retrieving secrets from Secrets Manager. Another approach is to retrieve the secret during the function’s initialization process. This means you fetch the secret once when the runtime environment is set up, and then you can reuse that secret during subsequent invocations, improving cost efficiency and performance. The Serverless Land pattern example demonstrates how to retrieve a secret during the initialization phase using Node.js and top-level await.
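The init-phase pattern looks roughly like this in Python (the Serverless Land example uses Node.js; this is an equivalent sketch). The Secrets Manager client is passed in as a parameter so the caching logic is testable; in a real function you would create a boto3 client at module scope, and the secret name is illustrative:

```python
# Sketch: fetch a secret once and reuse it across warm invocations.
# The client is injected for testability; in a real Lambda you would do
# `client = boto3.client("secretsmanager")` at module scope (init phase).

_cache = {}

def get_secret(secret_id, client):
    """Return the secret string, calling Secrets Manager only on first use."""
    if secret_id not in _cache:
        resp = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = resp["SecretString"]
    return _cache[secret_id]

# In the handler, warm invocations hit the cache instead of the API:
# def handler(event, context):
#     password = get_secret("prod/db-password", client)
#     ...
```

Because `_cache` lives outside the handler, it survives for the lifetime of the runtime environment, which is exactly the cost/performance win described above.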
If the secret might change between invocations, make sure your handler can verify the secret’s validity and, if necessary, retrieve the updated secret. Another method to optimize this process is to use Lambda extensions. These extensions can fetch secrets from Secrets Manager, cache them, and automatically refresh the cache based on a specified time interval. The extension retrieves the secret from Secrets Manager before the initialization process and provides it via a local HTTP endpoint. Your function can then get the secret from this local endpoint, which is faster than direct retrieval from Secrets Manager. Moreover, you can share the extension among multiple functions, reducing code duplication. The extension takes care of refreshing the cache at the right interval to ensure that your function always has access to the most recent secret, which enhances reliability.

Guidelines to retrieve secrets stored in AWS Secrets Manager with AWS Lambda
To retrieve the secrets retained in AWS Secrets Manager with the help of AWS Lambda, you can follow these guided instructions: First, access the Whizlabs Labs library. Click on guided labs on the left side of the lab’s homepage and enter the lab name in the search lab tab. Once you have found the guided lab for the topic, click on it to see the lab overview section. Upon reviewing the lab instructions, you may initiate the lab by selecting the “Start Lab” option located on the right side of the screen. Tasks involved in this guided lab are as follows:

Task 1: Sign in to the AWS Management Console
Start by accessing the AWS Management Console and set the region to N. Virginia. You need to ensure that you do not edit or remove the 12-digit Account ID in the AWS Console. Copy your username and password from the Lab Console, then paste them into the IAM Username and Password fields in the AWS Console. Afterward, click the ‘Sign in’ button.
Task 2: Create a Lambda Function
Navigate to the Lambda service. Create a new Lambda function named “WhizFunction” with the runtime set to Python 3.8. Configure the function’s execution role and use the existing role named “Lambda_Secret_Access.” Adjust the function’s timeout to 2 minutes.

Task 3: Write a Lambda to Hard-code Access Keys
Develop a Lambda function that creates a DynamoDB table and inserts items. This code will include hard-coded access keys. Download the code provided in the lab document. Replace the existing code in the Lambda function “WhizFunction” with the code from “Code1” in the downloaded zip file. Make sure to change the AWS Access Key and AWS Secret Access Key as instructed in the lab document. Deploy the code and configure a test event named “WhizEvent.” Click the Save button, then the Test button to execute the code. The DynamoDB table is created with some data fields.

Task 4: View the DynamoDB Table in the Console
Access the DynamoDB service by searching for it in the top left corner. In the “Tables” section, you will find a table named “Whizlabs_stud_table1.” You can view the items within the table by selecting the table and clicking “Explore table items.”

Task 5: Write a Lambda Code to Return Table Data
Modify the Lambda function “WhizFunction” to write code that retrieves data from the DynamoDB table. Replace the existing code with the code from “Code2” in the lab document, making the necessary AWS Access Key and AWS Secret Access Key changes. Deploy the code and execute a test to enable the Lambda function to return data from the table.

Task 6: Create a Secret in Secrets Manager to Store Access Keys
Access AWS Secrets Manager and make sure you are in the N. Virginia Region.
Create a new secret by specifying it as “Other Type of Secret.” Enter the Access Key and Secret Access Key as key-value pairs. Choose the default encryption key. Name the secret “whizsecret” and proceed with the default settings. Review and store the secret, and copy the Secret ARN for later use.

Task 7: Write a Lambda to Create DynamoDB Items Using Secrets Manager
Modify the Lambda function to create a new DynamoDB table and insert items by retrieving access keys from Secrets Manager. Replace the code with the code from “Code3” in the lab document, updating the Secret ARN. Deploy the code and run a test to create the DynamoDB table and items securely.

Task 8: View the DynamoDB Table in the Console
Access the DynamoDB service. In the “Tables” section, you will find a table named “Whizlabs_stud_table2.” To view the items, select the table and click “Explore table items.”

Task 9: Write a Lambda Code to View Table Items Using Secrets Manager
Modify the Lambda function to write code that fetches table items securely using access and secret keys stored in Secrets Manager. Replace the code with the code from “Code4” in the lab document, updating the Secret ARN. Deploy the code and execute a test to securely access and view table items.

Task 10: Clean Up AWS Resources
Finally, delete the Lambda function “WhizFunction.” Delete both DynamoDB tables created. Delete the secret “whizsecret” from AWS Secrets Manager, scheduling its deletion with a waiting period of 7 days to ensure cleanup. Finally, end the lab by signing out from the AWS Management Console.

Also Read: Free AWS Developer Associate Exam Questions

FAQs
How much does the AWS Secrets Manager parameter store cost?
Parameter Store doesn’t incur any extra costs. However, there is a maximum limit of 10,000 parameters that you can store.

What can be stored in AWS Secrets Manager?
AWS Secrets Manager serves as a versatile solution for storing and managing a variety of sensitive information.
This includes but is not limited to database credentials, application credentials, OAuth tokens, API keys, and various other secrets essential for different aspects of your operations. It’s important to note that several AWS services seamlessly integrate with Secrets Manager to securely handle and utilize these confidential data points throughout their entire lifecycle.

What is the length limit for AWS Secrets Manager?
In the Secrets Manager console, data is stored in the form of a JSON structure, consisting of key/value pairs that can be easily parsed by a Lambda rotation function. Secret values can range from 1 character to 65,536 characters. Also, it’s important to note that the tag key names in Secrets Manager are case-sensitive.

What are the benefits of AWS Secrets Manager?
Secrets Manager provides a secure way to save and oversee your credentials. It makes the process of modifying or rotating your credentials easy, without requiring any complex code or configuration adjustments. Instead of embedding credentials directly in your code or configuration files, you can opt to store them safely using Secrets Manager.

What is the best practice for AWS Secrets Manager?
You can adhere to the following AWS Secrets Manager best practices to store secrets more securely: Make sure that the AWS Secrets Manager service applies encryption for data at rest by using Key Management Service (KMS) Customer Master Keys (CMKs). Ensure that automatic rotation is turned on for your Secrets Manager secrets. Also, confirm that the rotation schedule for Secrets Manager is set up correctly.

Conclusion
Hope this blog equips you with the knowledge and skills to effectively manage secrets within AWS, ensuring the protection of your critical data. Following the above AWS Secrets Manager tutorial steps can help you access the sensitive information stored in Secrets Manager securely with the usage of AWS Lambda.
You can also opt for AWS Sandbox to play around with the AWS platform. View the full article
  12. As you might already know, AWS Lambda is a popular and widely used serverless computing platform that allows developers to build and run their applications without having to manage the underlying infrastructure. But have you ever wondered how AWS Lambda Pricing works and how much it would cost to run your serverless application? When it comes to cloud computing, cost is often a major concern. AWS Lambda, Amazon’s serverless computing platform, is no exception. Understanding AWS Lambda Pricing has become increasingly important as the demand for serverless computing continues to rise. View the full article
  13. What is serverless computing? Serverless computing is a cloud computing model that was introduced by AWS in 2014 with its service AWS Lambda. The first serverless services were known as Function-as-a-Service (FaaS), but now there are many variations, such as Container-as-a-Service (CaaS) and Backend-as-a-Service (BaaS). It allows developers to build and run applications without the need for managing and maintaining the underlying infrastructure. View the full article
  14. Amazon GuardDuty expands threat detection coverage to continuously monitor network activity logs, starting with VPC Flow Logs, generated from the execution of AWS Lambda functions to detect threats to Lambda such as functions maliciously repurposed for unauthorized cryptocurrency mining, or compromised Lambda functions that are communicating with known threat actor servers. GuardDuty Lambda Protection can be enabled with a few steps in the GuardDuty console, and using AWS Organizations, can be centrally enabled for all existing and new accounts in an organization. View the full article
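For accounts managed individually rather than through Organizations, Lambda Protection can also be toggled per detector via the GuardDuty API. The following is a hedged sketch of the UpdateDetector parameters; the feature name LAMBDA_NETWORK_LOGS and the detector ID are assumptions to verify against the GuardDuty API reference:

```python
# Sketch: parameters for GuardDuty's UpdateDetector call to enable Lambda
# Protection (Lambda network activity monitoring). The feature name
# "LAMBDA_NETWORK_LOGS" is an assumption; the detector ID is a placeholder.

def enable_lambda_protection_params(detector_id):
    """Build UpdateDetector params that turn on Lambda Protection."""
    return {
        "DetectorId": detector_id,
        "Features": [
            {"Name": "LAMBDA_NETWORK_LOGS", "Status": "ENABLED"},
        ],
    }

params = enable_lambda_protection_params("12abc34d567e8fa901bc2d34e56789f0")
# guardduty.update_detector(**params)  # would enable the feature
```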
  15. IAM helps customers analyze access and achieve least privilege. When you are working on new permissions for your teams, you can use IAM Access Analyzer policy generation to create a policy based on your access activity and set fine-grained permissions. To analyze and refine existing permissions, you can use last accessed information to identify unused actions in your IAM policies and reduce access. When we launched action last accessed in 2020, we started with S3 management actions to help you restrict access to your critical business data. Now, IAM is increasing visibility into access history by extending last accessed information to Amazon EC2, AWS IAM, and AWS Lambda actions. This makes it easier for you to analyze access and reduce EC2, IAM, and Lambda permissions by providing the latest timestamp when an IAM user or role used an action. Using last accessed information, you can identify unused actions in your IAM policies and tighten permissions confidently. View the full article
  16. AWS Lambda now supports AWS PrivateLink. With this feature you can manage and invoke Lambda functions from your Virtual Private Cloud (VPC) without exposing your traffic to the public internet. PrivateLink provides private connectivity between your VPCs and AWS services, like Lambda, on the private AWS network. View the full article
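To use PrivateLink with Lambda, you create an interface VPC endpoint for the Lambda service in your VPC. The following is a sketch of the parameters you might pass to EC2's CreateVpcEndpoint API; the region, VPC, subnet, and security group IDs are placeholders:

```python
# Sketch: parameters for creating an interface VPC endpoint for Lambda, so
# that Invoke and management traffic stays on the private AWS network.
# All resource IDs below are placeholders.

def lambda_vpc_endpoint_params(region, vpc_id, subnet_ids, sg_ids):
    """Build CreateVpcEndpoint params for the Lambda service endpoint."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.lambda",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        # Resolve the public Lambda endpoint name to the private endpoint.
        "PrivateDnsEnabled": True,
    }

params = lambda_vpc_endpoint_params(
    "us-east-1", "vpc-0abc1234", ["subnet-0abc1234"], ["sg-0abc1234"]
)
# ec2.create_vpc_endpoint(**params)  # would create the endpoint
```

With private DNS enabled, SDK calls to Lambda from inside the VPC route through the endpoint without any client-side configuration changes.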