Showing results for tags 'serverless'.

  1. Corporations deal with massive amounts of data these days. As the amount of data increases, handling the incoming information and generating proper insights becomes necessary. Selecting the right data management services might be baffling since many options are available. Multiple platforms provide services that can assist you in analyzing and querying your data. In this [...] View the full article
  2. As businesses continue to generate massive amounts of data, the need for an efficient and scalable data warehouse becomes paramount. Amazon Redshift has always been at the forefront of providing innovative cloud-based services, and with its latest addition, Amazon Redshift Serverless, the data warehouse industry is being revolutionized. With Amazon Redshift Serverless, AWS has removed [...] View the full article
  3. Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver more relevant, context-specific, and accurate responses. We are excited to announce that Knowledge Bases now supports private network policies for Amazon OpenSearch Serverless (OSS). View the full article
  4. Amazon EMR Serverless is now in scope for FedRAMP Moderate in the US East (Ohio), US East (N. Virginia), US West (N. California), and US West (Oregon) Regions. You can now use EMR Serverless to run your Apache Spark and Hive workloads that are subject to FedRAMP Moderate compliance. View the full article
  5. AWS Secrets Manager launched a new capability that allows customers to create and rotate user credentials for Amazon Redshift Serverless. Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage data warehouse clusters. With this launch, you can now create and set up automatic rotation for your user credentials for Amazon Redshift Serverless data warehouse directly from the AWS Secrets Manager console. View the full article
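The announcement describes the console flow; as a hedged illustration only, roughly the same setup can also be scripted with boto3. The secret name, credentials, and rotation Lambda ARN below are placeholders, not values from the announcement.
    # Hypothetical sketch: storing and rotating Amazon Redshift Serverless user
    # credentials with AWS Secrets Manager via boto3. Names and ARNs are placeholders.
    import json
    import boto3

    sm = boto3.client("secretsmanager", region_name="us-east-1")

    # Create a secret holding the warehouse user credentials.
    secret = sm.create_secret(
        Name="redshift-serverless/analytics-user",          # placeholder name
        SecretString=json.dumps({
            "username": "analytics_user",
            "password": "initial-password-change-me",
        }),
    )

    # Enable automatic rotation; the rotation Lambda ARN below is illustrative only.
    sm.rotate_secret(
        SecretId=secret["ARN"],
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-redshift-serverless",
        RotationRules={"AutomaticallyAfterDays": 30},
    )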
  6. We are excited to announce that AWS Fargate for Windows containers on Amazon ECS has reduced infrastructure pricing by up to 49%. Fargate simplifies the adoption of modern container technology for ECS customers by making it even easier to run their Windows containers on AWS. With Fargate, customers no longer need to set up automatic scaling groups or manage host instances for their application. View the full article
  7. An application is said to be “serverless” when its underlying architecture is fully managed by a cloud provider. This means that developers can focus on... View the full article
  8. What is SQLite? SQLite is a self-contained, serverless, and zero-configuration relational database management system (RDBMS). It is a C library that provides a lightweight, disk-based database that doesn’t require a separate server process and allows direct access to the database using a nonstandard variant of the SQL query language. Key features of SQLite include: Self-contained: SQLite is a single ordinary file on the disk that contains the entire database, making it easy to distribute and deploy. There is no need for a separate server process. Serverless: Unlike traditional RDBMS systems, SQLite doesn’t run as a separate server process. Instead, it is embedded directly into the application that uses it. Zero-Configuration: SQLite doesn’t require any setup or administration. Developers can simply include the SQLite library in their application, and the database is ready to use. Cross-Platform: SQLite is cross-platform and can run on various operating systems, including Windows, Linux, macOS, and mobile platforms like iOS and Android. Transaction Support: SQLite supports ACID (Atomicity, Consistency, Isolation, Durability) transactions, ensuring data integrity even in the face of system failures. What are the top use cases of SQLite? Top use cases of SQLite include: Embedded Systems and IoT Devices: SQLite’s lightweight nature makes it well-suited for embedded systems and IoT devices where resources may be limited. It is commonly used in applications that run on devices with low memory and processing power. Mobile Applications: SQLite is the default database engine for both Android and iOS platforms. Many mobile applications use SQLite for local storage, caching, and managing structured data on the device. Desktop Applications: SQLite is often used in desktop applications, especially those that need a simple, embedded database solution. It’s a good choice for applications that don’t require the complexity of a client-server database system. Small to Medium-Sized Websites: For small to medium-sized websites or web applications with low to moderate traffic, SQLite can serve as a lightweight and easy-to-manage database solution. Prototyping and Development: SQLite is often used during the development and prototyping stages of a project due to its simplicity and ease of use. Developers can quickly set up and work with a SQLite database without the need for complex configurations. Educational Purposes: SQLite is commonly used in educational settings to teach database concepts and SQL, thanks to its simplicity and ease of integration into programming projects. Always keep in mind that while SQLite is a powerful tool for certain use cases, it may not be suitable for large-scale applications with high concurrency and heavy write loads. In such cases, more robust client-server database systems like MySQL, PostgreSQL, or Oracle Database are often preferred. What are the features of SQLite? SQLite is a lightweight and self-contained relational database management system (RDBMS) with several features that make it suitable for specific use cases. Here are some key features of SQLite: Serverless: SQLite operates without a separate server process. The entire database is contained in a single ordinary file on the disk. Zero-Configuration: SQLite requires minimal setup and administration. There is no need to install and configure a database server. Developers can simply include the SQLite library in their application. 
Cross-Platform: SQLite is cross-platform and can work on various operating systems, including Windows, Linux, macOS, and mobile platforms like Android and iOS. Self-Contained: The entire database is stored in a single file, making it easy to distribute and deploy. This simplicity is especially useful for embedded systems and applications with limited resources. ACID Transactions: SQLite supports ACID (Atomicity, Consistency, Isolation, Durability) transactions, ensuring data integrity even in the face of system failures. Small Footprint: SQLite has a small memory footprint and is suitable for devices with limited resources. This makes it a good choice for embedded systems and mobile devices. Single User Access: SQLite is designed for single-user access scenarios. While it supports concurrent read access, it doesn’t handle concurrent write access as efficiently as some larger RDBMS designed for multi-user environments. Full SQL Support: SQLite supports a significant subset of the SQL standard, making it compatible with standard SQL queries and commands. What is the workflow of SQLite? Here’s a simplified workflow of using SQLite in an application: Include SQLite Library: Include the SQLite library in your application. This can be done by adding the SQLite library files or using a package manager, depending on the programming language and platform. Database Connection: Open a connection to the SQLite database. This connection is typically established by creating a database file or connecting to an existing one. Table Creation: Define the structure of your database by creating tables. SQLite supports standard SQL syntax for creating tables with columns, data types, and constraints. Data Manipulation: Perform CRUD operations (Create, Read, Update, Delete) on the data in your tables. Use SQL commands or an Object-Relational Mapping (ORM) framework to interact with the database. Transactions: Encapsulate related database operations within transactions to ensure consistency. Begin a transaction, perform the required operations, and then either commit the transaction to make the changes permanent or roll back to discard the changes. Error Handling: Implement error handling to manage potential issues during database interactions. SQLite provides error codes that can be used to diagnose and handle errors programmatically. Close Connection: Close the connection to the SQLite database when it is no longer needed or when the application exits. Always remember that while SQLite is a powerful and flexible solution, its suitability depends on the specific requirements of the application. It excels in scenarios where simplicity, low resource usage, and ease of deployment are crucial. For larger-scale applications with high concurrency and complex requirements, other RDBMS systems might be more appropriate. How SQLite Works & Architecture? SQLite Works & Architecture Here’s an explanation of how SQLite works and its architecture: Core Features: Serverless: SQLite doesn’t require a separate server process, making it lightweight and embedded directly within applications. Self-contained: The entire database engine is contained within a single library file, simplifying distribution and deployment. Single-file database: An entire SQLite database is stored in a single cross-platform file, ensuring portability and ease of management. Dynamic typing: Data types are not strictly enforced, allowing flexibility in data storage and manipulation. 
Full-featured SQL support: Despite its compact size, SQLite supports most of the SQL standard, enabling complex queries and data manipulation. Architecture: Tokenizer and Parser: Tokenizes SQL statements into syntactic units. Parses the tokens into a parse tree representing the query’s structure. Code Generator: Translates the parse tree into virtual machine instructions for execution. B-tree Pager: Manages low-level disk I/O and database file access. Uses B-tree structures for efficient indexing and data retrieval. Virtual Machine: Executes the generated virtual machine instructions. Interacts with the B-tree pager to access and modify database data. OS Interface: Provides a layer for interacting with the underlying operating system’s file system and memory management. Key Advantages: Zero-configuration: No setup or administration required, making it ideal for embedded systems and mobile apps. Highly portable: SQLite runs on diverse platforms without modifications. Small footprint: Minimal storage requirements and resource consumption. Fast and efficient: Optimized for quick reads and writes, even with large databases. Robust and reliable: Proven track record in a wide range of applications. Common Use Cases: Mobile apps: Storing local app data, user preferences, and offline content. Embedded devices: Handling data management in devices with limited resources. Web browsers: Caching web pages and browsing history. Desktop applications: Saving user settings and preferences. Testing and development: Creating lightweight test databases for application development. How to Install and Configure SQLite? SQLite doesn’t require a separate installation or configuration process in the traditional sense. Here’s how to integrate it into your projects: 1. Obtain the library: Download: Download the precompiled SQLite library file (e.g., sqlite3.dll for Windows, libsqlite3.so for Linux) from the official website. Package manager: If using a programming language with package management (e.g., Python, Java), install the SQLite library using the appropriate command: Python: pip install sqlite3 Java: Add the sqlite-jdbc library to your project’s classpath. 2. Link the library: Development environments: Most development environments have built-in support for linking external libraries. Follow their specific instructions to include the SQLite library in your project. Manual linking: If required, link the library during compilation using appropriate compiler flags (e.g., -lsqlite3 for GCC). 3. Interact with SQLite in your code: APIs: Use the provided API functions for your programming language to interact with SQLite databases: Python: Use the sqlite3 module’s functions. Java: Use the java.sql package for JDBC connections and statements. C/C++: Use the SQLite C API functions. Connection: Establish a connection to a database file (or create a new one if it doesn’t exist). SQL commands: Execute SQL commands for creating tables, inserting data, querying, and modifying data. Important considerations: Version compatibility: Ensure the SQLite library version is compatible with your development environment and programming language version. Cross-platform development: SQLite’s portability makes it easy to use on different platforms without code changes. Command-line interface (CLI): SQLite also comes with a command-line shell for interactive database management and testing. SQLite’s serverless nature means you don’t need to set up or configure a separate database server. 
It’s ready to use within your application as soon as you integrate the library. Fundamental Tutorials of SQLite: Getting started Step by Step Fundamental Tutorials of SQLite To provide the most effective step-by-step tutorials, I’d need some more information: Your preferred programming language: SQLite works with many languages (Python, Java, C++, etc.). Which one are you using? Your experience level: Are you a beginner to databases in general, or do you have some familiarity with SQL concepts? Your learning style: Do you prefer written tutorials, video lessons, interactive exercises, or a combination? Following is a general outline of common steps involved in fundamental SQLite tutorials, which can be adapted to your specific needs: 1. Getting Started: Download and include the SQLite library: Follow the instructions for your chosen language and development environment. Connect to a database: Learn how to establish a connection to an existing SQLite database file or create a new one. Interact with the database: Use the provided API functions to execute SQL commands and interact with the database. 2. Creating Tables: Define table structure: Learn how to use SQL’s CREATE TABLE statement to define the structure of your tables, including columns and data types. Data types: Understand SQLite’s flexible data typing system and common data types like TEXT, INTEGER, REAL, BLOB, etc. 3. Inserting Data: Add data to tables: Use the INSERT INTO statement to insert new records into your tables. Value placeholders: Learn how to use placeholders to safely insert values into SQL statements. 4. Querying Data: Retrieve and filter data: Use SELECT statements to retrieve specific data from tables based on conditions. Filtering conditions: Employ WHERE clauses to filter results based on criteria. Sorting results: Use ORDER BY to arrange results in ascending or descending order. 5. Updating Data: Modify existing records: Use the UPDATE statement to change values in existing records. Target updates: Specify which records to update using WHERE clauses. 6. Deleting Data: Remove records: Use the DELETE FROM statement to remove unwanted records from tables. Exercise caution: Be mindful of data loss when deleting records. 7. Advanced Features (optional): Transactions: Learn how to group multiple SQL operations into transactions to ensure data consistency. Indexes: Improve query performance by creating indexes on frequently searched columns. Foreign keys: Enforce relationships between tables using foreign keys. SQLite command-line shell: Explore interactive database management using the built-in SQLite shell. I’m eager to provide more specific tutorials once I have a better understanding of your preferences. Feel free to share the details, and I’ll guide you through the process effectively! The post What is SQLite and use cases of SQLite? appeared first on DevOpsSchool.com. View the full article
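As a small illustration of the workflow the post describes (connect, create a table, insert with placeholders, query, and use transactions), the following sketch uses Python's built-in sqlite3 module; note that, contrary to the installation note above, sqlite3 ships with the Python standard library and needs no pip install. The table and column names are made up for the example.
    import sqlite3

    # Open (or create) a single-file database; no server process is involved.
    conn = sqlite3.connect("app.db")
    try:
        with conn:  # "with" wraps the statements in a transaction (commit or rollback)
            conn.execute(
                "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
            )
            # Use placeholders to insert values safely.
            conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
                         ("Ada", "ada@example.com"))
        # Query, filter, and sort data.
        for row in conn.execute("SELECT id, name FROM users WHERE name LIKE ? ORDER BY id", ("A%",)):
            print(row)
    finally:
        conn.close()  # close the connection when it is no longer needed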
  9. Customers running applications with more than one container on Amazon Elastic Container Service (ECS) with AWS Fargate can now leverage Seekable OCI (SOCI) to lazily load specific container images within the Amazon ECS task definition. This eliminates the need to generate SOCI indexes for smaller container images within the task definition, while still getting the benefits of SOCI with larger container images, improving the overall application deployment and scale-out time. View the full article
  10. Serverless Architectures on AWS helps to build, secure, and manage the serverless architectures to fuel the most demanding web applications and mobile applications. In this article, we’ll cover what is serverless architecture, how serverless architecture works, serverless architecture use cases, and how to build serverless architecture with the help of AWS Services in hands-on labs. This hands-on labs project is designed for individuals working in the roles of cloud architects and cloud developers. It is also suitable for those who are preparing to obtain the AWS Certified Developer Associate certification... View the full article
  11. PBS is a private, nonprofit corporation, founded in 1969, whose members are America’s public TV stations. They have been an AWS customer for over 10 years using around 100 services. This post is about PBS’s success using Amazon Elastic Container Service (Amazon ECS) and AWS Fargate. It covers their 10-year journey in the cloud and how PBS evolved to use Amazon ECS and AWS Fargate to optimize their resilience, scalability, cost, and application development... View the full article
  12. Introduction: Cluster autoscaler has been the de facto industry standard autoscaling mechanism on kubernetes since the very early versions of the platform. However, with the evolving complexity and number of containerized workloads, our customers running on Amazon Elastic Kubernetes Service (Amazon EKS) started to ask for a more flexible way to allocate compute resources to pods and flexibility in instance size and heterogeneity. We addressed those needs with karpenter, a product that automatically launches just the right compute resources to handle your cluster’s applications. Karpenter is designed to take full advantage of Amazon Elastic Compute Cloud (Amazon EC2). Although serving the same purpose, cluster autoscaler and karpenter take a very different approach to autoscaling. In this post, we won’t focus on the differences of the two solutions, but instead we’ll analyze how those can be used to fulfill a specific requirement — scaling an Amazon EKS cluster to zero nodes. Scaling an Amazon EKS cluster to zero nodes can be useful for a variety of reasons. For example, you might want to scale your cluster down to zero nodes when there is no traffic, or you might want to scale your cluster down to zero nodes when you are performing maintenance. This not only reduces costs, but increases the sustainability of resource utilization. Solution overview Cost considerations of scaling down clusters The cost optimization pillar of the AWS Well-Architected Framework includes a specific section that focuses on the financial advantages of implementing a just-in-time supply strategy. Autoscaling is often the preferred approach for matching supply with demand. Figure 1: Adjusting capacity as needed. Autoscaling in Amazon EKS When it comes to Amazon EKS, we need to think of control plane autoscaling and data plane autoscaling as two separate concerns. When Amazon EKS launched in 2018, the goal was to reduce users’ operational overhead by providing a managed control plane for kubernetes. Initially, this included automated upgrades, patches, and backups, but with fixed capacity. An Amazon EC2-backed data plane (with the exception of AWS Fargate) is not fully managed by AWS. Managed node groups reduce the operational burden by automating the provisioning and lifecycle management of nodes. However, upgrades, patches, backups, and autoscaling are the responsibility of the user. In this post, we’ll cover data plane autoscaling, and more specifically, since there are different ways to run Amazon EKS nodes (using Amazon EC2 instances, AWS Fargate, or AWS Outposts), we’ll focus on Amazon EKS nodes running on Amazon EC2. Before we go any further, let’s take a closer look at how kubernetes traditionally handles autoscaling for pods and nodes. Autoscaling pods In kubernetes, pod autoscaling is tackled via the Horizontal Pod Autoscaler (HPA), which automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more pods. This is different from vertical scaling, which for kubernetes, means assigning more resources (e.g., memory or central process units [CPUs]) to the pods that are already running for the workload. Figure 2: Autoscaling pods with the Horizontal Pod Autoscaler. 
When the load decreases and the number of pods is above the configured minimum, the Horizontal Pod Autoscaler instructs the workload resource (i.e., the deployment, StatefulSet, or other similar resource) to scale back in. However, the Horizontal Pod Autoscaler does not natively support scaling down to 0. There are a few operators, such as Knative or Keda, that allow you to overcome this limitation by intercepting the requests coming to your pods or by checking some specific metrics. However, these are sophisticated mechanisms for achieving serverless behaviour and are beyond the scope of this post on schedule-based scaling to 0. Autoscaling nodes In kubernetes, node autoscaling can be addressed using the cluster autoscaler, which is a tool that automatically adjusts the size of the kubernetes cluster when one of the following conditions is true: there are pods that failed to run in the cluster due to insufficient resources; there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes. Figure 3: Autoscaling nodes with the Cluster Autoscaler. Cluster autoscaler decreases the size of the cluster when some nodes are consistently unneeded for a set amount of time. A node is unnecessary when it has low utilization and all of its important pods can be moved elsewhere. When it comes to Amazon EC2-based nodegroups (assuming their minimum size is set to 0), the cluster autoscaler scales the nodegroup to 0 if there are no pods preventing the scale in operation. Pricing model and cost considerations For each Amazon EKS cluster, you pay a basic hourly rate to cover the managed control plane as well as the cost of running the Amazon EC2-backed data plane and any associated volumes. Hourly Amazon EC2 costs vary depending on the size of the data plane and the underlying instance types. While we would continue to pay the hourly rate for the control plane for the non-production clusters that are used for testing or quality assurance purposes, we may not need the data plane to be available 24 hours a day including weekends. By establishing a schedule-based approach to scale the nodegroups to 0 when unneeded, we can significantly optimize the overall Amazon EC2 compute costs. Cost savings can go beyond bare Amazon EC2 costs. For example, if you use Amazon CloudWatch container insights for monitoring, then you would not be charged when nodes are down, given that the costs associated with metrics ingestion are prorated by the hour. In this post, we’ll show you how you can achieve schedule-based scale to 0 for your data plane with Horizontal Pod Autoscaler (HPA) and cluster autoscaler as well as with karpenter. Current mechanisms to scale to zero using HPA and cluster autoscaler We have seen how kubernetes traditionally handles autoscaling for both pods and nodes. We’ve also seen how the current implementations of Horizontal Pod Autoscaler can’t handle schedule-based scale to 0 scenarios. However, the native capabilities can be supplemented with dedicated Kubernetes CronJobs or community-driven open source solutions like cron-hpa or kube downscaler, which can scale pods to 0 on specific schedules. Additionally, we need to make sure that not only can we scale in to 0, but that we can also scale out from 0. Since kubernetes version 1.24, a new feature has been integrated into the cluster autoscaler, which makes this easier. 
Quoting the official announcement: For Kubernetes 1.24, we have contributed a feature to the upstream Cluster Autoscaler project that simplifies scaling the Amazon EKS managed node group (MNG) to and from zero nodes. Before, you had to tag the underlying EC2 Autoscaling Group (ASG) for the Cluster Autoscaler to recognize the resources, labels, and taints of an MNG that was scaled to zero nodes. Starting with kubernetes version 1.24, when there are no running nodes in the MNG, the cluster autoscaler calls the Amazon EKS DescribeNodegroup API to get the information it needs about MNG resources, labels, and taints. When the value of a cluster autoscaler tag on the ASG powering an Amazon EKS MNG conflicts with the value of the MNG itself, the cluster autoscaler prefers the ASG tag so that customers can override values as necessary. Thanks to this new feature, the cluster autoscaler determines which nodegroup needs to be scaled out from 0 based on the definition of the unschedulable pods, but in order for it to be able to do so, it must be up and running. In other words: we cannot scale all of our nodegroups to 0, as we do need to guarantee a minimal stack of core components to be constantly up and running. Such a stack would include, at the very least, the cluster autoscaler, CoreDNS, and the open-source tool of our choice to cover schedule-based scaling of pods. Ideally, we might also need to accommodate Cluster Proportional Autoscaler (CPA) to address CoreDNS scalability. To be cost efficient, we might decide to create a dedicated nodegroup for the core components, which would be backed by cheap instance types, and separate nodegroups for applicative workloads. Putting it all together: Kube downscaler or cron-hpa apply schedule-based scaling to or from 0 for applicative workloads. Cluster autoscaler notices when nodes are underutilized and can be scaled in (including to 0), or when some pods cannot be scheduled due to insufficient resources and nodes need to scale out (including from 0). Cluster autoscaler interacts with the AWS ASG API (Application Programming Interface) to terminate or provision new nodes. The nodegroup is scaled to or from 0 as expected. Figure 4: Schedule-based scale to 0 using an EC2 backed technical nodegroup for core components. Eventually, this pattern can be further optimized by moving the minimal stack of core components to AWS Fargate. This means that not a single Amazon EC2 instance is running when the data plane is unneeded. The cost implications of hosting the core components in AWS Fargate must be carefully assessed. Keeping the lower-cost Amazon EC2 instance types may result in a less elegant but more cost-effective solution. Figure 5: Schedule-based scale to 0 using a Fargate profile for core components. How it is done with karpenter: With karpenter, we have the concept of a provisioner. Provisioners set constraints on the nodes that can be created by karpenter and the pods that can run on those nodes. With the current version of karpenter (0.28.x), there are three ways to scale down the number of nodes to zero using provisioners: Delete all provisioners. Deleting provisioners causes all nodes to be deleted. This option is the simplest to implement, but it may not be feasible in all situations. For example, if you have multiple tenants sharing the same cluster, you may not want to delete all provisioners, as this would prevent any tenants from running workloads. Scale all workloads to zero. Karpenter then deletes the unused nodes. 
This option is more flexible than deleting all provisioners, but it may not be ideal if your workloads are managed by different teams, and it might be difficult to implement in a GitOps setup. Add a zero CPU limit to provisioners and then delete all nodes. This option is the most flexible, as it allows you to keep your workloads running while still scaling down the number of nodes to zero. To do this, you need to update the spec.limits.cpu field of your provisioners. The first two options previously described may be difficult to implement in multi-tenant configurations or using GitOps frameworks. Therefore, this post focuses on the third option. Walkthrough Technical considerations Programmatically scaling provisioner limits to zero can be done in a number of ways. One common pattern is to use kubernetes CronJobs. For example, the following CronJob scales the provisioner limits to zero every weekday at 10:30 PM:
    ---
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: scale-down-karpenter
    spec:
      schedule: "30 22 * * 1-5"
      jobTemplate:
        spec:
          [...]
          command:
            - /bin/sh
            - -c
            - |
              kubectl patch provisioner test-provisioner --type merge --patch '{"spec": {"limits": {"resources": {"cpu": "0"}}}}' && echo "test-provisioner patched at $(date)";
          [...]
This job runs every weekday night at 10:30 PM and scales the provisioner’s limits to zero, which effectively disables the creation of new nodes until it is manually scaled back up. CronJobs can be used with AWS Lambda to terminate running nodes, or to implement more complex logic such as scaling other infrastructure components, handling errors and notifications, or any event-driven pattern that can be connected to an application or workload. AWS Step Functions can add an additional layer of orchestration to this, allowing you to interact with your cluster using the kubernetes API and run jobs as part of your application’s workflow. More information on how to use the kubernetes API integrations with AWS Step Functions can be found here. This is a simplified example of an AWS Lambda function that can be used to terminate the remaining karpenter nodes:
    def lambda_handler(event, context):
        [...]
        filters = [
            {'Name': 'instance-state-name', 'Values': ['running']},
            {'Name': 'tag:karpenter.sh/provisioner-name', 'Values': ['example123']},
            {'Name': 'tag:aws:eks:cluster-name', 'Values': ['example123']}
        ]
        try:
            instances = ec2.instances.filter(Filters=filters)
            RunningInstances = [instance.id for instance in instances]
        except botocore.exceptions.ClientError as error:
            logging.error("Some error message here")
            raise error
        if len(RunningInstances) > 0:
            for instance_id in RunningInstances:
                logging.info('Found Karpenter node: {}'.format(instance_id))
            try:
                ec2.instances.filter(InstanceIds=RunningInstances).terminate()
            except botocore.exceptions.ClientError as error:
                logging.error("Some error message here")
                raise error
        [...]
Note: these steps can be difficult to orchestrate in a GitOps setup. The general advice is to create specific conditions for provisioner limits. This is purely an example of how this can be done with ArgoCD:
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: karpenter
      namespace: argocd
    spec:
      ignoreDifferences:
        - group: karpenter.sh
          kind: Provisioner
          jsonPointers:
            - /spec/limits/resources/cpu
How to move core components to AWS Fargate for further optimization: Karpenter and cluster autoscaler run a controller inside a pod running on the cluster. This controller needs to be up and running to orchestrate scale operations up or down. 
This means that at least one node should be running on the cluster to host those controllers. However, if you are interested in scale-to-zero scenarios, there is an option that should be taken into consideration: AWS Fargate. AWS Fargate is a serverless compute engine that allows you to run containers without having to manage any underlying infrastructure. This means that you can scale your application up and down as needed, without having to worry about running out of resources. AWS Fargate profiles that run karpenter can be configured via AWS Command Line Interface (AWS CLI), AWS Management Console, CDK (Cloud Development Kit), Terraform, AWS CloudFormation, and eksctl. The following example shows how to configure those profiles with eksctl:
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: <cluster-name>
      region: <aws-region>
    fargateProfiles:
      [...]
      - name: karpenter
        podExecutionRoleARN: arn:aws:iam::12345678910:role/FargatePodExecutionRole
        selectors:
          - labels:
              app.kubernetes.io/name: karpenter
            namespace: karpenter
        subnets:
          - subnet-12345
          - subnet-67890
      - name: karpenter-scaledown
        podExecutionRoleARN: arn:aws:iam::12345678910:role/FargatePodExecutionRole
        selectors:
          - labels:
              job-name: scale-down-karpenter*
            namespace: karpenter
        subnets:
          - subnet-12345
          - subnet-67890
      [...]
Note: By default, CoreDNS is configured to run on Amazon EC2 infrastructure on Amazon EKS clusters. If you want to only run your pods on AWS Fargate in your cluster, then refer to the Getting started with AWS Fargate using Amazon EKS guide. Conclusions In this post, we showed you how to scale your Amazon EKS clusters to save money and reduce your environmental impact. By using cluster autoscaler and karpenter, you can easily and effectively scale your clusters up and down, as needed. These tools can help you to scale your Amazon EKS clusters to zero nodes and save on your resource utilization and carbon footprint. If you want to get started with karpenter, then you can find the official documentation here. The documentation includes instructions on Kubernetes installation and the configuration of provisioners and all the other components required to orchestrate autoscaling. This guide focuses on Amazon EKS, but the same concepts can apply to self-hosted kubernetes solutions. View the full article
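The post implements the schedule with in-cluster CronJobs and Karpenter provisioner limits. As a complementary, hedged sketch (not taken from the post), the same weekday schedule could instead drive a small Lambda function that scales an EKS managed node group to zero through the EKS API; the cluster and nodegroup names below are placeholders.
    # Illustrative sketch: a schedule-triggered Lambda that scales an Amazon EKS managed
    # node group to zero and back. Cluster and nodegroup names are placeholders.
    import boto3

    eks = boto3.client("eks")

    def scale_nodegroup(cluster: str, nodegroup: str, size: int, max_size: int = 10) -> None:
        """Set min/desired size of a managed node group; size=0 scales it in completely."""
        eks.update_nodegroup_config(
            clusterName=cluster,
            nodegroupName=nodegroup,
            scalingConfig={"minSize": size, "maxSize": max_size, "desiredSize": size},
        )

    def lambda_handler(event, context):
        # e.g. an EventBridge rule at 22:30 on weekdays passes {"action": "down"}
        size = 0 if event.get("action") == "down" else 2
        scale_nodegroup("non-prod-cluster", "app-nodegroup", size)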
  13. We ran a $12K experiment to test the cost and performance of Serverless warehouses and dbt concurrent threads, and obtained unexpected results.
By: Jeff Chou, Stewart Bryson
Image by Los Muertos Crew
Databricks’ SQL warehouse products are a compelling offering for companies looking to streamline their production SQL queries and warehouses. However, as usage scales up, the cost and performance of these systems become crucial to analyze. In this blog we take a technical deep dive into the cost and performance of their serverless SQL warehouse product by utilizing the industry standard TPC-DI benchmark. We hope data engineers and data platform managers can use the results presented here to make better decisions when it comes to their data infrastructure choices.
What are Databricks’ SQL warehouse offerings?
Before we dive into a specific product, let’s take a step back and look at the different options available today. Databricks currently offers 3 different warehouse options:
SQL Classic — Most basic warehouse, runs inside customer’s cloud environment
SQL Pro — Improved performance and good for exploratory data science, runs inside customer’s cloud environment
SQL Serverless — “Best” performance, and the compute is fully managed by Databricks.
From a cost perspective, both classic and pro run inside the user’s cloud environment. What this means is you will get 2 bills for your Databricks usage — one is your pure Databricks cost (DBUs) and the other is from your cloud provider (e.g. AWS EC2 bill). To really understand the cost comparison, let’s just look at an example cost breakdown of running on a Small warehouse based on their reported instance types:
Cost comparison of jobs compute and the various SQL serverless options. Prices shown are based on on-demand list prices. Spot prices will vary and were chosen based on the prices at the time of this publication. Image by author.
In the table above, we look at the cost comparison of on-demand vs. spot costs as well. You can see from the table that the serverless option has no cloud component, because it’s all managed by Databricks. Serverless could be cost effective compared to pro, if you were using all on-demand instances. But if there are cheap spot nodes available, then Pro may be cheaper. Overall, the pricing for serverless is pretty reasonable in my opinion since it also includes the cloud costs, although it’s still a “premium” price. We also included the equivalent jobs compute cluster, which is the cheapest option across the board. If cost is a concern to you, you can run SQL queries in jobs compute as well!
Pros and cons of Serverless
The Databricks serverless option is a fully managed compute platform. This is pretty much identical to how Snowflake runs, where all of the compute details are hidden from users. 
At a high level there are pros and cons to this:
Pros:
You don’t have to think about instances or configurations
Spin up time is much less than starting up a cluster from scratch (5–10 seconds from our observations)
Cons:
Enterprises may have security issues with all of the compute running inside of Databricks
Enterprises may not be able to leverage their cloud contracts which may have special discounts on specific instances
No ability to optimize the cluster, so you don’t know if the instances and configurations picked by Databricks are actually good for your job
The compute is a black box — users have no idea what is going on or what changes Databricks is implementing underneath the hood, which may make stability an issue.
Because of the inherent black box nature of serverless, we were curious to explore the various tunable parameters people do still have and their impact on performance. So let’s dive into what we explored:
Experiment Setup
We tried to take a “practical” approach to this study, and simulate what a real company might do when they want to run a SQL warehouse. Since DBT is such a popular tool in the modern data stack, we decided to look at 2 parameters to sweep and evaluate:
Warehouse size — [‘2X-Small’, ‘X-Small’, ‘Small’, ‘Medium’, ‘Large’, ‘X-Large’, ‘2X-Large’, ‘3X-Large’, ‘4X-Large’]
DBT Threads — [‘4’, ‘8’, ‘16’, ‘24’, ‘32’, ‘40’, ‘48’]
The reason why we picked these two is they are both “universal” tuning parameters for any workload, and they both impact the compute side of the job. DBT threads in particular effectively tune the parallelism of your job as it runs through your DAG. The workload we selected is the popular TPC-DI benchmark, with a scale factor of 1000. This workload in particular is interesting because it’s actually an entire pipeline which mimics more real-world data workloads. For example, a screenshot of our DBT DAG is below; as you can see it’s quite complicated, and changing the number of DBT threads could have an impact here.
DBT DAG from our TPC-DI Benchmark. Image by author
As a side note, Databricks has a fantastic open source repo that will help quickly set up the TPC-DI benchmark within Databricks entirely. (We did not use this since we are running with DBT.)
To get into the weeds of how we ran the experiment, we used Databricks Workflows with a Task Type of dbt as the “runner” for the dbt CLI, and all the jobs were executed concurrently; there should be no variance due to unknown environmental conditions on the Databricks side. Each job spun up a new SQL warehouse and tore it down afterwards, and ran in unique schemas in the same Unity Catalog. We used the Elementary dbt package to collect the execution results and ran a Python notebook at the end of each run to collect those metrics into a centralized schema. Costs were extracted via Databricks System Tables, specifically those for Billable Usage. Try this experiment yourself and clone the Github repo here.
Results
Below are the cost and runtime vs. warehouse size graphs. We can see below that the runtime stops scaling when you get to the medium sized warehouses. Anything larger than a medium pretty much had no impact on runtime (or perhaps was worse). This is a typical scaling trend which shows that scaling cluster size is not infinite; they always have some point at which adding more compute provides diminishing returns. 
For the CS enthusiasts out there, this is just the fundamental CS principle — Amdahl’s Law.
One unusual observation is that the medium warehouse outperformed the next 3 sizes up (large to 2xlarge). We repeated this particular data point a few times, and obtained consistent results, so it is not a strange fluke. Because of the black box nature of serverless, we unfortunately don’t know what’s going on under the hood and are unable to give an explanation.
Runtime in Minutes across Warehouse Sizes. Image by author
Because scaling stops at medium, we can see in the cost graph below that the costs start to skyrocket after the medium warehouse size, because you’re basically throwing more expensive machines at the problem while the runtime remains constant. So, you’re paying for extra horsepower with zero benefit.
Cost in $ across Warehouse Sizes. Image by author
The graph below shows the relative change in runtime as we change the number of threads and warehouse size. For values greater than the zero horizontal line, the runtime increased (a bad thing).
The Percent Change in Runtime as Threads Increase. Image by author
The data here is a bit noisy, but there are some interesting insights based on the size of the warehouse:
2x-small — Increasing the number of threads usually made the job run longer.
X-small to large — Increasing the number of threads usually helped make the job run about 10% faster, although the gains were pretty flat, so continuing to increase thread count had no value.
2x-large — There was an actual optimal number of threads, which was 24, as seen in the clear parabolic line.
3x-large — Had a very unusual spike in runtime with a thread count of 8; why? No clue.
To put everything together into one comprehensive plot, we can see the plot below which plots the cost vs. duration of the total job. The different colors represent the different warehouse sizes, and the size of the bubbles is the number of DBT threads.
Cost vs duration of the jobs. Size of the bubbles represents the number of threads. Image by author
In the plot above we see the typical trend that larger warehouses typically lead to shorter durations but higher costs. However, we do spot a few unusual points:
Medium is the best — From a pure cost and runtime perspective, medium is the best warehouse to choose.
Impact of DBT threads — For the smaller warehouses, changing the number of threads appeared to have changed the duration by about +/- 10%, but not the cost much. For larger warehouses, the number of threads impacted both cost and runtime quite significantly.
Conclusion
In summary, our top 5 lessons learned about Databricks SQL serverless + DBT products are:
Rules of thumb are bad — We cannot simply rely on “rules of thumb” about warehouse size or the number of dbt threads. Some expected trends do exist, but they are not consistent or predictable and it is entirely dependent on your workload and data.
Huge variance — For the exact same workloads the costs ranged from $5 to $45, and runtimes from 2 minutes to 90 minutes, all due to different combinations of number of threads and warehouse size.
Serverless scaling has limits — Serverless warehouses do not scale infinitely, and eventually larger warehouses will cease to provide any speedup and only end up causing increased costs with no benefit.
Medium is great? — We found the Medium Serverless SQL Warehouse outperformed many of the larger warehouse sizes on both cost and job duration for the TPC-DI benchmark. 
We have no clue why.
Jobs clusters may be cheapest — If costs are a concern, switching to just standard jobs compute with notebooks may be substantially cheaper.
The results reported here reveal that the performance of black box “serverless” systems can result in some unusual anomalies. Since it’s all behind Databricks’ walls, we have no idea what is happening. Perhaps it’s all running on giant Spark on Kubernetes clusters, maybe they have special deals with Amazon on certain instances? Either way, the unpredictable nature makes controlling cost and performance tricky. Because each workload is unique across so many dimensions, we can’t rely on “rules of thumb”, or costly experiments that are only true for a workload in its current state. The more chaotic nature of serverless systems does beg the question of whether these systems need a closed-loop control system to keep them at bay.
As an introspective note — the business model of serverless is truly compelling. Assuming Databricks is a rational business and does not want to decrease their revenue, and they want to lower their costs, one must ask the question: “Is Databricks incentivized to improve the compute under the hood?” The problem is this — if they make serverless 2x faster, then all of a sudden their revenue from serverless drops by 50% — that’s a very bad day for Databricks. If they could make it 2x faster, and then increase the DBU costs by 2x to counteract the speedup, then they would remain revenue neutral (this is what they did for Photon actually). So Databricks is really incentivized to decrease their internal costs while keeping customer runtimes about the same. While this is great for Databricks, it’s difficult to pass on any serverless acceleration technology to the user that results in a cost reduction.
Interested in learning more about how to improve your Databricks pipelines? Reach out to Jeff Chou and the rest of the Sync Team.
Resources: Try this experiment yourself and clone the Github repo here.
Related Content: Why Your Data Pipelines Need Closed-Loop Feedback Control; Are Databricks clusters with Photon and Graviton instances worth it?; Is Databricks’s autoscaling cost efficient?; Introducing Gradient — Databricks optimization made easy.
5 Lessons Learned from Testing Databricks SQL Serverless + DBT was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. View the full article
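The authors drove the sweep through Databricks Workflows dbt tasks; as a much-simplified, hypothetical sketch of the same idea, the grid of warehouse sizes and thread counts could be swept from a workstation with the dbt CLI. The profile target names below are made up, and each target is assumed to point at a pre-created SQL warehouse of the corresponding size.
    # Rough sketch of the parameter sweep described above (not the authors' actual harness).
    # Assumes a dbt project whose profile targets point at pre-created SQL warehouses of
    # different sizes; target names are hypothetical.
    import itertools
    import subprocess
    import time

    warehouse_targets = ["wh_2x_small", "wh_small", "wh_medium", "wh_large"]  # hypothetical targets
    thread_counts = [4, 8, 16, 24, 32]

    results = []
    for target, threads in itertools.product(warehouse_targets, thread_counts):
        start = time.time()
        subprocess.run(["dbt", "run", "--target", target, "--threads", str(threads)], check=True)
        results.append({"target": target, "threads": threads, "runtime_s": time.time() - start})

    for r in results:
        print(r)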
  14. As you might already know, AWS Lambda is a popular and widely used serverless computing platform that allows developers to build and run their applications without having to manage the underlying infrastructure. But have you ever wondered how AWS Lambda Pricing works and how much it would cost to run your serverless application? When it comes to cloud computing, cost is often a major concern. AWS Lambda, Amazon’s serverless computing platform, is no exception. Understanding AWS Lambda Pricing has become increasingly important as the demand for serverless computing continues to rise. View the full article
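As a rough illustration of how the request and duration dimensions of Lambda pricing combine, here is a small estimator; the per-request and per-GB-second rates are example on-demand figures and should be checked against the current AWS pricing page.
    # Back-of-the-envelope Lambda cost estimate. The per-request and per-GB-second rates
    # below are example on-demand figures; always check the current AWS pricing page.
    def lambda_monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int,
                            price_per_request: float = 0.20 / 1_000_000,
                            price_per_gb_second: float = 0.0000166667) -> float:
        gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
        return invocations * price_per_request + gb_seconds * price_per_gb_second

    # Example: 10M invocations/month, 120 ms average duration, 512 MB memory (free tier ignored).
    print(f"${lambda_monthly_cost(10_000_000, 120, 512):.2f}")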
  15. Event-driven architecture (EDA) and serverless functions are two powerful software patterns and concepts that have become popular in recent years with the rise of cloud-native computing. While one is more of an architecture pattern and the other a deployment or implementation detail, when combined, they provide a scalable and efficient solution for modern applications... View the full article
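As a minimal sketch of the combination, the handler below reacts to S3 object-created notifications, i.e. compute that runs only when an event arrives; the actual processing step is left as a comment.
    # Minimal illustration of the pattern: a Lambda function reacting to S3 "object created"
    # events. Bucket and key fields follow the standard S3 event notification shape.
    import json

    def handler(event, context):
        records = event.get("Records", [])
        for record in records:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # ... process the new object, emit downstream events, etc.
            print(json.dumps({"processed": f"s3://{bucket}/{key}"}))
        return {"status": "ok", "records": len(records)}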
  16. AWS Step Functions announces an Optimized Integration for Amazon EMR Serverless, adding support for the Run a Job (.sync) integration pattern with 6 EMR Serverless API Actions (CreateApplication, StartApplication, StopApplication, DeleteApplication, StartJobRun, and CancelJobRun). View the full article
  17. This post explains how you can orchestrate a PySpark application using Amazon EMR Serverless and AWS Step Functions... View the full article
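The article's full walkthrough is behind the link; as a hedged sketch of the core call, a PySpark job can be submitted to EMR Serverless with boto3, and a Step Functions task can wrap the same action with the Run a Job (.sync) pattern mentioned above. The application ID, role ARN, and S3 paths below are placeholders.
    # Hedged sketch: submitting a PySpark job to Amazon EMR Serverless with boto3.
    # All IDs, ARNs, and S3 paths are placeholders.
    import boto3

    emr = boto3.client("emr-serverless")

    response = emr.start_job_run(
        applicationId="00f1example",                                   # placeholder application ID
        executionRoleArn="arn:aws:iam::111122223333:role/EMRServerlessJobRole",
        jobDriver={
            "sparkSubmit": {
                "entryPoint": "s3://my-bucket/scripts/etl_job.py",
                "entryPointArguments": ["--input", "s3://my-bucket/raw/", "--output", "s3://my-bucket/curated/"],
                "sparkSubmitParameters": "--conf spark.executor.memory=4g",
            }
        },
    )
    print(response["jobRunId"])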
  18. Serverless architecture is becoming increasingly popular for fintech developers and CTOs looking to simplify their tech stack. The technology offers many benefits, including reduced server management complexity and lower costs due to its pay-as-you-go model. But how exactly do you implement serverless technology? In this article, I provide a comprehensive, step-by-step guide to using serverless architecture, with practical tips and real-world use cases. View the full article
  19. Developers using SAM CLI to author their serverless application with Lambda functions can now create and use Lambda test events to test their function code. Test events are JSON objects that mock the structure of requests emitted by AWS services to invoke a Lambda function and return an execution result, serving to validate a successful operation or to identify errors. Previously, Lambda test events were only available in the Lambda console. With this launch, developers using SAM CLI can create and access a test event from their AWS account and share it with other team members. View the full article
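For context, a Lambda test event is just a JSON document shaped like the payload a service would send. The sketch below shows a hypothetical API Gateway-style event used to exercise a handler locally; it is illustrative only and not the SAM CLI workflow itself, and the field values are made up.
    # Illustrative only: an API Gateway-style test event used to exercise a handler in a
    # quick local check. Real events carry many more fields.
    import json

    test_event = {
        "httpMethod": "POST",
        "path": "/orders",
        "headers": {"content-type": "application/json"},
        "body": '{"item": "book", "qty": 2}',
    }

    def handler(event, context):
        body = json.loads(event.get("body") or "{}")
        return {"statusCode": 200, "body": json.dumps({"received": body})}

    print(handler(test_event, None))  # run locally before wiring the event into SAM or CI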
  20. The serverless architecture AWS approach lets you build as well as deploy the applications and services with the infrastructure resources with zero administration. As the application can operate on the servers, the tasks that are relevant to the server management are taken care of by AWS. It means there is no need to handle tasks such as scaling, provisioning, and maintaining the servers to run the applications, storage, and database systems. In this blog article, we will demonstrate how we can deploy a serverless architecture AWS in the Amazon web services to create static web pages. This tutorial is intended for cloud architects and cloud developers as well as those who want to pursue an AWS Certified Developer Associate. An Overview of Serverless Architecture AWS Serverless architecture AWS offers a revolutionary approach to application development and deployment. With AWS Lambda at its core, this paradigm eliminates the need for traditional servers, allowing you to focus solely on your code. By leveraging serverless computing, you can achieve unparalleled scalability, cost-efficiency, and agility for your applications. Serverless AWS empowers developers to respond rapidly to changing demands while reducing operational overhead. Whether you’re building APIs, processing data, or deploying web applications, serverless architecture AWS simplifies the process. Embrace the future of cloud computing with AWS and unleash the full potential of serverless technology. What are the prerequisites for deployment activity? While initiating the deployment activity, it is essential to meet the following requirements: The users who are involved in this project must know about AWS services such as Amazon S3, API Gateway, AWS Lambda, and DynamoDB Must have a basic understanding of HTML, Javascript, and CSS Some of the hardware requirements such as a Laptop/Desktop and a fast and reliable Internet connection are required. Exam Overview of AWS Certified Developer Associate (DVA-C02) Setting up a lab environment To set up the AWS lab Environment, follow the below steps: To launch the project environment, Click on the start project button. You have to wait for a minute until the cloud environment gets provisioned. Once the Lab is started, you will be provided with an IAM username, Password, Access Key, and Secret Access Key Architecture Diagram of Serverless Architecture AWS How to Deploy a static feedback webpage with 100% Serverless Architecture? To build the static webpage with the help of serverless architecture, the below steps need to be followed: Step 1: Access the AWS Management Console To begin, click on the provided button, which will open a new browser tab and take you to the AWS Console. Once you reach the AWS sign-in page, please follow these steps: Keep the Account ID field as it is. Do not edit or remove the 12-digit Account ID displayed in the AWS Console. Editing it may prevent you from proceeding with the lab. In the Lab Console, copy your User Name and Password. Paste the copied Username and Password into the respective fields labeled IAM Username and Password on the AWS Console sign-in page. After entering your credentials, click on the “Sign in” button. Once you’ve successfully signed in to the AWS Management Console, make sure to set the default AWS Region to “US East (N. 
Virginia)” by selecting “us-east-1.” Also Read : A Guide to Serverless Architecture Step 2: Creating a DynamoDB table To create the DynamoDB table for storing form submission details: Ensure that you are in the “US East (N. Virginia) US-east-1” Region. Click on the “Services” menu, located under the “Database” section. Select “DynamoDB.” Click on “Create table.” In the “Table details” section: Enter “whiz_table” as the Table name. For the Partition key, enter “id” and select the type as “String.” Leave all other settings as their default values. Finally, click on the “Create table” button to create the DynamoDB table. Step 3: Creating an S3 Bucket for Image Storage and Adding Bucket Policy Ensure that you are in the “US East (N. Virginia) US-east-1” Region. Navigate to S3 by clicking on the “Services” menu, located under the “Storage” section. Click on the “Create bucket” button. Enter a unique name for your bucket, such as “whiz-s3-image-data,” and select the region as “us-east-1.” In the “Block Public Access settings for this bucket” section: Uncheck the option “Block all public access” and acknowledge the option. Leave all other settings as their default values and click “Create bucket.” To create the Bucket policy, click on the newly created bucket and go to the “Permissions” tab. Scroll down to the “Bucket Policy” section and click on “Edit.” Paste the following bucket policy and click “Save changes.” Step 4: Creating an S3 Bucket for Web Page Content Ensure that you are in the “US East (N. Virginia) US-east-1” Region. Navigate to S3 by clicking on the “Services” menu, located under the “Storage” section. Click on the “Create bucket” button. Enter a unique name for your bucket, like “whiz-webpage,” and select the region as “us-east-1.” In the “Block Public Access settings for this bucket” section: Uncheck the option “Block all public access” and acknowledge it. Under “Bucket Versioning,” click “Enable.” Leave all other settings as their default values and click “Create bucket.” Once your S3 bucket is created, click on it to open it. To create the Bucket policy, click on the bucket and go to the “Permissions” tab. Scroll down to the “Bucket Policy” section and click on “Edit.” Paste the following bucket policy and click “Save changes.” Step 5: Hosting the Static Webpage Download the zip file named “Whiz_App.” Unzip the files from “Whiz_App.zip” into your local directory and review the code. Return to your S3 bucket and click on the “Upload” button to upload your webpage content. You can drag and drop the files from the unzipped folder. Your webpage content is now hosted in your S3 bucket. To access it, click on the “Properties” tab in your bucket and select “Static website hosting.” Click on “Edit” and choose “Enable.” In the “Hosting type,” select “Host a static website.” Enter the name of your index document (typically “index.html”) and the name of your error document (typically “error.html”). Click “Save changes,” and your static website is now hosted on your S3 bucket. You can access it using the endpoint URL provided in the “Static website hosting” settings. Step 6: Creating a AWS Lambda function Navigate to the AWS Lambda function in the US East (N. Virginia) region. Click on “Create function.” Fill in the following details: Function name: send_data_to_table Runtime: Choose Python 3.9 Under Permissions, choose “Use an existing role” and select whiz_lambda_role. Click “Create function.” Step 7: Creating a Rest API Navigate to the AWS API Gateway service in the US East (N. 
Virginia) region. Click on “Get started.” Select “Build” under REST API. Choose the protocol as REST. In the “Create new API” section, choose “New API.” Under Settings, set: API name: whiz_api Description: “Trigger the Lambda function” Click “Create API.” Step 8: Creating a Resource and Method for the API After creating the REST API, click “Actions” and choose “Create Resource.” Enter “submit” as the Resource Name and check the option for “Enable API Gateway CORS.” Click “Create Resource.” Once the resource is created, select it (/submit) and choose “Create Method.” Choose the method as POST and click the tick mark. In the POST method: Choose “Integration type” as Lambda Function. Enter the lambda function name (send_data_to_table). Click “Save,” and then click “OK” on the pop-up window. Step 9: Testing the API You can now test if the Lambda integration is working. Click on “Test.” In the Request Body, enter the parameters to check if the configurations are correct. The data entered in the Request Body should be updated in the DynamoDB table, and the image file should be stored in the S3 bucket. After the creation of those web pages, the users can suggest their thoughts in the form of images, and that data will be retained in an AWS Database such as Amazon DynamoDB. The whole deployment process gets automated and thus allows for more accessible updates to a webpage. Step 10: Enabling CORS and Deploying the API After successfully creating the resource and method, you can proceed to deploy the API. Select the POST method, click on “Actions,” and choose “Enable CORS.” Keep all the settings as default and click on “Enable CORS and replace existing CORS headers.” Next, click on “Actions” again and select Deploy API. In the “Deploy API” section: For Deployment stage, select [New stage]. Enter a Stage name, such as “staging.” Click “Deploy.” Once the API is deployed, make a note of the Invoke URL for the /submit method. You can find this URL by navigating to the staging > /submit > POST. Step 11: Modify the index.html Page Open the contents of the Whiz_App in any text editor. You need to insert the API Gateway endpoint URL you noted down previously into Line 64 of the index.html file. Save the file to update it and then upload the updated index.html to the static web page hosting S3 bucket. Navigate to Amazon S3 by clicking on the “Services” menu, under the “Storage” section, and select your static web page hosting bucket. Click on the “upload” button and upload the updated index.html page to the bucket. Step 12: Testing the Web Page Now You can test the web page using the CloudFront domain name. Follow these steps: On the web page, fill in the details in the form: Click on the “Submit” button. You should see a submission message at the top of the web page, indicating whether the form submission was successful or not. Next, navigate to the DynamoDB table to check if the submitted form’s details are recorded. Verify that the entry has been made in the DynamoDB table. Finally, go to the S3 bucket that was created to confirm whether the uploaded image is present. Benefits of AWS Hands-on Labs Summary Hope this blog post shows how the serverless architecture AWS approach helps in creating static webpage creation in real-time settings. By following the above steps, you can get feedback in prior and make the right decisions on time. By the generation of automatic updates to web pages, the users can become highly satisfied. 
To learn more about serverless architecture on AWS and its concepts, consider taking the AWS Certified Developer – Associate certification. If you are looking for study materials for the AWS Certified Developer – Associate (DVA-C02) exam, make use of our DVA-C02 study guides, DVA-C02 practice exams, and more to prepare for the exam questions. To sharpen your practical skills, rely on our AWS hands-on labs and AWS sandboxes. View the full article
  21. The media and entertainment (M&E) industry has evolved with the digitization of content, proliferation of platforms, changes in the way media is consumed, emergence of user generated content, and globalization. These changes are fueled by and fuel consumer behavior and expectations. Those who were once happy to just record a live telecast of a show [
] View the full article
  22. Unlock the power of modern application development. Accelerate innovation, enhance performance, fortify security, and boost reliability while significantly reducing your TCO. Which containers or serverless service should I start with to modernize my existing or build new applications? There are two primary operating models for building, running, and modernizing code on AWS: Kubernetes and Serverless. [
] View the full article
  23. In traditional business models, retailers handle order-fulfillment processes from start to finish, including inventory management, owning or leasing warehouses, and managing supply chains. But many retailers aren’t set up to carry additional inventory. The “endless aisle” business model is an alternative for lean retailers that carry only limited in-store inventory but want to avoid losing revenue. Endless aisle is also known as drop-shipping: fulfilling orders through automated integration with product partners. Such automation lets a customer place an order on a tablet or kiosk when they cannot find the specific product they want on in-store shelves.

Why is the endless aisle concept important for businesses and customers alike? It means that:
- Businesses no longer need to stock products more than shelf deep.
- End customers can easily place an order in the store and have it shipped directly to their home or another place of their choice.
Let’s explore these concepts further.

Solution overview
When customers are in-store and looking to order items that are not available on the shelves, a store associate can scan the SKU code on a tablet. The kiosk experience is similar, except the customer searches for the item themselves by typing in its name. For example, if a customer visits a clothing store that only stocks the items on its shelves and finds the store is out of a product in their size, preferred color, or both, the associate can scan the SKU and check whether the item is available to ship. The application then raises a request with the store’s product partner. The request returns the available products, which the associate can show to the customer, who can then choose to place an order. When the order is processed, it is fulfilled directly by the partner.

Serverless endless aisle reference architecture
Figure 1 illustrates how to architect a serverless endless aisle solution for order processing.
Figure 1. Building endless aisle architecture for order processing

Website hosting and security
We’ll host the endless aisle website on Amazon Simple Storage Service (Amazon S3) with Amazon CloudFront for better response times. CloudFront is a content delivery network (CDN) service built for high performance and security. CloudFront reduces latency by providing access at the edge and caching static content, while dynamic content is served through an Amazon API Gateway integration in our use case. A web application firewall (AWS WAF) is used with CloudFront for protection against internet threats such as cross-site scripting (XSS) and SQL injection. Amazon Cognito manages the application user pool and controls who can access the application.

Solution walkthrough
Let’s review the architecture steps in detail.
Step 1. The store associate logs in to the application with their username and password. When the associate or customer scans the bar code/SKU, the following process flow is executed.
Step 2. The front-end application translates the SKU code into a product number and invokes the Get Item API.
Step 3. The getItem AWS Lambda function handles the API call. This design pattern supports integration with multiple partners and allows the code to be reused: the design can work with any partner that can integrate over APIs, and the partner-specific transformation is built as a separate Lambda function. We’ll use Amazon DynamoDB to store partner metadata, for example partner_id, partner_name, and partner APIs; a record for one partner might look like the sketch below.
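For illustration, a single record in that partner-metadata table could be seeded as follows. The table name partners-table and the partner_id/partner_name attributes come from the walkthrough; the endpoint and transformation attributes, the key schema, and all values are assumptions made for this sketch.

# Hypothetical example of seeding one partner record in the "partners-table".
# Attribute names beyond partner_id/partner_name, and all values, are
# illustrative assumptions rather than the reference implementation.
import boto3

dynamodb = boto3.resource("dynamodb")
partners_table = dynamodb.Table("partners-table")

partners_table.put_item(
    Item={
        "partner_id": "partner-001",
        "partner_name": "Acme Apparel",
        # Endpoints the getItem/createOrder Lambda functions would call.
        "get_item_api": "https://api.acme-apparel.example.com/v1/items",
        "create_order_api": "https://api.acme-apparel.example.com/v1/orders",
        # Name of the Lambda function that transforms requests into this
        # partner's expected format (see Steps 4 and 8 below).
        "transformation_lambda": "acme-transform",
    }
)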
Step 4. The getItem Lambda function fetches the partner information from the DynamoDB table and transforms the request body using the partner’s transformation Lambda function.
Step 5. The getItem Lambda function calls the right partner API. Upon receiving the request, the partner API returns the available products (based on the SKU code) with details such as size, color, and any other variable parameters, along with images. It can also provide links to similar available products the customer may be interested in, based on the selected product. This helps retail clients increase revenue and offer products that aren’t available on their shelves at a given time. The customer then selects from the available products. Having chosen the right product with specific details such as color, size, and quantity, they add it to the cart and begin checkout, entering their shipping address and payment information to place the order.
Step 6. The orders are pushed to an Amazon Simple Queue Service (Amazon SQS) queue named create-order-queue. Amazon SQS provides a straightforward and reliable way to decouple and connect microservices using queues.
Step 7. Amazon SQS ensures that there is no data loss and that orders are processed from the queue by the orders API. The createOrder Lambda function pulls the messages from Amazon SQS and processes them.
Step 8. The orders API body is then transformed into the message format expected by the partner API. This transformation can be done by a Lambda function defined in the configuration stored in the partners-table DynamoDB table.
Step 9. The partner API is called using the endpoint URL obtained from the partners-table. When the order is placed, a confirmation is returned in the partner API response, and the order details are written to another DynamoDB table called orders-table. (A sketch of this createOrder flow follows Step 12.)
Step 10. A DynamoDB stream tracks any insert or update to the orders-table.
Step 11. A notifier Lambda function invokes Amazon Simple Email Service (Amazon SES) to notify the store about order activity.
Step 12. The processed orders are integrated with the customer’s ERP application for reconciliation. This can be achieved with an Amazon EventBridge rule that invokes a dataSync Lambda function.
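As referenced in Step 9, the following is a minimal sketch of how an SQS-triggered createOrder function could implement Steps 7 through 9. The queue trigger, the partners-table and orders-table names, and the overall flow (look up the partner, call its API, record the confirmed order) follow the walkthrough; the message fields, key schema, and partner API contract are assumptions.

# Hypothetical sketch of an SQS-triggered createOrder Lambda (Steps 7-9).
# Message fields, table key schemas, and the partner API contract are assumed.
import json
import urllib.request

import boto3

dynamodb = boto3.resource("dynamodb")
partners_table = dynamodb.Table("partners-table")
orders_table = dynamodb.Table("orders-table")


def lambda_handler(event, context):
    # Lambda receives a batch of messages from the create-order-queue.
    for record in event["Records"]:
        order = json.loads(record["body"])

        # Look up the partner's API endpoint from the partners-table.
        partner = partners_table.get_item(
            Key={"partner_id": order["partner_id"]}
        )["Item"]

        # Transform the order into the partner's expected format and call its API.
        request = urllib.request.Request(
            partner["create_order_api"],
            data=json.dumps(order["line_items"]).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            confirmation = json.loads(response.read())

        # Record the confirmed order; the DynamoDB stream on this table then
        # triggers the notifier Lambda (Step 11).
        orders_table.put_item(
            Item={
                "order_id": confirmation["order_id"],
                "partner_id": order["partner_id"],
                "status": confirmation.get("status", "PLACED"),
            }
        )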
Prerequisites
For this walkthrough, you’ll need the following:
- An AWS account with admin access
- The AWS Command Line Interface (AWS CLI). See Getting started with the AWS CLI.
- Node.js (16.x+) and npm. For more information, see Downloading and installing Node.js and npm.
- aws-cdk (2.x+). See Getting started with the AWS CDK.
- The GitHub serverless-partner-integration-endless-aisle repository, cloned and configured on your local machine.

Build
Install the CDK library locally:
npm install -g aws-cdk
Build the infrastructure package to create the deployable assets used in the CloudFormation template:
cd serverless-partner-integration-endless-aisle && sh build.sh

Synthesize the CloudFormation template
To see the CloudFormation template generated by the CDK, run:
cd serverless-partner-integration-endless-aisle/infrastructure
cdk bootstrap && cdk synth
Check the output files in the “cdk.out” directory. An AWS CloudFormation template is created for deployment into your AWS account.

Deploy
Use the CDK to deploy or redeploy your stack to an AWS account.
Set the store email address for notifications. If the store wants updates about customer orders, set the STORE_EMAIL value to the store’s email address. You will receive a verification email at this address, after which SES can send you order updates.
export STORE_EMAIL="dummytest@someemail.com" (put your email here)
Set up AWS credentials with the information found in the developer guide, then run:
cdk deploy

Testing
After the deployment, the CDK outputs an Amazon CloudFront URL to use for testing. If you provided a STORE_EMAIL address during setup, approve the email link received from Amazon SES in your inbox so that order notifications can reach you.
Create a sample user that you can use to log in to the website:
aws cognito-idp admin-create-user --user-pool-id <REACT_APP_USER_POOL_ID> --username <UserName> --user-attributes Name="email",Value="<USER_EMAIL>" Name="email_verified",Value=true
The user will receive a password by email. Open the CloudFront URL in a web browser and log in with the username and password; you will be asked to reset the password. Explore the different features, such as partner lookup, product search, placing an order, and order lookup.

Cleaning up
To avoid incurring future charges, delete the resources by deleting the CloudFormation stack when it is no longer needed. The following command deletes the infrastructure and website stacks created in your AWS account:
cdk destroy

Conclusion
In this blog, we demonstrated how to build an in-store digital channel for retail customers. You can now build your endless aisle application using the architecture described in this blog and integrate it with your partners, or reach out to accelerate your retail business.

Further reading
- Serverless on AWS
- Build a Serverless Web Application
- Serverless Architecture Design Examples
- AWS retail case studies
View the full article
  24. What is serverless computing? Serverless computing is a cloud computing model that AWS introduced in 2014 with its AWS Lambda service. The first serverless offerings were known as Function-as-a-Service (FaaS), but the model now also spans services such as Containers-as-a-Service (CaaS) and Backend-as-a-Service (BaaS). It allows developers to build and run applications without having to manage and maintain the underlying infrastructure. View the full article