Showing results for tags 'dynamodb'.

Found 16 results

  1. AWS Summit season is starting! I'm happy I will meet our customers, partners, and the press next week at the AWS Summit Paris and the week after at the AWS Summit Amsterdam. I'll show you how mobile application developers can use generative artificial intelligence (AI) to boost their productivity. Be sure to stop by and say hi if you're around. Now that my talks for the Summit are ready, I took the time to look back at the AWS launches from last week and write this summary for you.

Last week's launches

Here are some launches that got my attention:

AWS License Manager allows you to track IBM Db2 licenses on Amazon Relational Database Service (Amazon RDS) – I wrote about Amazon RDS when we launched IBM Db2 back in December 2023 and I told you that you must bring your own Db2 license. Starting today, you can track your Amazon RDS for Db2 usage with AWS License Manager. License Manager provides you with better control and visibility of your licenses to help you limit licensing overages and reduce the risk of non-compliance and misreporting.

AWS CodeBuild now supports custom images for AWS Lambda – You can now use compute container images stored in an Amazon Elastic Container Registry (Amazon ECR) repository for projects configured to run on Lambda compute. Previously, you had to use one of the managed container images provided by AWS CodeBuild. AWS managed container images include support for AWS Command Line Interface (AWS CLI), Serverless Application Model, and various programming language runtimes.

AWS CodeArtifact package group configuration – Administrators of package repositories can now manage the configuration of multiple packages in one single place. A package group allows you to define how packages are updated by internal developers or from upstream repositories. You can now allow or block internal developers to publish packages or allow or block upstream updates for a group of packages. Read my blog post for all the details.

Return your Savings Plans – We have announced the ability to return Savings Plans within 7 days of purchase. Savings Plans is a flexible pricing model that can help you reduce your bill by up to 72 percent compared to On-Demand prices, in exchange for a one- or three-year hourly spend commitment. If you realize that the Savings Plan you recently purchased isn't optimal for your needs, you can return it and if needed, repurchase another Savings Plan that better matches your needs.

Amazon EC2 Mac Dedicated Hosts now provide visibility into supported macOS versions – You can now view the latest macOS versions supported on your EC2 Mac Dedicated Host, which enables you to proactively validate if your Dedicated Host can support instances with your preferred macOS versions.

Amazon Corretto 22 is now generally available – Corretto 22, an OpenJDK feature release, introduces a range of new capabilities and enhancements for developers. New features like stream gatherers and unnamed variables help you write code that's clearer and easier to maintain. Additionally, optimizations in garbage collection algorithms boost performance. Existing libraries for concurrency, class files, and foreign functions have also been updated, giving you a more powerful toolkit to build robust and efficient Java applications.
Amazon DynamoDB now supports resource-based policies and AWS PrivateLink – With AWS PrivateLink, you can simplify private network connectivity between Amazon Virtual Private Cloud (Amazon VPC), Amazon DynamoDB, and your on-premises data centers using interface VPC endpoints and private IP addresses. Resource-based policies, for their part, help you simplify access control for your DynamoDB resources. With resource-based policies, you can specify the AWS Identity and Access Management (IAM) principals that have access to a resource and the actions they can perform on it. You can attach a resource-based policy to a DynamoDB table or a stream. Resource-based policies also simplify cross-account access control for sharing resources with IAM principals of different AWS accounts.

For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

Other AWS news

Here are some additional news items, open source projects, and Twitch shows that you might find interesting:

British Broadcasting Corporation (BBC) migrated 25PB of archives to Amazon S3 Glacier – The BBC Archives Technology and Services team needed a modern solution to centralize, digitize, and migrate its 100-year-old flagship archives. It began using Amazon Simple Storage Service (Amazon S3) Glacier Instant Retrieval, which is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. I did the math: you need 2,788,555 DVD discs to store 25PB of data. Imagine a pile of DVDs reaching 41.8 kilometers (or 25.9 miles) tall! Read the full story.

Build On Generative AI – Season 3 of your favorite weekly Twitch show about all things generative AI is in full swing! Streaming every Monday, 9:00 AM US PT, my colleagues Tiffany and Darko discuss different aspects of generative AI and invite guest speakers to demo their work.

AWS open source news and updates – My colleague Ricardo writes this weekly open source newsletter in which he highlights new open source projects, tools, and demos from the AWS Community.

Upcoming AWS events

Check your calendars and sign up for these AWS events:

AWS Summits – As I wrote in the introduction, it's AWS Summit season again! The first one happens next week in Paris (April 3), followed by Amsterdam (April 9), Sydney (April 10–11), London (April 24), Berlin (May 15–16), and Seoul (May 16–17). AWS Summits are a series of free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS.

AWS re:Inforce – Join us for AWS re:Inforce (June 10–12) in Philadelphia, Pennsylvania. AWS re:Inforce is a learning conference focused on AWS security solutions, cloud security, compliance, and identity. Connect with the AWS teams that build the security tools and meet AWS customers to learn about their security journeys. You can browse all upcoming in-person and virtual events.

That's all for this week. Check back next Monday for another Weekly Roundup!

-- seb

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
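To make the resource-based policy part concrete, here is a minimal boto3 sketch. It assumes a recent SDK that exposes DynamoDB's PutResourcePolicy operation; the table ARN, account IDs, and role name are placeholders.

```python
import json
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical table ARN and cross-account principal, for illustration only.
table_arn = "arn:aws:dynamodb:us-east-1:111122223333:table/Orders"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountReads",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/analytics"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": table_arn,
        }
    ],
}

# Attach the resource-based policy directly to the table.
dynamodb.put_resource_policy(ResourceArn=table_arn, Policy=json.dumps(policy))
```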
  2. Today, we announced significant price-performance improvements for Cloud Spanner, now providing up to a 50% increase in throughput and 2.5 times the storage per node compared to before, with no change in price. Spanner's high throughput, virtually unlimited scale, single-digit millisecond latency, five 9s availability SLA, and strong external-consistency semantics are now available at half the cost of Amazon DynamoDB for most workloads. These upgrades will be rolled out to all Spanner customers in the coming months, without the need for reprovisioning, downtime, or any user action... https://cloud.google.com/blog/products/databases/announcing-cloud-spanner-price-performance-updates/
  3. AWS Cloud Map introduces a new API for retrieving the revision of your services. It allows your applications to fetch the updated state of your cloud resources only when that state has changed, minimizing discovery traffic and API cost. With AWS Cloud Map, you can define custom names for your application resources, such as Amazon Elastic Container Service (Amazon ECS) tasks, Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon DynamoDB tables, or other cloud resources. You can then use these custom names to discover the location and metadata of cloud resources from your applications using the AWS SDK and authenticated API calls. View the full article
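As a rough illustration of how the revision API could be used to cut down on discovery calls, here is a hedged boto3 sketch. It assumes the new operation is exposed as discover_instances_revision on the servicediscovery client; the namespace and service names are made up.

```python
import boto3

# Cloud Map's discovery API lives in the "servicediscovery" client.
cloudmap = boto3.client("servicediscovery")

namespace, service = "example.local", "payments"  # hypothetical names
last_seen = 0  # revision cached from a previous poll in a real application

# Cheap call: fetch only the revision counter for the service.
revision = cloudmap.discover_instances_revision(
    NamespaceName=namespace, ServiceName=service
)["InstancesRevision"]

# Only pay for a full discovery call when the revision has moved.
if revision != last_seen:
    instances = cloudmap.discover_instances(
        NamespaceName=namespace, ServiceName=service
    )["Instances"]
    print(f"refreshed {len(instances)} instances at revision {revision}")
```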
  4. Amazon DynamoDB now enables you to proactively manage your account and table quotas through enhanced integration with Service Quotas. Using Service Quotas, you can now view the current values of all your DynamoDB quotas. You can also monitor the current utilization of your account-level quotas. View the full article
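A quick boto3 sketch of pulling the current DynamoDB quota values through Service Quotas; the output formatting is illustrative only.

```python
import boto3

quotas = boto3.client("service-quotas")

# List the current values of all DynamoDB quotas in this account and Region.
paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="dynamodb"):
    for quota in page["Quotas"]:
        print(f'{quota["QuotaName"]}: {quota["Value"]}')
```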
  5. NoSQL Workbench for Amazon DynamoDB is a client-side application to help visualize and build scalable, high-performance data models. Starting today, NoSQL Workbench adds support for table and global secondary index (GSI) control plane operations such as CreateTable, UpdateTable, and DeleteTable. View the full article
  6. Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data at scale. View the full article
  7. Amazon DynamoDB global tables are now available in the Europe (Milan) and Europe (Stockholm) Regions. With global tables, you can give massively scaled, global applications local access to a DynamoDB table for fast read and write performance. You also can use global tables to replicate DynamoDB table data to additional AWS Regions for higher availability and disaster recovery. View the full article
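For illustration, adding a replica in one of the newly supported Regions might look like the following boto3 sketch. The table name is a placeholder, and the table is assumed to already meet the global tables prerequisites (for example, streams enabled).

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

# Add a replica of an existing table in the Europe (Stockholm) Region.
dynamodb.update_table(
    TableName="Orders",  # hypothetical table name
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-north-1"}},
    ],
)
```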
  8. You now can use Amazon DynamoDB as a source data store with AWS Glue Elastic Views to combine and replicate data across multiple data stores—without having to write custom code. With AWS Glue Elastic Views, you can use Structured Query Language (SQL) to quickly create a virtual table—called a view—from multiple source data stores. Based on this view, AWS Glue Elastic Views copies data from each source data store and creates a replica—called a materialized view—in a target data store. AWS Glue Elastic Views monitors continuously for changes to data in your source data stores, and provides updates to your target data stores automatically, ensuring that data accessed through the materialized view is always up to date. View the full article
  9. AWS Pricing Calculator now supports Amazon DynamoDB. Estimate the cost of DynamoDB workloads before you build them, including the cost of features such as on-demand capacity mode, backup and restore, DynamoDB Streams, and DynamoDB Accelerator (DAX). View the full article
  10. With Amazon Kinesis Data Streams for Amazon DynamoDB, you can capture item-level changes in your DynamoDB tables as a Kinesis data stream. You can enable streaming to a Kinesis data stream on your table with a single click in the DynamoDB console, or via the AWS API or AWS CLI. View the full article
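The equivalent of that single click via the SDK might look like this boto3 sketch; the table name and stream ARN are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Start replicating item-level changes from the table into a Kinesis data stream.
dynamodb.enable_kinesis_streaming_destination(
    TableName="Orders",  # hypothetical table
    StreamArn="arn:aws:kinesis:us-east-1:111122223333:stream/orders-changes",
)
```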
  11. You now can use PartiQL (a SQL-compatible query language)—in addition to already-available DynamoDB operations—to query, insert, update, and delete table data in Amazon DynamoDB. PartiQL makes it easier to interact with DynamoDB and run queries in the AWS Management Console. Because PartiQL is supported for all DynamoDB data-plane operations, it can help improve the productivity of developers by enabling them to use a familiar, structured query language to perform these operations. View the full article
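For a flavor of PartiQL against DynamoDB, here is a small boto3 sketch using the ExecuteStatement operation; the table and item are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Insert an item with PartiQL instead of PutItem.
dynamodb.execute_statement(
    Statement="INSERT INTO \"Orders\" VALUE {'pk': 'order#1001', 'status': 'NEW'}"
)

# Query it back with a parameterized SELECT.
response = dynamodb.execute_statement(
    Statement='SELECT * FROM "Orders" WHERE pk = ?',
    Parameters=[{"S": "order#1001"}],
)
print(response["Items"])
```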
  12. You now can restore Amazon DynamoDB tables even faster when recovering from data loss or corruption. The increased efficiency of restores and their ability to better accommodate workloads with imbalanced write patterns reduce table restore times across base tables of all sizes and data distributions. To accelerate the speed of restores for tables with secondary indexes, you can exclude some or all secondary indexes from being created with the restored tables. View the full article
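A hedged boto3 sketch of a point-in-time restore that skips secondary index creation; the table names are placeholders, and it assumes that passing empty index override lists restores the table without its indexes.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Restore to the latest restorable time, skipping secondary index creation
# to shorten the restore (indexes can be rebuilt afterwards).
dynamodb.restore_table_to_point_in_time(
    SourceTableName="Orders",          # hypothetical source table
    TargetTableName="Orders-restored",
    UseLatestRestorableTime=True,
    GlobalSecondaryIndexOverride=[],   # empty list: restore without any GSIs
    LocalSecondaryIndexOverride=[],
)
```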
  13. Now you can export your Amazon DynamoDB table data to your data lake in Amazon S3, and use other AWS services such as Amazon Athena, Amazon SageMaker, and AWS Lake Formation to analyze your data and extract actionable insights. No code-writing is required. View the full article
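A minimal boto3 sketch of kicking off such an export; the table ARN, bucket, and prefix are placeholders, and the table is assumed to have point-in-time recovery enabled.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Kick off a full export of the table to S3; no code runs against the table
# itself, and Athena or Lake Formation can consume the output.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/Orders",  # placeholder
    S3Bucket="example-data-lake",
    S3Prefix="dynamodb/orders/",
    ExportFormat="DYNAMODB_JSON",
)
```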
  14. With Amazon DynamoDB global tables, you can give massively scaled, global applications local access to DynamoDB tables for fast read and write performance. All of your data in DynamoDB is encrypted by default using the AWS Key Management Service (KMS). Starting today, you can now choose a customer managed key for your global tables, giving you full control over the key used for encryption of your DynamoDB data replicated using global tables. Customer managed keys also come with full AWS CloudTrail monitoring so you can view every time the key was used or accessed. View the full article
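For illustration, creating a table encrypted with a customer managed key might look like the following boto3 sketch; the table name and key alias are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Create a table encrypted with a customer managed KMS key instead of the
# default AWS owned key; replicas added later can specify their own keys.
dynamodb.create_table(
    TableName="Orders",  # hypothetical table
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/orders-table-key",  # placeholder key alias
    },
)
```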
  15. This is a tale with many twists and turns, a tale of observation, analysis and optimisation, of elation and disappointment. It starts with disk space.

Wind back four years to get the background: Weaveworks created the Cortex project, which the CNCF have recently graduated to "incubating" status. Cortex is a time-series database system based on Prometheus. We run Cortex in production as part of Weave Cloud, ingesting billions of metrics from clusters all over the world and serving insight and analysis to their devops owners. I spend one week out of four on SRE duties for Weave Cloud, responding to alerts and looking for ways to make the system run better. Lessons learned from this then feed into our Weave Kubernetes commercial product.

Coming on shift September 10th, I noticed that disk consumption by Cortex was higher than I remembered. We expect the product to grow over time, and thus to use more resources, but looking at the data there had been a marked jump a couple of weeks earlier, and consumption by all customers had jumped at the same time. It had to be caused by something at our end.

Bit more background: Cortex doesn't write every sample to the store as it comes in; instead it compresses hours of data into "chunks" which are much more efficient to save and retrieve. But machines sometimes crash, and if we lost a server we wouldn't want that to impact our users, so we replicate the data to three different servers. Distribution and segmentation of time-series are very carefully arranged so that, when it comes time to flush each chunk to the store, the three copies are identical and we only store the data once.

Reason I'm telling you this is, by looking at statistics about the store, I could see this was where the increased disk consumption was coming from: the copies very often did not match, so data was stored more than once. This chart shows the percentage of chunks detected as identical: on the left is from a month earlier, and on the right is the day when I started to look into the problem.

OK, what causes Cortex chunks to be non-identical? Over to Jaeger to see inside a single 'push' operation: The Cortex distributor replicates incoming data, sending it to ingesters which compress and eventually store the chunks. Somehow, calls to ingesters were not being served within the two second deadline that the distributor imposes.

Well that was a surprise, because we pay a lot of attention to latency, particularly the "p99 latency" that tells you the one-in-a-hundred situation. P99 is a good proxy for what customers may occasionally experience, and particularly notable if it's trending worse. Here's the chart for September 10th - not bad, eh?

But, salutary lesson: Histograms Can Hide Stuff. Let's see what the 99.9th centile looks like: So one in a thousand operations take over ten times as long as the p99 case! By the way, this is the "tail latency" in the title of this blog post: as we look further and further out into the tail of the distribution, we can find nasty surprises.

That's latency reported on the serving side; from the calling side it's clearer we have a problem, but unfortunately the histogram buckets here only go up to 1 second: Here's a chart showing the rate of deadline-exceeded events that day: for each one of these the data samples don't reach one of the replicas, leading to the chunks-not-identical issue: It's a very small fraction of the overall throughput, but enough to drive up our disk consumption by 50%.

OK, what was causing these slow response times?
I love a good mystery, so I threw myself into finding the answer. I looked at:

Overloading. I added extra CPUs and RAM to our cloud deployment, but still the occasional delays continued.

Locking. Go has a mutex profile, and after staring at it for long enough I figured it just wasn't showing me any hundred-millisecond delays that would account for the behaviour.

Blocking. Go has this kind of profile too, which shows when one part of the program is hanging around waiting for something like IO, but it turns out this describes most of Cortex. Nothing learned here.

I looked for long-running operations which could be chewing up resources inside the ingester; one in particular from our Weave Cloud dashboard service was easily cached, so I did that, but still no great improvement.

One of my rules of thumb when trying to improve software performance is "It's always memory". (Perhaps cribbed from Richard Sites' "It's the Memory, Stupid!", but he was talking about microprocessor design). Anyway, looking at heap profiles threw up one candidate: the buffers used to stream data for queries could be re-used. I implemented that and the results looked good in the staging area, so I rolled it out to production. Here's what I saw in the dashboard; rollout started at 10:36GMT: I was ecstatic. Problem solved!

But. Let's just open out that timescale a little. A couple of hours after the symptom went away, it was back again! Maybe only half as bad, but I wanted it fixed, not half-fixed.

OK, what do we do when we can't solve a performance problem? We stare at the screen for hours and hours until inspiration strikes. It had been great for a couple of hours. What changed? Maybe some customer behaviour - maybe someone started looking at a particular page around 12:30? Suddenly it hit me. The times when performance was good lined up with the times that DynamoDB was throttling Cortex. What the? That can't possibly be right.

About throttling: AWS charges for DynamoDB both by storage and by IO operations per second, and it's most cost-effective if you can match the IO provision to demand. If you try to go faster than what you're paying for, DynamoDB will throttle your requests, but because Cortex is already holding a lot of data in memory we don't mind going slowly for a bit. The peaks and troughs even out and we get everything stored over time.

So that last chart above shows the peaks, when DynamoDB was throttling, and the troughs, when it wasn't, and those different regions match up exactly to periods of high latency and low latency. Still doesn't make sense. The DB storage side of Cortex runs completely asynchronously to the input side, which is where the latency was.

Well, no matter how impossible it seemed, there had to be some connection. What happens inside Cortex when DynamoDB throttles a write? Cortex waits for a bit then retries the operation. And it hit me: when there is no throttling, there is no waiting. Cortex will fire chunks into DynamoDB as fast as it will take them, and that can be pretty darn fast. Cortex triggers those writes from a timer - we cut chunks at maximum 8 hours - and that timer runs once a minute. In the non-throttled case there would be a burst of intense activity at the start of every minute, followed by a long period where things were relatively quiet. If we zoom right in to a single ingester we can see this in the metrics, going into a throttled period around 10:48:

Proposed solution: add some delays to spread out the work when DynamoDB isn't throttling.
We already use a rate-limiter from Google elsewhere in Cortex, so all I had to do was compute a rate which would allow all queued chunks to be written in exactly a minute. The code for that still needs a little tweaking as I write this post. That new rate-limiting code rolled out September 16th, and I was very pleased to see that the latency went down and this time it stayed down: And the rate at which chunks are found identical, which brings down disk consumption, doesn’t recover until 8 hours after a rollout, but it’s now pretty much nailed at 66% where it should be: View the full article
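Cortex itself is written in Go and uses a Google rate limiter, so the following is only a Python sketch of the pacing idea described above: compute a rate that drains the queued chunks over the flush period instead of bursting them all at the start of the minute. The function and parameter names are made up.

```python
import time

def flush_queued_chunks(chunks, store_write, period_seconds=60.0):
    """Spread chunk writes evenly over the flush period instead of bursting.

    A sketch only: Cortex uses golang.org/x/time/rate; this shows the same
    pacing idea in Python.
    """
    if not chunks:
        return
    # Time slice that drains the whole queue in exactly one period.
    interval = period_seconds / len(chunks)
    for chunk in chunks:
        start = time.monotonic()
        store_write(chunk)                       # e.g. a DynamoDB write call
        elapsed = time.monotonic() - start
        # Sleep off whatever is left of this chunk's time slice.
        time.sleep(max(0.0, interval - elapsed))
```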
  16. Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS

The metrics point to crypto still being a toy until it can achieve the real-world business scale demonstrated by Amazon DynamoDB.

14 transactions per second. No matter how passionate you may be about the aspirations and future of crypto, it's the metric that points out that when it comes to actual utility, crypto is still mostly a toy. After all, pretty much any real world problem, including payments, e-commerce, remote telemetry, business process workflows, supply chain and transport logistics, and others, requires many, many times this bandwidth to handle its current business data needs — let alone future ones. Unfortunately the crypto world's current solutions to this problem tend to either blunt the advantages of decentralization (hello, sidechains!) or look like clumsy bolt-ons that don't close the necessary gaps.

Real World Business Scale

Just how big is this gap, and what would success look like for crypto scalability? We can see an actual example of both real-world transaction scale and what it would take to enable migrating actual business processes to a new database technology by taking a look at Amazon's 2019 Prime Day stats. The AWS web site breaks down Amazon retail's adoption and usage of NoSQL (in the form of DynamoDB) nicely: Amazon DynamoDB supports multiple high-traffic sites and systems including Alexa, the Amazon.com sites, and all 442 Amazon fulfillment centers. Across the 48 hours of Prime Day, these sources made 7.11 trillion calls to the DynamoDB API, peaking at 45.4 million requests per second.

45 million requests per second. That's six zeros more than Bitcoin or Eth. Yikes. And this is just one company's traffic, and only a subset at that (after all, Amazon is a heavy user of SQL databases as well as DynamoDB), so the actual TPS DynamoDB is doing at peak is even higher than the number above. Talk about having a gap to goal…and it doesn't stop there. If you imagine using a blockchain (with or without crypto) for a real-world e-commerce application and expect it to support multiple companies in a multi-tenanted fashion, want it to replace legacy database systems, and need a little headroom to grow — a sane target might look like 140 million transactions per second. That's seven orders of magnitude from where we are today.

The Myth of Centralization

Why are these results so different? Let's examine this dichotomy a little closer. First, note that DynamoDB creates a fully ordered ledger, known as a stream, for each table. Each stream is totally ordered and immutable; once emitted, it never changes. DynamoDB is doing its job by using a whole lot of individual servers communicating over a network to form a distributed algorithm that has a consensus algorithm at its heart. Cross-table updates are given ACID properties through a transactional API. DynamoDB's servers don't "just trust" the network (or other parts of itself), either — data in transit and at rest is encrypted with modern cryptographic protocols and other machines (or the services running on them) are required to sign and authenticate themselves when they converse. Any of this sound familiar? The classic, albeit defensive, retort to this observation is, "Well, sure, but that's a centralized database, and decentralized data is so much harder that it just has to be slower." This defense sounds sort of plausible on the surface, but it doesn't survive closer inspection. First, let's talk about centralization.
A database running in single tenant mode with no security or isolation can be very fast indeed — think Redis or a hashtable in RAM, either of which can achieve bandwidth numbers like the DynamoDB rates quoted above. But that's not even remotely a valid model for how a retail giant like Amazon uses DynamoDB. Different teams within Amazon (credit card processing, catalog management, search, website, etc.) do not get to read and write each other's data directly — these teams essentially assume they are mutually untrustworthy as a defensive measure. In other words, they make a similar assumption to the one that a cryptocurrency blockchain node makes about other nodes in its network! On the other side, DynamoDB supports millions of customer accounts. It has to assume that any one of them can be an evildoer and that it has to protect itself from customers and customers from each other. Amazon retail usage gets exactly the same treatment any other customer would…no more or less privileged than any other DynamoDB user. Again, this sounds pretty familiar if you're trying to handle money movement on a blockchain: You can't trust other clients or other nodes. These business-level assumptions are too similar to explain a 7 order of magnitude difference in performance. We'll need to look elsewhere for an explanation.

Is it under the hood?

Now let's look at the technology…maybe the answer is there. "Consensus" often gets thrown up as the reason blockchain bandwidth is so low. While DynamoDB tables are independent outside of transaction boundaries, it's pretty clear that there's a lot of consensus, in the form of totally ordered updates, many of which represent financial transactions of some flavor in those Prime Day stats. Both blockchains and highly distributed databases like DynamoDB need to worry about fault tolerance and data durability, so they both need a voting mechanism. Here's one place where blockchains do have it a little harder: Overcoming Byzantine attacks requires a larger majority (2/3 +1) than simply establishing a quorum (1/2 +1) on a data read or write operation. But the math doesn't hold up: At best, that accounts for 1/6th of the difference in bandwidth between the two systems, not 7 orders of magnitude.

What about Proof of Work? Ethereum, Bitcoin and other PoW-based blockchains intentionally slow down transactions in order to be Sybil resistant. But if that were the only issue, PoS blockchains would be demonstrating results similar to DynamoDB's performance…and so far, they're still not in the ballpark. Chalk PoW-versus-PoS up to a couple orders of magnitude, though — it's at least germane as a difference.

How about the network? One difference between two nodes that run on the open Internet and a constellation of servers in (e.g.) AWS EC2 is that the latter run on a proprietary network. Intra-region, and especially intra-Availability Zone ("AZ") traffic can easily be an order of magnitude higher bandwidth and an order of magnitude lower latency than open Internet-routed traffic, even within a city-sized locale. But given that most production blockchain nodes at companies like Coinbase are running in AWS data centers, this also can't explain the differences in performance. At best, it's an indication that routing in blockchains needs more work…and still leaves 3 more orders of magnitude unaccounted for.

What about the application itself?
Since the Amazon retail results are for multiple teams using different tables, there's essentially a bunch of implicit sharding going on at the application level: Two teams with unrelated applications can use two separate tables, and neither DynamoDB nor these two users will need to order their respective data writes. Is this a possible semantic difference? For a company like Amazon retail, the teams using DynamoDB "know" when to couple their tables (through use of the transaction API) and when to keep them separate. If a cryptocurrency API requires the blockchain to determine on the fly whether (and how) to shard by looking at every single incoming transaction, then there's obviously more central coordination required. (Oh, the irony.) But given that we have a published proof point here that a large company obviously will perform application level sharding through its schema design and API usage, it seems clear that this is a spurious difference — at best, it indicates an impoverished API or data model on the part of crypto, not an a priori requirement that a blockchain has to be slow in practice. In fact, we have an indication that this dichotomy is something crypto clients are happy to code to: smart contracts. They're both 1) distinguished in the API from "normal" (simple transfer) transactions and 2) tend to denote their participants in some fashion. It's easy to see the similarity between smart contract calls in a decentralized blockchain and use of the DynamoDB transaction API between teams in a large "centralized" company like Amazon retail. Let's assume this accounts for an order of magnitude; 2 more to go.

Managed Services and Cloud Optimization

One significant difference in the coding practices of a service like DynamoDB versus pretty much any cryptocurrency is that the former is highly optimized for running in the cloud. In fact, you'd be hard pressed to locate a line of code in DynamoDB's implementation that hasn't been repeatedly scrutinized to see if there's a way to wring more performance out of it by thinking hard about how and where it runs. Contrast this to crypto implementations, which practically make it a precept to assume the cloud doesn't exist. Instance selection, zonal placement, traffic routing, scaling and workload distribution…most of the practical knowledge, operational hygiene, and design methodology learned and practiced over the last decade goes unused in crypto. It's not hard to imagine that accounts for the remaining gap.

Getting Schooled on Scalability

Are there design patterns we can glean from a successfully scaled distributed system like DynamoDB as we contemplate next-generation cryptocurrency blockchain architectures? We can certainly "reverse engineer" some requirements by looking at how a commercially viable solution like Amazon's Prime Day works today:

  • Application layer (client-provided) sharding is a hard requirement. This might take a more contract-centric form in a blockchain than in a NoSQL database's API, but it's still critical to involve the application in deciding which transactions require total ordering versus partial ordering versus no ordering. Partial ordering via client-provided grouping of transactions in particular is virtually certain to be part of any feasible solution.

  • Quorum voting may indeed be a bottleneck on performance, but Byzantine resistance per se is a red herring. Establishing a majority vote on data durability across mutually authenticated storage servers with full encoding on the wire isn't much different from a Proof-of-Stake supermajority vote in a blockchain. So while it matters to "sweat the details" on getting this inner loop efficient, it can't be the case that consensus per se fundamentally forces blockchains to be slow.

  • Routing matters. Routing alone won't speed up a blockchain by 7 orders of magnitude, but smarter routing might shave off a factor of 10.

  • Infrastructure ignorance comes at a cost. Cryptocurrency developers largely ignore the fact that the cloud exists (certainly that managed services, the most modern incarnation of the cloud, exist). This is surprising, given that the vast majority of cryptocurrency nodes run in the cloud anyway, and it almost certainly accounts for at least some of the large differential in performance. In a system like DynamoDB you can count on the fact that every line of code has been optimized to run well in the cloud. Amazon retail is also a large user of serverless approaches in general, including DynamoDB, AWS Lambda, and other modern cloud services that wring performance and cost savings out of every transaction.

We're not going to solve blockchain scaling in a single article, but there's a lot we can learn by taking a non-defensive look at the problem and comparing it to the best known distributed algorithms in use by commercial companies today. Only by being willing to learn and adapt ideas from related areas and applications can blockchains and cryptocurrencies grow into the lofty expectations that have been set for them…and claim a meaningful place in scaling up to handle real-world business transactions.

Crypto can't scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story. View the full article
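As a footnote to the comparison above, the "transactional API" the article refers to is DynamoDB's TransactWriteItems. Here is a minimal boto3 sketch of coupling two otherwise independent tables in one atomic write; the table names, keys, and amounts are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Atomically debit one table and record the transfer in another;
# both writes commit together or neither does.
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "Wallets",  # hypothetical tables and keys
                "Key": {"account": {"S": "alice"}},
                "UpdateExpression": "SET balance = balance - :amt",
                "ConditionExpression": "balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
        {
            "Put": {
                "TableName": "Ledger",
                "Item": {
                    "entry_id": {"S": "txn#42"},
                    "amount": {"N": "25"},
                    "payee": {"S": "bob"},
                },
            }
        },
    ]
)
```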