Showing results for tags 'redis'.

Found 11 results

  1. Redis is taking it in the chops, as both maintainers and customers move to the Valkey Redis fork. View the full article
  2. Redis, or Remote Dictionary Server, is an amazing open-source data structure store. Although Redis was an accidental invention, it beats traditional caching systems in aspects like performance and speed, thanks to its low-latency data access. You can use it for a wide range of tasks such as caching, message brokering, real-time data analytics, and implementing rich data structures. These features lay the foundation for many famous real-time applications like Instagram, Twitter, and Shopify. So, in this quick blog, we explain a simple method to install the Redis CLI on Linux.
How to Install Redis CLI on Linux
First, update the existing packages to avoid errors caused by outdated dependencies:
sudo apt update
Now, install the Redis CLI using the following command:
sudo apt install redis-tools -y
After installing, run the following command to check the currently installed version:
redis-cli --version
The command should return the Redis CLI version.
Conclusion
Redis CLI is the tool that connects you to, and lets you interact with, a Redis server. This quick guide briefly explained how to install the Redis CLI on Linux: update your packages, then install the redis-tools package. Lastly, always verify newly installed packages on your devices. View the full article
  3. Hackers are exploiting misconfigured servers running Docker, Confluence, and other services in order to drop cryptocurrency miners. Researchers at Cado Security Labs recently observed one such malware campaign, noting how threat actors are using multiple “unique and unreported payloads”, including four Golang binaries, to automatically discover Apache Hadoop YARN, Docker, Confluence, and Redis hosts vulnerable to CVE-2022-26134, an unauthenticated, remote OGNL injection vulnerability that allows for remote code execution. This flaw was first discovered two years ago, when threat actors targeted Confluence servers (typically the confluence user on Linux installations). At the time, the researchers said internet-facing Confluence servers were at “very high risk”, and urged IT teams to apply the patch immediately. It seems that even now, two years later, not all users have installed the available fixes.
Unidentified threat
The tools are also designed to exploit the flaw and drop a cryptocurrency miner, spawn a reverse shell, and enable persistent access to the compromised hosts. Cryptocurrency miners are popular among cybercriminals, as they take advantage of the high compute power of a server to generate almost untraceable profits. One of the most popular crypto-miners out there is called XMRig, a small program that mines the Monero currency. On the victim’s side, however, not only are their servers unusable, but the miners also rack up their electricity bill fairly quickly. For now, Cado is unable to attribute the campaign to any specific threat actor, saying it would need the help of law enforcement for that: “As always, it’s worth stressing that without the capabilities of governments or law enforcement agencies, attribution is nearly impossible – particularly where shell script payloads are concerned,” it said. Still, it added that the shell script payloads are similar to ones seen in attacks by TeamTNT and WatchDog.
View the full article
  4. Since we launched Memorystore for Redis Cluster in preview, customers across a variety of industries including banking, retail, ads, manufacturing, and social media have taken advantage of the performance and scalability of Memorystore for Redis Cluster. Today we’re thrilled to announce the general availability (GA) of Memorystore for Redis Cluster, providing a 99.99% SLA with replicas enabled. With Memorystore for Redis Cluster, you get a fully managed, fully open-source software (OSS) compatible Redis Cluster offering with zero-downtime scaling (in or out), providing up to 60 times more throughput than Memorystore for Redis, with microsecond latency. Memorystore for Redis Cluster intelligently places primary and replica nodes across availability zones and manages automatic failovers to maximize availability and reduce complexity. Performance-sensitive customers like PLAID, Wright Flyer Studios, and Reddit are excited about this GA and the 99.99% SLA. “Memorystore for Redis Cluster provides us the scalability and manageability required for our workloads, allowing us to use Memorystore in far more meaningful ways. The OSS cluster client compatibility also provides an easier migration path from existing self-managed Redis Clusters. Now that it is GA with a 99.99% SLA, we look forward to adopting it for our caching needs on Google Cloud!” - Stanley Feng, Senior Engineering Manager at Reddit
Performance and scaling
Memorystore for Redis Cluster takes Memorystore to the next level of scale and performance, supporting 10 times more data and providing up to 60 times more throughput than Memorystore for Redis, with microsecond latency. Memorystore’s vetted, production-ready infrastructure provides a pre-configured and optimal Redis platform to supercharge your performance.
With zero-downtime scaling, you can start small and grow over time, adding only the capacity required by workload demands, without taking downtime or interrupting your production applications. Unlike client-sharded caching architectures, which are prone to data loss during scaling operations, the simple scaling APIs of Memorystore for Redis Cluster let you grow or shrink your clusters and optimize costs without data loss. The direct-access (proxy-less) architecture of Memorystore for Redis Cluster ensures that throughput scales linearly as nodes are added to the cluster, providing ultra-low and predictable latencies and avoiding the cost and latency overhead of a proxy-based architecture. Applibot, a CyberAgent company, produces and operates smartphone games and services, and depends heavily on Redis and its ultra-fast performance: “After relying on Memorystore, we’re excited for the launch of Memorystore for Redis Cluster, which will dramatically improve performance with its ultra low and predictable latencies. We’re also very excited to take advantage of Memorystore for Redis Cluster’s easy-to-use APIs which make scaling clusters in or out simple, fast, and non-disruptive, enabling us to dynamically size the clusters based on workload demands.” - Naoto Ito, Lead of Backend Engineering at Applibot
High availability
Memorystore for Redis Cluster is built with a robust control plane that provides high availability with data replication across multiple fault domains (zones), automatic failovers, and intelligent node placement for both primary and replica nodes. Memorystore for Redis Cluster can handle both node and zonal outages by efficiently failing over to the replica nodes, promoting them to primaries, and automatically repairing failed nodes with seamless orchestration. Memorystore for Redis Cluster’s durable architecture is designed for reliability, as Redis clients directly access the shards (primary and replica nodes).
This design avoids the risk of a single point of failure (as each shard is designed to fail independently), which is inherent in proxy-based architectures. In addition, Memorystore for Redis Cluster’s control plane utilizes Create-Before-Destroy and graceful-shutdown maintenance strategies to eliminate downtime or workload interruptions. Instead of upgrading active cluster nodes in place, Memorystore first creates an entirely new replica with the new software (at no extra charge to you), then coordinates a lossless failover from the old node to the new node, and lastly removes the old node gracefully in a gradual process that ensures minimal-to-no impact to your application. NTT DOCOMO, Japan's largest telecommunications company, is thrilled with the general availability of Memorystore for Redis Cluster: “Memorystore for Redis Cluster’s resilient design and 99.99% SLA gives us confidence that our clusters are durable and resilient. Memorystore’s sophisticated control plane and maintenance orchestration ensures minimal impact to our application during maintenance and we’re very excited to utilize this managed offering so we can focus our efforts on creating value for our customers.” - Masatoshi Kato, Manager, Service Design Department, NTT DOCOMO
Integrated and automated with Google Cloud
Memorystore for Redis Cluster is built with Private Service Connect (PSC) for private and secure connectivity by default. With PSC, we simplified the provisioning experience so you can easily configure private networking with an automated single-step process, avoid the security issues of bidirectional access, and not be limited by the quota issues posed by Virtual Private Cloud (VPC) peering-based implementations. We also simplified cluster endpoint management by requiring only two IP addresses for any size cluster (even for 250 Redis nodes!), thereby addressing IP exhaustion and contiguous expansion problems.
With PSC, client applications can easily access the cluster from any region, and advanced security controls are available, like separation of permissions for Network and Redis admins and control of the IP address space used for cluster endpoints. Memorystore for Redis Cluster comes with built-in integration with Google Identity and Access Management (IAM) so you can easily manage and control access to your clusters, especially for a microservice-based client architecture. We also provide out-of-the-box Audit Logging, in-transit encryption with TLS, and integration with Cloud Monitoring, so your cluster metrics are accessible and actionable.
Fully managed vs. do-it-yourself
For many customers, it’s the fully managed nature of Memorystore for Redis Cluster that entices them to migrate. You can offload the tedious and unrewarding work of tuning Redis performance on Compute Engine VMs, orchestrating complex cluster topologies, optimizing networking implementations, managing maintenance, and striving for self-managed high availability in the face of outages and upgrades. Customers can rely on Google to continually invest in building and supporting new Redis features and delivering them with non-disruptive maintenance operations. Now you can focus on building value for your organization instead of managing Redis Clusters. To make adoption even more enticing, Memorystore Committed Use Discounts (CUDs) are already available for Memorystore for Redis Cluster and are fungible with Memorystore for Redis as well as Memorystore for Memcached, enabling savings of 40% with a three-year CUD and 20% with a one-year CUD.
Waze, a free navigation and driving app active in more than 185 countries, is adopting Memorystore for Redis Cluster to take advantage of the fully managed offering and supercharge its applications’ performance: “Waze is excited to use Memorystore for Redis Cluster as our primary cache solution, taking advantage of the 99.99% SLA, zero-downtime scalability, and flexible Redis data types. The ultra-fast performance has been instrumental in helping us scale our platform and provide a better experience for our users.” - Yuval Kamran, DevOps Engineer
Getting started
At this point, you may be wondering how to get started or how much effort it would take to migrate to Memorystore for Redis Cluster. Good news! We’ve already published a zero-downtime migration blog using OSS RIOT, with code snippets and a detailed step-by-step walkthrough, so you can quickly and easily migrate your Redis implementations from any source, including self-managed or third-party offerings, without interrupting your applications. You can easily get started today by heading over to the Cloud console and creating a cluster, and then scaling in or out with just a few clicks. Please let us know if you have any feedback by reaching out to us at cloud-memorystore-pm@google.com.
  5. Ahead of the general availability of Memorystore for Redis Cluster, a fully managed and highly scalable low-latency Redis Cluster service, we want to share some of its differentiating capabilities in a series of “under the hood” blogs. Just like other Google Cloud services such as AlloyDB, where Google made significant enhancements to open-source software, we’ve similarly made under-the-hood improvements to the Redis engine. This blog is the first of a series discussing the enhancements we’ve made to Redis Cluster.
Why Redis Cluster?
With the continued growth of Redis, many developers have turned to Redis Cluster for its improved scalability and performance. In fact, the primary motivation for the release of OSS (open source software) Redis Cluster in 2015 was the need for a highly scalable, in-memory data store that can provide lookups at ultra-low latencies. Because Redis is single-threaded, a clustered offering that scales “out” (or “horizontally”) by adding additional nodes provides vastly superior performance over a single “standalone” node that only scales “up” by growing the VM size. Redis Cluster integrates high availability directly within the engine, eliminating the need for external tools such as Sentinel. Moreover, the architecture of Redis Cluster offers operational flexibility. During updates or maintenance, only specific segments of the cluster are affected, allowing for continuous service while minimizing the impact on user data. Despite the tantalizing opportunity to scale performance with Redis Cluster, the lack of reliable automation and the risky nature of Redis Cluster scaling operations, even today, often limit users from realizing its full benefits. In this blog we detail the specific enhancements the Memorystore team made to Redis to address the shortcomings of OSS scaling. For each of these enhancements, we’ve been actively engaging with the OSS Redis community, sharing our designs, code, and improvements.
Contributing to OSS and keeping our work aligned with the core Redis Cluster project ensures compatibility, embodies our commitment to open-source principles, and helps to avoid any divergent behaviors. With Memorystore for Redis Cluster, we’ve made significant enhancements to the Redis engine to de-risk both scale-in and scale-out operations. With true zero-downtime scalability, you can take full advantage of our pay-as-you-go model, increasing capacity ahead of peak events and shrinking it afterwards, so you only pay for what you need. Consider the Black Friday/Cyber Monday sales period, when many businesses must prepare for an overwhelming volume of transactions and user interactions. By leveraging the fully managed Memorystore for Redis Cluster, you can quickly and efficiently scale your infrastructure in preparation for this peak and subsequently reduce it when demand subsides, ensuring seamless user experiences while optimizing costs.
Issues with OSS Redis scaling
Below, we’ve detailed the challenges associated with scaling OSS Redis. Then, in the following section, we’ll discuss how we’re addressing these risks. In theory, scaling out a self-managed Redis Cluster is fairly straightforward. If you’re running on a service like Compute Engine, you first provision a new set of VMs with the appropriate resource and network configurations. Next, you employ the Redis CLI to integrate these nodes into the cluster, designating some as primary nodes and others as replicas. Finally, you use the Redis CLI to redistribute the slots to these newly integrated nodes. However, in practice things are often much, much more complicated. Scaling a cluster is often time-sensitive (perhaps you received an alert about high CPU utilization or low memory) and, like many manual operations, error-prone. And if you’re managing a cluster with many nodes or with replicas, the complexity only increases.
Further, in the current state of OSS Redis Cluster, the resharding process isn't entirely autonomous. An external agent, running outside of Redis, is required to drive the procedure by extracting keys from the source node and subsequently dispatching them to the target node. Redis Cluster partitions its data across 16,384 slots, each assigned to a specific node. The primary reason for resharding is to recalibrate storage and compute capacity in line with changing workloads, ensuring that nodes are adequately primed to manage their data responsibilities. Each slot is in one of three "migration states" ("stable", "importing", or "migrating"), each of which plays a crucial role in the resharding mechanism. Redis maintains these states in memory. The "stable" state signifies the slot’s regular operational state. The node with a slot in the "importing" state is not yet the data owner and only serves requests prefixed with the ASKING command. The node with a slot in the "migrating" state retains data ownership. If keys requested by clients aren't found on a given node, that node responds with an "-ASK" error, directing them to the target node that contains the keys. Clients then send the "ASKING" command to the target node to access the keys during the slot's migration. This intricate choreography of migration-state management not only adds significant overhead but also makes it challenging to identify and address all potential failure cases. The inherent complexity reduces the predictability and reliability of the system, making it difficult to design a straightforward error-recovery strategy, and hence impacts the overall robustness and performance of the OSS Redis system. During a scaling operation, the external agent leverages the CLUSTER SETSLOT command to manipulate the migration state of slots on both the originating and receiving nodes.
Subsequently, it applies CLUSTER GETKEYSINSLOT to extract the keys to be moved, and MIGRATE to move them from source to target, ensuring each key, associated with a slot by its hash value, is transported intact. Upon completion, the cluster's configuration across nodes updates, reflecting the new slot-to-node assignments. This synchronization is vital for maintaining a consistent and accurate view of the cluster's architecture. With this as the backdrop, here are the most common challenges with scaling OSS Redis Cluster.
1. Availability
High availability lapse in empty shards
In OSS Redis, empty shards (those without assigned slots) present operational challenges. For one, they lack identification until they complete the migration of their first slot, making it difficult for external agents to pinpoint issues without sophisticated state tracking. Moreover, even with replicas in place, these empty shards don't support automatic failover until they receive their initial slot. If a primary node in one of these shards becomes unresponsive during the import of its very first slot, it can lead to both service interruptions and potential data loss pertaining to that slot.
Single point of failure in slot-ownership finalization
Migration states in OSS Redis clusters aren't propagated to replicas. Consequently, during automatic failovers, slot migration states may be lost, and newly elected primaries might remain unaware of ongoing migrations. The lack of state continuity increases the risk of potential data loss when the original primaries experience a crash or out-of-memory (OOM) events. Rectifying this requires a sophisticated external agent, introducing complexity and operational overhead.
Higher impact on workload
The current OSS migration process is driven by an agent that runs externally to the Redis server.
The agent’s external nature both presents risks associated with the agent’s availability and durability as well as increases the risk of scaling activities because, for example, the agent cannot easily check available memory to determine a safe key count to migrate. Because of the agent’s limited control over data volume movement, scaling operations from that external agent can lead to prolonged blocks on running user workloads, directly affecting the cluster’s operations. Due to the agent’s lack of real-time visibility into both memory pressure and client workload levels, it becomes challenging for the agent to balance the migration operation with ongoing customer workloads. For instance, migration might inadvertently coincide with a surge in customer activity, amplifying the impact on user operations and potentially leading to pronounced service disruptions.
2. Reliability
Higher risk of OOM
Externally-driven migration poses a significant OOM risk, stemming from limited control over the amount of data migrated simultaneously. When handling the migration of large keys or dense data structures like a populated hash map, the source node can experience sudden and intense memory pressure. A distinct challenge for external agents is their lack of real-time visibility into this memory pressure. An out-of-memory condition not only jeopardizes the ongoing migration process but can also lead to the OOM killing of the Redis process, causing disruptions to the application itself.
Low resilience to transient errors
The migration process is susceptible to disruptions from issues like network glitches or node failovers. In such events, the external agent gets minimal error feedback from migration commands. To determine the root cause, the agent executes Redis commands on the affected node, adding load to it even while it’s managing customer workloads. The information retrieved can age quickly.
Additionally, resuming an interrupted migration demands explicit operations from the agent. Such high-touch interventions are not only tedious but also introduce a higher risk of errors.
3. Efficiency
Higher migration overhead
The existing OSS slot migration process involves multiple inter-process communications (IPCs) between Redis and the external agent. This process requires the agent to pull data from the source and then relay it to the target, effectively doubling the network bandwidth utilization. Moreover, this approach increases the collective CPU overhead. The data being read into the agent is processed and then dispatched to the target, resulting in redundant computational work on both the agent and the source. This not only strains resources but can also slow down the overall migration process, impacting performance.
4. Management
External dependency
The external execution of the slot migration protocol, detailed in the official Redis documentation, necessitates a sophisticated external agent for the migration's initiation, oversight, and conclusion. Beyond just initiating the migration and overseeing its progression, this agent effectively becomes a complex state machine. Each step or state within this machine not only progresses the data migration but must also account for error handling to ensure state consistency. Given the myriad of steps involved, each with its potential failure modes, maintaining this consistent state while navigating the nuances of each phase compounds the operational intricacies and risks.
Absence of stable shard identification
OSS Redis 7.0 introduced the CLUSTER SHARDS command, yet it still lacks a mechanism for constant, stable identification of each shard. This omission complicates operations, hindering precise referencing of specific shards and understanding of node-to-shard relationships, especially when slot ownership changes over time.
Solutions from Memorystore for Redis Cluster
To address these complexities and their associated risks, Memorystore for Redis Cluster provides users with zero-downtime scaling (in or out) from a single click or API call. In addition, the following four improvements to Memorystore for Redis Cluster systematically address the inherent challenges in traditional OSS Redis resharding, offering a platform with enhanced efficiency, reliability, and operational simplicity.
1. Engine-driven slot migration (addresses challenges #1, #2, #3, #4)
By moving the migration protocol into the Redis engine, Memorystore for Redis Cluster streamlines slot migration and reduces the overall complexity of scaling operations, improving the availability and durability of the cluster. Utilizing the engine's main thread, Redis directly manages migrations, eliminating the dependency on an external agent. This approach reduces network and CPU overhead as data is transferred directly from source to target. Moreover, the system's design inherently balances customer and migration workloads, adjusting the migration pace based on server activity. If interruptions occur, the architecture automatically resumes migration after a failover. This design provides Memorystore for Redis Cluster with a more efficient and resilient migration mechanism.
2. Introduction of shard ID (addresses challenge #4)
Memorystore for Redis Cluster introduced a shard ID, enabling direct referencing of a shard. This facilitates efficient verification of whether two nodes belong to the same shard, particularly in scenarios when nodes are down or don't own slots. By eliminating the need to depend on large slot ranges, the system becomes more robust and navigable. Google contributed this feature to OSS and it has been merged into OSS Redis 7.2.
3. Automatic failover in empty shards (addresses challenge #1)
Building on the concept of first-class-citizen shards, Memorystore for Redis Cluster enhances the Redis engine to initiate leader elections for empty shards. This ensures that migrated keys on the replicas remain accessible even if the primary node fails. This mechanism enhances reliability, especially during scaling transition periods.
4. Mitigating single points of failure in scaling (addresses challenge #1)
Memorystore for Redis Cluster introduced two key solutions: a. Migration states are now replicated to replicas, ensuring they're safeguarded during automatic failovers. b. Proper sequencing for slot ownership, preventing potential data loss if primary nodes crash during migrations. We have been working closely with the OSS community to incorporate these improvements into Redis 8.0.
Scaling, simplified
With Memorystore for Redis Cluster, you're unlocking the true potential of a scalable Redis Cluster that also provides microsecond latency. You can easily get started today by heading over to the Cloud console and creating a cluster, and then scaling in or out with just a few clicks. If you want to learn more about migrating to Memorystore for Redis Cluster, take a look at this step-by-step migration blog. Stay tuned for the next blog in this series of deep dives, and let us know if you have any feedback by reaching out to us at cloud-memorystore-pm@google.com.
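The 16,384-slot key partitioning that all of this resharding machinery moves around can be sketched in a few lines of Python (an illustration of the OSS Redis Cluster hashing scheme, not Memorystore code: keys are hashed with CRC16/XMODEM, and a non-empty {...} hash tag, when present, is hashed instead of the whole key):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to one of Redis Cluster's 16,384 hash slots.

    If the key contains a non-empty '{...}' hash tag, only the tag is hashed,
    which lets related keys be pinned to the same slot (and thus the same node).
    """
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # tag must be non-empty, per the cluster spec
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384
```

Keys that share a hash tag, such as b"{user1000}.following" and b"{user1000}.followers", map to the same slot; on a live cluster, CLUSTER KEYSLOT reports the same value this function computes.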
  6. Amazon MemoryDB for Redis is now a Payment Card Industry Data Security Standard (PCI DSS) compliant service. MemoryDB is a fully managed, Redis-compatible, in-memory database that provides low latency, high throughput, and durability at any scale. View the full article
  7. Amazon ElastiCache for Redis and Amazon MemoryDB for Redis now support natively storing and accessing data in the JavaScript Object Notation (JSON) format. With this launch, application developers can effortlessly store, fetch, and update their JSON data inside Redis without needing to manage custom code for serialization and deserialization. Using ElastiCache and MemoryDB, you can now efficiently retrieve and update specific portions of a JSON document without needing to manipulate the entire object, which can help improve performance and reduce cost. You can also search your JSON document contents using the JSONPath query syntax. View the full article
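As a rough sketch of what this looks like from redis-cli (the key name and document below are made up; the commands follow the Redis JSON command set that these services expose):

```shell
# Store a whole JSON document at a key (the $ path denotes the document root)
redis-cli JSON.SET user:100 $ '{"name":"Ada","visits":41,"tags":["dev"]}'

# Read back a single field instead of fetching and parsing the full object
redis-cli JSON.GET user:100 $.name

# Update nested values in place, server-side, without rewriting the document
redis-cli JSON.NUMINCRBY user:100 $.visits 1
redis-cli JSON.ARRAPPEND user:100 $.tags '"redis"'
```

Against a managed endpoint you would add the host and TLS options as appropriate; the point is that each command touches only the addressed portion of the document.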
  8. Redis is a popular key-value store with extensive features, including sharding, clustering, graph and time-series support, and much more, which has made it a favorite among developers. It has many of the features you need to build a web app and scale it to a large user base. In this article we will demonstrate how to install Redis on Debian Linux version 11. Let’s get started… View the full article
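For reference, the usual route on Debian 11 is a single-package install (a sketch assuming the stock Debian repositories; the full article may differ in details):

```shell
sudo apt update
sudo apt install -y redis-server          # installs the server along with redis-cli
sudo systemctl enable --now redis-server  # start it and enable it at boot
redis-cli ping                            # a running server replies PONG
```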
  9. You can now publish the Redis slow log from your Amazon ElastiCache for Redis clusters to Amazon CloudWatch Logs and Amazon Kinesis Data Firehose. The Redis slow log provides visibility into the execution time of commands in your Redis cluster, enabling you to continuously monitor the performance of these operations. You can choose to send these logs in either JSON or text format to Amazon CloudWatch Logs and Amazon Kinesis Data Firehose. View the full article
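For context, the slow log being published here is the standard Redis facility; its knobs can be sketched with redis-cli (threshold values below are illustrative, in microseconds):

```shell
# Record any command that takes longer than 10 ms to execute
redis-cli CONFIG SET slowlog-log-slower-than 10000
# Keep up to 128 entries in the in-memory slow log
redis-cli CONFIG SET slowlog-max-len 128
# Inspect the 10 most recent slow commands
redis-cli SLOWLOG GET 10
```

Note that on ElastiCache these parameters are managed through the cluster's parameter group rather than CONFIG SET, and the new log-delivery feature streams the resulting entries to CloudWatch Logs or Kinesis Data Firehose.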
  10. Amazon ElastiCache for Redis now supports Redis 6. This release brings several new and important features to Amazon ElastiCache for Redis. View the full article
  11. Amazon ElastiCache for Redis Global Datastore, which provides fully managed, fast, reliable and secure cross-region replication, is now available in an additional 6 regions. With expanded region support, Global Datastore is now available in Asia Pacific (Mumbai), South America (Sao Paulo), Europe (Paris), Canada Central (Montreal), and AWS GovCloud (US) Regions. View the full article