Redis Content on InfoQ
-
Valkey 8.0 Now Generally Available with Improved Memory Efficiency
The Linux Foundation has announced the general availability of Valkey 8.0, the open source in-memory storage solution developed as a successor to Redis. Valkey 8.0 introduces a dictionary per slot and embeds keys directly into dictionary entries, yielding up to 20% more capacity and allowing more keys to be stored per node.
-
How Canva Scaled Real-Time Collaboration with WebRTC: from WebSockets to Seamless P2P Communication
Canva recently shared how it implemented real-time mouse pointers for collaborative whiteboarding. The company chose a WebRTC-based solution to improve scalability, reduce latency, and lower backend load. Since WebRTC uses peer-to-peer communication, Canva can provide users with a smoother, more performant real-time experience than a traditional backend-based WebSocket-and-Redis solution.
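The pointer-sharing pattern can be approximated with a WebRTC data channel. The sketch below is a minimal illustration using the Python aiortc library rather than Canva's actual browser-based stack: two in-process peers exchange offer/answer directly (standing in for a real signaling service) and then stream pointer coordinates peer-to-peer.

```python
# Minimal sketch of sharing pointer positions over a WebRTC data channel.
# Uses aiortc with in-process "signaling" (copying SDP descriptions directly);
# a real deployment needs a signaling channel and runs in the browser.
import asyncio
import json

from aiortc import RTCPeerConnection


async def main():
    sender, receiver = RTCPeerConnection(), RTCPeerConnection()
    channel = sender.createDataChannel("mouse-pointers")

    @channel.on("open")
    def on_open():
        # Stream a few pointer positions peer-to-peer, no backend involved.
        for x, y in [(10, 20), (12, 24), (15, 30)]:
            channel.send(json.dumps({"x": x, "y": y}))

    @receiver.on("datachannel")
    def on_datachannel(remote_channel):
        @remote_channel.on("message")
        def on_message(message):
            print("peer pointer:", json.loads(message))

    # In-process signaling: exchange offer/answer directly.
    await sender.setLocalDescription(await sender.createOffer())
    await receiver.setRemoteDescription(sender.localDescription)
    await receiver.setLocalDescription(await receiver.createAnswer())
    await sender.setRemoteDescription(receiver.localDescription)

    await asyncio.sleep(1)  # allow the handshake and message delivery to finish
    await sender.close()
    await receiver.close()


asyncio.run(main())
```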
-
Netflix’s Pushy: Evolution of Scalable WebSocket Platform That Handles Hundreds of Millions of Concurrent Connections
Netflix shared details on the evolution of Pushy, a WebSocket messaging platform that supports push notifications and inter-device communication across many different devices for the company’s products. Netflix’s engineers implemented many improvements across the Pushy ecosystem to keep the platform scalable and reliable and to support new capabilities.
-
Amazon MemoryDB Provides Fastest Vector Search on AWS
AWS recently announced the general availability of vector search for Amazon MemoryDB, the managed in-memory database with Multi-AZ availability. According to AWS, the new capability provides ultra-low latency and the fastest vector search performance at the highest recall rates among vector databases available on AWS.
-
Redis Improves Performance of Vector Semantic Search with Multi-Threaded Query Engine
Redis, the in-memory data structure store, recently released an enhanced Redis Query Engine that uses multi-threading to increase query throughput while maintaining low latency. The release comes at a time when vector databases are gaining prominence due to their role in retrieval-augmented generation (RAG) for GenAI applications.
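As a concrete illustration of the kind of workload the Query Engine accelerates, the sketch below runs a vector similarity (KNN) query with redis-py against a local Redis Stack instance; the index name, field names, and the tiny 4-dimensional embeddings are made up for the example.

```python
# Sketch of a vector similarity search with redis-py against Redis Stack.
# Index/field names and the toy 4-dimensional vectors are illustrative only.
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)

# Create an index with an HNSW vector field over hashes prefixed with "doc:".
r.ft("docs_idx").create_index(
    [
        TextField("content"),
        VectorField(
            "embedding",
            "HNSW",
            {"TYPE": "FLOAT32", "DIM": 4, "DISTANCE_METRIC": "COSINE"},
        ),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# Store a couple of documents with toy embeddings.
r.hset("doc:1", mapping={"content": "caching basics",
                         "embedding": np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32).tobytes()})
r.hset("doc:2", mapping={"content": "vector search",
                         "embedding": np.array([0.4, 0.3, 0.2, 0.1], dtype=np.float32).tobytes()})

# KNN query: return the 2 nearest documents to the query vector.
query = (
    Query("*=>[KNN 2 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("content", "score")
    .dialect(2)
)
query_vec = np.array([0.1, 0.2, 0.3, 0.5], dtype=np.float32).tobytes()
for doc in r.ft("docs_idx").search(query, query_params={"vec": query_vec}).docs:
    print(doc.content, doc.score)
```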
-
Introducing Redis Cloud Packages
Redis has released a new product named Redis Cloud Packages: pre-configured Redis Cloud instances designed for specific workloads and use cases. Packages let users skip manual configuration and remove the hassle of managing Redis instances, making Redis Cloud more accessible and efficient for developers. Users can choose a package for caching, NoSQL, or vector database use cases.
-
Microsoft Announces Garnet: a New Open-Source Cache-Store and Redis Alternative
Microsoft Research has recently announced Garnet, an open-source cache-store designed to accelerate applications and services. Because Garnet speaks the RESP wire protocol, it is compatible with existing Redis clients and is positioned as a faster alternative to established cache-stores.
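Because the protocol is RESP, an off-the-shelf Redis client can talk to a Garnet server unchanged. The sketch below uses redis-py against a locally running GarnetServer; the host and port are assumptions and should match however your Garnet instance is configured.

```python
# Sketch: using a standard Redis client (redis-py) against a Garnet server.
# Host and port are assumptions; point them at your running GarnetServer.
import redis

client = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Plain Redis commands travel over RESP, so they work against Garnet too.
client.set("session:42", "alice", ex=300)   # cache a value with a 5-minute TTL
print(client.get("session:42"))             # -> "alice"
print(client.ttl("session:42"))             # remaining TTL in seconds
```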
-
Redis Switches to SSPLv1: Restrictive License Sparks Fork by Former Maintainers
Redis has recently announced a change to its license, transitioning from the open-source BSD license to the more restrictive Server Side Public License (SSPLv1). The move promptly led to a fork initiated by former maintainers and reignited discussions about the sustainability of open-source initiatives.
-
Hashnode Creates Scalable Feed Architecture on AWS with Step Functions, EventBridge and Redis
Hashnode created a scalable event-driven architecture (EDA) for composing feed data for thousands of users. The company used serverless services on AWS, including Lambda, Step Functions, EventBridge, and Redis Cache. The solution leverages Step Functions' distributed map feature, which enables high-concurrency processing.
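A simplified sketch of the fan-out step: a worker function of the kind a distributed map would invoke for each batch of users composes a feed and caches it in Redis with a TTL. The handler shape, key layout, and composition logic are hypothetical, not Hashnode's actual code.

```python
# Hypothetical worker invoked per batch by a Step Functions distributed map:
# it composes each user's feed and caches the result in Redis.
# Names, event shape, and key layout are illustrative only.
import json

import redis

cache = redis.Redis(host="localhost", port=6379)
FEED_TTL_SECONDS = 15 * 60


def compose_feed(user_id: str) -> list:
    # Placeholder for the real ranking/composition logic.
    return [{"post_id": f"post-{i}", "rank": i} for i in range(3)]


def handler(event: dict, context=None) -> dict:
    # Assume the distributed map passes one batch of user IDs per invocation.
    user_ids = event.get("Items", [])
    for user_id in user_ids:
        feed = compose_feed(user_id)
        cache.set(f"feed:{user_id}", json.dumps(feed), ex=FEED_TTL_SECONDS)
    return {"processed": len(user_ids)}
```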
-
Uber's CacheFront: Powering 40M Reads per Second with Significantly Reduced Latency
Uber developed CacheFront, a caching solution for its in-house distributed database, Docstore. CacheFront enables over 40M reads per second from online storage and delivers substantial gains, including a 75% reduction in P75 latency and over 67% reduction in P99.9 latency, while improving overall system efficiency and scalability.
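The core read path resembles the classic cache-aside pattern: try the cache first, fall back to the database on a miss, then populate the cache with a TTL. The sketch below is a generic Python illustration of that pattern, not Uber's actual CacheFront code (which is integrated into Docstore's query engine).

```python
# Generic cache-aside read path, illustrating the pattern CacheFront builds on.
# fetch_row_from_db is a stand-in for the real database read.
import json

import redis

cache = redis.Redis(host="localhost", port=6379)
CACHE_TTL_SECONDS = 5 * 60


def fetch_row_from_db(key: str) -> dict:
    # Placeholder for a read from the backing database.
    return {"id": key, "value": "from-database"}


def get_row(key: str) -> dict:
    cached = cache.get(f"row:{key}")
    if cached is not None:
        return json.loads(cached)          # cache hit: served from Redis

    row = fetch_row_from_db(key)           # cache miss: read from the database
    cache.set(f"row:{key}", json.dumps(row), ex=CACHE_TTL_SECONDS)
    return row
```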
-
lastminute.com Improves Search Scalability Using Microservices with RabbitMQ and Redis
The team at lastminute.com rearchitected the search result aggregation process by breaking up the single service into multiple ones and introducing asynchronous integration. Developers used RabbitMQ for messaging and Redis for storing results from data suppliers. The revised architecture improved scalability and deployability and reduced resource utilization.
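A minimal sketch of that integration style, assuming a local RabbitMQ and Redis: a worker consumes supplier responses from a queue and stores each one in a per-search Redis hash with a TTL, so an aggregator can collect partial results asynchronously. Queue, key, and field names are illustrative, not lastminute.com's actual code.

```python
# Sketch of asynchronous result collection: consume supplier responses from
# RabbitMQ and store them per search in a Redis hash. Names are illustrative.
import json

import pika
import redis

results = redis.Redis(host="localhost", port=6379)
RESULT_TTL_SECONDS = 120


def on_supplier_result(channel, method, properties, body):
    # Assumed message shape: {"search_id": "...", "supplier": "...", "offers": [...]}
    message = json.loads(body)
    key = f"search:{message['search_id']}"
    results.hset(key, message["supplier"], json.dumps(message["offers"]))
    results.expire(key, RESULT_TTL_SECONDS)


connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="supplier-results")
channel.basic_consume(queue="supplier-results",
                      on_message_callback=on_supplier_result,
                      auto_ack=True)
channel.start_consuming()
```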
-
Amazon ElastiCache Serverless: a New Option for Scaling Cache Capacity Instantly
AWS recently announced the general availability of Amazon ElastiCache Serverless, a new serverless option allowing users to quickly create a cache and instantly scale capacity based on application traffic patterns. In addition, the serverless option is compatible with the open-source caching solutions Redis and Memcached.
-
.NET Aspire: Cloud-Native App Development with Microsoft's Latest Project
Microsoft released .NET 8 last week; one of the most notable announcements within the launch was .NET Aspire, a cloud-native development stack for building resilient, observable, and configurable cloud-native applications with .NET. .NET Aspire includes a curated set of components enhanced for cloud-native use, with service discovery, telemetry, resilience, and health checks included by default.
-
How DoorDash Rearchitected its Cache to Improve Scalability and Performance
DoorDash rearchitected the heterogeneous caching systems used across its microservices and created a common, multi-layered cache that provides a generic caching mechanism and solves a number of issues caused by the previously fragmented approach.
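The layering can be illustrated with a small read-through wrapper: a per-process in-memory layer is consulted first, then a shared Redis layer, and only then the original source of truth. This is a generic Python sketch of the idea, not DoorDash's actual library.

```python
# Generic two-layer read-through cache: check a per-process in-memory layer,
# then a shared Redis layer, then the original source. Illustrative only.
import json
import time
from typing import Callable

import redis


class LayeredCache:
    def __init__(self, redis_client: redis.Redis, local_ttl: int = 30, redis_ttl: int = 300):
        self._local = {}            # key -> (expires_at, serialized value)
        self._redis = redis_client
        self._local_ttl = local_ttl
        self._redis_ttl = redis_ttl

    def get(self, key: str, loader: Callable[[], dict]) -> dict:
        now = time.monotonic()
        entry = self._local.get(key)
        if entry and entry[0] > now:
            return json.loads(entry[1])                  # layer 1 hit (in-process)

        cached = self._redis.get(key)
        if cached is not None:
            self._local[key] = (now + self._local_ttl, cached)
            return json.loads(cached)                    # layer 2 hit (Redis)

        value = loader()                                 # miss: load from the source
        payload = json.dumps(value)
        self._redis.set(key, payload, ex=self._redis_ttl)
        self._local[key] = (now + self._local_ttl, payload)
        return value


cache = LayeredCache(redis.Redis(host="localhost", port=6379, decode_responses=True))
print(cache.get("menu:1234", loader=lambda: {"restaurant": "1234", "items": ["salad"]}))
```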
-
Redis 7.2 Now Available with Scalable Search, Auto Tiering, Triggers and Functions
Redis Inc. recently announced the unified release of Redis 7.2, which includes several new features such as auto-tiering, native triggers, and a preview of an enhanced, scalable search capability that improves performance for query and search scenarios, including vector similarity search (VSS).