Introducing Evalite: the TypeScript Testing Tool for AI Powered Apps
Evalite is a TypeScript-native eval runner designed for AI applications, enabling developers to create reproducible evals with rich outputs. Featuring first-class trace capture, scoring, and a user-friendly web UI, Evalite enhances testing ergonomics and iteration speed. Open-source under MIT, it seamlessly integrates with any LLM, ensuring complete data control and fostering rapid development.
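For readers new to eval runners, the sketch below shows roughly what an Evalite eval file looks like, following the shape of the project's quick-start: a data source, the task under test, and one or more scorers (here the Levenshtein scorer from the autoevals package). Treat the exact option names and imports as something to verify against the current Evalite documentation.

```typescript
import { evalite } from "evalite";
import { Levenshtein } from "autoevals";

// A minimal eval: compare the task's output against the expected value.
evalite("My Eval", {
  // Test cases: each entry provides an input and the expected output.
  data: async () => {
    return [{ input: "Hello", expected: "Hello World!" }];
  },
  // The task under test; in a real app this would call an LLM.
  task: async (input) => {
    return input + " World!";
  },
  // Scorers grade the output; Levenshtein measures edit distance to `expected`.
  scorers: [Levenshtein],
});
```

Eval files are typically named with an .eval.ts suffix and run with the evalite CLI (for example, evalite watch), which serves the web UI for inspecting results.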
-
Reddit Migrates Comment Backend from Python to Go Microservice to Halve Latency
Reddit has rebuilt its core backend, migrating Comments, Accounts, Posts, and Subreddits from a legacy Python monolith to Go microservices. The migration improves performance, halves critical write latency, and modernizes the platform for future scalability while preserving correctness across multiple datastores.
-
Uber Achieves 150M Reads per Second with CacheFront Improvements
Uber has updated its CacheFront architecture to handle over 150 million reads per second. The new design improves consistency and reduces stale reads by integrating Flux for MySQL binlog tailing, enhancing the storage engine, and introducing Cache Inspector for monitoring and optimization.
-
Pogocache: Open Source Caching Software with Low Latency and Multiple Wire Protocols
Pogocache, a new open-source caching server, recently reached its 1.0 general availability, focusing on low latency and CPU efficiency. Pogocache supports multiple popular wire protocols while claiming better throughput and lower latency than other open-source caching alternatives.
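Because Pogocache speaks existing wire protocols, standard clients can connect to it unchanged. Below is a minimal sketch using the node-redis client against a local instance; the URL and port are assumptions about a particular deployment, not values to rely on.

```typescript
import { createClient } from "redis";

// Assumed local Pogocache endpoint speaking the Redis (RESP) wire protocol;
// replace the URL with whatever your instance actually listens on.
const client = createClient({ url: "redis://localhost:9401" });

async function main() {
  await client.connect();
  await client.set("greeting", "hello from pogocache", { EX: 60 }); // 60-second TTL
  console.log(await client.get("greeting"));
  await client.quit();
}

main().catch(console.error);
```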
-
How Cloudflare Migrated Quicksilver to Multi-Level Caching While Serving Billions of Requests
The engineering team at Cloudflare recently shared how they transitioned Quicksilver, their internal global key-value store, to a tiered caching architecture. They described their incremental journey from storing everything everywhere to adopting a distributed caching system, improving storage efficiency while preserving consistency guarantees and low-latency reads at the edge.
-
Netflix’s Pushy: Evolution of Scalable WebSocket Platform That Handles 100Ms Concurrent Connections
Netflix shared details on the evolution of Pushy, a WebSocket messaging platform that supports push notifications and inter-device communication across many different devices for the company’s products. Netflix’s engineers implemented many improvements across the Pushy ecosystem to ensure the platform’s scalability and reliability, as well as to support new capabilities.
-
Uber's CacheFront: Powering 40M Reads per Second with Significantly Reduced Latency
Uber developed an innovative caching solution, CacheFront, for its in-house distributed database, Docstore. CacheFront enables over 40M reads per second from online storage and achieves substantial performance improvements, including a 75% reduction in P75 latency and over 67% reduction in P99.9 latency, demonstrating its effectiveness in enhancing system efficiency and scalability.
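CacheFront builds the cache into Docstore’s query engine rather than leaving cache-aside logic to each calling service. The generic sketch below illustrates only the underlying read-path idea (serve hits from the cache, fall back to the database on a miss, populate with a TTL); it is not Uber’s implementation, and the Redis client, helper names, and TTL are placeholders. The CDC-driven invalidation mentioned in the newer CacheFront item above is not shown.

```typescript
import { createClient } from "redis";

// Stand-in cache; in CacheFront the cache sits behind Docstore's query engine.
const cache = createClient({ url: "redis://localhost:6379" });

// Placeholder for the database read; in Docstore this would hit the storage engine.
async function readFromDatabase(key: string): Promise<string | null> {
  return `row-for-${key}`;
}

// Illustrative cache-aside read path: serve hits from the cache, fall back to
// the database on a miss, and populate the cache with a TTL.
async function read(key: string): Promise<string | null> {
  const cached = await cache.get(key);
  if (cached !== null) return cached;

  const row = await readFromDatabase(key);
  if (row !== null) {
    await cache.set(key, row, { EX: 300 }); // 5-minute TTL; invalidation not shown
  }
  return row;
}

async function main() {
  await cache.connect();
  console.log(await read("user:42"));
  await cache.quit();
}

main().catch(console.error);
```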
-
How RevenueCat Manages Caching for Handling over 1.2 Billion Daily API Requests
RevenueCat extensively uses caching to improve the availability and performance of its product API while ensuring consistency. The company shared its techniques to deliver the platform, which can handle over 1.2 billion daily API requests. The team at RevenueCat created an open-source memcache client that provides several advanced features.
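RevenueCat’s write-ups stress keeping the API available even when the cache layer misbehaves. The sketch below is a generic illustration of two such techniques, failing open on cache errors and jittering TTLs so hot keys do not expire in lockstep; it is not code from RevenueCat’s open-source client, and the Cache interface and helper names are hypothetical.

```typescript
// Hypothetical cache interface for the sake of the example.
type Cache = {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
};

// Read-through helper: degrade gracefully if the cache is down, and
// add up to 10% TTL jitter so keys written together expire at different times.
async function getWithFallback(
  cache: Cache,
  key: string,
  loadFromOrigin: () => Promise<string>,
  baseTtlSeconds = 300,
): Promise<string> {
  try {
    const hit = await cache.get(key);
    if (hit !== null) return hit;
  } catch {
    // Cache unavailable: serve from the origin rather than failing the request.
    return loadFromOrigin();
  }

  const value = await loadFromOrigin();
  const jitter = Math.floor(Math.random() * baseTtlSeconds * 0.1);
  try {
    await cache.set(key, value, baseTtlSeconds + jitter);
  } catch {
    // Best-effort write-back; ignore cache failures on this path too.
  }
  return value;
}
```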
-
Amazon ElastiCache Serverless: a New Option for Scaling Cache Capacity Instantly
AWS recently announced the general availability of Amazon ElastiCache Serverless, a new serverless option allowing users to quickly create a cache and instantly scale capacity based on application traffic patterns. In addition, the serverless option is compatible with the open-source caching solutions Redis and Memcached.
-
How DoorDash Rearchitected its Cache to Improve Scalability and Performance
DoorDash rearchitected the heterogeneous caching systems used across its microservices and created a common, multi-layered cache that provides a generic caching mechanism and solves a number of issues caused by the previously fragmented approach.
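The shape of such a multi-layered cache can be sketched generically: a single interface, several layers checked in order, and faster layers backfilled on a hit. The TypeScript below is an illustration of that structure, not DoorDash’s implementation; the interface and class names are hypothetical.

```typescript
// Hypothetical common interface every layer implements.
interface CacheLayer {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Checks layers in order (e.g. an in-process LRU first, then a shared Redis),
// backfilling the faster layers whenever a slower one has the value.
class LayeredCache implements CacheLayer {
  constructor(private layers: CacheLayer[]) {}

  async get(key: string): Promise<string | null> {
    for (let i = 0; i < this.layers.length; i++) {
      const value = await this.layers[i].get(key);
      if (value !== null) {
        await Promise.all(this.layers.slice(0, i).map((l) => l.set(key, value)));
        return value;
      }
    }
    return null;
  }

  async set(key: string, value: string): Promise<void> {
    await Promise.all(this.layers.map((l) => l.set(key, value)));
  }
}
```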
-
Redis 7.2 Now Available with Scalable Search, Auto Tiering, Triggers and Functions
Redis Inc recently announced the unified release of Redis 7.2, which includes several new features like auto-tiering, native triggers, and a preview of an enhanced, scalable search capability that provides increased performance for query and search scenarios, including vector similarity search (VSS).
-
New AWS .NET Distributed Cache Provider for DynamoDB in Preview
AWS recently announced the preview release of the AWS .NET Distributed Cache Provider for DynamoDB. This library enables Amazon DynamoDB to be used as the storage for ASP.NET Core’s distributed cache framework.
-
AWS Announces Redis 7 Compatibility to Amazon ElastiCache for Redis
AWS recently announced Redis 7 compatibility with Amazon ElastiCache for Redis, which brings several new features, such as Redis Functions, ACL improvements, and Sharded Pub/Sub.
-
AWS Introduces AWS Parameters and Secrets Lambda Extension to Improve Performance and Security
AWS recently announced the Parameters and Secrets Lambda Extension, a new way for developers to retrieve parameters from Systems Manager Parameter Store and secrets from Secrets Manager. The Lambda extension caches parameters and secrets, reducing latency and costs.
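From inside a function, the extension is reached over a local HTTP endpoint rather than through the AWS SDK. The Node.js handler sketch below assumes the documented default port (2773) and request header; the parameter name is a placeholder.

```typescript
// Reads a Parameter Store value through the extension's local HTTP endpoint.
// The extension caches the value, so repeated invocations avoid extra API calls.
const PORT = process.env.PARAMETERS_SECRETS_EXTENSION_HTTP_PORT ?? "2773";

export const handler = async () => {
  const name = encodeURIComponent("/my-app/db-password"); // placeholder parameter name
  const res = await fetch(
    `http://localhost:${PORT}/systemsmanager/parameters/get?name=${name}&withDecryption=true`,
    { headers: { "X-Aws-Parameters-Secrets-Token": process.env.AWS_SESSION_TOKEN ?? "" } },
  );
  const body = await res.json();
  return { statusCode: 200, retrieved: body.Parameter?.Value !== undefined };
};
```

Secrets Manager values are retrieved the same way via the /secretsmanager/get path.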
-
Amazon File Cache Now Generally Available
Amazon recently announced the general availability of File Cache, a managed high-speed cache for processing file data stored in disparate locations. The new service can be linked to multiple sources, including on-premises network file systems and managed AWS services like Amazon FSx or S3.