We are using rueidis as the Redis client in a high-throughput Go service where Redis is heavily used as a shared cache. While Redis itself supports LFU eviction, we are observing significant load on Redis due to repeated reads for a small set of highly frequent keys.
To reduce Redis QPS and tail latency, we are looking for a client-side in-memory cache within rueidis that retains the most frequently accessed keys, using LFU- or TinyLFU-style admission/eviction, so requests fall back to Redis only on a local miss.
Problem Statement
• Many requests repeatedly fetch the same hot keys from Redis
• Redis LFU helps with eviction but does not reduce network round trips
• High QPS on Redis increases network utilization, tail latency, and cost
• Existing simple LRU caches are prone to cache pollution under bursty traffic
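For reference, rueidis does ship server-assisted client-side caching (the `DoCache` API with a bounded per-connection cache backed by Redis invalidation messages), though its local cache is not frequency-based. The sketch below illustrates, independently of rueidis, the TinyLFU-style admission idea the last bullet points at: a count-min frequency sketch with counter aging, and a bounded cache that admits a new key only if its estimated frequency beats the least-frequent resident, so one-off bursty keys cannot pollute the cache. All names (`freqSketch`, `tinyCache`) are hypothetical; this is a minimal illustration, not rueidis internals.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// freqSketch approximates access counts with a count-min sketch of
// small counters — the core mechanism behind TinyLFU admission.
type freqSketch struct {
	rows  [4][]uint8
	ops   int
	limit int
}

func newFreqSketch(width int) *freqSketch {
	s := &freqSketch{limit: width * 8}
	for i := range s.rows {
		s.rows[i] = make([]uint8, width)
	}
	return s
}

func (s *freqSketch) index(key string, row int) int {
	h := fnv.New64a()
	h.Write([]byte(key))
	h.Write([]byte{byte(row)})
	return int(h.Sum64() % uint64(len(s.rows[row])))
}

func (s *freqSketch) Increment(key string) {
	for i := range s.rows {
		if j := s.index(key, i); s.rows[i][j] < 255 {
			s.rows[i][j]++
		}
	}
	// Periodically halve every counter so stale hotness decays over time.
	if s.ops++; s.ops >= s.limit {
		s.ops = 0
		for i := range s.rows {
			for j := range s.rows[i] {
				s.rows[i][j] /= 2
			}
		}
	}
}

// Estimate returns the minimum counter across rows, an upper bound
// on the key's true access count.
func (s *freqSketch) Estimate(key string) uint8 {
	min := uint8(255)
	for i := range s.rows {
		if c := s.rows[i][s.index(key, i)]; c < min {
			min = c
		}
	}
	return min
}

// tinyCache is a bounded cache that admits a new key only when its
// estimated frequency beats the current least-frequent resident.
type tinyCache struct {
	cap    int
	data   map[string]string
	sketch *freqSketch
}

func newTinyCache(capacity int) *tinyCache {
	return &tinyCache{cap: capacity, data: make(map[string]string), sketch: newFreqSketch(1024)}
}

func (c *tinyCache) Get(key string) (string, bool) {
	c.sketch.Increment(key) // every lookup feeds the frequency sketch
	v, ok := c.data[key]
	return v, ok
}

func (c *tinyCache) Set(key, value string) {
	c.sketch.Increment(key)
	if _, ok := c.data[key]; ok || len(c.data) < c.cap {
		c.data[key] = value
		return
	}
	// Cache full: find the least-frequent resident as the eviction victim.
	victim, victimFreq := "", uint8(255)
	for k := range c.data {
		if f := c.sketch.Estimate(k); f <= victimFreq {
			victim, victimFreq = k, f
		}
	}
	// Admission filter: only replace the victim with a hotter candidate.
	if c.sketch.Estimate(key) > victimFreq {
		delete(c.data, victim)
		c.data[key] = value
	}
}

func main() {
	c := newTinyCache(2)
	for i := 0; i < 5; i++ {
		c.Get("hot") // build up frequency for the hot key
	}
	c.Set("hot", "v1")
	c.Set("warm", "v2")
	c.Set("cold", "v3") // low-frequency key: rejected by admission
	_, hotOK := c.Get("hot")
	_, coldOK := c.Get("cold")
	fmt.Println(hotOK, coldOK)
}
```

A production version would pair this admission filter with a proper eviction structure (e.g. segmented LRU, as in W-TinyLFU) instead of the linear victim scan, and would add TTLs so entries invalidated in Redis cannot be served stale.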