Software Studio

Redis Beyond Caching: Patterns for Real Applications

Redis · Node.js · Architecture

Most developers know Redis as a cache. Set a key, get a key, maybe set a TTL. But Redis is a full-fledged data structure server, and its capabilities go far beyond caching.

Rate limiting is one of the most practical Redis use cases. The sliding window pattern uses a sorted set with timestamps as scores. For each request, add the current timestamp, remove entries older than the window, and check the set size. If it exceeds the limit, reject the request. This provides accurate per-user rate limiting with minimal code.
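The sliding-window pattern above can be sketched as follows, assuming a node-redis v4 style client injected as a parameter; the key prefix, limit, and window size are illustrative choices:

```javascript
// Sliding-window rate limiter: one sorted set per user, timestamps as scores.
async function allowRequest(redis, userId, limit = 100, windowMs = 60_000) {
  const key = `ratelimit:${userId}`;
  const now = Date.now();

  // Drop entries that have aged out of the window.
  await redis.zRemRangeByScore(key, 0, now - windowMs);

  // Count what's left; reject if the user is already at the limit.
  const count = await redis.zCard(key);
  if (count >= limit) return false;

  // Record this request. The member must be unique, so append a random
  // suffix in case two requests share the same millisecond.
  await redis.zAdd(key, { score: now, value: `${now}:${Math.random()}` });
  await redis.expire(key, Math.ceil(windowMs / 1000)); // self-clean idle keys
  return true;
}
```

In production you would typically batch these commands in a MULTI/pipeline to save round trips, but the logic is the same.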

Session management with Redis solves the stateless server problem. In a horizontally scaled environment (multiple Cloud Run instances or Kubernetes pods), you can't store sessions in a single server's memory. Redis provides a fast, shared session store: the session ID goes in a cookie, and the session data lives in Redis with a TTL matching your session duration.
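A minimal pair of session helpers, assuming a node-redis v4 style client; the `sess:` key prefix and one-hour TTL are illustrative:

```javascript
const SESSION_TTL_SECONDS = 60 * 60; // match your session duration

// Store the session as JSON; the TTL makes Redis expire it automatically.
async function saveSession(redis, sessionId, data) {
  await redis.set(`sess:${sessionId}`, JSON.stringify(data), {
    EX: SESSION_TTL_SECONDS,
  });
}

async function loadSession(redis, sessionId) {
  const raw = await redis.get(`sess:${sessionId}`);
  return raw ? JSON.parse(raw) : null; // null = expired or never existed
}
```

Because every instance reads the same store, any pod can serve any request that presents a valid session cookie.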

Job queues with BullMQ (backed by Redis) handle background processing elegantly. When a user uploads an image, you don't process it synchronously. You push a job to a Redis-backed queue, return a 202 Accepted, and a worker picks up the job. BullMQ handles retries, backoff, priorities, and dead letter queues.
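The upload flow above might look like this sketch; the queue name, the multer-style `req.file.path`, and the `resizeImage` function are illustrative, and running it requires the `bullmq` package plus a reachable Redis instance:

```javascript
const { Queue, Worker } = require('bullmq');
const connection = { host: 'localhost', port: 6379 };

const imageQueue = new Queue('images', { connection });

// In the HTTP handler: enqueue and return 202 immediately.
async function handleUpload(req, res) {
  await imageQueue.add('resize', { path: req.file.path }, {
    attempts: 3,                                   // retry failed jobs
    backoff: { type: 'exponential', delay: 1000 }, // 1s, 2s, 4s between tries
  });
  res.status(202).json({ status: 'queued' });
}

// In a separate worker process: pick up jobs as they arrive.
new Worker('images', async (job) => {
  await resizeImage(job.data.path); // your actual processing function
}, { connection });
```

Keeping the worker in its own process lets you scale request handling and background processing independently.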

Pub/Sub enables real-time features without a dedicated message broker. When an article is published, publish an event to a Redis channel. Subscribers (WebSocket servers, cache-invalidation workers, notification services) receive the event immediately. For simple event distribution within a single application, Redis Pub/Sub is lighter than Kafka or Cloud Pub/Sub, though it is fire-and-forget: subscribers that are offline when the event fires never see it.
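Sketched as a pair of helpers, assuming a node-redis v4 style client (note that a subscribing connection is dedicated to Pub/Sub, so publisher and subscriber need separate clients); the channel name and payload shape are illustrative:

```javascript
// Publisher side: serialize the event and fan it out on a channel.
async function publishEvent(pubClient, channel, event) {
  return pubClient.publish(channel, JSON.stringify(event));
}

// Subscriber side: each interested service (websocket server, cache
// invalidator, notifier) registers its own handler on the channel.
async function subscribeEvents(subClient, channel, handler) {
  await subClient.subscribe(channel, (message) => handler(JSON.parse(message)));
}
```

With node-redis you would create the subscriber via `client.duplicate()` and `connect()` both clients before use.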

Distributed locking prevents concurrent operations from conflicting. When two instances try to process the same webhook, a Redis lock (using SET with the NX and PX options) ensures only one succeeds. The Redlock algorithm extends this across multiple independent Redis instances for higher availability.

Leaderboards and counters leverage Redis's sorted sets and atomic increment operations. ZINCRBY atomically increments a member's score; ZREVRANGE (or ZRANGE with the REV option) retrieves the top rankings. These operations are O(log n), so they stay fast even for leaderboards with millions of entries.
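A leaderboard sketch, again assuming a node-redis v4 style client; the key prefix and board names are illustrative:

```javascript
// ZINCRBY is atomic, so concurrent updates never lose increments.
async function addPoints(redis, board, player, points) {
  return redis.zIncrBy(`leaderboard:${board}`, points, player);
}

// Highest scores first (ZRANGE ... REV), with scores attached.
async function topPlayers(redis, board, n = 10) {
  return redis.zRangeWithScores(`leaderboard:${board}`, 0, n - 1, { REV: true });
}
```

`zRangeWithScores` returns an array of `{ value, score }` objects, ready to render as a ranking.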

The operational consideration: Redis is in-memory. Data that can't be reconstructed (user data, transaction records) belongs in PostgreSQL. Data that can be reconstructed (caches, sessions, rate limits) is perfect for Redis. If Redis dies and restarts empty, your application should continue working, just slower while caches rebuild.

For GCP deployments, Memorystore for Redis provides a managed instance with automatic failover. For development, a local Redis container works perfectly. The same client code works against both.