Redis Caching Strategies

📖 14 min read | 📅 Updated: January 2025 | 🏷️ Backend & APIs

Introduction

Redis is a high-performance in-memory data store used for caching, session management, and real-time analytics. This guide covers caching strategies, cache invalidation patterns, Redis data structures, and best practices for optimizing application performance.

1. Redis Setup & Connection

// Redis client setup with ioredis
import Redis from 'ioredis';

const redis = new Redis({
    host: process.env.REDIS_HOST || 'localhost',
    port: Number(process.env.REDIS_PORT) || 6379,
    password: process.env.REDIS_PASSWORD,
    db: 0,
    retryStrategy: (times) => {
        const delay = Math.min(times * 50, 2000);
        return delay;
    },
    maxRetriesPerRequest: 3
});

// Connection events
redis.on('connect', () => console.log('Redis connected'));
redis.on('error', (err) => console.error('Redis error:', err));

// Redis cluster setup
const cluster = new Redis.Cluster([
    { host: 'node1', port: 6379 },
    { host: 'node2', port: 6379 },
    { host: 'node3', port: 6379 }
], {
    redisOptions: {
        password: process.env.REDIS_PASSWORD
    }
});

// Basic operations
await redis.set('key', 'value');
await redis.setex('key', 3600, 'value'); // Expires in 1 hour
const value = await redis.get('key');
await redis.del('key');

// Check if exists
const exists = await redis.exists('key'); // Returns 1 or 0
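Each of the commands above costs one network round trip. When reading or writing many keys, an ioredis pipeline queues the commands locally and sends them in a single round trip. A minimal sketch: the warmUserCache helper and its key names are illustrative, and the narrow client type stands in for the full ioredis interface.

```typescript
// Batch many SETEX commands into a single round trip with a pipeline.
// PipelineClient is a minimal structural type for an ioredis-style client.
type PipelineClient = {
    pipeline(): {
        setex(key: string, ttl: number, value: string): unknown;
        exec(): Promise<Array<[Error | null, unknown]>>;
    };
};

async function warmUserCache(
    client: PipelineClient,
    users: Array<{ id: string; name: string }>
): Promise<number> {
    const pipeline = client.pipeline();
    for (const user of users) {
        // Commands are queued locally; nothing is sent yet
        pipeline.setex(`user:${user.id}`, 3600, JSON.stringify(user));
    }
    // One round trip; each reply is an [error, result] pair
    const results = await pipeline.exec();
    return results.filter(([err]) => err === null).length;
}
```

With the client from section 1 this is simply: await warmUserCache(redis, users).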

2. Cache-Aside Pattern (Lazy Loading)

// Most common caching pattern
async function getUser(userId: string) {
    const cacheKey = `user:${userId}`;
    
    // 1. Try to get from cache
    const cached = await redis.get(cacheKey);
    if (cached) {
        return JSON.parse(cached);
    }
    
    // 2. Cache miss - get from database
    const user = await db.users.findOne({ id: userId });
    
    if (!user) {
        return null;
    }
    
    // 3. Store in cache
    await redis.setex(
        cacheKey,
        3600, // TTL: 1 hour
        JSON.stringify(user)
    );
    
    return user;
}

// Update user - invalidate cache
async function updateUser(userId: string, data: any) {
    const user = await db.users.update({ id: userId }, data);
    
    // Invalidate cache
    await redis.del(`user:${userId}`);
    
    return user;
}

// Generic cache wrapper
async function cached<T>(
    key: string,
    ttl: number,
    fetchFn: () => Promise<T>
): Promise<T> {
    // Try cache
    const cached = await redis.get(key);
    if (cached) {
        return JSON.parse(cached);
    }
    
    // Fetch data
    const data = await fetchFn();
    
    // Store in cache
    await redis.setex(key, ttl, JSON.stringify(data));
    
    return data;
}

// Usage
const user = await cached(
    `user:${userId}`,
    3600,
    () => db.users.findOne({ id: userId })
);
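One gap in getUser above: when the row doesn't exist, nothing is cached, so every lookup for a missing id falls through to the database. Negative caching stores a short-lived sentinel for misses. A sketch under stated assumptions: the sentinel value, the missTtl parameter, and the minimal client type are all illustrative choices, not fixed conventions.

```typescript
// Negative caching: remember "not found" briefly so repeated lookups for
// missing ids don't hammer the database.
const NOT_FOUND = '__null__'; // sentinel; anything that can't collide with real JSON

type CacheClient = {
    get(key: string): Promise<string | null>;
    setex(key: string, ttl: number, value: string): Promise<unknown>;
};

async function cachedNullable<T>(
    client: CacheClient,
    key: string,
    ttl: number,      // TTL for real values
    missTtl: number,  // shorter TTL for negative entries
    fetchFn: () => Promise<T | null>
): Promise<T | null> {
    const cached = await client.get(key);
    if (cached === NOT_FOUND) return null;   // negative hit
    if (cached) return JSON.parse(cached);   // positive hit

    const data = await fetchFn();
    if (data === null) {
        await client.setex(key, missTtl, NOT_FOUND); // remember the miss briefly
        return null;
    }
    await client.setex(key, ttl, JSON.stringify(data));
    return data;
}
```

Keep missTtl short (seconds, not hours) so a newly created row becomes visible quickly.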

3. Write-Through Cache

// Write to cache and database simultaneously
async function createUser(userData: any) {
    // 1. Write to database
    const user = await db.users.create(userData);
    
    // 2. Write to cache immediately
    await redis.setex(
        `user:${user.id}`,
        3600,
        JSON.stringify(user)
    );
    
    return user;
}

async function updateUserWriteThrough(userId: string, data: any) {
    // 1. Update database
    const user = await db.users.update({ id: userId }, data);
    
    // 2. Update cache
    await redis.setex(
        `user:${userId}`,
        3600,
        JSON.stringify(user)
    );
    
    return user;
}

// Advantage: Cache always has latest data
// Disadvantage: Slower writes, cache may have unused data

4. Write-Behind Cache (Write-Back)

// Write to cache first, database later (async)
import Bull from 'bull';

const writeQueue = new Bull('write-queue', {
    redis: {
        host: process.env.REDIS_HOST,
        port: 6379
    }
});

async function updateUserWriteBehind(userId: string, data: any) {
    const cacheKey = `user:${userId}`;
    
    // 1. Get current data from cache
    const cached = await redis.get(cacheKey);
    const currentUser = cached ? JSON.parse(cached) : await db.users.findOne({ id: userId });
    
    // 2. Update cache immediately
    const updatedUser = { ...currentUser, ...data };
    await redis.setex(cacheKey, 3600, JSON.stringify(updatedUser));
    
    // 3. Queue database write
    await writeQueue.add({
        type: 'update_user',
        userId,
        data
    });
    
    return updatedUser;
}

// Process queue
writeQueue.process(async (job) => {
    const { type, userId, data } = job.data;
    
    if (type === 'update_user') {
        await db.users.update({ id: userId }, data);
    }
});

// Advantage: Fast writes
// Disadvantage: Risk of data loss if Redis fails before write

5. Cache Invalidation Strategies

// 1. Time-based expiration (TTL)
await redis.setex('key', 3600, 'value'); // Expires in 1 hour

// 2. Manual invalidation
await redis.del('user:123');

// 3. Pattern-based invalidation
// Note: KEYS blocks Redis while it scans the whole keyspace; SCAN iterates incrementally
async function invalidatePattern(pattern: string) {
    let cursor = '0';
    do {
        const [next, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
        if (keys.length > 0) {
            await redis.del(...keys);
        }
        cursor = next;
    } while (cursor !== '0');
}

// Invalidate all user caches
await invalidatePattern('user:*');

// 4. Tag-based invalidation
// Store tags for each cached item
await redis.set('product:123', JSON.stringify(product));
await redis.sadd('tag:electronics', 'product:123');
await redis.sadd('tag:sale', 'product:123');

// Invalidate by tag
async function invalidateByTag(tag: string) {
    const keys = await redis.smembers(`tag:${tag}`);
    if (keys.length > 0) {
        await redis.del(...keys);
        await redis.del(`tag:${tag}`);
    }
}

await invalidateByTag('electronics');

// 5. Version-based invalidation
// Readers build keys from the current version; bumping it orphans every old key at once
const version = (await redis.get('cache:version')) || '1';
const cacheKey = `user:${userId}:v${version}`;
// To invalidate all versioned keys: await redis.incr('cache:version');

// 6. Dependency-based invalidation
class CacheManager {
    async set(key: string, value: any, dependencies: string[] = []) {
        await redis.set(key, JSON.stringify(value));
        
        // Store dependencies
        for (const dep of dependencies) {
            await redis.sadd(`dep:${dep}`, key);
        }
    }
    
    async invalidate(dependency: string) {
        // Get all keys that depend on this
        const keys = await redis.smembers(`dep:${dependency}`);
        
        if (keys.length > 0) {
            await redis.del(...keys);
        }
        
        await redis.del(`dep:${dependency}`);
    }
}

const cache = new CacheManager();

// Cache with dependencies
await cache.set('order:123', order, ['user:456', 'product:789']);

// Invalidate all caches that depend on user:456
await cache.invalidate('user:456');

6. Cache Stampede Prevention

// Problem: Many requests fetch same data simultaneously on cache miss
// Solution: Use locking

async function getWithLock<T>(
    key: string,
    ttl: number,
    fetchFn: () => Promise<T>
): Promise<T> {
    // Try to get from cache
    const cached = await redis.get(key);
    if (cached) {
        return JSON.parse(cached);
    }
    
    const lockKey = `lock:${key}`;
    const lockTTL = 10; // 10 seconds
    
    // Try to acquire lock
    const acquired = await redis.set(lockKey, '1', 'EX', lockTTL, 'NX');
    
    if (acquired) {
        try {
            // Double-check cache (might be populated by another process)
            const recheck = await redis.get(key);
            if (recheck) {
                return JSON.parse(recheck);
            }
            
            // Fetch data
            const data = await fetchFn();
            
            // Store in cache
            await redis.setex(key, ttl, JSON.stringify(data));
            
            return data;
        } finally {
            // Release lock
            await redis.del(lockKey);
        }
    } else {
        // Lock not acquired, wait and retry
        await new Promise(resolve => setTimeout(resolve, 100));
        return getWithLock(key, ttl, fetchFn);
    }
}
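One caveat with the lock above: the unconditional DEL in the finally block can delete a lock that another process has since acquired (if fetchFn outlives the lock TTL). The common fix is to store a unique token per holder and release with a compare-and-delete Lua script, which EVAL runs atomically on the server. A sketch against a minimal ioredis-style client type:

```typescript
import { randomBytes } from 'crypto';

// Release only if we still own the lock: compare the stored token, then delete.
const RELEASE_SCRIPT = `
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end`;

type LockClient = {
    set(key: string, value: string, ...args: Array<string | number>): Promise<unknown>;
    eval(script: string, numKeys: number, ...args: Array<string | number>): Promise<unknown>;
};

async function acquireLock(client: LockClient, lockKey: string, ttlSeconds: number): Promise<string | null> {
    const token = randomBytes(16).toString('hex');
    const ok = await client.set(lockKey, token, 'EX', ttlSeconds, 'NX');
    return ok ? token : null; // null: someone else holds the lock
}

async function releaseLock(client: LockClient, lockKey: string, token: string): Promise<boolean> {
    return (await client.eval(RELEASE_SCRIPT, 1, lockKey, token)) === 1;
}
```

In getWithLock, acquireLock would replace the plain SET ... NX and releaseLock the DEL in the finally block.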

// Probabilistic early expiration (avoid thundering herd)
async function getWithProbabilisticEarlyExpiration<T>(
    key: string,
    ttl: number,
    fetchFn: () => Promise<T>
): Promise<T> {
    const result = await redis.get(key);
    
    if (result) {
        const data = JSON.parse(result);
        const remainingTTL = await redis.ttl(key);
        
        // Probabilistically refresh before expiration
        const delta = ttl * 0.1; // 10% of TTL
        // Guard: TTL can be -1 (no expiry) or -2 (key expired between calls)
        const probability = remainingTTL > 0 ? delta / remainingTTL : 1;
        
        if (Math.random() < probability) {
            // Refresh in background
            fetchFn().then(newData => {
                redis.setex(key, ttl, JSON.stringify(newData));
            });
        }
        
        return data;
    }
    
    // Cache miss
    const data = await fetchFn();
    await redis.setex(key, ttl, JSON.stringify(data));
    return data;
}

7. Advanced Data Structures

// Lists (queues, feeds)
await redis.lpush('notifications:user:123', JSON.stringify(notification));
await redis.ltrim('notifications:user:123', 0, 99); // Keep last 100
const notifications = await redis.lrange('notifications:user:123', 0, 9);

// Sorted Sets (leaderboards, rankings)
await redis.zadd('leaderboard', 1500, 'user:123');
await redis.zincrby('leaderboard', 10, 'user:123');
const top10 = await redis.zrevrange('leaderboard', 0, 9, 'WITHSCORES');
const userRank = await redis.zrevrank('leaderboard', 'user:123');

// Hashes (objects with fields)
await redis.hset('user:123', 'name', 'John', 'email', 'john@example.com');
await redis.hincrby('user:123', 'loginCount', 1);
const user = await redis.hgetall('user:123');
const name = await redis.hget('user:123', 'name');

// Sets (unique collections)
await redis.sadd('online:users', 'user:123', 'user:456');
await redis.srem('online:users', 'user:123');
const onlineUsers = await redis.smembers('online:users');
const isOnline = await redis.sismember('online:users', 'user:123');

// Set operations
const common = await redis.sinter('friends:user1', 'friends:user2'); // Common friends
const all = await redis.sunion('group:A', 'group:B');
const diff = await redis.sdiff('all:users', 'banned:users');

// Bitmaps (efficient boolean arrays)
await redis.setbit('daily:active:2024-01-01', userId, 1);
const wasActive = await redis.getbit('daily:active:2024-01-01', userId);
const activeCount = await redis.bitcount('daily:active:2024-01-01');

// HyperLogLog (cardinality estimation)
await redis.pfadd('unique:visitors', 'user1', 'user2');
const uniqueVisitors = await redis.pfcount('unique:visitors');
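Bitmaps also compose: BITOP can OR several daily-activity bitmaps into one destination key, giving weekly active users without any per-user work. A sketch; the helper name, key names, and the minimal client type are illustrative.

```typescript
// Count users active on ANY of the given days by OR-ing daily bitmaps.
type BitmapClient = {
    bitop(op: string, destKey: string, ...srcKeys: string[]): Promise<number>;
    bitcount(key: string): Promise<number>;
};

async function weeklyActiveCount(
    client: BitmapClient,
    destKey: string,
    dayKeys: string[]
): Promise<number> {
    // Union the daily bitmaps into destKey, then count the set bits
    await client.bitop('OR', destKey, ...dayKeys);
    return client.bitcount(destKey);
}
```

For users active on every one of the days, swap 'OR' for 'AND'.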

8. Session Management

// Express session with Redis
import session from 'express-session';
import RedisStore from 'connect-redis';
import crypto from 'crypto';

app.use(session({
    store: new RedisStore({ client: redis }),
    secret: process.env.SESSION_SECRET!,
    resave: false,
    saveUninitialized: false,
    cookie: {
        secure: true,
        httpOnly: true,
        maxAge: 24 * 60 * 60 * 1000 // 24 hours
    }
}));

// Custom session management
class SessionManager {
    private ttl = 24 * 60 * 60; // 24 hours
    
    async createSession(userId: string, data: any): Promise<string> {
        const sessionId = crypto.randomBytes(32).toString('hex');
        const key = `session:${sessionId}`;
        
        await redis.setex(key, this.ttl, JSON.stringify({
            userId,
            ...data,
            createdAt: Date.now()
        }));
        
        return sessionId;
    }
    
    async getSession(sessionId: string) {
        const key = `session:${sessionId}`;
        const data = await redis.get(key);
        
        if (!data) return null;
        
        // Extend TTL on access
        await redis.expire(key, this.ttl);
        
        return JSON.parse(data);
    }
    
    async updateSession(sessionId: string, data: any) {
        const key = `session:${sessionId}`;
        const current = await this.getSession(sessionId);
        
        if (!current) {
            throw new Error('Session not found');
        }
        
        await redis.setex(key, this.ttl, JSON.stringify({
            ...current,
            ...data
        }));
    }
    
    async destroySession(sessionId: string) {
        await redis.del(`session:${sessionId}`);
    }
    
    async getUserSessions(userId: string): Promise<string[]> {
        // Note: KEYS blocks Redis and scans the whole keyspace; acceptable only at small scale
        const pattern = 'session:*';
        const keys = await redis.keys(pattern);
        const sessions: string[] = [];
        
        for (const key of keys) {
            const data = await redis.get(key);
            if (data) {
                const session = JSON.parse(data);
                if (session.userId === userId) {
                    sessions.push(key.replace('session:', ''));
                }
            }
        }
        
        return sessions;
    }
}
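getUserSessions above walks every session key, which scales with the whole keyspace rather than with one user's sessions. Keeping a per-user index set makes the lookup proportional to that user's sessions. A sketch: the user:{id}:sessions key shape and the minimal client type are illustrative assumptions.

```typescript
import { randomBytes } from 'crypto';

type SessionClient = {
    setex(key: string, ttl: number, value: string): Promise<unknown>;
    sadd(key: string, member: string): Promise<number>;
    srem(key: string, member: string): Promise<number>;
    smembers(key: string): Promise<string[]>;
    exists(key: string): Promise<number>;
};

async function createSessionIndexed(
    client: SessionClient,
    userId: string,
    data: object,
    ttl: number
): Promise<string> {
    const sessionId = randomBytes(32).toString('hex');
    await client.setex(`session:${sessionId}`, ttl, JSON.stringify({ userId, ...data }));
    // Index the session id under its owner for O(sessions) lookups
    await client.sadd(`user:${userId}:sessions`, sessionId);
    return sessionId;
}

async function getUserSessionsIndexed(client: SessionClient, userId: string): Promise<string[]> {
    const ids = await client.smembers(`user:${userId}:sessions`);
    const live: string[] = [];
    for (const id of ids) {
        // Session keys expire on their own; prune stale index entries lazily
        if (await client.exists(`session:${id}`)) live.push(id);
        else await client.srem(`user:${userId}:sessions`, id);
    }
    return live;
}
```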

9. Rate Limiting with Redis

// Fixed window rate limiting
class FixedWindowRateLimiter {
    async isAllowed(key: string, limit: number, windowSeconds: number): Promise<boolean> {
        const current = await redis.incr(key);
        
        if (current === 1) {
            await redis.expire(key, windowSeconds);
        }
        
        return current <= limit;
    }
}

// Sliding window rate limiting
class SlidingWindowRateLimiter {
    async isAllowed(key: string, limit: number, windowMs: number): Promise<boolean> {
        const now = Date.now();
        const windowStart = now - windowMs;
        
        // Remove old entries
        await redis.zremrangebyscore(key, 0, windowStart);
        
        // Count requests
        const count = await redis.zcard(key);
        
        if (count >= limit) {
            return false;
        }
        
        // Add current request
        await redis.zadd(key, now, `${now}-${Math.random()}`);
        await redis.expire(key, Math.ceil(windowMs / 1000));
        
        return true;
    }
}
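Between ZCARD and ZADD above, a concurrent request can slip in, so the limiter can over-admit under load. Packing the commands into a MULTI and checking the count after adding avoids the race: record the request first, then withdraw it if the window was already full. A sketch against a minimal ioredis-style client type (exec returns an array of [error, reply] pairs):

```typescript
type MultiClient = {
    multi(): {
        zremrangebyscore(key: string, min: number, max: number): any;
        zadd(key: string, score: number, member: string): any;
        zcard(key: string): any;
        expire(key: string, seconds: number): any;
        exec(): Promise<Array<[Error | null, unknown]>>;
    };
    zrem(key: string, member: string): Promise<number>;
};

async function slidingWindowAllow(
    client: MultiClient,
    key: string,
    limit: number,
    windowMs: number
): Promise<boolean> {
    const now = Date.now();
    const member = `${now}-${Math.random()}`;
    const replies = await client.multi()
        .zremrangebyscore(key, 0, now - windowMs) // drop entries outside the window
        .zadd(key, now, member)                   // record this request first
        .zcard(key)                               // then count, atomically
        .expire(key, Math.ceil(windowMs / 1000))
        .exec();
    const count = replies[2][1] as number; // reply of ZCARD
    if (count > limit) {
        await client.zrem(key, member); // over the limit: withdraw our entry
        return false;
    }
    return true;
}
```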

// Token bucket
class TokenBucketRateLimiter {
    async consume(key: string, capacity: number, refillRate: number): Promise<boolean> {
        const now = Date.now();
        const data = await redis.get(key);
        
        let tokens = capacity;
        let lastRefill = now;
        
        if (data) {
            const parsed = JSON.parse(data);
            tokens = parsed.tokens;
            lastRefill = parsed.lastRefill;
            
            // Refill tokens
            const timePassed = now - lastRefill;
            const tokensToAdd = (timePassed / 1000) * refillRate;
            tokens = Math.min(capacity, tokens + tokensToAdd);
        }
        
        if (tokens < 1) {
            return false;
        }
        
        // Consume token
        tokens -= 1;
        
        await redis.set(key, JSON.stringify({
            tokens,
            lastRefill: now
        }));
        
        return true;
    }
}
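The GET-then-SET in consume is itself a read-modify-write race: two concurrent callers can read the same token count and both succeed. In production the whole consume step usually moves into a Lua script so Redis applies it atomically; the refill arithmetic stays the same either way, and isolating it makes the limiter easy to test:

```typescript
// Pure refill math from the token bucket above: elapsed time earns tokens
// at refillRatePerSec, capped at capacity.
function refillTokens(
    tokens: number,
    lastRefillMs: number,
    nowMs: number,
    capacity: number,
    refillRatePerSec: number
): number {
    const earned = ((nowMs - lastRefillMs) / 1000) * refillRatePerSec;
    return Math.min(capacity, tokens + earned);
}
```

For example, a bucket of capacity 10 refilling at 2 tokens/s recovers fully from empty in 5 seconds.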

10. Best Practices

Measure the cache you ship: track the hit ratio and memory footprint, and pool connections only when a single client becomes a bottleneck.

// Monitoring cache performance
class CacheMetrics {
    async getHitRatio(): Promise<number> {
        const info = await redis.info('stats');
        const hits = parseInt(info.match(/keyspace_hits:(\d+)/)?.[1] || '0');
        const misses = parseInt(info.match(/keyspace_misses:(\d+)/)?.[1] || '0');
        
        return hits + misses === 0 ? 0 : hits / (hits + misses);
    }
    
    async getMemoryUsage(): Promise<string> {
        const info = await redis.info('memory');
        return info.match(/used_memory_human:(.+)/)?.[1] || '0';
    }
}

// Connection pooling (mostly useful for blocking commands like BLPOP;
// ioredis multiplexes normal commands over a single connection)
import { createPool } from 'generic-pool';

const pool = createPool({
    create: async () => new Redis(redisConfig),
    destroy: async (client) => { await client.quit(); }
}, {
    max: 10,
    min: 2
});

const client = await pool.acquire();
await client.get('key');
await pool.release(client);

Conclusion

Redis is a powerful caching solution that dramatically improves application performance. Choose the right caching strategy, implement proper invalidation, and monitor cache metrics. Remember: caching adds complexity, so measure performance gains and adjust accordingly.

💡 Pro Tip: Monitor your cache hit ratio continuously. A hit ratio below 80% suggests your caching strategy needs adjustment. Consider cache warming, longer TTLs for stable data, or analyzing which queries benefit most from caching.