10. Caching & Message Brokers

Cache intentionally, protect against stampedes, and pick the right broker and delivery semantics.

Question: How would you prevent a cache stampede?

Answer: A cache stampede occurs when multiple processes simultaneously try to regenerate the same expired cache key. It can be prevented using a locking mechanism. The first process to see the expired key acquires a lock, regenerates the value, and updates the cache. Other processes either wait for the lock to be released and use the new value, or serve stale data while the regeneration happens.

Explanation: Adding jitter (a small random delay) to cache expiration times can also help distribute the load and reduce the chance of many keys expiring at once.

# Pseudocode for stampede protection
import time

val = redis.get(key)
if val is None:
    # Try to acquire a short-lived lock so only one process recomputes
    if redis.set(lock_key, "1", nx=True, ex=10):
        try:
            val = compute_value()
            redis.setex(key, ttl, val)
        finally:
            redis.delete(lock_key)
    else:
        # Another process holds the lock; poll briefly for the fresh value
        for _ in range(10):
            time.sleep(0.1)
            val = redis.get(key)
            if val is not None:
                break
        # If still None, serve stale data here when staleness is acceptable
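The jitter idea above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `ttl_with_jitter` helper; the base TTL and jitter window are arbitrary example values.

```python
import random

def ttl_with_jitter(base_ttl: int, jitter: int = 60) -> int:
    """Spread expirations by adding a random offset to the base TTL."""
    return base_ttl + random.randint(0, jitter)

# Keys cached with the same base TTL now expire at slightly different
# times, so a burst of simultaneous expirations becomes less likely.
ttls = [ttl_with_jitter(300) for _ in range(5)]
```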

Question: When would you choose Kafka over RabbitMQ?

Answer: Choose Kafka when you need a durable, distributed, and replayable log of events, often for event sourcing, stream processing, or high-throughput data pipelines. Choose RabbitMQ when you need a flexible message broker for traditional work queues, where complex routing logic and per-message acknowledgements are important.
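The core semantic difference can be shown with a toy in-memory model. This is a conceptual sketch only, not real broker APIs: `Log` mimics Kafka's append-only, replayable log where consumers track their own offsets, while `Queue` mimics a RabbitMQ-style work queue where a delivered message is removed.

```python
from collections import deque

class Log:
    """Kafka-style: an append-only log; consumers hold offsets and can replay."""
    def __init__(self):
        self.entries = []

    def append(self, msg):
        self.entries.append(msg)

    def read(self, offset):
        # Messages are never removed; any consumer can re-read from any offset
        return self.entries[offset:]

class Queue:
    """RabbitMQ-style: a work queue; a consumed (acked) message is gone."""
    def __init__(self):
        self.pending = deque()

    def publish(self, msg):
        self.pending.append(msg)

    def consume(self):
        return self.pending.popleft() if self.pending else None
```

Replayability is why Kafka suits event sourcing and stream reprocessing; the destructive-read queue model is why RabbitMQ suits one-shot work distribution.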

Question: What cache policies and patterns matter in production?

Answer: Know the eviction policies (LRU, LFU, TTL-based, random) and the caching patterns (cache-aside, write-through, write-back, refresh-ahead).

Explanation: Choose a policy based on access patterns, working-set size, and staleness tolerance. In Redis, set maxmemory and maxmemory-policy (e.g. allkeys-lru) to control how keys are evicted under memory pressure.
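Cache-aside, the most common of these patterns, can be sketched with an in-memory dict standing in for Redis or Memcached. This is a minimal illustration; the `CacheAside` class and `load_fn` parameter are hypothetical names, and a real deployment would add TTLs and eviction.

```python
class CacheAside:
    """Cache-aside (lazy loading): read through the cache, populate on miss."""
    def __init__(self, load_fn):
        self.store = {}          # stand-in for Redis/Memcached
        self.load_fn = load_fn   # loads from the source of truth (e.g. a DB)

    def get(self, key):
        if key in self.store:
            return self.store[key]   # cache hit
        val = self.load_fn(key)      # miss: fetch from the backing store
        self.store[key] = val        # populate so later reads are hits
        return val

# Usage: the loader runs once per key; repeat reads are served from cache
loads = []
cache = CacheAside(lambda k: loads.append(k) or k.upper())
cache.get("user:1")
cache.get("user:1")
```

Write-through would instead update the cache synchronously on every write, trading write latency for read freshness.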