
Caching

Storing frequently needed data in a fast, easily accessible location to reduce access times and offload backend systems.

Caching is one of the most effective ways to improve application performance. Instead of reading from the database or re-running heavy calculations on every request, results are stored and served from cache on repeat requests. From browser caches and CDN caching to distributed in-memory caches like Redis – caching is an essential tool at every layer of the IT stack.

What is Caching?

Caching is the temporary storage of data in a faster storage medium (cache) to reduce access to slower sources (database, API, file system). The cache acts as a middle layer: if the requested information is in the cache (cache hit), it is returned immediately. If not (cache miss), it is loaded from the source and stored in the cache. Central challenges are cache invalidation (when to remove stale data?) and eviction strategy (what to remove when the cache is full?). Common strategies are LRU (Least Recently Used), TTL (Time to Live) and LFU (Least Frequently Used).
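To make the eviction idea concrete, here is a minimal sketch of an LRU cache in Python (the capacity and key handling are illustrative, not a production implementation):

from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None               # cache miss
        self._store.move_to_end(key)  # mark as recently used
        return self._store[key]       # cache hit

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used entry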

How does Caching work?

On an incoming request, the system checks the cache first. On a cache hit, data is returned from fast storage (e.g. Redis in RAM) – typically in under 1 ms. On a cache miss, the source (e.g. PostgreSQL) is queried; the result is returned to the client and also stored in the cache (cache-aside pattern). Alternatively, the cache can be filled proactively (write-through or write-behind). TTL values define how long an entry remains valid before it expires.
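A minimal cache-aside sketch with redis-py; the connection details, the product:<id> key format and the load_product_from_db helper are assumptions for illustration:

import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)  # assumed local Redis
CACHE_TTL = 300  # entries stay valid for 5 minutes

def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:                         # cache hit: served from RAM
        return json.loads(cached)

    product = load_product_from_db(product_id)     # cache miss: hypothetical DB query (e.g. PostgreSQL)
    r.set(key, json.dumps(product), ex=CACHE_TTL)  # store with TTL for subsequent requests
    return product

def load_product_from_db(product_id: int) -> dict:
    # placeholder for a real database query
    return {"id": product_id, "name": "example", "price_cents": 1999}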

Practical Examples

1. A news site caches the homepage in Redis for 60 seconds: despite 10 million hits per day, the database is queried only once per minute instead of on every request.

2. An e-commerce shop caches product catalog data with a 5-minute TTL and invalidates on price changes – page load drops from 800 ms to 50 ms.

3. An API service uses response caching with ETags: clients get HTTP 304 (Not Modified) when content is unchanged, saving bandwidth and load time (see the sketch after this list).

4. A SaaS app caches complex dashboard calculations (aggregations over millions of rows) and refreshes the cache in the background every 5 minutes.

5. A social media service uses a distributed Redis cluster to serve user profiles and feeds to millions of concurrent users with sub-millisecond latency.
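A minimal sketch of the ETag idea from example 3, here with Flask; the route, payload and hashing scheme are illustrative assumptions:

import hashlib
from flask import Flask, jsonify, request

app = Flask(__name__)

ARTICLES = [{"id": 1, "title": "Hello"}]  # assumed payload

@app.get("/articles")
def articles():
    body = jsonify(ARTICLES)
    etag = hashlib.sha256(body.get_data()).hexdigest()

    # If the client already has this version, answer 304 without a body.
    if etag in request.if_none_match:
        return "", 304

    body.set_etag(etag)
    return body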

Typical Use Cases

Database offload: Cache frequently queried data (product lists, config, user profiles) in RAM

API performance: Cache external API responses to reduce latency and stay within rate limits

Session management: Store user sessions in Redis for fast access in distributed systems (see the sketch after this list)

Content delivery: Serve static and semi-static content via CDN caches worldwide

Computation caching: Cache expensive results (reports, aggregations, ML inference)
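For session management, a minimal redis-py sketch, assuming a hypothetical session:<id> key scheme and a 30-minute idle timeout:

import json
import secrets
import redis

r = redis.Redis(decode_responses=True)  # assumed local Redis
SESSION_TTL = 1800  # 30 minutes of inactivity

def create_session(user_id: int) -> str:
    session_id = secrets.token_urlsafe(32)
    r.set(f"session:{session_id}", json.dumps({"user_id": user_id}), ex=SESSION_TTL)
    return session_id

def load_session(session_id: str) -> dict | None:
    key = f"session:{session_id}"
    data = r.get(key)
    if data is None:
        return None             # expired or unknown session
    r.expire(key, SESSION_TTL)  # sliding expiration: refresh the TTL on each access
    return json.loads(data)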

Advantages and Disadvantages

Advantages

  • Major performance gains: Response times from seconds down to milliseconds
  • Scalability: Backend systems are offloaded and can serve more concurrent users
  • Cost savings: Fewer database queries and API calls lower infrastructure cost
  • Resilience: Cached data can still be served during short backend outages
  • Better UX: Faster load times improve satisfaction and conversion

Disadvantages

  • Cache invalidation is complex: Stale data can lead to inconsistent state
  • Extra infrastructure: A Redis cluster or Memcached instance needs to be operated and monitored
  • Memory cost: RAM is more expensive than disk, especially for large datasets
  • Cold start: After a flush or restart nothing is cached (cache stampede risk)

Frequently Asked Questions about Caching

When should I use Redis vs Memcached?

Redis offers more data structures (strings, hashes, lists, sets, sorted sets), persistence, pub/sub and Lua scripting. Memcached is simpler, purely in-memory and slightly faster for plain key-value caching. For most use cases Redis is the better choice because it is more versatile. Memcached fits pure session or fragment caching without persistence needs.

How do I avoid stale data in the cache?

Strategies: TTL (Time to Live) lets entries expire after a set time. Event-based invalidation clears the cache when data changes (e.g. after a DB update). Cache-aside reads from cache first and fills on miss. Write-through updates the cache together with the database. The best strategy depends on how often data changes and how much staleness you can tolerate.
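A minimal sketch of event-based invalidation on top of the cache-aside pattern above, assuming the same Redis connection and a hypothetical update_price_in_db helper:

import redis

r = redis.Redis(decode_responses=True)  # assumed local Redis

def update_price(product_id: int, new_price_cents: int) -> None:
    update_price_in_db(product_id, new_price_cents)  # hypothetical database write
    r.delete(f"product:{product_id}")  # invalidate, so the next read repopulates the cache

def update_price_in_db(product_id: int, new_price_cents: int) -> None:
    pass  # placeholder for a real UPDATE statement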

What is a cache stampede and how do I prevent it?

A cache stampede happens when a popular cache entry expires and many requests hit the database at once to reload the same value. Mitigations: staggered TTLs (random variation), lock-based caching (only one request reloads, others wait) or proactive cache warming before entries expire.
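A minimal sketch of two of these mitigations with redis-py: a jittered TTL plus a Redis lock so only one process reloads an expired entry (key names, timings and the render_homepage helper are illustrative):

import json
import random
import time
import redis

r = redis.Redis(decode_responses=True)  # assumed local Redis

def get_homepage() -> dict:
    cached = r.get("homepage")
    if cached is not None:
        return json.loads(cached)

    # Only one request may rebuild the entry; SET with nx=True acts as a lock.
    if r.set("homepage:lock", "1", nx=True, ex=10):
        data = render_homepage()              # hypothetical expensive rebuild
        ttl = 60 + random.randint(0, 15)      # jittered TTL spreads out expirations
        r.set("homepage", json.dumps(data), ex=ttl)
        r.delete("homepage:lock")
        return data

    time.sleep(0.1)       # another request is rebuilding: wait briefly and retry
    return get_homepage()

def render_homepage() -> dict:
    return {"headline": "example"}  # placeholder for the real page build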


Want to use Caching in your project?

We are happy to advise you on Caching and find the optimal solution for your requirements. Benefit from our experience across over 200 projects.

Next Step

Questions about the topic? We're happy to help.

Our experts are available for in-depth conversations – no strings attached.

30 min strategy call – 100% free & non-binding
