[Cache] Caching Strategies

1) Lazy Loading


  • Load data into the cache only when necessary.
  • Ask the cache first; if the data is unavailable, ask the database.

1.1) Cache Hit


  When your data is in the cache and has not expired:

  1. Application requests the data from the cache.
  2. Cache returns the data to the application.

1.2) Cache Miss


  1. Application requests data from the cache.
  2. Cache doesn't have the requested data and returns null.
  3. Application requests and receives the data from the database.
  4. Application updates the cache with the data so the next access is faster.
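
A minimal sketch of the lazy-loading read path, covering both the hit and miss flows above. It assumes a Redis cache accessed through the redis-py client; `query_database`, the key format, and the host/port are placeholders for your own setup.

```python
import json

import redis

# Assumed cache client; host and port are placeholders for your environment.
cache = redis.Redis(host="localhost", port=6379)


def query_database(user_id):
    # Placeholder for the real database call (SQL query, DynamoDB get, etc.).
    return {"id": user_id, "name": "example"}


def get_user(user_id):
    key = f"user:{user_id}"

    cached = cache.get(key)              # 1. Ask the cache first.
    if cached is not None:               # Cache hit: return the cached copy.
        return json.loads(cached)

    record = query_database(user_id)     # 2. Cache miss: ask the database.
    cache.set(key, json.dumps(record))   # 3. Update the cache for next time.
    return record
```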

1.3) Pros & Cons

1.3.1) Pros

  • Only requested data is cached.
    • Since most data is never requested, lazy loading avoids filling the cache with data that no one asks for.
  • Node failures are not fatal.
    • When a node fails and is replaced, the new node warms up as requests come in (as opposed to Write Through, where the replacement starts out missing data).

1.3.2) Cons

  • Cache Miss Penalty
    • Each cache miss results in three trips:
      • Initial data request to the cache
      • Query to the database
      • Writing the data back to the cache node
  • Stale data
    • Because data is written to the cache only on a cache miss, cached data can become stale: nothing updates the cache when the data changes in the database.

2) Write Through

Adds or updates data in the cache whenever data is written to the database.
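
A minimal write-through sketch under the same assumptions as the lazy-loading example (redis-py for the cache, a placeholder `write_database` for the real store): every write updates the cache as part of the same operation.

```python
import json

import redis

cache = redis.Redis(host="localhost", port=6379)


def write_database(user_id, record):
    # Placeholder for the real database write (UPDATE/INSERT, PutItem, etc.).
    pass


def save_user(user_id, record):
    write_database(user_id, record)                    # 1. Write to the database.
    cache.set(f"user:{user_id}", json.dumps(record))   # 2. Mirror the write into the cache.
```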

2.1) Pros & Cons

2.1.1) Pros

  • Data in the cache is never stale
    • The cached copy always reflects the most recent write.
  • Write penalty vs. read penalty
    • Every write involves two trips:
      • A write to the cache
      • A write to the database
    • This adds latency to writes, but end users are generally more tolerant of latency when updating data than when retrieving data.

2.1.2) Cons

  • Missing Data
    • When a new node is spun up (e.g. after a node failure), its cache is empty, and data remains missing until it is next added or updated in the database.
  • Cache Churn
    • Since most data is never read, the cluster can hold a lot of data that is never read, which wastes resources.
    • Adding a TTL helps minimize the wasted space.

3) Adding TTL

TTL = Time To Live. It indicates how long a piece of data can live before it is removed.
In the case of Lazy Loading, a TTL prevents data from becoming too stale.
In the case of Write Through, it reduces the amount of unread data sitting in the cluster.
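
For example, with redis-py a TTL can be attached when the value is written; the 300-second value and key below are just illustrative choices.

```python
import json

import redis

cache = redis.Redis(host="localhost", port=6379)
record = {"id": 42, "name": "example"}   # placeholder value

# Attach a 300-second TTL (an arbitrary illustrative value) when writing.
cache.set("user:42", json.dumps(record), ex=300)

# SETEX does the same thing in one dedicated command.
cache.setex("user:42", 300, json.dumps(record))
```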





https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Strategies.html
