Distributed Caching with Redis

When improving the performance of web applications and microservices, every millisecond counts. An API gateway can provide distributed caching, where API responses are cached once and made available to all distributed microservices, even when those services span multiple servers or Kubernetes pods. In caching, objects and data are stored in fast in-memory storage (RAM) for quicker access than a database round trip. Memory caching is effective because all microservice instances access the same set of cached data. The objective of a distributed cache is to store the data that clients request repeatedly.

Distributed caching is an important strategy for decreasing a distributed microservices application's latency and improving its concurrency and scalability. A cache eviction/expiry policy should also be configured so that stale entries are regularly replaced with fresh data. According to research reported by http://www.marketingdive.com:

  • New research by Google has found that 53% of mobile website visitors will leave if a webpage doesn’t load within 3 seconds.
  • The average load time for sites is 19 seconds on a 3G connection and 14 seconds on a 4G connection.
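Slow responses like those above are exactly what a cache with a sensible expiry policy helps avoid. Below is a minimal sketch, using the Python redis-py client, of writing an API response into Redis with a time-to-live (TTL) so that stale entries are evicted automatically; the host name, key, and TTL value are illustrative assumptions.

    import json
    import redis

    # Connect to the Redis cache (host/port are assumptions for illustration).
    cache = redis.Redis(host="redis-cache", port=6379, decode_responses=True)

    def cache_api_response(key: str, payload: dict, ttl_seconds: int = 300) -> None:
        """Store an API response with a TTL so stale data expires automatically."""
        # SETEX writes the value and sets its expiry in one call.
        cache.setex(key, ttl_seconds, json.dumps(payload))

    cache_api_response("api:product:42", {"id": 42, "name": "widget"}, ttl_seconds=300)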

API Caching with Redis

Redis is an open-source, in-memory data structure store that can be used as a distributed cache and key-value database. It supports several abstract data structures, such as strings, lists, hashes (maps), sets, sorted sets, HyperLogLogs, bitmaps, streams, and geospatial indexes.
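A quick sketch of a few of these structures through the Python redis-py client (the connection details and key names are assumptions for illustration):

    import redis

    r = redis.Redis(host="redis-cache", port=6379, decode_responses=True)

    r.set("greeting", "hello")                                   # string
    r.rpush("recent:orders", "o-1", "o-2")                       # list
    r.hset("user:7", mapping={"name": "Asha", "tier": "gold"})   # hash (map)
    r.sadd("tags:api", "cache", "redis")                         # set
    r.zadd("leaderboard", {"alice": 120, "bob": 95})             # sorted set
    r.pfadd("visitors:2023-01-01", "u1", "u2")                   # HyperLogLog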

Redis is a high-performance, in-memory data structure server (not just a key-value store). In large-scale distributed systems with a high number of API calls per second, Redis is a good fit as the distributed cache for an enterprise microservices architecture. It is faster than the equivalent database calls because Redis serves data directly from RAM.
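One common way to realize this is the cache-aside pattern: check Redis first and fall back to the database only on a miss. The sketch below uses redis-py; the host name, key format, and the load_product_from_db() helper are assumptions for illustration.

    import json
    import redis

    cache = redis.Redis(host="redis-cache", port=6379, decode_responses=True)

    def load_product_from_db(product_id: int) -> dict:
        # Stand-in for the real database query (assumption for illustration).
        return {"id": product_id, "name": "example"}

    def get_product(product_id: int) -> dict:
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)               # cache hit: no database call
        product = load_product_from_db(product_id)  # cache miss: query the database
        cache.setex(key, 600, json.dumps(product))  # populate the cache with a TTL
        return product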

Applications are responsible for fetching data from the database and pushing it to the Redis master node, which writes all new cache entries into the Redis cluster. The Redis master then replicates the data to the Redis slave (replica) nodes. A Redis server runs in one of two modes:

  • Master Mode (Redis Master)
  • Slave Mode (Redis Slave/Redis Replica)

We can configure which nodes clients write to and read from. It is recommended to serve writes through the Redis leader and reads through the Redis followers.
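A minimal sketch of that split, assuming one master and one replica (the replica is pointed at the master with the replicaof directive in its redis.conf); the host names are illustrative:

    import redis

    # Writes go to the leader (master).
    leader = redis.Redis(host="redis-master", port=6379, decode_responses=True)
    # Reads go to a follower (replica) that replicates from the leader.
    follower = redis.Redis(host="redis-replica", port=6379, decode_responses=True)

    leader.set("session:abc", "user-7")   # write path: leader only
    value = follower.get("session:abc")   # read path: served by the replica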

Redis cluster architecture for high availability (HA)

Every leader should have at least one follower, and it can have more. Having more than one follower per leader is preferable, because one follower can fail and a backup follower is still available, preserving redundancy after failover.

Clients write to the leader node and read from follower nodes. Clients can connect directly to the leader for reads if the followers are unavailable. Every leader node replicates its cached data to one or more followers; the number of followers is configurable.
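With the redis-py cluster client, this routing can be expressed directly: writes are sent to the leader that owns the key's hash slot, and enabling read_from_replicas lets reads be served by that leader's followers. This is a sketch; the host name and port are assumptions.

    from redis.cluster import RedisCluster

    # Connect via any cluster node; the client discovers the full topology.
    rc = RedisCluster(host="redis-cluster", port=6379,
                      read_from_replicas=True, decode_responses=True)

    rc.set("order:1001", "pending")   # routed to the leader owning this hash slot
    status = rc.get("order:1001")     # may be served by that leader's follower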

All leaders and followers check the health status of every node by using the gossip protocol.
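The node table that gossip maintains can be inspected from a client, for example with redis-py (a sketch; connection details are assumed):

    import redis

    node = redis.Redis(host="redis-cluster", port=6379, decode_responses=True)
    # CLUSTER NODES lists each node's id, role (master/slave), and health flags.
    print(node.execute_command("CLUSTER NODES"))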

Published by Rajiv Srivastava, Principal Architect with Wells Fargo
