Cloud Distributed Caching for Microservices

Distributed caching is a critical aspect of cloud-based applications, whether in on-prem, public, or hybrid cloud environments. It enables incremental scaling, allowing the cache to grow along with the data. In this blog we will explore distributed caching on the cloud and why it is useful for environments with high data volume and load. This blog will cover:

  • Challenges with Traditional Caching 
  • What is Distributed Caching
  • Benefits of distributed caching on the cloud
  • Recommended Distributed Caching Database Tools
  • Ways to Deploy Distributed Caching on the cloud

Traditional Distributed Caching Challenges

Traditional distributed caching servers are usually deployed with limited storage and CPU speed on a few dedicated servers or Virtual Machines (VMs). Often these caching infrastructures reside in data centers (DCs), on-prem or on cloud VMs, that are not resilient, not highly available, and not fault-tolerant. This kind of traditional caching comes with numerous challenges:

  • Traditional caching is typically in-process caching, which works at the level of an individual server instance. An in-process cache stores data locally at the application level, for example in EhCache. It cannot guarantee data consistency across instances (a sketch of the resulting staleness problem follows this list).
  • An in-process cache creates performance issues because it occupies extra application memory and adds garbage collection overhead.
  • It's not reliable, because it uses the same heap memory as the application. If the application crashes due to memory pressure or other issues, the cached data is wiped out with it.
  • It's hard to scale cache storage and CPU on a few dedicated servers, because these servers are often not auto-scalable.
  • High operational cost to manage infrastructure and underutilized hardware resources, since these servers are managed manually on traditional DevOps infrastructure.
  • Traditional distributed caching is not containerized (not deployed on Kubernetes/Docker containers), so it is not easily scalable, resilient, or self-managed. There is also a higher chance of these few servers crashing if the client load exceeds what they were provisioned for.
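
To illustrate the consistency problem with in-process caching, here is a minimal Python sketch; the data and function names are purely illustrative:

```python
import functools

# Simulated database; in a real app this would be a network call.
USERS_DB = {42: {"name": "Alice", "plan": "free"}}

@functools.lru_cache(maxsize=1024)
def get_user_profile(user_id: int) -> dict:
    # Each application instance keeps its own lru_cache copy, so an
    # update made by one instance is invisible to the others until
    # their entries are evicted -- the consistency problem above.
    return dict(USERS_DB[user_id])

print(get_user_profile(42))   # first call populates this instance's cache
USERS_DB[42]["plan"] = "pro"  # another instance updates the data...
print(get_user_profile(42))   # ...but this instance still serves the stale copy
```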

What is Distributed Caching

Caching is a technique of storing frequently used data outside the main storage, in high-speed memory, to improve performance. In a microservices environment, applications are deployed as multiple instances across various servers/containers on the hybrid cloud. In a multi-cluster Kubernetes environment on the cloud, a single caching source is needed to persist data centrally and replicate it within its own caching cluster. It serves as a single point of storage for cached data in a distributed environment.

Benefits of Distributed Caching on the Cloud

These are a few benefits of distributed caching:

  • Periodic caching of frequently read REST API responses ensures faster API read performance.
  • Reduced database network calls, because cached data is read directly from the distributed caching database (see the cache-aside sketch after this list).
  • Resilience and fault tolerance by maintaining multiple copies of data at various caching databases in a cluster. 
  • High availability by auto-scaling the cache databases, based on load or client requests.
  • Storage of session secret tokens like JSON Web Tokens (JWT) for authentication and authorization purposes in microservices app containers.
  • Faster in-memory read and write access when used as a dedicated database solution for high-load, mission-critical applications.
  • Avoid unnecessary roundtrip data calls to persistent databases.
  • Auto-scalable cloud infrastructure deployment.
  • Containerization of Distributed caching libraries/solutions.
  • Consistent reads from any synchronized, connected caching data center (DC).
  • Minimal to no outage, high availability of caching data.
  • Faster data synchronization between caching data servers.
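
To make the cache-aside pattern behind several of these benefits concrete, here is a minimal Python sketch using the redis-py client; the connection details and the load_product_from_db helper are hypothetical stand-ins:

```python
import json
import redis

# Hypothetical connection details for a shared distributed cache.
cache = redis.Redis(host="redis.example.internal", port=6379, db=0)

def load_product_from_db(product_id: str) -> dict:
    # Stand-in for a real database query.
    return {"id": product_id, "price": 9.99}

def get_product(product_id: str) -> dict:
    """Cache-aside read: try the distributed cache first, fall back to
    the database, then populate the cache for later readers."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: no database call
    product = load_product_from_db(product_id)  # cache miss: hit the database
    cache.setex(key, 300, json.dumps(product))  # expire after 5 minutes
    return product
```

Because every microservice instance talks to the same cache cluster, a value populated by one instance is immediately visible to all the others.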

Recommended Distributed Caching Database Tools

Following are popular industry-recognized caching servers:  

  • Redis
  • Memcached
  • GemFire
  • Hazelcast

Redis: It’s one of the most popular distributed caching services. It supports different data structures and is an open-source, in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker. It also has an enterprise version. It can be deployed in containers on private, public, and hybrid clouds, and it provides consistent, fast data synchronization between different data centers (DCs).
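
As a quick illustration, here is a minimal sketch using the redis-py client, covering the session-token use case mentioned earlier; the host, key names, and TTL are assumptions:

```python
import redis

# Assumes a Redis server reachable at localhost:6379; adjust as needed.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Store a session token with a 30-minute expiry.
r.setex("session:user:42", 1800, "opaque-or-jwt-token")

# Read it back; Redis returns None once the key has expired.
print(r.get("session:user:42"))
```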

Hazelcast: Hazelcast is a distributed computation and storage platform for consistent, low-latency querying, aggregation, and stateful computation against event streams and traditional data sources. It allows you to quickly build resource-efficient, real-time applications. You can deploy it at any scale, from small edge devices to a large cluster of cloud instances. A cluster of Hazelcast nodes shares both the data storage and the computational load, and it can dynamically scale up and down. When you add new nodes to the cluster, the data is automatically rebalanced across the cluster, and currently running computational tasks (jobs) snapshot their state and scale with a processing guarantee.
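
As a rough illustration, here is a minimal sketch using the hazelcast-python-client package; the cluster settings, map name, and keys are assumptions:

```python
import hazelcast

# Assumes a Hazelcast cluster reachable with default settings.
client = hazelcast.HazelcastClient()

# A distributed map is partitioned and replicated across cluster nodes.
products = client.get_map("products").blocking()
products.put("sku-1001", "wireless keyboard")
print(products.get("sku-1001"))

client.shutdown()
```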

Memcached: It is an open-source, high-performance, distributed memory object caching system. It is generic in nature but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) produced by database calls, API calls, or page rendering. It is simple yet powerful: its simple design promotes easy, quick deployment and development, it solves many data-caching problems, and its API is available in most commonly used languages.
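
A minimal sketch using the pymemcache client; the host, port, key, and TTL below are illustrative:

```python
from pymemcache.client.base import Client

# Assumes a memcached daemon listening on localhost:11211.
client = Client(("localhost", 11211))

# Cache a rendered page fragment for 60 seconds to offload the database.
client.set("page:home:fragment", b"<section>...</section>", expire=60)
print(client.get("page:home:fragment"))  # bytes until the entry expires

client.close()
```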

GemFire: It provides a distributed, in-memory data grid cache, powered by the open-source Apache Geode. It scales data services on demand to support high performance. It’s a key-value store that performs read and write operations at fast speeds. It offers highly available parallel message queues, continuous availability, and an event-driven architecture that scales dynamically with no downtime.

It provides multi-site replication. As data size requirements increase to support high-performance, real-time apps, it can scale linearly with ease. Applications get low-latency responses to data-access requests and always receive fresh data. It maintains transaction integrity across distributed nodes and supports the high-concurrency, low-latency data operations of applications. It also provides node failover and multi-geo (cross-data-center) replication to ensure applications are resilient, whether on-premises or in the cloud.

Ways to Deploy Distributed Caching on the Hybrid Cloud

These are the recommended ways to deploy and set up distributed caching, whether on a public or hybrid cloud:

  • Open-source distributed caching on traditional VM instances.
  • Open-source distributed caching on Kubernetes containers. I would recommend deploying on Kubernetes containers for high availability, resiliency, scalability, and faster performance.
  • Enterprise COTS distributed caching deployed on VMs and containers. I would recommend the enterprise version because it provides additional features and support.
  • Managed distributed caching services on the public cloud, covering both open-source and enterprise offerings such as Redis, Hazelcast, and Memcached.
  • Caching servers deployed across multiple environments, such as on-prem and public cloud together, multiple public clouds, or a single public cloud across different availability zones.

Conclusion

Distributed caching is now a de facto requirement for microservices apps deployed in a distributed environment on a hybrid cloud. It addresses concerns in important use cases such as maintaining user sessions when cookies are disabled on the browser side, improving API query read performance, avoiding the operational cost and database hits of repeated identical requests, and managing secret tokens for authentication and authorization.

A distributed cache syncs data across the hybrid cloud automatically, without manual intervention, and always serves the latest data. I would recommend the industry-standard distributed caching solutions Redis, Hazelcast, and Memcached. The distributed caching technology you choose for the cloud should be driven by your use cases.

Distributed Caching with Redis

When there is a need to improve the performance of web applications and microservices, every millisecond counts. An API gateway provides a powerful distributed caching feature where API responses can be cached and made available to all distributed microservices, which may span multiple servers in separate Kubernetes containers. In caching, objects/data are stored in high-speed memory (RAM) for faster access. Memory caching is effective because all microservices apps access the same set of cached data. The objective of distributed cache memory is to store program instructions and data that are used repeatedly by clients.

Distributed caching is an important caching strategy for decreasing a distributed microservices app’s latency and improving its concurrency and scalability. A cache eviction strategy should also be configured so that stale entries are regularly replaced with fresh data. According to research reported by http://www.marketingdive.com:

  • New research by Google has found that 53% of mobile website visitors will leave if a webpage doesn’t load within 3 seconds.
  • The average load time for sites is 19 seconds on a 3G connection and 14 seconds on a 4G connection.

API Caching with Redis Distributed Caching

Redis is an open-source, in-memory data structure project implementing a distributed caching, in-memory key-value database. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indexes.

Redis is a high-performance, in-memory data structure server (not just a key-value store). In large-scale distributed systems with a high number of API calls per second, Redis is a perfect distributed caching solution for this kind of distributed enterprise microservice architecture. It’s faster than the usual database calls because Redis serves data from in-memory (RAM) cache.
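
As an illustration of API response caching on top of Redis, here is a minimal, hypothetical Python decorator built on redis-py; the key scheme, TTL, and handler are assumptions, not a prescribed design:

```python
import functools
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def cached_api(ttl_seconds: int):
    """Cache a handler's JSON-serializable response in Redis so that
    repeated API calls skip the database entirely."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args):
            key = f"api:{func.__name__}:{args}"
            hit = cache.get(key)
            if hit is not None:
                return json.loads(hit)          # served from RAM, no DB call
            result = func(*args)
            cache.setex(key, ttl_seconds, json.dumps(result))
            return result
        return wrapper
    return decorator

@cached_api(ttl_seconds=60)
def get_order(order_id: int) -> dict:
    # Stand-in for a slow database or downstream service call.
    return {"order_id": order_id, "status": "shipped"}
```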

Apps are responsible for fetching data from the database and pushing it to the Redis cluster through a master node, which updates/writes all new cache entries into the Redis cluster. The Redis master then writes/updates the data on the Redis slave nodes. A Redis server runs in one of two modes:

  • Master Mode (Redis Master)
  • Slave Mode (Redis Slave/Redis Replica)

We can configure Redis clients to choose which node to write to and read from. It is recommended to serve writes through the Redis leader and reads through the Redis followers.
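
One common way to get this leader/follower split in Python is through Redis Sentinel, which redis-py supports out of the box; the service name "mymaster" and the port below are illustrative:

```python
from redis.sentinel import Sentinel

# Assumes a Sentinel process monitoring a replica group named "mymaster".
sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)

leader = sentinel.master_for("mymaster")   # writes go to the leader
follower = sentinel.slave_for("mymaster")  # reads come from a follower

leader.set("inventory:sku-1001", 17)
print(follower.get("inventory:sku-1001"))  # served by a replica once replicated
```

Note that replication is asynchronous, so a follower read can briefly lag the leader's latest write.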

Redis Cluster Architecture for High Availability (HA)

Every leader should have at least one follower; you can certainly have more followers than leaders, which is preferable to a single follower per leader: if one follower fails, you still have a backup follower for redundancy after failover.

Clients write to the leader node and read from follower nodes. Clients can connect directly to leader nodes for reads if the followers are unavailable or down. Every leader node replicates the cached data to its followers; there can be one or many, and all of this is configurable.

All leaders and followers check the health status of every node by using the gossip protocol.
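
For this cluster topology, redis-py also ships a cluster-aware client that discovers the other nodes from a single seed address and routes each key to the correct hash slot; the node address below is illustrative:

```python
from redis.cluster import RedisCluster

# Assumes a Redis Cluster with a reachable node at localhost:7000.
# read_from_replicas=True lets GETs be served by follower nodes.
rc = RedisCluster(host="localhost", port=7000, read_from_replicas=True)

rc.set("cart:user:42", "3 items")  # routed to the key's slot on a leader
print(rc.get("cart:user:42"))      # may be answered by that slot's follower
```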