In today’s cloud-driven landscape, organizations are transitioning from legacy monolithic systems to agile, scalable, and secure cloud-native solutions. Some are even forging new cloud-native applications. However, the concept of cloud-native design remains subjective, lacking a universal blueprint. This blog aims to provide clarity and guidance for designing cloud-native applications and container deployments. It addresses the intricacies of end-to-end cloud development, encompassing architecture, development, testing, deployment, security, and observability.
Traditionally, separate development teams handle these aspects in isolation. This blog bridges these gaps and outlines seven practical models for standardizing cloud-native architecture, drawing from real-world experience in cloud-native application design and development.
Seven Models of Cloud Native Design
Cloud-native entails developing microservices/micro-frontend apps and deploying them within containers on private, public, or hybrid cloud platforms. These platforms autonomously manage, automate, orchestrate, and secure these applications and their data. Container orchestration engines handle most cross-cutting concerns. This blog outlines key approaches to creating and deploying modern cloud-native apps, emphasizing performance optimization and cost efficiency. These apps leverage cloud-managed SaaS and automatically deploy new source code changes on cloud container platforms. We will now briefly explore these seven models and their business-value components.
1. Modern Design & Development Model
Components
Business Value/ROI
Beyond 12-Factor Principles
It’s a set of principles widely adopted by cloud-native applications and dev teams, offering usability, agility, scalability, modularity, and security. It saves operational costs and improves developer productivity. The principles abstract away cross-cutting concerns, or non-functional requirements (NFRs). There are 12+3 factors for modern cloud-native apps, where the 3 extra principles are recent additions; we call them the beyond-12-factor principles:
One codebase, one application: A single code repo should exist for a single responsibility. Every microservice should have its own code repo.
API first: New cloud-native app development should start by designing the API first.
Dependency management: Explicitly declare and isolate dependencies. All dependencies should be declared, with no implicit reliance on system tools or libraries.
Design, build, release, and run: The delivery pipeline should strictly consist of build, release, and run.
Configuration, credentials, and code: Configuration that varies between deployments should be stored in the environment.
Logs: Applications should produce logs as event streams and leave aggregation to the execution environment.
Disposability: Fast startup and shutdown make the system more robust and resilient.
Backing services: All backing services are treated as attached resources, attached and detached by the execution environment.
Environment parity: All environments should be as similar as possible.
Administrative processes: Any needed admin tasks should be kept in source control and packaged with the application.
Port binding: Self-contained services should make themselves available to other services via specified ports.
Stateless processes: Applications should be deployed as one or more stateless processes, with persisted data stored on a backing service.
Concurrency: Concurrency is achieved by scaling individual processes.
Telemetry: Add observability and monitoring.
Authentication and authorization (A&A): Provide proper IAM support for user and application-to-application security.
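As a small illustration of the “configuration, credentials, and code” and “port binding” factors, here is a minimal Java sketch that reads deployment-specific settings from the environment instead of hard-coding them; the variable names and defaults are purely illustrative:

```java
// Minimal sketch of externalized configuration per the beyond-12-factor principles.
// Environment variable names and defaults are illustrative, not prescriptive.
public final class AppConfig {

    // Configuration that varies between deployments lives in the environment.
    public static String databaseUrl() {
        return System.getenv().getOrDefault("DATABASE_URL",
                "jdbc:postgresql://localhost:5432/app");
    }

    // Port binding: the service exports itself on a port supplied by the environment.
    public static int port() {
        return Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
    }
}
```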
Domain Driven Design (DDD)
It’s a design pattern that helps identify separate business use case domains and their microservices. The best use case is migrating legacy monolithic apps to modern microservices and micro-frontends. Example: catalog, order, and payment services.
API Driven
It’s a method for API design that prioritizes business logic or services ahead of development, promoting service-to-service communication via API interfaces. Cloud-native apps utilize this approach, managing API endpoints with tools like GCP Apigee, Spring Cloud Gateway, and more.
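Since the text mentions Spring Cloud Gateway, here is a hedged sketch of what an API-first routing layer can look like using its Java DSL; the route IDs, paths, and backend service hosts are hypothetical:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // Routes API consumers to backend microservices by path prefix.
    // The service hosts below are hypothetical cluster-internal names.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("catalog", r -> r.path("/api/catalog/**")
                        .uri("http://catalog-service:8080"))
                .route("orders", r -> r.path("/api/orders/**")
                        .uri("http://order-service:8080"))
                .build();
    }
}
```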
Microservices Design
It’s an architectural style in which an application is developed as a collection of small, independently deployable services, each owning a specific business domain (usually derived via DDD). It provides a framework and guidelines to develop, deploy, and manage cloud-native apps.
Micro-Frontends Design
It’s a frontend application architecture where a big UI app is decomposed into smaller UI apps developed by separate dev teams. These micro UI apps can be deployed and managed independently. They can also be divided based on business use cases.
WebAssembly (Wasm)
It’s a next-generation UI technology that complements JavaScript; the two are companions as of now. Wasm is a binary instruction format that browsers can understand, compile, and execute faster than JavaScript, and its lighter payload makes web apps perform better.
Modern Databases
In the past, traditional SQL databases were the only option. Today, we have a wide variety of modern databases, such as NoSQL stores, suited to different purposes. Additionally, many companies, both public cloud providers and independent software vendors, offer Database as a Service (DBaaS): they expose databases to applications through APIs, manage those databases on their cloud infrastructure, and offer them to clients through subscription-based services.
Event-Driven Design
It’s an asynchronous communication architecture that has become a de facto standard for microservices communication. In this design, microservices connect with each other through a message broker, publishing and consuming messages on topics as events occur. It provides benefits like agility, high transaction throughput, high availability, cost reduction, reliability, and decoupling.
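To make this concrete, here is a minimal sketch of publishing an event, assuming Apache Kafka as the message broker; the broker address, topic name, and payload are illustrative:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address is illustrative; in a cluster this comes from config.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish an "order created" event; consumers subscribe to the
            // topic and react asynchronously, decoupled from this producer.
            producer.send(new ProducerRecord<>("order-events", "order-123",
                    "{\"status\":\"CREATED\"}"));
        }
    }
}
```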
Distributed Caching Design
It’s a design in which frequently used data is stored in high-speed memory outside the main data store and replicated across a caching cluster, so that microservice instances across servers and containers read from a single, central cache. It reduces database round trips, improves read performance, and keeps cached data consistent across instances.
2. Modern Infrastructure/DevOps – CI/CD
Components
Business Value/ROI
DevSecOps
It’s an advanced DevOps concept covering development, security, and operations. It provides tools and practices to secure data, code, and containers during the CI/CD process. It covers scanning source code for vulnerabilities, early threat/malware detection and prevention, security design audit reviews, static code analysis, and Docker container image, payload, and database security.
Immutable Infrastructure
This kind of infrastructure is never modified once it’s deployed on the cloud. For any change, new infrastructure has to be deployed and the older one retired. It reduces operational complexity and debugging time, and improves security. No patching or backward compatibility is needed.
Service Mesh
It’s a dedicated infra layer that controls and manages cross-cutting concerns out of the box like service discovery, API tracing, observability, microservices internal east-west communication, circuit-breaker/failure recovery, load balancing, traffic management, mTLS payload security, A&A, etc. It helps to move/extract cross-cutting configurations from business logic source code. It also moves the responsibility of common configurations from the business code developer to the DevOps developer/team.
Declarative API (IaC)
It’s a very powerful, modern way of managing infrastructure as code. It’s a desired-state system managed automatically by the DevOps tooling: we tell the system, “Please make sure the state I provide is maintained,” without manual intervention. This is the intelligent way Kubernetes and Terraform manage infrastructure.
Platform as a Service (PaaS)
It’s a cloud computing model that provides a ready-made development platform on top of infrastructure, so developers can write code and deploy it without understanding cloud-configuration complexities. It gives developers a friendly environment to build and deploy code on the cloud without help from the DevOps team, improving developer productivity.
3. Build & Deployment Model
Components
Business Value/ROI
Serverless
It’s a pure cloud-native development model that provides ready-to-use infrastructure on demand for app deployment. It saves a lot of cost because infrastructure is spun up in response to on-demand events. Cloud providers manage the underlying serverless infrastructure and automatically scale it up/down based on traffic. It works on an event-driven model.
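As a minimal sketch, assuming AWS Lambda and its aws-lambda-java-core library, a serverless function is just a handler that the platform invokes per event; the event field below is invented for illustration:

```java
import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A function that runs only when an event arrives; the cloud provider
// provisions, scales, and bills the underlying compute on demand.
public class OrderProcessor implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // "orderId" is an illustrative event field, not a fixed contract.
        return "processed order " + event.get("orderId");
    }
}
```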
GitOps
An infrastructure-operational framework in which the Git source code repository is integrated with the CI/CD DevOps pipeline, which triggers automatically on any commit to the Git repo. It provides many benefits: security, compliance, less complexity in creating/updating Kubernetes config scripts, improved developer productivity, automation, reliability, and faster development. It provides a self-managed declarative infrastructure.
4. Cloud Observability
Components
Business Value/ROI
Tracing
It tracks API interactions between microservices, capturing request/response data and response times. It helps identify the performance of primary APIs and locate buggy APIs. Apps send usage metrics, which APM and other observability tools read to visualize, generate reports, and send preventive notifications.
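A hedged sketch of what emitting a trace span looks like from application code, assuming the OpenTelemetry Java API; the service, span, and attribute names are illustrative:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class PaymentService {
    private final Tracer tracer = GlobalOpenTelemetry.getTracer("payment-service");

    public void charge(String orderId) {
        // Each hop gets a span; a collector stitches spans from all services
        // into an end-to-end trace of the API interaction.
        Span span = tracer.spanBuilder("charge-payment").startSpan();
        try (Scope scope = span.makeCurrent()) {
            span.setAttribute("order.id", orderId); // illustrative attribute
            // ... call the downstream payment provider here ...
        } finally {
            span.end();
        }
    }
}
```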
Performance Monitoring
Application monitoring measures application performance, availability, and user experience, and uses this data to identify and resolve issues before they impact customers. APM and performance testing tools handle this, and based on these reports, infrastructure or APIs can be scaled to meet SLAs.
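For illustration, here is a minimal sketch of recording an application metric with Micrometer, a common Java metrics facade; in production the registry would export to an APM backend rather than the in-memory SimpleMeterRegistry used here, and the metric name is invented:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class CheckoutMetrics {
    public static void main(String[] args) {
        // SimpleMeterRegistry keeps the sketch self-contained; a real app would
        // plug in a registry that ships metrics to its monitoring backend.
        MeterRegistry registry = new SimpleMeterRegistry();
        Timer timer = registry.timer("checkout.latency"); // illustrative name

        timer.record(() -> {
            // ... handle the checkout request here ...
        });

        System.out.println("recorded checkouts: " + timer.count());
    }
}
```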
5. 4C’s of Cloud Security
Components
Business Value/ROI
Container Security
This practice ensures that the container in which the app is deployed is also secured, with policies to guard against potential security vulnerabilities. Container security tools usually provide this protection.
Cluster Security
Practice to secure container orchestration cluster components and apps running on that cluster.
Cloud Security
It comprises the security of data-center and availability-zone servers in cloud environments. If the cloud layer is vulnerable (or configured in a vulnerable way), there is no guarantee that the components built on top of this base are secure. Public cloud providers offer many security services, such as DDoS protection.
Container Image Security
Docker images are stored in container repositories. These images must be scanned for security vulnerabilities. Many tools available with container repositories continuously scan updated images.
Endpoint Security
It’s a cybersecurity approach to defend end-user devices such as laptops, desktops, and mobile devices. An endpoint is any device that connects to the corporate network from outside its firewall. An endpoint security strategy is essential because every remote endpoint can be an entry point for an attack, and the number of endpoints is only increasing with the rapid pandemic-driven shift to remote work.
6. Cloud Platforms
Components
Business Value/ROI
Private
It’s hosted either on-prem (inside the organization) or on private instances isolated within physically secure boundaries at public cloud providers.
Public
These physical servers are shared across multiple tenants/organizations and are provided mainly by third-party vendors as a SaaS solution.
Hybrid
It’s a combination of private and public clouds. Sometimes organizations keep their databases and other sensitive information on-prem and host applications and other services on the public cloud. It’s the most sustainable model for cloud migration; most organizations (around 65%, by common industry estimates) prefer hybrid models. Public cloud providers offer hybrid cloud orchestration tools to manage multi-cloud setups. A combination of multiple public clouds is also sometimes called a hybrid cloud.
7. Automation
Components
Business Value/ROI
BDD
Behavior-Driven Development is a framework based on the given-when-then model. It focuses mainly on the behavior of the product and user acceptance criteria, expressed in the simple, English-like Gherkin language.
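A given-when-then scenario maps onto plain Java step definitions. Here is a hedged sketch using Cucumber’s Java bindings; the scenario and step wording are invented for illustration:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Step definitions for a hypothetical Gherkin scenario:
//   Given a cart with 2 items
//   When the user checks out
//   Then an order is created
public class CheckoutSteps {
    private int items;
    private boolean orderCreated;

    @Given("a cart with {int} items")
    public void aCartWithItems(int count) {
        this.items = count;
    }

    @When("the user checks out")
    public void theUserChecksOut() {
        this.orderCreated = items > 0; // stand-in for calling real checkout logic
    }

    @Then("an order is created")
    public void anOrderIsCreated() {
        if (!orderCreated) throw new AssertionError("expected an order to be created");
    }
}
```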
Chaos Engineering
It’s a method of testing distributed microservice/micro-frontend apps deployed on the cloud that deliberately introduces failures and faulty scenarios to verify resilience in the face of random disruptions. These disruptions can cause applications to respond unpredictably and break under pressure; chaos engineers detect those issues early. It’s a must for any true cloud-native app.
Conclusion
Every application has different types and needs, and the cloud-native definition differs across apps and organizations. The same seven models can’t fit every cloud-native application architecture; choices are often driven by business units, technology compliance, cost, and operational overhead.
Service Meshes have been gaining a lot of popularity lately, more so amongst Spring and Java developers who wish to address cross-cutting concerns. But, are you wondering what exactly are Service Meshes? What are some of the popular types out there? And most importantly, what kind of problems do they actually solve? Well, look no further! This blog is here to provide you with the answers you seek.
What is a Service Mesh?
A service mesh is a dedicated infrastructure layer that helps manage communication between the various microservices within a distributed application. It acts as a transparent and decentralized network of proxies that are deployed alongside the application services. These proxies, often referred to as sidecars, handle service-to-service communication, providing essential features such as service discovery, load balancing, traffic routing, authentication, and observability.
By abstracting away the complexity of network communication, a service mesh enables developers to focus on application logic rather than dealing with the intricacies of networking code. It provides a consistent and flexible way to handle cross-service communication and allows for the implementation of advanced traffic management strategies, security policies, and observability mechanisms.
They provide a standardized approach to managing microservices communication, making it easier to monitor, secure, and control traffic within complex distributed systems.
Components of a Service Mesh
Service mesh architecture typically involves the following components and their interactions:
Data Plane: The data plane refers to a network of sidecar proxies deployed along with each service instance, so that it can communicate with the other services in the system. It acts as an intermediary between the service and the rest of the network. Sidecar proxies handle inbound and outbound traffic, intercepting communication and providing additional features.
Sidecar: It’s typically based on the Envoy proxy (as in Istio). It’s an additional container that runs in the same Kubernetes Pod and takes care of all cross-cutting concerns, following the sidecar container design pattern.
Application Traffic: Microservices communicate with one another through their sidecar containers; application traffic is essentially communication between the Envoy sidecar proxies.
Namespace: It’s the isolated space within a Kubernetes Pod where both containers (the sidecar and the microservice app) run side by side.
Control Plane: The control plane is the centralized management and configuration layer of the service mesh. It is responsible for controlling and coordinating the behavior of the sidecar proxies. It provides a control plane API that allows administrators to configure policies, rules, and settings for traffic management, security, and observability.
API Endpoints: API endpoints are the entry points through which services within the mesh communicate with each other.
Controllers: A controller is a component responsible for managing and controlling the behavior of the mesh. It is typically a software component that monitors the state and health of services, configures traffic routing and load balancing rules, enforces security policies, and handles other aspects of service-to-service communication within the mesh.
Service Discovery: Service discovery is an essential component in service mesh architecture. It enables services to dynamically locate and connect with each other without hard-coded addresses (see the Java sketch after this component list).
Certificate Authority: It provides and manages root and intermediate certificates and performs certificate signing operations.
Application Microservices: These are the individual services or microservices that make up the application. They are responsible for handling specific functions or tasks.
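As referenced under Service Discovery above, here is a minimal Java sketch of a caller addressing a peer service by a stable name rather than a hard-coded address; the Kubernetes-style DNS name and path are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InventoryClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The caller addresses the service by a stable name; service discovery
        // (and, in a mesh, the sidecar proxy) resolves it to a healthy instance
        // at runtime. The DNS name below is an illustrative Kubernetes-style name.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://inventory.default.svc.cluster.local/items/42"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("status: " + response.statusCode());
    }
}
```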
Use Case: E-commerce Application
Consider an e-commerce application: a service mesh would help manage the complex network of microservices responsible for different functions, such as inventory management, order processing, payment processing, and shipping.
The sidecar proxies would handle load balancing, ensuring that traffic is distributed efficiently across multiple instances of each service.
Additionally, the service mesh would provide secure communication between services by enforcing encryption and authentication using TLS. This would help protect sensitive customer information during transmission and prevent unauthorized access to critical services.
Traffic management features would allow operators to control and monitor the flow of requests, enabling them to perform tasks like routing certain requests to a newer version of a service for testing purposes or limiting the rate of incoming requests to prevent overloading.
The observability and monitoring capabilities of the service mesh would provide operators with real-time insights into the application’s performance, enabling them to identify and resolve issues promptly.
They could analyze metrics, logs, and traces to optimize the application’s performance, troubleshoot problems, and ensure a smooth customer experience.
Overall, a service mesh simplifies the management and enhances the resilience, security, and observability of a distributed application, making it an essential component in modern microservices architectures.
What problems do Service Meshes solve?
Service mesh solves several problems in the context of modern application architectures. Here are some of the key problems that service mesh addresses:
Service-to-service communication: In a microservices architecture, applications are composed of multiple independent services that need to communicate with each other. Service mesh provides a dedicated infrastructure layer to handle service-to-service communication, making it easier to manage and secure these interactions.
Service discovery and load balancing: As the number of services increases, it becomes challenging to keep track of their locations and distribute traffic efficiently. Service mesh offers service discovery and load balancing capabilities, allowing services to discover and connect to each other dynamically while automatically distributing the traffic load across multiple instances.
Traffic management and routing: Service mesh enables sophisticated traffic management and routing features, such as request routing based on service version, path, headers, or other attributes. It allows for traffic shifting, canary deployments, and A/B testing, empowering teams to implement complex deployment strategies with ease.
Resilience and fault tolerance: Service mesh provides mechanisms for implementing resilience and fault tolerance patterns, such as retries, timeouts, circuit breaking, and load shedding. These features help services handle failures gracefully, isolate issues, and prevent cascading failures across the system (a Resilience4j sketch follows this list).
Observability and Debugging: Service mesh provides developers with powerful observability features such as distributed tracing, metrics collection, and logging. These capabilities help developers gain insights into the behavior and performance of their services, allowing them to debug issues, trace requests across service boundaries, and optimize the performance of their applications.
Security and authentication: Service mesh strengthens the security of microservices architectures by providing features like transport-level encryption (TLS), mutual authentication, and authorization policies. It allows for fine-grained access control and identity management, enhancing the overall security posture of the system.
Tight coupling of source code: Cloud and cross-cutting configuration is often tightly coupled with business-logic source code, which makes the codebase heavy to manage and debug. This can make adding new business features, inserting additional code, and resolving issues cumbersome. Adopting a service mesh architecture allows the segregation of cross-cutting concerns from the business-logic source code: the mesh handles application configuration independently, in collaboration with DevOps platform/infrastructure teams.
Testing overhead of cross-cutting configuration concerns: Testing new features, during integration, regression, and load testing for feature releases, necessitates additional testing effort. It is crucial to test the entire codebase, including the cross-cutting configuration code, even for minor changes in the business logic. By adopting a service mesh approach, the business logic code becomes more concise and streamlined, resulting in easier and faster testing. Furthermore, developers find it simpler to write fewer JUnit and integration test cases.
Application performance issue: When business logic and cross-cutting configuration are combined, they need extra time to load, deploy, and run on app containers. It consumes extra CPU and RAM for even business-specific API calls, which can cause performance issues. In contrast, a service mesh utilizes a separate side-car container dedicated to running the cross-cutting concerns configuration code. This alleviates the load on the main application container, resulting in improved app performance. By running only the streamlined application business logic, the performance is enhanced.
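In a mesh, retries, timeouts, and circuit breaking live in the sidecar proxy. For comparison, and as referenced in the resilience item above, here is the same circuit-breaker pattern sketched in application code with the Resilience4j library; the thresholds and the downstream call are illustrative assumptions:

```java
import java.time.Duration;
import java.util.function.Supplier;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

public class InventoryCaller {
    public static void main(String[] args) {
        // Trip the breaker when half of recent calls fail; stay open for 10s.
        // These thresholds are illustrative, not recommendations.
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50)
                .waitDurationInOpenState(Duration.ofSeconds(10))
                .build();
        CircuitBreaker breaker = CircuitBreaker.of("inventory", config);

        // Wrap the remote call; while the breaker is open, calls fail fast
        // instead of piling onto an unhealthy downstream service.
        Supplier<String> guarded = CircuitBreaker.decorateSupplier(
                breaker, InventoryCaller::callInventoryService);

        System.out.println(guarded.get());
    }

    private static String callInventoryService() {
        // Hypothetical stand-in for a real remote call.
        return "42 items in stock";
    }
}
```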
What key features should you look for when selecting a Service Mesh?
Connect Kubernetes clusters: It provides connectivity between two or more Kubernetes clusters if it’s used with hybrid cloud technologies like Google Anthos, Azure Arc, AWS Outpost, VMware Tanzu Mission Control (TMC), etc. It could spread across on-premises, private, and public cloud providers.
Service discovery with the Ingress Controller and Ingress resources: It provides dynamic service discovery and routing to distributed microservice REST APIs across K8s clusters on multiple clouds with different dynamic IP addresses. It exposes the service by its service name through the Ingress Controller and Ingress resources, which can be used by any client or consumer. The ingress resource provides routing details to various services, and the ingress controller routes incoming requests to the API using the ingress resource.
Circuit breaker resiliency: A circuit breaker provides a retry mechanism when dependent services do not respond on the first attempt. A service mesh provides a powerful circuit-breaker feature that trips when a dependent service does not respond within a given deadline. Because of this, microservices are more resilient to downtime, since the mesh can reroute requests away from failed services.
API Tracing between microservices: It provides API tracing (API-to-API interactions) for microservices, capturing request and response interaction logs. This tracing helps improve API performance and SLAs and helps developers debug and diagnose bugs.
Observability: It provides a powerful mechanism to check application health and infrastructure resources like CPU and memory usage. It also collects application performance metrics and visualizes them on a web dashboard. Performance metrics can suggest ways to optimize communication in the runtime environment, and it covers both infrastructure and application monitoring.
Data Payload Security: It provides data encryption in transit between microservice API communications by applying two-way strong mTLS security encryption technology.
API Rate Limiting: It provides a mechanism to restrict the number of backend API calls and prevent denial-of-service (DoS/DDoS) attacks, in which thousands or even millions of requests hit backend APIs and can crash the entire backend software system and infrastructure (see the rate-limiter sketch after this list).
Load balancing: It provides load balancing via the built-in ingress controller mechanism, exposing microservices on Kubernetes clusters as external services through the ingress controller load balancer. The ingress controller can map and route client requests to distributed microservices based on ingress resources.
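As referenced in the API rate limiting item above: a mesh enforces rate limits at the proxy, but the underlying token-bucket idea can be sketched in application code, here with the Bucket4j library; the limits are illustrative assumptions:

```java
import java.time.Duration;
import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;

public class ApiRateLimiter {
    public static void main(String[] args) {
        // Allow at most 100 requests per minute; the numbers are illustrative.
        Bandwidth limit = Bandwidth.simple(100, Duration.ofMinutes(1));
        Bucket bucket = Bucket.builder().addLimit(limit).build();

        for (int i = 0; i < 105; i++) {
            if (bucket.tryConsume(1)) {
                // ... forward the request to the backend API ...
            } else {
                // Excess requests are rejected instead of crashing the backend.
                System.out.println("request " + i + " rejected: rate limit exceeded");
            }
        }
    }
}
```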
Popular Service Meshes
Istio (OSS)
Istio is an open-source service mesh platform that provides a set of tools and capabilities for managing and securing microservices-based applications. It aims to address common challenges associated with service-to-service communication, observability, security, and traffic management in complex distributed systems. At its core, Istio deploys a sidecar proxy, called Envoy, alongside each microservice in the application. This sidecar proxy intercepts and manages all inbound and outbound traffic for the service, allowing Istio to control and monitor the communication between services.
Advantages:
Istio has one of the largest service mesh communities online and is highly acclaimed and widely discussed on the internet. Its GitHub contributors outnumber Linkerd’s by a significant margin.
Furthermore, it offers support for both Kubernetes and VM modes.
Drawbacks:
Istio itself is open source, but adopting it doesn’t come for free: it demands a considerable time investment in reading the documentation, setting it up, ensuring proper functionality, and ongoing maintenance.
The implementation and integration of Istio into production can range from several weeks to several months, depending on the complexity of the infrastructure.
Using Istio requires a significant amount of resource overhead.
Unlike Linkerd, it lacks a built-in administrative dashboard.
Additionally, Istio mandates the use of its own ingress gateway.
The Istio control plane is exclusively supported within Kubernetes containers, meaning there is no VM mode available for the Istio control plane (VM support applies only to the data plane).
Linkerd
Linkerd is an open-source service mesh platform designed to provide observability, reliability, and security to microservices architectures. It is a Cloud Native Computing Foundation (CNCF) project and focuses on simplicity, performance, and ease of use.
Advantages
Linkerd leverages the expertise of its creators, who are former Twitter engineers with experience in developing the internal tool, Finagle. They gained valuable insights from working on Linkerd v1, which contributes to the refinement of the service mesh.
Being one of the pioneering service meshes, Linkerd enjoys an active and vibrant community, boasting more than 5,000 users on Slack, along with an engaged mailing list and Discord server.
The availability of comprehensive documentation and tutorials further enhances its appeal.
Linkerd has reached a level of maturity with the release of version 2.9, which is evident from its adoption by prominent corporations such as Nordstrom, eBay, Strava, Expedia, and Subspace.
Additionally, Linkerd offers paid enterprise-grade support through Buoyant, ensuring professional assistance is readily available.
Drawbacks
Using Linkerd service meshes to their full potential requires a significant learning curve. It is important to note that Linkerd is exclusively supported within Kubernetes containers and does not offer a VM-based or “universal” mode.
The Linkerd sidecar proxy is not Envoy; it is Buoyant’s own proxy, which gives Buoyant the flexibility to optimize it according to their requirements. However, this customization comes at the expense of the inherent extensibility offered by Envoy.
Consequently, Linkerd lacks support for essential features such as circuit breaking, delay injection, and rate limiting. Additionally, there is no straightforward API exposed for easy control of the Linkerd control plane, although a gRPC API binding can be found.
In case you wish to read more about the above service meshes comparison and what more they have to offer, you can read all about it here.
That’s not all; there are many more options in the market to choose from, such as HashiCorp Consul, Kuma, and AWS App Mesh.
Service mesh technology is a boon for developers. It increases developer productivity by delegating cross-cutting concerns from application source code to in-house DevSecOps. A service mesh provides a ton more features that solve developer challenges and increase developer productivity. It’s now a de facto standard for managing cross-cutting configuration code for cloud-native microservice apps on Kubernetes.
Distributed caching is a very important aspect of cloud-based applications, be it for on-prem, public, or hybrid cloud environments. It facilitates incremental scaling, allowing the cache to grow with the data. In this blog, we will explore distributed caching on the cloud and why it is useful for environments with high data volume and load. This blog will cover:
Challenges with Traditional Caching
What is Distributed Caching
Benefits of distributed caching on the cloud
Recommended Distributed Caching Database Tools
Ways to Deploy Distributed Caching on the cloud
Traditional Distributed Caching Challenges
Traditional distributed caching servers are usually deployed with limited storage and CPU on a few dedicated servers or virtual machines (VMs). Often these caching infrastructures reside in on-prem data centers (DCs) or on cloud VMs that are not resilient, highly available, or fault-tolerant. This kind of traditional caching comes with numerous challenges:
Traditional caching is in-process caching at the individual instance level. An in-process cache stores data locally at the application level, e.g., in EhCache. It doesn’t guarantee data consistency.
In-process caches create performance issues because they occupy extra memory and add garbage-collection overhead.
It’s not reliable, because it uses the same heap memory as the application. If the application crashes due to memory or other issues, the cached data is wiped out with it.
It’s hard to scale cache storage and CPU on a handful of servers, because these servers are often not auto-scalable.
High operational cost to manage infrastructure and underutilized hardware resources; these servers are managed manually on traditional DevOps infrastructure.
Traditional distributed caching is not containerized (not deployed on Kubernetes/Docker containers), so it is not easily scalable, resilient, or self-managed. There is also a greater chance of these few servers crashing when client load is higher than expected.
What is Distributed Caching
Caching is a technique to store frequently used data outside the main storage, in high-speed memory, to improve performance. In a microservices environment, apps are deployed as multiple instances across various servers/containers on the hybrid cloud. A single caching source is needed in a multi-cluster Kubernetes environment on the cloud to persist data centrally and replicate it across its own caching cluster. It serves as a single point of storage for cached data in a distributed environment.
Benefits of Distributed Caching on cloud
These are a few benefits of distributed caching:
Periodic caching of frequently used read REST APIs’ responses ensures faster API read performance.
Reduced database network calls by accessing cached data directly from distributed caching databases.
Resilience and fault tolerance by maintaining multiple copies of data at various caching databases in a cluster.
High availability by auto-scaling the cache databases, based on load or client requests.
Storage of session secret tokens like JSON Web Token (ID/JWT) for authentication & authorization purposes for microservices apps containers.
Provide faster read and write access in-memory if it’s used as a dedicated database solution for high-load mission-critical applications.
Avoid unnecessary roundtrip data calls to persistent databases.
Auto-scalable cloud infrastructure deployment.
Containerization of distributed caching libraries/solutions.
Provide consistent read data from any synchronized connected caching data centers (DC).
Minimal to no outage, high availability of caching data.
Faster data synchronization between caching data servers.
Recommended Distributed Caching Database Tools
Following are popular industry-recognized caching servers:
Redis
Memcached
GemFire
Hazelcast
Redis: It’s one of the most popular distributed caching services. It supports different data structures. It’s an open-source, in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker. It also has an enterprise version. It can be deployed in containers on private, public, and hybrid clouds. It provides consistent and fast data synchronization between different data centers (DCs).
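For illustration, here is a minimal cache-aside sketch against Redis using the Jedis client; the host, port, key, TTL, and stand-in database call are all assumptions of this sketch:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class ProductCache {
    public static void main(String[] args) {
        // Host, port, key, and TTL below are illustrative.
        try (JedisPool pool = new JedisPool("localhost", 6379);
             Jedis jedis = pool.getResource()) {

            String key = "product:42";
            String cached = jedis.get(key);
            if (cached == null) {
                // Cache miss: read from the primary database,
                // then cache the result for 5 minutes.
                String fromDb = loadProductFromDatabase(42);
                jedis.setex(key, 300, fromDb);
                cached = fromDb;
            }
            System.out.println(cached);
        }
    }

    private static String loadProductFromDatabase(int id) {
        // Hypothetical stand-in for a real database query.
        return "{\"id\":" + id + ",\"name\":\"widget\"}";
    }
}
```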
Hazelcast: Hazelcast is a distributed computation and storage platform for consistent, low-latency querying, aggregation, and stateful computation against event streams and traditional data sources. It allows you to quickly build resource-efficient, real-time applications. You can deploy it at any scale, from small edge devices to a large cluster of cloud instances. A cluster of Hazelcast nodes shares both the data storage and computational load and can dynamically scale up and down. When you add new nodes to the cluster, the data is automatically rebalanced across the cluster, and currently running computational tasks (jobs) snapshot their state and scale with a processing guarantee.
Memcached: It is an open-source, high-performance, distributed memory object caching system. It is generic in nature but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from the results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes easy, quick deployment and development. It solves many data caching problems and the API is available in various commonly used languages.
GemFire: It provides a distributed in-memory data grid cache, powered by the open-source Apache Geode. It scales data services on demand to support high performance. It’s a key-value store that performs read and write operations at fast speeds. It offers highly available parallel message queues, continuous availability, and an event-driven architecture that scales dynamically, with no downtime.
It provides multi-site replication. As data size requirements grow to support high-performance, real-time apps, it scales linearly with ease. Applications get low-latency responses to data access requests and always receive fresh data. It maintains transaction integrity across distributed nodes and supports the high-concurrency, low-latency data operations of applications. It also provides node failover and multi-geo (cross-data-center) replication to ensure applications are resilient, whether on-premises or in the cloud.
Ways to Deploy Distributed Caching on the Hybrid Cloud
These are the recommended ways to deploy and set up distributed caching, be it on the public cloud or a hybrid cloud:
Open-source distributed caching on traditional VM instances.
Open-source distributed caching on Kubernetes containers. I would recommend deploying on Kubernetes containers for high availability, resiliency, scalability, and faster performance.
Enterprise COTS distributed caching deployed on VMs or containers. I would recommend the enterprise version because it provides additional features and support.
Public cloud managed services for distributed caching, both open source and enterprise, such as Redis, Hazelcast, and Memcached.
Caching servers deployed across multiple environments: on-prem and public cloud together, multiple public clouds, or a single public cloud spanning different availability zones.
Conclusion
Distributed caching is now a de facto requirement for distributed microservices apps deployed on a hybrid cloud. It addresses important use cases like maintaining user sessions when cookies are disabled on the browser side, improving API query read performance, avoiding operational cost and repeated database hits for the same type of request, and managing secret tokens for authentication and authorization.
A distributed cache syncs data across the hybrid cloud automatically, without manual intervention, and always serves the latest data. I would recommend the industry-standard distributed caching solutions Redis, Hazelcast, and Memcached; choose the caching technology on the cloud that best fits your use cases.
It’s important to understand the benefits for a business that plans to migrate to the cloud and invest time and money. These are some generic, common benefits of adopting cloud technology with a modern application approach:
Smoother and faster app user experience: The cloud provides faster, highly available app interfaces, which improves the user experience. For example, AWS stores static web pages and images on nearby CDN (Content Delivery Network) servers, which yields faster, smoother application responses.
On-demand infrastructure scaling: The cloud provides on-demand horizontal/vertical scaling of compute, memory, and storage. Organizations need not predict infrastructure for peak load, and they save money by using only the resources they actually need.
No outage for users and clients: The cloud provides high availability; whenever an app server goes down, client load is diverted to another app server or a new one is created. User and client sessions are managed automatically by internal load balancers.
Less operational cost (OPEX): The cloud handles most infrastructure management automatically or through the provider. For example, PaaS (Platform as a Service) automates the entire platform with fewer DevOps resources, which saves a lot of operational cost.
Easy to manage: Cloud providers and PaaS platforms offer easy, intuitive web, CLI, and API-based consoles that integrate readily with CI/CD tools, IaC (Infrastructure as Code), and scripts. They can also be integrated with apps.
Release app features quickly to compete in the market: The cloud provides many ready-to-use services, so apps can be built and deployed quickly using agile, microservices-based development methodologies. It supports container orchestration services like Kubernetes, where small microservices can be deployed rapidly, enabling organizations to release new features quickly.
Increased security: Cloud solutions provide out-of-the-box security features at the application, network, data, and infrastructure levels. For example, AWS provides DDoS and OWASP protections along with firewalls.
Increased developer productivity: Cloud providers offer various tools and services that improve developer productivity, such as PaaS, Tanzu Build Service, the Spring framework, AWS Elastic Beanstalk, GCP, and OpenShift developer tools.
Modular teams: Cloud migration motivates dev and test teams to follow the modern microservices approach and work in agile on independent modules or microservices.
Public cloud’s pay-as-you-go pricing: Customers pay only for the infrastructure they use, so no extra resources are wasted. This pricing model saves a lot of cost.
Easy disaster recovery: The cloud is deployed across multiple data centers (DCs) or availability zones (AZs) for disaster recovery (DR), so if any site (DC/AZ) goes down, client or application load is automatically routed to another site via load balancers.
Business continuity: The cloud provides the processes and tools needed to manage business continuity (BC) for smooth, resilient business operations, including faster site recovery in case of disaster and data backups. It also provides enterprise compliance for various industries, such as HIPAA for health insurance.