Demystified Service Mesh Capabilities for Developers

Service meshes have been gaining a lot of popularity lately, especially among Spring and Java developers who want to address cross-cutting concerns. But what exactly is a service mesh? What are some of the popular options out there? And most importantly, what kinds of problems do they actually solve? Well, look no further! This blog is here to provide the answers you seek.

What is a Service Mesh?

A service mesh is a dedicated infrastructure layer that helps manage communication between the various microservices within a distributed application. It acts as a transparent and decentralized network of proxies that are deployed alongside the application services. These proxies, often referred to as sidecars, handle service-to-service communication, providing essential features such as service discovery, load balancing, traffic routing, authentication, and observability.

By abstracting away the complexity of network communication, a service mesh enables developers to focus on application logic rather than dealing with the intricacies of networking code. It provides a consistent and flexible way to handle cross-service communication and allows for the implementation of advanced traffic management strategies, security policies, and observability mechanisms.

They provide a standardized approach to managing microservices communication, making it easier to monitor, secure, and control traffic within complex distributed systems.

Components of a Service Mesh

Service mesh architecture typically involves the following components and their interactions:

Data Plane: The data plane refers to a network of sidecar proxies deployed along with each service instance, so that it can communicate with the other services in the system. It acts as an intermediary between the service and the rest of the network. Sidecar proxies handle inbound and outbound traffic, intercepting communication and providing additional features.

  1. Sidecar: A second container that runs in the same Kubernetes Pod as the application and takes care of all cross-cutting concerns, following the sidecar container design pattern. In Istio, it is based on the Envoy proxy (see the sketch after this list).
  2. Application Traffic: Microservices communicate with other microservices through their sidecar containers; application traffic is essentially the communication between the sidecar proxy containers.
  3. Namespace: The isolated space that a Kubernetes Pod provides, in which both containers (the sidecar and the microservice application) run side by side and share the same network.
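
To make the sidecar pattern concrete, here is a minimal sketch, assuming Istio is installed and using hypothetical names (a "shop" namespace and an "order-service" deployment). Labeling the namespace with istio-injection=enabled lets Istio inject the Envoy sidecar into every Pod scheduled there, so the application manifest stays free of proxy configuration.

```yaml
# Hypothetical names for illustration; assumes Istio is installed in the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    istio-injection: enabled   # Istio's webhook injects the Envoy sidecar into Pods here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  namespace: shop
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service              # application container: business logic only
          image: example/order-service:1.0.0
          ports:
            - containerPort: 8080
      # No proxy container is declared; the istio-proxy sidecar is added at admission time.
```

Removing the namespace label (and restarting the Pods) is all it takes to opt back out of injection.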

Control Plane: The control plane is the centralized management and configuration layer of the service mesh. It is responsible for controlling and coordinating the behavior of the sidecar proxies. It provides a control plane API that allows administrators to configure policies, rules, and settings for traffic management, security, and observability.

  1. API Endpoints: API endpoints are the entry points through which services within the mesh can communicate with each other.
  2. Controllers: A controller is a component responsible for managing and controlling the behavior of the mesh. It is typically a software component that monitors the state and health of services, configures traffic routing and load balancing rules, enforces security policies, and handles other aspects of service-to-service communication within the mesh.
  3. Service Discovery: Service discovery is an essential component in service mesh architecture. It enables services to dynamically locate and connect with each other without hard-coded addresses.
  4. Certificate Authority: It provides and manages root and intermediate certificates and performs certificate signing operations. 

Application Microservices: These are the individual services or microservices that make up the application. They are responsible for handling specific functions or tasks.

Use Case: E-commerce Application

Consider an e-commerce application: a service mesh would help manage the complex network of microservices responsible for different functions, such as inventory management, order processing, payment processing, and shipping.

  • The sidecar proxies would handle load balancing, ensuring that traffic is distributed efficiently across multiple instances of each service.
  • Additionally, the service mesh would provide secure communication between services by enforcing encryption and authentication using TLS. This would help protect sensitive customer information during transmission and prevent unauthorized access to critical services.
  • Traffic management features would allow operators to control and monitor the flow of requests, enabling them to perform tasks like routing certain requests to a newer version of a service for testing purposes or limiting the rate of incoming requests to prevent overloading (a canary-routing sketch follows this list).
  • The observability and monitoring capabilities of the service mesh would provide operators with real-time insights into the application’s performance, enabling them to identify and resolve issues promptly.
  • They could analyze metrics, logs, and traces to optimize the application’s performance, troubleshoot problems, and ensure a smooth customer experience.
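
As a concrete illustration of the traffic-management point above, here is a minimal Istio sketch, assuming the mesh is in place and using a hypothetical payment-service with two deployed versions labeled v1 and v2. It shifts 10% of requests to the newer version for canary testing.

```yaml
# Hypothetical service name; assumes Deployments labeled version: v1 and version: v2.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: payment-service
spec:
  host: payment-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payment-service
spec:
  hosts:
    - payment-service
  http:
    - route:
        - destination:
            host: payment-service
            subset: v1
          weight: 90            # 90% of traffic stays on the stable version
        - destination:
            host: payment-service
            subset: v2
          weight: 10            # 10% goes to the canary
```

Promoting the canary is then just a matter of raising the v2 weight step by step, and rolling it back if errors appear.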

Overall, a service mesh simplifies the management and enhances the resilience, security, and observability of a distributed application, making it an essential component in modern microservices architectures.

What problems do Service Meshes solve?

Service mesh solves several problems in the context of modern application architectures. Here are some of the key problems that service mesh addresses:

  1. Service-to-service communication: In a microservices architecture, applications are composed of multiple independent services that need to communicate with each other. Service mesh provides a dedicated infrastructure layer to handle service-to-service communication, making it easier to manage and secure these interactions.
  2. Service discovery and load balancing: As the number of services increases, it becomes challenging to keep track of their locations and distribute traffic efficiently. Service mesh offers service discovery and load balancing capabilities, allowing services to discover and connect to each other dynamically while automatically distributing the traffic load across multiple instances.
  3. Traffic management and routing: Service mesh enables sophisticated traffic management and routing features, such as request routing based on service version, path, headers, or other attributes. It allows for traffic shifting, canary deployments, and A/B testing, empowering teams to implement complex deployment strategies with ease.
  4. Resilience and fault tolerance: Service mesh provides mechanisms for implementing resilience and fault tolerance patterns, such as retries, timeouts, circuit breaking, and load shedding. These features help services handle failures gracefully, isolate issues, and prevent cascading failures across the system.
  5. Observability and Debugging: Service mesh provides developers with powerful observability features such as distributed tracing, metrics collection, and logging. These capabilities help developers gain insights into the behavior and performance of their services, allowing them to debug issues, trace requests across service boundaries, and optimize the performance of their applications.
  6. Security and authentication: Service mesh strengthens the security of microservices architectures by providing features like transport-level encryption (TLS), mutual authentication, and authorization policies. It allows for fine-grained access control and identity management, enhancing the overall security posture of the system (see the mTLS sketch after this list).
  7. Tight coupling of source code: Cloud configuration is often tightly coupled with the business logic source code, which makes the codebase heavy and hard to manage and debug. This makes adding new business features, inserting additional code, and resolving issues a cumbersome task. Adopting a service mesh architecture segregates cross-cutting concerns from the business logic source code; the service mesh then handles all application configuration independently, in collaboration with the DevOps platform/infrastructure teams.
  8. Testing overhead of cross-cutting configuration concerns: Releasing new features requires extra testing effort during integration, regression, and load testing, because the entire codebase, including the cross-cutting configuration code, must be tested even for minor changes in the business logic. With a service mesh, the business logic code becomes more concise and streamlined, which makes testing easier and faster, and developers need to write fewer JUnit and integration test cases.
  9. Application performance issues: When business logic and cross-cutting configuration are combined, the application takes longer to load, deploy, and run on its container and consumes extra CPU and RAM even for business-specific API calls, which can cause performance issues. In contrast, a service mesh runs the cross-cutting configuration code in a dedicated sidecar container, which takes that load off the main application container; running only the streamlined business logic improves application performance.
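
For the security point above (item 6), here is a minimal Istio sketch, reusing the hypothetical shop namespace: a single PeerAuthentication resource switches all service-to-service traffic in that namespace to strict mutual TLS, without touching any application code.

```yaml
# Assumes Istio sidecars are already injected into the workloads in this namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: shop
spec:
  mtls:
    mode: STRICT   # sidecars accept only mutually authenticated, encrypted traffic
```

Once STRICT mode is enforced, plaintext callers without a sidecar are rejected, which is exactly the behavior the policy is meant to guarantee.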

What key features should you look for when selecting a Service Mesh?

  • Connect Kubernetes clusters: It provides connectivity between two or more Kubernetes clusters when used with hybrid cloud technologies like Google Anthos, Azure Arc, AWS Outposts, VMware Tanzu Mission Control (TMC), etc. These clusters can span on-premises, private cloud, and public cloud providers.
  • Service discovery with the Ingress Controller and Ingress resources: It provides dynamic service discovery and routing to distributed microservice REST APIs across K8s clusters on multiple clouds with different dynamic IP addresses. It exposes the service by its service name through the Ingress Controller and Ingress resources, which can be used by any client or consumer. The ingress resource provides routing details to various services, and the ingress controller routes incoming requests to the API using the ingress resource.
  • Circuit breaker resiliency: A service mesh can retry a request when a dependent service does not respond on the first attempt, and it can trip a circuit breaker when a dependent service repeatedly fails to respond within a given deadline. This makes microservices more resilient to downtime, because the mesh reroutes requests away from failed services (see the sketch after this list).
  • API Tracing between microservices: It traces request and response interactions between APIs (API-to-API calls). This tracing helps improve API performance and meet SLAs, and it helps developers debug and diagnose bugs.
  • Observability: It provides a powerful mechanism to check application health and infrastructure resources such as CPU and memory usage. It also collects application performance metrics and visualizes them on a web dashboard; these metrics can suggest ways to optimize communication in the runtime environment and support both infrastructure and application monitoring.
  • Data Payload Security: It provides data encryption in transit for microservice API communication by applying mutual TLS (mTLS) between the sidecar proxies.
  • API Rate Limiting: It provides a mechanism to restrict the number of backend API calls and mitigate denial-of-service (DoS/DDoS) attacks, in which thousands or even millions of requests hit backend APIs and can crash the entire backend software system and infrastructure.
  • Load balancing: It provides load balancing through its built-in ingress controller mechanism, exposing microservices on Kubernetes clusters as external services through the ingress controller load balancer. The ingress controller can map and route client requests to distributed microservices based on ingress resources.
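
To ground the circuit breaker bullet above, here is a minimal Istio sketch using a hypothetical inventory-service: a DestinationRule that caps concurrent traffic and temporarily ejects instances that keep returning 5xx errors, so requests are routed to healthy replicas instead.

```yaml
# Hypothetical service name; field values are illustrative, not recommendations.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: inventory-service
spec:
  host: inventory-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100            # cap on concurrent TCP connections
      http:
        http1MaxPendingRequests: 50    # queue limit before requests are rejected
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 5          # trip after 5 consecutive server errors
      interval: 10s                    # how often hosts are evaluated
      baseEjectionTime: 30s            # how long a failing host stays ejected
      maxEjectionPercent: 50
```

Other meshes expose similar behavior through their own resources, so treat these exact fields as Istio-specific.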

Popular Service Meshes

Istio (OSS)

Istio is an open-source service mesh platform that provides a set of tools and capabilities for managing and securing microservices-based applications. It aims to address common challenges associated with service-to-service communication, observability, security, and traffic management in complex distributed systems. At its core, Istio deploys a sidecar proxy, called Envoy, alongside each microservice in the application. This sidecar proxy intercepts and manages all inbound and outbound traffic for the service, allowing Istio to control and monitor the communication between services.

Advantages:

  • Istio has one of the largest communities of any service mesh and is widely acclaimed and discussed online. Its GitHub contributors outnumber those of Linkerd by a significant margin.
  • Furthermore, it offers support for both Kubernetes and VM modes.

Drawbacks:

  • Istio is open source and free to use, but it comes with an operational cost: it demands a considerable time investment in reading the documentation, setting it up, ensuring proper functionality, and ongoing maintenance.
  • The implementation and integration of Istio into production can range from several weeks to several months, depending on the complexity of the infrastructure.
  • Using Istio requires a significant amount of resource overhead. 
  • Unlike Linkerd, it lacks a built-in administrative dashboard. 
  • Additionally, Istio mandates the use of its own ingress gateway. 
  • The Istio control plane is only supported within Kubernetes containers, meaning there is no VM mode for the control plane itself.

Linkerd

Linkerd is an open-source service mesh platform designed to provide observability, reliability, and security to microservices architectures. It is a Cloud Native Computing Foundation (CNCF) project and focuses on simplicity, performance, and ease of use.

Advantages

  • Linkerd leverages the expertise of its creators, who are former Twitter engineers with experience in developing the internal tool, Finagle. They gained valuable insights from working on Linkerd v1, which contributes to the refinement of the service mesh. 
  • Being one of the pioneering service meshes, Linkerd enjoys an active and vibrant community, boasting more than 5,000 users on Slack, along with an engaged mailing list and Discord server. 
  • The availability of comprehensive documentation and tutorials further enhances its appeal.
  • Linkerd has reached a level of maturity with the release of version 2.9, which is evident from its adoption by prominent corporations such as Nordstrom, eBay, Strava, Expedia, and Subspace. 
  • Additionally, Linkerd offers paid enterprise-grade support through Buoyant, ensuring professional assistance is readily available.

Drawbacks

  • Using Linkerd to its full potential involves a significant learning curve. It is important to note that Linkerd is exclusively supported within Kubernetes containers and does not offer a VM-based or “universal” mode.
  • The Linkerd sidecar proxy is not Envoy; it is Linkerd's own proxy, which gives Buoyant the flexibility to optimize it to their requirements. However, this customization comes at the expense of the inherent extensibility offered by Envoy.
  • Consequently, Linkerd lacks support for features such as circuit breaking, delay injection, and rate limiting. Additionally, there is no straightforward API exposed for easy control of the Linkerd control plane, although a gRPC API binding is available.

If you wish to read more about the comparison of these service meshes and what more they have to offer, you can read all about it here.

That's not all; there are many more options on the market for you to choose from.

Conclusion

Service mesh technology is a boon for developers. It increases developer productivity by delegating cross-cutting concerns from application source code to in-house DevSecOps teams. A service mesh provides many more features that solve developer challenges and increase developer productivity. It is now a de facto standard for managing cross-cutting configuration code for cloud-native microservice apps on Kubernetes.

Kubernetes alternatives to Spring Java framework

Spring Cloud and Kubernetes complement each other to build a cloud-native platform and run microservices on Kubernetes. Kubernetes natively provides many features that are similar to those of Spring Cloud and Spring Config Server.

The Spring framework has been around for many years. Even today, many organizations prefer Spring because it provides many advanced features through simple, ready-to-use libraries. It is a great arrangement when Spring developers only take care of the business logic source code while configuration code is managed by DevOps/DevSecOps operations teams or automated CI/CD tools.

Important note about Netflix OSS: Starting from the Spring Cloud Greenwich release train, the Netflix OSS modules Hystrix, Ribbon, and Zuul entered maintenance mode and are now deprecated. This means no new features will be added to these modules; the Spring Cloud team will only fix bugs and security issues. Maintenance mode does not include the Eureka module. Spring provides regular releases and patches for its own libraries, whereas these Netflix OSS modules are largely inactive and are falling out of use in organizations.

Let’s discuss a couple of challenges of cloud configuration code with Spring Cloud and Spring Config Server for microservices architecture:

  • Tight coupling of business logic and configuration source code: Spring configuration is tightly coupled with the business logic code, which makes the code heavy and also makes it difficult to debug production issues. It slows down releases of new business features because the business logic is tightly integrated with the cross-cutting configuration source code.
  • Extra coding and testing effort: New feature releases require extra testing effort, mainly during integration, regression, and load testing. The entire codebase, including the cross-cutting configuration, has to be tested even for minor changes in the business logic.
  • Slow build and deployment: Heavy code takes extra time to load, deploy, and run because of the strong coupling of configuration and business logic, and it consumes extra CPU and RAM for all business-specific API calls.

Spring doesn’t provide these important features:

  • Continuous Integration (CI): Spring doesn't address any CI-related concerns; it only handles building microservices.
  • Self-healing of infrastructure: Spring doesn't handle self-healing or restarting apps after crashes; it only provides health check APIs and observability features through Actuator/Micrometer support with Prometheus. Kubernetes covers this gap (see the sketch after this list).
  • Dependency on the Java framework: It only supports the Java programming language and the JVM ecosystem.
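
For contrast, here is a minimal sketch of how Kubernetes handles self-healing for a Spring Boot service (hypothetical image name; it assumes Spring Boot Actuator's liveness and readiness health groups are exposed at the paths shown). The kubelet restarts the container when the liveness probe fails and keeps unready Pods out of load balancing.

```yaml
# Hypothetical image and probe paths; assumes Actuator health probes are enabled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: example/order-service:1.0.0
          ports:
            - containerPort: 8080
          livenessProbe:               # failure here makes the kubelet restart the container
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:              # failure here removes the Pod from Service endpoints
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 20
            periodSeconds: 5
```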

Kubernetes alternatives to Spring Cloud

Here are a few Kubernetes alternatives to the Spring Cloud libraries:

  • Service discovery. Spring Cloud: Netflix Eureka; not recommended for modern cloud-native applications. Kubernetes: K8s provides cluster-internal Services that expose microservices across all namespaces, since kube-dns allows lookup; it also integrates with the ingress controller and ingress resources to intelligently route incoming traffic to the designated service.
  • Load balancing. Spring Cloud: Netflix Ribbon provides client-side load balancing for HTTP/TCP requests. Kubernetes: K8s provides load balancer Services; it is the responsibility of the K8s Service to balance the load.
  • Configuration management. Spring Cloud: Spring Config Server externalizes configuration management through code configuration. Kubernetes: K8s provides ConfigMaps and Secrets to externalize configuration natively on the infrastructure side, maintained by the DevOps team (see the sketch after this list).
  • API gateway. Spring Cloud: Spring Cloud Gateway and Zuul 2 provide all API gateway features such as request routing, caching, authentication, authorization, API-level load balancing, rate limiting, circuit breaking, and so on. Kubernetes: K8s Services and ingress resources fulfill partial API gateway features such as routing and load balancing; K8s also supports service mesh tools like Istio, which provide most API-gateway-related features such as service discovery and API tracing, but this is not a replacement for an external API gateway.
  • Resilience and fault tolerance. Spring Cloud: Resilience4j and Spring Retry provide resiliency and fault tolerance mechanisms such as circuit breakers, timeouts, and retries. Kubernetes: K8s provides comparable features through health checks, resource isolation, and the service mesh.
  • Scaling and self-healing. Spring Cloud: Spring Boot Admin is used for managing and monitoring Spring Boot applications; each application is considered a client and registers with the admin server, and Spring Boot Actuator endpoints help monitor the environment. Kubernetes: K8s restarts failed containers, reschedules Pods automatically, and scales workloads with the Horizontal Pod Autoscaler.
  • Batch jobs. Spring Cloud: Spring Batch, Spring Cloud Task, and Spring Cloud Data Flow (SCDF) can schedule and run batch jobs on demand; Spring tasks can run short-lived jobs such as a Java process or a shell script. Kubernetes: K8s provides Jobs and scheduled CronJobs, which execute batch workloads with limited scheduling features and also work together with Spring Batch.
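
To illustrate the configuration-management row above, here is a minimal sketch (hypothetical names) of externalizing a Spring Boot service's settings with a plain ConfigMap owned by the DevOps team instead of a Spring Config Server. The keys are injected into the container as environment variables, which Spring Boot reads like any other environment-based properties.

```yaml
# Hypothetical keys and names for illustration only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: order-service-config
data:
  SPRING_PROFILES_ACTIVE: "prod"
  PAYMENT_SERVICE_URL: "http://payment-service:8080"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: example/order-service:1.0.0
          envFrom:
            - configMapRef:
                name: order-service-config   # every key becomes an environment variable
```

Secrets follow the same pattern for sensitive values, and a change to the ConfigMap requires only a rollout, not a rebuild of the application image.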

Conclusion

Spring provides tons of features and has been a proven Java-based framework for many years! Kubernetes provides complementary features that are comparable to Spring's and can take over the job of keeping configuration code out of the business logic. Cloud-native microservices architecture (MSA) and the twelve/fifteen-factor principles recommend keeping cross-cutting configuration code outside of the business logic code; configuration should be stored and managed separately. In MSA, the same configuration can be shared across many microservices, which is why configuration should be stored externally and be available to all microservices applications. These configurations should also be managed by DevOps teams.

This helps developers focus only on business logic programming. It will definitely make releases faster with lower development costs, and building and deploying microservices apps will also be faster. Kubernetes provides better alternatives to these legacy Spring library features, many of which are deprecated or in maintenance mode. Kubernetes also provides service mesh support.

These Kubernetes alternatives are really helpful for microservices applications and complementary to the Spring Java framework for microservices development!

My first book release!! Cloud Native Microservices with Spring and Kubernetes (453 pages)

I am happy to announce the release of my first book, “Cloud Native Microservices with Spring and Kubernetes”, with BPB Publications! It is all about designing, building, and deploying scalable cloud-native microservices on containers using the Spring framework and Kubernetes. I need your support! Please buy and review this book on Amazon, and share the book details with your software engineer colleagues and friends.

The main objective of this book is to give an overview of cloud-native microservices, their architecture, design patterns, best practices, and use cases, with practical coverage of modern applications. The book builds a strong understanding of microservices, the API-first approach, testing, observability, the API gateway, the service mesh, and Kubernetes alternatives to Spring Cloud. It covers the implementation of various design patterns for developing cloud-native microservices using the Spring framework, Docker, and Kubernetes, along with containerization concepts and hands-on code exercises.

After reading this book, the readers will have a holistic understanding of building, running, and managing cloud native microservices applications on Kubernetes containers.

It is the first book on this subject in India by an Indian writer, and it is also more economical than foreign publications. The book is about learning software application design and development using microservices, Spring, and Kubernetes-based technologies. It is useful for software developers, cloud engineers, DevOps engineers, and technical architects.

Available on:

This book is available in paperback and Kindle (eBook via the free Kindle app on Android/iOS/laptop/desktop) editions on Amazon and BPB in most countries of North America, Europe, the Middle East, Asia, and Africa.

Refer to the free preview and TOC/index of this book:

https://drive.google.com/drive/folders/1Lq280d6hUcyh2xm8cFuyt1vADNzk61dg?usp=sharing

What you will learn:

  • Learn fundamentals of microservice and design patterns.
  • Perform end-to-end microservices testing using Cucumber.
  • Learn microservices development using Spring Boot and Kubernetes.
  • Learn to develop reactive, event-driven, and batch microservices.
  • Implement API gateway, authentication & authorization, load balancing, caching, and rate limiting.
  • Learn observability and monitoring techniques of microservices.

Who this book is for:
This book is for Spring developers, microservice developers, cloud engineers, DevOps consultants, technical architects, and solution architects.

Table of Contents (Chapters):
1. Overview of Cloud Native microservices
2. Microservice design patterns
3. API first approach
4. Build microservices using the Spring Framework
5. Batch microservices
6. Build reactive and event-driven microservices
7. The API gateway, security, and distributed caching with Redis
8. Microservices testing and API mocking
9. Microservices observability
10. Containers and Kubernetes overview and architecture
11. Run microservices on Kubernetes
12. Service Mesh and Kubernetes alternatives of Spring Cloud

KEY FEATURES:

  • Complete coverage on how to design, build, run, and deploy modern cloud native microservices.
  • Includes numerous sample code exercises on microservices, Spring and Kubernetes.
  • Develop a stronghold on Kubernetes, Spring, and the microservices architecture.
  • Complete guide of application containerization on Kubernetes containers.
  • Coverage on managing modern applications and infrastructure using observability tools.

Chapter 1: Overview of Cloud Native microservices, introduces cloud-native modern applications, a cloud-first overview, benefits, types of clouds, classification, and the need for cloud-native modern applications. It covers a detailed microservices (MSA) overview, characteristics, motivations, benefits, best practices, architecture principles, challenges and solutions, the application modernization spectrum, twelve-factor apps, and beyond-twelve-factor apps.

Chapter 2: Microservice design patterns, introduces various microservices design patterns with use cases, advantages, and disadvantages.

Chapter 3: API first approach, discusses the fundamentals of the API-first approach. It discusses the REST overview, API model, best practices, design principles, components, security, communication protocols, and how to document APIs dynamically with OpenAPI Swagger. It also discusses API design planning, specifications, API management tools, and testing APIs with the SwaggerHub inspector and the Postman REST client.

Chapter 4: Build microservices using the Spring Framework, is a key chapter on Spring Boot and Spring Cloud components with hands-on lab exercises. It covers the steps to build microservices using the REST API framework, along with the Spring Cloud Config Server and practical aspects of microservice resiliency.

Chapter 5: Batch microservices, introduces batch microservices, use cases, Spring Cloud Task, and Spring Batch. It includes hands-on lab exercises using Spring Cloud Data Flow (SCDF) and Kafka. It also discusses a few Spring Batch practices, auto-scaling techniques, batch orchestration, and composition methods for sequential or parallel batch processing. Last but not least, it talks about alerts and monitoring for Spring Cloud Task and Spring Batch.

Chapter 6: Build reactive and event-driven microservices, describes building reactive microservices, non-blocking synchronous APIs, and event-driven asynchronous microservices. It covers the steps to develop sample reactive microservices with Spring's Project Reactor and Spring WebFlux, as well as event-driven asynchronous microservices. It discusses Spring Cloud Stream, Zookeeper, Spring Boot, and an overview of Kafka, and includes hands-on lab exercises for event-driven asynchronous microservices using Spring Cloud Stream and Kafka.

Chapter 7: API gateway, security, and distributed caching with Redis, introduces the API gateway overview, features, advantages, and best practices. It covers hands-on lab exercises to expose the REST APIs of microservices externally with Spring Cloud Gateway, a distributed caching overview with hands-on lab exercises using Redis, and API gateway rate limiting with its implementation using Redis and Spring Boot. Last but not least, it covers best practices of API security and the implementation of SSO using Spring Cloud Gateway, Spring Security, OAuth2, Keycloak, OpenID, and JWT tokens.

Chapter 8: Microservices testing and API mocking, describes important aspects of microservices testing practices, challenges, benefits, testing strategy, the testing pyramid, and different types of microservices testing. It covers the implementation of an integration testing framework using Behavior-Driven Development (BDD) with hands-on code examples, discusses microservices testing tools and best practices, and covers the role of testing in the microservices CI/CD pipeline. Last but not least, it talks about API mocking and a hands-on lab implementation with the WireMock framework.

Chapter 9: Microservices observability, covers a detailed overview of observability and monitoring techniques for microservices with the Spring Actuator, Micrometer health APIs, and Wavefront APM. It covers an application logging overview, best practices, simple logging, and log aggregation of distributed microservices with an implementation using Elasticsearch, Fluentd, and Kibana (EFK) on Kubernetes. It discusses the need for APM performance and telemetry monitoring tools for distributed microservices and how to trace multiple microservices in a distributed environment. It also covers a hands-on lab implementation of monitoring microservices with Prometheus and Grafana.

Chapter 10: Containers and Kubernetes overview and architecture, is a key chapter which introduces containers, Docker, Docker Engine containerization, Buildpacks, the components of Dockerfiles, and how to build and run Dockerfiles and inspect Docker images. It covers Docker image registries and how to persist Docker images in container image registries. It covers an overview of Kubernetes, the need for it, and its architecture. Last but not least, it covers a detailed introduction to Kubernetes resources.

Chapter 11: Run microservices on Kubernetes, discusses practical aspects of Kubernetes installation and configuration, with monitoring and visualization tools such as Octant and Proxy. It discusses how to create and manage Kubernetes clusters in detail, with hands-on exercises on creating Docker images of Java microservices, pushing them to the Docker Hub container image registry, and deploying them to Kubernetes clusters. It covers hands-on lab examples of exposing the API endpoints of microservices outside the Kubernetes cluster by using the Nginx ingress controller. Last but not least, it covers various popular and useful Kubernetes application deployment and configuration management tools.

Chapter 12: Service Mesh and Kubernetes alternatives of Spring Cloud, covers a detailed overview and the benefits of GitOps and service mesh. It covers the Istio service mesh architecture and the deployment of microservices on Kubernetes with Argo CD. Last but not least, it discusses various Kubernetes alternatives to Spring Cloud projects and popular cloud buzzwords!

About the Author
Rajiv Srivastava is the founder of cloudificationzone.com, which is a cloud native modern application tech blog site. He is a cloud solution architect and modern application specialist with 17+ years of work experience in software development and architectural design.

KEYWORDS 

1. Spring framework
2. API first approach
3. Cloud Native
4. Microservices observability
5. API testing
6. API gateway
7. Service mesh
8. Redis distributed caching
9. Kafka
10. Spring Cloud
11. Service discovery
12. Spring Cloud Data Flow
13. Ingress controller