Introduction to Automation Testing Strategies For Microservices

Early end-to-end (E2E) testing of microservices helps you identify bugs early in the software development process. This post explores the testing triangle, along with the challenges and solutions of microservices testing.

Microservices are distributed applications deployed across different environments; they may be developed in different programming languages, use different databases, and rely on many internal and external communications. A microservice architecture depends on multiple interdependent applications for its end-to-end functionality. This complexity requires a systematic testing strategy to ensure end-to-end (E2E) coverage for any given use case. In this blog, we will discuss some of the most widely adopted automation testing strategies for microservices, using the testing triangle approach.

Testing Triangle

The testing triangle is a modern, bottom-up way of testing microservices and is part of the "shift-left" testing methodology, which pushes testing toward the early stages of software development: by testing early and often, you reduce the number of bugs and increase code quality. The goal of stacking the layers of the following test pyramid is to catch different types of issues at the earliest possible testing level, so that very few issues reach production. Each type of testing focuses on a different layer of the overall software system and verifies the expected results. For a distributed microservices application, the tests can be organized into the following layers, bottom-up:

[Figure: the microservices testing pyramid, from unit tests at the base to E2E tests at the top]

The testing triangle is based on these five levels:

Unit testing (Level 1)

Unit testing is the starting point: level 1, white-box testing in the bottom-up approach. It tests a small unit of a microservice's source code, verifying the behavior of individual methods or functions by stubbing and mocking dependent modules and test data. Application developers write unit test cases for small units of code (independent functions/methods) using different test data, and verify the expected output in isolation, without impacting other parts of the code. It is a vital part of the "shift-left" approach, where issues are identified at the method level in the earliest phase. Unit testing should be done thoroughly, with code coverage above roughly 90%, which reduces the chance of bugs surfacing in later phases.
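
To make this concrete, here is a minimal sketch of a level 1 unit test using JUnit 5 and Mockito, a common choice in Spring-based microservices. The OrderService, its repository dependency, and the pricing rule are hypothetical, used only to show how a dependent module is mocked so a single unit of logic can be verified in isolation:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Hypothetical collaborator and service, used only to illustrate the pattern.
    interface OrderRepository {
        double findUnitPrice(String sku);
    }

    static class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        double totalPrice(String sku, int quantity) {
            return repository.findUnitPrice(sku) * quantity;
        }
    }

    @Test
    void totalPriceMultipliesUnitPriceByQuantity() {
        // Mock the dependent module so only this unit of logic is exercised.
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.findUnitPrice("SKU-1")).thenReturn(10.0);

        OrderService service = new OrderService(repository);

        assertEquals(20.0, service.totalPrice("SKU-1", 2));
    }
}
```

Because the repository is mocked, the test runs in milliseconds and fails only when the unit's own logic changes, which is what makes a roughly 90% coverage target practical.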

Component testing (Level 2)

Component testing is level 2 of the testing triangle and follows unit testing. It aims to test an individual microservice's functionality and APIs independently, in isolation. By writing component tests against the microservice layer, API behavior is driven from the client or consumer perspective. Component tests exercise the interaction between a microservice's APIs and its database, messaging queues, and external or third-party outbound services as one unit.

A component test covers a small part of the entire system. Dependent microservices and database responses are mocked or stubbed, and all of the microservice's APIs are tested with multiple sets of test data.
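
As an illustration, the following sketch exercises a microservice's REST API in isolation using Spring's MockMvc in standalone mode, with the downstream service mocked. The ProductController, ProductService, and endpoint are hypothetical, and spring-boot-starter-test (which bundles MockMvc, Mockito, and JSONPath) is assumed to be on the classpath:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import java.util.Map;
import org.junit.jupiter.api.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

class ProductApiComponentTest {

    // Hypothetical downstream dependency of the API under test.
    interface ProductService {
        String findNameById(long id);
    }

    // Hypothetical REST controller representing the microservice's API layer.
    @RestController
    static class ProductController {
        private final ProductService service;

        ProductController(ProductService service) {
            this.service = service;
        }

        @GetMapping("/products/{id}")
        Map<String, Object> byId(@PathVariable("id") long id) {
            return Map.of("id", id, "name", service.findNameById(id));
        }
    }

    @Test
    void getProductReturnsPayloadFromMockedDependency() throws Exception {
        // Stub the dependency so the whole API component is tested in isolation.
        ProductService service = mock(ProductService.class);
        when(service.findNameById(42L)).thenReturn("keyboard");

        MockMvc mockMvc = MockMvcBuilders
                .standaloneSetup(new ProductController(service))
                .build();

        // Drive the behavior from the consumer's perspective, via the HTTP API.
        mockMvc.perform(get("/products/42"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.name").value("keyboard"));
    }
}
```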

Contract testing (Level 3)

Contract testing is the level 3 approach; it verifies the agreed contracts between different domain-driven microservices. Contracts are defined in the API/interface design before the microservices are developed, specifying what the response should be for a given client request or query. If anything changes, the contract has to be revisited and revised. For example, if a breaking feature change is deployed, it must be exposed under a separate versioned API such as /v2, while the older /v1 version continues to support existing client requests for backward compatibility. A minimal consumer-side sketch follows the list below.

It tests a small slice of the integration, such as:

  • The interaction between a microservice and its connected database.
  • API calls between two microservices.
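
Below is a simplified, consumer-side sketch of the /v1 backward-compatibility check described above. It uses WireMock as a stand-in provider and plain JUnit assertions to express the fields the consumer relies on; in practice, dedicated contract-testing tools such as Pact or Spring Cloud Contract generate and share these expectations between teams. The endpoint and payload here are hypothetical:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.okJson;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CustomerApiContractTest {

    private WireMockServer provider;

    @BeforeEach
    void startStubProvider() {
        // Stand-in for the provider service, stubbed with the agreed /v1 response.
        provider = new WireMockServer(WireMockConfiguration.options().dynamicPort());
        provider.start();
        provider.stubFor(get(urlEqualTo("/v1/customers/7"))
                .willReturn(okJson("{\"id\":7,\"name\":\"Asha\"}")));
    }

    @AfterEach
    void stopStubProvider() {
        provider.stop();
    }

    @Test
    void v1ResponseStillContainsTheFieldsTheConsumerReliesOn() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + provider.port() + "/v1/customers/7"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The consumer's expectations are the contract: the status code and the
        // response fields this client depends on must not change under /v1.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"id\""));
        assertTrue(response.body().contains("\"name\""));
    }
}
```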

Integration testing (Level 4)

Integration testing is level 4 and verifies end-to-end functionality. It is the next level up from contract testing: an entire piece of functionality is verified by testing all of the related microservices together.

According to Martin Fowler, an integration test exercises communication paths through the subsystem to check for any incorrect assumptions each module has about how to interact with its peers.

It tests a bigger part of the system, mostly microservices that expose their services through APIs (a simplified sketch follows the examples below). For example:

  • Login functionality that involves interactions between multiple microservices.
  • Interactions between microservice APIs and event-driven hub components for a given piece of functionality.
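
The sketch below captures the idea in a single process: a Spring Boot test boots a small application and verifies the communication path from the login API through its collaborating component, the same way an integration test would verify calls across deployed microservices (or containers started with a tool such as Testcontainers). The login endpoint, token service, and payload are hypothetical, and spring-boot-starter-web plus spring-boot-starter-test are assumed to be on the classpath:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Map;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.context.SpringBootTest.WebEnvironment;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@SpringBootTest(classes = LoginIntegrationTest.App.class,
        webEnvironment = WebEnvironment.RANDOM_PORT)
class LoginIntegrationTest {

    // Minimal application standing in for the real login path, which in
    // production would span several microservices.
    @SpringBootApplication
    static class App {
    }

    @Service
    static class TokenService {
        String issueToken(String user) {
            return "token-for-" + user;
        }
    }

    @RestController
    static class LoginController {
        private final TokenService tokens;

        LoginController(TokenService tokens) {
            this.tokens = tokens;
        }

        @PostMapping("/login")
        Map<String, String> login(@RequestBody Map<String, String> credentials) {
            return Map.of("token", tokens.issueToken(credentials.get("user")));
        }
    }

    @Autowired
    private TestRestTemplate rest;

    @Test
    void loginPathExercisesTheWholeCommunicationPath() {
        // Real HTTP call against the running application, nothing mocked.
        ResponseEntity<Map> response =
                rest.postForEntity("/login", Map.of("user", "asha"), Map.class);

        assertEquals(200, response.getStatusCode().value());
        assertEquals("token-for-asha", response.getBody().get("token"));
    }
}
```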

End-to-End (E2E) testing (Level 5)

E2E testing is the final, level 5 approach in the testing triangle; it is end-to-end usability and black-box testing. It verifies that the entire system meets business functional goals from the user's, customer's, or client's perspective. E2E testing is performed against the external front end (user interface) or against API calls made with the help of REST clients. It spans the distributed microservices and SPA (single-page application)/MFE (micro frontend) applications, covering the UI, backend microservices, databases, and their internal and external components.
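
For example, an API-level E2E check can be as simple as calling a deployed environment with a plain HTTP client and asserting on the business outcome, with nothing mocked. The base URL, endpoint, token, and response field below are hypothetical placeholders for a real test environment:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class CheckoutE2ETest {

    // Hypothetical base URL of a deployed test environment; in practice this
    // would come from configuration, not a hard-coded constant.
    private static final String BASE_URL = "https://test-env.example.com";

    @Test
    void orderLookupReturnsConfirmationEndToEnd() throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Call the public API the same way a real client would, end to end:
        // gateway -> microservices -> databases, with nothing mocked.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/api/orders/12345"))
                .header("Authorization", "Bearer test-token")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"orderId\""));
    }
}
```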

Challenges of Microservice Testing

Many organizations have already adopted digital transformation built on microservice architecture, yet IT teams find microservices applications challenging to test because of their distributed nature. Below we discuss the important challenges, along with solutions suggested by industry experts:

  • Multiple agile microservices teams: Communication between multiple agile microservices development and test teams is time-consuming and difficult. Teams sometimes work in silos and do not share enough technical and non-technical details, which causes communication gaps.

    Solution: The testing triangle's integration and E2E testing layers help address this by testing the dependent microservices developed by different teams.
  • Microservice integration testing challenges: Testing of all microservices does not happen in parallel, and end-to-end integration testing of interdependent microservices is hard in practice: some microservices might not be ready in the test environment, every microservice has its own security mechanism and test data, and verifying failover between mutually dependent microservices is a daunting task.

    Solution: The testing triangle's integration testing layer helps by testing the APIs of dependent microservices.
  • Business requirement and design change challenges: Frequent changes to business and technical requirements in agile development increase complexity and testing effort, which in turn increases development and testing costs.

    Solution: The testing triangle provides an effective, systematic, step-by-step process that reduces complexity, operational cost, and testing effort through full test automation.
  • Test database challenges: Databases come in different types (SQL and NoSQL, such as Redis, MongoDB, Cassandra, etc.) with different structures, and structured and unstructured data may be combined to meet particular business needs. Each database needs its own kind of test data in distributed microservices development, and maintaining different kinds of test data for different databases is daunting.

    Solution: The testing triangle supports automated BDD (Behavior-Driven Development), where dynamic test data can be passed into scenarios (as sketched below), along with Test Data Management (TDM) practices that handle the different kinds and formats of test data.
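
As a sketch of the BDD point above, a Cucumber scenario outline can feed tables of dynamic test data into reusable step definitions. The feature text, discount rule, and class names below are hypothetical and assume the cucumber-java dependency plus a standard Cucumber JUnit runner:

```java
// Feature file (src/test/resources/features/discount.feature), illustrating how
// BDD scenarios can be driven by tables of dynamic test data:
//
//   Scenario Outline: apply discount by customer tier
//     Given a customer with tier "<tier>"
//     When an order of <amount> is placed
//     Then the discount should be <discount>
//
//     Examples:
//       | tier   | amount | discount |
//       | GOLD   | 100    | 10       |
//       | SILVER | 100    | 5        |

import static org.junit.jupiter.api.Assertions.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class DiscountStepDefinitions {

    private String tier;
    private int discount;

    @Given("a customer with tier {string}")
    public void aCustomerWithTier(String tier) {
        this.tier = tier;
    }

    @When("an order of {int} is placed")
    public void anOrderOfIsPlaced(int amount) {
        // Hypothetical discount rule, standing in for a call to the real service.
        this.discount = "GOLD".equals(tier) ? amount / 10 : amount / 20;
    }

    @Then("the discount should be {int}")
    public void theDiscountShouldBe(int expected) {
        assertEquals(expected, discount);
    }
}
```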

Conclusion

The testing triangle provides solid techniques for the challenges associated with microservices. We need to choose these systematic testing techniques with an eye on lower complexity, faster testing, time to market, testing cost, and risk mitigation before releasing to production. This testing strategy is required for microservices to avoid real production issues, and it ensures that test cases cover functional and non-functional E2E testing across the UI, backend, databases, and the different production and non-production staging environments for reliable product releases.

We have seen that microservices introduce many testing challenges, which can be solved with the step-by-step, bottom-up approach provided by the testing triangle.

It is a modern, cloud-native strategy for testing microservices on the cloud: it finds and fixes the maximum number of bugs during the testing phases before reaching the highest level of the triangle, E2E testing.

Tip: Many IT organizations have adopted a "shift-left" testing culture, especially in situations where identifying and fixing bugs early is important.

Cloud Distributed Caching for Microservices

Distributed caching is a very important aspect of cloud-based applications, whether for on-prem, public, or hybrid cloud environments. It facilitates incremental scaling, allowing the cache to grow along with the data. In this blog we will explore distributed caching on the cloud and why it is useful for environments with high data volume and load. This blog will cover:

  • Challenges with Traditional Caching 
  • What is Distributed Caching
  • Benefits of distributed caching on the cloud
  • Recommended Distributed Caching Database Tools
  • Ways to Deploy Distributed Caching on the cloud

Traditional Distributed Caching Challenges

Traditional distributed caching servers are usually deployed with limited storage and CPU on a few dedicated servers or virtual machines (VMs). These caching infrastructures often reside in on-prem data centers (DCs) or on cloud VMs that are not resilient, highly available, or fault-tolerant. This kind of traditional caching comes with numerous challenges:

  • Traditional caching is in-process caching at the level of an individual application instance. An in-process cache stores data locally at the application level (for example, in Ehcache), and it does not keep data consistent across instances.
  • An in-process cache creates performance issues, because it occupies extra memory and adds garbage-collection overhead.
  • It is not reliable, because it uses the same heap memory as the application; if the application crashes due to memory or other issues, the cached data is wiped out with it.
  • It is hard to scale cache storage and CPU on only a few servers, because these servers are often not auto-scalable.
  • It carries a high operational cost for managing infrastructure and under-utilized hardware resources, because these servers are managed manually on traditional DevOps infrastructure.
  • Traditional distributed caching is not containerized (not deployed on Kubernetes/Docker containers), so it is not easily scalable, resilient, or self-managed. There is also a higher chance of these few servers crashing when client load is higher than expected.

What is Distributed Caching

Caching is a technique for storing data outside the main storage, in high-speed memory, to improve performance. In a microservices environment, applications are deployed with multiple instances across various servers and containers on the hybrid cloud. A single caching source is needed in a multi-cluster Kubernetes environment on the cloud, persisting data centrally and replicating it across its own caching cluster. It serves as a single point of storage for cached data in a distributed environment.
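
As a minimal sketch of how a microservice can delegate to such a central cache, the following Spring Boot snippet caches a read-heavy lookup in Redis via the standard caching abstraction. The CatalogApplication, ProductService, and cache name are hypothetical, and spring-boot-starter-data-redis plus spring-boot-starter-cache are assumed, with spring.data.redis.host (or spring.redis.host on older Boot versions) pointing at the shared Redis cluster:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Service;

// Assumes spring-boot-starter-data-redis and spring-boot-starter-cache on the
// classpath, with Redis connection properties pointing at the shared cluster.
@SpringBootApplication
@EnableCaching
public class CatalogApplication {

    public static void main(String[] args) {
        SpringApplication.run(CatalogApplication.class, args);
    }

    @Service
    public static class ProductService {

        // The first call per SKU hits the database; the result is then stored in
        // the central Redis cache, so every instance of this microservice reuses it.
        @Cacheable("products")
        public String findProduct(String sku) {
            return loadFromDatabase(sku); // hypothetical slow lookup
        }

        private String loadFromDatabase(String sku) {
            return "product-" + sku;
        }
    }
}
```

Because the caching abstraction hides the provider, swapping Redis for another cache backend is a configuration change rather than a code change.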

Benefits of Distributed Caching on cloud

These are a few benefits of distributed caching:

  • Periodically caching the responses of frequently used read REST APIs ensures faster API read performance.
  • Fewer database network calls, because cached data is read directly from the distributed caching database.
  • Resilience and fault tolerance, by maintaining multiple copies of data across the caching databases in a cluster.
  • High availability, by auto-scaling the cache databases based on load or client requests.
  • Storage of session secrets such as JSON Web Tokens (ID/JWT) for authentication and authorization of microservices app containers (see the sketch after this list).
  • Faster in-memory read and write access when used as a dedicated database for high-load, mission-critical applications.
  • Avoidance of unnecessary round trips to persistent databases.
  • Auto-scalable cloud infrastructure deployment.
  • Containerization of distributed caching libraries/solutions.
  • Consistent reads from any synchronized, connected caching data center (DC).
  • Minimal to no outages and high availability of cached data.
  • Faster data synchronization between caching data servers.
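
As referenced in the session-token benefit above, here is a minimal sketch of storing a JWT in the shared cache with a time-to-live, using Spring Data Redis's auto-configured StringRedisTemplate; the key prefix, 30-minute TTL, and class name are illustrative assumptions:

```java
import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

// Minimal sketch: the key prefix, TTL, and class name are illustrative.
@Component
public class SessionTokenStore {

    private final StringRedisTemplate redis;

    public SessionTokenStore(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void save(String sessionId, String jwt) {
        // Expire the token with the session so stale entries clean themselves up.
        redis.opsForValue().set("session:" + sessionId, jwt, Duration.ofMinutes(30));
    }

    public String find(String sessionId) {
        return redis.opsForValue().get("session:" + sessionId);
    }
}
```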

Recommended Distributed Caching Database Tools

The following are popular, industry-recognized caching servers:

  • Redis
  • Memcached
  • GemFire
  • Hazelcast

Redis: Redis is one of the most popular distributed caching services. It supports various data structures and is an open-source, in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker; an enterprise version is also available. It can be deployed in containers on private, public, and hybrid clouds, and it provides consistent, fast data synchronization between different data centers (DCs).

Hazelcast: Hazelcast is a distributed computation and storage platform for consistent, low-latency querying, aggregation, and stateful computation against event streams and traditional data sources. It allows you to quickly build resource-efficient, real-time applications, and you can deploy it at any scale, from small edge devices to a large cluster of cloud instances. A cluster of Hazelcast nodes shares both the data storage and the computational load and can dynamically scale up and down; when you add new nodes, the data is automatically rebalanced across the cluster, and running computational jobs snapshot their state and scale with a processing guarantee.

Memcached:  It is an open-source, high-performance, distributed memory object caching system. It is generic in nature but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from the results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes easy, quick deployment and development. It solves many data caching problems and the API is available in various commonly used languages.

GemFire: GemFire provides a distributed in-memory data grid cache, powered by the open-source Apache Geode. It scales data services on demand to support high performance. It is a key-value store that performs read and write operations at fast speeds, and it offers highly available parallel message queues, continuous availability, and an event-driven architecture that scales dynamically with no downtime.

It provides multi-site replication. As data size requirements grow to support high-performance, real-time apps, it can scale linearly with ease. Applications get low-latency responses to data access requests and always receive fresh data, while transaction integrity is maintained across distributed nodes. It supports high-concurrency, low-latency data operations, and it provides node failover and multi-geo (cross-data-center) replication to ensure applications are resilient, whether on-premises or in the cloud.

Ways to Deploy Distributed Caching on Hybrid cloud

These are the recommended ways to deploy and set up distributed caching, whether on a public or hybrid cloud:

  • Open-source distributed caching on traditional VM instances.
  • Open-source distributed caching on Kubernetes containers. I would recommend deploying on Kubernetes for high availability, resiliency, scalability, and faster performance.
  • Enterprise (COTS) distributed caching deployed on VMs or containers. I would recommend the enterprise version because it provides additional features and support.
  • Managed distributed caching services offered by public clouds for open-source and enterprise products such as Redis, Hazelcast, and Memcached.
  • Caching servers deployed across multiple locations, such as on-prem and public cloud together, multiple public cloud regions, or a single public cloud spanning different availability zones.

Conclusion

Distributed caching is now a de facto requirement for distributed microservices apps deployed on a hybrid cloud. It addresses important use cases such as maintaining user sessions when cookies are disabled on the browser side, improving API query read performance, avoiding the operational cost and database hits of repeated identical requests, and managing secret tokens for authentication and authorization.

A distributed cache syncs data across the hybrid cloud automatically, without manual operations, and always serves the latest data. I would recommend the industry-standard distributed caching solutions Redis, Hazelcast, and Memcached, and choosing the right one for the cloud based on your use cases.

My first book release!! Cloud Native Microservices with Spring and Kubernetes (453 pages)

I am happy to announce the release of my first book, "Cloud Native Microservices with Spring and Kubernetes", with BPB Publications! It is all about designing, building, and deploying scalable cloud-native microservices on containers using the Spring Framework and Kubernetes. I need your support: please buy and review this book on Amazon, and share the book details with your software engineer colleagues and friends.

The main objective of this book is to give an overview of cloud-native microservices, their architecture, design patterns, best practices, and use cases, with practical coverage of modern applications. It builds a strong understanding of microservices, the API-first approach, testing, observability, API gateways, service mesh, and Kubernetes alternatives to Spring Cloud. The book covers the implementation of various design patterns for developing cloud-native microservices using the Spring Framework, Docker, and Kubernetes, and it includes containerization concepts and hands-on code exercises.

After reading this book, readers will have a holistic understanding of building, running, and managing cloud-native microservices applications on Kubernetes containers.

It is the first book on this subject in India by an Indian writer, and it is more economical than foreign publications. The book teaches software application design and development using microservices, Spring, and Kubernetes-based technologies. It is useful for software developers, cloud engineers, DevOps engineers, and technical architects.

Available on:

This book is available in paperback and Kindle (eBook, via the free Kindle app on Android/iOS/laptop/desktop) editions on Amazon and BPB in most countries across North America, Europe, the Middle East, Asia, and Africa.

Refer to the free preview and TOC/index of this book:

https://drive.google.com/drive/folders/1Lq280d6hUcyh2xm8cFuyt1vADNzk61dg?usp=sharing

What you will learn:

  • Learn the fundamentals of microservices and design patterns.
  • Learn microservices development using Spring Boot and Kubernetes.
  • Learn to develop reactive, event-driven, and batch microservices.
  • Perform end-to-end microservices testing using Cucumber.
  • Implement API gateways, authentication and authorization, load balancing, caching, and rate limiting.
  • Learn observability and monitoring techniques for microservices.

Who this book is for:
This book is for Spring developers, microservices developers, cloud engineers, DevOps consultants, technical architects, and solution architects.

Table of Contents (Chapters):
1. Overview of Cloud Native microservices
2. Microservice design patterns
3. API first approach
4. Build microservices using the Spring Framework
5. Batch microservices
6. Build reactive and event-driven microservices
7. The API gateway, security, and distributed caching with Redis
8. Microservices testing and API mocking
9. Microservices observability
10. Containers and Kubernetes overview and architecture
11. Run microservices on Kubernetes
12. Service Mesh and Kubernetes alternatives of Spring Cloud

KEY FEATURES:

  • Complete coverage on how to design, build, run, and deploy modern cloud native microservices.
  • Includes numerous sample code exercises on microservices, Spring and Kubernetes.
  • Develop a stronghold on Kubernetes, Spring, and the microservices architecture.
  • Complete guide to application containerization on Kubernetes containers.
  • Coverage on managing modern applications and infrastructure using observability tools.

Chapter 1: Overview of Cloud Native microservices, introduces cloud-native modern applications: a cloud-first overview, benefits, types of clouds, classification, and the need for cloud-native modern applications. It covers a detailed overview of microservices architecture (MSA), its characteristics, motivations, benefits, best practices, architecture principles, challenges and solutions, the application modernization spectrum, twelve-factor apps, and beyond twelve-factor apps.

Chapter 2: Microservice design patterns, introduces various microservices design patterns with use cases, advantages, and disadvantages.

Chapter 3: API first approach, discusses the fundamentals of the API-first approach. It covers a REST overview, the API model, best practices, design principles, components, security, communication protocols, and how to document APIs dynamically with OpenAPI Swagger. It also discusses API design planning, specifications, API management tools, and testing APIs with the SwaggerHub inspector and the Postman REST client.

Chapter 4: Build microservices using the Spring Framework, is a key chapter on Spring Boot and Spring Cloud components, with hands-on lab exercises. It covers the steps to build a microservice using the REST API framework, along with the Spring Cloud Config Server and the practical aspects of microservices resiliency.

Chapter 5: Batch microservices, introduces batch microservices, their use cases, Spring Cloud Task, and Spring Batch. It walks through hands-on lab exercises using Spring Cloud Data Flow (SCDF) and Kafka, and discusses Spring Batch practices, auto-scaling techniques, and batch orchestration and composition methods for sequential or parallel batch processing. Last but not least, it covers alerts and monitoring for Spring Cloud Task and Spring Batch.

Chapter 6: Build reactive and event-driven microservices, describes how to build reactive microservices, non-blocking synchronous APIs, and event-driven asynchronous microservices. It covers the steps to develop sample reactive microservices with Spring's Project Reactor and Spring WebFlux, as well as event-driven asynchronous microservices. It discusses Spring Cloud Stream, ZooKeeper, Spring Boot, and an overview of Kafka, and includes hands-on lab exercises for event-driven asynchronous microservices using Spring Cloud Stream and Kafka.

Chapter 7: API gateway, security, and distributed caching with Redis, introduces the API gateway: its features, advantages, and best practices. It covers hands-on lab exercises to expose microservices REST APIs externally with Spring Cloud Gateway, a distributed caching overview with hands-on Redis exercises, and API gateway rate limiting with its implementation using Redis and Spring Boot. Last but not least, it covers API security best practices and the implementation of SSO using Spring Cloud Gateway, Spring Security, OAuth2, Keycloak, OpenID, and JWT tokens.

Chapter 8: Microservices testing and API mocking, describes the important aspects of microservices testing practices: challenges, benefits, testing strategy, the testing pyramid, and the different types of microservices testing. It covers the implementation of an integration testing framework using Behavior-Driven Development (BDD) with hands-on code examples, discusses microservices testing tools and best practices, and covers the role of testing in the microservices CI/CD pipeline. Last but not least, it covers API mocking with a hands-on lab implementation using the WireMock framework.

Chapter 9: Microservices observability, covers a detailed overview of observability and monitoring techniques for microservices with the Spring Actuator, Micrometer health APIs, and the Wavefront APM. It covers an application logging overview, best practices, simple logging, and log aggregation of distributed microservices implemented with Elasticsearch, Fluentd, and Kibana (EFK) on Kubernetes. It discusses the need for APM performance and telemetry monitoring tools for distributed microservices and how to trace multiple microservices in a distributed environment, and it includes a hands-on lab implementation of monitoring microservices with Prometheus and Grafana.

Chapter 10: Containers and Kubernetes overview and architecture, is a key chapter that introduces containers, Docker, Docker Engine containerization, Buildpacks, the components of Dockerfiles, and how to build and run Dockerfiles and inspect Docker images. It covers Docker image registries and how to persist Docker images in container image registries, along with an overview of Kubernetes, the need for it, and its architecture. Last but not least, it gives a detailed introduction to Kubernetes resources.

Chapter 11: Run microservices on Kubernetes, discusses the practical aspects of Kubernetes: installation and configuration, with monitoring and visualization tools such as Octant and Proxy. It discusses how to create and manage Kubernetes clusters in detail, and walks through hands-on exercises of creating Docker images of Java microservices, pushing them to the Docker Hub container image registry, and deploying them to Kubernetes clusters. It covers hands-on lab examples of exposing microservices API endpoints outside the Kubernetes cluster using the NGINX ingress controller. Last but not least, it covers various popular and useful Kubernetes application deployment and configuration management tools.

Chapter 12: Service Mesh and Kubernetes alternatives of Spring Cloud, covers a detailed overview and the benefits of GitOps and service mesh. It covers the Istio service mesh architecture and the deployment of microservices on Kubernetes with Argo CD. Last but not least, it discusses various Kubernetes alternatives to Spring Cloud projects and popular cloud buzzwords.

About the Author
Rajiv Srivastava is the founder of cloudificationzone.com, which is a cloud native modern application tech blog site. He is a cloud solution architect and modern application specialist with 17+ years of work experience in software development and architectural design.

KEYWORDS 

1. Spring Framework
2. API first approach
3. Cloud native
4. Microservices observability
5. API testing
6. API gateway
7. Service mesh
8. Redis distributed caching
9. Kafka
10. Spring Cloud
11. Service discovery
12. Spring Cloud Data Flow
13. Ingress controller