Bitnami Tanzu Application Catalogue (TAC) : Use Cases & Solutions

In this blog, I will give a quick introduction to TAC and cover a couple of use cases and real challenges that it can solve:

What is Bitnami Tanzu Application Catalogue (TAC)?

TAC curates a catalog of production-ready open-source software from the Bitnami collection.

Bitnami Tanzu Application Catalogue (TAC) is a set of secure, curated, Kubernetes-ready Docker images of popular applications and libraries used to build, run, manage, and secure cloud native workloads. It performs CVE and virus scanning and always keeps secure, up-to-date golden images in its central SaaS repository. It builds Docker images on your preferred base OS for CI/CD deployment on Kubernetes.

Why Bitnami Tanzu Application Catalogue (TAC)?

Working with pre-packaged software imposes security vulnerabilities, risks, and challenges. Developers source containers from the public Docker Hub that are out of date, vulnerable, insecure by default, or broken. Auditing, hardening, integrating, and making software production-ready is time consuming, difficult, and low value add from an organizational standpoint. It is also frustrating for dev teams, whose software selection is limited to whatever open-source options they are forced to adopt.

TAC Use Cases

  • Keep images up to date with regular patching and updates
  • Manage golden images privately on a preferred OS
  • Run regular security scans for viruses and vulnerabilities
  • Manage/sync images in an on-prem private image registry using Harbor

Challenges TAC solves:

  • Non-secure images
  • No enterprise support for regular updates and security patching
  • No virus/CVE scans or transparency of scan reports
  • Hard to manage preferred-OS-based images and configuration


  1. Available stacks –
  2. How to start and use –
  3. FAQ-

Demo Video

10 Challenges and Solutions for Microservices

I posted this same blog on DZone on July 2, 2018. This is the latest version:

Transitioning to microservices creates significant challenges for organizations. I have identified these challenges and solutions based on my exposure to microservices in production.

These are ten major real-world challenges of implementing a microservices architecture, with proposed solutions:

1. Data Synchronization (Consistency) — An event-sourcing architecture can address this issue over an async messaging platform, and the SAGA design pattern can coordinate distributed transactions across services.
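The SAGA idea can be reduced to a sequence of local steps, each paired with a compensating action that undoes it if a later step fails. Below is a minimal plain-Java sketch of an orchestrated saga; the names are hypothetical, and a real system would drive this over a message broker rather than in-process calls:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal orchestrated-saga sketch: run steps in order; on failure,
// compensate the already-completed steps in reverse order.
public class OrderSaga {
    public interface SagaStep {
        void execute();
        void compensate();
    }

    private final Deque<SagaStep> completed = new ArrayDeque<>();

    // Returns true if all steps committed, false if the saga rolled back.
    public boolean run(SagaStep... steps) {
        for (SagaStep step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (RuntimeException e) {
                while (!completed.isEmpty()) {
                    completed.pop().compensate(); // undo in reverse order
                }
                return false; // saga rolled back
            }
        }
        return true; // saga committed
    }
}
```

In a production system each step would be a local transaction in one microservice (reserve stock, charge payment, create shipment), and the compensations would be published as events.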

2. Security — An API gateway can solve these challenges. Many open-source and enterprise gateways are available, such as Spring Cloud Gateway, Apigee, WSO2, Kong, and Okta (two-step authentication), plus public cloud offerings from AWS, GCP, and Azure. Custom solutions can also be developed for API security using JWT tokens, Spring Security, and Netflix OSS Zuul 2.

3. Services Communication — There are different ways for microservices to communicate:
a. Point to point using an API gateway
b. Event-driven messaging using Kafka or RabbitMQ
c. Service mesh

4. Service Discovery — This can be addressed by the open-source Istio service mesh, an API gateway, or Netflix Eureka. It can also be done with Netflix Eureka at the code level; however, handling it in the orchestration layer is better, because these tools manage it for you rather than requiring you to maintain it through code and configuration.

5. Data Staleness — The database should always be kept updated so that APIs fetch recent data. A timestamp can be stored with each record in the database to check and verify its freshness. Caching can also be used, customized with an acceptable eviction policy based on business requirements.
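The timestamp idea above can be sketched as a small cache wrapper: an entry older than the allowed age is treated as stale and evicted, forcing a fresh read from the database. This is a hypothetical plain-Java illustration, not a production cache:

```java
import java.util.HashMap;
import java.util.Map;

// Staleness-aware cache sketch: each entry carries a write timestamp,
// and reads older than maxAgeMillis return null (forcing a DB refresh).
public class TimestampedCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long writtenAt;
        Entry(V value, long writtenAt) { this.value = value; this.writtenAt = writtenAt; }
    }

    private final Map<K, Entry<V>> store = new HashMap<>();
    private final long maxAgeMillis;

    public TimestampedCache(long maxAgeMillis) { this.maxAgeMillis = maxAgeMillis; }

    public void put(K key, V value, long nowMillis) {
        store.put(key, new Entry<>(value, nowMillis));
    }

    // Returns null for missing or stale entries.
    public V getFresh(K key, long nowMillis) {
        Entry<V> e = store.get(key);
        if (e == null || nowMillis - e.writtenAt > maxAgeMillis) {
            store.remove(key); // evict stale entry
            return null;
        }
        return e.value;
    }
}
```

In practice you would use an off-the-shelf cache (Caffeine, Redis with TTLs) whose eviction policy maps to the same business-acceptable staleness window.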

6. Distributed Logging, Cyclic Dependencies of Services, and Debugging — There are multiple solutions for this. Logging can be externalized by pushing log messages to an async messaging platform like Kafka or Google Pub/Sub, or to a log aggregation stack like ELK. A good number of APM tools are also available, like Wavefront, Datadog, AppDynamics, AWS CloudWatch, etc.

It's difficult to identify issues between microservices when services depend on each other and have cyclic dependencies. A correlation ID passed by the client in the header of REST API calls can be used to track all the relevant logs across all the pods/Docker containers on all clusters.
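Propagating a correlation ID is typically done with a request filter plus a ThreadLocal (or logging MDC), so every log line and downstream call can reuse the same ID. A minimal plain-Java sketch with hypothetical names and no servlet dependencies:

```java
import java.util.UUID;

// Correlation-ID holder sketch: an inbound filter would call start() with
// the "X-Correlation-Id" header value (minting one if absent), log lines
// and outbound calls read current(), and clear() runs when the request ends.
public final class CorrelationId {
    public static final String HEADER = "X-Correlation-Id";
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    private CorrelationId() { }

    // Reuse the client-supplied ID if present, otherwise generate one.
    public static String start(String headerValue) {
        String id = (headerValue == null || headerValue.isBlank())
                ? UUID.randomUUID().toString()
                : headerValue;
        CURRENT.set(id);
        return id;
    }

    public static String current() { return CURRENT.get(); }

    public static void clear() { CURRENT.remove(); }
}
```

With Spring you would normally put this logic in a `Filter` that also writes the ID into SLF4J's MDC, so the aggregated logs in ELK can be searched by one correlation ID across all services.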

7. Testing — This issue can be addressed with unit and integration testing, mocking individual microservices or the integrated/dependent APIs that are not available for testing, using tools such as WireMock and Cucumber (BDD) for integration testing.

8. Monitoring & Performance — Monitoring can be done using open-source tools like Prometheus with Grafana (creating gauges and metrics), GCP Stackdriver, Kubernetes dashboards, InfluxDB combined with Grafana, Dynatrace, Amazon CloudWatch, VisualVM, JProfiler, YourKit, Graphite, etc.

Tracing can be done with the OpenTracing project or Uber's open-source Jaeger, which trace all microservices communication and show requests/responses and errors on a dashboard. OpenTracing and Jaeger are good APIs for tracing API calls. Many enterprise offerings are also available, like Tanzu Service Mesh (TSM).

9. DevOps Support — Microservices deployment and support-related challenges can be addressed using state-of-the-art CI/CD DevOps tools like Jenkins, Concourse (with YAML pipelines), and Spinnaker (good for multi-cloud deployment), as well as Kubernetes-based PaaS solutions like TKG and OpenShift.

10. Fault Tolerance — An Istio service mesh or Netflix Hystrix (via Spring Cloud) can be used to break the circuit when there is no response from a dependent microservice within the given SLA/ETA, and to provide retry mechanisms and graceful service shutdown without data loss.
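A circuit breaker reduces to a small state machine (closed, open, half-open). The sketch below is a minimal plain-Java illustration of that logic only, not a replacement for Istio, Hystrix, or Resilience4j; thresholds and names are hypothetical:

```java
// Minimal circuit-breaker state machine sketch (illustrative only).
public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long retryAfterMillis;
    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int failureThreshold, long retryAfterMillis) {
        this.failureThreshold = failureThreshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    // Called before each request: false means fail fast without calling
    // the dependent service at all.
    public boolean allowRequest(long nowMillis) {
        if (state == State.OPEN && nowMillis - openedAt >= retryAfterMillis) {
            state = State.HALF_OPEN; // let one trial request through
        }
        return state != State.OPEN;
    }

    public void recordSuccess() {
        failures = 0;
        state = State.CLOSED;
    }

    public void recordFailure(long nowMillis) {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN; // trip the breaker
            openedAt = nowMillis;
            failures = 0;
        }
    }
}
```

Real libraries add fallbacks, sliding failure-rate windows, and metrics on top of this core state machine.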

Spring Cloud API Gateway and SpringBoot: Use Cases & Solutions

In this blog, I will cover SpringBoot's popularity for microservices, use cases of Spring Cloud Gateway, a couple of API use cases, and real challenges of microservices which can be solved using an API gateway.

Why is SpringBoot the first-class citizen for microservices?

Spring is the most popular Java framework on the market; around 60% of enterprise applications run on Java, and Spring has good integration with almost all popular development libraries. Java EE is bulky and not suitable for microservices. Different vendors are trying to run Java EE middleware in containers, but it is an anti-pattern and difficult to maintain. Spring Boot, introduced in 2014 as part of the Spring ecosystem, is microservices-ready and is the most popular enterprise Java microservices framework.

I am going to cover some SpringBoot and Spring Cloud Gateway use cases and the kinds of real challenges they can solve:

Spring Cloud Gateway Use Cases

  • API service discovery and routing
  • Authentication and authorization (A&A) security
  • API rate limiting for clients
  • Imposing common policies
  • API caching
  • Controlling API traffic
  • Circuit breaking and monitoring
  • Path filtering

Challenges it addresses:

  • Poor API performance on redundant data requests
  • High cost and heavy hardware
  • Throttling of APIs
  • Loose security
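Several of the use cases above (routing, rate limiting, circuit breaking) are configured declaratively in Spring Cloud Gateway's `application.yml`. The sketch below shows the general shape of one route; the service host, port, paths, and limits are illustrative assumptions:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: catalogue-route
          # Hypothetical backend service address
          uri: http://catalogue-service:8010
          predicates:
            - Path=/api/catalogue/**
          filters:
            - StripPrefix=1
            # Per-client rate limiting (requires the Redis-backed rate limiter)
            - name: RequestRateLimiter
              args:
                redis-rate-limiter.replenishRate: 10
                redis-rate-limiter.burstCapacity: 20
            # Fail fast and fall back when the backend is down
            - name: CircuitBreaker
              args:
                name: catalogueBreaker
                fallbackUri: forward:/fallback/catalogue
```

`RequestRateLimiter` and `CircuitBreaker` are built-in gateway filters; the circuit-breaker filter additionally needs a Resilience4j dependency on the classpath.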

SpringBoot Use Cases

  • Increase developer productivity
  • Manual, auto scheduled jobs/batches
  • Security for Authorization & Authentication (A&A)
  • REST API development
  • Develop cloud native applications
  • Microservices deployment on Kubernetes containers
  • API health monitoring, capture and analyse telemetry data
  • Prometheus, Grafana integration support for API performance, usage, SLA
  • Spring Data JPA and Hibernate ORM for MySQL, PostgreSQL, and Oracle JDBC
  • Spring Templates for integration with Redis, RabbitMQ etc.
  • API and second level Caching
  • Spring Boot Kubernetes support
  • Application logging by using messaging queue and log forwarder
  • Faster REST API development
  • Good integration with almost all popular libraries

Spring Runtime Enterprise Support: OpenJDK, Spring, Tomcat

VMware provides enterprise support for Java requirements: OpenJDK and Tomcat Server, now that Oracle charges for JDK fixes. Spring Runtime provides support and signed binaries for OpenJDK, Tomcat, and Spring. It also includes VMware's TC Server, a hardened, curated, enterprise-ready Tomcat installation.

It supports the binaries of 40+ Spring projects:

Among the application frameworks, there is a clear winner, and it's called Spring! Both Spring Boot (No. 1) and Spring Framework (No. 2) are well ahead of the competition, especially Jakarta EE.


April 2020 status of Spring downloads and developers

Evolution of Java Open Sources

Play with Docker Images and Store Them on Harbor and Docker Hub

Disclaimer: This blog content has been taken from my latest book:

“Cloud Native Microservices with Spring and Kubernetes”

In this blog, I will cover how to create a simple Docker image from a SpringBoot Java application, store and pull Docker images using the Docker Hub and Harbor image registries, and finally run the app on the local Docker Desktop client.

A Docker image registry is persistent storage for Docker images, from which they can be pulled by a CI/CD pipeline or a Kubernetes deployment and run on a container.

Docker build services like the Docker CLI, Cloud Native Buildpacks (CNB)/kpack, VMware Tanzu Build Service (TBS), and other build libraries build Docker images and store them in Docker registries; the images can be rebuilt automatically after any commit to a source code repository like GitHub.


Prerequisites:

  1. Install Docker Desktop
  2. Install Harbor image registry
  3. Create Docker-Hub account
  4. Install Java
  5. Install Maven
  6. Install Git
  7. Install Homebrew

Note: This demo app has been set up and run on a Mac system.

1. Install Docker Desktop:

Install Docker Desktop. The docker CLI requires the Docker daemon, so you'll need to have that installed and running locally.

Create a Docker Hub account and you are all set!

You can create Docker images locally, then either push them to the Docker Hub cloud SaaS or, for security reasons, set up a local private Harbor registry.

Note: Docker Desktop should always be running when you work with Docker containers (building, packaging, running, and pushing to an image registry). You can use the same Docker Hub login credentials for Docker Desktop.

There are two types of repositories:

a. Docker Hub Public repositories

This is a very convenient public cloud where anyone can create a Docker Hub account and store Docker images for free.

Note: Docker Hub also provides private repositories.

b. Private repositories

There are many private registries, like Harbor, JFrog Artifactory, Google Container Registry, and Amazon Elastic Container Registry (ECR), which are available on-prem and on public cloud as paid services.

I will cover the Harbor private registry, which is open source, can be deployed locally on-prem, and has enterprise support from VMware.

2. Install Harbor Image Registry:

There are two ways to install it:

  1. Install open-source Harbor
  2. Install Harbor on VMware TKGI (PKS)

If Docker Desktop is already running and you are logged in on your machine, then there is no need to provide Docker login credentials:

1. Image Registry Login:

# Docker-Hub Login:

docker login
# Harbor Login:

docker login <harbor_address>   
docker login -u admin -p <password> <Harbor Host> 

# Note: Create a Harbor project where you can store all your Docker images, e.g. /library

Tip: Docker Hub provides a secret access token, which is advisable to use instead of your password when logging in to the registry.

2. Build Docker Image:

Create a SpringBoot microservice project, or simply clone and use this ready-made GitHub public repo for local demo purposes:

git clone && cd catalogue-service

Build Docker Images using Maven

If you are using Maven with a SpringBoot app, Maven can build the Docker image for you. Go to the source project folder and run the Maven command below. You need to install Maven on your Mac, Linux, or Windows machine before running this command:

mvn clean install dockerfile:build
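The `dockerfile:build` goal expects a Dockerfile in the project. A minimal sketch for a SpringBoot jar might look like the following; the base image, jar name, and port are assumptions, so adjust them to your build:

```dockerfile
# Hypothetical Dockerfile for a SpringBoot service (jar name and port assumed)
FROM openjdk:11-jre-slim
WORKDIR /app
# Copy the fat jar produced by "mvn package" into the image
COPY target/catalogue-service.jar app.jar
EXPOSE 8010
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The dockerfile-maven plugin runs the equivalent of `docker build` against this file and tags the result with the repository configured in the pom.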

Maven command to push the image to the configured image registry (you need to be logged in to Docker Hub or Harbor on your local system):

mvn install dockerfile:push

List all Docker images:

docker image ls

Show a breakdown of the various layers in the image:

docker image history catalogue-service:latest

Note (optional): You can also build an image like this for non-Java projects. Go to the project folder at the source code's home path (in this case a Java-based SpringBoot project) and run this command:

docker image build -t <Docker_Harbor_userId>/<image_name:tag> .

docker image build -t itsrajivsrivastava/catalogue-service .

3. Push Image to Docker /Harbor Registry:

a. Tag your image before pushing:

docker tag <source_image>:<tag> <dockerId>/<image_name>:<tag>

docker tag itsrajivsrivastava/catalogue-service itsrajivsrivastava/catalogue-service:latest


b. Now you should be able to push it:

# Docker-Hub push (when you are logged in to Docker Hub through the local Docker Desktop client):

docker push itsrajivsrivastava/catalogue-service:latest


4. Pull the Image from Docker Hub:

docker pull <image_name>

docker pull itsrajivsrivastava/catalogue-service:latest


5. Run Docker Image

Running a Container Docker Image:

docker run -p 8010:8010 itsrajivsrivastava/catalogue-service:latest

Now, test the application by calling its REST endpoints on port 8010.


Docker OCI Image, Docker Engine, Container fundamentals

Disclaimer: This blog content has been taken from my latest book:

“Cloud Native Microservices with Spring and Kubernetes”

Why Docker?

Docker provides a runtime container image which contains source code + dependent libraries + a base OS layer configuration. It provides a portable container which can be deployed and run on any platform.

Docker is a first-class citizen for Kubernetes containers. It's a tool for developers to package all the deployable source code together with its library and environment dependencies; DevOps can use it as a tool to deploy on Kubernetes containers.

Docker: Build once and run anywhere!

It is portable to all OS bases. Please refer to the official Kubernetes docs on containers for detailed information.

Docker is well suited to packaging microservices so they run on any private, public, or hybrid Kubernetes cluster.

Dockerization: the process of converting any source code into a portable Docker image.

What is OCI Image:

The Open Container Initiative (OCI) is a standards body that standardized the container image format and runtime. Its industry standards around container image formats and runtimes let containers run anywhere with ease.

Note: To know more, refer to the documentation links for Docker and OCI images.

What's Docker Hub:

Docker Hub is a Docker image registry available to the public as a SaaS service in the cloud. It also offers paid private image repositories. It provides an easy way to get started pushing and pulling images for Kubernetes deployments.

What's a container:

It's a small logical package of source code + dependencies + OS configuration required at run time. A Docker image runs in a container using a runtime environment such as a Java runtime, Nginx, etc.

Containers decouple applications from the underlying host infrastructure. This makes deployment easier in different cloud or OS environments.

Container runtimes

The container runtime is the software that is responsible for running containers.

Kubernetes supports several container runtimes: Docker Engine; containerd, a container runtime with an emphasis on simplicity, robustness, and portability; CRI-O; and any implementation of the Kubernetes CRI (Container Runtime Interface).


Containerd 1.1 – CRI Plugin (current)

containerd architecture

In containerd 1.1, the cri-containerd daemon has been refactored into a containerd CRI plugin. The CRI plugin is built into containerd 1.1 and enabled by default. Unlike cri-containerd, the CRI plugin interacts with containerd through direct function calls. This new architecture makes the integration more stable and efficient, and eliminates another gRPC hop in the stack. Users can now use Kubernetes with containerd 1.1 directly; the cri-containerd daemon is no longer needed.

What about Docker Engine?

"Does switching to containerd mean I can't use Docker Engine anymore?" We hear this question a lot; the short answer is no.

Docker Engine is built on top of containerd. The next release of Docker Community Edition (Docker CE) will use containerd version 1.1. Of course, it will have the CRI plugin built-in and enabled by default. This means users will have the option to continue using Docker Engine for other purposes typical for Docker users, while also being able to configure Kubernetes to use the underlying containerd that came with and is simultaneously being used by Docker Engine on the same node. See the architecture figure below showing the same containerd being used by Docker Engine and Kubelet:


Microservices API Integration Automation Testing: BDD With Cucumber JVM

Disclaimer: This blog content has been taken from my latest book:

“Cloud Native Microservices with Spring and Kubernetes”

In the cloud native world, when we have multiple microservices deployed on multiple clusters in a multi-cloud environment, testing those REST-based microservices is a nightmare, because they talk to each other directly or asynchronously via messaging technologies. Integration testing is a great way to test use-case stories end to end across integrated microservices, for example login authentication, browsing and adding an item to the catalogue, or end-to-end order placement and payment on an online eCommerce portal. Integration test cases are expected to be executed after microservices are deployed to the QA environment; this phase comes after the development phase.

I have designed BDD integration testing framework to test eCommerce microservices with Cucumber JVM, SpringBoot, RestAssured, AssertJ, and JSONAssert etc.

We can create a separate microservice integration-test module to run the integration test cases of any microservice against the QA deployment. It can be configured and integrated into the DevOps CI (continuous integration) build pipeline using Jenkins, Bamboo, etc. When the build passes after the BDD integration run on QA, it should be promoted to the staging/pre-prod environment, having made sure that all the REST APIs are intact.


This article will describe current challenges, usage of Cucumber JVM BDD in agile development, setup with Spring Boot, and Jenkins integration. Cucumber has very powerful reporting with graphs and tables, which can be integrated with Jenkins to share the reports. It also generates JSON reports, which can be consumed by other applications in integration testing. It simplifies behavioral testing by using a simple English-like language that can be written by business/QA (non-developers) and later converted into technical integration test cases, with or without mocking.

Why BDD?

BDD (Behavior Driven Development) is a methodology for developing software through continuous interaction between developers, QAs, and BAs within the agile team. BDD has these major components:

  • Feature file: It contains feature info like scenarios, steps, and examples (test data). It’s written in Gherkin. It’s a plain text file with the “.feature” extension.
  • Scenario: Every feature can have multiple positive and negative test scenarios — for example, login with the wrong password, login with the correct login credentials, etc.
  • Step Definitions: Every scenario contains a list of steps.


BDD is based on these three major pillars:

1. Given: Precondition.

2. When: Test execution.

3. Then: Acceptance and assertions.

Reference: Cucumber doesn’t technically distinguish between these three kinds of steps.


The purpose of Givens is to put the system in a known state before the user (or external system) starts interacting with the system (in the When steps). Avoid talking about user interaction in Givens. If you were creating use cases, Givens would be your preconditions.


Typical Given steps:

  1. Setting up initial data
  2. Setting up the initial configuration
  3. Creating model instances


The purpose of When steps is to describe the key action the user performs, such as interacting with a web page. A When step actually calls the business logic or the real APIs.


The purpose of Then steps is to observe outcomes like JUnit assertions. The observations should be related to the business value/benefit in your feature description. The observations should also be on some kind of output (calculated value, report, user interface, message).


Typical Then steps:

  • Testing REST APIs or test execution.
  • Verifying that something related to the Given+When is (or is not) in the output.
  • Checking that some external system has received the expected message.

And, But

If you have several Givens, Whens, or Thens, you can write like this:

Scenario: Multiple Givens
    Given one thing
    Given another thing
    Given yet another thing
    When I open my eyes
    Then I see something
    Then I don't see something else

Or you can make it read more fluently by writing:

Scenario: Multiple Givens
    Given one thing
      And another thing
      And yet another thing
    When I open my eyes
    Then I see something
      But I don't see something else

Key benefits of BDD with Cucumber:

  1. Simplicity: Non-technical syntax in feature files. Feature files are written in plain English, which can be linked with Agile stories.
  2. Communication between business and development is extremely focused as a result of a common English-type language.
  3. The code is easier to maintain, flexible, and extendable.
  4. The code is self-documenting with the examples.
  5. Test data can be changed in the feature file alone, without touching the code.
  6. Stories are easier to “groom” – breakdown, task, and plan.
  7. There is more visibility into team progress and status using reports. Cucumber reports can be shared with top-level management, integrated with Jenkins, and configured with email notifications via automated build and deployment tools like the Jenkins email plugins.

Unit Testing vs. TDD vs. BDD

Unit testing is for testing individual modules, whereas TDD is based on writing test cases first and then writing code to make them pass. BDD is behavioral testing based on real scenarios (which can't be covered by TDD alone). Testing microservices REST APIs is a good example.

Reference: This article has a detailed comparison matrix with other BDD tools: -tdd-and-bdd/.

Why BDD with Cucumber? Pros/Cons:

Note: Please refer to this comparison reference with other BDD tools/APIs:

Cucumber is a very powerful framework for BDD testing. It has many useful features, like testing by example (data tables, which can be part of the test cases) or parameters, which can be passed directly from the feature file(s). Multiple sets of test data can be sent to BDD test cases, and test data can be changed in the feature files without touching code or properties/resource files. Feature files and the related code look readable and maintainable. Additionally, Cucumber supports many different languages and platforms, like Ruby, Java, or .NET.

Getting Started With Cucumber

Cucumber setup with SpringBoot, RestAssured, AssertJ, and JSONAssert.

This sample code was developed using Cucumber with SpringBoot v1.5.1, RestAssured v3.0.3, AssertJ, and Java 8.


Prerequisites:

  1. Knowledge of basic Java and the SpringBoot framework
  2. BDD fundamentals
  3. Windows/Mac with Java 8 installed
  4. Familiarity with the Java-based testing frameworks RestAssured, AssertJ, and JSONAssert

Cucumber Maven Dependencies:


Feature file:

     Feature: Cucumber - SignUp Services Integration Test

     Scenario Outline: Set initial configuration for SignUp services
       Given app API Key header "<api_key>"
       And user id is "<userId>"
       And user password is "<password>"
       When access token service is called
       Then return access token
       And response code is 200

       Examples:
         | api_key          | userId   | password          | client_id       |
         | test************ | password | test************* | test*********** |

Advanced Reporting Dashboard


Run Cucumber Test Cases From the Command Line


To run Cucumber from the command line, you need to add this Maven plug-in.

Now, Cucumber integration tests can be run by simply using:

# Run integration test cases
$ mvn verify

Test Suite Using Cucumber

Cucumber has a feature to group features/scenarios, called a "tag". It is annotated in the feature file with @, for example @signupServices. These tagged test suites can be run individually based on your requirements.

# Run a Cucumber test suite
$ mvn clean test -Dcucumber.options="--tags @encryptionServices"
# Run multiple Cucumber test suites (comma-separated tags)
$ mvn clean test -Dcucumber.options="--tags @signupServices,@encryptionServices"
# Run a Cucumber test suite and also generate a detailed report with graphs and jar/class files
$ mvn clean install -Dcucumber.options="--tags @loyaltyService"
# Run all test suites and also generate a detailed report with graphs and jar/class files
$ mvn clean install

Cucumber Jenkins Integration

Please refer to these links to configure the integration with Jenkins.


The same Maven test plugins seen above will be required to create multiple Jenkins profiles for each feature or group of features by adding tags.

Mocking REST API With WireMock : Recording and Manual Modes

Current Challenges (Use Case)

Disclaimer: This blog content has been taken from my latest book:

“Cloud Native Microservices with Spring and Kubernetes”

I am a cloud architect and an API developer, and I have seen these issues frequently during the development phase. The Dev/QA environment is impacted by frequent third-party API outages and other service environment issues. This hurts Dev/QA team productivity and happens often, stopping all development and testing work. There is a need for a mocking API server that syncs with the third-party servers (service providers) and periodically caches their API responses on a mock server for the Dev/QA environment. Then, when third-party REST API services are down, development and testing work on the Dev/QA servers is not impacted.


This tutorial will cover installation and usage of the open-source WireMock. The objective of using WireMock is to serve as a backup mock server during a failover/outage of the actual REST APIs.

Why WireMock?

  1. According to the official WireMock documentation, WireMock is a simulator for HTTP-based APIs. Some might consider it a service virtualization tool or a mock server. It enables you to stay productive when an API you depend on doesn't exist or isn't complete (still in the dev phase). It supports testing of edge cases and failure modes that the real API won't reliably produce. And because it's fast, it can reduce your build and test time from hours down to minutes.
  2. Record and Playback — It can run in recording mode: get up and running quickly by capturing third-party traffic to and from an existing API. It caches REST API responses on the WireMock proxy server.
  3. A real API's requests and responses can be cached locally and, using the recording feature, served as a mock in the absence of the real server.
  4. It provides a way to create requests and corresponding responses in the form of JSON objects.

How does WireMock work?


  1. WireMock will be used as a mocking server.
    • The scope of this mocking server is to run it as an independent server, or on the developer machine, for REST API mocking. It can also be deployed on the remote non-prod and prod servers for QA testing.
  2. It syncs up with the real third-party servers and records all tested API requests and responses.
  3. Additionally, mock JSON files can be created for custom scenarios with different sets of requests.
  4. The same JSON can be used by web clients internally for local testing during the development phase.
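Such a custom mock pairs a request matcher with a canned response in a single JSON file. The sketch below shows the general shape; the endpoint and body are hypothetical, while `request`, `response`, and `jsonBody` are standard WireMock mapping fields:

```json
{
  "request": {
    "method": "GET",
    "url": "/v1/product/id/11111111"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "id": "11111111", "name": "Sample product" }
  }
}
```

Dropping a file like this into WireMock's mappings folder makes the stub available without recording anything from a real server.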

Note: One WireMock instance (JVM) can only be configured for a single API server.

Mocking Existing Active APIs (Through WireMock Recording and Playback for New API)

WireMock caches an API's response the first time the API is hit; afterwards, WireMock picks the response up from the local cache where it is deployed and returns it to the client. To flush the WireMock cache and get it updated, run the "/reset" admin API to refresh the API mapping cache.

Mocking a New API When the API Is Not Available

Test with different sets of test data and URI parameters (query and path parameters). The WireMock cache entry, keyed by a unique UUID, will differ for different request data/queries.

Wiremock Can Be Run in These Two Modes:

1. Auto-Switching Mode: Seamless development and testing by pointing to the WireMock server whether the main server is up or down, using its continuous recording mode. In this mode, the client always points to the WireMock server only, and WireMock mocks all the responses, so the client doesn't have to change the server URL in their build. Recording is always on; you only need to run the "/reset" admin API to refresh the API mapping cache.

2. Manual Switching Mode: The client has to change the server URL in their build when the main server is down and a switch to the WireMock server is required. In this mode, recording is always on.

Assumptions and Limitations

Only one environment can be configured and mocked per WireMock instance, i.e. only the server address configured while starting WireMock will be mocked through that instance. For example, we start WireMock with the command below, so only APIs hitting the configured proxy address will be mocked. If the client connects to more than one server, then multiple WireMock instances have to run on different ports, each pointing to one proxy server.

 $ java -jar wiremock-standalone-2.7.1.jar --port 9000 --proxy-all="http:///" --record-mappings

Once the user has switched to using the cached/stored files of the WireMock server via the "/reset" API, the user cannot switch back to the original API unless the WireMock server is restarted and the saved JSON on the WireMock server is deleted.

In recording mode, the client will get only previously cached responses when the main server is down; WireMock automatically reconnects to the main server when it is up and running again.


Setup WireMock

Run WireMock in "standalone mode" or "deployed into a servlet container." In this article, we will run WireMock in standalone mode. To set up WireMock, follow the steps below. Download WireMock from this URL:


=> download the standalone JAR from here.

After running WireMock for the first time, it will create these two folders in the same home directory where the WireMock jar lives:

1. mappings => It contains request and response JSON.

2. _files => It contains response errors and messages as JSON. It also contains HTML responses as text files.

Note: Every request creates a separate mapping JSON file in the WireMock home directory, with a different file name and a unique ID. So the same API can be called with different requests.

It stores request/response JSON like this:

{
  "id" : "d03988e07a55",
  "request" : {
    "url" : "product/id/11111111",
    "method" : "GET"
  },
  "response" : {
    "status" : 200,
    "bodyFileName" : "body-id-11111111.json",
    "headers" : {
      "Date" : "Sat, 10 Jun 2018 19:53:44 GMT",
      "X-Powered-By" : "Servlet/3.0",
      "correlation-id" : "1497124424607",
      "Access-Control-Allow-Origin" : "*",
      "channel" : "ANDROID",
      "Keep-Alive" : "timeout=10, max=100",
      "Connection" : "Keep-Alive",
      "Transfer-Encoding" : "chunked",
      "Content-Type" : "application/json;charset=UTF-8"
    }
  },
  "uuid" : "d03988e07a55"
}

How Developers Can Utilize WireMock

  1. Developers can install WireMock locally and record all the REST APIs while the real API is available. Responses can then be modified manually, or created from scratch and placed in the same mappings folder. The client can then point to the WireMock server instead of the real server whenever the real server is down.
  2. Client apps (native/web) can also place these auto-generated JSON request/response files in their local code and point to them during development.
  3. Developers can also run a set of test scripts using JMeter/JUnit test suites and record/mock all the REST API requests and responses.

How QA Can Utilize WireMock (Sync REST Services at a Remote Server)

A backup remote server is required to sync mock responses of the real server during testing. It should be set up with the help of the DevOps team and be available whenever the main server is not. Either the DevOps team can point to the mock server and inform all related dev/QA teams by email, or the switch can happen automatically by continuously checking the REST API server's health.
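Pointing a client at WireMock instead of the real server usually comes down to a configurable base URL. A minimal sketch (host names are hypothetical):

```python
# The client builds API URLs from a configurable base, so switching between
# the real server and a local WireMock is a one-line configuration change.
REAL_SERVER = "http://api.example.internal"  # hypothetical real host
WIREMOCK = "http://localhost:9000"           # local WireMock from the examples

def product_url(base_url, product_id):
    """Build the product endpoint URL against whichever server is active."""
    return f"{base_url}/v1/product/id/{product_id}"

print(product_url(WIREMOCK, "11111111"))
```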

Case 1: Recording and Playback for an Existing API

  1. Assuming the WireMock server is running on a host machine on some port, hit the URL of the API whose response is to be recorded from Postman. To record the API response, change the actual hostname to the WireMock host and port. The response of the API is captured on WireMock as JSON (or the requested data type).
  • GET — HTTP method of the API
  • localhost — host machine address where WireMock is deployed
  • 9000 — port number
  • API URI (v1/product) — API URL whose response is to be recorded

2. Now, when you hit the API again, the response will still come from the actual host. To get the recorded response from WireMock, hit the POST "/RESET" admin API request from Postman.

__admin/mappings/reset — the request to hit to refresh and switch to the WireMock cache.



  1. As WireMock is running in recording mode, whenever you hit an API, it records the response. Unless you use RESET, you will keep getting the response from the actual API. RESET is used when you want to start getting responses from the WireMock server's recorded JSON files, so hit the RESET request once you have finished recording all the required APIs.
  2. If you hit an API for the first time (i.e., its response is not yet recorded), its response will be saved in WireMock. Once it is recorded, RESET WireMock to get the recorded response.
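The RESET switch is just a POST to WireMock's admin API, so the same call can be scripted instead of sent from Postman. A sketch using Python's standard library (assumes WireMock on localhost:9000, as in the examples):

```python
from urllib import request

def admin_url(path, base_url="http://localhost:9000"):
    """Build a WireMock admin endpoint URL, e.g. admin_url('mappings/reset')."""
    return f"{base_url}/__admin/{path}"

def reset_mappings(base_url="http://localhost:9000"):
    """POST to /__admin/mappings/reset so playback uses the recorded mappings."""
    req = request.Request(admin_url("mappings/reset", base_url), method="POST")
    return request.urlopen(req)  # requires a running WireMock instance

# reset_mappings()  # uncomment with WireMock running
```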

Case 2: Recording and Playback for the Same API With Different Request Data

  1. For any API, different request data (parameters, headers, request body) produces different mapping files and response JSON in the WireMock cache. For example, the two calls below to the same API with different query parameters will create two mappings and two JSON responses:


localhost:9000/v1/products?sku=9956

localhost:9000/v1/products?sku=9777

2. Run the "/RESET" API to switch to the WireMock cached responses.
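In other words, the query parameter is part of the matched request, so each variant becomes its own stub. A small sketch of the resulting "request" sections:

```python
# Each distinct request (here, a different sku) yields its own mapping entry;
# WireMock stores one mapping file per recorded request/response pair.
stub_sku_9956 = {"method": "GET", "url": "/v1/products?sku=9956"}
stub_sku_9777 = {"method": "GET", "url": "/v1/products?sku=9777"}

# Same endpoint, different request data -> two separate mappings.
print(stub_sku_9956 != stub_sku_9777)  # prints True
```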

Case 3: Mocking New API (Which Is Not Available or Ready at the API Server)

There are two methods to create your own custom mapping to mock an API's response using WireMock.

Scenario: Create a mapping for an API with these expected values:

URI — some/thing

Method — POST

API request body data — { “numbers”: [1, 2, 3,10] }

API response — { “id”: “1”, “name”: “xyz” }

Status: 200

Headers — Content-Type: “application/json”

Method-1: Upload Your Own Custom JSON File

  1. Create a JSON file like <<fileName>>.json. It should be valid JSON.
  2. Place this JSON payload file in the "__files" folder of WireMock (all API response bodies are saved here). (Note: the DevOps team will help with this.)
  3. Select JSON (application/json) as the content type from the drop-down in Postman and hit the create-mappings request with the request body described below. Add the file name for the JSON key "bodyFileName", e.g., "bodyFileName": "test.json". Example: POST: localhost:9000/__admin/mappings

API Definition:

<!-- wp:table -->
<figure class="wp-block-table"><table>
<thead><tr><th>Request Body Parameter</th><th>Explanation</th><th>Example</th></tr></thead>
<tbody>
<tr><td>request</td><td>Contains request data of the API to be mocked</td><td></td></tr>
<tr><td>method</td><td>HTTP method of the API to be mocked</td><td>GET, POST, PUT, DELETE</td></tr>
<tr><td>url</td><td>URL of the API</td><td>some/thing</td></tr>
<tr><td>bodyPatterns</td><td>Defines request body data of the API</td><td></td></tr>
<tr><td>equalToJson</td><td>Contains JSON data to be passed in the request body of the API</td><td>"equalToJson": "{ \"numbers\": [1, 2, 3, 10] }"</td></tr>
<tr><td>response</td><td>Contains response data of the API to be mocked</td><td></td></tr>
<tr><td>status</td><td>HTTP status code of the API</td><td>2xx or 4xx</td></tr>
<tr><td>bodyFileName</td><td>Contains the file name of the JSON response</td><td>test.json</td></tr>
<tr><td>headers</td><td>Specify headers present in the response and/or request</td><td>"Content-Type": "application/json"</td></tr>
</tbody>
</table></figure>
<!-- /wp:table -->
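Putting the parameters above together, the create-mappings request body for the scenario would look roughly like this (a sketch assembled with Python's json module; WireMock's stubbing JSON supports more options than shown):

```python
import json

# Request body for POST localhost:9000/__admin/mappings, assembled from the
# scenario values above (URI some/thing, response file test.json).
mapping = {
    "request": {
        "method": "POST",
        "url": "some/thing",
        "bodyPatterns": [
            {"equalToJson": "{ \"numbers\": [1, 2, 3, 10] }"}
        ],
    },
    "response": {
        "status": 200,
        "bodyFileName": "test.json",
        "headers": {"Content-Type": "application/json"},
    },
}

print(json.dumps(mapping, indent=2))
```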

4. Now, hit the "/save" mappings request from Postman.

POST: localhost:9000/__admin/mappings/save

Important Note: If we do not save the mappings, our created response will be lost the next time the WireMock server starts; no mapping is persisted on WireMock without this step.

5. Now, to check the response of the created mocked API, hit the API from Postman with whatever parameters and request body data are needed. The API response will now come from WireMock.

  • POST — method to be called on API some/thing
  • localhost — host machine address where WireMock is deployed
  • 9000 — port number
  • some/thing — API URL whose response is to be checked
  • request body — { “numbers”: [1, 2, 3,10] }
  • headers — Content-Type: application/json

Method-2: Upload Your Own JSON Object (No Help Required From DevOps)

  1. Select JSON from the drop-down in Postman and hit the create-mappings request with the request body described below.

Example: POST: localhost:9000/__admin/mappings

2. To add our JSON object instead of a JSON file, use "jsonBody" instead of "bodyFileName".

<!-- wp:table -->
<figure class="wp-block-table"><table>
<thead><tr><th>Request Body Parameter</th><th>Explanation</th><th>Example</th></tr></thead>
<tbody>
<tr><td>jsonBody</td><td>Pass the JSON object required as the API response</td><td>"jsonBody": {"id": "1", "name": "xyz"}</td></tr>
</tbody>
</table></figure>
<!-- /wp:table -->
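With Method-2 the mapping is self-contained; the only change from Method-1 is swapping bodyFileName for an inline jsonBody. A sketch:

```python
import json

# Same create-mappings request body as Method-1, but with the response JSON
# inlined via "jsonBody" instead of referencing a file in the __files folder.
mapping = {
    "request": {
        "method": "POST",
        "url": "some/thing",
        "bodyPatterns": [{"equalToJson": "{ \"numbers\": [1, 2, 3, 10] }"}],
    },
    "response": {
        "status": 200,
        "jsonBody": {"id": "1", "name": "xyz"},
        "headers": {"Content-Type": "application/json"},
    },
}

print(json.dumps(mapping, indent=2))
```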

3. Save and test the created API mapping by hitting the "/save" API.

Case 4: Need Updated Data From the API Server

  1. DevOps needs to stop the WireMock server.
  2. DevOps needs to delete the following folders on the WireMock server:
    1. __files
    2. mappings
  3. Restart the WireMock server. The user can then start using WireMock as in the above-mentioned cases.
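The cleanup in steps 1-3 can be scripted; a minimal sketch (assumes it is run against the WireMock home directory while the server is stopped):

```python
import pathlib
import shutil

def clean_wiremock_home(home="."):
    """Delete WireMock's __files and mappings folders so the next recording
    run starts from a clean cache. Returns the folders actually removed."""
    removed = []
    for folder in ("__files", "mappings"):
        path = pathlib.Path(home, folder)
        if path.is_dir():
            shutil.rmtree(path)
            removed.append(folder)
    return removed
```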

DevOps Responsibilities:

  • Restart the server in recording mode

Command to run on the command line (from the directory where the WireMock JAR is located):

java -jar wiremock-standalone-2.7.1.jar --port <<port-number>> --proxy-all=<<host-name>> --record-mappings

$ java -jar wiremock-standalone-2.7.1.jar --port 9000 --proxy-all="" --record-mappings
  • Upload the JSON file to the WireMock server's "__files" folder (if a response JSON file needs to be uploaded).
  • Clean up the server: the DevOps team has to clean up the server by removing these two folders on demand (e.g., weekly or monthly):
    1. __files
    2. mappings