A quick introduction to Cluster API

In this blog, we will understand how multiple Kubernetes clusters can be managed and orchestrated using the Cluster API control plane tool.

Cluster API

It’s a Kubernetes sub-project that provides a declarative API, implemented with Custom Resource Definitions (CRDs), for managing Kubernetes clusters and the VMs they run on. Cluster API continuously checks the current state of the K8s clusters and worker nodes and compares it with the desired state supplied by the DevOps team in CRD configuration files. It can provision multiple K8s clusters spread across multiple nodes/hosts, and it extends Kubernetes for multi-node cluster management using the kubeadm API.
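To make this declarative model concrete, here is a hedged sketch of a minimal Cluster object (v1alpha3-era API; the names, namespace and CIDR block are illustrative placeholders, and the AWSCluster reference assumes the AWS infrastructure provider):

```yaml
# Illustrative only: a minimal Cluster resource referencing an AWS
# infrastructure cluster. All names and the pod CIDR are placeholders.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: demo-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: demo-cluster
```

Applying such a manifest hands the desired state to the Cluster API controllers, which then reconcile the actual infrastructure toward it.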

Cluster API orchestrates and manages the lifecycle of the worker nodes where K8s clusters are provisioned. It also manages upgrades, failover, rollback, etc. across multiple K8s clusters.

Tip: Cluster API can manage K8s clusters across multiple clouds, including on-prem and hybrid environments.

Started by the Kubernetes Special Interest Group (SIG) Cluster Lifecycle, the Cluster API project uses Kubernetes style APIs and patterns to automate cluster lifecycle management.

Refer to the Cluster API docs here: https://cluster-api.sigs.k8s.io/

Cluster API cluster management reference architecture

These are the important components of Cluster API:

Infrastructure provider

Infrastructure providers supply the underlying compute: on-prem/virtualization providers such as VMware, and public cloud providers such as AWS, GCP, Azure, etc.

Bootstrap provider

It’s also called the Cluster API bootstrap provider kubeadm (CABPK). It generates cluster certificates, initializes/bootstraps control planes, and turns a Machine into a K8s Node. It uses the kubeadm tool.


It manages the lifecycle of K8s clusters on multiple nodes. Kubeadm is a tool built to provide best-practice “fast paths” for creating Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way.

Kubeadm: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

Control plane

The control plane is a set of services that manages multiple K8s clusters and scales in production environments. It manages and provisions dedicated machines/VMs, running static pods for components such as kube-apiserver, kube-controller-manager and kube-scheduler.

The default provider uses kubeadm to bootstrap the control plane.
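As a sketch of how the default kubeadm-based control plane provider is configured (field names follow the v1alpha3 API; the resource names, Kubernetes version and machine template are hypothetical):

```yaml
# Hypothetical KubeadmControlPlane: three control plane replicas
# bootstrapped by kubeadm from an AWS machine template.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: demo-control-plane
spec:
  replicas: 3
  version: v1.18.2
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: demo-control-plane
  kubeadmConfigSpec:
    clusterConfiguration: {}
```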

Custom Resource Definitions (CRDs)

CRDs are a set of K8s YAML configuration files that maintain the desired K8s cluster state. They can be persisted on a Git server to drive platform automation for Cluster API-based K8s clusters.


Machine

It defines the configuration of a machine, such as a VM. It’s a declarative spec for an infrastructure component hosting a Kubernetes Node; it is analogous to a Pod in K8s.


MachineDeployment

It provides declarative updates for Machines and MachineSets.


MachineSet

It maintains a stable set of Machines running at any given time; it is the Cluster API analogue of a K8s ReplicaSet.


MachineHealthCheck

It defines the conditions under which a Node should be considered unhealthy. If the Node has any unhealthy conditions for a given user-configured time, the MachineHealthCheck initiates remediation of the Node. Remediation is performed by deleting the corresponding Machine and creating a new one, much as a ReplicaSet replaces Pods. It always maintains the set of Machines specified in the CRD file.
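For illustration, a MachineHealthCheck expressing such rules might look like this (cluster name, label selector and timeouts are placeholders):

```yaml
# Hypothetical MachineHealthCheck: remediate worker Machines whose Node
# stays NotReady or Unknown for more than 5 minutes.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: demo-unhealthy-5m
spec:
  clusterName: demo-cluster
  maxUnhealthy: 40%
  selector:
    matchLabels:
      nodepool: workers
  unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s
```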


Bootstrap data

It contains the Machine or Node role-specific initialization data (usually cloud-init) used by the Infrastructure Provider to bootstrap a Machine into a Node.

Business Benefits of Cloud

I have been busy with other work assignments, so I am publishing this blog after a long interval. Hope you like it!

It’s important to understand the benefits for a business that is planning to migrate to the cloud and invest time and money. These are some generic and common benefits of adopting cloud technology with a modern application approach:

  • Smoother and faster app user experience: Cloud provides faster, highly available app interfaces, which improves the user experience. For example, AWS serves static web pages and images from nearby CDN (Content Delivery Network) servers, which gives faster, smoother application responses.
  • On-demand scaling for infrastructure: Cloud provides on-demand horizontal/vertical scaling of compute, memory and storage. Organizations don’t need to predict infrastructure for peak load, and they save money by using only the resources they need.
  • No outage for users and clients: Cloud provides high availability, so whenever an app server goes down, client load is diverted to another app server or a new app server is created. User and client sessions are also managed automatically using internal load balancers.
  • Less operational cost (OPEX): Cloud automates most infrastructure management operations or delegates them to the cloud provider. For example, PaaS (Platform as a Service) automates the entire platform with a smaller number of DevOps resources, which saves a lot of operational cost.
  • Easy to manage: Cloud providers and PaaS platforms provide easy and intuitive web, CLI and API consoles, which can be integrated with CI/CD tools, IaC (Infrastructure as Code) and scripts. They can also be integrated with apps.
  • Release app features quickly to compete in the market: Cloud provides many ready-to-use services, which reduces the time to build and deploy apps using microservices and agile development methodologies. It supports container orchestration services like Kubernetes, where smaller microservices can be deployed quickly, enabling organizations to release new features fast.
  • Increased security: Cloud solutions provide out-of-the-box security features at the application, network, data and infrastructure levels. For example, AWS provides DDoS and OWASP protection with firewalls, etc.
  • Increased developer productivity: Cloud provides various tools and services to improve developer productivity, such as PaaS, Tanzu Build Service, the Spring framework, AWS Elastic Beanstalk, GCP and OpenShift developer tools.
  • Modular teams: Cloud migration motivates dev and test teams to follow a modern microservice approach and work in agile on independent modules or microservices.
  • Public cloud’s “pay as you go” usage policy: Customers pay only for the infrastructure they use, so no extra resources are wasted. This pricing model saves a lot of cost.
  • Easy disaster recovery handling: Cloud is deployed across multiple data centers (DCs) or availability zones (AZs) for disaster recovery (DR), so that if any site (DC/AZ) goes down, client or application load is automatically routed to another site using load balancers.
  • Business continuity: Cloud provides the processes and tools to manage business continuity (BC) for smooth and resilient business operations, including faster site recovery after a disaster and data backup. Cloud also provides enterprise compliance for various industries, such as HIPAA for health insurance.

Install Tanzu Build Service (TBS v1.0 GA) on Kubernetes – build docker image and store in DockerHub image registry

In this blog I will cover the following scope:

  1. How to install TBS on local KIND/TKGI/TKG and other Kubernetes clusters.
  2. How to auto-build a portable OCI docker image of a Spring Boot (Java) project using VMware Tanzu Build Service, an automated build tool which detects Git source code repository commits and injects the required dependencies and OS base image based on the source code language and its configuration files, e.g. application.yaml and Maven’s pom.xml for a Java/Spring app.
  3. Push this docker image automatically to the Docker Hub image registry. You can use any image registry, such as on-prem Harbor, AWS ECR, GCR, Azure ACR, etc.
  4. Test the built image by downloading it from the Docker Hub image registry and running it with Docker.
  5. How to build a .NET application using TBS (Appendix)
  6. FAQ

Currently, Tanzu Build Service (TBS) ships with the following Buildpacks:

  • Java
  • NodeJS
  • .NET Core (supports Windows DotNet App)
  • Python
  • Golang
  • PHP

Why Tanzu Build Service (TBS)?

  1. Saves time on re-building, re-testing and re-deploying when patching hundreds of containers.
  2. It auto-scans the source code configuration and language and injects the dependencies into the docker image that are required to compile and/or run the app on containers/K8s.
  3. Faster builds and patching across hundreds of containers.
  4. Faster developer productivity by setting up local builds on the developer’s machine, in sync with a source code repo like GitHub.
  5. Manages common project dependencies for dev teams and syncs all developers’ code with a single git branch to avoid code conflict/sync issues.
  6. Maintains the latest image in the image registry.
  7. OCI docker image support: build and run anywhere!

Please refer to this official documentation page for more detail.


Tanzu Build Service v1.0 GA can be installed on any private or public cloud Kubernetes cluster (v1.14 or later), including a local machine using the Kubernetes shipped with Docker Desktop or MiniKube, and on managed K8s such as TKGI, TKG, GKE and AKS clusters.

Build Service Components

Tanzu Build Service ships with the following components:

  1. kpack
  2. CNB lifecycle


Prerequisites

  1. Create a Pivnet account. Refer to the official docs for more details on obtaining a Pivotal Network API token. You can create a free account and try it on your local machine.
  2. Install the Pivnet CLI: https://github.com/pivotal-cf/pivnet-cli/releases/tag/v2.0.1
  3. Install Docker Desktop (Mac, Windows). Optional if you are installing on a local single-node K8s cluster.
  4. Docker Hub account
  5. Install Kubernetes. I have used a TKGI K8s cluster on GCP; you can also use KIND (K8s in Docker) with Docker Desktop.
  6. Install the TKGI CLI or kubectl CLI
  7. Install these three Carvel CLIs for your operating system. These can be found on their respective Tanzu Network pages:
  • kapp is a deployment tool that allows users to manage Kubernetes resources in bulk.
  • ytt is a templating tool that understands YAML structure.
  • kbld is a tool that builds, pushes, and relocates container images.
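To illustrate what ytt does (the file names and the data value here are made up for this sketch): a data-values file declares variables, and a template references them.

```yaml
# values.yaml -- data values, marked with a ytt annotation
#@data/values
---
docker_repository: example-repo

# template.yaml -- references the data value above
#@ load("@ytt:data", "data")
---
image: #@ data.values.docker_repository + "/tanzu-build-service"
```

Running ytt over the two files would render the template with the data value substituted; the TBS install below pipes such rendered YAML into kbld and kapp.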

How to Install & Configure TBS:

You can download TBS from the VMware Tanzu Network (formerly the Pivotal Network, or PivNet) or install it using Pivnet CLI commands.

Note: I have used the Pivnet CLI for all downloads from VMware PivNet. Refer to the official installation guide for advanced configuration:
#Pivnet login using secret token
$ pivnet login --api-token='my-api-token'

$ pivnet download-product-files --product-slug='build-service' --release-version='1.0.2' --product-file-id=773503

#Unarchive the Build Service Bundle file:
$ tar xvf build-service-<version>.tar -C /tmp

#Login to Docker Hub. This step saves your Docker Hub credentials for the K8s cluster. Note: You can use Harbor's URL as well.
$ docker login index.docker.io

#Login to the VMware registry (registry.pivotal.io) via the Docker CLI (shipped with Docker Desktop)
$ docker login registry.pivotal.io

#Relocate the images for DockerHub with the Carvel tool kbld by running:
# Syntax: kbld relocate -f /tmp/images.lock --lock-output /tmp/images-relocated.lock --repository <IMAGE-REPOSITORY>

$ kbld relocate -f /tmp/images.lock --lock-output /tmp/images-relocated.lock --repository itsrajivsrivastava/tanzu-build-service

Connect with your K8s cluster where you want to install TBS:

$ kubectl config use-context <K8s-cluster-name>

Now, install TBS on K8s. You can run these commands from the home folder.

Then use ytt to push the bundle to DockerHub or your image registry. It will upload all language buildpacks and other supporting images to Docker Hub/Harbor.

Note: It will take quite some time to upload this bunch of images. If it fails, re-run the command after deleting the failed TBS build. You can delete the “kpack” and “build-service” Kubernetes namespaces.

$ ytt -f /tmp/values.yaml \
    -f /tmp/manifests/ \
    -v docker_repository="<IMAGE-REPOSITORY>" \
    -v docker_username="<REGISTRY-USERNAME>" \
    -v docker_password="<REGISTRY-PASSWORD>" \
    | kbld -f /tmp/images-relocated.lock -f- \
    | kapp deploy -a tanzu-build-service -f- -y

ytt -f /tmp/values.yaml \
    -f /tmp/manifests/ \
    -v docker_repository="itsrajivsrivastava" \
    -v docker_username="itsrajivsrivastava" \
    -v docker_password='******' \
    | kbld -f /tmp/images-relocated.lock -f- \
    | kapp deploy -a tanzu-build-service -f- -y


  • IMAGE-REPOSITORY is the image repository where Tanzu Build Service images exist.
  • REGISTRY-USERNAME is the username you use to access the registry. gcr.io expects _json_key as the username when using JSON key file authentication.
  • REGISTRY-PASSWORD is the password you use to access the registry.

Install KP CLI:

The kp CLI is used for interacting with your Tanzu Build Service (TBS) installation on a K8s cluster. Download the kp binary from the Tanzu Build Service page on Tanzu Network.

Import Tanzu Build Service Dependencies

The Tanzu Build Service Dependencies (Stacks, Buildpacks, Builders, etc.) are used to build applications and keep them patched.

  1. Run this command on the CLI: `docker login registry.pivotal.io`
  2. Accept all the EULA agreements online.

Note: Successfully performing a kp import command requires that your Tanzu Network account has access to the images specified in the Dependency Descriptor file. Currently, users can only access these images if they agree to the EULA for each dependency. Users must navigate to each of the dependency product pages in Tanzu Network and accept the EULA highlighted in yellow underneath the Releases dropdown.

Here are the links to each Tanzu Network page in which users must accept the EULA:

  1. Tanzu Build Service Dependencies
  2. Java Buildpack for VMware Tanzu
  3. Java Native Image Buildpack for VMware Tanzu
  4. Node.js Buildpack for VMware Tanzu
  5. Go Buildpack for VMware Tanzu

Note: `kp import` will fail if it cannot access the images in all of the above Tanzu Network pages.

Note: You must be logged in locally to the registry used for `IMAGE-REGISTRY` during relocation and the Tanzu Network registry `registry.pivotal.io`.

These must be imported with the kp cli and the Dependency Descriptor (descriptor-<version>.yaml) file from the Tanzu Build Service Dependencies page:

$ kp import -f /tmp/descriptor-<version>.yaml

Verify kp Installation

List the custom cluster builders available in your installation:

You should see an output that looks as follows:

$  kp clusterbuilder list
NAME       READY    STACK                          IMAGE
base       true     io.buildpacks.stacks.bionic    itsrajivsrivastava/base@sha256:b3062df93d2da25aeff827605790c508570446e53daa8afe05ed2ab4157d1c02
default    true     io.buildpacks.stacks.bionic    itsrajivsrivastava/default@sha256:f16ed5de160ca9c13a0080d67280d0b2b843c926595c4d171568d75f96479247
full       true     io.buildpacks.stacks.bionic    itsrajivsrivastava/full@sha256:f16ed5de160ca9c13a0080d67280d0b2b843c926595c4d171568d75f96479247
tiny       true     io.paketo.stacks.tiny          itsrajivsrivastava/tiny@sha256:abc4879c03512a072623a7dcb18621d68122b5e608b452f411d8bc552386b8c5

# List the cluster stacks available in your installation:

$ kp clusterstack list
NAME       READY    ID
base       True     io.buildpacks.stacks.bionic
default    True     io.buildpacks.stacks.bionic
full       True     io.buildpacks.stacks.bionic
tiny       True     io.paketo.stacks.tiny

Create Git and Image registry secrets in your K8s cluster:

# Docker Hub secret
$ kp secret create docker-creds --dockerhub itsrajivsrivastava -n tbs-demo
  dockerhub password:
  "docker-creds" created

# GitHub Secret
$ kp secret create github-creds --git https://github.com --git-user rajivmca2004 -n tbs-demo

#Verify and list secret
$ kp secret list -n tbs-demo

NAME                   TARGET
docker-creds           https://index.docker.io/v1/
github-creds           https://github.com

#Delete secret
$ kp secret delete <SECRET-NAME> -n tbs-demo

Create and manage docker images using kp CLI commands:

# Create TBS image: (--tag is mandatory)

$ kp image create <name> \
  --tag <tag> \
  [--builder <builder> or --cluster-builder <cluster-builder>] \
  --namespace <namespace> \
  --env <env> \
  --wait \
  --git <git-repo> \
  --git-revision <git-revision>


$ kp image create spring-petclinic \
  --tag index.docker.io/itsrajivsrivastava/spring-petclinic:latest \
  --namespace tbs-demo \
  --git https://github.com/rajivmca2004/spring-petclinic \
  --git-revision master
"spring-petclinic" created

#  Verify image status
$ kp image status spring-petclinic -n tbs-demo
Status:         Ready
Message:        --
LatestImage:    index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:4e712dec26810f026357281be4ba49ff8da3b45698700fd5dc470b7914c0d13d

Last Successful Build
Id:        1
Reason:    CONFIG

Last Failed Build
Id:        --
Reason:    --

# List the project(s):
$ kp image list -n tbs-demo
NAME                READY    LATEST IMAGE
spring-petclinic    True     index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:d96fdd60633cd5582a9c28edceff80762ee79ef2985cd18f518bc1503563b7ef

# To check image list of any specific project 

$ kp image list spring-petclinic -n tbs-demo
NAME                READY    LATEST IMAGE
spring-petclinic    True     index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:d96fdd60633cd5582a9c28edceff80762ee79ef2985cd18f518bc1503563b7ef

Build docker images:

Here, TBS stays in sync with two systems:
1. GitHub: TBS watches the GitHub repo and triggers a build after every commit, producing a new docker image.
2. Image registry/Docker Hub: TBS automatically pushes the image created in the last step to the Docker Hub image registry and keeps it refreshed for developers and for CI/CD build and deployment to K8s containers.
There are two ways to build the image:

  1. Auto Build
  2. Manual Build

Note: If the source code you want to build lives in a sub-directory, or if you have a parent project repo with multiple child projects inside it, use the --sub-path flag, e.g.:

--sub-path DotNetBuild

$ kp image create dotnetbuildtest index.docker.io/itsrajivsrivastava/dotnetbuildtest \
  --sub-path DotNetBuild \
  --namespace tbs-dotnet-demo \
  --git https://github.com/rajivmca2004/DotNetBuild.git \
  --git-revision master

1. Auto Build:

Note: The first build is triggered automatically when you create the image for the first time.

To check the auto build, make some code changes in your Git branch and check the build progress using this command; it takes a few seconds to trigger. You can watch the build logs locally:

# Use this command to check the build status; it can also be used for the first build. It will also show build revisions.

$ kp build status spring-petclinic -n tbs-demo
Image:            index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:e09bc84c0287d8d0985244876a68ae4b3963a96707622d81f6d9b9efa581be92
Status:           SUCCESS
Build Reasons:    COMMIT

Pod Name:    spring-petclinic-build-2-5csx8-build-pod

Builder:      itsrajivsrivastava/default@sha256:f16ed5de160ca9c13a0080d67280d0b2b843c926595c4d171568d75f96479247
Run Image:    index.docker.io/itsrajivsrivastava/run@sha256:ca460a285b00d8f25ca3734b8f783af240771eb10974d90e26501fd52c0271b8

Source:      GitUrl
Url:         https://github.com/rajivmca2004/spring-petclinic
Revision:    c5b4f7f717a1dc239c002c993c178f75283a7751

BUILDPACK ID                           BUILDPACK VERSION
paketo-buildpacks/bellsoft-liberica    4.0.0
paketo-buildpacks/maven                3.1.1
paketo-buildpacks/executable-jar       3.1.1
paketo-buildpacks/apache-tomcat        2.3.0
paketo-buildpacks/dist-zip             2.2.0
paketo-buildpacks/spring-boot          3.2.1

$ kp build list spring-petclinic -n tbs-demo

BUILD    STATUS     IMAGE                                                                                                                          STARTED                FINISHED               REASON
1        SUCCESS    index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:b6d6a0944600be3552c0b33e0a0759a12b422168484ffff55f9f9e0be4c93217    2020-10-10 00:39:10    2020-10-10 00:59:40    CONFIG
2        SUCCESS    index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:e09bc84c0287d8d0985244876a68ae4b3963a96707622d81f6d9b9efa581be92    2020-10-10 01:07:35    2020-10-10 01:09:48    COMMIT

2. Manual Build:

$ kp image trigger spring-petclinic -n tbs-demo

Check Build Logs:

# To view running logs for a build:

 $ kp build logs spring-petclinic -n tbs-demo

#Check logs for a given build:

 $ kp build logs spring-petclinic --build 1 -n tbs-demo

Note: The first build will take some time to download all the dependent libraries. Subsequent builds will be super fast!

The applied source-code-related build packages are listed in the following build logs. Since it's a Java app, these buildpacks were applied automatically:

[INFO] Building jar: /workspace/target/spring-petclinic-2.3.0.BUILD-SNAPSHOT.jar
[INFO] --- spring-boot-maven-plugin:2.3.0.RELEASE:repackage (repackage) @ spring-petclinic ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  17:44 min
[INFO] Finished at: 2020-10-09T19:28:07Z
[INFO] ------------------------------------------------------------------------
  Removing source code

Paketo Executable JAR Buildpack 3.1.1
  Process types:
    executable-jar: java org.springframework.boot.loader.JarLauncher
    task:           java org.springframework.boot.loader.JarLauncher
    web:            java org.springframework.boot.loader.JarLauncher

Paketo Spring Boot Buildpack 3.2.1
  Launch Helper: Contributing to layer
    Creating /layers/paketo-buildpacks_spring-boot/helper/exec.d/spring-cloud-bindings
    Writing profile.d/helper
  Web Application Type: Contributing to layer
    Servlet web application detected
    Writing env.launch/BPL_JVM_THREAD_COUNT.default
  Spring Cloud Bindings 1.6.0: Contributing to layer
    Reusing cached download from buildpack
    Copying to /layers/paketo-buildpacks_spring-boot/spring-cloud-bindings
  Image labels:
Reusing layers from image 'index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:129f1e4231095c7504e01a4f233487b056eb28975b93f4dbb6195534bee4220e'
Adding layer 'paketo-buildpacks/bellsoft-liberica:helper'
Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
Reusing layer 'paketo-buildpacks/executable-jar:class-path'
Adding layer 'paketo-buildpacks/spring-boot:helper'
Adding layer 'paketo-buildpacks/spring-boot:spring-cloud-bindings'
Adding layer 'paketo-buildpacks/spring-boot:web-application-type'
Adding 1/1 app layer(s)
Adding layer 'launcher'
Adding layer 'config'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
Adding label 'org.opencontainers.image.title'
Adding label 'org.opencontainers.image.version'
Adding label 'org.springframework.boot.spring-configuration-metadata.json'
Adding label 'org.springframework.boot.version'
*** Images (sha256:b6d6a0944600be3552c0b33e0a0759a12b422168484ffff55f9f9e0be4c93217):
Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
Adding cache layer 'paketo-buildpacks/maven:application'
Adding cache layer 'paketo-buildpacks/maven:cache'
Build successful

Test Build Image

Pull the image from Docker Hub:

docker pull itsrajivsrivastava/spring-petclinic

Run the pulled image with Docker:

docker run -p 8080:8080 itsrajivsrivastava/spring-petclinic

Now, test in your browser: http://localhost:8080

How to build a .NET application using TBS

TBS installation, the setup YAML configuration, and the GitHub and DockerHub secrets are the same for .NET as well. The deployment process is also the same as for Java on docker. There is a separate .NET buildpack which will be injected into the ASP.NET source code project.

Now, we will create a TBS .NET image:

$ kp image create dotnetbuildtest \
  --tag index.docker.io/itsrajivsrivastava/dotnetbuildtest:latest \
  --namespace tbs-demo \
  --git https://github.com/rajivmca2004/DotNetBuild \
  --git-revision master

$ kp image status dotnetbuildtest -n tbs-demo

If you face this error in .NET apps:

Unable to start Kestrel.
System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress

Kestrel binds to both IPv4 and IPv6; the K8s environment may not allow it to bind to IPv6. Set the PORT environment variable to 8080 to force it to bind only to IPv4.

Fix: add this to the K8s deployment manifest file:

        - name: PORT
          value: "8080"


FAQ

Q: Where exactly does the build happen, on the local machine or on the K8s cluster?
A: Builds happen in pods on the cluster; basically everything the service does happens in the K8s cluster. You can interact with the service with the kp CLI.

Q: Where does it store all the dependent libraries, on K8s or on the machine where the kp build starts?
A: App dependent libraries live in the stack/buildpacks, which are on the K8s cluster.

Q: Does it keep an image copy locally?
A: It does not keep a copy of the app image; it only uploads it to a registry.

Q: Can it cover CI build-configuration pipelines and deploy with CD tools/plugins?
A: TBS is meant to be a solution that works well in a CI/CD setting. There is currently an integration with Concourse CI via https://github.com/pivotal/concourse-kpack-resource. It can also be integrated with Jenkins with additional configuration.

Q: Can a buildpack be modified, or a new custom one created?
A: Yes, follow this: https://buildpacks.io/docs/operator-guide/create-a-builder/

Build ASP.Net core image and deploy on Kubernetes with Contour ingress controller and MetalLB load balancer

In this blog, I will cover how to create an OCI docker image from a Windows ASP.NET application to a .NET Core container using the open source “pack” CLI, and deploy it on a Kubernetes cluster using the open source Contour ingress controller. I will also set up the MetalLB load balancer on the Kubernetes cluster.


  1. Build a .NET Core OCI docker image of an ASP.NET application using the pack buildpack CLI
  2. Run this docker image on docker for quick verification
  3. Push the docker image to the Harbor image registry
  4. Install and configure the Contour ingress controller
  5. Install the MetalLB load balancer so the Kubernetes LoadBalancer service can expose an external IP to the public
  6. Create a deployment and Service script to deploy the docker image
  7. Create an ingress resource and expose this .NET app on an external IP using the Contour ingress controller


Prerequisites

  • Kubernetes cluster setup. Note: I have used VMware’s Tanzu Kubernetes Grid (TKG)
  • Kubectl CLI
  • Pack buildpack CLI
  • Harbor image registry setup
  • git CLI to download the GitHub source code
  • MacOS/Ubuntu Linux or any shell

1. Build an OCI docker image of an ASP.NET application using the pack buildpack CLI:

Install and configure the “pack” CLI. I have installed it on Ubuntu Linux:

wget https://github.com/buildpacks/pack/releases/download/v0.11.2/pack-v0.11.2-linux.tgz
tar xvf pack-v0.11.2-linux.tgz
mv pack /usr/local/bin
# Browse all suggested builders

$ pack suggest-builders	
Suggested builders:
	Google:                gcr.io/buildpacks/builder                    Ubuntu 18 base image with buildpacks for .NET, Go, Java, Node.js, and Python
	Heroku:                heroku/buildpacks:18                         heroku-18 base image with buildpacks for Ruby, Java, Node.js, Python, Golang, & PHP
	Paketo Buildpacks:     gcr.io/paketo-buildpacks/builder:base        Ubuntu bionic base image with buildpacks for Java, NodeJS and Golang
	Paketo Buildpacks:     gcr.io/paketo-buildpacks/builder:full-cf     cflinuxfs3 base image with buildpacks for Java, .NET, NodeJS, Golang, PHP, HTTPD and NGINX
	Paketo Buildpacks:     gcr.io/paketo-buildpacks/builder:tiny        Tiny base image (bionic build image, distroless run image) with buildpacks for Golang

Tip: Learn more about a specific builder with:
	pack inspect-builder <builder-image>

# Set the full-cf builder, which has support for most languages (Java, .NET, NodeJS, Golang, PHP, HTTPD and NGINX). Syntax: pack set-default-builder <builder-image>
$ pack set-default-builder gcr.io/paketo-buildpacks/builder:full-cf

# Clone the GitHub project
$ git clone https://github.com/rajivmca2004/paketo-samples-demo.git && cd paketo-samples-demo/dotnet-core/aspnet

# Build the docker image and convert it into a .NET Core container
$ pack build dotnet-aspnet-sample

2. Run this docker image on docker for quick verification

# Running docker image for quick verification before deploying to K8s cluster
$ docker run --interactive --tty --env PORT=8080 --publish 8080:8080 dotnet-aspnet-sample

# Viewing
$ curl http://localhost:8080

3. Push the docker image to the Harbor image registry

$ docker login -u admin -p Harbor123 harbor.vmwaredc.com/library

#Push to Harbor image registry
$ docker push harbor.vmwaredc.com/library/dotnet-aspnet-sample

We need an ingress controller to expose Kubernetes services via an external IP. It works as an internal load balancer, exposing K8s services over http/https, including the REST APIs of microservices.

4. Install and configure Contour ingress controller

Refer to this Contour open source installation doc for more information.

# Run this command to download and install Contour open source project

$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

5. Install the MetalLB load balancer so the Kubernetes LoadBalancer service can expose an external IP to the public

Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (vSphere, TKG, GCP, AWS, Azure, OpenStack etc). If you’re not running on a supported IaaS platform, LoadBalancers will remain in the “pending” state indefinitely when created.

Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
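Once MetalLB is installed and configured, a plain Service of type LoadBalancer is assigned an external IP from the configured layer 2 address pool. A hedged sketch (the service name and selector are placeholders):

```yaml
# Hypothetical LoadBalancer Service: MetalLB assigns it an external IP
# from the configured layer 2 address pool.
apiVersion: v1
kind: Service
metadata:
  name: demo-lb-service
spec:
  type: LoadBalancer
  selector:
    app: dotnetcore-demo-app
  ports:
  - port: 80
    targetPort: 8080
```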

Please follow this MetalLB installation doc for the latest version. Check that MetalLB is running:

$ kubectl get pods -n metallb-system

Create layer 2 configuration:

Create a metallb-configmap.yaml file and modify your IP range accordingly.

$vim metallb-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250  # replace with an IP range from your network

# Configure MetalLB
$ kubectl apply -f metallb-configmap.yaml

6. Create a deployment and Service script to deploy the docker image

You can download and refer complete code from GitHub repo.

$ vim dotnetcore-asp-deployment.yml

apiVersion: v1
kind: Service
metadata:
  name: dotnetcore-demo-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dotnetcore-demo-app
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetcore-app-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dotnetcore-demo-app
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: dotnetcore-demo-app
    spec:
      securityContext:
        runAsUser: 0
      containers:
      - name: dotnetcore-demo-app
        image: harbor.vmwaredc.com/library/dotnet-aspnet-sample
        ports:
        - containerPort: 9080
          name: server

Deploy the .NET Core pods:

$ kubectl apply -f dotnetcore-asp-deployment.yml

7. Create an Ingress resource and expose this .NET app on an external IP using the Contour ingress controller

Create an ingress resource:

$ vim dotnetcore-ingress-cluster1.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dotnetcore-demo-cluster1-gateway
  labels:
    app: dotnetcore-demo-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: dotnetcore-demo-service
          servicePort: 80

# Create ingress resource
$ kubectl apply -f dotnetcore-ingress-cluster1.yaml

Get the IP of the .NET Core K8s service to access the application:

$ kubectl get svc
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
dotnetcore-demo-service   LoadBalancer   <cluster-ip>   <external-ip>   80:30452/TCP   5m

To test, open this URL in your browser with the external IP: http://[EXTERNAL-IP]/

Tanzu Kubernetes Grid air-gapped installation (TKG v1.1.2) on vSphere v6.7 – offline environment

Recently, I did a POC for a client with the latest TKG v1.1.2 (last updated July '20) and faced a couple of challenges in an air-gapped environment. Installing and configuring TKG management and worker clusters in an air-gapped (no Internet/offline) environment is a nightmare. You need to plan properly and first download all required Docker images of TKG and the related technology stacks and libraries to your private image registry. I have used the open-source Harbor image registry in this blog.

This blog is not a replacement for the official doc. It's a quick reference that joins all the dots: tips on how to manually download, tag, push and change the images in K8s manifest files, prerequisites, and other quick references to save time and keep everything on a single pager.

I have followed these instructions for deploying TKG in an air-gapped environment (Deploy Tanzu Kubernetes Grid to vSphere in an Air-Gapped Environment); some more steps are required to complete a successful installation. This blog covers TKG v1.1.2 on vSphere 6.7 in an air-gapped environment.

Note: I have used the latest Ubuntu v20.04 LTS on the bootstrap VM, which will have Internet connectivity. You can use CentOS or any other Linux flavour.

I have used the TKG dev plan:

Prerequisites for the bootstrap env (Ubuntu/CentOS Linux) – packages/URLs:

  • DHCP enabled (Mandatory) – DHCP installation on Ubuntu: https://www.tecmint.com/install-dhcp-server-in-ubuntu-debian/
  • DNS enabled (Mandatory) – public or private DNS must be enabled on the subnet IP range
  • Ubuntu OS core server install – latest version 20.04 LTS / 18.04 LTS: https://ubuntu.com/download/alternative-downloads
  • Homebrew, if not available (Optional) – good for installing CLIs and other required libraries, but not advisable for air-gapped installation on K8s clusters:
    Linux/macOS – https://docs.brew.sh/Homebrew-on-Linux
    Ubuntu – https://brew.sh/
    Ubuntu – https://medium.com/@smartsplash/using-homebrew-on-ubuntu-1089f70c8aa7
  • TKG CLI (Mandatory) – https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.1/vmware-tanzu-kubernetes-grid-11/GUID-install-tkg-set-up-tkg.html

Ref docs:

kubectl (Mandatory – for K8s):
$ brew install kubectl
$ kubectl version

Docker Desktop and CLI installation and setup (Ubuntu):

Mandatory: on Ubuntu, docker.io is available from the Ubuntu repositories (as of Xenial).
# Install Docker
sudo apt install docker.io
sudo apt install docker-compose

# Start/stop
sudo systemctl start docker
sudo systemctl stop docker

sudo docker ps -a
sudo docker rm -f <PID>
docker info
Harbor (Mandatory) – https://goharbor.io/docs/1.10/install-config/ – you need to set up DNS servers for Harbor to resolve the domain name.
Follow the TKG v1.1.2 installation steps for an air-gapped env on vSphere v6.7: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.1/vmware-tanzu-kubernetes-grid-11/GUID-install-tkg-vsphere.html
Download the TKG binaries (Linux): https://my.vmware.com/web/vmware/details?downloadGroup=TKG-110&productId=988&rPId=46507

  • VMware Tanzu Kubernetes Grid 1.1.0 Kubernetes v1.18.2 OVA – Photon v3 Kubernetes 1.18.2 OVA
  • VMware Tanzu Kubernetes Grid 1.1 CLI – VMware Tanzu Kubernetes Grid CLI 1.1 Linux
  • VMware Tanzu Kubernetes Grid 1.1 Load Balancer OVA – Photon v3 capv haproxy v1.2.4 OVA
  • clusterawsadm Account Preparation Tool v0.5.3 – ClusterAdmin AWS v0.5.3 Linux
  • VMware Tanzu Kubernetes Grid 1.1 Extension manifests – VMware Tanzu Kubernetes Grid Extensions Manifest 1.1
  • Crash Diagnostics v0.2.2 – Crash Recovery and Diagnostics for Kubernetes 0.2.2 Linux

Step 1: Set up all prerequisites and install Ubuntu OS on the bootstrap VM.

Step 2: Download all binaries with your VMware credentials and push/copy all compressed tar files to the bootstrap VM.

Step 3: Make sure the Internet is available on the bootstrap VM from where you will initiate the installation of TKG and other binaries.

Step 4: Install Docker Desktop and CLI. Make sure that the internet-connected machine has Docker installed and running.

Step 5: Install Harbor and create a certificate using OpenSSL with an https config. Also, add the Harbor certificate paths in the Harbor config file .harbor/harbor.yml:

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /root/harbor/data/cert/harbor.vmwaredc.com.crt
  private_key: /root/harbor/data/cert/harbor.vmwaredc.com.key

# Edit /etc/hosts and add a hostname entry for harbor.vmwaredc.com

$ systemd-resolve --status

UI: https://harbor.vmwaredc.com
Verify: Make sure that you can connect to the private registry from the internet-connected machine.

$ docker login -u admin -p <password> harbor.vmwaredc.com
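If the docker login above fails with an x509 error against Harbor's self-signed certificate, the Docker daemon on the bootstrap VM must first trust the CA. A minimal sketch (my own helper, not a Harbor or Docker tool; it relies on Docker's documented convention of reading /etc/docker/certs.d/<registry-host>/ca.crt, and the default paths mirror the cert paths used above):

```shell
#!/usr/bin/env bash
# Install a private registry's self-signed CA where the Docker daemon looks for it.
set -euo pipefail

trust_registry_ca() {
  local registry_host="$1"   # e.g. harbor.vmwaredc.com
  local ca_cert="$2"         # e.g. /root/harbor/data/cert/harbor.vmwaredc.com.crt
  local certs_dir="${CERTS_DIR:-/etc/docker/certs.d}"
  mkdir -p "${certs_dir}/${registry_host}"
  cp "${ca_cert}" "${certs_dir}/${registry_host}/ca.crt"
  echo "CA installed: ${certs_dir}/${registry_host}/ca.crt"
}

# Usage (on the bootstrap VM, as root), then retry docker login:
#   trust_registry_ca harbor.vmwaredc.com /root/harbor/data/cert/harbor.vmwaredc.com.crt
#   systemctl restart docker
```

No daemon flags are needed with this approach; Docker picks the CA up per-registry from that directory.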

Step 6: Install the kubectl CLI.

Step 7: Install the tkg CLI on the same bootstrap VM with an external internet connection. Follow the instructions in Download and Install the Tanzu Kubernetes Grid CLI to download, unpack, and install the Tanzu Kubernetes Grid CLI binary on your internet-connected system.

Step 8: Follow all the steps as mentioned in the installation doc. Open the vSphere UI console and provide all vCenter v6.7 server details, vLAN, resource configuration, etc. It will create the configuration file config.yaml in the .tkg folder, which holds the main TKG installation configuration.

Note: the vCenter server should be an IP or FQDN in lowercase letters only.

Step 9: Upload the TKG and HAProxy OVAs via the vSphere UI console.

Step 10: Add this export before initiating the TKG installation:

$ export TKG_CUSTOM_IMAGE_REPOSITORY="harbor.vmwaredc.com/library"

Step 11: Download all required Docker images for the TKG installation, push them to Harbor, and follow these steps.

Note: The TKG repo pulls all Docker images from the public image registry https://registry.tkg.vmware.run/v2/.

  • On the bootstrap machine with an internet connection, on which you have performed the initial setup tasks and installed the Tanzu Kubernetes Grid CLI, install yq 2.x. NOTE: You must use yq version 2.x. Version 3.x does not work with this script.
  • Run the $ tkg get management-cluster command. Running a tkg command for the first time installs the necessary Tanzu Kubernetes Grid configuration files in the ~/.tkg folder on your system. The script that you create and run in subsequent steps requires the files in the ~/.tkg/bom folder to be present on your machine. Note: TKG v1.1.2 picks the bom/bom-1.1.2+vmware.1.yaml image file.
  • Set the IP address or FQDN of your local registry as an environment variable. In the following command example, replace custom-image-repository.io with the address of your private Docker registry.
  • Copy and paste the following script in a text editor, and save it as gen-publish-images.sh:
#!/usr/bin/env bash
# Copyright 2020 The TKG Contributors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# See the License for the specific language governing permissions and
# limitations under the License.

if [ -z "$TKG_CUSTOM_IMAGE_REPOSITORY" ]; then
    echo "TKG_CUSTOM_IMAGE_REPOSITORY variable is not defined"
    exit 1
fi

BOM_DIR=~/.tkg/bom

for TKG_BOM_FILE in "$BOM_DIR"/*.yaml; do
    # Get actual image repository from BoM file
    actualImageRepository=$(yq .imageConfig.imageRepository "$TKG_BOM_FILE" | tr -d '"')
    # Iterate through BoM file to create the complete Image name
    # and then pull, retag and push image to custom registry
    yq .images "$TKG_BOM_FILE" | jq -c '.[]' | while read -r i; do
        # Get imagePath and imageTag
        imagePath=$(jq .imagePath <<<"$i" | tr -d '"')
        imageTag=$(jq .tag <<<"$i" | tr -d '"')
        # Create complete image names and emit pull/retag/push commands
        actualImage=$actualImageRepository/$imagePath:$imageTag
        customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/$imagePath:$imageTag
        echo "docker pull $actualImage"
        echo "docker tag $actualImage $customImage"
        echo "docker push $customImage"
        echo ""
    done
done
  • Make the script executable: chmod +x gen-publish-images.sh
  • Generate a new version of the script that is populated with the address of your private Docker registry: ./gen-publish-images.sh > publish-images.sh
  • Verify that the generated version of the script contains the correct registry address: cat publish-images.sh
  • Make the script executable: chmod +x publish-images.sh
  • Log in to your local private registry: docker login ${TKG_CUSTOM_IMAGE_REPOSITORY}
  • Run the script to pull the required images from the public Tanzu Kubernetes Grid registry, retag them, and push them to your private registry: ./publish-images.sh
  • When the script finishes, you can (optionally) turn off your internet connection; after that the Internet is not required for TKG.
  • Modify the TKG dev installation plan. Run the following commands from the home directory, one level up (outside of the .tkg folder):
$ export REGISTRY="harbor.vmwaredc.com"
$ export NAMESERVER=""
$ export DOMAIN="vmwaredc.com"
$ cat > /tmp/harbor.sh <<EOF
echo "nameserver $NAMESERVER" > /usr/lib/systemd/resolv.conf
echo "domain $DOMAIN" >> /usr/lib/systemd/resolv.conf
rm /etc/resolv.conf
ln -s /usr/lib/systemd/resolv.conf /etc/resolv.conf
mkdir -p /etc/containerd
echo "" > /etc/containerd/config.toml
sed -i '1 i\# Use config version 2 to enable new configuration fields.' /etc/containerd/config.toml
sed -i '2 i\# Config file is parsed as version 1 by default.' /etc/containerd/config.toml
sed -i '3 i\version = 2' /etc/containerd/config.toml
sed -i '4 i\ ' /etc/containerd/config.toml
sed -i '5 i\[plugins]' /etc/containerd/config.toml
sed -i '6 i\  [plugins."io.containerd.grpc.v1.cri"]' /etc/containerd/config.toml
sed -i '7 i\    sandbox_image = "registry.tkg.vmware.run/pause:3.2"' /etc/containerd/config.toml
sed -i '8 i\    [plugins."io.containerd.grpc.v1.cri".registry]' /etc/containerd/config.toml
sed -i '9 i\      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]' /etc/containerd/config.toml
sed -i '10 i\        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."$REGISTRY"]' /etc/containerd/config.toml
sed -i '11 i\          endpoint = ["https://$REGISTRY"]' /etc/containerd/config.toml
sed -i '12 i\      [plugins."io.containerd.grpc.v1.cri".registry.configs]' /etc/containerd/config.toml
sed -i '13 i\        [plugins."io.containerd.grpc.v1.cri".registry.configs."$REGISTRY"]' /etc/containerd/config.toml
sed -i '14 i\          [plugins."io.containerd.grpc.v1.cri".registry.configs."$REGISTRY".tls]' /etc/containerd/config.toml
sed -i '15 i\            insecure_skip_verify = true' /etc/containerd/config.toml
systemctl restart containerd
EOF
$ awk '{print "    -", $0}' /tmp/harbor.sh > /tmp/harbor1.yaml
$ awk '{print "      -", $0}' /tmp/harbor.sh > /tmp/harbor2.yaml
$ sed -i '197 e cat /tmp/harbor1.yaml\n' ~/.tkg/providers/infrastructure-vsphere/v0.6.5/cluster-template-dev.yaml
$ sed -i '249 e cat /tmp/harbor2.yaml\n' ~/.tkg/providers/infrastructure-vsphere/v0.6.5/cluster-template-dev.yaml
$ rm /tmp/harbor1.yaml /tmp/harbor2.yaml /tmp/harbor.sh
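The awk/sed steps above effectively splice the harbor.sh commands into the vSphere cluster template as extra preKubeadmCommands entries (a Cluster API/kubeadm bootstrap field), so every new node fixes its DNS and containerd registry settings before kubeadm runs. Schematically, the patched template gains something like the following; the exact indentation and surrounding fields depend on the template and are shown here as an assumption:

```yaml
preKubeadmCommands:
  - echo "nameserver $NAMESERVER" > /usr/lib/systemd/resolv.conf
  - echo "domain $DOMAIN" >> /usr/lib/systemd/resolv.conf
  - rm /etc/resolv.conf
  - ln -s /usr/lib/systemd/resolv.conf /etc/resolv.conf
  # ...the containerd registry-mirror lines from harbor.sh follow here...
  - systemctl restart containerd
```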

Step 12: Run this in the terminal to initiate the installation process; it will create the .tkg folder and the required config file. In v1.1.2 the bom folder has all the image repositories.

$ sudo tkg init --ui -v 6

Step 13: As soon as the kind container is up, exec into the KIND cluster and run the below script.

Identify the KIND docker container:

$ docker ps -a
$ docker exec -it <KIND docker container id> /bin/sh
echo '# explicitly use v2 config format
version = 2
# set default runtime handler to v2, which has a per-pod shim
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.tkg.vmware.run/pause:3.2"
  [plugins."io.containerd.grpc.v1.cri".containerd]
    default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.vmwaredc.com"]
    endpoint = ["https://harbor.vmwaredc.com"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.vmwaredc.com".tls]
    insecure_skip_verify = true' > /etc/containerd/config.toml

Step 14: At this step, the management cluster is created. Now you can create workload clusters as per the installation instructions.

Step 15: To visualize, monitor and inspect TKG Kubernetes clusters, install the Octant UI dashboard. Running octant should immediately launch your default web browser:

$ octant

Note: Octant can also be run on a specific host and fixed port.



Important Trick:

Pull and push docker images in Air gapped environment

Now your K8s cluster is ready, and next you would like to install a K8s deployment or other K8s images that pull dependent images from the public Internet. A Kubernetes cluster running in an air-gapped environment can't download any image from a public repository (Docker Hub, docker.io, gcr, etc.).

Refer to my short blog on how to do this operation: Pull and push docker images in Air gapped (No Internet) environment

VMware Tanzu Offerings Cheat Sheet: References, technical docs and demo videos

This blog covers all the important and useful quick references for the VMware Tanzu offerings. It's a cheat sheet and a single pager for all the Tanzu enterprise product offerings, technical docs, demo videos and white papers. Hope this one pager will be handy for developers, architects, business owners, operators and organisations:

Tanzu Bundles

Tanzu Basic
Official doc: https://tanzu.vmware.com/tanzu/basic
Video and demo: https://www.youtube.com/watch?v=KlsprTBsGTE

Tanzu Standard
Official doc: https://tanzu.vmware.com/tanzu/standard
Video and demo: https://www.youtube.com/watch?v=78rTGiotTv4

Tanzu Advanced
Official doc: https://tanzu.vmware.com/tanzu/advanced
Video and demo: https://tanzu.vmware.com/tanzu/advanced

Tanzu Resources:

Tanzu Kubernetes Grid
This is VMware's enterprise-ready upstream Kubernetes distribution, available in different form factors based on end user/customer requirements.

TKG provides enterprises with consistent, upstream-aligned, automated multi-cluster operations across SDDC, public cloud, and edge environments, ready for end-user workloads and ecosystem integrations. TKG does for Kubernetes what Kubernetes does for your containers.

* vSphere 7 with native TKG: embedded in vSphere 7.0 – a fully managed K8s experience on top of vSphere. The solution unlocks on-prem vSphere deployments to run Kubernetes natively. New features in NSX, vCenter and ESXi elevate VMs, K8s pods and K8s clusters to first-class citizens in vSphere, enabling vSphere admins to manage and delegate these new computing constructs seamlessly to DevOps teams. This solution also provides all the benefits of the underpinning TKG technology.

* TKG+ – build your own K8s platform with VMware support, based on Cluster API and kubeadm. It provides a true open-source K8s experience with support for several open-source tools (Harbor registry, Contour, Sonobuoy, Dex, EFK, Velero, Prometheus, Grafana etc.)

* TKGI (Tanzu Kubernetes Grid Integrated) – fully managed K8s as a service on any private/public cloud. A great opinionated choice for day-2 operations, because its operation is fully automated.

* TKG as a service on TMC : TKG managed services on TMC (Tanzu Mission Control)

Overview Videos:
Video 1
Video 2

Demo: Tanzu Kubernetes Grid

Tools :
Crash-Diagnostics (CrashD) – public git page here – monitors bootstrap cluster setup, management cluster setup and workload cluster setup.

Tanzu Kubernetes Grid Integrated (TKGI)
This is VMware's enterprise-ready upstream Kubernetes distribution with BOSH Director. TKGI provides the ability for organizations to rapidly deploy fleets of Kubernetes clusters in a secure and consistent manner across clouds with minimal effort. It also simplifies the ability to rapidly repave and patch your fleets of Kubernetes clusters. It provides teams access to both Linux and Windows containers.


VMware Spring Runtime (VSR)
It's a standalone product offering under Tanzu covering production and development support for OpenJDK, 40+ Spring projects (including the ones used in IDP) and Tomcat.

Get support and signed binaries for OpenJDK, Tomcat, and Spring. Globe-spanning support is available 24x7, and your organization gets access to the product team and knowledge base. Avoid maintaining expensive custom code. Get VMware's tc Server, a hardened, curated, and enterprise-ready Tomcat installation.

– Doc:

– Spring technical doc-

– Create quick Spring project :

Tanzu Application Service (VM – Diego container)
A fully automated PaaS (Platform as a Service) that increases productivity by automating all cloud-related configuration and deployment with just a single command and only the source code of the application. It's based on the Diego container runtime.

TAS fully automates the deployment and management of applications on any cloud. This makes your operations team more efficient, improves developer productivity, and enhances your security posture.  This enables your organization to achieve the business outcomes they desire by reducing time to market.

Product page on tanzu.vmware.com
Tanzu Build Service
TBS is a tool to build OCI container images and manage the container life cycle irrespective of the deployment platform. Based on the CNCF project Buildpacks.io, TBS takes away the pain of maintaining Dockerfiles and brings standardisation to your Docker image build process.

TBS customers close vulnerabilities orders of magnitude faster, their developers spend nearly no time on image builds, and they can easily and programmatically audit production containers. TBS eliminates 95% of the toil of the container lifecycle and allows platform teams to offer automated "code to cloud" style functionality to their developers.

– Doc:

– Overview and Demo:

– Blog:
Tanzu Application Catalog (TAC)
TAC is a curated collection of production-ready, popular open-source software that can be used by IDP users. Software support is still based on what's available with the open-source version, but VMware provides 'proof of provenance' as well as enterprise-grade testing on these images. It also allows customers to bring their own golden image, while Bitnami (VMware) builds the image for your developers.

Working with pre-packaged software poses risks and challenges. Developers are sourcing containers from Docker Hub that are out of date, vulnerable, insecure by default, or broken. Auditing, hardening, integrating, and making software ready for production is time consuming, difficult, and low value add from an organizational standpoint. It’s also frustrating to dev teams as software selection will be limited and lag behind open source options.

– Doc (on-boarding) :

– Demo: 

– FAQ: 
Tanzu Service Mesh (TSM)
Tanzu Service Mesh not only simplifies lifecycle management of a service mesh over fleets of K8s clusters; it provides unified management, global policies, and seamless connectivity across complex, multi-cluster mesh topologies managed by disparate teams. It provides app-level observability across services deployed to different clusters, complementing and integrating into the modern observability tools you use or are considering.

– Doc: 

– Demo  for Microservices – https://www.youtube.com/watch?v=EquVhIkS1oc

– Public doc:

– Blog:
Tanzu Service Mesh on VMware Tanzu: CONNECT & PROTECT Applications Across Your Kubernetes Clusters and Clouds 
Tanzu Mission Control
VMware Tanzu Mission Control provides a single control point for teams to more easily manage Kubernetes and operate modern, containerized applications across multiple clouds and clusters. It codifies the know-how of operating Kubernetes including deploying and upgrading clusters, setting policies and configurations, understanding the health of clusters and the root cause of underlying issues.

– Doc:

A Closer Look at Tanzu Mission Control :
A Multi-Cluster K8s Management Platform video

Data Protection on Tanzu Mission Control :

TMC Demo:
Tanzu Observability by Wavefront (TO)
The VMware Tanzu Observability by Wavefront platform is purpose-built to handle the requirements of modern applications and multi-cloud at high scale. It's a unified solution with analytics (including AI) that ingests, visualizes, and analyzes metrics, traces, histograms and span logs, so you can resolve incidents faster across cloud applications, correlated with views of the cloud infrastructure.

Doc, videos and integrations

Application monitoring/End User Monitoring ( EUM ) integration: https://blog.catchpoint.com/2020/06/17/accelerate-observability-with-catchpoint-and-wavefront/


– Demo2:
Microservices Observability with WaveFront

SpringBoot Integration: https://docs.wavefront.com/wavefront_springboot.html
Tanzu Data Services – Greenplum, GemFire, RabbitMQ, SQL/PostgreSQL
VMware also has SQL, NoSQL, messaging broker, analytical and distributed caching solutions:

Greenplum – analytical database based on PostgreSQL
GemFire – distributed caching
RabbitMQ – messaging broker
SQL/PostgreSQL – SQL databases

Concourse CI/CD
It's an open-source tool for platform automation.

The Making of a Cloud-Native CI/CD Tool:
The Concourse Journey (Blog)
Concourse on tanzu.vmware.com
Concourse OSS Site
Concourse Documentation
Hands-on Lab (HOL) trial access
Trial hands-on labs without installation: https://www.vmware.com/in/try-vmware/try-hands-on-labs.html

Windows .NET support:

Microsoft's developer framework with tools and libraries for building any type of app, including web, mobile, desktop, gaming, IoT, cloud, and microservices. Key resources:

Pull and push docker images in offline air gapped (No Internet) environment

When you want to install a K8s deployment or any other K8s images that pull dependent images from the public Internet, remember that a Kubernetes cluster running in an air-gapped environment can't download any image from a public repository (Docker Hub, docker.io, gcr, etc.). You need to pull the image first on the bootstrap VM, where public internet connectivity is available, then tag it and push it to your local Harbor. Your K8s cluster will pick images from the local Harbor only. Whenever you have to install any K8s deployable, you need to manually change the deployment manifest and replace the image path from the public repo to the local repo (Harbor/jFrog etc.).

# Pull from public image registry
docker pull metallb/speaker:v0.9.3

# Tag it with your Harbor host
docker tag metallb/speaker:v0.9.3 $HARBOR_HOST/library/metallb/speaker:v0.9.3

#Push to local image registry harbor/jFrog
docker push $HARBOR_HOST/library/metallb/speaker:v0.9.3

#Change image name in your K8s deployment manifest. You are all set!
$ vi metallb-manifest.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetcore-app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: dotnetcore-demo-app
  replicas: 3 # tells deployment to run N pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: dotnetcore-demo-app
    spec:
      securityContext:
        runAsUser: 0
      containers:
      - name: dotnetcore-demo-app
        image: harbor.vmwaredc.com/library/dotnet-aspnet-sample
        ports:
        - containerPort: 9080
          name: server

$ kubectl apply -f metallb-manifest.yml
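The manual pull/tag/push steps above can be scripted once you have more than a couple of images to mirror. A minimal sketch in the same spirit as TKG's gen-publish-images.sh: it only prints the docker commands (pipe the output to sh to execute them), and the HARBOR_HOST default and image list are examples to replace with your own:

```shell
#!/usr/bin/env bash
# Print docker pull/tag/push commands to mirror public images into local Harbor.
set -euo pipefail

HARBOR_HOST="${HARBOR_HOST:-harbor.vmwaredc.com}"
IMAGES=(
  "metallb/speaker:v0.9.3"
  "metallb/controller:v0.9.3"
)

for img in "${IMAGES[@]}"; do
  target="${HARBOR_HOST}/library/${img}"
  echo "docker pull ${img}"
  echo "docker tag ${img} ${target}"
  echo "docker push ${target}"
done
```

Review the printed commands (e.g. `./mirror-images.sh | less`), then run `./mirror-images.sh | sh` on the bootstrap VM that has both Internet and Harbor access.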

Note: Helm package installs really won't work in an air-gapped env, because they try to pull images from the public Internet. You need to use the manifest yml files only, because you have to change the image registry server path before running them on the K8s cluster.

Secure app and infrastructure monitoring data with WaveFront SAAS: Secure by Design

In this blog, I will cover the WaveFront APM community and enterprise editions. It's a SAAS-based cloud service. I will explain the security aspects in detail for transmitting monitoring data from an organization's on-prem, private and public clouds. WaveFront doesn't send application logs and user data to the SAAS cloud. You can add a WaveFront proxy to mask and filter data based on the organization's security policy.

To learn the fundamentals and other information about WaveFront and its technical architecture, please read my other blog.


Security with WaveFront SAAS

Wavefront is secure by design. It only uses this monitoring data from an organization's on-prem data centers/cloud availability zones (AZs):

  • Metrics
  • Traces & spans
  • Histograms

There are multiple ways to protect the privacy of data on the SAAS cloud when data is transmitted from applications and infrastructure servers to the cloud. It's safe to use.

Secure your data with WaveFront Proxy

Wavefront provides these features to secure your data when monitoring your apps/Infra:

Note: It also works in an air-gapped environment (offline, with no Internet connectivity). You need to set up a separate VM with a public Internet connection, which will run a WaveFront proxy. WaveFront agents will push all stats from Kubernetes and VM clusters to this proxy, and the telemetry data will be transmitted from this VM/BM machine to the WaveFront SAAS cloud.

Secure By Design

  • WaveFront doesn't read or transmit application, user or database logs.
  • All local metrics data is stored at the WaveFront proxy with local persistence/databases.
  • Intrusion detection & response.
  • Securely stores username/password information.
  • Does NOT collect information about individual users.
  • Does NOT install agents that collect user information; NONE of the built-in integrations collect user information.
  • Currently uses AWS to run the Wavefront service and to store customer application data. The AWS data centres incorporate physical protection against environmental risks.
  • The service is served from a single AWS region spread across multiple availability zones for failover.
  • All incoming and outgoing traffic is encrypted.
  • Wavefront customer environments are isolated from each other.
  • Data is stored on encrypted data volumes.
  • Wavefront development, QA, and production use separate equipment and environments and are managed by separate teams.
  • Customers retain control and ownership of their content. Wavefront doesn't replicate customer content unless the customer asks for it explicitly.

User and role based Security – Authentication and Authorization

  • User & service account authentication (SSO, LDAP, SAML, MFA). For SSO, it supports Okta, Google ID and AzureAD. Users must be authenticated using login credentials, and API calls are also authenticated through a secure auto-expiring token.
  • Authentication using secret token & authorization (RBAC, ACL)
  • It supports user role and service account also
  • Roles & groups access management
  • Users in different teams inside the company can authenticate to different tenants and cannot access the other tenant’s data.
  • Wavefront supports multi-level authorization:
    • Roles and permissions
    • Access control
  • Wavefront supports a high security mode where only the object creator and Super Admin user can view and modify new dashboards.
  • If you use the REST API, you must pass in an API token and must also have the necessary permissions to perform the task, for example, Dashboard permissions to modify dashboards.
  • If you use direct ingestion, you are required to pass in an API token and must also have the Direct Data Ingestion permission.

How it protects user data

  • Masks the monitoring data with different names to maintain privacy.
  • The WaveFront agent runs on VMs, captures the data and sends it to the WaveFront proxy first, where filtering/masking logic can be applied; the filtered/masked data is then transmitted to the WaveFront SAAS cloud for analytics and dashboards.
  • It also provides a separate private cloud/separate physical VM boxes to store a customer's data securely.
  • It isolates a customer's data on the SAAS cloud and never exposes it to other customers.
  • Data can be filtered before sending to the WaveFront SAAS server.
  • Secure transit over the Internet with HTTPS/SSL.
  • Data is stored on encrypted data volumes.
  • Protects all data traffic with TLS (Transport Layer Security) and HTTPS.
  • Perform a manual install and place the Wavefront proxy behind an HTTP proxy.
  • Use proxy configuration properties to set ports, connect times, and more.
  • Use a whitelist regex or blacklist regex to control traffic to the Wavefront proxy.
  • Data mirroring – application data is duplicated across two availability zones (AZs) in a single AWS region.
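As an illustration of the whitelist/blacklist regex point, proxy-side filtering is driven by regex properties in the proxy's wavefront.conf. The property names below are from older proxy releases and the patterns are my own examples (newer proxy versions rename these to allow/block lists, so check your proxy's documentation):

```
# wavefront.conf (excerpt, property names and patterns shown as an assumption)
whitelistRegex=^(prod|staging)\..*
blacklistRegex=^.*\.internal\..*
```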



  1. Rishi Sharda – https://www.linkedin.com/in/rsharda/
  2. Anil Gupta – https://www.linkedin.com/in/legraswindow/

My CKAD Certification experience: Tips and tricks

I passed the CKAD (Certified Kubernetes Application Developer) exam in June 2020! It was really a race against time: a fast-paced, online coding exam!
In this blog, I will share my exam experience; hope it will be helpful for you.

It's an open-book exam; however, you can only browse the official Kubernetes website and its related blogs. You can't use Google search to find the answers. You can copy and paste YAML-based code from the official K8s portal only.

Exam Preparation

I started my preparation with Mumshad Mannambeth's CKAD Udemy online course. It's an awesome online tutorial with a real lab environment, where you are given a ready-made Kubernetes cluster lab on the KodeKloud platform. It's superb! Here, you can write code in K8s yaml files and run it instantly without any K8s cluster configuration. It also creates K8s resources for you and validates your answers.

Other Resources:

It would be great to set up a Minikube or kind (Kubernetes in Docker) K8s cluster on your laptop/computer to practice more questions. I have used these resources to practice for the exam:

D-Day (Exam-Day):

It's a tough exam where you have to read, understand, code and verify in about 6 minutes per question. There are 19 questions and the passing mark is 67%. It's very difficult to complete all the questions in 2 hours. These are the tips I followed for time management.

It's a 2-hour online coding exam where a human proctor keeps an eye on you for the whole duration, watching you through your web camera and your monitor via screen sharing!

Tips to save time and attempt most of the questions:

  • Practice, practice, practice so that your finger remembers popular commands and syntax!
  • Don't try to create YAML files manually during the exam! Use kubectl imperative commands to generate yaml. Export the yaml file with --dry-run -o yaml > pod.yaml and then edit this YAML file in the vi editor. Once you are confident, save it.
  • They provide vim/nano. I prefer vi, it’s simple.
  • Always verify K8s objects status and logs after K8s object creating command. It will give you confidence!
  • Hit easy questions with high weight first (>10%) and come back later for questions with weight 2–3%, or for those which are big and complex and you are not sure about.
  • Use the exam console's notepad to track all your questions. Write all questions in new rows with their weightage, and mark the ones you have completed. You can also click a button to flag questions you want to return to later. You can also shuffle easy, high-score questions to the top and low-mark, complex questions to the bottom, e.g. here d means "done":
    • 1-13-d
    • 2-9-d
    • 3-7-p
  • Save important bookmarks and use the bookmark toolbar to save time searching for links. You can download the bookmarks I used from GitHub.
  • Create aliases before starting the exam and save them in the notepad provided on the exam console. You can’t use your personal notepad or copy-paste anything from your local computer into the exam console’s browser window. I used just these aliases:
$ alias k=kubectl
$ alias kx="kubectl config current-context"
$ alias kn="kubectl config use-context"
$ alias kall="kubectl get all"
$ alias kc="kubectl create -f"
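Putting the dry-run tip and the verification tip together, a typical question flow looks something like this. A sketch only; the pod name `nginx-pod` and image are placeholders, and the commands assume the exam cluster context is already set:

```shell
# Generate a pod manifest without creating anything on the cluster
kubectl run nginx-pod --image=nginx --dry-run=client -o yaml > pod.yaml

# Tweak the YAML in vi if the question asks for extra fields, then apply it
vi pod.yaml
kubectl apply -f pod.yaml

# Always verify status and logs before moving to the next question
kubectl get pod nginx-pod
kubectl describe pod nginx-pod
kubectl logs nginx-pod
```

With the aliases above, the same flow shortens to `k run … --dry-run=client -o yaml > pod.yaml` and `k get pod nginx-pod`, which adds up to real time savings over 19 questions.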

All the best!

About Founder – Rajiv Srivastava

Rajiv Srivastava is the founder of https://cloudificationzone.com/, a cloud-native modern application blog for developers, architects, and enthusiasts interested in the end-to-end design and development of cloud-based modern applications using a build, run, monitor, secure, and manage approach with modern technologies.

He works as a Cloud Native Solution Architect at VMware, a leading product development company, and is a blogger, author, passionate technologist, and Java/Spring/Kubernetes developer and architect.

He has 15+ years of experience in development and solution architecture design. His expertise includes modern applications, cloud migration, the Kubernetes platform, event-sourcing architecture, NodeJS, Tanzu, cloud, Docker, API gateways, service mesh, CI/CD, containerization, GCP, AWS, open source, distributed and serverless systems, microservices, REST APIs, Spring, caching, Kafka, RabbitMQ, SQL/NoSQL, MongoDB, Elasticsearch, enterprise integration, unit/integration and performance testing, code profiling, etc.

Location: He is based in Gurgaon (New Delhi NCR), India.


Certifications:

  1. He is a Sun Certified Java Professional (SCJP).
  2. Certified Kubernetes Application Developer (CKAD).
  3. Other certifications: Elasticsearch, Spring Cloud Data Flow, ITIL, Six Sigma White Belt, SNIA, etc.

Work Experience:

He has worked with these companies over the past 15+ years:

  • VMware (Current Company)
  • GlobalLogic
  • Wipro
  • Infogain
  • COLT
  • Sapient
  • Dell EMC

He has worked with these clients:

  • Kohls, USA
  • Apple, USA
  • Fedex, USA
  • AT&T, USA
  • Sprint Telecom, USA
  • Commercial clients etc.


Education:

  • MCA (Master of Computer Applications) from Bangalore University, India, in 2004
  • BCA (Bachelor of Computer Applications)
  • DST (Diploma in Software Technology) from CMC Ltd.

Skill set and hands-on technical expertise:

Experience in cloud migration, app modernization, Core Java, JEE, Spring Boot (Cloud, DI/IoC, MVC, AOP, Integration, REST web services, Security), Kubernetes, Redis, Kafka, RabbitMQ, ArgoCD, Docker, Harbor, Concourse, Grafana, Prometheus, JProfiler, Cucumber, Hibernate, JMS, Tomcat, Elasticsearch, MongoDB, MySQL, ELK, EFK, Splunk.

Domain Expertise:

  • E-Commerce/Retail applications.
  • Order management (Telecom)
  • Search Engine
  • MDM
  • Storage domains
  • Technical: Cloud, No-SQL and Storage.

His technical publications:


  1. https://www.hcltech.com/white-papers/digital-analytics/accelerating-application-transformation
  2. https://www.globallogic.com/gl_news/microservices-test-automation-bdd-with-cucumber-jvm

Work Profile:

# A passionate technologist, cloud-native application solution architect, API/microservices developer, and blogger, with a rich development background in Java, Python, NodeJS, Spring Boot, REST APIs, caching, messaging, NoSQL, microservices, event-sourcing architecture, integration architecture, unit and integration testing, GCP, AWS, Tanzu products, cloud architecture, and open-source technologies.
# Founder of cloudificationzone.com
# 15+ years overall of developing, designing, and implementing distributed, cloud-native, microservices-based, client-server, and web-based enterprise applications using Java, microservices, and open-source technologies for B2B and B2C enterprise-grade production applications.
# 5+ years of cloud and microservices experience on AWS, GCP, OpenShift (PaaS), and Cloud Foundry (PaaS).
# 6+ years of deep eCommerce experience with the USA’s top 5 companies, including Apple and Kohl’s, with exposure to online shopping, cart checkout, catalog, retail orders, inventory, MDM, product launches, mobile commerce, omnichannel, wallet and promotions, fraud detection, security issues of eCommerce apps, etc.
# Excels at building quick Proofs of Concept (POC) and Proofs of Technology (POT).
# Cloud migration of Java enterprise applications to Kubernetes on-prem and on private and public clouds (AWS, GCP) using a microservices architecture and CI/CD pipelines; designs new cloud applications.
# Design patterns and methodologies: good understanding of microservices architecture, GoF patterns, Core Java/J2EE patterns, OOP, and MVC; Agile, SDLC, and TDD in multi-project implementations.