Notification System Design

Objective:

Design enterprise level system architecture to support email, SMS, Chat and other public social app integrations using API:

  • Email
  • SMS/OTP
  • Push notifications (Mobile and Web browser)
  • Chat – Whatsapp/Telegram

Notifications are a generic feature of all kinds of web and mobile applications, and they are required by all modern distributed applications regardless of the programming languages and technologies used. You can customize this design based on your business use cases.

I have tried to simplify this design concept to fulfil common use case requirements with high availability, high performance, and analytical services. Notifications are a very important medium of communication with customers/users through their desktop/mobile devices. I would recommend implementing this using a microservice architecture and deploying it on Kubernetes containers to make it a fully cloud-native modern system. Let’s get started!

Functional Requirement:

  • Send notifications
  • Prioritize notifications
  • Send notifications based on customer’s saved preferences
  • Single/simple and bulk notification messages
  • Analytics use cases for various notifications
  • Reporting of notification messages

Non-functional requirements (NFR):

  • High performance
  • Highly available (HA)
  • Low latency
  • Extendable/Pluggable design to add more clients, adapters and vendors.
  • Support Android/iOS mobile and desktop/laptop web browsers.
  • API integration with all notification modules and external integrations with clients and service providers/vendors.
  • Scalable for higher load on-prem (VMware Tanzu) and on public cloud services like AWS, GCP, or Azure etc.

System Design Architecture:


These are the solution design considerations and components:

1. Notification clients:

These clients request single and bulk messages using API calls and send notification messages to the simple and bulk notification services:

  • Bulk Notification clients: These clients send bulk notification(s).
  • Simple Notification clients: These clients send single notification(s).

2. Notification Services:

These are the entry services, which expose REST APIs to clients and interact with them. They are responsible for building notification messages by consuming the Template Service. These messages are also validated using the Validation Service.

  • Simple Notification Service: This service will expose APIs to integrate clients with backend services. It’s a main service, which will handle simple notification requests.
  • Bulk Notification Service: This service will expose APIs to integrate clients with backend services. It’s a main service, which will handle bulk notification requests.

These services will also manage notification messages. They will persist sent messages to databases and maintain an activity log. The same message can be resent using the APIs of these services. They will provide APIs to add/update/delete and view old and new messages, and a web dashboard with filter options to filter messages based on different criteria like date range, priority, module, user, user groups, etc.
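
For illustration, here is a minimal sketch of how the Simple Notification Service could expose such a REST API with Spring Boot. The endpoint path, payload fields, and the TemplateClient/ValidationClient interfaces are my assumptions for this sketch, not fixed parts of the design:

import java.util.Map;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Assumed payload shape; field names are illustrative only.
record NotificationRequest(String templateId, String recipient, String channel,
                           String priority, Map<String, String> data) {}

// Assumed thin clients for the Template Service and Validation Service.
interface TemplateClient { String render(String templateId, Map<String, String> data); }
interface ValidationClient { void validate(String message, String channel); }

@RestController
@RequestMapping("/v1/notifications")
class SimpleNotificationController {

    private final TemplateClient templates;
    private final ValidationClient validator;

    SimpleNotificationController(TemplateClient templates, ValidationClient validator) {
        this.templates = templates;
        this.validator = validator;
    }

    // Accepts a single notification request, builds the message from a template,
    // validates it, and would then persist it and publish it to the priority queues.
    @PostMapping
    ResponseEntity<String> send(@RequestBody NotificationRequest request) {
        String message = templates.render(request.templateId(), request.data());
        validator.validate(message, request.channel());
        return ResponseEntity.accepted().body("notification queued");
    }
}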

3. Template Service:

This service manages all ready-to-use templates for OTP, SMS, email, chat, and other push notification messages. It also provides REST APIs to create, update, delete, and manage templates. It will also provide a UI dashboard page to check and manage message templates from a web console.
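
As a simple illustration of what template rendering could look like, here is a tiny placeholder-substitution sketch in Java. The {{placeholder}} syntax is only an assumption; a real Template Service would load templates from its store and expose the CRUD APIs around this logic:

import java.util.Map;

// Minimal placeholder substitution: {{name}} tokens in a stored template are replaced
// with request data before the message is handed to the notification services.
class TemplateRenderer {

    String render(String template, Map<String, String> data) {
        String result = template;
        for (Map.Entry<String, String> entry : data.entrySet()) {
            result = result.replace("{{" + entry.getKey() + "}}", entry.getValue());
        }
        return result;
    }
}

// Example:
// new TemplateRenderer().render("Your OTP is {{otp}}. It expires in {{ttl}} minutes.",
//                               Map.of("otp", "482913", "ttl", "5"));
// -> "Your OTP is 482913. It expires in 5 minutes."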

4. User Selection Service:

This service provides the ability to choose target users and application modules. There could be use cases to send bulk messages to a specific group of users or to different application modules. The user source could be AD/IAM/eDirectory, a user database, or user groups, based on the customer’s preferences. Internally, it will consume the User Profile Service APIs and check customers’ notification preferences.

5. User Profile Service:

This service will provide various features, including managing users’ profiles and their preferences. It will also allow users to unsubscribe from notifications and to set their notification receiving frequency, etc. The Notification Service depends on this service.
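
A possible (assumed) shape of the preferences this service could store, expressed as a small Java record; the field names and values are illustrative only:

import java.util.Set;

// Hypothetical preferences record: preferred channels, opt-out flag, and receiving frequency.
record NotificationPreferences(String userId,
                               Set<String> enabledChannels,   // e.g. EMAIL, SMS, PUSH, WHATSAPP
                               boolean unsubscribed,
                               String frequency) {}           // e.g. IMMEDIATE, DAILY_DIGEST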

6. Common Notification Service

  • Scheduling Service:

This service will provide APIs to schedule notifications, either immediately or at a given time. The frequency could be any of the following:

  • Second
  • Minute
  • Hourly
  • Daily
  • Weekly
  • Monthly
  • Yearly
  • Custom frequency etc.

Other services could also auto-trigger messages based on these scheduled times.
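
For example, a fixed daily schedule could be expressed with Spring’s scheduler as sketched below. This is illustrative only: it assumes @EnableScheduling is declared on a configuration class, and a real Scheduling Service would register user-defined schedules dynamically, e.g., with Quartz:

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
class DailyDigestScheduler {

    // Runs every day at 08:00 server time and triggers the pending digest notifications.
    @Scheduled(cron = "0 0 8 * * *")
    void sendDailyDigest() {
        // fetch notifications due now and hand them over to the notification services
    }
}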

  • Validation Service:

This service is solely responsible for validating notification messages against business rules and the expected format. Bulk messages should be approved only by an authorized system admin.

  • Prioritization Service:

This service prioritizes notifications as high, medium, and low. OTP notification messages have the highest priority with a time-bound expiry, so they are always sent at high priority. The Common Outbound Handler consumes, processes, and sends messages according to these priorities by reading from three different queues: high, medium, and low. Bulk messages can be sent at low priority during off hours, while application notifications generated during transactions (like emails) could be sent at medium priority. The business decides the priority based on the criticality of the notifications.
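
A small sketch of how validated messages could be routed to the three priority topics with Spring Kafka; the topic names and priority values are assumptions for illustration:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
class PriorityRouter {

    private final KafkaTemplate<String, String> kafka;

    PriorityRouter(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    // OTPs go to the high topic, transactional messages to medium, bulk campaigns to low.
    void route(String messageId, String payload, String priority) {
        String topic = switch (priority.toUpperCase()) {
            case "HIGH" -> "notifications.high";
            case "MEDIUM" -> "notifications.medium";
            default -> "notifications.low";
        };
        kafka.send(topic, messageId, payload);
    }
}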

7. Event Priority Queues (Event Hub):

This provides an event hub service which consumes messages from the notification services on high, medium, and low topics. It sends processed and validated messages to the Common Outbound Handler, which internally checks users’ personal notification preferences via the User Profile Service.

It will have these three topics, which will be used to consume/send messages based on business priority:

  • High
  • Medium
  • Low

8. Common Outbound Handler:

This service will consume notification messages from the Event Hub by polling the event priority queues in order of priority: the highest precedence is given to the “High” queue, and so forth. Finally, it will send notification messages to the message-specific adapter through the Outbound Event Hub.

This service will also fetch target user/applications from User Selection Service.
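
Kafka itself has no built-in priority consumption, so one possible (assumed) approach is to drain the high topic first and fall back to medium/low only when it is empty, as sketched below with the plain Kafka consumer API; topic names and configuration values are illustrative:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OutboundHandler {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "outbound-handler");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> high = new KafkaConsumer<>(props);
             KafkaConsumer<String, String> medium = new KafkaConsumer<>(props);
             KafkaConsumer<String, String> low = new KafkaConsumer<>(props)) {

            high.subscribe(List.of("notifications.high"));
            medium.subscribe(List.of("notifications.medium"));
            low.subscribe(List.of("notifications.low"));

            while (true) {
                // Always drain the high queue first; only look at lower queues when it is empty.
                if (!dispatch(high.poll(Duration.ofMillis(200)))
                        && !dispatch(medium.poll(Duration.ofMillis(200)))) {
                    dispatch(low.poll(Duration.ofMillis(200)));
                }
            }
        }
    }

    // Hands each record to the matching adapter via the outbound event hub (omitted here).
    private static boolean dispatch(ConsumerRecords<String, String> records) {
        records.forEach(rec -> System.out.println("dispatching " + rec.key()));
        return !records.isEmpty();
    }
}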

9. Notification DB

This database persists all notification messages with their delivery time, status, etc. It will be a cluster of databases with a leader, which performs all write operations, while reads are served by read replicas/followers. It should be a NoSQL database.
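
A possible (assumed) shape of one persisted notification record, based on the fields mentioned above:

import java.time.Instant;

// One notification document/row per message, keyed by its ID.
record NotificationRecord(String notificationId,
                          String channel,        // EMAIL, SMS, PUSH, WHATSAPP, TELEGRAM
                          String recipient,
                          String payload,
                          String status,         // QUEUED, SENT, DELIVERED, FAILED
                          Instant deliveryTime) {}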

10. Outbound Event Hub:

This finally transmits messages to the various supported adapters. These adapters are specific to different devices (desktop/mobile) and notification types (SMS/OTP/email/chat/push notifications).

11. Notification Adapters:

These adapters transform incoming messages from the event hub (Kafka) and send them to external vendors in their supported formats. These are a few adapters; more can be added based on use case requirements (a sketch of a common adapter contract follows this list):

  • OTP Adapter Service
  • SMS Adapter Service
  • Email Adapter Service
  • In-App Notification Adapter Service
  • WhatsApp Chat Notification Adapter Service
  • Telegram Notification Adapter Service
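
A sketch of a common contract the adapters could share is shown below; the interface, method names, and the SmsVendorClient abstraction are assumptions for illustration:

// Common contract each adapter could implement.
interface NotificationAdapter {

    // Channel handled by this adapter, e.g. "SMS", "EMAIL", "WHATSAPP".
    String channel();

    // Transform the internal message and hand it over to the vendor integration service.
    void deliver(String recipient, String payload);
}

// Example adapter for SMS; the vendor client is an assumed abstraction over the external SaaS API.
class SmsAdapterService implements NotificationAdapter {

    private final SmsVendorClient vendor;

    SmsAdapterService(SmsVendorClient vendor) { this.vendor = vendor; }

    @Override public String channel() { return "SMS"; }

    @Override public void deliver(String recipient, String payload) {
        vendor.sendSms(recipient, payload);   // vendor-specific formatting/handshake lives here
    }
}

interface SmsVendorClient { void sendSms(String phoneNumber, String text); }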

12. Notification Vendors:

These are the external SaaS (cloud/on-prem) vendors, which provide the actual notification transmission using their infrastructure and technologies. They may be paid enterprise services like AWS SNS, MailChimp, etc.

  • SMS Vendor Integration Service
  • Email Vendor Integration Service
  • App Push Notification Vendor Integration Service
  • WhatsApp Vendor Integration Service
  • Telegram Vendor Integration Service

13. Notification Analytical Service

This service performs all analytics, identifies notification usage and trends, and provides reporting on top of that. It pulls all final notification messages from the analytical database (Cassandra) and the notification databases for analytics and reporting purposes.

These are a few use cases:

  • Total number of notifications per day/per second.
  • Which notification channel is used the most.
  • The average size and frequency of messages.
  • Filtering messages based on their priorities, and many more…


14. Notification Tracker

This service will continuously read the Event Hub queues and track all sent notifications. It captures metadata of the notifications, such as transmission time, delivery status, communication channel, message type, etc.

15. Cassandra Database Cluster

This database cluster will persist all notifications for analytics and reporting purposes. It is optimized for a write-heavy, read-light workload.

This provides good performance and low latency for a high volume of notifications, because Cassandra internally handles a high number of write operations, syncs data across database nodes, and keeps duplicate data/messages for high availability and reliability. Messages remain available even if a node crashes.

Please share feedback and let me know if you have any suggestion to make this design better!

16. Inbound Notification Service

This service will expose API endpoints to external clients and applications for sending inbound messages.

17. Inbound Event Hub

This topic will be used to queue and process all incoming notification messages from Inbound notification clients.

18. Inbound Handler

This will consume all incoming notification messages from the inbound topic.

19. Inbound Notification Clients

These inbound notification messages will come from internal and external sources/applications.

API Introduction and Best practices!

Disclaimer: It has been taken from my book – “Cloud Native Microservices using Spring and Kubernetes“.

An Application Programming Interface (API) allows two apps/resources to talk to each other and is mostly referred to in the context of Service Oriented Architecture (SOA).

APIs are gaining more popularity as microservices development booms for modern cloud-native applications and app modernization. We can’t imagine microservices without APIs, because there are so many distributed services in a microservice architecture that can’t be easily integrated without the help of APIs. So, microservices and APIs complement each other!

An API is an architectural design specification, a set of protocols that provides an interface for different microservices/monolithic apps and databases to integrate and talk with each other. An API describes how external services can communicate with an app, not how the app works internally!

It creates an integration contract between different apps/external clients with a standard set of rules and specifications. It’s followed as a development practice for external clients/apps.

API development is based on a contract-first design pattern, where all development happens around API specifications and protocols. Developers use the same standard practices across different microservices agile development teams.

API best practices

Now, we will discuss a few best API practices in detail:

  • Follow OpenAPI standard: Modern apps API should follow OpenAPI specification to make it compatible and portable for all kinds of apps.
  • API web dashboard support: It should be developer-friendly with the API management dashboard, which helps to create, manage, and monitor APIs in large systems or microservices environments. There are many API open source and enterprise solutions like OpenAPI based SwaggerHub, Google Apigee, and so on. They provide a web-based dashboard to manage APIs dynamically and can be exported  as source code and shared with development teams.
  • Web-based HTTP with REST: Most of the apps, databases, and messaging systems use REST over the HTTP protocol for communication over the internet. REST is widely accepted, supported by most clients and integration apps, and so on. It’s more flexible and has rich features. If you are building an API, then you should know the basics of the HTTP web protocol, its methods, attributes, and status codes. You should also have a good understanding of the REST style of API interfaces, because REST is a resource-oriented architectural style.
  • Return valid structured JSON response: Don’t return a plain text message. It should be a well-structured JSON, XML, or similar response. An example is as follows:

{
  "sku": 101,
  "pInfo": {
    "fullProductName": "LG 50B6000FHD 127 Fridge",
    "brand": "LG",
    "model": "50B6000FHD",
    "category": "Fridge"
  }
}

  • Maintain status codes: An API must return HTTP status codes, because when a client sends a request to the server through a REST API, it expects a response from the server indicating success or failure. There are standard pre-defined status codes for this purpose:
Status codes    Descriptions
2xx             Success category, for example, 200 – OK
3xx             Redirection category, for example, 304 – Not modified
4xx             Client error category, for example, 404 – Not found
5xx             Server error category, for example, 500 – Internal server error

  • API Endpoint naming standard: Name the collections using plural nouns. The reason behind this is that the same resource can return a single record or multiple records, and it’s not recommended to have two separate resource URIs for these two cases. For example, /orders is a valid URI name for an API, which serves both purposes.

Use nouns instead of verbs. This is a standard naming convention, because multiple operations can be done on a single resource or object. For example, /orders is a noun and the correct way, because an order can be created, updated, deleted, and fetched. It’s not recommended to use /createOrder, /updateOrder, /deleteOrder, and so on.

  • Error handling, return error details with error code: A server resource should always return an appropriate status code, an internal error code, and a simple human-readable error message for better error and exception handling in client-side apps, for example:

{
  "status": "400",
  "errorCode": "2200",
  "errorDetail": "Connection refused"
}

  • Return appropriate HTTP response status code: Every REST endpoint should return a meaningful HTTP response code to handle server responses in a better way, like:
    • 200 for success.
    • 404 for not found.
    • 201 resource created.
    • 304 not modified. The response is already in the client's cache.
    • 400 bad request. The client request was not processed, as the server could not understand what the client was asking for.
    • 401 unauthorized. The client is not allowed to access the resource and should re-request with the required credentials.
    • 403 forbidden. The client is authenticated, but is not allowed to access the page or resource for some reason.
    • 503 service unavailable. The server is down or unavailable to receive and process the request.
  • Avoid nesting of related resources: Sometimes, resources are related to each other; for example, the /orders resource is related to a catalogue category and a user ID. We should not nest resources like: GET /orders/mobile/111

It’s recommended to use top-level resource and make other related resources as a query parameter like this:

              GET /orders?ctg=mobile&userid=111

  • Handle trailing slashes: It’s always advisable to use only one approach, either with a trailing slash like /orders/ or without it like /orders, to avoid any confusion.
  • Use sorting, filtering, querying, pagination: In many use cases, a simple resource name won’t work.
    • Sorting: You need to request server API resources to sort data in ascending or descending order: GET /orders?sort=asc
    • Filtering:  Filter on some business conditions like return product catalog responses based on price range: GET /orders?minprice=100&maxprice=500
    • Querying: Use cases where you want to query products based on their category like searching electronics products based on mobile category, for example: GET /orders?ctg=mobile&userid=111
    • Pagination: To improve performance and reduce latency on API calls over the internet, the client requests a subset of records at a single request like 10 records at a time for a given page. It’s called pagination: GET /orders?page=1&page_size=10
  • Versioning: Versioning is a very important concept of API, which helps consumers to migrate to newer versions without any outage. In this scenario, some clients can access newer versions, and others can still use older versions. There are various ways of API versioning:
  • Using URI path: It’s a standard technique to maintain different versions of the same API to support older versions of API resources when the server-side API resource is upgraded to a newer version. Clients take some time to migrate and use the latest version of the API. A new version of the same API resource can be added, for example, /orders/v1, /orders/v2. The internal version of the API uses a format like 1.2.3: MAJOR.MINOR.PATCH.
    • Major version: It contains major code changes in business logic or other components. A new major version is added to the new API and the version number is used to route to the correct host.
    • Minor and patch versions: These are used internally for backward-compatible updates. They are usually communicated in changelogs to inform clients about new functionality or a bug fix. The minor version represents minor changes, and the patch contains break-fixes or security patches, and so on.
  • Using query parameters: In this method, version number is added into query parameters in key and value. It’s simple to use, however not recommended because it’s difficult to route requests to APIs. For example: /orders?version=1.
  • Using custom headers: In this method, version number can be added in HTTP request header. It avoids the clutter of URI versions; however, we need to create and manage new headers. For example: Accepts-version: 1.0.
  • Using content negotiation: The version is also added in the header; it allows versioning a single resource representation instead of the entire API, which gives more granular control over versioning. In this method, there is no need to create routing rules in the source API code for different versions. This approach is not very popular, because it’s difficult to test and verify changes in browsers. For example: Accept: application/json; version=1
  • Caching: API caching is a much-needed feature to improve API read (GET) performance. It caches responses from the API for the same set of data and makes them available for other similar client requests. It’s recommended to use distributed caching techniques in a distributed microservices environment, so that the same cached response is available to multiple instances of the same microservice app.
  • Rate limiting and throttling: Rate limiting is a technique of counting client requests with a counter and limiting them based on the subscription or maximum allowed limit to control traffic on the server. It is also useful for security reasons, to prevent attackers from hitting the API continuously and bringing the system down by consuming all the memory and compute resources.

API throttling controls the way an API is consumed by external apps or clients. It also indicates a temporary state and is used to control the data that external clients can access through a REST API. When a throttle is triggered, we can disconnect client requests or client apps by device ID or user, or just reduce the response rate. You can define a throttle at the application, API, or user level.

There are multiple ways to implement rate limiting. Spring Cloud Gateway provides a rate-limiting filter backed by distributed caching such as Redis or a similar caching tool.

  • API gateway support: It’s recommended to expose APIs to external apps using API gateway tools. The API gateway takes care of routing and orchestration to the designated server-side API, filtering, rate limiting, throttling, circuit breaking, authentication, authorization, and so on out of the box. It keeps your API configuration outside of the business logic source code, which makes the actual business logic code lighter and easier to debug and maintain. A sample Spring Cloud Gateway configuration with Redis-backed rate limiting is sketched below.
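
This sketch assumes illustrative values for the route path, downstream URI, and limits, and a real setup also needs a KeyResolver bean to decide what to rate-limit on (user, IP, API key, etc.):

import org.springframework.cloud.gateway.filter.ratelimit.RedisRateLimiter;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class GatewayConfig {

    @Bean
    RedisRateLimiter redisRateLimiter() {
        // roughly 10 requests per second per key, with short bursts up to 20
        return new RedisRateLimiter(10, 20);
    }

    @Bean
    RouteLocator routes(RouteLocatorBuilder builder, RedisRateLimiter limiter) {
        return builder.routes()
            .route("orders", r -> r.path("/orders/**")
                .filters(f -> f.requestRateLimiter(c -> c.setRateLimiter(limiter)))
                .uri("lb://order-service"))   // assumed downstream service name
            .build();
    }
}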

My first book release!! Cloud Native Microservices with Spring and Kubernetes (453 pages)

I am happy to announce the release of my first book, “Cloud Native Microservices with Spring and Kubernetes”, with BPB Publications!! It’s all about designing, building, and deploying scalable cloud native microservices on containers using the Spring framework and Kubernetes. Need your support! Please buy and review this book on Amazon. Also, share the book details with your IT software engineer colleagues and friends.

The main objective of this book is to give an overview of cloud native microservices, their architecture, design patterns, best practices, use cases  and practical coverage of modern applications. This book covers a strong understanding of microservices, API first approach, Testing, Observability, API Gateway, Service Mesh and Kubernetes alternatives of Spring Cloud. This book covers the implementation of various design patterns of developing cloud native microservices using Spring framework, docker and Kubernetes. It also covers containerization concepts and hands-on code exercises.

After reading this book, the readers will have a holistic understanding of building, running, and managing cloud native microservices applications on Kubernetes containers.

It’s the first book on this subject in India by an Indian writer, and it is also more economical than foreign publications. This book is about learning software application design and development using microservices, Spring, and Kubernetes-based technologies. It’s useful for software developers, cloud engineers, DevOps engineers, and technical architects.

How to buy:

  • Amazon

This book is available in paperback and Kindle (eBook on free Kindle app on Android/iOS/Laptop/Desktop) edition on amazon and BPB in most of the countries of North America, Europe, Middle East, Asia and Africa.

Refer free preview and TOC/Index of this book:

https://drive.google.com/drive/folders/1Lq280d6hUcyh2xm8cFuyt1vADNzk61dg?usp=sharing

What you will learn:

  • Learn fundamentals of microservice and design patterns.
  • Perform end-to-end microservices testing using Cucumber.
  • Learn  microservices development using Spring Boot and Kubernetes.
  • Learn to develop reactive, event-driven, and batch microservices.
  • Implement API gateway, authentication & authorization, load balancing, caching, and rate limiting.
  • Learn observability and monitoring techniques of microservices

Who this book is for:
This book is for the Spring Developers, Microservice Developers, Cloud Engineers, DevOps Consultants, Technical Architect and Solution Architects.

Table of Contents (Chapters):
1. Overview of Cloud Native microservices
2. Microservice design patterns
3. API first approach
4. Build microservices using the Spring Framework
5. Batch microservices
6. Build reactive and event-driven microservices
7. The API gateway, security, and distributed caching with Redis
8. Microservices testing and API mocking
9. Microservices observability
10. Containers and Kubernetes overview and architecture
11. Run microservices on Kubernetes
12. Service Mesh and Kubernetes alternatives of Spring Cloud

KEY FEATURES:

  • Complete coverage on how to design, build, run, and deploy modern cloud native microservices.
  • Includes numerous sample code exercises on microservices, Spring and Kubernetes.
  • Develop a stronghold on Kubernetes, Spring, and the microservices architecture.
  • Complete guide of application containerization on Kubernetes containers.
  • Coverage on managing modern applications and infrastructure using observability tools.

Chapter 1: Overview of Cloud Native microservices, introduces cloud native modern applications, cloud first overview, benefits, types of clouds, classification, and the need for cloud native modern applications. It will cover a detailed microservices (MSA) overview, characteristics, motivations, benefits, best practices, architecture principles, challenges and solutions, application modernization spectrum, twelve-factor apps, and beyond twelve-factor apps.

Chapter 2: Microservice design patterns, introduces various microservices design patterns with use cases, advantages, and disadvantages.

Chapter 3: API first approach, discusses fundamentals of the API first approach. It discusses details of the REST overview, API model, best practices, design principles, components, security, communication protocols, and how to document dynamically with OpenAPI Swagger. It discusses API design planning, specifications, API management tools, and testing API with SwaggerHub inspector and PostMan REST client.

Chapter 4: Build microservices using the Spring Framework, is a key chapter of Spring Boot and Spring Cloud components with hands-on lab exercises. It will cover steps to build microservice using the REST API framework. It covers the Spring Cloud config server and resiliency of microservices practical aspects.

Chapter 5: Batch microservices, introduces batch microservices, use cases, Spring Cloud Task, and Spring Batch. It discusses hands-on lab exercises using Spring Cloud Data Flow (SCDF) and Kafka. It also discusses a few Spring batch practices, auto-scaling techniques, batch orchestration, and compositions methods for sequential or parallel batch processing. Last but not the least; it talks about alerts and monitoring of Spring Cloud Task and Spring Batch.

Chapter 6:  Build reactive and event-driven microservices, describes building of reactive microservices, non-blocking synchronous APIs, and event-driven asynchronous microservices. It covers steps to develop sample reactive microservices with Spring’s project Reactor, Spring WebFlux, and event-driven asynchronous microservices. It discusses Spring Cloud Stream, Zookeeper, SpringBoot, and overview of Kafka. It also covers hands-on lab exercises of event-driven asynchronous microservices using Spring Cloud Stream and Kafka.

Chapter 7: API gateway, security, and distributed caching with Redis, introduces the API Gateway overview, features, advantages, and best practices. It covers hands-on lab exercises to expose REST APIs of microservices externally with the Spring Cloud Gateway. It covers a distributed caching overview and hands-on lab exercises using Redis. It discusses API gateway rate limiting and the implementation of API gateway rate limiting with Redis and Spring Boot. Last but not the least, it covers best practices of API security and the implementation of SSO using Spring Cloud Gateway, Spring Security, OAuth2, Keycloak, OpenID, and JWT tokens.

Chapter 8: Microservices testing and API mocking, describes important aspects of microservices testing practices, challenges, benefits, testing strategy, testing pyramid, and different types of microservices testing. It covers implementation of the integration testing framework using Behavioral Driven Development (BDD) with hands-on code examples. It also discusses microservices testing tools and best practices of microservices testing. It covers the role of testing in the microservices CI/CD pipeline. Last but not the least; it talks about API mocking and hands-on lab implementation with the WireMock framework.

Chapter 9: Microservices observability, covers a detailed observability and monitoring overview and techniques of microservices with the Spring actuator, micrometer health APIs, and Wavefront APM. It covers an application logging overview, best practices, simple logging, and log aggregation of distributed microservices with implementation using Elasticsearch, Fluentd, and Kibana (EFK) on the Kubernetes container. It discusses the need for APM performance and telemetry monitoring tools for distributed microservices and how to trace multiple microservices in a distributed environment. It also covers hands-on lab implementation of monitoring microservices with Prometheus and Grafana.

Chapter 10: Containers and Kubernetes overview and architecture, is a key chapter which introduces containers, docker, docker engine containerization, Buildpacks, components of docker files, build docker files, run docker files, and inspect docker images. It covers docker image registry and how to persist docker images in image container registries. It covers an overview of Kubernetes, need, and architecture. Last but not the least; it covers a detailed introduction of Kubernetes resources.

Chapter 11: Run microservices on Kubernetes, discusses practical aspects of Kubernetes, installation, and configuration with monitoring and visualization tools with Octant and Proxy. It discusses how to create and manage Kubernetes clusters in detail. It discusses hands-on exercises of creating docker images of Java microservices, pushing it to the Docker hub container image registry, and deploying to Kubernetes clusters. It covers hands-on lab examples of exposing API endpoints of microservices outside the Kubernetes cluster by using the Nginx ingress controller. Last but not the least; it covers various popular and useful Kubernetes application deployment and configuration of management tools.

Chapter 12: Service Mesh and Kubernetes alternatives of Spring Cloud, covers a detailed overview and benefits of GitOps and Service Mesh. It covers the Istio Service Mesh architecture and deployment of microservices on Kubernetes with Argo CD. Last but not the least; it discusses various Kubernetes alternatives of Spring Cloud projects and popular cloud buzzwords!

About the Author
Rajiv Srivastava is the founder of cloudificationzone.com, which is a cloud native modern application tech blog site. He is a cloud solution architect and modern application specialist with 16+ years of work experience in software development and architectural design.

KEYWORDS 

1. Spring framework
2. API first approach
3. Cloud Native
4. Microservices observability
5. API testing
6. API gateway
7. Service mesh
8. Redis distributed caching
9. Kafka
10. Spring Cloud
11. Service discovery
12. Spring Cloud Data Flow
13. Ingress controller

A quick introduction of Cluster API

In this blog we will understand how multiple Kubernetes clusters can be managed and orchestrated using the Cluster API control plane tool.

Cluster API

It’s a sub-project of Kubernetes which works on declarative APIs and provides support for CRDs (Custom Resource Definitions), managing K8s clusters and VMs based on CRD configuration. Cluster API continuously checks the current state of K8s clusters on worker nodes and compares it with the desired state provided by the DevOps team in CRD config files. It provisions multiple K8s clusters, which can be spread across multiple nodes/hosts, and extends the functionality of K8s for multi-node cluster management with the kubeadm API.

Cluster API orchestrates and manages the lifecycle of the worker nodes where K8s clusters are provisioned. It also manages upgrades, failover, rollback, etc., of multiple K8s clusters.

Tip: Cluster API manages K8s clusters across multiple clouds, including on-prem and hybrid environments.

Started by the Kubernetes Special Interest Group (SIG) Cluster Lifecycle, the Cluster API project uses Kubernetes style APIs and patterns to automate cluster lifecycle management.

Refer Cluster API docs here: https://cluster-api.sigs.k8s.io/



Cluster API cluster management reference architecture

These are important components of Cluster API:

Infrastructure provider

Basic hardware infrastructure providers such as VMware, and public infra providers such as AWS, GCP, Azure, etc.

Bootstrap provider

It’s also called the Cluster API bootstrap provider kubeadm (CABPK). It generates cluster certificates and initializes/bootstraps control planes, turning a Machine into a K8s Node. It uses the kubeadm tool.

Kubeadm

It manages the lifecycle of K8s clusters on multiple nodes. Kubeadm is a tool built to provide best-practice “fast paths” for creating Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user friendly way.

Kubeadm: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

Control plane

The control plane is a set of services which manages multiple K8s clusters and scales them in production environments. It manages and provisions dedicated machines/VMs running static pods for components such as kube-apiserver, kube-controller-manager, and kube-scheduler.

The default provider uses kubeadm to bootstrap the control plane.

Custom Resource Definitions (CRDs)

It’s a set of K8s YAML configuration files which maintain desired K8s cluster state. It can be persisted on Git server for platform automation for Cluster API based K8s clusters.

Machine

It defines the configuration of machines (VMs, etc.). It’s a declarative spec for an infrastructure component hosting a Kubernetes Node, similar to a Pod in K8s.

MachineDeployment

It provides declarative updates for Machines and MachineSets.

MachineSet

It maintains a stable set of Machines running at any given time. It’s like a ReplicaSet of K8s.

MachineHealthCheck

It defines the conditions when a Node should be considered unhealthy. If the Node has any unhealthy conditions for a given user-configured time, the MachineHealthCheck initiates remediation of the Node. Remediation of Nodes is performed by deleting the corresponding Machine and creating a new one like ReplicaSet. It always maintains the same set of Machine configuration which has been provided in the CRD file.

BootstrapData

It contains the Machine or Node role-specific initialization data (usually cloud-init) used by the Infrastructure Provider to bootstrap a Machine into a Node.

Business Benefits of Cloud

I had been busy with my other work assignments, so I am publishing this blog after a long interval. Hope you like it!

It’s important to understand the benefits for a business that is planning to migrate to the cloud and invest time and money. These are some generic and common benefits of adopting cloud technology using the modern application approach:

  • Smoother and faster app user experience: Cloud provides faster, highly available app interfaces, which improves the user experience. For example, AWS stores static web pages and images at nearby CDN (Content Delivery Network) servers on the cloud, which provides faster and smoother application responses.
  • On demand scaling for infrastructure: Cloud provides on-demand compute, memory, and storage horizontal/vertical scaling. Organizations/customers don’t need to predict infrastructure for higher loads, and they save money by using only the required infrastructure resources.
  • No outage for users and clients: Cloud provides high availability, so whenever any app server is down, client load will be diverted to other app server or a new app server will be created. User and client sessions will also be managed automatically using internal load balancers.
  • Less operational cost (OPEX): Cloud handles most infra management operations automatically or through the cloud provider. For example, PaaS (Platform as a Service) automates the entire platform with a smaller number of DevOps resources, which saves a lot of operational cost.
  • Easy to manage: Cloud providers and PaaS platforms provide very easy and intuitive web, CLI, and API based consoles or interfaces, which can be easily integrated with CI/CD tools, IaC (Infrastructure as Code), and scripts. They can also be integrated with apps, etc.
  • Release app features quickly to compete in the market: Cloud provides a lot of ready-to-use services, which reduces the time to build and deploy apps using microservices and agile-like development methodologies. It supports container-based orchestration services like Kubernetes, where smaller microservices can be deployed quickly, which enables organizations to release new features quickly.
  • Increased security: Cloud solutions provide out-of-the-box intrinsic security features at the application, network, data, and infra levels. For example, AWS provides DDoS and OWASP security features with firewalls, etc.
  • Increased developer productivity: Cloud provides various tools and services to improve developer productivity, like PaaS, Tanzu Build Service, the Spring framework, AWS Elastic Beanstalk, GCP and OpenShift developer tools, etc.
  • Modular teams: Cloud migration motivates dev and test teams to follow the modern application microservice approach and work in agile on independent modules or microservices.
  • Public cloud’s “pay as you go” usage policy: Customers pay only for the infrastructure they actually use, so no extra infra resources are wasted. These public service providers’ pricing models save a lot of cost.
  • Easy disaster recovery handling: Cloud is deployed across multiple data centers (DCs) or availability zones (AZs) for disaster recovery (DR), so that if any site (DC/AZ) goes down, the client or application load is automatically routed to another site using load balancers.
  • Business continuity: Cloud provides all the necessary processes and tools to manage business continuity (BC) for smooth and resilient business operations. It provides faster site recovery in case of disaster, along with data backup. Cloud also provides enterprise compliance for various industries, like HIPAA for health insurance, etc.

Cloud native modern application and microservices related buzzwords

Disclaimer: This blog content has been taken from my latest book:

“Cloud Native Microservices with Spring and Kubernetes”

On this Covid weekend, I have tried to consolidate a few popular cloud native and microservices related acronyms and buzzwords. Hope it will be a helpful quick reference for novice and pro professionals:

  • DDD: Domain-Driven Design is used for microservices design based on the bounded contexts of independent business modules.
  • DoS: A Denial-of-Service (DoS) is a cyber-attack  to shut down a machine or network, making it inaccessible to its intended users. DoS attacks accomplish this by flooding the target with traffic, or sending it information that triggers a crash.
  • DDoS: It’s an extension of DoS. The incoming traffic flooding the victim originates from many different sources, which effectively makes it impossible to stop the attack simply by blocking a single source.
  • API: Application Program Interface. It’s a computing interface which defines interactions between multiple software applications and can be interacted using REST.
  • REST:  Representational state transfer (REST) is a software architectural style that defines a set of constraints to be used for creating Web services.
  • OWASP: The Open Web Application Security Project (OWASP) is an online community that produces freely-available articles, methodologies, documentation, tools, and technologies in the field of web application security.
  • DC: Data Center. It’s a physical facility that organizations use to host their critical applications and data.
  • AZ: Availability Zones are isolated virtual locations within data center regions from which public cloud services originate and operate for HA, DR, backup.
  • HA: High Availability
  • DR: Disaster Recovery
  • A&A: Authorization & Authentication
  • BM: Bare metal on a plain OS.
  • OS: Operating system
  • SQL: Structured Query Language for RDBMS databases like MySQL, Oracle
  • No-SQL: Non-RDBMS databases that do not use Structured Query Language, like the document-based database MongoDB, or column-based databases such as HBase, Greenplum, etc.
  • BYO: Bring Your Own. Example: Bring your own software.
  • DIY: Do It Yourself. It’s used for mainly self-service management/operations.
  • TCP/IP: Transmission Control Protocol (TCP) and the Internet Protocol (IP). The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It works on handshaking dialogues.
  • UDP: User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite. It has no handshaking dialogues.
  • DevOps: A set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery and continuous integration with high software quality. DevOps is complementary with Agile software development.
  • DevSecOps: is the philosophy of integrating security practices within the DevOps process. DevSecOps involves creating a ‘Security as Code’ culture with ongoing, flexible collaboration between release engineers and security teams.
  • CI/CD: Continuous Integration and Continuous Delivery.
  • Agile software development: Iterative sprint based rapid development model.
  • H/W: Hardware
  • S/W: Software
  • CQRS: Command query responsibility segregation is microservice design pattern to separate read and write command responsibilities of data.
  • SAGA: Microservices architectural pattern to implement a transaction that spans across multiple microservices
  • HTTP: Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting web-content
  • HTTPS: HTTP with SSL security
  • SSL & TLS: Secure Sockets Layer and its successor, TLS (Transport Layer Security), are protocols for establishing authenticated and encrypted links between networked computers or cloud applications/services.
  • Greenfield: New scratch application
  • Brownfield: Legacy or old application
  • MFA: Multi-factor authentication, which involves more than one factor to authenticate, like login credentials combined with a mobile-based OTP confirmation.
  • OTP: One Time password
  • RBAC: Role-based access control (RBAC) is a method of restricting network access based on the roles of individual users within an enterprise.
  • AWS: Amazon Web services. A leading public cloud provider.
  • GCP: Google Cloud Platform. A leading public cloud provider

Install Tanzu Build Service (TBS v1.0 GA) on Kubernetes – build docker image and store in DockerHub image registry

In this blog I will cover these following scope-

  1. How to install TBS on local KIND/TKGI/TKG and other Kubernetes clusters.
  2. How to auto-build a portable OCI docker image of a Spring Boot (Java) project using the automated build tool VMware Tanzu Build Service, which auto-detects Git source code repository commits and injects the required dependencies and OS base image based on the source code language and its configuration files, e.g., application.yaml and Maven’s pom.xml for a Java/Spring app.
  3. Push this docker image automatically to Docker Hub image registry. You can use any image registry like onprem Harbor, AWS ECR, GCR, Azure ACR etc.
  4. Test Build Image by downloading from Docker-hub image registry and run using Docker
  5. How to build a .NET application using TBS (Appendix)
  6. FAQ

Currently, Tanzu Build Service (TBS) ships with the following Buildpacks:

  • Java
  • NodeJS
  • .NET Core (supports Windows DotNet App)
  • Python
  • Golang
  • PHP
  • HTTPD
  • NGINX

Why Tanzu Build Service (TBS)?

  1. Saves time on re-building, re-testing, and re-deploying when patching hundreds of containers.
  2. It auto-scans the source code configuration and language and injects the dependencies into the docker image that are required to compile and/or run the app on containers/K8s.
  3. Faster build and patching of hundreds of containers.
  4. Faster developer productivity by setting up local builds on the developer’s machine and syncing with a source code repo like GitHub, etc.
  5. Manages common project dependencies for dev teams and syncs all developers’ code with a single git branch to avoid code conflict/sync issues.
  6. Maintains the latest image in the image registry.
  7. OCI docker image support, build and run anywhere!

Please refer this official documentation page for more detail

https://docs.pivotal.io/build-service/1-0/

Tanzu Build Service v1.0 GA can be installed on any Kubernetes private and public cloud clusters (v1.14 or later), including a local machine using the Kubernetes shipped with Docker Desktop or Minikube, and managed K8s clusters like TKGI, TKG, GKE, AKS, etc.

Build Service Components: Tanzu Build Service ships with the following components:

  1. kpack
  2. CNB lifecycle

Prerequisite:

  1. Create Pivnet account. Refer to the official docs for more details on obtaining a Pivotal Network API token. You can create a free account and try on your local machine.
  2. Install the Pivnet CLI: https://github.com/pivotal-cf/pivnet-cli/releases/tag/v2.0.1
  3. Install Docker Desktop (Mac, Windows) – Optional if you are trying to install on your local single node K8s cluster.
  4. Docker Hub account
  5. Install Kubernetes. I have used TKGI K8s cluster on GCP, you can also use KIND(K8s in Docker) with Docker Desktop.
  6. Install the TKGI CLI or kubectl CLI
  7. Install these three Carvel CLIs for your operating system. These can be found on their respective Tanzu Network pages:
  • kapp is a deployment tool that allows users to manage Kubernetes resources in bulk.
  • ytt is a templating tool that understands YAML structure.
  • kbld is a tool that builds, pushes, and relocates container images.

How to Install & Configure TBS:

You can download TBS from VMware Tanzu Network (formerly the Pivotal Network, or PivNet) or install using Pivnet CLI command.

Note: I have used the Pivnet CLI for all downloads from VMware PivNet. Refer to the official installation guide for advanced configuration:
#Pivnet login using secret token
$ pivnet login --api-token='my-api-token'

$ pivnet download-product-files --product-slug='build-service' --release-version='1.0.2' --product-file-id=773503

#Unarchive the Build Service Bundle file:
$ tar xvf build-service-<version>.tar -C /tmp

#Login to docker-hub. This step will save docker-hub credentials to your K8s cluster. Note: You can use Harbor's url also.
$ docker login index.docker.io

#Login to the VMware registry (registry.pivotal.io) through the Docker CLI (installed with Docker Desktop)
$ docker login registry.pivotal.io

#Relocate the images for DockerHub with the Carvel tool kbld by running:
# Syntax: kbld relocate -f /tmp/images.lock --lock-output /tmp/images-relocated.lock --repository <IMAGE-REPOSITORY>

$kbld relocate -f /tmp/images.lock --lock-output /tmp/images-relocated.lock --repository itsrajivsrivastava/tanzu-build-service

Connect with your K8s cluster where you want to install TBS:

$ kubectl config use-context <K8s-cluster-name>

Now, install TBS on K8s. You can run these commands from home folder

Then use ytt to push the bundle to your image registry (Docker Hub in this example). It will upload all the language buildpacks and other supporting images to Docker Hub/Harbor.

Note: It will take a good amount of time to upload the bunch of images. If it fails, you need to re-run the command after deleting the failed TBS build. You can delete the "kpack" and "build-service" Kubernetes namespaces.

$ ytt -f /tmp/values.yaml \
    -f /tmp/manifests/ \
    -v docker_repository="<IMAGE-REPOSITORY>" \
    -v docker_username="<REGISTRY-USERNAME>" \
    -v docker_password="<REGISTRY-PASSWORD>" \
    | kbld -f /tmp/images-relocated.lock -f- \
    | kapp deploy -a tanzu-build-service -f- -y

#Example:
ytt -f /tmp/values.yaml \
    -f /tmp/manifests/ \
    -v docker_repository="itsrajivsrivastava" \
    -v docker_username="itsrajivsrivastava" \
    -v docker_password='******' \
    | kbld -f /tmp/images-relocated.lock -f- \
    | kapp deploy -a tanzu-build-service -f- -y

Where:

  • IMAGE-REPOSITORY is the image repository where Tanzu Build Service images exist.
  • REGISTRY-USERNAME is the username you use to access the registry. gcr.io expects _json_key as the username when using JSON key file authentication.
  • REGISTRY-PASSWORD is the password you use to access the registry

Install KP CLI:

The kp CLI is used for interacting with your Tanzu Build Service (TBS) installation on the K8s cluster. Download the kp binary from the Tanzu Build Service page on Tanzu Network.

Import Tanzu Build Service Dependencies

The Tanzu Build Service Dependencies (Stacks, Buildpacks, Builders, etc.) are used to build applications and keep them patched.

  1. Run this command on the CLI: docker login registry.pivotal.io
  2. Accept all these EULA agreements online.

Note: Successfully performing a kp import command requires that your Tanzu Network account has access to the images specified in the Dependency Descriptor file. Currently, users can only access these images if they agree to the EULA for each dependency. Users must navigate to each of the dependency product pages in Tanzu Network and accept the EULA highlighted in yellow underneath the Releases dropdown.

Here are the links to each Tanzu Network page in which users must accept the EULA:

  1. Tanzu Build Service Dependencies
  2. Java Buildpack for VMware Tanzu
  3. Java Native Image Buildpack for VMware Tanzu
  4. Node.js Buildpack for VMware Tanzu
  5. Go Buildpack for VMware Tanzu

Note: `kp import` will fail if it cannot access the images in all of the above Tanzu Network pages.

Note: You must be logged in locally to the registry used for `IMAGE-REGISTRY` during relocation and the Tanzu Network registry `registry.pivotal.io`.

These must be imported with the kp cli and the Dependency Descriptor (descriptor-<version>.yaml) file from the Tanzu Build Service Dependencies page:

$ kp import -f /tmp/descriptor-<version>.yaml

Verify kp Installation

List the custom cluster builders available in your installation:

You should see an output that looks as follows:

$  kp clusterbuilder list
NAME       READY    STACK                          IMAGE
base       true     io.buildpacks.stacks.bionic    itsrajivsrivastava/base@sha256:b3062df93d2da25aeff827605790c508570446e53daa8afe05ed2ab4157d1c02
default    true     io.buildpacks.stacks.bionic    itsrajivsrivastava/default@sha256:f16ed5de160ca9c13a0080d67280d0b2b843c926595c4d171568d75f96479247
full       true     io.buildpacks.stacks.bionic    itsrajivsrivastava/full@sha256:f16ed5de160ca9c13a0080d67280d0b2b843c926595c4d171568d75f96479247
tiny       true     io.paketo.stacks.tiny          itsrajivsrivastava/tiny@sha256:abc4879c03512a072623a7dcb18621d68122b5e608b452f411d8bc552386b8c5

# List the cluster stacks available in your installation:

$ kp clusterstack list
NAME       READY    ID
base       True     io.buildpacks.stacks.bionic
default    True     io.buildpacks.stacks.bionic
full       True     io.buildpacks.stacks.bionic
tiny       True     io.paketo.stacks.tiny

Create Git and Image registry secrets in your K8s cluster:

# Docker Hub secret
$ kp secret create docker-creds --dockerhub itsrajivsrivastava -n tbs-demo
  dockerhub password:
  "docker-creds" created

# GitHub Secret
$ kp secret create github-creds --git https://github.com --git-user rajivmca2004 -n tbs-demo

#Verify and list secret
$ kp secret list -n tbs-demo

NAME                   TARGET
default-token-fqwvj
docker-creds           https://index.docker.io/v1/
github-creds           https://github.com

#Delete secret
$ kp secret delete <SECRET-NAME> -n tbs-demo

Create and manage docker image using kp CLI commands:

# Create TBS image: (--tag is mandatory)

$ kp image create <name> \
  --tag <tag> \
  [--builder <builder> or --cluster-builder <cluster-builder>] \
  --namespace <namespace> \
  --env <env> \
  --wait \
  --git <git-repo> \
  --git-revision <git-revision>

#Syntax:

$ kp image create spring-petclinic \
  --tag index.docker.io/itsrajivsrivastava/spring-petclinic:latest \
  --namespace tbs-demo \
  --git https://github.com/rajivmca2004/spring-petclinic \
  --git-revision master
"spring-petclinic" created

#  Verify image status
$ kp image status spring-petclinic -n tbs-demo
Status:         Ready
Message:        --
LatestImage:    index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:4e712dec26810f026357281be4ba49ff8da3b45698700fd5dc470b7914c0d13d

Last Successful Build
Id:        1
Reason:    CONFIG

Last Failed Build
Id:        --
Reason:    --

# List the project(s):
  
$ kp image list -n tbs-demo
NAME                READY    LATEST IMAGE
spring-petclinic    True     index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:d96fdd60633cd5582a9c28edceff80762ee79ef2985cd18f518bc1503563b7ef

# To check image list of any specific project 

$ kp image list spring-petclinic -n tbs-demo
NAME                READY    LATEST IMAGE
spring-petclinic    True     index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:d96fdd60633cd5582a9c28edceff80762ee79ef2985cd18f518bc1503563b7ef

Build docker images:

Here, TBS will sync up with two systems:
1. GitHub – TBS stays in sync with the GitHub repo, is triggered after every commit, and builds a new docker image.
2. Image Registry/Docker Hub – TBS automatically pushes the image created in the last step to the image registry (Docker Hub) and keeps it refreshed all the time for developers and for CI/CD build and deployment to K8s containers.
There are two ways to build the image:

  1. Auto Build
  2. Manual Build

Note: If you want to build a source code directory that sits inside sub-directories, or if you have a parent project repo with multiple child projects inside it, use the --sub-path flag:

--sub-path DotNetBuild

#Example:
$ kp image create dotnetbuildtest index.docker.io/itsrajivsrivastava/dotnetbuildtest \
  --sub-path DotNetBuild \
  --namespace tbs-dotnet-demo \
  --git https://github.com/rajivmca2004/DotNetBuild.git \
  --git-revision master

1. Auto Build:

Note: The first build will be triggered automatically when you create the image for the first time.

To check the auto build, just make some code changes in your Git branch and check the build progress using this command. It takes a few seconds to trigger. You can watch the build logs locally –

# Use this command to check the build status; it can also be used for the first build. It will also show build revisions

$ kp build status spring-petclinic -n tbs-demo
Image:            index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:e09bc84c0287d8d0985244876a68ae4b3963a96707622d81f6d9b9efa581be92
Status:           SUCCESS
Build Reasons:    COMMIT

Pod Name:    spring-petclinic-build-2-5csx8-build-pod

Builder:      itsrajivsrivastava/default@sha256:f16ed5de160ca9c13a0080d67280d0b2b843c926595c4d171568d75f96479247
Run Image:    index.docker.io/itsrajivsrivastava/run@sha256:ca460a285b00d8f25ca3734b8f783af240771eb10974d90e26501fd52c0271b8

Source:      GitUrl
Url:         https://github.com/rajivmca2004/spring-petclinic
Revision:    c5b4f7f717a1dc239c002c993c178f75283a7751

BUILDPACK ID                           BUILDPACK VERSION
paketo-buildpacks/bellsoft-liberica    4.0.0
paketo-buildpacks/maven                3.1.1
paketo-buildpacks/executable-jar       3.1.1
paketo-buildpacks/apache-tomcat        2.3.0
paketo-buildpacks/dist-zip             2.2.0
paketo-buildpacks/spring-boot          3.2.1

$ kp build list spring-petclinic -n tbs-demo

BUILD    STATUS     IMAGE                                                                                                                          STARTED                FINISHED               REASON
1        SUCCESS    index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:b6d6a0944600be3552c0b33e0a0759a12b422168484ffff55f9f9e0be4c93217    2020-10-10 00:39:10    2020-10-10 00:59:40    CONFIG
2        SUCCESS    index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:e09bc84c0287d8d0985244876a68ae4b3963a96707622d81f6d9b9efa581be92    2020-10-10 01:07:35    2020-10-10 01:09:48    COMMIT

2. Manual Build:

$ kp image trigger spring-petclinic -n tbs-demo

Check Build Logs:

# To view running logs for a build:

 $ kp build logs spring-petclinic -n tbs-demo

#Check logs for the given release:

 $ kp build logs spring-petclinic --build 1 -n tbs-demo

Note: The first build will take some time to download all the dependent libraries. Subsequent builds will be super fast!

The applied source-code-related buildpacks are mentioned in the following build logs. Since it’s a Java app, these buildpacks have been applied automatically –

[INFO] Building jar: /workspace/target/spring-petclinic-2.3.0.BUILD-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.3.0.RELEASE:repackage (repackage) @ spring-petclinic ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  17:44 min
[INFO] Finished at: 2020-10-09T19:28:07Z
[INFO] ------------------------------------------------------------------------
  Removing source code

Paketo Executable JAR Buildpack 3.1.1
  https://github.com/paketo-buildpacks/executable-jar
  Process types:
    executable-jar: java org.springframework.boot.loader.JarLauncher
    task:           java org.springframework.boot.loader.JarLauncher
    web:            java org.springframework.boot.loader.JarLauncher

Paketo Spring Boot Buildpack 3.2.1
  https://github.com/paketo-buildpacks/spring-boot
  Launch Helper: Contributing to layer
    Creating /layers/paketo-buildpacks_spring-boot/helper/exec.d/spring-cloud-bindings
    Writing profile.d/helper
  Web Application Type: Contributing to layer
    Servlet web application detected
    Writing env.launch/BPL_JVM_THREAD_COUNT.default
  Spring Cloud Bindings 1.6.0: Contributing to layer
    Reusing cached download from buildpack
    Copying to /layers/paketo-buildpacks_spring-boot/spring-cloud-bindings
  Image labels:
    org.opencontainers.image.title
    org.opencontainers.image.version
    org.springframework.boot.spring-configuration-metadata.json
    org.springframework.boot.version
===> EXPORT
Reusing layers from image 'index.docker.io/itsrajivsrivastava/spring-petclinic@sha256:129f1e4231095c7504e01a4f233487b056eb28975b93f4dbb6195534bee4220e'
Adding layer 'paketo-buildpacks/bellsoft-liberica:helper'
Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
Reusing layer 'paketo-buildpacks/executable-jar:class-path'
Adding layer 'paketo-buildpacks/spring-boot:helper'
Adding layer 'paketo-buildpacks/spring-boot:spring-cloud-bindings'
Adding layer 'paketo-buildpacks/spring-boot:web-application-type'
Adding 1/1 app layer(s)
Adding layer 'launcher'
Adding layer 'config'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
Adding label 'org.opencontainers.image.title'
Adding label 'org.opencontainers.image.version'
Adding label 'org.springframework.boot.spring-configuration-metadata.json'
Adding label 'org.springframework.boot.version'
*** Images (sha256:b6d6a0944600be3552c0b33e0a0759a12b422168484ffff55f9f9e0be4c93217):
      index.docker.io/itsrajivsrivastava/spring-petclinic:latest
      index.docker.io/itsrajivsrivastava/spring-petclinic:b1.20201009.190910
Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
Adding cache layer 'paketo-buildpacks/maven:application'
Adding cache layer 'paketo-buildpacks/maven:cache'
===> COMPLETION
Build successful

Test Build Image

Pull image from Docker-hub:

docker pull itsrajivsrivastava/spring-petclinic

Run this pulled image on docker:

docker run -p 8080:8080 itsrajivsrivastava/spring-petclinic

Now, test in the browser: http://localhost:8080

How to build Dot Net application using TBS

The TBS installation, YAML setup configuration, and GitHub/DockerHub secrets are the same for .NET as well. The deployment process is also the same as for Java on Docker. There is a separate .NET buildpack which will be applied to the ASP.NET source code project.

Now, we will create TBS Dot Net image:

$ kp image create dotnetbuildtest \
  --tag index.docker.io/itsrajivsrivastava/dotnetbuildtest:latest \
  --namespace tbs-demo \
  --git https://github.com/rajivmca2004/DotNetBuild \
  --git-revision master

#Verify
$ kp image status dotnetbuildtest -n tbs-demo
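
As with the Java image, you can follow the .NET build logs with the kp CLI (a quick check, assuming the image name created above):

# Watch the running build logs for the .NET image
$ kp build logs dotnetbuildtest -n tbs-demo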

If you face this error in .NET apps:

Unable to start Kestrel.
System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress

Kestrel tries to bind to both IPv4 and IPv6, and the K8s environment may not let it bind to IPv6. Set the PORT env variable to 8080 to force it to bind only to IPv4.

Fix: Add this in the K8s deployment manifest file:

         env:
         - name: PORT
           value: "8080"
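
Alternatively, you can patch the environment variable with kubectl instead of editing the manifest by hand (a hedged sketch; the deployment name dotnetbuildtest-deployment is hypothetical, replace it with your actual deployment name):

# Set the PORT env var on an existing deployment (hypothetical deployment name)
$ kubectl set env deployment/dotnetbuildtest-deployment PORT=8080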

FAQ

Q: Where exactly does the build happen: on the local machine or on the K8s cluster?
A: Builds happen in pods on the cluster; basically everything with the service happens in the K8s cluster. You can interact with the service with the kp CLI.

Q: Where does it store all dependent libraries: on K8s or on the machine from where kp build starts?
A: App-dependent libraries live in the stack/buildpacks, which are on the K8s cluster.

Q: Does it keep an image copy locally?
A: It does not keep a copy of the app image; it only uploads to a registry.

Q: Can we cover all CI jobs for the build configuration pipeline and deploy with CD tools/plugins?
A: TBS is meant to be a solution that works well in a CI/CD setting. There is currently an integration with Concourse CI via https://github.com/pivotal/concourse-kpack-resource. It can also be integrated with Jenkins with additional configuration.

Q: Can a buildpack be modified, or a new custom builder created?
A: Yes, follow this guide – https://buildpacks.io/docs/operator-guide/create-a-builder/

ArgoCD GitOps in 30 mins: Setup CD pipeline and deploy an image on Kubernetes

Disclaimer: This blog content has been taken from my latest book:

“Cloud Native Microservices with Spring and Kubernetes”

I have recently worked with and tried out a wonderful continuous delivery (CD) tool, ArgoCD. It's an awesome deployment tool, specially designed to deploy microservices workloads on Kubernetes. It's a declarative GitOps continuous delivery tool for Kubernetes, with an awesome web UI dashboard to monitor and manage deployments, and it is directly linked with a source code repo like GitHub.

Objective:

  • Why ArgoCD?
  • Prerequisite
  • How to install ArgoCD on Kubernetes cluster
  • How to use ArgoCD using UI and CLI headless modes
  • Create a deployment app in ArgoCD with a sample GitHub Repo and sync
  • Other Kubernetes Operations from UI:

Why ArgoCD?

It works on a pull mechanism. Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.

I like the awesome auto-sync feature: any small change in the K8s deployment manifest source code is automatically synced and deployed to the K8s cluster.

Prerequisite

A Kubernetes cluster should be installed and you should be logged in. You can use KIND (Kubernetes IN Docker) or Minikube for local testing. I have used a TKG Kubernetes cluster.

Kubectl CLI should be installed.

How to install ArgoCD on Kubernetes cluster

a. Install ArgoCD

Refer these easy official docs to install:

https://tanzu.vmware.com/developer/guides/ci-cd/argocd-gs/
https://argoproj.github.io/argo-cd/getting_started/

How to use ArgoCD

Start ArgoCD Server

$ kubectl port-forward svc/argocd-server -n argocd 9080:443

Forwarding from 127.0.0.1:9080 -> 8080
Forwarding from [::1]:9080 -> 8080

Create a deployment app in ArgoCD with a sample GitHub Repo and sync

You are now almost ready to deploy your application. However, first you need to tell ArgoCD about your deployment target. By default, if you do not add an additional Kubernetes cluster target, ArgoCD will deploy applications to the cluster on which it is installed. To add your target Kubernetes cluster to ArgoCD, use the following:

$ argocd cluster add target-k8s

This will add an ArgoCD service account onto the cluster, which will allow ArgoCD to deploy applications to it.
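
As a quick sanity check (assuming the argocd CLI is installed and logged in), you can list the clusters registered with ArgoCD:

# Verify the target cluster has been registered
$ argocd cluster list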

Create an App in ArgoCD with a sample GitHub Repo

There are two ways to create an app in ArgoCD:

  1. UI mode
  2. CLI mode (headless)

1. UI mode

Login to ArgoCD:

https://localhost:9080/

# User Id - admin, 
# Password - Can be retrieved from this command:

$ kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
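
For convenience, you can capture the initial admin password into a shell variable (a small sketch based on the same command as above):

# Store the initial admin password (the argocd-server pod name) in a variable
$ ARGOCD_PWD=$(kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2)
$ echo $ARGOCD_PWD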

Create APP – using ArgoCD UI

  1. Give a project name
  2. Select default cluster
  3. Select your Kubernetes cluster namespace
  4. You need the URL of the source code repo. I am using a GitHub repo.
  5. Select the “Target Revision” of the source code. I have used “HEAD”.
  6. Add the “Path” pointing to the Kubernetes deployment manifest files location.

Note: You can also click on the “Sync” button in the ArgoCD UI.

Now, simply forward the port as you did for the ArgoCD UI.

$ kubectl port-forward svc/online-store-k8s-demo -n default 1090:8080

Once completed, the “online-store-k8s-demo” app will be available at http://localhost:1090. You can open this URL in the browser now.

Other Kubernetes Operations from UI:

  1. Logging – view POD/deployment/service and other K8s object logs on the ArgoCD UI dashboard
  2. Delete K8s objects
  3. Sync any specific object, e.g. re-deploy/sync a selected deployment
  4. Rollback and re-deploy from the UI
  5. Track all events on K8s objects
  6. Compare source code changes from the previous revision

2. CLI mode (headless)

Login through the CLI (optional):

$ argocd login localhost:9080

Manually using CLI

$ argocd app create online-store-k8s-demo --repo https://github.com/rajivmca2004/online-store-k8s-demo --path . --dest-server  https://kubernetes.default.svc --dest-namespace default

Once this completes, you can see the status and configuration of your app by running the following:

$ argocd app list

For a more detailed view of your application configuration, run:

$ argocd app get online-store-k8s-demo

Initially, your app will be Out of Sync with no health status. Now you are ready to sync your application to your target cluster. To do this, simply use the sync command for your application:

$ argocd app sync online-store-k8s-demo
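
Optionally, you can wait for the app to become healthy and enable auto-sync so future Git commits are deployed automatically (a hedged sketch using standard argocd CLI options):

# Wait until the app reports a healthy status
$ argocd app wait online-store-k8s-demo --health

# Enable automated sync for the app
$ argocd app set online-store-k8s-demo --sync-policy automated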

Build ASP.Net core image and deploy on Kubernetes with Contour ingress controller and MetalLB load balancer

In this blog, I will cover how to create an OCI docker image from a Windows ASP.NET application as a .NET Core container using the open source “Pack” API, and deploy it on a Kubernetes cluster using the open source Contour ingress controller. We will also set up the MetalLB load balancer on the Kubernetes cluster.

Objective:

  1. Build .Net Core OCI docker image of ASP .Net application using Pack buildpack API
  2. Run this docker image on docker for quick docker verification
  3. Push docker image to image registry Harbor
  4. Install and configure Contour ingress controller
  5. MetalLB load balancer for Kubernetes LoadBalancer service to expose as an external IP to public
  6. Create a deployment and Service script to deploy docker image
  7. Create an ingress resource and expose this .NET app to an external IP using the Contour ingress controller

Prerequisite:

  • Kubernetes cluster setup. Note: I have used VMware’s Tanzu Kubernetes Grid (TKG)
  • Kubectl CLI
  • Pack buildpack API
  • Image registry Harbor setup
  • git CLI to download Github source code
  • MacOS/Ubuntu Linux or any shell

1. Build OCI docker image of ASP .Net application using Pack buildpack API:

Install and configure “Pack” API. I have installed on Ubuntu Linux:

wget https://github.com/buildpacks/pack/releases/download/v0.11.2/pack-v0.11.2-linux.tgz
tar xvf pack-v0.11.2-linux.tgz
mv pack /usr/local/bin
# Browse all suggested builders

$ pack suggest-builders	
Suggested builders:
	Google:                gcr.io/buildpacks/builder                    Ubuntu 18 base image with buildpacks for .NET, Go, Java, Node.js, and Python
	Heroku:                heroku/buildpacks:18                         heroku-18 base image with buildpacks for Ruby, Java, Node.js, Python, Golang, & PHP
	Paketo Buildpacks:     gcr.io/paketo-buildpacks/builder:base        Ubuntu bionic base image with buildpacks for Java, NodeJS and Golang
	Paketo Buildpacks:     gcr.io/paketo-buildpacks/builder:full-cf     cflinuxfs3 base image with buildpacks for Java, .NET, NodeJS, Golang, PHP, HTTPD and NGINX
	Paketo Buildpacks:     gcr.io/paketo-buildpacks/builder:tiny        Tiny base image (bionic build image, distroless run image) with buildpacks for Golang

Tip: Learn more about a specific builder with:
	pack inspect-builder <builder-image>

# Set the full-cf builder as default, which supports most of the languages (Java, .NET, NodeJS, Golang, PHP, HTTPD and NGINX). Syntax: pack set-default-builder <builder-image>
$ pack set-default-builder gcr.io/paketo-buildpacks/builder:full-cf

# Clone the GitHub project
$ git clone https://github.com/rajivmca2004/paketo-samples-demo.git && cd paketo-samples-demo/dotnet-core/aspnet

# Building docker image and convert into .Net core container
$ pack build dotnet-aspnet-sample
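
Optionally, pack can also publish the image directly to your registry instead of the local Docker daemon (a hedged example; the Harbor project path below is an assumption based on the registry used later in this blog, and you must be logged in to that registry first):

# Build and publish directly to the image registry (assumed registry path)
$ pack build harbor.vmwaredc.com/library/dotnet-aspnet-sample --publish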

2. Run this docker image on docker for quick docker verification

# Running docker image for quick verification before deploying to K8s cluster
$ docker run --interactive --tty --env PORT=8080 --publish 8080:8080 dotnet-aspnet-sample

# Viewing
$ curl http://localhost:8080

3. Push docker image to image registry Harbor

$ docker login -u admin -p Harbor123 harbor.vmwaredc.com/library

#Push to Harbor image registry
$ docker push harbor.vmwaredc.com/library/dotnet-aspnet-sample
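
Note: pack built the image with the local name dotnet-aspnet-sample, so you may need to tag it with the registry path before the push succeeds (a small assumption about your local image name):

# Tag the locally built image with the Harbor registry path, then push
$ docker tag dotnet-aspnet-sample harbor.vmwaredc.com/library/dotnet-aspnet-sample
$ docker push harbor.vmwaredc.com/library/dotnet-aspnet-sample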

We need an ingress controller to expose Kubernetes services via an external IP. It works as an internal load balancer, exposing K8s services over HTTP/HTTPS as well as the REST APIs of microservices.

4. Install and configure Contour ingress controller

Refer to the Contour open source installation doc for more information.

# Run this command to download and install Contour open source project

$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
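
You can verify that the Contour and Envoy pods are up before moving on (a quick check, assuming the default projectcontour namespace created by the quickstart manifest):

# Verify Contour and Envoy pods are running
$ kubectl get pods -n projectcontour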

5. MetalLB load balancer for Kubernetes LoadBalancer service to expose as an external IP to public

Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (vSphere, TKG, GCP, AWS, Azure, OpenStack etc). If you’re not running on a supported IaaS platform, LoadBalancers will remain in the “pending” state indefinitely when created.

Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.

Please follow this MetalLB installation doc for the latest version.  Check that MetalLB is running.

$ kubectl get pods -n metallb-system

Create layer 2 configuration:

Create a metallb-configmap.yaml file and modify your IP range accordingly.

$vim metallb-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.10.40.200-10.10.40.250

# Configure MetalLB
$ kubectl apply -f metallb-configmap.yaml
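
Once MetalLB is configured, LoadBalancer services should receive an external IP from the address pool above. For example, you can check the Envoy service created by the Contour quickstart (assuming the default projectcontour namespace):

# The EXTERNAL-IP column should show an address from the MetalLB pool
$ kubectl get svc envoy -n projectcontour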

6. Create a deployment and Service script to deploy docker image

You can download and refer to the complete code from the GitHub repo.

$ vim dotnetcore-asp-deployment.yml

apiVersion: v1
kind: Service
metadata:
  name: dotnetcore-demo-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: dotnetcore-demo-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetcore-app-deployment
  namespace: default
spec:
  securityContext:
    runAsUser: 0
  selector:
    matchLabels:
      app: dotnetcore-demo-app
  replicas: 3
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: dotnetcore-demo-app
    spec:
      containers:
      - name: dotnetcore-demo-app
        image: harbor.vmwaredc.com/library/dotnet-aspnet-sample
        env:
        - name: PORT          # assumption: force the app to listen on 8080 (see the Kestrel/PORT note earlier in this blog)
          value: "8080"
        ports:
        - containerPort: 8080
          name: server

Deploy the .Net Core pods:

$ kubectl apply -f dotnetcore-asp-deployment.yml
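
Verify the rollout and pods before creating the ingress (names taken from the manifest above):

# Check the deployment rollout and the pods behind the service
$ kubectl rollout status deployment/dotnetcore-app-deployment
$ kubectl get pods -l app=dotnetcore-demo-app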

7. Create an ingress resource and expose this .NET app to an external IP using the Contour ingress controller

Create an ingress resource:

$ vim dotnetcore-ingress-cluster.yml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dotnetcore-demo-cluster1-gateway
  labels:
    app: dotnetcore-demo-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: dotnetcore-demo-service
          servicePort: 80

# Create ingress resource
$ kubectl apply -f dotnetcore-ingress-cluster.yml
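
Verify that the ingress resource was created and picked up by Contour:

$ kubectl get ingress dotnetcore-demo-cluster1-gateway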

Get the IP of the .Net Core K8s service to access the application:

$ kubectl get svc
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
dotnetcore-demo-service       LoadBalancer     10.109.51.83     10.10.40.201   80:30452/TCP   5m

To test, open this URL in your browser with the external IP => http://[EXTERNAL-IP]/

Tanzu Kubernetes Grid air gapped installation (TKG v1.1.2) on vSphere v6.7- offline environment

Recently, I have done a POC for a client with the latest TKG v1.1.2 (last updated: July '20) and faced a couple of challenges in an air-gapped environment. Installing and configuring TKG management and worker clusters in an air-gapped (no Internet/offline) environment is a nightmare. You need to plan properly and first download all required docker images of TKG and the related technology stacks and libraries to your private image registry. I have used the Harbor open source image registry in this blog.

This blog is not a replacement for the official docs. It's a quick reference to join all the dots: tips on how to manually download, tag, push and change images in K8s manifest files, prerequisites, and other quick references to save time and have everything on a single page.

I have followed the instructions for deploying TKG in an air-gapped environment (Deploy Tanzu Kubernetes Grid to vSphere in an Air-Gapped Environment); there are some more steps required to complete a successful installation. This blog covers TKG v1.1.2 on vSphere 6.7 in an air-gapped environment.

Note: I have used the latest Ubuntu v20.04 LTS on the bootstrap VM, which will have Internet connectivity. You can use CentOS or any other Linux flavour.

I have used TKG dev plan:

Prerequisites for the bootstrap env (Ubuntu/CentOS Linux), with packages/URLs:

  • DHCP should be enabled (Mandatory) – DHCP installation on Ubuntu: https://www.tecmint.com/install-dhcp-server-in-ubuntu-debian/
  • DNS enabled (Mandatory) – Public or private DNS must be enabled on the subnet IP range.
  • Ubuntu OS Core server install – Latest version 20.04 LTS / 18.04 LTS: https://ubuntu.com/download/alternative-downloads
  • HomeBrew, if not available (Optional) – It's good for installing CLIs and other required libraries, but not advisable for air-gapped env installation on K8s clusters. Linux/MacOS: https://docs.brew.sh/Homebrew-on-Linux, Ubuntu: https://brew.sh/ and https://medium.com/@smartsplash/using-homebrew-on-ubuntu-1089f70c8aa7
  • TKG CLI (Mandatory) – https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.1/vmware-tanzu-kubernetes-grid-11/GUID-install-tkg-set-up-tkg.html
  • Kubectl (Mandatory, for K8s) – https://v1-17.docs.kubernetes.io/docs/tasks/tools/install-kubectl/
    $ brew install kubectl
    $ kubectl version
  • Docker Desktop and CLI installation and setup (Mandatory) – Ubuntu: https://gist.github.com/rstacruz/297fc799f094f55d062b982f7dac9e41 and https://docs.docker.com/engine/install/ubuntu/. On Ubuntu, docker.io is available from the Ubuntu repositories (as of Xenial):
    # Install Docker
    sudo apt install docker.io
    sudo apt install docker-compose
    # Start/stop
    sudo systemctl start docker
    sudo systemctl stop docker
    # Verify
    sudo docker ps -a
    sudo docker rm -f <PID>
    docker info
  • Harbor (Mandatory) – DNS servers need to be set up for Harbor to resolve the domain name: https://goharbor.io/docs/1.10/install-config/
  • Follow the TKG v1.1.2 installation steps on an air-gapped env for vSphere v6.7 – https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.1/vmware-tanzu-kubernetes-grid-11/GUID-install-tkg-vsphere.html

Download the TKG binaries (Linux) from https://my.vmware.com/web/vmware/details?downloadGroup=TKG-110&productId=988&rPId=46507:

  • VMware Tanzu Kubernetes Grid 1.1.0 Kubernetes v1.18.2 OVA – Photon v3 Kubernetes 1.18.2 OVA
  • VMware Tanzu Kubernetes Grid 1.1 CLI – VMware Tanzu Kubernetes Grid CLI 1.1 Linux
  • VMware Tanzu Kubernetes Grid 1.1 Load Balancer OVA – Photon v3 capv haproxy v1.2.4 OVA
  • clusterawsadm Account Preparation Tool v0.5.3 – ClusterAdmin AWS v0.5.3 Linux
  • VMware Tanzu Kubernetes Grid 1.1 Extension manifests – VMware Tanzu Kubernetes Grid Extensions Manifest 1.1
  • Crash Diagnostics v0.2.2 – Crash Recovery and Diagnostics for Kubernetes 0.2.2 Linux

Step:1 Set up all prerequisites and install Ubuntu OS on the bootstrap VM.

Step:2 Download all binaries with your VMware credentials and push/copy all compressed tar files to the bootstrap VM.

Step:3 Make sure the Internet is available on the bootstrap VM from where you need to initiate the installation of TKG and other binaries.

Step:4 Install Docker Desktop and CLI. Make sure that the internet-connected machine has Docker installed and running.

Step:5 Install Harbor and create a certificate using OpenSSL with HTTPS config. Also, add the Harbor certificate paths in the Harbor config file .harbor/harbor.yml:

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /root/harbor/data/cert/harbor.vmwaredc.com.crt
  private_key: /root/harbor/data/cert/harbor.vmwaredc.com.key
# Edit /etc/hosts and add a host entry for the Harbor DNS name
10.109.19.13 harbor.vmwaredc.com

$ systemd-resolve --status


UI: https://harbor.vmwaredc.com
Verify: Make sure that you can connect to the private registry from the internet-connected machine.

$ docker login -u admin -p <password> harbor.vmwaredc.com/library
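
If Harbor is using a self-signed certificate, Docker on the bootstrap VM also needs to trust it before login/push will work (a hedged sketch using Docker's standard certs.d layout and the certificate path from the harbor.yml above):

# Trust the Harbor certificate for the Docker daemon
sudo mkdir -p /etc/docker/certs.d/harbor.vmwaredc.com
sudo cp /root/harbor/data/cert/harbor.vmwaredc.com.crt /etc/docker/certs.d/harbor.vmwaredc.com/ca.crt
sudo systemctl restart docker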

Step:6 Install kubectl CLI

Step:7 Install the tkg CLI on the same bootstrap VM with an external internet connection; follow the instructions in Download and Install the Tanzu Kubernetes Grid CLI to download, unpack, and install the Tanzu Kubernetes Grid CLI binary on your internet-connected system.

Step:8 Follow all the steps as mentioned in the installation doc. Open the vSphere UI console and provide all vCenter v6.7 server details, vLAN, resource configuration, etc. This will create the configuration file config.yaml in the .tkg folder, which holds the main TKG installation configuration.

Note: The vCenter server should be an IP or FQDN in lowercase letters only.

Step:9 Upload TKG and HAProxy OVA to vSphere UI console.

Step:10 Add this export before initiating TKG installation –

$ export TKG_CUSTOM_IMAGE_REPOSITORY="harbor.vmwaredc.com/library"

Step:11 Download all required docker images for the TKG installation, push them to Harbor, and follow these steps –

Note: The TKG repo pulls all docker images from the public image registry: https://registry.tkg.vmware.run/v2/

  • On the bootstrap machine with an internet connection, on which you have performed the initial setup tasks and installed the Tanzu Kubernetes Grid CLI, install yq 2.x. NOTE: You must use yq version 2.x; version 3.x does not work with this script.
  • Run the $ tkg get management-cluster command. Running a tkg command for the first time installs the necessary Tanzu Kubernetes Grid configuration files in the ~/.tkg folder on your system. The script that you create and run in subsequent steps requires the files in the ~/.tkg/bom folder to be present on your machine. Note: TKG v1.1.2 picks the bom/bom-1.1.2+vmware.1.yaml image file.
  • Set the IP address or FQDN of your local registry as an environment variable. In the following command example, replace custom-image-repository.io with the address of your private Docker registry.
  • Copy and paste the following script in a text editor, and save it as gen-publish-images.sh:
#!/usr/bin/env bash
# Copyright 2020 The TKG Contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
BOM_DIR=${HOME}/.tkg/bom
if [ -z "$TKG_CUSTOM_IMAGE_REPOSITORY" ]; then
    echo "TKG_CUSTOM_IMAGE_REPOSITORY variable is not defined"
    exit 1
fi
for TKG_BOM_FILE in "$BOM_DIR"/*.yaml; do
    # Get actual image repository from BoM file
    actualImageRepository=$(yq .imageConfig.imageRepository "$TKG_BOM_FILE" | tr -d '"')
    # Iterate through BoM file to create the complete Image name
    # and then pull, retag and push image to custom registry
    yq .images "$TKG_BOM_FILE" | jq -c '.[]' | while read -r i; do
        # Get imagePath and imageTag
        imagePath=$(jq .imagePath <<<"$i" | tr -d '"')
        imageTag=$(jq .tag <<<"$i" | tr -d '"')
        # create complete image names
        actualImage=$actualImageRepository/$imagePath:$imageTag
        customImage=$TKG_CUSTOM_IMAGE_REPOSITORY/$imagePath:$imageTag
        echo "docker pull $actualImage"
        echo "docker tag $actualImage $customImage"
        echo "docker push $customImage"
        echo ""
    done
done 
  • Make the script executable: chmod +x gen-publish-images.sh
  • Generate a new version of the script that is populated with the address of your private Docker registry: ./gen-publish-images.sh > publish-images.sh
  • Verify that the generated version of the script contains the correct registry address: cat publish-images.sh
  • Make the script executable: chmod +x publish-images.sh
  • Log in to your local private registry: docker login ${TKG_CUSTOM_IMAGE_REPOSITORY}
  • Run the script to pull the required images from the public Tanzu Kubernetes Grid registry, retag them, and push them to your private registry: ./publish-images.sh
  • When the script finishes, you can (optionally) turn off your internet connection; after that, the Internet is not required for TKG.
  • Modify the TKG dev installation plan. Run the following commands from the home directory, one level up (outside of the .tkg folder location):
$ export REGISTRY="harbor.vmwaredc.com"
$ export NAMESERVER="10.109.19.5"
$ export DOMAIN="vmwaredc.com"
$ cat > /tmp/harbor.sh <<EOF
echo "nameserver $NAMESERVER" > /usr/lib/systemd/resolv.conf
echo "domain $DOMAIN" >> /usr/lib/systemd/resolv.conf
rm /etc/resolv.conf
ln -s /usr/lib/systemd/resolv.conf /etc/resolv.conf
mkdir -p /etc/containerd
echo "" > /etc/containerd/config.toml
sed -i '1 i\# Use config version 2 to enable new configuration fields.' /etc/containerd/config.toml
sed -i '2 i\# Config file is parsed as version 1 by default.' /etc/containerd/config.toml
sed -i '3 i\version = 2' /etc/containerd/config.toml
sed -i '4 i\ ' /etc/containerd/config.toml
sed -i '5 i\[plugins]' /etc/containerd/config.toml
sed -i '6 i\  [plugins."io.containerd.grpc.v1.cri"]' /etc/containerd/config.toml
sed -i '7 i\    sandbox_image = "registry.tkg.vmware.run/pause:3.2"' /etc/containerd/config.toml
sed -i '8 i\    [plugins."io.containerd.grpc.v1.cri".registry]' /etc/containerd/config.toml
sed -i '9 i\      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]' /etc/containerd/config.toml
sed -i '10 i\        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."$REGISTRY"]' /etc/containerd/config.toml
sed -i '11 i\          endpoint = ["https://$REGISTRY"]' /etc/containerd/config.toml
sed -i '12 i\      [plugins."io.containerd.grpc.v1.cri".registry.configs]' /etc/containerd/config.toml
sed -i '13 i\        [plugins."io.containerd.grpc.v1.cri".registry.configs."$REGISTRY"]' /etc/containerd/config.toml
sed -i '14 i\          [plugins."io.containerd.grpc.v1.cri".registry.configs."$REGISTRY".tls]' /etc/containerd/config.toml
sed -i '15 i\            insecure_skip_verify = true' /etc/containerd/config.toml
systemctl restart containerd
EOF
 
$ awk '{print "    -", $0}' /tmp/harbor.sh > /tmp/harbor1.yaml
$ awk '{print "      -", $0}' /tmp/harbor.sh > /tmp/harbor2.yaml
$ sed -i '197 e cat /tmp/harbor1.yaml\n' ~/.tkg/providers/infrastructure-vsphere/v0.6.5/cluster-template-dev.yaml
$ sed -i '249 e cat /tmp/harbor2.yaml\n' ~/.tkg/providers/infrastructure-vsphere/v0.6.5/cluster-template-dev.yaml
 
$ rm /tmp/harbor1.yaml /tmp/harbor2.yaml /tmp/harbor.sh

Step:12 Run this on the terminal to initiate the installation process; it will create the .tkg folder and the required config file. In v1.1.2, the bom folder has all image repositories.

$ sudo tkg init --ui -v 6

Step:13 As soon as the KIND container is up, exec into the KIND cluster and run the script below.

Identify the KIND docker container by:

$ docker ps -a
$ docker exec -it <KIND docker image id> /bin/sh
echo '# explicitly use v2 config format
version = 2
# set default runtime handler to v2, which has a per-pod shim
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.tkg.vmware.run/pause:3.2"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.vmwaredc.com"]
          endpoint = ["https://harbor.vmwaredc.com"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.vmwaredc.com"]
          [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.vmwaredc.com".tls]
            insecure_skip_verify = true' > /etc/containerd/config.toml
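
Assumption: for the new config.toml to take effect, containerd inside the KIND container may need to be restarted (similar to the restart done in the harbor.sh snippet above):

# Restart containerd inside the KIND container so it picks up the new registry config
systemctl restart containerd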

Step:14 At this step, the management cluster is created. Now you can create workload clusters as per the installation instructions.

Step:15 To visualize, monitor and inspect TKG Kubernetes clusters, install the Octant UI dashboard. Octant should immediately launch your default web browser at http://127.0.0.1:7777/#/cluster-overview

$ octant

Note: Or to run it on a specific host and fixed port:

OCTANT_LISTENER_ADDR=0.0.0.0:8900 octant

 

Important Trick:

Pull and push docker images in Air gapped environment

Now your K8s cluster is ready, and next you will likely want to install a K8s deployment or other K8s workloads that pull dependent images from the public Internet. A Kubernetes cluster running in an air-gapped environment can't download any image from public repositories (Docker Hub, docker.io, gcr.io, etc.).

Refer to my short blog on how to do this: Pull and push docker images in Air gapped (No Internet) environment