Thursday, 11 May 2023

How to create a Multibranch Pipeline

Creating a Multibranch Pipeline

The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches which contain a Jenkinsfile in source control.

This eliminates the need for manual Pipeline creation and management.

To create a Multibranch Pipeline:

  • Click New Item on the Jenkins home page.

  • Enter a name for your Pipeline, select Multibranch Pipeline and click OK.

Jenkins uses the name of the Pipeline to create directories on disk. Pipeline names which include spaces may uncover bugs in scripts which do not expect paths to contain spaces.

  • Add a Branch Source (for example, Git) and enter the location of the repository.

  • Save the Multibranch Pipeline project.

Upon Save, Jenkins automatically scans the designated repository and creates an appropriate item for each branch in the repository that contains a Jenkinsfile.

By default, Jenkins will not automatically re-index the repository for branch additions or deletions (unless using an Organization Folder), so it is often useful to configure the Multibranch Pipeline to periodically re-index in its job configuration.


Additional Environment Variables

Multibranch Pipelines expose additional information about the branch being built through the env global variable, such as:

BRANCH_NAME

Name of the branch for which this Pipeline is executing, for example master.

CHANGE_ID

An identifier corresponding to some kind of change request, such as a pull request number.

Additional environment variables are listed in the Global Variable Reference.
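For example, a shell build step inside a Multibranch Pipeline job can branch its behaviour on these variables. The following is a minimal sketch (the echo messages are placeholders):

# Runs inside a build step of a Multibranch Pipeline job.
# BRANCH_NAME is always set; CHANGE_ID is only set when building a change/pull request.
if [ -n "${CHANGE_ID:-}" ]; then
  echo "Validating change request #${CHANGE_ID}"
else
  echo "Building branch ${BRANCH_NAME}"
fi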

Supporting Pull Requests

Multibranch Pipelines can be used for validating pull/change requests with the appropriate plugin, such as the GitHub Branch Source or Bitbucket Branch Source plugin.

Please consult the plugin documentation for further information on how to use those plugins.

Thursday, 19 January 2023

How to generate Kubernetes YAML manifests

 

YAML Autogeneration Using the Kubernetes Extension

One of the easiest ways to create Kubernetes YAML is to use the Visual Studio Code Kubernetes extension.

Install the Kubernetes VS Code extension, and it will help you develop manifests for most Kubernetes objects. It also supports deploying apps to local and remote Kubernetes clusters.

All you have to do is start typing the object name, and the extension will populate the options for you. Then, based on your selection, it autogenerates the basic YAML structure for you.




This extension supports YAML generation for Pods, Deployments, StatefulSets, ReplicaSets, Persistent Volumes (PV), Persistent Volume Claims (PVC), etc.
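For illustration, the skeleton generated for a Deployment looks roughly like the following (the exact output may differ between extension versions; the name and image values below are placeholders you would edit):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:latest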

Create YAML Manifest Using Kubectl Dry Run

You can create the manifests using kubectl imperative commands. The --dry-run=client flag, combined with -o yaml, prints the full manifest template without creating anything on the cluster.

However, you cannot create every Kubernetes resource YAML using a dry run. For example, you cannot create a StatefulSet or a PersistentVolume this way, because kubectl has no imperative create command for them.

Note: If you are preparing for Kubernetes certifications like CKA, CKAD, or CKS, imperative commands come in handy during the exam.

Kubectl YAML Dry Run Examples

Let’s look at the examples to generate YAML using a dry run and write it to an output file.

Create Pod YAML

Create a pod YAML named mypod that uses the image nginx:latest.

kubectl run mypod --image=nginx:latest \
            --labels type=web \
            --dry-run=client -o yaml > mypod.yaml
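The generated file is only a starting point; after reviewing or editing it, you can create the pod from it with the standard apply command:

kubectl apply -f mypod.yaml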

Create Pod Service YAML

Generate YAML for a service that exposes the pod on a NodePort. This will only work if the pod (mypod, in this case) is already running.

kubectl expose pod mypod \
    --port=80 \
    --name mypod-service \
    --type=NodePort \
    --dry-run=client -o yaml > mypod-service.yaml

Create NodePort Service YAML

Create a NodePort service with node port 30001 and a service-to-pod TCP port mapping of 80:80.

kubectl create service nodeport mypod \
    --tcp=80:80 \
    --node-port=30001 \
    --dry-run=client -o yaml > mypod-service.yaml

Create Deployment YAML

Create a deployment named mydeployment with the nginx:latest image.

kubectl create deployment mydeployment \
    --image=nginx:latest \
    --dry-run=client -o yaml > mydeployment.yaml

Create Deployment Service YAML

Create a NodePort service YAML for the deployment mydeployment with service port 8080.

kubectl expose deployment mydeployment \
    --type=NodePort \
    --port=8080 \
    --name=mydeployment-service \
    --dry-run=client -o yaml > mydeployment-service.yaml

Create Job YAML

Create a job named myjob with the nginx image.

kubectl create job myjob \
    --image=nginx:latest \
    --dry-run=client -o yaml

Create Cronjob YAML

Create a cronjob named mycronjob with the nginx image and a cron schedule.

kubectl create cj mycronjob \
    --image=nginx:latest \
    --schedule="* * * * *" \
    --dry-run=client -o yaml

I have given generic YAML examples. You can further change parameters and use them as per your requirements.

Kubectl & Dry Run Alias

To make things fast, you can set up an alias for the kubectl command in ~/.bashrc or ~/.zshrc as follows, so that you don't have to type kubectl every time.

alias k=kubectl

You can also set up an alias for the kubectl dry-run parameters as follows.

alias kdr='kubectl --dry-run=client -o yaml'

You can then use the alias as follows.

kdr run web --image=nginx:latest > nginx.yaml

Kubeconfig File Explained

Kubeconfig is a YAML file with all the Kubernetes cluster details, the certificate, and the secret token needed to authenticate to the cluster. You might get this config file directly from the cluster administrator, or from a cloud platform if you are using a managed Kubernetes cluster.

When you use kubectl, it uses the information in the kubeconfig file to connect to the Kubernetes cluster API server. The default location of the kubeconfig file is $HOME/.kube/config.

Example Kubeconfig File

Here is an example of a kubeconfig. It needs the following key information to connect to a Kubernetes cluster.

  1. certificate-authority-data: Cluster CA
  2. server: Cluster endpoint (IP/DNS of master node)
  3. name: Cluster name
  4. user: name of the user/service account.
  5. token: Secret token of the user/service account.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <ca-data-here>
    server: https://your-k8s-cluster.com
  name: <cluster-name>
contexts:
- context:
    cluster:  <cluster-name>
    user:  <cluster-name-user>
  name:  <cluster-name>
current-context:  <cluster-name>
kind: Config
preferences: {}
users:
- name:  <cluster-name-user>
  user:
    token: <secret-token-here>

Different Methods to Connect Kubernetes Cluster With Kubeconfig File

You can use the kubeconfig in different ways, and each way has its own precedence. Here is the precedence, in order:

  1. Command-line flag: A kubeconfig passed to kubectl with the --kubeconfig flag overrides all other configs. It has the highest precedence.
  2. Environment variable: The KUBECONFIG environment variable overrides the default config file and its current context.
  3. Current context: The current context in the default $HOME/.kube/config file has the lowest precedence.
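The following sketch illustrates the precedence, assuming two hypothetical kubeconfig files already exist in the .kube directory:

# The env variable points kubectl at the dev cluster.
export KUBECONFIG=$HOME/.kube/dev_cluster_config

# Uses dev_cluster_config (from the env variable).
kubectl get nodes

# The --kubeconfig flag takes precedence over the env variable for this one command.
kubectl get nodes --kubeconfig=$HOME/.kube/test_config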

Now let's take a look at all three ways to use the kubeconfig file.

Method 1: Connect to Kubernetes Cluster With Kubeconfig Kubectl Context

To connect to the Kubernetes cluster, the basic prerequisite is the kubectl CLI. If you don't have it installed, follow the official kubectl installation instructions.

Now follow the steps given below to use the kubeconfig file to interact with the cluster.

Step 1: Move kubeconfig to .kube directory.

Kubectl interacts with the Kubernetes cluster using the details available in the kubeconfig file. By default, kubectl looks for the config file in the $HOME/.kube directory.

Let's move the kubeconfig file to the .kube directory. Replace /path/to/kubeconfig with the current path of your kubeconfig file.

mv /path/to/kubeconfig ~/.kube

Step 2: List all cluster contexts

You can have any number of kubeconfig files in the .kube directory. Each config will have a unique context name (i.e., the name of the cluster). You can validate the kubeconfig file by listing its contexts with the following command; it lists the context names, which correspond to cluster names.

kubectl config get-contexts

Step 3: Set the current context

Now you need to set the current context from your kubeconfig file. You can set it using the following command. Replace <cluster-name> with your listed context name.

kubectl config use-context <cluster-name>  

For example,

kubectl config use-context my-dev-cluster
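To confirm that the switch took effect, you can print the active context before validating connectivity in the next step:

kubectl config current-context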

Step 4: Validate the Kubernetes cluster connectivity

To validate the cluster connectivity, you can execute the following kubectl command to list the cluster nodes.

kubectl get nodes

Method 2: Connect with KUBECONFIG environment variable

You can set the KUBECONFIG environment variable to the kubeconfig file path to connect to the cluster. The variable must be available in whichever terminal session you run kubectl from. If you set this variable, it overrides the current cluster context.

You can set the variable using the following command, where dev_cluster_config is the kubeconfig file name.

export KUBECONFIG=$HOME/.kube/dev_cluster_config
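Setting the variable this way only affects the current terminal session. To make it persistent (assuming a bash shell; adjust the rc file for zsh), you can append it to your shell profile and then verify the active context:

echo 'export KUBECONFIG=$HOME/.kube/dev_cluster_config' >> ~/.bashrc

kubectl config current-context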

Method 3: Using Kubeconfig File With Kubectl

You can pass the Kubeconfig file with the Kubectl command to override the current context and KUBECONFIG env variable.

Here is an example to get nodes.

kubectl get nodes --kubeconfig=$HOME/.kube/dev_cluster_config

Alternatively, you can use:

KUBECONFIG=$HOME/.kube/dev_cluster_config kubectl get nodes

Merging Multiple Kubeconfig Files

Usually, when you work with Kubernetes services like GKE, all the cluster contexts get added to a single file. However, there are situations where you will be given a kubeconfig file with limited access to connect to prod or non-prod servers. To manage all clusters effectively using a single config, you can merge the other kubeconfig files into the default $HOME/.kube/config file using kubectl.

Let's assume you have three kubeconfig files in the $HOME/.kube/ directory.

  1. config (default kubeconfig)
  2. dev_config
  3. test_config

You can merge all three configs into a single file using the following command. Ensure you are running it from the $HOME/.kube directory.

KUBECONFIG=config:dev_config:test_config kubectl config view --merge --flatten > config.new

The above command creates a merged config named config.new.

Now rename the old $HOME/.kube/config file.

 mv $HOME/.kube/config $HOME/.kube/config.old

Then rename config.new to config.

mv $HOME/.kube/config.new $HOME/.kube/config

To verify the configuration, try listing the contexts from the config.

kubectl config get-contexts
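If you later need to hand out one cluster as a standalone file again, you can extract it from the merged config. This is a sketch; dev-cluster is a hypothetical context name from the merged file:

kubectl config view --minify --flatten --context=dev-cluster > dev-cluster-config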

How to Generate Kubeconfig File?

A kubeconfig needs the following important details.

  1. Cluster endpoint (IP or DNS name of the cluster)
  2. Cluster CA Certificate
  3. Cluster name
  4. Service account user name
  5. Service account token

Note: To generate a Kubeconfig file, you need to have admin permissions in the cluster to create service accounts and roles.

For this demo, I am creating a service account with a ClusterRole that has limited access to cluster-wide resources. You can also create a normal Role and RoleBinding that limits the user's access to a specific namespace.

Step 1: Create a Service Account

The service account name will be the user name in the kubeconfig. Here I am creating the service account in the kube-system namespace, as I am creating a ClusterRole. If you want a config that gives namespace-level limited access, create the service account in the required namespace.

kubectl -n kube-system create serviceaccount devops-cluster-admin

Step 2: Create a ClusterRole

Let's create a ClusterRole with limited privileges over cluster objects. You can add the required object access as per your requirements.

Execute the following command to create the ClusterRole.

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devops-cluster-admin
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
EOF

Step 3: Create ClusterRoleBinding

The following YAML is a ClusterRoleBinding that binds the devops-cluster-admin service account with the devops-cluster-admin ClusterRole.

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: devops-cluster-admin
subjects:
- kind: ServiceAccount
  name: devops-cluster-admin
  namespace: kube-system
EOF
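Optionally, before generating the kubeconfig, you can check that the binding grants the expected permissions by impersonating the service account with kubectl auth can-i:

# Should return "yes" for resources allowed by the ClusterRole.
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:devops-cluster-admin

# Should return "no" for anything outside it, for example creating deployments.
kubectl auth can-i create deployments --as=system:serviceaccount:kube-system:devops-cluster-admin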

Step 4: Get all Cluster Details & Secrets

We will retrieve all the required kubeconfig details and save them in variables. Then, finally, we will substitute them directly into the kubeconfig YAML.

export SA_TOKEN_NAME=$(kubectl -n kube-system get serviceaccount devops-cluster-admin -o=jsonpath='{.secrets[0].name}')

export SA_SECRET_TOKEN=$(kubectl -n kube-system get secret/${SA_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)

export CLUSTER_NAME=$(kubectl config current-context)

export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CLUSTER_NAME}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')

export CLUSTER_CA_CERT=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')

export CLUSTER_ENDPOINT=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
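Note: on Kubernetes 1.24 and newer, service accounts no longer get a long-lived token secret created automatically, so the SA_TOKEN_NAME lookup above may come back empty. In that case, you can request a token explicitly instead; the duration below is an arbitrary example and the API server may cap it:

# Kubernetes 1.24+: request a time-bound token for the service account.
export SA_SECRET_TOKEN=$(kubectl -n kube-system create token devops-cluster-admin --duration=8760h)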

Step 5: Generate the Kubeconfig with the Variables

If you execute the following command, all the variables get substituted and a config file named devops-cluster-admin-config gets generated.

cat << EOF > devops-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CLUSTER_NAME}
contexts:
- name: ${CLUSTER_NAME}
  context:
    cluster: ${CLUSTER_NAME}
    user: devops-cluster-admin
clusters:
- name: ${CLUSTER_NAME}
  cluster:
    certificate-authority-data: ${CLUSTER_CA_CERT}
    server: ${CLUSTER_ENDPOINT}
users:
- name: devops-cluster-admin
  user:
    token: ${SA_SECRET_TOKEN}
EOF

Step 6: Validate the generated Kubeconfig

To validate the kubeconfig, use it with a kubectl command to see if the cluster authenticates successfully.

kubectl get nodes --kubeconfig=devops-cluster-admin-config 

Kubeconfig File FAQs

Let’s look at some of the frequently asked Kubeconfig file questions.

Where do I put the Kubeconfig file?

The default kubeconfig file location is the $HOME/.kube/ folder in the home directory. Kubectl looks for a file named config in that folder. However, if you are using the KUBECONFIG environment variable, you can place the kubeconfig file in a preferred folder and refer to its path in the variable.

Where is the Kubeconfig file located?

By default, the kubeconfig file is located in the .kube directory under the user's home directory, that is, $HOME/.kube/config.

How to manage multiple Kubeconfig files?

You can store all your kubeconfig files in the $HOME/.kube directory. You then change the cluster context to connect to a specific cluster.

How to create a Kubeconfig file?

To create a kubeconfig file, you need the cluster endpoint details, the cluster CA certificate, and an authentication token. Then you create a YAML file of kind Config with all the cluster details.

How to use a Proxy with Kubeconfig?

If you are behind a corporate proxy, you can set proxy-url: https://proxy.host:port in the cluster section of your kubeconfig file to connect to the cluster through the proxy.
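For reference, the field goes inside the cluster entry of the kubeconfig. Here is a sketch reusing the placeholder values from the example above:

clusters:
- cluster:
    proxy-url: https://proxy.host:port
    certificate-authority-data: <ca-data-here>
    server: https://your-k8s-cluster.com
  name: <cluster-name>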

Conclusion

In this blog, we learned different ways to connect to the Kubernetes cluster using a custom Kubeconfig file.

