Thursday, 19 January 2023

How to Generate Kubernetes YAML Manifests

 

YAML Autogeneration Using the Kubernetes Extension

One of the easiest ways to create Kubernetes YAML is by using the Visual Studio Code Kubernetes extension.

Install the Kubernetes VS Code extension, and it will help you develop manifests for most Kubernetes objects. It also supports deploying apps to local and remote Kubernetes clusters.

All you have to do is start typing the object name, and the extension will populate the options for you. Then, based on your selection, it autogenerates the basic YAML structure, as shown in the following image.




This extension supports YAML generation for Pods, Deployments, StatefulSets, ReplicaSets, Persistent Volumes (PV), Persistent Volume Claims (PVC), etc.

Create YAML Manifest Using Kubectl Dry Run

You can create the manifests using the kubectl imperative commands. There is a flag called --dry-run that helps you create the entire manifest template.

However, you cannot create every Kubernetes resource YAML using dry-run. For example, you cannot generate a StatefulSet or a Persistent Volume this way, because kubectl has no imperative create command for them.

Note: If you are preparing for Kubernetes certifications like CKA, CKAD, or CKS, imperative commands come in handy during the exam.

Kubectl YAML Dry Run Examples

Let’s look at the examples to generate YAML using a dry run and write it to an output file.

Create Pod YAML

Create a pod YAML named mypod that uses the image nginx:latest.

kubectl run mypod --image=nginx:latest \
            --labels type=web \
            --dry-run=client -o yaml > mypod.yaml

Create a Pod service YAML

Generate YAML for a Pod Service that exposes a NodePort. This will only work if you have a running pod.

kubectl expose pod mypod \
    --port=80 \
    --name mypod-service \
    --type=NodePort \
    --dry-run=client -o yaml > mypod-service.yaml

Create NodePort Service YAML

Create a NodePort service with node port 30001 and a service-to-pod TCP port mapping on port 80.

kubectl create service nodeport mypod \
    --tcp=80:80 \
    --node-port=30001 \
    --dry-run=client -o yaml > mypod-service.yaml

Create Deployment YAML

Create a deployment named mydeployment with the image nginx:latest.

kubectl create deployment mydeployment \
    --image=nginx:latest \
    --dry-run=client -o yaml > mydeployment.yaml

Create Deployment Service YAML

Create a NodePort service YAML for the deployment mydeployment with service port 8080.

kubectl expose deployment mydeployment \
    --type=NodePort \
    --port=8080 \
    --name=mydeployment-service \
    --dry-run=client -o yaml > mydeployment-service.yaml

Create Job YAML

Create a job named myjob with the nginx image.

kubectl create job myjob \
    --image=nginx:latest \
    --dry-run=client -o yaml

Create Cronjob YAML

Create a cronjob named mycronjob with the nginx image and a cron schedule.

kubectl create cj mycronjob \
    --image=nginx:latest \
    --schedule="* * * * *" \
    --dry-run=client -o yaml

I have given generic YAML examples. You can further change parameters and use them as per your requirements.
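One thing to note: the dry-run output contains placeholder fields such as creationTimestamp: null and an empty status: {} that are harmless but noisy. As a sketch, you can filter them out with grep; the clean_manifest helper below is hypothetical, not part of kubectl.

```shell
# Hypothetical helper: filter the placeholder fields that kubectl
# dry-run emits (creationTimestamp: null, status: {}, resources: {})
# from a manifest piped through stdin.
clean_manifest() {
  grep -vE '^[[:space:]]*(creationTimestamp: null|status: [{][}]|resources: [{][}])[[:space:]]*$'
}

# Example with an inline manifest; normally you would pipe the
# kubectl dry-run output instead, e.g.:
#   kubectl run mypod --image=nginx:latest --dry-run=client -o yaml | clean_manifest
cat << 'EOF' | clean_manifest
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: mypod
status: {}
EOF
```

This is purely cosmetic; the placeholder fields are ignored by the API server if you leave them in.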

Kubectl & Dry Run Alias

To make things fast, you can set up an alias for the kubectl command in ~/.bashrc or ~/.zshrc as follows, so that you don’t have to type kubectl every time.

alias k=kubectl

You can also set up an alias for the kubectl dry-run parameters as follows.

alias kdr='kubectl --dry-run=client -o yaml'

You can execute the command as follows.

kdr run web --image=nginx:latest > nginx.yaml

Kubeconfig File Explained

Kubeconfig is a YAML file that holds the Kubernetes cluster details, the certificate, and the secret token needed to authenticate to the cluster. You might get this config file directly from the cluster administrator, or from a cloud platform if you are using a managed Kubernetes cluster.

When you use kubectl, it uses the information in the kubeconfig file to connect to the Kubernetes cluster API. The default location of the kubeconfig file is $HOME/.kube/config.

Example Kubeconfig File

Here is an example of a Kubeconfig. It needs the following key information to connect to the Kubernetes clusters.

  1. certificate-authority-data: Cluster CA
  2. server: Cluster endpoint (IP/DNS of master node)
  3. name: Cluster name
  4. user: name of the user/service account.
  5. token: Secret token of the user/service account.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <ca-data-here>
    server: https://your-k8s-cluster.com
  name: <cluster-name>
contexts:
- context:
    cluster:  <cluster-name>
    user:  <cluster-name-user>
  name:  <cluster-name>
current-context:  <cluster-name>
kind: Config
preferences: {}
users:
- name:  <cluster-name-user>
  user:
    token: <secret-token-here>

Different Methods to Connect Kubernetes Cluster With Kubeconfig File

You can supply the kubeconfig in different ways, and each way has its own precedence. Here is the precedence, in order:

  1. Command-Line Flag: A kubeconfig passed to kubectl with the --kubeconfig flag overrides all other configs. It has the highest precedence.
  2. Environment Variable: The KUBECONFIG env variable overrides the default config file.
  3. Kubectl Context: The current context in the default $HOME/.kube/config file has the lowest precedence; it is used only when neither of the above is set.
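The resolution logic can be sketched as a small shell function. This is illustrative only, not kubectl's actual implementation: an explicit --kubeconfig path wins, then the KUBECONFIG environment variable, then the default file.

```shell
# Illustrative sketch of kubeconfig precedence (not kubectl's real code).
resolve_kubeconfig() {
  flag_path="$1"                 # value passed via --kubeconfig, may be empty
  if [ -n "$flag_path" ]; then
    echo "$flag_path"            # 1. command-line flag: highest precedence
  elif [ -n "$KUBECONFIG" ]; then
    echo "$KUBECONFIG"           # 2. environment variable
  else
    echo "$HOME/.kube/config"    # 3. default config file: lowest precedence
  fi
}
```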

Now let’s take a look at all the three ways to use the Kubeconfig file.

Method 1: Connect to Kubernetes Cluster With Kubeconfig Kubectl Context

To connect to the Kubernetes cluster, the basic prerequisite is the kubectl CLI. If you don’t have the CLI installed, follow the instructions given here.

Now follow the steps given below to use the kubeconfig file to interact with the cluster.

Step 1: Move kubeconfig to .kube directory.

Kubectl interacts with the Kubernetes cluster using the details available in the kubeconfig file. By default, kubectl looks for the config file in the $HOME/.kube location.

Let’s move the kubeconfig file to the .kube directory. Replace /path/to/kubeconfig with your kubeconfig’s current path.

mv /path/to/kubeconfig ~/.kube

Step 2: List all cluster contexts

You can keep any number of kubeconfig files in the .kube directory. Each config has a unique context name (that is, the name of the cluster). You can validate a kubeconfig file by listing its contexts with the following command, which prints the context name for each cluster.

kubectl config get-contexts

Step 3: Set the current context

Now set the current context to the one from your kubeconfig file using the following command. Replace <cluster-name> with your listed context name.

kubectl config use-context <cluster-name>  

For example,

kubectl config use-context my-dev-cluster

Step 4: Validate the Kubernetes cluster connectivity

To validate the cluster connectivity, you can execute the following kubectl command to list the cluster nodes.

kubectl get nodes

Method 2: Connect with KUBECONFIG environment variable

You can set the KUBECONFIG environment variable to the kubeconfig file path to connect to the cluster. The variable must be set in whichever terminal session you run kubectl from. When set, it overrides the current cluster context.

You can set the variable using the following command, where dev_cluster_config is the kubeconfig file name.

export KUBECONFIG=$HOME/.kube/dev_cluster_config
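If you switch clusters often, you can wrap the export in a small helper. The kswitch function below is hypothetical, not a standard tool; it simply points KUBECONFIG at a named file under ~/.kube for the current shell session.

```shell
# Hypothetical helper: point KUBECONFIG at a named config file under
# ~/.kube for the current shell session only.
kswitch() {
  export KUBECONFIG="$HOME/.kube/$1"
  echo "KUBECONFIG set to $KUBECONFIG"
}
```

Running `kswitch dev_cluster_config` then makes every subsequent kubectl call in that shell use that file.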

Method 3: Using Kubeconfig File With Kubectl

You can pass the Kubeconfig file with the Kubectl command to override the current context and KUBECONFIG env variable.

Here is an example to get nodes.

kubectl get nodes --kubeconfig=$HOME/.kube/dev_cluster_config

Alternatively, you can use:

KUBECONFIG=$HOME/.kube/dev_cluster_config kubectl get nodes

Merging Multiple Kubeconfig Files

Usually, when you work with managed Kubernetes services like GKE, all the cluster contexts get added to a single file. However, there are situations where you will be given a kubeconfig file with limited access to connect to prod or non-prod servers. To manage all clusters effectively using a single config, you can merge the other kubeconfig files into the default $HOME/.kube/config file using the kubectl command.

Let’s assume you have three kubeconfig files in the $HOME/.kube/ directory.

  1. config (default kubeconfig)
  2. dev_config
  3. test_config

You can merge all three configs into a single file using the following command. Ensure you run the command from the $HOME/.kube directory.

KUBECONFIG=config:dev_config:test_config kubectl config view --merge --flatten > config.new

The above command creates a merged config named config.new.
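As an alternative to a permanent merge, kubectl also accepts a colon-separated list of files in the KUBECONFIG variable, so the three example configs can be used together without rewriting $HOME/.kube/config:

```shell
# Alternative to merging: kubectl treats KUBECONFIG as a colon-separated
# list of files, so several configs can be active at once.
KUBE_DIR="$HOME/.kube"
export KUBECONFIG="$KUBE_DIR/config:$KUBE_DIR/dev_config:$KUBE_DIR/test_config"
echo "$KUBECONFIG"
```

With this set, `kubectl config get-contexts` lists the contexts from all three files.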

Now rename the old $HOME/.kube/config file.

 mv $HOME/.kube/config $HOME/.kube/config.old

Rename the config.new to config.

mv $HOME/.kube/config.new $HOME/.kube/config

To verify the configuration, try listing the contexts from the config.

kubectl config get-contexts

How to Generate Kubeconfig File?

A kubeconfig needs the following important details.

  1. Cluster endpoint (IP or DNS name of the cluster)
  2. Cluster CA Certificate
  3. Cluster name
  4. Service account user name
  5. Service account token

Note: To generate a Kubeconfig file, you need to have admin permissions in the cluster to create service accounts and roles.

For this demo, I am creating a service account bound to a ClusterRole that has limited access to cluster-wide resources. You can also create a normal Role and RoleBinding to limit the user’s access to a specific namespace.

Step 1: Create a Service Account

The service account name will be the user name in the kubeconfig. Here I am creating the service account in the kube-system namespace, as I am creating a ClusterRole. If you want a config that gives limited, namespace-level access, create the service account in the required namespace.

kubectl -n kube-system create serviceaccount devops-cluster-admin

Step 2: Create a ClusterRole

Let’s create a ClusterRole with limited privileges over cluster objects. You can adjust the resource access as per your requirements.

Execute the following command to create the ClusterRole.

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devops-cluster-admin
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
EOF

Step 3: Create ClusterRoleBinding

The following YAML is a ClusterRoleBinding that binds the devops-cluster-admin service account to the devops-cluster-admin ClusterRole.

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: devops-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: devops-cluster-admin
subjects:
- kind: ServiceAccount
  name: devops-cluster-admin
  namespace: kube-system
EOF

Step 4: Get all Cluster Details & Secrets

We will retrieve all the required kubeconfig details and save them in variables. Then, finally, we will substitute them directly into the kubeconfig YAML.

Note: From Kubernetes v1.24 onwards, a token secret is no longer created automatically for a service account, so the first command below may return nothing. In that case, create the token secret manually or use the kubectl create token command.

export SA_TOKEN_NAME=$(kubectl -n kube-system get serviceaccount devops-cluster-admin -o=jsonpath='{.secrets[0].name}')

export SA_SECRET_TOKEN=$(kubectl -n kube-system get secret/${SA_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)

export CLUSTER_NAME=$(kubectl config current-context)

export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CLUSTER_NAME}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')

export CLUSTER_CA_CERT=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')

export CLUSTER_ENDPOINT=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')

Step 5: Generate the Kubeconfig With the Variables

If you execute the following commands, all the variables get substituted and a config file named devops-cluster-admin-config is generated.

cat << EOF > devops-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CLUSTER_NAME}
contexts:
- name: ${CLUSTER_NAME}
  context:
    cluster: ${CLUSTER_NAME}
    user: devops-cluster-admin
clusters:
- name: ${CLUSTER_NAME}
  cluster:
    certificate-authority-data: ${CLUSTER_CA_CERT}
    server: ${CLUSTER_ENDPOINT}
users:
- name: devops-cluster-admin
  user:
    token: ${SA_SECRET_TOKEN}
EOF

Step 6: Validate the generated Kubeconfig

To validate the kubeconfig, use it with a kubectl command to see if the cluster authenticates.

kubectl get nodes --kubeconfig=devops-cluster-admin-config 

Kubeconfig File FAQs

Let’s look at some of the frequently asked Kubeconfig file questions.

Where do I put the Kubeconfig file?

The default kubeconfig location is the $HOME/.kube/ folder in the home directory, where kubectl looks up the config by context name. However, if you are using the KUBECONFIG environment variable, you can place the kubeconfig file in any preferred folder and refer to its path in the KUBECONFIG environment variable.

Where is the Kubeconfig file located?

All the kubeconfig files are typically located in the .kube directory in the user home directory, that is, $HOME/.kube/config.

How to manage multiple Kubeconfig files?

You can store all the kubeconfig files in $HOME/.kube directory. You need to change the cluster context to connect to a specific cluster.

How to create a Kubeconfig file?

To create a kubeconfig file, you need the cluster endpoint details, the cluster CA certificate, and an authentication token. Then you create a Kubernetes YAML object of kind Config with all the cluster details.

How to use Proxy with Kubeconfig

If you are behind a corporate proxy, you can use proxy-url: https://proxy.host:port in your Kubeconfig file to connect to the cluster.
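For example, the proxy-url field sits inside the cluster entry of the kubeconfig; the proxy address below is a placeholder, and the rest mirrors the example config shown earlier:

```yaml
clusters:
- cluster:
    proxy-url: https://proxy.host:port
    certificate-authority-data: <ca-data-here>
    server: https://your-k8s-cluster.com
  name: <cluster-name>
```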

Conclusion

In this blog, we learned different ways to connect to the Kubernetes cluster using a custom Kubeconfig file.


Thursday, 17 November 2022

Kubernetes Cluster Monitoring with Grafana and Prometheus

 Monitoring your Kubernetes cluster is critical for ensuring that your services are always available and running. And before you scour the internet for a monitoring system, why not try Grafana and Prometheus Kubernetes cluster monitoring?

In this guide, you’ll learn how to monitor your Kubernetes cluster, viewing internal state metrics with a Prometheus and Grafana dashboard.

Read on so you can keep a close watch on your resources!

Prerequisites

  • A Linux machine with Docker installed — This tutorial uses an Ubuntu 20.04 LTS machine with Docker version 20.10.7. Here’s how to install Ubuntu.
  • A single node Kubernetes Cluster.
  • Helm Package Manager installed — For deploying the Prometheus operator.

Deploying the Kube-Prometheus Stack Helm Chart

Grafana and Prometheus Kubernetes cluster monitoring provides information on potential performance bottlenecks, cluster health, and performance metrics. At the same time, it lets you visualize network usage, resource usage patterns of pods, and a high-level overview of what is going on in your cluster.

But before setting up a monitoring system with Grafana and Prometheus, you’ll first deploy the kube-prometheus stack Helm chart. The stack contains Prometheus, Grafana, Alertmanager, Prometheus operator, and other monitoring resources.

1. SSH into your Ubuntu 20.04 machine (if you are running on a cloud server) or simply log into your locally installed Ubuntu 20.04 machine to begin.

2. Next, run the kubectl create command below to create a namespace named monitoring for all the Prometheus and Grafana related deployments.

kubectl create namespace monitoring
Creating a Namespace

3. Run the following helm repo commands to add the prometheus-community Helm repo and update your Helm repo.

# Add prometheus-community repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Update helm repo
helm repo update

4. After adding the Helm repo, run the helm install command below to deploy the kube-prometheus stack Helm chart. Replace prometheus with your desired release name.

This Helm chart sets up a full Prometheus Kubernetes monitoring stack by acting on a set of Custom Resource Definitions (CRDs).

helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring

Once the deployment completes, you’ll get the following output.

Deploying the kube-prometheus Stack

5. Finally, run the following command to confirm your kube-prometheus stack deployment.

kubectl get pods -n monitoring

The output below shows the deployment of the kube-prometheus stack. As you can see, each component in the stack is running in your cluster.

Listing Deployed Components from kube-prometheus Stack in monitoring Namespace

Accessing the Prometheus Instance

You’ve successfully deployed your Prometheus instance onto your cluster, and you’re almost ready to monitor your Kubernetes cluster. But how do you access your Prometheus instance? You’ll forward a local port 9090 to your cluster via your Prometheus service with the kubectl port-forward command.

1. Run the kubectl get command below to view all services in the monitoring namespace to check for your Prometheus service.

kubectl get svc -n monitoring

All the services deployed in the monitoring namespace are shown below, including the Prometheus service. You’ll use the Prometheus service to set up port-forwarding so your Prometheus instance can be accessible outside of your cluster.

Listing Deployed Services in the monitoring Namespace

2. Next, run the below kubectl port-forward command to forward the local port 9090 to your cluster via the Prometheus service (svc/prometheus-kube-prometheus-prometheus).

kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n monitoring 9090

But if you’re running a single-node Kubernetes cluster on a cloud server, run the following command instead.

kubectl port-forward --address 0.0.0.0 svc/prometheus-kube-prometheus-prometheus -n monitoring 9090

To run the kubectl port-forward command as a background process, freeing up your terminal for further use, append the & symbol at the end of the command. Follow up by pressing Ctrl+C keys to stop the port-forward foreground process (Doing so will not affect the port-forward background process).
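The background pattern described above can be sketched with sleep standing in for the long-running port-forward command (running the real command needs a live cluster):

```shell
# Background-process pattern used with port-forward, shown with `sleep`
# as a stand-in: `&` backgrounds the process, `$!` captures its PID,
# and `kill` stops it when you are done.
sleep 30 &                # e.g. kubectl port-forward svc/... 9090 &
PF_PID=$!
# ... use the forwarded port here ...
kill "$PF_PID"            # stop the background process when done
```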

3. Open your favorite web browser, and navigate to either of the URLs below to access your Prometheus instance.

  • Navigate to http://localhost:9090 if you’re on a local Ubuntu machine.
  • Navigate to your server’s IP address followed by port 9090 (i.e., http://YOUR_SERVER_IP:9090) if you’re using a cloud server.

For this tutorial, Prometheus is running on a cloud server.


If your Prometheus service works, you’ll get the following page on your web browser.

Accessing Prometheus

4. Lastly, on your terminal, press the Ctrl+C keys to close the port-forwarding process. Doing so makes Prometheus inaccessible on your browser.

Viewing Prometheus Kubernetes Cluster Internal State Metrics

Viewing your Kubernetes cluster’s internal state metrics is made possible with the kube-state-metrics (KSM) tool. With the KSM tool, you can keep track of the health and usage of your resources, as well as internal state objects. Some of the data points KSM exposes are node metrics, deployment metrics, and pod metrics.

The KSM tool comes pre-packaged in the kube-prometheus stack and is deployed automatically with the rest of the monitoring components.

You’ll port-forward a local port to your cluster via the kube-state-metrics service. Doing so lets KSM scrape the internal system metrics of your cluster and output a list of queries and values. But before port-forwarding, verify your KSM Kubernetes service first.

1. Run the below command to check for your kube-state-metrics Kubernetes service.

kubectl get svc -n monitoring | grep kube-state-metrics

Below, you can see the KSM Kubernetes service name (prometheus-kube-state-metrics) along with the ClusterIP. Note down the KSM Kubernetes service name as you’ll need it to perform the port forwarding in the next step.

Verifying the KSM Kubernetes Service

2. Next, run the below command to port-forward the prometheus-kube-state-metrics service to port 8080.

kubectl port-forward svc/prometheus-kube-state-metrics -n monitoring 8080 

If you are following along with this tutorial on an Ubuntu 20.04 machine hosted by a cloud provider, add the --address 0.0.0.0 flag to the kubectl port-forward command. Doing so allows external access to the local port via your server’s public IP address.

3. Finally, on your web browser, navigate to either of the URLs below to view the Kube Metrics page, as shown below.

  • Navigate to http://localhost:8080 if you’re on a local Ubuntu machine
  • Navigate to your server’s IP address followed by port 8080 (i.e., http://YOUR_SERVER_IP:8080) if you’re using a cloud server.

Click on the metrics link to access your cluster’s internal state metrics.

Accessing Kube Metrics

You can see below a cluster’s internal state metrics similar to yours.

Listing Cluster Internal State Metrics

Visualizing a Cluster’s Internal State Metric on Prometheus

You’ve deployed the kube-prometheus stack Helm chart, kube-state-metrics is scraping, and the Prometheus jobs are configured. As a result, CoreDNS, the kube-api server, the Prometheus operator, and other Kubernetes components have been automatically set up as targets on Prometheus.

1. Navigate to either the http://localhost:9090/targets or http://YOUR_SERVER_IP:9090/targets endpoint in your web browser. Doing so lets you verify that these targets have been properly configured.

Accessing the endpoint also lets you verify that Prometheus is scraping their metrics and storing the data in a Time-Series Database (TSDB).

Remember to port-forward Prometheus, as shown in step two of the “Accessing the Prometheus Instance” section, before navigating to the endpoint. You can also run it as a background process.

As you can see below, different Kubernetes internal components and monitoring components are configured as targets on Prometheus.

Viewing Pre-configured Monitoring Components as Prometheus Targets

2. Click on the Graph menu to get to a page where you’ll run a PromQL (Prometheus Query Language) query.

Accessing the Graph Page

3. Insert the sample PromQL (Prometheus Query Language) query below into the expression space provided, then click on Execute. The query returns the total amount of unused memory in your Cluster.

sum((container_memory_usage_bytes{container!="POD",container!=""} - on (namespace,pod,container) avg by (namespace,pod,container)(kube_pod_container_resource_requests{resource="memory"})) * -1 >0 ) / (1024*1024*1024)
Executing a PromQL Query

4. To view the results of the PromQL query executed in step 3 in a graphical format, click on Graph. This graph will display the total amount of unused memory in your cluster per given time.

With everything set up correctly, the sample cluster metric should look similar to the graph below.

Graphical view of PromQL query of cluster metric

Accessing the Grafana Dashboard

You may have noticed that the visualization capabilities of Prometheus are limited, as you are stuck with only a Graph option. Prometheus is great for scraping metrics from targets configured as jobs, aggregating those metrics, and storing them locally in a TSDB. But when it comes to standard resource monitoring, Prometheus and Grafana make a great duo.


Prometheus aggregates the metrics exported by server components such as the node exporter, CoreDNS, etc., while Grafana, with visualization as its strong suit, receives these metrics from Prometheus and displays them through numerous visualization options.

During the kube-prometheus stack Helm deployment, Grafana was automatically installed and configured, so you can now set up access to Grafana on your cluster.

To access your Grafana dashboard, you will first need to fetch your username and password, which are stored as secrets created by default in your Kubernetes cluster.

1. Run the following kubectl command to view the prometheus-grafana secret stored in your Kubernetes cluster in YAML format (-o yaml).

kubectl get secret -n monitoring prometheus-grafana -o yaml

As you see below, the username and password for accessing your Grafana dashboard are encoded in base64. Note down the values of the admin-password and admin-user secrets as you’ll need to decode them in the next step.

Viewing Secrets (admin-password and admin-user)

2. Next, run each command below to decode both secrets (admin-password and admin-user). Replace YOUR_USERNAME and YOUR_PASSWORD with the admin-user and admin-password secret values you noted in step one.

This tutorial doesn’t show the output of each command for security reasons.

# Decode and print the username
echo YOUR_USERNAME | base64 --decode
# Decode and print the password
echo YOUR_PASSWORD | base64 --decode
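As an illustration with a made-up value (not a real secret), the string admin encodes to YWRtaW4= in base64, and decoding is a simple round-trip:

```shell
# Illustration with a made-up value: "YWRtaW4=" is the base64
# encoding of the string "admin".
encoded="YWRtaW4="
decoded=$(echo "$encoded" | base64 --decode)
echo "$decoded"   # prints: admin
```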

3. Run the kubectl command below to forward local port 3000 to the Grafana service port 80. Doing so provides you access to Grafana’s web UI on your browser.

kubectl port-forward svc/prometheus-grafana -n monitoring 3000:80

Add the --address 0.0.0.0 flag if you are following along using an Ubuntu 20.04 machine hosted by a cloud provider.

4. Finally, on your browser, navigate to any of the endpoints below depending on your machine setup:

  • http://localhost:3000 (local)
  • or http://<YOUR_SERVER_IP>:3000 (cloud)

Enter your decoded secret values for admin-user and admin-password in the username and password fields provided.

Entering Grafana Username and Password

Once you are logged in, you’ll get the Grafana dashboard, as shown below.

Accessing Grafana Dashboard

Interacting with Grafana

By default, the Kube-Prometheus stack deploys Grafana with some pre-configured dashboards for each target configured in Prometheus. With these pre-configured dashboards, you will not need to manually set up a dashboard to visualize each metric aggregated by Prometheus.

Click on the dashboard icon —> Browse, and your browser redirects to a page where you’ll see a list of dashboards.

Accessing the list of Pre-configured Dashboards

Click on any of the pre-configured dashboards to view its visualizations. For this tutorial, click on the Kubernetes / Compute Resources / Namespace (Pods) dashboard.

Viewing a Pre-configured Dashboard

Below is a sample pre-configured dashboard for visualizing compute resource usage by Pods in any of the available namespaces.

For this tutorial, the Data source has been set to Prometheus and the namespace for visualization is set to monitoring.

Visualizing Compute Resource Usage

Conclusion

In this tutorial, you’ve learned how to deploy the Prometheus operator using Helm and viewed your Cluster internal state metrics to monitor your Kubernetes cluster. You’ve also configured Grafana and viewed your Cluster metrics by configuring your Grafana dashboard.

At this point, you already have fully functional Kubernetes cluster monitoring. 
