One of the easiest ways to create Kubernetes YAML manifests is to use the Visual Studio Code Kubernetes extension.
Install the Kubernetes VS Code extension, and it will help you develop manifests for most Kubernetes objects. It also supports deploying apps to local and remote Kubernetes clusters.
All you have to do is start typing the object name, and the extension will populate the options for you. Then, based on your selection, it autogenerates the basic YAML structure, as shown in the following image.
The extension supports YAML generation for Pods, Deployments, StatefulSets, ReplicaSets, Persistent Volumes (PVs), Persistent Volume Claims (PVCs), and more.
Create YAML Manifest Using Kubectl Dry Run
You can also create manifests using imperative kubectl commands. The --dry-run=client flag, combined with -o yaml, generates the full manifest template without actually creating the object in the cluster.
However, you cannot generate YAML for every Kubernetes resource this way. For example, you cannot create a StatefulSet or a Persistent Volume using a dry run.
Note: If you are preparing for Kubernetes certifications like CKA, CKAD, or CKS, imperative commands come in handy during the exam.
Kubectl YAML Dry Run Examples
Let's look at examples of generating YAML with a dry run and writing it to an output file.
Create Pod YAML
Create a Pod YAML named myapp that uses the image nginx:latest:
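kubectl run myapp --image=nginx:latest --dry-run=client -o yaml > myapp-pod.yaml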
I have given generic YAML examples. You can further change parameters and use them as per your requirements.
Kubectl & Dry Run Alias
To make things fast, you can set up an alias for the kubectl command in ~/.bashrc or ~/.zshrc as follows, so that you don't have to type kubectl every time.
alias k=kubectl
You can also set up a shortcut for the kubectl dry-run parameters, as shown below.
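Here is a minimal sketch for Bash (the variable name do is just a common convention from certification exam prep; note that zsh does not word-split unquoted variables by default, so this particular trick is Bash-specific):

export do='--dry-run=client -o yaml'
kubectl create deployment myapp --image=nginx:latest $do > myapp-deployment.yaml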
What Is a Kubeconfig File?
A Kubeconfig is a YAML file with all the Kubernetes cluster details, certificates, and secret tokens needed to authenticate to the cluster. You might get this config file directly from the cluster administrator or from a cloud platform if you are using a managed Kubernetes cluster.
When you use kubectl, it reads the information in the Kubeconfig file to connect to the Kubernetes cluster API. The default location of the Kubeconfig file is $HOME/.kube/config.
Example Kubeconfig File
Here is an example of a Kubeconfig. It carries the key information needed to connect to a Kubernetes cluster: the cluster endpoint, the CA certificate, the cluster name, and the user credentials (a token in this case).
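Below is a minimal sketch with placeholder values (the cluster and user names are illustrative):

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-ca-certificate>
    server: https://<cluster-endpoint>:6443
  name: my-dev-cluster
contexts:
- context:
    cluster: my-dev-cluster
    user: devops-user
  name: my-dev-cluster
current-context: my-dev-cluster
users:
- name: devops-user
  user:
    token: <service-account-token>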
Different Methods to Connect Kubernetes Cluster With Kubeconfig File
You can use the Kubeconfig in different ways, and each way has its own precedence. Here is the order of precedence:
Command-Line Flag: A Kubeconfig passed to kubectl with the --kubeconfig flag overrides all other configs. It has the highest precedence.
Environment Variable: The KUBECONFIG env variable overrides the default config file and its current context.
Current Context: The current context set in the default $HOME/.kube/config file has the lowest precedence.
Now let’s take a look at all the three ways to use the Kubeconfig file.
Method 1: Connect to Kubernetes Cluster With Kubeconfig Kubectl Context
To connect to the Kubernetes cluster, the basic prerequisite is the kubectl CLI. If you don't have the CLI installed, follow the official kubectl installation instructions.
Now follow the steps given below to use the kubeconfig file to interact with the cluster.
Step 1: Move kubeconfig to .kube directory.
Kubectl interacts with the Kubernetes cluster using the details available in the Kubeconfig file. By default, kubectl looks for the config file in the $HOME/.kube directory.
Let's move the kubeconfig file to the .kube directory. Replace /path/to/kubeconfig with your kubeconfig's current path.
mv /path/to/kubeconfig ~/.kube
Step 2: List all cluster contexts
You can have any number of kubeconfig files in the .kube directory. Each config will have a unique context name (i.e., the name of the cluster). You can validate the Kubeconfig file by listing its contexts with the following command, which lists each context by cluster name.
kubectl config get-contexts
Step 3: Set the current context
Now you need to set the current context for your kubeconfig file. You can set it using the following command. Replace <cluster-name> with your listed context name.
kubectl config use-context <cluster-name>
For example,
kubectl config use-context my-dev-cluster
Step 4: Validate the Kubernetes cluster connectivity
To validate the cluster connectivity, you can execute the following kubectl command to list the cluster nodes.
kubectl get nodes
Method 2: Connect with KUBECONFIG environment variable
You can set the KUBECONFIG environment variable to the kubeconfig file path to connect to the cluster. The variable must be available in whichever terminal session you run kubectl from. If you set this variable, it overrides the default config file and its current context.
You can set the variable using the following command, where dev_cluster_config is the kubeconfig file name.
export KUBECONFIG=$HOME/.kube/dev_cluster_config
Method 3: Using Kubeconfig File With Kubectl
You can pass the Kubeconfig file with the Kubectl command to override the current context and KUBECONFIG env variable.
Here is an example to get nodes.
kubectl get nodes --kubeconfig=$HOME/.kube/dev_cluster_config
Alternatively, you can prefix a single command with the variable:
KUBECONFIG=$HOME/.kube/dev_cluster_config kubectl get nodes
Merging Multiple Kubeconfig Files
Usually, when you work with managed Kubernetes services like GKE, all the cluster contexts get added to a single file. However, there are situations where you will be given a Kubeconfig file with limited access to connect to prod or non-prod servers. To manage all clusters effectively with a single config, you can merge the other Kubeconfig files into the default $HOME/.kube/config file using kubectl's config view --flatten command.
Let's assume you have three Kubeconfig files in the $HOME/.kube/ directory:
config (default kubeconfig)
dev_config
test_config
You can merge all three configs into a single file using the following command. Ensure you are running the command from the $HOME/.kube directory.
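KUBECONFIG=config:dev_config:test_config kubectl config view --flatten > config.new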
The above command creates a merged config named config.new.
Now rename the old $HOME/.kube/config file.
mv $HOME/.kube/config $HOME/.kube/config.old
Rename config.new to config.
mv $HOME/.kube/config.new $HOME/.kube/config
To verify the configuration, try listing the contexts from the config.
kubectl config get-contexts
How to Generate Kubeconfig File?
A kubeconfig needs the following important details.
Cluster endpoint (IP or DNS name of the cluster)
Cluster CA Certificate
Cluster name
Service account user name
Service account token
Note: To generate a Kubeconfig file, you need to have admin permissions in the cluster to create service accounts and roles.
For this demo, I am creating a service account bound to a ClusterRole that has limited access to cluster-wide resources. You can also create a normal Role and RoleBinding that limits the user's access to a specific namespace.
Step 1: Create a Service Account
The service account name will be the user name in the Kubeconfig. Here I am creating the service account in the kube-system namespace, as I am using a ClusterRole. If you want a config that grants only namespace-level access, create the service account in the required namespace.
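For example (the service account name devops-cluster-admin is an assumption, inferred from the Kubeconfig file name used in the validation step below):

kubectl create serviceaccount devops-cluster-admin -n kube-system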
Once you have assembled the Kubeconfig file from these details, validate it by executing a kubectl command against it to see if the cluster authenticates successfully.
kubectl get nodes --kubeconfig=devops-cluster-admin-config
Kubeconfig File FAQs
Let’s look at some of the frequently asked Kubeconfig file questions.
Where do I put the Kubeconfig file?
The default Kubeconfig location is the $HOME/.kube/ folder in the home directory, where kubectl looks for a file named config. However, if you are using the KUBECONFIG environment variable, you can place the kubeconfig file in a preferred folder and refer to its path in the KUBECONFIG environment variable.
Where is the Kubeconfig file located?
By default, kubeconfig files are located in the .kube directory in the user's home directory, that is, $HOME/.kube/config.
How to manage multiple Kubeconfig files?
You can store all the kubeconfig files in $HOME/.kube directory. You need to change the cluster context to connect to a specific cluster.
How to create a Kubeconfig file?
To create a Kubeconfig file, you need to have the cluster endpoint details, the cluster CA certificate, and an authentication token. Then you need to create a YAML file of kind Config with all the cluster details.
How to use a proxy with Kubeconfig?
If you are behind a corporate proxy, you can add proxy-url: https://proxy.host:port to the cluster section of your Kubeconfig file to connect to the cluster.
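Here is a sketch of where the field goes (the endpoint values are placeholders):

apiVersion: v1
kind: Config
clusters:
- cluster:
    proxy-url: https://proxy.host:port
    server: https://<cluster-endpoint>:6443
  name: my-dev-cluster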
Conclusion
In this blog, we learned different ways to connect to the Kubernetes cluster using a custom Kubeconfig file.
Kubernetes Cluster Monitoring With Prometheus and Grafana
Monitoring your Kubernetes cluster is critical for ensuring that your services are always available and running. And before you scour the internet for a monitoring system, why not try Grafana and Prometheus Kubernetes cluster monitoring?
In this guide, you’ll learn how to monitor your Kubernetes cluster, viewing internal state metrics with a Prometheus and Grafana dashboard.
Read on so you can keep a close watch on your resources!
Prerequisites
A Linux machine with Docker installed — This tutorial uses an Ubuntu 20.04 LTS machine with Docker version 20.10.7. Here’s how to install Ubuntu.
A single node Kubernetes Cluster.
Helm Package Manager installed — For deploying the Prometheus operator.
Deploying the kube-prometheus Stack Helm Chart
Grafana and Prometheus Kubernetes cluster monitoring provides information on potential performance bottlenecks, cluster health, and performance metrics. At the same time, it visualizes network usage and pod resource usage patterns, and gives you a high-level overview of what is going on in your cluster.
But before setting up a monitoring system with Grafana and Prometheus, you’ll first deploy the kube-prometheus stack Helm chart. The stack contains Prometheus, Grafana, Alertmanager, Prometheus operator, and other monitoring resources.
1. SSH into your Ubuntu 20.04 machine (if you are running on a cloud server) or simply log into your locally installed Ubuntu 20.04 machine to begin.
2. Next, run the kubectl create command below to create a namespace named monitoring for all the Prometheus and Grafana related deployments.
kubectl create namespace monitoring
Creating a Namespace
3. Next, run the following helm repo commands to add the prometheus-community Helm repo and update your local repo cache.
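helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update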
4. After adding the Helm repo, run the helm install command below to deploy the kube-prometheus stack Helm chart. Replace prometheus with your desired release name.
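Assuming the release name prometheus and the monitoring namespace created in step two:

helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring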
This Helm chart sets up a complete Prometheus Kubernetes monitoring stack based on a set of Custom Resource Definitions (CRDs).
Once the deployment completes, you’ll get the following output.
Deploying the kube-prometheus Stack
5. Finally, run the following command to confirm your kube-prometheus stack deployment.
kubectl get pods -n monitoring
The output below shows the deployment of the kube-prometheus stack. As you can see, each component in the stack is running in your cluster.
Listing Deployed Components from kube-prometheus Stack in monitoring Namespace
Accessing the Prometheus Instance
You’ve successfully deployed your Prometheus instance onto your cluster, and you’re almost ready to monitor your Kubernetes cluster. But how do you access your Prometheus instance? You’ll forward a local port 9090 to your cluster via your Prometheus service with the kubectl port-forward command.
1. Run the kubectl get command below to view all services in the monitoring namespace to check for your Prometheus service.
kubectl get svc -n monitoring
All the services deployed in the monitoring namespace are shown below, including the Prometheus service. You’ll use the Prometheus service to set up port-forwarding so your Prometheus instance can be accessible outside of your cluster.
Listing Deployed Services in the monitoring Namespace
2. Next, run the below kubectl port-forward command to forward the local port 9090 to your cluster via the Prometheus service (svc/prometheus-kube-prometheus-prometheus).
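kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090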
To run the kubectl port-forward command as a background process, freeing up your terminal for further use, append the & symbol to the end of the command. You can then press the Ctrl+C keys to get your prompt back; doing so will not affect the background port-forward process.
3. Open your favorite web browser, and navigate to either of the URLs below to access your Prometheus instance.
Navigate to http://localhost:9090 if you’re following along with a local Ubuntu machine
Navigate to your server’s IP address followed by port 9090 (i.e., http://YOUR_SERVER_IP:9090) if you’re using a cloud server.
For this tutorial, Prometheus is running on a cloud server.
If your Prometheus service works, you’ll get the following page on your web browser.
Accessing Prometheus
4. Lastly, on your terminal, press the Ctrl+C keys to close the port-forwarding process. Doing so makes Prometheus inaccessible on your browser.
Viewing Prometheus Kubernetes Cluster Internal State Metrics
Viewing your Kubernetes cluster's internal state metrics is made possible with the kube-state-metrics (KSM) tool. With the KSM tool, you can keep track of the health and usage of your resources, as well as internal state objects. Some of the data points you can view via KSM are node metrics, deployment metrics, and pod metrics.
The KSM tool comes pre-packaged in the kube-prometheus stack and is deployed automatically with the rest of the monitoring components.
You'll port-forward a local port to your cluster via the kube-state-metrics service. Doing so lets you view the internal system metrics KSM scrapes from your cluster, output as a list of queries and values. But before port-forwarding, verify your KSM Kubernetes service first.
1. Run the below command to check for your kube-state-metrics Kubernetes service.
kubectl get svc -n monitoring | grep kube-state-metrics
Below, you can see the KSM Kubernetes service name (prometheus-kube-state-metrics) along with the ClusterIP. Note down the KSM Kubernetes service name as you’ll need it to perform the port forwarding in the next step.
Verifying the KSM Kubernetes Service
2. Next, run the below command to port-forward the prometheus-kube-state-metrics service to port 8080.
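kubectl port-forward -n monitoring svc/prometheus-kube-state-metrics 8080:8080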
If you are following along with this tutorial on an Ubuntu 20.04 machine hosted by a cloud provider, add the --address 0.0.0.0 flag to the kubectl port-forward command. Doing so allows external access to the local port via your server's public IP address.
3. Finally, on your web browser, navigate to either of the URLs below to view the Kube Metrics page, as shown below.
Navigate to http://localhost:8080 if you’re on a local Ubuntu machine
Navigate to your server’s IP address followed by port 8080 (i.e., http://YOUR_SERVER_IP:8080) if you’re using a cloud server.
Click on the metrics link to access your cluster’s internal state metrics.
Accessing Kube Metrics
You can see below a cluster’s internal state metrics similar to yours.
Listing Cluster Internal State Metrics
Visualizing a Cluster’s Internal State Metric on Prometheus
You've successfully performed the kube-prometheus stack Helm chart deployment, kube-state-metrics scraping, and Prometheus job configuration. As a result, CoreDNS, the kube-api server, the Prometheus operator, and other Kubernetes components have been automatically set up as targets on Prometheus.
1. Navigate to either the http://localhost:9090/targets or http://<YOUR_SERVER_IP>:9090/targets endpoint in your web browser. Doing so lets you verify that these targets have been properly configured.
Accessing the endpoint also lets you verify that Prometheus is scraping their metrics and storing the data in a time-series database (TSDB).
Remember to port-forward Prometheus, as shown in step two of the "Accessing the Prometheus Instance" section, before navigating to the endpoint. You can also run it as a background process.
As you can see below, different Kubernetes internal components and monitoring components are configured as targets on Prometheus.
Viewing Pre-configured Monitoring Components as Prometheus Targets
2. Insert the sample PromQL (Prometheus Query Language) query below into the expression field provided, then click Execute. The query returns the total amount of unused memory in your cluster.
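A representative query (the metric comes from the node exporter bundled with the stack; the aggregation shown here is one way to express it):

sum(node_memory_MemFree_bytes)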
3. To view the results of the PromQL query executed in step 2 in graphical format, click Graph. The graph displays the total amount of unused memory in your cluster over time.
With everything set up correctly, the sample cluster metric should look similar to the graph below.
Graphical view of PromQL query of cluster metric
Accessing the Grafana Dashboard
You may have noticed that the visualization capabilities of Prometheus are limited, as you are stuck with only a Graph option. Prometheus is great for scraping metrics from targets configured as jobs, aggregating those metrics, and storing them in a TSDB locally on the Ubuntu machine. But when it comes to standard resource monitoring, Prometheus and Grafana are a great duo.
Prometheus aggregates the metrics exported by server components such as the node exporter, CoreDNS, and so on. Grafana, with visualization as its strong suit, receives these metrics from Prometheus and displays them through numerous visualization options.
During the kube-prometheus stack Helm deployment, Grafana was automatically installed and configured, so you only need to set up access to Grafana on your cluster.
To access your Grafana dashboard, you will first need to fetch your username and password, stored as secrets automatically created by default in your Kubernetes cluster.
1. Run the following kubectl command to view the data stored in the prometheus-grafana secret in YAML format (-o yaml).
kubectl get secret -n monitoring prometheus-grafana -o yaml
As you can see below, the username and password for accessing your Grafana dashboard are encoded in base64. Note down the values of the admin-password and admin-user secrets, as you'll need to decode them in the next step.
Viewing Secrets (admin-password and admin-user)
2. Next, run each command below to decode both secrets (admin-password and admin-user). Replace YOUR_USERNAME and YOUR_PASSWORD with the admin-user and admin-password secret values you noted in step one.
For security reasons, this tutorial doesn't show the output of these commands.
# Decode and print the username
echo YOUR_USERNAME | base64 --decode

# Decode and print the password
echo YOUR_PASSWORD | base64 --decode
3. Run the kubectl command below to forward local port 3000 to the Grafana service's port 80. Doing so provides you access to Grafana's web UI in your browser.
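Assuming the prometheus release name used earlier, the Grafana service is named prometheus-grafana:

kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80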
Add the --address 0.0.0.0 flag if you are following along using an Ubuntu 20.04 machine hosted by a cloud provider.
4. Finally, on your browser, navigate to any of the endpoints below depending on your machine setup:
http://localhost:3000 (local)
or http://<YOUR_SERVER_IP>:3000 (cloud)
Enter your decoded admin-user and admin-password secret values in the username and password fields provided.
Entering Grafana Username and Password
Once you are logged in, you’ll get the Grafana dashboard, as shown below.
Accessing Grafana Dashboard
Interacting with Grafana
By default, the Kube-Prometheus stack deploys Grafana with some pre-configured dashboards for each target configured in Prometheus. With these pre-configured dashboards, you will not need to manually set up a dashboard to visualize each metric aggregated by Prometheus.
Click on the dashboards icon, then click Browse, and your browser redirects to a page where you'll see the list of pre-configured dashboards.
Accessing the list of Pre-configured Dashboards
Click on any of the pre-configured dashboards to view its visualizations. For this tutorial, click on the Kubernetes / Compute Resources / Namespace (Pods) dashboard.
Viewing a Pre-configured Dashboard
Below is a sample pre-configured dashboard for visualizing compute resource usage by Pods in any of the available namespaces.
For this tutorial, the Data source has been set to Prometheus and the namespace for visualization is set to monitoring.
Visualizing Compute Resource Usage
Conclusion
In this tutorial, you've learned how to deploy the Prometheus operator using Helm and viewed your cluster's internal state metrics to monitor your Kubernetes cluster. You've also configured access to Grafana and viewed your cluster metrics on a pre-configured Grafana dashboard.
At this point, you already have fully functional Kubernetes cluster monitoring.