This DevOps Training Program will provide you with in-depth knowledge of various DevOps tools, including Git, Jenkins, Docker, Ansible, Puppet, Kubernetes, and Nagios. The training is completely hands-on and designed to help you become a certified practitioner through best practices in Continuous Development, Continuous Testing, Configuration Management, Continuous Integration, and, finally, Continuous Monitoring of software throughout its development life cycle.
Months later, I encountered a Maven exception when compiling a Java project. The error was as follows:
[ERROR] Error executing Maven.
[ERROR] java.lang.IllegalStateException: Unable to load cache item
[ERROR] Caused by: Unable to load cache item
[ERROR] Caused by: Could not initialize class com.google.inject.internal.cglib.core.$MethodWrapper
My Java version at the time was:
openjdk version "17.0.2" 2022-10-18
OpenJDK Runtime Environment (build 17.0.2+8-Ubuntu-2ubuntu120.04)
OpenJDK 64-Bit Server VM (build 17.0.2+8-Ubuntu-2ubuntu120.04, mixed mode, sharing)
and my Maven version at the time was:
Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 17.0.2, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-17-openjdk-amd64
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "5.10.16.3-microsoft-standard-wsl2", arch: "amd64", family: "unix"
The cause of the error was that the Maven version (3.6.3) was too old to work with JDK 17. I needed to upgrade to a newer version of Maven.
Unfortunately, I could not upgrade to the latest Maven version (3.9.0 at the time) using the apt package manager on Ubuntu. Generally, the easiest way to install anything on Ubuntu is via apt; however, its repositories often do not include the latest versions of packages such as the JDK or Maven.
These are the steps to install the latest Maven version on Ubuntu:
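A minimal sketch of a manual installation, assuming version 3.9.0 and an install location under /opt (substitute the latest release; download URLs for older releases move to archive.apache.org):

wget https://dlcdn.apache.org/maven/maven-3/3.9.0/binaries/apache-maven-3.9.0-bin.tar.gz
sudo tar -xzf apache-maven-3.9.0-bin.tar.gz -C /opt
sudo ln -s /opt/apache-maven-3.9.0 /opt/maven
echo 'export PATH=/opt/maven/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
mvn -version   # should now report the new Maven version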
5. Lab Exercise 1: Setting Up SQL Environment
1. Install Toad for Oracle:
Download Toad for Oracle from the official website.
Follow the installation wizard to install Toad on your machine.
-- SQL Command: None, as it involves setting up Toad.
2. Connect to a Database:
Open Toad and click on "New Connection."
Enter your connection details, including username, password, and database connection details (hostname, port), and click "Connect."
-- SQL Command: None, as it involves connecting through Toad.
3. Create a Sample Database and Table:
In the SQL Editor within Toad, execute the CREATE TABLE statement to create a table named users with columns id, name, and age.
CREATE TABLE users ( id NUMBER PRIMARY KEY, name VARCHAR2(50), age NUMBER );
-- Note: the PRIMARY KEY constraint already makes id NOT NULL, so this ALTER is optional.
ALTER TABLE users MODIFY (id NUMBER NOT NULL);
CREATE SEQUENCE users_sequence
START WITH 1
INCREMENT BY 1;
CREATE OR REPLACE TRIGGER users_trigger
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
SELECT users_sequence.nextval INTO :new.id FROM dual;
END;
4. Insert Sample Data:
Use the INSERT INTO statements to add sample data to the users table.
INSERT INTO users (name, age) VALUES ('John Doe', 25);
INSERT INTO users (name, age) VALUES ('Jane Smith', 30);
5. Execute Basic Queries:
In the SQL Editor, run a SELECT * FROM users; query to retrieve all data from the users table.
SELECT * FROM users;
6. Lab Exercise 2: Querying Data
1. Basic SELECT Statement:
Retrieve all columns from the users table:
SELECT * FROM users;
2. Filtering Data:
Retrieve users older than 25:
SELECT * FROM users WHERE age > 25;
3. Sorting Data:
Retrieve users sorted by age in descending order:
SELECT * FROM users ORDER BY age DESC;
4. Limiting Results:
Retrieve the first 5 users:
SELECT * FROM users WHERE ROWNUM <= 5;
7. Lab Exercise 3: Exploring Joins
1. Inner Join:
Retrieve information from two tables where there is a match:
SELECT users.id, users.name, orders.order_number FROM users INNER JOIN orders ON users.id = orders.user_id;
2. Left Join:
Retrieve all records from the left table and the matched records from the right table:
SELECT users.id, users.name, orders.order_number FROM users LEFT JOIN orders ON users.id = orders.user_id;
3. Right Join:
Retrieve all records from the right table and the matched records from the left table:
SELECT users.id, users.name, orders.order_number FROM users RIGHT JOIN orders ON users.id = orders.user_id;
4. Full Outer Join:
Retrieve all records when there is a match in either the left or right table:
SELECT users.id, users.name, orders.order_number FROM users FULL OUTER JOIN orders ON users.id = orders.user_id;
8. Lab Exercise 4: Aggregating Data
1. Counting Records:
Count the number of users in the users table:
SELECT COUNT(*) FROM users;
2. Grouping Data:
Group users by age and display the count in each group:
SELECT age, COUNT(*) FROM users GROUP BY age;
9. Lab Exercise 5: Modifying Data and Transactions
1. Updating Records:
Update the age of a user in the users table:
UPDATE users SET age = 28 WHERE name = 'John Doe';
2. Deleting Records:
Delete a user from the users table:
DELETE FROM users WHERE name = 'Jane Smith';
3. Transactions:
Use transactions to ensure atomicity for a series of SQL statements. In Oracle, a transaction begins implicitly with the first DML statement and ends with COMMIT or ROLLBACK:
UPDATE users SET age = 29 WHERE name = 'John Doe';
COMMIT; -- or ROLLBACK; to discard the change
10. Lab Exercise 6: Building a Blood Donation Database
1. Create Tables:
Create tables for donors, donations, and recipients (a sketch of possible definitions follows):
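Since the exercise leaves the exact table definitions open, here is a minimal sketch; the column names, types, and sizes are assumptions you can adapt to your own design:

CREATE TABLE donors ( donor_id NUMBER PRIMARY KEY, name VARCHAR2(50), blood_type VARCHAR2(3), phone VARCHAR2(20) );
CREATE TABLE recipients ( recipient_id NUMBER PRIMARY KEY, name VARCHAR2(50), blood_type VARCHAR2(3) );
CREATE TABLE donations ( donation_id NUMBER PRIMARY KEY, donor_id NUMBER REFERENCES donors(donor_id), recipient_id NUMBER REFERENCES recipients(recipient_id), donation_date DATE, units NUMBER );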
It usually takes 5 hours to update their application, and updates are manual, which incurs a lot of downtime and is affecting their business: clients get locked out, which gives their competitors the upper hand.
Your task is to migrate this application to the cloud and implement DevOps practices across their entire Software Development Life Cycle.
You should demonstrate concepts that implement Plan -- Code -- Build -- Test -- Deploy -- Monitor.
TASK A - Documentation: Setup a Wiki Server for your Project (Containerization)
a. You can get the docker-compose file from the link below (a minimal sample compose file is also sketched after the acceptance criteria):
i. The Wiki Server should be up and running and serving on port 84
ii. Mount your own container volume to persist data
iii. Login with Credentials DeliApp/admin
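A minimal docker-compose sketch of what such a setup might look like; the wiki image (mediawiki) and the data path are assumptions, so substitute the compose file provided by the course link:

version: "3"
services:
  wiki:
    image: mediawiki                         # assumed wiki image; use the one from the provided compose file
    ports:
      - "84:80"                              # serve the wiki on port 84 as required
    volumes:
      - wiki_data:/var/www/html/images       # mount a named volume to persist data
volumes:
  wiki_data: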
TASK B: Version Control The DeliApp Project
Plan & Code
App Name: DeliApp
WorkStation A- Team Osato- 3.142.247.23
WorkStation B - Team -
Developer workstations are Windows machines. Your Project Supervisor will provide you their IP/DNS and the credentials you will use to log into the machine assigned to your group. You can use MobaXterm or Remote Desktop to connect. The username is Administrator.
When you access the Developer workstation assigned to your group, you will find the code base in the below location:
This PC:---->Desktop---->DeliApp
(You can use GitHub or Bitbucket)
1) Set up 2 repos, a Build repo to store all the code base and a Deployment repo to store all your deployment scripts, and name them as shown below:
Build repo: DeliApp_Build ---> Developers Access
Deployment repo: DeliApp_Deployment ---> Your Team Access
2) Version control the DeliApp project located on the Developer workstation so that the Developers can migrate their code to the Source Control Management tool (Bitbucket/Git).
Set up the Developer workstations' SSH keys in Bitbucket to access the Build repo, and your Team's (DevOps) workstation SSH keys in Bitbucket to access the Deployment repo (see the key-generation sketch below).
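A minimal sketch of generating a key pair to register in Bitbucket; the key type, comment, and file path are just examples:

ssh-keygen -t ed25519 -C "developer@deliapp" -f ~/.ssh/id_ed25519   # generate the key pair
cat ~/.ssh/id_ed25519.pub   # paste this public key into the Bitbucket SSH keys settings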
3)Git branching Strategy for DeliApp_Build
master
release: eg release/release-v1
feature: eg feature/feature-v1
develop
4) Git branching Strategy for DeliApp_Deploy (see the branch-creation sketch after this list)
master
feature eg feature/feature-v1
develop
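A minimal sketch of creating and publishing these branches with git; the branch names follow the examples above:

git checkout -b develop && git push -u origin develop
git checkout -b feature/feature-v1 develop && git push -u origin feature/feature-v1
git checkout -b release/release-v1 develop && git push -u origin release/release-v1   # Build repo only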
5. Secure the repos by installing git-secrets on your build (DeliApp_Build) and deployment (DeliApp_Deploy) repos --PRE-COMMIT HOOK (a setup sketch follows the acceptance criteria)
TASK B Acceptance Criteria:
1. You should be able to push and pull code from the Developer Workstation assigned to your Team to the DeliApp_Build repo in Source Control Management (SCM)
2. Your Team (Devops) Should be able to pull and push code from your individual workstations to the DeliApp_Deploy repo
3. Demonstrate the git branching Strategy
4. Your git commit should throw an error when there is a secret in your repo
Hint: Add a text file containing some secrets eg. aws secret key/access key and commit
5. You should get an Error when you try to push to master
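A minimal git-secrets setup sketch, assuming make and git are available on the workstation:

git clone https://github.com/awslabs/git-secrets.git
cd git-secrets && sudo make install && cd ..
cd DeliApp_Build             # repeat the same steps inside the DeliApp_Deployment clone
git secrets --install        # wires up the pre-commit, commit-msg and prepare-commit-msg hooks
git secrets --register-aws   # registers patterns that catch AWS access/secret keys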
TASK C: Set up your Infrastructure
1. Set up your Environment: DEV, UAT, QA, PROD A, PROD B
Provision 6 Apache Tomcat servers. (You can use any IaC tool: Terraform, CloudFormation, Ansible Tower.) You can host these on any cloud provider - AWS, Google Cloud, Azure.
i. DEV - t2.micro - 8 GB
ii. UAT (User Acceptance Testing) - t2.small - 10 GB
iii. QA (Quality Assurance) - t2.large - 20 GB
iv. PROD A - t2.xlarge - 30 GB
v. PROD B - t2.xlarge - 30 GB
Apache Tomcat servers should be exposed on port 4444
Linux distribution for the Apache Tomcat servers: Ubuntu 18.04
Note: when bootstrapping your servers, make sure you install the Datadog Agent (a sample bootstrap sketch follows)
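A minimal user-data/bootstrap sketch, assuming Ubuntu with the tomcat8 package and your own Datadog API key; the Datadog line is the publicly documented Agent 7 install one-liner:

#!/bin/bash
apt-get update -y
apt-get install -y tomcat8
# expose Tomcat on port 4444 instead of the default 8080
sed -i 's/port="8080"/port="4444"/' /etc/tomcat8/server.xml
systemctl restart tomcat8
# install the Datadog Agent (replace <your-datadog-api-key>)
DD_API_KEY="<your-datadog-api-key>" DD_SITE="datadoghq.com" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"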
2. Set up your Devops tools servers:
(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution on these, e.g. Amazon Linux 2, Debian, Ubuntu, etc.)
NOTE: USE AZURE CLOUD FOR BELOW
1 Ansible Tower - T2xxl - 15 GB
1 Kubernetes server - you can use EKS, k3s, kubeadm, or minikube
1 Jenkins (CI/CD) - t2.xlarge - 20 GB
1 Vulnerability Scanning Tool Server- Owasp Zap (Install in a Windows instance) See: https://www.devopstreams.com/2022/06/getting-started-with-owasp-zap.html
Install Helm in your Kubernetes server (k3s, EKS, kubeadm, minikube) and install the following with Helm (a sketch follows):
Install SonarQube
Install Artifactory
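A minimal sketch using the publicly documented SonarSource and JFrog chart repositories; the release and namespace names are just examples:

helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm install sonarqube sonarqube/sonarqube --namespace sonarqube --create-namespace
helm install artifactory jfrog/artifactory --namespace artifactory --create-namespace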
Bonus Task:
Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers
Register a Domain using Route 53, eg www.teamdevops.com
Point that domain to the Elastic/Application Loadbalancer
Acceptance Criteria: When you Enter your domain in the browser, it should Point to Either Prod A or Prod B
TASK D: Monitoring
a. Set up continuous monitoring with Datadog by installing Datadog Agent on all your servers
Acceptance criteria:
i. All your infrastructure server metrics should be monitored (Infrastructure Monitoring)
ii. All running processes on all your servers should be monitored (Process Monitoring)
iii. Tag all your servers on the Datadog dashboard
TASK E: Domain Name System
a. Register a Domain for your Team
i. You can use Route 53, Godaddy or any DNS service of your choice
eg. www.team-excellence.com
TASK F: Set Up Automated Build for Developers
The Developers make use of Maven to Compile the code
a. Set up a CI pipeline in Jenkins using a Jenkinsfile
b. Enable webhooks in Bitbucket to trigger automated builds of the pipeline job
c. The CI Pipeline job should run on an Agent(Slave)
d. Help the developers to version their artifacts, so that each build has a unique artifact version
The pipeline should be able to check out the code from SCM, build it with Maven, provide code analysis and code coverage with SonarQube, upload artifacts to Artifactory, send an email to the team, and version the artifacts.
The pipeline should send a Slack channel notification with the build status (a minimal Jenkinsfile sketch follows).
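A minimal declarative Jenkinsfile sketch. The agent label, Maven tool name, SonarQube server name, credentials ID, Artifactory URL, Slack channel, and email address are all assumptions to adjust to your own Jenkins setup (slackSend, emailext, and withSonarQubeEnv come from the Slack Notification, Email Extension, and SonarQube Scanner plugins):

pipeline {
    agent { label 'build-agent' }           // run on an agent, not the controller
    tools { maven 'maven3' }
    environment {
        VERSION = "1.0.${BUILD_NUMBER}"     // unique artifact version per build
    }
    stages {
        stage('Checkout') { steps { checkout scm } }
        stage('Build')    { steps { sh 'mvn clean package' } }
        stage('Sonar Analysis & Coverage') {
            steps { withSonarQubeEnv('sonar') { sh 'mvn sonar:sonar' } }
        }
        stage('Upload to Artifactory') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'artifactory-creds',
                        usernameVariable: 'ART_USER', passwordVariable: 'ART_PASS')]) {
                    // assumed repository layout; $VERSION is expanded by the shell from the environment
                    sh 'curl -u $ART_USER:$ART_PASS -T target/deliapp.war "https://<artifactory-host>/artifactory/libs-release-local/deliapp/deliapp-$VERSION.war"'
                }
            }
        }
    }
    post {
        success { slackSend(channel: '#builds', message: "DeliApp build ${VERSION} succeeded") }
        failure { slackSend(channel: '#builds', message: "DeliApp build ${VERSION} failed") }
        always  { emailext(subject: "DeliApp build ${VERSION}: ${currentBuild.currentResult}",
                           body: "See ${BUILD_URL}", to: 'team@example.com') }
    }
}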
i. Acceptance Criteria:
Automated build after code is pushed to the repository
1. Sonar Analysis on the sonarqube server
2. Artifact uploaded to artifactory
3. Email notification on success or failure
4. Slack Channel Notification
5. Each artifact has a unique version number
6. Code coverage displayed
TASK G: Deploy & Operate (Continuous Deployment)
a. Set up a CD pipeline in Jenkins using a Jenkinsfile
Create 4 CD pipeline jobs, one for each environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments
Pipeline job Name:eg DeliApp_Dev_Deploy
i. Pipeline should be able to deploy any of your LLE (Dev, Uat, Qa) or HLE (Prod A, PROD B)
You can use the Deploy to Container plugin in Jenkins, or deploy using Ansible Tower to pull the artifact from Artifactory and deploy to either Dev, UAT, QA or Prod (a minimal parameterized pipeline sketch follows item iv)
ii. Pipeline should have slack channel notification to notify deployment status
iii. Pipeline should have email notification
iv. Deployment Gate
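A minimal sketch of a parameterized deployment pipeline with a deployment gate; the playbook name, Slack channel, and environment handling are assumptions:

pipeline {
    agent any
    parameters {
        choice(name: 'ENVIRONMENT', choices: ['dev', 'uat', 'qa', 'prod-a', 'prod-b'], description: 'Target environment')
        string(name: 'ARTIFACT_VERSION', defaultValue: '1.0.1', description: 'Artifact version to deploy')
    }
    stages {
        stage('Deployment Gate') {
            when { expression { params.ENVIRONMENT.startsWith('prod') } }
            steps { input message: "Deploy ${params.ARTIFACT_VERSION} to ${params.ENVIRONMENT}?" }
        }
        stage('Deploy') {
            steps {
                // hand off to an Ansible playbook (or Tower job template) that pulls the
                // artifact from Artifactory and deploys it to the chosen Tomcat server
                sh "ansible-playbook deploy.yml -e env=${params.ENVIRONMENT} -e version=${params.ARTIFACT_VERSION}"
            }
        }
    }
    post {
        success { slackSend(channel: '#deployments', message: "Deployed ${params.ARTIFACT_VERSION} to ${params.ENVIRONMENT}") }
        failure { slackSend(channel: '#deployments', message: "Deployment of ${params.ARTIFACT_VERSION} to ${params.ENVIRONMENT} failed") }
    }
}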
1. Acceptance criteria:
i. Deployment is seen and verified in either Dev, Uat, Qa or Prod
ii. Notification is seen in slack channel
iii. Email notification
TASK H:a. Deployment and Rollback
a. Automate the manual deployment of a Specific Version of the Deli Application using Ansible Tower
Manual Deployment Process is Below:
step 1: log in to the Tomcat server
step 2: download the artifact
step 3: switch to root
step 4: extract the artifact to the deployment folder
Deployment folder: /var/lib/tomcat8/webapps
Use service id: ubuntu (a playbook sketch of these steps follows)
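A minimal playbook sketch automating the steps above; the host group, Artifactory URL, and artifact name are assumptions, and the Artifactory credentials should be stored encrypted (e.g. with Ansible Vault or Tower credentials):

- name: Deploy a specific version of the DeliApp artifact to Tomcat
  hosts: tomcat_servers
  remote_user: ubuntu            # the service id given above
  become: yes                    # "switch to root"
  vars:
    app_version: "1.0.1"         # pass an older version here to roll back
    artifact_url: "https://<artifactory-host>/artifactory/libs-release-local/deliapp/deliapp-{{ app_version }}.war"
  tasks:
    - name: Download the artifact from Artifactory
      ansible.builtin.get_url:
        url: "{{ artifact_url }}"
        dest: "/tmp/deliapp-{{ app_version }}.war"
    - name: Copy the artifact into the Tomcat deployment folder
      ansible.builtin.copy:
        remote_src: yes
        src: "/tmp/deliapp-{{ app_version }}.war"
        dest: "/var/lib/tomcat8/webapps/deliapp.war"

Rolling back then amounts to re-running the playbook (or Tower job template) with an older app_version.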
Acceptance Criteria:
i. Deploy new artifact from artifactory to either Dev, Uat, Qa or Prod
ii. Rollback to an older artifact from Artifactory either to Dev, UAT, QA or Prod
iii. All credentials should be encrypted
TASK H:b. Domain Name Service and LoadBalancing
i. Add an application or Elastic Loadbalancer to manage traffic between your ProdA and Prod B Servers
ii. Configure your DNS with Route 53 such that when you enter your domain, e.g. www.team-excellence.com, it directs you to the load balancer, which will in turn point to Prod A or Prod B
Acceptance criteria:
i. Your team domain name eg www.mint.com will take you to your application that is residing on Prod A or Prod B
TASK I:
a. Set up a 3-node Kubernetes cluster (Container Orchestration) with namespaces dev, qa, prod
Using a Jenkins pipeline or Jenkins job - the pipeline or job should be able to create/delete the cluster
b. Dockerize the DeliApp
You can use a Dockerfile to create the image (a minimal sketch follows) or the OpenShift Source-to-Image tool
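A minimal Dockerfile sketch, assuming the DeliApp Maven build produces target/deliapp.war; the base image tag is also an assumption:

FROM tomcat:8.5-jdk8
# deploy the WAR into Tomcat's webapps directory
COPY target/deliapp.war /usr/local/tomcat/webapps/deliapp.war
EXPOSE 8080
CMD ["catalina.sh", "run"]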
c. Deploy the Dockerized DeliApp into the prod namespace of the cluster (you can use dev and qa for testing)
d. Expose the application using a LoadBalancer or NodePort (a kubectl sketch follows)
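A minimal kubectl sketch; the registry path and image tag are assumptions:

docker build -t <registry>/deliapp:1.0 . && docker push <registry>/deliapp:1.0
kubectl create namespace prod
kubectl -n prod create deployment deliapp --image=<registry>/deliapp:1.0
kubectl -n prod expose deployment deliapp --type=NodePort --port=8080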
e. Monitor your cluster using Prometheus and Grafana (for example with the kube-prometheus-stack chart, sketched below)
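A minimal sketch using the community kube-prometheus-stack Helm chart, which bundles Prometheus and Grafana; the release and namespace names are examples:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace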
TASK I Acceptance Criteria:
1. You should be able to create/delete a kubernetes cluster
2. Be able to deploy your application into any Namespace(Dev,Qa,Prod)
3. You should be able to access the application through Nodeport or LoadBalancer
4. You should be able to monitor your cluster in Grafana
TASK J: Demonstrate Bash Automation of
i. Tomcat
ii. Jenkins
iii. Apache
Acceptance criteria:
1. Show bash scripts and successfully execute them (a sample Tomcat script is sketched below)
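A minimal sketch of bash automation for Tomcat on Ubuntu; the package name (tomcat8) matches the deployment folder used earlier, and similar scripts can be written for Jenkins and Apache:

#!/bin/bash
# install, enable, and verify Tomcat
set -e
sudo apt-get update -y
sudo apt-get install -y tomcat8
sudo systemctl enable --now tomcat8
systemctl is-active --quiet tomcat8 && echo "Tomcat is running"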
TASK A - Documentation: Setup a Wiki Server for your Project (Containerization)
i. The Wiki Server should be up and running and serving on port 100
ii. Mount your own container volume to persist data
iii. Login with Credentials Jackpiro/admin
TASK B: Version Control The JackPiro Project
Plan & Code
App Name: JackPiro
WorkStation A- Team -3.129.65.16
WorkStation B- Team - 18.118.167.59
Developer workstations are Windows machines. Your Project Supervisor will provide you the password you will use to log into the machine assigned to your group. You can use MobaXterm or Remote Desktop to connect. The username is Administrator.
When you access the Developer workstation assigned to your group, you will find the code base in the below location:
C:---->Documents---->App--->JackPiro
(You can use GitHub or Bitbucket)
1) Set up 2 repos, a Build repo to store all the code base and a Deployment repo to store all your deployment scripts, and name them as shown below:
Build repo: JackPiro_Build ---> Developers Access
Deployment repo: JackPiro_Deployment ---> Your Team Access
2) Version control the JackPiro project located on the Developer workstation so that the Developers can migrate their code to the Source Control Management tool (Bitbucket/Git).
Set up the Developer workstations' SSH keys in Bitbucket to access the Build repo, and your Team's (DevOps) workstation SSH keys in Bitbucket to access the Deployment repo
3)Git branching Strategy for JackPiro_Build
master
release: eg release/release-v1
feature: eg feature/feature-v1
develop
4)Git branching Strategy for JackPiro_Deploy
master
feature eg feature/feature-v1
develop
TASK B Acceptance Criteria:
1. You should be able to push and pull code from the Developer Workstation assigned to your Team to the JackPiro_Build repo in Source Control Management(SCM)
2. Your Team (Devops) Should be able to pull and push code from your individual workstations to the JackPiro_Deploy repo
3. Demonstrate the git branching Strategy
TASK C: Set up your Infrastructure
1. Set up your Environment: DEV, UAT, QA, PROD A, PROD B
Provision 6 Apache Tomcat servers. (You can use any IaC tool: Terraform, CloudFormation, Ansible Tower.) You can host these on any cloud provider - AWS, Google Cloud, Azure.
i. DEV - t2.micro - 8 GB
ii. UAT (User Acceptance Testing) - t2.small - 10 GB
iii. QA (Quality Assurance) - t2.large - 20 GB
iv. PROD A - t2.xlarge - 30 GB
v. PROD B - t2.xlarge - 30 GB
Apache Tomcat Servers should be exposed on Port 4444
Linux Distribution for Apache Tomcat Servers: Ubuntu 16.04
Note: When Bootstrapping your servers make sure you install the Datadog Agent
2. Set up your Devops tools servers:
(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution on these, e.g. Amazon Linux 2, Debian, Ubuntu, etc.)
NOTE: USE AZURE CLOUD FOR BELOW
1 Ansible Tower - T2xxl - 15 GB
1 Kubernetes server - you can use EKS, k3s, kubeadm, or minikube
1 Jenkins (CI/CD) - t2.xlarge - 20 GB
Install Helm in your Kubernetes server (k3s, EKS, kubeadm, minikube) and install the following with Helm:
Install SonarQube
Install Artifactory
Bonus Task:
Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers
Register a Domain using Route 53, eg www.teamdevops.com
Point that domain to the Elastic/Application Loadbalancer
Acceptance Criteria: When you Enter your domain in the browser, it should Point to Either Prod A or Prod B
TASK E: Set Up Automated Build for Developers
The Developers make use of Maven to Compile the code
a. Set up a CI pipeline in Jenkins using a Jenkinsfile
b. Enable webhooks in Bitbucket to trigger automated builds of the pipeline job
c. The CI Pipeline job should run on an Agent(Slave)
d. Help the developers to version their artifacts, so that each build has a unique artifact version
The pipeline should be able to check out the code from SCM, build it with Maven, provide code analysis and code coverage with SonarQube, upload artifacts to Artifactory, send an email to the team, and version the artifacts.
The pipeline should send a Slack channel notification with the build status (the Jenkinsfile sketch shown for the DeliApp project above applies here as well).
i. Acceptance Criteria:
Automated build after code is pushed to the repository
1. Sonar Analysis on the sonarqube server
2. Artifact uploaded to artifactory
3. Email notification on success or failure
4. Slack Channel Notification
5. Each artifact has a unique version number
6. Code coverage displayed
TASK F: Deploy & Operate (Continuous Deployment)
a. Set up a CD pipeline in Jenkins using a Jenkinsfile
Create 4 CD pipeline jobs, one for each environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments
Pipeline job Name:eg JackPiro_Dev_Deploy
i. Pipeline should be able to deploy any of your LLE (Dev, Uat, Qa) or HLE (Prod A, PROD B)
You can use the Deploy to Container plugin in Jenkins, or deploy using Ansible Tower to pull the artifact from Artifactory and deploy to either Dev, UAT, QA or Prod
ii. Pipeline should have slack channel notification to notify deployment status
iii. Pipeline should have email notification
iv. Deployment Gate
1. Acceptance criteria:
i. Deployment is seen and verified in either Dev, Uat, Qa or Prod
ii. Notification is seen in slack channel
iii. Email notification
TASK G: Monitoring
a. Set up continuous monitoring with Datadog by installing the Datadog Agent on all your servers
Acceptance criteria:
i. All your infrastructure server metrics should be monitored (Infrastructure Monitoring)
ii. All running processes on all your servers should be monitored (Process Monitoring)
iii. Tag all your servers on the Datadog dashboard
TASK H: Deployment and Rollback
a. Automate the manual deployment of a specific version of the JackPiro application using Ansible Tower
Manual Deployment Process is Below:
step 1: log in to the Tomcat server
step 2: download the artifact
step 3: switch to root
step 4: extract the artifact to the deployment folder
Deployment folder: /var/lib/tomcat8/webapps
Use service id: ubuntu
Acceptance Criteria:
i. Deploy new artifact from artifactory to either Dev, Uat, Qa or Prod
ii. Rollback to an older artifact from Artifactory either to Dev, UAT, QA or Prod
Prometheus is an open-source monitoring and alerting tool that collects and stores time-series data, while Grafana is a popular data visualization platform that allows you to create interactive dashboards and visualizations.
By combining these tools, you can gain valuable insights into your Kubernetes cluster’s performance and health, making it easier to identify and troubleshoot issues. However, setting up this stack can be a daunting task, especially if you’re not familiar with the process.
That's why I'm excited to provide you with a comprehensive tutorial that will guide you through the entire process step by step, from installing k3s to configuring Prometheus and Grafana. With my tutorial, you'll be able to install and configure this powerful monitoring stack in just 5 minutes, saving you a lot of time and effort.
We may need some of these and possibly other resources in our application, which we'll have to create ourselves as templates.
For this tutorial, we'll create a deployment and service to expose that deployment. Please note that the emphasis here is not to understand Kubernetes in detail. Hence we'll keep these resources as simple as possible.
Let's edit the file deployment.yaml inside the templates directory to look like:
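A minimal sketch of what such a template might contain; the exact labels and helper names generated by helm create may differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: hello-world
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 8080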
Now, with our knowledge of Kubernetes, these template files look quite familiar except for some oddities. Note the liberal usage of text within double curly braces {{ }}. This is what is called a template directive.
Helm makes use of the Go template language and extends that to something called Helm template language. During the evaluation, every file inside the template directory is submitted to the template rendering engine. This is where the template directive injects actual values into the templates.
7.3. Providing Values
In the previous sub-section, we saw how to use the template directive in our templates. Now, let's understand how we can pass values to the template rendering engine. We typically pass values through Built-in Objects in Helm.
There are many such objects available in Helm, like Release, Values, Chart, and Files.
We can use the file values.yaml in our chart to pass values to the template rendering engine through the Built-in Object Values. Let's modify the values.yaml to look like:
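A minimal sketch consistent with the discussion below; the service settings are assumptions:

replicaCount: 1
image:
  repository: hello-world
  tag: "1.0"
  pullPolicy: IfNotPresent
service:
  type: NodePort
  port: 80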
Note how these values are accessed within the templates using dots to separate namespaces. We have used the image repository and tag "hello-world" and "1.0"; these must match the Docker image tag we created for our Spring Boot application.
8. Managing Charts
With everything done so far, we're now ready to play with our chart. Let's see what commands the Helm CLI makes available for this. Please note that we'll only cover some of the commands available in Helm.
8.1. Helm Lint
Firstly, this is a simple command that takes the path to a chart and runs a battery of tests to ensure that the chart is well-formed:
helm lint ./hello-world
==> Linting ./hello-world
1 chart(s) linted, no failures
The output displays the result of the linting with issues that it identifies.
8.2. Helm Template
Also, we have this command to render the template locally for quick feedback:
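helm template ./hello-world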
Please note that this command fakes the values that are otherwise expected to be retrieved in the cluster.
8.3. Helm Install
Once we've verified the chart to be fine, finally, we can run this command to install the chart into the Kubernetes cluster:
helm install --name hello-world ./hello-world
NAME: hello-world
LAST DEPLOYED: Mon Feb 25 15:29:59 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world NodePort 10.110.63.169 <none> 80:30439/TCP 1s
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-world  1  0  0  0  1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
hello-world-7758b9cdf8-cs798  0/1  Pending  0  0s
This command also provides several options to override the values in a chart. Note that we've named the release of this chart with the flag --name. The command responds with a summary of the Kubernetes resources created in the process.
8.4. Helm Get
Now, we would like to see which charts are installed as what release. This command lets us query the named releases:
helm ls --all
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
hello-world  1  Mon Feb 25 15:29:59 2019  DEPLOYED  hello-world-0.1.0  1.0  default
There are several sub-commands available for this command to get the extended information. These include All, Hooks, Manifest, Notes, and Values.
8.5. Helm Upgrade
What if we've modified our chart and need to install the updated version? This command helps us to upgrade a release to a specified or current version of the chart or configuration:
helm upgrade hello-world ./hello-world
Release "hello-world" has been upgraded. Happy Helming!
LAST DEPLOYED: Mon Feb 25 15:36:04 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world NodePort 10.110.63.169 <none> 80:30439/TCP 6m5s
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-world  1  1  1  1  6m5s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
hello-world-7758b9cdf8-cs798  1/1  Running  0  6m4s
Please note that with Helm 3, the release upgrade uses a three-way strategic merge patch. Here, it considers the old manifest, the cluster's live state, and the new manifest when generating a patch. Helm 2 used a two-way strategic merge patch that discarded changes applied to the cluster outside of Helm.
8.6. Helm Rollback
It can always happen that a release went wrong and needs to be taken back. This is the command to roll back a release to the previous versions:
helm rollback hello-world 1
Rollback was a success! Happy Helming!
We can specify a specific version to roll back to or leave this argument blank, in which case it rolls back to the previous version.
8.7. Helm Uninstall
Although less likely, we may want to uninstall a release completely. We can use this command to uninstall a release from Kubernetes:
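helm uninstall hello-world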
It removes all of the resources associated with the last release of the chart and the release history.
9. Distributing Charts
While templating is a powerful tool that Helm brings to the world of managing Kubernetes resources, it's not the only benefit of using Helm. As we saw in the previous section, Helm acts as a package manager for the Kubernetes application and makes installing, querying, upgrading, and deleting releases pretty seamless.
In addition to this, we can also use Helm to package, publish, and fetch Kubernetes applications as chart archives. We can also use the Helm CLI for this as it offers several commands to perform these activities. As before, we'll not cover all the available commands.
9.1. Helm Package
Firstly, we need to package the charts we've created to be able to distribute them. This is the command to create a versioned archive file of the chart:
helm package ./hello-world
Successfully packaged chart and saved it to: \hello-world\hello-world-0.1.0.tgz
Note that it produces an archive on our machine that we can distribute manually or through public or private chart repositories. We also have an option to sign the chart archive.
9.2. Helm Repo
Finally, we need a mechanism to work with shared repositories to collaborate. There are several sub-commands available within this command that we can use to add, remove, update, list, or index chart repositories. Let's see how we can use them.
We can create a git repository and use that to function as our chart repository. The only requirement is that it should have an index.yaml file.
We can create index.yaml for our chart repo:
helm repo index my-repo/ --url https://<username>.github.io/my-repo
This generates the index.yaml file, which we should push to the repository along with the chart archives.
After successfully creating the chart repository, subsequently, we can remotely add this repo:
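helm repo add my-repo https://<username>.github.io/my-repo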
There are quite a few other commands available to work with chart repositories.
9.3. Helm Search
Finally, we can search for a keyword within charts that may be present in any public or private chart repository:
helm search repo <KEYWORD>
There are sub-commands available for this command that allow us to search different locations for charts. For instance, we can search for charts in the Artifact Hub or in our own repositories. Further, we can search for a keyword in the charts available in all the repositories we've configured.
10. Migration from Helm 2 to Helm 3
Since Helm has been in use for a while, it's natural to wonder about the future of Helm 2 given the significant changes in Helm 3. While it's advisable to start with Helm 3 if we are starting fresh, support for Helm 2 will continue for the near future. There are caveats, however, and we'll have to make the necessary accommodations.
Some of the important changes to note include that Helm 3 no longer automatically generates the release name. However, we've got the necessary flag that we can use to generate the release name. Moreover, the namespaces are no longer created when a release is created. We should create the namespaces in advance.
But there are a couple of options for a project that uses Helm 2 and wishes to migrate to Helm 3. First, we can use Helm 2 and Helm 3 to manage the same cluster and slowly drain away Helm 2 releases while using Helm 3 for new releases. Alternatively, we can decide to manage Helm 2 releases using Helm 3. While this can be tricky, Helm provides a plugin to handle this type of migration.
11. Conclusion
To sum up, in this tutorial, we discussed the core components of Helm, a package manager for Kubernetes applications. We understood the options to install Helm. Furthermore, we went through creating a sample chart and templates with values.
Then, we went through multiple commands available as part of the Helm CLI to manage the Kubernetes application as a Helm package. Finally, we discussed the options for distributing Helm packages through repositories. In the process, we saw the changes that have been made in Helm 3 compared to Helm 2.