Friday, 22 December 2023

How to upgrade Maven

 

java.lang.IllegalStateException

I had installed Maven on my Ubuntu machine using the command:

apt install maven

This installed Maven to /usr/share/maven.

Months later, I encountered a Maven exception while compiling a Java project. The error was as follows:

[ERROR] Error executing Maven.
[ERROR] java.lang.IllegalStateException: Unable to load cache item
[ERROR] Caused by: Unable to load cache item
[ERROR] Caused by: Could not initialize class com.google.inject.internal.cglib.core.$MethodWrapper

My java version at the time was

openjdk version "17.0.2" 2022-10-18
OpenJDK Runtime Environment (build 17.0.2+8-Ubuntu-2ubuntu120.04)
OpenJDK 64-Bit Server VM (build 17.0.2+8-Ubuntu-2ubuntu120.04, mixed mode, sharing)

and my maven version at the time was

Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 17.0.2, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-17-openjdk-amd64
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "5.10.16.3-microsoft-standard-wsl2", arch: "amd64", family: "unix"

The cause of the error was that the installed Maven version (3.6.3) was too old to run under the newer JDK. I needed to upgrade to a more recent version of Maven.

Unfortunately, I could not upgrade to the latest Maven version (3.9.0 at the time) using the apt package manager on Ubuntu. Generally, the easiest way to install anything on Ubuntu is via the apt package manager; however, its repositories often do not include the latest packages.

These are the steps to install the latest maven version on Ubuntu:

  1. Download the latest Maven binaries

a. cd into the /tmp directory in your terminal

b. Check https://maven.apache.org/download.cgi and copy the link for the “Binary tar.gz archive” file.

c. Run the following command to download the binaries:

wget https://dlcdn.apache.org/maven/maven-3/3.9.6/binaries/apache-maven-3.9.6-bin.tar.gz

d. Extract the archive and move the new Maven into /usr/share:

tar -xvf apache-maven-3.9.6-bin.tar.gz
mv apache-maven-3.9.6 maven
# remove the apt-managed Maven first so the new directory can take its place at /usr/share/maven
sudo apt remove maven
sudo mv maven /usr/share/

Note: the latest Maven version at the time of writing was 3.9.6. Make sure to replace the version in the commands above with the version you are downloading.
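Finally, make the new installation the one your shell finds first. A minimal sketch, assuming the /usr/share/maven location used above:

export M2_HOME=/usr/share/maven
export PATH=$M2_HOME/bin:$PATH   # add these two lines to ~/.bashrc to persist them
mvn -version                     # should now report Apache Maven 3.9.6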

Tuesday, 12 December 2023

SQL Fundamentals

SQL Fundamentals Course

SQL Fundamentals Course Documentation

Table of Contents

  1. Oracle Cloud Account Setup
  2. Provisioning Oracle Autonomous Database
  3. Connecting to Oracle Autonomous Database
  4. SQL Development Tools Installation
  5. Lab Exercise 1: Setting Up SQL Environment
  6. Lab Exercise 2: Querying Data
  7. Lab Exercise 3: Exploring Joins
  8. Lab Exercise 4: Aggregating Data
  9. Lab Exercise 5: Modifying Data and Transactions
  10. Lab Exercise 6: Building a Blood Donation Database
  11. Final Project: Building a Blood Donation Database

1. Oracle Cloud Account Setup

Sign Up for an Oracle Cloud Account:

Go to the Oracle Cloud website and complete the sign-up process to create an account.

2. Provisioning Oracle Autonomous Database

Access Oracle Cloud Console:

Log in to the Oracle Cloud Console.

Create an Autonomous Database:

Navigate to the "Autonomous Database" section.

Click "Create Autonomous Database" and follow the setup wizard.

Provide details such as database name, username, and password.

Obtain Connection Details:

Once the Autonomous Database is provisioned, note down the connection details (hostname, port, service name, username, password).

3. Connecting to Oracle Autonomous Database

Download SQL Developer or Toad for Oracle:

Download and install Oracle SQL Developer or Toad for Oracle on your local machine.

Connect SQL Developer or Toad to Autonomous Database:

Open SQL Developer or Toad and create a new connection.

Use the connection details obtained earlier (hostname, port, service name, username, password) to connect to the Autonomous Database.

4. SQL Development Tools Installation

Install SQL Developer:

Download SQL Developer from the official website.

Follow the installation wizard to install it on your machine.

Install Toad for Oracle:

Download Toad for Oracle from the official website.

Follow the installation wizard to install Toad on your machine.

5. Lab Exercise 1: Setting Up SQL Environment

1. Install Toad for Oracle:

Download Toad for Oracle from the official website.

Follow the installation wizard to install Toad on your machine.

-- SQL Command: None, as it involves setting up Toad.

2. Connect to a Database:

Open Toad and click on "New Connection."

Enter your connection details, including username, password, and database connection details (hostname, port), and click "Connect."


-- SQL Command: None, as it involves setting up Toad.

3. Create a Sample Database and Table:

In the SQL Editor within Toad, execute the CREATE TABLE statement to create a table named users with columns id, name, and age.

CREATE TABLE users (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(50),
  age  NUMBER
);

CREATE SEQUENCE users_sequence START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER users_trigger
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
  SELECT users_sequence.nextval INTO :new.id FROM dual;
END;
/

4. Insert Sample Data:

Use the INSERT INTO statements to add sample data to the users table.

INSERT INTO users (name, age) VALUES ('John Doe', 25);
INSERT INTO users (name, age) VALUES ('Jane Smith', 30);

5. Execute Basic Queries:

In the SQL Editor, run a SELECT * FROM users; query to retrieve all data from the users table.

SELECT * FROM users;

6. Lab Exercise 2: Querying Data

1. Basic SELECT Statement:

Retrieve all columns from the users table:

SELECT * FROM users;

2. Filtering Data:

Retrieve users older than 25:

SELECT * FROM users WHERE age > 25;

3. Sorting Data:

Retrieve users sorted by age in descending order:

SELECT * FROM users ORDER BY age DESC;

4. Limiting Results:

Retrieve the first 5 users:

SELECT * FROM users WHERE ROWNUM <= 5;

7. Lab Exercise 3: Exploring Joins

1. Inner Join:

Retrieve information from two tables where there is a match:

SELECT users.id, users.name, orders.order_number FROM users INNER JOIN orders ON users.id = orders.user_id;

2. Left Join:

Retrieve all records from the left table and the matched records from the right table:

SELECT users.id, users.name, orders.order_number FROM users LEFT JOIN orders ON users.id = orders.user_id;

3. Right Join:

Retrieve all records from the right table and the matched records from the left table:

SELECT users.id, users.name, orders.order_number FROM users RIGHT JOIN orders ON users.id = orders.user_id;

4. Full Outer Join:

Retrieve all records when there is a match in either the left or right table:

SELECT users.id, users.name, orders.order_number FROM users FULL OUTER JOIN orders ON users.id = orders.user_id;

8. Lab Exercise 4: Aggregating Data

1. Counting Records:

Count the number of users in the users table:

SELECT COUNT(*) FROM users;

2. Grouping Data:

Group users by age and display the count in each group:

SELECT age, COUNT(*) FROM users GROUP BY age;

9. Lab Exercise 5: Modifying Data and Transactions

1. Updating Records:

Update the age of a user in the users table:

UPDATE users SET age = 28 WHERE name = 'John Doe';

2. Deleting Records:

Delete a user from the users table:

DELETE FROM users WHERE name = 'Jane Smith';

3. Transactions:

Use transactions to ensure atomicity for a series of SQL statements:

-- Oracle begins a transaction implicitly with the first DML statement
UPDATE users SET age = 26 WHERE name = 'John Doe';
-- ...more SQL statements within the transaction...
COMMIT;   -- make the changes permanent (or ROLLBACK; to undo them)

10. Lab Exercise 6: Building a Blood Donation Database

1. Create Tables:

Create tables for donors, donations, and recipients:

CREATE TABLE donors (
  donor_id   NUMBER PRIMARY KEY,
  donor_name VARCHAR2(50),
  blood_type VARCHAR2(5)
);

CREATE TABLE donations (
  donation_id   NUMBER PRIMARY KEY,
  donor_id      NUMBER,
  donation_date DATE,
  volume_ml     NUMBER,
  FOREIGN KEY (donor_id) REFERENCES donors(donor_id)
);

CREATE TABLE recipients (
  recipient_id   NUMBER PRIMARY KEY,
  recipient_name VARCHAR2(50),
  blood_type     VARCHAR2(5)
);

2. Insert Sample Data:

Insert sample data into each table:

INSERT INTO donors (donor_id, donor_name, blood_type) VALUES (1, 'John Smith', 'O+');
INSERT INTO donors (donor_id, donor_name, blood_type) VALUES (2, 'Jane Doe', 'A-');
INSERT INTO donations (donation_id, donor_id, donation_date, volume_ml) VALUES (1, 1, TO_DATE('2023-01-01', 'YYYY-MM-DD'), 500);
INSERT INTO donations (donation_id, donor_id, donation_date, volume_ml) VALUES (2, 2, TO_DATE('2023-02-15', 'YYYY-MM-DD'), 750);
INSERT INTO recipients (recipient_id, recipient_name, blood_type) VALUES (1, 'Alice Johnson', 'AB+');
INSERT INTO recipients (recipient_id, recipient_name, blood_type) VALUES (2, 'Bob Williams', 'B-');

3. Write Queries:

Write queries to retrieve information about donors, donations, and recipients.

-- Example queries
SELECT * FROM donors;
SELECT * FROM donations;
SELECT * FROM recipients;

11. Final Project: Building a Blood Donation Database

Project Overview:

For the final project, you will build a Blood Donation Database to manage information about blood donors, donations, and recipients.

Project Tasks:

  1. Create tables for donors, donations, and recipients.
  2. Insert sample data into each table.
  3. Write queries to retrieve information about donors, donations, and recipients.
  4. Implement basic CRUD operations (Create, Read, Update, Delete) for the database.

Project Submission:

Submit your SQL script containing all the queries and commands used to create and populate the Blood Donation Database.

Monday, 27 November 2023

Project DeliApp Nov 2023

    Deli Foods is an emerging restaurant business with a presence all over the United States.

They currently have a legacy web application written in Java and hosted on their private server: https://project-deliapp.s3.us-east-2.amazonaws.com/DeliApp/src/main/webapp/index.html

Updating the application is manual and usually takes 5 hours, which incurs a lot of downtime. This is hurting their business: clients get locked out, which gives their competitors the upper hand.




Your task is to migrate this application to the cloud and apply DevOps practices across their entire Software Development Life Cycle.

You should demonstrate concepts that implement Plan - Code - Build - Test - Deploy - Monitor.



TASK A - Documentation: Setup a Wiki Server for your Project (Containerization)

a. You can get the docker-compose file from the link below:

https://github.com/bitnami/containers/blob/main/bitnami/dokuwiki/docker-compose.yml

Or

Use the command below in your terminal to fetch the YAML and create a Docker Compose file:

curl -sSL https://raw.githubusercontent.com/bitnami/containers/main/bitnami/dokuwiki/docker-compose.yml -o docker-compose.yml

b. Mount your own data volume on this container.

Hint: do this by modifying the Docker Compose file; a sketch follows the documentation link below.



c. Change the default port of the wiki server so it runs on port 84

d. Change the default User and password

 to 

         Username: DeliApp

         Password:  admin

Hint: use the official image documentation to find the details needed to accomplish all of this:

https://github.com/bitnami/containers/tree/main/bitnami/dokuwiki#how-to-use-this-image
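As a minimal sketch, an override file could take care of hints b, c, and d in one go. This assumes the Bitnami image's documented DOKUWIKI_USERNAME/DOKUWIKI_PASSWORD variables, its /bitnami/dokuwiki persistence path, and a container port of 8080; verify all of these against the documentation linked above:

cat > docker-compose.override.yml <<'EOF'
services:
  dokuwiki:
    ports:
      - '84:8080'                          # serve the wiki on host port 84
    environment:
      - DOKUWIKI_USERNAME=DeliApp          # override the default admin user
      - DOKUWIKI_PASSWORD=admin
    volumes:
      - ./dokuwiki_data:/bitnami/dokuwiki  # bind-mount your own data volume
EOF
docker compose up -d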

TASK A  Acceptance Criteria: 

i. The wiki server should be up and running, serving on port 84

ii. Mount your own container volume to persist data

iii. Login with Credentials DeliApp/admin


TASK B: Version Control The DeliApp Project

Plan & Code

App Name: DeliApp

  • WorkStation A- Team  Osato- 3.142.247.23
  • WorkStation B - Team     -
Developer workstations are Windows machines. Your project supervisor will provide you their IP/DNS and the credentials you will use to log into the machine assigned to your group. You can use MobaXterm or Remote Desktop to connect. The username is Administrator.

When you access the Developer workstation assigned to your group, you will find the code base in the below location:
This PC:---->Desktop---->DeliApp



(You can use GitHub or Bitbucket.)

1) Set up 2 repos: a Build repo to store the code base and a Deployment repo to store your deployment scripts, and name them as shown below:

  • Build repo: DeliApp_Build ---> Developers' access
  • Deployment repo: DeliApp_Deployment ---> your team's access

2) Version control the DeliApp project located on the developers' workstation to enable the developers to migrate their code to the source control management tool (Bitbucket/Git)

  • Set up the developer workstations' SSH keys in Bitbucket to access the Build repo, and your team's (DevOps) workstation SSH keys in Bitbucket to access the Deployment repo

3)Git branching Strategy for DeliApp_Build

  • master
  • release: eg    release/release-v1
  • feature:   eg  feature/feature-v1
  • develop

4)Git branching Strategy for DeliApp_Deploy

  • master
  • feature eg feature/feature-v1
  • develop



5. Secure the repos by installing git-secrets on your build (DeliApp_Build) and deployment (DeliApp_Deploy) repos -- PRE-COMMIT HOOK

6. Prevent the developers and your team from pushing code directly to master by installing a PRE-PUSH HOOK (see the sketch below)
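A rough sketch of both hooks, run inside each cloned repo (assumes git-secrets is installed via your package manager):

# pre-commit: scan commits for secrets with git-secrets
git secrets --install          # writes hook scripts into .git/hooks
git secrets --register-aws     # register the built-in AWS key patterns

# pre-push: block direct pushes to master
cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
while read local_ref local_sha remote_ref remote_sha; do
  if [ "$remote_ref" = "refs/heads/master" ]; then
    echo "Direct pushes to master are not allowed." >&2
    exit 1
  fi
done
EOF
chmod +x .git/hooks/pre-push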

TASK B Acceptance Criteria: 

1. You should be able to push and pull code from the Developer Workstation assigned to your Team to the DeliApp_Build repo in Source Control Management(SCM) 

2. Your Team (Devops) Should be able to pull and push code from your individual workstations to the DeliApp_Deploy repo

3. Demonstrate the git branching Strategy

4. Your git commit should throw an error when there is a secret in your repo

Hint: add a text file containing some secrets, e.g., an AWS secret key/access key, and commit

5. You should get an Error when you try to push to master

TASK C: Set up your Infrastructure

1. Set up your environments: DEV, UAT, QA, PROD A, PROD B

Provision 5 Apache Tomcat servers, one per environment. (You can use any IaC tool: Terraform, CloudFormation, Ansible Tower.) You can host them on any cloud provider: AWS, Google Cloud, Azure.

i. DEV - t2.micro, 8 GB

ii. UAT (User Acceptance Testing) - t2.small, 10 GB

iii. QA (Quality Assurance) - t2.large, 20 GB

iv. PROD A - t2.xlarge, 30 GB

v. PROD B - t2.xlarge, 30 GB

Apache Tomcat servers should be exposed on port 4444

Linux distribution for the Apache Tomcat servers: Ubuntu 18.04

Note: when bootstrapping your servers, make sure you install the Datadog Agent. An illustrative bootstrap follows below.
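As an illustration, the bootstrapping could look like this with the AWS CLI; the AMI, key, and security group IDs are placeholders, and the Datadog API key is your own:

cat > bootstrap.sh <<'EOF'
#!/bin/bash
apt-get update && apt-get install -y tomcat8
# expose Tomcat on port 4444 instead of the default 8080
sed -i 's/port="8080"/port="4444"/' /etc/tomcat8/server.xml
systemctl restart tomcat8
# install the Datadog Agent (replace <your-api-key> with your key)
DD_API_KEY=<your-api-key> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)"
EOF

aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
  --key-name my-key --security-group-ids sg-xxxxxxxx \
  --user-data file://bootstrap.sh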


2. Set up your DevOps tools servers:

(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution on these, e.g., Amazon Linux 2, Debian, Ubuntu, etc.)

NOTE: USE AZURE CLOUD FOR BELOW

1 Ansible Tower server - t2.2xlarge, 15 GB

1 Kubernetes server - you can use EKS, k3s, kubeadm, or minikube

1 Jenkins (CI/CD) server - t2.xlarge, 20 GB

1 vulnerability scanning tool server - OWASP ZAP (install on a Windows instance). See: https://www.devopstreams.com/2022/06/getting-started-with-owasp-zap.html

Install Helm on your Kubernetes server (k3s, EKS, kubeadm, minikube) and install the following with Helm (a possible set of commands is sketched below):

Install SonarQube

Install Artifactory
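A possible set of commands, using the publicly available charts (the repo URLs and release names here are assumptions; check each chart's documentation):

helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm install sonarqube sonarqube/sonarqube --namespace sonarqube --create-namespace
helm install artifactory jfrog/artifactory --namespace artifactory --create-namespace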

Bonus Task:

Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers.

Register a domain using Route 53, e.g., www.teamdevops.com

Point that domain to the Elastic/Application Load Balancer.

Acceptance criteria: when you enter your domain in the browser, it should point to either Prod A or Prod B.

TASK D: Monitoring

a. Set up continuous monitoring with Datadog by installing Datadog Agent on all your servers

 Acceptance criteria: 

i. All your infrastructure server metrics should be monitored (infrastructure monitoring)

ii. All running processes on all your servers should be monitored (process monitoring)

iii. Tag all your servers on the Datadog dashboard

TASK E: Domain Name System

a. Register a Domain for your Team

i. You can use Route 53, GoDaddy, or any DNS service of your choice

eg. www.team-excellence.com


TASK F: Set Up Automated Build for Developers 

The Developers make use of Maven to Compile the code

a. Set up a CI pipeline in Jenkins using a Jenkinsfile

b. Enable webhooks in Bitbucket to trigger automated builds of the pipeline job

c. The CI pipeline job should run on an agent (slave)

d. Help the developers version their artifacts, so that each build has a unique artifact version

Tips: https://jfrog.com/knowledge-base/configuring-build-artifacts-with-appropriate-build-numbers-for-jenkins-maven-project/


Pipeline job Name: DeliApp_Build

The pipeline should check out the code from SCM, build it with the Maven build tool, provide code analysis and code coverage with SonarQube, upload artifacts to Artifactory, send email to the team, and version the artifacts. A sketch of the shell steps behind such a pipeline follows below.

The pipeline should send a Slack channel notification with the build status.
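For illustration, the shell steps behind such a pipeline's stages might look roughly like this (hostnames, tokens, and repository paths are placeholders):

# build, run tests, and push analysis and coverage to SonarQube
mvn clean verify sonar:sonar -Dsonar.host.url=http://<sonarqube-host>:9000 -Dsonar.login=<token>
# give every build a unique artifact version, then publish to Artifactory
VERSION=1.0.${BUILD_NUMBER}
mvn versions:set -DnewVersion=${VERSION}
mvn deploy -DaltDeploymentRepository=artifactory::default::http://<artifactory-host>/artifactory/libs-release-local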


i. Acceptance Criteria:

 Automated build after code is pushed to the repository

1. Sonar Analysis on the sonarqube server

2. Artifact uploaded to artifactory

3. Email notification on success or failure

4. Slack Channel Notification

5. Each artifact has a unique version number

6. Code coverage displayed

TASK G: Deploy & Operate (Continuous Deployment)

a. Set up a CD pipeline in Jenkins using a Jenkinsfile

Create 4 CD pipeline jobs, one per environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments

Pipeline job name: e.g., DeliApp_Dev_Deploy


i. The pipeline should be able to deploy to any of your LLEs (Dev, UAT, QA) or HLEs (Prod A, Prod B)

You can use the Deploy to Container plugin in Jenkins, or deploy with Ansible Tower to pull the artifact from Artifactory and deploy it to Dev, UAT, QA, or Prod

ii. The pipeline should send a Slack channel notification to report deployment status

iii. The pipeline should have email notification

iv. Deployment gate

1. Acceptance criteria:

i. Deployment is seen and verified in either Dev, Uat, Qa or Prod

ii. Notification is seen in slack channel

iii. Email notification

TASK H:a.  Deployment and Rollback

a. Automate the manual deployment of a specific version of the Deli application using Ansible Tower

The manual deployment process is below (a scripted sketch follows the steps):


Step 1: log in to the Tomcat server

Step 2: download the artifact

Step 3: switch to root

Step 4: extract the artifact to the deployment folder

Deployment folder: /var/lib/tomcat8/webapps

Use service ID: ubuntu
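The same steps as a bash sketch (the host, repository path, and version are placeholders; an Ansible Tower job would run something equivalent on the target):

# log in to the Tomcat server as the ubuntu service account and deploy a specific version
ssh ubuntu@<tomcat-server> <<'EOF'
wget http://<artifactory-host>/artifactory/libs-release-local/deliapp/<version>/deliapp-<version>.war -O /tmp/deliapp.war
sudo cp /tmp/deliapp.war /var/lib/tomcat8/webapps/   # Tomcat extracts the WAR on deployment
EOF

Rolling back is the same script pointed at an older artifact version.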


Acceptance Criteria:

i. Deploy a new artifact from Artifactory to either Dev, UAT, QA, or Prod

ii. Roll back to an older artifact from Artifactory in either Dev, UAT, QA, or Prod

iii. All credentials should be encrypted

TASK H:b. Domain Name Service and Load Balancing

i. Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers

ii. Configure your DNS with Route 53 such that entering your domain, e.g., www.team-excellence.com, directs you to the load balancer, which in turn points to Prod A or Prod B (a sketch follows below)
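An abbreviated AWS CLI sketch of the load balancer and DNS wiring (subnet, VPC, instance, and hosted zone IDs are placeholders):

aws elbv2 create-load-balancer --name deliapp-alb --subnets subnet-aaaa subnet-bbbb
aws elbv2 create-target-group --name deliapp-prod --protocol HTTP --port 4444 --vpc-id vpc-xxxx
aws elbv2 register-targets --target-group-arn <tg-arn> --targets Id=<prod-a-instance-id> Id=<prod-b-instance-id>
# then create a listener on the ALB and point an alias A record at it in Route 53
aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://alias.json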

Acceptance criteria: 

i. Your team domain name, e.g., www.mint.com, will take you to your application residing on Prod A or Prod B

 

TASK I:

a. Set up a 3-node Kubernetes cluster (container orchestration) with namespaces dev, qa, prod

  • Using a Jenkins pipeline or Jenkins job - the pipeline or job should be able to create/delete the cluster

b. Dockerize the DeliApp

  • You can use a Dockerfile to create the image, or the OpenShift source-to-image tool

c. Deploy the Dockerized DeliApp into the prod namespace of the cluster (you can use dev and qa for testing)

d. Expose the application using a load balancer or NodePort

e. Monitor your cluster using Prometheus and Grafana (a sketch of the core commands follows)
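For illustration, the core commands might look like this (assumes kubectl access to the cluster and a container registry you can push to; names are placeholders):

kubectl create namespace dev && kubectl create namespace qa && kubectl create namespace prod
docker build -t <registry>/deliapp:1.0 .        # Dockerfile at the repo root
docker push <registry>/deliapp:1.0
kubectl -n prod create deployment deliapp --image=<registry>/deliapp:1.0
kubectl -n prod expose deployment deliapp --type=NodePort --port=8080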
TASK I Acceptance Criteria:

1. You should be able to create/delete a kubernetes cluster

2. Be able to deploy your application into any Namespace(Dev,Qa,Prod)

3. You should be able to access the application through Nodeport or LoadBalancer

4. You should be able to monitor your cluster in Grafana

TASK J: Demonstrate Bash Automation of 

i. Tomcat

ii. Jenkins

iii. Apache


Acceptance criteria: 

1. Show bash scripts and successfully execute them (an example sketch follows)
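As an example of the kind of script this task expects, a minimal install-and-start script for Tomcat (assumes Ubuntu; the package name varies by release):

#!/bin/bash
set -e
sudo apt-get update
sudo apt-get install -y tomcat8            # tomcat9 on newer Ubuntu releases
sudo systemctl enable --now tomcat8        # start Tomcat and enable it at boot
curl -fs http://localhost:8080 >/dev/null && echo "Tomcat is up"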


Wednesday, 1 November 2023

Year-End Blitz: DevOps Mastery at $1500 – Secure Your Future!

Wednesday, 13 September 2023

Project September 23 - Jack Piro

Violet Streams Resources (VSR) is a software consulting firm that builds web applications in the gaming space.

They currently have a legacy web application written in Java and hosted on their private server: https://projectjackpirodevops.s3.us-east-2.amazonaws.com/devopsgroup_a-devopsproject-faa298228141/JackPiro/src/main/webapp/index.html

Updates to this application are done manually, which incurs a lot of downtime.




Your task is to migrate this application to the cloud and apply DevOps practices across their entire Software Development Life Cycle.

You should demonstrate concepts that implement Plan - Code - Build - Test - Deploy - Monitor.

TASK A - Documentation: Setup a Wiki Server for your Project (Containerization)

a. You can get the docker-compose file from the link below:

https://github.com/bitnami/containers/blob/main/bitnami/dokuwiki/docker-compose.yml

Or

Use the command below in your terminal to fetch the YAML and create a Docker Compose file:

curl -sSL https://raw.githubusercontent.com/bitnami/containers/main/bitnami/dokuwiki/docker-compose.yml -o docker-compose.yml

b. Mount your own data volume on this container.

Hint: do this by modifying the Docker Compose file (see the sketch in the DeliApp project above).



c. Change the default port of the wiki server so it runs on port 100

d. Change the default username and password to:

         Username: Jackpiro

         Password: admin

Hint: use the official image documentation to find the details needed to accomplish all of this:

https://github.com/bitnami/containers/tree/main/bitnami/dokuwiki#how-to-use-this-image

TASK A  Acceptance Criteria: 

i. The wiki server should be up and running, serving on port 100

ii. Mount your own container volume to persist data

iii. Login with Credentials Jackpiro/admin

TASK B: Version Control The JackPiro Project

Plan & Code

App Name: JackPiro

  • WorkStation A- Team - 3.129.65.16
  • WorkStation B- Team - 18.118.167.59
Developer workstations are Windows machines. Your project supervisor will provide you the password you will use to log into the machine assigned to your group. You can use MobaXterm or Remote Desktop to connect. The username is Administrator.

When you access the Developer workstation assigned to your group, you will find the code base in the below location:
C:---->Documents---->App--->JackPiro


(You can use GitHub or Bitbucket.)

1) Set up 2 repos: a Build repo to store the code base and a Deployment repo to store your deployment scripts, and name them as shown below:

  • Build repo : JackPiro_Build  --->Developers Access
  • Deployment repo: JackPiro_Deployment   --->-Your Team Access

2) Version control the JackPiro project located on the developers' workstation to enable the developers to migrate their code to the source control management tool (Bitbucket/Git)

  • Set up the developer workstations' SSH keys in Bitbucket to access the Build repo, and your team's (DevOps) workstation SSH keys in Bitbucket to access the Deployment repo

3)Git branching Strategy for JackPiro_Build

  • master
  • release: eg    release/release-v1
  • feature:   eg  feature/feature-v1
  • develop

4) Git branching Strategy for JackPiro_Deploy

  • master
  • feature eg feature/feature-v1
  • develop

TASK B Acceptance Criteria: 

1. You should be able to push and pull code from the Developer Workstation assigned to your Team to the JackPiro_Build repo in Source Control Management(SCM) 

2. Your Team (Devops) Should be able to pull and push code from your individual workstations to the JackPiro_Deploy repo

3. Demonstrate the git branching Strategy

TASK C: Set up your Infrastructure

1. Set up your environments: DEV, UAT, QA, PROD A, PROD B

Provision 5 Apache Tomcat servers, one per environment. (You can use any IaC tool: Terraform, CloudFormation, Ansible Tower.) You can host them on any cloud provider: AWS, Google Cloud, Azure.

i. DEV - t2.micro, 8 GB

ii. UAT (User Acceptance Testing) - t2.small, 10 GB

iii. QA (Quality Assurance) - t2.large, 20 GB

iv. PROD A - t2.xlarge, 30 GB

v. PROD B - t2.xlarge, 30 GB

Apache Tomcat servers should be exposed on port 4444

Linux distribution for the Apache Tomcat servers: Ubuntu 16.04

Note: when bootstrapping your servers, make sure you install the Datadog Agent

2. Set up your DevOps tools servers:

(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution on these, e.g., Amazon Linux 2, Debian, Ubuntu, etc.)

NOTE: USE AZURE CLOUD FOR BELOW

1 Ansible Tower server - t2.2xlarge, 15 GB

1 Kubernetes server - you can use EKS, k3s, kubeadm, or minikube

1 Jenkins (CI/CD) server - t2.xlarge, 20 GB

Install Helm on your Kubernetes server (k3s, EKS, kubeadm, minikube) and install the following with Helm:

Install SonarQube

Install Artifactory

Bonus Task:

Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers.

Register a domain using Route 53, e.g., www.teamdevops.com

Point that domain to the Elastic/Application Load Balancer.

Acceptance criteria: when you enter your domain in the browser, it should point to either Prod A or Prod B.

TASK E: Set Up Automated Build for Developers 

The Developers make use of Maven to Compile the code

a. Set up a CI pipeline in Jenkins using a Jenkinsfile

b. Enable webhooks in Bitbucket to trigger automated builds of the pipeline job

c. The CI pipeline job should run on an agent (slave)

d. Help the developers version their artifacts, so that each build has a unique artifact version

Tips: https://jfrog.com/knowledge-base/configuring-build-artifacts-with-appropriate-build-numbers-for-jenkins-maven-project/


Pipeline job Name: JackPiro_Build

The pipeline should check out the code from SCM, build it with the Maven build tool, provide code analysis and code coverage with SonarQube, upload artifacts to Artifactory, send email to the team, and version the artifacts.

The pipeline should send a Slack channel notification with the build status.


i. Acceptance Criteria:

 Automated build after code is pushed to the repository

1. Sonar Analysis on the sonarqube server

2. Artifact uploaded to artifactory

3. Email notification on success or failure

4. Slack Channel Notification

5. Each artifact has a unique version number

6. Code coverage displayed


TASK F: Deploy & Operate (Continuous Deployment)

a. Set up a CD pipeline in Jenkins using a Jenkinsfile

Create 4 CD pipeline jobs, one per environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments

Pipeline job name: e.g., JackPiro_Dev_Deploy


i. The pipeline should be able to deploy to any of your LLEs (Dev, UAT, QA) or HLEs (Prod A, Prod B)

You can use the Deploy to Container plugin in Jenkins, or deploy with Ansible Tower to pull the artifact from Artifactory and deploy it to Dev, UAT, QA, or Prod

ii. The pipeline should send a Slack channel notification to report deployment status

iii. The pipeline should have email notification

iv. Deployment gate

1. Acceptance criteria:

i. Deployment is seen and verified in either Dev, Uat, Qa or Prod

ii. Notification is seen in slack channel

iii. Email notification


TASK G: Monitoring

a. Set up continuous monitoring with Datadog by installing the Datadog Agent on all your servers

 Acceptance criteria: 

i. All your infrastructure server metrics should be monitored (infrastructure monitoring)

ii. All running processes on all your servers should be monitored (process monitoring)

iii. Tag all your servers on the Datadog dashboard


TASK H: Deployment and Rollback

a. Automate the manual deployment of a specific version of the JackPiro application using Ansible Tower

The manual deployment process is below:


Step 1: log in to the Tomcat server

Step 2: download the artifact

Step 3: switch to root

Step 4: extract the artifact to the deployment folder

Deployment folder: /var/lib/tomcat8/webapps

Use service ID: ubuntu


Acceptance Criteria:

i. Deploy a new artifact from Artifactory to either Dev, UAT, QA, or Prod

ii. Roll back to an older artifact from Artifactory in either Dev, UAT, QA, or Prod

iii. All credentials should be encrypted


TASK I: Demonstrate Bash Automation of 

i. Tomcat

ii. Jenkins

iii. Apache


Tuesday, 15 August 2023

Install Prometheus and Grafana on K3s (Using Helm)

 


Prometheus is an open-source monitoring and alerting tool that collects and stores time-series data, while Grafana is a popular data visualization platform that allows you to create interactive dashboards and visualizations.

By combining these tools, you can gain valuable insights into your Kubernetes cluster’s performance and health, making it easier to identify and troubleshoot issues. However, setting up this stack can be a daunting task, especially if you’re not familiar with the process.

That’s why I’m excited to provide you with a comprehensive tutorial that will guide you through the entire process step by step, from installing k3s to configuring Prometheus and Grafana. With this tutorial, you’ll be able to install and configure this powerful monitoring stack in just 5 minutes, saving you a lot of time and effort.

  1. Clone the k3s-monitoring repository:

    git clone https://github.com/cablespaghetti/k3s-monitoring.git

  2. Change into the repository directory:

    cd k3s-monitoring

  3. Add the Prometheus Helm chart repository:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

  4. Install Prometheus and Grafana:

    helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --version 39.13.3 --values kube-prometheus-stack-values.yaml

  5. Point kubectl at the k3s kubeconfig:

    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

  6. Edit the service for Grafana to use a NodePort:

    kubectl edit service/prometheus-grafana

    Then change the type to NodePort and save.
  7. Access Grafana:

    http://<your-k3s-node-ip>:<nodeport>/login

    Use the following credentials to log in:

    • user: admin
    • pass: prom-operator
  8. Import the desired dashboards.
    • Type 1860 in the search box to find the Node Exporter Full dashboard, which gives a complete view of the node's resources.
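To find the NodePort that was assigned in step 6 (assuming the default service name), you can run:

kubectl get service prometheus-grafana
# the PORT(S) column shows a mapping like 80:3xxxx/TCP; the 3xxxx value is the NodePort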
[Screenshot: importing the dashboard from grafana.com]

You’ll now be able to see all of the node's resources and their usage:

[Screenshot: the Node Exporter Full dashboard]


Thursday, 27 July 2023

All you need to know about Helm, Creating a Chart with Helm

7.1. Creating a Chart

The first step, of course, would be to create a new chart with a given name:

helm create hello-world

Please note that the name of the chart provided here will be the directory's name where the chart is created and stored.

Let's quickly see the directory structure created for us:

hello-world /
  Chart.yaml
  values.yaml
  templates /
  charts /
  .helmignore

Let's understand the relevance of these files and folders created for us:

  • Chart.yaml: This is the main file that contains the description of our chart
  • values.yaml: this is the file that contains the default values for our chart
  • templates: This is the directory where Kubernetes resources are defined as templates
  • charts: This is an optional directory that may contain sub-charts
  • .helmignore: This is where we can define patterns to ignore when packaging (similar in concept to .gitignore)

7.2. Creating Templates

If we look inside the templates directory, we'll notice that a few templates for common Kubernetes resources have already been created for us:

hello-world /
  templates /
    deployment.yaml
    service.yaml
    ingress.yaml
    ......

We may need some of these, and possibly other resources, in our application; those we'll have to create ourselves as templates.

For this tutorial, we'll create a deployment and service to expose that deployment. Please note that the emphasis here is not to understand Kubernetes in detail. Hence we'll keep these resources as simple as possible.

Let's edit the file deployment.yaml inside the templates directory to look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hello-world.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "hello-world.name" . }}
    helm.sh/chart: {{ include "hello-world.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hello-world.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hello-world.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP

Similarly, let's edit the file service.yaml to look like:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "hello-world.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "hello-world.name" . }}
    helm.sh/chart: {{ include "hello-world.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "hello-world.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}

Now, with our knowledge of Kubernetes, these template files look quite familiar, except for some oddities. Note the liberal usage of text within double curly braces {{ }}. This is what is called a template directive.

Helm makes use of the Go template language and extends that to something called Helm template language. During the evaluation, every file inside the template directory is submitted to the template rendering engine. This is where the template directive injects actual values into the templates.

7.3. Providing Values

In the previous sub-section, we saw how to use the template directive in our templates. Now, let's understand how we can pass values to the template rendering engine. We typically pass values through Built-in Objects in Helm.

There are many such objects available in Helm, like Release, Values, Chart, and Files.

We can use the file values.yaml in our chart to pass values to the template rendering engine through the Built-in Object Values. Let's modify the values.yaml to look like:

replicaCount: 1
image:
  repository: "hello-world"
  tag: "1.0"
  pullPolicy: IfNotPresent
service:
  type: NodePort
  port: 80

However, note how these values are accessed within templates using dots to separate namespaces. We have set the image repository and tag to “hello-world” and “1.0”; this must match the Docker image tag we created for our Spring Boot application.

8. Managing Charts

With everything done so far, we're now ready to play with our chart. Let's see the different commands available in the Helm CLI to make this fun! Please note that we'll only cover some of the commands available in Helm.

8.1. Helm Lint

Firstly, this is a simple command that takes the path to a chart and runs a battery of tests to ensure that the chart is well-formed:

helm lint ./hello-world
==> Linting ./hello-world
1 chart(s) linted, no failures

The output displays the result of the linting with issues that it identifies.

8.2. Helm Template

We also have this command to render the template locally for quick feedback:

helm template ./hello-world
---
# Source: hello-world/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-hello-world
  labels:
    app.kubernetes.io/name: hello-world
    helm.sh/chart: hello-world-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: hello-world
    app.kubernetes.io/instance: release-name

---
# Source: hello-world/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-hello-world
  labels:
    app.kubernetes.io/name: hello-world
    helm.sh/chart: hello-world-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hello-world
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hello-world
        app.kubernetes.io/instance: release-name
    spec:
      containers:
        - name: hello-world
          image: "hello-world:1.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP

Please note that this command fakes the values that are otherwise expected to be retrieved in the cluster.

8.3. Helm Install

Once we've verified the chart to be fine, finally, we can run this command to install the chart into the Kubernetes cluster:

helm install --name hello-world ./hello-world
NAME:   hello-world
LAST DEPLOYED: Mon Feb 25 15:29:59 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME         TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
hello-world  NodePort  10.110.63.169  <none>       80:30439/TCP  1s

==> v1/Deployment
NAME         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hello-world  1        0        0           0          1s

==> v1/Pod(related)
NAME                          READY  STATUS   RESTARTS  AGE
hello-world-7758b9cdf8-cs798  0/1    Pending  0         0s

This command also provides several options to override the values in a chart. Note that we've named the release of this chart with the flag --name. The command responds with a summary of the Kubernetes resources created in the process.
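For instance, a value from values.yaml can be overridden at install time with --set (shown here with the Helm 2-style --name flag used throughout this section):

helm install --name hello-world ./hello-world --set replicaCount=2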

8.4. Helm Get

Now, we would like to see which charts are installed as what release. This command lets us query the named releases:

helm ls --all
NAME            REVISION        UPDATED                         STATUS          CHART               APP VERSION NAMESPACE
hello-world     1               Mon Feb 25 15:29:59 2019        DEPLOYED        hello-world-0.1.0   1.0         default

There are several sub-commands available for this command to get extended information. These include all, hooks, manifest, notes, and values.

8.5. Helm Upgrade

What if we've modified our chart and need to install the updated version? This command helps us to upgrade a release to a specified or current version of the chart or configuration:

helm upgrade hello-world ./hello-world
Release "hello-world" has been upgraded. Happy Helming!
LAST DEPLOYED: Mon Feb 25 15:36:04 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME         TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
hello-world  NodePort  10.110.63.169  <none>       80:30439/TCP  6m5s

==> v1/Deployment
NAME         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hello-world  1        1        1           1          6m5s

==> v1/Pod(related)
NAME                          READY  STATUS   RESTARTS  AGE
hello-world-7758b9cdf8-cs798  1/1    Running  0         6m4s

Please note that with Helm 3, a release upgrade uses a three-way strategic merge patch. Here, it considers the old manifest, the live cluster state, and the new manifest when generating a patch. Helm 2 used a two-way strategic merge patch, which discarded changes applied to the cluster outside of Helm.

8.6. Helm Rollback

It can always happen that a release went wrong and needs to be taken back. This is the command to roll back a release to the previous versions:

helm rollback hello-world 1
Rollback was a success! Happy Helming!

We can specify a specific version to roll back to, or leave this argument blank, in which case it rolls back to the previous version.

8.7. Helm Uninstall

Although less likely, we may want to uninstall a release completely. We can use this command to uninstall a release from Kubernetes:

helm uninstall hello-world
release "hello-world" deleted

It removes all of the resources associated with the last release of the chart and the release history.

9. Distributing Charts

While templating is a powerful tool that Helm brings to the world of managing Kubernetes resources, it's not the only benefit of using Helm. As we saw in the previous section, Helm acts as a package manager for the Kubernetes application and makes installing, querying, upgrading, and deleting releases pretty seamless.

In addition to this, we can also use Helm to package, publish, and fetch Kubernetes applications as chart archives. We can also use the Helm CLI for this as it offers several commands to perform these activities. As before, we'll not cover all the available commands.

9.1. Helm Package

Firstly, we need to package the charts we've created to be able to distribute them. This is the command to create a versioned archive file of the chart:

helm package ./hello-world
Successfully packaged chart and saved it to: \hello-world\hello-world-0.1.0.tgz

Note that it produces an archive on our machine that we can distribute manually or through public or private chart repositories. We also have an option to sign the chart archive.

9.2. Helm Repo

Finally, we need a mechanism to work with shared repositories to collaborate. There are several sub-commands available within this command that we can use to add, remove, update, list, or index chart repositories. Let's see how we can use them.

We can create a git repository and use that to function as our chart repository. The only requirement is that it should have an index.yaml file.

We can create index.yaml for our chart repo:

helm repo index my-repo/ --url https://<username>.github.io/my-repo

This generates the index.yaml file, which we should push to the repository along with the chart archives.

After successfully creating the chart repository, we can then add this repo remotely:

helm repo add my-repo https://my-pages.github.io/my-repo

Now, we should be able to install the charts from our repo directly:

helm install my-repo/hello-world --name=hello-world

There are quite a few commands available to work with chart repositories.

helm search repo <KEYWORD>

There are sub-commands available for this command that allow us to search different locations for charts. For instance, we can search for charts in the Artifact Hub or in our own repositories. Further, we can search for a keyword in the charts available in all the repositories we've configured.

10. Migration from Helm 2 to Helm 3

Since Helm has been in use for a while, it's natural to wonder about the future of Helm 2, given the significant changes that came with Helm 3. While it's advisable to start with Helm 3 if we are starting fresh, support for Helm 2 will continue for the near future. There are caveats, though, and we will have to make the necessary accommodations.

Some of the important changes to note include that Helm 3 no longer automatically generates the release name. However, there is a flag (--generate-name) that we can use to generate one. Moreover, namespaces are no longer created when a release is created. We should create the namespaces in advance.

But there are a couple of options for a project that uses Helm 2 and wishes to migrate to Helm 3. First, we can use Helm 2 and Helm 3 to manage the same cluster and slowly drain away Helm 2 releases while using Helm 3 for new releases. Alternatively, we can decide to manage Helm 2 releases using Helm 3. While this can be tricky, Helm provides a plugin to handle this type of migration.

11. Conclusion

To sum up, in this tutorial, we discussed the core components of Helm, a package manager for Kubernetes applications. We understood the options to install Helm. Furthermore, we went through creating a sample chart and templates with values.

Then, we went through multiple commands available as part of the Helm CLI to manage the Kubernetes application as a Helm package. Finally, we discussed the options for distributing Helm packages through repositories. In the process, we saw the changes that have been made in Helm 3 compared to Helm 2.
