Friday, 22 December 2023

How to upgrade Maven

 

java.lang.IllegalStateException

I had installed Maven on my Ubuntu machine using the command:

apt install maven

This installed Maven at /usr/share/maven.

Months later, I encountered a Maven exception while compiling a Java project. The error was as follows:

[ERROR] Error executing Maven.
[ERROR] java.lang.IllegalStateException: Unable to load cache item
[ERROR] Caused by: Unable to load cache item
[ERROR] Caused by: Could not initialize class com.google.inject.internal.cglib.core.$MethodWrapper

My Java version at the time was:

openjdk version "17.0.2" 2022-10-18
OpenJDK Runtime Environment (build 17.0.2+8-Ubuntu-2ubuntu120.04)
OpenJDK 64-Bit Server VM (build 17.0.2+8-Ubuntu-2ubuntu120.04, mixed mode, sharing)

and my Maven version at the time was:

Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 17.0.2, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-17-openjdk-amd64
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "5.10.16.3-microsoft-standard-wsl2", arch: "amd64", family: "unix"

The cause of the error was that my Maven version (3.6.3) was too old. I needed to upgrade to the latest version of Maven.

Unfortunately, I could not upgrade to the latest Maven version (3.9.0 at the time) using the apt package manager on Ubuntu. Generally, the easiest way to install anything on Ubuntu is via the apt package manager; however, its repositories often lag behind the latest releases.

These are the steps to install the latest Maven version on Ubuntu:

  1. Download the latest maven binaries

a. cd into the /tmp directory on your terminal

b. Check https://maven.apache.org/download.cgi and copy the link for the “Binary tar.gz archive” file.

c. Run the following command to download the binaries:

wget https://dlcdn.apache.org/maven/maven-3/3.9.6/binaries/apache-maven-3.9.6-bin.tar.gz

d. Untar the archive and move the extracted directory to /usr/share:

tar -xvf apache-maven-3.9.6-bin.tar.gz
mv apache-maven-3.9.6 maven
sudo rm -rf /usr/share/maven    # remove the old apt-installed Maven first (or run: sudo apt remove maven)
sudo mv maven /usr/share/

Note: the latest Maven version at the time of writing was 3.9.6. Make sure to replace the version in the commands above with the version you are downloading.
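
Finally, verify the upgrade. A minimal sketch, assuming the new Maven now lives at /usr/share/maven:

export M2_HOME=/usr/share/maven
export PATH="$M2_HOME/bin:$PATH"    # put the new mvn first on the PATH
mvn -version                        # should now report Apache Maven 3.9.6

Add the two export lines to your ~/.bashrc to make the change permanent.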

Tuesday, 12 December 2023

SQL Fundamentals

SQL Fundamentals Course Documentation

Table of Contents

  1. Oracle Cloud Account Setup
  2. Provisioning Oracle Autonomous Database
  3. Connecting to Oracle Autonomous Database
  4. SQL Development Tools Installation
  5. Lab Exercise 1: Setting Up SQL Environment
  6. Lab Exercise 2: Querying Data
  7. Lab Exercise 3: Exploring Joins
  8. Lab Exercise 4: Aggregating Data
  9. Lab Exercise 5: Modifying Data and Transactions
  10. Lab Exercise 6: Building a Blood Donation Database
  11. Final Project: Building a Blood Donation Database

1. Oracle Cloud Account Setup

Sign Up for an Oracle Cloud Account:

Go to the Oracle Cloud website and create an account.

2. Provisioning Oracle Autonomous Database

Access Oracle Cloud Console:

Log in to the Oracle Cloud Console.

Create an Autonomous Database:

Navigate to the "Autonomous Database" section.

Click "Create Autonomous Database" and follow the setup wizard.

Provide details such as database name, username, and password.

Obtain Connection Details:

Once the Autonomous Database is provisioned, note down the connection details (hostname, port, service name, username, password).

3. Connecting to Oracle Autonomous Database

Download SQL Developer or Toad for Oracle:

Download and install Oracle SQL Developer or Toad for Oracle on your local machine.

Connect SQL Developer or Toad to Autonomous Database:

Open SQL Developer or Toad and create a new connection.

Use the connection details obtained earlier (hostname, port, service name, username, password) to connect to the Autonomous Database.

4. SQL Development Tools Installation

Install SQL Developer:

Download SQL Developer from the official website.

Follow the installation wizard to install it on your machine.

Install Toad for Oracle:

Download Toad for Oracle from the official website.

Follow the installation wizard to install Toad on your machine.

5. Lab Exercise 1: Setting Up SQL Environment

1. Install Toad for Oracle:

Download Toad for Oracle from the official website.

Follow the installation wizard to install Toad on your machine.

-- SQL Command: None, as it involves setting up Toad.

2. Connect to a Database:

Open Toad and click on "New Connection."

Enter your connection details, including username, password, and database connection details (hostname, port), and click "Connect."


-- SQL Command: None, as it involves setting up Toad.

3. Create a Sample Database and Table:

In the SQL Editor within Toad, execute the CREATE TABLE statement to create a table named users with columns id, name, and age.

CREATE TABLE users (
    id NUMBER PRIMARY KEY,
    name VARCHAR2(50),
    age NUMBER
);

ALTER TABLE users MODIFY id int NOT NULL;

CREATE SEQUENCE users_sequence START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER users_trigger
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    SELECT users_sequence.nextval INTO :new.id FROM dual;
END;

4. Insert Sample Data:

Use the INSERT INTO statements to add sample data to the users table.

INSERT INTO users (name, age) VALUES ('John Doe', 25);
INSERT INTO users (name, age) VALUES ('Jane Smith', 30);

5. Execute Basic Queries:

In the SQL Editor, run a SELECT * FROM users; query to retrieve all data from the users table.

SELECT * FROM users;

6. Lab Exercise 2: Querying Data

1. Basic SELECT Statement:

Retrieve all columns from the users table:

SELECT * FROM users;

2. Filtering Data:

Retrieve users older than 25:

SELECT * FROM users WHERE age > 25;

3. Sorting Data:

Retrieve users sorted by age in descending order:

SELECT * FROM users ORDER BY age DESC;

4. Limiting Results:

Retrieve the first 5 users:

SELECT * FROM users WHERE ROWNUM <= 5;
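
Note: ROWNUM is the classic Oracle way to limit rows. On Oracle 12c and later, the ANSI-standard row-limiting clause works as well:

SELECT * FROM users
FETCH FIRST 5 ROWS ONLY;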

7. Lab Exercise 3: Exploring Joins

1. Inner Join:

Retrieve information from two tables where there is a match:

SELECT users.id, users.name, orders.order_number FROM users INNER JOIN orders ON users.id = orders.user_id;

2. Left Join:

Retrieve all records from the left table and the matched records from the right table:

SELECT users.id, users.name, orders.order_number FROM users LEFT JOIN orders ON users.id = orders.user_id;

3. Right Join:

Retrieve all records from the right table and the matched records from the left table:

SELECT users.id, users.name, orders.order_number FROM users RIGHT JOIN orders ON users.id = orders.user_id;

4. Full Outer Join:

Retrieve all records when there is a match in either the left or right table:

SELECT users.id, users.name, orders.order_number FROM users FULL OUTER JOIN orders ON users.id = orders.user_id;

8. Lab Exercise 4: Aggregating Data

1. Counting Records:

Count the number of users in the users table:

SELECT COUNT(*) FROM users;

2. Grouping Data:

Group users by age and display the count in each group:

SELECT age, COUNT(*) FROM users GROUP BY age;

9. Lab Exercise 5: Modifying Data and Transactions

1. Updating Records:

Update the age of a user in the users table:

UPDATE users SET age = 28 WHERE name = 'John Doe';

2. Deleting Records:

Delete a user from the users table:

DELETE FROM users WHERE name = 'Jane Smith';

3. Transactions:

Use transactions to ensure atomicity for a series of SQL statements. In Oracle, a transaction begins implicitly with the first DML statement and ends when you issue COMMIT or ROLLBACK:

-- SQL statements within the transaction
COMMIT;
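
For example, a short sketch using the users table from the earlier labs; both changes become permanent together, or neither does:

UPDATE users SET age = 26 WHERE name = 'John Doe';
DELETE FROM users WHERE name = 'Jane Smith';
COMMIT;       -- makes both changes permanent
-- ROLLBACK;  -- would instead have undone both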

10. Lab Exercise 6: Building a Blood Donation Database

1. Create Tables:

Create tables for donors, donations, and recipients:

CREATE TABLE donors (
    donor_id NUMBER PRIMARY KEY,
    donor_name VARCHAR2(50),
    blood_type VARCHAR2(5)
);

CREATE TABLE donations (
    donation_id NUMBER PRIMARY KEY,
    donor_id NUMBER,
    donation_date DATE,
    volume_ml NUMBER,
    FOREIGN KEY (donor_id) REFERENCES donors(donor_id)
);

CREATE TABLE recipients (
    recipient_id NUMBER PRIMARY KEY,
    recipient_name VARCHAR2(50),
    blood_type VARCHAR2(5)
);

2. Insert Sample Data:

Insert sample data into each table:

INSERT INTO donors (donor_id, donor_name, blood_type) VALUES (1, 'John Smith', 'O+');
INSERT INTO donors (donor_id, donor_name, blood_type) VALUES (2, 'Jane Doe', 'A-');
INSERT INTO donations (donation_id, donor_id, donation_date, volume_ml) VALUES (1, 1, TO_DATE('2023-01-01', 'YYYY-MM-DD'), 500);
INSERT INTO donations (donation_id, donor_id, donation_date, volume_ml) VALUES (2, 2, TO_DATE('2023-02-15', 'YYYY-MM-DD'), 750);
INSERT INTO recipients (recipient_id, recipient_name, blood_type) VALUES (1, 'Alice Johnson', 'AB+');
INSERT INTO recipients (recipient_id, recipient_name, blood_type) VALUES (2, 'Bob Williams', 'B-');

3. Write Queries:

Write queries to retrieve information about donors, donations, and recipients.

-- Example queries
SELECT * FROM donors;
SELECT * FROM donations;
SELECT * FROM recipients;

11. Final Project: Building a Blood Donation Database

Project Overview:

For the final project, you will build a Blood Donation Database to manage information about blood donors, donations, and recipients.

Project Tasks:

  1. Create tables for donors, donations, and recipients.
  2. Insert sample data into each table.
  3. Write queries to retrieve information about donors, donations, and recipients.
  4. Implement basic CRUD operations (Create, Read, Update, Delete) for the database (see the sketch after this list).
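
A sketch of the four CRUD operations against the donors table (values hypothetical):

INSERT INTO donors (donor_id, donor_name, blood_type) VALUES (3, 'Sam Lee', 'B+');  -- Create
SELECT * FROM donors WHERE donor_id = 3;                                            -- Read
UPDATE donors SET blood_type = 'B-' WHERE donor_id = 3;                             -- Update
DELETE FROM donors WHERE donor_id = 3;                                              -- Delete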

Project Submission:

Submit your SQL script containing all the queries and commands used to create and populate the Blood Donation Database.

Monday, 27 November 2023

Project DeliApp Nov 2023

    Deli Foods is an emerging restaurant business with a presence all over the United States.

They currently have a legacy web application written in Java and hosted on their private server: https://project-deliapp.s3.us-east-2.amazonaws.com/DeliApp/src/main/webapp/index.html

It usually takes 5 hours to update their application, and updates are manual, which incurs a lot of downtime. This is hurting their business: clients get locked out, which gives their competitors the upper hand.




Your task is to migrate this application to the cloud and apply DevOps practices across their entire Software Development Life Cycle.

You should demonstrate concepts that implement Plan -- Code -- Build -- Test -- Deploy -- Monitor.



TASK A - Documentation: Setup a Wiki Server for your Project (Containerization)

a. Get the docker-compose file.

You can get the docker-compose file from the link below:

https://github.com/bitnami/containers/blob/main/bitnami/dokuwiki/docker-compose.yml

Or

Use the command below in your terminal to fetch the YAML and save it as a Docker Compose file:

curl -sSL https://raw.githubusercontent.com/bitnami/containers/main/bitnami/dokuwiki/docker-compose.yml -o docker-compose.yml

b. Mount your own data volume on this container.

Hint: do this by modifying the Docker Compose file.

c. Change the default port of the Wiki Server so it serves on Port 84.

d. Change the default user and password to:

         Username: DeliApp

         Password: admin

Hint: use the official image documentation to find the details to accomplish all of this (a combined sketch follows the link):

https://github.com/bitnami/containers/tree/main/bitnami/dokuwiki#how-to-use-this-image
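
Putting b, c and d together, here is a sketch of the relevant parts of the compose file (the DOKUWIKI_* variable names and the /bitnami/dokuwiki mount path are taken from the Bitnami documentation linked above; the host path is a hypothetical example):

services:
  dokuwiki:
    image: docker.io/bitnami/dokuwiki:latest
    ports:
      - '84:8080'                    # c. serve the wiki on host port 84
    environment:
      - DOKUWIKI_USERNAME=DeliApp    # d. admin username
      - DOKUWIKI_PASSWORD=admin      # d. admin password
    volumes:
      - /home/ubuntu/dokuwiki_data:/bitnami/dokuwiki   # b. your own data volume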

TASK A  Acceptance Criteria: 

i. The Wiki Server should be up and running, serving on Port 84

ii. Mount your own container volume to persist data

iii. Login with Credentials DeliApp/admin


TASK B: Version Control The DeliApp Project

Plan & Code

App Name: DeliApp

  • WorkStation A - Team Osato - 3.142.247.23
  • WorkStation B - Team     -
The Developer Workstations are Windows machines. Your Project Supervisor will provide the IP/DNS and credentials you will use to log into the machine assigned to your group. You can use MobaXterm or Remote Desktop to connect; the username is Administrator.

When you access the Developer Workstation assigned to your group, you will find the code base in the below location:
This PC ----> Desktop ----> DeliApp



(You can use GitHub or Bitbucket.)

1) Set up 2 repos: a Build Repo to store all the code base and a Deployment Repo to store all your deployment scripts, and name them as you see below:

  • Build repo: DeliApp_Build ---> Developers Access
  • Deployment repo: DeliApp_Deployment ---> Your Team Access

2) Version-control the DeliApp project located on the Developers' Workstation so the developers can migrate their code to the Source Control Management tool (Bitbucket/Git).

  • Set up the Developer Workstations' SSH keys in Bitbucket to access the Build Repo, and your Team (DevOps) workstation SSH keys in Bitbucket to access the Deployment Repo (a key-generation sketch follows).
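
A minimal sketch of generating and printing a key on a workstation (key type and comment are your choice):

ssh-keygen -t ed25519 -C "deliapp-workstation"   # accept the default file location
cat ~/.ssh/id_ed25519.pub                        # copy this into Bitbucket: Personal settings > SSH keys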

3) Git branching strategy for DeliApp_Build

  • master
  • release: e.g. release/release-v1
  • feature: e.g. feature/feature-v1
  • develop

4) Git branching strategy for DeliApp_Deploy

  • master
  • feature: e.g. feature/feature-v1
  • develop



5. Secure the repos by installing git-secrets on your build (DeliApp_Build) and deployment (DeliApp_Deploy) repos -- PRE-COMMIT HOOK (a sketch follows).
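
A sketch using awslabs/git-secrets (the commands below are from its README; run them inside each repo clone):

cd DeliApp_Build
git secrets --install        # installs the pre-commit, commit-msg and prepare-commit-msg hooks
git secrets --register-aws   # registers patterns that catch AWS access/secret keys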

6. Prevent the developers and your Team from pushing code directly to master by installing a PRE-PUSH HOOK (a sketch follows).
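
A minimal sketch of such a hook (save as .git/hooks/pre-push and make it executable; assumes the protected branch is named master):

#!/bin/bash
# git passes this hook lines of: <local ref> <local sha> <remote ref> <remote sha>
while read local_ref local_sha remote_ref remote_sha; do
    if [ "$remote_ref" = "refs/heads/master" ]; then
        echo "Direct pushes to master are not allowed." >&2
        exit 1
    fi
done
exit 0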

TASK B Acceptance Criteria: 

1. You should be able to push and pull code from the Developer Workstation assigned to your Team to the DeliApp_Build repo in Source Control Management(SCM) 

2. Your Team (Devops) Should be able to pull and push code from your individual workstations to the DeliApp_Deploy repo

3. Demonstrate the git branching Strategy

4. Your git commit should throw an error when there is a secret in your repo.

Hint: add a text file containing some secrets, e.g. an AWS secret key/access key, and commit it.

5. You should get an error when you try to push to master.

TASK C: Set up your Infrastructure

1. Set up your Environment: DEV, UAT, QA, PROD A, PROD B

Provision 6 Apache Tomcat servers. (You can use any IaC tool: Terraform, CloudFormation, Ansible Tower. You can host these on any cloud provider: AWS, Google Cloud, Azure.)

i. DEV - t2.micro - 8 GB

ii. UAT (User Acceptance Testing) - t2.small - 10 GB

iii. QA (Quality Assurance) - t2.large - 20 GB

iv. PROD A - t2.xlarge - 30 GB

v. PROD B - t2.xlarge - 30 GB

Apache Tomcat servers should be exposed on Port 4444.

Linux distribution for the Apache Tomcat servers: Ubuntu 18.04

Note: when bootstrapping your servers, make sure you install the Datadog Agent. A bootstrap sketch is below.
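
A sketch of a bootstrap (user-data) script for the Tomcat servers, assuming Ubuntu 18.04 and the apt tomcat8 package; the Datadog step is a placeholder for the vendor's one-line installer:

#!/bin/bash
apt-get update
apt-get install -y tomcat8
# expose Tomcat on port 4444 instead of the default 8080
sed -i 's/port="8080"/port="4444"/' /etc/tomcat8/server.xml
systemctl restart tomcat8
# install the Datadog Agent here using the one-line install script from the
# Datadog docs, with your DD_API_KEY set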


2. Set up your Devops tools servers:

(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution on these, e.g. Amazon Linux 2, Debian, Ubuntu, etc.)

NOTE: USE AZURE CLOUD FOR BELOW

1 Ansible Tower T2xxl- 15gb

1 Kubernetes Server-You can use EKS, k3s,kubeadm or minikube

1 Jenkins(CI/CD) t2 xlarge 20gb

1 Vulnerability Scanning Tool Server- Owasp Zap (Install in a Windows instance) See: https://www.devopstreams.com/2022/06/getting-started-with-owasp-zap.html

Install Helm in your Kubernetes server (k3s, EKS, kubeadm, minikube) and install the following with Helm (a sketch follows):

Install SonarQube

Install Artifactory
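
A sketch of the Helm installs (the chart repository URLs are the vendors' public Helm repos; verify them against the official docs):

helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm install sonarqube sonarqube/sonarqube
helm install artifactory jfrog/artifactory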

Bonus Task:

Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers.

Register a domain using Route 53, e.g. www.teamdevops.com.

Point that domain to the Elastic/Application Load Balancer.

Acceptance Criteria: when you enter your domain in the browser, it should point to either Prod A or Prod B.

TASK D: Monitoring

a. Set up continuous monitoring with Datadog by installing the Datadog Agent on all your servers.

Acceptance criteria:

i. All your infrastructure server metrics should be monitored (Infrastructure Monitoring)

ii. All running processes on all your servers should be monitored (Process Monitoring)

iii. Tag all your servers on the Datadog dashboard

TASK E: Domain Name System

a. Register a Domain for your Team

i. You can use Route 53, GoDaddy, or any DNS service of your choice,

e.g. www.team-excellence.com


TASK F: Set Up Automated Build for Developers 

The developers use Maven to compile the code.

a. Set up a CI pipeline in Jenkins using a Jenkinsfile.

b. Enable webhooks in Bitbucket to trigger an automated build of the pipeline job.

c. The CI pipeline job should run on an agent (slave).

d. Help the developers version their artifacts, so that each build has a unique artifact version.

Tips: https://jfrog.com/knowledge-base/configuring-build-artifacts-with-appropriate-build-numbers-for-jenkins-maven-project/


Pipeline job name: DeliApp_Build

The pipeline should check out the code from SCM, build it using the Maven build tool, provide code analysis and code coverage with SonarQube, upload artifacts to Artifactory, send email to the team, and provide versioning of artifacts.

The pipeline should have Slack channel notification to report build status. A Jenkinsfile sketch is shown below.
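
A declarative Jenkinsfile sketch (the agent label, SonarQube installation name, Slack channel, and email address are all assumptions; the Slack, Mailer and SonarQube Scanner plugins must be installed):

pipeline {
    agent { label 'build-agent' }            // c. run on an agent, not the controller
    stages {
        stage('Checkout') { steps { checkout scm } }
        stage('Build')    { steps { sh 'mvn clean package' } }
        stage('Sonar analysis') {
            steps { withSonarQubeEnv('sonarqube') { sh 'mvn sonar:sonar' } }
        }
        stage('Upload to Artifactory') {
            steps { sh 'echo "push the versioned .war to Artifactory here"' }
        }
    }
    post {
        success { slackSend channel: '#builds', message: "DeliApp build ${env.BUILD_NUMBER} succeeded" }
        failure { mail to: 'team@example.com', subject: "DeliApp build ${env.BUILD_NUMBER} failed", body: "${env.BUILD_URL}" }
    }
}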


i. Acceptance Criteria:

 Automated build after code is pushed to the repository

1. Sonar Analysis on the sonarqube server

2. Artifact uploaded to artifactory

3. Email notification on success or failure

4. Slack Channel Notification

5. Each artifact has a unique version number

6. Code coverage displayed

TASK G: Deploy & Operate (Continuous Deployment)

a. Set up a CD pipeline in Jenkins using a Jenkinsfile.

Create 4 CD pipeline jobs, one for each environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments (a parameter sketch follows).
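
A sketch of the environment selector as a declarative pipeline parameter (the parameter name is an assumption):

parameters {
    choice(name: 'TARGET_ENV', choices: ['dev', 'uat', 'qa', 'prod-a', 'prod-b'], description: 'Environment to deploy to')
}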

Pipeline job name: e.g. DeliApp_Dev_Deploy


i. The pipeline should be able to deploy to any of your LLEs (lower-level environments: Dev, UAT, QA) or HLEs (higher-level environments: Prod A, Prod B).

You can use the Deploy to Container plugin in Jenkins, or deploy using Ansible Tower, to pull the artifact from Artifactory and deploy it to either Dev, UAT, QA or Prod.

ii. The pipeline should have Slack channel notification to report deployment status.

iii. The pipeline should have email notification.

iv. Deployment Gate

1. Acceptance criteria:

i. Deployment is seen and verified in either Dev, Uat, Qa or Prod

ii. Notification is seen in slack channel

iii. Email notification

TASK H:a. Deployment and Rollback

a. Automate the manual deployment of a specific version of the Deli application using Ansible Tower. (A playbook sketch follows the manual steps below.)

The manual deployment process is below:

step 1: log in to the Tomcat server

step 2: download the artifact

step 3: switch to root

step 4: extract the artifact to the deployment folder

Deployment folder: /var/lib/tomcat8/webapps

Use service ID: ubuntu
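
A sketch of the manual steps as an Ansible playbook (the host group, variables and artifact URL are hypothetical; get_url and copy are standard Ansible modules):

---
- hosts: tomcat_servers
  become: yes                                   # step 3: switch to root
  vars:
    artifact_url: "https://artifactory.example.com/deliapp/deliapp-{{ app_version }}.war"
  tasks:
    - name: Download the artifact               # step 2
      get_url:
        url: "{{ artifact_url }}"
        dest: /tmp/deliapp.war

    - name: Place the artifact in the deployment folder   # step 4 (Tomcat expands it on deploy)
      copy:
        src: /tmp/deliapp.war
        dest: /var/lib/tomcat8/webapps/deliapp.war
        remote_src: yes

Rollback is then the same play run with an older app_version; keep credentials in Tower's encrypted credential store rather than in the playbook.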


Acceptance Criteria:

i. Deploy new artifact from artifactory to either Dev, Uat, Qa or  Prod

ii. Rollback to an older artifact from Artifactory to either Dev, UAT, QA or Prod

iii. All credentials should be encrypted

TASK H:b. Domain Name Service and Load Balancing

i. Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers.

ii. Configure your DNS with Route 53 such that entering your domain, e.g. www.team-excellence.com, directs you to the load balancer, which in turn points to Prod A or Prod B.

Acceptance criteria: 

i. Your team domain name, e.g. www.mint.com, will take you to your application residing on Prod A or Prod B

 

TASK I:

a. Set up a 3-node Kubernetes cluster (Container Orchestration) with namespaces dev, qa, prod

  • Using a Jenkins pipeline or Jenkins job: the pipeline or job should be able to create/delete the cluster

b. Dockerize the DeliApp

  • You can use a Dockerfile to create the image (see the sketch below) or the OpenShift source-to-image tool

c. Deploy the Dockerized DeliApp into the prod namespace of the cluster (you can use dev and qa for testing)

d. Expose the application using a LoadBalancer or NodePort

e. Monitor your cluster using Prometheus and Grafana
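
A Dockerfile sketch for task b, assuming the Maven build produces a .war (the artifact name and base-image tag are assumptions; pick a tag matching your Tomcat version):

# run the DeliApp .war on the official Tomcat image
FROM tomcat:8-jre8
COPY target/deliapp.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080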
TASK I Acceptance Criteria:

1. You should be able to create/delete a kubernetes cluster

2. Be able to deploy your application into any namespace (dev, qa, prod)

3. You should be able to access the application through Nodeport or LoadBalancer

4. You should be able to monitor your cluster in Grafana

TASK J: Demonstrate Bash Automation of

i. Tomcat

ii. Jenkins

iii. Apache
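
For example, a sketch of one such script (service names assume the default Ubuntu packages):

#!/bin/bash
# restart each service and report whether it came back up
for svc in tomcat8 jenkins apache2; do
    sudo systemctl restart "$svc"
    if systemctl is-active --quiet "$svc"; then
        echo "$svc is running"
    else
        echo "$svc FAILED to start" >&2
    fi
done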


Acceptance criteria: 

1. Show bash scripts and successfully execute them


Wednesday, 1 November 2023

Year-End Blitz: DevOps Mastery at $1500 – Secure Your Future!

Wednesday, 13 September 2023

Project September 23 - Jack Piro

 Violet Streams Resources (VSR) is a software consulting firm that builds web applications in the gaming space.

They currently have a legacy web application written in Java and hosted on their private server: https://projectjackpirodevops.s3.us-east-2.amazonaws.com/devopsgroup_a-devopsproject-faa298228141/JackPiro/src/main/webapp/index.html

Updates to this application are done manually, which incurs a lot of downtime.




Your task is to migrate this application to the cloud and apply DevOps practices across their entire Software Development Life Cycle.

You should demonstrate concepts that implement Plan -- Code -- Build -- Test -- Deploy -- Monitor.

TASK A - Documentation: Setup a Wiki Server for your Project (Containerization)

a. Get the docker-compose file.

You can get the docker-compose file from the link below:

https://github.com/bitnami/containers/blob/main/bitnami/dokuwiki/docker-compose.yml

Or

Use the command below in your terminal to fetch the YAML and save it as a Docker Compose file:

curl -sSL https://raw.githubusercontent.com/bitnami/containers/main/bitnami/dokuwiki/docker-compose.yml -o docker-compose.yml

b. Mount your own data volume on this container.

Hint: do this by modifying the Docker Compose file.

c. Change the default port of the Wiki Server so it serves on Port 100.

d. Change the default user and password to:

         Username: Jackpiro

         Password: admin

Hint: use the official image documentation to find the details to accomplish all of this:

https://github.com/bitnami/containers/tree/main/bitnami/dokuwiki#how-to-use-this-image

TASK A  Acceptance Criteria: 

i. The Wiki Server should be up and running, serving on Port 100

ii. Mount your own container volume to persist data

iii. Login with Credentials Jackpiro/admin

TASK B: Version Control The JackPiro Project

Plan & Code

App Name: JackPiro

  • WorkStation A - Team - 3.129.65.16
  • WorkStation B - Team - 18.118.167.59
The Developer Workstations are Windows machines. Your Project Supervisor will provide the password you will use to log into the machine assigned to your group. You can use MobaXterm or Remote Desktop to connect; the username is Administrator.

When you access the Developer Workstation assigned to your group, you will find the code base in the below location:
C: ----> Documents ----> App ----> JackPiro


(You can use GitHub or Bitbucket.)

1) Set up 2 repos: a Build Repo to store all the code base and a Deployment Repo to store all your deployment scripts, and name them as you see below:

  • Build repo: JackPiro_Build ---> Developers Access
  • Deployment repo: JackPiro_Deployment ---> Your Team Access

2) Version-control the JackPiro project located on the Developers' Workstation so the developers can migrate their code to the Source Control Management tool (Bitbucket/Git).

  • Set up the Developer Workstations' SSH keys in Bitbucket to access the Build Repo, and your Team (DevOps) workstation SSH keys in Bitbucket to access the Deployment Repo.

3) Git branching strategy for JackPiro_Build

  • master
  • release: e.g. release/release-v1
  • feature: e.g. feature/feature-v1
  • develop

4) Git branching strategy for JackPiro_Deploy

  • master
  • feature: e.g. feature/feature-v1
  • develop

TASK B Acceptance Criteria: 

1. You should be able to push and pull code from the Developer Workstation assigned to your Team to the JackPiro_Build repo in Source Control Management(SCM) 

2. Your Team (Devops) Should be able to pull and push code from your individual workstations to the JackPiro_Deploy repo

3. Demonstrate the git branching Strategy

TASK C: Set up your Infrastructure

1. Set up your Environment: DEV, UAT, QA, PROD A, PROD B

Provision 6 Apache Tomcat servers. (You can use any IaC tool: Terraform, CloudFormation, Ansible Tower. You can host these on any cloud provider: AWS, Google Cloud, Azure.)

i. DEV - t2.micro - 8 GB

ii. UAT (User Acceptance Testing) - t2.small - 10 GB

iii. QA (Quality Assurance) - t2.large - 20 GB

iv. PROD A - t2.xlarge - 30 GB

v. PROD B - t2.xlarge - 30 GB

Apache Tomcat servers should be exposed on Port 4444.

Linux distribution for the Apache Tomcat servers: Ubuntu 16.04

Note: when bootstrapping your servers, make sure you install the Datadog Agent.

2. Set up your Devops tools servers:

(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution on these, e.g. Amazon Linux 2, Debian, Ubuntu, etc.)

NOTE: USE AZURE CLOUD FOR BELOW

1 Ansible Tower T2xxl- 15gb

1 Kubernetes Server-You can use EKS, k3s,kubeadm or minikube

1 Jenkins(CI/CD) t2 xlarge 20gb

Install Helm in your Kubernetes server (k3s, EKS, kubeadm, minikube) and install the following with Helm:

Install SonarQube

Install Artifactory

Bonus Task:

Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers.

Register a domain using Route 53, e.g. www.teamdevops.com.

Point that domain to the Elastic/Application Load Balancer.

Acceptance Criteria: when you enter your domain in the browser, it should point to either Prod A or Prod B.

TASK E: Set Up Automated Build for Developers 

The developers use Maven to compile the code.

a. Set up a CI pipeline in Jenkins using a Jenkinsfile.

b. Enable webhooks in Bitbucket to trigger an automated build of the pipeline job.

c. The CI pipeline job should run on an agent (slave).

d. Help the developers version their artifacts, so that each build has a unique artifact version.

Tips: https://jfrog.com/knowledge-base/configuring-build-artifacts-with-appropriate-build-numbers-for-jenkins-maven-project/


Pipeline job name: JackPiro_Build

The pipeline should check out the code from SCM, build it using the Maven build tool, provide code analysis and code coverage with SonarQube, upload artifacts to Artifactory, send email to the team, and provide versioning of artifacts.

The pipeline should have Slack channel notification to report build status.


i. Acceptance Criteria:

 Automated build after code is pushed to the repository

1. Sonar Analysis on the sonarqube server

2. Artifact uploaded to artifactory

3. Email notification on success or failure

4. Slack Channel Notification

5. Each artifact has a unique version number

6. Code coverage displayed


TASK F: Deploy & Operate (Continuous Deployment)

a. Set up a CD pipeline in Jenkins using a Jenkinsfile.

Create 4 CD pipeline jobs, one for each environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments.

Pipeline job name: e.g. JackPiro_Dev_Deploy


i. The pipeline should be able to deploy to any of your LLEs (lower-level environments: Dev, UAT, QA) or HLEs (higher-level environments: Prod A, Prod B).

You can use the Deploy to Container plugin in Jenkins, or deploy using Ansible Tower, to pull the artifact from Artifactory and deploy it to either Dev, UAT, QA or Prod.

ii. The pipeline should have Slack channel notification to report deployment status.

iii. The pipeline should have email notification.

iv. Deployment Gate

1. Acceptance criteria:

i. Deployment is seen and verified in either Dev, Uat, Qa or Prod

ii. Notification is seen in slack channel

iii. Email notification


TASK G: Monitoring

a. Set up continuous monitoring with Datadog by installing the Datadog Agent on all your servers.

Acceptance criteria:

i. All your infrastructure server metrics should be monitored (Infrastructure Monitoring)

ii. All running processes on all your servers should be monitored (Process Monitoring)

iii. Tag all your servers on the Datadog dashboard


TASK H: Deployment and Rollback

a. Automate the manual deployment of a specific version of the application using Ansible Tower.

The manual deployment process is below:

step 1: log in to the Tomcat server

step 2: download the artifact

step 3: switch to root

step 4: extract the artifact to the deployment folder

Deployment folder: /var/lib/tomcat8/webapps

Use service ID: ubuntu


Acceptance Criteria:

i. Deploy new artifact from artifactory to either Dev, Uat, Qa or  Prod

ii. Rollback to an older artifact from Artifactory to either Dev, UAT, QA or Prod

iii. All credentials should be encrypted


TASK I: Demonstrate Bash Automation of

i. Tomcat

ii. Jenkins

iii. Apache


Tuesday, 15 August 2023

Install Prometheus and Grafana on K3s (Using Helm)

 

Prometheus is an open-source monitoring and alerting tool that collects and stores time-series data, while Grafana is a popular data visualization platform that allows you to create interactive dashboards and visualizations.

By combining these tools, you can gain valuable insights into your Kubernetes cluster’s performance and health, making it easier to identify and troubleshoot issues. However, setting up this stack can be a daunting task, especially if you’re not familiar with the process.

That’s why I’m excited to provide you with a comprehensive tutorial that will guide you through the entire process step by step, from installing k3s to configuring Prometheus and Grafana. With my tutorial, you’ll be able to install and configure this powerful monitoring stack in just 5 minutes, saving you a lot of time and effort.

  1. Clone the k3s-monitoring repository:

    git clone https://github.com/cablespaghetti/k3s-monitoring.git 

  2. Change into the repository directory:

    cd k3s-monitoring

  3. Add the Prometheus Helm chart repository:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

  4. Install Prometheus and Grafana:

    helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --version 39.13.3 --values kube-prometheus-stack-values.yaml

  5. Point kubectl (and Helm) at the k3s kubeconfig:

    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

  6. Edit the service for Grafana to use a NodePort:

    kubectl edit service/prometheus-grafana

    Then change the type to NodePort and save.
  7. Access Grafana:

    http://<your-k3s-node-ip>:<nodeport>/login

    Use the following credentials to log in:

    • user: admin
    • pass: prom-operator
  8. Import the desired dashboards.
    • Just type 1860 in the import search box to find the Node Exporter Full dashboard, which gives a complete view of the node's resources.
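
To find the NodePort assigned in step 6 (needed for the URL in step 7), you can query the service; a quick sketch:

    kubectl get service prometheus-grafana
    # the PORT(S) column shows something like 80:3XXXX/TCP -- the 3XXXX value is the NodePort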
[Screenshot: importing the dashboard from grafana.com]

You’ll now be able to see all the node's resources and their usage:

[Screenshot: the Node Exporter Full dashboard]

