Sunday, 21 November 2021

Serverless websites on AWS using Lambda and API Gateway


My first thought was: "With more than 200 AWS webservices available as of today, from a statistical standpoint it is very probable that such an architecture can be constructed - either by using one of the remaining 195 services or a combination of several of them."


The following are required for serverless website hosting to be feasible:

Req. 1: We need a "Place" where the website components can be stored (storage)

Req. 2: We need an "Address" that can be entered into the browser's location bar, allowing users to access the website (URL endpoint)

Req. 3: We need a "Service" that listens at "the Address" for incoming HTTPS requests issued by the user's browser and, in real time, serves the website content stored in "the Place" (webserver app)

Having defined these requirements, selecting suitable AWS webservices and putting together the corresponding webhosting architecture turned out to be an easy task.

Serverless Architecture

AWS Lambda and Amazon API Gateway are two AWS webservices which, when used together as depicted in the following scheme, fulfil all of our requirements.

AWS Lambda and Amazon API Gateway serverless architecture for hosting static websites

Let's host a serverless website using Lambda and API Gateway

Note: In our example we are going to use the AWS region EU-CENTRAL-1 (Frankfurt). Feel free to use any other region that suits you best; just make sure the region supports AWS Lambda and Amazon API Gateway. Also, if you choose a different region, be aware that the URLs of the AWS services mentioned in this article need to be updated to match.

Before we start "playing" with these two AWS webservices, we will need a webapp.

1) Sample company Node.js app

To keep things simple, for the purposes of this article I created a very simple Node.js app.

This website consists of the following website components:

  • 2 HTML files (one representing the index page and the other one the contact page)
  • 1 CSS file (providing formatting and styling - sample fonts and colors)
  • 1 JavaScript file (providing sample animation - snowing effect)
  • 1 PNG image (sample company logo)
  • 1 PDF file (representing sample binary file for download - pricelist)

Before proceeding further, download the sample Node.js app I prepared - myLambdaWebsite.zip - and store it locally on your computer.




2) AWS Lambda setup

Now, once you have myLambdaWebsite.zip, let's upload it into AWS Lambda.

Here are the steps that need to be taken:

1. Navigate to AWS Lambda (search for "Lambda" in your AWS Console)


2. Click on the orange button "Create function" at the top right part of the screen



2.1. As "Function name" enter: myLambdaWebsiteas "Runtime" make sure the following is selected: Node.js 14.x

2.2. Click on the orange button "Create function" at the bottom right part of the screen

3. Within the "Code Source" box click on the "Upload From" and choose: " .zip file"


3.1. Click on the "Upload" button and choose the myLambdaWebsite.zip file from the location where you stored it

3.2. Click on the orange button "Save"

This is how your screen should look if you did everything right:

Besides the files described above as the website components, you will notice one additional file that we haven't mentioned yet. It is located in the root directory and is called "index.js".
What is "index.js" for?

Let me explain..

AWS Lambda is not a webserver to which you simply upload your website component files and are done. AWS Lambda is a serverless compute service that runs your code. That is why we need to upload a special piece of code, so that the Lambda function knows what to do with the static website component files we uploaded alongside it.

// A minimal Node.js "webserver": it serves the static website files
// that are bundled inside the Lambda deployment package.
const fs = require('fs');

exports.handler = async (event) => {
  let isBinary = false,  // must the body be base64-encoded for API Gateway?
      status = 200,
      contentType,
      body;

  try {
    // Map the site's entry points to the index page.
    switch (event['path']) {
      case "/": case "/html": case "/html/": case "/index.html": {
        event['path'] = "/html/index.html";
      }
    }

    // Read the requested file from the function's local storage;
    // this throws if the file does not exist, landing us in the catch block.
    body = fs.readFileSync("." + event['path'], "binary");

    // Derive the Content-Type from the file extension.
    const path = event['path'].split("/"),
          mimeTypes = new Map([
              ["htm", "text/html"],
              ["html", "text/html"],
              ["css", "text/css"],
              ["ico", "image/x-icon"],
              ["js", "text/javascript"],
              ["doc", "application/msword"],
              ["docx", "application/vnd.openxmlformats-officedocument.wordprocessingml.document"],
              ["gif", "image/gif"],
              ["jpg", "image/jpeg"],
              ["jpeg", "image/jpeg"],
              ["pdf", "application/pdf"],
              ["png", "image/png"],
              ["ppt", "application/vnd.ms-powerpoint"],
              ["pptx", "application/vnd.openxmlformats-officedocument.presentationml.presentation"],
              ["svg", "image/svg+xml"],
              ["txt", "text/plain"],
              ["xls", "application/vnd.ms-excel"],
              ["zip", "application/zip"],
              ["xlsx", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"]]);
    contentType = mimeTypes.get(path[2].split(".")[1]);

    // Images and downloads are binary files: API Gateway expects
    // binary payloads to be delivered base64-encoded.
    switch (path[1]) {
      case "img": case "download": {
        isBinary = true;
        body = Buffer.from(body, 'binary').toString('base64');
      }
    }
  }
  catch (e) {
    // Requested file not found (or any other error): serve the 404 page.
    status = 404;
    contentType = "text/html";
    body = fs.readFileSync("./html/404.html", "binary");
  }

  return {
    statusCode: status,
    isBase64Encoded: isBinary,
    headers: { "Content-Type": contentType },
    body: body
  };
};

Every time this Lambda function is invoked, index.js is executed. Based on the parameters it receives (API Gateway passes the URL path to Lambda), index.js decides which website component file it is going to serve. After it retrieves that file from the Lambda function's local storage, it returns the HTTP status code, the file's Content-Type and the contents of the file itself.

In other words, our index.js file is a simple webserver application written in Node.js that serves static content stored as individual files in the Lambda function itself.

Amazon API Gateway logo

3) Amazon API Gateway setup

Why do we need to configure an additional AWS webservice? Isn't AWS Lambda enough?

No, it isn't, because AWS Lambda functions are by default not accessible from the outside world (there is no URL you can type into your browser to access them).

To get one, we need to set up a custom REST endpoint in the AWS Cloud and "wire" this endpoint to the Lambda function. Technically speaking, using Amazon API Gateway we map access to a specific resource (URL) and HTTP method to the invocation of a Lambda function.

Once we do that, we will be able to visit our website by entering this endpoint URL into the web browser.
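To make this concrete, here is a trimmed sketch of the event object that API Gateway's Lambda proxy integration hands to the function when a browser requests, say, the stylesheet (the field names follow AWS's proxy-event format; the values are illustrative, and many fields are omitted):

{
    "resource": "/{proxy+}",
    "path": "/css/style.css",
    "httpMethod": "GET",
    "headers": { "accept": "text/css,*/*;q=0.1" },
    "body": null,
    "isBase64Encoded": false
}

It is this "path" field that our index.js inspects to decide which file to serve.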

And this is the reason why we need Amazon API Gateway.

Here are the steps that need to be taken:

1. Navigate to Amazon API Gateway (search for "API Gateway" in your AWS Console)


2. Click on the orange button "Create API" located at the top right part of the screen

3. Within the "REST API" box click on the orange button "Build" 

3.1. As "API name" enter: My Lambda Website 3.2. Click on the blue button "Create API" 

4. From the left sidebar under the "API: My Lambda Website" choose "Settings" (note: there are 2 "Settings" in the left sidebar, choose the indented one with the smaller font)

4.1. Scroll to the bottom of the screen

4.2. Under the "Binary Media Types" heading click on "+ Add Binary Media Type" and into the input field enter: */*

4.3. Scroll to the bottom of the screen and click on the blue button "Save Changes"


5. From the left sidebar under the "API: My Lambda Website" choose "Resources"

5.1. From the "Actions" drop-down choose: Create Method



5.2. Choose "ANY" from the drop-down and click on the grey tick icon


5.3. Make sure the checkbox is checked for the option: "Use Lambda Proxy Integration"
5.4. As "Lambda function" enter the name of your Lambda function: myLambdaWebsite

5.5. Click on the blue button "Save" and then click on the blue button "OK"

6. From the "Actions" drop-down choose: "Create Resource"



6.1. Make sure the checkbox is checked for the option: "Configure as proxy resource"

6.2. Click on the blue button "Create Resource"


6.3. As "Lambda function" enter the name of your Lambda function: myLambdaWebsite
6.4. Click on the blue button "Save" and then click on the blue button "OK" 

7.  From the "Actions drop-down" choose: Deploy API

7.1. As "Deployment stage" choose: [New Stage], as "Stage name" enter: prod , you can enter any environment name you choose


7.2. Click on the blue button "Deploy" 

8. Copy the "Invoke URL" into the clipboard 

4) Last step

Open your web browser, paste the "Invoke URL" you just copied, press Enter, and the website will load. If you have done everything right according to the steps above, you should see this on your browser screen:


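You can also verify the deployment from the command line; the API ID in the URL below is a placeholder, so substitute the Invoke URL you copied:

curl -i https://abc123.execute-api.eu-central-1.amazonaws.com/prod/
# expect an HTTP 200 status, a Content-Type: text/html header,
# and the HTML of the index page in the response body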
Frequently asked questions

1. Are there any limits to the size of the static website which I can upload/host using AWS Lambda?

Yes, there are. You can upload to each Lambda function a ZIP archive of at most 50 MB (though the unzipped content may consume up to 250 MB).

2. I uploaded a larger website as a ZIP archive into AWS Lambda - the operation succeeded, but suddenly I can't use the Lambda online editor anymore. Why is that?

This is because the AWS Lambda console editor has a limit that prevents it from working when the size of your Lambda function exceeds 3 MB.

3. Updating my static website stored in the Lambda function's local storage is not very convenient, especially when I want to add new images or other binary files, or update existing ones. Aren't there any simpler options than packing all files into a ZIP archive and uploading it back to AWS Lambda?

Yes, there are, but you need to modify the index.js file so that it retrieves files not from your Lambda function's storage but from some other location (e.g. a database). I am about to write a new article about this possibility and will reference it from this article once it is published.

4. I followed the steps described in this article, but my browser doesn't display images properly. They are rendered as broken images. What am I doing wrong?

This is most probably because you either skipped step no. 4 of the Amazon API Gateway instructions above or didn't follow it properly. Please check and repeat this step; I recommend that you also redeploy your API afterwards.

5. Are AWS Lambda and Amazon API Gateway part of the Free Tier programme? How much do I need to pay for using these webservices?

The great news is that Lambda comes with an unlimited Free Tier in which 1 million Lambda requests and 400,000 GB-seconds of compute time per month are offered to you completely for free. If you go over, you have to pay. See Lambda pricing.

As for API Gateway, its Free Tier is more limited and valid only for the first year of your AWS account ownership; within this period you can make 1 million API calls, receive 1 million messages and have 750,000 connection minutes per month for free. For more info about API Gateway pricing, visit this page.

6. Is there a way to optimize the number of API requests and Lambda invocations when hosting a website on Lambda and API Gateway?

Yes, there is. One way is to transform your website into a Single Page Application (SPA); another is to embed content directly in the HTML source of your pages: instead of referencing image files, embed them in SVG format, and instead of using external CSS and JavaScript files, embed their source into the HTML directly. For example:
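A stylesheet reference like this costs an extra API request and Lambda invocation on every page view:

<link rel="stylesheet" href="/css/style.css">

whereas inlining the rules means the whole page arrives in a single invocation (the file name and the rule are just illustrative):

<style>
  body { font-family: sans-serif; }
</style>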

7. Quite frankly, the URL of my website looks quite ugly (%some-random-string%.execute-api.eu-central-1.amazonaws.com/web/). Is there anything I can do about it - can I host my website on a traditional domain name?

Yes. Just follow the instructions provided by Amazon for setting up custom domain.

Final notes

In the example above, I showed you how to host an entire static website in the AWS Cloud using just AWS Lambda and Amazon API Gateway.

Please consider it more of a Proof of Concept confirming that hosting static websites using these two AWS webservices is technically possible.

There are far more effective, less complicated and more powerful solutions in AWS for static website hosting.

AWS Lambda and Amazon API Gateway are, however, very effective for creating serverless backends (microservices).

You can also use them for dynamic website hosting; if that is your case, I recommend exploring the following solutions:

  • AWS Serverless Express - a Node.js-based API framework mimicking the routing capabilities of the Express.js framework inside Lambda functions
  • Bref - an open-source project that brings full support for PHP and its frameworks to AWS Lambda
  • ClaudiaJS - deploys Node.js projects
  • Sparta - a framework that transforms a Go application into a self-deploying AWS Lambda powered service
  • Up - deploys Node.js, Golang, Python, Java, Crystal, Clojure and static sites
  • Zappa - deploys event-driven Python applications (incl. WSGI web apps)

Tuesday, 16 November 2021

How to enable code coverage report using JaCoCo, Maven and Jenkins

 

What is Code Coverage?

Code coverage is the percentage of code which is covered by automated tests. Code coverage measurement simply determines which statements in a body of code have been executed through a test run, and which statements have not. In general, a code coverage system collects information about the running program and then combines that with source information to generate a report on the test suite's code coverage.

Code coverage is part of a feedback loop in the development process. As tests are developed, code coverage highlights aspects of the code which may not be adequately tested and which require additional testing. This loop will continue until coverage meets some specified target.

Why Measure Code Coverage?

It is well understood that unit testing improves the quality and predictability of your software releases. Do you know, however, how well your unit tests actually test your code? How many tests are enough? Do you need more tests? These are the questions code coverage measurement seeks to answer.

Coverage measurement also helps to avoid test entropy. As your code goes through multiple release cycles, there can be a tendency for unit tests to atrophy. As new code is added, it may not meet the same testing standards you put in place when the project was first released. Measuring code coverage can keep your testing up to the standards you require. You can be confident that when you go into production there will be minimal problems because you know the code not only passes its tests but that it is well tested.

In summary, we measure code coverage for the following reasons:

  • To know how well our tests actually test our code
  • To know whether we have enough testing in place
  • To maintain the test quality over the lifecycle of a project

Code coverage is not a panacea. Coverage generally follows an 80-20 rule. Increasing coverage values becomes difficult, with new tests delivering less and less incrementally. If you follow defensive programming principles, where failure conditions are often checked at many levels in your software, some code can be very difficult to reach with practical levels of testing. Coverage measurement is not a replacement for good code review and good programming practices.

In general you should adopt a sensible coverage target and aim for even coverage across all of the modules that make up your code. Relying on a single overall coverage figure can hide large gaps in coverage.

Code coverage is an important aspect of maintaining quality. There are different ways to manage code quality; one of the most effective is to measure code coverage using plug-ins such as JaCoCo or Cobertura.


We will see how to enable code coverage for your Java project and view the coverage report in the Jenkins UI.

Step 1: Add the Maven JaCoCo plugin to your project's pom.xml, under <finalName>MyWebApp</finalName>:


<plugins>
  <plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.7.7.201606060606</version>
    <executions>
      <execution>
        <id>jacoco-initialize</id>
        <goals>
          <goal>prepare-agent</goal>
        </goals>
      </execution>
      <execution>
        <id>jacoco-report</id>
        <phase>test</phase>
        <goals>
          <goal>report</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
</plugins>
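With the report goal bound to the test phase as above, a plain test run should be enough to generate the coverage report locally; by default the jacoco-maven-plugin writes its HTML report under target/site/jacoco:

mvn clean test
# then open target/site/jacoco/index.html in a browser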

It should look similar to below:




Step 2: Add the JaCoCo plug-in in Jenkins:


Step 3: For a Freestyle job, enable the code coverage report by going to "Post-build Actions" and adding "Record JaCoCo coverage report".
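If you use a Pipeline job instead of a Freestyle job, the same JaCoCo plug-in exposes a jacoco step; here is a minimal sketch (the stage layout is illustrative and assumes Maven is available on the agent):

pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh 'mvn clean test'   // runs the tests with the JaCoCo agent attached
            }
        }
    }
    post {
        always {
            jacoco()   // records the coverage data produced by the Maven build
        }
    }
}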



Step 4: Run the job by clicking "Build Now".

Step 5: Click on the job to view the code coverage report.

Tuesday, 9 November 2021

Project March 2022 (latest)

Cloud Eta LLC is an emerging consulting firm that designs business solutions for emerging markets.

They currently have a legacy web application called FOI App, written in Java and hosted on their private server: https://projectfoiappdevops.s3.us-east-2.amazonaws.com/FoiAppLanding/index.html

It usually takes 5 hours to update their application, and updates are manual. This incurs a lot of downtime and affects their business: clients get locked out, which gives their competitors the upper hand.




Your task is to migrate this application to the cloud and apply DevOps practices across their entire Software Development Life Cycle.

You should demonstrate concepts that implement Plan - Code - Build - Test - Deploy - Monitor.



TASK A - Documentation: Set up a Wiki Server for your Project (Containerization)

a. You can get the docker-compose file from the link below:

https://github.com/bitnami/bitnami-docker-dokuwiki/blob/master/docker-compose.yml

Or

Use the command below in your terminal to fetch the YAML and create a Docker Compose file (note the -o flag; without it curl only prints the file to stdout):

curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-dokuwiki/master/docker-compose.yml -o docker-compose.yml

b. Mount your own data volume on this container.

Hint: modify the Docker Compose file, e.g. as sketched after these steps.



c. Change the default port of the wiki server so it runs on port 100.

d. Change the default user and password to:

         Username: Foi

         Password: admin

Hint: use the official image documentation to find the details needed to accomplish all of this:

https://github.com/bitnami/bitnami-docker-dokuwiki
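For orientation, here is a sketch of the kind of edits the hints point at. The environment variable names, the container's internal port (8080) and the persistence path (/bitnami/dokuwiki) reflect my reading of the Bitnami image documentation linked above, so verify them there:

version: '2'
services:
  dokuwiki:
    image: docker.io/bitnami/dokuwiki:latest   # in practice pin the tag used by the upstream compose file
    ports:
      - '100:8080'                         # (c) serve the wiki on host port 100
    environment:
      - DOKUWIKI_USERNAME=Foi              # (d) default user
      - DOKUWIKI_PASSWORD=admin            # (d) default password
    volumes:
      - /data/dokuwiki:/bitnami/dokuwiki   # (b) your own data volume (host path is an example)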

TASK A Acceptance Criteria:

i. The wiki server should be up and running and serving on port 100

ii. Your own container volume is mounted to persist the data

iii. You can log in with the credentials Foi/admin


TASK B: Version Control The FoiApp Project

Plan & Code

App Name: FoiApp

  • WorkStation A- Team Lion- 3.145.18.54
  • WorkStation B- Team Eagle- 3.22.241.224
  • WorkStation C - Team Elephant-    3.21.105.249 
  • WorkStation D- Team Bear-  3.145.96.17  
  • WorkStation E- Team Unicorn-  3.17.181.196 
Developer workstations are Windows machines. Your project supervisor will provide the password you will use to log into the machine assigned to your group; the username is Administrator. You can use MobaXterm or Remote Desktop to connect.

When you access the Developer workstation assigned to your group, you will find the code base in the below location:
C: -> Documents -> App -> FoiApp


(You can use GitHub or Bitbucket.)

1) Set up two repos: a Build repo to store the code base and a Deployment repo to store all your deployment scripts, and name them as you see below:

  • Build repo: FoiApp_Build -> Developers' access
  • Deployment repo: FoiApp_Deployment -> your Team's access

2) Version-control the FoiApp project located on the developer workstation so the developers can migrate their code to the source control management tool (Bitbucket/Git).

  • Set up the developer workstations' SSH keys in Bitbucket to access the Build repo, and your Team's (DevOps) workstation SSH keys in Bitbucket to access the Deployment repo; a key-setup sketch follows below
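A typical key setup looks like the sketch below (run from Git Bash on the Windows workstation; the key comment is illustrative), after which the public key is pasted into the SSH keys settings of the Bitbucket account that should have access:

ssh-keygen -t ed25519 -C "team-workstation"
cat ~/.ssh/id_ed25519.pub   # copy this into Bitbucket's SSH keys settings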

3) Git branching strategy for FoiApp_Build

  • master
  • release, e.g. release/release-v1
  • feature, e.g. feature/feature-v1
  • develop

4) Git branching strategy for FoiApp_Deploy

  • master
  • feature, e.g. feature/feature-v1
  • develop
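A minimal sketch of laying down these strategies with plain Git (branch names follow the lists above; the release branch applies to the Build repo only):

git checkout -b develop                      # shared integration branch
git push -u origin develop
git checkout -b feature/feature-v1 develop   # feature branches are cut from develop
git push -u origin feature/feature-v1
git checkout -b release/release-v1 develop   # Build repo only
git push -u origin release/release-v1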

TASK B Acceptance Criteria: 

1. You should be able to push and pull code between the developer workstation assigned to your team and the FoiApp_Build repo in source control management (SCM)

2. Your Team (DevOps) should be able to push and pull code between your individual workstations and the FoiApp_Deploy repo

3. Demonstrate the Git branching strategy


TASK C: Set up your Infrastructure

1. Set up your environments: DEV, UAT, QA, PROD A, PROD B

Provision five Apache Tomcat servers. (You can use any IaC tool: Terraform, CloudFormation, Ansible Tower. You can host these on any cloud provider: AWS, Google Cloud, Azure.)

i. DEV - t2.micro - 8 GB

ii. UAT (User Acceptance Testing) - t2.small - 10 GB

iii. QA (Quality Assurance) - t2.large - 20 GB

iv. PROD A - t2.xlarge - 30 GB

v. PROD B - t2.xlarge - 30 GB

Apache Tomcat servers should be exposed on port 4444 (see the connector sketch below the note).

Linux Distribution for Apache Tomcat Servers: Ubuntu 16.04

Note: when bootstrapping your servers, make sure you install the Datadog Agent.
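Since Tomcat listens on port 8080 by default, meeting the port requirement means editing the HTTP Connector in Tomcat's conf/server.xml (the non-port attributes shown are standard Tomcat defaults) and opening port 4444 in the instances' security groups:

<Connector port="4444" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />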

2. Set up your Devops tools servers:

(These can be provisioned manually or with an IaC tool. Feel free to use any Linux distribution for these, e.g. Amazon Linux 2, Debian, Ubuntu, etc.)

1 Jenkins (CI/CD) - t2.xlarge - 20 GB

1 SonarQube (code analysis) - t2.small - 8 GB

1 Ansible Tower - t2.2xlarge - 15 GB

1 Artifactory server - t2.2xlarge - 8 GB

1 Kubernetes server - you can use EKS, k3s, kubeadm or minikube (note: your Kubernetes can be installed on your Jenkins server)

TASK D: Set up a 3-node Kubernetes cluster (Container Orchestration) and deploy the DokuWiki server you set up in Task A into it

Label the Nodes: Dev, QA, Prod

1. Set up a Jenkins pipeline to Create/Delete the cluster

2. Set up a Jenkins pipeline to deploy the DokuWiki server into any of the nodes (Dev, QA, Prod) within your cluster

3. Expose the application using a Load balancer or NodePort

Tip: convert your docker-compose.yml to Kubernetes deployment and service files using kompose, for example:
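A sketch of that conversion and deployment; the generated file names depend on the service name in your compose file:

kompose convert -f docker-compose.yml       # emits *-deployment.yaml and *-service.yaml files
kubectl apply -f dokuwiki-deployment.yaml -f dokuwiki-service.yaml
kubectl get pods -o wide                    # confirm which node the pod landed on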

 TASK D Acceptance Criteria: 

1. You should be able to create/delete a Kubernetes cluster

2. You should be able to deploy your application into any node (Dev, QA, Prod)

3. You should be able to access the application through NodePort or a LoadBalancer

TASK E: Set Up Automated Build for Developers 

The developers use Maven to compile the code.

a. Set up a CI pipeline in Jenkins using a Jenkinsfile

b. Enable webhooks in Bitbucket to trigger automated builds of the pipeline job

c. The CI pipeline job should run on an agent (slave)

d. Help the developers version their artifacts, so that each build has a unique artifact version (see the sketch after the tip below)

Tips: https://jfrog.com/knowledge-base/configuring-build-artifacts-with-appropriate-build-numbers-for-jenkins-maven-project/
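One common way to satisfy (d), assuming the versions-maven-plugin and Jenkins' built-in BUILD_NUMBER variable (the 1.0 prefix is just an example scheme), is to stamp the version before packaging:

mvn versions:set -DnewVersion=1.0.${BUILD_NUMBER}   # e.g. 1.0.42 for build #42
mvn clean package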


Pipeline job Name: FoiApp_Build

The pipeline should check out the code from SCM, build it with the Maven build tool, provide code analysis and code coverage with SonarQube, upload artifacts to Artifactory, send email to the team, and provide versioning of artifacts.

The pipeline should have a Slack channel notification to report build status.


i. Acceptance Criteria:

 Automated build after code is pushed to the repository

1. Sonar Analysis on the sonarqube server

2. Artifact uploaded to artifactory

3. Email notification on success or failure

4. Slack Channel Notification

5. Each artifact has a unique version number

6. Code coverage displayed


TASK F: Deploy & Operate (Continuous Deployment)

a. Set up a CD pipeline in Jenkins using a Jenkinsfile.

Create 4 CD pipeline jobs, one per environment (Dev, UAT, QA, Prod), or 1 pipeline that can select any of the 4 environments.

Pipeline job name: e.g. FoiApp_Dev_Deploy


i. The pipeline should be able to deploy to any of your lower-level environments, LLE (Dev, UAT, QA), or higher-level environments, HLE (Prod A, Prod B)

You can use the Deploy to Container plugin in Jenkins, or deploy using Ansible Tower, to pull the artifact from Artifactory and deploy it to either Dev, UAT, QA or Prod.

ii. The pipeline should have a Slack channel notification to report deployment status

iii. Pipeline should have email notification

iv. Deployment Gate

1. Acceptance criteria:

i. Deployment is seen and verified in either Dev, UAT, QA or Prod

ii. Notification is seen in slack channel

iii. Email notification


TASK G: Monitoring

a. Set up continuous monitoring with Datadog by installing the Datadog Agent on all your servers.

 Acceptance criteria: 

i. All your infrastructure server metrics should be monitored (Infrastructure Monitoring)

ii. All running processes on all your servers should be monitored (Process Monitoring)

iii. Tag all your servers on the Datadog dashboard


TASK H: Deployment and Rollback

a. Automate the manual deployment of a specific version of the Deli application using Ansible Tower.

Manual Deployment Process is Below:


Step 1: log in to the Tomcat server

Step 2: download the artifact

Step 3: switch to root

Step 4: extract the artifact into the deployment folder

Deployment folder: /var/lib/tomcat8/webapps

Use service id: ubuntu
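A minimal Ansible sketch of those four steps (the artifact_url variable is a placeholder for your Artifactory path; get_url and unarchive are standard Ansible modules):

- hosts: tomcat
  remote_user: ubuntu              # the service id from above
  become: yes                      # step 3: switch to root
  tasks:
    - name: Download the artifact (step 2)
      get_url:
        url: "{{ artifact_url }}"  # placeholder, passed in per deployment or rollback
        dest: /tmp/foiapp.zip
    - name: Extract the artifact into the deployment folder (step 4)
      unarchive:
        src: /tmp/foiapp.zip
        dest: /var/lib/tomcat8/webapps
        remote_src: yes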


Acceptance Criteria:

i. Deploy a new artifact from Artifactory to either Dev, UAT, QA or Prod

ii. Roll back to an older artifact from Artifactory on either Dev, UAT, QA or Prod

iii. All credentials should be encrypted


TASK I: Demonstrate Bash automation of:

i. Tomcat

ii. Jenkins

iii. Apache
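A minimal sketch of the kind of script this task asks for: one parameterised restart-and-verify helper covering all three services (actual service names vary by distribution, e.g. tomcat8/tomcat9, jenkins, apache2/httpd):

#!/usr/bin/env bash
# Usage: ./bounce.sh <service>   e.g. ./bounce.sh jenkins
set -euo pipefail
svc="${1:?usage: $0 <service-name>}"
sudo systemctl restart "$svc"
if sudo systemctl is-active --quiet "$svc"; then
    echo "$svc is running"
else
    echo "$svc failed to start" >&2
    exit 1
fi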


Acceptance criteria: 

1. Show bash scripts and successfully execute them

Bonus Task:

Add an Application or Elastic Load Balancer to manage traffic between your Prod A and Prod B servers.

Register a domain using Route 53, e.g. www.teamdevops.com.

Point that domain to the Elastic/Application Load Balancer.

Acceptance Criteria: When you enter your domain in the browser, it should point to either Prod A or Prod B.

Project Team
Team Leads In Yellow
    Team A (Supervisor- Valentine)Lion
    Voke - Team Lead
    Pelatiah
    Bidemi
    Godswill
    Joseph
    vitalis

    Team B(Supervisor - Johnson)Eagle
    Peter 
    Sean - Team Lead
    Victoria Ojo
    Apple
    Shantel
    Damian

    Team C(Supervisor- Juwon)Elephant
    Franklin --Team Lead
    Rita
    Ezekiel
    Onuma
    Mahammad
    Victory

    Team D(Supervisor- Etim/Themmy)Bear
    Paul
    Okoye--Team Lead
    Chidiebere
    henry
    Benard Ogbu
    minie
    Jonathan Henson

    Team E(Supervisor- Adaeze)Unicorn
    Kc
    Solomon-----Team Lead
    Benjamin
    Deji
    iyiola
    Oluwatosin

    Lead Architect - Prince

    • Each Team is to work independently with their supervisors to complete this project.
    • Every task is expected to be completed within 1 week.
    • We are adopting an Agile style, so each team is expected to have 15-minute daily stand-up meetings with your supervisors, or in some cases the Lead Architect, where you will discuss your progress (what you did yesterday, what you will do today, how far you are in achieving your goals) and give general updates.
    • This will be a 2-week sprint, after which you will have a Demo to present all your accomplishments.
    • Please note: DevOps Engineers (DOE) and Architects from other establishments have been invited to your Demo, so be prepared.
    Demo date: 02/03/2022, time: 8 pm


