Capstone Project - Health Care

The document outlines a DevOps certification project focused on the healthcare domain, detailing the use of various tools such as Git, Jenkins, Docker, and Kubernetes for continuous integration and deployment. It provides step-by-step instructions for setting up a virtual machine, configuring Terraform, managing infrastructure with Ansible, and implementing a CI/CD pipeline using Jenkins. The project culminates in deploying applications using Docker and Kubernetes, demonstrating the automation of the software development lifecycle.


DevOps Certification Training

Certification Project – Medicure


Healthcare Domain

Submitted by - Saransh Vijay Vargiya


Submitted to - Staragile
Last date of submission - 20/12/24
Objective:

We will use the following tools for these purposes:

- Git - version control, for tracking changes in the code files
- Jenkins - continuous integration and continuous deployment
- Docker - containerizing applications
- Ansible - configuration management
- Selenium - automating tests on the deployed web application
- Terraform - provisioning the infrastructure
- Kubernetes - running the containerized application in a managed cluster
Step 1: Create a VM and install Terraform on it
- Create a virtual machine and install Terraform on it
OS: Ubuntu 22
Instance type: t2.micro
- Add network settings > add a security group > allow all traffic from anywhere

- Connect to the instance.

- Steps to install Terraform. Use the commands below.
# wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

# echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

# sudo apt update && sudo apt install terraform


- Now create the access key and secret key in AWS for Terraform to connect with.
- Once the access key and secret key are created in AWS IAM,

- go and configure them on the Terraform VM.

- Install awscli on the VM.

Steps:

# apt-get update
# apt-get install awscli -y
# aws configure
Give the valid access key
Give the valid secret key
Press enter; there is no need to give any region or output format.

- To verify if the credentials have been set for aws


# cat ~/.aws/credentials
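If `aws configure` succeeded, the credentials file follows the standard INI layout with one profile section and two key lines. A minimal sketch that recreates and checks that layout in a temporary location, using placeholder values (the key values here are hypothetical, not real credentials):

```shell
# Write a sample credentials file with placeholder keys (not real credentials)
mkdir -p /tmp/aws-demo
cat > /tmp/aws-demo/credentials <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = exampleSecretKey123
EOF

# Verify the profile header and both keys are present,
# which is what `cat ~/.aws/credentials` should show
grep -q '^\[default\]' /tmp/aws-demo/credentials && echo "profile ok"
grep -q '^aws_access_key_id' /tmp/aws-demo/credentials && echo "access key ok"
grep -q '^aws_secret_access_key' /tmp/aws-demo/credentials && echo "secret key ok"
```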
- Write a Terraform configuration file to create an EC2 server and install Ansible on it.
# mkdir myproject
# cd myproject
# vim aws_infra.tf

provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["~/.aws/credentials"]
}

resource "tls_private_key" "mykey" {
  algorithm = "RSA"
}

resource "aws_key_pair" "aws_key" {
  key_name   = "web-key"
  public_key = tls_private_key.mykey.public_key_openssh

  provisioner "local-exec" {
    command = "echo '${tls_private_key.mykey.private_key_openssh}' > ./web-key.pem"
  }
}

resource "aws_vpc" "my-vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "my-vpc"
  }
}

resource "aws_subnet" "subnet-1" {
  vpc_id                  = aws_vpc.my-vpc.id
  cidr_block              = "10.0.1.0/24"
  depends_on              = [aws_vpc.my-vpc]
  map_public_ip_on_launch = true
  tags = {
    Name = "my-subnet"
  }
}

resource "aws_route_table" "my-route-table" {
  vpc_id = aws_vpc.my-vpc.id
  tags = {
    Name = "my-route-table"
  }
}

resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.subnet-1.id
  route_table_id = aws_route_table.my-route-table.id
}

resource "aws_internet_gateway" "gw" {
  vpc_id     = aws_vpc.my-vpc.id
  depends_on = [aws_vpc.my-vpc]
  tags = {
    Name = "my-gw"
  }
}

resource "aws_route" "my-route" {
  route_table_id         = aws_route_table.my-route-table.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.gw.id
}

variable "sg_ports" {
  type    = list(number)
  default = [8080, 80, 22, 443]
}

resource "aws_security_group" "my-sg" {
  name   = "sg_rule"
  vpc_id = aws_vpc.my-vpc.id

  dynamic "ingress" {
    for_each = var.sg_ports
    iterator = port
    content {
      from_port   = port.value
      to_port     = port.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "myec2" {
  ami                    = "ami-0166fe664262f664c"
  instance_type          = "t2.medium"
  key_name               = "web-key"
  subnet_id              = aws_subnet.subnet-1.id
  vpc_security_group_ids = [aws_security_group.my-sg.id]
  tags = {
    Name = "Terraform-EC2"
  }

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = tls_private_key.mykey.private_key_pem
      host        = self.public_ip
    }
    inline = [
      "sudo apt update",
      "sudo apt install -y software-properties-common",
      "sudo add-apt-repository --yes --update ppa:ansible/ansible",
      "sudo apt install ansible -y"
    ]
  }
}

- Now let's run the code to verify the infrastructure.

# terraform init
- Use the below command to create the VM using Terraform:
# terraform apply --auto-approve
- Now we can see that our VM is created.

- Now connect to the newly created EC2 instance, which has Ansible installed on it. We can call this instance the Ansible controller.
- Make sure it is a t2.medium machine, as we will have to set up Jenkins, Docker, and monitoring tools on it.
Note: In order to connect to the newly created VM we need to first copy the SSH key to the local computer.
- Go to > cat web-key.pem > copy the key and save it on the local machine.
- Now we can connect to it using any SSH client; we are using MobaXterm.
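SSH clients refuse private keys with loose permissions, so after saving web-key.pem locally it is worth restricting it before connecting. A minimal sketch using a placeholder key file (the filename matches the Terraform provisioner above; the ssh target is shown as a comment because the instance IP varies per run):

```shell
# Create a placeholder file standing in for the copied web-key.pem
echo "dummy-private-key-material" > /tmp/web-key.pem

# Restrict permissions; ssh rejects keys readable by group/others
chmod 400 /tmp/web-key.pem

# Confirm the mode is 400 (owner read-only)
stat -c '%a' /tmp/web-key.pem

# Then connect (hypothetical public IP):
# ssh -i web-key.pem ubuntu@<instance-public-ip>
```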
Step 2: Configuration management of tools using Ansible

- We can see that Ansible is now installed on the new VM created using Terraform.

- Before writing and executing the playbook, enter the below commands manually.
Add the Jenkins key to the server:
# sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian/jenkins.io-2023.key
Then add a Jenkins apt repository entry:
# echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

- Now create a playbook with the below code.

# vim playbook.yml
- name: Install and set up DevOps tools
  hosts: localhost
  become: true
  tasks:
    - name: Update the apt repo
      apt:
        update_cache: yes
    - name: Install multiple packages
      package:
        name: "{{ item }}"
        state: present
      loop:
        - git
        - docker.io
        - openjdk-17-jdk
    - name: Install Jenkins
      apt:
        name: jenkins
        state: present
    - name: Start Jenkins and Docker services
      service:
        name: "{{ item }}"
        state: started
      loop:
        - jenkins
        - docker

- Now run the playbook:
# ansible-playbook playbook.yml
- The playbook ran successfully with no failed tasks, and we have configured Java, Git, Docker, and Jenkins using Ansible.
- Now let us verify that Jenkins is up and running by browsing to the instance's IP address on port 8080.
Step 3: Continuous Integration pipeline
- Set up the Jenkins dashboard and log in to it.

- Set up Maven in the tools section of Jenkins:
Go to Manage Jenkins > Tools > Maven > set up the Maven name there.
- Now create a pipeline to fetch the code from GitHub, then test and build it.
pipeline{
    agent any

    tools{
        maven 'mymvn'
    }

    stages{
        stage('Clone Repo')
        {
            steps{
                git 'https://github.com/saransh-vijayvargiya/star-agile-health-care.git'
            }
        }
        stage('Test Code')
        {
            steps{
                sh 'mvn test'
            }
        }
        stage('Build Code')
        {
            steps{
                sh 'mvn package'
            }
        }
    }
}
Now click on Build to check whether the build is happening or not.

- The build is now a success.


Step 4: Containerize and implement microservice architecture
- We will write a Dockerfile and save it in the GitHub repo.
- We will go to the CI/CD pipeline in Jenkins and add a stage to build the Dockerfile into an image.
- We will run the image to deploy the application in a container.
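The Dockerfile itself is not reproduced in this document. A minimal sketch of what such a Dockerfile for this Maven-built Java service could look like (the JAR wildcard, base image, and port are assumptions, not taken from the repository):

```dockerfile
# Run the jar produced by `mvn package` (jar name is hypothetical)
FROM openjdk:17-jdk-slim
WORKDIR /app
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```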
- Before running the pipeline, go to the terminal of the VM and execute this command, which allows Jenkins to run Docker commands:
# chmod -R 777 /var/run/docker.sock
- Now we will write the pipeline to build an image and push the image to Docker Hub.
- First save the Docker Hub password as a credential, as exposing the password in the pipeline is not good practice.
- Now let us create a new project and write the pipeline syntax into it.

pipeline{
    agent any

    tools{
        maven 'mymvn'
    }

    stages{
        stage('Clone Repo')
        {
            steps{
                git 'https://github.com/saransh-vijayvargiya/star-agile-health-care.git'
            }
        }
        stage('Test Code')
        {
            steps{
                sh 'mvn test'
            }
        }
        stage('Build Code')
        {
            steps{
                sh 'mvn package'
            }
        }
        stage('Build Image')
        {
            steps{
                sh 'docker build -t health-care-project:$BUILD_NUMBER .'
            }
        }
        stage('Push the Image to dockerhub')
        {
            steps{
                withCredentials([string(credentialsId: 'DOCKER_HUB_PASWD', variable: 'DOCKER_HUB_PASWD')])
                {
                    sh 'docker login -u saranshvijayvargiya -p ${DOCKER_HUB_PASWD}'
                }
                sh 'docker tag health-care-project:$BUILD_NUMBER saranshvijayvargiya/health-care-project:$BUILD_NUMBER'
                sh 'docker push saranshvijayvargiya/health-care-project:$BUILD_NUMBER'
            }
        }
    }
}

- Now click on Apply and Save, go to the project, and click on Build.
- We can see that the build is a success and the image is created and pushed to Docker Hub.
- Now let us set up the GitHub webhook so that whenever there is a change in the code it will automatically trigger the build.
Go to project > General > select the option "GitHub hook trigger for GITScm polling".

- Now navigate to the Git repository:
Go to the settings of the repository > Webhooks > Add webhook.
- Enter the Jenkins URL followed by /github-webhook/
- And click on Add webhook.
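The webhook payload URL is just the Jenkins base URL with /github-webhook/ appended. A small sketch that builds it from a hypothetical Jenkins address (the IP and port here are placeholders for the Ansible controller's public address):

```shell
# Hypothetical Jenkins base URL (public IP of the Ansible controller, port 8080)
JENKINS_URL="http://3.92.10.25:8080"

# GitHub webhook payload URL expected by the GitHub plugin
WEBHOOK_URL="${JENKINS_URL}/github-webhook/"
echo "$WEBHOOK_URL"
```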

- We can observe that the webhook is up and running.
- Currently we have 13 builds in Jenkins; let us make a change in the code and see whether a new build is triggered automatically.

- Adding a "!" in the code, saving the commit, and observing whether the build is triggered automatically.
- After saving the commit the build triggered automatically, which means the webhook is working fine.
So now, whenever there is a change in the code, the build will be triggered automatically.

- The build was triggered and was a success too.

- And the new Docker image was also pushed to Docker Hub.
Create a new VM and set up Kubernetes on it.
Connect to the master and execute the below commands.
Only on MASTER NODE:
============================================
# sudo su -

## Install Containerd

sudo wget https://raw.githubusercontent.com/lerndevops/labs/master/scripts/installContainerd.sh -P /tmp
sudo bash /tmp/installContainerd.sh
sudo systemctl restart containerd.service

### Install kubeadm, kubelet, kubectl
You will install these packages on all of your machines:
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.

# sudo wget https://raw.githubusercontent.com/lerndevops/labs/master/scripts/installK8S.sh -P /tmp
# sudo bash /tmp/installK8S.sh

## Initialize the Kubernetes master node
# sudo kubeadm init --ignore-preflight-errors=all

Execute the below commands to set up kubectl and apiserver communication:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

## Install a networking driver -- Weave/flannel/canal/calico etc.
## Below installs the Calico networking driver
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml

# Validate: kubectl get nodes

SETUP worker nodes:
=====================================
On all worker nodes:

## Install Containerd
# sudo su -
sudo wget https://raw.githubusercontent.com/lerndevops/labs/master/scripts/installContainerd.sh -P /tmp
sudo bash /tmp/installContainerd.sh
sudo systemctl restart containerd.service

## Install kubeadm, kubelet, kubectl

sudo wget https://raw.githubusercontent.com/lerndevops/labs/master/scripts/installK8S.sh -P /tmp
sudo bash /tmp/installK8S.sh

## Run the below on the master node to get the join token

# kubeadm token create --print-join-command

Copy the kubeadm join command from the master and run it on all worker nodes.

On the master node:
# kubectl get nodes

- Now we have our k8s cluster with one master and one worker.
- We will configure this cluster in the Jenkins server.
Go to > Jenkins dashboard > install the Kubernetes plugin.
Now go to the pipeline syntax view.
Sample step > select "kubernetesDeploy: Deploy to Kubernetes".
Now under Kind > select "Kubernetes configuration (kubeconfig)".
Give it any ID and description.

Now on the master node:
Go to > .kube > cat config
And copy the contents of the config file.
Select "Enter directly", paste the contents of the config file into it > click on Add.

Now click on Generate pipeline and copy the syntax.
Now go to the project's pipeline and add a new stage below the existing ones. In that stage enter the names of the deployment and service files, which are present in the Git repository.

stage('Deploy to k8s'){
    steps{
        kubernetesDeploy configs: 'deploymentservice.yml', kubeConfig: [path: ''], kubeconfigId: 'k8sconfigpwd', secretName: '', ssh: [sshCredentialsId: '*', sshServer: ''], textCredentials: [certificateAuthorityData: '', clientCertificateData: '', clientKeyData: '', serverUrl: 'https://']
    }
}
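The deploymentservice.yml referenced in the deploy stage lives in the Git repository and is not reproduced in this document. A minimal sketch of what a combined Deployment and NodePort Service for this app could look like (the image name matches the pipeline; the replica count, labels, and nodePort value are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: health-app-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: health-app
  template:
    metadata:
      labels:
        app: health-app
    spec:
      containers:
        - name: health-app
          image: saranshvijayvargiya/health-app
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: health-app-svc
spec:
  type: NodePort
  selector:
    app: health-app
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080
```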
Note: This is our final pipeline syntax:
pipeline{
    agent any

    tools{
        maven 'mymvn'
    }

    stages{
        stage('Clone Repo')
        {
            steps{
                git 'https://github.com/saransh-vijayvargiya/star-agile-health-care.git'
            }
        }
        stage('Test Code')
        {
            steps{
                sh 'mvn test'
            }
        }
        stage('Build Code')
        {
            steps{
                sh 'mvn clean package'
            }
        }
        stage('Build Image')
        {
            steps{
                sh 'docker build -t health-app .'
            }
        }
        stage('Push the Image to dockerhub')
        {
            steps{
                withCredentials([string(credentialsId: 'DOCKER_HUB_PASWD', variable: 'DOCKER_HUB_PASWD')])
                {
                    sh 'docker login -u saranshvijayvargiya -p ${DOCKER_HUB_PASWD}'
                }
                sh 'docker tag health-app saranshvijayvargiya/health-app'
                sh 'docker push saranshvijayvargiya/health-app'
            }
        }
        stage('Deploy to k8s'){
            steps{
                kubernetesDeploy configs: 'deploymentservice.yml', kubeConfig: [path: ''], kubeconfigId: 'k8sconfigpwd', secretName: '', ssh: [sshCredentialsId: '*', sshServer: ''], textCredentials: [certificateAuthorityData: '', clientCertificateData: '', clientKeyData: '', serverUrl: 'https://']
            }
        }
    }
}
Now save it and click on Build.

We can see the build succeeded and was executed.

Now go to the master node and enter the command:
# kubectl get pods

In order to see detailed info about the pods, use the command:
# kubectl get pods -o wide
Use the command:
# kubectl get svc

Now copy the cluster IP and the port number and paste them in the browser to access the application.
Monitoring of the Jenkins and Kubernetes cluster using the Prometheus and Grafana tools.
Install using Helm
===========================
Add the helm repo:
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Update the helm repo:
# helm repo update

Install Prometheus:
# helm install prometheus prometheus-community/prometheus

Expose the Prometheus service:
This is required to access prometheus-server using your browser.
# kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-ext

On the Kubernetes master node run the queries:
# kubectl get pods
# kubectl get svc

Go to the browser:
Take the public IP of the master node and the NodePort number.

Example: http://34.60.116.55:30481/

You will see the Prometheus dashboard.

Now go to Status > Target health > it will show all the targets and whether each is up or not.
We can run queries in Prometheus, but it is not user friendly, so we will set up Grafana, as Grafana uses dashboards which are more user friendly.
Grafana:

Install using Helm

Add the helm repo:
# helm repo add grafana https://grafana.github.io/helm-charts
Update the helm repo:
# helm repo update
Install Grafana:
# helm install grafana grafana/grafana
The username is admin; get the password by running the below command:
# kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

reryRHYhWYbtqnuDK5MkKsg3iT28zNBri0lUnt8S

Expose the Grafana service:
# kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
Log in to Grafana: the username will be admin and the password will be the text that we copied earlier.
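The password-retrieval command works by reading the base64-encoded admin-password field from the Grafana secret and decoding it. A minimal sketch of the decoding step alone, using a hypothetical value in place of a live cluster secret:

```shell
# Simulate the .data.admin-password field of a Kubernetes secret,
# which is always stored base64-encoded
encoded=$(printf 'sample-admin-password' | base64)
echo "$encoded"

# Decode it, as the `kubectl get secret ... | base64 --decode` pipeline does
printf '%s' "$encoded" | base64 --decode ; echo
```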

- Now go to Data sources and add Prometheus as a data source.
- Enter the Prometheus URL.

Click on Save & test.

Now in the browser we will find many community versions of dashboards that we can use.
Here is a dashboard for the Kubernetes cluster:
Dashboard: 6417
Now go to > Dashboards > Import a dashboard > enter the dashboard ID.
Add Prometheus as the target and import it.

Here is Dashboard 6417.

We can add more dashboards as well; here is another dashboard for Kubernetes:
Dashboard: 18283
We can navigate between the dashboards too.
