DevOps Practical
Chennai – 69
Aim
This lab aims to create a continuous integration and continuous delivery (CI/CD) pipeline in Azure
DevOps to automate the build process for a Maven project.
Procedure
1. Create an Azure DevOps Project (if you don't have one already):
o Sign in to your Azure DevOps account and navigate to "Organizations" -> "[Your
Organization Name]".
o Click on "New project" and choose a name for your project.
2. Define the Maven Project (Locally):
o You can use an existing Maven project or create a sample project using Maven
Archetypes (https://maven.apache.org/archetypes/index.html).
o Ensure you have a pom.xml file configured with your project dependencies and build
tasks.
3. Connect your Code Repository (Optional):
o If your code resides in a version control system like Git, connect the repository to
your Azure DevOps project. You can use services like GitHub, Azure Repos, etc.
4. Create a CI/CD Pipeline:
o Go to "Pipelines" -> "Pipelines" and select "New pipeline".
o Choose "Starter pipeline" and select your code repository (if applicable) or "Empty
job".
5. Configure the Pipeline YAML:
o Edit the YAML script for your pipeline. Here's an example:
YAML
pool:
  vmImage: 'ubuntu-latest' # Adjust VM image if needed

stages:
- stage: Build
  jobs:
  - job: Build_Job
    steps:
    - task: Maven@3 # Built-in Azure Pipelines Maven task
      inputs:
        mavenPomFile: 'pom.xml'   # Path to your pom.xml file
        goals: 'clean package'    # Maven goals to execute (clean and package)
        publishJUnitResults: true # Optionally publish JUnit test results
• Explanation:
  o pool defines the virtual machine image used for pipeline execution.
  o stages defines the pipeline stages (e.g., Build).
  o jobs defines the jobs within a stage (e.g., Build_Job).
  o task selects the built-in Maven@3 task.
  o inputs configures the task with the pom.xml location, the Maven goals to run, and result-publishing options.
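To have the pipeline run automatically when code is pushed, a trigger block can be added at the top of the YAML. This is a sketch; the branch name is an assumption — adjust it to your repository's default branch:

```yaml
trigger:
  branches:
    include:
    - main   # the build runs automatically on pushes to this branch
```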
Inference
• A successful pipeline run indicates that your Maven project has been built and artifacts
(compiled code) are generated (if publishing is enabled).
• The pipeline logs provide details about the build process and any potential errors.
Result
• You will have a functional CI/CD pipeline in Azure DevOps that automatically builds your
Maven project upon code changes pushed to the repository (if connected) or manual
execution.
Lab Practical Solution: Run Regression Tests using Maven Build Pipeline in Azure
Aim:
This lab practical aims to demonstrate how to integrate regression test execution with a Maven
build pipeline deployed in Azure DevOps.
Procedure:
1. Create an Azure DevOps Project
o Login to Azure DevOps (https://azure.microsoft.com/en-us/products/devops) and
create a new project or use an existing one.
2. Define the Maven Build Pipeline
o Go to Pipelines -> Pipelines and create a new pipeline (the Releases section is for classic release pipelines; YAML pipelines live under Pipelines).
o Choose the "Starter pipeline" template.
o In the YAML editor, paste the following code:
YAML
stages:
- stage: Build                  # Define a build stage
  jobs:
  - job: BuildTests             # Define a job within the build stage
    pool:                       # Define the agent pool to run the job on
      vmImage: 'ubuntu-latest'
    steps:
    - script: mvn clean install # Run maven clean and install commands
      displayName: 'Build and Install Tests'
    - publish: $(System.DefaultWorkingDirectory)/target/*.jar # Publish the generated JAR file as artifact
Explanation:
• The pipeline defines two stages: Build and Test.
• The Build stage uses the mvn clean install command to build and install the project with
its dependencies.
• The Build stage publishes the generated JAR file as an artifact.
• The Test stage depends on the successful completion of the Build stage.
• The Test stage downloads the JAR artifact from the previous stage.
• The Test stage runs the mvn test command to execute the regression tests.
• The Test stage publishes the generated test reports (usually located in the
target/surefire-reports directory) as artifacts.
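The Test stage described in the bullets above is not shown in the YAML snippet; pieced together from that description, it might look roughly like this. The job name and report path are assumptions — adjust them to your project:

```yaml
- stage: Test
  dependsOn: Build          # Runs only after the Build stage succeeds
  jobs:
  - job: RunRegressionTests
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - download: current     # Download artifacts published by the previous stage
    - script: mvn test      # Execute the regression tests
      displayName: 'Run Regression Tests'
    - publish: $(System.DefaultWorkingDirectory)/target/surefire-reports # Publish test reports
```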
Inference:
A successful pipeline run indicates that the Maven build executed without errors, the test artifacts were downloaded and the tests executed successfully, and the test reports were published as artifacts.
Result:
By examining the published test reports (usually in JUnit XML format), you can analyze the test
results and identify any failing tests that require further investigation.
Lab Practical: Install Jenkins in Cloud
Aim
The aim of this lab practical is to set up and configure Jenkins, an open-source automation server,
in a cloud environment. This will enable you to automate the building, testing, and deployment of
your applications.
Procedure
• The initial setup will prompt you to unlock Jenkins using a randomly generated password
located in the console output where Jenkins was started.
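On a typical Linux package installation, the same password is also written to a known file under the Jenkins home directory (this path assumes the default Jenkins home):

```shell
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```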
6. Install Plugins (Optional):
• Jenkins offers a vast library of plugins to extend its functionality.
• Visit the "Manage Jenkins" -> "Plugins" section and browse/search for plugins relevant to
your automation needs.
• Install the desired plugins and restart Jenkins for them to take effect.
7. Create a Sample Job (Optional):
• To verify your Jenkins installation, create a simple job.
• Go to "New Item" and choose a job type like "FreeStyle Project".
• Configure the job to perform a basic task like executing a shell script or building a sample
project.
• Save the job and run it to test the Jenkins functionality.
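For the "Execute shell" build step of the sample job, something as small as this is enough to confirm Jenkins can run a job:

```shell
# Minimal shell build step: print a greeting and the build date
echo "Hello from Jenkins"
date
```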
8. Secure Jenkins (Recommended):
• Implement security measures to protect your Jenkins server. This may involve configuring
user authentication, authorization, and network access controls.
Inference
By successfully completing this lab, you will have gained hands-on experience in deploying
Jenkins in a cloud environment. This allows you to automate your software development lifecycle,
improving efficiency and consistency.
Result
A functioning Jenkins server running in your cloud environment, ready to automate your build, test,
and deployment pipelines. You should be able to access the Jenkins web interface and create
jobs to manage your software development process.
Lab Practical: Create a CI Pipeline Using Jenkins
Aim
This lab aims to establish a continuous integration (CI) pipeline using Jenkins. The pipeline will
automate the build and test process for a sample software project upon code changes.
Procedure
1. Setting Up Jenkins:
o Ensure you have a running Jenkins server. If not, download and install it from the
official website https://www.jenkins.io/download/.
o After installation, configure Jenkins by following the initial setup wizard.
2. Creating a New Project:
o Login to the Jenkins dashboard.
o Click on "New Item" and select "Pipeline" from the options.
o Provide a name for your project (e.g., "CI_Pipeline_Project").
o Choose "Pipeline script from SCM" under the "Pipeline" section.
3. Source Code Management (SCM) Configuration:
o Select the SCM tool you're using (e.g., Git).
o Enter the URL of your Git repository containing the sample project code.
o Specify the branch name that triggers the pipeline execution (e.g., "main").
o Leave the credentials section blank for now (we can add them later if needed).
4. Defining the Pipeline Script (Jenkinsfile):
o In the script editor, paste the following code (replace <path_to_your_project> with
the actual path within your repository):
Groovy
pipeline {
    agent any
    stages {
        stage('Checkout Code') {
            steps {
                git branch: '<branch_name>', credentialsId: '', url: '<repository_url>'
            }
        }
        stage('Build Project') {
            // Replace the following steps with your project-specific build commands
            steps {
                // Each 'sh' step runs in its own shell, so combine 'cd' with the
                // build command in one step (or use dir()) rather than splitting them
                sh 'cd <path_to_your_project> && ./build.sh' // Assuming a build script named build.sh
            }
        }
        stage('Run Tests') {
            // Replace the following steps with your project-specific test commands
            steps {
                sh 'cd <path_to_your_project> && ./test.sh' // Assuming a test script named test.sh
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: '**/*.log' // Archive logs from all stages
        }
        success {
            // Optional: Send notifications on successful builds (e.g., email)
        }
        failure {
            // Optional: Send notifications on failed builds (e.g., email)
        }
    }
}
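The pipeline assumes build.sh and test.sh exist in your repository; these names are placeholders. A minimal build.sh might look like the following sketch (test.sh would follow the same pattern with your test command):

```shell
#!/bin/sh
# build.sh - hypothetical build script invoked by the 'Build Project' stage
set -e                      # Fail the Jenkins stage if any command fails
echo "Building project..."
# Replace the line below with your real build command, e.g.: mvn -q clean package
```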
Inference
• Automation: The build and test process becomes automated, eliminating manual intervention and reducing errors.
• Faster Feedback: Developers receive quicker feedback on code changes, allowing for
faster bug fixes and improvements.
• Improved Quality: Automated testing helps identify and fix issues early in the development
cycle, leading to better code quality.
• Continuous Integration: Code changes are integrated frequently, minimizing code
divergence and conflicts.
Result
This lab demonstrates the creation of a basic CI pipeline using Jenkins. You can customize the
pipeline script further to integrate specific build tools, testing frameworks, and notification
workflows based on your project requirements. By implementing CI, you establish a foundation for
continuous development practices, leading to higher quality software and faster releases.
Lab Practical: CI/CD Pipeline with Jenkins and Cloud Deployment
Aim
This lab practical aims to establish a continuous delivery (CD) pipeline using Jenkins. The pipeline
will automate the process of fetching code from a version control system (VCS), building the
application, and deploying it to a cloud environment.
Procedure
1. Setting Up Jenkins
• Install Jenkins on your system. You can find installation instructions for various platforms on
the Jenkins website https://www.jenkins.io/download/.
• Start and configure Jenkins according to your needs. This might involve setting up security
measures and installing additional plugins.
2. Configuring the Pipeline
• In Jenkins, create a new pipeline job. You can choose between a freestyle project or a
pipeline script. We'll use a pipeline script for this exercise.
• Within the pipeline script editor, define the pipeline using Jenkins Pipeline DSL (Domain
Specific Language). Here's an example script:
Groovy
pipeline {
    agent any
    stages {
        stage('Checkout Code') {
            steps {
                git branch: 'main', credentialsId: 'github-credentials', url: 'https://github.com/your-username/your-repo.git'
            }
        }
        stage('Build Application') {
            steps {
                sh 'mvn clean install' // Replace with your build command (e.g., npm install, etc.)
            }
        }
        stage('Deploy to Cloud') {
            steps {
                // Deployment logic specific to your cloud provider (see examples below);
                // a declarative 'steps' block needs at least one step, hence the placeholder
                echo 'Add provider-specific deployment steps here'
            }
        }
    }
}
Explanation of the Script:
• agent any: This specifies that the pipeline can run on any available Jenkins agent.
• Checkout Code: This stage fetches code from the specified Git repository using the git step and the configured credentials.
• Build Application: This stage builds the application (Maven here; substitute your own build command).
• Deploy to Cloud: This stage holds the provider-specific deployment logic.
3. Deploying to the Cloud
The deployment stage depends on your chosen cloud provider. Here are some examples:
• AWS: Install the AWS plugin for Jenkins and configure credentials for accessing your AWS
account. You can then use pipeline steps like aws s3 cp to upload build artifacts to S3
buckets and utilize tools like AWS CodeDeploy for automated deployments to EC2
instances or Elastic Beanstalk environments.
• Azure: Install the Azure DevOps plugin for Jenkins and configure your Azure service
connection. The pipeline can then leverage Azure CLI commands or pre-built functions
within the plugin to deploy applications to Azure App Service, Azure Functions, or Virtual
Machines.
• Google Cloud Platform (GCP): Install the Google Cloud plugin for Jenkins and configure
your GCP service account. The pipeline can utilize gcloud commands to deploy
applications to Cloud Run, App Engine, or Compute Engine instances.
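As one concrete sketch, an AWS deployment stage might upload the build artifact to S3. The bucket name and artifact path here are hypothetical, and the step assumes the AWS CLI and credentials are configured on the agent:

```groovy
stage('Deploy to Cloud') {
    steps {
        // Hypothetical example: copy the built JAR to an S3 bucket
        sh 'aws s3 cp target/your-app.jar s3://your-deployment-bucket/your-app.jar'
    }
}
```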
4. Running the Pipeline
• Save your pipeline script and configure the job in Jenkins.
• Trigger the pipeline manually or set up triggers based on events like code commits to your
VCS.
• Jenkins will execute the pipeline stages sequentially. You can monitor the progress of each
stage on the Jenkins dashboard.
Inference
A successful run of all three stages indicates that Jenkins fetched the latest code, built the application, and deployed it to the cloud environment without manual intervention, giving you an end-to-end continuous delivery workflow.
Result
A functioning CD pipeline in Jenkins that checks out code from your VCS, builds the application, and deploys it to your chosen cloud provider, triggered manually or by code commits.
Lab Practical: Automate Web Application Infrastructure using Ansible
Aim
This lab aims to create an Ansible playbook that automates the setup of a basic web application
infrastructure. This includes installing and configuring a web server (Nginx) and deploying a static
HTML webpage.
Procedure
1. Preparing the Files:
• Create Inventory File: In your project directory, create a file named inventory listing your target host under a webserver group:
[webserver]
server1.example.com
Replace server1.example.com with the actual hostname or IP address of your web server.
• Prepare Playbook File: Create another file named webserver.yml in your project directory. This file will contain the Ansible playbook.
2. Building the Playbook:
Open the webserver.yml file and paste the following content:
YAML
---
- hosts: webserver
  become: true
  tasks:
    - name: Update package lists
      apt: update_cache=yes
    - name: Install Nginx
      apt: name=nginx state=present
    - name: Start Nginx and enable it on boot
      service: name=nginx state=started enabled=yes
    - name: Create web root directory
      file: path=/var/www/html state=directory
    - name: Copy index.html to web root
      copy: src=index.html dest=/var/www/html/index.html owner=www-data group=www-data
      notify: Restart Nginx on config changes
  handlers:
    - name: Restart Nginx on config changes
      service: name=nginx state=restarted
Explanation:
• Hosts: The hosts: webserver line targets the webserver group from the inventory, and become: true runs the tasks with root privileges on the target server.
• Tasks: This section defines a series of tasks that Ansible will execute on the target
server(s).
o The first task updates the package lists on the server.
o The second task installs the Nginx web server package.
o The third task starts the Nginx service and ensures it automatically starts on boot.
o The fourth task creates a directory /var/www/html to store the web content.
o The fifth task copies a sample index.html file (not included here) to the created
directory, setting ownership to www-data.
• Handlers: This section defines actions triggered under specific conditions.
  o The handler restarts the Nginx service whenever a task that notifies it reports a change (for example, after a new index.html is copied).
3. Deploying the Playbook:
• Place the index.html file: Create a simple HTML file named index.html containing your
desired content in your project directory.
• Run the Playbook: Navigate to the directory containing the inventory and webserver.yml files. Execute the following command:
ansible-playbook -i inventory webserver.yml
4. Verification:
Once the playbook finishes execution, access your web server's IP address in a web browser. You
should see the content of your index.html file displayed.
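From the control node you can also verify with curl (substitute your server's actual address):

```shell
curl http://server1.example.com/
```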
Inference
This lab demonstrates how Ansible can automate the setup and configuration of a basic web
application infrastructure. By using playbooks, we can manage infrastructure in a repeatable and
consistent manner.
Result
This lab provides a foundational understanding of using Ansible for web application infrastructure
automation. With further exploration, you can learn to deploy more complex applications, manage
configurations, and automate various infrastructure tasks.
Lab Practical: Build a Simple Application using Gradle
Aim
This lab aims to introduce you to Gradle, a powerful build automation tool for Java projects. By
building a simple application, you'll gain hands-on experience with Gradle's core functionalities like
project creation, dependency management, and task execution.
Procedure
1. Create the Project: In an empty project directory, run gradle init. This command will prompt you for some configuration options. Choose the following: application as the project type, Java as the implementation language, and the defaults for the remaining prompts.
2. Build the Application: Run gradle build. This task will compile the source code and create an executable JAR file (usually located in build/libs).
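With those choices, gradle init generates a small sample application under app/src/main/java; it looks roughly like this sketch (the package declaration and exact greeting depend on the Gradle version, so treat the details as assumptions):

```java
// App.java - minimal application class, as generated by 'gradle init' (sketch)
public class App {
    public String getGreeting() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        // 'gradle run' executes this main method via the application plugin
        System.out.println(new App().getGreeting());
    }
}
```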
Gradle offers various built-in tasks that automate different stages of the development process. Here are some commonly used tasks:
• build – compiles, tests, and packages the project
• clean – deletes the build directory
• test – runs the project's test suite
• run – runs the application (requires the application plugin)
You can explore these tasks by running gradle tasks in your terminal. The output will list all available tasks and their descriptions.
7. Additional Considerations
• Gradle allows writing custom build scripts to automate complex tasks. Explore the Gradle
documentation for details on advanced build customization.
• Gradle integrates seamlessly with version control systems like Git, allowing you to manage
build configurations alongside your source code.
Inference
By completing this lab, you've gained practical experience with Gradle's core functionalities:
• Project Creation: Gradle simplifies project setup with pre-defined configurations for
common project types like Java applications.
• Dependency Management: Gradle handles external libraries your application relies on,
ensuring consistent versions and resolving conflicts.
• Task Automation: Gradle provides built-in and custom tasks to automate repetitive tasks
like compiling, building, and testing your application.
This lab has laid the groundwork for you to explore Gradle's vast capabilities in managing complex
build processes and integrating with continuous integration and delivery (CI/CD) pipelines for
efficient software development.
Result
This lab has successfully demonstrated how to build a simple Java application using Gradle.
You've created a project structure, written application code, configured dependencies (optional),
built and run the application, and explored some essential Gradle tasks.
Lab Practical: Ansible Installation, Roles, and Playbooks
Aim:
This lab practical aims to equip you with the skills to install and configure Ansible, a popular IT
automation tool. You will learn how to create roles for modular code reuse and write playbooks,
the core automation units in Ansible.
Procedure:
1. Setting Up the Environment
• Control Node: Choose a machine to act as your Ansible control node. This is where you
will install Ansible and manage your infrastructure. Ensure the control node has a non-root
user with sudo privileges.
• Managed Nodes: Identify the machines you want to manage with Ansible (servers, network
devices, etc.). These are referred to as managed nodes. Configure passwordless SSH
access from the control node to each managed node. This allows Ansible to execute tasks
without manual intervention.
2. Installing Ansible
On your control node, follow the installation instructions specific to your operating system.
• For Debian/Ubuntu:
Bash
sudo apt update
sudo apt install ansible
3. Verifying Installation
Once installed, run the following command to verify the Ansible version:
Bash
ansible --version
• - name: Install Apache Web Server: Defines the play name (optional but descriptive)
• hosts: all: Specifies the target group from the inventory (here, all)
• become: true: Grants sudo privileges for tasks within the play
• tasks: Lists the actions to perform:
  o apt:: Uses the apt module for package management (adjust for other systems)
  o name: apache2: Specifies the package to install
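Pieced together from these bullets, sample_playbook.yml would look roughly like this:

```yaml
---
- name: Install Apache Web Server
  hosts: all
  become: true
  tasks:
    - name: Install the apache2 package
      apt:
        name: apache2
        state: present
```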
Navigate to the directory containing your playbook and run the following command:
Bash
ansible-playbook sample_playbook.yml
This will attempt to install Apache on all managed nodes listed in the all group of your inventory.
7. Creating Ansible Roles (Optional)
Ansible roles provide a way to modularize your configuration code. A role encapsulates tasks,
variables, and files related to a specific configuration area (e.g., web server setup, database
configuration).
roles/
  my_web_server/
    defaults/   # Contains default variables for the role
    tasks/      # Contains task files for the role
    vars/       # Contains additional variables specific to the role
    handlers/   # Contains handler tasks (optional)
    meta/       # Contains role metadata (optional)
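A role laid out this way can then be applied from a playbook with the roles keyword; for example (the role name comes from the structure above):

```yaml
---
- hosts: all
  become: true
  roles:
    - my_web_server
```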
8. Conclusion
This lab practical provided a foundational understanding of Ansible installation, roles, and
playbooks. You can now leverage Ansible to automate various IT tasks, improving efficiency and
consistency in your infrastructure management.
Inference:
• Ansible offers a powerful and agentless approach to infrastructure automation.
• Playbooks provide a declarative way to define desired configurations.
• Roles enable code reuse and improve maintainability of complex configurations.
Result:
By successfully completing this lab, you will have achieved the following:
• Installed Ansible on your control node: You will have verified the installation using
the ansible --version command.
• Created an inventory file: This file defines the managed nodes Ansible can interact with,
allowing you to target specific groups of machines for configuration changes.
• Written a basic playbook: You will have created a YAML file outlining the tasks Ansible
executes on managed nodes. The example playbook demonstrated installing the Apache
web server.
• Successfully executed a playbook: Running ansible-playbook sample_playbook.yml will
attempt to install Apache on all managed nodes listed in the all group of your inventory.
Verifying the installation on managed nodes confirms successful Ansible execution.
• (Optional) Gained an understanding of Ansible roles: The lab introduced the concept of
roles for modularizing configuration code. You learned about the basic directory structure of
an Ansible role.
This practical exercise equips you with the foundational skills to leverage Ansible for automating IT
tasks across your infrastructure. As you gain experience, you can create more complex playbooks,
utilize roles effectively, and explore advanced features like variables, conditionals, and loops to
automate a wider range of configurations.