
DevOps-Unit5

Ram Mohan.Kanchi
Various types of testing

Testing is a process used to identify the correctness, completeness and quality of developed computer software.

It is the process of executing a program/application under positive and negative conditions, either manually or with automated tools.

Effective software testing delivers quality software products that satisfy users' requirements, needs and expectations.

There are 2 types of testing

1.Manual testing

2.Automation testing
1.Manual testing

Manual testing is a testing technique where tests are conducted manually by human testers
without the use of automated tools or scripts. It involves the tester executing test cases,
interacting with the software system or application, and manually verifying its behavior and
functionality.

There are 3 types of Manual testing

1.White Box testing


2.Black Box Testing
3.Grey Box Testing

1.White Box testing


White-box testing, also known as clear-box testing or structural testing, is a testing technique that
examines the internal structure and implementation details of a software application.
The tester has access to the source code, architecture, and design information, allowing them to
create test cases based on this knowledge.
The objective of white-box testing is to validate the internal logic, code flow, and decision-making
processes within the software system.
2.Black Box Testing
Black-box testing is a software testing technique where the tester
evaluates the functionality and behavior of a software application
without having knowledge of its internal structure, implementation
details, or source code.
The tester treats the software as a "black box" and focuses solely on
the inputs, outputs, and observable behavior.

3.Grey Box Testing


Grey-box testing is a software testing approach that combines
elements of both black-box testing and white-box testing.
In grey-box testing, the tester has partial knowledge of the internal
structure and implementation details of the software application.

There are 2 types of Black Box testing

1.Functional testing
2.Non-functional testing
1.Functional testing
Functional testing is a type of software testing that focuses on verifying that a software
application or system functions correctly and meets the specified functional
requirements.
It involves testing the application's features, functionality, and behavior to ensure that it
performs as intended.

2. Non-functional testing
Non-functional testing is a type of software testing that focuses on evaluating the
performance, reliability, security, usability, and other non-functional aspects of a
software application or system.
It involves testing attributes that are not directly related to the functional requirements
but are equally important for ensuring the overall quality and effectiveness of the
software.

There are 3 types of functional testing

1.Unit testing
2.Integration testing
3.System testing
1.Unit testing
Unit testing is a software testing technique that focuses on testing individual units or
components of a software application in isolation.
The goal of unit testing is to verify that each unit of code, such as functions, methods,
or classes, works correctly and performs as expected.
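
The idea above can be sketched in Python's built-in unittest framework. This is a minimal illustration, not code from the source; the function `add` and the test names are invented for the example.

```python
import unittest

def add(a, b):
    """The unit under test: a small, isolated function."""
    return a + b

class TestAdd(unittest.TestCase):
    """Each test method checks one expected behavior of the unit."""

    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    # exit=False lets the script continue after the tests run
    unittest.main(argv=["example"], exit=False, verbosity=2)
```

Because the unit is tested in isolation, a failure here points directly at `add` rather than at some larger subsystem.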

2. Integration testing
Integration testing is a software testing technique that focuses on testing the
interactions and integration between different components, modules, or subsystems
of a software application.
The goal of integration testing is to uncover defects that may arise due to the
interfaces, data exchanges, or interactions between the integrated components.
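
As a hedged sketch of the same idea (the store and service classes are illustrative, not from the source), an integration test exercises two components together and checks the data exchange between them:

```python
import unittest

class InMemoryUserStore:
    """Component A: a simple data store (stands in for a database)."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

class GreetingService:
    """Component B: depends on the store; the interface between the
    two components is what the integration test exercises."""
    def __init__(self, store):
        self.store = store

    def greet(self, user_id):
        name = self.store.find(user_id)
        return f"Hello, {name}!" if name else "Hello, stranger!"

class TestStoreServiceIntegration(unittest.TestCase):
    def test_service_reads_what_store_saved(self):
        store = InMemoryUserStore()
        store.save(1, "Ada")
        service = GreetingService(store)
        # Defects at the interface (wrong key, wrong type, missing data)
        # surface here even if each component passes its unit tests.
        self.assertEqual(service.greet(1), "Hello, Ada!")
        self.assertEqual(service.greet(2), "Hello, stranger!")
```
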

3. System testing
System testing is a software testing technique that focuses on evaluating the behavior
and performance of a complete software system or application as a whole.
It is performed after integration testing and aims to validate that the system meets
the specified requirements and functions correctly in its intended environment

There are 2 types of integration testing

1.Incremental testing
2.non-incremental testing
1. Incremental testing
Incremental testing is a software testing approach that involves testing an application
or system in incremental stages, gradually adding and testing new functionality or
modules.
It follows an iterative development process where the system is built and tested
incrementally, with each increment adding new features or functionality

2. Non-incremental testing
Non-incremental testing, also known as traditional or waterfall testing, refers to a
testing approach where the entire software application or system is tested as a whole,
typically after the development phase is completed.
In this approach, the testing activities are performed sequentially and do not involve
iterative development or incremental delivery of software increments.

There are 3 types of non-functional testing

1.Performance testing
2.Usability testing
3.Compatibility testing
1.Performance testing
Performance testing is a type of software testing that evaluates the performance
characteristics of a system under specific workload conditions.
It assesses the responsiveness, stability, scalability, and resource usage of a software
application or system to determine how well it performs under expected and peak loads.

There are 5 types of Performance testing


1.Load testing
2.Stress testing
3.Scalability testing
4.Stability testing
5.Spike Testing

1.Load Testing:
Load testing is a common form of performance testing that measures the system's
performance under normal and anticipated peak loads.
It involves simulating multiple users or transactions to evaluate how the system handles
the load and whether it meets performance requirements.

2.Stress Testing: Stress testing is performed to determine the system's behavior and
performance when it is subjected to extreme or beyond-normal workload conditions.
It evaluates the system's ability to handle high volumes of data, traffic, or concurrent
users and identifies its breaking points or performance degradation thresholds.
3.Scalability Testing:
Scalability testing measures how well the system can scale up or handle increased
workloads by adding more resources, such as servers, processors, or memory.
It assesses the system's ability to maintain performance levels as the workload and
user demand increase.

4. Stability Testing
Stability testing is a type of software testing that focuses on evaluating the stability
and robustness of a system or application over an extended period. It aims to
determine whether the software can consistently perform its intended functions
without failures or crashes under varying conditions.

5.Spike Testing: Spike testing evaluates how the system handles sudden and
significant increases in workload. It tests the system's ability to handle sudden spikes
in user activity, transactions, or data volume without experiencing performance issues
or system failures.
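
A load test like the one described in point 1 can be sketched in a few lines of Python. This is a simplified illustration: `handle_request` is a stand-in for the real system under test (in practice it would be an HTTP call), and the user counts are arbitrary.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the system under test (e.g. an HTTP endpoint)."""
    time.sleep(0.01)  # simulate 10 ms of server-side work
    return "ok"

def timed_call():
    """Issue one request and return its latency in seconds."""
    start = time.perf_counter()
    handle_request()
    return time.perf_counter() - start

def load_test(concurrent_users=20, requests_per_user=5):
    """Simulate many users issuing requests at once and collect latencies."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(timed_call)
                   for _ in range(concurrent_users * requests_per_user)]
        latencies = [f.result() for f in futures]
    return {
        "requests": len(latencies),
        "avg_s": statistics.mean(latencies),
        "p95_s": sorted(latencies)[int(0.95 * len(latencies))],
    }

report = load_test()
print(report)
```

Stress, scalability and spike testing follow the same pattern but vary the load profile: ramp `concurrent_users` beyond normal levels, grow it gradually, or jump it suddenly, and watch how the latency percentiles respond.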

2. Usability testing
Usability testing is a type of software testing that focuses on evaluating the usability
and user experience of a system or application. It involves testing the software with
real users to determine how easy it is to use, how intuitive the user interface is, and
how well it meets the needs and expectations of the target audience.
3. Compatibility testing
Compatibility testing is a type of software testing that focuses on
evaluating the compatibility of a system or application across
different platforms, operating systems, browsers, devices, and
network environments.
It ensures that the software functions correctly and consistently
across a variety of configurations, providing a seamless user
experience for all users.

2. Automation testing
Automation testing is a software testing technique that involves
using specialized tools and scripts to automate the execution of
test cases and compare the actual results with the expected
results.
It aims to increase testing efficiency, improve accuracy, and
accelerate the overall testing process.

Here are key aspects of automation testing:


1.Test Automation Tools:
Automation testing utilizes various tools designed for test automation.
These tools provide features for test script development, test execution, result comparison, reporting, and
integration with other testing tools or frameworks.
Examples of popular automation testing tools include Selenium, Appium, TestComplete, and JUnit.

2.Test Script Development:


In automation testing, test scripts are created using programming languages or scripting languages such as
Java, Python, C#, or JavaScript.
The scripts define a series of steps or actions that simulate user interactions with the software application,
perform data validations, and verify expected outcomes.

3.Test Case Selection:


Not all test cases are suitable for automation. In automation testing, test cases that are repetitive, time-
consuming, and stable are typically selected for automation.
Test cases that require frequent execution or involve complex scenarios are also good candidates for
automation

4.Regression Testing:
Automation testing is widely used for regression testing, which involves retesting the software after
modifications or enhancements to ensure that existing functionality is not affected. Automated regression
tests can be executed quickly and repeatedly, providing confidence in the stability of the software after
changes.
5.Test Data Management:
Automation testing requires proper management of test data. Test data should be well-defined, consistent, and reusable.
Automation testing tools often provide features for managing test data, including data-driven testing, parameterization,
and data extraction from external sources.

6.Test Execution and Result Comparison:


Automation testing tools execute test scripts automatically and compare the actual results with the expected results. Any
failures are reported, allowing testers to identify issues and investigate them further.

7.Continuous Integration and Continuous Testing:


Automation testing is often integrated into continuous integration and continuous testing pipelines.
This allows for the automatic execution of tests whenever new code changes are introduced, ensuring that the software
remains functional and stable throughout the development process.

8.Maintenance and Updates:


Automation test scripts require regular maintenance to keep them up to date with changes in the software.
As the software evolves, test scripts may need to be modified or enhanced to reflect new functionalities or user interface
changes. Maintenance efforts are essential to ensure the reliability and effectiveness of automated tests.
9.Parallel and Distributed Testing:
Automation testing can leverage parallel execution and distributed testing to speed up the testing process.
By running multiple test scripts in parallel or distributing test execution across multiple machines or devices,
testing time can be significantly reduced.

10.Reporting and Analysis:


Automation testing tools provide reporting capabilities to generate detailed test reports, including test execution
status, passed and failed test cases, and other relevant metrics.
Test reports help in tracking testing progress, identifying trends, and analyzing the overall test coverage.
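
Several of the aspects above (test script development, data-driven testing, and result comparison) can be sketched together in one small Python example. The function and test data here are invented for illustration; in practice the data table would come from a CSV file or database, as noted in point 5.

```python
import unittest

def normalize_username(raw):
    """Unit under automation: trims and lowercases a username (illustrative)."""
    return raw.strip().lower()

# Test data kept separate from test logic, as in data-driven testing.
# Each row is (input, expected output) for result comparison.
CASES = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

class TestNormalizeUsernameDataDriven(unittest.TestCase):
    def test_cases(self):
        for raw, expected in CASES:
            # subTest reports each data row separately in the results,
            # so one failing row does not hide the others.
            with self.subTest(raw=raw):
                self.assertEqual(normalize_username(raw), expected)
```

Adding a new scenario is then just adding a row to `CASES`, with no change to the test logic, which is what makes such scripts cheap to rerun for regression testing.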
Automation of testing Pros and cons
Automated Testing is the technique for automating the manual testing process.
In this process, manual testing is replaced by the collection of automated testing tools.
Automated testing helps the software testers to check out the quality of the software.

Advantages /Pros of Automated Testing

Automated Testing has the following advantages:

1.Automated testing improves the coverage of testing as automated execution of test cases is faster
than manual execution.

2.Automated testing reduces the dependency of testing on the availability of the test engineers.

3.Automated testing provides round-the-clock coverage, as automated tests can be run at all times in a
24x7 environment.

4.Automated testing takes far fewer resources in execution as compared to manual testing.

5.It helps test engineers increase their knowledge by producing a repository of
different tests.

6.It helps in testing which is not possible without automation, such as reliability testing, stress
testing, and load and performance testing.
7.It includes all other activities like selecting the right product build, generating the right test data and analyzing
the results.

8.It acts as test data generator and produces maximum test data to cover a large number of input and expected
output for result comparison.

9.Automated testing has fewer chances of error and is hence more reliable.

10.With automated testing, test engineers have free time and can focus on other creative tasks.

Disadvantages/Cons of Automated Testing:


Automated Testing has the following disadvantages:

1.Automated testing is much more expensive than manual testing.

2.It can also be inconvenient and burdensome to decide who should automate and who should train.

3.Its use is limited in some organisations, as many organisations do not prefer test automation.

4.Automated testing also requires additionally trained and skilled people.

5.Automated testing only removes the mechanical execution of the testing process; creation of test cases still
requires testing professionals.
Selenium
Selenium is one of the most widely used open source Web UI (User
Interface) automation testing suite.

It was originally developed by Jason Huggins in 2004 as an internal tool at ThoughtWorks.

Selenium supports automation across different browsers, platforms and programming languages.

Selenium can be easily deployed on platforms such as Windows, Linux, Solaris and
Macintosh. Moreover, it supports mobile operating systems such as iOS,
Windows Mobile and Android.

Selenium supports a variety of programming languages through the use of drivers specific to
each language. Languages supported by Selenium include C#, Java, Perl, PHP, Python and
Ruby.

Currently, Selenium WebDriver is most popular with Java and C#. Selenium test scripts can be
coded in any of the supported programming languages and can be run directly in most modern
web browsers. Browsers supported by Selenium include Internet Explorer, Mozilla Firefox,
Google Chrome and Safari.
Selenium can be used to automate functional tests and can be integrated with automation test tools such
as Maven, Jenkins, & Docker to achieve continuous testing. It can also be integrated with tools such as TestNG,
& JUnit for managing test cases and generating reports.

Parts of Selenium
1.Selenium RC (Remote Control)
2.Selenium WebDriver
3.Selenium Grid
4.Selenium IDE

Selenium Features
•Selenium is an open source and portable Web testing Framework.

•Selenium IDE provides a playback and record feature for authoring tests without the need to learn a test scripting language.

•Selenium IDE helps testers record their actions and export them as a reusable script with a simple-to-understand
and easy-to-use interface.

•Selenium supports various operating systems, browsers and programming languages. Following is the list:
• Programming Languages: C#, Java, Python, PHP, Ruby, Perl, and JavaScript
• Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris
• Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Opera, Safari, etc.

•Selenium can also be integrated with testing frameworks like TestNG for application testing and generating reports.
•It also supports parallel test execution which reduces time and increases the efficiency of tests.

•Selenium can be integrated with frameworks like Ant and Maven for source code compilation.

•Selenium requires fewer resources as compared to other automation test tools.

•The WebDriver API has been included in Selenium, which is one of the most important modifications made to Selenium.

•Selenium WebDriver does not require server installation; test scripts interact directly with the browser.

•Selenium commands are categorized in terms of different classes which make it easier to understand and
implement.

•Selenium Remote Control (RC) in conjunction with the WebDriver API is known as Selenium 2.0. This version was
built to support dynamic web pages and Ajax.
Testing backend integration points
Automated testing of backend functionality such as SOAP and REST endpoints is quite cost effective.

SOAP means "Simple Object Access Protocol"

REST means “Representational State Transfer”

The tests can also be fairly easy to write with tools such as SoapUI

SoapUI can be used to write and execute tests. These tests can be run from the command line and with Maven,
which is a build tool.

SoapUI is a good example of a testing tool. Testers who build test cases get a fairly well-structured environment
for writing tests and running them interactively.

Developers can integrate test cases in their builds without necessarily using the GUI. There are Maven plugins
and command-line runners.

The command line and Maven integration are useful for people maintaining the build server too.

SoapUI is flexible and works well.


Key features of SOAPUI :
The following are the key features of SOAPUI

1.Test Creation: SOAPUI allows you to create test cases by defining the request and response structure, including
headers, parameters, and payloads. You can create complex test scenarios by adding multiple test steps and configuring
assertions to validate the response.

2.Test Execution: SOAPUI supports both manual and automated test execution. You can execute tests individually or in
bulk, and you have the flexibility to specify test data, input values, and test configurations. SOAPUI provides detailed logs
and reports to analyze the test results.

3.Data-Driven Testing: SOAPUI allows you to create data-driven tests by connecting to various data sources such as
databases, spreadsheets, or CSV files. This enables you to execute the same test case with different data sets and
validate the behavior of the web service under different conditions.

4.Mocking: SOAPUI enables you to create mock services that mimic the behavior of a real web service.
Mock services are useful when the actual service is unavailable or still under development. You can simulate different
response scenarios and test client applications without relying on the actual service.

5.Security Testing: SOAPUI provides various features to test the security aspects of web services. It supports security
protocols such as WS-Security, WS-Policy, and OAuth. You can configure authentication, encryption, and digital
signatures to test the security mechanisms of your web service.
6.Integration with Continuous Integration (CI) Tools: SOAPUI integrates well with CI tools like Jenkins, allowing you
to automate the execution of test suites as part of your CI/CD (Continuous Integration/Continuous Deployment)
pipeline. This helps ensure that your web services are continuously tested as new changes are made.
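
Two of these features, mocking (point 4) and request/response assertions (point 1), can be illustrated without SoapUI itself, using only the Python standard library. This is a hedged sketch: the `/health` endpoint and its JSON payload are invented for the example, and the tiny in-process HTTP server stands in for a mock service.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockServiceHandler(BaseHTTPRequestHandler):
    """A mock REST endpoint standing in for the real backend service."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "up"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 asks the OS for any free port, so the test is self-contained.
server = HTTPServer(("127.0.0.1", 0), MockServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "test step": send the request and capture status and payload.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status_code = resp.status
    payload = json.loads(resp.read())

server.shutdown()

# The "assertions": validate the response, as a SoapUI test step would.
assert status_code == 200
assert payload == {"status": "up"}
```

SoapUI packages the same request/assert cycle behind a GUI and makes it runnable from Maven or the command line, which is what makes it fit into CI pipelines.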

Test-Driven Development(TDD)
Test-driven development (TDD) is a software development methodology that emphasizes writing automated tests
before implementing the actual code.

TDD follows a cycle of writing tests, implementing code to pass those tests, and then refactoring the code as
needed.

The primary goal of TDD is to ensure that the code is thoroughly tested and meets the requirements defined by the
tests.

Here's a step-by-step overview of the TDD process:

1.Write a test: In TDD, you start by writing a test that describes the behavior or functionality you want to
implement. This test initially fails because the corresponding code doesn't exist yet. The test should be concise,
focused, and written from the perspective of a user or client.
2.Run the test: Execute the test to verify that it fails, as expected. This ensures that the test is correctly assessing the absence
of the functionality you are about to implement.

3.Write the minimum code: Write the minimum amount of code required to make the test pass. The code might not be
optimal or complete at this stage, but it should fulfill the test's requirements.

4.Run the test again: Execute the test suite, including the new test, to ensure that the newly implemented code passes the
test. If the test fails, iterate on the code until it passes.

5.Refactor the code: Refactor the code to improve its design, remove duplication, enhance readability, and optimize
performance while ensuring that all tests still pass. The refactoring step is crucial to maintain code quality and keep the
codebase clean and maintainable.

6.Repeat: Repeat the process by writing the next test that describes the next desired functionality, running the test (which
should fail), implementing the code to pass the test, and refactoring as needed. This iterative cycle continues until all desired
features have been implemented and thoroughly tested.
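
The cycle above can be condensed into one small Python example. The leap-year function is an illustrative choice, not from the source; the point is the order of work, with the test written before the implementation.

```python
import unittest

# Step 1: the test is written first, describing the desired behavior.
# When only this class exists, running it fails (red): is_leap_year
# is not defined yet, which is exactly what step 2 should observe.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_leap_unless_divisible_by_400(self):
        self.assertFalse(is_leap_year(1900))
        self.assertTrue(is_leap_year(2000))

# Steps 3-4: the minimum code needed to make the tests pass (green).
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 5 (refactor) would now improve naming or structure while
# rerunning the suite after every change to keep it green.
```

Step 6 then repeats the loop: the next failing test drives the next piece of behavior.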

Advantages of TDD
By following TDD, developers can gain several benefits:

1.Higher test coverage: TDD encourages developers to write tests for every piece of code they write, resulting in
comprehensive test coverage.
2.Improved code quality: TDD promotes writing modular, loosely coupled code that is easier to understand,
maintain, and refactor.

3.Faster feedback: TDD provides immediate feedback on the code's correctness by running the tests
continuously, helping catch bugs early in the development process.

4.Design improvement: TDD forces developers to think about the design and interface of their code upfront,
leading to better overall software architecture.

5.Reduced debugging time: With a solid suite of tests, debugging becomes easier as failing tests pinpoint the
exact source of the problem.

TDD is often associated with unit testing, where individual units of code (such as functions or methods) are
tested in isolation. However, TDD can also be applied to other types of testing, such as integration testing or
acceptance testing, depending on the project's needs.
REPL-driven development
REPL-driven development, also known as Read-Eval-Print-Loop-driven development, is an interactive style of programming.

This style of development is very common when working with interpreted languages
such as Lisp, Python, Ruby and JavaScript.

REPL is a programming environment that allows developers to interactively write and test
code snippets, where they can input expressions, have them evaluated and the results
printed back.

In REPL-driven development, developers typically write code incrementally, testing and
verifying its behavior in the REPL environment. They can experiment with different code
snippets, evaluate them instantly, and receive immediate feedback.

Here are some key characteristics and benefits of REPL-driven development:

1.Iterative development: Developers can quickly write and test small pieces of code,
getting immediate feedback on their changes. This iterative approach allows for faster
development cycles.
2.Exploration and experimentation: REPLs provide a playground for developers to experiment with code snippets, try
out different ideas, and explore the behavior of the program in real-time. It encourages a more interactive and
exploratory style of development.

3.Incremental problem-solving: Developers can break down complex problems into smaller sub-problems and solve
them one step at a time. They can validate each step's correctness and build upon it iteratively.

4.Rapid prototyping: REPL-driven development allows developers to quickly sample ideas and test suggestions without
the need for writing full-fledged programs. It enables faster validation of concepts and reduces the time required to see
the results.

5.Tight feedback loop: The immediate feedback loop provided by a REPL environment helps in catching errors early,
debugging code effectively, and gaining a better understanding of how code behaves.

6.Learning and exploration: REPLs can be valuable learning tools, especially for newcomers to a programming language
or framework. They provide an interactive environment to experiment, ask "what if" questions, and understand how
different language features work.

7.Language exploration: REPL-driven development is particularly popular in languages like Python, Ruby, Lisp, and
JavaScript, where REPL environments are readily available and integrated into development tools. These languages'
interactive nature aligns well with the REPL-driven workflow.
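
A condensed Python session illustrates the workflow. The comments reproduce what a developer would type at the `>>>` prompt; each step is inspected before the next, and only once the pieces behave as expected are they composed into a function. The example itself is invented for illustration.

```python
# >>> words = "the quick brown fox".split()
words = "the quick brown fox".split()

# Inspect the intermediate result before going further:
# >>> [len(w) for w in words]    evaluates to [3, 5, 5, 3]
lengths = [len(w) for w in words]

# Satisfied with each step, compose them into a small function.
def longest_word(text):
    """Return the longest word in the text (first one on a tie)."""
    return max(text.split(), key=len)

# >>> longest_word("the quick brown fox")
result = longest_word("the quick brown fox")
print(result)  # quick
```

The finished function is then moved from the REPL into the codebase, often together with a unit test that captures what was just verified interactively.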
JavaScript testing
JavaScript testing refers to the process of verifying and validating the correctness, functionality, and behavior of JavaScript
code.

Testing is crucial in software development as it helps identify bugs, ensure code quality, and maintain the expected behavior
of the application.

There are several approaches and tools available for testing JavaScript code, including unit testing, integration testing, and
end-to-end testing.

1.Unit Testing: Unit testing involves testing individual units or components of JavaScript code in isolation. It focuses on testing
small, self-contained parts of the code, such as functions or classes, to ensure they behave as expected. Popular unit testing
frameworks for JavaScript include Jest, Mocha, and Jasmine. These frameworks provide tools for defining test cases, running
tests, and asserting expected outcomes.

2.Integration Testing: Integration testing involves testing the interaction and integration between different components or
modules of a JavaScript application. It ensures that the components work correctly when combined and that the
dependencies between them are handled properly. Frameworks like Jest and Mocha can also be used for integration testing
by setting up a test environment and running tests that exercise multiple components.
3.End-to-End Testing: End-to-end testing, also known as functional testing, involves testing the complete flow of an
application from start to finish. It simulates user interactions and verifies that the application functions as expected in real-
world scenarios. Tools like Cypress and Puppeteer are commonly used for end-to-end testing in JavaScript. These tools
allow you to automate browser interactions, simulate user behavior, and assert the expected outcomes.

4.Mocking and Stubbing: JavaScript testing often involves mocking or stubbing dependencies to isolate the code being
tested from external dependencies. Mocking frameworks like Sinon.js or Jest provide facilities to create mock objects or
functions that simulate the behavior of external dependencies. This allows for controlled and predictable testing
environments.

5.Test Runners and Frameworks: Test runners provide the infrastructure to execute tests and generate test reports.
Popular test runners for JavaScript include Jest, Karma, and Mocha. These runners integrate with testing frameworks and
provide features like test result reporting, code coverage analysis, and test parallelization.

6.Continuous Integration and Testing: Continuous Integration (CI) is a development practice that involves regularly
integrating code changes and running automated tests to detect issues early. CI tools like Jenkins, Travis CI, or GitHub
Actions can be set up to automatically run tests whenever new code is pushed or merged into a repository. This ensures
that tests are executed regularly and helps maintain code quality throughout the development process.

7.Assertion Libraries: Assertion libraries are used in testing frameworks to define expected outcomes and check the actual
results against them. Some popular assertion libraries for JavaScript include Chai, Expect.js, and Node.js' built-in assert
module. These libraries provide a range of assertion styles and matchers to make assertions more readable and expressive.
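
The mocking idea in point 4 is language-agnostic; here it is sketched with Python's `unittest.mock` instead of Sinon.js or Jest. The weather client and advice function are invented for the example.

```python
from unittest.mock import Mock

class WeatherClient:
    """Real dependency: would make a network call in production."""
    def current_temp(self, city):
        raise NotImplementedError("talks to a remote API")

def clothing_advice(client, city):
    """Code under test: depends on the external service."""
    temp = client.current_temp(city)
    return "coat" if temp < 10 else "t-shirt"

# Stub the dependency: no network, fully controlled behavior.
# spec=WeatherClient makes the mock reject calls the real class lacks.
fake_client = Mock(spec=WeatherClient)
fake_client.current_temp.return_value = 5

advice = clothing_advice(fake_client, "Oslo")
print(advice)  # coat

# Mocks also record how they were used, so the interaction itself
# can be asserted, just as with Sinon.js spies.
fake_client.current_temp.assert_called_once_with("Oslo")
```

This is what gives the "controlled and predictable testing environments" described above: the test chooses the temperature instead of depending on a live API.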
Virtualization stacks
Virtualization stacks are also known as virtualization software or hypervisors.

Virtualization stacks are software technologies that enable the creation and management of virtual machines
(VMs) or virtualized environments on a physical host machine.

These stacks allow multiple operating systems or instances to run concurrently on the same hardware, providing
isolation, resource allocation, and flexibility.

Here are some commonly used virtualization stacks:

1.VMware vSphere: VMware vSphere is a complete virtualization platform that includes the ESXi hypervisor and
vCenter Server for centralized management. It supports a wide range of virtualization features, such as live
migration, high availability, and distributed resource scheduling. VMware also offers other virtualization products
like VMware Workstation and VMware Fusion for desktop virtualization.

2.Microsoft Hyper-V: Hyper-V is Microsoft's native hypervisor technology built into Windows Server and
Windows 10 Pro/Enterprise. It provides virtualization capabilities for servers and desktops, supporting features
like live migration, high availability, and clustering. Hyper-V integrates well with other Microsoft products and
offers management tools like System Center Virtual Machine Manager (SCVMM).
3.KVM (Kernel-based Virtual Machine): KVM is a virtualization solution for Linux that utilizes the Linux kernel's virtualization
capabilities. It turns the host operating system into a hypervisor and allows running multiple VMs with different operating
systems. KVM is open-source and widely used, and it can be managed using tools like libvirt and virt-manager.

4.Xen: Xen is an open-source hypervisor that supports paravirtualization and hardware-assisted virtualization. It provides
strong isolation between virtual machines and allows running different operating systems concurrently. Xen is used in various
commercial and open-source virtualization solutions, including Citrix Hypervisor (formerly XenServer) and Oracle VM.

5.Proxmox VE: Proxmox Virtual Environment (VE) is an open-source virtualization platform that combines KVM and container-
based virtualization using LXC (Linux Containers). Proxmox VE offers a web-based management interface and includes
features like live migration, high availability, and backup/restore capabilities.

6.Docker: While not a traditional virtualization stack, Docker is a popular containerization platform that allows applications to
run in lightweight, isolated containers. Docker containers share the host OS kernel, making them more lightweight and
portable compared to VMs. Docker provides tools for container image creation, management, and deployment.

These are just a few examples of virtualization stacks available today. Each stack has its own features, management tools, and
compatibility requirements. The choice of a virtualization stack depends on factors such as the intended use case, hardware
compatibility, management requirements, and the operating systems or applications to be virtualized.
Code execution at the client
Several of the configuration management systems described here allow you to reuse
the node descriptors to execute code on matching nodes. This is sometimes convenient.
For example, you may want to run a directory-listing command on all HTTP servers facing the
public internet, possibly for debugging purposes.

In the Puppet ecosystem, this command execution system is called "Marionette Collective"
(or "MCollective").

It is pretty easy to try out the various deployment systems using Docker to manage the
base OS where we will do our experiments. This is a time-saving method that can be used
when developing and debugging deployment code that is specific to a particular
deployment system. This code will then be used for deployments on physical or
virtual machines.

First, we will try each of the different deployment systems in the local modes that are
usually possible. Further down the line, we will see how we can simulate the complete
deployment of a system with several containers that together form a virtual cluster.

It should be noted ,however , that Docker has some limitations when it comes to matching
A full os.
Code execution at the client in a DevOps context typically refers to running code or scripts on the client machines or devices.
This can be useful for various purposes such as deploying software updates, performing configurations, or executing specific
tasks on client machines as part of a DevOps workflow.

To execute code at the client in a DevOps scenario, you can follow these general steps:

1.Identify the client machines: Determine the target machines where you want to execute the code.

2.Establish a communication channel: Set up a communication channel between the client machines and your deployment
infrastructure. This can be achieved through various mechanisms such as SSH, remote management tools, or agent-based
systems.

3.Prepare the code or script: Develop or configure the code or script that needs to be executed on the client machines. This
code can be written in a programming language suitable for the task, or it could be a script in a scripting language like Python,
PowerShell, or Bash.

4.Package the code: Bundle the code or script along with any required dependencies into a deployable package. This might
involve creating an executable file, a package file, or a container image, depending on the target environment.

5.Distribute the code: Deploy the packaged code to the client machines. This can be done through various methods such as
manual distribution, using configuration management tools like Ansible, or leveraging continuous integration/continuous
deployment (CI/CD) pipelines.
6.Execute the code: Trigger the execution of the code on the client machines. This could be done manually or by
using automation tools and scripts. Ensure that the execution is carried out with the necessary permissions and
access rights.

7.Monitor and manage the execution: Monitor the execution of the code on the client machines to track
progress, identify errors or issues, and gather any required logs or metrics. Use appropriate logging and
monitoring mechanisms to capture relevant information.

8.Handle success and failure scenarios: Handle the outcome of the code execution on the client machines. This
may involve logging success or failure, reporting metrics, rolling back changes if necessary, or triggering further
actions based on the results.
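The distribution-and-execution loop above can be sketched in a few lines of Python. This is a minimal illustration, not a production tool: the host names are made up, the default SSH-based runner assumes key-based authentication is already in place, and the runner is injectable so the demo line at the bottom runs locally without opening any SSH connection.

```python
import subprocess

def run_on_hosts(hosts, command, runner=None):
    """Execute `command` on each host and collect (returncode, stdout).

    `runner` builds the argv for one host; the default wraps the
    command in ssh (hypothetical setup, key-based auth assumed).
    """
    if runner is None:
        runner = lambda host: ["ssh", host, command]
    results = {}
    for host in hosts:
        proc = subprocess.run(runner(host), capture_output=True, text=True)
        results[host] = (proc.returncode, proc.stdout.strip())
    return results

# For a local demonstration we substitute a runner that merely echoes
# the host name instead of opening a real SSH connection.
demo = run_on_hosts(["web1", "web2"],
                    "ls /var/www",
                    runner=lambda h: ["echo", h])
```

Swapping the runner back to the SSH default turns the same loop into the "execute the code" and "monitor the execution" steps described above.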
Puppet master and agents
Puppet is a configuration management technology to manage the infrastructure on physical or virtual machines.
It is an open-source software configuration management tool developed using Ruby.

Puppet is a configuration management tool developed by Puppet Labs in order to automate
infrastructure management and configuration.

Puppet follows the client-server model: one machine in a cluster acts as the server, known as the puppet master,
while the other machines act as clients, known as puppet agents, running on the managed nodes.

Puppet consists of a client/server solution, where the client nodes check in regularly with the Puppet server to see
whether anything needs to be updated in the local configuration.

The puppet client decides that it is time to check in with the puppet master to discover any new configuration changes.
This can be triggered by a timer or by manual intervention from an operator at the client. The channel of communication
between the puppet client and master is normally encrypted using SSL.

The puppet client presents its credentials so that the puppet master knows exactly which client is calling.

The puppet client then runs the necessary code so that its local configuration matches the one
decided on by the puppet master.
Puppet Master:
The Puppet master is the central control node in the Puppet architecture. It acts as the server and is responsible for managing
the configuration data and policies. The Puppet master stores manifests and modules that define how the systems should be
configured.
Key responsibilities of the Puppet master include:

1.Storing and managing the Puppet codebase: The Puppet master holds the main repository of Puppet manifests, modules,
and configuration data. These artifacts define the desired state of the systems being managed.

2.Compiling manifests: The Puppet master compiles Puppet manifests, which are written in Puppet's declarative language,
into a catalog of instructions that specify how the systems should be configured.

3.Distributing catalogs: Once the Puppet master compiles the catalogs, it distributes them to the Puppet agents. These
catalogs contain the instructions for each agent to enforce the desired configuration state.

4.Handling agent requests: The Puppet master listens for requests from Puppet agents and responds with the appropriate
catalogs or information. It also manages the communication and synchronization between agents and itself.
Puppet Agents:
Puppet agents are the client nodes in the Puppet architecture. They run on the systems you want to manage and enforce
the desired configuration defined by the Puppet manifests and modules.

Key responsibilities of Puppet agents include:

1.Requesting catalogs: Puppet agents periodically request catalogs from the Puppet master. The catalogs contain the
configuration instructions that the agents should apply to their respective systems.

2.Applying configurations: Upon receiving the catalog from the Puppet master, the Puppet agent applies the
configurations defined in the catalog. This involves making changes to system settings, installing or removing software
packages, managing files, and executing other tasks as specified in the manifest.

3.Reporting to the Puppet master: After applying the configurations, Puppet agents report back to the Puppet master.
They provide information about the success or failure of the configuration process, which can be used for monitoring and
troubleshooting.

4.Running in an autonomous mode: Puppet agents can also run in an autonomous mode known as Puppet apply. In this
mode, they can apply Puppet manifests directly without depending on a Puppet master. This is useful for ad-hoc
configuration changes or testing.
The Puppet master and agents work together to ensure that the desired state of systems is achieved and maintained. The
master manages the configuration policies, while the agents apply those policies on the client systems, enabling
centralized configuration management and automation.
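As a concrete illustration of the manifests a Puppet master compiles into catalogs, a minimal example follows. This is a sketch under assumptions: the `httpd` package and service names presume a Red Hat-family node.

```puppet
# Hypothetical manifest: keep Apache installed and its service running.
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}
```

An agent receiving the catalog compiled from this manifest installs the package and starts the service if they are not already in that state; on later runs nothing changes, illustrating the desired-state model described above.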
Pros and Cons of the Puppet ecosystem

Puppet has a large community, and there are a lot of resources on the internet for Puppet.
There are many different modules, and often an existing module written for your component
can be modified to suit your needs.

Puppet requires a number of dependencies on the puppet client machines, which sometimes gives rise to problems.
The puppet agent requires a Ruby runtime, so you need to arrange for one.

Puppet configuration can also be complex to write and test.


Ansible
Ansible is an open-source automation tool that allows you to automate configuration management, application
deployment, and IT orchestration tasks.
It focuses on simplicity, ease of use, and agentless operation, making it popular among DevOps teams for managing large-
scale infrastructure.

Key features and concepts of Ansible include:

1.Playbooks: Ansible uses Playbooks, which are written in YAML format, to define automation tasks and desired
configurations. Playbooks describe a set of steps that should be executed on target systems to achieve a desired state.

2.Inventory: Ansible uses an Inventory file that lists the target systems or groups of systems (referred to as hosts) that
Ansible manages. This file defines the hosts' connection details, such as IP addresses or hostnames.

3.Modules: Ansible provides a wide range of built-in modules that perform specific tasks on the target hosts. Modules can
handle tasks like file management, package installation, service management, executing commands, or making API calls.
Modules are executed by Ansible on the target hosts to bring them to the desired state.

4.Play: A Play in Ansible is a combination of one or more tasks that target a specific group of hosts defined in the
Inventory. Each Play in a playbook targets a set of hosts and defines the tasks to be executed on those hosts.
5.Roles: Roles in Ansible provide a way to organize and reuse tasks, handlers, and variables across multiple
playbooks. Roles encapsulate a specific functionality or configuration and can be shared and reused across projects.

6.Ad-hoc Commands: Ansible allows you to run ad-hoc commands on target hosts without the need for writing a
playbook. Ad-hoc commands are useful for performing quick tasks, like executing a command, copying files, or
restarting services.

7.Idempotent Execution: Ansible follows an idempotent execution model, meaning that if a task has been executed
once and the system is already in the desired state, subsequent runs of the playbook will not make unnecessary
changes. This ensures predictable and consistent configurations.

8.Ansible Galaxy: Ansible Galaxy is a hub for sharing and discovering reusable Ansible roles. It provides a repository
of pre-built roles contributed by the Ansible community, allowing you to leverage existing configurations and best
practices.

Ansible's agentless architecture allows it to operate over SSH or PowerShell connections, making it easier to manage
heterogeneous infrastructure. It offers powerful capabilities for automating infrastructure management, application
deployment, and continuous delivery pipelines, enabling efficient and streamlined DevOps practices.
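A tiny inventory plus an ad-hoc command illustrates the agentless model. The host names and the group below are assumptions for illustration only.

```ini
# inventory.ini -- hypothetical host group
[webservers]
web1.example.com
web2.example.com
```

An ad-hoc ping of that group would then be run as `ansible webservers -i inventory.ini -m ping`, which connects to each host over SSH and reports success without any agent installed on the targets.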
Ansible with the Docker method

1.Create a Dockerfile containing the following statement:

FROM williamyeh/ansible:centos7

2.Build the Docker image using the following command:

docker build .

The above command will download the base image and build a new Docker image.

3.Run the container using the following command:

docker run -v $(pwd)/ansible:/ansible -it <hash> bash

Now Ansible is available.

The -v option is used to make parts of the host file system visible to the Docker guest container.
The files will be visible in the /ansible directory in the container.
playbook.yml

- hosts: localhost
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest
Running the Ansible playbook

cd /ansible
ansible-playbook -i inventory playbook.yml --connection=local
Output

PLAY [localhost] ***************************************************************
GATHERING FACTS ****************************************************************
ok: [localhost]
TASK: [ensure apache is at the latest version] *********************************
ok: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
Deployment tools
Chef
Chef is a powerful configuration management and automation tool that helps with deploying and managing infrastructure.

It allows you to define the desired state of your infrastructure and applications as code, making it easier to manage and scale
your systems.

Here are a few deployment tools and concepts related to Chef:

1.Chef Workstation
2. Chef Client
3. Chef Server
4.Knife
5.Chef Solo
6.ChefDK
7.Test Kitchen
8.Chef Automate

1.Chef Workstation: This is the primary tool used by Chef developers. It provides a set of command-line utilities and
a development environment for authoring, testing, and maintaining Chef cookbooks.
2.Chef Client: The Chef Client is an agent that runs on the target systems and is responsible for configuring and
maintaining the desired state defined in your Chef cookbooks. It communicates with the Chef Server to retrieve the
necessary configuration data.

3.Chef Server: The Chef Server acts as the central repository for storing and managing cookbooks, environments, roles,
and other configuration data. It allows you to store and version control your infrastructure code and distribute it to the Chef
Clients.

4.Knife: Knife is a command-line tool that comes with Chef Workstation. It allows you to interact with the Chef Server,
perform administrative tasks, and manage cookbooks, roles, environments, and nodes.

5.Chef Solo: Chef Solo is a standalone mode of Chef that doesn't require a Chef Server. It is useful for smaller
environments or when you don't need the centralized management provided by the Chef Server.

6.ChefDK: ChefDK is a development kit that includes all the necessary tools and libraries for Chef development. It
provides a bundled installation of Chef Workstation, along with additional utilities and testing frameworks.

7.Test Kitchen: Test Kitchen is a tool that allows you to test your cookbooks on various platforms and configurations using
virtual machines or cloud providers. It helps ensure the desired state is achieved across different environments before
deploying to production.
8.Chef Automate: Chef Automate is a commercial offering from Chef that provides a complete end-
to-end solution for managing infrastructure automation. It combines the capabilities of Chef Server,
Chef Workstation, and additional features for compliance, visibility, and reporting.

These are some of the key deployment tools and concepts related to Chef. Using these tools, you can
streamline your infrastructure deployment and management processes while ensuring consistency and
repeatability in your configurations.
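For a feel of Chef's Ruby-based DSL, a minimal recipe might look as follows. This is a hedged sketch: the cookbook path in the comment and the `httpd` package name (Red Hat-family) are assumptions.

```ruby
# Hypothetical recipe, e.g. cookbooks/web/recipes/default.rb:
# install Apache and make sure its service is enabled and started.
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end
```

The Chef Client pulls recipes like this from the Chef Server (or applies them locally via Chef Solo) and converges the node toward the declared state.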
Salt Stack (or) Salt
Salt Stack, commonly known as Salt, is an open-source configuration management and remote execution platform.
It is designed to automate the management and deployment of infrastructure, applications, and systems across a network.
Salt uses a client-server architecture and operates on a master-minion model.

Here are some key features and components of Salt Stack:

1.Master
2.Minion
3.Salt State System
4.Remote Execution
5.Salt Pillar
6. Event System
7. Salt Cloud

1.Master: The Salt master is the central control node that manages the Salt infrastructure. It coordinates communication
and configuration management tasks with the Salt minions.

2.Minion: A Salt minion is an agent installed on the systems or devices that you want to manage. Minions communicate
with the Salt master to receive instructions and execute tasks.
3.Salt State System: Salt uses a declarative approach to configuration management called the Salt State System.
Administrators define the desired state of the systems using YAML or Jinja templates. The Salt master applies these
states to the minions, ensuring that the systems conform to the desired configuration.

4.Remote Execution: Salt provides a powerful remote execution framework that allows administrators to execute
commands and run scripts on multiple systems simultaneously. This feature enables efficient management and
orchestration of tasks across a network.

5.Salt Pillar: Salt Pillar is a system for securely storing and accessing sensitive data, such as passwords, encryption
keys, or other configuration details. Pillar data can be encrypted and distributed to the minions on an as-needed basis.

6.Event System: Salt includes an event-driven architecture that enables real-time communication between the master
and minions. Events can be triggered by various actions, such as a state change, and allow for dynamic responses or
automation.

7.Salt Cloud: Salt Cloud is an extension of Salt that provides integration with various cloud providers. It allows for
the provisioning and management of cloud resources through Salt commands and configurations.

Salt Stack is widely used in DevOps and system administration workflows. It offers scalability, flexibility, and a rich
set of features to manage large-scale infrastructure efficiently. The Salt community actively contributes to its
development, ensuring continuous improvement and support.
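A minimal Salt state file ties several of these concepts together. The file name and the `httpd` package below are assumptions for illustration; a state like this would be applied from the master with `salt '*' state.apply apache`.

```yaml
# /srv/salt/apache.sls -- hypothetical state: install and run Apache
apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - require:
      - pkg: apache
```

The master compiles this YAML state and sends it to the minions, which install the package and keep the service running, re-converging on each highstate run.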
ON THE SALT MASTER

curl -L https://bootstrap.saltstack.com -o install_salt.sh

sudo sh install_salt.sh -M

ON EACH SALT MINION

curl -L https://bootstrap.saltstack.com -o install_salt.sh

sudo sh install_salt.sh

STEP1
Open the Salt master and type the following three commands in a terminal:

apt update

curl -L https://bootstrap.saltstack.com -o install_salt.sh

sudo sh install_salt.sh -M
STEP2

Open SALT MINION1 and type the following command:

sudo sh install_salt.sh

STEP3
Open SALT MINION2 and type the following command:

sudo sh install_salt.sh

STEP4
Open the Salt master and find its IP address:

root@master# ip a

inet 192.168.112.12/24

STEP5
Open SALT MINION1 and type the following commands:

root@server1# vi /etc/salt/minion
master: 192.168.112.12

Restart minion1:

root@server1# systemctl restart salt-minion

STEP6
Open SALT MINION2 and type the following commands:

root@server2# vi /etc/salt/minion

master: 192.168.112.12

Restart minion2:

root@server2# systemctl restart salt-minion

STEP7
Open the Salt master and type the following commands in a terminal:

root@master# vi /etc/salt/master
root@master# salt-key -L

Accepted Keys:
Denied Keys:
Unaccepted Keys:
server1
server2
Rejected Keys:

Accept all keys:

root@master# salt-key -A

The following keys are going to be accepted:

Unaccepted Keys:
server1
server2
Proceed? [n/Y] y

Key for minion server1 accepted.
Key for minion server2 accepted.

root@master# salt-key -L
Accepted Keys:
server1
server2
STEP8

Testing

example1
root@master# salt '*' test.ping

server1:
    True

server2:
    True

example2
root@master# salt '*' cmd.run "ip a | grep 192"

server1:
    inet 192.168.112.10

server2:
    inet 192.168.112.11
Docker
Docker is an open-source platform that allows developers to automate the deployment and management of applications within lightweight,
isolated containers.

Containers are self-contained units that encapsulate an application and its dependencies, enabling consistent and reliable deployment across
different environments.

Here are some key concepts and components related to Docker:

1. Docker Engine
2. Docker Image
3. Container
4. Docker Registry
5. Docker Compose
6. Docker Swarm
7. Docker CLI

1.Docker Engine: Docker Engine is the core component of Docker. It is responsible for building, running, and managing containers. It consists of
the Docker daemon (server) and the Docker client, which interact with each other using a REST API.

2.Docker Image: A Docker image is a lightweight, standalone, and executable package that contains everything needed to run an application,
including the code, runtime, libraries, and dependencies.
Images are built from a set of instructions called a Dockerfile, which defines the configuration and setup of the container.
3.Container: A container is an instance of a Docker image that runs as an isolated process on the host machine.
Containers provide process-level isolation, meaning each container has its own filesystem, network, and resources,
while sharing the host machine's kernel.
Containers can be started, stopped, and managed using Docker commands or through orchestration platforms like
Kubernetes.

4.Docker Registry: Docker Registry is a centralized repository for storing and sharing Docker images. The default
public registry is Docker Hub, which hosts a vast number of publicly available images. Organizations can also set
up private registries to store and distribute their own Docker images securely.

5.Docker Compose: Docker Compose is a tool that simplifies the management of multi-container applications. It
allows developers to define and orchestrate multiple containers as a single application using a YAML file.
With Compose, you can specify the relationships and dependencies between containers, making it easier to set up
and manage complex environments.

6.Docker Swarm: Docker Swarm is Docker's native clustering and orchestration solution for creating and managing
a swarm of Docker nodes. It enables the deployment of services across a cluster of Docker hosts, providing high
availability, scaling, and load balancing capabilities.

7.Docker CLI: The Docker Command Line Interface (CLI) is the primary interface for interacting with Docker. It
provides a set of commands for building, running, managing, and inspecting Docker containers and images. The
Docker CLI allows you to perform various operations, such as starting containers, executing commands inside
containers, managing networks, and more.
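As an illustration of Docker Compose, the sketch below defines a hypothetical two-service application; the images, port mapping, and password are placeholder assumptions, not part of any real deployment.

```yaml
# docker-compose.yml -- hypothetical web + database stack
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"        # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # placeholder secret
```

Running `docker compose up -d` in the directory containing this file starts both containers and honours the declared dependency order.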
Docker has revolutionized application deployment by providing a lightweight and portable solution.
It promotes the concept of "containerization," which allows developers to package applications and their dependencies
into standardized containers, ensuring consistent behavior across different environments.
This makes it easier to build, ship, and run applications, improving scalability, portability, and developer productivity.

Features of Docker

1.Docker has the ability to reduce the size of development by providing a smaller footprint of the operating system via
containers.

2.With containers, it becomes easier for teams across different units, such as development, QA and Operations to work
seamlessly across applications.

3.You can deploy Docker containers anywhere, on any physical and virtual machines and even on the cloud.

4.Since Docker containers are pretty lightweight, they are very easily scalable.

Example Docker Application

 Install docker desktop

 Open a command prompt and type the following command:

docker -v
o/p docker version 20.10.14

Make sure Docker is running.

Now write a Java program:

public class Test
{
    public static void main(String args[])
    {
        System.out.println("Docker by Priw students");
    }
}

Save the above with the name Test.java

Now Dockerize the above Java program.

To Dockerize the Java program we need to create a Docker file and save it with the
name Dockerfile:

FROM openjdk:8
WORKDIR /app
COPY . /app/
RUN javac Test.java
ENTRYPOINT ["java", "Test"]

Save the above file with the name Dockerfile.

Build the Docker image

The "docker build" command is used to create a Docker image:

docker build -t ram/docker-helloworld .

Now the Docker image is built successfully.

Run the Docker image in a container

The "docker run" command is used to start a Docker container:

docker run ram/docker-helloworld

o/p Docker by Priw students


Difference and Comparisons among Puppet, Ansible,
Chef, Salt, and Docker
Puppet, Ansible, Chef, Salt, and Docker are all popular tools used in the field of DevOps and system
administration. While they serve similar purposes, there are differences in their approaches and
functionalities.
Let's explore the key distinctions among these tools:
Puppet:

• Puppet is a configuration management tool that follows a declarative approach.

• It allows administrators to define the desired state of systems using Puppet's domain-specific
language (DSL) called Puppet DSL.

• Puppet uses a client-server architecture, where the Puppet master distributes configurations and
instructions to Puppet agents (clients).

• Puppet focuses on ensuring the desired state of systems by managing configuration files, packages,
services, and other resources.

• It regularly enforces the defined configurations to keep the systems in the desired state.

• Puppet has a large and mature ecosystem with a wide range of community-contributed modules
available for managing various technologies and platforms.
Ansible:
• Ansible is an automation tool that follows an agentless and procedural approach.

• It uses a simple YAML-based language to define automation tasks.

• Ansible connects to remote systems over SSH or WinRM and executes tasks directly without requiring any agent
installation on the target systems.

• Ansible focuses on task execution and orchestration.

• It allows administrators to define a sequence of tasks to be executed on multiple systems simultaneously.

• Ansible has a "push" model, where the Ansible control machine pushes the instructions to the target systems and
performs the necessary configurations and deployments.
Chef:
• Chef is a configuration management tool that follows a procedural and "Infrastructure as Code"
approach.

• It allows administrators to define configurations using a Ruby-based DSL.

• Chef uses a client-server architecture similar to Puppet, where the Chef server distributes
configurations and instructions to Chef clients (nodes).

• Chef focuses on automating the entire lifecycle of infrastructure management, including
configuration, deployment, and continuous integration.

• Chef provides a flexible and extensible framework, allowing administrators to define complex
configurations and workflows.
Salt:
• Salt (SaltStack) is a configuration management and remote execution tool.

• It uses a client-server architecture with a master-minion model, similar to Puppet.

• Salt follows a declarative approach like Puppet, using YAML or Jinja templates to
define system states.

• It also provides a powerful remote execution framework for executing commands
and running scripts on minions.

• Salt emphasizes real-time communication and event-driven automation.

• It allows administrators to react to events and dynamically manage systems.

• Salt also includes features like Salt Pillar for storing sensitive data and Salt Cloud
for integrating with cloud providers.
Docker:
• Docker is a containerization platform that focuses on application deployment and isolation.

• It allows developers to package applications and their dependencies into lightweight, portable containers.

• Docker follows a container-based virtualization approach, where applications run in isolated containers
sharing the host machine's kernel.

• Docker provides tools for building, managing, and distributing containers.

• It enables consistent deployment across different environments, simplifies dependency management, and
promotes scalability and reproducibility.

While Puppet, Ansible, Chef, Salt, and Docker can be used together in a DevOps toolchain, each tool has its
strengths and is suitable for different use cases and preferences. Factors like ease of use, scalability, flexibility,
community support, and the nature of your infrastructure can influence the choice of tool.
Comparison

Terminology comparison (table)

Technology comparison (table)

end
