
DevOps-UNIT – V

(Testing Tools and automation: Various types of testing, Automation of testing:
pros and cons, Selenium - Introduction, Selenium features, JavaScript testing,
Testing backend integration points, Test-driven development, REPL-driven
development; Deployment of the system: Deployment systems, Virtualization
stacks, code execution at the client, Puppet master and agents, Ansible,
Deployment tools: Chef, Salt Stack and Docker)

What is testing?
Software testing is a method of checking whether the actual software product
matches the expected requirements and of ensuring that the software product is
defect-free. It involves executing software/system components using manual or
automated tools to evaluate one or more properties of interest. The purpose of
software testing is to identify errors, gaps, or missing requirements in contrast
to the actual requirements.

Manual testing:

 Manual testing is a software testing process in which test cases are
executed manually, without using any automated tool. All test cases are
executed by the tester manually from the end user's perspective, and test
case reports are also generated manually.

 Manual testing is one of the most fundamental testing processes, as it can
find both visible and hidden defects in the software.

 Manual testing is mandatory for every newly developed piece of software
before automated testing. This testing requires great effort and time, but it
gives assurance of bug-free software. Manual testing requires knowledge of
manual testing techniques.
Why Manual testing?

 Manual testing is essential because one of the fundamentals of software
testing is that "100% automation is not possible."

 Manual testing will always be an important part of software development. If


nothing else, we will need to perform our tests manually at least once in
order to automate them.

 Acceptance testing in particular is hard to replace, in the same way


Software requirement specifications and quality assurance are
irreplaceable.

Types of Manual Testing:

 White-box testing: White-box testing is done by the developer, who checks
every line of code before giving it to the test engineer.

 Black-box testing: Black-box testing is done by the test engineer, who
checks the functionality of an application or the software according to the
customer's/client's needs.

 Gray-box testing: Gray-box testing is a combination of white-box and
black-box testing. It can be performed by a person who knows both coding
and testing. When a single person performs both white-box and black-box
testing for an application, this is known as gray-box testing.
How to perform Manual Testing?

 First, the tester observes all documents related to the software in order to
select testing areas. The tester analyses the requirement documents to
cover all requirements stated by the customer.
 The tester develops test cases according to the requirement document. All
test cases are executed manually, using black-box testing and white-box
testing.
 If bugs occur, the testing team informs the development team. The
development team fixes the bugs and hands the software back to the
testing team for a retest.
What is Automation Testing?

 Automation testing is a software testing technique that uses special
automated testing software tools to execute a test case suite.

 The automation testing software can also enter test data into the System
Under Test, compare expected and actual results, and generate detailed test
reports. Software test automation demands considerable investments of
money and resources.

 Successive development cycles will require repeated execution of the same
test suite. Using a test automation tool, it is possible to record this test
suite and replay it as required.

 Once the test suite is automated, no human intervention is required, which
improves the ROI of test automation. The goal of automation is to reduce
the number of test cases to be run manually, not to eliminate manual
testing altogether.
Why Test Automation?

 Test Automation is the best way to increase the effectiveness, test


coverage, and execution speed in software testing. Automated software
testing is important due to the following reasons:
 Manual testing of all workflows, all fields, and all negative scenarios is
time- and money-consuming.
 It is difficult to test multilingual sites manually.

 Test automation in software testing does not require human intervention.
You can run automated tests unattended (overnight).

 Test automation increases the speed of test execution.

 Automation helps increase test coverage.

 Manual Testing can become boring and hence error-prone.

Figure: Automated Testing Process

Which Test Cases to Automate?


Test cases to be automated can be selected using the following criteria to
increase the automation ROI:
o High Risk – Business Critical test cases
o Test cases that are repeatedly executed
o Test Cases that are very tedious or difficult to perform manually
o Test Cases which are time-consuming
Pros and cons with test automation:
Pros or benefits of Automation Testing:

 70% faster than manual testing

 Wider test coverage of application features


 Reliable in results

 Ensure Consistency

 Saves Time and Cost


 Improves accuracy
 Human Intervention is not required while execution

 Increases Efficiency
 Better speed in executing tests

 Re-usable test scripts

 Test Frequently and thoroughly


 More cycles of execution can be achieved through automation

 Earlier time to market


The problem areas surrounding test automation:

 The main disadvantages of automated testing are that it usually costs
more money for software, takes a lot of effort to implement for the first
time, and needs a lot of maintenance.

 It can also be inconvenient and burdensome to decide who will automate
and who will train.
 Its use is limited in some organisations, as many organisations do not
prefer test automation.
 Automated testing would also require additionally trained and skilled
people.
 Automated testing only removes the mechanical execution of the testing
process; the creation of test cases still requires testing professionals.
 Cheap tests have lower value.
 It is difficult to create test cradles that are relevant to automated
integration testing.
 The functionality of programs varies over time, and tests must be
adjusted accordingly, which takes time and effort.
 It is difficult to write robust tests that work reliably in many different
build scenarios.

Unit testing:
Unit testing is the sort of testing that is normally close at heart for developers.
The primary reason is that, by definition, unit testing tests well-defined parts of
the system in isolation from other parts. Thus, they are comparatively easy to
write and use. Many build systems have built-in support for unit tests, which can
be leveraged without undue difficulty. With Maven, for example, there is a
convention that describes how to write tests such that the build system can find
them, execute them, and finally prepare a report of the outcome. Writing tests
basically boils down to writing test methods, which are tagged with source code
annotations to mark the methods as being tests. Since they are ordinary
methods, they can do anything, but by convention, the tests should be written so
that they don't require considerable effort to run. If the test code starts to require
complicated setup and runtime dependencies, we are no longer dealing with unit
tests. Here, the difference between unit testing and functional testing can be a
source of confusion. Often, the same underlying technologies and libraries are
reused between unit and functional testing.

xUnit in general and JUnit in particular:

JUnit is a framework that lets you define unit tests in your Java code and run
them. JUnit belongs to a family of testing frameworks collectively called xUnit.
While JUnit is specific to Java, the ideas are sufficiently generic for ports to have
been made in, for instance, C#. The corresponding test framework for C# is called,
somewhat unimaginatively, NUnit. The N is derived from .NET, the name of
the Microsoft software platform.

We need some of the following nomenclature before carrying on with JUnit:

 Test runner: A test runner runs tests that are defined by an xUnit
framework. JUnit has a way to run unit tests from the command line, and
Maven employs a test runner called Surefire. A test runner also collects and
reports test results. In the case of Surefire, the reports are in XML format,
and these reports can be further processed by other tools, particularly for
visualization.

 Test case: A test case is the most fundamental type of test definition. How
you create test cases differs a little bit among JUnit versions. In earlier
versions, you inherited from a JUnit base class; in recent versions, you just
need to annotate the test methods.
 Test fixtures: A test fixture is a known state that the test cases can rely on
so that the tests can have well-defined behavior. It is the responsibility of
the developer to create these. A test fixture is also sometimes known as a
test context. With JUnit, you usually use the @Before and @After
annotations to define test fixtures. @Before is, unsurprisingly, run before a
test case and is used to bring up the environment. @After likewise restores
the state if there is a need to. Sometimes, @Before and @After are more
descriptively named Setup and Teardown. Since annotations are used, the
methods can have the names that are the most intuitive in that context.
 Test suites: You can group test cases together in test suites. A test suite is
usually a set of test cases that share the same test fixture.
 Test execution: A test execution runs the test suites and test cases. Here,
all the previous aspects are combined. The test suites and test cases are
located, the appropriate test fixtures are created, and the test cases run.
Lastly, the test results are collected and collated.
 Test result formatter: A test result formatter formats test result output for
human consumption. The format employed by JUnit is versatile enough to
be used by other testing frameworks and formatters not directly associated
with JUnit. So, if you have some tests that don't really use any of the xUnit
frameworks, you can still benefit by presenting the test results in Jenkins
by providing a test result XML file. Since the file format is XML, you can
produce it from your own tool, if need be.
 Assertions: An assertion is a construct in the xUnit framework that makes
sure that a condition is met. If it is not met, it is considered an error, and a
test error is reported. The test case is also usually terminated when the
assertion fails.
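The nomenclature above is not specific to Java. As a rough sketch, the same
xUnit ideas, a fixture created before each test case, test cases as methods, and
assertions, map directly onto Python's built-in unittest module (the stack
example here is invented purely for illustration):

```python
import unittest

class StackTest(unittest.TestCase):
    # setUp/tearDown play the role of JUnit's @Before/@After: they
    # build (and, if needed, restore) the test fixture around each test case.
    def setUp(self):
        self.stack = []  # the fixture: a known, well-defined starting state

    def test_push_then_pop_returns_last_item(self):
        self.stack.append(42)
        self.assertEqual(self.stack.pop(), 42)  # an assertion

    def test_pop_on_empty_stack_raises(self):
        # The fixture guarantees the stack is empty at this point.
        with self.assertRaises(IndexError):
            self.stack.pop()
```

A test runner such as `python -m unittest` locates these test cases, creates the
fixture, runs them, and collects the results, much as Surefire does for Maven.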

What is Selenium?

Selenium is a free (open-source) automated testing framework used to validate


web applications across different browsers and platforms. You can use multiple
programming languages, such as Java, C#, and Python, to create Selenium test
scripts. Testing done using the Selenium testing tool is usually referred to
as Selenium Testing.

Selenium Tool Suite

Selenium Software is not just a single tool but a suite of software, each piece
catering to different Selenium QA testing needs of an organization. Here is the
list of tools:

 Selenium Integrated Development Environment (IDE)


 Selenium Remote Control (RC)
 WebDriver
 Selenium Grid
Selenium Features:
o Selenium is an open source and portable Web testing Framework.
o Selenium IDE provides a playback and record feature for authoring tests
without the need to learn a test scripting language.
o It can be considered the leading cloud-based testing platform, which
helps testers record their actions and export them as a reusable script
with a simple-to-understand and easy-to-use interface.
o Selenium supports various operating systems, browsers and programming
languages. Following is the list:
o Programming Languages: C#, Java, Python, PHP, Ruby, Perl, and
JavaScript
o Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris.
o Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge,
Opera, Safari, etc.
o It also supports parallel test execution which reduces time and increases
the efficiency of tests.

o Selenium can be integrated with frameworks like Ant and Maven for source
code compilation.
o Selenium can also be integrated with testing frameworks like TestNG for
application testing and generating reports.
o Selenium requires fewer resources as compared to other automation test
tools.
o The WebDriver API has been included in Selenium; this is one of the most
important modifications made to Selenium.
o Selenium WebDriver does not require a server installation; test scripts
interact directly with the browser.
o Selenium commands are categorized into different classes, which makes
them easier to understand and implement.
o Selenium Remote Control (RC) in conjunction with the WebDriver API is
known as Selenium 2.0. This version was built to support dynamic web
pages and Ajax.

Figure: Selenium Tool Suite

At the moment, Selenium RC and WebDriver are merged into a single framework
to form Selenium 2. Selenium 1, by the way, refers to Selenium RC.

JavaScript testing:
The JS test framework is a tool for examining the functionality of JavaScript web
applications. It helps to ensure all the components are working properly. It also
enables you to easily identify bugs. Therefore, you can quickly take the necessary
steps to fix issues.
JavaScript unit testing is a method in which JavaScript test code is written for a
web page or application module. It is then combined with HTML as an inline event
handler and executed in the browser to test whether all functionalities work as
desired. These unit tests are then organized into a test suite.
• Karma is a test runner for JavaScript unit tests.
• Jasmine is a Cucumber-like behavior-driven testing framework.
• Protractor is an end-to-end testing framework used for AngularJS applications.

Testing backend integration points:


Automated testing of backend functionality such as SOAP and REST endpoints is
normally quite cost effective. Backend interfaces tend to be fairly stable, so the
corresponding tests will also require less maintenance effort than GUI tests, for
instance. The tests can also be fairly easy to write with tools such as soapUI,
which can be used to write and execute tests. These tests can also be run from
the command line and with Maven, which is great for Continuous Integration on a
build server. soapUI is a good example of a tool that appeals to several
different roles. Testers who build test cases get a fairly well-structured
environment for writing tests and running them interactively. Tests can be built
incrementally. Developers can integrate test cases into their builds without
necessarily using the GUI. There are Maven plugins and command-line runners.
The command line and Maven integration are useful for people maintaining the
build server too. Furthermore, the licensing is open source with some added
features in a separate, proprietary version. The open source nature makes the
builds more reliable. It is very stress-inducing when a build fails because a
license has unexpectedly reached its end or a floating license has run out. The
soapUI tool has its share of flaws, but in general, it is flexible and works well.

Figure: The soapUI user interface

The soapUI user interface is fairly straightforward. There is a tree view listing test
cases on the left. It is possible to select single tests or entire test suites and run
them. The results are presented in the area on the right. It is also worth noting
that the test cases are defined in XML. This makes it possible to manage them as
code in the source code repository. This also makes it possible to edit them in a
text editor on occasion, for instance, when we need to perform a global search
and replace on an identifier that has changed names.
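soapUI itself is driven through its GUI and XML test definitions, but the
underlying idea of an automated backend integration test can be sketched with
nothing more than the Python standard library. Everything here, the endpoint
path, the JSON payload, and the checks, is invented for the example; a real test
would target the actual SOAP or REST interface:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in backend for the sketch: it serves a fixed JSON document.
class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "version": "1.0"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the test output quiet
        pass

def check_status_endpoint(base_url):
    """Integration check: GET /status and verify the response contract."""
    with urllib.request.urlopen(base_url + "/status") as resp:
        assert resp.status == 200
        payload = json.loads(resp.read())
    assert payload["status"] == "ok"
    return payload

# Spin up the throwaway backend on a free port and run the check against it.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
payload = check_status_endpoint("http://127.0.0.1:%d" % server.server_address[1])
server.shutdown()
```

Because such a check is plain code with no GUI dependency, it can run from the
command line on a build server, which is exactly the property that makes
backend integration tests cheap to keep in Continuous Integration.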

Test-driven development:
Test-driven development (TDD) was made popular by the Extreme Programming
movement of the nineties. TDD is usually described as a sequence of events, as
follows:
 Implement the test: As the name implies, you start out by writing the test
and write the code afterwards. One way to see it is that you implement the
interface specifications of the code to be developed and then progress by
writing the code. To be able to write the test, the developer must find all
relevant requirement specifications, use cases, and user stories. The shift in
focus from coding to understanding the requirements can be beneficial for
implementing them correctly.
 Verify that the new test fails: The newly added test should fail because
there is nothing to implement the behavior properly yet, only the stubs and
interfaces needed to write the test. Run the test and verify that it fails.
 Write code that implements the tested feature: The code we write
doesn't yet have to be particularly elegant or efficient. Initially, we just want
to make the new test pass.
 Verify that the new test passes together with the old tests: When the
new test passes, we know that we have implemented the new feature
correctly. Since the old tests also pass, we haven't broken existing
functionality.
 Refactor the code: The word "refactor" has mathematical roots. In
programming, it means cleaning up the code and, among other things,
making it easier to understand and maintain.
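The cycle can be illustrated with a deliberately small example. The fizzbuzz
function below is not from this text; it is just a well-known stand-in for "the
feature to be implemented". In TDD order, the test class is written first and
fails against an empty stub; the function shown is then the simplest code that
makes all the tests pass:

```python
import unittest

# Steps 1-2: this test class is written first. With only a stub for
# fizzbuzz(), every test fails ("red").
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")

# Step 3: the simplest implementation that makes the tests pass ("green").
# Step 5 (refactor) would clean this up later, with the tests as a safety net.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Step 4 is simply running the whole suite again: the new tests now pass, and
because the old ones still pass, no existing behavior has been broken.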

REPL-driven development:
This style of development is very common when working with interpreted
languages, such as Lisp, Python, Ruby, and JavaScript. When you work with a
Read Eval Print Loop (REPL), you write small functions that are independent and
also not dependent on a global state. The functions are tested even as you write
them. This style of development differs a bit from TDD. The focus is on writing
small functions with no or very few side effects, which makes the code easy to
comprehend, rather than on writing test cases before functioning code is
written, as in TDD. You can combine this style of development with unit testing.
Since you can use REPL-driven development to develop your tests as well, this
combination is a very effective strategy.

Deploying the Code
Once the code is built and tested, we need to deploy it to our servers so that our
customers can use the newly developed features! There are many competing tools
and options in this space, and the one that is right for you and your organization
will depend on your needs. Puppet, Ansible, Salt, PalletOps, and others help us
deploy the code.
Deployment is an important part of the development process, as it enables
organizations to quickly and reliably bring new features and improvements to
their users. To facilitate deployment, DevOps teams often use automation tools
and practices such as continuous integration and delivery (CI/CD) to streamline
the process and ensure that updates can be deployed quickly and consistently.
Deployment can be done manually or automated through the use of tools and
scripts. The goal of deployment in DevOps is to make the process of updating a
production environment as efficient, reliable, and fast as possible and to make
code changes available to end users as quickly and reliably as possible, while
minimizing downtime and disruption.
Why we Use Deployment Systems?
Let's first examine the basics of the problem we are trying to solve. We have a
typical enterprise application, with a number of different high-level components.
In our scenario, we have:
• A web server
• An application server
• A database server
If we only have a single physical server and these few components to worry about
that get released once a year or so, we can install the software manually and be
done with the task. It will be the most cost-effective way of dealing with the
situation, even though manual work is boring and error prone. It's not reasonable
to expect conformity to this simplified release cycle in reality though. It is more
likely that a large organization has hundreds of servers and applications and that
they are all deployed differently, with different requirements. Managing all the
complexity that the real world displays is hard, so it starts to make sense that
there are a lot of different solutions that do basically the same thing in different
ways. Whatever the fundamental unit that executes our code is, be it a physical
server, a virtual machine, some form of container technology, or a combination of
these, we have several challenges to deal with like,
 Configuring the base OS
 Describing clusters
 Delivering packages to a system
To address these challenges, we need deployment systems.
Virtualization stacks:
Organizations that have their own internal server farms tend to use virtualization
a lot in order to encapsulate the different components of their applications. There
are many different solutions depending on your requirements. Virtualization
solutions provide virtual machines that have virtual hardware, such as network
cards and CPUs. Virtualization and container techniques are sometimes confused
because they share some similarities. You can use virtualization techniques to
simulate entirely different hardware than the one you have physically. This is
commonly referred to as emulation. If you want to emulate mobile phone
hardware on your developer machine so that you can test your mobile
application, you use virtualization in order to emulate a device. The closer the
underlying hardware is to the target platform, the greater the efficiency the
emulator can have during emulation. As an example, you can use the QEMU
emulator to emulate an Android device. If you emulate an Android x86_64 device
on an x86_64-based developer machine, the emulation will be much more
efficient than if you emulate an ARM-based Android device on an x86_64-based
developer machine.

With server virtualization, you are usually not really
interested in the possibility of emulation. You are interested instead in
encapsulating your application's server components. For instance, if a server
application component starts to run amok and consume unreasonable amounts
of CPU time or other resources, you don't want the entire physical machine to
stop working altogether. This can be achieved by creating a virtual machine with,
perhaps, two cores on a machine with 64 cores. Only two cores would be affected
by the runaway application. The same goes for memory allocation.

Container-based techniques provide similar degrees of encapsulation and control
over
resource allocation as virtualization techniques do. Containers do not normally
provide the emulation features of virtualization, though. This is not an issue since
we rarely need emulation for server applications.

The component that abstracts the underlying hardware and arbitrates hardware
resources between different competing virtual machines is called a hypervisor.
The hypervisor can run
directly on the hardware, in which case it is called a bare metal hypervisor.
Otherwise, it runs inside an operating system with the help of the operating
system kernel. VMware is a proprietary virtualization solution, and exists in
desktop and server hypervisor variants. It is well supported and used in many
organizations. The server variant changes names sometimes; currently, it's called
VMware ESX, which is a bare metal hypervisor.
KVM is a virtualization solution for Linux. It runs inside a Linux host
operating system. Since it is an open source solution, it is usually much cheaper
than proprietary solutions since there are no licensing costs per instance and is
therefore popular with organizations that have massive amounts of virtualization.
Xen is another type of virtualization which, amongst other features, offers
paravirtualization. Paravirtualization is built upon the idea that if the guest
operating system can be made to use a modified kernel, it can execute with
greater efficiency.
where a fully independent kernel version is used, and container-based
virtualization, where the host kernel is used. VirtualBox is an open source
virtualization solution from Oracle. It is pretty popular with developers and
sometimes used with server installations as well but rarely on a larger scale.
Developers who use Microsoft Windows on their developer machines but want to
emulate Linux server environments locally often find VirtualBox handy. Likewise,
developers who use Linux on their workstations find it useful to emulate Windows
machines.

Executing code on the client:
Several of the configuration management systems described here allow you to
reuse the node descriptors to execute code on matching nodes.
The configuration management systems are:
 Puppet
 Ansible
 PalletOps
 Chef
 SaltStack
 Vagrant
 Docker
 Kubernetes

The Puppet master and Puppet agents:


Puppet is a deployment solution that is very popular in larger organizations and
is one of the first systems of its kind. Puppet consists of a client/server solution,
where the client nodes check in regularly with the Puppet server to see if anything
needs to be updated in the local configuration. The Puppet server is called a
Puppet master, and there is a lot of similar wordplay in the names chosen for the
various Puppet components. Puppet provides a lot of flexibility in handling the
complexity of a server farm, and as such, the tool itself is pretty complex. This is
an example scenario of a dialogue between a Puppet client and a Puppet master:
1. The Puppet client decides that it's time to check in with the Puppet master to
discover any new configuration changes. This can be due to a timer or manual
intervention by an operator at the client. The dialogue between the Puppet
client and master is normally encrypted using SSL.
2. The Puppet client presents its credentials so that the Puppet master can know
exactly which client is calling. Managing the client credentials is a separate
issue.
3. The Puppet master figures out which configuration the client should have by
compiling the Puppet catalogue and sending it to the client. This involves a
number of mechanisms, and a particular setup doesn't need to utilize all
possibilities. It is pretty common to have both a role-based and concrete
configuration for a Puppet client. Role-based configurations can be inherited.
4. The necessary code is run on the client side such that the configuration
matches the one decided on by the Puppet master. In this sense, a Puppet
configuration is declarative. You declare what configuration a machine
should have, and Puppet figures out how to get from the current state to the
desired client state.
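For illustration, the compiled catalogue for a web server role might contain
declarations like the following hypothetical, minimal manifest (the package and
service names are examples). Note that it declares the desired state; Puppet
works out the commands needed to reach it:

```puppet
# Hypothetical example: ensure the nginx package is installed ...
package { 'nginx':
  ensure => installed,
}

# ... and that the service is running and starts on boot.
service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # ordering: package first, then service
}
```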

There are both pros and cons of the Puppet ecosystem:


• Puppet has a large community, and there are a lot of resources on the Internet
for Puppet. There are a lot of different modules, and unless you have a really
strange component to deploy, in all likelihood there is already an existing
module written for your component that you can modify according to your
needs.
• Puppet requires a number of dependencies on the Puppet client machines.
Sometimes, this gives rise to problems. The Puppet agent will require a Ruby
runtime that sometimes needs to be ahead of the Ruby version available in
your distribution's repositories. Enterprise distributions often lag behind in
versions.
• Puppet configurations can be complex to write and test.

Ansible:
Ansible is a deployment solution that favors simplicity. The Ansible architecture
is agentless; it doesn't need a running daemon on the client side like Puppet does.
Instead, the Ansible server logs in to the Ansible node and issues commands over
SSH in order to install the required configuration. While Ansible's agentless
architecture does make things simpler, you need a Python interpreter installed on
the Ansible nodes. Ansible is somewhat more lenient about the Python version
required for its code to run than Puppet is for its Ruby code to run, so this
dependence on Python being available is not a great hassle in practice. Like
Puppet and others, Ansible focuses on configuration descriptors that are
idempotent. This basically means that the descriptors are declarative and the
Ansible system figures out how to bring the server to the desired state. You can
rerun a configuration run, and it will be safe, which is not necessarily the case
for an imperative system.
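A hypothetical playbook in this declarative, idempotent style (the host group,
package, and service names are examples). Rerunning the play against a node
that is already in the desired state changes nothing:

```yaml
# Hypothetical playbook: converge the 'webservers' group to a desired state.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present   # idempotent: a no-op if already installed

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```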
PalletOps:
PalletOps is an advanced deployment system, which combines the declarative
power of Lisp with a very lightweight server configuration.

PalletOps takes Ansible's agentless idea one step further. Rather than needing a
Ruby or Python interpreter installed on the node that is to be configured, you
only need ssh and a Bash installation. PalletOps compiles its Lisp-defined DSL to
Bash code that is executed on the target node. These requirements are so simple
that you can use PalletOps on very small and simple servers, even phones! On the
other hand, while there are a number of support modules for Pallet, called
crates, there are fewer of them than there are for Puppet or Ansible.
Chef:
Chef is a configuration management technology used to automate
infrastructure provisioning. It is based on a Ruby DSL. It is used to streamline
the tasks of configuring and managing a company's servers, and it can be
integrated with any cloud technology. In DevOps, we use Chef to deploy and
manage servers and applications, both in-house and in the cloud.
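As a sketch, a minimal Chef recipe in the Ruby DSL might look like this (the
resource names are examples). Like a Puppet manifest, the recipe declares a
desired state, and the chef-client run converges the node to match it:

```ruby
# Hypothetical recipe: install nginx and keep the service enabled and running.
package 'nginx'

service 'nginx' do
  action [:enable, :start]
end
```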

SaltStack:
SaltStack, also known as Salt, is a configuration management and orchestration
tool. It uses a central repository to provision new servers and other IT
infrastructure, to make changes to existing ones, and to install software in IT
environments, including physical and virtual servers, as well as the cloud.

Vagrant:
Vagrant is a tool for working with virtual environments, and in most
circumstances, this means working with virtual machines. Vagrant provides a
simple and easy-to-use command-line client for managing these environments,
and an interpreter for the text-based definitions of what each environment looks
like, called Vagrantfiles. Vagrant is open source, which means that anyone can
download it, modify it, and share it freely. Vagrant supports several virtualization
providers, and VirtualBox is a popular provider for developers.
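A minimal, hypothetical Vagrantfile for the VirtualBox provider (the box name
and memory size are examples); `vagrant up` reads this definition and brings up
the matching virtual machine:

```ruby
# Hypothetical Vagrantfile: one Ubuntu VM on the VirtualBox provider.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"          # base image ("box") to start from
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                        # MB of RAM for the VM
  end
end
```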

Deploying with Docker:


Docker is a software platform that allows you to build, test, and deploy
applications quickly. Docker packages software into standardized units called
containers that have everything the software needs to run including libraries,
system tools, code, and runtime. Docker's model of creating reusable containers
that can be used on development machines, testing environments, and
production environments is very appealing.
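As an illustration, a hypothetical Dockerfile for a small Python web application
(the base image, file names, and port are examples). The resulting image is the
standardized unit that moves unchanged between development, test, and
production:

```dockerfile
# Hypothetical image definition for a small Python web application.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```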
There are several emerging solutions, such as these:
• Docker Swarm: Docker Swarm is compatible with Docker Compose, which is
appealing. Docker Swarm is maintained by the Docker community.
• Kubernetes: Kubernetes is modeled after Google's Borg cluster software, which
is appealing since it's a well-tested model used in-house in Google's vast data
centers. Kubernetes is not the same as Borg though, which must be kept in
mind. It's not clear whether Kubernetes offers scaling the same way Borg does.

...............................................................................................................
The End

