devops unit-5
What is testing?
Software testing is a method to check whether the actual software product
matches the expected requirements and to ensure that the software product is
defect free. It involves executing software/system components using manual or
automated tools to evaluate one or more properties of interest. The purpose of
software testing is to identify errors, gaps, or missing requirements in contrast
to the actual requirements.
Manual testing:
Manual testing is testing a software product by hand: a test engineer executes
test cases step by step, without the help of automation tools, and compares the
actual behavior with the expected behavior.
Types of Manual Testing:
White-box testing: White-box testing is done by the developers, who check
every line of the code before giving it to the test engineer.
Black-box testing: Black-box testing is done by the test engineer, who
checks the functionality of the application or software according to the
customer's or client's needs.
Gray-box testing: Gray-box testing is a combination of white-box and
black-box testing. It is performed by a person who knows both coding and
testing. When a single person performs both white-box and black-box
testing for an application, this is known as gray-box testing.
Automation testing:
Automation testing uses software tools to control the execution of tests. The
automation software can enter test data into the system under test, compare
expected and actual results, and generate detailed test reports. Software test
automation demands considerable investments of money and resources. Its
advantages include:
• Ensures consistency
• Increases efficiency
• Better speed in executing tests
The main disadvantages of automated testing are that it usually costs more
money in software, takes a lot of effort to implement for the first time,
and needs a lot of maintenance.
Unit testing:
Unit testing is the sort of testing that is normally closest to developers'
hearts. The primary reason is that, by definition, unit tests exercise
well-defined parts of the system in isolation from other parts. Thus, they are
comparatively easy to
write and use. Many build systems have built-in support for unit tests, which can
be leveraged without undue difficulty. With Maven, for example, there is a
convention that describes how to write tests such that the build system can find
them, execute them, and finally prepare a report of the outcome. Writing tests
basically boils down to writing test methods, which are tagged with source code
annotations to mark the methods as being tests. Since they are ordinary
methods, they can do anything, but by convention, the tests should be written so
that they don't require considerable effort to run. If the test code starts to require
complicated setup and runtime dependencies, we are no longer dealing with unit
tests. Here, the difference between unit testing and functional testing can be a
source of confusion. Often, the same underlying technologies and libraries are
reused between unit and functional testing.
JUnit is a framework that lets you define unit tests in your Java code and run
them. JUnit belongs to a family of testing frameworks collectively called xUnit.
While JUnit is specific to Java, the ideas are sufficiently generic for ports to have
been made in, for instance, C#. The corresponding test framework for C# is called,
somewhat unimaginatively, NUnit. The N is derived from .NET, the name of
the Microsoft software platform.
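JUnit itself is Java-specific, but the xUnit conventions it embodies carry over to the other ports. As a rough sketch of the idea, here is the same shape expressed with Python's standard unittest module, which is also an xUnit-family framework; the add function is a hypothetical unit under test invented for illustration:

```python
import unittest

# Hypothetical unit under test.
def add(a, b):
    return a + b

class AddTest(unittest.TestCase):
    # In the xUnit style, a test is an ordinary method; its name
    # (or, in JUnit's case, an annotation) marks it as a test case.
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)
```

A test runner (python -m unittest here, or Surefire in the Maven case) discovers such methods by convention, executes them, and reports the outcome.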
Test runner: A test runner runs tests that are defined by an xUnit
framework. JUnit has a way to run unit tests from the command line, and
Maven employs a test runner called Surefire. A test runner also collects and
reports test results. In the case of Surefire, the reports are in XML format,
and these reports can be further processed by other tools, particularly for
visualization.
Test case: A test case is the most fundamental type of test definition. How
you create test cases differs a little bit among JUnit versions. In earlier
versions, you inherited from a JUnit base class; in recent versions, you just
need to annotate the test methods.
Test fixtures: A test fixture is a known state that the test cases can rely on
so that the tests can have well-defined behavior. It is the responsibility of
the developer to create these. A test fixture is also sometimes known as a
test context. With JUnit, you usually use the @Before and @After
annotations to define test fixtures. @Before is, unsurprisingly, run before a
test case and is used to bring up the environment. @After likewise restores
the state if there is a need to. Sometimes, @Before and @After are more
descriptively named Setup and Teardown. Since annotations are used, the
method can have the names that are the most intuitive in that context.
Test suites: You can group test cases together in test suites. A test suite is
usually a set of test cases that share the same test fixture.
Test execution: A test execution runs the tests suites and test cases. Here,
all the previous aspects are combined. The test suites and test cases are
located, the appropriate test fixtures are created, and the test cases run.
Lastly, the test results are collected and collated.
Test result formatter: A test result formatter formats test result output for
human consumption. The format employed by JUnit is versatile enough to
be used by other testing frameworks and formatters not directly associated
with JUnit. So, if you have some tests that don't really use any of the xUnit
frameworks, you can still benefit by presenting the test results in Jenkins
by providing a test result XML file. Since the file format is XML, you can
produce it from your own tool, if need be.
Assertions: An assertion is a construct in the xUnit framework that makes
sure that a condition is met. If it is not met, it is considered an error, and a
test error is reported. The test case is also usually terminated when the
assertion fails.
What is Selenium?
Selenium is not just a single tool but a suite of software, each piece
catering to different Selenium QA testing needs of an organization. The suite
consists of Selenium IDE, Selenium Remote Control (RC), Selenium WebDriver, and
Selenium Grid.
o Selenium can be integrated with frameworks like Ant and Maven for source
code compilation.
o Selenium can also be integrated with testing frameworks like TestNG for
application testing and generating reports.
o Selenium requires fewer resources as compared to other automation test
tools.
o The WebDriver API has been introduced in Selenium, which is one of the
most important modifications made to Selenium.
o Selenium WebDriver does not require server installation; test scripts
interact directly with the browser.
o Selenium commands are categorized in terms of different classes, which
makes them easier to understand and implement.
o Selenium Remote Control (RC) in conjunction with the WebDriver API is
known as Selenium 2.0. This version was built to support dynamic web
pages and Ajax.
At the moment, Selenium RC and WebDriver are merged into a single framework
to form Selenium 2. Selenium 1, by the way, refers to Selenium RC.
JavaScript testing:
The JS test framework is a tool for examining the functionality of JavaScript web
applications. It helps to ensure all the components are working properly. It also
enables you to easily identify bugs. Therefore, you can quickly take the necessary
steps to fix issues.
JavaScript Unit Testing is a method in which JavaScript test code is written for a
web page or application module. It is then combined with HTML as an inline event
handler and executed in the browser to test whether all functionalities work as
desired. These unit tests are then organized into test suites.
• Karma is a test runner for unit tests in JavaScript.
• Jasmine is a Cucumber-like behavior-driven testing framework.
• Protractor is an end-to-end testing framework for AngularJS applications.
soapUI:
soapUI is a tool for testing web service endpoints, such as SOAP and REST
services.
The soapUI user interface is fairly straightforward. There is a tree view listing test
cases on the left. It is possible to select single tests or entire test suites and run
them. The results are presented in the area on the right. It is also worth noting
that the test cases are defined in XML. This makes it possible to manage them as
code in the source code repository. This also makes it possible to edit them in a
text editor on occasion, for instance, when we need to perform a global search
and replace on an identifier that has changed names.
Test-driven development:
Test-driven development (TDD) was made popular by the Extreme Programming
movement of the nineties. TDD is usually described as a sequence of events,
as follows:
Implement the test: As the name implies, you start out by writing the test
and write the code afterwards. One way to see it is that you implement the
interface specifications of the code to be developed and then progress by
writing the code. To be able to write the test, the developer must find all
relevant requirement specifications, use cases, and user stories. The shift in
focus from coding to understanding the requirements can be beneficial for
implementing them correctly.
Verify that the new test fails: The newly added test should fail because
there is nothing to implement the behavior properly yet, only the stubs and
interfaces needed to write the test. Run the test and verify that it fails.
Write code that implements the tested feature: The code we write
doesn't yet have to be particularly elegant or efficient. Initially, we just want
to make the new test pass.
Verify that the new test passes together with the old tests: When the
new test passes, we know that we have implemented the new feature
correctly. Since the old tests also pass, we haven't broken existing
functionality.
Refactor the code: The word "refactor" has mathematical roots. In
programming, it means cleaning up the code and, among other things,
making it easier to understand and maintain.
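The sequence above can be sketched in a few lines of Python; slugify is a hypothetical feature invented only for illustration:

```python
import unittest

# Step 1: implement the test first, against a stub.
def slugify(title):
    raise NotImplementedError  # stub only; the test goes red

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: run the suite and verify that the new test fails.

# Step 3: write just enough code to make the test pass; it does not
# have to be elegant yet. Redefining the stub stands in for editing it.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 4: rerun the suite and verify that the test now passes,
# together with the old tests. Step 5 is refactoring the code
# while keeping the suite green.
```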
REPL-driven development:
This style of development is very common when working with interpreted
languages, such as Lisp, Python, Ruby, and JavaScript. When you work with a
Read Eval Print Loop (REPL), you write small functions that are independent and
also not dependent on a global state. The functions are tested even as you write
them. This style of development differs a bit from TDD. The focus is on writing
small functions with no or very few side effects, which makes the code easy to
comprehend, rather than on writing test cases before the functioning code is
written, as in TDD. You can combine this style of development with unit testing.
Since you can use REPL-driven development to develop your tests as well, this
combination is a very effective strategy.
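As an illustration of the style, here is a small, side-effect-free Python function together with a transcript of the interactive session used to exercise it; keeping the transcript as a doctest lets it double as a regression test later. The function itself is a made-up example:

```python
def word_count(text):
    """Count the words in a string.

    The REPL transcript below was produced while writing the
    function and is preserved as a doctest:

    >>> word_count("to be or not to be")
    6
    >>> word_count("")
    0
    """
    # No global state: the function depends only on its argument.
    return len(text.split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # replays the transcript as a test
```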
Deploying the Code
Once the code is built and tested, we need to deploy it to our servers so that
our customers can use the newly developed features! There are many competing
tools and options in this space, and the one that is right for you and your
organization will depend on your needs. Puppet, Ansible, Salt, PalletOps, and
others help us deploy the code.
Deployment is an important part of the development process, as it enables
organizations to quickly and reliably bring new features and improvements to
their users. To facilitate deployment, DevOps teams often use automation tools
and practices such as continuous integration and delivery (CI/CD) to streamline
the process and ensure that updates can be deployed quickly and consistently.
Deployment can be done manually or automated through the use of tools and
scripts. The goal of deployment in DevOps is to make the process of updating a
production environment as efficient, reliable, and fast as possible and to make
code changes available to end users as quickly and reliably as possible, while
minimizing downtime and disruption.
Why Do We Use Deployment Systems?
Let's first examine the basics of the problem we are trying to solve. We have a
typical enterprise application, with a number of different high-level components.
In our scenario, we have:
• A web server
• An application server
• A database server
If we only have a single physical server and these few components to worry about
that get released once a year or so, we can install the software manually and be
done with the task. It will be the most cost-effective way of dealing with the
situation, even though manual work is boring and error prone. It's not reasonable
to expect conformity to this simplified release cycle in reality though. It is more
likely that a large organization has hundreds of servers and applications and that
they are all deployed differently, with different requirements. Managing all the
complexity that the real world displays is hard, so it starts to make sense that
there are a lot of different solutions that do basically the same thing in different
ways. Whatever the fundamental unit that executes our code is, be it a physical
server, a virtual machine, some form of container technology, or a combination of
these, we have several challenges to deal with:
• Configuring the base OS
• Describing clusters
• Delivering packages to a system
To address these challenges, we need deployment systems.
Virtualization stacks:
Organizations that have their own internal server farms tend to use virtualization
a lot in order to encapsulate the different components of their applications. There
are many different solutions depending on your requirements. Virtualization
solutions provide virtual machines that have virtual hardware, such as network
cards and CPUs. Virtualization and container techniques are sometimes confused
because they share some similarities. You can use virtualization techniques to
simulate entirely different hardware than the one you have physically. This is
commonly referred to as emulation. If you want to emulate mobile phone
hardware on your developer machine so that you can test your mobile
application, you use virtualization in order to emulate a device. The closer the
underlying hardware is to the target platform, the greater the efficiency the
emulator can have during emulation. As an example, you can use the QEMU
emulator to emulate an Android device. If you emulate an Android x86_64 device
on an x86_64-based developer machine, the emulation will be much more
efficient than if you emulate an ARM-based Android device on an x86_64-based
developer machine.

With server virtualization, you are usually not really
interested in the possibility of emulation. You are interested instead in
encapsulating your application's server components. For instance, if a server
application component starts to run amok and consume unreasonable amounts
of CPU time or other resources, you don't want the entire physical machine to
stop working altogether. This can be achieved by creating a virtual machine with,
perhaps, two cores on a machine with 64 cores. Only two cores would be affected
by the runaway application. The same goes for memory allocation.

Container-based techniques provide similar degrees of encapsulation and control over
resource allocation as virtualization techniques do. Containers do not normally
provide the emulation features of virtualization, though. This is not an issue since
we rarely need emulation for server applications.

The component that abstracts
the underlying hardware and arbitrates hardware resources between different
competing virtual machines is called a hypervisor. The hypervisor can run
directly on the hardware, in which case it is called a bare metal hypervisor.
Otherwise, it runs inside an operating system with the help of the operating
system kernel.

VMware is a proprietary virtualization solution, and exists in
desktop and server hypervisor variants. It is well supported and used in many
organizations. The server variant changes names sometimes; currently, it's called
VMware ESX, which is a bare metal hypervisor.
KVM is a virtualization solution for Linux. It runs inside a Linux host
operating system. Since it is an open source solution with no per-instance
licensing costs, it is usually much cheaper than proprietary solutions and is
therefore popular with organizations that have massive amounts of virtualization.
Xen is another type of virtualization which, amongst other features, has
paravirtualization. Paravirtualization is built upon the idea that if the guest
operating system can be made to use a modified kernel, it can execute with
greater efficiency. In this way, it sits somewhere between full CPU emulation,
where a fully independent kernel version is used, and container-based
virtualization, where the host kernel is used. VirtualBox is an open source
virtualization solution from Oracle. It is pretty popular with developers and
sometimes used with server installations as well but rarely on a larger scale.
Developers who use Microsoft Windows on their developer machines but want to
emulate Linux server environments locally often find VirtualBox handy. Likewise,
developers who use Linux on their workstations find it useful to emulate Windows
machines.
Executing code on the client:
Several of the configuration management systems described here allow you to
reuse the node descriptors to execute code on matching nodes.
The configuration management systems are:
• Puppet
• Ansible
• PalletOps
• Chef
• SaltStack
• Vagrant
• Docker
• Kubernetes
Ansible:
Ansible is a deployment solution that favors simplicity. The Ansible architecture
is agentless; it doesn't need a running daemon on the client side like Puppet does.
Instead, the Ansible server logs in to the Ansible node and issues commands over
SSH in order to install the required configuration. While Ansible's agentless
architecture does make things simpler, you need a Python interpreter installed on
the Ansible nodes. Ansible is somewhat more lenient about the Python version
required for its code to run than Puppet is for its Ruby code to run, so this
dependence on Python being available is not a great hassle in practice. Like
Puppet and others, Ansible focuses on configuration descriptors that are
idempotent. This basically means that the descriptors are declarative and the
Ansible system figures out how to bring the server to the desired state. You can
rerun the configuration run, and it will be safe, which is not necessarily the case
for an imperative system.
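As a sketch of what such an idempotent descriptor looks like, here is a minimal, hypothetical playbook; the host group and package names are assumptions made up for illustration. Each task declares a desired state rather than a command to run, so replaying the playbook is safe:

```yaml
# Hypothetical example: host group and package names are made up.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Rerunning this play against a node already in the desired state reports no changes, which is the idempotence described above.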
PalletOps:
PalletOps is an advanced deployment system, which combines the declarative
power of Lisp with a very lightweight server configuration.
PalletOps takes Ansible's agentless idea one step further. Rather than needing a
Ruby or Python interpreter installed on the node that is to be configured, you
only need ssh and a Bash installation. PalletOps compiles its Lisp-defined DSL
to Bash code that is executed on the slave node. These requirements are so
simple that you can use PalletOps on very small and simple servers, even phones.
On the other hand, while there are a number of support modules for Pallet,
called crates, there are fewer of them than there are for Puppet or Ansible.
Chef:
Chef is a configuration management technology used to automate infrastructure
provisioning. It is based on a Ruby DSL. It is used to streamline the tasks of
configuring and managing a company's servers, and it can integrate with any
cloud technology. In DevOps, we use Chef to deploy and manage servers and
applications, both in-house and in the cloud.
SaltStack:
SaltStack, also known as Salt, is a configuration management and orchestration
tool. It uses a central repository to provision new servers and other IT
infrastructure, to make changes to existing ones, and to install software in IT
environments, including physical and virtual servers, as well as the cloud.
Vagrant:
Vagrant is a tool for working with virtual environments, and in most
circumstances, this means working with virtual machines. Vagrant provides a
simple and easy-to-use command-line client for managing these environments,
and an interpreter for the text-based definitions of what each environment looks
like, called Vagrantfiles. Vagrant is open source, which means that anyone can
download it, modify it, and share it freely. Vagrant supports several virtualization
providers, and VirtualBox is a popular provider for developers.
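A Vagrantfile is Ruby code evaluated by Vagrant. The following minimal sketch (the box name and memory size are arbitrary examples) defines a single VirtualBox-backed machine:

```ruby
# Minimal Vagrantfile sketch; box name and memory size are examples.
Vagrant.configure("2") do |config|
  # The base image ("box") to build the virtual machine from.
  config.vm.box = "ubuntu/jammy64"

  # Use the VirtualBox provider and cap the machine's memory.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
end
```

Running vagrant up in the directory containing this file creates and boots the machine; vagrant destroy disposes of it again.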
...............................................................................................................
The End