Economically Efficient Virtualization Over Cloud Using Docker Containers
All content following this page was uploaded by Krishan Kumar on 17 April 2019.
Abstract—In the computing era, a virtual version of a device or resource, such as a computer network resource, a server, a storage device, or a combination of these, is usually referred to as virtualization: the framework segregates the resource into one or more execution instances. A virtual machine (VM) is created on the host hardware by a piece of software called a hypervisor or VM manager. Nowadays, hypervisor-based virtualization is the most prevalent technology for virtual environments. It is flexible and performs equally well for almost any guest operating system (OS), but it imposes memory and CPU overhead. Container-based virtualization (or OS-level virtualization), in contrast, allows several isolated instances to run on top of a single running kernel. Unlike the existing virtualization approach, containerization does not execute a VM as a complete OS instance but as a partial instance of the OS, which reduces the CPU and memory overhead and also provides portability due to the small size of containers. Docker is a tool that makes Linux Containers (LXC) easier to use. Docker containers can be deployed for application and system deployment using this lightweight containerization technique, which reduces economic overhead compared with VMs over the Cloud.

Keywords-Docker, Container, System Level Virtualization, Cloud, Virtual Machines, LXC.

I. INTRODUCTION

Docker[1] is a tool which provides lightweight virtualization at the system level by extending a common OS container format on Linux called Linux Containers (LXC). It employs cgroups, LXC, and the Linux kernel itself. Unlike conventional VMs, a Docker container does not include a separate OS; instead, it runs in the same OS as its host. This enables Docker to share the host OS resources and rely on the kernel functionality provided by the underlying LXC and cgroups technologies, which offer resource isolation (memory, CPU, I/O, network, etc.) and thereby reduce the CPU and memory overhead. Docker also offers portability due to the small size of containers.

Hence, a container-based virtualization framework is a simple, lightweight virtualized environment that allows us to create containers holding all the dependencies of portable applications. It automates the deployment of any application as a portable, lightweight, self-sufficient container that will run virtually anywhere. Such a Docker container can enclose any payload and will run consistently on and between virtually any servers. A developer can build and test with a container on a laptop, and the same container will run at scale on VMs, public instances, in production, on OpenStack clusters, bare-metal servers, or combinations of the above. Docker is a unifying building block for automating distributed systems: database clusters, large-scale web deployments, continuous-integration systems, creation of lightweight private PaaS environments, service-oriented architectures, packaging automation and application deployment, testing automation and continuous deployment/integration, and scaling and deploying web apps, backends, and database services.

II. DOCKER TECHNOLOGY

Docker is an open source platform that automates the deployment of any application as a self-sufficient, portable, lightweight container that will run virtually anywhere[1]. It uses namespaces, cgroups, and AuFS as its underlying technologies.

A. Underlying Technologies

The power of Docker comes from the underlying technology[2] it is assembled from. The foremost features that Docker uses to provide easy containerization are summarized as follows: a) Namespaces provide isolation to a container. When we launch a container instance, Docker creates a set of namespaces for that instance[3]. This adds an isolation layer in which processes run under the illusion that they are the only processes on the system. b) Control groups (for CPU, memory, I/O, devices, etc.) enforce resource accounting and limiting for Docker containers. Applications running in isolation must be contained not just in terms of their file system and dependencies but also in terms of resources; control groups enable Docker to allot a fair share of the available hardware resources to each container. c) AuFS (a union file system) operates by building layers, which makes containers fast and very lightweight. Docker uses union file systems to provide the elementary building blocks for Docker containers.
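The three mechanisms above can be observed from the Docker command line. The following is a minimal sketch, assuming a local Docker daemon and an ubuntu image are available; the container name demo and the limit values are illustrative, and exact flags vary across Docker versions.

```shell
# Illustrative only: start a container with cgroup limits applied
docker run -d --name demo --memory 256m --cpu-shares 512 ubuntu sleep 600

# PID namespace: the container sees only its own processes
docker exec demo ps -ef

# Layered (union) storage driver backing the container, e.g. aufs
docker inspect --format '{{.GraphDriver.Name}}' demo

# Clean up the demo container
docker rm -f demo
```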
III. CLOUD COMPUTING

As per the definition of the National Institute of Standards and Technology (NIST), "Cloud computing[10] is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction."

A. Cloud Service Framework

The basic service models[11] for Cloud computing are described as follows: 1) Infrastructure as a Service (IaaS) renders access to underlying resources such as virtual machines, virtual storage, physical machines, etc.; 2) Platform as a Service (PaaS) dispenses a runtime environment for applications together with development and deployment tools; and 3) Software as a Service (SaaS) offers software to end users as a service.

B. OpenStack Cloud Computing Environment

OpenStack[12] is a Cloud computing project which aims to render Infrastructure as a Service (IaaS). It is a stack of open source projects that can be used to set up and run a computing environment, and it is designed to run on commodity hardware, e.g. x86 and ARM. The management capabilities of OpenStack in a Cloud environment (see figure 2) cover computing, storage, and networking services altogether.

C. OpenStack Services

In OpenStack, the computing controller node of the Cloud is known as Compute (Nova)[13]. OpenStack Object and Block Storage are used by VM instances for storage services. The Block Storage service grants users the ability to create block storage devices and to attach and detach them from VM instances dynamically through the API or the dashboard. OpenStack Networking dispenses software-defined networking. The Dashboard (Horizon) is a Python-based service used to manage the OpenStack services. Identity (Keystone) provides the authorization and authentication services for all activities in OpenStack. OpenStack Image (Glance) stores VM images as objects and retrieves them for the Cloud; it can be configured with different backend storage services, which support various VM image formats such as ISO, virtual disk images, etc.

IV. IMPLEMENTATION

First of all, we installed OpenStack with Ubuntu 12.04 on our host machine following the OpenStack installation steps[13], and then installed Docker following its installation steps[14]. The amalgamation of Docker and the OpenStack Cloud then proceeds as follows.

A. Docker with OpenStack Cloud

We have deployed OpenStack, named "dCloud", using containerization instead of creating VMs. It provides us the facility to create Docker containers with extensive network capabilities.

(a) Docker virt driver (hypervisor)[15]

(b) Building a Docker image from a Dockerfile

(c) Launching an instance of a Docker container

Figure 3: Working of Docker with OpenStack Cloud
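The image flow through the Docker virt driver can be sketched as a terminal session. This is a hypothetical transcript assuming the legacy nova-docker setup described in the text; the image name ubuntu and the flavor m1.small are illustrative, and the commands use the old glance/nova CLIs of that era.

```shell
# Illustrative only: place a Docker image into Glance, then boot it via Nova
docker pull ubuntu

# Stream the saved image into Glance, marking it with the docker container format
docker save ubuntu | glance image-create --name ubuntu \
    --container-format docker --disk-format raw --is-public True

# Nova then launches it as a container through the Docker virt driver
nova boot --image ubuntu --flavor m1.small ubuntu-container
```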
Docker easily performs push and pull image operations against the OpenStack storage through a Docker registry amalgamated with Nova and Glance (see figure 3(a)). The Nova driver embeds a small HTTP client which communicates with Docker over its internal REST API or a UNIX socket to govern the Docker containers and collect information about them. The configuration to enable Docker is as follows: Nova is configured to use the Docker virt driver, so the Nova configuration file "/etc/nova/nova.conf" is edited to set "compute_driver=novadocker.virt.docker.DockerDriver", whereas Glance is configured to accept the Docker container format, so the Glance configuration file is edited to set "container_formats=docker". A Docker container can be created either from a Docker binary image or from a Dockerfile. We created our own Dockerfile; its content is available at "https://registry.hub.docker.com/u/dockberwal/dock-berwal/dockerfile/". We built the Docker image (see figure 3(b)) on top of ubuntu with the command "# docker build -t dockberwal/dock-berwal .", which tags the image as dockberwal/dock-berwal. An instance of the image dockberwal/dock-berwal is then launched (see figure 3(c)), where -i and -t are the options that make the session interactive between the host machine and the Docker container.

... using a radius server. The results are then collected in a resultant container (CR) at the controller node; hence we are able to get the entire result in one CR at the controller node. If some computing node becomes overloaded due to a very large number of containers, we can transfer some containers to another computing node to balance the load, which is very easy using the Docker registry (see figure 4).

V. EXPERIMENTS AND RESULTS

A. Tool Installation in Docker Container

Using the command "# apt-get install -y <package name>", we can install any essential tool inside a Docker container. We installed nano as an editor inside the Docker container (see figure 5).
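The installation step above can be sketched as follows. This is a hypothetical session run inside the container, with nano as the example package from the text.

```shell
# Inside the container: refresh the package index first
apt-get update

# -y answers the confirmation prompt automatically
apt-get install -y nano

# Verify the editor is now available
nano --version
```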
Figure 4: Docker containers over Cloud

Figure 6: Dumping and Restoring mechanism
... a system crash. We simulated this situation with a shell script abc.sh, executed inside the created Docker container, which prints the numbers 1 through 50 and takes user input twice: after printing 21 (see figure 7(a)) and after printing 42 (see figure 7(d)). When the I/O branch for user input has to take place, we stop the container running the current program and restart it after the user input is complete, so that it continues printing from number 22 onwards; the container is thus able to start from the previous state where we stopped it.

(a) Running container with an application

Therefore, we first obtain the process id belonging to the running container (see figure 7(b)). CRIU (Checkpoint/Restore In Userspace) is called with its shell-job option, since the process to dump is part of a session and holds the slave end of a TTY pair. Hence, the session of the process (e.g. PID 8987) is dumped by detaching it from other terminal sessions and dependent jobs through the shell-job handling; if everything is fine, CRIU returns the message OK.
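A minimal sketch of this dump/restore cycle, assuming CRIU is installed and run as root; the images directory /tmp/ckpt and the log file names are illustrative, while PID 8987 is the one from the text.

```shell
# Illustrative only: checkpoint the shell-job process, then restore it later
mkdir -p /tmp/ckpt

# Dump the process tree rooted at PID 8987 into image files; exits 0 on success
criu dump -t 8987 -D /tmp/ckpt --shell-job -o dump.log

# Later: rebuild the process from the saved images
criu restore -D /tmp/ckpt --shell-job -o restore.log
```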
(b) Processes running inside Docker container

We store the memory pages in the form of image files. dump.log (see figure 7(c)) and restore.log are the two main log files: dump.log contains the stack, ptrace, and syscall addresses, etc., while during restore the addresses are resolved, the dumped data is read back from the other image files, and the entire state is restored, as recorded in restore.log. Now we try to restore the entire state back. After the container is restarted, it must complete the pending I/O and print 22 as the next number (see figure 7(d)). Thus, the container is restarted with its previously stored state successfully. Hence, we can pull the container anywhere and run it again from its crashed state.
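The listing of abc.sh does not appear in the text, so the following is a reconstruction from its description: print 1 through 50 and block for user input after 21 and after 42. The function name and the non-interactive input feed are illustrative.

```shell
# Reconstructed abc.sh behaviour: print 1..50, pausing after 21 and after 42
count_to_fifty() {
  i=1
  while [ "$i" -le 50 ]; do
    echo "$i"
    if [ "$i" -eq 21 ] || [ "$i" -eq 42 ]; then
      read _unused    # block here until the user presses Enter
    fi
    i=$((i + 1))
  done
}

# Feed the two expected inputs non-interactively
printf '\n\n' | count_to_fifty
```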
Table I: Comparison of VMs versus Docker containers

Virtualization  Start time  Stop time  Time to launch      Max. instances  Overhead [Memory,
technique       [sec.]      [sec.]     an instance [min.]  per machine     I/O, CPU, etc.]
VMs             30-40       5-10       >20                 <10             1 OS
Containers      <0.005      <0.005     <0.5                >100            1 process (~ zero)

D. Docker versus KVM Network Performance

KVM uses less bandwidth than Docker with a dedicated port. However, Docker with port forwarding utilizes less than approximately 50% of the KVM bandwidth of 31 Gb/s. The network performance of Docker against KVM over the Cloud is shown in figure 8.

[3] Krishan Kumar, Docker Introduction: Basics. [Online] Available: https://www.youtube.com/watch?v=18LTGgTlKwAr. Retrieved Feb. 2016.

[4] Docker's Architecture: Major Docker Components. [Online] Available: https://docs.docker.com/introduction/understanding-docker/#what-are-the-major-docker-components. Retrieved May 2016.

[5] Elements of Docker: Containers, Images and Dockerfile. [Online] Available: http://docs.docker.com/introduction/working-with-docker/#elements-of-docker. Retrieved March 2016.

[6] Potential Use of Docker in Digital Era. [Online] Available: http://blog.flux7.com/blogs/docker/8-ways-to-use-docker-in-the-real-world. Retrieved Dec. 2015.

[7] Dana P., Interoperability and Portability between Clouds: Challenges and Case Study. In: Proceedings of the 4th European Conference, ServiceWave 2011, Poznan, LNCS 6994, 2011, pp. 62-74.