
Autonomic Orchestration of Containers: Problem Definition and Research Challenges


Emiliano Casalicchio
Blekinge Institute of Technology
Karlskrona, Sweden
[email protected]

ABSTRACT
Today, a new technology is changing the way cloud platforms are designed and managed. This technology is the container. A container is a software environment in which one installs an application or application component together with all the library dependencies, the binaries, and the basic configuration needed to run the application. Container technology promises to solve many cloud application issues, for example the application portability problem and the virtual machine performance overhead problem. The cloud industry is adopting containers both for internal usage and as a commercial offering. However, we are far from the maturity stage and there are still many research challenges to be solved. One of them is container orchestration, which makes it possible to define how to select, deploy, monitor, and dynamically control the configuration of multi-container packaged applications in the cloud. This paper presents the problem of autonomic container orchestration, analyzes state-of-the-art solutions and discusses open challenges.

CCS Concepts
• Computing methodologies~Self-organization • Computer systems organization~Cloud computing • Information systems~Computing platforms • Applied computing~Service-oriented architectures • Theory of computation~Distributed computing models

Keywords
Container; Docker; Cloud Computing; Autonomic Computing; Service Orchestration.

1. INTRODUCTION
Operating system and application virtualization, also known as containers (or Docker containers, or simply Docker [1]), became popular in 2013 with the launch of the Docker open source project (docker.com) and with the growing interest of PaaS providers [2] and Internet service providers [3]. A container is a software environment in which one can install an application or application component (the so-called microservice) and all the library dependencies, the binaries, and the basic configuration needed to run the application. Containers provide a higher level of abstraction for process lifecycle management, with the ability not only to start and stop but also to upgrade and release a new version of a containerized service in a seamless way. IBM researchers "believe that the new cloud operating environment will be built using containers as a foundation. No virtual machines…" [4].

Containers became so popular because they potentially solve many cloud application issues. The "dependency hell" problem, typical of complex distributed applications: containers make it possible to separate components, wrapping up the application with all its dependencies in a self-contained piece of software that can be executed on any platform that supports the container technology. The application portability problem: a microservice can be executed on any platform supporting containers and, moreover, the Docker container management framework is compliant with the TOSCA cloud portability framework [5][6]. The performance overhead problem: containers are lightweight and introduce lower overhead compared to VMs. The literature shows that the overhead introduced by containers, compared to bare-metal installations, is around 10%, while hardware virtualization (classical VMs) introduces an overhead of about 40% [7]; moreover, launching and shutting down a container requires 1-2 seconds rather than minutes. Besides, the concept of microservices and the portability of containers make it possible to satisfy typical constraints imposed by legislation and regulation, e.g., data sovereignty and vendor lock-in.

For all these reasons, and more, the cloud industry has adopted container technology both for internal usage [8] and for offering container-based services and container development platforms [2]. Examples are Google Container Engine [8], Amazon ECS [9], Alauda (alauda.io), Seastar (seastar.io), Tutum (tutum.com), and Azure Container Service (azure.microsoft.com). Containers are also the state-of-the-art solution for deploying large-scale applications, for example big data applications requiring high elasticity in managing a very large number of concurrent components (e.g. [10][11]).

Despite this wide interest in containers, we are far from the maturity stage and there are still many challenges to be solved, for example: the need to reduce networking overheads compared to hypervisors; the need for secure resource sharing and isolation to enable multi-tenancy; the need to improve container monitoring capabilities; the need to improve container resource management at run time (e.g., vertical scaling of a container is not possible); and the need to improve orchestration policies and adaptation models by adding autonomic capabilities.

This work defines and discusses the problem of container orchestration with a focus on autonomic mechanisms. The paper is organized as follows. Section 2 defines container orchestration. Section 3 discusses the related work. The research problem of autonomic container orchestration is formulated in Section 4. Research challenges and final remarks are presented in Section 5.
2. CONTAINER ORCHESTRATION
Orchestration is the set of operations that cloud providers and application owners undertake (either manually or automatically) to select, deploy, monitor, and dynamically control the configuration of resources in order to guarantee a certain level of quality of service [6]. In the same way, container orchestration allows cloud and application providers to define how to select, deploy, monitor, and dynamically control the configuration of multi-container packaged applications in the cloud.

Container orchestration is not only concerned with the initial deployment of multi-container applications; it also covers their management at runtime, for example: scaling a multi-container application as a single entity; controlling multi-tenant container-based applications; composing containers into highly available software architectures; and optimizing the networking of the application, e.g., executing computation near the data.

Because of the highly dynamic and complex environment in which containers operate, the orchestration actions must be adapted automatically at run time, without human intervention, and that entails autonomic orchestration mechanisms.

Today, container orchestration frameworks are in their infancy and do not include any autonomic features [12]. Cloudify (getcloudify.org) and Kubernetes (kubernetes.io) are the main TOSCA-compliant implementations allowing orchestration of Docker containers. However, how to execute and orchestrate containers in a distributed environment without relying on hypervisors is still an open issue. Docker Swarm requires a static setup of the cluster nodes. CoreOS (coreos.com) is a first step in this direction, but it is a young solution and, again, it is affected by the standardization problem. The auto-scaling policy implemented in Kubernetes is a simple threshold-based algorithm that only uses CPU usage metrics. The scheduling and load distribution policy implemented in Swarm is rather naïve.
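To make the limitations of such policies concrete, the following minimal sketch (in Python, with hypothetical get_avg_cpu_utilization and set_replicas helpers standing in for a real monitoring source and a real orchestrator API; it is an illustration, not the actual Kubernetes algorithm) shows a purely reactive, CPU-threshold-based horizontal autoscaler of the kind described above.

```python
# Minimal sketch of a reactive, threshold-based horizontal autoscaler.
# Hypothetical helpers (get_avg_cpu_utilization, set_replicas) stand in for
# calls to a real monitoring system and orchestrator API.

def threshold_autoscaler(get_avg_cpu_utilization, set_replicas,
                         current_replicas, min_replicas=1, max_replicas=10,
                         upper=0.80, lower=0.30):
    """Return the new replica count for one control-loop iteration."""
    cpu = get_avg_cpu_utilization()          # average CPU utilization in [0, 1]
    replicas = current_replicas
    if cpu > upper and replicas < max_replicas:
        replicas += 1                        # scale out on high CPU only
    elif cpu < lower and replicas > min_replicas:
        replicas -= 1                        # scale in on low CPU only
    set_replicas(replicas)                   # ask the orchestrator to converge
    return replicas


if __name__ == "__main__":
    # Toy usage with a fake metric source, to show the control flow.
    import random
    state = {"replicas": 2}
    new = threshold_autoscaler(lambda: random.uniform(0.0, 1.0),
                               lambda n: state.update(replicas=n),
                               state["replicas"])
    print("new replica count:", new)
```

A policy of this kind reacts to a single metric and cannot express the multi-metric, QoS-, energy- and legislation-aware adaptation discussed in Sections 4 and 5.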
To realize autonomic container orchestration, many research problems must be addressed, among them: resource management at run time, synchronization between design and execution environments, monitoring, performance profiling and characterization, performance modeling, and the definition of optimization models for orchestration and adaptation.

3. RELATED WORK
Autonomic orchestration of Docker containers can leverage more than ten years of research results in the fields of autonomic computing [15], autonomic service-oriented systems [16], and autonomic cloud computing [17][18].

A recent survey on service level management in the cloud [17] found that the monitor-analyze-plan-execute (MAPE) architecture style [14] is predominant in autonomic cloud systems, and that heuristics (e.g. [19][13]) and optimization policies (e.g. [20][13][21]) are the most commonly used techniques for resource allocation. In [18] the authors review the literature on autonomic QoS-aware resource management in cloud computing since 2009. Both [17] and [18] show that researchers mainly consider time, availability, cost (in general), and energy consumption as the main constraints or objectives of the adaptation. A few works contemplate security, but none considers constraints imposed by legal rules [22][23], e.g., sovereignty of data.

The idea of containers dates back to 1992 [24] and matured over the years with the introduction of Linux namespaces [25] and the LXC project (linuxcontainers.org), a solution designed to execute full operating system images in containers. From operating system virtualization the idea has moved to application containers [1]. The experiments in [7] show that Docker performs like bare-metal systems and outperforms KVM in most cases, but that it is weak for I/O- and network-intensive workloads. In [26] the authors show that, for specific memory configurations, Docker can perform slightly better than VMs while executing scientific workloads.

The adoption of container technologies calls for new autonomic management solutions. An early study on container management is presented in [27]. There, the authors compare VM-based and Elastic Application Container based resource management with regard to feasibility and resource efficiency; the results show that the container-based solution outperforms the VM-based approach in both respects. Today, there are technologies that specifically support container orchestration, but their level of automation is very naïve. CoreOS integrates an orchestrator, called fleet, that supports live container migration from one host to another and the possibility to modify the environment of running containers at runtime. Google Kubernetes adopts a mixed approach to virtualization that allows scaling both the computing resources available to containers and the number of containers available to applications; in that way, applications can rapidly scale up and down according to their needs. In [12] the authors analyze the state of the art and challenges in container orchestration, pointing out the high fragmentation of technologies, the lack of a standard image format and the immaturity of monitoring systems. C-Port [28] is the first example of an orchestrator that makes it possible to deploy and manage containers across multiple clouds. The authors plan to address the issues of resource discovery and coordination, container scheduling and placement, and dynamic adaptation; however, the research is at an early stage. In terms of orchestration policy, they developed a constraint-programming model that can be used for dynamic resource discovery and selection. The constraints they considered are availability, cost, performance, security, and power consumption.

4. RESEARCH PROBLEM
The orchestration of containers is a broad research topic because it covers the whole life-cycle management. In this work we focus on the elastic properties of container management over distributed datacenters and in multi-tenant environments. We concentrate our attention on why, when, where and how to deploy, launch, run, migrate and shut down containers. All these actions are defined in what is called an orchestration or adaptation policy. Why an orchestration action should occur is determined by specific events such as an increase in the workload intensity or volume, the arrival of a new tenant, a change of SLA, node failures, a critical node load/health state, and so on. Figure 1 shows our approach to autonomic container orchestration. It is based on the classical MAPE-K cycle [14].
[Figure 1. Container life cycle: at development time, the application code is built into a container image and pushed to a container image registry; at execution time, the image is searched, pulled and run by a container engine (e.g., Docker) on the host OS.]

[Figure 2. The container orchestration approach: an autonomic controller (Monitor, Analyzer, Planner and Executor components over shared knowledge) interacts with a container orchestrator that manages the container engines (CE) running on clusters of nodes and serving clients.]


Containers are stored in repositories called Container Image Registries. The orchestrator also decides when to adapt the system (e.g., to start the execution of a container) and where the adaptation will take place (e.g., on which computing node containers will run). Computing nodes are grouped in clusters that can be part of the same datacenter or can be geographically distributed. Each computing node in a cluster can run one or more containers, and the execution of containers is managed by the container engine (CE) on the node (see Figure 2). The orchestrator communicates with the container engine of each node to coordinate the execution of the containers. The decisions taken by the orchestrator are based on the system and environment state information collected by the Monitor component and analyzed by the Analyzer component (see Figure 1). The Planner component executes the orchestration policy to find the appropriate system configuration, that is, it solves a system model or runs a heuristic algorithm. Typically, the adaptation policy finds the optimal or sub-optimal system configuration that maximizes the provider utility and satisfies non-functional and functional constraints. The Executor component implements the adaptation plan by interacting with the Container Orchestrator component (e.g., Kubernetes). How the container is deployed and executed is another open issue. In the problem description we envisioned a centralized approach to orchestration; however, decentralized solutions, i.e., choreography, also apply and must be carefully investigated.
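As an illustration of the envisioned control loop, the following minimal Python skeleton (a sketch only, assuming generic, hypothetical monitor/analyzer/planner/executor callables rather than the API of any specific orchestrator) wires the MAPE-K components together around a shared knowledge base.

```python
# Minimal MAPE-K skeleton for container orchestration (illustrative sketch).
# The monitor/executor callables are hypothetical stand-ins for a real
# monitoring pipeline and a real container orchestrator (e.g., Kubernetes).
import time


class MapeKController:
    def __init__(self, monitor, analyzer, planner, executor):
        self.knowledge = {}          # shared knowledge: state, models, policies
        self.monitor = monitor       # collects system/environment state
        self.analyzer = analyzer     # detects events (overload, failures, ...)
        self.planner = planner       # computes a new configuration (the policy)
        self.executor = executor     # enacts the plan via the orchestrator

    def loop_once(self):
        state = self.monitor()                     # Monitor
        self.knowledge["state"] = state
        symptoms = self.analyzer(state)            # Analyze
        if symptoms:                               # adapt only when needed
            plan = self.planner(state, symptoms, self.knowledge)   # Plan
            self.executor(plan)                    # Execute
            self.knowledge["last_plan"] = plan

    def run(self, period_s=10.0, iterations=3):
        for _ in range(iterations):
            self.loop_once()
            time.sleep(period_s)


# Toy usage: detect CPU overload and plan one more replica.
ctl = MapeKController(
    monitor=lambda: {"cpu": 0.9, "replicas": 2},
    analyzer=lambda s: ["overload"] if s["cpu"] > 0.8 else [],
    planner=lambda s, sym, k: {"replicas": s["replicas"] + 1},
    executor=lambda plan: print("apply:", plan),
)
ctl.run(period_s=0.0, iterations=1)
```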
5. RESEARCH CHALLENGES
As pointed out in the previous sections, state-of-the-art mechanisms for container orchestration should be enhanced by introducing models and algorithms for runtime self-adaptation. Among the many research challenges, an urgent answer is required for the following.

Monitoring, profiling and characterization. Monitoring of containers includes monitoring of the containerized environment (i.e., of the application) and monitoring of the container engine/platform. Monitoring techniques and tools used at the operating system and application levels do not capture a wide range of QoS and health-state metrics for containers. Moreover, there is no commonly agreed definition of QoS metrics for container-based systems. Docker offers the docker stats command, which returns CPU and memory utilization for each running container; more detailed CPU, memory and network statistics can be accessed through the /containers/(id)/stats API (docs.docker.com).
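As an example of what such a probe can already collect today, the sketch below uses the docker Python SDK (docker-py) to take a one-shot sample from the Docker Engine stats endpoint; it assumes a reachable local Docker daemon, and the exact keys of the stats payload may differ across Docker versions.

```python
# Sketch of a container monitoring probe built on the Docker Engine stats API,
# via the "docker" Python SDK (docker-py). Requires a reachable Docker daemon;
# the layout of the stats payload may vary across Docker versions.
import docker


def sample_container_stats():
    client = docker.from_env()
    samples = {}
    for container in client.containers.list():       # running containers only
        s = container.stats(stream=False)             # one-shot /containers/(id)/stats
        mem = s.get("memory_stats", {})
        cpu = s.get("cpu_stats", {}).get("cpu_usage", {})
        samples[container.name] = {
            "mem_usage_bytes": mem.get("usage"),
            "mem_limit_bytes": mem.get("limit"),
            "cpu_total_ns": cpu.get("total_usage"),   # cumulative CPU time
        }
    return samples


if __name__ == "__main__":
    for name, metrics in sample_container_stats().items():
        print(name, metrics)
```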
In a recent work [29], the authors modify Docker and Docker Swarm in order to monitor the I/O capacity and utilization of the containers, with the goal of controlling the QoS level of a Docker cluster.

Performance characterization studies are also limited; an example is [7]. Profiling and characterization of the performance of containers and container orchestration technologies is fundamental for the definition of performance models and autonomic orchestration policies.

Performance models. Validated performance models and energy consumption models of container-based systems do not yet exist. Performance models are widely used in autonomic computing as a representation of the system that must be adapted and as a tool to determine the reconfiguration actions needed to maintain the desired level of service. An alternative approach is the use of machine learning techniques to determine the most appropriate reconfiguration action (e.g., reinforcement learning [30]).
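As a toy illustration of the latter direction (a sketch only, not the method of [30]), a tabular Q-learning agent can learn which reconfiguration action to apply in a given load state; the states, actions and the fake_env simulator below are deliberately simplistic and hypothetical.

```python
# Toy tabular Q-learning sketch for choosing reconfiguration actions
# (illustrative only; states, actions and rewards are deliberately simplistic).
import random
from collections import defaultdict

STATES = ["low_load", "normal", "high_load"]
ACTIONS = ["scale_in", "no_op", "scale_out"]


def q_learning(env_step, episodes=200, alpha=0.1, gamma=0.9, eps=0.2):
    """env_step(state, action) -> (next_state, reward) is a hypothetical
    simulator of the containerized system's reaction to an adaptation action."""
    q = defaultdict(float)                       # Q[(state, action)]
    state = "normal"
    for _ in range(episodes):
        if random.random() < eps:                # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = env_step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q


# Trivial simulated environment: scaling out under high load is rewarded.
def fake_env(state, action):
    reward = 1.0 if (state == "high_load" and action == "scale_out") else 0.0
    return random.choice(STATES), reward


policy = q_learning(fake_env)
print({s: max(ACTIONS, key=lambda a: policy[(s, a)]) for s in STATES})
```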
Adaptation models for container orchestration. As already pointed out, the container orchestration policies in use today are very simple and no autonomic mechanisms are employed. This is also a consequence of the lack of performance characterization and performance models. What the industry needs is a framework for QoS-aware, energy-aware and legislation-aware optimal adaptation of container orchestration. This framework should allow the definition of system models and of QoS, energy and legal constraints, and it should find optimal adaptation policies for container orchestration at run time.

Such models, to be effective, must account for real-system limitations such as the impossibility of changing the resources assigned to a container at run time (which makes vertical scaling impossible) and the practical difficulty of synchronizing the design and execution environments [31]. For example, if the execution environment is modified at run time for orchestration needs, that change will impact the existing system architecture design, which must be kept synchronized. Of course, with time, advancements in resource management and synchronization will overcome these limitations, and new adaptation policies can be designed.
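To give a flavor of the kind of policy such a framework should produce, the toy sketch below (hypothetical candidate configurations and a placeholder utility function) filters candidate configurations against cost, response-time and data-sovereignty constraints and selects the feasible one with the highest provider utility.

```python
# Toy constraint-aware adaptation policy: enumerate candidate configurations,
# discard those violating QoS, cost or legal (data sovereignty) constraints,
# and pick the feasible one with the highest utility. Illustrative only.

def choose_configuration(candidates, max_cost, max_resp_time, allowed_regions):
    feasible = [
        c for c in candidates
        if c["cost"] <= max_cost
        and c["resp_time"] <= max_resp_time
        and c["region"] in allowed_regions        # legislation-aware constraint
    ]
    if not feasible:
        return None                               # no adaptation satisfies the constraints
    # Provider utility: here simply revenue minus cost (a placeholder model).
    return max(feasible, key=lambda c: c["revenue"] - c["cost"])


candidates = [
    {"name": "3 replicas, EU", "cost": 6.0, "resp_time": 0.20, "region": "EU", "revenue": 10.0},
    {"name": "2 replicas, US", "cost": 4.0, "resp_time": 0.35, "region": "US", "revenue": 10.0},
    {"name": "4 replicas, EU", "cost": 8.0, "resp_time": 0.15, "region": "EU", "revenue": 10.0},
]
best = choose_configuration(candidates, max_cost=7.0, max_resp_time=0.30,
                            allowed_regions={"EU"})
print("selected configuration:", best["name"] if best else "none feasible")
```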
To conclude, the way toward the next generation of cloud computing platforms is based on application virtualization rather than on hardware virtualization, and it requires a strong contribution from the research community, focused not only on the orchestration problem discussed above but also on different areas, for example: improving networking management to reduce overheads compared to hypervisor networking; enhancing secure resource sharing and isolation to enable multi-tenancy; and refining application design methodologies to find the right balance between micro-service fragmentation and performance.

6. ACKNOWLEDGMENTS
This work is funded by the research project "Scalable resource-efficient systems for big data analytics" financed by the Knowledge Foundation (grant: 20140032) in Sweden.

7. REFERENCES
[1] D. Merkel. (2014). Docker: lightweight Linux containers for consistent development and deployment. Linux J. 2014.
[2] R. Dua, A. R. Raja, D. Kakadia. (2014). Virtualization vs containerization to support PaaS. In Proc. of the 2014 IEEE Int'l Conf. on Cloud Engineering (IC2E'14), pp. 610-614.
[3] S. Natarajan, A. Ghanwani, D. Krishnaswamy, R. Krishnan, P. Willis and A. Chaudhary. An Analysis of Container-based Platforms for NFV. IETF draft, Apr. 2016.
[4] IBM Container Cloud Operating System project, http://researcher.watson.ibm.com/researcher/view_group.php?id=6302
[5] OASIS. (2013). Topology and orchestration specification for cloud applications. Tech. Rep. Ver. 1.0, OASIS Standard.
[6] B. D. Martino, G. Cretella, A. Esposito. (2015). Advances in applications portability and services interoperability among multiple clouds. IEEE Cloud Computing 2(2), 22-28.
[7] W. Felter, A. Ferreira, R. Rajamony, J. Rubio. (2014). An Updated Performance Comparison of Virtual Machines and Linux Containers. IBM Research Report RC25482 (AUS1407-001), July 21, 2014.
[8] B. Burns, B. Grant, D. Oppenheimer, E. Brewer, and J. Wilkes. (2016). Borg, Omega, and Kubernetes. ACM Queue 14, 1 (January 2016), 24 pages.
[9] W. Vogels. Under the Hood of Amazon EC2 Container Service. 20 July 2015, http://www.allthingsdistributed.com/
[10] W. Gerlach, et al. (2014). Skyport: container-based execution environment management for multi-cloud scientific workflows. In Proc. of IEEE DataCloud '14, IEEE Press, Piscataway, NJ, USA, 25-32.
[11] D.-T. Nguyen et al. (2016). An Index Scheme for Similarity Search on Cloud Computing using MapReduce over Docker Container. In ACM IMCOM '16, ACM, New York, NY, USA, Article 60, 6 pages.
[12] A. Tosatto, P. Ruiu, A. Attanasio. (2015). Container-based orchestration in cloud: state of the art and challenges. In Proc. of the 9th Int'l Conf. on Complex, Intelligent, and Software Intensive Systems (CISIS '15), pp. 70-75.
[13] E. Casalicchio, D. A. Menasce, A. Aldhalaan. (2013). Autonomic resource provisioning in cloud systems with availability goals. In Proc. of the ACM Conference on Autonomic Computing (CAC '13), August 5-9, 2013, Miami, FL, USA.
[14] J. O. Kephart, D. M. Chess. (2003). The vision of autonomic computing. Computer 36(1), 41-50, Jan 2003.
[15] M. C. Huebscher and J. A. McCann. (2008). A survey of autonomic computing—degrees, models, and applications. ACM Comput. Surv. 40, 3, Article 7 (August 2008).
[16] A. L. Lemos, F. Daniel, and B. Benatallah. (2015). Web Service Composition: A Survey of Techniques and Tools. ACM Comput. Surv. 48, 3, Article 33 (December 2015).
[17] F. Faniyi and R. Bahsoon. (2015). A Systematic Review of Service Level Management in the Cloud. ACM Comput. Surv. 48, 3, Article 43 (December 2015), 27 pages.
[18] S. Singh and I. Chana. (2015). QoS-Aware Autonomic Resource Management in Cloud Computing: A Systematic Review. ACM Comput. Surv. 48, 3, Article 42 (December 2015), 46 pages.
[19] A. Beloglazov, J. Abawajy, and R. Buyya. (2012). Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing. Future Generation Comput. Syst. 28, 5 (2012), 755-768.
[20] D. Ardagna, B. Panicucci, M. Trubian, and L. Zhang. (2012). Energy-aware autonomic resource allocation in multitier virtualized environments. IEEE Trans. Services Comput. 5, 1 (2012).
[21] M. Maggio et al. (2012). Comparison of decision-making strategies for self-optimization in autonomic computing systems. ACM Trans. Auton. Adapt. Syst. 7, 4 (2012), 1-32.
[22] E. Casalicchio, M. Palmirani. (2015). A Cloud Service Broker with Legal-Rule Compliance Checking and Quality Assurance Capabilities. Procedia Computer Science, Elsevier.
[23] B. D. Martino, G. Cretella, A. Esposito. (2015). Towards a legislation-aware cloud computing framework. Procedia Computer Science (2015).
[24] R. Pike, D. Presotto, K. Thompson, H. Trickey, P. Winterbottom. (1993). The use of name spaces in Plan 9. SIGOPS Oper. Syst. Rev. 27, 2 (1993), 72-76.
[25] E. W. Biederman. (2006). Multiple instances of the global Linux namespaces. In Proc. of the 2006 Ottawa Linux Symposium.
[26] T. Adufu, J. Choi and Y. Kim. (2015). Is container-based technology a winner for high performance scientific applications? In Proc. of the 17th Asia-Pacific Network Operations and Management Symposium, Busan, 2015, pp. 507-510.
[27] S. He, et al. (2012). Elastic Application Container: A Lightweight Approach for Cloud Resource Provisioning. In Proc. of the IEEE 26th Int. Conf. on Advanced Information Networking and Applications, Fukuoka, 2012.
[28] M. Abdelbaky et al. (2015). Docker Containers across Multiple Clouds and Data Centers. In Proc. of the 2015 IEEE/ACM 8th Int. Conf. on Utility and Cloud Computing, Limassol, 2015.
[29] S. McDaniel, S. Herbein and M. Taufer. (2015). A Two-Tiered Approach to I/O Quality of Service in Docker Containers. In Proc. of the 2015 IEEE Int. Conf. on Cluster Computing.
[30] A. Pelaez, A. Quiroz and M. Parashar. (2016). Dynamic Adaptation of Policies Using Machine Learning. In Proc. of the 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), Cartagena, Colombia, 2016.
[31] F. Paraiso, S. Challita, Y. Al-Dhuraibi, P. Merle. (2016). Model-Driven Management of Docker Containers. In Proc. of the 9th IEEE International Conference on Cloud Computing (CLOUD), June 2016, San Francisco, United States.
