Theodor Borangiu · Damien Trentesaux · André Thomas · Duncan McFarlane (Editors)
Service Orientation in Holonic and Multi-Agent Manufacturing
Studies in Computational Intelligence
Volume 640
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: [email protected]
Editors

Theodor Borangiu, Faculty of Automatic Control and Computer Science, University Politehnica of Bucharest, Bucharest, Romania

Damien Trentesaux, University of Valenciennes and Hainaut-Cambrésis, Valenciennes, France

André Thomas, University of Lorraine, Épinal, France

Duncan McFarlane, Institute for Manufacturing, Engineering Department, Cambridge University, Cambridge, UK
Foreword
the moment, but the FIPA standards accepted 10–12 years ago are still in use and
accepted in the field of MAS.
The MAS philosophy applied to industrial control made it possible to think about and conceive new approaches and solutions. Progressively, products and semi-products started to be represented by SW agents able to communicate, negotiate and coordinate their activities, and not only in manufacturing and transport processes. The products became active elements during their execution life cycle. The PROSA-like way of thinking strongly influenced the field, so that not only semi-products but also humans came to be considered as resources represented by agents. This might be considered a significant technology breakthrough in the field of decentralized control and production management.
But to make distributed solutions increasingly intelligent, higher level agents required the deployment of more and more knowledge. This is why semantics was introduced, and ontology-based knowledge structures shared by agents became an obvious vehicle to reduce communication traffic and make the agents more intelligent. In some cases, the ontologies converged with WWW technology (or were combined with it). Direct communication and interaction with and among the devices (not only with their SW modules as virtual representations) became necessary to gain faster access to the physical devices, to the physical world. The Internet of Things appeared.
This development led to the new vision of the Factory of the Future formulated in the German governmental initiative Industry 4.0 (2013). This vision is essentially an extension of the trends in the field of distributed intelligent control, combined with new business models supported by the accelerated development of computing and communication. Industry 4.0 is based on the following principles:
• Integration of both the physical and virtual worlds using the Internet of Things
and Internet of Services.
• Vertical Integration along the enterprise axis, which means integration of all the information and knowledge-based systems in a company, from the real-time shop-floor control level up to the ERP and managerial systems at the top.
• Horizontal Integration along the value chain axis, which means integration of all business activities from the supply chain up to the product delivery phase (from suppliers to customers).
• Engineering Activities Integration along the life cycle axis from rough idea via
design, development, verification, production and testing up to product-lifecycle
management (from design to support).
The visions of the three integration axes are based on the following MAS
principles: cooperation of distributed autonomous units, ontology knowledge
sharing and big data analytics. Industry 4.0 solutions are more and more linked or
even coupled with the higher level information systems of the company. Their
implementations are influenced by the latest trends in SW engineering exploring
service-oriented architectures (SOA). The MAS technology remains to represent an
This volume gathers the peer reviewed papers which were presented at the fifth
edition of the International Workshop “Service Orientation in Holonic and
Multi-agent Manufacturing—SOHOMA’15” organized on 5–6 November 2015 by
the Institute for Manufacturing (IfM) of the University of Cambridge, UK in col-
laboration with the CIMR Research Centre in Computer Integrated Manufacturing
and Robotics of the University Politehnica of Bucharest, Romania, the LAMIH
Laboratory of Industrial and Human Automation Control, Mechanical Engineering
and Computer Science of the University of Valenciennes and Hainaut-Cambrésis,
France and the CRAN Research Centre for Automatic Control, Nancy of the
University of Lorraine, France.
SOHOMA scientific events have been organized since 2011 in the framework of the European project ERRIC, managed by the Faculty of Automatic Control and Computer Science of the University Politehnica of Bucharest.
The book is structured in seven parts, each one grouping a number of chapters
describing research in actual domains of the digital transformation in manufacturing
and trends in future manufacturing control: Part I: Applications of Intelligent
Products, Part II: Recent Advances in Control of Physical Internet and
Interconnected Logistics, Part III: Sustainability Issues in Intelligent Manufacturing
Systems, Part IV: Holonic and Multi-Agent System Design for Industry and
Services, Part V: Service Oriented Enterprise Management and Control, Part VI:
Cloud and Computing-Oriented Manufacturing, Part VII: Smart Grids and Wireless
Sensor Networks.
These seven evolution lines have in common concepts, methodologies and
implementing solutions for the Digital Transformation of Manufacturing (DTM).
The Digital Transformation of Manufacturing is the actual vision and initiative
about developing the overall architecture and core technologies to establish a
comprehensive, Internet-scale platform for networked production that will encap-
sulate the right abstractions to link effectively and scalably the various stakeholders
(product firms, manufacturing plants, material and component providers,
technology and key services providers) to enable the emergence of a feasible and
sustainable Internet economy for industrial production.
For the manufacturing domain, the digital transformation is based on the
following:
1. Instrumenting manufacturing resources (machines, robots, AGVs, ASRSs, product carriers, buffers, a.o.) and the environment (workplaces, material flow, tooling, a.o.), which allows: product traceability, production tracking, evaluation of resources' status and quality of services, preventive maintenance…
2. Interconnecting orders, products/components/materials, resources in a
service-oriented approach using multiple communication technologies: wireless,
broadband Internet, mobile applications.
3. Intelligent, distributed control of production by:
• New controls based on ICT convergence in automation, robotics, vision,
multi-agent control, holonic organization; the new controls enable the smart
factory.
• New operations based on product- and process modelling and simulation.
Ontologies are used as a “common vocabulary” to provide semantic
descriptions/abstract models of the manufacturing domain: core ontology—
modelling of assembly processes (resources, jobs, dependencies, a.o.); scene
ontology—modelling flow of products; events ontology—modelling various
expected/unexpected events and disruptions; these models and knowledge
representation enable the digital factory.
• Novel management of complex manufacturing value chains (production,
supply, sales, delivery, etc.) for networked, virtual factories: (a) across
manufacturing sites: logistics, material flows; (b) across the product life
cycle.
Research in the domain of DTM is driven by the last decades' trend in the goods market towards highly customized products and shorter product life cycles. This trend is expected to intensify in the near future, thus forcing companies into an exhaustive search for responsiveness, flexibility, cost reduction and increased productivity in their production systems, in order to stay competitive in such a new and constantly changing environment. In addition, there is a shift from a pure goods-dominant logic to a service-dominant logic, which has led to service orientation in manufacturing and to orienting the design, execution and utilization of the physical product as a vehicle for delivering generic or specific services related to that product (in “Product-Service Systems”).
How is this new vision of the digital transformation of manufacturing achieved? Reaching the above objectives requires solutions providing:
• Dynamic reconfigurability of production (re-assigning resource teams,
re-planning batches, rescheduling processes) to allow “agile business” in
manufacturing;
• Robustness to technical disturbances;
transformations, regardless of the methods used for their application. This allows a complete separation of the process specification from the knowledge on the production floor, making it implementable in any SoHMS platform providing the necessary MServices with the same application service ontology.
Service orientation is emerging at multiple organizational levels in enterprise
business, and leverages technology in response to the growing need for greater
business integration, flexibility and agility of manufacturing enterprises. Closely
related to IT infrastructures of Web services, the service-oriented enterprise
architecture represents a technical architecture, a business modelling concept, an
integration source and a new way of viewing units of control within the enterprise.
Business and process information systems integration and interoperability are
feasible by considering the customized product as “active controller” of the
enterprise resources—thus providing consistency between material and informa-
tional flows. The areas of service-oriented computing and multi-agent systems are getting closer, trying to deal with the same kind of environments formed by loosely coupled, flexible, persistent and distributed tasks. An example is the new approach of service-oriented multi-agent systems (SoMAS).
The unifying approach of the authors’ contributions for this Part V of the book
relies on the methodology and practice of disaggregating siloed, tightly coupled
business, MES and shop-floor processes into loosely coupled services and mapping
them to IT services, sequencing, synchronizing and orchestrating their execution.
Research is reported in: function block orchestration of services in distributed
automation and performance evaluation of Web services; MAS with
service-oriented agents for dynamic rescheduling work force tasks during opera-
tions; virtual commissioning-based development of a service-oriented holonic
control for retrofit manufacturing systems; security solution for service-oriented
manufacturing architectures that uses a public-key infrastructure to generate cer-
tificates and propagate trust at runtime.
Part VI is devoted to Cloud and Computing-Oriented Manufacturing, which
represent major trends in modern manufacturing. Cloud manufacturing (CMfg) and
MES virtualization were introduced as a networked and service-oriented manu-
facturing model, focusing on the new opportunities in networked manufacturing
area, as enabled by the emergence of cloud computing platforms. The cloud-based
service delivery model for the manufacturing industry includes product design,
batch planning, product scheduling, real-time manufacturing control, testing,
management and all other stages of a product’s life cycle.
CMfg derives not only from cloud computing, but also from related concepts
and technologies such as the Internet of Things—IoT (core enabling technology for
goods tracking and product-centric control), 3D modelling and printing (core
enabling technology for digital manufacturing). In CMfg applications, various
manufacturing resources and abilities can be intelligently sensed and connected into
a wider Internet, and automatically managed and controlled using both (either) IoT
and (or) cloud solutions. The key difference between cloud computing and CMfg is
that resources involved in cloud computing are primarily computational (e.g. server,
storage, network, software), while in CMfg all manufacturing resources and
abilities involved in the whole life cycle of manufacturing are aimed to be provided
for the user in different service models.
Papers in this section present resource virtualization techniques and resource sharing in manufacturing environments. The virtualization and modelling of resources and resource capabilities represent the starting point for the encapsulation of manufacturing services in the cloud. It is also shown that CMfg is clearly an applicable business model for 3D printing, a novel direct digital manufacturing technology. In the cyber-physical system (CPS) approach to manufacturing, a major challenge is to integrate the computational decisional components (i.e. the cyber part) with the physical automation systems and devices (i.e. the physical part) to create such a network of smart cyber-physical components at the MES and shop-floor levels. Some works
present the development of standardized interfaces for HMES that can be used to
access physical automation components by the cyber layer in CPS. A chapter of this
section investigates the software-defined networking (SDN) concept adoption for
the manufacturing product design and operational flow, by promoting the
logical-only centralization of the shop-floor operations control within the manu-
facturing shared-cloud for clusters of manufacturing networks.
Part VII gathers contributions in the field of Smart Grids and Wireless Sensor Networks management and control with multi-agent implementations. Technological advances in wireless sensor networks are enabling new levels of distributed intelligence in several forms, such as “active products” that interact with the working environment, and smart metering for monitoring the history of products over their entire life cycle and the status and performance of resources. These forms of distributed intelligence offer new opportunities for reducing myopic decision-making in manufacturing control systems, thereby potentially enhancing their sustainability.
Design of such MAS frameworks for distributed intelligent control and devel-
opment of applications integrating intelligent-embedded devices are reported in
Part VII for several representative domains. Thus, a solution for space system
management is proposed by creating a self-organizing team of intelligent agents,
associated with spacecraft modules, conducting negotiations and capable of both
planning their behaviour individually in real time and working in groups in order to
ensure coordinated decisions. Then, an embedded multi-agent system for managing
sink nodes and clusters of wireless sensor networks is proposed and finally
demonstrated in an oil and gas refinery application. To reduce the communication overhead in the MAS, the wireless sensor network is clustered, which leads to a hierarchical structure for the WSN composed of two types of sensor nodes: sink nodes (cluster heads) and anchor nodes (sending sensory data to the sink nodes), allowing for data aggregation. Finally, this section describes a methodology and
framework for the development of new control architectures based on uncertainty
management and self-reconfigurability of smart power grids.
The book offers a new integrated vision on complexity, Big Data and virtual-
ization in Computing-Oriented Manufacturing, combining emergent information
and communication technologies, control with distributed intelligence and MAS
implementation and total enterprise integration solutions running in truly distributed
and ubiquitous environments. The IMS philosophy adopts heterarchical and
Abstract The paper presents an intelligent hybrid control solution for the production of radiopharmaceuticals of the neutron-deficient radionuclide type, obtained by irradiation in cyclotrons. To achieve requirements such as the highest number of orders accepted daily, the shortest production time and safe operating conditions, a hybrid control system based on a dual architecture is developed: centralized HMES with ILOG planning, and decentralized parameter monitoring and control via SCADA. Experimental results are reported.
[Figure: production line structure: Cyclotron, Synthesis modules 1 and 2 (S5), Dispensers 1 and 2, Quality control]
Starting from the deadlines for product delivery defined by the clients (hospitals), the daily times at which orders should be launched for production execution are computed (the time intervals “( ]” in the timing of Fig. 2), considering additionally: (a) the estimated transportation time of the final product to the client, and (b) the testing period following the normal completion of any production process, when all resources are checked to verify that they are operational. If, during this test, a resource is found not to be operational, it will be substituted with its stand-in (e.g. resource 1 replaced with resource 2, as depicted in Fig. 2). A similar replacement is possible for the robotized dispensers in real time at resource breakdown during stage 3 (see the “breakdown resource 1” event represented in Fig. 2) [6].
Due to the specificity of the processes transforming and handling radioactive materials and products, the basic functions of the global production control system are service orientation and continuous monitoring of the parameters of: (1) manufacturing processes, (2) resources, and (3) the environment of the production rooms [9, 10].
Figure 3 shows the hybrid control architecture proposed for the manufacturing
system producing radiopharmaceuticals in shop floor processing mode. The
topology of the control architecture is multi-layered: (a) Manufacturing Execution
System (MES) layer—planning and resource allocation, data storage and reports
generation; (b) application layer with SCADA—parameter monitoring and adjust-
ing; (c) resource control layer.
The global control is exerted on these three layers by two subsystems:
1. Centralized HMES for: (a) long-term resource allocation, balancing the usage time of the resources replicated for stages 2 and 3 in the context of scheduled maintenance; (b) short-term (24 h) production planning, optimizing a cost function (e.g. minimizing manufacturing time, raw material waste, etc.); (c) managing the centralized data storage for executed product batches and generating reports.
Centralized HMES with Environment Adaptation …
[Figure 3: multi-layered hybrid control architecture. Centralized HMES layer (Production Planning / Order Processing / Centralized data storage / Reports generator), exchanging orders and reports with the hospital; production execution and environment monitoring (SCADA); resource control layer with PC dispenser controller and PLC1, PLC2, PLC3.]
2. Decentralized and reactive SCADA for: (a) monitoring and adjusting process
and environment (production rooms) parameters in response to unforeseen
disturbances in order to deliver requested, valid products at agreed deadlines,
(b) detecting resource failures and initiating the replacement process upon
receiving the authorization from HMES, and (c) collecting data during manu-
facturing for product traceability and description according to IP (Intelligent
Product) conventions [11].
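The failure handling in item 2(b) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the resource names, the `STAND_IN` table and the `HMES.authorize_replacement` interface are all assumptions introduced here to show the detect–authorize–replace flow.

```python
# Hedged sketch of SCADA failure handling (item 2b above): detect a
# resource failure, request authorization from the centralized HMES,
# then switch to the stand-in resource. All names are illustrative.

STAND_IN = {"resource_1": "resource_2", "dispenser_1": "dispenser_2"}

class HMES:
    def authorize_replacement(self, failed, stand_in):
        # Centralized decision: allow the swap only if a stand-in exists.
        return stand_in is not None

def handle_failure(hmes, active_resources, failed):
    """SCADA-side reaction to a resource breakdown during production."""
    stand_in = STAND_IN.get(failed)
    if hmes.authorize_replacement(failed, stand_in):
        active_resources[active_resources.index(failed)] = stand_in
    return active_resources

resources = handle_failure(HMES(), ["cyclotron", "resource_1"], "resource_1")
```

The split of responsibilities mirrors the text: the decision remains centralized in the HMES, while the reactive substitution itself is executed at the SCADA level.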
The proposed control model is based on the PROSA and ADACOR reference
architectures and is customized for the flow shop layout and operating mode, in
which the product recipe is configured directly by the client (Make To Order
production) and embedded into the production order [12, 13]. Another specific
feature of the control model is that the environment parameters such as pressure,
temperature, humidity, number of particles and radioactivity levels strongly affect
the production process. This is why the control model is extended with an entity—
the Environment Holon—that models the process rooms (cyclotron electrical room,
cyclotron vault, production room, dispenser technical isolator box, and quality
testing room) together with the instrumentation that measures and adjusts these
parameters.
S. Răileanu et al.
[Figure 4: Supervisor Holon (Production Planner, GUI, Production Database) interacting through the functions: set/monitor production room parameters, set order command, order traceability, operate/monitor resource]
Fig. 4 Entities, functions and interactions in the proposed holonic production control system
The structure of the control model is composed of the following entities (Fig. 4).
The Supervisor Holon (SH) is responsible for:
• Optimizing production: (a) resource allocation subject to (fixed) scheduled maintenance periods, re-assigning an alternative resource at breakdown (during execution in stage 3) or upon not-operational status (detected when checking resources after stage 3), and (b) operations planning subject to:
– Minimizing the duration of production execution considering required
quantities of radiopharmaceutical products and imposed/agreed delivery
times;
– Maximizing and balancing the utilization of resources subject to mainte-
nance periods and failure events;
– Online adaptation to variations of environmental parameters.
• Setting orders (associating the hospital order with the physical product); configuring process and production room parameters for the planned production and the assigned resources;
• Authorizing the adjustment of process rooms parameters (e.g., pressure, tem-
perature, relative humidity) upon request of the Environmental Holon;
• Centralizing data about resource status, process rooms parameters and product
execution; storing production logs/history files into a centralized and replicated
Production Database;
• Generating traceability reports and IP descriptions of radiopharmaceuticals;
• Keeping BOM (Bill Of Materials) updated.
The Resource Holons (RHs) encapsulate both the decisional/informational part
and the associated physical resources used in the production line: cyclotron, syn-
thesis unit, dispenser and quality test (laboratory equipment). Each physical
resource is controlled as an independent automation island, and the objective of the
The Order Holon (OH) holds all the information needed to fulfil a command
starting from the specifications of the hospital (client), production information used
for resource and environment parameterization and accompanying information
generated once the product leaves each production stage. The OH is an aggregate
entity consisting of: (1) the information needed for demand identification (product
type, radioactivity level, hospital #ID, and delivery time), execution of processes
(sequence of resources, operations and process parameters as resulted from pro-
duction planning) and traceability (sequence of resources, operations, execution
reports as resulted from the physical process) and (2) the physical product. From the
OH lifecycle depicted in Fig. 5 it can be seen that the physical product results at the
termination of stage 4 when the physical-informational association is performed.
The informational part of an OH for the different stages of its lifecycle is given
below (see Fig. 5); a generic structure was sought:
[Figure 5: OH lifecycle; states include failed order (not passed) and delivered order]
[Figure: interactions of GUI, Environment Holon, Production Database and Order Holon: (a) set production room parameters and adjust parameters through actuators; (b) store measured parameters in real time and log parameter evolution; trigger alarms if parameters are out of range; real-time update of production room parameters and production log of environment parameters]
Fig. 6 Real-time operating mode of EH for parameter monitoring and adjusting, and data log
– int: Quantity—the quantity of the product that will be delivered to the hospital;
– int: RFID—the code which associates the physical product with the current
information (“IP” in Fig. 5).
The Environment Holon (EH) checks if the process and environment param-
eters are in range, validates the operations executed by RHs and triggers alarms
when radioactivity levels exceed normal values or when the evolution of other
parameters endangers production or human security. Figure 6 illustrates the oper-
ating mode of the EH.
The parameters monitored by the EH are:
• Pressure: in cyclotron vault, production room and dispenser isolator box;
• Temperature: in cyclotron electrical room, production room and dispenser
isolator box;
• Humidity: in cyclotron electrical room, production room and dispenser room;
• Number of particles in the dispenser isolator box: if the number of particles is out of range (above the product safety threshold), the process waits for a maximum timeout of 30 min to allow this parameter to re-enter the range. If it re-enters the range, the process continues and dispensing (dilution and portioning) is delayed by the corresponding amount of time, which also delays delivery; otherwise production is abandoned and the production order fails;
• Radioactivity level: in production room, control room and dispenser room.
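The 30-minute particle-count timeout described above can be sketched as a simple polling loop. This is an illustrative sketch under stated assumptions: the minute-granularity polling, the `read_count` sensor callback and all names are introduced here, not taken from the paper's implementation.

```python
# Hedged sketch of the 30 min particle-count timeout described above;
# the sensor model and names are illustrative assumptions.

PARTICLE_TIMEOUT_MIN = 30  # maximum wait for the count to re-enter range

def wait_for_particles(read_count, threshold, timeout_min=PARTICLE_TIMEOUT_MIN):
    """Return (ok, delay_min): poll once per minute until the particle
    count is back at or below the safety threshold, or the timeout
    expires (in which case the production order fails)."""
    for minute in range(timeout_min + 1):
        if read_count(minute) <= threshold:
            return True, minute  # dispensing (and delivery) delayed by `minute`
    return False, timeout_min

# Simulated sensor whose count decays back into range after 12 minutes:
ok, delay = wait_for_particles(lambda t: 520 - 10 * t, threshold=400)
```

The returned delay is what propagates to the delivery time; since it is capped at 30 min, it stays within the 1 h delivery tolerance mentioned later in the online run.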
The operation of the EH is materialized through intelligent actuators and sensors
integrated in SCADA, to which a software agent is associated to interface these
devices with the centralized HMES. The Environment Holon adjusts process room
$$\sum_{i=1}^{n} \frac{desired\_activity_i}{maximum\_activity}\; V_i \;\le\; target\_volume - loss$$

$$starting\_time + \sum_{r=1}^{4} T_r + T_5(i) \;\le\; delivery\_time_i, \quad 1 \le i \le n$$
where T1…T4 are the maximum delays when manufacturing the current
demand (irradiation time, synthesis time, dispensing time, quality testing
time) and T5(i) is the maximum transportation time to the related hospital.
– Do not use the production facility over maintenance periods:
option must be included as a functionality of the Supervisor Holon (Fig. 4), the
IBM ILOG OPL optimization engine [17] was chosen because it can be easily
integrated with separate applications using standard C++, C# or JAVA interfaces.
This is facilitated through the Concert technology.
The procedure described above was integrated into the radiopharmaceuticals
production process both for optimized offline planning (minimize production
duration while respecting imposed deadlines—offline run) and for online agile
execution (online adaptation to variations of environment parameters which can
delay commands being executed within the same day—online run).
Offline run
1. Demands are gathered for the next production day.
2. Maintenance restrictions are introduced as constraints into the CP model.
3. The ILOG model is called based on the set of demands for the next day:
(a) If a feasible solution is reached, the demands are accepted as received and
the production plan is transmitted to the distributed operating control level
(Fig. 3) in order to be implemented;
(b) If no feasible solution is reached (due to conflicting time constraints or tight deadlines), new deadlines are proposed based on the maximum load of the production system and on the “first come, first served” rule, in order to fulfil as many demands as possible.
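The offline run above can be sketched in Python. This is a hedged sketch, not the authors' ILOG code: the `solve` function is a toy stand-in for the ILOG CP model (here a naive feasibility check over a 1440 min day), and the demand dictionaries are illustrative.

```python
# Hedged sketch of the offline run (steps 1-3 above). `solve` stands in
# for the ILOG CP model; here it is an illustrative feasibility check.

DAY_MIN = 1440  # planning horizon: the next production day, in minutes

def solve(demands):
    """Toy stand-in for the ILOG model: feasible iff all demands fit
    sequentially into the day; returns a deadline-ordered plan."""
    feasible = sum(d["duration"] for d in demands) <= DAY_MIN
    return feasible, sorted(demands, key=lambda d: d["deadline"])

def offline_run(demands):
    feasible, plan = solve(demands)
    if feasible:
        return plan  # step 3a: transmit the plan to the operating level
    # Step 3b: "first come, first served" -- keep demands in arrival
    # order while they still fit; the rest get proposed new deadlines.
    accepted, load = [], 0
    for d in demands:
        if load + d["duration"] <= DAY_MIN:
            accepted.append(d)
            load += d["duration"]
    return accepted
```

The point of the sketch is the control flow of step 3, not the optimization model itself, which in the paper is a full constraint program.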
Online run
1. Apply process and environment parameter configuration via SCADA according to the offline-computed commands.
2. Measure environment parameters which affect production time (dust particles in
dispenser chamber and radioactivity levels).
3. If parameters are out of range and the current command is delayed, replan the next commands taking into account the new constraints. For any command, since the maximum allowed delay in production (30 min) is less than the maximum delay accepted for delivery (1 h), the worst-case scenario is to use the offline-computed production plan and simply delay it.
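The worst-case rule of the online run can be sketched as a plan shift. This is an illustrative sketch: the plan representation and function names are assumptions; only the two bounds (30 min and 60 min) come from the text.

```python
# Hedged sketch of the online rule above: the environmental delay is
# bounded by the 30 min process timeout, which is below the 60 min
# delivery tolerance, so shifting the offline plan is always safe.

MAX_PROCESS_DELAY_MIN = 30   # maximum allowed in-production delay
MAX_DELIVERY_DELAY_MIN = 60  # maximum delay accepted for delivery

def online_adjust(plan, measured_delay_min):
    """Worst case: reuse the offline-computed plan, shifted by the delay."""
    delay = min(measured_delay_min, MAX_PROCESS_DELAY_MIN)
    assert delay <= MAX_DELIVERY_DELAY_MIN  # delivery deadline still met
    return [dict(cmd, start=cmd["start"] + delay) for cmd in plan]
```

This makes explicit why the paper's argument holds: capping the process delay at 30 min guarantees the shifted plan never violates the 1 h delivery tolerance.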
ILOG optimization sequence
The following optimization sequence will be run for a maximum amount of raw material max_irr_vol (the capacity of the maximum irradiated target) processed for any command:
1. Consider all demands valid for scheduling (processed(d) = true, d = 1…n);
2. Order demands based on product type;
3. Choose the highest activity required (all other products will be diluted in order
to obtain an inferior activity) for each product type (max_irr_level);
4. For all demands with the same product type compute the sum:

$$sum\_prod = \sum_{\text{all products of same type}} \frac{desired\_activity}{max\_activity}\; requested\_volume$$
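The grouping sum in step 4 can be sketched directly. The dictionary field names below are illustrative assumptions; the computation itself follows the formula: demands of the same product type consume raw material in proportion to the ratio of their desired activity to the maximum activity.

```python
# Sketch of the step-4 grouping sum in Python (illustrative names):
# diluted products (lower desired activity) consume proportionally
# less irradiated raw material.

def sum_prod(demands, max_activity):
    return sum(d["desired_activity"] / max_activity * d["requested_volume"]
               for d in demands)

# Two same-type demands against a maximum activity of 100:
vol = sum_prod([{"desired_activity": 100, "requested_volume": 1.0},
                {"desired_activity": 50,  "requested_volume": 2.0}],
               max_activity=100)  # 1.0*1.0 + 0.5*2.0 = 2.0
```

A command grouping these demands is feasible while this sum stays within max_irr_vol, the irradiated-target capacity mentioned above.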
The ILOG sequence designed in Sect. 3 was tested on a set of 10 demands grouped
into two different product categories (FDG with index 1 and NaF with index 2). The
execution times are described in Table 1 for each stage: irradiation, synthesis and
dispensing. The characteristics of the demands, which represent the input to the
optimization algorithm, are described in Table 2.
If produced individually, the starting time of each demand would be computed based on the activity (how long the raw material stays in the cyclotron, Table 1) and on the product type (which type of synthesis is applied to the irradiated raw material, Table 1). To this amount of time, a fixed duration is added for dispensing (Table 1) and a fixed duration for installation cleaning (1:30 h). The sum of the irradiation, synthesis, dispensing and cleaning times is the time needed to execute each demand. The starting time is computed by subtracting the production duration from the delivery time. A theoretical scheduling is given in Fig. 7.
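The start-time rule above can be illustrated with a small worked sketch; the stage durations used below are illustrative placeholders, not the values of Table 1.

```python
# Worked sketch of the start-time rule described above:
# start = delivery - (irradiation + synthesis + dispensing + cleaning).

CLEANING_MIN = 90  # fixed installation cleaning duration (1:30 h)

def start_time(delivery_min, irradiation_min, synthesis_min, dispensing_min):
    """Latest launch time that still meets the delivery deadline."""
    duration = irradiation_min + synthesis_min + dispensing_min + CLEANING_MIN
    return delivery_min - duration

start = start_time(600, 60, 40, 20)  # 600 - 210 = 390
```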
Analysing Fig. 7, it can be seen that there are overlaps between the production times of the demands, which makes it impossible to execute all of them individually. By applying the optimization procedure, demands are grouped into commands which are executed together using the same irradiated raw material, thus maximizing the number of executed demands. The only constraints
Fig. 7 Gantt chart for demand realization without optimized offline planning
are: a single type of product can be executed at a given time, a maximum of 3.5 ml can be irradiated, and the raw material is irradiated to obtain the highest activity. This means that less irradiated raw material is used for commands with lower activity. Thus, the advantage of production optimization is that it reduces the number of production cycles by combining demands into a single command. As can be seen from Fig. 8, the demands can be grouped into 2 separate commands, but there is one demand (demand 2, the one marked in red in Table 2) that exceeds the production interval attributed to product 2. This demand cannot be satisfied as requested, and consequently a negotiation process with the client for the closest possible delivery time (1080) is proposed, see Fig. 8. If this new deadline is accepted (the old delivery time, in red, is invalidated and the new delivery time, in green, is accepted), all demands are scheduled and executed in two separate commands as depicted in Fig. 9.
[Figure: demands grouped into Command 1 (demands 1, 2, 3, 4, 5) and Command 2 (demands 6, 7, 8, 9, 10)]

As a conclusion, the paper proposes an intelligent control solution for the production of radiopharmaceuticals composed of a hybrid control architecture together with a demand optimization sequence which groups demands into commands and orders.
Future research will cover the following directions: (i) testing the optimization sequence on a larger set of products and a longer production horizon, (ii) analysing how the computed schedule differs from the actual execution, (iii) minimizing material loss, and (iv) considering energy costs as an objective function for the optimization sequence.
References
1. Iverson, Ch. et al. (eds.): 15.9.2 radiopharmaceuticals. In: AMA Manual of Style (10th edn).
Oxford University Press, Oxford, Oxfordshire (2007). ISBN 978-0-19-517633-9
2. Schwochau, K.: Technetium. Wiley-VCH (2000). ISBN 3-527-29496-1
3. Ell, P., Gambhir, S.: Nuclear Medicine in Clinical Diagnosis and Treatment. Churchill
Livingstone (2004). ISBN 978-0-443-07312-0
4. http://www.sciencedaily.com/releases/2010/07/100708111326.htm
5. http://www.ansto.gov.au/_data/assets/pdf_file/0019/32941/Nuclear_Medicine_Brochure_
May08.pdf
6. Mas, J.C.: A patient’s guide to nuclear medicine procedures: English–Spanish. Soc. Nucl. Med
(2008). ISBN 978-0-9726478-9-2
7. Medema, J., Luurtsema, G., Keizer, H., Tjilkema, S., Elsinga, P.H., Franssen, E.J.F., Paans, A.
M.J., Vaalburg, W.: Fully automated and unattended [18F] fluoride and [18F] FDG production
using PLC controlled systems. In: Proceedings of the 31st European Cyclotron Progress
Meeting, Zurich (1997)
8. Kusiak, A.: Intelligent Manufacturing Systems. Prentice Hall, Englewood Cliffs (1990). ISBN
0-13-468364-1
9. Tsai, W.T.: Service-oriented system engineering: a new paradigm. In: Proceedings of the 2005
IEEE International Workshop on Service-Oriented System Engineering (SOSE’05). IEEE
Computer Society (2005). 0-7695-2438-9/05
10. De Deugd, S., Carroll, R., Kelly, K.E., Millett, B., Ricker, J.: SODA: service-oriented device
architecture. IEEE Pervasive Comput. 5(3), 94–96 (2006)
11. McFarlane, D., Giannikas, V., Wong, C.Y., Harrison, M.: Product intelligence in industrial
control: theory and practice. Ann. Rev. Control 37, 69–88 (2013)
12. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. (Special Issue on Intelligent
Manufacturing Systems) 37(3), 255–276 (1998)
13. Leitao, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57(2), 121–130 (2006)
14. Raileanu, S., Anton, F., Iatan, A., Borangiu, Th., Anton, S., Morariu, O.: Resource scheduling
based on energy consumption for sustainable manufacturing. J. Intell. Manuf. Springer (2015).
Print ISSN: 0956-5515, On line ISSN: 1572-8145, doi:10.1007/s10845-015-1142-5
15. Novas, J.M., Bahtiar, R., Van Belle, J., Valckenaers, P.: An approach for the integration of a
scheduling system and a multiagent manufacturing execution system. towards a collaborative
framework. In: Proceedings of the 14th IFAC Symposium INCOM’12, Bucharest, pp. 728–
733, IFAC Papers OnLine (2012)
16. www.constraintsolving.com/solvers. Consulted in Oct 2015
17. ILOG: (2009). www-01.ibm.com/software/websphere/ilog/. Consulted in Oct 2015
Improving the Delivery of a Building
Keywords Construction · Part tracking · Intelligent object · Change management
It is the integration and sharing of information that allow the control of cost and
timeliness [3]. A company that designs and makes its own products controls the
cost-benefit of creating and using product information and of controlling the
schedule and supply chain to make timely products. An example is Apple, which
completely controls the designs of its processors, hardware, operating system,
application software and related cloud services, as well as its retail stores [16]. For
products that are developed in partnership, the OEM (Original Equipment
Manufacturer) creates a team along with terms and conditions that incentivize the
IPDT (Integrated Product Development Team) to share information and to optimize
cost-benefit across product components.
In today’s product information systems, products behave as both passive and
active actors regarding their information [15]. Usually, a product does not carry its
own information, but it carries identification (bar code, RFID tag (Radio Frequency
IDentification), etc.), which provides a connection to its information stored in a set
of databases. Much of this information is ‘active’ in that a change to a product
datum automatically triggers changes to other data about the product in terms of
design, manufacture or supply, assembly process, maintenance, etc. [5]. This level
of integration of product data and associated automation with regard to product
change has been continuously developed over the past 50 years since the first CAD
systems. Nevertheless, although integration of product information is beneficial to
the design process, most of the benefits accrue during the later stages of a product's
life cycle: part production, product assembly and maintenance, which pay for the
high cost of integrating product information.
The goal of this paper is to convey that some recent technologies are able to create a
better environment in the construction industry where companies can partner to
share the cost-benefit of product information in order to optimize the cost and
timeliness of building construction, and that the basis of these improvements is the
use of active products that can update their own information. Section 2 describes
technologies that have recently entered the market or whose price and func-
tionality have significantly improved. Section 3 discusses the advantages of these
technologies with regard to reducing project cost and improving timeliness.
Conclusions are stated in Sect. 4.
The heart of any construction activity is the bill of materials for a building from
which a schedule for part creation or supply and building assembly is made. ERP
(Enterprise Resource Planning) systems store information about a building and the
pieces and devices in it. ERP
systems also accommodate change management in terms of part alternatives and
modifications to schedules and suppliers. The bill of materials flows from a building
design. There are many systems that can design a building and create a BIM
(Building Information Model).
The BIM is a digital representation of the physical and functional characteristics of a
facility. The goal is to use the BIM as a shared knowledge resource for information
about a facility, forming a reliable basis for decisions during its lifecycle (conception
to demolition) [7]. Moving from concept to BIM for a large building requires the
cooperation of many partners and the integration of much data from each partner.
The greatest issue with regard to managing product information stored in a BIM,
ERP, PDMS, etc. is defining, confirming and executing change. Recent research
into change management has shown that it is possible to define the chain of change
propagation and to manage the execution of changes in a design and manufacturing
plan [17]. The management of the change process by the IPDT is another issue, and
not covered here. Systems that can manage procedures for changing a design
greatly reduce the effort for all the partners, since one of the major cost drivers
during product development is boundary management, i.e., the use of coordination
mechanisms to assure the delivery of material and information across organizational
boundaries: internal and external [3]. The process of managing change uses a lot of
resources and causes much delay, where there can be instances of a design docu-
ment being exchanged over ten times until it is finalized [14]. The more automation
can be brought to bear on this process, the better. Thus, having available a
methodology for modifying product data when there is a change to the product or its
circumstances greatly reduces the cost of the production process.
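The kind of automated change propagation discussed above can be illustrated with a minimal sketch: dependencies between product data items are recorded as a directed graph, and a change to one datum yields the downstream data that must be revised. The item names and dependencies are hypothetical.

```python
# Sketch of change propagation over product data. DEPENDENTS maps each
# datum to the data that depend on it (assumed example dependencies).
from collections import deque

DEPENDENTS = {
    "part_geometry": ["assembly_plan", "supplier_order"],
    "assembly_plan": ["schedule"],
    "supplier_order": ["schedule"],
    "schedule": [],
}

def affected_by(changed_item, dependents=DEPENDENTS):
    """Breadth-first walk returning, in order, every datum reached
    from the changed item (the chain of change propagation)."""
    seen, queue, order = {changed_item}, deque([changed_item]), []
    while queue:
        item = queue.popleft()
        for dep in dependents.get(item, []):
            if dep not in seen:
                seen.add(dep)
                order.append(dep)
                queue.append(dep)
    return order

print(affected_by("part_geometry"))
# ['assembly_plan', 'supplier_order', 'schedule']
```

A change system built this way can then notify or update each affected record automatically, which is the automation the text argues for.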
ERP systems are readily available. They range from generic, highly functional
systems, such as SAP, to AEC specific systems, such as COINS, which provides
traditional ERP functionality along with design functions to create a BIM and
applications for construction-specific analysis. The price for such systems is
continuously being reduced. Even though this is resulting in wider adoption, the
construction industry still does not have the adoption rate of product industries [1].
Thus, the opportunity is to take advantage of automation to reduce one of the main
cost drivers in building construction: change management.
Delivering the thousands of parts for a large building is a very difficult task. This
usually involves hundreds of partners and thousands of deliveries into a cramped
construction site. RFID is a great tool that allows the identification of parts and their
tracking at the manufacturer and at the construction site. There are several options.
• There is simple identification of parts with an RFID reader when using an RFID
tag. The tag can be found at any time; however, the RFID reader only deter-
mines that the part is in its range. It does not provide location. A short search is
needed.
• An RFID tag can have memory where the user can store not only part identi-
fication, but also other design and construction parameters.
• The majority of RFID tags are passive, i.e., the energy to transmit data comes
from the radio frequency signal scanning a tag, and so, range is limited. There
are also active tags, which carry their own power source and offer a longer read
range.
Objects can be identified and located by using real time locating systems (RTLS).
These systems can use active or passive RFID or infrared tags along with multiple
readers. Items can be identified and positions located to a precision of 5 cm using
algorithms such as triangulation and technologies such as ultra-wide band [4].
Absolute positions can be obtained by use of GPS or local tags with known
positions.
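The range-based positioning mentioned above can be sketched as a small trilateration computation: ranges from a tag to readers at known positions are linearized and solved as a 2×2 system. The reader positions and distances here are invented for illustration.

```python
# Sketch of 2-D position estimation from ranges to readers at known
# positions (with ranges, "triangulation" is usually trilateration).
def trilaterate(anchors, dists):
    """anchors: three (x, y) reader positions; dists: ranges to a tag.
    Subtracting the first circle equation from the other two gives a
    linear 2x2 system in the tag coordinates."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    # 2(xi-x0)x + 2(yi-y0)y = xi^2-x0^2 + yi^2-y0^2 - (di^2-d0^2)
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = x1**2 - x0**2 + y1**2 - y0**2 - (d1**2 - d0**2)
    b2 = x2**2 - x0**2 + y2**2 - y0**2 - (d2**2 - d0**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A tag at (3, 4) seen by readers at three site corners:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, 65 ** 0.5, 45 ** 0.5]
print(trilaterate(anchors, dists))  # ≈ (3.0, 4.0)
```

Real RTLS products refine this with more anchors and least-squares fitting, which is how centimetre-level precision over UWB ranges is reached.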
An RTLS reduces the search time for parts, allows for one-piece flow from supplier
to assembler, and provides timely information for placing a part into its final location.
Continuous part location reduces the labour spent on checking delivery and
assembly schedules as well as the supply chain for parts. An RTLS can be used
as part of an information change process to update product information.
In the $US 10-billion Clair Ridge project by BP, a global oil and gas company,
hundreds of suppliers took part in the construction of a new offshore oil platform
by delivering components from two consolidation centres in Europe to the building
site in South Korea. BP used an RTLS system consisting of both RFID and GPS
technologies to track parts and to minimize delays. The RTLS system allowed
real time visibility of parts moving from suppliers to the construction site, and
helped BP to reach zero material loss and to significantly improve the planning
process [13].
Routers are devices that use IEEE 802.11 communication protocols to allow con-
nections among devices and between devices and networks. Of interest are those
devices that can form self-organized networks, i.e., that automatically create their
own network. They do not need to be connected to a formal network or the Internet.
Each device acts as a wireless microrouter that can establish peer-to-peer connec-
tions and relay messages from one peer to another. So, peers form a network and
they relay messages such that they reach their final destination. Thus, a microrouter
only needs to be connected to one other router, not to all routers. Self-organizing
wireless microrouters form a true mesh topology, rather than a star. If one device is
connected to a network, all microrouters have access, and if one device fails, the
other devices repair the network.
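The relay behaviour described above can be sketched as a breadth-first search over peer links: each microrouter only knows its direct peers, and a message hops from peer to peer until it reaches its destination. The site topology below is invented.

```python
# Sketch of peer-to-peer message relay in a self-organized mesh.
from collections import deque

PEERS = {  # who can hear whom on site (assumed radio links)
    "crane": ["gate"], "gate": ["crane", "store"],
    "store": ["gate", "office"], "office": ["store"],
}

def relay_path(src, dst, peers=PEERS):
    """Breadth-first search over peer links: the hop sequence a message
    takes from src to dst, or None when the mesh is partitioned."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in peers.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(relay_path("crane", "office"))  # ['crane', 'gate', 'store', 'office']
```

If the "store" node failed, the search would simply return None, and in a real mesh the routers would rebuild the peer table around the failure.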
At a construction site, devices with an RFID tag and self-organizing wireless
microrouters along with access to an RTLS can form a self-identified network, i.e.,
a peer-to-peer network where each device knows the identity and location of every
other device. One device can download a location map of all devices into a database
on schedule or on command. Part locations can be checked against the construction
plan (BIM) and schedule for any deviations, and the data in an ERP can be updated.
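Checking a downloaded location map against the planned (BIM) positions can be sketched as follows; the part names, planned coordinates and tolerance are hypothetical, with the tolerance set near the 5 cm precision quoted for UWB-based RTLS.

```python
# Sketch: compare measured part positions (RTLS map) against planned
# (BIM) positions and report deviations for the ERP update.
import math

PLAN = {"duct-A1": (12.0, 3.0), "pump-B2": (4.0, 8.0)}  # assumed BIM data
TOLERANCE_M = 0.05  # ~5 cm

def deviations(location_map, plan=PLAN, tol=TOLERANCE_M):
    """location_map: part id -> measured (x, y). Returns the parts whose
    measured position is missing or deviates from plan by more than tol."""
    out = {}
    for part, planned in plan.items():
        measured = location_map.get(part)
        if measured is None:
            out[part] = "missing"
        elif math.dist(measured, planned) > tol:
            out[part] = "misplaced"
    return out

rtls = {"duct-A1": (12.01, 3.02), "pump-B2": (5.0, 8.0)}
print(deviations(rtls))  # {'pump-B2': 'misplaced'}
```

The resulting deviation list is exactly the data an automated update to the ERP or construction schedule would consume.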
Traditional networks of devices are defined by standards for connectivity. The host
system provides this connectivity, and the software for communication protocols,
for access to data, and for execution of special algorithms is provided by
applications in the host system. These applications are loaded into the host, and if
modifications occur in an attached device, a new application needs to be acquired.
A new approach is to have devices that are intelligent: they understand the topology
required for network integration, are able to communicate on a network, and contain
their own applications. In this approach, devices deliver applications to the host
system during integration. If a device is changed (options made operative, upgrade),
the device itself acquires a new application or self-modifies its present application.
Then, the device negotiates with the host to install the new application.
This architecture makes a system device (product) responsible for its own
information in addition to its own integration into a system. It provides a product
with the communication and negotiation capability to resolve issues with the host
system.
Consider an HVAC system in a large building. There is usually a central system
which receives information from local controllers throughout the building to bal-
ance the overall system and to create the desired environment. Each controller
usually has multiple sensors and functions. Although the original building plan
determines the number of sensors, area controllers and their functions, allocates the
number of interface connections, and sets the specifications for the host software,
there are always changes: more or different sensors, different functions for area
controllers, as well as significant change to control software. This causes a sea of
changes in the HVAC system, which adds considerable effort to the building
construction and maintenance.
If each area controller had a self-contained application and could effect its own
changes by negotiating with the central system, then the impact of change would be
highly reduced due to the automated change mechanism. Even if devices other than
those planned are used, the automated negotiation and integration of interfaces
greatly reduces the effort due to change.
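The device-host negotiation described above can be sketched with an invented message format; the interface names and payload fields are assumptions for illustration, not an actual protocol.

```python
# Sketch: a device offers its self-contained application to the host,
# and the host installs or replaces it when the declared interface
# version is one it supports (interface names are made up).
SUPPORTED_INTERFACES = {"hvac-ctl/1", "hvac-ctl/2"}  # assumed host support

def negotiate(host_apps, offer):
    """offer: dict with device id, interface name and application payload.
    Installs (or replaces) the device's application when the interface
    matches; otherwise the offer is rejected for manual handling."""
    if offer["interface"] not in SUPPORTED_INTERFACES:
        return "rejected"
    previous = host_apps.get(offer["device"])
    host_apps[offer["device"]] = offer["app"]
    return "upgraded" if previous else "installed"

host_apps = {}
print(negotiate(host_apps, {"device": "area-7", "interface": "hvac-ctl/2",
                            "app": "ctl-v2.bin"}))    # installed
print(negotiate(host_apps, {"device": "area-7", "interface": "hvac-ctl/2",
                            "app": "ctl-v2.1.bin"}))  # upgraded
```

In the HVAC example, an upgraded area controller would take the "upgraded" path automatically instead of triggering a manual change to the central software.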
3 Discussion
At the beginning of the paper, the difference between product industries and the
construction industry was described: product industries make greater use of
technology for product data integration in order to reduce overall cost and to
improve the timely delivery of products. For the construction industry, the
capability of part tracking to reduce material handling cost and to deliver better
timeliness for building assembly was shown. However, it is the use of technology
that allows products to change their own product data that greatly improves the use
of product information in the construction industry, since this type of automation
greatly reduces the amount of labour and time. With this technology, collaboration
among partners increases since the improvement in construction information has
increased benefits for all partners.
The updating of part location can provide the status of part delivery, jobsite
location and final position in building assembly. Moreover, besides providing this
data, parts need to act as intelligent operators and transparently change their data in
an ERP or BIM. It is the automated initiation of change by a building piece or
device through networked systems that reduces the cost of building assembly,
maintenance and future data integration. Technologies such as self-identifying
objects, RTLS, and wireless microrouters provide the basis for making device
initiated change a reality. However, it is being a self-integrating object that allows a
product to be an intelligent operator, which can respond to its environment and
update its information.
When reviewing projects such as the construction of the Sutter Medical Center,
there are three main reasons for achieving the high level of success: the agreement
by partners to a single, collaborative contract which outlines partner responsibilities
and the sharing of cost and benefits; the use of technologies like BIM; and the use
of Lean project principles [2, 12]. This paper has discussed the use of technology to
automate change in design and operations. It is also clear that this use of technology
needs to be built upon best practices in the forming of partnerships and in project
management.
4 Conclusion
There have been many successful projects that use an IPDT as well as create a BIM
to share building information, and thus, they have been able to create synergies that
have reduced cost and have delivered buildings on time. Unfortunately, to date,
these cases have been limited and the methodologies have not been widely adopted.
This paper has described some technologies that allow building pieces and devices
to be intelligent operators such that they can act to automatically and transparently
update product data in order to improve cost and timeliness during building con-
struction. The technologies are inexpensive and readily available. Overall, the
construction industry needs to combine the use of smart devices as intelligent
operators and the use of best management practices during building projects in
order to improve productivity.
References
1. Ahmed, S., Ahmad, I., Azhar, S., Mallikarjuna, S.: Implementation of enterprise resource
planning (ERP) systems in the construction industry. In: ASCE Construction Research
Congress—Wind of Change: Integration and Innovation, Honolulu, 1–8 Mar 2003
2. Aliaari, M., Najarian, E.: Sutter Health Eden Medical Center: structural engineer’s active role
in an IPD project with lean and BIM components. Struct. Mag., 32–34 (2013, Aug)
3. Ancona, D., Caldwell, D.: Improving the performance of productivity teams. Res. Technol.
Manag., 37–43 (2007, Sept–Oct)
4. Connell, C.: What’s new in real-time location systems? Wirel. Des. Dev. 21, 36–37 (2013)
5. Jun, H.B., Shin, J.H., Kim, Y.S., Kiritsis, D., Xirouchakis, P.: A framework for RFID
applications in product lifecycle management. Int. J. Comput. Integr. Manuf. 22(7), 595–615
(2009)
6. Khemlani, L.: Sutter Medical Center Castro Valley: case study of an IPD project. AECbytes,
1–11, 06 Mar 2009. http://www.aecbytes.com/buildingthefuture/2009/Sutter_IPDCaseStudy.
html. Accessed 27 Aug 2015
7. National BIM Standard: Frequently asked questions about the National BIM Standard.
National BIM Standard (United States). Nationalbimstandard.org. Accessed 24 Aug 2015
8. National Research Council Canada: GPS reduces construction cost. Dimensions (6) (2011)
9. Palmer, W.D.: Tracking precast pieces: technology that works. Concr. Construction (2011,
Oct)
10. Shen, W., Hao, Q., Mak, H., Neelamkavil, J., Xie, H., Dickinson, J., Thomas, R., Pardasani,
A., Xue, H.: Systems integration and collaboration in architecture, engineering, construction,
and facilities management: a review. Adv. Eng. Inf. 24(2), 196–207 (2010)
11. Soleimanifar, M., Beard, D., Sissons, P., Lu, M., Carnduff, M.: The autonomous real-time
system for ubiquitous construction resource tracking. In: Proceedings of the 30th ISARC,
Montreal, Canada (2013)
12. Staub-French, S., Forques, D., Iordanova, I., Kassalan, A., Abdulall, B., Samilski, M., Cavka,
H., Nepal, M.: Building information modeling (BIM) ‘best practices’ project report. University
of British Columbia (2011). http://bim-civil.sites.olt.ubc.ca/files/2014/06/BIMBestPractices-
2011.pdf. Accessed 27 Aug 2015
13. Swedberg, C.: RFID, GPS bring visibility to construction of BP oil platform. RFID J. (2013,
May 08). http://www.rfidjournal.com/articles/view?10659. Accessed 27 Aug 2015
14. Thomson, J., Thomson, V.: Using boundary management for more effective product
development. Technol. Innov. Manag. Rev., 23–27 (2013, Oct)
15. Trentesaux, D., Thomas, A.: Product-driven control: concept, literature review and future
trends. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and
Multi Agent Manufacturing and Robotics, vol. 472, pp. 135–150. Springer, Heidelberg (2013)
16. Wharton: How Apple made ‘vertical integration’ hot again—too hot, maybe (2012). http://
business.time.com/2012/03/16/how-apple-made-vertical-integration-hot-again-too-hot-maybe/.
Accessed 1 Oct 2015
17. Wynn, D., Caldwell, H., Clarkson, J.: Predicting change propagation in complex design
workflows. ASME J. Mech. Des. 136(8) (2014)
Repair Services for Domestic Appliances
Abstract There has been a trend of increasing levels of Waste Electrical and
Electronic Equipment over the last few decades, as access to appliance repair has
declined. Falling appliance prices have also created a culture in which disposal and
replacement with a new appliance is the quicker and cheaper option compared with
repair. A number of key areas have been identified as important in helping to
increase the number of appliances which may feasibly be repaired in the future. Of
these, two key areas encompass the automation of appliance repair and the
information requirements needed to achieve it. Within this paper, a demonstrator is
described which provides a step towards illustrating the potential of product
intelligence and semi-automated repair.
1 Introduction
The level of waste generated through the disposal of domestic appliances has
increased significantly over the past decades. In many cases, the items which are
disposed of are in working order or could be repaired with little work. This adds to
the increasing concern around the large amount of energy expended on the
‘recycling’ of materials and the manufacture of new goods. What is worrying is the
irresponsible nature of product design, where items may have built-in obsolescence
such that their lifetime is not as long as it could be. The intention of this is
to fuel repeat business. Unfortunately, product costs have been driven down so
much that, while repair used to be the norm, consumers now opt to buy new and
throw away failed or unfashionable items.
The background of the Distributed Information and Automation Laboratory
research¹ looks at key areas of relevance, namely, information requirements,
quality, availability, sensing (condition of equipment etc.), automation and product
intelligence. The research into the area of repair started with a master’s level student
project looking at Design for Repair in 2014 and this has been followed by an
investigative scoping study over the past 10 months. The aim of this work is to
significantly increase the number of domestic appliances that can feasibly be
repaired.
This paper is structured as follows. We begin with a background section dis-
cussing the issues of waste, design, repair and obsolescence of domestic appliances,
and the consequences that this has on the level of disposal of appliances. We then
present a research agenda for the topic of repair of domestic appliances and the
major challenges associated with it. In Sect. 4, we then present an intelligent-
product-based demonstrator on which our research team is currently working in
order to study the technical challenges associated with repair.
2 Background
Over the last 30 years, the replacement of failed domestic appliances with new has
become a relatively inexpensive, quick and easy solution. Simultaneously, repair
has become more costly, time-consuming and inaccessible. The ease with which
appliances can be disposed of has increased, while the separation and sorting of
waste has become more ‘responsible’ [14]. Around 25 % of disposed domestic
appliances are reported to be in working order. Often, simple repairable faults or
fashion ‘whims’ lead to appliance disposal and replacement.
Throughout the supply chain significant revenue may be gained from repeat
business and the continued disposal of domestic appliances despite the negative
materials and energy consequences (Fig. 1). Financial incentives are such that
companies design in obsolescence to ensure future demand, with the damaging
consequence of fuelling a throwaway culture. However, in cases where electronic
devices have become obsolete, many that are clean and functional can be reused if
identified and sorted by experts [7].
The UK generated 200 million tonnes of total waste in 2012. Of this, WEEE
(Waste Electrical and Electronic Equipment) accounts for around 2 million tonnes
¹ http://www.ifm.eng.cam.ac.uk/research/dial/.
discarded by householders per year [4]. In real terms, estimates are such that for
every tonne of consumer waste, 5 tonnes of manufacturing waste and 20 tonnes of
resource extraction waste have also been generated [8].
A key question is whether all items need to go straight to the recycling stage
rather than a greater number being reused or repaired. Repair, historically, was a far
cheaper and more accessible option and electricity boards had shops which pro-
vided repair services as a means for them to sell electricity (through the purchase of
their appliances). Furthermore, some repairs are regarded as being of inferior
quality to remanufacturing options, with warranties only covering the repaired
component [8]. Unfortunately, the repair business has ceased to exist as electricity
supply has become commonplace in residences, and as new appliances have
become much more affordable.
Product design and replacement has a significant impact on WEEE generation
and treatment. Where product replacement is fast, WEEE will increase dramatically
in a short time and then decrease rapidly, creating peaks and troughs in waste
treatment processing facilities [10]. One way of trying to reduce levels of WEEE
due to product failure is by improving product design such that products are more
repairable.
Design for X is a key term that has been applied across many important areas. Those
with significance to this work include:
3 A Research Agenda
We identify five areas that future research should deal with in the area of repair of
domestic appliances:
1. Economic requirements and business/contract models, to determine how repair,
sales and product use may be combined in such a way that repair is managed and
achieved more often in an after-sales capacity.
2. Design guidelines and material considerations, which make products easier to
fault-diagnose, disassemble, repair and reassemble, potentially by the end-user.
3. An information model to enable/support/enhance the repair/replacement/upgrade
process. This would ensure that the right information is available to the right
player within the supply chain, such that repair may be achieved.
4. Automation of repairs in order to make repair a quicker, cheaper, more
repeatable and accurate possibility.
5. Standards and legislation, which are key areas to support the above four areas.
Of particular interest in this context are the areas of automation and information:
– Automation: Design and development of basic automated repair functions.
Automation could be of a collaborative nature where a person provides key
intelligence and decision making, while a collaborative robot carries out key
disassembly, test, and repair activities.
– Information Management: What information is required to enable/support/
enhance the repair/replacement/upgrade process for appliances? How does this
differ if the repair is carried out by the consumer or by a robot? Can information
from the appliance trigger the repair appointment, spare parts ordering, etc.?
It is not anticipated that one would attempt to adopt automation for all domestic
appliance repairs in the first instance. The key question around which appliances
should be repairable will depend on a number of factors, but clearly value will be
one of them. The question then becomes how much we can lower the threshold of
what is economically viable to repair, considering the very varied nature of
domestic appliances from high to low value.
Further factors come into play when considering the ‘repairability’ of appliances.
These relate to how straightforward it is to diagnose the problem in the first
instance, and then on a more practical level, how feasible it is to disassemble, repair
and re-assemble the appliance in a safe and effective way. A number of factors
considered in this area are shown in Table 1.
Product intelligence is a paradigm that could support both the information man-
agement and the automation challenges of repair. In this section, we review similar
work in the literature and we present a demonstrator based on the product intelli-
gence approach.
Intelligent products, along with other similar paradigms, are argued to offer special
benefits in middle-of-life services like maintenance and repair [9, 13, 15]. These
benefits refer to the collection and gathering of item-based information about a
product’s use (using sensors embedded on the product itself), which can then be
distributed to third parties and/or be used to detect abnormalities and failures.
This has led researchers to focus their existing work around remote diagnostics
services enabled by intelligent products, which can be used to improve problem
diagnosis, to improve condition-based maintenance and to schedule service per-
sonnel. In a domestic environment, there are examples of intelligent product
developments for video game consoles [16], refrigerators [6] and washing machines
[11]. It is also argued that product intelligence can lead to the development of smart
appliances that can be used to improve the energy efficiency of modern houses [5].
Apart from houses, it has been shown that intelligent products can facilitate better
repair and maintenance services for vehicles [5] and aircraft [3].
However, it is acknowledged that in the context of domestic appliances there
will be a cost/intelligence trade-off, and the question here is really around what types
of appliance make sense to repair and what appliances lend themselves to benefiting
from intelligence (Fig. 2).
Figure 2 shows two extremes of appliance characteristics. At the high value end
appliances are typically one-off, the ‘brand’ is the engineering authority, they are
hand-made, expensive, repairable and have a long lifetime. At the low value end
they are produced in high-volume, the ‘brand’ is added to pre-built mass-produced
items, their manufacture is automated, they are low-cost, non-repairable and have a
short lifetime. What is key within this work is to determine how far from the left to
the right the sweet spot may be pushed, or in other words, the reduction in value of
appliances for which it is economically viable to effect their repair.
Fig. 3 Demonstrator
The second part of the demonstrator refers to the repair. A user, via a network
(e.g. Internet, Bluetooth) can communicate with the appliance to receive instruc-
tions guiding them through the repair process. User-friendly interfaces could be
designed for computers or even tablets and phones using apps. In certain cases,
repair could be a simple process like the replacement of a filter. In more complex
scenarios, an old spare part might need to be replaced with a new one.
Table 1 indicates a number of the issues which need to be considered in the
process of diagnosis, disassembly, repair and reassembly of appliances. With the
emergence of 3D printing, spare parts could be printed at low cost in designated
locations or even inside the owner’s house. Using this interface, a user would send a
command for printing to a 3D printer, which will then find and use the spare part
design over the Internet or in an allocated database. In this way, a customer could
easily replace faulty spare parts and repair their domestic appliances faster and
potentially cheaper.
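The diagnosis-to-print flow just described can be sketched as follows; the fault codes, part names and design URL are hypothetical, standing in for whatever the appliance and an allocated design database would actually supply.

```python
# Sketch: the appliance reports a fault code, the user interface returns
# repair instructions, and a printable spare-part design is looked up
# for the 3-D printer. All identifiers are invented for illustration.
REPAIRS = {  # assumed fault code -> (instructions, spare part id)
    "E21": ("Replace the drain filter.", "filter-std-40mm"),
    "E45": ("Replace the door latch.", "latch-clip-v3"),
}
PART_DESIGNS = {  # assumed part id -> printable design location
    "filter-std-40mm": "https://example.org/cad/filter-std-40mm.stl",
}

def repair_plan(fault_code):
    """Returns (user instructions, design URL for the 3-D printer),
    with None for the URL when no printable design is available."""
    if fault_code not in REPAIRS:
        return ("Fault unknown - book a technician.", None)
    instructions, part = REPAIRS[fault_code]
    return (instructions, PART_DESIGNS.get(part))

print(repair_plan("E21"))
```

A phone or tablet app would present the instructions, while the design URL would be handed to the printer, covering both the simple filter case and the printed-spare-part case described above.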
5 Conclusions
Within this paper we have introduced the issues around the levels of waste gen-
eration, in particular in the context of WEEE and domestic appliances. Built-in
obsolescence, product cost and the inaccessibility of repair have all led to a
throwaway culture where replacement of products with new ones is seen as the
quickest, cheapest and most reliable option. Key research areas, which seek to
increase the number of appliances that may feasibly be repaired, have been
presented. Of these
key areas, two main areas (information requirements and automation of repair) are
being incorporated into a demonstrator which is described and which aims to
illustrate the possibility of increased repair of appliances through more easily
accessible product information, diagnosis and repair possibilities.
Repair Services for Domestic Appliances 39
End-of-Life Information Sharing
for a Circular Economy: Existing
Literature and Research Opportunities
Abstract Intelligent products that carry their own information are increasingly
common nowadays. Much research focuses on the use of such products during the
manufacturing or delivery phases, which has led to important contributions to
product data management within the framework of Holonic Manufacturing
Systems (HMS). This paper aims to: (1) review the major contributions to EOL
information management (data models, communication protocols, materials, …) in
the framework of a circular economy, and (2) give a first overview of the industrial
reasons why these systems are not widely implemented. These points help to
highlight potential research directions to develop in the near future.
1 Introduction
that the Earth’s growth limits will be reached within the next hundred years if the
current model is maintained.
An alternative to this linear model is to promote a model described as “circular”:
essentially a system in which products are designed to be used for a long time and
easily recycled at their end of life. In this model, the product may have several
different usage phases with different missions. To maintain a good performance
level, the product could be updated between these phases. If the cost is too high, it
would be dismantled and its components reused to equip other products at a lower
price. When components become obsolete, they are recycled into new raw materials
that are used to produce new goods. This model, though very different from the
current one, nonetheless constitutes a credible alternative because it is based on a
transposition of the very efficient natural model.
Companies willing to adopt a circular model usually follow a 5-step process [1]:
1. Develop new business models: in a circular model, a company’s sales revenue
is no longer linked to the quantity of products sold, but to the services provided
to customers. New development opportunities appear along the different
reprocessing loops;
2. Develop new partnerships: the circular model is based on the hypothesis that
waste from one company can become raw material for another. Setting up new
partnerships can develop industrial symbiosis, such as at the industrial site of
Kalundborg, Denmark1;
3. Design and set up a closed-loop supply chain: Recycling loops must be
supported by associated logistics. Usually, a supply chain conveys products
from manufacturer to customer. In a circular economy, companies have to
manage the dismantlement process along with the diverse materials and product
flows in a closed cycle. As a result, setting up a circular economy is equivalent
to adding a loop to a classic supply chain. The key difference between these
models is the End-Of-Life (EOL) management of products, which then becomes
crucial for an efficient circular model. The activity of managing the products in
their EOL is often referred to as reverse logistics (see Fig. 1). The combination
of forward and reverse logistics results in a closed-loop supply chain [2].
4. Design “circular products”: products moving in a circular economy must be
designed to ease their maintenance, remanufacturing or recycling. At the same
time, companies adopting a service economy seek to increase their products’
useful life by making them more reliable. Eco-design refers to the process of
developing a product that integrates the constraints arising from the circular
economy.
5. Manage company performance: a company adopting a circular model needs
to define new performance indicators to drive its performance, the classical
ones being insufficient. Proposed indicators are linked either to product
performance or to supply chain performance. For example, the ISO 22628 standard
1 For more information, please refer to http://www.ellenmacarthurfoundation.org/fr/case_studies/la-symbiose-industrielle-de-kalundborg.
defines indicators like the recyclability rate, the proportion of a product which
can be reused or recycled. In addition, the time to process a used product, from its
collection to its transformation, is another relevant indicator characterising
closed-loop supply chain performance.
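As a rough illustration of the two indicators just mentioned, the sketch below computes a simplified mass-based recyclability rate and a collection-to-transformation processing time. This is only a sketch: ISO 22628 defines the actual calculation in more detail, and the masses and dates used here are invented.

```python
from datetime import datetime

def recyclability_rate(part_masses_kg, recoverable_parts):
    """Simplified rate: mass that can be reused/recycled over total product mass."""
    total = sum(part_masses_kg.values())
    recoverable = sum(part_masses_kg[p] for p in recoverable_parts)
    return recoverable / total

def processing_time_days(collected, transformed):
    """Elapsed days from collection of a used product to its transformation."""
    return (transformed - collected).days

# Invented example product: 4 kg total, 3.5 kg recoverable.
masses = {"casing": 2.0, "motor": 1.5, "electronics": 0.5}
rate = recyclability_rate(masses, ["casing", "motor"])          # 3.5 / 4.0 = 0.875
days = processing_time_days(datetime(2015, 3, 1), datetime(2015, 3, 15))  # 14
print(rate, days)
```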
As a result, EOL management is a crucial step in setting up an efficient circular
economy, because decisions taken at this stage have the greatest impact on the
recyclability rate of products. The performance of a closed-loop supply chain is
essentially limited by the following factors [1]: (1) products are not designed to be
“circular”; (2) the quantity and quality of used products are variable and not
predictable; (3) information on used products is insufficient. While problem 1 is
more related to product design and problem 2 is clearly related to the closed-loop
supply chain domain [2], problem 3 is linked to information technology. This
paper considers this last problem and aims to provide a clear overview of current
research in this area. Section 2 contains a short review of the solutions proposed
by the intelligent-product community to address the lack of information during the
EOL management phase. Then, Sect. 3 is dedicated to potential reasons explaining
the non-emergence of the “EOL product holon”.
End-of-life management concerns the options available for a product after its useful
life. Five product recovery options are illustrated in [1, 4]:
– Repair and reuse: the purpose is to repair the product and return it to working
order. The quality of a repaired product may be lower than that of a new
product;
44 W. Derigent and A. Thomas
Table 1 Review of existing research works and materials for efficient product information
retrieval (gray = does not fully fulfil the requirement; hatched = not relevant to the
requirement)
– Data modelling: EOL processes need data (mainly product data). This aspect
deals with data modelling and lists all related research works trying either to
identify or to formalise the data required by EOL processes;
– Communication architecture: in EOL, communication is an important feature
to take into account, because EOL processes should be able to retrieve infor-
mation located on the product and/or in external databases. This aspect lists all
related research works on communication architecture, data synchronisation and
aggregation;
– Decision making: good EOL management must be based on efficient
decision-making processes, capable of processing all EOL data and choosing the
best recycling alternative. This aspect covers all works on decision-making
for EOL;
– Materials: HMS are based on product holons, composed of an informational
part coupled with a physical part [11, 12]. This aspect deals with product technologies
(industrial solutions or prototypes) that could support the concerned requirement.
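A product holon as described in the last bullet, an informational part tied to a physical identifier, might be modelled minimally as below. The class names, fields and the EPC-style identifier are illustrative assumptions, not a data model prescribed by the paper or by the HMS literature.

```python
from dataclasses import dataclass, field

@dataclass
class InformationalPart:
    """Informational half of a product holon: lifecycle data it carries or points to."""
    composition: dict                      # material name -> mass (kg)
    usage_events: list = field(default_factory=list)

@dataclass
class ProductHolon:
    """Pairing of a physical identifier (e.g. an RFID tag ID) with its data."""
    physical_id: str
    info: InformationalPart

    def record(self, event: str):
        """Append a lifecycle event to the informational part."""
        self.info.usage_events.append(event)

# Illustrative instance with an EPC-style URN as the physical identifier.
holon = ProductHolon("urn:epc:id:sgtin:0614141.107346.2017",
                     InformationalPart({"steel": 1.2, "ABS": 0.3}))
holon.record("disassembled at EOL centre")
print(holon.info.usage_events)
```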
Conclusions drawn from this short survey are interesting: first, over the past years,
a great deal of research has been carried out on product lifecycle data management,
and much of it could be applied to EOL. Almost all requirements are mostly achieved, meaning
The previous review showed that efficient technologies and methods are currently
available to set up primary EOL data management systems. Some are mature and
seemingly efficient, yet they have not pushed forward the development of EOL
data management solutions. One may then wonder why the EOL product holon
does not naturally emerge, pushed by industrial needs.
Managing data all along the product lifecycle is a very hard task: numerous
stakeholders, different locations, different standards, long lifecycles … make cir-
cular data management difficult to achieve. Moreover, along a given reverse supply
chain, the reprocessing decision is highly distributed and depends on the technical,
social, economic or even geographical product context (available resources,
maximum allocated reprocessing costs, legislation, …). Developing such infras-
tructures is equivalent to building collaboration platforms, at the product level,
between companies associated in the same reverse supply chain, in which a given
item would send information about its evolution and seek information about its
environment in order to take EOL decisions. This is clearly an important challenge,
and the development of cloud solutions and wireless networks should ease the
development and deployment of the product monitoring infrastructure. Some
interoperability or data accessibility problems will always remain, but in that case
products could be equipped with on-board memory to store data, which would be
uploaded to the cloud once the connection is restored.
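The on-board memory idea in the last sentence amounts to a store-and-forward buffer: events accumulate locally and flush to the cloud when connectivity returns. A minimal sketch, with `FakeCloud` standing in for any real upload client (the class and method names are assumptions for illustration):

```python
class ProductMemory:
    """On-board store-and-forward: buffer events locally, flush when connected."""
    def __init__(self, cloud):
        self.cloud = cloud
        self.buffer = []

    def log(self, event):
        self.buffer.append(event)
        self.flush()

    def flush(self):
        # Upload oldest events first while the link is up.
        while self.buffer and self.cloud.is_connected():
            self.cloud.upload(self.buffer.pop(0))

class FakeCloud:
    """Stand-in for a cloud endpoint; records uploads and simulates connectivity."""
    def __init__(self):
        self.connected = False
        self.received = []
    def is_connected(self):
        return self.connected
    def upload(self, event):
        self.received.append(event)

cloud = FakeCloud()
mem = ProductMemory(cloud)
mem.log("repainted")           # offline: stays in on-board memory
cloud.connected = True
mem.log("ownership transfer")  # online: both events are uploaded, in order
print(cloud.received)          # ['repainted', 'ownership transfer']
```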
Building these new infrastructures might require major investments that will have
to be driven by new business models. However, many modern companies are not
yet fully concerned with sustainability issues and do not consider that switching to
a circular economy might bring benefits. As a result, the development of closed-loop
supply chains is slow: companies do not want to invest in networks that will not
generate revenue. Nevertheless, [1] illustrates some interesting examples of com-
panies which adopted a circular model and obtained important benefits by recycling
their products or moving towards a service economy. These new business models
will inevitably force companies to develop the EOL management strategies required
to extend their products’ useful life and maximise their benefits. The shift from our
current economy to the circular economy may be faster with appropriate proof of
return on investment.
4 Conclusions
This article presents a brief overview of the existing EOL data management
approaches proposed by the HMS community. Methods, techniques and materials
are available to develop EOL management systems; however, some business
limitations seem to slow down the adoption of these solutions. This is currently
changing, because investors are more and more sensitive to ecological questions,
and also because the service economy has proved to be a credible economic
alternative. Future research in EOL data management should focus on research
axes around data synchronisation/dissemination/aggregation techniques and
product-driven decision making applied to the EOL process. Demonstrating the
performance and ROI of such systems is also key for their wide adoption.
References
1. Sempels, C., Hoffman, J.: Les business models du futur. Pearson (2013). ISBN:
978-2-3260-0026-1
2. Govindan, K., Soleimani, H., Kannan, D.: Reverse logistics and closed-loop supply chain: a
comprehensive review to explore the future. Eur. J. Oper. Res. 240(3), 603–626 (2015).
(Online). Available: http://www.sciencedirect.com/science/article/pii/S0377221714005633
3. Parlikad, A., McFarlane, D.: RFID-based product information in end-of-life decision making.
Control Eng. Pract. 15(11), 1348–1363 (2007)
4. Le Moigne, R.: L’Économie circulaire. Dunod, France (2014). ISBN: 978-2-10-06008-3
5. Bentaha, M.L., Battaïa, O., Dolgui, A., Hu, S.J.: Dealing with uncertainty in disassembly line
design. CIRP Ann. Manuf. Technol. 63(1), 21–24 (2014)
6. McFarlane, D., Sheffi, Y.: The impact of automatic identification on supply chain operations.
Int. J. Logistics Manage. 14(1), 1–17 (2003)
7. Kiritsis, D.: Closed-loop PLM for intelligent products in the era of the Internet of Things.
Comput. Aided Des. 43(5), 479–501 (2011)
8. Meyer, G.G., Främling, K., Holmström, J.: Intelligent products: A survey. Comput. Ind. 60(3),
137–148 (2009). (Intelligent Products). (Online). Available: http://www.sciencedirect.com/
science/article/B6V2D-4VCNDW2-1/2/8d4e089750b92f69fdff42cc12268818
9. Kubler, S., Derigent, W., Främling, K., Thomas, A., Rondeau, É.: Enhanced product lifecycle
information management using communicating material. Comput. Aided Des. 59, 192–200
(2015)
10. Kubler, S., Främling, K., Derigent, W.: P2P Data synchronization for product lifecycle
management. Comput. Ind. 66, 82–98 (2015)
11. Koestler, A.: The ghost in the machine. Hutchinson (1967)
12. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998). (Online).
Available: http://www.sciencedirect.com/science/article/B6V2D-3V73RSY-7/2/30a8063959d379b3fce4e70c664aa3ab
13. Brock, D.: The electronic product code (EPC)—a naming scheme for physical objects. MIT
Auto-ID Center White Paper, Jan 2001
14. Kärkkäinen, M.: Increasing efficiency in the supply chain for short shelf life goods using RFID
tagging. Int. J. Retail Distrib. Manage. 31(10), 529–536 (2003)
15. Fuentealba, C., Simon, C., Choffel, D., Charpentier, P., Masson, D.: Wood products
identification by internal characteristics readings. In Industrial Technology. IEEE ICIT’04.
IEEE International Conference on, vol. 2. IEEE, pp. 763–768 (2004)
16. Jover, J., Thomas, A., Leban, J.M., Canet, D.: Interest of new communicating material
paradigm: An attempt in wood industry. J. Phys: Conf. Ser. 416(1), 012031 (2013)
17. DIALOG, Distributed information architectures for collaborative logistics. Available from:
http://dialog.hut.fi (2009). Accessed 14 Nov 2009. Technical Report
18. Kahn, O., Scotti, A., Leverano, A., Bonini, F., Ruggiero, G., Dörsch, C.: Rfid in automotive: a
closed-loop approach. In: Proceedings of ICE, vol. 6 (2006)
19. Ranasinghe, D., Harrison, M., Främling, K., McFarlane, D.: Enabling through life
product-instance management: solutions and challenges. J. Netw. Comput. Appl. (2011)
20. The Open Group: O-MI, Open Messaging Interface, an Open Group Internet of Things
(IoT) standard, Reference C14B, US ISBN 1-937218-60-7, Std., Oct 2014
21. The Open Group: O-DF, Open Data Format, an Open Group Internet of Things (IoT) standard,
Reference C14A, US ISBN 1-937218-59-1, Std., Oct 2014
22. Parlikad, A., McFarlane, D.C., Fleich, E., Gross, S.: The role of product identity in end-of-life
decision making. Auto ID Centre White Paper CAM-AUTOID-WH017. Technical Report
(2003)
23. Vegetti, M., Leone, H., Henning, G.: PRONTO: an ontology for comprehensive and consistent
representation of product information. Eng. Appl. Artif. Intell. 24(8), 1305–1327 (2011)
(Semantic-based Information and Engineering Systems). (Online). Available: http://www.
sciencedirect.com/science/article/pii/S0952197611000388
24. Rachuri, S., Subrahmanian, E., Bouras, A., Fenves, S.J., Foufou, S., Sriram, R.D.: Information
sharing and exchange in the context of product lifecycle management: role of standards.
Comput. Aided Des. 40(7), 789–800 (2008)
25. Harrison, M.: The ‘internet of things’ and commerce. XRDS: Crossroads ACM Mag. Students.
17(3), 19–22 (2011)
26. MIMOSA. (Online). Available: www.mimosa.org
27. Derigent, W.: Aggregation of product information for composite intelligent products. In: 9th
International Conference on Modeling, Optimization & SIMulation (2012)
28. Mekki, K., Derigent, W., Zouinkhi, A., Rondeau, E., Abdelkrim, M.N.: Data dissemination
algorithms for communicating materials using wireless sensor networks. In: International
Conference on Future Internet of Things and Cloud (FiCloud), IEEE. pp. 230–237 (2014)
29. Mekki, K., Derigent, W., Zouinkhi, A., Rondeau, E., Thomas, A., Abdelkrim, M.N.:
Non-localized and localized data storage in large-scale communicating materials: probabilistic
and hop-counter approaches. Comput. Stand. Interfaces (2015)
30. Pochampally, K.K., Gupta, S.M., Govindan, K.: Metrics for performance measurement of a
reverse/closed-loop supply chain. Int. J. Bus. Perform. Supply Chain Model. 1(1), 8–32 (2009)
Abstract This paper proposes a hybrid ITS design, which represents an adaptation
of the Internet of Things to the automotive and transport fields. This proposal opens
up a large variety of new applications intended to significantly improve traffic
safety, efficiency and organization. Furthermore, two completely new ideas are
implemented: a technology integration based on the Health and Usage Monitoring
Systems philosophy, which leads to a better diagnosis platform, and a slight mod-
ification of our own proposal showing how to avoid the use of multiple redundant
ITSs, since they partially share both targets and technologies.
Keywords Intelligent transport systems · Internet of things · Vehicle to vehicle ·
Vehicle to infrastructure · Health and usage monitoring system
1 Introduction
technologies are evolving into an adaptation of the Internet of Things (IoT) to the
automotive field, which is emerging as one of the most important technological
trends of the coming years. The most representative implementations that have
recently emerged follow two different philosophies.
On the one hand, there are solutions based on Vehicle-to-Infrastructure (V2I) and
Infrastructure-to-Vehicle (I2V) communications [3, 4], which usually appear together.
In this first kind of structure, each vehicle establishes an independent communi-
cation with the traffic operator’s servers through mobile network technology, as
well as through electronic beacons placed at strategic points on the road using IEEE
802.11b/g/n interfaces. The operator is therefore responsible for coordinating all
information exchanges. This type of solution has the advantage of providing
coverage to the entire network of vehicles; however, the current high latencies of
mobile networks (about 100 ms for 3G connections and 50 ms for 4G [5]) still
make it a solution that cannot be considered safe enough for a transport
environment where many processes require lower latencies. In the near future,
however, 5G technology is set to change this situation by offering latencies below
1 ms. Another problem with this solution stems from its conception as a highly
centralized system, for which the computational power required in the traffic
operators’ servers is considerably high [3].
An example of a centralized ITS platform derives from the new standard on
smart digital tachographs approved by the European Parliament and published in
the Official Journal of the European Union on 28 February 2014: Regulation
(EU) No. 165/2014 [6]. Consequently, transport companies will be required to
install smart tachographs in all their new vehicles from 2017. The regulation
requires new tachographs to incorporate a mobile data connection, GPS and speed
sensors, so that they can provide permanent access to the authorities, leading to a
basic V2I/I2V system.
The other kind of ITS solution follows a Vehicle-to-Vehicle (V2V) morphol-
ogy, a fully decentralized model in which all communications are inter-vehicular.
These communications between vehicles are established taking advantage of the
IEEE 802.11p specification [2], which is especially suitable for inter-vehicular data
transfers thanks to its range and low latencies, ensuring that critical information can
flow safely. Furthermore, decentralization means that the required computational
load is distributed among the vehicles, forming a network of intelligent nodes
interacting as a multi-agent system. Compared with a more orthodox centralized
implementation, however, this approach has difficulty exchanging information
with isolated vehicles: the decentralized idiosyncrasy and the limited range of the
IEEE 802.11p interface may lead to isolated islands being temporarily created in
the traffic flow.
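The "isolated islands" effect can be illustrated by grouping vehicles into connectivity components given a nominal radio range: two vehicles can relay V2V messages only if a chain of hops, each within range, links them. The 300 m figure below is an assumed value for illustration, not taken from the IEEE 802.11p specification.

```python
def traffic_islands(positions, radio_range=300.0):
    """Partition vehicles into connectivity islands using union-find.

    positions: dict of vehicle id -> (x, y) in metres.
    radio_range: assumed single-hop radio range in metres (illustrative).
    """
    ids = list(positions)
    parent = {v: v for v in ids}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    # Union every pair of vehicles within radio range of each other.
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            (xa, ya), (xb, yb) = positions[a], positions[b]
            if (xa - xb) ** 2 + (ya - yb) ** 2 <= radio_range ** 2:
                parent[find(a)] = find(b)

    islands = {}
    for v in ids:
        islands.setdefault(find(v), set()).add(v)
    return list(islands.values())

# Two vehicles 200 m apart can relay; a third one 5 km away forms its own island.
pos = {"car1": (0, 0), "car2": (200, 0), "car3": (5000, 0)}
islands = traffic_islands(pos)
print(len(islands))  # 2
```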
As follows from the above, the application cases for which the V2I/I2V structure
is more suitable are those where V2V loses ground, and vice versa, which makes
these technologies complementary. This is why a hybrid solution has been proposed
in recent years: the V2X morphology, where both V2I/I2V and V2V are combined,
leading to a decentralized network in which an operator supervises the traffic [7–9].
Derived from that idea, one of the major contributions of this paper is the design of
a modular system with the capability to operate simultaneously as a V2X-type ITS
and as a smart digital tachograph according to the above regulation.
The Internet of Things Applied to the Automotive Sector … 55
The main motivation of ITS is to increase safety on the roads. Consequently, a
large number of applications focus on reducing road accidents, which in other
words means saving lives. Because most accidents involve more than one car, and
because accidents occur within a very short lapse of time, it is critical to use a
technology able to support such applications. This leads to the choice of the new
IEEE 802.11p standard, which is focused on V2V communications and provides
very low latencies. Some applications enabled by this technology are collision-risk
alerts, focused on both intersection and frontal collisions, monitoring of the distance
to the following car with the consequent alerts, and automatic braking.
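A distance-monitoring alert of the kind listed above is often based on time-to-collision (gap divided by closing speed). A minimal sketch follows; the 3 s threshold is an assumed illustrative value, not taken from the paper or any standard.

```python
def collision_alert(gap_m, own_speed_ms, lead_speed_ms, ttc_threshold_s=3.0):
    """Return True when time-to-collision with the vehicle ahead drops below
    the threshold. Speeds in m/s, gap in metres; threshold is illustrative."""
    closing = own_speed_ms - lead_speed_ms
    if closing <= 0:              # not closing in: no collision course
        return False
    return gap_m / closing < ttc_threshold_s

print(collision_alert(40.0, 30.0, 20.0))  # TTC = 4.0 s -> False
print(collision_alert(25.0, 30.0, 20.0))  # TTC = 2.5 s -> True
```

In a real ITS unit the gap and speeds would come from ranging sensors or from position/speed messages exchanged over IEEE 802.11p.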
In contrast, other applications are better served by mobile data communi-
cations. For example, alerts about the presence of emergency vehicles, about slow,
stopped or even wrong-way oncoming vehicles, or about recent accidents can be
notified some kilometres before reaching the corresponding location.
This concept also fits the alerting of traffic jams and roadside works, which ties
in with another family of applications: those related to improving traffic efficiency.
Here, the system analyses the road conditions and proposes alternatives to the
current route. Another application is the so-called ‘green light countdown’: when
approaching a traffic light showing red, the HMI displays a countdown to the
change from red to green. This improves fuel consumption, as the driver can
reduce speed in order to avoid stopping the vehicle. A further application consists
of a diagnosis management platform for the maintenance of vehicle parts,
which is carried out applying HUMS techniques.
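The 'green light countdown' idea can be turned into a speed suggestion with simple kinematics: cruise at the speed that reaches the light just as it turns green. A sketch under assumed values; the 13.9 m/s (~50 km/h) limit is an illustrative urban speed limit, not a figure from the paper.

```python
def green_wave_speed(distance_m, seconds_to_green, current_speed_ms,
                     speed_limit_ms=13.9):
    """Suggest a cruising speed that reaches a red light just as it turns green,
    so the driver avoids a full stop. 13.9 m/s is an assumed urban limit."""
    if seconds_to_green <= 0:
        return current_speed_ms          # already green: keep current pace
    needed = distance_m / seconds_to_green
    return min(needed, speed_limit_ms)   # never advise exceeding the limit

# 200 m from the light, 20 s until green: cruise at 10 m/s instead of stopping.
print(green_wave_speed(200.0, 20.0, 14.0))  # 10.0
```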
The diagnosis platform is related to efficiency, as it promotes better resource usage
and engine efficiency, but it can also be considered a service for drivers. In fact, the
proposed ITS supports several service-related applications. For example, petrol
stations (selected considering fuel consumption) and roadside restaurants are
shown along the route, highlighting the most outstanding deals. The proposed
system also analyses fuel consumption, suggesting changes in driving style in
order to improve the obtained profiles.
This last service can also be used by transport companies as a tool for improving
fuel consumption across their fleets; hence, it can also be considered a logistics
application.
However, the main logistics application consists of a fleet management plat-
form with the capability of incorporating repair, refuelling and catering services,
so that the system can organize and coordinate mandatory breaks with refuelling,
inspection and repairs. Besides, the platform behaves as a command centre capable
of monitoring the whole fleet in real time.
The ITS units installed in each of the vehicles forming the ITS network are the
main devices enabling the operation of the system as a whole.
In the case of standard vehicles, such as cars, the requirements for an ITS unit
can be classified into six families: telematic networks, diagnosis and prognostics,
driver interface, vehicle interface, security, and the computing power to manage
ITS services and applications. The ITS unit also has a battery that guarantees its
operation even when the vehicle is turned off. This allows possible accidents to be
detected and the emergency call service to be enabled in critical situations.
As shown in Fig. 2, the design for standard vehicles focuses on three main units:
the Telematic Control Unit (TCU), which manages the telematic networks; the
HUMS Diagnosis Unit (HDU), which manages the diagnosis system; and the
Application Unit (AU), which coordinates all the ITS unit modules and is responsible
for the management of the ITS services, as well as for the interfaces between the
ITS system and both the vehicle (through the electronic units’ data bus) and the
driver (through an HMI provided with a screen and speakers).
The TCU, whose design is shown in Fig. 3, provides wireless connectivity
through a variety of supported technologies: IEEE 802.11p for communication
between vehicles, IEEE 802.11b/g/n for connections with signalling beacons, and
mobile networks (2G, 3G and 4G) for real-time data transfer with the ITS platform
servers.
It also has a GPS module to obtain the vehicle position, and a Bluetooth module
used to refresh the system data in the mobile app. A TCP/IP-based connection
maintains a local network between the three main units (TCU, HDU and AU).
Both the TCU and the HDU can read the vehicle electronic units’ data bus (CAN,
LIN, FlexRay or Ethernet) in order to minimize latency when errors are reported.
The HDU, as shown in Fig. 4, is responsible for processing the signals received
from various sensors distributed throughout the vehicle, as well as from variables
shared via bus communication with the vehicle’s electronic units. This processing
is carried out at two levels of intelligence: instant intelligence (diagnosis) and
predictive intelligence (prognosis).
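The two levels of intelligence might be sketched as a hard-limit check (diagnosis) plus a linear-trend projection (prognosis). The thresholds, horizon and temperature data below are invented for illustration; the paper does not specify the HUMS algorithms.

```python
def diagnose(sample, limit=95.0):
    """Instant intelligence: flag a sensor sample exceeding a hard limit.
    The 95.0 threshold is an illustrative placeholder."""
    return sample > limit

def prognose(history, limit=95.0, horizon=10):
    """Predictive intelligence: fit a least-squares linear trend to recent
    samples and report whether the limit would be crossed within `horizon`
    future steps."""
    n = len(history)
    if n < 2:
        return False
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    projected = history[-1] + slope * horizon
    return projected > limit

temps = [80.0, 82.0, 84.0, 86.0]   # degrading by +2 per step (invented data)
print(diagnose(temps[-1]))          # False: 86 is still under the limit
print(prognose(temps))              # True: the trend reaches 106 in 10 steps
```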
Fig. 5 Proposed design for a smart tachograph integrated in the ITS unit
In this paper the current status of ITS has been reviewed. Additionally, an ITS
based on a V2X architecture has been proposed, adding two completely new ideas.
The utilisation of the HUMS philosophy to improve the detection of part faults
and other incidents in civil transport can be highlighted as one of the main
contributions of the proposed model.
60 V. Cañas et al.
Furthermore, this work lays the foundations for a common platform that inte-
grates new smart tachographs in industrial vehicles with an ITS, and for which a
first prototype has been developed.
Future research will focus on studying the behaviour of the improved prototype
units, now under development, as well as of the related software and apps. The
setting-up of complete system simulators is especially relevant: it would include
the implementation of banks of servers as well as the development of the appli-
cations required to provide additional services to all participants in the proposed
system. Once these first complete systems are in place, special attention will have
to be paid to deepening the study of system security.
References
1. Dressler, F., Hartenstein, H., Altintas, O., Tonguz, O.K.: Inter-vehicle communication: Quo
Vadis. IEEE Commun. Mag. 52, 170–177 (2014)
2. Milanes, V., Onieva, E., Perez, J., Simo, J., Gonzalez, C., de Pedro, T.: Making transport safer:
V2V-based automated emergency braking system. Transport 26, 290–302 (2011)
3. Godoy, J., Milanes, V., Perez, J., Villagra, J., Onieva, E.: An auxiliary V2I network for road
transport and dynamic environments. Transp. Res. Part C-Emerg. Technol. 37, 145–156 (2013)
4. Milanes, V., Villagra, J., Godoy, J., Simo, J., Perez, J., Onieva, E.: An intelligent V2I-based
traffic management system. IEEE Trans. Intell. Transp. Syst. 13, 49–58 (2012)
5. Feteiha, M.F., Hassanein, H.S.: Enabling cooperative relaying VANET clouds over LTE-A
networks. IEEE Trans. Veh. Technol. 64, 1468–1479 (2015)
6. Regulation (EU) No 165/2014 of the European Parliament and of the Council of 4 February
2014 on tachographs in road transport, repealing Council Regulation (EEC) No 3821/85 on
recording equipment in road transport and amending Regulation (EC) No 561/2006 of the
European Parliament and of the Council on the harmonisation of certain social legislation
relating to road transport Text with EEA relevance. OJ L 60, 28.2.2014, pp. 1–33 (BG, ES, CS,
DA, DE, ET, EL, EN, FR, GA, HR, IT, LV, LT, HU, MT, NL, PL, PT, RO, SK, SL, FI, SV)
7. Wiesbeck, W., Reichardt, L.: C2X communications overview. In: 2010 URSI International
Symposium on Electromagnetic Theory (EMTS), pp. 868–871 (2010)
8. Barrachina, J., Sanguesa, J.A., Fogue, M., Garrido, P., Martinez, F.J., Cano, J.C., Calafate, C.
T., Manzoni, P.: V2X-d: a vehicular density estimation system that combines V2V and V2I
communications. In: 2013 IFIP Wireless Days (WD), pp. 1–6 (2013)
9. Parrado, N., Donoso, Y.: Congestion based mechanism for route discovery in a V2I-V2V
system applying smart devices and IoT. Sensors 15, 7768–7806 (2015)
Using the Crowd of Taxis to Last Mile
Delivery in E-Commerce:
a methodological research
Keywords Last mile delivery · Crowdsourcing · Taxi trajectory data mining ·
Freight transport · City logistics
1 Introduction
C. Chen
College of Computer Science, Chongqing University,
144, Shazheng Street, Chongqing, China
e-mail: [email protected]
S. Pan (&)
Centre de Gestion Scientifique - I3 - UMR CNRS 9217,
MINES ParisTech - PSL Research University, 60, Bd St Michel, Paris, France
e-mail: [email protected]
2 Related Works
Recently, some innovative solutions have been studied for city logistics and LMD in the e-commerce environment, for example those involved in our study: interconnected city logistics enabled by the Physical Internet [6, 7], self-service parcel stations (e.g., DHL PackStation, LaPoste Pickup Station, etc.), new tools for LMD (bicycles, motorbikes, electric vehicles, etc.), smart city logistics [8], and crowdsourced delivery [5]. Due to space limitations, here we focus on works on crowdsourcing in freight transport.
First discussed in [9], crowdsourcing has been increasingly studied as a solution for freight transport. It can be simply defined as "outsourcing a task to the crowd via an open call" [9]. On the practice side, it occurs mainly in the form of internet-based services, for example imoveit.co.uk and zipments.com, where the crowd is undefined; thus, both professional (e.g., carriers) and non-professional (e.g., inhabitants) service providers may answer the calls. In 2014 Amazon launched a project to explore taxi deliveries in San Francisco and Los Angeles.1 The idea is similar to our study, though their methodology and results have not, to our knowledge, been published. Moreover, their package deliveries are completed by ordering free taxis, while our proposed solution leverages the hitchhiking rides provided by occupied taxis while they carry passengers; our solution is thus greener and more economical. On the scientific side, only a few relevant works can be found in the area of logistics. A case study applying crowdsourcing to library deliveries in Finland is conducted in [5]: the authors study a system called PiggyBaggy to assess the sustainability and adaptability of such a solution. A taxi-based solution for the waste-collection or product-return problem (i.e., reverse logistics) in metropolitan areas is discussed in [10], without considering goods delivery. Other relevant works can also be found in the area of data science. Data scientists are mainly interested in mining taxi trajectory data to understand city dynamics and to develop various smart services for taxi drivers, passengers, and city planners [11, 12]. However, almost all current research on taxi data mining focuses on people or public transport [13, 14]; little attention has been paid to freight transport.
From the literature we can see that crowdsourcing in freight transport usually takes the form of internet-based services in practice, and that it is usually investigated via case studies in the literature. A methodology for its application is not well addressed. Besides, no attention has been paid to crowd selection or definition: people in the city are often regarded as the eligible crowd. Following the previous work [10] dealing with reverse flows, this paper focuses on a methodological approach to the LMD problem, where the logistics constraints and decision model are different.
3 TaxiCrowdShipping System
To ease the description, we define the related concepts based on Fig. 1, and also
make some assumptions.
Definition 1 (Road Network) A road network is a graph G(N, E) consisting of a node set N and an edge set E (as shown in Fig. 1), where each element n in N is an intersection associated with a pair of longitude and latitude coordinates (x, y) representing its spatial location. The edge set E is a subset of the cross product N × N. Each element e(u, v) in E is a street connecting node u to node v, which can be one-way or bi-directional, depending on the real case.
1 http://www.engadget.com/2014/11/05/amazon-is-exploring-taxi-deliveries-in-san-francisco-and-los-ang/.
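Definition 1 maps naturally onto an adjacency-set data structure; a minimal Python sketch (class and method names are illustrative, not from the paper):

```python
# Sketch of the road network G(N, E) from Definition 1.
# Node identifiers and coordinates below are illustrative.

class RoadNetwork:
    def __init__(self):
        self.nodes = {}   # node id -> (x, y) longitude/latitude
        self.edges = {}   # node id -> set of directly reachable node ids

    def add_node(self, n, x, y):
        self.nodes[n] = (x, y)
        self.edges.setdefault(n, set())

    def add_street(self, u, v, bidirectional=True):
        # Each street e(u, v) connects node u to node v;
        # a one-way street gets a single directed edge.
        self.edges.setdefault(u, set()).add(v)
        if bidirectional:
            self.edges.setdefault(v, set()).add(u)
```

A one-way street is recorded as a single directed edge, matching the definition's distinction between one-way and bi-directional streets.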
Assumption 2 Taxi drivers are willing to accept assigned package delivery tasks.
Assumption 3 The package is trackable. From its creation, the package is either stored at a pickup station or carried by the scheduled taxi. Each pickup station is authorized and has a unique ID; each taxi is registered with the taxi management department and also has a unique ID.
To help understand how our proposed solution handles the LMD, we intentionally design a simple running example. Suppose in Fig. 1 the leftmost and rightmost stars are the origin and destination of the package, respectively. After the package delivery request is generated, a passenger happens to make a real-time taxi ordering request, intending to go to the same destination. At that time, we can assign the package delivery task to the taxi that has accepted the passenger's request. Finally, the package is also delivered, via a hitchhiking ride provided by the taxi while it carries the passenger. The solution can be characterized as economical and eco-friendly since it incurs almost no extra labour cost, energy or CO2 emissions.
Accordingly, the taxi-based crowdsourcing solution to LMD consists of the
shortest path finding problem for packages and the scheduling problem for taxis,
and it can be described as follows.
Given
– A road network and a set of package pickup stations in the studied city;
– A set of historical taxi trajectory data for the studied city (e.g., from the last month);
– A set of package delivery requests and a set of real-time taxi ordering requests. Note that these requests arrive as a stream.
Objective
For a given package delivery request, find the optimal delivery path that minimizes the total package delivery time (i.e., maximizes the delivery speed). Once the path is determined, we can schedule the next coming taxi heading to the same destination to deliver the package. Note that in one of the scenarios in this study the path can be re-planned according to the real-time taxi ordering requests.
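Since the objective is a minimum-time path over the pickup-station graph, a standard shortest-path search applies; a Dijkstra sketch with per-edge time costs as weights (the function name and the graph encoding are illustrative assumptions, not the paper's algorithm):

```python
import heapq

def fastest_path(time_cost, source, target):
    """Dijkstra over the pickup-station graph.

    time_cost: dict mapping station -> {neighbour: edge time cost};
               infinite-cost edges (no passenger flow) are skipped.
    Returns (total time, station list) or (inf, []) if unreachable.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:                     # reconstruct the path
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in time_cost.get(u, {}).items():
            if w == float("inf"):
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []
```

Re-planning on a new taxi ordering request would amount to re-running the search from the package's current station with updated edge costs.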
Constraints
Only taxis that respond to taxi ordering requests arriving after the package delivery request can be scheduled. Once a taxi is involved in a delivery task, it becomes available for scheduling again only after completing the current task (i.e., delivering the package to the predefined pickup station). In other words, a taxi can carry at most one package while transporting passengers.
where Tri.o and Tri.d are the origin and destination points of Tri, respectively; loc(·) returns the latitude and longitude of the given pickup station; δ is a user-specified parameter; and Ddist(a, b) calculates the driving distance from point a to point b.
Step 2: From Passenger Flow to Time Cost. To estimate the time cost, we need to estimate two parts, i.e., the waiting time and the driving time. The waiting time is defined as the time spent waiting for a suitable hitchhiking ride event (a passenger taking a taxi) that can deliver a package from csi to csj directly (with no transhipment). Here, we employ the Non-Homogeneous Poisson Process (NHPP) to model the behaviour of passengers taking taxis [15]. From the passenger flow, we can estimate the waiting time of packages at the pickup stations in different time slots. Under the Poisson hypothesis within a time slot, we can derive the probability distribution of the waiting time for the next suitable hitchhiking ride event (i.e., tnext, the event of a passenger taking a taxi from csi to csj), as expressed in Eq. 3:
P{tnext ≤ t} = 1 − P{tnext > t} = 1 − P{N(t) = 0} = 1 − e^(−λt)   (3)

Here N(t) is the number of events occurring within t, and P{N(t) = k} = e^(−λt)(λt)^k / k!.
Then the probability density function (pdf) of tnext is the derivative of P{·}, Eq. 4:

f(t) = λ e^(−λt)   (4)
Thus, we can deduce the expectation of tnext (i.e., the waiting time until the hitchhiking ride event occurs):

E[tnext] = ∫₀^∞ t · λ e^(−λt) dt = 1/λ   (5)
Note that λ in the model is the frequency of passengers taking taxis from csi to csj (i.e., the passenger flow from csi to csj), which can easily be estimated by Eq. 6:

λ̂ = N / ΔT   (6)
where N is the average number of passengers taking taxis from csi to csj during the studied time slot over the observed days, and ΔT is the duration of that time slot. Therefore, the waiting time from csi to csj is:

waiting time = 1/λ̂ = ΔT / N   (7)
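Equations 5–7 reduce the waiting-time estimate to a frequency computation over the historical trajectory data; a sketch under that reading (the function name and inputs are illustrative):

```python
def estimate_waiting_time(passenger_counts, slot_duration):
    """Estimate E[t_next] = 1/lambda for a station pair (cs_i, cs_j).

    passenger_counts: passengers taking taxis from cs_i to cs_j in the
                      studied time slot, one count per observed day.
    slot_duration:    duration of the time slot (same unit as result).
    """
    n_avg = sum(passenger_counts) / len(passenger_counts)
    if n_avg == 0:
        return float("inf")        # no passenger flow on this pair
    lam = n_avg / slot_duration    # Eq. 6: lambda-hat = N / delta-T
    return 1.0 / lam               # Eqs. 5/7: waiting time = delta-T / N
```

With 2 and 4 observed passengers over two days and a 60-minute slot, the estimate is 60/3 = 20 minutes.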
For each passenger-delivery ride from csi to csj, it is easy to derive the time spent driving on the roads. The driving time is the average over all such rides, as in Eq. 8:

driving time = (1/N) Σᵢ₌₁ᴺ Tri.(te − ts)   (8)
where N is the number of passenger-delivery rides during the studied time slot over the observed days, and te − ts is the duration of the corresponding taxi ride. Finally, the time cost is simply the sum of the waiting time and the driving time, as in Eq. 9. Note that the time cost is +∞ if there is no passenger flow on that pickup station pair.

tc = waiting time + driving time = ΔT/N + (1/N) Σᵢ₌₁ᴺ Tri.(te − ts)   (9)
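Combining Eqs. 7–9, the time cost of a station pair can be sketched as follows (the function name and ride encoding are illustrative; ride counts stand in for the day-averaged N of Eq. 6):

```python
def time_cost(rides, slot_duration):
    """Eq. 9: tc = waiting time + driving time for (cs_i, cs_j).

    rides: list of (ts, te) start/end times of passenger-delivery
           rides from cs_i to cs_j in the studied time slot.
    """
    n = len(rides)
    if n == 0:
        return float("inf")                        # no passenger flow
    waiting = slot_duration / n                    # Eq. 7
    driving = sum(te - ts for ts, te in rides) / n # Eq. 8
    return waiting + driving
```

This per-pair time cost is exactly the edge weight needed to search for the fastest delivery path between pickup stations.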
Following the framework proposed here, some results are expected in the next steps. First, we will conduct a study to assess the feasibility of implementing the proposed TaxiCrowdShipping system. A large Chinese city, namely Hangzhou, is selected as the test field, thanks to the data sets available there, such as open taxi trajectory data, a map of city shops and the road network. However, the package delivery request data still has to be completed. Second, a set of algorithms for the package routing and taxi scheduling problems will be developed and examined. Then a set of scenarios will be run to assess the performance of the system and its sensitivity.
4 Conclusion
References
1. Lee, H.L., Whang, S.: Winning the last mile of e-commerce. MIT Sloan Manage. Rev. 42(4),
54–62 (2001)
2. Punakivi, M., Yrjölä, H., Holmstroem, J.: Solving the last mile issue: reception box or delivery
box? Int. J. Phys. Distrib. Logistics Manage. 31(6), 427–439 (2001)
3. Gibson, B.J., Defee, C.C., Ishfaq, R.: The state of the retail supply chain: Essential Findings of
the Fifth Annual Report. RILA, Dallas, TX (2015)
4. Meyer-Larsen, N., Hauge, J.B., Hennig, A.-S.: LogisticsArena—A platform promoting
innovation in logistics, in logistics management. Springer, pp. 505–516 (2015)
5. Paloheimo, H., Lettenmeier, M., Waris, H.: Transport reduction by crowdsourced deliveries—
a library case in Finland. J. Cleaner Prod. (2015)
6. Sarraj, R., et al.: Analogies between Internet network and logistics service networks:
challenges involved in the interconnection. J. Intell. Manuf. 25(6), 1207–1219 (2014)
7. Crainic, T.G., Montreuil, B.: Physical internet enabled interconnected city logistics. In: 1st International Physical Internet Conference, Québec City, Canada (2015)
8. Neirotti, P., et al.: Current trends in smart city initiatives: some stylised facts. Cities 38, 25–36
(2014)
9. Howe, J.: The rise of crowdsourcing. Wired Mag. 14(6), 1–4 (2006)
10. Pan, S., Chen, C., Zhong, R.Y.: A crowdsourcing solution to collect e-commerce reverse flows
in metropolitan areas. In: 15th IFAC Symposium on Information Control Problems in
Manufacturing INCOM 2015, Canada, Ottawa, Elsevier (2015)
11. Castro, P.S., et al.: From taxi GPS traces to social and community dynamics: A survey. ACM
Comput. Surv. (CSUR) 46(2), 17 (2013)
12. Zheng, Y.: Trajectory data mining: an overview. ACM Trans. on Intell. Syst. Technol. (TIST)
6(3), 29 (2015)
13. Chao, C., et al.: B-Planner: planning bidirectional night bus routes using large-scale taxi GPS
traces. IEEE Trans. on Intell. Transp. Syst. 15(4), 1451–1465 (2014)
14. Liu, Y., et al.: Exploiting heterogeneous human mobility patterns for intelligent bus routing. In
2014 IEEE International Conference on Data Mining (ICDM) (2014)
15. Qi, G., et al.: How long a passenger waits for a vacant taxi? Large-scale taxi trace mining for smart cities. In: 2013 IEEE International Conference on Green Computing and Communications (GreenCom), IEEE Internet of Things (iThings) and IEEE Cyber, Physical and Social Computing (CPSCom) (2013)
16. Sallez, Y., Pan, S., Montreuil, B., Berger, T., Ballot, E.: On the activeness of intelligent
Physical Internet containers. Computers in Industry. http://dx.doi.org/10.1016/j.compind.
2015.12.006. (2016)
Framework for Smart Containers
in the Physical Internet
Abstract In the context of the Physical Internet (PI), a PI-container with associated instrumentation (e.g., embedded communication, processing, identification…) can be considered "smart". The concept of a "Smart PI-container (SPIC)" exploits the idea that a container can participate in the decision-making processes that concern itself or other PI-containers. This paper outlines the need for a framework able to describe a collective of SPICs. After a quick survey of the existing typologies in the field of smart entities, a descriptive framework based on an enrichment of the Meyer typology is proposed. The proposed framework allows a description of the physical aspect (links among PI-containers) and of the informational aspect for a given function. Finally, for illustration purposes, the framework is tested on a collective of SPICs for a monitoring application.
1 Introduction
Montreuil [1] points out that current logistic systems are unsustainable economically, environmentally and socially. To reverse this situation, the author exploits the digital Internet as a metaphor to develop an initiative called the Physical Internet (PI). By analogy with data packets, goods are encapsulated in modularly dimensioned, reusable and smart containers, called PI-containers. This paper investigates the role of Smart PI-containers (hereafter SPICs) in the domain of the Physical Internet. SPICs can make decisions and interact with other containers and actors of the PI network.
This paper presents early work on the development of a descriptive framework for analysing and classifying different aspects of smart PI-containers. Section 2 is dedicated to a presentation of the different categories of PI-containers and of the notion of SPIC. The requirements associated with the descriptive framework are then introduced. In Sect. 3, the existing typologies on smart entities are gathered and investigated in light of these requirements. Section 4 describes the proposed framework and applies it to a collective of SPICs for a monitoring application. Finally, conclusions and future perspectives are offered in Sect. 5.
In the recent field of PI, current projects aim to refine the PI-container concept. As
shown in Fig. 1, the LIBCHIP project [2] investigates the exploitation of three
modular levels of PI-containers (and associated functionalities):
[Figure: unitary and composite transport, handling and packaging containers encapsulating goods, related by encapsulation and composition]
Fig. 1 Illustrating the relationships between the three categories of PI-containers [2]
function by function. Indeed, from a decisional point of view, a SPIC can be "passive" for one function (e.g., fi) and "active" for another (e.g., fj). Figure 2 highlights, for a specific grouping of PI-containers and for a specific function fi, five requirements that must be met by the framework regarding the physical and informational aspects:
– Physical aspect: (Req. 1) What are the physical links existing between PI-containers in the collective of SPICs (i.e., encapsulation and composition)?
– Informational aspect: the four following requirements must be considered from two points of view:
• "Individual" point of view:
• (Req. 2) Intelligence level: what is the intelligence level of each SPIC (e.g., from simple information handling to more complex decisional activities)?
• (Req. 3) Intelligence location: how is the intelligence of each SPIC supported by technology (i.e., embedded or remote implementations)?
• "Collective" point of view:
• (Req. 4) Aggregation: what are the informational links in a hierarchy of SPICs (i.e., when several SPICs are included in one SPIC)?
• (Req. 5) Interactions: what are the interactions among SPICs (e.g., a master-slave relationship)?
For a specific grouping of PI-containers, the physical aspect is invariant whatever function(s) are considered. However, the informational aspect evolves according to the studied function. In order to build the informational aspect of the framework, the next section offers a survey of the existing typologies in the fields of the Internet of Things, "smart" objects and "intelligent" products.
[Figure 2: the five requirements mapped onto the informational aspect (individual point of view: Req. 2 intelligence level, Req. 3 intelligence location; collective point of view: Req. 4 aggregation, Req. 5 interactions) and the physical aspect (Req. 1: physical links among SPICs across the transport, handling and packaging container layers)]
Based on the study of Sallez [5], this section provides a brief survey of existing
typologies. Two broad categories are distinguished: individual and collective.
3.1 Individual
This category focuses on the entity as an "individual" and is in turn divided into two major classes: (i) mono-criterion typologies distinguishing broad classes of "intelligent" entities according to their level of intelligence; (ii) multi-criteria typologies taking into account the different characteristics of an "intelligent" entity (sensory capacities, intelligence location…).
Mono-criterion:
Le Moigne [6] proposed nine levels of intelligence, from a totally passive object at the first level to a self-completing active object at the highest level. Wong et al. [7] distinguished information-oriented products from decision-oriented products. Other typologies [8–10] have likewise suggested different classifications of intelligence level, focusing on different applications of smart entities.
Multi-criteria:
Meyer et al. [11] presented a typology based on three axes: level of intelligence, location of intelligence and aggregation of intelligence. Kawsar et al. [12] defined three sets of cooperating objects named SOS (Smart Object System) with five levels of intelligence. A three-axis typology was introduced by Kortuem et al. [13], addressing awareness, interactivity and representation for smart objects. Three categories of smart objects are then considered: activity-aware objects, policy-aware objects and process-aware objects. López et al. [14] proposed a five-level typology for smart objects, ranging from objects that identify themselves and store all relevant data to objects that make decisions and participate in controlling other devices. In the same spirit, the typology of Sundmaeker et al. [15] proposes five categories of smart objects in the field of the Internet of Things.
3.2 Collective
This category tries to characterize the types of interactions that exist in a collective of "intelligent" entities. The typology proposed by Salkham et al. [16] includes three aspects: goal, approaches and means, the latter including abilities such as sensing and acting on the environment, communicating and delegating. In the field of the Internet of Things, the typology of Iera [17] was inspired by the theory of social relations of Fiske [18]. The four classes highlighted by Fiske are revisited in order to characterize the different relationships between entities.
3.3 Synthesis
[Figure 3: the descriptive framework applied to the monitoring example; informational aspect on the left (AUT and COOP relationships among the intelligences I1…I6, labelled with Meyer-typology triples such as [IH,IT,IN], [PN,CO,IN] and [DM,CO,IO]) and physical aspect on the right]
– For each SPIC, the third axis "aggregation" of Meyer's typology is used to specify whether the SPIC can be considered an intelligent item (a "not decomposable" entity) or whether the SPIC contains other intelligent items (playing the role of gateway/proxy). (The term "container" in [11] is not related to the SPIC case.)
– The interactions among the informational systems of the SPICs are described via three relationships:
NUL (Non-existent): there is no interaction between the SPICs.
COOP (Cooperation): there are interactions between informational systems, but no authority link exists between them. For example, SPICs interact to exchange information on their respective contexts.
AUT (Authority): informational systems interact in an authority relationship.
The collective of SPICs considered is the same as in Fig. 2. To illustrate the framework, a function f1 (cargo "monitoring") is considered, exploiting the multi-layered intelligence of the collective of SPICs:
• Σ1 and Σ2 are not equipped with sensors and their status is "monitored" by Σ3;
• Σ4 and Σ5 are assumed to contain no perishable goods and are not involved in the monitoring function considered;
• Σ3 sends warnings to Σ6. The latter has the decisional capabilities to treat the warnings and to find adequate answers in cooperation with PI management.
Figure 3 depicts the descriptive framework applied to this example. The tree on the right describes the physical aspect. Concerning the informational aspect (on the left), the AUT relationships indicate that Σ1 and Σ2 are dependent on Σ3. The relationships among the other SPICs are of COOP type: indeed, Σ3, Σ4, Σ5 and Σ6 cooperate to monitor the different cargos. In Fig. 3, the labels associated with the different informational systems refer to the three axes of Meyer's typology.
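The monitoring example can be encoded in a small data model; a sketch in which the relationship types (NUL/COOP/AUT) follow the text, while the three-letter Meyer-axis labels are copied from Fig. 3 without interpreting them further (class and variable names are illustrative):

```python
# Sketch of the descriptive framework for the monitoring function f1.
# The axis labels (e.g. "IH", "PN", "DM") are taken verbatim from the
# figure; their expansions are not assumed here.

from dataclasses import dataclass

@dataclass
class SPIC:
    ident: str
    level: str        # intelligence level axis (Req. 2)
    location: str     # intelligence location axis (Req. 3)
    aggregation: str  # aggregation axis (Req. 4)

# Informational relationships (Req. 5): "NUL", "COOP" or "AUT".
relations = {}  # (spic_a, spic_b) -> relationship type

s1 = SPIC("S1", "IH", "IT", "IN")   # monitored packaging container
s3 = SPIC("S3", "PN", "CO", "IN")   # handling container with sensors
s6 = SPIC("S6", "DM", "CO", "IO")   # transport container, decisional

relations[("S3", "S1")] = "AUT"     # S3 monitors S1's status
relations[("S3", "S6")] = "COOP"    # S3 sends warnings to S6
```

Querying the physical tree and this relation map together answers Req. 1–5 for the function under study.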
5 Conclusion
Acknowledgments The authors would especially like to thank the French National Research Agency (ANR), which supports this work via the granted PI-NUTS Project (ANR-14-CE27-0015).
References
1. Montreuil, B.: Towards a physical internet: meeting the global logistics sustainability grand
challenge. Logistics Res. 3(2–3), 71–87 (2011)
2. Montreuil, B., Ballot, E., Tremblay, W.: Modular structural design of physical internet
containers. Prog. Mater. Handling Res. 13 (2015)
3. MODULUSHCA (2015). http://www.modulushca.eu/
4. Sallez, Y., Montreuil, B., Ballot, E.: On the activeness of physical internet containers. In:
Service Orientation in Holonic and Multi-agent Manufacturing, Springer Studies in
Computational Intelligence, pp. 259–269 (2015)
5. Sallez, Y.: Proposition of an analysis framework to describe the “activeness” of a product
during its life cycle. In: Service Orientation in Holonic and Multi-Agent Manufacturing and
Robotics, Springer Studies in Computational Intelligence, pp. 257–270 (2014)
6. Le Moigne, J.-L.: La théorie du système général: théorie de la modélisation (1994)
7. Wong, C.Y., et al.: The intelligent product driven supply chain. In: 2002 IEEE International
Conference on Systems, Man and Cybernetics. IEEE (2002)
8. Bajic, E.: Ambient Networking for Intelligent Objects Management, Mobility and Services.
Seminar Institute for Manufacturing, IFM Cambridge University, UK (2004)
9. Kiritsis, D.: Closed-loop PLM for intelligent products in the era of the Internet of things.
Comput. Aided Des. 43(5), 479–501 (2011)
10. Musa, A., et al.: Embedded devices for supply chain applications: towards hardware
integration of disparate technologies. Expert Syst. Appl. 41(1), 137–155 (2014)
11. Meyer, G.G., Främling, K., Holmström, J.: Intelligent products: A survey. Comput. Ind. 60(3),
137–148 (2009)
12. Kawsar, F.: A document-based framework for user centric smart object systems. Ph.D. Thesis,
Waseda University, Japan (2009)
13. Kortuem, G., et al.: Smart objects as building blocks for the internet of things. Internet
Comput. IEEE 14(1), 44–51 (2010)
14. López, T.S., et al.: Taxonomy, technology and applications of smart objects. Inf. Syst. Front.
13(2), 281–300 (2011)
15. Sundmaeker, H., et al.: Vision and challenges for realising the Internet of Things. CERP-IoT, European Commission (2010)
16. Salkham, A., et al.: A taxonomy of collaborative context-aware systems. In: UMICS’06,
Citeseer (2006)
17. Iera, A.: The social internet of things: from objects that communicate to objects that socialize
in the internet. In: Proceedings of 50th FITCE International Congress. Palermo, Italy, Aug
2011
18. Fiske, A.P.: The four elementary forms of sociality: framework for a unified theory of social
relations. Psychol. Rev. 99(4), 689 (1992)
On the Usage of Wireless Sensor Networks
to Facilitate Composition/Decomposition
of Physical Internet Containers
1 Introduction
The core of the Physical Internet concept, initiated by Benoit Montreuil, is the handling of standardized modular containers (PI-containers) throughout an open global logistics infrastructure, including key facilities such as PI-hubs. PI-containers will be manipulated over time (transported, stored, loaded/unloaded, built/dismantled…), and subparts of the containers will also be changed (partial loading/unloading, container splitting and merging). In this context, a significant challenge is to maintain traceability in a highly dynamic transport and logistics system.
The ability to identify the past or current location of an item, as well as to know an item's history, is more complex in the Physical Internet due to the wide variety of manual or automated handling, storage and routing operations. In addition to providing a permanent inventory (the full list of delivery items) and the precise location of goods inside all the PI-containers, the traceability system of carried PI-containers and "composite" PI-containers can provide new value-added services:
• Monitoring conditions throughout container handling (with sensors deployed to measure temperature, hygrometry…);
• Detecting problems for security purposes (e.g., shocks, attempted opening, incompatibility between goods);
• Providing guidance information for loading/unloading systems.
Implementing a traceability system requires systematically linking the physical flow of materials and products with the flow of information about them. To avoid synchronization problems between the physical and informational views, we propose to use Wireless Sensor Networks (WSNs), which are spontaneous multi-hop networks well suited for dynamic environments like the Physical Internet. In our approach, each composite PI-container is able to identify its real composition from the information collected, and the virtualization of physical PI-containers is used as a digital representation of their actual state. The model of the composite container can be updated continuously and remains consistent with reality. Hence, the PI-containers play an active role in PI management and operations [1–4]. Moreover, historical and future states can also be obtained from the virtual representation, and more complex information (unobservable by humans) can be collected and represented [5].
The paper is organized as follows. The PI-container concept and the composition/decomposition issues in the Physical Internet context are introduced in Sect. 2. Section 3 describes the proposed approach, based on wireless sensor networks and virtualization, to facilitate the traceability of PI-containers. As a proof of concept, a composition/decomposition benchmark is used to illustrate the approach and the obtained results. Finally, concluding remarks are offered in the last section.
The following sections offer an overview of the PI-container concept and of the composition/decomposition process, and review the current research.
One of the key concepts of the PI is the use of standardized containers as the fundamental unit loads. Physical goods are not directly manipulated by the PI but are encapsulated in standardized containers, called PI-containers. These containers are moved, handled and stored in the PI network through the different PI-facilities. The ubiquitous usage of PI-containers will make it possible for any company to handle and store any other company's products, because they will not be handling and storing products per se. More details about the key functional specifications of PI-containers can be found in [6, 7].
As introduced in [8], three PI-container categories can be distinguished: transport, handling and packaging containers. Following the Russian-doll principle, the three categories can be successively encapsulated one within the other. Figure 1 gives the main characteristics of these categories and their relationships. This modularity enables the containers to better complement each other and therefore allows better use of the means of transportation.
For that purpose, we use a wireless sensor network (WSN) in which a node is attached to each container and stores information about the container, such as the container category, its identifier and its dimensions. The sensor node embedded at the composite container level acts as a gateway and provides the interface between the management information system (or PI-operators) and the composite container. Depending on the transmission range, a spontaneous multi-hop network is formed. Through the cooperation of the nodes and the execution of a neighbour discovery protocol, the one-hop neighbour table is computed.
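Assuming a common transmission range, the one-hop neighbour table follows directly from pairwise distances; a sketch (node identifiers, positions and range are illustrative):

```python
import math

def neighbour_table(positions, tx_range):
    """Two nodes are neighbours iff their distance <= transmission range.

    positions: dict mapping node id -> (x, y) coordinates.
    Returns a dict mapping node id -> set of one-hop neighbour ids.
    """
    table = {n: set() for n in positions}
    nodes = list(positions)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if math.dist(positions[a], positions[b]) <= tx_range:
                table[a].add(b)   # links are symmetric under a
                table[b].add(a)   # common transmission range
    return table
```

In the real system this table is built by the discovery protocol rather than from known positions; the positions here only stand in for which nodes can hear each other.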
A Constraint Satisfaction Problem (CSP) can be formulated where:
• The neighbour table gives constraints related to the positions of the unitary containers (allocation restrictions);
• The container dimensions provide basic geometric constraints: the unitary containers must lie entirely within the composite container and must not overlap, and each of them may only be placed with its edges parallel to the walls of the composite container.
Therefore, each feasible solution of the CSP is a potential loading pattern, and the 3D container virtualization process provides an instantaneous consolidated view (dynamic and virtual) of the composite container assemblage, as depicted in Fig. 4. The mathematical formulation of this satisfaction problem is similar to the well-known 3D container loading problem with a single container and a number of heterogeneous boxes [14]. However, the objective is not to optimize the number of packed items, but to find the assignment that satisfies all constraints and matches the real composition of the H-container.
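The geometric part of the CSP can be sketched as a feasibility check for axis-aligned placements (the CSP search itself and the allocation constraints derived from the neighbour table are omitted; dimensions and the function name are illustrative):

```python
def fits(composite_dim, placements):
    """Check the basic geometric constraints of the CSP: each unitary
    container lies entirely within the composite container and no two
    containers overlap (all boxes axis-aligned).

    composite_dim: (W, H, D) of the composite container.
    placements: list of (pos, dim) with pos=(x, y, z), dim=(w, h, d).
    """
    boxes = [(p, tuple(p[i] + d[i] for i in range(3)))
             for p, d in placements]
    # Containment: every corner inside the composite container.
    for lo, hi in boxes:
        if any(lo[i] < 0 or hi[i] > composite_dim[i] for i in range(3)):
            return False
    # Non-overlap: two boxes are disjoint iff separated on some axis.
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            (lo1, hi1), (lo2, hi2) = boxes[i], boxes[j]
            if not any(hi1[k] <= lo2[k] or hi2[k] <= lo1[k]
                       for k in range(3)):
                return False
    return True
```

A CSP solver would enumerate candidate placements and keep only those that pass both this geometric check and the neighbour-table allocation constraints.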
This issue depends directly on the transmission range and the dimensions of the containers. The number of feasible solutions is indeed related to the connectivity of the neighbour graph, obtained from the neighbour table. Assuming the same transmission range for all nodes, two nodes are neighbours if they can communicate, i.e. the distance between them is less than or equal to the transmission range. Therefore, with a transmission range smaller than the smallest container, many nodes will be unable to communicate. The set of allocation constraints in the CSP will then be reduced, leading to many feasible solutions. Conversely, if each node can communicate with all the others, the neighbour graph will be a complete graph; in this case, multiple feasible solutions can be obtained from a simple permutation of two containers with the same dimensions. The transmission range therefore plays an important role in limiting the number of feasible solutions and obtaining the virtual view of the composite container.
The set of variables, constraints and the mathematical formulation of the CSP
can be found in [15]. As a proof-of-concept, a composition/decomposition scenario
is used to illustrate the approach. The simulation scenario and results are presented
in the next section.
5 Conclusion
The large variety of manual or automated handling, storage and routing operations
characterizes the Physical Internet as a highly dynamic transport and logistics
system. In this paper, we have focused on the traceability of containers in the Physical Internet context, in which the management information system must be redesigned. To avoid any mistake in the composition/decomposition process, the real composition of a composite PI-container must be known at all times. To this end, we have proposed an approach based on Wireless Sensor Networks (WSN) and a VoC framework to consolidate the assignment information of PI-containers in the composite container.
A simulation demonstrated that the real 3D pattern can be obtained through cooperation between nodes. Our approach, although more expensive than RFID technology, offers the benefit of knowing the exact location (through the virtual representation). The WSN technology could also serve to carry information about containers, or to generate new information based on sensing capabilities.
References
1. Sallez, Y., Montreuil, B., Ballot, E.: On the activeness of physical internet containers. In:
Borangiu, T., Trentesaux, D., Thomas, A. (eds.) Service Orientation in Holonic and
Multi-agent Manufacturing. Springer series Studies in Computational Intelligence, Vol. 594,
pp. 259–269 (2015)
90 N. Krommenacker et al.
2. Wong, C.Y., McFarlane, D., Zaharudin, A.A., Agarwal, V.: The intelligent product driven
supply chain. In: IEEE International Conference on Systems, Man and Cybernetics,
Hammamet, Tunis (2002)
3. Sallez, Y.: Proposition of an analysis framework to describe the “activeness” of a product
during its life cycle—part I: Method and applications. In: Borangiu, T., Trentesaux, D.
(eds) Service Orientation in Holonic and Multi-Agent Manufacturing Control, Studies in
Computational Intelligence, vol. 544, pp. 271–282. Springer (2014)
4. Sallez, Y.: The augmentation concept: how to make a product “active” during its life cycle. In:
Borangiu, T., Trentesaux, D. (eds) Service Orientation in Holonic and Multi-Agent
Manufacturing Control, Studies in Computational Intelligence, vol. 402, pp. 35–48.
Springer (2012)
5. Verdouw, C.N., Beulens, A.J.M., Reijers, H.A.: A control model for object virtualization in
supply chain management, Computers in Industry, Vol. 68, 116–131, Apr 2015
6. Ballot, E., Montreuil, B., Meller, R.D.: The Physical Internet: The Network of the Logistics
Networks. La Documentation Française, Paris (2014)
7. Montreuil, B.: Towards a physical internet: meeting the global logistics sustainability grand
challenge. Logistics Res. 3(2–3), 71–87 (2011)
8. Montreuil, B., Ballot, E., Tremblay, W.: Modular Structural Design of Physical Internet
Containers. Prog. Mater. Handling Res. 13 (2014) (MHI)
9. Modulushca project (2015). http://www.modulushca.eu/
10. Ballot, E., Montreuil, B., Thémans, M.: OPENFRET: contribution à la conceptualisation et à
la réalisation d’un hub rail-route de l’Internet Physique. MEDDAT, Paris (2010)
11. Ballot, E., Montreuil, B., Thivierge, C.: Functional Design of Physical Internet Facilities: A
Road-Rail Hub. Progress in Material Handling Research, MHIA, Charlotte, NC (2012)
12. Meller, R.D., Montreuil, B., Thivierge, C., Montreuil, B.: Functional Design of Physical
Internet Facilities: A Road-Based Transit Center. Progress in Material Handling Research,
MHIA, Charlotte, NC (2012)
13. Pach, C., Sallez, Y., Berger, T., Bonte, T., Trentesaux, D., Montreuil, B.: Routing
management in physical internet cross docking hubs: study of grouping strategies for truck
loading. In: International Conference on Advances in Production Management
Systems APMS, IFIP AICT, vol. 438, pp. 483–490. Springer, Sept 2014
14. Bortfeldt, A., Wäscher, G.: Container loading problems—a state-of-the-art review.
Otto-von-Guericke Universität Magdeburg, Working Paper No. 7/2012 (2012)
15. Tran-Dang, H., Krommenacker, N., Charpentier, P.: Enhancing the functionality of physical
internet containers by WSN. In: International Physical Internet Conference. Paris, July 2015
Part III
Sustainability Issues in Intelligent
Manufacturing Systems
Artefacts and Guidelines for Designing
Sustainable Manufacturing Systems
Abstract The following key questions are the main focus of this paper: What are the needs for integrating sustainability and efficiency performance in Intelligent Manufacturing System design? And how can these needs be approached using concepts from Intelligent Manufacturing System engineering methods in the context of designing sustainable manufacturing systems? This paper answers these questions with "green" artefacts and guidelines that help to maximize production efficiency and balance environmental constraints as early as the system design phase. In this way, the engineers designing the manufacturing system have guidelines for decision support and tools for improving energy efficiency and reducing CO2 emissions and other environmental impacts, integrated into a software engineering method for intelligent manufacturing development.
Keywords Sustainable manufacturing systems · Multi-agent system · Holonic manufacturing system · Intelligent manufacturing design
1 Introduction
A. Giret (&)
Dpto. Sistemas Informaticos Y Computacion, Universidad
Politecnica de Valencia, Valencia, Spain
e-mail: [email protected]
D. Trentesaux
LAMIH UMR CNRS 8201, University of Valenciennes
and Hainaut-Cambrésis, Valenciennes 59313, France
e-mail: [email protected]
processes and performance indicators must be taken into account at all relevant
levels (product, process, and system). One of the key questions to answer in the
field of Sustainable Production is: What approaches should/could be used to
transform production processes to be more sustainable? The authors believe that, to foster sustainability in production, the whole lifecycle of manufacturing systems must be taken into account, considering its different layers in a holistic way. From a system's conception, through implementation, to maintenance, the system developer must take sustainability issues into account. Nevertheless, there is a lack of sustainability considerations in state-of-the-art design methods for manufacturing operations [5–7], even though other relevant levels have a large number of approaches that pay special attention to sustainability issues (for a state-of-the-art review see, for example, [1, 8, 9]). To fill this gap, this paper proposes a design artefact and a set of guidelines for the development of sustainable manufacturing systems.
Salonitis and Ball presented in [10] the new challenges imposed by adding sus-
tainability as a new driver in manufacturing modelling and simulation. This very
complex and challenging undertaking must also consider issues at all relevant levels in manufacturing: product, process, and system [11].
It is crucial and urgent for engineers of sustainable manufacturing systems to have tools and methods that help them undertake this task effectively, from system conception, through design, to execution. The
research field of Intelligent Manufacturing Systems (IMS) provides a large list of
engineering methods tailored to deal with specific aspects for designing IMS (for a
comparative study see [5]). Nevertheless, most existing approaches do not integrate specific support for designing sustainable manufacturing systems. One of the major challenges in developing such approaches is the lack of guidelines and tools that encourage the system designer to consider sustainability issues in the design phases and that can help during the implementation of the IMS. Two key questions to answer are therefore: (Q1) What are the needs to integrate sustainability efficiency performance in IMS design? [12, 13] (Q2) How can these needs be approached using concepts from IMS engineering methods in the context of designing sustainable manufacturing systems?
The authors believe that integrating sustainability efficiency performance in IMS
design can be tackled by means of:
• Specific guidelines that can help the system designer to know (1) which sustainability parameters are key to the system, (2) how these parameters must be taken into account by the components of the IMS, (3) when these parameters must be used for achieving sustainable efficiency in the system, and (4) which
Artefacts and Guidelines for Designing Sustainable Manufacturing … 95
approaches can be used to compute a sustainable solution for the different tasks
and processes of the manufacturing system.
• “Green” artefacts that can provide optimized solutions for concrete aspects at
different levels such as: enterprise resource planning, production control,
manufacturing operations scheduling, etc.
The above-mentioned aspects, which are some answers to Q1, are the main focus of this paper. Moreover, this paper answers Q2 by means of a specific approach for IMS development called Go-green ANEMONA [14]. The authors believe that the answers provided here for Q1 are two of a larger list; identifying the complete set of elements of this list is outside the paper's scope and an open problem worthy of deeper study. This paper describes in detail the sustainability-specific guidelines and green artefacts that assist the system engineer during IMS design. Moreover, the engineering process is showcased with a case study.
3 Go-Green ANEMONA
holons, and/or product holons, work-order holons and staff holons) since the
Go-green ANEMONA metamodel provides the support for implementing the
cooperation with them.
be handled. For this concrete situation, a solving approach is required that: takes energy and CO2 into account; maintains scheduling effectiveness as the main objective while minimizing energy and CO2; and is a proactive-reactive scheduling method (an initial schedule is computed off-line and re-scheduling activities are executed on-line). With this decision support, the system engineer can choose from the library of pre-built solving approaches the one that best fits these requirements (see [8] for a list of approaches suitable for different combinations of sustainability requirements).
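This requirement-driven selection can be sketched as a simple filter over such a library. The approach names, attribute keys and entries below are illustrative assumptions, not the actual Go-green ANEMONA catalogue:

```python
# Hypothetical library of pre-built solving approaches; the names and
# attributes are illustrative, not those of the cited method library.
APPROACHES = [
    {"name": "GA-energy", "objectives": {"energy"}, "mode": "predictive"},
    {"name": "CP-energy-co2", "objectives": {"energy", "co2"},
     "mode": "proactive-reactive"},
    {"name": "heuristic-co2", "objectives": {"co2"}, "mode": "reactive"},
]

def select_approaches(required_objectives, required_mode):
    """Return the approaches covering all required sustainability
    objectives and matching the required scheduling mode."""
    return [a["name"] for a in APPROACHES
            if required_objectives <= a["objectives"]
            and a["mode"] == required_mode]

# The situation described above: energy + CO2, proactive-reactive.
print(select_approaches({"energy", "co2"}, "proactive-reactive"))
```

The same filter generalizes to any combination of objectives, constraints and scheduling modes the guidelines identify.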
4 Case Study
On the other hand, when evaluating the number of holons identified with Go-green ANEMONA, it turns out that eight more holons were identified than in the ANEMONA development. This is because the Go-green holons are added to the classical holons in the development. But the Go-green holons also helped to reduce the number of cooperation domains in Go-green ANEMONA, since there is no need to have
100 A. Giret and D. Trentesaux
dedicated cooperation domains for sustainability issues: these are already taken into account in the different cooperation domains in which the Go-green holons are involved.
5 Conclusions
In this paper the answers to the following questions were analysed: (Q1) What are the needs to integrate sustainability efficiency performance in IMS design? (Q2) How can these needs be approached using concepts from IMS engineering methods in the context of sustainable manufacturing systems design? The main proposals for answering the questions are: (1) the Go-green holon, a green artefact that helps the system designer to implement solutions for sustainable IMS, and (2) a set of guidelines that compels system engineers to reflect on their main design choices regarding the sustainability parameters taken into account in the IMS. The proposal was showcased by designing an intelligent distributed monitoring and control application for a ceramic tile factory. Although the guidelines and artefacts helped in the development of the case study, the authors believe that they are only two of a larger list of design elements for developing sustainable manufacturing systems; this global list remains open to study.
The proposed approach is still under development. The library of pre-built solving methods, from which the engineer can select the type of service that best suits his/her needs in terms of the efficiency-oriented objectives, constraints and KPIs of go-green holons, is being populated. Moreover, a CASE tool is being designed as design support.
References
1. Garetti, M., Taisch, M.: Sustainable manufacturing: trends and research challenges. Prod.
Plan. Control 23, 83–104 (2012). doi:10.1080/09537287.2011.591619
2. Fang, K., Uhan, N., Zhao, F., Sutherland, J.W.: A new approach to scheduling in
manufacturing for power consumption and carbon footprint reduction. J. Manuf. Syst. 30,
234–240 (2011)
3. Merkert, L., Harjunkoski, I., Isaksson, A., Säynevirta, S., Saarela, A., Sand, G.: Scheduling
and energy-industrial challenges and opportunities. Comput. Chem. Eng. 72, 183–198 (2015)
4. Evans, S. Bergendahl, M., Gregory, M., Ryan, C.: Towards a sustainable industrial system.
with recommendations for education, research, industry and policy. http://www.ifm.eng.cam.
ac.uk/uploads/Resources/Reports/industrial_sustainability_report (2009)
5. Giret, A., Trentesaux, D.: Software engineering methods for intelligent manufacturing
systems: a comparative survey. Ind. Appl. Holonic Multi-Agent Syst. 11–21 (2015)
6. Thomas, A., Trentesaux, D.: Are intelligent manufacturing systems sustainable? In: Borangiu, T., Trentesaux, D., Thomas, A. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing and Robotics, Springer Studies in Comput. Intell., pp. 3–14
7. Matsuda, M., Kimura, F.: Usage of a digital eco-factory for green production preparation.
Procedia CIRP. 7, 181-186. ISSN 2212-8271 (2013)
8. Giret, A., Trentesaux, D., Prabhu, V.: Sustainability in manufacturing operations scheduling: a
state of the art review. J. Manuf. Syst., To appear (2015)
9. Badurdeen, F., Iyengar, D., Goldsby, T.J., Metta, H., Gupta, S., Jawahir, I.S.: Extending total
life-cycle thinking to sustainable supply chain design. Int. J. Prod. Lifecycle Manage. 4(49), 6
(2009)
10. Salonitis, K., Ball, P.: Energy efficient manufacturing from machine tools to manufacturing
systems. Procedia CIRP. 7:634–639, ISSN 2212-8271 (2013)
11. Jayal, A.D., Badurdeen, F., Dillon Jr, O.W., Jawahir, I.S.: Sustainable manufacturing:
modeling and optimization challenges at the product, process and system levels.
CIRP J. Manuf. Sci. Technol. 2, 144–152 (2010). doi:10.1016/j.cirpj.2010.03.006
12. Taticchi, P., Tonelli, F., Pasqualino, R.: Performance measurement of sustainable supply
chains: a literature review and a research agenda. Int. J. Prod. Perform. Manage. 62(8), 782–
804 (2013)
13. Taticchi, P., Garengo, P., Nudurupati, S.S., Tonelli, F., Pasqualino, R.: A review of
decision-support tools and performance measurement and sustainable supply chain
management. Int. J. Prod. Res. 53(21), 6473–6494 (2015)
14. Giret, A., Trentesaux, D.: Go-Green Anemona: a manufacturing system engineering method
that fosters sustainability. Glob. Clean Prod. Sustain. Cons. Conf, To appear (2015)
15. Giret, A., Botti, V.: Engineering holonic manufacturing systems. Comput. Ind. 60, 428–440
(2009). doi:10.1016/j.compind.2009.02.007
16. Trentesaux, D., Giret, A.: Go-green manufacturing holons: a step towards sustainable
manufacturing operations control. Manuf. Lett. 5, 29–33 (2015)
17. Escamilla, J., Salido, M.A., Giret, A., Barber, F.: A Metaheuristic technique for
energy-efficiency in job-shop scheduling. Proc. Constraint Satisf. Tech. COPLAS, 24th Int.
Conf. Autom. Plan. Sched. ICAPS’14 (2014)
18. Garcia, E., Giret, A., Botti, V.: Evaluating software engineering techniques for developing
complex systems with multiagent approaches. Inf. Soft. Technol. 53, 494–506 (2011)
A Human-Centred Design to Break
the Myth of the “Magic Human”
in Intelligent Manufacturing Systems
Keywords Techno-centred design · Human-centred design · Human in the loop · Levels of automation · Human-machine cooperation · Intelligent manufacturing systems
1 Introduction
This paper is relevant to industrial engineering, energy and services in general, but
is focused on Intelligent Manufacturing Systems (IMS). It deals with the way the human operator is considered, from a control point of view, when designing IMS that integrate human beings.
The complexity of industrial systems and human organizations that control them
is increasing with time, as well as their required safety levels. These requirements
evolve accordingly with past negative experiences and industrial disasters (Seveso,
Bhopal, AZF, Chernobyl…). In France, the Ministry for Ecology, Sustainable Development and Energy (Ministère de l'Écologie, du Développement durable et de l'Énergie) conducted a study of the technological accidents that occurred in France in 2013 ("inventaire 2014 des accidents technologiques"). It showed that the three sectors with the highest numbers of accidents are manufacturing, water and waste treatment. The study also highlighted that, even if "only" 11 % of the root causes come from a "counter-productive human intervention", human operators are often involved in accidents at different levels: organizational issues; faults in control, monitoring and supervision; poor equipment choices; and gaps in knowledge capitalization from past experiences.
Obviously, the capabilities and limits of the human operator during manufacturing have been widely considered for many years, and very intensively by industrialists. This attention has mainly been paid at the operational level:
• At the physical level: industrial ergonomic studies, norms and methods (MTM,
MOST…) are a clear illustration of this;
• At the informational and decisional levels: industrial lean and kaizen techniques
aim to provide the operator with informational and decisional capabilities to
react and to improve the manufacturing processes for predefined functioning
modes of the manufacturing system.
Meanwhile, these industry-oriented technical solutions lack human-oriented considerations when dealing with higher and more global decisional and informational levels, such as scheduling or supervision, as well as when abnormal and unforeseen situations and modes occur. This also holds true for the related scientific research activity, and it is even truer for less mature and more recent research topics such as those dealing with the design of control in IMS architectures. In addition, and specifically in IMS, where it is known that emerging (unexpected) control behaviours can occur during manufacturing, the risk of accidents or unexpected and possibly hazardous situations increases when control systems are not human-aware.
The objective of this paper is thus to encourage researchers dealing with the design of control systems in IMS to question the way they consider the real capabilities and limitations of human beings. It is important to note that, at our stage of
A Human-Centred Design to Break the Myth of the “Magic Human” … 105
development, this paper remains highly prospective and contains only a set of human-oriented specifications that we think researchers must be aware of when designing their control in IMS. For that purpose, before providing these specifications, the following part describes the consequences of designing control systems in IMS that are not human-aware, which corresponds to what we call a "techno-centred" approach.
As introduced, we consider in this paper the way the human operator is integrated within control architectures in IMS. Such "human-in-the-loop" Intelligent Manufacturing Control Systems are denoted HIMCoS in this paper for simplicity. These systems consider human intervention (typically information provision, decision making or direct action on physical components) during the intelligent control of any function relevant to the operational level of manufacturing operations, for example scheduling, maintenance, monitoring, inventory management or supply. Intelligence in manufacturing control refers to the ability to react, learn, adapt, reconfigure, evolve, etc. over time using computational and artificial intelligence techniques, the control architecture being typically structured using Cyber-Physical Systems (CPS) and modelled using multi-agent or holonic principles, in a static or dynamic way (i.e., embedding self-organizational capabilities). Human intervention is limited in this paper to the decisional and informational aspects (we do not consider direct physical action on the controlled system, for example).
To illustrate what we call the techno-centred design approach in this context, let us consider a widely studied IMS domain: distributed scheduling in manufacturing control. Research activities in this domain foster a paradigm that aims to provide more autonomy and adaptation capabilities to the manufacturing control system by distributing the informational and decisional capabilities, functionally or geographically, among artificial entities (typically agents or holons). This paradigm creates "bottom-up" emerging behavioural mechanisms, complementary to possible "top-down" ones generated by a centralized and predictive system to limit this emerging behaviour or force it to evolve within pre-fixed bounds [1]. It encourages designers to provide these entities with cooperation or negotiation skills so that they can react and adapt more easily to the growing level
106 D. Trentesaux and P. Millot
Assuming the human operator is a "magic human" is obviously not realistic, yet it remains common in industrial engineering research. In light of the French ministry study mentioned in the introduction, a techno-centred design pattern in HIMCoS is risky, since it overestimates the ability of the human operator, who is expected to behave perfectly when required, within due response times, and to react perfectly to unexpected situations. How can we be sure that he is able to do everything he is expected to do, in the best possible way? And more,
can his reaction times match the high-speed ones of computerized artificial entities? What if he takes too long to react? What if he makes wrong or risky decisions? What if he simply does not know what to do?
Moreover, one specificity of HIMCoS makes the techno-centred approach even riskier. Indeed, as explained before, "bottom-up" emerging behaviours will occur in HIMCoS. Such emerging behaviours are never faced (nor sought) in classical hierarchical/centralized control approaches in manufacturing. Analysed against the need to maintain and guarantee safety levels in manufacturing systems, this novelty makes the issue even more crucial. Typically, is the human operator ready to face the unexpected in front of complex self-organizing systems? This critical issue has seldom been addressed; see for example [12]. And, from the opposite point of view, what should be done in case of unexpected events for which no technical solution has been foreseen, whereas the human is the only entity really able to invent one?
From our point of view, three main reasons explain why this assumption remains
hidden and is seldom explicitly pointed out.
The first comes from the fact that researchers in industrial engineering are often not experts in, or even aware of, ergonomics, human factors or human-machine systems. A second comes from the fact that integrating the human operator requires introducing undesired qualitative and fuzzy elements, coupled with behaviours that are hard to reproduce and evaluate, and complex experimental protocols potentially involving several humans as "guinea pigs" for test purposes. Last, the technological evolution in CPS, infotronics and information and communication technologies facilitates the automation of control functions (denoted LoA: level of automation), which makes it easier for researchers to automate the different control functions they consider as much as possible. For all these reasons, researchers, consciously or not, "kick into touch" or sidestep the integration of the human dimension when designing their HIMCoS or their industrial control system.
supply, automatic scheduling of processes, etc.). The designer must consider this
aspect when designing and allocating decisional abilities among entities. In other
words, if the human is accountable, he must be allowed to fully control the system.
Therefore:
The human must always be aware of the situation: According to Endsley
[16], Situation Awareness (SA) is composed of three levels: SA1 (perception of the
elements), SA2 (comprehension of the situation), SA3 (projection of future states).
Thus each of these SA levels must be considered to ensure that humans can take decisions and keep their mental models of the system continuously up to date (e.g., to take over control or simply to know what the situation is).
The LoA must be adaptive: some tasks must be automated and some others cannot be. But the related LoA must not be predefined and fixed forever. It must evolve according to situations and events, sometimes easing the work of the human (for example, in normal conditions) and at other times handing back to him the control of critical tasks (for example, when abnormal situations occur). As a consequence, the control system must cooperate differently with the human according to the situation: task allocation must be dynamic and handled in an adaptive way.
The diversity and repeatability of decisions must be considered, typically to avoid boring repetitive actions/decisions. This also requires making explicit, as far as possible, all the rare decisions for which the human was not prepared. For that, a time-based hierarchy (e.g., strategic, tactical and operational levels) and a typology of decisions (e.g., according to skill-, rule- or knowledge-based behaviour) can be defined.
Therefore, the human mental workload must be carefully addressed: related to some of the previous principles, there exists an "optimal workload" between having nothing to do, which potentially induces lack of interest, and having too many things to do, which induces stress and fatigue. A typical consequence is that the designer must carefully define different time horizons (from real time to the long term) and balance the reaction times of the human with those of the controlled industrial system. This is one of the historical issues dealt with by researchers in human engineering [17].
Certainly, it is not possible to draw a generic model of a HIMCoS that complies with all the previous principles in every possible case. Despite this, we can propose a human-centred design framework to provide researchers in IMS (and, more generally, in industrial engineering) with some ideas for limiting the magic-human effect in their control systems. For that purpose, Fig. 3 presents such a global framework. As suggested before, the process has been decomposed into three levels: operational for the short run, tactical at a higher hierarchical level for achieving the intermediate objectives, and strategic at the highest level. The human may appear absent from the lower level, but this does not mean a fully automated system. We can therefore consider the automation in the system in three subsets, as in nuclear plant
control: one subset is fully automated; a second is not fully automated, but feedback from experience enables the design of procedures that the human must follow (a kind of automation of the human); and the last subset is neither automated nor foreseen and must therefore be handled thanks to the human's inventive capabilities. This requires paying particular attention, when designing the whole system, to ensuring that humans are able to give their best, especially when no technical solution is available.
This framework features some of the previously introduced principles. For example, mutual observation (through cooperation) is performed to account for the limited reliability of either the human or the intelligent manufacturing control system. Also, different time-horizon levels are proposed. But some other principles can hardly be represented in this figure. This is typically the case for the adaptive LoA. Research in this field has been very active in recent years. A well-known guideline based on 10 levels has been proposed by [18], where at level 1 the control is completely manual, while at level 10 it is fully automated. The fourth intermediate level corresponds to a DSS (the control selects a possible action and proposes it to the operator). At level 6, the control gives the operator a limited time to override the decision before it is executed automatically. This can be specified for each level (strategic, tactical, and operational). For example, it is nowadays conceivable that the Intelligent Manufacturing Control system depicted in Fig. 3 itself switches the operational decision level from a level between 1 and 4 to level 10, because of the need to react within milliseconds to avoid an accident, while leaving the tactical decision level unchanged at an intermediate level.
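Such situation-dependent switching can be roughly sketched as follows. The 10-level scale follows [18], but the thresholds, event names and per-level policy below are our own illustrative assumptions, not a prescription:

```python
def adaptive_loa(decision_level: str, situation: str, deadline_ms: float) -> int:
    """Return a level of automation (1 = fully manual .. 10 = fully automatic)
    for one decision level, tightening automation when reaction time is critical.
    Illustrative policy only; thresholds and event names are assumptions."""
    if situation == "imminent-accident" and deadline_ms < 100:
        # Automate fully only where millisecond reactions are needed;
        # the other decision levels keep an intermediate LoA.
        return 10 if decision_level == "operational" else 6
    if situation == "abnormal":
        return 4  # DSS: the control proposes an action, the operator decides
    return 6      # normal: the operator has limited time to veto execution

print(adaptive_loa("operational", "imminent-accident", 50))  # fully automatic
print(adaptive_loa("tactical", "imminent-accident", 50))     # stays intermediate
print(adaptive_loa("operational", "abnormal", 1000))         # DSS mode
```

The key design choice is that the LoA is a function of the situation and the decision level, not a constant fixed at design time.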
Researchers in automated vehicles have addressed adaptive LoA, which may be inspiring for industrial engineering [19]. Work on human-machine cooperation is one very
promising track, since current technology allows embedding more and more decisional abilities into machines, transforming them into efficient assistants (CPS, avatars, agents, holons…) to humans for enhancing performance. In such a context, it is suggested that each of these assistants embeds:
• a Know-How (KH): knowledge and processing capabilities, plus capabilities of communication with other assistants and with the environment (sensors, actuators); and
• a Know-How to Cooperate (KHC), allowing the assistant to cooperate with others (e.g., gathering coordination abilities and capabilities to facilitate the achievement of the goals of the other assistants) [13].
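The KH/KHC distinction can be captured in a minimal object sketch; the class and method names below are our own illustrative choices, not an API from [13]:

```python
class Assistant:
    """An assistant agent with Know-How (KH) and Know-How-to-Cooperate (KHC)."""

    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)  # KH: what the assistant can do by itself

    def can_do(self, task):
        return task in self.skills

    def delegate(self, task, peers):
        """KHC: facilitate a goal it cannot reach alone by finding a peer
        whose know-how covers the task (None if no peer can help)."""
        return next((p for p in peers if p.can_do(task)), None)

robot = Assistant("robot", {"transport"})
planner = Assistant("planner", {"schedule"})

# The planner lacks the transport skill, so its KHC finds a capable peer.
helper = planner.delegate("transport", [robot])
print(helper.name)
```

In a fuller design, KHC would also cover negotiation and coordination protocols, but the separation of "what I can do" from "how I help others achieve their goals" is the essential point.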
Recent works have shown that team Situation Awareness can be increased when humans cooperate with assistant machines equipped with such cooperative abilities. Examples have been shown in several application fields: air traffic control, fighter aircraft cockpits, and human-robot cooperative rescue actions [20].
5 Conclusion
The aim of this paper was to raise awareness of the risk of keeping the "magic human" assumption hidden and taken for granted when designing HIMCoS and, more generally, industrial control systems with the human in the loop as a decision maker.
The suggested human-centred design aims to reconcile two apparently antagonistic behaviours: the imperfect human, who can correct and learn from his errors, and the attentive and inventive human, capable of detecting problems and bringing solutions even when they are difficult and new. With a human-centred design approach in IMS, human resources can be amplified by recent ICT tools that support them in decision and action. The integration of such tools raises the question of the level of automation, since these tools could become real decision partners and even real collaborators for humans [21].
References
1. Cardin, O., Trentesaux, D., Thomas, A., Castagna, P., Berger, T., El-Haouzi, H.B.: Coupling
predictive scheduling and reactive control in manufacturing hybrid control architectures: state
of the art and future challenges. J. Intell. Manuf. doi:10.1007/s10845-015-1139-0 (2016)
2. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37, 255–274 (1998)
3. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57, 121–130 (2006)
4. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Dynamic self-organization in holonic
multi-agent manufacturing systems: The ADACOR evolution. Comput. Ind. 66, 99–111
(2015)
5. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Product intelligence in industrial
control: theory and practice. Annual Rev. Control 37, 69–88 (2013)
6. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for industry 4.0-based
manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
7. Gaham, M., Bouzouia, B., Achour, N.: Human-in-the-Loop Cyber-Physical Production
Systems Control (HiLCP2sC): a multi-objective interactive framework proposal, service
orientation in holonic and multi-agent manufacturing, pp. 315–325, Springer (2015)
8. Zambrano Rey, G., Carvalho, M., Trentesaux, D.: Cooperation models between humans and
artificial self-organizing systems: Motivations, issues and perspectives. In: 6th International
Symposium on Resilient Control Systems (ISRCS), pp. 156–161 (2013)
9. Oborski, P.: Man-machine interactions in advanced manufacturing systems. Int. J. Adv.
Manuf. Technol. 23, 227–232 (2003)
10. Mac Carthy, B.: Organizational, systems and human issues in production planning, scheduling
and control. In: Handbook of production scheduling, pp. 59–90, Springer, US (2006)
11. Trentesaux, D., Dindeleux, R., Tahon, C.: A multicriteria decision support system for dynamic
task allocation in a distributed production activity control structure. Int. J. Comput. Integr.
Manuf. 11, 3–17 (1998)
12. Valckenaers, P., Van Brussel, H., Bruyninckx, H., Saint Germain, B., Van Belle, J., Philips, J.:
Predicting the unexpected. Comput. Ind. 62, 623–637 (2011)
13. Millot, P.: Designing human-machine cooperation systems. ISTE-Wiley, London (2014)
14. Pacaux-Lemoine, M.-P., Debernard, S., Godin, A., Rajaonah, B., Anceaux, F., Vanderhaegen,
F.: Levels of Automation and human-machine cooperation: application to human-robot
interaction. In: IFAC World Congress, pp. 6484–6492 (2011)
15. Schmitt, K.: Automations influence on nuclear power plants: a look at three accidents and how
automation played a role. Int. Ergon. Assoc. World Conf., Recife, Brazil (2012)
16. Endsley, M.R.: Toward a theory of situation awareness in dynamic systems. Hum. Factors:
J. Hum. Factors Ergon. Soc. 37, 32–64 (1995)
17. Trentesaux, D., Moray, N., Tahon, C.: Integration of the human operator into responsive
discrete production management systems. Eur. J. Oper. Res. 109, 342–361 (1998)
18. Sheridan, T.B.: Telerobotics, automation, and human supervisory control, MIT Press (1992)
19. Sentouh, C., Popieul, J.C.: Human–machine interaction in automated vehicles: The ABV
project. In: Risk management in life-critical systems, pp. 335–350, ISTE-Wiley (2014)
20. Millot, P.: Cooperative organization for enhancing situation awareness. In: Risk management
in life-critical systems, pp. 279–300, ISTE-Wiley, London (2014)
21. Millot, P., Boy, G.A.: Human-machine cooperation: a solution for life-critical systems? Work,
41 (2012)
Sustainability in Production Systems:
A Review of Optimization Methods
Studying Social Responsibility Issues
in Workforce Scheduling
1 Introduction
making process, known as the Triple Bottom Line (profit, planet and people) [6].
The academic literature has witnessed the appearance of several reviews on
sustainable manufacturing, mainly focusing on the strategic decision-making levels:
supply chain design, layout design, cleaner product and production means design,
construction, recycling processes, etc. [4, 5]. As stated in [16], one of the main
reasons for this strategic-level emphasis is that much of the sustainability effort
has been driven by the highest decision levels within organizations. According to
[16], research considering sustainability issues as a whole at lower decision-
making levels (i.e., operations control and scheduling) has been relatively limited.
Some efforts have been made in a few industrial settings by considering only the
environmental dimension of sustainability [16]. To the best of our knowledge, the
social dimension has received even less attention.
Moreover, at present, regulations in various countries and trade agreements
across countries increasingly address issues of social responsibility and
employee wellbeing. The international standard ISO 26000:2010 "Guidance
on Social Responsibility (SR)" recognizes labour practices as central to the
formation of company SR policies. Such practices outline issues that organizations
must address for their employees and subcontractors, taking as a fundamental
principle that personnel are not a commodity. Therefore, employees may not be
treated as tools of production nor be subjected to the same market forces applicable
to goods [9]. These guidelines assume the adoption of socially responsible labour
practices that are fundamental to social justice, stability, and peace [9]. Hence, such
aspects are pertinent for personnel scheduling in manufacturing and service
organizations.
In this context, the aim of this paper is to review research works published from
2002 to 2014 that consider the social dimension of sustainability for manufacturing
and service systems in which workforce and personnel resource scheduling is a
central issue of concern. Indeed, workforce scheduling affects operating costs and
customer service quality [1], while at the same time affecting staff morale, mental
health, social wellbeing and productivity [10, 11]. Given these considerations,
labour is not merely a productive resource; it is necessary to consider each
employee as an individual with unique characteristics.
The goal is to identify to what extent social responsibility issues have been taken
into account in the workforce scheduling literature using optimization methods. This
work will help advance knowledge on the application of international standards on
social responsibility to personnel scheduling in manufacturing and service systems.
A systematic literature review (SLR) approach is applied for the rigorous selection,
inclusion and exclusion, and classification of articles, in order to identify trends and
gaps in scientific research and to propose future lines of research. In turn, we
illustrate how published academic works have incorporated employees as human
beings in modelling approaches to workforce scheduling problems. The study period
begins in 2002, when the ISO committee presented its report on the viability and
convenience of delivering an international standard on social responsibility
considering labour practices and employees' needs [8, p. 213].
Sustainability in Production Systems: A Review of Optimization … 117
3 Findings
This section presents the main findings of the systematic literature review. Statistics
on the number of papers published annually that meet our general search criteria,
and on those that consider aspects of SR, are shown in Table 1. An average of 7.1
articles were published per year, with the highest numbers published
from 2002 to 2006. Moreover, 24 % of the reviewed papers consider SR issues in
the objective function, such as workload balance, work stability, employee satis-
faction and preference, ergonomic risk minimization, deviations in working day
volumes, deviations in minimum required vacation days for a period of time, and
maximum work hours. 4 % of the reviewed papers consider these criteria as soft
constraints in the model, implying that a restriction breach does not cause infea-
sibility but is penalized through the objective function. Examples include:
assigning work to an unskilled employee, assigning working periods that exceed the
specified maximum, and scheduling an employee before a period of rest is
completed, among others. 49 % of the works discuss issues such as employees with
multiple skills, variations in employee productivity, employee assignment avail-
ability and fatigue. These considerations recognize employees as human beings
rather than as mere productive resources.
Regarding the optimization objective, the most commonly evaluated objective
functions are minimizing production costs or labour costs (slightly more than 60 %
of the reviewed papers). Other objectives such as employee satisfaction (8 %), work-
load balance (7 %), and penalties for noncompliance with soft constraints (9 %) are
also considered. As noted previously, some soft constraints considering employee
wellbeing can help solve related problems; thus personnel satisfaction and
productivity are equally as important as satisfying demand at reasonable cost [14].
Regarding the problem solution technique, classical Operations Research
methods such as mathematical programming, heuristics and meta-heuristics are
employed to solve the problems. Binary variables are employed when the
requirement is to assign employees to certain tasks based on shifts of variable start
times and lengths while respecting the maximum working hours, among other con-
ditions. Hence, mixed-integer linear programming (MILP) modelling is the most
widely employed solution method (47 % of the short-listed papers). Heuristic
methods were the second most frequently identified solution technique (24 % of
the reviewed papers), used in instances where MILP capabilities are limited.
Among the reviewed papers, 20 % use decomposition methods that first calculate
the number of employees needed to meet shift labour needs and then determine
workdays and days off for each employee. Enumeration algorithms (Branch &
Bound, Branch & Price, Branch & Cut) and Column Generation methods have
also been proposed. Among meta-heuristic procedures, Genetic Algorithms (GA),
Tabu Search (TS), Simulated Annealing (SA) and Particle Swarm Optimization
(PSO) are the most frequently employed.
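To make the distinction between hard demand constraints and penalized soft constraints concrete, the following minimal sketch poses a hypothetical one-day shift-assignment problem and solves it by exhaustive enumeration rather than MILP. All names, wages, demands and penalty weights are invented for illustration; real instances of the size surveyed here would require a solver or one of the (meta-)heuristics above.

```python
# Illustrative sketch (not from any reviewed paper): tiny one-day shift
# assignment where an employee preference is a soft constraint penalised
# in the objective, while meeting demand is a hard constraint.
from itertools import product

employees = ["Ana", "Ben", "Cam"]
shifts = ["morning", "evening", None]          # None = day off
wage = {"Ana": 10, "Ben": 12, "Cam": 11}       # cost per shift worked
demand = {"morning": 1, "evening": 1}          # staff required per shift
prefers_off = {"evening": {"Ana"}}             # soft preference (hypothetical)
PENALTY = 5                                    # cost of breaching a soft constraint

def cost(assignment):
    """Labour cost plus soft-constraint penalties; inf if demand is unmet."""
    staffed = {s: 0 for s in demand}
    total = 0
    for emp, s in zip(employees, assignment):
        if s is None:
            continue
        staffed[s] += 1
        total += wage[emp]
        if emp in prefers_off.get(s, set()):
            total += PENALTY                   # breach penalised, not forbidden
    if any(staffed[s] < demand[s] for s in demand):
        return float("inf")                    # hard constraint: demand must be met
    return total

best = min(product(shifts, repeat=len(employees)), key=cost)
print(dict(zip(employees, best)), cost(best))
```

The cheapest feasible plan staffs the morning with Ana and the evening with Cam, avoiding Ana's penalized evening shift even though she is the cheapest worker.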
Another interesting issue evaluated in this review was the job conditions
described in Sect. 2 (Step 4). The most common scenarios for scheduling are:
constant labour involving homogeneous and non-hierarchical skills over
multiple shifts per day (an employee can only work one shift per day, but can work
different shifts during the scheduling horizon), and constant demand or demand
known at the start of the planning period. These conditions can be applied
effectively to multiple industrial manufacturing sectors. Other conditions such as
variable labour, multiple workplaces, heterogeneous skills and variable demand
appear primarily in the service industry (medical services, postal services,
check-in counters, maintenance services, and call centres, among others). As a
matter of fact, 55 % of the shortlisted papers study an actual application in a real-life
manufacturing or service industry, while the other 45 % address theoretical prob-
lems with solution procedures tested on randomly generated data sets.
The configuration of the production system was another pertinent criterion for paper
classification. Manufacturing personnel scheduling problems are often modelled
simply to ensure that a sufficient number of employees meet daily work requirements,
and several studies do not specify the system configuration. As a consequence, it was
possible to classify only 26 % of the reviewed papers. The most frequently used
configurations correspond to queuing models (12 % of papers). Flow shop and job
shop configurations are studied in 7 % and 3 % of the reviewed papers, respectively.
Applications include manufacturing facilities and postal service activities. Moreover,
resource-constrained project scheduling (RCPS)-based models (4 % of papers) are
common in sectors such as construction, where one company is responsible for
several projects.
In addition, most of the reviewed papers aim to solve tangible issues, suggesting
that organizations are invested in resolving production-scheduling issues in ways
that reconcile employee work- and family-life responsibilities. This benefits the
employee, but it also benefits the company through higher productivity and service
quality [7, 10–12]. This is even more relevant in sectors where high staff turnover
affects profits, as businesses incur constant costs for hiring, training, and employee
development [3, 17].
From these findings, and as noted in [13], a major research question remains
open: which social responsibility considerations in labour practices help solve
personnel scheduling problems while minimizing the errors that result from
overlooking employee variability?
Based on these findings and our literature review analysis, we propose that future
research considers one or more of the factors listed hereafter:
1. Heterogeneous labour with productivity rates that are either stochastic or
deterministic but variable as a function of time, with implicit features such as
learning curves, work monotony, and employee fatigue during work shifts.
2. Aspects of family and social life that affect employee work performance and
cause re-scheduling due to absenteeism.
3. Model evaluation with multiple objectives through which employee satisfaction
and conciliation of work and family life are considered. Some of these con-
siderations have been studied in [13, 17].
4. The development of social welfare programs and evaluations of their effects on
the loyalty, morale, health and productivity of employees; strategic programs
that produce better outcomes for the development of operational level solutions.
5. The development of efficient tools for solving tangible personnel scheduling
problems that consider personnel as individuals. These may involve heuristics,
meta-heuristics, or computer simulation models for stochastic process modelling.
5 Concluding Remarks
This paper presented an updated literature review that examines various dimensions
of personnel scheduling with a particular emphasis on issues linked with the
practice of social responsibility. Also, this paper intended to evaluate how these
factors are considered in current scheduling research.
Our review showed that labour practices accounting for the unique features that
distinguish employees from other productive resources appear most often among
the problem features (constraints) rather than in the objective functions. Mixed
Integer Programming (MIP) is the most widely used approach for problem mod-
elling and resolution. Decomposition techniques are also very often employed;
however, because of the problem's complexity, these techniques do not guarantee
global optimality for the integrated problem. Likewise, we highlight the development
of heuristic algorithms and the effective application of meta-heuristics such as
Particle Swarm Optimization.
122 C.A. Moreno-Camacho and J.R. Montoya-Torres
Acknowledgment The work presented in this paper was supported under a postgraduate
scholarship awarded to the first author by Universidad de La Sabana.
References
1. Alfares, H.K.: Survey, categorization, and comparison of recent tour scheduling literature.
Ann. Oper. Res. 127(1–4), 145–175 (2004)
2. Denyer, D., Tranfield, D.: Producing a systematic review. In: Buchanan, D.A., Bryman, A.
(eds.) The Sage Handbook of Organizational Research Methods, pp. 671–689, Sage
Publications Ltd (2009)
3. Florez, L., Castro-Lacouture, D., Medaglia, A.L.: Sustainable workforce scheduling in
construction program management. J. Oper. Res. Soc. 64(8), 1169–1181 (2013)
4. Garetti, M., Taisch, M.: Sustainable manufacturing: trends and research challenges. Prod.
Plann. Control 23, 83–104 (2012)
5. Gunasekaran, A., Spalanzani, A.: Sustainability of manufacturing and services: Investigations
for research and applications. Int. J. Prod. Econ. 140, 35–47 (2012)
6. Montoya-Torres, J.R.: Designing sustainable supply chains based on the triple bottom line
approach. In: Proceedings of the 2015 International Conference on Advanced Logistics and
Transport (ICALT 2015), Valenciennes, France, pp. 1–6, May 20–22 (2015)
7. Musliu, N., Gärtner, J., Slany, W.: Efficient generation of rotating workforce schedules.
Discrete Appl. Math. 118(1–2), 85–98 (2002)
8. Navarro García, F.: Responsabilidad Social Corporativa: Teoría y práctica (2da ed). ESIC
Editorial (2008)
9. Organización Internacional de Normalización: ISO 26000 Guía de Responsabilidad Social, Of
2010. Geneva, Switzerland (2010)
10. Petrovic, S., Van den Berghe, G.: A comparison of two approaches to nurse rostering
problems. Ann. Oper. Res. 194(1), 365–384 (2012)
11. Puente, J., Gómez, A., Fernández, I., Priore, P.: Medical doctor rostering problem in a hospital
emergency department by means of genetic algorithms. Comput. Ind. Eng. 56(4), 1232–1242
(2009)
12. Rocha, M., Oliveira, J.F., Carravilla, M.A.: A constructive heuristic for staff scheduling in the
glass industry. Ann. Oper. Res. 217(1), 463–478 (2014)
13. Thompson, G.M., Goodale, J.C.: Variable employee productivity in workforce scheduling.
Eur. J. Oper. Res. 170(2), 376–390 (2006)
14. Topaloglu, S., Ozkarahan, I.: Implicit goal programming model for the tour scheduling
problem considering the employee work preferences. Ann. Oper. Res. 128(1–4), 135–158
(2004)
15. Tranfield, D., Denyer, D., Smart, P.: Towards a methodology for developing evidence-informed
management knowledge by means of systematic review. Br. J. Manage. 14(3), 207–222 (2003)
16. Trentesaux, D., Prabhu, V.: Sustainability in manufacturing operations scheduling: stakes,
approaches and trends. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D. (eds.)
APMS 2014, IFIP AICT 439, pp. 106–113. Springer, Heidelberg (2014)
17. Wright, P.D., Mahar, S.: Centralized nurse scheduling to simultaneously improve schedule
cost and nurse satisfaction. Omega 41(6), 1042–1052 (2013)
Identifying the Requirements for Resilient
Production Control Systems
1 Introduction
resource breakdowns, material delivery issues) [1]. The ability to identify, respond
and cope with disruptions is becoming essential for these firms to operate in a
global environment and be competitive at the same time.
The intricate nature of manufacturing systems, coupled with interdependent
processes and human interactions, is placing additional constraints on management
and control. The dependence of manufacturing systems on automation, and the
latter's ability to provide a higher degree of state awareness, is important for
identifying the onset of disruptions and for developing resilience strategies [2].
From a manufacturing point of view, the need for resilience arises from the need
to know the operational status of the system in real time, i.e. to be aware of the state
so as to identify the onset of disruptions. Additionally, there is also the need to
determine or infer from the state the most appropriate course of action that will
either reduce the impact of a disruption or allow coping with it. Essentially, this
implies that manufacturing needs efficient tracking of information, together with
control systems that can draw on the tracking information to determine and act on
the required mitigation strategy.
Resilience in general can be defined as the ability of the system to cope with
unexpected changes [2]. For production systems, resilience is closely associated
with robustness, responsiveness and agility. Robustness is the ability of the pro-
duction system to maintain its goal or the desired output in the face of distur-
bances [3]. Responsiveness is defined as the ability of the production system to
respond to disturbances [3]. On the other hand, agility refers to quick and adequate
responses to disturbances.
Despite the clear need for resilient systems in manufacturing, the key requirements
for resilient production control have been poorly understood. The key objective of
this paper is to link disruption analysis to the design of a resilient production
control strategy. Additionally, the proposed approach is demonstrated in a
laboratory used as an experimental facility.
The need for resilient production systems stems from the fact that production
operations are inherently prone to various disturbances; the key enabler for
resilience is therefore the ability to avoid, survive and recover from disturbances.
Consequently, disturbance identification and disturbance characteristics will influence
the resilience capability requirements, which in turn lead to the resilience
strategies and the consequent control and tracking requirements that enable
the production system to be resilient to the identified disturbances. This
process is illustrated in Fig. 1.
Identifying the Requirements for Resilient Production … 127
Disturbance analysis and the resilience capability requirements are used to develop
strategies for utilising the underlying response capability in the system to cater for
disturbances. The resilience strategies should align with the following:
• Resilience strategies should be aligned with the phases of disruption. The
resilience phases indicate the timing of implementing the strategy. This implies
The key requirements for handling disturbances from a tracking perspective are the
need to update the operational status (awareness) and the ability to detect the
occurrence of a disturbance.
The tracking system should be able to capture and process information from
various production resources. The need for tracking from a resilience perspective
gives rise to certain characteristics. The tracking system should be able to capture
and sense data at an aggregate level, relating to the behaviour of a processing
line rather than individual machines or products [6]. Similarly, the data should
be processed and analysed at a higher level by combining information from people,
products and resources [6]. Additionally, the tracking system should be able to
capture event-related information rather than raw data. This requires the
tracking system to have additional functionality, moving from data logging to
recording event-related data.
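As an illustration of what event-related (rather than raw) tracking data might look like, the sketch below defines a minimal event record associating a uniquely identified product with a resource, location and status, plus an aggregate-level disturbance check. All field and event names are invented for illustration, not taken from any standard or from the cited framework.

```python
# Hypothetical sketch of event-level tracking records, illustrating the
# move from raw data logging to recording event-related data.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrackingEvent:
    product_id: str          # uniquely identified product
    resource: str            # machine or line reporting the event
    event_type: str          # e.g. "operation_started", "quality_failed"
    location: str            # current location of the product
    status: str              # condition/status after the event
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def detect_disturbance(events):
    """Flag event types that indicate a disturbance (aggregate-level check)."""
    return [e for e in events
            if e.event_type in {"quality_failed", "resource_breakdown"}]

history = [
    TrackingEvent("P-001", "cell_1", "operation_started", "cell_1", "in_process"),
    TrackingEvent("P-001", "cell_1", "quality_failed", "buffer_2", "rejected"),
]
print([e.event_type for e in detect_disturbance(history)])
```

The point of the sketch is that each record is already an interpreted event tied to a product's location, condition and history, so higher-level analysis can operate on it directly instead of on raw sensor logs.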
The tracking system should be able to communicate the required information
to the various production resources. The tracking system should be able to manage
information by communicating the required message to the right entity and by
storing the data in a meaningful manner. Particularly for resilient production
systems, there is a requirement for automatic capture of real-time data on uniquely
identified products, providing visibility of operations by associating products with
their current location, condition/status and history [7]. Additionally, it is also
important to capture process parameters and resource data and to associate them
with products.
The tracking system should be interoperable. In order to capture data from dis-
parate sources, it is essential to consider existing standards and issues related to
interoperability. Tracking in a production set-up needs to combine data from physical
resources and control systems; it is thus important to integrate this information in a
seamless manner. Standards on communication and data representation must be
considered.
130 R. Srinivasan et al.
The role of production control is to interface with planning and scheduling, and to
execute the respective operations based on the schedule. Resilient production
control, in addition, should determine and/or anticipate deviations and make the
necessary control adjustments accordingly.
Control should be able to communicate real-time information and incor-
porate it in analysis and decision making. There is an inherent need to
capture information from the control system and communicate it to the tracking
system and wider business entities. This allows the production system to gain
operational visibility for disturbance handling. Additionally, the control system
should be able to incorporate information from disparate sources (through the
tracking system) to analyse and act on the information signals.
The control system should be able to infer the current state (local and global)
and predict/identify the onset of disturbances. For resilient production systems, it is
essential for the control system to know the operational state in order to detect
disturbances. Also, the control system should be able to forewarn of or predict the
occurrence of disturbances.
Control should have the ability to react for handling or coping with
disturbances. In addition to detecting disturbances, the resilient control system
should dynamically react to or cope with disturbances. In this respect the following
requirements are identified:
Control should be de-centralised: Centralised control becomes complex
and difficult to adapt for handling disturbances [8]. Therefore, distributed intelligent
control is more suitable for resilient production systems. In order to be resilient,
the distributed control should be product- or resource-based.
– In a resource-based architecture, the set of resources is able to allocate jobs
without centralised support, allowing the system to be flexible and reconfig-
urable [8].
– In a product-based architecture, the customer's order/product drives the produc-
tion process by negotiating with the individual resources. This allows the system
to cope with variations in customers' preferences and customisation.
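The product-based idea can be sketched with a simple bidding protocol, in which an order requests completion estimates from the resources and books the best one without any centralised scheduler. This is a minimal, hypothetical illustration: the class names, processing times and bid rule are ours, not taken from the cited holonic architectures.

```python
# Minimal sketch of product-driven allocation via bidding (hypothetical).
class Resource:
    def __init__(self, name, queue_time):
        self.name = name
        self.queue_time = queue_time   # current workload, in minutes

    def bid(self, operation):
        """Offer a completion estimate for the requested operation."""
        return self.queue_time + 10    # assume 10 min nominal processing time

    def accept(self, operation):
        self.queue_time += 10          # job booked; workload grows

def allocate(order, operation, resources):
    """The order negotiates directly with resources (product-based control)."""
    best = min(resources, key=lambda r: r.bid(operation))
    best.accept(operation)
    return best.name

mills = [Resource("mill_A", 12), Resource("mill_B", 5)]
print(allocate("order_1", "milling", mills))  # least-loaded mill wins the bid
print(allocate("order_2", "milling", mills))  # load has shifted, so the other wins
```

Because each booking updates the winning resource's workload, successive orders spread across resources automatically, which is exactly the flexibility a disturbance (e.g. a breakdown removing one bidder) can exploit.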
Control should be adaptable (self-organising) and utilise dynamic capabil-
ities as needed: In order to cope with disturbances, the control system should be
adaptable and exploit the flexibility provided by planning, processes, resources and
operational flow. Additionally, the system should be re-configurable, and the
control system should therefore be self-organising.
Inspection information is used to decide whether the required part quality is achieved.
If parts fail, buffers are utilised to re-assign parts to orders. A unique ID on each
part helps in associating/disassociating parts to orders. To handle part misplace-
ments, the robot's data matrix reader is used to read the unique ID before the start
of operations, thereby eliminating the possibility of a wrong product assembly.
• Information Handling: The control system analyses the part ID of each part before
proceeding with the operation. Additionally, the control system transfers
real-time information regarding the status of orders/parts and resources.
• Operational State Awareness: The control system communicates regularly
with the tracking system before and during operations, thus enabling it to
determine the current state and detect the occurrence of disturbances.
• React/control: A distributed holonic control principle is implemented, whereby
during a quality disturbance the orders dynamically allocate parts through nego-
tiation with other orders' parts and buffers. Resources also co-operate with
orders/parts to carry out the job sequences.
4 Conclusions
References
1. Hu, Y., Li, J., Holloway, L.: Resilient control for serial manufacturing networks with advance
notice of disruptions. IEEE Trans. Syst. Man Cybern. 43, 98–114 (2013)
2. Rieger, C., Gertman, D., McQueen, M.: Resilient control systems: next generation design
research. In: 2nd Conference on Human System Interactions, HSI’09, pp. 632–636, May 2009
3. Matson, J., McFarlane, D.: Assessing the responsiveness of existing production operations. Int.
J. Oper. Prod. Manage. 19(8), 765–784 (1999)
4. Hu, Y., Li, J., Holloway, L.: Towards modeling of resilience dynamics in manufacturing
enterprises: literature review and problem formulation. In: IEEE International Conference on
Automation Science and Engineering, pp. 279–284. CASE 2008, Aug 2008
5. Zhang, W., van Luttervelt, C.: Toward a resilient manufacturing system. CIRP Ann. Manuf.
Technol. 60(1), 469–472 (2011)
6. McFarlane, D., Parlikad, A., Neely, A., Thorne, A.: A framework for distributed
intelligent automation systems developments. In: Borangiu, T., Dolgui, A., Dumitrache, I.,
Filip, F.G. (eds.) 14th IFAC Symposium on Information Control Problems in Manufacturing,
vol. 14, pp. 758–763 (2012)
7. Brintrup, A., Ranasinghe, D., McFarlane, D.: RFID opportunity analysis for leaner manufac-
turing. Int. J. Prod. Res. 48(9), 2745–2764 (2010)
8. Bussmann, S., McFarlane, D.: Rationales for holonic manufacturing control. In: Proceedings of
the 2nd International Workshop on Intelligent Manufacturing System (1999)
9. Rieger, C.: Notional examples and benchmark aspects of a resilient control system. In: 3rd
International Symposium on Resilient Control Systems (ISRCS), pp. 64–71, Aug 2010
Requirements Verification Method
for System Engineering Based on a RDF
Logic View
1 Introduction
Most contributions on RE are either from the software engineering field or focus on
the activity of requirements elicitation [1, 11]. A complete literature review of the
different methods for elicitation has also been proposed in [8]. Even regarding
requirements on products or services, research on requirements verification has led
to specific algorithms for manual verification methods, one requirement at a time,
generally on the basis of textual statements [3, 13]. Another part of the literature is
generally designated as artefact-based RE and focuses on classifying the require-
ments prior to their verification. The proposed models are thus focused on requirements
management and traceability. However, no conceptual formalization is proposed in
these works, which generally settle for textual statements, as is the case for
ISO 15288 or in the SysML formalism. For instance, Berkovich et al. [4] note that a
Requirements Verification Method for System Engineering … 137
limit of the RD-Mod method is the absence of a semantic link between the
requirements list (a document) and the functional architecture of the product. Yet a
necessary condition for automatically verifying the compliance of a product with
requirements is a reliable semantic formalization of the requirements, linked to the
functional and organic system definition and associated with a generic model. This
work is therefore not about the elicitation of requirements, which are considered as
input. Nor does it contribute to the process of requirements management within a
workflow. This contribution presents a generic model for requirements
elicited beforehand that allows compliance with a full set of requirements to be
verified automatically through reasoning on the logical view of the product, while
ensuring traceability throughout the life cycle of the product. In order to do so, the
genericness of the requirement model, the semantic richness of the system
representation, and the mappings between them are crucial.
its satisfaction, rich semantics are needed, allowing a distinction among the
different relations and concepts involved in the definition of requirements.
Automation of the verification process: given the scale and the complexity of the
considered systems, relying on human expertise alone to verify the requirements is
not sufficient. An automatic verification process can resolve a large number of
requirements, saving the engineers' expertise for the cases where it is most needed.
Atomic requirement verification: the complexity and scale of a nuclear power
plant induce a potentially arbitrarily large, interconnected logical network. The
information needed to verify each requirement must be reduced as much as possible
in order to limit the analysed data, and thus to obtain results in an acceptable time
frame.
Reasoning reliability: in the context of nuclear engineering, reliable processes and
results are crucial. Some requirements are related to safety or nuclear security, for
which it is necessary for the verification to be as reliable as possible.
Genericness of the model: a generic model is chosen for requirements, in order
both to use a project-agnostic syntax for requirements definition and to use the same
algorithm for the verification of any requirement, avoiding resort to specific
methods and algorithms for different cases.
A conceptual and generic model of requirements for verification has been proposed
and discussed in [5] and is presented in Fig. 1. It holds the concepts involved in a
requirement, but it is not implementable as it is. This conceptualization relies on five
generic elements: first the circumstantial conditions, under which the requirement
constrains an attribute of a constrained element, which can be a function, a
system or a component. To comply with the requirement, the attribute must be
consistent with a set of admissible values: the criterion. Verification of a
requirement's satisfaction then consists in the comparison of the actual values of the
attribute with the criterion. To avoid any ambiguity in the search for all the con-
strained elements of a given requirement, contextual information is added as the
perimeter of the requirement.
perimeter of a requirement. For instance “The valves of the cooling system must be
manoeuvrable locally” requires the “valves” as a constrained element to present
the attribute “manoeuvre type” equal to the criterion “local”. It does not concern
every valve in the plant though, only those in its perimeter, “the cooling system”.
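As a hedged illustration of the five generic elements, the valve example can be sketched as a small data structure (the field and method names are ours, chosen for illustration; they are not the cited model's syntax):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Illustrative sketch of the five generic elements of the requirement model."""
    conditions: str           # circumstantial conditions (assumed placeholder)
    perimeter: str            # contextual scope, e.g. "cooling system"
    constrained_element: str  # a function, system or component, e.g. "valve"
    attribute: str            # e.g. "manoeuvre type"
    criterion: set = field(default_factory=set)  # set of admissible values

    def satisfied_by(self, actual_value) -> bool:
        # Verification: compare the actual attribute value to the criterion
        return actual_value in self.criterion

# The valve example from the text
req = Requirement(
    conditions="nominal operation",
    perimeter="cooling system",
    constrained_element="valve",
    attribute="manoeuvre type",
    criterion={"local"},
)
print(req.satisfied_by("local"))   # valve manoeuvrable locally -> True
print(req.satisfied_by("remote"))  # -> False
```

Only valves within the perimeter would be checked against this requirement; the perimeter restricts the search for constrained elements, as the text notes.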
Fig. 2 Twin graphs representing the conceptual and factual data of the system
140 A. Cornière et al.
On the left, function_1 has its own duration, while on the right a function "may
have" a duration as a characteristic. The expression of requirement 4 is shown in red
and has two parts: the constraint data, i.e. the constrained element—has—attribute
triple, and the pattern specification for attribute—less than—1 h. Finally, the ver-
ification, in green, is the comparison of the actual value from the data side with the
specification. In this example two values from different simulations are considered;
one satisfies the requirement while the other does not.
RDF Syntax: As seen above, the logical view of the system is represented as a
network of relations between individuals. RDF is well suited to declaring individuals
and their relations as triples, in the form "item1—relation—item2", which makes it
possible to represent a directed, labelled multi-graph.
Generic modelling of the requirements from a twin network of triples: The
requirements can be described as RDF triples as well, a first triple of the form
"constrained element—has—attribute" and a second of the form "attribute—is
within—criterion" being the defining pattern for verification itself. Using RDF as a
means to map this expected state of the design makes it possible to distinguish,
within the data model, between the specification and the actual state of affairs, while
the capability to process them together remains. In terms of abstraction level, the
requirement is effectively a bridge from the specification (a conceptual
representation) to the reality of the design or implementation (a specific occurrence
of the system).
Method for automatic verification of requirements: Verification occurrences can
then be derived from the requirement definitions as triples, by matching the triples
against sub-views of the logical view (i.e. views that consider only the relations
"has" from a concept to an attribute, and "is within" from an attribute to a criterion).
This generic "inspector" consists of a simple comparison plus traceability meta-data:
one is created and run for each of the occurrences in the product view that match the
constrained-element defining pattern. Their individual and aggregated results in turn
determine global satisfaction of the (conceptual) requirement as a whole.
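A minimal sketch of this matching, using the duration example from Fig. 2 (plain Python tuples stand in for RDF triples; the relation names follow the text, everything else is an assumption for illustration):

```python
# Factual data: the logical view as (subject, relation, object) triples.
data = [
    ("function_1", "has", "duration_1"),
    ("duration_1", "value", 0.5),   # hours, from one simulation
    ("function_2", "has", "duration_2"),
    ("duration_2", "value", 2.0),   # hours, from another simulation
]

# Requirement as a twin pair of triples:
#   "constrained element - has - attribute" and "attribute - is within - criterion"
requirement = [
    ("function", "has", "duration"),
    ("duration", "is within", lambda v: v < 1.0),  # "less than 1 h"
]

def verify(data, requirement):
    """Generic 'inspector': one comparison per occurrence matching the pattern."""
    (_, rel, attr_kind), (_, _, criterion) = requirement
    results = {}
    for subj, r, attr in data:
        # Match occurrences of the constrained-element defining pattern,
        # i.e. any individual that 'has' an attribute of the required kind.
        if r == rel and attr_kind in attr:
            value = next(o for s, r2, o in data if s == attr and r2 == "value")
            results[subj] = criterion(value)
    return results

occurrences = verify(data, requirement)
print(occurrences)             # one occurrence satisfies, one does not
print(all(occurrences.values()))  # aggregated (global) satisfaction
```

The aggregation in the last line mirrors how individual inspector results determine global satisfaction of the conceptual requirement.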
4 Case Study
The case study is a work in progress drawn from the field of nuclear plant engi-
neering, a field characterised by systems of great complexity and large scale. The
semantic elements necessary to the representation of a nuclear plant number several
billion, not counting abstract objects representing, for instance, system groups or
phases of life. The complexity of a nuclear installation also arises from the multiple
non-trivial interactions of its elements, tangibly in the physical processes involved,
as well as in the data model representing them. In these two respects, nuclear
engineering is representative of complex, large-scale system engineering. Consider
a requirement from this application case (illustrated in Fig. 2), "Safety functions
must be performed in less than an hour". To model this requirement, we use the
The model and methods proposed in this contribution have limits to take into
account. First and foremost, they rely on a consistent semantic representation of the
actual data to verify. It can be problematic, if possible at all, to retrieve these data
into the logical view if they are created or altered in specific environments, such as
specialized CAD software. This interoperability problem has been treated and
discussed in [10]. Overcoming this issue can be done with semantic mappings
between the global logical view and the various product views; such mappings
demand reasoning capabilities, as presented in the perspectives below. The vast
and thorough logical view also raises an issue related to scale: as the system's data
grows in number of relations, the complexity of the mapping with patterns grows
exponentially, potentially leading to arbitrarily long processing. As of this contri-
bution, no assumptions were made on the structure of the logical network, which
may present properties that could be used to optimize processing on the network for
allocation. The perimeter is another possible mitigation of the scale. As for the
verification itself, since the verification model is both light and generic [6], it can
be processed efficiently in batches. The model and methods described in this
contribution are based on a formalism designed for data exchange and storage, so
using it not only across the life cycle of the product, but also from one project to a
following one in the same family of products, is a foreseeable perspective. Storing
requirement templates in the knowledge base could allow conceptual requirements
to be generated from the knowledge and rules contained in it, possibly through rule
application. Reasoning directly on the "patterns" and the product network is
not trivial, not only due to the size of the graph, but mainly because of its com-
plexity and the variety of rules that need to be applied to make it consistent;
e.g. transitivity of the system—has—subsystem relationship has to be computed. In
practice, completion of the information requires many additions to the network.
Ontologies can be leveraged, taking benefit from their reasoning capabilities and
support of SWRL rules to make the implicit part of the information explicit, prior
to generating a more complete RDF graph of individuals.
6 Conclusion
References
1. Ahmed, N., Matulevicius, R.: A method for eliciting security requirements from the business
process models. In: CAiSE Forum and Doctoral Consortium, pp. 57–64 (2014). url http://ceur-
ws.org/Vol-1164/PaperVision08.pdf
2. Arnold, S.: ISO 15288 Systems Engineering System Life Cycle Processes. International
Standards Organisation (2002)
3. Ben-David, S., Sterin, B., Atlee, J.M., Beidu, S.: Symbolic model checking of product-line
requirements using SAT-based methods. In: IEEE/ACM 37th IEEE International Conference on
Software Engineering (ICSE) 2015, vol. 1, pp. 189–199. IEEE (2015)
4. Berkovich, M., Leimeister, J.M., Hoffmann, A., Krcmar, H.: A requirements data model for
product service systems. Req. Eng. 19(2), 161–186 (2014)
5. Cornière, A., Fortineau, V., Paviot, T., Lamouri, S., Goblet, J.L., Platon, A., Dutertre, C.:
Modelling requirements in service to PLM for long lived products in the nuclear field. In:
Advances in Production Management Systems. Innovative and Knowledge-Based Production
Management in a Global-Local World, pp. 650–657. Springer, Berlin (2014)
6. Cornière, A., Fortineau, V., Paviot, T., Lamouri, S.: Towards a framework for integration of
requirements engineering in PLM. In: 15th IFAC Symposium on Information Control
Problems in Manufacturing INCOM 2015. IFAC-PapersOnLine 48(3), 283–287 (2015). doi
http://dx.doi.org/10.1016/j.ifacol.2015.06.095. url http://www.sciencedirect.com/science/
article/pii/S2405896315003341
7. ISO: ISO 10303 (2014)
8. Nisar, S., Nawaz, M., Sirshar, M.: Review analysis on requirement elicitation and its issues.
Int. J. Comput. Commun. Syst. Eng. (IJCCSE) 2, 484–489 (2015)
9. OMG: SysML v 1.3 (2012). http://www.omg.org/spec/SysML/1.3
10. Paviot, T.: Méthodologie de résolution des problèmes d’interopérabilité dans le domaine du
product lifecycle management. Ph.D. thesis, Ecole Centrale Paris (2010)
11. Rahman, M., Ripon, S., et al.: Elicitation and modeling non-functional requirements-a pos
case study (2014). arXiv preprint arXiv:1403.1936. url http://arxiv.org/pdf/1403.1936
12. Tsuchiya, S.: Improving knowledge creation ability through organizational learning. In:
Proceedings of International Symposium on the Management of Industrial and Corporate
Knowledge, ISMICK, Compiègne, France (1993)
13. Viriyasitavat, W., Da Xu, L.: Compliance checking for requirement-oriented service workflow
interoperations. IEEE Trans. Ind. Inform. 10(2), 1469–1477 (2014)
14. Zave, P., Jackson, M.: Four dark corners of requirements engineering. ACM Trans. Softw.
Eng. Methodol. (TOSEM) 6(1), 1–30 (1997)
Approaching Industrial Sustainability
Investments in Resource Efficiency
Through Agent-Based Simulation
The modern global industrial system has delivered major benefits in wealth cre-
ation, technological advancement, and enhanced well-being in many aspects of
human life. However, industry is estimated to be responsible for some 30 % of the
greenhouse gas (GHG) emissions in industrialized countries and is a major consumer
of primary resources [1], and resource scarcity and the resulting price and supply
issues require new strategies and innovation at different levels [2]. Although some
progress towards sustainability has been achieved (e.g. eco-efficiency, cleaner
production, recycling initiatives, and extended producer responsibility directives),
overall sustainability at the macro-level has not improved. Similarly, despite
impressive improvements in material productivity and energy efficiency in many
industry sectors, overall energy and material throughput continues to rise. Even
where an industrial firm-level innovation seems to be highly effective,
the system-level effectiveness of most currently proposed models on the current
manufacturing value chain is largely unproven, and the long-run implications for
sustainability are poorly understood, partly for lack of means of measuring these
effects [3] or of supporting decision making [4]. To develop more sustainable
industrial systems, industrialists and policy makers need to better understand how to
respond to economic, environmental, and social challenges and transform industrial
behaviour accordingly, by leveraging appropriate industrial technology investments
to reshape the current manufacturing value chain. Investments have to be gathered
from the private as well as the public side, taking into account the stakeholders'
macro-economic framework. In order to identify effective incentives and enablers to
channel, at least partially, financial capital investments into sustainability, the
dynamic interactions between financial capital, natural resources and technology
have to be analysed in their interdependencies, which increases the overall complexity.
Several articles have focused attention on attaining successful levels of sustain-
ability through resource efficiency coupled with minimal impact on governments,
firms, households, research and public players, such as Meyer et al. [5], Behrens
et al. [6], Millock et al. [7], and Söderholm [8], as summarized in Table 1.
Agent-based modelling (ABM) has gained prominence through new insights on the
limitations of traditional assumptions and approaches, as well as computational
advances that permit better modelling and analysis of complex systems, par-
ticularly in the sustainability domain [9]. Agent-based models in the industrial
sustainability field are emerging, and various authors have identified their potential
value and effectiveness and advocated such simulation approaches. Bousquet et al.
[10] provide a review of multi-agent simulations and ecosystem management, and
Trentesaux and Giret [11] discuss the adoption of manufacturing holons for sustain-
able manufacturing operations control. Monostori et al. demonstrated the possibility
of ABM integrating manufacturing sustainability through multi-agent systems [12],
while Davis et al. used ABM integrated with life cycle assessment to investigate the
effects of an energy-based infrastructure system on its environment [13]. Yang et al.
used an agent-based simulation approach to investigate economic sustainability in
evaluating a waste-to-material recovery system [14]. Typically, such works have
focused on specific environmental issues such as carbon or waste, and generally
these are modelled at the individual firm level. Cao et al. demonstrated agent inter-
actions between the factory, consumers and the environment, focusing on eco-
industrial parks [15]. The findings of this research established the potential and
capability of ABM for investigating decision options for optimal eco-industrial
systems in both ‘open-loop’ and ‘closed-loop’ structures/systems.
In summary, ABM is an interesting emerging field that seems to have significant
potential for exploring the complex issues of transitions towards industrial sus-
tainability at system level. Applied to the next generation of industrial systems and
manufacturing value chains, such models have the potential to give industrialists a
needed test-bed for safe, low-cost management and policy experiments.
The reported research aims at using agent-based modelling to simulate the
dynamics of capital investment in technology for promoting industrial
sustainability. The dynamic model adopted is the EURACE agent-based framework
[16–18], a platform demonstrating the potential for modelling large complex
systems with many thousands of heterogeneous agents.
The EURACE platform has been modified by coupling the environmental sector
with the other established macroeconomic dimensions, integrating material input,
resource efficiency and environmental policy, and explicitly considering the
extraction and manufacturing phases.
1 The pecking order theory [19] is adopted to determine a hierarchy of financial sources for the firm.
2. A new agent, called “mining company”, which extracts raw materials to be sold
to CGPs. We assume for simplicity that the raw materials price is exogenously
given and that there are no extraction costs. Therefore, revenues and profits of
the mining company coincide. Profits are paid out as dividends to the mining
company’s shareholders, which can be partially or totally the households pop-
ulating the Eurace economy. Raw materials costs can therefore be “recycled”
partially or totally back into the Eurace economy.
3. Capital goods are characterized by a resource efficiency parameter whose value
depends on the time capital goods are produced and delivered to CGPs. In
particular, the parameter’s value increases according to an exogenously given
yearly growth rate IR. The vintages of capital goods owned by each CGP set
their resource efficiency.
4. The government levies a new tax, called the environmental or material tax, which is
applied to each CGP and computed as a percentage of the value of the raw
materials input. At the same time, the government subsidizes CGPs’ investments
by rebating a percentage of their capital goods expenses, up to the amount of
environmental taxes paid (restricted case—S1) or without limitation (unre-
stricted case—S3).
5. The increase in resource efficiency and the related savings in raw materials costs and
environmental taxes, as well as subsidies, are taken into account by CGPs in their
net present value calculation when deciding on investments in new capital goods.
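The modifications above can be sketched roughly as follows. The functional forms, parameter names and the tax proxy are our assumptions for illustration, not the EURACE implementation: a CGP compares discounted savings in raw-material costs and environmental tax, plus the subsidy rebate, against the capital goods expense.

```python
def resource_efficiency(base_eff, ir, years):
    """Vintage effect (point 3): capital-goods efficiency grows at yearly rate IR."""
    return base_eff * (1.0 + ir) ** years

def npv_of_investment(capex, material_saving_per_year, tax_rate,
                      subsidy_rate, discount_rate, horizon_years,
                      restricted=True):
    """Illustrative NPV of a capital-goods investment for a CGP (point 5).

    Savings comprise avoided raw-material costs plus the avoided environmental
    (material) tax on them; the government rebates subsidy_rate * capex,
    capped at environmental taxes paid in the restricted case (S1).
    """
    yearly_saving = material_saving_per_year * (1.0 + tax_rate)
    pv_savings = sum(yearly_saving / (1.0 + discount_rate) ** t
                     for t in range(1, horizon_years + 1))
    taxes_paid = material_saving_per_year * tax_rate * horizon_years  # rough proxy
    subsidy = subsidy_rate * capex
    if restricted:  # S1: rebate capped by environmental taxes paid
        subsidy = min(subsidy, taxes_paid)
    return pv_savings + subsidy - capex

# Example: 2 % yearly efficiency growth (IR2), 2.5 % tax, 5 % subsidy (S1)
print(resource_efficiency(1.0, 0.02, 10))  # about 1.219 after 10 years
print(npv_of_investment(capex=100.0, material_saving_per_year=15.0,
                        tax_rate=0.025, subsidy_rate=0.05,
                        discount_rate=0.05, horizon_years=10) > 0)
```

A positive NPV triggers the investment; in the unrestricted case (S3) the subsidy cap is simply dropped, which can only raise the NPV.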
This section provides views from simulation experiments through a set of snapshots
of revenue-recycling effects and efficiency investment decisions under an
environmental tax and subsidy. The research deals with subsidy receipts on
capital costs for environmental-tax-paying firms only, at 9 environmental tax levels
on raw material/production input (only one level is reported in this paper version,
while the complete experimental set can be accessed at the following Dropbox
link2). The experimental simulation is carried out over a sample period of
20 years (240 months).
Figures 1, 2 and 3 compare two levels of the recycled percentage of the mining com-
pany’s earnings returned as revenue into the Eurace economy, of which 100 % (ER1)
produced better results than 50 % (ER05). For a simpler representation, the figures are
produced only for one selected environmental tax rate of 2.5 %, with reference to one
subsidy percentage (5 % restricted—S1, while the unrestricted case—S3 is not
reported) and to both efficiency dynamics (IR0 and IR2).
Results show that the higher the recycled earnings, the better the system performs.
Increases in material consumption, GDP and employment levels are observed in
2 https://www.dropbox.com/sh/e052vgaix6a1x87/AADjTOSe0oRe4vcuoHcyKxCQa?dl=0.
150 F. Tonelli et al.
Fig. 1 Material consumption and waste levels (S1: IR0, IR2, ER1 vs. ER05)
Fig. 2 GDP and unemployment (S1: IR0, IR2, ER1 vs. ER05)
both efficiency dynamics. That is, with or without efficiency investment gains, the
system improves with increasing recycled earnings.
For further validation of the efficiency investment dynamics and their distinguishing
differences, the next section focuses on the 20-year averaged performance at 9
environmental tax levels, comparing efficiency investment decisions (partial results
are presented because of space limitations3).
For these observations, Figs. 3, 4, 5, 6 and 7 display trends between the old system
(IR0) and the system with efficiency investment gains (IR2). A target level of 2 %
annual efficiency gains was used, in relation to the observed average annual resource
productivity growth rate for the EU27 members. The focus is on the case of 100 %
recycled earnings, in reference to the best validated performance (see Figs. 1 and 2).
3 NB: IR0 (straight lines) versus IR2 (dashed lines). Subsidy percentages: 0 %-red; 5 %-blue; 10 %-pink; 15 %-black; 20 %-green.
[Figure: material levels versus material tax rate (%), IR0 versus IR2 at subsidy levels 0–20 % (S1)]
[Figure: waste levels versus material tax rate (%), IR0 versus IR2 at subsidy levels 0–20 % (S1)]
From the figures, it is evident that systems with efficiency investments produced
higher material consumption. Figure 4 shows the inversely proportional waste
emission levels, which are higher under efficiency-gain systems due to suspected
rebound effects of the higher material consumption. On the positive side, Fig. 4 also
shows the reduced margin in waste levels between the system without efficiency
gains (IR0) and the efficiency-gain system (IR2), indicating an improvement in the
waste-release gap. Furthermore, the critical decline in material (Fig. 3) and waste
levels (Fig. 4) at higher material tax rates indicates the powerful effect of the
environmental policy tool for reducing consumption of a targeted good, though at
the cost of a drop in employment (Fig. 6) and GDP (Fig. 7).
[Figure: IR0 versus IR2 curves at subsidy levels 10–20 %, plotted against material tax rate (%)]
For intensity, the lower the value the better the system, although lower values
may sometimes indicate the presence of poor economic performance. Concerning
intensity, the efficiency-gain system produces mostly lower trends at all levels, sug-
gesting a better overall system (Fig. 5). Generally, the introduction of subsidy
payments proves effective in minimizing shock effects.
[Figure: IR0 versus IR2 curves at subsidy levels 0–20 %, plotted against material tax rate (%)]
5 Conclusions
comparison, the next stage will consider a sharper focus on subsidy method com-
parison and impact trends. Other notable future features include the introduction of
recyclable products and multiple material input options.
References
1. Evans, S., et al.: Towards a Sustainable Industrial System (2009). Available at: http://www.
ifm.eng.cam.ac.uk/sis/
2. Tonelli, F., Evans, S., Taticchi, P.: Industrial sustainability: challenges, perspectives, actions.
Int. J. Bus. Innov. Res. 7(2), 143–163 (2013)
3. Taticchi, P., Tonelli, F., Pasqualino, R.: Performance measurement of sustainable supply
chains: a literature review and a research agenda. Int. J. Prod. Performance Manage. 62(8),
782–804 (2013)
4. Taticchi, P., Garengo, P., Nudurupati, S.S., Tonelli, F., Pasqualino, R.: A review of
decision-support tools and performance measurement and sustainable supply chain
management. Int. J. Prod. Res. 1–22 (2014)
5. Meyer, B., Distelkamp, M., Wolter, M.I.: Material efficiency and economic-environmental
sustainability. Results of simulations for Germany with the model PANTA RHEI. Ecol. Econ.
63(1), 192–200 (2007)
6. Behrens, A., Giljum, S., Kovanda, J., Niza, S.: The material basis of the global economy:
worldwide patterns of natural resource extraction and their implications for sustainable
resource use policies. Ecol. Econ. 64(2), 444–453 (2007)
7. Millock, K., Nauges, C., Sterner, T.: Environmental taxes: a comparison of French and
Swedish experience from taxes on industrial air pollution. CESifo DICE Rep. J. Inst.
Comparison 2(1), 30–34 (2004)
8. Söderholm, P.: Taxing virgin natural resources: lessons from aggregates taxation in Europe.
Resour. Conserv. Recycl. 55(11), 911 (2011)
9. Thomas, A., Trentesaux, D.: Are intelligent manufacturing systems sustainable? In: Borangiu,
T., Trentesaux, D., Thomas, A. (eds.) Services Orientation in Holonic and Multi-agent
Manufacturing and Robotics, pp. 3–14. Springer International Publishing, Berlin (2014)
10. Bousquet, F., Le Page, C.: Multi-agent simulations and ecosystem management: a review.
Ecol. Model. 176(3–4), 313–332 (2004)
11. Trentesaux, D., Giret, A.: Go-green manufacturing holons: a step towards sustainable
manufacturing operations control. Manuf. Lett. (2015, to appear)
12. Monostori, L., Váncza, J., Kumara, S.R.T.: Agent-based systems for manufacturing. CIRP
Ann. Manuf. Technol. 55(2), 697–720 (2006)
13. Davis, C., Nikolić, I., Dijkema, G.P.J.: Integration of life cycle assessment into agent-based
modeling. J. Ind. Ecol. 13(2), 306–325 (2009)
14. Yang, Q.Z., Sheng, Y.Z., Shen, Z.Q.: Agent-based simulation of economic sustainability in
waste-to-material recovery. In: 2011 IEEE International Conference on Industrial Engineering
and Engineering Management, pp. 1150–1154 (2011)
15. Cao, K., Feng, X., Wan, H.: Applying agent-based modeling to the evolution of eco-industrial
systems. Ecol. Econ. 68(11), 2868–2876 (2009)
16. Cincotti, S., Raberto, M., Teglio, A.: Credit money and macroeconomic instability in the
agent-based model and simulator Eurace. Economics: The Open-Access, Open-Assessment
E-Journal 4, 2010-26 (2010)
17. Cincotti, S., Raberto, M., Teglio, A.: Part II Chapter 4: The EURACE macroeconomic model
and simulator. In: Aoki, M., et al. (eds.) Complexity and Institutions: Markets, Norms and
Corporations, Masahiko Aoki, Kenneth Binmore, Simon Deakin, Herbert Gintis, pp. 81–104.
Palgrave Macmillan (2012)
18. Raberto, M., Teglio, A., Cincotti, S.: Debt deleveraging and business cycles. An agent-based
perspective. Economics: The Open-Access, Open-Assessment E-Journal 6, 2012-27 (2012)
19. Myers, S., Majluf, N.: Corporate financing and investment decisions when firms have
information investors do not have. J. Financ. Econ. 13(2), 187–221 (1984)
20. Deaton, A.: Household saving in LDCs: credit markets, insurance and welfare. Scand. J. Econ.
94(2), 253–273 (1992)
21. McLeay, M., Radia, A., Thomas, R.: Money creation in the modern economy. Bank Engl.
Q. Bull. 54(1), 14–27 (2014)
22. Ekins, P., Pollitt, H., Summerton, P., Chewpreecha, U.: Increasing carbon and material
productivity through environmental tax reform. Energy Policy 42, 365–376 (2012)
23. Conrad, K.: Taxes and subsidies for pollution-intensive industries as trade policy. J. Env.
Econ. Manage. 25(2), 121–135 (1993)
Part IV
Holonic and Multi-Agent System Design
for Industry and Services
Increasing Dependability by Agent-Based
Model-Checking During Run-Time
1 Introduction
2.1 Model-Checking
The idea of formal verification was originally based on finding a deductive math-
ematical proof of a software construct with regard to a specification formulated in
temporal logic [1]. However, theorem proving is done manually and is not applicable
to the large, complex software programs implemented in the programmable logic
controllers (PLC) of today's aPS. Therefore, model-checking was introduced as an
automatic, algorithmic search method by Clarke, Emerson and Sifakis in
the early 1980s. Since the 1990s, model checking has also been a subject of
automation science, to enable formal verification for aPS.
Algorithmic approaches to verify hybrid systems containing discrete and con-
tinuous information about the aPS were developed by Silva et al. [2] and Stursberg
et al. [3]. There, complex continuous models, which possess an infinite number of
states, are first approximated by many small sub-models and subsequently dis-
cretized. Further approaches, focusing on model checking with a probabilistic
model for the abstraction of networked automated systems, were developed by
Greifeneder and Frey [4] and Greifeneder et al. [5]. A comparison of the strengths
and weaknesses of simulative and formal methods for the analysis of response time
was also carried out [6]. In conclusion, both methods have their own specific
characteristics and are not equally suitable, depending on the aim of the
analysis.
Verification of the PLC program code of an aPS without considering
the plant behaviour as part of the system was examined by Schlich and Kowalewski
[7], Biallas et al. [8] and Kowalewski et al. [9], based on over-approximation.
Furthermore, Kowalewski et al. implemented two optimizations to deal with the
state-space explosion problem [8]. Expanding this approach by integrating the aPS
itself, or separating the whole control system into small sub-systems (e.g. agents),
has not yet been considered.
The necessity of integrating the plant model into model-checking of logic con-
trollers, focused on untimed properties, was presented by Santiago and Faure [10]
and Machado et al. [11]. Therein, the verification results obtained without a
plant model, with a simple plant model, and with a detailed model were
compared. Faure et al. showed that only a detailed model of the plant is able to
verify every defined property, by combining simulative and formal methods,
enabling model-checking approaches to verify even complex aPS with continuous
behaviour.
1 Standards for interoperability among software agent platforms (FIPA), http://www.fipa.org, retrieved on 8/17/2015.
162 S. Rehberger et al.
[16]. The design paradigm of MAS in industrial control assigns to an agent a specific
perception (e.g. the sensor data) and an associated action space in which it carries
out tasks and manipulates the module's actuation.
Commonly, the MAS is divided into a resource agent for a production module
and a product agent for a WP. The decisions in the MAS lead to solving a given
production request by allocating the tasks to the agent classes and thereby
dynamically generating a production schedule during run-time. The description of the
capabilities and of the technical process for manufacturing the WP is stored within the
agent's knowledge base, and the exchange with other agents is conducted by
employing a common ontology to enable message encoding/decoding. The decision
making of an agent can be divided into two steps: deliberation, and reasoning for
deriving a plan for future execution [17]. Beyond the sheer possibility of offering a
production step, it is not ensured that the action can be carried out without failure.
This, however, is addressed by reasoning, which further poses a crucial step towards
estimating the processing of a new WP type.
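The two-step decision making can be sketched roughly as follows. This is a generic outline under our own assumptions, not the cited architecture; the class, method and parameter names are invented for illustration:

```python
class ResourceAgent:
    """Minimal sketch: deliberation selects candidate capabilities,
    reasoning checks that the step can actually be executed safely."""

    def __init__(self, capabilities, knowledge_base):
        self.capabilities = capabilities      # production steps this module offers
        self.knowledge_base = knowledge_base  # e.g. verified parameter ranges

    def deliberate(self, requested_step):
        # Step 1: can this agent offer the requested production step at all?
        return requested_step in self.capabilities

    def reason(self, requested_step, workpiece):
        # Step 2: offering a step does not ensure failure-free execution;
        # a knowledge-base range check stands in for run-time verification here.
        limits = self.knowledge_base.get(requested_step, {})
        return all(lo <= workpiece.get(param, lo) <= hi
                   for param, (lo, hi) in limits.items())

agent = ResourceAgent(
    capabilities={"stamp", "transport"},
    knowledge_base={"transport": {"weight_g": (10, 500)}},
)
wp = {"weight_g": 120}
print(agent.deliberate("transport") and agent.reason("transport", wp))  # True
```

A WP outside the verified range (say 900 g) would pass deliberation but fail reasoning, which is exactly the case where run-time model-checking becomes necessary.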
To enable an aPS to produce a WP that was not considered during design, a
concept for model checking at run-time based on MAS is presented. The aPS is
partitioned into modules, e.g. a handling or processing station, connected by
logistic connections, e.g. belt drives. Regarding flexibility, it is assumed that the
WP's specifications vary in parameters such as dimensions or weight, but not
in magnitude and general structure. In the scenario of a cyber-physical pro-
duction system (CPPS), the request for a new product has to be considered for
production in more than one plant, and consequently the decision must take per-
formance and safety indicators into account. In our case we realize this control by
the multi-agent paradigm. Two different kinds of agents are incorporated in our
approach: first the product agent, generating a request for a new product in the form
of a specification; secondly the resource agent, in the form of the entity carrying out
the production process. A redundant existence of functionalities in the form of
resources opens the solution space for flexible behaviour of the plant.
In case of an unknown product request, a module must ensure safety and fea-
sibility of the production process execution. Consequently, the agents need to carry
a knowledge base about their environment and the modules' respective physical
behaviour.
The model-checking mechanism is triggered after a deliberation process that
determines which resource agents are compatible with the particular production
steps of the product. The transformation of control software, e.g. PLC code, and
semiconductor hardware architecture into a representation for model-checking is
not the focus here, since advances have already been made in this field in the last
decade [18–20]. More importantly, the behaviour of the plant must be derived in
[Figure: agent interaction flow — a request triggers model-checking; a passed check yields a directly feasible offer (B), a failed check yields a counter-example; an alternative (A) may be feasible with a fail-safe strategy; the agents then decide an action and execute, or time out]
4 Evaluation
For evaluation of the concept, a Pick and Place Unit (PPU) for handling cylindrical work pieces (WPs) of different colours (white, black) and materials (plastic, metal) is used. Considering typical evolution scenarios of aPS, separated into sequential and parallel evolution, 16 scenarios based on the PPU were developed [21]. In the last evolution scenario the PPU contains the mechatronic modules separator, conveyor and stamp. For transportation of the WPs, an electrically driven crane with a vacuum gripper is located between these modules.
To maximize the throughput of WPs transported by the crane, the acceleration and deceleration distances of the crane have to be made as short as possible, which increases the inertial force on the WPs. Consequently, WPs oscillate briefly after the crane has stopped at the target area. Because of this oscillation, WPs may be dropped beside the storage area if the crane moves down directly after it has stopped. A sectional drawing of an oscillating WP at the storage area using an arbitrary set of parameters is shown in Fig. 2. Therein the y-axis is separated into three parts: around zero, positive and negative, which describe the position of the WP as being above, right or left of the storage area, respectively.
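The waiting-time trade-off can be made concrete with a small damped-oscillation model; the envelope formula and all parameter values below are illustrative assumptions, not taken from the chapter:

```python
import math

def settle_time(amp, zeta, half_width):
    """Earliest time after the crane stops at which the oscillation envelope
    amp*exp(-zeta*t) stays within the storage-area half-width, i.e. lowering
    the gripper can no longer drop the WP beside the area."""
    if amp <= half_width:
        return 0.0
    return math.log(amp / half_width) / zeta

def position(t, amp, zeta, omega):
    # Damped oscillation of the WP around the storage-area centre:
    # positive = right of the area, negative = left, near zero = above it.
    return amp * math.exp(-zeta * t) * math.cos(omega * t)

# Usage: a larger initial amplitude or slower decay forces a longer wait
# before the crane may safely move down.
print(round(settle_time(amp=0.05, zeta=0.8, half_width=0.01), 3))  # 2.012
```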
Fig. 2 Comparison of continuous and discrete oscillating WPs depending on their weight (m1 and
m2) with an arbitrary set of parameters
Fig. 3 Finite state machines for vertical (left) and horizontal (right) movement processes with
clocks x and y
2 http://www.mathworks.com, retrieved on 2/19/2016.
166 S. Rehberger et al.
The resulting model-checking problem can be compared to a proof of mutual exclusion. The problem formulated in computation tree logic (CTL) is written as EF(f) with f = (C1 && C2) || (C1 && C3). This means the model checker searches whether EF(f) holds, i.e. whether the processes of vertical and horizontal movement can ever enter the critical states C1–C3 at the same time, which would mean clamping the WP or dropping it beside the drop zone. Considering the oscillation frequency of the crane, which does not depend on the mass of the WP, the verification results are also valid for WPs whose mass is comparable to the verified mass (see Fig. 3, cross-marked line). To transport WPs with strongly differing masses by the crane, further verification runs including pre-calculations by the continuous model are essential. Based on the low number of states necessary to verify this mechanical phenomenon, run-time verifications in control systems, i.e. agents, are feasible.
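The reachability check behind such a CTL query can be sketched as an explicit-state search: EF(f) holds exactly if a state satisfying f is reachable. The product automaton below is a hypothetical toy, not the chapter's verified crane model:

```python
from collections import deque

def reachable(init, transitions, bad):
    """Explicit-state check of EF(f): breadth-first search over the product
    state space for a state satisfying f. Returning False proves that the
    critical sections are never entered simultaneously (mutual exclusion)."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if bad(s):
            return True            # counter-example state found
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Hypothetical product automaton of the vertical process (C1 = moving down)
# and the horizontal process (C2/C3 = WP swinging left/right of the drop
# zone); the guard on "down" is assumed to block while the WP oscillates.
trans = {
    ("up", "middle"):   [("down", "middle"), ("up", "left")],
    ("up", "left"):     [("up", "middle")],
    ("down", "middle"): [("up", "middle")],
}
bad = lambda s: s[0] == "down" and s[1] in ("left", "right")  # (C1&&C2) || (C1&&C3)
print(reachable(("up", "middle"), trans, bad))  # False: EF(f) does not hold
```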
Acknowledgment We thank the German Research Foundation (DFG) for funding this project as
part of the Priority Programme SPP 1593: Design for Future—Managed Software Evolution.
References
1. Clarke, E.M., Emerson, E.A., Sifakis, J.: Model checking: algorithmic verification and debugging. Commun. ACM 52(11), 74–84 (2009)
2. Silva, B.I., Stursberg, O., Krogh, B.H., Engell, S.: An assessment of the current status of
algorithmic approaches to the verification of hybrid systems. In: Proceedings of the 40th IEEE
Conference on Decision Control (Cat. No.01CH37228), vol. 3, pp. 2867–2874 (2001)
3. Stursberg, O., Lohmann, S., Engell, S.: Improving dependability of logic controllers by
algorithmic verification. In: IFAC World Congress 2005, pp. 104–109 (2005)
4. Greifeneder, J., Frey, G.: Probabilistic hybrid automata with variable step width applied to the analysis of networked automation systems. Discret. Syst. Des. 3, 283–288 (2006)
5. Greifeneder, J., Liu, L., Frey, G.: Comparing simulative and formal methods for the analysis of
response times in networked automation systems. In: IFAC World Congress 2008, vol. 1,
p. O3 (2008)
6. Kwiatkowska, M., Norman, G., Parker, D.: PRISM: probabilistic symbolic model checker.
Comput. Perform. Eval. Model. Tech. Tools 2324, 200–204 (2002)
7. Schlich, B., Kowalewski, S.: Model checking C source code for embedded systems. Int.
J. Softw. Tools Technol. Transf. 11, 187–202 (2009)
8. Biallas, S., Kowalewski, S., Stattelmann, S., Schlich, B.: Efficient handling of states in abstract
interpretation of industrial programmable logic controller code. In: Workshop on Discrete
Event Systems (WODES 2014), pp. 12–17, Cachan, France (2014)
9. Kowalewski, S., Engell, S., Preußig, J., Stursberg, O.: Verification of logic controllers for
continuous plants using timed condition/event-system models. Automatica 35, 505–518
(1999)
10. Santiago, I.B., Faure, J.-M.: From fault tree analysis to model checking of logic controllers. In:
IFAC World Congress 2005, pp. 86–91 (2005)
11. Machado, J.M., Denis, B., Lesage, J.J., Faure, J.M., Ferreira Da Silva, J.C.L.: Logic
controllers dependability verification using a plant model. In: Discrete Event Systems, vol. 3,
pp. 37–42 (2006)
12. Wooldridge, M., Jennings, N.R.: Intelligent agents: theory and practice. Knowl. Eng. Rev. 10(2), 115–152 (1995)
13. Colombo, A., Schoop, R., Neubert, R.: An agent-based intelligent control platform for
industrial holonic manufacturing systems. IEEE Trans. Ind. Electron. 53, 322–337 (2006)
14. Poslad, S.: Specifying protocols for multi-agent systems interaction. ACM Trans. Auton. Adapt. Syst. 2, 15 (2007)
15. Leitao, P., Marik, V., Vrba, P.: Past, present, and future of industrial agent applications. IEEE
Trans. Ind. Inf. 9, 2360–2372 (2013)
16. Schütz, D., Wannagat, A., Legat, C., Vogel-Heuser, B.: Development of PLC-based software for increasing the dependability of production automation systems. IEEE Trans. Ind. Inf. 9, 2397–2406 (2013)
17. Wooldridge, M.: An Introduction to Multiagent Systems, 2nd edn. Wiley, New York (2009)
18. Younis, M.B., Frey, G.: Formalization of existing PLC programs: a survey. In: CESA 2003, Lille, France, Paper No. S2-R-00-0239 (2003)
19. Schlich, B., Brauer, J., Wernerus, J., Kowalewski, S.: Direct model checking of PLC programs
in IL. In: 2nd IFAC Workshop on Dependable Control of Discrete Systems, pp. 28–33 (2009)
20. Schlich, B.: Model Checking of Software for Microcontrollers (2008)
21. Vogel-Heuser, B., Legat, C., Folmer, J., Feldmann, S.: Researching evolution in industrial
plant automation: scenarios and documentation of the pick and place unit (2014)
A Synchronous CNP-Based Coordination
Mechanism for Holonic Manufacturing
Systems
Abstract Our paper presents a new holonic coordination approach, useful for the difficult case when several holons try to allocate common resources. It is based on the Contract Net Protocol, combined with synchronous backtracking. The coordination scheme also relies on an a priori established hierarchy of manager holons. The proposed method ensures safe operation and was evaluated on a manufacturing scenario with four robots.
1 Introduction
The literature shows several adaptations of CNP [2, 7, 8]; even so, to the best of our knowledge there is no scheme that uses a synchronous, hierarchical coordination of agents.
Before detailing the new coordination scheme, the assumptions used are presented.
• If the HMS operation involves a single manager holon, then the system func-
tions according to the normal CNP.
• When several managers operate, there is an a priori established hierarchy between them; this means there is a manager having the most important tasks to solve, which consequently has the highest priority, and so on, down to the least important manager, which has the lowest priority. This hierarchy is decided at a certain instant, supposing that all manufacturing commands are known at that time. Orders received during negotiation will be considered in the next deliberation phase. The way managers create holarchies for solving their goals is a distributed one, according to the negotiation carried out with resource holons.
• Each manager knows which is its successor and predecessor, respectively.
• Any manager (an order holon or product holon) may have to solve one or more tasks; these are goals for resource holons, which have the role of contractors in CNP. The pairing between managers' goals and contractors excludes the case when a contractor itself becomes a manager (it is supposed that any goal is to be solved by a single contractor). Thus, the set of managers is a priori known and not modified during the coordination process. In what follows, we call a manager's assignment the pairing established between its goals and contractors.
• Managers communicate among themselves with two types of messages, as in DisCSP; these messages are adapted to the specifics of CNP. We use Ok and Ng_Ok messages. An Ok message is sent from a manager to its successor to announce the contractors being used by itself and its predecessors. Each manager appends or updates the received Ok message with its own assignment. As Ok messages are also used to announce the result after a backtracking process, there are two types of Ok messages: positive (labelled Ok+) and negative (Ok−). A Ng_Ok message is issued when an agent cannot solve a goal according to the bids received from contractors; it is sent from an agent to its predecessor. The Ng_Ok messages contain two parts: a Nogood part, which includes information on contractors that are needed by a manager to satisfy its goals and are requested from higher priority agents, and an Ok part, which contains the updated situation on the contractors used by managers. This part is continuously updated by managers, as explained below. It is to be understood that a Ng_Ok message
A Synchronous CNP-Based Coordination Mechanism … 171
In what follows, agents having the role of managers in CNP are labelled with M, while contractors, being resource holons, are labelled with R. Due to the synchronous operation of the proposed coordination scheme, in which at any time a single manager is active, we have to describe the operation of the first manager (the one with the highest priority), of an intermediary manager, and of the last manager.
(a) Operation of manager M1
Manager M1 starts the search process. At that time, it is supposed that all managers have already received the input data, so that they know their goals. M1 applies CNP. This involves broadcasting the goals to all contractors, the bidding process, the decision on the chosen bids and the announcement of the selected contractors
172 D. Panescu and C. Pascal
[5]; the execution part of CNP is not discussed in this paper. If the result of CNP is positive (all goals could be solved), then M1 sends an Ok+ message to M2, which is: (Ok+, (M1, CM1)), where CM1 is the list of contractors chosen by M1. According to one of the hypotheses presented in the previous section, if M1 cannot satisfy all its goals, then it sends an Ok+ message with no assignment, namely: (Ok+, (M1, ())). Being the agent with the highest priority, M1 can receive a Ng_Ok message from its successor. In this case, it applies a procedure to solve the request (this is explained later); if the result is positive, it continues with an Ok+ message, otherwise with an Ok− message.
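The behaviour of M1 can be sketched as follows; the message layout and the greedy bid selection are simplifying assumptions (the chapter does not fix a selection strategy, and its managers keep the full bid list for later backtracking):

```python
from dataclasses import dataclass

@dataclass
class Ok:
    positive: bool      # Ok+ if True, Ok- if False
    assignments: dict   # manager name -> tuple of chosen contractors

def run_cnp(goals, bids):
    """Minimal stand-in for one CNP round: pick, for every goal, a distinct
    contractor that bid for it (greedy choice)."""
    chosen, used = [], set()
    for goal in goals:
        for contractor in bids.get(goal, ()):
            if contractor not in used:
                chosen.append(contractor)
                used.add(contractor)
                break
        else:
            return None             # some goal could not be covered
    return tuple(chosen)

def m1_step(goals, bids):
    """M1 starts the search: on success it announces its assignment in an
    Ok+ message to M2; if it cannot satisfy all goals it still sends Ok+
    with an empty assignment, so the synchronous chain keeps moving."""
    cm1 = run_cnp(goals, bids)
    return Ok(True, {"M1": cm1 if cm1 is not None else ()})

# Usage with two goals and per-goal contractor bids:
print(m1_step(["gA", "gB"], {"gA": ["R1", "R2"], "gB": ["R1", "R3"]}))
# Ok(positive=True, assignments={'M1': ('R1', 'R3')})
```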
(b) Operation of manager Mi
In this case the Ng_Ok message composed by Mi is: (Nogood; (Mi, ((R3), (R4, R5))); Ok; ((M1, CM1), …, (Mi, (R1)))), by which it requests the release of resource R3, and of R4 or R5. It then waits to receive a consequent Ok message; if this is an Ok+, it means that the requested contractors could be freed by its predecessors and a valid assignment will be found after applying CNP. If it receives an Ok− message, its request could not be satisfied, and thus Mi continues with an Ok+ sent to Mi+1, indicating no assignment made by itself.
Being an intermediary manager, Mi can receive a Ng_Ok message from a successor. In this case it applies the procedure Solve, which uses the data of the received Ng_Ok message. Each manager, after applying CNP, keeps the whole list of received bids. Using this list, the manager can determine whether it can release all the requested resources (in which case the procedure Solve returns Positive) or not (Solve returns Negative). In the first case Mi must continue with a correspondingly updated Ok+ message to be sent to Mi+1. Otherwise, it updates the Ng_Ok message if it can release part of the requested contractors and sends this message to its predecessor, so that backtracking is continued. As an example, let us suppose that manager Mi−1 received the above-considered Ng_Ok message and can release only the resource R4; namely, according to the received bids, it happens that it can use R6 instead of R4. In this case the Ng_Ok message that will be sent by Mi−1 to Mi−2 is: (Nogood; (Mi, ((R3))); Ok; ((M1, CM1), …, (Mi−1, … R6 …), (Mi, (R1)))).
A further case is when an intermediary agent receives an Ok message after a backtracking process was initiated. Here we have two situations. If the manager is not the one that initiated the backtracking process, then it only has to pass the message unchanged to its successor. If the manager is the one that initiated a Ng_Ok message, then depending on the type of the received message (Ok+ or Ok−), it will either be able to make an assignment for all its goals or it fails. In both cases, it sends an Ok+ message to its successor, so that the search is continued.
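The Solve procedure can be sketched as follows; the data layout (goal-to-contractor maps and kept bid lists) is a hypothetical simplification of the description above:

```python
def solve(assignment, all_bids, requested):
    """Sketch of Solve: try to release every contractor that a lower-priority
    manager requests, by re-covering the affected goals from the kept bid
    list. Returns ("Positive", new_assignment) when all requests can be
    satisfied, else ("Negative", released_subset) so the updated Ng_Ok
    message can be forwarded to the predecessor."""
    new_assignment = dict(assignment)   # goal -> contractor
    released = set()
    for wanted in requested:
        if wanted not in new_assignment.values():
            continue                    # never engaged: cannot release it here
        hit = [g for g, c in new_assignment.items() if c == wanted]
        for goal in hit:
            in_use = set(new_assignment.values())
            alt = next((c for c in all_bids.get(goal, ())
                        if c != wanted and c not in in_use), None)
            if alt is None:
                break                   # this contractor cannot be freed
            new_assignment[goal] = alt
        else:
            if wanted not in new_assignment.values():
                released.add(wanted)
    if released == set(requested):
        return "Positive", new_assignment
    return "Negative", released

# Usage: the manager can swap R4 for R6, but it never engaged R3, so the
# R3 request must travel further upstream.
assignment = {"g1": "R4", "g2": "R1"}
bids = {"g1": ["R4", "R6"], "g2": ["R1"]}
print(solve(assignment, bids, ["R4", "R3"]))  # ('Negative', {'R4'})
```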
(c) Operation of manager Mn
Fig. 1 The manufacturing environment that inspired the case study used in this paper
thus the corresponding product holon gets the highest priority. The following priorities are for the holons representing pallets P2, P3 and P4.
The first instance of our problem is for the initial state displayed in Fig. 2a. The parts are of four types: A, B, C and D. The storages of the robots contain parts as shown in Fig. 2a near each robot, and the needed content of the pallets is presented, too. The coordination process is started by M1. For its four goals (a goal is issued by M1 for each part to be placed in P1) it receives bids from robot resource holons R1 and R2. The received bids (according to the contents of the robots' storages) are: R1 + A; R1 + B; R1 + C; R2 + A; R2 + A; R2 + B. Bids are marked with the name of the robot and the name of the part to be transferred to P1, and all bids are positive because these are the first bids made by contractors. From this set of bids, M1 selects a solution which is communicated to the contractors (each contractor will know whether it was selected by M1 or not, so as to make corresponding bids for the next managers) and is communicated through an Ok+ message to manager M2. Namely, let us suppose that M1 decided on the solution corresponding to the message: (Ok+; ((M1, (R1 + B, R1 + C, R2 + A, R2 + A)))).
After receiving this Ok message, M2 applies CNP for its goals and receives the
following bids: R2 + B; R2 + B; R2 + D; R3 + C; R3 + D. M2 selects a combination
of bids that satisfies all its goals and communicates the corresponding Ok+ message
to M3: (Ok+; ((M1, (R1 + B, R1 + C, R2 + A, R2 + A)), (M2, (R2 + B, R2 + B,
R3 + C, R3 + D)))). Then, M3 applies CNP to solve its goals and receives the bids:
R1 + A; R1 + C; R1 + D; R1 − C; R4 + A; R4 + C. From this set of bids M3 can find
a solution and correspondingly sends the following Ok+ message to M4: (Ok+;
((M1, (R1 + B, R1 + C, R2 + A, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)),
(M3, (R1 + C, R1 + D, R4 + A, R4 + C)))). When the last agent applies CNP, it receives the following bids: R3 + D; R3 − D; R4 − A; R4 + B. From this set of bids M4 cannot get a solution for all its goals; namely, it did not receive any positive bid for a part A and for a second part D. So it starts backtracking by issuing a Ng_Ok message to its predecessor: (Nogood; (M4, (R3 − D, R4 − A)); Ok; ((M1, (R1 + B, R1 + C, R2 + A, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, (R1 + C, R1 + D, R4 + A, R4 + C)), (M4, (R3 + D, R4 + B)))). It is to be noticed that the Ok part of the message includes the contractors reserved by M4 in its attempt to solve the goals. M3 receives the above message and tries to solve it without jeopardizing the solution for its own goals. It happens that M3 can free the contractor R4 with the part A, by choosing another bid for that goal, namely the one of contractor R1. The other request of the Ng_Ok message (R3 with part D) cannot be solved by M3 because in fact it did not engage contractor R3. Thus, M3 creates a new Ng_Ok message with updated Nogood and Ok parts, which is sent to M2: (Nogood; (M4, (R3 − D)); Ok; ((M1, (R1 + B, R1 + C, R2 + A, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, (R1 + C, R1 + D, R1 + A, R4 + C)), (M4, (R3 + D, R4 + B)))).
According to the previously received set of bids, M2 discovers that it can free the contractor R3 with the part D, as it can use the bid for part D from R2. Thus it creates an Ok+ message (as there is nothing more to be solved in the Ng_Ok message) with its part correspondingly updated. This is: (Ok+; ((M1, (R1 + B, R1 + C, R2 + A, R2 + A)), (M2, (R2 + B, R2 + B, R2 + D, R3 + C)), (M3, (R1 + C,
Table 1 Coordination process for the second instance of the manufacturing problem

Step 1, M1. Received bids: R1 + A; R1 + B; R1 + C; R1 + D; R2 + A; R2 + B. Issued message: (Ok+; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A))))

Step 2, M2. Received bids: R2 + B; R2 + B; R3 + B; R3 + C; R3 + D. Issued message: (Ok+; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D))))

Step 3, M3. Received bids: R1 + A; R1 − D; R4 + D; R4 + D. Issued message: (Nogood; (M3, (R1 − D)); Ok; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, (R1 + A, R4 + D, R4 + D))))

Step 4, M2. Received bids: same previous set. Issued message: (Nogood; (M3, (R1 − D)); Ok; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, (R1 + A, R4 + D, R4 + D))))

Step 5, M1. Received bids: same previous set. Issued message: (Ok−; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, (R1 + A, R4 + D, R4 + D))))

Step 6, M2. Received bids: same previous set. Issued message: (Ok−; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, (R1 + A, R4 + D, R4 + D))))

Step 7, M3. Received bids: same previous set. Issued message: (Ok+; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, ())))

Step 8, M4. Received bids: R3 + A; R3 + B; R3 − D; R4 + D; R4 + D. Issued message: (Ok+; ((M1, (R1 + B, R1 + C, R1 + D, R2 + A)), (M2, (R2 + B, R2 + B, R3 + C, R3 + D)), (M3, ()), (M4, (R3 + A, R3 + B, R4 + D, R4 + D))))
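The per-manager feasibility test that drives these steps (e.g. M4's failure in the first problem instance) can be sketched as a multiset-cover check; the bid-string format and M4's assumed goal set {A, B, D, D} are inferred from the Nogood and Ok parts, not stated explicitly in the chapter:

```python
from collections import Counter

def covers(goals, bids):
    """Feasibility check a manager applies to its bid set: every required
    part must be matched by a distinct positive bid ('R + part'); negative
    bids ('R - part') only signal resources engaged elsewhere."""
    positive = Counter(b.split("+")[1].strip() for b in bids if "+" in b)
    # Counter subtraction keeps only uncovered goals; empty means feasible.
    return not (Counter(goals) - positive)

# M4's bid set from the first instance: no positive bid for A, and only one
# for D, so backtracking via a Ng_Ok message must be started.
print(covers(["A", "B", "D", "D"],
             ["R3 + D", "R3 - D", "R4 - A", "R4 + B"]))  # False
```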
This paper proposes a holonic coordination mechanism for cases that frequently appear in practice, namely when several entities compete for the same resources. The merit of the introduced coordination scheme is that it always determines a solution when one exists. This happens because at any time only one manager operates with contractors (meaning no blocking between managers is
possible), and for each manager an exhaustive search is made (through backtracking). Thus, a solution for a manager is not found only when either the manufacturing environment does not have the needed resources, or these exist but are already used by managers with higher priorities. Hence, an HMS, or in general a multi-agent system, operating according to the proposed scheme can guarantee solutions for goals in the order of their importance. In comparison with our previous approaches [2, 8], the present one has the advantage of converging to the optimal result. As a weak point, our mechanism is time consuming, due to the backtracking process and the way managers operate successively. For certain manufacturing planning processes, time may not be critical, and thus the method should be applicable. As future work, we want to make further tests on virtual and real environments and on complex problems, and to see whether it is possible to apply a distributed asynchronous backtracking approach while keeping the safe operation.
References
1. Vrba, P., Tichy, P., Marik, V., Hall, K., Staron, R., Maturana, F., Kadera, P.: Rockwell
automation’s holonic and multiagent control systems compendium. IEEE Trans. Syst. Man
Cybern.—Part C: Appl. Rev. 41(1), 14–30 (2011)
2. Panescu, D., Pascal, C.: An extended contract net protocol with direct negotiation of managers. In: Borangiu, T., Trentesaux, D., Thomas, A. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing and Robotics. Studies in Computational Intelligence, vol. 544, pp. 81–95. Springer, Berlin (2014)
3. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37, 255–274 (1998)
4. Panescu, D., Pascal, C.: On a holonic adaptive plan-based architecture: planning scheme and
holons’ life periods. Int. J. Adv. Manuf. Tech. 63(5–8), 753–769 (2012)
5. Smith, R.G.: The contract net protocol: high level communication and control in a distributed
problem solver. IEEE Trans. Comput. C-29, 1104–1113 (1980)
6. Pascal, C., Panescu, D.: A Petri net model for constraint satisfaction application in holonic systems. In: IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania (2014)
7. Kim, H.M., Wei, W., Kinoshita, T.: A new modified CNP for autonomous microgrid operation
based on multiagent system. J. Electr. Eng. Technol. 6(1), 139–146 (2011)
8. Panescu, D., Pascal, C.: On the staff holon operation in a holonic manufacturing system
architecture. In: Proceedings of ICSTCC, Sinaia, Romania, pp. 427–432 (2012)
9. Panescu, D., Pascal, C.: HAPBA—a holonic adaptive plan-based architecture. In: Borangiu, T.,
Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing Control. Studies in Computational Intelligence, pp. 61–74. Springer, Berlin
(2012)
Interfacing Belief-Desire-Intention Agent
Systems with Geometric Reasoning
for Robotics and Manufacturing
1 Introduction
2 Background
Geometric Reasoning. In this paper we use the term geometric reasoning to refer to motion planning as defined in [10]. A state, then, is the 3D world W = R³, and its fixed obstacles are the subset O ⊆ R³. A robot is modelled as a collection of (possibly attached) rigid bodies. For example, a simple polygonal robot A could be defined as the sequence A = ((x1, y1, z1), …, (xn, yn, zn)), where each (xi, yi, zi) ∈ R³. A key component of motion planning is a configuration space C which defines all the possible transformations that can be applied to a body such as A above. More
Interfacing Belief-Desire-Intention Agent Systems … 181
1 R is short for Robot, F for From, T for To, and ti for table i.
182 L. de Silva et al.
Fig. 1 The assembly platform, tool rack, and a simulation of the pallet being gripped
and then checking for success by testing B via ?at(r1, t2). The action to navigate is defined by the following action-rule: nav(R, F, T) : at(R, F) ∧ canMov(R, F, T) ← mvExec(R, F, T); mvEff(), where mvExec(R, F, T) is associated with a procedure that moves the robot, and mvEff() with one that returns, possibly after sensing the environment, a set of literals representing the result of moving.2
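The belief-base update applied by such effect functions (the STRIPS adaptation described in footnote 2) can be sketched as follows, with a hypothetical string encoding of literals where a leading '-' marks negation:

```python
def apply_effects(beliefs, literals):
    """Apply an action-rule body's result to the belief base B: positive
    literals are added to B, and the atoms associated with negative
    literals (prefixed '-') are removed from B."""
    updated = set(beliefs)
    for lit in literals:
        if lit.startswith("-"):
            updated.discard(lit[1:])
        else:
            updated.add(lit)
    return updated

# Usage: the effect function of nav(r1, t1, t2) might return these literals.
B = {"at(r1, t1)"}
B = apply_effects(B, ["-at(r1, t1)", "at(r1, t2)"])
print(sorted(B))  # ['at(r1, t2)']
```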
The Assembly Platform. We use the production system in Fig. 1 [11] as a
running example to motivate some of the concepts in this paper. The system
combines the functionality of six independent workstations, each governed by a
separate agent, to assemble detent hinges for lorry-cab furniture. Each station is
served by a linear transfer system that transports a pallet carrier; this supports a
pallet with the individual parts that need to be assembled, as well as the
partially/fully assembled hinge. The six workstations, controlled by PLCs
(Programmable Logic Controllers), are as follows: two consist of a Kuka robot
each; two accommodate one workspace each; one contains a tool changing rack;
and one contains an inspection station. The tool changing rack is placed between
the Kuka arms, which have access to the rack as well as to the workspaces that are
used for carrying out assembly operations. The rack contains six slots which can
hold up to six different types of end effectors such as pneumatic and two-finger
grippers. RFID tags on the tools are used to determine which of them are currently
on the rack, so that the Kuka arms may dynamically lock into the relevant ones
during assembly. Finally, the inspection station is used to perform force and vision
tests to verify whether the hinge was assembled correctly. The hinge that is
assembled is composed of two separate leaves held together by a metal pin. Three
metal balls need to be placed into adjacent cylindrical slots in the center of the
hinge, three springs need to be placed into the same slots, and a retainer is used to
close the hinge. By using only a subset of these parts to assemble a hinge, there can
be four product variants, each having a different detent force.
2 An action-rule's body is adapted from STRIPS to be a sequence of functions that return a (possibly empty) set of literals, each of which is applied to the belief base B, i.e. the positive literals are added to B, and the atoms associated with negative literals are removed from B.
Like in works such as [2, 7], evaluable predicates are fundamental in linking AgentSpeak with geometric reasoning. While standard predicates are evaluated by looking up the agent's belief base, evaluable predicates are attached to external procedures, which for us involve searching for a viable trajectory within a geometric world/state W. Thus, we call such predicates geometric predicates. For example, the predicate canMov(R, curr, T) in our plan-rule from the previous section could be a geometric predicate which invokes a motion planner to check whether it is possible for Kuka arm R to move from its current pose curr to tool T, specifically, to a position from where the arm can easily lock into the tool with a predefined vertical motion. We use curr as a special constant symbol to represent the current pose.
To evaluate a geometric predicate, it needs to be associated with a collection of goal poses, of which at least one needs to have a viable trajectory from the current pose for the predicate to evaluate to true. Goal poses could either be determined manually or computed offline automatically with respect to the 3D model of the world and the objects involved. In our assembly platform, for example, the Kuka arms are manually trained on how to grasp the various shapes that might be encountered during production. This is especially important because objects like the pallet carrier are too heavy to be lifted from most seemingly good grasps and poses: there is only one pose that will work; indeed, a simple 3D model of the world that cannot also take into account additional information such as object weights will not be able to automatically predict such goal poses accurately. Consequently, we require that a "sampling" SMP from ground geometric predicates to their corresponding goal poses be provided by the user. For example, the predicate canMovGr(k1, gr1, curr, pc), which checks whether Kuka arm k1 combined with gripper gr1 can move to a pose from where the pallet carrier pc can be grasped, will map to the set consisting of just the single pose depicted in Fig. 1.
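Evaluating a geometric predicate against such a user-supplied sampling might look like the following sketch; the pose encoding, the SMP key structure and the planner interface are all assumptions made for illustration:

```python
def holds(predicate, args, current_pose, smp, plan_motion):
    """Evaluate a geometric predicate: look up the user-supplied goal poses
    (the SMP 'sampling') and ask the motion planner for a viable trajectory
    to at least one of them. plan_motion returns a trajectory or None."""
    goal_poses = smp.get((current_pose, predicate, args), ())
    for pose in goal_poses:
        trajectory = plan_motion(current_pose, pose)
        if trajectory is not None:
            return True, trajectory
    return False, None

# Stub planner: a straight-line "trajectory" unless the goal is marked blocked.
plan = lambda src, dst: None if dst == "blocked" else (src, dst)
smp = {("home", "canMovGr", ("k1", "gr1", "pc")): ["grasp-pose"]}
print(holds("canMovGr", ("k1", "gr1", "pc"), "home", smp, plan))
# (True, ('home', 'grasp-pose'))
```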
We describe SMP as follows. Let P = {p1(o1, …, oj), …, pn(o1′, …, ok′)} be the set of ground instances of all geometric predicates occurring in the agent, and Ps = {p1, …, pn} and O = {o1, …, oj, …, o1′, …, ok′} their associated predicate and constant symbols, respectively. Then, if nmax is the maximum arity of a predicate in P, function SMP is denoted by the partial function SMP : C × Ps × O1 × … × Onmax → 2^C, where C is the configuration space and each Oi = O. Thus, function SMP is a user-defined "sampling" with only the goal poses that "matter" with respect to the current pose c ∈ C and the given ground geometric predicate. In practice, the full goal pose for a task such as picking up an object could be computed dynamically from a user-supplied pose for the gripper, such as the one in Fig. 1, by first transforming the gripper's pose to "place" it relative to the object and within the current world W, and then using inverse kinematics to derive suitable poses for the geometric bodies that form the robot arm, which are attached to the gripper and to each other.
Function SMP is used within an "intermediate layer" like the ones used in [2, 3], which we actualise here via a special evaluable predicate denoted by INT(p, v).
+!ep(v) : true ← actSuccp(v)        actSuccp(v) : INT(p, v) ← exec(); post(); U⊤
+!ep(v) : true ← actFailp(v)        actFailp(v) : ¬INT(p, v) ← post(); U⊥
3 For simplicity we omit the last parameters of INT(p, v), which may be null constants.
post() is a function that returns the set of (symbolic) facts representing either the pose that resulted from executing exec(), or the "reasons" why there was no trajectory while evaluating the precondition, i.e. the set FCT computed by INT(p, o). Likewise, exec() is associated with a procedure that executes (in the real world) a given motion plan, which in our case is the one that was assigned to SOL when INT(p, o) was called. Action actFailp(o) is not associated with any such function because its action-rule is only chosen when there is no viable motion plan. Thus, the rule's precondition confirms that ¬INT(p, o) still holds, just in case there was a relevant change in the environment after INT(p, o) was last checked, causing INT(p, o) to now hold (in which case there are no failure-related facts to include).
We assume that exec() always succeeds and that, if necessary, the programmer will check whether the action was actually successful by explicitly testing its desired goal condition. This is exemplified by the !move(R, F, T) achievement goal in Sect. 2, where ?at(R, T) checks whether the navigate(R, F, T) action was successful. One property of the described encapsulation is that looking for motion plans and then executing them and/or applying the associated symbolic facts form one atomic operation: no other step can be interleaved between those steps. This ensures that a motion plan found while evaluating an action's precondition cannot be invalidated by an interleaved step while the action is being executed.
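The atomicity property can be sketched with a lock around the precondition check and the execution step; apart from INT, exec and post, all names and the callback interface below are hypothetical:

```python
import threading

_lock = threading.Lock()

def attempt(p, args, find_plan, execute, post):
    """Encapsulated geometric action: finding a motion plan (the INT check)
    and executing it / applying its effects happen as one atomic step, so an
    interleaved agent step cannot invalidate the plan in between."""
    with _lock:
        sol = find_plan(p, args)        # INT(p, args): None if no trajectory
        if sol is not None:
            execute(sol)                # actSucc: run the motion plan...
            return post(success=True)   # ...then apply the resulting facts
        return post(success=False)      # actFail: facts describe the reasons

# Usage with trivial stubs:
facts = lambda success: {"at(k1, t1)"} if success else {"obsAll(k2, canMov, k1, t1)"}
print(attempt("canMov", ("k1", "t1"),
              find_plan=lambda p, a: ["pose1", "pose2"],
              execute=lambda plan: None,
              post=facts))
# {'at(k1, t1)'}
```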
Once all geometric predicates have been encapsulated as described, we may then use their corresponding achievement goals from within plan-rules. Since we cannot include them in context conditions (logical formulae), they can instead be placed as the first steps of plan bodies. This allows such achievement goals to be ordered so that the ones having the most computationally expensive geometric predicates are checked only after the less expensive ones have already been checked and met.
There are certain elements in the geometric representation that are worth abstracting
out into their corresponding symbolic entities so that they may be exploited by the
agent. Our first abstraction is a user-defined surjection from a subset of the geo-
metric bodies (defined as a sequence of boundary points, for example) onto a subset
of the constant symbols occurring in the agent. This allows multiple bodies—such
as the individual pieces of a Kuka arm—to simply be identified by a single constant
symbol such as k1, and also for certain geometric bodies (e.g. an unknown box on
the floor) and symbolic constants (e.g. the name of a customer) to be ignored.
Indeed, while every rigid body is crucial for geometric reasoning, it does not
necessarily need a corresponding symbolic representation, and likewise, every
constant symbol occurring in the agent does not necessarily represent a geometric
body.
In the situation where there was no viable motion plan when the precondition of
an action-rule above was checked, the facts applied by post() instead "describe" the
reason. To this end, two useful domain-independent predicates, inspired by [3], are
obsSome(k2, canMov, k1, t1), indicating arm k2 obstructs at least one trajectory of
the task canMov(k1, t1), and likewise obsAll(k2, canMov, k1, t1). The agent could
exploit such information by, for instance, moving arm k2 out of the way.
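A minimal sketch of recording and exploiting such facts, with the predicate names taken from the text but the tuple-based belief-base representation assumed:

```python
# Sketch: when no motion plan exists, post() could add obsSome/obsAll facts
# naming the obstructing body; the agent then knows which arm to move away.
# The belief-base format here is an assumption, not the authors' code.

beliefs = set()

def post_failure(obstructor, task, args, all_trajectories=False):
    """Record that obstructor blocks some (or all) trajectories of task(args)."""
    pred = "obsAll" if all_trajectories else "obsSome"
    beliefs.add((pred, obstructor, task) + tuple(args))

post_failure("k2", "canMov", ["k1", "t1"])                         # obsSome(k2, canMov, k1, t1)
post_failure("k2", "canMov", ["k1", "t1"], all_trajectories=True)  # obsAll(k2, canMov, k1, t1)

# An arm named in an obsAll fact blocks every trajectory: move it first.
arms_to_move = {fact[1] for fact in beliefs if fact[0] == "obsAll"}
print(arms_to_move)  # {'k2'}
```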
Acknowledgements We thank Amit Kumar Pandey and the reviewers for useful feedback. Felipe
thanks CNPq for support within grant no. 306864/2013-4 under the PQ fellowship and
482156/2013-9 under the Universal project programs. The other authors are grateful for support
from the Evolvable Assembly Systems EPSRC project (EP/K018205/1), and the PRIME EU FP7
project (Grant Agreement: 314762).
References
1. Rao, A.S.: AgentSpeak(L): BDI agents speak out in a logical computable language. In:
Proceedings of the MAAMAW Workshop, pp. 42–55 (1996)
2. de Silva, L., Pandey, A.K., Alami, R.: An interface for interleaved symbolic-geometric
planning and backtracking. In: IROS, pp. 232–239 (2013)
3. Srivastava, S., Fang, E., Riano, L., Chitnis, R., Russell, S., Abbeel, P.: Combined task and
motion planning through an extensible planner-independent interface layer, ICRA, pp. 639–
646 (2014)
4. Lagriffoul, F., Dimitrov, D., Saffiotti, A., Karlsson, L.: Constraint propagation on interval
bounds for dealing with geometric backtracking, IROS, pp. 957–964 (2012)
5. Erdem, E., Haspalamutgil, K., Palaz, C., Patoglu, V., Uras, T.: Combining high-level causal
reasoning with low-level geometric reasoning and motion planning for robotic manipulation,
ICRA, pp. 4575–4581, (2011)
6. Plaku, E., Hager, G.D.: Sampling-based motion and symbolic action planning with geometric
and differential constraints, ICRA, pp. 5002–5008 (2010)
7. Dornhege, C., Eyerich, P., Keller, T., Trüg, S., Brenner, M., Nebel, B.: Semantic attachments
for domain-independent planning systems, ICAPS, pp. 114–121 (2009)
8. Gaschler, A., Kessler, I., Petrick, R., Knoll, A.: Extending the knowledge of volumes approach
to robot task planning with efficient geometric predicates. ICRA (2015, to appear)
9. Kaelbling, L.P., Lozano-Pérez, T.: Integrated task and motion planning in belief space. IJRR
32(9–10), 1194–1227 (2013)
10. LaValle, S.M.: Planning Algorithms. Cambridge University Press (2006)
11. Antzoulatos, N., Castro, E., de Silva, L., Ratchev, S.: Interfacing agents with an industrial
assembly system for “plug and produce”. In AAMAS, pp. 1957–1958 (2015)
A Holonic Manufacturing System
for a Copper Smelting Process
Keywords Manufacturing · Copper smelter · Holonic manufacturing systems ·
Multiagent simulation · Metallurgical processes
1 Introduction
while considering ongoing processes [12]. Unlike the chemical industry, metallurgical
processes are structurally more complex because discrete and continuous
events occur simultaneously.
In this paper we propose a decentralized control architecture. The design of a
holonic system to manage the production of a copper smelter is presented. The set
of designed modules was computationally implemented using the SPADE (Smart
Python multi-Agent Development Environment) tool [13] and Discrete Events
Simulation (DES) software Rockwell Arena® 14.7 [14]. The resulting platform
allows simulating the operations of the holonic system and studying the coordination
among the activities in the copper smelter. Thus, it is possible to decrease the
waiting times and better distribute the workload among the different stages of the
process.
The simulation model consists of three modules that comprise the logic of the
system: the converter, production and crane modules. The first module generates
the production order for the system. The first available converter is selected to load
the material, and the constraint that no more than two converters are simultaneously
in operation is verified. After unloading is finished, the converter is available for the
entry of a new order. To start the simulation, two entities representing the two
cranes in the system are generated. These cranes wait for a loading or unloading
order from the converter module. When an order is received, the entity enters the
corresponding process. The loading status of the converter where the process is
performed is updated, and the process finishes when the filling or emptying
objective of the converter is attained. The crane is then sent to perform an external
process if no other crane has already performed it. Finally, the crane returns to wait
for a new loading or unloading request.
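The converter-selection logic with the two-converters constraint can be sketched in plain Python (the SPADE agents and the Arena model are omitted; all names and structure are illustrative, not the authors' implementation):

```python
# Sketch of the converter module's logic: a production order is assigned to
# the first available converter, subject to the constraint that at most two
# converters operate simultaneously.

class Converter:
    def __init__(self, name):
        self.name = name
        self.busy = False

def start_order(converters, max_active=2):
    """Assign an order to the first free converter, or return None."""
    if sum(c.busy for c in converters) >= max_active:
        return None                  # constraint: no more than two in operation
    for conv in converters:
        if not conv.busy:
            conv.busy = True         # loading begins
            return conv
    return None

def finish_order(conv):
    conv.busy = False                # available for a new order

smelter = [Converter("conv1"), Converter("conv2"), Converter("conv3")]
print(start_order(smelter).name)     # conv1
print(start_order(smelter).name)     # conv2
print(start_order(smelter))          # None: two converters already active
```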
194 C. Herrera et al.
3 Results
The control of the copper smelter by the holonic system improves the amount of
processed copper concentrate in the four studied scenarios. This improvement is
observed at both daily (Qccd) and weekly (Qccw) levels (Table 1). It should be noted
that the copper concentrate was similarly processed in the four simulated scenarios.
In particular, scenarios 3 and 4 had identical values. Similar behaviour is
observed when the control is performed with the holonic system; in the latter case,
scenarios 2 and 4 are equal. In addition, the amount of copper concentrate processed
in one week did not vary across the four scenarios. On average, the amount of
processed copper concentrate increased by between 7.4 and 9.0 % in the four scenarios.
These results suggest that control by the holonic system enables efficient
management of the production process and increases the copper concentrate processing
capacity. Thus, the implemented rules contribute to streamlining operations
and decreasing the critical time of the production process.
Comparing the holonic control with the existing control, the amount of pro-
cessed copper concentrate is increased because of a reduction in total waiting time
when the copper smelter is controlled by the holonic system. In Table 1, the waiting
time to process (twp) and waiting time to unload from the converters (twu) are
presented. In the four scenarios, it is verified that both waiting times decrease when
the holonic control is simulated; twp and twu decrease by 64.9 and 32.8 %,
respectively. In the existing control, the level decreases significantly with
the loading of the first converters and subsequently maintains a cyclic behaviour
over time, whereas in the holonic control more time is required to reach the
cyclic behaviour. Specifically, the furnace reaches its maximum capacity after 40 h,
double the time required under the existing control.
The production level is also improved because the copper smelter is more effi-
ciently used. The coordination of the assigned tasks to the cranes is important
because most of the delays in the current operational activities arise from the
waiting time of both loading and unloading of the material from the converters or
the furnace. Crane utilization corresponds to the percentage of the total simulation
time in which a crane performs some process (loading, unloading or an external
process). In Table 1, the percentage differences in the use of
cranes between existing and holonic controls are given. The increase in use is
higher for crane 2 than for crane 1 because the initial use of crane 2 facilitates its
assignment to specific external tasks of the process, such as the transport of slag
removed during the conversion process. Table 2 shows that the first converter was
used 15.93 % more than in the holonic control. Converters 1 and 3 were used more,
whereas converter 2 was used less. These results show that the utilization of the
converters was homogenized by reducing idle times.
The overall performance of both controls is shown in Fig. 3: the use of the three
converters under the existing control is visualized in Fig. 3a, while the holonic
control is shown in Fig. 3b. In the first case, converter 1 (of greatest capacity) is
loaded. After this stage is completed, the loading of the second converter proceeds.
Immediately after both converters finish loading, processing begins. Converter 1 is
continuously active, whereas there are intermittencies in the other two converters.
Because only two converters can operate simultaneously, the first process of the
third converter only begins after converter 1 completes its process stage. Figure 3b
suggests that under the holonic control the loading of the converters is performed
more homogeneously, reducing idle times and finishing the processes more
quickly.
4 Conclusion
In this article a holonic system to control and manage material in a copper smelter is
presented. We analysed the material handling from the smelting furnace to the
converters and subsequently to the refining stage. The production system was
simulated to evaluate the proposed holonic system versus the existing one.
The implementation of the holonic system decreases the waiting times and better
distributes the workload among the different machines. This improvement was
achieved by coordinating the components, making decisions in real time and
adapting to any disruption or deviation from the optimal process without the
intervention of a third party. In addition, this distributed system was verified to
provide sufficient flexibility for this complex production process.
Acknowledgments The authors would like to thank the Complex Engineering Systems Institute,
ICM: P-05-004-F, CONICYT: FBO16, DICYT: 61219-USACH, ECOS/CONICYT: C13E04,
STICAMSUD: 13STIC-05 for their support.
References
1. Pradenas, L., Zúñiga, J., Parada, V.: CODELCO, Chile programs its copper-smelting
operations. Interfaces 36(4), 296–301 (2006)
2. Pradenas, L., Campos, A., Saldaña, J., Parada, V.: Scheduling copper refining and casting
operations by means of heuristics for the flexible flow shop problem. Pesqui. Oper. 31(3),
443–457 (2011)
3. Blanc, P., Demongodin, I., Castagna, P.: A holonic approach for manufacturing execution
system design: an industrial application. Eng. Appl. Artif. Intell. 21(3), 315–330 (2008)
4. Babiceanu, R.F., Chen, F.F.: Development and applications of holonic manufacturing systems:
a survey. J. Intell. Manuf. 17(1), 111–131 (2006)
5. Arauzo, J.A., Del-Olmo-Martinez, R., Lavios, J.J., De-Benito-Martin, J.J.: Scheduling and
control of flexible manufacturing systems: a holonic approach. Rev. Iberoamer. Autom.
E Inform. Ind. 12(1), 58–68 (2015)
6. Valckenaers, P., van Brussel, H., Bongaerts, L., Wyns, J.: Holonic manufacturing systems.
Integr. Comput. Aided Eng. 4(3), 191–201 (1997)
7. Carayannis, G.: Artificial intelligence and expert systems in the steel industry. JOM 45(10),
43–51 (1993)
8. Zhao, Q.-J., Cao, P., Tu, D.-W.: Toward intelligent manufacturing: label characters marking
and recognition method for steel products with machine vision. Adv. Manuf. 2(1), 3–12
(2014)
9. Herrera, C., Belmokhtar-Berraf, S., Thomas, A., Parada, V.: A reactive decision-making
approach to reduce instability in a master production schedule. Int. J. Prod. Res. 1–11 (2015)
10. Chokshi, N.N., McFarlane, D.C.: Rationales for holonic manufacturing systems in chemical
process industries. In: 12th International Workshop on Database and Expert Systems
Applications, vol. 1, p. 616, Los Alamitos, CA, USA (2001)
11. Agre, J.R., Elsley, G., McFarlane, D., Cheng, J., Gunn, B.: Holonic control of a water cooling
system for a steel rod mill. In: Proceedings of the Fourth International Conference on
Computer Integrated Manufacturing and Automation Technology, pp. 134–141 (1994)
12. Luo, N., Zhong, W., Wan, F., Ye, Z., Qian, F.: An agent-based service-oriented integration
architecture for chemical process automation. Chin. J. Chem. Eng. 23(1), 173–180 (2015)
13. https://pypi.python.org/pypi/SPADE
14. https://www.arenasimulation.com/
A Nervousness Regulator Framework
for Dynamic Hybrid Control Architectures
1 Introduction
This section presents the proposed framework to control the nervousness behaviour
in a D-HCA. The framework identifies four phases of the nervousness behaviour
according to the nervousness state: prevention, assessment, handling and recovery.
Figure 1 illustrates the framework’s phases.
202 J.-F. Jimenez et al.
In a D-HCA, the nervousness present in the system is due to the changes performed
by a switching mechanism that dynamically modifies or reconfigures the
structure and/or behaviour of the system to obtain a custom-built optimal configuration.
However, in pursuing this objective the system might switch
constantly, causing a nervousness event. In this paper, we focus on the nervousness
crisis prevention phase of the nervousness regulator in order to prove the need for
such a mechanism in the system. In this respect, the nervousness control authorises
or blocks the switching procedure depending on the nervousness threshold. An
instantiation of the proposed framework focused on the prevention of the nervousness
state is proposed in the next section.
The D-HCA of this paper is based on the governance mechanism approach proposed
in Jimenez et al. [6]. This approach defines an operating mode of a D-HCA
as a specific parameterization that characterizes the control settings applied to the
system. A switching mechanism, called governance mechanism, commutes the
operating mode to reconfigure the architecture of the control system. The D-HCA
that controls the FMS is organized as follows (Fig. 2):
FMS Controlled system: the general structure of the FMS is divided into two
layers: a global and a local layer. While the global layer contains a unique global
decisional entity (GDE) responsible for optimizing the release sequence of the
production orders (scheduler), the local layer contains several local decisional
entities (LDE), one per job to be processed in the production order (7 jobs in
scenario A0). In this approach, each decisional entity (GDE or LDE) includes its
own objective and governance parameters. In this scenario, the objectives of the
GDE and the LDE are, respectively, to minimize the makespan at batch execution
level and to complete the next operation. The governance parameter in the GDE is the
role of the entity for establishing the order release sequence and imposing these
intentions to the LDE in the shop floor. The governance of each LDE is represented
by the reactive technique that guides the evolution of the job through the shop floor.
This evolution can be driven by a potential-fields (PF) approach [12] or by the
first-available-machine rule (FAM). For this research, even though both PF and
FAM techniques belong to the reactive approach in distributed systems, the
potential-fields approach is considered to achieve higher performance, computing
resource allocation based on resource availability and the shortest route to the
resources. For a better representation of the configuration, an operating mode vector
that gathers all the governance parameters of the decisional entities is defined.
Governance mechanism entity: this switching mechanism is responsible for
changing the governance parameters of the GDE and LDE through the operating
mode vector. It monitors the performance of the controlled FMS, continues with the
improvement process for enhancing the system performance and triggers a change
in the system’s functioning by acting upon the operating mode vector (Fig. 2).
Considering that the nervousness behaviour derived from the switching of the
control system is monitored, the switching is triggered periodically (every 20
time-units) according to a condition-rule applied to the system. For measuring the
performance of operating modes, the expected makespan without switching (static)
was simulated for each possible operating mode vector. The result was sorted in a
numbered list and plotted to characterize the operating modes (Fig. 3 top). The list
contains 128 operating modes derived from the combination of the governance
parameters of all LDE (jobs to be produced). In this model, it is assumed that this
characterization of operating mode does not change through the execution and the
results are considered a preliminary possible control solution. Finally, the direction
of the switching towards an operating mode is decided by a condition-rule
according to the intentions received from the resources. That is, if a certain resource
has more than α (Alpha) jobs to be produced at the switching time, the operating
mode changes to a more reactive one (higher in the numbered list) with a step of λ
(Lambda) in the sorted list of operating modes. Otherwise, if all resources have
fewer than four job intentions to be produced, the operating mode switches to a
better alternative (lower in the numbered list).
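The condition-rule can be sketched as follows; the indices refer to the sorted list of 128 operating modes, and the function signature and tie-handling (when a resource holds exactly α jobs) are assumptions:

```python
# Sketch of the governance mechanism's condition-rule: move lambda steps up
# the sorted operating-mode list (more reactive) if some resource holds more
# than alpha job intentions at switching time, otherwise move down toward
# better static alternatives.

def next_mode(current, n_modes, jobs_per_resource, alpha=4, lam=2):
    """Index of the next operating mode in the sorted list of n_modes."""
    if any(jobs > alpha for jobs in jobs_per_resource):
        return min(current + lam, n_modes - 1)   # more reactive
    if all(jobs < alpha for jobs in jobs_per_resource):
        return max(current - lam, 0)             # better alternative
    return current                               # assumed: keep the mode

print(next_mode(80, 128, [5, 1, 0]))   # 82: one resource exceeds alpha
print(next_mode(80, 128, [1, 2, 3]))   # 78: every resource below alpha
```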
Nervousness regulator: This entity is responsible for filtering the intentions of
the governance mechanism to dampen the switching evolution. For the definition of
nervousness indicator (NI) and nervousness threshold (NT), the module proposed
by Hadeli et al. [3] was used. This module employs a probabilistic mechanism each
time the system is willing to change. Since it does not evaluate the state of the system
but dampens the system's evolution, this approach falls under the nervousness crisis
prevention phase of the defined framework. The NI is defined as a random value between 0
and 1, and the NT is fixed at β (beta). If NI is higher than NT, the system holds the
switch; otherwise, the switching process is performed. The flow diagram of the
nervousness regulator linked to the switching mechanism is illustrated in Fig. 3 bottom.
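The probabilistic filter amounts to a few lines; β and the uniform draw follow the description above, while the function wrapper itself is an assumption of this sketch:

```python
# Sketch of the nervousness regulator's prevention rule: each time the
# governance mechanism wants to switch, draw the nervousness indicator NI
# uniformly in [0, 1]; if NI exceeds the threshold NT = beta, hold the
# switch, otherwise let it proceed.

import random

def allow_switch(beta=0.9, draw=random.random):
    ni = draw()          # nervousness indicator
    return ni <= beta    # NI > NT -> hold; otherwise switch

random.seed(1)
granted = sum(allow_switch() for _ in range(10_000))
print(granted / 10_000)  # close to beta = 0.9
```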
This section presents the experiments performed in the manufacturing cell of the
AIP PRIMECA lab of the UVHC. The main goal of these experiments is to compare
the behaviour of a D-HCA with and without a nervousness regulator; we want to
prove that the nervousness mechanism damps the switching process and thus
avoids nervous behaviour. For the implementation, the proposed D-HCA with
nervousness regulator is programmed in the NetLogo agent-based software [17].
The data-set used for the case study is scenario A0 from the benchmark [14].
For the setup of the D-HCA, the governance parameters of the decisional entities
are initially fixed. The GDE presents a coercive role and the LDE is fixed with the
values of the 80th operating mode. As initial values for the experiments, the
condition-rule parameter α is 4, the switching step λ is 2 and the nervousness threshold β is
0.9. When execution starts, the GDE communicates a coercive optimal plan to the
LDE for the order release sequence. The emulation of the production system starts
execution with this optimal plan and the initial operating mode. In the experiments,
while part A considers the proposed D-HCA without the nervousness regulator, part
B includes the regulator. Considering that the nervousness regulator is a proba-
bilistic mechanism, it is executed 30 times for each part of the experiment. Finally,
an analysis of variance (ANOVA) procedure is conducted to compare the differ-
ences between the results of part A and part B.
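With two groups of 30 runs each, the ANOVA degrees of freedom are (1, 58), matching the F-statistic reported below. A minimal computation of the one-way F-statistic is sketched here; the makespan samples are synthetic placeholders, not the experimental data:

```python
# Sketch: one-way ANOVA F-statistic for two groups (30 runs each, so
# df = (1, 58) as in the text). The makespan samples are synthetic.

def f_oneway_two_groups(a, b):
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    grand = (sum(a) + sum(b)) / (na + nb)
    ss_between = na * (mean_a - grand) ** 2 + nb * (mean_b - grand) ** 2
    ss_within = (sum((x - mean_a) ** 2 for x in a)
                 + sum((x - mean_b) ** 2 for x in b))
    df_between, df_within = 1, na + nb - 2       # (1, 58) for 30 + 30 runs
    return (ss_between / df_between) / (ss_within / df_within)

part_a = [505 + (i % 9) for i in range(30)]      # synthetic part-A makespans
part_b = [470 + (i % 9) for i in range(30)]      # synthetic part-B makespans
print(f_oneway_two_groups(part_a, part_b))       # large F: groups separated
```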
As a first result, the experiments showed that there are statistically significant
differences between part A and B as determined by one-way ANOVA (F
(1,58) = 4.0068, p = 0.05). In this respect, part B performs better in the production
execution. In fact, even though this result does not demonstrate that constant
switching can generate a nervousness state, the results show that the nervousness
mechanism damps the switching. We believe the results are essentially due to two
reasons. The first is that, owing to the rapid evolution of the system, damping the
switching forces the system to stay in the same operating mode and thus take
advantage of the benefits inherent to that configuration. The jobs are therefore able to
apply certain intentions settled by the operating mode of the current execution. The
second is that, when the nervousness regulator is activated, the jobs enter a
stabilization period in which the regulator contributes to avoiding the changes of
intentions caused by the switching. Even though it was not confirmed in this
experiment, the changes of intentions should diminish as a consequence of the
stabilization period. These experiments confirm that switching between different
operating modes in a D-HCA achieves a better performance than a fully static
configuration. In Fig. 4, while in a static operating mode the proposed architecture has
a makespan of 607 time-units, the switching experiments of parts A and B present
mean makespans of 509.40 and 473.76, respectively. In conclusion, from the experiments
conducted, the nervousness regulator seeks convergence of the dynamic process in order to
stabilize the trade-off between evolution and nervousness. However, these results
raise the further need to balance the switching and nervousness-regulation
mechanisms.
6 Conclusions
References
1. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Nervousness in dynamic self-organized
holonic multi-agent systems. In: Highlights on Practical Applications of Agents and
Multi-Agent Systems, pp. 9–17. Springer, Berlin, Heidelberg (2012)
2. Blackburn, J.D., Kropp, D.H., Millen, R.A.: A comparison of strategies to dampen
nervousness in MRP systems. Manage. Sci. 32(4), 413–429 (1986)
3. Hadeli, K., Valckenaers, P., Verstraete, P., Germain, B.S., Brussel, H.V.: A study of system
nervousness in multi-agent manufacturing control system. In: Brueckner, S., Serugendo, G.D.
M., Hales, D., Zambonelli, F. (eds.) Engineering Self-Organising Systems, Lecture Notes in
Computer Science, vol. 3910, pp. 232–243. Springer, Heidelberg (2005)
4. Heisig, G.: Planning Stability in Material Requirements Planning Systems, vol. 515, Springer
Science and Business Media, Berlin Heidelberg (2012)
5. Herrera, C.: Cadre générique de planification logistique dans un contexte de décisions
centralisées et distribuées (Doctoral dissertation, Université Henri Poincaré-Nancy I) (2011)
6. Jimenez, J.F., Bekrar, A., Trentesaux, D., Rey, G.Z., Leitao, P.: Governance mechanism in
control architectures for flexible manufacturing systems. IFAC-PapersOnLine 48(3), 1093–
1098 (2015)
7. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57(2), 121–130 (2006)
8. Leitão, P.: Agent-based distributed manufacturing control: a state-of-the-art survey. Eng.
Appl. Artif. Intell. 22(7), 979–991 (2009)
9. Minifie, J.R., Davis, R.A.: Interaction effects on MRP nervousness. Int. J. Prod. Res. 28, 173–
183 (1990)
10. Novas, J.M., Van Belle, J., Saint Germain, B., Valckenaers, P.: A collaborative framework
between a scheduling system and a holonic manufacturing execution system. In: Service
Orientation in Holonic and Multi Agent Manufacturing and Robotics. Studies in
Computational Intelligence, pp. 3–17. Springer, Berlin (2013)
11. Onori, M., Barata, J., Frei, R.: Evolvable assembly systems basic principles. In: Information
Technology for Balanced Manufacturing Systems, pp. 317–328, Springer, US (2006)
12. Pach, C., Bekrar, A., Zbib, N., Sallez, Y., Trentesaux, D.: An effective potential field approach
to FMS holonic heterarchical control. Control Eng. Pract. 20(12), 1293–1309 (2011)
13. Steele, D.C.: The nervous MRP system: how to do battle. Prod. Inventory Manage. 16(4), 83–
89 (1975)
14. Trentesaux, D., Pach, C., Bekrar, A., Sallez, Y., Berger, T., Bonte, T., Leitão, P., Barbosa, J.:
Benchmarking flexible job-shop scheduling and control systems. Control Eng. Pract. 21(9),
1204–1225 (2013)
15. Valckenaers, P., Verstraete, P., Saint Germain, B., Van Brussel, H.: A study of system
nervousness in multi-agent manufacturing control system. In: Engineering Self-Organising
Systems, pp. 232–243. Springer, Berlin Heidelberg (2006)
16. Verstraete, P., Saint Germain, B., Valckenaers, P., Van Brussel, H., Belle, J., Hadeli, H.:
Engineering manufacturing control systems using PROSA and delegate MAS. Int.
J. Agent-Oriented Softw. Eng. 2(1), 62–89 (2008)
17. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling.
Northwestern University, Evanston, IL (1999). http://ccl.northwestern.edu/netlogo/
Part V
Service Oriented Enterprise
Management and Control
Automation Services Orchestration
with Function Blocks: Web-Service
Implementation and Performance
Evaluation
Keywords Service-oriented architecture · Web service · Cloud · Pick-and-place
manipulator · Web servers · Function blocks
1 Introduction
with others using the message passing mechanism. A service sends a request
message, another service receives the message, executes the service invoked and
sends a response message if needed.
Cloud computing, which is getting increasingly popular in various IT applications,
can provide a very useful complement to IoT and SOA. The use of cloud-deployed
web services in combination with embedded intelligence is being widely investigated
for industrial automation applications. An example of such research activity is
the Arrowhead project sponsored by ARTEMIS.1
According to [2] “cloud computing is a modern model for ubiquitous and
convenient on-demand access to a common pool of configurable remote computing
and software resources and storage devices, which can be promptly provided and
released with minimal operating costs and/or calls to the provider”.
Cloud computing is applied in various domains, from research and media to
mail services, corporate computer systems and electronic commerce. Consumers of
cloud computing can greatly reduce the cost of maintaining their own information
technology infrastructure and, using the elasticity of cloud services, dynamically
respond to changing computing needs in peak periods.
In the development of cloud-based systems, a wide range of programming
languages, libraries and technology frameworks can be used, and these determine the
effectiveness of the resulting software. Choosing adequate development tools is an
important stage of the software lifecycle. An urgent task, therefore, is to study and
comparatively analyse the productivity of software applications depending on the
development tools as well as on the deployment environment. Given the wide spread
of distributed information systems, such research is of particular interest for Web
services: applications based on a service-oriented model of interaction between
providers and consumers of information services. Previous work [3, 4] presents a
model that helps select the best endpoint for a service; this is particularly applicable
to the kind of distributed system that we are interested in and briefly present in this
paper.
The aim of the paper is to investigate the performance of web services developed
to complement the embedded mechatronic intelligence using different development
languages and deployment tools, and to identify the various components of the total
service time: the transmission delay of the service request, and the time to process
the request and form the response at the Web service.
The rest of the paper is structured as follows: Sect. 2 details the IEC 61499
function block implementation of the services, Sect. 3 presents the case study
considered to demonstrate our approach, Sect. 4 presents the method of our testing
approach and finally Sect. 5 presents the results and evaluation.
1 www.arrowhead.eu.
Fig. 1 a Workpiece transfer system with one linear motion pusher. b A function block application
generated to implement requirements specified in the form of services
216 E. Demin et al.
This study was performed using a simulated model of a pick-and-place
(PnP) manipulator, presented in Fig. 3. The manipulator, consisting of two axes of
pneumatic cylinders and a suction device, performs the function of moving items
(workpieces) from one place to another. This manipulator has a fully decentralized
control based on collaboration of controllers embedded into each cylinder. This
architecture allows Plug and Play composition of mechatronic devices. One
approach to totally decentralised manipulator control implemented using the IEC
61499 standard is described in detail in [8, 9].
The PnP-manipulator is an automated system consisting of intelligent mecha-
tronic components, e.g. pneumatic cylinders. Several configurations of the
manipulator are described in [10, 11]. Here we use a configuration with 6 cylinders
(3 vertical and 3 horizontal). Each cylinder can be moved in and out by the
4 Testing Methods
TRT = T4 − T1 (1)
TRPT = T3 − T2 (2)
Fig. 4 Measuring the duration at different points in a single Web service request/response
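Given the four timestamps of Fig. 4, Eqs. (1)–(2) separate the total time into components; attributing the remainder to network transmission is an assumption of this sketch, and the millisecond values are illustrative, not measured:

```python
# Sketch: decompose a single Web-service call using the four timestamps of
# Fig. 4: TRT = T4 - T1 (Eq. 1, client-side total) and TRPT = T3 - T2
# (Eq. 2, server-side processing); the remainder is taken as network delay.

def timing_components(t1, t2, t3, t4):
    trt = t4 - t1            # Eq. (1): total response time
    trpt = t3 - t2           # Eq. (2): request processing time
    network = trt - trpt     # assumed: transmission delay both ways
    return trt, trpt, network

trt, trpt, net = timing_components(t1=0.0, t2=12.5, t3=30.5, t4=45.0)
print(trt, trpt, net)        # 45.0 18.0 27.0
```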
5 Performance Evaluation
The experimental results show that the service-request time of a Web service
greatly depends on the implementation technology and the platform on which it is
deployed. Based on the data obtained during the experiments and using statistical
analysis, one can formulate the following practical conclusions about the performance
of deployment platforms from different manufacturers:
1. Despite the fact that Oracle and IBM products provide more flexibility when
designing Web services, the performance of real-time applications is reduced.
2. When deploying a Web service with Microsoft Internet Information Services (MS
IIS), the real productivity is twice that of IBM WebSphere Application
Server.
3. Borland's Web platform has shown significant volatility, in contrast with
Microsoft's. At the same time, there is a steady, albeit small, deviation of the
average TRPT processing time for a Web service deployed on a local server.
4. All Web platforms use different technologies to optimize Web services.
5. Network latency has a significant effect on the performance characteristics of
Web services. In addition, the timing of network connections shows
significant volatility compared to the service-request time of a
Web service.
Automation Services Orchestration … 219
6. The instability of the network environment in which the consumer and the
provider of Web services interact has a significant impact on the uncertainty of the
performance characteristics. The uncertainty can be characterized by the coefficient of
variation, the ratio of the standard deviation to the expectation
of the service time. In some cases this parameter is quite high, which indicates
substantial uncertainty in the experimentally measured Web-platform performance
characteristics.
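The coefficient of variation in item 6 is simply the standard deviation divided by the mean of the measured service times; the sample values below are made up:

```python
# Sketch: coefficient of variation (std. deviation / mean), used above as
# the measure of uncertainty of a Web service's timing.

from statistics import mean, pstdev

def coeff_of_variation(samples):
    return pstdev(samples) / mean(samples)

service_times_ms = [18.0, 22.0, 19.5, 40.0, 21.0]  # illustrative timings
print(round(coeff_of_variation(service_times_ms), 3))
```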
Figures 5 and 6 chart the results of testing the Web platforms from
different vendors: Fig. 5 shows the service-time statistics of the Web service, and
Fig. 6 the statistics of changes in network latency (Table 1).
6 Conclusion
Acknowledgments This work was partially supported by the program “Fundamental research
and exploratory research involving young researchers” (2015–2017) of the Russian Science
Foundation (project number 15-11-10010), and by Luleå Tekniska Universitet through the grants
381119, 381940 and 381121.
References
1. Erl, T.: Service-Oriented Architecture: Concepts, Technology, and Design. Prentice Hall PTR,
Upper Saddle River (2005)
2. Jadeja, Y., Modi, K.: Cloud computing—concepts, architecture and challenges. In: 2012
International Conference on Computing, Electronics and Electrical Technologies (ICCEET),
pp. 877–880 (2012)
Automation Services Orchestration … 221
IoT Visibility Software Architecture
to Provide Smart Workforce Allocation
Abstract In manufacturing and logistics companies there are many processes and
services that cannot be fully automated, and integration with the workforce is the
key to better results. One example is airport ground-handling operations, where
agents, operators, drivers and aircraft crews need to generate and feed infor-
mation from other processes and events in order to produce better schedules. This
work uses manufacturing and Internet of Things (IoT) concepts to design a software
architecture that generates uncoupled workforce-information feedback for current
agent-based decision-making frameworks. In the case at hand, the architecture is
implemented in a cloud-based commercial solution called “aTurnos”, which has
already been deployed by different companies to schedule working shifts for over
25,000 employees. The handling company analysed in this paper requires dynamic
allocation of employees and tasks, with field information about the status of
workers updated in real time.
1 Introduction
2 Background Considerations
There are many planning algorithms that solve problems of workforce allocation
such as the scheduling of shifts for nurses in a hospital. These problems require the
implementation of relatively complex programming algorithms such as backtracking
with high computational requirements. For the application at hand, some authors
assert that optimal solutions are not required; good ones, such as those obtained by
heuristic or greedy algorithms, are sufficient [9, 10]. The lack of standards for the
data provided to the algorithms is another problem of planning systems, which thus
require custom developments that increase costs. One of the most
important weaknesses in online workforce management is the lack of information
about the real evolution that is taking place in the environment. Deviations in
allocations are not automatically handled by the planning system, but manually by
supervisors or team leaders in the workplace (e.g. a factory). To make this possible, it
is necessary to design interfaces capable of capturing real, reliable information and
of making it available to all parties by publishing it in a standard format.
For example, when addressing the problem from a manufacturing perspective,
the first step is to identify the point within the scheduling process where the
integration of physical data and decision-making needs to be implemented; this
corresponds to the point at which the information from the environment needs to be
fed in. Firstly, at manufacturing cell level, the system executes the low-level actions
through machine control subject to its specific constraints. Above cell level, shop
floor is the lowest level that introduces flexibility requirements; on this level the
tasks/orders are broken into single instructions executed downstream by the cells
that directly manipulate products and resources. This level involves the first
cooperation between elements: it schedules the instructions and handles the oper-
ational disturbances between operational cells. Shop floor control is often referred
to as Manufacturing Execution Systems (MES) [11]. The Decision Support Systems
(DSS) require information coming from both adjacent levels: the enterprise level
(information systems) and the cell level (physical resources). More specifically, in
the manufacturing/production control proposed by McFarlane et al. [12], the product
scheduling is an interface between the resource status and the order status. In a
workforce shift-allocation process, the visibility requirements are the same: checking
the status of workers while being fed constraint information from the upper levels
related to factory management. Therefore, the visibility framework used in the
definition of the main requirements and constraints is crucial in workforce
management. Setting up this visibility framework starts with selecting technologies
for tracking workers according to the use cases, the environment and employment
laws.
Some common requirements for a workforce visibility framework are:
• The definition of the workforce area as an indoor/outdoor environment. The
tracking is forbidden out of the work environment.
• Employee monitoring cannot be continuous; it must be limited to specific
positions, crossing points and doors.
226 P.G. Ansola et al.
• The system must be autonomous and easy to deploy without using existing
equipment or infrastructures. It may have to overcome coverage problems, as
would be the case in a deployment gathering more than 5000 users/employees.
Although the hardware is not the focus of this paper, a short overview is
nevertheless required to understand the software architecture. The present approach
uses regular smartphones that identify Bluetooth points based on the iBeacon
protocol, a commercial protocol from Apple based on Bluetooth 4.0. These
iBeacons were initially designed for commercial/marketing purposes and are well
known in industry but, as in this case, they may be used for many other purposes.
In workforce management, the Bluetooth points or beacons are distributed over the
scenario, broadcasting their unique IDs with a configurable coverage that ranges
from 1 to 60 m. These beacons define locations, making it possible to cover large
spaces by providing specific identifications at the defined strategic points. The
smartphones carried by employees send the location IDs to the cloud when they are
in the proximity of a beacon, through a process called a “check”. Figure 1 details
the setting up of this network or mesh: the communication between iBeacons and
smartphones uses Bluetooth, and the smartphones send the IDs to the cloud using
the company network (3G or WiFi).
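The check process can be sketched as the assembly of a small JSON package on the smartphone once a beacon ID is received; the field names below are our assumptions, not the actual aTurnos message format:

```python
import json
import time

def build_check(employee_id, beacon_id, rssi):
    """Assemble the JSON 'check' sent to the cloud when a beacon is in range.
    Field names are hypothetical; only the beacon/location logic follows the text."""
    return json.dumps({
        "employee": employee_id,
        "location": beacon_id,   # the beacon broadcasts its unique location ID
        "timestamp": int(time.time()),
        "rssi": rssi,            # signal strength, usable to estimate proximity
    })

payload = build_check("emp-042", "beacon-T1-gate12", -63)
print(payload)
```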
The proposed software architecture can directly benefit from visibility frameworks
through a software interface such as the standard Electronic Product Code Information
Services (EPCIS), a well-known architecture for visibility frameworks
used in manufacturing and logistics. The EPCIS specification helps define this
dynamic interface by using the existing services, which automatically publish
real-time information coming from the plant and its circumstances, reporting pro-
cessed information and abstracting the upper IT levels. In the proposed system, EPC
subscriptions connect the human resources with the internal reasoning of the
affected agents. The beliefs of these agents are updated following the standard
EPCIS XML specification [3]. The proposed architecture has been implemented in
a cloud-based commercial solution called “aTurnos”, which has already been
commercially deployed by different companies to schedule working shifts for over
25,000 employees. This solution requires identifying the services needed to
test the extrapolation of EPCIS to workforce management. The services imple-
mented in the aTurnos Web services, already available to its client companies
through their App, are:
• setCheck(EPC-Package), receives the new location of the employees based on
the EPCIS standard as detailed below.
• setNewLocation(Location), receives a new location for the employee being
identified.
Looking at the full cycle, the employees close to the beacons send a check to the
server in a JSON package. Then, the employee avatar, implemented as an agent, is
subscribed to the events of this employee using the EPC product subscription
process. Based on this information, the agent-based system is capable of resched-
uling the plan, thus allocating new tasks to the employees. Finally, the tasks service
pushes the information to the smartphones of the involved employees. For the
system to be able to perform all these operations efficiently, it is necessary to
uncouple the problem while maintaining the cohesion between the checks and the
agent information used in the decision-making process. This work proposes
the definition of states based on these checks (What? Where? When? and Why?).
The checking process allows identifying the status of employees in an uncoupled
way. This simplifies implementation by tackling the problem of excessive
dependence on up-to-date field information, which is usually the drawback of
MAS [11, 13]. As discussed above, the status of an employee is defined by the
“What?” (Product identification), “Where?” (Read point), “When?” (Timestamp)
and “Why?” (Business step) information. In Eqs. 1–3, p represents an employee,
which has a set of Reader Points (R) (Eq. 1) and Business Steps (B) (Eq. 2). In an
airport ground-handling scenario, an operator being busy (why?) at a specific gate
(where?) defines a state. As the number of business steps and reading points grows,
the precision of decision-making increases because of the corresponding increase
in the number of states. The definition of states is flexible, so new read points and
business steps can be added during operations.
R = (r_1, r_2, …, r_n),  R ∈ M_{1×n}    (1)
Once the employee states in the schedule are identified, it is possible to define
the transitions between the current state (s_ij) and the desired state (s'_ij) by a
simple inference over B and R. Any new event coming from the checks triggers a
reschedule based on the modification of the state. There is an event subscription
between the checks service and the agent-based avatar in aTurnos.
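The four-field state just described can be sketched as follows; the class and function names are ours, but the What/Where/When/Why fields and the reschedule-on-state-change rule follow the text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmployeeState:
    what: str    # product/employee identification
    where: str   # read point r_i from R = (r_1, ..., r_n)
    when: int    # timestamp
    why: str     # business step b_j from B

def on_check(current: EmployeeState, check: EmployeeState) -> bool:
    """Return True when the incoming check modifies the state (ignoring the
    timestamp), i.e. when a reschedule must be triggered."""
    return (current.what, current.where, current.why) != \
           (check.what, check.where, check.why)

s = EmployeeState("op-7", "gate-12", 1000, "busy")
s_new = EmployeeState("op-7", "gate-14", 1060, "busy")
print(on_check(s, s_new))  # the read point changed, so a reschedule fires
```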
When implementing the “aTurnos” solution, the initial phase to be addressed
must situate every location on a map (the current version uses Google Maps). This
allows aTurnos to check possible delays and trigger rescheduling processes based
on the distances of the resources (workers) to the required operation points and
timings. The example in Fig. 1 shows the Airport of Barcelona, where four iBeacons
have been placed at different points in the terminals and aTurnos calculates the
shift times between locations via the Google Maps API. This is an efficient (i.e.
fast) way to generate information about the real environment with third-party
software through a dynamic interface.
where T_nt stands for the time during which employees have not yet been informed
about an assigned task and T_move is the time that an employee needs to move to
the new location (5). T_nt includes the time required for the supervisor to identify
the deviation from the current plan, the time to plan, and the time to communicate
the new instructions to the team.
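One plausible reading of the elided Eq. (5), under the definitions above, is that the total wasted time is T_nt plus T_move; a sketch with hypothetical names for the components of T_nt:

```python
def wasted_time(t_identify, t_plan, t_communicate, t_move):
    """Total wasted time: T_nt (identify the deviation + plan + communicate the
    new instructions) plus T_move (travel to the new location). The three-way
    split of T_nt follows the text; the total is our reading of Eq. (5)."""
    t_nt = t_identify + t_plan + t_communicate
    return t_nt + t_move

# Illustrative values in minutes (not measured data).
print(wasted_time(t_identify=10, t_plan=5, t_communicate=8, t_move=12))  # 35
```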
The green line defines the real process demand per hour. The demand is known only
after the event has occurred, but supervisors have access to historical data and
demand forecasting. On the first day, the existing employees do not cover the real
needs. On the second day, the supervisor contracts more workers, but they are
concentrated in the morning shift, leaving a shortage in the afternoon. On the third
day, this deviation persists because the information used by the supervisor to define
the new plan is not accurate enough. In this case, the coverage does not meet the
requirements until the fourth day, even though for two days more workers were
allocated to the morning shift than were really needed.
The proposed system would make it possible to reduce this wasted time by
improving the scheduling process using cloud-based solutions. Based on T_nt, the
reduction of wasted time can be considered in three aspects:
• The automatic identification of disturbances: when there are many tasks and
resources, it is hard to identify the disturbances and delays.
• The direct generation of new plans by the agent-based system every time these
disturbances occur. Managers can access the real-time information in the cloud
through their mobile terminals during operations and validate or redefine
processes.
• The fast communication of the new plan to the employees through the smart-
phone. If this process is performed manually, the supervisor needs to
communicate every new instruction.
5 Conclusions
References
1. Zhang, P., et al.: The influence of industries and practice on the IS field: a recount of history.
In: Twentieth Americas Conference on Information Systems, Savannah (2014)
2. Luftman, J.: Strategic alignment maturity. In: Handbook on Business Process Management 2,
pp. 5–43. Springer, Berlin, Heidelberg (2015)
3. García Ansola, P., et al.: Distributed decision support system for airport ground handling
management using WSN and MAS. Eng. Appl. Artif. Intell. 25(3), 544–553 (2012)
4. Ashford, N., Stanton, H., Moore, C.: Airport Operations. McGraw-Hill Professional, New
York (1998)
5. Flynn, B.B., Schroeder, R.G., Sakakibara, S.: The impact of quality management practices on
performance and competitive advantage. Decis. Sci. 26(5), 659–691 (1995)
6. Liu, L.: From data privacy to location privacy: models and algorithms. In: Proceedings of the
33rd International Conference on Very Large Data Bases (VLDB) (2007)
7. López, T.S., et al.: Adding sense to the Internet of Things. Pers. Ubiquit. Comput. 16(3),
291–308 (2012)
8. Da Xu, L., He, W., Li, S.: Internet of Things in industries: a survey. IEEE Trans. Industr. Inf.
10(4), 2233–2243 (2014)
9. Colombo, A.W., et al.: Industrial Cloud-Based Cyber-Physical Systems: The IMC-AESOP
Approach. Springer (2014)
10. De Weerdt, M., Clement, B.: Introduction to planning in multiagent systems. Multiagent Grid
Syst. 5(4), 345–355 (2009)
11. Leitão, P.: Agent-based distributed manufacturing control: a state-of-the-art survey. Eng.
Appl. Artif. Intell. 22(7), 979–991 (2009)
12. McFarlane, D., et al.: Intelligent products in the supply chain—10 years on. In: Service
Orientation in Holonic and Multi-Agent Manufacturing and Robotics, pp. 103–117. Springer,
Berlin, Heidelberg (2013)
13. Archimede, B., et al.: Towards a distributed multi-agent framework for shared resources
scheduling. J. Intell. Manuf. 25(5), 1077–1087 (2014)
Virtual Commissioning-Based
Development and Implementation
of a Service-Oriented Holonic Control
for Retrofit Manufacturing Systems
1 Introduction
One of the key objectives of current research activities worldwide is to define best
practices for implementing agile manufacturing systems. One very promising trend
deals with the breakthrough induced by cyber-physical systems technology [1–3],
for production [2], maintenance [4, 5] or logistics issues [6] to name a few. Even if
these technologies are of great interest and innovative solutions will soon appear
on the market, the fundamental change they induce and the cost of system
enhancements might impose a delay of a couple of decades between their industrial
maturity and their large-scale exploitation. Based on a size-1 experimental
platform, this paper intends to define a framework for implementing an agile control
2 System Description
The SoHMS is applied to a small production line located at the University of
Nantes, France. This production line (Fig. 1) is an automated assembly line
composed of three workstations and a conveyor system formed by four conveyor
loops, of which three serve as buffers for the workstations while the fourth, the
main loop, serves for transportation between the workstations. Product goods are
transported by the conveyors on pallets of intelligence level 1 [9], which have
self-identification capabilities based on RFID tags that allow the transport
resources to direct a pallet through the conveyor diverters from one port to another.
Workstations are composed of a 6-axis robotic arm, a stock of Lego® blocks, a
temporary stock and a workspace location for incoming pallets (Fig. 2).
2.2 Products
The main function of the robotic arm is to perform pick and place assembly
operations. The main task of the robot is to pick a corresponding Lego® block from
the fixed or temporary stock and assemble it on the product under treatment. The
fixed stock has three racks, one for each size of Lego® block. Within a rack,
blocks of different colours may arrive in random order. When a specific colour is
demanded and is not available at the picking position of the rack, the robot can use
the temporary stock to remove blocks from the rack until the desired colour
becomes available.
The product is a structure of Lego® blocks assembled in a specific configuration.
Figure 3 illustrates a product family with two versions sharing the same product
features; therefore, all members have the same process structure up to the third
level. Differentiation occurs at the fourth level, where the two versions can be issued.
The Lego® structure is formed by three types of blocks, namely: a small 2 × 2
block, a medium 2 × 4 block and a large 2 × 6 block. These blocks can be assembled
in any position (X, Y, Z and W axes). In addition, each block is available in four
colours: red, green, yellow and blue. Hence, there is great flexibility to create a
vast variety of structures. Customization for such a product family happens at a
scalable level with the choice of colour and at a modular/structural level with the
choice of version. The Lego® structure thus proves to be an ideal vehicle to
illustrate, in a very simple manner, the dependencies between the different com-
ponents of the structure.
In the SoHMS, product-level services are offered by workstations and transport
resources. This service library belongs to the production line, which can be viewed
as a resource itself, thus having a service offer of the different product families it
can produce. Lego® blocks are used to represent the different manufacturing
services (Fig. 4). In this way, taking the three types of blocks available, the
service ontology for this application is formed by three types of services per layer,
differentiated by their size: a 2 × 2 block represents a service class A, a 2 × 4 block a
service class B and a 2 × 6 block a service class C. This constitutes an ontology of
4 × 3 = 12 service types, namely: A1 for a small block at level 1, B2 for a medium
block at level 2, C3 for a large block at level 3, etc. Moreover, each of these services
has a set of parameters: the colour and the position of the block in x and y
coordinates only, as the vertical position forms part of the service-type definition.
Other product-level services are the Transport_Pallet service and the Supply_Base
service. The transport service has the parameters startPort and endPort, while the
supply service has as parameter the colour of the base.
As the production line represents a flexible job-shop, service redundancy is
included. Workstations 2 and 3 provide all the manufacturing services for the
assembly of Lego® blocks of the three sizes. However, even though both work-
stations provide the same service types, they do not have the same capabilities at
any given time, considering for example the range of possible colours in stock.
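The 4 × 3 service ontology can be enumerated programmatically; a sketch following the naming scheme in the text:

```python
# Block sizes map to service classes (A: 2x2, B: 2x4, C: 2x6); four assembly
# levels give the 4 x 3 = 12 service types named A1 ... C4 in the text.
SERVICE_CLASSES = {"A": (2, 2), "B": (2, 4), "C": (2, 6)}
LEVELS = (1, 2, 3, 4)

ontology = {f"{cls}{lvl}": {"block": size, "level": lvl,
                            # colour and x/y position are service parameters;
                            # the vertical position is part of the type itself
                            "parameters": ("colour", "x", "y")}
            for cls, size in SERVICE_CLASSES.items() for lvl in LEVELS}

print(len(ontology))         # 12
print(sorted(ontology)[:3])  # ['A1', 'A2', 'A3']
```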
described in [10]. The choice made on this system is different: on the network,
industrial PCs are set up running a Java virtual machine in order to execute
different programs, each having a specific function.
3 Virtual Commissioning
The choice made on this system is to replace the PLC with ad hoc programs (Fig. 7)
that can handle higher-level semantics than PLCs do and are more flexible to
configure for experimental purposes. First, a Low-Level Middleware (LLM) was
created. The objective of the LLM is to synchronously retrieve the state of each
sensor in the system, asynchronously inform the upper layers of any change in the
value of a sensor, and asynchronously modify the state of actuators on the upper
layers’ orders. Functionally, this is close to what OPC1 servers do, but adapted to
the hardware configuration.
Second, a Medium-Level Middleware (MLM) is in charge of aggregating the
data coming from the LLM for the upper layers and of timing macro-actions
requested by the upper layers in high-level semantics. For example, when a pick
service is requested on a robot, the MLM communicates to the LLM all the
configuration bytes to modify on the controller, waits for an acknowledgement,
sends the program start order, sends an acknowledgement to the upper layer that
the service is running, waits for the
1
www.opcfoundation.org.
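The pick-service sequence handled by the MLM (configure, wait for acknowledgement, start, report) can be sketched as a simple ordered protocol; the message strings are illustrative, and only the order of steps follows the description above:

```python
def run_pick_service(llm_send, notify_upper):
    """Drive one 'pick' macro-action: configure the controller through the LLM,
    wait for the acknowledgement, send the start order, then report to the
    upper layer that the service is running. llm_send(msg) returns the reply."""
    reply = llm_send("configure")          # write configuration bytes
    if reply != "ack":
        raise RuntimeError("controller configuration not acknowledged")
    llm_send("start")                      # program start order
    notify_upper("service_running")        # asynchronous status to upper layer

log = []
run_pick_service(lambda m: log.append(("llm", m)) or "ack",
                 lambda m: log.append(("upper", m)))
print(log)
```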
240 F. Gamboa Quintanilla et al.
5 Conclusion
This study introduces a new experimental platform, built up around conveyor loops,
three robotic stations and a control architecture fully programmed in ad hoc Java
code. A SoHMS was implemented thanks to a virtual commissioning phase
performed via a Rockwell Arena simulation model. The next step is to generalize
the programs in order to distribute the control and enhance the autonomy of the
holons.
References
1. Colombo, A.W., Karnouskos, S., Bangemann, T.: Towards the next generation of industrial
cyber-physical systems. In: Industrial Cloud-Based Cyber-Physical Systems, pp. 1–22.
Springer, Berlin (2014)
2. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for Industry 4.0-based
manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
3. Monostori, L.: Cyber-physical production systems: roots, expectations and R&D challenges.
Procedia CIRP 17, 9–13 (2014)
4. Trentesaux, D., Knothe, T., Branger, G., Fischer, K.: Planning and control of maintenance,
repair and overhaul operations of a fleet of complex transportation systems: a cyber-physical
system approach. In: Borangiu, T., Trentesaux, D., Thomas, A. (eds.) Service Orientation in
Holonic and Multi-agent Manufacturing, pp. 175–186. Springer, Berlin (2015)
5. Zhong, H., Nof, S.Y.: The dynamic lines of collaboration model: collaborative disruption
response in cyber-physical systems. Comput. Ind. Eng. 87, 370–382 (2015)
6. Seitz, K.-F., Nyhuis, P.: Cyber-physical production systems combined with logistic models—a
learning factory concept for an improved production planning and control. Procedia CIRP 32,
92–97 (2015)
7. Morariu, C., Morariu, O., Borangiu, T.: Customer order management in service oriented
holonic manufacturing. Comput. Ind. 64(8), 1061–1072 (2013)
8. Quintanilla, F.G., Cardin, O., Castagna, P.: Product specification for flexible workflow
orchestrations in service oriented holonic manufacturing systems. In: Borangiu, T.,
Trentesaux, D., Thomas, A. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing and Robotics, pp. 177–193. Springer, Berlin (2014)
9. Wong, C.Y., McFarlane, D., Ahmad Zaharudin, A., Agarwal, V.: The intelligent product
driven supply chain. In: 2002 IEEE International Conference on Systems, Man and
Cybernetics, vol. 4, pp. 6–10 (2002)
10. Legat, C., Vogel-Heuser, B.: An orchestration engine for services-oriented field level
automation software. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation
in Holonic and Multi-Agent Manufacturing, Springer Studies in Computational Intelligence,
pp. 71–80
11. Berger, T., Deneux, D., Bonte, T., Cocquebert, E., Trentesaux, D.: Arezzo-flexible
manufacturing system: a generic flexible manufacturing system shop floor emulator approach
for high-level control virtual commissioning. Concurrent Eng. 1063293X15591609 (2015)
Security Issues in Service Oriented
Manufacturing Architectures
with Distributed Intelligence
Abstract The paper discusses the main classes of shop floor devices relative to
distributed intelligence for product-driven automation in heterarchical control. The
intelligent product (IP) concept is enhanced with two additional requirements:
standard alignment and SOA capability. The paper classifies IPs from the SOA
integration point of view and introduces a formalized data structure in the form of
an XSD schema for XML representation. We propose a security solution for service
oriented manufacturing architectures (SOMA) that uses a public-key infrastructure
to generate certificates and propagate trust at runtime (during product execution) for
embedded devices that establish IPs on board pallets and communicate with shop
floor resources. Experimental results are provided.
Keywords Manufacturing execution system · Distributed intelligence · SOMA ·
Intelligent product · Security · Multi-agent framework
1 Introduction
not aware of its current state, location or identity. The requirements for manufac-
turing agility on one hand and supply chain predictability on the other hand con-
verged and the concept of intelligent product (IP) has emerged. McFarlane [2]
presents the main characteristics of the intelligent product as the ability to monitor,
assess and reason about its current and future state. At the same time, recent
research offers advances in developing MES applications with Service Oriented
Architecture (SOA) [3].
In this context, the intelligent product concept needs to be enhanced with two
additional requirements: standard alignment and SOA capability. These require-
ments are vital for the seamless real-time integration of intelligent products with the
other components (e.g. batch planning, automated execution, quality control,
inventory, etc.) of the overall manufacturing control system. On the other
hand, when considering the global environment in which manufacturing companies
must operate today in order to remain competitive and keep costs at minimum, the
standardization problem becomes very important. Operating with proprietary
information structures becomes a concern that prevents real cooperation between
organizations towards a common goal. We argue that the alignment to standards
will be a decisive factor for determining the manufacturing enterprise’s success in
the years to come [4].
Several standards supporting the advances in SOA adoption in manufacturing
have emerged in recent years, some of the best known examples being ISA 95,
ISA 88, ebXML, EDDL, FDT and MIMOSA. The adoption of these standards has
been seen first in the automotive industry and was supported by IBM through its
Manufacturing Integration Framework (MIF) [5].
Meyer [6] presents a complex survey on the intelligent product focusing on the
underlying technologies that enable this concept. A classification is introduced that
positions an IP along three perspectives: level of intelligence, location of intelli-
gence, and aggregation level of intelligence.
This paper discusses the main classes of shop floor devices relative to distributed
intelligence for product-driven automation and traceability and presents in this
context an XML approach for: storing the operational information for intelligent
products and representing the information flow during manufacturing process, the
XSD scheme definition, and the operation dependencies including lead and lag
time. We propose and implement a security architecture that uses a public-key
infrastructure to generate certificates and propagate trust at runtime (during pro-
duction execution) for intelligent products on board pallet carriers, communicating
with shop floor resources.
Fig. 2 Workstation-assisted
shop floor device
246 C. Morariu et al.
In this category one can include mostly Android-equipped devices, which can
execute complex Java-based agents (JADE/WADE). This class of devices provides
the building blocks for a genuine distributed-intelligence architecture, in which
complex negotiation logic can be implemented at lower levels in the stack, allowing
local decisions in the manufacturing process. These devices can leverage higher-
layer standards on top of SOAP such as ebXML, STEP or OAG BOD.
The manufacturing system architecture can be implemented using any combi-
nation of the above devices, as all have in common a generic structure consisting of
an informational part and the physical system. From a SOA perspective, the
architecture tends towards a point-to-point choreography if the lower-class devices
are used. Once higher-class devices are introduced, the trend is to use an
orchestrated architecture based on BPEL workflows and real-time events. The
orchestrated architecture offers high flexibility at the integration layer by promoting
low coupling between the components involved and allows algorithms capable of
local decision making. Based on the classification introduced by Meyer [6] in 2009
and discussed in the previous section, the capabilities of these device classes are
presented in Table 1.
The informational part of the shop floor devices consists of at least structured
information regarding both the capabilities of the device (for manufacturing
resources) and the operations required (for intelligent products moving on pallet
carriers) [10].
The data flow for intelligent products (IP) can be seen as a sequence of steps during
the execution of each individual product, as illustrated in Fig. 5.
This process starts when the intelligent product, represented initially only by the
pallet carrier equipped with an embedded device, is inserted in the manufacturing
line. At this point, the production information is loaded in the memory of the
embedded device and initialized. The next step is the data validation activity,
composed of an XSD schema validation and a logical validation against the
operations required for product execution. The logical validation is required in
order to detect situations such as deadlock scenarios that might occur. Once the
validation is complete, each operation from the pre-loaded product recipe is suc-
cessively executed by shop floor resources. The data structure is updated with the
result of each operation execution, until all operations are completed. When the
product on its pallet exits the manufacturing line, the data finalization phase is
executed, in which the information about each operation execution is consolidated
and unloaded from the embedded device on board the pallet.
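The logical validation mentioned above must catch circular prerequisite chains; a minimal deadlock (cycle) check, assuming each operation lists the IDs of its prerequisites:

```python
def has_deadlock(ops):
    """Detect circular prerequisite chains in a recipe. `ops` maps an operation
    ID to the list of operation IDs that must complete before it starts."""
    WHITE, GREY, BLACK = 0, 1, 2            # unvisited / in progress / done
    colour = {op: WHITE for op in ops}

    def visit(op):
        colour[op] = GREY
        for pre in ops.get(op, []):
            if colour.get(pre) == GREY:     # back edge: a cycle exists
                return True
            if colour.get(pre) == WHITE and visit(pre):
                return True
        colour[op] = BLACK
        return False

    return any(colour[op] == WHITE and visit(op) for op in ops)

print(has_deadlock({"op1": [], "op2": ["op1"], "op3": ["op2"]}))       # False
print(has_deadlock({"op1": ["op3"], "op2": ["op1"], "op3": ["op2"]}))  # True
```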
The proposed data format is defined using XSD, and contains three sections
describing: the product identity, its real time status and the list of operations that
need to be executed. The product identity is defined by the following complex type:
<complexType name="ProductID">
  <sequence>
    <element name="Product_ID" type="xsd:string"/>
    <element name="RFID_Tag" type="xsd:string"/>
    <element name="Product_Type" type="xsd:string"/>
  </sequence>
</complexType>
The Product_ID, together with the RFID_Tag associated with the pallet, uniquely
identifies the product during its execution. The Product_Type is a pointer to
the manufacturing system knowledge base entry storing the recipe for that specific
product.
The real-time status of the product is represented by the ProductStatus complex
type.
The Operation complex type holds the information required for each
operation execution. The ID uniquely identifies the operation in the operation list.
The Prerequisites element is a comma-separated list of operation IDs that need to be
completed before this operation can start. The Code represents the operation code in
the manufacturing system knowledge base and is used in conjunction with the
Parameters element. The Resource_ID is the ID of the machine that will execute
that specific operation. This value can be known in advance, in the case of a
pre-computed execution schedule for the whole batch, or can be determined at
runtime in heterarchical mode, when there is no predefined execution
schedule and job executions are negotiated at runtime.
There are three time-related constraints in the Operation data structure, namely:
Duration, Lead Time and Lag Time (see Fig. 6).
• Duration: represents the number of time units required for the operation to be
performed by a resource;
• Lag Time: represents the delay imposed between two consecutive operations. For
example, if we consider a finish-to-start dependency between Operation1 and
Security Issues in Service Oriented Manufacturing Architectures … 251
Operation2 with a lag time of 5 time units, it would mean that Operation2 cannot
start until at least 5 time units have passed since Operation1 was completed. Lag
time is a positive integer, as it adds to the operation's overall duration;
• Lead Time: represents an acceleration of the successor operation. In a finish-to-start
dependency between Operation3 and Operation4 with a lead time of 5 time units,
Operation4 can start up to 5 time units before Operation3 finishes. The lead time
is expressed as a negative integer because it subtracts from the total duration of
the operation. These two concepts are illustrated in Fig. 6.
Considering the lag time and lead time characteristics of operations, together with
their duration and precedence, one can define four time estimates
for each operation:
• ES (early start): the earliest time, expressed in time units, at which the operation
can start. It is computed from the operation's prerequisite tree by adding the
durations and lag times of the prerequisite operations and subtracting the lead
times. In other words, this represents the optimistic start time for the operation;
• LS (late start): the latest time, expressed in time units, at which the operation can
start. Similarly to ES, it is computed from the prerequisite operations, with the
difference that the lead time is not subtracted. This represents the pessimistic start
time for the operation;
• EF (early finish): the earliest time at which the operation can finish, computed as
ES + the duration of the operation;
• LF (late finish): the latest time at which the operation can finish, computed as
LS + the duration of the operation.
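As an illustration of these estimates, a forward computation of ES (and hence EF = ES + duration) might look as follows, under the sign conventions above: a positive offset on a dependency is a lag, a negative one a lead. This sketch is not taken from the chapter; the data layout is an assumption chosen for illustration.

```python
def early_start(op_id, ops, memo=None):
    """Earliest (optimistic) start of an operation: the maximum, over its
    prerequisites, of the prerequisite's early finish shifted by the lag
    (positive) or lead (negative) attached to that dependency.

    `ops` maps operation ID -> {"duration": int, "deps": {prereq: offset}}.
    """
    memo = {} if memo is None else memo
    if op_id not in memo:
        es = 0
        for prereq, offset in ops[op_id]["deps"].items():
            ef_prereq = early_start(prereq, ops, memo) + ops[prereq]["duration"]
            es = max(es, ef_prereq + offset)
        memo[op_id] = es
    return memo[op_id]

def early_finish(op_id, ops):
    """EF = ES + duration, as defined above."""
    return early_start(op_id, ops) + ops[op_id]["duration"]
```

With a pre-computed schedule, these values can be derived once for the whole recipe and then compared against actual start and finish times during execution.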
Each product recipe is essentially a dependency tree in which operations generally
have a pre-imposed precedence. In a hierarchical scheduling mode, where the
execution order of the operations is known in advance, the above four parameters
can be computed directly at the beginning and used during execution to validate
that the initial schedule is still being followed by the system.
The Operation complex type also has two attributes that are added after the
operation execution finishes. The Energy_Footprint is added by the resource that just
performed the operation and is a floating-point number representing the energy
consumed for the operation. Shop floor resources should be able to report this metric
for each predefined operation. The Quality_Check attribute contains the result of the
quality control performed after each operation. In practice this is a status string
indicating either that the check has passed or, if a failure was detected, an error message.
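For illustration, attaching these two post-execution attributes to an Operation element might look like the sketch below, using Python's standard ElementTree. The XML layout is simplified, not the full proposed schema:

```python
import xml.etree.ElementTree as ET

def record_result(operation, energy_footprint, quality_check):
    """Add the Energy_Footprint and Quality_Check attributes once the
    resource has finished executing the operation."""
    operation.set("Energy_Footprint", f"{energy_footprint:.2f}")
    operation.set("Quality_Check", quality_check)

# Simplified Operation element for demonstration only
op = ET.fromstring("<Operation><ID>3</ID><Code>DRILL</Code></Operation>")
record_result(op, 12.5, "PASSED")
```

Storing the results as attributes, rather than child elements, keeps the operation list compact on the memory-constrained embedded device.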
The Secure Sockets Layer (SSL) protocol, first introduced by Netscape, was initially
used to ensure secure communication between web servers and browsers communicating
over HTTP(S). However, the protocol itself operates at the socket layer, so applications
are not limited to HTTP.
The protocol proposed in this paper uses a PKI, consisting of a
Certificate Authority (CA) that generates public-private key pairs to identify the
server, the client or both. The SSL handshake, in the case of mutual authentication,
consists of the following main steps:
1. The client connects to a secure server socket.
2. The server sends its public key with its certificate.
3. The client checks that the certificate was issued by a trusted CA, that the certificate
is still valid and that the certificate matches the DNS hostname of the server.
4. The client passes its public key with its certificate to the server. The server
verifies that the client certificate was issued by a trusted CA, similarly to step 3.
5. The server responds with a negotiation request on the encryption cipher to be used.
6. The client then uses the server's public key to encrypt a random symmetric
encryption key and sends it to the server.
7. The server decrypts the symmetric encryption key using its private key and uses
the symmetric key to decrypt data.
8. The server sends back the data encrypted with the symmetric key.
9. Communication continues encrypted until the socket is closed.
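In Python, a context enforcing this kind of mutual authentication can be sketched with the standard ssl module. This is an illustrative sketch, not the chapter's Java/OpenSSL implementation; the certificate file names are placeholders supplied by the caller:

```python
import ssl

def make_mutual_tls_context(ca_file=None, cert_file=None, key_file=None,
                            server_side=False):
    """Build a TLS context that both presents a CA-issued certificate and
    requires one from the peer, mirroring the mutual handshake above."""
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    ctx = ssl.create_default_context(purpose, cafile=ca_file)
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid certificate
    return ctx
```

On the server side the context would wrap the listening socket; on the client side, wrapping the socket with a `server_hostname` argument additionally performs the DNS hostname check of step 3.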
In order to use SSL with mutual certificate authentication, a PKI is required.
The PKI is a software platform that generates public/private key pairs and
certificates and associates them with identities by means of a certificate
authority (CA).
The proposed security architecture uses a public key infrastructure
(PKI) to generate certificates and propagate trust during manufacturing execution
(runtime) for intelligent products communicating with shop floor resources. The
sequence diagram for trust propagation is presented in Fig. 7.
The process is divided into two separate phases. The pre-configuration phase
consists of generating the certificates for the shop floor resources that are static in
nature, i.e. present during the entire manufacturing process.
Examples of such resources are robots equipped with Web Service capabilities, the
shop floor scheduler or the PLC for the conveyor service. These certificates are valid
for a long period of time, as they belong to static resources. During this phase,
trust is configured for the static shop floor devices by importing the CA public
key into the local trust-store.
The run-time phase refers to the products equipped with local web service
capabilities. When an intelligent product enters the production line, the embedded
device on the product pallet is initialized.
The main steps are: creation of a certificate request by the IP, generation of the
certificate by the CA, creation of the certificate store on the intelligent product,
product execution, and certificate revocation when the product is completed. The
following sections exemplify these steps using OpenSSL as the PKI and
JADE agents for the intelligent product implementation.
The PKI implementation proposed for securing service oriented manufacturing
systems is based on OpenSSL [33]. OpenSSL is an open-source toolkit implementing the
Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols,
as well as a full-strength general-purpose cryptography library. The internal JADE
agent architecture, when integrated with OpenSSL, is illustrated in Fig. 8.
The SOA-enabled device must support both inbound and outbound SOAP
interactions. For run-time encryption, the Java SSL implementation is used.
However, for the initial configuration and for the CA agent, the native OpenSSL
libraries are used, accessed through a JNI wrapper library.
Step 1 Certificate request: consists of calling the X509_REQ_new function to
create an X.509 certificate request. An example of the subject data passed to the method
is:
{
{"stateOrProvinceName", "RO"},
{"localityName", "Bucharest"},
{"organizationName", "University Politehnica Bucharest"},
{"organizationalUnitName", "CIMR"},
{"commonName", <DNS name of the device based on RFID tag>},
};
The commonName must match the DNS name of the embedded device in
order to pass host name verification during the SSL handshake. To uniquely identify
the product, the host name is constructed using the individual RFID tag from the
product pallet. Once the certificate request is completed, it is sent to the CA agent.
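A possible construction of such a host name is sketched below; the `ip-<tag>` prefix and the domain are assumptions made for illustration, since the text does not fix an exact naming scheme:

```python
def device_hostname(rfid_tag, domain="shopfloor.example"):
    """Derive a DNS-safe host name for the pallet's embedded device from its
    RFID tag; the result is used as the certificate commonName."""
    # Keep only alphanumeric characters so the tag forms a valid DNS label
    tag = "".join(c for c in rfid_tag.lower() if c.isalnum())
    return f"ip-{tag}.{domain}"
```

Because the RFID tag is unique per pallet, the derived name both identifies the product and satisfies the hostname check performed in the handshake.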
Step 2 Certificate signing: the CA agent receives the certificate request and generates
the X.509 certificate that will be used by the IP. The certificate is installed in the
local key-store on the product, and the CA agent public key is added to the
trust-store.
Step 3 Execution: at this stage the agent is able to securely communicate and
authenticate over SSL with mutual authentication with other
shop floor devices, based on the established CA trust. The sequence of messages
exchanged between the client and the server (shop floor actors) during the runtime
SSL handshake is:
Client 1: Client sends a CLIENT_HELLO command to the server, including:
– The SSL and TLS version supported by the client
– A set of ciphers supported by the client in order of preference
– A set of data compression methods supported by the client
– The session ID, which is 0 in case of a new SSL session
– A segment of random data generated by the client for key generation
Server 1: Server sends back a SERVER_HELLO command to the client, which
includes:
– The SSL or TLS version that will be used for the SSL session
– The cipher selected by the server that will be used for the SSL session
– The data compression method selected by the server
– The generated session ID for the SSL session
– A segment of random data generated by the server for key generation
Server 2: Server sends the CERTIFICATE command:
– This command includes the server certificate
– The client will validate the certificate against the truststore, check the hostname
against the certificate CN and check the Certificate Revocation List from the CA
Server 3: Server sends the CERTIFICATE_REQUEST command to request the
client certificate. The command contains the certificate authorities (CAs) that the
server trusts, allowing the client to send the corresponding certificate.
Client 2: The client sends the CERTIFICATE command, sending its certificate to
the server. The server will validate the client certificate against the truststore, check
the hostname against the client certificate CN and the Certificate Revocation List
from the CA.
Client 3: The client sends the CERTIFICATE_VERIFY command to the server,
which contains a digest of the SSL handshake messages signed with the client's
private key. The server verifies this signature using the client's public key from
the client certificate; if the verification succeeds, the client is authenticated.
Server 4: Server sends the SERVER_DONE command, indicating that the server
has completed the SSL handshake.
Client 4: The client sends the CLIENT_KEY_EXCHANGE command:
This command contains the premaster secret that was created by the client and
encrypted using the server's public key. Both the client and the server generate the
symmetric encryption keys on their own using the premaster secret and the random
data exchanged in the SERVER_HELLO and CLIENT_HELLO commands.
Client 5: The client sends the CHANGE_CIPHER_SPEC command, indicating that
the contents of subsequent SSL record data sent by the client during the SSL
session will be encrypted with the selected cipher.
Client 6: The client sends the FINISHED command, including a digest of all the
SSL handshake commands that have flowed between the client and server. This
command is used to confirm that none of the commands exchanged so far were
tampered with.
Server 5: The server sends the CHANGE_CIPHER_SPEC command, indicating
that subsequent SSL messages will be encrypted with the selected cipher.
Server 6: The server sends the FINISHED command, including a digest of all the
SSL handshake commands that have flowed between the server and the client. This
command is used to confirm that none of the commands exchanged so far were
tampered with. From this stage on, the messages exchanged are encrypted and
secure.
Step 4 Revocation: the last stage, after the product execution is completed. As the
product will no longer need to communicate with other shop floor devices, its
certificate must be revoked. This is accomplished by sending a certificate revocation
request to the CA agent. The CA agent publishes the revocation in the CRL,
so that all future SSL handshakes using this certificate will be rejected.
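The revocation bookkeeping on the CA agent side can be sketched as follows. This is a deliberately minimal model; a real CRL is a signed X.509 structure rather than an in-memory set:

```python
class CAAgent:
    """Minimal model of the CA agent's certificate revocation list (CRL)."""

    def __init__(self):
        self._crl = set()   # serial numbers of revoked certificates

    def revoke(self, serial):
        """Publish a revocation when a finished product leaves the line."""
        self._crl.add(serial)

    def is_revoked(self, serial):
        """Checked by shop floor actors before accepting a peer certificate."""
        return serial in self._crl
```

Shop floor actors consult this list during every handshake, so a completed product's certificate cannot be replayed by an attacker who later obtains the pallet device.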
The experimental evaluation of the security provided is presented in Sect. 5.
A simulation environment employing JADE agents for the IP and shop floor resources
was created to evaluate the proposed PKI solution and analyse the network trace
(Fig. 9).
The simulation environment consists of two agent instances communicating over
a WEP-encrypted Wi-Fi network. The interaction between the two agents is
considered using the described PKI setup, assuming an attacker that has already been
able to break the WEP encryption and gain access to the network. The attacker uses
AirPcapNG to intercept the Wi-Fi traffic and analyses it using the Wireshark tool.
Figure 9 presents the network traffic obtained by the attacker in this scenario. The
network analysis shows the certificate exchange with both server and client
authentication at the SSL/TLS layer. The last packet in the trace shows the
beginning of the encrypted application data conversation between the two agents.
Looking closer at SERVER_HELLO, the server certificate, together with the CA
certificate, is sent to the client, as highlighted in Fig. 10.
By implementing the PKI infrastructure for SSL communication presented in
this paper, some important security challenges are mitigated. The encryption in SSL
ensures that unauthorized access to information is prevented
even if the Wi-Fi packets are captured by a potential attacker. The encryption also
prevents access to proprietary information that might be stored on and communicated
between intelligent products, the shop floor scheduler and resources. The possibility of
Denial of Service attacks is not completely eliminated by this approach; however,
such attacks are limited to the network layer thanks to the SSL implementation.
Impersonation is also prevented by the mutual SSL authentication performed during
the initial handshake. One limitation at this point is the requirement for Steps 1 and 2
of the initialization process to be performed over a wired network, in order to prevent
certificate spoofing during private key transmission.
References
1. Wong, C.Y., McFarlane, D., Zaharudin, A., Agarwal, V.: The intelligent product driven
supply chain. In: IEEE International Conference on Systems, Man and Cybernetics,
vols. 4, 6 (2002)
2. McFarlane, D., Sarma, S., Chirn, J.L., Ashton, K.: The intelligent product in manufacturing
control and management. In: 15th Triennial World Congress, Barcelona (2002)
3. Morariu, C., Borangiu, T.: Manufacturing integration framework: a SOA perspective on
manufacturing. In: Proceedings of Information Control Problems in Manufacturing
(INCOM'12), IFAC Papers OnLine, vol. 14, no. 1, pp. 31–38. Elsevier (2012)
4. Främling, K., Harrison, M., Brusey, J., Petrow, J.: Requirements on unique identifiers for
managing product lifecycle information: comparison of alternative approaches. Int. J. Comput.
Integr. Manuf. 20(7), 715–726 (2007)
5. Zhang, L.-J., Nianjun, Z., Chee, Y.-M., Jalaldeen, A., Ponnalagu, K., Arsanjani, A.,
Bernardini, F.: SOMA-ME: a platform for the model-driven design of SOA solutions. IBM
Syst. J. 47(3), 397–413 (2008)
6. Meyer, G.G., Främling, K., Holmström, J.: Intelligent products: a survey. Comput. Ind. 60(3),
137–148 (2009). Elsevier
7. Kiritsis, D., Bufardi, A., Xirouchakis, P.: Research issues on product lifecycle management
and information tracking using smart embedded systems. Adv. Eng. Inform. 17(3), 189–202
(2003)
8. Jun, H.-B., Shin, J.-H., Kim, Y.-S., Kiritsis, D., Xirouchakis, P.: A framework for RFID
applications in product lifecycle management. Int. J. Comput. Integr. Manuf. 22(7), 595–615
(2009)
9. Leitão, P.: Agent-based distributed manufacturing control: a state-of-the-art survey. Eng.
Appl. Artif. Intell. 22(7), 979–991 (2009)
10. Ventä, O.: Intelligent products and systems: Technology theme-final report. VTT Technical
Research Centre of Finland (2007)
11. Potter, B.: Wireless hotspots: Petri dish of wireless security. Commun. ACM 49(6), 50–56
(2006)
12. Berghel, H., Uecker, J.: Wireless infidelity II: air jacking. Commun. ACM 47(12), 15–20
(2004)
13. Boncella, R.J.: Wireless security: an overview. Commun. Assoc. Info. Syst. 9(15), 269–282
(2002)
14. Von Solms, B., Marais, E.: From secure wired networks to secure wireless networks: what are
the extra risks? Comput. Secur. 23(8), 633–637 (2004)
15. Kankanhalli, A., Teo, H.-H., Tan, B., Wei, K.-K.: An integrative study of information
systems security effectiveness. Int. J. Info. Manage. 23(2), 139–154 (2003)
16. Whitman, M.E.: Enemy at the gate: threats to information security. Commun. ACM 46(8),
91–95 (2003)
17. Mercuri, R.: Analyzing security costs. Commun. ACM 46(6), 15–18 (2003)
18. Wang, Y., Chuang, L., Quan-Lin, L., Fang, Y.: A queuing analysis for the denial of service
(DoS) attacks in computer networks. Comput. Netw. 51(12), 3564–3573 (2007)
19. Warren, M., Hutchinson, W.: Cyber-attacks against supply chain management systems. Int.
J. Phys. Distrib. Logistics Manage. 30(7/8), 710–716 (2000)
20. Reaves, B., Morris, T.: Discovery, infiltration, and denial of service in a process control system
wireless network. In: eCrime Researchers Summit, IEEE, pp. 1–9 (2009)
21. Peine, H.: Security concepts and implementation in the Ara mobile agent system. In: 7th IEEE
International Workshops on Enabling Technologies: Infrastructure for Collaborative
Enterprises (WET ICE'98 Proceedings), pp. 236–242 (1998)
22. Nilsson, D., Larson, U., Jonsson, E.: Creating a secure infrastructure for wireless diagnostics
and software updates in vehicles. In: Computer Safety, Reliability, and Security, pp. 207–220.
Springer (2008)
23. Boland, H., Mousavi, H.: Security issues of the IEEE 802.11b wireless LAN. In: Canadian
Conference on Electrical and Computer Engineering, IEEE, vol. 1, pp. 333–336 (2004)
24. Reddy, S., Vinjosh, K., Sai, R., Rijutha, K., Ali, S.M., Reddy, C.P.: Wireless hacking: a WiFi
hack by cracking WEP. In: 2nd International Conference on Education Technology and
Computer (ICETC), IEEE, vol. 1, p. V1-189 (2010)
25. Berghel, H., Uecker, J.: WiFi attack vectors. Comm. ACM 48(8), 21–28 (2005)
26. Aime, M.D., Calandriello, G., Lioy, A.: Dependability in wireless networks: can we rely on
WiFi?, Security & Privacy, IEEE 5(1), 23–29 (2007)
27. Wang, L., Orban, P., Cunningham, A., Lang, S.: Remote real-time CNC machining for
web-based manufacturing. Robot. CIM 20(6), 563–571 (2004)
28. Wang, L., Shen, W., Lang, S.: Wise-ShopFloor: a web-based and sensor-driven shop floor
environment. In: 7th International Conference on Computer Supported Cooperative
Work in Design, IEEE, pp. 413–418 (2002)
29. Sauter, T.: The continuing evolution of integration in manufacturing automation. Ind.
Electron. Mag. IEEE 1(1), 10–19 (2007)
30. De Souza, L. et al.: Socrades: A web service based shop floor integration infrastructure. The
Internet of Things, pp. 50–67 (2008)
31. Shen, W., Lang, S.Y.T., Wang, L.: iShopFloor: an Internet-enabled agent-based intelligent
shop floor, Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Trans. on
35(3), 371–381 (2005)
32. Shin, J., et al.: CORBA-based integration framework for distributed shop floor control.
Comput. Ind. Eng. 45(3), 457–474 (2003)
33. OpenSSL: http://openssl.org/. Accessed 23 April 2015
Part VI
Cloud and Computing-Oriented
Manufacturing
Technological Theory of Cloud
Manufacturing
Abstract Over the past decade, a growing number of concepts and architectural
shifts have appeared, such as the Internet of Things, Industry 4.0, Big Data, 3D printing,
etc. Such concepts are reshaping traditional manufacturing models, which are becoming
increasingly network-, service- and intelligent-manufacturing-oriented. It sometimes
becomes difficult to have a clear vision of how all those concepts are interwoven and
what benefits they bring to the global picture (from either a service or a business
perspective). This paper traces the evolution of the manufacturing paradigms,
highlighting the recent shift towards Cloud Manufacturing (CMfg), along with a
taxonomy of the concepts and technologies underlying CMfg.
Keywords Cloud manufacturing · Internet of Things · Direct digital manufacturing
S. Kubler (&)
Interdisciplinary Centre for Security Reliability and Trust,
University of Luxembourg, 2721 Luxembourg, Luxembourg
e-mail: [email protected]
J. Holmström
Department of Civil and Structural Engineering, School of Engineering,
Aalto University, P.O. Box 11000, 00076 Aalto, Finland
e-mail: jan.holmstrom@aalto.fi
K. Främling
Department of Computer Science, School of Science, Aalto University,
P.O. Box 15500, 00076 Aalto, Finland
e-mail: kary.framling@aalto.fi
P. Turkama
Center for Knowledge and Innovation Research, School of Business,
Aalto University, P.O. Box 15500, 00076 Aalto, Finland
e-mail: petra.turkama@aalto.fi
1 Introduction
Manufacturing paradigms have evolved over time, driven by societal trends, new
information and communication technologies (ICT), and new theories. The
manufacturing processes of the future need to be highly flexible and dynamic in
order to meet customer demands, e.g. in large-series production or mass
customization. Manufacturing companies are not only part of sequential, long-term
supply chains, but also of extensive networks that require agile collaboration
between partners. Companies involved in such networks must be able to design,
configure, enact, and monitor a large number of processes and products, each
representing a different order and supply chain instance. One way of achieving this goal
is to port essential concepts from the field of Cloud Computing to Manufacturing,
such as the commonly applied SPI model: SaaS (Software-as-a-Service), PaaS
(Platform-as-a-Service), IaaS (Infrastructure-as-a-Service) [1]. In the literature, this
concept is referred to as “Cloud manufacturing” (CMfg), which has the potential to
move from production-oriented manufacturing processes to customer- and
service-oriented manufacturing process networks [2], e.g. by modelling single
manufacturing assets as services in a similar vein as SaaS or PaaS solutions.
While organizations will be looking to make use of CMfg for creating radical
change in manufacturing practices, this will not be an easy transition for many.
There will be architectural issues as well as structural considerations to overcome.
The main reason for this is that CMfg derives not only from cloud computing, but
also from related concepts and technologies such as the Internet of Things—IoT
(core enabling technology for goods tracking and product-centric control) [3, 4], 3D
modelling and printing (core enabling technology for digital manufacturing) [5, 6],
and so on. Furthermore, some of those concepts/technologies have not yet reached
full maturity, such as the IoT, whose number of connected devices is forecast to grow
from 9.1 billion (2013) to 28.1 billion (2020) according to IDC. Similarly,
while 3D modelling is now conventional even for small companies, 3D printing is
still at the Peak of Inflated Expectations of the Gartner Hype Cycle, which may
(potentially) be followed by a drop into the Trough of Disillusionment [7]. Within
this context, the success of CMfg is partly dependent upon the evolution of all these
concepts, although it is often difficult to understand how they are interwoven and
how important one is to the other. The present paper helps to better understand such
interwoven relationships, the current trends and challenges (e.g., shift from
closed-industry solutions to open infrastructures and marketplaces).
To this end, Sect. 2 shows the evolution of the manufacturing paradigms through
the ages. Section 3 introduces a CMfg taxonomy and discusses the key challenges
and opportunities of the underlying concepts; the conclusions follow.
Over the last two centuries, the manufacturing industry has evolved through several
paradigms, from Craft Production to CMfg [8, 9]. Craft Production, as the first
paradigm, responded to a specific customer order based on a model allowing high
product variety and flexibility, where highly skilled craftsmen treated each product
as unique. However, such a model was time- and money-consuming, as depicted
in Fig. 1. The history of production systems truly began with the introduction of
standardized parts for arms, also known as the "American System" (see Fig. 1).
Following the American System model, Mass Production enabled the making of
products at lower cost through large-scale manufacturing. On the downside, the
possible variety of products was very limited, since the model is based on resources
performing the same task again and again, leading to significant improvements in
speed and reductions in assembly costs (cf. Fig. 1). Symbols of mass production were
Henry Ford's moving assembly line and his statement: "Any customer can have a
car painted any color that he wants so long as it is black".
Lean Manufacturing emerged after World War II as a necessity due to the limited
resources in Japan. The Lean Manufacturing paradigm is a multi-dimensional
approach that encompasses a wide variety of management practices, including
just-in-time, quality systems, work teams, cellular manufacturing, etc., in an
integrated system [10] that eliminates "waste" at all levels. It is worth noting that the
lean management philosophy is still an important part of all modern production
systems.
[Fig. 1 Evolution of the manufacturing paradigms along the Volume, Variety and Cost axes: Craft Production (1850), American System (1913), Mass Production (1913), Lean Manufacturing (1955), Mass Customization (1980), Cloud Manufacturing (2000)]
The fourth paradigm, Mass Customization, came up in the late 1980s when
customer demand for product variety increased. The underlying model combines
business practices from Mass Production and Craft Production, moving towards a
customer-centric model. This model requires the mastery of a number of
technologies and theories to make manufacturing systems intelligent, faster, more
flexible, and interoperable. Within this context, a significant body of research
emerged, particularly within the IMS (Intelligent Manufacturing System) community
with worldwide membership, an industry-led, global, collaborative
research and development program established to develop the next generation of
manufacturing and processing technologies. The IMS philosophy adopts
heterarchical and collaborative control as its information system architecture [11–13]. The
behaviour of the entire manufacturing system therefore becomes collaborative,
determined by many interacting subsystems that may have their own independent
interests, values, and modes of operation.
It is clear from Fig. 1 that the manufacturing paradigms succeeded one another,
always seeking smaller volumes and lower costs while raising product variety. The
fifth and most recent paradigm, CMfg, moves this vision a step further, since it provides
service-oriented networked product development models in which service
consumers are enabled to configure, select, and use customized product realization
resources and services, ranging from computer-aided engineering software to
reconfigurable manufacturing systems [14, 15]. Several applications relying on
Cloud infrastructure have been reported in recent years, e.g. used for hosting and
exposing services related to manufacturing such as machine availability monitoring,
collaborative and adaptive process planning, online tool-path programming based
on real-time machine monitoring, collaborative design, etc. [16, 17]. Similarly, in
the European sphere, this technology has recently attracted a lot of attention, e.g.
with the Future Internet Public Private Partnership (FI-PPP),1 OpenStack,
OpenIoT,2 or Open Platform 3.0 communities.3
The next section helps to understand what concepts and technologies are
underlying CMfg, how they are interwoven together, how important one is to the
other, and what challenges remain ahead.
1 http://www.fi-ppp.eu.
2 https://github.com/OpenIotOrg/openiot/wiki/OpenIoT-Architecture.
3 http://www.opengroup.org/subjectareas/platform3.0.
In CMfg, various manufacturing resources and abilities can be intelligently sensed and
connected into a wider Internet, and automatically managed and controlled using
IoT and/or Cloud solutions, as emphasized in the taxonomy given in
Fig. 2. In this taxonomy, one can see that the so-called IoT is a core enabler, if not
the cornerstone, of product-centric control and increasing servitization (i.e.,
making explicit the role of the product as the coordinating entity in the delivery of
customized products and services) [18]. Product-centric control methods are, in
turn, required and of the utmost importance for developing fast and cost-effective
Direct Digital Manufacturing (DDM) solutions [6], also known as 'Rapid
Manufacturing'. One example of how CMfg platforms combine all those concepts
might be the following:
A tractor (or backend system) detects – based on sensor data fusion – that the pump is
defective. The after-sales service system is immediately notified and turns to the services of
the cloud manufacturing community to (i) access product-related data and models (e.g.,
CAD models) and then (ii) identify an optimal manufacturer for the broken pump parts. The
digital model is sent to the community member who can produce the custom part via 3D
printing. The closest (or cheapest) 3D printer service provider(s) can be discovered (e.g.,
via IoT discovery mechanisms), so that the pump part can be produced to order and shipped
to the farmer.
Sections 3.1–3.4 discuss in greater detail all the taxonomy concepts and
interdependencies, along with the challenges that still need to be addressed.
[Fig. 2 CMfg taxonomy: Cloud Manufacturing (CMfg) builds on Product-Centric Control, DDM and Cloud Computing, which in turn rely on the IoT, 3D Modeling and 3D Printing]
The resources offered by cloud computing are primarily computational (e.g., server, storage,
network, software), while in CMfg, all manufacturing resources and abilities involved in the
whole manufacturing life cycle are intended to be provided to the user through different
service models [2]. The manufacturing resources and abilities are virtualized and
encapsulated into different manufacturing cloud services; product stakeholders can
search for and invoke the qualified services according to their needs, and assemble
them into a virtual manufacturing environment or solution to complete their
manufacturing task [15].
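The search-invoke-assemble pattern described above might be sketched as follows (illustrative Python; the registry API, provider names and costs are assumptions, not an existing CMfg platform):

```python
class ManufacturingCloud:
    """Minimal registry of virtualized manufacturing services (a sketch)."""
    def __init__(self):
        self.services = []  # each entry: capability, provider, cost

    def register(self, provider, capability, cost):
        self.services.append({"provider": provider,
                              "capability": capability, "cost": cost})

    def search(self, capability):
        """Return the qualified services for one required capability."""
        return [s for s in self.services if s["capability"] == capability]

    def compose(self, required):
        """Assemble the cheapest qualified service per step into a
        virtual manufacturing solution for the whole task."""
        return [min(self.search(c), key=lambda s: s["cost"]) for c in required]

cloud = ManufacturingCloud()
cloud.register("FactoryA", "milling", 120)
cloud.register("FactoryB", "milling", 90)
cloud.register("FactoryC", "assembly", 40)
solution = cloud.compose(["milling", "assembly"])
print([s["provider"] for s in solution])  # ['FactoryB', 'FactoryC']
```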
As an end consumer looking at the cloud space, there are two major types of
clouds to choose from: open source clouds (e.g., Citrix, OpenIoT) and closed
clouds (e.g., Amazon, Azure, Google). One of the key challenges, especially from
the EU perspective, is to foster cloud manufacturing based on existing open
standards and components, so as to facilitate an as-vendor-independent-as-possible
CMfg platform. Such a platform, supporting open engineering workflows, should
lead to radical transformations in business dynamics in the industry (e.g., new open
standard-based value creation) [19, 20]. This implies creating cloud manufacturing
ecosystem(s) built on open IoT messaging standards with the capability to achieve
“Systems-of-Systems” integration, as will be discussed in the next section.
The growth of the IoT creates a widespread connection of “Things”, which can lead
to large amounts of data to be stored, processed and accessed. Cloud computing is
one alternative for handling those large amounts of data. To a certain extent, the
cloud effectively serves as the brain that improves decision-making and optimization
for IoT-connected objects and interactions [21], although some of those decisions
can be made locally (e.g., by the product itself) [12, 13]. However, as stated
previously, new challenges arise when IoT meets Cloud, e.g. creating novel network
architectures that seamlessly integrate smart connected objects as well as
distinct cloud service providers (as illustrated by the dashed arrows in Fig. 3). IoT
standards, e.g. for RESTful APIs and the associated data formats, will be key to
importing/exporting product-related data and models inside CMfg ecosystems [22].
Several research initiatives have addressed this vision, such as—in the EU sphere
—the IERC or FI-PPP clusters (see e.g. FI-WARE, OpenIoT), as well as the Open
Platform 3.0 (an initiative of The Open Group). In this respect, our research claims that
the recent IoT standards published by The Open Group, notably O-MI and O-DF
[3], have the potential to fulfill the “Systems-of-Systems” vision discussed above.
O-MI provides a generic Open API for any RESTful IoT information system, and
O-DF is a generic content description model for Objects in the IoT, which can be
extended with more specific vocabularies (e.g., by using or extending domain-specific
vocabularies).
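To illustrate, a heavily simplified O-DF-style payload (namespaces and the O-MI envelope omitted; the object id and item names are hypothetical) could be built as:

```python
import xml.etree.ElementTree as ET

def odf_payload(object_id, items):
    """Build a simplified O-DF-style hierarchy (namespaces omitted):
    Objects > Object > id + InfoItem(name) > value."""
    objects = ET.Element("Objects")
    obj = ET.SubElement(objects, "Object")
    ET.SubElement(obj, "id").text = object_id
    for name, value in items.items():
        item = ET.SubElement(obj, "InfoItem", {"name": name})
        ET.SubElement(item, "value").text = str(value)
    return ET.tostring(objects, encoding="unicode")

xml = odf_payload("Tractor-42/Pump",
                  {"status": "defective", "pressure_bar": 0.3})
print(xml)
```

A real O-MI read or write request would wrap such a payload in the standard's messaging envelope.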
Technological Theory of Cloud Manufacturing 273
(Fig. 3 Legend: today’s IoT, where data is collected into vertical silos (pushed to vertical servers), versus open cloud computing based on open and standardized IoT solutions, including private clouds)
In a true IoT, each intelligent product and equipment is uniquely identifiable [24],
making it possible to link control instructions with a given product-instance. The
basic principle is that the product itself, while it is in the process of being produced
and delivered, directly requests processing, assembly and materials handling from
available providers, therefore simplifying materials handling and control, cus-
tomization, and information sharing in the supply chain. This concept is referred to
as “Product Centric Control” [25], which is required and of the utmost importance
from a CMfg perspective since it allows for developing fast and cost effective DDM
solutions, as will be discussed in the next section. Indeed, operations and decision
making processes that are triggered and controlled by the product itself result in
higher quality and efficiency than standard operations and external control. The
generative mechanism is essentially the ability of the product to (i) monitor its own
status; (ii) notify the user when something goes wrong (e.g., the defective pump);
(iii) help the user to find and access the necessary product-related models and
information from the manufacturer community involved in the CMfg ecosystem;
and (iv) ease the synchronization of product-related data and models that might be
generated in distinct organizations, throughout the product lifecycle [12, 26].
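Capabilities (i)-(iii) can be sketched as follows (minimal Python; the sensor model, registry and notifier are illustrative stand-ins, and capability (iv), cross-organization data synchronization, is omitted):

```python
class IntelligentProduct:
    """Product-centric control sketch: the registry and notifier are
    hypothetical stand-ins for CMfg community services."""
    def __init__(self, product_id, model_registry, notifier):
        self.product_id = product_id
        self.registry = model_registry  # part -> product-related models
        self.notifier = notifier        # collects user notifications
        self.status = "ok"

    def monitor(self, sensor_readings):
        # (i) fuse sensor readings (value, limit) into a health status
        if any(value > limit for value, limit in sensor_readings):
            self.status = "defective"
        return self.status

    def notify_user(self):
        # (ii) alert the user when something goes wrong
        if self.status != "ok":
            self.notifier.append(f"{self.product_id}: {self.status}")

    def find_models(self, part):
        # (iii) locate product-related models in the CMfg community
        return self.registry.get(part, [])

alerts = []
pump = IntelligentProduct("Tractor-42/Pump",
                          {"impeller": ["cad/impeller.step"]}, alerts)
pump.monitor([(80, 100), (130, 100)])  # second reading exceeds its limit
pump.notify_user()
print(alerts)  # ['Tractor-42/Pump: defective']
```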
274 S. Kubler et al.
Recently, the range of DDM4 technologies has increased significantly with the
advancement of 3D printing [6], opening up a novel range of applications considered
impossible, infeasible or uneconomic in the past. DDM technologies include both
novel 3D printing and the more conventional numerically controlled machines
driven by 3D models (as emphasized in Fig. 2). The need for tooling and setup is
reduced by producing parts directly from a digital model.
The implication of the development of DDM technologies is that, in an increasing
number of situations, it is possible to produce parts directly to demand, without
tooling, setup and consideration of economies of scale [27]. Time-to-market,
freedom of design, freedom to redesign and flexible manufacturing plans are only
the beginning. These advantages represent just the tip of the iceberg since DDM is a
relatively new manufacturing practice.
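The economies-of-scale argument can be made concrete with a simple break-even calculation (all figures are illustrative assumptions, not data from the cited studies):

```python
def break_even_quantity(tooling_cost, unit_conventional, unit_ddm):
    """Smallest batch size at which conventional production beats DDM:
    conventional total = tooling + n * unit_conventional,
    DDM total          = n * unit_ddm (no tooling, no setup)."""
    if unit_ddm <= unit_conventional:
        return None  # DDM is cheaper at any quantity
    n = 1
    while tooling_cost + n * unit_conventional > n * unit_ddm:
        n += 1
    return n

# Assumed figures: 20,000 tooling cost, 5/part conventional, 45/part DDM
print(break_even_quantity(20_000, 5.0, 45.0))  # 500
```

Below the break-even quantity, producing parts to demand via DDM avoids the tooling investment entirely.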
Given this, CMfg is clearly an applicable business model for 3D printing.
Because additive manufacturing is a digital technique, it is possible to manufacture
products close to the location where they will be used, thus reducing transportation
(CO2 emissions) and the need for large storage areas, while enabling a wide range of
customers, suppliers and manufacturers to take part in the development of new
products and services based on an open and standardized CMfg platform.
4 Conclusion
In industry, cloud manufacturing (CMfg) platforms are rarely applied today because
of considerable concerns about security and return on investment (ROI), the latter
due mainly to the considerable effort needed to implement interoperability.
Furthermore, the maturity of the platforms is often limited to prototype status.
However, some industry settings have expressed interest in the concept, such as
associations of SMEs that intend to jointly provide customisable products, or
industry clusters that would like to make their members’ abilities easily available
(searchable and usable) to other members.
Within this context, the emergence of the Internet of Things, Cloud computing,
3D printing, product-centric control techniques, etc., marks a new turning point for
CMfg—manufacturing resources and organization assets become easier to
remotely track, monitor, access, book and use (e.g., for production),
when and as needed. However, the proliferation of these concepts makes it difficult
to understand how they are interwoven and what benefits they bring to the global
picture (either from a service or a business perspective). This paper contributes to
the discussion about this global picture with the introduction of a CMfg taxonomy,
while discussing current trends and challenges that still face CMfg (e.g., shift from
4 DDM is the usage of additive manufacturing for the production of end-use components.
References
1. Mell, P., Grance, T.: The NIST definition of cloud computing. Technical report, National
Institute of Standards and Technology (2011)
2. Li, B.H., Zhang, L., Wang, S.L.: Cloud manufacturing: a new service-oriented networked
manufacturing model. Comput. Integr. Manuf. Syst. 16(1), 1–16 (2010)
3. Främling, K., Kubler, S., Buda, A.: Universal messaging standards for the IoT from a lifecycle
management perspective. IEEE Internet Things J. 1(4), 319–327 (2014)
4. Cai, H., Xu, L.D., Xu, B., Xie, C., Qin, S., Jiang, L.: IoT-based configurable information
service platform for product lifecycle management. IEEE Trans. Industr. Inf. 10(2),
1558–1567 (2014)
5. Berman, B.: 3-d printing: the new industrial revolution. Bus. Horiz. 55(2), 155–162 (2012)
6. Khajavi, S.H., Partanen, J., Holmström, J.: Additive manufacturing in the spare parts supply
chain. Comput. Ind. 65(1), 50–63 (2014)
7. Kietzmann, J., Pitt, L., Berthon, P.: Disruptions, decisions, and destinations: Enter the age of
3-d printing and additive manufacturing. Bus. Horiz. 58(2), 209–215 (2015)
8. Clarke, C.: Automotive Production Systems and Standardisation: From Ford to the Case of
Mercedes-Benz. Springer Science & Business Media (2005)
9. Herrmann, C., Schmidt, C., Kurle, D., Blume, S., Thiede, S.: Sustainability in manufacturing
and factories of the future. Int. J. Precis. Eng. Manuf. Green Technol. 1(4), 283–292 (2014)
10. Shah, R., Ward, P.T.: Lean manufacturing: context, practice bundles, and performance.
J. Oper. Manage. 21(2), 129–149 (2003)
11. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998)
12. Meyer, G., Främling, K., Holmström, J.: Intelligent products: a survey. Comput. Ind. 60(3),
137–148 (2009)
13. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Product intelligence in industrial
control: theory and practice. Ann. Rev. Control 37(1), 69–88 (2013)
14. Mahdjoub, M., Monticolo, D., Gomes, S., Sagot, J.C.: A collaborative design for usability
approach supported by virtual reality and a multi-agent system embedded in a PLM
environment. Comput. Aided Des. 42(5), 402–413 (2010)
15. Wu, D., Rosen, D.W., Wang, L., Schaefer, D.: Cloud-based design and manufacturing: a new
paradigm in digital manufacturing and design innovation. Comput. Aided Des. 59, 1–14
(2014)
16. Wang, L.: Machine availability monitoring and machining process planning towards cloud
manufacturing. CIRP J. Manufact. Sci. Technol. 6(4), 263–273 (2013)
17. Morariu, O., Morariu, C., Borangiu, T., Raileanu, S.: Smart resource allocations for highly
adaptive private cloud systems. J. Control Eng. Appl. Inform. 16(3), 23–34 (2014)
18. Kärkkäinen, M., Ala-Risku, T., Främling, K.: The product centric approach: a solution to
supply network information management problems? Comput. Ind. 52(2), 147–159 (2003)
19. Vermesan, O., Friess, P.: Internet of Things—From Research and Innovation to Market
Deployment. River Publishers (2014)
20. Dini, P., Lombardo, G., Mansell, R., Razavi, A., Moschoyiannis, S., Krause, P., Nicolai, A.L.
R.: Beyond interoperability to digital ecosystems: regional innovation and socio-economic
development led by SMEs. Int. J. Technol. Learn. 1, 410–426 (2008)
21. Wu, D., Thames, J.L., Rosen, D.W., Schaefer, D.: Enhancing the product realization process
with cloud-based design and manufacturing systems. J. Comput. Inf. Sci. Eng. 13(4) (2013)
22. Mezgár, I., Rauschecker, U.: The challenge of networked enterprises for cloud computing
interoperability. Comput. Ind. 65(4), 657–674 (2014)
23. Främling, K., Holmström, J., Loukkola, J., Nyman, J., Kaustell, A.: Sustainable PLM through
intelligent products. Eng. Appl. Artif. Intell. 26(2), 789–799 (2013)
24. Ashton, K.: Internet of Things—MIT, embedded technology and the next internet revolution.
In: Baltic Conventions, The Commonwealth Conference and Events Centre, London (2000)
25. Kärkkäinen, M., Holmström, J., Främling, K., Artto, K.: Intelligent products–a step towards a
more effective project delivery chain. Comput. Ind. 50(2), 141–151 (2003)
26. Kubler, S., Främling, K., Derigent, W.: P2P data synchronization for product lifecycle
management. Comput. Ind. 66, 82–98 (2015)
27. Czajkiewicz, Z.: Direct digital manufacturing—new product development and production
technology. Econ. Organ. Enterp. 2(2), 29–37 (2008)
Integrated Scheduling for Make-to-Order
Multi-factory Manufacturing:
An Agent-Based Cloud-Assisted Approach
Iman Badr
I. Badr (✉)
Science Faculty, Helwan University, Cairo, Egypt
e-mail: [email protected]
1 Introduction
2 Literature Review
Recently, some research work has been proposed to tackle the problem of dis-
tributed product management in general and scheduling in particular. In [3, 4],
cloud-assisted platforms for managing distributed manufacturing and employing a
service-oriented paradigm are presented. In [5, 6], RFID technology is employed to
deal with the dynamic production scheduling problem by capturing and analysing
real-time data. The authors in [5] apply a Monte Carlo simulation to generate a
production schedule based on the captured data. In [6], the main scheduling deci-
sions are taken by the human managers at the different production stages.
In [7], the scheduling of production logistics is focused on and a system based
on Internet of Things (IoT) and cloud computing for solving this problem is pre-
sented. On the other hand, Zhang et al. [8] focus on the production scheduling
problem and propose a multi-agent based architecture for a ubiquitous manufac-
turing environment. A method based on Genetic Algorithms is employed to solve
the machine scheduling problem. Sun et al. [9] study the integrated production
scheduling and distribution problem for multiple factories with both inland
transportation and overseas shipment. The proposed solution is based on a two-level
fuzzy guided genetic algorithm.
Integrated Scheduling for Make-to-Order Multi-factory … 279
3 Problem Analysis
The scheduling problem studied in this research is concerned with a make-to-order
manufacturing environment encompassing distributed factories and multiple
warehouses, material suppliers and transportation companies. The integration of the
distributed facilities takes place via a cloud that provides a ubiquitous access to
customers and manufacturing stakeholders. As depicted in Fig. 1, customers are
involved as active stakeholders in the manufacturing process. They are allowed to
place orders for their customized products, which may be processed at scattered
locations and finally assembled and delivered to them by a specified deadline.
Scheduling customer requests in such a distributed, ubiquitous and dynamic
environment has to be done under consideration of the current status of the entire
set of involved entities. The production flow of distributed manufacturing may
be summarized in the steps captured in Fig. 2. First, raw material is collected from
either an internal or an external warehouse. While in the former case no transportation
is required, in the latter case the collected material has to be transported to the shop
floor of the factory. This justifies the arrow bypassing the transportation step,
indicating that this step is optional.
(Fig. 1 Material and information flows in cloud-integrated distributed manufacturing: warehouses for raw material and factories are connected, through the cloud platform, to warehouses for final products and to the customers, who receive the final product)
Following the collection and possible transportation of material, a transformation
step takes place. This step involves a set of processing and material handling steps
inside a factory.
The product that undergoes a transformation step may require further transfor-
mations at other factories or may correspond to the final product ordered by the
customer. This is designated by the feedback loop depicted in Fig. 2, after the
transportation step succeeding the transformation step. Taking the path, denoted by
the feedback arrow, indicates the need to apply further transformations in other
factories. Once a final product is produced, two possibilities exist: either the product
is directly transported to the customer for delivery, or it is transported to a warehouse
to be stored for some time before being transported again for customer delivery.
While the former possibility corresponds to the path from the transportation step
through the dashed arrow to the customer delivery step, the latter is denoted by
taking the rest of the steps (i.e. warehousing, transportation and customer delivery).
Each of these steps involves a set of resources, as captured by Table 1. To reduce
the complexity associated with tracking this overwhelming set of resources, the
influencing location or entity is identified for every step to be modelled later as an
agent undertaking the allocation decision for the corresponding step. For example,
the availability of raw material affects the material collection step and real-time
tracking of material may be performed by attaching RFID to the material units.
However, the allocation decision of material should be delegated to the material
supplier rather than to the material itself. The transformation of material or work
pieces inside a factory corresponds to a set of processing and material handling
steps that are analysed and modelled as agents in [10, 11]. In a multi-factory
environment, every factory is responsible for its internal schedule and is conceived as
the influencing entity in this case (see Table 1).
Integrated scheduling is affected by two facilities-related factors:
• Static factors related to the inherent specifications of facilities such as the
maximum speed of a truck, the services supported by a factory and the capacity
of a warehouse.
• Dynamic factors corresponding to the current status of facilities such as the
current free space in a warehouse, the currently existing material at a certain
store and the shipping capacity of a certain shipping company for a given date.
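The two factor types might be modelled as follows (a Python sketch with illustrative fields; static specifications are fixed at construction, while dynamic status changes at run time):

```python
from dataclasses import dataclass, field

@dataclass
class Warehouse:
    """Facility model split into static specifications and dynamic
    status, mirroring the two factor types above."""
    name: str
    capacity_m3: float                         # static: inherent specification
    free_space_m3: float = 0.0                 # dynamic: current status
    stock: dict = field(default_factory=dict)  # dynamic: material on hand

    def can_store(self, volume_m3):
        return volume_m3 <= self.free_space_m3

w = Warehouse("W1", capacity_m3=500.0, free_space_m3=120.0,
              stock={"steel": 40})
print(w.can_store(100.0))  # True
print(w.can_store(150.0))  # False
```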
Table 1 A derivation of the influencing resources and entities of a typical material flow

Material flow step  | Influencing resource             | Influencing location/entity
Material collection | Material                         | Warehouse/material supplier company
Transportation      | Trucks, forklifts, etc.          | Transportation company
Transformation      | CNCs, robots, AGVs, etc.         | Factory
Warehousing         | Location and internal facilities | Warehousing company
the four entities, listed in Table 1, namely a supplier agent, a shipping agent, a
factory agent and a warehouse agent.
Similarly, the environmental model is extended by incorporating the static fac-
tors defined in the previous section in a facilities repository. Furthermore, the static
factors derived from the resources and services of all the facilities have to be added
to the existing resources and services repositories, respectively. To account for the
customer-oriented production and make-to-order strategy, customer profiles should
be added as well. Each profile captures personal information, including address or
delivery location, preferences and a history of designed, configured and ordered
products.
Auxiliary tools are required to populate the environmental model with facilities,
products, services, etc. This is made possible through configuration and
administration tools that enable the registration and deletion of entities within the
environmental model. Search tools are also provided to allow agents to find each
other dynamically based on the provided service. To enable customers to make their
own products, a tool for the design and creation of virtual products has to be
provided as well.
5 Scheduling Method
The schedule generation takes place dynamically through the goal-oriented nego-
tiations among the concerned agents. The proposed scheduling method may be
summarized in the following steps.
• Initializations
Before placing an order, the customer navigates through a product repository
and either selects a predefined product, configures an existing one or designs a
new product. A customer order corresponds to a job for manufacturing a specific
product in a given quantity, at a delivery time or deadline and possibly a
maximum price. The customer agent reacts to the placement of an order by
instantiating a job agent. The job agent retrieves the technological order of the
product in concern, i.e. the production services required to manufacture the
product along with their sequence. The factory agents supporting these services
are retrieved and contacted.
• Collecting production scheduling proposals
To generate a production schedule, the factory agents supporting the required
services are contacted by the job agent. Every factory agent prepares and sends
its bid to the job agent. A bid includes the earliest start time, the latest end time,
and an estimated price. In case raw material or parts are required from an
external warehouse, the corresponding supplier agent is retrieved from the
yellow pages through the search tool. The cost and time of the supply is con-
sidered by the factory agent when generating its bid. In generating a bid, factory
agents decompose the required service into internal services supported by the
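The initialization and bid-collection steps above can be sketched as follows (minimal Python; the repositories, factory names, bid figures and the cheapest-feasible award rule are illustrative assumptions, not the complete negotiation protocol):

```python
from dataclasses import dataclass

# Hypothetical repositories
PRODUCTS = {"gearbox": ["milling", "heat_treatment", "assembly"]}
FACTORIES = {"FactoryA": {"milling", "assembly"},
             "FactoryB": {"heat_treatment"},
             "FactoryC": {"milling"}}

@dataclass
class Bid:
    factory: str
    earliest_start: int  # time units from order placement
    latest_end: int
    price: float

class JobAgent:
    """Instantiated by the customer agent for one order; retrieves the
    technological order, finds supporting factory agents, and awards
    the cheapest bid that meets the deadline (simplified rule)."""
    def __init__(self, product, deadline, max_price=None):
        self.services = PRODUCTS[product]  # technological order
        self.deadline, self.max_price = deadline, max_price

    def candidate_factories(self, service):
        return sorted(f for f, caps in FACTORIES.items() if service in caps)

    def award(self, bids):
        feasible = [b for b in bids if b.latest_end <= self.deadline
                    and (self.max_price is None or b.price <= self.max_price)]
        return min(feasible, key=lambda b: b.price) if feasible else None

job = JobAgent("gearbox", deadline=10)
print(job.candidate_factories("milling"))  # ['FactoryA', 'FactoryC']
bids = [Bid("FactoryA", 2, 9, 1200.0), Bid("FactoryC", 1, 12, 950.0)]
print(job.award(bids).factory)             # FactoryA
```

FactoryC's bid is cheaper but cannot finish by the deadline, so the job agent awards FactoryA.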
6 Conclusion
The present work deals with the integrated scheduling problem for distributed,
multi-factory, make-to-order manufacturing. The problem is analysed to identify
the involved entities, along with the static and dynamic characteristics of these
entities that influence the flow of production. The influencing entities are modelled as
autonomous agents and added to an agent-based, single-factory architecture proposed
in previous work. The extended architecture accommodates the new agents at an
additional layer of abstraction representing the distributed manufacturing facilities,
spanning the entire flow of production from material collection up to customer
delivery. The architecture encompasses a total of five layers of abstraction containing
agents that encapsulate and capture the dynamic influencing factors of the
corresponding entities. The agents are complemented with an environmental model
that keeps track of the static factors influencing the flow of production, and a set of
auxiliary tools that are used for configuring and customizing
284 I. Badr
products, registering and deleting services and agents, searching for services and
agents, etc.
The generation of an integrated schedule takes place through dynamic,
goal-oriented negotiations among the concerned agents. By basing schedule
decisions on dynamically captured factors that influence the flow of production,
agents can better react to unforeseen events. The decomposition of the scheduling
problem into agents, and limiting the negotiations to only the concerned agents,
greatly reduces the time complexity and thus enhances responsiveness.
Furthermore, keeping static influencing factors exogenous to the scheduling agents
facilitates the adoption of the proposed scheduling approach regardless of the
specifications of the involved factories.
Work is ongoing on employing different optimization algorithms that can be
incorporated in the proposed agents. The different algorithms are to be evaluated on
different case studies.
References
1. Zhang, Q., Cheng, L., Boutaba, R.: Cloud computing: state-of-the-art and research challenges.
J. Internet Serv. Appl. 1, 7–18 (2010)
2. Ouelhadj, D., Petrovic, S.: Survey of dynamic scheduling in manufacturing systems. J. Sched.
12(4), 417–431 (2009)
3. Huang, B., Li, C., Yin, C., Zhao, X.: Cloud manufacturing service platform for small- and
medium-sized enterprises. Int. J. Adv. Manuf. Technol. 65, 1261–1272 (2013)
4. Valilai, O., Houshmand, M.: A collaborative and integrated platform to support distributed
manufacturing system using a service-oriented approach based on cloud computing paradigm.
Robot. Comput. Integr. Manuf. 29, 110–127 (2013)
5. Guo, Z., Yang, C.: Development of production tracking and scheduling system: a cloud based
architecture. In: International Conference on Cloud Computing and Big Data (2013)
6. Luo, H., Fang, J., Huang, G.: Real-time scheduling for hybrid flow shop in ubiquitous
manufacturing environment. Comput. Ind. Eng. 84, 12–23 (2015)
7. Qu, T., Lei, S., Wang, Z., Nie, D., Chen, X., Huang, G.: IoT-based real-time production
logistics synchronization system under smart cloud manufacturing. Int. J. Adv. Manuf.
Technol. 1–18 (2015)
8. Zhang, Y., Huang, G., Sun, S., Yang, T.: Multi-agent based real-time production scheduling
method for radio frequency identification enabled ubiquitous shop floor environment. Int.
J. Comput. Ind. Eng. 76, 89–97 (2014)
9. Sun, X., Chung, S., Chan, F.: Integrated scheduling of a multi-product multi-factory
manufacturing system with maritime transport limits. Transp. Res. Part E Logistics
Transp. Rev. 79, 110–127 (2015)
10. Badr, I., Göhner, P.: An agent-based approach for scheduling under consideration of the
influencing factors in FMS. In: The 35th Annual Conference of the IEEE Industrial Electronics
Society (IECON-09). Porto, Portugal (2009)
11. Badr, I.: Agent-based dynamic scheduling for flexible manufacturing systems. Dissertation
Thesis (2010)
12. Badr, I., Göhner, P.: Incorporating GA-based optimization into a multi-agent architecture for
FMS scheduling. In: The 10th IFAC Workshop on Intelligent Manufacturing Systems, Lisbon,
Portugal (2010)
Secure and Resilient Manufacturing
Operations Inspired by Software-Defined
Networking
1 Introduction
transmission process. Also, this work may drive the manufacturing research and
practitioners communities to speed-up the adoption of SDN to distributed manu-
facturing operations. Secondly, the paper proposes the design of a novel
SDN-inspired control mechanism for manufacturing sequencing and scheduling,
with the goal to optimize the performance of specific manufacturing operations
metrics such as total completion time, maximum lateness, and others. This novel
solution is expected to bring to manufacturing control similar benefits that SDN is
reported to bring to IP networks, such as optimized manufacturing resource routing
solutions, better resource load balancing, and improved monitoring (fault-detection)
of manufacturing resources.
This work may also provide the cloud manufacturing research community with
foundations for tackling complex optimization problems, which often resort to
heuristics for acceptable solutions. Moreover, cloud control of manufacturing
decisions will come with the added benefit of the cyber security solutions that cloud
platforms offer. From this point forward, the paper is structured as follows: Sect. 2
provides a review of the most important aspects of SDN and manufacturing
control; Sect. 3 then presents the proposed Manufacturing-SDN system
model, detailing certain critical modelling aspects and instantiating the
manufacturing-SDN paradigm. Finally, future research concerning the
proposed system framework is outlined in the conclusions section.
2 Literature Review
(Fig. 1 SDN architecture: the OpenFlow API serves as the Southbound Interface to the Data Plane)
Traditional solutions for manufacturing control range from totally centralized to
totally decentralized control schemes. Over the years, the manufacturing control
literature has presented the advantages and disadvantages of both, as well as of the
solutions in between [4]. More recently, the revolutionary adoption of virtualization
and cloud computing at different levels of service (IaaS, PaaS, HaaS, SaaS) in many
areas was embraced by some well-established manufacturing organizations as well.
Other manufacturing organizations acting as service brokers were established and
operate in the overall virtualized and cloud manufacturing environment [3].
Previous authors’ work [10] proposes a Manufacturing Cyber-Physical System
(M-CPS) model, which includes both the physical world, where the traditional
manufacturing system is located, and the cyber world, where Internet connectivity
and computing in the cloud are performed. Figure 2 presents the proposed
M-CPS system.
In between the two worlds, there is a layer of cyber-physical devices, such
as sensors and actuators, local area networks, and also application and cyber
security software, which completes the cyber-physical system model. The layer of
Secure and Resilient Manufacturing Operations …
(Fig. 3 SDN testbed with monitoring, load balancer and cybersecurity applications)
potential problem, with resulting undesired consequences from the delivery time
and cost points of view. Relying on solutions provided by dispatching rules is
another problematic approach, as none of these simple heuristic rules provides
acceptable solutions every time it is invoked.
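The point that no single dispatching rule is acceptable every time can be illustrated with two classic rules, shortest processing time (SPT) and earliest due date (EDD), compared on total tardiness (the job data is made up for illustration):

```python
def total_tardiness(jobs, rule):
    """Sequence jobs by a dispatching rule and sum the tardiness.
    Each job is a (processing_time, due_date) pair."""
    t, tardiness = 0, 0
    for p, d in sorted(jobs, key=rule):
        t += p
        tardiness += max(0, t - d)
    return tardiness

SPT = lambda job: job[0]  # shortest processing time first
EDD = lambda job: job[1]  # earliest due date first

jobs_a = [(2, 3), (7, 8), (3, 16)]
jobs_b = [(4, 1), (3, 2), (2, 3)]
print(total_tardiness(jobs_a, SPT), total_tardiness(jobs_a, EDD))  # 4 1
print(total_tardiness(jobs_b, SPT), total_tardiness(jobs_b, EDD))  # 11 14
```

EDD wins the first instance while SPT wins the second, so neither rule dominates across instances.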
As mentioned above, SDN separates the data and control planes in IP networks,
with the benefits of, among others, better allocation of load in the network and
control over the route followed by IP data packets from source to destination.
Similar benefits can be obtained by adapting the SDN concept to manufacturing
operations. The adoption approach essentially needs to start with a distributed
manufacturing architecture, since centralized architectures do not have the
flexibility in communication needed at the lower levels of the architecture for
forwarding manufacturing packets across the shop-floor network of machines.
The first part of this work is the adoption of the SDN concept to distributed
manufacturing operations, with the goal of improving the performance of the
manufacturing data network metrics. As depicted in Fig. 3, the M-SDN testbed uses
FPGA boards to emulate manufacturing resource data at the shop-floor level.
The proposed testbed includes several FPGA development boards, such as the
powerful Xilinx Zynq-7000 series featuring the Zynq All Programmable
System-on-Chip, which can be used to implement OpenFlow devices yielding
88 Gbps throughput for 1 K flows with support for dynamic updates [11]. Currently
available OpenFlow devices are able to provide up to 1000 K forwarding table entries
for the Southbound Interface model. However, the testbed will use Open vSwitch, a
virtual OpenFlow device software that provides a number of virtual data points
larger than the actual hardware ones; Open vSwitch is available from its portal
[12]. The testbed will use two solutions for modelling the Control Level. In
a first step, Mininet, a platform available from the Mininet portal [13], will be
considered. Mininet creates a realistic virtual network, running real kernel, switch and
application code on a single machine, which could be a VM, a cloud instance or a
native host. One of the key properties of Mininet is its use of software-based
OpenFlow switches in virtualized containers, providing the exact same semantics as
hardware-based OpenFlow switches [3]. Once testing is successful, the testbed will
employ multiple VMs created in a computer network environment.
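The data/control-plane split that the testbed emulates can be reduced to a toy match-action sketch (plain Python, not the OpenFlow protocol itself; the topology and names are illustrative):

```python
class Switch:
    """Toy OpenFlow-style switch: the data plane only matches flow-table
    entries; on a table miss the packet is punted to the controller."""
    def __init__(self, controller):
        self.table = {}              # match (destination) -> output port
        self.controller = controller

    def forward(self, dst):
        if dst not in self.table:                    # table miss
            self.table[dst] = self.controller(dst)   # controller installs rule
        return self.table[dst]

# Logically centralized control: a global view maps destinations to ports
topology = {"machineA": 1, "machineB": 2}
sw = Switch(controller=lambda dst: topology[dst])

print(sw.forward("machineA"))  # 1 (miss: controller installs the entry)
print(sw.forward("machineA"))  # 1 (hit: no controller involvement)
```

Subsequent packets to the same destination are forwarded by the data plane alone, which is the behaviour the virtualized OpenFlow switches reproduce at scale.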
The Northbound Interface will be emulated by employing SDN programming
languages such as Procera, NetCore, Pyretic, etc. The Application Level will
include, at a minimum, network applications for load balancing, virtualization, cyber
security, process monitoring, and manufacturing control routing (manufacturing
order sequencing and scheduling). The proposed implementation, depicted in
Fig. 4, will be loaded with manufacturing orders to be performed in different
292 R.F. Babiceanu and R. Seker
(Fig. 4 The OpenFlow controller and SDN network interconnecting emulated manufacturing organizations 1, 2, 3, …, k over an IP network)
The second part of this work is the design of a novel SDN-inspired control
mechanism for manufacturing sequencing and scheduling, with the goal of optimizing
the performance of specific manufacturing operations metrics such as total
completion time, maximum lateness, etc. While the actual manufacturing orders still
reside in the data plane, the decision on where they are moved next is not made at
the resource level, as in decentralized manufacturing control, but at the control
plane, where the global view of the network obtained through the logically
centralized control helps provide solutions that avoid the limitations of
decentralized control. The first scenarios tested will include previously solved
agent-based and holonic approach solutions reported in the area of decentralized
manufacturing control [4, 5, 14]. Scenarios will increase in complexity to the level
expected to be realistic for today’s cyber-physical systems, which include large
numbers of systems and components that need to be manufactured and assembled
together in geographically distributed locations.
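The benefit of deciding at the control plane with a global view, rather than locally at each resource, can be illustrated with a toy load-balancing comparison (order data and the least-loaded rule are illustrative, not the proposed mechanism itself):

```python
def local_round_robin(order_times, n):
    """Local decision: resources take orders in a fixed turn,
    with no view of the overall load."""
    load = [0] * n
    for i, (_, t) in enumerate(order_times):
        load[i % n] += t
    return max(load)  # makespan

def global_least_loaded(order_times, n):
    """Control-plane decision with a global view: longest order first,
    always to the currently least loaded resource."""
    load = [0] * n
    for _, t in sorted(order_times, key=lambda kv: -kv[1]):
        load[load.index(min(load))] += t
    return max(load)  # makespan

orders = [("o1", 2), ("o2", 9), ("o3", 3), ("o4", 8)]  # processing times
print(local_round_robin(orders, 2))    # 17
print(global_least_loaded(orders, 2))  # 11
```

With the same four orders on two resources, the global view yields a makespan of 11 against 17 for the blind local rule, mirroring the load-balancing gains SDN reports for IP networks.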
SDN security and resilience are identified among the SDN research areas currently
pursued in both academia and industry. Clustered control architectures such as
SDN that provide distributed functionality are under constant scrutiny for their
availability and scalability, and thus for their resilience characteristics. Also, the SDN
decoupling of the data and control layers can result in delays in reporting
faulty data links due to communication overload. Therefore, the resilience of SDN
OpenFlow networks depends on both the fault tolerance of the data layer and
the performance of the logically (only) centralized control layer. An SDN-inspired
solution offers manufacturing the virtualization and cloud capabilities to address
changes and respond to resource failures in practically real time. Also, VM-enabled
control of manufacturing decisions comes with the added benefit of the cyber security
solutions that cloud platforms offer.
This work presents a framework for the development of a secure and resilient
Manufacturing-SDN system, in which cloud manufacturing operations are
expected to embrace the benefits that SDN deployments have brought to traditional
IP networks. The work proposes an actual Manufacturing-SDN testbed and outlines
the components and test scenarios for the proposed system. Future work will address
the testbed implementation and report on the results of the SDN distributed cloud
manufacturing operations.
References
1. Xia, W., Wen, Y., Foh, C.H., Niyato, D., Xie, H.: A survey on software-defined
networking. IEEE Commun. Surv. Tutorials 17(1), 27–51 (2015)
2. Hakiri, A., Gokhale, A., Berthou, P., Schmidt, D.C., Gayraud, T.: Software-defined
networking: challenges and research opportunities for future internet. Comput. Netw. 75,
453–471 (2014)
3. Kreutz, D., Ramos, F.M.V., Verissimo, P.E., Rothenberg, C.E., Azodolmolky, S., Uhlig, S.:
Software-defined networking: a comprehensive survey. Proc. IEEE 103(1), 14–76 (2015)
4. Babiceanu, R.F., Chen, F.F.: Development and applications of holonic manufacturing systems:
a survey. J. Intell. Manuf. 17(1), 111–131 (2006)
5. Babiceanu, R.F., Chen, F.F.: Distributed and centralized material handling scheduling:
comparison and results of a simulation study. Rob. Comput. Integrated Manuf. 25(2), 441–448
(2009)
6. Babiceanu, R.F., Seker, R.: Manufacturing operations, internet of things, and big data: towards
predictive manufacturing systems. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service
Orientation in Holonic and Multi-Agent Manufacturing, SCI, vol. 594, pp. 157–164. Springer,
Heidelberg (2015)
7. Open Networking Foundation. https://www.opennetworking.org
8. Nunes, B.A., Mendonca, M., Nguyen, X.-N., Obraczka, K., Turletti, T.: A survey of
software-defined networking: past, present, and future of programmable networks. IEEE
Commun. Surv. Tutorials 16(3), 1617–1634 (2014)
9. Jarraya, Y., Madi, T., Debbabi, M.: A survey and a layered taxonomy of software-defined
networking. IEEE Commun. Surv. Tutorials 16(4), 1955–1982 (2014)
10. Babiceanu, R.F., Seker, R.: Manufacturing cyber-physical systems enabled by complex event
processing and big data environments: a framework for development. In: Borangiu, T.,
Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing, SCI, vol. 594, pp. 165–173. Springer, Heidelberg (2015)
11. Kobayashi, M., Seetharaman, S., Parulkar, G., Appenzeller, G., Little, J., van Reijendam, J.,
Weissmann, P., McKeown, N.: Maturing of OpenFlow and software-defined networking
through deployments. Comput. Netw. 61, 151–175 (2014)
12. Open vSwitch. http://vswitch.org
13. Mininet. http://mininet.org
14. Mejjaouli, S., Babiceanu, R.F.: Holonic condition monitoring and fault-recovery system for
sustainable manufacturing enterprises. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.)
Service Orientation in Holonic and Multi-Agent Manufacturing, SCI, vol. 544, pp. 31–46.
Springer, Heidelberg (2014)
Building a Robotic Cyber-Physical
Production Component
1 Introduction
The manufacturing world is undergoing a paradigm shift, at both the organizational
and control levels, facing current demands for more robust, flexible,
modular, adaptive and responsive systems. Several opportunities arise for the
introduction of new and innovative approaches, such as the Cyber-Physical System
(CPS) approach. The CPS approach is supported by strong financing
measures, such as the European Horizon 2020 framework or the German Industry
4.0 initiative, leveraging a new industrial revolution and capturing the attention of
both academia and industry.
A CPS constitutes a network of interacting cyber and physical elements aiming at a
common goal [1]. A major challenge is to integrate the computational decisional
components (i.e. the cyber part) with the physical automation systems and devices (i.e.
the physical part) to create such a network of smart cyber-physical components.
However, this integration is not transparent and constitutes a critical challenge for
the success of the approach. In fact, it is neither easy nor transparent to integrate
heterogeneous automation devices, such as sensors, robots, numerical control
machines or automation solutions based on Programmable Logic Controllers
(PLCs), which is usually a complex and time-consuming activity. To face this
problem, the challenge is to define standard industrial interfaces that allow a
completely transparent development of the computational decisional components
without knowing the particularities of the automation device; in such a process, these
interfaces may be developed by automation providers or system integrators and
(re-)used by the system developers.
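The role of such standard interfaces can be sketched as follows; the class and method names below are hypothetical illustrations and not part of ISO 9506/MMS or ADACOR. The decisional component programs against an abstract device service, while an automation provider or system integrator supplies the device-specific driver behind it:

```python
# Hedged sketch of the standard-interface idea: agent code depends only on an
# abstract device service; drivers are pluggable. Names are illustrative.

from abc import ABC, abstractmethod

class DeviceInterface(ABC):
    @abstractmethod
    def read(self, var: str): ...
    @abstractmethod
    def write(self, var: str, value): ...

class SimulatedPLCDriver(DeviceInterface):
    """Stand-in for an OPC- or Modbus-backed driver supplied by an integrator."""
    def __init__(self):
        self._regs = {}
    def read(self, var):
        return self._regs.get(var, 0)
    def write(self, var, value):
        self._regs[var] = value

def agent_logic(device: DeviceInterface):
    # The decisional component never sees which protocol is underneath.
    device.write("conveyor_speed", 10)
    return device.read("conveyor_speed")
```

Swapping `SimulatedPLCDriver` for another `DeviceInterface` implementation leaves `agent_logic` unchanged, which is the transparency the text argues for.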
The objective of this paper is to describe an approach to establishing such standard
interfaces based on the use of the ISO 9506 Manufacturing Message Specification
(MMS) international standard [2], initially introduced by the ADACOR holonic control
architecture [3]. This approach is exemplified by deploying a cyber-physical
production component for an industrial manipulator robot, which is part of a
small-scale production system based on Fischertechnik systems.
The rest of the paper is organized as follows: Sect. 2 overviews the concept of
cyber-physical systems and identifies the integration of computational components
with automation devices as a critical challenge for its industrial implementation.
Section 3 presents an approach to engineer cyber-physical production components
and Sect. 4 illustrates its applicability by developing a robotic cyber-physical
production component. Finally, Sect. 5 rounds off the paper with the conclusions.
Embedded systems have been in use for many years. They can be characterized by
the conjunction of computational, electrical and mechanical capabilities, often
executing in real time and providing some sort of intelligence to the system.
Embedded systems are present everywhere and in different sectors, such as civil
infrastructure, aerospace, energy, healthcare, manufacturing and transportation.
Examples are vending machines, cars' Anti-lock Braking System (ABS) or even
elevators.
With the wide improvement and spread of communication technologies,
namely wireless communication and optical fibre, currently used in internet
Fig. 1 Cyber-physical system triad [4]
Fig. 2 Integration of agents with low level automation functions to form a cyber-physical production component
(from different types, e.g. robots and numerical control machines, and from
different automation providers) and the communication infrastructure (e.g. serial
communication, Modbus or OPC-UA). These services, after being developed, can
be re-used and offered as drivers or wrappers, to be used in a pluggable and modular
manner by other control applications for similar resources.
The transparent and standard invocation of these services by the computational
entity requires the specification of the syntax of each service, i.e. the definition of
input and output parameters. The services available in the Program Invocation
Service package, which are invoked in a unique way by the client side, i.e. the
agents, are:
300 P. Leitão and J. Barbosa
The robotic device is an IRB 1400 ABB robot that is part of a real small-scale
production system, which also comprises two punching machines and two indexed
lines supplied by Fischertechnik™, as illustrated in Fig. 4.
The punching and indexed machines are controlled by IEC 61131-3 programs
running in a Modicon M340 PLC. Two different parts circulate in the system, each
one having a particular process plan. The circulation of parts within the flexible
production system is tracked by radio-frequency identification (RFID) readers. An
industrial manipulator robot executes the transfer of the parts between the machines
using proper RAPID programs and is accessible through the ABB S4 DDE Server
(that can be accessed by OPC).
The idea of this work is to describe the way the cyber-physical production
component for the industrial robot was engineered, and particularly how the
software agent, which provides intelligence and adaptation to the robot, was
interfaced with the physical controller of the automation device.
Fig. 5 Petri net model for the behaviour of the Operational Holon
where the var parameter contains the specification of the PLC type and address
extracted from the XML configuration file. The same read interface, using the
Modbus communication protocol, is recoded in the following code excerpt.
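The referenced excerpt is not reproduced here; as a hedged stand-in (not the authors' code), the sketch below builds the raw Modbus TCP "Read Holding Registers" request frame that such a read wrapper would issue, following the standard MBAP-header layout:

```python
# Hedged sketch of a Modbus TCP read request (function 0x03): MBAP header
# (transaction id, protocol id 0, remaining length, unit id) followed by the
# PDU (function code, start address, register count).
import struct

def modbus_read_request(transaction_id, unit_id, address, quantity):
    """Build a Modbus TCP 'Read Holding Registers' request frame."""
    pdu = struct.pack(">BHH", 0x03, address, quantity)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu
```

A real driver would send this frame over a TCP socket (conventionally port 502) and parse the response PDU into register values for the agent.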
More examples could be given using the same approach, namely interfacing
different robot controllers or using different communication infrastructures, e.g. a
serial communication channel.
It is worth mentioning that all the aforementioned examples use a decoupled
approach, where the agent control layer, due to the JVM requirements, is not directly
deployable into the controlled HW, i.e. into the robot controller. Despite this, the
standard interfaces approach is also used when direct HW control can be
performed using an agent-coupled approach, as in the case of a Raspberry Pi [9].
The experimental tests show that this approach simplifies the development and
deployment of agent-based systems in the control of physical devices. On one side,
agent developers can focus only on developing the desired agents' functionalities,
while, on the other side, automation integrators can focus on developing these
interfaces, parameterized according to the particularities of the physical HW
devices.
5 Conclusions
References
1. Leitão, P., Colombo, A.W., Karnouskos, S.: Industrial automation based on cyber-physical
systems technologies: prototype implementations and challenges. Comput. Ind., Elsevier
(2015, accepted for publication)
2. ISO/IEC 9506-1: Industrial Automation Systems—Manufacturing Message Specification. Part
1—Service Definition (1992)
3. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57(2), 121–130 (2006)
4. Schmid, M.: Cyber-Physical Systems ganz konkret, ELEKTRONIKPRAXIS, n. 7 (2014)
5. Wooldridge, M.: An Introduction to Multi-Agent Systems. John Wiley & Sons (2002)
6. Leitão, P., Karnouskos, S.: A survey on factors that impact industrial agent acceptance. In:
Leitão, P., Karnouskos, S. (eds.) Industrial Agents: Emerging Applications of Software Agents
in Industry, pp. 401–429. Elsevier (2015)
7. Winkler, M., Mey, M.: Holonic manufacturing systems. Eur. Prod. Eng. (1994)
8. Ribeiro, L.: The design, deployment, and assessment of industrial agent systems. In: Leitão,
P., Karnouskos, S. (eds.) Industrial Agents: Emerging Applications of Software Agents in
Industry, pp. 45–63. Elsevier (2015)
9. Dias, J., Barbosa, J., Leitão, P.: Deployment of industrial agents in heterogeneous automation
environments. In: Proceedings of the 13th IEEE International Conference on Industrial
Informatics (INDIN’15), pp. 1330–1335. Cambridge, UK 22–25 July 2015
10. Murata, T.: Petri nets: properties, analysis and applications. Proc. IEEE 77(4), 541–580 (1989)
11. Bellifemine, F., Caire, G., Greenwood, D.: Developing Multi-Agent Systems with JADE.
Wiley (2007)
Part VII
Smart Grids and Wireless
Sensor Networks
Multi-Agent Planning of Spacecraft Group
for Earth Remote Sensing
1 Introduction
One of the most promising trends in the field of Earth remote sensing (ERS) is
creating a multi-satellite orbital group, which allows increasing the frequency of Earth
surface examination as well as the reliability and viability of the space system.
Expansion of an orbital group results in alternative possibilities of observing the same
areas with various spacecraft. At the same time, with a limited number of data
receiving points (DRPs), it becomes inevitable that several spacecraft lay claim to
transmitting data to the same DRP. The increasing interest in Earth remote sensing
results in the need to allocate observation requests considering their priorities
and execution times. Hence the necessity of real-time dynamic coordination of the
group's resource functioning plan.
However, most of the existing developments in this area are aimed at creating a
static resource usage plan for the space system [1], under the disputable
assumption that a spacecraft functions in a deterministic environment. Besides, the
planning methods and means used are primarily aimed at separate spacecraft and
cannot be projected onto large-scale groups.
With the introduction of multi-satellite groups and the corresponding growth of the
complexity of their target functioning, various heuristic algorithms have been
suggested for solving this task. This question has been widely discussed in [2]: the authors
compare several implementations of genetic algorithms combined with hill
climbing, simulated annealing, squeaky wheel optimization and iterated sampling
algorithms. The research in [3] considers planning of ERS tasks that occur
continuously and asynchronously by means of an ant colony optimization algorithm.
In [4], a combination of an artificial neural network and an ant colony optimization
algorithm is suggested for solving the task of spacecraft planning.
Dynamic planning of spacecraft operations is separately considered in [5]. The
literature shows that the principle of adaptive spacecraft resource scheduling with
the use of heuristic methods is very efficient. This paper considers the possibility of
implementing this principle with the help of multi-agent technology, which has shown
good results in solving traditional tasks of resource planning and allocation [6].
2 Problem Statement
t_shoot^i — time when shooting was started for the observation area i by a chosen
spacecraft;
t_drop^i — time when the shot transmission was started for the observation area i to a
chosen DRP;
t_max — critical storage time for a shot, after which it is considered outdated.
The developed schedule must satisfy the following constraints:
1. Visibility between a spacecraft and an observation area during shooting.
2. Visibility (accessibility) between a spacecraft and a DRP during data
transmission.
3. Free space in the on-board memory unit of a spacecraft (2):

   \sum_{i=1}^{n} q_j^i u_{ij}(t) \le Q_j,   for j = 1, …, m, where:   (2)

   t_drop^i = t_receive^i,   for i = 1, …, n, where:   (4)

t_receive^i — time when receiving the shot was started for the observation area i by a
chosen DRP.
5. No overlapping between operations in schedules of different resources (satellites
and DRPs are forbidden to perform several operations simultaneously) (5–6):

   \sum_{i=1}^{n} u_{ij}(t) \le 1,   for j = 1, …, m,   (5)

   \sum_{i=1}^{n} x_{ik}(t) \le 1,   for k = 1, …, K, where:   (6)

   x_{ik}(t) = 1 if a shot of the observation area i is being received by DRP k at the
   moment t, and x_{ik}(t) = 0 in other cases.
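Constraints of types (2) and (5)–(6) can be checked mechanically; the sketch below verifies on-board memory capacity and the no-overlap condition for a single resource, with data layouts that are illustrative assumptions rather than the paper's implementation:

```python
# Hedged sketch of feasibility checks for one spacecraft/DRP.

def memory_ok(ops, capacity):
    """ops: (start, end, size) image-storage intervals; check peak usage."""
    events = sorted([(s, size) for s, e, size in ops] +
                    [(e, -size) for s, e, size in ops])  # releases sort first
    used = 0
    for _, delta in events:
        used += delta
        if used > capacity:
            return False
    return True

def no_overlap(ops):
    """ops: (start, end) operations on one resource; none may overlap."""
    ops = sorted(ops)
    return all(a_end <= b_start
               for (_, a_end), (b_start, _) in zip(ops, ops[1:]))
```

Such predicates are what the spacecraft agents evaluate locally before accepting a placement request.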
3 Methods
At the first stage, agents of observation areas send requests to suitable spacecraft
about the possibilities of shooting and transmitting to the Earth within the best
(according to the target function) free time interval. The decision obtained at this
stage is taken as the initial one, which will then be consistently improved, starting
with the "worst" fragments of the plan. Spacecraft agents have access only to the
timetables of their own spacecraft's resources. Based on these data, they make
decisions about the possibility or impossibility of placing a new image of the
observation area. If, at the time of receiving a request from the observation area, all
resources of the spacecraft are already occupied, its agent declines the request for
shooting the area.
Conflict-free route planning is possible if all of the following conditions are
fulfilled:
• Visibility between the spacecraft and both the observation area and the DRP;
• The time period does not overlap with previously planned data transmission
sessions or shooting of other observation areas;
• The spacecraft on-board memory unit contains a sufficient amount of free space
for the time interval from the beginning of shooting of the observation area to
the end of image transmission to the DRP.
The spacecraft agent sends inquiries to all the known DRP agents with a
proposal to hold a data transmission session. Among all the options proposed by the DRP
agents, the spacecraft agent chooses the time interval closest to the moment of
shooting that is also free from other shooting sessions or DRP communication
sessions. If any of the conditions is not fulfilled, the shooting of the observation
area remains unscheduled. Having planned a DRP communication session, the
spacecraft agent informs the observation area agent about the data transmission
time and makes the necessary changes in its timetable. It is important to note that
the purpose of this stage is to quickly obtain a feasible initial schedule, whatever its
level of quality. The solution received at this stage reveals the main bottlenecks of
the timetable and becomes the reference point for further improvements.
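The first stage described above can be sketched as a greedy placement; the slot granularity and data shapes are illustrative assumptions. Each request takes the earliest free interval on some spacecraft, or is declined:

```python
# Hedged sketch of the first planning stage: greedy earliest-slot placement.

def initial_schedule(requests, spacecraft_slots):
    """requests: area ids; spacecraft_slots: {craft: set of free slots}."""
    plan, unscheduled = {}, []
    for area in requests:
        placed = False
        for craft, free in spacecraft_slots.items():
            if free:
                slot = min(free)          # earliest free interval
                free.remove(slot)
                plan[area] = (craft, slot)
                placed = True
                break
        if not placed:
            unscheduled.append(area)      # request declined, as in the text
    return plan, unscheduled
```

The quality of this plan is deliberately secondary; it only has to be feasible so that the improvement stage has a starting point.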
314 P. Skobelev et al.
At this stage, the observation area agents try to improve the value of their
target function, asking operations that conflict with their areas to find other
placement intervals, by shifting in time or moving to another resource
(spacecraft or DRP). Building a sequence of changes is started by those agents that
are most unsatisfied with the value of their target function. A proactive observation
area agent asks available resources about the possibility of placing certain
operations; some conflicts are then inevitably exposed: time slots that are favourable
from the point of view of the target function are found to be occupied by other
operations. The agents connected with these operations receive a request
for a shift to a specified time slot. Recursive shifting of the operations affected by
the shift continues until one of the operations can move to a new position without
any obstacles; the displacing operation proceeds as long as there are means to
compensate the induced expenses, or until a counter that limits recursion depth
reaches zero. This process of agent interaction when shifting operations in the
schedule is shown in Fig. 1: operations are symbolized by rectangles of
width proportional to their duration; the displacing operation is shaded, solid arrows
represent messages generated by the shift request, and response messages of the
shifted operations are shown as dotted lines.
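The recursive shift chain with a depth budget can be sketched as follows; the single-resource, unit-slot timeline is an illustrative simplification of the agents' message-based negotiation:

```python
# Hedged sketch of the shift chain: an operation asks the occupant of its
# preferred slot to move one slot right; displaced operations recurse until a
# free slot is found or the recursion-depth budget is exhausted.

def try_shift(timeline, op, slot, depth):
    """Place op at slot on timeline (dict slot -> op), shifting occupants."""
    if depth < 0:
        return False                       # budget exhausted: refuse the shift
    occupant = timeline.get(slot)
    if occupant is None or try_shift(timeline, occupant, slot + 1, depth - 1):
        timeline[slot] = op
        return True
    return False
```

Starting from {0: "A", 1: "B"}, placing "C" at slot 0 with budget 2 shifts A and B one slot to the right; with budget 0 the request is refused and the timeline is left unchanged.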
The following conflict situations are taken into consideration when building a
chain of changes:
1. Planning of shooting in the observation area by displacement of the previously
planned shooting sessions or data transmission sessions from the spacecraft
schedule.
2. Approximation of the time of image transfer to DRP by displacement of the
previously planned shooting sessions or data transmission sessions from the
spacecraft schedule.
3. Displacement of the previously planned data transmission sessions from the
DRP schedule.
4. Emptying the spacecraft memory unit of other images when space in the
on-board memory unit is lacking.
Fig. 2 [Figure: scheme of agent interaction when building a chain of changes, showing selection of unsatisfied area agents, a cycle over operations, a proactive start, requests for proposals, refusals, proposals, and the choice of conflicting operations]
Fig. 3 A screen of multi-agent prototype system for planning of a spacecraft group for ERS
4 Results
5 Conclusion
Within the proposed approach, the problem of space system management is solved
by creating a self-organizing team of intelligent agents that conduct negotiations
and are capable not only of planning their behaviour individually in real time, but
also of working in groups in order to ensure coordination of decisions.
The developed software prototype for planning the use of spacecraft for Earth
remote sensing has proved the potential of this approach owing to the following
facts:
• Significant reduction of the time spent on forming a schedule close to optimal
(compared to the exhaustive algorithm);
• Flexibility, provided by a rapid response to emerging events;
• Scalability and openness—new components (spacecraft, DRPs, etc.) can be
connected to the system dynamically, without the system's shutdown and restart;
• Autonomy of software modules, which will in the long term make it possible to place
planning components inside the spacecraft's on-board computing devices [10].
Acknowledgements This work was carried out in SEC "Smart Solutions" Ltd. with the financial
support of the Ministry of Education and Science of the Russian Federation (Contract №
14.576.21.0012, unique number RFMEFI57614X0012; P. Skobelev—scientific advisor, E.
Simonova—senior analyst, V. Travin—project manager, A. Zhilyaev—programmer).
References
1. Sollogub, A., Anshakov, G., Danilov, V.: Spacecraft systems for sensing of the Earth's surface.
Mechanical Engineering, Moscow (2009)
2. Globus, A., Crawford, J., Lohn, J., Pryor, A.: Application of techniques for scheduling
earth-observing satellites. In: Proceedings of the 16th Conference on Innovative Applications
of Artificial Intelligence, pp. 836–843 (2004)
3. Iacopino, C., Palmer, P., Policella, N., Donati, A., Brewer, A.: How ants can manage your
satellites. Acta Futura 9, 57–70 (2014)
4. Rixin, L.Y.W., Xu, M.: Rescheduling of observing spacecraft using fuzzy neural network and
ant colony algorithm. Chin. J. Aeronaut. 27, 678–687 (2014)
5. Chuan, H., Liu, J., Manhao, M.: A dynamic scheduling method of earth-observing satellites by
employing rolling horizon strategy. Sci. World J. (2013)
6. Rzevski, G., Skobelev, P.: Managing complexity. WIT Press, London-Boston (2014)
7. Wooldridge, M.: An introduction to multiagent systems, 2nd edn. Wiley, London (2009)
8. Skobelev, P.: Multi-agent systems for real time resource allocation, scheduling, optimization
and controlling: industrial application. In: 10th International Conference on Industrial
Applications of Holonic and Multi-Agent Systems, Toulouse, France (2011)
9. Belokonov, I., Skobelev, P., Simonova, E., Travin, V., Zhilyaev, A.: Multiagent planning of
the network traffic between nano satellites and ground stations. Procedia Eng.: Sci. Technol.
Exp. Autom. Space Veh. Small Satell. 104, 118–130 (2015)
10. Sollogub, A., Skobelev, P., Simonova, E., Tzarev, A., Stepanov, M., Zhilyaev, A.: Intelligent
system for distributed problem solving in cluster of small satellites for earth remote sensing.
Inf. Control Syst. 1(62), 16–26 (2013)
Methodology and Framework
for Development of Smart Grid Control
transmission line, but many parts of the distribution system do not have this
capability.
Renewable energy sources are among the biggest challenges for power system
operators. Intermittent generation may have a negative impact on power
flows, voltages and other network parameters. Consecutive outages may
occur in a power system, and thus it is highly recommended that the power system be
designed to withstand double disconnections. Unfortunately, due to the large
investment requirements, most power systems may have problems complying
with this criterion.
The operating conditions vary continuously and the power system moves from
one state to another, as indicated in Fig. 1. N and N-1 represent
standard analyses of power system operating regimes (N means all power lines are
up, N-1 means one power line is down, N-1-1 means two lines are down).
The transition to one state or another depends on the random events that may occur or
on the decisions taken by the system operator. The figure also shows a classification
of the possible states of the power system depending on the events that may occur.
When the power system enters the alert state, immediate corrective actions must
be taken in order to restore normal operation. If a contingency occurs during this
transition, the system can enter an emergency state, in which there is a
large number of bus voltage limit violations. In this state, ultimate (extreme)
actions can still be taken to restore the system to the normal operation state. If the
contingency is too severe, the power system may become unstable and finally
collapse (as shown in Fig. 1).
Following the major incidents that have occurred in the European interconnected
power system [1, 2], UCTE issued in 2009 a new policy for operational security
[3]. Maintaining the power system in a secure operating state assumes that some
overall process operability by minimizing downtimes. One of the first proposals for
an architecture based on reconfigurability design is that of Choksi and McFarlane
[6], which uses the coordinating function to monitor and control local planning,
local optimization and local control.
Most available control theories assume that a control structure is given at the
outset. There are two main approaches to the problem: a mathematically oriented
approach (control structure design) and a process-oriented approach [7, 8].
Plantwide control is a holistic approach concerned with the structural and functional
decisions involved in the control system design of a process.
2.1 Methodology
In order to solve the uncertainties-management problem, the designer will try to split
it into manageable parts. A generic methodology to perform requirements analysis
while addressing hazard and risk includes the following steps:
• Applicability specification (program, project, data, constraints, personnel).
• Hazards identification (expert opinion/lessons learned/test data/technical
analysis/hazard analysis).
• Consequences evaluation (impact/severity, probability, timeframe).
• Risk assessment (resources/leads/project; metrics information, risk management
structure).
• Monitoring risk metrics and verifying/validating mitigation actions.
• Checking real scenarios and progressively updating the input information.
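The consequences-evaluation and risk-assessment steps above can be sketched as a simple severity-times-probability ranking; the 1-5 scales, the scoring product and the hazard names are assumptions, not a metric from the text:

```python
# Hedged sketch of risk assessment: score hazards by severity x probability
# so that mitigation effort follows the ranking. Scales 1-5 are assumed.

def rank_hazards(hazards):
    """hazards: list of (name, severity 1-5, probability 1-5), highest risk first."""
    return sorted(hazards, key=lambda h: h[1] * h[2], reverse=True)
```

The resulting ordering is what a risk-management structure would monitor and progressively update as real scenarios are checked.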
Our approach to developing advanced control, using an integrated architecture that
is open to incorporating more flexibility, implements reconfigurable control to ensure
the reaction to uncertainties. The integration of Risk and Hazard (RH) Control, able to
maintain the process in a safe state using control hierarchy layers, is shown in Fig. 3,
where (a), (b), (c) and (d) suggest a holonic organization at several levels.
The four main holons are, from the upper level: (d)—classical control based on
basic regulatory, sequential and logical control; (c)—safety level based on safety
instrumented systems (SIS) and the new Reconfigurable Control paradigm; (b)—remote
level based on the internet or cloud, able to perform automatic identification, modelling and
simulations; (a)—management level, which has two main functions: management
and supervisory control [7].
To be able to perform such tasks, the system architecture, structure and data flows
must be able to support different methods of reconfiguration. Consequently,
reconfigurability design must focus on: (i) Components (sensors, actuators, IEDs,
synchrophasors, FACTS, controllers, equipment); (ii) Control (algorithms, structure,
data flows, RH control strategies, integrated control); (iii) Transmission
process (equipment, flows, process and states).
Fig. 3 [Figure: holonic control hierarchy, showing (a) a management level (central management, production orders, deliveries, operation scheduling), (b) a remote level (information, simulation, modelling), reconfigurable control over process data, sensors & actuators, and process uncertainty]
library of algorithms and strategies, case studies. The focus of our work is on
developing a structure of fault detection and intelligent alert that, in conjunction with
Reconfigurable Control, can lead to the recovery of functionality, even with
degraded performance. Mode selection, part of this structure, functions as follows:
at first, the fault recovery measures for individual loop failures are derived from a
fault impact analysis. Next, the fault recovery principle initiates a change in the
operating strategy of the plant by incorporating the changes in operating factors
associated with failures into the model-based control calculations. These strategies
can be implemented with direct commands from Reconfigurable Control and/or
associated with reconfiguration scenarios.
3 Case Study
The Romanian power system has undergone significant changes in its
generation pattern. In December 2014 the total installed capacity in wind power
plants (WPPs) was 2950 MW. At least 80 % of the capacity installed in WPPs is
located in the Dobrogea region. A nuclear power plant with a rated power of 1400 MW is
also connected in this region and operates at full capacity. The average load in the
Fig. 4 [Figure: risk-and-hazard control loop, with a PH Center/Cloud (automatic identification, simulator, solver + library, solution merger with time), mode selection and an operating mode selector, and Reconfigurable Control driven by scenarios/strategies acting on the reconfiguration process and the environment]
Romanian power system is about 6700 MW, while an average power of 800 MW is
exported to other power systems.
The network section (Fig. 5) is defined across the electrical lines that
interconnect the Dobrogea region with other parts of the Romanian power system or
with the Bulgarian power system. These lines are also the most subject to
overloading.
Fig. 5 A simplified representation of the Dobrogea region
326 G. Florea et al.
From the security point of view, several limits are verified both in the planning
activity one day ahead and in exploitation, which takes place in real time.
The thermal limit is verified one day ahead for both N and N-1 configurations; if
overloading is identified, network reconfiguration and/or generation redispatching is
performed. In real time, if an N-1-1 contingency occurs, either immediate action is taken
by the dispatcher or appropriate automation is activated by reducing the power
generation within the area. For this purpose, a study was performed to define a
regional automation scheme that deals with any unexpected contingency. Taking
into account the large number of scenarios, the influences (sensitivities) of all wind
power plants on the power flows of the transmission lines have been determined.
A ranking of these sensitivities has been defined so that, in real time,
when a certain contingency occurs, the minimum quantity of generated power is
disconnected.
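The sensitivity-ranked disconnection logic described above can be sketched as a greedy selection; the linear-relief model, the plant names and all figures are illustrative assumptions, not data from the study:

```python
# Hedged sketch of the automation scheme: given per-WPP sensitivities
# (MW of line relief per MW shed), pre-ranked offline, shed the minimum
# generation that removes the overload.

def shed_generation(overload_mw, wpps):
    """wpps: list of (name, sensitivity, available_mw); returns a shed plan."""
    plan, relief = [], 0.0
    # Most effective plants first -> minimum total power disconnected.
    for name, sens, avail in sorted(wpps, key=lambda w: w[1], reverse=True):
        if relief >= overload_mw:
            break
        need = (overload_mw - relief) / sens   # MW to shed at this plant
        shed = min(avail, need)
        plan.append((name, shed))
        relief += shed * sens
    return plan
```

With an overload of 50 MW and plants W1 (sensitivity 0.8, 40 MW available) and W2 (0.5, 100 MW), the scheme sheds all 40 MW of W1 and only 36 MW of W2.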
The bus voltages are regulated in two stages so as to be maintained within predefined
limits. The first stage consists in providing the voltage set-points to the
nuclear power plant and to the nodes where large wind power plants are connected.
The second stage is real-time operation and consists in the voltage control
contribution to the pilot buses from the wind power plants. However, this stage is active
only if there is generation availability from wind power plants. When active, voltage
set-points of roughly 105 % of the nominal value are set within the area, because the
Dobrogea region becomes an important reactive power source for other parts of the
Romanian power system.
The stability limit is calculated one day ahead by off-line simulations and in
real time (on-line) by specialized software. The stability limit is defined as
the maximum power that can be transferred through the predefined section. Since
the Dobrogea region exports power, the stability limit is the sum of the power
flows on all lines of the section. This limit decreases when one or more lines
are disconnected; in this case the most severe contingency is identified, which
gives the stability limit for the N-1 configuration, among others. If the
scheduled exported power exceeds the day-ahead limit, corrective actions are
taken. The same study also defined an automation logic that takes action in
real time when the stability criterion is not met: it reduces the power
produced by the wind power plants when the stability limit is exceeded, mainly
in case of contingencies.
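The stability check described above reduces to a simple computation: the
section export is the sum of line flows, and the curtailment needed is however
much that export exceeds the limit. The sketch below illustrates this under
assumed line names and values; it is not the specialized software used in the
study.

```python
# Illustrative sketch of the stability-limit check described above.
# Line names and limits are assumptions, not values from the study.

def stability_margin(line_flows_mw, stability_limit_mw):
    """The section export is the sum of flows on all its lines; the
    margin is how far that export sits below the stability limit."""
    export = sum(line_flows_mw.values())
    return stability_limit_mw - export

def required_curtailment(line_flows_mw, stability_limit_mw):
    """Wind power that must be curtailed when the limit is exceeded
    (zero when the export is within the limit)."""
    margin = stability_margin(line_flows_mw, stability_limit_mw)
    return max(0.0, -margin)
```

With an assumed limit of 1000 MW and flows of 400, 350 and 300 MW on the
section lines, the export of 1050 MW calls for 50 MW of curtailment.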
4 Conclusions
Our work promotes the concept of holonic control based on uncertainty
management instead of the standard control strategy approach. Real-time
capability, integrated with Smart Grid attributes such as isolation,
reconfiguration, modularity and standardization, provides the necessary tools
for uncertainty management and leads to a more reliable system. This approach
was considered in
simulating the behavior of the power system in the Dobrogea region. Future work
will include identifying a particular solution for risk and hazard control with
self-reconfiguration of a pilot plant.
Sink Node Embedded, Multi-agent
Systems Based Cluster Management
in Industrial Wireless Sensor Networks
1 Introduction
With advances in cyber-physical systems and the introduction of Industry 4.0,
there has been an extensive amount of research in distributed intelligent
control. A key requirement of cyber-physical systems is that devices be aware
of their environment, and industrial wireless sensor networks (WSNs) have been
considered for this purpose. Multi-agent systems have proven to be a successful
technique for managing WSNs, typically through a coupled or cloud-based
deployment. With advances in technology, it is becoming feasible to deploy
these intelligent agents directly on the automation hardware.
Key challenges in WSNs are fault recovery and scalability, especially in
industrial systems situated in harsh environments. A WSN's scalability depends
largely on the ease of introducing new sensor nodes into the network.
2 Background
traditional sensors [3]. Sensor nodes are used to sense, measure and gather infor-
mation from the environment and transmit the data to a user or data acquisition
system.
One of the primary concerns in wireless sensor networks is data routing. When a
large-scale industrial WSN passes a lot of data, it creates a large
communication overhead. The most widely accepted way to reduce this overhead is
to cluster the wireless sensor network. Clustering imposes a hierarchical
structure on the network and allows for data aggregation. This hierarchy is
then composed of two types of sensor nodes: sink nodes and anchor nodes.
Sink nodes are the cluster-heads of the network. They are responsible for
aggregating the data and transmitting information from the network to the
acquisition system or base station. Because many transmissions and much data
aggregation are required of the sink node, it is often a fixed unit with higher
processing power. The sensor nodes that make up the cluster and send sensory
data to the sink nodes are often referred to as anchor nodes.
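The two-tier structure described above can be sketched with two small data
types: anchor nodes that produce readings, and a sink node that aggregates
them so only one message travels up to the base station. The class and field
names are illustrative assumptions, not part of the paper's design.

```python
# Minimal sketch of the clustered hierarchy described above: one sink
# node (cluster-head) aggregating data from its anchor nodes.
# Names and the placeholder reading are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AnchorNode:
    node_id: str

    def sense(self):
        # Placeholder reading; a real node would sample its sensor here.
        return {"node": self.node_id, "value": 0.0}

@dataclass
class SinkNode:
    sink_id: str
    anchors: list = field(default_factory=list)

    def aggregate(self):
        """Collect readings from every anchor in the cluster so a single
        aggregated message can be sent to the base station."""
        return [a.sense() for a in self.anchors]
```

A sink with two anchors then yields one aggregated batch of two readings
rather than two separate transmissions to the base station.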
In some WSNs, the sink node is a regular wireless sensor node. This is referred
to as a homogeneous WSN (as opposed to a heterogeneous WSN). In this case, the
extra processing power required of the sink node drains the battery at a faster rate
than the anchor nodes, and sink node rotation becomes a primary concern to pro-
long the network lifetime. In industrial applications that require a perpetual lifetime
of the wireless sensor network, it is often more practical to use a heterogeneous
wireless sensor network. This reduces the complexity of sensor node replacement
and maintenance programs.
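Sink node rotation in a homogeneous WSN, as described above, amounts to
periodically handing the sink role to the node with the most remaining energy
so the aggregation load is spread across the network. The sketch below is a
minimal illustration under assumed node names and battery levels, not a
specific rotation protocol from the literature.

```python
# Sketch of sink-node rotation in a homogeneous WSN: the sink role
# moves to the node with the most remaining battery, prolonging the
# network lifetime. Node names and levels are illustrative assumptions.

def rotate_sink(battery_levels, current_sink):
    """Return the node that should act as sink for the next round."""
    candidate = max(battery_levels, key=battery_levels.get)
    # Hand over the role only if another node is strictly better off;
    # otherwise keep the current sink to avoid needless handovers.
    if battery_levels[candidate] > battery_levels[current_sink]:
        return candidate
    return current_sink
```

For instance, with levels {"n1": 0.4, "n2": 0.9, "n3": 0.7} and n1 currently
acting as sink, the role rotates to n2.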
While there are several definitions of agents, the most commonly accepted defi-
nition is provided by reference [4], which states that an agent is a computer system
that is situated in some environment, and that is capable of autonomous action in
this environment in order to meet its delegated objectives. Agents are also often
defined by their characteristics. According to reference [5], agents are autonomous,
responsive, proactive, goal-oriented, smart-behaving, social and able to learn.
A multi-agent system is a system of two or more agents that collaborate toward
some collective goal while still pursuing their own individual goals. According
to reference [5], multi-agent systems have decentralized control and are
flexible, adaptable, reconfigurable, scalable, lean and robust. These
properties align with the design considerations for wireless sensor networks,
making multi-agent systems well suited to manage such networks.
A major point of interest with the advances in technology is whether to embed
the intelligent agent or to use a coupled or cloud-based design. Reference [5]
defines a coupled design as a situation where one or more agents collect and
process data from an existing structure, in a cloud-based fashion. Embedded
agents, in contrast, run directly on the automation hardware itself.
332 M.S. Taboun and R.W. Brennan
3 Related Work
Each sink node has three agents that provide the intelligence required to manage its
respective cluster. These agents are the sink node mediator, device manager and
task manager, and are shown in Fig. 1.
The device manager agent has knowledge of the current cluster topology, the
hierarchic level of the sensor network, the state of the nodes in the cluster,
the node I/Os, the node descriptions and the node power levels. The device
manager communicates with the nodes in the cluster via an RF gateway (such as
XBee transmitters). Its skills are conversation, negotiation and
decision-making; these allow the device manager agent to dynamically
reconfigure the nodes in the sink node's respective cluster.
The task manager agent has knowledge of the sensing and control task status for
the sink node's cluster of sensors. It also knows the level of the parameter
being sensed and the corresponding reaction required from the control units in
the cluster. Much like the device manager agent, the task manager agent
communicates with the sensor nodes via RF or wireless sensor network gateways.
Its skillset covers data aggregation, integration, filtering, conversation and
decision-making. While the device manager and task manager agents communicate
with the nodes in their cluster of sensors, they do not communicate with each
other. Instead, communication between the agents goes through the sink node
mediator agent. The sink node mediator has knowledge of advertisements and
bidding. It communicates with the other agents on the same sink node via
software communication, and with the data acquisition system and other sink
nodes via other networks such as LAN or Wi-Fi. Its skillset is in conversation,
collaboration and brokering.
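The three-agent arrangement described above, in which the device manager and
task manager never talk directly and every message passes through the sink node
mediator, can be sketched as follows. The class names follow the paper's agent
names, but the message format and method names are assumptions made for
illustration.

```python
# Sketch of the three agents on one sink node: all communication
# between the device manager and task manager is brokered by the
# sink node mediator. Message handling here is deliberately trivial.

class DeviceManagerAgent:
    def handle(self, msg):
        return f"device-manager handled: {msg}"

class TaskManagerAgent:
    def handle(self, msg):
        return f"task-manager handled: {msg}"

class SinkNodeMediator:
    """Brokers every message between the agents on one sink node; the
    two manager agents hold no reference to each other."""
    def __init__(self):
        self.agents = {"device": DeviceManagerAgent(),
                       "task": TaskManagerAgent()}

    def route(self, target, msg):
        return self.agents[target].handle(msg)
```

A request from the task manager to the device manager would thus be expressed
as a call on the mediator, e.g. `mediator.route("device", "reconfigure")`,
rather than a direct method call between the two managers.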
At the highest level of the network is the supervisory control and data
acquisition (SCADA) system. Because the intelligent agents are embedded on the
physical network, only simple control of the inputs to the wireless sensor
network is required. For this reason, a SCADA system is preferable to a more
complex distributed control system. The SCADA system transmits and receives
data from the sink nodes through a wired or wireless network.
The device manager agent is responsible for ensuring that the sensor cluster is
able to perform the tasks required of it. It accomplishes this by dynamically
reconfiguring the topology of the cluster on the fly. Dynamic reconfiguration
covers any change to the topology, including replacing sensor nodes that have
failed through hardware/software errors or battery exhaustion. Other types of
reconfiguration include the sleep and wake functions of sensors, using sensors
from other clusters for fault recovery, and lending sensors to other clusters.
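One step of the reconfiguration described above, replacing a failed node with a
spare, can be sketched as a pass over the cluster state. The dictionaries,
role names and the notion of a spare pool are illustrative assumptions; they
stand in for whatever topology representation the device manager actually
keeps.

```python
# Sketch of the device manager's replace-failed-nodes step: failed
# nodes are dropped from the topology and a spare node (possibly one
# borrowed from another cluster) is promoted into the vacated role.

def reconfigure(topology, states, spares):
    """topology: node id -> sensing role (e.g. 'temp').
    states:   node id -> 'ok' or 'failed'.
    spares:   ordered list of idle node ids available as replacements."""
    for node, state in list(states.items()):
        if state == "failed" and node in topology:
            role = topology.pop(node)           # remove the failed node
            if spares:
                topology[spares.pop(0)] = role  # promote a spare into the role
    return topology
```

After running this over a cluster where node "a" (temperature) has failed and
spare "s1" is available, "s1" takes over the temperature role while healthy
nodes are left untouched.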
The task manager is primarily used for obtaining the application specific data
from the sensor cluster. In the case of a hierarchical wireless sensor network in
which a cluster is composed of sub-clusters, the task manager is aware of which
level of the hierarchy it is obtaining data from. For complex sensing data, a large
amount of data may be transmitted through the WSN. Data aggregation helps lower
the resulting overhead and power consumption. The aggregated data routing can
be seen in Fig. 2.
In order to ensure that the proper data is being received, some collaboration is
required with the device manager. The sink node mediator is responsible for han-
dling this collaboration. The sink node mediator is also responsible for handling
inter-cluster communication, which may occur if the cluster needs to borrow
another node. In this case, the device manager agent would send a request to the
sink node mediator which would negotiate with the sink node mediator in another
cluster.
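The borrowing interaction described above, one mediator requesting a node and
the other lending one only if it has a spare, can be sketched as a two-step
handshake. The protocol, method names and node identifiers below are
assumptions for illustration, not a protocol specified in the paper.

```python
# Sketch of the inter-cluster borrowing handshake: the requesting
# cluster's mediator asks another mediator for a node; the lender
# responds with a spare node id, or None if it has nothing to spare.

class ClusterMediator:
    def __init__(self, name, spare_nodes):
        self.name = name
        self.spare_nodes = list(spare_nodes)

    def request_node(self, other):
        """Ask another cluster's mediator to lend a sensor node."""
        return other.lend_node(requester=self.name)

    def lend_node(self, requester):
        if self.spare_nodes:
            return self.spare_nodes.pop(0)  # lend one spare node
        return None                         # refuse: nothing to spare
```

A mediator with no spares simply refuses, which is the point at which a
real deployment would continue negotiating with further clusters in range.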
There are two types of separators: stage 1 and stage 2. There are also two
types of gas compressors: a low pressure compressor and a high pressure
compressor.
In this example, which is illustrated in Fig. 3, unprocessed oil (which
consists of oil, gas and water) enters the stage 1 separator. The gas that is
separated from the
stage 1 separator goes to the low pressure gas compressor. The leftover oil flows to
the stage 2 separator, where the remaining gas is separated from the oil. This gas is
also sent to the low pressure gas compressor. The water that is separated during the
process is sent to the water treatment equipment. The refined oil is stored and/or
exported after leaving stage 2 separation. After leaving the low pressure gas
compressor, the gas is sent to the high pressure gas compressor, where it is then
exported.
As illustrated in Fig. 3, cluster 1 is responsible for monitoring the compressors,
cluster 2 is responsible for the separators, cluster 3 monitors the water treatment
equipment and cluster 4 monitors the oil storage tanks.
Alternatively, if there were hierarchical sub-clusters on a lower level,
cluster 1.1 could monitor the low pressure gas compressor and cluster 1.2 could
be responsible for the high pressure compressor. Some facilities have more than
one unit of a given type of equipment. For example, if a regular-sized oil
storage tank is cluster 1.1 and there are three oil storage tanks in a
refinery, then cluster 1.1.1 monitors the first storage tank, 1.1.2 the second,
and 1.1.3 the third. This demonstrates the scalability of the wireless sensor
network: it would be simple to add one or more storage tanks, or to remove an
obsolete or damaged storage tank.
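The dotted numbering above (1, 1.1, 1.1.1, ...) makes adding or removing a
sub-cluster a purely local operation, which is what gives the network its
scalability. A minimal sketch of generating the next child identifier under a
parent cluster might look as follows; the string-based scheme is an assumption
for illustration.

```python
# Sketch of the hierarchical cluster-numbering scheme described above:
# a new sub-cluster under "1.1" becomes "1.1.3" if "1.1.1" and "1.1.2"
# already exist. This simple string scheme is illustrative only.

def add_subcluster(clusters, parent):
    """Create the next child id under `parent` and add it to the set."""
    # Direct children have exactly one more dotted level than the parent.
    children = [c for c in clusters
                if c.startswith(parent + ".")
                and c.count(".") == parent.count(".") + 1]
    new_id = f"{parent}.{len(children) + 1}"
    clusters.add(new_id)
    return new_id
```

Adding a fourth storage tank to the example above is then a one-line call,
and removing an obsolete tank is simply discarding its id from the set.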
In this paper, a model was proposed for embedding intelligent agents on sink
nodes to manage clusters. The agent architecture and deployment were examined
and illustrated with an example of an oil and gas refinery. This type of
embedded deployment raises several challenges. The most important lies in fault
recovery; for example, how the multi-agent system reacts when a sink node
fails.
Challenges also arise with inter-cluster communication because the agents are
embedded. In a cloud-based system, a centralized deployment gives agents access
to most of the network to negotiate with the other sink nodes. In an embedded
system, a sink node may not be in range of other clusters from which to borrow
sensor nodes, among many other challenges. A new protocol for network discovery
and integration needs to be developed for the plug-and-play introduction of new
sensor nodes. As previously mentioned, this is a key challenge for industrial
systems and remains under-researched.
Currently, an embedded agent-managed cluster environment is being developed.
This environment implements the agents on higher-powered micro-computers (such
as the Raspberry Pi) and uses the ZigBee RF network protocol via XBee
transmitters. The anchor nodes consist of simple controllers, such as PLCs.
This hardware will provide simulated data back to a SCADA system and will allow
many situations to be examined.
References
1. Broy, M., Schmidt, A.: Challenges in engineering cyber-physical systems. Computer 2, 70–72
(2014)
2. Lasi, H., et al.: Industry 4.0. Bus. Inf. Syst. Eng. 6(4), 239–242 (2014)
3. Callaway, E.H.: Wireless Sensor Networks: Architectures and Protocols. CRC Press (2003)
4. Wooldridge, M.: An Introduction to Multiagent Systems. John Wiley & Sons (2009)
5. Leitão, P., Karnouskos, S. (eds.): Industrial Agents: Emerging Applications of Software
Agents in Industry. Morgan Kaufmann (2015)
6. Hla, K.H.S., Choi, Y.S., Park, J.S.: The Multi Agent System Solutions for Wireless Sensor
Network Applications. Agent and Multi-Agent Systems. Technologies and Applications,
pp. 454–463. Springer, Berlin, Heidelberg (2008)
7. Karlsson, B., et al.: Intelligent sensor networks, an agent-oriented approach. In: Workshop on
Real-World Wireless Sensor Networks (2005)
8. Tynan, R., Ruzzelli, A.G., O’Hare, G.M.P.: A methodology for the development of
multi-agent systems on wireless sensor networks. In: 17th International Conference on
Software Engineering and Knowledge Engineering (SEKE’05). Taipei, Taiwan, Rep. of
China, 14–16 July, 2005
9. Gholami, M., Taboun, M., Brennan, R.W.: Comparing alternative cluster management
approaches for mobile node tracking in a factory wireless sensor network. In: 2014 IEEE
International Conference on Systems, Man and Cybernetics (SMC). IEEE (2014)
10. Ningxu, C., et al.: Application-oriented intelligent middleware for distributed sensing and
control. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42(6), 947–956 (2012)
11. Savazzi, S., Guardiano, S., Spagnolini, U.: Wireless sensor network modeling and deployment
challenges in oil and gas refinery plants. Int. J. Distrib. Sensor Netw. (2013)
12. Gil, P., Santos, A., Cardoso, A.: Dealing with outliers in wireless sensor networks: an oil
refinery application. IEEE Trans. Control Syst. Technol. 22(4), 1589–1596 (2014)
13. Devold, H.: Oil and gas production handbook (2006)