
Received May 15, 2020, accepted June 10, 2020, date of publication June 15, 2020, date of current version June 24, 2020.

Digital Object Identifier 10.1109/ACCESS.2020.3002206

FoV-Based Quality Assessment and Optimization for Area Coverage in Wireless Visual Sensor Networks

THIAGO C. JESUS 1,2, DANIEL G. COSTA 1, (Senior Member, IEEE), PAULO PORTUGAL 2, (Member, IEEE), AND FRANCISCO VASQUES 2
1 Department of Technology, State University of Feira de Santana, Feira de Santana 44036-900, Brazil
2 Faculty of Engineering, INEGI/INESC-TEC, University of Porto, 4200-465 Porto, Portugal

Corresponding author: Thiago C. Jesus ([email protected])


This work was a result of the project Operation NORTE-08-5369-FSE-000003 supported by Norte Portugal Regional Operational
Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Social Fund (ESF). Additionally,
this work was also partially funded by the Brazilian CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) Agency
under Grant 204691/2018-4.

ABSTRACT Wireless visual sensor networks are commonly employed in several application contexts, such as smart cities, intelligent transportation systems and industrial management, aiming at the use of visual data from cameras to provide enhanced information and to expand the networks' utility. In these scenarios, some applications may require high-definition images when performing more specialized tasks, for example face and text recognition, adding an important monitoring requirement when using camera-based sensors. In fact, it is important to ensure that the network is able to gather visual data with the quality required by each task, and such perceived quality may be processed as a function of the Field of View (FoV) of the visual sensors. In order to address this issue, new quality metrics are proposed for wireless visual sensor networks that are deployed to perform area coverage, exploiting different perceptions of the FoV. Those metrics are proposed along with redeployment optimization methods for visual sensor nodes, based on greedy and evolutionary approaches, aiming at the improvement of the perceived monitoring quality. The proposed metrics and algorithms are expected to be more realistic than previous solutions, allowing the flexible processing of variables such as cameras' positions, orientations and viewing angles, thus providing high flexibility in the definition of parameters and significantly contributing to the development of sensor networks based on visual sensors.

INDEX TERMS Wireless sensor networks, area coverage, optimization, visual quality, quality of monitoring,
quality metric, visual sensing, field of view, mathematical modelling.

The associate editor coordinating the review of this manuscript and approving it for publication was Yong Yang.

I. INTRODUCTION
Several applications have benefited from the use of distributed systems and visual information to achieve a more comprehensive perception of the monitoring context, usually employing (Wireless) Visual Sensor Networks (WVSN) composed of a set of camera-enabled nodes [1]. For those networks, the retrieved visual information must be adequate to the application monitoring requirements, i.e., proper visual data ''quality'' is essential to perform the expected tasks. However, the definition of quality is subjective and may vary considerably, making the definition of quality assessment metrics harder.

Nevertheless, such metrics could be an important requirement to develop and manage applications in many contexts, such as Industry 4.0 [2], environmental monitoring [3], mobility tracking [4], the Internet of Things [5], or just for network metric analysis, as in dependability assessment [6].

Actually, when considering the use of visual sensor networks, some tasks will demand a stronger perception of quality, for example requiring the adjustment of the position or orientation of cameras in order to enhance the quality of the retrieved visual data, taking closer and sharper images of objects. Similarly, more powerful hardware may be used to

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
109568 VOLUME 8, 2020

gather better images, also impacting quality [7]. On the other hand, some monitoring tasks may not require high-definition images, such as in intrusion detection applications that essentially need to detect movement patterns. For this last case, it is possible to save resources or use cheaper hardware in order to perform simpler monitoring tasks, while still achieving acceptable results. Such a scenario, with applications with different demands concerning the quality of the retrieved visual data, favors the adoption of quality metrics that leverage the processing and resource allocation in such networks.

It is important to distinguish between quality of image [8]–[11] and Quality of Monitoring (QoM) [12]–[16] in a visual network. Image quality assessment evaluates the image content degradation as a consequence of acquisition, processing, compression, storage, transmission and reproduction processes. On the other hand, quality of monitoring is a generic term used to refer to the capability of a network to perform the expected monitoring functions over the region of interest. In this case, the value of the gathered visual information is assessed without considering content degradation, although it is possible to join both approaches for a more realistic assessment. In this paper, only the assessment of quality of monitoring in wireless visual sensor networks is discussed.

Still considering the relevant aspects of quality of visual monitoring, several network aspects like area coverage and redundancy [17], [18] can affect the perceived quality, which means that it can be indirectly used to evaluate other metrics. It can also determine other network attributes such as dependability, availability, lifetime or power consumption [15], [19], which means that a proper quality mapping could also be used to improve other network features or even to prevent and identify failures [20].

In previous works in the literature [21]–[23], some QoM metric is usually exploited according to the sensors' ability to view one or more targets. However, it is also possible to define a QoM for area coverage applications [24], which is a more challenging task since there are no reference points with respect to the camera. Instead, the regions expand continuously, varying both in distance and perspective from each visual sensor node. Moreover, some regions may be redundantly covered by several visual nodes, increasing the complexity of area coverage. For these cases, it makes more sense to define a metric for the entire network considering the composition of covered areas.

The problem of quality of monitoring regarding area coverage was initially addressed in [24], but imposing several restrictions on the network deployment (position, orientation, viewing angle) and, as a consequence, on the network optimization process. In a different way, in this article we circumvent these constraints by proposing a new approach for the assessment of the quality of monitoring in WVSN, defining new QoM-based metrics. The proposed metrics, defined as the Area Quality Metric (AQM) and its variations, are types of QoM metrics that can be used for optimizations, comparisons or the exploitation of different quality aspects, such as redundancy and dependability. In fact, in this work, the proposed metrics are used to guide the network redeployment based on QoM-based optimizations. For this purpose, we also propose three optimization methods (Greedy, Pseudo-Greedy and Evolutionary - Genetic Algorithm) and compare their results in terms of quality improvement. The model is more realistic than others found in the literature since it does not consider predefined cameras' positions or orientations (neither on deployment nor on redeployment) and provides support for heterogeneous hardware configurations. To the best of our knowledge, no metric and optimization algorithms with these objectives have been proposed before.

The remainder of the article is organized as follows. Section II presents some related works regarding coverage and quality in Wireless Sensor Networks (WSN). Some fundamental concepts are presented in Section III. Then, three quality metrics are proposed and discussed in Section IV. The proposed optimization algorithms are described in Section V, detailing the ideas behind the greedy, pseudo-greedy and evolutionary approaches. Section VI presents and discusses the results for some WVSN scenarios. Finally, the conclusions are stated in Section VII.

II. RELATED WORKS
Quality of monitoring in wireless sensor networks has been addressed in different contexts in the literature. Concerning visual sensing, the most common approaches are related to target coverage, comprising both scalar and directional sensors. Some of the most relevant works are discussed in this section, giving important clues of how quality of monitoring has been assessed and optimized in visual sensor networks.

In [15], the authors aim to find a scheduling approach for scalar sensors in order to maximize the target coverage quality. In this case, a QoM metric for targets is explicitly proposed, measuring the number of time slots in which targets are covered, considering the amount of energy consumed by each sensor during the monitoring period.

Similarly, the authors in [14] also use the concept of QoM to provide an efficient sensor scheduling, selecting the active sensors based on the QoM of the sensors related to the desired targets. However, in that work, QoM is calculated for each single sensor, considering the ratio of the distance between the sensor and target to the sensing range. Although the authors approach directional sensors, they focus specifically on ultrasonic and infrared sensors, which present a different notion of QoM. Hence, that method only considers one sensor active at a time, and consequently it does not provide a QoM metric for the entire network.

The distance of a target point to a sensor is also considered to determine a perception of QoM in [16], but in visual networks, which means that the directional sensors are cameras. A QoM metric is defined and used to guide the network deployment based on a predefined set of discrete feasible configurations for all camera types.


TABLE 1. Adopted notations.

In [25], QoM in visual networks is also addressed, also considering target coverage, but exploiting the fact that the quality of visual information is sensitive to its viewpoint. That way, the authors address full-view coverage considering target viewing related to the target facing direction and the viewed direction of the objects, which reflects the viewpoint of a sensor. However, the perception of QoM is assumed as a condition to select a non-faulty sensor node instead of being considered as a metric.
Still considering visual networks for target coverage,
the authors in [13] consider an anisotropic monitoring
due to the perspective of monitored objects in relation to
the cameras. In that work, the cameras can assume any
orientation and position, since they are attached to unmanned
aerial vehicles (UAV).
In [24] the notion of QoM in visual networks is
discussed for area coverage. The authors consider the
weighted sensing quality and the importance of sensing
area to establish Quality of Monitoring in a full coverage
scenario. That work focuses on the network deployment
(position, orientation, viewing angle) and, as a consequence,
it can support an eventual network optimization process.
However, coverage redundancy is not clearly discussed in
that paper, providing inconsistent understanding about QoM
calculation.
In this article we discuss and propose new QoM metrics
for quality assessment when performing area coverage by
visual sensor networks, without restrictions to cameras’
orientations, positions or viewing angles. Such metrics are
expected to be meaningful for single visual sensor nodes
or even for an entire network. Moreover, we show how to
use these metrics to improve the QoM perception of the
network using different optimization methods, in a flexible
and broader perspective. Putting all these together, this article
brings important contributions to the area, which were not
proposed before.

III. DEFINITIONS AND BASIC CONCEPTS
In this article, we follow the visual coverage formulation presented in [21] and [26], taking their mathematical models as reference. Table 1 lists the notations we use in this paper.

In the defined problem scope, it is considered a set of visual sensors, VS = {vs1, vs2, . . . , vsn}, which are deployed over a two-dimensional area A. Each sensor vs ∈ VS is located using Cartesian coordinates and is expected to be equipped with a camera, having a viewing angle θvs and an orientation αvs, as shown in Figure 1, where the camera is represented by the small red circle. Each camera also presents a sensing radius Rs as the approximation of the camera's Depth of Field (DoF), which is the region between the nearest and farthest point that can be sharply sensed [27]. Each camera may assume different values for its parameters; however, without loss of generality, we consider in this work that all visual sensors are identical and so they are configured with the same values. For simplification, and in order to make this problem tractable and computationally feasible when employing WSN in real applications, sensor nodes should be assumed as having limited hardware processing resources and low-power requirements [21].

FIGURE 1. Field of View of a visual sensor.

The Field of View (FoV) of any visual sensor is defined as the area of an isosceles triangle composed of three vertices, A, B and C (see Figure 1), being (Ax, Ay) the Cartesian coordinates of the sensor. The coordinates of vertices B and C can be calculated by Equation 1, and the FoV of any visual sensor vsi is the area of the triangle ABC, which can be computed using trigonometry, as expressed in


Equation 2 [21].

Bx = Ax + Rs · cos(αvs)
By = Ay + Rs · sin(αvs)
Cx = Ax + Rs · cos((αvs + θvs) mod 2π)
Cy = Ay + Rs · sin((αvs + θvs) mod 2π)    (1)

FoVvs = (Rs² · sin(θvs)) / 2    (2)

The monitoring field A on which a WVSN operates can characterize different regions, such as an industrial plant, a military field, a public square, an avenue, an entire city or even a farm. Whatever the case, a monitoring field may be composed of one or more Monitoring Areas (MA), each one described as a rectangle defined by its origin (x1, y1), a width w, a height h and a rotation angle β, as shown in Figure 2. The other vertices of a MA can be computed in a 2D Cartesian plane, as presented in Equation 3.

FIGURE 2. Monitoring area being covered by visual sensor nodes vsi, i = 1, . . . , 7.

x2 = x1 + w · cos(β)
y2 = y1 + w · sin(β)
x3 = x1 + h · sin(β)
y3 = y1 − h · cos(β)
x4 = x1 + h · sin(β) + w · cos(β)
y4 = y1 − h · cos(β) + w · sin(β)    (3)

A MA is the area of interest of a visual application and only visual information from this sub-region is relevant to the considered WVSN. If necessary, a non-rectangular monitoring area can be defined as the union of t smaller MAs, as shown in Figure 3, for instance. In other words, MA = MA1 ∪ MA2 ∪ · · · ∪ MAt, such that ⋂_{i=1}^{t} (MAi) = ∅.

FIGURE 3. A non-rectangular monitoring area represented by the union of several rectangles.

In that case, a circular monitoring area can be represented by several rectangles, each one defining a single monitoring area. The resulting MA will be more realistic if the empty spaces are filled with more or even smaller rectangles.

A single MA can be divided into smaller regions, called Monitoring Blocks (MB), each one defined as a rectangle represented by its origin (x1s, y1s), a width ws, a height hs and a center (xc, yc). Thus, a MA is composed of a grid of monitoring blocks with size M × N. In this case, the ''area coverage'' problem can be indirectly approached as several ''target coverage'' problems, where each point (xc, yc) is a target with infinitesimal size. This is an important abstraction aimed at higher efficiency, while keeping the computational cost low, as we previously discussed in [28]. In that paper, the monitoring blocks approach for area calculation was proven to be a good approximation in terms of accuracy, with low computational cost.

We consider that a single monitoring block mb is covered by a visual sensor node vs if the center (xc, yc) of mb is inside the polygon area of FoVvs. In this configuration, the Monitoring Block is assumed to be covered as a whole, and its area ws × hs is counted towards the total coverage area [27], [28]. We represent a covered MB with the notation mb ∈ FoVvs. This definition can be extended for a set of visual sensors VS according to Equation 4. In this paper, the impact of the cameras' perspective on area coverage is not considered, i.e., it makes no difference here to cover a MB from different angles.

cover(mb, VS) = { 1, if ∃vs ∈ VS | mb ∈ FoVvs
                  0, otherwise    (4)

The total area covered by VS is the product of the area of a single MB by the quantity of monitoring blocks covered by at least one visual sensor, according to Equation 5.

CAmb(VS) = ws · hs · Σ_{j=1}^{M} Σ_{l=1}^{N} cover(mbj,l, VS)    (5)

Besides the coverage area, another important aspect related to visual sensor networks is the subjective perception of Quality of Monitoring (QoM). In this work, the QoM will be related to how ''good'' is the visual data definition that a


camera can provide for a region located at a certain distance dov (distance of view). Actually, it is considered that the farther the monitored region is from the camera, the lower is the level of detail related to that region in captured images, and consequently, the lower should be the amount of visual information extracted from that region. This means that the quality of captured images decreases as the distance from the camera increases, which leads us to consider different quality levels for the FoV of a camera. In other words, a FoV can be perceived as an area with different associated levels of monitoring quality over it.

That way, a sensor node vs ∈ VS has its FoV divided into disjoint sub-regions FoVvs^l1, FoVvs^l2 and FoVvs^l3, which determine the visual levels 1, 2 and 3, with high, medium and low quality, respectively. The first level is defined by an isosceles triangle AFG with its height dov1 defined as the distance from vertex A to the end of FoVvs^l1. The second and third levels are defined by the isosceles trapezoids DEGF and BCED, with heights equal to (dov2 − dov1) and (dov3 − dov2), respectively, as depicted in Figure 4. It is important to notice that the sensor FoV is not modified. It is only re-interpreted, being FoVvs = FoVvs^l1 ∪ FoVvs^l2 ∪ FoVvs^l3 and FoVvs^l1 ∩ FoVvs^l2 ∩ FoVvs^l3 = ∅.

FIGURE 4. Quality perspective for a visual sensor's Field of View.

The distances dov1, dov2 and dov3 can be calculated according to Equation 6, and the proportions of dov1 and dov2 with respect to dov3 can be defined freely, considering the application requirements and the camera's constraints. The coordinates of vertices D, E, F and G can be calculated according to Equation 7. In this article, we consider only three quality levels, but they could be extended to incorporate additional levels, without loss of generality. In fact, the quality variation will be more realistic if the same FoV is divided into more quality levels.

dov3 = Rs · cos(θvs/2)
dov2 = (3/4) · dov3
dov1 = (2/4) · dov3    (6)

AD = AE = dov2 / cos(θvs/2)
AF = AG = dov1 / cos(θvs/2)
Dx = Ax + AD · cos(αvs)
Dy = Ay + AD · sin(αvs)
Ex = Ax + AE · cos((αvs + θvs) mod 2π)
Ey = Ay + AE · sin((αvs + θvs) mod 2π)
Fx = Ax + AF · cos(αvs)
Fy = Ay + AF · sin(αvs)
Gx = Ax + AG · cos((αvs + θvs) mod 2π)
Gy = Ay + AG · sin((αvs + θvs) mod 2π)    (7)

In order to provide a quantitative assessment, we assign a weight value to each visual level, which is w1 = 1 for FoVvs^l1, w2 = 0.5 for FoVvs^l2 and w3 = 0.25 for FoVvs^l3. Obviously, different values could be assigned, according to the application requirements. For example, following the definitions in [24], the assigned values would be w1 = 4 for FoVvs^l1, w2 = 2 for FoVvs^l2 and w3 = 1 for FoVvs^l3. Actually, we use a percentage approach because an entire region ''poorly'' viewed would be equivalent to (a percentage) part of an ''adequately'' viewed region. Nevertheless, this does not mean that it is indifferent for the application to monitor a small area with good quality or a large area with low quality. In fact, an application is probably not able to extract the same visual information from 100 MB poorly monitored (w3 = 0.25) and from 25 MB well monitored (w1 = 1), and vice-versa. However, we believe these quality weights are defined in a way that they can provide relevance equivalence between levels. For example, the information extracted from 25 well monitored MB can be as relevant as the information extracted from 100 poorly monitored MB, depending on the application. Actually, with less covered area, but with high coverage quality, it may be possible to perform facial recognition. On the other hand, with a larger covered area, but with an associated lower coverage quality, it could be possible to detect intrusion or to perform pattern identification. Therefore, it is not necessarily about the importance of the task, but the possibility of adding value to visual information. This notion simplifies the understanding of the proposed metrics and indicates how practical they can be when performing quality assessment.

IV. PROPOSED QUALITY METRICS
One of the challenges to assess the monitoring quality for area coverage is the necessity to deal with continuous variations of quality as a function of the distance of view of a visual sensor. This may be a prohibitive task if it is desired to compute the QoM of the entire network instead of a single visual sensor. For that, this work treats this potentially complex scenario as a discrete problem, considering that the area to be monitored will be divided into monitoring blocks, thus approximating ''area coverage'' to the ''coverage of several


targets''. In this case, the smaller the MB, the more realistic the QoM assessment will be.

In this context, we propose three new QoM metrics: AQM, AQMabs and AQMrel. These Area Quality Metrics consider that, similarly to the visual levels, each monitoring block mb ∈ FoVvs receives a weight wl, which is the weight of the FoV sub-region of vs where mb can be viewed, as expressed in Equation 8.

wl(mb, vs) = { w1, if mb ∈ FoVvs^l1
               w2, if mb ∈ FoVvs^l2    (8)
               w3, if mb ∈ FoVvs^l3
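To make the level assignment concrete, the weight wl of Equation 8 can be computed by testing a block center against the FoV wedge and the level boundaries of Equation 6. The following is a minimal Python sketch under our own assumptions: the point-in-triangle test via angular offset and projection on the FoV bisector, as well as the function and parameter names, are illustrative, not taken from the paper.

```python
import math

W1, W2, W3 = 1.0, 0.5, 0.25  # weights of visual levels 1, 2 and 3

def block_weight(ax, ay, alpha, theta, rs, xc, yc):
    """Return wl(mb, vs) for a block centered at (xc, yc), or 0 if uncovered.

    The sensor sits at (ax, ay), with orientation alpha, viewing angle theta
    and sensing radius rs (all angles in radians).
    """
    dx, dy = xc - ax, yc - ay
    bisector = alpha + theta / 2.0
    # Angular offset of the block center from the FoV bisector, wrapped to (-pi, pi]
    phi = math.atan2(dy, dx) - bisector
    phi = (phi + math.pi) % (2 * math.pi) - math.pi
    if abs(phi) > theta / 2.0:
        return 0.0                                # outside the viewing wedge
    proj = math.hypot(dx, dy) * math.cos(phi)     # distance along the bisector
    dov3 = rs * math.cos(theta / 2.0)             # Equation 6
    dov2 = 0.75 * dov3
    dov1 = 0.50 * dov3
    if proj <= dov1:
        return W1                                 # level 1 (triangle AFG)
    if proj <= dov2:
        return W2                                 # level 2 (trapezoid DEGF)
    if proj <= dov3:
        return W3                                 # level 3 (trapezoid BCED)
    return 0.0                                    # beyond the triangle base BC

# Example: sensor at the origin, alpha = -30 deg, theta = 60 deg (so the
# bisector is the x-axis), Rs = 10; a block 2 units ahead lies in level 1.
print(block_weight(0, 0, math.radians(-30), math.radians(60), 10, 2, 0))  # 1.0
```

Under the Equation 6 proportions, the level boundaries along the bisector fall at 50%, 75% and 100% of dov3 = Rs · cos(θvs/2), which is exactly what the three threshold tests encode.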

If a monitoring block mb is redundantly covered by a set of visual sensors VS, then the weight of mb is the maximum weight among the associated sensors, as expressed in Equation 9. It is worth remarking that in this paper the impact of the perspective of coverage is not considered. This explains why the maximum weight is taken instead of other compositions: the visual information extracted from a MB by different sensors will be as good as the best quality of monitoring available among the associated sensors. In a different scenario, where the coverage direction is considered, a sum or average should provide a better quality representation.

wmax(mb, VS) = max{ wl(mb, vs) | ∀vs ∈ VS }    (9)

FIGURE 5. Quality of the performed area coverage when viewing monitoring blocks.

Figure 5 illustrates the mapping of the monitoring quality of the MB covered by two visual sensors, including the overlapping considerations. The MB marked with a green circle are in level 1 (highest quality), while the ones marked with a yellow star are in level 2 (medium quality) and the MB marked with a red square are in level 3 (lowest quality). Notice that there are some MB marked with more than one symbol: those MB are redundantly monitored by more than one visual sensor and their assigned weight is the one related to the highest quality.

We define the proposed metrics as presented in Equations 10, 11 and 12.

AQMabs(VS) = hs · ws · Σ_{j=1}^{M} Σ_{l=1}^{N} wmax(mbj,l, VS)    (10)
AQMrel(VS) = AQMabs(VS) / CAmb(VS)    (11)
AQM(VS) = AQMabs(VS) / (h · w)    (12)

The AQMabs is an intermediate metric that provides an absolute perspective of the quality of monitoring, indicating the equivalent quantity of monitoring blocks. In a different way, the AQMrel provides a relative perspective of the quality of monitoring, presenting the percentage of the equivalent monitoring blocks related to the covered area. Finally, AQM provides a global perspective of the quality of monitoring, indicating the percentage of the equivalent monitoring blocks related to the entire monitoring field. Actually, AQM best fits to be used as an objective function in optimization processes, since it is associated with the entire monitoring field. On the other hand, AQMrel reveals an innermost panorama of the coverage. A low value of AQMrel (< 62.5%) implies that a larger area is covered with the majority of MB being monitored with ''low quality'', while a high value of AQMrel (> 62.5%) implies that a smaller area is covered with the majority of MB being monitored with ''high quality''. The mean value of AQMrel = 62.5% is justified because AQMrel varies from 25% to 100%, which is easy to verify. In the worst case scenario, all covered MBs would be in the lowest quality level (w3), which generates AQMrel = 25%. In the best case scenario, all covered MBs would be in the highest quality level (w1), which generates AQMrel = 100%.

TABLE 2. QoM metrics analysis for ws = hs = 1.

Table 2 shows fictional scenarios to better understand the meaning of the proposed metrics. The AQM is the fundamental metric, associating area coverage with the quality of monitoring. However, such monitoring can be performed in different ways. For example, a large area may be monitored with low quality, while a small area can be monitored with high quality. These two scenarios would probably present a similar AQM value, as presented in lines 1, 2 and 3 of Table 2. In those cases, it is difficult to perform

a worthwhile assessment considering only the AQM metric, and thus the other proposed metrics can be used for a better perception of the considered visual sensor network.

Therefore, the AQMrel appears as an auxiliary metric to help to ''untie'' such comparisons. Thus, it is possible to distinguish the monitoring quality between coverage schemes prioritizing area coverage (lower AQMrel) or quality of monitoring (higher AQMrel). The relation between these metrics can be used to improve objective functions in optimization processes. Some authors use redundancy as a dependability or, specifically, as an availability metric [26], [29]. In this case, AQMrel can be used to guide an optimization process focused on the maximization of quality and redundancy, for example. And the proposed metrics can be exploited to analyze and enhance quality by optimization processes and network redeployment.

V. PROPOSED OPTIMIZATION ALGORITHMS
In order to illustrate the utility of the proposed metrics, three optimization algorithms are proposed, aimed at the maximization of the FoV-based quality of monitoring in randomly deployed WVSN. Those optimization solutions, notably a classic greedy algorithm, a pseudo-greedy algorithm and an evolutionary algorithm, consider that visual sensors are rotatable, and so their orientations can be changed.

The proposed greedy and pseudo-greedy algorithms consider that each visual sensor may take one of a finite set of disjoint orientations. This assumption is aimed at making this problem tractable, while still assuring near-optimal maximization of the quality of monitoring [26]. On the other hand, since evolutionary algorithms perform a guided random search, showing good results when seeking in a very large solution set, we consider for the proposed evolutionary algorithm that visual sensors can take any orientation.

A. GREEDY
A reasonable and feasible way to optimize the network is to compute the best orientation for each sensor individually, aiming at the maximization of covered monitoring blocks. A classical greedy heuristic looks for a global optimization while handling only local data. This is due to the complexity of dealing with global information, such as area coverage.

That way, for the greedy algorithm, a visual sensor vs may assume Ovs different orientations (sectors), where each sector has the same angle γsec. The value of γsec can be calculated from the quantity of sectors Ovs, according to Equation 13.

γsec = ⌊360°/Ovs⌋    (13)

Therefore, for each sector o = 1, . . . , Ovs, the possible new orientation of vs will be αvs^o, according to Equation 14 (as shown in Figure 6, where γsec = θvs = 60°), where αvs is the original orientation of vs.

αvs^o = (o − 1) × γsec + αvs    (14)

FIGURE 6. Sectors in the Greedy and Pseudo-Greedy algorithms.

The proposed Greedy approach is based on individually testing which orientation provides the highest QoM for each single visual sensor, then redeploying the sensor to that orientation. In this case, each sensor node will be re-orientated in order to compute the highest AQM. Actually, this is a simple way to improve the value of QoM, while keeping lower computational costs as compared with other approaches, although greedy algorithms may provide sub-optimal results since they only evaluate local information [30]. The proposed Greedy approach is detailed in Algorithm 1.

It is worth remarking that, since this approach only analyzes each sensor individually, it is not possible to guide the optimization directly by the metric AQM. However, it is important to present this method to realize the premise of quality optimization and to have a basis for comparison. In spite of that, one should notice that a notion of QoM is provided by the variable SensorQoM (Algorithm 1, Line 17), which is the quality perceived by a single visual sensor.

B. PSEUDO-GREEDY
The main disadvantage of the Greedy approach is that it uses only local information in the optimization process. To cope with that problem, we propose a Pseudo-Greedy algorithm based on [26], but here aiming at QoM optimization instead of dependability optimization. This approach keeps looking for a global optimum while searching in local solutions but, in a different way from a classic Greedy approach, it uses some global information recovered in a lightweight way to improve the searching process. The idea is to re-orientate each sensor node in a way that it generates the highest overall quality of monitoring instead of the highest quality possible for the sensor. Algorithm 2 shows the proposed Pseudo-Greedy heuristic.

The first step is to identify, for each visual sensor node, its covered monitoring blocks (Line 3) and store the associated weight value (Line 4). Then, iteratively, each sensor (one at a time) is re-orientated to the position that generates the highest QoM considering the current position of the other sensors. For this, a finite set of sectors is generated,
109574 VOLUME 8, 2020


T. C. Jesus et al.: FoV-Based Quality Assessment and Optimization for Area Coverage in WVSN
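For concreteness, the sector enumeration of Equations 13 and 14 can be sketched in Python (a hypothetical helper name; the paper's own experiments were implemented in MATLAB):

```python
def candidate_orientations(alpha_deg, n_sectors):
    """Candidate orientations of a visual sensor.

    Equation 13: gamma_sec = 360 / n_sectors (equal-width sectors).
    Equation 14: sector o yields (o - 1) * gamma_sec + alpha,
    where alpha is the sensor's original orientation.
    """
    gamma_sec = 360.0 / n_sectors                       # Equation 13
    return [((o - 1) * gamma_sec + alpha_deg) % 360.0   # Equation 14
            for o in range(1, n_sectors + 1)]

# A sensor originally at 30 degrees with Ovs = 6 sectors:
print(candidate_orientations(30.0, 6))
# [30.0, 90.0, 150.0, 210.0, 270.0, 330.0]
```

With the Ovs = 30 sectors used later in the experiments, each sensor is thus tested over orientations spaced 12° apart.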

Algorithm 1 – Proposed Greedy Optimization Algorithm

Data: VS = Greedy(VS, MA, Ovs);
Input: List of sensors, monitoring area parameters and number of sectors.
Output: A set of reoriented visual sensors VS.
1  foreach visual sensor vs in sensors set VS do
2    angles = [];
3    γsec = ⌊360°/Ovs⌋;
4    α′ = vs.α;
5    foreach sector o = 1, ..., Ovs do
6      MB = [];
7      αvs^o = (o − 1) × γsec + α′;
       // Re-orientate vs with αvs^o
8      vs.α = αvs^o;
9      foreach monitoring block mb_j,l do
10       if isCovered(mb_j,l) then
11         MB[j,l] = w_l(mb_j,l, vs);
12       end
13       else
14         MB[j,l] = 0;
15       end
16     end
17     SensorQoM = sum(MB);
18     angles.add({αvs^o, SensorQoM});
19   end
20   vs.α = angles.getMaxSensorQoM();
21 end
22 return VS;

Algorithm 2 – Proposed Pseudo-Greedy Algorithm

Data: VS = pseudoGreedy(VS, MA, Ovs);
Input: List of sensors, monitoring area parameters and number of sectors.
Output: A set of reoriented visual sensors VS.
1  foreach visual sensor vs in sensors set VS do
2    foreach monitoring block mb_j,l do
     // Create a matrix MB[] of covered monitoring blocks per visual sensor
3      if isCovered(mb_j,l) then
4        MB[j,l,vs] = w_l(mb_j,l, vs);
5      end
6      else
7        MB[j,l,vs] = 0;
8      end
9    end
10 end
11 k = 0; angles = []; anglesErr = ∞;
12 while anglesErr > ε && k++ < |VS| do
13   foreach visual sensor vs in sensors set VS do
14     γsec = ⌊360°/Ovs⌋;
15     α′ = vs.α;
16     foreach sector o = 1, ..., Ovs do
       // Re-orientate vs with αvs^o
17       αvs^o = (o − 1) × γsec + α′;
18       vs.α = αvs^o;
19       update(MB[:, :, vs]);
20       MB′[] = max(MB[:, :, :]);
21       AQMabs = hs × ws × sum(MB′);
22       AQM = AQMabs / (h × w);
23       angles.add({αvs^o, AQM});
24     end
25     vs.α = angles.getMaxAQM();
26     update(MB[:, :, vs]);
27   end
28   anglesErr = abs(angles(k) − angles(k − 1));
29 end
30 return VS;

which are disjoint possible orientations to be assumed by the visual sensor (Line 14). For each sector to be tested, the MBs covered by the sensor in its new orientation are identified, storing the associated weight value according to Equation 8 (Line 19). Then, the AQM is computed considering the new orientation of vs. Each computed value of AQM is stored (Line 23) and the orientation which generates the highest quality is associated to the sensor (Line 25). This procedure is repeated for each sensor until the network assumes a convergent deployment configuration. Since computing the AQM is basically the summing of all elements in a matrix (Lines 21 and 22), this step adds valuable global information to the optimization process without computational overhead.

C. EVOLUTIONARY
In order to search in a vaster solution space, we also implemented an evolutionary optimization process based on Genetic Algorithms. These algorithms perform a guided random search, inspired by natural evolution concepts such as survival of the fittest, crossover and mutation. The randomness of the algorithm makes it a good approach to look for optimal or near-optimal combinations of solutions that might not otherwise be found in a lifetime. However, such an approach presents a relatively high computational complexity, which may be prohibitive for some scenarios. Since we provide a lightweight heuristic to compute QoM, Genetic Algorithms became a feasible solution. Figure 7 shows the sequence of steps of these algorithms.

The execution starts with an initial population, which comprises the original deployment configuration and some eligible solutions randomly generated. A chromosome belonging to the population is a set containing the orientation αvs of each visual node. This population is evaluated (fitness) in order to identify which chromosomes best fit as solution for the
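As an illustration, the outer loop of Algorithm 2 can be sketched in Python; `aqm_of` and `candidates` are hypothetical callables standing in for the AQM computation (Lines 19–22) and the sector enumeration of Equation 14, and the rounds stop when the achieved AQM converges:

```python
def pseudo_greedy(orientations, aqm_of, candidates, eps=1e-6):
    """Sketch of the Pseudo-Greedy loop (after Algorithm 2).

    orientations: dict sensor id -> orientation (degrees).
    aqm_of(orientations): global AQM of a deployment (assumed given).
    candidates(alpha): sector orientations per Equation 14.
    Each sensor is re-orientated to maximize the *global* AQM;
    rounds repeat until the AQM improvement falls below eps.
    """
    prev = aqm_of(orientations)
    for _ in range(len(orientations)):          # at most |VS| rounds
        for sid, alpha in list(orientations.items()):
            # keep the sector orientation yielding the highest global AQM
            orientations[sid] = max(
                candidates(alpha),
                key=lambda a: aqm_of({**orientations, sid: a}))
        cur = aqm_of(orientations)
        if abs(cur - prev) <= eps:              # convergence check
            break
        prev = cur
    return orientations
```

The sketch keeps the paper's key design choice: each local re-orientation is scored by the global metric, not by the sensor's own coverage.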


FIGURE 7. Flowchart of typical Genetic Algorithms.

optimization problem. Therefore, the fitness phase computes the AQM generated by all visual nodes together. In the next step, some chromosomes are selected as parents that mate and recombine to create offspring for the next generation. We use a proportional selection based on the Roulette Wheel, i.e., each individual can become a parent with a probability proportional to its fitness, but not necessarily the best chromosomes are chosen [31]. To avoid that, we also apply elitist selection, which directly copies the best chromosome to the new population in the next generation. Then, a crossover is performed, when some chromosomes pair and exchange part of their DNA, which means that some visual sensors from different solutions exchange their orientations. A few visual sensors can also suffer mutation and change their orientation randomly. Crossover and mutation operators are crucial to provide a more diverse population, making the heuristic less prone to be trapped in a local optimum. Finally, elite and selected chromosomes join new chromosomes that are randomly generated to produce a new population. The process is repeated until it fulfills the stopping criterion, which could be a maximum number of generations or fitness convergence [32].

VI. NUMERICAL RESULTS
In this section, some analysis and numerical results related to the utilization of the proposed metrics are presented. Initially, we analyze the impact of the viewing angle on each FoV level, considering constant values of dov1, dov2 and dov3. Then, in order to improve the QoM of a network, the three proposed optimization algorithms are compared regarding maximization of the AQM, and their performances are analyzed and discussed. We also show how the metrics AQM and AQMrel are associated and how they can be used together to analyze the QoM perception. Algorithms and designed simulations were implemented in the Mathworks MATLAB platform.

FIGURE 8. Quality variation related to the viewing angle.

A. EXPERIMENTAL SETTINGS
For the performed simulations, the same configuration is considered, with all visual sensors having the same sensing radius Rs = 150 units of distance (u.d.). As a reference when setting the viewing angle, Figure 8 shows the total area covered by a visual node, as well as the area covered by each FoV level. It can be seen that smaller angles (below 45°) yield a lower covered area. For 75° and higher we have a greater covered area, especially for FoVvs^l1 and FoVvs^l2. However, very wide angles bring the risk of quality loss in peripheral areas. This could be solved by increasing the image definition or by setting an anisotropic QoM with respect to the viewing angle [13]. That being said, we set θvs = 60°, which is an intermediate value and also the average viewing angle of several commercial cameras widely used in academia and industry, such as the RaspiCam and Cisco IP cameras [7]. These visual nodes must cover a monitoring area with w = 500 u.d., h = 500 u.d., ws = hs = 8.5 u.d. and β = 0°, which are the same values used in [26]. The position and orientation of each sensor node were randomly generated. The position is limited to at most 100 u.d. away from the monitoring area, since there is no point in placing a visual sensor so far away that it cannot cover the area of interest.

For the first test, in each simulation, all three proposed algorithms were executed. The Greedy and the Pseudo-Greedy algorithms divide the search space into 30 sectors, while the Evolutionary approach handles 50 chromosomes over 100 generations, with crossover and mutation probabilities of 0.7 and 0.01, respectively. In this scenario, 600 simulations were performed, 100 for each scenario varying the number of visual sensors, i.e., scenarios with 5, 10, 20, 30, 40 and 50 visual sensors. This way, the impact of the quantity of visual sensors and of redundancy on the QoM could be analyzed.

B. OPTIMIZATION COMPARISON
For each scenario, we took the average values from each group of 100 simulations. Figures 9 and 10 show the gain
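The roulette-wheel selection with elitism described in Section V-C can be sketched as follows (toy fitness values; in the proposed scheme the fitness of a chromosome is the AQM of the corresponding deployment):

```python
import random

def select_parents(population, fitness, k, elite=1, rng=random):
    """Fitness-proportional (roulette-wheel) selection with elitism.

    The `elite` best chromosomes are copied unchanged to the next
    generation; the remaining k - elite parents are drawn with
    probability proportional to their fitness, so good (but not
    necessarily the best) chromosomes tend to be chosen.
    """
    ranked = sorted(range(len(population)),
                    key=lambda i: fitness[i], reverse=True)
    selected = [population[i] for i in ranked[:elite]]   # elitism
    total = float(sum(fitness))
    while len(selected) < k:
        spin, acc = rng.uniform(0, total), 0.0           # spin the wheel
        for chrom, fit in zip(population, fitness):
            acc += fit
            if acc >= spin:
                selected.append(chrom)
                break
    return selected
```

For example, `select_parents(['c1', 'c2', 'c3'], [0.2, 0.5, 0.3], k=4)` always keeps `'c2'` (the fittest) and fills the rest proportionally to fitness.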


FIGURE 9. Area coverage improvement.

FIGURE 10. AQM improvement.

of area coverage and of AQM provided by each optimization algorithm in relation to the initial deployment. Since AQMrel is a relative metric, it does not make sense to present its gain. Instead, Figure 11 presents the average value of AQMrel from the executed simulations. As shown in Figure 9, all optimization methods provide a considerable gain in the total covered area, which is reflected in the number of MBs covered. Comparing this result with Figure 10, it is possible to see that all optimization methods also provide a considerable gain in AQM, in the same order: the Pseudo-Greedy algorithm provides a higher gain than the Evolutionary, which provides a higher gain than the Greedy. This is due to the fact that the Greedy algorithm only considers local information in a limited search space, while the Evolutionary approach has an infinite search space, exploited by a guided random search using information of the entire network. Finally, the Pseudo-Greedy algorithm also exploits information from other nodes, but using a deterministic search.

FIGURE 11. Average value of AQMrel.

FIGURE 12. Association between covered area and QoM.

For a small number of visual sensor nodes (5 to 20, in the case of the monitoring network configurations taken as example) there are plenty of uncovered regions, which provides ``room for improvement'' to the optimization algorithms, generating the high gains in Figures 9 and 10. As the number of visual sensors increases (30 to 50), uncovered regions shrink and these gains decrease. This does not mean that the QoM decreases. On the contrary, the optimization process keeps increasing the perception of quality (QoM) and the coverage area.

Furthermore, it is worth discussing the evolutionary results, since that approach is generally applied to complex problems, providing good solutions. However, in this work, it did not provide the best results. The main explanation for this discrepancy between the expected and obtained results is that the evolutionary approach could eventually find the same result as, or better than, the other approaches, since it has a wider solution space. However, it may take much more time than the acceptable timeout that we set in the algorithm search parameters for 50 chromosomes through 100 generations. It could eventually happen in the simulations, but it will hardly appear in the results, since we are taking the average value from 100 simulations. This is, in fact, a reason to
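To make the ``local information'' limitation of the Greedy method concrete, a minimal Python sketch of its per-sensor step (after Algorithm 1) is given below; `covers` and `weight` are hypothetical callables for the coverage test and block weight, and each sensor simply maximizes its own SensorQoM, ignoring what the other sensors already cover:

```python
def greedy_reorient(orientations, blocks, covers, weight, n_sectors=30):
    """Per-sensor greedy re-orientation (sketch of Algorithm 1).

    orientations: dict sensor id -> orientation (degrees).
    covers(block, sid, angle) / weight(block, sid): problem-specific.
    Each sensor keeps the candidate orientation that maximizes its own
    SensorQoM, i.e., the summed weights of the blocks it covers.
    """
    gamma = 360.0 / n_sectors                            # Equation 13
    for sid, alpha in orientations.items():
        best = max(
            (((o - 1) * gamma + alpha) % 360.0           # Equation 14
             for o in range(1, n_sectors + 1)),
            key=lambda a: sum(weight(b, sid)             # SensorQoM
                              for b in blocks if covers(b, sid, a)))
        orientations[sid] = best
    return orientations
```

The default `n_sectors=30` mirrors the experimental setting above; the key contrast with the Pseudo-Greedy is that no global AQM is ever evaluated here.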


FIGURE 13. Example of a WVSN and the respective redeployment by each optimization algorithm.

exalt the Pseudo-Greedy results: it is so fast and efficient in this context that it overcomes the evolutionary approach results. As a final remark, the simulations showed that the Pseudo-Greedy optimization method converges in only 4 iterations, on average.

Considering the AQMrel metric, in Figure 11 the networks with higher relative quality of monitoring, i.e., with more MBs ``well'' viewed (MBs mostly inside FoVvs^l1 and FoVvs^l2), were redeployed by the Greedy optimization with more than 30 sensors, or by the Pseudo-Greedy and Evolutionary optimization with more than 40 sensors, since they present AQMrel ≥ 62.5%. This means that, even if these redeployments do not provide the highest amount of covered area or the highest AQM, the available visual information is assumed to have high quality and can be used for more specialized tasks.

C. QUALITY OF MONITORING VS. AREA COVERAGE
Another interesting result to notice is that the growth of covered area (Figure 9) followed by the growth of the AQM (Figure 10) in this comparison could leave the wrong impression that area coverage optimization (area maximization or redundancy minimization) directly leads to QoM optimization. But that may not be true in many cases. Actually, it is natural that, by increasing the covered area after an optimization process, more previously non-monitored regions will be encompassed, which tends to contribute to the gathering of more visual data and the improvement of the QoM perception. However, a non-optimal area coverage could provide less overlapping of regions with the same weight, providing a higher QoM. An example of this phenomenon can be seen in Figure 12. In this case, the first network presents CAmb(VS) = 28835 and AQMabs(VS) = 14529,
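As a side computation, assuming the normalization AQM = AQMabs/(h × w) (Algorithm 2, Line 22) with h = w = 500 u.d. from the experimental settings, the absolute figures of the two deployments compared in Figure 12 can be checked directly; this is plain arithmetic on the values reported in the text, not additional data:

```python
def normalized_aqm(aqm_abs, h, w):
    # AQM = AQM_abs / (h * w): quality per unit of monitored area
    return aqm_abs / (h * w)

# Reported figures for the two deployments of Figure 12 (h = w = 500):
net1 = (28835, normalized_aqm(14529, 500, 500))   # (covered area, AQM)
net2 = (28166, normalized_aqm(14696, 500, 500))
print(net1, net2)  # network 1 covers more area but has the lower AQM
```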


while the second network presents CAmb(VS) = 28166 and AQMabs(VS) = 14696. Hence, network 1 has a higher covered area and a lower QoM than network 2. This means that we cannot reduce the problem of QoM optimization to a simple area coverage maximization or redundancy minimization.

We investigated this issue more carefully and implemented the Greedy, Pseudo-Greedy and Evolutionary optimization processes aiming at area coverage maximization, according to [26]. Then, the results were compared with the optimization aiming at QoM maximization in each simulation. It could be seen that in only 24% of the simulations did area coverage maximization imply QoM maximization, which justifies the metrics and algorithms proposed in this article.

D. QUALITY OF MONITORING ASSESSMENT EXAMPLE
In order to illustrate the discussion raised by the optimization methods, Figure 13 shows an example of the distribution of visual sensors in the initial deployment and after the execution of the Greedy, the Pseudo-Greedy and the Evolutionary redeployment algorithms in a WVSN with 20 visual nodes. Table 3 presents the covered area and the quality metrics related to each (re)deployment. As suggested by the graphics in Figures 9 and 10, the Pseudo-Greedy optimization provides a higher covered area as well as a higher QoM. It is worth remarking that these high values for the metrics may imply an unconventional redeployment. For example, in Figures 13c and 13d this is achieved by orientating some sensors in a way that their FoVs fall outside of the monitoring area. This also explains the reason for the Greedy optimization to provide a higher AQMrel: there is higher area coverage redundancy, especially for FoVvs^l3 and FoVvs^l2 (see Figure 13b).

TABLE 3. Quality metrics for the results in Figure 13.

VII. CONCLUSIONS
This article discussed the quality of monitoring in wireless visual sensor networks when performing area coverage. Quality was addressed based on the characteristics of the sensor's FoV, but such perception of quality was also exploited as an attribute of the entire visual sensor network. In this context, metrics were proposed for both cases, providing a practical mathematical tool for quality assessment. Actually, this network quality model is more realistic than others found in the literature, since it does not enforce a predefined camera position or orientation and provides high flexibility on several parameters, such as the quantity and width of quality levels.

In this article, we have shown how to analyze the features of a network based on QoM metrics and how to use those metrics to improve the quality of monitoring through three proposed optimization algorithms. Also, this work showed the importance of metrics to perform mathematical analysis related to the utility of available visual information with respect to the perceived QoM, as well as the potential to exploit other quality aspects, such as dependability or redundancy.

The presented numerical results reinforce the idea that the proposed metrics indicate valuable information about the networks, making it possible to distinguish QoM among several distinct network deployments. Additionally, the optimization algorithms achieved good results, especially the Pseudo-Greedy, which increased the QoM up to 54% and area coverage up to 65%. The relationship between QoM and area coverage was also studied and we showed that they are not directly related, which justifies the metrics and algorithms proposed in this article.

Finally, the ideas and insights discussed in this work, although consistent, still demand additional investigation and potentially new proposals. In fact, new quality levels considering an anisotropic perspective, dealing properly with too-oblique viewing angles or with peripheral regions, might bring even more realism to the proposed model. Another improvement could be the inclusion of dynamic external factors that affect the perceived QoM, such as the incidence or lack of luminosity, or even weather conditions like fog, rain, snow and dust. At last, other future works could also analyze the network dependability as a function of QoM, further associating visual quality to network availability.

REFERENCES
[1] F. G. H. Yap and H.-H. Yen, ‘‘A survey on sensor coverage and visual data capturing/processing/transmission in wireless visual sensor networks,’’ Sensors, vol. 14, no. 2, pp. 3506–3527, 2014.
[2] P. Dolezel and D. Honc, ‘‘Neural network for smart adjustment of industrial camera—Study of deployed application,’’ in AETA—Recent Advances in Electrical Engineering and Related Sciences: Theory and Application, I. Zelinka, P. Brandstetter, T. T. Dao, V. H. Duy, and S. B. Kim, Eds. Cham, Switzerland: Springer, 2020, pp. 101–113.
[3] Y. Balouin, H. Rey-Valette, and P.-A. Picand, ‘‘Automatic assessment and analysis of beach attendance using video images at the Lido of Sète beach, France,’’ Ocean Coastal Manage., vol. 102, pp. 114–122, Dec. 2014.
[4] L. F. Bittencourt, J. Diaz-Montes, R. Buyya, O. F. Rana, and M. Parashar, ‘‘Mobility-aware application scheduling in fog computing,’’ IEEE Cloud Comput., vol. 4, no. 2, pp. 26–35, Mar./Apr. 2017.
[5] S. Y. Jang, Y. Lee, B. Shin, and D. Lee, ‘‘Application-aware IoT camera virtualization for video analytics edge computing,’’ in Proc. IEEE/ACM Symp. Edge Comput. (SEC), Oct. 2018, pp. 132–144.
[6] T. C. Jesus, P. Portugal, F. Vasques, and D. G. Costa, ‘‘Automated methodology for dependability evaluation of wireless visual sensor networks,’’ Sensors, vol. 18, no. 8, p. 2629, 2018.
[7] D. G. Costa, ‘‘Visual sensors hardware platforms: A review,’’ IEEE Sensors J., vol. 20, no. 8, pp. 4025–4033, Apr. 2020.
[8] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, ‘‘Image quality assessment: From error visibility to structural similarity,’’ IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[9] X. Min, K. Gu, G. Zhai, J. Liu, X. Yang, and C. W. Chen, ‘‘Blind quality assessment based on pseudo-reference image,’’ IEEE Trans. Multimedia, vol. 20, no. 8, pp. 2049–2062, Aug. 2018.
[10] H. Duan, G. Zhai, X. Min, Y. Zhu, Y. Fang, and X. Yang, ‘‘Perceptual quality assessment of omnidirectional images,’’ in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2018, pp. 1–5.


[11] Y. Zhu, G. Zhai, X. Min, and J. Zhou, ‘‘The prediction of saliency map for head and eye movements in 360 degree images,’’ IEEE Trans. Multimedia, early access, Dec. 12, 2019, doi: 10.1109/TMM.2019.2957986.
[12] D. G. Costa, L. A. Guedes, F. Vasques, and P. Portugal, ‘‘QoV: Assessing the monitoring quality in visual sensor networks,’’ in Proc. IEEE 8th Int. Conf. Wireless Mobile Comput. Netw. Commun. (WiMob), Oct. 2012, pp. 667–674.
[13] W. Wang, H. Dai, C. Dong, F. Xiao, X. Cheng, and G. Chen, ‘‘VISIT: Placement of unmanned aerial vehicles for anisotropic monitoring tasks,’’ in Proc. 16th Annu. IEEE Int. Conf. Sens., Commun., Netw. (SECON), Jun. 2019, pp. 1–9.
[14] H. Yang, D. Li, and H. Chen, ‘‘Coverage quality based target-oriented scheduling in directional sensor networks,’’ in Proc. IEEE Int. Conf. Commun., May 2010, pp. 1–5.
[15] X. Ren, W. Liang, and W. Xu, ‘‘Quality-aware target coverage in energy harvesting sensor networks,’’ IEEE Trans. Emerg. Topics Comput., vol. 3, no. 1, pp. 8–21, Mar. 2015.
[16] S. Hanoun, A. Bhatti, D. Creighton, S. Nahavandi, P. Crothers, and C. G. Esparza, ‘‘Target coverage in camera networks for manufacturing workplaces,’’ J. Intell. Manuf., vol. 27, no. 6, pp. 1221–1235, 2016.
[17] S. Tang and J. Yuan, ‘‘DAMson: On distributed sensing scheduling to achieve high quality of monitoring,’’ in Proc. IEEE INFOCOM, Apr. 2013, pp. 155–159.
[18] L. Guo, D. Li, Y. Zhu, D. Kim, Y. Hong, and W. Chen, ‘‘Enhancing barrier coverage with β quality of monitoring in wireless camera sensor networks,’’ Ad Hoc Netw., vol. 51, pp. 62–79, Nov. 2016.
[19] J. Huang, C. Lin, X. Kong, B. Wei, and X. Shen, ‘‘Modeling and analysis of dependability attributes for services computing systems,’’ IEEE Trans. Serv. Comput., vol. 7, no. 4, pp. 599–613, Oct. 2014.
[20] T. C. Jesus, D. G. Costa, P. Portugal, F. Vasques, and A. Aguiar, ‘‘Modelling coverage failures caused by mobile obstacles for the selection of faultless visual nodes in wireless sensor networks,’’ IEEE Access, vol. 8, pp. 41537–41550, 2020.
[21] D. G. Costa, I. Silva, L. A. Guedes, P. Portugal, and F. Vasques, ‘‘Availability assessment of wireless visual sensor networks for target coverage,’’ in Proc. Emerg. Technol. Factory Automat. (ETFA), Sep. 2014, pp. 1–8.
[22] H. Zannat, T. Akter, M. Tasnim, and A. Rahman, ‘‘The coverage problem in visual sensor networks: A target oriented approach,’’ J. Netw. Comput. Appl., vol. 75, pp. 1–15, Nov. 2016.
[23] P. Fu, Y. Cheng, H. Tang, B. Li, J. Pei, and X. Yuan, ‘‘An effective and robust decentralized target tracking scheme in wireless camera sensor networks,’’ Sensors, vol. 17, no. 3, p. 639, 2017.
[24] J. Tao, T. Zhai, H. Wu, Y. Xu, and Y. Dong, ‘‘A quality-enhancing coverage scheme for camera sensor networks,’’ in Proc. 43rd Annu. Conf. IEEE Ind. Electron. Soc., Oct. 2017, pp. 8458–8463.
[25] Y. Hu, X. Wang, and X. Gan, ‘‘Critical sensing range for mobile heterogeneous camera sensor networks,’’ in Proc. IEEE Conf. Comput. Commun. (IEEE INFOCOM), Apr. 2014, pp. 970–978.
[26] T. C. Jesus, D. G. Costa, and P. Portugal, ‘‘Wireless visual sensor networks redeployment based on dependability optimization,’’ in Proc. IEEE 17th Int. Conf. Ind. Inform. (INDIN), Jul. 2019, pp. 1111–1116.
[27] D. G. Costa, C. Duran-Faundez, and J. C. N. Bittencourt, ‘‘Availability issues for relevant area coverage in wireless visual sensor networks,’’ in Proc. Conf. Electr., Electron. Eng., Inf. Commun. Technol. (CHILEAN), Oct. 2017, pp. 1–6.
[28] T. C. Jesus, D. G. Costa, and P. Portugal, ‘‘On the computing of area coverage by visual sensor networks: Assessing performance of approximate and precise algorithms,’’ in Proc. IEEE 16th Int. Conf. Ind. Inform. (INDIN), Jul. 2018, pp. 193–198.
[29] C. Istin, D. Pescaru, and H. Ciocarlie, ‘‘Performance improvements of video WSN surveillance in case of traffic congestions,’’ in Proc. Int. Joint Conf. Comput. Cybern. Tech. Inform., May 2010, pp. 659–663.
[30] D. G. Costa, E. Rangel, J. P. J. Peixoto, and T. C. Jesus, ‘‘An availability metric and optimization algorithms for simultaneous coverage of targets and areas by wireless visual sensor networks,’’ in Proc. IEEE 17th Int. Conf. Ind. Inform. (INDIN), Jul. 2019, pp. 617–622.
[31] A. Shukla, H. M. Pandey, and D. Mehrotra, ‘‘Comparative review of selection techniques in genetic algorithm,’’ in Proc. Int. Conf. Futuristic Trends Comput. Anal. Knowl. Manage. (ABLAZE), 2015, pp. 515–519.
[32] H. Wei and X.-S. Tang, ‘‘A genetic-algorithm-based explicit description of object contour and its ability to facilitate recognition,’’ IEEE Trans. Cybern., vol. 45, no. 11, pp. 2558–2571, Nov. 2015.

THIAGO C. JESUS received the B.Sc. degree in computer engineering from the State University of Feira de Santana, Brazil, in 2008, and the M.Sc. degree in electrical engineering from the Federal University of Rio de Janeiro, Brazil, in 2011. He is currently pursuing the Ph.D. degree in the electrical and computer engineering program with the Faculty of Engineering, University of Porto, Portugal. He is also an Auxiliary Professor with the Department of Technology, State University of Feira de Santana. He is the author or coauthor of several articles. His research interests include dependability evaluation on wireless sensor networks, fault diagnosis of discrete event systems, industrial communication systems, and smart cities. He acted as a reviewer for high-quality journals in those areas.

DANIEL G. COSTA (Senior Member, IEEE) received the B.Sc. degree in computer engineering and the M.Sc. and D.Sc. degrees in electrical engineering from the Federal University of Rio Grande do Norte, Brazil, in 2005, 2006, and 2013, respectively. He did his research internship with the University of Porto, Portugal. He is currently an Associate Professor with the Department of Technology, State University of Feira de Santana, Brazil. He is also with the Advanced Networks and Applications Laboratory, LARA, State University of Feira de Santana. He is the author or coauthor of more than 100 articles in the areas of computer networks, industrial communication systems, the Internet of Things, smart cities, and sensor networks. He has served on several committees of distinguished international conferences. He acted as a reviewer for high-quality journals.

PAULO PORTUGAL (Member, IEEE) received the Ph.D. degree in electrical and computer engineering from the Faculty of Engineering (FEUP), University of Porto (UP), Porto, Portugal, in 2005. He is currently an Associate Professor with the Electrical and Computer Engineering Department (DEEC), FEUP, UP. His current research interests include industrial communications and dependability/performability modeling.

FRANCISCO VASQUES received the Ph.D. degree in computer science from LAAS-CNRS, Toulouse, France, in 1996. Since 2004, he has been an Associate Professor with the University of Porto, Portugal. He is the author or coauthor of more than 150 articles in the areas of real-time systems and industrial communication systems. His current research interests include real-time communication, industrial communication, and real-time embedded systems. He has also been a member of the Editorial Board of Sensors (MDPI), Sensor Networks Section (MDPI), since 2018, and the International Journal of Distributed Sensor Networks (Hindawi). Since 2007, he has been an Associate Editor of the IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS for the topic Industrial Communications.
