FoV-Based Quality Assessment and Optimization For Area Coverage in Wireless Visual Sensor Networks
ABSTRACT Wireless visual sensor networks are commonly employed in several application contexts, such as smart cities, intelligent transportation systems and industrial management, exploiting visual data from cameras to provide enhanced information and to extend the utility of such networks. In these scenarios, some applications may require high-definition images to perform more specialized tasks, for example face and text recognition, which adds an important monitoring requirement when camera-based sensors are used. In fact, it is important to ensure that the network is able to gather visual data with the quality required by each task, and such perceived quality may be computed as a function of the Field of View (FoV) of the visual sensors. In order to address this issue, new quality metrics are proposed for wireless visual sensor networks deployed to perform area coverage, exploiting different perceptions of the FoV. Those metrics are proposed along with redeployment optimization methods for visual sensor nodes, based on greedy and evolutionary approaches, aiming at the improvement of the perceived monitoring quality. The proposed metrics and algorithms are expected to be more realistic than previous solutions, allowing flexible processing of variables such as cameras' positions, orientations and viewing angles, thus providing high flexibility in the definition of parameters and significantly contributing to the development of sensor networks based on visual sensors.
INDEX TERMS Wireless sensor networks, area coverage, optimization, visual quality, quality of monitoring,
quality metric, visual sensing, field of view, mathematical modelling.
gather better images, also impacting quality [7]. On the other hand, some monitoring tasks may not require high-definition images, such as intrusion detection applications that essentially need to detect movement patterns. In this last case, it is possible to save resources or use cheaper hardware to perform simpler monitoring tasks while still achieving acceptable results. Such a scenario, with applications having different demands concerning the quality of retrieved visual data, calls for the adoption of quality metrics that leverage the processing and resource allocation in such networks.

It is important to distinguish between quality of image [8]–[11] and Quality of Monitoring (QoM) [12]–[16] in a visual network. Image quality assessment evaluates the degradation of image content as a consequence of acquisition, processing, compression, storage, transmission and reproduction processes. On the other hand, quality of monitoring is a generic term used to refer to the capability of a network to perform the expected monitoring functions over the region of interest. In this case, the value of the gathered visual information is assessed without considering content degradation, although it is possible to join both approaches for a more realistic assessment. In this paper, only the assessment of the quality of monitoring in wireless visual sensor networks is discussed.

Still considering the relevant aspects of visual monitoring quality, several network aspects such as area coverage and redundancy [17], [18] can affect the perceived quality, which means that quality can be indirectly used to evaluate other metrics. It can also determine other network attributes such as dependability, availability, lifetime or power consumption [15], [19], which means that a proper quality mapping could also be used to improve other network features or even to prevent and identify failures [20].

In previous works in the literature [21]–[23], a QoM metric is usually exploited according to the sensors' ability to view one or more targets. However, it is also possible to define a QoM for area coverage applications [24], which is a more challenging task since there are no reference points with respect to the camera. Instead, the regions expand continuously, varying both in distance and perspective from each visual sensor node. Moreover, some regions may be redundantly covered by several visual nodes, increasing the complexity of area coverage. For these cases, it makes more sense to define a metric for the entire network considering the composition of covered areas.

The problem of quality of monitoring regarding area coverage was initially addressed in [24], but imposing several restrictions on the network deployment (position, orientation, viewing angle) and, as a consequence, on the network optimization process. Differently, in this article we circumvent these constraints by proposing a new approach for the assessment of the quality of monitoring in WVSN, defining new QoM-based metrics. The proposed metrics, defined as the Area Quality Metric (AQM) and its variations, are QoM metrics that can be used for optimizations, comparisons or the exploitation of different quality aspects, such as redundancy and dependability. In fact, in this work, the proposed metrics are used to guide the network redeployment based on QoM-based optimizations. For this purpose, we also propose three optimization methods (Greedy, Pseudo-Greedy and Evolutionary, based on a Genetic Algorithm) and compare their results in terms of quality improvement. The model is more realistic than others found in the literature since it does not consider predefined cameras' positions or orientations (neither on deployment nor on redeployment) and provides support for heterogeneous hardware configurations. To the best of our knowledge, no metric and optimization algorithms with these objectives have been proposed before.

The remainder of the article is organized as follows. Section II presents some related works regarding coverage and quality in Wireless Sensor Networks (WSN). Some fundamental concepts are presented in Section III. Then, three quality metrics are proposed and discussed in Section IV. The proposed optimization algorithms are described in Section V, detailing the ideas behind the greedy, pseudo-greedy and evolutionary approaches. Section VI presents and discusses the results for some WVSN scenarios. Finally, the conclusions are stated in Section VII.

II. RELATED WORKS
Quality of monitoring in wireless sensor networks has been addressed in different contexts in the literature. Concerning visual sensing, the most common approaches are related to target coverage, comprising both scalar and directional sensors. Some of the most relevant works are discussed in this section, giving important clues of how quality of monitoring has been assessed and optimized in visual sensor networks.

In [15], the authors aim to find a scheduling approach for scalar sensors in order to maximize the target coverage quality. In this case, a QoM metric for targets is explicitly proposed, measuring the number of time slots in which targets are covered, considering the amount of energy consumed by each sensor during the monitoring period.

Similarly, the authors in [14] also use the concept of QoM to provide an efficient sensor scheduling, selecting the active sensors based on the QoM of the sensors related to the desired targets. However, in that work, QoM is calculated for each single sensor, considering the ratio of the distance between the sensor and the target to the sensing range. Although the authors approach directional sensors, they focus specifically on ultrasonic and infrared sensors, which present a different notion of QoM. Hence, that method only considers one sensor active at a time, and consequently it does not provide a QoM metric for the entire network.

The distance of a target point to a sensor is also considered to determine a perception of QoM in [16], but in visual networks, which means that the directional sensors are cameras. A QoM metric is defined and used to guide the network deployment based on a predefined set of discrete feasible configurations for all camera types.
In [25], QoM in visual networks is also addressed, again considering target coverage, but exploiting the fact that the quality of visual information is sensitive to its viewpoint. That way, the authors address full-view coverage, relating target viewing to the facing direction of the targets and the viewed direction of the objects, which reflects the viewpoint of a sensor. However, the perception of QoM is assumed as a condition to select a non-faulty sensor node instead of being considered as a metric.
Still considering visual networks for target coverage, the authors in [13] consider anisotropic monitoring due to the perspective of the monitored objects in relation to the cameras. In that work, the cameras can assume any orientation and position, since they are attached to unmanned aerial vehicles (UAVs).
In [24] the notion of QoM in visual networks is
discussed for area coverage. The authors consider the
weighted sensing quality and the importance of sensing
area to establish Quality of Monitoring in a full coverage
scenario. That work focuses on the network deployment
(position, orientation, viewing angle) and, as a consequence,
it can support an eventual network optimization process.
However, coverage redundancy is not clearly discussed in that paper, which leads to an inconsistent understanding of how the QoM should be calculated.
In this article we discuss and propose new QoM metrics
for quality assessment when performing area coverage by
visual sensor networks, without restrictions to cameras’
orientations, positions or viewing angles. Such metrics are
expected to be meaningful for single visual sensor nodes
or even for an entire network. Moreover, we show how to use these metrics to improve the QoM perception of the network through different optimization methods, from a flexible and broader perspective. Taken together, these contributions advance the area in ways that have not been proposed before.
camera can provide for a region located at a certain distance dov (distance of view). Actually, it is considered that the farther the monitored region is from the camera, the lower is the level of detail related to that region in captured images and, consequently, the lower should be the amount of visual information extracted from that region. This means that the quality of captured images decreases as the distance from the camera increases, which leads us to consider different quality levels over the FoV of a camera. In other words, a FoV can be perceived as an area with different associated levels of monitoring quality over it.

That way, a sensor node vs ∈ VS has its FoV divided into disjoint sub-regions FoVvs_l1, FoVvs_l2 and FoVvs_l3, which determine the visual levels 1, 2 and 3, with high, medium and low quality, respectively. The first level is defined by an isosceles triangle AFG with its height dov1 defined as the distance from vertex A to the end of FoVvs_l1. The second and third levels are defined by the isosceles trapezoids DEGF and BCED, with heights equal to (dov2 − dov1) and (dov3 − dov2), respectively, as depicted in Figure 4. It is important to notice that the sensor FoV is not modified; it is only re-interpreted, being FoVvs = FoVvs_l1 ∪ FoVvs_l2 ∪ FoVvs_l3 and FoVvs_l1 ∩ FoVvs_l2 ∩ FoVvs_l3 = ∅.

FIGURE 4. Quality perspective for a visual sensor's Field of View.

The distances dov1, dov2 and dov3 can be calculated according to Equation 6, and the proportion of dov1 and dov2 with respect to dov3 can be defined freely, considering the application requirements and the camera's constraints. The coordinates of vertices D, E, F and G can be calculated according to Equation 7. In this article, we consider only three quality levels, but they could be extended to incorporate additional levels, without loss of generality. In fact, the quality variation becomes more realistic if the same FoV is divided into more quality levels.

dov3 = Rs · cos(θvs/2)
dov2 = (3/4) · dov3
dov1 = (2/4) · dov3                                                          (6)

AD = AE = dov2 / cos(θvs/2)
AF = AG = dov1 / cos(θvs/2)
Dx = Ax + AD · cos(αvs)
Dy = Ay + AD · sin(αvs)
Ex = Ax + AE · cos((αvs + θvs) mod 2π)
Ey = Ay + AE · sin((αvs + θvs) mod 2π)
Fx = Ax + AF · cos(αvs)
Fy = Ay + AF · sin(αvs)
Gx = Ax + AG · cos((αvs + θvs) mod 2π)
Gy = Ay + AG · sin((αvs + θvs) mod 2π)                                       (7)

In order to provide a quantitative assessment, we assign a weight value to each visual level, which is w1 = 1 for FoVvs_l1, w2 = 0.5 for FoVvs_l2 and w3 = 0.25 for FoVvs_l3. Obviously, different values could be assigned, according to the application requirements. For example, following the definitions in [24], the assigned values would be w1 = 4 for FoVvs_l1, w2 = 2 for FoVvs_l2 and w3 = 1 for FoVvs_l3. Actually, we use a percentage approach because an entire region "poorly" viewed would be equivalent to (a percentage) part of an "adequately" viewed region. Nevertheless, this does not mean that it is indifferent for the application to monitor a small area with good quality or a large area with low quality. In fact, an application is probably not able to extract the same visual information from 100 monitoring blocks (MBs) poorly monitored (w3 = 0.25) and from 25 MBs well monitored (w1 = 1), and vice versa. However, we believe these quality weights are defined in a way that they can provide relevance equivalence between levels. For example, the information extracted from 25 well-monitored MBs can be as relevant as the information extracted from 100 poorly monitored MBs, depending on the application. Actually, with a smaller covered area, but with high coverage quality, it may be possible to perform facial recognition. On the other hand, with a larger covered area, but with an associated lower coverage quality, it could be possible to detect intrusion or to perform pattern identification. Therefore, it is not necessarily about the importance of the task, but the possibility of adding value to visual information. This view-notion simplifies the understanding of the proposed metrics and indicates how practical they can be when performing quality assessment.

IV. PROPOSED QUALITY METRICS
One of the challenges of assessing the monitoring quality for area coverage is the necessity to deal with continuous variations of quality as a function of the distance of view of a visual sensor. This may be a prohibitive task if the goal is to compute the QoM of the entire network instead of that of a single visual sensor. For that, this work treats this potentially complex scenario as a discrete problem, considering that the area to be monitored is divided into monitoring blocks, thus approximating "area coverage" to the "coverage of several targets". In this case, the smaller the MBs, the more realistic the QoM assessment will be.

In this context, we propose three new QoM metrics: AQM, AQMabs and AQMrel. These Area Quality Metrics consider that, similarly to the visual levels, each monitoring block mb ∈ FoVvs receives a weight wl, which is the weight of the FoV sub-region of vs where mb can be viewed, as expressed in Equation 8.

wl = w1, if mb ∈ FoVvs_l1
     w2, if mb ∈ FoVvs_l2
     w3, if mb ∈ FoVvs_l3                                                    (8)
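To make the level weighting above concrete, the following minimal Python sketch classifies a monitoring block into one of the three visual levels and aggregates the per-block weights into a simple quality score. It is illustrative only: FoV membership is approximated by an angular test around the orientation αvs (taken here as the central viewing direction) together with the depth of the block along the viewing axis, whereas the paper defines the sub-regions geometrically through the triangle and trapezoids of Figure 4 and Equation 7; likewise, the aggregation at the end is just one possible reading of an area quality score, since the full AQM, AQMabs and AQMrel definitions are not reproduced in this excerpt. The function names and the dictionary-based sensor representation are ours, not the paper's.

import math

def level_weight(sensor, mb, w=(1.0, 0.5, 0.25)):
    # sensor: {"x", "y", "alpha" (orientation, rad), "theta" (viewing angle, rad), "rs" (sensing range)}
    # mb: (x, y) center of a monitoring block; returns 0.0 if the block is not viewed
    dx, dy = mb[0] - sensor["x"], mb[1] - sensor["y"]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return w[0]                      # block at the camera vertex: best level
    ang = math.atan2(dy, dx)
    diff = (ang - sensor["alpha"] + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) > sensor["theta"] / 2 or dist > sensor["rs"]:
        return 0.0                       # outside the (approximated) FoV
    dov3 = sensor["rs"] * math.cos(sensor["theta"] / 2)   # Equation 6
    dov2, dov1 = 0.75 * dov3, 0.5 * dov3                  # 3/4 and 2/4 of dov3
    depth = dist * math.cos(diff)        # depth of the block along the viewing axis
    if depth <= dov1:
        return w[0]                      # FoVvs_l1, high quality
    if depth <= dov2:
        return w[1]                      # FoVvs_l2, medium quality
    if depth <= dov3:
        return w[2]                      # FoVvs_l3, low quality
    return 0.0

def network_quality(sensors, blocks):
    # Illustrative aggregation: each block contributes the best weight among the
    # sensors that view it; this is not the paper's exact AQM formulation.
    return sum(max((level_weight(s, mb) for s in sensors), default=0.0) for mb in blocks)

The level_weight helper can also be evaluated for a single sensor over its own covered blocks, which is essentially the per-sensor quality that the Greedy strategy of Section V relies on.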
V. PROPOSED OPTIMIZATION ALGORITHMS
In order to illustrate the utility of the proposed metrics, three optimization algorithms are proposed, aimed at the maximization of the FoV-based quality of monitoring in randomly deployed WVSN. Those optimization solutions, namely a classic greedy algorithm, a pseudo-greedy algorithm and an evolutionary algorithm, consider that visual sensors are rotatable, and so their orientations can be changed.

The proposed greedy and pseudo-greedy algorithms consider that each visual sensor may take one of a finite set of disjoint orientations. This assumption is aimed at making the problem tractable, while still assuring near-optimal maximization of the quality of monitoring [26]. On the other hand, since evolutionary algorithms perform a guided random search, showing good results when searching very large solution sets, we consider for the proposed evolutionary algorithm that visual sensors can take any orientation.

A. GREEDY
A reasonable and feasible way to optimize the network is to compute the best orientation for each sensor individually, aiming at the maximization of covered monitoring blocks. A classical greedy heuristic looks for a global optimization while handling only local data. This is due to the complexity of dealing with global information, such as area coverage.

That way, for the greedy algorithm, a visual sensor vs may assume Ovs different orientations (sectors), where each sector has the same angle γsec. The value of γsec can be calculated from the quantity of sectors Ovs, according to Equation 13.

γsec = 360°/Ovs                                                              (13)

Therefore, for each sector o = 1, ..., Ovs, the possible new orientation of vs will be αvs^o, according to Equation 14 (as shown in Figure 6, where γsec = θvs = 60°), where αvs is the original orientation of vs.

αvs^o = (o − 1) × γsec + αvs                                                 (14)

The proposed Greedy approach is based on individually testing which orientation provides the highest QoM for each single visual sensor, and then redeploying the sensor with that orientation. In this case, each sensor node is re-orientated in pursuit of the highest AQM. Actually, this is a simple way to improve the value of QoM while keeping lower computational costs compared with other approaches, although greedy algorithms may provide sub-optimal results since they only evaluate local information [30]. The proposed Greedy approach is detailed in Algorithm 1.

It is worth remarking that, since this approach only analyzes each sensor individually, it is not possible to guide the optimization directly by the metric AQM. However, it is important to present this method to establish the premise of quality optimization and to have a basis for comparison. In spite of that, one should notice that a notion of QoM is provided by the variable SensorQoM (Algorithm 1, Line 17), which is the quality perceived by a single visual sensor.
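As an illustration of the sector-based search in Equations 13 and 14, the following minimal Python sketch (in the same illustrative style as the previous one, and not a transcription of the paper's Algorithm 1) enumerates the candidate orientations of each sensor and keeps the one that maximizes a caller-supplied per-sensor quality evaluator; the dictionary-based sensor representation and the function names are assumptions of the sketch.

import math

def candidate_orientations(alpha, n_sectors):
    # Equations 13 and 14: n_sectors evenly spaced candidate orientations,
    # gamma_sec = 360°/Ovs (here in radians) and alpha_o = (o - 1) * gamma_sec + alpha.
    gamma = 2 * math.pi / n_sectors
    return [((o - 1) * gamma + alpha) % (2 * math.pi) for o in range(1, n_sectors + 1)]

def greedy_reorientation(sensors, n_sectors, sensor_qom):
    # Greedy pass: each sensor keeps the candidate orientation that maximizes its own
    # quality score, independently of the other sensors (local information only).
    # `sensor_qom(sensor, orientation)` is any per-sensor quality evaluator.
    for s in sensors:
        s["alpha"] = max(candidate_orientations(s["alpha"], n_sectors),
                         key=lambda a: sensor_qom(s, a))
    return sensors

# A Pseudo-Greedy variant would rank each candidate by the quality of the whole
# network (with the remaining sensors fixed at their current orientations) instead
# of by the quality of the single sensor being re-orientated.

Swapping the per-sensor evaluator for one that scores the entire network with the other sensors fixed yields the Pseudo-Greedy behaviour described next.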
B. PSEUDO-GREEDY
The main disadvantage of the Greedy approach is that it uses only local information in the optimization process. To cope with that problem, we propose a Pseudo-Greedy algorithm based on [26], but here aiming at QoM optimization instead of dependability optimization. This approach keeps looking for a global optimum while searching among local solutions but, differently from a classic Greedy approach, it uses some global information, recovered in a lightweight way, to improve the searching process. The idea is to re-orientate each sensor node in the way that generates the highest overall quality of monitoring, instead of the highest quality possible for that single sensor. Algorithm 2 shows the proposed Pseudo-Greedy heuristic.

The first step is to identify, for each visual sensor node, its covered monitoring blocks (Line 3) and store the associated weight values (Line 4). Then, iteratively, each sensor (one at a time) is re-orientated to the position that generates the highest QoM considering the current positions of the other sensors. For this, a finite set of sectors is generated,
FIGURE 13. Example of a WVSN and the respective redeployment by each optimization algorithm.
exalt the Pseudo-Greedy results: it is so fast and efficient in this context that it overcomes the results of the evolutionary approach. As a final remark, the simulations showed that the Pseudo-Greedy optimization method converges in only 4 iterations, on average.

Considering the AQMrel metric, in Figure 11 the networks with higher relative quality of monitoring, i.e., with more MBs "well" viewed (MBs mostly inside FoVvs_l1 and FoVvs_l2), were those redeployed by the Greedy optimization with more than 30 sensors, or by the Pseudo-Greedy and Evolutionary optimizations with more than 40 sensors, since they present AQMrel ≥ 62.5%. This means that, even if these redeployments do not provide the highest amount of covered area or the highest AQM, the available visual information is assumed to have high quality and it can be used for more specialized tasks.

C. QUALITY OF MONITORING VS. AREA COVERAGE
Another interesting result to notice is that the growth of the covered area (Figure 9) followed by the growth of the AQM (Figure 10) in this comparison could leave the wrong impression that area coverage optimization (area maximization or redundancy minimization) directly leads to QoM optimization. However, that may not be true in many cases. Actually, it is natural that, by increasing the covered area after an optimization process, more previously non-monitored regions will be encompassed, which tends to contribute to the gathering of more visual data and the improvement of the QoM perception. However, a non-optimal area coverage could provide less overlapping of regions with the same weight, providing a higher QoM. An example of this phenomenon can be seen in Figure 12. In this case, the first network presents CAmb(VS) = 28835 and AQMabs(VS) = 14529, while the second network presents CAmb(VS) = 28166 and AQMabs(VS) = 14696. Hence, network 1 has a higher covered area and a lower QoM than network 2. This means that we cannot reduce the problem of QoM optimization to a simple area coverage maximization or redundancy minimization.

We investigated this issue more carefully and implemented the Greedy, Pseudo-Greedy and Evolutionary optimization processes aiming at area coverage maximization, according to [26]. Then, the results were compared with the optimization aiming at QoM maximization in each simulation. It could be seen that only in 24% of the simulations did the area coverage maximization imply QoM maximization, which justifies the metrics and algorithms proposed in this article.

D. QUALITY OF MONITORING ASSESSMENT EXAMPLE
In order to illustrate the discussion raised by the optimization methods, Figure 13 shows an example of the distribution of visual sensors in the initial deployment and after the execution of the Greedy, the Pseudo-Greedy and the Evolutionary redeployment algorithms in a WVSN with 20 visual nodes. Table 3 presents the covered area and the quality metrics related to each (re)deployment. As suggested by the graphics in Figures 9 and 10, the Pseudo-Greedy optimization provides a higher covered area as well as a higher QoM. It is worth remarking that these high values for the metrics may imply an unconventional redeployment. For example, in Figures 13c and 13d this is achieved by orientating some sensors in a way that their FoVs are outside of the monitoring area. This also explains the reason for the Greedy optimization providing a higher AQMrel: there is higher area coverage redundancy, especially for FoVvs_l3 and FoVvs_l2 (see Figure 13b).

TABLE 3. Quality metrics for the results in Figure 13.
VII. CONCLUSIONS
This article discussed the quality of monitoring in wireless visual sensor networks when performing area coverage. Quality was addressed based on the characteristics of the sensor's FoV, but such perception of quality was also exploited as an attribute of the entire visual sensor network. In this context, metrics were proposed for both cases, providing a practical mathematical tool for quality assessment. Actually, this network quality model is more realistic than others found in the literature, since it does not enforce predefined cameras' positions or orientations and provides high flexibility for several parameters, such as the quantity and width of the quality levels.

In this article, we have shown how to analyze the features of a network based on QoM metrics and how to use those metrics to improve the quality of monitoring through three proposed optimization algorithms. Also, this work showed the importance of metrics for performing mathematical analysis related to the utility of the available visual information with respect to the perceived QoM, as well as the potential to exploit other quality aspects, such as dependability or redundancy.

The presented numerical results reinforce the idea that the proposed metrics provide valuable information about the networks, making it possible to distinguish the QoM among several distinct network deployments. Additionally, the optimization algorithms achieved good results, especially the Pseudo-Greedy, which increased the QoM by up to 54% and the area coverage by up to 65%. The relationship between QoM and area coverage was also studied and we showed that they are not directly related, which justifies the metrics and algorithms proposed in this article.

Finally, the ideas and insights discussed in this work, although consistent, still demand additional investigation and potentially new proposals. In fact, new quality levels considering an anisotropic perspective, dealing properly with too oblique viewing angles or with peripheral regions, might bring even more realism to the proposed model. Another improvement could be the inclusion of dynamic external factors that affect the perceived QoM, such as the incidence or lack of luminosity, or even weather conditions like fog, rain, snow and dust. At last, other future works could also analyze the network dependability as a function of QoM, further associating visual quality to the network availability.

REFERENCES
[1] F. G. H. Yap and H.-H. Yen, "A survey on sensor coverage and visual data capturing/processing/transmission in wireless visual sensor networks," Sensors, vol. 14, no. 2, pp. 3506–3527, 2014.
[2] P. Dolezel and D. Honc, "Neural network for smart adjustment of industrial camera—Study of deployed application," in AETA—Recent Advances in Electrical Engineering and Related Sciences: Theory and Application, I. Zelinka, P. Brandstetter, T. T. Dao, V. H. Duy, and S. B. Kim, Eds. Cham, Switzerland: Springer, 2020, pp. 101–113.
[3] Y. Balouin, H. Rey-Valette, and P.-A. Picand, "Automatic assessment and analysis of beach attendance using video images at the Lido of Sète beach, France," Ocean Coastal Manage., vol. 102, pp. 114–122, Dec. 2014.
[4] L. F. Bittencourt, J. Diaz-Montes, R. Buyya, O. F. Rana, and M. Parashar, "Mobility-aware application scheduling in fog computing," IEEE Cloud Comput., vol. 4, no. 2, pp. 26–35, Mar./Apr. 2017.
[5] S. Y. Jang, Y. Lee, B. Shin, and D. Lee, "Application-aware IoT camera virtualization for video analytics edge computing," in Proc. IEEE/ACM Symp. Edge Comput. (SEC), Oct. 2018, pp. 132–144.
[6] T. C. Jesus, P. Portugal, F. Vasques, and D. G. Costa, "Automated methodology for dependability evaluation of wireless visual sensor networks," Sensors, vol. 18, no. 8, p. 2629, 2018.
[7] D. G. Costa, "Visual sensors hardware platforms: A review," IEEE Sensors J., vol. 20, no. 8, pp. 4025–4033, Apr. 2020.
[8] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[9] X. Min, K. Gu, G. Zhai, J. Liu, X. Yang, and C. W. Chen, "Blind quality assessment based on pseudo-reference image," IEEE Trans. Multimedia, vol. 20, no. 8, pp. 2049–2062, Aug. 2018.
[10] H. Duan, G. Zhai, X. Min, Y. Zhu, Y. Fang, and X. Yang, "Perceptual quality assessment of omnidirectional images," in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2018, pp. 1–5.
[11] Y. Zhu, G. Zhai, X. Min, and J. Zhou, "The prediction of saliency map for head and eye movements in 360 degree images," IEEE Trans. Multimedia, early access, Dec. 12, 2019, doi: 10.1109/TMM.2019.2957986.
[12] D. G. Costa, L. A. Guedes, F. Vasques, and P. Portugal, "QoV: Assessing the monitoring quality in visual sensor networks," in Proc. IEEE 8th Int. Conf. Wireless Mobile Comput. Netw. Commun. (WiMob), Oct. 2012, pp. 667–674.
[13] W. Wang, H. Dai, C. Dong, F. Xiao, X. Cheng, and G. Chen, "VISIT: Placement of unmanned aerial vehicles for anisotropic monitoring tasks," in Proc. 16th Annu. IEEE Int. Conf. Sens., Commun., Netw. (SECON), Jun. 2019, pp. 1–9.
[14] H. Yang, D. Li, and H. Chen, "Coverage quality based target-oriented scheduling in directional sensor networks," in Proc. IEEE Int. Conf. Commun., May 2010, pp. 1–5.
[15] X. Ren, W. Liang, and W. Xu, "Quality-aware target coverage in energy harvesting sensor networks," IEEE Trans. Emerg. Topics Comput., vol. 3, no. 1, pp. 8–21, Mar. 2015.
[16] S. Hanoun, A. Bhatti, D. Creighton, S. Nahavandi, P. Crothers, and C. G. Esparza, "Target coverage in camera networks for manufacturing workplaces," J. Intell. Manuf., vol. 27, no. 6, pp. 1221–1235, 2016.
[17] S. Tang and J. Yuan, "DAMson: On distributed sensing scheduling to achieve high quality of monitoring," in Proc. IEEE INFOCOM, Apr. 2013, pp. 155–159.
[18] L. Guo, D. Li, Y. Zhu, D. Kim, Y. Hong, and W. Chen, "Enhancing barrier coverage with β quality of monitoring in wireless camera sensor networks," Ad Hoc Netw., vol. 51, pp. 62–79, Nov. 2016.
[19] J. Huang, C. Lin, X. Kong, B. Wei, and X. Shen, "Modeling and analysis of dependability attributes for services computing systems," IEEE Trans. Serv. Comput., vol. 7, no. 4, pp. 599–613, Oct. 2014.
[20] T. C. Jesus, D. G. Costa, P. Portugal, F. Vasques, and A. Aguiar, "Modelling coverage failures caused by mobile obstacles for the selection of faultless visual nodes in wireless sensor networks," IEEE Access, vol. 8, pp. 41537–41550, 2020.
[21] D. G. Costa, I. Silva, L. A. Guedes, P. Portugal, and F. Vasques, "Availability assessment of wireless visual sensor networks for target coverage," in Proc. Emerg. Technol. Factory Automat. (ETFA), Sep. 2014, pp. 1–8.
[22] H. Zannat, T. Akter, M. Tasnim, and A. Rahman, "The coverage problem in visual sensor networks: A target oriented approach," J. Netw. Comput. Appl., vol. 75, pp. 1–15, Nov. 2016.
[23] P. Fu, Y. Cheng, H. Tang, B. Li, J. Pei, and X. Yuan, "An effective and robust decentralized target tracking scheme in wireless camera sensor networks," Sensors, vol. 17, no. 3, p. 639, 2017.
[24] J. Tao, T. Zhai, H. Wu, Y. Xu, and Y. Dong, "A quality-enhancing coverage scheme for camera sensor networks," in Proc. 43rd Annu. Conf. IEEE Ind. Electron. Soc., Oct. 2017, pp. 8458–8463.
[25] Y. Hu, X. Wang, and X. Gan, "Critical sensing range for mobile heterogeneous camera sensor networks," in Proc. IEEE Conf. Comput. Commun. (IEEE INFOCOM), Apr. 2014, pp. 970–978.
[26] T. C. Jesus, D. G. Costa, and P. Portugal, "Wireless visual sensor networks redeployment based on dependability optimization," in Proc. IEEE 17th Int. Conf. Ind. Inform. (INDIN), Jul. 2019, pp. 1111–1116.
[27] D. G. Costa, C. Duran-Faundez, and J. C. N. Bittencourt, "Availability issues for relevant area coverage in wireless visual sensor networks," in Proc. Conf. Electr., Electron. Eng., Inf. Commun. Technol. (CHILEAN), Oct. 2017, pp. 1–6.
[28] T. C. Jesus, D. G. Costa, and P. Portugal, "On the computing of area coverage by visual sensor networks: Assessing performance of approximate and precise algorithms," in Proc. IEEE 16th Int. Conf. Ind. Inform. (INDIN), Jul. 2018, pp. 193–198.
[29] C. Istin, D. Pescaru, and H. Ciocarlie, "Performance improvements of video WSN surveillance in case of traffic congestions," in Proc. Int. Joint Conf. Comput. Cybern. Tech. Inform., May 2010, pp. 659–663.
[30] D. G. Costa, E. Rangel, J. P. J. Peixoto, and T. C. Jesus, "An availability metric and optimization algorithms for simultaneous coverage of targets and areas by wireless visual sensor networks," in Proc. IEEE 17th Int. Conf. Ind. Inform. (INDIN), Jul. 2019, pp. 617–622.
[31] A. Shukla, H. M. Pandey, and D. Mehrotra, "Comparative review of selection techniques in genetic algorithm," in Proc. Int. Conf. Futuristic Trends Comput. Anal. Knowl. Manage. (ABLAZE), 2015, pp. 515–519.
[32] H. Wei and X.-S. Tang, "A genetic-algorithm-based explicit description of object contour and its ability to facilitate recognition," IEEE Trans. Cybern., vol. 45, no. 11, pp. 2558–2571, Nov. 2015.

THIAGO C. JESUS received the B.Sc. degree in computer engineering from the State University of Feira de Santana, Brazil, in 2008, and the M.Sc. degree in electrical engineering from the Federal University of Rio de Janeiro, Brazil, in 2011. He is currently pursuing the Ph.D. degree with the electrical and computer engineering program at the Faculty of Engineering, University of Porto, Portugal. He is also an Auxiliary Professor with the Department of Technology, State University of Feira de Santana. He is the author or coauthor of several articles. His research interests include dependability evaluation of wireless sensor networks, fault diagnosis of discrete event systems, industrial communication systems, and smart cities. He has acted as a reviewer for high-quality journals in those areas.

DANIEL G. COSTA (Senior Member, IEEE) received the B.Sc. degree in computer engineering and the M.Sc. and D.Sc. degrees in electrical engineering from the Federal University of Rio Grande do Norte, Brazil, in 2005, 2006, and 2013, respectively. He did his research internship with the University of Porto, Portugal. He is currently an Associate Professor with the Department of Technology, State University of Feira de Santana, Brazil. He is also with the Advanced Networks and Applications Laboratory (LARA), State University of Feira de Santana. He is the author or coauthor of more than 100 articles in the areas of computer networks, industrial communication systems, the Internet of Things, smart cities, and sensor networks. He has served on several committees of distinguished international conferences. He has acted as a reviewer for high-quality journals.

PAULO PORTUGAL (Member, IEEE) received the Ph.D. degree in electrical and computer engineering from the Faculty of Engineering (FEUP), University of Porto (UP), Porto, Portugal, in 2005. He is currently an Associate Professor with the Electrical and Computer Engineering Department (DEEC), FEUP, UP. His current research interests include industrial communications and dependability/performability modeling.

FRANCISCO VASQUES received the Ph.D. degree in computer science from LAAS-CNRS, Toulouse, France, in 1996. Since 2004, he has been an Associate Professor with the University of Porto, Portugal. He is the author or coauthor of more than 150 articles in the areas of real-time systems and industrial communication systems. His current research interests include real-time communication, industrial communication, and real-time embedded systems. He has also been a member of the Editorial Board of Sensors (MDPI), Sensor Networks Section (MDPI), since 2018, and the International Journal of Distributed Sensor Networks (Hindawi). Since 2007, he has been an Associate Editor of the IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS for the topic Industrial Communications.