
On the Use of a Semantic Segmentation Micro-Service in AR Devices for UI Placement

1st Yuan Liang, 2nd Zixiang Xu, 3rd Sanaz Rasti, 4th Soumyabrata Dev, 5th Abraham G. Campbell
School of Computer Science, University College Dublin, Dublin, Ireland
[email protected]

2022 IEEE Games, Entertainment, Media Conference (GEM) | 978-1-6654-6138-2/22/$31.00 ©2022 IEEE | DOI: 10.1109/GEM56474.2022.10017522

Abstract—Augmented Reality (AR) technology is now increasingly applied in various fields, bringing an unprecedented immersive experience and rich interaction to the application field. However, complex interactions and informative interfaces impose a long learning curve and a burden on users. Making the AR experience more intelligent to reduce redundant operations is one solution to enhance the user experience. One potential research direction is seamlessly combining the two fields of machine learning and AR. This paper proposes using semantic segmentation to assist automatic information placement in AR, using a case study within precision agriculture as an example. The precise location of the crop area in the user's view is determined by semantic segmentation, which helps to place information in the AR environment automatically. The dataset used for machine learning model construction consists of 242 farmland images. Four semantic segmentation techniques are proposed and bench-marked against each other. The results show that the Attention U-Net deep neural network has the highest recognition accuracy, reaching 91.9%. An AR automatic information placement prototype using Attention U-Net has been developed to run on tablets utilising a micro-service approach. This work demonstrates how AR user interfaces could be placed correctly within the real world, which traditionally has been an understudied area of research within AR and is essential for future AR games and enterprise applications. As such, this solution has potential usage in all areas of AR application.

Index Terms—Machine learning, Semantic segmentation, Computer vision, Augmented reality, Precision agriculture

I. INTRODUCTION

One of the main goals of AR is to correlate virtual information with information in the real world. The literature suggests that a good label layout should result in labels being clearly associated with the objects connected to them, with good readability and no ambiguity with other labels [1], [2]. Accurate placement of information is crucial when AR is to be used for applications like precision agriculture [3], but these techniques could also be applied in AR games and enterprise AR applications. Research from [4] shows that accurate and efficient information placement in precision agriculture reduces error and time cost. Inaccurate information placement can disrupt the user's view, reduce the efficiency of the application and the user's opinion of it, and even lead to a decrease in trust in the application [5], [6]. Limited research has been conducted on object placement within a typical farmer's field, with one notable exception examining realistic wind turbine placement [7].

A common problem in the application of AR is that realistic interactions and well-informed interfaces burden users. It takes time for the user to learn the interaction logic and information interface management, such as manually pinning the information corresponding to an object in the field of view to that object, or manually selecting the information relevant to the task from the many pieces of information in the interface. These additional operations bring a long learning curve while providing flexibility, resulting in a poor user experience in some domains during the transition from traditional interaction to AR. In this paper, this problem is analyzed in the field of precision agriculture as a specific use case, but the approach could be applicable to both AR games and other AR enterprise applications. This paper aims to make the AR system intelligent and automatically place information on the corresponding objects according to the user's interaction habits, to reduce the user's redundant operations.

In an augmented reality-based precision agriculture system, users are given access to crop information displayed in a head-mounted display. Automation, speed and accuracy reduce unnecessary user operation complexity and increase the efficiency of the system, which are essential attributes for information placement in an augmented reality-based precision agriculture system. Figure 1 compares the accurate placement of information using our prototype with an inaccurate placement if the prototype is not used. The accurate placement is fixed at the centre of the corresponding crop area. The inaccurate placement, generated in front of the user, floating in the air and corresponding to other crop areas in the distance, causes misdirection. Automated information placement reduces redundant interactions and improves system efficiency. The absence of real-environment knowledge is a severe issue in AR applications. It limits the efficient representation and optimal layout of the information augmented onto the real world [8]. In order to enable the application to automatically place information in a reasonable position, semantic segmentation is applied to locate the crop area in the user's field of view.

In this paper, four semantic segmentation algorithms are implemented and bench-marked: a fully convolutional network based on residual network feature extraction, U-Net, Attention U-Net, and thresholding using Otsu's method.

Furthermore, an AR automatic information placement prototype has been developed. In the prototype system, the front-end tablet captures the user's field of view as a screenshot. The screenshot is then sent to the machine learning algorithm deployed as a micro-service in the cloud to determine the crop area in the user's field of view. The centroid of the area is calculated as the coordinates for information placement. Given the screen coordinates, the AR system emits a ray in the direction of the coordinates and automatically places information at the intersection point with the pre-detected plane, the land. This information placement process does not require the user to manually identify and place, thus addressing the need for automation.

This research forms part of the CONSUS Programme which is funded under the SFI Strategic Partnerships Programme (16/SPP/3296) and is co-funded by Origin Enterprises Plc.

Authorized licensed use limited to: Hong Kong Metropolitan University. Downloaded on January 08,2024 at 10:10:42 UTC from IEEE Xplore. Restrictions apply.

II. SEMANTIC SEGMENTATION OF CROP FIELD IMAGES

A. Fully Convolutional Network (FCN)

Unlike the classic convolutional neural network (CNN), which uses fully connected layers after the convolution layers to obtain fixed-length feature vectors for classification, the FCN accepts input images of any size [9]. A deconvolution layer is used to upsample the feature map of the last convolution layer and restore it to the same size as the input image [10], [11]. The binary cross-entropy of the classification is calculated pixel by pixel as the loss. Thus, a prediction can be generated for each pixel while retaining the spatial information in the original input image, thereby solving the problem of semantic segmentation.

To build the FCN model, ResNet is used as a pre-trained network. For the output, the fully connected layer is replaced with a convolution layer, and the feature map is enlarged to the size of the full image using upsampling.

B. U-Net

U-Net [12] is a deep convolutional neural network with an encoder-decoder structure, a variant of FCN. It has a U-shaped structure. The left side, which is the contracting path of the network, is a series of downsampling operations composed of convolution and max-pooling. It comprises four blocks, and each block uses three convolutions and one max-pooling downsampling. After each downsampling, the number of feature maps is multiplied by 2, giving the feature-map size changes shown in the figure. Finally, a feature map of size 32 × 32 is obtained. The right-hand side, which is the expansive path, is also composed of four blocks. Before the start of each block, the size of the feature map is multiplied by 2 through deconvolution, while the number of feature maps is halved; it is then merged with the corresponding feature map of the contracting path on the left. With a binary task, the network has two output feature maps.

With U-Net, to get a more accurate segmentation result, the high-resolution feature map is combined with the low-resolution feature map after upsampling [12], [13]. The number of channels is still large (1024), so more detailed texture information propagates back through the upsampling. In FCN, deep and shallow information is fused by adding corresponding pixels, while in U-Net it is fused by concatenation (splicing). Concatenation retains more dimension/location information, enabling the following layers to freely choose between shallow and deep features, which is more advantageous for semantic segmentation tasks.

C. Attention U-Net

Attention U-Net [14] is a variant of the U-Net architecture. The attention mechanism is widely used in the field of natural language processing, and it is also beginning to be applied in the field of computer vision. The purpose of using the attention mechanism in U-Net is to make the algorithm focus on relevant parts of the image during training while de-emphasizing other parts. It reduces the computational resources wasted on irrelevant activations and provides better generalization of the network.

The attention mechanism used in Attention U-Net is soft attention. The idea is to weight each pixel of an image: the relevant parts get high weights, and the irrelevant parts get low weights. The benefit of soft attention is that backpropagation can be applied during training, so the weights are also trained, making the model more focused on relevant regions. The previous section explained the specific architecture of U-Net. The left side of the U-shaped structure is the contracting path containing the convolution operations, and the right side is the expansive path containing the deconvolution operations. There is a left-to-right skip connection at each level of the path, which aims to improve the performance of upsampling in the expansive path using the spatial information in the contracting path. The attention gate unique to Attention U-Net is installed in each skip connection.

One deficiency of the skip connection is that the information from the contracting path, while rich in spatial information, lacks feature information, because the features have not yet been fully extracted at the beginning of the convolution operations. The attention mechanism is used on the skip connection to address this deficiency by making the model focus more on the relevant information, that is, the features of interest.
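To make the soft-attention weighting on the skip connection concrete, the following NumPy sketch illustrates an additive attention gate in its simplest form. It is a hypothetical single-channel simplification, not the paper's implementation: the scalar weights w_x, w_g and psi stand in for the learned 1 × 1 convolutions, and the spatial sizes of x and g are assumed to already match.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, psi):
    """Simplified additive attention gate over a skip connection.

    x   : skip-connection features, shape (H, W) -- rich in spatial detail
    g   : gating signal from the deeper layer, shape (H, W)
    w_x, w_g, psi : scalar stand-ins for the learned 1x1 convolutions
    """
    # Project both inputs, add them, and pass the sum through ReLU
    q = np.maximum(w_x * x + w_g * g, 0.0)
    # "1x1 convolution" followed by sigmoid normalisation -> per-pixel weights
    alpha = sigmoid(psi * q)
    # Weight the skip-connection features element-wise
    return alpha * x

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4))   # skip-connection features
g = rng.normal(size=(4, 4))   # gating signal
out = attention_gate(x, g, w_x=1.0, w_g=1.0, psi=1.0)
print(out.shape)
```

Because the per-pixel weights lie in (0, 1), the gate can only attenuate skip-connection responses, which is the intended "focus on features of interest" behaviour.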

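As a reference point for the classical baseline among the four techniques, Otsu's method can be sketched in a few lines of NumPy. This is a generic, textbook illustration of inter-class variance maximization for an assumed 8-bit grayscale input, not the code used in the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes inter-class variance
    for an 8-bit grayscale image (generic Otsu illustration)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class-0 mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # class-1 mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2          # inter-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity populations: the threshold should fall
# somewhere between the dark and bright groups.
img = np.array([[20, 25, 30], [200, 205, 210]], dtype=np.uint8)
t = otsu_threshold(img)
print(t)
```

On real field imagery a single global threshold like this struggles, which matches the qualitative results reported later: other green vegetation and dark gaps between crop rows share intensities with the crop class.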
(a) Accurate placement (b) Inaccurate placement

Fig. 1: Demonstration of simulated information placement in the field. (a) demonstrates an accurate placement; (b) shows an inaccurate placement.

Fig. 2: Schematic of the additive attention gate (AG) [14].

Figure 2 illustrates the structure of the AG. On the left side of the AG are two inputs. g is the gating signal from the deep layer, which is rich in feature information. x comes from the skip connection and is rich in spatial information. Since g comes from a deeper layer than x, it has half the spatial dimensions and twice the number of feature channels compared to x. By using convolution operations with different strides, the dimensions of g and x are unified. They are then added together, and the result is passed through the ReLU activation function [15], followed by a 1 × 1 convolution to produce a map of weights. After normalizing and upsampling this map, it is multiplied element-wise with the input x to obtain the weighted output.

D. Otsu Thresholding

Otsu's method is used to perform automatic image thresholding. The algorithm outputs a single threshold that separates pixels into two classes, foreground and background [16]. The threshold is determined by maximizing the inter-class variance [17].

III. EXPERIMENTS AND RESULTS

A. Dataset

The raw image dataset comes from the CONSUS data warehouse. The dataset consists of a total of 242 images. The view angle of the images is the same as that of a user wearing the AR device; angles range from 30 degrees below parallel to the ground to 15 degrees above. There are crops, roads, background (trees or walls), and sky in the images. We manually mark the images used for training with labels because there is no available open-source dataset with the required content and view angle.

Figure 3 shows the process for labelling the images; the tool is Labelme. The annotations are marked with dots, and the corresponding label name is attached. Labels are binary, divided into crop and background. The output from Labelme is a JSON file that contains the image information and the coordinates of the red dots. Using the JSON file, we create the one-channel ground-truth label images for the corresponding crop images.

Fig. 3: The image labelling is performed using Labelme. The areas surrounded by the red dots refer to the manually annotated crop area.

B. Qualitative Evaluation

Figure 4 shows a few sample original images from our dataset, the corresponding ground-truth labels, and the segmentation results obtained using the four bench-marked methods. We can observe that, for the U-Net result, although some minor false positives are found in the second and fourth images, where the model misclassified a small region of tree area as crop, the crop areas classified by the U-Net segmentation model are

relatively accurate. The boundary between the crop and the background is clearly visible. Attention U-Net performed as well as U-Net in terms of accuracy, with clear and accurate boundaries. Moreover, there are almost no obvious false positives in the segmentation results generated by Attention U-Net, which avoids mistaking trees for crops. On the other hand, the segmentation map generated using the FCN model contains a lot of false positives. It hardly distinguishes trees, or even the roads, from crops, which is a clear sign of overfitting. Otsu's method does not perform well because a single colour threshold fails to correctly classify the crop and non-crop regions: there is other green vegetation in the scene, such as trees, and the dark colour gaps between the crop rows are also a distraction.

C. Quantitative Evaluation

This section reports on the quantitative evaluation of the four bench-marked methods. The dataset was randomly divided into a training set and a test set according to the ratio 70:30. This allowed the computation of five evaluation indices, viz. IoU, accuracy, precision, recall, and F1-score [18]. Table I tabulates the obtained results.

Let us suppose that TP, TN, FP, FN indicate the true positives, true negatives, false positives, and false negatives from the confusion matrix of the result. The evaluation indices are described below:

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (1)

IoU = TP / (TP + FP + FN)   (2)

Precision = TP / (TP + FP)   (3)

Recall = TP / (TP + FN)   (4)

True Positive Rate = TP / (TP + FN)   (5)

False Positive Rate = FP / (FP + TN)   (6)

F1 = (2 * Precision * Recall) / (Precision + Recall) = 2TP / (2TP + FP + FN)   (7)

TABLE I: Evaluation scores of the segmentation output for the four bench-marked methods.

Metric      FCN     U-Net   Attention U-Net   Otsu
Accuracy    0.821   0.913   0.919             0.722
IoU         0.745   0.864   0.870             0.535
Precision   0.760   0.913   0.914             0.883
Recall      0.952   0.944   0.950             0.747
F1-Score    0.845   0.919   0.925             0.795

Using the bench-mark data, it can be concluded that both U-Net and Attention U-Net perform satisfactorily, but Attention U-Net is superior. The IoU of Attention U-Net is the highest. Its accuracy is also high, but considering that the input data suffer from class imbalance, the IoU better represents the performance. Precision is another important metric, as false positives are more harmful than false negatives in object placement. The traditional segmentation approach, Otsu thresholding, falls short of the machine learning-based approaches. The overfitting problem of the FCN model based on ResNet feature extraction is obvious. Figure 5 shows the convexity of the receiver operating characteristic (ROC) curve of the Attention U-Net model with respect to the random classifier. The ROC curve indicates that the Attention U-Net model is robust, and a small amount of false-positive rate can be sacrificed for a higher true-positive rate.

IV. MICRO-SERVICE POTENTIAL FOR FUTURE AR APPLICATIONS IN GAMES AND ENTERPRISE

The primary contribution of this work is to demonstrate the micro-service potential for future AR applications in the area of games and enterprise applications. The development trend of AR front-end devices is towards lightweight hardware, and such a trend inevitably limits how comprehensively the computing power of the devices can be improved. In the process of the intelligent transformation of AR interaction, the requirement for more elevated system computing power is also on the rise. Given this conflict, with the help of 5G, the strategy of transferring intelligent functions to cloud micro-services can resolve it with minimum delay and achieve the goal of being both lightweight and intelligent.

In this prototype, the automatic placement of information using semantic segmentation technology is deployed in the cloud in the form of a micro-service. This technique could be used in conjunction with other tracking technologies like GPS and represents the last-mile solution: the information is roughly in the correct area, and a micro-service built upon the prototype outlined before could then be used to help with the final placement. The concept of connecting the placement with a micro-service allows future AR games and enterprise applications to not be dependent on a device's computation power, but instead to use infrastructure such as a 5G network to facilitate the intelligent AR experience.

Although accurate automatic information placement can be achieved by applying this approach, the time complexity of the system can be high. Before future planned optimizations, it can take up to 5 seconds to get a result; this can still be used for calibration purposes, and then existing AR devices' own sensors can take over to maintain position information. After obtaining the information placement coordinates calculated by the semantic segmentation service, the prototype needs to perform plane detection in the augmented reality environment. The information will be placed once the detection is completed. This detection only needs to be performed once in the initial stage when the user uses the

function. Hence, the efficiency of information placement after that point is significantly improved. The remaining delay is caused by network transmission and model efficiency: the network transmission delay is about 1 second and the model operation time is about 2 seconds. This may not seem useful at first, but for games, once calibration is achieved, even a 1-second lag would be acceptable, as the world could use a device's IMU sensors and then re-calibrate with the image data. Thus this paper highlights the advantages of micro-services used in mobile AR environments in the future.

(a) Input image (b) Ground-truth image (c) FCN output (d) U-Net output (e) Attention U-Net output (f) Otsu output

Fig. 4: Some representative images and results are shown for comparison. We show (a) the original crop field image; (b) the crop field image with a manually annotated segmentation map; (c) the segmentation output from the FCN model; (d) the segmentation output from the U-Net model; (e) the segmentation output from the Attention U-Net model; (f) the segmentation output from Otsu thresholding. In the generated segmentation maps, the classified crop region is indicated in green, whereas the classified non-crop region is in black.

Fig. 5: ROC curve of the Attention U-Net model. The orange curve shows the Attention U-Net performance on true-positive and false-positive rates at different segmentation thresholds.

V. CONCLUSION AND FUTURE WORKS

This paper proposed an automatic information placement strategy in AR using semantic segmentation and evaluated it in the field of precision farming. Four image segmentation strategies were evaluated to investigate which algorithm the prototype should use, and their performance in aiding object placement was compared. The best performer was the Attention U-Net segmentation model. Deploying attention gates in Attention U-Net helps the algorithm achieve better performance when the data contain hard-to-discern features, such as crops and trees. An AR automatic information placement prototype has been developed to run on tablets. Within the crop area in the user's view, segmented using the Attention U-Net semantic segmentation, the corresponding information is accurately placed. In the prototype, the semantic segmentation algorithm is deployed in the cloud in the form of a micro-service. Providing AR applications with intelligent functions that have high requirements for computing power in the form of micro-services is the key to upgrading the experience of AR applications in games and enterprises.

Future studies will focus on optimizing the system's efficiency and accuracy by minimizing the plane detection time and exploring other efficient and accurate semantic

segmentation methods. We also intend to annotate a large-scale image dataset of crop images, along with manually generated segmentation masks. This will significantly assist the community in further bench-marking studies and in the development of a more robust automatic information placement system. Furthermore, the various variants of the U-Net model could also be a well-performing choice for crop segmentation in precision agriculture. Segmentation of different crop species is also a potential research direction, supported by more detailed data. Moreover, applying automatic information placement to other application fields of AR, such as games and enterprises, is a promising research direction. Using machine vision over web-based micro-services can help shine a guiding light on how AR experiences can be delivered on low-cost hardware that leverages the cloud, helping create the foundation for future AR applications in the enterprise and gaming space.

Finally, the local deployment of semantic segmentation models is also an exciting research direction. Nvidia's Jetson family of system-on-modules (SOMs), designed explicitly for AI model deployment, could be used as devices for the model in this study [19], [20]. Their light weight and low energy consumption make them easy to integrate with a front-end AR headset, and their GPU modules can meet the computing power requirements of the semantic segmentation algorithms proposed in this study. The Jetson Nano in this series can meet the requirement of low cost, while the higher-end Jetson Orin Nano can meet the computing power needs of most machine learning-based segmentation models. Local deployment of the model on this series of devices can reduce the delay of the segmentation function in both network transmission and computation.

REFERENCES

[1] L. Čmolík and J. Bittner, "Layout-aware optimization for interactive labeling of 3d models," Computers & Graphics, vol. 34, no. 4, pp. 378–387, 2010.
[2] K. Hartmann, T. Götzelmann, K. Ali, and T. Strothotte, "Metrics for functional and aesthetic label layouts," in Smart Graphics, A. Butz, B. Fisher, A. Krüger, and P. Olivier, Eds. Berlin, Heidelberg: Springer, 2005, pp. 115–126.
[3] M. Zheng and A. G. Campbell, "Location-based augmented reality in-situ visualization applied for agricultural fieldwork navigation," in Proc. IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2019, pp. 93–97.
[4] P. Phupattanasilp and S.-R. Tong, "Augmented reality in the integrative internet of things (AR-IoT): Application for precision farming," Sustainability, vol. 11, no. 9, 2019.
[5] S. Seinfeld, T. Feuchtner, A. Maselli, and J. Müller, "User representations in human-computer interaction," Human-Computer Interaction, pp. 1–39, 2020.
[6] T. Höllerer, S. Feiner, D. Hallaway, B. Bell, M. Lanzagorta, D. Brown, S. Julier, Y. Baillot, and L. Rosenblum, "User interface management techniques for collaborative mobile augmented reality," Computers & Graphics, vol. 25, no. 5, pp. 799–810, 2001.
[7] G. Dekker, Q. Zhang, J. Moreland, and C. Zhou, "MARWind: mobile augmented reality wind farm visualization," p. 1, 2013.
[8] R. Grasset, T. Langlotz, D. Kalkofen, M. Tatzgern, and D. Schmalstieg, "Image-driven view management for augmented reality browsers," in Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2012, pp. 177–186.
[9] S. Dev, M. Hossari, M. Nicholson, K. McCabe, A. Nautiyal, C. Conran, J. Tang, W. Xu, and F. Pitié, "Localizing adverts in outdoor scenes," in Proc. IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2019, pp. 591–594.
[10] S. Rasti, C. J. Bleakley, G. C. M. Silvestre, N. M. Holden, D. Langton, and G. M. P. O'Hare, "Crop growth stage estimation prior to canopy closure using deep learning algorithms," Neural Computing and Applications, vol. 33, pp. 1733–1743, 2020.
[11] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440.
[12] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," LNCS, vol. 9351, pp. 234–241, 2015.
[13] S. Dev, S. Manandhar, Y. H. Lee, and S. Winkler, "Multi-label cloud segmentation using a deep network," in Proc. USNC-URSI Radio Science Meeting (Joint with AP-S Symposium), 2019, pp. 113–114.
[14] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, B. Glocker, and D. Rueckert, "Attention U-Net: Learning where to look for the pancreas," 2018. [Online]. Available: https://arxiv.org/abs/1804.03999
[15] K. Fukushima, "Cognitron: A self-organizing multilayer neural network," Biological Cybernetics, vol. 20, pp. 121–136, 1975.
[16] M. Sezgin and B. Sankur, "Survey over image thresholding techniques and quantitative performance evaluation," Journal of Electronic Imaging, vol. 13, pp. 146–168, 2004.
[17] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[18] S. Dev, Y. H. Lee, and S. Winkler, "Color-based segmentation of sky/cloud images from ground-based cameras," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 1, pp. 231–242, 2016.
[19] M. Roesler, L. Mohimont, F. Alin, N. Gaveau, and L. A. Steffenel, "Deploying deep neural networks on edge devices for grape segmentation," in Smart and Sustainable Agriculture, S. Boumerdassi, M. Ghogho, and É. Renault, Eds. Cham: Springer, 2021, pp. 30–43.
[20] L. Mohimont, F. Alin, N. Gaveau, and L. A. Steffenel, "Lite CNN models for real-time post-harvest grape disease detection," in Workshop on Edge AI for Smart Agriculture (EAISA 2022), Biarritz, France, 2022. [Online]. Available: https://hal.univ-reims.fr/hal-03647740

