Detection and Tracking of Moving Objects at Road Intersections Using A 360-Degree Camera For Driver Assistance and Automated Driving
Digital Object Identifier 10.1109/ACCESS.2020.3011430
ABSTRACT Drivers encounter many complicated road intersections. At some, blind spots make it difficult for drivers or automated vehicles to discern moving objects approaching from certain directions, complicating decisions about crossing or turning at the intersection. To address this
problem, we investigate detection and tracking of all moving objects at an intersection using a single
360-degree-view camera (3DVC). Through experiments, we develop methods allowing a 3DVC to capture
the entirety of a four-way intersection when installed at one corner. This paper also presents image processing
algorithms for detecting and tracking moving objects at intersections by processing images from the installed
3DVC. Experiments under varied conditions demonstrate that the proposed detection algorithm has a very
high detection rate. We also confirm the ability to track the moving objects detected by the proposed algorithm.
INDEX TERMS 360-degree camera (omnidirectional camera), driver assistance, autonomous driving,
moving object detection and tracking, image conversion.
images from the camera. When the camera is installed high above the intersection, it is possible to capture the entire area. However, it remains difficult to capture pedestrians and bicycles, which do not clearly appear in camera images. Furthermore, installing such cameras at a sufficient height can be difficult and costly.

Other studies have thus attempted moving object detection throughout intersections by installing multiple cameras at complex intersections [10]. Multiple cameras can capture images from different intersection areas, allowing moving object detection by image processing. Moving objects detected by all cameras can be merged to grasp the behavior of moving objects throughout the intersection. However, the required image data processing increases when multiple cameras are used, potentially making real-time detection difficult.

In this paper, we aim to detect all moving objects around an intersection. We installed the 3DVC 3 m above the ground at a corner of an intersection, as shown in Fig. 2. We experimentally found that this camera installation is sufficient for capturing images of the entire intersection, including all nearby moving objects. In other words, an image from the 3DVC includes the entire intersection area when the 3DVC is fixed at an intersection corner following the above-mentioned installation procedure. For example, Fig. 4 shows an image captured by this camera installed at an intersection. In the image, the four roads of the intersection can easily be identified by focusing on the four road crossings; in this intersection, there is a crossing on each road. Furthermore, moving objects on the roads as well as near the intersection region can also be confirmed.

In this study, we attempt detection and tracking of all moving objects at a road intersection using a single 3DVC. Generally, a 3DVC has two lenses, each providing a 180-degree circular image like those shown in Fig. 1. The two 180-degree circular images combined provide a 360-degree view. These cameras have been used in studies related to topics such as virtual reality [11], mapping [12], [13], 360-degree imaging systems [14], video coding [15], depth estimation [16], and navigation behavior analysis [17]. To the best of our knowledge, however, there have been no studies regarding their application to detection of traffic situations at an intersection. In early stages of this work, we attempted detection of moving objects in the entirety of an intersection by processing the original circular images [18]. However, we found that tracking moving objects is difficult with these images, as explained in the next section. In this study, therefore, we generated rectangular images from the original circular images and performed moving object detection and tracking using those rectangular images.

Experiments conducted to confirm the effectiveness of the proposed detection algorithm showed high detection rates. Experiments also demonstrated that the proposed tracking algorithm can track detected moving objects with high accuracy. This detection and tracking approach is therefore practically applicable to development of driver assistance systems capable of autonomous driving at complex intersections with blind spots.

The remainder of this paper is organized as follows. Section 2 describes 3DVC installations at intersections. Section 3 presents the proposed moving object detection method, and Section 4 describes it in detail. Section 5 presents and discusses the experimental results of this work. Finally, Section 6 concludes the paper.

II. APPLICATION OF 3DVC TO DETECTING MOVING OBJECTS AT INTERSECTIONS
In this study, we used a Ricoh Theta 360-degree camera as the 3DVC. This section describes the camera installation at an intersection performed for this work and the image conversion necessary for detecting and tracking moving objects. The Theta has two lenses, each generating a 180-degree circular image like those shown in Fig. 1. Combined, these circular images provide a 360-degree view.

FIGURE 2. 3DVC installation at a road intersection.

A. CAMERA INSTALLATION
We installed the 3DVC 3 m above the ground at a corner of an intersection, as shown in Fig. 2. We experimentally found that this camera installation is sufficient for capturing images of the entire intersection, including all nearby moving objects.

As mentioned above, full 3DVC images combine a circular image from each of its lenses. Figure 3 shows an image from a 3DVC installed at an intersection. Note that the brightness might be poor in some images, according to the lighting environment at the time (Fig. 3).
The circular images are converted into equirectangular (rectangular) images by projecting each location following Eqs. (3) and (4).

$x = (\lambda - \lambda_0)\cos\vartheta_1$   (3)

$y = \vartheta - \vartheta_1$   (4)

In the above equations, x and y respectively denote the horizontal and vertical coordinates of the projected location, λ and λ0 are respectively the longitude and the central meridian of the location to project, and ϑ and ϑ1 respectively denote the latitude of the location to project and the standard parallel.
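As an illustration of Eqs. (3) and (4), the following Python sketch projects longitude/latitude coordinates to equirectangular image coordinates. The function names and the pixel-scaling step are our own assumptions for illustration; the paper does not specify an implementation.

```python
import numpy as np

def equirect_xy(lon, lat, lon0=0.0, lat1=0.0):
    """Equirectangular projection following Eqs. (3) and (4):
    x = (lon - lon0) * cos(lat1), y = lat - lat1.
    lon0 is the central meridian and lat1 the standard parallel (radians)."""
    lon, lat = np.asarray(lon, float), np.asarray(lat, float)
    x = (lon - lon0) * np.cos(lat1)
    y = lat - lat1
    return x, y

def to_pixel(x, y, width, height):
    """Scale projected coordinates in [-pi, pi] x [-pi/2, pi/2] to pixel indices.
    This linear scaling is an assumed choice, not taken from the paper."""
    u = (x / (2.0 * np.pi) + 0.5) * (width - 1)
    v = (0.5 - y / np.pi) * (height - 1)
    return u, v
```

In practice, such a mapping would typically be precomputed as a look-up table from each equirectangular pixel back to a source fisheye pixel and applied with cv2.remap.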
B. BRIGHTNESS IMPROVEMENT
As preprocessing, we apply gamma correction to improve
the brightness of original images [19]. In gamma correction,
an original camera image with 8-bit pixel intensity levels (Ii )
is converted into levels in the range [0, 1]. Gamma conversion
is then performed following Eq. (1), and a gamma-converted
image Io of Ii is generated following Eq. (2).
$I_g = \left(\frac{I_i}{255}\right)^{1/\gamma}$   (1)

$I_o = I_g \times 255$   (2)
Figure 4 shows gamma conversion of the image in Fig. 3.
This brightness correction can improve detection of moving
objects that are far from the camera.
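A minimal sketch of the gamma correction in Eqs. (1) and (2) is shown below, using an OpenCV look-up table so the conversion is computed once per possible intensity level. The γ value of 2.0 and the file name are only assumed examples, not the settings used in the paper.

```python
import cv2
import numpy as np

def gamma_correct(image, gamma=2.0):
    """Brighten an 8-bit image: Ig = (Ii / 255) ** (1 / gamma), Io = Ig * 255."""
    # 256-entry look-up table implementing Eqs. (1) and (2) for every intensity level.
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255.0).astype(np.uint8)
    return cv2.LUT(image, lut)

# Example (hypothetical file name):
# corrected = gamma_correct(cv2.imread("equirect_frame.png"), gamma=2.0)
```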
major steps in the algorithm, as described in the following sub-sections.

A. MOVING OBJECT CANDIDATE EXTRACTION BY GAUSSIAN MIXTURE MODEL BASED BACKGROUND SUBTRACTION
Frame and background subtraction can be used to detect moving objects in images [21]–[32]. In frame subtraction, moving objects are detected by calculating the subtraction between two or a few consecutive frames [21], [23], [28]. Moving objects must have some minimal speed to be detected by frame subtraction. In this study, the camera is static and some targeted moving objects, such as wheelchairs, move very slowly. Objects such as pedestrians waiting at the intersection cannot be detected with frame subtraction at all, so we decided not to use frame subtraction to extract moving objects. Instead, we use background subtraction, which generally performs subtraction between the current frame and a previously prepared background image [22], [27]; the background image must be updated as lighting conditions at the intersection change. We use a background subtraction method that follows a Gaussian mixture model (GMM) to extract moving object candidates from the images [24]–[26], [46], [47], because this method can automatically update background pixels in the image. Outdoor lighting changes gradually with sunlight, and it sometimes changes sharply with cloud conditions. In the GMM approach, K Gaussian models are generated and updated for each pixel in the image, following the variation of that pixel's value over time. If the current pixel value deviates from all of the Gaussian models, it is selected as a pixel belonging to a moving object. The same process is applied to every pixel in the image to extract moving object pixels. The GMM is robust against the above-mentioned lighting changes in the outdoor environment because moving object pixels are extracted by comparison with several Gaussian models generated from each pixel's value variation over time. The GMM is thus adaptive enough for detecting moving objects from consecutive images from a static camera in an outdoor environment.

The R, G, and B values are assumed to be uncorrelated with one another [25], [26] and to have the same standard deviation, so the covariance matrix can be expressed as

$\Sigma_{i,t} = \sigma_{i,t}^{2} I$.   (7)

Every Gaussian exceeding a defined threshold (th) is extracted as background, and the others are extracted as foreground. When a pixel matches at least one of the K Gaussians in the mixture, ω, µ, and σ are updated. To clearly indicate this classification, pixels selected as foreground are colored white, while pixels selected as background are colored black, as

$I_{GMM}(i, j) = \begin{cases} 255 & \text{if every Gaussian} > th \\ 0 & \text{otherwise} \end{cases}$   (8)

Figure 7 shows an image after gamma correction and equirectangular conversion. This image was captured by a 3DVC installed at an intersection under the above-described installation conditions. Figure 8 shows its background subtraction following the GMM method, demonstrating that candidate moving objects can be extracted as white blobs. Note that these candidates do not have a uniform shape, and some noise is present. As mentioned above, the camera in this work is static, so these subtraction methods can be used more easily than color-based [40]–[43] and learning-based [44], [45], [48] methods to detect moving objects from consecutive camera images.

FIGURE 7. Image after gamma correction and equirectangular conversion.
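As a rough illustration of this step, the sketch below uses OpenCV's MOG2 background subtractor, which implements the adaptive GMM of Zivkovic [46], [47], to produce a binary foreground mask from consecutive frames. The input file name, history, and variance-threshold values are assumptions for illustration rather than the paper's actual settings.

```python
import cv2

# GMM-based background subtraction (Zivkovic [46], [47]); parameter values are assumed.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

cap = cv2.VideoCapture("intersection_equirect.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 255 = moving object candidate, 0 = background
    # Small morphological opening to suppress the isolated noise pixels mentioned above.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    cv2.imshow("candidates", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```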
regarding the moving object's position and velocity.

$p^{est}_t = p^{est}_{t-1} + v^{est}_{t-1}\,dt + \frac{1}{2}a_t(dt)^2 + w_p$   (9)

$v^{est}_t = v^{est}_{t-1} + a_t\,dt + w_v$   (10)

Equations (9) and (10) can be summarized as

$\begin{bmatrix} p^{est}_t \\ v^{est}_t \end{bmatrix} = \begin{bmatrix} 1 & dt \\ 0 & 1 \end{bmatrix}\begin{bmatrix} p^{est}_{t-1} \\ v^{est}_{t-1} \end{bmatrix} + \begin{bmatrix} \frac{1}{2}(dt)^2 \\ dt \end{bmatrix} a_t + \begin{bmatrix} w_p \\ w_v \end{bmatrix}$,   (11)

Different types of moving objects move in different directions. Multiple detected positions can thus appear within the defined region, leading to tracking errors. In such situations, we focus on the directions of $p^{est}_t$. We calculate the moving direction of an object by following its estimated positions in a few previous frames. By comparing that direction value, we select the $p^{det}_{t(i)}$ corresponding to $p^{est}_t$. After confirming the relation between detected positions $p^{det}_{t(i)}$ and estimated positions $p^{est}_{t(i)}$, the estimated $p^{est}_{t(i)}$ values can be updated by $p^{det}_{t(i)}$.
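The following sketch illustrates a constant-velocity Kalman prediction in the form of Eqs. (9)–(11), together with the direction check used to select a detection for each track. The noise covariances, gating distance, angle tolerance, and all class and function names are our own assumptions; this is not the paper's implementation.

```python
import numpy as np

class CVTrack:
    """Constant-velocity Kalman track with state [px, py, vx, vy] (Eq. (11) form).
    Noise settings below are assumptions, not the paper's parameters."""

    def __init__(self, p0, dt=1.0):
        self.x = np.array([p0[0], p0[1], 0.0, 0.0], float)
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # position is measured
        self.Q = np.eye(4) * 0.1   # process noise (w_p, w_v); acceleration folded in
        self.R = np.eye(2) * 4.0   # measurement noise
        self.history = [np.array(p0, float)]

    def predict(self):
        """Predict p_est_t and v_est_t following Eqs. (9)-(11)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, p_det):
        """Correct the estimate with the selected detected position p_det_t(i)."""
        z = np.asarray(p_det, float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        self.history.append(self.x[:2].copy())

    def direction(self):
        """Moving direction estimated from the last two estimated positions."""
        if len(self.history) < 2:
            return None
        d = self.history[-1] - self.history[-2]
        return np.arctan2(d[1], d[0])

def select_detection(track, detections, gate=30.0, angle_tol=np.pi / 4):
    """Among detections inside the gate around p_est_t, keep the nearest one
    whose approach direction agrees with the track's recent direction."""
    p_est = track.predict()
    ref_dir = track.direction()
    best, best_dist = None, gate
    for p in detections:
        p = np.asarray(p, float)
        dist = np.linalg.norm(p - p_est)
        if dist >= best_dist:
            continue
        step = p - track.history[-1]
        cand_dir = np.arctan2(step[1], step[0])
        if ref_dir is None or abs((cand_dir - ref_dir + np.pi) % (2 * np.pi) - np.pi) < angle_tol:
            best, best_dist = p, dist
    return best
```

A typical frame loop would call select_detection for each existing track, pass the chosen detection to track.update, and start new tracks for unmatched detections.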
V. EXPERIMENTS
We conducted experiments to evaluate the proposed mov-
ing object detection and tracking method. All experiments
were conducted by installing the 3DVC at several intersections.
B. RESULTS
As Fig. 14 shows, a single 3DVC could detect objects moving
from different directions at intersections.
We also applied the conventional comparison methods to the above-described video data, thereby providing similar experimental environments. Figure 19 shows the results, which confirm that the proposed moving object detection method, based mainly on a GMM, provided the best results. Table 1 summarizes false positive rates for each method. The proposed method resulted in fewer false positives than did the conventional methods.

FIGURE 19. Comparison of the proposed moving object detection method with some conventional methods.

TABLE 1. Comparison of false positive rates.

VI. CONCLUSION
We proposed moving object detection and tracking methods for application at road intersections using a 3DVC for driver assistance and automated driving. We experimentally confirmed that a 3DVC installed at an intersection corner can image the entire intersection area. We also developed algorithms for detecting and tracking moving objects at the intersection using those 3DVC images. Experiments performed under varied conditions demonstrated that the developed algorithms showed high performance for detecting and tracking moving objects.

The proposed methods performed well when the distance to moving objects was within 25 m. The proposed methods can thus be applied at intersections of four-lane roads. Application of the proposed methods at larger intersections will require improvements to the detection and tracking algorithms, which we plan to investigate in future work. In addition, all experiments in this study were conducted during daytime under cloudy or sunny conditions. We are working to achieve similar performance under rainy and snowy conditions. By conducting tests with different cameras, we plan to achieve moving object detection and tracking during nighttime as well. Future work will also investigate estimation of detected moving object directions and development of a driver support system.

REFERENCES
[1] J. Eguchi and H. Koike, "Discrimination of an approaching vehicle at an intersection using a monocular camera," in Proc. IEEE Intell. Vehicles Symp., Jun. 2007, pp. 618–623.
[2] S. G. Ebrahimi, N. Seifnaraghi, and E. A. Ince, "Traffic analysis of avenues and intersections based on video surveillance from fixed video cameras," in Proc. IEEE 17th Signal Process. Commun. Appl. Conf., Apr. 2009, pp. 848–851.
[3] F.-F. Xun, X.-H. Yang, Y. Xie, and L.-Y. Wang, "Congestion detection of urban intersections based on surveillance video," in Proc. 18th Int. Symp. Commun. Inf. Technol. (ISCIT), Sep. 2018, pp. 495–498.
[4] Q. Baig, O. Aycard, T. D. Vu, and T. Fraichard, "Fusion between laser and stereo vision data for moving objects tracking in intersection like scenario," in Proc. IEEE Intell. Vehicles Symp. (IV), Jun. 2011, pp. 362–367.
[5] S. Sonoda, J. K. Tan, H. Kim, S. Ishikawa, and T. Morie, "Moving objects detection at an intersection by sequential background extraction," in Proc. 11th Int. Conf. Control, Autom. Syst., Oct. 2011, pp. 1752–1755.
[6] R. A. Bedruz, E. Sybingco, A. Bandala, A. R. Quiros, A. C. Uy, and E. Dadios, "Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach," in Proc. IEEE 9th Int. Conf. Humanoid, Nanotechnol., Inf. Technol., Commun. Control, Environ. Manage. (HNICEM), Dec. 2017, pp. 1–5.
[7] S. Wang, K. Ozcan, and A. Sharma, "Region-based deformable fully convolutional networks for multi-class object detection at signalized traffic intersections: NVIDIA AICity challenge 2017 track 1," in Proc. IEEE SmartWorld, Ubiquitous Intell. Comput., Aug. 2017, pp. 1–4.
[8] J.-P. Jodoin, G.-A. Bilodeau, and N. Saunier, "Tracking all road users at multimodal urban traffic intersections," IEEE Trans. Intell. Transp. Syst., vol. 17, no. 11, pp. 3241–3251, Nov. 2016.
[9] L. Juntao, L. Bingwu, and H. Lingyu, "Multiple objects segmentation and tracking algorithm for intersection monitoring," in Proc. 3rd IEEE Conf. Ind. Electron. Appl., Jun. 2008, pp. 1413–1416.
[10] P. Babaei and M. Fathy, "Abnormality detection and traffic flow measurement using a hybrid scheme of SIFT in distributed multi camera intersection monitoring," in Proc. 7th Iranian Conf. Mach. Vis. Image Process., 2011, pp. 1–5.
[11] H. Huang, J. Chen, H. Xue, Y. Huang, and T. Zhao, "Time-variant visual attention in 360-degree video playback," in Proc. IEEE Int. Symp. Haptic, Audio Vis. Environ. Games (HAVE), Sep. 2018, pp. 1–5.
[12] T. Sawa, T. Yanagi, Y. Kusayanagi, S. Tsukui, and A. Yoshida, "Seafloor mapping by 360 degree view camera with sonar supports," in Proc. MTS/IEEE Kobe Techno-Oceans (OTO), May 2018, pp. 1–4.
[13] L. Li, Z. Li, M. Budagavi, and H. Li, "Projection based advanced motion model for cubic mapping for 360-degree video," in Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2017, pp. 1427–1431.
[14] H. Abdelhamid, W. DongDong, C. Can, H. Abdelkarim, Z. Mounir, and G. Raouf, "360 degrees imaging systems design, implementation and evaluation," in Proc. Int. Conf. Mech. Sci., Electr. Eng. Comput. (MEC), Dec. 2013, pp. 2034–2038.
[15] M. Budagavi, J. Furton, G. Jin, A. Saxena, J. Wilkinson, and A. Dickerson, "360 degrees video coding using region adaptive smoothing," in Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2015, pp. 750–754.
[16] K. Wegner, O. Stankiewicz, T. Grajek, and M. Domanski, "Depth estimation from stereoscopic 360-degree video," in Proc. 25th IEEE Int. Conf. Image Process. (ICIP), Oct. 2018, pp. 2945–2948.
[17] F. Duanmu, Y. Mao, S. Liu, S. Srinivasan, and Y. Wang, "A subjective study of viewer navigation behaviors when watching 360-degree videos on computers," in Proc. IEEE Int. Conf. Multimedia Expo (ICME), Jul. 2018, pp. 1–6.
[18] C. Premachandra, S. Ueda, and Y. Suzuki, "Road intersection moving object detection by 360-degree view camera," in Proc. IEEE 16th Int. Conf. Netw., Sens. Control (ICNSC), May 2019, pp. 369–372.
[19] F. Ebner and M. D. Fairchild, "Development and testing of a color space (IPT) with improved hue uniformity," in Proc. 6th Color Imag. Conf., 1998, pp. 8–13.
[20] P. J. Snyder, "Flattening the Earth: Two thousand years of map projections," Univ. Chicago Press, Chicago, IL, USA, Tech. Rep., 1993.
[21] C. Premachandra, M. Otsuka, R. Gohara, T. Ninomiya, and K. Kato, "A study on development of a hybrid aerial terrestrial robot system for avoiding ground obstacles by flight," IEEE/CAA J. Automatica Sinica, vol. 6, no. 1, pp. 327–336, Jan. 2019.
[22] Y. Xia, S. Ning, and H. Shen, "Moving targets detection algorithm based on background subtraction and frames subtraction," in Proc. 2nd Int. Conf. Ind. Mechatronics Autom., May 2010, pp. 122–125.
[23] C. Premachandra, D. Ueda, and K. Kato, "Speed-up automatic quadcopter position detection by sensing propeller rotation," IEEE Sensors J., vol. 19, no. 7, pp. 2758–2766, Apr. 2019.
[24] P. KaewTraKulPong and R. Bowden, "An improved adaptive background mixture model for realtime tracking with shadow detection," in Proc. 2nd Eur. Workshop Adv. Video Based Surveill. Syst., vol. 1, Sep. 2001, pp. 1–5.
[25] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Dec. 1999, pp. 23–25.
[26] A. Nurhadiyatna, W. Jatmiko, B. Hardjono, A. Wibisono, I. Sina, and P. Mursanto, "Background subtraction using Gaussian mixture model enhanced by hole filling algorithm (GMMHF)," in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 2013, pp. 4006–4011.
[27] L. Chen, P. Zhu, and G. Zhu, "Moving objects detection based on background subtraction combined with consecutive frames subtraction," in Proc. Int. Conf. Future Inf. Technol. Manage. Eng., Oct. 2010, pp. 545–548.
[28] C. Premachandra, R. Gohara, and K. Kato, "Fast lane boundary recognition by a parallel image processor," in Proc. IEEE Int. Conf. Syst., Man, Cybern. (SMC), Oct. 2016, pp. 947–952.
[29] Y. Okamoto, C. Premachandra, and K. Kato, "A study on computational time reduction of road obstacle detection by parallel image processor," J. Adv. Comput. Intell. Intell. Informat., vol. 18, no. 5, pp. 849–855, Sep. 2014.
[30] C. Premachandra, R. Gohara, T. Ninomiya, and K. Kato, "Smooth automatic stopping for ultra-compact vehicles," IEEE Trans. Intell. Veh., vol. 4, no. 4, pp. 561–568, Dec. 2019.
[31] Y. Yamazaki, C. Premachandra, and C. J. Perea, "Audio-processing-based human detection at disaster sites with unmanned aerial vehicle," IEEE Access, vol. 8, pp. 101398–101405, 2020.
[32] P. Chinthaka, C. Premachandra, and S. Amarakeerthi, "Effective natural communication between human hand and mobile robot using raspberry-pi," in Proc. IEEE Int. Conf. Consum. Electron., Jan. 2018, pp. 1–3.
[33] K. Kale, S. Pawar, and P. Dhulekar, "Moving object tracking using optical flow and motion vector estimation," in Proc. 4th Int. Conf. Rel., Infocom Technol. Optim. (ICRITO), Sep. 2015, pp. 1–6.
[34] I. Tchikk, "A survey on moving object detection and tracking techniques," Int. J. Eng. Comput. Sci., vol. 6, no. 6, pp. 5212–5215, Apr. 2016.
[35] S. Shantaiya, K. Verma, and K. Mehta, "Multiple object tracking using Kalman filter and optical flow," Eur. J. Adv. Eng. Technol., vol. 2, no. 2, pp. 34–39, 2015.
[36] R. E. Kalman, "A new approach to linear filtering and prediction problems," Trans. ASME J. Basic Eng., vol. 5, pp. 35–45, Mar. 1960.
[37] H. A. Patel and D. G. Thakore, "Moving object tracking using Kalman filter," Int. J. Comput. Sci. Mobile Comput., vol. 10, pp. 326–332, Apr. 2013.
[38] X. Li, K. Wang, W. Wang, and Y. Li, "A multiple object tracking method using Kalman filter," in Proc. IEEE Int. Conf. Inf. Autom., Jun. 2010, pp. 1862–1866.
[39] J. Kim, D. Yeom, and Y. Joo, "Fast and robust algorithm of tracking multiple moving objects for intelligent video surveillance systems," IEEE Trans. Consum. Electron., vol. 57, no. 3, pp. 1165–1170, Aug. 2011.
[40] C. Premachandra, D. N. H. Thanh, T. Kimura, and H. Kawanaka, "A study on hovering control of small aerial robot by sensing existing floor features," IEEE/CAA J. Automatica Sinica, vol. 7, no. 4, pp. 1016–1025, Jul. 2020.
[41] M. Tamaki and C. Premachandra, "An automatic compensation system for unclear area in 360-degree images using pan-tilt camera," in Proc. Int. Symp. Syst. Eng. (ISSE), Oct. 2019, pp. 1–4.
[42] I. Yuki, C. Premachandra, S. Sumathipala, and B. H. Sudantha, "HSV conversion based tactile paving detection for developing walking support system to visually handicapped people," in Proc. IEEE 23rd Int. Symp. Consum. Technol. (ISCT), Jun. 2019, pp. 138–142.
[43] M. Tsunoda, C. Premachandra, H. A. H. Y. Sarathchandra, K. L. A. N. Perera, I. T. Lakmal, and H. W. H. Premachandra, "Visible light communication by using LED array for automatic wheelchair control in hospitals," in Proc. IEEE 23rd Int. Symp. Consum. Technol. (ISCT), Jun. 2019, pp. 210–215.
[44] R. C. Dilip, T. Lucas, A. Saarangan, S. Sumathipala, and C. Premachandra, "Making online content viral through text analysis," in Proc. IEEE Int. Conf. Consum. Electron. (ICCE), Jan. 2018, pp. 1–6.
[45] A. S. Winoto, M. Kristianus, and C. Premachandra, "Small and slim deep convolutional neural network for mobile device," IEEE Access, vol. 8, pp. 125210–125222, 2020.
[46] Z. Zivkovic, "Improved adaptive Gaussian mixture model for background subtraction," in Proc. 17th Int. Conf. Pattern Recognit., 2004, pp. 28–31.
[47] Z. Zivkovic and F. van der Heijden, "Efficient adaptive density estimation per image pixel for the task of background subtraction," Pattern Recognit. Lett., vol. 27, no. 7, pp. 773–780, May 2006.
[48] S. S. Thenuwara, C. Premachandra, and S. Sumathipala, "Hybrid approach to face recognition system using PCA & LDA in border control," in Proc. Nat. Inf. Technol. Conf. (NITC), Oct. 2019, pp. 9–15.

CHINTHAKA PREMACHANDRA (Member, IEEE) was born in Sri Lanka. He received the B.Sc. and M.Sc. degrees from Mie University, Tsu, Japan, in 2006 and 2008, respectively, and the Ph.D. degree from Nagoya University, Nagoya, Japan, in 2011.
From 2012 to 2015, he was an Assistant Professor with the Department of Electrical Engineering, Faculty of Engineering, Tokyo University of Science, Tokyo, Japan. From 2016 to 2017, he was an Assistant Professor with the Department of Electronic Engineering, School of Engineering, Shibaura Institute of Technology, Tokyo. In 2018, he was promoted to Associate Professor with the Department of Electronic Engineering, School of Engineering/Graduate School of Engineering and Science, Shibaura Institute of Technology, where he is currently the Manager of the Image Processing and Robotic Laboratory. His laboratory conducts research in two main fields: image processing and robotics. His research interests in the former field include computer vision, pattern recognition, speed-up image processing, and camera-based intelligent transportation systems, while those in the latter include terrestrial robotic systems, flying robotic systems, and the integration of terrestrial and flying robots.
Dr. Premachandra is a member of IEICE, Japan, SICE, Japan, and SOFT, Japan. He received the FIT Best Paper Award and the FIT Young Researchers Award from IEICE and IPSJ, Japan, in 2009 and 2010, respectively. He has served many international conferences and journals as a Steering Committee Member and an Editor. He is the Founding Chair of the International Conference on Image Processing and Robotics (ICIPRoB).

SHOHEI UEDA received the B.Sc. degree in electronics engineering from the Shibaura Institute of Technology, Tokyo, Japan, in 2019. His research interests include moving object tracking and its applications in robotics, automated driving, and intelligent transportation systems.

YUYA SUZUKI received the B.Sc. degree in electronics engineering from the Shibaura Institute of Technology, Tokyo, Japan, in 2018. His research interests include moving object detection and its applications in automated systems.