Lane Detection in Autonomous Vehicles A Systematic Review
Lane Detection in Autonomous Vehicles A Systematic Review
ABSTRACT One of the essential systems in autonomous vehicles for ensuring a secure circumstance
for drivers and passengers is the Advanced Driver Assistance System (ADAS). Adaptive Cruise Control,
Automatic Braking/Steer Away, Lane-Keeping System, Blind Spot Assist, Lane Departure Warning System,
and Lane Detection are examples of ADAS. Lane detection displays information specific to the geometrical
features of lane line structures to the vehicle’s intelligent system to show the position of lane markings.
This article reviews the methods employed for lane detection in an autonomous vehicle. A systematic
literature review (SLR) has been carried out to analyze the most delicate approach to detecting the road lane
for the benefit of the automation industry. One hundred and two publications from well-known databases
were chosen for this review. The trend was discovered after thoroughly examining the selected articles on
the method implemented for detecting the road lane from 2018 until 2021. The selected literature used
various methods, with the input dataset being one of two types: self-collected or acquired from an online
public dataset. In the meantime, the methodologies include geometric modeling and traditional methods,
while AI includes deep learning and machine learning. The use of deep learning has been increasingly
researched throughout the last four years. Some studies used stand-alone deep learning implementations for
lane detection problems. Meanwhile, some research focuses on merging deep learning with other machine
learning techniques and classical methodologies. Recent advancements imply that attention mechanism has
become a popular combined strategy with deep learning methods. The use of deep algorithms in conjunction
with other techniques showed promising outcomes. This research aims to provide a complete overview of the
literature on lane detection methods, highlighting which approaches are currently being researched and the
performance of existing state-of-the-art techniques. Also, the paper covered the equipment used to collect
the dataset for the training process and the dataset used for network training, validation, and testing. This
review yields a valuable foundation on lane detection techniques, challenges, and opportunities and supports
new research works in this automation field. For further study, it is suggested to put more effort into accuracy
improvement, increased speed performance, and more challenging work on various extreme conditions in
detecting the road lane.
INDEX TERMS Lane detection, autonomous vehicle, systematic literature review, geometric modelling,
deep learning, machine learning.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
VOLUME 11, 2023 3729
N. J. Zakaria et al.: Lane Detection in Autonomous Vehicles: A Systematic Review
Journal papers, conference proceedings, and book sections on Humanized Computing, International Journal of Advanced
the subject were included in the research. Robotic Systems, Journal of Electrical Engineering and
Technology, Multimedia Tools and Applications, and IEEE
4) EXCLUSION Access ranks second with two publications per article.
Articles written in languages other than English were not Table 3 indicates the publications of lane detection in
considered. Therefore, the exclusion criteria included short conferences. The tables show that the Advances in Intel-
papers, such as abstracts or expanded abstracts, and sur- ligent Systems and Computing conference ranks first with
vey/review papers. five publications, followed by ACM International Conference
Proceeding Series, 2nd International Conference for Emerg-
C. LITERATURE COLLECTION ing Technology, INCET 2021, Chinese Control Conference,
The literature search was carried out by providing the search CCC, IET Conference Publications, and 2018 6th Interna-
strings for each database, as shown in Figure 2. These tional Conference on Control Engineering and Information
search keywords resulted in a total of 435 publications being Technology, CEIT 2018 which ranks second with two publi-
returned. Next, each database’s search results were evaluated cations per conference.
using predetermined inclusion/exclusion criteria. The initial Table 4 shows the publications of lane detection in book
screening excluded review articles and non-English journals. chapters. There are ten book chapters which are Advanced
After that, each manuscript was assessed based on its title, Structured Materials, Lecture Notes on Data Engineering and
abstract, and a short read of the content to determine if it Communications Technologies, Transactions on Computer
should be accepted or rejected. The number of articles was Systems and Networks, Image and Graphics, Lectures Notes
reduced to 158 after this filtration. Next, after removing in Network and Systems, Computational Intelligence in Data
duplicate papers, 114 publications were included in the full- Science, Databases and Information Systems, Lecture Notes
text review. For reasons such as publications that are not in Computational Vision and Biomechanics, Image and Video
available as full text and similar to the previous articles by the Technology and Computational Science and Technology.
same author, just a small number of enhancements are also
excluded. Then, 102 studies were chosen to be included in IV. DISCUSSION
this SLR. As discussed above, the steps to obtain the publica- To answer the RQs, each publication was thoroughly exam-
tions related to this SLR have been presented PRISMA. The ined with the necessary data extracted. It consists of the
Preferred Reporting Items for Systematic Review and Meta- primary approach and the type of dataset used in the study,
analysis (PRISMA) [13] are shown in Figure 2. whether self-collected or acquired from an online dataset.
Each publication focuses on the dataset’s collection and
III. RESULTS preparation for network training and testing. The findings for
Table 1 lists the chosen publications, the year of publica- each RQ in their respective sections are as follows:
tion, the source title, and the number of citations. About
102 publications have been listed in Table 1 with the state of A. WHAT METHODS HAVE BEEN APPLIED FOR LANE
references. The lists included journals, conferences, and book DETECTION IN AUTONOMOUS VEHICLES?
chapters. Figure 3 depicts the publishing distribution from This section explores several related studies on detecting
2018 to 2021. Every year, a growing tendency in the literature road lane markers. The strategies for lane detection can be
is visible in the yearly distribution displayed in Figure 3. categorized into two methods based on past research: Geo-
For example, in 2018, about 16 papers were published, and metric modeling/traditional approaches for lane detection and
25 articles were published in 2019. Meanwhile, 29 and ii) Artificial Intelligence-based techniques. These are out-
32 papers were published in 2020 and 2021. Next, from 2018 lined in further detail below:
to 2021, 48 articles were published in conference proceed-
ings, 44 in journals, and ten as book chapters, as shown 1) GEOMETRIC MODELLING/TRADITIONAL METHODS
in Figure 4. For example, in 2018, 11 conferences, three The pipelines used by most traditional detection algorithms
journals, and two book chapters were published. comprise image preprocessing, feature extraction, lane model
Meanwhile, for the coming year, 2019, 16 conferences, fitting, and line tracking. Image preprocessing aims to remove
eight journals, and only one book chapter on-road lane some of the noise from the image. Feature extraction employs
detection have been published. Next, 14 conference papers, lanes’ features to extract lane-like areas. The lane model is
12 journals, and three book chapters have been published then fitted and tracked using a variety of methods. Feature
for 2020. Finally, the number of conferences published detection is an essential lane detection algorithm that affects
in 2021 is down from the previous year, when just seven performance [10]. As a result, the preprocessing image phase
articles were released. In the meantime, journal publications is required in many traditional methodologies for determining
have climbed to 21, with four book chapters scheduled for the quality of features for lane detection tasks. The construc-
release in 2021. Table 2 shows the distribution of papers tion of an area of interest (ROI), image augmentation for
in journals. Sensors journal ranks first with five publica- extracting lane information, and removing non-lane details
tions, followed by the Journal of Ambient Intelligence and are all part of image processing. The ROI extraction method
FIGURE 2. Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) Diagram. The research identified
through four database searching was 435 publications.
efficiently reduces redundant information in the image pre- Table 5 shows the details of the feature extraction, line
processing section by selecting the lower portion of the model fitting, and lane line tracking approaches used in the
image [11]. Several studies have created ROIs using vanish- geometric modeling-based lane detection method. First, fea-
ing point detection techniques [11], [14]. Furthermore, ROI ture extraction methods include several techniques such as
creation minimizes image noise, although it is not resistant to perspective transform, thresholding, filtering, edge detector,
shadows or automobiles [11]. Extracting specific features to image district extraction, grayscale, clustering, neighborhood
detect lanes in the features extraction process, such as color, searching-based feature points, sliding window, morphologi-
edge, geometric, and so on [10]. Several techniques, such as cal operations, and heterogeneous operators.
Inverse perspective mapping (IPM)/Perspective Transform, Next, Line Model Fitting contains several approaches such
filtering technique, edge detection-based technique, image as LSD, fitting, BLVS, vanishing point, waveform, geo-
district extraction, morphological operators, neighborhood metric analysis, HS algorithm, CLAHE, RANSAC, graph-
searching-based feature points, grayscale, thresholding, and based, seed fill algorithm, KLT, Histogram analysis, MPC,
clustering. a region-based iterative seed method, ant colony optimiza-
In addition, heterogenous operators and sliding windows tion, SUPER algorithm, nested fusion, Lucas-Kanade optical
also have been used in the past to reduce the effect of noise flow, and linear regression. Meanwhile, three techniques have
and to extract lanes conveniently. been applied for line tracking approaches the Kalman filter,
The lane model is then fitted with the line segment detector lane classification, and parabola equation. Geometric mod-
(LSD) and fitting-based methodologies, including B-spline, eling/traditional lane detection approaches are used in much
quadratic, polynomial, parabola, hyperbola, and least square. literature, such as by D. Kavitha & S. Ravikumar [16].
Bresenham line voting space (BLVS), vanishing point, wave- The input image is first transformed into a greyscale image
form, geometric modeling, harmony search (HS) algorithm, from a color image. The noise is eliminated, and edge detail
contrast limited adaptive histogram equalization (CLAHE), enhancement is performed for the image preprocessing pro-
random sample consensus (RANSAC), graph-based, seed cedure phase. After converting to greyscale, the author used
fill algorithm, histogram analysis, model predictive control the adaptive median filter (AMF) to reduce/remove noise
(MPC), a region-based iterative seed method, ant colony opti- and then used the Laplacian-based technique for contrast
mization, scene understanding physics-enhanced real-time enhancement. After the preprocessing stage of the task is
(SUPER) method, nested fusion, and linear regression were completed, the edges in the image are recognized using the
used. The Lucas-Kanade approach, Kanade-Lucas-Tomasi Canny operator for the feature extraction stage. The Hough
(KLT), and Lucas-Kanade optical flow have matched the transform is used to fit the line model after the edges have
lane model. Meanwhile, the most extensively used algo- been detected. The Hough transform is commonly used to
rithms for tracking road lane detection are the Kalman filter, extract characteristics affecting the geometry of an input
lane categorization, and parabola equation. Tracking is often image. The lane is then detected using the hyperbola fitting
employed as a post-processing step to compensate for lighting technique. Ghanem et al. [12] also proposed a geometric
fluctuations [11]. As a result, tracking aids in incorrectly modeling-based method for detecting road lanes, including
detecting occlusion due to faulty lane markers [12]. image processing, feature extraction, line fitting model, and
FIGURE 3. Distribution publications for the year 2018-2021. The trend for the statistics of the published papers is increasing every
year. The graph show that the lane detection study is still relevant for the upcoming year.
FIGURE 4. The number of publications over the year from 2018-2021. The number of publications for journal articles and book
chapters has been increasing over the year. Meanwhile, the conference publications are fluctuating in these four year.
lane line tracking pipelines. First, the Region of Interest (ROI) Transform is used to extract the line segments. After that, the
is used in the image processing stage to remove another input is filtered using the standard deviation (SD) filter. This
object unrelated to the lane markers. In the feature extraction textural filter aids in the provision of local intensity variation
step, edges are extracted from the image using the Canny information. When the texture is smoother, the SD filter’s
approach, which is robust against noise. Second, the Hough response is smaller. As a result, the SD filter is employed in
TABLE 1. (Continued.) Chosen Publications, Source Title, and The Number of Citations.
TABLE 1. (Continued.) Chosen Publications, Source Title, and The Number of Citations.
this research to show the degree of pixel value variability in basic principle, the image is separated into three sections:
a region. This SD filter computes the SD of the pixels in the the background part, the suspected foreground fraction, and
vicinity of the pixel of interest. In addition to the SD filter, the foreground part. Following that, a practical multi-layer
the Gaussian filter can remove noise. This study uses least- evaluation function was constructed to implement the online
square fitting to fit the line model. Meanwhile, the Kalman adjustment of lane lines using the straight-line fitted by the
filter is used to accomplish the lane tracking procedure in this Hough Transform. Kasmi et al. [44] is another paper that
research since it helps to converge to actual values faster than proposed the traditional technique. Initially selecting the best
other methods. Region of In terest, the author used the conventional method
After that, Gong et al. [34] used the double threshold for detecting the road lane.
approach to preprocess the self-collected road image and Following choosing the most informative ROI, the
get the ROI. The region of interest, which includes lane RANSAC approach detects the segment within the ROI.
line information, is intercepted to reduce background inter- Finally, to track the road lane, the Kalman filter is used.
ference on the road and improve the algorithm’s real-time Next, Akbari et al. [19] used the geometric modeling tech-
performance. The grey value of the image is then processed nique, which uses the ROI for preprocessing and the Canny
utilizing image enhancement employing exponential function operator to extract the edge feature, and the Hough trans-
transformation. After a nonlinear grey change, the low grey form to filter out unwanted edges and lead to straight lines.
value background area becomes darker, while the lane line The vanishing point then filters out the image’s irrelev ant
area becomes lighter in color. As a result, the contour of the straight-line segments. As a result, the B-spline clustering
high-grey-valued area becomes more visible, and the contrast and IPDA filter is also utilized in this literature to detect the
improves. The method effectively increases the difference road lane efficiently. These methods are quick and easy to
between the lane line region and the background information, use but require manual parameters. Furthermore, while they
lowering the threshold selection difficulty. The image grey can function well in routine situations, they cannot adjust
value adjustment and image smoothing were carried out only to changing conditions such as lighting and occlusion [10].
in the significant region of the road to tackle the problems of Furthermore, while conventional lane detection methods are
lane detection taking a long time and having poor noise resis- frequently quick and straightforward and can meet real-time
tance. The modified Canny operator was then used to extract requirements, the road environment is constantly changing
the lane line edge. When the Otsu threshold was chosen, the due to weather, light, and cars. The findings are not qualified
Kalman filter technique was used to anticipate the ideal point with high accuracy [15].
in the following image series using optimized autoregressive
data processing features. The OTSU technique is an approach 2) ARTIFICIAL INTELLIGENCE
for determining the image binarization segmentation thresh- Artificial intelligence (AI) is the idea of computers, specif-
old proposed by Japanese expert OTSU. The high and low ically computer systems, imitating human intelligence
thresholds are supposed to be known. According to the OTSU processes. Expert systems, natural language processing,
speech recognition, and machine vision are examples of a complex environment that also fails to meet the total avail-
AI applications. AI systems generally absorb enormous ability restriction of such a core function [45]. The two
volumes of labeled training data, analyze it for correla- branches of AI-based methodology described in this paper are
tions and patterns, and use them to forecast future states. machine learning and deep learning-based techniques. How-
For example, machine learning and deep learning are the ever, deep learning has become more popular than machine
AI algorithms that detect lanes. Unfortunately, most tradi- learning due to its excellent performance in either classifica-
tional lane detection systems suffer from either processing tion or detection using image frames as input to the network
time that does not meet real-time needs or inefficiency in technique.
TABLE 5. Feature Extraction, Line Model Fitting and Line Tracking Techniques for Geometric Modelling-Based Method in Lane Detection.
In recognizing the road lane, the DL adaptation approach presented as a novel means of integrating this technology.
can be used in various ways. Several researchers advised This is a new proposed state-of-the-art technique that other
employing the DL methodology independently, and others researchers can investigate further.
suggested integrating it with another method. Incorporating
this network increases the network’s efficiency in detecting i) CONVENTIONAL DEEP LEARNING
the lane mark under challenging settings. DL + geometric Several works of literature built a lane detection system using
modeling, DL + ML, and DL + DL are examples of methods this article’s stand-alone deep learning-based technique. For
that can be combined with another. Aside from that, com- example, Wu et al. [29] proposed a convolutional neural
bining DL with an attention mechanism has recently been network-based method for recognizing lanes in driving video
images. The expectation line represents an autonomous vehi- ii) DEEP LEARNING + GEOMETRIC MODELLING
cle’s driving behavior in greater detail. Using the long short- Several researchers combine a deep learning-based method-
term memory-based approach, the predicted line is then used ology with geometric modeling methods to increase the effi-
to estimate the vehicle’s future trajectory. Due to prior infor- ciency of detecting the road lane. While training on manually
mation, autonomous cars may drive smoothly by combining a labeled data, deep neural networks have demonstrated their
convolutional neural network with long short-term memory- potential to reach competing accuracy and time complexity.
based techniques (convLSTM). However, the lack of segmentation masks for host lanes in
Similarly, Sun et al. [71] use atrous convolution and spatial adverse road environments limits the applicability of fully
pyramid pooling techniques to construct a new network- supervised algorithms to such a situation. To address this
based deep learning method for lane detection. LaneNet issue, Yousri et al. [23] propose combining classical computer
is used to build the network, consisting of one encoder vision techniques and deep learning approaches to establish a
and two decoders. The Embedding Decoder and the Binary reliable benchmarking framework for lane recognition tasks
Decoder are the names of the two decoders. The author in complicated and dynamic road scenarios.
uses a sequential mix of the Atrous ResNet-101 and the To begin, researchers tested an automatic segmentation
Spatial Pyramid Pooling (SPP) networks to replace LaneNet’s method based on a series of traditional computer vision
original encoder. Meanwhile, the Embedding Decoder and approaches. This technique generates appropriate weak labels
Binary Decoder architecture are similar, except for the num- by precisely segmenting the semantic region of the host
ber of output dimensions. The suggested lane detection sys- lane in the complex urban images of the nuScenes dataset
tem in [77] is based on the Drive Works LaneNet pipeline, utilized in this framework. To begin with, the checkerboard-
which uses camera images. This paper presents an integrated based calibration technique is used to correct distortion.
framework for autonomous driving based on the NVidia deep Then, using the vertical mean distribution (VMD) approach,
neural network multi-class object identification framework, an adaptive region of interest (AROI) is chosen. Finally, the
the lane detection framework, and the free space detection author employs the progressive probabilistic Hough trans-
framework. This framework can also be used for localiza- form (PPHT) to locate the lane region and calculate the van-
tion based on map matching, mapping, and path planning ishing point. To limit the undesirable consequences of such
in autonomous driving solutions. Finally, in [80], Philion off-lane information, filtering must be done by masking areas
proposes a revolutionary, utterly convolutional lane detection of the images. As a result, the author segments the road using
model that learns to decode lane structures instead of depend- an adaptive algorithm based on a horizon line. The Canny
ing on post-processing to infer structure. approach is then used to deal with the arbitrary lane shapes
Meanwhile, Dawam and Feng proposed a computer vision- discovered in the photos. Because the lane lines are parallel,
based road surface marking identification system in [46], straight, and of varying colors, image processing techniques
serving as an additional layer of data for AVs to choose retain and enhance these characteristics. Then, color space
from. The authors used YOLOv3 in the cloud to train the conversion and morphological processes ensure precise lane
detector to recognize 25 different road surface markings using segmentation. The morphological top-hat procedure is com-
over 25,000 images. The experiment results show that the monly employed to separate the image’s brighter portions
detection accuracy and speed are reasonably good. from their darker surroundings. In the photos, bright pixels
Traditional approaches based on handcrafted characteris- depict lane lines.
tics are less reliable and computationally expensive due to the As a result, top-hat operation aids incorrect lane iden-
lack of distinguishing features and several road occlusions. tification in unforeseen lightning variations by denoising
Muthalagu et al. [35] proposed stand-alone deep learning and enhancing contrast. After using the perspective trans-
to deal with this by learning both the lane markings seg- form, line fitting is required to complete the segmentation
mentation and the localization and geometry of each lane stage to identify the lane region and improve lane features.
in the form of critical points using a compact and efficient Next, a sliding window search is used to iterate over dif-
multi-stage Convolutional Neural Network (CNN) architec- ferent line shapes for more flexible fitting when dealing
ture. The proposed methodology combines a lane mask pro- with arbitrary forms. Finally, the images are unwrapped to
posal network with a lane key-point determination network the standard view using the inverse perspective transform,
to correctly estimate the key points representing the vehicle and ground truth labels are constructed using single-channel
lanes’ left and correct lane markings. Finally, Dewangan conversion. SegNet, Modified SegNet, U-Net, ResNet, and
et al. [37] suggested a semantic segmentation architecture ResUNet++ are five state-of-the-art FCN-based architec-
encoder-decoder network. A hybrid model based on UNet and tures trained and benchmarked using the data. The work’s
ResNet has been adopted in this direction. First, the image contributions include the first time ResUNet++ was intro-
was down-sampled, and the required features were identified duced on the lane detection task, where it outperformed
using ResNet-50 as a segmentation model. Then, UNet was the other tested models, and the introduction of a robust
used to up-sample and decode the segments of the images lane detection using an ensemble-based approach, as well as
using the detected features. testing the models by looking at the ensemble prediction of
the top three models in shadowy scenes and obscuring road these difficulties brought on by input variability. Because of
scenarios. its versatility, an STN can be introduced into any model area.
Traditional computer vision (CV) techniques are often They can also be trained using only one backpropagation
time-consuming, require more processing resources, and algorithm.
employ complex algorithms to analyze the lane images’ Zhan and Chen [73] suggested a lane line detection tech-
detailed properties. This research [24] proposes a deep con- nique based on image processing and deep learning based
volutional neural network (CNN) architecture that avoids the on the FPGA development platform to accomplish the fast
complexities of existing CV techniques to address this issue. lane line detection effect of structured roadways, with speeds
As a result, CNN is considered a viable method for lane up to 104 FPS. First, the camera captures road data, which
marking prediction, although improved performance neces- is then transferred to the FPGA as image data via the AXI
sitates hyper-parameter modification. An S-Shaped Binary protocol. This part aims to convert data into RGB24 format,
Butterfly Optimization Algorithm (SBBOA) is used in this including data format conversion and transmission interface
paper to improve the initial parameter setting of the CNN. conversion. The image from the camera is first subjected to
This method chooses the relative CNN parameters for precise data preprocessing, which provides for data format conver-
lane marking. The suggested SBBOA optimized CNN frame- sion and transfer interface conversion. In addition, an image
work extracts the lane’s pixel attributes before using the CNN processing approach that includes threshold segmentation,
architecture to predict the lane. In this study, each lane line inverse perspective transformation, and lane line quadratic
is considered as a specific circumstance. The SBBOA-CNN curve fitting is used to detect lane lines. The final output
classifier determines which pixel belongs to which lane and detection results are the curvature radius of the present lane,
turns that knowledge into a parameter description. the lane’s bending direction, the path and distance of the
Next, Kanagaraj et al. [25] show how to improve the vehicle deviating from the lane center, and so on. At the same
efficiency of autonomous vehicles by using Convolutional time, the lane line coordinates are provided to enable the lane
Neural Networks with Spatial Transformer Networks and line type identification module to intercept the identification
real-time lane detection. First, the pipeline converts a real- area dynamically. As a result, this study uses the deep learning
time image to grayscale and smoothes the edges with a Gaus- (CNN) method to detect lane markers and display the output
sian Blur to reduce noise. Applying a Canny function to aid image.
edge detection is the next step in the process. The edges in The authors of [101] present a new lane marking detection
the image are obtained after performing the Canny process system based on lane structure analysis and convolutional
by measuring the gradients of adjacent pixels. A significant neural networks (CNNs). The pavement that serves as the
change in gradients can identify an edge. Because the lanes background for the lane markers is first removed in a prepro-
will be found in the bottom half of the image, a region of cessing stage. Following that, a region of interest is created
interest is constructed that corresponds to that portion of the using a set of local waveforms from local images, and a CNN
image. A Hough transformation is used to obtain the image’s classifier is used to find lane marking candidates. Finally, the
lane lines in the next stage. A single long lane line separates lane geometry analysis stage determines whether the item is
the left and right lanes. This is accomplished by filtering the a lane marking. A map relative localization method based on
lines based on their slope to determine which lines belong to road lane matching [49] is developed. When GNSS data is
which range and disregarding the others. The left and right neither exact nor unavailable, the technique provides lane-
lanes for the region of interest are found this way. The next level location accuracy for autonomous vehicle driving. As a
step is to overlap the lane lines with the original image to lane detector, the DarkSCNN neural network was deployed.
combine the images. The camera calibration matrices and The inverse perspective transform processes the detection and
distortion coefficients are computed before performing a dis- fits it to the polynomial.
tortion correction to raw images and creating a threshold Meanwhile, the Modified Iterative Closest Point algorithm
binary image using color transform and gradients. After that compares two-point clouds: one created using HD-map data
a perspective transformation creates a bird’s-eye view of and the other using camera data. Furthermore, in [79], images
the image. Even when lane lines in an image are parallel, from a front-view camera are captured and fed into a seman-
perspective causes it to appear to converge from a distance. tic segmentation network to extract features for detecting
It is easy to remove the curvature of lane lines from this road lane markings. The network is first built using the
perspective. The convolution is then used with a sliding win- U-Net architecture, a convolutional neural network designed
dow to maximize the number of heated pixels in each window. for biomedical image segmentation. The Hough Transform
The Spatial Transformer Network (STN) then interpolates method is then used to determine the segmentation network’s
images using a learnable transformation that removes spatial output lines. Unfortunately, Hough Transform also produces
invariance. The STN block enhances the classifier’s accuracy a lot of lines from segmented images. As a result, the K-means
when used in a convolutional neural network. Due to input Clustering technique is investigated to compute and identify
changes, convolutional neural networks might suffer from a the best line for each road lane marking.
lack of robustness. Scale, viewpoint, and backdrop clutter are Then, using a combination of semantic segmentation and
examples of these variances. The STN aids in the reduction of optical flow estimation networks, Lu et al. [20] proposed
a fast and reliable lane detecting approach. The study was determined using the obtained outermost lanes. After that, the
divided into lane segmentation, lane discrimination, and defined ROI will be forwarded to the depth processing task to
mapping. First, a robust semantic segmentation network be processed further.
was developed for keyframe segmentation, and a fast and Finally, the literature in [105] introduced the model
slim optical flow estimation network was employed to track pipeline, which consists of three modules: binary semantic
non-key frames in lane segmentation. The density-based spa- segmentation, clustering, and curve fitting. The semantic
tial clustering of applications with noise (DBSCAN) was segmentation module analyzes pixels in an image to see if
used to identify lanes in the second part. Finally, a mapping they belong to a lane line or the background. The clustering
approach for translating lane pixels from the pixel coordinate module clusters the lane points to form different lane line
system to the camera coordinate system and modeling lane instances. When the instance segmentation is completed the
curves in the camera coordinate system is proposed, provid- perspective, transformation converts the image into a bird’s-
ing feedback for autonomous driving. eye view. Finally, a curve fitting technique precisely identi-
First, the preprocessing of input frames in [75] involves fies each lane line. To ensure excellent temporal efficiency,
removing most of the sky region and performing the auto- the author uses MobileNet as the backbone of CNN in the
mobile dashboard. The frame is then scaled to a resolution semantic segmentation module. Furthermore, MobileNet is a
of 360 × 480. This frame is then input into the lane marking valuable model for mobile and embedded vision applications
segmentation network, which segments out the visible lane since it uses depth-wise separable convolution. In addition,
marking pixels before using graph-based algorithms to detect the author clusters points that correspond to various lane lines
instances of segmented lane markings. using the K-Means clustering algorithm.
The instance segmented output is subjected to perspective
transformation (bird’s eye view), followed by an attentive iii) DEEP LEARNING + MACHINE LEARNING
voting-based clustering approach and polynomial curve fit- A machine learning-based strategy is also chosen to integrate
ting, which yields the final result. Finally, the author created with DL to boost the efficiency of lane detection tasks and
a lane segmentation network with stride convolutions and combine DL with the old method. Lane detection utilizing
stride deconvolutions with relu activation in hidden units road features-based algorithms and color feature-based algo-
using the deep learning method, a CNN-based methodol- rithms, according to Zhang et al. [50], cannot achieve satis-
ogy. The research [108] developed a Spatio-temporal, deep factory performance due to several constraints. For example,
learning-based lane boundary recognition approach that can the number of lanes is frequently not set, and techniques
detect lane boundaries accurately in real-time under complex for detecting lanes are sometimes erroneous. Furthermore,
weather circumstances and traffic scenarios. The algorithm Hough transform-based algorithms interpret straight lines
is divided into three parts: first, perform the inverse perspec- as lanes, leading to street lamps being mistaken for lanes.
tive transform and lane boundary position estimation using Similarly, adverse weather, such as rain, will impact lane
lane boundaries’ spatial and temporal constraints; second, detecting. Likewise, inadequate lighting and a night setting
classify the boundary type and regress the lane boundary will produce poor results. However, there are yet no practical
position using convolutional neural networks (CNN). Finally, solutions for dealing with such issues. As a result, standard
the author optimizes the CNN output and uses Catmull-Rom approaches are ineffective in detecting lanes in complex traf-
(CR) spline fitting to conduct lane fitting. fic situations. In addition, lane detection should be done in
Then, in [65], a comprehensive method for detecting lanes real-time. Most algorithms, however, fail miserably at this
and impediments on the road is proposed. A combination of goal. As a result, by modeling the sophisticated traffic situa-
deep learning and a traditional image processing framework tion, this literature provides a quality-guided lane recognition
was developed for detecting lanes. When the DL approach algorithm that can successfully manage various lanes. The
and the conventional method are combined, data collection author first uses chessboard images for camera calibration
time and effort are reduced while performance is maintained. to determine the correspondence between the real-world and
The author first proposed the LiteSeg network architecture. image coordinate systems. They then use prior knowledge
The acquired RGB image is the network’s input, and the and picture quality scores to capture image regions of interest
output is a lane segmentation map with two classes: lane that only include lane information. After that, they create a
and non-lane. MobileNetV2 is the backbone network with two-stage CNN architecture for lane detection that uses a
a depth-wise and inverted residual structure. However, the binary lane mask for lane matching. The author then created
LiteSeg network, which uses the MobileNetV2 backbone, a multimodel feature fusion approach for training an SVM to
cannot detect all lanes correctly. Because the acquired data classify image regions. From the lane and non-lane areas, the
contain a lot of noise and fragmentation, the author offers author created a 137-D multimodel feature by combining a
a Hough transform-based lane detection method to fix the 128-D histogram of gradient (HOG) and a 9-D color moment.
problem. In addition, the author creates a lane model using They then train an SVM to classify various locations. Next,
a quadratic polynomial to deal with curvy lanes. After that, they use a sliding window approach to build a set of additional
the resulting candidate segments are fitted into the lane regions from the image and SVM to select lane regions for
model using Polynomial curve fitting. The road ROI is then testing. Finally, using image segmentation, they train an SVM
to split the image into lane-information sections and non-lane a real-time resilient neural network improvement for active
information regions. lane detection (RONELD), using deep learning probability
Afterward, Feng et al. [89] combine DL and ML for lane map outputs to identify, track, and optimize active lanes. They
detection. Deep learning (5-Layer SegNet)-based approach adaptively extract lane points from probability map outputs,
is used first to detect the lane. However, as the segmentation detect curved and straight lines, and then use weighted least-
results show, there are segmentation uncertainties as to which squares linear regression on straight lanes to correct fractured
areas not belonging to the lane will be divided into the lane lane edges caused by edge map fragmentation in real images.
in specific single cycles and vice versa. Therefore, Bayes’ Finally, by tracking previous frames, the author hypothesizes
theorem is used to make the segmentation more stable. As a genuine active lanes. Finally, Pizzati et al. [58] proposed
result, an RBF-kernel SVM (Support Vector Machine) is also an end-to-end system based on two cascaded neural net-
tested. works that run in real-time for lane boundary identification,
clustering, and classification. They train a CNN for lane
iv) TWO SERIAL DEEP LEARNING boundary instance segmentation as a first step. Then, they
Traditional techniques have yielded significant results but extract a description for each observed lane boundary and
have limitations: (1) lane awareness is challenged by varying run it through a second CNN. Instead of lane markings, CNN
weather conditions and illumination. Furthermore, previous has been trained to recognize lane boundaries. Then, instead
methods lack a unifying framework for describing various of semantic segmentation, they use instance segmentation on
scenes and (2) the inefficiency of using photos owing to lane boundaries. Mask R-CNN, for example, is a cutting-
potential label noise. J. Liu [72] introduced a lane detec- edge network segmentation technique. ERFNet was also
tion framework for autonomous vehicles based on learn- chosen as their baseline model. As a result, this paper uses
ing a comprehensive reference quality-aware discriminative another CNN to classify each lane boundary, linking the
gradient deep model, which uses two types of deep net- identified boundaries with the ground truth. Furthermore, the
works. To detect the presence of a lane, the author first cre- architecture for this work is based on H-Net.
ates a gradient-guided deep convolutional network because
the gradient value of the lane edge is greater than that of v) DEEP LEARNING WITH ATTENTION MECHANISM
other regions. Then use the entire reference image quality In the past, state-of-the-art lane detecting algorithms have
assessment (FR-IQA) method to find more discriminative outperformed traditional methods in complex scenarios, but
gradient signals while also utilizing geometric characteristics. they also have limitations. For instance, only a certain num-
Following that, a recurrent neural layer reflects the spatial ber of lanes can be spotted, and the cost of detection time
distribution of identified lanes using difficult-to-define visual is sometimes prohibitive. Human vision’s attention mecha-
cues. Finally, the noisy features are abandoned using the nism and methods make network learning more concerning.
sparsity penalty, and only a small percentage of the tagged Zhang et al. [9] presented a real-time lane recognition system
images are used in this paper. Next, Zou et al. [126] propose based on an attention strategy to address this issue. The
a deep hybrid architecture that combines the convolutional proposed network comprises an encoder module that extracts
neural network (CNN) with the recurrent neural network for lanes’ features and two decoder modules, a binary decoder
lane detection using the same strategy (RNN). A CNN block and an embeddable decoder, that forecast lanes’ instance
abstracts information from each frame. The CNN features feature maps. The author employs biologically inspired atten-
of several continuous frames with time-series properties are tion in the encoder to extract features holding a wealth of
subsequently sent into the RNN block for feature learning and information about the target area. A correlation between
lane prediction. the characteristics produced through convolutions and those
Pihlank and Riid [69] introduced a novel neural network- extracted by attention is developed to learn the contextual
based method that integrates autoencoder structural compo- information. The contextual information is combined with
nents, residual neural networks, and densely linked neural features from up-sampling in the decoder to compensate for
networks. The proposed architecture consists of three identi- the lost detailed information. The binary decoder assigns each
cally structured connected neural networks that combine the pixel to two categories: lane or backdrop. The distinct lanes
architectures of symmetrical AE (with dimension reducing are obtained by using an embeddable decoder. The binary
encoder and expanding decoder), ResNet, and DenseNet, decoder’s outputs are then used as one of the inputs to the
with feature map concatenation providing shortcut connec- embeddable decoder, which directs the production of exact
tions between encoder and decoder layers. Z. M. Chng et al. pixel points on the lanes.
presented two state-of-the-art algorithms, SCNN + RONELD Li et al. developed a unique Lane-DeepLab model for high-
and ENet-SAD + RONELD, in [55]. Furthermore, as this definition maps [15]. Two new features are included in the
research indicates, convolutional neural networks (CNNs) suggested method: 1) It optimizes the encoder structure by
are used to train deep learning models in recent state-of-the- adding an attention module to the ASPP module; 2) It uses
art lane detecting algorithms. While these models perform the SEB to merge high-level and low-level semantic informa-
admirably on train and test inputs, they perform poorly on tion to obtain more great features. Furthermore, in compli-
unknown datasets from various contexts. This study proposes cated scenarios with changeable weather, the proposed model
employs the attention mechanism and contextual seman- encoder and decoder outputs to make the most of the global
tics to fuse information to determine the lane line for the context information. There are two branches in the decoder’s
environment. final layer: binary branch and embedding branch. This study
Munir et al. [11] combine the deep learning-based algo- generates the binary branch and embedding branch outputs
rithm with the attention mechanism to detect the road using two convolutional layers with a 1 × 1 kernel. The
lane. Lane detection with a dynamic vision sensor (LDNet) binary branch produces semantic segmentation. The embed-
is suggested in this paper, which is constructed as an ding branch makes a three-channel map, meaning each pixel
encoder-decoder with an atrous spatial pyramid pooling block has a 3D embedding vector. The segmentation map output is
followed by an attention-guided decoder for predicting and utilized as a mask, and the mask is applied to the embedding
decreasing false predictions in lane detection tasks. There is map to generate only the lane pixels embedding the map.
no need for a post-processing step with this decoder. The The author then applies mean-shift clustering to produce
authors suggested LDNet, a novel encoder-decoder architec- clusters for each lane and the actual outcome of instance
ture for detecting lane marking using detailed event cam- segmentation. As a result, the lane model is fitted using cubic
era images. LDNet simplifies full-resolution detections by spline interpolation.
extracting higher-dimensional features from an image. The
authors also added an ASPP block to the network’s core, B. WHAT EQUIPMENT IS BEING USED TO COLLECT THE
which increases the feature map’s appropriate field size with- DATASET FOR THE TRAINING PROCESS?
out increasing the number of training parameters. Addition- The input data is the most critical aspect for detecting the
ally, adopting an attention-guided decoder increases feature road lane. Moreover, dataset preparation is essential for the AI
localization in the feature map, obviating post-processing approach, especially during training. As a result of the great
requirements. dataset preparation in the network model, autonomous cars
Furthermore, lane detection is essential in advanced driver can manage behavior and make judgments. After reviewing
assistance and autonomous driving systems. However, lane the journal, paper conferences, and a few book chapters,
detection is affected by various conditions, including some numerous works of literature contained self-collection of data
problematic traffic scenarios. The ability to detect multiple and were also done online. In addition, some researchers
lanes is also critical. R. Zhang et al. [10] presented RS-Lane, compile their dataset for AI training only, then compare it
a lane recognition method based on instance segmentation, to a publicly available benchmark dataset. On the other hand,
to address these issues. This approach is built on LaneNet several researchers only use self-collect data for training and
and takes advantage of ResNeSt’s Split Attention to increase validation. Meanwhile, several researchers have relied only
feature representation on slender and sparse annotations such on the public dataset for training and validation. In road
as lane markings. Self-Attention Distillation is used in this lane marking, radio detection and ranging (radar), a cam-
paper to improve the network’s feature representation capa- era, a global positioning system (GPS), and light detection
bilities without adding inference time. The input photos can and range (LiDAR) have all been used for the self-collect
be correctly processed in the preprocessing module, making dataset [23]. Other than that, there are also data from the
it easier to extract features later. The driving image and online simulator collected in various works of literature.
associated annotation are translated to a standard format used This subsection will describe the details of equipment
by the model. The annotated data are utilized for training the implementation for self-collect data in lane detecting.
network to achieve lane segmentation in the model training In 2018, 13 published articles used cameras, and one pub-
step. Denoising and fitting are used in the post-processing lished paper used a simulator for data collection. Next,
stage to obtain the final results from the model’s output. in 2019, 15 published papers used cameras, and one published
The network employs the encoder-decoder framework to paper used a simulator and radar for data collection, respec-
conduct semantic and instance segmentation simultaneously, tively. Furthermore, in 2020, about 12 published articles used
as proposed by LaneNet. The encoder’s backbone is ResNeSt, the camera to collect the dataset. Meanwhile, one paper pub-
which presents a Split-Attention mechanism. As a result, the lished utilized lidar, OpenStreetMap, and HD map to collect
authors add two more DAS lines to the network to improve its datasets, respectively. Finally, by 2021, about 13 articles used
feature extraction capabilities. SAD allows a network to learn a camera, and one paper used an HD map to acquire the data
from itself without external data. The lower layers can learn set.
the higher feature representation by mimicking the attention
maps of the higher layers. Because the lower layers’ ability to 1) CAMERA
represent features increases, the higher layers, and the entire To begin, the camera can be used to extract road markings.
network benefit. As a result, various cameras have been used, including
As a result, the decoder executes a deconvolution oper- webcams, Wi-Fi sports camera sensors, Kinect cameras,
ation to decode the encoder’s feature maps and performs smartphone cameras, monocular cameras, and stereo vision
upsampling and classification. The decoder has five lev- cameras. Monocular cameras are a cost-effective choice;
els that correspond to the encoder’s layers one-to-one. The however, they don’t provide depth information. On the other
author used Unet’s skip-connect approach to concatenate the hand, stereo vision cameras allow for the inference of depth
information and hence the reconstruction of three-dimensional Toyota Prius autonomous driving research prototype vehicle
scenarios for increased functionality, such as collision detec- with Nvidia Drive PX 2 and a Sekonix GMSL Camera was
tion [19]. Furthermore, the reliability and ability of cameras used by Kemsaram and Das [77]. In a car, A GMSL connector
to record every circumstance of the road environment in connects a Sekonix GMSL Camera to a Drive PX 2. Drive a
any direction have recently been enhanced [23]. Therefore, vehicle that has the PX 2 in the trunk. The Sekonix GMSL
vision sensors are also becoming more effective and less camera is mounted near the rear-view mirror, behind the
expensive due to current deep learning algorithms [37]. How- front windshield. The data set includes multi-frame images
ever, despite the prevalence of camera sensors, deep learning sampled from the driving video.
algorithms offer a high degree of generalization and learn the The video has a frame rate of a vertical resolution of
crucial elements of the driving environment across multiple 720 pixels and a width resolution of 1280 pixels with
layers. 30 frames per second. Next, the images are dissected and eval-
According to the literature, most researchers utilize a cam- uated. However, the quality of several pictures is poor due to
era to detect lane markings. The literature recommended the lighting and brightness. This emphasizes the significance
using the camera to self-collect data: Khan et al. [110] used of lane prediction. Therefore, the training image sample rate
the camera to acquire data. The road image was recorded is quite significant. The continuous visuals may be highly
with a single camera sensor to detect the road marking on the similar if the pace is high, rendering the model meaningless.
vehicle’s front side. As a result, a smartphone camera was As a result, just one image out of ten is chosen for the training
placed on the front windshield of the experimental car. The dataset. Therefore, the training data set should increase the
datasets used in this study were from videos captured with a lane detection model’s identification performance. In addi-
Samsung Galaxy Alpha smartphone (SM-G850F). The image tion, the training set should include more images of the
was captured at 30 frames per second mode without video curving lane. To begin, more images with curving lanes are
stabilization and had a 1920 x 1080 (.mp4) pixels resolution. extracted from the video. Then, the images with the least
The total number of videos applied in the experiment is 15, pixels are chosen. These images are also altered to create new
with 22,500 photos retrieved from them. The images were ones.
taken under various imaging situations, including lighting, The authors then employed a random sample of Zibo
traffic, and climate. The host vehicle was driven according to city road datasets consisting of three road scenarios: shadow
the two-second safety guideline during data collection. Main- occlusion, lane line wear, and bright illumination [34]. The
taining a safe following distance is critical when driving a car, visual data set in every road situation is collected in the video,
and autonomous driving requires that distance to be estab- which contains about 800 images of typical road scene graphs
lished. As a result, the two-second safety guideline criterion selected from the collected footage. In addition, 2400 graphs
is utilized to verify a safer following distance at any speed. are used in computer simulation investigations. The frame
According to the rule, any vehicle in front of the driver’s had a resolution size of 512 × 682 pixels. As a result of
car should be kept at least two seconds behind the driver’s the camera specifications, all of the original images in the
vehicle. Therefore, about 22500 images of roads were taken at experiment are greyed out. To reduce the vehicle’s hindrance
various times of the day and night, with varying lighting and on the camera view.
occlusions such as shadows, intricate backgrounds, traffic, [100] uses a camera positioned 21.5cm above the center
light rain, rains, and snow. Images with an after-rain effect of the rear axle and 27cm in front. The test data is acquired
can also be obtained. The dataset was taken with a camera while the automobile is driven manually to follow the track’s
installed on the dashboard, and the data gathering took place lane. Although the data is captured at 60 frames per second
in Selangor and Kuala Lumpur. The remaining images in the (fps) using the test platform’s onboard camera, the evalu-
dataset (light rain, rain, after rain, snow) were collected from ations are performed offline to ensure a fair comparison.
the internet. They were recorded throughout the day and night Finally, in [106], the author employed video sequences with
under various lighting conditions obstructions and consisted 1225 frames with a resolution of 640 × 480 pixels of com-
of reflection effect complicated background. plex metropolitan streets, which incorporate difficult traffic
Next, Liu et al. [53] deliberately chose roads with shadows, situations such as diverse pavement types, passing cars, faded
tire skid tracks, and noise. Around Lafayette, Indiana, the writings, and numerous shallows. In addition, after rain, the
author filmed local roads and Interstate Highway 65. Each author collected a new dataset to test the robustness in various
video clip is about 15 seconds long, allowing the images climates. There are 1706 frames in total in these databases.
captured to focus on the desired road features. The video was A Mobileye camera vision sensor was placed ahead
segmented once the data was collected, and the images were of the window shield in [127], and it had a vari-
extracted every six frames. In the end, 23,088 useful photo able updating rate of 50 to 130 milliseconds. The yaw
bits were gathered. Bhupathi and Hasan Ferdowsi [47] also rate sensor, which was mounted near the vehicle’s cen-
use a camera to capture videos. Utilizing the multiple sliding ter of gravity and updated every ten milliseconds, was
window method, the accuracy of lane detection is assessed on used. Each wheel had its speed sensor updated simul-
four video sequences. The camera’s position should be fixed taneously with the yaw rate sensor. A Micro AutoBox
and usually expected to be in the vehicle’s center. Next, a DS1501 additionally controlled the car from dSPACE Inc.,
circumstances, it is practicable to detect whether a LiDAR in [49]. Other elements such as road signs and traffic lights
beam has intercepted asphalt or road painting [128]. This is are included, but only road lanes are used in this publication.
especially useful when dealing with shadows and darkness, When a new camera frame is received, the author queries all
which cameras have trouble handling. Furthermore, LiDAR lanes from the HD map within a given radius of the most
provides a centimeter-accurate three-dimensional picture of recent position estimation. This study, for example, used a
the world. LiDAR, on the other hand, it’s more costly than distance of 20 meters. Because lane line detection takes time,
cameras. Nonetheless, advancements in optical technology the author should employ the stance when the camera is
and rising demand will lower the price of LiDAR. triggered. The road lane matching module uses information
from the front camera to detect lanes and a slice of an HD
3) SIMULATOR map near the most recent localization estimate as input. The
Little research in lane detection uses simulators to collect module determines the best modification for aligning camera
data for training and validation. For example, L. Tran and lanes to the HD map with the slightest error. The algorithm
M. Le [129] used a dataset of around 4000 training images utilized is the Iterative Closest Point algorithm.
to train a segmentation model for 20 hours, with 2000 images
annotated. The information comes from the CARLA simula- 6) OPENSTREETMAP
tor. Besides that, the training dataset for Unity3D simulation OSM datasets have been employed in intelligent transporta-
is then collected by M. C. Olgun et al. [107]. A setting was tion systems for various purposes, including road-level local-
built that resembled the author’s real-life roads. An AI con- ization [130] and lane-level determination [131]. Road
troller language in an automobile allows it to appropriately detection utilizing images obtained from a camera relies on
follow waypoints between lanes in a given scenario. Frames road priors and contextual information. First, the road back-
representing the car’s maneuver are saved in a jpg file; mean- bone is built using an OSM map based on the number of
while, image routes, speed, and steering information are kept lanes and lane width. The image is then projected with this
in CSV format. This dataset’s loss value is more consistent road geometry, considering the uncertainty associated with
than the manually collected dataset. The lane tracking train- the ego-vehicle stance. Finally, before the detection of a lane,
ing dataset contains 12531 authentic images supplemented the result is used.
with 20000 images. Next, in [51], the author employed a The study in [132] uses OSM data before creating a more
pioneer robot vehicle to mimic two different track settings. precise map. After that, the authors provide OSM data and
The program finds the lane using this visual input from the proprioceptive sensor fusion architecture. In the meantime,
Gazebo simulator. Based on lane identification findings and a similar approach derived from OSM was used to identify
Matlab output, it calculates the vehicle’s angular and linear ego-lane marking in LiDAR point clouds [44]. Nodes, Ways,
velocity. and Relations [44] are the three crucial components of OSM
data. Nodes are the geometrical elements that represent GPS
4) RADAR positions. For example, the roadways network is defined by
A high-resolution automotive radar prototype is utilized to byways, a detailed list of nodes. As a result, each way (road)
collect data in [12], [13], and [89]. The modulation mode of is made up of segments [130]. In other words, being a part of a
this radar sensor is FMCW (Frequency Modulated Contin- segment is similar to being a part of an OSM Way. As a result,
uous Wave). The baseband signal can calculate range, rela- the map matching problem should be recast as matching a
tive radial velocity, object angle, and reflection magnitude. GPS point to a segment. As a result, the author employs the
The signal processing chain begins with a 2-dimensional map-matching technique described in [130] to select the best
FFT (Fast Fourier Transform), CFAR (Constant False Alarm path (road). However, as discussed in this literature, the OSM
Rate), peak detection, and the maximum likelihood angle esti- data lacks precise information.
mation technique. The axis of the estimated azimuth angle is
evenly spaced. The detecting sites’ positions and the object’s C. WHAT WAS THE DATASET USED FOR THE NETWORK
range will fit into a fan-shaped grid-like pattern. TRAINING, VALIDATION, AND TESTING?
TuSimple [75], KITTI, Caltech, Cityscapes, ApolloScape,
5) HD MAP and CULane datasets are online road scene datasets or bench-
The dataset for lane recognition from HD maps is self- marks that provide training data for various uses. In this
collected in several research. As a navigation back- section, several popular public datasets will be discussed.
end, all commercial autonomous vehicles use accurate The network must be given a meaningful dataset to operate
high-definition maps with lane markings. However, the efficiently [107].
majority of high-definition maps are currently produced man-
ually. The generation of high-definition maps for autonomous 1) TUSIMPLE DATASET
driving using auto-assisted multi-category lane recogni- The TuSimple dataset is a publicly available traffic-detection
tion [15]. The HD map is defined as a map that consists data set (light traffic and clear lane markings). Its label for the
of the precise coordinates of road lanes in the Universal training dataset consists of continuous lane curves that start at
Transverse Mercator (UTM) coordinate system, as described the bottom of the input image and continue until the horizon
passes over the vehicles [75]. It consists of large datasets with and comparison in the tests for each frame with ground
training, and the testing number is 326 and 2782 in both bad truths labeled. The TuSimple lane dataset consists of 3,626
and excellent weather conditions, respectively [35]. They are picture sequences. These are highway driving scenes from the
recorded at various times of the day on two road lanes, three driver’s perspective. Each sequence contains 20 uninterrupted
road lanes, and four road lanes or extra highway roadways. frames captured over the one-second time frame. The last
The resolution of these RGB input images is 1280 × 720 frame of each series, i.e., the 20th image, is labeled with lane
pixels. Each image also includes the 19 frames with the annotations. In addition, this literature adds labels to every
unlabeled dataset. The annotations are JSON format, show- 13th frame in each sequence to augment the dataset. Finally,
ing the lanes’ x-position at different discretized y-positions. [105] used the TuSimple lane dataset on the lane detection
The literature that used the TuSimple dataset for training or task to train and test deep learning-based techniques.
validation has been discussed in this section. In their research,
Y. Sun et al. [71] utilize this public lane detection dataset. The 2) KITTI DATASET
author generates ground truth instance segmentation maps by The KITTI [133] benchmark is also popular data for road
drawing lines along with the pixel coordinates of each lane. scenes. It contains various information regarding the road
The lines have a thickness of 15 pixels. scene, including color pictures, stereo images, and laser point
In addition, different labels are assigned to various lanes. data. Jannik Fritsch and Tobias Kuehnl of Honda Research
The author divided the dataset into three parts: a train set Institute Europe GmbH generated the KITTI Vision bench-
with 3268 images, a validation set with 358 images, and a mark dataset [133]. There are 289 training and 290 test
test set with 2782 images, respectively. Next, the TuSimple images in the road and lane estimate benchmark. Urban
dataset was utilized by Lu et al. [20] to validate the pro- unmarked (UU), urban multiple marked (UMM), urban
posed lane detection model. The dataset employed in this marked (UM), and hybridization of the three categories are
study has good visual clarity, no blur, and a low detection the four categories that the pictures of road scenes fall into.
difficulty. Besides that, the TuSimple dataset is also used in The training dataset consists of 98 images; meanwhile, the
experiments carried out in [24]. In this study, the TuSimple testing dataset consists of 100 images. Ground truth was
dataset contains almost 7000 video segments. Each video created in the KITTI dataset by manually annotating the
clip comprises twenty frames in total. Seventy percent of the images. It is offered for two types of road terrain: the road area
videos are used for learning in the network, twenty percent (all lanes combined) and the lane (the current lane where the
for validation, and 105 for testing. In detail, the training, vehicle is traveling). For example, Shirke & Udayakumar [54]
validation, and testing sets contain 4900, 1400, and 700 video employed the KITTI dataset for region-based segmenta-
clips, respectively. The sample of TuSimple datasets images tion using an iterative seed approach for multilane iden-
was taken in a variety of climatic factors. Next, Pizzati et al. tification. Aside from that, in another article, Shirke and
[58] used this dataset, which consisted of 6408 images with Udayakumar [66] also used the KITTI vision benchmark
a resolution of 1280 × 720 images divided into training and dataset in their experimentation. Next, this public dataset
testing datasets with 3626 and 2782 images, respectively. The KITTI also was then used to validate the algorithm’s per-
TuSimple dataset is unique because it annotates complete formance in [37]. Last but not least, P. Lu et al. [27] used
lane boundaries instead of lane markings. As a result, this the benchmark’s testing dataset to validate the suggested
dataset is perfect for this research. technique.
Moreover, this dataset is used as a training and test-
ing dataset in [72], with about 3600 training images and 3) CALTECH LANE DATASET
2700 testing images. The author stated that the TuSimple This dataset [134] contains four video clips captured through-
dataset comprises a variety of weather scenarios and is a out Pasadena, California, at distinct intervals of the day.
massive dataset for measuring lane detection performance. The resolution of each video clip is 640 x 480 pixels and
Furthermore, this literature presented a strategy using the includes varying lighting and illumination situations, lane
spatially convolutional neural network (SCNN) method [19]. markings, sun glint, pavement types, shadows, crosswalks,
Although the TuSimple dataset includes various road situ- and congested environments. In addition, this dataset also
ations, including straight lines, curving lanes, splitting and consists of urban streets, both straight and curved [101] was
merging lanes, and shadows, only straight and curvy lane tested using the Caltech [134] dataset. Aside from that, the
scenarios were employed in this study. proposed methodology by Akbari et al. [19] was compared
This dataset was also utilized in the literature [35] to to two model-based methods using the Caltech Lane dataset.
evaluate their strategy. Next, Chng et al. run lane detection The author used about 1,224 labeled datasets in this literature,
experiments on the TuSimple test sets in [55]. According with 4,172 lanes extracted from four video clips collected
to the literature, this dataset is relatively simple, taken dur- from numerous urban roadways.
ing the daytime along highways in excellent or moderate
weather, and contains ground truths annotated on the last 4) CITYSCAPES DATASET
frame of each twenty-frame clip. The author manually selects Cityscapes’ high-resolution and finely labeled training
the lane markers demarcating the active lane for detection images [135] are well-known. On the other hand, this data
offers semantic segmentation labels but not lane informa- before 2018. This dataset contains 3626 training photos and
tion [28]. Next, the author of [15] uses the Cityscapes dataset 2782 testing images on highway roads. It is intended for
to test the network for broad semantic segmentation and ego-road lane recognition; however, it does not distinguish
multi-category lane line semantic segmentation tasks. The between lane marker kinds or offers space between lanes.
semantic comprehension of urban street sceneries from the TuSimple, on the other hand, is a simple dataset collected
perspective of a car is the focus of this literature. The collec- during the daylight along highways in excellent or moderate
tion contains 5000 photos with high-quality pixel-level anno- weather, with ground facts only labeled on the last frame
tations with 2975 training datasets, 500 validation datasets, of each clip of twenty frames [18]. Caltech is the second
and 1525 test datasets. most used dataset for lane detection. The Caltech Lanes
dataset contains four video sequences (or sub-datasets) in
5) APOLLOSCAPE DATASET urban settings, totaling 1225 images, which have been used in
Apollo has six separating markings, four guiding markings, some previous research [6], [9], [13], [21], [28]. Aside from
two stopping lines, 12 turning markings, and other pixel- that, the Kitti and CuLane datasets are well-known online
level lane markings and lane characteristics [28]. With about datasets for lane detecting tasks. The Kitti road has two sorts
19040 photos, this is a vast data collection (training sets of annotations: road segmentation, which covers all lanes,
are 12400, validation sets are 3320, and test sets are 3320, and ego-lane, which designates the lane in which the car is
respectively). In addition, a stopping line, a zebra line, a sin- presently moving. For examples of past research that used this
gle solid line, a single dash line, a double solid line, and dataset, see [21], [28], and [32]. CULane, on the other hand,
other semantic segmentation information can also be found features various challenging driving circumstances, including
on the road. However, some ground area near lane lines congested roads or highways with low lighting. As a result,
is easily mistaken for lane markings [28]. The following it is rarely preferred by researchers for detecting the lane.
works used the ApolloScape dataset for training and testing. [1], [8], [11], [16], [18] are some of the algorithms that
For instance, in [15], the author analyses the network for use this dataset. Some CULane frames lack lane markers
generic semantic segmentation tasks and multi-category lane (for example, crossing traffic light crossings) [18].
line semantic segmentation using the ApolloScape dataset. Furthermore, there are usually three sets of dataset par-
As the author knows, this dataset is challenging to work with tition for training the models: training set, validation set,
because it includes high-quality pixel-level ground truth of and test set. The training set will be used to fine-tune the
over 110 000 frames and lane elements such as six separating model’s parameters. Meanwhile, the validation set (which
markings, four guiding markings, two stopping lines, and can be ignored if just one model is supplied) and the test set
12 turning markings, among others. Furthermore, the author (which will be used to quantify the model’s accuracy) will
employs multi-class training in this experiment. ApolloScape be used to compare alternative models applied to that data.
offers three different datasets; however, they only used one for Normally, the proportions of these partitions are 70/20/10.
the lane detection task in this literature. The divisions of the dataset from multiple prior studies pro-
vided in this study were presented in this portion of the SLR,
6) CULANE DATASET as illustrated in Table 7. The division of the dataset consists
The CULane dataset can be considered more challenging, of a training set, validation set, and test set in percentages.
and many datasets include normal conditions and eight com- The popular dataset, such as TuSimple, are mostly divided
plex settings such as crowded, night, and online. On the into 60% training and 40% testing set. Meanwhile, the Kitti
other hand, the TuSimple dataset is more straightforward dataset is divided into 50% training and 50% testing. Next,
than CULane. Therefore, several frames in CULane lack the NuScenes data set is divided into 90% training and 10%
lane markers (e.g., at light traffic crossroads). The studies validation. Therefore, there is also a dataset used by previous
in [55] were carried out on test sets of one of the most widely researchers where the data distribution is not the same. For
used and extensively utilized lane detection datasets [5]. The example, CULane dataset distributed to 60% training, 10%
CULane train set is used to pre-train the models. Furthermore, validation and 30% testing [36], 65% training, 10% valida-
this dataset comprises several challenging driving scenarios tion and 25% testing [80], and 75% training and 25% testing
and ground truths annotated on all frames (e.g., congested set [55]. The previous CamVid, dataset has been divided into
city streets and night scenes with poor lighting). Besides, it is 80% training and 20% testing [65], 60% training and 40%
a simple dataset collected during the daytime along highways testing [119].
in excellent or moderate weather.
Most public datasets for lane detection, such as TuSimple, D. LEARNING OUTCOMES FROM THE RQs
Caltech, Kitti, CULane, and Cityscapes, are currently pro- According to the literature analysis, it is shown that in just
posed for urban roadways. The TuSimple is widely used in the four years, the development of the lane detection task from
literature, as evidenced by the publications chosen. It is the the traditional-based method, which requires many pipeline
most often used dataset among academics in lane detecting processes, to the existence of the Artificial intelligence field,
studies. Tusimple has been used to test many algorithms [1], which is an intense learning-based strategy, the study will
[5], [20], [21], as it was the largest lane detection dataset be easier and more efficient. For instance, deep learning
algorithms have a high degree of generalization and learn repositories exist for a dataset on lane detection, such as the
essential aspects of the driving environment. However, there TuSimple dataset, KITTI vision benchmark dataset, CULane
is always space for development in speed and accuracy, par- dataset, Cityscapes dataset, and Caltech dataset. This dataset
ticularly in adverse weather situations, when applying the is straightforward, has a variety of image situations, and has
deep learning-based approach. Thus, several works of litera- already been labeled for the training dataset. TuSimple is
ture have advocated the integration of this method. Therefore, the most popular dataset since it incorporates different road
integrating DL and attention mechanism becomes a state- conditions, including straight lines, curving lanes, splitting
of-the-art approach still new in this field as it just began and merging lanes, and shadows. Not only that, but the
to introduce in 2020. Therefore, only a few studies in the TuSimple dataset also includes lane detecting images with
literature have studied lane detection using deep learning lower illumination.
and the attention mechanism. The attention mechanism was Furthermore, the TuSimple dataset collects data from roads
previously utilized primarily in natural language processing in fair or moderate weather, with two lanes/three lanes/or
(NLP), but it is now broadly used in computer vision, partic- more lanes, and a variety of traffic scenarios, including clear
ularly in the medical field. Thus, it can be explored more in lane lines with excellent image quality, no blur, and rela-
the automation field. tively simple identification challenges. Unfortunately, even
Next, the self-collected dataset can be acquired using var- though several ready companies with Level 5 autonomous
ious sensors, including cameras. It has been found that the cars are claimed, the available data for extreme conditions is
camera is the most popular sensor for lane detection appli- still limited. The learning results from RQs 1, 2, and 3 are
cations. This is because cameras have improved in reliability summarised in Table 6. The table contains the technique
and are likely to capture any situation on the road from any deployed for 102 selected publications with the dataset type
angle. In addition, vision sensors are becoming more effective and equipment for the self-collect dataset.
and cost-efficient due to recent deep learning techniques.
Moreover, due to the widespread use and efficiency of cam- E. GENERAL DISCUSSION ON ADDRESING THE SPECIFIC
era sensors, deep learning algorithms can learn the crucial ISSUES BASED ON COMPUTER VISION TECHNIQUE
features and characteristics of the driving environment across Most geometric modeling/conventional approaches rely on
multiple layers in the model. Next, there are primary benefits or follow pre-processing feature extraction, lane model fit-
of using the camera. This sensor delivers extensive informa- ting, and lane tracking to detect the lane. For lane detec-
tion about the surroundings and is currently the most cost- tion tasks, image pre-processing is required to determine
effective and dependable method for automotive applications. the quality of features. In addition, this approach needs
Besides the camera, LiDAR, radar, HD map, simulator, and manually alter the parameters, although this procedure is
OSM are also used as equipment for data collection. It is efficient and uncomplicated. Furthermore, previous methods
due to the camera sensor being affected by light conditions, based on handcrafted features to detect lanes are limited in
which necessitates a filtering process. Therefore, during the scenarios using edge, texture, or color information, which
data collection using the camera, the driver must ensure that requires complicated post-processing modules to perform.
the range distance between the experiment vehicle and the Likewise, in many complex procedures, these approaches
front car is always suitable. Then, the same range distance function inadequately. Therefore, traditional computer vision
for better quality input image will be obtained. Other than (CV) techniques are time-consuming and resource-intensive
that, there are dangerous and consumes time to collect the and rely on complicated algorithms to analyze the delicate
dataset using a camera, especially during the rainy/monsoon aspects of lane images. In addition, the number of lanes is
season. Especially in Southeast Asia, there is a time when frequently not fixed, and techniques for detecting lanes are
heavy drop rain will continue for the whole week. In addition, sometimes erroneous. Straight lines, for example, are treated
it is difficult to collect the dataset in an urban area at a specific as lanes by Hough transform-based algorithms, which may
time, for example, during peak hours, when there would be cause street lamps to be mistaken for lanes.
many vehicles on the road and stuck in traffic jams. However, Furthermore, poor weather, such as rain, will impact lane
cameras are less expensive than LiDAR. Meanwhile, OSM detection. Likewise, inadequate lighting and a night setting
data is devoid of precision information. will produce poor results. However, there are yet to be
Next, the simulator is commonly used for modeling lane practical solutions for dealing with such issues. As a result,
detection and used as equipment for data collection. There conventional approaches are ineffective in detecting lanes
are several advantages when using the simulator to collect the in complex traffic situations. In addition, it must work in
dataset for training, testing, and validation. One of the advan- real-time. Most algorithms, however, need more of this pur-
tages is that it is not time-consuming and non-dangerous pose. Therefore, traditional techniques have yielded signifi-
because it is not involved with the physical and natural envi- cant results. However, they have several limitations: (1) lane
ronment. Therefore, it can create many conditions, especially detection is challenged in varying weather conditions and
extreme conditions such as rain, snow, fog, etc. illumination. Furthermore, previous methods need a con-
Other than the self-collect dataset, there are also sev- sistent framework for detecting various scenes and (2) the
eral available online datasets in the market. Various inefficiency of using images due to label noise.
Following that, due to the advancement of deep learn- continue to work throughout the year, regardless of whether
ing, numerous solutions have been suggested to enhance the it is sunny or cloudy, day or night, summer or winter, urban
achievement of computer vision works in contrast to conven- or rural, crowded or clear, and so on. The main challenge is to
tional approaches. Despite the prevalence of camera sensors, make the lane recognition approach resilient and prosperous
deep learning algorithms offer a high degree of generalization under various driving conditions.
and learn the essential elements of the driving environment From the literature’s selection, there are several issues on
across multiple layers. In contemporary state-of-the-art lane lane detection techniques that are related to data, such as:
detection techniques, convolutional neural networks (CNNs)
are also used to develop deep learning models. CNN is also 1) IMBALANCED DATA SET PROBLEM
created for image classification problems in deep learning- Extremely imbalanced data set problem because the back-
based technology, in which it can extract features from the drop class contains the majority of the lane pixels in the
images it receives. However, the image’s output is one- image. In addition, the amount of backdrop pixels is sig-
dimensional data that can only forecast which images belong nificantly more than the number of lane pixels due to the
to which sorts of objects. lane’s slenderness. It may be challenging to pick up on such
Furthermore, numerous low-level characteristics were lost characteristics [1]. Aside from unbalanced data, the quality
in the pooling layers of CNN. As a result, input changes might of acquired data and annotations also restricts the capacity of
cause convolution neural networks to lose robustness. Scale, various methods [2]. As a result of the limitations imposed by
viewpoint, and backdrop clutter are examples of these vari- available datasets, lane approaches developed on one dataset
ances. Furthermore, while these models perform admirably are unlikely to be applied to another. To address this issue,
on train and test inputs, they perform poorly on unknown state-of-the-art transfer learning and attention mechanisms
datasets from various contexts. The FCN network can over- must be implemented. Aside from that, a more generic dataset
come these issues and detect more accurate two-dimensional that replicates real-world road conditions should be inves-
data. Even the deep learning-based technique offers numer- tigated for the confined dataset. Furthermore, as this sector
ous advantages. However, they have a high computing cost, develops, more data sets are projected to become avail-
which can sometimes increase training loss and result in a able for researchers, particularly with the advent of entirely
vanishing gradient issue [37]. autonomous cars [3]. However, researchers are also hindered
In the past, advanced detecting algorithms such as deep by the lack of datasets, necessitating the creation of new
learning have outperformed traditional methods in complex databases to allow for additional algorithm testing. The new
scenarios, but they have limitations. For example, despite the databases can be created using synthetic sensor data from
importance of multilane detection, only a limited number of a test vehicle or by generating driving scenarios using a
lanes can be detected, and the cost of detection time is fre- commercially available driving simulator. Similarly, more
quently prohibitive. Therefore, various factors influence lane research is needed in the following areas.
detection tasks, including specific complex traffic scenarios.
Attention mechanisms have improved NLP and CV exten- 2) VARIATION AND CHANGEABLE LANE MARKINGS
sively. The employment of an attention mechanism improves With the vast diversity of lane markers, the complex and
feature localization in the feature map and eliminates the changing road circumstances, and the lane markings’ inherent
need for post-processing. Therefore, as lanes are long and thin properties, some scenarios, such as no line, shadow
thin for lane detection, there are considerably fewer annotated occlusion, and harsh lighting conditions [1], provide few or
lane pixels than background pixels, which is challenging for no visual signals. Therefore, detecting the lanes from the
a model to learn. Hence, the attention processes in feature image in these scenarios can be difficult. According to the
maps can emphasize crucial spatial information. The atten- findings, traditional approaches work in a controlled envi-
tion mechanism, in particular, can boost the weighted infor- ronment and have numerous problems regarding robustness
mation of lane line objectives while reducing unnecessary difficulties caused by road scene fluctuations. In addition,
data. It adds to the complexity of network learning. However, the lanes’ inconsistency, curvature, and varied lane patterns
as the author is aware, more research needs to be done on make detection much more difficult. Daytime has gotten a
using the attention mechanism in lane detecting tasks. In this lot of attention in the past, but nighttime and rainy situations
research area, many different forms of attention mechanisms have gotten less attention. Furthermore, it is apparent from the
can be used at the same time. As a result, the study’s future literature that, in terms of speed flow conditions, they have
direction can be investigated by applying another type of previously been examined at speeds ranging from 4 km/h
attention mechanism that has yet to be deployed. to 80 km/h, with high speed (above 80 km/h) receiving less
attention. Occluding overtaking vehicles or other objects and
F. ISSUE ON TECHNIQUE RELATED TO DATA excessive illumination make lane identification and tracking
The existing ADAS act as a driver’s aid, and many issues still difficult. Although reflector lanes are specified with several
need to be addressed or improved to achieve the objective of colors, lane markings are usually yellow and white. The
safe and enjoyable autonomous driving on real-world roads. number and width of lanes vary per country. There may be
In a real-world scenario, a lane recognition system should issues with vision clarity due to the presence of shadows.
The visibility of the lane lines was reduced due to various Therefore, maintaining skepticism about data and developing
weather conditions, such as rain, fog, and snow. Visibility techniques to anticipate and battle uncertainty is crucial. The
may be decreased in the evening. The performance of lane solution to this problem is to invest some time analyzing
detection and tracking algorithms suffers due to these issues data statistics and creating visualizations to aid in identifying
in lane recognition and tracking. As a result, developing a those anomalous or unusual cases: this is what data cleansing
dependable lane detection system is a difficult task. is all about.
I. CROSS-VALIDATION FOR EVALUATING AND system’s performance suffers due to unclear and deteriorated
COMPARING MODULES lane markers. Therefore, one of the most significant issues
Cross-validation is a technique for testing how well a sta- with current ADAS is the ambient and meteorological envi-
tistical analysis applies to a different dataset. Typically, the ronments substantially impacting the system’s functionality.
model is trained on a known dataset. This dataset is referred
2) RQ2
to as the training dataset. However, the model must work on
Regarding lane marking, camera quality is crucial, and
an unknown dataset in real-time. Cross-validation is used to
an adjacent vehicle may obscure the lane signs during
see how well a prediction model works with an anonymous
overtaking. Therefore, the algorithm’s accuracy is determined
dataset. The model may have a high degree of accuracy
by the camera used. Images were captured using monocular,
when the original validation division does not reflect the
stereo, and infrared cameras. From the literature, a stereo
entire population. However, it will be of little help in practice
camera outperforms a monocular camera.
because it can only work with limited data collection. When
it comes across data outside its scope, the system cannot
3) RQ3
recognize it, resulting in poor accuracy. It is verified how
accurate the model is on many diverse subsets of data when Approximately 60% of the researchers have used self-
cross-validation is employed in machine learning. As a result, collected datasets in their research.
it ensures that it generalizes well to data collected in the
future. It enhances the model’s accuracy. Cross-validation K. LIMITATION, FUTURE SCOPE AND CONTRIBUTIONS OF
might help avoid overfitting and underfitting. When a model THE CURRENT WORK
is trained ’’too well,’’ overfitting develops. It occurs when the The limitations and future scope of the current work can
model is sophisticated and has a large number of variables in be categorized into methods, datasets, and model network
comparison to the number of data. In such cases, the model architecture.
will perform admirably in training mode but may not be
accurate when applied to a new data. It is because it is not 1) METHODS
a generalized model. Underfitting happens when the model Limitations: Most geometric modeling/conventional appro-
does not fit the training data instead of overfitting. As a aches rely on or follow pre-processing feature extraction, lane
result, it is unable to generalize to new data. It’s because the model fitting, and lane tracking to detect the lane. For lane
model is simple and lacks sufficient independent variables. detection tasks, image pre-processing is required to deter-
In data analysis, both overfitting and underfitting are unde- mine the quality of features. In addition, this approach needs
sirable. It should always strive for a balanced approach or a manually alter the parameters, although this procedure is
just right paradigm. Overfitting and underfitting can both be efficient and uncomplicated. Furthermore, previous methods
avoided by cross-validation. Machine learning necessitates based on handcrafted features to detect lanes are limited in
extensive data analysis. Cross-validation is a great way to scenarios using edge, texture, or color information, which
get the machine ready for real-world circumstances. As a requires complicated post-processing modules to perform.
result, the system is prepared to take in new data and general- Likewise, in many complex scenarios, these approaches func-
ize it to make correct predictions. However, to the authors’ tion inadequately. Therefore, the traditional computer vision
knowledge, previous research in the lane detection sector (CV) techniques are time-consuming and resource-intensive
does not generally discuss or describe any cross-validation for and rely on complicated algorithms to analyze the delicate
evaluation. It is possible to state that it is a biased experiment aspects of lane images. In addition, the number of lanes is
that requires additional examination in this sector. frequently not fixed, and techniques for detecting lanes are
sometimes erroneous. Straight lines, for example, are treated
J. LIMITATION OF SYSTEMATIC LITERATURE REVIEW as lanes by Hough transform-based algorithms, which may
BASED ON RESEARCH QUESTIONS cause street lamps to be mistaken for lanes. Furthermore, poor
Referring to the research questions, RQ1, RQ2 and RQ3, weather, such as rain, will impact lane detection. Likewise,
there are existing of certain limitations as listed as follows: inadequate lighting and a night setting will produce poor
- results. However, there are yet no practical solutions for
dealing with such issues. As a result, conventional approaches
1) RQ1 are ineffective in detecting lanes in complex traffic situations.
The results from previous research demonstrate that in most In addition, it must work in real-time. Most algorithms, how-
circumstances, lane detection accuracy is about 96 percent ever, suffer from a lack of this purpose. Therefore, traditional
under normal conditions. Heavy rain, on the other hand, techniques have yielded significant results. However, they
significantly impacts the efficiency of lane marker detection. have several limitations: (1) lane detection is challenged in
In addition, external factors such as weather, visual quality, varying weather conditions and illumination.
shadows, and blazing, as well as internal factors such as lane Furthermore, previous methods need a consistent frame-
marking that is too narrow, too broad, or unclear, degrade work for detecting various scenes and (2) the inefficiency
the performance. Moreover, it has been observed that the of using images due to label noise. Following that, due to
the advancement of deep learning, numerous solutions have robustness difficulties caused by road scene fluctuations.
been suggested to enhance the achievement of computer Furthermore, occlusion from overtaking vehicles or other
vision works in contrast to conventional approaches. Despite objects and excessive illumination make lane identification
the prevalence of camera sensors, deep learning algorithms and tracking difficult. Other than that, the visibility of the lane
offer a high degree of generalization and learn the essential lines was reduced due to weather conditions such as rain, fog,
elements of the driving environment across multiple layers. and snow. The performance of lane detection and tracking
In the past, advanced detecting algorithms such as deep algorithms suffers due to these issues in lane recognition
learning have outperformed traditional methods in complex and tracking. In addition, according to [4], lane-like inter-
scenarios, but they have limitations. For example, despite the ferences, such as guardrails, railways, utility poles, pedes-
importance of multilane detection, only a limited number of trian sidewalks, buildings, and so on, will interfere with the
lanes can be detected, and the cost of detection time is fre- existing traditional method, such as the HT-based algorithm.
quently prohibitive. Therefore, various factors influence lane As a result, it has struggled in various challenging environ-
detection tasks, including specific complex traffic scenarios. ments, including night time and other environmental factors
Attention mechanisms have improved NLP and CV exten- (shadow, rain, etc.).
sively. The employment of an attention mechanism improves Furthermore, the host car then casts its shadows on the road
feature localization in the feature map and eliminates the surface as it enters or exits a tunnel or drives beneath a bridge.
need for post-processing. Therefore, as lanes are long and As a result, the road may have complicated painted road
thin for lane detection, there are considerably fewer annotated surface markings, utility lines, and buildings, which can cause
lane pixels than background pixels, which is challenging for the HT-based lane detection algorithm to provide misleading
a model to learn. Hence, the attention processes in feature edges and textures. On rainy days, reflection from the wet
maps can emphasize crucial spatial information. The atten- road may induce glare and image overexposure, resulting in
tion mechanism, in particular, can boost the weighted infor- lane detection failure in some instances. In addition to lane-
mation of lane line objectives while reducing unnecessary like interferences, lighting fluctuations make dividing line
data. It adds to the complexity of network learning. However, detection more challenging. Under artificial light, the system
as the author is aware, only a little research has been done on failed to recognize road lane characteristics in bright or wet
using the attention mechanism in lane detecting tasks. road conditions with significant reflection on rainy days.
Future Scope: In this research area, many different forms Future Scope 2: Employ feature-based learning models to
of attention mechanisms can be used at the same time. As a control lousy weather, illumination, and shadow issues.
result, the study’s future direction can be investigated by
applying another type of attention mechanism that has yet to 3) MODELS NETWORK ARCHITECTURE
be deployed. Limitations: Working with inaccurate and incomplete infor-
mation is what uncertainty entails. This study contains
2) DATASET numerous sources of uncertainty, including data noise and
Limitations 1: Extremely imbalanced data set problem an imprecise model. Noise is the term for variation in an
because the backdrop class contains most of the lane pixels observation. Both the inputs and the outputs are affected by
in the image. The amount of backdrop pixels is significantly this unpredictability. Genuine data, like the real world, is a
more than the number of lane pixels due to the lane’s slen- tangled mess. Besides, a random sample is a set of obser-
derness. It may take time to pick up on such characteristics. vations picked randomly from a domain with no systematic
Aside from unbalanced data, the quality of acquired data and bias.
annotations also restricts the capacity of various methods [2]. Nevertheless, a certain amount of bias will always exist.
Future Scope 1: State-of-the-art mechanisms such as trans- This arises when a model needs more data and knowledge,
fer learning and attention mechanisms can be implemented. commonly occurring when there aren’t enough samples to
Aside from that, a more generic dataset that replicates real- train the model. While some bias is inherent, uncertainty
world road conditions can be investigated for the confined grows if the sample’s degree of variance and preference is an
dataset. Furthermore, the new databases can be created using unsatisfactory representation of the task for which the model
synthetic sensor data from a test vehicle or by generating will be utilized.
driving scenarios using a commercially available driving For example, in lane detection, researchers may detect a
simulator. lane in a highway area only if the road is in good condition
Limitations 2: Changeable lane markings and illumination and there are few vehicles present except during peak hours.
variations. The wide diversity of lane markers, the complex Aside from that, lane detection in normal situations is far
and changing road circumstances such as no line, shadow easier than in extreme conditions. The painted lane marking
occlusion, provide few or no visible lane lines, the inconsis- is chosen at random. However, it can only be used in one
tency of the lanes, the curvature of the lane, and the varied instance. The scope can include highways, cities, rural areas,
lane pattern make detection much more difficult. Accord- and normal, rainy, and foggy circumstances. The sample must
ing to the findings, traditional approaches work in a con- have an acceptable amount of variance and bias to represent
trolled environment and have numerous problems regarding the task for which the data or model will be utilized. There
will never be all of the observations in any of the initial Future Scope: Invest some time analyzing data statistics
investigations. This implies that some cases will always go and creating visualizations to aid in identifying those anoma-
unnoticed. There will be areas of the problem domain that lous or unusual cases. This is what data cleansing is all
are not covered. about. Therefore, splitting the dataset into train and test sets
TABLE 8. Comparison of this slr with the already done review articles.
or using resampling methods like k-fold cross-validation. The use of deep learning has been increasingly researched
This technique can be used to deal with ambiguity in the throughout the last four years. Some studies used stand-alone
dataset’s representativeness and to assess the performance deep learning implementations for a single-lane detection
of a modeling procedure on data that isn’t included in the problem or multiple-lane implementations. Other than that,
training. some research focuses on merging deep learning with other
Contributions: Combining deep learning approaches with machine learning techniques and classical methodologies to
other techniques yields significant performances. The merg- improve efficiency.
ing of networks and attention mechanism was proposed On the other hand, recent advancements imply that atten-
to learn more discriminative features of road lanes than tion mechanism has become a popular strategy to combine
the stand-alone deep learning approach to significantly with deep learning methods to increase performance. Using
increase the detection accuracy of the road lane. These meth- deep algorithms in conjunction with other techniques also
ods/innovations regarding more precise lane detection are showed promising outcomes. This SLR will pave the path for
necessary to enable a real-time lane detection system. There- more studies to be accomplished to build more effective lane
fore, the model’s accuracy and speed should be improved in detection methods. In addition, more precise methods for use
normal and extreme conditions. in real-world industrial settings are required. We plan to build
on the findings of this study in the future, emphasizing cre-
L. COMPARISON WITH ALREADY DONE REVIEW ARTICLES
ating a network with high-speed performance and efficiency
This SLR is compared with the other review articles that have
that can be implemented in real-time.
previously been completed. As a result of the SLR, it was
discovered that most of the currently published research falls B. FUTURE DIRECTIONS AND RECOMMENDATIONS
into one of the categories presented and discussed in Table 8. The following directions for future contributions to the
discipline should be focused on based on the findings of
this SLR:
V. CONCLUSION AND FUTURE RECOMMENDATIONS
1) For exact feature learning, accurately labeled lane data
This review article is concluded by analyzing the outcomes
is required for deep network training.
and making recommendations for subsequent initiatives. This
2) Increases the number of publicly available online pub-
section describes all lane detection methods, self-collect
lic datasets that cover a wide range of scenes.
dataset preparation equipment, the top three most popular
3) More imbalance management approaches should be
online datasets, fundamental problems in this field, and the
investigated, such as computational cost, speed perfor-
state-of-the-art that can be investigated for future research.
mance, and algorithm/network training error.
A. CONCLUSIONS 4) Combining deep learning approaches with other tech-
The analysis from this SLR shows that the selected literature niques yields significant results, which merits further
used various methods and structures, with the input dataset investigation.
being one of two types: self-collected or acquired from an 5) The merging of networks and attention mechanisms
online public dataset. In the meantime, the methodologies has improved performance, but additional research is
include geometric modeling and traditional methods, while needed.
AI includes deep learning and machine learning. CNN, FCN, 6) They are developing approaches and technologies for
and RNN are examples of deep networks and architectures. lane detecting that are more efficient in speed and
precision. The model’s accuracy and rate under normal [17] N. M. Gohilot, A. Tigadi, and B. Chougula, ‘‘Detection of pedes-
and extreme situations should be enhanced to enable trian, lane and traffic signal for vision based car navigation,’’ in Proc.
2nd Int. Conf. for Emerg. Technol. (INCET), May 2021, pp. 1–6, doi:
real-time detection. 10.1109/INCET51464.2021.9456137.
7) The computational load is reduced. Therefore, training [18] S. Swetha and P. Sivakumar, ‘‘SSLA based traffic sign and lane detection
time, memory, and CPU resources should all be mini- for autonomous cars,’’ in Proc. Int. Conf. Artif. Intell. Smart Syst. (ICAIS),
Mar. 2021, pp. 766–771, doi: 10.1109/ICAIS50930.2021.9396046.
mized via efficient learning algorithms. [19] B. Akbari, J. Thiyagalingam, R. Lee, and K. Thia, ‘‘A multilane tracking
ACKNOWLEDGMENT algorithm using IPDA with intensity feature,’’ Sensors, vol. 21, no. 2,
p. 461, 2021.
The authors would like thanks to Ministry of Higher [20] S. Lu, Z. Luo, F. Gao, M. Liu, K. Chang, and C. Piao, ‘‘A fast and robust
Education (MOHE) for funding this research through Fun- lane detection method based on semantic segmentation and optical flow
damental Research Grant Scheme, Registration Proposal estimation,’’ Sensors, vol. 21, no. 2, pp. 400, 2021.
[21] S.-W. Baek, M.-J. Kim, U. Suddamalla, A. Wong, B.-H. Lee,
No: FRGS/1/2022/ICT11/UTM/02/2 (Attention-Based Fully and J.-H. Kim, ‘‘Real-time lane detection based on deep learning,’’
Convolutional Networks for Lane Detection of Different J. Electr. Eng. Technol., vol. 17, no. 1, pp. 655–664, Jan. 2022, doi:
Driving Scenes - No Vote: R.K130000.7843.5F563). 10.1007/s42835-021-00902-6.
[22] H. Park, ‘‘Implementation of lane detection algorithm for self-driving
REFERENCES vehicles using tensor flow,’’ in Proc. Int. Conf. Innov. Mobile Internet
[1] C. Y. Chan, ‘‘Trends in crash detection and occupant restraint tech- Services Ubiquitous Comput. (Advances in Intelligent Systems and Com-
nology,’’ Proc. IEEE, vol. 95, no. 2, pp. 388–396, Feb. 2007, doi: puting), vol. 773, 2019, pp. 438–447, doi: 10.1007/978-3-319-93554-
10.1109/JPROC.2006.888391. 6_42.
[2] Z. Sun, G. Bebis, and R. Miller, ‘‘On-road vehicle detection: A review,’’ [23] R. Yousri, M. A. Elattar, and M. S. Darweesh, ‘‘A deep learning-based
IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694–711, benchmarking framework for lane segmentation in the complex and
May 2006, doi: 10.1109/TPAMI.2006.104. dynamic road scenes,’’ IEEE Access, vol. 9, pp. 117565–117580, 2021,
[3] K. A. Brookhuis, D. De Waard, and W. H. Janssen, ‘‘Behavioural doi: 10.1109/ACCESS.2021.3106377.
impacts of advanced driver assistance systems-an overview,’’ Eur. [24] A. M. Alajlan and M. M. Almasri, ‘‘Automatic lane marking pre-
J. Transp. Infrastruct. Res., vol. 1, no. 3, pp. 245–253, 2001, doi: diction using convolutional neural network and S-shaped binary
10.18757/ejtir.2001.1.3.3667. butterfly optimization,’’ J. Supercomput., vol. 78, pp. 3715–3745,
[4] K. Grove, J. Atwood, P. Hill, G. Fitch, A. DiFonzo, M. Marchese, and Aug. 2021.
M. Blanco, ‘‘Commercial motor vehicle driver performance with adaptive [25] N. Kanagaraj, D. Hicks, A. Goyal, S. Tiwari, and G. Singh, ‘‘Deep
cruise control in adverse weather,’’ Proc. Manuf., vol. 3, pp. 2777–2783, learning using computer vision in self driving cars for lane and traffic
Jan. 2015, doi: 10.1016/j.promfg.2015.07.717. sign detection,’’ Int. J. Syst. Assurance Eng. Manage., vol. 12, no. 6,
[5] U. Zakir, U. Z. A. Hamid, K. Pushkin, D. Gueraiche, and pp. 1011–1025, Dec. 2021, doi: 10.1007/s13198-021-01127-6.
M. A. A. Rahman, ‘‘Current collision mitigation technologies for [26] S. Ghanem, P. Kanungo, G. Panda, and P. Parwekar, ‘‘An improved
advanced driver assistance systems—A survey,’’ Perintis eJ., vol. 6, and low-complexity neural network model for curved lane detection of
no. 2, pp. 78–90, 2016, doi: 10.1105/tpc.15.01050. autonomous driving system,’’ Wasser und Abfall, vol. 21, no. 4, p. 61,
[6] A. H. Eichelberger and A. T. McCartt, ‘‘Toyota drivers’ experiences 2019, doi: 10.1007/s35152-019-0027-x.
with dynamic radar cruise control, pre-collision system, and lane- [27] Q. Huang and J. Liu, ‘‘Practical limitations of lane detection algorithm
keeping assist,’’ J. Saf. Res., vol. 56, pp. 67–73, Feb. 2016, doi: based on Hough transform in challenging scenarios,’’ Int. J. Adv. Robot.
10.1016/j.jsr.2015.12.002. Syst., vol. 18, no. 2, pp. 1–13, 2021, doi: 10.1177/17298814211008752.
[7] N. S. A. Rudin, Y. M. Mustafah, Z. Z. Abidin, J. Cho, and H. F. M. Zaki, [28] P. Lu, C. Cui, S. Xu, H. Peng, and F. Wang, ‘‘SUPER: A novel lane
‘‘Vision-based lane departure warning system,’’ J. Soc. Automot. Eng. detection system,’’ IEEE Trans. Intell. Vehicles, vol. 6, no. 3, pp. 583–593,
Malaysia, vol. 2, no. 2, pp. 166–176, 2018. Sep. 2021, doi: 10.1109/TIV.2021.3071593.
[8] G. Kaur and D. Kumar, ‘‘Lane detection techniques: A review,’’ Int. J. [29] Z. Wu, K. Qiu, T. Yuan, and H. Chen, ‘‘A method to keep autonomous
Comput. Appl., vol. 112, no. 10, pp. 1–5, 2015. vehicles steadily drive based on lane detection,’’ Int. J. Adv. Robot. Syst.,
[9] L. Zhang, F. Jiang, B. Kong, J. Yang, and C. Wang, ‘‘Real-time lane vol. 18, no. 2, pp. 1–11, 2021, doi: 10.1177/17298814211002974.
detection by using biologically inspired attention mechanism to learn con- [30] S. Samantaray, R. Deotale, and C. L. Chowdhary, ‘‘Lane detection
textual information,’’ Cogn. Comput., vol. 13, pp. 1333–1344, Sep. 2021, using sliding window for intelligent ground vehicle challenge,’’ in Inno-
doi: 10.1007/s12559-021-09935-5. vative Data Communication Technologies and Application. Singapore:
[10] R. Zhang, Y. Wu, W. Gou, and J. Chen, ‘‘RS-lane: A robust lane detection Springer, 2021, pp. 871–881, doi: 10.1007/978-981-15-9651-3_70.
method based on ResNeSt and self-attention distillation for challenging [31] M. Liu, X. Deng, Z. Lei, C. Jiang, and C. Piao, ‘‘Autonomous lane keeping
traffic situations,’’ J. Adv. Transp., vol. 2021, pp. 1–12, Aug. 2021. system: Lane detection, tracking and control on embedded system,’’
[11] F. Munir, S. Azam, M. Jeon, B.-G. Lee, and W. Pedrycz, ‘‘LDNet: End- J. Electr. Eng. Technol., vol. 16, no. 1, pp. 569–578, Jan. 2021, doi:
to-end lane marking detection approach using a dynamic vision sensor,’’ 10.1007/s42835-020-00570-y.
2020, arXiv:2009.08020. [32] O. Rastogi, ‘‘Color masking method for variable luminosity in videos
[12] S. Ghanem, P. Kanungo, G. Panda, S. C. Satapathy, and R. Sharma, ‘‘Lane with application in lane detection systems,’’ in Proc. Int. Conf. Mach.
detection under artificial colored light in tunnels and on highways: An Intell. Data Sci. Appl., 2021, pp. 275–284, doi: 10.1007/978-981-33-
IoT-based framework for smart city infrastructure,’’ Complex Intell. Syst., 4087-9.
pp. 1–12, May 2021, doi: 10.1007/s40747-021-00381-2. [33] D. K. Dewangan and S. P. Sahu, ‘‘Lane detection for intelligent vehicle
[13] D. Moher, A. Liberati, J. Tetzlaff, and D. G. Altman, ‘‘Preferred reporting system using image processing techniques,’’ in Data Science. Singapore:
items for systematic reviews and meta-analyses: The PRISMA state- Springer, 2021, pp. 329–348, doi: 10.1007/978-981-16-1681-5_21.
ment,’’ BMJ, vol. 339, Jul. 2009, Art. no. b2535, doi: 10.1136/bmj.b2535. [34] J. Gong, T. Chen, and Y. Zhang, ‘‘Complex lane detection based on
[14] C. Lee and J. H. Moon, ‘‘Robust lane detection and tracking for real- dynamic constraint of the double threshold,’’ Multimedia Tools Appl.,
time applications,’’ IEEE Trans. Intell. Transp. Syst., vol. 19, no. 12, vol. 80, no. 18, pp. 27095–27113, Jul. 2021, doi: 10.1007/s11042-021-
pp. 4043–4048, Dec. 2018, doi: 10.1109/TITS.2018.2791572. 10978-x.
[15] J. Li, F. Jiang, J. Yang, B. Kong, M. Gogate, K. Dashtipour, and [35] R. Muthalagu, A. Bolimera, and V. Kalaichelvi, ‘‘Vehicle lane markings
A. Hussain, ‘‘Lane-DeepLab: Lane semantic segmentation in automatic segmentation and keypoint determination using deep convolutional neural
driving scenarios for high-definition maps,’’ Neurocomputing, vol. 465, networks,’’ Multimedia Tools Appl., vol. 80, no. 7, pp. 11201–11215,
pp. 15–25, Nov. 2021, doi: 10.1016/j.neucom.2021.08.105. Mar. 2021, doi: 10.1007/s11042-020-10248-2.
[16] D. Kavitha and S. Ravikumar, ‘‘Designing an IoT based autonomous [36] K. Ren, H. Hou, S. Li, and T. Yue, ‘‘LaneDraw: Cascaded lane and
vehicle meant for detecting speed bumps and lanes on roads,’’ J. Ambient its bifurcation detection with nested fusion,’’ Sci. China Technol. Sci.,
Intell. Hum. Comput., vol. 12, no. 7, pp. 7417–7426, Jul. 2021, doi: vol. 64, no. 6, pp. 1238–1249, Jun. 2021, doi: 10.1007/s11431-020-
10.1007/s12652-020-02419-8. 1702-2.
[37] D. K. Dewangan, S. P. Sahu, B. Sairam, and A. Agrawal, ‘‘VLD- [56] R. Agrawal and N. Singh, ‘‘Lane detection and collision preven-
Net: Vision-based lane region detection network for intelligent vehicle tion system for automated vehicles,’’ in Applied Computer Vision and
system using semantic segmentation,’’ Computing, vol. 103, no. 12, Image Processing. Singapore: Springer, 2021, doi: 10.1007/978-981-15-
pp. 2867–2892, Dec. 2021, doi: 10.1007/s00607-021-00974-2. 4029-5_5.
[38] L. Review and L. L. Bachrach, ‘‘Lane and obstacle detection system [57] Z. Wang, W. Ren, and Q. Qiu, ‘‘LaneNet: Real-time lane detection
based on single camera-based stereo vision system,’’ in Proc. Int. Conf. networks for autonomous driving,’’ 2018, arXiv:1807.01726.
Adv. Syst. Control Comput., 2021, pp. 259–266, doi: 10.1007/978-981- [58] F. Pizzati, M. Allodi, A. Barrera, and F. García, ‘‘Lane detection and
33-4862-2. classification using cascaded CNNs,’’ in Proc. Int. Conf. Comput. Aided
[39] A. N. Ahmed, S. Eckelmann, A. Anwar, T. Trautmann, and P. Hellinckx, Syst. Theory, in Lecture Notes in Computer Science: Including Subseries
‘‘Lane marking detection using LiDAR sensor,’’ in Proc. Int. Conf. P2P, Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinfor-
Parallel, Grid, Cloud Internet Comput., in Lecture Notes in Networks and matics, 2020, pp. 95–103, doi: 10.1007/978-3-030-45096-0_12.
Systems, 2021, pp. 301–310, doi: 10.1007/978-3-030-61105-7_30. [59] S. Chen, B. Li, Y. Guo, and J. Zhou, ‘‘Lane detection based on his-
[40] Y. Wu, F. Liu, W. Jiang, and X. Yang, ‘‘Multi spatial convolution block togram of oriented vanishing points,’’ in Pattern Recognition. Singapore:
for lane lines semantic segmentation,’’ in Proc. Int. Conf. Intell. Comput., Springer, 2020, pp. 3–11, doi: 10.1007/978-981-15-3651-9_1.
in Lecture Notes in Computer Science, vol. 12837, 2021, pp. 31–41. [60] N. Ma, G. Pang, X. Shi, and Y. Zhai, ‘‘An all-weather lane detection
[41] C. Y. Lee, J. G. Shon, and J. S. Park, ‘‘An edge detection–based eGAN system based on simulation interaction platform,’’ IEEE Access, vol. 8,
model for connectivity in ambient intelligence environments,’’ J. Ambient pp. 46121–46130, 2020, doi: 10.1109/ACCESS.2018.2885568.
Intell. Hum. Comput., vol. 13, no. 10, pp. 4591–4600, Oct. 2022, doi: [61] C. Hasabnis, S. Dhaygude, and S. Ruikar, ‘‘Real-time lane detection for
10.1007/s12652-021-03261-2. autonomous vehicle using video processing,’’ in ICT Analysis and Appli-
[42] L. Zhang, B. Kong, and C. Wang, ‘‘LLNet: A lightweight lane line cations (Lecture Notes in Networks and Systems). Singapore: Springer,
detection network,’’ in Proc. Int. Conf. Image Graph., 2021, pp. 355–369, 2020, pp. 217–225, doi: 10.1007/978-981-15-0630-7_21.
doi: 10.1007/978-3-030-87355-4_30. [62] Q. Zou, H. Jiang, Q. Dai, Y. Yue, L. Chen, and Q. Wang, ‘‘Robust lane
[43] Y. Qin, J. Peng, H. Zhang, and J. Nong, ‘‘Lane recognition system for detection from continuous driving scenes using deep neural networks,’’
machine vision,’’ in Proc. 10th Int. Conf. Comput. Eng. Netw., 2020, IEEE Trans. Veh. Technol., vol. 69, no. 1, pp. 41–54, Jan. 2020, doi:
pp. 388–398, doi: 10.1007/978-981-15-8462-6_44. 10.1109/TVT.2019.2949603.
[44] A. Kasmi, J. Laconte, R. Aufrere, R. Theodose, D. Denis, and [63] T. Andersson, A. Kihlberg, A. Sundström, and N. Xiong, ‘‘Road boundary
R. Chapuis, ‘‘An information driven approach for ego-lane detec- detection using ant colony optimization algorithm,’’ in Advances in Nat-
tion using lidar and OpenStreetMap,’’ in Proc. 16th Int. Conf. Con- ural Computation, Fuzzy Systems and Knowledge Discovery, vol. 1074.
trol, Autom., Robot. Vis. (ICARCV), Dec. 2020, pp. 522–528, doi: New Zealand: Springer, 2020, pp. 409–416, doi: 10.1007/978-3-030-
10.1109/ICARCV50220.2020.9305388. 32456-8_44.
[45] M. Fakhfakh, L. Chaari, and N. Fakhfakh, ‘‘Bayesian curved lane estima-
[64] S. Hossain, O. Doukhi, I. Lee, and D.-J. Lee, ‘‘Real-time lane detec-
tion for autonomous driving,’’ J. Ambient Intell. Hum. Comput., vol. 11,
tion and extreme learning machine based tracking control for intel-
no. 10, pp. 4133–4143, Oct. 2020, doi: 10.1007/s12652-020-01688-7.
ligent self-driving vehicle,’’ in Intelligent Systems and Applications.
[46] E. S. Dawam and X. Feng, ‘‘Smart city lane detection for
Cham, Switzerland: Springer, 2020, pp. 41–50, doi: 10.1007/978-3-030-
autonomous vehicle,’’ in Proc. IEEE Int. Conf Dependable, Autonomic
29513-4_4.
Secure Comput., Int. Conf Pervasive Intell. Comput., Int. Conf
[65] V.-T. Luu, V.-C. Huynh, V.-H. Tran, T.-H. Nguyen, and T.-N.-H. Phu,
Cloud Big Data Comput., Int. Conf Cyber Sci. Technol. Congr.
‘‘Traditional method meets deep learning in an adaptive lane
(DASC/PiCom/CBDCom/CyberSciTech), Aug. 2020, pp. 334–338, doi:
and obstacle detection system,’’ in Proc. 5th Int. Conf. Green
10.1109/DASC-PICom-CBDCom-CyberSciTech49142.2020.00065.
Technol. Sustain. Develop. (GTSD), Nov. 2020, pp. 148–152, doi:
[47] K. C. Bhupathi and H. Ferdowsi, ‘‘An augmented sliding window tech-
10.1109/GTSD50082.2020.9303108.
nique to improve detection of curved lanes in autonomous vehicles,’’ in
[66] S. Shirke and R. Udayakumar, ‘‘Fusion model based on entropy by
Proc. IEEE Int. Conf. Electro Inf. Technol. (EIT), Jul. 2020, pp. 522–527,
using optimized DCNN and iterative seed for multilane detection,’’ Evol.
doi: 10.1109/EIT48999.2020.9208278.
[48] R. Muthalagu, A. Bolimera, and V. Kalaichelvi, ‘‘Lane detection tech- Intell., vol. 15, no. 2, pp. 1441–1454, Jun. 2022, doi: 10.1007/s12065-
nique based on perspective transformation and histogram analysis for 020-00480-y.
self-driving cars,’’ Comput. Electr. Eng., vol. 85, pp. 1–16, Jul. 2020, doi: [67] I. J. P. B. Franco, T. T. Ribeiro, and A. G. S. Conceição, ‘‘A novel visual
10.1080/15551020902995363. lane line detection system for a NMPC-based path following control
[49] A. Evlampev, I. Shapovalov, and S. Gafurov, ‘‘Map relative localization scheme,’’ J. Intell. Robotic Syst., vol. 101, no. 1, pp. 1–13, Jan. 2021, doi:
based on road lane matching with iterative closest point algorithm,’’ in 10.1007/s10846-020-01278-x.
Proc. 3rd Int. Conf. Artif. Intell. Pattern Recognit., 2020, pp. 232–236, [68] P. Subhasree, P. Karthikeyan, and R. Senthilnathan, ‘‘Driveable area
doi: 10.1145/3430199.3430229. detection using semantic segmentation deep neural network,’’ Computa-
[50] G. Zhang, C. Yan, and J. Wang, ‘‘Quality-guided lane detec- tional Intelligence in Data Science. India: Springer, 2020, pp. 222–230,
tion by deeply modeling sophisticated traffic context,’’ Signal Pro- doi: 10.1007/978-3-030-63467-4_18.
cess., Image Commun., vol. 84, May 2020, Art. no. 115811, doi: [69] R. Pihlak and A. Riid, ‘‘Simultaneous road edge and road surface mark-
10.1016/j.image.2020.115811. ings detection using convolutional neural networks,’’ in Proc. Int. Baltic
[51] B. Dorj, S. Hossain, and D.-J. Lee, ‘‘Highly curved lane detection algo- Conf. Databases Inf. Syst., in Communications in Computer and Infor-
rithms based on Kalman filter,’’ Appl. Sci., vol. 10, no. 7, pp. 1–22, 2020, mation Science, 2020, pp. 109–121, doi: 10.1007/978-3-030-57672-1_9.
doi: 10.3390/app10072372. [70] D. Rato and V. Santos, ‘‘Detection of road limits using gradients of the
[52] J. Hu, S. Xiong, J. Zha, and C. Fu, ‘‘Lane detection and trajectory accumulated point cloud density,’’ in Proc. 4th Iber. Robot. Conf. (Robot),
tracking control of autonomous vehicle based on model predictive con- 2020, pp. 267–279, doi: 10.1007/978-3-030-35990-4_22.
trol,’’ Int. J. Automot. Technol., vol. 21, no. 2, pp. 285–295, 2020, doi: [71] Y. Sun, L. Wang, Y. Chen, and M. Liu, ‘‘Accurate lane detection with
10.1007/s12239-020-0027-6. atrous convolution and spatial pyramid pooling for autonomous driv-
[53] D. Liu, Y. Wang, T. Chen, and E. T. Matson, ‘‘Accurate lane detection ing,’’ in Proc. IEEE Int. Conf. Robot. Biomimetics (ROBIO), Dec. 2019,
for self-driving cars: An approach based on color filter adjustment and pp. 642–647, doi: 10.1109/ROBIO49542.2019.8961705.
K -means clustering filter,’’ Int. J. Semantic Comput., vol. 14, no. 1, [72] J. Liu, ‘‘Learning full-reference quality-guided discriminative gradient
pp. 153–168, Mar. 2020, doi: 10.1142/S1793351X20500038. cues for lane detection based on neural networks,’’ J. Vis. Commun. Image
[54] S. Shirke and R. Udayakumar, ‘‘A novel region-based iterative seed Represent., vol. 65, pp. 1–7, Dec. 2019, doi: 10.1016/j.jvcir.2019.102675.
method for the detection of multiple lanes,’’ Int. J. Image Data [73] H. Zhan and L. Chen, ‘‘Lane detection image processing algorithm based
Fusion, vol. 11, no. 1, pp. 57–76, 2019, doi: 10.1080/19479832.2019. on FPGA for intelligent vehicle,’’ in Proc. Chin. Autom. Congr. (CAC),
1683623. Nov. 2019, pp. 1190–1196, doi: 10.1109/CAC48633.2019.8996283.
[55] Z. M. Chng, J. M. H. Lew, and J. A. Lee, ‘‘RONELD: Robust neural [74] S. Srivastava and R. Maiti, ‘‘Multi-lane detection robust to complex
network output enhancement for active lane detection,’’ in Proc. 25th illumination variations and noise sources,’’ in Proc. 1st Int. Conf. Electr.,
Int. Conf. Pattern Recognit. (ICPR), Jan. 2021, pp. 6842–6849, doi: Control Instrum. Eng. (ICECIE), Nov. 2019, pp. 1–8, doi: 10.1109/ICE-
10.1109/ICPR48806.2021.9412572. CIE47765.2019.8974796.
[75] D. Chang, V. Chirakkal, S. Goswami, M. Hasan, T. Jung, J. Kang, [96] A. Mahmoud, L. Ehab, M. Reda, M. Abdelaleem, H. A. E. Munim,
S.-C. Kee, D. Lee, and A. P. Singh, ‘‘Multi-lane detection using M. Ghoneima, M. S. Darweesh, and H. Mostafa, ‘‘Real-time lane
instance segmentation and attentive voting,’’ in Proc. 19th Int. detection-based line segment detection,’’ in Proc. New Gener. CAS
Conf. Control, Autom. Syst. (ICCAS), Oct. 2019, pp. 1538–1542, doi: (NGCAS), Nov. 2018, pp. 57–61, doi: 10.1109/NGCAS.2018.8572124.
10.23919/ICCAS47443.2019.8971488. [97] Q. Li, J. Zhou, B. Li, Y. Guo, and J. Xiao, ‘‘Robust lane-detection method
[76] X. Jiao, D. Yang, K. Jiang, C. Yu, T. Wen, and R. Yan, ‘‘Real-time for low-speed environments,’’ Sensors, vol. 18, no. 12, pp. 1–18, 2018,
lane detection and tracking for autonomous vehicle applications,’’ Proc. doi: 10.3390/s18124274.
Inst. Mech. Eng. D, J. Automobile Eng., vol. 233, no. 9, pp. 2301–2311, [98] W. Farag and Z. Saleh, ‘‘Road lane-lines detection in real-time for
Aug. 2019, doi: 10.1177/0954407019866989. advanced driving assistance systems,’’ in Proc. Int. Conf. Innov. Intell.
[77] N. Kemsaram, A. Das, and G. Dubbelman, ‘‘An integrated frame- for Informat., Comput., Technol. (3ICT), Nov. 2018, pp. 1–8, doi:
work for autonomous driving: Object detection, lane detection, and 10.1109/3ICT.2018.8855797.
free space detection,’’ in Proc. 3rd World Conf. Smart Trends [99] B. Li, Y. Guo, J. Zhou, Y. Cai, J. Xiao, and W. Zeng, ‘‘Lane detection
Syst. Secur. Sustainablity (WorldS4), Jul. 2019, pp. 260–265, doi: and road surface reconstruction based on multiple vanishing point & sym-
10.1109/WorldS4.2019.8904020. posia,’’ in Proc. IEEE Intell. Vehicles Symp. (IV), Jun. 2018, pp. 209–214.
[78] H. Bilal, B. Yin, J. Khan, L. Wang, J. Zhang, and A. Kumar, ‘‘Real- [100] E. Adali, H. A. Seker, A. Erdogan, K. Haspalamutgil, F. Turan,
time lane detection and tracking for advanced driver assistance systems,’’ E. Aksu, and U. Karapinar, ‘‘Detecting road lanes under extreme con-
in Proc. Chin. Control Conf. (CCC), Jul. 2019, pp. 6772–6777, doi: ditions: A quantitative performance evaluation,’’ in Proc. 6th Int.
10.23919/ChiCC.2019.8866334. Conf. Control Eng. Inf. Technol. (CEIT), Oct. 2018, pp. 1–7, doi:
[79] L.-A. Tran and M.-H. Le, ‘‘Robust U-Net-based road lane markings 10.1109/CEIT.2018.8751835.
detection for autonomous driving,’’ in Proc. Int. Conf. Syst. Sci. Eng. [101] Y. Y. Ye, X. L. Hao, and H. J. Chen, ‘‘Lane detection method based on
(ICSSE), Jul. 2019, pp. 62–66, doi: 10.1109/ICSSE.2019.8823532. lane structural analysis and CNNs,’’ IET Intell. Transport Syst., vol. 12,
[80] J. Philion, ‘‘FastDraw: Addressing the long tail of lane detection by no. 6, pp. 513–520, 2018, doi: 10.1049/iet-its.2017.0143.
adapting a sequential prediction network,’’ in Proc. IEEE Comput. Soc. [102] Y. Y. Moon, Z. W. Geem, and G.-T. Han, ‘‘Vanishing point detection for
Conf. Comput. Vis. Pattern Recognit., Jun. 2019, pp. 11574–11583, doi: self-driving car using harmony search algorithm,’’ Swarm Evol. Comput.,
10.1109/CVPR.2019.01185. vol. 41, pp. 111–119, Aug. 2018, doi: 10.1016/j.swevo.2018.02.007.
[81] A. A. Ali and H. A. Hussein, ‘‘Real-time lane markings recognition based [103] A. J. Humaidi, S. Hasan, and M. A. Fadhel, ‘‘FPGA-based lane-detection
on seed-fill algorithm,’’ in Proc. Int. Conf. Inf. Commun. Technol., 2019, architecture for autonomous vehicles: A real-time design and develop-
pp. 190–195, doi: 10.1145/3321289.3321306. ment,’’ Asia Life Sci., vol. 16, no. 1, pp. 223–237, 2018.
[82] A. Yusuf and S. Alawneh, ‘‘GPU implementation for automatic lane [104] L. Xiong, Z. Deng, P. Zhang, and Z. Fu, ‘‘A 3D estimation of struc-
tracking in self-driving cars,’’ SAE Tech. Papers 2019-01-0680, 2019, tural road surface based on lane-line information,’’ IFAC-PapersOnLine,
doi: 10.4271/2019-01-0680. vol. 51, no. 31, pp. 778–783, 2018, doi: 10.1016/j.ifacol.2018.10.131.
[83] C. Y. Kuo, Y. R. Lu, and S. M. Yang, ‘‘On the image sensor processing [105] X. Chen and C. Luo, ‘‘Real- time lane detection based on a light-
for lane detection and control in vehicle lane keeping systems,’’ Sensors, weight model in the wild,’’ in Proc. IEEE 4th Int. Conf. Com-
vol. 19, no. 7, pp. 1–10, 2019, doi: 10.3390/s19071665. put. Commun. Eng. Technol. (CCET), Aug. 2021, pp. 36–40, doi:
[84] K. Manoharan and P. Daniel, ‘‘Robust lane detection in hilly shadow roads 10.1109/CCET52649.2021.9544226.
using hybrid color feature,’’ in Proc. 9th Annu. Inf. Technol., Electromech. [106] S. Liu, L. Lu, X. Zhong, and J. Zeng, ‘‘Effective road lane detec-
Eng. Microelectron. Conf. (IEMECON), Mar. 2019, pp. 201–204. tion and tracking method using line segment detector,’’ in Proc.
[85] X. Pan and H. Ogai, ‘‘Fast lane detection based on deep convolutional 37th Chin. Control Conf. (CCC), Jul. 2018, pp. 5222–5227, doi:
neural network and automatic training data labeling,’’ IEICE Trans. Fun- 10.23919/ChiCC.2018.8482552.
dam. Electron., Commun. Comput. Sci., vol. E102.A, no. 3, pp. 566–575, [107] M. C. Olgun, Z. Baytar, K. M. Akpolat, and O. K. Sahingoz,
Mar. 2019, doi: 10.1587/transfun.E102.A.566. ‘‘Autonomous vehicle control for lane and vehicle tracking by using deep
[86] H. Liu and X. Li, ‘‘Notice of retraction: Sharp curve lane detection learning via vision,’’ in Proc. 6th Int. Conf. Control Eng. Inf. Technol.
for autonomous driving,’’ Comput. Sci. Eng., vol. 21, no. 2, pp. 80–95, (CEIT), Oct. 2018, pp. 25–27, doi: 10.1109/CEIT.2018.8751764.
Mar. 2019, doi: 10.1109/MCSE.2018.2882700. [108] Y. Huang, S. Chen, Y. Chen, Z. Jian, and N. Zheng, ‘‘Spatial-temporal
[87] Y. Son, E. S. Lee, and D. Kum, ‘‘Robust multi-lane detection and track- based lane detection using deep learning,’’ in Proc. 14th IFIP Int. Conf.
ing using adaptive threshold and lane classification,’’ Mach. Vis. Appl., Artif. Intell. Appl. Innov., 2018, pp. 143–154.
vol. 30, no. 1, pp. 111–124, Feb. 2019, doi: 10.1007/s00138-018-0977-0. [109] J. Xiao, L. Luo, Y. Yao, W. Zou, and R. Klette, ‘‘Lane detection based on
[88] B. S. S. Rathnayake and L. Ranathunga, ‘‘Lane detection and predic- road module and extended Kalman filter,’’ in Image and Video Technol-
tion under hazy situations for autonomous vehicle navigation,’’ in Proc. ogy. Berlin, Germany: Springer, 2018, pp. 382–395, doi: 10.1007/978-3-
18th Int. Conf. Adv. ICT Emerg. Reg. (ICTer), 2019, pp. 99–106, doi: 319-75786-5.
10.1109/ICTER.2018.8615458. [110] B. S. Khan, M. Hanafi, and S. Mashohor, ‘‘A real time road marking
[89] Z. Feng, S. Zhang, M. Kunert, and W. Wiesbeck, ‘‘Applying neural detection system on large variability road images database,’’ in Proc. Int.
networks with a high-resolution automotive radar for lane detection,’’ in Conf. Comput. Sci. Technol., in Lecture Notes in Electrical Engineering,
Proc. Automot. Meets Electron. (AmE), 2019, pp. 8–13. vol. 488, 2018, pp. 31–41, doi: 10.1007/978-981-10-8276-4_4.
[90] W. Farag and Z. Saleh, ‘‘An advanced road-lanes finding scheme for self- [111] D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans, and
driving cars,’’ in Proc. 2nd Smart Cities Symp. (SCS), Bahrain, 2019, L. Van Gool, ‘‘Towards end-to-end lane detection: An instance seg-
pp. 1–6, doi: 10.1049/cp.2019.0221. mentation approach,’’ in Proc. IEEE Intell. Vehicles Symp., Jun. 2018,
[91] H. Park, ‘‘Lane detection algorithm based on Hough transform for high- pp. 286–291, doi: 10.1109/IVS.2018.8500547.
speed self driving vehicles,’’ Int. J. Web Grid Serv., vol. 15, no. 3, [112] Y. Zhang, J. Gao, and H. Zhou, ‘‘ImageNet classification with deep
pp. 240–250, 2019, doi: 10.1504/IJWGS.2019.10022421. convolutional neural networks,’’ in Proc. 2nd Int. Conf. Mach. Learn.
[92] K.-S. Lee, S.-W. Heo, and T.-H. Park, ‘‘A lane detection and track- Comput., 2020, pp. 145–151, doi: 10.1145/3383972.3383975.
ing method using image saturation and road width data,’’ J. Inst. [113] H. Zhang, C. Wu, Z. Zhang, Y. Zhu, H. Lin, Z. Zhang, Y. Sun, T. He,
Control, Robot. Syst., vol. 25, no. 5, pp. 476–483, May 2019, doi: J. Mueller, R. Manmatha, M. Li, and A. Smola, ‘‘ResNeSt: Split-attention
10.5302/J.ICROS.2019.19.0008. networks,’’ 2020, arXiv:2004.08955.
[93] H. Park, ‘‘Robust road lane detection for high speed driving of [114] S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He, ‘‘Aggregated resid-
autonomous vehicles,’’ in Web, Artificial Intelligence and Network ual transformations for deep neural networks,’’ in Proc. IEEE Conf.
Applications. Cham, Switzerland: Springer, 2019, pp. 256–265, doi: Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 5987–5995, doi:
10.1007/978-3-030-15035-8. 10.1109/CVPR.2017.634.
[94] N. S. Parameswaran, E. R. Achan, V. Subhashree, and R. Manjusha, [115] H. Zhou, J. Zhang, J. Lei, S. Li, and D. Tu, ‘‘Image semantic segmentation
‘‘Road detection by boundary extraction technique and Hough trans- based on FCN-CRF model,’’ in Proc. Int. Conf. Image, Vis. Comput.
form,’’ in Proc. Int. Conf. ISMAC Comput. Vis. Bio-Eng., vol. 30. Cham, (ICIVC), Aug. 2016, pp. 9–14, doi: 10.1109/ICIVC.2016.7571265.
Switzerland: Springer, 2019, pp 1805–1814. [116] Y. Lu, Y. Chen, D. Zhao, and J. Chen, ‘‘Graph-FCN for image seman-
[95] R. R. Dhanakshirur, P. Pillai, R. A. Tabib, U. Patil, and U. Mudenagudi, tic segmentation,’’ in Proc. Int. Symp. Neural Netw., in Lecture Notes
‘‘A framework for lane prediction on unstructured roads,’’ in Advances in Computer Science: Including Subseries Lecture Notes in Artificial
in Signal Processing and Intelligent Recognition Systems, vol. 968. Intelligence and Lecture Notes in Bioinformatics, 2019, pp. 97–105, doi:
Singapore: Springer, 2019. 10.1007/978-3-030-22796-8_11.
[117] S. Zhang, A. E. Koubia, and K. A. K. Mohammed, ‘‘Traffic lane detection MOHD IBRAHIM SHAPIAI (Member, IEEE)
using FCN,’’ 2020, arXiv:2004.08977. received the M.Eng. degree from the University
[118] N. J. Zakaria, H. Zamzuri, M. H. Ariff, M. I. Shapiai, S. A. Saruchi, and of York, U.K., in 2007, and the Ph.D. degree
N. Hassan, ‘‘Fully convolutional neural network for Malaysian road lane in machine learning from Universiti Teknologi
detection,’’ Int. J. Eng. Technol., vol. 7, no. 4, pp. 152–155, 2018, doi: Malaysia, in 2013. He is currently a Senior Lec-
10.14419/ijet.v7i4.11.20792. turer and a Researcher at the Center of Artifi-
[119] V. Badrinarayanan, A. Kendall, and R. Cipolla, ‘‘SegNet: A deep con-
cial Intelligence and Robotics (CAIRO), Universiti
volutional encoder–decoder architecture for image segmentation,’’ IEEE
Trans. Pattern Anal. Mach. Intell., vol. 39, no. 12, pp. 2481–2495, Teknologi Malaysia. From March 2010 to April
Dec. 2017, doi: 10.1109/TPAMI.2016.2644615. 2010, he worked as a Visiting Researcher at the
[120] H. Wu and B. Zhang, ‘‘A deep convolutional encoder–decoder neural Graduate School of Information, Production, and
network in assisting seismic horizon tracking,’’ 2018, arXiv:1804.06814. Systems, Waseda University, under the supervision of Dr. Junzo Watada.
[121] X. Ou, P. Yan, Y. Zhang, B. Tu, G. Zhang, J. Wu, and W. Li, ‘‘Moving From June 2012 to July 2012, he worked at the Faculty of Engineering,
object detection method via ResNet-18 with encoder–decoder structure Leeds University, under the supervision of Dr. Vassili Toropov. His research
in complex scenes,’’ IEEE Access, vol. 7, pp. 108152–108160, 2019, doi: interests include artificial intelligence, machine learning, brain–computer
10.1109/ACCESS.2019.2931922. interface, and swarm intelligence. In addition, he has been named a Certified
[122] V. Badrinarayanan, A. Handa, and R. Cipolla, ‘‘SegNet: A deep con- NVIDIA Deep Learning Instructor.
volutional encoder–decoder architecture for robust semantic pixel-wise
labelling,’’ 2015, arXiv:1505.07293.
[123] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, ‘‘ENet: A deep
neural network architecture for real-time semantic segmentation,’’ 2016,
arXiv:1606.02147. RASLI ABD GHANI is currently a Senior Lec-
[124] W. Weng and X. Zhu, ‘‘INet: Convolutional networks for biomedical turer with the Department of Electronic System
image segmentation,’’ IEEE Access, vol. 9, pp. 16591–16603, 2021, doi: Engineering, Malaysia-Japan International Insti-
10.1109/ACCESS.2021.3053408. tute of Technology (MJIIT), Universiti Teknologi
[125] B. De Brabandere, D. Neven, and L. Van Gool, ‘‘Semantic instance seg- Malaysia (UTM).
mentation with a discriminative loss function,’’ 2017, arXiv:1708.02551.
[126] Q. Zou, H. Jiang, Q. Dai, Y. Yue, L. Chen, and Q. Wang, ‘‘Robust lane
detection from continuous driving scenes using deep neural networks,’’
2019, arXiv:1903.02193.
[127] C. M. Kang, S.-H. Lee, S.-C. Kee, and C. C. Chung, ‘‘Kinematics-based
fault-tolerant techniques: Lane prediction for an autonomous lane keeping
system,’’ Int. J. Control, Autom. Syst., vol. 16, no. 3, pp. 1293–1302,
Jun. 2018, doi: 10.1007/s12555-017-0449-8.
[128] A. Hata and D. Wolf, ‘‘Road marking detection using LIDAR reflective
MOHD NAJIB MOHD YASSIN (Member, IEEE)
intensity data and its application to vehicle localization,’’ in Proc. 17th
Int. IEEE Conf. Intell. Transp. Syst. (ITSC), Oct. 2014, pp. 584–589, doi:
received the M.Eng. degree in electronic engi-
10.1109/ITSC.2014.6957753. neering from the University of Sheffield, U.K.,
[129] L. Frommberger and D. Wolter, ‘‘Structural knowledge transfer by spatial in 2007, and the Ph.D. degree in electronic engi-
abstraction for reinforcement learning agents,’’ Adapt. Behav., vol. 18, neering from the University of Sheffield. He has
no. 6, pp. 507–525, Dec. 2010, doi: 10.1177/1059712310391484. been a Lecturer at the School of Microelec-
[130] A. Kasmi, D. Denis, R. Aufrere, and R. Chapuis, ‘‘Map matching tronics, Universiti Malaysia Perlis, since 2013.
and lanes number estimation with openstreetmap,’’ in Proc. IEEE Int. His research interests include computational elec-
Conf. Intell. Transp. Syst. (ITSC), Nov. 2018, pp. 2659–2664, doi: tromagnetics, conformal antennas, mutual cou-
10.1109/ITSC.2018.8569840. pling, wireless power transfer, array design, and
[131] A. Kasmi, D. Denis, R. Aufrere, and R. Chapuis, ‘‘Probabilistic frame- dielectric resonator antennas.
work for ego-lane determination,’’ in Proc. IEEE Intell. Vehicles Symp.
(IV), Jun. 2019, pp. 1746–1752.
[132] A. Joshi and M. R. James, ‘‘Generation of accurate lane-level maps from
coarse prior maps and lidar,’’ IEEE Intell. Transp. Syst. Mag., vol. 7, no. 1,
pp. 19–29, Spring 2015, doi: 10.1109/MITS.2014.2364081. MOHD ZAMRI IBRAHIM received the B.Eng.
[133] J. Fritsch, K. Tobias, and A. Geiger, ‘‘A new performance measure and and M.Eng. degrees from Universiti Teknologi
evaluation benchmark for road detection algorithms,’’ in Proc. 16th Int. Malaysia, Malaysia, and the Ph.D. degree from
IEEE Conf. Intell. Transp. Syst., Oct. 2013, pp. 1693–1700. Loughborough University, U.K. He is currently
[134] M. Aly, ‘‘Real time detection of lane markers in urban streets,’’ 2014, a Senior Lecturer with the Faculty of Electrical
arXiv:1411.7113.
and Electronics Engineering, University Malaysia
[135] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson,
U. Franke, S. Roth, and B. Schiele, ‘‘The cityscapes dataset for Pahang, Malaysia. His research interests include
semantic urban scene understanding,’’ in Proc. IEEE Conf. Com- the area of computer vision, plasma science,
put. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 3213–3223, doi: embedded system programming, brain–computer
10.1109/CVPR.2016.350. interaction, image processing, intelligent systems,
and speech recognition.