Smart Eye: A Navigation and Obstacle Detection System for the Visually Impaired
ABSTRACT
The purpose of this study was to devise an efficient solution, known as SMART_EYE, aimed at assisting
visually impaired individuals in navigating unfamiliar environments and detecting obstacles. The
motivation behind this research stemmed from the significant population affected by vision impairment and
the limitations of existing navigation alternatives, which are often heavy and expensive and thus
infrequently adopted. To address this problem, we employed a method that utilized smart applications with
AI and sensor technology. The smart app captured and classified images, while obstacle detection was
performed using ultrasonic sensors. Voice commands were used to provide users with real-time information
about obstacles in their path. The results of this study demonstrated the effectiveness of the SMART_EYE
model in improving both qualitative and quantitative performance measures for visually impaired
individuals. The model offered a cost-effective alternative that enabled independent navigation and
obstacle detection, thereby enhancing the quality of life for this population. The practical implications of
this study are twofold: the SMART_EYE model provides a viable solution for visually impaired individuals
to navigate unfamiliar environments, and its cost-effectiveness addresses the limitations of existing assistive
devices. From a theoretical perspective, this study contributes to the field of assistive technology by
integrating AI and sensor technology in a smart app to aid visually impaired individuals. Furthermore, the
evaluation and ranking of different systems based on their impact on the lives of visually impaired people
provide a basis for further research and development in this area. In conclusion, this study's value lies in
its contribution to both theory and practice, showcasing the potential for future advancements in assistive
technology for visually impaired individuals, ultimately improving their quality of life.
Keywords: Visually Impaired, Objects, Detection, Recognition, Artificial Intelligence, Sensors
1. Introduction
Vision is an essential aspect of human life, providing us with the ability to perceive and
interact with the world around us. It enables us to navigate through various environments,
recognize objects, and interpret visual information. However, vision impairment is a prevalent
global issue, affecting a significant proportion of the population. According to the World Health
Organization (WHO), around 285 million people worldwide live with vision impairment, including
39 million who are blind and 246 million with low vision (Andrius Budrionis, 2022). The loss
or impairment of vision can have profound consequences, impacting an individual's
independence, mobility, and overall quality of life.
For individuals with visual disabilities, navigating unfamiliar environments can be
particularly challenging (Iskander, 2022). The ability to move around independently and safely is
crucial for their well-being and participation in society. While various navigation alternatives
have been developed over the years to assist visually impaired individuals, they often face
limitations that hinder their widespread adoption and effectiveness.
The existing landscape of navigation alternatives for visually impaired individuals is
characterized by devices and systems that are either too heavy or too expensive for universal use.
These limitations pose significant barriers, preventing individuals with visual disabilities from
accessing the necessary assistance and support they require for efficient navigation (Real, 2019).
Many available devices are bulky and burdensome to carry, imposing physical and practical
challenges for the user. Additionally, the high costs associated with these devices make them
inaccessible to a substantial portion of the target population, further exacerbating the problem.
Pydala et al … Vol 4(2) 2023 : 992-1011
The primary problem addressed by this research is the lack of accessible and affordable
navigation alternatives for visually impaired individuals (Lee, 2016). While numerous efforts
have been made to develop devices and systems to assist this population, their adoption and
implementation remain limited. The current options are often impractical, either due to their
weight or cost, preventing visually impaired individuals from fully benefiting from the
advancements in assistive technology (R. Gnana Praveen, 2013).
Visually impaired individuals face significant challenges when it comes to navigating
unfamiliar environments independently. Without reliable and accessible navigation solutions,
they are often reliant on the assistance of others or restricted in their mobility (Ilag, 2019). This
limitation not only affects their ability to engage in various activities but also hampers their
independence and self-confidence.
The existing navigation alternatives fail to meet the needs of visually impaired individuals
in terms of portability, affordability, and effectiveness (Jafri, 2018). Bulky and heavy devices
impose physical and practical constraints, making them inconvenient for daily use. Furthermore,
the high costs associated with these devices create a financial barrier, preventing a considerable
segment of the population from accessing the necessary assistance.
Therefore, there is an urgent need for a lightweight and cost-effective navigation system
that can address the challenges faced by visually impaired individuals. Such a system would
empower them to navigate unfamiliar environments independently, enhancing their mobility, self-
reliance, and overall quality of life. By providing an accessible and affordable solution, visually
impaired individuals would have the opportunity to participate more fully in society, engage in
various activities, and experience a greater sense of freedom and empowerment.
In light of these challenges, this research aims to develop a navigation system, called
SMART EYE, that is specifically designed to meet the needs of visually impaired individuals.
The SMART_EYE system will be lightweight, portable, and cost-effective, offering a practical
and innovative solution for independent navigation and obstacle detection. By addressing the
limitations of existing alternatives, this research seeks to bridge the gap and provide visually
impaired individuals with an efficient and accessible navigation system that can significantly
improve their daily lives.
To overcome the limitations of existing navigation alternatives, this research proposes the
SMART_EYE system, which aims to provide a lightweight and cost-effective solution for
visually impaired individuals to navigate independently and detect obstacles in their path. The
SMART_EYE system utilizes a smart application integrated with AI and sensor technology to
capture and classify images, while obstacle detection is achieved through the use of ultrasonic
sensors. Real-time assistance and obstacle information are provided to the user through voice
commands, enabling them to navigate unfamiliar environments more confidently and safely.
The precise objective of this research is to develop and evaluate the effectiveness of the
SMART_EYE system in addressing the navigation challenges faced by visually impaired
individuals. This objective encompasses the following key aspects:
1. Designing a lightweight and portable system application that is accessible to visually impaired
individuals, promoting ease of use and mobility.
2. Integrating AI and sensor technology within the smart application to enable real-time obstacle
detection and classification, enhancing the user's situational awareness.
3. Evaluating the performance and effectiveness of the SMART_EYE system through rigorous
testing and validation to determine its potential impact on the lives of visually impaired
individuals.
4. Assessing the cost-effectiveness of the SMART_EYE system to ensure its affordability and
widespread accessibility.
By achieving these objectives, this research aims to contribute to the field of assistive
technology by providing a practical and innovative solution for visually impaired individuals to
navigate unfamiliar environments independently. The development of the SMART_EYE system
addresses the existing limitations of heavy and expensive navigation alternatives, offering a
lightweight, cost-effective, and efficient solution that can significantly improve the lives of
visually impaired individuals.
The structure of the paper is as follows: Section 2 provides a Literature Survey, Section 3 presents
the methodology of the Proposed Work, Section 4 presents the Results and Discussion, and finally,
Section 5 concludes the paper with future scope.
2. Literature Survey
The Literature Survey of the research highlights various studies and approaches that have
been proposed to aid visually impaired individuals in navigation and obstacle detection. The
studies use different technologies such as sensor-based and computer vision, ultrasonic sensors,
IR sensors, IoT, and deep learning algorithms to develop assistive devices. These assistive devices
aim to improve indoor and outdoor mobility and create a functional system for individuals with
visual impairments, including those who are blind. The studies use different feedback
mechanisms such as auditory commands, vibrations, and object identification to provide
directional information and obstacle detection.
Elmannai W. M., (2018) proposes a data fusion framework for guiding visually impaired
individuals. The framework combines data from various sensors such as ultrasonic sensors, depth
sensors, and cameras to accurately detect obstacles and provide directional information. The
proposed framework also employs machine learning algorithms to enhance the accuracy of
obstacle detection and to classify the type of obstacle. The authors conducted experiments to
validate the accuracy and reliability of the framework, and the results demonstrate a significant
improvement in obstacle detection compared to existing systems. The proposed framework has
the potential to enhance the mobility of visually impaired individuals and improve their quality
of life. A limitation of this study is that the approach cannot indicate the indoor and outdoor
coverage area of obstacles, and its directional facility lacks the ability to recognize longer
objects such as doors and walls.
Shah (2006) presents a study of a novel sensory direction model for the
visually impaired. The system uses ultrasonic sensors to detect obstacles and direction and
transmits this information to the user through vibrations of varying intensity and patterns on a
handle. The handle is designed to provide feedback to the user based on the vibration's intensity,
sensor position, and signal pulse length. The study involved 15 visually impaired individuals of
different age groups who were blindfolded and tested in various navigation scenarios. The device
was found to be flexible, lightweight, and ergonomically designed to fit different hand sizes.
However, one limitation of this technique is that it cannot be connected to a camera for image
processing, which restricts its ability to detect information about crosswalks and traffic signals in
the outdoor environment.
Many electronic devices that aid people with vision loss use information gathered from the
environment and provide feedback through touchable or auditory signals. However, the preferred
feedback form is still a matter of debate, and opinions vary among individuals. Despite this
(Elmannai & Elleithy, 2017), there are certain essential components that any electronic system
assisting blind or visually impaired people must have to ensure its reliability and usefulness. These
characteristics can be used to evaluate the dependability and effectiveness of the system.
However, a drawback of some of these systems is that they may have difficulty detecting objects
at certain ranges and are limited to static object detection rather than dynamic object detection.
Islam (2019) reviews the development of walking assistants for visually impaired people
and discusses recent innovative technologies in this field, along with their merits and demerits.
The review aims to draw a schema for upcoming development in the field of sensors, computer
vision, and smartphone-based walking assistants. The goal is to provide a basis for different
researchers to develop walking assistants that ensure the movability and safety of visually
impaired people.
Ponnada (2018) presents a prototype of mobility recognition using feature vector
identification and sensor computed processor Arduino chips to assist visually challenged people
in recognizing staircases and manholes. The prototype provides more independence to the
sightless people while walking on the roads and helps them pass through without any assistance.
The model is developed using an Arduino kit and a low-weight stick to recognize obstacles, with
the chip programmed and embedded in the stick to detect manholes and staircases using a
bivariate Gaussian mixture model and speeded up robust features algorithm for feature extraction.
The developed model shows an accuracy of around 90% for manhole detection and 88% for
staircase detection. Ahmad (2018) proposes a model-based state-feedback control strategy for a
multi-sensor obstacle detection system in a smart cane. The accuracy of the sensors and actuator
positions is critical to ensuring correct signals are sent to the user. Low-cost sensors can result in
false alerts due to noise and erratic readings. The proposed approach uses a linear quadratic
regulator-based controller and dynamic feedback compensators to minimize false alerts and
improve accuracy. Real-time experiments showed significant improvements in error reductions
compared to conventional methods.
Bai (2018) presents a novel wearable navigation device to assist visually
impaired people in navigating indoor environments safely and efficiently. The proposed device
consists of essential components such as locating, way-finding, route following, and obstacle
avoiding modules. The authors propose a novel scheme that utilizes a dynamic subgoal selecting
strategy to guide users to their destination while avoiding obstacles in a complex, changeable, and
possibly dynamic indoor environment. The navigation system is deployed on a pair of wearable
optical see-through glasses for ease of use, and it has been tested on a collection of individuals
and found effective for indoor navigation tasks. The device's sensors are of low cost, small
volume, and easy integration, making it suitable as a wearable consumer device.
The study (Tiponut, 2010) focuses on electronic travel aids (ETAs) developed using sensor
technology and signal processing to improve the movement of visually impaired people (VIPs) in
constantly changing environments. Despite efforts to create effective ETAs, VIPs still rely on
traditional aids like white canes and guide dogs. The study proposes an ETA tool with an
Obstacles Detection System (ODS) and a Man-Machine Interface, inspired by the visual system
of locusts and flies. However, the tool has limitations in identifying obstacle labels and providing
auditory navigation for blind users.
Chaitali M. Patil (2016) proposes a system framework that can help remove communication
barriers for people with visual, auditory, and speech disabilities, allowing them to communicate
with each other and non-disabled individuals using various modes of communication such as
American Sign Language, audio, braille, and regular text. The system aims to improve the
individual's capacity and desire to convey and transmit messages. However, the system lacks an
effective prototype using the latest technologies.
The paper (Bhasha, 2020) proposes a new smart cane for visually impaired people (VIP)
which can detect obstacles, water, and light environments in front of, and to the left and right of,
the user. The smart cane is constructed using an Arduino Mega 2560 Microcontroller, Ultrasonic
Sensors, Light Sensor, and Soil Moisture Sensor. The device generates an audio feedback signal
to the user if any obstacle or water or light environment is present in their walking path. The
proposed smart cane is more affordable than existing electronic sticks, and the VIPs find it
comfortable to use because it is very familiar like a traditional stick. The detection algorithms
used in this paper are simple and efficient for detecting the obstacles, water, and light environment
of the user’s path, thus helping the user to travel independently from source to destination. Ran
(2004) states that few visually impaired assistance aids can provide lively communications and
responsiveness to the user and that even if such a system existed, it would likely be sophisticated
and not take into account the demands of a blind person, such as simplicity, ease of use, and less
complexity.
Patil (2018) discusses NavGuide, which employs ultrasonic sensors to classify obstacles
and environmental conditions and uses vibration and audio alerts to provide information to the
user. However, NavGuide has limitations, including the inability to detect downhill slopes and
the detection of damp floors only after the user steps on them.
Marzec (2019) describes a navigation system that uses IR sensors to detect walls, buildings,
and other objects. The system requires the user to hold the device in their arms, and vibrations are
used to convey navigation signals about potential movements and nearby threats. Parimal A.
Itankar (2016) aimed to enhance the experience of visually impaired people by using weakly
supervised learning to match ambient music selected by a deep neural network. They suggested
a multifaceted strategy for measuring ambiguous concepts related to music, including availability,
implicit senses, immersion, and subjective fitness. The authors conducted in-depth trials involving
70 individuals and collected feedback on the features of their model. However, the investigation
had three significant flaws: the performance was not cutting-edge, the music database was limited
in terms of genre and size, and each experiment involved only 10 people, making it impossible to
extrapolate the findings. To generalize the findings, large-scale experiments are necessary, and
improved auditory feature representation through devices is needed to enhance accuracy.
Sangpal (2019) describes a system that uses Python and AIML to create an intelligent
chatbot assistant that mimics the behavior of a human assistant. The system is designed to respond
to queries or issues with spoken word remedies. Python programs are used to convert audio
commands to text format and for audio reply and voice recognition, similar to Google text-to-
speech. AIML is used to match instructions or text to existing dialogues and conversations using
predefined audio syntax. The Python interpreter forms the core of the system, which the authors
present as state-of-the-art among intelligent chatbot assistants.
Ashraf (2020) describes an IoT-powered smart stick developed by Ayesha Ashraf et al. to
assist people with vision impairments. The stick has an ultrasonic sensor and a buzzer for
detecting obstacles and sounding an alarm. An Android app is also developed that can send
essential notifications and GPS location to saved phone numbers. The device is lightweight and
portable, making it easier for people with disabilities to walk around more easily and comfortably
without the risk of injury. The authors also explore how image processing and interaction with
the aid can help the user understand the structure of obstacles and objects before providing advice
from the aid. Overall, the smart stick is designed to enhance the mobility and safety of visually
impaired individuals.
Rahman (2021) presents a smart device for visually impaired people (VIPs) that utilizes
deep learning and the Internet of Things (IoT)(Kurniawan & Saputra, 2022). The device is divided
into three parts: an IoT-based smart stick that monitors the blind person's movement in real-time
through the cloud, deep learning algorithms for detecting obstacles, and a virtual assistant to
manage the integration. The paper uses the Mask R-CNN model for object detection, which
allows for accurate object detection in a short processing time. However, due to the wide range
of obstacles in the real world, this model uses a limited number of sensors and devices and a pre-
trained model in object recognition with a limited number of real-world images. Bhavani (2021)
proposes an approach to provide visually impaired individuals with direction finding, directional
help, walking path notification, and an understanding of their surroundings. The proposed
approach utilizes highly sensitive sensors and a comfortable and flexible carbon material to
construct the stick. The study accounts for various disabilities and provides auditory output that
blind users can use to recognize the buzzer's position.
The paper discusses an assistance aid proposed by (Salama, 2019) which uses an ultrasonic
sensor to detect obstacles in the path of visually impaired individuals. The sensor measures the
height and distance of the obstacle and communicates the information to a microcontroller. The
system can sense a distance of up to 12 feet (about 3.7 m) with a resolution of 0.3 cm. However, the paper
highlights that such a system may not be suitable for visually impaired individuals as it may be
too complex and not meet their requirements for simplicity and ease of use.
The article discusses assistive technology, which refers to tools or equipment that help
people with impairments to participate fully in society (Foley, 2012) (Mountain, 2004) (Pentland,
1998). Smart aids are a type of assistive technology that can come in mobile computerized forms,
such as mobile phones, and are more covert than conventional assistive technologies, reducing
social stigma. Navigation aids for the blind are limited in their ability to detect and alert users to
the types of obstacles in front of them, and RFID-based systems are expensive and prone to
damage. To address these limitations, the article proposes a navigation system that uses deep
learning algorithms and a smartphone to identify various obstacles. This system does not require
the deployment of RFID chips and is not limited to particular indoor or outdoor settings, thus
expanding the locations where it can be used and providing visually impaired people with more
information about their surroundings.
Table 1 - Comparison of Assistance Aids for Visually Impaired People
Reference: (Elmannai W. M., 2018)
Metrics: Hurdle-avoiding approach and data transformation algorithm
Advantages: Significant improvement and qualitative development in conflict warning
Disadvantages: Incapable of showing indoor and outdoor coverage area of obstacles
3. Methodology
The proposed methodology aims to design a lightweight and portable system application
that is accessible to visually impaired individuals, promoting ease of use and mobility.
Additionally, it involves integrating AI and sensor technology within the smart application to
enable real-time obstacle detection and classification, enhancing the user's situational awareness.
Proposed System
The system design includes a wearable device and a voice-based navigation system that
assists visually impaired individuals in recognizing, detecting, and avoiding obstacles. It
combines computer vision with sensor-based technology to enable real-time obstacle detection
and classification, addressing the objectives of a lightweight and portable system application.
The methodology includes the following steps:
1. Designing a Wearable Device and Voice-Based Navigation System: The first step in the
methodology involves designing a wearable device and developing a voice-based navigation
system specifically tailored for visually impaired individuals. The device should be
lightweight and portable, ensuring ease of use and mobility. It should be comfortable for the
user to wear and provide convenient access to the navigation features.
2. Implementing Computer Vision and Sensor Technology: To enable real-time obstacle
detection and classification, the proposed system integrates computer vision and sensor
technology. Computer vision algorithms are used to process images captured by the device's
camera and detect objects in the environment. Sensor technology, such as ultrasonic sensors
or depth sensors, is utilized to measure the proximity and depth of obstacles.
3. Proximity Measurement Method: The proposed model introduces a novel proximity
measurement method for estimating the distance of obstacles based on their depth. This
strategy overcomes the limitations of the current system by enabling the detection of multiple
objects simultaneously, including longer obstacles like doors and walls. The proximity
measurement method enhances the user's situational awareness by providing accurate
information about the distance of obstacles in real-time.
4. Implementation Tools: The proposed work has been implemented using various tools and
technologies. Raspberry Pi, a small and affordable single-board computer, is utilized as the
hardware platform for the wearable device. Android Studio, an integrated development
environment, is employed to develop the Android app for the navigation system. Google
COLAB Tool, a cloud-based platform for machine learning, is utilized for object detection
algorithms.
5. Lightweight, Efficient, and Cost-Effective System: The developed system is designed to be
lightweight, efficient, and cost-effective. The hardware components are carefully selected to
ensure portability and minimize the overall weight of the device. The software algorithms are
optimized for efficient processing and real-time performance. By utilizing affordable and
readily available hardware and software resources, the system aims to provide a cost-effective
solution for visually impaired individuals in need of navigation assistance.
6. User Testing and Evaluation: Once the system is implemented, user testing and evaluation
are conducted to assess its performance and usability. Visually impaired individuals participate
in the testing process, providing feedback on the system's effectiveness in assisting with
navigation and obstacle detection. User feedback is valuable for refining and improving the
system's design and functionality.
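The proximity-measurement idea in step 3 above can be sketched in code: given a depth map and the bounding boxes reported by the object detector, estimate each obstacle's distance as the median depth inside its box, which allows several objects (including long ones such as doors and walls) to be ranged at once. The function name, the median-over-box rule, and the toy depth values are illustrative assumptions, not the paper's exact algorithm.

```python
# Estimate per-obstacle distance from a depth map and detector bounding boxes.
from statistics import median

def obstacle_proximity(depth_map, boxes):
    """depth_map: 2-D list of depth readings in metres.
    boxes: dict mapping label -> (row0, col0, row1, col1), ends exclusive.
    Returns dict mapping label -> estimated distance in metres."""
    distances = {}
    for label, (r0, c0, r1, c1) in boxes.items():
        region = [depth_map[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        # The median is robust to a few noisy sensor readings inside the box.
        distances[label] = median(region)
    return distances

depth = [
    [2.0, 2.1, 5.0, 5.1],
    [2.0, 2.2, 5.0, 5.2],
    [2.1, 2.0, 5.1, 5.0],
]
boxes = {"door": (0, 0, 3, 2), "wall": (0, 2, 3, 4)}
print(obstacle_proximity(depth, boxes))
```

Ranging every detected object in one pass is what lets the system report multiple simultaneous obstacles rather than only the nearest one.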
2. User Interface Design: The interface should be simple, clear, and easy to navigate, enabling
users to control the device and receive information effectively.
3. Voice Recognition and Natural Language Processing: To enable voice-based navigation, the
system needs to incorporate voice recognition and natural language processing capabilities.
Voice recognition algorithms are employed to accurately interpret and understand the user's
voice commands. Natural language processing techniques help in understanding the context
and intent of the user's instructions, allowing the system to respond appropriately.
4. Navigation and Routing Algorithms: The design also involves developing navigation and
routing algorithms that can guide visually impaired individuals through different
environments. These algorithms consider factors such as the user's current location, desired
destination, available paths, and obstacle information. By utilizing mapping data and real-time
feedback from the obstacle detection system, the device can provide step-by-step directions,
alert users about obstacles in their path, and suggest alternative routes when necessary.
5. Integration of Sensor Technology: Sensor technology plays a crucial role in obstacle detection
and enhancing situational awareness. Sensors like ultrasonic or depth sensors can be integrated
into the wearable device to detect the presence and proximity of obstacles. The data from these
sensors is processed and used to provide real-time feedback to the user about the distance and
location of obstacles.
6. Accessibility and Ergonomics: Accessibility and ergonomics are critical considerations in the
design process. The device should be accessible to individuals with visual impairments, taking
into account factors such as tactile feedback, braille labels, and adjustable straps for fitting
different body sizes. The device should also be ergonomic, ensuring that it is comfortable and
unobtrusive for the user to wear for extended periods.
7. Iterative Design and User Feedback: The design process typically involves multiple iterations
and user feedback. Prototype versions of the wearable device and navigation system are tested
with visually impaired individuals to gather insights and improve the design. User feedback
helps in refining the interface, addressing usability issues, and enhancing the overall user
experience.
By considering these aspects and leveraging advancements in technology, the design of a
wearable device and voice-based navigation system can provide visually impaired individuals
with a user-friendly and effective means of navigating their surroundings independently.
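The obstacle-alert logic described above (a sensor distance goes in, a spoken warning comes out) can be sketched as follows. The 1.5 m warning threshold, the message phrasing, and the function name are assumptions for illustration; in the real device the returned string would be passed to a text-to-speech engine.

```python
# Turn one detected obstacle into a voice message, or None if no alert is needed.
def obstacle_alert(label, distance_m, warn_at=1.5):
    """Return the spoken warning for an obstacle closer than `warn_at` metres."""
    if distance_m > warn_at:
        return None  # far enough away: stay silent to avoid alert fatigue
    side = "ahead"  # a full system would also report left/right from sensor position
    return f"{label} {side}, {distance_m:.1f} metres"

print(obstacle_alert("door", 0.8))
print(obstacle_alert("wall", 3.0))
```

Suppressing alerts beyond the threshold keeps the audio channel free for genuinely nearby hazards, which matters for a user who relies on hearing for other environmental cues.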
Fig. 3. The proposed system architecture for static/dynamic object detection and the navigation process.
The proposed work has been implemented using Raspberry Pi, Android Studio, and Google
COLAB Tool for object detection. The system is lightweight, efficient, and cost-effective, making
it a suitable option for visually impaired individuals who need assistance in navigating their
surroundings. The device can also assist regular walkers in detecting obstacles and avoiding
accidents.
• Acoustic modeling aims to estimate the conditional probability of the observed feature
vectors given the underlying phonetic units or subword units.
• Let H represent the set of possible phonetic units or subword units, and P(X|H) represent
the probability of observing feature vectors X given a phonetic or subword unit sequence H.
• Acoustic modeling techniques, such as Hidden Markov Models (HMMs) or Deep Neural
Networks (DNNs), are employed to learn and estimate these probabilities.
4. Language Modeling:
• Language modeling focuses on estimating the likelihood of word sequences or phrases in
a given language.
• Let W represent the set of possible word sequences, and P(W) represent the probability
distribution over word sequences.
• Language models learn the statistical patterns and context of words to estimate the
probability of a particular word sequence.
5. Decoding:
• Decoding combines the acoustic and language models to find the most probable word
sequence given the observed feature vectors.
• The decoding process involves finding the word sequence W that maximizes the combination
of the acoustic score P(X|H) and the language-model score P(W), where H is the phonetic or
subword unit sequence corresponding to W.
• Decoding algorithms, such as Viterbi decoding over the HMM states or beam search, are
employed to find the most probable word sequence.
6. Postprocessing:
• Postprocessing techniques are applied to refine the recognized text and improve its
accuracy.
• These techniques may involve language-specific rules, grammar checks, spell checking, or
statistical methods to correct common recognition errors.
7. Output:
• The output of the voice recognition algorithm is the recognized text, which represents the
transcription of the spoken input.
In mathematical terms, the voice recognition algorithm involves estimating conditional
probabilities of feature vectors given phonetic or subword units (acoustic modeling), estimating
the likelihood of word sequences (language modeling), decoding to find the most probable word
sequence, and applying postprocessing techniques. The recognized text is the final output of the
algorithm.
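The maximization described above can be illustrated with a toy sketch. The function name, candidate sequences, and all probability values below are made-up examples for illustration, not outputs of the paper's trained models:

```python
import math

def decode(acoustic_scores, lm_scores):
    """Return the word sequence W maximizing P(X|W) * P(W).

    acoustic_scores: dict mapping candidate word sequences to P(X|W)
    lm_scores:       dict mapping candidate word sequences to P(W)
    """
    best_seq, best_logp = None, -math.inf
    for seq, p_x_given_w in acoustic_scores.items():
        # Work in log space to avoid numerical underflow on long sequences.
        logp = math.log(p_x_given_w) + math.log(lm_scores[seq])
        if logp > best_logp:
            best_seq, best_logp = seq, logp
    return best_seq

# Example: the acoustic model slightly prefers the acoustically similar
# sequence, but the language-model prior tips the decision the other way.
acoustic = {"recognize speech": 0.40, "wreck a nice beach": 0.45}
lm = {"recognize speech": 0.010, "wreck a nice beach": 0.001}
print(decode(acoustic, lm))  # recognize speech
```

In a real recognizer the candidate set is not enumerated exhaustively; beam search prunes the hypothesis space while applying the same scoring rule.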
Distance Calculation Formula: The distance calculation depends on the specific sensor technology used for depth extraction. If ultrasonic sensors are employed, the distance can be estimated from the time-of-flight principle: d = (v × t) / 2, where v is the speed of sound in air (approximately 343 m/s at 20 °C) and t is the measured round-trip time of the echo; the division by two accounts for the pulse travelling to the obstacle and back.
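The time-of-flight calculation can be sketched as follows. The function name and the 343 m/s figure (dry air at roughly 20 °C) are illustrative assumptions, not values specified in the paper:

```python
# Time-of-flight distance estimate for an ultrasonic sensor.
# distance = (speed_of_sound * echo_time) / 2, halved because the
# pulse travels to the obstacle and back.

SPEED_OF_SOUND_M_S = 343.0

def ultrasonic_distance_m(echo_time_s: float) -> float:
    """Convert a round-trip echo time in seconds to a distance in metres."""
    return (SPEED_OF_SOUND_M_S * echo_time_s) / 2.0

# A 10 ms round trip corresponds to roughly 1.7 m.
print(round(ultrasonic_distance_m(0.010), 3))  # 1.715
```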
Bluetooth Pairing:
In the proposed system, the Bluetooth device is named “project18”. Once Bluetooth is switched on, the mobile app “Speak Data” retrieves the list of available wireless devices, and the user connects to the required device by tapping the “Connect Bluetooth” button on the app's home page.
Fig 11. Dynamic object identification returning one detection, with label 'teddy bear'
Further samples of identified objects are:
Bench, Plant, Bottle, Laptop, Suitcase, Human, Umbrella, Two-Wheeler, Couch, and Dog.
• Accuracy Values: The values in the table indicate the detection accuracy for each object
category under the respective lighting conditions. For example, in the "Human" row, the
accuracy for daytime detection is given as 0.95 (or 95%), while the accuracy for nighttime
detection is 0.76 (or 76%).
• Interpretation: Higher accuracy values suggest that the system has a better ability to accurately
detect the specified obstacles. For instance, an accuracy value of 0.95 (or 95%) indicates a
high likelihood of correctly identifying the obstacle, while a value of 0.76 (or 76%) suggests
a relatively lower accuracy.
It's important to note that the accuracy values in the table are provided without additional
information regarding the specific methodology or dataset used to calculate them. The accuracy
of obstacle detection can vary depending on the specific algorithms, training data, and evaluation
metrics employed in the system.
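One common way such per-category accuracies are obtained is as the fraction of trials in which the object was correctly detected. The sketch below illustrates this; the counts are invented examples, not the paper's experimental data:

```python
def detection_accuracy(correct: int, total: int) -> float:
    """Fraction of trials in which the object was correctly detected."""
    if total <= 0:
        raise ValueError("total must be positive")
    return correct / total

# e.g. 95 correct daytime detections out of 100 trials -> 0.95,
# and 76 correct nighttime detections out of 100 trials -> 0.76.
print(detection_accuracy(95, 100))  # 0.95
print(detection_accuracy(76, 100))  # 0.76
```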