Sample Project Report
Submitted by
SCHOOL OF COMPUTING
COMPUTER SCIENCE AND ENGINEERING
KALASALINGAM ACADEMY OF RESEARCH
AND EDUCATION
KRISHNANKOIL 626 126
MAY 2024
DECLARATION
V S N S Yashwanth Kommuri
9920004502
G N V Rajaram
9920004457
This is to certify that the above statement made by the candidate is correct to the best of my
knowledge.
Date:
Signature of supervisor
BONAFIDE CERTIFICATE
ACKNOWLEDGEMENT
First and foremost, we thank the ‘Supreme Power’ for the immense grace showered on us which
enabled us to do this project. We take this opportunity to express sincere thanks to the late,
We thank our Vice Chancellor Dr. S. NARAYANAN, Ph.D., for guiding every one of us
and infusing us with the strength and enthusiasm to work successfully.
We wish to express our sincere thanks to our respected Head of the Department Dr. N.
SURESH KUMAR, whose moral support encouraged us to progress through our project work
successfully.
We offer our sincerest gratitude to our Project Supervisor, Dr. R. RAJA SUBRAMANIAN,
for his patience, motivation, enthusiasm, and immense knowledge.
We are extremely grateful to our Overall Project Coordinator, Dr. S. Ariffa Begum, for her
constant encouragement in the completion of the Capstone Project.
Finally, we thank all our Parents, Faculty, Non-Teaching Staff, and friends for their moral
support.
SCHOOL OF COMPUTING
COMPUTER SCIENCE AND ENGINEERING
PROJECT SUMMARY
REALISTIC CONSTRAINTS:
Environmental:
Sustainability:
Engineering Standards:
IEEE P2413 - IEEE Standard for an Architectural Framework for the Internet of Things (IoT)
establishes guidelines and definitions for the architectural framework of IoT systems,
encompassing various aspects such as collaboration, scalability, security, and device
management. This standard provides a structured approach to designing IoT systems, ensuring
compatibility and seamless integration across diverse IoT environments. In accordance with
IEEE P2413, the proposed system embraces an architectural framework that enables efficient
communication, data exchange, and interconnection among IoT devices and platforms. By
adhering to these standards, the proposed system ensures robustness, flexibility, and scalability
in IoT deployment, facilitating seamless interaction and integration in IoT ecosystems.
ABSTRACT
In the realm of modern cleaning solutions, the emergence of autonomous cleaning robots has
changed household and commercial maintenance. This report presents the design and
implementation of the proposed approach, an innovative autonomous cleaning robot equipped
with state-of-the-art sensors and intelligent algorithms. The system's scope encompasses a wide
range of environments, addressing the diverse cleaning needs of homes, offices, and public
spaces. Challenges such as adaptability to different floor surfaces and navigation through
complex layouts and obstacles are addressed through the integration of advanced hardware and
software components. In our proposed system, we use the combined strengths of deep learning
algorithms to achieve accurate object detection and waste identification. The system's
performance is evaluated against various datasets, demonstrating high accuracy and
effectiveness in real-world cleaning scenarios compared to existing solutions. Through its
cutting-edge design and capabilities, the suggested approach aims to improve cleaning
standards and simplify maintenance routines, offering a practical and effective solution for
ensuring hygienic living and working environments.
TABLE OF CONTENTS
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
CHAPTER I INTRODUCTION
1.1 OVERVIEW
1.3 CHALLENGES
7.1 METHODOLOGY
7.4 HOW THE FASTER RCNN ALGORITHM WORKS IN SYNC WITH HARDWARE
COMPONENTS
7.9 PROTOTYPE
REFERENCES
PUBLICATION
CERTIFICATIONS
PLAGIARISM REPORT
LIST OF TABLES
LIST OF FIGURES
Figure 9 CNN layers with data augmented layers and more dense layers
CHAPTER-I
INTRODUCTION
1.1 Overview:
The rapid advancement of technology in recent years has spurred the development of
innovative solutions aimed at simplifying and enhancing everyday tasks, with autonomous
cleaning robots emerging as a prominent example. According to recent market research, the
global cleaning robot market is projected to reach a value of $25.9 billion by 2027,
experiencing a compound annual growth rate (CAGR) of 21.5% from 2022 to 2027. These
robots have garnered significant attention for their potential to revolutionize home and
commercial cleaning practices, offering benefits such as increased efficiency, reduced labor
costs, and improved hygiene standards. In response to this growing demand, this report
proposes a novel framework that integrates advanced technology into the realm of cleaning
robotics.
At the heart of this framework lies the concept of a cleaning robot: a cutting-edge robotic
solution designed to autonomously clean designated areas without requiring human
intervention. Recent studies have indicated a strong consumer interest in autonomous cleaning
robots, with over 60% of respondents expressing a willingness to invest in such technology to
simplify household chores. The proposed system leverages state-of-the-art components and
algorithms to achieve its cleaning objectives effectively.
The system is equipped with an array of components that collaborate seamlessly to empower
the robot's navigation, object detection, and precise execution of cleaning tasks. These
integrated features, including real-time data analysis and object detection capabilities,
underscore the system's dedication to enhancing efficiency and adaptability in cleaning
operations.
Furthermore, the proposed framework addresses the evolving needs of various settings,
including homes, offices, hospitals, and other commercial buildings. With the increasing
emphasis on cleanliness and sanitation in light of global health concerns, the adoption of
autonomous cleaning robots is expected to witness substantial growth across diverse industries.
By harnessing the power of automation and artificial intelligence, the proposed system aims to
redefine the standards of cleanliness and hygiene, offering a forward-thinking solution for
modern living and working environments.
The proposed framework presents a versatile solution with broad applications spanning various
domains, from household cleaning to industrial environments. In residential settings, the
proposed approach, equipped with innovative technology, offers autonomous navigation and
obstacle avoidance capabilities, revolutionizing cleaning routines and minimizing human
intervention. Similarly, in commercial spaces such as offices, shopping centers, and hospitals,
the framework contributes to maintaining cleanliness and hygiene standards through automated
cleaning tasks. Its ability to navigate complex environments and efficiently clean floors
enhances operational efficiency and promotes a healthier environment for occupants. Overall,
the outlined structure addresses the diverse cleaning needs of both residential and commercial
settings, offering a reliable and efficient solution to streamline cleaning operations.
1.3 Challenges:
Despite the promising potential of autonomous cleaning robots, several challenges must be
addressed to ensure their effectiveness and widespread adoption. One of the primary hurdles is
the accurate detection and classification of objects across diverse environments, which is
crucial for the robot's ability to navigate and perform cleaning tasks efficiently. Achieving this
level of accuracy requires robust algorithms and sensors capable of recognizing various objects
and adapting to different surroundings. Additionally, seamless communication between the
onboard camera and the cloud server is essential for real-time data analysis, enabling the robot
to make informed decisions and respond swiftly to changes in its environment. Ensuring
reliable and low-latency communication is vital for enhancing the overall performance of the
system. Furthermore, optimizing the performance of hardware components, such as sensors,
motors, and microcontrollers, is crucial for achieving efficient cleaning operations. This
involves fine-tuning the hardware to work seamlessly together and integrating advanced
features to enhance the robot's capabilities. Overcoming these challenges will be key in
unlocking the full potential of autonomous cleaning robots and accelerating their adoption in
various settings, ultimately revolutionizing the way we approach cleaning tasks.
CHAPTER-II
LITERATURE REVIEW
Creating autonomous robots that can carry out diverse tasks in diverse environments has
garnered significant attention in recent times. Object detection and recognition is a major
challenge in this field, as it is an essential task for many robotic applications. To address this
challenge, the proposed system employs a camera module on an autonomous cleaning bot. The
system captures an image of the object and sends it to the cloud, where the trained data is
stored; the captured data is then compared with the trained data and the outcome is displayed.
The literature review that follows highlights ideas and technologies pertinent to the proposed
system.
A microcontroller is utilized in the research paper "Design and implement of robotic arm and
control of moving via IoT with Arduino ESP32" by Ahmed et al. (2021) to create an Internet of
Things-based robotic arm control system. They demonstrated how the robotic arm can be
utilized in a variety of settings, including medical and warehouse automation, by using Internet
of Things technology to remotely control it. The goal of this project is to control a robotic arm,
and it emphasizes how crucial it is to use microcontrollers like the Arduino ESP32 in robotics
applications. For communication between the controller and the cloud server, the system makes
use of a Wi-Fi module. Similarly, in the proposed system, the camera module communicates
with the cloud to test the captured data.
The use of deep learning algorithms for real-time human detection on embedded platforms is
covered in the research paper "Real-Time Human Detection Using Deep Learning on Embedded
Platforms: A Review" by Rahmaniar and Hernawan (2021). They examined various deep-
learning techniques for human detection and talked about the benefits and drawbacks of each
technique. The study emphasized the value of real-time processing for robotics applications and
how embedded platforms can help accomplish this objective. Similar methods are employed in
the suggested system for object detection, using deep learning algorithms with the cam module.
The use of ultrasonic sensors for obstacle detection in an intelligent mobile robot is covered in
the research paper "Design and Implementation of Intelligent Mobile Robot based on
Microcontroller by Using Three Ultrasonic Sensors" by Rejab and Abd-Al Hussain (2018).
They demonstrated how the robot uses ultrasonic sensors to detect and avoid obstacles. This
study emphasizes the value of sensors in robotics applications by concentrating on obstacle
detection. In the proposed system, ultrasonic sensors identify obstacles and guide the
framework's movement.
A microcontroller-based DC motor speed control utilizing the PWM technique was designed
and implemented in the research paper "Microcontroller Based DC Motor Speed Control Using
PWM Technique" by Russell and Bhuyan (2012). They demonstrated how to regulate the speed
of DC motors using the pulse width modulation (PWM) technique. The significance of motor
control in robotics applications is emphasized by this study. In the proposed system, DC
motors drive the framework, and the L298N motor driver module provides speed control.
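The PWM idea from Russell and Bhuyan's paper can be sketched in a few lines. The helper below is a hypothetical illustration rather than the robot's actual firmware; the 8-bit range (0-255) mirrors the duty-cycle value Arduino's analogWrite passes to the L298N enable pin.

```python
def duty_cycle(speed_percent: float) -> int:
    """Map a desired motor speed (0-100 %) to an 8-bit PWM duty value (0-255),
    the range used by Arduino's analogWrite when driving an L298N enable pin."""
    if not 0 <= speed_percent <= 100:
        raise ValueError("speed must be between 0 and 100")
    return round(speed_percent * 255 / 100)

# Half speed corresponds to a duty value of about 128
half = duty_cycle(50)
```

A longer PWM duty cycle keeps the motor energized for a larger fraction of each switching period, which is what produces the higher average speed described in the paper.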
When comparing the proposed framework with the research paper authored by Aman,
Rajkumar, and Anuradha on the "Autonomous Floor Cleaning Robot (Navigation)" (2020),
several similarities and differences emerge. Both initiatives share the common objective of
developing autonomous floor-cleaning robots, yet they diverge in their primary focuses and
operational methodologies. Their study appears to prioritize navigation strategies, likely delving
into path planning, obstacle avoidance, and localization techniques to enable effective
movement within a given environment. Conversely, the suggested system emphasizes object
identification using the camera, supplemented by cloud-based processing for image analysis,
making it efficient for cleaning tasks.
In comparing the proposed framework with the research paper by Parth and Khan on the
"Autonomous Vacuum Cleaning robot using Sonar Navigation and SLAM" (2021), several
similarities and distinctions emerge. While both initiatives aim to develop autonomous vacuum-
cleaning robots, they employ different navigation and mapping techniques. Their work focuses
on utilizing sonar navigation and SLAM (Simultaneous Localization and Mapping) algorithms
to enable the robot to navigate and map its environment in real-time. In contrast, our system
prioritizes object identification using the cam module, supplemented by cloud-based processing
for image analysis. While sonar navigation and SLAM are effective for spatial awareness and
navigation, the proposed framework's approach allows for efficient object identification and
cleaning operations.
In contrast to the research paper authored by Sonia and Ganesh on the "Design and
Implementation of Floor Cleaning Robot Using IOT" (2021), the proposed framework presents
a distinctive approach to autonomous floor cleaning. While both projects involve floor-cleaning
robots, they diverge in terms of their underlying technologies and functionalities. Their work
emphasizes the use of Internet of Things (IoT) technology for connectivity and control, likely
involving remote monitoring and operation of the cleaning robot. In contrast, the system focuses
on object identification using the cam module, coupled with cloud-based processing for image
analysis and decision-making. This enables the robot to autonomously detect and classify
objects in its environment, facilitating efficient cleaning operations.
In contrast to the aforementioned studies, the suggested system seeks to accomplish object
detection and recognition for cleaning applications by combining a camera module with an
Arduino microcontroller, an L298N motor driver module, an ultrasonic sensor, DC motors,
and a DC pump. The system uses the cloud to store the trained data and to test the captured
data against it. Whereas the previous studies concentrated on individual facets of robotics, the
suggested system emphasizes how crucial it is to combine different hardware elements and
technologies to accomplish the intended task.
Note: Our comprehensive study of image processing techniques in the Digital Image Processing course
(CSEOOE064) has equipped us with the necessary skills to manipulate and analyze images effectively. Leveraging
this knowledge with algorithms that accurately extract meaningful information from visual data
enhances the overall functionality and performance of our prototype.
CHAPTER-III
The proposed system aims to address several key challenges in autonomous cleaning by
integrating various components. Leveraging the cam module, this system focuses on real-time
object detection and recognition within indoor environments. The Arduino, coupled with the
L298N motor driver module, orchestrates and controls the intricate maneuvers of the DC
Motors and DC Pump, enabling the robot's movement and functionality. The ultrasonic sensor
plays a pivotal role in providing distance measurements, facilitating obstacle detection, and
ensuring safe navigation. With the captured images, the camera processes and transmits data
to the cloud, where trained algorithms identify objects and relay information back to the
system. This process enables the system to display real-time object identification results,
thereby enhancing the robot's efficiency and autonomy.
CHAPTER-IV
PROPOSED SYSTEM
The proposed framework aims to develop an AutoCleanAI that can detect and classify objects
using an ESP32 camera. The framework also includes an Arduino, ultrasonic sensors, DC
motors, and a DC pump. The cam module detects objects using its camera and sends the
captured picture to the cloud, where the trained data is stored. The framework then tests the
captured picture against the trained data and classifies it accordingly. The results are displayed
on a user interface where the user can see what object has been detected.
The ESP32-CAM is an ideal camera module for the proposed framework due to its low power
consumption, high-quality image capture, and built-in Wi-Fi capability. The module uses its
built-in camera to capture pictures of the surroundings and sends them to the cloud, where the
trained data is stored and the Faster RCNN algorithm detects and classifies objects. The
trained data is a set of pre-classified pictures that the framework uses to compare with the
newly captured pictures.
The proposed framework incorporates two essential modules: deep learning and IoT. In the deep
learning module, various state-of-the-art imagenet models, custom models, and the Faster
RCNN algorithm are utilized for robust object detection and classification. These models
undergo rigorous training using the WasteNet dataset to enhance their accuracy and
effectiveness in identifying waste materials. On the other hand, the IoT module integrates sensor
technologies such as ultrasonic sensors and cameras with deep learning algorithms, enabling
seamless data acquisition and processing. This integration ensures that the AutoCleanAI system
can adapt to different environmental conditions and effectively navigate its surroundings for
optimal cleaning performance. By combining deep learning and IoT technologies, the proposed
framework achieves a synergistic effect, enhancing the system's capabilities and versatility in
real-world applications.
The Faster RCNN (Region-based Convolutional Neural Network) algorithm plays a crucial role
in object detection and classification. This algorithm operates by dividing the image into various
regions and proposing potential object locations within each region. These proposed regions,
known as region proposals, are then analyzed by a convolutional neural network (CNN) to
extract features and classify objects. The CNN effectively learns discriminative features from
the proposed regions, enabling accurate identification and classification of objects within the
image. By leveraging both region proposals and CNN-based feature extraction, the Faster
RCNN algorithm achieves impressive accuracy and efficiency in object detection tasks. In the
context of the proposed framework, the cam module captures images of the environment, which
are then processed by the Faster RCNN algorithm deployed in the cloud. This enables real-time
object detection and classification, allowing the AutoCleanAI to efficiently navigate its
surroundings and perform targeted cleaning actions.
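The matching of region proposals to objects that underpins Faster RCNN is conventionally scored with intersection-over-union (IoU). The sketch below shows that scoring step in isolation; it is a conceptual aid with hypothetical box coordinates, not the network itself.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    Faster R-CNN uses this score to decide which region proposals overlap
    a ground-truth object enough to be treated as positive examples."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the overlap rectangle (which may be empty)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A proposal that half-overlaps a ground-truth box scores 1/3
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Proposals whose IoU with a labeled object exceeds a threshold are passed to the CNN head for feature extraction and classification, which is the step the paragraph above describes.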
The Arduino controls the motors and pump. The motors are responsible for moving the robot
and avoiding obstacles using ultrasonic sensors. The pump is used for cleaning the surface. The
ultrasonic sensor is used for obstacle detection. The sensor sends out high-frequency sound
waves and detects the reflection of these sound waves to determine the distance to an object.
The Arduino then uses this information to control the motors and avoid obstacles. The motors
are responsible for the movement of the robot. The L298N driver module is used to control the
speed and direction of the motors. The motors are programmed to move forward, backward,
left, and right based on the input from the ultrasonic sensor.
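The distance arithmetic behind the ultrasonic sensor, and the steering decision derived from it, can be sketched as follows. The 20 cm threshold and the function names are illustrative assumptions rather than values taken from the actual firmware.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # roughly 343 m/s at room temperature

def echo_to_distance_cm(echo_time_us: float) -> float:
    """Convert an ultrasonic echo pulse width (microseconds) to a distance.
    The pulse covers the round trip, so the one-way distance is half."""
    return echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2

def choose_action(distance_cm: float, threshold_cm: float = 20.0) -> str:
    """Steer away from obstacles closer than the threshold, else keep moving."""
    return "turn" if distance_cm < threshold_cm else "forward"

# A 583 us echo corresponds to roughly 10 cm, which triggers an avoidance turn
d = echo_to_distance_cm(583)
action = choose_action(d)
```

The same computation runs on the Arduino in the real system; this Python version only makes the echo-time-to-decision pipeline explicit.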
Moving beyond the above applications, the system employs a vacuum suction mechanism to
efficiently remove debris like papers, adding a nuanced layer to its cleaning capabilities. This
integration further enhances the versatility and effectiveness of the proposed framework. The
cam module’s role extends beyond mere image capture; it serves as the linchpin for the entire
process, managing the seamless communication between hardware components and the cloud-
based analytical powerhouse.
In addition to object detection and classification, the proposed framework leverages cloud-based
data storage and processing capabilities for enhanced efficiency and scalability. The captured
images from the cam module are transmitted to the cloud infrastructure, where they are stored
securely. This cloud-based storage ensures that the system has access to a vast repository of
trained data for comparison and analysis. Moreover, the cloud serves as the computational
powerhouse for executing complex algorithms such as the Faster RCNN. By offloading
computational tasks to the cloud, the framework can achieve real-time processing of captured
images, enabling rapid decision-making and response in dynamic environments. Furthermore,
cloud-based storage enables seamless data sharing and collaboration, facilitating continuous
learning and improvement of the AutoCleanAI system over time.
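One minimal way to stage a captured frame for cloud upload is to base64-encode it into a JSON payload. The field names and robot identifier below are hypothetical placeholders, since the report does not specify the cloud API.

```python
import base64
import json

def build_upload_payload(image_bytes: bytes, robot_id: str) -> str:
    """Package a captured frame as a JSON string ready for an HTTP POST
    to a (hypothetical) cloud inference endpoint."""
    payload = {
        "robot_id": robot_id,
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)

# Round-trip check: decoding the payload recovers the original bytes
raw = b"\xff\xd8\xff\xe0fake-jpeg-bytes"
doc = json.loads(build_upload_payload(raw, "autoclean-01"))
recovered = base64.b64decode(doc["image_b64"])
```

Base64 keeps the binary image intact inside a text protocol, which is why it is a common choice when a camera module posts frames to an HTTP endpoint.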
To enhance user experience and facilitate seamless interaction with the AutoCleanAI system, a
user interface (UI) is integrated into the framework. The UI provides a visual representation of
the classification results, allowing users to monitor the detected objects in real-time. Through
the UI, users can access detailed information about the identified objects, including their
classifications. Additionally, the UI offers intuitive controls for initiating cleaning actions and
configuring system settings. This interactive interface empowers users to actively engage with
the AutoCleanAI system, providing valuable feedback and insights for system optimization and
refinement. Furthermore, the UI serves as a platform for communication between the system
and its operators, enabling efficient coordination and management of cleaning operations in
diverse environments. By prioritizing user-centric design and interaction, the proposed
framework ensures that AutoCleanAI remains accessible, adaptable, and user-friendly for a
wide range of applications and users.
Note: Cloud processing plays a pivotal role in the proposed algorithm for our prototype, offering scalability and
accessibility to our system's functionalities. Our understanding of cloud processing was honed through the
comprehensive coverage provided in the Cloud Computing course (CSEOPE028), empowering us to leverage cloud
resources effectively for our project's requirements.
Note: The Faster R-CNN algorithm serves as the cornerstone of our prototype's object detection capabilities,
offering high accuracy and efficiency. Our familiarity with this advanced algorithm was cultivated through the
in-depth exploration provided in the Deep Learning course (CSE18R396) and Machine Learning (CSE18R212),
equipping us with the knowledge to implement cutting-edge techniques in our project.
CHAPTER-V
HARDWARE COMPONENTS
Item                                              Quantity
Microcontrollers                                  1
Motor Drivers                                     2
Sensors                                           1
Camera                                            1
Motors                                            8
Pumps                                             1
Vacuum tubes, suction cups, hoses, and filters    1
Water Holding Tank                                1
Batteries & Charger                               3 & 1
Chassis and wheels                                1 chassis & 4 wheels
Cleaning tools & mops                             Required
Cables, Wires, and Connectors                     Required
Mounts and brackets                               Required
Fasteners (Screws, Nuts, Bolts)                   Required
Note: These hardware components are essential for the prototype's functionality, serving as integral building
blocks in its design. Our understanding of these components was enriched through the comprehensive coverage
provided in the Introduction to Internet of Things course (CSEOPE005) and Algorithms for Intelligent Systems
and Robotics (CSE18R292), laying a solid foundation for their practical application in real-world IoT projects.
The Arduino serves as the central processing unit of the AutoCleanAI, coordinating the
functionalities of various hardware components. It receives input from sensors, processes data,
and controls the operation of motors and pumps for effective cleaning actions. By executing
these instructions, it ensures precise navigation and seamless integration of cleaning functions.
The Cam module acts as the eyes of the AutoCleanAI, capturing images of the surroundings for
object detection and classification. Its built-in camera, low power consumption, and Wi-Fi
capability make it ideal for real-time image processing tasks. By transmitting images to the
cloud for analysis, it enables accurate identification of objects and efficient cleaning actions.
The ultrasonic sensor plays a pivotal role in obstacle detection, emitting high-frequency sound
waves and measuring their reflections to determine distances to objects. Integrated into the
AutoCleanAI system, it provides crucial spatial awareness, allowing the robot to navigate and
maneuver around obstacles safely. Its reliable performance ensures smooth operation in varied
environments.
The L298N motor driver module controls the speed and direction of the DC motors, facilitating
precise movement and navigation of the AutoCleanAI. By regulating the power supplied to the
motors, it ensures smooth operation and efficient cleaning actions. Its robust design and
compatibility with various motor types make it an essential component for driving the system.
The DC motors and DC pump power the cleaning tools and mops of the AutoCleanAI, enabling
it to perform scrubbing and vacuuming actions effectively. These motors drive the movement
of the robot and the rotation of cleaning attachments, while the pump facilitates the suction and
expulsion of liquid waste. Their combined functionality ensures thorough cleaning of surfaces
and efficient waste removal.
CHAPTER-VI
SYSTEM DESIGN
The design constraints that influence the development and implementation of the proposed
AutoCleanAI system encompass various factors that must be considered to ensure the system's
functionality, reliability, and usability in real-world environments. One significant design
constraint is the hardware limitation, which dictates the selection and integration of hardware
components such as microcontrollers, sensors, motors, and pumps. The chosen hardware must
meet specific criteria, including compatibility, power consumption, size, and cost, to ensure
optimal performance and affordability of the system. Additionally, environmental constraints,
such as varying surface conditions, cluttered spaces, and obstacle-rich environments, pose
challenges to the system's navigation and cleaning capabilities. The AutoCleanAI must be
designed to adapt to these environmental conditions and navigate effectively while avoiding
obstacles and hazards. Moreover, there are software constraints related to algorithm complexity,
processing speed, and memory usage, which influence the selection and implementation of
object detection and classification algorithms. The system's software architecture must be
optimized to achieve real-time processing and decision-making while minimizing
computational resources. Furthermore, user interaction constraints, such as the design of the
user interface and the accessibility of controls, impact the system's usability and user experience.
The user interface must be intuitive, informative, and responsive, allowing users to monitor
cleaning progress, adjust settings, and troubleshoot issues effectively. By addressing these
design constraints comprehensively, the proposed AutoCleanAI system can overcome
challenges and deliver efficient and reliable cleaning performance in diverse environments. The
architectural design of the proposed autonomous cleaning system aligns with IEEE P2413 -
IEEE Standard for an Architectural Framework for the Internet of Things (IoT), ensuring that it
follows a structured approach to IoT integration and compatibility, thereby enhancing its
scalability and adaptability across diverse environments.
CHAPTER-VII
This section outlines the design, construction, and testing process as well as the methodology
utilized for the autonomous cleaning bot. An Arduino microcontroller, an ESP32 CAM
module, an ultrasonic sensor, DC motors, and a DC pump are among the hardware elements
that were employed. The Arduino IDE is the software used to program the Arduino
microcontroller and cam module.
[Work plan: Preparing the basic design → Creating the DL and driver modules → Assembling
the sensors and micro-controller → Uploading the modules to the micro-controller → Testing
our autonomous cleaning bot on different surfaces]
The work plan delineates the step-by-step process essential for the development of the
autonomous cleaning bot. Initially, the project commences with the foundational stage of
preparing the basic design, outlining the system's architecture and functionalities. Following
this, the focus shifts towards crafting the deep learning (DL) and driver modules, crucial
components enabling object recognition and motor control. Subsequently, the assembly of
sensors and the microcontroller constitutes a pivotal phase, merging hardware elements and
establishing communication pathways.
[Flow diagram: Input, Waste Detection, Navigation, Object Avoidance, Cleaning Actions,
User Interface, Output]
The aforementioned diagram shows the flow of data and control signals within the system. It
provides a visual representation of the program's logic and how different functions are executed
based on certain conditions or events. The flow diagram helps to understand the flow of control
and data between the various components of the system.
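Read as pseudocode, the control flow described above amounts to a simple sense-decide-act loop. The sketch below is a pure-Python abstraction of that logic; the threshold, labels, and priority order are illustrative assumptions, not the Arduino program itself.

```python
from typing import Optional

def control_step(distance_cm: float, detected_object: Optional[str]) -> str:
    """One pass of the sense-decide-act loop: obstacle avoidance takes
    priority, then detected waste triggers cleaning, otherwise keep roaming."""
    if distance_cm < 20.0:           # object avoidance
        return "turn"
    if detected_object is not None:  # waste reported by the cloud model
        return "clean"
    return "navigate"                # default: continue exploring

# Example passes through the loop
actions = [
    control_step(10.0, None),      # obstacle ahead
    control_step(80.0, "Paper"),   # clear path, waste found
    control_step(80.0, None),      # nothing to do but roam
]
```

Prioritizing avoidance over cleaning reflects the flow in the diagram: the robot must never drive into an obstacle even when waste is visible.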
7.1 Methodology:
The methodology employed in this study involves the integration of two main modules:
a. Deep Learning
b. IoT
The deep learning module employs ImageNet models such as VGG-16, ResNet, MobileNet,
and EfficientNetB0, together with the Faster RCNN algorithm. These algorithms were
evaluated for their effectiveness in object detection and classification, crucial for the
autonomous cleaning robot's operation.
The WasteNet dataset is a collection of 40 JPG images divided into four subcategories: 'Coffee',
'Onion Peels', 'Red Juice', and 'Paper'. Each subcategory contains 10 images, all of which depict
different waste materials.
The purpose of this dataset is to train and test machine learning and deep learning models that
aim to identify or classify different types of waste materials. Each image in the dataset is
labeled with the subcategory it belongs to, which serves as the ground truth for training and
evaluating such models. By providing a set of labeled images, the dataset can be used to
evaluate the performance of these models and to improve their accuracy.
The WasteNet dataset is used for evaluating both the ImageNet models and the Faster RCNN
algorithm, enabling a comprehensive assessment of their performance in waste material
classification and object detection tasks.
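The WasteNet labeling described above reduces to a class-to-index mapping used as ground truth during training. The sketch assumes the four subcategories and 10 images per class stated in the text; the index order itself is an arbitrary illustrative choice.

```python
CLASSES = ["Coffee", "Onion Peels", "Red Juice", "Paper"]
IMAGES_PER_CLASS = 10

def label_index(name: str) -> int:
    """Integer ground-truth label for a WasteNet subcategory name."""
    return CLASSES.index(name)

total_images = len(CLASSES) * IMAGES_PER_CLASS  # the 40 JPGs in WasteNet
paper_label = label_index("Paper")
```

Frameworks such as Keras and PyTorch derive exactly this kind of mapping from the folder names of an image dataset before training a classifier on it.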
The VGGNET experiment involves using the VGG-16 convolutional neural network
architecture to classify images, and the ResNet experiment involves using the ResNet
architecture for the same task. The goal of these experiments is to evaluate the performance
of VGG-16 and ResNet on this task. VGG-16 and
ResNet are popular convolutional neural network architectures that have been shown to achieve
state-of-the-art performance on several image classification tasks. The main problem addressed
by this experiment is image classification, which is a fundamental problem in computer vision.
The ability to automatically classify images based on their content is important in many real-
world applications, such as object recognition, face recognition, and autonomous driving.
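The metric used to compare VGG-16 and ResNet here, classification accuracy, reduces to counting agreements between predicted and true labels. A minimal sketch, with hypothetical label lists:

```python
def accuracy(predictions, ground_truth):
    """Fraction of test images whose predicted class matches the true class."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction and label lists must be the same length")
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Three of four hypothetical test images classified correctly
acc = accuracy([0, 1, 2, 2], [0, 1, 2, 3])
```

The accuracies reported later in this chapter for the ImageNet and custom models are computed in exactly this way over the WasteNet test split.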
The MobileNet experiment involves training a convolutional neural network (CNN) called
MobileNet on a specific dataset for image classification tasks. The main problem addressed by
this experiment is the need for deep learning models that can run efficiently on mobile devices
with limited computational resources. Traditional CNN models, such as VGGNet and ResNet,
are computationally intensive and require high-performance hardware, making them difficult
to deploy on mobile devices. MobileNet is designed to address this issue by reducing the computational complexity of the model while maintaining high accuracy, which makes it well-suited for deployment on mobile devices.
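The efficiency claim above can be made concrete: a depthwise separable convolution, the building block MobileNet uses, replaces one standard convolution with a depthwise stage and a pointwise stage. The sketch below compares multiply-add counts for the two; the layer dimensions are illustrative, and the cost formulas follow the standard MobileNet analysis:

```python
def standard_conv_cost(k, m, n, f):
    """Multiply-adds for a standard k x k convolution: k*k*m*n*f*f
    (k: kernel size, m: input channels, n: output channels, f: feature-map size)."""
    return k * k * m * n * f * f

def separable_conv_cost(k, m, n, f):
    """Depthwise (k*k*m*f*f) plus pointwise 1x1 (m*n*f*f) multiply-adds."""
    return k * k * m * f * f + m * n * f * f

# Illustrative layer: 3x3 kernel, 64 -> 128 channels, 56x56 feature map.
std = standard_conv_cost(3, 64, 128, 56)
sep = separable_conv_cost(3, 64, 128, 56)
ratio = sep / std  # algebraically equals 1/n + 1/k^2
print(f"standard: {std:,}  separable: {sep:,}  ratio: {ratio:.3f}")
```

For this layer the separable factorization costs roughly 12% of the standard convolution, which is why MobileNet fits comfortably on resource-constrained hardware.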
The MobileNet experiment involves training the MobileNet model on a specific dataset for a
particular image classification task. The dataset can be selected based on the specific
application for which the model will be used. For example, if the model will be used to classify
images of animals, the dataset might include images of different animals. The goal is to train
the model to accurately classify new images that it has not seen before.
Note: The Introduction to Python Programming course (CSE18R254) has provided us with fundamental skills in
programming with Python, laying a solid foundation for our endeavors in software development and data
analysis. Harnessing the principles learned in this course enables us to work on robust and efficient algorithms,
further enhancing the capabilities of our prototype through effective code implementation and data manipulation
techniques.
These custom models include a Simple CNN; CNN layers with augmented layers and dropout layers; CNN layers with dropout layers; CNN layers with augmented layers; CNN layers with two different augmented layers; and CNN layers with data-augmentation layers and additional dense layers.
The results, reported in terms of classification accuracy on a test dataset, showcased the
outcomes of experiments that explored different combinations of activation functions and
optimization algorithms across varying numbers of training epochs.
In Figure 4, the analysis of the Simple CNN model demonstrates its ability to effectively
identify and categorize various waste materials, thereby enhancing the efficiency of cleaning
operations.
Figure 5 illustrates the performance of the CNN Layers with Augmented and Dropout Layers model, which accurately identifies and categorizes waste materials while using dropout layers to address overfitting. Similarly, Figure 6 highlights the effectiveness of the CNN Layers with Dropout Layers model in waste material recognition and classification, with dropout layers incorporated to improve generalization.
Figure 7 showcases the CNN Layers with Augmented Layers model's proficiency in accurately
detecting waste materials, utilizing augmentation techniques to enhance feature representation.
Similarly, Figure 8 demonstrates the CNN Layers with Two Different Augmented Layers model's effectiveness in waste material identification and categorization, leveraging multiple augmentation layers for improved performance.
Fig. 9. CNN layers with data augmented layers and more dense layers
In Figure 9, the examination of the CNN Layers with Data Augmented Layers and More Dense
Layers model highlights its effectiveness in waste recognition and classification, leveraging
additional dense layers to enhance feature extraction and representation.
The Deep Learning module focused on training and evaluating these ImageNet and custom
models on the WasteNet dataset to classify and recognize different waste materials effectively.
After comprehensive experimentation, the Faster RCNN algorithm was selected for further
evaluation due to its superior performance in object detection and classification tasks. Faster
RCNN demonstrated remarkable accuracy in identifying objects, making it ideal for enabling
the autonomous cleaning robot to detect and classify waste materials for efficient cleaning
operations.
Table 2 provides a detailed analysis of the accuracy scores achieved through the
implementation of diverse custom models and the deep learning Faster RCNN algorithm. These
custom models encompass variations in activation functions paired with different optimizers,
highlighting the comprehensive exploration of optimization strategies within the deep learning
framework.
Model | Model name                                     | Activation function | Optimizer | Accuracy | Validation accuracy
M1    | Simple CNN                                     | ReLU                | RMSProp   | 97.12    | 97
M1    | Simple CNN                                     | ReLU                | Adam      | 99.15    | 99.28
M1    | Simple CNN                                     | ELU                 | Adamax    | 98.43    | 64.49
M1    | Simple CNN                                     | ELU                 | N-Adam    | 99.76    | 99
M2    | CNN Layers with Augmented and Dropout Layers   | ReLU                | RMSProp   | 89.87    | 86.87
M2    | CNN Layers with Augmented and Dropout Layers   | ReLU                | Adam      | 97.58    | 98.55
M2    | CNN Layers with Augmented and Dropout Layers   | ELU                 | Adamax    | 94.3     | 98.55
M2    | CNN Layers with Augmented and Dropout Layers   | ReLU                | N-Adam    | 96.74    | 99.64
M3    | CNN Layers with Dropout Layers                 | ReLU                | RMSProp   | 97.5     | 95.12
M3    | CNN Layers with Dropout Layers                 | ReLU                | Adam      | 94.81    | 97.82
M3    | CNN Layers with Dropout Layers                 | SeLU                | Adamax    | 97.06    | 95.68
M3    | CNN Layers with Dropout Layers                 | ReLU                | N-Adam    | 96.14    | 98.55
M4    | CNN Layers with Augmented Layers               | ELU                 | RMSProp   | 92.06    | 91.74
M4    | CNN Layers with Augmented Layers               | ReLU                | Adam      | 96.01    | 94.57
M4    | CNN Layers with Augmented Layers               | ELU                 | Adamax    | 96.74    | 96.74
M4    | CNN Layers with Augmented Layers               | ReLU                | N-Adam    | 97.71    | 100
M5    | CNN Layers with Two Different Augmented Layers | ReLU                | RMSProp   | 91.81    | 96.44
M5    | CNN Layers with Two Different Augmented Layers | ReLU                | Adam      | 96.01    | 98.55
M5    | CNN Layers with Two Different Augmented Layers | ELU                 | Adamax    | 90.46    | 93.84
M5    | CNN Layers with Two Different Augmented Layers | ELU                 | N-Adam    | 97.46    | 95.29
7.4 How the Faster RCNN algorithm works in sync with hardware components:
The technique for the proposed system entails a structured approach to create an autonomous
cleaning bot equipped with object detection capabilities using the cam module. The process
begins with assembling the necessary hardware components, including the ESP32 CAM
module, Arduino microcontroller, L298N motor driver module, ultrasonic sensor, DC Motors,
DC Pump, and a vacuum-suction mechanism. Electrical connections are then established, and the cam module is configured for image capture. Captured images are transmitted to a cloud-based server for processing, where a pre-trained deep learning model, such as Faster R-CNN, detects and classifies objects based on trained data stored in the cloud. The Arduino
microcontroller interprets the detection results and controls the bot's movements, using
ultrasonic sensors for obstacle avoidance. Additionally, it activates the DC pump for mopping and the vacuum mechanism to collect debris efficiently. A user-friendly interface
displays results and allows customization.
Table 3 presents the results analysis of the implementation of different ImageNet models and
the deep learning Faster RCNN algorithm. It showcases the accuracy percentages achieved
during training for each method. After experimenting with different deep learning algorithms, the table below summarizes the results, providing insight into how accurately each model and algorithm detects and classifies objects.
In the IoT module, the focus was on understanding the functionality of hardware components
and explaining the architectures of the proposed system. This involved a detailed examination
of microcontrollers, motor drivers, sensors, cameras, and other components essential for the
robot's operation. Additionally, the communication protocols and data exchange mechanisms
between these hardware components were studied to ensure seamless integration and operation
within the proposed system architecture. Through this comprehensive exploration, the IoT
module provided insights into the hardware infrastructure necessary to support the autonomous
cleaning robot's functionality.
The circuit diagram of the autonomous cleaning bot is shown in Figure 10 & Figure 11.
The circuit diagram shows the electrical connections between the various components of the
system, such as the ESP32 CAM module, Arduino microcontroller, L298N motor driver
module, ultrasonic sensor, DC Motors, and DC Pump. It is a detailed schematic that helps to
understand the flow of electricity throughout the system and how the various components are
interconnected.
The cam module is connected to the Arduino microcontroller through the Serial Peripheral
Interface (SPI) pins. The ultrasonic sensor is connected to the Arduino microcontroller through
the digital pins. The DC motors are connected to the L298N motor driver module, which is
controlled by the Arduino microcontroller through the PWM pins. The DC pump is connected
directly to the Arduino microcontroller through the digital pins.
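For the obstacle-avoidance side of this wiring, ultrasonic rangers report distance as an echo pulse width; the sketch below shows the usual pulse-to-distance conversion in Python. The HC-SR04-style timing model and the 15 cm stopping threshold are assumptions, since the report does not specify the exact sensor model or threshold:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # about 343 m/s at 20 degrees Celsius

def echo_to_distance_cm(pulse_width_us: float) -> float:
    """Convert an echo pulse width (microseconds) to distance in cm.
    Divide by 2 because the pulse covers the round trip to the obstacle."""
    return pulse_width_us * SPEED_OF_SOUND_CM_PER_US / 2

def obstacle_ahead(pulse_width_us: float, threshold_cm: float = 15.0) -> bool:
    """True when the measured obstacle is closer than the stopping threshold."""
    return echo_to_distance_cm(pulse_width_us) < threshold_cm

print(echo_to_distance_cm(580))  # roughly 9.9 cm, close enough to trigger avoidance
```

On the actual bot, the equivalent logic runs on the Arduino, which reads the pulse on a digital pin and steers the motors away when the threshold is crossed.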
The block diagram for the autonomous cleaning bot is shown in Figure 12.
The above diagram, on the other hand, provides a more high-level view of the system, showing
the major components and their relationships to each other. It helps to provide an overview of
the system's architecture and how the various components work together to achieve the desired
functionality.
The cam module captured images of the cleaning surface and sent them to the cloud, where the
trained data was stored. The Faster R-CNN algorithm tested the captured data with the trained
data and sent the results back to the microcontroller. The microcontroller then controlled the DC
Motors and DC Pump to clean the surface based on the detected objects.
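The decision step in this loop, mapping a detected waste class to an actuator, can be sketched as follows. The class names are the WasteNet subcategories; the mapping of liquids to the mop and pump and solids to the vacuum mirrors the scrubbing and vacuuming behavior described in the results, and the action names themselves are hypothetical:

```python
# Liquid/stain classes are scrubbed with the mop (DC pump);
# solid debris classes are collected with the vacuum mechanism.
SCRUB_CLASSES = {"Coffee", "Red Juice"}
VACUUM_CLASSES = {"Onion Peels", "Paper"}

def choose_action(detected_class: str) -> str:
    """Map a Faster R-CNN class label to the actuator the bot should engage."""
    if detected_class in SCRUB_CLASSES:
        return "activate_pump_and_mop"
    if detected_class in VACUUM_CLASSES:
        return "activate_vacuum"
    return "continue_navigation"  # unknown object: avoid it and keep moving

print(choose_action("Red Juice"))  # activate_pump_and_mop
print(choose_action("Paper"))      # activate_vacuum
```

In the deployed system this dispatch happens on the Arduino after the cloud returns the detection result, so keeping it to a simple lookup keeps the microcontroller-side logic cheap.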
To evaluate the performance and capabilities of the autonomous cleaning bot, we conducted
several tests. First, we tested the accuracy of the object detection algorithm by using different
types of objects and obstacles in various environments. We also tested the cleaning efficiency
of the system by measuring the time and amount of water used to clean different types of
surfaces with varying degrees of debris.
In Figure 13, the scrubbing results showcase the system's efficacy in removing coffee stains and
red juice spills through automated mop control. Meanwhile, Figure 14 illustrates the vacuuming
results, demonstrating the system's ability to effectively clear debris such as onion peels and
paper, ensuring cleanliness and maintenance of the environment.
The primary focus of the evaluation lies in the framework's object detection, facilitated by the
ESP32 camera and the Faster RCNN algorithm. By examining the accuracy and reliability of
object identification and classification, AutoCleanAI's proficiency in discerning various objects
within its environment is thoroughly assessed.
Finally, the evaluation assesses the overall cleaning performance of the AutoCleanAI. By
measuring its ability to detect and remove debris, including papers and other common waste
items, the framework's efficacy in fulfilling its cleaning objectives is evaluated. This analysis
provides insights into AutoCleanAI's practical utility and effectiveness in real-world cleaning
scenarios.
7.9 Prototype:
The prototype encompasses several essential hardware components. This compact and robust
prototype is designed to demonstrate the functionality and feasibility of our AutoCleanAI.
Through careful integration and testing, the prototype showcases seamless interaction and
efficient cleaning performance. User interaction is facilitated through a user-friendly interface
accessible via the website, allowing users to adjust settings, initiate cleaning routines, and check
progress in real-time. The prototype's design prioritizes user accessibility and ease of use,
ensuring an efficient cleaning experience.
Figure 15 illustrates the prototype design, showcasing the intricate integration of the cam
module within the autonomous cleaning bot, along with the Arduino microcontroller and
accompanying hardware components. Meanwhile, Figure 16 contains the demo video of the
prototype, providing a visual representation of its operational capabilities.
Table 4 provides a comparative overview of the key features of various autonomous cleaning
devices, including AutoCleanAI, Roomba, Braava, and Scooba. Each device performs different cleaning tasks, ranging from vacuuming to scrubbing wet areas. AutoCleanAI distinguishes
itself by combining scrubbing wet areas with vacuuming waste particles. Furthermore,
AutoCleanAI and others feature object detection capabilities, enhancing their adaptability and
efficiency. All devices offer some degree of autonomous control, with AutoCleanAI operating
fully autonomously. The technologies used vary across the devices, with AutoCleanAI utilizing
UR Sensor, ESP32 CAM, and Faster RCNN. In terms of pricing, AutoCleanAI provides a cost-
effective solution compared to others while offering comparable functionality.
Note: Our understanding of evaluation methodologies and analytics techniques was cultivated through the
Predictive Analytics course (CSE18R257). Leveraging this knowledge, we can systematically assess to ensure
that our prototype algorithm meets the desired objectives and performs optimally in real-world scenarios.
CHAPTER – VIII
Looking ahead, the future scope for AutoCleanAI holds immense potential for further
advancement and refinement. While the system excels in various environments, challenges
remain, particularly regarding surface compatibility on substrates like sand and roads. To
address this limitation, future iterations will focus on independent surface adaptability and
enhanced autonomy, allowing AutoCleanAI to operate more effectively and efficiently across
diverse terrains. Additionally, ongoing efforts will be directed toward expanding the system's
training dataset to include a broader spectrum of waste particles, enabling comprehensive waste
detection and classification. By enhancing autonomy, increasing speed, and broadening the
range of detectable waste particles, AutoCleanAI will evolve into a highly proficient cleaning
agent, improving cleaning practices and setting new standards for cleanliness and efficiency in
various sectors.
REFERENCES
3. Parth Vibhandik and Zaid Khan, “Autonomous Vacuum Cleaning Robot using Sonar
Navigation and SLAM,” International Research Journal of Engineering and
Technology (IRJET), July 2021, vol. 8, Issue 7, pp. 1905.
4. B. Sonia and P. Ganesh, “Design and Implementation of Floor Cleaning Robot Using
IOT,” International Journal of Creative Research Thoughts (IJCRT), January 2021, vol.
9, Issue 1, pp. 246-249.
6. Ramalingam, B., Le, A.V., Lin, Z. et al. “Optimal selective floor cleaning using deep
learning algorithms and reconfigurable robot hTetro”. Sci Rep 12, 15938 (2022).
7. Canedo, Daniel, Pedro Fonseca, Petia Georgieva, and António J. R. Neves. 2021. “A
Deep Learning-Based Dirt Detection Computer Vision System for Floor-Cleaning
Robots with Improved Data Collection”, Institute of Electronics and Informatics
Engineering of Aveiro/Department of Electronics, Telecommunications and
Informatics (IEETA/DETI) , Technologies 9, no. 4: 94.
8. Patil, Swati & Yelmar, S & Yedekar, S & Mhatre, S & Pawashe, V. “Autonomous
Robotic Vacuum Cleaner”, International Research Journal of Innovations in
Engineering and Technology (IRJIET), 2021, 142-146.
9. Uman Khalid, Muhammad Faizan Baloch, Haseeb Haider, Muhammad Usman Sardar,
Muhammad Faisal Khan, Abdul Basit Zia and Tahseen Amin Khan Qasuria, “Smart
Floor Cleaning Robot (CLEAR)”, IEEE Standards University E-Magazine, March
2023, vol.5, Issue 3, pp. 145-151.
10. R. Raja Subramanian, V. Vasudevan, “A deep genetic algorithm for human activity
recognition leveraging fog computing frameworks”, Journal of Visual Communication
and Image Representation, Volume 77, 2021, 103132, ISSN 1047-3203,
https://doi.org/10.1016/j.jvcir.2021.103132.
11. Raja Subramanian R., Vasudevan V. (2021) HARfog: An Ensemble Deep Learning
Model for Activity Recognition Leveraging IoT and Fog Architectures. In: Gunjan
V.K., Zurada J.M. (eds) Modern Approaches in Machine Learning and Cognitive
Science: A Walkthrough. Studies in Computational Intelligence, vol 956. Springer,
Cham. https://doi.org/10.1007/978-3-030-68291-0_11.
12. Burman, Vibha & Kumar, Ravinder. “IoT-Enabled Automatic Floor Cleaning Robot”,
Recent Advances in Mechanical Engineering, 2021.
21. Khaleda Sh. Rejab and Sara Mazin Naji Abd-Al Hussain, “Design and Implementation
of Intelligent Mobile Robot based on Microcontroller by using Three Ultrasonic
Sensors”, International Journal of Current Engineering and Technology, November
2017, vol. 7, No.6, pp. 1-6.
22. Anwer Sabah Ahmed, Heyam A. Marzog and Laith Ali Abdul-Rahaim, “Design
and Implement of Robotic Arm and control of moving via IoT with Arduino ESP32”,
International Journal of Electrical and Computer Engineering (IJECE), October 2021,
vol. 9, No.4, pp. 101-110.
23. Md. Kamruzzaman Russel, Muhibul Haque Bhuyan, “Microcontroller Based DC Motor
Speed Control Using PWM Technique”, International Conference on Electrical,
Computer and Telecommunication Engineering, December 2012, pp. -522.
24. Wahyu Rahmaniar and Ari Hernawan, “Real-Time Human Detection Using Deep
Learning on Embedded Platforms: A Review”, Journal of Robotics and Control (JRC),
November 2021, vol. 2, Issue 6, pp. 462-468.
Abstract— The autonomous cleaning bot is a state-of-the-art robotic system that utilizes cutting-edge technology to automate the cleaning process. This paper presents the design and implementation of an autonomous cleaning bot that efficiently and effectively cleans a variety of indoor spaces, from homes to commercial buildings, using a combination of sensors, algorithms, and cleaning tools. The bot is capable of navigating through complex environments and detecting obstacles in its path, making it an ideal solution for areas that require regular cleaning. It also features a user-friendly interface that allows for easy customization of cleaning schedules and zones, as well as real-time monitoring of the bot's progress. The experimentation and results demonstrate the effectiveness of the system in autonomously cleaning and detecting objects. The bot successfully sends captured data to the cloud for analysis, and the results accurately indicate the type of object detected. With its rechargeable battery, modular design, and easy maintenance, the autonomous cleaning bot is a cost-effective, efficient, and eco-friendly solution that enhances hygiene and safety standards while reducing the need for human labor. The system's design and implementation are discussed in detail, including the integration of all hardware components. The proposed system has demonstrated its effectiveness in cleaning and detecting objects autonomously. The paper concludes with potential future improvements and research directions for the autonomous cleaning bot.

Keywords— object detection, autonomous cleaning, microcontroller, sensor fusion.

I. INTRODUCTION

Autonomous cleaning bots have become increasingly popular due to their ability to efficiently clean and maintain large areas without human intervention. They can be used in various settings, including homes, offices, hospitals, and other commercial buildings. In this proposed system, the main hardware components include an ESP32 CAM module, an Arduino microcontroller, an ultrasonic sensor, DC motors, and a DC pump. The ESP32 CAM module serves as the main image processing unit, while the Arduino microcontroller is the main control unit for the autonomous cleaning bot, controlling the DC motors and DC pump. The ultrasonic sensor is used to detect objects and obstacles in the bot's path.

One of the main use cases of this proposed system is in commercial and industrial cleaning applications, where a large area needs to be maintained efficiently and without human intervention. Another potential use case is in home cleaning systems, where the bot can be used to clean floors, carpets, and other surfaces. The applications of an autonomous cleaning bot are numerous: it can help reduce the workload of cleaning staff, increase productivity, and reduce the use of chemicals and water. Additionally, it can help maintain a clean and healthy environment by removing dirt, dust, and allergens.

However, one of the main challenges in implementing this system is the need for robust image processing and object recognition algorithms, as well as the ability to navigate through different environments without damaging the bot or the objects in its path. Overall, the proposed autonomous cleaning bot system offers a promising solution for automated cleaning and maintenance tasks in a variety of settings, but further research and development are needed to improve its functionality and usability in real-world applications.

The rest of the paper is structured as follows: Section 2 provides a comprehensive literature survey on deep learning models, focusing specifically on the Faster R-CNN algorithm used in the proposed system. The proposed system is described in Section 3, along with specifics on how the ESP32 CAM module is combined with other hardware elements to support object recognition and image transmission to the cloud. Section 4 outlines the experimental setup and methodology employed to evaluate the performance of the system. Following the results, Section 5 discusses the quantitative and qualitative findings and then highlights the contributions of the authors. Finally, the authors address the conclusions, limitations, and future potential of the suggested system.
[Acceptance notification (via CMT), IESIA 2024. Title: "Auto Clean AI: A Deep Learning-Enabled Autonomous Surface Cleaning Bot Integrated with IoT Technology". Recommendation: Accept. All accepted and presented papers will be published in the Springer book series "Studies in Autonomic, Data-driven and …". Registration must be completed before the deadline, or the paper will be deemed withdrawn. The conference is hybrid (both online and physical presentation modes are available); the advance program will appear on the conference website, https://iesia.smartsociety.org/.]
[NPTEL certificate: V S N S Yashwanth Kommuri, Cloud Computing, Jul-Oct 2023 (12-week course); consolidated score 79 (assignments 25/25, exam 53.57/75).]
This is to certify that the project work entitled “AutoCleanAI: A Deep Learning-Enabled
Autonomous Surface Cleaning Bot Integrated with IoT Technology” categorized as internal
project done by V S N S YASHWANTH KOMMURI, G N V RAJARAM of the Computer
Science Department under the guidance of Dr. R. RAJA SUBRAMANIAN during the even semester of the academic year 2023 - 2024 is as per the quality guidelines specified by IQAC.
Quality Grade