AUTOCLEANAI: A DEEP LEARNING-ENABLED AUTONOMOUS


SURFACE CLEANING BOT INTEGRATED WITH IOT
TECHNOLOGY

CAPSTONE PROJECT (PHASE 2) REPORT

Submitted by

V S N S Yashwanth Kommuri - 9920004502


G N V Rajaram - 9920004457

in partial fulfillment for the award of the degree


of
BACHELOR OF TECHNOLOGY
IN

COMPUTER SCIENCE AND ENGINEERING

SCHOOL OF COMPUTING
COMPUTER SCIENCE AND ENGINEERING
KALASALINGAM ACADEMY OF RESEARCH
AND EDUCATION
KRISHNANKOIL 626 126

MAY 2024

DECLARATION

We affirm that the project work titled “AUTOCLEANAI: A DEEP LEARNING-ENABLED


AUTONOMOUS SURFACE CLEANING BOT INTEGRATED WITH IOT TECHNOLOGY”
being submitted in partial fulfillment for the award of the degree of Bachelor of Technology in
Computer Science and Engineering is the original work carried out by us. It has not formed part of
any other project work submitted for the award of any degree or diploma, either in this or any other
University.

V S N S Yashwanth Kommuri
9920004502

G N V Rajaram
9920004457

This is to certify that the above statement made by the candidates is correct to the best of my
knowledge.
Date:

Signature of supervisor

Dr. R. Raja Subramanian


Associate Professor
Department of Computer Science and Engineering

BONAFIDE CERTIFICATE

Certified that this project report “AUTOCLEANAI: A DEEP LEARNING-ENABLED


AUTONOMOUS SURFACE CLEANING BOT INTEGRATED WITH IOT TECHNOLOGY”
is the bonafide work of “V S N S YASHWANTH KOMMURI (9920004502), G N V RAJARAM
(9920004457)” who carried out the project work under my supervision.

Dr. R. Raja Subramanian
SUPERVISOR
Associate Professor
Computer Science and Engineering
Kalasalingam Academy of Research and Education
Krishnankoil 626126
Virudhunagar District.

Dr. N. Suresh Kumar
HEAD OF THE DEPARTMENT
Professor/Head
Computer Science and Engineering
Kalasalingam Academy of Research and Education
Krishnankoil 626126
Virudhunagar District.

Submitted for the Project Viva-voce examination held on

Internal Examiner External Examiner



ACKNOWLEDGEMENT

First and foremost, we thank the ‘Supreme Power’ for the immense grace showered on us, which
enabled us to do this project. We take this opportunity to express our sincere thanks to the late
“Kalvivallal” Thiru T. KALASALINGAM, Chairman, Kalasalingam Group of Institutions,
“Illayavallal” Dr. K. SRIDHARAN, Ph.D., Chancellor, and Dr. S. SHASI ANAND, Ph.D.,
Vice President, who are the guiding lights for all the activities in our university.

We thank our Vice Chancellor Dr. S. NARAYANAN, Ph.D., for guiding every one of us
and infusing us with the strength and enthusiasm to work successfully.

We wish to express our sincere thanks to our respected Head of the Department, Dr. N.
SURESH KUMAR, whose moral support encouraged us to progress through our project work
successfully.

We offer our sincerest gratitude to our Project Supervisor, Dr. R. RAJA SUBRAMANIAN,
for his patience, motivation, enthusiasm, and immense knowledge.

We are extremely grateful to our Overall Project Coordinator, Dr. S. Ariffa Begum, for her
constant encouragement in the completion of the Capstone Project.

Finally, we thank all our Parents, Faculty, Non-Teaching Staff, and our friends for their
moral support.

SCHOOL OF COMPUTING
COMPUTER SCIENCE AND ENGINEERING

PROJECT SUMMARY

Project Title: AutoCleanAI: A Deep Learning-Enabled Autonomous Surface Cleaning Bot Integrated with IoT Technology

Project Team Members (Name with Register No): V S N S YASHWANTH KOMMURI - 9920004502, G N V RAJARAM - 9920004457

Guide Name/Designation: Dr. R. RAJA SUBRAMANIAN, Associate Professor, Department of Computer Science and Engineering

Program Concentration Area: Intelligent Systems

Technical Requirements: Machine Learning, Deep Learning, Embedded C.

Engineering standards and realistic constraints in these areas:

Area: Environmental (✓)
AutoCleanAI is designed to work on tiles, wood, and carpets, but it has limited adaptability to
diverse floor surfaces and has difficulty navigating complex room layouts and surfaces such as
sand and roads, hindering its overall cleaning performance and efficiency.

Area: Sustainability (✓)
Autonomous cleaning robots encounter sustainability challenges because they use materials that
harm the environment and contribute to waste buildup. These challenges make it difficult to
achieve long-term sustainability goals, emphasizing the importance of finding environmentally
friendly alternatives and implementing strategies to reduce waste in the design and production
of robots.

REALISTIC CONSTRAINTS:
Environmental:

The proposed system navigates environmental constraints by prioritizing efficiency and


minimizing ecological impact. It addresses challenges such as limited adaptability to diverse
floor surfaces and obstacles, aiming to optimize cleaning performance while minimizing
resource consumption. By leveraging innovative technologies and design strategies, the system
strives to overcome environmental hurdles, ensuring effective operation in various settings.
Through proactive measures and continuous improvement, the proposed system aims to reduce
environmental constraints and promote eco-friendly cleaning practices.

Sustainability:

The proposed system addresses sustainability challenges encountered by autonomous cleaning


robots by prioritizing environmentally friendly materials and waste reduction strategies.
Through careful selection of materials, the system minimizes environmental harm and reduces
waste accumulation. Additionally, by incorporating energy-efficient components and
optimizing the system, the framework aims to decrease energy consumption and reliance on
resources. Moreover, the system's modular design facilitates easy repair and component
replacement, extending its lifespan and reducing the need for frequent replacements. By
implementing these sustainability-focused measures, the proposed system strives to contribute
positively to environmental conservation efforts and support long-term sustainability goals in
the field of autonomous cleaning robotics.

Engineering Standards:

IEEE P2413 - IEEE Standard for an Architectural Framework for the Internet of Things (IoT)
establishes guidelines and definitions for the architectural framework of IoT systems,
encompassing various aspects such as collaboration, scalability, security, and device
management. This standard provides a structured approach to designing IoT systems, ensuring
compatibility and seamless integration across diverse IoT environments. In accordance with
IEEE P2413, the proposed system embraces an architectural framework that enables efficient
communication, data exchange, and interconnection among IoT devices and platforms. By
adhering to these standards, the proposed system ensures robustness, flexibility, and scalability
in IoT deployment, facilitating seamless interaction and integration in IoT ecosystems.

ABSTRACT
In the realm of modern cleaning solutions, the emergence of autonomous cleaning robots has
changed household and commercial maintenance. This report presents the design and
implementation of the proposed approach, an innovative autonomous cleaning robot equipped
with state-of-the-art sensors and intelligent algorithms. The system's scope encompasses a wide
range of environments, addressing the diverse cleaning needs of homes, offices, and public
spaces. Challenges such as adaptability to different floor surfaces and navigation through
complex layouts and obstacles are addressed through the integration of advanced hardware and
software components. In our proposed system, we use the combined strengths of deep learning
algorithms to achieve accurate object detection and waste identification. The system's
performance is evaluated against various datasets, demonstrating high accuracy and
effectiveness in real-world cleaning scenarios compared to existing solutions. Through its
cutting-edge design and capabilities, the suggested approach aims to improve cleaning
standards and simplify maintenance routines, offering a practical and effective solution for
ensuring hygienic living and working environments.

Keywords – WASTE DETECTION, AUTO CLEANING, DEEP LEARNING, REAL-TIME


MONITORING, USER-FRIENDLY INTERFACE, INTERNET OF THINGS, NAVIGATION,
MICROCONTROLLER.

TABLE OF CONTENTS

TITLE PAGE NO.

ABSTRACT VII

LIST OF TABLES X

LIST OF FIGURES XI

LIST OF ACADEMIC REFERENCE COURSES XII

CHAPTER I INTRODUCTION 1

1.1 OVERVIEW

1.2 USE CASES & APPLICATIONS

1.3 CHALLENGES

CHAPTER II LITERATURE REVIEW 3

2.1 ESP32 MICROCONTROLLER INTEGRATION IN ROBOTIC ARM

2.2 DEEP LEARNING ALGORITHMS FOR REAL-TIME DETECTION

2.3 ULTRASONIC SENSORS FOR DETECTION IN MOBILE ROBOTICS

2.4 DC MOTOR SPEED CONTROL USING PWM TECHNIQUE

2.5 APPROACHES FOR DETECTION & SYSTEM DEVELOPMENT

2.6 EMPHASIS ON NAVIGATION FOR FLOOR CLEANING

2.7 NAVIGATION TECHNIQUES USING SONAR & DETECTION

2.8 COMPARISON WITH FLOOR CLEANING ROBOT USING IOT

CHAPTER III PROBLEM DEFINITION & BACKGROUND 7

3.1 PROBLEM DEFINITION

3.2 PROBLEM FORMULATION

CHAPTER IV PROPOSED SYSTEM 8

4.1 CAMERA INTEGRATION

4.2 INTEGRATION OF DEEP LEARNING & IOT MODULES

4.3 ALGORITHM DEPLOYMENT

4.4 ARDUINO AND HARDWARE CONTROL

4.5 VACUUM SUCTION MECHANISM

4.6 CLOUD-BASED DATA STORAGE & PROCESSING



4.7 USER INTERFACE & INTERACTION

CHAPTER V REQUIREMENTS & SPECIFICATIONS 12

5.1 ARDUINO MICROCONTROLLER

5.2 ESP32 CAMERA

5.3 ULTRASONIC SENSOR

5.4 L298N MOTOR DRIVER

5.5 DC MOTORS & DC PUMP

CHAPTER VI SYSTEM DESIGN 14

6.1 INTEGRATING HARDWARE COMPONENTS

6.2 DESIGN CONSTRAINTS & STANDARDS

CHAPTER VII EXPERIMENTATION AND OUTCOMES 16

7.1 METHODOLOGY

7.2 DATASET DESCRIPTION

7.3 COMPREHENSION OF THE AFOREMENTIONED ALGORITHMS

7.4 HOW THE FASTER RCNN ALGORITHM WORKS IN SYNC WITH HARDWARE
COMPONENTS

7.5 OBJECT DETECTION PERFORMANCE EVALUATION

7.6 ADAPTABILITY ASSESSMENT IN VARIED ENVIRONMENTS

7.7 PROCESSING EFFICIENCY & REAL-TIME CAPABILITIES

7.8 ASSESSMENT OF CLEANING PERFORMANCE

7.9 PROTOTYPE

CHAPTER VIII CONCLUSION & FUTURE SCOPE 34

REFERENCES 35

PUBLICATION

CERTIFICATIONS

PLAGIARISM REPORT

LIST OF TABLES

TABLES DETAILS PAGE NO.


Table 1 Hardware & Software Requirements 12

Table 2 Accuracy Scores 24

Table 3 Result Analysis 26

Table 4 Comparison of AutoCleanAI with Available Products 33



LIST OF FIGURES

FIGURES DETAILS PAGE NO.


Figure 1 Architecture of the system 14

Figure 2 Work Plan 16

Figure 3 Process Flow of AutoCleanAI 17

Figure 4 Simple CNN model 20

Figure 5 CNN layers with augmented layers and dropout layers 21

Figure 6 CNN layers with dropout layers 21

Figure 7 CNN layers with augmented layers 22

Figure 8 CNN layers with two different augmented layers 22

Figure 9 CNN layers with data augmented layers and more dense layers 23

Figure 10 Arduino L298N Motor Driver Circuit 27

Figure 11 ESP32-CAM Motor Control Circuit 27

Figure 12 Schematic representation of AutoCleanAI 28

Figure 13 Scrubbing Results 29

Figure 14 Vacuuming Results 30

Figure 15 Prototype Design of AutoCleanAI 32

Figure 16 Working of the AutoCleanAI 32



LIST OF ACADEMIC REFERENCE COURSES

S. NO. COURSE CODE COURSE NAME


1 CSE18R254 INTRODUCTION TO PYTHON PROGRAMMING

2 CSE18R257 PREDICTIVE ANALYTICS

3 CSE18R212 MACHINE LEARNING

4 CSE18R292 ALGORITHMS FOR INTELLIGENT SYSTEMS AND ROBOTICS

5 CSE18R396 DEEP LEARNING

6 CSEOPE005 INTRODUCTION TO INTERNET OF THINGS

7 CSEOPE028 CLOUD COMPUTING

8 CSEOOE064 DIGITAL IMAGE PROCESSING


CHAPTER-I

INTRODUCTION

1.1 Overview:

The rapid advancement of technology in recent years has spurred the development of
innovative solutions aimed at simplifying and enhancing everyday tasks, with autonomous
cleaning robots emerging as a prominent example. According to recent market research, the
global cleaning robot market is projected to reach a value of $25.9 billion by 2027,
experiencing a compound annual growth rate (CAGR) of 21.5% from 2022 to 2027. These
robots have garnered significant attention for their potential to revolutionize home and
commercial cleaning practices, offering benefits such as increased efficiency, reduced labor
costs, and improved hygiene standards. In response to this growing demand, this report
proposes a novel framework that integrates advanced technology into the realm of cleaning
robotics.

At the heart of this framework lies the concept of a cleaning robot: a cutting-edge robotic
solution designed to autonomously clean designated areas without requiring human
intervention. Recent studies have indicated a strong consumer interest in autonomous cleaning
robots, with over 60% of respondents expressing a willingness to invest in such technology to
simplify household chores. The proposed system leverages state-of-the-art components and
algorithms to achieve its cleaning objectives effectively.

The system is equipped with an array of components that collaborate seamlessly to empower
the robot's navigation, object detection, and precise execution of cleaning tasks. These
integrated features, including real-time data analysis and object detection capabilities,
underscore the system's dedication to enhancing efficiency and adaptability in cleaning
operations.

Furthermore, the proposed framework addresses the evolving needs of various settings,
including homes, offices, hospitals, and other commercial buildings. With the increasing
emphasis on cleanliness and sanitation in light of global health concerns, the adoption of
autonomous cleaning robots is expected to witness substantial growth across diverse industries.
By harnessing the power of automation and artificial intelligence, the proposed system aims to
redefine the standards of cleanliness and hygiene, offering a forward-thinking solution for
modern living and working environments.

1.2 Use Cases & Applications:

The proposed framework presents a versatile solution with broad applications spanning various
domains, from household cleaning to industrial environments. In residential settings, the
proposed approach equipped with innovative technology offers autonomous navigation and
obstacle avoidance capabilities, revolutionizing cleaning routines and minimizing human
intervention. Similarly, in commercial spaces such as offices, shopping centers, and hospitals,
the framework contributes to maintaining cleanliness and hygiene standards through automated
cleaning tasks. Its ability to navigate complex environments and efficiently clean floors
enhances operational efficiency and promotes a healthier environment for occupants. Overall,
the outlined structure addresses the diverse cleaning needs of both residential and commercial
settings, offering a reliable and efficient solution to streamline cleaning operations.

1.3 Challenges:

Despite the promising potential of autonomous cleaning robots, several challenges must be
addressed to ensure their effectiveness and widespread adoption. One of the primary hurdles is
the accurate detection and classification of objects across diverse environments, which is
crucial for the robot's ability to navigate and perform cleaning tasks efficiently. Achieving this
level of accuracy requires robust algorithms and sensors capable of recognizing various objects
and adapting to different surroundings. Additionally, seamless communication between the
onboard camera and the cloud server is essential for real-time data analysis, enabling the robot
to make informed decisions and respond swiftly to changes in its environment. Ensuring
reliable and low-latency communication is vital for enhancing the overall performance of the
system. Furthermore, optimizing the performance of hardware components, such as sensors,
motors, and microcontrollers, is crucial for achieving efficient cleaning operations. This
involves fine-tuning the hardware to work seamlessly together and integrating advanced
features to enhance the robot's capabilities. Overcoming these challenges will be key in
unlocking the full potential of autonomous cleaning robots and accelerating their adoption in
various settings, ultimately revolutionizing the way we approach cleaning tasks.

CHAPTER-II

LITERATURE REVIEW

Creating autonomous robots that can carry out diverse tasks in varied environments has
garnered significant attention in recent times. Object detection and recognition is a major
challenge in this field, as it is an essential task for many robotic applications. To address this
problem, the suggested system uses a cam module in an autonomous cleaning bot. The proposed
approach recognizes the object and sends the image to the cloud, which stores the trained data.
The system then compares the collected data with the trained data and displays the outcome.
The literature review that follows highlights the ideas and technologies pertinent to the
suggested system.

2.1 ESP32 Microcontroller Integration in Robotic Arm:

A microcontroller is utilized in the research paper "Design and implement of robotic arm and
control of moving via IoT with Arduino ESP32" by Ahmed et al. (2021) to create an Internet of
Things-based robotic arm control system. They demonstrated how the robotic arm can be
utilized in a variety of settings, including medical and warehouse automation, by using Internet
of Things technology to remotely control it. The goal of this project is to control a robotic arm,
and it emphasizes how crucial it is to use microcontrollers like the Arduino ESP32 in robotics
applications. For communication between the controller and the cloud server, the system makes
use of a Wi-Fi module. Similarly, in the suggested system, the cam module communicates with
the cloud to test the captured data.

2.2 Deep Learning Algorithms for Real-Time Detection:

The use of deep learning algorithms for real-time human detection on embedded platforms is
covered in the research paper "Real-Time Human Detection Using Deep Learning on Embedded
Platforms: A Review" by Rahmaniar and Hernawan (2021). They examined various deep-
learning techniques for human detection and talked about the benefits and drawbacks of each
technique. The study emphasized the value of real-time processing for robotics applications and
how embedded platforms can help accomplish this objective. Similar methods are employed in
the suggested system for object detection, using deep learning algorithms with the cam module.

2.3 Ultrasonic Sensors for Detection in Mobile Robotics:

The use of ultrasonic sensors for obstacle detection in an intelligent mobile robot is covered in
the research paper "Design and Implementation of Intelligent Mobile Robot based on
Microcontroller by Using Three Ultrasonic Sensors" by Rejab and Abd-Al Hussain (2018).
They demonstrated how the robot uses ultrasonic sensors to detect and avoid obstacles. This
study emphasizes the value of sensors in robotics applications by concentrating on obstacle
detection. The framework is guided in its movement by ultrasonic sensors, which are employed
in the proposed system to identify obstacles.

2.4 DC Motor Speed Control Using PWM Technique:

A microcontroller-based DC motor speed control utilizing the PWM technique was designed
and implemented in the research paper "Microcontroller Based DC Motor Speed Control Using
PWM Technique" by Russell and Bhuyan (2012). They demonstrated how to regulate the speed
of DC motors using the pulse width modulation (PWM) technique. The significance of motor
control in robotics applications is emphasized by this study. The framework in the suggested
system is moved by DC motors, and speed control is provided by the L298N motor driver
module.

2.5 Approaches for Detection and System Development:

In contrast to the research conducted by Pranav and Ashish on the "Development of


Autonomous Indoor Floor Cleaning Robot" (2022), the proposed framework focuses on
leveraging the camera for object identification. While they may emphasize overall system
development and autonomy in indoor floor cleaning, the proposed framework specifically
targets efficient object identification through visual perception. This approach enables robots to
detect objects, send pictures to the cloud for analysis, and display the recognized object types in
real-time. Unlike the comprehensive approach taken by them, which likely involves various
sensors and algorithms for navigation, the proposed framework simplifies its functionality to
prioritize efficient object identification and classification.

2.6 Emphasis on Navigation for Floor Cleaning:

When comparing the proposed framework with the research paper authored by Aman,
Rajkumar, and Anuradha on the "Autonomous Floor Cleaning Robot (Navigation)" (2020),
several similarities and differences emerge. Both initiatives share the common objective of
developing autonomous floor-cleaning robots, yet they diverge in their primary focuses and
operational methodologies. Their study appears to prioritize navigation strategies, likely delving
into path planning, obstacle avoidance, and localization techniques to enable effective
movement within a given environment. Conversely, the suggested system emphasizes object
identification using the camera, supplemented by cloud-based processing for image analysis
making it efficient for cleaning tasks.

2.7 Navigation Techniques using Sonar & Detection:

In comparing the proposed framework with the research paper by Parth and Khan on the
"Autonomous Vacuum Cleaning robot using Sonar Navigation and SLAM" (2021), several
similarities and distinctions emerge. While both initiatives aim to develop autonomous vacuum-
cleaning robots, they employ different navigation and mapping techniques. Their work focuses
on utilizing sonar navigation and SLAM (Simultaneous Localization and Mapping) algorithms
to enable the robot to navigate and map its environment in real-time. In contrast, our system
prioritizes object identification using the cam module, supplemented by cloud-based processing
for image analysis. While sonar navigation and SLAM are effective for spatial awareness and
navigation, the proposed framework's approach allows for efficient object identification and
cleaning operations.

2.8 Comparison with Floor Cleaning Robot Using IoT:

In contrast to the research paper authored by Sonia and Ganesh on the "Design and
Implementation of Floor Cleaning Robot Using IOT" (2021), the proposed framework presents
a distinctive approach to autonomous floor cleaning. While both projects involve floor-cleaning
robots, they diverge in terms of their underlying technologies and functionalities. Their work
emphasizes the use of Internet of Things (IoT) technology for connectivity and control, likely
involving remote monitoring and operation of the cleaning robot. In contrast, the system focuses
on object identification using the cam module, coupled with cloud-based processing for image
analysis and decision-making. This enables the robot to autonomously detect and classify
objects in its environment, facilitating efficient cleaning operations.

In contrast to the aforementioned studies, the suggested system seeks to accomplish object
detection and recognition for cleaning applications by combining a cam module with an
Arduino microcontroller, an L298N motor driver module, an ultrasonic sensor, DC motors, and
a DC pump. To store and test the captured data against the trained data, this system makes use
of the cloud. Unlike the previous studies, which concentrated on different facets of robotics,
the suggested system emphasizes how crucial it is to combine different hardware elements and
technologies in robotics applications to accomplish the intended task.

Note: Our comprehensive study of image processing techniques in the Digital Image Processing course
(CSEOOE064) has equipped us with the necessary skills to manipulate and analyze images effectively. Leveraging
this knowledge with algorithms that accurately extract meaningful information from visual data enhances the
overall functionality and performance of our prototype.

CHAPTER-III

PROBLEM DEFINITION & BACKGROUND

3.1 Problem Definition:

The proposed system aims to address several key challenges in autonomous cleaning by
integrating various components. Leveraging the cam module, this system focuses on real-time
object detection and recognition within indoor environments. The Arduino, coupled with the
L298N motor driver module, orchestrates and controls the intricate maneuvers of the DC
Motors and DC Pump, enabling the robot's movement and functionality. The ultrasonic sensor
plays a pivotal role in providing distance measurements, facilitating obstacle detection, and
ensuring safe navigation. With the captured images, the camera processes and transmits data
to the cloud, where trained algorithms identify objects and relay information back to the
system. This process enables the system to display real-time object identification results,
thereby enhancing the robot's efficiency and autonomy.

3.2 Problem Formulation:


The proposed system, denoted as framework A, aims to address the task of efficient cleaning
operations within varied environmental surfaces denoted as ES, encompassing materials such
as tiles, wood, and carpets. This framework comprises two fundamental sub-modules: a1 and
a2.

Formally, the framework is expressed as A = {a1, a2} acting on ES.

The primary function of a1 is to implement an advanced deep-learning algorithm designed


specifically for object detection. Within our project scope, these objects primarily encompass
waste particles like coffee grounds, onion peels, and discarded paper. In tandem, sub-module
a2 represents the IoT component, seamlessly integrating sensors and actuators to complement
the functionalities of a1. The sensor suite adopted in our project notably includes ultrasonic
sensors and camera modules, strategically chosen to enhance environmental perception and
data acquisition. With a concerted focus on optimizing the efficiency and effectiveness of both
a1 and a2, our ultimate goal is to enable framework A to interact intelligently with its
environment, navigating and responding adeptly to the varied surface conditions ES.
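Read literally, the formulation above can be written out as follows (one possible reading of the notation, not a definition fixed elsewhere in the report):

```latex
A = \{a_1, a_2\}, \qquad A : ES \rightarrow ES,
\qquad ES \in \{\text{tiles}, \text{wood}, \text{carpets}\}
```

where a1 is the deep-learning object-detection sub-module and a2 is the IoT sensing-and-actuation sub-module, and applying A to a surface in ES returns that surface in a cleaned state.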

CHAPTER-IV

PROPOSED SYSTEM

The proposed framework aims to develop an AutoCleanAI that can detect and classify objects
using an ESP32 camera. The framework also includes an Arduino, ultrasonic sensors, DC
motors, and a DC pump. The cam module detects objects using its camera and sends the
captured picture to the cloud, where the trained data is stored. The framework then tests the
captured picture against the trained data and classifies it accordingly. The results are displayed
on a user interface where the user can see what object has been detected.

4.1 Camera Integration:

The ESP32-CAM is an ideal camera module for the proposed framework due to its low power
consumption, high-quality image capture, and built-in Wi-Fi capability. The module uses its
built-in camera to capture pictures of the surroundings, which are processed with the Faster
RCNN algorithm to detect and classify objects. The cam module then sends the captured
pictures to the cloud, where the trained data is stored. The trained data is a set of pre-classified
pictures that the framework uses to compare with newly captured pictures.

4.2 Integration of Deep Learning and IoT Modules:

The proposed framework incorporates two essential modules: deep learning and IoT. In the deep
learning module, various state-of-the-art imagenet models, custom models, and the Faster
RCNN algorithm are utilized for robust object detection and classification. These models
undergo rigorous training using the WasteNet dataset to enhance their accuracy and
effectiveness in identifying waste materials. On the other hand, the IoT module integrates sensor
technologies such as ultrasonic sensors and cameras with deep learning algorithms, enabling
seamless data acquisition and processing. This integration ensures that the AutoCleanAI system
can adapt to different environmental conditions and effectively navigate its surroundings for
optimal cleaning performance. By combining deep learning and IoT technologies, the proposed
framework achieves a synergistic effect, enhancing the system's capabilities and versatility in
real-world applications.

4.3 Algorithm Deployment:

The Faster RCNN (Region-based Convolutional Neural Network) algorithm plays a crucial role
in object detection and classification. This algorithm operates by dividing the image into various
regions and proposing potential object locations within each region. These proposed regions,
known as region proposals, are then analyzed by a convolutional neural network (CNN) to
extract features and classify objects. The CNN effectively learns discriminative features from
the proposed regions, enabling accurate identification and classification of objects within the
image. By leveraging both region proposals and CNN-based feature extraction, the Faster
RCNN algorithm achieves impressive accuracy and efficiency in object detection tasks. In the
context of the proposed framework, the cam module captures images of the environment, which
are then processed by the Faster RCNN algorithm deployed in the cloud. This enables real-time
object detection and classification, allowing the AutoCleanAI to efficiently navigate its
surroundings and perform targeted cleaning actions.
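The report does not reproduce its detection code, but as a minimal sketch of this region-proposal-plus-CNN pipeline, the snippet below runs a pretrained Faster R-CNN from torchvision; the COCO weights stand in for the WasteNet-trained model, and the image path is a hypothetical capture.

```python
# Minimal sketch of the region-proposal + CNN pipeline described above, using
# torchvision's pretrained Faster R-CNN. COCO weights stand in for the
# WasteNet-trained model; the file name is a hypothetical capture.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode: no gradient bookkeeping needed

def detect(image_path: str, score_threshold: float = 0.5):
    """Return boxes, labels, and scores for detections above the threshold."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # one result dict per input image
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

boxes, labels, scores = detect("frame_from_esp32.jpg")  # hypothetical frame
print(f"{len(boxes)} objects detected")
```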

4.4 Arduino and Hardware Control:

The Arduino controls the motors and the pump: the motors move the robot while avoiding
obstacles, and the pump is used for cleaning the surface. The ultrasonic sensor handles obstacle
detection by sending out high-frequency sound waves and timing their reflections to determine
the distance to an object; the Arduino uses this information to steer the motors away from
obstacles. The L298N driver module controls the speed and direction of the motors, which are
programmed to move forward, backward, left, and right based on the input from the ultrasonic
sensor.
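The firmware for this control loop is written in Embedded C through the Arduino IDE; as a minimal illustration of the arithmetic involved, the Python sketch below mirrors the echo-to-distance conversion and a simple avoid/advance rule. The HC-SR04-style timing model, the 20 cm threshold, and the command names are assumptions, not values from the report.

```python
# Illustrative Python mirror of the firmware's ranging arithmetic and decision
# rule (the real control loop runs as Embedded C on the Arduino).
SPEED_OF_SOUND_CM_PER_US = 0.0343  # approximate speed of sound at room temperature

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Ultrasonic ranging: the pulse travels to the obstacle and back,
    so halve the round-trip time before converting to distance."""
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def motor_command(distance_cm: float, stop_threshold_cm: float = 20.0) -> str:
    """Assumed avoidance rule: keep driving unless an obstacle is close."""
    return "forward" if distance_cm > stop_threshold_cm else "reverse_and_turn"

echo_us = 1166                                       # example round-trip time
print(echo_to_distance_cm(echo_us))                  # ~20.0 cm
print(motor_command(echo_to_distance_cm(echo_us)))   # 'reverse_and_turn'
```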

4.5 Vacuum Suction Mechanism:

Moving beyond the above applications, the system employs a vacuum suction mechanism to
efficiently remove debris like papers, adding a nuanced layer to its cleaning capabilities. This
integration further enhances the versatility and effectiveness of the proposed framework. The
cam module’s role extends beyond mere image capture; it serves as the linchpin for the entire
process, managing the seamless communication between hardware components and the cloud-
based analytical powerhouse.

4.6 Cloud-based Data Storage and Processing:

In addition to object detection and classification, the proposed framework leverages cloud-based
data storage and processing capabilities for enhanced efficiency and scalability. The captured
images from the cam module are transmitted to the cloud infrastructure, where they are stored
securely. This cloud-based storage ensures that the system has access to a vast repository of
trained data for comparison and analysis. Moreover, the cloud serves as the computational
powerhouse for executing complex algorithms such as the Faster RCNN. By offloading
computational tasks to the cloud, the framework can achieve real-time processing of captured
images, enabling rapid decision-making and response in dynamic environments. Furthermore,
cloud-based storage enables seamless data sharing and collaboration, facilitating continuous
learning and improvement of the AutoCleanAI system over time.
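As a concrete sketch of this capture-upload-classify path, the snippet below shows a minimal cloud-side HTTP endpoint, assuming Flask; the /classify route, the port, and the run_detection() helper are illustrative assumptions rather than the report's actual service interface.

```python
# Sketch of the cloud side of the capture-upload-classify path, assuming Flask.
import io

from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def run_detection(image: Image.Image) -> list[str]:
    # Placeholder for the trained model; see the Faster R-CNN sketch in 4.3.
    return ["paper"]

@app.route("/classify", methods=["POST"])
def classify():
    """Accept a raw JPEG body from the cam module and return detected labels."""
    image = Image.open(io.BytesIO(request.data)).convert("RGB")
    return jsonify({"detected": run_detection(image)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```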

4.7 User Interface and Interaction:

To enhance user experience and facilitate seamless interaction with the AutoCleanAI system, a
user interface (UI) is integrated into the framework. The UI provides a visual representation of
the classification results, allowing users to monitor the detected objects in real-time. Through
the UI, users can access detailed information about the identified objects, including their
classifications. Additionally, the UI offers intuitive controls for initiating cleaning actions and
configuring system settings. This interactive interface empowers users to actively engage with
the AutoCleanAI system, providing valuable feedback and insights for system optimization and
refinement. Furthermore, the UI serves as a platform for communication between the system
and its operators, enabling efficient coordination and management of cleaning operations in
diverse environments. By prioritizing user-centric design and interaction, the proposed
framework ensures that AutoCleanAI remains accessible, adaptable, and user-friendly for a
wide range of applications and users.

Note: Cloud processing plays a pivotal role in the proposed algorithm for our prototype, offering scalability and
accessibility to our system's functionalities. Our understanding of cloud processing was honed through the
comprehensive coverage provided in the Cloud Computing course (CSEOPE028), empowering us to leverage cloud
resources effectively for our project's requirements.

In summary, the proposed framework is an AutoCleanAI that utilizes several hardware


components. The system employs the cam module for object identification, capturing pictures,
and transmitting them to the cloud for processing. Subsequently, the cloud-based Faster RCNN
algorithm performs real-time object classification, and the integrated hardware components,
guided by the Arduino, execute dynamic responses to ensure autonomous and adaptive
cleaning operations. The Motors and Pump work in conjunction with the cam module to
facilitate smooth movement and collision avoidance. Additionally, the inclusion of the vacuum
suction mechanism demonstrates the framework's innovation in debris removal, representing a
significant advancement in autonomous cleaning technology. The user can conveniently view
the classification results on a user interface, providing insights into the detected objects and
enhancing user interaction with the AutoCleanAI system.

Note: The Faster R-CNN algorithm serves as the cornerstone of our prototype's object detection capabilities,
offering high accuracy and efficiency. Our familiarity with this advanced algorithm was cultivated through the
in-depth exploration provided in the Deep Learning course (CSE18R396) and Machine Learning (CSE18R212),
equipping us with the knowledge to implement cutting-edge techniques in our project.

CHAPTER-V

REQUIREMENTS & SPECIFICATIONS

The proposed framework utilizes a combination of essential components to achieve its


functionality. The table below provides a detailed breakdown of the quantities required for each
component, highlighting their significance in the construction and operation of the system.

Item                                               Quantity

Microcontrollers                                   1
Motor Drivers                                      2
Sensors                                            1
Sensor & Cam Holders                               2
Camera                                             1
USB to TTL Connector                               1
Power Converter (BMT)                              1
Power Supply Board                                 1
Motors                                             8
Pumps                                              1
Vacuum tubes, suction cups, hoses, and filters     1
Water Holding Tank                                 1
Batteries & Charger                                3 & 1
Chassis and wheels                                 1 chassis & 4 wheels
Cleaning tools & mops                              As required
Cables, Wires, and Connectors                      As required
Mounts and brackets                                As required
Fasteners (Screws, Nuts, Bolts)                    As required

Table 1: Hardware & Software Requirements

Note: These hardware components are essential for the prototype's functionality, serving as integral building
blocks in its design. Our understanding of these components was enriched through the comprehensive coverage
provided in the Introduction to Internet of Things course (CSEOPE005) and Algorithms for Intelligent Systems
and Robotics (CSE18R292), laying a solid foundation for their practical application in real-world IoT projects.

5.1 Arduino Microcontroller:

The Arduino serves as the central processing unit of the AutoCleanAI, coordinating the
functionalities of various hardware components. It receives input from sensors, processes data,
and controls the operation of motors and pumps for effective cleaning actions. By executing
these instructions, it ensures precise navigation and seamless integration of cleaning functions.

5.2 ESP32 Camera:

The cam module acts as the eyes of the AutoCleanAI, capturing images of the surroundings for
object detection and classification. Its built-in camera, low power consumption, and Wi-Fi
capability make it ideal for real-time image processing tasks. By transmitting images to the
cloud for analysis, it enables accurate identification of objects and efficient cleaning actions.

5.3 Ultrasonic Sensor:

The ultrasonic sensor plays a pivotal role in obstacle detection, emitting high-frequency sound
waves and measuring their reflections to determine distances to objects. Integrated into the
AutoCleanAI system, it provides crucial spatial awareness, allowing the robot to navigate and
maneuver around obstacles safely. Its reliable performance ensures smooth operation in varied
environments.

5.4 L298N Motor Driver:

The L298N motor driver module controls the speed and direction of the DC motors, facilitating
precise movement and navigation of the AutoCleanAI. By regulating the power supplied to the
motors, it ensures smooth operation and efficient cleaning actions. Its robust design and
compatibility with various motor types make it an essential component for driving the system.

5.5 DC Motors & DC Pump:

The DC motors and DC pump power the cleaning tools and mops of the AutoCleanAI, enabling
it to perform scrubbing and vacuuming actions effectively. These motors drive the movement
of the robot and the rotation of cleaning attachments, while the pump facilitates the suction and
expulsion of liquid waste. Their combined functionality ensures thorough cleaning of surfaces
and efficient waste removal.

CHAPTER-VI

SYSTEM DESIGN

6.1 Integrating Hardware Components:

Figure 1 shows the architecture of the proposed AutoCleanAI system, which is designed to
integrate hardware components, software algorithms, and cloud-based processing seamlessly,
enabling efficient cleaning operations in diverse environments. At its core, the system comprises
hardware components such as the ESP32 microcontroller, ultrasonic sensors, DC motors, and a
DC pump, orchestrated to facilitate object detection, avoidance, waste identification, and
cleaning actions. Furthermore, the system architecture includes a user interface component for
monitoring and controlling the AutoCleanAI remotely. Users can access real-time cleaning
progress, view detected objects, and adjust cleaning settings through the user interface,
enhancing user interaction and control over the system. Overall, the architecture of the
AutoCleanAI system is designed to leverage a combination of hardware and software
components, cloud-based processing, and user interface design to create an intelligent and
adaptable cleaning solution for various environments. By integrating these components
effectively, the system achieves efficient object detection, obstacle avoidance, waste
identification, and cleaning operations, contributing to improved cleanliness and hygiene in
indoor spaces.

Fig. 1. Architecture of the system



6.2 Design Constraints & Standards:

The design constraints that influence the development and implementation of the proposed
AutoCleanAI system encompass various factors that must be considered to ensure the system's
functionality, reliability, and usability in real-world environments. One significant design
constraint is the hardware limitation, which dictates the selection and integration of hardware
components such as microcontrollers, sensors, motors, and pumps. The chosen hardware must
meet specific criteria, including compatibility, power consumption, size, and cost, to ensure
optimal performance and affordability of the system.

Additionally, environmental constraints, such as varying surface conditions, cluttered spaces,
and obstacle-rich environments, pose challenges to the system's navigation and cleaning
capabilities. The AutoCleanAI must be designed to adapt to these environmental conditions and
navigate effectively while avoiding obstacles and hazards.

Moreover, there are software constraints related to algorithm complexity, processing speed, and
memory usage, which influence the selection and implementation of object detection and
classification algorithms. The system's software architecture must be optimized to achieve
real-time processing and decision-making while minimizing computational resources.

Furthermore, user interaction constraints, such as the design of the user interface and the
accessibility of controls, impact the system's usability and user experience. The user interface
must be intuitive, informative, and responsive, allowing users to monitor cleaning progress,
adjust settings, and troubleshoot issues effectively.

By addressing these design constraints comprehensively, the proposed AutoCleanAI system can
overcome challenges and deliver efficient and reliable cleaning performance in diverse
environments. The architectural design of the proposed autonomous cleaning system aligns with
IEEE P2413 - IEEE Standard for an Architectural Framework for the Internet of Things (IoT),
ensuring that it follows a structured approach to IoT integration and compatibility, thereby
enhancing its scalability and adaptability across diverse environments.

CHAPTER-VII

EXPERIMENTATION AND OUTCOMES

This section outlines the design, construction, and testing process as well as the methodology
utilized for the autonomous cleaning bot. An Arduino microcontroller, an ESP32 CAM
module, an ultrasonic sensor, DC motors, and a DC pump are among the hardware elements
that were employed. The Arduino IDE is the software used to program the Arduino
microcontroller and cam module.

The work plan is shown in Fig. 2.

Fig. 2. Work Plan: preparing the basic design → creating the DL and driver modules → assembling the sensors and microcontroller → uploading the modules to the microcontroller → testing the autonomous cleaning bot on different surfaces

The work plan delineates the step-by-step process essential for the development of the
autonomous cleaning bot. Initially, the project commences with the foundational stage of
preparing the basic design, outlining the system's architecture and functionalities. Following
this, the focus shifts towards crafting the deep learning (DL) and driver modules, crucial
components enabling object recognition and motor control. Subsequently, the assembly of
sensors and the microcontroller constitutes a pivotal phase, merging hardware elements and
establishing communication pathways.

The flow diagram for the AutoCleanAI is shown in Figure 3.

Fig. 3. Process flow of AutoCleanAI: input → waste detection, navigation, object avoidance, and cleaning actions → output through the user interface

The aforementioned diagram shows the flow of data and control signals within the system. It
provides a visual representation of the program's logic and how different functions are executed
based on certain conditions or events. The flow diagram helps to understand the flow of control
and data between the various components of the system.

7.1 Methodology:

The methodology employed in this study involves the integration of two main modules:

a. Deep Learning
b. IoT

Within the Deep Learning module, a comprehensive exploration of various ImageNet
algorithms was conducted, including VGG16, VGG19, ResNet50, ResNet101, MobileNetV2,
and EfficientNetB0, alongside the Faster RCNN algorithm. These algorithms were evaluated
for their effectiveness in object detection and classification, crucial for the autonomous
cleaning robot's operation.
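As a sketch of how these backbones can be instantiated, the snippet below assumes TensorFlow/Keras with ImageNet weights and a small classification head for the four WasteNet classes; the input size, frozen base, and head layers are assumptions, since the report does not specify these hyperparameters.

```python
# Sketch: instantiating the ImageNet-pretrained backbones named above for
# transfer learning on the four WasteNet classes.
import tensorflow as tf
from tensorflow.keras import layers, models

BACKBONES = {
    "VGG16": tf.keras.applications.VGG16,
    "VGG19": tf.keras.applications.VGG19,
    "ResNet50": tf.keras.applications.ResNet50,
    "ResNet101": tf.keras.applications.ResNet101,
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "EfficientNetB0": tf.keras.applications.EfficientNetB0,
}

def build_classifier(name: str, num_classes: int = 4) -> tf.keras.Model:
    base = BACKBONES[name](include_top=False, weights="imagenet",
                           input_shape=(224, 224, 3))
    base.trainable = False  # freeze pretrained features; train only the head
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_classifier("MobileNetV2")
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```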

7.2 Dataset Description:

The WasteNet dataset is a collection of 40 JPG images divided into four subcategories: 'Coffee',
'Onion Peels', 'Red Juice', and 'Paper'. Each subcategory contains 10 images, all of which depict
different waste materials.

The purpose of this dataset is to train and test machine and deep learning models that identify
or classify different types of waste materials. Each image in the dataset is labeled with the
subcategory it belongs to, which serves as the ground truth for training and evaluating such
models. By providing a set of labeled images, this dataset can be used to evaluate the
performance of these models and to improve their accuracy.

The WasteNet dataset is used for evaluating both the ImageNet models and the Faster RCNN
algorithm, enabling a comprehensive assessment of their performance in waste material
classification and object detection tasks.
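A minimal way to load such a dataset, assuming the usual one-folder-per-class layout (the directory names and 80/20 split below are assumptions based on the four subcategories described above), is:

```python
# Sketch: loading a WasteNet-style directory with one folder per class.
import tensorflow as tf

# wastenet/
#   coffee/  onion_peels/  red_juice/  paper/   (10 JPG images each)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "wastenet", validation_split=0.2, subset="training", seed=42,
    image_size=(224, 224), batch_size=8)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "wastenet", validation_split=0.2, subset="validation", seed=42,
    image_size=(224, 224), batch_size=8)

print(train_ds.class_names)  # e.g. ['coffee', 'onion_peels', 'paper', 'red_juice']
```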

7.3 Comprehension of the aforementioned algorithms:

i. VGGNET & ResNet models:

The VGGNET experiment involves using the VGG-16 convolutional neural network
architecture to classify images, and the ResNet experiment involves using the ResNet
architecture for the same task. The goal of these experiments is to evaluate the performance of
VGG-16 and ResNet on this task. VGG-16 and
ResNet are popular convolutional neural network architectures that have been shown to achieve
state-of-the-art performance on several image classification tasks. The main problem addressed
by this experiment is image classification, which is a fundamental problem in computer vision.
The ability to automatically classify images based on their content is important in many real-
world applications, such as object recognition, face recognition, and autonomous driving.

ii. MobileNet model:

The MobileNet experiment involves training a convolutional neural network (CNN) called
MobileNet on a specific dataset for image classification tasks. The main problem addressed by
this experiment is the need for deep learning models that can run efficiently on mobile devices
with limited computational resources. Traditional CNN models, such as VGGNet and ResNet,
are computationally intensive and require high-performance hardware, making them difficult
to deploy on mobile devices. MobileNet is designed to address this issue by reducing the
computational complexity of the model while maintaining high accuracy. This makes it well-
suited for deployment on mobile devices.

The MobileNet experiment involves training the MobileNet model on a specific dataset for a
particular image classification task. The dataset can be selected based on the specific
application for which the model will be used. For example, if the model will be used to classify
images of animals, the dataset might include images of different animals. The goal is to train
the model to accurately classify new images that it has not seen before.

iii. EfficientNetB0 model:

The EfficientNetB0 experiment aims to evaluate the performance of the EfficientNetB0


convolutional neural network architecture on image classification tasks. EfficientNetB0 is a
scalable convolutional neural network architecture that achieves state-of-the-art performance
on various image classification benchmarks. The primary objective of this experiment is to
assess the efficiency and effectiveness of the EfficientNetB0 model in comparison to other
convolutional neural network architectures, such as VGGNet and ResNet. The main problem
addressed by this experiment is the need for deep learning models that can achieve high
accuracy while minimizing computational complexity and resource requirements. By
evaluating the performance of the EfficientNetB0 model, researchers seek to determine its
suitability for real-world applications where computational efficiency is crucial.

Note: The Introduction to Python Programming course (CSE18R254) has provided us with fundamental skills in
programming with Python, laying a solid foundation for our endeavors in software development and data
analysis. Harnessing the principles learned in this course enables us to work on robust and efficient algorithms,
further enhancing the capabilities of our prototype through effective code implementation and data manipulation
techniques.

In addition to the aforementioned ImageNet models, customized models employing various
activation functions and optimizers were also utilized. Optimizers in CNNs are computational
techniques used during the training process to systematically adjust the model's parameters
(weights and biases) to minimize the loss between predicted and actual results; these
optimizers aid in the convergence of the neural network. Activation functions are mathematical
operations performed on the outputs of the network's neurons or layers. Their purpose is to
introduce nonlinearity, which allows the CNN to detect complex patterns and relationships in
the data. In our custom CNN models, activations such as ReLU, ELU, and SeLU (see Table 2)
are used in the hidden layers, while the final dense layer uses SoftMax, which is well suited to
multi-class classification datasets.

These custom models include Simple CNN, CNN layers with augmented layers and dropout
layers, CNN layers with dropout layers, CNN layers with augmented layers, CNN layers with
two different augmented layers, CNN layers with data augmented layers and more dense layers.
The results, reported in terms of classification accuracy on a test dataset, showcased the
outcomes of experiments that explored different combinations of activation functions and
optimization algorithms across varying numbers of training epochs.
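As an illustration of how these activation/optimizer combinations can be swapped, the sketch below builds a "Simple CNN"-style Keras model with the activation and optimizer exposed as parameters, mirroring the combinations in Table 2; the exact layer sizes are assumptions, since the report does not publish the topology.

```python
# Sketch of the "Simple CNN" variant (model M1 in Table 2) with the hidden-layer
# activation and the optimizer exposed as parameters.
import tensorflow as tf
from tensorflow.keras import layers, models

def simple_cnn(activation: str = "relu", optimizer: str = "adam",
               num_classes: int = 4) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(224, 224, 3)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=activation),
        layers.Dense(num_classes, activation="softmax"),  # multi-class output
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# e.g. the four M1 rows of Table 2:
for act, opt in [("relu", "rmsprop"), ("relu", "adam"),
                 ("elu", "adamax"), ("elu", "nadam")]:
    _ = simple_cnn(activation=act, optimizer=opt)
```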

In Figure 4, the analysis of the Simple CNN model demonstrates its ability to effectively
identify and categorize various waste materials, thereby enhancing the efficiency of cleaning
operations.

Fig. 4. Simple CNN model



Fig. 5. CNN layers with augmented layers and dropout layers

Figure 5 illustrates the performance of the CNN Layers with Augmented and Dropout Layers
model, which accurately identifies and categorizes waste materials while using dropout layers
to address overfitting. Similarly, Figure 6 highlights the effectiveness of the CNN Layers with
Dropout Layers model in waste material recognition and classification, with dropout layers
incorporated to improve generalization.

Fig. 6. CNN layers with dropout layers



Fig. 7. CNN layers with augmented layers

Figure 7 showcases the CNN Layers with Augmented Layers model's proficiency in accurately
detecting waste materials, utilizing augmentation techniques to enhance feature representation.
Similarly, Figure 8 demonstrates the CNN Layers with Two Different Augmented Layers
model's effectiveness in waste material identification and categorization, leveraging multiple
augmentation layers for improved performance.

Fig. 8. CNN layers with two different augmented layers



Fig. 9. CNN layers with data augmented layers and more dense layers

In Figure 9, the examination of the CNN Layers with Data Augmented Layers and More Dense
Layers model highlights its effectiveness in waste recognition and classification, leveraging
additional dense layers to enhance feature extraction and representation.

The Deep Learning module focused on training and evaluating these ImageNet and custom
models on the WasteNet dataset to classify and recognize different waste materials effectively.
After comprehensive experimentation, the Faster RCNN algorithm was selected for further
evaluation due to its superior performance in object detection and classification tasks. Faster
RCNN demonstrated remarkable accuracy in identifying objects, making it ideal for enabling
the autonomous cleaning robot to detect and classify waste materials for efficient cleaning
operations.

Table 2 provides a detailed analysis of the accuracy scores achieved through the
implementation of diverse custom models and the deep learning Faster RCNN algorithm. These
custom models encompass variations in activation functions paired with different optimizers,
highlighting the comprehensive exploration of optimization strategies within the deep learning
framework.

Model   Model name                                                    Activation function   Optimizer   Accuracy   Validation accuracy

M1      Simple CNN                                                    ReLU                  RMSProp     97.12      97
M1      Simple CNN                                                    ReLU                  Adam        99.15      99.28
M1      Simple CNN                                                    ELU                   Adamax      98.43      64.49
M1      Simple CNN                                                    ELU                   N-Adam      99.76      99
M2      CNN Layers with Augmented and Dropout Layers                  ReLU                  RMSProp     89.87      86.87
M2      CNN Layers with Augmented and Dropout Layers                  ReLU                  Adam        97.58      98.55
M2      CNN Layers with Augmented and Dropout Layers                  ELU                   Adamax      94.3       98.55
M2      CNN Layers with Augmented and Dropout Layers                  ReLU                  N-Adam      96.74      99.64
M3      CNN Layers with Dropout Layers                                ReLU                  RMSProp     97.5       95.12
M3      CNN Layers with Dropout Layers                                ReLU                  Adam        94.81      97.82
M3      CNN Layers with Dropout Layers                                SeLU                  Adamax      97.06      95.68
M3      CNN Layers with Dropout Layers                                ReLU                  N-Adam      96.14      98.55
M4      CNN Layers with Augmented Layers                              ELU                   RMSProp     92.06      91.74
M4      CNN Layers with Augmented Layers                              ReLU                  Adam        96.01      94.57
M4      CNN Layers with Augmented Layers                              ELU                   Adamax      96.74      96.74
M4      CNN Layers with Augmented Layers                              ReLU                  N-Adam      97.71      100
M5      CNN Layers with Two Different Augmented Layers                ReLU                  RMSProp     91.81      96.44
M5      CNN Layers with Two Different Augmented Layers                ReLU                  Adam        96.01      98.55
M5      CNN Layers with Two Different Augmented Layers                ELU                   Adamax      90.46      93.84
M5      CNN Layers with Two Different Augmented Layers                ELU                   N-Adam      97.46      95.29
M6      CNN Layers with Data Augmented Layers and More Dense Layers   ReLU                  RMSProp     86.49      86.68
M6      CNN Layers with Data Augmented Layers and More Dense Layers   ReLU                  Adam        96.26      99.64
M6      CNN Layers with Data Augmented Layers and More Dense Layers   ELU                   Adamax      90.46      93.84
M6      CNN Layers with Data Augmented Layers and More Dense Layers   ELU                   N-Adam      97.46      95.29
M7      Faster RCNN                                                   ELU                   RMSProp     85.43      57.41
M7      Faster RCNN                                                   ReLU                  Adam        99.28      99.64
M7      Faster RCNN                                                   ReLU                  Adamax      98.79      96.01
M7      Faster RCNN                                                   ELU                   N-Adam      100        99.28

Table 2. Accuracy Scores

7.4 How the Faster RCNN algorithm works in sync with hardware components:

The technique for the proposed system entails a structured approach to create an autonomous
cleaning bot equipped with object detection capabilities using the cam module. The process
begins with assembling the necessary hardware components, including the ESP32 CAM
module, Arduino microcontroller, L298N motor driver module, ultrasonic sensor, DC motors,
DC pump, and a vacuum-suction mechanism, followed by establishing the electrical
connections and configuring the cam module for image capture. Captured images are
transmitted to a cloud-based server for processing, where a pre-trained machine learning model,
such as Faster R-CNN, detects and classifies objects based on trained data stored in the cloud.
The Arduino microcontroller interprets the detection results and controls the bot's movements,
using ultrasonic sensors for obstacle avoidance. Additionally, it activates the DC pump for
cleaning and the vacuum mechanism to collect debris efficiently. A user-friendly interface
displays results and allows customization.
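To make this sequence concrete, the host-side Python sketch below mirrors the capture → cloud inference → cleaning-action loop just described; on the bot itself this logic resides in the Embedded C firmware, and the endpoint URL, frame source, and waste-to-action mapping are illustrative assumptions.

```python
# Host-side Python mirror of the capture -> cloud inference -> action loop.
import time

import requests  # third-party HTTP client

CLOUD_URL = "http://cloud.example.com:8000/classify"  # hypothetical endpoint

ACTIONS = {  # assumed mapping from detected waste class to cleaning mode
    "coffee": "scrub", "red_juice": "scrub",
    "onion_peels": "vacuum", "paper": "vacuum",
}

def cycle(frame_bytes: bytes) -> str:
    """Send one camera frame to the cloud and choose a cleaning action."""
    reply = requests.post(CLOUD_URL, data=frame_bytes, timeout=5).json()
    detected = reply.get("detected", [])
    return ACTIONS.get(detected[0], "navigate") if detected else "navigate"

if __name__ == "__main__":
    while True:
        with open("latest_frame.jpg", "rb") as f:  # hypothetical captured frame
            print("action:", cycle(f.read()))
        time.sleep(2)  # pacing between capture cycles
```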

Table 3 presents the results analysis of the implementation of different ImageNet models and
the deep learning Faster RCNN algorithm. It showcases the accuracy percentages achieved
during training for each method. Upon experimenting with different deep learning algorithms,
the table below shows the results, providing insights into the performance of each model and
algorithm in accurately detecting and classifying objects.

Name of model     Training accuracy   Validation accuracy   Precision   Recall

Faster RCNN       99%                 98%                   98%         98%
ResNet101         92%                 93%                   93%         93%
ResNet50          97%                 93%                   93%         93%
MobileNetV2       98%                 37%                   14%         37%
EfficientNetB0    98%                 95%                   95%         95%
VGG16             65%                 62%                   38%         62%
VGG19             59%                 71%                   70%         71%

(No. of epochs for training = 100.)

Table 3. Result Analysis

In the IoT module, the focus was on understanding the functionality of hardware components
and explaining the architectures of the proposed system. This involved a detailed examination
of microcontrollers, motor drivers, sensors, cameras, and other components essential for the
robot's operation. Additionally, the communication protocols and data exchange mechanisms
between these hardware components were studied to ensure seamless integration and operation
within the proposed system architecture. Through this comprehensive exploration, the IoT
module provided insights into the hardware infrastructure necessary to support the autonomous
cleaning robot's functionality.

The circuit diagram of the autonomous cleaning bot is shown in Figure 10 & Figure 11.

Fig. 10. Arduino L298N Motor Driver Circuit

Fig. 11. ESP32-CAM Motor Control Circuit

The circuit diagram shows the electrical connections between the various components of the
system, such as the ESP32 CAM module, Arduino microcontroller, L298N motor driver
module, ultrasonic sensor, DC Motors, and DC Pump. It is a detailed schematic that helps to
understand the flow of electricity throughout the system and how the various components are
interconnected.

The cam module is connected to the Arduino microcontroller through the Serial Peripheral
Interface (SPI) pins. The ultrasonic sensor is connected to the Arduino microcontroller through
the digital pins. The DC motors are connected to the L298N motor driver module, which is
controlled by the Arduino microcontroller through the PWM pins. The DC pump is connected
directly to the Arduino microcontroller through the digital pins.
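
The sketch below illustrates this wiring in code, under assumed pin assignments: the L298N enable pins are driven with analogWrite() PWM duty cycles, the IN pins set each motor's direction, and an HC-SR04-style ultrasonic sensor is polled with pulseIn() for obstacle avoidance. It is a simplified illustration of the drive logic, not the prototype's exact firmware.

```cpp
// Illustrative drive logic for the L298N wiring described above.
// Pin numbers are assumptions, not the prototype's exact assignments.
const int ENA = 5,  IN1 = 4,  IN2 = 3;    // left motor channel (ENA is PWM)
const int ENB = 6,  IN3 = 9,  IN4 = 10;   // right motor channel (ENB is PWM)
const int TRIG = 11, ECHO = 12;           // HC-SR04-style ultrasonic sensor

void setup() {
  int outs[] = {ENA, IN1, IN2, ENB, IN3, IN4, TRIG};
  for (int p : outs) pinMode(p, OUTPUT);
  pinMode(ECHO, INPUT);
}

// Trigger one ultrasonic ping and convert the echo time to centimetres.
long distanceCm() {
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);   // 10 us trigger pulse
  digitalWrite(TRIG, LOW);
  long us = pulseIn(ECHO, HIGH, 30000UL);            // echo time, 30 ms timeout
  return us / 58;                                    // ~58 us per cm (round trip)
}

// Set each side's speed in the range -255..255: the sign selects direction
// via the IN pins, the magnitude sets the PWM duty cycle on the enable pins.
void drive(int left, int right) {
  digitalWrite(IN1, left  >= 0 ? HIGH : LOW);
  digitalWrite(IN2, left  >= 0 ? LOW  : HIGH);
  digitalWrite(IN3, right >= 0 ? HIGH : LOW);
  digitalWrite(IN4, right >= 0 ? LOW  : HIGH);
  analogWrite(ENA, abs(left));
  analogWrite(ENB, abs(right));
}

void loop() {
  long d = distanceCm();
  if (d > 0 && d < 20) {      // obstacle within 20 cm (0 means no echo)
    drive(-150, 150);         // pivot in place to avoid it
    delay(400);
  } else {
    drive(180, 180);          // path clear: move forward
  }
}
```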

The block diagram for the autonomous cleaning bot is shown in Figure 12.

Fig. 12. Schematic representation of AutoCleanAI

The block diagram, by contrast, provides a higher-level view of the system, showing the major
components and their relationships to each other. It gives an overview of the system's
architecture and of how the various components work together to achieve the desired
functionality.

The cam module captured images of the cleaning surface and sent them to the cloud, where the
trained data was stored. The Faster R-CNN algorithm tested the captured data with the trained
data and sent the results back to the microcontroller. The microcontroller then controlled the DC
Motors and DC Pump to clean the surface based on the detected objects.
Together, these visual representations provide a comprehensive understanding of the
autonomous cleaning bot system, including its electrical connections, functional architecture,
and program logic.

To evaluate the performance and capabilities of the autonomous cleaning bot, we conducted
several tests. First, we tested the accuracy of the object detection algorithm by using different
types of objects and obstacles in various environments. We also tested the cleaning efficiency
of the system by measuring the time and amount of water used to clean different types of
surfaces with varying degrees of debris.

Fig. 13 Scrubbing Results

In Figure 13, the scrubbing results showcase the system's efficacy in removing coffee stains and
red juice spills through automated mop control. Meanwhile, Figure 14 illustrates the vacuuming
results, demonstrating the system's ability to effectively clear debris such as onion peels and
paper, ensuring cleanliness and maintenance of the environment.

Fig. 14 Vacuuming Results

7.5 Object Detection Performance Evaluation:

The primary focus of the evaluation lies in the framework's object detection, facilitated by the
ESP32 camera and the Faster RCNN algorithm. By examining the accuracy and reliability of
object identification and classification, AutoCleanAI's proficiency in discerning various objects
within its environment is thoroughly assessed.

7.6 Adaptability Assessment in Varied Environments:

Another critical aspect of the evaluation is the framework's adaptability to changing
environmental conditions. Through robustness testing, AutoCleanAI's resilience in varied
settings, such as cluttered spaces and obstacle-rich environments, is examined. This
evaluation ensures that the framework can operate effectively in real-world conditions,
regardless of environmental challenges.

7.7 Processing Efficiency and Real-Time Capabilities:

The evaluation of processing efficiency encompasses AutoCleanAI's real-time processing
capabilities and response times. By measuring the processing speed of the Faster RCNN
algorithm and assessing cloud-based processing latency and response time, the framework's
ability to deliver prompt and dynamic responses to environmental factors is evaluated. This
analysis ensures that the framework can swiftly and accurately identify objects and execute
cleaning tasks on time.
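
One simple way to quantify the cloud round trip is to time each request on the ESP32 itself. The probe below is an illustrative sketch, assuming a hypothetical /ping health-check route on the same server used earlier; it times a lightweight request with millis() and reports a running average. In a full measurement, the same timing would wrap the actual image POST.

```cpp
// Round-trip latency probe for the cloud detection server. The URL and
// credentials are hypothetical placeholders, as in the earlier sketch.
#include <WiFi.h>
#include <HTTPClient.h>

const char* PING_URL = "http://192.168.1.10:5000/ping";  // hypothetical route

unsigned long totalMs = 0;   // accumulated round-trip time
unsigned int  samples = 0;   // number of successful probes

void setup() {
  Serial.begin(115200);
  WiFi.begin("your-ssid", "your-password");   // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(500);
}

void loop() {
  HTTPClient http;
  http.begin(PING_URL);
  unsigned long t0 = millis();
  int status = http.GET();                  // lightweight probe request
  unsigned long dt = millis() - t0;
  http.end();
  if (status > 0) {                         // negative values are link errors
    totalMs += dt;
    samples++;
    Serial.printf("round trip %lu ms, mean %lu ms\n", dt, totalMs / samples);
  }
  delay(1000);                              // one probe per second
}
```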

7.8 Assessment of Cleaning Performance:

Finally, the evaluation assesses the overall cleaning performance of AutoCleanAI. By
measuring its ability to detect and remove debris, including papers and other common waste
items, the framework's efficacy in fulfilling its cleaning objectives is evaluated. This analysis
provides insights into AutoCleanAI's practical utility and effectiveness in real-world cleaning
scenarios.

7.9 Prototype:

The prototype encompasses several essential hardware components. This compact and robust
prototype is designed to demonstrate the functionality and feasibility of our AutoCleanAI.
Through careful integration and testing, the prototype showcases seamless interaction and
efficient cleaning performance. User interaction is facilitated through a user-friendly interface
accessible via the website, allowing users to adjust settings, initiate cleaning routines, and check
progress in real-time. The prototype's design prioritizes user accessibility and ease of use,
ensuring an efficient cleaning experience.

Fig. 15 Prototype Design of AutoCleanAI Fig. 16 Working of the AutoCleanAI

Figure 15 illustrates the prototype design, showcasing the integration of the cam module
within the autonomous cleaning bot, along with the Arduino microcontroller and
accompanying hardware components. Figure 16 shows the prototype in operation, captured
from its demo video.

Table 4 provides a comparative overview of the key features of various autonomous cleaning
devices: AutoCleanAI, Roomba, Braava, and Scooba. Each device performs different cleaning
tasks, ranging from vacuuming to scrubbing wet areas. AutoCleanAI distinguishes itself by
combining the scrubbing of wet areas with the vacuuming of waste particles. All four devices
feature object detection capabilities, enhancing their adaptability and efficiency, and all offer
some degree of autonomous control, with AutoCleanAI operating fully autonomously. The
technologies used vary across the devices, with AutoCleanAI utilizing an ultrasonic (UR)
sensor, the ESP32-CAM, and Faster RCNN. In terms of pricing, AutoCleanAI provides a cost-
effective solution compared to the others while offering comparable functionality.

Features | AutoCleanAI | Roomba | Braava | Scooba
Cleaning Functions | Scrubbing wet areas, vacuuming waste particles | Vacuuming | Mopping | Scrubbing, vacuuming
Object Detection | Yes | Yes | Yes | Yes
Type of Control | Autonomous | Autonomous | Autonomous | Semi-autonomous
Technologies Used | UR sensor, ESP32-CAM, and Faster RCNN | IR, RF, and fixed-charging mechanism | IR with virtual wall accessories | IR with virtual wall accessories for industrial cleaning
Price | $180 | $500 | $700 | $500

Table 4. Comparison of AutoCleanAI with Available Products.

Note: Our understanding of evaluation methodologies and analytics techniques was cultivated through the
Predictive Analytics course (CSE18R257). Leveraging this knowledge, we systematically assessed the
prototype to ensure that the algorithm meets the desired objectives and performs optimally in real-world
scenarios.

CHAPTER – VIII

CONCLUSION & FUTURE SCOPE

In conclusion, the development of AutoCleanAI marks a significant advancement in
autonomous cleaning technology, achieving an accuracy of 99% with the Faster RCNN
algorithm. This result underscores the system's effectiveness in identifying and classifying
waste particles, ensuring efficient and thorough cleaning operations. Moreover, the successful
patent filing for AutoCleanAI and its application for startup status signify our commitment to
innovation and entrepreneurial endeavors in the field of robotics. With a focus on two core
modules, deep learning and IoT, we have leveraged a diverse array of ImageNet models,
custom algorithms, and cutting-edge algorithms such as Faster RCNN to optimize system
performance. The integration of wire-to-wire connections and data mechanisms in the IoT
modules ensures seamless operation and data acquisition. As a result, AutoCleanAI exhibits
significant efficiency across various settings, including industrial facilities, residential areas, and
hospitals, highlighting its versatility and adaptability in addressing diverse cleaning needs.

Looking ahead, the future scope for AutoCleanAI holds immense potential for further
advancement and refinement. While the system excels in various environments, challenges
remain, particularly regarding surface compatibility on substrates like sand and roads. To
address this limitation, future iterations will focus on independent surface adaptability and
enhanced autonomy, allowing AutoCleanAI to operate more effectively and efficiently across
diverse terrains. Additionally, ongoing efforts will be directed toward expanding the system's
training dataset to include a broader spectrum of waste particles, enabling comprehensive waste
detection and classification. By enhancing autonomy, increasing speed, and broadening the
range of detectable waste particles, AutoCleanAI will evolve into a highly proficient cleaning
agent, improving cleaning practices and setting new standards for cleanliness and efficiency in
various sectors.

REFERENCES

1. Pranav Iyengar and Ashish Umbarkar, "Development of Autonomous Indoor Floor Cleaning Robot," International Journal of Recent Technology and Engineering (IJRTE), vol. 11, Issue 3, September 2022, pp. 6-10.

2. Aman Nikam, Rajkumar Pandey and Anuradha Dandwate, "Autonomous Floor Cleaning Robot (Navigation)," International Journal of Science & Engineering Development Research, vol. 5, Issue 3, 2020, pp. 77-82.

3. Parth Vibhandik and Zaid Khan, "Autonomous Vacuum Cleaning Robot using Sonar Navigation and SLAM," International Research Journal of Engineering and Technology (IRJET), vol. 8, Issue 7, July 2021, p. 1905.

4. B. Sonia and P. Ganesh, "Design and Implementation of Floor Cleaning Robot Using IOT," International Journal of Creative Research Thoughts (IJCRT), vol. 9, Issue 1, January 2021, pp. 246-249.

5. R. H. Krishnan, B. A. Naik, G. G. Patil, P. Pal and S. K. Singh, "AI Based Autonomous Room Cleaning Bot," 2022 International Conference on Futuristic Technologies (INCOFT), Belgaum, India, 2022, pp. 1-4, doi: 10.1109/INCOFT55651.2022.10094492.

6. Ramalingam, B., Le, A.V., Lin, Z. et al., "Optimal selective floor cleaning using deep learning algorithms and reconfigurable robot hTetro," Sci Rep 12, 15938 (2022).

7. Canedo, Daniel, Pedro Fonseca, Petia Georgieva and António J. R. Neves, "A Deep Learning-Based Dirt Detection Computer Vision System for Floor-Cleaning Robots with Improved Data Collection," Technologies 9, no. 4: 94, 2021.

8. Patil, Swati, S. Yelmar, S. Yedekar, S. Mhatre and V. Pawashe, "Autonomous Robotic Vacuum Cleaner," International Research Journal of Innovations in Engineering and Technology (IRJIET), 2021, pp. 142-146.

9. Uman Khalid, Muhammad Faizan Baloch, Haseeb Haider, Muhammad Usman Sardar, Muhammad Faisal Khan, Abdul Basit Zia and Tahseen Amin Khan Qasuria, "Smart Floor Cleaning Robot (CLEAR)," IEEE Standards University E-Magazine, vol. 5, Issue 3, March 2023, pp. 145-151.

10. R. Raja Subramanian and V. Vasudevan, "A deep genetic algorithm for human activity recognition leveraging fog computing frameworks," Journal of Visual Communication and Image Representation, Volume 77, 2021, 103132, ISSN 1047-3203, https://doi.org/10.1016/j.jvcir.2021.103132.

11. Raja Subramanian, R. and Vasudevan, V., "HARfog: An Ensemble Deep Learning Model for Activity Recognition Leveraging IoT and Fog Architectures," in Gunjan V.K., Zurada J.M. (eds), Modern Approaches in Machine Learning and Cognitive Science: A Walkthrough, Studies in Computational Intelligence, vol. 956, Springer, Cham, 2021, https://doi.org/10.1007/978-3-030-68291-0_11.

12. Burman, Vibha and Kumar, Ravinder, "IoT-Enabled Automatic Floor Cleaning Robot," Recent Advances in Mechanical Engineering, 2021.

13. S. H. Haruna, A. Umar, Z. Haruna, O.-O. Ajayi, A. Y. Zubairu and R. Rayyan, "Development of an Autonomous Floor Mopping Robot Controller using Android Application," 2022 5th Information Technology for Education and Development (ITED), Abuja, Nigeria, 2022, pp. 1-6, doi: 10.1109/ITED56637.2022.10051505.

14. K. Saravanan, E. Siva Prasanna, R. Sattish, R. Udhaya Abinesh and P. Anandakumar, "Automatic Floor Cleaning Robot," 2022 IEEE International Conference on Data Science and Information System (ICDSIS), Hassan, India, 2022, pp. 1-5, doi: 10.1109/ICDSIS55133.2022.9915986.

15. S. Monika, K. Aruna Manjusha, S. V. S. Prasad and B. Naresh, "Design and Implementation of Smart Floor Cleaning Robot using Android App," International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume 8, Issue 4S2, March 2019.

16. R. R. Subramanian, A. Abilakshmi, T. Kalyani, P. Sunayana, G. Monalisa and C. V. Chaithanya, "Design and Evaluation of a Deep Learning Aided Approach for Kidney Stone Detection in CT scan Images," 2023 International Conference on Applied Intelligence and Sustainable Computing (ICAISC), Dharwad, India, 2023, pp. 1-6, doi: 10.1109/ICAISC58445.2023.10199835.

17. R. R. Subramanian, L. Ravikiran, K. S. Vamsi, K. M. Feroz, K. Logadharani and M. Varshitha, "BreastNet: Design and Evaluation of a Deep Learning model for recognizing Breast Cancer from Images," 2022 6th International Conference on Electronics, Communication and Aerospace Technology, Coimbatore, India, 2022, pp. 960-965, doi: 10.1109/ICECA55336.2022.10009187.

18. R. R. Subramanian, K. N. Kumar Reddy, K. J. Surya, G. N. Murthy, M. Abhiram and P. S. Geethika, "Autonomous Obstacle and Object Detection for Visually Impaired With Audio Aid," 2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN), Vellore, India, 2023, pp. 1-6, doi: 10.1109/ViTECoN58111.2023.10157184.

19. R. R. Subramanian, N. V. A. S. Kumar, N. S. Sundar, N. H. Vardhan, M. U. M. Reddy and M. V. S. K. Reddy, "FlowerBot: A Deep Learning aided Robotic Process to detect and pluck flowers," 2022 6th International Conference on Electronics, Communication and Aerospace Technology, Coimbatore, India, 2022, pp. 1153-1157, doi: 10.1109/ICECA55336.2022.10009077.

20. R. Raja Subramanian and V. Vasudevan, "HARDeep: design and evaluation of a deep ensemble model for human activity recognition," International Journal of Innovative Computing and Applications, Vol. 14, No. 3, 2023.

21. Khaleda Sh. Rejab and Sara Mazin Naji Abd-Al Hussain, "Design and Implementation of Intelligent Mobile Robot based on Microcontroller by using Three Ultrasonic Sensors," International Journal of Current Engineering and Technology, vol. 7, No. 6, November 2017, pp. 1-6.

22. Anwer Sabah Ahmed, Heyam A. Marzog and Laith Ali Abdul-Rahaim, "Design and Implement of Robotic Arm and control of moving via IoT with Arduino ESP32," International Journal of Electrical and Computer Engineering (IJECE), vol. 9, No. 4, October 2021, pp. 101-110.

23. Md. Kamruzzaman Russel and Muhibul Haque Bhuyan, "Microcontroller Based DC Motor Speed Control Using PWM Technique," International Conference on Electrical, Computer and Telecommunication Engineering, December 2012, p. 522.

24. Wahyu Rahmaniar and Ari Hernawan, "Real-Time Human Detection Using Deep Learning on Embedded Platforms: A Review," Journal of Robotics and Control (JRC), vol. 2, Issue 6, November 2021, pp. 462-468.

25. R. R. Subramanian, V. S. N. S. Y. Kommuri, V. C. B. Metta and N. V. R. Gopalabhatla, "Cleanobot: Design of an Autonomous Bot for Cleaning Surfaces Leveraging Deep Learning and IoT Frameworks," 2023 International Conference on Applied Intelligence and Sustainable Computing (ICAISC), Dharwad, India, 2023, pp. 1-6.
2023 International Conference on Applied Intelligence and Sustainable Computing (ICAISC)

Cleanobot: Design of an Autonomous Bot for Cleaning Surfaces Leveraging Deep Learning and IoT Frameworks

R. Raja Subramanian, V S N S Yashwanth Kommuri, Venkata Chaitanya Bharadwaj Metta, and Naga Venkata Rajaram Gopalabhatla
Computer Science and Engineering, Kalasalingam Academy of Research and Education, Virudhunagar, India

Abstract— The autonomous cleaning bot is a state-of-the-art robotic system that utilizes advanced cutting-edge technology to automate the cleaning process. This paper presents the design and implementation of an autonomous cleaning bot that efficiently and effectively cleans a variety of indoor spaces, from homes to commercial buildings, using a combination of sensors, algorithms, and cleaning tools. The bot is capable of navigating through complex environments and detecting obstacles in its path, making it an ideal solution for areas that require regular cleaning. It also features a user-friendly interface that allows for easy customization of cleaning schedules and zones, as well as real-time monitoring of the bot's progress. The experimentation and results demonstrate the effectiveness of the system in autonomously cleaning and detecting objects. The bot successfully sends captured data to the cloud for analysis, and the results accurately indicate the type of object detected. With its rechargeable battery, modular design, and easy maintenance, the autonomous cleaning bot is a cost-effective, efficient, and eco-friendly solution that enhances hygiene and safety standards while reducing the need for human labor. The system's design and implementation have been discussed in detail, including the integration of all hardware components. The proposed system has demonstrated its effectiveness in cleaning and detecting objects autonomously. The paper concludes with potential future improvements and research directions for the autonomous cleaning bot.

Keywords— object detection, autonomous cleaning, microcontroller, sensor fusion.

I. INTRODUCTION

Autonomous cleaning bots have become increasingly popular due to their ability to efficiently clean and maintain large areas without human intervention. They can be used in various settings, including homes, offices, hospitals, and other commercial buildings. In this proposed system, the main hardware components include an ESP32 CAM module, an Arduino microcontroller, an ultrasonic sensor, DC motors, and a DC pump. The ESP32 CAM module serves as the main image processing unit, while the Arduino microcontroller is the main control unit for the autonomous cleaning bot that controls the DC motors and DC pump. The ultrasonic sensor is used to detect objects and obstacles in the bot's path.

One of the main use cases of this proposed system is in commercial and industrial cleaning applications, where a large area needs to be maintained efficiently and without human intervention. Another potential use case is in home cleaning systems, where the bot can be used to clean floors, carpets, and other surfaces. The applications of an autonomous cleaning bot are numerous. It can help reduce the workload of cleaning staff, increase productivity, and reduce the use of chemicals and water. Additionally, it can help maintain a clean and healthy environment by removing dirt, dust, and allergens.

However, one of the main challenges in implementing this system is the need for robust image processing and object recognition algorithms, as well as the ability to navigate through different environments without damaging the bot or the objects in its path. Overall, the proposed autonomous cleaning bot system offers a promising solution for automated cleaning and maintenance tasks in a variety of settings, but further research and development are needed to improve its functionality and usability in real-world applications.

The rest of the paper is structured as follows: Section 2 provides a comprehensive literature survey on deep learning models, focusing specifically on the Faster R-CNN algorithm, which is used in the proposed system. The proposed system is described in Section 3, along with specifics on how the ESP32 CAM module is combined with other hardware elements to support object recognition and image transmission to the cloud. Section 4 outlines the experimental setup and methodology employed to evaluate the performance of the system. Section 5 then discusses the quantitative and qualitative results and highlights the contributions of the authors. Finally, the authors address the conclusions, limitations, and future potential of the suggested system.

979-8-3503-2379-5/23/$31.00 ©2023 IEEE


Acceptance Notification, IESIA 2024 (via Microsoft CMT): Paper ID / Submission ID 80, "Auto Clean AI: A Deep Learning-Enabled Autonomous Surface Cleaning Bot Integrated with IoT Technology", was accepted for presentation as a full paper at the international conference in Kolkata, India. All accepted and presented papers are to be published in the Springer book series "Studies in Autonomic, Data-driven and Industrial Computing", subject to Springer's final publication decision. The camera-ready version was due within seven days, registration was required before the deadline, and both online and physical presentation modes were available (hybrid conference; website: https://iesia.smartsociety.org/). Review 1 recommended acceptance, with comments to mention the novelty of the work and to add a few more recent references.
NPTEL course completion certificate: V S N S Yashwanth Kommuri, Cloud Computing (12-week course, Jul-Oct 2023), consolidated score 79 (assignments 25/25, exam 53.57/75), Roll No. NPTEL23CS89S636300445; number of credits recommended: 3 or 4.

Originality report for AutoCleanAI-4.pdf: similarity index 12%. Primary sources: (1) R. Raja Subramanian, V S N S Yashwanth Kommuri, Venkata Chaitanya Bharadwaj Metta, Naga Venkata Rajaram Gopalabhatla, "Cleanobot: Design of an Autonomous Bot for Cleaning Surfaces Leveraging Deep Learning and IoT Frameworks", 2023 ICAISC (Crossref): 240 words, 8%; (2) ar.kalasalingam.ac.in (Internet): 20 words, 1%; (3) www.ncbi.nlm.nih.gov (Internet): 17 words, 1%; (4) "Principles of Internet of Things (IoT) Ecosystem: Insight Paradigm", Springer Science and Business Media LLC, 2020 (Crossref): 13 words, <1%.
INTERNAL QUALITY ASSURANCE CELL
PROJECT AUDIT REPORT

This is to certify that the project work entitled "AutoCleanAI: A Deep Learning-Enabled
Autonomous Surface Cleaning Bot Integrated with IoT Technology", categorized as an internal
project, done by V S N S YASHWANTH KOMMURI and G N V RAJARAM of the Computer
Science Department under the guidance of Dr. R. RAJA SUBRAMANIAN during the even
semester of the academic year 2023-2024, is as per the quality guidelines specified by IQAC.

Quality Grade

Deputy Dean (IQAC)

Administrative Quality Assurance Dean (IQAC)
