
Thesis: Smart Agricultural Robot Based on Computer Vision

by
Mbah Mohamed Omar 147
Moussa CoulETete 140

Department of Electrical and Electronic Engineering


Islamic University of Technology (IUT)
Gazipur, Bangladesh

June 2024
Smart Agricultural Robot Based on Computer Vision

Approved by:

——————————————–
Dr. GOLAM SAROWAR
Supervisor and [Associate/Assistant] Professor,
Department of Electrical and Electronic Engineering,
Islamic University of Technology (IUT),
Boardbazar, Gazipur-1704.

Date: . . . . . .
Contents

List of Tables 3

List of Figures 4

List of Acronyms 5

1 Introduction 6
1.0.1 BASIC FUNCTIONALITIES OF AGRICULTURAL HARVESTING ROBOT 6
1.0.2 DIFFERENT ASPECTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.0.3 BACKGROUND AND MOTIVATION . . . . . . . . . . . . . . . . . . . . . . 10

2 Overview 12
2.0.1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.0.2 Topic Relevance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.0.3 Research Needs and Gaps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.0.4 Grasping and Acquisition Methodologies . . . . . . . . . . . . . . . . . . . . 14
2.0.5 Autonomy and Decision-Making . . . . . . . . . . . . . . . . . . . . . . . . 14
2.0.6 Integration of Farm Management Systems . . . . . . . . . . . . . . . . . . . 14
2.0.7 Energy efficiency and sustainability . . . . . . . . . . . . . . . . . . . . . . . 14
2.0.8 Economic and Social Implications . . . . . . . . . . . . . . . . . . . . . . . . 14

3 Methodology 16
3.1 The requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.1 Research and Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.2 Requirement Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Design and Hardware Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.1 System Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.2 Integration of Hardware and Software . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Hardware Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3.1 Power Supply: 12V Battery of 3300mAh . . . . . . . . . . . . . . . . . . . . . 21
3.3.2 Motor Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3.3 Power Management System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3.4 ESP32 Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.5 Pan-Tilt Mechanism with SG90 Servo Motors for ESP32-CAM . . . . . . . . 22
3.3.6 Ultrasonic Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3.7 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.4 Software and Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.4.1 YOLO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.4.2 Example of Online and Real-Time Tracking with Convolutional Neural Net-
works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4.3 Example : Fruit Detection Using YOLO . . . . . . . . . . . . . . . . . . . . 28
3.4.4 Fruit Counting in Real-Time using an Object Tracking Algorithm and YOLO 29

4 Results and Discussion 33


4.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

5 Conclusion 34
5.1 here we go . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

6 Demonstration of Outcome Based Education (OBE) 35


6.0.1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.0.2 COURSE OUTCOME (COs) ADDRESSED . . . . . . . . . . . . . . . . . . 35
6.0.3 ASPECTS OF PROGRAM OUTCOMES (POS) ADDRESSED . . . . . . . 37
6.0.4 KNOWLEDGE PROFILES (K3 – K8) ADDRESSED . . . . . . . . . . . . . 41
6.0.5 ATTRIBUTES OF RANGES OF COMPLEX ENGINEERING PROBLEM
SOLVING (P1 – P7) ADDRESSED . . . . . . . . . . . . . . . . . . . . . . . 43
6.0.6 ATTRIBUTES OF RANGES OF COMPLEX ENGINEERING ACTIVITIES
(A1 – A5) ADDRESSED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

List of Tables

3.1 Model performance evaluation with Raw dataset under 416×416 pixels’ resolution. . 31
3.2 Model performance evaluation with 0.5 dataset under 416×416 pixels’ resolution. . . 32
3.3 Model performance evaluation with 0.25 dataset under 416×416 pixels’ resolution. . 32

6.1 Summary of different types of harvesting robots and their key features. . . . . . . . . 36
6.2 Summary of different types of harvesting robots and their key features. . . . . . . . . 38
6.3 Table summarizing COs, POs, and Explanation/Justification for agricultural har-
vesting robots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.4 Table summarizing the Knowledge Profile (Attribute) for the engineering discipline. 41
6.5 Table summarizing the Explanation/Justification for each Knowledge Profile (K) in
the context of Smart Agricultural Harvesting Robots. . . . . . . . . . . . . . . . . . 42
6.6 Table summarizing the Range of Complex Engineering Problem Solving and corre-
sponding attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.7 Table summarizing the Explanation/Justification for each P attribute in the context
of developing agricultural harvesting robots. . . . . . . . . . . . . . . . . . . . . . . . 44
6.8 Table summarizing the Range of Complex Engineering Activities and corresponding
attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.9 Table summarizing the Explanation/Justification for each A attribute in the context
of developing agricultural harvesting robots. . . . . . . . . . . . . . . . . . . . . . . . 46

List of Figures

1.1 Digital farming with agricultural robotics (source: www.AdaptiveAgroTech.com ). . 7


1.2 Various types of harvesting robots, including those for plums, apples, sweet peppers,
strawberries, litchis, tomatoes, and kiwifruits. Photos reprinted with permission
from respective sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2 Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Robot Arm Modeling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4 ESP32 pinout (source: lastminuteengineer.com) . . . . . . . . . . . . . . . . . . . 22
3.5 Fusion 360 CAD model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.6 Fusion 360 model 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.7 Assembled robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.8 Assembled robot with navigation capabilities and image processing . . . . . . . . . 26
3.9 (Top) General workflow of YOLO; (Bottom) YOLO architecture. . . . . . . . . . . 27
3.10 The architecture of Deep SORT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.11 YOLOv2 architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.12 YOLO-fruits model flowchart for dataset, training and detection process. . . . . . . 30

List of Acronyms

CV Computer Vision
IUT Islamic University of Technology
YOLO You Only Look Once
CNN Convolutional Neural Network
PWM Pulse Width Modulation
CPU Central Processing Unit

Chapter 1

Introduction

1.0.1 BASIC FUNCTIONALITIES OF AGRICULTURAL HARVESTING ROBOT
The agriculture industry has undergone a dramatic transition in recent years, thanks to advances in
robotic technology. Among these advancements, harvesting robots stand out as a game-changing so-
lution to modern farming difficulties. Harvesting robots have the potential to revolutionize agricul-
tural operations by automating the labor-intensive activity of fruit picking, increasing production,
and mitigating manpower shortages.
Traditionally, agriculture has relied primarily on manual labor for chores like harvesting, which
are frequently monotonous, physically demanding, and vulnerable to seasonal fluctuations in work-
force availability. However, as demographics change and urbanization increases, the availability
of trained agricultural workers has decreased, creating substantial problems for farm operations
around the world. Harvesting robots are a cutting-edge solution that has arisen in response to
these difficulties. They use sophisticated technology like robotics, artificial intelligence, and ma-
chine learning to do jobs that have historically been performed by human workers. These robots
are outfitted with advanced vision systems, motion delivery systems, and end-effectors that are
specifically engineered to precisely identify, find, and pick ripe fruits. Harvesting robots have revo-
lutionized agricultural operations and brought about many advantages for farmers, such as higher
crop yields, lower labor costs, and enhanced operational efficiency. Furthermore, these robots could
solve important problems including the lack of labor, labor reliance, and the requirement for sus-
tainable farming methods. In light of this, the purpose of this study is to present a thorough
analysis of harvesting robots, examining their features, potential applications, and effects on the
agricultural industry. We aim to shed light on how harvesting robots could revolutionize modern
agriculture by analyzing the most recent developments in robotic technologies and how they are
being applied to fruit harvesting. We intend to stimulate more research, development, and use
of harvesting robots to meet the changing needs of the agricultural business by gaining a greater
understanding of these cutting-edge technologies.

1.0.2 DIFFERENT ASPECTS


Harvesting robots, which integrate a variety of complex systems to solve the urgent problems of
labor shortages and resource optimization in agriculture, constitute a significant achievement in

6
agricultural technology. These robots essential features can be combined into several connected
parts.

Figure 1.1: Digital farming with agricultural robotics (source: www.AdaptiveAgroTech.com ).

Vision Systems:
Harvesting robots rely on modern vision systems that use a variety of sensors such as RGB cameras,
multispectral cameras, LiDAR, and depth cameras to perform well. These sensors, when paired
with sophisticated image processing techniques and machine learning algorithms, allow robots to
properly detect and localize fruits in a variety of demanding settings.
Feature extraction and AI integration:
The visual systems extract critical properties such as color, texture, and shape to distinguish
between ripe and unripe fruit. AI and machine learning, particularly neural networks, improve
detection accuracy by responding to changing lighting and occlusions, increasing overall harvesting
efficiency.
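The color-based ripeness discrimination described above can be illustrated with a minimal sketch. This is a hedged toy example, not the thesis's actual pipeline: the hue thresholds are hypothetical values for a red fruit such as a tomato, and the pixel format is an assumed (hue, saturation, value) triple.

```python
# Illustrative sketch: distinguishing ripe from unripe fruit by a simple
# color feature (mean hue), as a vision system might do as a first pass.
# Thresholds are hypothetical, not measured values from this work.

def mean_hue(pixels):
    """Average hue (0-360 degrees) over a list of (h, s, v) pixels."""
    return sum(p[0] for p in pixels) / len(pixels)

def classify_ripeness(pixels, ripe_hue_max=25, unripe_hue_min=70):
    """Label a fruit region 'ripe' (reddish), 'unripe' (greenish),
    or 'uncertain' based on its average hue."""
    h = mean_hue(pixels)
    if h <= ripe_hue_max or h >= 330:   # red wraps around the hue circle
        return "ripe"
    if h >= unripe_hue_min:
        return "unripe"
    return "uncertain"

# A reddish region (low hue) is classified as ripe.
red_region = [(10, 200, 180), (15, 210, 170), (5, 190, 160)]
print(classify_ripeness(red_region))   # ripe
```

A deployed system would of course combine such color cues with texture and shape features and, as noted above, a trained neural network rather than fixed thresholds.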
Motion Delivery Systems:
Robots can travel fields and orchards with the use of effective mobility devices, including wheeled
or tracked bases. Robust motion planning algorithms guarantee accurate positioning, allowing the
robot to access and retrieve fruits situated in different parts of the surroundings.
Manipulation and Control:
The real harvesting process depends on robotic grippers and arms that are managed by accurate
motion delivery systems. Sophisticated control algorithms and feedback mechanisms are necessary
for these devices to perform the delicate operation of harvesting fruits without causing damage.

End-Effectors:
The end-effector, which consists of harvesting tools and grippers, needs to be thoughtfully made
in order to grasp and separate fruits without hurting them. Various end-effector types—such as
vacuum-based or soft grippers—are employed based on the fruit variety and plant structure. It is important that the fruit be detached from the plant with as little damage as possible, which requires a harmonious design between the control systems and the end-effectors.
Environmental Adaptability:
Harvesting robots need to be able to function well in a variety of environmental settings, including
sun and shade. The robots can continue to operate effectively and adjust to these changes because
of their powerful vision and control systems.
Obstacle Avoidance:
To ensure smooth operation and prevent harm to the robot and the plants, effective obstacle
recognition and avoidance systems are required in the presence of barriers such as leaves, branches,
and support structures.
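The obstacle-avoidance requirement above can be sketched as a simple decision rule over range readings. This is an illustrative assumption, not the robot's actual controller: the distance thresholds and command names are made up for the example.

```python
# Minimal sketch of the threshold logic an obstacle-avoidance routine
# might apply to ultrasonic range readings. Thresholds and command
# names are illustrative, not values from the robot described here.

def avoidance_action(distance_cm, stop_at=15.0, slow_at=40.0):
    """Map one ultrasonic distance reading (cm) to a drive command."""
    if distance_cm < stop_at:
        return "stop_and_turn"   # obstacle (branch, support post) too close
    if distance_cm < slow_at:
        return "slow"            # approach carefully
    return "forward"

for reading in (120.0, 30.0, 8.0):
    print(reading, "->", avoidance_action(reading))
```

A real implementation would also filter noisy readings (e.g., a median over several pings) before acting on them.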
Artificial Intelligence and Machine Learning:
Artificial intelligence (AI) and machine learning algorithms greatly increase the robot’s capacity to
precisely identify and categorize fruits. Additionally, these technologies improve decision-making
by enabling the robot to modify and optimize its behaviors in response to real-time inputs. During
the harvesting process, fast and effective decision-making is made possible by the integration of
edge computing technologies and high-performance processors.
Hardware and Software Integration:
The harvesting robot’s general architecture comprises the integration of advanced software systems
(control algorithms, data processing frameworks) with a variety of hardware components (sensors,
actuators, and processors). The smooth operation and coordination of many subsystems are guar-
anteed by effective integration. Advanced communication technologies, such as wireless connection
and the Internet of Things (IoT), provide effective data interchange and control commands, guar-
anteeing the harmonious operation of the robot’s systems.
Economic and Practical Considerations:
Cost-Effectiveness and Deployment: Using creative designs and materials is one way to keep development and deployment costs low. Practical deployment in commercial orchards requires maintaining excellent performance under real-world conditions, along with ease of use and reliability.
Effects on Farming:
Harvesting robots have the potential to greatly boost labor productivity, lessen the need for seasonal
labor, and enhance overall agricultural efficiency. By maximizing resource utilization and reducing waste, they support sustainable farming practices and help address the labor shortage.

Figure 1.2: Various types of harvesting robots, including those for plums, apples, sweet peppers,
strawberries, litchis, tomatoes, and kiwifruits. Photos reprinted with permission from respective
sources.

Several cutting-edge technologies come together to form harvesting robots, and each one is essen-
tial to enable accurate and productive agricultural operations. Robust motion delivery mechanisms,
finely built end-effectors, advanced AI and machine learning, and sophisticated vision systems are
all integrated into these robots, which provide a practical answer to the labor shortages and re-
source optimization issues facing the agriculture sector. To overcome current obstacles and fully
achieve the promise of harvesting robots in enhancing agricultural productivity and sustainability,
more research and development are necessary.

1.0.3 BACKGROUND AND MOTIVATION

BACKGROUND

In an effort to alleviate labor shortages and improve fruit picking efficiency, harvesting robots are
a noteworthy development in agricultural technology. Usually installed on movable platforms, these
robotic systems are outfitted with manipulators and end-effectors and employ advanced machine
vision techniques to precisely choose ripe fruits. This technique, called "selective harvesting,"
combines the effectiveness of machinery with the pickers’ customary selectivity (Bac et al., 2014b).
Robotic systems have drawn growing attention from academia and industry as the possibility that
they could completely replace human labor grows (Sanders, 2005).
Robotic fruit picking has been an area of research and development since the late 1960s. Despite decades of work, practical implementation in commercial orchards is still limited because of a number of important obstacles. These robots' relatively sluggish harvesting speeds
and high production costs are the main challenges. The robots’ capacity to recognize and choose
ripe fruits is mostly dependent on machine vision systems, which are frequently inadequate. These
methods are not very good at identifying fruits that are darkened, hidden by branches and leaves,
or green against a green background. They can even mistakenly classify yellow leaves as fruits on
occasion (Bac et al., 2014b).
Recent advancements in robotics and artificial intelligence (AI) are making it possible to create
more effective harvesting machines. The goal of developing modern AI-powered machine vision
systems is to improve fruit identification and selection accuracy. To overcome the limitations of
current machine vision technologies, a key focus is on neural network models trained to distinguish
various fruit features. However, despite these advancements, the technology isn’t yet advanced
enough for widespread commercial application. To match the productivity of human labor, these
robots need to increase in both speed and accuracy. In agriculture, there are numerous potential
benefits of robotic harvesting. Harvesting remains one of the most labor-intensive activities in agriculture, with only 15% of the work automated and a significant portion still done by hand.
Because of urbanization, it is now harder to get seasonal labor, which has led to a scarcity of crops
and highlighted the urgent need for automated solutions (FAO, 1976).
Increased yield, a solution to the agricultural shortage, and a significant reduction in labor costs
are all possible with robotic harvesting. Incorporating these robots into existing cultivation systems
could further boost their efficiency, making them a viable and practical solution for agricultural
harvests in the future.

MOTIVATION
A combination of economic, labor market, and technological factors drives the motivation for de-
veloping and deploying harvesting robots.
Labor Shortages:
Horticulture is one of agriculture’s most labor-intensive businesses. The availability of labor for fruit
picking has been dropping due to urbanization and a decrease in the number of seasonal laborers
willing to perform such physically demanding chores (FAO, 1976). This labor scarcity has resulted
in crop shortages and higher production costs, creating an urgent need for automated solutions that can reliably substitute for human labor.
Cost Reduction:
Labor expenses make up a considerable component of total production costs in fruit harvesting,
accounting for up to 40% of the overall value of the crop (Bac et al., 2014b). Farmers may drastically
cut labor expenses by automating the harvesting process, which improves overall profitability and
makes the agricultural industry more sustainable in the long run.
Increased Efficiency and Productivity:
Unlike human workers, harvesting robots can operate continuously without the need for breaks
or rest, leading to a substantial increase in productivity. These robots can work day and night,
maximizing the harvesting window and ensuring that fruits are picked at their optimal ripeness,
thereby reducing waste and improving yield (Shewfelt et al., 2014).
Precision and Quality:
Modern harvesting robots are being developed with advanced AI and machine vision systems that
can accurately assess fruit ripeness, size, and other quality parameters. This precision ensures
that only the best-quality fruits are harvested, maintaining high standards of produce quality and
enhancing market acceptance (Bac et al., 2014b). Such selective harvesting also reduces the risk of
damaging unripe fruits, which can continue to mature for future harvests.
Technological Advancements:
Rapid advancements in AI and robotics are driving improvements in the capabilities of harvesting
robots. Contemporary neural network models and enhanced machine vision systems are being
trained to better recognize and differentiate fruits from their surroundings, addressing some of the
limitations of earlier prototypes. These advancements are gradually making harvesting robots more
reliable and cost-effective, bringing them closer to widespread commercial deployment (Bac et al.,
2014b).
Sustainability and Environmental Impact:
Automating the harvesting process can also contribute to more sustainable agricultural practices.
Robots can be programmed to follow optimized paths and employ precise picking techniques that
minimize damage to plants and the environment. This precision agriculture approach helps in
maintaining the health of orchards and reducing the ecological footprint of farming operations.

Chapter 2

Overview

2.0.1 INTRODUCTION
The desire to boost productivity, cut labor costs, and maximize resource utilization is causing a
major upheaval in the agriculture sector. The development of harvesting robots is one of the most
promising advances in this industry. The purpose of these robots is to automate the process of
selecting fruit, which provides a solution to the labor shortage and growing expenses related to
manual harvesting. An intricate combination of hardware and software components make up a
harvesting robot’s architecture, and each one is essential to the machine’s successful and efficient
operation.
A power supply and power management system are among the hardware parts of the harvesting
robot that guarantee a steady and dependable power source for the robot’s operations. The ESP32
module, a potent microcontroller that directs the robot’s movements, is essential to its operation.
The servo motors that control the exact movements of the robot arm assembly, ultrasonic sensors
for obstacle detection, and the ESP32-CAM for image capture all work in tandem with this.
On the software side, powerful tools like Fusion 360, MATLAB Simulink, and Simscape are
used to design and simulate the robot’s physical and dynamic behavior. Fusion 360 helps to
create comprehensive CAD models of the robot structure, whereas Simulink and Simscape allow
for modeling and optimization of the robot arm’s movements. The software suite also contains
advanced image processing and object detection capabilities based on the YOLO v3 model, which
enables the robot to precisely recognize and locate ripe fruits for harvesting.
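To make the YOLO v3 detection step concrete, the sketch below shows how raw detections might be filtered down to pickable targets. This is a hedged illustration: the dictionary format, class names, and 0.5 confidence threshold are assumptions for the example, not the configuration actually used in this work.

```python
# Hedged sketch: filtering raw YOLO-style detections (class label,
# confidence, bounding box) down to ripe-fruit targets for the arm.
# Class names and the confidence threshold are illustrative assumptions.

def select_targets(detections, wanted=("ripe_fruit",), min_conf=0.5):
    """Keep detections of wanted classes above a confidence threshold,
    sorted most-confident first so the arm picks the surest target."""
    hits = [d for d in detections
            if d["cls"] in wanted and d["conf"] >= min_conf]
    return sorted(hits, key=lambda d: d["conf"], reverse=True)

detections = [
    {"cls": "ripe_fruit", "conf": 0.91, "box": (120, 80, 60, 60)},
    {"cls": "leaf",       "conf": 0.88, "box": (40, 30, 50, 40)},
    {"cls": "ripe_fruit", "conf": 0.42, "box": (200, 150, 55, 58)},
]
targets = select_targets(detections)
print(len(targets))   # 1: the leaf and the low-confidence fruit are dropped
```

In the full pipeline, the surviving bounding box would then be mapped into arm coordinates for the pan-tilt camera and servo controllers described above.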

2.0.2 Topic Relevance


Modern agriculture has made great strides thanks to smart agricultural robots, especially harvesting
robots that meet global sustainability targets and solve important difficulties. Using recent findings
and scholarly works on robotics and precision agriculture, this part investigates the applicability of
these machines.
Precision Agriculture and Technology Integration
Through the use of technology and data-driven methods, precision agriculture seeks to maximize
farming techniques. Through targeted applications of fertilizers, herbicides, and water, precision
agriculture decreases environmental impact, increases crop yields, and improves resource efficiency
(Gómez-García et al., 2020). By incorporating intelligent agricultural robots into their operations, farmers can achieve greater precision in field work, which improves resource management and promotes sustainable farming methods.
Addressing the Sustainable Development Goals (SDGs)
Several UN Sustainable Development Goals (SDGs) are directly impacted by intelligent agricultural robots:
-Goal 2 (Zero Hunger): Harvesting robots can help satisfy the world's growing food demand without adding more agricultural land, increasing food security by raising agricultural productivity and efficiency.
-Goal 8 (Decent Work and Economic Growth): By lowering physical strain and exposure to dangerous surroundings, automation of labor-intensive jobs like harvesting can enhance working conditions for farmers (UN, 2022).
-Goal 12 (Responsible Consumption and Production): Smart agricultural robots reduce waste and maximize resource utilization to support sustainable production methods, lessening the environmental impact of agriculture (FAO, 2021).
Technological Progress and Research Areas of Interest
Research on intelligent agricultural robots is moving quickly forward, with the goal of improving
the robots’ capacities for a range of agricultural duties. For example, Saeys et al. (2020) discuss
developments in robotics for fruit harvesting, focusing on the creation of multiple-degree-of-freedom
robotic arms and advanced sensor technologies for precise fruit handling and detection.
Issues Addressed in the Agriculture Sector
Intelligent agricultural robots address several issues facing the agriculture industry:
-Labor Scarcity: Automation lessens the need for seasonal labor, which is frequently expensive and in short supply, especially during the busiest harvesting seasons (Valente et al., 2020).
-Climate Variability: Robots with built-in sensors and AI algorithms can function in a range of weather scenarios, guaranteeing steady output and lowering weather-related crop hazards (FAO, 2021).
Case Study: Application in Fruit Harvesting
Because fruit harvesting requires careful handling and exact timing, there are unique obstacles
involved. In order to carefully choose and harvest ripe fruits without harming them, robotic arms,
machine learning algorithms, and vision systems are all used in fruit harvesting robots, such as those created by Swarztrauber et al. (2019). This increases overall crop quality and marketability.

2.0.3 Research Needs and Gaps


Recently, there have been notable developments in the field of smart agricultural robotics, particularly harvesting robots. To reach their full potential, however, significant research needs and gaps must still be addressed. This section surveys those needs and gaps, highlighting the interdisciplinary nature of the opportunities and challenges in this field.

Perception and Sensation


Object Detection and Recognition: Particularly in dense foliage, current systems frequently have
trouble correctly recognizing and differentiating between fruits, leaves, and stems. To improve the
precision and dependability of sensors and algorithms used in object identification and recognition,
research is required. Environmental Adaptability: Harvesting robots must function well in a va-
riety of weather situations, light conditions, and crop varieties. One of the biggest challenges yet
ahead is creating resilient perception systems that can adjust to these changes. 3D Mapping and
Localization: Precise localization in the field and accurate 3D mapping of the surroundings are essential for efficient navigation and harvesting. It is imperative to enhance SLAM (Simultaneous
Localization and Mapping) algorithms specifically designed for agricultural environments.

2.0.4 Grasping and Acquisition Methodologies


End-Effector Design: A major field of study is the creation of adaptable and soft end-effectors that can handle many crop varieties without inflicting damage. This involves researching the biomechanics of various fruits and vegetables to design suitable harvesting instruments. Manipulative Dexterity: Improving robotic arm dexterity so that it can approach human movement in delicate tasks is crucial; research into more advanced actuation techniques and control algorithms is needed. Force Sensing and Feedback: By incorporating sophisticated force sensing and feedback technologies, robots can adjust their grasp and movements in real time, reducing crop damage and increasing harvesting efficiency.
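The force-feedback idea above can be sketched as a simple proportional adjustment loop: tighten while the measured grip force is below a target, back off when it is above. The gains, target force, and deadband here are made-up numbers for illustration, not parameters of any real gripper.

```python
# Illustrative proportional grip-force adjustment for a force-sensing
# gripper. Target force, gain, and deadband are hypothetical values.

def adjust_grip(force_n, target_n=2.0, gain=0.5, deadband=0.2):
    """Return a grip-command delta from one force reading (newtons):
    positive tightens, negative loosens, zero holds position."""
    error = target_n - force_n
    if abs(error) <= deadband:
        return 0.0               # close enough: hold the fruit gently
    return gain * error

print(adjust_grip(0.5))   # 0.75  -> tighten
print(adjust_grip(3.0))   # -0.5  -> loosen
print(adjust_grip(2.1))   # 0.0   -> within deadband, hold
```

A practical controller would add rate limits and integrate the delta into a servo position command, but the deadband already captures the key idea: avoid constantly squeezing delicate produce.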

2.0.5 Autonomy and Decision-Making


Path Planning: The development of effective path-planning algorithms is crucial to maximize the robot's mobility and cover the field efficiently while consuming the least time and energy. Real-Time Decision Making: Applying cutting-edge AI and machine learning techniques to enable real-time decisions based on the robot's sensory input, including when and how to harvest each fruit, can greatly enhance harvesting performance. Adaptive Learning: Robots should be able to learn and adjust to changing surroundings and crops over time; research into adaptive learning algorithms that let robots gain experience and improve their performance is imperative.
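The field-coverage aspect of path planning can be illustrated with a minimal boustrophedon (back-and-forth) sketch over a grid abstraction of the field. The grid model and its dimensions are assumptions for the example, not the planner used in this work.

```python
# Sketch of a simple boustrophedon (back-and-forth) coverage path of
# the kind a row-crop path planner might generate. The grid abstraction
# and sizes are arbitrary example values.

def coverage_path(rows, cols):
    """Visit every cell of a rows x cols grid, reversing direction on
    alternate rows so the robot never retraces a column."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

path = coverage_path(3, 4)
print(len(path))    # 12: each of the 3x4 cells is visited exactly once
print(path[:5])     # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3)]
```

Energy-aware planners refine this basic pattern with obstacle cells, turning costs, and battery constraints, which is exactly where the research need identified above lies.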

2.0.6 Integration of Farm Management Systems


Data Integration: Harvesting robots provide massive amounts of data, which might be useful for
farm management. Creating methods to combine this data with existing farm management software
for real-time monitoring and decision assistance is a critical area of research. IoT and Connectivity:
To ensure coordinated operations, robots, sensors, and central management systems must communi-
cate and connect reliably. Research into reliable IoT solutions and network designs for agricultural
environments is required.

2.0.7 Energy efficiency and sustainability


Power management is required since harvesting robots must operate for extended periods of time
in the field. Research into energy-efficient power systems, including renewable energy sources such
as solar power, is critical to ensuring their long-term viability. Environmental Impact: Evaluating
and limiting the environmental impact of deploying large numbers of robots in agricultural fields
is a growing topic. More research into the environmental impact of robotic agricultural practices is
required.

2.0.8 Economic and Social Implications


Cost-Effectiveness: Creating low-cost solutions that make robotic harvesting economically viable
for farmers of all sizes is a key problem. More research into low-cost materials, production procedures, and scalable solutions is required. Labor Dynamics: It is critical to understand how robotic
harvesting affects farm labor dynamics, such as job displacement and the possibility of new types
of employment. Research on how to best incorporate robots into the present workforce can aid in
mitigating harmful social consequences. Barriers to adoption: Identifying and addressing adoption
challenges, such as technological complexity, farmer training, and aversion to change, is critical to
the widespread use of harvesting robots.

Chapter 3

Methodology

This chapter describes the full process used in the development of a smart agricultural robot. The
process includes requirement collection via research and analysis, rigorous design, implementation,
testing and validation, and meticulous documenting and reporting. Each phase is critical to the
harvesting robot’s successful design and deployment.

Figure 3.1: Methodology

3.1 The requirements


3.1.1 Research and Literature Review
The first phase entails a thorough study and literature review to better understand the current
state of harvesting robots and identify technological gaps and obstacles. Key actions include:
-Reviewing academic papers, journals, and articles about agricultural robots.
-Analyzing existing harvesting robots to determine their capabilities, limitations, and technology.
-Identifying the types of fruits that can be harvested effectively with robots, as well as the precise
requirements for each.

3.1.2 Requirement Analysis
Based on these findings, specific requirements for the harvesting robot are defined. This includes:
-Specifying the types of fruits to be picked and their unique features (e.g., size, shape, and delicacy).
-Identifying the environmental conditions in which the robot will operate (e.g., temperature, humidity, terrain).
-Establishing the robot's technical characteristics, such as precision, speed, and battery life.
-Identifying the components required for both hardware (motors, sensors, CPUs) and software (control algorithms, user interfaces).

3.2 Design and Hardware Implementation


3.2.1 System Architecture
The system architecture provides an overview of the hardware and software components and how
they interact. Hardware Design: outlines the robot's physical components, such as the power supply,
motor driver, sensors, robotic arm, and control units. Software Design: describes the software
components, including control algorithms, image processing software, and the user interface, as well
as the machine learning models used for fruit detection and picking. Prototyping: involves
constructing initial mock-ups or small-scale versions of the robot to validate design concepts and
test its fundamental functions.

Figure 3.2: Architecture.

Hardware Components
Power Supply and Power Management System: The power supply is the source of energy for
the entire robot. It typically consists of rechargeable batteries that can provide sufficient power to
operate all the electronic and mechanical components of the robot for an extended period. Power
Management System: This system is responsible for distributing power from the supply to various
parts of the robot. It ensures that each component receives the correct voltage and current, prevents
power surges, and manages battery life to optimize performance and longevity.
ESP32 Module: The ESP32 is a versatile microcontroller with built-in Wi-Fi and Bluetooth
capabilities. It acts as the brain of the robot, coordinating the actions of various sensors and
actuators. It processes input from the camera, ultrasonic sensor, and other inputs, and executes
the necessary control commands.
ESP32-CAM: The ESP32-CAM is a camera module integrated with the ESP32 micro-controller.
It captures high-resolution images and streams video, which are crucial for real-time image pro-
cessing and object detection. This module is compact, low-cost, and capable of capturing detailed
visual information.
Ultrasonic Sensor: This sensor is used for measuring distances to obstacles in the robot's
path. It emits ultrasonic waves and measures the time it takes for the echoes to return, thus
calculating the distance to objects. This helps the robot navigate through the orchard without
colliding with trees, fruits, or other obstacles.
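The time-of-flight computation described above can be sketched in a few lines. This is an illustrative Python version of the arithmetic only (the firmware itself runs on the ESP32, and the exact sensor model is not specified in the text):

```python
# Sketch of the echo-time-to-distance conversion described above.
# Assumes an HC-SR04-style sensor; the exact model is not named in the thesis.

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def echo_to_distance_cm(echo_time_us: float) -> float:
    """Convert a round-trip echo time in microseconds to a one-way distance in cm.

    The wave travels to the obstacle and back, so the one-way
    distance is half of (time * speed of sound).
    """
    echo_time_s = echo_time_us / 1_000_000
    round_trip_m = echo_time_s * SPEED_OF_SOUND_M_S
    return (round_trip_m / 2) * 100  # metres -> centimetres

# Example: a 2915 microsecond echo corresponds to roughly 50 cm.
print(round(echo_to_distance_cm(2915), 1))
```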
Servo Motors: Servo motors are precise actuators that control the movement of the robot’s
arm and end-effectors. Each joint of the robot arm is driven by a servo motor, allowing for controlled
and precise movements. The servo motors ensure that the robot can reach the desired position and
angle to harvest the fruits without causing damage.
Robot Arm Assembly: The robot arm consists of multiple segments connected by joints.
These segments are typically made from lightweight and durable materials such as aluminum or
carbon fiber. The design of the robot arm allows for a wide range of motion, enabling the robot
to reach fruits at various heights and angles. The end-effector, attached to the arm, is designed to
gently grasp and pick the fruits.

Software Components
CAD Modeling:
Fusion 360: This software is used for designing and simulating the physical structure of the
robot. It allows engineers to create detailed 3D models of the robot arm, sensor placements, and
overall assembly. Simulation features help in testing the design virtually to identify any issues
before physical manufacturing.
Robot Arm Modeling with MATLAB Simulink and Simscape:

Figure 3.3: Robot Arm Modeling.

These tools are used to model the dynamic behavior of the robot arm. Simulink provides a graphical
environment for simulating the control algorithms, while Simscape allows for the modeling of
physical systems such as motors and mechanical linkages. This combination helps in optimizing the
arm's performance and ensuring precise control over its movements.
Image Processing and Object Detection, YOLO v3: YOLO v3 is an advanced object detection
algorithm that processes images in real-time. It can detect and localize multiple objects within
a single frame with high accuracy. YOLO v3 is used to identify ripe fruits in the images captured
by the ESP32-CAM. The algorithm provides the coordinates of the detected fruits, which are used
by the robot's control system to position the arm for harvesting.
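As an illustration of how a detection becomes an arm target, the sketch below converts a YOLO-style normalized box (centre x, centre y, width, height, each in [0, 1]) into pixel coordinates. The frame size and the example box are assumptions for demonstration, not values from the thesis:

```python
# Minimal sketch: converting a YOLO-style normalized detection into pixel
# coordinates that an arm controller could target. The 640x480 frame size
# is an illustrative assumption.

def yolo_box_to_pixels(box, frame_w, frame_h):
    """Return (center_x, center_y, left, top, right, bottom) in pixels."""
    cx, cy, w, h = box
    px, py = cx * frame_w, cy * frame_h
    half_w, half_h = w * frame_w / 2, h * frame_h / 2
    return (px, py, px - half_w, py - half_h, px + half_w, py + half_h)

# A detection centred at (0.5, 0.25) covering 20% x 10% of a 640x480 frame:
cx, cy, left, top, right, bottom = yolo_box_to_pixels((0.5, 0.25, 0.2, 0.1), 640, 480)
print(cx, cy)     # 320.0 120.0
print(left, top)  # 256.0 96.0
```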

3.2.2 Integration of Hardware and Software


The architecture of the harvesting robot integrates both hardware and software components to
achieve seamless operation.
Power Management: The power supply provides energy to all hardware components, man-
aged by the power management system. This ensures stable and efficient operation, preventing
power surges or depletion that could damage the robot
ESP32 Control: The ESP32 module acts as the central controller. It receives data from
the ESP32-CAM and ultrasonic sensor, processes this data, and sends control signals to the servo
motors. The ESP32 handles the real-time processing and decision-making necessary for the robot’s
operation.
Image Capture and Processing: The ESP32-CAM captures real-time images of the fruit
canopy. These images are sent to the ESP32 module, where they are processed using the YOLO
v3 algorithm. The algorithm detects and localizes ripe fruits within the images, providing their
coordinates for further action.
Sensor Feedback: The ultrasonic sensor continuously measures distances to nearby obstacles.
This data is used by the ESP32 module to navigate the robot safely through the orchard, avoiding
collisions and ensuring it can reach the desired locations.
Robot Arm Operation: The ESP32 module sends commands to the servo motors based on
the coordinates provided by the image processing algorithm. The servo motors move the robot arm
to the precise location of the ripe fruit. The end-effector then gently grasps and picks the fruit,
minimizing damage.
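How the controller might turn a fruit coordinate into joint angles can be illustrated with a standard two-link planar inverse-kinematics calculation. The thesis does not specify the arm's geometry, so the link lengths and target below are hypothetical:

```python
import math

# Illustrative two-link planar inverse kinematics: given a target (x, y) in
# the arm's plane, compute shoulder and elbow angles via the law of cosines.
# The real arm's link lengths and joint layout are not given in the thesis.

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians for one IK solution."""
    r2 = x * x + y * y
    # Law of cosines for the elbow angle.
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Forward-kinematics check: with 10 cm links, a target at (12, 8) is reproduced.
s, e = two_link_ik(12, 8, 10, 10)
fx = 10 * math.cos(s) + 10 * math.cos(s + e)
fy = 10 * math.sin(s) + 10 * math.sin(s + e)
print(round(fx, 3), round(fy, 3))  # 12.0 8.0
```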
System Design and Simulation: The overall design and structure of the robot, including the
arm assembly and sensor placements, are created using Fusion 360. This ensures that all components
fit together and function as intended. MATLAB Simulink and Simscape are used to simulate and
optimize the dynamic behavior of the robot arm, ensuring precise and efficient movements.

3.3 Hardware Implementation


A harvesting robot’s design and hardware implementation require some specialized components,
each of which plays an important part in assuring the robot’s functionality, efficiency, and de-
pendability. This chapter delves into the detailed design and hardware aspects of a harvesting
robot powered by a 3300mAh 12V battery, which includes key components such as a motor driver,
power management system, ESP32 module, pan-tilt mechanism with SG90 servo motors for the
ESP32-CAM, an ultrasonic sensor, and a robot arm.

3.3.1 Power Supply: 12V Battery of 3300mAh
The harvesting robot’s major power source is a 12V, 3300mAh rechargeable battery. This battery
supplies enough voltage and current to power the robot’s motors, sensors, and other electronic
components.
-Voltage and Capacity: The 12V rating provides enough power to run a variety of motors and
electronic components, while the 3300mAh capacity strikes a balance between operating duration
and battery size.
-Rechargeability: Using a rechargeable battery lowers operating expenses and reduces envi-
ronmental impact. The battery may be recharged with a compatible charging circuit, allowing the
robot to be deployed several times with minimal downtime.
-Battery Management System (BMS): A BMS is provided to prevent the battery from
overcharging and deep discharge. It monitors the battery’s state and assures safe operation.

3.3.2 Motor Driver


The motor driver is an essential component that regulates the movement of the robot’s wheels and
arm. It serves as an interface between the microprocessor and the motors, allowing for fine control
over speed and direction.
-Functionality: The motor driver receives control signals from the microcontroller (ESP32)
and changes the power sent to the motors accordingly. This guarantees smooth and accurate
movement.
-Specifications: An H-bridge motor driver, such as the L298N, is commonly utilized. It can
handle currents of up to 2A per channel and voltages up to 46V.
-Connections: The motor driver communicates with the ESP32 via digital pins, which carry
PWM signals to regulate motor speed and direction.
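The speed-and-direction interface described above can be sketched as a pure mapping function. The two direction inputs and 8-bit duty range mirror an L298N-style driver, but the exact pin semantics are assumptions rather than the thesis firmware:

```python
# Sketch of the control-signal mapping an H-bridge driver such as the L298N
# expects: two direction pins plus a PWM duty cycle. Pin semantics and the
# 8-bit duty range are illustrative assumptions.

def motor_command(speed: float):
    """Map speed in [-1.0, 1.0] to (in1, in2, duty) for one H-bridge channel.

    in1/in2 select the direction (both low = stop); duty is an
    8-bit PWM value controlling the effective motor voltage.
    """
    speed = max(-1.0, min(1.0, speed))   # clamp to the valid range
    duty = int(abs(speed) * 255)         # magnitude -> 8-bit PWM
    if speed > 0:
        return (1, 0, duty)   # forward
    if speed < 0:
        return (0, 1, duty)   # reverse
    return (0, 0, 0)          # stop

print(motor_command(0.5))    # (1, 0, 127)
print(motor_command(-1.0))   # (0, 1, 255)
```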

3.3.3 Power Management System


The power management system is in charge of transferring power from the battery to various compo-
nents while assuring effective use and protecting against over-voltage and under-voltage situations.
-Voltage Regulation: Voltage regulation ensures that each component receives the right work-
ing voltage, which is commonly accomplished with voltage regulators such as the LM7805 for 5V
components and the LM317 for adjustable voltages.
-Protection Mechanisms: Overcurrent protection, thermal shutdown, and short circuit pro-
tection are among the characteristics used to prevent component damage.
-Power Distribution Board: A bespoke power distribution board can be utilized to improve
connections and provide consistent power delivery to all components.

3.3.4 ESP32 Module

Figure 3.4: ESP32 pinout (source: last minute engineer.com)

The ESP32 module is the core processor unit, handling all control and communication operations.
It is a powerful microcontroller with Wi-Fi and Bluetooth support, making it suitable for a wide
range of applications.
-Processing Power: The ESP32, which has a dual-core CPU and a variety of peripherals, can
perform complicated tasks including image processing, sensor data analysis, and motor control.
-Connectivity: The built-in Wi-Fi and Bluetooth provide remote monitoring, control, and
data transmission, allowing for real-time updates and adjustments.
-Programming: The ESP32 may be programmed using either the Arduino IDE or the ESP-IDF
framework, which allows for the development and deployment of bespoke algorithms and control logic.
-Pin Configuration: The ESP32 includes several GPIO pins for connecting to sensors, motors,
and other peripherals. Careful planning of pin assignments is required to eliminate conflicts and
assure dependable functioning.

3.3.5 Pan-Tilt Mechanism with SG90 Servo Motors for ESP32-CAM


The pan-tilt mechanism enables the ESP32-CAM (camera module) to move in numerous directions,
offering a wide field of view for monitoring and target identification.
-SG90 Servo Motors: These lightweight and inexpensive servos allow for fine control over the
camera’s alignment. Each servo has a range of motion of about 180 degrees.

-ESP32-CAM Integration: The ESP32-CAM module is mounted on the pan-tilt mechanism,
and the ESP32 microcontroller manages the servos, enabling real-time adjustments based
on the robot's needs.
-Control Signals: The servos are controlled using PWM signals from the ESP32. The duty
cycle of these signals controls the angle of the servos, allowing for accurate camera orientation.
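The duty-cycle-to-angle relationship can be made concrete with the usual hobby-servo pulse mapping. The 0.5-2.5 ms pulse limits assumed here are typical for SG90-class servos but vary between individual units:

```python
# Sketch of the angle-to-pulse-width mapping used for hobby servos like the
# SG90: a pulse of roughly 0.5 ms to 2.5 ms within a 20 ms (50 Hz) frame
# selects the angle. Exact pulse limits are per-servo assumptions.

MIN_PULSE_US = 500    # ~0 degrees
MAX_PULSE_US = 2500   # ~180 degrees
FRAME_US = 20000      # 50 Hz servo frame

def angle_to_pulse_us(angle_deg: float) -> float:
    """Linearly map an angle in [0, 180] degrees to a pulse width in microseconds."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    return MIN_PULSE_US + (angle_deg / 180.0) * (MAX_PULSE_US - MIN_PULSE_US)

def duty_cycle(angle_deg: float) -> float:
    """Fraction of the 20 ms frame for which the pulse stays high."""
    return angle_to_pulse_us(angle_deg) / FRAME_US

print(angle_to_pulse_us(90))     # 1500.0 (centre position)
print(round(duty_cycle(90), 3))  # 0.075
```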

3.3.6 Ultrasonic Sensor


An ultrasonic sensor is used to identify obstacles and measure distances, ensuring that the harvesting
robot navigates safely and efficiently.

Figure 3.5: Fusion 360 CAD model

-Functionality: The sensor generates ultrasonic waves and measures the time it takes for the
echo to return. This information is utilized to determine the distance between items in the robot’s
route.
-Integration: The sensor connects to the ESP32, which evaluates the distance data and mod-
ifies the robot’s direction or actions to prevent collisions.
-Mounting: The ultrasonic sensor is usually installed on the front of the robot to give a clear
detection path. It may also be installed on a servo motor to scan the surroundings.
-Data Processing: Include algorithms in the ESP32 firmware for processing sensor data and
making navigation decisions.

Figure 3.6: Fusion 360 model 2

3.3.7 Implementation
Assembly and Integration
-Power Supply Setup: The 12V battery is connected to the power management system, which
distributes the appropriate voltage to the ESP32, motors, and sensors. Correct-gauge wiring is
used to meet the current demands of the motors and other components; connectors facilitate
assembly and maintenance, and fuses and protective circuits guard against short circuits and overloads.
-Motor Driver Connection: The motor driver is connected to both the ESP32 and the motors;
the robot's wheels and arm move in response to ESP32 control signals. PWM Control: the ESP32
is set up to generate PWM signals that regulate motor speed and direction through the motor
driver. Feedback: encoders are installed on the motors as feedback mechanisms to give accurate
control and monitoring of position and speed.
-ESP32 Programming: The ESP32 is programmed to handle ultrasonic sensor inputs, drive the
pan-tilt mechanism, and coordinate the movements of the robot arm. Code Development: firmware
is written with the Arduino IDE or ESP-IDF, including libraries for motor control, sensor data
collection, and camera handling.
-Pan-Tilt Mechanism Assembly: The SG90 servo motors are installed on a frame, and the
ESP32-CAM is attached to enable a complete range of motion. Calibration: the servos are checked
for proper movement and camera alignment. Integration: the ESP32-CAM is attached to the
pan-tilt mechanism and the servo control wires are connected to the ESP32.
-Sensor Integration: The ultrasonic sensor is installed on the robot and connected to the ESP32,
allowing for real-time obstacle detection. Mounting Position: the sensor is aligned to maximize
its field of view and detection range. Data Processing: algorithms are included in the ESP32
firmware for processing sensor data and making navigation decisions.
-Robot Arm Installation: The robot arm is constructed and attached to the robot, with each
joint motor connected to the ESP32 for precise control. End-Effector: the end-effector is attached
and its gripping mechanism tested to ensure it can handle the desired fruits without damage.
Movement Algorithms: algorithms that regulate the arm's movements are created and fine-tuned
for efficient and precise fruit picking.

Figure 3.7: Assembled robot

Testing and Calibration


Initial testing is performed to ensure that each component, such as the power supply, motor
driver, ESP32, pan-tilt mechanism, ultrasonic sensor, and robot arm, functions properly, and
that all components work together seamlessly. The robot's ability to maneuver, identify obstacles,
and harvest fruit is then evaluated, and the sensors and actuators are calibrated to improve
performance, adjusting the ultrasonic sensor's sensitivity, the pan-tilt mechanism angles, and the
robot arm movements. Field testing involves deploying the robot in a real-world agricultural
context to assess its performance under actual working conditions and to examine its capacity to
handle various types of fruits and navigate the environment successfully.

Figure 3.8: Assembled robot with navigation capabilities and image processing

3.4 Software and Algorithm


3.4.1 YOLO
Deep learning algorithms have been demonstrated to be among the most reliable methods for
tackling object detection [1]. In terms of speed and accuracy, YOLOv4 [2] has emerged as one of
the best models for object recognition in recent years. YOLOv4's precursor, You Only Look Once
(YOLO), released in 2016 and created by Joseph Redmon, was regarded as one of the first CNNs
with real-time speed. The name "You Only Look Once" originated from the one-shot detection
mechanism responsible for its quickness: the bounding box coordinates and class probabilities are
predicted concurrently in a single pass over the image. YOLO divides an input image into an
S × S grid and predicts bounding boxes, each with a confidence of having spotted an object of
class C (Figure 3.9).

Figure 3.9: (Top) General workflow of YOLO; (Bottom) YOLO architecture.
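The grid-based responsibility rule described above can be sketched as follows, assuming a 13 × 13 grid (the size YOLOv3 uses at its coarsest scale for a 416 × 416 input):

```python
# Sketch of the S x S grid assignment described above: the cell containing
# an object's centre is responsible for predicting it. S = 13 corresponds
# to a 416x416 input with a 32x downsampling stride.

def responsible_cell(cx_norm: float, cy_norm: float, s: int = 13):
    """Return (col, row) of the grid cell containing a normalized box centre."""
    col = min(int(cx_norm * s), s - 1)  # clamp centres lying exactly on 1.0
    row = min(int(cy_norm * s), s - 1)
    return col, row

# A fruit centred at (0.5, 0.25) falls in cell (6, 3) of a 13x13 grid.
print(responsible_cell(0.5, 0.25))  # (6, 3)
```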

The original YOLO model faced challenges in detecting small objects and objects with unusual
aspect ratios, and suffered from localization errors. In 2017, YOLOv2 was introduced, which
improved accuracy through anchor boxes, batch normalization, and high-resolution classifiers.
YOLOv3 followed, with improvements including binary cross-entropy loss, logistic regression for
objectness, and a stronger feature extractor. YOLOv4, introduced in 2020, was more accurate and
faster than YOLOv3, running two times faster than EfficientDet and enabling training on a single
conventional GPU. YOLOv4's structure was modified to enable scaling for different applications,
with YOLOv4-tiny and YOLOv4-CSP developed to optimize speed and computational cost. This
study compared the performances of YOLOv4, YOLOv4-tiny, and YOLOv4-CSP in detecting pear
fruits. A backup system based on object tracking was developed to ensure accuracy in counting,
providing a more reliable measure of object count in case of detection-system failure.

3.4.2 Example of Online and Real-Time Tracking with Convolutional
Neural Networks
Deep SORT has proven to be one of the most reliable and fast methods for tracking multiple objects
[3]. Originally designed as a simple method of detection-based online tracking, the Simple Online
and Real-time Tracking (SORT) algorithm aimed to efficiently associate object detections across
frames, building on convolutional neural networks' strong reputation for precise object detection.
Its tracking components included the Hungarian algorithm and the Kalman filter, two traditional
techniques for data association and motion prediction. Because of its low complexity, SORT ran
roughly 20 times faster than other cutting-edge trackers. In the MOT (Multiple Object Tracking)
Challenge 2015, it also performed better than conventional online tracking techniques using Faster
R-CNN [30] as the detector.
The primary limitations of SORT were occlusions and shifting angles of view. Wojke et al. [3]
created Deep SORT, an expanded form of SORT, to address this problem (shown in Figure 3.10).
Deep SORT incorporates a deep appearance-based metric from a convolutional neural network in
addition to motion-based metrics for data association. This modification yielded higher robustness
to occlusion and viewpoint changes, and fewer identity switches when using a nonstationary camera.
Deep SORT was determined to run at 25-50 frames per second on contemporary GPUs [3]. Because
of its robustness and suitability for real-time tracking, Deep SORT was chosen in this work as the
tracking method for counting pear fruits in real time.

Figure 3.10: The architecture of Deep SORT.
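The per-frame association step at the core of SORT can be illustrated with a simplified sketch. SORT proper combines Kalman-filter motion prediction with the Hungarian algorithm; the greedy best-IoU matching below is a stand-in to keep the example self-contained:

```python
# Simplified sketch of SORT's frame-to-frame data association: detections
# are matched to existing tracks by bounding-box IoU. SORT itself uses the
# Hungarian algorithm plus Kalman-filter motion prediction; greedy matching
# is substituted here for brevity.

def iou(a, b):
    """IoU of two boxes given as (left, top, right, bottom)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily pair (track index, detection index) by descending IoU."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to be matches
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]
detections = [(52, 51, 62, 61), (1, 0, 11, 10)]
print(associate(tracks, detections))  # [(0, 1), (1, 0)]
```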

3.4.3 Example: Fruit Detection Using YOLO


YOLO-based models have been applied to fruit detection in several studies. Using their Mango-
YOLO model, which demonstrated an F1 score of 0.968, an AP of 98.3%, and an inference speed
of 14 FPS on an NVIDIA GeForce GTX 1070 Ti GPU, Koirala et al. carried out real-time mango
fruit detection. Though this was still insufficient for real-time detection (greater than or equal to
24 FPS), Liu et al. and Lawal [4] also proposed YOLOv3-based tomato detection systems that
demonstrated similarly remarkable AP values of 96.4% and 99.5%, respectively, and a faster speed
of around 19 FPS for both studies, using an NVIDIA GeForce GTX 1070 Ti GPU and an NVIDIA
Quadro M4000 GPU, respectively. To detect lemon fruits, Li et al. [5] also created Lemon-YOLO,
in which they used a SEResGNet34 network instead of Darknet-53. With the powerful Tesla V100
GPU, their system achieved an AP of 96.28% and a detection speed of 106 FPS. Several other
studies assessed how well YOLO-based models performed in identifying different fruits, including
apples, lemons, bananas, and cherries. It is significant to highlight that the majority of the studies
did not provide loss curves for their models, making it challenging to confirm whether the dataset
was over- or underfitted. Additionally, without such curves, it is challenging to verify whether they
attained the best achievable performance criteria.

Figure 3.11: YOLOv2 architecture.

3.4.4 Fruit Counting in Real-Time Using an Object Tracking Algorithm and YOLO
For fruit counting, only one study integrated YOLO with a multiple-object tracking system. Itakura
et al. [6] counted pear fruits from a video using YOLOv2 and the Kalman filter, achieving an AP
of 97% in detection and an F1 score of 0.972 in counting. However, their counting system's speed
was not specified. Furthermore, it was unclear whether their tracking approach was offline
(processing all data in one batch) or online (present predictions rely only on past data). Online
tracking is better suited for real-time counting. Offline tracking, also known as batch tracking, is
not a desirable alternative for real-time counting because processing does not happen until all
observations are collected, even if it can be more precise. The absence of information regarding
their tracking methodology makes it challenging to determine whether their technology is applicable
in real time.

Experimental Platform and Performance Evaluation
In this study, the experiments were conducted on a computer with an AMD Ryzen 5 3500U
quad-core CPU (2.10 GHz, with Radeon Vega Mobile Gfx) and an NVIDIA GeForce GTX 1070 Ti GPU.

Figure 3.12: YOLO-fruits model flowchart for dataset, training and detection process.

Rather than creating overly specialized predictors from the default anchor-box configuration
provided by YOLOv3, it is important to determine the anchor-box sizes most representative of the
created dataset before training and testing. Based on the three detection-layer scales displayed in
Figure 3.8, nine clusters measuring 416 × 416 pixels were produced using the K-means clustering
technique. The YOLO-tomato models were enhanced by assigning the anchors to each scale in
descending order. Three distinct sets of nine clusters were produced as a result of the tomato
datasets being divided into Raw, 0.5-ratio, and 0.25-ratio categories. According to the average IoU
values, the raw ratio achieves 77.45%, the 0.5 ratio 78.33%, and the 0.25 ratio 78.55%.
Images of 416 × 416 pixels are fed into the model. Training loss is decreased by adjusting
the learning rate. Given that the input photos consist of two classes, ripe and unripe tomatoes, a
learning rate of 0.001 between 0 and 4000 iterations with a maximum batch size of 4000 was used.
Batch and subdivision were set to 64 and 16, respectively, to minimize memory use. The weight
decay was set to 0.0005 and the momentum to 0.9. Additionally, the YOLO-Tomato model was trained
using random initialization, whereas YOLOv3 and YOLOv4 were trained using official pre-trained
weights.
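The anchor-selection step described above can be sketched as a small K-means routine that uses 1 − IoU as its distance, as in the YOLO papers. The toy box sizes and deterministic first-k initialisation below are illustrative; the study clusters real tomato annotations into nine anchors:

```python
# Sketch of K-means anchor clustering over (width, height) pairs, using
# 1 - IoU as the distance. The toy data and deterministic initialisation
# are for illustration only.

def wh_iou(a, b):
    """IoU of two boxes assumed to share the same centre, given as (w, h)."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k, iterations=20):
    anchors = list(boxes[:k])  # deterministic initialisation: first k boxes
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            # Assign each box to the anchor with the highest IoU
            # (equivalently, the smallest 1 - IoU distance).
            best = max(range(k), key=lambda i: wh_iou(box, anchors[i]))
            clusters[best].append(box)
        for i, members in enumerate(clusters):
            if members:  # move the anchor to the cluster's mean width/height
                anchors[i] = (sum(b[0] for b in members) / len(members),
                              sum(b[1] for b in members) / len(members))
    return anchors

# Toy annotations: two small boxes, two medium, two large.
boxes = [(10, 12), (11, 11), (40, 42), (38, 41), (100, 90), (95, 105)]
anchors = kmeans_anchors(boxes, k=3)
print([tuple(round(v, 1) for v in a) for a in anchors])
```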
Using Precision, Recall, F1-score, and AP as assessment metrics, the trials performed on the
trained YOLO-tomato, YOLOv3, and YOLOv4 models were found to be effective. Equations
(3.1)-(3.4) illustrate the computation process.

Precision = TP / (TP + FP)    (3.1)

Recall = TP / (TP + FN)    (3.2)

F1 = (2 × Precision × Recall) / (Precision + Recall)    (3.3)

In these equations, TP, FN, and FP are abbreviations for True Positive (correct detections), False
Negative (missed detections), and False Positive (incorrect detections). The F1 score serves as a
trade-off between Recall and Precision to show the comprehensive performance of the trained
models, as defined in Eq. (3.3). Average Precision (AP) was adopted to show the overall
performance of the models under different confidence thresholds, expressed as follows:

AP = Σ_n (r_{n+1} − r_n) · max_{f ≥ r_{n+1}} p(f)    (3.4)

where p(f) is the measured precision at recall f.
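As a worked example of Equations (3.1)-(3.4), the sketch below computes the metrics from illustrative counts and a tiny precision-recall curve (the numbers are made up for demonstration; they are not the experimental results):

```python
# Worked example of Eqs. (3.1)-(3.4) on illustrative TP/FP/FN counts.

def precision(tp, fp):
    return tp / (tp + fp)                 # Eq. (3.1)

def recall(tp, fn):
    return tp / (tp + fn)                 # Eq. (3.2)

def f1(p, r):
    return 2 * p * r / (p + r)            # Eq. (3.3)

def average_precision(recalls, precisions):
    """Interpolated AP per Eq. (3.4): sum over recall steps of the step
    width times the maximum precision at any recall >= that step."""
    ap = 0.0
    points = sorted(zip(recalls, precisions))
    for n in range(len(points) - 1):
        r_n = points[n][0]
        r_next = points[n + 1][0]
        p_max = max(p for r, p in points if r >= r_next)
        ap += (r_next - r_n) * p_max
    return ap

p = precision(tp=97, fp=3)   # 0.97
r = recall(tp=97, fn=2)      # ~0.9798
print(round(f1(p, r), 3))    # 0.975

# A tiny 3-point precision-recall curve:
print(round(average_precision([0.0, 0.5, 1.0], [1.0, 0.9, 0.8]), 2))  # 0.85
```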

Results and discussion


Model performance
To ensure consistency with the training image resolution, the trained models were tested using
an image resolution of 416 × 416 pixels at batch size 1. The YOLO-tomato models detect the
tomato count in the test dataset, producing accurate detection results. The YOLOv3 and YOLOv4
models were compared using the computed Precision, Recall, F1-score, and AP of the identified
tomatoes. Table 3.1 (raw), Table 3.2 (0.5 ratio), and Table 3.3 (0.25 ratio) display the
experimental outcomes.

Methods Precision (%) Recall (%) F1 (%) AP (%)

YOLOv3 97.4 96.2 96.8 97.8

YOLOv4 97.4 100.0 98.7 99.5

Table 3.1: Model performance evaluation with Raw dataset under 416×416 pixels’ resolution.

Methods Precision (%) Recall (%) F1 (%) AP (%)

YOLOv3 94.4 98.0 96.2 97.3

YOLOv4 98.0 99.9 99.0 99.5

Table 3.2: Model performance evaluation with 0.5 dataset under 416×416 pixels’ resolution.

Methods Precision (%) Recall (%) F1 (%) AP (%)

YOLOv3 96.3 95.9 96.1 97.1

YOLOv4 98.0 100.0 99.0 99.0

Table 3.3: Model performance evaluation with 0.25 dataset under 416×416 pixels’ resolution.

Based on the results outlined in Tables 3.1, 3.2, and 3.3, all approaches perform exceptionally
well, in part because small datasets are used.
The assessed performance varies depending on the approach used. According to the AP
comparison results in the tables, the YOLO-Fruits-A model rose by 0.4% in Table 3.1, 1.2% in
Table 3.2, and 1.2% in Table 3.3. This is a result of DenseNet's feature enhancements, which
improve the model's ability to identify small tomatoes. Precision, Recall, F1-score, and AP all
increased when Mish activation was used in the models.
Because the Raw, 0.5-ratio, and 0.25-ratio datasets shared an identical configuration file, little
to no variation in their detection times was observed. As a result, Table 3.4 shows the average
detection time of the YOLO-Tomato model for each image.

Chapter 4

Results and Discussion

4.1 Results
4.2 Discussion

Chapter 5

Conclusion


Chapter 6

Demonstration of Outcome Based


Education (OBE)

6.0.1 INTRODUCTION
The rapid advancement of technology in agriculture, particularly the creation of smart agricultural
robots, has the potential to transform traditional farming practices. In this context, Outcome-Based
Education (OBE) offers a structured framework for aligning educational programs with the skills
needed to design, develop, and implement new solutions such as harvesting robots. This demon-
stration will explain how the OBE approach handles course outcomes (COs), program outcomes
(POs), knowledge profiles, and the characteristics required to solve complex technical challenges in
smart agriculture.
Smart agricultural robots, particularly those developed for harvesting, pose numerous issues that
necessitate a thorough understanding of robotics, automation, sensor technology, artificial intel-
ligence, and sustainable practices. The educational program, which is based on OBE principles,
ensures that students obtain not just theoretical knowledge but also practical skills necessary for
tackling real-world agricultural challenges. This demonstration illustrates the use of OBE in the
development of abilities that are directly applicable to the agriculture sector’s changing needs.

6.0.2 COURSE OUTCOME (COs) ADDRESSED

The following table shows the COs for EEE 4700/4800

COs  CO Statement  POs

CO1  Identify a contemporary real life problem related to electrical and electronic engineering.  PO2

CO2  Determine functional requirements of the problem considering  PO4

CO3  Select a suitable solution and determine its method considering professional ethics, codes and standards.  PO8

CO4  Adopt modern engineering resources and tools for the solution of the problem.  PO5

CO5  Prepare management plan and budgetary implications for the solution of the problem.  PO11

CO6  Analyze the impact of the proposed solution on health, safety, culture and society.  PO6

CO7  Analyze the impact of the proposed solution on environment and sustainability.  PO7

CO8  Develop a viable solution considering health, safety, cultural, societal and environmental aspects.  PO3

CO9  Work effectively as an individual and as a team member for the accomplishment of the solution.  PO9

CO10  Prepare various technical reports, design documentation, and deliver effective presentations for demonstration of the solution.  PO10

CO11  Recognize the need for continuing education and participation in professional societies and meetings.  PO12

Table 6.1: Course Outcomes (COs) of EEE 4700/4800 and their corresponding Program Outcomes (POs).

6.0.3 ASPECTS OF PROGRAM OUTCOMES (POS) ADDRESSED

The following table shows the aspects addressed for certain Program Outcomes (POs) in
EEE 4700/4800 (Project and Thesis).

POs  Statement  Different Aspects (✓ = addressed)

PO3  Design/development of solutions: Design solutions for complex electrical and electronic
engineering problems and design systems, components or processes that meet specified needs with
appropriate consideration for public health and safety, cultural, societal, and environmental
considerations.
     Aspects: Public health; Safety ✓; Cultural; Societal ✓; Environmental

PO4  Investigation: Conduct investigations of complex electrical and electronic engineering
problems using research-based knowledge and research methods including design of experiments,
analysis and interpretation of data, and synthesis of information to provide valid conclusions.
     Aspects: Design of experiments; Analysis and interpretation of data; Synthesis of information

PO6  The engineer and society: Apply reasoning informed by contextual knowledge to assess
societal, health, safety, legal, and cultural issues and the consequent responsibilities relevant to
professional engineering practice and solutions to complex electrical and electronic engineering
problems.
     Aspects: Societal ✓; Health; Safety ✓; Legal ✓; Cultural

PO7  Environment and sustainability: Understand and evaluate the sustainability and impact of
professional engineering work in the solution of complex electrical and electronic engineering
problems in societal and environmental contexts.
     Aspects: Societal; Environmental

PO8  Ethics: Apply ethical principles embedded with religious values, professional ethics and
responsibilities, and norms of electrical and electronic engineering practice.
     Aspects: Religious values; Professional ethics and responsibilities; Norms

PO11  Project management and finance: Demonstrate knowledge and understanding of engineering
management principles and economic decision-making and apply these to one's own work, as a
member and leader in a team, to manage projects and in multidisciplinary environments.
     Aspects: Engineering management principles; Economic decision-making; Manage projects;
Multidisciplinary environments

Table 6.2: Aspects of the Program Outcomes (POs) addressed in EEE 4700/4800.

The following table explains or justifies how the COs and corresponding POs have been addressed
in EEE 4700/4800 (Project and Thesis)

COs  Explanation/Justification  POs (✓ = addressed)

CO1 (PO2 ✓): A contemporary real-life problem related to agricultural harvesting robots involves
the development of robust and adaptable robotic systems that can effectively handle the
complexities of diverse agricultural environments and crop varieties.

CO2 (PO4 ✓): The agricultural harvesting robot can be adapted to provide feasible and efficient
solutions that align with the complexities and demands of modern agricultural practices.
Synthesizing information from various disciplines, including robotics, AI, agriculture, and
engineering, is crucial in shaping the design and development of such advanced robotic systems.

CO3 (PO8 ✓)

CO4 (PO5 ✓): One modern engineering resource for addressing the challenge of developing robust
and adaptable agricultural harvesting robots is advanced sensor technology. This includes
high-resolution cameras, LiDAR (Light Detection and Ranging), GPS (Global Positioning System),
and other sensors that can provide real-time data about crop maturity, environmental conditions,
and terrain variability.

CO5 (PO11 ✓)

CO6 (PO6 ✓)

CO7 (PO7 ✓)

CO8 (PO3 ✓): The development of agricultural harvesting robots can impact society, safety, and
the environment in the following ways. Society: changes in employment dynamics and the labor
market; access to consistent and efficient harvesting in regions with labor shortages. Safety:
reduced risk of injuries and occupational hazards for agricultural workers; potential reduction in
chemical exposure through targeted harvesting. Environment: optimization of resource utilization
and reduction of waste; minimization of soil compaction and potential for improved soil health;
contribution to reduced emissions and improved energy efficiency. These impacts highlight the
potential benefits and considerations associated with the integration of agricultural harvesting
robots.

CO9 (PO9 ✓): We are developing a smart agricultural robot based on computer vision that
integrates individual work with teamwork and collaboration in multidisciplinary situations,
advancing sustainable agriculture.

It is crucial to effectively convey the solution to


the engineering community and the general public
through the use of technical reports, design docu-
CO10 PO10 ✓
mentation, and compelling presentations in order to
provide clear instructions for building and operating
the intelligent agricultural robot.

CO11 PO12 ✓

Table 6.3: Table summarizing COs, POs, and Explana-


tion/Justification for agricultural harvesting robots.
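The sensor suite mentioned under CO4 (camera, LiDAR, GPS) ultimately feeds a single harvest decision. As a minimal illustration only, where the type name, field names, and thresholds below are hypothetical placeholders rather than part of the actual robot's software, a fused sensor snapshot and a threshold-based harvest check might look like this:

```python
from dataclasses import dataclass

@dataclass
class FieldReading:
    """One fused snapshot of the robot's sensor data (illustrative fields)."""
    timestamp_s: float        # seconds since mission start
    gps_lat: float            # latitude from the GPS receiver
    gps_lon: float            # longitude from the GPS receiver
    lidar_range_m: float      # distance to the nearest obstacle, in metres
    ripeness_estimate: float  # 0.0 (unripe) to 1.0 (ripe), from the camera

def ready_to_harvest(reading: FieldReading,
                     ripeness_threshold: float = 0.8,
                     min_clearance_m: float = 0.5) -> bool:
    """Harvest only when the crop looks ripe and the arm has room to move."""
    return (reading.ripeness_estimate >= ripeness_threshold
            and reading.lidar_range_m >= min_clearance_m)
```

In practice both thresholds would be calibrated per crop and per end-effector; the sketch only shows how real-time sensor data could gate the harvesting action.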

6.0.4 KNOWLEDGE PROFILES (K3 – K8) ADDRESSED
The following table shows the Knowledge Profiles (K3 – K8) addressed in EEE 4700/4800 (Project
and Thesis).

K3 (✓): A systematic, theory-based formulation of engineering fundamentals required in the engineering discipline.

K4: Engineering specialist knowledge that provides theoretical frameworks and bodies of knowledge for the accepted practice areas in the engineering discipline; much is at the forefront of the discipline.

K5 (✓): Knowledge that supports engineering design in a practice area.

K6 (✓): Knowledge of engineering practice (technology) in the practice areas in the engineering discipline.

K7 (✓): Comprehension of the role of engineering in society and identified issues in engineering practice in the discipline: ethics and the engineer's professional responsibility to public safety; the impacts of engineering activity: economic, social, cultural, environmental and sustainability.

K8: Engagement with selected knowledge in the research literature of the discipline.

Table 6.4: Summary of the Knowledge Profile attributes (K3–K8) for the engineering discipline; a tick (✓) marks the profiles addressed.

The following table explains or justifies how the Knowledge Profiles (K3 – K8) have been addressed in EEE 4700/4800 (Project and Thesis).

K3: Addressing a systematic, theory-based formulation of engineering fundamentals for a smart agricultural harvesting robot involves a structured approach that integrates various disciplines and theoretical foundations. The following steps can guide the development of such a formulation: problem definition and scope, multidisciplinary analysis, conceptual design and modeling, integration of AI and automation, feasibility and performance analysis, safety and ethical considerations, prototyping and validation, regulatory and standards compliance, and documentation and knowledge transfer.

K4

K5: In the practice area of designing agricultural harvesting robots, essential knowledge includes robotics and automation, sensing and perception systems, mechanical engineering and design, agricultural science, energy systems and power management, data analysis and decision making, and human-robot interaction and safety. This multidisciplinary knowledge is crucial for developing robots capable of efficiently and safely harvesting crops while minimizing environmental impact and enhancing agricultural productivity.

K6: Gaining expertise in relevant technologies such as sensors, actuators, image processing algorithms, and machine learning models.

K7: Engineering in society explores the ethical, societal, and environmental implications of engineering solutions in agriculture, emphasizing public safety and sustainability. Addressing these problems requires a comprehensive strategy that considers technological innovation's wider social, economic, cultural, and environmental ramifications. To ensure that intelligent agricultural robots benefit both agriculture and society at large, engineers in this discipline must place a high priority on sustainability, public safety, and ethical decision-making.

K8

Table 6.5: Explanation/justification for each Knowledge Profile (K) in the context of smart agricultural harvesting robots.
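As a toy illustration of the image-processing expertise referenced under K6, the sketch below scores "ripeness" as the fraction of red-hued pixels, using only the Python standard library. The hue, saturation, and brightness cut-offs are arbitrary assumptions for demonstration, not calibrated values from the project:

```python
import colorsys

def ripeness_score(pixels):
    """Fraction of pixels whose colour falls in an assumed 'ripe red' band.

    `pixels` is an iterable of (r, g, b) tuples with components in 0..255.
    """
    pixels = list(pixels)
    if not pixels:
        return 0.0
    ripe = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        # hues near 0 or 1 are red; demand some saturation and brightness
        if (h < 0.05 or h > 0.95) and s > 0.4 and v > 0.3:
            ripe += 1
    return ripe / len(pixels)

# A pure-red patch scores 1.0; a pure-green patch scores 0.0.
```

A deployed system would instead run a trained detector on full camera frames; the point here is only that colour-space reasoning of this kind underlies the "image processing algorithms" named in K6.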

6.0.5 ATTRIBUTES OF RANGES OF COMPLEX ENGINEERING
PROBLEM SOLVING (P1 – P7) ADDRESSED
The following table shows the attributes of ranges of Complex Engineering Problem Solving (P1 –
P7) addressed in EEE 4700/4800 (Project and Thesis).

Complex engineering problems have characteristic P1 and some or all of P2 to P7:

P1 (Depth of knowledge required) ✓: Cannot be resolved without in-depth engineering knowledge at the level of one or more of K3, K4, K5, K6 or K8, which allows a fundamentals-based, first-principles analytical approach.

P2 (Range of conflicting requirements) ✓: Involve wide-ranging or conflicting technical, engineering and other issues.

P3 (Depth of analysis required): Have no obvious solution and require abstract thinking and originality in analysis to formulate suitable models.

P4 (Familiarity of issues) ✓: Involve infrequently encountered issues.

P5 (Extent of applicable codes) ✓: Are outside problems encompassed by standards and codes of practice for professional engineering.

P6 (Extent of stakeholder involvement and conflicting requirements) ✓: Involve diverse groups of stakeholders with widely varying needs.

P7 (Interdependence) ✓: Are high-level problems including many component parts or sub-problems.

Table 6.6: Summary of the range of complex engineering problem solving (P1–P7) and corresponding attributes; a tick (✓) marks the attributes addressed.

The following table explains or justifies how the attributes of ranges of Complex Engineering
Problem Solving (P1 – P7) have been addressed in EEE 4700/4800 (Project and Thesis).

P1: Developing an agricultural harvesting robot requires a deep understanding of engineering principles, agricultural science, robotics and automation, data analytics, AI, ethical considerations, regulatory compliance, and interdisciplinary collaboration. This comprehensive knowledge base is essential for creating effective and efficient robotic solutions tailored to agricultural needs.

P2: Developing an agricultural harvesting robot involves managing conflicting requirements such as precision vs. speed, adaptability vs. standardization, power efficiency vs. performance, autonomy vs. human intervention, cost vs. capability, environmental impact vs. productivity, and regulatory compliance vs. technological advancement. Balancing these factors is crucial for creating an effective and efficient robotic solution for agricultural harvesting.

P3: Developing agricultural harvesting robots requires a comprehensive analysis of crop characteristics, environmental factors, efficiency, precision, crop health, and integration with existing agricultural systems.

P4: Understanding the issues related to agricultural harvesting robots involves considering crop variability, terrain challenges, labor efficiency, crop health, and integration with existing agricultural systems.

P5: The applicable codes for agricultural harvesting robots encompass safety standards, environmental regulations, product quality standards, data privacy and security regulations, robotics industry standards, and ethical guidelines. These codes ensure the robot's safe operation, environmental compliance, product quality, data security, industry compatibility, and ethical harvesting practices.

P6: The development of an agricultural harvesting robot involves diverse stakeholders with varying needs, including farmers, researchers, policymakers, and technology providers.

P7: Engineering tasks are complex and involve many interconnected components, such as hardware design, software development, and environmental considerations.

Table 6.7: Explanation/justification for each P attribute in the context of developing agricultural harvesting robots.

6.0.6 ATTRIBUTES OF RANGES OF COMPLEX ENGINEERING
ACTIVITIES (A1 – A5) ADDRESSED
The following table shows the attributes of ranges of Complex Engineering Activities (A1 – A5)
addressed in EEE 4700/4800 (Project and Thesis).

Complex activities means (engineering) activities or projects that have some or all of the following characteristics:

A1 (Range of resources) ✓: Involve the use of diverse resources (and for this purpose resources include people, money, equipment, materials, information and technologies).

A2 (Level of interaction) ✓: Require resolution of significant problems arising from interactions between wide-ranging or conflicting technical, engineering or other issues.

A3 (Innovation) ✓: Involve creative use of engineering principles and research-based knowledge in novel ways.

A4 (Consequences for society and the environment): Have significant consequences in a range of contexts, characterized by difficulty of prediction and mitigation.

A5 (Familiarity) ✓: Can extend beyond previous experiences by applying principles-based approaches.

Table 6.8: Summary of the range of complex engineering activities (A1–A5) and corresponding attributes; a tick (✓) marks the attributes addressed.

The following table explains or justifies how the attributes of ranges of Complex Engineering
Activities (A1 – A5) have been addressed in EEE 4700/4800 (Project and Thesis).

A1: To develop an agricultural harvesting robot, several resources are required, including hardware, software, data, funding and support, and human resources. Given the complexity of developing an agricultural harvesting robot, a multidisciplinary approach involving mechanical, electrical, and software engineering, as well as agricultural expertise, is essential.

A2: It is imperative to address significant obstacles that may arise from the interplay of many technical, engineering, and agricultural concerns. This means resolving conflicts between needs, integrating various technologies, and optimizing the effectiveness of intelligent agricultural robots in complex agricultural environments.

A3: Innovative agricultural harvesting robot development involves interdisciplinary collaboration, adaptive design, data-driven decision making, sustainable solutions, human-robot interaction, field testing, and regulatory compliance.

A4: Agricultural harvesting robots can impact society by changing employment dynamics and requiring workforce training, while also affecting the environment through resource conservation and potential soil-health implications.

A5: To extend beyond previous experiences in developing an agricultural harvesting robot, apply principles-based approaches such as holistic systems thinking, iterative development, data-driven decision making, risk management, ethical considerations, and regulatory compliance. These approaches help create a robust, adaptable, and socially responsible solution that aligns with established engineering and agricultural principles.

Table 6.9: Explanation/justification for each A attribute in the context of developing agricultural harvesting robots.
