MiniProjectReport_GREENGUARDIAN
VIth Semester
Guide
Shri Ramdeobaba College of Engineering & Management, Nagpur 440 013
(An Autonomous Institute affiliated to Rashtrasant Tukdoji Maharaj Nagpur University Nagpur)
May 2025
CERTIFICATE
The project titled “Green Guardian: An AI-Based Intelligent Environmental
Monitoring and Advisory System” is aimed at developing a smart solution to monitor
environmental parameters and provide actionable insights using Artificial Intelligence.
The system integrates sensor-based data collection, intelligent analysis, and real-time
recommendations to promote eco-friendly
practices and raise awareness about environmental sustainability. This project was
undertaken as a part of the partial fulfillment for the degree of Bachelor of Technology
(B.Tech.) in Computer Science and Engineering at Shri Ramdeobaba College of
Engineering and Management, Nagpur, during the academic year 2024–2025. The
project emphasizes the practical application of AI and IoT technologies in addressing
real-world environmental challenges, and reflects the team's commitment to creating
innovative and impactful solutions for a greener future.
Date:
Place: Nagpur
_______________ ________________
Prof. Sushmit Saantra Dr. Preeti Voditel
Project Guide H.O.D
Department of CSE & ET Department of CSE & ET
________________
Dr. M. B. Chandak
Principal
DECLARATION
We hereby declare that the thesis titled “An AI-Based Intelligent Environmental Monitoring and
Advisory System” submitted herein, has been carried out in the Department of Computer Science
and Engineering of Shri Ramdeobaba College of Engineering and Management, Nagpur. The
work is original and has not been submitted earlier as a whole or part for the award of any
degree/diploma at this or any other institution/University.
Date:
Place: Nagpur
Gunjan Rathi 05
Aditya Singh 27
Ayam Bharadwaj 32
Ayush Ramteke 34
Sujal Kothari 64
APPROVAL SHEET
This report entitled “An Intelligent System for Plant Disease Detection and Cure Assistance
using Artificial Intelligence” by Gunjan Rathi, Aditya Singh, Ayam Bharadwaj, Ayush
Ramteke, and Sujal Kothari is approved for the degree of Bachelor of Technology (B.Tech.).
Date:
Place: Nagpur
_______________ ________________
Prof. Sushmit Saantra External Examiner
Project Guide
_______________
Dr. Preeti Voditel
H.O.D
Department of CSE & ET
ACKNOWLEDGEMENTS
We would like to express our deepest gratitude to Prof. Sushmit Saantra, our
project guide, for her invaluable guidance, constructive feedback, and unwavering
support throughout the development of our project, “An Intelligent System for
Plant Disease Detection and Cure Assistance using Artificial Intelligence”. Her
expertise and mentorship have been essential in bringing this project to fruition.
We are also grateful to our team members — Gunjan Rathi, Aditya Singh, Ayam
Bharadwaj, Ayush Ramteke, and Sujal Kothari — for their dedication,
collaboration, and hard work. The success of this project is a reflection of our
shared vision, teamwork, and the countless hours spent brainstorming, designing,
and implementing solutions together. Each team member brought unique strengths
and perspectives that enriched the overall outcome.
We would also like to acknowledge the support and resources provided by our
institution, which created a conducive environment for learning and innovation.
The tools, facilities, and guidance from our faculty members were invaluable in
completing this project.
Date:
-Projectees
Gunjan Rathi
Aditya Singh
Ayam Bharadwaj
Ayush Ramteke
Sujal Kothari
ABSTRACT
Agriculture plays a vital role in sustaining the economy and feeding the growing
global population. However, one of the major challenges faced by farmers is the
timely detection and management of plant diseases, which can significantly
reduce crop yield and quality. To address this issue, our project, “Green
Guardian: An AI-Based Intelligent System for Plant Disease Detection and
Cure”, presents a smart solution that leverages the power of Artificial Intelligence
and Machine Learning for accurate diagnosis and effective treatment
recommendations.
This project aims to assist farmers and agricultural professionals in early disease
detection, thereby reducing crop losses, minimizing pesticide use, and promoting
sustainable farming practices. The solution is designed to be user-friendly,
accessible via mobile or web platforms, and adaptable to various types of crops.
By integrating technology with agriculture, Green Guardian represents a step
forward in smart farming and precision agriculture.
TABLE OF CONTENTS
Certificate ii
Declaration iii
Approval Sheet iv
Acknowledgment v
Abstract vi
List of Figures ix
Chapter 1: Introduction 9
1.2 Motivation 10
Chapter 3: Methodology 23
3.2 CNN Model Architecture 25
3.7 Summary 29
Chapter 4: Implementation 30
Chapter 5: Discussions 35
References 41
CHAPTER 1
INTRODUCTION
Plant diseases have always been a major concern in agriculture, directly impacting crop
production and quality. The early identification and treatment of these diseases are
critical to ensure healthy plant growth and optimal yield. However, in many regions,
especially rural and remote areas, farmers face challenges in recognizing diseases early
due to limited access to agricultural experts and lack of awareness. This often results in
the unchecked spread of infections, heavy use of pesticides, and significant crop losses.
Recent advances in Artificial Intelligence (AI), Machine Learning (ML), and Computer
Vision have opened new avenues for solving agricultural problems. Particularly, image
classification techniques using Convolutional Neural Networks (CNNs) have shown
great promise in the field of plant disease recognition. By analyzing visual symptoms on
plant leaves, such systems can identify diseases with a high degree of accuracy.
The project Green Guardian is built upon this technological foundation. It aims to create
an intelligent system that not only detects plant diseases from images but also provides
useful recommendations for treatment and prevention. This system is designed to be
easily accessible, helping farmers make informed decisions and reduce dependency on
manual methods or costly expert consultations.
The integration of AI into agriculture marks a shift toward smart and precision farming,
where technology assists in enhancing productivity, reducing waste, and ensuring the
sustainable use of resources. Green Guardian is a step in this direction, empowering
farmers with a digital assistant that bridges the gap between traditional practices and
modern technological solutions.
1.2 Motivation
In recent years, the advancement of artificial intelligence (AI) and deep learning has opened
up new possibilities for tackling real-world problems. Computer vision, a subset of AI, has
shown remarkable success in image classification and pattern recognition tasks, making it a
promising tool for plant disease detection. With the proliferation of smartphones and
affordable cameras, capturing images of diseased crops is now easier than ever. By
integrating AI with agriculture, it is possible to develop intelligent systems that can identify
plant diseases from images, diagnose them accurately, and provide actionable insights for
treatment — all within seconds.
The motivation behind this project lies in the urgent need for scalable, accurate, and
accessible solutions to support plant health monitoring. Our aim is to build a deep learning-
based system that can automatically detect plant diseases from leaf images and suggest
appropriate cures or preventive measures. Such a tool not only helps farmers act swiftly and
effectively but also reduces the reliance on chemical treatments by promoting targeted and
timely interventions.
Furthermore, early and precise disease detection contributes to reducing crop loss, improving
food security, and minimizing environmental damage. It empowers even those with minimal
agricultural training to manage plant diseases with confidence. By leveraging technology in
this way, we can bridge the gap between modern agricultural knowledge and those who need
it the most — creating a more sustainable, resilient, and informed farming ecosystem.
While the primary focus of this project is on detecting and treating plant diseases, it is important
to consider the broader implications of artificial intelligence (AI) in agriculture and food systems
— particularly its growing relevance in culinary applications. AI is increasingly transforming
how food is produced, processed, and even prepared, with significant intersections emerging
between agricultural technology and the culinary world.
In the context of this project, which involves identifying diseases in plants and suggesting
appropriate cures, the role of AI can extend beyond the farm. Once healthy crops are ensured
through early disease detection, the produce makes its way through the food chain — ultimately
landing in kitchens, restaurants, and food manufacturing units. Here, AI has the potential to
enhance culinary decision-making by optimizing ingredient use, ensuring food safety, and even
recommending recipes based on the freshness and health status of the crops.
For example, AI systems can be integrated with agricultural data to predict crop yield and
availability, which in turn influences culinary planning and food supply management.
Additionally, AI models trained on plant disease data can also assist in identifying signs of
spoilage or contamination in post-harvest crops used in food preparation, thereby ensuring
only healthy, safe ingredients make it to the consumer’s plate.
In essence, AI forms a bridge between agriculture and gastronomy. The same technologies
used to detect diseases in the field can inform decisions in the kitchen — enhancing food
quality, safety, and sustainability. This project contributes to that ecosystem by ensuring the
first and most critical step: growing healthy plants. By doing so, it supports a pipeline of AI
applications that extend from farm to fork, reinforcing the role of technology in every stage
of the food journey.
While the core of this project revolves around plant disease detection using image-based
deep learning, NLP adds a valuable layer of user interaction and engagement. For
example, after the AI model identifies a disease from a plant image, NLP can be used to
interpret the results and generate clear, actionable descriptions in the user’s preferred
language. This allows users to understand the diagnosis and follow treatment
recommendations without needing technical knowledge or agricultural expertise.
Furthermore, NLP enables the development of chatbot assistants or voice-driven
interfaces that can respond to common farmer queries, for example about symptoms,
recommended treatments, or preventive measures.
Such interactions can be supported in multiple languages and dialects, making the system
more inclusive and regionally adaptable. This is particularly valuable in rural areas where
literacy levels or language barriers might otherwise limit access to technology.
NLP can also facilitate the integration of the system with agricultural knowledge bases,
allowing it to search and summarize relevant information from large datasets, manuals,
or online sources. This provides users with curated and up-to-date information without
needing to sift through complex documents.
CHAPTER 2
LITERATURE SURVEY
2.1 Introduction
Agriculture is one of the cornerstones of the Indian economy and food security. A significant
threat to crop yield and quality is plant diseases, which, if not identified early and accurately, can
lead to devastating economic losses and food shortages. Traditionally, disease identification is
performed manually by experts through visual inspection of leaf patterns and other plant parts.
This process is time-consuming, requires domain expertise, and is often subjective—leading to
inconsistent results.
Recent developments in Artificial Intelligence (AI), specifically in computer vision and machine
learning, have paved the way for automating this detection process. Leveraging the power of
Deep Convolutional Neural Networks (DCNN), researchers can now classify diseases in plant
leaves with high accuracy, even outperforming traditional image processing methods. This
section explores the evolution of plant disease detection methods, focusing on traditional
techniques, the rise of deep learning, dataset challenges, and current state-of-the-art research
aligned with our model’s architecture.
Case Study 1:
Title: Plant Disease Detection Using Image Processing and Machine Learning
Source: ArXiv, 2021
Authors: Pranesh Kulkarni, Atharva Karwande, Tejas Kolhe & Others
Summary:
This paper proposed a classical approach for plant disease detection by integrating basic image
processing techniques with machine learning classifiers. The system used standard operations
such as image resizing, noise filtering, and contrast enhancement. Otsu's method was employed
for segmentation, isolating the infected parts of the leaf. Features were extracted based on color
distribution, texture patterns (such as entropy and contrast), and shape. These were then
classified using SVM and k-Nearest Neighbors.
The researchers tested the method on five different plant species and 20 diseases, achieving
accuracy between 90% and 93% under controlled lighting conditions. While impressive, the
method was highly sensitive to changes in lighting, leaf orientation, and background noise.
Case Study 2:
Authors: Yasina and Fatima (2023)
Summary:
This study explored the performance of four advanced deep learning models—InceptionV3,
DenseNet121, ResNet101-V2, and Xception—on datasets containing images of tomato and corn
leaves affected by diseases such as blight, bacterial spot, and rust. The authors used the
PlantVillage dataset and applied data augmentation to expand their training set. Their model
training involved tuning hyperparameters such as learning rate, dropout rate, and the number of
dense layers.
DenseNet121 and Xception stood out, achieving validation accuracies over 97.5%. The research
confirmed that deeper CNN models with residual or dense connections learn complex, non-linear
patterns more effectively than shallow ones. The paper also emphasized how data augmentation
(rotation, zoom, brightness shift) helped improve model robustness.
Although our system does not use
pre-trained models like Xception or DenseNet121, our model architecture follows a similar
principle of increasing depth and non-linearity to improve detection capability. Like Yasina and
Fatima, we also implemented extensive image augmentation to make the model resilient to
image variability. Their high validation accuracy supports our hypothesis that deeper
convolutional architectures significantly outperform traditional models when detecting disease
symptoms across similar crops. Furthermore, their findings reinforce our future goal of testing
pre-trained models for improved transfer learning.
Case Study 3:
Authors: Zhou et al. (2023)
Summary:
This extensive review analyzed more than 100 peer-reviewed articles published between 2015
and 2023 on the topic of plant disease detection. The authors categorized the research into three
major approaches: traditional image processing, shallow learning models, and deep learning-
based methods. They found that CNNs consistently delivered the best performance in terms of
accuracy, scalability, and real-time applicability.
The review highlighted key challenges that persist across studies, such as dataset imbalance
(overrepresentation of certain diseases), environmental noise (like wind or sunlight glare), and
the lack of real-world datasets. Another significant point was the lack of interpretability in
CNNs. Despite their accuracy, many models fail to provide transparency in how they classify
diseases, which limits their acceptance in farming communities.
These findings informed our
decision to build a custom CNN model and focus on data augmentation to reduce overfitting.
The review’s emphasis on environmental variability inspired us to test our model on unrelated
datasets (e.g., Unsplash images) to assess false positive rates. Additionally, the authors’ concern
about model explainability points to future improvements we could make—such as integrating
visualization techniques like Grad-CAM to highlight disease regions on the leaf. Overall, this
case study situates our work within the broader research trajectory and justifies its relevance.
Case Study 4:
Authors: Khan et al. (2023)
Summary:
This study reviewed modern deep learning architectures and their suitability for real-time
disease detection in plants. Unlike other works focused on accuracy alone, this paper explored
how to deploy models in constrained environments like rural farms. The authors tested
lightweight CNNs like MobileNet, EfficientNet, and Tiny-YOLO on disease classification tasks
and evaluated their performance on smartphones and IoT devices.
They also introduced model optimization strategies such as quantization, pruning, and weight
sharing to reduce memory usage and improve inference speed without significantly losing
accuracy. The paper concluded that mobile-ready AI applications are the key to transforming
plant disease diagnosis into a scalable and accessible tool.
This insight supports our long-term goal of deploying the system on
edge devices like Raspberry Pi or Android phones.
The limitations of handcrafted feature engineering in traditional models led to a paradigm shift
toward deep learning—especially Convolutional Neural Networks (CNNs). These networks
automatically learn hierarchical representations of image features, thus reducing human
intervention and increasing generalization across datasets.
Yasina and Fatima (2023) explored various deep learning architectures such as InceptionV3,
ResNet-101-V2, Xception, and DenseNet121 on tomato and corn leaf images. Their experiments
demonstrated that deeper architectures, especially those with residual and depthwise separable
convolution layers, offered better performance due to their ability to learn more complex
patterns. They observed validation accuracies above 95%, and after optimizing the model with
batch normalization and dropout layers, they achieved results exceeding 97.5%.
In a broader review, Zhou et al. (2023) conducted a systematic literature survey on plant disease
detection and classification, reviewing over 100 papers. They concluded that CNNs significantly
outperform traditional approaches in terms of precision, recall, and scalability. Their study
emphasized that feature extraction via convolution layers made it possible to detect subtle
variations between similar diseases such as early and late blight, which are difficult to
distinguish using simple thresholding techniques.
Khan et al. (2023) presented a review of state-of-the-art deep learning models and their
applications in smart agriculture. They highlighted how fine-tuned architectures like EfficientNet
and MobileNet can be deployed on resource-constrained devices, making disease detection
accessible via smartphones. This finding is highly relevant for real-world deployment where
farmers may not have access to high-end computing resources.
Simonyan and Zisserman (2014) introduced the VGG network, a deep convolutional neural
network that achieved remarkable success in large-scale image recognition tasks. VGG’s simple
yet deep architecture set a standard for subsequent deep learning models. The use of deeper
networks allowed the detection of subtle image features that would be difficult for traditional
models to capture.
He et al. (2016) further enhanced deep learning architectures with the introduction of Residual
Networks (ResNets), which enabled the training of even deeper networks by addressing the
problem of vanishing gradients. This architecture is particularly useful for plant disease
detection, where networks often require deeper layers to capture complex disease patterns.
The model used in our project aligns with this research direction. By leveraging a custom-built
Deep CNN architecture trained on the New Plant Diseases Dataset, we achieved high validation
accuracy while keeping model complexity manageable for potential on-device inference.
The choice and quality of datasets play a pivotal role in the performance and generalizability of
deep learning models. One of the most widely used datasets is the PlantVillage dataset, which
consists of over 50,000 high-resolution images labeled for different crops and disease types. This
dataset serves as the benchmark for training CNN models in most research.
However, many studies—including ours—have opted to explore more diverse and realistic
datasets. The New Plant Diseases Dataset from Kaggle was used in our project. It includes a
broader range of disease categories and images captured in less controlled environments
compared to PlantVillage. This diversity improves the model's robustness and simulates real-
world deployment scenarios.
Yasina and Fatima (2023) also emphasized the importance of data augmentation in their
research. Techniques such as horizontal flipping, zooming, rotation, and brightness adjustment
help artificially expand the dataset and prevent overfitting. Similarly, our model implementation
used a range of augmentation methods during preprocessing to ensure better generalization.
2.5 Challenges in Plant Disease Detection
While deep learning models have revolutionized plant disease detection, several challenges still
hinder their widespread implementation:
a) Dataset Limitations
Most datasets, including PlantVillage, are collected under lab-like conditions—uniform lighting,
clean backgrounds, and isolated leaves. Such conditions do not reflect real-world agricultural
environments, leading to overfitting. Moreover, some diseases are underrepresented, creating
class imbalance and skewed learning outcomes.
b) Environmental Noise
Leaves can be misclassified as diseased due to damage from wind, pest bites, or even water
droplets. Some deep learning models falsely detect symptoms in these cases, especially when
trained on ideal datasets. Robustness to such noise remains a significant research gap.
c) Model Interpretability
Deep learning models are often criticized for being black boxes. Farmers and agricultural
officers may hesitate to trust predictions without clear explanations. Some researchers are
exploring Grad-CAM and LIME for visualizing attention maps on infected regions, which
might help bridge this gap.
d) Resource Constraints
Deploying CNN models in rural or low-infrastructure areas remains challenging due to limited
access to powerful GPUs or stable internet. Khan et al. (2023) suggested optimizing models
using pruning and quantization to make them suitable for mobile applications—a strategy that
aligns with our future development goals.
2.6 Summary and Insights
In summary, the field of plant disease detection has witnessed a significant transformation—
from rule-based systems and handcrafted features to data-driven and autonomous deep learning
models. Literature by Yasina and Fatima, Zhou et al., and Khan et al. highlights not just the
performance improvements brought by CNNs, but also the practical barriers in terms of real-
world deployment.
CHAPTER 3
METHODOLOGY
This chapter describes the end-to-end methodology of the GreenGuard Classifier, covering data
acquisition and preprocessing, CNN model design, training procedures, unknown class handling,
and integration into a Streamlit application with multilingual support.
To ensure a robust and generalizable model, two primary data sources were utilized: the New
Plant Diseases Dataset (Kaggle) and a curated set of non-plant “unknown” images.
1. Scope & Diversity: Over 38,000 high-resolution images covering eight plant species
(Apple, Corn, Potato, Tomato, Grape, Peach, Pepper, Strawberry) and their most
common diseases, plus healthy samples.
2. Label Quality: Expert-annotated disease labels following PlantVillage taxonomy,
ensuring consistency across classes.
3. Environmental Variability: Includes images captured under diverse lighting conditions,
backgrounds, and orientations to simulate real-world scenarios.
4. Train/Validation Split: Stratified sampling maintained per-class balance with an 80/20
split, preserving representative distributions in both sets.
5. Licensing & Provenance: Publicly available under permissive Kaggle license; metadata
retained to track image origins and ensure reproducibility.
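The stratified 80/20 split described in point 4 keeps each class's train/validation proportions equal. A simplified pure-Python sketch (the sample file names and class labels here are illustrative, not the project's actual data):

```python
import random
from collections import defaultdict

def stratified_split(samples, train_frac=0.8, seed=42):
    """Split (path, label) pairs so that every class contributes the
    same train/validation proportion (80/20 by default)."""
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append(path)
    rng = random.Random(seed)
    train, val = [], []
    for label, paths in by_class.items():
        rng.shuffle(paths)                      # shuffle within each class
        cut = int(len(paths) * train_frac)      # per-class 80% cutoff
        train += [(p, label) for p in paths[:cut]]
        val += [(p, label) for p in paths[cut:]]
    return train, val

# Illustrative data: 10 images per class.
data = [(f"img{i}.jpg", "healthy") for i in range(10)] + \
       [(f"img{i}.jpg", "Apple_scab") for i in range(10, 20)]
train, val = stratified_split(data)
print(len(train), len(val))  # 16 4
```

Because the cutoff is computed per class, both splits preserve the original class distribution, which is what prevents a rare disease from vanishing from the validation set.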
3. Preprocessing Consistency: Unknown samples resized, normalized, and augmented
identically to plant images to avoid statistical discrepancies.
4. Class Balance: Composed to represent ~5% of the training set, preventing the unknown
class from dominating or being underrepresented.
5. Quality Control: Manual review removed ambiguous or low-quality images (e.g.,
blurred, extreme color casts) to maintain clear non-plant characteristics.
Directory Organization:
dataset/
├── Apple/
│ ├── Apple_scab/
│ ├── Black_rot/
│ ├── Cedar_apple_rust/
│ └── healthy/
├── Corn/
│ ├── Cercospora_leaf_spot/
│ ├── Common_rust/
│ ├── Northern_leaf_blight/
│ └── healthy/
└── unknown/
1. Normalization:
○ Scale pixel values to [0, 1] by dividing by 255.
2. Data Augmentation:
○ Apply random rotations, zooms, horizontal flips, and brightness shifts during training.
3. Label Encoding:
○ Map class folder names to integer indices and one-hot encode them for training.
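The normalization and label-encoding steps can be illustrated with a small NumPy sketch (illustrative only, not the project's actual preprocessing code; the class names are examples taken from the directory layout above):

```python
import numpy as np

# Normalization: scale raw pixel values from [0, 255] to [0, 1].
def normalize(image):
    return image.astype(np.float32) / 255.0

# Label encoding: map a class folder name to a one-hot vector.
def one_hot(label, class_names):
    vec = np.zeros(len(class_names), dtype=np.float32)
    vec[class_names.index(label)] = 1.0
    return vec

classes = ["Apple_scab", "Black_rot", "healthy"]
img = np.array([[0, 128], [255, 64]], dtype=np.uint8)
print(normalize(img).max())           # 1.0
print(one_hot("Black_rot", classes))  # [0. 1. 0.]
```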
Inspired by VGG-style deep networks, the custom CNN comprises five convolutional blocks:
Block Filters Layers (Conv—ReLU) Pooling
1 32 2 2×2 Max
2 64 2 2×2 Max
● Dropout: 25% after final pooling; 40% after dense layer to combat overfitting.
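The architecture above can be sketched in Keras. This is a hedged reconstruction: the source table lists only blocks 1 and 2, so the filter counts for blocks 3–5 (128, 256, 512, following the VGG doubling pattern), the 128×128 input size, the dense-layer width, and the class count are assumptions; the dropout placements and rates match the description above.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 39  # assumed: disease/healthy classes plus the 'unknown' class

def conv_block(model, filters):
    # Each block: two Conv-ReLU layers followed by 2x2 max pooling.
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D(2))

model = models.Sequential()
model.add(layers.Input(shape=(128, 128, 3)))     # assumed input size
for f in (32, 64, 128, 256, 512):                # blocks 3-5 filters assumed
    conv_block(model, f)
model.add(layers.Dropout(0.25))                  # 25% after final pooling
model.add(layers.Flatten())
model.add(layers.Dense(256, activation="relu"))  # dense width assumed
model.add(layers.Dropout(0.40))                  # 40% after dense layer
model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
```

Five rounds of 2×2 pooling reduce the 128×128 input to a 4×4 feature map, keeping the flattened feature vector and the parameter count manageable for potential on-device inference.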
1. Training Configuration:
○ Batch size: 32
2. Callbacks:
3. Validation:
○ Achieved 94% train accuracy and 92% validation accuracy on the held-out set.
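The callbacks item above is left blank in the source. Early stopping on validation accuracy is a common choice for this kind of training loop and is shown here as an assumption; its core logic, in plain Python, is:

```python
class EarlyStopping:
    """Minimal early-stopping logic: stop when the monitored
    validation metric fails to improve for `patience` epochs."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("-inf")
        self.wait = 0

    def should_stop(self, val_accuracy):
        if val_accuracy > self.best:   # improvement: reset the counter
            self.best = val_accuracy
            self.wait = 0
            return False
        self.wait += 1                 # no improvement this epoch
        return self.wait >= self.patience

stopper = EarlyStopping(patience=2)
history = [0.80, 0.85, 0.84, 0.83]   # validation accuracy per epoch
stops = [stopper.should_stop(v) for v in history]
print(stops)  # [False, False, False, True]
```

In Keras the same behavior is provided by the built-in EarlyStopping callback, typically combined with ModelCheckpoint to keep the best weights.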
Two-tier strategy:
1. Training:
○ Ensures the model learns generalized "non-leaf/disease" features.
2. Post-Prediction Threshold:
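The second tier can be implemented as a confidence cutoff on the softmax output: if the highest class probability falls below a threshold, the image is reported as unknown rather than forced into a disease class. A sketch (the 0.60 threshold and class names are illustrative; the report does not state the values used):

```python
import numpy as np

def classify_with_threshold(probs, class_names, threshold=0.60):
    """Return the predicted class, or 'unknown' when the model
    is not confident enough (second tier of the strategy)."""
    idx = int(np.argmax(probs))
    if probs[idx] < threshold:
        return "unknown"
    return class_names[idx]

classes = ["Apple_scab", "Black_rot", "healthy"]
print(classify_with_threshold(np.array([0.92, 0.05, 0.03]), classes))  # confident
print(classify_with_threshold(np.array([0.40, 0.35, 0.25]), classes))  # uncertain
```

Combining this cutoff with the explicit unknown class trained in tier one reduces false positives on images that contain no plant leaf at all.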
○ Positioned translation controls adjacent to results, enabling on-demand conversion
without page reload.
● Requirements: Python 3.9+, TensorFlow 2.x, Streamlit, googletrans.
File Structure:
app.py
class_info.py
god_plant_disease_model_final.keras
dataset/licenses/...
● Execution: streamlit run app.py exposes a local web interface at localhost:8501.
3.7 Summary
CHAPTER 4
IMPLEMENTATION
In this section, we present the results obtained from the implementation and
evaluation of the intelligent system. This includes insights from user testing,
performance metrics, and data analysis that illustrate how well the system meets its
objectives.
Note: The limited dataset size may reduce the diversity of diseases represented.
4.1.1 User Input Provided
4.1.2 Accuracy
From the given tests, overall accuracy exceeds 95%, while disease prediction accuracy
ranges from 90–95% depending on the input provided.
Fig 1.1: Taking input in the form of an image
Fig 1.3: Details displayed after analysis
Fig 1.4: Google Translation of the recommended treatment
Fig 1.5: Mobile-friendly UI for phone use (app)
CHAPTER 5
DISCUSSIONS
This section interprets the results of the plant disease detection and cure system with
respect to its performance, limitations, and potential improvements. Key findings of the
development and implementation processes are discussed below:
The advisory component pairs each diagnosis
with clear and practical next steps. The system supports simplified language and
contextual explanations, making it accessible to users with minimal agricultural
background. The potential to add multilingual support in future iterations could further
increase system accessibility.
3. Limitations
Despite the promising results, the system has certain limitations. The accuracy of disease
classification may drop if images are of poor quality, under bad lighting, or contain
background noise (e.g., soil, other plants). Additionally, the NLP component currently
offers static recommendations and lacks adaptive learning — it does not yet evolve based
on user feedback or outcomes of suggested treatments. Also, the dataset used is limited in
terms of variety of crops and disease stages, which may hinder generalizability to new
regions or plant types.
CHAPTER 6
CONCLUSION AND FUTURE SCOPE
6.1 Conclusion
This project presents an intelligent, AI-powered system for the detection of plant diseases
and recommendation of appropriate treatments, aiming to assist farmers and agricultural
workers in maintaining healthy crops. By leveraging deep learning techniques such as
Convolutional Neural Networks (CNNs) for image-based classification, the system is
capable of identifying plant diseases with high accuracy, offering a reliable tool for early
detection and response.
The integration of Natural Language Processing (NLP) further enhances the user
experience by translating technical results into understandable, actionable advice. This
not only supports informed decision-making but also makes the technology accessible to
a broader audience, including those with limited technical knowledge. The system’s
potential is further extended through features like personalized suggestions, intuitive
interfaces, and the ability to incorporate user preferences in future iterations.
While the current system demonstrates strong performance, it is not without limitations.
These include dependency on high-quality images, limited personalization based on
ongoing user behavior, and restricted language support. However, the foundational
architecture is robust and scalable, providing a strong base for future enhancements such
as adaptive learning, multilingual support, and mobile deployment for offline use.
In summary, this project bridges the gap between AI research and practical agricultural
needs, offering a scalable solution to a real-world problem. It contributes to the ongoing
transformation of agriculture through intelligent automation and sets the stage for more
comprehensive, user-centered agri-tech innovations.
5. Integration with Agricultural Databases and Weather APIs
The system could be enhanced by integrating with external agricultural knowledge bases
or real-time weather data APIs. This would allow for context-aware recommendations,
such as adjusting treatments based on upcoming weather conditions or region-specific
disease outbreaks.
REFERENCES
7. Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional
Networks for Large-Scale Image Recognition. arXiv.
https://arxiv.org/abs/1409.1556
8. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for
Image Recognition. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (pp. 770–778).
https://doi.org/10.1109/CVPR.2016.90
9. Howard, A. G., et al. (2017). MobileNets: Efficient Convolutional Neural
Networks for Mobile Vision Applications. arXiv.
https://arxiv.org/abs/1704.04861
10. Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking Model Scaling for
Convolutional Neural Networks. In Proceedings of the 36th International
Conference on Machine Learning (pp. 6105–6114).
https://arxiv.org/abs/1905.11946
11. Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using Deep
Learning for Image-Based Plant Disease Detection. Frontiers in Plant
Science, 7, 1419. https://doi.org/10.3389/fpls.2016.01419