Plant Disease Detection Using Deep Learning
https://doi.org/10.22214/ijraset.2022.43700
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue VI June 2022- Available at www.ijraset.com
Abstract: Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of
the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and
recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease
diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions,
we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained
model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the
approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path
toward smartphone-assisted crop disease diagnosis on a massive global scale.
I. INTRODUCTION
The occurrence of plant diseases has a negative impact on agricultural production. If plant diseases are not discovered in time,
food insecurity will increase [1]. Early detection is the basis for effective prevention and control of plant diseases, and it plays
a vital role in the management and decision-making of agricultural production. In recent years, plant disease identification has
therefore become a crucial research problem.
Disease-infected plants usually show obvious marks or lesions on leaves, stems, flowers, or fruits. Generally, each disease or
pest condition presents a distinctive visible pattern that can be used to diagnose the abnormality. Leaves are usually the
primary source for identifying plant diseases, as most disease symptoms first begin to appear there [2].
In most cases, diseases and pests are identified on site by agricultural and forestry experts, or by farmers relying on their
own experience. This method is not only subjective but also time-consuming, laborious, and inefficient. Less experienced farmers
may misjudge the condition and apply agrochemicals blindly during the identification process, which reduces quality and yield,
causes environmental pollution, and leads to unnecessary economic losses. To counter these challenges, research into the use of
image processing techniques for plant disease recognition has become a hot research topic.
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 1009
experiments on a version of the PlantVillage dataset in which the leaves were segmented, removing all the extra background
information that might introduce inherent bias due to the regularized data-collection process of the PlantVillage dataset.
Segmentation was automated by means of a script tuned to perform well on our particular dataset. We chose a technique based on
a set of masks generated by analyzing the color, lightness, and saturation components of different parts of the images in
several color spaces (Lab and HSB). One of the steps of this processing also allowed us to easily fix color casts, which
happened to be very strong in some subsets of the dataset, thus removing another potential bias.
This set of experiments was designed to determine whether the neural network actually learns the "notion" of plant diseases,
or whether it merely learns the inherent biases in the dataset. Figure 2 shows the different versions of the same leaf for a
randomly selected set of leaves.
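The mask-based segmentation described above can be sketched as follows. This is a minimal illustration, not the exact script used here: the thresholds, the single saturation/value criterion, and the tiny synthetic image are all assumptions, whereas the real pipeline combined masks from several color spaces (Lab and HSB).

```python
import numpy as np

def leaf_mask(rgb, sat_thresh=0.25, val_thresh=0.2):
    """Rough leaf/background mask from saturation and value.

    rgb: float array in [0, 1] with shape (H, W, 3). The thresholds are
    illustrative; a real pipeline would tune them per subset and combine
    masks derived from several color spaces.
    """
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    value = cmax                                          # V channel of HSV
    sat = np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-8), 0.0)
    return (sat > sat_thresh) & (value > val_thresh)

# Toy image: a saturated green "leaf" patch on a gray (zero-saturation)
# background, standing in for a PlantVillage photograph.
img = np.full((4, 4, 3), 0.5)        # gray background
img[1:3, 1:3] = [0.1, 0.8, 0.1]      # green leaf patch
mask = leaf_mask(img)                # True only on the green patch
```

Because the gray background has zero saturation, only the leaf pixels survive the mask; the real script applied the same idea with tuned, per-channel masks.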
V. VISUALIZATION TECHNIQUE
In recent years, the successful application of deep learning to plant disease classification has opened a new line of research.
However, DL classifiers lack interpretability and transparency: they are often treated as black boxes that provide no
explanation or details about the classification mechanism. For plant disease classification, high accuracy alone is not
enough; it also matters how a detection is reached and which symptoms are present on the plant. Therefore, in recent years,
many researchers have devoted themselves to the study of visualization techniques, such as visual heat maps and saliency
maps, to better understand how plant diseases are identified. Among them, the works of [35] and [36] are crucial to
understanding how a CNN recognizes disease from images.
For example, Brahimi et al. [35] introduced saliency maps to visualize the symptoms of plant diseases. Mohanty et al. [10]
used the AlexNet and GoogLeNet architectures and evaluated the models on PlantVillage using precision (P), recall (R),
F1 score, and overall accuracy. They assessed the performance of these two well-known CNN architectures under three scenarios
(color, grayscale, and segmented images) and concluded that GoogLeNet outperformed AlexNet; the visualizations of the first
layer also clearly showed the disease spots. Cruz et al. [37] used an improved LeNet model to detect olive plant diseases,
employing segmentation and edge maps to identify the diseased regions. Brahimi et al. [38] proposed a new visualization
method: a teacher/student DL network introduced to localize plant disease spots, which, compared with existing methods,
produced a clearer visualization effect.
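The saliency maps discussed above measure how sensitive the classifier's disease score is to each input pixel. A minimal sketch of the idea follows; the finite-difference formulation and the toy linear "classifier" are illustrative assumptions (frameworks such as those used in [35] compute the same gradient with a single backward pass).

```python
import numpy as np

def saliency(score_fn, x, eps=1e-4):
    """Input-sensitivity map via central finite differences.

    Approximates |d score / d x_i| for every pixel. Deep learning
    frameworks obtain this with one backward pass; the finite-difference
    form simply makes the underlying idea explicit.
    """
    grad = np.zeros_like(x, dtype=float)
    for idx in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        grad[idx] = (score_fn(xp) - score_fn(xm)) / (2 * eps)
    return np.abs(grad)

# Toy "classifier" whose disease score depends only on a 2x2 lesion
# region, so the saliency map should light up exactly there.
lesion_weights = np.zeros((4, 4))
lesion_weights[1:3, 1:3] = 1.0
score = lambda img: float((lesion_weights * img).sum())

sal = saliency(score, np.random.default_rng(0).random((4, 4)))
```

For this linear score the map recovers the lesion weights exactly: high saliency on the 2x2 lesion region and zero elsewhere, which is precisely the "which symptoms drove the decision" information the visualization literature is after.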
VIII. DISCUSSION
Using a deep convolutional neural network architecture, we trained a model on images of plant leaves with the goal of
classifying both the crop species and the presence and identity of disease on images that the model had not seen before. Within
the PlantVillage dataset of 54,306 images containing 38 classes of 14 crop species and 26 diseases (or absence thereof), this
goal has been achieved, as demonstrated by the top accuracy of 99.35%. Thus, without any feature engineering, the model
correctly classifies crop and disease from 38 possible classes in 993 out of 1000 images. Importantly, while training the
model takes a lot of time (multiple hours on a high-performance GPU cluster), the classification itself is very fast
(less than a second on a CPU) and can thus easily be implemented on a smartphone. This presents a clear path toward
smartphone-assisted crop disease diagnosis on a massive global scale.
However, there are a number of limitations at the current stage that need to be addressed in future work. First, when tested on
a set of images taken under conditions different from the images used for training, the model's accuracy is reduced
substantially, to just above 31%. It's important to note that this accuracy is much higher than the one based on random
selection of 38 classes (2.6%), but nevertheless, a more diverse set of training data is needed to improve the accuracy. Our
current results indicate that more (and more variable) data alone will be sufficient to substantially increase the accuracy, and
corresponding data collection efforts are underway.
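The comparison against random guessing in the paragraph above is simple arithmetic and can be made explicit; the figures below are the ones reported in this paper, and the baseline is uniform guessing over the 38 classes.

```python
# Figures reported in this paper, compared against the random-guess baseline.
num_classes = 38
random_baseline = 1 / num_classes            # ~2.6% for uniform guessing

held_out_accuracy = 0.9935                   # test set from the same conditions
cross_condition_accuracy = 0.31              # images taken under new conditions

# Even the degraded cross-condition accuracy is over ten times the baseline,
# though still far below the held-out figure.
improvement = cross_condition_accuracy / random_baseline
```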
The second limitation is that we are currently constrained to the classification of single leaves, facing up, on a homogeneous
background. While these are straightforward conditions, a real world application should be able to classify images of a
disease as it presents itself directly on the plant. Indeed, many diseases don't present themselves on the upper side of leaves
only (or at all), but on many different parts of the plant. Thus, new image collection efforts should try to obtain images from
many different perspectives, and ideally from settings that are as realistic as possible.
X. FUTURE ENHANCEMENT
The disease detection system can be integrated into a cloud system for efficient result processing. The automated disease
detection system can also be integrated with sensors that measure soil conditions.
XI. CONCLUSION
The initial algorithm used in the proposed work was SVM. SVM gave good results when the number of detection categories was
small, but as the number of disease categories increased it failed to maintain accuracy. Transfer learning is currently an
effective approach for obtaining better model performance with a minimal and faster training phase, and it proved very
effective with the proposed framework. The framework was able to attain good accuracy with all three models (VGG-16,
ResNet-50, and ResNet-50 v2), with the ResNet-50-based transfer learning model being slightly more efficient than the other
models. The proposed framework is effective for multiclass classification of various diseases along with healthy leaves,
covering crops of pepper, potato, and tomato.
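The transfer-learning setup described in the conclusion (a frozen pretrained base plus a newly trained classification head) can be sketched in miniature. This is a toy illustration under stated assumptions: the "pretrained" base here is random rather than an ImageNet-trained VGG-16 or ResNet-50, and the data are synthetic stand-ins for leaf feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor. In real transfer learning these
# weights come from a network such as VGG-16 or ResNet-50 trained on
# ImageNet; random weights merely stand in for the frozen base here.
W_base = rng.normal(size=(16, 32))
extract = lambda x: np.maximum(x @ W_base, 0.0)   # frozen ReLU features

# Synthetic two-class data (a stand-in for healthy vs. diseased leaves),
# labeled so that the classes are separable in the frozen feature space.
X = rng.normal(size=(200, 16))
F = extract(X)
scores_true = F @ rng.normal(size=32)
y = (scores_true > np.median(scores_true)).astype(float)

# Train only the new classification head (logistic regression) while the
# base stays fixed -- the essence of the transfer-learning setup.
w, b = np.zeros(32), 0.0
lr = 0.05
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))        # sigmoid head
    w -= lr * F.T @ (p - y) / len(y)              # gradient step on head only
    b -= lr * (p - y).mean()

train_acc = (((F @ w + b) > 0) == (y > 0.5)).mean()
```

Because only the small head is optimized, training is fast and needs little data, which is the "minimal and faster training phase" advantage the conclusion attributes to transfer learning.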
REFERENCES
[1] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). "Going deeper with convolutions," in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition.
[2] GSMA Intelligence (2016). The Mobile Economy - Africa 2016. London: GSMA.
[3] Raza, S.-A., Prince, G., Clarkson, J. P., and Rajpoot, N. M. (2018). Automatic detection of diseased tomato plants using thermal and stereo visible
light images. PLoS ONE 10:e0123262. doi: 10.1371/journal.pone.0123262
[4] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2019). ImageNet large scale visual recognition challenge. Int. J. Comput.
Vis. 115, 211-252. doi: 10.1007/s11263-015-0816-y
[5] Lowe, D. G. (2019). Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91-110. doi: 10.1023/B:VISI.0000029664.99615.9