


4th International Conference on Electrical Information and Communication Technology (EICT), 20-22 December 2019, Khulna, Bangladesh

A Computer Vision System for Bangladeshi Local Mango Breed Detection using Convolutional Neural Network (CNN) Models

A. S. M. Farhan Al Haque, Md. Riazur Rahman, Ahmed Al Marouf, Md. Abbas Ali Khan
Department of Computer Science and Engineering, Daffodil International University
Email: [email protected], riazur_rahman@daffodilvarsity.edu.bd, [email protected], [email protected]

Abstract—Mangifera indica, traditionally known as mango, is a drupe found around the world in over 500 species. India produced 19.5 million metric tons of mangoes in 2017. In Bangladesh, the mango has been referred to as the national tree, and the government has included endemic species of mango in the geographical indication (GI) list of Bangladesh. Recognizing specific breeds has therefore become a significant computer vision task. In this paper, we propose a convolutional neural network (CNN) based approach for detecting five mango species, namely Chosha, Fazli, Harivanga, Lengra and Rupali, from 15,000 different images. For better experimentation, we have applied three different CNN models and analyzed the recognition rates under various criteria. For performance evaluation, we have utilized the classic metrics such as precision, recall, F1-score, ROC and accuracy. Among the three experimented models, the third model performed best, with an accuracy of 92.80%.

Keywords—Mangifera indica, Mango Species Detection, Computer Vision, Convolutional Neural Network (CNN).

I. INTRODUCTION

Mango is one of the most popular seasonal fruits in Bangladesh. Along with its mesmerizing taste, according to the USDA National Nutrient Database, mango contains significant amounts of protein, carbohydrate, fiber and sugars [1]. Mangoes are a good source of antioxidants, containing certain phytochemicals such as gallotannins and mangiferin which have been studied for their health benefits [2]. With these health benefits in mind, people love to consume mangoes of different species, as different species have different sizes and tastes. Farmers are gaining interest day by day, as the profit margin is higher for growing mango than for other fruits. Though professional farmers are quite aware of the breeds, common people have limited knowledge of mango breeds. Many people are growing interested in roof gardening, yet choosing the right breed of mango is a difficult task for them. An image processing based application could therefore be useful for interested growers to detect different breeds of mango trees.

With the rapid evolution of computer vision, researchers have addressed the species recognition [3-5] and disease recognition [5-7] problems in various ways. Applying classic machine learning based classification algorithms is one of the traditional ways to recognize different species. In such recognition systems, researchers have to determine the feature extraction methods and feature descriptors that define distinguishable criteria for each class. Traditional feature descriptors such as local binary patterns (LBP) [8-9], the scale invariant feature transform (SIFT) [10], speeded up robust features (SURF) [11] and histograms of oriented gradients (HOG) [12] are applied for better classification.

In the context of image processing, images are the vital stimuli, and various step-wise processes are adopted to find the best features of each class. One of the main problems with image input is that the images can be noisy or very pure. Training a machine learning model only on pure images would bias it; using a mixture of noisy and fair images is preferable for the overall detection system. Therefore, for the experimental work in this paper, we gathered around 15,000 images for training, testing and validation, because the more inputs we feed into a neural network, the more accurate the resulting model may become.

For clustering and classification, neural networks have proven to work better than traditional classifiers in many cases. Based on the supplied inputs and associated weights, a net input function and an activation function are built for generating the output. In comparison with this simplified version of a neural network, a convolutional neural network (CNN) takes the input image and applies convolutions with defined kernels, followed by max-pooling or average-pooling layers and hidden layers towards the output. Based on the kernel sizes and the parameters assigned to the kernel and pooling layers, such models can be used for various image based recognition systems. For our experiment, we have assigned parameters to three different models and run them for mango breed detection. The evaluation metrics are kept traditional, such as precision, recall, F1-score and accuracy; the area under the ROC curve of the models is also reported.



In this paper, we have formalized an RGB color image dataset of five Bangladeshi local mango breeds and utilized the dataset for detecting mango breeds using the ConvNet algorithm. For the experimental analysis, we have used three different ConvNet models with different parameters and reported the differences between them. The final part of each model is a classical neural network in which each neuron corresponds to a feature that helps classify the images. This detection system could be utilized for many related applications such as mango ripeness detection, mango freshness/quality detection and various disease detection processes. The rest of the paper is organized as follows: Section II discusses the related works, Section III describes the research methodology, Section IV presents the experimental evaluation, Section V discusses CNN model optimization, and Section VI concludes the paper.

II. RELATED WORKS

Though mango breed classification is an interesting and promising task with many applications, surprisingly there has not been any work reporting the classification of mango fruit breeds, although a few works report mango detection for various applications. We briefly present them as follows.

The authors in [1] proposed a method for detecting mangoes on trees using a combination of the Randomized Hough Transform (RHT) and a Backpropagation Neural Network (BNN). RHT was used for detecting initial candidate locations for mangoes in the image, and the BNN was used for detecting mango fruits among these candidates. They reported an accuracy of 96.26% for clear fruit images. The accuracy decreased for occluded or overlapped fruits, though they did not report any accuracy value for this case. Yet another work on mango detection has been reported in [2], where detection was achieved using a framework based on the Single Shot MultiBox Detector (SSD) network, an improved neural network based on deep learning. The authors reported promising results, with the system attaining a 90% F-score at 35 fps for 400 x 400 input images.

Some research has also been performed on yield estimation of mango fruits. In [3], an approach based on a multi-sensor framework using multi-view geometry has been proposed. This system, using an R-CNN for detection, reported very promising results with only a 1.36% error rate per individual tree. The authors also reported that with multi-view settings the system can achieve state-of-the-art performance with zero calibration. In another work [4] on mango yield estimation, the authors proposed two new techniques for automatic yield estimation of fruit in images of mango tree canopies, one using texture-based dense segmentation and the other using shape-based fruit detection. They reported an overall 68% F-score for their system, with estimated fruit counts within 16% of the actual fruit counts.

There are a few more research works on mango fruit grading. The authors in [5] proposed approaches and algorithms using fuzzy image processing, content based analysis and statistical analysis to decide the grade of mango fruits, and reported more than 80% accuracy in detecting mango grades. Another work [6] on mango grade detection proposed a computer vision based approach using the Recursive Feature Elimination (RFE) technique with a Support Vector Machine (SVM) classifier. According to their reported results, the system performs with a 3% detection error, a maturity prediction accuracy of 96%, a surface defect accuracy of 92% and a grading accuracy of 90%.

There are also a few more works related to mango detection, such as mango leaf deficiency detection [7], detection of mango ripening stages [8] and leaf image datasets for various detection systems [9].

Fig. 1. The proposed methodology

III. RESEARCH METHODOLOGY

In this section, we focus on the proposed model for our research to classify different fruits, their breeds and the quality of the fruits. The overall process of recognizing fruits using a deep CNN is shown in pictorial form in Figure 1. The research work has been carried out through the necessary steps described in the subsections below.

A. Image Acquisition

Once the problem domain had been fixed and the problem analysis completed, the next phase of our work was to capture the images. Very few images were available on the internet, so most of the images were captured by us in different locations and situations using a Nikon D7200 DSLR camera (24.2 megapixel, ISO 100-25,600, 3.2 in. diagonal TFT-LCD screen with 1,228,800 dots and wide viewing angle, 6 frames per second shooting capacity, 24 x 16 mm image sensor, 51-point autofocus system) with an 18-140 mm lens. In our research, we have captured images of the 5 breeds of mango shown in Figure 2.

B. Dataset

To yield the best possible performance from our CNN model, we have collected a large dataset of approximately 15,000 images, about 3,000 images per sample class. The images are divided into training, validation and test sets using the classical holdout method: the model is trained with a considerable 2,200 images per sample class, validated with 400 images, and then tested with another 400 different images to investigate the classification performance.

Fig. 2. Five breeds of mango: (a) Chosha, (b) Harivanga, (c) Rupali, (d) Lengra, (e) Fazli
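As a rough illustration of this holdout split, the sketch below divides each breed's folder into 2,200 training, 400 validation and 400 test images. The directory layout (dataset/<breed>/*.jpg, output under splits/) and the fixed random seed are assumptions for illustration, not details given in the paper.

```python
# Hypothetical per-class holdout split (2200/400/400), sketched with the
# standard library only; folder names are illustrative assumptions.
import random
import shutil
from pathlib import Path

SRC = Path("dataset")                      # assumed: one sub-folder per breed
DST = Path("splits")                       # output: splits/<split>/<breed>/
COUNTS = {"train": 2200, "val": 400, "test": 400}

random.seed(42)
for breed_dir in sorted(SRC.iterdir()):
    if not breed_dir.is_dir():
        continue
    images = sorted(breed_dir.glob("*.jpg"))
    random.shuffle(images)
    start = 0
    for split, n in COUNTS.items():
        out_dir = DST / split / breed_dir.name
        out_dir.mkdir(parents=True, exist_ok=True)
        for img in images[start:start + n]:
            shutil.copy2(img, out_dir / img.name)
        start += n
```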

C. Noise Removal

One of the key challenges of working with images is the noise that is generally introduced while the images are captured. As the performance of the CNN model relies heavily on image quality, noise removal techniques are crucial. We have deployed a fuzzy filter to remove Gaussian noise [21]. If the input pixel intensity is denoted by f_p and f_max denotes the maximum intensity value among its 8 neighboring pixels, then a membership function is calculated using equation (1). As we use three RGB channels for the CNN model, the filter works separately on the R, G and B components for noise reduction, and the final result is obtained by concatenating the three component results.

F_p = \begin{cases} 1, & \text{if } f_p = 0 \text{ or } f_p = 255 \\ \exp\!\left(-\dfrac{(f_p - f_{\max})^2}{2 \times 8 \times \sigma}\right), & \text{otherwise} \end{cases} \qquad (1)
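A minimal NumPy/SciPy sketch of this per-channel filtering is given below. Only the membership function of equation (1) is specified above; the smoothing step that uses it belongs to the modified fuzzy filter of [21], so the membership-weighted blend with the 3x3 neighborhood mean used here, and the value of sigma, are assumptions for illustration.

```python
# Sketch of equation (1) applied separately to the R, G and B channels.
# The final blending rule is an assumption; see [21] for the full filter.
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def fuzzy_denoise(image: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """image: H x W x 3 uint8 RGB array; sigma is an assumed spread parameter."""
    footprint = np.ones((3, 3), dtype=bool)
    footprint[1, 1] = False                              # 8-neighbourhood only
    out = np.empty_like(image, dtype=np.float64)
    for c in range(3):                                   # R, G, B handled separately
        f_p = image[..., c].astype(np.float64)
        f_max = maximum_filter(f_p, footprint=footprint) # max of the 8 neighbours
        membership = np.where(
            (f_p == 0) | (f_p == 255),
            1.0,
            np.exp(-((f_p - f_max) ** 2) / (2 * 8 * sigma)),
        )
        local_mean = uniform_filter(f_p, size=3)
        # Assumed smoothing: high membership keeps the pixel, low membership
        # pulls it towards the local mean.
        out[..., c] = membership * f_p + (1.0 - membership) * local_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```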
of convolutional layers, the number of filters deployed per
layer, even the size and number of neurons in each layer of
D. Overfitting Elimination neural network. The selection of model configuration is very
Overfitting is a major setback for the performance of any critical as we never want to apply unnecessarily complex
machine learning approaches which causes the model to model resulting overfitting.
memorize the details of the training data too closely but fail to
extract feature. Due to the shortfall of generalizing the features E. Deep Learning Model
the model cannot perform well on the test data. Resulting a The deep learning CNN model that we have deployed in
very high accuracy for the training set and very poor result in our research has multiple convolutional and max pooling
the validation and test set. Several classical methods have layer follower by a fully connected neural network having
been used to prevent this overfitting problem from the CNN neurons that contains different features. We have tried 3 CNN
model and boost accuracy. model having different configuration to find out which model
yields the best output. The model configurations are given in
1. Image Augmentation: Table I.
Generally image collection and preprocessing phase is In model M1, there are 2 convolutional layers each
very time consuming and tedious. But equipping a deep followed by max pooling layer which in turn is fed to neural
learning model to produce great image classification results network. The first convolution layer will have 16 filters and
is not possible without a substantial number of training the second layer has 32 filters. The max pooling layer shrinks
image. Image augmentation steps in with a viable solution the feature maps with a kernel size of (2, 2). The number of
creating different version of images from a training image by neurons in the neural network is 256. The model M2 has 3
multiple operations of processing and manipulation. The API convolutional layers having 16, 32 and 64 filters respectively
used in Keras for image augmentation is and the number of neurons is 512. The model M3 contains 4
ImageDataGenerator. We have applied several techniques convolutional layers with filters 32, 64, 64 and 128
like re-scaling, shearing, height shifting, width shifting, respectively and the neural network has a massive 1024
zoom, rotation, horizontal flip etc. techniques are used to neurons. The number of parameters accumulated in each
augment data. layer are given in Table I. The learning rate has been kept 1e-
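A minimal sketch of such an augmentation pipeline with Keras' ImageDataGenerator follows; the specific range values, the 224x224 target size and the splits/ directories (from the earlier split sketch) are assumptions, since the paper does not list them.

```python
# Keras ImageDataGenerator with the augmentation operations named above.
# Parameter values are illustrative placeholders, not taken from the paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # re-scaling
    shear_range=0.2,          # shearing
    height_shift_range=0.1,   # height shifting
    width_shift_range=0.1,    # width shifting
    zoom_range=0.2,           # zoom
    rotation_range=20,        # rotation
    horizontal_flip=True,     # horizontal flip
)
eval_datagen = ImageDataGenerator(rescale=1.0 / 255)   # no augmentation for val/test

train_gen = train_datagen.flow_from_directory(
    "splits/train", target_size=(224, 224), batch_size=32, class_mode="categorical")
val_gen = eval_datagen.flow_from_directory(
    "splits/val", target_size=(224, 224), batch_size=32, class_mode="categorical")
```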
2. Ridge Regression Regularization

Simply put, L2 regularization reduces the complexity of the model by adding a squared magnitude coefficient that penalizes the given loss function. This least squares penalty term pushes the model towards underfitting, and it has proved to be a very efficient way to handle the overfitting problem.

3. Dropout

Dropout is a probable solution for overfitting that works by dropping, or ignoring, a random number of neurons in the neural network. It is ideally efficient for introducing non-linearity, forcing a deep learning model to learn more robust features and reducing dependencies on particular features or neurons. In our work, we have tried turning off different numbers of neurons and experimentally found that 50% dropout generates the best classification accuracy.

4. Reduce Architecture Complexity

Overtraining on the training data is very likely to cause overfitting. We have to be cautious about the number of convolutional layers, the number of filters deployed per layer, and even the size and number of neurons in each layer of the neural network. The selection of the model configuration is very critical, as we never want to apply an unnecessarily complex model that results in overfitting.

E. Deep Learning Model

The deep learning CNN model that we have deployed in our research has multiple convolutional and max-pooling layers followed by a fully connected neural network whose neurons capture different features. We have tried 3 CNN models with different configurations to find out which model yields the best output. The model configurations are given in Table I.

In model M1, there are 2 convolutional layers, each followed by a max-pooling layer, which in turn feed a neural network. The first convolutional layer has 16 filters and the second has 32 filters. The max-pooling layers shrink the feature maps with a kernel size of (2, 2), and the number of neurons in the neural network is 256. Model M2 has 3 convolutional layers with 16, 32 and 64 filters respectively, and the number of neurons is 512. Model M3 contains 4 convolutional layers with 32, 64, 64 and 128 filters respectively, and its neural network has a massive 1024 neurons. The number of parameters accumulated in each layer is given in Table I. The learning rate has been kept at 1e-3 for M1 and M2, whereas M3 uses 1e-4. The batch size for the three models is 32, so 32 images are fed into the CNN models at a time for training and testing. As the classification problem is multiclass, we have used the categorical cross-entropy loss function, and the Adam optimizer is used for all the models.
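A minimal Keras sketch of the M3 configuration described above follows. ReLU activations, the 224x224x3 input size (chosen to be consistent with the dense-layer parameter count in Table I) and the L2 coefficient are assumptions; the filter counts, 2x2 pooling, 1024-neuron dense layer, 50% dropout, Adam optimizer with a 1e-4 learning rate and categorical cross-entropy loss follow the description above.

```python
# Sketch of the M3 architecture; see the lead-in for which details are assumed.
from tensorflow.keras import layers, models, optimizers, regularizers

def build_m3(num_classes: int = 5, dropout_rate: float = 0.5) -> models.Sequential:
    model = models.Sequential([
        layers.Input(shape=(224, 224, 3)),               # assumed input resolution
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(1024, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),  # ridge penalty (assumed value)
        layers.Dropout(dropout_rate),                            # 50% dropout by default
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```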
TABLE I. MODEL CONFIGURATION OF THE CNN MODELS

Model | Layer          | No. of Kernels / Neurons | Kernel Size | No. of Parameters | Learning Rate | Batch Size
M1    | Conv           | 16                       | [3, 3]      | 896               | 0.001         | 32
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Conv           | 32                       | [3, 3]      | 9248              |               |
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Neural Network | 256                      | -           | 95552512          |               |
M2    | Conv           | 16                       | [3, 3]      | 896               | 0.001         | 32
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Conv           | 32                       | [3, 3]      | 9248              |               |
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Conv           | 64                       | [3, 3]      | 18496             |               |
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Neural Network | 512                      | -           | 44303360          |               |
M3    | Conv           | 32                       | [3, 3]      | 896               | 0.0001        | 32
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Conv           | 64                       | [3, 3]      | 18496             |               |
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Conv           | 64                       | [3, 3]      | 36928             |               |
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Conv           | 128                      | [3, 3]      | 73956             |               |
      | MaxPool        | -                        | [2, 2]      | -                 |               |
      | Neural Network | 1024                     | -           | 18875392          |               |

TABLE II. ACCURACY OF THE THREE CNN MODELS ON THE TRAINING, VALIDATION AND TEST SETS

Model | Training Accuracy | Validation Accuracy | Test Accuracy
M1    | 81.62%            | 82.38%              | 81.16%
M2    | 86.62%            | 88.35%              | 90.42%
M3    | 90.28%            | 93.60%              | 92.80%

TABLE III. PERFORMANCE METRICS FOR THE CNN MODEL M3

Sample Name | Precision | Recall | F1 Score
Chosha      | 0.92      | 0.96   | 0.94
Fazli       | 1.00      | 1.00   | 1.00
Harivanga   | 0.86      | 1.00   | 0.93
Lengra      | 0.91      | 0.80   | 0.85
Rupali      | 0.96      | 0.88   | 0.92

IV. EXPERIMENTAL EVALUATION


We have implemented 3 CNN models in our work. All three models have been trained for 100 epochs to equip each model for its best efficiency. The batch size was 64 for the training set and 32 for both the validation and test sets. All the CNN models have been deployed with the Adam optimizer along with the categorical cross-entropy loss function, as the problem is multiclass with five breeds of mango. The learning rates for the three models are 1e-3, 1e-3 and 1e-4 respectively, and we experimentally found that the lowest learning rate, 1e-4, is the most efficient for converging to the optimal solution. The accuracy results for the training, validation and test sets are presented in Table II. The table shows that the M3 model turns out to be the best performer in classifying the five breeds of mango: the accuracy yielded by M3 is 90.28% on the training set, 93.60% on the validation set and 92.80% on the test set, outperforming the M1 and M2 models with accuracies of 85.76% and 89.43% respectively.

The accuracy curves for the training and validation sets are plotted in Figure 3. It can be seen from the figure that both curves stay quite close while converging towards the global optimum, and the accuracy on the test set was better than on the validation set in most epochs. The model is found to converge to a very good accuracy within very few epochs: within 20 epochs, the curves rise above the 80% accuracy level, which is a notable aspect of the model's performance. Similarly, Figure 4 shows the loss curves for the two sets. In the case of loss, the training set has more optimized values throughout the epochs, but in the later epochs the loss of the validation set improves and moves towards the most optimal value. The optimization rate of the loss is quite high, as the loss sharply drops to a very low value at the very early stage of training and the optimization remains steady until the end of the run.

Fig. 3. Accuracy curves of the CNN model for the training and validation sets

Fig. 4. Loss curves of the CNN model for the training and validation sets

For a meticulous experimentation, accuracy alone might not reflect the robustness of the model. From that standpoint, we have further evaluated the precision, recall and F1 score of the best model, reported in Table III, to investigate its performance rigorously. The results of these performance metrics explicitly support the conclusion that model M3 is very efficient at classifying the mangoes into their corresponding classes.
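As a hedged sketch of how these per-class metrics and the ROC-AUC could be computed, the snippet below uses scikit-learn on the predictions of the trained model; model, eval_datagen and the splits/test directory are assumed to come from the earlier sketches and are not part of the paper.

```python
# Per-class precision/recall/F1 and a macro one-vs-rest ROC-AUC on the test set.
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score

test_gen = eval_datagen.flow_from_directory(
    "splits/test", target_size=(224, 224), batch_size=32,
    class_mode="categorical", shuffle=False,   # keep order aligned with .classes
)
probs = model.predict(test_gen, verbose=0)     # shape: (n_samples, 5)
y_pred = np.argmax(probs, axis=1)
y_true = test_gen.classes                      # integer labels from the generator

print(classification_report(y_true, y_pred,
                            target_names=list(test_gen.class_indices)))
print("Macro ROC-AUC:", roc_auc_score(y_true, probs, multi_class="ovr"))
```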
Fig. 5. ROC curve for the CNN model

Fig. 6. Optimization of the CNN model over 100 epochs (training vs. test accuracy)

Looking at the finest metric values, the Fazli class produces a perfect 100% for all three performance metrics. The precision value for Rupali is very promising at 96%, while Chosha has 96% recall and a 94% F1 score. Furthermore, the ROC curve for the model has been derived and plotted in Figure 5 and exhibits great classification quality as well: the anticipated performance of a deep learning model can be perceived from the ROC curve, and a very high area under the ROC curve (AROC) can easily be predicted from it. The value obtained for the AROC is 97.3%.

V. CNN MODEL OPTIMIZATION


Deep learning algorithms generally deploy a surprisingly large number of parameters. In our CNN model we have used 3,560,709 parameters: in the first convolutional layer, 896 parameters were used for 32 filters; then 18,496 parameters were used in the second convolutional layer, where 64 filters are applied; and a massive 3,277,824 parameters were required to train the neural network part of the CNN model, which accommodates 1024 neurons. Beyond these, there are a few hyper-parameters for which we present optimization analyses: the number of epochs, the learning rate, different optimizers and the dropout ratio.

A. Number of Epochs

The number of epochs is the number of times the CNN model is trained over the training data. As anticipated, the more the CNN model trains, the better its detection performance becomes; but overtraining may lead the model to overfitting, so the improvement rate needs to be monitored carefully, and training should be stopped once the improvement stays within a minimum threshold. Figure 6 shows the optimization of the CNN model: the model converges to a good accuracy in the very first epochs, reaches a very appreciable 82.40% test accuracy within the first 50 epochs, and converges to the optimal solution of 93.60% at the 100th epoch.
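One standard way to implement this monitored stopping is Keras' EarlyStopping callback, sketched below; the min_delta and patience values are placeholders, and build_m3, train_gen and val_gen are assumed from the earlier sketches.

```python
# Stop training once validation accuracy stops improving beyond a threshold.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_accuracy",     # quantity to watch
    min_delta=0.001,            # the "minimum threshold" of improvement
    patience=10,                # epochs to wait before stopping
    restore_best_weights=True,  # roll back to the best epoch
)

model = build_m3()
history = model.fit(
    train_gen,
    validation_data=val_gen,
    epochs=100,                 # upper bound used in the paper
    callbacks=[early_stop],
)
```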
B. Learning Rate

The learning rate is the hyper-parameter that governs the rate at which the model acquires its classification ability as gradient descent moves towards the optimal solution. A higher learning rate lets the model take longer jumps, which often causes it to miss the local optima, while a comparatively very low rate produces very slow convergence, so a trade-off between training time and convergence arises. The CNN model performs differently for the varying learning rates; the loss curves for the different rates are shown in Figure 7. Evidently, a very high learning rate does not get near the minimum loss at all, whereas the smaller rates tend to approach the minimum loss value. The best result is obtained for 1e-4, which has a very steady loss curve compared with the curve for 1e-3, where fluctuations of the loss over the epochs are observed. For the higher learning rate of 1e-2, the initial loss may be the lowest value the model ever reaches, without it ever getting near the optimized loss.

Fig. 7. Loss curves for varying learning rates
C. Dropout

The dropout hyper-parameter is very effective for eliminating the overfitting problem. The technique turns off a specified fraction of neurons in the fully connected neural network, which introduces non-linearity and forces out the possibility of overfitting from the model, so dropout has a potential effect on the accuracy. In our research, we have compared the accuracies obtained for different dropout ratios and experimentally obtained the best result for 50% dropout. Figure 8 shows the accuracy variation over the different dropout ratios. Evidently, from the curve obtained, the accuracy of the same model can be as low as 65.74%, while it reaches the top of the graph at 50% dropout. With increasing dropout beyond that, as expected, the accuracy falls, because when a larger number of neurons is turned off the important features are also disabled for classification, so the accuracy drops drastically.

Fig. 8. Test accuracy for different dropout ratios
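A hedged sketch of such a dropout sweep is shown below, reusing the build_m3 sketch from Section III with its dropout rate varied; the swept values and the reuse of the earlier generators are assumptions for illustration.

```python
# Retrain the same architecture with different dropout ratios and record the
# held-out test accuracy for each (as plotted in Figure 8).
test_accuracy = {}
for rate in [i / 10 for i in range(10)]:          # 0.0, 0.1, ..., 0.9
    model = build_m3(dropout_rate=rate)
    model.fit(train_gen, validation_data=val_gen, epochs=100, verbose=0)
    _, acc = model.evaluate(test_gen, verbose=0)  # test_gen from the metrics sketch
    test_accuracy[rate] = acc
```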
VI. CONCLUSION

In our research, we have proposed a CNN based model that can classify the different breeds of mango fruit with very satisfactory performance. The results have been analyzed further with performance metrics such as precision, recall and F1 score to examine the performance rigorously, and the model was found to perform comprehensively well. The ROC curve also exhibited a very good shape, with a very high AROC value of 97.3%. We have additionally presented optimization analyses for different learning rates, different dropout values and varying numbers of training epochs. This work is still in progress: we are trying to add more breeds of mango and other fruits, and we may apply deep learning pretrained networks such as MobileNet, GoogLeNet or AlexNet to obtain even better results.
REFERENCES

[1] United States Department of Agriculture, Agricultural Research Service, National Nutrient Database for Standard Reference Legacy Release, Basic Report on Raw Mango [Online]. Available: https://ndb.nal.usda.gov/ndb/foods/show/2271
[2] N. Shubrook, "The health benefits of mango" [Online]. Available: https://www.bbcgoodfood.com/howto/guide/health-benefits-mango
[3] N. Kumar, P. N. Belhumeur, A. Biswas, D. W. Jacobs, W. J. Kress, I. C. Lopez, and J. V. B. Soares, "Leafsnap: A Computer Vision System for Automatic Plant Species Identification," Proceedings of the 12th European Conference on Computer Vision (ECCV), October 2012.
[4] R. Nikam and M. Sadavarte, "Application of Image Processing Technique in Mango Leaves Disease Severity Measurement," National Conference on Emerging Trends in Computer, Electrical and Electronics (ETCEE-2015), International Journal of Advance Engineering and Research Development (IJAERD), 2015.
[5] K. Muthukannan, P. Latha, P. Nisha, and R. Pon Selvi, "An Assessment on Detection of Plant Leaf Diseases and Its Severity Using Image Segmentation," International Journal of Computer Science and Information Technology Research (IJCSITR), January-March 2015.
[6] J. Sethupathy and Veni S., "OpenCV Based Disease Identification of Mango Leaves," International Journal of Engineering and Technology (IJET), vol. 8, no. 5, October-November 2016.
[7] G. Kshirsagar and A. N. Thakre, "Plant Disease Detection in Image Processing Using MATLAB," International Journal on Recent and Innovation Trends in Computing and Communication (IJRITCC), vol. 6, issue 4, April 2018.
[8] T. Ojala, M. Pietikäinen, and D. Harwood, "Performance evaluation of texture measures with classification based on Kullback discrimination of distributions," Proceedings of the 12th IAPR International Conference on Pattern Recognition (ICPR 1994), vol. 1, pp. 582-585, 1994.
[9] T. Ojala, M. Pietikäinen, and D. Harwood, "A Comparative Study of Texture Measures with Classification Based on Feature Distributions," Pattern Recognition, vol. 29, pp. 51-59, 1996.
[10] D. G. Lowe, "Object recognition from local scale-invariant features," Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999. doi:10.1109/ICCV.1999.790410
[11] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," Proceedings of ECCV, 2006.
[12] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," Proceedings of CVPR, 2005.
[13] K. Nanaa, M. Rizon, M. N. A. Rahman, Y. Ibrahim, and A. Z. A. Aziz, "Detecting mango fruits by using randomized Hough transform and backpropagation neural network," 2014 18th International Conference on Information Visualisation, pp. 388-391, IEEE, 2014.
[14] Q. Liang, W. Zhu, J. Long, Y. Wang, W. Sun, and W. Wu, "A Real-Time Detection Framework for On-Tree Mango Based on SSD Network," International Conference on Intelligent Robotics and Applications, pp. 423-436, Springer, Cham, 2018.
[15] M. Stein, S. Bargoti, and J. Underwood, "Image based mango fruit detection, localisation and yield estimation using multiple view geometry," Sensors, vol. 16, no. 11, 1915, 2016.
[16] W. S. Qureshi, A. Payne, K. B. Walsh, R. Linker, O. Cohen, and M. N. Dailey, "Machine vision for counting fruit on mango tree canopies," Precision Agriculture, vol. 18, no. 2, pp. 224-244, 2017.
[17] T. R. B. Razak, M. B. Othman, M. N. B. A. Bakar, K. A. Ahmad, and A. R. Mansor, "Mango grading by using fuzzy image analysis," International Conference on Agricultural, Environment and Biological Sciences (ICAEBS'2012), Phuket, May 26-27, 2012.
[18] C. S. Nandi, B. Tudu, and C. Koley, "Computer vision based mango fruit grading system," International Conference on Innovative Engineering Technologies (ICIET 2014), pp. 28-29, December 2014.
[19] M. Merchant, V. Paradkar, M. Khanna, and S. Gokhale, "Mango Leaf Deficiency Detection Using Digital Image Processing and Machine Learning," 2018 3rd International Conference for Convergence in Technology (I2CT), pp. 1-3, IEEE, 2018.
[20] F. S. Mim, S. M. Galib, M. F. Hasan, and S. A. Jerin, "Automatic detection of mango ripening stages - An application of information technology to botany," Scientia Horticulturae, vol. 237, pp. 156-163, 2018.
[21] T. Rahman, M. R. Haque, L. J. Rozario, and M. S. Uddin, "Gaussian noise reduction in digital images using a modified fuzzy filter," 2014 17th International Conference on Computer and Information Technology (ICCIT), pp. 217-222, IEEE, 2014.

