Rice Quality Classification System Using Convolutional Neural Network and An Adaptive Neuro-Fuzzy Inference System
Corresponding Author:
Lia Kamelia
Department of Electrical Engineering, Faculty of Science and Technology
Universitas Islam Negeri Sunan Gunung Djati
St. A. H. Nasution 105, Bandung, West Java, Indonesia
Email: [email protected]
1. INTRODUCTION
Rice is a staple food commodity central to fulfilling human dietary needs worldwide. The quality of
rice is paramount in determining its commercial value and its use in various food products, such as table rice, rice flour, and processed foods. Quality differences in rice encompass parameters such as colour, size, texture, and aroma.
Manual classification of rice quality is a time-consuming and subjective task that can result in errors [1]. Proper
classification facilitates compliance with international regulations for export and import, supporting global
trade of this essential commodity. Additionally, the economic impact is significant, as the quality assurance
achieved through processing and classification enhances the market value of rice products, benefiting both
producers and the broader economy [2], [3].
Automated methods for efficiently and accurately classifying rice quality utilize advanced technology
and machine learning techniques. One common approach is to employ computer vision and image analysis to
assess various rice attributes, including colour, size, and shape [4]. The current process of rice classification
typically involves a combination of traditional and modern methods, depending on the level of automation and
technology used in the rice processing industry. The human inspection process involves operators who visually
inspect and classify rice grains. This assessment is often subjective and can lead to variations in classification
results. Some rice processing facilities use mechanical equipment that automatically sorts rice based on specific
physical parameters, such as size and weight. However, they may be unable to classify based on more complex
criteria like colour or texture [5]. Rice samples can be sent to laboratories with chemical and physical analysis
equipment for more in-depth quality testing [6]. These tests include monitoring moisture content, starch
content, and other factors that affect rice quality. Many rice processing facilities currently use laboratory
methods, such as human inspection for visual assessment and sorting machines for size-based sorting [7], [8].
Convolutional neural network (CNN) is a highly effective artificial neural network (ANN)
architecture in image processing and pattern recognition tasks. It was primarily developed to handle visual data
such as images and videos and has become a core technology in computer vision and image processing [9].
CNNs have several advantages, including the ability to learn hierarchical features in images, invariance to transformations, exploitation of spatially correlated data, automatic feature extraction, scalability across various tasks, the availability of pre-trained architectures, the ability to handle large-sized data, improved accuracy in image recognition, and support for hardware acceleration [10]. Koklu et al. [2] used ANN and CNN algorithms to classify and
evaluate the quality of seeds from five different rice varieties in Turkey. The ANN and deep neural network
(DNN) algorithms were used for the feature dataset, while CNN was used for the image dataset. The
classification success rates were 99.87% for ANN, 99.95% for DNN, and 100% for CNN, demonstrating the
successful application of these models in rice variety classification.
In recent years, research has focused on image processing technology and intelligent systems like the
adaptive neuro-fuzzy inference system (ANFIS) [11], [12]. Applying ANFIS in rice colour analysis offers the
potential to classify rice with high precision based on measurable colour characteristics. ANFIS is a
computational model that combines elements from ANN and fuzzy inference systems to address problems
involving uncertainty and ambiguous data [13]. It utilizes fuzzy membership functions to measure the degree
of membership of elements in one or more fuzzy sets, employs fuzzy rules to express relationships between
inputs and outputs, performs a fuzzy inference process to compute membership values, incorporates a neural
network model with learned weights to adapt to data, and matches the fuzzy inference results with the neural
network output to generate an outcome [14]. ANFIS is known for its ability to handle ambiguous data,
rule-based interpretability, and flexibility in decision-making based on fuzzy concepts, finding applications in
industrial process control, recommendation systems, pattern recognition, and prediction tasks [12].
Research on rice quality classification has been carried out using various methods such as the CNN
method, multi-class support vector machines (SVM), back propagation neural network (BPNN), and ANFIS
[15]. Mandal [16] showed that ANFIS can classify rice into whole, broken, and imperfect grains with 98.5% accuracy. However, the dataset in that study consisted of images of individual rice grains, whereas, in practice, humans judge rice quality from a bulk sample rather than grain by grain. This research therefore uses images of rice in bulk. In the research by Zia et al. [17], the visual geometry group (VGG-19) machine-learning technique was used to evaluate damaged and undamaged rice seeds and rice models with brown spots, with accuracies of 98.8% and 100%, respectively. That research also produced a website-based application that classifies damaged and undamaged rice from a single grain, not from a collection of grains. This distinguishes it from the proposed research, which classifies rice quality into medium and premium grades based on images of bulk rice.
In this research, a system was created to assess the quality of rice based on its colour using the ANFIS method. Three machine-learning models for assessing rice quality based on colour are presented: a CNN model using an image generator, a CNN model combining the image generator with the cv2 library, and an ANFIS model implemented in Python [18]. The accuracy of the three models is compared to determine which model best assesses rice quality based on colour.
2. METHOD
2.1. Classification using convolutional neural network
Designing the MobileNetV2 CNN is the first step in building the application. The discussion focuses only on evaluating the accuracy produced by the CNN models. Two variations of the MobileNetV2 CNN model are used for comparison in this research. The first variation is a CNN model using an image generator, and the second is a CNN model that combines an image generator with the cv2 library [19], [20]. The design of the CNN model serves several purposes in visual data processing, such
as pattern recognition, object detection, classification, segmentation, and other tasks. Classifying rice into two
categories, medium and premium, using CNN involves structured steps, starting with data collection and
pre-processing. In the initial phase, a diverse dataset of rice images is gathered, ensuring that each image is
appropriately labelled with its corresponding class. These images undergo pre-processing: they are resized to a consistent dimension, their pixel values are normalized, and data augmentation techniques are applied to enhance the dataset's variability.
The next step is the design of the CNN architecture. It involves stacking convolutional layers to extract
features, pooling layers for spatial down-sampling, and fully connected layers for decision-making. The
architecture is then trained on the labelled dataset using a chosen loss function, such as categorical
cross-entropy, and optimized through backpropagation. Transfer learning, starting with a pre-trained model
and fine-tuning it for rice classification, is often employed to leverage knowledge from models trained on large
datasets like ImageNet. Hyperparameter tuning follows, where parameters like learning rate, batch size, and
dropout rates are adjusted to optimize the model's performance [21]. The trained model is then evaluated on a
validation set to ensure it generalizes well to unseen data and subsequently tested on a separate test set for
real-world assessment. Post-training, the model's output probabilities are analyzed, and a decision threshold is
set to classify rice images into medium or premium. Post-processing steps include refining the decision
boundaries based on specific criteria. The model's performance is thoroughly evaluated using metrics like
accuracy, precision, recall, and F1 score, focusing on understanding and addressing misclassifications [22], [23].
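As an illustration of this setup, a minimal TensorFlow/Keras sketch of a MobileNetV2 transfer-learning classifier for the two rice classes is given below; the input resolution, dropout rate, learning rate, and layer arrangement are assumptions for illustration, not the exact configuration used in this study.

# Sketch of a MobileNetV2 transfer-learning classifier for two rice classes
# (medium vs. premium). Layer sizes, input resolution, and optimizer settings
# are illustrative assumptions, not the exact values used in this study.
import tensorflow as tf

IMG_SIZE = (224, 224)          # assumed input resolution
NUM_CLASSES = 2                # medium, premium

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,),
    include_top=False,         # drop the ImageNet classification head
    weights="imagenet",        # start from ImageNet pre-trained weights
)
base.trainable = False         # freeze the backbone for initial training

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),                    # dropout rate is a tunable hyperparameter
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="categorical_crossentropy",                 # loss function named in the text
    metrics=["accuracy"],
)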
The purpose of the CNN model using an image generator is to overcome the problem of limited or
unbalanced data in certain classes or categories of the dataset. An image generator is a tool that dynamically generates new image variations from the original images through transformations such as rotation, shifting, and cropping. The design of this model starts from input images and ends with accuracy
calculation, as shown in Figures 1(a) and (b).
Figure 1. Classification using (a) CNN and (b) CNN with contrast enhancement
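One common realization of such an image generator is Keras' ImageDataGenerator; the following sketch shows an assumed augmentation configuration, where the transformation ranges, directory layout, image size, and batch size are illustrative rather than the values actually used here.

# Sketch of on-the-fly augmentation with Keras' ImageDataGenerator.
# Transformation ranges and the directory layout are assumptions for illustration.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=20,        # random rotation in degrees
    width_shift_range=0.1,    # horizontal shift
    height_shift_range=0.1,   # vertical shift
    zoom_range=0.1,           # random zoom (acts like a crop/zoom)
    horizontal_flip=True,     # random horizontal flip
)

train_flow = train_gen.flow_from_directory(
    "dataset/train",          # hypothetical path: one sub-folder per class
    target_size=(224, 224),
    batch_size=16,
    class_mode="categorical", # two classes: medium, premium
)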
The image input stage begins by collecting a dataset of rice images used in the rice quality classification process. The rice quality image dataset was collected by photographing rice samples according to their quality class, namely medium rice and premium rice. The dataset is then divided into two parts, the training set and the testing set, after which the dataset is restructured. This stage is essential in pre-processing the data before model training or data analysis begins. The goal is to keep the dataset structured, easily accessible, and ready to use in various analyses.
Data augmentation is commonly used in training ANN models, such as CNN, to improve model
performance and generalization. Data augmentation involves image transformations, such as rotation, shifting,
zooming, or cropping, as well as applying effects such as horizontal flips or colour changes. The process of
implementing this CNN model uses the TensorFlow library. The CNN model uses the MobileNetV2 architecture pre-trained on the ImageNet dataset [24]. The convolution layer performs
convolution operations using filters or kernels, where each kernel identifies specific patterns or features in the
image. MobileNetV2 also organizes these layers in "blocks", which are collections of CNN layers grouped for
specific tasks. Each block can have different CNN layers to extract image features with different levels of
complexity. The training process is carried out to teach the model, and the training results are visualized [25].
The model is then evaluated using the prepared training and test data. In this first model, the data do not go through an image enhancement (contrast enhancement) process. In the CNN design integrated with contrast enhancement, an additional pre-processing stage is introduced in which the image contrast is increased before the dataset is augmented. OpenCV is used to increase the image contrast, and the contrast-enhanced images are then fed into the dataset augmentation process using the image generator.
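The specific OpenCV operator for contrast enhancement is not stated; the sketch below assumes contrast-limited adaptive histogram equalization (CLAHE) on the luminance channel as one plausible choice, with the file names and CLAHE parameters as illustrative assumptions.

# Sketch of a contrast-enhancement step with OpenCV before augmentation.
# The exact operator is not specified in the text; CLAHE on the L channel
# is one common choice and is used here as an assumption.
import cv2

def enhance_contrast(bgr_image):
    """Boost local contrast of a BGR image using CLAHE on the L channel."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

img = cv2.imread("rice_sample.jpg")                     # hypothetical file name
cv2.imwrite("rice_sample_ce.jpg", enhance_contrast(img))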
2.2. Classification using adaptive neuro-fuzzy inference system
This process begins with the feature extraction stage of each image in the dataset. Feature extraction
is essential in image processing before the data can be used in machine learning or deep learning model training,
especially in pattern recognition or classification. In this model, the feature extraction method used is RGB
feature extraction, which is a simple approach to extract statistical information from the primary colour
components, namely red (R), green (G), and blue (B) in an image. A neural network model will be combined
with fuzzy logic at this stage. The ANN is used for tasks such as classification and regression. Fuzzification is the fuzzy inference system step that converts the ANN model output into linguistic or fuzzy variables [19], [20]. In this process, the Gaussian function is used as the membership function of an element in the fuzzy set. In the norm-and-rule stage, the model predictions ("medium" or "premium") are compared with the previously created membership functions, and the maximum membership level is determined. This value is then used in the next step, defuzzification, to produce a more concrete output. The defuzzification stage is where the previously
calculated fuzzy membership levels are converted into a single or concrete value representing the final result
of the fuzzy inference system. In this context, the defuzzification method used is the centroid. A centroid is a
single value generated from the defuzzification process in a fuzzy inference system. This value represents the
midpoint or centre of the calculated fuzzy membership distribution.
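As a rough illustration of this fuzzy stage, the following NumPy sketch combines simple RGB mean features, Gaussian membership functions, a norm-and-rule step, and centroid defuzzification; the membership parameters, the placeholder ANN score, and the helper names are assumptions rather than the exact ANFIS configuration used in this research.

# Sketch of the fuzzy post-processing stage: RGB mean features, Gaussian
# membership functions, a max (norm + rule) aggregation, and centroid
# defuzzification. All parameter values are illustrative assumptions.
import numpy as np

def rgb_means(image):
    """Mean of the R, G, and B channels: the simple statistical features described above."""
    return image.reshape(-1, 3).mean(axis=0)

def gauss_mf(x, mean, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def centroid_defuzz(x, mu):
    """Centroid (centre of gravity) defuzzification."""
    return np.sum(x * mu) / np.sum(mu)

score = 0.73                        # placeholder ANN output: score for "premium"
x = np.linspace(0.0, 1.0, 101)      # universe of discourse for the output variable

mu_medium = gauss_mf(x, 0.25, 0.15)     # assumed membership parameters
mu_premium = gauss_mf(x, 0.75, 0.15)

# Rule firing strengths at the observed score, then min-max aggregation.
w_medium = gauss_mf(score, 0.25, 0.15)
w_premium = gauss_mf(score, 0.75, 0.15)
aggregated = np.maximum(np.minimum(w_medium, mu_medium),
                        np.minimum(w_premium, mu_premium))

crisp = centroid_defuzz(x, aggregated)            # single concrete output value
label = "premium" if crisp >= 0.5 else "medium"
print(crisp, label)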
or off-white. However, sometimes, the colour difference between the two types of rice may be hard to see
because they share a high degree of resemblance. Therefore, in the pre-processing process, contrast
enhancement is applied to the image to clarify colour differences and features that may be difficult to recognize
in low-contrast images. The differences between the rice classes are shown in Figure 3.
The total of 308 rice images is divided into two sets: training data of 246 images and test data of 62 images. This study divides the testing into three parts: testing the CNN MobileNetV2 model using an image generator, the CNN MobileNetV2 model using a combination of the image generator and cv2, and the ANFIS model. Through these tests, the study can identify the effectiveness and accuracy of each model.
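The 246/62 division corresponds roughly to an 80/20 split; one possible way to reproduce such a split is sketched below, where the folder layout, file extension, and random seed are assumptions for illustration.

# Sketch of the 246/62 train-test split over 308 images.
# The directory layout, file extension, and random seed are assumptions.
import glob
from sklearn.model_selection import train_test_split

paths = sorted(glob.glob("dataset/*/*.jpg"))           # hypothetical layout: one folder per class
labels = ["premium" if "premium" in p else "medium" for p in paths]

train_paths, test_paths, train_labels, test_labels = train_test_split(
    paths, labels, test_size=62, stratify=labels, random_state=42
)
print(len(train_paths), len(test_paths))               # expect 246 and 62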
The output of the CNN model is as follows:
Accuracy on training data: 0.6641
Loss on training data: 4.2733
Accuracy on test data: 0.7500
Loss on test data: 3.3561
- 4s 779 ms/step - loss: 4.8639 - Accuracy: 0.6250
Accuracy: 62.5%
The results of this evaluation provide an understanding of the extent to which the CNN model can
understand and classify image data accurately. From the program output results, it can be concluded that the
model achieved an accuracy level of around 66.41% on the training data. This accuracy reflects how well the
model can identify and classify data in the training data. Apart from that, the training data has a loss value of
4.2733. This loss value describes the extent of the difference between the model predictions and the actual
values in the training data. The lower the loss value, the better the model adapts to the training data. The model
achieves an accuracy rate of around 75.00% on test data. This accuracy indicates how well the model can
predict data that has never been seen before.
Furthermore, there is a loss value of 3.3561 in the test data. The loss value on test data measures how
well the model can generalize to new data not included in the training data. The output from measuring model
accuracy using the test data set obtained an accuracy of 62.5%. The purpose of testing the CNN MobileNetV2 model using a combination of the image generator and the cv2 library is to evaluate the model's ability to classify rice quality. The
output of the CNN model with contrast enhancement is as follows:
Accuracy on training data: 0.7422
Loss on training data: 3.0692
Accuracy on test data: 0.7500
Loss on test data: 3.4488
- 3s 3s/step - loss: 2.9940 - Accuracy: 0.7500
Accuracy: 75.0%
The results of this model evaluation show that the model achieves an accuracy rate of 74.22% on the
training data, indicating how well the model can understand the patterns contained in the training data. The
higher the accuracy of the training data, the better the model's ability to adapt to the data used for training. This
CNN model has a loss rate of about 3.0692 in the training data, which reflects the degree to which the model
successfully matches the training data. The lower the loss value, the better the model matches the data.
Meanwhile, the model achieves an accuracy rate of around 75.00% on the test data.
In testing the ANFIS model, four testing stages play an essential role in evaluating model performance: first, testing the accuracy of the neural network model to assess whether the combined model is feasible; second, class prediction testing to assess the success of the ANFIS model in predicting rice quality; third, testing with a confusion matrix as a more detailed evaluation of the running model; and finally, testing with the classification report, which provides more detailed information about the performance of the classification model beyond the confusion matrix or the accuracy level alone.
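The confusion-matrix and classification-report stages can be computed with scikit-learn as in the sketch below; the y_true and y_pred lists are placeholders that merely mirror the counts later reported in Table 1 rather than the actual test labels.

# Sketch of the confusion-matrix and classification-report tests with scikit-learn.
# y_true / y_pred are placeholders mirroring the counts in Table 1 (62 test samples).
from sklearn.metrics import confusion_matrix, classification_report

y_true = ["medium"] * 39 + ["premium"] * 23    # placeholder ground truth
y_pred = (["medium"] * 35 + ["premium"] * 4    # predictions for the 39 medium samples
          + ["medium"] * 7 + ["premium"] * 16) # predictions for the 23 premium samples

print(confusion_matrix(y_true, y_pred, labels=["medium", "premium"]))
print(classification_report(y_true, y_pred, digits=2))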
The ANN model managed to achieve an accuracy rate of 82.25%. This accuracy follows the targets
previously set for selecting the neural network model to be combined with fuzzy logic. Having achieved this level of accuracy, the ANN model is suitable for combination with fuzzy logic to continue building the ANFIS model. Testing the confusion matrix of the ANFIS model is needed to measure the extent
to which the model can correctly perform classification. The results of the confusion matrix of the ANFIS
model are shown in Table 1.
Table 1 shows the confusion matrix of the ANFIS model: 35 medium predictions that are indeed medium (true positives), 4 premium predictions that should be medium (false negatives), 7 medium predictions that should be premium (false positives), and 16 premium predictions that are indeed premium (true negatives). The purpose of testing the classification report of the ANFIS model is to provide more detailed information about the performance of the classification model in the context of
testing. Classification reports provide several evaluation metrics, including precision, recall, F1-score, and
support, as shown in Table 2. The graph of the difference in accuracy between the CNN method, CNN with
contrast enhancement and ANFIS is shown in Figure 4. The results show that the ANFIS method has the highest
accuracy.
Table 2. Classification report of the ANFIS model
               Precision   Recall   F1-score   Support
Accuracy                               0.82        62
Macro avg         0.82      0.80       0.80        62
Weighted avg      0.82      0.82       0.82        62
Figure 4. Accuracy comparison of the CNN (62.5%), CNN with contrast enhancement (CNN-CE, 75%), and ANFIS (82.25%) methods
The accuracy of the CNN classification results in this study reached 62.5%, which is relatively low compared to Hamzah and Mohamed [5], who reported more than 90%. This is because, according to several references discussing the CNN method, CNN is better suited to large datasets. Machine learning techniques, such as SVM, have been used to classify rice grains accurately [21]. In one rice grain image study, images underwent pre-processing, segmentation, and feature extraction, and a multi-class SVM was used to classify three types of rice grains: basmathi, ponni, and brown rice. That study achieved 92.22% classification accuracy, higher than the accuracy of the proposed ANFIS method. This is because the proposed research takes images of rice in bulk and classifies them not by shape but by colour and texture, as rice buyers do in practice. Naturally, this makes the classification process more difficult, since it mimics observation with the human eye. ANFIS
and CNN are distinct machine learning approaches, each with its strengths and areas of applicability. It is not
accurate to definitively claim that one is superior to the other, as their performance depends on the nature of
the task at hand. ANFIS creates interpretable rule-based models, making it valuable in domains requiring
decision-making transparency, like expert systems and medical diagnosis. Additionally, ANFIS excels in
handling data with ambiguity or uncertainty, where the relationships between inputs and outputs are not
well-defined. When the system has limited labelled data, it can often provide reasonable predictions with
smaller datasets.
4. CONCLUSION
This research compares two methods to classify types of rice into two classes, namely medium and
premium. The methods used are CNN and ANFIS. Because the CNN accuracy was not very good, a contrast enhancement process was applied to the rice images, which increased the accuracy from 62.5% to 75%. However, the highest accuracy was still obtained with the ANFIS method, at 82.25%. CNN and ANFIS are valuable machine learning approaches,
each with strengths and areas of applicability. ANFIS excels in creating interpretable rule-based models,
handling ambiguous or uncertain data, and providing reasonable predictions with smaller datasets. It can be
applied to various data types, such as time series, tabular data, and text, and is computationally simpler than
deep CNNs. CNNs are preferred for image-related tasks and deep learning, outperforming ANFIS in computer
vision applications.
REFERENCES
[1] B. Lurstwut and C. Pornpanomchai, “Image analysis based on color, shape and texture for rice seed (Oryza sativa L.) germination
evaluation,” Agriculture and Natural Resources, vol. 51, no. 5, pp. 383–389, 2017, doi: 10.1016/j.anres.2017.12.002.
[2] M. Koklu, I. Cinar, and Y. S. Taspinar, “Classification of rice varieties with deep learning methods,” Computers and Electronics in
Agriculture, vol. 187, Aug. 2021, doi: 10.1016/j.compag.2021.106285.
[3] Y. Meng, Z. Ma, Z. Ji, R. Gao, and Z. Su, “Fine hyperspectral classification of rice varieties based on attention module 3D-2DCNN,”
Comput. Electron. Agric., vol. 203, p. 107474, Dec. 2022, doi: 10.1016/J.COMPAG.2022.107474.
[4] R. O. Ojo, A. O. Ajayi, H. A. Owolabi, L. O. Oyedele, and L. A. Akanbi, “Internet of Things and Machine Learning techniques in
poultry health and welfare management: A systematic literature review,” Computers and Electronics in Agriculture, vol. 200, Sep.
2022, doi: 10.1016/j.compag.2022.107266.
[5] A. S. Hamzah and A. Mohamed, “Classification of white rice grain quality using ANN: a review,” IAES International Journal of
Artificial Intelligence (IJ-AI), vol. 9, no. 4, pp. 600–608, 2020, doi: 10.11591/ijai.v9.i4.pp600-608.
[6] E. O. Díaz, H. Iino, K. Koyama, S. Kawamura, S. Koseki, and S. Lyu, “Non-destructive quality classification of rice taste properties
based on near-infrared spectroscopy and machine learning algorithms,” Food Chemistry, vol. 429, Dec. 2023, doi:
10.1016/j.foodchem.2023.136907.
[7] L. Chen, S. Li, Q. Bai, J. Yang, S. Jiang, and Y. Miao, “Review of image classification algorithms based on convolutional neural
networks,” Remote Sensing, vol. 13, no. 22, Nov. 2021, doi: 10.3390/RS13224712.
[8] D. Ireri, E. Belal, C. Okinda, N. Makange, and C. Ji, “A computer vision system for defect discrimination and grading in tomatoes
using machine learning and image processing,” Artificial Intelligence in Agriculture, vol. 2, pp. 28–37, 2019, doi:
10.1016/j.aiia.2019.06.001.
[9] P. Wang, E. Fan, and P. Wang, “Comparative analysis of image classification algorithms based on traditional machine learning and
deep learning,” Pattern Recognition Letters, vol. 141, pp. 61–67, 2021, doi: 10.1016/j.patrec.2020.07.042.
[10] P. Dhruv and S. Naskar, “Image classification using convolutional neural network (CNN) and recurrent neural network (RNN): a
review,” in Machine Learning and Information Processing, 2020, pp. 367–381. doi: 10.1007/978-981-15-1884-3_34.
[11] T. L. Nguyen, S. Kavuri, S. Y. Park, and M. Lee, “Attentive Hierarchical ANFIS with interpretability for cancer diagnostic,” Expert
Systems with Applications, vol. 201, Sep. 2022, doi: 10.1016/J.ESWA.2022.117099.
[12] F. Salehi, “Recent advances in the modeling and predicting quality parameters of fruits and vegetables during postharvest storage:
a review,” International Journal of Fruit Science, vol. 20, no. 3, pp. 506–520, Jul. 2020, doi: 10.1080/15538362.2019.1653810.
[13] O. Bilalović and Z. Avdagić, “Robust breast cancer classification based on GA optimized ANN and ANFIS-voting structures,”
2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO),
Opatija, Croatia, pp. 279–284, 2018, doi: 10.23919/MIPRO.2018.8400053.
[14] H. Li, P. -C. Shih, X. Zhou, C. Ye, and L. Huang, “An improved novel global harmony search algorithm based on selective
acceptance,” Applied Sciences, vol. 10, no. 6, 2020, doi: 10.3390/app10061910.
[15] S. B. Ahmed, S. F. Ali, and A. Z. Khan, “On the frontiers of rice grain analysis, classification and quality grading: a review,” IEEE
Access, vol. 9, pp. 160779–160796, 2021, doi: 10.1109/ACCESS.2021.3130472.
[16] D. Mandal, “Adaptive neuro-fuzzy inference system based grading of basmati rice grains using image processing technique,”
Applied System Innovation, vol. 1, no. 2, 2018, doi: 10.3390/asi1020019.
[17] H. Zia, H. S. Fatima, M. Khurram, I. U. Hassan, and M. Ghazal, “Rapid testing system for rice quality control through
comprehensive feature and kernel-type detection,” Foods, vol. 11, no. 18, pp. 1–17, 2022, doi: 10.3390/foods11182723.
[18] M. L. Waskom, “Seaborn: statistical data visualization,” Journal of Open Source Software, vol. 6, no. 60, pp. 1-4, Apr. 2021, doi:
10.21105/joss.03021.
[19] B. S. Rao, K. Akhil, R. V. K. Reddy, D. D. Sree, and D. Manogna, “Identification of nutrient deficiency in rice leaves using Dense
Net-121,” 2022 International Conference on Edge Computing and Applications (ICECAA), Tamilnadu, India, pp. 1573–1578, 2022,
doi: 10.1109/ICECAA55415.2022.9936191.
[20] A. Adeel et al., “Entropy-controlled deep features selection framework for grape leaf diseases recognition,” Expert Systems, vol.
39, no. 7, Aug. 2022, doi: 10.1111/exsy.12569.
[21] Y. Kumar, A. K. Dubey, R. R. Arora, and A. Rocha, “Multiclass classification of nutrients deficiency of apple using deep neural
network,” Neural Computing and Applications, vol. 34, no. 11, pp. 8411–8422, 2022, doi: 10.1007/s00521-020-05310-x.
[22] J. Straub, “Machine learning performance validation and training using a ‘perfect’ expert system,” MethodsX, vol. 8, Jan. 2021, doi:
10.1016/j.mex.2021.101477.
[23] R. S. Singla, A. Gupta, R. Gupta, V. Tripathi, M. S. Naruka, and S. Awasthi, “Plant disease classification using machine learning,”
2023 International Conference on Disruptive Technologies (ICDT), pp. 409–413, 2023, doi: 10.1109/ICDT57929.2023.10151118.
[24] N. Rathnayake, U. Rathnayake, T. L. Dang, and Y. Hoshino, “An efficient automatic fruit-360 image identification and recognition
using a novel modified cascaded-ANFIS algorithm,” Sensors, vol. 22, no. 12, Jun. 2022, doi: 10.3390/S22124401.
[25] A. Nayak, S. Chakraborty, and D. K. Swain, “Application of smartphone-image processing and transfer learning for rice disease
and nutrient deficiency detection,” Smart Agricultural Technology, vol. 4, Aug. 2023, doi: 10.1016/J.ATECH.2023.100195.
[26] C. R. Harris et al., “Array programming with NumPy,” Nature, vol. 585, no. 7825, pp. 357–362, Sep. 2020, doi: 10.1038/s41586-
020-2649-2.
BIOGRAPHIES OF AUTHORS