
International Journal of Computer and Information System (IJCIS)

Peer Reviewed – International Journal


Vol. 03, Issue 01, March 2022
e-ISSN : 2745-9659
https://ijcis.net/index.php/ijcis/index

Identifying Lung Cancer Using CT Scan Images Based On Artificial Intelligence


1st Md. Ismail Hossain Sadhin, 2nd Methila Farzana Woishe, 3rd Nila Sultana, 4th Tamanna Zaman Bristy
1 D3 Automotive Technology, 2 Information Systems
1,2,3,4 Dept. of Computer Science, American International University-Bangladesh, Dhaka, Bangladesh
1,2,3,4 408/1, Kuratoli, Khilkhet, Dhaka 1229, Bangladesh.
[email protected], [email protected],
[email protected], [email protected]

Abstract—Lung cancer is among the most common causes of death worldwide. Early detection of lung cancer can increase the chance of survival: the five-year survival rate for lung cancer patients rises from 16% to 50% if the disease is detected in time. Computed tomography (CT) is generally more effective than X-ray imaging, yet time constraints still make detection of lung cancer difficult across the various diagnostic strategies in use. Hence, a lung cancer detection system that uses image processing is employed to classify lung cancer in CT images. The image processing procedures, including image pre-processing, segmentation, and feature extraction, are discussed in detail. This paper aims to obtain more precise results by applying distinct enhancement and segmentation procedures. In the proposed method, several filters and a segmentation step pre-process the data before classification of the trained data; after classification and training, the WONN-MLB method is used to reduce the time complexity of obtaining a result. The goal of our research is therefore to maximize the accuracy of lung cancer detection.

Keywords— Lung cancer, CT image, Segmentation, Image processing, X-ray.

I. INTRODUCTION

Lung cancer is one of the most prevalent cancers in the world, with one of the lowest survival rates after diagnosis and a continuous increase in the number of deaths every year. The earlier the detection, the better the chances of successful treatment. Detection, however, has several difficulties, and our main focus is how to improve the quality of early cancer detection. The most common problem is nodule size, which varies widely in the human lung: a nodule diameter can take any value from a few millimeters up to several centimeters [1]. Nodules also show an enormous variety in thickness, and hence in visibility on a radiograph, and they can appear anywhere in the lung field. Another problem is time complexity: early detection requires methods that keep computation time low. Beyond that, accuracy is equally important. Finally, preprocessing such as noise removal and image smoothing can help increase the recognition of nodules. These are the key concerns of this paper.

Lung cancer is one of the leading causes of cancer death. It is hard to detect as it arises, and it typically shows signs and symptoms only in the terminal phase [2]. Lung cancer is the most frequently diagnosed cancer, accounting for 11.6% of cases. However, early detection and treatment of the disease can reduce the chance of mortality. CT imaging is a reliable approach for lung cancer determination, since it can reveal both suspected and unsuspected lung cancer nodules [3]. Even so, variation in enhanced CT images and misinterpretation of anatomical structure by experts and radiologists may cause problems when marking the cancerous cells [4]. Many systems have been developed and much research has been carried out on the recognition of lung cancer. However, some systems do not achieve the best recognition accuracy, and others still need to be improved to approach an accuracy of 100%. Image processing and machine learning strategies have been applied to identify and classify lung cancers, and artificial intelligence techniques have been exploited to solve estimation and selection over large data. This paper surveys modern systems for lung cancer recognition based on CT scan images of the lungs, assesses the current leading systems, and proposes a new version based on that assessment.

1.1 Research Background
Lung cancer is one of the principal causes of cancer death. It is difficult to detect as it occurs and shows few early symptoms. Lung cancer is the most frequently diagnosed cancer, at 11.6%. However, early detection and treatment of the disease can reduce the likelihood of death. Computed tomography is a robust imaging method for determining lung cancer because it can reveal both suspected and unsuspected lung cancer nodules. At the same time, variation in the CT images and misinterpretation of the anatomical structure by specialists and radiologists could lead to problems when marking the cancer cells. Many systems have been developed and many studies carried out to locate lung cancers, yet some systems lack satisfactory detection accuracy, and others still need improvement to reach the maximum accuracy of 100%. Image processing and machine learning strategies have been applied to identify and classify lung cancers, and artificial intelligence strategies have been used to address prediction and selection over large data. We examined contemporary systems that have evolved especially in
Journal IJCIS homepage - https://ijcis.net/index.php/ijcis/index Page 39

lung cancer recognition based on computed tomography images of the lungs; we selected the outstanding modern systems, carried out an assessment of them, and proposed a new version.

1.2 Research Motivation and Objective
Health care is one of the elemental sources of big data, and meticulous analysis of health care information is in high demand for diagnosing illness at an early stage. In modern medical science, lung cancer is one of the most dangerous and deadly diseases in the world. Nevertheless, early diagnosis and medication can save lives. Most of these cancers begin in the lung, and one of the main causes is smoking, which also blocks the body from fighting the disease: toxins in cigarette smoke can weaken the body's immune system, making it harder to kill cancer cells. Recently, many research works have been designed to identify patterns in massive amounts of data with higher quality. However, there is still demand for a unique classification technique that increases evaluation accuracy while reducing time. Moreover, ML algorithms are designed to increase the prediction accuracy over massive amounts of data, yet the error rate is still not reduced to its full potential. These works therefore motivate optimized machine learning algorithms that improve evaluation accuracy with reduced time and error, and they motivated us to optimize accuracy and take steps to improve this field.

1.3 Problem Statement
Our main task is to improve the quality of early cancer detection. Usually, the diameter of a nodule can be anything from a few millimeters to several centimeters. Nodules have a wide variety of thicknesses and therefore stand out differently on X-rays, and they can appear anywhere in the lung field. Early detection also requires reducing time complexity, and beyond that, accuracy is essential. Again, preprocessing such as noise removal and image smoothing helps improve nodule recognition; these are the key factors in this research work. Lung cancer is one of the most serious cancers in the world, with the lowest survival rate after diagnosis and an unending increase in the number of deaths every year. The earlier the detection, the higher the chances of successful treatment, but detection has its own problems, and our focus is how to raise the standard of early cancer detection. The points that frequently interrupt the research are:
• Nodule size
• Identifying the affected nodule
• Inability to reduce the time complexity
• Noisy images

II. LITERATURE REVIEW

Several analysts have proposed and implemented lung cancer detection using distinct techniques of image processing and machine learning. The study in [5] proposed a system that classifies between nodules and normal lung anatomy structure. LDA is applied as a classifier, with optimal thresholding for segmentation. The framework has 84.06% accuracy, 97.14% sensitivity, and around 53.33% specificity; although the machine detects most cancer nodules, its accuracy remains unacceptable, and no machine learning techniques were applied for classification. The authors of [6] used a convolution neural network as the classifier in their CAD framework to identify lung cancer. The framework achieves 83.8% accuracy, 82.6% sensitivity, and 86.8% specificity. The advantage of this version is that it uses a round channel in the Region of Interest (ROI) extraction stage, which decreases the cost of the preparation and recognition steps [7]; although the implementation cost is reduced, the accuracy is still unacceptable. The research in [8] created a framework using watershed segmentation. In pre-processing it uses a Gabor filter to enhance image quality, and it compares its precision against a neuro-fuzzy model and a region-growing technique. The accuracy of the proposal is 90.09%, notably higher than the compared segmentation models. The advantage of this version is that it uses marker-controlled watershed segmentation, which handles over-segmentation issues. As a limitation, it does not establish whether the disease is benign or malignant, and its accuracy, while high, is still not acceptable. In [9], the Nonparallel Plane Proximal Classifier (NPPC) was reported for cancer classification in a Computer-Aided Diagnosis (CAD) system to ensure high classification accuracy and to minimize computation time. Artificial-intelligence-based computer-aided diagnosis (CAD) is a non-invasive, goal-oriented solution that facilitates radiologists' diagnosis of lung nodules [10]. Valvular heart disease, meanwhile, has been considered one of the hardest classification problems. The author of [11] used three effective and popular ensemble-learning agents, bagging, boosting, and random subspaces, for early detection of valvular heart disease; although classification time was decreased by these methods, the desired accuracy was not achieved. In [12], three Generalized Mixture (GM) functions were applied using dynamic weights to enhance the classification accuracy of the ensemble system. Although that work handles the single-label case, multi-label problems are not solved. A basic assessment of ANN was performed in [13], which results in an increase in the adequacy and

specificity of the diagnostic strategies; however, it falls short in minimizing computational complexity. The authors of [14] proposed a method, evaluated on a thoracic surgery dataset, to verify its accuracy against the multiple strategies used in current systems; it centers on a weight-optimized maximum-likelihood boosted neural network (WONN-MLB), built on feature study and selection, for lung cancer detection (LCD). Tumor tissue assessment based on pathological examination is considered one of the fundamental pressing needs for early determination in cancer patients, and automated image analysis strategies enhance the diagnostic accuracy of the disorder and reduce human error. In [15], the authors proposed distinct computational strategies using convolutional neural networks (CNN).

III. RESEARCH METHODOLOGY

In the proposed method, we take a CT image as input; initially this CT image is noisy and degraded, which makes it hard to identify the cancer nodule and the affected region. To reduce this problem, the method is organized into the following stages: a median filter, a Gaussian filter, and watershed segmentation.

Figure 1. Proposed Model.

Median filters are useful for reducing random noise, particularly when the noise amplitude probability density has long tails, and for periodic patterns. All impulses in the input signal are eliminated with enough median filter passes, while all root features of the input signal are preserved: a signal of finite length, filtered repeatedly by a median filter with a fixed window, converges to a root signal. Here the root signal and its properties are analyzed for a one-dimensional signal. An adaptive-length median filter, a weighted median filter, a hybrid median-FIR filter, and a linear combination of weighted median filters were considered and their roots obtained; their properties are analyzed by determining the power spectral density, the root mean square error, and the signal-to-noise ratio. The median filtering technique is executed by sliding a window over the image. The median filter is one of the best-known order-statistic filters because it performs well for certain kinds of noise, including Gaussian, random, and salt-and-pepper noise. We use this filter to remove outlier pixels from the images before the binarization step. Median filters reduce image noise in much the same way as mean filters; however, a median filter frequently preserves useful detail in the picture better than a mean filter, and it is especially effective at removing salt-and-pepper noise from CT images.

The Gaussian filter is a linear filter that has been extensively studied in image processing and computer vision. When a Gaussian filter is used to suppress noise, the noise is attenuated, but at the same time the signal is distorted; using Gaussian filters as pre-processing for edge detection also results in edge position displacement, vanishing edges, and phantom edges. Approaches in the literature first examine these problems and then propose adaptive Gaussian filtering algorithms in which the filter varies according to both the noise characteristics and the local variation of the signal. The linear Gaussian filter is a very popular smoothing operator, widely used by researchers, and has become a standard in industrial image cleaning. It is usually used to blur the image or reduce noise: it smooths the image and removes speckled noise, although a Gaussian filter alone will blur the edges and reduce contrast. It is applied to the input by convolving it with the Gaussian weight function.

Watershed segmentation is another regional technique, with its origins in mathematical morphology. After the general idea was introduced, Vincent made a breakthrough in applicability and offered an algorithm that is orders of magnitude faster and more precise than the preceding ones. Watershed segmentation treats the image as a topographic landscape with ridges and valleys, where the heights of the terrain are decided by the gray values of the corresponding pixels or by their gradient amplitude. Based on this 3-D representation, the watershed transformation divides the image into catchment basins, with watershed lines separating the basins from each other. The watershed transform completely decomposes the image and hence assigns every pixel to a region or watershed line. With noisy medical imaging data this yields numerous small regions, which is called over-segmentation.
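The three pre-processing stages described above (median filtering, Gaussian smoothing, and marker-controlled watershed) can be sketched with SciPy's stock implementations. This is a minimal illustration on a synthetic image, not the authors' pipeline: the filter size, sigma, and marker positions are assumptions for the demo, and `watershed_ift` is SciPy's image-foresting-transform variant of marker-based watershed flooding.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic 64x64 "CT slice": two bright nodule-like blobs on a dark
# background, corrupted with salt-and-pepper impulse noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64), dtype=np.uint8)
img[10:25, 10:25] = 200                 # blob 1
img[35:55, 35:55] = 180                 # blob 2
u = rng.random(img.shape)
img[u < 0.02] = 255                     # salt
img[u > 0.98] = 0                       # pepper

# 1) Median filter: removes the impulse noise while preserving edges.
denoised = ndi.median_filter(img, size=3)

# 2) Gaussian filter: smooths the remaining high-frequency noise
#    (at the cost of some edge blur, as noted in the text).
smoothed = ndi.gaussian_filter(denoised, sigma=1.0)

# 3) Marker-controlled watershed: one marker seeded inside each object
#    plus one for the background, flooded over the inverted intensities
#    so the bright blobs act as catchment basins.
markers = np.zeros(img.shape, dtype=np.int16)
markers[17, 17] = 1                     # seed inside blob 1
markers[45, 45] = 2                     # seed inside blob 2
markers[0, 0] = 3                       # background seed
labels = ndi.watershed_ift((255 - smoothed).astype(np.uint8), markers)
```

On this toy input the seeded blob interiors come out as labels 1 and 2; in the paper's pipeline the markers would instead come from local minima of the pre-processed CT slice, as described below.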

Watershed segmentation is a region-based method that uses image morphology. It requires the selection of at least one marker inside each object of the image, including the background as a separate object. To understand watersheds, one can think of an image as a surface on which bright pixels represent mountain peaks and darker pixels represent valleys. The surface is pierced in some of the valleys and then slowly submerged in a water bath: water flows into each puncture and starts to fill the valleys. However, water from different punctures must not mix, so dams are built at the first points of contact. These dams are the boundaries of the water basins, and the boundaries of the objects in the picture. This conventional algorithm is used for splitting, that is, to separate distinct objects in an image. In our pipeline, image pre-processing uses a Gabor filter to enhance the image, and a marker-based watershed technique is used to separate and detect cancerous nodules. In many cases, the markers are chosen at the local minima of the image, from which the basins are flooded. This version extracts features such as the area, perimeter, and eccentricity of cancerous nodules.

After applying the median filter, the Gaussian filter, and watershed segmentation, the CT image is ready for reading and for identifying the cancer nodule in the lung. However, identification of cancer in the nodule was considerably delayed, so accuracy suffered slightly and a time delay occurred. To solve this problem, we use WONN-MLB. This technique pairs a weight-optimized neural network with maximum-likelihood boosting for lung cancer disease. Since WONN-MLB considers the useful features based on likelihood, uninformative features are removed without compromising disease diagnosis accuracy. The method reduces the time delay and helps to improve accuracy.

Section 1.2 discussed machine learning (ML) algorithms; the proposed method uses two of them, Support Vector Machine (SVM) and Random Forest. With Random Forest, result accuracy increased, because it takes the majority among similar results and reports that as the final result. However, Random Forest made the model slow and raised the time complexity. To reduce this problem, the SVM method was used, since SVM works within the time budget and cuts unnecessary processing time.

The main goal of the proposed system is to reach close to this performance. The proposed CAD system starts by preprocessing the 3D CT scans using watershed segmentation, normalization, downsampling, and zero-centering. The preliminary approach was simply to feed the preprocessed 3D CT scans into 3D CNNs, but the results were poor, so extra preprocessing was performed to feed only the best regions of interest into the 3D CNNs. Input regions around nodule candidates detected by the U-Net were then fed into 3D CNNs to ultimately classify the CT scans as positive or negative for lung cancer.

IV. RESULT AND ANALYSIS

4.1 Results
For image preprocessing we used the median filter, the Gaussian filter, and watershed segmentation for identification of the affected nodule. A method named WONN-MLB was also used in our research.

Proposed WONN-MLB: With 1000 patient records considered for experimentation, and 920 records correctly recognized as the disease, the diagnosing accuracy is computed as follows:

DA = (number of records correctly diagnosed as disease / total records) × 100
DA = (920 / 1000) × 100 = 92%

For the false positive rate, with 1000 patient records taken as samples and 85 records incorrectly diagnosed with lung cancer disease, the false positive rate is:

FPR = (incorrectly diagnosed records / total records) × 100
FPR = (85 / 1000) × 100 = 8.5%

For classification time, WONN-MLB takes 0.0083 ms to classify one patient record, so for 1000 records:

CT = 1000 × 0.0083 ms = 8.3 ms

F1-score = 2 × (Precision × Recall) / (Precision + Recall)
         = 2 × (92 × 91) / (92 + 91)
         = 91.49%

The F1-score is a single measure of test performance for the positive class. For 1000 patient records, then: diagnosing accuracy 92%, false positive rate 8.5%, classification time 8.3 ms, F1-score ≈ 91.5%.
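The quantities above can be reproduced in a few lines. This is a direct transcription of the formulas in this section, with the paper's counts (920 correct and 85 false positives out of 1000 records; precision 92 and recall 91) plugged in:

```python
def diagnosing_accuracy(correct: int, total: int) -> float:
    """DA = correctly diagnosed records / total records * 100."""
    return correct / total * 100

def false_positive_rate(incorrect: int, total: int) -> float:
    """FPR = incorrectly diagnosed records / total records * 100."""
    return incorrect / total * 100

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

da  = diagnosing_accuracy(920, 1000)   # 92.0 %
fpr = false_positive_rate(85, 1000)    # 8.5 %
ct  = 1000 * 0.0083                    # 8.3 ms for 1000 records
f1  = f1_score(92, 91)                 # ~91.497 %
```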




Table 1. Summary of Results

Method                   Test Set    Accuracy (%)    Error (%)
Median Filter            1000        45.87           54.13
Gaussian Filter          1000        62.34           37.66
Watershed Segmentation   1000        86.6            13.4
WONN-MLB                 1000        92              8

Figure 3. Comparison between the Previous and Proposed Research.
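As a quick consistency check on Table 1 (an illustrative aside, not part of the paper), each Error (%) entry is simply the complement of the corresponding Accuracy (%):

```python
# Accuracy values from Table 1; Error (%) = 100 - Accuracy (%).
accuracy = {
    "Median Filter": 45.87,
    "Gaussian Filter": 62.34,
    "Watershed Segmentation": 86.6,
    "WONN-MLB": 92.0,
}
error = {method: round(100.0 - acc, 2) for method, acc in accuracy.items()}
```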

In the previous method, only WONN-MLB was used to classify and predict the model. In the proposed model, by contrast, filters and machine learning algorithms are combined to reach higher accuracy in the result.

Table 1 displays all the methods used. With the median filter alone, model accuracy was just 45.87%; it increased to 62.34% when the Gaussian filter was used, but accuracy was still very low. To increase it, watershed segmentation was applied; accuracy rose again but was not yet at the top of the mark. Finally, WONN-MLB was employed and the result reached 92%.

4.1.1 Evaluation of the Implementation

Figure 2. After Median and Gaussian Filter Application and Cancer Identification.

The pictures in Figure 2 are filtered by median and Gaussian filtration, and the red-marked places visualize the cancer-affected area.

4.2 Comparison with Previous Research
• The research proposes a new method, WONN-MLB, to decrease the time complexity and increase the accuracy.
• The accuracy of the proposed approach is higher than in preceding research.
• The error rate is also very low compared to previous research.

V. CONCLUSION AND FUTURE WORK

The present state-of-the-art model has no exceptional accuracy results and cannot classify the stage of the cancer detected in nodules. Therefore, a new system is proposed, employed to detect the cancerous nodule in the lung CT scan image using watershed segmentation. The proposed version detects cancer with 92% accuracy, which is above the contemporary version, while the watershed segmentation classifier alone has an accuracy of 86.6%. Overall, we can see improvement in the proposed system compared to the contemporary state-of-the-art version. However, this proposal does not classify the cancer into stages I, II, III, and IV; a future development is therefore to implement classification into these levels. Accuracy can also be raised further through proper pre-processing and removal of false objects.

Our research work can be expanded to a broader range of sectors for generalization or sector-specific observations. The principal goal of our research was to recognize and predict the affected nodules of lung cancer, but no publicly available dataset focuses mainly on affected-nodule lung cancer prediction. Lung cancer prediction forecasts future outcomes based on past observations, so we relied on action- and activity-based datasets to evaluate our model with a limited data sequence. The main limitation of CT screening is the high nodule detection rate: more than 50% of participants have at least one unaccounted nodule, and CT scan results are associated with additional costs, including the cost of biopsy and removal of benign non-calcified nodules. The risk of cancer associated with multiple follow-up CT scans is small but difficult to quantify. The following points will be addressed in future work:

• Predict and give accurate results with high computation resources and big data.




• Try other pre-processing filters.
• Use other methods or layers for higher accuracy.
• Use more frames per second and improve with more computational resources.

REFERENCES
[1] Sharma, S. (2018). A two-stage hybrid ensemble classifier-based diagnostic tool for chronic kidney disease diagnosis using optimally selected reduced feature set. International Journal of Intelligent Systems and Applications in Engineering, 6(2), 113-122.
[2] Podolsky, M. D., Barchuk, A. A., Kuznetcov, V. I., Gusarova, N. F., Gaidukov, V. S., & Tarakanov, S. A. (2016). Evaluation of machine learning algorithm utilization for lung cancer classification based on gene expression levels. Asian Pacific Journal of Cancer Prevention, 17(2), 835-838.
[3] Gindi, A., Attiatalla, T. A., & Sami, M. M. (2014). A comparative study for comparing two feature extraction methods and two classifiers in the classification of early-stage lung cancer diagnosis of chest x-ray images. Journal of American Science, 10(6), 13-22.
[4] Suzuki, K., Kusumoto, M., Watanabe, S. I., Tsuchiya, R., & Asamura, H. (2006). Radiologic classification of small adenocarcinoma of the lung: radiologic-pathologic correlation and its prognostic impact. The Annals of Thoracic Surgery, 81(2), 413-419.
[5] Aggarwal, T., Furqan, A., & Kalra, K. (2015, August). Feature extraction and LDA-based classification of lung nodules in chest CT scan images. In 2015 International Conference on Advances in Computing, Communications, and Informatics (ICACCI) (pp. 1189-1193). IEEE.
[6] Jin, X. Y., Zhang, Y. C., & Jin, Q. L. (2016, December). Pulmonary nodule detection based on CT images using convolution neural network. In 2016 9th International Symposium on Computational Intelligence and Design (ISCID) (Vol. 1, pp. 202-204). IEEE.
[7] Maurer, A. (2021). An early prediction of lung cancer using CT scan images. Journal of Computing and Natural Science, 39-44.
[8] Ignatious, S., & Joseph, R. (2015, April). Computer-aided lung cancer detection system. In 2015 Global Conference on Communication Technologies (GCCT) (pp. 555-558). IEEE.
[9] Ghorai, S., Mukherjee, A., Sengupta, S., & Dutta, P. K. (2010). Cancer classification from gene expression data by NPPC ensemble. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 8(3), 659-671.
[10] Li, K., Liu, K., Zhong, Y., Liang, M., Qin, P., Li, H., & Liu, X. (2021). Assessing the predictive accuracy of lung cancer, metastases, and benign lesions using an artificial intelligence-driven computer aided diagnosis system. Quantitative Imaging in Medicine and Surgery, 11(8), 3629.
[11] Das, R., & Sengur, A. (2010). Evaluation of ensemble methods for diagnosing valvular heart disease. Expert Systems with Applications, 37(7), 5110-5115.
[12] Costa, V. S., Farias, A. D. S., Bedregal, B., Santiago, R. H., & Canuto, A. M. D. P. (2018). Combining multiple algorithms in classifier ensembles using generalized mixture functions. Neurocomputing, 313, 402-414.
[13] Dande, P., & Samant, P. (2018). Acquaintance to artificial neural networks and use of artificial intelligence as a diagnostic tool for tuberculosis: a review. Tuberculosis, 108, 1-9.
[14] Obulesu, O., Kallam, S., Dhiman, G., Patan, R., Kadiyala, R., Raparthi, Y., & Kautish, S. (2021). Adaptive diagnosis of lung cancer by deep learning classification using Wilcoxon gain and generator. Journal of Healthcare Engineering, 2021.
[15] Khosravi, P., Kazemi, E., Imielinski, M., Elemento, O., & Hajirasouliha, I. (2018). Deep convolutional neural networks enable the discrimination of heterogeneous digital pathology images. EBioMedicine, 27, 317-328.

