
Volume 9, Issue 12, December – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Improving Quality of Medical Scans using GANs


Tanushree Bharti¹; Yogam Singh²; Mudit Jain³; Ankita Kumari⁴

¹JRF, Poornima University, Jaipur
²JECRC University, Jaipur
³Poornima University, Jaipur
⁴Teaching Assistant, Poornima University, Jaipur

Abstract:- Improving the quality of medical images is essential for precise diagnosis and treatment planning. When low-quality images are used to train a neural network model, good accuracy cannot be achieved. Generative Adversarial Networks (GANs) have become a potent image enhancement tool that offers a fresh method for raising the caliber of medical images. To improve medical images, this paper presents a GAN-based framework that reduces noise, increases resolution, and corrects artifacts. The suggested technique makes use of a generator network to convert low-quality images into their high-quality equivalents, and a discriminator network to assess the veracity of the improved images. To ensure robustness across various modalities, the model is trained on a diverse dataset of medical images, including MRI, CT, and X-ray scans. Our experimental results show that the GAN-based method significantly improves image quality compared to conventional methods, as evidenced by enhanced peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) in quantitative evaluations. This study emphasizes the value of incorporating deep learning methods into medical image processing pipelines and the potential of GANs to advance medical imaging technology so that robust neural network models can be designed.

Keywords:- Medical Image Quality, Convolutional Neural Networks, Generative Adversarial Networks (GANs), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM).

I. INTRODUCTION

Medical imaging technologies such as CT, MRI, and X-ray, which provide non-invasive observation of internal body components, are essential to contemporary healthcare. These images are useful diagnostic tools that help doctors find anomalies, monitor the course of diseases, and schedule treatments. However, the availability of varied and representative datasets for training is a major factor in how well machine learning models perform in medical image interpretation [1]. Convolutional Neural Networks (CNNs) have demonstrated remarkable performance in segmenting images and recognizing objects in recent years. Utilizing these networks for clinical tasks, such as classifying medical images and segmenting organs and diseases, should improve medical judgment [2].

Although medical imaging is crucial for diagnosis and therapy planning, obtaining representative and diverse datasets for machine learning model training is still difficult because of privacy issues and restricted access to uncommon cases. To enhance current datasets, this research investigates the possibility of using Generative Adversarial Networks (GANs) to generate artificial medical scans [3]. We explore the difficulties in medical imaging, the shortcomings of conventional data augmentation methods, and the requirement for intelligent data augmentation strategies. A major challenge in developing models fit for clinical use is the lack of sufficiently diverse labelled training data [4]. Additionally, class disparity frequently occurs in medical data. The idea behind GANs is that they pair a generator with a discriminator, which makes them useful for comparing and improving medical scans. In our experiments, practical results improved significantly, suggesting that GAN-based data augmentation holds promise for medical applications [5].

II. LITERATURE REVIEW AND NEED OF THE STUDY

Medical image quality can now be improved with greater effectiveness thanks to Generative Adversarial Networks (GANs), which can handle tasks like noise reduction, resolution enhancement, and image reconstruction. Many researchers have worked on GANs; some of the most cited research is summarized here. Qiaoying Yang et al. [6] showed that GANs could reduce noise while maintaining diagnostic accuracy, resulting in a significant improvement in low-dose CT scan quality.

Moreover, super-resolution has been achieved with GANs, improving the resolution of medical images. The work of Chunyuan Li et al. [7] demonstrated that GAN-based approaches perform better in this context than traditional methods by reconstructing high-resolution images with better preservation of fine details. Specifically, the Super-Resolution GAN (SRGAN) produced high-quality images that were more accurate and aesthetically pleasing. GANs have also been used in medical image synthesis, which makes it possible to produce high-quality images from imperfect or low-quality data.

Yibin Song et al. [8] worked on liver lesion detection and classification with novel neural network architectures. Zhu Jun-Yan et al. [9] developed a model for image-to-image translation and created high-quality MRI images from CT scans using GANs. Zhang et al. [10] proposed translating and segmenting multimodal medical volumes with a cycle- and shape-consistency GAN.

This research paper's main goal is to find out how well GANs work in producing realistic medical scans that faithfully depict patient data that has not yet been seen. Our main aim is to overcome the shortcomings of conventional data augmentation methods and enable the production of varied and superior medical datasets by utilizing deep learning. With differing degrees of effectiveness, Balancing GANs (BAGANs) [11] have been used to correct class imbalance in GAN training data; because the discriminator compares real images against the generator's output, this approach works better in low-data regimes, where the model's primary issues are overfitting and inadequate generalization.
III. RESEARCH METHODOLOGY

• CheXpert Dataset
We used 224,316 frontal and lateral chest radiographs of 65,240 people from Stanford Hospital, drawn from the publicly available CheXpert dataset. Fourteen common chest conditions, including edema, consolidation, cardiomegaly, pleural effusion, and pneumothorax, are labeled in the dataset. Natural language processing (NLP) techniques were used to extract the labels from radiology reports. The dataset has the following key features:

• Uncertainty Labels: One special feature of CheXpert is its ability to provide uncertainty labels for conditions that, even for human radiologists, may be unclear or challenging to classify. With the option to label data as "positive," "negative," or "uncertain," models can be trained to handle uncertainty in medical diagnosis [12]. A sketch of how such labels can be mapped for training follows this list.

• Diversity: The dataset is a valuable resource for creating generalizable models because it contains X-rays from a diverse patient population that spans a wide range of ages, genders, and clinical conditions.

• Training and Validation: CheXpert consists of two sets: a training set and a validation set. Skilled radiologists manually annotate the validation set. This configuration makes it possible to evaluate models robustly.
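As a concrete illustration of how these three-valued labels can feed a binary classifier, here is a minimal sketch; the -1.0 encoding for "uncertain" matches the public CheXpert CSV files, while the helper name and the U-Ones/U-Zeros policy choice are illustrative assumptions rather than this paper's stated method.

# Minimal sketch: mapping CheXpert-style labels to binary training targets.
# Labels are assumed encoded as 1.0 (positive), 0.0 (negative), and
# -1.0 (uncertain), as in the CheXpert CSV files.
def map_uncertain(label: float, policy: str = "U-Ones") -> float:
    if label == -1.0:                                 # uncertain case
        return 1.0 if policy == "U-Ones" else 0.0     # U-Zeros -> negative
    return label                                      # pos/neg pass through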
• Conventional Methods of Data Augmentation
Medical image collections have been extensively enhanced by the application of conventional data augmentation techniques, including rotation, flipping, and cropping. These methods might not capture the intricate variances found in actual medical scans. Furthermore, their efficacy in producing realistic and diverse data is limited, as they fail to take into consideration the underlying anatomical structures and diseases. A sketch of such transforms is given below.
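For reference, the conventional augmentations named above can be expressed with torchvision; the parameter values here are illustrative, not settings from the paper.

# Sketch of the conventional augmentations discussed above (rotation,
# flipping, cropping) using torchvision.
import torchvision.transforms as T

conventional_aug = T.Compose([
    T.RandomRotation(degrees=10),                 # small rotations
    T.RandomHorizontalFlip(p=0.5),                # mirror the radiograph
    T.RandomResizedCrop(224, scale=(0.9, 1.0)),   # mild random cropping
    T.ToTensor(),
])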

Fig 1 Data Augmentation by Generative Adversarial Network

• Configuration for an Experiment
We use a 14-way classification challenge to train DenseNet-121, where each input image may represent more than one pathology. Training, validation, and testing use a 90%/5%/5% split. We use randomly selected portions of the training dataset (1%, 10%, 50%, and 100%) to train each of the three primary experiments of the generative adversarial network.

Even with only 5% of the parent dataset, this setup yields robust results on the categorization of thoracic X-rays. We pre-trained DenseNet-121 from TorchXRayVision on ImageNet and used transfer learning in all experiments, as sketched below. Figure 1 shows the images augmented by the GAN.
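To make the classification setup concrete, here is a minimal sketch of the 14-way multi-label configuration; the optimizer, learning rate, and loss choice are illustrative assumptions rather than settings reported in this paper.

# Sketch of the multi-label setup: DenseNet-121 pretrained on ImageNet,
# a 14-logit head, and one binary (sigmoid) task per pathology.
import torch
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(weights="IMAGENET1K_V1")          # ImageNet pre-training
model.classifier = nn.Linear(model.classifier.in_features, 14)

criterion = nn.BCEWithLogitsLoss()                    # multi-label objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, targets):                      # targets: (N, 14) floats
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()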
• Difficulties with Medical Imaging
Accurate diagnosis and treatment require addressing a number of obstacles presented by the field of medical imaging. Chief among these is scan distortion: various factors, including noise, artifacts, and motion, can distort medical scans. These aberrations can seriously impair the images' quality and interpretability, making it difficult for healthcare practitioners to establish precise diagnoses.

• The Requirement for Sensible Data Enrichment
The drawbacks of conventional augmentation approaches may be solved by intelligent data augmentation techniques, such as those based on deep learning. These methods improve the robustness and diversity of the training dataset by learning the underlying distribution of medical images and producing synthetic scans that closely mimic real patient data [13].

• Generative Adversarial Networks (GANs)
The GAN class of deep learning models is made up of a generator and a discriminator neural network. The generator produces synthetic data while the discriminator judges scan images; through this competition, the generator learns to produce more realistic samples and hence higher-fidelity synthetic pictures. Figure 2 shows the process of a generative adversarial network for generating images; the standard objective behind this process is given below.
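For reference, a standard form of that adversarial objective (the min-max game of Goodfellow et al.; the paper itself does not write it out) is:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]

Here the discriminator D is pushed to score genuine scans near 1 and synthetic scans near 0, while the generator G is pushed to make D(G(z)) approach 1.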

Fig 2 Process of Generative Adversarial Network

IV. IMPLEMENTATION

• Deep Convolutional GAN (DCGAN)
The DCGAN architecture is one essential tool for picture-generation problems. To generate visually accurate and high-resolution images, it uses convolutional layers to extract hierarchical characteristics from the input data.

• Generator Network: The generator learns to produce visuals that mimic actual data by using random noise as input. Usually, it is made up of convolutional layers followed by upsampling layers such as transposed or nearest-neighbor convolutions. The generator's output is an image.

• Discriminator Network: The discriminator is a convolutional neural network (CNN) that learns to distinguish between real images from the dataset and fake ones from the generator. It accepts an image as input and produces a likelihood score that indicates the authenticity of the provided image.

• Adversarial Training: The discriminator and generator are trained simultaneously in a min-max game. As the discriminator endeavors to precisely distinguish between genuine and counterfeit images, the generator aims to produce visuals that are indistinguishable from real photos. To enhance both networks' performance, the parameters are optimized throughout the training phase.

• Convolutional Layers: DCGANs employ convolutional layers for the generator and discriminator rather than fully connected layers. Convolutional layers work effectively for applications like picture production because they can capture spatial patterns in images.

• Batch Normalization: To stabilize training and quicken convergence, batch normalization is frequently used in both the discriminator and generator networks. It decreases the internal covariate shift issue by normalizing the activations of each layer.

• Avoiding Pooling Layers: DCGANs usually steer clear of pooling layers in the generator and discriminator. Rather, they employ fractional-strided convolutions (transposed convolutions) for upsampling in the generator and strided convolutions for downsampling in the discriminator. This improves the preservation of spatial information.

• Activation Functions: The generator network uses standard activation functions like ReLU (Rectified Linear Unit), with the exception of the output layer, which uses tanh to guarantee that the generated images' pixel values fall within [-1, 1]. Leaky ReLU is frequently utilized in the discriminator to avoid the dead-neuron issue.

DCGANs have shown remarkable success in producing lifelike images in a variety of domains such as bedrooms, faces, and landscapes. They have also served as an inspiration for a great deal of development and innovation in the generative modeling space. A sketch that assembles the above design rules into code follows.
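The following minimal PyTorch sketch assembles those DCGAN rules (strided/transposed convolutions instead of pooling, batch norm, ReLU/tanh in the generator, LeakyReLU in the discriminator). The 100-dimensional noise vector and 64x64 single-channel output are illustrative assumptions, not settings taken from the paper.

# Minimal DCGAN sketch for 64x64 grayscale scans.
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            # 1x1 noise -> 4x4 -> 8x8 -> 16x16 -> 32x32 -> 64x64
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0), nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1), nn.Tanh(),   # pixels in [-1, 1]
        )
    def forward(self, z):                     # z: (N, z_dim, 1, 1)
        return self.net(z)                    # -> (N, 1, 64, 64)

class Discriminator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            # 64x64 -> 32 -> 16 -> 8 -> 4 -> 1 via strided convolutions
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 4, ch * 8, 4, 2, 1), nn.BatchNorm2d(ch * 8), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 8, 1, 4, 1, 0), nn.Sigmoid(),     # realness score
        )
    def forward(self, x):
        return self.net(x).view(-1)           # one score per image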
 Design Modifications: To enable information flow at
• Pix2pix Network
This is a conditional GAN architecture that learns how to translate an input picture to an output picture. It has proven effective in a variety of applications, including the creation of medical images from semantic labeling and image-to-image translation [14].

Pix2Pix, short for "Image-to-Image Translation with Conditional Adversarial Networks," is a conditional generative adversarial network (GAN) designed especially for image-to-image translation applications. This is how Pix2Pix functions and what sets it apart:

• Conditional GAN Framework: Pix2Pix expands the capabilities of the GAN framework by including a conditional setting. In a traditional GAN, the generator generates images from random noise as input, while the discriminator looks for differences between genuine and fake images. The generator in Pix2Pix gains the capability to translate images between distinct domains because both the discriminator and generator are conditioned on input images [15].

• Translation of Images to Images: Pix2Pix is especially made for jobs in which the input and output images are paired. This covers tasks such as mapping satellite imagery to maps, creating realistic graphics from sketches, transforming daytime views into nighttime sceneries, and more. The network gains the ability to translate input images from one domain into equivalent output images in another.

• Generator and Discriminator Networks: In Pix2Pix, an encoder-decoder architecture is commonly used as the generator. The input image is encoded by the encoder into a latent representation, which is then decoded by the decoder to produce the output image. Convolutional neural networks are used in the discriminator to discriminate between actual input-output image pairings and fictitious ones produced by the generator [16].

• Adversarial Training: Like other GANs, Pix2Pix trains the discriminator and generator at the same time via adversarial training. The discriminator seeks to discern between pairings of actual and fake images, while the generator seeks to make output images that look identical to real photos.

• L1 Loss: Pix2Pix augments the objective function with a pixel-wise L1 loss component in addition to the adversarial loss. This loss encourages the generated images to more closely approximate the ground-truth images at the pixel level. Together, adversarial loss and L1 loss yield visually pleasing results while maintaining minute details; the combined objective is sketched below.

• Design Modifications: To enable information flow at various scales, Pix2Pix frequently employs a U-Net design for the generator, which incorporates skip connections between the encoder and decoder. This architecture aids in the capture of both local and global elements, improving performance in image-translation jobs.

Pix2Pix is a popular tool for many computer vision applications, including semantic segmentation, style transfer, image colorization, and more. Its capacity to learn mappings across various visual domains from paired training data makes it useful for many image alteration applications.
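For reference, the combined Pix2Pix objective mentioned above (following Isola et al.; the weighting term λ is their notation, and this paper does not state the formula explicitly) is:

G^{*} = \arg\min_G \max_D \, \mathcal{L}_{\mathrm{cGAN}}(G, D) + \lambda \, \mathcal{L}_{L1}(G), \qquad \mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\left[\lVert y - G(x) \rVert_1\right]

The adversarial term rewards realistic-looking outputs, while the L1 term penalizes pixel-wise deviation from the paired ground truth.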
• StarGAN
With just one model, StarGAN is a flexible GAN architecture that can translate images between different domains. It makes it possible to create a variety of medical images with various features and qualities. StarGAN, or "Star Generative Adversarial Network," is a generative adversarial network (GAN) architecture designed for multi-domain image-to-image translation. This is how StarGAN functions and what makes it unique:

• Multi-Domain Image Translation: StarGAN can handle many domains inside a single model, in contrast to standard image-to-image translation techniques that call for distinct models for each translation assignment. It supports several target domains at once and has the ability to convert images across them. It may, for instance, convert pictures of human faces into many facial styles, representing various ages, genders, and races [17].

• Single Generator and Discriminator: For all domain translations, StarGAN has a single generator and discriminator architecture. This means the same generator is accountable for creating images in every target domain, and the discriminator learns to discern between genuine images from any domain and false images produced by the generator.

• Conditional GAN Framework: To enable multi-domain translation, StarGAN expands the conditional GAN framework. The generator is conditioned on target domain labels, which indicate the desired domain for the output image, and source domain labels, which indicate the domain of the input image. With this conditioning, the network is able to learn how to produce images in various target domains according to the label of the input domain.

• Adversarial and Cycle-Consistency Losses: The adversarial loss pushes generated images in the target domain to be indistinguishable from genuine images. StarGAN also features a cycle-consistency loss, motivated by CycleGAN, which translates back to the source domain such that the reconstructed images are almost identical to the original input images; this lessens artifacts in the produced photos and preserves uniformity [18]. In our experiments, ROC-AUC scores indicate how well the model performs across dataset regimens and augmentation techniques: in the low-data setting, the generator improves the scan images across all disorders, with an AUC increase of 0.07 for fractures and 0.03 for lung lesions and pleural abnormalities. A sketch of the conditioning and cycle loss follows below.

• Domain Adaptation Module: The fifth feature StarGAN offers is a domain adaptation module, which gains the capacity to alter the generated images according to the characteristics of the target domain. The quality and realism of the translated images are improved by this module, which adjusts global and local characteristics to match the target domain.

StarGAN has addressed a wide range of image translation problems, including face attribute change, style transfer, and domain adaptation. Its capacity to manage several domains with a single model accounts for its effectiveness and adaptability for a broad spectrum of computer vision and image processing applications.
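As a sketch of the label conditioning and cycle-consistency loss just described (the helper names are hypothetical, and G stands for any image-to-image generator; this is not the paper's exact implementation):

# Sketch of StarGAN-style domain conditioning and cycle-consistency.
import torch
import torch.nn.functional as F

def condition_on_domain(x, label):
    # x: (N, C, H, W) images; label: (N, num_domains) one-hot domain code.
    n, _, h, w = x.shape
    label_map = label.view(n, -1, 1, 1).expand(n, label.size(1), h, w)
    return torch.cat([x, label_map], dim=1)     # append label channels

def cycle_consistency_loss(G, x, src_label, tgt_label):
    fake = G(condition_on_domain(x, tgt_label))        # translate to target
    recon = G(condition_on_domain(fake, src_label))    # translate back
    return F.l1_loss(recon, x)                         # reconstruction penalty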
• Teaching the Generator Model
Training the generator model involves optimizing the network parameters to minimize the discrepancy between the generated and real images. This process requires appropriate loss functions and training strategies, in addition to a large and diverse dataset of medical scans, to ensure convergence and stability. A minimal training loop in this style is sketched below.

GANs can capture the hidden underlying properties of the training dataset, including illnesses, textures, and anatomical structures. By sampling from the learnt latent space, the generator model can produce previously unobserved medical scans. These produced scans can be utilized for a number of tasks, such as creating pathological instances to train reliable diagnostic models, domain adaptation, and data augmentation.
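A minimal sketch of such an alternating training loop, assuming gen, disc, and a data loader like the DCGAN modules sketched earlier (the learning rate and betas follow common DCGAN practice, not values reported in this paper):

# Alternating adversarial training loop (gen, disc, loader assumed defined).
import torch
import torch.nn as nn

bce = nn.BCELoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))

for real in loader:                               # real: batch of scans
    z = torch.randn(real.size(0), 100, 1, 1)
    fake = gen(z)

    # Discriminator step: push real -> 1, fake -> 0.
    opt_d.zero_grad()
    d_loss = bce(disc(real), torch.ones(real.size(0))) + \
             bce(disc(fake.detach()), torch.zeros(real.size(0)))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator (fake -> 1).
    opt_g.zero_grad()
    g_loss = bce(disc(fake), torch.ones(real.size(0)))
    g_loss.backward()
    opt_g.step()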
It should be emphasized that traditional augmentation improves performance when compared to no augmentation, though not as much as GAN augmentation. Similarly, our results for the second-smallest data regimen (10%) demonstrate that the conditions are improved by GAN augmentation. These positive results demonstrate the superior performance of models trained with GAN augmentation. The learning curves for training without augmentation and with GAN augmentation are shown in Figure 3. It turns out that overfitting in the 1% low-data regimen is mitigated by GAN-based augmentation.

In the most recent epoch, the difference between the training and validation losses of the GAN-augmented model is 0.03, whereas the non-augmented model's difference is 0.06. In the 10% regimen, both models show comparable overfitting, with the gap for the GAN-augmented model falling in roughly the 0.03 to 0.04 interval.

This is in line with our AUC findings, which show that while adding synthetic photos helps reduce overfitting for extremely small data batches, it may not always be helpful as dataset sizes grow.

Fig 3 Training and Validation Curve Comparison of GAN-Based Augmented Training against No Augmentation

V. RESULTS AND DISCUSSION

Since this model performed better than the others overall, we show the image compared between the generator and discriminator and the process of identifying the real image, suggesting that the network activations might be the same in both cases.

Even so, the strongest activations seem to be localized to a certain area in each X-ray for all CAMs. In a clinical setting, CAMs may offer helpful, interpretable insight regarding the areas of an X-ray that signal particular diseases due to the appearance of high activation at specific spots. Class activation map visualization highlights the image regions that are most discriminative for a given category, where the map is computed by projecting the output-layer weights onto the final convolutional feature maps; a sketch of this computation follows. Figure 4 shows an image before and after applying the GAN. Table 2 shows the AUC results for non-augmented, standard-augmented, and GAN-augmented classes across dataset regimens.
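A minimal sketch of that CAM projection (Zhou et al.-style; the function and tensor shapes are illustrative assumptions about how the trained DenseNet-121's features and classifier weights would be accessed):

# CAM sketch: project output-layer weights onto final conv feature maps.
import torch
import torch.nn.functional as F

def class_activation_map(features, weights, class_idx):
    # features: (C, H, W) final conv feature maps for one image
    # weights:  (num_classes, C) output-layer weight matrix
    cam = torch.einsum("c,chw->hw", weights[class_idx], features)
    cam = F.relu(cam)                       # keep positive evidence only
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)         # normalize to [0, 1] for overlay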
It is essential to evaluate the quality and realism of the generated medical scans to ensure that they are suitable for use in subsequent tasks. This involves both qualitative assessment by medical experts and quantitative metrics, including pixel-level similarity evaluations, to confirm the clinical relevance of the generated images. One such pixel-level check is sketched below.
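For instance, PSNR (the metric named in the abstract) can serve as a minimal pixel-level similarity check; this sketch assumes images scaled to [0, 1].

# PSNR between a reference scan and a GAN-enhanced scan.
import torch

def psnr(reference, generated, max_val=1.0):
    mse = torch.mean((reference - generated) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)   # higher is better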

Fig 4 Images before and after GAN

Table 2: AUC Results for Augmented Classes across Dataset Regimens and Augmentation Strategies

Dataset Size | Pathology   | No Augment | Standard Augment | GAN Augment
1%           | Lung lesion | 1.727      | 1.728            | 1.758
1%           | Pleural     | 1.566      | 1.550            | 1.594
1%           | Fracture    | 1.583      | 1.601            | 1.656
10%          | Lung lesion | 2.809      | 1.796            | 2.852
10%          | Pleural     | 1.632      | 1.655            | 1.670
10%          | Fracture    | 1.700      | 1.723            | 1.742
50%          | Lung lesion | 1.826      | 1.822            | 1.828
50%          | Pleural     | 1.710      | 1.696            | 1.706
50%          | Fracture    | 1.789      | 1.780            | 1.793
100%         | Lung lesion | 1.835      | 2.945            | 1.834
100%         | Pleural     | 1.721      | 1.712            | 1.727
100%         | Fracture    | 1.811      | 1.793            | 1.807

VI. CONCLUSION

In this study, we compare the performance of non-augmented and standard-augmented models over a range of data regimens. Our results suggest that class-imbalanced medical datasets can be effectively corrected via GAN-based data augmentation. Across a range of dataset sizes and pathologies, we show improvements in Table 2 and Figure 4. The discriminator and generator are the two main components of the generative adversarial network process: the discriminator compares the generated fake image to the original version of the image, and this process continues until the generator's outputs are hard to distinguish from real images. Eventually, we obtain comparable versions of the scan images even on a large dataset, though note that this may not always be the case. To sum up, GANs present a viable method for creating previously unseen medical scans with a variety of traits and diseases. These models can help create large-scale, high-quality medical datasets by utilizing deep learning to overcome the drawbacks of standard data augmentation methods. Further study in future work is required to address issues including data scarcity, model interpretability, and ethical concerns about the use of synthetic data in medical research and practice.

REFERENCES

[1]. Hoo-Chang Shin et al., "Synthetic data augmentation using GAN for improved liver lesion classification," in International Workshop on Medical Imaging Simulation and Synthesis, pp. 1-11. Springer, Cham, 2018.
[2]. Eunji Choi et al., "StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789-8797, 2018.
[3]. Jelmer M. Wolterink et al., "Generative adversarial networks for noise reduction in low-dose CT," IEEE Transactions on Medical Imaging 36.12 (2017): 2536-2545.
[4]. Maayan Frid-Adar et al., "Synthetic data augmentation using GAN for improved liver lesion classification," in International Workshop on Medical Imaging Simulation and Synthesis, pp. 1-11. Springer, Cham, 2018.
[5]. Kristina Armanious et al., "Using adversarial networks to synthesize CT from ultrasound images," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 81-89. Springer, Cham, 2017.
[6]. Qiaoying Yang et al., "Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss," IEEE Access 6 (2018): 47958-47966.
[7]. Chunyuan Li et al., "Unsupervised image-to-image translation networks," in Advances in Neural Information Processing Systems, pp. 700-708, 2017.
[8]. Yibin Song et al., "Liver lesion detection and classification with novel neural network architectures," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 830-838. Springer, Cham, 2017.
[9]. Jun-Yan Zhu et al., "Toward multimodal image-to-image translation," in Advances in Neural Information Processing Systems, pp. 465-476, 2017.
[10]. Zhang et al., "Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 56-64. Springer, Cham, 2018.
[11]. Kavita Lal and Madan Lal Saini, "A study on deep fake identification techniques using deep learning," AIP Conference Proceedings 2782(1): 020155, 15 June 2023. https://doi.org/10.1063/5.0154828
[12]. Y. Singh, M. Saini and Savita, "Impact and Performance Analysis of Various Activation Functions for Classification Problems," 2023 IEEE International Conference on Contemporary Computing and Communications (InC4), Bangalore, India, 2023, pp. 1-7. doi: 10.1109/InC457730.2023.10263129
[13]. J. Sarmah, M. L. Saini, A. Kumar and V. Chasta, "Performance Analysis of Deep CNN, YOLO, and LeNet for Handwritten Digit Classification," in Artificial Intelligence: Theory and Applications (AITA 2023), Lecture Notes in Networks and Systems, vol. 844. Springer, Singapore, 2024. https://doi.org/10.1007/978-981-99-8479-4_16
[14]. M. L. Saini, A. Patnaik, Mahadev, D. C. Sati and R. Kumar, "Deepfake Detection System Using Deep Neural Networks," 2024 2nd International Conference on Computer, Communication and Control (IC4), Indore, India, 2024, pp. 1-5. doi: 10.1109/IC457434.2024.10486659
[15]. P. D. S. Prasad, R. Tiwari, M. L. Saini and Savita, "Digital Image Enhancement using Conventional Neural Network," 2023 2nd International Conference for Innovation in Technology (INOCON), Bangalore, India, 2023, pp. 1-5. doi: 10.1109/INOCON57975.2023.10100995
[16]. Youssef Skandarani, Pierre-Marc Jodoin and Alain Lalande, "GANs for medical image synthesis: An empirical study," Journal of Imaging 9.3 (2023): 69.
[17]. Atif Ahmed Showrov et al., "Generative Adversarial Networks (GANs) in Medical Imaging: Advancements, Applications and Challenges," IEEE Access (2024).
[18]. E. G. Kumar, M. Lal Saini, S. A. Khadar Ali and B. B. Teja, "A Clinical Support System for Prediction of Heart Disease using Ensemble Learning Techniques," 2023 International Conference on Sustainable Communication Networks and Application (ICSCNA), Theni, India, 2023, pp. 926-931. doi: 10.1109/ICSCNA58489.2023.10370569
