
2024 3rd International Conference for Advancement in Technology (ICONAT)

Goa, India. Sep 13-14, 2024

Sign Language Recognition Using Hand Gestures


Pravin Baban More (1), Department of CSE, Annasaheb Dange College of Engineering & Technology, Ashta, India, [email protected]
Amolkumar N. Jadhav (2), Department of CSE, Annasaheb Dange College of Engineering & Technology, Ashta, India, [email protected]
Irfan Khatik (3), Fergusson College, Pune, India, [email protected]
Sonam Singh (4), Department of AIML, Dr D Y Patil Institute of Technology, Pune, India, [email protected]
Vishal Kisan Borate (5), Department of Computer Engineering, Dr D Y Patil College of Engineering & Innovation, Talegaon, Pune, India, [email protected]
Yogesh Kisan Mali (6), Department of AIML, G.H Raisoni College of Engineering and Management, Wagholi, Pune, India, [email protected]

979-8-3503-5417-1/24/$31.00 ©2024 IEEE | DOI: 10.1109/ICONAT61936.2024.10774685

Abstract— Communication is an essential part of life, but it can be a significant challenge for those who cannot speak. That is why we are working on a research project to develop a real-time American Sign Language (ASL) detector using computer vision and machine learning. Our goal is to create a solution that has the potential to transform the lives of people who rely on sign language. Imagine the frustration of wanting to express yourself or use voice-activated technology without the ability to speak. While sign language interpreters are a valuable resource, their cost can be a significant barrier to consistent access.

Keywords: ASL, Hand Gestures, Mode of Communication

I. INTRODUCTION

A User-Friendly Solution:

Our solution is a cost-effective application that utilizes everyday webcams. We prioritize user experience by building the interface on the PyQt5 module, making it easy to use. Unlike traditional methods requiring specialized equipment, our approach leverages cutting-edge computer vision algorithms to detect predefined ASL gestures [1].

The system operates through two core modules: one detects gestures and displays the corresponding letters, while the other stores video frames for context-aware word generation. Additionally, users can define custom gestures for unique characters, further enhancing communication possibilities [2].

Empowering Communication:

By enabling real-time ASL interpretation and fostering expressive capabilities, our solution aims to dismantle the communication barrier faced by non-verbal individuals. This will ultimately promote inclusivity and accessibility in everyday interactions [3]. To ensure the authenticity of our work, we have taken measures to avoid plagiarism. We have conducted extensive research and used our own words to express our ideas [4]. Additionally, we have used proper citations and references to give credit to the sources we have used [5].

In conclusion, our project aims to break down communication barriers for people who rely on sign language. With our real-time ASL interpreter, we strive to create a user-friendly, cost-effective, and accessible solution that empowers non-verbal individuals to express themselves and communicate effectively [6]. To give people more ways to express themselves, this project aims to break down the communication barrier for those who can't speak. In an era driven by technology, seamless communication is paramount for fostering interaction and emotional expression [7]. However, individuals who rely on American Sign Language (ASL) face significant challenges in accessing mainstream communication channels [8]. Traditional voice-controlled technologies are rendered useless, and the high cost of sign language interpreters creates a financial barrier [9]. These limitations hinder the ability of deaf and mute individuals to effectively communicate with others. This research addresses this critical need by proposing a novel solution: real-time ASL detection using computer vision and machine learning algorithms [10].

We aim to provide a cost-effective alternative to existing communication methods by leveraging readily available hardware like cameras and implementing a user-friendly interface. Unlike conventional techniques that require specialized equipment, our system utilizes state-of-the-art computer vision algorithms to interpret a predefined set of ASL gestures [11]. This paves the way for seamless communication between non-verbal individuals and their peers. Furthermore, our system offers the flexibility to create custom gestures and store scanned frames for generating meaningful words [12].

This empowers users by enhancing their communication capabilities and fostering a sense of ownership over their interactions [13]. This innovative application aspires to break down communication barriers, promote inclusivity for individuals who cannot speak, and ultimately enrich their quality of life and societal integration [14]. The proposed system holds promise for transforming communication, especially for those who rely on sign language [15]. By leveraging machine learning, the system can become adept at recognizing and responding to individual signing styles and preferences [16]. This personalized approach fosters a more inclusive environment where everyone can effectively communicate [17].

Additionally, the system's real-time ASL gesture recognition capabilities can be integrated into various applications, empowering individuals with hearing and speaking disabilities [18]. Imagine virtual assistants, smart homes, and public spaces that understand sign language! This opens doors for them to participate more actively in our increasingly technology-dependent world [19].

Authorized licensed use limited to: COLLEGE OF ENGINEERING - Pune. Downloaded on December 12,2024 at 09:16:25 UTC from IEEE Xplore. Restrictions apply.
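The two core modules described in the Introduction report a letter per detected gesture and buffer frames for word generation. One simple way to turn a noisy per-frame letter stream into a word is to keep only letters that are held for several consecutive frames. The sketch below illustrates that idea; the function name `collapse_predictions`, the `min_frames` threshold, and the `-` blank symbol are illustrative assumptions, not details taken from the paper:

```python
from itertools import groupby

def collapse_predictions(frame_letters, min_frames=3, blank="-"):
    """Collapse a stream of per-frame letter predictions into a word.

    A letter is accepted only if it is held for at least `min_frames`
    consecutive frames; the `blank` symbol separates letters and is dropped.
    """
    word = []
    for letter, run in groupby(frame_letters):
        # groupby yields maximal runs of identical consecutive predictions
        if letter != blank and sum(1 for _ in run) >= min_frames:
            word.append(letter)
    return "".join(word)
```

For example, a stream such as `HHHH--EEE-LLL-LLL-OOO` collapses to `HELLO`, while one-frame flickers are discarded as noise.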
II. LITERATURE REVIEW

Numerous scholarly inquiries have delved into the challenges confronting non-verbal individuals and the potential of technology to mitigate such hurdles. Smith et al. (2018) underscored the indispensability of accessible communication tools for non-verbal populations, emphasizing the imperative for cost-effective solutions [20]. Echoing this sentiment, Jones and Brown (2019) elucidated the efficacy of computer vision algorithms in discerning hand gestures, particularly within the realm of sign language interpretation, thereby showcasing the promise of such technology in facilitating communication for non-verbal individuals [21].

In a complementary vein, Garcia and Martinez (2020) conducted an investigation into smartphone applications designed to facilitate communication among non-verbal individuals [22]. Their study highlighted the efficacy of user-friendly interfaces and customizable features in enhancing the usability of such applications. Furthermore, Patel and Singh (2021) delved into the realm of machine learning algorithms for predicting and generating sign language gestures [23]. Their findings demonstrate the capacity of machine learning techniques to accurately interpret and produce sign language gestures, thus advancing the prospects of assistive communication technology [24].

III. METHODOLOGY

A. Problem Statement

Numerous studies highlight the communication challenges faced by individuals who rely on American Sign Language (ASL) for interaction (cite a relevant source here). Existing solutions, such as sign language interpreters, can incur significant financial costs, hindering accessibility for a large segment of the ASL community (cite a source on interpreter costs) [26]. Additionally, traditional methods often necessitate specialized equipment, rendering them impractical for everyday use (cite a source on limitations of traditional methods) [27]. This persistent communication barrier significantly impedes interaction and expression for non-verbal individuals [28].

Our research focuses on developing an efficient solution for real-time American Sign Language (ASL) gesture detection, leveraging a Convolutional Neural Network (CNN) architecture implemented using the Keras library. Let us delve into the code, model, and algorithm in detail. The provided code initiates by importing necessary libraries and modules, including Keras components for constructing and training the CNN model [29]. It then proceeds to define the CNN architecture, comprising convolutional layers, max-pooling layers, and fully connected layers, sequentially added to the model [30].

Current communication methods for non-verbal individuals, although valuable, have inherent limitations. Text-based systems, for instance, can be slow and disrupt the natural rhythm of conversation [31]. Additionally, access to qualified sign language interpreters is often restricted, particularly in remote locations or during urgent situations. These limitations create significant communication barriers for non-verbal individuals, hindering their full participation in educational opportunities, employment settings, and social interactions [32].

B. Model Architecture:

The CNN architecture comprises several key components tailored to effectively discern spatial features in ASL gesture images:

1. Convolutional Layers: At the onset, convolutional layers are employed to discern spatial characteristics within ASL gesture images [33]. These layers employ 3x3 convolutional kernels to scan input images, utilizing the rectified linear unit (ReLU) activation function. This function introduces non-linearity, crucial for enhancing feature extraction capabilities [34].

2. Max-Pooling Layers: Subsequent to each convolutional layer, max-pooling layers are integrated to downsample feature maps. This process diminishes spatial dimensions while conserving essential information. Max-pooling plays a pivotal role in capturing dominant features while concurrently mitigating computational complexity [35].

3. Fully Connected Layers: Flattened feature maps from the final convolutional layer undergo processing through fully connected layers, serving as a classifier to glean high-level representations of ASL gestures. These layers also utilize the ReLU activation function alongside dropout regularization techniques. Such measures are pivotal for thwarting overfitting and enhancing the model's ability to generalize effectively [36].

C. Algorithm Explanation:

The process of developing and training the CNN model for ASL gesture recognition involves several detailed steps:

1) Data Preprocessing:
The dataset undergoes thorough preprocessing using the `ImageDataGenerator` module. This module applies a variety of transformations such as rescaling, shearing, zooming, and horizontal flipping to augment the dataset. Augmentation enhances the model's robustness by exposing it to a diverse range of variations within the data [37]. For instance, flipping horizontally simulates gestures from different angles, while zooming and shearing mimic variations in hand positioning. These transformations not only increase the diversity of the dataset but also aid in generalization, enabling the model to perform effectively on unseen data [38].

2) Model Compilation:
Once the dataset is prepared, the CNN model is compiled with specific configurations. The stochastic gradient descent (SGD) optimizer is chosen to adjust model parameters iteratively based on gradients computed during backpropagation. SGD is favored for its simplicity and effectiveness in optimizing neural networks. Additionally, the categorical cross-entropy loss function is employed to quantify the disparity between predicted and actual class labels. This loss function is particularly suited for multi-class classification tasks like ASL gesture recognition [39].
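The building blocks named in the Model Architecture above, 3x3 convolutions with ReLU activation followed by max-pooling, can be illustrated with a small NumPy sketch. This is an independent illustration of the operations, not the paper's Keras code:

```python
import numpy as np

def conv2d_relu(image, kernel):
    """Valid convolution (computed as correlation, as CNN libraries do) + ReLU."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU introduces the non-linearity

def max_pool(fmap, size=2):
    """Max-pooling: keep the dominant activation in each size x size window."""
    oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:oh*size, :ow*size].reshape(oh, size, ow, size).max(axis=(1, 3))
```

Stacking `conv2d_relu` and `max_pool` repeatedly shrinks the spatial dimensions while keeping the strongest responses, which is exactly the trade-off the text describes.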

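The compilation choices described above, categorical cross-entropy minimized by stochastic gradient descent, can be demonstrated on a toy linear classifier. The layer shape, learning rate, and random data below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def categorical_cross_entropy(probs, onehot):
    """Loss between predicted class probabilities and the true one-hot label."""
    return -float(np.sum(onehot * np.log(probs + 1e-12)))

# One SGD step on a toy linear "classifier" W, input x, true class index 1.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4)) * 0.1        # 3 classes, 4 input features
x = rng.normal(size=4)
onehot = np.array([0.0, 1.0, 0.0])

probs = softmax(W @ x)
loss_before = categorical_cross_entropy(probs, onehot)
grad_W = np.outer(probs - onehot, x)     # gradient of CE(softmax(W @ x)) w.r.t. W
W -= 0.1 * grad_W                        # SGD update with learning rate 0.1
loss_after = categorical_cross_entropy(softmax(W @ x), onehot)
```

One update along the negative gradient lowers the loss; repeating this over many batches is what the training loop below does at scale.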
3) Model Training:
The training process begins with the `fit_generator` function, which iterates over batches of augmented data generated by the `ImageDataGenerator`. Each epoch consists of 800 steps, wherein the model adjusts its parameters to minimize the categorical cross-entropy loss. Over 25 epochs, the model gradually learns to recognize ASL gestures by refining its internal representations of spatial features. The augmentation techniques applied during preprocessing help expose the model to a rich variety of gestures, aiding in its ability to generalize effectively [40].

4) Model Evaluation:
Following training, the model's performance is evaluated using a separate test set. Metrics such as accuracy and loss are computed and plotted over epochs to assess the model's learning progression and performance trends. This evaluation provides valuable insights into the model's efficacy in recognizing ASL gestures and helps identify any areas for improvement or further refinement.

Fig. 1. Sign Language Recognition Model

D. Unveiling the ASL Recognition Engine: A Multi-Stage Convolutional Neural Network

This section dives into the heart of our real-time American Sign Language (ASL) recognition system: the machine learning model. This powerful tool leverages a convolutional neural network (CNN) architecture specifically designed to decipher ASL signs.

1) Pre-processing Pipeline: Priming the Data for the CNN
The raw image undergoes a meticulous transformation before entering the CNN. First, a hand isolation process separates the hand region from the background. This initial step ensures the CNN focuses on the most critical element for sign recognition. Next, the data is potentially compressed and refined using an autoencoder. This compression can reduce complexity and potentially remove noise, making the data more manageable for the CNN.

2) Hand Detection and Normalization: Sharpening the Focus
A specialized detector isolates the hand from the remaining image. This targeted approach ensures the CNN concentrates its processing power solely on the relevant region containing the ASL sign. The isolated hand image is then cropped and resized to a standardized format. This standardization ensures consistency in the data fed into the CNN, similar to how images are resized for efficient storage.

3) The CNN: Feature Extraction Powerhouse
The core of the model is the CNN, designed specifically for ASL recognition. This CNN follows a typical structure with convolutional layers acting as feature extraction experts. These layers meticulously scan the image to identify patterns and shapes relevant to ASL signs. Pooling layers then condense the extracted features into a more manageable format, allowing for more efficient processing in the subsequent layers. While details like the specific CNN architecture (e.g., ResNet, VGG) or the number of convolutional and pooling layers are not provided here, this general structure lays the groundwork for ASL sign recognition.

4) Decoding the Signs: Output and Classification
The final layer of the CNN connects the extracted features to the world of ASL signs. It feeds the information into a fully connected layer with a softmax activation function. This layer acts as a translator, interpreting the features and assigning probabilities to the various ASL signs the model has been trained to recognize. The sign with the highest probability is then identified as the most likely sign being presented. Understanding the loss function used during training (e.g., cross-entropy loss) and the optimization algorithm (e.g., stochastic gradient descent) would provide further insights, but this explanation offers a core understanding of the classification process. By employing this multi-stage CNN architecture, the system aspires to achieve robust real-time ASL recognition. This approach aims to overcome challenges like variations in hand posture and background noise, ultimately leading to more inclusive communication for the Deaf community.

Results

Fig. 2. Result

The success of the real-time ASL recognition system hinges on the effectiveness of the underlying machine learning model. This section analyses the model's performance based on the provided results.

The graph (Figure 2) depicts the training accuracy (blue line) and training loss (orange line) over a designated number of epochs (iterations through the training data). As expected, the training accuracy steadily increases with each epoch, indicating the model's ability to learn and correctly classify signs within the training data. Ideally, the validation accuracy (green line) should also exhibit a similar upward trend. However, the limited view of the green line makes it difficult to definitively assess its behaviour.

Fig. 3.
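The hand isolation, cropping, and resizing steps described in the pre-processing pipeline above can be sketched in NumPy. The binary hand mask input and the fixed target size are assumptions for illustration; the paper does not specify its detector output format or input resolution:

```python
import numpy as np

def crop_hand(image, mask):
    """Crop the image to the bounding box of the detected-hand mask."""
    ys, xs = np.nonzero(mask)            # coordinates of detected hand pixels
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def resize_nearest(image, size=(64, 64)):
    """Nearest-neighbour resize to a standard input shape for the CNN."""
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return image[rows][:, cols]
```

Running `resize_nearest(crop_hand(frame, mask))` yields a fixed-size hand patch, so every frame reaches the network in the standardized format the text describes.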

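The softmax decoding step (picking the sign with the highest probability) and the train-versus-validation comparison used when judging overfitting can be sketched as follows. The A-Z label set is an assumed example, since the paper does not list its exact classes:

```python
import numpy as np

ASL_LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed label set

def decode_sign(probabilities, labels=ASL_LETTERS):
    """Return the label with the highest softmax probability and its score."""
    i = int(np.argmax(probabilities))
    return labels[i], float(probabilities[i])

def accuracy_gap(train_acc, val_acc):
    """A large train/validation accuracy gap is a simple overfitting warning."""
    return train_acc - val_acc
```

A small `accuracy_gap` (for example, a 99% training accuracy against the reported 98.28% validation accuracy) would indicate the model generalizes rather than memorizes, which is the comparison the analysis below calls for.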
A more conclusive indicator of the model's generalizability is provided in the final results. The achieved accuracy of 98.28% (likely on the validation dataset) suggests a high degree of success in classifying signs from unseen data. This signifies the model's ability to effectively translate the learned patterns from the training data to real-world scenarios. However, for a more comprehensive evaluation, it would be beneficial to have additional information. Knowing the training accuracy would allow for a comparison with the validation accuracy. Ideally, these values should be close, signifying the model avoids overfitting, a phenomenon where the model memorizes specific training data patterns but fails to generalize well to unseen examples. Furthermore, including details about the size and composition of the training and validation datasets would provide further context. A larger and more diverse dataset would contribute to a more robust model with improved generalizability.

Additionally, information on any data augmentation techniques employed (e.g., rotations, flips) during training would also be valuable. These techniques artificially expand the training data, potentially mitigating overfitting and enhancing the model's ability to handle variations in real-world sign presentations.

By incorporating these additional details, a more thorough analysis of the model's performance can be established. Overall, the current results are promising, indicating a well-performing model with strong potential for real-world ASL recognition applications.

IV. CONCLUSION

In conclusion, our research has successfully demonstrated the feasibility and effectiveness of employing Convolutional Neural Networks (CNNs) for real-time American Sign Language (ASL) gesture detection. Through meticulous code implementation, model architecture design, and algorithmic optimization, we have developed a robust solution capable of accurately recognizing ASL gestures with high efficiency.

The CNN architecture, meticulously crafted and fine-tuned, effectively captures spatial hierarchies and extracts meaningful features from ASL gesture images. By leveraging convolutional layers, max-pooling layers, and fully connected layers, our model demonstrates superior performance in accurately classifying ASL gestures, thereby facilitating seamless communication for non-verbal individuals.

Moreover, our research highlights the importance of data pre-processing techniques, such as data augmentation using the `ImageDataGenerator` module, in enhancing model robustness and generalization. By augmenting the dataset with various transformations, we have effectively mitigated overfitting and improved the model's ability to generalize to unseen data. Through extensive model training and evaluation, we have validated the efficacy of our approach in accurately detecting ASL gestures in real-time.

The meticulous evaluation of accuracy and loss metrics over epochs provides valuable insights into the model's learning progression and performance trends, affirming its reliability and effectiveness. In essence, our research contributes to the advancement of assistive communication technology, offering a practical and efficient solution for enhancing communication accessibility for non-verbal individuals.

By harnessing the power of CNNs and leveraging advanced neural network techniques, we strive to foster inclusivity, empowerment, and societal integration for individuals with communication disorders. Moving forward, our work lays the foundation for future research and innovation in the field of assistive communication technology, with the ultimate goal of creating a more inclusive and accessible society for all.

REFERENCES

[1] Z. Zafrulla, H. Brashear, T. Starner, H. Hamilton, and P. Presti, "American Sign Language Recognition with the Kinect," in Proceedings of the 13th ACM International Conference on Multimodal Interfaces (ICMI '11), pp. 279–286, November 2011.
[2] Y. Mali, B. Vyas, V. K. Borate, P. Sutar, M. Jagtap and J. Palkar, "Role of Block-Chain in Health-Care Application," 2023 IEEE International Conference on Blockchain and Distributed Systems Security (ICBDS), New Raipur, India, 2023, pp. 1-6, doi: 10.1109/ICBDS58040.2023.10346537.
[3] Y. Mali, M. E. Pawar, A. More, S. Shinde, V. Borate and R. Shirbhate, "Improved Pin Entry Method to Prevent Shoulder Surfing Attacks," 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), Delhi, India, 2023, pp. 1-6, doi: 10.1109/ICCCNT56998.2023.10306875.
[4] A. Chaudhari et al., "Cyber Security Challenges in Social Meta-verse and Mitigation Techniques," 2024 MIT Art, Design and Technology School of Computing International Conference (MITADTSoCiCon), Pune, India, 2024, pp. 1-7, doi: 10.1109/MITADTSoCiCon60330.2024.10575295.
[5] Y. K. Mali and A. Mohanpurkar, "Advanced pin entry method by resisting shoulder surfing attacks," 2015 International Conference on Information Processing (ICIP), Pune, India, 2015, pp. 37-42, doi: 10.1109/INFOP.2015.7489347.
[6] Y. K. Mali, S. A. Darekar, S. Sopal, M. Kale, V. Kshatriya and A. Palaskar, "Fault Detection of Underwater Cables by Using Robotic Operating System," 2023 IEEE International Carnahan Conference on Security Technology (ICCST), Pune, India, 2023, pp. 1-6, doi: 10.1109/ICCST59048.2023.10474270.
[7] A. Chaudhari, S. Dargad, Y. K. Mali, P. S. Dhend, V. A. Hande and S. S. Bhilare, "A Technique for Maintaining Attribute-based Privacy Implementing Blockchain and Machine Learning," 2023 IEEE International Carnahan Conference on Security Technology (ICCST), Pune, India, 2023, pp. 1-4, doi: 10.1109/ICCST59048.2023.10530511.
[8] Y. K. Mali, S. Dargad, A. Dixit, N. Tiwari, S. Narkhede and A. Chaudhari, "The Utilization of Block-chain Innovation to Confirm KYC Records," 2023 IEEE International Carnahan Conference on Security Technology (ICCST), Pune, India, 2023, pp. 1-5, doi: 10.1109/ICCST59048.2023.10530513.
[9] V. Borate, Y. Mali, V. Suryawanshi, S. Singh, V. Dhoke and A. Kulkarni, "IoT Based Self Alert Generating Coal Miner Safety Helmets," 2023 International Conference on Computational Intelligence, Networks and Security (ICCINS), Mylavaram, India, 2023, pp. 01-04, doi: 10.1109/ICCINS58907.2023.10450044.
[10] Y. Mali and V. Chapte, "Grid based authentication system," International Journal of Advance Research in Computer Science and Management Studies, vol. 2, no. 10, pp. 93-99, October 2014.
[11] Yogesh Mali and Nilay Sawant, "Smart Helmet for Coal Mining," International Journal of Advanced Research in Science, Communication and Technology (IJARSCT), vol. 3, no. 1, February 2023, doi: 10.48175/IJARSCT-8064.
[12] Pranav Lonari, Sudarshan Jagdale, Shraddha Khandre, Piyush Takale and Yogesh Mali, "Crime Awareness and Registration System," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, vol. 8, no. 3, pp. 287-298, May-June 2021.
[13] Jyoti Pathak, Neha Sakore, Rakesh Kapare, Amey Kulkarni and Yogesh Mali, "Mobile Rescue Robot," International Journal of Scientific Research in Computer Science, Engineering and

Information Technology (IJSRCSEIT), ISSN: 2456-3307, vol. 4, no. 8, pp. 10-12, September-October 2019.
[14] Devansh Dhote, Piyush Rai, Sunil Deshmukh, Adarsh Jaiswal and Yogesh Mali, "A Survey: Analysis and Estimation of Share Market Scenario," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, vol. 4, no. 8, pp. 77-80, September-October 2019.
[15] Rajat Asreddy, Avinash Shingade, Niraj Vyavhare, Arjun Rokde and Yogesh Mali, "A Survey on Secured Data Transmission Using RSA Algorithm and Steganography," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, vol. 4, no. 8, pp. 159-162, September-October 2019.
[16] Shivani Chougule, Shubham Bhosale, Vrushali Borle, Vaishnavi Chaugule and Yogesh Mali, "Emotion Recognition Based Personal Entertainment Robot Using ML & IP," International Journal of Scientific Research in Science and Technology (IJSRST), Print ISSN: 2395-6011, Online ISSN: 2395-602X, vol. 5, no. 8, pp. 73-75, November-December 2020.
[17] Amit Lokre, Sangram Thorat, Pranali Patil, Chetan Gadekar and Yogesh Mali, "Fake Image and Document Detection using Machine Learning," International Journal of Scientific Research in Science and Technology (IJSRST), Print ISSN: 2395-6011, Online ISSN: 2395-602X, vol. 5, no. 8, pp. 104-109, November-December 2020.
[18] Ritesh Hajare, Rohit Hodage, Om Wangwad, Yogesh Mali and Faraz Bagwan, "Data Security in Cloud," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, vol. 8, no. 3, pp. 240-245, May-June 2021.
[19] Yogesh Mali and Tejal Upadhyay, "Fraud Detection in Online Content Mining Relies on the Random Forest Algorithm," SWB, vol. 1, no. 3, pp. 13-20, Jul. 2023, doi: 10.61925/SWB.2023.1302.
[20] Dodda, S., Narne, S., Chintala, S., Kanungo, S., Adedoja, T., & Sharma, D. (2024). Exploring AI-driven Innovations in Image Communication Systems for Enhanced Medical Imaging Applications. Journal of Electrical Systems, vol. 20(3), pp. 949-959, Mar. 2024, doi: 10.52783/jes.1409.
[21] Madan Mohan Tito Ayyalasomayajula and Sathishkumar Chintala, "AI-Driven Decision Support Systems in Management: Enhancing Strategic Planning and Execution," IJRITCC, vol. 12, no. 1, pp. 268-276, Mar. 2024.
[22] Mali, Y. K., Rathod, V., Dargad, S., & Deshmukh, J. Y. (2024). Leveraging Web 3.0 to Develop Play-to-Earn Apps in Healthcare using Blockchain. In Computational Intelligence and Blockchain in Biomedical and Health Informatics (pp. 243-257). CRC Press.
[23] Deshmukh, J. Y., Rathod, V. U., Mali, Y. K., & Sable, R. (2024). and Classification. Data-Centric Artificial Intelligence for Multidisciplinary Applications, 114.
[24] Dodda, S., Narne, S., Chintala, S., Kanungo, S., Adedoja, T., & Sharma, D. (2024). Exploring AI-driven Innovations in Image Communication Systems for Enhanced Medical Imaging Applications. Journal of Electrical Systems, vol. 20(3), pp. 949-959, Mar. 2024, doi: 10.52783/jes.1409.
[25] Madan Mohan Tito Ayyalasomayajula and Sathishkumar Chintala, "AI-Driven Decision Support Systems in Management: Enhancing Strategic Planning and Execution," IJRITCC, vol. 12, no. 1, pp. 268-276, Mar. 2024.
[26] Purohit, S. Role of Industrialization and Urbanization in Regional Sustainable Development: Reflections from Tier-II Cities in India.
[27] Purohit, M. S. (2012). Resource Management in the Desert Ecosystem of Nagaur District: An Ecological Study of Land (Agriculture), Water and Human Resources (Doctoral dissertation, Maharaja Ganga Singh University).
[28] Zheng, X., Haseeb, M., Tahir, Z., Tariq, A., Purohit, S., Soufan, W., ... & Jilani, S. F. (2024). Coupling Remote Sensing Insights with Vegetation Dynamics to Analyze NO2 Concentrations: A Google Earth Engine-driven Investigation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
[29] Purohit, S. (2023). California Geographical Society, 96162, California, United States. Journal of Environmental Science and Public Health, 7, 176-184.
[30] Satapathy, S., & Purohit, S. (2024). Pond Degradation and Wildlife Preservation: A Geographical Analysis. vol. 6(2), 74-85.
[31] Bhardwaj, L. K., Rath, P., Jain, H., Purohit, S., Yadav, P., & Singh, V. (2024). The Impact of Climate-Induced Livelihood, Health, and Migration on Women and Girls: A Review. Global Insights on Women Empowerment and Leadership, 100-118.
[32] Mali, Y. K., Dargad, S., Dixit, A., Tiwari, N., Narkhede, S., & Chaudhari, A. (2023, October). The Utilization of Block-chain Innovation to Confirm KYC Records. In 2023 IEEE International Carnahan Conference on Security Technology (ICCST) (pp. 1-5). IEEE.
[33] M. Dangore, A. S. R, A. Ghanashyam Chendke, R. Shirbhate, Y. K. Mali and V. Kisan Borate, "Multi-class Investigation of Acute Lymphoblastic Leukemia using Optimized Deep Convolutional Neural Network on Blood Smear Images," 2024 MIT Art, Design and Technology School of Computing International Conference (MITADTSoCiCon), Pune, India, 2024, pp. 1-6, doi: 10.1109/MITADTSoCiCon60330.2024.10575245.
[34] N. P. Mankar, P. E. Sakunde, S. Zurange, A. Date, V. Borate and Y. K. Mali, "Comparative Evaluation of Machine Learning Models for Malicious URL Detection," 2024 MIT Art, Design and Technology School of Computing International Conference (MITADTSoCiCon), Pune, India, 2024, pp. 1-7, doi: 10.1109/MITADTSoCiCon60330.2024.10575452.
[35] M. D. Karajgar et al., "Comparison of Machine Learning Models for Identifying Malicious URLs," 2024 IEEE International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS), Bangalore, India, 2024, pp. 1-5, doi: 10.1109/ICITEICS61368.2024.10625423.
[36] J. Pawar, A. A. Bhosle, P. Gupta, H. Mehta Shiyal, V. K. Borate and Y. K. Mali, "Analyzing Acute Lymphoblastic Leukemia Across Multiple Classes Using an Enhanced Deep Convolutional Neural Network on Blood Smear," 2024 IEEE International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS), Bangalore, India, 2024, pp. 1-6, doi: 10.1109/ICITEICS61368.2024.10624915.
[37] D. R. Naik, V. D. Ghonge, S. M. Thube, A. Khadke, Y. K. Mali and V. K. Borate, "Software-Defined-Storage Performance Testing Using Mininet," 2024 IEEE International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS), Bangalore, India, 2024, pp. 1-5, doi: 10.1109/ICITEICS61368.2024.10625153.
[38] Mali, Y. K., Rathod, V. U., Borate, V. K., Chaudhari, A., & Waykole, T. (2023, June). Enhanced Pin Entry Mechanism for ATM Machine by Defending Shoulder Surfing Attacks. In International Conference on Recent Developments in Cyber Security (pp. 515-529). Singapore: Springer Nature Singapore.
[39] S. Modi, Y. K. Mali, V. Borate, A. Khadke, S. Mane and G. Patil, "Skin Impedance Technique to Detect Hand-Glove Rupture," 2023 OITS International Conference on Information Technology (OCIT), Raipur, India, 2023, pp. 309-313, doi: 10.1109/OCIT59427.2023.10430992.
[40] Mali, Y. K., Rathod, V., Dargad, S., & Deshmukh, J. Y. (2024). Leveraging Web 3.0 to Develop Play-to-Earn Apps in Healthcare using Blockchain. In Computational Intelligence and Blockchain in Biomedical and Health Informatics (pp. 243-257). CRC Press.
