
International Journal of Innovative Science and Research Technology, Volume 9, Issue 1, January 2024
ISSN No: 2456-2165

Advanced Assessment Evaluation: A Deep-Learning Framework with Sophisticated Text Extraction for Unparalleled Precision

Tanishq Jaiswal¹, Varsha Teeratipally², Ritendu Bhattacharyya³, Bharani Kumar Depuru⁴
¹,²Research Associate; ³Team Leader, Research and Development; ⁴Director
Innodatatics, Hyderabad, India

*Corresponding Author: Bharani Kumar Depuru
ORCID: 0009-0003-4338-8914

Abstract:- AI-based assessment scrutiny is the most convenient and precise method to eliminate the repetitive task of answer grading; it combines text extraction methodologies with a Deep Learning architecture that evaluates a response against the correct answer for the question provided. In the landscape of educational assessment, traditional methods of answer evaluation face challenges in adapting to the dynamic and evolving nature of learning. This paper proposes a complete end-to-end answer-grading architecture that can be deployed to provide an interface for a fully automated, Deep-Learning-based answer-grading mechanism.

This research introduces a groundbreaking approach to address these challenges, presenting a solution that seamlessly integrates advanced text extraction and deep learning architectures. Our objective is to achieve unparalleled precision in answer evaluation, setting a new standard in the field. Our method involves the extraction of audio files, precise text extraction from audio, and a Deep Neural Network (DNN)-based model for answer evaluation, backed by a database that stores the correct answers and from which relevant data is fetched. We propose a reliable, accurate, easy-to-deploy, best-in-class technology to eradicate manual repetitive tasks.

The system provides a very user-friendly interface to the student and a dynamic backend to monitor results, along with a high level of precision. These AI-based evaluation methods can be used in numerous places in the evolving education industry, providing students with a convenient interface and automation. The objective is to elevate the precision and adaptability of answer assessment methodologies in the dynamic landscape of modern education. As the educational landscape continues to evolve, our research not only addresses current challenges but also lays the groundwork for future advancements in the field of educational assessment, promising a new era of precision and adaptability.

This paper covers text extraction using architectures based on Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and transformers such as the encoder-decoder transformer (Whisper).

Keywords:- Audio Evaluation, Text Extraction, Deep Learning, Grading Answer, Whisper, PaLM 2, Flask.

I. INTRODUCTION

In today's scenario, a significant number of competitive exams adopt a multiple-choice format, posing a challenge for students to provide detailed answers. When dealing with a large student population, the manual evaluation of responses becomes practically unfeasible. With the surging demand for AI and software-related jobs, students aspire to excel in these subjects. Considering these factors, we have developed an application that allows students to verbally respond to given questions, with the system providing automated evaluations. This recording process enhances students' confidence in the subject matter and improves soft skills such as verbal communication. Students can take immediate action, as the score is displayed almost instantly. The system maximizes automation of the evaluation; this not only reduces costs by minimizing manual correction efforts but also saves time, as responses are recorded rather than written.


Fig. 1: CRISP-ML (Q) Methodological Framework, Outlining its Key Components and Steps Visually
Source: Mind Map - 360DigiTMG

The application employs the open-source Cross Industry Standard Practice for Machine Learning (CRISP-ML) methodology by 360DigiTMG. CRISP-ML(Q) [Fig.1] [1] is designed to guide the project lifecycle of a machine-learning solution. Deep-learning techniques are extensively utilized for text extraction from audio and the subsequent evaluation, incorporating diverse architectures such as Convolutional Neural Networks (CNN) [15] and Recurrent Neural Networks (RNN) [14], among others. The project initiation involved thorough research into the various techniques. We recorded and gathered diverse audio samples, questions, and answers. Data visualization was performed, and a model was developed, with comparisons made against other models. The process involved the use of a NoSQL database and subsequent deployment. Monitoring confirmed the system's high accuracy.

II. METHODS AND TECHNOLOGY

A. System Requirements (Computer Hardware and Software Used)

Table 1: System Requirements

Operating System | Ubuntu
RAM | 16 GB
Instance Type | g4dn.xlarge
GPU | 16 GB

The table above [Table 1] lists the full set of system requirements used to build and run this project.

B. Model Architecture

Fig. 2: Architecture Diagram, explaining the workflow of the AI evaluation project
Source: https://360digitmg.com/ml-workflow

The project architecture [Fig.2] explains how the entire project was conducted and how the model was developed. According to the business problem, relevant data was generated: recordings sampled from different distributions for text extraction, and a dataset assembled for checking evaluation performance on sample data. After model selection, the text-extraction model was fed audio input and generated text output through a pipeline [2]; this output was then sent to the evaluation models along with the respective correct answers, which were fetched from a database storing all questions and correct answers, after which the model displayed the score to the user. Once the model was finalized, the application was deployed with Flask onto an EC2 instance (EC2 is an AWS service that enables seamless, easily scalable deployment for end users).

For inference, a UI opens in which the answer is recorded on the student's system; the recorded audio chunks are converted into a proper audio file format. That audio is passed to the model for text extraction, after which the text is sent on to the evaluation pipeline to be graded. Once the result is obtained in the backend, it is rendered to the user's screen in the UI (a minimal code sketch of this flow appears after the data-collection subsection below).

C. Data Collection
The data utilized in this project was acquired through the repeated recording of diverse audio samples from different individuals. This process was conducted in a very deliberate manner, ensuring coverage of various accent distributions; a total of 30 audio recordings were collected to create a diverse dataset. This systematic approach not only ensures the representation of diverse accents but also provides a robust foundation for the subsequent analyses and evaluations within the scope of our research.

A second dataset was produced for evaluation. Here, we prepared a set of questions covering three topics (data structures, AI, and Python). It includes student responses in addition to the correct answer. We made sure to cover the entire spectrum of student responses, including those that were partially right, wrong, and entirely right. After reviewing this, we concluded that the marks were awarded correctly.
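As referenced above, the following is a minimal sketch of the record-transcribe-grade flow, assuming a Flask backend; the endpoint name, the in-memory answer key, and the transcribe()/grade() helpers are illustrative stand-ins for the project's actual components (the speech-to-text model, the evaluation LLM, and the NoSQL store).

```python
import os
import tempfile

from flask import Flask, request, jsonify

app = Flask(__name__)

# Illustrative stand-in for the NoSQL store of questions and correct answers.
ANSWER_KEY = {"q1": "A stack is a last-in, first-out (LIFO) data structure."}

def transcribe(path: str) -> str:
    # Placeholder: in the project, this step calls the speech-to-text model (Whisper).
    raise NotImplementedError

def grade(question_id: str, correct: str, student: str) -> int:
    # Placeholder: in the project, this step prompts the evaluation LLM (PaLM 2).
    raise NotImplementedError

@app.route("/evaluate", methods=["POST"])
def evaluate():
    question_id = request.form["question_id"]
    audio = request.files["audio"]  # recorded answer uploaded from the UI

    # Persist the uploaded audio chunks as a proper audio file before transcription.
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
        audio.save(tmp.name)
    try:
        transcript = transcribe(tmp.name)                # text-extraction step
        correct = ANSWER_KEY[question_id]                # fetch the reference answer
        score = grade(question_id, correct, transcript)  # LLM evaluation step
    finally:
        os.unlink(tmp.name)

    # The score (out of 10) is rendered back to the student's screen in the UI.
    return jsonify({"score": score, "transcript": transcript})
```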

D. Dataset Dimension

Table 2: Data Dimension

Audio file format | .wav
Text data format | .txt

The table above [Table 2] lists the data formats used throughout the project.

E. Model Building
The project has two parts: one focuses on retrieving text from audio [3], while the other evaluates the answer and assigns a score [4]. Our initial phase involved retrieving text from audio; during this stage we tested several models, including Speech2Text, Deep Speech, and Whisper, to explore their efficacy.

 Speech2Text
Speech2Text [6] can handle audio faster than real time, averaging 15 seconds to process a 30-second audio clip; a recognition request may take much longer if the audio quality is low. A float tensor of log-mel filter-bank features extracted from the speech signal is accepted by the Speech2Text model, which feeds the speech inputs into the encoder after reducing their length by three-quarters using a convolutional downsampler. Since it is a transformer-based seq2seq model [5], the transcripts/translations are produced autoregressively, and the model is trained using a standard autoregressive cross-entropy loss. LibriSpeech [7], CoVoST 2, MuST-C, and other datasets have been used to refine Speech2Text for ASR and ST.

 Whisper
Whisper [Fig.3] [8] exhibits a powerful ability to generalize across various datasets and domains. Trained with large-scale weak supervision on a dataset of 680,000 hours of audio, its performance can be further enhanced for particular languages and tasks by fine-tuning. It is a flexible tool designed to handle long recordings by dividing them into 30-second segments and processing each sequentially. Achieving an accuracy rate ranging from 95% to 98.5% without manual intervention, the model is built on the transformer architecture, featuring stacked encoder and decoder blocks with an attention mechanism facilitating information exchange between them. Developers have the flexibility to integrate it into their pipelines and customize it to suit their specific use cases, freeing them from dependency on OpenAI. Whisper excels at recognizing various accents, background noise, and technical jargon, supporting over 57 languages, such as Afrikaans, Czech, and Galician, in addition to English; furthermore, it can translate content from 99 languages to English. Despite its impressive capabilities, Whisper remains cost-effective compared to alternative solutions.

Compared to Deep Speech [Table 3] and Speech2Text [Table 4], it is more accurate and its error rate is much lower.
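As a concrete illustration, a minimal sketch of transcription with the open-source whisper package; the checkpoint size and file name are illustrative.

```python
import whisper  # pip install openai-whisper

# Load a pretrained checkpoint; "base" is illustrative, larger checkpoints
# trade speed for accuracy.
model = whisper.load_model("base")

# Whisper splits the recording into 30-second segments internally and
# transcribes them sequentially, as described above.
result = model.transcribe("student_answer.wav", language="en")
print(result["text"])
```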

Fig. 3: Whisper Architecture Diagram
Source: Architecture Diagram

 Deep Speech
Deep Speech [9] employs an end-to-end approach built on a five-layer network: the first, second, third, and fifth layers use clipped rectified-linear activation functions, the fourth layer is recurrent, and the final layer applies a softmax function. Remarkably, Deep Speech demonstrates robust performance in challenging conditions such as background noise and speaker variation; its training system utilizes a recurrent neural network (RNN) trained across multiple GPUs. Notably, the model exhibits a 16.0% error rate on the comprehensive test set.
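For reference, the clipped rectified-linear activation used in Deep Speech's non-recurrent layers is min(max(x, 0), 20) in the original paper [9]; a one-function sketch:

```python
import torch

def clipped_relu(x: torch.Tensor, clip: float = 20.0) -> torch.Tensor:
    # Deep Speech's clipped ReLU: min(max(x, 0), clip). Clipping bounds the
    # activations, which helps keep RNN training numerically stable.
    return torch.clamp(x, min=0.0, max=clip)
```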

Table 3: Comparison of Whisper and Deep Speech

Feature | Whisper | Deep Speech
Architecture | Transformer-based encoder-decoder model | Recurrent neural network (RNN) with connectionist temporal classification (CTC)
Dataset size (hours) | 680,000 | 500,000
WER | 4.3 | 3.55
Organization | OpenAI | Mozilla

Table 4: Comparison of Whisper and Speech2Text

Feature | Whisper | Speech2Text
Handling background noise | Excels at handling background noise, including ambient room noise, outside noise, or music playing | May face challenges with background noise, potentially impacting transcription accuracy
Music performance | Performs well even when the speaker is performing music (singing, rapping, spoken-word poetry) | May struggle with accurate transcription during musical performances
Error reduction | Reports 20% fewer missing-word additions and 45% fewer corrections per transcription | May have higher rates of additions and corrections in transcriptions
Accented English and rapid speech | Demonstrates high accuracy with English speakers having accents and with rapid speech | May experience challenges with accented English and rapid speech, potentially leading to lower accuracy
Auto-translation | Additional feature: auto-translation to English text | Has no comparable feature

Now comes the second part: evaluating the answer with respect to the question and assigning a score. To do this we employed a range of LLMs comprising Llama 2, Mistral-7B, Zephyr-7B, and PaLM 2.

 Llama 2
Llama 2 [10] has undergone a fine-tuning process tailored for chat-related applications, with its fine-tuned models trained with the aid of over 1 million human annotations, enhancing their adaptability to various chat scenarios. Notably, Llama 2 retains the flexibility to undergo further fine-tuning on newer data. When users provide a text prompt to Llama 2, the model endeavors to predict the most plausible subsequent text. This predictive capability is achieved through a neural network housing billions of variables, commonly referred to as parameters. This intricate neural network architecture is designed to emulate aspects of the human brain, enabling Llama 2 to generate contextually relevant and coherent text outputs in response to user inputs.

 Zephyr-7B
Zephyr-7B [11] is built on the Hugging Face Transformers stack (transformers 4.35.0.dev0, PyTorch 2.0.1+cu118, datasets 2.12.0, tokenizers 0.14.0) and has a parameter count of 7 billion. Trained extensively on diverse languages, it excels at tasks such as translation, summarization, analysis, and question answering. The training process involved a blend of public and synthetic datasets and employed direct preference optimization (DPO). Fine-tuning further enhances its capabilities, ensuring accurate information retrieval tailored to specific queries. The model's training corpus encompasses a wide range of sources, including websites, articles, books, and more.
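As an illustration of how such a model can be wired into the grading step, a minimal sketch using the Hugging Face pipeline API; the HuggingFaceH4/zephyr-7b-beta checkpoint and the prompt wording are assumptions, not the project's exact configuration.

```python
import torch
from transformers import pipeline

# Load Zephyr-7B through the text-generation pipeline (checkpoint name assumed).
generator = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a strict grader. Score the student answer out of 10."},
    {"role": "user", "content": "Question: What is a stack?\n"
                                "Correct answer: A last-in, first-out data structure.\n"
                                "Student answer: It stores items and removes the newest first."},
]

# Render the chat messages into the model's expected prompt format, then generate.
prompt = generator.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = generator(prompt, max_new_tokens=32, do_sample=False)
print(output[0]["generated_text"])
```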

IJISRT24JAN1680 www.ijisrt.com 1873


Volume 9, Issue 1, January 2024 International Journal of Innovative Science and Research Technology
ISSN No:-2456-2165
 Mistral 7B
Mistral 7B [12] distinguishes itself as one of the earliest large language models to utilize sliding-window attention, which handles longer patterns at a lower cost, together with grouped-query attention for fast inference. High throughput and low latency are made possible by its distinctive architecture, but it becomes difficult to stay accurate when generating long text. In spite of this, Mistral 7B performs better than Llama 2 in several areas.

 PaLM 2
PaLM 2 offers various strengths, including multilingualism, logic, coding, effectiveness, and economy of cost. PaLM 2 [Fig.4] [13] is a language processing model that gathers diverse data, cleans it, and uses the Transformer architecture for efficient training. It undergoes unsupervised pre-training followed by fine-tuning on smaller datasets for real-world tasks. Its Pathways system employs decoupling and adaptive computation for holistic understanding and accurate outputs.

Compared to Llama 2 [Table 5] and other models, PaLM 2 is considerably more accurate and hallucinates far less; the other models hallucinate on complicated prompts.
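For completeness, a minimal sketch of calling PaLM 2 through the google-generativeai package that was current when this paper was written; the model name, prompt wording, and API key handling are illustrative.

```python
import google.generativeai as palm  # pip install google-generativeai

palm.configure(api_key="YOUR_API_KEY")

# text-bison-001 was the PaLM 2 text model exposed by the public API at the time.
prompt = (
    "Question: What is a stack?\n"
    "Correct answer: A last-in, first-out (LIFO) data structure.\n"
    "Student answer: It stores items and removes the newest item first.\n"
    "Grade the student answer out of 10 and return only the number."
)
response = palm.generate_text(model="models/text-bison-001", prompt=prompt)
print(response.result)  # e.g. "8"
```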

Fig. 4: PaLM-2 Architecture Diagram
Source: Architecture Diagram

Table 5: Comparison between PaLM 2 and Llama 2

Feature | PaLM 2 | Llama 2
Model size | 540 billion parameters | 70 billion parameters
Training data | 560 billion words | 560 billion words
Architecture | Transformer-based | Transformer-based
Training method | Self-supervised learning | Self-supervised learning

F. Model Evaluation
After recording numerous audio samples and evaluating them, we found that Whisper demonstrates superior accuracy compared to models like Speech2Text and Deep Speech. Whisper exhibited precision with a low word error rate, leading us to choose it over the other models; notably, it effectively suppressed background noise and provided accurate transcriptions. For evaluation, we employed several LLMs, including PaLM 2, Llama 2, Mistral-7B, and Zephyr-7B, applying prompt engineering to them in several ways. PaLM 2 consistently yielded scores closely resembling human evaluation standards.

III. RESULTS AND DISCUSSION

Text extraction from audio using the Whisper model, coupled with accurate evaluation by PaLM 2, yielded successful results. Integration through Flask was accomplished, leading to a successful deployment on an Amazon Web Services (AWS) EC2 instance, which is scalable and cost-efficient. The user-friendly design of this project has proven beneficial for students, fostering confidence and optimizing time usage, and it made the evaluation process easily accessible.
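For reference, the word error rate used in the speech-to-text comparisons above can be computed with the jiwer package; the package choice and example strings are illustrative.

```python
from jiwer import wer  # pip install jiwer

reference = "a stack is a last in first out data structure"    # ground-truth transcript
hypothesis = "a stack is the last in first out data structure"  # model output

# WER = (substitutions + deletions + insertions) / reference word count.
print(f"WER: {wer(reference, hypothesis):.3f}")  # one substitution in ten words -> 0.100
```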


Fig. 5: Welcome page of the application

Fig. 6: Exam evaluation page of the application

First, we land on the welcome page [Fig.5] of the application, which provides three different boxes, each containing a description of a subject for the exam; apart from the description, there is a link that redirects to a separate evaluation page [Fig.6]. On this page there is a block in which the question is shown, and below it a "start recording" button which, when pressed, starts recording the answer that the student speaks in English. On pressing the "stop recording" button, the recording is completed, the audio is evaluated in the backend, and the result, the answer's score out of 10, is returned.

IV. CONCLUSION

Our research represents a major step toward revolutionizing answer evaluation methodologies in the quickly changing field of educational technology. Our work tackles the inherent challenges of traditional assessment approaches by integrating sophisticated Deep Learning architectures with advanced text extraction techniques. Our project aims to improve the accuracy and flexibility of answer evaluation and to make answer grading more convenient and scalable; it is also expected to be a significant development in the field of education. We present a paradigm shift in automating the evaluation process by navigating the complexities of student responses with the seamless integration of AI technologies, particularly deep learning models.

REFERENCES

[1]. Stefan Studer, Thanh Binh Bui, Christian Drescher, Alexander Hanuschkin, Ludwig Winkler, Steven Peters, and Klaus-Robert Müller, "Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology," 2021, Volume 3, Issue 2. DOI: https://doi.org/10.3390/make3020020
[2]. Rafael Dantas Lero, Chris Exton, and Andrew Le Gear, "Communications using a speech-to-text-to-speech pipeline," International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2019. DOI: https://doi.org/10.1109/WiMOB.2019.8923157
[3]. Pooja Panapana, Eswara Rao Pothala, Sai Sri Lakshman Nagireddy, Hemendra Praneeth Mattaparthi, and Niranjani Meesala, "Towards Automatic Bidirectional Conversion of Audio and Text: A Review from Past Research," 2023, Volume 716. https://link.springer.com/chapter/10.1007/978-3-031-35501-1_30
[4]. Lishan Zhang, Yuwei Huang, Xi Yang, Shengquan Yu, and Fuzhen Zhuang, "An automatic short-answer grading model for semi-open-ended questions," 2019. DOI: https://doi.org/10.1080/10494820.2019.1648300
[5]. Shuyu Li and Yunsick Sung, "Transformer-Based Seq2Seq Model for Chord Progression Generation," 2023. DOI: https://doi.org/10.3390/math11051111
[6]. Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, and Juan Pino, "fairseq S2T: Fast Speech-to-Text Modeling with fairseq," 2020. DOI: https://doi.org/10.48550/arXiv.2010.05171
[7]. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015. DOI: https://doi.org/10.1109/ICASSP.2015.7178964
[8]. Xuedong Huang, A. Acero, F. Alleva, Mei-Yuh Hwang, Li Jiang, and M. Mahajan, "Whisper: Microsoft Windows highly intelligent speech recognizer," ICASSP, 1995. DOI: https://doi.org/10.1109/ICASSP.1995.479281
[9]. Awni Hannun, Carl Case, Jared Casper, and Bryan Catanzaro, "Deep Speech: Scaling up end-to-end speech recognition," 2014. DOI: https://doi.org/10.48550/arXiv.1412.5567
[10]. Hugo Touvron, Louis Martin, Kevin Stone, et al., "Llama 2: Open Foundation and Fine-Tuned Chat Models," 2023. DOI: https://doi.org/10.48550/arXiv.2307.09288
[11]. Lewis Tunstall, Edward Beeching, Nathan Lambert, et al., "Zephyr: Direct Distillation of LM Alignment," 2023. DOI: https://doi.org/10.48550/arXiv.2310.16944
[12]. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, et al., "Mistral 7B," 2023. DOI: https://doi.org/10.48550/arXiv.2310.06825
[13]. Rohan Anil, Andrew M. Dai, Orhan Firat, et al., "PaLM 2 Technical Report," 2023. DOI: https://doi.org/10.48550/arXiv.2305.10403
[14]. Alex Sherstinsky, "Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network," 2020. DOI: https://doi.org/10.1016/j.physd.2019.132306
[15]. Keiron O'Shea and Ryan Nash, "An Introduction to Convolutional Neural Networks," 2015. DOI: https://doi.org/10.48550/arXiv.1511.08458
