
Anjuman-I-Islam’s

M.H. Saboo Siddik Polytechnic
8, M.H. Saboo Siddik Polytechnic Road, Mumbai 400008

FINAL YEAR DIPLOMA IN COMPUTER ENGINEERING

(2024-2025)

PROJECT REPORT ON

PROJECT TITLE

Smart Moderator

BY

220445 - Sayyed Maria Imran

220455 - Shaikh Samiya

220460 - Syed Afifa Fareeduddin

UNDER THE GUIDANCE OF

MS. ZAIBUNNISA MALIK

Maharashtra State Board of Technical Education (MS-BTE)

Mumbai (Autonomous) (ISO 9001:2008) (ISO/IEC 27001:2005)

INDEX

Sr. No.  Title                                   Marks   Obtained Marks   Faculty Sign with Date
1.       Problem Identification                   02
2.       Industrial Survey & Literature Review    02
3.       Project Proposal                         03
4.       Execution of Plan                        02
5.       Final Project Report                     06
6.       Project Log Book                         02
7.       Project Portfolio                        04
8.       Presentation & Defence                   04
         Total                                    25
Problem Identification

Smart Moderator
Existing System
Currently, educational institutions often use basic OMR systems to grade multiple-choice exams.
These systems rely on dedicated OMR scanners and standard OMR sheets with bubbles filled in
by students. Some popular OMR systems include Scantron and Remark Test Grading. They
work well for automating grading but have limitations:

• Rigid Format Requirements: Most OMR systems require specific bubble sheet formats, which restricts their use to standardized tests and limits flexibility in exam design.

• Limited Analytics: Existing OMR systems primarily deliver raw scores without detailed performance insights. This restricts educators’ ability to analyze trends or identify areas where students might need more support.

• Resource-Intensive Setup: OMR scanners are specialized equipment that can be expensive and require maintenance, making it challenging for some institutions to use them consistently.

Problem Statement

The current process of manually correcting OMR (Optical Mark Recognition) sheets in large-scale exams like NEET is time-consuming, error-prone, and inefficient. There is a need to develop a system that addresses these challenges by automating the evaluation process. Such a solution would ensure faster, more accurate, and scalable grading for multiple-choice exams.

Proposed System - Smart Moderator:

The Smart Moderator project builds upon traditional OMR systems by adding flexibility, depth,
and accessibility:

1. Flexible Compatibility: Smart Moderator works with any document scanner or smartphone, removing the need for specialized OMR hardware. This makes it more accessible, especially for smaller institutions or remote learning settings.
2. Enhanced Analytics and Insights: Unlike basic systems that focus on scores alone, Smart
Moderator provides comprehensive analytics like error patterns, common mistakes, and
subject-wise strengths and weaknesses, helping educators support students more
effectively.

3. Scalable and Integrative: It is designed to handle large exam batches and integrates with
educational platforms to make the grading process smooth, scalable, and suitable for a
variety of exam types.

4. Future-Oriented Capabilities: The project has room for future upgrades like feedback
features for students and additional question formats, ensuring it remains valuable as
educational needs evolve.

In essence, Smart Moderator brings flexibility, accessibility, and deeper insights to the grading
process, filling the gaps of traditional OMR systems and offering a modern solution to meet
today’s educational demands.

Purpose

The primary purpose of this project is to address the inefficiencies and inaccuracies associated
with the manual evaluation of OMR (Optical Mark Recognition) sheets in large-scale exams
such as NEET. Manual grading is not only time-consuming but also prone to human errors,
leading to delays and possible inaccuracies in results. By developing an automated system, this
project aims to significantly enhance the speed, accuracy, and reliability of OMR sheet
evaluations. The solution will utilize image processing techniques to automatically detect and
evaluate marked answers, ensuring timely, precise, and scalable results for institutions managing
high-stakes examinations. Additionally, the system will reduce operational costs by minimizing
human involvement in the grading process, improving both efficiency and resource allocation.

Scope

The Smart Moderator project has a wide reach in the area of grading exams. It aims to improve
how multiple-choice exams are graded in various educational settings, including schools and universities. The project is flexible enough to work with different subjects and types of tests, making it useful for many standardized exams.

Additionally, Smart Moderator plans to connect with existing school systems and provide
features like detailed result analysis, performance tracking, and easy-to-use interfaces for both
teachers and students. There are also plans for future upgrades, such as adding support for
different question types and feedback options to help improve learning. Overall, the project aims
to make grading faster and more accurate, benefiting education in today’s busy environment.

Features

o Automated OMR Sheet Scanning: The system will scan OMR sheets using regular
2D scanners, eliminating the need for specialized hardware.
o MCQ Evaluation: The system will accurately evaluate MCQ answers marked on
OMR sheets and calculate the results.
o Error Detection and Correction: It will detect incorrect or incomplete markings and
notify users for manual review.
o Data Storage: A secure database will store scanned data and results for future
reference.
o Real-time Result Processing: Results will be calculated and displayed immediately
after the evaluation.
o User-Friendly Interface: An intuitive interface for examiners to upload, scan, and
review OMR sheets.
o Customizable Exam Setup: Examiners can define answer keys and manage different
exam patterns.
o Reports and Analytics: Detailed reports will be generated, including individual
scores, class performance, and item analysis.
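The MCQ evaluation and error-detection features above could work roughly as follows. This is a minimal sketch that assumes the scanned sheet has already been segmented into one small binary patch per bubble (1 = dark pixel, 0 = light pixel); the fill threshold and all names are illustrative assumptions, not the system's actual parameters.

```python
FILL_THRESHOLD = 0.5  # illustrative: fraction of dark pixels to count as "marked"

def fill_ratio(patch):
    """Fraction of dark pixels in one bubble patch."""
    pixels = [p for row in patch for p in row]
    return sum(pixels) / len(pixels)

def grade_question(patches, correct_option):
    """patches: option letter -> binary patch for that bubble.
    Returns 'correct', 'wrong', or 'review' when the marking is
    incomplete or doubled (the manual-review case in the feature list)."""
    marked = [opt for opt, patch in patches.items()
              if fill_ratio(patch) >= FILL_THRESHOLD]
    if len(marked) != 1:
        return "review"   # no bubble or multiple bubbles -> flag for examiner
    return "correct" if marked[0] == correct_option else "wrong"

filled = [[1, 1], [1, 0]]   # 75% dark -> treated as marked
empty = [[0, 0], [0, 1]]    # 25% dark -> treated as unmarked
q1 = {"A": empty, "B": filled, "C": empty, "D": empty}
print(grade_question(q1, "B"))
```

The "review" path is what lets the system notify users about incorrect or incomplete markings instead of silently mis-grading them.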

Advantages

o Efficiency: Significantly reduces the time taken to evaluate large numbers of OMR
sheets.
o Accuracy: Minimizes human errors typically encountered during manual grading.
o Scalability: Can handle large-scale exams with thousands of candidates.
o Cost-Effective: Reduces the reliance on manual labor and specialized OMR hardware.
o Real-time Feedback: Results can be processed and made available immediately.
o Transparency: Automated systems provide transparent and unbiased grading.

Disadvantages

o Technical Issues: The system may experience glitches, especially with low-quality
scanned images.
o Dependence on Technology: Any technical failures, such as hardware or software
crashes, can disrupt the evaluation process.
o Initial Setup Cost: Although long-term costs are reduced, the initial development and
setup of the system can be expensive.
o Limited to MCQ Exams: The system is designed specifically for MCQs and may not be
suitable for other exam formats.
o Training Required: Examiners need proper training to use the system effectively, which
could be time-consuming.

Industrial Survey & Literature Review

Smart Moderator
Abstract
This project centers on the development and implementation of a software program designed to enhance efficiency in a specific application domain: educational assessment. The primary goal is to create a system that streamlines task-specific processes, such as grading assessments, with a focus on accuracy, speed, and user-friendliness. Utilizing a blend of technologies, including machine learning, Optical Mark
Recognition (OMR), and data processing algorithms, the program addresses core challenges by
automating repetitive tasks, minimizing human error, and facilitating data-driven decision-
making. The outcomes demonstrate a marked improvement in efficiency, speed, and reliability
compared to conventional methods, confirming the program's viability as a scalable solution.
This project emphasizes the potential of specialized software to transform traditional workflows,
providing a foundation for future innovations in automated solutions.

Literature Review

1. "Automatic Exam Correction Framework (AECF) for the MCQs, Essays, and Equations Matching" by Hossam Magdy Balaha and Mahmoud M. Saafan (IEEE, 2021).
Findings: The paper presents an Automatic Exam Correction Framework (HMB-AECF) that uses machine learning and natural language processing to evaluate different exam types, including MCQs, essays, and equations, with high accuracy. It features an equation similarity algorithm (HMB-MMS-EMA) achieving 100% accuracy and a text similarity measure with a best accuracy of 77.95% using the Universal Sentence Encoder (USE). The framework's math checker surpasses the SymPy Python package.
Gap: While the framework provides high accuracy, especially for mathematical equations, it lacks integration with more advanced deep learning models like BERT or GPT-3 for further improving text-based question evaluations. There is also limited discussion of scalability for large datasets and real-time correction needs in massive open online courses (MOOCs).
Future direction: The paper suggests extending the framework to handle more complex question types, such as those involving diagrams or multi-step reasoning. Additionally, the authors propose enhancing the equation similarity algorithm to handle trigonometric and logarithmic functions and incorporating state-of-the-art deep learning models (e.g., the GPT series) for math and text evaluations.

2. "Efficient and Reliable Camera-Based Multiple-Choice Test Grading System" by Tien Dzung Nguyen, Quyet Hoang Manh, Phuong Bui Minh, Long Nguyen Thanh, and Thang Manh Hoang (IEEE, 2011).
Findings: The paper proposes a camera-based grading system for multiple-choice tests that offers a reliable and cost-effective alternative to traditional optical mark recognition (OMR) systems. The system captures images of answer sheets using a camera and processes them for grading through image enhancement, skew correction, and normalization techniques. It demonstrated a high recognition accuracy of 99.7%, even when using non-transoptic (regular) paper, which reduces overall costs compared to OMR systems.
Gap: While the system is accurate and cost-efficient, it currently requires manual paper feeding, which limits its scalability for real-time, large-scale exam grading. Additionally, there is no support for handling more complex answer sheets or additional question formats beyond multiple-choice.
Future direction: The paper suggests further development of an automatic paper feeder to enhance the system's real-time capabilities. This improvement would allow the system to process answer sheets continuously without manual intervention, making it more suitable for commercial use. Expanding the system to handle various exam formats and more complex layouts could also be explored in future work.
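The image enhancement and normalization steps this paper describes typically begin with binarizing the scanned image. The toy sketch below illustrates only the basic idea on a tiny synthetic grayscale grid, using a global mean threshold; a real pipeline would use an adaptive method (e.g., Otsu's) on actual scans.

```python
def binarize(gray):
    """Binarize a grayscale image (values 0-255) using its mean intensity
    as the threshold: dark marks become 1, the bright background 0.
    A simplified stand-in for the enhancement stage, not the paper's exact method."""
    flat = [p for row in gray for p in row]
    threshold = sum(flat) / len(flat)
    return [[1 if p < threshold else 0 for p in row] for row in gray]

# Toy 3x3 "scan": one dark pencil mark (low value) on a bright background.
scan = [[250, 245, 248],
        [247,  20, 251],
        [249, 252, 246]]
print(binarize(scan))   # only the dark center pixel survives as 1
```

After binarization, skew correction and normalization operate on this clean black-and-white representation rather than on raw camera noise.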

3. "Automatic Multiple Choice Question Evaluation Using Tesseract OCR and YOLOv8" by Saikat Mahmud, Kawshik Biswas, Api Alam, Rifat Al Mamun Rudro, Nusrat Jahan Anannya, Israt Jahan Mouri, and Kamruddin Nur (IEEE, 2024).
Findings: This research presents an automated system for evaluating multiple-choice questions (MCQs) using Tesseract OCR and the YOLOv8 object detection model. The system can process student-answered MCQ sheets by detecting and analyzing markings such as filled circles or crosses, regardless of the template used. It achieves high accuracy in identifying both valid and incorrect responses, with an F1 score of 0.98 and a mean Average Precision (mAP) of 0.99. This solution effectively reduces manual grading effort in educational settings and offers flexibility by not requiring fixed template sheets.
Gap: The system struggles with low-quality input images, such as those that are tilted, hazy, or contain faint text. Additionally, it may incorrectly classify some objects due to noise or other artifacts in the image. The paper also notes that further improvements are needed to handle more complex and noisy input data.
Future direction: The authors suggest enhancing the system's ability to process low-quality images and further refining the object detection model. Additional features such as handling more diverse marking styles and improving the robustness of the OCR engine for faint text are also potential areas for improvement. The researchers also envision expanding the system for mobile devices, making it more accessible in real-world settings.

4. "Generation and grading of arduous MCQs using NLP and OMR detection using OpenCV" by Sarjak Maniar, Prof. Kumkum Saxena, Jai Parmani, and Mihika Bodke (IEEE, 2021).
Findings: The paper proposes a system called "évaluer," which automates the generation and grading of difficult multiple-choice questions (MCQs). The MCQs are paraphrased to make them harder to look up on the internet, helping to prevent malpractices. The system uses Natural Language Processing (NLP) for paraphrasing and OpenCV for Optical Mark Recognition (OMR) grading. This paper presents an innovative approach to solving common issues in online education by automating question generation and grading, which could be beneficial in various educational settings.
Gap: The system may generate multiple MCQs from the same sentence if the input text is shorter than the desired number of questions. When multiple bubbles are marked for a question, the system selects the one with the highest contour density, which might lead to incorrect grading if the question should be discarded instead.
Future direction: Improve the MCQ generation process to prevent multiple questions from being generated from the same sentence. Enhance the system's handling of ambiguous or erroneous OMR inputs, such as multiple bubbled answers. Explore integrating machine learning models to generate more sophisticated distractors for MCQs.

5. "Automatic Multiple Choice Test Grader using Computer Vision" by Henry E. Ascencio, Carlos F. Peña, Kevin R. Vásquez, Manuel Cardona, and Sebastián Gutiérrez (IEEE, 2021).
Findings: The paper presents an automatic grading system for multiple-choice exams using computer vision, providing a reliable, fast method for grading exams through image processing techniques.
Gap: The system is restricted to a single exam format and requires darkened marks for correct detection. Improvements could focus on flexibility and real-time grading capabilities.
Future direction: Enhancements to improve the interface, reduce processing time, and adapt the system for mobile platforms, enabling real-time grading without the need to capture images manually.

6. "Multiple Choice Assessments: Evaluation of Quality" by Alexander Sayapin (IEEE, 2013).
Findings: The paper presents a method for evaluating the difficulty and differentiation ability of multiple-choice assessments (MCAs). The author introduces a statistical approach to determine how challenging a test is for students and how well it differentiates between various levels of student knowledge. The proposed method calculates a measure of correctness for student responses, which accounts for both correct answers and distractors. This allows for a more nuanced evaluation of student performance. Additionally, the paper suggests a way to set a passing threshold by simulating random answers and using statistical significance to determine the minimum score needed to pass.
Gap: The paper highlights the need for objective measures of test difficulty and differentiation ability, as traditional evaluations often rely on subjective expert judgments. However, it does not address other important aspects of MCAs, such as validity and reliability, which the author acknowledges are complex and require expert input.
Future direction: The author plans to enhance the method to improve the quality of multiple-choice assessments in terms of their difficulty and differentiation ability. Additionally, the system will be developed further to allow for broader applications across different subjects and platforms.

7. "Multiple Choice Questions with Justifications" by Anusha Hegde, Nayanika Ghosh, and Viraj Kumar (IEEE, 2014).
Findings: The paper proposes a new variant of multiple-choice questions (MCQs) where students must justify their chosen answer by selecting one or more supporting statements from a list provided by the instructor. This approach aims to address certain weaknesses of traditional MCQs, such as guessing and low-level learning, while maintaining the ability to automate grading. The method helps distinguish students who truly understand the material from those who guess or rely on test-taking strategies. A plugin for the widely used Moodle e-learning platform has been developed to support this new question format.
Gap: The paper highlights the challenge of designing good justifications, which can be time-consuming and require careful consideration to avoid making the questions easier through process of elimination. Additionally, while the system helps reduce guessing and improve the assessment of higher-order thinking, it still does not fully address all limitations of MCQs, such as testing higher-level skills across different cognitive domains.
Future direction: The authors suggest expanding the concept of justifications to other types of questions, such as reading comprehension, where students could highlight portions of text to justify their answers. They also propose exploring more complex justifications that require students to apply a sequence of reasoning steps to arrive at the correct answer. Finally, they encourage the educational research community to build on this work by experimenting with other auto-gradable mechanisms and adapting the system for different subject areas.

8. "Evaluation of Online Assessment: The Role of Feedback in Learner-Centered e-Learning" by Noorminshah Iahad, Emmanouil Kalaitzakis, Georgios A. Dafoulas, and Linda A. Macaulay (IEEE, 2004).
Findings: The paper evaluates the effectiveness of online assessments in e-learning environments, focusing on the role of feedback. The study examines an online test used in an e-Commerce course at the University of Manchester Institute of Science and Technology (UMIST). It emphasizes the importance of "rich" feedback in learner-centered paradigms, where students benefit from immediate grading and explanations of their mistakes. The findings indicate that well-designed online assessments can provide effective learning mechanisms and that feedback significantly enhances the learning process. The paper also finds that usability and functionality are crucial in ensuring students engage with feedback.
Gap: The paper notes that while feedback mechanisms were appreciated, the depth and clarity of the explanations provided could be improved. It also identifies a gap in the students' willingness to engage with detailed feedback after completing the test, with many focusing primarily on their overall marks. The size of the test and the time required to complete it may have contributed to lower engagement with feedback.
Future direction: Future work should focus on improving the user interface and breaking the tests into smaller sections to maintain student engagement. The paper also suggests further research into how feedback in e-learning environments can be enriched and how the assessment process can be tailored to different learning styles. Additionally, the authors propose investigating the impact of computer-mediated communication on distance learning and exploring alternative feedback mechanisms that could enhance the learning experience.

9. "Multiple-column Format for Reducing Task Complexity of Recognizing Handwritten Answers in Multiple-choice Question (MCQ) Test" by Aditya R. Mitra, Dion Krisnadi, Steven Albert, and Arnold Aribowo (IEEE, 2018).
Findings: The paper introduces a three-column format for MCQ answer sheets, designed to improve both accuracy and efficiency in recognizing handwritten answers during automated grading. The three-column design allows students to correct their answers within a designated area, reducing the chance of errors when recognizing handwriting. The study demonstrates that this format leads to better accuracy in answer recognition, particularly in cases where students change their answers. Additionally, the use of a Radial Basis Function Neural Network (RBFNN) classifier for handwritten character recognition provided satisfactory results, though some specific letters (e.g., 'C') still posed recognition challenges.
Gap: The research indicates a need for improvement in recognizing certain handwritten characters, particularly the letter 'C', which consistently showed lower accuracy rates. Additionally, while the three-column format improves recognition accuracy, it increases processing time, which could be a challenge when handling large datasets.
Future direction: The authors suggest further refinement of the handwriting recognition algorithm to improve accuracy for problematic characters. They also recommend exploring ways to reduce processing time, especially for large-scale assessments. Another potential direction is the integration of more advanced neural network models to enhance recognition performance and scalability.

10. "Mobile-Based MCQ Answer Sheet Analysis and Evaluation Application" by G.M. Rasiqul Islam Rasiq, Abdullah Al Sefat, and M.M. Fahim Hasnain (IEEE, 2019).
Findings: The paper presents a mobile-based application designed to analyze and evaluate multiple-choice question (MCQ) answer sheets without the need for specialized Optical Mark Reader (OMR) machines or paper. Using an Android smartphone, the app scans answer sheets and processes the images to determine the selected answers by counting black pixels in the answer circles. The application offers an affordable solution for smaller-scale exams, achieving an average accuracy of 99.44% when a threshold of 45% black pixel content is used to detect answers.
Gap: The application is designed for small-scale exams and may not be able to replace expensive OMR systems in large-scale standardized testing. Additionally, noisy images or poor image quality due to lighting and other factors can negatively impact the system's performance, requiring future improvements in noise reduction and image enhancement techniques.
Future direction: Future improvements should focus on enhancing noise reduction methods to handle low-quality images more effectively. Additionally, the system could be expanded to include more complex question types or integrated with other educational tools to further improve its utility in a broader range of assessment scenarios.

11. "Various Techniques For Assessment Of OMR Sheets Through Ordinary 2D Scanner: A Survey" by Nirali V Patel and Ghanshyam I Prajapati (IJERT, 2015).
Findings: The research presents various techniques used for assessing Optical Mark Recognition (OMR) sheets using standard 2D scanners. The authors highlight the efficiency and accuracy of different algorithms in recognizing marked responses and emphasize the potential of using conventional scanners to reduce costs compared to specialized OMR hardware.
Gap: Lack of comprehensive evaluation: the paper does not provide a thorough comparative analysis of all techniques across multiple datasets, which could help in understanding their practical applicability better. Limited scope of techniques: some advanced techniques, such as deep learning approaches for OMR assessment, are not discussed; integrating these could improve accuracy and robustness. Real-world application: there is a need for more studies focusing on the practical implementation of these techniques in various educational settings, considering different scanner types and environmental conditions.
Future direction: Investigating the integration of deep learning techniques for enhanced OMR sheet recognition. Developing hybrid models that combine the strengths of various methodologies. Conducting real-world case studies to validate the performance of proposed techniques in diverse educational environments. Exploring the use of mobile devices for OMR scanning and processing to increase accessibility.

12. "OMR Automated Grading" by Janardhan Singh K. (Assistant Professor, Department of ISE, RNSIT, Bengaluru), Sanjay Kulkarni, Sanket B Patil, Shashank M, and Shashanka (IJERT, 2024).
Findings: The paper presents a method for automating the grading of multiple-choice questions using Optical Mark Recognition (OMR) technology. The system improves the accuracy and efficiency of grading compared to traditional manual methods.
Gap: While the study successfully demonstrates the automated grading system's capabilities, potential gaps include limitations in recognizing marks that are not perfectly filled, and challenges in handling varied answer sheet designs or formats.
Future direction: The authors suggest further research in enhancing the algorithm to better handle poorly marked answers, expanding the system to accommodate subjective questions and handwritten responses, and implementing machine learning techniques for improved accuracy and adaptability across different datasets.

13. "OMR Auto Grading System" by Nithin T., Md Nasim, T. Raj Shekhar, Omendra Singh Gautam, and Yuraj Gholap (IJERT, 2015).
Findings: The paper presents an automated Optical Mark Recognition (OMR) grading system that enhances the efficiency and accuracy of evaluating multiple-choice questions (MCQs). The system utilizes image processing techniques to read and grade OMR sheets, thus reducing manual intervention and potential errors.
Gap: Despite its effectiveness, the study acknowledges some limitations, such as sensitivity to the quality of scanned images and variations in OMR sheet designs. The authors observed that the system could be further enhanced to handle diverse answer sheet formats and integrate more advanced machine learning algorithms for improved accuracy.
Future direction: The authors suggest exploring the implementation of deep learning techniques to automate the detection of marks more robustly. Future research could focus on developing a mobile application for OMR grading, enabling real-time grading and feedback for students.

14. "Machine Learning based Automatic Answer Checker Imitating Human Way of Answer Checking" by Vishwas Tanwar (IJERT, 2021).
Findings: The paper presents an innovative machine learning model designed to automate the answer-checking process in a way that mimics human evaluators. The proposed system enhances the efficiency and accuracy of grading, particularly for open-ended and subjective answers, providing more consistent results than traditional grading methods.
Gap: The study highlights some limitations, including the model's dependency on the quality and diversity of the training data. Additionally, it may struggle with complex or ambiguous answers that require nuanced human judgment. The authors noted that there is potential for further refinement of the feature extraction process to capture deeper semantic understanding.
Future direction: The authors suggest exploring advanced natural language processing (NLP) techniques and deep learning models, such as recurrent neural networks (RNNs) or transformer-based architectures, to improve the model's performance. Future research could also focus on expanding the dataset to include a wider variety of subjects and answer types, enabling the model to generalize better across different contexts.

15. "OMR Sheet Evaluation using Image Processing" by Mrs. Nayan Ahire, Ms. Vaishnavi Adhangle, and Mr. Nikhil Handore (IJERT, 2024).
Findings: The paper reviews advancements in Optical Mark Recognition (OMR) technology using image processing for educational assessments, highlighting the challenges of traditional systems. It presents various image processing techniques for OMR evaluation, including pre-processing, segmentation, feature extraction, and classification methods.
Gap: The authors note limitations related to the skewness of OMR sheets when placed on the conveyor belt, which can affect accuracy. The review suggests a lack of exploration into real-time processing challenges and handling variations in paper orientation.
Future direction: The authors propose improvements in handling skewed and distorted images, and integrating advanced machine learning and deep learning techniques for enhanced accuracy and robustness. Further research could focus on developing scalable and cost-effective solutions for various OMR applications beyond educational assessments, such as surveys and form processing.

16. "Evaluation of Optical Mark Recognition (OMR) Sheet Using Computer Vision" by G. Himabindu, A. Reeta, A. Srinivas Manikanta, and S. Manogna (IJERT, 2023).
Findings: The proposed OMR technique is a low-cost, efficient system capable of processing thin and low-quality answer sheets. The system utilizes various image processing techniques and has been tested successfully on a significant number of questionnaires, demonstrating its robustness and effectiveness.
Gap: While the paper successfully demonstrates a low-cost OMR solution, it does not extensively explore the impact of varying lighting conditions or the system's performance on different paper types and printing qualities. There is also a lack of comparative analysis against existing OMR technologies to validate the claimed efficiency.
Future direction: Future research could focus on enhancing the system's robustness under different environmental conditions, exploring machine learning techniques to improve mark detection accuracy, and implementing the system for diverse applications, such as attendance monitoring and survey analysis, to evaluate its versatility in practical scenarios.

17. "Automatic OMR Answer Sheet Evaluation using Efficient & Reliable OCR System" by R. Kumar and A. Rajasekaran (IJERT, 2017).
Findings: The research presents an efficient Optical Mark Recognition (OMR) system that utilizes Optical Character Recognition (OCR) for automatic evaluation of answer sheets. The system demonstrates high accuracy in recognizing marks on OMR sheets, reducing the time and labor associated with manual grading processes.
Gap: While the study demonstrated the effectiveness of the proposed OMR system, there are some observations: the system may struggle with unconventional mark styles or poorly filled sheets; there was limited testing on diverse lighting conditions and varying paper qualities; and the OCR performance may vary with handwriting recognition, which was not the primary focus of this research.
Future direction: Enhancing the robustness of the OMR system to handle a wider variety of mark styles and answer sheet designs. Integrating machine learning algorithms to improve character recognition, especially for handwritten responses. Conducting extensive tests across different environments and paper qualities to assess the system's adaptability and reliability. Exploring the use of mobile devices for OMR evaluation, making the system more accessible for educational institutions with limited resources.

18. Cost effective optical mark recognition software for educational institutions
Authors: Vidisha Ware, Nithya Menon, Prajakti Varute, Rachana Dhannawat
Published At: IJERT, 2019
Findings: The authors propose a low-cost and user-friendly Optical Mark Recognition (OMR) system to address the high costs and complexities associated with existing OMR technologies. The proposed system uses easily available A4-sized paper and a standard scanner or multifunctional printer to process OMR sheets. The system is capable of detecting different marking styles, such as bubbles, ticks, and crosses, and integrates with an online website for personalized result display.
Gap: One limitation of the system is that the questionnaire is static in nature. Future improvements could involve making the questionnaire dynamic and incorporating technologies such as cloud computing and machine learning to enhance adaptability and user-friendliness.
Future Direction: The system can be expanded by integrating machine learning and cloud computing to make it more adaptive. Future improvements can focus on creating a dynamic questionnaire format and increasing the system's scalability for larger institutions and other use cases.

19. Automated Scoring System for Multiple Choice Test with Quick Feedback
Authors: M. Alomran and D. Chai
Published At: IJERT, 2018
Findings: The authors propose an automated scoring system for multiple-choice tests using image processing. Key features include: segmented handwritten optical character recognition (OCR) for student ID recognition; an intuitive answer-changing mechanism that allows students to change answers multiple times without needing to replace the answer sheet; and quick feedback by annotating the answer sheets and sending them back via email. The system reduces the cost and logistical constraints associated with traditional MCQ grading systems and provides a fast and low-cost solution.
Gap: While the system addresses cost efficiency and adds a novel method for student ID recognition and answer changing, future work is needed to enhance the system's user interface and make it available online for broader accessibility. Another observation is the potential limitation of handwritten character recognition due to varied handwriting styles.
Future Direction: Future research should focus on developing a web-based version of the system, allowing users to upload answer sheets to a server for automatic processing. Additionally, improving user experience and integrating the system with other educational platforms could enhance its practicality and reach.

20. Automatic OMR Answer Sheet Evaluation using Efficient & Reliable OCR System
Authors: Dhananjay Kulkarni, Ankit Thakur, Jitendra Kshirsagar, Y. Ravi Raju
Published At: IJERT, 2017
Findings: The paper presents a low-cost, efficient, and reliable system for evaluating Optical Mark Recognition (OMR) answer sheets using Optical Character Recognition (OCR) technology. The system aims to provide a software-based approach that works with standard scanners, eliminating the need for specialized equipment and the high costs associated with traditional OMR systems. The proposed system efficiently scores multiple-choice tests and provides quick feedback by comparing student responses with a master answer key stored in a database.
Gap: The paper does not mention performance comparisons with existing systems or discuss any limitations of the proposed approach. Additionally, there is little emphasis on challenges like OCR accuracy for handwritten IDs or varying marking styles on OMR sheets.
Future Direction: The authors suggest that the system can be expanded to support different languages and be used for evaluating feedback in academic institutions. There is also potential for implementing the system at the micro-level in administrative sectors to enhance organizational feedback processes. The system could be scaled further to handle more questions and sets, making it more versatile for larger exams and feedback evaluations.

21. An Automated Multiple Choice Grader for Paper-Based Exams
Authors: Abrar H. Abdul Nabi, Inad A. Aljarrah
Published At: Springer, 2016
Findings: The paper presents an automated system for grading multiple-choice exams using Optical Character Recognition (OCR). The system processes scanned images of exam sheets to identify the student's ID and their selected answers, outputting the results in an Excel sheet. The system achieved an overall accuracy of 95.58% in recognizing student IDs and answers, with a processing time of 1 to 4 seconds per exam paper.
Gap: The system, while functional, shows room for improvement in accuracy, especially with handwritten responses, which were impacted by variations in student handwriting due to stress during exams. Additionally, it only works with English characters, limiting its application in other languages.
Future Direction: Increase the size of the template database to handle a wider variety of handwriting styles; explore additional feature extraction methods to improve accuracy; and perform experiments on more diverse datasets to enhance reliability across different environments and student populations.

22. Inclusion of Vertical Bar in the OMR Sheet for Image-Based Robust and Fast OMR Evaluation Technique Using Mobile Phone Camera
Authors: Kshitij Rachchh, E.S. Gopi
Published At: Springer, 2019
Findings: The paper proposes a low-cost, robust, and fast OMR evaluation technique that uses images captured by mobile phone cameras for evaluating OMR sheets. It introduces the use of a vertical bar in the OMR sheet to assist in Principal Component Analysis (PCA)-based skew correction, achieving 100% accuracy with reduced computation time.
Gap: The study focused on a controlled environment with adequate brightness. It did not address complex real-world scenarios with significant variations in lighting or heavy image blurring. Future work could explore extending the robustness of the system to handle a wider range of environmental conditions.
Future Direction: Future research could involve developing a mobile application that directly integrates the algorithm for real-time OMR evaluation and expanding its capabilities to handle challenging conditions like poor lighting or motion blur.

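The PCA-based skew correction summarized above can be illustrated with a short sketch (a hypothetical illustration of the general technique, not the authors' code): treating the dark pixels of the printed vertical bar as a 2-D point cloud, the eigenvector of the covariance matrix with the largest eigenvalue gives the bar's dominant orientation, and the deviation of that axis from the vertical is the skew angle to correct.

```python
import numpy as np

def estimate_skew_angle(mask: np.ndarray) -> float:
    """Estimate the skew angle (degrees) of a near-vertical bar.

    mask: 2-D array, nonzero where the bar's dark pixels lie.
    Returns the bar's tilt from the vertical axis; rotating the
    image back by this angle would straighten it.
    """
    ys, xs = np.nonzero(mask)
    pts = np.column_stack((xs, ys)).astype(float)
    pts -= pts.mean(axis=0)                 # centre the point cloud
    cov = np.cov(pts, rowvar=False)         # 2x2 covariance of (x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # axis of largest variance
    angle = np.degrees(np.arctan2(major[0], major[1]))  # vs. vertical
    # Eigenvector sign is arbitrary; fold the angle into (-90, 90].
    if angle > 90:
        angle -= 180
    elif angle <= -90:
        angle += 180
    return float(angle)
```

A perfectly vertical bar yields an angle near zero, while a tilted scan yields its tilt directly, which is why a single printed bar is enough to de-skew the whole sheet.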
23. Optical Mark Recognition: Advances, Difficulties, and Limitations
Authors: Erik Miguel de Elias, Paulo Marcelo Tasinaffo, R. Hirata Jr.
Published At: Springer, 2021
Findings: The study identifies key advances and limitations in Optical Mark Recognition (OMR) technology. It reviews 35 papers on OMR, focusing on technological challenges, processing techniques, datasets, accuracy, and cost. The paper highlights the lack of a standard dataset for evaluating OMR systems, a major limitation that impairs the comparative analysis of different solutions.
Gap: Lack of standardization: a key gap identified is the absence of a public dataset to standardize the evaluation of OMR systems; current research uses proprietary datasets, making it difficult to compare performance across studies. Flexibility and usability: many OMR solutions remain rigid, requiring specific templates and fiducial markers that reduce flexibility for users, which affects usability especially in educational settings where teachers may need customizable forms. Real-world application: the study emphasizes the need for more real-world testing, particularly with untrained users, to ensure the robustness of OMR systems in practical scenarios.
Future Direction: Development of a standardized public dataset for evaluating OMR solutions; advances in machine learning, particularly using convolutional neural networks (CNNs), to improve the accuracy of mark detection and classification; improved flexibility in OMR systems to allow customizable forms without the need for pre-defined templates or fiducial markers, making OMR more accessible to a broader range of users; and further exploration of mobile-based OMR systems that can handle real-world issues like lighting variations, skew, and noise.

24. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques
Authors: Jesus Arias Fisteus, Abelardo Pardo, Norberto Fernández García
Published At: Springer, 2012
Findings: The paper presents Eyegrade, a low-cost system for automatic grading of multiple-choice exams using a regular webcam instead of expensive scanners. Eyegrade recognizes both the marked answers and handwritten student identification numbers. It offers improved flexibility, allowing the use of regular paper and non-erasable pens. Experimental results show that the system is efficient, easy to use, and reliable, with a high level of user satisfaction.
Gap: Setup limitations: the physical setup, involving webcam alignment, was seen as a weakness, requiring improvements for ease of use. OCR accuracy: while effective, the system's OCR could benefit from enhancements to improve ID recognition. Customization: current systems like GradeCam limit the number of questions and answers per sheet, while Eyegrade offers more flexibility, which could be further improved in future iterations.
Future Direction: Enhanced user interface and ease of setup; improved connection with Learning Management Systems (LMS) and exam authoring tools; expanding Eyegrade to mobile platforms for increased portability; and public release of the system as open-source software for wider adoption.

25. An Efficient, Cost-Effective and User-Friendly Approach for MCQs Treatment
Authors: Ismail Khan, Sami ur Rahman, Fakhre Alam
Published At: Springer, 2018
Findings: The paper proposes a novel and cost-effective system for automatically grading multiple-choice questions (MCQs) using basic image processing techniques. It allows candidates to mark answers in various ways (cross, tick, or fill) and significantly reduces the time needed for grading. The system requires minimal resources: a camera and a computer. Experiments show that it can save up to 80% of the time required for marking answers compared to traditional OMR systems. The performance is satisfactory, even with low-quality images, achieving an accuracy of 95%.
Gap: The system's performance is affected by image quality, especially when the circles are not filled properly. The use of a camera instead of an OMR scanner offers flexibility and cost savings but may reduce accuracy in poor lighting or with low-resolution cameras.
Future Direction: Improving the system's performance with bad-quality images by incorporating advanced image enhancement techniques; further reducing the need for specialized hardware and making the system more adaptable to different environments; and extending the system to handle other types of assessments, not just MCQs, and integrating it with educational platforms for broader use.

26. Highly Optimized Implementation of OpenCV for the Cell Broadband Engine
Authors: Hiroki Sugano (Kyoto University), Ryusuke Miyamoto (Nara Institute of Science and Technology)
Published At: Elsevier, 2010
Findings: This paper presents a highly optimized version of the OpenCV library for the Cell Broadband Engine (Cell), which is widely used in high-performance embedded systems. The optimized implementation, named CVCell, shows improved performance for several OpenCV functions when compared to Intel Core 2 Duo E6850 processors. Notably, many CVCell functions outperform Intel's implementation using the Intel Integrated Performance Primitives (IPP). Real-time image processing is demonstrated with substantial speedups in image recognition tasks, such as object detection.
Gap: One limitation is that not all OpenCV functions were optimized, and some functions (like simple arithmetic ones) had less performance gain due to overheads such as code overlays and synchronization costs. Additionally, dynamic allocation of SPE resources was not implemented, which could have further improved efficiency. Functions like cvIntegral experienced bottlenecks due to memory access patterns, limiting the speedup from parallelization.
Future Direction: Optimize more OpenCV functions for the Cell processor; implement dynamic allocation of SPEs to optimize resource usage; adjust partitioning strategies based on input image sizes (vertical or tiled partitioning) to handle larger image resolutions; and explore further SIMD optimizations for the PPE (Power Processing Element) in resizing tasks like object detection.

27. A Review of Yolo Algorithm Developments
Authors: Peiyuan Jiang, Daji Ergu, Fangyao Liu, Ying Cai, Bo Ma (Southwest Minzu University)
Published At: Elsevier, 2022
Findings: The paper reviews the development of the YOLO (You Only Look Once) algorithm for object detection, from its inception in 2015 (YOLOv1) to its subsequent versions (YOLOv2 to YOLOv5). It highlights the key features, improvements, and limitations of each version, such as accuracy, speed, and generalization capabilities. It also notes that improvements in YOLO are ongoing, with YOLOv5 being lightweight and easy to deploy, YOLOv4 focusing on integrating multiple optimizations, and YOLOv3 introducing multi-scale detection for better accuracy.
Gap: The paper identifies that while YOLO has made significant advancements, there are still challenges in detecting small objects and objects in close proximity, particularly in earlier versions. It also points out that there is limited research on YOLOv1 in this review, which is a gap for further studies. The authors suggest that scenario-based implementations of YOLO could be explored more deeply in future research.
Future Direction: Future research should focus on improving detection of small objects and crowded scenes, exploring further optimizations for YOLO in various real-time applications, and expanding scenario-specific analysis and implementation to better adapt YOLO models to different use cases.

28. Automatic evaluation of open-ended questions for online learning. A systematic mapping
Authors: Emiliano del Gobbo, Alfonso Guarino, Barbara Cafarelli, Luca Grilli, Pierpaolo Limone
Published At: Elsevier, 2023
Findings: This paper systematically reviews the current state of automatic grading and feedback tools and methods (AGFTM) used for evaluating open-ended questions, particularly in higher education (HE). It highlights that although AGFTM is a growing research field, it is still immature, with practical implementations yet to be widely adopted. Various techniques, such as machine learning and natural language processing (NLP), have been employed to grade open-ended questions. Despite these advancements, challenges remain, including subjective grading, lack of datasets, and difficulties in evaluating creativity.
Gap: There is a lack of transparency and interpretability in the models, which is crucial for trust in educational environments. Few studies validate their solutions with diverse datasets, and the small dataset sizes limit generalizability. AGFTM research often does not include user studies (with teachers and students) to assess the usability and acceptance of the systems.
Future Direction: Developing AGFTM that ensures human oversight and explainability, avoiding the "black-box" nature of many deep learning systems; creating open-source, accessible tools that teachers and institutions can adopt for practical use; and conducting more user studies to gauge the effectiveness and fairness of AGFTM in real-world educational contexts.

29. Reduced Grading in Assessment: A Scoping Review
Authors: Dan-Anders Normann, Lise Vikan Sandvik, Henning Fjørtoft
Published At: Elsevier, 2023
Findings: The review examines the growing trend of reduced grading in educational assessments, discussing its benefits and challenges. Reduced grading is linked to improved feedback, enhanced learning environments, reduced stress for students, and the fostering of intrinsic motivation. However, there are also challenges related to unclear implementation and negative impacts on motivation for some students.
Gap: The review highlights gaps in understanding the external use of grades, such as how reduced grading affects communication with parents and accountability measures. Additionally, there is a need for more research on practical implementation in various educational contexts.
Future Direction: The paper suggests further research on the relationship between reduced grading, feedback, and student motivation. It also recommends exploring new grading strategies and their long-term impacts on student performance, along with improving communication with stakeholders like parents and school administrators.

30. Enhancement of Handwritten Text Recognition Using AI-based Hybrid Approach
Authors: Supriya Mahadevkar, Shruti Patil, Ketan Kotecha
Published At: Elsevier, 2024
Findings: The research paper proposes a hybrid approach for handwritten text recognition (HTR) by combining Convolutional Neural Networks (CNN), Bidirectional Long Short-Term Memory (BiLSTM), and Connectionist Temporal Classification (CTC). The hybrid model achieved remarkable accuracy of 98.50% and 98.80% on the IAM and RIMES datasets, respectively, demonstrating a significant improvement in HTR accuracy over existing methods. This hybrid method excels at recognizing diverse handwriting styles and overcoming the challenges posed by variable-length sequences in text recognition.
Gap: Despite the model's high accuracy, the study identifies limitations related to recognizing low-quality or broken characters and the need for improvement in online recognition systems. The existing model primarily focuses on offline HTR for English and French text, and further enhancement is needed to handle multilingual datasets or more complex writing styles.
Future Direction: 1. Online recognition: incorporating real-time recognition and applying the model to dynamic handwritten inputs. 2. Multilingual support: expanding the model to support recognition of handwritten text in multiple languages. 3. Quality improvements: addressing broken or low-quality handwritten text to improve recognition accuracy. 4. Activation functions and neural layers: experimenting with additional neural network layers and activation functions for performance optimization.

References

[1] Hossam Magdy Balaha and Mahmoud M. Saafan, “Automatic Exam Correction Framework (AECF) for the MCQs, Essays, and Equations Matching”, IEEE, 2021.

[2] Tien Dzung Nguyen, Quyet Hoang Manh, Phuong Bui Minh, Long Nguyen Thanh, Thang Manh Hoang, “Efficient and Reliable Camera-Based Multiple-Choice Test Grading System”, IEEE, 2011.

[3] Saikat Mahmud, Kawshik Biswas, Api Alam, Rifat Al Mamun Rudro, Nusrat Jahan Anannya, Israt Jahan Mouri, Kamruddin Nur, “Automatic Multiple Choice Question Evaluation Using Tesseract OCR and YOLOv8”, IEEE, 2024.

[4] Sarjak Maniar, Prof. Kumkum Saxena, Jai Parmani, Mihika Bodke, “Generation and grading of arduous MCQs using NLP and OMR detection using OpenCV”, IEEE, 2021.

[5] Henry E. Ascencio, Carlos F. Peña, Kevin R. Vásquez, Manuel Cardona, Sebastián Gutiérrez, “Automatic Multiple Choice Test Grader using Computer Vision”, IEEE, 2021.

[6] Alexander Sayapin, Applied Mathematics Chair, SibSAU, Krasnoyarsk, Russia, “Multiple Choice Assessments: Evaluation of Quality”, IEEE, 2013.

[7] Anusha Hegde, Nayanika Ghosh, Viraj Kumar, “Multiple Choice Questions with Justifications”, IEEE, 2014.

[8] Noorminshah Iahad, Emmanouil Kalaitzakis, Georgios A. Dafoulas, Linda A. Macaulay, “Evaluation of Online Assessment: The Role of Feedback in Learner-Centered eLearning”, IEEE, 2014.

[9] Aditya R. Mitra, Dion Krisnadi, Steven Albert, Arnold Aribowo, “Multiple-column Format for Reducing Task Complexity of Recognizing Handwritten Answers in Multiple-choice Question”, IEEE, 2018.

[10] G.M. Rasiqul Islam Rasiq, Abdullah Al Sefat, M.M. Fahim Hasnain, “Mobile-Based MCQ Answer Sheet Analysis and Evaluation Application”, IEEE, 2019.

[11] Nirali V Patel, Ghanshyam I Prajapati, “Various Techniques for Assessment of OMR Sheets Through Ordinary 2D Scanner: A Survey”, IJERT, 2015.

[12] Nithin T., Md Nasim T., Raj Shekhar, Omendra Singh Gautam, Yuraj Gholap, “OMR Auto Grading System”, IJERT, 2015.

[13] Vishwas Tanwar, “Machine Learning Based Automatic Answer Checker Imitating Human Way of Answer Checking”, IJERT, 2021.

[14] Mrs. Nayan Ahire, Ms. Vaishnavi Adhangle, Mr. Nikhil Handore, “OMR Sheet Evaluation Using Image Processing”, IJERT, 2024.

[15] G. Himabindu, A. Reeta, A. Srinivas Manikanta, S. Manogna, “Evaluation of Optical Mark Recognition (OMR) Sheet Using Computer Vision”, IJERT, 2023.

[16] R. Kumar, A. Rajasekaran, “Automatic OMR Answer Sheet Evaluation using Efficient & Reliable OCR System”, IJERT, 2017.

[17] Vidisha Ware, Nithya Menon, Prajakti Varute, Rachana Dhannawat, “Cost effective optical mark recognition software for educational institutions”, IJERT, 2019.

[18] M. Alomran, D. Chai, “Automated Scoring System for Multiple Choice Test with Quick Feedback”, IJERT, 2018.

[19] Dhananjay Kulkarni, Ankit Thakur, Jitendra Kshirsagar, Y. Ravi Raju, “Automatic OMR Answer Sheet Evaluation Using Efficient & Reliable OCR System”, IJERT, 2017.

[20] Janardhan Singh K, Sanjay Kulkarni, Sanket B Patil, Shashank M Shashanka, “OMR Automated Grading”, IJERT, 2024.

[21] Abrar H. Abdul Nabi, Inad A. Aljarrah, “An Automated Multiple Choice Grader for Paper-Based Exams”, Springer, 2016.

[22] Kshitij Rachchh, E.S. Gopi, “Inclusion of Vertical Bar in the OMR Sheet for Image-Based Robust and Fast OMR Evaluation Technique Using Mobile Phone Camera”, Springer, 2019.

[23] Erik Miguel de Elias, Paulo Marcelo Tasinaffo, R. Hirata Jr., “Optical Mark Recognition: Advances, Difficulties, and Limitations”, Springer, 2021.

[24] Jesus Arias Fisteus, Abelardo Pardo, Norberto Fernández García, “Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques”, Springer, 2012.

[25] Ismail Khan, Sami ur Rahman, Fakhre Alam, “An Efficient, Cost-Effective and User-Friendly Approach for MCQs Treatment”, Springer, 2018.

[26] Hiroki Sugano, Ryusuke Miyamoto, “Highly Optimized Implementation of OpenCV for the Cell Broadband Engine”, Elsevier, 2010.

[27] Peiyuan Jiang, Daji Ergu, Fangyao Liu, Ying Cai, Bo Ma, “A Review of Yolo Algorithm Developments”, Elsevier, 2022.

[28] Emiliano del Gobbo, Alfonso Guarino, Barbara Cafarelli, Luca Grilli, Pierpaolo Limone, “Automatic evaluation of open-ended questions for online learning. A systematic mapping”, Elsevier, 2023.

[29] Dan-Anders Normann, Lise Vikan Sandvik, Henning Fjørtoft, “Reduced Grading in Assessment: A Scoping Review”, Elsevier, 2023.

[30] Supriya Mahadevkar, Shruti Patil, Ketan Kotecha, “Enhancement of Handwritten Text Recognition Using AI-based Hybrid Approach”, Elsevier, 2024.
Project
Proposal

Smart Moderator
Rationale

The demand for accurate and efficient assessment tools in educational institutions has become
increasingly critical, particularly for large-scale multiple-choice question (MCQ) exams such as
NEET. Traditional manual grading processes are not only time-consuming but also highly prone
to human errors, especially when dealing with thousands of answer sheets. The limitations of
existing systems, which often rely on specialized Optical Mark Recognition (OMR) scanners and
pre-printed answer sheets, contribute to significant logistical challenges and increased costs.
These traditional methods also lack flexibility, as they may struggle to accurately process poorly
marked or non-standard responses.

To address these pressing challenges, there is a clear need for a more adaptive and accessible
solution that integrates advanced technologies. By leveraging OMR technology alongside
modern image processing techniques, the Smart Moderator project aims to facilitate grading
through the use of ordinary cameras, including mobile devices. This innovative approach not
only enhances accessibility but also significantly reduces reliance on expensive, specialized
equipment, making the system more practical for a wider range of educational settings.

The integration of mobile-based scanning capabilities is particularly important for
accommodating users in remote or under-resourced environments. It allows students to capture
their answer sheets conveniently without the constraints of traditional methods. Additionally,
ensuring high accuracy in answer recognition, regardless of variations in image quality, is a key
focus of this project. By utilizing advanced algorithms and seamless database integration, the
system can perform real-time answer matching and score generation, delivering prompt feedback
to both students and educators.

Automating the grading process through the Smart Moderator not only minimizes the potential
for manual errors but also significantly improves efficiency in the assessment workflow. This
allows educators to dedicate more time to teaching and engaging with students rather than
getting bogged down in administrative tasks. Moreover, the reduction of specialized hardware
requirements lowers operational costs, making the solution more sustainable and accessible.

By enabling on-demand printed answer sheets, the project aligns with sustainability goals,
reducing paper waste associated with pre-printed materials. The ability to analyze assessment
data and generate actionable insights also empowers educational institutions to adopt data-driven
strategies that can enhance teaching effectiveness and improve student performance. Ultimately,
Smart Moderator aims to transform the assessment landscape, making it more efficient, accurate,
and inclusive for all stakeholders involved.

Introduction

Smart Moderator is a web-based platform designed to automate the assessment of multiple-choice
questions (MCQs) in educational settings, particularly for large-scale examinations like
NEET. By integrating advanced Optical Mark Recognition (OMR) technology with image
processing techniques, the system captures and processes assessments digitally, making the
grading process more accessible and cost-effective. Unlike traditional methods that require
specialized equipment and trained personnel, this platform enables institutions to conduct
evaluations without the need for expensive OMR hardware, lowering the barrier to entry for
high-quality exam assessments. The platform operates on a secure website with controlled
access, ensuring that only authorized users can manage assessment data, which is protected
through privacy measures. The automated grading system significantly reduces the likelihood of
manual errors and expedites the evaluation process, allowing educators to deliver accurate
feedback to students. By streamlining grading tasks, Smart Moderator frees up educators to
dedicate more time to instructional activities and student engagement.

Smart Moderator's seamless integration with a centralized database allows for real-time answer
matching and immediate score computation, resulting in a fast and efficient assessment
workflow. In an era where data-driven decision-making is key to advancing educational
outcomes, Smart Moderator stands as a valuable asset for institutions seeking to enhance their
assessment processes. By automating routine tasks and providing actionable insights, the
platform supports schools and universities in their mission to improve teaching quality, optimize
learning experiences, and better prepare students for future academic and professional
challenges.

Purpose

The primary purpose of this project is to address the inefficiencies and inaccuracies associated
with the manual evaluation of OMR (Optical Mark Recognition) sheets in large-scale exams
such as NEET. Manual grading is not only time-consuming but also prone to human errors,
leading to delays and possible inaccuracies in results. By developing an automated system, this
project aims to significantly enhance the speed, accuracy, and reliability of OMR sheet
evaluations. The solution will utilize image processing techniques to automatically detect and
evaluate marked answers, ensuring timely, precise, and scalable results for institutions managing
high-stakes examinations. Additionally, the system will reduce operational costs by minimizing
human involvement in the grading process, improving both efficiency and resource allocation.

Scope

The Smart Moderator project has a wide reach in the area of grading exams. It aims to improve
how multiple-choice exams are graded in various educational settings, including schools and
universities. The project is flexible enough to work with different subjects and types of tests,
making it useful for many standardized exams.

Additionally, Smart Moderator plans to connect with existing school systems and provide
features like detailed result analysis, performance tracking, and easy-to-use interfaces for both
teachers and students. There are also plans for future upgrades, such as adding support for
different question types and feedback options to help improve learning. Overall, the project aims
to make grading faster and more accurate, benefiting education in today’s busy environment.

Literature Survey

Several studies have focused on automated grading systems for MCQs, addressing limitations of
traditional OMR methods. Nguyen et al. (2011) presented a camera-based system using image
processing for skew correction and normalization, achieving 99.7% accuracy but requiring
manual sheet feeding. Mahmud et al. (2024) used OCR and YOLOv8 for flexible grading
without fixed templates, though low-quality images remain a challenge. Mobile-based OMR
solutions like Islam et al. (2019) improved accessibility with 99.44% accuracy, yet noise and
image quality issues persist. Advanced algorithms show promise, but scalability and cost-
effective solutions need more exploration. Our system aims to address these gaps using modern
image processing and adaptable OMR methods.
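The bubble-detection pipeline that these camera-based systems share can be sketched in a few lines. The following is an illustrative pure-NumPy mock-up, not our final implementation: the bubble coordinates, threshold values, and answer key below are invented for demonstration, and a production version would replace the helpers with OpenCV routines such as cv2.threshold and cv2.findContours.

```python
import numpy as np

# Hypothetical layout: (row, col) centre of each option bubble per question.
# In practice these come from the detected sheet template, not hard-coded values.
BUBBLES = {
    1: {"A": (20, 20), "B": (20, 60)},
    2: {"A": (60, 20), "B": (60, 60)},
}
ANSWER_KEY = {1: "A", 2: "B"}  # assumed master key from the database
RADIUS = 8
FILL_THRESHOLD = 0.5  # fraction of dark pixels needed to count as marked

def binarize(gray: np.ndarray, cutoff: int = 128) -> np.ndarray:
    """Global threshold: dark ink -> True (cv2.threshold in the real system)."""
    return gray < cutoff

def fill_ratio(binary: np.ndarray, centre, radius: int) -> float:
    """Fraction of dark pixels in a square window around a bubble centre."""
    r, c = centre
    window = binary[r - radius:r + radius, c - radius:c + radius]
    return float(window.mean())

def grade(gray: np.ndarray):
    """Detect the marked option per question and score against the key."""
    binary = binarize(gray)
    detected, score = {}, 0
    for q, options in BUBBLES.items():
        ratios = {opt: fill_ratio(binary, pos, RADIUS)
                  for opt, pos in options.items()}
        best = max(ratios, key=ratios.get)
        detected[q] = best if ratios[best] >= FILL_THRESHOLD else None
        if detected[q] == ANSWER_KEY[q]:
            score += 1
    return detected, score
```

A question whose best bubble falls below FILL_THRESHOLD is recorded as unanswered (None), which is where poorly marked responses would be flagged for manual review rather than silently mis-graded.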

In conducting this research, we have meticulously reviewed a total of 30 scholarly papers
sourced from prominent publishers, including IEEE, IJERT, Elsevier, and Springer. The primary
focus was on understanding the different methodologies, applications, and outcomes within the
scope of our research topic. We categorized the papers as:

• Papers from IEEE – 10
• Papers from IJERT – 10
• Papers from Springer – 5
• Papers from Elsevier – 5
• Total Papers – 30

IEEE papers contributed insights into technical frameworks and automation, while IJERT
focused on practical applications and engineering case studies. Elsevier provided broad reviews
and trend analyses, and Springer offered advanced theoretical models and statistical insights.

A research table was created to organize these findings, summarizing each paper's source,
methodology, and key contributions, allowing a structured comparison that supports a clear
direction for our study.

Problem Definition

The traditional approach of manually grading OMR (Optical Mark Recognition) sheets, widely
used in large-scale exams such as NEET, presents several significant challenges. With hundreds
of thousands of answer sheets to process, manual correction methods are labor-intensive, prone
to human error, and lack the scalability needed to meet rising testing demands efficiently. This
not only slows down the result processing time but also risks inconsistencies in evaluation
quality, which can affect the reliability of scores and ultimately, the fairness of the assessment
process.

To address these issues, there is a pressing need for an automated system capable of efficiently
handling the evaluation of OMR sheets. An automated solution promises to enhance grading
speed, reduce the margin for error, and provide a scalable approach adaptable to exams of
various sizes and complexities. This project, therefore, seeks to explore, design, and implement a
streamlined, technology-driven system for automated OMR grading, ensuring a more reliable,
accurate, and effective process.

Proposed Methodology

The development of this automated OMR grading system will involve a structured approach that
covers all necessary stages from requirements gathering to system evaluation. The methodology
includes identifying requirements, data processing, applying detection algorithms, and
performing results validation. Each stage is outlined in detail as follows:

1. Identifying Requirements

The first step involves gathering and defining the system's requirements to align with grading
objectives. This includes identifying the necessary functionalities, such as the ability to read
various OMR sheet formats, process multiple-choice answers, and generate detailed scoring
reports. We will determine the hardware specifications, including the processing power required
for image recognition, and the software tools necessary for implementation, primarily focusing
on OpenCV for image processing and YOLOv8 for answer detection. Additionally, user
interface requirements will be defined to ensure ease of use for educators and administrators.

2. Data Collection and Preprocessing

A diverse dataset of OMR sheets will be analyzed to ensure that the system can effectively
handle different layouts and marking styles. This dataset will include images of OMR sheets
with varying qualities and answer patterns. Each sheet will be labeled with the correct answers to
create a ground truth dataset for model training and validation. The preprocessing stage will
involve applying techniques such as resizing and noise reduction to enhance image quality. This
prepares the data for accurate recognition by improving the clarity of the answer bubbles. The
data will be split into training, validation, and test sets to support model development and
evaluation.

3. Algorithm Implementation

The project will implement specific algorithms for effective OMR processing. OpenCV will be
utilized for image processing tasks, such as bubble detection on the OMR sheets, employing
techniques like contour detection and thresholding to identify filled and unfilled answer bubbles
accurately. This allows for enhanced image analysis and preprocessing. Meanwhile, the
YOLOv8 model will be employed for detecting answer regions on the OMR sheets. This
involves training the model using annotated images to enable it to recognize answer areas based
on the provided training data. The flexibility of these algorithms will facilitate the extraction of
detected answers and their storage in a structured format, with provisions for handling
ambiguous or unmarked answers.

4. Answer Verification and Scoring

Once the answers are detected, the system will verify them against the stored answer key. The
process involves retrieving the correct answers for each question and employing a scoring
algorithm that iterates through the detected answers, matching them with the correct responses. If
a detected answer matches the answer key, the score is incremented accordingly. The final scores,
along with detailed reports of correct and incorrect responses, will be stored in a database for
further analysis.
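A minimal sketch of the scoring loop described above. The NEET-style marking scheme (+4 for a correct answer, -1 for a wrong one, 0 for an unattempted question) is assumed here purely for illustration; the actual scheme would be configured per exam.

```python
def score_answers(detected, answer_key, marks_correct=4, marks_wrong=-1):
    """Match detected responses against the answer key and build the
    per-sheet report (+4 / -1 / 0 marking assumed for illustration)."""
    score = correct = wrong = unattempted = 0
    for qno, right in answer_key.items():
        marked = detected.get(qno)      # None means no bubble detected
        if marked is None:
            unattempted += 1
        elif marked == right:
            correct += 1
            score += marks_correct
        else:
            wrong += 1
            score += marks_wrong
    return {"score": score, "correct": correct,
            "wrong": wrong, "unattempted": unattempted}

key = {1: "B", 2: "D", 3: "A", 4: "C"}
responses = {1: "B", 2: "A", 4: "C"}    # question 3 left blank
print(score_answers(responses, key))
```

The returned dictionary is exactly the per-sheet summary that would be persisted to the database for later reporting.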

5. Performance Evaluation

The performance of the OMR grading system will be rigorously evaluated to ensure its
effectiveness. We will compare the system’s detected answers with the ground truth data from
the test dataset, calculating accuracy metrics such as precision, recall, and F1 score. Additionally,
processing times will be measured across varying volumes of OMR sheets to assess the system's
scalability. This evaluation will help identify areas for improvement by analyzing misrecognized
answers and understanding the impact of preprocessing techniques and model parameters on
overall performance.
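One simplified way to frame the metrics mentioned above is to treat a detection that matches the ground truth as a true positive, a mismatched detection as a false positive, and a missed (undetected) mark as a false negative. That framing is a modeling assumption for this sketch, not the project's defined protocol.

```python
def evaluation_metrics(predicted, ground_truth):
    """Precision, recall, and F1 over per-question detections: a
    correct detection is a true positive, a wrong one a false
    positive, and a missed (None) detection a false negative."""
    pairs = list(zip(predicted, ground_truth))
    tp = sum(1 for p, t in pairs if p is not None and p == t)
    fp = sum(1 for p, t in pairs if p is not None and p != t)
    fn = sum(1 for p, _ in pairs if p is None)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

truth = ["A", "B", "C", "D", "A"]
preds = ["A", "B", "D", None, "A"]   # one wrong answer, one missed mark
print(evaluation_metrics(preds, truth))
```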

6. Output Generation and Validation

The final output of the system will include automated scoring reports that summarize the results
of each OMR sheet processed. These reports will detail the number of correct, incorrect, and
unattempted answers, providing valuable feedback to educators. Validation will involve
comparing the scores generated by the system against manually graded results to confirm
accuracy and reliability. Additionally, performance summaries will be generated, showcasing the
system's efficiency and potential advantages over traditional grading methods.

 Aim
The aim of this project is to develop a web-based platform that automates the assessment
of multiple-choice questions (MCQs) using Optical Mark Recognition (OMR)
technology. This platform is designed to enhance the efficiency and accuracy of grading
processes in educational settings, especially for large-scale examinations like NEET. By
digitizing assessments, Smart Moderator will make grading more accessible and cost-
effective, eliminating the need for expensive OMR hardware. The system will ensure
secure access for authorized users, minimize manual errors, and provide timely feedback
to educators and students, ultimately supporting institutions in improving educational
outcomes.

 Objective

The objective of the Smart Moderator project is to understand the intricacies of the
system and the relevant topics related to Optical Mark Recognition (OMR) technology.
This includes identifying and achieving the planned aims and requirements necessary for
the successful development of the platform. The project will involve a series of structured
steps, beginning with the gathering of detailed system requirements to ensure that Smart
Moderator can effectively process various OMR sheet formats and generate accurate
scoring reports.
 To develop an automated system that detects and processes OMR sheets while
minimizing human error and time consumption.
 To preprocess a diverse dataset of OMR sheets so the system adapts to different
marking styles, producing clearer images that improve answer-bubble detection
accuracy.
 To implement and train image processing algorithms using OpenCV and YOLOv8
for the precise identification of filled responses, thereby increasing overall
efficiency.
 To design a scoring mechanism that provides reliable and timely results,
facilitating immediate feedback to educators.

Resources

 Hardware
1. Processing Power
An Intel Core i7 (10th gen or newer) or AMD Ryzen 7 CPU is recommended for
efficient image processing and algorithm handling. An NVIDIA GTX 1660 or higher
GPU with at least 6GB VRAM is essential for training the YOLOv8 model, enabling
faster processing.
2. Memory (RAM)
A minimum of 16GB RAM is recommended, with 32GB preferred for handling large
datasets and smooth multitasking during model training and data processing.
3. Storage
A 512GB SSD is ideal as the primary drive for quick access to development tools and
software, while a secondary 1TB HDD or SSD is suggested for storing datasets and
outputs.
4. High-Resolution Scanner
A 1200 dpi scanner is necessary to digitize OMR sheets clearly, reducing errors in image
recognition and improving algorithm accuracy.
5. Networking and External Storage
Reliable internet and a 1TB external drive or cloud storage are recommended for data
backup, remote work, and accessing online resources.
 Software
1. Operating System
A Windows 10/11 (64-bit) or Ubuntu Linux OS is recommended, providing
compatibility with key development tools and efficient processing capabilities for image
recognition tasks.
2. Python and Libraries
Python 3.8 or higher will be the primary programming language. Essential libraries
include OpenCV for image processing and bubble detection on OMR sheets, and the
YOLOv8 framework for answer-region detection, offering accurate and optimized object
detection capabilities.

3. Database Management
MySQL or PostgreSQL can be utilized to manage and store OMR sheet results, student
data, and scoring information securely.
4. Development Environment
An IDE like PyCharm, Visual Studio Code, or Jupyter Notebook provides a streamlined
coding environment, with integrated debugging tools and support for Python packages,
ensuring efficient project development and testing.
5. Web Development Tools
For front-end and back-end development, HTML, CSS, and JavaScript (with a
framework like Flask or Django for Python) will help create the platform's web interface,
allowing secure access and user management for educators.
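The result-storage step mentioned under Database Management could look like the sketch below. SQLite is used here only to keep the example self-contained, and the table schema is a hypothetical illustration; essentially the same SQL applies to MySQL or PostgreSQL.

```python
import sqlite3

# Hypothetical schema for per-sheet results; SQLite keeps the sketch
# self-contained, but the same SQL works on MySQL or PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE results (
        student_id  TEXT PRIMARY KEY,
        score       INTEGER,
        correct     INTEGER,
        wrong       INTEGER,
        unattempted INTEGER
    )""")
# Parameterized queries keep student data safe from SQL injection
conn.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
             ("S001", 640, 162, 8, 10))
conn.commit()
row = conn.execute(
    "SELECT score, correct FROM results WHERE student_id = ?",
    ("S001",)).fetchone()
print(row)
```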

Action Plan


Project
Report

Anjuman-I-Islam’s
M.H. Saboo Siddik Polytechnic
8, M.H. Saboo Siddik Polytechnic Road, Mumbai 400008

Certificate

This is to certify that Ms. Sayyed Maria Imran from Computer Engineering

Department of M. H. Saboo Siddik Polytechnic, Mumbai having Enrollment No.

2200203 has completed Final Project Report having Title Smart Moderator during

the academic year 2024– 2025 in a group consisting of 3 persons under the

guidance of Faculty Guide Ms. Zaibunnisa Malik & Co-Guide Ms. Munira Ansari.

Place: Mumbai Sign of Guide: _____________

Date: ____________ Sign of HOD: _____________

Anjuman-I-Islam’s
M.H. Saboo Siddik Polytechnic
8, M.H. Saboo Siddik Polytechnic Road, Mumbai 400008

Certificate

This is to certify that Ms. Shaikh Samiya from Computer Engineering Department

of M. H. Saboo Siddik Polytechnic, Mumbai having Enrollment No. 220020368

has completed Final Project Report having Title Smart Moderator during the

academic year 2024– 2025 in a group consisting of 3 persons under the guidance

of Faculty Guide Ms. Zaibunnisa Malik & Co-Guide Ms. Munira Ansari.

Place: Mumbai Sign of Guide: _____________

Date: ____________ Sign of HOD: _____________

Anjuman-I-Islam’s
M.H. Saboo Siddik Polytechnic
8, M.H. Saboo Siddik Polytechnic Road, Mumbai 400008

Certificate

This is to certify that Ms. Syed Afifa Fareedudin from Computer Engineering

Department of M. H. Saboo Siddik Polytechnic, Mumbai having Enrollment No.

220020309 has completed Final Project Report having Title Smart Moderator

during the academic year 2024– 2025 in a group consisting of 3 persons under the

guidance of Faculty Guide Ms. Zaibunnisa Malik & Co-Guide Ms. Munira Ansari.

Place: Mumbai Sign of Guide: _____________

Date: ____________ Sign of HOD: _____________

Acknowledgement

It is our esteemed pleasure to present the project report on


“Smart Moderator”

We would firstly like to thank our Principal (I/c), Head of the Department & Guide
Ms. Zaibunnisa Malik for encouraging and motivating us with her guidance and
total support for our work. We would also like to thank Ms. Salima Khatib for
working as our sub-guide and making our path much smoother.

We also thank all the teachers who constantly motivated us and shared their
precious knowledge about the procedures involved in making a project, along
with the technical knowledge they provided.

We would also like to thank our Principal Mr. A. K. Qureshi for providing us this
opportunity of integrating our own project and constantly supporting us
throughout the process.

Finally, it is a pleasure to thank all the staff, teaching and non-teaching, who
always stood by us and ensured that no problem came our way.

Abstract

In large-scale examinations, such as NEET, manual correction of OMR (Optical Mark


Recognition) sheets is a time-consuming and error-prone process. Our project introduces an
automated OMR sheet evaluation system that addresses these limitations by providing a faster,
more reliable, and efficient grading process for multiple-choice exams. The primary objective of
the project is to improve grading accuracy and scalability, reducing human intervention and error
margins.

The system captures and processes OMR responses using advanced image recognition
techniques, comparing each answer against a predefined answer key stored in the database. This
automation allows real-time, accurate evaluations that cater to high exam volumes. The grading
criteria are predefined and configured within the system, ensuring consistent scoring. With a
user-friendly interface, the system also provides students and administrators with timely access
to scores, reducing turnaround times for exam results and enhancing transparency in the
evaluation process.

Table of Content

Sr. No. Chapter Page No.

1. Introduction and Background
 1.1. Introduction
 1.2. Background
 1.3. Motivation
 1.4. Problem Statement
 1.5. Objective and Scope
 1.6. Advantages
 1.7. Disadvantages
 1.8. Limitations
 1.9. Conclusion

2. Literature Survey
 2.1. Introduction
 2.2. Research Papers
 2.3. References
 2.4. Conclusion

3. Proposed Methodology
 3.1. System Design
  3.1.1. Introduction
  3.1.2. Block Diagram
  3.1.3. System Architecture Diagram
  3.1.4. Data Flow Diagram
  3.1.5. Software Design Approach
 3.2. Time Line Chart
 3.3. Gantt Chart
 3.4. Conclusion

Chapter 1: Introduction and Background

Content:

1.1. Introduction
1.2. Background
1.3. Motivation
1.4. Problem Statement
1.5. Objective and Scope
1.6. Advantages
1.7. Disadvantages
1.8. Limitations
1.9. Conclusion

1.1. Introduction

In exams like NEET, where thousands of answer sheets need to be graded accurately and
quickly, the traditional way of checking OMR (Optical Mark Recognition) sheets by hand is
slow, tiring, and often leads to mistakes. The project "Smart Moderator" aims to solve these
problems by automatically grading multiple-choice exams.

Using smart image processing methods, this system can recognize marks on OMR sheets quickly
and accurately. It identifies patterns, checks for inconsistencies, and adapts to different types of
answer sheets, making the process faster and reducing errors. By automating grading, Smart
Moderator ensures consistent results for large exam batches, saving time and improving
accuracy. Built to handle the needs of high-volume exams, this project provides a reliable
solution that speeds up grading, reduces human mistakes, and helps deliver results on time,
making it a valuable tool for exam boards, schools, and students.

1.2. Background

The "Smart Moderator" project is focused on solving current problems in grading multiple-
choice exams. As more students take standardized tests, there is a growing need for quicker and
more accurate grading solutions. Traditional methods of checking answer sheets by hand are
becoming outdated, causing delays and mistakes that can affect students' academic progress.
With advancements in technology, it's clear that the grading process needs to be modernized.
Relying on manual grading can lead to inconsistencies and put extra work on teachers, making it
important to adopt an automated system. By using advanced image processing techniques, this
project aims to make grading easier and faster. In the end, the Smart Moderator project hopes to
improve the grading experience for students and schools, providing a solution that meets the
needs of today's fast-paced educational environment and enhances overall assessment results.

1.3. Motivation

The Smart Moderator project was created to solve the pressing issues of grading multiple-choice
exams effectively and accurately. The motivation behind this project comes from the challenges
faced by schools and colleges in managing a large number of answer sheets manually.
Traditional grading methods can be slow and prone to mistakes, which leads to delays in
providing results and can impact students' academic progress.

This project aims to offer a reliable tool for both educators and students, ensuring that grades are
delivered quickly and accurately. By automating the grading of OMR sheets, Smart Moderator
not only makes the grading process more efficient but also lightens the workload for teachers,
allowing them to focus more on teaching.

The idea for this project emerged from witnessing students' frustrations as they waited for their
exam results and the pressure on teachers to grade exams promptly.

1.4. Problem Statement

The current process of manually correcting OMR (Optical Mark Recognition) sheets in large-
scale exams like NEET is time-consuming, error-prone, and inefficient. There is a need to
develop an automated system that addresses these challenges by streamlining the evaluation
process, ensuring faster, more accurate, and scalable grading of multiple-choice exams.

1.5. Objective and Scope

 Objective:
 To study the principles of Optical Mark Recognition (OMR) technology for grading
exams.
 To collect data from answer sheets using OMR scanning methods.
 To analyze the collected data and identify grading patterns.
 To automate the grading process to provide quick and accurate results.
 Scope:

The Smart Moderator project has a wide reach in the area of grading exams. It aims to improve
how multiple-choice exams are graded in various educational settings, including schools and
universities. The project is flexible enough to work with different subjects and types of tests,
making it useful for many standardized exams.

Additionally, Smart Moderator plans to connect with existing school systems and provide
features like detailed result analysis, performance tracking, and easy-to-use interfaces for both
teachers and students. There are also plans for future upgrades, such as adding support for
different question types and feedback options to help improve learning. Overall, the project aims
to make grading faster and more accurate, benefiting education in today's busy environment.


1.6. Advantages

 Increased Efficiency: The automated grading process significantly reduces the time
required to evaluate multiple-choice exams, allowing for quicker results and feedback
for students.
 Improved Accuracy: By minimizing human error associated with manual grading,
the system enhances the precision of grading, leading to more reliable results.
 Scalability: The system can handle a large volume of exams simultaneously, making
it suitable for schools and universities with many students.
 Detailed Analytics: Smart Moderator can provide comprehensive analysis of student
performance, helping educators identify trends and areas for improvement.
 User-Friendly Interface: The system is designed to be intuitive for both teachers and
students, facilitating easy navigation and interaction.
 Cost-Effective: By automating the grading process, schools can save on labor costs
and allocate resources more efficiently.
 Consistent Grading Standards: The system ensures uniform grading criteria,
reducing variability that can arise from different human graders.

 Integration with Existing Systems: Smart Moderator can connect with current
school management systems, streamlining the grading process and improving overall
workflow.
 Future-Ready: The project has the potential for future upgrades, allowing it to adapt
to new educational needs and technologies.

1.7. Disadvantages
 Limited Personalization: The automated system cannot capture the nuances a human
evaluator can, such as judging ambiguous, partially erased, or stray marks on an
answer sheet.
 Technical Issues: Users may encounter technical problems, such as software bugs or
connectivity issues, which could hinder the grading process.
 Dependence on Technology: Over-reliance on the automated system may reduce
educators' direct engagement with the assessment process.
 Data Privacy Concerns: The collection and storage of student results could raise
privacy issues, especially if sensitive information is involved.
 Potential Bias: The accuracy of the system is heavily dependent on the quality of
data used for training. If the data is biased or incomplete, it can lead to inaccurate or
unfair results.
 Lack of Human Interaction: The absence of teacher review may limit the
personalized feedback and support that can be crucial for student learning.
 Inflexibility: The system might struggle to accommodate non-standard sheet layouts
or unusual marking styles, limiting its utility in some settings.

1.8. Limitations

 Limited Adaptability to Individual Needs: The system may not fully accommodate
unique user circumstances, such as specific learning needs or preferences, which could
affect the grading process and results.

 User Accessibility Issues: Access to the system may be hindered by factors like lack of
internet connectivity, limited availability of compatible devices, or varying levels of
digital literacy, which could exclude some students and teachers.
 Data Quality Concerns: The accuracy of the grading results relies on the quality of the
OMR sheets and scanned data. Poor-quality input can lead to inaccurate results and
assessments.
 Potential for Technical Errors: As with any automated system, there may be technical
glitches or errors in the software that could impact grading accuracy or processing times.
 Lack of Personal Interaction: Automated grading systems may not provide the
personalized feedback and support that teachers can offer, which can be essential for
student learning and growth.
 Dependence on Technology: Users may become overly reliant on the system for
grading, which could diminish their critical thinking and assessment skills over time.
 Inflexibility in Question Formats: The system may have limitations in supporting
various question formats beyond multiple-choice, potentially restricting its use in diverse
exam settings.

1.9. Conclusion

The Smart Moderator project marks an important advancement in the way multiple-choice exams
are graded, aiming to streamline the assessment process and enhance accuracy. While the project
offers numerous benefits, such as faster grading and detailed performance analysis, it is vital to
recognize and address its limitations, including technical issues, potential bias in data, and the
lack of a human touch in the evaluation process.

To ensure the Smart Moderator serves as an effective tool in education, it should be viewed as a
supportive resource rather than a complete replacement for traditional grading methods. Human
oversight remains essential, particularly for interpreting results and providing emotional support
to students. Finding the right balance between automated grading and personal engagement will
be crucial for helping students achieve their best outcomes.

Ongoing improvements and adaptations will be necessary to tackle these limitations and ensure
that the Smart Moderator meets the diverse needs of educational institutions and their students in
today's rapidly changing academic landscape.

Chapter 2: Literature Survey

Content:

2.1. Introduction
2.2. Research Papers
2.3. References
2.4. Conclusion

2.1. Introduction

In exams like NEET, where thousands of answer sheets need to be graded accurately and
quickly, the traditional way of checking OMR (Optical Mark Recognition) sheets by hand is
slow, tiring, and often leads to mistakes. The project "Smart Moderator" aims to solve these
problems by automatically grading multiple-choice exams.

Using smart image processing methods, this system can recognize marks on OMR sheets quickly
and accurately. It identifies patterns, checks for inconsistencies, and adapts to different types of
answer sheets, making the process faster and reducing errors. By automating grading, Smart
Moderator ensures consistent results for large exam batches, saving time and improving
accuracy. Built to handle the needs of high-volume exams, this project provides a reliable
solution that speeds up grading, reduces human mistakes, and helps deliver results on time,
making it a valuable tool for exam boards, schools, and students.

2.2. Research Papers

Paper Title 1: Automatic Exam Correction Framework (AECF) for the MCQs, Essays, and
Equations Matching

Author: Hossam Magdy Balaha and Mahmoud M. Saafan

Published in: IEEE (2021)

Abstract: Automatic grading requires the adaptation of the latest technologies. It has become
essential, especially when most of the courses became online courses (MOOCs). The objectives
of the current work are (1) Reviewing the literature on the text semantic similarity and automatic
exam correction systems, (2) Proposing an automatic exam correction framework (HMB-AECF)
for MCQs, essays, and equations that is abstracted into five layers, (3) Suggesting an equations
similarity checker algorithm named “HMB-MMS-EMA,” (4) Presenting an expression matching
dataset named “HMB-EMD-v1,” (5) Comparing the different approaches to convert textual data
into numerical data (Word2Vec, FastText, Glove, and Universal Sentence Encoder (USE)) using
three well-known Python packages (Gensim, SpaCy, and NLTK), and (6) Comparing the
proposed equations similarity checker algorithm (HMB-MMS-EMA) with a Python package
(SymPy) on the proposed dataset (HMB-EMD-v1). Eight experiments were performed on the
Quora Questions Pairs and the UNT Computer Science Short Answer datasets. The best-
achieved highest accuracy in the first four experiments was 77.95% without fine-tuning the
pre-trained models by the USE. The best-achieved lowest root mean square error (RMSE) in the
second four experiments was 1.09 without fine-tuning the used pre-trained models by the USE.
The proposed equations similarity checker algorithm (HMB-MMS-EMA) reported 100%
accuracy over the SymPy Python package, which reported 71.33% only on “HMB-EMD-v1.”

Paper Title 2: Efficient and Reliable Camera-Based Multiple-Choice Test Grading System

Author: Tien Dzung Nguyen, Quyet Hoang Manh, Phuong Bui Minh, Long Nguyen Thanh,
Thang Manh Hoang

Published in: IEEE (2011)

Abstract: This paper proposes a new idea for grading multiple-choice tests which is based on a
camera with reliability and efficiency. The bounds of the answer sheet image captured by the
camera are first allocated using the Hough transform and then skew-corrected into the proper
orientation, followed by normalization to a given size. Next, the tick mark corresponding to the
answer for each question can be recognized by the allocation of the mask that wraps the answer
area. The experimental results showed that the proposed system has achieved significant
improvement in performance in terms of accuracy, reliability, and elapsed time compared with
conventional optical mark recognition (OMR) systems. The proposed system also demonstrated
that it can achieve high accuracy of 99.7% while using non-transoptic answer sheet paper at a
lower cost.

Paper Title 3: Automatic Multiple Choice Question Evaluation Using Tesseract OCR and
YOLOv8

Author: Saikat Mahmud, Kawshik Biswas, Api Alam, Rifat Al Mamun Rudro, Nusrat Jahan
Anannya, Israt Jahan Mouri, Kamruddin Nur

Published in: IEEE (2024)

Abstract: This paper presents a novel approach for automating the grading of multiple-choice
question (MCQ) answer sheets using computer vision and pattern recognition techniques. The
system examines students' marked answer sheet images by comparing them with the question
sheet image and answer keys. The computer vision and pattern recognition help extract pertinent
data such as question number detection, MCQ option detection, and the answer markings. The
proposed approach reliably produces an output report that displays the students’ correct answers
with an accuracy of 0.98 F1 score and 0.99 mAP from any form of unstructured question script.
This approach can provide a dependable and effective grading system, reducing manual work
and offering prompt feedback to students without any constraints on the answer sheets.

Paper Title 4: Generation and grading of arduous MCQs using NLP and OMR detection using
OpenCV

Author: Sarjak Maniar, Prof. Kumkum Saxena, Jai Parmani, Mihika Bodke

Published in: IEEE (2021)

Abstract: During the 21st century pandemic, it has become difficult for students and teachers to
engage physically. In the aftermath of the epidemic, the process of assessing and grading
students has become cumbersome. Multiple Choice Questions (MCQs) have increasingly
become a popular method of assessing a person's knowledge, but the questions generated directly
from the chapter provided are readily available on the internet. Thus, évaluer, the proposed
approach, solves this problem by paraphrasing the text and, consequently, the questions, making
it difficult to look them up online. The use of OMR (Optical Mark Recognition) sheets or manual
correction was prominent in the pre-pandemic situation. Évaluer automates the process of
generating MCQs that are difficult to search for on the internet and speeds up the task of grading
a candidate’s OMR answer sheet.

Paper Title 5: Automatic Multiple Choice Test Grader using Computer Vision
Author: Henry E. Ascencio, Carlos F. Peña, Kevin R. Vásquez, Manuel Cardona, Sebastián
Gutiérrez

Published in: IEEE (2021)

Abstract: This paper contains the procedure for the implementation of an application for exam
test grading in a fully automatic way, putting into practice the resources that artificial vision
makes available. This application allows you to grade an exam that has been designed in a
pre-established format and that has subsequently been solved by the evaluated person. The
qualification process is possible through the comparison between the test already solved with the
correct answers and the test that the evaluated person has completed. Python and OpenCV were
used for the development of the application, the latter was necessary in the image analysis and
processing stage, where the objective of the application was based on the recognition of contours
and marks detected in the sample photographs. This project proposes a viable and practical
option in the optimization of the exam test qualification process, saving time, and making teacher
performance more efficient.

Paper Title 6: Multiple Choice Assessments: Evaluation of Quality


Author: Alexander Sayapin, Applied Mathematics Chair, SibSAU, Krasnoyarsk, Russia

Published in: IEEE (2013)

Abstract: Multiple choice assessments are widely used in modern higher education. A lot of
different best practices that describe how to build a good assessment are known, but there is no
way to evaluate how good the assessment is, or how difficult is it, or how many stages of
competencies it can unveil. The method to evaluate the difficulty and the differentiate ability of a
multiplechoice assessment is described in this article.

Paper Title 7: Multiple Choice Questions with Justifications

Author: Anusha Hegde, Nayanika Ghosh, Viraj Kumar

Published in: IEEE (2014)

Abstract: Multiple choice questions (MCQs) are widely used as an efficient means to grade
large batches of students. With technology enabling extremely large classes (MOOCs), the use of
MCQs has increased rapidly, leading to increased scrutiny of their pedagogical utility. In this
paper, we present a variant of MCQs that requires students to justify their answer by choosing
one or more supporting statements from an instructor-defined list. Thus we retain the ability to
automate the grading process, while addressing some (but not all) of the known weaknesses of
such assessment. To help the educational research community evaluate the pedagogical utility of

this approach, we have created an open-source plugin for creating and evaluating such questions
on a widely used e-learning platform.

Paper Title 8: Evaluation of Online Assessment: The Role of Feedback in Learner-Centered e-Learning

Author: Noorminshah Iahad, Emmanouil Kalaitzakis, Georgios A. Dafoulas, Linda A. Macaulay

Published in: IEEE (2004)

Abstract: Advancement of the Information and Communication Technologies enables the
integration of technology with daily activities, and education is not an exception. E-learning,
which applies the concept of open and distance learning is learning through the Internet. It had
been reviewed as an efficient knowledge transfer mechanism. E-learning is seen as a future
application worldwide, promoting life long learning by enabling learners to learn anytime,
anywhere and at the learner’s pace. This paper presents the evaluation of an online test based on
a case study of an e-Commerce course offered by the Computation Department, University of
Manchester Institute of Science and Technology (UMIST). The main aim of the online test is to
provide ‘rich’ feedback to students, which is one of the requirements of the learner-centred
learning paradigm. The online test, in the form of multiple choice questions, provides feedback
through automatic grading, providing correct answers and referring the students to the learning
content which explains the correct answers. Evaluation of the online test was based on
two criteria: functionality and usability. In terms of functionality, evaluation was meant to get the
students' view of the feedback provided by the system, while in terms of usability, the evaluation
sought to ensure that the system not only functions as expected by the users but is also usable.
Results show that the online test is suitable for online-learning and provides rich feedback.

Paper Title 9: Multiple-column Format for Reducing Task Complexity of Recognizing Handwritten Answers in MCQ Test

Author: Aditya R. Mitra, Dion Krisnadi, Steven Albert, Arnold Aribowo

Published in: IEEE (2018)

Abstract: Multiple-choice question (MCQ) test is a very common and popular test instrument
among educators utilized for assessing student performance, particularly in large size classes, in

an objective manner. Despite its popularity and the advantages it offers, any automated test
grading application designed for handling MCQ test is challenged with scoring time efficiency
and accuracy issues. Between both, achieving higher accuracy is still of greater interest as
reflected in many research works. Furthermore, recognizing students' answers that appear in
handwritten form is not an easy task. When a single-column answer sheet is used, any attempt
to recognize a student's answer written outside the designated area will obviously add cost to
the application. Not only will it affect the time required for locating and processing the
corrected answer, but the non-standard handwritten answer may also lower the accuracy level.
A three-column answer sheet is therefore proposed as a solution to
accommodating students’ need for putting their corrected answer. The computational cost of
recognizing at most three consecutive answers in the designated area is shown to follow a linear
growth. Therefore, it can be concluded that the three-column answer sheet offers advantages to
time efficiency and indirectly, recognition accuracy issues.

Paper Title 10: Mobile-Based MCQ Answer Sheet Analysis and Evaluation Application

Author: G.M. Rasiqul Islam Rasiq, Abdullah Al Sefat, M.M. Fahim Hasnain

Published in: IEEE (2019)

Abstract: Multiple Choice Question (MCQ) script sheets are very popular in Bangladesh, as they
are an easy way to conduct exams that include multiple-choice questions. It is also a
great way of collecting a huge amount of data in a short period of time. This research proposes a
novel approach to analyzing and evaluating MCQ scripts without using any OMR machine or
OMR paper. The first step is to scan the MCQ script as an image via an android smartphone.
Secondly, the specific answer area from the image is cropped. After that, each answer choice is
extracted, and black pixels are counted in each choice to obtain the given answer. In the final
step, a text file containing given answers to all questions is generated.
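The pixel-counting step this abstract describes can be sketched in pure Python on small binarized crops (0 = black, 255 = white). The region layout, the minimum-mark threshold, and the function names are illustrative assumptions, not taken from the paper:

```python
def count_black(region):
    """Count black (0) pixels in a binarized image region (2D list)."""
    return sum(1 for row in region for px in row if px == 0)

def pick_answer(choice_regions, min_black=3):
    """Return the index of the most heavily marked choice, or None
    if no choice has enough black pixels to count as a mark."""
    counts = [count_black(r) for r in choice_regions]
    best = max(range(len(counts)), key=counts.__getitem__)
    return best if counts[best] >= min_black else None

# Toy 3x3 crops for choices A-D; choice C (index 2) is shaded most heavily
choices = [
    [[255, 255, 255], [255, 0, 255], [255, 255, 255]],    # A: stray dot
    [[255, 255, 255], [255, 255, 255], [255, 255, 255]],  # B: blank
    [[0, 0, 255], [0, 0, 0], [255, 0, 0]],                # C: filled bubble
    [[255, 255, 255], [255, 255, 255], [255, 0, 255]],    # D: stray dot
]
print(pick_answer(choices))  # 2  (choice C)
```

In the actual application the crops would come from the scanned image after binarization, and the threshold would be tuned to the bubble size; the sketch only shows the selection logic.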

Paper Title 11: Various Techniques for Assessment of OMR Sheets Through Ordinary 2D
Scanner: A Survey

Author: Nirali V Patel, Ghanshyam I Prajapati

Published in: IJERT (2015)

Abstract: Optical Mark Recognition (OMR) is the process of gathering information from human
beings by recognizing marks on a document. OMR is accomplished by using a hardware device
(scanner) that detects a reflection or limited light transmittance through a piece of paper. The
OMR machines are not scanners in the sense that they do not form an image of the sheets that
pass through. Instead, the OMR device simply detects whether predefined areas are blank or
have been marked. OMR scans a printed form and reads predefined positions and records where
marks are made on the form. OMR is useful for applications in which large numbers of hand-
filled forms need to be processed quickly and with great accuracy, such as surveys, reply cards,
questionnaires. OMR allows for the processing of hundreds or thousands of physical documents
per hour. The existing system requires special hardware, which turns out to be very costly for any
organization. Using such a system may be cost-inefficient or infeasible for organizations, so it is
the need of the hour to develop a system that is both cost-effective and time-effective; in other
words, cheap and best. The error rate for OMR technology is less than 1%.

Paper Title 12: OMR Auto Grading System

Author: Nithin T. Md Nasim, T. Raj Shekhar Omendra Singh, Gautam Yuraj Gholap

Published in: IJERT (2015)

Abstract: The project is based on an idea to grade an OMR sheet using a mobile utilizing the
Android platform. Even today, a large number of institutes and colleges implement the idea of OMR
sheet for evaluation of students based on multiple choice questions. Most of the standardized
tests also use the same. Big institutes use expensive OMR software along with the machines
associated to evaluate the OMR sheets. But all those ‘not so rich’ institutes andindividual
teachers would not have the financial credibility to afford the costly setup and would have to
manage it using manpower. So our idea is to use a simple mobile application with simple
interface that would be understandable to all. First we will discuss the OpenCVimplementation
of the image-processing algorithm and report on its challenges. Second we will give an overview
of what was implemented on the android phone. Finally we will summarize the challenges and
possibilities for future work.

Paper Title 13: Machine Learning based Automatic Answer Checker Imitating Human Way of
Answer Checking

Author: Vishwas Tanwar

Published in: IJERT (2021)

Abstract: In today’s scenario, examinations can be classified into two types: objective and
subjective. Competitive exams are usually of the MCQ type, and due to this they need to
be conducted on computer screens as well as evaluated on them. Currently, almost every
competitive exam is conducted in online mode due to the large number of students appearing in
them. But apart from competitive exams, computers cannot be used to carry out subjective exams
like board exams. This brings in the need for Artificial Intelligence in our online exam systems.
If artificial intelligence gets implemented in online exam conduction systems, then it will be a
great help in checking subjective answers as well. Another advantage of this would be the speed
and accuracy with which the results of the exams would be produced. Our proposed system
would be designed in such a way that it will give marks in a similar way as of a human. This
system will hence be of great use to educational institutions.

Paper Title 14: OMR Sheet Evaluation using Image Processing

Author: Mrs. Nayan Ahire, Ms. Vaishnavi Adhangle, Mr. Nikhil Handore

Published in: IJERT (2024)

Abstract: Optical Mark Recognition (OMR) technology has revolutionized the grading and
assessment processes in educational institutions, surveys, and various other fields. OMR sheets,
designed with predefined bubbles or checkboxes, are scanned and processed to extract relevant
data. This paper presents a comprehensive review of the methodologies and advancements in
OMR sheet evaluation using image processing techniques. The review begins with an overview
of traditional OMR systems and their limitations, such as susceptibility to errors due to variations
in scanning quality, paper orientation, and noise interference. Subsequently, it delves into the
evolution of image processing algorithms tailored for OMR sheet evaluation. Several key
components of OMR sheet evaluation are discussed, including image pre-processing techniques

for enhancing readability, segmentation methods for isolating individual marks, feature
extraction algorithms for capturing relevant data, and classification techniques for accurate
identification of marked responses. The review highlights recent trends and innovations in OMR
sheet evaluation, such as the integration of machine learning and deep learning algorithms for
improved accuracy and robustness. Additionally, it addresses challenges such as handling skewed
or distorted images, multi-page OMR sheets, and real-time processing requirements.
Furthermore, the paper discusses benchmark datasets and evaluation metrics commonly used to
assess the performance of OMR systems. It also examines practical considerations such as
scalability, cost-effectiveness, and usability in diverse settings.

Paper Title 15: Evaluation of Optical Mark Recognition (OMR) Sheet Using Computer Vision

Author: G. Himabindu, A. Reeta, A. Srinivas Manikanta, S. Manogna

Published in: IJERT (2023)

Abstract: Optical mark recognition (OMR) is a traditional data input technique and an important
human computer interaction technique which is widely used in examination evaluation. This
technology has been used in checking the answer sheets of university and college examinations,
survey forms, customary inquiry forms, competitive examinations, etc. In today’s technology,
there are lots of applications in our life related to computer-based image processing and
computerized recognition. Aimed at the drawbacks of the current OMR technique, a new
image-based low-cost OMR technique is presented in the paper. The new technique is capable of
processing thin papers and low-printing precision answer sheets. The system key techniques and
relevant implementations, which include the image scan, tilt correction, scanning error
correction, regional deformation correction and mark recognition, are presented. This new
technique is proved robust and effective by the processing results of a large amount of
questionnaires.

Paper Title 16: Efficient System for Evaluation of OMR Sheet

Author: Divya Patel, Shaikhji Zaid

Published in: IJERT (2017)
Abstract: Optical mark recognition is the process of capturing human-marked data from
document forms such as surveys and tests. This technology provides a solution for reading and
processing a large number of forms, such as questionnaires or multiple-choice tests. It is widely
used, especially for grading students in schools. Today we find that lot of competitive exams are
being conducted as entrance exams. These exams consist of MCQs. The students have to fill the
right box or circle for the appropriate answer to the respective questions. So our aim is to
develop Image processing based Optical Mark Recognition sheet scanning system. In this system
OMR answer sheet will be scanned and the scanned image of the answer sheet will be given as
input to the software system. Using Image processing, we will find the answers marked for each
of the questions, total marks and displaying of total marks will be also implemented. The
existing systems available for the same purpose are costly, working on particular scanners only
and dependent on other parameters such as paper and print quality. The proposed system consists
of an ordinary printer, scanner and a computer to perform computation.

Paper Title 17: Cost Effective Optical Mark Recognition Software for Educational Institutions

Author: Vidisha Ware, Nithya Menon, Prajakti Varute, Rachana Dhannawa

Published in: IJERT (2019)

Abstract: Optical Mark Recognition (OMR) is a technology for effectively extracting data from
filled-in fields or bubbles on printed forms. The current systems available for OMR are very
expensive and they detect only a marking scheme. Moreover, the image processing techniques
used for scanning the OMR sheet also consumes a lot of time and is quite complex, as it includes
various restrictions related to the positioning of the sheet. In this paper, a solution to this problem
is proposed, where an OMR system is developed using a scanner or a multifunctional printer as
an input. The quality of the OMR sheet used in this system is low cost and easily available to any
educational institution. The image processing techniques are implemented with the help of
PyCharm IDE that not only helps to detect various marking schemes like bubble shape mark and
tick mark but also verifies the answers in the sheet and displays the total marks obtained by the
student, in a more efficient manner. In order to make the system user-friendly, the GUI of the
system is improved and personalized by integrating an online website with the OMR software
that displays the results of the individual student.

Paper Title 18: Automated Scoring System for Multiple Choice Test with Quick Feedback

Author: M. Alomran and D. Chai

Published in: IJERT (2018)

Abstract: Although automatic scoring systems for multiple choice questions already exist, they
are still restrictive and use specialized and expensive tools. In this paper, an automated scoring
system is proposed to reduce the cost and processing restrictions by taking advantage of image
processing technology. The proposed method enables the user to print the answer sheets and
subsequently scan them by an off-the-shelf scanner. In addition, a personal computer can process
all the scanned sheets automatically. After scoring, the proposed system annotates the sheets
with feedback and sends them back to students via email. Moreover, two novel features are
introduced. The first feature is the handwriting recognition method to recognize the student ID.
We called this the segmented handwritten character recognition. This new method replaces the
conventional student ID recognition commonly known as the Matrix Identifier. The second
feature is our specially designed answer sheet that allows students to easily change their answers
with multiple attempts. As a result, there is no need to erase pencil shading or change the entire
answer sheet if any mistake happened during the test. The proposed system is designed to be
cheap and fast.

Paper Title 19: Automatic OMR Answer Sheet Evaluation using Efficient & Reliable OCR
System

Author: Dhananjay Kulkarni, Ankit Thakur, Jitendra Kshirsagar, Y. Ravi Raju

Published in: IJERT (2017)

Abstract: In today’s modern world of technology, when everything is computerized, the
evaluation exercise of examining and assessing the educational system has become an absolute
necessity. Today, more emphasis is on the objective exam, which is preferred for analyzing
students’ scores since it is simple and requires less time to examine an objective answer sheet
as compared to a subjective answer sheet. This paper proposes a new technique for
generating scores of multiple-choice tests by developing a technique that has

a software-based approach with a computer & scanner, which is simple, efficient & reliable to all
at minimal cost. Its main benefit is that it works with all available scanners; in addition, no
special paper & colour are required for printing the mark sheet. To recognize & allot scores to
the answers marked by the student, the Optical Character Recognition technique is executed here.

Paper Title 20: OMR Automated Grading

Author: Janardhan Singh K., Sanjay Kulkarni, Sanket B Patil, Shashank M, Shashanka

Published in: IJERT (2024)

Abstract: The paper highlights the necessity for a technologically advanced system capable of
efficiently grading multiple-choice question (MCQ) exams through webcam-based evaluation.
MCQ-style assessments have gained widespread use in educational and organizational settings
due to their effectiveness and time-saving advantages. However, manually grading these exams
presents significant challenges. Managing a large number of answer sheets in a timely manner is
labor-intensive and error-prone, potentially leading to scoring discrepancies. Additionally, the
logistical burden of storing and handling physical answer sheets is cumbersome, with risks such
as damage from environmental factors like fire or moisture. While larger institutions may utilize
specialized Optical Mark Recognition (OMR) technology for grading, smaller educational
entities often lack access to such costly equipment. To address these challenges, the paper
proposes an innovative solution: leveraging webcam technology to automate the grading process.
By capturing images of answer sheets and employing sophisticated content-filtering and image
processing algorithms facilitated by the OpenCV library, the system can accurately interpret and
evaluate marked answers. Overall, the proposed system represents a significant advancement in
exam grading methodology, providing a practical and cost-effective solution to the longstanding
challenges associated with manual grading of MCQ-based assessments. By integrating
webcam technology into the grading process, the system aims to enhance efficiency and accuracy
while catering to the needs of various educational and organizational assessments.

Paper Title 21: An Automated Multiple Choice Grader for Paper-Based Exams

Author: Abrar H. Abdul Nabi, Inad A. Aljarrah

Published in: Springer (2016)

Abstract: In this paper an automated multiple choice grader for paper-based exams is
implemented. The system consists of two main parts, a software program and a document feeder
scanner. The exam papers are fed to the scanner, which scans them one by one and sends them as
an input to the software. The software program recognizes the student Identification Number
(ID) and the answers for each exam paper and reports the final results in an Excel sheet. The
system starts by applying an aligning procedure and segmenting the scanned image in order to
extract the form number, student ID, and answer boxes; then a pre-processing step that handles
all irregular cases of input is implemented, where the best possible shape that results in the
highest recognition accuracy is obtained. After getting properly separated characters and numbers,
a feature extraction process is applied on each character/number to calculate its feature vector.
The feature vector is then compared with templates of feature vectors for each of the answer
choices and numbers with their variations, where both characters and numbers are in the English
language. After recognizing all the answers and all ID number digits; the system starts grading
the student paper and comparing student answer with the pre-entered key answers. A recognition
rate of 95.58 % is attained.
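The nearest-template comparison this paper describes, matching a character's feature vector against stored templates, can be sketched as follows. Euclidean distance is assumed as the comparison metric, since the abstract does not name one, and all names and vectors here are illustrative:

```python
import math

def classify(feature_vec, templates):
    """Return the label of the template nearest to feature_vec
    (smallest Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(feature_vec, templates[label]))

# Toy 4-dimensional feature templates for answer choices A-D
templates = {
    "A": [1.0, 0.0, 0.0, 0.0],
    "B": [0.0, 1.0, 0.0, 0.0],
    "C": [0.0, 0.0, 1.0, 0.0],
    "D": [0.0, 0.0, 0.0, 1.0],
}
print(classify([0.1, 0.9, 0.2, 0.0], templates))  # B
```

Real feature vectors would be extracted from the segmented character images, with several template variants per symbol as the paper notes; the sketch only shows the classification step.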

Paper Title 22: Inclusion of Vertical Bar in the OMR Sheet for Image-Based Robust and Fast
OMR Evaluation Technique Using Mobile Phone Camera

Author: Kshitij Rachchh, E.S. Gopi

Published in: Springer (2016)

Abstract: Optical mark recognition (OMR) is a prevalent data gathering technique which is
widely used in educational institutes for examinations consisting of multiple choice questions
(MCQ). The students have to fill the appropriate circle for the respective questions. Current
techniques for evaluating the OMR sheets need dedicated scanner, OMR software, high-quality
paper for OMR sheet and high precision layout of OMR sheet. As these techniques are costly but
very accurate, these techniques are being used to conduct many competitive entrance
examinations in most of the countries. But, small institutes, individual teachers and tutors cannot
use these techniques because of high expense. So, they resort to manually grading the answer
sheets because of the absence of any accurate, robust, fast and low-cost OMR software. In this
paper, we propose the robust technique that uses the low-quality images captured using mobile

phone camera for OMR detection that gives 100% accuracy with less computation time. We
exploit the property that the principal component analysis (PCA) basis identifies the direction of
maximum variance of the data, to design the template (introducing the vertical bar in the OMR
sheet) without compromising the look of OMR answer sheet. Experiments are performed with
140 images to demonstrate the proposed robust technique.

Paper Title 23: Optical Mark Recognition: Advances, Difficulties, and Limitations

Author: Erik Miguel de Elias, Paulo Marcelo Tasinaffo, R. Hirata Jr.

Published in: Springer (2021)

Abstract: Performing mass assessment corrections is a tedious and costly task, especially when
allocating teachers or instructors to do these corrections. Such a task can be facilitated and
accelerated by Optical Mark Recognition (OMR) technology, bringing educational institutions to
look for this solution. OMR initially appeared as a dedicated hardware solution, but software
solutions have emerged with the evolution of technology, gradually replacing dedicated
equipment. However, most solutions lack flexibility, mainly for the end-users. The literature
proposes several methods, often highlighting the issue of cost and accessibility. The present
work reviews 35 papers around the OMR subject and lists the reviewed methods’ main
characteristics, datasets, restrictions, technological challenges, techniques used, processing time,
and accuracy. We map and categorize the restrictions to help the reader improve the current
software OMR technology state. We also call the community’s attention to the lack of a standard
dataset that could be used to compare OMR solutions.

Paper Title 24: Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

Author: Jesus Arias Fisteus, Abelardo Pardo, Norberto Fernández García

Published in: Springer (2012)

Abstract: Although technology for automatic grading of multiple choice exams has existed for
several decades, it is not yet as widely available or affordable as it should be. The main reasons

preventing this adoption are the cost and the complexity of the setup procedures. In this paper,
Eyegrade, a system for automatic grading of multiple choice exams, is presented. While most
current solutions are based on expensive scanners, Eyegrade offers a truly low-cost solution requiring only a
regular off-the-shelf webcam. Additionally, Eyegrade performs both mark recognition as well as
optical character recognition of handwritten student identification numbers, which avoids the use
of bubbles in the answer sheet. When compared with similar webcam-based systems, the user
interface in Eyegrade has been designed to provide a more efficient and error-free data
collection procedure. The tool has been validated with a set of experiments that show the ease of
use (both setup and operation), the reduction in grading time, and an increase in the reliability of
the results when compared with conventional, more expensive systems.

Paper Title 25: An Efficient, Cost-Effective and User-Friendly Approach for MCQs

Author: Ismail Khan, Sami ur Rahman, Fakhre Alam

Published in: Springer (2018)

Abstract: The ongoing era is called the technology era, and everyone wants an automatic
system, meaning that the work is done in just one click. The inclusion of a human being in any
work may become problematic, fraudulent and unsophisticated. Keeping in mind the need of the
current time, we decided to develop a system capable of automatically grading Multiple Choice
Questions (MCQs) papers. Manually grading/marking a paper is a time-consuming, boring and
complicated task. Moreover, fraud may be committed by a fraudulent examiner during paper
marking. In this paper, we have proposed a novel approach for automatic paper grading. The
proposed approach is user-friendly and efficient, and will mark a candidate’s answers
automatically and return them within a very short period. The proposed system for MCQs marking
consists of a camera and computer, and can accept any type of marking on bubbles.

Paper Title 26: Highly Optimized Implementation of OpenCV for the Cell Broadband Engine

Author: Hiroki Sugano, Ryusuke Miyamoto

Published in: Elsevier (2010)
Abstract: Recently, real-time processing of image recognition is required for embedded
applications such as automotive applications, robotics, entertainment, and so on. To realize real-
time processing of image recognition on such systems we need optimized libraries for embedded
processors. OpenCV is one of the most widely used libraries for computer vision applications
and has many functions optimized for Intel processors, but no function is optimized for
embedded processors. We present a parallel implementation of OpenCV library on the Cell
Broadband Engine (Cell), which is one of the most widely used high performance embedded
processors. Experimental results show that most of the functions optimized for the Cell processor
are faster than functions optimized for Intel Core 2 Duo E6850 3.00 GHz.

Paper Title 27: A Review of Yolo Algorithm Developments

Author: Peiyuan Jiang, Daji Ergu, Fangyao Liu, Ying Cai, Bo Ma

Published in: Elsevier (2022)

Abstract: Object detection techniques are the foundation for the artificial intelligence field. This
research paper gives a brief overview of the You Only Look Once (YOLO) algorithm and its
subsequent advanced versions. Through the analysis, we reach many remarks and insightful
results. The results show the differences and similarities among the YOLO versions and between
YOLO and Convolutional Neural Networks (CNNs). The central insight is that YOLO algorithm
improvement is still ongoing. This article briefly describes the development process of the
YOLO algorithm, summarizes the methods of target recognition and feature selection, and
provides literature support for the targeted picture news and feature extraction in the financial
and other fields. Besides, this paper contributes a lot to YOLO and other object detection
literature.

Paper Title 28: Automatic evaluation of open-ended questions for online learning. A systematic mapping

Author: Emiliano del Gobbo, Alfonso Guarino, Barbara Cafarelli, Luca Grilli, Pierpaolo Limone

Published in: Elsevier (2023)
Abstract: The assessment of students’ performances in Higher Education is one of the essential
components of teaching activities. Open-ended tasks allow a more in-depth assessment of
students’ learning levels, but their evaluation and grading are time-consuming and prone to
subjective bias. Since the Covid-19 pandemic, most traditional Higher Education courses
converted to online courses; automatic grading and feedback tools and methods (AGFTM) have
become critical components of online learning systems, especially with regards to short answers
and essays assessment. This work frames the recent advancement in AGFTM through a
systematic mapping of the research field and a literature review. This analysis gives an overview
of the trends, specific goals, methods, quality of proposals, challenges and limitations in this
research area. The results indicate that it is a growing research area, with a large set of
techniques involved, but still not mature, where practical implementations have yet to come.

Paper Title 29:Reduced Grading in Assessment: A Scoping Review

Author: Dan-Anders Normann, Lise Vikan Sandvik, Henning Fjørtoft

Published in: Elsevier (2023)

Abstract: Increasingly, educators are adopting reduced grading practices to enhance the desired
or lessen the undesired aspects of assessment. This review traces the scholarly origins of reduced
grading and maps research on the phenomenon. Using citation analysis and qualitative content
analysis and drawing on a theory of action perspective, we explore how reduced grading is
conceptualized in the literature. The citation analysis uncovered two clusters of publications: one
investigating primary and secondary education and the other covering higher education. The
content analysis revealed four categories: rationales, contextual conditions, implementation, and
consequences of reduced grading. Supported by a variety of rationales, reduced grading has been
conceptualized in various ways, and the research field is divided into two sub-domains. We
discuss the implications of these results for practitioners and researchers.

Paper Title 30: Enhancement of Handwritten Text Recognition Using AI-based Hybrid Approach

Author: Supriya Mahadevkar, Shruti Patil, Ketan Kotecha

Published in: Elsevier (2024)

Abstract: Handwritten text recognition (HTR) within computer vision and image processing
stands as a prominent and challenging research domain, holding significant implications for
diverse applications. Among these, it finds usefulness in reading bank checks, prescriptions, and
deciphering characters on various forms. Optical character recognition (OCR) technology,
specifically tailored for handwritten documents, plays a pivotal role in translating characters
from a range of file formats, encompassing both word and image documents. Challenges in HTR
encompass intricate layout designs, varied handwriting styles, limited datasets, and less accuracy
achieved. Recent advancements in Deep Learning and Machine Learning algorithms, coupled
with the vast repositories of unprocessed data, have propelled researchers to achieve remarkable
progress in HTR. This paper aims to address the challenges in handwritten text recognition by
proposing a hybrid approach. The primary objective is to enhance the accuracy of recognizing
handwritten text from images. Through the integration of Convolutional Neural Networks (CNN)
and Bidirectional Long Short-Term Memory (BiLSTM) with a Connectionist Temporal
Classification (CTC) decoder, the results indicate substantial improvement. The proposed hybrid
model achieved an impressive 98.50% and 98.80% accuracy on the IAM and RIMES datasets,
respectively. This underscores the potential and efficacy of the consecutive use of these
advanced neural network architectures in enhancing handwritten text recognition accuracy.

2.3. References

[1] Hossam Magdy Balaha and Mahmoud M. Saafan, “Automatic Exam Correction Framework
(AECF) for the MCQs, Essays, and Equations Matching”, IEEE, 2021.

[2] Tien Dzung Nguyen, Quyet Hoang Manh, Phuong Bui Minh, Long Nguyen Thanh, Thang Manh Hoang, “Efficient and Reliable Camera-Based Multiple-Choice Test Grading System”, IEEE, 2011.
IEEE, 2011.

[3] Saikat Mahmud, Kawshik Biswas, Api Alam, Rifat Al Mamun Rudro, Nusrat Jahan Anannya, Israt Jahan Mouri, Kamruddin Nur, “Automatic Multiple Choice Question Evaluation Using Tesseract OCR and YOLOv8”, IEEE, 2024.

[4] Sarjak Maniar, Kumkum Saxena, Jai Parmani, Mihika Bodke, “Generation and grading
of arduous MCQs using NLP and OMR detection using OpenCV”, IEEE, 2021.

[5] Henry E. Ascencio, Carlos F. Peña, Kevin R. Vásquez, Manuel Cardona, Sebastián Gutiérrez,
“Automatic Multiple Choice Test Grader using Computer Vision”, IEEE, 2021.

[6] Alexander Sayapin, Applied Mathematics Chair, SibSAU, Krasnoyarsk, Russia, “Multiple
Choice Assessments: Evaluation of Quality”, IEEE, 2013.

[7] Anusha Hegde, Nayanika Ghosh, Viraj Kumar, “Multiple Choice Questions with
Justifications”, IEEE, 2014.

[8] Noorminshah Iahad, Emmanouil Kalaitzakis, Georgios A. Dafoulas, Linda A. Macaulay, “Evaluation of Online Assessment: The Role of Feedback in Learner-Centered eLearning”, IEEE, 2014.

[9] Aditya R. Mitra, Dion Krisnadi, Steven Albert, Arnold Aribowo, “Multiple-column Format
for Reducing Task Complexity of Recognizing Handwritten Answers in Multiple-choice
Question”, IEEE, 2018.

[10] G.M. Rasiqul Islam Rasiq, Abdullah Al Sefat, M.M. Fahim Hasnain, “Mobile-Based MCQ
Answer Sheet Analysis and Evaluation Application”, IEEE, 2019.

[11] Nirali V Patel, Ghanshyam I Prajapati, “Various Techniques for Assessment of OMR Sheets
Through Ordinary 2D Scanner: A Survey”, IJERT, 2015.

[12] Nithin T., Md Nasim T., Raj Shekhar, Omendra Singh Gautam, Yuraj Gholap, “OMR Auto Grading System”, IJERT, 2015.

[13] Vishwas Tanwar, “Machine Learning Based Automatic Answer Checker Imitating Human
Way of Answer Checking”, IJERT, 2021.

[14] Nayan Ahire, Vaishnavi Adhangle, Nikhil Handore, “OMR Sheet Evaluation
Using Image Processing”, IJERT, 2024.

[15] Himabindu, A. Reeta, A. Srinivas Manikanta, S. Manogna, “Evaluation of Optical Mark
Recognition (OMR) Sheet Using Computer Vision”, IJERT, 2023.

[16] R. Kumar, A. Rajasekaran, “Automatic OMR Answer Sheet Evaluation using Efficient &
Reliable OCR System”, IJERT, 2017.

[17] Vidisha Ware, Nithya Menon, Prajakti Varute, Rachana Dhannawat, “Cost effective optical
mark recognition software for educational institutions”, IJERT, 2019.

[18] Vidisha Ware, Nithya Menon, Prajakti Varute, Rachana Dhannawat, “Automated Scoring
System for Multiple Choice Test with Quick Feedback”, IJERT, 2018.

[19] Dhananjay Kulkarni, Ankit Thakur, Jitendra Kshirsagar, Y. Ravi Raju, “Automatic OMR
Answer Sheet Evaluation Using Efficient & Reliable OCR System”, IJERT, 2017.

[20] Janardhan Singh K., Sanjay Kulkarni, Sanket B Patil, Shashank M Shashanka, “OMR Automated Grading”, IJERT, 2024.

[21] Abrar H. Abdul Nabi, Inad A. Aljarrah, “An Automated Multiple Choice Grader for Paper-
Based Exams”, Springer, 20.

[22] Kshitij Rachchh, E.S. Gopi, “Inclusion of Vertical Bar in the OMR Sheet for Image-Based
Robust and Fast OMR Evaluation Technique Using Mobile Phone Camera”, Springer, 20.

[23] Erik Miguel de Elias, Paulo Marcelo Tasinaffo, R. Hirata Jr, “Optical Mark Recognition:
Advances, Difficulties, and Limitations”, Springer, 20.

[24] Jesus Arias Fisteus, Abelardo Pardo, Norberto Fernández García, “Grading Multiple Choice
Exams with Low-Cost and Portable Computer-Vision Techniques”, Springer, 20.

[25] Ismail Khan, Sami ur Rahman, Fakhre Alam, “An Efficient, Cost-Effective and User-
Friendly Approach for MCQs Treatment”, Springer, 20.

[26] Hiroki Sugano, Ryusuke Miyamoto, “Highly Optimized Implementation of OpenCV for the
Cell Broadband Engine”, Elsevier, 20.

[27] Peiyuan Jiang, Daji Ergu, Fangyao Liu, Ying Cai, Bo Ma, “A Review of Yolo Algorithm
Developments”, Elsevier, 20.

[28] Emiliano del Gobbo, Alfonso Guarino, Barbara Cafarelli, Luca Grilli, Pierpaolo Limone,
“Automatic evaluation of open-ended questions for online learning. A systematic mapping”,
Elsevier, 20.

[29] Dan-Anders Normann, Lise Vikan Sandvik, Henning Fjørtoft, “Reduced Grading in
Assessment: A Scoping Review”, Elsevier, 20.

[30] Supriya Mahadevkar, Shruti Patil, Ketan Kotecha, “Enhancement of Handwritten Text Recognition Using AI-based Hybrid Approach”, Elsevier, 2024.

2.4. Conclusion

In conclusion, the development of an automated OMR sheet evaluation system will significantly benefit students by streamlining the assessment process and providing timely feedback. We completed our survey by reading all the research papers and have begun planning our project.

Chapter 3:Proposed Methodology

Content:

3.1. System Design

3.1.1. Introduction

3.1.2. Block Diagram

3.1.3. System architecture diagram

3.1.4. Data Flow Diagram

3.1.5. Software Design Approach

3.2. Time Line Chart

3.3. Gantt Chart

3.4. Conclusion

3.1. System Design

3.1.1. Introduction

System design is the process of organizing the parts of a system to work together effectively. It
includes defining the overall structure, dividing the system into modules, and specifying how
each part interacts with others. The design process also involves planning the flow of data
through the system, ensuring smooth operation and efficient data handling. A well-thought-out
system design considers the needs of its users and the purpose of the system, aiming to create a
reliable and organized setup. This approach helps in building systems that are easy to manage,
scalable, and effective at solving the intended problems. By mapping out components,
connections, and data flows, system design allows each part to work together to achieve the
system’s goal. In this project, we introduce Smart Moderator, a tool that automates the grading of
multiple-choice question (MCQ) exams using OMR sheets. Through image processing, it
provides fast, accurate assessments, reducing manual effort.
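The grading idea behind Smart Moderator can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the final implementation: it assumes per-bubble fill ratios have already been extracted from the scanned sheet (for example, by OpenCV thresholding), and the names `FILL_THRESHOLD`, `detect_answer`, and `grade_sheet` are hypothetical.

```python
# Hypothetical grading sketch: each question has one fill ratio per option
# (fraction of dark pixels inside the bubble, as produced by an earlier
# image-processing step). The most-filled bubble above a threshold is
# taken as the student's answer.
FILL_THRESHOLD = 0.5  # assumed cutoff for a "marked" bubble

def detect_answer(fill_ratios):
    """Return the index of the marked option, or None if no bubble is filled."""
    best = max(range(len(fill_ratios)), key=fill_ratios.__getitem__)
    return best if fill_ratios[best] >= FILL_THRESHOLD else None

def grade_sheet(sheet_ratios, answer_key):
    """Count correct answers given per-question fill ratios and the key."""
    return sum(
        detect_answer(ratios) == key
        for ratios, key in zip(sheet_ratios, answer_key)
    )

# Two questions, four options each (A=0 .. D=3)
sheet = [
    [0.05, 0.92, 0.10, 0.08],  # option B clearly marked
    [0.10, 0.15, 0.12, 0.20],  # no clear mark -> treated as unanswered
]
print(grade_sheet(sheet, answer_key=[1, 3]))  # 1
```

A production version would also need to handle multiple marks per question and tune the threshold against real scans; this sketch only shows the core decision logic.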

In this chapter, we create the block diagram, system architecture diagram, and data flow diagrams, and plan the software design approach.

3.1.2. Block Diagram

Fig 3.1.2(a). Block Diagram for Smart Moderator

3.1.3. System architecture diagram

Fig 3.1.3(a). System architecture diagram for Smart Moderator

3.1.4. Data Flow Diagram

Fig 3.1.4.(a) Level 0 Data Flow Diagram for Smart Moderator

Fig 3.1.4.(b) Level 1 Data Flow Diagram for Smart Moderator

Fig 3.1.4.(c) Level 2 Data Flow Diagram for Smart Moderator

3.1.5. Software Design Approach

There are several models and approaches available for project development, such as the V-
Model, Incremental Model, Spiral Model, Waterfall Model, RAD Model, and various Agile
frameworks like SCRUM, Crystal, and DSDM. For our project, we have chosen the Agile
model.

Why use the Agile Model?

The Agile Model is a flexible and iterative approach to software development. It focuses on
collaboration, customer feedback, and rapid delivery of small, workable pieces of the project. In
Agile, development is divided into short cycles called sprints, where teams work on specific
features or tasks. Unlike traditional models, Agile allows for changes and adjustments
throughout the process, making it easy to adapt to new requirements. This model emphasizes
continuous improvement and regular testing, ensuring that the final product meets user needs
effectively. Agile is widely used in modern software development due to its ability to produce
high-quality results quickly.

 Requirement: Our main goal is to create a system that quickly grades MCQ papers using
OMR sheets.
 Design: The design will focus on making a user-friendly interface and a strong system for
handling data.
 Development: In this phase, we’ll build the project step by step, adding features and
refining them regularly.
 Testing: Testing happens during each sprint, so we can find and fix problems right away.
 Deployment: After testing, we will release the features to users gradually.
 Review: After deployment, we will collect user feedback to identify areas for
improvement and make necessary adjustments to enhance the system's performance and
user experience.

Fig 3.1.5(a). Agile Model

3.2. Time Line Chart

Fig 3.2. Time Line Chart

3.3. Gantt Chart

Fig 3.3. Gantt Chart

Task                                    Start on day   Duration (days)

Problem Identification                  0              15
Industrial Survey & Literature Review   16             21
Project Proposal                        0              40
Project Report                          0              45
Presentation                            45             15
Project Logbook                         0              112
Project Portfolio                       50             112
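Each task's finish day follows directly from the schedule as start + duration. A small sketch, assuming "Duration" is counted in days from each task's start day:

```python
# Compute each task's finish day from the Gantt table (finish = start + duration).
tasks = [
    ("Problem Identification", 0, 15),
    ("Industrial Survey & Literature Review", 16, 21),
    ("Project Proposal", 0, 40),
    ("Project Report", 0, 45),
    ("Presentation", 45, 15),
    ("Project Logbook", 0, 112),
    ("Project Portfolio", 50, 112),
]

finish = {name: start + duration for name, start, duration in tasks}
overall_end = max(finish.values())  # day on which the last task completes
print(finish["Presentation"], overall_end)  # 60 162
```

Such a check makes it easy to spot which task drives the overall project end date when the schedule changes.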

3.4. Conclusion

Hence, the Agile model was selected, and the initial design of our project has been completed.
We have successfully conducted research and established the fundamentals of our system. We
will soon begin the implementation phase, focusing on developing and refining features based on
user feedback.

Project
Logbook

******Log Book******

Week No:

Activities Planned:

Action Taken on planned Activities / Corrective measures adopted:

Reason for Delay if any:

Remark and Signature of the Guide:

Project
Portfolio

Portfolio for Self Directed Learning for Major Project
Work

Name of Student: Sayed Maria Imran

Semester: V

Programme/Branch: Computer Engineering

Roll No: 220445

Title of the Project: Smart Moderator

Name and Designation of Project Guide: Mrs Zaibunnisa L.H. Malik, HOD, Department of Computer Engineering, M.H. Saboo Siddik Polytechnic

Name of Institute: M.H. Saboo Siddik Polytechnic

After Finalization of Project Topic & Formation of Project Team

1. How many alternatives we thought before finalizing the project topic?

a. We considered three main alternatives before settling on the Smart Moderator project.
These included developing an online examination platform, an automated grading system
for written exams, and a learning management system for career guidance.

2. Did we consider all the technical fields related to the branch of our diploma programme?

a. Yes, we evaluated various fields such as software development, artificial intelligence, educational technology, and data analytics to ensure our project was aligned with our diploma's technical focus.

3. Why we found the present project topic as most appropriate?

a. The Smart Moderator project was deemed most appropriate due to its potential to address
significant challenges in the current examination process, particularly in improving
grading speed and accuracy while providing useful feedback to students.

4. Whether all the group members agreed on the present project topic? If not, what were the
reasons for their disagreement?

a. Initially, not all group members were on board. Some preferred the online examination
platform because they believed it would have broader applicability. However, after
discussions highlighting the specific problems our project would solve, we reached a
consensus.

5. Whether the procedure followed in assessing alternatives and finalizing the project topic
was correct? If not, then discuss the reasons.

a. The procedure was generally effective. We conducted brainstorming sessions and group
discussions, but in retrospect, a more structured decision-making framework (like a
SWOT analysis) could have helped us evaluate the alternatives more thoroughly.

6. What were the limitations in other alternatives of project topic?

a. The online examination platform lacked the focus on grading efficiency, while the
automated grading for written exams was limited in scalability. The learning management
system, although relevant, felt too broad and less targeted compared to our chosen topic.

7. How we formed our team?


a. Our team was formed based on mutual interests in OMR technology and educational
systems. We gathered classmates with complementary skills in software development,
project management, and research.

8. Whether we faced any problem in forming the team? If yes, then what was the problem and
how was it resolved?

a. Yes, we initially struggled to find members with the right skills. This was resolved by
holding discussions to clarify each member's strengths and redistributing tasks to align
skills with project needs.

9. Am I the leader of our project team? If yes, then why was I chosen? If not, why could I not
become the project team leader?

a. I am not the project leader for our team. Syed Afifa was selected as the leader because of
her strong organizational skills, experience, and ability to bring everyone together to
work effectively. Her clear communication and attention to detail make her a natural fit
for leading the team, and I am happy to support her in this role.

10. Do I feel that the present team leader is the best choice available in the group? If yes, then
why? If not, then why?

a. Yes, I believe Syed Afifa is the best choice for our team leader. She has demonstrated
excellent leadership abilities by actively listening to everyone’s ideas and making
decisions that balance our project goals with each team member's strengths. Her approach
fosters a positive, collaborative environment, which makes working on this project both
enjoyable and productive.

11. According to me, who should be the leader of the team and why?

a. According to me, Syed Afifa is the ideal leader for our team. Her ability to delegate tasks
efficiently and keep everyone motivated ensures that we stay on track and produce high-
quality work. Her leadership style empowers each of us, making her the best fit to lead
our project.

12. Can we achieve the targets set in the project work within the time and cost limits?
a. I believe we can achieve the project targets within the time and budget constraints, as we
have a well-structured plan and clear timelines for each phase of the project.

13. What are my good/bad sharable experiences while working with my team that provoked me
to think? What did I learn from these experiences?
a. A good experience was the brainstorming session where diverse ideas led to innovative
solutions. A bad experience was when miscommunication about deadlines caused
frustration. I learned the importance of clear communication and regular check-ins to
keep everyone aligned.

14. Any other reflection which I would like to write about the formation of the team and
finalization of the project title, if any?

a. Overall, the process of forming our team and finalizing the project title was an enriching
experience. It highlighted the importance of collaboration, adaptability, and open
communication. I look forward to seeing how our diverse skills contribute to the success
of the Smart Moderator project.

After Finalization of Project Proposal

1. Which activities are having maximum risk and uncertainty in our project plan?

a. The activities with maximum risk and uncertainty include the integration of OMR
technology and the development of the grading algorithm. These areas are prone to
technical challenges and require thorough testing to ensure accuracy and reliability.

2. What are the most important activities in our project plan?

a. The most important activities include designing the OMR sheet, developing the grading
algorithm, testing the system for accuracy, and ensuring user feedback mechanisms are in
place. These activities are crucial for the project’s success and overall functionality.

3. Is work distribution equal among project group members? If not, what are the reasons?
How can we improve work distribution?

a. Work distribution is not entirely equal, as some members have specific expertise that
requires them to handle more complex tasks. To improve distribution, we can assess each
member's strengths and redistribute tasks accordingly, ensuring everyone is engaged and
contributing to their capabilities.

4. Is it possible to complete the project in the given time? If not, then what are the reasons for
it? How can we ensure that the project is completed within time?

a. Yes, I believe we can complete the project within the given time frame if we adhere to
our schedule and maintain regular communication. However, we need to remain flexible
and address any unforeseen challenges promptly to stay on track.

5. What extra care and precaution should be taken in executing the activities of high risk and
uncertainty? If possible, how can such risks and uncertainties be reduced?

a. For high-risk activities, we should conduct thorough testing and quality assurance checks.
To reduce risks, we can implement a phased approach, allowing us to identify and
address issues early in the development process.

6. Can we reduce the total cost associated with the project? If yes, then describe the ways.

a. Yes, we can reduce costs by using open-source tools and libraries for OMR technology
instead of purchasing expensive software. Additionally, we can optimize resource
allocation to minimize wastage and ensure efficient use of funds.

7. For which activities of our project plan is the arrangement of resources not easy and
convenient?
a. Arranging resources for the testing phase, particularly for obtaining real exam papers and
feedback from students, may pose challenges. We might need to collaborate with
educational institutions to facilitate this process.

8. Did we make enough provisions for extra time/expenditure etc., to carry out such activities?
a. We have made some provisions for extra time, but we need to revisit our budget to ensure
we have adequate funds for unforeseen expenses, particularly in the testing phase.

9. Did we make enough provisions for time delays in our project activity? In which activities
are there more chances of delay?

a. We have provisions for time delays, especially for the integration and testing phases,
where challenges are more likely to arise. Regular progress reviews will help us address
potential delays proactively.

10. In our project schedule, which are the days of more expenditure? What provisions have we
made for the availability and management of cash?
a. The days with higher expenditures include those planned for purchasing resources and
conducting testing. We have set aside a contingency fund to manage cash flow and
ensure resources are available when needed.
11. Any other reflection which I would like to write about project planning?
k. Overall, the project planning phase has been insightful, emphasizing the importance of
clear communication and flexibility. I look forward to collaborating with the team to
adapt our plans as needed and achieve our project goals.

Portfolio for Self Directed Learning for Major Project
Work

Name of Student: Shaikh Samiya Nishat

Semester: V

Programme/Branch: Computer Engineering

Roll No: 220455

Title of the Project: Smart Moderator

Name and Designation of Project Guide: Mrs Zaibunnisa L.H. Malik, HOD, Department of Computer Engineering, M.H. Saboo Siddik Polytechnic

Name of Institute: M.H. Saboo Siddik Polytechnic

After Finalization of Project Topic & Formation of Project Team

1. How many alternatives we thought before finalizing the project topic?

a. We considered three main alternatives before settling on the Smart Moderator project.
These included developing an online examination platform, an automated grading system
for written exams, and a learning management system for career guidance.

2. Did we consider all the technical fields related to the branch of our diploma programme?

a. Yes, we evaluated various fields such as software development, artificial intelligence, educational technology, and data analytics to ensure our project was aligned with our diploma's technical focus.

3. Why we found the present project topic as most appropriate?

a. The Smart Moderator project was deemed most appropriate due to its potential to address
significant challenges in the current examination process, particularly in improving
grading speed and accuracy while providing useful feedback to students.

4. Whether all the group members agreed on the present project topic? If not, what were the
reasons for their disagreement?

a. Initially, not all group members were on board. Some preferred the online examination
platform because they believed it would have broader applicability. However, after
discussions highlighting the specific problems our project would solve, we reached a
consensus.

5. Whether the procedure followed in assessing alternatives and finalizing the project topic
was correct? If not, then discuss the reasons.

a. The procedure was generally effective. We conducted brainstorming sessions and group
discussions, but in retrospect, a more structured decision-making framework (like a
SWOT analysis) could have helped us evaluate the alternatives more thoroughly.

6. What were the limitations in other alternatives of project topic?

a. The online examination platform lacked the focus on grading efficiency, while the
automated grading for written exams was limited in scalability. The learning management
system, although relevant, felt too broad and less targeted compared to our chosen topic.

7. How we formed our team?


a. Our team was formed based on mutual interests in OMR technology and educational
systems. We gathered classmates with complementary skills in software development,
project management, and research.

8. Whether we faced any problem in forming the team? If yes, then what was the problem and
how was it resolved?

a. Yes, we initially struggled to find members with the right skills. This was resolved by
holding discussions to clarify each member's strengths and redistributing tasks to align
skills with project needs.

9. Am I the leader of our project team? If yes, then why was I chosen? If not, why could I not
become the project team leader?

a. I am not the project leader for our team. Syed Afifa was selected as the leader because of
her strong organizational skills, experience, and ability to bring everyone together to
work effectively. Her clear communication and attention to detail make her a natural fit
for leading the team, and I am happy to support her in this role.

10. Do I feel that the present team leader is the best choice available in the group? If yes, then
why? If not, then why?

a. Yes, I believe Syed Afifa is the best choice for our team leader. She has demonstrated
excellent leadership abilities by actively listening to everyone’s ideas and making
decisions that balance our project goals with each team member's strengths. Her approach
fosters a positive, collaborative environment, which makes working on this project both
enjoyable and productive.

11. According to me, who should be the leader of the team and why?

a. According to me, Syed Afifa is the ideal leader for our team. Her ability to delegate tasks
efficiently and keep everyone motivated ensures that we stay on track and produce high-
quality work. Her leadership style empowers each of us, making her the best fit to lead
our project.

12. Can we achieve the targets set in the project work within the time and cost limits?
a. I believe we can achieve the project targets within the time and budget constraints, as we
have a well-structured plan and clear timelines for each phase of the project.

13. What are my good/bad sharable experiences while working with my team that provoked me
to think? What did I learn from these experiences?
a. A good experience was the brainstorming session where diverse ideas led to innovative
solutions. A bad experience was when miscommunication about deadlines caused
frustration. I learned the importance of clear communication and regular check-ins to
keep everyone aligned.

14. Any other reflection which I would like to write about the formation of the team and
finalization of the project title, if any?

a. Overall, the process of forming our team and finalizing the project title was an enriching
experience. It highlighted the importance of collaboration, adaptability, and open
communication. I look forward to seeing how our diverse skills contribute to the success
of the Smart Moderator project.

After Finalization of Project Proposal

1. Which activities are having maximum risk and uncertainty in our project plan?

a. The activities with maximum risk and uncertainty include the integration of OMR
technology and the development of the grading algorithm. These areas are prone to
technical challenges and require thorough testing to ensure accuracy and reliability.

2. What are the most important activities in our project plan?

a. The most important activities include designing the OMR sheet, developing the grading
algorithm, testing the system for accuracy, and ensuring user feedback mechanisms are in
place. These activities are crucial for the project’s success and overall functionality.

3. Is work distribution equal among project group members? If not, what are the reasons?
How can we improve work distribution?

a. Work distribution is not entirely equal, as some members have specific expertise that
requires them to handle more complex tasks. To improve distribution, we can assess each
member's strengths and redistribute tasks accordingly, ensuring everyone is engaged and
contributing to their capabilities.

4. Is it possible to complete the project in the given time? If not, then what are the reasons for
it? How can we ensure that the project is completed within time?

a. Yes, I believe we can complete the project within the given time frame if we adhere to
our schedule and maintain regular communication. However, we need to remain flexible
and address any unforeseen challenges promptly to stay on track.

5. What extra care and precaution should be taken in executing the activities of high risk and
uncertainty? If possible, how can such risks and uncertainties be reduced?

a. For high-risk activities, we should conduct thorough testing and quality assurance checks.
To reduce risks, we can implement a phased approach, allowing us to identify and
address issues early in the development process.

6. Can we reduce the total cost associated with the project? If yes, then describe the ways.

a. Yes, we can reduce costs by using open-source tools and libraries for OMR technology
instead of purchasing expensive software. Additionally, we can optimize resource
allocation to minimize wastage and ensure efficient use of funds.

7. For which activities of our project plan is the arrangement of resources not easy and
convenient?
a. Arranging resources for the testing phase, particularly for obtaining real exam papers and
feedback from students, may pose challenges. We might need to collaborate with
educational institutions to facilitate this process.

8. Did we make enough provisions for extra time/expenditure etc., to carry out such activities?
a. We have made some provisions for extra time, but we need to revisit our budget to ensure
we have adequate funds for unforeseen expenses, particularly in the testing phase.

9. Did we make enough provisions for time delays in our project activity? In which activities
are there more chances of delay?

a. We have provisions for time delays, especially for the integration and testing phases,
where challenges are more likely to arise. Regular progress reviews will help us address
potential delays proactively.

10. In our project schedule, which are the days of more expenditure? What provisions have we
made for the availability and management of cash?
a. The days with higher expenditures include those planned for purchasing resources and
conducting testing. We have set aside a contingency fund to manage cash flow and
ensure resources are available when needed.
11. Any other reflection which I would like to write about project planning?
a. Overall, the project planning phase has been insightful, emphasizing the importance of
clear communication and flexibility. I look forward to collaborating with the team to
adapt our plans as needed and achieve our project goals.

Portfolio for Self Directed Learning for Major Project
Work

Name of Student: Syed Afifa Fareeduddin

Semester: V

Programme/Branch: Computer Engineering

Roll No: 220460

Title of the Project: Smart Moderator

Name and Designation of Project Guide: Mrs Zaibunnisa L.H. Malik, HOD, Department of Computer Engineering, M.H. Saboo Siddik Polytechnic

Name of Institute: M.H. Saboo Siddik Polytechnic

After Finalization of Project Topic & Formation of Project Team

1. How many alternatives we thought before finalizing the project topic?

a. We considered three main alternatives before settling on the Smart Moderator project.
These included developing an online examination platform, an automated grading system
for written exams, and a learning management system for career guidance.

2. Did we consider all the technical fields related to the branch of our diploma programme?

a. Yes, we evaluated various fields such as software development, artificial intelligence, educational technology, and data analytics to ensure our project was aligned with our diploma's technical focus.

3. Why we found the present project topic as most appropriate?

a. The Smart Moderator project was deemed most appropriate due to its potential to address
significant challenges in the current examination process, particularly in improving
grading speed and accuracy while providing useful feedback to students.

4. Whether all the group members agreed on the present project topic? If not, what were the
reasons for their disagreement?

a. Initially, not all group members were on board. Some preferred the online examination
platform because they believed it would have broader applicability. However, after
discussions highlighting the specific problems our project would solve, we reached a
consensus.

5. Whether the procedure followed in assessing alternatives and finalizing the project topic
was correct? If not, then discuss the reasons.

a. The procedure was generally effective. We conducted brainstorming sessions and group
discussions, but in retrospect, a more structured decision-making framework (like a
SWOT analysis) could have helped us evaluate the alternatives more thoroughly.

6. What were the limitations in other alternatives of project topic?

a. The online examination platform lacked the focus on grading efficiency, while the
automated grading for written exams was limited in scalability. The learning management
system, although relevant, felt too broad and less targeted compared to our chosen topic.

7. How we formed our team?


a. Our team was formed based on mutual interests in OMR technology and educational
systems. We gathered classmates with complementary skills in software development,
project management, and research.

8. Whether we faced any problem in forming the team? If yes, then what was the problem and
how was it resolved?

a. Yes, we initially struggled to find members with the right skills. This was resolved by
holding discussions to clarify each member's strengths and redistributing tasks to align
skills with project needs.

9. Am I the leader of our project team? If yes, then why was I chosen? If not, why could I not
become the project team leader?

a. Yes, I am the leader of our project team. I was chosen due to my strong organizational
skills, ability to motivate team members, and prior experience in managing group
projects. The team felt confident in my vision for the Smart Moderator project, believing
that my approach would guide us effectively through the development process.

10. Do I feel that the present team leader is the best choice available in the group? If yes, then
why? If not, then why?

a. Yes, I believe I am the best choice for the team leader because of my strong
organizational skills, ability to motivate the team, and relevant experience in managing
similar projects. I strive to create an environment where everyone feels empowered to
contribute.

11. According to me, who should be the leader of the team and why?

a. I believe I am the right leader for the team because of my proactive nature and
willingness to tackle challenges. My vision for the project aligns with the team’s goals,
and I am committed to ensuring that we achieve success together.

12. Can we achieve the targets set in the project work within the time and cost limits?

a. I believe we can achieve the project targets within the time and budget constraints, as we have a well-structured plan and clear timelines for each phase of the project.

13. What good and bad experiences while working with my team prompted me to reflect? What did I learn from these experiences?

a. A good experience was the brainstorming session where diverse ideas led to innovative solutions. A bad experience was when miscommunication about deadlines caused frustration. I learned the importance of clear communication and regular check-ins to keep everyone aligned.

14. Any other reflection which I would like to write about the formation of the team and
finalization of the project title, if any?

a. Overall, the process of forming our team and finalizing the project title was an enriching
experience. It highlighted the importance of collaboration, adaptability, and open
communication. I look forward to seeing how our diverse skills contribute to the success
of the Smart Moderator project.

After Finalization of Project Proposal

1. Which activities in our project plan carry the most risk and uncertainty?

a. The activities with maximum risk and uncertainty include the integration of OMR
technology and the development of the grading algorithm. These areas are prone to
technical challenges and require thorough testing to ensure accuracy and reliability.

2. What are the most important activities in our project plan?

a. The most important activities include designing the OMR sheet, developing the grading
algorithm, testing the system for accuracy, and ensuring user feedback mechanisms are in
place. These activities are crucial for the project’s success and overall functionality.

3. Is work distribution equal among project group members? If not, what are the reasons?
How can we improve work distribution?

a. Work distribution is not entirely equal, as some members have specific expertise that
requires them to handle more complex tasks. To improve distribution, we can assess each
member's strengths and redistribute tasks accordingly, ensuring everyone is engaged and
contributing to their capabilities.

4. Is it possible to complete the project in the given time? If not, what are the reasons? How can we ensure the project is completed on time?

a. Yes, I believe we can complete the project within the given time frame if we adhere to
our schedule and maintain regular communication. However, we need to remain flexible
and address any unforeseen challenges promptly to stay on track.

5. What extra care and precaution should be taken in executing the activities of high risk and
uncertainty? If possible, how can such risks and uncertainties be reduced?

a. For high-risk activities, we should conduct thorough testing and quality assurance checks.
To reduce risks, we can implement a phased approach, allowing us to identify and
address issues early in the development process.

6. Can we reduce the total cost associated with the project? If yes, then describe the ways.

a. Yes, we can reduce costs by using open-source tools and libraries for OMR technology
instead of purchasing expensive software. Additionally, we can optimize resource
allocation to minimize wastage and ensure efficient use of funds.

7. For which activities in our project plan is arranging resources difficult?

a. Arranging resources for the testing phase, particularly for obtaining real exam papers and feedback from students, may pose challenges. We might need to collaborate with educational institutions to facilitate this process.

8. Did we make enough provision for extra time, expenditure, etc., to carry out such activities?

a. We have made some provisions for extra time, but we need to revisit our budget to ensure
we have adequate funds for unforeseen expenses, particularly in the testing phase.

9. Did we make enough provision for time delays in our project activities? In which activities are delays more likely?

a. We have provisions for time delays, especially for the integration and testing phases,
where challenges are more likely to arise. Regular progress reviews will help us address
potential delays proactively.

10. Which days in our project schedule involve the most expenditure? What provisions have we made for the availability and management of cash?

a. The days with higher expenditure include those planned for purchasing resources and conducting testing. We have set aside a contingency fund to manage cash flow and ensure resources are available when needed.
11. Any other reflection which I would like to write about project planning?

a. Overall, the project planning phase has been insightful, emphasizing the importance of clear communication and flexibility. I look forward to collaborating with the team to adapt our plans as needed and achieve our project goals.
