Assessment Full Note
KU 3rd Semester Notes

UNIT I : Perspectives on Assessment and Evaluation

___________________________________________________________________________________________

Assessment and Evaluation in Education – Purposes of Evaluation.
Types of evaluation – Formative and Summative, Outcome Evaluation, Process Evaluation, Self Evaluation, Peer Evaluation, Product Evaluation, External Evaluation, Internal Evaluation and Objective Based Evaluation.
Brief introduction to Instructional objectives as the basis of scientific evaluation – Bloom's taxonomy of educational objectives; Domains of learning – cognitive, affective and psychomotor. (Learn optional notes)
Factors to be considered for successful assessment.
Current practices in assessment and evaluation – CCE: concept, need and relevance; Grading system – concept, types (absolute grading, direct grading and relative grading), merits and demerits; Grade Point Average, Cumulative Grade Point Average, Weighted average and weighted score/point; Classification of learners according to their level of performance in the Grading system (by giving letter grades such as A+, A, B+, B etc.).

_______________________________________________________________________________

Measurement is the process of assigning numbers to a set of persons or objects according to certain established rules. Measurement is the process of determining the quantity of something. In education it means expressing in quantitative terms the degree to which a pupil possesses a given characteristic. Measurement in education is a much more complex process than in the physical sciences.
Testing is a procedure in which a sample of an individual's behaviour is obtained, evaluated and scored using standardized procedures; tests are the devices used for this. Tests are tools that can contribute importantly to the process of evaluating students, curriculum and teaching methods. Testing is often considered synonymous with assessment, but this is not true; there is a difference between testing and assessment. Often test results are treated as the only criterion for evaluation and other educational decisions. Performance on tests can usually be generalized to non-test behaviours as well. Testing is not the end point of assessment but rather one part of the broader assessment process.

Educational assessment is the process of documenting, usually in measurable terms, knowledge, skills, attitudes, and beliefs. Assessment is any systematic procedure for collecting information that can be used to make inferences about the characteristics of people or objects. It should lead to increased understanding of these characteristics. Testing uses only one systematic method of collecting information and is therefore just one tool for assessment, so assessment is a broader, more comprehensive process than testing. It can focus on the individual learner, the learning community (class or other organized group of learners), the institution, or the educational system as a whole. The term assessment is generally used to refer to all activities teachers use to help students learn and to gauge student progress.

Assessment: Assessment gauges the change or development in the behaviour of the student as a result of instruction. This change can be physical, psychological, social, attitudinal, or a change in personality, interests etc. Since education aims at the all-round development of the child, evaluation is used for assessing the all-round changes in children. For this assessment of change, the teacher has to conduct a pre-test before instruction to find the pupil's entry behaviour. Instruction brings about changes in the behaviour of the child. The child is then given a post-test to find his or her terminal behaviour. The difference between the post-test score and the pre-test score gives an assessment of the student's growth or progress as a result of instruction.
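As a simple illustration of the pre-test/post-test idea above, the short sketch below (with hypothetical pupil names and scores, not taken from these notes) computes each pupil's gain score as the post-test score minus the pre-test score.

```python
# Minimal sketch with hypothetical data: the gain score for each pupil is the
# post-test score minus the pre-test score, as described in the note above.
pre_test = {"Anu": 12, "Bala": 15, "Chitra": 9}    # entry behaviour (before instruction)
post_test = {"Anu": 18, "Bala": 19, "Chitra": 16}  # terminal behaviour (after instruction)

for pupil, pre_score in pre_test.items():
    gain = post_test[pupil] - pre_score
    print(f"{pupil}: pre={pre_score}, post={post_test[pupil]}, gain={gain}")
```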
Assessment is an integral component of the teaching process. It has been estimated that teachers devote one third of their professional time to assessment-related activities. Assessment provides relevant information that both enhances instruction and promotes learning. In other words, there is a close reciprocal relationship between instruction, learning and assessment. With this expanded conceptualization of teaching, instruction and assessment are integrally related, with assessment providing objective feedback about what the students have learned, how well they have learned it, how effective the instruction has been, and what information, concepts and objectives require more attention. Instead of teaching being limited to an instruction-learning process, it is conceptualized more accurately as an instruction-learning-assessment process. In this view the goal of assessment, like that of instruction, is to facilitate student achievement.
Characteristics of Assessment
• Assessment tracks student progress and guides us in decision making.
• Assessment focuses on learning and teaching outcomes.
• It is used to drive instruction. It is the basis for improvement.
• Assessment is done at multiple levels - classroom, institution, programmes, and courses.
• Assessment helps in bringing about changes in the learning environment.
• Assessment uses internally defined criteria and brings changes according to the circumstances.
• Assessment is flexible. It is formative, internal, process oriented and diagnostic in nature.
Basic assumptions in educational assessment:
1. Educational constructs exist and can be measured. The measurement need not be perfect.
2. There are different ways to measure any given educational construct.
3. All assessment procedures have strengths and limitations.
4. Multiple sources of information should be part of the assessment process.
5. Assessments can be conducted in a fair manner.

Levels of Assessment


➢ Assessment as learning - Assessment as learning occurs when students reflect on and monitor their progress to inform their future learning goals. 'Assessment as learning' is perhaps more connected with diagnostic assessment and can be constructed with more of an emphasis on informing learning. Assessment as learning generates opportunities for self assessment and for peer assessment. Students take on increased responsibility for generating quality information about their learning and that of others: the teacher and student co-construct learning, co-construct assessment and co-construct the learning progress map. Assessment for learning and assessment as learning activities should be deeply embedded in teaching and learning and be the source of iterative feedback, allowing students to adjust, rethink and re-learn.
➢ Assessment for learning - Assessment for learning occurs when teachers use inferences about student progress to inform their teaching. Assessment for learning involves increased levels of student autonomy, but not without teacher guidance and collaboration. It is sometimes seen as akin to 'formative assessment' and can be seen as informing teaching. There is more emphasis on giving useful advice to the student and less emphasis on giving marks and the grading function: the teacher designs the learning, designs the assessment with feedback to the student, and assesses what has been learnt (while the student develops insight into what has not).

➢ Assessment of learning - Assessment of learning occurs when teachers use evidence of student learning to make judgements on student achievement against goals and standards. Most commonly, assessment is defined as a process whereby someone attempts to describe and quantify the knowledge, attitudes or skills possessed by another. Teacher directedness is paramount and the student has little involvement in the design or implementation of the assessment process in these circumstances. It is summative: the teacher designs the learning, collects the evidence and judges what has been learnt (and what has not).

➢ Assessment in learning - Assessment in learning places the question at the centre of teaching and learning. It deflects teaching from its focus on a 'correct answer' to a focus on 'what is the way to obtain the correct answer'. Through inquiry, students engage in processes that generate feedback about their learning, which comes from multiple sources and activities. It contributes to the construction of other learning activities, lines of enquiry and the generation of other questions. The student is at the centre of learning: the student monitors, assesses and reflects on learning and initiates demonstrations of learning (to self and others), while the teacher acts as coach and mentor. Teachers and students need to understand the purpose of each assessment strategy so that the overall assessment 'package' being used by learners and teachers accurately captures, generates and uses meaningful learning information to generate deep learning and understanding.

Purpose of Assessment

• To ascertain what learning, change and progress take place in the child over a period of time in different subjects of study and other aspects of the child's personality.

• To find out the needs and learning style of every learner.

• To devise a teaching-learning plan that is responsive to the individual needs and learning styles.

• To improve the teaching-learning materials by adding value.

• To help every learner find out their interests, aptitudes, strengths and weaknesses so that the learner
can evolve effective learning strategies.

• To measure the extent to which curricular objectives have been realized.

• To enhance the effectiveness of the teaching-learning process.

• To record the progress of every learner and communicate it to parents and other stakeholders.

• To maintain a dialogue between the teacher and the student and also the parents as a collaborative
endeavor for overall improvement of the system.
• To involve the learners in the process through peer and self assessment.

Different stages in Assessment

Stage-1: Gathering information about and evidence of the effectiveness of teaching and learning. We gather information in a variety of ways, using a number of tools. Observation, conversation and discussion, assignments, projects and different types of tests are some of the methods and tools we use for collecting information.

Stage-2: Recording of information. The information gathered has to be systematically recorded because it constitutes not only rich input that has to be used for improving teaching and learning but also evidence to support the conclusions we come to about the progress made by the students. To make the recording effective, we must use different recording devices such as learner profiles, anecdotal records, case studies and report books, and both quantitative and qualitative data should be collected.

Stage-3: Analysing and reporting the information collected. The recorded information constitutes valuable feedback that the teacher, the student and the parents should use to enhance the learning process. To do this, the gathered information has to be analysed periodically so that the teacher can draw conclusions about how a child is learning and progressing.

Stage-4: Using the information for improvement. Assessment should result in improvement. Though the student, the teacher and the parents are all stakeholders in this paradigm, it is the teacher who has to take the initiative to use the analysis of information on each learner to enhance learning. This calls for reflective practices.

Evaluation is a value judgement on an observation, a performance, a test, or indeed any data, made by placing a meaning on it relative to a standard, a norm or some other point of reference.

NCERT considers Educational Evaluation as the process of determining


1. The extent to which an objective is being attained.
2. The effectiveness of the learning experiences provided in the classroom.
Characteristics of Evaluation:
• Evaluation is a continuous, comprehensive process and forms an integral part of the total system of
education.
• Evaluation is a broader concept that involves academic and non-academic aspects of education.
• Evaluation involves all means of collecting evidence on student behaviour. It utilizes all tools and
techniques of evaluation.
• Evaluation includes both qualitative and quantitative observations.
• Evaluation is concerned with the total personality of the child and gives evidence of the child's
personality development.
• Evaluation is based on accurate assessment and value judgments.
• Evaluation is based on pre-determined objectives and goals of education.
• Evaluation is subjective and personal.
• Evaluation is a co-operative process involving the pupils, teachers, parents and others.
• The scope of Evaluation is broad.
Principles of Evaluation:
• Evaluation should be based on objectives and the evaluator should have knowledge about the
relationship between objectives, instruction and evaluation.
• Evaluation should not be confined to test results only. It should be based on data collected by
different assessment procedures.
• While evaluating, the evaluator should always keep in mind the moral aspects of evaluation.
• Evaluation should be impartial and personal bias should not affect evaluation.
• The evaluator should consider the utility and limitations of a tool while selecting an evaluation
tool. The tools selected for evaluation should fulfill the aim of evaluation.
• Evaluation is not an end in itself; it is only a means to attain higher goals.

Uses of Assessment and Evaluation: Assessment and evaluation provide information that helps educators make better educational decisions, and they can benefit our educational institutions and society as a whole. Both are useful to the different persons involved in the evaluation process - the teacher, student, parents, administrator, planner, manager, supervisors etc.
• Appropriate assessment and evaluation procedures allow teachers to monitor student progress and
provide feedback.
• Assessment and Evaluation can provide information that allows teachers to modify and improve
their instructional practices. It provides feedback for the teacher regarding his teaching and the
learning experiences he provided for the students. Thus he can bring about the necessary changes
required.
• Educational assessments and Evaluations provide useful information to help educators select, place
and classify, compare and group students.
• In an era of increased accountability, policy makers and educational administrators are relying
more on information from educational assessments and evaluation to guide policy decisions.
• It also provides information that promotes self-understanding and helps students plan for the future. It helps to diagnose the weaknesses and strengths of pupils, to make predictions and to provide guidance for the growth of students.
• It helps a teacher to find out the extent to which the objectives of education are attained.
• Students get to know about their strengths and weaknesses and thus improve their performance. Knowledge about their performance acts as a motivating factor for the students.
• It keeps the parents informed about the performance of their children, which helps them to take appropriate action for their improvement.
• It also helps the administrators, planners and supervisors to take appropriate decisions regarding
various aspects of education. E.g. curriculum development.
• It is used for research purposes.
Diagnostic and Prognostic use of Assessment: Assessment, the process of collecting various data about students, can serve various purposes. The main purpose of assessment in education is that it forms the basis for identifying the strengths and weaknesses of students. Assessment can be either diagnostic or prognostic, based on whether it is used to identify the student's weaknesses or strengths.
Diagnostic assessment: Assessment carried out to find the weaknesses of students, i.e. learning difficulties, is known as diagnostic assessment. It mainly aims at finding the cause of learning difficulties and providing remedial instruction. Diagnostic assessments can be done before and after instruction. Diagnostic assessment done before instruction (also known as pre-assessment) provides instructors with information about students' prior knowledge and misconceptions before beginning a learning activity. Diagnostic assessment done after instruction helps in understanding how much learning has taken place after the learning activity is completed. Instructors usually build concepts sequentially throughout a course, so if students fail to grasp the concepts in a particular area it may create learning gaps. This will make a student lag in his learning and decrease his achievement. So it becomes very important for a teacher to conduct diagnostic assessment. Diagnostic assessment is always followed by remedial teaching, and it can be a cyclic process: diagnostic assessment, remedial teaching, diagnostic assessment, remedial teaching, and so on. Diagnostic assessment data may be drawn from:
• Summative assessments of the previous learning activity.
• Short assessments that focus on key knowledge and concepts like instant tests.
• Using an achievement test, intelligence test or a diagnostic test.
• Oral questioning and observation of the teacher.
• Cumulative record
Prognostic assessment: Prediction means telling something about the future on the basis of the present. A prognosis is a prediction that is based on the information gathered now. Prognosis is a term denoting the prediction of how a learner will progress in the future. Education is the process of developing the innate abilities of the students, so assessment should also fulfil this function of finding the innate abilities of the students. It should identify the strengths, capacities and potentialities of the students. The identification of these innate abilities can help a student in pursuing further studies or choosing a job. This will help in predicting success in a career or course of study. Prognostic assessment can be used to select students for a particular course or job, and in providing educational and vocational guidance to students. It can be the basis for predicting how an individual would behave in certain situations. Based on prognostic assessment one can provide enrichment programmes and special training for students. Prognostic assessment can be done from:
• Intelligence tests can be used to predict one's success in academic achievement and various
professions.
• Aptitude test can be used to predict capacity and potential success in particular fields. E.g. teaching
aptitude test, differential aptitude test
• Vocational interest inventories
• Selection interviews.
• Entrance examinations.
• Achievement test can also be used for prognosis to a certain extent.
• Teachers' observations or interviews can also form a basis for finding the abilities of the students, which can be used for prediction.
Evaluation of prognostic tests
• Standardized prognostic tests have declined in use.
• The validity of most of the available prognostic tests is low.
• Prognostic tests for general purposes are not really general.
• Prognostic tests for national use have limitations as they do not consider regional variations.

Placement: Evaluation is used for grading, promotion and placement in the same school and in other institutions. When a school is large enough to have several groups at the same grade or level, a decision must be reached on some grounds as to who goes into which group. This is based on the evaluation of the students. Evaluation also helps in grouping students into different groups in homogeneous grouping, in placing them when they are transferred from one school to another, etc. Placement is done on the basis of the present educational status of the student.

Difference between assessment and evaluation

• Assessment is formative (ongoing, to improve learning); evaluation is summative (final, to gauge quality).
• Assessment is process oriented (how learning is going on); evaluation is product oriented (what has been learned).
• Assessment is diagnostic (identifies areas for improvement); evaluation is judgmental (judges the overall performance).
• Assessment focuses on immediate teaching-learning outcomes; evaluation focuses on grades or marks.
• Assessment is used to drive instruction; evaluation is used to rate a student.
• Assessment uses internally defined criteria and goals; evaluation uses externally imposed standards.
• Assessment is flexible (adjusts as problems arise); evaluation is fixed (changes are not usually made).
• The main goal of assessment is improvement; the goal of evaluation is reward, success, failure, punishment, passing etc.
• Assessment strives for ideal outcomes; evaluation divides the better from the worse.
• Assessment focuses on the goals of student learning; evaluation focuses on all major goals of a programme or course.

TYPES OF ASSESSMENT: There are different types of assessment, based on various aspects of the assessment process. The different types are described below.
A.) Based on the Time of Assessment - Formative and Summative Assessment - Michael Scriven coined the terms formative and summative evaluation in 1967. He used the terms in his essay "The Methodology of Evaluation" to refer to the evaluation of an instructional programme that has been completed as summative, and of one that is still going on and can be modified as formative.
Formative Assessment - Formative assessment is the assessment of students at every stage of the instructional process. It goes on along with the instructional process at very short intervals. The tests used are called formative tests. It can be used to monitor learning progress during instruction and provide feedback to the student and teacher. It helps the teacher to make adjustments and adapt the learning process. It provides immediate feedback: feedback to the student gives reinforcement of successful learning, and feedback to the teacher helps modify instruction. The teacher can arrange remedial programmes on the basis of this feedback.
❖ "Formative evaluation is concerned with judgements made during the design and/or development of a programme which are directed towards modifying, forming or otherwise improving the programme before it is completed."
❖ "Formative evaluation occurs over a period of time and monitors student progress." - W. Wiersma and S. G. Jurs
Characteristics of Formative Evaluation

1. Formative evaluation is done during an instructional programme.
2. The instructional programme should aim at the attainment of certain objectives during the implementation of the programme as well.
3. Formative evaluation is done to monitor learning and to modify the programme, if needed, before its completion.
4. Formative evaluation is for current students.
5. It relatively focuses on molecular analysis.
6. It is cause seeking.
7. It is interested in the broader experiences of the programme users.
8. Its design is exploratory and flexible.
9. It tends to ignore the local effects of the particular programme.
10. It seeks to identify influential variables.
11. It requires analysis of instructional material for mapping the hierarchical structure of the learning tasks, and actual teaching of the course for a certain period.

Summative Evaluation - Summative evaluation describes judgements about the merit of an already conducted instructional programme, i.e. at the end of a unit, month, term, semester or course. It is done mainly to find whether the final product is up to the expected standards, whether the process went according to plan, and what has been achieved at the end. The tests used for summative evaluation are called summative tests. Summative evaluation results can be used to assign grades and marks, certify pupils' achievement etc.
1. "Summative evaluation describes judgements about the merits of an already completed programme, procedure or product." - A. J. Nitko (1983)
2. "Summative evaluation is done at the conclusion of instruction and measures the extent to which students have attained the desired outcomes." - W. Wiersma and S. G. Jurs (1990)
Characteristics of Summative Evaluation
1. It tends to use well-defined evaluation designs.
2. It focuses on analysis.
3. It provides descriptive analysis.
4. It tends to stress local effects.
5. It is unobtrusive and non-reactive as far as possible.
6. It is concerned with a broad range of issues.
7. Its instruments are reliable and valid.

Formative evaluation vs. Summative evaluation
• Formative evaluation is conducted while the teaching-learning process goes on; summative evaluation is conducted at the end of the process, course, year, unit or semester.
• Formative evaluation is conducted daily; summative evaluation is conducted weekly, monthly or yearly.
• The design of formative evaluation is exploratory and flexible; summative evaluation uses well-defined instructional designs.
• Formative evaluation is internal assessment; summative evaluation can be internal or external assessment.
• In formative evaluation data are gathered by observation, oral questioning, discussions, quizzes, assignments etc.; in summative evaluation data are gathered by unit tests, achievement tests, examinations etc.
• Formative evaluation usually adopts a criterion-referenced approach; summative evaluation usually adopts a norm-referenced approach but can be criterion referenced also.
• Formative evaluation provides immediate feedback for teacher and students; summative evaluation gives no immediate feedback.
• Formative evaluation is developmental in nature and caters to day-to-day improvement; summative evaluation is mainly judgmental in nature, judging the merit of the overall instruction.
• Formative evaluation requires detailed information; summative evaluation gathers less detailed information.
• Formative evaluation provides feedback to the teacher and student about how things are going; summative evaluation provides feedback about how things went.
• Formative evaluation has limited scope; summative evaluation has wide scope.
B.) CONTEXT: INTERNAL AND EXTERNAL ASSESSMENT - External assessment fails to assess the all-round development of the students, i.e. their knowledge, attitudes, skills, values etc. Even in the area of assessing the knowledge of students, it has come under heavy criticism. Here lies the importance of internal assessment.
External assessment - External assessment is that which is carried out by an external agency, usually at the end of a course, year or semester. It is mainly the system of examinations conducted by a board of examinations that is not directly involved in the teaching-learning process, e.g. entrance examinations and CBSE board examinations. It is mainly used to assess the scholastic achievement of the students and certify their level of achievement. It provides an independent assessment of the students.
Internal assessment - Internal assessment is done internally by the teacher of the same institution who is directly involved in the teaching-learning process. In this form of assessment the teacher conducts the whole process of assessment - setting the question paper, collecting data, scoring and analyzing data, and providing grades or marks. It is a continuous process which is formative in nature, i.e. it is done throughout the instructional process. It helps in providing immediate feedback for the students and develops proper study habits in students. It also motivates them. It is diagnostic in purpose and helps in improving the instructional process.
Limitations of internal assessment -
 Can be misused by the teachers
 Requires experienced, honest and sincere teachers
 Cannot replace external examinations
 Requires a lot of time to undertake several activities

Blue-print of internal exam


▪ Scholastic aspects - Curricular areas & intelligence.
▪ Non-scholastic aspects - Personal and social habits, Interests, Attitudes, Physical health.
▪ Co-curricular activities - Literary and scientific activities, Cultural activities, Outdoor
activities and other productive activities.
In spite of its theoretical superiority, continuous internal assessment cannot replace external assessment. Internal and external assessment are equally important, like the two sides of the same coin; both have a part to play in a good assessment system and should be used together.

Comparison of Internal and External assessment


• Objective: external assessment assesses the level of achievement; internal assessment seeks to improve the level of achievement.
• Coverage: external assessment covers only scholastic aspects; internal assessment covers both scholastic and non-scholastic aspects.
• Nature: external assessment is summative and rigid; internal assessment is formative and flexible.
• Evaluation techniques: external assessment uses written and practical examinations; internal assessment also uses psychological tests, interviews and observation.
• Evaluation tools: external assessment uses question papers; internal assessment also uses rating scales and schedules.
• Periodicity: external assessment comes after the end of a stage of education; internal assessment is a continuous process.
• Quality of assessment: in external assessment, uniformity of practice and standards is maintained in all schools; in internal assessment the standard depends on the teacher.
• Organizational structure: external assessment has a top-heavy administrative structure; internal assessment is teacher operated and informal.
• Examinee status: in external assessment the examinee is merely a roll number; in internal assessment the examinee is a human being with a distinct personality.
• Teacher status: in external assessment the teacher is insignificant and usually a mistrusted tool; in internal assessment the teacher is a trusted and active individual.
• Teacher-student relationship: external assessment does not affect and is not affected by the teacher-student relationship; internal assessment affects and is affected by it.
• Use of the result: the results of external assessment are used for promotion, selection, comparison of schools and students, certification etc.; the results of internal assessment are used to improve the teaching-learning process, for diagnostic purposes and as part of awarding marks or grades.

C.) PROCESS AND PRODUCT EVALUATION


The first distinction to make is in what is to be measured: the product, the process, or some combination of both. Product refers, in most cases, to some tangible object resulting from a person's performance. It can be evaluated on the basis of its quality, appearance and conformance to pre-determined specifications or some other criteria. Process refers to the procedures and methods used to arrive at that particular point, which may or may not result in a final product. Processes are often more difficult to assess as they are more subjective and complex in nature. They are usually evaluated on the basis of quality and efficiency through observational techniques.
In the assessment of a skill, the important aspect is usually the process, so we assess the process, since a measure of the process gives a more realistic and direct measure of the skill. Similarly, if the product is more important than the process, then product evaluation is used. In certain situations the process, or how you do the job, is not important and the final product is more significant. In some other situations it becomes very difficult, practically impossible, to evaluate or assess the process, e.g. in most creative and artistic endeavours the final product can be easily judged but the creative process involved usually defies exact definition and inspection. In some cases the process is more important, e.g. while assessing a sales executive trainee, his approach and attitude towards customers is more important in the initial stage than the final product of how many items he sold. In some cases both the product and the process are important, e.g. painting a house. In some situations the process has to be assessed at different stages, looking at how it is carried out in each stage as well as at the final product.
Process Evaluation - A process evaluation looks at the actual development and implementation of a particular program. It establishes whether you have hit quantifiable targets and implemented strategies as planned. It is typically done at the end of the project and it looks at the program from start to finish, assessing cause-and-effect relationships between the program components and outcomes. This type of evaluation can be very useful in determining whether a program should be continued, expanded upon, refined or eliminated.
Process evaluation captures the HOW of a program. Process evaluation has been defined as the evaluation that assesses the delivery of the program (Scheirer, 1994). Process evaluation identifies what the program is and whether it is delivered as intended, both to the "right audience" and in the "right amount". The following questions (according to Scheirer) can guide a process evaluation:
Why is the program expected to produce its results?
For what types of people may it be effective?
In what circumstances may it be effective?
What are the day-to-day aspects of program delivery?
Why Process Evaluation Is Important
Information from process evaluations is useful for understanding how program impact and outcomes were achieved and for program replication. Looking at outcomes – without analyzing how they were achieved – fails to account for the human capital (over-worked staff) involved in getting to good outcomes and the true costs of the program.
Evaluating the "input" (the very first column in a logic model) is just as valid as evaluating the last columns about outcomes. It is called a "logic" model after all – and logically there is a chain of cause and effect, which means that if we have the right resources at the very beginning of the chain (inputs), then we assume we will be able to get to the outcomes to which we aspire.
1. Process Evaluation (P): develop ongoing evaluation of the implementation of major strategies through various tactical programs in order to accept, refine, or correct the program design (e.g. evaluation of recruitment, orientation, transition, and retention of first-year students).
a. Purpose: (1) provide decision makers with the information necessary to determine whether the program needs to be accepted, amended, or terminated.
b. Tasks: (1) identify discrepancies between the actual implementation and the intended design; (2) identify defects in the design or implementation plan.
c. Methods: (1) a staff member serves as the evaluator; (2) this person monitors and keeps data on setting conditions and program elements as they actually occurred; (3) this person gives feedback on discrepancies and defects to the decision makers.
Benefits of Process Evaluation
• To increase understanding of the details of exactly what is happening within a program.
• To identify best practices for sharing with other programs.
• To help interpret impact/outcome evaluation findings; for instance, if a program did not achieve its outcomes, process evaluation can explain why.
Product evaluation - It is typically done at the end of the project and it looks at the program from start to finish, assessing cause-and-effect relationships between the programme components and outcomes. Product evaluation is a kind of evaluation in which the evaluator views and scores the final product made, not the actual performance of making that product. It is concerned with the product alone and not with the process. It also focuses on the achievement of the learner.
Product Evaluation (P): evaluation of the outcome of the program to decide whether to accept, amend, or terminate the program, using criteria directly related to the goals and objectives (e.g. put desired student outcomes into question form and survey pre- and post-). Loop back to the original objectives in the evaluation to see if and how these would be changed or modified based on the data.
a. Purpose: (1) decide to accept, amend, or terminate the program.
b. Task: (1) develop the assessment of the program.
c. Methods: (1) traditional research methods, multiple measures of objectives, and other methods.
The learning competencies associated with products or outputs are linked with an evaluation at three levels of performance. They are:
1. Beginner's level
2. Skilled level
3. Expert level
There are other ways to state learning competencies for products or outputs, as in the following:
Level 1: Does the finished product or project illustrate the minimum expected parts or functions? (beginner level)
Level 2: Does the finished product or project contain additional parts and functions on top of the minimum requirements which tend to enhance the final product? (skilled level)
Level 3: Does the finished product contain the basic minimum parts and functions, have the additional features on top of the minimum, and is it aesthetically pleasing? (expert level)
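The three rubric levels above amount to a simple decision procedure. The sketch below (a minimal illustration with hypothetical attribute names, not part of the original notes) shows one way a level could be assigned from what a finished product demonstrates.

```python
# Minimal sketch with hypothetical attributes: assigning one of the three
# product levels described above, based on what the finished product shows.
def product_level(has_minimum_parts: bool,
                  has_additional_features: bool,
                  is_aesthetically_pleasing: bool) -> str:
    if has_minimum_parts and has_additional_features and is_aesthetically_pleasing:
        return "Level 3 (expert)"
    if has_minimum_parts and has_additional_features:
        return "Level 2 (skilled)"
    if has_minimum_parts:
        return "Level 1 (beginner)"
    return "Below Level 1: minimum requirements not met"

# Example: a product with the minimum parts and extra features, but not
# aesthetically pleasing, is placed at the skilled level.
print(product_level(True, True, False))  # -> Level 2 (skilled)
```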
Process vs. Product Assessment

Product evaluation is an evaluation of student performance in a specific learning context (example: a school report card). Process evaluation examines the experiences and activities involved in the learning situation (examples: student-teacher interaction, instructional methods, teacher actions and so forth).

Process assessment focuses on the steps or procedures underlying a particular ability or task, e.g. the cognitive steps in performing a mathematical operation or the procedure involved in analyzing a blood sample. Because it provides more detailed information, process assessment is most useful when a student is learning a new skill and for providing formative feedback to assist in improving performance.

Product assessment focuses on evaluating the result or outcome of a process. Using the above examples, we would focus on the answer to the math computation or the accuracy of the blood test results. Product assessment is most appropriate for documenting proficiency or competency in a given skill, i.e. for summative purposes. In general, product assessments are easier to create than process assessments, requiring only a specification of the attributes of the final product.

D.) SELF EVALUATION & PEER EVALUATION


Self Evaluation
Student Self-assessment is the process by which students must analyze their learning, provide feedback
to themselves and determine the ways to enhance their performance. Self-assessment is more accurately
defined as a process by which students
1) monitor and evaluate the quality of their thinking and behavior when learning and
2) identify strategies that improve their understanding and skills.
Student self-assessment is “the process by which the student gathers information about
and reflects on his or her own learning … [it] is the student’s own assessment of personal progress in
knowledge, skills, processes, or attitudes. Self-assessment leads a student to a greater awareness and
understanding of himself or herself as a learner”
“A self-evaluation is our thoughtful and considered written review of our performance during the evaluation cycle.”
 It is looking at your progress, development and learning to determine what has improved and what areas still need improvement.
 It involves rating established goals, competencies and overall performance.
Important aspects of self assessment
1. Goal-setting is a key component of both self-assessment and learning. The students should set their
own goals. Teachers commonly use the SMART acronym as a way of guiding students in the design of
a learning target. In this acronym: S-Specific, M-Measurable, A-Achievable or Attainable, R-Relevant
and T-Time-bound.
2. Self-monitoring involves students' focused attention to what they are doing, often in relation to external standards.
3. Reflection occurs "when students think about how their work meets established criteria; they analyze the effectiveness of their efforts, and plan for improvement".
4. Metacognition - Reflection can lead to "thinking about thinking", which makes students better equipped to employ the necessary cognitive skills to complete a task or achieve a goal.
5. Self-judgment gives students a meaningful idea of what they know and what they still need to learn.
6. Feedback is information about one's performance, which forms the basis for understanding oneself: what one knows, what has been achieved, what is yet to be achieved etc.
7. Instructional correctives are strategies or ways to improve performance based on self-evaluation and feedback.
Advantages of Self Evaluation
 Self-assessment allows us to tap into student differences in order to see how our teaching can respond to students' needs.
 Self-assessment helps the learner become an active participant in his or her own evaluation.
 It helps one assess one's strengths and the weaknesses one needs to improve or modify.
 Constructive participation is possible.
 It helps to increase the commitment of an individual to his/her goal setting and achievement, competency development and future career.
 Self-assessment by pupils is becoming an essential component of formative assessment.
 Self-assessment promotes metacognitive skills, increases student responsibility for learning and reduces disruptive classroom behaviour.
 When students assess their work realistically and accurately, teachers can help to promote learning and self-confidence.
 Student self-assessment empowers students and incorporates increased dialogue between students and teachers, which enables students to critically analyze their own learning, the product and process of learning, and their performances.
 It provides information useful for planning and student improvement.
 It indicates the strengths and weaknesses of the teacher.
 It helps the teacher to think, reflect and write down the weak points.
 It gives the student a better idea of the goals that they are trying to reach.
 Students can take responsibility for their own learning.
 Students get a chance to predict their main targets for the coming year and think about their career advancement.
Things needed to complete Self Evaluation
 Time
 Quiet
 Relax
 Highlight the highlights
 Don’t forget about achievements made early in the evaluation period.
 Don’t be stuffy
 Solicit feedback from co-workers
 Be objective
 Use appropriate language
 Suggest specific improvements
Role of Teacher
 The teacher should not target too many issues at a time for appraisal and action.
 By asking questions about students' learning, the teacher will gain information about how well students are understanding.
 Such information will help the teacher adjust his teaching so that students learn what he wants them to.
 It helps the students to evaluate their own learning.
Teacher should keep in mind
 Clarity of the stated educational aims and learning outcomes.
 Realism of the stated prior knowledge.
 Curriculum and content: perceptions of usefulness/relevance.
 Way in which the curriculum was presented.
 Development of subject specific skills.
 Appropriateness of method of assessment
 Appropriateness of the style of teaching and the performance of the teacher.
 Motivation/attitudes of the students.
 Support available to the students/coursebooks/resources for independent learning.
 Overall experience of the student of the teaching and support for learning.
Disadvantages
 Teacher feedback.
 Consciousness.
 Format-based plan.
 Lack of maturity.
 It works only if students have been trained to self-assess themselves.
 Grading is a predetermined process, but it is an average of the marks awarded by the members of the group.
PEER EVALUATION - Peer assessment or peer evaluation can mean many things: a means of raising the bar by exposing students to exceptionally good or bad solutions; peer grading of homework, quizzes, etc.; and an aid to improving team performance or determining individual effort and individual grades. It is a process of collegial feedback on the quality of learning. It is a process of gathering information and evidence about the effectiveness of a peer's learning and work, with a view to constructive critical scrutiny.
Process for Peer Evaluation of Teaching
 It should be based on a clear understanding of the particular context of learning or teaching.
 Dialogue between the reviewer and the person(s) whose work is being reviewed provides a mechanism for improvement.
 Reports based on the review process contribute to these purposes more than an account of single events does.
 It helps instructors to improve the quality of learning or teaching in their classrooms and departments.
Application of Peer Review
 General teaching improvement of current instructors.
 Hiring
 Mentoring of junior instructors.
 Promotion or advancement decisions.
 Merit awards.
Components of Peer Evaluation
 In-class observation – in the classroom, students can be asked to observe their peers/teachers, or student teachers can observe their colleagues' classes and provide feedback.
 Course material review – students' feedback on the curriculum.
 Student evaluation – useful for information on how students respond to instruction, but students are not qualified to assess content knowledge or the modality of instruction.
 Ongoing evaluation – there is repeated conversation and reflection by the instructor, with inputs from peers and students.
Advantages of Peer Evaluation
 Peer assessment encourages deep learning.
 Peer assessment can help to develop clearer assessment criteria.
 Peer assessment is a good way to generate timely feedback.
 It may lead to improvement in your other assessment practices.
 Peer assessment may reduce the workload of teachers.
 Students become familiar with the school's goals, values and problems.
 Students begin to know the subject matter, curriculum and instructional material more deeply.
 Teachers are aware of the actual demands, limitations and opportunities, and they get proper feedback.
 It involves the students in the teaching-learning process.
 Students are more willing to accept the comments of their peers, which leads to improvement.
 Indirectly, it promotes the learning of students.
Disadvantages
 It is not easy, and its benefits are not realized if it is not done properly.
 It can create doubts about students' evaluation abilities.
 It is not helpful for individuals who lack proper knowledge about the objectives and goals of the task.
 It may not be fully trustworthy, as students may give higher grades to their friends and lower grades to those whom they dislike.
 Teachers' workload can increase, as the teacher has to re-check whether the students have done the peer evaluation correctly.
Criteria for Good Peer Evaluation
 Voluntary participation
 In-depth study
 Co-operation
 Respect.

Outcome Evaluation
Outcome-based evaluation is a systematic way to assess the extent to which a program has achieved its intended results. It identifies processes and outcomes, shows the relationship of inputs to expected results or outcomes, and helps identify the major questions the evaluation is to answer.
The type of evaluation most commonly requested by foundations is called outcome evaluation. Outcome evaluations assess the effectiveness of a program in producing change; they focus on difficult questions that ask what happened to program participants and how much of a difference the program made for them. Process evaluations, by contrast, help stakeholders see how a program outcome or impact was achieved. Impact or outcome evaluations are undertaken when it is important to know whether and how well the objectives of a project or program were met. For example, outcome questions for a smoking cessation program might include:
Did the program succeed in helping people to stop smoking?
Was the program more successful with certain groups of people than with others?
What aspects of the program did participants find gave the greatest benefit?
Outcome and Impact Evaluation - Decide what outcomes you would like to evaluate from your program. Generally, interventions directed at nutrition and physical activity-related behaviours are not able to track the long-term health benefits that may occur. You may need to assess proximal outcomes that you can use to make a case for impacting health, for example the amount of fruits and vegetables eaten or the amount of physical activity performed by the target audience.
Advantages of outcome based evaluation
• Improves programs and services.
• Helps in decision making.
• Brings public and professional recognition as a quality program.
• Gains support from the community.
• Determines cost-effectiveness.

Objective Based Evaluation


Dr. Benjamin S. Bloom explained: "One has to know where the students were at the beginning if one is to determine what changes are occurring. One has to obtain a record of the changes in pupils by using appropriate methods of appraisal. One has to judge how good the changes are in the light of the evidence obtained." These words sum up the inter-relationship between objectives, learning experiences and evaluation. This relationship is revealed by the three steps in the instructional procedure:
1. Establishing the objectives to be attained.
2. Providing learning experiences appropriate to the objectives.
3. Evaluating to ensure that the desired objectives are attained.
Thus there is an integral relationship between objectives, learning experiences and evaluation, in such a way that each influences and strengthens the others. This is represented by the 'Triangle of Evaluation'.

Triangle of Evaluation

All three are inter-related. The objectives of teaching constitute the pivot of any teaching procedure. Objectives tell what the minimum level of the students' eventual performance should be. Learning experiences are provided in an effort to attain the objectives. Thus learning experiences and evaluation tools are chosen and planned according to the formulated objectives. After providing the learning experiences, evaluation is done. The evaluation helps us in testing the effectiveness of the learning experiences and the attainment of the objectives. We modify the learning experiences if they are ineffective in attaining the objectives. The unattainable and unrealistic objectives are modified or removed. Thus objectives, learning experiences and evaluation are interdependent.
Thus, assessment conducted by pre-determining the objectives and then determining the extent to which they are attained is known as objective-based assessment. Objective-based evaluation is a process of determining the degree to which educational objectives are being achieved. It follows the scientific tradition. It involves specifying, and determining the degree of attainment of, programme implementation, utilization and outcome objectives. Objective-oriented approaches focus on evaluating to what degree the program, policy or product met the objectives it was intended to meet. The evaluator focuses the evaluation plan on assessing the intended outcomes related to the program objectives, compares the results of the evaluation with the objectives, and makes a judgement as to what degree the objectives were met, based on the findings.
Many prerequisites are required of the teacher and learner before beginning instruction. The teacher must be aware of the goals and aims of education and, more specifically, of the objectives of instruction. So his first job will be to formulate the instructional objectives and, based on them, construct effective learning experiences. At the end of instruction, evaluation is done to find the extent to which the objectives have been achieved. Thus evaluation lays emphasis on the specification of instructional objectives and on the variety of methods of evaluating them.
The objectives of a particular teaching-learning process, i.e. instructional objectives, can be classified into three domains: cognitive, affective and psychomotor.

Diverse methods are used to measure the objectives in each domain: for cognitive-domain objectives, oral or paper-pencil tests, intelligence tests and achievement tests; for affective-domain objectives, observation, checklists, inventories and attitude scales; and for psychomotor-domain objectives, performance tests and situational, field or on-the-job tests. There are no clear-cut boundaries between these objectives, so a combination of methods and techniques is used to measure the outcomes of instruction. New, innovative methods of evaluation are developed to measure complex and comprehensive objectives. But the selection of any tool and technique for evaluation depends on the objective to be tested. Thus instructional objectives become the basis for the scientific evaluation of pupils' performance.
Advantages of objective based evaluation
• Provides for clarification and direction as to what observable outcomes can be used to inform the
judgement of the value and worth of the thing being evaluated.
• Holds the program accountable for the intended outcomes.
• Clarifies to the program developers the objectives of the program.
• Provides objectives against which to compare the evaluation results for the purpose of forming a judgement.
• Highlights the most important components of the program, in terms of intended outcomes, to evaluate.
Disadvantages of objective based evaluation
• It can result in too narrow a focus on the stated objectives.
• It can tend to focus on the outcomes rather than the process at times.
• It could miss unintended outcomes.

Factors to be considered for successful assessment

Assessment is usually described as the ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within our institutional system, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of education.
Assessment is considered as the systematic collection and analysis of information to improve student
lifelong learning. Assessment is the process of gathering and discussing information from multiple and
diverse sources in order to develop a deep understanding of what students know, understand, and can
do with their knowledge as a result of their educational experiences; the process culminates when
assessment results are used to improve subsequent learning.
1. Developing learning goals and objectives: For any assessment to be successful, the foremost important thing is to be clear about the purposes and to define the intended outcomes as clearly stated goals and objectives. The assessment tasks should be such that the students are able to demonstrate achievement of the outcomes; in other words, the assessment should align with the learning goals and objectives. For this, the goals and objectives should be clearly defined, measurable and attainable.
2. Planning for assessment: Before starting assessment, the nature of and approach to assessment should be determined. It should be carefully designed and planned how the assessment will be carried out, where and when, by whom and of whom, and how the results will be used. This can involve developing guidelines; organizing for assessment (leadership, committees, assessment offices); and developing an assessment plan. For assessment to be successful it should produce useful, applicable results, and the methodology used to collect assessment data should provide valid and reliable measures.
3. Involving numerous stakeholders: A good assessment should involve many stakeholders - faculty, staff, students, parents, alumni and the community - and involve people widely. Involving faculty members and students is central to the success of academic assessment. This includes ways of involving people, responsibilities and rewards for assessment work, barriers to assessment, and ethical issues when involving students.
4. Selecting and designing methods: There are two basic ways to collect data. One is a direct
approach in which students display their knowledge through testing or essays, while the other is an
indirect approach. Direct approaches include such methods as student portfolios, capstone
courses, standardized tests, and in-class tests. Indirect measures include such items as student
retention rates, alumni satisfaction levels, and graduate employment indicators. While selecting
assessment techniques, use simple but multiple measures to assess the complex process
of teaching and learning. They should also give short and long term indicators and be qualitative and
quantitative so as to cover multifaceted levels. When selecting assessment methods:
1. Identify the current sources of data that are available for assessment.
2. Determine whether new instruments need to be developed or whether current instruments meet
assessment needs.
3. Study the assessment plans and methodologies of other universities and colleges.
4. Review handbooks of assessment if new methodologies are necessary.
5. Develop criteria to guide the choice of methodology.
6. Ensure that the technical qualities of reliability and validity are present, and evaluate costs,
feasibility, etc.
5. Reporting and using results: After conducting assessment it is essential to study the results,
disseminate them, and act on the findings. Once assessment results are available, the measured
outcomes should be compared with the expected outcomes. If they are not aligned, recommendations
from the findings can address specific steps to improve the outcomes. This step also explores ways of
describing and understanding the results and their implications. For example, results may be used to improve
instruction; to initiate curriculum discussion among faculty; to implement revision as necessary; or to
provide data for reporting to outside accrediting agencies. A systematic approach to assessment helps in
refining assessment measures, results in better measures, and provides comparative data for
improvement purposes. A report that presents the results and a process to share the results are the final
products of this step. This includes:
1. Identify the gaps between the measured outcomes and the expected outcomes. These gaps are
the areas on which to focus.
2. Present assessment results in a clear, easy to understand manner.
3. Determine the stakeholders who will receive the information.
4. Identify how stakeholder suggestions and recommendations will be collected, considered and
incorporated into course, program or service improvements.
6. Assessing the assessment program: Re-examine the assessment process on a regular basis to
confirm that the results are valid and reliable, and that they are meeting the needs of the University
community, including stakeholders. If assessment findings are not meeting the needs, the process may
need to be revised. The persons involved are those who have to review the process and recommend changes.
Changes to the process and the reasons for the changes should be documented. A thoughtful review of
the process used for assessment can lead to improvements in the efficiency of the process, the accuracy of the
findings, and the usability of the results.
CURRENT PRACTICES IN ASSESSMENT AND EVALUATION
CONTINUOUS AND COMPREHENSIVE EVALUATION
In the words of the Indian Education Commission (1964-1966): "Evaluation should concern itself with the
pupil's physical development, personality and character, social achievement, academic achievement and
achievements in various types of skills." The National Policy on Education 1986 had also stated that
CCE should incorporate both scholastic and non-scholastic aspects of evaluation spread over the total
span of instructional time.

Six areas in which the teacher requires information for adequate pupil evaluation:

1. Scholastic achievement
2. Special abilities
3. Personal interests and plans
4. Health and physical development
5. Emotional and social adjustment
6. Attitudes, character and personality

Continuous and Comprehensive Evaluation (CCE) refers to a system of school-based evaluation of students
that covers all aspects of a student's development. It is a developmental process which emphasizes twofold
objectives: continuity in evaluation on the one hand and assessment of broad-based learning and behavioural
outcomes on the other. The term 'continuous' is meant to emphasise that evaluation of identified
aspects of students' growth and development is a continuous process rather than an event, built into
the total teaching-learning process and spread over the entire span of the academic session.

Continuous and comprehensive evaluation (CCE) is an educational evaluation approach used in India for
evaluating elementary and secondary school students. It refers to a system of school-based evaluation of
students that covers all aspects of students' development. The assessment system is designed to replace
standardized board examination testing by evaluating students on academic and personal
progress from the start of their education to its completion, or kindergarten through high school
graduation. Teachers evaluate scholastic performance, arts and sports involvement, and personal and
social development. It is a developmental process of assessment which emphasizes twofold
objectives: continuity in evaluation on the one hand and assessment of broad-based learning and
behavioural outcomes on the other. Continuous and comprehensive evaluation is an approach that
aims at assessing those attributes which cannot be assessed through one-attempt written
examinations. Evaluation is the process of finding out the extent to which the desired changes have
taken place in the pupil.

Continuous evaluation helps in bringing awareness of the achievement to the child, teachers and
parents from time to time. They can look into the probable cause of the fall in achievement if any, and
may take remedial measures of instruction in which more emphasis is required. Many times, because of
some personal reasons, family problems or adjustment problems, the children start neglecting their
studies, resulting in a sudden drop in their achievement. If the teacher, child and parents do not come to
know about this sudden drop in the achievement and the neglect in studies by the child continues for a
longer period then it will result in poor achievement and a permanent deficiency in learning for the
child. The major emphasis of CCE is on the continuous growth of students, ensuring their intellectual,
emotional, physical, cultural and social development, and it is therefore not merely limited to
assessment of the learner's scholastic attainments. It uses assessment as a means of motivating learners in
further programmes, of providing information for feedback and follow-up work to improve learning
in the classroom, and of presenting a comprehensive picture of a learner's profile.

Concept and meaning of CCE


• It means regularity of assessment, frequency of unit testing, diagnosis of learning gaps, use of
corrective measures, retesting and feedback of evidence to teachers and students for their self
evaluation.
• The second term `comprehensive' means that the scheme attempts to cover both the scholastic and
the co-scholastic aspects of students' growth and development.
• Since abilities, attitudes and aptitudes can manifest themselves in forms other than the written word,
the term refers to the application of a variety of tools and techniques (both testing and non-testing) and
aims at assessing a learner's development in areas of learning such as:
(1) Knowledge
(2) Understanding/Comprehension
(3) Applying
(4) Analyzing
(5) Evaluating
(6) Creating
• The scheme is thus a curricular initiative, attempting to shift emphasis from testing to holistic
learning.
• It aims at creating good citizens possessing sound health, appropriate skills and desirable qualities
besides academic excellence.
• It is hoped that this will equip the learners to meet the challenges of life with confidence and
success.
• Improvement in learning, diagnosis of weaknesses and provision of remedial measures.
• Scholastic and non-scholastic aspects of pupil growth are evaluated. CCE is a formal evaluation in
school carried out by the teacher.
• It uses multiple techniques of evaluation such as written tests, oral tests, observation techniques,
interviews, practical tests, etc.
The objectives of CCE are :

• To help develop cognitive, psychomotor and affective skills.


• To lay emphasis on thought processes and de-emphasise memorization.
• To make evaluation an integral part of the teaching-learning process.
• To use evaluation as a quality control device to maintain the desired standard of performance.
• To use evaluation for improvement of students' achievement and of teaching-learning strategies on
the basis of regular diagnosis followed by remedial instruction.
• To determine social utility, desirability or effectiveness of a programme and take appropriate
decisions about the learner, the process of learning and the learning environment
• To make the process of teaching and learning a learner-centered activity.

PRINCIPLE OF CONTINUITY
• The term continuous refers to regularity in assessment. Evaluation is a continuous process which is an
integral part of teaching.
• Evaluation goes on constantly during lessons and units and is clearly related to the teacher's goals
and point of view on the teaching of the subject.
• It makes the students regular and punctual and leads them to work systematically for the whole academic year.
• Both the teaching-learning process and the evaluation procedure go on together, e.g. language
learning.
PRINCIPLE OF COMPREHENSIVENESS
• The term comprehensive refers to both scholastic and non-scholastic areas of pupil growth.
Evaluation is based on the principle of comprehensiveness.
• Thorough assessment of the personality of the student.
• It is different from examinations. An examination only tests knowledge, skills and abilities in a
systematic way.
Following are the constituents of evaluation:
1. Formulation of aims.
2. Changes in the behaviour pattern of the pupils through these aims.
3. Use of reliable tools to observe the behaviour pattern, knowledge and skills.

Features of CCE are:


• The ‘continuous’ aspect of CCE takes care of the ‘continual’ and ‘periodicity’ aspects of evaluation.
• Continual means assessment of students in the beginning of instruction (placement evaluation) and
assessment during the instructional process (formative evaluation) done informally using multiple
techniques of evaluation.
• Periodicity means assessment of performance done frequently at the end of a unit/term (summative evaluation)
• The ‘comprehensive’ component of CCE takes care of assessment of all round development of the
child’s personality. It includes assessment in Scholastic as well as Co-Scholastic aspects of the pupil’s
growth.
• Scholastic aspects include curricular areas or subject-specific areas, whereas co-scholastic aspects
include Life Skills, Co-Curricular Activities, Attitudes, and Values.
• Assessment in scholastic areas is done informally and formally using multiple techniques of
evaluation continually and periodically. The diagnostic evaluation takes place at the end of a unit/term
test. The causes of poor performance in some units are diagnosed using diagnostic tests. These are
followed up with appropriate interventions followed by retesting.
• Assessment in Co-Scholastic areas is done using multiple techniques on the basis of identified criteria,
while assessment in Life Skills is done on the basis of Indicators of Assessment and checklists.

The functions of CCE are:


• It helps the teacher to organize effective teaching strategies.
• Continuous evaluation helps in regular assessment of the extent and degree of the learner's progress
(ability and achievement with reference to specific scholastic and co-scholastic areas).
• Continuous evaluation serves to diagnose weaknesses and permits the teacher to ascertain an
individual learner’s strengths and weaknesses and her needs. It provides immediate feedback to the
teacher, who can then decide whether a particular unit or concept needs re-teaching in the whole class
or whether a few individuals are in need of remedial instruction.
• By continuous evaluation, children can know their strengths and weaknesses. It provides the child a
realistic self assessment of how he/she studies. It can motivate children to develop good study habits, to
correct errors, and to direct their activities towards the achievement of desired goals. It helps a learner
to determine the areas of instruction in which more emphasis is required.
• Continuous and comprehensive evaluation identifies areas of aptitude and interest. It helps in
identifying changes in attitudes, and value systems.
• It helps in making decisions for the future, regarding choice of subjects, courses and careers.
• It provides information/reports on the progress of students in scholastic and co-scholastic areas and
thus helps in predicting the future successes of the learner.

ADVANTAGES OF CONTINUOUS AND COMPREHENSIVE EVALUATION

1. Ascertains the progress of the students.
2. Inspires and motivates students.
3. Diagnoses the weaknesses of the students.
4. Gives an indication of the interests of the pupils.
5. Helps in gradation of the students.
6. Helps in providing individual attention.
7. Helps in achieving the aims.
8. Improves instruction.

DISADVANTAGES

1. Personal prejudices and subjectivity are likely to creep in, and this may adversely affect the
quality of assessment.
2. Lack of basic infrastructure facilities in the school may negatively affect proper assessment.
3. Its reliability and validity are questionable in view of several elements of subjectivity.
4. It cannot replace standardised achievement tests.
5. It requires a lot of time and expenditure.
6. It requires honest and sincere teachers.
7. Lack of enthusiasm and interest on the part of the teachers may adversely affect assessment.

GRADING SYSTEM - Grading in education is the process of applying standardized measurement of
varying levels of achievement in a course. Grades can be assigned as letters (e.g. A, B, C, D, E).
The grading system calls for awarding letter grades to students for their educational achievements instead
of declaring them passed or failed or assigning numerical marks on a scale.
5-point scale categories i.e.
A= excellent;
B= good;
C= average;
D= below average;
E= poor.
When students' levels of performance are thus classified into a few classificatory units using letter
grades, the system of assessment is called a grading system. It is simpler to assign grades than exact
numerical marks, which involve biases and subjectivity. Grading reduces subjectivity and unreliability
on the part of examiners. The grading system provides scaling of the evaluation on a uniform basis.

Characteristics
➢ Students' performance in scholastic areas is categorized into point grades.
➢ References to pass and fail are not made.
➢ Ranks and classes are not included.
➢ Students are allowed to improve their grades.
➢ Students' levels of performance are classified into a few classificatory units.
➢ Fundamentally a grade is a score.
➢ Grading is considered to be a more scientific way of evaluation.
➢ It identifies a student's performance level within a wide range.
➢ In grading, classification is made on a 5-point or 9-point scale.

Advantages
➢ In the grade system only 5-point or 9-point scales are adopted, so the system is
comparatively more reliable.
➢ The achievement of different students can be easily compared.
➢ The achievement of a student in different subjects can be known separately.
➢ Differences in the difficulty level of subjects are eradicated in this system.
➢ Educational abilities increase systematically with chronological age.

Disadvantages
➢ It lacks a clear and generally accepted meaning.
➢ There is often insufficient, relevant and objective evidence to use as a basis for assigning a
grade.
➢ It is difficult to classify students in terms of their performance on tests when there is a
large number of classificatory units.
➢ Scholars are not of uniform opinion about it.
➢ It is difficult to compare grades awarded on different grade scales.
➢ The system is very sensitive.
➢ It is too subjective, like the numerical marking system.

TYPES OF GRADING
➢ DIRECT GRADING - It is the process by which any given phenomenon, with respect to
each individual of the group concerned, is adjudged by the evaluator in terms of the most
appropriate letter grade only, without assigning scores. In examination situations, this would
involve awarding a particular grade to the answer to each individual question, on the basis of
its quality as judged by the evaluator.
Advantages of direct grading
• Simplifies the process of assessment
• Makes a raw assessment on a raw scale
• Uses a uniform scale for the assessment of quality
• Separates assessment of quality and range
Disadvantages of direct grading
From the practical point of view, this process is not feasible for large scale examinations as in
our universities and boards.

➢ INDIRECT GRADING - It is the process of awarding grades through marks. In this procedure
marks are first awarded as usual: marks are awarded to the individual questions on the
basis of the prescribed marking scheme and the total score for the paper is arrived at. The
conversion of marks into grades, which is a technical matter, can be done in two ways:
absolute grading and relative grading.
ABSOLUTE GRADING In absolute grading some fixed range of scores is determined in advance for
each grade. On the basis of this, the score obtained by a candidate in a subject is converted to the grade
concerned.
Advantages of absolute grading
• The performance of the students will not be affected by the performance of the whole
class.
• It promotes co-operation among the students
• All students may pass the subject or course when they meet the standard set by the
teacher or institution.
Disadvantages of absolute grading
 The fixed score ranges are set in advance and may be arbitrary; they take no account of the
difficulty of a particular question paper.
 A small difference in marks near a grade boundary can change the grade awarded.
 The grades give no information about a student's standing relative to the group.
RELATIVE GRADING - In relative grading, the grade range is not fixed. It can vary in tune with
the relative position of the candidates in the group that wrote the examination. Suppose a group is to be
divided into five grades A, B, C, D, E in the subject concerned, on the basis of the scores
obtained by the examinees in an examination. This system amounts to a continuous assessment of
students' relative performance. It builds competitiveness in students to stand out relatively, unlike the
absolute grading system. It accommodates differences in content quality in various institutions depending upon the
teacher and the resources available. It is a good option for students competing for the same position in
an exam.
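The conversion in absolute grading can be illustrated with a short Python sketch. The score ranges and the function name below are assumptions chosen for illustration only, not boundaries prescribed by any board or university; relative grading, which depends on rank within the group, is sketched later under grading on the curve.

# A minimal sketch of absolute grading: fixed score ranges are decided in
# advance and each candidate's score is converted independently of the group.
# The boundaries used here are illustrative assumptions only.

def absolute_grade(score):
    """Convert a mark out of 100 to a letter grade using pre-fixed ranges."""
    if score >= 90:
        return 'A'
    if score >= 75:
        return 'B'
    if score >= 60:
        return 'C'
    if score >= 45:
        return 'D'
    return 'E'

scores = [92, 81, 67, 55, 43]
print([absolute_grade(s) for s in scores])  # ['A', 'B', 'C', 'D', 'E']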
GRADE POINT AVERAGE (GPA): All grades from all current classes are combined to create a
Grade Point Average (GPA). The GPA is a raw score average based on the letter
grades you make each semester. Each letter grade is assigned a numerical value from 0-4 or 5 points,
depending on your institution's scale. To calculate GPA, you basically need to find your grading
scale, translate each letter grade to the corresponding numerical value within the scale, then average those
values to find your current GPA. GPA is calculated by dividing the total number of grade points a student
earned in a given period of time by the total number of grades taken.
GPA = Total grade points / No. of grades
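As a worked illustration of the formula above, here is a minimal Python sketch. The letter-to-point mapping (A=4, B=3, and so on) is an assumed example, since the actual scale differs between institutions.

# A sketch of GPA = total grade points / number of grades, assuming a 4-point
# scale in which A=4, B=3, C=2, D=1 and E=0 (institutional scales vary).

GRADE_POINTS = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'E': 0}

def gpa(letter_grades):
    """Average the grade points corresponding to the letter grades earned."""
    points = [GRADE_POINTS[g] for g in letter_grades]
    return sum(points) / len(points)

print(gpa(['A', 'B', 'B', 'C']))  # (4 + 3 + 3 + 2) / 4 = 3.0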
WEIGHTED GRADE POINT AVERAGE (WGPA): In some schools, especially for college courses,
each course has a number of credit hours. Credit hours are units schools use to measure the workload.
Generally, credit hours are based on the mode of instruction, the number of hours spent inside the classroom,
and the number of hours spent studying outside the class. Find out the number of credit hours assigned to
each course you are taking. Multiply each grade's scale value by the number of credit hours to get the
weighted grade points. Add the weighted grade points for all of your classes together to calculate your
total weighted grade points. Add together the number of credit hours you have taken in total to get the total
credits. Divide the total weighted grade points by the total credits to get the WGPA.
WGPA = Total weighted grade points / Total Credits
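A small sketch of the same calculation; the grade points and credit hours below are made-up examples.

# WGPA = sum(grade point x credit hours) / sum(credit hours)

def weighted_gpa(courses):
    """courses: list of (grade_point, credit_hours) pairs for one term."""
    total_weighted = sum(gp * ch for gp, ch in courses)
    total_credits = sum(ch for _, ch in courses)
    return total_weighted / total_credits

# e.g. an A (4.0) in a 3-credit course and a C (2.0) in a 4-credit course
print(round(weighted_gpa([(4.0, 3), (2.0, 4)]), 2))  # (12 + 8) / 7 = 2.86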
Cumulative Grade Point Average (CGPA) refers to the overall GPA, which is obtained by dividing the
number of quality points earned in all courses attempted by the total degree-credit hours of all
attempted courses. Your semester/term GPA is your Grade Point Average for that one term or
semester. Your cumulative GPA is your grade point average for all attempted courses in the program.
How to calculate the Cumulative GPA (CGPA):
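A minimal sketch of the cumulative calculation, taking the quality points of a course to be its grade point multiplied by its credit hours; all figures below are invented examples.

# CGPA = total quality points / total credit hours across all attempted courses.

def cumulative_gpa(terms):
    """terms: list of term course lists; each course is (grade_point, credit_hours)."""
    courses = [course for term in terms for course in term]
    quality_points = sum(gp * ch for gp, ch in courses)
    credit_hours = sum(ch for _, ch in courses)
    return quality_points / credit_hours

semester1 = [(4.0, 3), (3.0, 4)]   # quality points: 12 + 12
semester2 = [(2.0, 3), (3.0, 2)]   # quality points: 6 + 6
print(cumulative_gpa([semester1, semester2]))  # 36 / 12 = 3.0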

Classification of learners according to their level of performance in the Grading system (by giving letter grades
such as A+, A, B+, B, etc.)
Classification of learners according to their level of performance in the grading system can be done in two ways.
Grading on the curve

The procedure is as follows:

• Rank order students’ overall scores


• Set the percentages of letter grade As, Bs, Cs and so on that a student can fall into

• Divide the range of a normal curve into intervals

• E.g. top 20% of students get A, next 30% get B, next 30% get C, next 15% get
D, lowest 5% get F

• Record the grades according to these set grade boundaries

• This method can be arbitrary, and does not give students or their parents any reference to
the learning targets. However, it can be useful when there is a sound argument to justify the
particular percentages used (a rough sketch of the procedure follows below).
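The sketch below illustrates the rank-and-percentage procedure. The 20/30/30/15/5 split follows the example percentages above, the function name is an assumption for illustration, and with small classes the percentages can only be matched approximately.

def grade_on_curve(scores):
    """Rank-order overall scores and assign letter grades by fixed percentages:
    top 20% A, next 30% B, next 30% C, next 15% D, lowest 5% F (example split)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n = len(scores)
    bands = [(0.20, 'A'), (0.50, 'B'), (0.80, 'C'), (0.95, 'D'), (1.00, 'F')]
    grades = [None] * n
    for rank, i in enumerate(order):
        fraction = (rank + 1) / n          # proportion of the class at or above this rank
        for cutoff, letter in bands:
            if fraction <= cutoff + 1e-9:  # small tolerance for floating point
                grades[i] = letter
                break
    return grades

print(grade_on_curve([88, 76, 75, 69, 60, 58, 52, 44, 31, 20]))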

Grading using a pre-fixed grading scale: In this method the scale for grading is pre-fixed. The
scale may show the grades corresponding to the different ranges of percentage scores, total points
or CGPA. The different types in this method are:

The fixed percentage method

The fixed percentage method is probably one of the most common systems used. To do this:

• Give a percentage correct score for each student for each task

• Multiply each task’s percentage by its corresponding weight and add these products
together

• Divide the sum of products by the sum of weights to get a composite percentage score

• Translate this final score to a letter grade (a common scale is: above 95% is A+, 95% to
90% is A, 90% to 80% is B, etc.) based on the earlier fixed scale.

• Here, the relationship between percent and grade is arbitrary; it is helpful to follow any
existing school policy.

We may also have to adjust for task difficulty; if a particular assignment or assessment is
terribly difficult, all students may receive a low percentage score. This is one reason why it is
often better not to use pretests for grading purposes in such a system.
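A brief sketch of the fixed percentage method follows; the task weights and letter-grade boundaries used here are assumed example values, to be replaced by whatever the school policy fixes.

def composite_percentage(task_percentages, weights):
    """Weighted average of the percentage-correct scores for each task."""
    total = sum(p * w for p, w in zip(task_percentages, weights))
    return total / sum(weights)

def letter_from_percentage(pct):
    """Pre-fixed scale: above 95 is A+, 90-95 A, 80-90 B, and so on (example only)."""
    if pct >= 95: return 'A+'
    if pct >= 90: return 'A'
    if pct >= 80: return 'B'
    if pct >= 70: return 'C'
    if pct >= 60: return 'D'
    return 'E'

# Three tasks weighted 1 : 2 : 3 (say an assignment, a unit test and an exam)
pct = composite_percentage([82, 76, 91], [1, 2, 3])
print(round(pct, 1), letter_from_percentage(pct))  # 84.5 B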

The total points method

The total points method is quite similar to the fixed percentage method.

• Assign a maximum point value for each task

• Sum these maximum points to get the maximum possible total

• Sum the points each student actually earns across the tasks

• Translate this earned total to a letter grade using the class boundaries given in the grading
scale, set relative to the maximum possible total (e.g. 10 to 9 is grade A, 9 to 8 is grade B, etc.)
• This system is easy to adjust by having students redo and revise assignments, or by
giving extra credit points to students who wish to improve their final grade
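A short sketch of the total points method; the task maxima and the grade boundaries (expressed here as fractions of the maximum possible total) are illustrative assumptions.

def total_points_grade(points_earned, points_possible):
    """Sum the points a student earned and place the total against boundaries
    fixed relative to the maximum possible total (example boundaries only)."""
    earned = sum(points_earned)
    maximum = sum(points_possible)
    fraction = earned / maximum
    if fraction >= 0.9: return earned, 'A'
    if fraction >= 0.8: return earned, 'B'
    if fraction >= 0.7: return earned, 'C'
    if fraction >= 0.6: return earned, 'D'
    return earned, 'E'

# Four tasks worth 20, 20, 30 and 30 points
print(total_points_grade([18, 15, 27, 24], [20, 20, 30, 30]))  # (84, 'B')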

The CGPA method

In this method the CGPA is found out and then it is converted into grades using the scale.

E.g.:

CGPA Range      Letter Grade    Overall Performance
3.50 to 4.00    A               Excellent
2.50 to 3.49    B               Very Good
1.50 to 2.49    C               Good
0.50 to 1.49    D               Average
0.00 to 0.49    E               Unsatisfactory
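A small sketch that applies the example scale tabulated above to convert a CGPA into an overall grade.

CGPA_SCALE = [          # (lower bound, letter grade, overall performance)
    (3.50, 'A', 'Excellent'),
    (2.50, 'B', 'Very Good'),
    (1.50, 'C', 'Good'),
    (0.50, 'D', 'Average'),
    (0.00, 'E', 'Unsatisfactory'),
]

def overall_grade(cgpa):
    """Return the letter grade and descriptor for a CGPA using the scale above."""
    for lower, letter, performance in CGPA_SCALE:
        if cgpa >= lower:
            return letter, performance
    return 'E', 'Unsatisfactory'

print(overall_grade(3.1))  # ('B', 'Very Good')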


UNIT II: Tools and Techniques to assess Learner’s Performance
---------------------------------------------------------------------------------------------------------
General Techniques of Assessment - Observation, projects, assignments, worksheets, practical work,
seminars and reports, Interview, Self reporting.
Tools of Assessment - tests, checklist, rating scale, cumulative record, questionnaire, inventory,
schedule, anecdotal record - concept, merits, demerits - relevance in the field of research.
Characteristics of a good evaluation tool - validity, reliability, objectivity and practicability.
Norm-referenced tests and Criterion-referenced tests.
Diagnostic Test and Achievement Test - Concept, Purpose and Distinction between the two tests, Steps
involved in the construction of an Achievement test and Diagnostic test, Types of items - Objective
type, Short answer type and Essay type, Item analysis - concept, Teacher-made and Standardized
Achievement tests.
Online examination/Computer-based Examination, Portfolio assessment and Evaluation based on Rubrics.
-------------------------------------------------------------------------------------------------------------------------------------
General Techniques of Assessment-
Observation Observation means watching things with a purpose. Observation may be defined as a systematic
viewing of a specific phenomenon in the proper setting for the specific purpose of gathering data for a
particular study. Knowledge can be acquired through the use of sense organs. Observation has three
components, namely sensation, attention, and perception. The observation method is also called the classical
method of scientific research.
Observation is a purposeful, systematic and selective way of watching and listening to an interaction or
phenomenon as it takes place. Example : To learn about the interaction in a group and study the behaviour or
personality traits of an individual. Observation is a more natural way of gathering information. Restrictions
imposed in questionnaire or interview are missing in observation. Data collected through observation may be
often more real and true than data collected by any other method.

Features Of Observation
 Direct method
 Primary data
 Deep study
 Relation between researcher and respondent
 Selective and purposeful study
 Use of sense organs
 Observation is carefully planned , systematic and perceptive.
 Observers are aware of the wholeness of what is observed.
 Observers are objective.
 Observations are carefully and expertly recorded.
 Observations are collected in such a way as to make sure that they are valid and reliable.

Process Of Observation
 Preparation and training
 Entry into study environment
 Recording of observation
 Termination of field work
Kinds Of Observation
 Controlled - introducing a stimulus to the group for it to react to and observing the reaction; and
uncontrolled or natural observations - observing a group in its natural operation rather than
intervening in its activities.
 Structured - observing in a very systematic way using a schedule; the things to be observed are
pre-determined. In unstructured observations the things to be observed are not pre-determined;
whatever comes in the way is observed.
 Participant - the researcher participates in the activities of the group being observed in the same
manner as its members; and non-participant observations - the researcher does not get involved in
the activities of the group but remains a passive observer.
 Direct - using our sense organs; and indirect observations - using mechanical devices.
Advantages of observation
 Directness is the most important advantage of observation method
 It is one of the cheaper and more effective techniques of data collection
 Subjective bias is eliminated, if the observation is done accurately
 The information obtained under this method relates to what is currently happening
 Data collected is very accurate in nature and also very reliable.
 Problem of depending on respondents is decreased.
 By using good and modern gadgets, observations can be made continuously and also for a longer
duration of time.
 By observation, one can identify a problem by making an in-depth analysis of it.

Disadvantages of observation method


 It is a time-consuming and expensive method; it is a slow and laborious process.
 Unforeseen factors may influence the observation
 It is not used for studying past events
 This method is not suitable for studying opinions and attitudes
 It can't be applied in situations where the size of the sample is large
 When individuals or groups become aware that they are being observed, they may change their
behaviour.
 There is always the possibility of observer bias.
 The interpretations drawn from observations may vary from observer to observer.

PROJECTS - The term project is derived from the Latin word ‘projectum’ meaning ‘something prominent’. It
is used for the evaluation of scholastic skills.
DEFINITION OF PROJECT
 J.A Stevenson – A project is a problematic act carried to completion in its natural setting.
 Snedden – Project is a unit of educative work in which the most prominent feature is some form of
positive and concrete achievement.
 According to W.H. Kilpatrick, "A project is a wholehearted purposeful activity proceeding in a social
environment."

CHARACTERISTICS OF PROJECT
➢ A project is a :
 Problematic act
 Purposeful activity
 Whole-hearted activity
 An activity in a natural setting
 An activity in social setting
 It is a bit of real life introduced in school.
 Project is a problem solving of practical nature.
 It is a positive and concrete achievement.
 It is an activity through which solutions of various problems are found out.
TYPES OF PROJECT
W.H Kilpatrick mentions four types of projects:
➢ ‘The producer type’
➢ ‘The consumer type’
➢ ‘The problem type’
➢ ‘The drill type’
STEPS IN A PROJECT
 Providing a situation
 Purposing
 Planning
 Executing the plan
 Judging
 Recording
ESSENTIALS OF A GOOD PROJECT
 Timely
 Usefulness
 Interesting
 Challenging
 Economical
 Rich in experience
 Project should be purposeful and complete in itself.
 Project should be aimed at problem solving.
 It should be feasible.
 The undertaking should be complete in itself.
 Learning activity is life-like, purposeful and natural.
 Learners plan and direct their own activity.
 The complexity of a project is important for its success or failure.
CRITERIA FOR EVALUATING PROJECT
 Ability to plan appropriately
 Data collection
 Analysis & Interpretation
 Presentation of report
 Timeliness
 Creativity
 Concepts and thoughts
 Understanding about the topic
 Workmanship and display
 Clarity of explanation

ASSIGNMENTS- An assignment is a job, a piece of work, or a task given out by a teacher to an individual
pupil or to the class. Assignment as a teaching device is widely used in modern schools because assignments
help the students to develop a habit of self-learning. The nature and style of assigning assignments to students
by a teacher also helps the students to develop insight into the possible pros and cons of a problem on a
particular topic in a subject. Assignments can be individual or group assignments. They should be definite, clear,
adjusted to the needs of the pupils, interesting and effective.
A good assignment assigned by a teacher depends on factors such as:
▪ Laying out a task to be performed
▪ Fitting to the task a suitable procedure for accomplishing the task
▪ Teacher's guidance and the pupils' will to accept the task and do it accordingly
▪ The assumption that effective learning takes place as a result of self-imposed pupil activity
The importance of assignments
❖ Provides for the arousal of interest
❖ Makes success reasonably sure.
❖ Independent study is not possible without good assignments.
❖ Stimulate thinking.
❖ Encourage initiative.
❖ Clear up misunderstanding.
❖ Strengthen morale.
❖ Develop insight.
❖ Motivation for study.

FUNCTIONS OF ASSIGNMENTS
➢ Reinforcement of learning.
➢ Initiate dialogue / pedagogical interactions.
➢ Continuous assessment.
➢ Student learning.

Characteristics of the good Assignment


❖ It is clear and definite.
❖ It should be motivating, stimulating and interesting.
❖ It directs the learning activity.
❖ It removes difficulties.
❖ It takes into account previous learning.
❖ It recognizes individual differences.
❖ It is stimulating.
❖ It should be in syllabus
❖ Clear objectives
❖ Brief
❖ Stimulate reflective thinking
❖ It should suit the age, ability and interests of students
❖ It emphasizes essentials.
❖ It develops insight and understanding.
❖ It should be Purposeful and Relevant

Types of assignments
1) Tutor-Marked Assignments: These rely on long answer, short answer, essay type and problem-solving
questions set by the course team or the course writer of the faculty concerned.
2) Computer-Marked Assignments: These consist of objective type questions. They test the abilities of
students to recognize or recall certain facts, patterns and information or manipulate specific argument in
the course material.
3) Old Type Assignments: These include page, paragraph, topic, theme, exercise, question and experiment assignments.
They were too brief and too indefinite to stir up interest and arouse pupils to effort.
4) New Type Assignments: These differ in form and purpose from the old type. They are unified, clear,
stimulating, directive, challenging and require the exercise of much more skill and more definite
preparation.
5) The Home Assignments: Complex assignments that require great independence and ingenuity in
devising ways and means, or a highly developed power of independent thinking, and which are not done in class.
6.) Class Assignments
7.) Individual Assignments
8.)Group Assignments
Another classification of Assignments are
1. Study type
2. Memorization type
3. Informative type

STEPS IN ASSIGNMENT MAKING


• Reference to a previous experience.
• Discussion.
• Proposal of a new activity.
• Explanations and clearing up of activity.
• Outlining materials to be used.
• Distributing the tasks to be done
How to evaluate an assignment
A modified version of the schedule suggested by the SCERT for the evaluation of an assignment is
given below:

WORKSHEETS -Worksheet commonly refers to a sheet of paper with questions for students and places to
record answers. A worksheet is an instructional tool that allows a learner to put concepts and ideas into
practice. A worksheet may be used to help a student practice a mathematical process, connect ideas, review
key points from a reading and more. Worksheets are used for a variety of learning needs. In mathematics,
worksheets are commonly used to give students the chance to practice mathematical operations under a variety
of conditions. They may also be used to provide a framework for students to identify key events after reading a
historical text and then to learn how those events led to the eventual outcome. In business, a worksheet can
provide a framework for the learner to pull together key data points to evaluate a situation and guide decision-
making. In a spreadsheet context, worksheets contain the data that you want to analyze, together with a number of
components to help you analyze the data. For example, a worksheet can contain parameters, totals,
percentages, exceptions, and calculations. Where a worksheet contains several exercises, all of them should
relate to the same topic – preferably in terms of both topic and content. This ensures that learners will not be
overburdened or distracted. Furthermore, intensive engagement with a single topic anchors the content more
firmly in the memory.

A printed page that a child completes with a writing instrument. No other materials are needed.

• multiple choice questions


• matching exercises
• handwriting practice
• coloring pages
• math problems
• fill-in-the-blank book reports
• word searches and crossword puzzles
• copywork

Worksheets are not

• A data sheet — for example, children recording their findings on paper during water science experiments or
magnet sensory play.
• An activity sheet using stickers or other manipulatives, such as dot sticker pages
• a printable used for pre-writing or organization of thoughts
• A sheet that provides cutting practice
• A play dough mat

A good worksheet follows a set structure:

• A header containing general information which places the worksheet within the lesson context.
• Clear instructions.
• The exercise itself, including illustrations, highlighted points and sufficient space for answers.
• Possibly additional exercises, each with their own instructions.

Advantages: Worksheets that are well prepared will promote students' thinking and will be interesting for the
students. They may serve as a supplement to class and home work. Worksheets help learners to engage more
thoroughly with learning – both in the classroom and at home. The advantage of worksheet software is that
pre-defined structures and functions can help save time, so teachers can quickly and effectively design worksheets
which will benefit their students. Worksheets can keep most of a class busy with minimal effort by the teacher, and
they are easy to grade. It is possible that they are effective if the goal is rote memorization of algorithms.
They can be non-threatening and cause less anxiety for struggling students, who can feel successful repeating
one process over and over.

Disadvantages: A worksheet is likely to ask only the questions that the worksheet creator considers important. A
student might have a really creative idea or interpretation of a concept, but the worksheet might not ask about
that part; therefore, the student never gets to voice his or her unique viewpoint. Preparation of worksheets
can be a difficult and time-consuming process, and an average teacher might find it difficult to develop good
worksheets. Ineffectively prepared worksheets might serve only as drill practice for students and will not
promote students' thinking.

INTERVIEW
Interview is a method of child study in which the teacher has close proximity to the child. Mrs. P.V.
Young defines it thus: "The interview may be regarded as a systematic method by which a person enters more or
less imaginatively into the inner life of a comparative stranger." It is a meeting of people face to face,
especially for consultation.
Interview is a face to face conversation. The interview is used very extensively in every field of
educational research. In an interview, a social scientist, or someone authorized by him, questions people about various
things. An interview is a direct method of inquiry. The purpose of the interview, however, is not to collect
superficial details about the interviewee, but rather to probe into the inner life of the interviewee. Therefore,
the method of interview is a direct as well as a depth study. "An interview is a conversation between two or more
people where questions are asked by the interviewer to elicit facts or statements from the interviewee."
Interviews are a useful method to discover how individuals think and feel about a topic and why they
hold certain opinions. Interviewing is a very time-consuming process. When the interview process is very
short, the candidate can potentially feel nervous or anxious in the interview, leading to them not getting the job.
In an interview all formalities are laid down and the gate is opened for delving into the intellectual, emotional,
and subconscious stirrings of the interviewee. The chief characteristics of an interview are:
❖ It is a close contact or interaction including dialogue between two or more persons.
❖ It has a definite object such as knowing the views and ideas of others .interview is an interactional
process.
❖ Interview can be conducted over the telephone also.
❖ Interview method enables to study the social problems.
❖ It is a direct method of collecting data
❖ It is based on interview
❖ It is a verbal method of securing data in the field of survey
❖ It is a method of social interaction
❖ “The interview is a systematic method by which a person enters more or less imaginatively into the life
of a comparatively stranger”
OBJECTIVES OF INTERVIEW
❖ To establish direct contact
❖ Interview are useful to exchange ideas and to elicit intimate facts and information.
❖ Interview method helps collecting information about unknown facts through face to face contact.
❖ To test or develop hypotheses. Through interviews we can formulate hypotheses. A hypothesis implies
forming propositions about various facts.
❖ Social facts are qualitative. They are found in the form of ideas, feelings, views, faith, convictions
etc. Through interviews it is possible to collect information about such qualitative facts.
❖ To verify unique ideas
❖ To evaluate or assess a person in some respect.
❖ To select or promote an employee.
❖ To effect therapeutic change, as in the psychiatric interview.
❖ To gather data, as in surveys or experimental situations.
❖ To sample respondents opinions, as in doorstep interviews.

TYPES OF INTERVIEW
Classification according to formalness
➢ Formal Interview- The teacher or the interviewer presents a set of well defined questions.
➢ Informal Interview- The teacher or the interviewer has full freedom to make suitable alterations in the
question to suit a particular situation
Classification based on style of interviewing
➢ Structured Interview- Formal in nature, Results are often used to make generalizations, Prearranged
schedule of questions which are short, direct, and capable of simple answers
➢ Semi-Structured Interview- More flexible version of structured interview, Provides opportunities to
probe and expand the interviewee’s responses, Allows a deviation from prearranged set of questions
➢ Unstructured Interview - Presupposes nothing about the direction of interview, Follow the
interviewee’s flow of ideas, Respondents develop their own ideas, feelings, expectations or attitudes.
May throw up unexpected findings
CLASSIFICATION BASED ON PURPOSE
• Survey interview
• Diagnostic interview
• Therapeutic interview
• Counselling interview
According to the number.
▪ Personal interview.
▪ Group interview .
According to subject matter.
▪ Qualitative interview.
▪ Quantitative interview.
▪ Mixed interview.
CONDITIONS FOR SUCCESSFUL INTERVIEW
Gardner has pointed out three conditions for successful interviewing
• Accessibility
• Understanding
• Motivation
THE PROCESS OF INTERVIEW
 Preparation of the interview
 Introduction of the interviewer to the respondents
 Developing rapport
 Carrying the interview forward
 Recording the interview
 Closing the interview

ADVANTAGES
• Direct and deep research
• Knowledge of past and future
• Mutual encouragement
• Examination of known data
• They are useful to obtain detailed information about personal feelings, perceptions and opinions.
• They allow more detailed questions to be asked.
• They usually achieve a high response rate.
• Respondents own words recorded.
• Ambiguities can be clarified and incomplete answers followed up.
• Interviews are not influenced by others in the group
• It allows you to gauge the person more so that simply reading a resume.
• The interview is more appropriate for complex situations.
• It is useful for collecting in depth information.
• Information can be supplemented.
• Questions can be explained.
• Interviewing has a wider application.
• Personal information can be obtained
• The interview can be conducted in the language in which respondents can reply
• Interviewer can regulate the interview
DISADVANTAGES
• Defects due to interviewee and Prejudices of interviewer
• Difference in mental outlook of the interviewer and the interviewee
• Art rather than science
• The quality of data depends upon the quality of the interaction and the interviewer.
• Emotionalism
• The quality of data may vary when many interviewer s are used.
• The interviewer may be biased
• Difficulty in persuading the interviewee to give the right answers. The presence of the interviewer
might influence the interviewee in a positive or negative way.
• They can be very time consuming :setting up , interviewing, transcribing, analyzing, feedback,
reporting. They can be costly also.
• Different interviews may understand and transcribe interviews in different ways.
• When the interview process is very short, candidates can potentially feel nervous or anxious in the
interview, leading to them not getting the job.
• Expensive and time consuming
• Inadequate response
• As the information obtained from an interview is on-the-spot data, some of it may be imaginary

REPORTS
A report is an account or statement describing in detail an event or situation, usually as the result of
observation, enquiry, etc. A report is a connected discussion of a topic, generally more or less
extended in character. It requires that the pupil effectively read, organize, plan and deliver the
information which he has gained from investigation and study. Report writing is the
presentation of one's findings in an informative and clear manner.
• It is primarily the gathering and imparting of information.
• It is factual.
• It is appropriate when accurate information necessary to the solution of a problem or the better
understanding of a subject is essential.
• It is a form of activity calculated to develop originality, initiative and improved expression among
pupils.
• It trains the reporter in the gathering and discrimination of information.
FEATURES OF REPORT
1. Complete and compact document
2. Systematic presentation of facts
3. Prepared in writing
4. Self explanatory document
5. Time consuming and costly activity.

PREPARING A REPORT
a) Have a careful outline.
b) Have a good introduction.
c) Arrangement of points in order of their importance.
d) Have a good conclusion.
e) Read over and fix the main ideas.
STRUCTURE OF A REPORT
◼ Title
◼ Introduction
◼ Results
◼ Conclusion.
EVALUATION CRITERIA
1. Relation with lesson
2. Collection of data
3. Ability to formulate idea
4. Conclusion
5. Completeness of report

PRACTICAL WORK - It means tasks in which students observe or manipulate real objects or
materials. Practical work forms the basis of scientific study. To arrive at any conclusion,
experimentation is needed. By practical work we mean tasks in which students observe or manipulate
real objects or materials, or witness a teacher demonstration. Teaching of laboratory skills
enhances the learning of scientific knowledge. It is an integral part of science teaching, work
experience and SUPW. The laboratory is central to science instruction. It is in the laboratory that
the students learn to handle apparatus, think independently, and to draw conclusions on the basis
of experiments and observation. Scientific theories and practical works in science are two sides
of a coin. Without experiments students cannot experience reality.
Practical works can;
➢ Motivate pupils by stimulating interest and enjoyment.
➢ Teach laboratory skills.
➢ Enhance the learning of scientific knowledge.
➢ Give insight into scientific method and develop expertise in using it.
➢ Develop scientific attitudes such as open-mindedness and objectivity.
➢ It fixes learning in the minds of the pupils; as a result, what the pupils learn becomes
permanent.
➢ It satisfies the instincts of curiosity, creativeness and self-expression.
➢ It provides training in scientific method and inculcates scientific attitude among students.
➢ It develops many socially desirable habits.

Objectives of laboratory works


1. Making abstract scientific understandings concrete.
2. Development of scientific concepts and principles.
3. Development of scientific skills, attitudes, interests and appreciation.
4. Training in scientific method.
5. Awakening the maintenance of curiosity in the environment.
6. Hands-on activities support the development of practical skills and help to understand
scientific concepts and phenomena.
7. Learning by doing is one of the cardinal principles of teaching science. It can be achieved
only by doing experimentation.
8. For every practical, students must carry the following things to the laboratory so that they can
perform various types of experiments: a scale, an eraser, a pencil, an auxiliary notebook
and the laboratory notebook.
Importance
1. Learning by doing
2. Training for adjustment
3. Scientific knowledge and scientific outlook
4. Handing objects
5. Development of good habits
6. Satisfaction of curiosity
7. Development of scientific attitude
8. Motivation
Procedure of laboratory work
✓ The science teacher should check the availability of apparatus required for practical
work.
✓ He should assure that apparatus is ready and working before the students enter the
laboratory.
✓ The broken apparatus is noted down in the breakage register.
✓ In some schools experiments are done by all students at the same time; when the number
of students in a class is large, each group is allotted different experiments and the
experiments are cycled among the groups. This has the following limitations:
• There is a possibility that weaker students may copy the results of the brighter students.
• It may become difficult to correlate theory and practical work for all students.
• Different apparatus and chemicals have to be supplied to different groups.
• Guideline rules: In order to make practical work effective, the laboratory should be made
a place of learning by doing. Guidelines should be given by the teacher about the
laboratory rules, such as:
• Work area must be cleared
• Strict attention should be paid on work
• Wastage of gas, electricity, water should be strictly avoided
• Directions should be read and followed very carefully
• Teachers should allow the students entry in lab in his or her presence
• Only those experiments should be done which are recommended by the teacher in charge
• There should be coordination between theoretical and practical works.
• The introduction to practical work should be interesting and should enthuse the students to
work independently and to find out something themselves.
• The purpose of experiment should be made very clear to the students and the pupils
should be asked to keep a truthful record to what they do and observe.
• The experiment should be well graded according to the age and intelligence of the
students.
• If assignment method is being followed then the preparatory work should be checked one
day before the practical work starts.
Advantages
 Practical works allow students to demonstrate and practice their knowledge and skills.
 Powerful tool for teachers to assess the competence of manual skills of students.
 Establish link between theory and practice.
 Practical experiments can be extended to become a hands on experimental skills.
Disadvantages
 It is time consuming.
 Unfair distribution of group work
 It is difficult to develop uniform, fair and reliable assessment rubrics to evaluate students'
practical skills.
How to evaluate Practical work
Practical work can be evaluated on the basis of the following criteria
1. Timely completion of work or experiment
2. Accuracy of observations
3. Efficiency in conducting the experiment
4. Objective and accurate recording of data
5. Proper analysis and conclusions
6. Reporting of the experiment.
7. Efficiency in handling and correct usage of the apparatuses for the experiment.

SELF REPORTING A self-report study is a type of survey, questionnaire, or poll in which


respondents read the question and select a response by themselves without researcher
interference. A self-report is any method which involves asking a participant about their
feelings, attitudes, beliefs and so on. The main methods of self-reporting are questionnaires,
inventories and interviews. Self-reports are
often used as a way of gaining participants' responses in observational studies and experiments. A
self-reporting questionnaire can have Open Questions, which invite the respondent to provide answers
in their own words, and Closed Questions, which provide a limited choice of answers.

ADVANTAGES

❖ Provides better understanding of students' learning, development and adjustment


❖ can be carried out relatively cheaply
❖ Gives the respondent’s own view directly
❖ If we use standardised interviews & questionnaires , it is easier to generalise
❖ To obtain information in situation where observational data are not normally available
❖ A good way to measure a participants perception of the thing you are measuring.
❖ Observational and Objective data are not always possible to obtain-for example, life
history studies
❖ They are simple to administer in many cases(eg:questionnaires), no complicated
technology is required
DISADVANTAGES
➢ Inherently biased by the person's feelings at the time they filled out the questionnaire
➢ People are not always truthful
➢ May bear little relationship to reality
➢ Associated with a number of potential validity problems
➢ Difficult to obtain a random sample of the population
➢ Most tests will contain designer bias
➢ People may lie or skew their answers to make themselves look better.
➢ The person may not be able to give an accurate response due to cognitive biases, poor
memory etc….

SEMINAR Etymologically the word seminar is derived from the word “seminarium” meaning
“seed plot” Seminar is simply a group of people coming together for the discussion and learning
of experience, specific techniques and topics. There are several keynote speakers within each
seminar; these speakers are experts in their fields or topics. Copies of the paper or an abstract of the
presented matter are distributed to the audience in advance. After the presentation there is a
general discussion in which all participants can participate. In this technique a person presents a
ready-made paper or lecture on a specific subject.
OBJECTIVES OF SEMINAR
➢ To help the students to get an in depth understanding of the subject matter.
➢ To develop the habit of tolerance and cooperation among the students.
➢ To help the students to overcome the problem of stage fear.
➢ To help in developing the ability for keen attention and to present ideas effectively.
➢ To help in acquiring good manners of raising and answering questions
SEMINAR REPORT
➢ The seminar report should be no more than 4 or 5 pages in length, double spaced
➢ The seminar report must be prepared in LaTeX; a good visual presentation is important.
➢ The emphasis of seminar report should be on the idea presented in the seminar
➢ Give formulas only as necessary to illustrate specific points
➢ Organize the report into headings and sub-headings
➢ The seminar report should be written concisely
➢ If you cite any papers, include a list of references at the end of the seminar report
ADVANTAGES
➢ Help the learner to develop analytical and critical thinking
➢ Develop the ability to comprehend major ideas by listening
➢ Develop in learners self-reliance and self-confidence
➢ Develop the ability to raise relevant and pin-pointed questions
➢ Wealth of knowledge usually presented by many speakers at one time in one place
➢ A sense of renewed hope and inspiration
DISADVANTAGES
➢ Lack of preparation on the part of the paper presenter may make the seminar a mere
waste
➢ The formal structure of seminar restricts the participants from asking questions as and
when needed
➢ Inability of the presenter may create many problems
EVALUATION OF SEMINAR
➢ Seminar may be evaluated according to their objectives
➢ In other words according to whether the process in the seminar might be expected to
achieve the objectives
➢ Evaluation may also be conducted by the seminar leader, an academic colleague, an
external educator or the students
RATING SCALE
1=poor, 2=fair, 3=good, 4=very good, 5=excellent
What is your overall rating of this seminar?
What is your rating of the following aspects of the seminar?
❖ Instructor's knowledge of the subject
❖ Instructor's presentation style
❖ Usefulness of print materials
❖ Quality of the audio sound
❖ Effectiveness of web conferencing
❖ Extent the seminar met your expectations
❖ The objectives were achieved
❖ The materials were relevant to the objectives

Tools of Assessment-
Test- A test is a means to elicit and gather responses which would provide legitimate evidence about
the extent of acquisition of a particular attribute such as knowledge, skill, aptitude, intelligence, or the
like, by an individual or a group. Thus a test presents a set of stimuli (a set of questions) elicitng
responses helpful in measuring a particular variable.
There are different type of tests such as Achievements tests, Diagnostic tests, Aptitude test, Intelligence
tests etc, They can also be classified as
1. Individual Vs Group tests
2. Oral Vs Written tests
3. Teacher Made Vs Standardized tests
4. Speed Vs Power tests
5. Verbal (or paper-pencil)Vs Non-Verbal or Performance tests
6. Objective type Vs Essay type tests
7. Norm Referenced Vs Criterion Referenced tests
Testing is the process of measuring the characteristics of individuals or groups. Testing has two major
ingredients: the test that is used for measuring and the situation in which it functions. Testing is a
mechanism to assure the quality of a product, system, or capability. In education it is used to measure how much
of the assigned material students are mastering, how well students are learning the material, and how well
students are meeting the stated goals and objectives.
Functions of a test or of testing
1. Assessment of the present status of an individual on a particular trait or variable.
2. Expressing the probability of future success.
3. Diagnosing the causes of lack of expected performance
4. Providing academic and vocational guidance.
5. Classification of individuals
6. Undertaking research to answer various questions
7. Formulating generalizations and Policy decisions.
8. Promotes learning and gives feedback to students and teachers.
General principles of testing
 Testing Shows the Presence of Defects, Not Their Absence
 Exhaustive Testing Is Not Possible
 Testing Activities Should Start As Early As Possible
 The Pesticide Paradox
 Test Is Context Dependent

CHECK LIST:
A checklist is a predetermined list of criteria against which the recorder answers yes or no. Checklists are
highly selective, giving the recorder only the opportunity to record a decision concerning each criterion.
There are no details to check the recorder's decision. A checklist is an effective tool to share with parents.
Checklists show the sequence of developmental progress and measure progress. A checklist can be used
as a curriculum planning tool for individualizing the curriculum, and as a screening tool for
developmental lags.
This is the method of listing a number of descriptive phrases which can be checked to indicate the
phrase which is applicable to the pupil whom we evaluate. Thus it consists of a number of statements on
various traits of personality. The statement which applies to the pupil is checked, i.e., the teacher has to put
a tick mark in the column meant for a particular student showing a particular trait. While preparing the
checklist, the teacher must keep in mind what kinds of behaviour are important to record and what
objectives are to be evaluated.
Eg. 1. Check list for work habits
Sl. No.  Name of pupil   Comes to class regularly   Always ready to work   Follows instructions   Does written assignments
1.       David           ✓                          ✓                      ✓                      ✓
2.       Neenu           ✓                          ✓

2. Check list to evaluate sociability of a student.

Name of the student:
Class:

Sl. No.  Statement                                Yes   No
1.       Co-operates with others, willingly
2.       Reluctant to co-operate, when asked
3.       Handles others very well
USES OF CHECKLISTS
• Promotion of Good Teaching
• Assistance in Curriculum Planning
• Improvement in Administration
• Maintain discipline in School
• Ensuring good behavior in Schools
ADVANTAGES OF CHECKLIST
• Time and Labor Efficient
• Comprehensive (it may cover many developmental areas)
• A documentation of development
• Individual documentation on each child
• A clear illustration of the developmental continuum
DISADVANTAGES OF CHECKLIST
• Loses details of the event
• Biased by the recorder
• Depends on the criteria to be clearly observable
• Many items to check making it time consuming

RATING SCALE
“Rating is, in essence, directed observation” – Ruth Strand
“Rating is a term applied to expression of opinion or judgment regarding some situation, object or
character; opinions are usually expressed on a scale of values. Rating techniques are devices by which
such judgments may be quantified”.
Rating is thus applied to an expression of opinion or judgement regarding some situation, object,
character or attribute. A rating scale is used for the assessment of a person by another person. It is a
subjective method. Rating scales are devices by which judgments can be quantified. It is an improvement
over the checklist: while a checklist simply records that something happened, a rating scale adds another
dimension, how much or how well it happened. Eg: How good was the performance?
Excellent Very Good Good Average Poor
5 4 3 2 1

There are 3 point, 5 point, 7 point scales.


This method is useful for finding what impression an individual has made on persons with
whom he or she comes into contact, with respect to some specified traits. Teachers can use rating scales
to categorize students on specific traits like honesty, punctuality, emotional stability etc.

CHARACTERISTICS OF RATING SCALE


There are 2 main characteristics:
1. Description of the characteristics to be rated.
2. Some method by which the quality, frequency or importance of each item to be rated may be given.
PRINCIPLES GOVERNING RATING SCALE
Observable trait
Specific and defined trait
Scale should be clearly defined
Uniform standards of rating scale
The number of characteristics should be limited
The decisions should be clear and comprehensive
Well-informed and experienced persons should be selected for rating
TYPES OF RATING SCALE
 Numerical Scale - In numerical scales the observer or rater is supplied with a sequence of numbers
which is well defined, and his task is to rate the objects on the given sequence of numbers on the basis
of his impression. The rater puts a check or circles a number to indicate the degree to which a
characteristic is present.
 The Graphic or Descriptive Scale - In this scale, a straight line is used, anchored by descriptive
phrases at various points. The line is either segmented into units or continuous. To rate the subject
for a particular trait, a check mark is made at the appropriate point.
 Standard scales - In standard scales a set of standards is presented to the rater with pre-established
scale values. These standards usually consist of objects of the same kind, for example handwriting standards
for judging the quality of handwriting. The scales of handwriting provide specific standard specimens,
and with the help of these standard specimens a sample of handwriting can be equated to one of the
standards.
 Rating by cumulated points - The rating score for an object or individual is the sum or average of the
weighted or unweighted points. The checklist method and the ‘guess-who technique’ belong to this
category of rating (a small worked sketch follows this list).
 Forced choice ratings - In a forced choice rating scale the rater is given a set of attributes in terms of
verbal statements for a single item and decides which one or ones represent the individual being
rated most appropriately and accurately. The rater is forced to select from statements which are readymade.
In this form the rater is asked to indicate which is the most and the least descriptive of the person being
rated.
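As a rough illustration of rating by cumulated points, the short Python sketch below adds up weighted ratings; the traits, weights and rating values are invented purely for the example.

```python
# Rating by cumulated points — illustrative sketch with invented traits and weights.
# Each trait is rated on a 5-point scale (1 = poor ... 5 = excellent).
ratings = {"honesty": 4, "punctuality": 5, "emotional stability": 3}
weights = {"honesty": 2, "punctuality": 1, "emotional stability": 1}

unweighted_total = sum(ratings.values())                        # simple cumulated points
weighted_total = sum(ratings[t] * weights[t] for t in ratings)  # weighted points
weighted_average = weighted_total / sum(weights.values())       # weighted average rating

print(unweighted_total, weighted_total, round(weighted_average, 2))
```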

ADVANTAGES OF RATING SCALE
Rating scales are helpful in:


1. Measuring specified outcomes or objectives of education
2. Supplementing other sources of understandings about the child
3. Stimulating effect upon the individual who are rated
4. Writing reports to parents
5. Filling out admission blanks for colleges
6. Finding out students’ needs
7. Making recommendation to the employers
8. To the student to rate himself
DISADVANTAGES OF RATING SCALE
1. More difficult to rate
2. Subjective element is present
3. Lack of opportunities to rate students
4. Raters tend to be generally generous
ERRORS IN RATING SCALES
1. Generosity Error
2. Stringency Error
3. The Halo Error
4. The Error of central tendency
5. The logical Error
Questionnaire
Questionnaire refers to a device for securing answers to questions by using a form which the
respondent fills by himself. A large amount of data on various aspects of the theme can be collected and
the person can be evaluated accordingly. This will help teachers to understand the innate interests and
habits of the pupil. It may contain free-response or fixed response type of questions.
 This is an important method of data collection
 This is adopted by individuals ,organisations and government
 In this method a questionnaire is prepared and sent to respondents by post
 Questionnaire is a printed list of questions
 The questionnaire when sent to the respondents, a request is made that the questions should be
answered and returned
 The success of this method largely depends on the proper drafting of questions
Advantages
 There is low cost even when the field of enquiry is very large
 There is no personal bias
 Respondents get enough time to furnish well thought-out ideas
 Saves time
Disadvantages
 The method will be successful only if the respondents are educated and cooperative
 Some respondents do not care to answer and return the questionnaire
 The respondents’ answers may not be adequate
 The respondents may take much time to return the questionnaires

Inventories
An inventory is constructed in the form of a questionnaire. It consists of a series of questions or
statements to which the subject responds by answering yes or no, agree or disagree, or in some similar
way, to indicate preferences or whether the items describe his typical behaviour.

In an inventory the statements are put in the first person.
Eg: I think I am more tense than others.
In a questionnaire the question is in the second person.
Eg: Do you think you are more tense than the others around you?
Inventories are more exhaustive than questionnaires.
Inventories are mostly used for measuring personality traits, interests, values, adjustment etc., i.e., for
assessing self-reported affective behaviour.

Schedule

 A schedule is a device used in collecting field data. It is a tool mainly used in direct
interviews and for observation. The different types of schedules are observation schedules,
interview schedules, rating schedules, document schedules, and institutional survey forms or
evaluation schedules.
 The schedules can be structured, unstructured or semi-structured
 A schedule is a proforma containing a set of questions and tables
 This proforma is filled in by the field staff
G.A. LUNDBERG: “The schedule is a device for isolating one element at a time thus
intensifying”.
C.A. MOSER: “A schedule is a fairly formal document in which efficiency of field handling
rather than attractiveness is the operative consideration in design”.
MERITS
1. The percentage of responses to schedules is much higher than that of questionnaires.
2. Since the interviewer is well trained and informed about the interviewees’ habits, attitudes
etc., he is able to approach them in a way that they are influenced by his personality and
charm.
3. There is personal contact between the respondent and the field worker. The behaviour,
character and intelligence of the worker often succeed in winning the confidence of the
respondents.
4. In schedules, in case of doubt, the meaning is made clear by the field worker.
5. The presence of the field worker acts as a deterrent against giving artificial replies
because there is a fear of cross examination and being found out.
6. In schedules, defects are easily noticed and rectified.
LIMITATIONS
1. Costly and time consuming method.
2. There is the requirement of a large number of well trained field workers.
3. Sometimes there is adverse effect of personal presence on respondents.
4. If the field of research is sprawling, it becomes difficult to organize research. To gather
workers who are well- acquainted with various types of people is a Herculean task.

Anecdotal record-
It is a somewhat informal device used by the teacher to record the behaviour of students as observed by
him from time to time. Green and others define an anecdotal record as a written objective description by the
teacher of a significant occurrence or episode in the life of the pupil which the teacher has observed. It is a
lasting record of the behaviour of a student which may be useful later in contributing to a judgement
about the student.
An anecdotal record is a factual observational record of specific incidents in the life of a
student. Each anecdote describes a significant aspect of the behaviour to be evaluated. Such observations
are often enough to indicate the direction of growth.
Examples of anecdotes
 The student strikes other students
 The student destroys his own property
 The student is attacked by peers
Anecdotes should be stated accurately and objectively. They should be written immediately after the incident.
The behaviour should be significant. They should reveal both positive and negative behaviour. No judgement
should be added; for evaluation, interpretation can be given under a separate head.
◼ The term “anecdote” means a short narrative or story. It is told or recorded in the past tense. It is a form
of recording observations of children engaged in an activity or in interaction with others.
◼ The observation starts when the child begins to engage in an activity or an interaction and finishes
when the child stops participating.
◼ Record your observation as soon as possible after the event to ensure that you remember
significant information, e.g. direct quotes, hand preference.
An anecdotal record will consist of the following
a) Identifying data - date, time, place of the incident
b) A description of the situation in which the incident occurred
c) A factual description of the incident; anecdotes are described in behavioural terms
Types
First type: This type of anecdotal record contains an objective description of a pupil’s behaviour recorded
from time to time.
Second type: This type of anecdotal record includes a description of behaviour with one comment or
interpretation.
Third type: This type of anecdotal record takes into account the record of the pupil’s behaviour, the
comments of the observer and the treatment offered to the pupil.
Fourth type: This type of anecdotal record includes a description of a pupil’s behaviour along with
comments as well as suggestions for future treatment of the student.
Advantages
 Needs no special training
 Open ended and can catch unexpected events
 Can select behavior or events of interest and ignore others, or can sample a wide range of
behaviors (different times, environmental and people)
 Reasonably easy to do
 Do not stop you from interacting with the child – can be recorded later
 Observer can be either participator or non-participator
 Useful for planning and learning
 You can focus on one area of development or skill at a time
Disadvantages
 Only records events of interest to the person doing the observing
 Quality of the record depends on the memory of the person doing the observing
 Incidents can be taken out of context
 May miss out on recording specific types of behavior
 The observer’s involvement may influence the child’s behaviour
 Relies on the memory of the observer
 Some detail may be forgotten eg. Direct quotes
Uses of anecdotal record
 Record unusual events, such as accidents
 Record children’s behavior, skills and interests for planning purposes
 Record how an individual is progressing in a specific area of development
Cumulative Record Card (CRC)
A cumulative record card is one which contains the results of the different assessments and judgements
made from time to time during the course of study of a student. Generally it covers three
consecutive years. It contains information regarding all aspects of the life of the child – physical, mental,
social, moral and psychological. The significant information is gathered periodically on the student through
the use of various techniques: tests, inventories, questionnaires, observation, interviews, case studies etc.
Basically a cumulative record card is a document in which cumulative, useful and
reliable information about a particular pupil or student is recorded at one place. Information about every pupil or
child for maintenance in the CRC should be collected from the following sources: parents or
guardians, peers, personal data, school records and other sources. In the cumulative record, the marks
assigned to the pupils throughout the school years are recorded. These records will contain
information regarding the pupil’s attendance, record of test results, record of participation in school
activities, information about health, family etc.
Characteristics
• It is a permanent record about the pupil or student
• It is maintained up-to-date.
• It presents a complete picture
• It is comprehensive and continuous
• It contains only those information which are authentic, reliable, pertinent, objective and useful
Types
• Card type
• Booklet type
• Folder type
Data contained in cumulative record card should be
• accurate
• complete
• comprehensive
• objective
• usable
• valid
Types of information maintained in the CR
Identification data
name of the pupil, sex, father’s name, admission no., date of birth, class, section
Environmental and background data
home and neighbourhood influences, socio-economic status of the family, cultural status of
the family, number of brothers and sisters, their educational background, occupations of the
members of the family
Physical data
weight, height, illness, physical disabilities etc.
Psychological data
intelligence, aptitudes, interests, personality qualities, emotional and social adjustment and
attitudes
Educational data
previous school record, educational attainments, school marks, school attendance
Co-curricular data
notable experiences and accomplishments in various fields – intellectual, artistic, social, recreational etc.
Vocational information
vocational ambitions of the student
Principal’s overall remarks

Uses of CR
• The CR is useful for the guidance worker and counsellor as it provides a comprehensive, objective
picture of the student, including his strengths and weaknesses
• The CR is useful for the guidance counsellor in helping the pupil with educational achievement,
vocational choice and personal progress so far as adjustment is concerned
• The CR is useful for the headmaster/principal to ascertain the pupil’s performance in different
subjects and his limitations
• The CR is useful for parents, helping them provide special privileges to make up the deficiencies
of their child
• The CR is useful for teachers to know the student and his progress and weaknesses at a
glance
• The CR is useful in making a case study of a student

Limitations of CR
• The entire data is of little use if it is not collected properly, objectively and accurately
• The purpose of the CR is not served if it is not maintained secretly and confidentially
• Sometimes the information in the CR and its interpretation become confusing, as the
information is collected by different teachers
• The CR needs money to maintain, which the school may not be able to spend on this head
• The maintenance of the CR is a burdensome, clerical job on the part of teachers
• It is a lengthy process which needs much time to be worked out

Characteristics of a good evaluation tool
Or
Characteristics of a good Achievement Test
Evaluation depends on the tool we use, so an evaluation tool should be selected with great care.
There are some criteria that determine the quality of a tool. They are
1. Validity: A test is said to be valid if it measures what it intends to measure. It refers to the
accuracy, dependability and trustworthiness of a test. Validity relates to the purpose for which the test is
used. Validity is the most important characteristic of a good evaluation tool. E.g. an intelligence
test is said to be valid if it correctly measures the intelligence of individuals. The validity of a test is
relative, as it changes with time, situation and the group to whom it is administered.
Factors affecting validity of a test
1. Cultural differences: The cultural background of the test taker influences his general ability,
which in turn affects the validity of a test.
2. Response sets: Response sets are test-taking habits such as always saying yes, indifferent or uncertain,
which affect a person’s score.
3. Excessive reliability at the expense of validity: Sometimes the reliability of a test is increased by
increasing the length of the test, which affects the validity.
4. Lack of clarity in directions: Difficult language, ambiguous words, faulty structure etc. cause lack
of clarity in directions, which reduces the validity to a large extent.
Different Types Of Validity
1. Face validity: A test is said to have face validity if it appears or seems to measure what it is meant to
measure. It does not refer to what the test actually measures. It is no guarantee that the
test will come out to be valid after its use. Achievement tests have the highest face validity.
2. Content validity: Content validity is also known as logical or curricular validity. It is the extent to
which the test adequately covers both the content and objectives of the subject matter. Content
validity is best related to achievement tests. It is ensured by giving proper weightage to the different
content areas.
3. Predictive validity: It refers to the extent to which a test can predict the future performance of the
students on the basis of the test scores. Aptitude tests, vocational interest inventories, entrance
examinations, employment tests etc. should have high predictive validity.
4. Construct validity: It refers to the extent to which the test scores reflect the underlying construct. It
is found using a statistical technique called factor analysis, so it is also known as factorial
validity.
5. Concurrent validity: It indicates the extent to which the test scores accurately estimate an
individual’s present position on the relevant criterion. Concurrent validity is measured by
correlating the test scores with the scores on a standardized test.
Methods of determining the validity of a tool
1. Correlating it with another test.
2. Correlating it with teacher rating.
3. Analyzing the test to ensure that due weightage is given to content and objectives.
4. By item analysis.
How to ensure validity of a tool?
1. By providing weightage to objectives, content, form of questions etc.
2. By preparing the blue print.
3. By including items according to the table of specifications.

2. Reliability: Reliability is the second important characteristic of a tool. It is the consistency of a
test in yielding the same results in measuring whatever it measures. The degree of reliability is
denoted by the reliability coefficient.
Factors affecting reliability of a test
1. Length of the test: A longer test is more reliable than a shorter one.
2. The time of administration of the test affects the reliability of the test.
3. Consistency and objectivity of the scorer.
4. Lack of clarity in instructions.
5. The state of the pupils: Alertness, mental set, fatigued state etc of the pupils affect the test scores
which affects its reliability.
6. Too easy and too difficult questions are not likely to yield a highly reliable score.
Methods for determining reliability
1. Test-retest method: One test is administered to a group and it is repeated for the same group with a
time interval. The correlation coefficient between the two set of scores gives the reliability
coefficient which is known as the stability coefficient. This method of repetition is the simplest
method of finding the reliability of a test.
2. Alternate or parallel form method: Two parallel test forms (Form A and Form B) are constructed
and given to the same group in close succession. The two forms are identical with regard to content,
difficulty and pattern of questions. The correlation between the set of scores of Form A and Form B
gives the index of equivalence or the coefficient of equivalence.
3. Split-half method: A test is administered to a sample and the scores are split into two sets, one
from the even-numbered items and the other from the odd-numbered items in the test. The correlation
coefficient between these two sets of scores gives the correlation coefficient for half the test. The
reliability coefficient for the whole test is calculated using the Spearman-Brown formula

R = nr / (1 + (n − 1)r)

where r – correlation coefficient for half the test
n – number of parts into which the test is divided.
Here n = 2, so

R = 2r / (1 + r)

This reliability coefficient is the coefficient of both stability and equivalence (a computational sketch
of the split-half and Kuder-Richardson estimates follows this list).
4. Method of rational equivalence (method of internal consistency or Kuder-Richardson method): The
test is given to a sample once and the scores are collected and applied in the Kuder-Richardson
formula

r = [n / (n − 1)] × [1 − (Σ pᵢqᵢ) / σ²]

where n – number of test items
σ – standard deviation of the test scores
pᵢ – proportion of correct answers to a particular item
qᵢ – proportion of incorrect answers to that item, with p + q = 1
This correlation coefficient is known as the coefficient of internal consistency.
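As a rough numerical illustration of these internal-consistency estimates, the Python sketch below (the five students' 0/1 item scores are invented for the example) computes the split-half reliability with the Spearman-Brown correction and the Kuder-Richardson (KR-20) coefficient, using the population variance of the total scores as σ².

```python
# Split-half (with Spearman-Brown correction) and KR-20 reliability estimates.
# Illustrative sketch only: the 0/1 item-score matrix below is invented.
from statistics import correlation, pvariance  # correlation needs Python 3.10+

scores = [           # rows = students, columns = items (1 = correct, 0 = wrong)
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
]
n_items = len(scores[0])

# Split-half: correlate odd-item and even-item half scores, then correct.
odd_half  = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]
r_half = correlation(odd_half, even_half)      # reliability of half the test
R_full = (2 * r_half) / (1 + r_half)           # Spearman-Brown: R = 2r / (1 + r)

# KR-20: r = (n/(n-1)) * (1 - sum(p*q) / variance of total scores).
totals = [sum(row) for row in scores]
sigma2 = pvariance(totals)
sum_pq = 0.0
for i in range(n_items):
    p = sum(row[i] for row in scores) / len(scores)   # proportion correct on item i
    sum_pq += p * (1 - p)
kr20 = (n_items / (n_items - 1)) * (1 - sum_pq / sigma2)

print(round(R_full, 2), round(kr20, 2))
```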

3. Objectivity: An evaluation tool is objective if the scores assigned by equally competent scorers are
not different. The scores should not be affected by judgment, personal bias or opinion of the scorer.
There should be little or no disagreement on what the correct answer to a test item is. The objectivity
coefficient of a test is obtained by finding the correlation between two sets of scores assigned by the
same scorer on two different occasions.

4. Practicability: Practicability relates to the practical aspects of the test in respect of administration,
scoring, interpretation and economy. A test is practicable if it can be successfully used without any
unnecessary expenditure of time and energy. A test should always have a test manual with all
the necessary instructions, which increases the practicability of the test.
Practicability of a test depends upon the following factors;
1. Ease of administration- The test manual should contain clear and precise instructions regarding how
to conduct the test.
2. Ease of scoring – Scoring of the test should be easy, objective and simple.
3. Ease of interpretation- Interpretation depends upon the fact that the test is accompanied by complete
norms based on age, grade etc.
4. Economy- A good evaluation tool should not be expensive with respect to money, time and energy.

5. Utility: A test possesses utility to the degree that it satisfies the definite purpose for which it is used.
Utility is the final check on the value of the test.
A good evaluation tool yields more accurate and precise score. However it will have its own
limitations which must be always considered by the evaluator while conducting evaluation.

Item Analysis: Item analysis is a process by which a test constructor evaluates the effectiveness of
the test items in terms of their discriminating power and difficulty index. For item analysis the
answer scripts are scored and arranged in order of the total scores; the top 27% is taken as the
upper group and the bottom 27% as the lower group. Then the students’ responses to each item are
analyzed for the upper and the lower group.
Discriminating Power: The discriminating power of an item in a test is its power to discriminate
between the upper and the lower groups who took the test. If an item is answered correctly by all or by
none, it is a bad item. The maximum value of discriminating power is 1. An item whose discriminating
power is above 0.4 is chosen.
The discriminating power and the difficulty index are found using the formulas
D = (U − L)/N
and
difficulty index = (U + L)/2N
where L – number of students who answered the item correctly in the lower group
U – number of students who answered the item correctly in the upper group
N – number of students in each group.
Difficulty Index: The difficulty index is the proportion of students who correctly answered the test item. The
difficulty index of a test item is inversely related to the difficulty of the item: the lowest values of the
difficulty index show that the item is very difficult, and the highest values show that the item is very easy.
Items with difficulty index ranging from 0.4 to 0.6 are chosen.
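A minimal Python sketch of this item-analysis tally, assuming the upper-group and lower-group correct counts for each item have already been extracted from the scored answer scripts (all item names and counts are invented for illustration):

```python
# Item analysis: discriminating power D = (U - L) / N and
# difficulty index = (U + L) / (2N). Counts below are invented for illustration.
N = 20  # number of students in each group (top 27% and bottom 27%)

# item -> (U, L) = number of correct answers in the upper and lower groups
items = {"Q1": (18, 6), "Q2": (20, 20), "Q3": (11, 3), "Q4": (4, 2)}

for item, (U, L) in items.items():
    D = (U - L) / N                 # discriminating power
    difficulty = (U + L) / (2 * N)  # proportion answering correctly
    keep = D >= 0.4 and 0.4 <= difficulty <= 0.6
    print(item, round(D, 2), round(difficulty, 2), "keep" if keep else "revise")
```

Running the sketch, an item such as Q2 that everyone answers correctly shows D = 0 and is flagged for revision, which matches the rule that an item answered by all or by none discriminates poorly.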
Purposes of item analysis:
1. To find the difficulty level of the test items.
2. To find the discriminating power of the test items.
3. To find the effectiveness of the distracters.
4. Provides useful feedback for the students regarding their performance in the test.
5. Provides insight and skill which leads to the preparation of better tests on future occasions.

CRITERION-REFERENCED TEST (CRT) & NORM-REFERENCED TEST (NRT) - Testing
results in scores, but scores are meaningless unless they are interpreted. The interpretation of a test gives
rise to NRT and CRT.
Criterion referenced test - Glaser (1963) first used the term CRT to highlight the need for tests that
can describe the position of a learner on a performance continuum, rather than the learner’s rank
within a group of learners. CRTs are tests used to ascertain an individual’s status with
respect to some criterion, i.e. a performance standard. So the meaning of an individual score is not
dependent on comparison with other students. We want to know what the individual can do, not where
he stands in comparison to others.
Characteristics of criterion-referenced test
• Its main objective is to measure student’s achievement of curriculum based skills
• It is prepared for a particular grade or course level.
• It has balanced representation of goals and objectives.
• It is used to evaluate the curriculum, plan instruction and monitor the progress of students and groups.
• It can be administered before and after instruction
• It is generally reported in the form of
i. Minimum scores for partial and total mastery of main skill areas.
ii. Number of correct items.
iii. Percent of correct items
iv. Derived score based on correct items and other factors
Uses of criterion-referenced testing
• To discover the inadequacies in a learner’s learning and assist the weaker learners to
reach the level of the other students through a regular programme of remedial instruction
• To identify the master learners and non-master learners in a class
• To find out the level of attainment of the various objectives of instruction
• To find out the level at which a particular concept has been learnt
• To improve the placement of concepts at different grade levels
• To make instructional decisions about what to do with a learner in an individually prescribed
instruction programme.
Limitations of criterion referenced testing
1. Criterion-referenced tests tell only whether a learner has reached proficiency in a task area but
do not show how good or poor the learner’s level of ability is.
2. Tasks included in a criterion-referenced test may be highly influenced by a given teacher’s
interests or biases, leading to a general validity problem.
3. Only some areas readily lend themselves to the listing of specific behavioural objectives around
which criterion-referenced tests can be built, and this may be a constraining element for
teachers.
4. Criterion-referenced tests are important for only a small fraction of important educational
achievements. On the contrary, promotion and assessment of various skills is a very important
function of the school and it requires norm-referenced testing.
Norm-referenced test - A norm-referenced test is a test designed to provide a measure of
performance that is interpretable in terms of an individual's relative standing in some known group.
This test is used primarily for comparing the achievement of an examinee to that of a large representative
group of examinees at the same grade level. The representative group is known as the ‘Norm
Group’, which may be made up of examinees at the local, district, state or national level, or of the
same age group, gender etc.
Chief characteristics of a norm-referenced test
1. Its basic purpose is to measure student’s achievement in curriculum based skills.
2. It is prepared for a particular grade level
3. It is administered after instruction
4. It is used for forming homogeneous or heterogeneous class groups.
5. It classifies achievement as above average, average or below average for a given grade.
6. It is generally reported in the form of Percentile Rank, Linear Standard Score, Normalized
Standard Score and Grade Equivalent Score.
Uses of norm-referenced testing
1. In aptitude testing for making differential prediction
2. To get a reliable rank ordering of the pupils with respect to the achievement we are measuring
3. To identify the pupils who have mastered the essentials of the course more than others
4. To select the best of the applicants for a particular programme
5. To find out how effective a programme is in comparison to other possible programmes.
Drawbacks of norm-referenced testing
1. Test items that are answered correctly by most of the pupils are not included in this test
because of their inadequate contribution to response variance, yet these may be the very items that
deal with important concepts of the course content.
2. There is lack of congruence between what the test measures and what is stressed in a local
curriculum
3. Norm-referencing promotes unhealthy competition and is injurious to self-concepts of low
scoring students.
Similarities between NRT AND CRT
1. Validity and reliability are needed in both
2. Achievement domain is measured in both
3. Sample of test items should be relevant and representative in both
4. Same types of items can be used in both
5. Same rules are followed for writing items in both, except for item difficulty

Differences between NRT and CRT
1. NRT covers a large domain of learning tasks with just a few items measuring each specific task; CRT focuses on a delimited domain of learning tasks with a relatively large number of items measuring each specific task.
2. NRT stresses discrimination among individuals; CRT stresses what examinees can do and what they cannot do.
3. NRT promotes unhealthy competition; CRT has no such problem of unhealthy competition.
4. In NRT the result is reported in terms of rank, percentile rank, linear standard score or normalized standard score (e.g. Raj secured first rank in the class); in CRT the result is reported in terms of the number of correct items or the minimum score for total mastery (e.g. Mary answered 90 items out of 100 correctly in 1 hour).
5. The NRT is administered only after instruction; the CRT is administered before and after instruction.
6. NRT contains items of average difficulty; CRT contains easy as well as difficult items.
7. NRT classifies achievement as above average, average or below average; CRT classifies achievement as the attainment or non-attainment of objectives.
8. In NRT, interpretation needs a defined group; in CRT, interpretation needs a defined as well as delimited achievement domain.
9. In NRT a student is tested after each unit and allowed to go to the next unit along with the whole class; he is assigned marks or grades to indicate his performance, presented with the new material of the next unit, tested on the new material and assigned marks. In CRT a student is tested after each unit for mastery of the objectives and is allowed to proceed to the new material only if mastery is obtained; a student is given remedial instruction if the material is not mastered and is tested again after the remedial work to check for mastery.

Similarities of NRT and CRT: Both have essentially the same job to do, that is, to measure
achievement in learning. The elements of quality are essentially the same for both. An individual test
question used in the two is indistinguishable. In general, criterion-referenced tests are best for assisting in
categorical pass-fail decisions with respect to separate specific items or competencies. The norm-
referenced form is useful in measuring a person’s general level of knowledge or understanding of a
subject.

ACHIEVEMENT TEST

Achievement test is a test which measures the relative accomplishment of the students in specific
areas of learning.

Principles of achievement test construction


Or
Steps in the construction of an Achievement test.
The steps in the construction of an Achievement Test are
A. Planning the test.
B. Preparing the test.
C. Try-out of the test.
D. Evaluation of the test.

A. Planning the test: Planning includes all operations that go into producing the test. We have to plan the
standard, subject, unit, time for the test, objectives to be tested, total marks, distribution of marks for
each question, type of questions and difficulty level of questions before actually preparing the test.
B. Preparing the test:
1. Preparation of designs- This includes giving weightage to objectives, content, form of questions
and difficulty level. The four types of designs are design for content, design for objectives, design
for form of questions and design for difficulty level.
Design for instructional objectives
Sl. No.   Objectives       Marks   %
1.        Knowledge        5       20
2.        Understanding    6       24
3.        Application      10      40
4.        Skill            4       16
          Total            25      100

Design for content
Sl. No.   Content   Marks   %
1.        Unit 1    10      40
2.        Unit 2    8       32
3.        Unit 3    7       28
          Total     25      100

Design for form of questions
Sl. No.   Form of questions   Marks   %
1.        Objective           10      40
2.        Short Answer        11      44
3.        Essay               4       16
          Total               25      100

Design for difficulty level
Sl. No.   Difficulty level   Marks   %
1.        Easy               6       24
2.        Average            15      60
3.        Difficult          4       16
          Total              25      100

2. Preparation of blue print for the test – A blue print is a three dimensional chart which shows the
weightage given to the objectives, content and form of questions. Blue print is also known as the
table of specifications as it relates the content to the objectives and gives the weightage given to
each.

BLUE PRINT
(Objectives across the top: Knowledge, Understanding, Application and Skill, each subdivided by form of question O, S, E; content units down the side.)
Unit 1:  1(2)   1(1)   2(1)   1(2)   1½(2)          Total 10
Unit 2:  1(2)   1(1)   1(1)   2(1)                  Total 8
Unit 3:  1(1)   2(1)   4(1)                         Total 7
Column totals (as given): 5   2   4   3   7   4
Objective-wise totals: Knowledge 5, Understanding 6, Application 10, Skill 4;  Grand total 25

O- Objective type question


S- Short answer type question
E- Essay type question
The number inside the bracket shows the number of questions and that outside shows the mark for
each question.
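To see how the blueprint ties the designs together, the small Python sketch below totals a blueprint by content, by objective and by form of question so the weightages can be checked against the designs; the cell entries are invented and are not the ones in the table above.

```python
# Blueprint consistency check — illustrative sketch with invented cell entries.
# Each cell: (unit, objective, form) -> (marks per question, number of questions).
blueprint = {
    ("Unit 1", "Knowledge",     "O"): (1, 2),
    ("Unit 1", "Application",   "E"): (4, 1),
    ("Unit 2", "Understanding", "S"): (2, 2),
    ("Unit 2", "Skill",         "O"): (1, 2),
    ("Unit 3", "Application",   "S"): (2, 3),
}

def totals(by_index: int) -> dict:
    """Total marks grouped by unit (0), objective (1) or form of question (2)."""
    out = {}
    for key, (mark, count) in blueprint.items():
        out[key[by_index]] = out.get(key[by_index], 0) + mark * count
    return out

print("By content:  ", totals(0))
print("By objective:", totals(1))
print("By form:     ", totals(2))
print("Grand total: ", sum(m * c for m, c in blueprint.values()))
```

If the three groupings and the grand total agree with the four designs, the blueprint gives due weightage to content, objectives and form of questions.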
3. Scheme of section – It shows the number of sections into which the question paper is divided.
Usually the question paper is divided into three sections: Section A consisting of objective type
questions, Section B consisting of short answer type questions and Section C consisting of essay
type questions.
4. Scheme of option – It shows the option or choice given to the students in answering the question
paper. The scheme of options can be either for the whole paper or section-wise.
5. Preparation of test items – After preparing the blue print the next step is to select or write the test
items. The test items should be in accordance with the various dimensions of the blue print. It is
desirable to prepare additional items as it will make easier to maintain the distribution of items as
shown in the blue print.
6. Arrangement of items – The test items can be arranged in any way according to the objectives,
content, difficulty level etc. The common procedure is to group the same type of questions
together and arrange in increasing order of difficulty.
7. Instructions for the question paper – Instructions can be either for the whole test or section wise.
Instructions for administration of the test must also be given. The directions should be simple and
concise and yet contain all information concerning the test: name of the test, subject, standard,
time, and marks for each question, section and whole test, how to record the answer etc.
8. Preparation of scoring key and marking scheme – Scoring key and marking scheme are prepared
to make the valuation objective based. They are prepared before the administration of the test.
Scoring key
Question No. Key Answers Marks
1.
2.
3.

Marking scheme:
Question No. Value Points Mark for each Total
value point mark
1.
2.
3.
9. Question-wise analysis – This is done by analyzing each item in the question paper with respect to
all aspects that influence the test result – objectives, content, specification, form of questions,
difficulty level, marks and expected time. This helps in assessing the effectiveness of the test item
with reference to designs and other requirements in the blue print.

Q.N.   Topic   Objectives   Specifications   Form of Question   Difficulty level   Marks (score for each value point; total)   Expected Time
I      1.
       2.
       3.
II     1.
       2.
       3.

C. Try out of the test – First the test is administered to a sample representing the population. This is to
find the language difficulties and other faults in the test. The faulty items are removed by item
analysis. The final form of the test is prepared and then administered to the population.
D. Evaluation of the test –This is the final step in the construction of a test. The test is evaluated for
many purposes.
Uses of evaluation of the test:
1. To find out whether the test was easy or difficult, too long or too short.
2. To find out whether the instructions were clear and specific.
3. To find out whether the test is practicable and feasible.
4. To find out whether the items were clear and unambiguous.
Importance of designs and blue print:
1. It helps to improve the validity of the test.
2. It relates objectives to the content.
3. Makes the test more objective based.
4. Keeps the process of test construction in track and ensures proper construction of the test.
5. Lays a complete picture of the test before the test maker before its preparation.
Importance of a marking scheme:
A marking scheme is essential because it indicates
1. The number of steps or learning points expected in the answers.
2. The outline of each step in the answer.
3. The weightage given to each point, specified clearly.
4. The level of accuracy expected of each step.
5. This makes scoring objective.
Importance of reviewing and editing the test items.
After pooling the test items for a particular test the items have to be reviewed and edited. This is done
on the basis of the following;
1. Does each item present a clearly formulated task?
2. Is the language simple and clear?
3. Is the item free from extraneous clues?
4. Does each item fit into one of the cells of the blue print?
5. Is each item independent and are the items as a group free from overlapping?
6. Is the difficulty of the item appropriate for the students to be tested?

Concept of educational diagnosis:


Educational diagnosis includes all activities in measurement and evaluation that help to identify
growth lags and their causative factors for individuals or the class. The process of determining the causes of
educational difficulties is known as educational diagnosis. It is important in the teaching-learning process as it
gives an indication of the strengths and weaknesses of the students. All evaluation serves the purpose of diagnosis,
but to get real information regarding specific areas of difficulty, diagnostic evaluation is necessary.
According to Ross and Stanley the five levels of diagnosis are:
1. Who are the pupils having trouble?
2. Where are the errors located? (Corrective diagnosis)
3. Why did the errors occur?
4. What remedies are suggested?
5. How can the errors be prevented? (Preventive diagnosis)

The steps in educational diagnosis are


1. Locating the individuals needing diagnosis. For this different procedures are adopted- based on
teacher’s observation, administration of achievement test and intelligence test etc.
2. Locate the errors and areas of difficulty. A careful study of the child, diagnostic tests for specific skills
etc are used for this.
3. Providing remedial measures.
The most widely accepted method of diagnostic evaluation calls for testing, remedial instruction,
retesting, further remedial instruction etc until the difficulty is overcome.
Diagnostic test: A diagnostic test is a test designed to identify and investigate the difficulties, disabilities
and inadequacies of pupils in specific fields. It is designed to analyze an individual’s performance and
provide information on the causes of difficulty. It is basically an achievement test, but differs from it in the
following respects:
1. Coverage of content area
2. Purpose of the test
3. Use of the test

Steps in the construction of a Diagnostic Test.


The specific area of difficulty is located. This is done using an achievement test. After scoring the answer
scripts, question-wise analysis of the answer scripts are done. For this a diagnostic chart is used. Diagnostic
chart is a check list which checks the questions that are answered correctly, partially correct, and wrong or
omitted by each student. In a diagnostic chart the names of students are written in one dimension and the
question number in the other. Any of the following symbols are marked against each question number for each
student.

fc – Fully correct answer
pc – Partially correct answer
O – Question omitted
W – Wrong answer
Then the total number of students who have not answered (O), written incorrect (W) or incomplete (pc)
answers, and those who have answered the items fully correctly (fc), are found. This can be done either for
the whole class or for the students who have scored below µ − σ, i.e., for the below-average students. Also
the number of questions answered fully, answered incorrectly or incompletely, and omitted by each student
is found.
The subject area corresponding to which most students have not answered (including omitted and
incomplete answers) is chosen to prepare the diagnostic test. The areas of difficulty are divided into a number
of small important teaching-learning points and several test items from each teaching-learning point should be
prepared. Replication of items is necessary for confirmation of evidence. The test items should be arranged in a
sequential order and should be divided into two or more sections. Clear instructions should be given.
The diagnostic test is administered to the below average students. Approximate time required to
answer the test may be indicated but the pupils may be allowed their own time to answer the test. It should be
clear to the students that the purpose of the test is not to allocate grade or provide rank but to locate their
difficulties. The students should be asked to attempt all the items.
After administering the diagnostic test the answer scripts are scored and analyzed using a diagnostic
chart. If all the items under a particular teaching point are answered correctly by a pupil, then the pupil has
no difficulty regarding that teaching point. If, on the other hand, he answers only one or two out of five
questions under a teaching point, then it is a difficult area for the pupil. The difficulties found common to a
majority of the pupils should be taken up for group remedial teaching; for the others, individual remedial
teaching should be provided.

DIAGNOSTIC CHART
Sl. No.   Name   Question No.:  1.  2.  3.  4.  …   fc   pc   O   W   Total
1.        Ann
2.        Ana
3.        Ben
          fc
          pc
          O
          W
          Total
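A rough Python sketch of the tally behind such a chart, assuming each answer has already been coded as fc, pc, O or W (the student names, codings and the "majority" threshold are invented for illustration):

```python
# Tallying a diagnostic chart — illustrative sketch with invented codings.
# fc = fully correct, pc = partially correct, O = omitted, W = wrong.
from collections import Counter

responses = {            # student -> coded response for each question
    "Ann": ["fc", "pc", "W",  "fc"],
    "Ana": ["fc", "O",  "W",  "pc"],
    "Ben": ["W",  "O",  "fc", "fc"],
}
n_questions = 4

# Row totals: how each student fared across the paper.
for student, codes in responses.items():
    print(student, dict(Counter(codes)))

# Column totals: for each question, how many students gave each kind of answer.
for q in range(n_questions):
    column = Counter(codes[q] for codes in responses.values())
    not_answered = column["pc"] + column["O"] + column["W"]
    flag = "- difficulty area" if not_answered >= 2 else ""   # majority of 3 students
    print(f"Q{q + 1}", dict(column), flag)
```

The questions flagged as difficulty areas point to the teaching-learning points from which the diagnostic test items would be framed.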
Remedial teaching:
Remedial teaching is the process of instruction that follows immediately after diagnostic testing, when the
exact nature of the difficulties and the reasons for them are known. The teacher has to take steps for remedial
teaching. Additional learning experiences are provided to the pupils to reduce their difficulties.
Remedial teaching consists of remedial activities taking place outside the framework of regular classroom
instruction. It is restricted to a small group with severe learning difficulties. This programme is designed for
the student who is not benefitting from corrective instruction, which is the remedial instruction carried out
within the framework of regular classroom instruction.
Suggested methods of remedial teaching:
Remedial teaching lessons should be prepared and carefully planned. Begin the lesson from what the pupil
knows. Provide a variety of learning experiences. Give more explanation and use more examples to explain
a single concept. The teacher can make use of audio-visual aids and other methods of individualization of
instruction. A large number of exercises and activities should be provided. Avoid introducing too many
concepts in one and the same class. Conduct small tests after the pupils have mastered a small content area.
The first test should permit the students to experience success. New concepts should be built on ideas
already comprehended and developed. The teacher should be patient and take time to build up the various
concepts regarding that particular content area.

Purposes of diagnostic tests:


1. To find out the specific difficulties of the students.
2. To identify inadequacies in specific skills.
3. To locate areas in which additional instruction is required or in which teaching methods are to be
improved.
4. To identify faulty and incorrect procedures of teaching.
5. Gives evidences of lack of understanding, precision and accuracy.
6. Provides feedback for the teacher.

Test items may be classified as fixed (forced) response type and free response type, or as objective type,
short answer type and essay type.
Differences between achievement test and diagnostic test.
1. An achievement test is meant to measure how much a student has achieved; a diagnostic test is meant to measure how much a student has not been able to achieve.
2. An achievement test gives weightage to every topic in the content area; a diagnostic test gives more emphasis to the problem areas of the content.
3. An achievement test is strict in its time factor; a diagnostic test does not give much importance to the time factor.
4. An achievement test gives weightage to objectives, content, form of questions and difficulty level in a general way; a diagnostic test considers each and every factor in a more critical and analytical way.
5. An achievement test gives importance to the marks of the students; marks scored in a diagnostic test are not important.
6. Construction of an achievement test is easy; construction of a diagnostic test is more difficult.
7. An achievement test can be used for educational diagnosis; a diagnostic test cannot be used as an achievement test.
8. An achievement test precedes a diagnostic test and may or may not be followed by remedial instruction; a diagnostic test is given after an achievement test and is always followed by remedial teaching.
9. An achievement test is given to the whole class; a diagnostic test is for the below-average students only.

Objective type questions are questions that require a specific answer. An objective question usually has only
one potential correct answer and leaves no room for opinion. An objective test is so named because the
system of scoring is objective rather than subjective. The problem may be stated as a direct question or as an
incomplete statement and is called the stem. The suggested solutions, which may include words, symbols,
etc., are called the alternatives.

USES: To measure a variety of knowledge outcomes (specific facts, terminology, principles, methods,
procedures etc.).

The different types of Objective type test items are

True or False Questions (Alternative response type) A true or false question is essentially a statement, called
a proposition. The learner judges whether the proposition is true or false. 1. There should be about an equal
number of true and false statements. 2. Both true and false statements should be about equal length. 3. False
items should be plausible.
A multiple choice item consists of a stem, which contains the problem, and a list of suggested responses. The
incorrect responses are called "foils" or "distracters." And the correct response is called the key response.
Multiple choice questions are some of the most useful test items. You can test everything from factual recall to
application of principles to problems. The stem should be a whole, positive statement. Correct answers and
foils should be short. There should be only one correct answer Answer and foils should be mutually exclusive.

A matching item question is one that requires the test taker to match an item in one column with an item from
a second column. In general, the items that have a blank space next to them are called the "questions" and the
items that you choose from to fill in the blank are called the "answers." Instructions should indicate the basis
for matching. Questions and responses are all from the same category. Responses should be same or more in
number than the questions.

A completion item is a form of short answer question in which the learner completes a sentence by supplying
a key word or phrase. A completion item is comprised of two parts, the "cue" and the blank. Completion
questions are the simplest types of test items in which the learner is required to supply the correct answer,
rather than to choose the correct answer. As such, it requires a higher level of learning – recall learning – rather
than simple recognition.

Advantages: Objective evaluation. Students can answer quickly. Evaluation time will be less. Rapid
scoring is possible. It covers all the aspects of the content.

Disadvantages: It takes more time to construct. Specific abilities like expression and organization
are not tested. Content validity cannot be tested. Blind guessing is possible.

Short Answer Questions A short answer question is a complete question that requires the learner to supply
the correct answer. The answer should be brief. Short answer questions are another type of question where
the learner must supply the answer rather than recognize it from a list of choices. They differ from their
close relative, the completion question, in that they pose a question to be answered rather than a blank to be
completed. They differ from the essay question in the length of the response, which should be brief and
specific. A short answer item uses a direct question. Short answer type questions are those that can be
answered by a word, a phrase or a few sentences.

They can test objectives such as knowledge, understanding, application, analysis, synthesis and evaluation.

CHARACTERISTICS: Can cover a wide range of content. Is highly thought provoking. Can be answered
in a few sentences. Comes between the objective and essay types. It is suitable for measuring a wide variety
of relatively simple learning outcomes.

ADVANTAGES: Easy to construct, because it measures simple learning outcomes. A large portion of the
content can be covered. It is useful in interpreting diagrams, charts etc. There is little opportunity for
guessing.

LIMITATIONS: Writing skill cannot be measured properly. It cannot test the expression ability of students.
Personal bias of teachers and students is involved. It leads to rote learning.

Suggestions while constructing: Ensure that tests measure more than the memorization of factual
knowledge. Avoid irrelevant clues.
Essay Questions An essay question calls for an extended response from the learner. The response can be
extended, with virtually no restrictions on the answer, or it can be restricted in length. Essay questions allow
the learner maximum freedom to respond. Higher order mental processes can be tested using essay
questions, such as description, comparison, evaluation and prediction. An essay test is a test that requires
the student to structure a rather long written response, up to several paragraphs. Students get much freedom
to express their ideas.

Characteristics: Less time is needed for preparation; it is easy to prepare. Contains fewer questions than
objective and short answer tests. Allows freedom of response to a problem. It demands long answers.

ADVANTAGES: Ensures content validity. Enables pupils to plan and organize their answers. Reduces the
chances of on-the-spot copying. Leads to qualitative evaluation of students’ achievements. Tests the pupil’s
ability to use knowledge. Brings language mastery. Easy to construct.

DISADVANTAGES: It encourages bluffing. Lacks comprehensiveness. No objectivity. Lack of reliability.
It is time consuming. It covers only a few areas.

Distinguish Between Objective type and Essay type questions
1. Objective type: objective in nature, no scorer bias. Essay type: subjective in nature, scorer bias.
2. Objective type: only one correct answer. Essay type: more than one correct answer.
3. Objective type: mostly fixed response. Essay type: free response from the students.
4. Objective type: chance of copying and guessing. Essay type: less chance of copying and no chance of guessing.
5. Objective type: covers a larger content area with more questions. Essay type: covers a smaller content area with fewer questions.
6. Objective type: difficult to construct but easy to score. Essay type: easy to construct but difficult to score.
7. Objective type: mostly tests students’ lower order thinking. Essay type: tests students’ higher order thinking.
8. Objective type: students’ language ability, creativity and organization of ideas and thoughts are not given importance. Essay type: these are given importance.
9. Objective type: students’ handwriting does not affect their score. Essay type: students’ handwriting affects their score.
10. Objective type: takes less time for the test. Essay type: takes more time for the test.
11. Objective type: mechanical scoring possible. Essay type: mechanical scoring not possible.

TEACHER MADE AND STANDARDIZED TEST-


Teacher- made- test- A teacher made test is an evaluation tool constructed by the classroom teacher
to assess the student’s achievement in a particular unit/content. E.g. Classroom question paper on a
unit.
Characteristics
1. It is usually flexible in scope and format.
2. It is variable in difficulty and significance.
3. It is prepared by the classroom instructor.
4. It usually contains content validity.
5. It is not computer generated or taken from a book.
Uses
1. To know whether the students has attained knowledge in specific field.
2. To determine how far the specific aims of education have been fulfilled.
3. To classify students in accordance with their achievement.
4. To motivate students towards further learning and teachers towards self evaluation.
5. To determine final grades or make promotion decisions.
6. To identify areas of deficiency.
Advantages of teacher made tests
1. Reflects instruction and curriculum.
2. Sensitive to student's ability and needs.
3. Can be made to reflect small changes in knowledge.
4. Provide immediate feedback about student progress.
5. Teachers can make changes immediately to meet the needs of their students.
Disadvantages of teacher made tests
1. May not reflect content standards.
2. Little variety in types of assessment used.
3. Informal or un-standardized.
4. Concerns about reliability.
5. Concerns about validity.
Standardized test
• Standardized test is an evaluation constructed by researchers or experts to assess a broad range
of behaviors for generalization.
Eg: Aptitude test, personality inventories.
• The first standardized test was made by Cliff W. Stone in 1908. It was a test to measure mathematical reasoning.
Characteristics
• They consist of items of high quality.
• The directions for administering, exact time limit and scoring are precisely stated.
• Norms based on representative groups of individuals are provided as an aid for interpreting
the test scores.
• Information needed for judging the value of the test is provided.
• A manual is supplied that explains the purposes and uses of the test.
Uses
1. It is used in comparing achievement of individual or groups.
2. It is used to compare achievement in various fields of knowledge or performance.
3. To assess a student’s proficiency in specific subject such as maths, science and literature.
4. To compare classes or schools among themselves and to measure growth over a period of
years.
Advantages
1. It is constructed by experts who are well qualified and experienced.
2. They are of high validity and reliability.
3. They are norm based.
4. They provide value points and clear directions for valuation.

Disadvantages
1. Narrows curricular format and encourages teaching to the test.
2. Poor predictive quality.
3. Grade inflation of test scores or grades.
4. Culturally or socioeconomically biased.
Similarities
1. Both are constructed on the basis of carefully planned table of specifications.
2. Both have the same type of test items.
3. Both provide clear directions to the students.
4. Both assign grades and can be compared with other students.
Differences
Standardized test | Teacher made test
Concerned with the whole field of knowledge or ability tested | Concerned with a limited and specific field of knowledge tested
Based on different sources | Based on the personal experience of the teacher
Constructed by experts through the process of standardization | Constructed by teachers without any method of standardization
Aimed at objectives shared by educators across the country | Aimed at local objectives
Always has a manual which gives all the directions for test usage, scoring and interpretation | No such manual is provided
Both reliability and validity are ensured | Reliability and validity need not be established
Quality of items is ensured by item analysis | The quality of items need not be found out and is generally low
Used by many persons in different contexts | Used by the concerned teacher in a particular situation
Used to evaluate outcomes and objectives that have been determined irrespective of what has been taught (wider content) | Used to evaluate outcomes and content of what has been taught in the classroom (limited content)
Procedure of administration and scoring is standardized and as per instructions given in the manual | Procedure of administration and scoring is flexible
Scores can be compared and interpreted within the norm groups; test manuals are used for interpretation | Scores can be compared and interpreted only in the context of the local school situation
Test results show the students' knowledge in various fields or subjects, or their intelligence, attitude, personality, aptitude, performance etc. | Test results show the students' achievement in specific fields or subjects and the attainment of certain objectives
Norms for various groups are given | No norms are provided
The content chosen is broader, drawn from various books, journals, articles and other standardized tests etc. | The content chosen is limited and is made on the basis of the personal experiences of the teacher
ONLINE EXAMINATION - Online examinations, sometimes referred to as e-examinations, are examinations conducted through the internet or an intranet. This may utilize an online computer connected to a network. This definition embraces a wide range of student activity
ranging from the use of a word processor to on-screen testing. Specific types of e-assessment include
multiple choice, online/electronic submission, computerized adaptive testing and computerized
classification testing. Different types of online assessments contain elements of one or more of the
following components, depending on the assessment's purpose: formative, diagnostic, or summative.
Instant and detailed feedback may (or may not) be enabled.
For a remote candidate, most online examinations issue results as soon as the candidate finishes the examination, provided an answer-processing module is included in the system. Candidates are given a limited time to answer the questions; after the time expires the answer paper is disabled automatically and the answers are sent to the examiner. The examiner evaluates the answers, either through an automated process or manually, and the results are sent to the candidate through email or made available on the website. Today many organizations conduct online examinations worldwide successfully and issue results online.
Importance of Online Examinations.
1. Fast process: Traditional exams are good, but it takes many days or months to display the results of the examination because the answer scripts are checked manually. In an online examination the checking and result processing are performed entirely by a computer, which makes the process faster. Results of an online exam can be declared within a few days of the exam.
2. Three major components have to be catered for efficiently: (1) creation of exams, (2) supervision of the examination, and (3) marking of exams.
3. A major highlight of using a web based exam software or an online examination system is
that it gives a high level of transparency as opposed to the traditional method or remote
method.
4. It is almost impossible to compromise exam questions and evaluations because they cannot
be influenced.
5. Most online exams generate their results instantly and it is often possible for the exam taker
to get information on his results immediately.
6. Assessments that are served on desktops, mobiles and tablets at ease. Conduct tests on any
device seamlessly.
7. Built for the candidate's ease.
8. Simplifies how you conduct assessments.
Advantages of Online Examination
• immediate feedback, tailored to help students improve their knowledge and performance
• access for students in different geographical locations and at different times
• sophisticated reporting, allowing you to refine the exercise or identify areas in which more
instruction is needed
• students undertake online tests many times to assess and re-assess their knowledge
• Testing in an online environment can be a lot more interactive than traditional paper and pen
tests. Instructors can embed multimedia in test questions to provide more engaging
assessments. For example: Students may be asked to identify a particular area of an image by
directly clicking on it instead of having to answer in written form.
• Online test can be more accessible to students with disabilities who have assistive
technologies built in to their computers than hand written tests are.
• Low cost, minimum effort, saving of time, instant results, and the ability to conduct an examination in India and abroad.
• Although creating online tests is labour-intensive, once a test is developed in Blackboard it is relatively easy to transfer it and repeat it in other Blackboard courses.
• Rapid turnaround on test results
• Greater choice of where and when to test
• Centralized registration and scheduling
• Reduced manual processes and errors
• Increased test security
• More standardized, automated processes
• Quicker updates to test content
• Less human error
Disadvantages of online examination
1. Unlike collaborative, project-based online assessment, multiple choice or essay tests online can feel even more impersonal than they do in the classroom, which may contribute to an online student's sense of isolation.
2. While it is tempting to use the multiple choice quizzes provided by the textbook publisher, these types of assessment lack creativity and may not be suitable to the specific needs of your learners.
3. Some students will not be accustomed to taking quizzes and tests online, and they may need some hand-holding early in the semester before they feel comfortable with the technology.
4. Cheating on an online test is as simple as opening up another window and searching Google or asking a classmate for the correct answers. Furthermore, cheating on online multiple choice tests is nearly impossible for the instructor to prevent or catch.
5. Though the technology that makes online tests possible is a great thing, it can also cause problems. If you do online testing, have a back-up plan for students who have technical difficulties and be ready to field some frantic emails from students who have poor internet connections or faulty computers.
6. There may be loss of internet connectivity during the examination.
7. Theory (descriptive) examinations cannot easily be conducted in this mode.
8. Computer hardware and software problems may be encountered.
9. It is a new strategy, so it is not yet used at all levels of education.
10. Basic computer knowledge is compulsory.
11. It is risky and may create more mental fear than a conventional written exam.
12. There is no way to estimate the intellectual level of an individual just by an objective type online exam.
13. One can crack the exam just by luck, not by knowledge.

COMPUTER BASED EXAMINATION


A computer based assessment, also known as computer based testing (CBT), e-exams, computerized testing or computer administered testing, is a method of administering tests in which the responses are electronically recorded, assessed, or both. As the name implies, computer-based assessment makes use of a computer or an equivalent electronic device (e.g. a handheld computer). Computer based assessment enables educators and trainers to author, schedule, deliver and report on surveys, quizzes, tests and exams. Computer based testing may be a standalone system or a part of a virtual learning environment, possibly accessed via the World Wide Web.
A computer based test simply refers to tests and assessments conducted through the use of organized systems on computers. Chalmers (2011) sees a computer based test as a test that can be used in a supervised or non-supervised environment, and one that can allow students to check their own progress through self-assessment. The test is administered via computer (not necessarily online); responses are recorded and scored electronically. Advanced question types are available: multiple choice, fill-in-the-blanks, essay, etc.

It is a computer based exam which will be conducted using a Local Area Network (LAN) to make it safe, secure and uninterrupted.
a) The candidate can review or re-answer any question at any point of time during the examination.
b) The candidate can change the chosen option of an answer during the exam duration, and this is one of the most important features of a computer based examination.
c) The candidate also has the option to mark any answer for review at a later stage during the examination.
d) There will be a panel on the computer screen showing all the question numbers in a colour scheme which indicates which questions are answered, left unanswered and marked for review (a minimal sketch of such a status panel follows this list).
e) The candidate gets the flexibility of choosing the exam date of his/her choice as per his/her convenience.
f) It will make the candidate feel confident about the use of information technology.
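The colour-coded panel described in point (d) is essentially a small piece of status bookkeeping. The sketch below is only an illustration of how such a panel might be modelled; the class and status names are invented for the example and are not taken from any particular examination software.

    # Illustrative sketch of a CBT question-status panel (all names are hypothetical).
    from enum import Enum

    class Status(Enum):
        UNANSWERED = "unanswered"          # typically shown in grey/white
        ANSWERED = "answered"              # typically shown in green
        MARKED_FOR_REVIEW = "marked"       # typically shown in violet

    class QuestionPanel:
        def __init__(self, number_of_questions):
            # every question starts out unanswered
            self.status = {q: Status.UNANSWERED for q in range(1, number_of_questions + 1)}
            self.answers = {}

        def answer(self, q, option):
            # feature (b): the candidate may answer, or change, an option at any time
            self.answers[q] = option
            self.status[q] = Status.ANSWERED

        def mark_for_review(self, q):
            # feature (c): flag a question to revisit at a later stage
            self.status[q] = Status.MARKED_FOR_REVIEW

        def summary(self):
            # feature (d): counts of answered / unanswered / marked questions for the panel
            counts = {s: 0 for s in Status}
            for s in self.status.values():
                counts[s] += 1
            return counts

    panel = QuestionPanel(5)
    panel.answer(1, "B")
    panel.mark_for_review(3)
    print(panel.summary())    # 1 answered, 1 marked for review, 3 unanswered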
Analysis of data: An enormous advantage of computerized tests is that data analysis, both for individuals and for groups, is made remarkably easy.
Presentation of results to subjects: Immediately the test is finished, the computer can present the results to the subject either on the screen or as a printed document.
Items: However, a computer test, even if it consists of what might be called computer-bound items, must still be judged against the standard psychometric criteria of reliability, discriminatory power, validity and quality of normative data where these are applicable.
Comparability between a paper-and-pencil test and a computer-administered test: It is far easier to present verbal and numerical items on the computer than visual items, where there is always the possibility that the screen image will be different from the printed test, even with modern graphics and light-sensitive pens. Nevertheless, no matter how identical the two tests appear to be, it is essential that the reliability, validity and standardization of the computer version be checked.
Advantages
1. Reduced testing time.
2. Increased test security.
3. Provision of instant scoring.
4. Better use of professionals' time.
5. Reduced time lag.
6. Greater accuracy: The computer can combine information from a variety of sources consistently; humans are less accurate and less consistent when they attempt to do this. The computer can handle extensive amounts of normative data, whereas humans are limited. The computer can combine information in very complex ways; humans are quite limited in these capabilities.
7. Computers can be programmed so that they continuously update the norms, predictive regression equations etc.
8. Creates standardization: The computer demands a high degree of standardization of both test procedures and test interpretation, and ordinarily does not tolerate deviation from such standardization.
9. Greater control: This relates to the previous point, but the issue here is that the error variance attributable to the examiner is greatly reduced, if not totally eliminated.
10. Greater utility with special students and groups: There are obvious benefits in computerized testing of special groups, such as the severely disabled, for whom paper-and-pencil tests may be quite limited or inappropriate.
11. Long-term cost savings: Although the initial costs of purchasing computer equipment, developing program software etc. can be quite high, once a test is automated it can be administered repeatedly at little extra cost.

Disadvantages
• Higher levels of anxiety.
• Testing reduces the potential for observing the subject's behavior.
• The need for an individual computer terminal for each person limits the number of subjects who can be tested at any one time.

Portfolio - Portfolios are collections of work samples that illustrate a person's accomplishments in a talent area. Photographers collect portfolios of their best photos, artists collect their art work, composers their compositions.
A portfolio is a methodology for assembling and organizing students' products and assessment data.
Assessment portfolios are used to document what a child has achieved in school. Portfolio assessment
refers to the purposeful, selective collection of learner work and reflective self-assessment that is used
to document progress and achievement over a period of time. In this technique teachers or students
collect samples of students work and put them in a folder. Portfolio also includes some reflective
accounts (e.g. diaries/logs). Example: in education, students collect in a portfolio, essays around
particular teaching methods, lesson plans, teaching materials that they have developed and a report
about the teaching experience itself.
The portfolio is an assessment tool that is gaining popularity and is being used in many schools and colleges. Debates about the benefits and deficiencies of portfolios, questions about portfolio validity, and challenges to the portfolio assessment procedure will be raised. A good teacher will rise to these challenges and become well equipped to use, adapt and create assessment techniques that combine the best of traditional and portfolio assessment.

Definition: A portfolio is a purposeful collection of student work that exhibits to the student (and/or
others) the student's effort, progress or achievement in (a) given area(s). The collection must include:
• Student participation in the selection of the portfolio content
• The criteria for selection
• The criteria for judging the merit
• Evidence of student reflection
Characteristics of portfolio:
1. It is primarily created by the student
2. An alternative to traditional testing
3. The portfolio has pedagogic and assessment functions.
4. Portfolios can include a wide variety of materials: teacher notes, teacher-completed, checklists,
students’ self-reflections, written summaries, reading logs, or audiotapes of student talks
5. Comprehensive ways to assess students’ knowledge and skills
6. Portfolios can be either paper or e-portfolios.
7. Authenticity of assessment
8. Students’ active participation in the evaluation process
9. Development of students’ reflective thinking.
10. Includes only ongoing information that is meaningful to the learner and useful in planning current and future instructional goals.
Development of a portfolio
Planning - A teacher who uses portfolios for student assessment has to first plan well in advance how to prepare and use portfolios. Portfolios can be maintained in different ways. A teacher must fix
its physical and conceptual structure. So first a teacher has to decide how the portfolio would be
maintained and based on that decide the type of documents to be collected in a portfolio. Teacher has
to plan in advance how, when and what will be selected. A portfolio may include the student's very best work or even ordinary work, to show the student's maximum and day-to-day performance. It can be kept for
a long or a short time. A short term portfolio is kept for a particular unit and a long term portfolio to
compare the development of the child over a period of time. Portfolios can be used at any grade
levels, subjects or courses. It can reflect work samples in one subject area alone or across the whole
curriculum. At school levels it usually maintained for each subject the student is learning. It can also
be created under categories such as verbal work, technical work or artistic work etc. A portfolio can
be maintained manually or electronically. An electronic portfolio can be maintained as a folder in a
computer or a CD that contains the work of the students as word documents, presentations, videos or
audio recordings (reflections can be recorded), photographs etc.
Collecting student work - Based on the purpose of the portfolio assessment, a variety of student work is collected. At the primary level it is the responsibility of the teacher to maintain portfolios, but at
higher levels it is the responsibility of the student. Teachers must be clear about what products can be
included in the portfolio.
The criteria for selection of materials for portfolio are:
• The products are selected by and personally meaningful to the learner
• The products reflect development and/or learning in all domains, in varying contexts, and on
an ongoing process throughout the period.
• The products are related to instructional objectives
• The products clarify performance expectations
• The products provide a medium for sharing information between the student and others
Several drafts of a work showing initial conception and planning, different attempts made for
implementation, final product, report of the process, student's reflection at different stages can be
collected. A teacher can ask the students to re-examine all the stages of work and reflect on the
process and products from the beginning to end. Self-evaluation is valuable in developing meta-
cognitive abilities of students. At the beginning of a course an initial portfolio can be developed
which is created and evaluated. Evaluation of the initial portfolio by the teacher can give feedback to
the students. It can also give awareness for the students about what materials to be included in a
portfolio and how it should be maintained. Then later the students can maintain term portfolios,
semester portfolios or year-end portfolios.
Materials in the portfolio must be dated and sequenced to reflect the most recent work.
Categorizing student products according to the domains of learning can help in organizing the
material and for analysis and interpretation. There is no fixed way to assemble, store and retrieve
portfolio contents. This depends on the type of portfolio products chosen. The important factor is that
it should be readily accessible to the students.
Evaluation of portfolios - The portfolios can be evaluated in different ways. A portfolio week can be conducted during the final week of every month. During this time the students consult with a teacher or an adult mentor to discuss their past accomplishments, future goals etc. It can be
examined whether substantial learning has occurred and necessary feedback can be provided. If the
portfolio contains incomplete or unsuccessful work, the student may be given additional assignments
or special programmes for learning. The assessment report based on the portfolio is shared with the
students and parents in the parent teachers meeting. To maintain consistent standards teachers can
cross-read portfolios from other teachers' classes. Cross reading of selected portfolios of students of
different levels of learning can be done.
In another method a teacher can establish a process folio of work in progress. This can include
teacher's comments and observations, student self evaluation, progress notes and planning notes. A
teacher works with each child reviewing and revising the work and deciding which works to be
transferred to the archival portfolio. At the end of the year a student can take the archival portfolio
home or is forwarded to the next grade.
The year-end, semester-end or course-end portfolio, or archival portfolio, is the portfolio from which the summative data will be derived. It will give a report of the year's accomplishments of a student. It contains the best work of the students and is used to evaluate their progress in learning. It can be used as the basis for providing grades and communicating what they have accomplished.
Criteria for evaluating a portfolio:
• Reflect all context of learning
• Reflect and facilitate individual learning styles
• Contain student reflection
• Show progress towards learning goals
• Reflect individual capabilities and interests
• Meaningful means of communication
• Reflect the three dimensions of growth and development, learning and teaching
Portfolio exhibitions can also be held to display the finest accomplishments, and others can ask the student questions regarding that work. To validate the accuracy of portfolio assessment, examination by external examiners can be done. They would look at the quality of work, skill of
teacher to assess and give feedback, appropriateness of the grade provided, whether all students had
access to the type of learning they needed etc. Their findings and recommendations can be made to
the school board or examination board.
Advantages of portfolio assessment
1. A more comprehensive way to assess their students’ knowledge and skills,
2. Help students be more accountable for the work they do in class and the skills and knowledge
they acquire; involve students in the assessment process, thus giving them a more meaningful
role in improving achievement; invite students to reflect upon their growth and performance
as learners.
3. Develop students’ skills of reflective thinking. It can be used as a means of promoting learner
reflection. Portfolios can serve as a means of motivating students and promoting their self-
evaluation and self-understanding.
4. It documents the students’ learning process. It can either include a record of students’
achievements or simply document their best work.
5. The portfolio can help in assessing product or process according to the context and design of
its development.
6. Portfolio assessment is closely linked to instruction because they reveal weaknesses in
instructional processes. Portfolios provide teachers with a wealth of information upon which
to base instructional decisions and to evaluate student progress.
7. It offers the teacher an in-depth knowledge of the learner and helps in individualization of
instruction. They allow the teacher to see the student as an individual, each with his or her
own unique set of characteristics, needs, and strengths.
8. Portfolios can develop meta-cognition of students, awareness of their own learning and
thinking. Students may judge their own work and compare their performance in different assignments.
9. It is an effective way of getting students to take a second look and think about how they could
improve future work. Portfolios can provide structure for involving students in developing and
understanding criteria for good efforts and in applying the criteria to their own work.
10. Help teachers standardize and evaluate the skills and knowledge students acquire without
limiting creativity in the classroom.
Limitations of portfolios
1. They place additional demands on teachers and students.
2. Teachers need additional time for planning, developing strategies and materials, meeting with
individual students and small groups, and reviewing and commenting on student work.
3. Portfolio assessments may be less reliable. It can be subjective.
4. It can be time consuming for teachers and staff, especially if portfolios are done in addition to
traditional testing and grading.
5. Teachers must develop their own individualized criteria, which can be initially difficult or
unfamiliar.
6. Data from portfolio assessments can be difficult to analyze or aggregate, particularly over
long periods of time.

Traditional vs. Portfolio assessment.


Traditional Assessment | Portfolio Assessment
Assessment is not continuous | Continuous and comprehensive
Short term assessment | Long term assessment
Timed, fixed-response format | Untimed, free-response format
Primarily created by teachers | Primarily created by students
Provides opportunity for students to know their present status of achievement | Provides opportunity for students to select and examine one's own work, reflect on the completed work, review and revisit past products
Based mainly on tests and exams | Uses techniques, tools, creative works, self reflections etc.
Scores suffice for feedback | Individualized feedback
Norm-referenced scores | Criterion-referenced scores
Focus on the right answer | Open-ended, creative answers
Summative | Formative
Oriented to product | Oriented to process and product
Non-interactive performance | Interactive performance
Fosters extrinsic motivation | Fosters intrinsic motivation

Rubrics for Evaluation. Meaning of Rubric: The traditional meanings of the word rubric stem from the Latin word rubrica, which means "a heading on a document (often written in red), or a direction for conducting church services". The term has long been used as a medical label for diseases and
procedures. The bridge from medicine to education occurred through the construction of
"Standardized Developmental Ratings." These were first defined for writing assessment in the mid-
1970s and used to train raters for New York State's Regents Exam in Writing by the late 1970s. That
exam required raters to use multidimensional standardized developmental ratings to determine a
holistic score. The term "rubrics" was applied to such ratings by Grubb, 1981 in a book advocating
holistic scoring. In this new sense, a rubric is a set of criteria and standards typically linked to
learning objectives. It is used to assess or communicate about product, performance, or process tasks.
Authentic assessments typically are criterion-referenced measures. That is, a student's aptitude on a
task is determined by matching the student's performance against a set of criteria to determine the
degree to which the student's performance meets the criteria for the task. To measure student
performance against a pre-determined set of criteria, a rubric, or scoring scale, is typically created
which contains the essential criteria for the task and appropriate levels of performance for each
criterion.
Rubric: A scoring scale used to assess student performance along a task-specific set of criteria.
“A rubric is a tool used to assess or guide a student’s performance on a given task in a given context
given certain standards” (Varvel, 2011,para. 1). Using rubrics is an evaluation approach used to judge
the quality of performance (Morrison, Ross, Kemp, 2004). “A rubric is intended to give a more
descriptive, holistic characterization of the quality of students’ work” (p. 290). Rubrics place
emphasis on explicit descriptions of what a student will do, know, and to what degree.
Rubrics are performance-based assessments that evaluate student performance on any given task or
set of tasks that ultimately leads to a final product, or learning outcome. Rubrics use specific criteria
as a basis for evaluating or assessing student performances as indicated in narrative descriptions that
are separated into levels of possible performance related to a given task. Starting with the highest
level and progressing to the lowest, these levels of performance are used to assess the defined set of
tasks as they relate to a final product or behavior.
A rubric can be defined as a descriptive guideline, a scoring guide or specific pre-established
performance criteria in which each level of performance is described to contrast it with the
performance at other levels. This is in contrast to a rating scale which provides a scale (1-5) and a
description of each number in the scale (1 = Unacceptable to 5 = Exceeds Expectations), but does not
provide a description of what the specific differences are among performances at each level.

Steps in the construction of Rubrics for a Task


A rubric is comprised mainly of two components: criteria and levels of performance. Each rubric
must have at least two criteria and at least two levels of performance.
1. Identify the criteria for the task - The criteria are the characteristics of good performance on a task. A criterion is used to evaluate how well students completed the task and, thus, how well they have met the standards. So think "What does good performance on this task look like?" or "How will I know they have done a good job on this task?" In answering those questions you will be able to identify the criteria for good performance on that task.
2. Identify the levels of performance - For each criterion, the evaluator applying the rubric can determine to what degree the student has met the criterion, i.e., the level of performance. A score can be assigned to each level of performance.
3. Write descriptors - In this step find out how students can demonstrate that they are fully capable of meeting the standard. Descriptors are statements of what students should know and be able to do. A descriptor tells students more precisely what performance looks like at each level and how their work may be distinguished from the work of others for each criterion. Descriptors are typically narrow in scope and amenable to assessment; they must be observable and measurable. A well-written descriptor spells out what tasks students should do to demonstrate their mastery.
4. Create the Rubric – After setting the criteria, level of performances and descriptors you next
decide whether to consider the criteria analytically or holistically based on which you create
analytic or holistic rubrics.
Thus a well constructed rubric identifies (Carnegie Mellon, 2001):
1. Criteria: the aspects of performance that will be assessed
2. Performance levels: a rating scale that identifies students’ level of mastery within each
criterion
3. Descriptors: the characteristics associated with each dimension
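To make the relationship between these three components concrete, the sketch below scores one piece of student work with a small analytic rubric. The criterion names, weights, level labels and ratings are hypothetical, chosen only to show how criterion-level judgements can be combined into a total; a holistic rubric would instead record a single overall level.

    # Illustrative sketch only: scoring with a small analytic rubric.
    # Criteria, weights and ratings below are invented examples.
    levels = {"beginning": 1, "developing": 2, "accomplished": 3, "exemplary": 4}

    weights = {            # criterion -> weight (relative importance)
        "clarity": 2,
        "organization": 1,
        "mechanics": 1,
    }

    ratings = {            # the rater's judgement for one student product
        "clarity": "accomplished",
        "organization": "developing",
        "mechanics": "exemplary",
    }

    # analytic scoring: report each criterion separately, then a weighted total
    for criterion, level in ratings.items():
        print(criterion, "->", level, "(", levels[level], ")")

    total = sum(weights[c] * levels[ratings[c]] for c in weights)
    maximum = sum(weights[c] * max(levels.values()) for c in weights)
    print("weighted total:", total, "out of", maximum)   # 2*3 + 1*2 + 1*4 = 12 out of 16

Reporting the per-criterion levels, rather than only the total, is what gives the analytic rubric its diagnostic value.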
Types of Performances That Can Be Assessed with Rubrics
Processes (physical skills, use of equipment, oral communication, work habits). Examples: playing a musical instrument, preparing a slide for the microscope, making a speech to the class, conversing in a foreign language, working independently.
Products (constructed objects; written essays, themes, reports, term papers; other academic products that demonstrate understanding of concepts). Examples: watercolor painting, laboratory report, term paper on theatrical conventions in Shakespeare's day, written analysis of the effects of a plan, model or diagram of a structure (atom, flower, planetary system, etc.), concept map.

Types of Rubrics: Rubrics can be holistic or analytic, general or task specific.

Holistic rubrics provide a single score based on an overall impression of a student’s performance on a
task.
• Advantages: Quick scoring provides an overview of student achievement
Easily obtain a single dimension if that is adequate for your purpose.
• Disadvantages: Does not provide detailed information, may be difficult to provide one overall
score.
Not very useful to help plan instruction because they lack a detailed analysis of a
student’s strengths or weaknesses of a product.
• Use when: you want a quick snapshot of achievement.
a single dimension is adequate to define quality.

Analytic rubrics provide feedback along several dimensions.


• Advantages: Provides meaningful and specific feedback along multiple dimensions.
Scoring tends to be more consistent across students and grades.
Easier for the teacher to share with students and parents about certain strengths
and weaknesses.
Helps students to better understand the nature of quality work.

• Disadvantage: It is more difficult to construct analytical rubrics for all tasks.


Tends to be quite time consuming.
Lower consistency among different raters
• Use when:
o you want to see relative strengths and weaknesses.
o you want detailed feedback.
o you want to assess complicated skills or performance.
o you want students to self-assess their understanding or performance.

General rubrics contain criteria that are general across tasks.


• Advantage: can use the same rubric across different tasks
• Disadvantage: feedback may not be specific enough.
• Use when:
o you want to assess reasoning, skills, and products.
o all students are not doing exactly the same task.

Task specific rubrics are unique to a specific task.


• Advantage: more reliable assessment of performance
• Disadvantage: difficult to construct rubrics for all tasks.
• Use when:
o you want to assess knowledge.
o when consistency of scoring is extremely important.
Holistic Template for Holistic Rubrics
Score 5: Demonstrates complete understanding of the problem. All requirements of the task are included in the response.
Score 4: Demonstrates considerable understanding of the problem. All requirements of the task are included.
Score 3: Demonstrates partial understanding of the problem. Most requirements of the task are included.
Score 2: Demonstrates little understanding of the problem. Many requirements of the task are missing.
Score 1: Demonstrates no understanding of the problem.
Score 0: No response/task not attempted.

Analytical Template for Analytic Rubrics
Each criterion is rated on a four-point scale: Beginning (score 1), Developing (score 2), Accomplished (score 3), Exemplary (score 4).
Criterion #1: Beginning (1) - description reflecting beginning level of performance; Developing (2) - description reflecting movement toward mastery level of performance; Accomplished (3) - description reflecting achievement of mastery level of performance; Exemplary (4) - description reflecting highest level of performance.
Criterion #2: Beginning (1) - description reflecting beginning level of performance; Developing (2) - description reflecting movement toward mastery level of performance; Accomplished (3) - description reflecting achievement of mastery level of performance; Exemplary (4) - description reflecting highest level of performance.

Holistic Rubric for Assessing Student Essay


Rating | Detailed Description of Performance at Each Level
Inadequate: The essay has at least one serious weakness. It may be unfocused, underdeveloped, or rambling. Problems with the use of language seriously interfere with the reader's ability to understand what is being communicated.
Developing Competence: The essay may be somewhat unfocused, underdeveloped, or rambling, but it does have some coherence. Problems with the use of language occasionally interfere with the reader's ability to understand what is being communicated.
Acceptable: The essay is generally focused and contains some development of ideas, but the discussion may be simplistic or repetitive. The language lacks syntactic complexity and may contain occasional grammatical errors, but the reader is able to understand what is being communicated.
Sophisticated: The essay is focused and clearly organized, and it shows depth of development. The language is precise and shows syntactic variety, and ideas are clearly communicated to the reader.

Example Analytic Rubric: Articulating thoughts through written communication (final paper/project). Levels of performance: Needs Improvement (1), Developing (2), Sufficient (3), Above Average (4).

Clarity (thesis supported by relevant information and ideas):
• Needs Improvement (1): The purpose of the student work is not well-defined. Central ideas are not focused to support the thesis. Thoughts appear disconnected.
• Developing (2): The central purpose of the student work is identified. Ideas are generally focused in a way that supports the thesis.
• Sufficient (3): The central purpose of the student work is clear and ideas are almost always focused in a way that supports the thesis. Relevant details illustrate the author's ideas.
• Above Average (4): The central purpose of the student work is clear and supporting ideas are always well-focused. Details are relevant and enrich the work.

Organization (sequencing of elements/ideas):
• Needs Improvement (1): Information and ideas are poorly sequenced (the author jumps around). The audience has difficulty following the thread of thought.
• Developing (2): Information and ideas are presented in an order that the audience can follow with minimum difficulty.
• Sufficient (3): Information and ideas are presented in a logical sequence which is followed by the reader with little or no difficulty.
• Above Average (4): Information and ideas are presented in a logical sequence which flows naturally and is engaging to the audience.

Mechanics (correctness of grammar and spelling):
• Needs Improvement (1): There are five or more misspellings and/or systematic grammatical errors per page, or 8 or more in the entire document. The readability of the work is seriously hampered by errors.
• Developing (2): There are no more than four misspellings and/or systematic grammatical errors per page, or six or more in the entire document. Errors distract from the work.
• Sufficient (3): There are no more than three misspellings and/or grammatical errors per page and no more than five in the entire document. The readability of the work is minimally interrupted by errors.
• Above Average (4): There are no more than two misspelled words or grammatical errors in the document.

Advantages of using rubrics


• Help to better communicate teacher expectations. Levels of performance permit the teacher to more consistently and objectively distinguish between good and bad performance, or between superior, mediocre and poor performance, when evaluating student work.
• Identifying specific levels of student performance allows the teacher to provide more detailed
feedback to students. The teacher can more clearly recognize areas that need improvement.
• Rubrics assist faculty in rating qualities of learning outcomes. Therefore, rubrics effectively
help teachers to specifically and consistently assess and evaluate qualities of learning and
communicate expected standards of learning.
• When provided to students before and during learning, rubrics also assist students to more
successfully interpret and anticipate expected levels of performance. Motivates students to
reach the standards specified.
• Help students interpret their own level of performance, learn what must be done to improve
performance and achieve higher standards of performance.
• Rubrics can be modified and can reasonably vary from teacher to teacher. So they are flexible
tools that can be shaped to your purposes. Flexible tool, having uses across many contexts, in
many grade levels and for a wide range of abilities.
• Narrows the gap between instruction and assessment.
• Potential to be transferred into grades if necessary.
• Can offer a method of consistency in scoring by clearly defining the performance criteria.
• Helps the grading process become more efficient;
• Helps faculty grade/score more accurately, fairly and reliably;
• Requires faculty to set and define more precisely the criteria used in the grading process;
• Supports uniform and standardized grading processes among different faculty members;
• Students are able to self-assess their own work prior to submitting it;
• Students can understand better the rationale and the reason for grades;
Disadvantages of Rubrics
• Rubrics can also restrict students' thinking, in that they may feel they need to complete the assignment strictly according to the rubric instead of taking the initiative to explore their learning.
• If the criteria in the rubric are too complex, students may feel overwhelmed by the assignment, and little success may result.
• For the teacher creating the rubric, the task of developing, testing, evaluating, and updating
would be difficult and increases the workload of the teacher.
• Development of rubrics can be complex and time-consuming;
• Using the correct language to express performance expectation can be difficult;
• Defining the correct set of criteria to define performance can be complex;
• Rubrics might need to be continuously revised before they can actually be used with ease.
UNIT III: Basic Statistics for Analysis and Interpretation of Assessment data
----------------------------------------------------------------------------------------------------------------------------------------------------

Role and importance of statistics in analyzing assessment data; Population and Sample; Data, Types of Data (Primary & Secondary, Quantitative & Qualitative); Classification of Data; Frequency Table (Grouped & Ungrouped); Graphical Representation of Data: need and importance, representing data using Bar Diagram and Pie Diagram, Histogram, Frequency Polygon, Frequency Curve and Ogives, interpretation of graphical representations; Descriptive Statistical Measures: Measures of Central Tendency (Mean, Median, Mode): concept and methods of finding each measure and when to use each measure; Measures of Variability/Dispersion (Range, Mean Deviation, Quartile Deviation, Standard Deviation): concepts and methods of finding each measure and when to use each measure; Correlation: meaning and importance, concept of Coefficient of Correlation, Types of Correlation (Positive, Negative, Zero and Perfect Correlation), Rank Difference Method of calculating Coefficient of Correlation, interpretation of correlation.
----------------------------------------------------------------------------------------------------------------------------------------------
The word statistics is derived from the Latin word ‘Status’ which means a ‘Political State’. It was applied only to such facts and figures as the state required for its official purposes. Statistics is a body of methods for making wise decisions in the face of uncertainty. It embodies a methodology of collection, classification, description and interpretation of data obtained through the conduct of surveys and experiments. In recent times statistics has come to be used in two senses: as numerical data and as statistical methods. The word statistics denotes numerical data; in this sense it is a numerical description of the quantitative aspect of things, which takes the form of counts or measurements. Statistical methods refer to the principles and methods used in the collection, analysis and interpretation of data.
Definition of statistics
“Statistics may be called the science of counting” - A. L. Bowley
“Statistics can be defined as the collection, presentation and interpretation of numerical data” - Croxton and Cowden
Statistics as a subject or branch of knowledge is defined as one of the subjects of study that helps us in the scientific collection, presentation, analysis and interpretation of numerical facts.
“Aggregates of facts affected to a marked extent by multiplicity of causes, numerically expressed, enumerated or estimated according to reasonable standards of accuracy, collected in a systematic manner for a predetermined purpose and placed in relation to each other” - Horace Secrist
The term statistics is used as a plural noun as well as a singular noun. In the plural form it refers to the numerical data collected in a systematic manner with some definite aim or object in view. In the singular sense it refers to the techniques and methods used in the collection, analysis and interpretation of data.

Characteristics
❖ Aggregate of facts
❖ Numerically expressed
❖ Affected to a marked extent by a multiplicity of causes and not by a single cause
❖ Collected in a systematic manner
❖ Collected for a predetermined purpose
❖ It should be placed in relation to each other
❖ The reasonable standard of accuracy should be maintained in statistics
Functions (Steps of statistical analysis)
➢ Collection
➢ Classification
➢ Tabulation
➢ Analysis
➢ Interpretation
➢ Comparison

Importance of statistics
1. Statistics in business - Statistics is extensively used in modern business activities. A businessman must make a proper analysis of past records to forecast future business conditions. Every businessman has to make use of statistical tools to estimate the trend of prices and of economic activities.
2. Statistics and the state- Statistics are the eyes of state as they help in administration. State conducts
the population census to estimate the figures of National Income and prosperity of the country.
3. Statistics in economic planning -In India various plans that have been prepared or implemented.
National Sample Survey Scheme was introduced to collect the statistical data for the use of planning.
4. Importance in defense and war - Statistical tools are very useful in the field of defense and war because they help to compare the military strength of different countries in terms of manpower, tanks, war aeroplanes, missiles etc. They also help in planning the future military strategy of the country, in estimating the losses due to war, and in arranging war finance.
5. Importance in research -In the field of industry and commerce researches are made to find out the
causes of variations of different products.
6. Importance in physical science.- In the sphere of physical science like physics, chemistry, Botany etc a
large number of measurements are taken which are found to vary from actual results.
7. Statistical methods are vital in all educational problems - Books dealing with educational science, educational articles in magazines and educational surveys are replete with statistics. If a teacher wants to understand these materials, he or she must have a familiarity with statistical terminology.
8. Statistics in mathematics - The accuracy of conclusions based on statistical methods can be easily tested and verified.
9. Study or comparison of groups of individuals - It is not possible to squeeze out general conclusions merely by examining a large set of individual scores. In such cases certain representative values or norms have to be calculated.
Other uses of Statistics are
i. Statistics has developed powerful tools which enable us to make valid inferences regarding the characteristics of a population by studying only a representative part of it, called a sample.
ii.Huge amount of quantitative information may be collected in reasonable time at minimum expenses
with the desired degree of accuracy using statistical method.
iii.For a physician to test the effectiveness of a new drug.
iv.For a political commentator of a country in a future date.
v.For a sociologist to forecast the population of a country in a future date.
vi.To enable the investigator find ratios, proportions etc.

Population A population is the aggregate of all the units under study in any field of enquiry. It is a
collection of individuals or of their values which can be numerically specified. It is also called as
universe. A population can be Finite population or Infinite population
Sample A finite subset of a population, selected from it with the objective of investigating its
properties is called a sample of that population. A sample is selected in such a manner that it represents
the population. It is a minute model or replica of the population. The representative proportion of the
population is called a sample. The sample must have sufficient size to warrant statistical analysis.
Sampling Sampling is the process by which a relatively small number of individuals or measures of
individuals, objects or events is selected and analyzed in order to find out something about the entire
population from which it was selected.
It helps to reduce expenditure, save time and energy, permit measurement of great scope, or produce
greater precision and accuracy. Sampling procedures provide generalizations on the basis of a relatively
small proportion of the population
Methods of sampling
Probability sampling Or Random sampling It is based on the probability for selection of each item.
Also known as chance sampling.
Non-probability sampling It is that sampling which does not afford any basis for estimating the
probability for each item to be included in the sample.
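A minimal sketch of simple random (probability) sampling is shown below. The population of marks is invented for illustration, and Python's standard random module is used only as a convenient way of giving every unit an equal chance of being selected.

    # Sketch: simple random sampling, where each unit has an equal chance of selection.
    # The population of marks below is hypothetical.
    import random

    population = [42, 55, 61, 48, 73, 67, 59, 80, 45, 52,
                  66, 71, 38, 90, 57, 63, 49, 76, 84, 60]   # all units under study

    sample = random.sample(population, k=5)                 # draw 5 units without replacement

    # a characteristic of the sample (statistic) estimates the population characteristic (parameter)
    sample_mean = sum(sample) / len(sample)
    population_mean = sum(population) / len(population)
    print("sample:", sample)
    print("sample mean (statistic):", round(sample_mean, 2))
    print("population mean (parameter):", round(population_mean, 2))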
Differences between population and sample
Population | Sample
Population refers to the collection of all elements possessing common characteristics; it comprises the universe. | Sample means a subgroup of the members of the population chosen for participation in the study.
The target population is the total group of individuals from which the sample might be drawn. | A sample is the group of people who take part in the investigation; the people who take part are referred to as “participants”.
Population is always a large group. | A part of the population, so comparatively smaller.
Includes each and every unit of the group. | Includes only a handful of units of the population.
Data collection utilizes complete enumeration or census. | Data collection utilizes a sample survey or sampling.
Focus on identifying the characteristics. | Focus on making inferences about the population.
A characteristic of the population is called a parameter. | A characteristic of the sample is called a statistic.
Difficult to study the population. | Easier to study the sample.
DATA AND TYPES Of DATA Statistics is the study of the collection, organization, analysis, interpretation and
presentation of data. The first step in statistical work is to obtain data. Data constitute the foundation of
statistical analysis and interpretation.
 Data denotes raw facts and figures. Data can be defined as a collection of facts or information from
which conclusion may be drawn.
Selection Of Appropriate method For Collection Of Data
 Nature and scope of enquiry
 Availability of financial resources
 Availability of time and money
 Degree of accuracy desired
 Status of the investigator
 Education and level of the respondents
Classification of data
On the basis of who collects the data, data can be classified into two:
 Primary data - Primary data are those data which are collected for the first time and are original in character. Primary data are in the shape of raw materials from which the investigator draws conclusions by applying statistical methods for analysis and interpretation.
“By primary data we mean those data which are original, that is, those in which little or no grouping has been made, the instances being recorded or itemized as encountered. They are essentially raw materials.” - Horace Secrist
ADVANTAGES OF PRIMARY DATA
 They are first-hand information
 The data collected are reliable as they are collected by the investigator himself
 Primary data are useful for knowing the opinions, qualities and attitudes of respondents
DISADVANTAGES OF PRIMARY DATA
 Expensive and time consuming
 Scope for personal bias
 Selection of a representative sample is not an easy task
Methods Used For Collecting Primary Data
• Observation method
• Interview method
• Questionnaire method
• Schedule method
Secondary data - Secondary data are those which have been collected by some other person for his own purpose and published. They are in the shape of finished products. “Secondary data are those already in existence and which have been collected for some other purpose than the answering of the question in hand.” - M. M. Blair
Advantages Of Secondary Data
 The information can be collected at the least cost
 The time required for obtaining the information is very little
 A large quantity of data is available
 It helps the researcher in defining the problem and formulating hypotheses
 It helps in interpreting the primary data with more insight
Disadvantages Of Secondary Data
 Inappropriate and inadequate
 Inaccurate and unreliable
 The secondary data may contain certain errors
Sources of secondary data
• External sources (personal and public)
• Internal sources
• Official reports of central, state and local governments
• Official publications of foreign governments and international bodies like the UNO and its subordinate bodies
• Reports and publications of trade associations, banks, cooperative societies and similar semi-government and autonomous organisations
• Publications of research organisations, centres and institutes, and reports submitted by economists, research scholars etc.
• Technical journals, newspapers, books, periodicals etc.
Difference Between Primary And Secondary Data
Primary data | Secondary data
Primary data are original in character. | Secondary data are not original.
Primary data are in the form of raw material. | Secondary data are in the form of a finished product.
The collection of primary data requires a large amount of money, energy and time. | Secondary data are easily available from secondary sources.
Primary data, after use, become secondary data. | Secondary data cannot be converted into primary data after their use.
Precautions are not necessary in the use of primary data. | Precautions are necessary in the use of secondary data.
They can be collected by different methods, viz. observation, interview, questionnaire and schedule methods. | They can be collected by copying down from published and unpublished sources.

During the process of assessment or research a large amount of information is gathered which can be
either qualitative or quantitative. On the basis of measurement ,data can be classified into two
Qualitative data - Qualitative data is a categorical measurement expressed not in terms of numbers,
but rather by means of a natural language description. When a person collects data in qualitative terms
the assessment is called qualitative. Qualitative observations are defined as any observation made using
the five senses. Because people often reach different interpretations when using only their senses,
qualitative evaluation becomes harder to reproduce with accuracy; two individuals collecting data
regarding the same thing may end up with different or conflicting results. In research and business,
qualitative data may involve value judgments and emotional responses. A similar example of a
qualitative data is "Our Company created more visually compelling projects last year than this year."
Qualitative data is more concerned with detailed descriptions of situations or performance; therefore it
can be much more subjective but can also be much more valuable in the hands of an experienced
person. The method of qualitative data collection rely on descriptions rather than numbers. It collects
data that are not analyzed by quantitative methods but rather by interpretive criteria. Here informal
methods like observation, interview, field notes, diary, document collection, anecdotes etc are used.
Examples: Description of procedure or skill demonstrated by student (based on observation),
Feedback on a demonstration or skill test, on case study or written assignment etc.
Quantitative data - Quantitative data is a numerical measurement, expressed not by means of a natural language description but rather in terms of numbers. When a person collects data in quantitative terms the assessment is called quantitative. Quantitative observations are made using
scientific tools and measurements. The results can be measured or counted, and any other person trying
to quantitatively assess the same situation should end up with the same results. An example of a
quantitative evaluation would be "This year our company had a total of 12 clients and completed 36
different projects for a total of three projects per client." Includes methods that rely on numerical
scores or ratings and collected data can be analyzed using quantitative methods. In quantitative data, the
process involves the collection, analyzes and interpretation of data is in terms of numbers. A
quantitative data collection uses values from an instrument based on a standardized system where the
data collected is limited to a selected or predetermined set of possible responses. In this data is
collected using more formal methods like tests, questionnaires, inventories, rating scale etc. Examples
:Number correct responses on a test, Ratings on an end-of-term course evaluation, Number of steps
missed during a skill or procedure demonstration.
Quantitative data vs. qualitative data:
• Quantitative: collection and analysis of data in quantitative terms; data are collected and analyzed as numbers (numerical data), so the raw data are numbers. Qualitative: collection and analysis of data in qualitative terms; data are collected and analyzed as descriptions (narrative data), so the raw data are words.
• Quantitative data are more objective in nature; qualitative data are more subjective in nature.
• Quantitative assessment uses numerical scores or ratings; qualitative assessment uses detailed descriptions of situations or performance.
• Quantitative assessment can be considered an analytical approach; qualitative assessment a holistic approach.
• Quantitative assessment uses more structured and well constructed methods of data collection (formal and rigid); qualitative data collection is mostly unstructured (informal and flexible).
• In quantitative assessment the freedom of response is limited; qualitative assessment allows more freedom of response.
• Quantitative scoring is objective; qualitative scoring is judgmental.
• Quantitative data are easier to analyze and permit group generalizations; qualitative data are difficult to analyze and to generalize from.
• Quantitative data give insight into the child's cognitive, affective and skill areas; qualitative data give insight into other behavioural characteristics.
A person should utilize both qualitative and quantitative assessment for the complete evaluation of the
pupil. Both quantitative and qualitative data have their benefits, though one is usually more appropriate
than the other in any given situation. Each supplements the other. For example, a student's score on an
attitude scale can be supported by data collected through observation.

CLASSIFICATION OF DATA
Classification is a technique with the help of which the collected data are divided into various groups. It helps
• to reduce the complexity of the data,
• to facilitate understanding,
• to facilitate comparison, and
• to aid analysis and interpretation.
A good classification of data should be
1. clearly understood,
2. stable,
3. flexible,
4. clearly defined, and
5. such that qualities or attributes are expressed quantitatively wherever possible.
TYPES OF CLASSIFICATION
• Geographical (e.g., distribution of population by region)
• Qualitative (e.g., sex, colour, literacy)
• Quantitative (e.g., height, weight, marks, income)
• Chronological (e.g., time periods)
TABULATION OF DATA - "Tabulation is a process of an orderly arrangement of data in
columns and rows" (Blair).
Tabulation of data is done
• for the systematic presentation of statistical data,
• to state the problem briefly and simply,
• to facilitate interpretation,
• to present the data in the form of graphs, charts, diagrams, etc., and
• to help comparative study.

FREQUENCY DISTRIBUTION
Frequency distribution is an arrangement of the values that one or more variables take in a sample. Each entry
in the table contains the frequency. A frequency distribution has a minimum of two columns: the leftmost one
lists the values of the variable found in the data and the next gives the frequency of each value.
TYPES OF FREQUENCY DISTRIBUTION
GROUPED FREQUENCY DISTRIBUTION - When there is a large number of scores, it is useful to group them
into a manageable number of intervals by creating intervals of equal width and computing the frequency of
scores that fall into each interval. Such a distribution is called a grouped frequency distribution.

UNGROUPED FREQUENCY DISTRIBUTION - If the number of distinct values the variable takes is small, classification can
be done by preparing a table which has no classes and gives only the frequency of each value. Such a table
is called an ungrouped frequency distribution.

CONSTRUCTION OF A FREQUENCY DISTRIBUTION TABLE


• First decide the number of classes to include.
• Find the class width.
• Find the class limits.
• Make a tally mark for each entry.
• Count the tally marks to find the total frequency of each class (a worked sketch of these steps is given below).
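The following Python sketch walks through these five steps; the scores and the choice of five classes are illustrative assumptions, not data from this note.
```python
# A short sketch of the five steps above; the scores and the choice of
# five classes are illustrative assumptions.
scores = [12, 15, 22, 27, 31, 35, 35, 38, 41, 44, 47, 52, 55, 58, 61]

num_classes = 5                                  # step 1: decide the number of classes
low, high = min(scores), max(scores)
width = -(-(high - low + 1) // num_classes)      # step 2: class width (rounded up)

table = []
for k in range(num_classes):                     # step 3: class limits
    lower = low + k * width
    upper = lower + width - 1
    freq = sum(lower <= x <= upper for x in scores)   # steps 4-5: tally and count
    table.append((lower, upper, freq))

for lower, upper, freq in table:
    print(f"{lower}-{upper}: {freq}")
```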
ADVANTAGES
• Data become comprehensible when arranged as a frequency distribution and can be understood easily.
• It makes comparisons easier.
• Raw data cannot be represented in graphical form; to prepare graphs, a frequency distribution is needed.
• It attracts the attention of even a layman and gives him an insight into the nature of the observation.
• It helps further statistical analysis of the data.

DISADVANTAGES
• If the frequency distribution is grouped, the identity of the individual observations is lost.
• The selection of the class interval and the lower bound of the first class are to a certain extent arbitrary, so
different frequency tables into which the same data are classified may give contradictory impressions.
GRAPHICAL REPRESENTATION OF DATA
Graphical representation of data means the pictorial representation and manipulation of data. Graphic
representation is the geometrical image of a set of data. It is a mathematical picture. It enables us to think
about a statistical problem in visual terms. It is a creative process that combines art and technology to
communicate ideas. Different types of graphs are used in data representation. The graphic representation of
data proves quite an effective and an economic device for the presentation, understanding and interpretation
of the collected statistical data. Complicated data can easily be understood through a diagram or graph.
Some of them are listed below:-
For ungrouped data or discrete data
• Line graph
• Bar graph
• Pie graph
• Pictogram
For grouped data
• Histogram
• Frequency Curve
• Frequency Polygon
• Ogive

MERITS OF GRAPHICAL REPRESENTATION OF DATA


• Data can be presented in a more attractive and an appealing form.
• It provides a more lasting effect on the brain.
• Comparative analysis and interpretations may be effectively and easily made.
• Valuable statistics like median, mode, quartiles may be easily computed.
• Such representation may help in the proper estimation, evaluation and interpretation of the
characteristics of items and individuals.
• It carries a lot of communication power.
• Graphical representation helps in forecasting, as it indicates the trend of the data in the past.
• Acceptability
• Easy to remember
• Facilitates comparison
• Easy to understand
• Presents the data as a whole
• Can be displayed on a notice board
• Less errors and mistakes
• Saves considerable time
• Self explanatory
• Helpful even for less literate audience
DEMERITS OF GRAPHICAL REPRESENTATION OF DATA
• Lack of accuracy
• Subjective
• Misleading conclusions
• Presents only the approximate values
• Presents only the limited amount of information
• Can be confusing as the number of variables increases
• Not helpful in analyzing the data.
• Once the graph is constructed the identity of each observation is lost.
RULES FOR THE CONSTRUCTION OF GRAPH
• Every graph must have a suitable title.
• The graph must suit the size of the paper.
• Footnotes should be given at the bottom to illustrate the main points about the graph.
• Graph should be as simple as possible.
• In order to show many items in a graph, index for identification should be given.
• A graph should be neat and clean.
• Every graph should be given with a table to ensure whether the data has been presented
accurately or not.
• The test of a good graph depends on the ease with which the observer can interpret it. Thus
economy in cost and energy should be exercised in drawing the graph.

BAR DIAGRAM
➢ A bar diagram or a bar graph displays data visually and is sometimes called a bar chart.
➢ Data is displayed either horizontally or vertically.
➢ Displays all kinds of information.
➢ Helps to make generalization and conclusion more quickly and easily.
➢ Bar graph will have a label, axis scales, and bars.
TYPES OF BAR DIAGRAM
a) Simple bar diagrams. Horizontal or vertical bars with the same width drawn with their bases on the
same horizontal or vertical line with equal gaps in between and lengths proportional to the magnitude
of the observations.

b) Subdivided bar diagrams (Component Bar Chart). First a simple bar diagram is drawn with the lengths
of the bars proportional to the totals of the component parts and is subdivided into parts of length
proportional to the component magnitude and each part is given a different color or shading. Used
when the observations have different components and when a comparison of the component parts is
needed.

c) Percentage bar diagrams. This is the modification of the sub divided bar diagram. Here the component
parts are expressed as the percentages of the total and a component bar diagram is drawn with all bars
having equal length.

d) Multiple bar diagrams. Grouped bars are used to represent related sets of data. For
example, imports and exports of a country together are shown in multiple bar chart. Each bar in a
group is shaded or coloured differently for the sake of distinction. Used for representing two or more
interrelated data for facilitating comparison.

e) Deviation bar diagrams. Used to represent net quantities like net profit, balance payable, deficit, etc.
Base line is drawn in the middle of the paper horizontally and positive values are indicated by bars of
proportional length drawn above the horizontal line and negative by bars of proportional length drawn
below the horizontal line.
PIE DIAGRAM
• Pie diagrams or pie charts are circle drawn to represent statistical data. The data is represented
through the sections or portions of a circle. It brings out the relative importance of the various
components. For drawing a pie diagram, we construct a circle of any diameter and this is broken into
various segments. An angle of 360 degrees represents 100 per cent, and the angle for each
component is found by multiplying 360 degrees by the percentage of that component expressed as a fraction.
For example, a component that forms 25% of the total is given an angle of 360 × 25/100 = 90 degrees.

HISTOGRAM
A Histogram is a graphical display of a frequency distribution. The term Histogram was first coined by Karl
Pearson in 1895 as a term for a common form of graphic representation. A histogram is a graphic
representation of a continuous frequency distribution through special kind of vertical bar charts. There are no
gaps between the bars. The scale on the x axis must be continuous, the upper boundary of one class coinciding
with the lower boundary of next class. In the histogram, the class intervals should be in the exclusive form. If
the class intervals are in the inclusive form then it should be converted into exclusive form.
FREQUENCY POLYGON A frequency polygon is a graph of frequency distribution. It is an improvement over the
histogram. It is constructed either after drawing a histogram or without drawing a histogram. In the frequency
polygon, midpoints of all the class intervals are taken and frequencies corresponding to the midpoints are
marked. The points of frequencies are joined through straight lines to get frequency polygon.

FREQUENCY CURVE A continuous frequency distribution represented by a smoothed curve is known as


a frequency curve. The midpoints of the classes are taken along the x axis, and frequencies along the y axis. The
points thus plotted are joined by a free-hand smooth curve.
OGIVE - Ogives are graphs of cumulative frequency distribution drawn on natural scale to determine the values
of certain factors like median, quartiles, deciles etc. Class limits are shown along the x axis and the cumulative
frequencies along the y axis. There are two types of ogives:

LESS THAN OGIVE - In a less than ogive we start with the upper limits of the classes and go on adding the
frequencies. When these frequencies are plotted, we get a rising curve.

MORE THAN OGIVE - In a more than ogive we start with the lower limits of the classes and from the total
frequencies we subtract the frequency of each class. When these frequencies are plotted we get a declining
curve.
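The graphs described above can be drawn with the rough sketch below, assuming the third-party libraries matplotlib and numpy are installed; all data values are illustrative assumptions, not data from this note.
```python
# A rough sketch of a bar diagram, pie diagram, histogram and less-than ogive,
# assuming matplotlib and numpy are installed; the values are illustrative only.
import matplotlib.pyplot as plt
import numpy as np

subjects = ["Maths", "Science", "English"]
marks = [65, 80, 72]
scores = [12, 15, 22, 27, 31, 35, 35, 38, 41, 44, 47, 52, 55, 58, 61]

fig, axes = plt.subplots(2, 2, figsize=(8, 6))

axes[0, 0].bar(subjects, marks)                            # simple bar diagram
axes[0, 0].set_title("Bar diagram")

axes[0, 1].pie(marks, labels=subjects, autopct="%1.0f%%")  # pie diagram
axes[0, 1].set_title("Pie diagram")

axes[1, 0].hist(scores, bins=5, edgecolor="black")         # histogram (no gaps between bars)
axes[1, 0].set_title("Histogram")

counts, edges = np.histogram(scores, bins=5)               # same classes as the histogram
axes[1, 1].plot(edges[1:], np.cumsum(counts), marker="o")  # cumulative frequency against
axes[1, 1].set_title("Less-than ogive")                    # the upper class boundaries

plt.tight_layout()
plt.show()
```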
Measures of central tendency: For a given large set of data we usually find that there will be very
few persons with very high and very low scores. Most of the person’s scores would lie in between the
highest and the lowest scores. This tendency of the distribution to cluster around the middle value is
called central tendency and the typical score around which most of the scores cluster or the value
between the extreme scores that is shared by most of the persons is referred to as measure of central
tendency. It is a measurement of data that indicates where the middle of the information lies. Tate
(1955) defines a Measure of Central Tendency as “a sort of average or typical value of the items in the
series and its function is to summarize the series in terms of this average value.” There are
three common measures of central tendency including the Arithmetic mean or mean, the median,
and the mode.
Some of the common uses of a measure of central tendency are
 Each of them is a representative characteristic of the whole group. The performance of the
group as a whole can be described by a measure of central tendency, in its own way.
 They help in the comparison of two or more groups and samples in terms of their typical
performance.
 They indicate where the center of the distribution tends to be located.
 They tell us about the shape and nature of the distribution (for a normal distribution mean =
median = mode).
 They give us a concise picture of large data.
 They give a general picture of the whole group by use of the sample data alone.
 To find the mathematical relationship between different groups.

Characteristics of a good average


 Should be stable, reliable and an accurate measure.
 Should be a representative value of the distribution.
 Its meaning and definition is easily understood and easy to calculate.
 Should be capable of further algebraic treatment.
 Should not be affected by fluctuations in sampling.
 Should be used for further statistical analysis.
 Should depend on all values of the distribution.
 Should not be affected by extreme items.
Mean: The score located at the mathematical center of a distribution is called the mean or arithmetic
mean. Mean is the most common and useful measure of central tendency. It is simply the sum of the
numbers divided by the number of scores in a set of data. This is also known as average. It is
represented by the symbol M or X̄. It is calculated using the formula
M = ΣX / N   for ungrouped data, and
M = ΣfX / N  for grouped data,
where X is the individual score, f is the individual frequency, M is the mean, and N is the number of
observations (for grouped data N = Σf).
Formula for the short-cut (assumed mean) method: M = A + (Σfd / Σf) × i,
where A is the assumed mean, d = (X − A) / i, and i is the class width.
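A minimal Python sketch of these formulas; the ungrouped scores and the grouped class midpoints and frequencies are illustrative assumptions.
```python
# A minimal sketch of M = ΣX / N and M = ΣfX / N; the scores and the
# class midpoints/frequencies below are illustrative assumptions.
scores = [12, 15, 22, 27, 31]
mean_ungrouped = sum(scores) / len(scores)             # M = ΣX / N
print(mean_ungrouped)                                  # 21.4

# grouped data: pairs of (class midpoint X, frequency f)
classes = [(14.5, 3), (24.5, 5), (34.5, 7), (44.5, 4), (54.5, 1)]
N = sum(f for _, f in classes)                         # N = Σf
mean_grouped = sum(x * f for x, f in classes) / N      # M = ΣfX / N
print(mean_grouped)                                    # 32.0
```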
When mean is used
 Mean is the most stable, reliable and accurate measure of central tendency which is not affected
by fluctuations in sampling. So when such a measure is needed we compute mean.
 Used to summarize interval or ratio data in situations when the distribution is symmetrical and
unimodal.
 When we need to compute further statistics we use mean as mean is capable of further algebraic
treatment.
 Avoid the usage of mean when extreme items seriously affect the average score.
Advantages of Mean
 Most widely and commonly used measure of central tendency.
 Most stable and accurate measure of central tendency.
 Its meaning and definition is easily understood.
 It best conveys the idea of mean or average value.
 It can be located with any arrangement of the scores.
 It is derived using the exact scores of the items in the series.
 It gives equal weightage to every item in the series whether extreme or not
 It is capable of further algebraic treatment and is used for further statistical analysis.
 It is not affected much by fluctuations in sampling.
 Only one score can be mean.
 Total score, Combined mean can be obtained if the mean of constituent units is known.
Limitations of Mean
 It cannot be easily located from a graph.
 It is affected by the value of each item, so extreme values easily affect it.
 It is most suitable only when the distribution is normal and not skewed.
 It cannot be calculated for open ended classes.
 It cannot be used in the case of nominal and ordinal data
Median: Median is the number present in the middle when the numbers in a set of data are arranged in
ascending or descending order. If the number of scores in a data set is odd then median is the middle
value and if even, then the median is the mean of the two middle numbers. We can also say that median
is the point on the score scale or distribution below and above which half (50%) of the scores fall i.e.,
the score at the 50th percentile, (in the middle). It is the score that divides the distribution into two equal
parts. Note that the central item is not the median but the value of the central item is the median.
Median is also known as the middle quartile or the second quartile (Q2).
Computing Median for ungrouped data:
Arrange the items in ascending or descending order.
When N is odd, the value of the [(N+1)/2]th item will be the median.
When N is even Median is the average of the value of the (N/2) th item and value of the
[(N/2)+1]th item.
Computing Median for grouped data: First write the true class limits and write the cumulative
frequencies of the classes. Then locate the median class. This is done in the same manner as in the case
of the ungrouped data. Then Median for grouped data is calculated using the formula
Md = l + [((N/2) − F) / f] × i

Where
l - Exact lower limit of the Median class
F – Cumulative frequency up to or above the median class
f – Frequency of the median class
i – Class interval
N – Total frequency (N = Σf)
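A minimal Python sketch of the grouped-data median formula, assuming the classes are listed in ascending order; the class table is an illustrative assumption.
```python
# A minimal sketch of Md = l + [((N/2) - F) / f] * i; the class table
# (ascending classes) is an illustrative assumption.
classes = [(9.5, 3), (19.5, 5), (29.5, 7), (39.5, 4), (49.5, 1)]  # (exact lower limit l, frequency f)
i = 10                                                            # class width
N = sum(f for _, f in classes)

cum = 0
for l, f in classes:
    if cum + f >= N / 2:                 # the median class contains the (N/2)th case
        F = cum                          # cumulative frequency below the median class
        median = l + ((N / 2 - F) / f) * i
        break
    cum += f

print(median)                            # about 32.36 for this table
```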
When to use median
 Used to summarize ordinal or highly skewed interval or ratio scores
 When we have to get the exact mid-point of the distribution median is computed.
 When a series contains extreme measures median is a more representative measure than mean.
 In the case of open ended distributions computation of mean is impossible so median is more
reliable.
 When we have to calculate a measure of central tendency from a graph median is the most
suitable.
 Median is used specifically for qualities like health, honesty, intelligence etc. that cannot
be measured quantitatively.
Advantages of Median
 It is easily understood and determined and located with greater exactness than mode.
 Median is a better measure of central tendency than mode.
 Only one score can be the median.
 It is the most representative measure of central tendency when the distribution contains extreme
scores.
 It is useful in the case of open ended classes and skewed distributions.
 It will always be around where the most scores are.
 It can be calculated even if a value is missing if its relative position is known.
 It can be computed from a graph.
Limitations of Median
 It is a non-algebraic measure. We cannot calculate the total score or the combined median etc.
 It is a less dependable measure of central tendency than mean.
 It is not used in higher statistical analysis.
 It cannot be used in the case of nominal data.
Mode: Mode is the value that occurs most frequently in a set of data. It is typically useful in describing
the central value when the scores reflect a nominal scale of measurement. It is the point on the scale
that corresponds to the maximum frequency of the distribution. In any series it is the value of the item
which is most characteristic or common and is usually repeated the maximum number of times.
For ungrouped data mode is the value repeating most or with highest frequency.
For grouped data mode is calculated using the formula
Mo = l + [fs / (fp + fs)] × i   or   Mo = l + [(fm − fp) / (2fm − fp − fs)] × i
Where, l - Exact lower limit of the Model class (the class in which mode lies i.e., the class
corresponding to the highest frequency)
fm- Frequency of the modal class
fp – Frequency of the class preceding the modal class (above the modal class)
fs – Frequency of the class succeeding the modal class (below the modal class)
i – Class interval
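A rough Python sketch of the second (grouped-data) mode formula; the class table is an illustrative assumption.
```python
# A rough sketch of Mo = l + [(fm - fp) / (2*fm - fp - fs)] * i; the class
# table (ascending classes) is an illustrative assumption.
classes = [(9.5, 3), (19.5, 5), (29.5, 7), (39.5, 4), (49.5, 1)]  # (exact lower limit, frequency)
i = 10                                                            # class width

m = max(range(len(classes)), key=lambda k: classes[k][1])  # index of the modal class
l, fm = classes[m]
fp = classes[m - 1][1] if m > 0 else 0                     # frequency of the preceding class
fs = classes[m + 1][1] if m < len(classes) - 1 else 0      # frequency of the succeeding class

mode = l + ((fm - fp) / (2 * fm - fp - fs)) * i
print(mode)                                                # 33.5 for this table
```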
When to use mode
 In nominal data – Since we cannot use mean or median
 Also in ordinal, interval or ratio data, along with mean and median
 When a quick and approximate measure is to be determined, we compute mode.
 Mode is a very useful measure in the manufacturing industry as the most sold item i.e., modal
value is given more priority.
 When a histogram or frequency polygon is given, the measure that can be easily computed is
mode.
 When we wish to know the most typical case.
Advantages of Mode
 It is easily understood even by a common man.
 Mode can easily be computed merely by looking at the data. All that one has to do is to find
out the score which is repeated the maximum number of times.
 It is an average widely used in everyday life. When we speak of average we generally refer to
mode e.g., average shoe size refers to that which is most sold.
 It is useful in situations in which it is desirable to eliminate extreme cases.
 It encourages attention to bimodal and multimodal distribution.
 It can be computed from a graph.

Limitations of Mode
 It is the most unstable measure of central tendency.
 It is not at all reliable in small samples. E.g., if the modal salary of 50 workers is Rs.500 per
month but 45 out of them get different salaries, the mode is very unreal and gives a false
picture.
 It is incapable of further algebraic treatment
 A distribution can have more than one mode.
 It is not used in higher statistical analysis.

Relationship between Mean, Median and Mode


Mode = 3Median – 2Mean
Measures of variability or dispersion
There is a tendency for the scores to be dispersed, scattered or show variability around the average.
Thus the tendency of the attributes of a group to deviate from the average or central value is known as
dispersion or variability and the expected range of dispersion or variation above or below the average
or central value for a given data is called the measure of variability.
A measure of variability is a single value that gives us the degree of variability or dispersion i.e., the
scatter or spread of the individual scores throughout the distribution or given data.
When comparing two or more groups the measures of central tendency merely gives us an idea of the
general characteristics of the groups as a whole. They do not show how the individual scores are spread
out as a whole. Only with the value of the measure of central tendency we are unable to know how the
scores are distributed in the group.
E.g., Two group’s scores in a test are such as the following
Group A: 40,38,36,27,20,29,28,3,5,4 and Group B: 19,20,22,18,21,23,17, 20,22, 18
The mean value in both cases is 20, so far as mean is considered there is no difference in the
performance of the two groups. But by the observation of the scores of the two groups we find that the
scores of Group A have a wide range while of Group B have a small range. The scores in the latter
group are less variable than those in the former. So the performance of Group A and Group B cannot be
considered as the same.
Therefore measures of central tendency alone provide insufficient base for the comparison of two or
more groups. For a better comparison of the groups we need to pay attention to the variability or
dispersion of the set of scores. Here lies the importance of measures of
variability or dispersion.

There are chiefly four measures of variability or dispersion


1. Range (R)
2. Quartile Deviation (Q)
3. Average Deviation (AD)
4. Standard Deviation (SD)

Range (R)
Range is the simplest measure of variability or dispersion. It is calculated by subtracting the lowest
score from the highest score in the series or data. It takes only extreme scores into consideration and
ignores the variation of individual items.
Range = Highest value – Lowest value
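A one-line Python illustration, reusing the Group A scores from the earlier example.
```python
# A one-line illustration of the range; the scores are Group A from the earlier example.
scores = [40, 38, 36, 27, 20, 29, 28, 3, 5, 4]
print(max(scores) - min(scores))   # Range = Highest value - Lowest value = 37
```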
The computation of range is recommended when
 We need to know simply the highest and lowest scores of the total spread.
 The group or distribution is too small
 We want to know the variability within the group in very little time.
 We require speed and ease in the computation of a measure of variability.
 The distribution of the scores of the group is such that the computation of other measure of
variability is not much useful.
Merits of range
 It is very easily determined and understood.
 It is very useful as a supplementary measure. In addition to other measures it helps in the
description of data.
 It is a moderately reliable measure in large unimodal samples.
 It is a very simple measure of variability.
Demerits of Range
 It is not a representative measure of variability.
 It is based on only two extreme scores and tells nothing about the variation among other
intermediate scores.

Quartile Deviation (Q)


Quartile deviation is half of the inter-quartile range and is also known as the semi inter-quartile range. It
is computed using the formula
Quartile deviation, Q = (Q3 − Q1) / 2, where Q1 is the 1st Quartile and Q3 is the 3rd Quartile.

To find Q1 and Q3:


For ungrouped data Q1 and Q3 is found by first arranging the items in Ascending or Descending order
and then value of the N/4th and 3N/4th item gives Q1 and Q3.
For grouped data first write the true class limits and write the cumulative frequencies of the classes.
Then locate the classes corresponding to Q1 and Q3. This is done by finding the classes corresponding to the
N/4th and 3N/4th items of the distribution. Then Q1 and Q3 are calculated using the formulas
Q1 = l1 + [((N/4) − F1) / f1] × i   and   Q3 = l3 + [((3N/4) − F3) / f3] × i

Where
l1 - Exact lower limit of the Q1 class, l3 - Exact lower limit of the Q3 class
F1 – Cumulative frequency upto or above the Q1 class
F3 – Cumulative frequency upto or above the Q3 class
f1 – Frequency of the Q1 class, f3 – Frequency of the Q3 class
i – Class interval, N – Total frequency (N = Σf)
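A minimal Python sketch of the quartile deviation for ungrouped data using the standard-library statistics module; the scores (Group B from the earlier example) are reused only for illustration, and the library's quartile method may differ slightly from the N/4th-item rule given above.
```python
# A minimal sketch of Q = (Q3 - Q1) / 2 for ungrouped data; the library's
# default quartile method is used, which may differ slightly from the
# N/4th-item rule described in the text.
import statistics

scores = [19, 20, 22, 18, 21, 23, 17, 20, 22, 18]   # Group B from the earlier example
q1, _, q3 = statistics.quantiles(scores, n=4)        # Q1, median, Q3

quartile_deviation = (q3 - q1) / 2
print(q1, q3, quartile_deviation)
```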
The use of this measure is recommended when
 The distribution is skewed, containing a few very extreme scores.
 The measure of tendency is available in the form of median.
 The distribution is truncated (irregular) or has some indeterminate end values.
 We have to determine the concentration around the middle 50 per cent of the cases
 The various percentiles and quartiles have been already computed.
Merits
 It is more representative than the range as it is not dependent on the extreme values.
 It is very easy to compute, to understand and to interpret.
 It is the most useful measure of variability in which median is used.
 It is applicable even in that frequency distribution which have unequal class-intervals.
 It is quite useful in small samples and when there are extreme measures in the distribution.
Demerits
 25% of the scores fall below Q1 and 25% above Q3. Therefore Q1 and Q3 are measures of only
50% of the scores.
 It is a non-algebraic property and so less reliable than SD.

Average Deviation (AD)


Average Deviation (AD) is the mean of the deviations of the scores in the series taken from their mean
(occasionally from median or mode). It is a simple measure that takes into account the fluctuation or
variation of all the items in the series. It is calculated using the formula
AD = Σ|X − M| / N   for ungrouped data, and   AD = Σf|X − M| / N   for grouped data,
Where, X is the individual score, f is the individual frequency
M is the mean,
N is the number of observations or 𝑁 = 𝑓
X - M signifies that in the deviation values we ignore the algebraic signs +ve or –ve. The ignoring
of algebraic signs constitutes a major weak point for this type of measure of variability.
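A minimal Python sketch of the average deviation for ungrouped data; the scores are an illustrative assumption.
```python
# A minimal sketch of AD = Σ|X - M| / N for ungrouped data; the scores
# are an illustrative assumption.
scores = [19, 20, 22, 18, 21, 23, 17, 20, 22, 18]
N = len(scores)
M = sum(scores) / N                                 # mean
AD = sum(abs(x - M) for x in scores) / N            # mean of the absolute deviations
print(M, AD)                                        # 20.0 and 1.6
```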
This measure is used when
 The deviations of the scores are normal or near normal.
 The SD is unduly influenced by the presence of extreme deviations.
 It is needed to weigh all deviations from the mean according to their size
 A less reliable measure of variability can be employed.
Merits
 It is a very simple measure of variability and is easily understood.
 It is also meaningful, even to common man.
 It is based on all items and takes into account the fluctuations of all items.
 It is reliable even for a small sample.

Demerits
 As it is based on all items, it may be inflated or depressed by a single extreme value which is very
high or very low.
 As the signs are discarded and only absolute values are taken it is not an algebraic measure and
so cannot be reliably used in mathematical operations

Standard deviation (SD)


Standard deviation of a set of scores is defined as the square root of the average of the squares of the
deviations of each score from the mean. It is often called the root-mean-square deviation and is denoted
by the Greek letter sigma (σ). The square of the SD is known as the variance of the distribution. SD is
regarded as the most stable and reliable measure of variability, as it employs the mean for its
computation and does not ignore algebraic signs.
The use of SD is recommended when
 We need a most reliable measure of variability.
 There is a need of computation of further statistics like the correlation coefficients, significance
of difference between means and the like.
 Measure of central tendency is available in the form of mean
 The distribution is normal or near to normal.
Merits
 It is the most reliable measure of variability and is useful for further statistical operations and in
making inferences.
 It is most useful in those cases when mean has been taken as a measure of central tendency.
 The greater the value of SD, the more the scores scatter from the mean. SD can be shown on
distribution curves: a distance of 1 SD from the mean includes 68.3% of cases, 2 SD 95.4% of cases, and so on.
 It is an algebraic measure and does not suffer from the mathematical fallacy of the MD in which
signs are disregarded.
 It can be reliably used most cases.
Demerits
 SD is not easily understood.
 It is sensitive to extreme values.
It is calculated using the formula
SD = √(Σ(X − M)² / N)   or   SD = √(ΣX²/N − (ΣX/N)²)   for ungrouped data, and
SD = √(Σf(X − M)² / N)   or   SD = √(ΣfX²/N − (ΣfX/N)²)   for grouped data,
where X is the individual score, f is the individual frequency, M is the mean, and N is the number of
observations (N = Σf).
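A minimal Python sketch of the standard deviation for ungrouped data; the scores are an illustrative assumption, and the standard-library statistics.pstdev is used only as a cross-check.
```python
# A minimal sketch of SD = sqrt(Σ(X - M)² / N) for ungrouped data; the
# scores are an illustrative assumption.
import math
import statistics

scores = [19, 20, 22, 18, 21, 23, 17, 20, 22, 18]
N = len(scores)
M = sum(scores) / N

variance = sum((x - M) ** 2 for x in scores) / N    # variance = SD²
SD = math.sqrt(variance)
print(M, variance, SD)                              # 20.0, 3.6, about 1.9

print(statistics.pstdev(scores))                    # cross-check: population SD (divisor N)
```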

CORRELATION
In measures of central Tendency and Dispersion, our studies had been confined to one variable only. But we
often come across problems involving two or more variables, where items of one variable bears some relation
with the item of the other variable or influence the values of the other variable. For example rainfall and
agricultural yield, height and weight, age of husband and wife. The term correlation is used to indicate the
relationship between two such variables in which with changes in the values of one variable, the values of the
other variable also change. Thus, if with a change in the price of a commodity, the demand for that
commodity changes, we would say that the price and demand are related with each other. “A connection or
relationship between two or more things that is not caused by chance. “
Thus correlation analysis refers to the technique used in measuring the closeness of the relationship between
the variables.
L. R. Connor: “If two or more quantities vary in sympathy, so that movements in one tend to be accompanied by
corresponding movements in the other, then they are said to be correlated.”
A. M. Tuttle defined correlation as “an analysis of the co-variation of two or more variables”.

Importance of correlation
 Most of the variables show some kind of relationship. For instance, there is relationship between price
and supply, income and expenditure etc... With the help of correlation analysis we can measure in one
figure the degree of relationship.
 It helps to ascertain the traits and capabilities of pupils while giving guidance or counselling.
 Once we know variables are closely related, we can estimate the value of one variable given the value
of another. This is known with the help of regression.
 Correlation analysis contributes to the understanding of economic behaviour, aids in locating the
critically important variable on which others depend.
 Progressive development in the methods of science and philosophy has been characterized by an increase
in the knowledge of relationships.
 The effect of correlation is to reduce the range of uncertainty. A prediction based on correlation
analysis is likely to be more reliable and nearer to reality.
 Co-efficient of correlation is vital for all kinds of research work
 It helps in establishing validity or reliability of an evaluation tool.
TYPES OF CORRELATION
Simple, partial and multiple correlation
The distinction between simple, partial and multiple correlation is based on the number of variables studied.
 When the relationship between any two variables only is studied. It is a case of SIMPLE CORRELATION.
 When the relationship between any two out of three or more variables is studied ignoring the effect of
the other related variables, it is a case of PARTIAL CORRELATION.
 When the relationship among three or more variables is studied simultaneously, it is a case of MULTIPLE
CORRELATION.

Positive and negative correlation - A correlation may be positive or negative depending upon the direction of
change of the variables.
POSITIVE CORRELATION is one where values of both the variables under study move in the same direction. The
data of positive correlation when plotted on a graph paper give an upward curve.

Increase in one variable → Increase in the other variable.


Decrease in one variable → Decrease in the other variable.
Example : (1) Demand and Production.
(2) Diameter and circumference of a circle.
NEGATIVE CORRELATION is one where both the variables under study move in the opposite direction. The
value of negative correlation if plotted on a graph paper give a downward curve.

Decrease in one variable → Increase in the other variable.


Increase in one variable → Decrease in the other variable.
Example : (1) Price and Demand.
(2) Speed and time.
PERFECT AND IMPERFECT CORRELATION
 When the values of both variables under study change at a constant ratio irrespective of the direction,
it is a case of PERFECT CORRELATION (ideal correlation). The graph plotted would be a perfect straight
line in the upward direction for perfect positive correlation and a perfect straight line in the downward
direction for perfect negative correlation.
 When the values of the variables under study change at varying ratios, it is a case of IMPERFECT
CORRELATION.
 When correlations are measured mathematically, the value of a perfect correlation will be either 1 or
-1 and the value of an imperfect correlation will lie between -1 and 1.

(Figures: perfect positive correlation and perfect negative correlation)


SCATTER DIAGRAM
Let (x1,y1), (x2,y2), …………. (xn, yn) be the set of observations obtained in a study of a population in which two
characteristics are considered. A diagram obtained by plotting points with co-ordinates (x1,y1), (x2,y2), ..…. (xn, yn)
is called a scatter diagram. It consists of ‘n’ points scattered over the x-y plane.

(Scatter diagrams: positive correlation, zero correlation, negative correlation)


LINEAR CORRELATION
When the variations in the values of the two variables are in a constant ratio, the correlation is said to be linear.
NON-LINEAR CORRELATION
In some cases, the ratio of change in the two variables may not be constant and hence the correlation is not linear.
Such a correlation is called curvilinear or non-linear.
COEFFICIENT OF CORRELATION The ratio indicating the degree of relationship between a pair of variables is
called the coefficient of correlation. A correlation coefficient is a statistical measure of the degree to which changes
in the value of one variable predict changes in the value of another. The statistical tool with the help of which
the relationship between two or more variables is studied is called a measure of correlation. The
measure of correlation, called the correlation co-efficient, summarizes in one figure the direction and extent of
correlation.
The coefficient of correlation is calculated to study the extent or degree of correlation between two variables. It is a
numerical index which expresses the degree of relationship between two variables. It is usually represented by the letter ‘r’.
 Correlation can be calculated as a number called the correlation coefficient. The correlation coefficient can
help identify what type of relationship the data sets have and how strong or weak the relationship is.
 Coefficient of correlation varies from -1 to 1.
 A positive correlation coefficient varies from zero to 1 and a negative correlation coefficient varies from zero to
-1.
 Zero correlation indicates no consistent relationship, and it is written as “0”.

USES OF COEFFICIENT OF CORRELATION


 It may be used in determining the reliability of a test.
 One of the main purpose of use of correlation is prediction. Predictions are possible with the use of
regression equations.
 Partial and multiple correlation may be helpful in the analysis of relationship between various factors.
 Correlation is also useful in the calculation of validity.
ADVANTAGES OF COEFFICIENT OF CORRELATION
 Easy to work out and easy to interpret.
 It not only gives an idea about the co-variation of the two series but also indicates the direction of
relationship.
 It gives a precise and quantitative figure which can be interpreted meaningfully.
 It can answer the validity of arguments for or against a statement.
DISADVANTAGES OF COEFFICIENT OF CORRELATION
 The value of correlation coefficient is unduly affected by extreme item.
 The coefficient of correlation may give a misleading picture of the extent of the relationship between
the variables if the data are not reasonably homogeneous.
 It is tedious to calculate.
 It assumes a linear relationship between the variables even though it may not be there.

Spearman’s rank correlation
ρ = 1 − (6ΣD²) / (N(N² − 1))
where D represents the difference in ranks of each pair of observations and N represents the number of observations.
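A minimal Python sketch of Spearman's formula, assuming ranks have already been assigned and there are no tied ranks; the two rank lists are illustrative assumptions.
```python
# A minimal sketch of rho = 1 - 6ΣD² / (N(N² - 1)), assuming no tied ranks;
# the two rank lists are illustrative assumptions.
ranks_x = [1, 2, 3, 4, 5, 6]     # ranks of six students in test X
ranks_y = [2, 1, 4, 3, 6, 5]     # ranks of the same students in test Y

N = len(ranks_x)
D_squared = sum((rx - ry) ** 2 for rx, ry in zip(ranks_x, ranks_y))
rho = 1 - (6 * D_squared) / (N * (N ** 2 - 1))
print(rho)                       # about 0.83: a high positive rank correlation
```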

INTERPRETATION OF CORRELATION
By interpretation we intend to point out how high any given coefficient of correlation is. Any coefficient
of correlation that is not zero and that is also statistically significant denotes some degree of relationship
between the two variables. As regards the strength of relationship in between the two variables, the
coefficient of correlation does not give directly anything like percentage that is indicated by an ‘r’. The
coefficient of correlation is an index number, not a measurement on a linear scale of equal units. There is
no denying the fact that correlation enables us to find out the relationship between the two variables. The value of
r (the correlation) reflects the strength of the relationship between the variables. The strength of the relationship
between the two variables can be described roughly as under for various r’s:
 less than .20 slight, at most negligible relationship
 .20 to .40 low correlation
 .40 to .70 moderate correlation
 .70 to .90 high correlation
 .90 to 1.00 very high correlation.
 It may be noted that the relationship, i.e., correlation, may be either positive or negative,
but in no case can the value of the correlation coefficient r exceed plus or minus 1.
UNIT IV: Introduction to Research in Education
----------------------------------------------------------------------------------------------------------
Research- meaning, characteristics, functions of research ,characteristics of a good researcher, Teacher
as a researcher, need and importance of Educational research. • Hypothesis- meaning,
relevance/role/functions, forms of hypothesis-null form, prediction form, question form and statement
form • Types of research (based on purpose only)- basic/fundamental research, applied research and
action research. • Action research- Need, scope, characteristics, Steps involved:- Problem
identification, Defining and Analyzing the problem, Formulating and Testing action hypotheses and
Preparing the report - and Advantages and Limitations of action research, Integrating action research
practices -need and scope, Preparation of Action research reports. • Research Projects – Definition of a
project, Steps involved:- Initiation (Providing/creating situations), Selection/Choosing,
Planning/Designing, Execution, Evaluation and Recording/Reporting. Preparation of Project reports
------------------------------------------------------------------------------------------------------------------------------
---------------------
What is Research?
Research is a logical and systematic search for new and useful information on a particular topic.
In the well-known nursery rhyme
Twinkle Twinkle Little Star
How I Wonder What You Are?
the use of the words how and what essentially summarizes what research is.

Research is a vast multidimensional concept. It is an investigation of finding solutions to


scientific and social problems through objective and systematic analysis. It is a search for knowledge,
that is, a discovery of hidden truths. Here knowledge means information about matters. The information
might be collected from different sources like experience, human beings, books, journals, nature, etc. A
research can lead to new contributions to the existing knowledge. Only through research is it possible to
make progress in a field.

Word Meaning of Research


• Research is composed of two syllables, a prefix re and a verb search.
Re means again, anew, over again.
Search means to examine closely and carefully, to test and try, to probe.
• The two words form a noun to describe a careful and systematic study in some field of knowledge,
undertaken to establish facts or principles. Research is an organized and systematic way of finding
answers to questions.

Research has to be an active, diligent and systematic process of inquiry in order to discover, interpret or
revise facts, events, behaviours and theories. It seeks predictions of events, explanations, relationships
and theories for them. Applying the outcome of research for the refinement of knowledge in other
subjects, or in enhancing the quality of human life also becomes a kind of research and development.

The prime objectives of research are


(1) to discover new facts
(2) to verify and test important facts
(3) to analyse an event or process or phenomenon to identify the cause and effect relationship
(4) to develop new scientific tools, concepts and theories to solve and understand scientific and non-
scientific problems
(5) to find solutions to scientific, non-scientific and social problems and
(6) to overcome or solve the problems occurring in our everyday life.
The main characteristics of research are:
1. Research is highly purposive – Research is directed towards the solution of a significant problem
that demands a solution. It tries to find answers to questions or relation between variables.
2. Research emphasizes the development of generalizations, principles or theories that will be
helpful in predicting future occurrences. Research usually goes beyond specific situations to
general characteristics.
3. Research gathers new data from primary or firsthand sources and uses existing data for a new
purpose. Research does not simply restate or reorganize the known information but tries to get
authentic and in-depth data for deeper analysis.
4. Research is exact, systematic and accurate investigation. It demands accurate measurement so
that the researcher uses precise and valid means of data collection and analysis.
5. Research is based upon observable experience or empirical evidence. It rejects revelation and
dogma as methods of establishing knowledge and accepts only verifiable data as evidence.
Mostly the data is expressed in quantitative terms.
6. Research strives to be objective and logical. It applies every possible test to validate the
procedures employed and data collected. It eliminates personal bias and does not try to prove
one’s personal convictions.
7. Research is patient and unhurried. The investigator must be able to take painstaking effort,
accept disappointment and pursue one's research.
8. Research needs courage. The researcher should not be afraid of the unpleasant consequences of
his findings or that it would not be accepted by others. He should speak and record truth no
matter how unwelcome his conclusions may be.
9. Research is carefully recorded and reported. Each and every term is recorded, procedures
described, results objectively recorded, limitations recognized and conclusions presented with
scholarly cautions and restraint.
10. Research maintains rigorous standards. Each step in research should be according to certain
norms and specifications and free from loopholes. It is characterized by carefully designed
procedures and rigorous analysis. It is a job of great responsibility.
11. Research requires expertise. The researcher tries to secure expertise before undertaking an
investigation. He should be thoroughly grounded in the theoretical background, related
literature, methods of research and data collection, and statistical analysis.
12. Originality is an important characteristic of good research. Replication of research is done to
confirm or to raise questions about the conclusions of previous studies. But researching into new
untouched areas is important.
Functions of Research
Research is important both in scientific and non-scientific fields. In our life new problems,
events, phenomena and processes occur every day. Practically implementable solutions and suggestions
are required for tackling new problems that arise. Scientists have to undertake research on them and find
their causes, solutions, explanations and applications. Precisely, research assists us to understand nature
and natural phenomena. Some important avenues of research are:
(1) A research problem refers to a difficulty which a researcher or a scientific community or an industry
or a government organization or a society experiences. It may be a theoretical or a practical situation. It
calls for a thorough understanding and possible solution. It obtains knowledge for practical purpose like
solving problems on population explosion.
(2) Research on existing theories and concepts help us identify the range and applications of them.
Develops and evaluate concepts, practices and theories and it evaluates methods that test concepts.
(3) It is the fountain of knowledge and provides guidelines for solving problems.
(4) Research provides basis for many government policies. For example, research on the needs and
desires of the people and on the availability of revenues to meet the needs helps a government to prepare
a budget.
(5) It is important in industry and business for higher gain and productivity and to improve the quality of
products.
(6) Mathematical and logical research on business and industry optimizes the problems in them.
(7) It leads to the identification and characterization of new materials, new living things, new stars, etc.
(8) Only through research inventions can be made; for example, new and novel phenomena and
processes such as superconductivity and cloning have been discovered only through research.
(9) Social research helps find answers to social problems. They explain social phenomena and seek
solution to social problems.
(10) Research leads to a new style of life and makes it delightful and glorious.
(11) It gathers information on subjects or phenomena about which people lack or have little knowledge.
The Attributes/Characteristics of a Good Researcher
Any researcher should be motivated by a noble goal. The attributes of a good research scholar may be
summarized as:
• Self-confidence & Commitment
• Dedication & Determination
• Concentration & patience
• Analytical mind and critical way of thinking
• Scientific discipline & systematic
• Global outlook and open mind
• Innovative approach
• Originality/creativity
• Intellectual curiosity
• Freedom from the obsessions of clock and calendar
• Flexibility
• Intelligence
• Passion for knowledge
• Questioning attitude
• Insight & Spirit of enquiry
• Precision and accuracy
• Resilience to withstand temporary setbacks
• Persistence& Patience
• Freedom from personal prejudices, beliefs, dogmas etc.
• Social skills & communication skills
• Presentation skills & Writing skills
• Keen observation & listening skills
Furthermore, a modern researcher must be resourceful and inventive in order to transform the scientific
queries and hypotheses into a realisable objective. Moreover, he has to acquire an excellent knowledge
of the various methods of research, measurement tools and techniques of the relevant field. When he
interprets and presents results, he/she must be precise and honest. Misinterpretation or even falsification
of data will lead to deviation of future research and invalidate the work of future researchers. Although
there is no need to be a statistician, he has to be aware of basic mathematical and statistical principles in
order to be able to appreciate and interpret results, up to a certain level, and to study critically the
findings of other works. He should have expertise knowledge about research.
Teacher as a Researcher
Teacher research expands the teacher's role as an inquirer about teaching and learning through
systematic classroom research. Teachers shall only teach better if they learn from and research their own actions. It
is the teacher who, in the end, will change the classroom practices and the school by exploring and
understanding them.
Teacher-researchers have access to first hand information about students, schools etc. They become
those practitioners who attempt to better understand their practice, and its impact on their students, by
researching the relationship between teaching and learning in their real world of work.
Research makes a teacher, a reflective teacher who thinks in action as well after it is over. Teacher-
researchers raise questions about what they think and observe about their teaching and their students'
learning. They collect student work in order to evaluate performance, but they also see student work as
data to analyze in order to examine the teaching and learning that produced it. Most importantly, by
engaging in reflective practice, the Teacher Researcher improves the lives of students by always seeking
to discover better, more effective ways of implementing teaching/learning.
A teacher researcher approaches learner, learning, teaching and issues from different viewpoints.
This makes a teacher constantly change his roles.
Teachers are subjective insiders involved in classroom instruction as they go about their daily routines
of instructing students, grading papers, taking attendance, evaluating their performance as well as
looking at the curriculum. Traditional educational researchers who develop questions and design studies
around those questions and conduct research within the schools are considered objective outside
observers of classroom interaction. When teachers become teacher-researchers, the "traditional
descriptions of both teachers and researchers change".
The goal of Teacher Research is to put "Best Practices" about teaching/learning into actual practice
in your classroom. Research conducted by teachers that follows a process of examining existing
practices, implementing new practices, and evaluating the results, leads to an improvement cycle that
benefits both students and teachers.
Teacher Research empowers teachers to make a positive difference in terms of classroom practice; it
enables us to provide relevant information about teaching and learning in actual classrooms.
Real classrooms have to be teachers’ laboratories. The teacher as a practitioner has to diagnose before
he or she prescribes treatment or intervene and then vary the prescription after seeing the effect.
Research helps a teacher in this.
Teachers are accountable to all the stakeholders for the various decisions (programs, policies,
practices). These decisions should be well informed or data driven, for which research is needed.
A teacher researcher would have continuing professional development. A long lasting quest for
updated information about the approaches and trends to deal responsibly with the issues raised as part of
their practice.
For teachers as practitioners, the research act is their professional obligation. The one who… creates
professional knowledge; is a lifelong learner; and continually improves on the quality of Teaching and
Learning. It is said that "teachers often leave a mark on their students, but they seldom leave a mark on
their profession". Through research they would be able to do both.

What Do Teacher Researchers Do?


• Develop questions based on their own curiosity about their students' learning and their teaching
• Investigate their questions with their students systematically documenting what happens
• Collect and analyze data from their classes including their own observations and reflections
• Examine their assumptions and beliefs
• Articulate their theories
• Discuss their research with their colleagues for support as "critical friends" to validate their findings
and interpretations of their data
• Present findings to others
• Talk to their students
• Give presentations (talk to teacher in room next door, go to conferences)
• Write about their research (school-wide publication, national) Participate in teacher research web
sites, online forums, and e-mail communications

MEANING OF EDUCATIONAL RESEARCH


Research is no longer confined to the science laboratory. It is carried out in many fields, including education.
Educational Research is a systematic effort to gain a better understanding of the educational process.
Educational Research tries to understand, explain and to some degree predict and control the human
behaviour in educational situations. It aims to make contributions towards the solutions of problems in
the field of education by the use of scientific and systematic method of research.
According to Mouly, "Educational Research is the systematic application of the scientific method for
solving educational problems."
Travers thinks, "Educational Research is the activity for developing a science of behaviour in
educational situations. It allows the educator to achieve his goals effectively."
According to Whitney, "Educational Research aims at finding out solutions to educational problems by
using scientific philosophical methods."
Thus, Educational Research seeks to solve educational problems in a systematic and scientific manner; it tries
to understand, explain, predict and control human behaviour.

Educational Research Characterizes as follows : -


• It is highly purposeful.
• It deals with educational problems regarding students and teachers as well.
• It is precise, objective, scientific and systematic process of investigation.
• It attempts to organize data quantitatively and qualitatively to arrive at statistical inferences.
• It discovers new facts in new perspective. i. e. It generates new knowledge.
• It is based on some philosophic theory.
• It depends on the researchers ability, ingenuity and experience for its interpretation and
conclusions.
• It needs interdisciplinary approach for solving educational problem.
• It demands subjective interpretation and deductive reasoning in some cases.
• It uses classrooms, schools, colleges department of education as the laboratory for conducting
researches.

Functions of Educational Research


1. It obtains scientific knowledge about educational problems. It also helps in obtaining specific
knowledge about the subjects involved in the study.
2. In action research, the researchers are teachers, curriculum workers, principals, supervisors or others
whose main task is to help, provide good learning experiences for pupils.
3. It enables a person to realise his purposes more effectively. For example, a teacher conducts his
teaching more effectively, and an administrator in the education department performs his
actions so as to improve his administrative behaviour.
4. Action research is a procedure which tries to keep problem solving in close contact with reality at
every stage.
5. In the educational system it serves as a conduit for the progress of the techniques of teaching.
6. It strengthens and emphasizes the work of the teacher.
7. It has a great utility of creating new interest and new confidence in the ability of the individual
teacher.
8. Action research provides practical utility. For class-room teacher, he applies his own observations
into his class-room practices to make the observed problems solved. Minor problems in the classroom
can be solved by applying the teachers' intelligence.
9. Action research brings changes in the teachers. It makes them co-operative and active in facing the
situation easily. It also happens to bring about changes in the behaviour, attitude and teaching
performance.

10. Planning is the primary criterion in educational research as well as action research. To go through the
problems much insight is needed. For solving all these problems the teacher goes on reading references,
literature and also research techniques. So theoretical learning becomes fruitful when it is practically
applied in the proper situation to solve problems in action research.
SCOPE OF EDUCATIONAL RESEARCH:
Education changes with the gradual development which occurs with respect to knowledge and
technology, so Educational Research needs to extend its horizon. Educational Research can include
various areas like educational psychology, educational technology, educational management, legal
education, environmental education, curriculum, methods and techniques of teaching and learning etc. It
relates education to various other subjects. It can be interdisciplinary.
Results of democratic education are slow and sometimes defective. There are numerous problems in the field of education, so Educational Research is needed to solve them. It discovers facts and relationships in order to make the educational process more effective. Research leads to the improvement
of education - its practices and policies. Educational research is also important for the government in
making decisions and policies to provide good education to its citizens.
Being a scientific study of the educational process, it involves:
- individuals (students, teachers, educational managers, parents)
- institutions (schools, colleges, research institutes)
It covers areas from formal education, informal education, non-formal education, adult education, and distance and continuing education as well.
It includes processes like investigation, planning (design), collecting data, processing of data, their analysis, interpretation and drawing inferences. Educational Research can use various designs, methods and techniques. It is mainly applied in nature but can be fundamental also.
The Hypothesis
Once a problem is selected for research the next important step is formulation of Hypothesis. A
Hypothesis is a tentative - assumption about relations between variables or explanation of the research
problem or guess about the research outcome. Etymologically hypothesis is made up of two words,
“hypo” (less than) and “thesis”, which together mean less than or less certain than a thesis. It is the pre-assumptive
statement of a proposition or a reasonable guess, based upon available evidence which the researcher
seeks to prove through the study.
Hypothesis is precisely defined as a tentative or working proposition suggested as a solution to a problem, while a theory is the final hypothesis that is defensibly supported by all the evidence. In deductive research, a hypothesis is a focused statement which predicts an answer to your research
question. It is based on the findings of previous research (gained from your review of the literature) and
perhaps your previous experience with the subject. The ultimate objective of deductive research is to
decide whether to accept or reject the hypothesis as stated. When formulating research methods
(subjects, data collection instruments, etc.), wise researchers are guided by their hypothesis. In this way,
the hypothesis gives direction and focus to the research. In heuristic research, a hypothesis is not
necessary. This type of research employs a "discovery approach." In spite of the fact that this type of
research does not use a formal hypothesis, focus and structure are still critical. If the research question is
too general, the search to find an answer to it may be futile or fruitless. Therefore, after reviewing the
relevant literature, the researcher may arrive at a focused research question.
Researchers do not carry out work without any aim or expectation. Research is not merely doing something and presenting what is done. Every research problem is undertaken aiming at certain
outcomes. That is, before starting actual work such as performing an experiment or theoretical
calculation or numerical analysis, we expect certain outcomes from the study. The expectations form the
hypothesis. Hypotheses are scientifically reasonable predictions. They are often stated in terms of if-then
sentences in certain logical forms. A hypothesis should provide what we expect to find in the chosen
research problem. That is, the expected or proposed solutions based on available data and tentative
explanations constitute the hypothesis. Hypothesizing is done only after survey of relevant literature and
learning the present status of the field of research. It can be formulated based on previous research and
observation. To formulate a hypothesis the researcher should acquire enough knowledge in the topic of
research and a reasonably deep insight into the problem. In formulating a hypothesis, construct operational definitions of the variables in the research problem. A hypothesis is an intelligent guess or inspiration which is to be tested rigorously in the research work through appropriate methodology.
Testing of hypothesis leads to explanation of the associated phenomenon or event.
A hypothesis is an assumption about:
• The relationship between/among variables, or
• The level of influence of independent variables on the dependent variable, or
• The value of a population parameter.
DEFINITIONS OF HYPOTHESIS
1. A hypothesis may be precisely defined as a tentative proposition suggested as a solution to a problem
or as an explanation of some phenomenon. (Ary, Jacobs and Razavieh, 1984)
2. A hypothesis is a conjectural statement of the relation between two or more variables. (Kerlinger,
1956)
3. Hypothesis is a formal statement that presents the expected relationship between an independent and
dependent variable. (Creswell, 1994)
4. Hypothesis relates theory to observation and observation to theory. (Ary, Jacobs and Razavieh, 1984)
5. Hypotheses are relational propositions. (Kerlinger, 1956)
6. Hypothesis is a tentative explanation that accounts for a set of facts and can be tested by further
investigation.
On the basis of the above discussion, three major points can be identified:
(1) That a hypothesis is a necessary condition for successful research;

(2) That formulation of the hypothesis must be given considerable attention, to clarify its relation to
theory, remove vague or value judgemental terms, and specify the test to be applied, and

(3) That hypotheses may be formulated on different levels of abstraction.

A hypothesis should always:
• Explain what you expect to happen
• Be clear and understandable
• Be testable
• Be measurable
• Contain an independent and dependent variable

Relevance/Role/Functions// Importance of Hypotheses in Research


It provides a tentative explanation of phenomena and facilitates the extension of knowledge in an area. It
leads to explanation of facts which are to be tested and will later lead to generalizations that will extend
the existing knowledge in the area.
It provides the investigator with a relational statement that is directly testable in a research study. Some
relationships given in hypothesis are facts while some transcend the known facts to give explanations for
the known facts. It helps the researcher to relate logically known facts to intelligent guesses about
unknown conditions.
It provides direction to the research. Hypotheses represent specific objectives and thus keep the researcher always on track. It helps the researcher to decide what method to adopt, the type of data needed, etc.; it specifically tells him what he needs to do and find out in his study.
It provides a framework for reporting conclusions of the study. It is very convenient for the researcher to
test each hypothesis and state conclusions relevant to each separately. This would make the report more
meaningful and interesting for the reader.
• It could be considered as the working instrument of theory. Hypotheses can be deduced from theory
and from other hypotheses.
• It could be tested and shown to be probably supported or not supported, apart from one's own values and opinions. If the hypothesis is clearly stated, it can be easily tested.
Characteristics of a hypothesis
Hypothesis needs to be structured before the data-gathering and interpretation phase of the research: A
well-grounded hypothesis indicates that the researcher has sufficient knowledge in the area to undertake
the investigation. The hypothesis gives direction to the collection and interpretation of data. Finding the
data first and then formulating the hypothesis is like… throwing the dice first and then betting. A researcher may derive hypotheses inductively after making observations of behaviour, noticing trends or probable relationships. Background knowledge is essential for formulating a good hypothesis.
• It should be specific and precise.
• It must have explanatory power.
• It should specify the variables between which the relationship is to be established.
• It must state the expected relationship between variables.
• It must be empirically testable, within a reasonable time. It should be in agreement with the
observed facts.
• It should be consistent with the existing body of knowledge and do not conflict with the law of
nature which is known to be true.
• It should be stated as simply and concisely as possible.
• The statements in the hypothesis should not be contradictory.
• It should describe one issue only.
• It must accurately reflect the relevant sociological fact.
• It must not be in contradiction with approved relevant statements of other scientific disciplines.
• It must consider the experiences of other researchers.
• It should be related to the available techniques.
• It should be conceptually clear.
• It should be stated in scientific and research language and not in ordinary language.
• It should be limited in scope.
Hypotheses cannot be described as true or false. They can only be relevant or irrelevant to the research
topic.
Sources of Hypothesis: First, an exploratory research work might lead to the establishment of a hypothesis. Second, the environment is a source of hypotheses, because the environment portrays broad relationships across factors which form the basis for drawing an inference. Third, previous research studies are a great source of hypotheses; that is why a review of literature is made.
General Culture in which a Science Develops: A cultural pattern influences the thinking process of the people, and hypotheses may be formulated to test one or more of these ideas. Cultural values serve to direct research interests. For example, in Western society race is thought to be an important determinant of human behaviour.

Scientific Theory: A major source of hypotheses is theory. A theory binds a large body of facts by positing a consistent and lawful relationship among a set of general concepts representing those facts. Further generalizations are formed on the basis of the knowledge of the theory, and corollaries are drawn from theories. Assumptions of certain theories become a source of hypotheses in research; similarly, exceptions to certain theories are grounds for new hypotheses.

Analogies: Observation of a similarity between two phenomena may be a source of a hypothesis aimed at testing similarity in some other respect. Julian Huxley has pointed out that casual observation in nature or in the framework of another science may be a fertile source of hypotheses. The term analogy refers to parallelism. Though the human system and the animal system are different, there is some parallelism; that is why medicines are tried first on rats or monkeys and then used for human consumption. So hypotheses on animal behaviour can be based on proven human behaviour, and vice versa.

Consequences of Personal, Idiosyncratic Experience as Sources of Hypothesis: Not only culture, scientific theory and analogies provide sources of hypotheses; the way in which the individual reacts to each of these is also a factor in the statement of hypotheses. Certain facts are present, but not every one of us is able to observe them and formulate a hypothesis. Thus the emergence of a hypothesis is a creative act. To quote McGuigan, “to formulate a useful and valuable hypothesis, a scientist needs first sufficient experience in that area, and second the quality of the genius.”
Finally, for the research mind, the whole universe is a source of hypotheses: the searching mind fathoms new hypotheses even from seemingly insignificant events.

Forms of Hypothesis – A research hypothesis may take the declarative (statement) form, question form, null form or prediction form.
STATEMENT FORM – Here a hypothesis is stated as an affirmative statement expressing the relationship between two variables or predicting the outcome. It generally states the relationship between the variables concerned.
A hypothesis can be stated in two ways: directional hypothesis and non-directional hypothesis.
Directional hypothesis: a type of hypothesis that specifies the direction of the expected findings. Sometimes directional hypotheses are created to examine the relationship among variables rather than to compare groups. A directional hypothesis may read, “…is more than…” or “…will be less than…”. Example: “Children with high IQ will exhibit more anxiety than children with low IQ.”
Non-directional hypothesis: a type of hypothesis in which no definite direction of the expected findings is specified. The researcher may not know what can be predicted from the past literature. It may read, “…there is a difference between…”. Example: “There is a difference in the anxiety level of the children of high IQ and those of low IQ.”
QUESTION FORM - Here a hypothesis is stated as a question to which the researcher tries to find an
answer. E.g. Is there a significant difference in the achievement of boys and girls of this school?

NULL FORM – A null hypothesis is a statement that there is no actual relationship between variables (H0 or Hn). A null hypothesis may read, “There is no difference between…”. H0 states the opposite of what the experimenter would expect or predict. The final conclusion of the investigator will either retain the null hypothesis or reject it in favour of an alternative hypothesis. Not rejecting H0 does not really mean that H0 is true; there might simply not be enough evidence against H0. Example: “There is no significant difference in the anxiety level of children of high IQ and those of low IQ.” The null form is preferred by most experienced researchers, because this form of statement more readily defines the mathematical model to be utilized in the statistical test of the hypothesis. The no-difference statement assumes that the two groups will be tested and found to be equal. Since a null hypothesis can be tested statistically, it is also known as a statistical hypothesis. Declarative hypotheses are also called testing hypotheses when they are tested statistically by converting them into the null form.
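Since a null hypothesis can be tested statistically, a minimal sketch of such a test may help. The Python example below is not part of the original notes; the anxiety scores, group sizes and the 0.05 significance level are invented assumptions used only to illustrate how the null hypothesis stated above could be tested with an independent-samples t-test from SciPy.

```python
# Illustrative only: testing H0 "There is no significant difference in the
# anxiety level of children of high IQ and those of low IQ".
# All scores below are hypothetical.
from scipy import stats

high_iq_anxiety = [12, 15, 14, 10, 13, 16, 11, 14]   # hypothetical anxiety scores
low_iq_anxiety = [9, 11, 10, 8, 12, 10, 9, 11]       # hypothetical anxiety scores

# Two-tailed independent-samples t-test: matches the null / non-directional form.
t_stat, p_value = stats.ttest_ind(high_iq_anxiety, low_iq_anxiety)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Decision rule at the 0.05 level of significance.
if p_value < 0.05:
    print("Reject H0: the difference in anxiety levels is statistically significant.")
else:
    print("Retain H0: not enough evidence of a difference in anxiety levels.")

# For the directional form ("children with high IQ will exhibit MORE anxiety"),
# a one-tailed test could be used instead (SciPy 1.6+):
t_dir, p_dir = stats.ttest_ind(high_iq_anxiety, low_iq_anxiety, alternative="greater")
```

Retaining H0 in such a test only means the data do not provide enough evidence against it, exactly as noted above; it does not prove the null hypothesis true.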
PREDICTION FORM – A prediction hypothesis is a statement that suggests a potential outcome that the researcher may expect. It states what the expected outcome of the research would be, what the effect of the cause would be, or what the relationship between two variables would be. It comes from prior literature or studies. It is chosen because it allows the research worker to state the principles which he actually expects to emerge from an experiment. This type of hypothesis is more useful in action research studies.
TYPES OF RESEARCH : Research is broadly classified into three main classes:
1. Fundamental or basic research
2. Applied research
3. Action Research
Basic Research or Fundamental Research or Pure Research
Study or investigation of some natural phenomenon or relating to pure science is termed as basic
research. The main aim of basic research is the discovery of knowledge solely for the sake of
knowledge. Fundamental research is usually carried on in a laboratory or other sterile environment,
sometimes with animals.
This type of research, which has no immediate or planned application, may later result in further
research of an applied nature. It is not concerned with solving any practical problems of immediate
interest. But it is original or basic in character. Basic researches involve the development of theory. It is
not concerned with practical applicability and most closely resembles the laboratory conditions and
controls usually associated with scientific research. It is concerned with establishing general principles of learning. For example, much basic research has been conducted with animals to determine principles of reinforcement and their effect on learning; Skinner's experiments on rats and pigeons, for instance, gave the principles of operant conditioning and reinforcement. It is also called theoretical research.
Its major aim is to obtain and use the empirical data to formulate, expand or evaluate theory.
This type of research draws its pattern and spirit from the physical sciences. It represents a rigorous and
structured type of analysis. It employs careful sampling procedures in order to extend the findings
beyond the group or situations and thus develops theories by discovering proved generalizations or
principles. Fundamental research leads to a new theory, the knowledge of which has not been known or
reported earlier.
The outcomes of basic research form the basis for many applied research. Researchers working
on applied research have to make use of the outcomes of basic research and explore the utility of them.
Research on improving a theory or a method is also referred to as fundamental research. Modifying a theory to apply it to a general situation is also basic research.
Applied Research or Field Research
The second type of research, which aims to solve an immediate practical problem, is referred to as applied research. According to Travers, "applied research is undertaken to solve an immediate practical problem and the goal of adding to scientific knowledge is secondary." It is research performed in relation to actual problems and under the conditions in which they are found in practice.
In an applied research one solves certain problems employing well known and accepted theories
and principles. Most of the experimental research, case studies and inter-disciplinary research are
essentially applied research.
Applied research is also helpful for basic research: basic research may depend upon the findings of applied research to complete its theoretical formulations. Research whose outcome has immediate application is also termed applied research; such research is of practical use to current activity. For example, research on social problems has immediate use. Applied research is concerned with real-life problems and obviously has immediate potential applications.
Differences between basic and applied research:
• Basic research is driven purely by curiosity and a desire to expand our knowledge; it tends to enhance our understanding of the world around us. Applied research is used to answer a specific question that has direct applications to the world.
• Basic research seeks generalization; applied research studies individual or specific cases without the objective to generalize.
• Basic research aims at basic processes; applied research aims at any variable in which the desired difference is to be obtained.
• The aim of basic research is theoretical – it increases our general information; the aim of applied research is practical – to understand a real-world problem and solve it.
• Basic research attempts to explain why things happen; applied research tries to say how things can be changed.
• Basic research tries to get all the facts; applied research tries to correct the facts which are problematic.
• Basic research is experimental and usually conducted in a laboratory; applied research is conducted in the field, at the place where the theory is used.
• Basic research reports in the technical language of the topic; applied research reports in common language.

Action research:
Research is a form of disciplined enquiry leading to the generation of knowledge. Your approach to
research may vary according to the context of your study, your beliefs, the strategies you employ and the
methods you use. Action research is a specific method of conducting research and interpreting findings
by professionals and practitioners with the ultimate aim of improving practice. Action research is a type
of applied research. Action research supports practitioners to seek ways in which they can provide good
quality education by transforming the quality of teaching and enhancing learning. Action research can be described as: any research into practice undertaken by those involved in that practice, with an aim to change and improve it.
Action research is the term which describes the integration of action (implementing a plan) with
research (developing an understanding of the effectiveness of this implementation). The original concept
is sometimes attributed to Kurt Lewin (1890–1947). Historically, the term ‘action research’ has been
long associated with the work of Kurt Lewin, who viewed this research methodology as cyclical,
dynamic, and collaborative in nature. Action research is about both ‘action’ and ‘research’ and the links
between the two. It is quite possible to take action without research or to do research without taking
action, but the unique combination of the two is what distinguishes action research from other forms of
enquiry. Through repeated cycles of planning, observing, and reflecting, individuals and groups engaged
in action research can implement changes required for social improvement.
Research designed to uncover effective ways of dealing with problems in the real world can be referred
to as action research. This kind of research is not confined to a particular methodology or paradigm. For
example, a study of the effectiveness of training teenage parents to care for their infants. In education
Action research is defined as any systematic inquiry conducted by teachers, administrators, counselors,
or others with a vested interest in the teaching and learning process or environment for the purpose of
gathering information about how their particular schools operate, how they teach, and how their students
learn. More important, action research is characterized as research that is done by teachers for
themselves. It is truly a systematic inquiry into one’s own practice. Action research allows teachers to
study their own classrooms—for example, their own instructional methods, their own students, and their
own assessments—in order to better understand them and to be able to improve their quality or
effectiveness.
Action research is an attractive option for teacher researchers, school administrative staff, and other
stakeholders in the teaching and learning environment to consider. Specifically, action research in
education can be defined as the process of studying a school situation to understand and improve the
quality of the educative process.
The term ‘action research’ has often been used in a similar way to other terms used to describe research
undertaken by educational practitioners, such as: ‘classroom research’ (Hopkins, 1985); ‘self-reflective
enquiry’ (Kemmis, 1982); ‘educational action research’ (Carr and Kemmis, 1986); and, ‘exploratory
teaching and learning’ (Allwright and Bailey, 1991). You may also find it referred to as 'practitioner
enquiry', 'reflective analysis' or 'evidence-based practice'. The most important component of action research is that it includes both action and reflection that lead to enhanced practice. The concept of
action research under the leadership of Corey has been instrumental in bringing educational research
nearer to educational practitioners.
Following entry into the workforce, there are limited opportunities for new graduate teachers to engage
in critically reflective activities about their educative practice. In an increasingly complex and
challenging profession, the need for teachers, administrators and school systems to become involved in
professional development activities is ever present. Undertaking a unit in action research methodology
provides those professionals working in the education system with a systematic, reflective approach to
address areas of need within their respective domains. It provides practitioners with new knowledge and
understanding about how to improve educational practices or resolve significant problems in classrooms
and schools. Action research uses a systematic process, is participatory in nature, and offers multiple,
beneficial opportunities for those professionals working within the teaching profession. These
opportunities include facilitating the professional development of educators, increasing teacher
empowerment, and bridging the gap between research and practice. Within education, the main goal of
action research is to determine ways to enhance the lives of children. At the same time, action research
can enhance the lives of those professionals who work within educational systems. To illustrate, action
research has been directly linked to the professional growth and development of teachers. Action
research (a) helps teachers develop new knowledge directly related to their classrooms, (b) promotes
reflective teaching and thinking, (c) expands teachers’ pedagogical repertoire, (d) puts teachers in charge
of their craft, (e) reinforces the link between practice and student achievement, (f) fosters an openness
toward new ideas and learning new things, and (g) gives teachers ownership of effective practices.
In education, action research is also known as teacher research. It is one method teachers use for
improvement in both their practice and their students’ learning outcomes. The central goal of action
research is positive educational change.
Comparison of academic or formal research with action research:
• Training needed – Formal research: extensive; Action research: little.
• Goals – Formal research: knowledge that is generalisable to a wider audience; Action research: results for improving practice in a local situation.
• Method of identifying problems – Formal research: review of previous research findings and extensions of them; Action research: problems currently faced or improvements needed in a set of classrooms or a school.
• Literature review – Formal research: extensive enquiry into all research previously conducted on the topic using primary sources; Action research: some primary sources, but also use of secondary sources plus what practitioners are doing in other schools.
• Sampling – Formal research: random or representative, preferably with large populations; Action research: students and/or members of the school community.
• Research design – Formal research: rigorous controls over long periods; Action research: flexible, quick time frame, control through triangulation.
• Approach – Formal research: deductive reasoning (theory to hypothesis to data to confirmation); Action research: inductive reasoning (observations, patterns, interpretations, recommendations).
• Analysis of data – Formal research: tests leading to statistical significance; Action research: generally grouping of raw data using descriptive statistics.
• Application of results – Formal research: theoretical significance; Action research: practical significance.

Action research is characterized as being:


• Integrated- conducted as part of a teacher’s normal daily practice
• Reflective- a process which alternates between plan implementation and critical reflection
• Flexible- methods, data and interpretation are refined in the light of the understanding gained
during the research process
• Active- a process designed to generate change in small steps
• Relevant- meets the needs of teachers and/or their students
• Cyclical- involving a number of cycles with each clarifying issue leading to a deeper
understanding and more meaningful outcomes
• Focused- on a single issue of school improvement
• Collaborative- teachers and leaders working together to improve student outcomes
• Planned- an organised approach to answering a question
• Learning- simultaneous construction of new knowledge by teachers about their practice.
• AR is a method used for improving educational practices.
• AR is participative and collaborative.
• AR is situation based
• AR develops reflections based on the interpretations made by participants.
• Knowledge is created through action and at the point of application.
• AR can involve problem solving, if it leads to the improvement of practice.
• In AR the findings emerge as action, but they are not conclusive or absolute.
Advantages of Action Research for Teachers
• Develops an increased awareness of the discrepancies between goals and practices
• Improves teachers’ ability to be analytical about their practices
• Increases receptiveness to educational change
• Improves instructional effectiveness
• Improves decision-making skills/awareness
• Helps teachers view teaching as a type of inquiry or experimentation
• Increases reflection about teaching
• Increases understanding about the dynamics of a classroom
• Heightens the curiosity of teachers
• Empowers teachers by giving them greater confidence in their ability to promote change
• Can expand career opportunities and roles for teachers
• Can revitalize teaching and reduce burnout
• Increases appreciation for theory, provides an avenue for informing theory, and demystifies
research
• Encourages positive change and enables teachers to become agents of change
• Identifies or verifies which methods work
• Increases awareness, evaluation, and accountability of decisions made
• Promotes ownership of effective practices
• Promotes the selection of research questions that are personally meaningful
• Encourages teacher-researchers to be active learners
• Increases willingness to accept research findings for use in teaching
• Encourages more critical and responsive consumers of research
• Increases teachers’ knowledge about situations and contexts
• Facilitates defence of pedagogic actions
• Strengthens connection between pure and applied research
• Increases commitment to goals they have formulated themselves rather than those imposed on
them
• Increases opportunity to gain knowledge and skill in research methodology and applications
• Makes distinction between researcher and teacher irrelevant

Limitations of Action Research


• Unknown to and unaccepted by many researchers
• Causes, effects and outcomes may not be generalizable
• Sometimes confused with consultancy
• Unsuited for people unwilling to work democratically
• Difficult to meet the needs and expectations of everyone
• The method of research is not considered scientific.
• Action research also raises a number of practical challenges for the would-be action researcher
• Potential tension between the demands of the practical problem and the research can arise
• Risk of the researcher becoming over-involved in the situation.
• Some ethical issues may be especially problematic in action research.
Need and importance of action research:
Action research studies a problematic situation in an on-going, systematic and recursive way to take action to change that situation. Action research is a process of concurrently inquiring about problems and taking action to solve them. It is a sustained, intentional, recursive, and dynamic process of inquiry in which the teacher takes an action—purposefully and ethically in a specific classroom context—to improve teaching/learning.
The professionalization of teaching, reflecting the idea of teachers investigating their own practice; the perceived irrelevance of much contemporary educational research for practice; the emergence of methods in educational research which gave importance to participants’ knowledge, perspectives, and categories in shaping educational practices and situations; the adoption of a self-monitoring role in teaching; and the organization of teacher support networks committed to the continuing development of education have all increased the importance of action research and its integration into education.
Action research is change research, a nonlinear, recursive, cyclical process of study designed to
achieve concrete change in a specific situation, context, or work setting to improve teaching/learning. It
seeks to improve practice, the understanding of practice by its practitioners, and the situations in which
practice is located. Although it is focused on actions leading to change, action research is also a mental
disposition—a way of being in the classroom and the school—a lifelong habit of inquiry. It is recursive
in that teacher-researchers frequently work simultaneously within several research steps and circle back
to readdress issues and modify research questions based on reflection for, reflection in, and reflection on
action. The reflection-action-reflection-action process can be considered a spiralling cyclical process in
which research issues change and actions are improved or discarded or become more focused. In
education, action research generates actionable hypotheses about teaching, learning, and curriculum
from reflection on and study of teaching, learning, and curriculum to improve teaching, learning, and
curriculum.
Action research assumes that teachers are the agents and source of educational reform and not
the objects of reform. Action research empowers teachers to own professional knowledge because
teachers— through the process of action inquiry—conceptualize and create knowledge, interact around
knowledge, transform knowledge, and apply knowledge. Action research enables teachers to reflect on
their practice to improve it, become more autonomous in professional judgment, develop a more
energetic and dynamic environment for teaching and learning, articulate and build their craft knowledge,
and recognize and appreciate their own expertise.
Action research assumes caring knowledge is contextual knowledge, with the understanding that
human actions always take place in context and must be understood in context. It assumes knowledge is
tentative and probabilistic, continually subject to modification.
Action research assumes teacher development involves lifelong learning in changing and
multidimensional contexts. Action research is grounded in the reality of the school, classroom, teachers,
and students. It is a process in which study and inquiry lead to actions that make a difference in teaching
and learning, that bridge doing (practice), learning (study), and reflection (inquiry).
Through action research, we intellectually and affectively nurture ourselves, our classrooms, and our
students. Classrooms and schools become sites where new meanings and understanding are created and
shared.
Action research challenges certain assumptions about the research process and educational change
(Grundy, 1994, pp. 28–29). It challenges the separation of research from action, the separation of the
researcher from the researched
Action research is by, with, of, and for people, rather than on people
“No research without action—no action without research”. Teachers are privileged through the action
research process to produce knowledge and consequently experience that “knowledge is power.”
In educational action research, teachers, who traditionally have been the subjects of research, conduct
research on their own situations and circumstances in their classrooms and schools.
As knowledge and action are joined in changing practice, there is growing recognition of the power of
teachers to change and reform education from the inside rather than having change and reform imposed
top down from the outside.
Practicing the strategies and skills of teacher action research can help aspiring teachers in designing their own meaningful pedagogy and shift the identity of the teacher from expert to inquirer. Teachers who engaged in teacher research wrote more honestly about classroom problems, became more self-assured, began to see teaching more as a learning process, found their research plans became their lesson plans in response to discoveries they were making in their classrooms, and changed their focus from teaching to finding out what their students knew and then helping them to learn. Teachers were able and encouraged to try new ways of teaching as they became sensitive to classroom variables and examined the classroom context simultaneously with their teaching, which led them to become more creative in their thinking and writing.
Action research enables you to live your questions; in a way they become the focal point of your
thinking.

Steps of Action Research:


The process first starts with identifying a problem. Then, you must devise a plan and implement the
plan. This is the part of the process where the action is taking place. After you implement the plan, you
will observe how the process is working or not working. After you've had time to observe the situation,
the entire process of action research is reflected upon. Perhaps the whole process will start over again!
This is action research.

The basic steps of Action Research are


1. Problem Identification
2. Defining and Analysing the Problem
3. Formulating and Testing Action Hypothesis
4. Preparing the Report

1. Problem Identification: The most important step in action research as in any research is the
identification of the topic for research. Identifying the problem occurs when the situation is observed
and there is a recognition that things can be done better. The topic selected should be relevant and
important to the teachers or other field workers. Usually action research involves issues relating to a
pressing problem or any new technique or method or tool that the researcher hypothesizes will improve
the present situation. Classroom problems or issues that need to be solved or improved are the sources of
research problem. For this, one has to reflect on one's daily practices and ask oneself what the real problems to be solved are. If one cannot readily identify a topic, try brainstorming to arrive at one. By
studying life in the natural setting of the school and the classroom, by looking for “patterns in the rug,”
and by mulling, contemplating, and closely observing authentic events in teaching and learning
situations, one can identify a research question that will enlist personal passion and energy. “A teacher researcher, among other things, is a questioner.” Meaningful questions can emerge from: conversations
with your colleagues; professional literature; examination of your journal entries and teaching portfolio
to identify, for example, patterns of teacher/student behaviour or anomalies, paradoxes, and unusual
situations; difference between your teaching intentions and outcomes; problematic learning situations in
your classroom that you want to resolve; a new teaching strategy you are eager to implement; an
ambiguous and puzzling classroom management concern; or your curiosity about testing a particular
theory in the classroom. Also the topic selected should be feasible. Your selected topic can be evaluated
and refined by discussing with others.
Sometimes it helps to use a variety of questions as starting points to identify an issue you would like to
research (Caro-Bruce, 2000):
I would like to improve ____________________________________
I am perplexed by _________________________________________
I am really curious about ___________________________________
Something I think would really make a difference is ___________
Something I would like to change is _________________________
What happens to student learning in my classroom when I ______?
How can I implement ______________________________________?
How can I improve ________________________________________?
Identifying a good research question from these possibilities requires reflection, observation,
conversation, and study of the natural life of the classroom. It is important to remember that the first
question propelling an action research study may change as the research is under way. The recursive,
iterative, and spiralling nature of action research suggests that a research question may change and be
refined as new data and issues surface in the research study.
A good classroom action research question should be meaningful, compelling, and important
to you as a teacher-researcher. It should engage your passion, energy, and commitment. It has to be
important for your personal and professional growth; it should stretch you intellectually and affectively.
You should love the question. A good research question is manageable and within your sphere of
influence. It is consonant with your work; you can address it within the confines of your classroom. It is
focused and not so ambitious, big, or complex that it requires extraordinary resources, time, and energy.
A good research question should be important for learners. A good research question benefits your
students by informing your teaching and the curriculum, by providing new insights about students and
their learning, by broadening and deepening your perspectives, or by improving practice. A good
research question leads to taking an action, to trying something out, to improving a teaching/learning
situation, to implementing actions that can make a difference in the lives of students.A good research
question doesn’t lead to a yes or no answer. It is specific but sufficiently open-ended to facilitate
meaningful exploration and to provide opportunities for deep and rich understandings of teaching and
learning in the classroom. The question needs to be “open-ended enough to allow possibilities to emerge”.
2. Defining and Analysing the Problem: Problems cannot be solved unless they are identified and then defined. In our eagerness to begin a research study, there is sometimes a tendency to try to state the question as soon as possible. It is advisable not to hurry the question. Identifying and framing the research question should be done carefully. Once the topic or general problem for research is identified, it is to be narrowed down to a researchable topic. Action research involves refining
questions until you feel you have landed upon the right ones. You have to specifically define the
problem into a research question. Once you have narrowed down the question, it should be framed so
that the issue you are investigating is clearly and concisely stated. Defining and analysing a problem
involves seeking to understand the nature of the situation and discovering possible causal factors. You
look to see why things are as they are. The action or intervention you intend to implement needs to be
clearly stated. The question should be free of jargon and value-laden terms. In framing and analysing
the problem, it helps to consider the wide range of variables that can affect your study. In some situations, one has to study related literature for this. The way you define a problem will, inevitably, determine the methodology you plan to use to study it. Different solutions and subsequent understandings will be generated by the way problems are stated.
3. Formulating and Testing Action Hypothesis: After defining and analysing the problem, one can think of various solutions to it. Tentative guesses can be made. One can think and reflect over the problem, search related literature and state the hypothesis. The actions that can be undertaken to overcome the problem can be stated as the hypothesis of the action research. Then one has to collect data regarding the problem to test the hypothesis. The biggest challenge in conducting action research is to collect and analyze data while you are in the midst of taking an action. As you are implementing an intervention to improve student learning or to make a change in your teaching practice, you have to be mindful of the details that will make the intervention successful while at the same time remembering to carefully collect and analyze data that will determine the degree of success or the need to modify the intervention.
Design a systematic approach to analyze your data. Study the research question from at least three separate pieces of data and three points of view. As you collect your data, ask yourself if the research question still fits the data that are emerging from the study. As you examine the data,
continually compare the data that were collected earlier in the study with data collected later in the
study. Use different bases for comparison. Examine and study your data several times. New ideas will
occur to you with a fresh perspective. Try out different hunches about what the data mean. Look to see if
there are any factors or variables that might cause you to distrust the data. Make an educated guess and
then see if it is supported by the data. Formulate new action hypothesis. Don’t stick rigidly to an
assumption or hypothesis that was originally held. A variety of methods can be adopted to solve the
problem and find a solution. After finding the solution the last step is to implement the solution and test
it to see whether it works. Every new programme, plan or solution needs some sort of adjustment during
the implementation stage.
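The step above centres on collecting and comparing data while the action is under way. A minimal sketch of such a comparison is given below, assuming hypothetical pre- and post-intervention quiz scores and using only descriptive statistics (the level of analysis typical of action research); the names and values are illustrative, not taken from the notes.

```python
# Illustrative only: comparing data collected earlier in an action research
# cycle with data collected later, using descriptive statistics.
import statistics

# Hypothetical quiz scores before and after trying out the new teaching strategy.
scores_before = [45, 52, 38, 60, 47, 55, 41, 49]
scores_after = [58, 61, 50, 66, 57, 63, 52, 60]

def summarise(label, scores):
    """Print the mean, spread and range of a set of scores."""
    print(f"{label}: mean = {statistics.mean(scores):.1f}, "
          f"sd = {statistics.stdev(scores):.1f}, "
          f"range = {min(scores)}-{max(scores)}")

summarise("Before intervention", scores_before)
summarise("After intervention", scores_after)

# Look at individual gains as well, since averages can hide learners who did
# not benefit; this supports deciding whether to keep or modify the action.
gains = [after - before for before, after in zip(scores_before, scores_after)]
print("Individual gains:", gains)
print("Learners who did not improve:", sum(1 for g in gains if g <= 0))
```

Examining both the group summary and the individual gains reflects the advice above to study the data from more than one point of view before deciding whether to retain or revise the action hypothesis.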
4. Preparing the Report: After the action research is over, the report has to be prepared. It should include:
INTRODUCTION What was the focus of my study? What was the basis of my interest in this topic or
focus? What was I trying to learn about and understand? What were my overall goals? What factors in
my own history and experiences led me to be interested in this inquiry? What are my specific research
questions for this study?
BACKGROUND FOR THE STUDY/REVIEW OF RELATED LITERATURE What is the
background of this topic or focus and why is that background important to understand? What is the
context of previous work that has been done on this topic? To what else does the topic relate? How can I
situate my study within related professional literature? What is the theoretical framework that I bring to
this study? What are the areas of research and specific research studies that relate to my study? What are
related professional references (research, theory, and/or practice) that inform me?
DESCRIPTION OF THE RESEARCH CONTEXT Where was the study conducted and the data
gathered? What is the specific context in which the study was conducted (e.g. school population, the
classroom environment, curriculum, etc.)? May include a description of the school, the physical layout
of the classroom, the curriculum or specific curricular engagement that was studied. What did I do in the
classroom setting to create a context from which I collected data? Were there certain engagements that I
did with my students? Who were the participants in this research? How did I select the participants?
What is my relationship to the people involved? Describe these participants. Did I need to gain
permission ("informed consent") from parents, guardians, or other "gatekeepers"? If so, how did I gain
this consent? How did I assure participants that they are protected from harm and that they will not be
exposed to risks?
DATA COLLECTION AND ANALYSIS What is my general approach to research design (teacher
research, experimental, case study, qualitative, etc.)? How and why did I choose this approach? What
important kinds of data did I collect? What specific methods of data collection did I use (e.g. field notes,
teaching journal, interviewing, taping, collecting artifacts, etc.)? For each method - What did I do? What
did the data look like? How did I collect it? When? How often? What did I do with it? How did I
analyze the data? What did I do to organize and analyze data as I collected it? What kind of more
intensive analysis did I do once the data was collected? How did I establish trustworthiness for the
study? What was my research time line?
FINDINGS What did I learn? What are the major findings of
the study? What examples from the study support these findings? A common way to organize findings is
by themes or categories that were generated from the data. Typically each category/theme becomes a
subheading and begins with a general description to define the category. This is followed by examples
from the research to show the range of types of responses that went into that category -- actual quotes
from student responses, journals, field notes, artifacts, etc are included and interpreted by the researcher.
These categories might also be broken into smaller subcategories. Other possible ways to organize the
data include chronology (in the order events happened), life history (used for case study with the life
history organized around analytical points to be made), composite (present findings as a composite
picture such as "the day in the life of...."), critical events (significant events that reflect the major themes
from the data), or portraits (of individuals or institutions). In this section you are reporting the findings
of the study and so must stay close to the data in the statements that you make. Don't make broad
statements/interpretations that extend beyond the participants in the research. Do include “thick”
description - lots of specific examples of student talk, actions, etc. You have to show that you have the
data to support the statements that you are making and so others understand what is in your categories.
CONCLUSIONS AND DISCUSSION/IMPLICATIONS So what? What are the possible implications
of these findings for your own participants, other students, teachers, researchers? What sense do you
make of this study? What are you taking away for yourself and for others? Whose interests were served
by this research? Who benefited? What is the study's potential significance for my classroom or local
context? for education or society as a whole? Who might care about this study? What new questions
emerged from this study? How will you continue this inquiry?
Summary of study and concluding remarks that highlight thoughts you want to leave the reader with -
the major insights or wonderings you are taking from the study.
REFERENCES / Works cited list (any source of information and ideas other than the author’s must be referenced in the Action Research Report; references must conform to current APA publication standards).
APPENDICES (any ancillary materials should be included in the Action Research Project in
appendices) Copies of research permission form, written surveys, interview questions, etc (forms used in
the research or as part of the curricular engagement)
The contents of an Action Research Report includes:
The manuscript will include the following items:
Cover page (title information, name, date).
Table of contents (list the items with appropriate page numbers).
Introduction:
Objectives
Literature review
Methodology and work plan
The study:
The Context
Data collection and Analysis
The findings
The plan of action
Conclusions:
Summary
Outcomes
Implications
References
Appendices.

Need and Scope of integrating action research practices:


Action research and the teacher: AR is widely recognized as a powerful tool for professional development. It can improve teacher practice, heighten teacher professionalism, lead to positive educational change, expand the knowledge base for teaching, and provide a platform for teachers’ voices in educational reform.
Action research is a process of practical and grounded inquiry that empowers teachers to identify and solve their own problems. Action research is a transformative experience for a teacher. It integrates research with the teaching-learning process, opening up new possibilities and new insights and allowing teachers to see students, teaching, the curriculum etc. in a new way. A teacher researcher is a listener—someone actively engaged in making new discoveries about her students, her teaching and herself—and this enhances classroom teaching. Action research has a transformative power that helps teachers in changing teaching approaches, in developing deeper understanding of their students and of who they are as teachers, in enhancing their confidence and self-esteem, in gaining new perspectives, and in revitalizing their careers. Finally, teacher action research is a valid and energizing process for constructing knowledge about teaching and learning and for empowering teachers to take leadership in bringing about educational change.
Action research and classroom practices: Action research can address many issues related to various
classroom practices which would help a teacher to solve many classroom problems. Some of the topics
are:
1. Changes in classroom practice (e.g., What effect will daily writing have on my students?)
2. Effects of program restructuring (e.g., How will an approach affect student work habits?)
3. New understandings of students (e.g., What happens when at-risk students perceive they can be
successful?)
4. Understanding of self as teacher (e.g., What skills do I need to refine to be more effective in
teaching students to work together?)
5. New professional relationships with colleagues and students (e.g., How can regular and special
education teachers effectively co-teach?)
6. Teaching a new process to the students (e.g., How can I teach third graders to use reflection?)
7. Seeking a quantifiable answer (e.g., To what extent are portfolios an appropriate assessment tool
for kindergartners?)
Action research and school: We can no longer afford to conceive schools simply as centres distributing
knowledge developed by other units. The complexities of teaching and learning in formal classrooms
have become so large and the intellectual demands upon the system so enormous that the school must be
much more than a place of instruction. It must be a centre of inquiry—a producer as well as a transmitter of knowledge. So action research should become the school norm: students as well as teachers should become involved in academic inquiry and experimentation with teaching and learning.
In schools where action research took hold, there was clarity about school goals, priorities were protected, faculty felt they had the collective power to change teaching and learning in meaningful ways, teachers had similar perceptions of school norms, teachers saw that their work was supported by the school’s leadership, and staff viewed action research as a strategy to reform schools. Through inquiry, collaborating
teachers would design new instructional approaches and curriculum materials and try them out to see
what worked and what didn’t work. Their work would then inform further inquiry and trials, and their
schools would become “knowledge creating schools” in which the intellectual assets of teachers would
be deeply valued and supported.
A research project is a project work conducted to do some form of research. A research
project may also be an expansion on past work in the field. To test the validity of instruments,
procedures, or experiments, research may replicate elements of prior projects, or the project as a whole.
Why do we conduct Research Projects?
• To invent new things
• To solve a prevailing problem
• To support development programmes of a country
• To uplift living standards
• Because we are inquisitive about things happening around us
What are the Components of a Research Project?
Rationale
• Underlying reasons or
• Reasoning or principle that underlies or explains something, or
• a statement setting out this reasoning or principle
Objectives
• A goal or aim
• Expected end result
Project description
• Duration (short-term, medium term, Long-term)
• Methods- Practical- Laboratory, Field
Theoretical- Using published or written information, Using IT facilities
• Materials- Equipment- Major, Minor
Consumables- Glassware, Chemicals, Stationery etc.
• Activity plan- How you would carry out the research, time schedule etc.
• Analyses of data -Using statistical methods, Computer programmes etc.
• How to report the results -Tables, Graphs, Flow charts, Photographs, Text, Film, etc.
Budget
• Equipment
• Consumables
• Salaries & Personnel allowances
• Travelling & subsistence
• Stationery
• Unforeseen – 5-10% of the total cost for above items
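As a small worked example of the budget heads listed above (all figures are invented for illustration), the sketch below totals the items and adds the 5-10% allowance for unforeseen costs:

```python
# Illustrative only: totalling a project budget and adding the 5-10%
# "unforeseen" allowance. All amounts are hypothetical.
budget_items = {
    "Equipment": 150000,
    "Consumables": 40000,
    "Salaries & personnel allowances": 120000,
    "Travelling & subsistence": 25000,
    "Stationery": 5000,
}

subtotal = sum(budget_items.values())
unforeseen_low, unforeseen_high = 0.05 * subtotal, 0.10 * subtotal

print(f"Subtotal: {subtotal}")
print(f"Unforeseen (5-10%): {unforeseen_low:.0f} to {unforeseen_high:.0f}")
print(f"Total requested: {subtotal + unforeseen_high:.0f} (using the 10% ceiling)")
```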
Steps of a research project
Initiation (Providing/creating situations): The first step in a research project is to determine why you are doing the project and what project you are going to do. A research project can:
• replicate an existing study in a different setting;
• explore an under-researched area;
• extend a previous study;
• review the knowledge thus far in a specific field;
• develop or test out a methodology or method;
• address a research question in isolation, or within a wider programme of work; or
• apply a theoretical idea to a real world problem.
Determine an area in which you would conduct a project. You may have several areas in mind but
no specific area. You can provide many situations to choose a specific area for the project.
• Talk to others: what topics are others considering? Does this spark an interest? Don’t wait until you
have a fully formed research question before discussing your ideas with others, as their comments
and questions may help you to refine your focus. Discuss your proposed topic with a member of
academic staff who you think might be appropriate to supervise the project.
• Review pertinent literature to learn what has been done in the field and to become familiar enough
with the field to allow you to discuss it with others. The best ideas often cross disciplines and
species, so a broad approach is important.
• Look at others' writings: set aside some time to spend in the library, skimming through the titles of research papers and dissertations in your field over the past five years, and reading the abstracts of those you find most interesting. The topics may give you inspiration, and they may have useful suggestions for further research.
• Think about your own interests: which topic have you found most interesting, and is there an
element that could be developed into a research project?
• Be extra critical: is there something in your area so far that you have been skeptical about, or which
you think needs further study?
• Read about an interesting topic and keep asking the question ‘Why?’: this may identify a research question you could address.
You should also think realistically about the practical implications of your choice, in terms of:
• the time requirement;
• necessary travelling;
• access to equipment or room space;
• access to the population of interest; and
• possible costs.
Selection/Choosing: Decide how you will carry out the research project and find the ways and means of doing so. Determine what type of project it will be (minor or major). Choose a funding agency or another organization that would support your project. You will have to submit a detailed research project proposal (a fuller description of the project you intend to undertake) to that agency, and in certain situations you may be interviewed in person to explain your proposal. Some departments do not require you to submit a research proposal, but it is worth preparing one even if it is not a formal requirement. Decide whether you intend to inform the target audience about your topic or to persuade them of your opinion: if your purpose is to inform, then unbiased, factual information may be most appropriate; if your purpose is to persuade, then you may need targeted information that supports your opinion.
Once your topic has been accepted by the agency, you need to begin the process of refining the topic and
turning it into something that is focused enough to guide your project. Describe it as a research problem
that sets out:
• the issue that you are going to be investigating;
• your argument or thesis (what you want to prove, disprove, or explore); and
• the limits of your research (i.e. what you are not going to be investigating).
It is important that you establish a research problem at, or close to, the start of your project. It is one of the key tools for ensuring that the project keeps going in the right direction. Every task undertaken should begin with checking the research problem and asking, “Will this help address the problem?” Revise the research problem as you find out more about the topic. For example, if you discover that the data you planned to analyze are not available, or you find a new piece of information or a new concept while undertaking a literature search, you should rethink the basis of your research problem. Finally, state clearly what you will be studying. The research statement should be understandable to someone who does not know much about your field of study. Define key terms operationally and set out your goals and objectives. Also demonstrate the rationale for your research, and describe how it fits within the wider research context in your area.
Planning/Designing: Prepare a project design – a work plan of how you will carry out the project.
Usually your project proposal should explain the details of the proposed plan. It is essential that you
create a plan that helps you allocate enough time to each task you have to complete. How will you go
about exploring your research question? What will be your methods, your sample, sample size, tools to
be used, data collection procedures? If you are not the only person working on the project, who else will
be involved? Be specific about what you will be doing. Create a project timeline (an overview of when you are going to carry out specific steps of your project). This does not need to be a day-to-day list; depending on the length of your project it may give a biweekly or monthly overview. It is useful to work out how many weeks you have until you need to complete your project, and draw a chart showing these weeks. Block out the weeks, noting what work you will do in each and the resources you need for each stage (a minimal sketch of such a timeline is given below). Some focused thought at the beginning, and then at the planning stage of each phase, can save you time and energy.
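As an illustration of blocking out weeks, the sketch below builds a simple text timeline in Python; the phases, their durations and the start date are assumptions to be replaced by your own plan.

```python
# Minimal sketch (hypothetical phases, durations and start date): blocking out
# the weeks of a project into a simple text-based timeline chart.

from datetime import date, timedelta

start = date(2024, 6, 3)            # assumed project start (a Monday)
phases = [                          # hypothetical phases and week counts
    ("Literature review", 3),
    ("Proposal & planning", 2),
    ("Pilot study", 1),
    ("Data collection", 6),
    ("Analysis", 3),
    ("Report writing", 3),
]

week = 0
for name, weeks in phases:
    first = start + timedelta(weeks=week)
    last = start + timedelta(weeks=week + weeks) - timedelta(days=1)
    print(f"Weeks {week + 1:>2}-{week + weeks:>2}  {first} to {last}  {name}")
    week += weeks

print(f"Total duration: {week} weeks")
```

A chart like this makes it easy to see whether the total duration fits within the time you actually have before the project must be completed.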
Execution: The next step is the actual implementation of the project according to your proposed plan. It is one of the most important phases of a research project. Before the actual execution of your project you can
conduct a pilot study. A pilot study involves preliminary data collection, using your planned methods,
but with a very small sample. It aims to test out your approach, and identify any details that need to be
addressed before the main data collection goes ahead. Spend time reflecting on the implications that
your pilot study might have for your research project, and make the necessary adjustment to your plan.
Even if you do not have the time or opportunity to run a formal pilot study, you should try and reflect on
your methods after you have started to generate some data. Be organized and take detailed notes when
you are undertaking data collection.
In this phase one should
• record data accurately as you collect it;
• retrieve data quickly and efficiently;
• analyze and compare the data you collect; and
• create appropriate outputs for your report, e.g. tables and graphs, where appropriate (a brief sketch of such a workflow follows).
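The sketch below illustrates such a workflow in Python using only the standard library: recording observations to a CSV file during data collection and producing a simple summary for the report. The file name, field names and measurements are hypothetical.

```python
# Minimal sketch (hypothetical data and file name): recording observations
# to a CSV file and producing a simple summary of the collected scores.

import csv
import statistics

records = [                          # assumed example measurements
    {"subject": "S01", "score": 42},
    {"subject": "S02", "score": 55},
    {"subject": "S03", "score": 47},
]

# Record data accurately as it is collected.
with open("observations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["subject", "score"])
    writer.writeheader()
    writer.writerows(records)

# Retrieve the data quickly and summarize it for the report.
with open("observations.csv", newline="") as f:
    scores = [int(row["score"]) for row in csv.DictReader(f)]

print(f"N = {len(scores)}, mean = {statistics.mean(scores):.1f}, "
      f"SD = {statistics.stdev(scores):.1f}")
```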
Evaluation: On completion of your project you evaluate it. Ex ante evaluation refers to the evaluation
of a project proposal, for example for deciding whether or not to finance it, or to join the researchers, or
to provide scientific support. Ex post evaluation is conducted after the research is completed, again for a
variety of reasons such as deciding to publish or to apply the results, to grant an award or a fellowship to
the author(s), or to build a new research along a similar line. Evaluation should take place at every phase
of your project. An intermediate evaluation is aimed basically at helping to decide to go on, or to
reorient the course of the research. A project can be evaluated in terms of the project results. This
includes an interpretation and explanation of results as related to the research question; a discussion on
or suggestions for further work that may help address the problem to be solved; an analysis of the impact
of the project on the audience; or a discussion on any problems that occurred during the project.
Recording/Reporting: A researcher has the obligation to record all research procedures systematically
and accurately and prepare a complete, correct, and readable report of the project. In certain cases one
has to record and archive it for a reasonable length of time, and make it available for review under
appropriate circumstances. You will have to report your project as documents containing factual and
objective information collected through research. The project can be reported as publishable manuscript,
conference paper, invention, software, exhibit, performance, etc. It is a means to share your results or
project with others.
The funders of your research, and the institution at which you are carrying out your research will
both want to be informed at regular intervals about the progress of your project. Continuation of funding
may be dependent on submitting required reports on time. Some funders apply financial and other
penalties for late reports so project management is extremely important.
Preparation of Project Report
FORMAT FOR PROJECT REPORT
1. Title of the Project
2. Principal Investigator and Co-Investigators
3. Implementing Institution and other collaborating Institutions
4. Date of commencement
5. Duration
6. Date of completion
7. Objectives as approved
8. Deviation made from the original objectives, if any, while implementing the project, and the reasons thereof
9. Experimental work, giving full details of the experimental set-up, methods adopted and data collected, supported by necessary tables, charts, diagrams and photographs
10. Detailed analysis of results, indicating contributions made towards increasing the state of knowledge in the subject
11. Conclusions summarizing the achievements and indicating the scope for future work
12. Appendices and Bibliography
13. S&T benefits accrued:
    I. List of research publications with complete details: authors, title of paper, name of journal, volume, page, year
    II. Manpower trained on the project:
        a. Research Scientists or Research Fellows
        b. Number of Ph.D.s produced
        c. Other technical personnel trained
    III. Patents taken, if any
    IV. Products developed, if any
14. Abstract (300 words, for possible publication in the ICMR Bulletin)
15. Procurement/usage of Equipment
Project Proposal: Your proposal should consist of the following:
1. Clear statement of research question – Very clearly state what you will be studying. Be sure that this
is understandable to someone who doesn’t know much about your field of study. If needed, define terms.
2. Project Goal and Objectives - Goals and objectives are both desired outcomes of the work; what sets them apart is their time frame, the attributes they are set for and the effect they have. They are a very important part of your proposal, and the rest of the proposal supports these statements. They do not need to be long – one short paragraph should be enough – but this is the most critical section. The rest of your proposal will explain why you want to explore this question, how you will do it, and what it means to you.
3. Background/Statement of the Problem/Significance of the Project - Clearly support your statement
with documentation and references, and include a review of the literature that supports the need for your
research or creative endeavor. Provide a discussion of the present understanding and/or state of knowledge concerning the question/problem, or of the context of the scholarly or creative work. This
section presents and summarizes the problem you intend to solve and your solution to that problem. What
is the question that you want to explore in your research and why is this an interesting and important
question? For most proposals, this section will have references. If your project is a portion of a larger
project, the background should describe the research in general, on a large scale, but the Project
Description should be all about what you are going to do. This section should also include how your
project benefits or impacts the project as a whole and what knowledge is gained from your piece of the
project.
4. Experimental/Project Design - Design and describe a work plan consistent with your academic
discipline. This section of the proposal should explain the details of the proposed plan. How will you go
about exploring your research question? What will be your methods? If you are not the only person
working on the project, who else will be involved? Be specific on what you will be doing.
5. Project timeline – Give an overview of when you are going to do specific steps of your project. This
does not need to be a day to day list but depending on the length of your project it may give an overview
biweekly or monthly. Be sure to include time to review/synthesize your data or to reflect on the
experience. You should include time to write the final report/paper.
6. Anticipated results/Final Products and Dissemination. Describe possible forms of the final product,
e.g., publishable manuscript, conference paper, invention, software, exhibit, performance, etc. Be specific
about how you intend to share your results or project with others. This section may also include an
interpretation and explanation of results as related to your question; a discussion on or suggestions for
further work that may help address the problem you are trying to solve; an analysis of the expected
impact of the scholarly or creative work on the audience; or a discussion on any problems that could
hinder your creative endeavor.
7. Researcher's personal statement – This section is read carefully by the reviewers and does impact
their decision. You may wish to include why you want to do this project, what got you interested in it,
your career goals, and how this award would further those goals.
8. Project References – Use the standard convention of your discipline to write the references.
9. Budget - Your list of budget items and the calculations you have done to arrive at a figure for each
item should be summarized on the Budget form. Projects that include travel should be specific about
benefit/reasons and locations.