ASSESSMENT IN LEARNING 1 Module 1

This module discusses assessment in education. It defines assessment as determining students' learning needs, monitoring their progress, and examining their performance against identified learning outcomes. Assessment occurs before, during, and after instruction. The module outlines the purposes of different types of assessment, including formative, summative, and self-assessment; describes how assessment informs instruction; and identifies its benefits to various stakeholders, including students, teachers, parents, and policymakers. Key terms related to assessment, such as measurement and testing, are also explained.

ASSESSMENT IN LEARNING 1
 Assessment is a vital element in the curriculum development process. It is used to determine students' learning needs, monitor their progress, and examine their performance against identified student learning outcomes. As such, it is implemented at different phases of instruction: before (pre-assessment), during (formative assessment), and after instruction (summative assessment).
 With the directive of the Commission on Higher Education (CHED) to implement outcome-based education (OBE) across all programs (CMO 46, s. 2012), it is imperative that educators are aware of the emphasis of OBE in terms of assessment. CHED defines OBE as an "approach that focuses and organizes the educational system around what is essential for all learners to know, value, and be able to do to achieve the desired level of competence" (CHED, 2014, p. 9). CHED recognizes that OBE requires the use of appropriate assessments, especially non-conventional methods, to measure student performance.
 At the micro-level, OBE begins with a clear-cut statement of the learning outcomes: what the students should know, understand and be able to do. These intended learning outcomes (ILOs) are the foundation for designing teaching and learning activities (TLAs) and assessment tasks (ATs). Biggs and Tang (2007) recommended a constructive alignment of ILOs, TLAs and ATs. This means that the TLAs and ATs should embody the target verbs specified in the ILOs. These are cited and contained in the CHED Handbook (2014).
 In view of assessment, Biggs and Tang (2007) asserted that assessment tasks should provide evidence of how learners can use acquired knowledge academically and professionally in appropriate ways. This is where authentic assessment comes in. Authentic assessment provides tasks that enable learners to solve real-life problems and situations.
Purposes of Assessment

 Assessment for Learning (AfL)


 It pertains to diagnostic and formative tasks which are used to determine
learning needs, monitor academic progress of students during a unit or block of
instruction and guide instruction. Students are given on-going and immediate
descriptive feedback concerning their performance. Based on assessment results,
teachers can make adjustments when necessary in their teaching methods and
strategies to support learning. They can decide whether there is a need to
differentiate instruction or design more appropriate learning activities to clarify
and consolidate students' knowledge, understanding and skills. Examples include pre-tests, written assignments, quizzes, concept maps and focused questions.
 Assessment as Learning (AaL)
 It employs tasks or activities that provide students with an opportunity to monitor and further
their own learning – to think about their personal learning habits and how they can adjust their
learning strategies to achieve their goals. It involves metacognitive processes like reflection and
self-regulation to allow students to utilize their strengths and work on their weaknesses by
directing and regulating their learning. Hence, students are responsible and accountable for their
own learning. Self- and peer-assessment rubrics and portfolios are examples of AaL. AaL is also formative and may be given at any phase of the learning process (DepEd Order 8, s. 2015).

 Assessment of Learning (AoL)


 It is summative and done at the end of a unit, task, process or period. Its purpose is to
provide evidence of a student’s level of achievement in relation to curriculum outcomes.
Unit tests and final projects are typical examples of summative assessment. It is used
for grading, evaluation and reporting purposes. Evaluative feedback on the student's proficiency level is given to the student concerned, and likewise to his/her parents and other stakeholders. It provides the foundation for decisions on student placement and promotion.
Users of educational assessment

 Students
 Through varied learner-centered and constructive assessment tasks, students become
actively engaged in the learning process. They take responsibility for their own learning.
With the guidance of the teacher, they can learn to monitor changes in their learning
patterns. They become aware of how they think, how they learn, how they accomplish
tasks and how they feel about their own work. These redound to higher levels of motivation, self-concept and self-efficacy (Mikre, 2010) and ultimately better student achievement (Black & Wiliam, 1998).
 Teachers
 Assessment informs instructional practice. It gives teachers information about a student’s
knowledge and performance base. It tells them how their students are currently doing. Assessment
results can reveal which teaching methods and approaches are most effective. They provide direction as to how teachers can help students more and what teachers should do next.
 As a component of curriculum practice, assessment procedures support instructors' decisions on managing instruction, assessing student competence, placing students into levels of educational opportunity and certifying competence (Mikre, 2010).
 Parents
 Education is a shared partnership. Following this tenet, parents should be involved in the assessment process. They are a valued source of assessment information on the educational history and learning habits of their children, most especially for pre-schoolers who do not yet understand their developmental progress. In return, teachers should communicate vital information to parents concerning their children's progress and learning.
 Administrators and Program Staff
 Administrators and school planners use assessment to identify strengths and weaknesses of the
program. They designate program priorities, assess options and lay down plans for improvement.
Moreover, assessment data are used to make decisions regarding promotion or retention of students
and arrangement of faculty development programs.
 Policymakers
 Assessment provides information about students’ achievements which in turn reflect the quality
of education being provided by the school. With this information, government agencies can set or
modify standards, reward or sanction schools and direct educational resources. The Commission on Higher Education, in response to its quality assurance program, has shut down substandard academic programs of schools with low graduation rates and low passing rates in licensure examinations.
 Assessment results also serve as a basis for the formulation of new laws. A recent example is RA 10533, otherwise known as the Enhanced Basic Education Act of 2013, which established the K to 12 program. The rationale for the implementation of this law was the low scores obtained by Filipino pupils in standardized tests such as the National Achievement Test (NAT) and in international assessments like the Trends in International Mathematics and Science Study (TIMSS).
 Assessment plays a vital role in the K to 12 program. In Kindergarten, children are given a
School Readiness Yearend Assessment (SReYA) in the mother tongue to assess readiness across
the different developmental domains aligned with the National Early Learning Framework.
School-based Early Grade Reading Assessment (EGRA) and Early Grade Math Assessment
(EGMA) in the mother tongue are given in grade 1; and EGRA in English and Filipino in grade
3. National achievement tests are conducted in key stages to assess readiness of learners for
subsequent grade/year levels. In helping students choose specializations in senior high school,
they will undergo several assessments to uncover their strengths and weaknesses. Among these
is the National Career Assessment Examination (NCAE). The National Basic Education Competency Assessment (NBECA) completes the assessment stages. It measures the attainment of the K to 12 standards. As we can see, there are mechanisms in place to monitor
the quality of basic education in the country under the new K to 12 BEC. Assessment data
provide a basis for evaluative decisions and policy formulation to sustain or improve the
program and adapt to emerging needs.
Common Terminologies
 Measurement
 Measurement comes from the Old French word mesure, which means "limit or quantity". It is a quantitative description of an object's characteristic or attribute. In science,
measurement is a comparison of an unknown quantity to a standard. There are appropriate
measuring tools to gather numerical data on variables such as height, mass, time, temperature,
among others. In the field of education, what do teachers measure and what instruments do
they use?
 Teachers are particularly interested in determining how much learning a student has
acquired compared to a standard (criterion) or in reference to other learners in a group (norm-
referenced). They measure particular elements of learning such as readiness to learn, recall of facts, demonstration of specific skills, or the ability to analyze and solve applied problems. They use tools or instruments like tests, oral presentations, written reports, portfolios and rubrics to obtain pertinent information. Among these, tests are the most pervasive.
 A quantitative measure like a score of 30 out of 50 in a written examination does not hold meaning
unless interpreted. Measurement stops once a numerical value is ascribed. Making a value judgment
belongs to evaluation.

 TESTING
 It is a formal, systematic procedure for gathering information (Russell & Airasian, 2012). It is a tool
comprised of a set of questions administered during a fixed period of time under comparable conditions
for all students (Miller, Linn & Gronlund, 2009). It is an instrument used to measure a construct and
make decisions. Educational tests may be used to measure the learning progress of a student which is
formative in purpose, or comprehensive covering a more extended time frame which is summative.
 Teachers score tests in order to obtain numerical descriptions of students’ performance. Examples of
measures are raw scores and percentages obtained in tests. For example, Nico’s score of 16 out of 20
items in a completion type quiz in Araling Panlipunan (Social Studies) is a measure of his cognitive
knowledge on a particular topic. This indicates that he got 80% of the items correctly. This is an
objective way of measuring a student’s knowledge of the subject matter. Another method is through
perception which is less stable because of its subjectivity. For instance, a teacher can rate a student’s
knowledge about history using a scale of 1 to 5. Subjective types of measurement are useful especially
in quantifying latent variables like creativity, motivation, commitment, work satisfaction, among others.
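The raw-score-to-percentage arithmetic in the Nico example can be sketched in a few lines. This is a minimal illustration, not a formula from the text; the function name is ours, and the judgment step is deliberately left out to mark where measurement ends and evaluation begins.

```python
# Minimal sketch of measurement as pure quantification. The values mirror
# the Nico example: 16 correct out of 20 items.

def percent_score(raw: int, total: int) -> float:
    """Express a raw test score as a percentage of the total items."""
    return 100.0 * raw / total

print(percent_score(16, 20))  # 80.0 -- measurement stops at this number;
                              # judging whether 80% is "good" is evaluation
```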
 Tests are the most dominant form of assessment. The issue concerning their effectiveness in measuring and evaluating learning is resolved if questions target and reflect learning outcomes and cover the different learning domains. Tests are traditional assessments. They may not be the best way to measure how much students have learned, but they still provide valuable information about student learning and progress.
 ASSESSMENT
 Assessment comes from the Latin word assidere which means “to sit beside a judge”.

 Miller, Linn & Gronlund (2009) defined assessment as any method utilized to gather information about student performance. Black and Wiliam (1998, p. 82) gave a lengthier
definition emphasizing the importance of feedback and signifying its purpose. They stated that
assessment pertains to all "activities undertaken by teachers – and by their students in assessing
themselves – that provide information to be used to modify the teaching and learning activities in
which they are engaged.” This means that assessment data direct teaching in order to meet the
needs of the students. It should be pointed out however, that assessment is not just about
collecting data.
 These data are processed, interpreted and acted upon. They aid teachers in making informed decisions and judgments to improve teaching and learning. It is a continuous process used to
identify and address problems on teaching methods, learning milieu, student mastery and
classroom management. Hence, it is no surprise that assessment subsumes measurement and
instigates evaluation.
 Tests are a form of assessment. However, the term "testing" appears to have a negative connotation among educators and seems somewhat threatening to learners. Thus, the term "assessment" is preferably used. While a test gives a snapshot of a student's learning, assessment provides a bigger and more comprehensive picture. It should now be clear that not all assessments are tests. Although many educators are still focused on traditional tests, schools implementing an outcome-based teaching and learning (OBTL) approach are now putting more emphasis on performance tasks and other assessments like portfolios, observation, oral questioning and case studies for authentic assessment. These are non-test assessment techniques.
 EVALUATION
 It comes in after the data have been collected from an assessment task. According to Russell and Airasian (2012), evaluation is the process of judging the quality of a performance or course of action. As its etymology indicates (French word évaluer), evaluation entails finding the value of an educational task. This means that assessment data gathered by the teacher have to be interpreted in order to make sound decisions about students and the teaching-learning process. It is carried out both by the teacher and his/her students to uncover how the learning process is developing.
TYPES OF TESTS

 According to Mode of Response


 - ORAL – answers are spoken

 - WRITTEN – activities where students either select or provide a response to a prompt. Examples are alternative-response (true/false), multiple choice, matching, short-answer, essay, completion and identification items. A written test can be administered to a large group at one time. It can measure students' written communication skills. It can also be used to assess lower and higher levels of cognition, provided that questions are phrased appropriately. It is fair and efficient.
 - PERFORMANCE-BASED – activities that require students to demonstrate their skills or ability to perform specific actions. They include problem-based learning, inquiry tasks, demonstration tasks, exhibits, presentation tasks and capstone performances. These tasks are designed to be authentic, meaningful, in-depth and multidimensional. However, cost and efficiency are some of the drawbacks.
 According to Ease of Quantification of Response
 As to way of scoring, a test may be classified as objective or subjective. An objective test can be
corrected and quantified quite easily. Scores can be readily compared. It includes true-false, multiple
choice, completion and matching items. The test items have a single or specific convergent response.
In contrast, a subjective test elicits varied responses. A test question of this type may have more than
one answer. Subjective tests include restricted- and extended-response essays. Because students have the liberty to write their answers to a test question, such tests are not easy to check. Answers to this type of test are usually divergent. Scores are likely to be influenced by the personal opinion or judgment of the person doing the scoring.
 According to Mode of Administration
 An individual test is given to one person at a time. Individual cognitive and achievement tests are
administered to gather extensive information about each student’s cognitive functioning and his/her
ability to process and perform specific tasks. They can help identify intellectually gifted students.
Likewise, they can also pinpoint those with learning disabilities (LDs). LDs are neurological disorders that impede a learner's ability to store, process or produce information properly. Testing can aid in identifying learners who are struggling in reading (dyslexia), math (dyscalculia), writing (dysgraphia), motor skills (dyspraxia), language (dysphasia), or visual or auditory processing. Aside from assessment data obtained from a wide array of given tasks, the teacher can also observe individual students closely during the test to gather additional information.
 A group test is administered to a class of students or group of examinees simultaneously. It was
developed to address the practical need of testing. The test is usually objective and responses are
more or less restricted. It does not lend itself for in-depth observations of individual students.
There is less opportunity to establish rapport or help students maintain interest in the test.
Additionally, students are assessed on all items of the test. Students may become bored with easy items and anxious over difficult ones. Information obtained from group tests is not as comprehensive as that from individual tests.
 According to Test Constructor
 Classified based on the constructor, a test may either be standardized or non-standardized.
Miller, Linn & Gronlund (2009) enumerated four properties that differentiate standardized tests
from classroom or informal test: learning outcomes and content measured; quality of test items;
reliability; and administration and scoring interpretation.
 Standardized tests are prepared by specialists who are versed in the principles of assessment. They are administered to a large group of students or examinees under similar conditions. Scoring procedures and interpretations are consistent. There are available manuals and guides to aid in the administration and interpretation of test results. Because of high validity and reliability, they can be used for a long period of time, provided they are used for what they were intended for. Results are generally consistent. Commonly, standardized tests consist of multiple-choice items used to distinguish between students. Results of standardized tests serve as an indicator of instructional effectiveness and a reflection of the school's performance.
 Non-standardized tests are prepared by teachers who may not be adept at the principles of test construction. At times, teacher-made tests are constructed haphazardly due to limited time and lack of opportunity to pre-test or pilot-test the items. Compared to a standardized test, the quality of the items is uncertain or, if known, generally lower. Non-standardized tests are usually administered to one or a few classes to measure subject or course achievement. One or several test formats are used; hence, items may not be entirely objective. Test items are not thoroughly examined for validity. Scores are not subjected to any statistical procedure to determine reliability. Unlike a standardized test, a non-standardized test is not intended to be used repeatedly for a long time. There are no established standards for scoring and interpreting results.
 According to Mode of Interpreting Results
 Tests that yield norm-referenced interpretations are evaluative instruments that measure a
student’s performance in relation to the performance of a group on the same test. Comparisons are
made and the student’s relative position is determined. For instance, a student may rank third in a
class of fifty. Examples of norm-referenced tests are teacher-made survey tests and interest
inventories. Standardized achievement tests also fall under this type.
 Tests that allow criterion-referenced interpretations describe each student's performance against an agreed-upon or pre-established criterion or level of performance. The criterion is not actually a cutoff score but rather the domain of subject matter: the range of well-defined instructional objectives or outcomes. Nonetheless, in a mastery test, a cut score is used to determine whether or not a student has achieved mastery of a given unit of instruction. Notably, the methods for setting a cut score vary, making the practice somewhat subjective.
 You will find that some educators classify tests as norm or criterion-referenced tests. However,
Popham (2011) stressed that there are no such things. Instead, he clarified that these are
interpretations of student performance.
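The two interpretations of the same score can be contrasted in a short sketch. This is a hypothetical illustration: the class scores and the 75% mastery cutoff are invented, and the function names are ours, not standard terminology.

```python
# Norm-referenced: where does the score stand relative to the group?
# Criterion-referenced: does the score meet a pre-set standard?

def norm_referenced_rank(score, class_scores):
    """Rank of a score within the group (1 = highest): a norm-referenced view."""
    return 1 + sum(1 for s in class_scores if s > score)

def criterion_referenced_mastery(score, total_items, cutoff=0.75):
    """Mastery decision against a fixed criterion, independent of the group."""
    return score / total_items >= cutoff

class_scores = [45, 42, 38, 35, 30, 28]        # hypothetical scores out of 50
print(norm_referenced_rank(38, class_scores))   # 3 -- third in the group
print(criterion_referenced_mastery(38, 50))     # True: 76% meets the 75% cutoff
```

The same raw score of 38 yields two different statements: a relative position (rank 3) and an absolute mastery judgment, which is exactly Popham's point that these are interpretations, not kinds of tests.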
 According to Nature of Answer
 The following are popular types of test classified according to the construct they are
measuring:
 Personality tests were first developed in the 1920s, initially intended to aid in the selection of personnel in the armed forces. Since then, quite a number of personality tests have been developed. A personality test has no right or wrong answers; it measures one's personality and behavioral style. It is used in recruitment as it aids employers in determining how a potential employee will respond to various work-related activities. Apart from evaluation and staffing, it is also used in career guidance, in individual and relationship counseling and in diagnosing personality disorders. In schools, personality tests determine personality strengths and weaknesses. Personality development activities can then be arranged for students.
 Achievement tests measure students’ learning as a result of instruction and training
experiences. When used summatively, they serve as a basis for promotion to the next
grade. In contrast, aptitude tests determine a student’s potential to learn and do new tasks.
The College Scholastic Aptitude Test by the Center for Educational Measurement, Inc.
measures student ability and predicts success in college. A career aptitude test aids in
choosing the best line of work for an individual based on his/her skills and interest.
 Intelligence tests measure learners’ innate intelligence or mental ability. The first modern
intelligence test was published in 1905 by Alfred Binet and Theodore Simon. Intelligence tests
have continually evolved because of efforts to measure intelligence accurately. They have been used extensively as predictors of academic achievement. Intelligence tests contain items on verbal comprehension, quantitative and abstract reasoning, among others, in accordance with some
recognized theory of intelligence. For instance, Sternberg constructed a set of multiple choice
questions grounded on his Triarchic Theory of Human Intelligence. The intelligence test taps into
the three independent aspects of intelligence: analytic, practical and creative.
 A sociometric test measures interpersonal relationships in a social group. Introduced in the 1930s,
the test allows learners to express their preferences in terms of likes and dislikes for other
members of the group. It includes peer nomination, peer rating and sociometric rankings of social acceptance. For instance, a child may be asked to nominate three classmates whom he/she would like to play with, or to rate them accordingly.
 A trade or vocational test assesses an individual’s knowledge, skills, and competence in a
particular occupation. A trade test may consist of a theory test and a practical test. Upon successful completion of the test, the individual is given a certification of qualification. Trade tests can likewise be used to determine the effectiveness of training programs.
PRINCIPLES OF HIGH QUALITY
ASSESSMENT
1. Clarity of learning targets (knowledge, reasoning, skills, products, affects)
2. Appropriateness of Assessment Methods
3. Validity
4. Reliability
5. Fairness
6. Positive Consequences
7. Practicality and Efficiency
8. Ethics
 1. CLARITY OF LEARNING TARGETS
(knowledge, reasoning, skills, products, affects)
Assessment can be made precise, accurate and dependable only if what is to be achieved is clearly stated and feasible. The learning targets, involving knowledge, reasoning, skills, products and affects, need to be stated in behavioral terms which denote something that can be observed in the behavior of the students.
CLARITY OF LEARNING TARGETS (CONT.)

Cognitive Targets
Benjamin Bloom (1956) proposed a hierarchy of educational objectives at the cognitive level. These are:

• Knowledge – acquisition of facts, concepts and theories

• Comprehension – understanding; involves cognition or awareness of interrelationships

• Application – transfer of knowledge from one field of study to another, or from one concept to another concept in the same discipline

• Analysis – breaking down a concept or idea into its components and explaining the concept as a composition of these components

• Synthesis – opposite of analysis; entails putting the components together in order to summarize the concept

• Evaluation and Reasoning – valuing and judgment, or assessing the "worth" of a concept or principle
CLARITY OF LEARNING TARGETS (CONT.)

Skills, Competencies and Abilities Targets

 Skills – specific activities or tasks that a student can proficiently do
 Competencies – clusters of skills
 Abilities – made up of related competencies, categorized as:
i. Cognitive
ii. Affective
iii. Psychomotor

Products, Outputs and Project Targets
 Tangible and concrete evidence of a student's ability
 The level of workmanship of projects needs to be clearly specified:
i. expert
ii. skilled
iii. novice
2. APPROPRIATENESS OF ASSESSMENT METHODS

a. Written-Response Instruments
 Objective tests – appropriate for assessing the various levels of the hierarchy of educational objectives
 Essays – can test the students' grasp of higher-level cognitive skills
 Checklists – lists of several characteristics or activities presented to the subjects of a study, who will analyze and place a mark opposite the characteristics
2. APPROPRIATENESS OF ASSESSMENT METHODS (CONT.)

b. Product Rating Scales
 Used to rate products like book reports, maps, charts, diagrams, notebooks and creative endeavors
 Need to be developed to assess various products over the years

c. Performance Tests – Performance checklist
 Consists of a list of behaviors that make up a certain type of performance
 Used to determine whether or not an individual behaves in a certain way when asked to complete a particular task
2. APPROPRIATENESS OF ASSESSMENT METHODS (CONT.)

d. Oral Questioning – an appropriate assessment method when the objectives are to:
 Assess the students' stock knowledge and/or
 Determine the students' ability to communicate ideas in coherent verbal sentences

e. Observation and Self-Reports
 Useful supplementary methods when used in conjunction with oral questioning and performance tests
PROPERTIES OF ASSESSMENT METHODS

3. Validity
4. Reliability
5. Fairness
6. Positive Consequences
7. Practicality and Efficiency
8. Ethics
3. VALIDITY

 Something valid is something fair.
 A valid test is one that measures what it is supposed to measure.

Types of Validity
 Face: What do students think of the test?
 Content: Am I testing what I taught?
 Criterion-related: How does this compare with an existing valid test?
 Construct: Does the test reflect the underlying trait or factor it claims to measure?

 Tests can be made more valid by making them more subjective (open items).
MORE ON VALIDITY

Validity – the appropriateness, correctness, meaningfulness and usefulness of the specific conclusions that a teacher reaches regarding the teaching-learning situation.

 Content validity – the content and format of the instrument
i. Students' adequate experience
ii. Coverage of sufficient material
iii. Reflection of the degree of emphasis

 Face validity – the outward appearance of the test; the lowest form of test validity
 Criterion-related validity – the test is judged against a specific criterion
 Construct validity – the test is loaded on a "construct" or factor
4. RELIABILITY

 Something reliable is something that works well and that you can trust.
 A reliable test is a consistent measure of what it is supposed to measure.

Questions:
 Can we trust the results of the test?
 Would we get the same results if the test were taken again and scored by a different person?

 Tests can be made more reliable by making them more objective (more controlled items).

Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials.
Equivalency reliability is the extent to which
two items measure identical concepts at an
identical level of difficulty. Equivalency
reliability is determined by relating two sets
of test scores to one another to highlight the
degree of relationship or association.
Stability reliability (sometimes called
test, re-test reliability) is the agreement
of measuring instruments over time. To
determine stability, a measure or test is
repeated on the same subjects at a
future date.
Internal consistency is the extent to which the items within a test or procedure assess the same characteristic, skill or quality. It is a measure of how precisely the items of a single instrument hang together in measuring that attribute.
Interrater reliability is the extent to
which two or more individuals (coders or
raters) agree. Interrater reliability
addresses the consistency of the
implementation of a rating system.
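A common way to quantify interrater reliability is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The statistic itself is standard; the two raters' sample judgments below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters categorizing the same items.
    kappa = (po - pe) / (1 - pe), where po is observed agreement and
    pe is chance agreement computed from each rater's category frequencies."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical pass/fail judgments by two raters on six student essays:
a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 2))  # 0.67 -- substantial, not perfect, agreement
```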
RELIABILITY – CONSISTENCY, DEPENDABILITY, STABILITY, WHICH CAN BE ESTIMATED BY:

 Split-half method
 Calculated using:
i. the Spearman-Brown prophecy formula
ii. the Kuder-Richardson formulas KR-20 and KR-21

 Consistency of test results when the same test is administered at two different time periods:
i. Test-retest method
ii. Correlating the two test results
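The estimation methods listed above can be sketched numerically. Below, a split-half estimate is stepped up with the Spearman-Brown prophecy formula, r_full = 2r / (1 + r), and KR-20 is computed as (k/(k-1))(1 - Σpq/σ²). The tiny 0/1 response matrix (one row per student, one column per item) is invented purely for illustration.

```python
from statistics import pvariance

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_matrix):
    """Correlate odd-item and even-item half-scores, then apply the
    Spearman-Brown prophecy formula to estimate full-test reliability."""
    odd = [sum(row[0::2]) for row in item_matrix]
    even = [sum(row[1::2]) for row in item_matrix]
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)

def kr20(item_matrix):
    """Kuder-Richardson formula 20 for dichotomously scored (0/1) items."""
    k = len(item_matrix[0])
    n = len(item_matrix)
    var_total = pvariance([sum(row) for row in item_matrix])
    pq = sum((sum(col) / n) * (1 - sum(col) / n) for col in zip(*item_matrix))
    return (k / (k - 1)) * (1 - pq / var_total)

# Hypothetical responses: 5 students x 4 items (1 = correct, 0 = wrong).
data = [[1, 1, 1, 1],
        [1, 1, 1, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(round(split_half_reliability(data), 2))  # 0.49
print(round(kr20(data), 2))                    # 0.75
```

The same `pearson` helper also serves the test-retest method: correlating the scores from two administrations of the same test gives the stability estimate.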
5. FAIRNESS

 The concept that assessment should be "fair" covers a number of aspects:
 Student knowledge of the learning targets of assessment
 Opportunity to learn
 Prerequisite knowledge and skills
 Avoiding teacher stereotypes
 Avoiding bias in assessment tasks and procedures
6. POSITIVE CONSEQUENCES

 Learning assessments provide students with effective feedback and can potentially improve their motivation and/or self-esteem. Moreover, assessments of learning give students the tools to assess themselves and understand how to improve.
 Assessment should have positive consequences for students, teachers, parents and other stakeholders.
7. PRACTICALITY AND EFFICIENCY

 Something practical is something effective in real situations.
 A practical test is one which can be practically administered.

Questions:
 Will the test take longer to design than to apply?
 Will the test be easy to mark?

 Tests can be made more practical by making them more objective (more controlled items).
Factors affecting practicality and efficiency:
 Teacher familiarity with the method – teachers should be familiar with the test
 Time required – the test should not require too much time and should be implementable
 Complexity of administration
 Ease of scoring
 Ease of interpretation
 Cost
RELIABILITY, VALIDITY & PRACTICALITY

 The problem:
 The more reliable a test is, the less valid.
 The more valid a test is, the less reliable.
 The more practical a test is, (generally) the less valid.

 The solution:
 As in everything, we need a balance (in both exams and exam items).
8. ETHICS

Informed consent
Anonymity and Confidentiality
1. Gathering data
2. Recording data
3. Reporting data

ETHICS IN ASSESSMENT – "RIGHT AND WRONG"
Conforming to the standards of conduct of a given profession or group.
Ethical issues that may be raised:
i. Possible harm to the participants
ii. Confidentiality
iii. Presence of concealment or deception
iv. Temptation to assist students
RECENT TRENDS AND FOCUS IN ASSESSMENT

 Accountability means informing parents and the public about how well a school is educating its students and about the quality of the social and learning environment.

 Fairness refers to the consideration of learners' needs and characteristics, and any reasonable adjustments that need to be applied to take account of them. It also includes an opportunity for the person being assessed to challenge the result of the assessment and to be reassessed if necessary.
 In education, the term standards-based refers to systems of instruction, assessment, grading, and academic reporting that are based on students demonstrating understanding or mastery of the knowledge and skills they are expected to learn as they progress through their education.
 Outcome-based assessment means that the assessment process must be aligned with the learning outcomes. This means that it should support the learners in their progress (formative assessment) and validate the achievement of the intended learning outcomes at the end of the process (summative assessment).
 Item response theory provides a useful and theoretically well-founded framework for educational measurement. It supports such activities as the construction of measurement instruments, linking and equating measurements, and evaluation of test bias and differential item functioning.
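As an illustration of the item response functions that IRT builds on, the widely used three-parameter logistic (3PL) model gives the probability of a correct answer as P(θ) = c + (1 - c) / (1 + e^(-a(θ - b))). The model is standard in the IRT literature; the particular parameter values below are invented for the sketch.

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0):
    """Three-parameter logistic (3PL) item response function.
    theta = student ability, a = item discrimination,
    b = item difficulty, c = guessing (lower asymptote)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# A student of average ability (theta = 0) on an average-difficulty item:
print(irt_prob(0.0))                                  # 0.5
# The same student on a harder item (b = 1) with some guessing (c = 0.2):
print(round(irt_prob(0.0, a=1.2, b=1.0, c=0.2), 2))   # 0.39
```

Setting a = 1 and c = 0 recovers the simpler one-parameter (Rasch-style) curve, which is one reason the logistic family is convenient for linking and equating.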
THANK YOU!!!
