
LUCAN REVIEW CENTER

and Tutorials, Inc.


Alano St. San Francisco Dist., Pagadian City
Tel No. (062) 215-3307
Cel No. 09206316136 (Smart)
Cel No. 09277695668 (Globe)

ASSESSMENT AND EVALUATION OF LEARNING 1

PART I – CONTENT UPDATE


BASIC CONCEPTS
Test – an instrument designed to measure any characteristic, quality, ability, knowledge or
skill. It is composed of items in the area it is designed to measure.
Measurement – a process of quantifying the degree to which someone/something possesses a given
trait, i.e., a quality, characteristic, or feature.
Assessment – a process of gathering and organizing quantitative or qualitative data into an
interpretable form to have a basis for judgment or decision-making.
- It is a prerequisite to evaluation. It provides the information which enables evaluation
to take place.
Evaluation – a process of systematic interpretation, analysis, appraisal or judgment of the worth of
organized data as basis for decision-making. It involves judgment about the
desirability of changes in students.
Traditional Assessment – it refers to the use of conventional pen-and-paper objective tests.
Alternative Assessment – it refers to the use of methods other than pen-and-paper objective tests,
such as performance tests, projects, portfolios, journals, and the like.
Authentic Assessment – it refers to the use of assessment methods that simulate true-to-life
situations. These could be written tests that reflect real-life situations or
performance tasks that parallel what we experience in real life.

PURPOSES OF CLASSROOM ASSESSMENT


1. Assessment FOR learning – this includes three types of assessment done before and during
instruction. These are placement, formative and diagnostic.
a. Placement – done prior to instruction
 Its purpose is to assess the needs of the learners to have basis in planning for a
relevant instruction.
 Teachers use this assessment to know what their students are bringing into the
learning situation and use this as a starting point for instruction.
 The results of this assessment place students in specific learning groups to facilitate
teaching and learning.
b. Formative – done during instruction
 This assessment is where teachers continuously monitor the student’s level of
attainment of the learning objectives (Stiggins, 2005).
 The results of this assessment are communicated clearly and promptly to the students
for them to know their strengths and weaknesses and the progress of their learning.
c. Diagnostic – done during instruction
 This is used to determine student’s recurring or persistent difficulties.
 It searches for the underlying causes of student’s learning problems that do not
respond to first aid treatment.
 It helps formulate a plan for detailed remedial instruction.
2. Assessment OF learning – this is done after instruction. This is usually referred to as the
summative assessment.
 It is used to certify what students know and can do and the level of their proficiency
or competency.
 Its results reveal whether or not instruction has successfully achieved the
curriculum outcomes.
 The information from assessment of learning is usually expressed as marks or letter
grades.
 Its results are communicated to the students, parents, and other
stakeholders for decision making.
 It is also a powerful factor that could pave the way for educational reforms.
3. Assessment AS Learning – this is done for teachers to understand their role of assessing FOR
and OF learning and to perform it well. It requires teachers to undergo training on how to
assess learning and to be equipped with the following competencies needed in performing their
work as assessors.

STANDARDS FOR TEACHER COMPETENCE IN EDUCATIONAL ASSESSMENT OF STUDENTS
(Developed by the American Federation of Teachers, the National Council on Measurement in
Education, and the National Education Association)
1. Teachers should be skilled in choosing assessment methods appropriate for instructional
decisions.
2. Teachers should be skilled in developing assessment methods appropriate for instructional
decisions.
3. Teachers should be skilled in administering, scoring, and interpreting the results of both
externally-produced and teacher-produced assessment methods.
4. Teachers should be skilled in using assessment results when making decisions about individual
students, planning teaching, developing curriculum, and school improvement.
5. Teachers should be skilled in developing valid pupil grading procedures which use pupil
assessments.
6. Teachers should be skilled in communicating assessment results to students, parents, other lay
audiences, and other educators.
7. Teachers should be skilled in recognizing unethical, illegal, and otherwise inappropriate
assessment methods and uses of assessment information.

PRINCIPLES OF HIGH QUALITY CLASSROOM ASSESSMENT

Principle 1: Clarity and Appropriateness of Learning Targets


 Learning targets should be clearly stated, specific, and centered on what is truly important.
Learning Targets
(McMillan, 2007; Stiggins, 2007)
Knowledge – Student mastery of substantive subject matter
Reasoning – Student ability to use knowledge to reason and solve problems
Skills – Student ability to demonstrate achievement-related skills
Products – Student ability to create achievement-related products
Affect/Disposition – Student attainment of affective states such as attitudes, values, interests, and
self-efficacy
Principle 2: Appropriateness of Methods
Assessment Methods
 Objective supply – short answer, completion test
 Objective selection – multiple choice, matching type, true/false
 Essay – restricted response, extended response
 Performance-based – presentations, papers, projects, athletics, demonstrations, exhibitions, portfolios
 Oral question – oral examinations, conferences, interviews
 Observation – informal, formal
 Self-report – attitude surveys, sociometric devices, questionnaires, inventories
Learning Targets and their Appropriate Assessment Methods
Targets   | Objective | Essay | Performance-Based | Oral Question | Observation | Self-report
Knowledge |     5     |   4   |        3          |       4       |      3      |      2
Reasoning |     2     |   5   |        4          |       4       |      2      |      2
Skills    |     1     |   3   |        5          |       2       |      5      |      3
Products  |     1     |   1   |        5          |       2       |      4      |      4
Affect    |     1     |   2   |        4          |       4       |      4      |      5
Note: Higher numbers indicate better matches (e.g., 5 = high, 1 = low)
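The match ratings above can be used as a quick lookup aid. The short sketch below is only an illustration (the helper names and the cutoff of 4 are hypothetical; the ratings themselves are copied from the table) of how a teacher might shortlist the best-matched methods for a chosen learning target.

```python
# Illustrative lookup of the target-method match ratings from the table above
# (5 = high match, 1 = low match). Names and the cutoff are hypothetical.

MATCH = {  # learning target -> {assessment method: rating}
    "knowledge": {"objective": 5, "essay": 4, "performance-based": 3,
                  "oral question": 4, "observation": 3, "self-report": 2},
    "reasoning": {"objective": 2, "essay": 5, "performance-based": 4,
                  "oral question": 4, "observation": 2, "self-report": 2},
    "skills":    {"objective": 1, "essay": 3, "performance-based": 5,
                  "oral question": 2, "observation": 5, "self-report": 3},
    "products":  {"objective": 1, "essay": 1, "performance-based": 5,
                  "oral question": 2, "observation": 4, "self-report": 4},
    "affect":    {"objective": 1, "essay": 2, "performance-based": 4,
                  "oral question": 4, "observation": 4, "self-report": 5},
}

def best_methods(target, minimum=4):
    """Return the methods rated at or above `minimum` for a target, best first."""
    ratings = MATCH[target.lower()]
    return sorted((m for m, r in ratings.items() if r >= minimum),
                  key=lambda m: -ratings[m])

print(best_methods("skills"))     # ['performance-based', 'observation']
print(best_methods("knowledge"))  # ['objective', 'essay', 'oral question']
```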
Modes of Assessment

Traditional
 Description: the paper-and-pen test used in assessing knowledge and thinking skills
 Examples: standardized and teacher-made tests
 Advantages: scoring is objective; administration is easy because students can take the test at the
same time
 Disadvantages: preparation of the instrument is time consuming; prone to guessing and cheating

Performance
 Description: a mode of assessment that requires actual demonstration of skills or creation of
products of learning
 Examples: practical tests; oral and aural tests; projects, etc.
 Advantages: preparation of the instrument is relatively easy; measures behavior that cannot be
deceived
 Disadvantages: scoring tends to be subjective without rubrics; administration is time consuming

Portfolio
 Description: a process of gathering multiple indicators of student progress to support course
goals in a dynamic, ongoing, and collaborative process
 Examples: working portfolios; show portfolios; documentary portfolios
 Advantages: measures students' growth and development; intelligence-fair
 Disadvantages: development is time consuming; rating tends to be subjective without rubrics
Principle 3: Balance
 A balanced assessment sets targets in all domains of learning (cognitive, affective, and
psychomotor) or domains of intelligence (verbal-linguistic, logical-mathematical, bodily-
kinesthetic, visual-spatial, musical-rhythmic, interpersonal (social), intrapersonal
(introspective), naturalist (physical world), and existential (spiritual)).
 A balanced assessment makes use of both traditional and alternative assessment.
Principle 4: Validity
Validity – is the degree to which the assessment instrument measures what it intends to measure. It
also refers to the usefulness of the instrument for a given purpose. It is the most important criterion
of a good assessment instrument.
Ways in Establishing Validity
1. Face Validity – is done by examining the physical appearance of the instrument
2. Content Validity – is done through a careful and critical examination of the objectives of
assessment so that it reflects the curricular objectives.
3. Criterion-related Validity – is established statistically such that a set of scores revealed by the
measuring instrument is correlated with the scores obtained in another external predictor or
measure. It has two types: concurrent and predictive (a computational sketch follows this list).
a. Concurrent validity – describes the present status of the individual by correlating the sets
of scores obtained from two measures given concurrently.
b. Predictive validity – describes the future performance of an individual by correlating the
sets of scores obtained from two measures given at a longer time interval.
4. Construct validity – is established statistically by comparing psychological traits or factors
that theoretically influence scores in a test.
a. Convergent Validity – is established if the instrument correlates with a measure of a similar
or related trait.
e.g., a Critical Thinking Test may be correlated with a Creative Thinking Test.
b. Divergent Validity – is established if the instrument can describe only the intended trait
and not other traits.
e.g., a Critical Thinking Test may not be correlated with a Reading Comprehension Test.
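Both criterion-related and construct validity are established by correlating two sets of scores. As a minimal, illustrative sketch (the scores and function name below are made up; only the Pearson r computation itself is standard), this is what that correlation looks like:

```python
# Illustrative sketch: criterion-related validity is checked by correlating
# scores from the instrument with scores from an external criterion measure.
# The scores below are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical scores: a new test vs. an established criterion measure
new_test  = [12, 15, 9, 18, 14, 11, 16, 13]
criterion = [10, 16, 8, 19, 13, 10, 17, 12]

print(f"validity coefficient r = {pearson_r(new_test, criterion):.2f}")
```

A high positive r supports concurrent or predictive validity (depending on when the criterion scores were gathered), while a near-zero r with a measure of a different trait supports divergent validity.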
Principle 5: Reliability
Reliability – refers to the consistency of scores obtained by the same person when retested using
the same instrument or its parallel form.

Methods of establishing reliability, with the type of consistency each measures, the procedure,
and the statistical measure used:
1. Test-Retest – a measure of stability. Give the same test twice to the same group, with any time
interval between tests from several minutes to several years. Statistical measure: Pearson r.
2. Equivalent Forms – a measure of equivalence. Give parallel forms of the test with a close time
interval between forms. Statistical measure: Pearson r.
3. Test-Retest with Equivalent Forms – a measure of stability and equivalence. Give parallel forms
of the test with an increased time interval between forms. Statistical measure: Pearson r.
4. Split-Half – a measure of internal consistency. Give the test once, then score equivalent halves
of the test (e.g., odd- and even-numbered items). Statistical measures: Pearson r and the
Spearman-Brown formula.
5. Kuder-Richardson – a measure of internal consistency. Give the test once, then compute the
proportion of students passing and not passing each item. Statistical measures: Kuder-Richardson
Formula 20 and Formula 21.
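To make the statistical measures above concrete, here is a minimal sketch with made-up right/wrong item responses (nothing in it comes from the handout) of two internal-consistency estimates: split-half reliability stepped up with the Spearman-Brown formula, and Kuder-Richardson Formula 20.

```python
# Illustrative sketch: split-half reliability with the Spearman-Brown step-up,
# and KR-20, computed from hypothetical right(1)/wrong(0) item responses.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_spearman_brown(responses):
    """Score odd- and even-numbered items separately, correlate the halves,
    then estimate full-test reliability: r_full = 2r / (1 + r)."""
    odd = [sum(row[0::2]) for row in responses]
    even = [sum(row[1::2]) for row in responses]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

def kr20(responses):
    """KR-20 for dichotomous items: (k/(k-1)) * (1 - sum(pq) / total-score variance)."""
    k, n = len(responses[0]), len(responses)
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    p = [sum(row[i] for row in responses) / n for i in range(k)]  # proportion passing each item
    sum_pq = sum(pi * (1 - pi) for pi in p)  # p = passing, q = 1 - p = not passing
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Hypothetical responses of six students to an eight-item quiz
responses = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 1, 0],
]
print(f"Split-half (Spearman-Brown) = {split_half_spearman_brown(responses):.2f}")
print(f"KR-20 = {kr20(responses):.2f}")
```

Both estimates require only a single administration of the test, which is why they fall under internal consistency in the list above.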

Principle 6: Fairness
A fair assessment provides all students with an equal opportunity to demonstrate achievement. The
keys to fairness are as follows:
 Students have knowledge of learning targets and assessment.
 Students are given equal opportunity to learn.
 Students possess the pre-requisite knowledge and skills.
 Students are free from teacher stereotypes.
 Students are free from biased assessment tasks and procedures.
Principle 7: Practicality and Efficiency
When assessing learning, the information obtained should be worth the resources and time required
to obtain it. The factors to consider are as follows:
 Teacher Familiarity with the Method – the teacher should know the strengths and weaknesses
of the method and how to use it.
 Time Required – Time includes construction and use of the instrument and the interpretation
of results. Other things being equal, it is desirable to use the shortest assessment time
possible that provides valid and reliable results.
 Complexity of the Administration – Directions and procedures for administration should be
clear and require little time and effort.
 Ease of Scoring – Use scoring procedures appropriate to your method and purpose. The
easier the procedures, the more reliable the assessment is.
 Ease of Interpretation – Interpretation is easier if there was a plan on how to use the results
prior to assessment.
 Cost – Other things being equal, the less expense used to gather information, the better.
Principle 8: Continuity
 Assessment takes place in all phases of instruction. It could be done before, during and after
instruction.
Activities Occurring Prior to Instruction
 Understanding students’ cultural backgrounds, interests, skills, and abilities as they apply
across a range of learning domains and/or subject areas;
 Understanding students’ motivations and their interests in specific class content;
 Clarifying and articulating the performance outcomes expected of pupils; and
 Planning instruction for individuals or groups of students.

Activities Occurring During Instruction


 Monitoring pupil progress toward instructional goals;
 Identifying gains and difficulties pupils are experiencing in learning and performing;
 Adjusting instruction;
 Giving contingent, specific, and credible praise and feedback;
 Motivating students to learn; and
 Judging the extent of pupil attainment of instructional outcomes.
Activities Occurring After Appropriate Instructional Segment
(e.g. lesson, class, semester, grade)
 Describing the extent to which each student has attained both short and long-term
instructional goals;
 Communicating strengths and weaknesses based on assessment results to students, and
parents or guardians;
 Recording and reporting assessment results for school-level analysis, evaluation, and
decision-making;
 Analyzing assessment information gathered before and during instruction to understand each
student’s progress to date and to inform future instructional planning.
 Evaluating the effectiveness of instruction; and
 Evaluating the effectiveness of the curriculum and materials in use.
Principle 9: Authenticity
Features of Authentic Assessment
 Meaningful performance task
 Clear standards and public criteria
 Quality products and performance
 Positive interaction between the assessee and the assessor
 Emphasis on meta-cognition and self-evaluation
 Learning that transfers
Criteria of Authentic Achievement (Burke, 1999)
1. Disciplined Inquiry – requires in-depth understanding of the problem and a move beyond
knowledge produced by others to a formulation of new ideas.
2. Integration of Knowledge – considers things as a whole rather than as fragments of knowledge.
3. Value Beyond Evaluation – what students do has some value beyond the classroom.
Principle 10: Communication
 Assessment targets and standards should be communicated.
 Assessment results should be communicated to their important users.
 Assessment results should be communicated to students through direct interaction or regular
ongoing feedback on their progress.
Principle 11: Positive Consequences
 Assessment should have a positive consequence for students; that is, it should motivate them to
learn.
 Assessment should have a positive consequence for teachers; that is, it should help them
improve the effectiveness of their instruction.
Principle 12: Ethics
 Teachers should free the students from harmful consequences of misuse or overuse of
various assessment procedures such as embarrassing students and violating student’s right to
confidentiality.
 Teachers should be guided by laws and policies that affect their classroom assessment.
 Administrators and teachers should understand that it is inappropriate to use standardized test
results to measure teaching effectiveness.

PERFORMANCE-BASED ASSESSMENT


Performance-Based Assessment is a process of gathering information about students’ learning
through actual demonstration of essential and observable skills and creation of products that are
grounded in real world contexts and constraints. It is an assessment that is open to many possible
answers and judged using multiple criteria or standards of excellence that are pre-specified and
public.
Reasons for Using Performance-Based Assessment
 Dissatisfaction with the limited information obtained from selected-response tests.
 Influence of cognitive psychology, which demands the learning not only of declarative but also
of procedural knowledge.
 Negative impact of conventional tests, e.g., high-stakes assessment and teaching to the test.
 It is appropriate in experiential, discovery-based, integrated, and problem-based learning
approaches.
Types of Performance-based Tasks
1. Demonstration-type – a task that requires no product
Examples: constructing a building, cooking demonstrations, entertaining tourists, teamwork,
presentations
2. Creation-type – a task that requires tangible products
Examples: project plans, research papers, project flyers
Methods of Performance-based Assessment
1. Written open-ended – a written prompt is provided
Formats: essays, open-ended tests
2. Behavior-based – utilizes direct observation of behaviors in actual situations or simulated contexts
Formats: structured (a specific focus of observation is set in advance) and unstructured (anything
observed is recorded or analyzed)
3. Interview-based – examinees respond in a one-to-one conference setting with the examiner to
demonstrate mastery of the skills
Formats: structured (interview questions are set in advance) and unstructured (interview
questions depend on the flow of the conversation)
4. Product-based – examinees create a work sample or a product utilizing the skills/abilities
Formats: restricted (products of the same objective are the same for all students) and
extended (students vary in their products for the same objective)
5. Portfolio-based – collections of works that are systematically gathered to serve many
purposes.
How to Assess a Performance
1. Identify the competency that has to be demonstrated by the students with or without a
product.
2. Describe the task to be performed by the students either individually or as a group, the
resources needed, time allotment and other requirements to be able to assess the focused
competency.
7 Criteria in Selecting a Good Performance Assessment Task
 Generalizability – the likelihood that the students’ performance on the task will generalize to
comparable tasks.
 Authenticity – the task is similar to what the students might encounter in the real world as
opposed to encountering only in the school.
 Multiple Foci – the task measures multiple instructional outcomes.
 Teachability – the task allows one to master the skill that one should be proficient in.
 Feasibility – the task is realistically implementable in relation to its cost, space, time, and
equipment requirements.
 Scorability – the task can be reliably and accurately evaluated.
 Fairness – the task is fair to all the students regardless of their social status or gender.
3. Develop a scoring rubric reflecting the criteria, levels of performance and the scores.

PORTFOLIO ASSESSMENT
Portfolio Assessment is also an alternative to pen-and-paper objective test. It is a purposeful,
ongoing, dynamic, and collaborative process of gathering multiple indicators of the learner’s growth
and development. Portfolio assessment is also performance-based but more authentic than any
performance-based task.

Reasons for Using Portfolio Assessment


Burke (1999) recognizes the portfolio as another type of assessment that is considered
authentic for the following reasons:
 It tests what is really happening in the classroom.
 It offers multiple indicators of students’ progress.
 It gives the students responsibility for their own learning.
 It offers opportunities for students to document reflections of their learning.
 It demonstrates what the students know in ways that encompass their personal learning styles
and multiple intelligences.
 It offers teachers a new role in the assessment process.
 It allows teachers to reflect on the effectiveness of their instruction.
 It provides teachers the freedom to gain insights into the students’ development or
achievement over a period of time.

Principles Underlying Portfolio Assessment


There are three underlying principles of portfolio assessment: content, learning, and equity
principles.
1. Content principle suggests that portfolios should reflect the subject matter that is important
for the students to learn.
2. Learning principle suggests that portfolios should enable the students to become active and
thoughtful learners.
3. Equity principle explains that portfolios should allow students to demonstrate their learning
styles and multiple intelligences.
Types of Portfolios
Portfolios could come in three types: working, show, or documentary.
1. The working portfolio is a collection of a student’s day-to-day works which reflect his/her
learning.
2. The show portfolio is a collection of a student’s best works.
3. The documentary portfolio is a combination of a working and a show portfolio.
Steps in Portfolio Development (presented in the handout as a cycle)
1. Set Goals
2. Collect (Evidences)
3. Select
4. Reflect
5. Evaluate (Using Rubrics)
6. Organize
7. Confer / Exhibit

DEVELOPING RUBRICS
Rubric is a measuring instrument used in rating performance-based tasks. It is the “key to
corrections” for assessment tasks designed to measure the attainment of learning competencies that
require demonstration of skills or creation of products of learning. It offers a set of guidelines or
descriptions in scoring different levels of performance or qualities of products of learning. It can be
used in scoring both the process and the products of learning.
Similarity of Rubric with Other Scoring Instruments
Rubric is a modified checklist and rating scale.
1. Checklist
 Presents the observed characteristics of a desirable performance or product
 The rater checks the trait/s that has/have been observed in one’s performance or
product.
2. Rating Scale
 Measures the extent or degree to which a trait has been satisfied by one’s work or
performance
 Offers an overall description of the different levels of quality of a work or a
performance
 Uses 3 or more levels to describe the work or performance, although the most
common rating scales have 4 to 5 performance levels.
Below is a comparison of the checklist, the rating scale, and the rubric (shown in the source as a
Venn diagram): a checklist shows the observed traits of a work or performance, a rating scale shows
the degree of quality of a work or performance, and a rubric combines the features of both.

Types of Rubrics

Holistic Rubric
Description: It describes the overall quality of a performance or product. In this rubric, there is only
one rating given to the entire work or performance.
Advantages:
 It allows fast assessment.
 It provides one score to describe the overall performance or quality of work.
 It can indicate the general strengths and weaknesses of the work or performance.
Disadvantages:
 It does not clearly describe the degree to which each criterion is satisfied by the performance
or product.
 It does not permit differential weighting of the qualities of a product or a performance.

Analytic Rubric
Description: It describes the quality of a performance or product in terms of the identified
dimensions and/or criteria, which are rated independently to give a better picture of the quality of
the work or performance.
Advantages:
 It clearly describes whether each criterion has been satisfied or not.
 It permits differential weighting of the qualities of a product or a performance.
 It helps raters pinpoint specific areas of strength and weakness.
Disadvantages:
 It is more time consuming to use.
 It is more difficult to construct.
Important Elements of a Rubric
Whether the format is holistic or analytic, the following information should be made
available in a rubric.
 Competency to be tested – this should be a behavior that requires either a demonstration or
creation of products of learning.
 Performance task – the task should be authentic, feasible, and have multiple foci.
 Evaluative Criteria and their Indicators – these should be made clear using observable traits.
 Performance levels – these could vary in number from 3 upwards
 Qualitative and Quantitative descriptions of each performance level
- These descriptions should be observable and measurable.
Guidelines when Developing Rubrics
 Identify the important and observable features or criteria of an excellent performance or
quality product.
 Clarify the meaning of each trait or criterion and the performance levels.
 Describe the gradations of quality product or excellent performance levels.
 Aim for an even number of levels to avoid the central tendency source of error.
 Keep the number of criteria reasonable enough to be observed or judged.
 Arrange the criteria in the order in which they are likely to be observed.
 Determine the weight/points of each criterion and of the whole work or performance in the
final grade (a computational sketch follows this list).
 Put the descriptions of a criterion or a performance level on the same page.
 Highlight the distinguishing traits of each performance level.
 Check if the rubric encompasses all possible traits of a work.
 Check again if the objectives of assessment were captured in the rubric.
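To illustrate the guideline on assigning weights/points to each criterion, the sketch below shows how an analytic rubric's per-criterion ratings could be combined into a single weighted mark. The criteria, weights, and number of levels are hypothetical examples, not taken from the handout.

```python
# Illustrative sketch: turning analytic-rubric ratings into a weighted mark.
# Criteria, weights, and the 4-level scale are hypothetical.

RUBRIC = {  # criterion -> weight (importance) and number of performance levels
    "content accuracy":   {"weight": 0.40, "levels": 4},
    "organization":       {"weight": 0.30, "levels": 4},
    "delivery":           {"weight": 0.20, "levels": 4},
    "use of visual aids": {"weight": 0.10, "levels": 4},
}

def weighted_score(ratings, rubric=RUBRIC, top_mark=100):
    """Convert per-criterion ratings (1..levels) into a single percentage mark."""
    total = 0.0
    for criterion, spec in rubric.items():
        rating = ratings[criterion]
        if not 1 <= rating <= spec["levels"]:
            raise ValueError(f"rating for {criterion!r} must be 1..{spec['levels']}")
        total += spec["weight"] * (rating / spec["levels"])
    return round(total * top_mark, 1)

# One student's ratings on the four criteria
print(weighted_score({
    "content accuracy": 4,
    "organization": 3,
    "delivery": 3,
    "use of visual aids": 2,
}))  # -> 82.5
```

Because each criterion is rated separately before weighting, this kind of breakdown also lets the teacher report a student's strengths and weaknesses per criterion, which a holistic rubric cannot do.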

PART II: ANALYZING TEST ITEMS

1. Who among the teachers described below is doing assessment?


a. Mrs. Bautista who is administering a test to her students.
b. Mr. Ferrer who is counting the scores obtained by the students in his test.
c. Ms. Leyva who is computing the final grade of the students after completing all their
requirements.
d. Prof. Cuevas who is planning for remedial instruction after knowing that students
performed poorly in her test.

2. Mr. Fernandez is judging the accuracy of these statements. Which statements will he
consider as correct?
I. Test is a tool to measure a trait.
II. Measurement is the process of qualifying a given trait.
III. Assessment is the gathering of quantitative and qualitative data.
IV. Evaluation is the analysis of quantitative and qualitative data for decision making.
a. I and II only c. I, II, and III
b. III and IV only d. I, III, and IV

3. If I have to use the most authentic method of assessment, which of these procedures should I
consider?
a. Traditional Test c. Written Test
b. Performance-based Assessment d. Objective Assessment

4. After doing the exercise on verbs, Ms. Borillo gave a short quiz to find out how well the
students have understood the lesson. What type of assessment was done?
a. Summative Assessment c. Diagnostic Assessment
b. Formative Assessment d. Placement Assessment

5. Who among the teachers below performed a diagnostic assessment?


a. Ms. Santos who asked questions when the discussion was going on to know who among
her students understood what she was trying to emphasize.
b. Mr. Colubong who gave a short quiz after discussing thoroughly the lesson to determine
the outcome of instruction.
c. Ms. Ventura who gave a 10-item test to find out the specific lessons which the students
failed to understand.
d. Mrs. Lopez who administered a readiness test to the incoming grade one pupils.

6. You are assessing FOR learning. Which of these will you likely do?
a. Giving grades to students
b. Reporting to parents the performance of their child.
c. Recommending new policies in grading students.
d. Assessing the strengths and weaknesses of students.

7. Ms. Saplan is planning to do an assessment OF learning. Which of these should she include
in her plan considering her purpose for assessment?
a. How to give immediate feedback to student’s strengths and weaknesses.
b. How to determine the area of interest of students
c. How to certify student’s achievement
d. How to design one’s instruction

8. You targeted that after instruction, your students should be able to show their ability to solve
problems with speed and accuracy. You then designed a tool to measure this ability. What
principle of assessment did you consider in this situation?
a. Assessment should be based on clear and appropriate learning targets or objectives.
b. Assessment should have a positive consequence on student’s learning
c. Assessment should be reliable.
d. Assessment should be fair.
9. Ms. Ortega tasked her students to show how to play basketball. What learning target is she
assessing?
a. Knowledge c. Skills
b. Reasoning d. Products
10. Mr. Ravelas made an essay test for the objective “Identify the planets in the solar system”.
Was the assessment method used the most appropriate for the given objective? Why?
a. Yes, because essay test is easier to construct than objective test.
b. Yes, because essay test can measure any type of objective.
c. No, he should have conducted oral questioning.
d. No, he should have prepared an objective test.

11. Mr. Cidro wants to test students’ knowledge of the different places in the Philippines, their
capitals, and their products, and so he gave his students an essay test. If you were the teacher, would
you do the same?
a. No, the giving of an objective test is more appropriate than the use of essay.
b. No, such method of assessment is inappropriate because essay is difficult.
c. Yes, essay test could measure more than what other tests could measure.
d. Yes, essay test is the best in measuring any type of knowledge.

12. What type of validity does the Pre-board Examination possess if its results can explain how
the students will likely perform in their Licensure Examination?
a. Concurrent c. Construct
b. Predictive d. Content

13. Ms. Alviz wants to determine if the students’ scores in their Final Test are reliable. However,
she has only one set of the test and her students are already on vacation. What test of reliability
can she employ?
a. Test-Retest c. Equivalent Forms
b. Kuder Richardson Method d. Test-Retest with Equivalent Forms

Refer to this case in answering items 14-15


Two teachers of the same grade level have set the following objectives for the day’s lesson:
At the end of the period, the students should be able to:
a. Construct a bar graph;
b. Interpret bar graphs
To assess the attainment of the objectives, Teacher A required the students to construct a bar
graph for the given set of data then she asked them to interpret this using a set of questions as guide.
Teacher B presented a bar graph then asked them to interpret this using also a set of guide questions.
14. Whose practice is acceptable based on the principles of assessment?
a. Teacher A c. Both Teacher A and B
b. Teacher B d. Neither Teacher A nor Teacher B

15. Which is true about the given case?


a. Objective A matched with performance-based assessment while B can be assessed using
the traditional pen-and-paper objective test.
b. Objective A matched with traditional assessment while B can be assessed using a
performance-based method.
c. Both objective A and B matched with performance-based assessment.
d. Both objective A and B matched with traditional assessment.

16. In the context of the Theory of Multiple Intelligence, which is a weakness of the paper-pencil
test?
a. It puts the non-linguistically intelligent at a disadvantage.
b. It is not easy to administer.
c. It utilizes so much time.
d. It lacks reliability.

17. Mr. Umayam is doing a performance-based assessment for the day’s lesson. Which of the
following will most likely happen?
a. Students are evaluated in one sitting.
b. Students do an actual demonstration of their skill.
c. Students are evaluated in the most objective manner.
d. Students are evaluated based on varied evidences of learning.

18. Ms. Despi rated her students in terms of appropriate and effective use of some laboratory
equipment and measurement tools and the students’ ability to follow the specified
procedures. What mode of assessment should Ms. Despi use?
a. Portfolio assessment c. Traditional assessment
b. Journal assessment d. Performance-based assessment
19. Mrs. Hilario presented the lesson on baking through a group activity so that the students will
not just learn how to bake but also develop their interpersonal skills. How should this lesson
be assessed?
I. She could give the students an essay test explaining how they baked the cake.
II. The students should be graded on the quality of their baked cake using a rubric.
III. The students in a group should rate the members based on their ability to cooperate in
their group activity.
IV. She should observe how the pupils perform their task.
a. I, II, and III only c. I, II, IV only
b. II, III, and IV only d. I, II, III, and IV

20. If a teacher has set objectives in all domains or learning targets, which could be assessed
using a single performance task, what criterion in selecting a task should she consider?
a. Generalizability c. Multiple Foci
b. Fairness d. Teachability

21. Which term refers to the collection of students’ products and accomplishments in a given
period for evaluation purposes?
a. Diary c. Anecdotal record
b. Portfolio d. Observation report

22. Mrs. Catalan allowed the students to develop their own portfolio in their own style as long as
they show all non-negotiable evidences of learning. What principle in portfolio assessment
explains this practice?
a. Content Principle c. Equity Principle
b. Learning Principle d. Product Principle

23. How should the following steps in portfolio assessment be arranged logically?
I. Set targets
II. Select evidences
III. Collect evidences
IV. Rate collection
V. Reflect on evidences
a. I, II, III, IV, V c. I, II, III, V, IV
b. I, III, II, V, IV d. I, III, V, II, IV

24. Which could be seen in a rubric?


I. Objective in a high level of cognitive behavior
II. Multiple criteria in assessing learning
III. Quantitative descriptions of the quality of work.
IV. Qualitative descriptions of the quality of work.
a. I, and II only c. I, II, and III
b. II, III, and IV only d. I, II, III and IV

25. The pupils are to be judged individually on their mastery of the singing of the national
anthem and so their teacher let them sing individually. What should the teacher use in rating
the performance of the pupils considering the fact that the teacher has only one period to
spend in evaluating her 20 pupils?
a. Analytic c. Either holistic or analytic
b. Holistic d. Both holistic and analytic

26. Mrs. Pua is judging the worth of the project of the students in her Science class based on a set
of criteria. What process describes what she is doing?
a. Testing c. Evaluating
b. Measuring d. Assessing

27. Mrs. Acebuche is differentiating measurement from evaluation. Which statement explains the
difference?
a. Measurement is assigning a numerical value to a given trait while evaluation is giving
meaning to the numerical value of the trait.
b. Measurement is the process of gathering data while assessment is the process of
quantifying the data gathered.
c. Measurement is the process of quantifying data while evaluation is the process of
organizing data.
d. Measurement is a pre-requisite of assessment while evaluation is the pre-requisite of
testing.

28. Ms. Ricafort uses alternative methods of assessment. Which of the following will she NOT
likely use?
a. Multiple Choice Test c. Oral Presentation
b. Reflective Journal Writing d. Developing Portfolios

29. Ms. Camba aims to measure a product of learning. Which of these objectives will she most
likely set for her instruction?
a. Show positive attitude towards learning common nouns
b. Identify common nouns in a reading selection
c. Construct a paragraph using common nouns
d. Use a common noun in a sentence

30. The students of Mrs. Valino are very noisy. To keep them busy, they were given any test
available in the classroom and then the results were graded as a way to punish them. Which
statement best explains if the practice is acceptable or not?
a. The practice is acceptable because the students behaved well when they were given test.
b. The practice is not acceptable because it violates the principle of reliability.
c. The practice is not acceptable because it violates the principle of validity.
d. The practice is acceptable since the test results are graded.

31. Ms. Delos Angeles advocates assessment FOR learning. Which will she NOT likely do?
a. Formative assessment c. Placement assessment
b. Diagnostic assessment d. Summative assessment

32. At the beginning of the school year, the 6-year old pupils were tested to find out who among
them can already read. The result was used to determine their sections. What kind of test was
given to them?
a. Diagnostic c. Placement
b. Formative d. Summative

33. The grade six pupils were given a diagnostic test in addition and subtraction of whole
numbers to find out if they can proceed to the next unit. However, the results of the test were
very low. What should the teacher do?
a. Proceed to the next lesson to be able to finish all the topics in the course.
b. Construct another test parallel to the given test to determine the consistency of the scores.
c. Count the frequency of errors to find out the lessons that the majority of students need to
relearn.
d. Record the scores then inform the parents about the very poor performance of their child
in mathematics.

34. Mrs. Nogueras is doing an assessment OF learning. At what stage of instruction should she
do it?
a. Before instruction c. Prior to instruction
b. After instruction d. During the instructional process

35. Mr. Cartilla developed an Achievement test in Math for his grade three pupils. Before he
finalized the test, he examined carefully if the test items were constructed based on the
competencies that have to be tested. What test of validity was he trying to establish?
a. Content validity c. Predictive validity
b. Concurrent validity d. Construct validity

36. Mrs. Robles wants to establish the reliability of her achievement test in English. Which of the
following activities will help achieve her purpose?
a. Administer two parallel tests to different groups of students.
b. Administer two equivalent tests to the same group of students.
c. Administer a single test but to two different groups of students.
d. Administer two different tests but to the same group of students.
Refer to the situation below in answering items 37 and 38.
A teacher set the following objectives for the day’s lesson:
At the end of the period, the students should be able to:
a. Identify the parts of a friendly letter,
b. Construct a friendly letter using MS Word, and
c. Show interest towards the day’s lesson
To assess the attainment of the objectives, Ms. Cidro required the students to construct a
friendly letter and have it encoded at their Computer Laboratory using MS Word. The letter
should inform one’s friend about what one has learned in the day’s lesson and how one felt about it.
37. Which is NOT true about the given case?
a. Ms Cidro practices a balanced assessment.
b. Ms. Cidro’s assessment method is performance-based.
c. Ms. Cidro needs a rubric in scoring the work of the students.
d. Ms. Cidro’s assessment targets are all in the cognitive domain.

38. If Mr. Paraiso has to make a scoring rubric for the students’ output, what format is
better to construct considering that the teacher has limited time to evaluate their work?
a. Analytic rubric c. Either A or B
b. Holistic rubric d. Neither A nor B

39. The school principal has 3 teacher applicants all of whom graduated from the same
institution and are licensed teachers. She only needs to hire one. What should she do to
choose the best teacher from the three?
I. Give them a placement test.
II. Interview them on why they want to apply in the school.
III. Let them demonstrate how to teach a particular lesson.
IV. Study their portfolios to examine the quality of their outputs when they were in college.
a. I and II c. I, III, and IV
b. II and III d. II, III, and IV

40. What should be done first when planning for a performance-based assessment?
a. Determine the “Table of Specifications” of the tasks.
b. Set the competency to be assessed.
c. Set the criteria in scoring the task.
d. Prepare a scoring rubric.
41. To maximize the amount of time spent for performance-based assessment, which one should
be done?
a. Plan a task that can be used for instruction and assessment at the same time.
b. Assess one objective for one performance task.
c. Set objectives only for cognitive domains.
d. Limit the task to one meeting only.

42. Who among the teachers below gave the most authentic assessment task for the objective
“Solve word problems involving the four basic operations”?
a. Mrs. Juliano who presented a word problem involving the four fundamental operations
and then asked the pupils to solve it.
b. Mrs. Mandia who asked her pupils to construct a word problem for a given number
sentence that involves four fundamental operations and then asked them to solve the
word problem they constructed.
c. Mrs. Malang who asked her pupils to construct any word problem that involves the four
fundamental operations and then asked them to show how to solve it.
d. Mrs. Pontipedra who asked her pupils to construct any word problem that involves the
four fundamental operations, then grouped them by twos so that each pair exchanged
problems and helped solve each other’s problem.
43. Which is WRONG to assume about traditional assessment?
a. It can assess individuals objectively.
b. It can assess individuals at the same time.
c. It is easier to administer than performance test.
d. It can assess fairly all the domains of intelligence of an individual.
44. Which statement about performance-based assessment is FALSE?
a. It emphasizes merely process.
b. It also stresses doing, not only knowing.
c. It accentuates on process as well as product
d. Essay tests are an example of performance-based assessments.

45. Under which assumption is portfolio assessment based?


a. Portfolio assessment is a dynamic assessment.
b. Assessment should stress the reproduction of knowledge.
c. An individual learner is adequately characterized by a test score.
d. An individual learner is inadequately characterized by a test score.

46. Which is a good portfolio evidence of a student’s acquired knowledge and writing skills?
a. Project c. Reflective journal
b. Test Results d. Critiqued Outputs

47. When planning for portfolio assessment, which should you do first?
a. Set the targets for portfolio assessment.
b. Exhibit one’s work and be proud of one’s collection.
c. Select evidences that could be captured in one’s portfolio.
d. Reflect on one’s collection and identify strengths and weaknesses.

48. Which kind of rubric is BEST to use in rating student’s projects done for several days?
a. Analytic c. Either holistic or analytic
b. Holistic d. Both holistic and analytic

49. Which is NOT TRUE of an analytic rubric?


a. It is time consuming.
b. It is easier to construct than the holistic rubric.
c. It gives one’s level of performance per criterion.
d. It allows one to pinpoint the strengths and weaknesses of one’s work.

50. Mrs. Bacani prepared a rubric with 5 levels of performance described as 5 = excellent, 4 =
very satisfactory, 3 = satisfactory, 2 = needs improvement, and 1 = poor. After using this rubric
with these descriptions, she found out that most of her students had a rating of 3. Even those
who were evidently poor in their performance had a rating of satisfactory. Could there be a
possible error in the use of the rubric?
a. Yes, the teacher could have committed the generosity error.
b. Yes, the teacher could have committed the central tendency source of error.
c. No, it is just common to see more of the students having a grade of 3 in a 5-point scale.
d. No, such result is acceptable as long as it has a positive consequence to the students.
