Ped 7

Uploaded by Lara Birad

Lesson Proper for Week 1

Authentic assessment is the measurement of "intellectual accomplishments that are worthwhile, significant, and
meaningful," as contrasted with multiple-choice standardized tests. Authentic assessment can be devised by the
teacher, or in collaboration with the student by engaging student voice. When applying authentic assessment to
student learning and achievement, a teacher applies criteria related to "construction of knowledge, disciplined
inquiry, and the value of achievement beyond the school."
Authentic assessment tends to focus on contextualized tasks, enabling students to demonstrate their
competency in a more 'authentic' setting. Examples of authentic assessment categories include:
Ø Performance Assessments - test students’ ability to use skills in a variety of authentic contexts. They frequently
require students to work collaboratively and to apply skills and concepts to solve complex problems.
Ø Short Investigations - Many teachers use short investigations to assess how well students have mastered
basic concepts and skills. Most short investigations begin with a stimulus, like a math problem, political cartoon,
map, or excerpt from a primary source. The teacher may ask students to interpret, describe, calculate, explain, or
predict. These investigations may use enhanced multiple-choice questions.
Ø Open-Response Questions - present students with a stimulus and ask them to respond. Responses include
a brief written or oral answer, a mathematical solution, a drawing, a diagram, a chart, or a graph.
Short- and long-term tasks include such activities as writing, revising, and presenting a report to the class. They
may also use concept mapping, a technique that assesses how well students understand relationships among
concepts.
Ø Portfolios - A portfolio documents learning over time. This long-term perspective accounts
for student improvement and teaches students the value of self-assessment, editing, and revision. A student
portfolio can include journal entries and reflective writing, peer reviews, artwork, diagrams, charts and graphs,
group reports, student notes and outlines, and rough drafts and polished writing.
Ø Self-Assessment - requires students to evaluate their own participation, process, and products. Evaluative
questions are the basic tools of self-assessment. Students give written or oral responses to questions such as:
What was the most difficult part of this project for you? What do you think you should do next? What did you
learn from this project?
Principles and Practices
1. A school’s mission is to develop useful citizens.
2. To be a useful citizen, one has to be capable of performing useful tasks in the real world.
3. The school’s duty is to help students develop proficiency in performing the tasks that they will be required to
perform after graduation in the workplace.
4. The school must then require students to perform tasks that duplicate or imitate real-world situations.
Characteristics of Authentic Assessment
Ø It starts with clear, definite criteria of performance made known to students.
Ø It is criterion-referenced rather than norm-referenced, and so it identifies strengths and weaknesses but does
not compare students or rank their levels of performance.
Ø It requires students to construct their own answers to questions rather than select from given options or
multiple choices. They are required to use a range of higher-order thinking skills (HOTS).
Ø It often emphasizes performance, and therefore students are required to demonstrate their knowledge and
skills.
Ø It encourages teacher and students to work collaboratively in determining the rate of progress toward the
desired student learning outcomes.
Ø It does not encourage rote learning and passive test-taking; instead, students are required to demonstrate
analytical skills, the ability to work in groups, and skills in oral and written communication.
Ø It changes the role of students from passive test-takers into active, involved participants in
assessment activities that emphasize their skills and capabilities.
Traditional Assessment is commonly associated with predetermined-choice measures of assessment such as
multiple-choice tasks, fill-in-the-blanks, true-false, matching type, and others. Students typically recall or select
the answers. Essentially, traditional assessment springs from an educational philosophy which involves the
following principles and practices.
1. A school’s mission is to develop useful citizens.
2. To be a useful citizen, one must possess a certain body of knowledge and skills.
3. The school is entrusted to teach this body of knowledge and skills.
4. To determine if students have acquired this knowledge and these skills, the school must test students on
them.
Comparison of Authentic and Traditional Assessment

Attribute           Traditional Assessment    Authentic Assessment
1. Action/Option    Selecting a response      Performing a task
2. Setting          Contrived/imagined        Simulated/real-life
3. Method           Recall/recognition        Construction/application
4. Focus            Teacher-structured        Student-structured
5. Outcome          Indirect evidence         Direct evidence

4.1 Objective 1 Exercise 1 What are student learning outcomes?


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
Exercise 2 Why do we need to take a closer look at student learning outcomes?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

4.2 Objective 2 Exercise 1. What is the meaning of Authentic Assessment?


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
Exercise 2. Enumerate the characteristics of Authentic Assessment.
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
4.3 Objective 3 Exercise 1 Identify whether each assessment activity is authentic or traditional.

Authentic Assessment Traditional Assessment

Assessment is important to any sort of teaching-learning process. The usual and common
assessment we do is known as traditional assessment. Today we should use authentic assessment to keep
pace with the growing necessities of the world. What do we mean by authentic assessment? It is "a form of
assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of
essential knowledge and skills," as defined by Jon Mueller. It can be characterised by open-ended tasks that
require students to construct extended responses, to perform an act, or to produce a product in a real-world
context, or a context that mimics the real world. Project works, portfolios, writing an article for a newsletter or
newspaper, performing a dance or drama, designing a digital artifact, creating a poster for a science fair, debates,
and oral presentations are examples of authentic assessment. It "involves students in the actual
challenges, standards, and habits needed for success in the academic disciplines or
in the workplace," said Wiggins (1989). Authentic assessment tasks motivate students because they get the
opportunity to perceive the relevance of the tasks to the real world. They find it meaningful learning.
In our academic life, we mostly do traditional assessment. It refers to the forced-choice measures of multiple-
choice tests, fill-in-the-blanks, true-false, matching, and the like that have been and remain so common in
education. Students typically select an answer or recall information to complete the assessment. These tests
may be standardized or teacher-created. They may be administered locally, board-wide, or globally.
As a nation’s mission is to develop productive citizens, educational institutions must test students to see if
they have acquired the expected knowledge and skills. Teachers first determine the tasks that students will
perform to demonstrate their mastery, and then a curriculum is developed that will enable students to perform
those tasks well, which includes the acquisition of essential knowledge and skills.
A comparison of authentic assessment and conventional assessment reveals that different purposes are served,
as evidenced by the nature of the assessment and the item response format. We can teach students how to do
mathematics, history, and science, not just know them. Then, to assess what our students have learned,
we can ask students to perform tasks that "replicate the challenges" faced by those using mathematics, doing
history, or conducting scientific investigation. Traditional assessment asks learners to select a response,
whereas authentic assessment engages learners in performing a task based on the item they are given.
Traditional assessment is contrived, while authentic assessment takes place in real life. Traditional assessment
relies on recall or recognition, is teacher-structured, and yields indirect evidence; authentic assessment relies on
construction or application, is student-structured, and yields direct evidence.
Authentic assessments have several advantages over conventional or traditional tests. They are likely to be more
valid than conventional tests, particularly for learning outcomes that require higher-order thinking skills. Because
they involve real-world tasks, they are also likely to be more interesting for students, and thus more motivating.
And finally, they can provide more specific and usable information about what students have succeeded in
learning as well as what they have not learned.
Authentic assessment has played a pivotal role in driving curricular and instructional changes in the context of
global educational reforms. Since the 1990s, teacher education and professional development programmes in
many education systems around the globe have focused on the development of assessment literacy for teachers
and teacher candidates which encompasses teacher competence in the design, adaptation, and use of authentic
assessment tasks or performance assessment tasks to engage students in in-depth learning of subject matter
and to promote their mastery of the 21st-century competencies.
Authentic assessment serves as an alternative to conventional assessment. Conventional assessment is limited
to standardized paper-and-pencil/pen tests, which emphasize objective measurement. Standardized tests
employ closed-ended item formats such as true‒false, matching, or multiple choice. The
use of these item formats is believed to increase efficiency of test administration, objectivity of scoring, reliability
of test scores, and cost-effectiveness as machine scoring and large-scale administration of test items are
possible. However, it is widely recognised that traditional standardised testing restricts the assessment of higher-
order thinking skills and other essential 21st-century competencies due to the nature of the item format. From an
objective measurement or psychometric perspective, rigorous and higher-level learning outcomes such as critical
thinking, complex problem solving, collaboration, and extended communication are too subjective to be tested.
In traditional assessment, students’ attention will understandably be focused on and limited to what is on the test.
In contrast, authentic assessments allow more student choice and construction in determining what is presented
as evidence of proficiency. Even when students cannot choose their own topics or formats, there are usually
multiple acceptable routes towards constructing a product or performance. Obviously, assessments more
carefully controlled by the teachers offer advantages and disadvantages. Similarly, more student-structured tasks
have strengths and weaknesses that must be considered when choosing and designing an assessment.
The amount of new information is increasing at an exponential rate due to the advancement of digital technology.
Hence, rote learning and regurgitation of facts or procedures are no longer suitable in contemporary educational
contexts. Rather, students are expected to be able to find, organise, interpret, analyse, evaluate, synthesise, and
apply new information or knowledge to solve non-routine problems.
Authentic tasks replicate real-world challenges and standards of performance that experts or professionals
typically face in the field. They are an effective measure of intellectual achievement or ability because they
require students to demonstrate their deep understanding, higher-order thinking, and complex problem solving
through the performance of exemplary tasks. Hence authentic assessment can serve as a powerful tool for
assessing students’ 21st-century competencies in the context of global educational reforms.

Lesson Proper for Week 2


Authentic assessments have several advantages over conventional or traditional tests.
Ø They are likely to be more valid than conventional tests, particularly for learning outcomes that require higher-
order thinking skills.
Ø Because they involve real-world tasks, they are also likely to be more interesting for students, and thus more
motivating.
Ø They can provide more specific and usable information about what students have succeeded in learning as
well as what they have not learned.

Ø Assessment literacy encompasses teacher competence in the design, adaptation, and use of authentic
assessment tasks or performance assessment tasks to engage students in in-depth learning of subject matter
and to promote their mastery of 21st-century competencies.
Ø Authentic assessment serves as an alternative to conventional assessment. Conventional assessment is
limited to standardized paper-and-pencil/pen tests, which emphasize objective measurement. Standardized tests
employ closed-ended item formats such as true‒false, matching, or multiple choice.
Ø Authentic assessments allow more student choice and construction in determining what is presented as
evidence of proficiency. Even when students cannot choose their own topics or formats, there are usually
multiple acceptable routes towards constructing a product or performance.
Ø Authentic tasks replicate real-world challenges and standards of performance that experts or professionals
typically face in the field. It is an effective measure of intellectual achievement or ability because it requires
students to demonstrate their deep understanding, higher-order thinking, and complex problem solving through
the performance of exemplary tasks. Hence authentic assessment can serve as a powerful tool for assessing
students’ 21st-century competencies in the context of global educational reforms.
What is an assessment?
Assessment is the systematic process of documenting and using empirical data on students' knowledge, skills,
attitudes, and beliefs. Through assessment, teachers try to improve student learning. This is a short definition
of assessment.
What is testing?
What is testing in education? Almost everybody has experienced testing during his or her life: grammar tests,
driving license tests, etc. A test is used to examine someone’s knowledge of something to determine what that
person knows or has learned. It measures the level of skill or knowledge that has been reached. A test is "an
evaluative device or procedure in which a sample of an examinee’s behavior in a specified domain is obtained
and subsequently evaluated and scored using a standardized process" (The Standards for Educational and
Psychological Testing, 1999).
So, what’s the difference?
"Test" and "assessment" are used interchangeably, but they do mean something different. A test is a "product"
that measures a particular behavior or set of objectives, whereas assessment is a procedure rather than a
product. Assessment is used during and after instruction has taken place. After you have received the results of
your assessment, you can interpret them and, if needed, alter the instruction.
Tests are done after instruction has taken place; they are a way to complete the instruction and get the results.
Unlike assessment results, test results do not have to be interpreted.
What is evaluation?
What's the definition of evaluation in education? Evaluation focuses on grades and might reflect classroom
components other than course content and mastery level. An evaluation can be used as a final review to
gauge the quality of instruction. It is product-oriented: the main question is, "What's been
learned?" In short, evaluation is judgmental.
Authentic Assessment Tools
a. Observation-Based Tools
Observation provides the opportunity to monitor or assess a process or situation and document evidence of what
is seen and heard. Seeing actions and behaviours within a natural context, or as they usually occur provides
insights and understanding of the event, activity or situation being evaluated.
b. Performance Samples Assessment Tools
Examples include dance, recital, and dramatic enactment. There may be prose or poetry interpretation. This form
of performance-based assessment can take time, so there must be a clear pacing guide.
c. Performance Assessment Tools
Performance task assessment lists are assessment tools that provide the structure students need to work more
independently and to encourage them to pay attention to the quality of their work.
Authentic assessment examples:
1. Conducting research and writing a report.
2. Student debates (individual or group).
3. Experiments (trial-and-error learning).
4. Discussion partners or groups.
5. Character analysis.
6. Drawing and writing about a story or chapter.
7. Journal entries (reflective writing).
8. Student self-assessment.

4.1 Objective 1 Exercise 1 What is the meaning of the following:


1. Assessment
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
2. Evaluation
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
3. Testing
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
4.2 Objective 2 Exercise 2 Explain:
1. observation-based tools
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
_________________________________________________________________________________
2. performance-based tools
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
3. actual performance-based tools
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
4.3 Objective 3 Exercise 3 Enumerate examples of assessment tools.
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
Lesson Proper for Week 3
Too often, we tend to assess students’ learning through their outputs or products or through some kind of
traditional testing. In process-oriented, performance-based assessment, assessment is not an end in itself but a
vehicle for educational improvement.
Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and
revealed in performance over time. Learning is a complex process.
Process-oriented performance-based assessment is concerned with the actual task performance rather than
the output or product of an activity. It evaluates the actual task performance; it does not emphasize the output
or product of the activity. This assessment aims to know what processes a person undergoes when given a task.
Performance-based assessments allow teachers to pinpoint their students’ weaknesses and strengths, which
gives them insight into what they did a good job of covering as well as what material may need to be covered
again, possibly in a different way, to get the students to understand better ("What Should").
In general, a performance-based assessment measures students' ability to apply the skills and knowledge
learned from a unit or units of study. Typically, the task challenges students to use their higher-order thinking
skills to create a product or complete a process (Chun, 2010).
Process-Oriented Learning Competencies
Information about outcomes is of high importance where students "end up" matters greatly. Assessment can help
us understand which students learn best under what conditions; with such knowledge comes the capacity to
improve the whole of their learning. To improve outcomes, we need to know about the student experience along
the way: about the curricula, teaching, and kinds of students that lead to particular outcomes. The learning
objectives are stated in directly observable behavior of the student.
Ø Competencies are defined as groups or clusters of skills and abilities needed for a particular task.
Ø The objectives focus on the behaviors which exemplify "best practice" for the particular task.
Ø Such behavior ranges from a "beginner" or novice level up to the level of an expert.
Example: Task: Recite the poem "The Raven" by Edgar Allan Poe.
Objective: to enable the students to recite the poem "The Raven" by Edgar Allan Poe.
The specific objectives identified constitute the learning competencies for this particular task.
Examples of simple competencies:
Ø Speak with a well-modulated voice.
Ø Draw a straight line from one point to another point.
Ø Color a leaf with a green crayon.
Examples of complex competencies:
Ø Recite a poem with feeling, using appropriate voice quality, facial expressions, and hand gestures.
Ø Construct an equilateral triangle given three non-collinear points.
Ø Draw and color a leaf with a green crayon.
Task Designing
Task design refers to how a task plan and its workflow are organized; in other words, how thoroughly a task's
plan is projected. The better the task design, the fewer administrative questions and problems may appear
during the work.
Once you have your learning outcomes, you will then need to decide how you would ask your students to
evidence their learning through assessment tasks. Assessment tasks are the activities learners will undertake to
confirm whether or not 'the outcome has in fact been achieved' (Biggs & Tang, 2007)
Standards for designing a task:
1. Identify an activity that would highlight the competencies to be evaluated.
2. Identify an activity that would entail more or less the same sets of competencies.
3. Find a task that would be interesting and enjoyable for the students.
Example: Topic: Understanding Biological Diversity
Possible task design:
Ø Bring the students to a pond or creek.
Ø Ask them to find all living organisms near the pond or creek.
Ø Bring them to the school playground to find as many living organisms as they can.
Scoring Rubrics
A rubric is a scoring scale used to assess student performance along a task-specific set of criteria.
Authentic assessments are criterion-referenced measures: a student’s aptitude on a task is determined by
matching the student’s performance against a set of criteria to determine the degree to which the student’s
performance meets the criteria for the task.

Example of Criteria (each scored on a scale of 1 to 3)

1. Number of appropriate hand gestures: (1) 1-4; (2) 5-9; (3) 10-12.
2. Appropriate facial expression: (1) lots of inappropriate facial expressions; (2) few inappropriate facial
expressions; (3) no apparent inappropriate facial expressions.
3. Voice inflection: (1) monotone voice used; (2) can vary voice inflection with difficulty; (3) can easily vary
voice inflection.
4. Incorporates proper ambiance through feelings in the voice: (1) recitation contains very little feeling;
(2) recitation has some feeling; (3) recitation fully captures ambiance through feelings in the voice.

A rubric is a scoring scale used to assess student performance: a coherent set of criteria for students’ work that
includes descriptions of levels of performance quality on the criteria. Typically, rubrics are used in scoring or
grading written assignments or oral presentations; however, they may be used to score any form of student
performance.
There is no specific number of levels a rubric should or should not possess. It will vary with the task and your
needs, as long as you decide that it is appropriate. Generally, it is better to start with a smaller number of levels
of performance for a criterion and then expand if necessary.
Why Include Levels of Performance?
1. Clearer expectations. It is very useful for the students and the teacher if the criteria are identified and
communicated prior to completion of the task.
2. More consistent and objective assessment. In addition to better communicating teacher expectations, levels
of performance permit the teacher to more consistently and objectively distinguish between good and bad
performance, or between superior, mediocre, and poor performance, when evaluating.
3. Better feedback. Furthermore, identifying specific levels of student performance allows the teacher to
provide more detailed feedback to students.
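The criterion-referenced scoring that a rubric supports can be sketched as a short program. This is a minimal illustrative sketch only: the `rubric` dictionary and the `score_performance` helper are hypothetical names, and the criteria and level ranges are loosely adapted from the "The Raven" recitation example; a real rubric is applied through teacher judgment and would define its own criteria and levels.

```python
# Minimal sketch of criterion-referenced rubric scoring.
# The criteria and levels are illustrative, adapted from the
# "The Raven" recitation rubric; any real rubric defines its own.

rubric = {
    "hand_gestures": {1: "1-4 gestures", 2: "5-9 gestures", 3: "10-12 gestures"},
    "facial_expression": {1: "lots inappropriate", 2: "few inappropriate", 3: "none inappropriate"},
    "voice_inflection": {1: "monotone", 2: "varies with difficulty", 3: "varies easily"},
    "ambiance": {1: "very little feeling", 2: "some feeling", 3: "fully captures ambiance"},
}

def score_performance(ratings):
    """Total a teacher's per-criterion ratings against the rubric.

    ratings maps each criterion name to a level (1-3). The score is
    criterion-referenced: it depends only on how the performance matches
    the criteria, not on how other students performed.
    """
    for criterion, level in ratings.items():
        if criterion not in rubric or level not in rubric[criterion]:
            raise ValueError(f"invalid rating: {criterion}={level}")
    total = sum(ratings.values())
    # Highest key of each criterion's level dict is its maximum score.
    max_total = sum(max(levels) for levels in rubric.values())
    return total, max_total

total, max_total = score_performance(
    {"hand_gestures": 2, "facial_expression": 3, "voice_inflection": 2, "ambiance": 3}
)
print(f"{total}/{max_total}")  # 10/12
```

Because every performance is matched against the same fixed criteria, two raters using this structure distinguish levels consistently, which is the "more consistent and objective assessment" benefit described above.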

4.1 Objective 1 Exercise 1 Define what is process-oriented performance-based assessment.


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

4.2 Objective 2 Exercise 2 Explain

1. Identify process-oriented learning competencies


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
2. Task Designing
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
3. Scoring rubrics
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
4.3 Objective 3 Exercise 3 What is the use of rubrics?
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________

Lesson Proper for Week 4


Product-Oriented Performance-Based Assessment
Performance-based assessments in which the actual student performance is assessed through a
product, such as a completed project or work that demonstrates levels of task achievement.

Performance-based assessment has led to the use of a variety of alternative ways of evaluating student
progress (journals, checklists, portfolios, projects, rubrics, etc.) as compared to more traditional methods of
measurement (paper-and-pencil testing).
It is a kind of assessment wherein the assessor views and scores the final product made, not the actual
process of making that product.
It is concerned with the product alone and not the process. It focuses on the outcome or the performance
output of the learner. It also focuses on the achievement of the learner. Product-oriented performance-based
assessment (P-OPBA) focuses on evaluating the result or outcome of a process.
Product-Oriented Learning Competencies. Student performances can be defined as targeted tasks that lead to a
product or overall learning outcome.
Performance-based education poses a challenge for teachers to design instruction that is task-oriented. The trend is based on the premise that learning needs to be connected to the lives of the students through relevant tasks that focus on students' ability to use their knowledge and skills in meaningful ways. In this case, performance-based tasks require performance-based assessment, in which the actual student performance is assessed through a product, such as a completed project or work that demonstrates levels of task achievement.
Product-Oriented Learning Competencies
Student performances can be defined as targeted tasks that lead to a product or overall learning outcomes. Products can include a wide range of student works that target specific skills. Examples:
Ø Communication skills – reading, writing, speaking, and listening
Ø Psychomotor skills – requiring physical abilities to perform a given task
Using rubrics is one way that teachers can evaluate or assess student performance or proficiency in any given task as it relates to a final product or learning outcome. The learning competencies associated with products or outputs are linked with an assessment of the level of "expertise" manifested by the product. Thus, product-oriented learning competencies target at least three (3) levels: novice or beginner level, skilled level, and expert level.
There are other ways to state product-oriented learning competencies. For instance, we can define learning
competencies for products or outputs in the following way:
• Example: communication skills such as those demonstrated in reading, writing, speaking, and
listening.
The learning competencies associated with products or outputs are linked with an assessment of the level of
expertise manifested by the product.

Ø Level 1 (Beginner) – Does the finished product or project illustrate the minimum expected parts or functions? Sample learning competency: contains pictures, clippings, and other illustrations for the scenes and characters.
Ø Level 2 (Skilled) – Does the finished product or project contain additional parts and functions, on top of the minimum requirements, which tend to enhance the final output? Sample learning competency: contains remarks and captions for the illustrations, made by the student himself, for each scene and the characters.
Ø Level 3 (Expert) – Does the finished product or project contain the basic minimum parts and functions, have additional features on top of the minimum, and is it aesthetically pleasing? Sample learning competency: presentable, complete, informative, and pleasing to the reader.
Example: The desired product is a scrapbook illustrating the historical event called EDSA I People Power.
Learning Competencies: The scrapbook presented by the students must:
Ø Contain pictures, newspaper clippings, and other illustrations for the main characters of EDSA I People Power, namely Corazon Aquino, Fidel V. Ramos, Juan Ponce Enrile, Ferdinand E. Marcos, and Cardinal Sin – (minimum specifications)
Ø Contain remarks and captions for the illustrations, made by the student himself, for the roles played by the characters of EDSA I People Power – (skilled level)
Ø Be presentable, complete, informative, and pleasing to the reader of the scrapbook – (expert level)
Another example: The final product submitted by the students must:
Ø Possess the correct dimensions (5" x 5" x 5") – (minimum specifications)
Ø Be sturdy, made of durable cardboard, and properly fastened together – (skilled specifications)
Ø Be pleasing to the observer, preferably properly colored for aesthetic purposes – (expert level)
Performance-based assessment for products and projects can also be used for assessing outputs of short-term tasks, such as the one illustrated below for outputs in a typing class.
Example: The desired output is the typed document produced in a typing class.
Learning Competencies: The final typing outputs of the students must:
• Possess no more than five (5) errors in spelling – (minimum specifications)
• Possess no more than five (5) errors in spelling while observing the proper format based on the document to be typewritten – (skilled level)
• Possess no more than five (5) errors in spelling, have the proper format, and be readable and presentable – (expert level)
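As a rough illustration (not part of the original module), the cumulative criteria for the typing example can be sketched as a small function. The function name and boolean inputs are assumptions chosen for the sketch:

```python
def typing_output_level(spelling_errors: int, proper_format: bool, presentable: bool) -> str:
    """Classify a typing-class output against the cumulative level criteria.

    The levels are cumulative: every level requires no more than five
    spelling errors; higher levels add format and presentability checks.
    """
    if spelling_errors > 5:
        return "below minimum"  # fails even the minimum specification
    if proper_format and presentable:
        return "expert"         # <=5 errors + proper format + presentable
    if proper_format:
        return "skilled"        # <=5 errors + proper format
    return "beginner"           # <=5 errors only
```

Under this sketch, an output with three spelling errors and the proper format, but poor presentation, would be rated at the skilled level.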
Notice that in all of the above examples, product-oriented performance-based learning competencies are evidence-based. The teacher needs concrete evidence that the student has achieved a certain level of competence based on submitted products and projects.
Comparison of Process-Oriented and Product-Oriented Performance-Based Assessment
Process-Oriented:
Ø Concerned with the actual task performance
Ø Evaluates how a movement is performed
Ø Aims to evaluate the actual process of doing an object of learning
Ø Aims to know what processes a person undergoes when given a task
Product-Oriented:
Ø The assessor views and scores the final product made
Ø Evaluates the outcome of a movement
Ø A management philosophy, concept, focus, or state of mind that emphasizes the quality of the product
Ø A kind of assessment wherein the assessor views and scores the final product made

Task Designing
How should a teacher design a task for product-oriented performance-based assessment? The design of the task in this context depends on what the teacher desires to observe as output of the students. The concepts that may be associated with task designing include:
Ø Complexity. The level of complexity of the project needs to be within the range of ability of the students.
Projects that are too simple tend to be uninteresting for the students while projects that are too complicated will
most likely frustrate them.
Ø Appeal. The project or activity must be appealing to the students. It should be interesting enough that students are encouraged to pursue the task to completion. It should lead to self-discovery of information by the students.
Ø Creativity. The projects need to encourage students to exercise creativity and divergent thinking. Given the
same set of materials and project inputs, how does one best present the project? It should lead the students into
exploring the various possible ways of presenting the final output.
Ø Goal-Based. Finally, the teacher must bear in mind that the project is produced in order to attain a learning
objective. Thus, projects are assigned to students not just for the sake of producing something but for the
purpose of reinforcing learning.
Example: Paper folding is a traditional Japanese art. However, it can be used as an activity to teach the concepts of plane and solid figures in geometry. Provide students with a given number of colored papers and ask them to construct as many plane and solid figures as they can from these papers without cutting them (by paper folding only).

Task Oriented Approach


The aim of this teaching approach is to help the child or young person improve their performance in a specific activity by teaching specific tasks step by step. The tasks, selected by the teacher prior to the lesson, are geared toward the individual's level of learning. The aim is that the individual will achieve independence in a single task very quickly.
There are many benefits to using a task-oriented approach, for example:
Ø Rapid progress may be seen in the specific tasks addressed.
Ø It is easily applied to those who are visual learners.
However, as the skills are taught for each discrete task, it is more challenging for the child or young person to generalize these skills to other situations; therefore, each task should be taught individually.
Backward Chaining vs. Forward Chaining
Chaining involves breaking the desired task into small steps; this process is known as creating a 'task analysis'. Each step is then taught separately to assist the child or young person in achieving the desired skill. This is a very helpful teaching technique when a child or young person needs to learn a routine task that is repetitive, for example toileting, getting dressed or undressed, brushing teeth, or making a sandwich.
Forward Chaining
This involves teaching the steps of the task from beginning to end. The teacher starts with the first step and, when it is learned, moves on to the second by adding it to the routine already started, and so the process continues until the learner can complete the task independently.
Examples: brushing teeth, preparing breakfast cereal, making a grilled cheese sandwich.
Backward Chaining
This technique follows the same principle as forward chaining but begins at the final step of the skill rather than the first. For example, if you were teaching the skill of toileting, you would prompt and assist the child throughout the process until it was time to flush the toilet. When they had mastered this step, you would then provide help in the acquisition of the step before the last, which in this example would be pulling up their trousers or skirt.
Examples: washing hands, replacing toilet paper, doing the laundry, and grocery shopping.
Forward chaining tends to be a more logical process, as you work through the steps in order from the beginning. The individual will experience a sense of achievement, as they always have the opportunity to complete the task. In some cases, for example tying laces, brushing teeth, and washing hands, the easier steps are the initial ones, and help is therefore provided for the more challenging steps until they are mastered independently.

With backward chaining, it is easier to visualize the end result, as the individual starts by completing the final step and works backwards. For example, when getting dressed, they will know how they should look when they are ready for school in their uniform.
A visual task analysis can be created, whether in picture or word form, and the individual can follow it independently. Backward chaining also creates a link between the most work (the last step) and the biggest reinforcer (what is achieved, e.g., eating the toast if the task was to make toast).

Process Oriented Approach


This teaching approach aims to teach a range of prerequisite and foundational skills such as play, turn taking, nonverbal communication, language, and conversation. For this to be effective, the environment must be conducive to supporting the child's growth and development. As with the task-oriented approach, the teacher or therapist may select the target skills, although, where possible, involving the child or young person in selecting the materials to be used may help encourage and motivate them to undertake the task at hand and leads to more effective results (Cihak, 2011).

Scoring Rubrics
Scoring rubrics are descriptive scoring schemes that are developed by teachers or other evaluators to guide the
analysis of the products or processes of students’ efforts (Brookhart, 1999).
Scoring rubrics are typically employed when a judgment of quality is required and may be used to evaluate a
broad range of subjects and activities.
From the major criteria, the next task is to identify sub-statements that would make the major criteria more focused and objective. For instance, if we were scoring an essay on "Three Hundred Years of Spanish Rule in the Philippines", the major criterion "Quality" may possess the following sub-statements:
Ø Interrelates the chronological events in an interesting manner
Ø Identifies the key players in each period of the Spanish rule and the roles that they played
Ø Succeeds in relating the history of Philippine Spanish rule (rated as Professional, Not Quite Professional, or Novice)
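One way to picture such a rubric is as a mapping from criteria to rating levels, with a small function tallying a score from a rater's judgments. This is a sketch only; the "Organization" criterion and the point values are illustrative assumptions, not from the source:

```python
# Each criterion maps rating levels to points. "Quality" follows the
# essay example above; "Organization" is a hypothetical second criterion.
essay_rubric = {
    "Quality": {"Professional": 3, "Not Quite Professional": 2, "Novice": 1},
    "Organization": {"Professional": 3, "Not Quite Professional": 2, "Novice": 1},
}

def score_essay(ratings, rubric):
    """Total the points for the level a rater assigned to each criterion."""
    return sum(rubric[criterion][level] for criterion, level in ratings.items())
```

Under these assumed point values, an essay rated Professional on Quality and Novice on Organization would score 3 + 1 = 4 points.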

An example of such a rubric is one developed to aid in the evaluation of essays written by college students in the classroom (based loosely on Leydens & Thompson, 1997).
When are scoring rubrics an appropriate evaluation technique?
Grading essays is just one example of a performance that may be evaluated using scoring rubrics. There are many other instances in which scoring rubrics may be used successfully: to evaluate group activities, extended projects, and oral presentations.
Scoring rubrics also cut across disciplines and subject matter, for they are equally appropriate in English, Mathematics, and Science classrooms.
Other Methods
Authentic assessment schemes apart from scoring rubrics exist in the arsenal of a teacher. For example, checklists may be used rather than scoring rubrics in the evaluation of essays. Checklists enumerate a set of desirable characteristics for a certain product, and the teacher marks those characteristics which are actually observed.
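A checklist can likewise be modeled very simply. The sketch below is illustrative only; the checklist items and function name are assumptions:

```python
# Desirable characteristics for an essay (illustrative items).
checklist = ["states a clear thesis", "cites primary sources", "free of spelling errors"]

def mark_checklist(observed, checklist):
    """Mark which desirable characteristics were actually observed."""
    return {item: (item in observed) for item in checklist}
```

Unlike a rubric, the checklist records only presence or absence of each characteristic, with no judgment of quality levels.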

General versus Task-Specific


In the development of scoring rubrics, it is well to bear in mind that they can be used to assess or evaluate specific tasks or a general or broad category of tasks. For instance, suppose that we are interested in assessing a student's oral communication skills.
Process of Developing Scoring Rubrics
Scoring rubrics are developed through a process. The first step entails the identification of the qualities and attributes that the teacher wishes to observe in the students' outputs that would demonstrate their level of proficiency (Brookhart, 1999). The next step, after defining the criteria for the top level of performance, is the identification and definition of the criteria for the lowest level of performance.
Resources
Currently, there is a broad range of resources available to teachers who wish to use scoring rubrics in their
classrooms. These resources differ both in the subject that they cover and the level that they are designed to
assess.
Resources are also available to assist college instructors who are interested in developing and using scoring
rubrics in their classrooms.
The purpose of performance assessment is to evaluate the actual process of doing an object of learning. Students are expected to be able to apply knowledge learned in class to solve problems in the task. Apart from that, students may need to use their thinking skills in order to complete the task.

4.1 Objective 1 Exercise 1 Define product-oriented performance-based assessment.


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

4.2 Objective 2 Exercise 2 Explain

1. Identify product-oriented learning competencies.


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

2. Task designing in product-oriented competencies


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

3. How are products evaluated through scoring rubrics?


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

4.3 Objective Exercise 3 What is the aim of the task-oriented approach?


__________________________________________________________________________________
__________________________________________________________________________________
__________________________________________________________________________________

Lesson Proper for Week 5


Assessment in the Affective Domain
The affective domain is one of three domains in Bloom's Taxonomy, the other two being the cognitive and psychomotor domains (Bloom et al., 1956).
The affective domain (Krathwohl, Bloom, & Masia, 1973) includes the manner in which we deal with things emotionally, such as feelings, values, appreciation, enthusiasms, motivations, and attitudes. It describes learning objectives that emphasize a feeling tone, an emotion, or a degree of acceptance or rejection.

Affective objectives vary from simple attention to selected phenomena to complex but internally consistent qualities of character and conscience.

David Krathwohl's Taxonomy of the Affective Domain

Krathwohl's taxonomy is a model that describes how individuals process and internalize learning objectives on an affective or emotional level. There are five levels in the taxonomy. Verbs for expressing learning outcomes include: ask, choose, describe, follow, give, hold, identify, reply, select, and use.
Krathwohl's affective domain taxonomy is perhaps the best known of any of the affective taxonomies. "The
taxonomy is ordered according to the principle of internalization. Internalization refers to the process whereby a
person's affect toward an object passes from a general awareness level to a point where the affect is
'internalized' and consistently guides or controls the person's behavior (Seels & Glasgow, 1990, p. 28)." How is
the taxonomy presented?
The taxonomy is presented in five stages:
1. Receiving describes the stage of being aware of or sensitive to the existence of certain ideas, materials, or phenomena and being willing to tolerate them. Examples include: to differentiate, to accept, to listen (for), to respond to.

2. Responding describes the second stage of the taxonomy and refers to a commitment, in some small measure, to the ideas, materials, or phenomena involved, by actively responding to them. Examples are: to comply with, to follow, to commend, to volunteer, to spend leisure time in, to acclaim.

3. Valuing means being willing to be perceived by others as valuing certain ideas, materials, or phenomena. Examples include: to increase measured proficiency in, to relinquish, to subsidize, to support, to debate.

4. Organization is the fourth stage of Krathwohl's taxonomy and involves relating the new value to those one already holds and bringing it into a harmonious and internally consistent philosophy. Examples are: to discuss, to theorize, to formulate, to balance, to examine.

5. Characterization by a value or value set means acting consistently in accordance with the values the individual has internalized. Examples include: to revise, to require, to be rated high in the value, to avoid, to resist, to manage, to resolve.

Affective Domain of the Taxonomy of Education

In 1964, David R. Krathwohl, together with his colleagues, extended Bloom's Taxonomy of Educational Objectives by publishing a second taxonomy of objectives, this time giving emphasis to the affective domain. Krathwohl and his collaborators attempted to subdivide the affective realm into relatively distinct divisions.
Krathwohl's Taxonomy of Educational Objectives
Level: Receiving (Attending)
Description: Concerned with the student's sensitivity to the existence of certain phenomena and stimuli, that is, with the student's willingness to receive or to attend to these stimuli. It is categorized into three subdivisions that show the different levels of attending to phenomena: awareness of the phenomena, willingness to receive the phenomena, and controlled or selected attention to the phenomena.
Example: Students do mathematics activities for grades.

Benjamin Bloom's Taxonomy

The affective domain is one of the three domains in Bloom's Taxonomy. It involves feelings, attitudes, and emotions. It includes the ways in which people deal with external and internal phenomena emotionally, such as values, enthusiasms, and motivations.
Bloom's Revised Taxonomy—Affective Domain
The affective domain (Krathwohl, Bloom, & Masia, 1973) includes the manner in which we deal with things emotionally, such as feelings, values, appreciation, enthusiasms, motivations, and attitudes.
Affective Learning Competencies

According to William James Popham (2003), the reasons why it is important to assess affect are:
1. Educators should be interested in assessing affective variables because these variables are excellent predictors of students' future behavior;
2. Teachers should assess affect to remind themselves that there is more to being a successful teacher than helping students obtain high scores on achievement tests; and
3. Information regarding students' affect can help teachers teach more effectively on a day-to-day basis.

Importance of Affective Targets


Ø Students are more proficient in problem-solving if they enjoy what they do.
Ø A positive environment fosters better student engagement and learning than a classroom with a negative climate (Fraser, 1994).
Ø Motivation and involvement of students in learning activities are affected by students' attitudes toward learning, respect for others, and concern for others.
Why do most teachers not utilize any kind of formal affective assessment?
Ø School routines are organized based on subject areas;
Ø Assessment of affective targets is fraught with difficulties (McMillan, 2007); and
Ø Many potential sources of error in measuring affective traits often result in low reliability.

Positive Affective Traits and Skills are Essential for:


Ø Effective learning
Ø Being an involved and productive member of our society
Ø Preparing for occupational and vocational satisfaction and productivity (e.g., work habits, willingness to learn, interpersonal skills)
Ø Maximizing the motivation to learn at present and in the future
Ø Preventing students from dropping out of school

Affective Traits and Learning Targets


Ø The word affective refers to a variety of traits and dispositions that are different from knowledge, reasoning, and skills (Hohn, 1995).
Ø Technically, this term means the emotions or feelings that one has toward someone or something.
Ø Nevertheless, attitudes, values, self-concept, citizenship, and other traits that are usually considered to be non-cognitive include more than emotions or feelings.
Affective Traits
Ø Attitudes – predisposition to respond favorably or unfavorably to specified situations, concepts, objects, institutions, or persons
Ø Interests – personal preference for certain kinds of activities
Ø Values – importance, worth, or usefulness of modes of conduct and end states of existence
Ø Opinions – beliefs about specific occurrences and situations
Ø Preferences – desire to select one object over another
Ø Motivation – desire and willingness to be engaged in behavior, including intensity of involvement
Ø Academic self-concept – self-perception of competence in school and learning
Ø Self-esteem – attitudes toward oneself; degree of self-respect, worthiness, or desirability of self-concept
Ø Locus of control – self-perception of whether success and failure are controlled by the student or by external influences
Ø Emotional development – growth, change, and awareness of emotions and ability to regulate emotional expression
Ø Social relationships – nature of interpersonal interactions and functioning in group settings
Ø Altruism – willingness and propensity to help others
Ø Moral development – attainment of ethical principles that guide decision-making and behavior
Ø Classroom environment – nature of feeling tones and interpersonal relationships in a class

Learning Targets
1. Attitude Targets
2. Value Targets
3. Motivation Targets
4. Academic Self-Concept Targets
5. Social Relationship Targets
6. Classroom Environment Targets

Attitude Targets
Ø McMillan (1980) defines attitudes as internal states that influence what students are likely to do.
Ø The internal state can in some degree determine a positive or negative, favorable or unfavorable reaction toward an object, situation, person, group of objects, general environment, or group of persons.
Ø In a learning institution, attitude is contingent on subjects, teachers, other students, homework, and other objects or persons.
A positive attitude toward: learning; math, science, English, and other subjects; assignments; classroom rules; teachers.
A negative attitude toward: cheating; drug use; bullying; cutting classes; dropping out.

Three Components of Attitudes (Contributing Factor)


Ø Affective component – consists of the emotion or feeling associated with an object or a person.
Ø Cognitive component – an evaluative belief (such as thinking something is valuable, useful, worthless, etc.).
Ø Behavioral component – actually responding in a positive way.
Value Targets
Ø End states of existence – conditions and aspects of oneself and the kind of world that a person wants, such as a safe life, world peace, freedom, happiness, social acceptance, and wisdom.
Ø Modes of conduct – manifested in what a person believes is appropriate and needed in everyday existence, such as being honest, cheerful, ambitious, loving, responsible, and helpful.
Sample value targets:
Ø Honesty – Students should learn to value honesty in their dealings with others.
Ø Integrity – Students should firmly observe their own code of values.
Ø Justice – Students should support the view that all citizens should be the recipients of equal justice from government law enforcement agencies.
Ø Freedom – Students should believe that democratic countries must provide the maximum level of freedom to their citizens.

McMillan (2007) suggested that in setting value targets, it is necessary to stick to non-controversial values and those that are clearly related to academic learning and to school and department of education goals.
McMillan (2007) and Popham (2005) suggested other non-controversial values (aside from those mentioned) like kindness, generosity, perseverance, loyalty, respect, courage, compassion, and tolerance. It is better to do an excellent job assessing a few important traits than to try to assess many traits casually.

Motivation Targets
Motivation is determined by students' expectations, their beliefs about whether they are likely to be successful, and the relevance of the outcome.
Ø Expectations – refer to the self-efficacy of the students.
Ø Values – the students' self-perception of the importance of the performance.

Kinds Of Motivation
Ø Intrinsic motivation – when students do something or engage themselves in activities because they find the activities interesting, enjoyable, or challenging.
Ø Extrinsic motivation – doing something because it leads to rewards or punishment.

Motivation as self-efficacy
In addition to being influenced by their goals, interests, and attributions, students' motives are affected by specific beliefs about their personal capacities. In self-efficacy theory, these beliefs become a primary, explicit explanation for motivation (Bandura, 1977, 1986, 1997). Self-efficacy is the belief that you are capable of carrying
out a specific task or of reaching a specific goal. Note that the belief and the action or goal are specific. Self-
efficacy is a belief that you can write an acceptable term paper, for example, or repair an automobile, or make
friends with the new student in class. These are relatively specific beliefs and tasks. Self-efficacy is not about
whether you believe that you are intelligent in general, whether you always like working with mechanical things,
or think that you are generally a likeable person. These more general judgments are better regarded as various
mixtures of self-concepts (beliefs about general personal identity) or of self-esteem (evaluations of identity). They
are important in their own right, and sometimes influence motivation, but only indirectly (Bong & Skaalvik, 2004).
Self-efficacy beliefs, furthermore, are not the same as “true” or documented skill or ability. They are self-
constructed, meaning that they are personally developed perceptions. There can sometimes therefore be
discrepancies between a person’s self-efficacy beliefs and the person’s abilities. You can believe that you can
write a good term paper, for example, without actually being able to do so, and vice versa: you can believe
yourself incapable of writing a paper, but discover that you are in fact able to do so. In this way self-efficacy is like
the everyday idea of confidence, except that it is defined more precisely. And as with confidence, it is possible to
have either too much or too little self-efficacy. The optimum level seems to be either at or slightly above true
capacity (Bandura, 1997). As we indicate below, large discrepancies between self-efficacy and ability can create
motivational problems for the individual.

Since self-efficacy is self-constructed, furthermore, it is also possible for students to miscalculate or misperceive
their true skill, and the misperceptions themselves can have complex effects on students’ motivations. From a
teacher’s point of view, all is well even if students overestimate their capacity but actually do succeed at a
relevant task anyway, or if they underestimate their capacity, yet discover that they can succeed and raise their
self-efficacy beliefs as a result. All may not be well, though, if students do not believe that they can succeed and
therefore do not even try, or if students overestimate their capacity by a wide margin, but are disappointed
unexpectedly by failure and lower their self-efficacy beliefs.

Academic Self-concept Targets

Ø Self-concept and self-esteem are multidimensional.
Ø Each person has a self-description in each area, which forms one's self-concept or self-image.
Ø Moreover, individuals have a sense of self-regard, self-affirmation, and self-worth in each area (self-esteem).
Related areas include: peer relations, friendship, cooperation, collaboration, taking a stand, conflict resolution, functioning in a group, assertiveness, prosocial behavior, and empathy.

Social Relationship Targets


Ø A complex set of interaction skills, including the identification of and appropriate responses to social cues, defines social relationships.
Ø Peer relationship targets concern showing interest in others, listening to peers, sharing with a group, and contributing to group activities. Example: Students will share their ideas in a small group discussion.
Ø Cooperative skills include sharing, listening, volunteering ideas and suggestions, supporting and accepting others' ideas, taking turns, and criticizing constructively. Example: Students will demonstrate that they are able to negotiate with others and compromise.

Classroom Environment Targets

In every classroom there is a unique climate that is felt at every point in time. Some classrooms manifest a comfortable atmosphere; others have a relaxed and productive ambiance. As a result, there are classes that are happy and content while others are serious and tense, owing to the effect of the classroom climate. It follows that students behave differently as dictated by the classroom climate: some classes show themselves warm and supportive while others register as cold and rejecting.

Characteristics of classroom climate:
Ø Affiliation – the extent to which students like and accept each other
Ø Involvement – the extent to which students are interested in and engaged in learning
Ø Task orientation – the extent to which classroom activities are focused on the completion of academic tasks
Ø Cohesiveness – the extent to which students share norms and expectations
Ø Favoritism – whether each student enjoys the same privileges
Ø Influence – the extent to which each student influences classroom decisions
Ø Friction – the extent to which students bicker with one another
Ø Formality – the emphasis on imposing rules
Ø Communication – the extent to which communication among students and with the teacher is honest and authentic
Ø Warmth – the extent to which students care about each other and show concern
What is the relevance of the affective domain in education?

If we are striving to apply the continuum of Krathwohl et al. to our teaching, then we are encouraging students to
not just receive information at the bottom of the affective hierarchy. We'd like for them to respond to what they
learn, to value it, to organize it and maybe even to characterize themselves as science students, science majors
or scientists.

We are also interested in students' attitudes toward science, scientists, learning science and specific science
topics. We want to find teaching methods that encourage students and draw them in. Affective topics in
educational literature include attitudes, motivation, communication styles, classroom management styles,
learning styles, use of technology in the classroom and nonverbal communication. It is also important not to turn
students off by subtle actions or communications that go straight to the affective domain and prevent students
from becoming engaged.

In the educational literature, nearly every author introduces their paper by stating that the affective domain is
essential for learning, but it is the least studied, most often overlooked, the most nebulous and the hardest to
evaluate of Bloom's three domains. In formal classroom teaching, the majority of the teacher's efforts typically go into the cognitive aspects of teaching and learning, and most of the classroom time is designed for
cognitive outcomes. Similarly, evaluating cognitive learning is straightforward but assessing affective outcomes is
difficult. Thus, there is significant value in realizing the potential to increase student learning by tapping into the
affective domain. Similarly, students may experience affective roadblocks to learning that can neither be
recognized nor solved when using a purely cognitive approach.
4.1 Objective 1 Exercise 1 Define the different concepts related to assessing affective learning outcomes.

________________________________________________________
__________________________
________________________________________________________
__________________________
________________________________________________________
__________________________

4.2 Objective 2 Exercise 2 Explain Krathwohl's Taxonomy of Education.


_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
4.2 Objective 2 Exercise 3 What is Bloom's Taxonomy of the Affective Domain?
________________________________________________________
_____________________________
________________________________________________________
_____________________________
________________________________________________________
_____________________________
________________________________________________________
_____________________________
4.2 Objective 2 Exercise 4 Determine the different levels of the affective domain.
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________

4.3 Objective Exercise 5 Differentiate the three methods of assessing affective learning outcomes.
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
4.4 Objective Exercise 6 What are the kinds of motivation? Explain.

________________________________________________________
_____________________________
________________________________________________________
_____________________________
________________________________________________________
_____________________________
________________________________________________________
_____________________________

Lesson Proper for Week 7


Portfolios are collections of student work representing a selection of performance. Portfolios in classrooms today
are derived from the visual and performing arts tradition in which they serve to showcase artists'
accomplishments and personally favored works. A portfolio may be a folder containing a student's best pieces
and the student's evaluation of the strengths and weaknesses of the pieces. It may also contain one or more
works-in-progress that illustrate the creation of a product, such as an essay, evolving through various stages of
conception, drafting, and revision.
More teachers have recently begun using portfolios in all curricular areas. Portfolios are useful as a support to
the new instructional approaches that emphasize the student's role in constructing understanding and the
teacher's role in promoting understanding. For example, in writing instruction, portfolios can function to illustrate
the range of assignments, goals, and audiences for which a student produced written material. In addition,
portfolios can be a record of the activities undertaken over time in the development of written products. They can
also be used to support cooperative learning by offering an opportunity for students to share and comment on
each other's work. For example, a videotape of students speaking French in the classroom can be used to evoke
a critical evaluation of each other's conversational skills at various points during the school year.
Recent changes in education policy, which emphasize greater teacher involvement in designing curriculum and
assessing students, have also been an impetus to increased portfolio use. Portfolios are valued as an
assessment tool because, as representations of classroom-based performance, they can be fully integrated into
the curriculum. And unlike separate tests, they supplement rather than take time away from instruction.
Moreover, many teachers, educators, and researchers believe that portfolio assessments are more effective than
"old-style" tests for measuring academic skills and informing instructional decisions.
Students have been stuffing assignments in notebooks and folders for years, so what's so new and exciting
about portfolios? Portfolios capitalize on students' natural tendency to save work and become an effective way to
get them to take a second look and think about how they could improve future work. As any teacher or student
can confirm, this method is a clear departure from the old write, hand in, and forget mentality, where first drafts
were considered final products.
Although there is no single correct way to develop portfolio programs, in all of them students are expected to
collect, select, and reflect. Early in the school year, students are pressed to consider: What would I like to reread
or share with my parents or a friend? What makes a particular piece of writing, an approach to a mathematics
problem, or a write-up of a science project a good product? In building a portfolio of selected pieces and
explaining the basis for their choices, students generate criteria for good work, with teacher and peer input.
Students need specifics with clear guidelines and examples to get started on their work, so these discussions
need to be well guided and structured. The earlier the discussions begin, the better.
While portfolios were developed on the model of the visual and performing arts tradition of showcasing
accomplishments, portfolios in classrooms today are a highly flexible instructional and assessment tool,
adaptable to diverse curricula, student age/grade levels, and administrative contexts. For example:
The content in portfolios is built from class assignments and as such corresponds to the local classroom
curriculum. Often, portfolio programs are initiated by teachers, who know their classroom curriculum best. They
may develop portfolios focused on a single curricular area--such as writing, mathematics, literature, or science--
or they may develop portfolio programs that span two or more subjects, such as writing and reading, writing
across the curriculum, or mathematics and science. Still others span several course areas for particular groups of
students, such as those in vocational-technical, English as a second language, or special arts programs.
The age/grade level of students may determine how portfolios are developed and used. For example, in
developing criteria for judging good writing, older students are more likely to be able to help determine the criteria
by which work is selected, perhaps through brainstorming sessions with the teacher and other students. Younger
students may need more directed help to decide on what work to include. Older students are generally better at
keeping logs to report their progress on readings and other recurrent projects. Also, older students often expand
their portfolios beyond written material to include photographs or videos of peer review sessions, science
experiments, performances, or exhibits.
Administrative contexts also influence the structure and use of portfolios. While the primary purpose of portfolios
for most teachers is to engage students, support good curricula and instruction, and improve student learning,
some portfolio programs are designed to serve other purposes as well. For example, portfolios can be used to
involve parents in their children's education programs and to report individual student progress. Teachers and
administrators need to educate parents about how portfolios work and what advantages they offer over traditional
tests. Parents are generally more receptive if the traditional tests to which they are accustomed are not being
eliminated. Once portfolios are explained and observed in practice, parents are often enthusiastic supporters.
Portfolios may also be used to compare achievement across classrooms or schools. When they are used for this
purpose, fairness requires that standards be developed to specify the types of work that can be included and the
criteria used to evaluate the work. Guidelines may also address issues of teacher or peer involvement in revising
draft work or in deciding on what to identify as a best piece.
In all administrative contexts, teachers need administrative support to initiate a portfolio program. They need
support material such as folders, file drawers, and access to a photocopy machine, and time to plan, share ideas,
and develop strategies.
All portfolios--across these diverse curricular settings, student populations, and administrative contexts--involve
students in their own education so that they take charge of their personal collection of work, reflect on what
makes some work better, and use this information to make improvements in future work.
Research shows that students at all levels see assessment as something that is done to them on their classwork
by someone else. Beyond "percent correct," assigned letter grades, and grammatical or arithmetic errors, many
students have little knowledge of what is involved in evaluating their classwork. Portfolios can provide structure
for involving students in developing and understanding criteria for good efforts, in coming to see the criteria as
their own, and in applying the criteria to their own and other students' work.
Research also shows that students benefit from an awareness of the processes and strategies involved in
writing, solving a problem, researching a topic, analyzing information, or describing their own observations.
Without instruction focused on the processes and strategies that underlie effective performance of these types of
work, most students will not learn them or will learn them only minimally. And without curriculum-specific
experience in using these processes and strategies, even fewer students will carry them forward into new and
appropriate contexts. Portfolios can serve as a vehicle for enhancing student awareness of these strategies for
thinking about and producing work--both inside and beyond the classroom.
Portfolio Assessment
Purposeful collection of student work that has been selected and organized to show student learning progress
(developmental portfolio) or to show samples of students' best work (showcase portfolio).
Ø Portfolio assessment can be used in addition to other assessments or as the sole source of assessment.
Ø Some schools even use portfolio assessment as a basis for high school graduation.
A portfolio is a purposeful collection of student work that exhibits the student’s efforts, progress and
achievements in one or more areas. The collection must include student participation in selecting contents, the
criteria for selection, the criteria for judging merit and evidence of student self-reflection. (Paulson, Paulson,
Meyer 1991)
Portfolio assessment is one of several authentic and non-traditional assessment techniques in education. Portfolio assessment became popular in the early to late 1980s in response to the growing clamor for more "reasonable" and authentic means of assessing students' growth and development in school.
Portfolio assessment is a term with many meanings, and it is a process that can serve a variety of purposes. A
portfolio is a collection of student work that can exhibit a student's efforts, progress, and achievements in various
areas of the curriculum. A
portfolio assessment can be an examination of student-selected samples of work experiences and documents
related to outcomes being assessed, and it can address and support progress toward achieving academic goals,
including student efficacy. Portfolio assessments have been used for large-scale assessment and accountability
purposes (e.g., the Vermont and Kentucky statewide assessment systems), for purposes of school-to-work
transitions, and for purposes of certification. For example, portfolio assessments are used as part of the National
Board for Professional Teaching Standards assessment of expert teachers.
The Development of Portfolio Assessment
Portfolio assessments grew in popularity in the United States in the 1990s as part of a widespread interest in
alternative assessment. Because of high-stakes accountability, the 1980s saw an increase in norm-referenced,
multiple-choice tests designed to measure academic achievement. By the end of the decade, however, there
were increased criticisms over the reliance on these tests, which opponents believed assessed only a very
limited range of knowledge and encouraged a "drill and kill" multiple-choice curriculum. Advocates of alternative
assessment argued that teachers and schools modeled their curriculum to match the limited norm-referenced
tests to try to assure that their students did well, "teaching to the test" rather than teaching content relevant to the
subject matter. Therefore, it was important that assessments were worth teaching to and modeled the types of
significant teaching and learning activities that were worthwhile educational experiences and would prepare
students for future, real-world success.
Involving a wide variety of learning products and artifacts, such assessments would also enable teachers and
researchers to examine the wide array of complex thinking and problem-solving skills required for subject-matter
accomplishment. More likely than traditional assessments to be multidimensional, these assessments also could
reveal various aspects of the learning process, including the development of cognitive skills, strategies, and
decision-making processes. By providing feedback to schools and districts about the strengths and weaknesses
of their performance, and influencing what and how teachers teach, it was thought portfolio assessment could
support the goals of school reform. By engaging students more deeply in the instructional and assessment
process, furthermore, portfolios could also benefit student learning.
Are Portfolios Authentic Assessments?
Some suggest that portfolios are not really assessments at all because they are just collections of previously
completed assessments. But, if we consider assessing as gathering of information about someone or something
for a purpose, then a portfolio is a type of assessment. Sometimes the portfolio is also evaluated or graded, but
that is not necessary to be considered an assessment.
Are portfolios authentic assessments? Student portfolios have most commonly been associated with collections
of artwork and, to a lesser extent, collections of writing. Students in these disciplines are performing authentic
tasks which capture meaningful application of knowledge and skills. Their portfolios often tell compelling stories
of the growth of the students' talents and showcase their skills through a collection of authentic performances.
Educators are expanding this story-telling to other disciplines such as physical education, mathematics and the
social sciences to capture the variety of demonstrations of meaningful application from students within these
disciplines.
Furthermore, in the more thoughtful portfolio assignments, students are asked to reflect on their work, to engage
in self-assessment and goal-setting. Those are two of the most authentic skills students need to develop to
successfully manage in the real world. Research has found that students in classes that emphasize
improvement, progress, effort and the process of learning rather than grades and normative performance are
more likely to use a variety of learning strategies and have a more positive attitude toward learning. Yet in
education we have shortchanged the process of learning in favor of the products of learning. Students are not
regularly asked to examine how they succeeded or failed or improved on a task or to set goals for future work;
the final product and evaluation of it receives the bulk of the attention in many classrooms. Consequently,
students are not developing the metacognitive skills that will enable them to reflect upon and make adjustments
in their learning in school and beyond.
Portfolios provide an excellent vehicle for consideration of process and the development of related skills. So,
portfolios are frequently included with other types of authentic assessments because they move away from telling
a student's story though test scores and, instead, focus on a meaningful collection of student performance and
meaningful reflection and evaluation of that work.
What do portfolios contain? Developmental portfolio (or working portfolio):
Ø Samples of independent work (initial work compared to more current work)
Ø Evaluations by teacher, peer, and self
Ø Reflections on growth over a period of time
Ø May be used for instructional purposes and may include products at various stages, including various drafts
What do portfolios contain? Finished portfolio
Ø Samples of best independent work
Ø Evaluations by teacher, peer, self
Ø Samples organized according to some system (e.g., creative writing, scientific writing)
Ø Usually used to provide a summative evaluation and follows a standard format.
Features and Principles of Portfolio Assessment
Ø A portfolio is a form of assessment that students do together with their teachers.
Ø A portfolio represents a selection of what the students believe are their best pieces from among the possible collection of work related to the concept being studied.
Ø A portfolio provides samples of the student’s work which show growth over time.
Ø The criteria for selecting and assessing the portfolio contents must be clear to the teacher and the students at the outset of the process.
Guidelines for portfolio entries
Give students:
Ø Purpose of the portfolio
Ø Time period that it should span
Ø Name people who will have access to it.
Ø Description/list of types of work to include
Ø If applicable, what criteria will be used to evaluate portfolio
Ø Ensure that you allow for flexibility (however, for summative/showcase portfolios, you might have to include
strict guidelines for organization)
Ø Ensure that students have access to resources to construct portfolios (e.g., technology, materials)
Ø Ground rules for working independently or collaboratively
Ø Guidance on physical structure of portfolios
Ø How portfolios fit into their grades
Who decides what goes into a portfolio?
Ø Student choice is the primary determinant of entries in a portfolio. The teacher guides by giving a general structure to the portfolio. Student and teacher may be asked to explain why they selected each entry.
Ø Teacher may meet with student regularly to reflect on student growth. (S)he provides input, student reflects on
growth, and they talk about agreements, disagreements on evaluations
How do we decide what to include in the portfolio?
Ø Start with early works to provide a basis for comparison of later work
Ø Include a variety of works in each category
Ø Include works that reflect the learning objectives that would need to be taught
Ø Include works that address the criteria that may be used for judging the portfolio
Ø Works should be complex (assess many different elements) to enable
reflections
Ø Entries should be selected by student
Ø Because in selecting, student has to apply a higher level of understanding/thinking about his own learning
Ø Portfolio should be assessed using criteria developed ahead of time
Where should we store the portfolio?
Usually, portfolios should be manageable (not serve as a collection of ALL of students' work) and should be within reach of the student (preferably stored in the classroom). They should be referred to regularly – the teacher should provide time to place entries in the portfolio.
How is a portfolio organized?
It may be organized by concepts, skills, subjects, learning objectives… whatever seems appropriate. For example, in writing, it may be organized by different types of writing.
Evaluating entries in portfolio
Ø Teacher and student both independently provide evaluation of growth/learning.
Ø Entries in portfolio should be evaluated using standards from learning objectives
How do we evaluate the portfolio?
What is important for the students to have learned over 4 months (or 8 months, or 1 unit)?
Ø Based on what is important (which comes from the learning objectives), decide on the elements of the portfolio to evaluate.
Ø Develop ratings for each element.

How do we evaluate the portfolio?
For example, scientific thinking can be rated on the following scale:
4 = conclusions are based on hypotheses (guesses), and hypotheses are set for valid reasons
3 = more than half the conclusions are based on hypotheses
…
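Such an analytic scale can be turned into a simple scoring aid. The sketch below is a hypothetical illustration in Python; the element names, the ratings assigned, and the descriptors for the lower scale levels are invented for the example and are not part of any standard rubric:

```python
# A minimal sketch of analytic rubric scoring for a portfolio.
# Levels 2 and 1 below are invented to complete the illustration.
RATING_SCALE = {
    4: "conclusions are based on hypotheses set for valid reasons",
    3: "more than half the conclusions are based on hypotheses",
    2: "some conclusions are based on hypotheses",
    1: "conclusions are not linked to hypotheses",
}

def score_portfolio(ratings: dict[str, int]) -> float:
    """Average the per-element ratings of an analytic rubric."""
    for element, rating in ratings.items():
        if rating not in RATING_SCALE:
            raise ValueError(f"invalid rating {rating} for {element}")
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings a teacher might assign to one portfolio:
ratings = {"scientific thinking": 3, "organization": 4, "reflection": 2}
print(score_portfolio(ratings))  # 3.0
```

Averaging is only one way to combine element ratings; a teacher might instead weight elements by importance or report each element separately, as a developmental portfolio usually calls for.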
Analytical or Holistic?
If it's a developmental portfolio, use an analytic rating scale, e.g.:
http://www.umes.edu/education/exhibit/docs/PORTFOLIO%20RUBRIC.doc
http://www.mashell.com/~parr5/techno/content.html
If it's a showcase portfolio, you can use a holistic rating scale. No rating scale is perfect. No rubric is perfect. As you use these, you will continue to improve them.
In evaluating a portfolio, remember to:
Ø Share the rubric with students before they work on the portfolio
Ø Allow students to reflect on their portfolio, using the rubric
Ø Ensure that you have checks for biases (e.g., rate portfolio with another teacher)
Ø Rate portfolio without looking at student name
Ø When making major decisions based on evaluations, ensure that you use more than one rater and all raters
are trained
In summary
Ø Entries in portfolio must be selected by student
Ø Entries are biased toward selecting the best work
Ø Reflections are an important part of the portfolio
Ø Criteria for evaluating the portfolio must be shared with the student beforehand
Ø Portfolios can be an excellent communication tool between students and teachers, and with parents.

Lesson Proper for Week 8


Advantages of Portfolio Assessments for Students
Portfolios are collections of student activities, accomplishments and achievements to demonstrate growth over
time, offering an alternative authentic assessment for students and teachers. Working portfolios contain works in progress as well as finished works. Display portfolios showcase students' best work, and assessment portfolios demonstrate the specific curriculum standards students have learned. While each type of portfolio has value, portfolio assessment has many advantages for students.

Self-Evaluation
Assessment portfolios require students to continuously reflect and perform self-evaluations of their work.
Teachers should convey to students the purpose of the portfolio, what constitutes quality work and how the
portfolio is graded. As students judge their work using explicit criteria to identify strengths and weaknesses, they
are monitoring their own progress. According to the article, “Student Self-Evaluation: What Research Says And
What Practice Shows,” by Carol Rolheiser and John A. Ross, students who participate in self-evaluations are
motivated, have a positive outlook and develop cognitive skills.
Portfolios used well in classrooms have several advantages. They provide a way of documenting and evaluating
growth in a much more nuanced way than selected response tests can. Also, portfolios can be integrated easily
into instruction, i.e. used for assessment for learning. Portfolios also encourage student self-evaluation and
reflection, as well as ownership for learning (Popham, 2005). Using classroom assessment to promote student
motivation is an important component of assessment for learning which is considered in the next section.

Individualized
Portfolios permit individualized assessment. Some students are not good test-takers, and portfolios offer them an
alternative to demonstrate mastery of content. Numerous work samples can show students moving from basic to
advanced skills, demonstrating continued learning growth. Because assessment portfolios are individualized,
students and teachers have the opportunity to choose the documents they want to include in the portfolio and to
make decisions about how to improve the student's work.

Promote Communication
Assessment portfolios promote communication between teachers and students. Some shy students who fail to
initiate conversations within the classroom benefit from one-on-one interaction with the teacher, while other
students may enjoy speaking about their accomplishments. During conferences, students can discuss their
progress, ask questions and receive suggestions and strategies for improving work. Dialogues with peers and
parents also help students in meaningful reflection and goal-setting.
Accountability
Portfolio assessment can hold students accountable for mastering content standards in a subject area. Portfolios
offer students tangible evidence to show their academic achievements as well as their participation in community
service projects. Because high school graduation is contingent on mastery of essential elements of the
curriculum, portfolios can give students an alternate avenue to show documentation of skills. In addition, many
colleges and employers request portfolios to see if students have basic skills, problem solving and collaborative
work skills.
Major Disadvantages of Portfolio Use
First, good portfolio assessment takes an enormous amount of teacher time and organization. The time is
needed to help students understand the purpose and structure of the portfolio, decide which work samples to collect, and self-reflect. Some of this time needs to be spent in one-to-one conferences. Reviewing and
evaluating the portfolios out of class time is also enormously time consuming. Teachers have to weigh if the time
spent is worth the benefits of the portfolio use.
Second, evaluating portfolios reliably and eliminating bias can be even more difficult than in a constructed-response assessment because the products are more varied. The experience of the state-wide use of portfolios for assessment in writing and mathematics for fourth and eighth graders in Vermont is sobering. Teachers used the same analytic scoring rubric when evaluating the portfolios. In the first two years of implementation, samples from schools were collected and scored by an external panel of teachers. In the first year the agreement among raters (i.e., inter-rater reliability) was poor for mathematics and writing; in the second year the agreement among raters improved for mathematics but not for writing. However, even with the improvement in mathematics, the reliability was too low to use the portfolios for individual student accountability (Koretz, Stecher, Klein & McCaffrey, 1994). When reliability is low, validity is also compromised because unstable results cannot be interpreted meaningfully.
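The inter-rater reliability discussed above can be quantified in several ways. The Python sketch below is an invented illustration, not the analytic procedure used in the Vermont study: it computes simple percent agreement and Cohen's kappa (agreement corrected for chance) for two hypothetical raters scoring the same eight portfolios on a 1–4 rubric.

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of portfolios on which the two raters gave the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for chance: kappa = (po - pe) / (1 - pe)."""
    n = len(rater_a)
    po = percent_agreement(rater_a, rater_b)          # observed agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from each rater's score distribution:
    pe = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Invented scores from two raters on the same eight portfolios:
a = [4, 3, 3, 2, 4, 1, 2, 3]
b = [4, 3, 2, 2, 4, 1, 3, 3]
print(percent_agreement(a, b))        # 0.75
print(round(cohens_kappa(a, b), 2))   # 0.65
```

Kappa is lower than raw agreement because some matching scores would occur by chance alone; this is one reason raw agreement figures can overstate how dependable portfolio scores are.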

Purposes of Portfolio
1. Portfolio assessment matches assessment to teaching.
2. Portfolio assessment has clear goals. In fact, they are decided on at the beginning of instruction and are
clear to teacher and students alike.
3. Portfolio assessment gives a profile of learners’ abilities in terms of depth, breadth, and growth.
4. Portfolio assessment is a tool for assessing a variety of skills not normally testable in a single setting for
traditional testing.
5. Portfolio assessment develops awareness of students’ own learning.
6. Portfolio assessment caters to individuals in a heterogeneous class.
7. Portfolio assessment develops social skills. Students interact with other students in the development of their
own portfolios.
8. Portfolio assessment develops independent and active learners.
9. Portfolio assessment can improve motivation for learning and thus achievement.
10. Portfolio assessment provides opportunity for student-teacher dialogue.

Different purposes of Portfolios


All content in a portfolio must be linked to the learning objectives/outcomes. In addition to learning objectives,
there are many general purposes of portfolios:
Ø Enhancing student learning (little emphasis on content, more emphasis on student reflection)
Ø Assigning a grade (give clear guidelines to ensure that the portfolio consists of standard items)
Ø Displaying current achievement (pick the best complete work)
Ø Demonstrating progress (show changes over time, include various drafts)
Ø Showcasing student work (only best work)
Ø Documentation (showing work at variety of levels)
Ø Show finished work
Ø Show works in progress
Essential Elements of Portfolio
Every portfolio must contain the following essential elements:
1. Cover letter “About the author” and “What my portfolio shows about my progress as a learner” (written at
the end, but put at the beginning).
2. Table of Contents with numbered pages
3. Entries – both core (items students have to include) and optional (items of the student's choice). The core elements will be required for each student and will provide a common base from which to make decisions on assessment. The optional items will allow the folder to represent the uniqueness of each student.
4. Dates on all entries, to facilitate proof of growth over time.
5. Drafts of aural/oral and written products and revised versions.
6. Reflection can appear at different stages in the learning process (for formative and/or summative purposes) and, at the lower levels, can be written in the mother tongue by students who find it difficult to express themselves in English.
Students can choose to reflect upon some or all of the following:
Ø What did I learn from it?
Ø What did I do well?
Ø Why (based on the agreed teacher-student assessment criteria) did I choose this item?
Ø What do I want to improve in the item?
Ø How do I feel about my performance?
Ø What were the problem areas?

Uses of Portfolios
Much of the literature on portfolio assessment has focused on portfolios as a way to integrate assessment and
instruction and to promote meaningful classroom learning. Many advocates of this function believe that a
successful portfolio assessment program requires the ongoing involvement of students in the creation and
assessment process. Portfolio design should provide students with the opportunities to become more reflective
about their own work, while demonstrating their abilities to learn and achieve in academics.
For example, some feel it is important for teachers and students to work together to prioritize the criteria that will
be used as a basis for assessing and evaluating student progress. During the instructional process, students and
teachers work together to identify significant pieces of work and the processes required for the portfolio. As
students develop their portfolio, they are able to receive feedback from peers and teachers about their work.
Because of the greater amount of time required for portfolio projects, there is a greater opportunity for
introspection and collaborative reflection. This allows students to reflect and report about their own thinking
processes as they monitor their own comprehension and observe their emerging understanding of subjects and
skills. The portfolio process is dynamic and is affected by the interaction between students and teachers.

The Benefits of Portfolios


In the schools where they have been widely used, portfolios have been credited with transforming the learning
environment in ways that even their strongest proponents had not anticipated. Ruth Mitchell (1992, pp. 103–108)
clustered the powerful consequences of portfolio use into the categories of: instruction, professional
development, assessment, and research. To these we add the essential area of communication with parents and
the larger community.
It should be noted that the benefits of portfolios result principally from the process of building and using them.
While the portfolios themselves have value, particularly in the area of assessment (permitting the evaluation of a
wide range of outcomes and documenting growth over time), it is the process of creation that offers great power
to educators. Students become highly engaged in their own learning through the steps of selection and reflection,
assume considerable responsibility for that learning, and enter into a different relationship with their teachers,
one characterized as more collegial than hierarchical.

Student Portfolios as an Assessment Tool


Teachers and administrators have been making a move from traditional paper-and-pencil type tests to alternate
forms of assessment. Teacher observation, projects, essays, and other more creative ways of evaluating student
achievement have gained a larger following within the classroom. Although its use has declined, one type of
assessment tool that can be used very effectively is the student portfolio. Portfolios remain quite popular in
education coursework and with administrators evaluating senior teachers. Why, then, do so many classroom
teachers forego the use of portfolios as assessment tools?
One reason might be that the portfolio is a very subjective form of assessment. For anyone uncomfortable
without a grading key or answer sheet, subjective evaluation can be a scary task. Secondly, teachers often are
unsure themselves of the purpose of a portfolio and its uses in the classroom. Third, there is a question of how
the portfolio can be most effectively used to assess student learning.
The following suggestions will help you come to terms with those three factors and allow you to utilize student
portfolios to evaluate the learning occurring in your classroom.
Set a goal, or purpose, for the portfolio. Your goal should be tied to how you plan to use the portfolio. Do you
want to see student improvement over the long term or a mastery of a specific set of skills? Is it important for you
to see the scope of student learning over time or do you merely want to collect samples of student work to pass
along to the next teacher? Are you looking for a concrete way to show parents the amount of work completed
and their child's improvement over time? Take some time to think about what kind of data you want to collect and
how you plan to use it.
Next, determine how -- or if -- you will grade the portfolios. If your purpose is merely to collect work samples
to pass along to another teacher or parent, there is no need to actually grade the portfolios. If, however, you are
looking for an overall mastery of skills, you will want to grade the work collected. The most efficient way to grade
a portfolio is through a rating scale. If you're looking for specific skills, you might begin with a checklist. That
checklist will ensure that all necessary pieces are included. I use the following guidelines: Is the work completed
correctly (mechanics), completely (information), and comprehensively (depth)? Each area is marked on a scale
of 1-4. My scale is 1 = not at all; 2 = somewhat; 3 = mostly; and 4 = entirely.
Say, for example, that as a teacher of writing, I'm looking for examples within the student portfolios that show
each writing mode covered during my course. Each piece then is determined to be correct, complete, and
comprehensive based on a scale of 1-4. The three scores are averaged giving each piece an overall score. I then
average all the scores to give a grade for the entire portfolio. A math teacher might be looking for samples
showing various problems solved based on the skills taught during a particular unit or year. A social studies
teacher might be looking for comprehension and understanding of major events during a specific time period.
Each teacher must determine what skills or learning are to be evaluated through student portfolios.
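As an illustration, the averaging scheme just described (three criteria rated 1–4 per piece, averaged per piece and then across pieces) could be tallied as follows. The sample ratings below are hypothetical, not part of any standard:

```python
# Score one portfolio using the three-criterion rubric described above:
# each piece is rated 1-4 for correctness (mechanics), completeness
# (information), and comprehensiveness (depth).

def piece_score(correct, complete, comprehensive):
    """Average the three criterion ratings (each 1-4) for a single piece."""
    return (correct + complete + comprehensive) / 3

def portfolio_grade(pieces):
    """Average the per-piece scores to grade the whole portfolio."""
    scores = [piece_score(*ratings) for ratings in pieces]
    return sum(scores) / len(scores)

# Hypothetical ratings for a three-piece writing portfolio.
pieces = [(4, 3, 3), (3, 3, 2), (4, 4, 3)]
grade = portfolio_grade(pieces)
print(round(grade, 2))  # overall portfolio score on the 1-4 scale
```

The same `portfolio_grade` value from two independent evaluators could then be averaged for the final grade, as suggested in the next paragraph's discussion of reliability.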
It also is important -- especially if you plan to use the portfolio as a major grade for your course -- that you
get another teacher to help with the evaluations. That ensures that your assessment is reliable. Teachers often
cut some slack for less academically inclined students, while holding others to higher standards. That is
especially prevalent in subjective assessments. By asking a teacher who is unfamiliar with your students to read
over the work and assess it using your rating scale, you are making a more authentic evaluation. The two scores
then can be averaged to get a final grade. That will show you and the student a more accurate assessment of
their work products.
You can have students create portfolios of their work for a particular unit. One thing to keep in mind is that,
although many portfolios reflect long-term projects completed over the course of a semester or year, it does not
have to be that way. That portfolio might count as a project for that particular topic of study. The next unit might
not include the use of a portfolio as an assessment tool. There is no need to collect work in a portfolio, give an
end-of-unit test, and have students complete a major project in connection with the unit. All three activities are
tools to evaluate student learning, and it's overkill for both you and the students to use all three. Choose the type
of assessment that best meets the goals and objectives of a particular unit.
Finally, student involvement is very important in the portfolio process. It is vital that students also
understand the purpose of the portfolio, how it will be used to evaluate their work, and how grades for it will be
determined. Make sure students are given a
checklist of what is expected in the portfolio before they begin submitting work. Take time at the beginning of the
unit to explain the type of evaluation it is, so students clearly understand what is expected in terms of work
product.
It also is important that you allow students a choice of what is placed in their portfolios. Although you
might have a few specific pieces you require, permit students to include two or three pieces of their own
choosing. Additionally, be sure to offer students the opportunity to reflect about the work included in the portfolio.
What are their thoughts and feelings about each piece? Does it represent their best work or were they goofing off
when they completed it? Why did a student choose a particular piece? What was his or her thought process in
determining which pieces to submit? Those kinds of questions force students to actively think about their work
and the portfolio as a whole rather than simply throwing any old assignment into a folder. Reflection provides
further meaning to the assessment.
The portfolio is not the easiest type of assessment to implement, but it can be a very effective
tool. Portfolios show the cumulative efforts and learning of a particular student over time. They offer valuable
data about student improvement and skill mastery. Along with student reflection, that data provides valuable
information about how each student learns and what is important to him or her in the learning process.
When starting the portfolio process, remember to keep it simple. Start with a single unit. Determine your
goals and purpose for the portfolio. Create a checklist. Explain the process to students and encourage them to
take an active role in the development of their portfolios. What you might discover is a very valuable and
meaningful evaluation tool that effectively assesses student learning.

Lesson Proper for Week 9


It is necessary to assess students' performance as individuals or in groups during the learning process rather
than relying only on traditional or multiple-choice methods. Portfolios are an alternative assessment method for
observing students' development and assessing their performance during the learning process. Moreover, portfolios
are assessment tools grounded in contemporary learning approaches such as constructivist learning theory,
multiple-intelligences theory, and brain-based learning theory. Portfolio assessment enables students to
demonstrate their real performance, to show their weak and strong domains, allows teachers to observe students'
progress during the learning process, and encourages students to take responsibility for their own learning.
Since portfolios enable collecting information from different sources, such as the student's parents, friends,
teachers, and the student himself or herself, they provide teachers with reliable information about the student.
They are important tools for assessing both students' learning products and learning processes. Theoretical and
applied research shows that portfolios can be used as both learning and assessment tools (Birgin, 2007; Ersoy,
2006; Klenowski, 2000; Kuhs, 1994; Norman, 1998). Thus, the portfolio has the potential to enable students to
learn during assessment and to be assessed during learning (assessment for learning and assessment of learning).
Therefore, it should be applied in primary education across different courses, such as Science and Technology,
Mathematics, and Social Science, to observe students' progress during the learning process and to provide the
required assistance depending on their performance.
During the preparation of a portfolio, it is first necessary to determine the portfolio's purpose, to plan its
items so that they cover students' different skills and learning dimensions (cognitive, affective, and psychomotor),
and to explain its assessment criteria clearly. It should also be recognized that there are contexts in which
portfolio use is restricted. Computer-based and electronic portfolios can be used to reduce problems such as
transporting, accessing, and storing portfolios; this also decreases the burden in crowded classes, such as those
in Turkey. Effective use of the portfolio as a learning and assessment tool depends on knowledgeable and
experienced teachers who can apply it on a large scale.
However, some studies (Birgin, 2003; Çakan, 2004; Özsevgeç et al., 2004; Yiğit et al., 1998) have emphasized
that teachers do not have enough knowledge of and experience with portfolio assessment and other alternative
assessment methods. It has also been reported that the in-service seminars organized for the new primary
education programs do not give teachers sufficient information about portfolio assessment (Battal, 2006; Birgin
& Tutak, 2006).

The Types of Portfolios


As more and more educators use portfolios, they increasingly recognize that the process has the power to
transform instruction. Some teachers, however, are confused by the many types of portfolios, their different uses,
and the practical issues surrounding storage, ownership, and the like.
The three major types of portfolios are: working portfolios, display portfolios, and assessment portfolios. Although
the types are distinct in theory, they tend to overlap in practice. Consequently, a district's program may include
several different types of portfolios, serving several different purposes. As a result, it is important for educators to
be clear about their goals, the reasons they are engaging in a portfolio project, and the intended audience for the
portfolios.

A. Working Portfolios
A working portfolio is so named because it is a project “in the works,” containing work in progress as well as
finished samples of work. It serves as a holding tank for work that may be selected later for a more permanent
assessment or display portfolio. A working portfolio is different from a work folder, which is simply a receptacle for
all work, with no purpose to the collection. A working portfolio is an intentional collection of work guided by
learning objectives.

Purpose
The major purpose of a working portfolio is to serve as a holding tank for student work. The pieces related to a
specific topic are collected here until they move to an assessment portfolio or a display portfolio, or go home with
the student. In addition, the working portfolio may be used to diagnose student needs. Here both student and
teacher have evidence of student strengths and weaknesses in achieving learning objectives, information
extremely useful in designing future instruction.

Audience
Given its use in diagnosis, the primary audience for a working portfolio is the student, with guidance from the
teacher. By working on the portfolio and reflecting on the quality of work contained there, the student becomes
more reflective and self-directed. With very young children, however, the primary audience is the teacher, with the
participation of the student.
Parents may be another important audience of a working portfolio, since it can help inform parent/teacher
conferences. The portfolio is particularly useful for those parents who do not accept the limitations of their child's
current skills or do not have a realistic picture of the way their child is progressing compared with other children.
In such situations, evidence from a portfolio can truly “speak a thousand words.” In addition, a portfolio can serve
to document the progress a student has made, progress of which a parent may be unaware.

Process
A working portfolio is typically structured around a specific content area; pieces collected relate to the objectives
of that unit and document student progress toward mastery of those objectives. Therefore, sufficient work must
be collected to provide ample evidence of student achievement. Because diagnosis is a major purpose of the
working portfolio, some of the pieces included will show less than complete understanding and will help shape
future instruction.
The working portfolio is reviewed as a whole and its pieces evaluated—either periodically or at the end of the
learning unit. Some pieces may be shifted to an assessment portfolio to document student acquisition of
instructional objectives. Other pieces may be moved to a student's own display (or best works) portfolio or
celebration of individual learning. Still other pieces are sent home with the student.
As students move pieces from a working portfolio into either an assessment or display portfolio, they describe the
reasons for their choices. In this process of selection and description, students must reflect seriously on their
work and what it demonstrates about them as learners. As students and their teachers look through the portfolio,
they set short-term objectives for achieving certain curriculum goals. The portfolio thus provides evidence of
strengths and weaknesses and serves to define the next steps in learning.

B. Display, Showcase, or Best Works Portfolios


Probably the most rewarding use of student portfolios is the display of the students' best work, the work that
makes them proud. Students, as well as their teachers, become most committed to the process when they
experience the joy of exhibiting their best work and interpreting its meaning. Many educators who do not use
portfolios for any other purpose engage their students in the creation of display portfolios. The pride and sense of
accomplishment that students feel make the effort well worthwhile and contribute to a culture for learning in the
classroom.

Purpose
The purpose of a display portfolio is to demonstrate the highest level of achievement attained by the student.
Collecting items for this portfolio is a student's way of saying “Here's who I am. Here is what I can do.” A display
portfolio may be maintained from year to year, with new pieces added each year, documenting growth over time.
And while a best works portfolio may document student efforts with respect to curriculum objectives, it may also
include evidence of student activities beyond school (a story written at home, for example).
There are many possibilities for the contents of a display portfolio. The benefits of portfolios were first recognized
in the area of language arts, specifically in writing. Therefore, writing portfolios are the most widely known and
used. But students may elect to put many types of items in their portfolio of best works—a drawing they like, a
poem they have written, a list of books they have read, or a difficult problem they have solved.

Audience
Since the student selects her or his own best works, the audience for a display portfolio is that student and the
other important individuals, such as parents and older siblings, to whom the student chooses to show the
portfolio. Other audiences include a
current teacher or next year's teacher, who may learn a lot about the student by studying the portfolio.
In addition, a student may submit portfolios of best works to colleges or potential employers to supplement other
information; art students have always used this approach. The contents of these portfolios are determined by the
interests of the audience and may include videos, written work, projects, resumés, and testimonials. The act of
assembling a display portfolio for such a practical purpose can motivate high school students to produce work of
high quality.

Process
Most pieces for a display portfolio are collected in a working portfolio of school projects. Sometimes, however, a
student will include a piece of work from outside the classroom, such as a project from scouts or a poem written
at home. Students select the items to be included in a display portfolio. Their choices define them as students
and as learners. In making their selections, students illustrate what they believe to be important about their
learning, what they value and want to show to others.

C. Assessment Portfolios
The primary function of an assessment portfolio is to document what a student has learned. The content of the
curriculum, then, will determine what students select for their portfolios. Their reflective comments will focus on
the extent to which they believe the portfolio entries demonstrate their mastery of the curriculum objectives. For
example, if the curriculum specifies persuasive, narrative, and descriptive writing, an assessment portfolio should
include examples of each type of writing. Similarly, if the curriculum calls for mathematical problem solving and
mathematical communication, then the assessment portfolio will include entries documenting both problem solving
and communication, possibly in the same entry.

Purpose
The primary purpose of an assessment portfolio is to document student learning on specific curriculum
outcomes. As such, the items in the portfolio must be designed to elicit the knowledge and skill specified in the
outcomes. It is the assessment tasks that bring the curriculum outcomes to life; only by specifying precisely what
students must do and how well they must do it do these statements of learning have meaning.
Assessment portfolios may be used to demonstrate mastery in any curricular area. They may span any period of
time, from one unit to the entire year. And they may be dedicated to one subject or many subjects. For example,
a teacher may wish to have evidence that a child has sufficient skills in a content area to move to the next level
or grade. The criteria for moving on and the types of necessary evidence must be established. Then the portfolio
is compiled and assessed.

Audience
There are many possible audiences for an assessment portfolio, depending on its specific purpose. One
audience may be the classroom teacher, who may become convinced that the objectives of an instructional unit
have been mastered or who may decide to place a student in advanced classes or special sections. Alternatively,
the audience may be the school district or even the state, seeking documentation of student learning, and
permitting a student to move to the high school or receive a diploma. A secondary, though very important,
audience is always the student, who provides evidence of significant learning.

Process
There are eight basic steps in developing an assessment portfolio system. Since portfolio entries represent a
type of performance, these steps resemble the principles for developing good performance assessments.
1. Determine the curricular objectives to be addressed through the portfolio.
2. Determine the decisions that will be made based on the portfolio assessments. Will the assessments be
used for high-stakes assessment at certain levels of schooling (for example, to enable students to make the
transition from middle school to high school)?
3. Design assessment tasks for the curricular objectives. Ensure that the task matches instructional intentions
and adequately represents the content and skills (including the appropriate level of difficulty) students are
expected to attain. These considerations will ensure the validity of the assessment tasks.
4. Define the criteria for each assessment task and establish performance standards for each criterion.
5. Determine who will evaluate the portfolio entries. Will they be teachers from the students' own school?
Teachers from another school? Or does the state identify and train evaluators?
6. Train teachers or other evaluators to score the assessments. This will ensure the reliability of the
assessments.
7. Teach the curriculum, administer assessments, collect them in portfolios, score assessments.
8. As determined in Step 2, make decisions based on the assessments in the portfolios.

Challenges
Even in a classroom environment where the stakes are lower, assessment portfolios are more formal affairs
than those designed to diagnose learning needs (working portfolios) or to celebrate learning (best works
portfolios). In an assessment portfolio, the content matters and it must demonstrate and document what students
have learned. The origin of an assessment portfolio may be quite external to the student and his world. The
mandate may come from outside the classroom—for instance, via curriculum committees and board action, or
directly from the state department of education. Moreover, the eventual owner of the portfolio's contents may be
someone other than the student. In addition, the selection process is more controlled and dictated, since the
portfolio entries must document particular learning outcomes. And there may be no opportunity for the student to
“show off” his or her portfolio.

Steps in implementing a classroom portfolio program


1. Make sure students own their portfolios.
Talk to your students about your ideas of the portfolio, the different purposes, and the variety of work samples. If
possible, have them help make decisions about the kind of portfolio you implement.
2. Decide on the purpose.
Will the focus be on growth or current accomplishments? Best work showcase or documentation? Good
portfolios can have multiple purposes but the teacher and students need to be clear about the purpose.
3. Decide what work samples to collect.
For example, in writing, is every writing assignment included? Are early drafts as well as final products included?
4. Collect and store work samples.
Decide where the work sample will be stored. For example, will each student have a file folder in a file cabinet, or
a small plastic tub on a shelf in the classroom?
5. Select criteria to evaluate samples.
If possible, work with students to develop scoring rubrics. This may take considerable time as different rubrics
may be needed for the variety of work samples. If you are using existing scoring rubrics, discuss with students
possible modifications after the rubrics have been used at least once.
6. Teach and require students to conduct self-evaluations of their own work.
Help students learn to evaluate their own work using agreed-upon criteria. For younger students, the
self-evaluations may be simple (strengths, weaknesses, and ways to improve); for older students, a more analytic
approach is desirable, including use of the same scoring rubrics that the teacher will use.
7. Schedule and conduct portfolio conferences.
Teacher-student conferences are time-consuming, but they are essential for the portfolio process to
significantly enhance learning. These conferences should aid students' self-evaluation and should take place
frequently.
8. Involve parents.
Parents need to understand the portfolio process. Encourage parents to review the work samples. You may wish
to schedule parent-teacher-student conferences in which students talk about their work samples.

Stages in Implementing Portfolio Assessment


Stage 1: Identifying teaching goals to assess through the portfolio
It is very important at this stage to be very clear about what the teacher hopes to achieve in teaching. These
goals will guide the selection and assessment of students' work for the portfolio.
Stage 2: Introducing the idea of portfolio assessment to your class
Portfolio assessment is a new thing for many students who are used to traditional testing. For this reason, it
is important for the teacher to introduce the concept to the class.
Stage 3: Specification of Portfolio Content
Specify what and how much has to be included in the portfolio, both core and optional items (it is important to
include options, as these enable self-expression and independence). Specify for each entry how it will be
assessed.
Stage 4: Giving clear and detailed guidelines for portfolio presentation
There is a tendency for students to present as much evidence of learning as they can when left on their own.
The teacher must therefore set clear guidelines and detailed information on how the portfolios will be presented.
Stage 5: Informing key school officials, parents and other stakeholders
Do not attempt to use the portfolio assessment method without notifying your department head, dean or principal.
This will serve as a precaution in case students later complain about your new assessment procedure.
Stage 6: Development of the Portfolio

The Portfolio Development Process


Portfolios are actually a composite of two major components, the process and the product (Burke, Fogarty,
Belgrad 1994, p. 3). While there may be the temptation on the part of practitioners and students to focus primarily
on the product (the completed portfolio), the portfolio development process is certainly as important. To derive
the greatest benefit from the use of portfolios, it is imperative to fully understand the relationship between the
development process and the product. The portfolio is the actual collection of works that results from going
through the development process; the development process is at the heart of successful portfolio use. Although
this process may be a radical new experience for students and initially a time-consuming one for their teachers,
most find it well worth the time and effort. For them, the development process transforms instruction and
assessment.

Lesson Proper for Week 10


Importance of Educational Measurement, Assessment and Evaluation
As teachers become more familiar with data-driven instruction, they are making decisions about what and how
they teach based on the information gathered from their students. In other words, teachers first find out what their
students know and what they do not know, and then determine how best to bridge that gap.

How Are Measurement, Assessment and Evaluation Different?
During the process of gathering information for effective planning and instruction, the words measurement,
assessment and evaluation are often used interchangeably. These words, however, have significantly different
meanings.

Measurement
The word measurement, as it applies to education, is not substantially different from when it is used in any other
field. It simply means determining the attributes or dimensions of an object, skill or knowledge. We use common
objects in the physical world to measure, such as tape measures, scales and meters. These measurement tools
are held to standards and can be used to obtain reliable results. When used properly, they accurately gather data
for educators and administrators.
Common standard measurements in education are raw scores, percentile ranks, and standard scores.
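As a rough sketch of those terms, using a made-up set of class scores, a raw score can be converted into a percentile rank and a standard (z) score as follows. The tie-handling convention and the use of the population standard deviation are assumptions here, since exact definitions vary across testing programs:

```python
# Convert raw scores into percentile ranks and standard (z) scores.
# The class data below is hypothetical, purely to illustrate the terms.
import statistics

scores = [62, 70, 75, 75, 80, 85, 90, 95]

def percentile_rank(score, all_scores):
    """Percent of scores strictly below, plus half of any ties (one common convention)."""
    below = sum(s < score for s in all_scores)
    ties = sum(s == score for s in all_scores)
    return 100 * (below + 0.5 * ties) / len(all_scores)

def z_score(score, all_scores):
    """Standard score: distance from the group mean in standard-deviation units."""
    mean = statistics.mean(all_scores)
    sd = statistics.pstdev(all_scores)  # population SD of this class
    return (score - mean) / sd

raw = 80
print(percentile_rank(raw, scores), round(z_score(raw, scores), 2))
```

A raw score by itself says little; the percentile rank and standard score place it relative to the group, which is what makes these the reportable "standard measurements."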

Assessment
One of the primary measurement tools in education is the assessment. Teachers gather information by giving
tests, conducting interviews and monitoring behavior. The assessment should be carefully prepared and
administered to ensure its reliability and validity. In other words, an assessment must provide consistent results
and it must measure what it claims to measure.

Evaluation
Creating valid and reliable assessments is critical to accurately measuring educational data. Evaluating the
information gathered, however, is equally important to the effective use of the information for instruction.
In education, evaluation is the process of using the measurements gathered in the assessments. Teachers use
this information to judge the relationship between what was intended by the instruction and what was learned.
They evaluate the information gathered to determine what students know and understand, how far they have
progressed and how fast, and how their scores and progress compare to those of other students.
Why Are Measurement, Assessment and Evaluation Important in Education?
According to educator and author Graham Nuthall in his book The Hidden Lives
of Learners, "In most of the classrooms we have studied, each student already knows about 40-50% of what the
teacher is teaching." The goal of data-driven instruction is to avoid teaching students what they already know and
teach what they do not know in a way the students will best respond to.
For the same reason, educators and administrators understand that assessing students and evaluating the
results must be ongoing and frequent. Scheduled assessments are important to the process, but teachers must
also be prepared to reassess students, even if informally, when they sense students are either bored with the
daily lesson or frustrated by material they are not prepared for. Using the measurements of these intermittent
formative assessments, teachers can fine-tune instruction to meet the needs of their students on a daily and
weekly basis.
Why Is Data-Driven Instruction So Effective?
Accurately measuring student progress with reliable assessments and then evaluating the information to make
instruction more efficient, effective and interesting is what data-driven instruction is all about. Educators who are
willing to make thoughtful and intentional changes in instruction based on more than the next chapter in the
textbook find higher student engagement and more highly motivated students.
In fact, when students are included in the evaluation process, they are more likely to be self-motivated. Students
who see the results of their work only on the quarterly or semester report card or the high-stakes testing report
are often discouraged or deflated, knowing that the score is a permanent record of their past achievement.
When students are informed about the results of more frequent formative assessments and can see how they
have improved or where they need to improve, they more easily see the value of investing time and energy in
their daily lessons and projects. Students are also introduced to elements of assessment that are essential to good
teaching, which gives them an understanding of the role of assessment in the instructional process,
including the proper evaluation of assessments and standardized tests, and how to make better use of the data
in daily classroom instruction.
Data-driven instruction, using accurate measurements, appropriate assessments and in-depth evaluation, is
changing the way we view tests and instruction, as well as the way we communicate information to both students
and families. Teachers who have a clear understanding of how and why these issues are important will find these
changes give them a better understanding of their students and better opportunities to help their students
achieve academic success.

Grading and Reporting System


The purpose of a grading system is to give feedback to students so they can take charge of their learning and to
provide information to all who support these students—teachers, special educators, parents, and others. The
purpose of a reporting system is to communicate the students’ achievement to families, post-secondary
institutions, and employers. These systems must, above all, communicate clear information about the skills a
student has mastered or the areas where they need more support or practice. When schools use grades to
reward or punish students, or to sort students into levels, imbalances in power and privilege will be magnified and
the purposes of the grading and reporting systems will not be achieved. This guide is intended to highlight the
central practices that schools can use to ensure that their grading and reporting systems help them build a
nurturing, equitable, creative, and dynamic culture of learning.
Grading System: The system that a school has developed to guide how teachers assess and grade student
work.
Reporting System: The system that a school has developed for the organization of assignment scores in
gradebooks (either online or paper), and the determination of final grades for report cards and transcripts.
Assigning students grades is an important component of teaching, and many school districts issue progress
reports, interim reports, or midterm grades as well as final semester grades. Traditionally these reports were
printed on paper and sent home with students or mailed to students’ homes. Increasingly, school districts are
using web-based grade management systems that allow parents to access their child’s grades on each individual
assessment as well as the progress reports and final grades.
Grading can be frustrating for teachers as there are many factors to consider. In addition, report cards typically
summarize in brief format a variety of assessments and so cannot provide much information about students’
strengths and weaknesses. This means that report cards focus more on assessment of learning than
assessment for learning. There are a number of decisions that have to be made when assigning students’ grades
and schools often have detailed policies that teachers have to follow. In the next section, we consider the major
questions associated with grading.

Types of Grading Systems


There are seven types of grading systems:
1. Percentage grading – from 0 to 100 percent
2. Letter grading and variations – from grade A to grade F
3. Norm-referenced grading – comparing students to each other, usually with letter grades
4. Mastery grading – grading students as “masters” or “passers” when their attainment reaches a pre-specified level
5. Pass/fail – using a common scale of pass/fail
6. Standards grading (or absolute-standards grading) – comparing student performance to a pre-established standard (level) of performance
7. Narrative grading – writing comments about students
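To show how two of these systems relate, here is a minimal sketch mapping percentage grades onto letter grades; the cut-off points are hypothetical and vary between schools:

```python
def letter_grade(percent):
    """Map a 0-100 percentage onto a letter grade (hypothetical cut-offs)."""
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for floor, letter in cutoffs:
        if percent >= floor:
            return letter
    return "F"

print(letter_grade(87))  # B
print(letter_grade(59))  # F
```

Because the mapping discards information (an 80 and an 89 both become a B), schools often report the percentage alongside the letter.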
Many people claim the modern education system has significant advantages; others say the complete opposite.
Both sides have a fair share of arguments to support their views.
What Made Schools Choose This Grading System?
School is a sacrosanct place and is often described as a child’s second home. In today’s fast-paced world, where
most parents work outside the home, school becomes a safe haven in which to leave their children. Schools
therefore play an essential part in the wholesome and holistic development of each and every
student enrolled with them.
A school is not merely a place where children study and absorb new knowledge and habits; it also exposes them
to the real world, where they interact with their peers and learn many things through experience that nothing
else can provide.
Many educators feel that as technology advances, new forms of teaching, guiding, and other practices should
also improve. One such practice is using a grading system in education to judge a student’s capability and
knowledge.
The main reason schools exist is to impart knowledge to the students studying in them, and assessing those
students is therefore a vital part of a school’s performance, usually carried out as a two-way process.
Here we look in detail at the various dimensions of grading systems in the field of
education and the various advantages and disadvantages of the grading system in education.

Grading System in Education


Types of Grading and Reporting Systems
1. Traditional letter-grade system
Letter grades are easy to assign and to average, but they are of limited value when used as the sole report, because:
Ø they end up being a combination of achievement, effort, work habits, and behavior
Ø teachers differ in how many high (or low) grades they give
Ø they are therefore hard to interpret
Ø they do not indicate patterns of strength and weakness
2. Pass-fail
Popular in some elementary schools, and used to allow exploration in high school and college. It should be
kept to a minimum, because:
Ø it does not provide much information, and students tend to work only to the minimum
Ø in mastery learning courses, the grade can be left blank until the “mastery” threshold is reached

3. Checklists of objectives
Most common in elementary school. Can either replace or supplement letter grades. Each item in the checklist
can be rated: Outstanding, Satisfactory, Unsatisfactory; A, B, C; etc. The problem is to keep the list manageable
and understandable.
4. Letters to parents/guardians
A useful supplement to grades, but of limited value as the sole report, because:
Ø very time consuming
Ø accounts of weaknesses are often misinterpreted
Ø not systematic or cumulative
Great tact is needed in presenting problems (lying, etc.).
5. Portfolios
Set of purposefully selected work, with commentary by student and teacher Useful for:
Ø showing student’s strengths and weaknesses
Ø illustrating range of student work
Ø showing progress over time or stages of a project
Ø teaching students about objectives/standards they are to meet
6. Parent-teacher conferences
Used mostly in elementary school. Portfolios (when used) are a useful basis for discussion. Useful for:
Ø two-way flow of information
Ø getting more information and cooperation from parents
Limited in value as the major report, because:
Ø time consuming
Ø provides no systematic record of progress
Ø some parents won’t come

Systems with Multiple Forms of Grading and Reporting


Ø Multiple forms of grading and reporting are a good idea
Ø It is sensible to supplement the letter grade
Ø Have separate ratings for achievement, citizenship, etc.
How should you develop one? The system should be:
Ø guided by the functions to be served; it will probably be a compromise, because functions often conflict, but
always keep achievement separate from effort
Ø developed cooperatively (parents, students, and school personnel), which yields:
Ø a more adequate system
Ø a system more understandable to all

Based on clear statement of learning objectives


Ø are the same objectives that guided instruction and assessment
Ø some are general, some are course-specific
Ø aim is to report progress on those objectives
Ø practicalities may impose limits, but should always keep the focus on objectives

Consistent with school standards


Ø should support, not undermine, school standards
Ø should use the school’s categories for grades and performance standards
Ø should actually measure what is described in those standards

Based on adequate assessment

Ø implication: don’t promise something you cannot deliver
Ø design a system for which you can get reliable, valid data

Based on the right level of detail


Ø detailed enough to be diagnostic
Ø but compact enough to be practical
Ø not too time consuming to prepare and use
Ø understandable to all users
Ø easily summarized for school records
Ø probably means a letter-grade system with more detailed supplementary reports
Ø providing for parent-teacher conferences as needed
Ø regularly scheduled for elementary school; as needed for high school

Functions of Grading and Reporting Systems


1. Improve students’ learning by:
• clarifying instructional objectives for them
• showing students’ strengths & weaknesses
• providing information on personal-social development
• enhancing students’ motivation (e.g., short-term goals)
• indicating where teaching might be modified
Best achieved by:
• day-to-day tests and feedback
• plus periodic integrated summaries
2. Reports to parents/guardians
• Communicates objectives to parents, so they can help promote learning
• Communicates how well objectives being met, so parents can better plan
3. Administrative and guidance uses
• Help decide promotion, graduation, honors, athletic eligibility
• Report achievement to other schools or to employers
• Provide input for realistic educational, vocational, and personal counseling

Guidelines
A. Properly weight each component to create a composite grade. The weights are normally agreed upon by
school officials, for example: Quiz 30%, Project/Assignment 25%, Class Participation 30%, Periodic Test 15%.
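The weighting idea can be sketched as a weighted average. The component scores below are hypothetical, and the pairing of the listed weights with the listed components is an assumption:

```python
def composite_grade(scores, weights):
    """Weighted average of component scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[part] * w for part, w in weights.items())

# Assumed pairing of the weights with the components listed above.
weights = {"Quiz": 0.30, "Project/Assignment": 0.25,
           "Class Participation": 0.30, "Periodic Test": 0.15}
scores = {"Quiz": 85, "Project/Assignment": 90,
          "Class Participation": 80, "Periodic Test": 75}  # hypothetical

print(composite_grade(scores, weights))
```

Note that weighting only works as intended when all components are on comparable scales, which is exactly the problem the next guideline addresses.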
B. Principal Components Analysis – a more scientific approach, but hardly practiced in schools because of its
difficulty. Put all components on the same scale to weight them properly:
– Equate the ranges of scores
– Convert all scores to T-scores or other standard scores
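Converting raw scores to T-scores (z-scores rescaled to a mean of 50 and a standard deviation of 10) can be sketched as follows; the quiz scores are hypothetical:

```python
import statistics

def t_scores(raw):
    """T-score: z-score rescaled to mean 50 and standard deviation 10."""
    mean = statistics.mean(raw)
    sd = statistics.pstdev(raw)
    return [50 + 10 * (x - mean) / sd for x in raw]

quiz = [12, 15, 18, 20, 25]  # hypothetical raw quiz scores
converted = t_scores(quiz)
print([round(t, 1) for t in converted])
```

After conversion, every component has the same mean (50) and spread (10), so the chosen weights act as intended when components are combined.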
C. Norm-Referenced Grading System
• Grades reflect relative performance: a score is compared to those of other students (rank)
• The grade depends on what group you are in, not just your own performance
• A typical grade may be shifted up or down, depending on the group’s ability
• Widely used; most classroom testing is norm-referenced
By contrast, grades may instead reflect absolute performance:
• The score is compared to specified performance standards (what you can do)
• The grade does not depend on what group you are in, but only on your own performance compared to a
set of performance standards
Either way, grading is a complex task.
D. Criterion-Referenced Grading System
• Grades must:
– clearly define the domain
– clearly define and justify the performance standards
– be based on criterion-referenced assessment
• These conditions are hard to meet except in complete mastery learning settings
E. Score Compared to Learning Potential
• Such grades are inconsistent with standards-based performance, since each child
has his or her own standard
• Reliably estimating learning ability is very difficult
• One cannot reliably measure change with classroom measures
• Should only be used as a supplement

Distribution of Grades and Guidelines for Effective Grading


Norm-Referenced or Relative Performance
• The normal curve is defensible as a grading model only in limited circumstances
• When “grading on the curve”, any pass-fail decision should be based on an absolute standard (failing the
minimum essentials)
• Standards and ranges should be understood and followed by all teachers

Criterion-Referenced or Absolute Grading System
• Seldom uses letter grades alone
• Often includes checklists of what has been mastered
• The distribution of grades is not predetermined

Guidelines for Effective Grading


• Describe grading procedures to students at the beginning of instruction.
• Clarify that course grade will be based on achievement only.
• Explain how other factors (effort, work habits, etc.) will be reported.
• Relate grading procedures to intended learning outcomes.
• Obtain valid evidence (tests) for assigning grades.
• Try to prevent cheating.
• Return and review all test results as soon as possible.
• Properly weight the various types of achievement included in the grade.
• Do not lower an achievement grade for tardiness, weak effort, or misbehavior.
• Be fair. Avoid bias. When in doubt, review the evidence.
• If still in doubt, give the higher grade.
Parent-Teacher Conference
A conference is productive:
• when carefully planned
• when the teacher is skilled in handling such conferences
The teacher’s skill can be developed.

Guidelines for a Good Conference


• Make plans: review your goals; organize the information to present; make a list of points to cover and
questions to ask; if using portfolios, select and review them carefully.
• Start positive and maintain a positive focus: present the student’s strong points first; it is helpful to have
examples of work to show needs; compare early versus late work to show learning progress.
• Encourage parents to participate and share their ideas: be willing to listen and be willing to answer questions.
• Plan actions cooperatively: what steps can each of you take? Summarize at the end.
• End with a positive comment: it should not be a vague generality, and it should be true.
• Use good human-relations skills.

Lesson Proper for Week 11


General Concepts on Statistics and Learning Inquiry
Learning is more adequately achieved through research-based inquiry. Research-based inquiry
makes our academic pursuits scientific, thereby guarding us against errors. As we look for truth in
reality, science has laid down criteria as solid bases for knowing grounded in empirical perception. As human
inquiry became more conscious and rigorous, approaches and techniques were devised, supported by
quantitative data and statistics. In education, the use of statistics can cover much of the whole process of teaching
and learning.
Understanding Statistics
Statistics is a term used to summarize a process that an analyst uses to characterize a data set. If the data
set depends on a sample of a larger population, then the analyst can develop interpretations about the population
primarily based on the statistical outcomes from the sample. Statistical analysis involves the process of gathering
and evaluating data and then summarizing the data into a mathematical form. Statistics is used in various
disciplines such as psychology, business, physical and social sciences, humanities, government, and
manufacturing. Statistical data is gathered using a sample procedure or other method. Two types of statistical
methods are used in analyzing data: descriptive statistics and inferential statistics. Descriptive statistics are used
to summarize data from a sample using measures such as the mean or standard deviation. Inferential statistics
are used when data are viewed as a subclass of a specific population. Statistics is useful in the teaching-learning
process along several lines of research-based inquiry:
A. Experimental Studies. These inquiries investigate causes, in addition to drawing conclusions on the effect
of changes in the elements (called variables) being studied.
B. Inferential Studies. Data are gathered, and the correlation between the intervention (predictors) and the result
derived from a single group is investigated.

Statistical Inquiries Five Basic Steps


A) planning the research-based inquiry around size, hypothesis, variability, subjects, etc.;
B) designing the experiment: blocking to reduce error, random assignment for unbiased estimates,
and mapping the procedures;
C) implementing the plan and analyzing the data;
D) documentation; and
E) presentation of the results or conclusion.

Types of Statistics Descriptive Statistics


Use descriptive statistics to summarize and graph the data for a group that you choose. This process allows
you to understand that specific set of observations. Descriptive statistics describe a sample. That’s pretty
straightforward. You simply take a group that you’re interested in, record data about the group members, and
then use summary statistics and graphs to present the group properties. With descriptive statistics, there is no
uncertainty because you are describing only the people or items that you actually measure. You’re not trying to
infer properties about a larger population.
The process involves taking a potentially large number of data points in the sample and reducing them down to a
few meaningful summary values and graphs. This procedure allows us to gain more insight and to visualize the
data, rather than simply poring over row upon row of raw numbers.

Common tools of descriptive statistics


Descriptive statistics frequently use the following statistical measures to describe groups:
Central tendency: Use the mean or the median to locate the center of the dataset. This measure tells you where
most values fall.
Dispersion: How far out from the center do the data extend? You can use the range or standard deviation to
measure the dispersion. A low dispersion indicates that the values cluster more tightly around the center. Higher
dispersion signifies that data points fall further away from the center. We can also graph the frequency
distribution.
Skewness: The measure tells you whether the distribution of values is symmetric or skewed.
You can present this summary information using both numbers and graphs. These are the standard descriptive
statistics, but there are other descriptive analyses you can perform, such as assessing the relationships of paired
data using correlation and scatterplots.
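These measures can be computed directly with Python's standard library. The sample below is hypothetical, and skewness uses the simple moment formula (not the adjusted version some packages report):

```python
import statistics

scores = [66, 70, 72, 75, 79, 79, 82, 85, 88, 96]  # hypothetical sample

center_mean = statistics.mean(scores)      # central tendency
center_median = statistics.median(scores)
spread_sd = statistics.stdev(scores)       # dispersion (sample SD)
spread_range = max(scores) - min(scores)

# Moment-based skewness: near 0 for symmetric data, > 0 for a right tail.
n, m, sd = len(scores), statistics.mean(scores), statistics.pstdev(scores)
skewness = sum((x - m) ** 3 for x in scores) / (n * sd ** 3)

print(center_mean, center_median, spread_range, round(skewness, 2))
```

A mean close to the median and a skewness near zero together suggest a roughly symmetric distribution.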

Example of descriptive statistics


Suppose we want to describe the test scores in a specific class of 30 students. We record all of the test
scores, calculate the summary statistics, and produce graphs.
These results indicate that the mean score of this class is 79.18. The scores range from 66.21 to 96.53, and
the distribution is symmetrically centered around the mean. A score of at least 70 on the test is acceptable. The
data show that 86.7% of the students have acceptable scores.
Collectively, this information gives us a pretty good picture of this specific class. There is no uncertainty
surrounding these statistics because we gathered the scores for everyone in the class. However, we can’t take
these results and extrapolate to a larger population of students.
We’ll do that later.
Inferential Statistics
Inferential statistics takes data from a sample and makes inferences about the larger population from which the
sample was drawn. Because the goal of inferential statistics is to draw conclusions from a sample and generalize
them to a population, we need to have confidence that our sample accurately reflects the population. This
requirement affects our process. At a broad level, we must do the following:
1. Define the population we are studying.
2. Draw a representative sample from that population.
3. Use analyses that incorporate the sampling error.
We don’t get to pick a convenient group. Instead, random sampling allows us to have confidence that the sample
represents the population. This process is a primary method for obtaining samples that mirror the population
on average. Random sampling produces statistics, such as the mean, that do not tend to be too high or too low.
Using a random sample, we can generalize from the sample to the broader population. Unfortunately, gathering a
truly random sample can be a complicated process.
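In a simulation, though, random sampling is easy to sketch. The population below is simulated, so the exact printed values depend on the seed:

```python
import random
import statistics

random.seed(42)  # fixed seed so this sketch is reproducible

# Simulated population: 10,000 test scores centered on 75.
population = [random.gauss(75, 10) for _ in range(10_000)]

# Simple random sample of 100 scores.
sample = random.sample(population, 100)

print(round(statistics.mean(population), 2))  # close to 75
print(round(statistics.mean(sample), 2))      # close to, but not equal to, the above
```

The gap between the two means is the sampling error that inferential methods must account for.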

Pros and cons of working with samples


You gain tremendous benefits by working with a random sample drawn from a population. In most cases, it is
simply impossible to measure the entire population to understand its properties. The alternative is to gather a
random sample and then use the methodologies of inferential statistics to analyze the sample data.
While samples are much more practical and less expensive to work with, there are tradeoffs. Typically, we
learn about the population by drawing a relatively small sample from it. We are a very long way off from
measuring all people or objects in that population. Consequently, when you estimate the properties of a
population from a sample, the sample statistics are unlikely to equal the actual population value exactly. For
instance, your sample mean is unlikely to equal the population mean exactly. The difference between the sample
statistic and the population value is the sampling error. Inferential statistics incorporate estimates of this error into
the statistical results. In contrast, summary values in descriptive statistics are straightforward. The average score
in a specific class is a known value because we measured all individuals in that class. There is no uncertainty.

Standard analysis tools of inferential statistics


The most common methodologies in inferential statistics are hypothesis
tests, confidence intervals, and regression analysis. Interestingly, these inferential methods can produce similar
summary values as descriptive statistics, such as the mean and standard deviation. However, as I’ll show you,
we use them very differently when making inferences.
Hypothesis tests
Hypothesis tests use sample data to answer questions like the following:
o Is the population mean greater than or less than a particular value?
o Are the means of two or more populations different from each other?
For example, if we study the effectiveness of a new medication by comparing the outcomes in a treatment and
control group, hypothesis tests can tell us whether the drug’s effect that we observe in the sample is likely to exist
in the population. After all, we don’t want to use the medication if it is effective only in our specific sample.
Instead, we need evidence that it’ll be useful in the entire population of patients. Hypothesis tests allow us to
draw these types of conclusions about entire populations.
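A minimal sketch of one such test, a large-sample z-test of a population mean against a hypothesized value (both the scores and the null value of 70 are hypothetical):

```python
import math
import statistics

sample = [72, 75, 68, 80, 77, 74, 71, 79, 76, 73,
          70, 78, 75, 72, 81, 69, 74, 77, 73, 76]  # hypothetical scores
null_mean = 70  # H0: the population mean is 70

n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

z = (mean - null_mean) / se
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - statistics.NormalDist().cdf(abs(z)))

print(round(z, 2), round(p_value, 6))
```

A small p-value (here well below 0.05) is evidence that the population mean differs from 70; with only 20 observations, a t-test would be the more careful choice than this normal approximation.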

Confidence intervals (CIs)


In inferential statistics, a primary goal is to estimate population parameters. These parameters are the
unknown values for the entire population, such as the population mean and standard deviation.
These parameter values are not only unknown but almost always unknowable. Typically, it’s impossible to
measure an entire population. The sampling error I mentioned earlier produces uncertainty, or a margin of error,
around our estimates.
Suppose we define our population as all high school basketball players. Then, we draw a random sample
from this population and calculate the mean height of 181 cm. This sample estimate of 181 cm is the best
estimate of the mean height of the population. However, it’s virtually guaranteed that our estimate of the
population parameter is not exactly correct.
Confidence intervals incorporate the uncertainty and sampling error to create a range of values that the actual
population value is likely to fall within. For example, a confidence interval of [176, 186] indicates that we can be
confident that the real population mean falls within this range.
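The height example can be reproduced numerically. The sample size and standard deviation below are hypothetical values chosen so the interval lands near [176, 186]:

```python
import math
import statistics

# Hypothetical sample summary: 25 players, mean 181 cm, SD 12.75 cm.
n, sample_mean, sample_sd = 25, 181.0, 12.75

z = statistics.NormalDist().inv_cdf(0.975)  # ≈ 1.96 for 95% confidence
half_width = z * sample_sd / math.sqrt(n)

ci = (round(sample_mean - half_width, 1), round(sample_mean + half_width, 1))
print(ci)  # (176.0, 186.0)
```

A larger sample shrinks the half-width (it falls with the square root of n), giving a tighter interval around the same point estimate.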

Regression analysis
Regression analysis describes the relationship between a set of independent variables and a dependent
variable. This analysis incorporates hypothesis tests that help determine whether the relationships observed in
the sample data actually exist in the population.
For example, a fitted line plot can display the relationship in a regression model between height and
weight in adolescent girls.
Because the relationship is statistically significant, we have sufficient evidence to conclude that this relationship
exists in the population rather than just our sample.
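A least-squares line like the one described can be computed directly; the (height, weight) pairs below are hypothetical:

```python
import statistics

# Hypothetical (height cm, weight kg) observations.
heights = [150, 155, 160, 165, 170]
weights = [45, 50, 53, 58, 62]

# Ordinary least squares: slope = covariance(x, y) / variance(x).
mean_x, mean_y = statistics.mean(heights), statistics.mean(weights)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, weights))
sxx = sum((x - mean_x) ** 2 for x in heights)

slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 1))  # fitted line: y = intercept + slope * x
print(round(intercept + slope * 162, 2))     # predicted weight at 162 cm
```

In a full analysis, a hypothesis test on the slope (is it different from zero?) decides whether the relationship generalizes beyond the sample.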

Example of inferential statistics


For this example, suppose we conducted our study on test scores for a specific class as I detailed in the
descriptive statistics section. Now we want to perform an inferential statistics study for that same test. Let’s
assume it is a standardized statewide test. By using the same test, but now with the goal of drawing inferences
about a population, I can show you how that changes the way we conduct the study and the results that we
present.
In descriptive statistics, we picked the specific class that we wanted to describe and recorded all of the test
scores for that class. Nice and simple. For inferential statistics, we need to define the population and then draw a
random sample from that population.
Let’s define our population as 8th-grade students in public schools in the State of Pennsylvania in the United
States. We need to devise a random sampling plan to help ensure a representative sample. This process can
actually be arduous. For the sake of this example, assume that we are provided a list of names for the entire
population and draw a random sample of 100 students from it and obtain their test scores. Note that these
students will not be in one class, but from many different classes in different schools across the state.
Inferential statistics results
For inferential statistics, we can calculate the point estimate for the mean, standard deviation, and proportion
for our random sample. However, it is staggeringly improbable that any of these point estimates are exactly
correct, and there is no way to know for sure anyway. Because we can’t measure all subjects in this population,
there is a margin of error around these statistics. Consequently, I’ll report the confidence intervals for the
mean, the standard deviation, and the proportion of satisfactory scores (>=70).
Given the uncertainty associated with these estimates, we can be 95% confident that the population mean is
between 77.4 and 80.9. The population standard deviation (a measure of dispersion) is likely to fall between 7.7
and 10.1. And, the population proportion of satisfactory scores is expected to be between 77% and 92%.
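As a sketch, the proportion interval can be approximated with the large-sample (Wald) formula. The count of 85 satisfactory scores out of 100 is hypothetical, and the original analysis may have used a different interval method, so the endpoints only roughly match those reported above:

```python
import math
import statistics

n, successes = 100, 85        # hypothetical: 85 of 100 scored at least 70
p_hat = successes / n

z = statistics.NormalDist().inv_cdf(0.975)  # 95% confidence
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)

lo, hi = p_hat - half_width, p_hat + half_width
print(f"{lo:.0%} to {hi:.0%}")
```

Notice how wide the interval is even with 100 students: proportions estimated from modest samples carry substantial uncertainty.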

Differences between Descriptive and Inferential Statistics


As you can see, the difference between descriptive and inferential statistics lies in the process as much as it
does the statistics that you report.
Ø For descriptive statistics, we choose a group that we want to describe and then measure all subjects in that
group. The statistical summary describes this group with complete certainty (outside of measurement error).
Ø For inferential statistics, we need to define the population and then devise a sampling plan that produces a
representative sample. The statistical results incorporate the uncertainty that is inherent in using a sample to
understand an entire population.
A study using descriptive statistics is simpler to perform. However, if you need evidence that an effect or
relationship between variables exists in an entire population rather than only your sample, you need to
use inferential statistics.

Grading Principles and Guidelines


One of the primary goals of a proficiency-based grading system is to produce grades that more accurately reflect
a student’s learning progress and achievement, including situations in which students struggled early on in a
semester or school year, but then put in the effort and hard work needed to meet expected standards. If you ask
nearly any adult, they will tell you that failures—and learning to overcome them—are often among the most
important lessons in life.
When building a proficiency-based grading and reporting system, schools should begin by developing—ideally, in
collaboration with faculty, staff, students, and families— a set of common principles and guidelines that apply to
all courses and learning experiences. The guidelines should represent the school’s grading philosophy, including
how grading will be used to support the educational process.
The following exemplar guidelines are offered as suggestions to schools as they implement a
proficiency-based learning system:
1. The primary purpose of the grading system is to clearly, accurately, consistently, and fairly
communicate learning progress and achievement to students, families, postsecondary institutions, and
prospective employers.
2. The grading system ensures that students, families, teachers, counselors, advisors, and support
specialists have the detailed information they need to make important decisions about a student’s education.
3. The grading system measures, reports, and documents student progress and proficiency against a set
of clearly defined cross-curricular and content-area standards and learning objectives collaboratively developed
by the administration, faculty, and staff.
4. The grading system measures, reports, and documents academic progress and achievement separately
from work habits, character traits, and behaviors, so that educators, counselors, advisors, and support
specialists can accurately determine the difference between learning needs and behavioral or work-habit needs.
5. The grading system ensures consistency and fairness in the assessment of learning, and in the
assignment of scores and proficiency levels against the same learning standards, across students, teachers,
assessments, learning experiences, content areas, and time.
6. The grading system is not used as a form of punishment, control, or compliance. In proficiency-based
learning systems, what matters most is where students end up—not where they started out or how they behaved
along the way. Meeting and exceeding challenging standards defines success, and the best grading systems
motivate students to work harder, overcome failures, and excel academically.

Importance of Statistics
The field of statistics is the science of learning from data. Statistical knowledge helps you use the proper
methods to collect the data, employ the correct analyses, and effectively present the results. Statistics is a crucial
process behind how we make discoveries in science, make decisions based on data, and make predictions.
Statistics allows you to understand a subject much more deeply.
There are two main reasons why studying the field of statistics is crucial in modern society. First, statisticians
are guides for learning from data and navigating common problems that can lead you to incorrect
conclusions. Second, given the growing importance of decisions and opinions based on data, it’s crucial that you
can critically assess the quality of analyses that others present to you. Statistics is an exciting field about the thrill
of discovery, learning, and challenging your assumptions. Statistics facilitates the creation of new knowledge. Bit
by bit, we push back the frontier of what is known.
Why are statistics important in our lives? Statistics are the sets of mathematical methods we use to analyze
data. They keep us informed about what is happening in the world around us. Statistics are important
because today we live in an information world, and much of this information is determined mathematically
with the help of statistics. To be correctly informed, sound data and statistical concepts are necessary.

Lesson Proper for Week 13


Conceptualization, Operationalization and Measurement
Measurement is an essential element of education research. It is of extreme importance for determining the
efficacy and legitimacy of educational practices that are now operationalized without benefit of professional
consensus. Measures can be evaluated on the basis of multiple criteria; a discussion of the more important of
these criteria is offered. Measures that prove viable are of special importance in three areas: the evaluation of
students, communication among educators, and the investigation of hypotheses.
Within the definition that is offered, measurement procedures may involve a variety of formats; these are briefly
reviewed and evaluated.
The level of measurement is about how each variable is measured – qualitatively or quantitatively – and how precise
each variable is. There are four levels of measurement – nominal, ordinal, interval, and ratio – with nominal
being the least precise and informative and ratio being the most precise and informative (interval and ratio are
often treated together as interval/ratio). Given a choice, choose an interval/ratio variable, as it gives you more
freedom when it comes to choosing an appropriate statistical technique.
Nominal level of measurement is the least precise and informative, because it only names the ‘characteristic’ or
‘identity’ we are interested in. In other words, in nominal variables, the numerical values just "name" the attribute
uniquely. In this case, numerical value is simply a label. For example, if you are interested in knowing the sex of
your respondents, typically there are only two categories – man versus woman, and let’s say you could mark a
male respondent as 1 and a female respondent as 0. Now when it comes to the sex variable in your study, if
respondent Jenny was marked as 0, that means the sex of Jenny is female.
Other Examples of Nominal Variable:
• Name – Charlie, Ann, Richard, Stephanie
• Geographic location – Luzon, Visayas, Mindanao, Philippines
• Zip Code – 14213, 14222, 14211
• Partisanship – Republican, Democrat, Independent
In ordinal measurement, the values stress the order or rank of the categories, but the differences between
them are not really known. You might consider yourself middle class, but how much better off are you compared to
a friend of yours who identified him/herself as lower class? In ordinal variables, the numerical values name the
attribute or characteristics but also allow us to place the categories in a natural and reasonable order.
Other Examples of Ordinal Variable:
• Likert scale – Strongly disagree; Disagree; Neither agree nor disagree; Agree; Strongly agree.
• Class standing – Freshman, sophomore, junior, senior
• Socioeconomic standing – Lower, middle, and upper class
• Quality of democracy – Very high, high, medium, low, very low
Because many social science and political science variables tend to be nominal (think of NAME) or ordinal (think
of ORDER), it is important that you are able to understand and distinguish them clearly.
Interval/ratio measurements provide the most information about any variable. For interval/ratio level
variables, not only can you order the values of the cases, but you also know the distance between them.
While with ordinal level variables we know the position of each case relative to the others, it is only at the
interval/ratio level that we know how far apart the case values are from one another.
Other Examples of Interval/Ratio Variable:
Country GDP - $2.35T; $6.42T; $675B; $1.43T
Prison Sentences – Six months; three years; 36 months; 120 days
Approval ratings – 32%; 67%; 51%; 92%
Gini coefficients – 0.21; 0.47; 0.12; 0.33
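The practical consequence of these levels can be sketched in a few lines of Python (all data below are hypothetical): the mode is the only safe summary for nominal data, the median is safe for ordinal data, while means and ratio comparisons require interval/ratio data.

```python
# Illustrative sketch of which summaries are meaningful at each level of
# measurement. All data are hypothetical.
from statistics import mean, median, mode

# Nominal: labels only -- counting/mode is meaningful, arithmetic is not.
sex = ["female", "male", "female", "female"]
print(mode(sex))                  # most frequent category: "female"

# Ordinal: order is meaningful, distances are not -- median/rank is safe.
likert = [1, 2, 2, 4, 5]          # 1 = strongly disagree ... 5 = strongly agree
print(median(likert))             # middle rank: 2

# Interval/ratio: distances (and, for ratio variables, ratios) are meaningful.
gdp_trillions = [2.35, 6.42, 0.675, 1.43]
print(mean(gdp_trillions))        # the mean is meaningful at this level
print(6.42 / 2.35)                # ratio comparisons are valid for ratio data
```

Computing a mean of nominal codes (e.g. averaging 0/1 sex labels) would run without error but produce a meaningless number, which is why the level of measurement must be decided before the analysis.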

Ratio Measure describing a variable with attributes that have all the qualities of nominal, ordinal and interval and
based on a ”true zero” point.
Ratio measure refers to the highest (most complex) level of measurement that a variable can possess. The
properties of a variable that is a ratio measure are the following: (a) Each value can be treated as a unique
category (as in a nominal measure); (b) different values have order of magnitude, such as greater than or less
than or equal to (as in an ordinal measure); (c) basic mathematical procedures can be conducted with the values,
such as addition and division (as with an interval measure); and (d) the variable can take on the value of zero. An
example of a ratio measure is someone's annual income.
A ratio measure may be expressed as either a fraction or a percentage.

KEY STEPS OF MEASUREMENT

Conceptualization


In conceptualization, you see the word concept, which means an idea. Don't think of a simple idea though, like
taking a walk. Imagine a complex concept involving many elements, so a little brain work is involved. When you
conceptualize, you either create a concept or you grasp one: the ability to invent or formulate an idea or concept.
The conceptualization phase of a project occurs in the initial design activity when the scope of the project is
drafted and a list of the desired design features and requirements is created.
Defining and agreeing on the definition of a concept. When researchers conceptualize a topic they search for
existing definitions of a given concept both generally (e.g., Google search) and academically (e.g., in journal
articles, textbooks, and definitions given by a respected academic/professional group) and use the results from
their search to conceptualize.
Purpose: Refinement and specification of abstract concepts
Example: You are studying the mental health outcomes for older adults with physical disabilities.
Thus you will need to conceptualize three things (1) mental health, (2) older adults, (3) physical disabilities.
(1) Mental Health: a person’s condition with regard to their psychological and emotional well-being.
(2) Older Adults: people who are 55 and older
(3) Physical Disabilities: is a limitation on a person's physical functioning, mobility, dexterity or stamina

Conceptualization is breaking down and converting research ideas into common meanings to develop an
agreement among the users. This process eventually leads to framing meaningful concepts which ultimately lead
to the creation of a theory.
Importance of conceptualization in research:
In deductive research, conceptualization helps to translate portions of an abstract theory into testable
hypotheses involving specific variables. In inductive research, conceptualization is an important part of the
process used to make sense of related observations.
Steps:
1. What is the topic? The first step of any project is to determine what you want to study.
2. What is my problem? Why should anyone care about my problem? You must then establish the problem
your project hopes to solve, including filling in a gap or extending the literature in a new and exciting direction.
Interval/Ratio variables give us the most amount of precision and information to work with. With that greater
precision and information, you have more freedom when it comes to statistical analysis.

Operationalization
This is the process by which researchers conducting quantitative research spell out precisely how a concept will
be measured. It involves identifying the specific research procedures we will use to gather data about our
concepts. This of course requires that we know what research method(s) we will employ to learn about our
concepts, and we’ll examine specific research methods later on in the text.
Development of specific research definitions that will result in empirical observations representing those concepts
in the real world. This is a process of strictly defining variables into measurable factors; thus you will need
hyper-specific operationalizations. This process defines “fuzzy” concepts and allows them to be measured empirically.
Purpose: To remove vagueness – all variables in the study must be defined.
Example: You are studying mental health outcomes for older adults with physical disabilities.
Thus you will need to operationalize your three conceptualizations of (1) mental health,
(2) older adults, (3) physical disabilities.
(1) Mental Health: a person's condition with regard to their psychological and emotional well-being including
stress, anxiety, depression, and loneliness.
(2) Older Adults: people who are 55-85 years old
(3) Physical Disabilities: a limitation on a person's physical functioning that is related to
accomplishing instrumental activities of daily living (IADLS)
Note: Operationalizations are usually similar across studies but are often different in their specifics. For example,
some studies operationalize older adults as someone who is over 65, some measure mental health by a
diagnosis by a mental health professional of certain conditions such as depression or schizophrenia, and other
studies operationalize physical disability by specific diagnosis of a given condition.
Operationalization works by identifying specific indicators that will be taken to represent the ideas we are
interested in studying. Operationalization is the term used to describe how a variable is clearly defined by the
researcher. The term can be applied to independent variables (IV), dependent variables (DV), or co-variables
(in a correlational design).
Operationalization means turning abstract concepts into measurable observations. Although some concepts,
like height or age, are easily measured, others, like spirituality or anxiety, are not. Through operationalization,
you can systematically collect data on processes and phenomena that aren't directly observable.

Operationalization is an essential component in a theoretically centered science because it provides the means
of specifying exactly how a concept is being measured or produced in a particular study.

Indicators
Survey or interview questions used to measure study variables defined and outlined through the operational
definition
Purpose: To generate questions that directly relate to a study's topic
Example: You are studying mental health outcomes for older adults with physical disabilities.
Using the operationalizations for the study you will design indicators. Each indicator should serve a specific
purpose in the study.
Operationalization for mental health: a person's condition with regard to their psychological and emotional well-
being, including stress, anxiety, depression, and loneliness.
Indicator for stress: Perceived Stress Scale (PSS)
Indicators are established measures used to determine how well a result has been achieved in a particular area
of interest. For example, the rate of formal school qualifications helps quantify whether students are succeeding
at school. Indicators are used at different levels of the education system for different purposes.
At the national level, they provide a means of evaluating how well the system is performing in particular areas of
policy interest, for example: education and learning outcomes, student engagement and participation, family and
community engagement, and resourcing. This information is supplemented by a range of demographic and
contextual data and by ERO’s national reports on education issues and effective education practice.
Key Performance Indicators (KPIs)

KPIs in education
A key performance indicator (KPI) is a type of performance measurement that helps you understand how your
organization, department, or institution is performing and allows you to understand if you're headed in the right
direction with your strategy.
Here are the 5 Key Indicators of School Performance:
Ø Student Achievement
Ø Discipline Referrals
Ø Attendance Rates
Ø Graduation Rates
Ø Teacher Satisfaction
Key Performance Indicators (KPIs) are the elements of your plan that express what you want to achieve
by when. They are the quantifiable, outcome-based statements you'll use to measure if you're on track to
meet your goals or objectives. Good plans use 5-7 KPIs to manage and track the progress of their plan.

Differences

Conceptualization VS Operationalization
Conceptualization:
Ø It is the process of defining or specifying concepts.
Ø Involves defining or specifying what we mean when using certain terms.
Ø The main purpose is refining and specifying abstract concepts.
Ø First step in the measurement process.

Operationalization:
Ø It is the process by which a researcher precisely specifies how a concept will be measured.
Ø Involves developing specific research definitions that will bring about empirical observations representing
those concepts in the real world.
Ø The main purpose is removing vagueness and making sure that concepts are measurable.
Ø Second step in the measurement process.
Why do we need to study conceptualization, operationalization, measurement?
Research is always based on reliable data and the methods used to capture these data. Scientific methods
facilitate this process to obtain quality output in research. Formulating the research problem is the first step in
research. It is at this stage that the researcher should have a clear understanding of the words and terms
used in the research, so that no conflicts arise later regarding their interpretation and measurement.
This necessitates understanding the conceptualization process.
For many fields, such as social science, which often use ordinal measurements, operationalization is essential. It
determines how the researchers are going to measure an emotion or concept, such as the level of distress or
aggression.
Measurement is important in research. Measurement aims to ascertain the dimension, quantity, or capacity of the
behaviors or events that researchers want to explore. Thus, researchers can interpret the data with
quantitative conclusions, which leads to more accurate and standardized outcomes.

Lesson Proper for Week 14


How to Construct an Index for Research
To account for a concept’s dimensions a researcher might rely on indexes, scales, or typologies. An index is a
type of measure that contains several indicators and is used to summarize some more general concept. Like an
index, a scale is also a composite measure. But unlike indexes, scales are designed in a way that accounts for
the possibility that different items on an index may vary in intensity. A typology, on the other hand, is a way of
categorizing concepts according to particular themes.
This focus considers the characteristics of an index and the typical steps followed in the construction of such a
variable. From the literature, selective approaches to index construction are more fully described, with particular
emphasis on the steps and methods used during the process. This results in the formulation of some key
considerations that can be regarded as highly relevant in the construction of a commercial farming sophistication
index.

CHARACTERISTICS OF AN INDEX
Both Babbie (2011:169) and Spector (1992:1) make reference to various characteristics of index
variables.
• Firstly, an index is derived from multiple items. This means that the items are summated or combined,
thereby converting a specific procedure into a single measurement or scale.
• Secondly, the individual items that form the basis of the index measure something that is underlying,
quantitative and on a measurement continuum. Index variables are therefore typically ordinal in nature.
• Thirdly, an answer or response to an item cannot be classified in terms of ‘right’ or ‘wrong’. An index
variable therefore constitutes a scale measurement that is indicative of some hypothetical construct that can
typically not be measured by a single question or item. Higher index values might indicate ‘more of’ and lower
values ‘less of’, with neither being ‘right’ or ‘wrong’.
• Lastly, a good index is evaluated in terms of its reliability and validity. Both these aspects are
considered as part of the last step in index construction.

How to Construct an Index for Research


An index is a composite measure of variables, or a way of measuring a construct--like religiosity or racism--using
more than one data item. An index is an accumulation of scores from a variety of individual items. To create one,
you must select possible items, examine their empirical relationships, score the index, and validate it.
Item Selection
The first step in creating an index is selecting the items you wish to include in the index to measure the variable
of interest. There are several things to consider when selecting the items. First, you should select items that have
face validity. That is, the item should measure what it is intended to measure. If you are constructing an index of
religiosity, items such as church attendance and frequency of prayer would have face validity because they
appear to offer some indication of religiosity.
A second criterion for choosing which items to include in your index is unidimensionality. That is, each item
should represent only one dimension of the concept you are measuring. For example, items reflecting depression
should not be included in items measuring anxiety, even though the two might be related to one another.
Third, you need to decide how general or specific your variable will be. For example, if you only wish to measure
a specific aspect of religiosity, such as ritual participation, then you would only want to include items that
measure ritual participation, such as church attendance, confession, communion, etc. If you are measuring
religiosity in a more general way, however, you would want to also include a more balanced set of items that
touch on other areas of religion (such as beliefs, knowledge, etc.).
Lastly, when choosing which items to include in your index, you should pay attention to the amount
of variance that each item provides. For example, if an item is intended to measure religious conservatism, you
need to pay attention to what proportion of respondents would be identified as religiously conservative by that
measure. If the item identifies nobody as religiously conservative or everyone as a religiously conservative, then
the item has no variance and it is not a useful item for your index.

Examining Empirical Relationships


The second step in index construction is to examine the empirical relationships among the items you wish to
include in the index. An empirical relationship is when respondents’ answers to one question help us predict how
they will answer other questions. If two items are empirically related to each other, we can argue that both items
reflect the same concept and we can, therefore, include them in the same index. To determine if your items are
empirically related, crosstabulations, correlation coefficients, or both may be used.
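The correlation check described above can be sketched as follows. The yes/no responses are hypothetical, and Pearson's r is computed by hand here rather than with any particular statistics package:

```python
# A minimal sketch of checking whether two candidate index items are
# empirically related, using Pearson's correlation coefficient.
# All respondent data are hypothetical.

def pearson(x, y):
    """Pearson's r for two equal-length lists of numbers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# 1 = "yes", 0 = "no" for two religiosity items across six respondents.
attends_church = [1, 1, 0, 1, 0, 0]
prays_daily    = [1, 1, 0, 0, 0, 1]

r = pearson(attends_church, prays_daily)
print(round(r, 2))  # an r closer to 1 suggests the items reflect the same concept
```

An r near zero would suggest the two items do not reflect the same underlying concept, and one of them should probably be dropped from the index.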

Index Scoring
The third step in index construction is scoring the index. After you have finalized the items you are including in
your index, you then assign scores for particular responses, thereby making a composite variable out of your
several items. For example, let’s say you are measuring religious ritual participation among Catholics and the
items included in your index are church attendance, confession, communion, and daily prayer, each with a
response choice of "yes, I regularly participate" or "no, I do not regularly participate." You might assign a 0 for
"does not participate" and a 1 for "participates." Therefore, a respondent could receive a final composite score of
0, 1, 2, 3, or 4 with 0 being the least engaged in Catholic rituals and 4 being the most engaged.
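The scoring rule just described can be sketched in a few lines (the respondent data are hypothetical):

```python
# Sketch of the index-scoring step: each "yes" response contributes 1 point,
# giving a composite ritual-participation score from 0 to 4.
# The respondent data below are hypothetical.

ITEMS = ["church attendance", "confession", "communion", "daily prayer"]

def ritual_index(responses):
    """responses maps each item to True ('yes, I regularly participate')."""
    return sum(1 for item in ITEMS if responses.get(item))

respondent = {
    "church attendance": True,
    "confession": False,
    "communion": True,
    "daily prayer": True,
}
print(ritual_index(respondent))  # → 3
```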

Index Scoring
• After finalizing the items to be included, scores/weights are assigned for particular responses, thereby
making a composite variable out of the several items.
• Unweighted aggregate index - each item score is weighted equally.
• Multivariate statistical techniques, such as exploratory factor analysis and principal component analysis
could be considered in the construction of the index.
• Both methods work by assigning different weights to items through the calculation of factor scores.
• Weight assignment can be done in four ways:
Ø equal weights among items;
Ø theoretically categorized weights;
Ø schematic weights; and
Ø variable weights.

Index Validation
The final step in constructing an index is validating it. Just like you need to validate each item that goes into the
index, you also need to validate the index itself to make sure that it measures what it is intended to measure.
There are several methods for doing this. One is called item analysis in which you examine the extent to which
the index is related to the individual items that are included in it. Another important indicator of an index’s validity
is how well it accurately predicts related measures. For example, if you are measuring political conservatism,
those who score the most conservative in your index should also score conservative in other questions included
in the survey.
Broad aims and objectives envisaged for the construction of a specific index
Ø Aim one: To construct an index that is a measure of a specific construct. In other words, to present an index
that is one-dimensional.
Ø Aim two: To construct an index using a combination of variables that could measure the construct better than
any single variable.
Ø Aim three: To construct an index that is a direct measure of the construct, based on non-monetary descriptive
indicators.
Ø Aim four: Items identified for the construction of the index should, on face value, relate to the construct being
measured. This suggests that secondary data can be utilised as a source for the construction of the index, given
that the data are evaluated to be valid in the context of the study.
Ø Aim five: To present an index that is reliable and valid. In other words, the commercial farming sophistication
index should measure what it is supposed to measure. This relates to construct validity, which can be
decomposed into the assessments of convergent, discriminant and nomological validity. In addition, it should
provide scores that are consistent across repeated measures. This relates to the reliability of measurement.
Ø Aim six: To construct an index that has broad application value across the full spectrum of the market,
allowing for sub-group analysis, including examination of individual groups and comparisons between groups.
Ø Aim seven: To present a measurement process that is useful in future surveys from which separate samples
are drawn. In other words, the calculation of index scores should be a simple procedure, and easily replicated by
a wide range of researchers and survey practitioners across other surveys conducted in the market. Scores
should be readily interpretable.
Ø Aim eight: To construct an index that is stable over time, but sensitive enough to register changes. In other
words, to provide scores that would make trend analysis possible.
Ø Aim nine: To present a standard set of index score intervals that segments the market. These intervals will
provide a practical and standardised procedure that other researchers can follow in future to segment the market.

Assumptions and model


In order to achieve the stated aims, three key assumptions underlie the index construction process, namely, that
the measured construct can be:
• Represented by a single underlying continuous dimension that is the source of the associations between a
number of observable variables;
• Ranked along a continuum, reflecting levels of the measured construct; and
• Used as a base or descriptor variable in market segmentation studies.

Derivation of a general use form of the index scale


Resulting from the above three steps, index scores are produced that might not necessarily be easily replicable
in future studies. Due to the typically large number of variables that are used in the original index construction, a
reduction of the number of explanatory variables is sought. This is achieved by identifying those variables that
have the greatest discriminatory power, and then weighting them optimally. This produces a general-use form of
the index that could be easily calculated by other researchers and survey practitioners without depending on
using advanced multivariate statistical techniques. Weighting is done in such a way that each variable carries a
different weight, positive or negative. An element’s position on the index scale could then be derived by adding
together the calculated weights of the variables. In some instances, a constant is also added to the total score to
remove negative total scores.
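A hedged sketch of this weighting scheme follows; the variable names, weights, and constant are entirely made up for illustration:

```python
# Sketch of a general-use index: each variable carries its own (possibly
# negative) weight, and a constant shifts totals so no score is negative.
# All names and numbers below are hypothetical, not from any real index.

WEIGHTS = {"irrigation": 2.0, "mechanisation": 1.5, "subsistence_only": -1.0}
CONSTANT = 1.0  # chosen so the lowest possible total is not negative

def index_score(values):
    """values maps each variable name to 0/1 (absent/present)."""
    return CONSTANT + sum(WEIGHTS[v] * values.get(v, 0) for v in WEIGHTS)

farm = {"irrigation": 1, "mechanisation": 1, "subsistence_only": 0}
print(index_score(farm))  # → 4.5
```

Because the scoring reduces to a weighted sum plus a constant, other researchers can replicate it with a spreadsheet, without rerunning the multivariate analysis that produced the weights.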

Application value of index


It is evident from the review that the three derived indices offered extensive value to practitioners, in particular as
a method of market segmentation. The antecedents of market segmentation were extensively discussed in
Section 2.5 and apply to these three approaches also. These include aspects such as a better
understanding of the market, assisting in the design of more suitable marketing strategies and programmes,
helping businesses focus on those buyers that have the greatest chance of being satisfied, as well as identifying
new marketing opportunities, and the more effective allocation of financial and other resources.

The Differences Between Indexes and Scales


Indexes and scales are important and useful tools in social science research. They have both similarities and
differences among them. An index is a way of compiling one score from a variety of questions or statements that
represents a belief, feeling, or attitude. Scales, on the other hand, measure levels of intensity at the variable
level, like how much a person agrees or disagrees with a particular statement. If you are conducting a social
science research project, chances are good that you will encounter indexes and scales. If you are creating your
own survey or using secondary data from another researcher’s survey, indexes and scales are almost
guaranteed to be included in the data.
Indexes in Research
Indexes are very useful in quantitative social science research because they provide a researcher a way to
create a composite measure that summarizes responses for multiple rank-ordered related questions or
statements. In doing so, this composite measure gives the researcher data about a research participant's view on
a certain belief, attitude, or experience.
For example, let’s say a researcher is interested in measuring job satisfaction and one of the key variables is job-
related depression. This might be difficult to measure with simply one question. Instead, the researcher can
create several different questions that deal with job-related depression and create an index of the included
variables. To do this, one could use four questions to measure job-related depression, each with the response
choices of "yes" or "no":
• "When I think about myself and my job, I feel downhearted and blue."
• "When I’m at work, I often get tired for no reason."
• "When I’m at work, I often find myself restless and can’t keep still."
• "When at work, I am more irritable than usual."
To create an index of job-related depression, the researcher would simply add up the number of "yes" responses
for the four questions above. For example, if a respondent answered "yes" to three of the four questions, his or
her index score would be three, meaning that job-related depression is high. If a respondent answered no to all
four questions, his or her job-related depression score would be 0, indicating that he or she is not depressed in
relation to work.

Scales in Research
A scale is a type of composite measure that is composed of several items that have a logical or empirical
structure among them. In other words, scales take advantage of differences in intensity among the indicators of a
variable. The most commonly used scale is the Likert scale, which contains response categories such as
"strongly agree," "agree," "disagree," and "strongly disagree." Other scales used in social science research
include the Thurstone scale, Guttman scale, Bogardus social distance scale, and the semantic differential scale.
For example, a researcher interested in measuring prejudice against women could use a Likert scale to do so.
The researcher would first create a series of statements reflecting prejudiced ideas, each with the response
categories of "strongly agree," "agree," "neither agree nor disagree," "disagree," and "strongly disagree." One of
the items might be "women shouldn’t be allowed to vote," while another might be "women can’t drive as well as
men." We would then assign each of the response categories a score of 0 to 4 (0 for "strongly disagree," 1 for
"disagree," 2 for "neither agree or disagree," etc.). The scores for each of the statements would then be added for
each respondent to create an overall score of prejudice. If a respondent answered "strongly agree" to five
statements expressing prejudiced ideas, his or her overall prejudice score would be 20, indicating a very high
degree of prejudice against women.
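The Likert scoring just described can be sketched as follows (the response labels match the scale above; the respondent data are hypothetical):

```python
# Sketch of Likert-scale scoring: each response maps to 0-4 and the
# per-statement scores are summed into an overall score.
# The respondent answers below are hypothetical.

SCORES = {
    "strongly disagree": 0,
    "disagree": 1,
    "neither agree nor disagree": 2,
    "agree": 3,
    "strongly agree": 4,
}

def likert_total(responses):
    """Sum the 0-4 score for each response in the list."""
    return sum(SCORES[r] for r in responses)

answers = ["strongly agree"] * 5   # "strongly agree" to all five statements
print(likert_total(answers))       # → 20, a very high degree of prejudice
```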

Compare and Contrast


Scales and indexes have several similarities. First, they are both ordinal measures of variables. That is, they
both rank-order the units of analysis in terms of specific variables. For example, a person’s score on either a
scale or index of religiosity gives an indication of his or her religiosity relative to other people. Both scales and
indexes are composite measures of variables, meaning that the measurements are based on more than one data
item. For instance, a person’s IQ score is determined by his or her responses to many test questions, not simply
one question.
Even though scales and indexes are similar in many ways, they also have several differences. First, they are
constructed differently. An index is constructed simply by accumulating the scores assigned to individual items.
For example, we might measure religiosity by adding up the number of religious events the respondent engages
in during an average month.
A scale, on the other hand, is constructed by assigning scores to patterns of responses with the idea that some
items suggest a weak degree of the variable while other items reflect stronger degrees of the variable. For
example, if we are constructing a scale of political activism, we might score "running for office" higher than simply
"voting in the last election." "Contributing money to a political campaign" and "working on a political campaign"
would likely score in between. We would then add up the scores for each individual based on how many items
they participated in and then assign them an overall score for the scale.

Ordinal Scale
The ordinal scale is the 2nd level of measurement that reports the ordering and ranking of data without
establishing the degree of variation between them. Ordinal represents the “order.” Ordinal data is known as
qualitative data or categorical data. It can be grouped, named, and also ranked.
Characteristics of the Ordinal Scale
• The ordinal scale shows the relative ranking of the variables
• It identifies and describes the magnitude of a variable
• Along with the information provided by the nominal scale, ordinal scales give the rankings of those variables
• The interval properties are not known
• The surveyors can quickly analyse the degree of agreement concerning the identified order of variables
Example:
• Ranking of school students – 1st, 2nd, 3rd, etc.
• Ratings in restaurants
• Evaluating the frequency of occurrences
• Very often
• Often
• Not often
• Not at all
• Assessing the degree of agreement
• Totally agree
• Agree
• Neutral
• Disagree
• Totally disagree

Interval Scale
The interval scale is the 3rd level of measurement scale. It is defined as a quantitative measurement scale in
which the difference between two values is meaningful. In other words, the variables are measured in an
exact manner, although the position of zero is arbitrary (there is no true zero point).
Characteristics of Interval Scale:
• The interval scale is quantitative as it can quantify the difference between the values
• It allows calculating the mean and median of the variables
• To understand the difference between the variables, you can subtract the values between the variables
• The interval scale is the preferred scale in Statistics, as it helps assign numerical values to arbitrary
assessments such as feelings, calendar types, etc.

Example:
• Likert Scale
• Net Promoter Score (NPS)
• Bipolar Matrix Table
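A quick way to see the "arbitrary zero" property is with temperature, a classic interval-scale variable (the values below are illustrative): differences are meaningful, but ratios are not, because converting units changes the ratio.

```python
# Interval data: temperature in degrees Celsius (illustrative values).
morning, noon = 10.0, 20.0

difference = noon - morning
print(difference)  # 10.0 -- a meaningful 10-degree rise

# But 20 C is NOT "twice as hot" as 10 C: zero is arbitrary, so the
# ratio changes when the same temperatures are expressed in Fahrenheit.
def to_f(c):
    return c * 9 / 5 + 32

print(to_f(noon) / to_f(morning))  # 68 / 50 = 1.36, not 2.0
```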

Ratio Scale
The ratio scale is the 4th level of measurement scale, which is quantitative. It is a type of variable measurement
scale. It allows researchers to compare the differences or intervals. The ratio scale has a unique feature: it
possesses the character of an origin, or true zero point.
Characteristics of Ratio Scale:
• Ratio scale has a feature of absolute zero
• It doesn’t have negative numbers, because of its zero-point feature
• It affords unique opportunities for statistical analysis: the variables can be added, subtracted,
multiplied, and divided, and the mean, median, and mode can all be calculated on a ratio scale.
• Ratio scale has unique and useful properties. One such feature is that it allows unit conversions like
kilogram – calories, gram – calories, etc.

An example of a ratio scale is:
What is your weight in Kgs?
• Less than 55 kgs
• 55 – 75 kgs
• 76 – 85 kgs
• 86 – 95 kgs
• More than 95 kgs
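The true zero of a ratio scale is what makes statements like "twice as heavy" legitimate, and it survives unit conversion. A small sketch with illustrative weights:

```python
# Ratio data: weight in kilograms (illustrative values).
w1_kg, w2_kg = 50.0, 100.0
print(w2_kg / w1_kg)  # 2.0 -- genuinely "twice as heavy"

# Because 0 kg corresponds to 0 lb, the ratio is unchanged after
# converting units -- unlike the interval-scale temperature case.
KG_TO_LB = 2.20462
ratio_in_lb = (w2_kg * KG_TO_LB) / (w1_kg * KG_TO_LB)
print(ratio_in_lb)  # still 2.0
```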

Conclusion
An index is a way of compiling one score from a variety of questions or statements that represents a belief,
feeling, or attitude. Scales, on the other hand, measure levels of intensity at the variable level, like how much a
person agrees or disagrees with a particular statement.
A scale is an index that in some sense only measures one thing. For example, a final exam in a given course
could be thought of as a scale: it measures competence in a single subject. In contrast, a person's GPA can be
thought of as an index: it is a combination of a number of separate, independent competencies.

Lesson Proper for Week 15


Scales
The Likert scale is one of the most commonly used scales in the research community. The scale consists of
assigning a numerical value to intensity (or neutrality) of emotion about a specific topic, and then attempts to
standardize these response categories to provide an interpretation of the relative intensity of items on the scale.
Responses such as “strongly agree,” “moderately agree,” “moderately disagree,” and “strongly disagree” are
responses that would likely be found in a Likert scale, or in a survey based upon the scale.
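Scoring a Likert item is just mapping each response label to a number; negatively worded items are typically reverse-keyed so that higher totals always mean stronger agreement with the construct. A minimal sketch using the response labels from the text (the reverse-keying step is standard practice, not something the text describes):

```python
# Map the Likert response labels from the text to 1-5 scores.
SCORES = {
    "strongly disagree": 1, "moderately disagree": 2, "neutral": 3,
    "moderately agree": 4, "strongly agree": 5,
}

def item_score(response, reverse=False):
    """Score one item; reverse-keyed items flip the 1-5 range (5 -> 1)."""
    s = SCORES[response]
    return 6 - s if reverse else s

# One positively worded item plus one reverse-keyed item.
total = item_score("strongly agree") + item_score("moderately agree", reverse=True)
print(total)  # 5 + 2 = 7
```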
The semantic differential scale is similar to Likert scaling, however, rather than allowing varying degrees of
response, it asks the respondent to rate something in terms of only two completely opposite adjectives.
An example of a scale used in real-life situations is the Bogardus Social Distance Scale. This scale, developed
by Emory Bogardus is used to determine people’s willingness to associate and socialize with people who are
unlike themselves, including those of other races, religions, and classes.
Thurstone scaling is quite unlike Bogardus or Likert scaling. Developed by Louis Thurstone, this scale is a
format that seeks to use respondents both to answer survey questions, and to determine the importance of the
questions. One group of respondents, a group of “judges,” assign various weights to different variables, while
another group actually answers the questions on the survey.
Guttman scaling, developed by Louis Guttman, is the type of scaling used most today. Guttman scaling, like the
Thurstone scale, recognizes that different questions provide different intensities of indication of preferences. It is
based upon the assumption that the agreement with the strongest indicators also signifies agreement with
weaker indicators. It uses a simple “agree” or “disagree” scale, without any variation in the intensities of
preference.
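Guttman's cumulative assumption can be expressed as a simple pattern check: if the items are ordered from weakest to strongest indicator, a valid response pattern never agrees with a stronger item after disagreeing with a weaker one. This is a hypothetical illustration of the idea, not a full scalogram analysis:

```python
def is_guttman_pattern(responses):
    """True if responses (ordered weakest -> strongest item) are cumulative:
    no 'agree' (1) may appear after a 'disagree' (0)."""
    seen_disagree = False
    for agreed in responses:
        if agreed and seen_disagree:
            return False  # agreed with a stronger item but skipped a weaker one
        if not agreed:
            seen_disagree = True
    return True

print(is_guttman_pattern([1, 1, 0, 0]))  # True: agrees only up to some point
print(is_guttman_pattern([1, 0, 1, 0]))  # False: skips a weaker item
```

In real Guttman analysis, patterns that violate this structure are counted as "errors" and used to compute a coefficient of reproducibility for the scale.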
There are two common misconceptions about scaling. The first is to forget that whether a set of items combines
into a scale depends on the observations in the sample being studied: items that form a scale in one sample may
not do so in another, so a combination of items cannot be assumed to scale simply because it did so earlier in the
study. The second misconception concerns specific scales: it is the given items or data that determine what
constitutes a scale in a particular sample, rather than there being a scale in itself.

Scales versus Indices


In general, scales are considered to function better than indices, because scales usually take into account the
intensity of the questions they ask and the feelings they measure, even though both are ordinal measures.
One example of a weighted index is the Bureau of Labor Statistics' Consumer Price Index (CPI), which
represents the sum of the prices of goods that a typical consumer would purchase. When computing this index,
the goods are weighted according to how many of them are purchased in the general population (relative to other
goods), so that items purchased with greater frequency will have a greater impact on the value of the index.
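The weighted-index idea can be sketched in code. The basket, prices, and quantities below are invented, and the real CPI methodology is far more elaborate; the point is only that frequently purchased goods move the index more.

```python
# Hypothetical consumer basket: price per unit and units purchased.
basket = {
    "bread": {"price": 2.50, "quantity": 4},
    "milk":  {"price": 1.80, "quantity": 6},
    "fuel":  {"price": 3.20, "quantity": 10},
}

def weighted_index(basket):
    """Sum of price * quantity: each good's weight is how often it is bought."""
    return sum(g["price"] * g["quantity"] for g in basket.values())

print(weighted_index(basket))  # 2.5*4 + 1.8*6 + 3.2*10 = 52.8
```

Doubling the price of fuel (weight 10) would raise this index far more than doubling the price of bread (weight 4), which is exactly the weighting behavior the text describes.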
Scales of measurement in research and statistics are the different ways in which variables are defined and
grouped into different categories. Sometimes called the level of measurement, it describes the nature of the
values assigned to the variables in a data set.
The term scale of measurement is derived from two keywords in statistics, namely; measurement and scale.
Measurement is the process of recording observations collected as part of a research.
Scaling, on the other hand, is the assignment of objects to numbers or semantics. These two words merged
together refer to the relationship between the assigned objects and the recorded observations.

What is a Measurement Scale


A measurement scale is used to qualify or quantify data variables in statistics. It determines the kind of
techniques to be used for statistical analysis.
There are different kinds of measurement scales, and the type of data being collected determines the kind of
measurement scale to be used for statistical measurement. These measurement scales are four in number,
namely; nominal scale, ordinal scale, interval scale, and ratio scale.
The measurement scales are used to measure qualitative and quantitative data. With nominal and ordinal scale
being used to measure qualitative data while interval and ratio scales are used to measure quantitative data.
Characteristics of a Measurement Scale

Identity
Identity refers to the assignment of numbers to the values of each variable in a data set. Consider a
questionnaire that asks for a respondent's gender with the options Male and Female for instance. The values 1
and 2 can be assigned to Male and Female respectively.
Arithmetic operations cannot be performed on these values because they are just for identification purposes.
This is a characteristic of a nominal scale.
Magnitude
The magnitude is the size of a measurement scale, where numbers (the identity) have an inherent order from
least to highest. They are usually represented on the scale in ascending or descending order. The positions in a
race, for example, are arranged from 1st, 2nd, and 3rd down to the last.
This example is measured on an ordinal scale because it has both identity and magnitude.

Equal intervals
Equal intervals means that the scale has a standardized order; that is, the difference between each level on the
scale is the same. This is not the case for the ordinal scale example highlighted above.
Each position does not have an equal interval difference. In a race, the 1st position may complete the race in 20
secs, 2nd position in 20.8 seconds while the 3rd in 30 seconds.
A variable that has identity, magnitude, and equal intervals is measured on an interval scale.

Absolute zero
Absolute zero is a feature that is unique to a ratio scale. It means that zero exists on the scale,
and it is defined by the absence of the variable being measured (e.g. no qualification, no money, does not identify
as any gender, etc.).

Types of Measurement Scale


There are two main types of measurement scales:
Ø comparative scales and
Ø non-comparative scales.

Comparative Scales
In comparative scaling, respondents are asked to make a comparison between one object and the other. When
used in market research, customers are asked to evaluate one product in direct comparison to the others.
Comparative scales can be further divided into the pair comparison, rank order, constant sum and q-sort scales.

Paired Comparison Scale


Paired Comparison scale is a scaling technique that presents the respondents with two objects at a time and
asks them to choose one according to a predefined criterion. Product researchers use it in comparative product
research by asking customers to choose the product they prefer between two closely related products.
For example, there are 3 new features in the last release of a software product. But the company is planning to
remove 1 of these features in the new release. Therefore, the product researchers are performing a comparative
analysis of the most and least preferred feature.
1. Which feature is most preferred to you between the following pairs?
• Filter - Voice recorder
• Filter - Video recorder
• Voice recorder - Video recorder

Rank Order Scale
In rank order scaling technique, respondents are simultaneously provided with multiple options and asked to
rank them in order of priority based on a predefined criterion. It is mostly used in marketing to measure
preference for a brand, product, or feature.
When used in competitive analysis, the respondent may be asked to rank a group of brands in terms of personal
preference, product quality, customer service, etc. The results of this data collection are usually analyzed through
conjoint analysis, as the technique forces customers to discriminate among options.
The rank order scale is a type of ordinal scale because it orders the attributes from the most preferred to the least
preferred but does not have a specific distance between the attributes.
For example:
Rank the following brands from the most preferred to the least preferred.
• Coca-Cola
• Pepsi Cola
• Dr Pepper
• Mountain Dew
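Rank-order responses like these are often aggregated by averaging each option's rank across respondents, with the lowest average winning (a simple Borda-style tally; the individual rankings below are invented for illustration):

```python
from collections import defaultdict

# Three hypothetical respondents, each ranking all four brands (1st = best).
rankings = [
    ["Coca-Cola", "Pepsi Cola", "Mountain Dew", "Dr Pepper"],
    ["Pepsi Cola", "Coca-Cola", "Dr Pepper", "Mountain Dew"],
    ["Coca-Cola", "Mountain Dew", "Pepsi Cola", "Dr Pepper"],
]

totals = defaultdict(int)
for ranking in rankings:
    for position, brand in enumerate(ranking, start=1):
        totals[brand] += position  # lower total = more preferred overall

avg_rank = {brand: total / len(rankings) for brand, total in totals.items()}
winner = min(avg_rank, key=avg_rank.get)
print(winner)  # Coca-Cola: average rank (1 + 2 + 1) / 3
```

Note that the averages are only a summary device: the underlying data are still ordinal, so the distances between average ranks should not be over-interpreted.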

Constant Sum Scale


Constant Sum scale is a type of measurement scale where the respondents are asked to allocate a constant sum
of units such as points, dollars, chips or chits among the stimulus objects according to some specified criterion.
The constant sum scale assigns a fixed number of units to each attribute, reflecting the importance a respondent
attaches to it.
This type of measurement scale can be used to determine what influences a customer's decision when choosing
which product to buy. For example, you may wish to determine how important price, size, fragrance, and
packaging are to a customer when choosing which brand of perfume to buy.
Some of the major setbacks of this technique are that respondents may be confused and end up allocating more
or fewer points than those specified. The researchers are left to deal with a group of data that is not uniform and
may be difficult to analyze.
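A simple validity check at collection time can catch the over- and under-allocation problem described above. A minimal sketch, assuming a fixed sum of 100 points and the perfume attributes from the example:

```python
TOTAL = 100  # the constant sum every respondent must allocate

def valid_allocation(points):
    """True only if the points allocated across attributes sum to TOTAL."""
    return sum(points.values()) == TOTAL

response = {"price": 40, "size": 10, "fragrance": 35, "packaging": 15}
print(valid_allocation(response))  # True: 40 + 10 + 35 + 15 == 100

bad = {"price": 50, "size": 20, "fragrance": 35, "packaging": 15}
print(valid_allocation(bad))  # False: 120 points were allocated
```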
Avoid this with the logic feature on Formplus. This feature allows you to add a restriction that prevents the
respondent from adding more or fewer points than specified to your form.

Q-Sort Scale
Q-Sort scale is a type of measurement scale that uses a rank order scaling technique to sort similar objects with
respect to some criterion. The respondents sort a number of statements or attitudes into piles, usually 11 of them.
The Q-Sort Scaling helps in assigning ranks to different objects within the same group, and the differences
among the groups (piles) are clearly visible. It is a fast way of facilitating discrimination among a relatively large
set of attributes.
For example, a new restaurant that is just preparing its menu may want to collect some information about what
potential customers like:
The document provided contains a list of 50 meals. Please choose 10 meals you like, 30 meals you are neutral
about (neither like nor dislike) and 10 meals you dislike.

Non-Comparative Scales
In non-comparative scaling, customers are asked to only evaluate a single object. This evaluation is totally
independent of the other objects under investigation. Sometimes called a monadic or metric scale, the
non-comparative scale can be further divided into continuous and itemized rating scales.

Continuous Rating Scale
In a continuous rating scale, respondents are asked to rate the objects by placing a mark appropriately on a line
running from one extreme of the criterion to the other. Also called the graphic rating scale, it
gives the respondent the freedom to place the mark anywhere based on personal preference. Once the ratings
are obtained, the researcher splits the line into several categories and then assigns scores depending on
the category in which the ratings fall. This rating can be visualized in both horizontal and vertical form.
Although easy to construct, the continuous rating scale has some major setbacks, giving it limited usage in
market research.

Itemized Rating Scale


The itemized rating scale is a type of ordinal scale that assigns a number to each attribute. Respondents are usually
asked to select the attribute that best describes their feelings regarding a predefined criterion.
The itemized rating scale is further divided into three, namely: the Likert scale, the Stapel scale, and the semantic differential scale.
• Likert Scale: A Likert scale is an ordinal scale with five response categories, which is used to order a list
of attributes from the best to the least. This scale uses adverbs of degree like very strongly, highly, etc. to
indicate the different levels.
• Stapel Scale: This is a scale with 10 categories, usually ranging from -5 to 5 with no zero point. It is a vertical
scale with 3 columns, where the attributes are placed in the middle column and the lowest (-5) and highest (5)
values fall in the 1st and 3rd columns respectively.
• Semantic Differential Scale: This is a seven-point rating scale with endpoints associated with bipolar
labels (e.g. good or bad, happy or sad). It can be used for marketing, advertising, and different stages of product
development. If more than one item is being investigated, it can be visualized on a table with
more than 3 columns.

Conclusion
In a nutshell, scales of measurement refer to the various measures used in quantifying the variables
researchers use in performing data analysis. They are an important aspect of research and statistics because the
level of data measurement is what determines the data analysis technique to be used.
Understanding the concept of scales of measurement is a prerequisite to working with data and performing
statistical analysis. Because the different measurement scales share some properties, it is important to
determine a data set's measurement scale before choosing a technique to use for analysis.
Several scaling techniques are available for the same measurement scale. Therefore,
there is no unique way of selecting a scaling technique for research purposes.

Typology
Typologies are well-established analytic tools in the social sciences. They can be “put to work” in forming
concepts, refining measurement, exploring dimensionality, and organizing explanatory claims. Yet some critics,
basing their arguments on what they believe are relevant norms of quantitative measurement, consider
typologies old-fashioned and unsophisticated. This critique is methodologically unsound, and research based on
typologies can and should proceed according to high standards of rigor and careful measurement. These
standards are summarized in guidelines for careful work with typologies, and an illustrative inventory of
typologies, as well as a brief glossary, are included online.

The Template: Concept Formation and the Structure of Typologies


We now analyze the role of typologies in concept formation and develop a template for rigorous construction of
typologies. Our concern is with multidimensional conceptual typologies, yet many elements of the template are
also relevant to unidimensional and explanatory typologies.
Concept Formation
Conceptual typologies make a fundamental contribution to concept formation in both
qualitative and quantitative research. Developing rigorous and useful concepts entails four interconnected goals:
Ø clarifying and refining their meaning,
Ø establishing an informative and productive connection between these meanings and the terms used to
designate them,
Ø situating the concepts within their semantic field, that is, the constellation of related concepts and terms, and
Ø identifying and refining the hierarchical relations among concepts, involving kind hierarchies.
Thinking in terms of kind hierarchies brings issues of conceptual structure into focus, addresses challenges such
as conceptual stretching, and productively organizes our thinking as we work with established concepts and seek
to create new ones. A key point must be underscored: The cell types in a conceptual typology are related to the
overarching concept through a kind hierarchy. Understanding this hierarchy helps to answer the following
question: What establishes the meaning of the cell types, that is, of the concept that corresponds to each cell?
The answer is twofold.
1. Each cell type is indeed “a kind of” in relation to the overarching concept around which the typology is
organized, and
2. the categories that establish the row and column variables provide the core defining attributes of the cell
type.
A typology is the classification of observations in terms of their attributes on two or more variables. Often,
individuals may seek to put variables into an organized format. This is where typologies come into play.
Typologies consist of the sets of categories created by the intersection of multiple variables.
Typologies: a nominal-level variable that summarizes two or more variables.

Typologies Critique
As with stereotypes, typologies do not accurately reflect anyone and provide oversimplifications of everyone. One
should use typologies mainly to organize one's thinking as part of exploratory research. It is extremely difficult to
analyze a typology as a dependent variable because too much variation exists within each category.

Validity and Reliability Introduction


Validity: the extent to which a measuring device measures what it is intended to measure.
Reliability: the extent to which a measuring device provides consistent values for the same phenomenon being measured.
Importance of scaling in research
Scales are used frequently in marketing research because they help to convert qualitative (thoughts, feelings,
opinions) information into quantitative data, numbers that can be statistically analyzed. You create a scale by
assigning an object (could be a description) to a number.
Purpose of Typologies
Typologies, or classifications, use similarities of form and function to impose order on a variety of
natural stream morphologies. Basically, they are intellectual constructs in which objects with similar relevant
attributes are grouped together to meet the purposes of the classifier.

Lesson Proper for Week 16


A definition of action research appears in the workshop materials we use at the Institute for the
Study of Inquiry in Education. That definition states that action research “is a disciplined process
of inquiry conducted by and for those taking the action.” The primary reason for engaging in action
research is to assist the “actor” in improving and/or refining his or her actions. Practitioners who
engage in action research inevitably find it to be an empowering experience. Action research has
this positive effect for many reasons. Obviously, the most important is that action research is
always relevant to the participants. Relevance is guaranteed because the focus of each research
project is determined by the researchers, who are also the primary consumers of the findings.

Perhaps even more important is the fact that action research helps educators be more effective at
what they care most about—their teaching and the development of their students. Seeing students
grow is probably the greatest joy educators can experience. When teachers have convincing
evidence that their work has made a real difference in their students' lives, the countless hours and
endless efforts of teaching seem worthwhile.

Action research is simply a form of self-reflective enquiry undertaken by participants in social


situations in order to improve the rationality and justice of their own practices, their understanding
of these practices, and the situations in which the practices are carried out (Carr and Kemmis
1986: 162).

Importance of Action Research


Within education, the main goal of action research is to determine ways to enhance the lives of
children (Mills, 2011). At the same time, action research can enhance the lives of those
professionals who work within educational systems. To illustrate, action research has been directly
linked to the professional growth and development of teachers (Hensen, 1996; Osterman &
Kottkamp, 1993; Tomlinson, 1995).

According to Hensen, action research

(a) helps teachers develop new knowledge directly related to their classrooms,

(b) promotes reflective teaching and thinking,

(c) expands teachers’ pedagogical repertoire,

(d) puts teachers in charge of their craft,

(e) reinforces the link between practice and student achievement,

(f) fosters an openness toward new ideas and learning new things, and

(g) gives teachers ownership of effective practices.

Moreover, action research workshops can be used to replace traditional, ineffective teacher
in-service training (Barone et al., 1996) as a means for professional development activities (Johnson,
2012). To be effective, teacher in-service training needs to be extended over multiple sessions,
contain active learning to allow teachers to manipulate the ideas and enhance their assimilation of
the information, and align the concepts presented with the current curriculum, goals, or teaching
concerns (Johnson, p. 22). Therefore, providing teachers with the necessary skills, knowledge, and
focus to engage in meaningful inquiry about their professional practice will enhance this practice,
and effect positive changes concerning the educative goals of the learning community.

The action research process can help you understand what is happening in your classroom and
identify changes that improve teaching and learning. Action research can help answer questions
you have about the effectiveness of specific instructional strategies, the performance of specific
students, and classroom management techniques.

Educational research often seems removed from the realities of the classroom. For many
classroom educators, formal experimental research, including the use of a control group, seems to
contradict the mandate to improve learning for all students. Even quasi-experimental research with
no control group seems difficult to implement, given the variety of learners and diverse learning
needs present in every classroom. Action research gives you the benefits of research in the
classroom without these obstacles. Believe it or not, you are probably doing some form of research
already. Every time you change a lesson plan or try a new approach with your students, you are
engaged in trying to figure out what works. Even though you may not acknowledge it as formal
research, you are still investigating, implementing, reflecting, and refining your approach.

Qualitative research acknowledges the complexity of the classroom learning environment. While
quantitative research can help us see that improvements or declines have occurred, it does not
help us identify the causes of those improvements or declines. Action research provides qualitative
data you can use to adjust your curriculum content, delivery, and instructional practices to improve
student learning. Action research helps you implement informed change!

The term “action research” was coined by Kurt Lewin in 1944 to describe a process of investigation
and inquiry that occurs as action is taken to solve a problem. Today we use the term to describe a
practice of reflective inquiry undertaken with the goal of improving understanding and practice.
You might consider “action” to refer to the change you are trying to implement and “research” to
refer to your improved understanding of the learning environment.

Action research also helps you take charge of your personal professional development. As you
reflect on your own actions and observe other master teachers, you will identify the skills and
strategies you would like to add to your own professional toolbox. As you research potential
solutions and are exposed to new ideas, you will identify the skills, management, and instructional
training needed to make the changes you want to see.

The Action Research Cycle


Action research is a cycle of inquiry and reflection. During the process, you will determine 1) where
you are, 2) where you want to be, and 3) how you are going to get there. In general terms, the
cycle follows these steps:

1. Identify the problem and envision success

2. Develop a plan of action

3. Collect data

4. Analyze data and form conclusions


5. Modify your theory and repeat the cycle

6. Report the results


Identify the Problem
The process begins when you identify a question or problem you want to address. Action research
is most successful when you have a personal investment, so make sure the questions you are
asking are ones YOU want to solve. This could be an improvement you want to see happen in your
classroom (or your school if you are a principal), or a problem you and your colleagues would like
to address in your district.

Learning to develop the right questions takes time. Your ability to identify these key
questions will improve with each iteration of the research cycle. You want to select a
question that isn’t so broad it is almost impossible to answer or so narrow that the only answer is
yes or no. Choose questions that can be answered within the context of your daily teaching. In
other words, choose a question that is both answerable and worthy of the time investment
required to learn the answer.

Questions you could ask might involve management issues, curriculum implementation,
instructional strategies, or specific student performance. For example, you might consider:

• How successful is random grouping for project work?

• Why is the performance of one student lacking in a particular area?

• Will increasing the amount of feedback I provide improve students’ writing skills?

• What is the best way to introduce the concept of fractions?

• Which procedure is most effective for managing classroom conflict?

Determining the question helps focus your inquiry.

Before you can start collecting data, you need to have a clear vision of what success looks like.
Start by brainstorming words that describe the change you want to see. What strategies do you
already know that might help you get there? Which of these ideas do you think might work better
than what you are currently doing?

To find out if a new instructional strategy is worth trying, conduct a review of literature. This
doesn’t have to mean writing up a formal lit review like you did in graduate school. The important
thing is to explore a range of articles and reports on your topic and capitalize on the research and
experience of others. Your classroom responsibilities are already many and may be overwhelming.
A review of literature can help you identify useful strategies and locate information that helps you
justify your action plan.

The Web makes literature reviews easier to accomplish than ever before. Even if the full text of an
article, research paper, or abstract is not available online, you will be able to find citations to help
you locate the source materials at your local library. Collect as much information on your problem
as you can find. As you explore the existing literature, you will certainly find solutions and
strategies that others have implemented to solve this problem. You may want to create a visual
map or a table of your problems and target performances with a list of potential solutions and
supporting citations in the middle.

Develop an Action Plan


Now that you have identified the problem, described your vision of how to successfully solve it, and
reviewed the pertinent literature, you need to develop a plan of action. What is it that you intend to
DO? Brainstorming and reviewing the literature should have provided you with ideas for new
techniques and strategies you think will produce better results. Refer back to your visual map or
table and color-code or reorder your potential solutions. You will want to rank them in order of
importance and indicate the amount of time you will need to spend on these strategies.

How can you implement these techniques? How will you? Translate these solutions into concrete
steps you can and will take in your classroom. Write a description of how you will implement each
idea and the time you will take to do it.

Once you have a clear vision of a potential solution to the problem, explore factors you think might
be keeping you and your students from your vision of success. Recognize and accept those factors
you do not have the power to change–they are the constants in your equation. Focus your attention
on the variables–the parts of the formula you believe your actions can impact.
Develop a plan that shows how you will implement your solution and how your behavior,
management style, and instruction will address each of the variables.

Sometimes an action research cycle simply helps you identify variables you weren’t even aware of,
so you can better address your problem during the next cycle!

Collect Data
Before you begin to implement your plan of action, you need to determine what data will help you
understand if your plan succeeds, and how you will collect that data. Your target performances will
help you determine what you want to achieve. What results or other indicators will help you know if
you achieved it? For example, if your goal is improved attendance, data can easily be collected
from your attendance records. If the goal is increased time on task, the data may include
classroom and student observations.

There are many options for collecting data. Choosing the best methodologies for collecting
information will result in more accurate, meaningful, and reliable data.

Obvious sources of data include observation and interviews. As you observe, you will want to type
or write notes or dictate your observations into a cell phone, iPod, or PDA. You may want to keep a
journal during the process, or even create a blog or wiki to practice your technology skills as you
collect data.

Reflective journals are often used as a source of data for action research. You can also collect
meaningful data from other records you deal with daily, including attendance logs, grade reports,
and student portfolios. You could distribute questionnaires, watch videotapes of your classroom,
and administer surveys. Examples of student work are also performances you can evaluate to see
if your goal is being met.

Create a plan for data collection and follow it as you perform your research. If you are going to
interview students or other teachers, how many times will you do it? At what times during the day?
How will you ensure your respondents are representative of the student population you are
studying, including gender, ability level, experience, and expertise?

Your plan will help you ensure that you have collected data from many different sources. Each
source of data provides additional information that will help you answer the questions in your
research plan.

You may also want to have students collect data on their own learning. Not only does this provide
you with additional research assistants, it empowers students to take control of their own learning.
As students keep a journal during the process, they are also reflecting on the learning environment
and their own learning process.

Analyze Data and Form Conclusions


The next step in the process is to analyze your data and form conclusions. Start early! Examining
the data during the collection process can help you refine your action plan. Is the data you are
collecting sufficient? If not, you have an opportunity to revise your data collection plan. Your
analysis of the data will also help you identify attitudes and performances to look for during
subsequent observations.

Analyzing the data also helps you reflect on what actually happened. Did you achieve the
outcomes you were hoping for? Were you able to carry out your actions as planned? Were any of
your assumptions about the problem incorrect?

Adding data such as opinions, attitudes, and grades to tables can help you identify trends
(relationships and correlations). For example, if you are completing action research to determine if
project-based learning is impacting student motivation, graphing attendance and disruptive
behavior incidents may help you answer the question. A graph that shows an increase in
attendance and a decrease in the number of disruptive incidents over the implementation period
would lead you to believe that motivation was improved.
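A spreadsheet can produce such a graph, but the underlying trend check is also easy to sketch in a few lines of code. The weekly figures below are invented for illustration; a strongly negative correlation between attendance and disruptive incidents would support the interpretation described above.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented weekly data across an 8-week project-based learning rollout.
attendance_pct = [88, 89, 91, 92, 93, 95, 96, 97]
disruptions = [7, 6, 6, 5, 4, 3, 3, 2]

r = pearson_r(attendance_pct, disruptions)
print(f"correlation: {r:.2f}")  # strongly negative: attendance up, disruptions down
```

Remember that a correlation only flags a relationship; it is your knowledge of the classroom context that turns the number into a tentative conclusion about motivation.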

Draw tentative conclusions from your analysis. Since the goal of action research is positive change,
you want to try to identify specific behaviors that move you closer to your vision of success. That
way you can adjust your actions to better achieve your goal of improved student learning.

Action research is an iterative process. The data you collect and your analysis of it will affect how
you approach the problem and implement your action plan during the next cycle.
Even as you begin drawing conclusions, continue collecting data. This will help you confirm your
conclusions or revise them in light of new information. While you can plan how long and often you
will collect data, you may also want to continue collecting until the trends have been identified and
new data becomes redundant.

As you are analyzing your data and drawing conclusions, share your findings. Discussing your
results with another teacher can often yield valuable feedback. You might also share your findings
with your students who can also add additional insight. If they agree with your conclusions, you
have added credibility to your data collection plan and analysis. If they disagree, you will know to
reevaluate your conclusions or refine your data collection plan.

Modify Your Theory and Repeat


Now that you have formed a final conclusion, the cycle begins again. In light of your findings,
adjust your theory or make it more specific. Then modify your plan of action, begin
collecting data again, or begin asking new questions!

Report the Results


While the ultimate goal of your research is to promote effective change in your classroom or
schools, do not underestimate the value of sharing your findings with others. Sharing your results
helps you further reflect on the process and problem, and it allows others to use your results to
help them in their own endeavors to improve the education of their students.

You can report your findings in many different ways. You most certainly will want to share the
experience with your students, parents, teachers, and principal. Provide them with an overview of
the process and share highlights from your research journal. Because each of these audiences is
different, you will need to adjust the content and delivery of the information each time you share.
You may also want to present your process at a conference so educators from other districts can
benefit from your work.

As your skill with the action research cycle gets stronger, you may want to develop an abstract and
submit an article to an educational journal. To write an abstract, state the problem you were trying
to solve, describe your context, detail your action plan and methods, summarize your findings,
state your conclusions, and explain your revised action plan.

If your question focused on the implementation of an action plan to improve the performance of a
particular student, what better way to show the process and results than through digital
storytelling? Using a tool like Wixie, you can share images, audio, artifacts and more to show the
student’s journey. Action research is outside-the-box thinking… so find similarly unique ways to
report your findings!

In Summary
All teachers want to reach their students more effectively and help them become better learners
and citizens. Action research provides a reflective process you can use to implement changes in
your classroom and determine if those changes result in the desired outcome.

A Simple Guide to Writing an Action Research Report

What Should We Include in an Action Research Report?

The components put into an action research report largely coincide with the steps used in the
action research process. This process usually starts with a question or an observation about a
current problem. After identifying the problem area and narrowing it down to make it more
manageable for research, the development process continues as you devise an action plan to
investigate your question. This will involve gathering data and evidence to support your solution.
Common data collection methods include observation of individual or group behavior, taking audio
or video recordings, distributing questionnaires or surveys, conducting interviews, asking for peer
observations and comments, taking field notes, writing journals, and studying the work samples of
your own and your target participants. You may choose to use more than one of these data
collection methods. After you have selected your method and are analyzing the data you have
collected, you will also reflect upon your entire process of action research. You may have a better
solution to your question now, due to the increase of your available evidence. You may also think
about the steps you will try next, or decide that the practice needs to be observed again with
modifications. If so, the whole action research process starts all over again.

In brief, action research is more like a cyclical process, with the reflection upon your action and
research findings affecting changes in your practice, which may lead to extended questions and
further action. This brings us back to the essential steps of action research: identifying the
problem, devising an action plan, implementing the plan, and finally, observing and reflecting upon
the process. Your action research report should comprise all of these essential steps. Feldman and
Weiss (n.d.) summarized them as five structural elements, which do not have to be written in a
particular order. Your report should:

• Describe the context where the action research takes place. This could be, for example, the
school in which you teach. Both features of the school and the population associated with it (e.g.,
students and parents) would be illustrated as well.

• Contain a statement of your research focus. This would explain where your research
questions come from, the problem you intend to investigate, and the goals you want to achieve.
You may also mention prior research studies you have read that are related to your action research
study.

• Detail the method(s) used. This part includes the procedures you used to collect data, types
of data in your report, and justification of your used strategies.

• Highlight the research findings. This is the part in which you observe and reflect upon your
practice. By analyzing the evidence you have gathered, you will come to understand whether the
initial problem has been solved or not, and what research you have yet to accomplish.

• Suggest implications. You may discuss how the findings of your research will affect your
future practice, or explain any new research plans you have that have been inspired by this
report’s action research.

The overall structure of your paper will actually look more or less the same as what we commonly
see in traditional research papers.

Three Purposes for Action Research


As stated earlier, action research can be engaged in by an individual teacher, a collaborative group
of colleagues sharing a common concern, or an entire school faculty. These three different
approaches to organizing for research serve three compatible, yet distinct, purposes:

• Building the reflective practitioner

• Making progress on schoolwide priorities

• Building professional cultures

Building the Reflective Practitioner


When individual teachers make a personal commitment to systematically collect data on their
work, they are embarking on a process that will foster continuous growth and development. When
each lesson is looked on as an empirical investigation into factors affecting teaching and learning
and when reflections on the findings from each day's work inform the next day's instruction,
teachers can't help but develop greater mastery of the art and science of teaching. In this way, the
individual teachers conducting action research are making continuous progress in developing their
strengths as reflective practitioners.

Making Progress on Schoolwide Priorities


Increasingly, schools are focusing on strengthening themselves and their programs through the
development of common focuses and a strong sense of esprit de corps.

Peters and Waterman (1982) in their landmark book, In Search of Excellence, called the
achievement of focus “sticking to the knitting.” When a faculty shares a commitment to achieving
excellence with a specific focus—for example, the development of higher-order thinking, positive
social behavior, or higher standardized test scores—then collaboratively studying their practice will
not only contribute to the achievement of the shared goal but will also have a powerful impact on
team building and program development. Focusing the combined time, energy, and creativity of a
group of committed professionals on a single pedagogical issue will inevitably lead to program
improvements, as well as to the school becoming a “center of excellence.” As a result, when a
faculty chooses to focus on one issue and all the teachers elect to enthusiastically participate in
action research on that issue, significant progress on the schoolwide priorities cannot help but
occur.
Building Professional Cultures
Often an entire faculty will share a commitment to student development, yet the group finds itself
unable to adopt a single common focus for action research. This should not be viewed as indicative
of a problem. Just as the medical practitioners working at a “quality” medical center will hold a
shared vision of a healthy adult, it is common for all the faculty members at a school to share a
similar perspective on what constitutes a well-educated student. However, like the doctors at the
medical center, the teachers in a “quality” school may well differ on which specific aspects of the
shared vision they are most motivated to pursue at any point in time.

Schools whose faculties cannot agree on a single research focus can still use action research as a
tool to help transform themselves into a learning organization. They accomplish this in the same
manner as do the physicians at the medical center. It is common practice in a quality medical
center for physicians to engage in independent, even idiosyncratic, research agendas. However, it
is also common for medical researchers to share the findings obtained from their research with
colleagues (even those engaged in other specialties).

School faculties who wish to transform themselves into “communities of learners” often empower
teams of colleagues who share a passion about one aspect of teaching and learning to conduct
investigations into that area of interest and then share what they've learned with the rest of the
school community. This strategy allows an entire faculty to develop and practice the discipline that
Peter Senge (1990) labeled “team learning.” In these schools, multiple action research inquiries
occur simultaneously, and no one is held captive to another's priority, yet everyone knows that all
the work ultimately will be shared and will consequently contribute to organizational learning.

Why Action Research Now?

If ever there were a time and a strategy that were right for each other, the time is now and the
strategy is action research! This is true for a host of reasons, with none more important than the
need to accomplish the following:

• Professionalize teaching.

• Enhance the motivation and efficacy of a weary faculty.

• Meet the needs of an increasingly diverse student body.

• Achieve success with “standards-based” reforms.

Lesson Proper for Week 17


Assessment is the systematic basis for making inferences about the learning and development of
students. It is the process of defining, selecting, designing, collecting, analyzing, interpreting, and
using information to increase students' learning and development.

Assessment is a key part of today’s educational system. Assessment serves as an individual
evaluation system, and as a way to compare performance across a spectrum and across
populations. However, with so many different kinds of assessments for so many different
organizations available (and often required) these days, it can sometimes be hard to keep the real
purpose of assessing in view. So, what’s really at the heart of all these assessments?

The purpose of assessment is to gather relevant information about student performance or
progress, or to determine student interests to make judgments about their learning process. After
receiving this information, teachers can reflect on each student’s level of achievement, as well as
on specific inclinations of the group, to customize their teaching plans.

Why Assessment of Learning Is Necessary


Continuous assessment provides day-to-day feedback about the learning and teaching process.
Assessment can reinforce the efficacy of teaching and learning. It also encourages the
understanding of teaching as a formative process that evolves over time with feedback and input
from students. This creates good classroom rapport. Student assessments are necessary because:
• Throughout a lesson or unit, the teacher might want to check for understanding by using a
formative assessment.

• Students who are experiencing difficulties in learning may benefit from the administration of
a diagnostic test, which will be able to detect learning issues such as reading comprehension
problems, an inability to remember written or spoken words, hearing or speech difficulties, and
problems with hand–eye coordination.

• Students generally complete a summative assessment after completing the study of a topic.
The teacher can determine their level of achievement and provide them with feedback on their
strengths and weaknesses. For students who didn’t master the topic or skill, teachers can use data
from the assessment to create a plan for remediation.

• Teachers may also want to use informal assessment techniques. Using self-assessment,
students express what they think about their learning process and what they should work on. Using
peer assessment, students get information from their classmates about what areas they should
revise and what areas they’re good at.

Types of Classroom Assessment


Assessment is integral to the teaching–learning process, facilitating student learning and improving
instruction, and can take a variety of forms. Classroom assessment is generally divided into three
types: assessment for learning, assessment of learning and assessment as learning.

Assessment for Learning (Formative Assessment)


The philosophy behind assessment for learning is that assessment and teaching should be
integrated into a whole. The power of such an assessment doesn't come from intricate technology
or from using a specific assessment instrument. It comes from recognizing how much learning is
taking place in the common tasks of the school day – and how much insight into student learning
teachers can mine from this material (McNamee and Chen 2005).

Assessment for learning is ongoing assessment that allows teachers to monitor students on a
day-to-day basis and modify their teaching based on what the students need to be successful. This
assessment provides students with the timely, specific feedback that they need to make
adjustments to their learning.

After teaching a lesson, we need to determine whether the lesson was accessible to all students
while still challenging to the more capable; what the students learned and still need to know; how
we can improve the lesson to make it more effective; and, if necessary, what other lesson we
might offer as a better alternative.

This continual evaluation of instructional choices is at the heart of improving our teaching
practice (Burns 2005).

Assessment of Learning (Summative Assessment)


Assessment of learning is the snapshot in time that lets the teacher, students and their parents
know how well each student has completed the learning tasks and activities. It provides
information about student achievement. While it provides useful reporting information, it often has
little effect on learning.

Comparing Assessment for Learning and Assessment of Learning

Assessment for Learning (Formative Assessment):

• Checks learning to determine what to do next and then provides suggestions of what to do—teaching and learning are indistinguishable from assessment.

• Is designed to assist educators and students in improving learning.

• Is used continually by providing descriptive feedback.

• Usually uses detailed, specific and descriptive feedback—in a formal or informal report.

• Is not reported as part of an achievement grade.

• Usually focuses on improvement, compared with the student's “previous best” (self-referenced, making learning more personal).

• Involves the student.

Assessment of Learning (Summative Assessment):

• Checks what has been learned to date.

• Is designed for the information of those not directly involved in daily learning and teaching (school administration, parents, school board, Alberta Education, post-secondary institutions), in addition to educators and students.

• Is presented in a periodic report.

• Usually compiles data into a single number, score or mark as part of a formal report.

• Is reported as part of an achievement grade.

• Usually compares the student's learning either with other students' learning (norm-referenced, making learning highly competitive) or with the standard for a grade level (criterion-referenced, making learning more collaborative and individually focused).

• Does not always involve the student.

Assessment as Learning
Assessment as learning develops and supports students' metacognitive skills. This form of
assessment is crucial in helping students become lifelong learners. As students engage in peer and
self-assessment, they learn to make sense of information, relate it to prior knowledge and use it for
new learning. Students develop a sense of ownership and efficacy when they use teacher, peer and
self-assessment feedback to make adjustments, improvements and changes to what they
understand.

The Assessment Process


Assessment is a constant cycle of improvement. Data gathering is ongoing. The goal of
assessment, whether for an academic department or a program, is to provide: (a) a clear
conceptualization of intended student learning outcomes, (b) a description of how these outcomes
are assessed and measured, (c) a description of the results obtained from these measures, and (d)
a description of how these results validate current practices or point to changes needed to improve
student learning.
The Four Steps of the Assessment Cycle

Step 1: Clearly define and identify the learning outcomes


Each program should formulate between 3 and 5 learning outcomes that describe what students
should be able to do (abilities), to know (knowledge), and appreciate (values and attitudes)
following completion of the program. The learning outcomes for each program will include Public
Affairs learning outcomes addressing community engagement, cultural competence, and ethical
leadership.

Step 2: Select appropriate assessment measures and assess the learning outcomes


Multiple ways of assessing the learning outcomes are usually selected and used.
Although direct and indirect measures of learning can be used, it is usually recommended to focus
on direct measures of learning. Levels of student performance for each outcome are often described
and assessed with the use of rubrics.

It is important to determine how the data will be collected and who will be responsible for data
collection.

Results are always reported in aggregate format to protect the confidentiality of the students
assessed.
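Aggregate reporting of this kind is straightforward to automate. The sketch below is purely illustrative (the four-level rubric scale and the scores are hypothetical): the reporting function exposes only per-level counts and a mean, never any individual student's row.

```python
from collections import Counter

# Hypothetical rubric scores (1 = beginning ... 4 = exemplary) for one
# learning outcome; names are kept internal and never reported.
scores = {"Ana": 4, "Ben": 3, "Caro": 3, "Dan": 2, "Eva": 4, "Finn": 3}

def aggregate(scores):
    """Report only counts per rubric level and the mean, never individual rows."""
    levels = Counter(scores.values())
    mean = sum(scores.values()) / len(scores)
    return {"n": len(scores), "mean": round(mean, 2),
            "by_level": dict(sorted(levels.items()))}

print(aggregate(scores))
# e.g. {'n': 6, 'mean': 3.17, 'by_level': {2: 1, 3: 3, 4: 2}}
```

Because only the distribution leaves the function, the report stays useful for program review while protecting the confidentiality of the students assessed.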

Step 3: Analyze the results of the outcomes assessed


It is important to analyze and report the results of the assessments in a meaningful way. A small
subgroup of the DAC would ideally be responsible for this function. The assessment division of the
FCTL would support the efforts of the DAC and would provide data analysis and interpretation
workshops and training.

Step 4: Adjust or improve programs following the results of the learning outcomes assessed


Assessment results are worthless if they are not used. This step is a critical step of the
assessment process. The assessment process has failed if the results do not lead to adjustments or
improvements in programs. The results of assessments should be disseminated widely to faculty in
the department in order to seek their input on how to improve programs from the assessment
results. In some instances, changes will be minor and easy to implement. In other instances,
substantial changes will be necessary and recommended and may require several years to be fully
implemented.
Teachers’ Roles in Assessment of Learning
Because the consequences of assessment of learning are often far-reaching and affect students
seriously, teachers have the responsibility of reporting student learning accurately and fairly,
based on evidence obtained from a variety of contexts and applications. Effective assessment of
learning requires that teachers provide

• a rationale for undertaking a particular assessment of learning at a particular point in time

• clear descriptions of the intended learning

• processes that make it possible for students to demonstrate their competence and skill

• a range of alternative mechanisms for assessing the same outcomes

• public and defensible reference points for making judgements

• transparent approaches to interpretation

• descriptions of the assessment process

• strategies for recourse in the event of disagreement about the decisions

With the help of their teachers, students can look forward to assessment of learning tasks as occasions to show their competence, as well as the depth and breadth of their learning.

Planning Assessment of Learning

Why am I assessing?

The purpose of assessment of learning is to measure, certify, and report the level of students’
learning, so that reasonable decisions can be made about students. There are many potential users
of the information:

• teachers (who can use the information to communicate with parents about their children’s
proficiency and progress)

• parents and students (who can use the results for making educational and vocational decisions)

• potential employers and post-secondary institutions (who can use the information to make
decisions about hiring or acceptance)

• principals, district or divisional administrators, and teachers (who can use the information to
review and revise programming)

What am I assessing?

Assessment of learning requires the collection and interpretation of information about students’
accomplishments in important curricular areas, in ways that represent the nature and complexity
of the intended learning. Because genuine learning for understanding is much more than just
recognition or recall of facts or algorithms, assessment of learning tasks need to enable students to
show the complexity of their understanding. Students need to be able to apply key concepts,
knowledge, skills, and attitudes in ways that are authentic and consistent with current thinking in
the knowledge domain.

What assessment method should I use?

In assessment of learning, the methods chosen need to address the intended curriculum outcomes
and the continuum of learning that is required to reach the outcomes. The methods must allow all
students to show their understanding and produce sufficient information to support credible and
defensible statements about the nature and quality of their learning, so that others can use the
results in appropriate ways. Assessment of learning methods include not only tests and
examinations, but also a rich variety of products and demonstrations of learning—portfolios,
exhibitions, performances, presentations, simulations, multimedia projects, and a variety of other
written, oral, and visual methods.

How can I ensure quality in this assessment process?

Assessment of learning needs to be very carefully constructed so that the information upon which
decisions are made is of the highest quality. Assessment of learning is designed to be summative,
and to produce defensible and accurate descriptions of student competence in relation to defined
outcomes and, occasionally, in relation to other students’ assessment results. Certification of
students’ proficiency should be based on a rigorous, reliable, valid, and equitable process of
assessment and evaluation.

Reliability

Reliability in assessment of learning depends on how accurate, consistent, fair, and free from bias and distortion the assessment is.

Teachers might ask themselves:


• Do I have enough information about the learning of this particular student to make a definitive
statement?

• Was the information collected in a way that gives all students an equal chance to show their
learning?

• Would another teacher arrive at the same conclusion?

• Would I make the same decision if I considered this information at another time or in another
way?
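The question "Would another teacher arrive at the same conclusion?" is, in statistical terms, one of inter-rater reliability. As a hedged sketch (the grading data below is invented, and Cohen's kappa is just one of several agreement measures you might choose), two teachers' independent judgments of the same work can be compared like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters beyond what chance alone would predict."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)  # Counter returns 0 for labels a rater never used
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two teachers independently grading the same ten portfolios (invented marks).
teacher_1 = ["pass", "pass", "fail", "pass", "pass",
             "fail", "pass", "pass", "fail", "pass"]
teacher_2 = ["pass", "pass", "fail", "pass", "fail",
             "fail", "pass", "pass", "fail", "pass"]

print(round(cohens_kappa(teacher_1, teacher_2), 2))  # → 0.78
```

A kappa near 1 suggests another teacher would indeed reach the same conclusion; a value near 0 means the observed agreement is no better than chance and the judgment criteria need clarification.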

Reference Points

Typically, the reference points for assessment of learning are the learning outcomes as identified in the curriculum that make up the course of study. Assessment tasks include measures of these learning outcomes, and a student’s performance is interpreted and reported in relation to these learning outcomes. In some situations where selection decisions need to be made for limited positions (e.g., university entrance, scholarships, employment opportunities), assessment of learning results are used to rank students. In such norm-referenced situations, what is being measured needs to be clear, and the way it is being measured needs to be transparent to anyone who might use the assessment results.

Validity

Because assessment of learning results in statements about students’ proficiency in wide areas of study, assessment of learning tasks must reflect the key knowledge, concepts, skills, and dispositions set out in the curriculum, and the statements and inferences that emerge must be upheld by the evidence collected.

Record-Keeping
Whichever approaches teachers choose for assessment of learning, it is their records that provide
details about the quality of the measurement. Detailed records of the various components of the
assessment of learning are essential, with a description of what each component measures, with
what accuracy and against what criteria and reference points, and should include supporting
evidence related to the outcomes as justification. When teachers keep records that are detailed
and descriptive, they are in an excellent position to provide meaningful reports to parents and
others. Merely a symbolic representation of a student’s accomplishments (e.g., a letter grade or
percentage) is inadequate. Reports to parents and others should identify the intended learning that
the report covers, the assessment methods used to gather the supporting information, and the
criteria used to make the judgement.

How can I use the information from this assessment?

Feedback to Students

Because assessment of learning comes most often at the end of a unit or
learning cycle, feedback to students has a less obvious effect on student learning than assessment
for learning and assessment as learning. Nevertheless, students do rely on their marks and on
teachers’ comments as indicators of their level of success, and to make decisions about their
future learning endeavours.

Differentiating Learning
In assessment of learning, differentiation occurs in the assessment itself. It would make little sense
to ask a near-sighted person to demonstrate driving proficiency without glasses. When the driver
uses glasses, it is possible for the examiner to get an accurate picture of the driver’s ability, and to
certify him or her as proficient. In much the same way, differentiation in assessment of learning
requires that the necessary accommodations be in place that allow students to make the particular
learning visible. Multiple forms of assessment offer multiple pathways for making student learning
transparent to the teacher. A particular curriculum outcome requirement, such as an
understanding of the social studies notion of conflict, for example, might be demonstrated through
visual, oral, dramatic, or written representations. As long as writing were not an explicit component
of the outcome, students who have difficulties with written language, for example, would then have
the same opportunity to demonstrate their learning as other students. Although assessment of
learning does not always lead teachers to differentiate instruction or resources, it has a profound
effect on the placement and promotion of students and, consequently, on the nature and
differentiation of the future instruction and programming that students receive. Therefore,
assessment results need to be accurate and detailed enough to allow for wise recommendations.

Reporting
There are many possible approaches to reporting student proficiency. Reporting assessment of
learning needs to be appropriate for the audiences for whom it is intended, and should provide all
of the information necessary for them to make reasoned decisions. Regardless of the form of the
reporting, however, it should be honest, fair, and provide sufficient detail and contextual
information so that it can be clearly understood. Traditional reporting, which relies only on a
student’s average score, provides little information about that student’s skill development or
knowledge. One alternate mechanism, which recognizes many forms of success and provides a
profile of a student’s level of performance on an emergent-proficient continuum, is the
parent-student-teacher conference. This forum provides parents with a great deal of information, and
reinforces students’ responsibility for their learning.

The purpose of assessment that typically comes at the end of a course or unit of instruction is to
determine the extent to which the instructional goals have been achieved and for grading or
certification of student achievement. (Linn and Gronlund, Measurement and Assessment in
Teaching)
