Ped 7
Authentic assessment is the measurement of "intellectual accomplishments that are worthwhile, significant, and
meaningful" [1], as contrasted with multiple-choice standardized tests. Authentic assessment can be devised by the
teacher, or in collaboration with the student by engaging student voice. When applying authentic assessment to
student learning and achievement, a teacher applies criteria related to "construction of knowledge, disciplined
inquiry, and the value of achievement beyond the school."
Authentic assessment tends to focus on contextualized tasks, enabling students to demonstrate their
competency in a more 'authentic' setting. Examples of authentic assessment categories include:
Ø Performance Assessment – tests students' ability to use skills in a variety of authentic contexts. These tasks frequently
require students to work collaboratively and to apply skills and concepts to solve complex problems.
Ø Short Investigations - Many teachers use short investigations to assess how well students have mastered
basic concepts and skills. Most short investigations begin with a stimulus, like a math problem, political cartoon,
map, or excerpt from a primary source. The teacher may ask students to interpret, describe, calculate, explain, or
predict. These investigations may use enhanced multiple-choice questions.
Ø Open-Response Questions – present students with a stimulus and ask them to respond. Responses may include
a brief written or oral answer, a mathematical solution, a drawing, a diagram, a chart, or a graph.
Short- and long-term tasks include such activities as writing, revising, and presenting a report to the class.
Alternatively, they may use concept mapping, a technique that assesses how well students understand relationships
among concepts.
Ø Portfolios – A portfolio documents learning over time. This long-term perspective accounts
for student improvement and teaches students the value of self-assessment, editing, and revision. A student
portfolio can include journal entries and reflective writing, peer reviews, artwork, diagrams, charts and graphs,
group reports, student notes and outlines, and rough drafts and polished writing.
Ø Self-Assessment – requires students to evaluate their own participation, process, and products. Evaluative
questions are the basic tools of self-assessment. Students give written or oral responses to questions such as:
What was the most difficult part of this project for you? What do you think you should do next? What did you
learn from this project?
Principles and Practices
1. A school's mission is to develop useful citizens.
2. To be a useful citizen, one has to be capable of performing useful tasks in the real world.
3. The school's duty is to help students develop proficiency in performing the tasks that they will be required to
perform after graduation in the workplace.
4. The school must then require students to perform tasks that duplicate or imitate real-world situations.
Characteristics of Authentic Assessment
Ø It starts with clear definite criteria of performance made known to students.
Ø It is criterion-referenced rather than norm-referenced, and so it identifies strengths and weaknesses but does
not compare students or rank their levels of performance.
Ø It requires students to construct their own answers to questions rather than select from given options or multiple
choices. They are required to use a range of higher-order thinking skills (HOTS).
Ø It often emphasizes performance and therefore students are required to demonstrate their knowledge and
skills.
Ø It encourages both teacher and students to collaborate in determining the rate of progress toward the
desired student learning outcomes.
Ø It does not encourage rote learning and passive test-taking; instead, students are required to demonstrate
analytical skills, the ability to work in groups, and skills in oral and written communication.
Ø It changes the role of students from passive test takers into active, involved participants in
assessment activities that emphasize their skills and capabilities.
Traditional Assessment is commonly associated with predetermined-choice measures of assessment such as
multiple-choice tasks, fill-in-the-blanks, true-false, matching type, and others. Students typically recall or select
the answers. Essentially, Traditional Assessment springs from an educational philosophy which involves the following
principles and practices.
1. A school's mission is to develop useful citizens.
2. To be a useful citizen, one must possess a certain body of knowledge and skills.
3. The school is entrusted to teach this body of knowledge and skills.
4. To determine if students have acquired this knowledge and these skills, the school must test students on
them.
Comparison of Authentic and Traditional Assessment
Assessment is of great importance to any sort of teaching-learning process. The usual and common
assessment we do is known as traditional assessment. Today we should use authentic assessment to keep
pace with the growing necessities of the world. What do we mean by authentic assessment? It is "a form of
assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of
essential knowledge and skills", as defined by Jon Mueller. It can be characterised by open-ended tasks that
require students to construct extended responses, to perform an act, or to produce a product in a real-world
context, or a context that mimics the real world. Project work, portfolios, writing an article for a newsletter or
newspaper, performing a dance or drama, designing a digital artifact, creating a poster for a science fair, debates,
and oral presentations can be examples of authentic assessment. It "involves students in the actual
challenges, standards, and habits needed for success in the academic disciplines or
in the workplace", said Wiggins (1989). Authentic assessment tasks motivate students, as they get the
opportunity to perceive the relevance of the tasks to the real world. They find it meaningful learning.
In our academic life, mostly we do traditional assessment. It refers to the forced-choice measures of multiple-
choice tests, fill-in-the-blanks, true-false, matching and the like that have been and remain so common in
education. Students typically select an answer or recall information to complete the assessment. These tests
may be standardized or teacher-created. They may be administered locally, education-board-wide, or globally.
Since a nation's mission is to develop productive citizens, educational institutions must then test students to see if
they have acquired the expected knowledge and skills. Teachers first determine the tasks that students will perform to
demonstrate their mastery, and then a curriculum is developed that will enable students to perform those tasks
well, which would include the acquisition of essential knowledge and skills.
A comparison of authentic assessment and conventional assessment reveals that different purposes are served,
as evidenced by the nature of the assessment and item response format. We can teach students how to do
mathematics, learn history and science, not just know them. Then, to assess what our students have learned,
we can ask students to perform tasks that "replicate the challenges" faced by those using mathematics, doing
history, or conducting scientific investigations. Traditional assessment asks learners to select a response,
whereas authentic assessment engages learners in performing a task based on the prompt they are given.
Traditional assessment is contrived, but authentic assessment takes place in real life. Traditional assessment
involves recall or recognition, is teacher-structured, and provides indirect evidence, whereas authentic
assessment involves construction or application, is student-structured, and provides direct evidence.
Authentic assessments have several advantages over conventional or traditional tests. They are likely to be more
valid than conventional tests, particularly for learning outcomes that require higher-order thinking skills. Because
they involve real-world tasks, they are also likely to be more interesting for students, and thus more motivating.
And finally, they can provide more specific and usable information about what students have succeeded in
learning as well as what they have not learned.
Authentic assessment has played a pivotal role in driving curricular and instructional changes in the context of
global educational reforms. Since the 1990s, teacher education and professional development programmes in
many education systems around the globe have focused on the development of assessment literacy for teachers
and teacher candidates which encompasses teacher competence in the design, adaptation, and use of authentic
assessment tasks or performance assessment tasks to engage students in in-depth learning of subject matter
and to promote their mastery of the 21st-century competencies.
Authentic assessment serves as an alternative to conventional assessment. Conventional assessment is limited
to standardized paper-and-pencil/pen tests, which emphasize objective measurement. Standardized tests
employ closed-ended item formats such as true-false, matching, or multiple choice. The
use of these item formats is believed to increase efficiency of test administration, objectivity of scoring, reliability
of test scores, and cost-effectiveness as machine scoring and large-scale administration of test items are
possible. However, it is widely recognised that traditional standardised testing restricts the assessment of higher-
order thinking skills and other essential 21st-century competencies due to the nature of the item format. From an
objective measurement or psychometric perspective, rigorous and higher-level learning outcomes such as critical
thinking, complex problem solving, collaboration, and extended communication are too subjective to be tested.
In traditional assessment, students' attention will understandably be focused on and limited to what is on the test.
In contrast, authentic assessments allow more student choice and construction in determining what is presented
as evidence of proficiency. Even when students cannot choose their own topics or formats, there are usually
multiple acceptable routes towards constructing a product or performance. Obviously, assessments more
carefully controlled by the teachers offer advantages and disadvantages. Similarly, more student-structured tasks
have strengths and weaknesses that must be considered when choosing and designing an assessment.
The amount of new information is increasing at an exponential rate due to the advancement of digital technology.
Hence, rote learning and regurgitation of facts or procedures are no longer suitable in contemporary educational
contexts. Rather, students are expected to be able to find, organise, interpret, analyse, evaluate, synthesise, and
apply new information or knowledge to solve non-routine problems.
Authentic tasks replicate real-world challenges and standards of performance that experts or professionals
typically face in the field. It is an effective measure of intellectual achievement or ability because it requires
students to demonstrate their deep understanding, higher-order thinking, and complex problem solving through
the performance of exemplary tasks. Hence authentic assessment can serve as a powerful tool for assessing
students’ 21st-century competencies in the context of global educational reforms.
What is an assessment?
Assessment is the systematic process of documenting and using empirical data on students' knowledge, skills,
attitudes, and beliefs. Through assessment, teachers try to improve student learning. This is a short definition
of assessment.
What is testing?
What is testing in education? Almost everybody has experienced testing during his or her life: grammar tests,
driving license tests, etc. A test is used to examine someone's knowledge of something to determine what that
person knows or has learned. It measures the level of skill or knowledge that has been reached. A test is "an
evaluative device or procedure in which a sample of an examinee's behavior in a specified domain is obtained and
subsequently evaluated and scored using a standardized process" (The Standards for Educational and
Psychological Testing, 1999).
So, what’s the difference?
Test and assessment are often used interchangeably, but they mean something different. A test is a "product" that
measures a particular behavior or set of objectives, while assessment is a procedure rather than a
product. Assessment is used during and after instruction has taken place. After you have received the results of
your assessment, you can interpret the results and, if needed, alter the instruction.
Tests are done after the instruction has taken place; they are a way to complete the instruction and get the results.
The results of tests do not have to be interpreted, unlike assessment.
What is evaluation?
What's the definition of evaluation in education? Evaluation focuses on grades and might reflect classroom
components other than course content and mastery level. An evaluation can be used as a final review to
gauge the quality of instruction. It’s product-oriented. This means that the main question is: “What’s been
learned?” In short, evaluation is judgmental.
Authentic Assessment Tools
a. Observation-Based Tools
Observation provides the opportunity to monitor or assess a process or situation and document evidence of what
is seen and heard. Seeing actions and behaviours within a natural context, or as they usually occur, provides
insights into and understanding of the event, activity, or situation being evaluated.
b. Performance Samples Assessment Tools
Examples include dance, recital, and dramatic enactment. There may be prose or poetry interpretation. This form of
performance-based assessment can take time, so there must be a clear pacing guide.
c. Performance Assessment Tools
Performance task assessment lists are assessment tools that provide the structure students need to work more
independently and to encourage them to pay attention to the quality of their work.
Authentic Assessment examples:
1. Conducting research and writing a report.
2. Student debates (individual or group).
3. Experiments (trial-and-error learning).
4. Discussion partners or groups.
5. Character analysis.
6. Drawing and writing about a story or chapter.
7. Journal entries (reflective writing).
8. Student self-assessment.
2. Identifying an activity that would entail more or less the same set of competencies, and finding a task that
would be interesting and enjoyable for the students.
Example: Topic : Understanding Biological Diversity
Possible Task Design
Ø Bring the students to the pond or creek
Ø Ask them to find all living organisms near the pond or creek
Ø Bring them to the school playground to find as many living organisms as they can find.
Scoring Rubrics
A rubric is a scoring scale used to assess student performance along a task-specific set of criteria.
Authentic assessments are criterion-referenced measures: a student's aptitude on a task is determined by
matching the student's performance against a set of criteria to determine the degree to which the
performance meets the criteria for the task.
Example of Criteria (each criterion scored from 1 to 3):
Ø Number of appropriate hand gestures: 1 = 1-4 gestures; 2 = 5-9 gestures; 3 = 10-12 gestures
Ø Appropriate facial expressions: 1 = no apparent appropriate facial expressions; 2 = few appropriate facial
expressions; 3 = lots of appropriate facial expressions
A rubric is a coherent set of criteria for students' work that includes descriptions of levels of performance
quality on each criterion. Typically, rubrics are used in scoring or grading written assignments or oral
presentations; however, they may be used to score any form of student performance.
There is no specific number of levels a rubric should or should not possess; the number will vary with the task and
your needs, as long as you decide that it is appropriate. Generally, it is better to start with a smaller number of
levels of performance for a criterion and then expand if necessary.
Why Include Levels of Performance?
1. Clearer expectations. It is very useful for the students and the teacher if the criteria are identified and
communicated prior to completion of the task.
2. More consistent and objective assessment. In addition to better communicating teacher expectations, levels
of performance permit the teacher to more consistently and objectively distinguish between good and bad
performance, or between superior, mediocre, and poor performance, when evaluating.
3. Better feedback. Identifying specific levels of student performance allows the teacher to
provide more detailed feedback to students.
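The criterion-referenced logic described above can be sketched as a short program. This is only a hypothetical illustration: the rubric contents, criterion names, and the function name are assumptions for the sketch, not part of the source.

```python
# Hypothetical sketch of criterion-referenced rubric scoring: a performance is
# matched against fixed criteria and level descriptors, not against other students.

RUBRIC = {
    "hand gestures": {1: "1-4 appropriate gestures", 2: "5-9", 3: "10-12"},
    "facial expression": {1: "no apparent", 2: "few", 3: "lots"},
}

def score_performance(levels_awarded):
    """Sum the level awarded on each criterion, validating against the rubric."""
    total = 0
    for criterion, level in levels_awarded.items():
        if criterion not in RUBRIC or level not in RUBRIC[criterion]:
            raise ValueError(f"unknown criterion or level: {criterion}={level}")
        total += level
    return total

print(score_performance({"hand gestures": 3, "facial expression": 2}))  # 5
```

Because the score comes only from matching the performance to the stated criteria, two students can both earn the top score; nothing in the scheme compares or ranks students against each other.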
Performance-based assessment has led to the use of a variety of alternative ways of evaluating student
progress (journals, checklists, portfolios, projects, rubrics, etc.) compared with more traditional methods of
measurement (paper-and-pencil testing).
Product-oriented performance-based assessment (P-OPBA) is a kind of assessment wherein the assessor views
and scores the final product made, not the actual process of making that product. It is concerned with the
product alone and not the process; it focuses on the outcome or performance output of the learner and on the
learner's achievement. In short, P-OPBA evaluates the result or outcome of a process.
Product-Oriented Learning Competencies. Student performances can be defined as targeted tasks that lead to a
product or overall learning outcome.
Performance-based education poses a challenge for teachers to design instruction that is task-oriented. The trend
is based on the premise that learning needs to be connected to the lives of the students through relevant tasks
that focus on students’ ability to use their knowledge and skills in meaningful ways. In this case, performance-
based tasks require performance-based assessment in which the actual student performance is assessed
through a product, such as a completed project or work that demonstrates levels of task achievement.
Products can include a wide range of student works that target specific skills. Examples include communication
skills (reading, writing, speaking, listening) and psychomotor skills (those requiring physical abilities to perform
a given task). Using rubrics is one way that teachers can evaluate or assess student performance or proficiency
in any given task as it relates to a final product or learning outcome. The learning competencies associated with
products or outputs are linked with an assessment of the level of "expertise" manifested by the product. Thus,
product-oriented learning competencies target at least three (3) levels: novice or beginner level, skilled level,
and expert level.
There are other ways to state product-oriented learning competencies. For instance, we can define learning
competencies for products or outputs in the following way:
• Example: communication skills such as those demonstrated in reading, writing, speaking, and
listening.
Ø Level 1 (Beginner) – Does the finished product or project illustrate the minimum expected parts or
functions?
Ø Level 2 (Skilled) – Does the finished product or project contain additional parts and functions on top of the
minimum requirements which tend to enhance the final output?
Ø Level 3 (Expert) – Does the finished product or project contain the basic minimum parts and functions, have
additional features on top of the minimum, and is it aesthetically pleasing?
Example: The desired product is a scrapbook illustrating the historical event called EDSA I People Power.
Learning Competencies: The scrapbook presented by the students must:
Ø Contain pictures, newspaper clippings, and other illustrations for the main characters of EDSA I People Power,
namely: Corazon Aquino, Fidel V. Ramos, Juan Ponce Enrile, Ferdinand E. Marcos, and Cardinal Sin. – (minimum
specifications)
Ø Contain remarks and captions for the illustrations made by the student himself for the roles played by the
characters of EDSA I People Power. – (skilled level)
Ø Be presentable, complete, informative, and pleasing to the reader of the scrapbook. – (expert level)
Another example: The final product submitted by the students must:
Ø Possess the correct dimensions (5" x 5" x 5"). – (minimum specifications)
Ø Be sturdy, made of durable cardboard, and properly fastened together. – (skilled specifications)
Ø Be pleasing to the observer, preferably properly colored for aesthetic purposes. – (expert level)
Performance-based assessment for products and projects can also be used for assessing outputs of short-term
tasks, such as the one illustrated below for outputs in a typing class. Example: The desired output is the
typewritten document produced in a typing class.
Learning Competencies: The final typing outputs of the students must:
• Possess no more than five (5) errors in spelling. – (minimum specifications)
• Possess no more than five (5) errors in spelling, observing proper format based on the document to be
typewritten. – (skilled level)
• Possess no more than five (5) errors in spelling, have the proper format, and be readable and presentable. –
(expert level)
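The three cumulative levels above can be sketched as a simple classifier. This is a hypothetical illustration under stated assumptions: the function name, the boolean flags, and the "below minimum" label for outputs failing the spelling requirement are not from the source.

```python
# Hypothetical sketch: map a typing output to the highest expertise level
# whose cumulative requirements it satisfies (see the three levels above).

def classify_typing_output(spelling_errors, proper_format, presentable):
    """Classify one typing output by the product-oriented levels."""
    if spelling_errors > 5:
        return "below minimum"      # fails the spelling requirement shared by all levels
    if proper_format and presentable:
        return "expert"             # spelling + format + readable and presentable
    if proper_format:
        return "skilled"            # spelling + proper format
    return "beginner"               # meets only the minimum spelling specification

print(classify_typing_output(3, True, True))  # expert
```

Note how each level's check subsumes the one below it, mirroring the way the expert level includes the minimum and skilled specifications.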
Notice that in all of the above examples, product-oriented performance-based learning competencies are
evidence-based. The teacher needs concrete evidence that the student has achieved a certain level of
competence based on submitted products and projects.
Comparison of Process-Oriented and Product-Oriented
Performance-Based Assessment
Process-Oriented:
Ø concerned with the actual task performance
Ø evaluates how a movement is performed
Ø aims to evaluate the actual process of doing an object of learning
Ø aims to know what processes a person undergoes when given a task
Product-Oriented:
Ø the assessor views and scores the final product made
Ø evaluates the outcome of a movement
Ø a management philosophy, concept, focus, or state of mind which emphasizes the quality of the product
Ø a kind of assessment wherein the assessor views and scores the final product made
Task Designing
How should a teacher design a task for product-oriented performance-based assessment? The
design of the task in this context depends on what the teacher desires to observe as the output of the students.
The concepts that may be associated with task designing include:
Ø Complexity. The level of complexity of the project needs to be within the range of ability of the students.
Projects that are too simple tend to be uninteresting for the students while projects that are too complicated will
most likely frustrate them.
Ø Appeal. The project or activity must be appealing to the students. It should be interesting enough so that
students are encouraged to pursue the task to completion. It should lead to self-discovery of information by the
students.
Ø Creativity. The projects need to encourage students to exercise creativity and divergent thinking. Given the
same set of materials and project inputs, how does one best present the project? It should lead the students into
exploring the various possible ways of presenting the final output.
Ø Goal-Based. Finally, the teacher must bear in mind that the project is produced in order to attain a learning
objective. Thus, projects are assigned to students not just for the sake of producing something but for the
purpose of reinforcing learning. Example: Paper folding is a traditional Japanese art. However, it can be used as
an activity to teach the concepts of plane and solid figures in geometry. Provide students with a given number of
colored papers and ask them to construct as many plane and solid figures as they can from these papers without
cutting them (by paper folding only).
Backward and Forward Chaining
In backward chaining, it is easier for learners to visualize the end result, as they start by completing the final
step and work backwards. For example, when getting dressed, they will know how they should look when they
are ready for school in their uniform. A visual task analysis can be created, whether in picture or word form, and
the individual can follow it independently. Backward chaining creates a link between the most work (the last
step) and the biggest reinforcer (what is achieved, e.g., eating the toast if the task was to make toast).
Scoring Rubrics
Scoring rubrics are descriptive scoring schemes that are developed by teachers or other evaluators to guide the
analysis of the products or processes of students’ efforts (Brookhart, 1999).
Scoring rubrics are typically employed when a judgment of quality is required and may be used to evaluate a
broad range of subjects and activities.
From the major criteria, the next task is to identify sub-statements that would make the major criteria more
focused and objective. For instance, if we were scoring an essay on "Three Hundred Years of Spanish Rule in
the Philippines", the major criterion "Quality" may possess the following sub-statements:
Ø Interrelates the chronological events in an interesting manner
Ø Identifies the key players in each period of the Spanish rule and the roles that they played
Ø Succeeds in relating the history of Philippine Spanish rule (rated as Professional, Not quite professional, and
Novice)
The example displays a scoring rubric that was developed to aid in the evaluation of essays written by college
students in the classroom (based loosely on Leydens & Thompson, 1997).
When are scoring rubrics an appropriate evaluation technique?
Grading essays is just one example of performances that may be evaluated using scoring rubrics. There are many
other instances in which scoring rubrics may be used successfully: to evaluate group activities, extended projects,
and oral presentations.
Scoring rubrics also cut across disciplines and subject matter, for they are equally appropriate in English,
Mathematics, and Science classrooms.
Other Methods. Authentic assessment schemes apart from scoring rubrics exist in the arsenal of a teacher. For
example, checklists may be used rather than scoring rubrics in the evaluation of essays. Checklists enumerate a
set of desirable characteristics for a certain product, and the teacher marks those characteristics which are
actually observed.
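The checklist idea above can also be sketched in code. This is a hypothetical illustration: the listed essay characteristics and the function name are illustrative assumptions, not taken from the source.

```python
# Hypothetical sketch of checklist-based evaluation: the teacher marks which
# desirable characteristics of a product are actually observed. Unlike a rubric,
# there are no levels of quality, only present/absent judgments.

ESSAY_CHECKLIST = [
    "has a clear thesis",
    "interrelates chronological events",
    "identifies key players of each period",
    "cites primary sources",
]

def apply_checklist(observed):
    """Pair each desirable characteristic with whether it was observed."""
    return {item: item in observed for item in ESSAY_CHECKLIST}

marks = apply_checklist({"has a clear thesis", "cites primary sources"})
print(sum(marks.values()), "of", len(marks), "characteristics observed")  # 2 of 4
```

The contrast with the rubric is visible in the return type: a checklist yields only yes/no marks per characteristic, while a rubric attaches a graded level of performance to each criterion.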
The affective domain describes learning objectives that emphasize a feeling tone, an emotion, or a degree of
acceptance or rejection. Affective objectives vary from simple attention to selected phenomena to complex but
internally consistent qualities of character and conscience.
Krathwohl's taxonomy is a model that describes how individuals process and internalize learning objectives on an
affective or emotional level. There are five levels in the taxonomy. Verbs for expressing learning outcomes
include: ask, choose, describe, follow, give, hold, identify, reply, select, use.
Krathwohl's affective domain taxonomy is perhaps the best known of any of the affective taxonomies. "The
taxonomy is ordered according to the principle of internalization. Internalization refers to the process whereby a
person's affect toward an object passes from a general awareness level to a point where the affect is
'internalized' and consistently guides or controls the person's behavior (Seels & Glasgow, 1990, p. 28)."
How is the taxonomy presented?
The taxonomy is presented in five stages:
1. Receiving describes the stage of being aware of or sensitive to the existence of certain ideas, material, or
phenomena and being willing to tolerate them. Examples include: to differentiate, to accept, to listen (for), to
respond to.
2. Responding describes the second stage of the taxonomy and refers to a commitment in some small measure
to the ideas, materials, or phenomena involved by actively responding to them. Examples are: to comply with, to
follow, to commend, to volunteer, to spend leisure time in, to acclaim.
3. Valuing means being willing to be perceived by others as valuing certain ideas, materials, or phenomena.
Examples include: to increase measured proficiency in, to relinquish, to subsidize, to support, to debate.
4. Organization is the fourth stage of Krathwohl's taxonomy and involves relating the new value to those one
already holds and bringing it into a harmonious and internally consistent philosophy. Examples are: to discuss, to
theorize, to formulate, to balance, to examine.
5. Characterization by value or value set means acting consistently in accordance with the values the individual
has internalized. Examples include: to revise, to require, to be rated high in the value, to avoid, to resist, to
manage, to resolve.
In 1964, David R. Krathwohl, together with his colleagues, extended Bloom's Taxonomy of Educational Objectives
by publishing the second taxonomy of objectives, this time giving emphasis to the affective domain. Krathwohl
and his collaborators attempted to subdivide the affective realm into relatively distinct divisions.
Krathwohl's Taxonomy of Educational Objectives
Level: Receiving (Attending)
Description: Concerned with the student's sensitivity to the existence of certain phenomena and stimuli, that is,
with the student's willingness to receive or to attend to these stimuli. It is categorized in three subdivisions that
show the different levels of attending to phenomena: awareness of the phenomena, willingness to receive the
phenomena, and controlled or selected attention to phenomena.
Example: The student does mathematics activities for grades.
The affective domain is one of the three domains in Bloom's Taxonomy. It involves feelings, attitudes, and
emotions. It includes the ways in which people deal with external and internal phenomena emotionally, such as
values, enthusiasms, and motivations. Bloom’s Revised Taxonomy—Affective Domain The affective domain
(Krathwohl, Bloom, Masia, 1973) includes the manner in which we deal with things emotionally, such as feelings,
values, appreciation, enthusiasms, motivations, and attitudes
Affective Learning Competencies
According to William James Popham (2003), the reasons why it is important to assess affect are:
1. Educators should be interested in assessing affective variables because these variables are excellent predictors of students’ future behavior;
2. Teachers should assess affect to remind themselves that there’s more to being a successful teacher than helping students obtain high scores on achievement tests; and
3. Information regarding students’ affect can help teachers teach more effectively on a day-to-day basis.
Ø Altruism: willingness and propensity to help others
Ø Moral Development: attainment of ethical principles that guide decision-making and behavior
Ø Classroom Environment: nature of the feeling tones and interpersonal relationships in a class
Learning Targets
1. Attitude Targets
2. Value Targets
3. Motivation Targets
4. Academic Self-Concept Targets
5. Social Relationship Targets
6. Classroom Environment Targets
Attitude Targets
Ø McMillan (1980) defines attitudes as internal states that influence what students are likely to do.
Ø The internal state can in some degree determine positive or negative or favorable or unfavorable reaction
toward an object, situation, person or group of objects, general environment, or group of persons.
Ø In a learning institution, attitude is contingent on subjects, teachers, other students, homework, and other
objects or persons.
A Positive Attitude Toward: learning; math, science, English, and other subjects; assignments; classroom rules; teachers
A Negative Attitude Toward: cheating; drug use; bullying; cutting classes; dropping out
McMillan (2007) suggested that in setting value targets, it is necessary to stick to noncontroversial values that are clearly related to academic learning and to school and departmental educational goals.
McMillan (2007) and Popham (2005) suggested other noncontroversial values (aside from those mentioned) like kindness, generosity, perseverance, loyalty, respect, courage, compassion, and tolerance. • It is better to do an excellent job assessing a few important traits than to try to assess many traits casually.
Motivation Targets
Ø Motivation is determined by students' expectations, their beliefs about whether they are likely to be successful, and the relevance of the outcome.
Ø Expectations refer to the self-efficacy of the students.
Ø Values are self-perceptions of the importance of the performance.
Kinds Of Motivation
Ø Intrinsic Motivation • when students do something or engage themselves in activities because they find the
activities interesting, enjoyable, or challenging.
Ø Extrinsic Motivation • is doing something because it leads to rewards or punishments.
Motivation as self-efficacy
In addition to being influenced by their goals, interests, and attributions, students’ motives are affected by specific beliefs about their own personal capacities. In self-efficacy theory these beliefs become a primary, explicit
explanation for motivation (Bandura, 1977, 1986, 1997). Self-efficacy is the belief that you are capable of carrying
out a specific task or of reaching a specific goal. Note that the belief and the action or goal are specific. Self-
efficacy is a belief that you can write an acceptable term paper, for example, or repair an automobile, or make
friends with the new student in class. These are relatively specific beliefs and tasks. Self-efficacy is not about
whether you believe that you are intelligent in general, whether you always like working with mechanical things,
or think that you are generally a likeable person. These more general judgments are better regarded as various
mixtures of self-concepts (beliefs about general personal identity) or of self-esteem (evaluations of identity). They
are important in their own right, and sometimes influence motivation, but only indirectly (Bong & Skaalvik, 2004).
Self-efficacy beliefs, furthermore, are not the same as “true” or documented skill or ability. They are self-
constructed, meaning that they are personally developed perceptions. There can sometimes therefore be
discrepancies between a person’s self-efficacy beliefs and the person’s abilities. You can believe that you can
write a good term paper, for example, without actually being able to do so, and vice versa: you can believe
yourself incapable of writing a paper, but discover that you are in fact able to do so. In this way self-efficacy is like
the everyday idea of confidence, except that it is defined more precisely. And as with confidence, it is possible to
have either too much or too little self-efficacy. The optimum level seems to be either at or slightly above true
capacity (Bandura, 1997). As we indicate below, large discrepancies between self-efficacy and ability can create
motivational problems for the individual.
Since self-efficacy is self-constructed, furthermore, it is also possible for students to miscalculate or misperceive
their true skill, and the misperceptions themselves can have complex effects on students’ motivations. From a
teacher’s point of view, all is well even if students overestimate their capacity but actually do succeed at a
relevant task anyway, or if they underestimate their capacity, yet discover that they can succeed and raise their
self-efficacy beliefs as a result. All may not be well, though, if students do not believe that they can succeed and
therefore do not even try, or if students overestimate their capacity by a wide margin, but are disappointed
unexpectedly by failure and lower their self-efficacy beliefs.
Self-concept and self-esteem are multidimensional. • Each person has a self-description in each area, and these descriptions form one's self-concept or self-image. • Moreover, individuals have a sense of self-regard, self-affirmation, and self-worth in each area (self-esteem).
Social relationship targets include: peer relations, friendship, cooperation, collaboration, taking a stand, conflict resolution, functioning in groups, assertiveness, prosocial behavior, and empathy.
In every classroom there is a unique climate that can be felt at any point in time. Some classrooms have a comfortable atmosphere; others have a relaxed and productive ambiance. As a result, some classes are happy and content while others are serious and tense because of the classroom climate. It follows that students behave differently as dictated by the classroom climate: some classes feel warm and supportive while others register as cold and rejecting.
Characteristics and Descriptions
Affiliation: the extent to which students like and accept each other
Involvement: the extent to which students are interested in and engaged in learning
Task Orientation: the extent to which classroom activities are focused on the completion of academic tasks
Cohesiveness: the extent to which students share norms and expectations
Favoritism: whether each student enjoys the same privileges
Influence: the extent to which each student influences classroom decisions
Friction: the extent to which students bicker with one another
Formality: the emphasis on imposing rules
Communication: the extent to which communication among students and with the teacher is honest and authentic
Warmth: the extent to which students care about each other and show concern
What is the relevance of the affective domain in education?
If we are striving to apply the continuum of Krathwohl et al. to our teaching, then we are encouraging students to
not just receive information at the bottom of the affective hierarchy. We'd like for them to respond to what they
learn, to value it, to organize it and maybe even to characterize themselves as science students, science majors
or scientists.
We are also interested in students' attitudes toward science, scientists, learning science and specific science
topics. We want to find teaching methods that encourage students and draw them in. Affective topics in
educational literature include attitudes, motivation, communication styles, classroom management styles,
learning styles, use of technology in the classroom and nonverbal communication. It is also important not to turn
students off by subtle actions or communications that go straight to the affective domain and prevent students
from becoming engaged.
In the educational literature, nearly every author introduces their paper by stating that the affective domain is
essential for learning, but it is the least studied, most often overlooked, the most nebulous and the hardest to
evaluate of Bloom's three domains. In formal classroom teaching, the majority of the teacher's efforts
typically go into the cognitive aspects of the teaching and learning and most of the classroom time is designed for
cognitive outcomes. Similarly, evaluating cognitive learning is straightforward but assessing affective outcomes is
difficult. Thus, there is significant value in realizing the potential to increase student learning by tapping into the
affective domain. Similarly, students may experience affective roadblocks to learning that can neither be
recognized nor solved when using a purely cognitive approach.
4.1 Objective 1 Exercise 1 Define the different concepts related to assessing affective learning outcomes.
________________________________________________________
________________________________________________________
________________________________________________________
4.3 Objective Exercise 5 Differentiate the three methods of assessing affective learning outcomes.
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
_____________________________________________________________________________________
4.4 Objective Exercise 6 What are the kinds of motivation? Explain.
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
Self-Evaluation
Assessment portfolios require students to continuously reflect and perform self-evaluations of their work.
Teachers should convey to students the purpose of the portfolio, what constitutes quality work and how the
portfolio is graded. As students judge their work using explicit criteria to identify strengths and weaknesses, they
are monitoring their own progress. According to the article, “Student Self-Evaluation: What Research Says And
What Practice Shows,” by Carol Rolheiser and John A. Ross, students who participate in self-evaluations are
motivated, have a positive outlook and develop cognitive skills.
Portfolios used well in classrooms have several advantages. They provide a way of documenting and evaluating
growth in a much more nuanced way than selected response tests can. Also, portfolios can be integrated easily
into instruction, i.e. used for assessment for learning. Portfolios also encourage student self-evaluation and
reflection, as well as ownership for learning (Popham, 2005). Using classroom assessment to promote student
motivation is an important component of assessment for learning which is considered in the next section.
Individualized
Portfolios permit individualized assessment. Some students are not good test-takers, and portfolios offer them an
alternative to demonstrate mastery of content. Numerous work samples can show students moving from basic to
advanced skills, demonstrating continued learning growth. Because assessment portfolios are individualized,
students and teachers have the opportunity to choose the documents they want to include in the portfolio and to
make decisions about how to improve the student's work.
Promote Communication
Assessment portfolios promote communication between teachers and students. Some shy students who fail to initiate conversations within the classroom benefit from one-on-one interaction with the teacher, while other
students may enjoy speaking about their accomplishments. During conferences, students can discuss their
progress, ask questions and receive suggestions and strategies for improving work. Dialogues with peers and
parents also help students in meaningful reflection and goal-setting.
Accountability
Portfolio assessment can hold students accountable for mastering content standards in a subject area. Portfolios
offer students tangible evidence to show their academic achievements as well as their participation in community
service projects. Because high school graduation is contingent on mastery of essential elements of the
curriculum, portfolios can give students an alternate avenue to show documentation of skills. In addition, many
colleges and employers request portfolios to see if students have basic skills, problem solving and collaborative
work skills.
Major Disadvantages of Portfolio Use
First, good portfolio assessment takes an enormous amount of teacher time and organization. The time is
needed to help students understand the purpose and structure of the portfolio, decide which work samples to
collect, and to self-reflect. Some of this time needs to be spent in one-to-one conferences. Reviewing and
evaluating the portfolios out of class time is also enormously time consuming. Teachers have to weigh if the time
spent is worth the benefits of the portfolio use.
Second, evaluating portfolios reliably and eliminating bias can be even more difficult than in a constructed-response assessment because the products are more varied. The experience of the statewide use of portfolios for assessment in writing and mathematics for fourth and eighth graders in Vermont is sobering. Teachers used
the same analytic scoring rubric when evaluating the portfolio. In the first two years of implementation samples
from schools were collected and scored by an external panel of teachers. In the first year the agreement among
raters (i.e. inter-rater reliability) was poor for mathematics and writing; in the second year the agreement among raters improved for mathematics but not for writing. However, even with the improvement in mathematics the
reliability was too low to use the portfolios for individual student accountability (Koretz, Stecher, Klein &
McCaffrey, 1994). When reliability is low, validity is also compromised because unstable results cannot be
interpreted meaningfully.
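The inter-rater reliability mentioned above can be quantified. Below is a minimal Python sketch, using entirely hypothetical rubric scores (not the Vermont data), that computes simple percent agreement between two raters and Cohen's kappa, a statistic that corrects that agreement for chance:

```python
# Two raters score the same ten portfolios on a 1-4 analytic rubric.
# These scores are made up purely for illustration.
from collections import Counter

rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [4, 3, 2, 2, 4, 2, 3, 1, 4, 3]

n = len(rater_a)
# Observed agreement: fraction of portfolios given the same score.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: how often the raters would match if each assigned
# scores at random in the same proportions they actually used.
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / n**2

# Cohen's kappa: agreement beyond chance, scaled to a 0-1 range.
kappa = (observed - expected) / (1 - expected)
print(f"agreement={observed:.2f}, kappa={kappa:.2f}")
```

Two raters can agree 70% of the time yet still show only moderate kappa, which is why raw percent agreement alone can overstate portfolio scoring reliability.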
Purposes of Portfolio
1. Portfolio assessment matches assessment to teaching.
2. Portfolio assessment has clear goals. In fact, they are decided on at the beginning of instruction and are
clear to teacher and students alike.
3. Portfolio assessment gives a profile of learners’ abilities in terms of depth, breadth, and growth.
4. Portfolio assessment is a tool for assessing a variety of skills not normally testable in a single setting for
traditional testing.
5. Portfolio assessment develops awareness of students’ own learning.
6. Portfolio assessment caters to individuals in a heterogeneous class.
7. Portfolio assessment develops social skills. Students interact with other students in the development of their
own portfolios.
8. Portfolio assessment develops independent and active learners.
9. Portfolio assessment can improve motivation for learning and thus achievement.
10. Portfolio assessment provides opportunity for student-teacher dialogue.
Uses of Portfolios
Much of the literature on portfolio assessment has focused on portfolios as a way to integrate assessment and
instruction and to promote meaningful classroom learning. Many advocates of this function believe that a
successful portfolio assessment program requires the ongoing involvement of students in the creation and
assessment process. Portfolio design should provide students with the opportunities to become more reflective
about their own work, while demonstrating their abilities to learn and achieve in academics.
For example, some feel it is important for teachers and students to work together to prioritize the criteria that will
be used as a basis for assessing and evaluating student progress. During the instructional process, students and
teachers work together to identify significant pieces of work and the processes required for the portfolio. As
students develop their portfolio, they are able to receive feedback from peers and teachers about their work.
Because of the greater amount of time required for portfolio projects, there is a greater opportunity for
introspection and collaborative reflection. This allows students to reflect and report about their own thinking
processes as they monitor their own comprehension and observe their emerging understanding of subjects and
skills. The portfolio process is dynamic and is affected by the interaction between students and teachers.
A. Working Portfolios
A working portfolio is so named because it is a project “in the works,” containing work in progress as well as
finished samples of work. It serves as a holding tank for work that may be selected later for a more permanent
assessment or display portfolio. A working portfolio is different from a work folder, which is simply a receptacle for
all work, with no purpose to the collection. A working portfolio is an intentional collection of work guided by
learning objectives.
Purpose
The major purpose of a working portfolio is to serve as a holding tank for student work. The pieces related to a
specific topic are collected here until they move to an assessment portfolio or a display portfolio, or go home with
the student. In addition, the working portfolio may be used to diagnose student needs. Here both student and
teacher have evidence of student strengths and weaknesses in achieving learning objectives, information
extremely useful in designing future instruction.
Audience
Given its use in diagnosis, the primary audience for a working portfolio is the student, with guidance from the
teacher. By working on the portfolio and reflecting on the quality of work contained there, the student becomes
more reflective and self-directed. With very young children, however, the primary audience is the teacher, with the
participation of the student.
Parents may be another important audience of a working portfolio, since it can help inform parent/teacher
conferences. The portfolio is particularly useful for those parents who do not accept the limitations of their child's
current skills or do not have a realistic picture of the way their child is progressing compared with other children.
In such situations, evidence from a portfolio can truly “speak a thousand words.” In addition, a portfolio can serve
to document the progress a student has made, progress of which a parent may be unaware.
Process
A working portfolio is typically structured around a specific content area; pieces collected relate to the objectives
of that unit and document student progress toward mastery of those objectives. Therefore, sufficient work must
be collected to provide ample evidence of student achievement. Because diagnosis is a major purpose of the
working portfolio, some of the pieces included will show less than complete understanding and will help shape
future instruction.
The working portfolio is reviewed as a whole and its pieces evaluated—either periodically or at the end of the
learning unit. Some pieces may be shifted to an assessment portfolio to document student acquisition of
instructional objectives. Other pieces may be moved to a student's own display (or best works) portfolio or
celebration of individual learning. Still other pieces are sent home with the student.
As students move pieces from a working portfolio into either an assessment or display portfolio, they describe the
reasons for their choices. In this process of selection and description, students must reflect seriously on their
work and what it demonstrates about them as learners. As students and their teachers look through the portfolio,
they set short-term objectives for achieving certain curriculum goals. The portfolio thus provides evidence of
strengths and weaknesses and serves to define the next steps in learning.
B. Display Portfolios
Purpose
The purpose of a display portfolio is to demonstrate the highest level of achievement attained by the student.
Collecting items for this portfolio is a student's way of saying “Here's who I am. Here is what I can do.” A display
portfolio may be maintained from year to year, with new pieces added each year, documenting growth over time.
And while a best works portfolio may document student efforts with respect to curriculum objectives, it may also
include evidence of student activities beyond school (a story written at home, for example).
There are many possibilities for the contents of a display portfolio. The benefits of portfolios were first recognized
in the area of language arts, specifically in writing. Therefore, writing portfolios are the most widely known and
used. But students may elect to put many types of items in their portfolio of best works—a drawing they like, a
poem they have written, a list of books they have read, or a difficult problem they have solved.
Audience
Since the student selects her or his own best works, the audience for a display portfolio is that student and the
other important individuals, such as parents and older siblings, to whom the student chooses to show the
portfolio. Other audiences include a
current teacher or next year's teacher, who may learn a lot about the student by studying the portfolio.
In addition, a student may submit portfolios of best works to colleges or potential employers to supplement other
information; art students have always used this approach. The contents of these portfolios are determined by the
interests of the audience and may include videos, written work, projects, resumés, and testimonials. The act of
assembling a display portfolio for such a practical purpose can motivate high school students to produce work of
high quality.
Process
Most pieces for a display portfolio are collected in a working portfolio of school projects. Sometimes, however, a
student will include a piece of work from outside the classroom, such as a project from scouts or a poem written
at home. Students select the items to be included in a display portfolio. Their choices define them as students
and as learners. In making their selections, students illustrate what they believe to be important about their
learning, what they value and want to show to others.
C. Assessment Portfolios
The primary function of an assessment portfolio is to document what a student has learned. The content of the
curriculum, then, will determine what students select for their portfolios. Their reflective comments will focus on
the extent to which they believe the portfolio entries demonstrate their mastery of the curriculum objectives. For
example, if the curriculum specifies persuasive, narrative, and descriptive writing, an assessment portfolio should
include examples of each type of writing. Similarly, if the curriculum calls for mathematical problem solving and
mathematical communication, then the assessment portfolio will include entries documenting both problem solving
and communication, possibly in the same entry.
Purpose
The primary purpose of an assessment portfolio is to document student learning on specific curriculum
outcomes. As such, the items in the portfolio must be designed to elicit the knowledge and skill specified in the
outcomes. It is the assessment tasks that bring the curriculum outcomes to life; only by specifying precisely what
students must do and how well they must do it do these statements of learning have meaning.
Assessment portfolios may be used to demonstrate mastery in any curricular area. They may span any period of
time, from one unit to the entire year. And they may be dedicated to one subject or many subjects. For example,
a teacher may wish to have evidence that a child has sufficient skills in a content area to move to the next level
or grade. The criteria for moving on and the types of necessary evidence must be established. Then the portfolio
is compiled and assessed.
Audience
There are many possible audiences for an assessment portfolio, depending on its specific purpose. One
audience may be the classroom teacher, who may become convinced that the objectives of an instructional unit
have been mastered or who may decide to place a student in advanced classes or special sections. Alternatively,
the audience may be the school district or even the state, seeking documentation of student learning, and
permitting a student to move to the high school or receive a diploma. A secondary, though very important,
audience is always the student, who provides evidence of significant learning.
Process
There are eight basic steps in developing an assessment portfolio system. Since portfolio entries represent a
type of performance, these steps resemble the principles for developing good performance assessments.
1. Determine the curricular objectives to be addressed through the portfolio.
2. Determine the decisions that will be made based on the portfolio assessments. Will the assessments be
used for high-stakes assessment at certain levels of schooling (for example, to enable students to make the
transition from middle school to high school)?
3. Design assessment tasks for the curricular objectives. Ensure that the task matches instructional intentions
and adequately represents the content and skills (including the appropriate level of difficulty) students are
expected to attain. These considerations will ensure the validity of the assessment tasks.
4. Define the criteria for each assessment task and establish performance standards for each criterion.
5. Determine who will evaluate the portfolio entries. Will they be teachers from the students' own school?
Teachers from another school? Or does the state identify and train evaluators?
6. Train teachers or other evaluators to score the assessments. This will ensure the reliability of the
assessments.
7. Teach the curriculum, administer assessments, collect them in portfolios, score assessments.
8. As determined in Step 2, make decisions based on the assessments in the portfolios.
Challenges
But even in a classroom environment where the stakes are lower, assessment portfolios are more formal affairs
than those designed to diagnose learning needs (working portfolios) or to celebrate learning (best works
portfolios). In an assessment portfolio, the content matters and it must demonstrate and document what students
have learned. The origin of an assessment portfolio may be quite external to the student and his world. The
mandate may come from outside the classroom—for instance, via curriculum committees and board action, or
directly from the state department of education. Moreover, the eventual owner of the portfolio's contents may be
someone other than the student. In addition, the selection process is more controlled and dictated, since the
portfolio entries must document particular learning outcomes. And there may be no opportunity for the student to
“show off” his or her portfolio.
Measurement
The word measurement, as it applies to education, is not substantially different from when it is used in any other
field. It simply means determining the attributes or dimensions of an object, skill or knowledge. We use common
objects in the physical world to measure, such as tape measures, scales and meters. These measurement tools
are held to standards and can be used to obtain reliable results. When used properly, they accurately gather data
for educators and administrators.
Standard measurements in education include raw scores, percentile ranks, and standard scores.
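To show how these three measurements relate, the sketch below uses made-up class scores to convert one raw score into a z-score, a T-score (a standard score with mean 50 and SD 10), and a percentile rank; the numbers are illustrative only:

```python
# Hypothetical class scores and one student's raw score.
import statistics

scores = [62, 70, 74, 75, 78, 81, 84, 85, 88, 93]
raw = 84

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)   # population standard deviation of the class

z = (raw - mean) / sd            # z-score: distance from the mean in SD units
t = 50 + 10 * z                  # T-score: rescaled so mean = 50, SD = 10

# Percentile rank: percentage of the group scoring below this raw score.
below = sum(s < raw for s in scores)
percentile_rank = 100 * below / len(scores)

print(f"z = {z:.2f}, T = {t:.1f}, percentile rank = {percentile_rank:.0f}")
```

Note that the raw score of 84 means little by itself; the percentile rank and standard scores give it meaning relative to the group.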
Assessment
One of the primary measurement tools in education is the assessment. Teachers gather information by giving
tests, conducting interviews and monitoring behavior. The assessment should be carefully prepared and
administered to ensure its reliability and validity. In other words, an assessment must provide consistent results
and it must measure what it claims to measure.
Evaluation
Creating valid and reliable assessments is critical to accurately measuring educational data. Evaluating the
information gathered, however, is equally important to the effective use of the information for instruction.
In education, evaluation is the process of using the measurements gathered in the assessments. Teachers use
this information to judge the relationship between what was intended by the instruction and what was learned.
They evaluate the information gathered to determine what students know and understand, how far they have
progressed and how fast, and how their scores and progress compare to those of other students.
Why Are Measurement, Assessment and Evaluation Important in Education?
According to educator and author Graham Nuthall in his book The Hidden Lives
of Learners, "In most of the classrooms we have studied, each student already knows about 40-50% of what the
teacher is teaching." The goal of data-driven instruction is to avoid teaching students what they already know and
teach what they do not know in a way the students will best respond to.
For the same reason, educators and administrators understand that assessing students and evaluating the
results must be ongoing and frequent. Scheduled assessments are important to the process, but teachers must
also be prepared to reassess students, even if informally, when they sense students are either bored with the
daily lesson or frustrated by material they are not prepared for. Using the measurements of these intermittent
formative assessments, teachers can fine-tune instruction to meet the needs of their students on a daily and
weekly basis.
Why Is Data-Driven Instruction So Effective?
Accurately measuring student progress with reliable assessments and then evaluating the information to make
instruction more efficient, effective and interesting is what data-driven instruction is all about. Educators who are
willing to make thoughtful and intentional changes in instruction based on more than the next chapter in the
textbook find higher student engagement and more highly motivated students.
In fact, when students are included in the evaluation process, they are more likely to be self-motivated. Students
who see the results of their work only on the quarterly or semester report card or the high-stakes testing report
are often discouraged or deflated, knowing that the score is a permanent record of their past achievement.
When students are informed about the results of more frequent formative assessments and can see how they
have improved or where they need to improve, they more easily see the value of investing time and energy in
their daily lessons and projects.
Coursework in assessment introduces students "to elements of assessment that are essential to good teaching. It provides students with an understanding of the role of assessment in the instructional process," including the proper evaluation of assessments and standardized tests, and how to make better use of the data in their daily classroom instruction.
Data-driven instruction, using accurate measurements, appropriate assessments and in-depth evaluation, is
changing the way we view tests and instruction, as well as the way we communicate information to both students
and families. Teachers who have a clear understanding of how and why these issues are important will find these
changes give them a better understanding of their students and better opportunities to help their students
achieve academic success.
3. Checklists of objectives
Most common in elementary school. Checklists can either replace or supplement letter grades. Each item in the
checklist can be rated: Outstanding, Satisfactory, Unsatisfactory; A, B, C; etc. The problem is to keep the list
manageable and understandable.
4. Letters to parents/guardians
Useful supplement to grades. Limited value as the sole report, because they are:
Ø very time consuming
Ø accounts of weaknesses are often misinterpreted
Ø not systematic or cumulative
Great tact is needed in presenting problems (lying, etc.)
5. Portfolios
Set of purposefully selected work, with commentary by student and teacher. Useful for:
Ø showing student’s strengths and weaknesses
Ø illustrating range of student work
Ø showing progress over time or stages of a project
Ø teaching students about objectives/standards they are to meet
6. Parent-teacher conferences
Used mostly in elementary school. Portfolios (when used) are a useful basis for discussion. Useful for:
Ø two-way flow of information
Ø getting more information and cooperation from parents
Limited in value as the major report, because:
Ø time consuming
Ø provides no systematic record of progress
Ø some parents won’t come
Guidelines
A. Weighted Components – properly weight each component to create a composite grade. Weights are normally
agreed upon by school officials; for example, components such as quizzes, projects/assignments, class
participation, and periodic tests may be assigned weights of 30%, 25%, 30%, and 15%.
B. Principal Components Analysis – a more scientific approach; hardly practiced in schools because of its
difficulty. Put all components on the same scale to weight them properly:
Ø equate the range of scores
Ø convert all to T-scores or other standard scores
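Converting components to T-scores (mean 50, SD 10) before weighting can be sketched as below. The raw scores and the 40/60 weights are hypothetical, chosen only to show the mechanics.

```python
import statistics

# Hypothetical raw scores for five students on two components
# that use different scales.
quiz = [12, 15, 18, 10, 14]   # out of 20
test = [55, 70, 85, 40, 65]   # out of 100

def t_scores(raw):
    """Convert raw scores to T-scores: mean 50, standard deviation 10."""
    mu = statistics.mean(raw)
    sd = statistics.pstdev(raw)
    return [50 + 10 * (x - mu) / sd for x in raw]

quiz_t = t_scores(quiz)
test_t = t_scores(test)

# Once on a common scale, components can be combined into a weighted
# composite (hypothetical weights: quiz 40%, test 60%).
composite = [0.4 * q + 0.6 * t for q, t in zip(quiz_t, test_t)]
```

Because every component now has the same mean and spread, the chosen weights, rather than accidental differences in score ranges, determine each component's influence on the composite.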
C. Norm-Referenced Grading System
• Grades reflect relative performance – score compared to other students (rank)
• Grade depends on what group you are in, not just your own performance
• Typical grade may be shifted up or down, depending on the group’s ability
• Widely used; most classroom testing is norm-referenced
D. Criterion-Referenced Grading System
• Grades reflect absolute performance – score compared to specified performance standards (what you can do)
• Grade does not depend on what group you are in, but only on your own performance compared to a set of
performance standards
• Grades must: clearly define the domain; clearly define and justify the performance standards; and be based
on criterion-referenced assessment
• Conditions are hard to meet except in complete mastery learning settings
• Grading is a complex task
E. Score Compared to Learning Potential
• Grades are inconsistent with standards-based performance – each child has his/her own standard
• Reliably estimating learning ability is very difficult
• One cannot reliably measure change with classroom measures
• Should only be used as a supplement
Regression analysis
Regression analysis describes the relationship between a set of independent variables and a dependent
variable. This analysis incorporates hypothesis tests that help determine whether the relationships observed in
the sample data actually exist in the population.
For example, consider a fitted line plot displaying the relationship in a regression model between height and
weight in adolescent girls.
Because the relationship is statistically significant, we have sufficient evidence to conclude that this relationship
exists in the population rather than just our sample.
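A minimal least-squares fit can be sketched in pure Python. The height/weight numbers below are invented for illustration, not taken from the study described above.

```python
# Ordinary least squares for simple linear regression, pure Python.
# Hypothetical height (inches) and weight (kg) data for adolescent girls.
heights = [57.0, 58.5, 60.0, 61.5, 63.0, 64.5]
weights = [41.0, 44.0, 47.0, 50.0, 53.0, 56.0]

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(weights) / n

# slope = covariance(x, y) / variance(x); intercept follows from the means
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, weights))
sxx = sum((x - mean_x) ** 2 for x in heights)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"weight ~ {intercept:.1f} + {slope:.1f} * height")  # → weight ~ -73.0 + 2.0 * height
```

A hypothesis test on the slope (not shown here) is what lets a researcher claim the relationship holds in the population rather than only in the sample.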
Importance of Statistics
The field of statistics is the science of learning from data. Statistical knowledge helps you use the proper
methods to collect the data, employ the correct analyses, and effectively present the results. Statistics is a crucial
process behind how we make discoveries in science, make decisions based on data, and make predictions.
Statistics allows you to understand a subject much more deeply
There are two main reasons why studying the field of statistics is crucial in modern society. First, statisticians are
guides for learning from data and navigating common problems that can lead you to incorrect
conclusions. Second, given the growing importance of decisions and opinions based on data, it’s crucial that you
can critically assess the quality of analyses that others present to you. Statistics is an exciting field about the thrill
of discovery, learning, and challenging your assumptions. Statistics facilitates the creation of new knowledge. Bit
by bit, we push back the frontier of what is known.
Why are statistics important in our lives? Statistics are the sets of mathematical methods that we use to analyze
things. They keep us informed about what is happening in the world around us. Statistics are important
because today we live in an information world, and much of this information is determined mathematically
by statistics. To be correctly informed, sound data and statistical concepts are therefore necessary.
Ratio Measure describes a variable with attributes that have all the qualities of nominal, ordinal and interval
measures, and is based on a “true zero” point.
Ratio measure refers to the highest (most complex) level of measurement that a variable can possess. The
properties of a variable that is a ratio measure are the following: (a) Each value can be treated as a unique
category (as in a nominal measure); (b) different values have order of magnitude, such as greater than or less
than or equal to (as in an ordinal measure); (c) basic mathematical procedures can be conducted with the values,
such as addition and division (as with an interval measure); and (d) the variable can take on the value of zero. An
example of a ratio measure is someone's annual income.
A ratio measure may be expressed as either a fraction or a percentage.
Conceptualization is breaking down and converting research ideas into common meanings to develop an
agreement among the users. This process eventually leads to framing meaningful concepts, which ultimately
lead to the creation of a theory.
Importance of conceptualization in research:
In deductive research, conceptualization helps to translate portions of an abstract theory into
testable hypotheses involving specific variables. In
inductive research, conceptualization is an important part of the process used to make sense of related
observations.
Steps :
1. What is the topic? The first step of any project is to determine what you want to study.
2. What is my problem? Why should anyone care about my problem? You must then establish the problem
your project hopes to solve, including filling in a gap or extending the literature in a new and exciting direction.
Interval/Ratio variables give us the most precision and information to work with. With that greater
precision and information, you have more freedom when it comes to statistical analysis.
Operationalization
This is the process by which researchers conducting quantitative research spell out precisely how a concept will
be measured. It involves identifying the specific research procedures we will use to gather data about our
concepts. This of course requires that we know what research method(s) we will employ to learn about our
concepts, and we’ll examine specific research methods later on in the text.
Operationalization is the development of specific research definitions that will result in empirical observations
representing those concepts in the real world. It is a process of strictly defining variables as measurable factors,
so you will need highly specific operationalizations. This process defines “fuzzy” concepts and allows them to be
measured empirically.
Purpose: To remove vagueness- all variables in the study must be defined.
Example: You are studying mental health outcomes for older adults with physical disabilities.
Thus you will need to operationalize your three conceptualizations of (1) mental health,
(2) older adults, (3) physical disabilities.
(1) Mental Health: a person's condition with regard to their psychological and emotional well-being including
stress, anxiety, depression, and loneliness.
(2) Older Adults: people who are 55-85 years old
(3) Physical Disabilities: a limitation on a person's physical functioning that is related to
accomplishing instrumental activities of daily living (IADLS)
Note: Operationalizations are usually similar across studies but are often different in their specifics. For example,
some studies operationalize older adults as someone who is over 65, some measure mental health by a
diagnosis by a mental health professional of certain conditions such as depression or schizophrenia, and other
studies operationalize physical disability by specific diagnosis of a given condition.
Operationalization works by identifying specific indicators that will be taken to represent the ideas we are
interested in studying. Operationalisation is the term used to describe how a variable is clearly defined by the
researcher. The term operationalisation can be applied to independent variables (IV), dependent variables (DV)
or co-variables (in a correlational design).
Operationalization means turning abstract concepts into measurable observations. Although some concepts,
like height or age, are easily measured, others, like spirituality or anxiety, are not. Through operationalization ,
you can systematically collect data on processes and phenomena that aren't directly observable.
Operationalization is an essential component in a theoretically centered science because it provides the means
of specifying exactly how a concept is being measured or produced in a particular study.
Indicators
Survey or interview questions used to measure study variables defined and outlined through the operational
definition
Purpose: To generate questions that directly relate to a study's topic
Example: You are studying mental health outcomes for older adults with physical disabilities.
Using the operationalizations for the study you will design indicators. Each indicator should serve a specific
purpose in the study.
operationalization for mental health : a person's condition with regard to their psychological and emotional well-
being including stress, anxiety, depression, and loneliness.
Indicators for stress: Perceived Stress Scale (PSS)
Indicators are established measures used to determine how well a result has been achieved in a particular area
of interest. For example, the rate of formal school qualifications helps quantify whether students are succeeding
at school. Indicators are used at different levels of the education system for different purposes.
At the national level, they provide a means of evaluating how well the system is performing in particular areas of
policy interest, for example: education and learning outcomes, student engagement and participation, family and
community engagement, and resourcing. This information is supplemented by a range of demographic and
contextual data and by ERO’s national reports on education issues and effective education practice.
Key Performance Indicators (KPIs)
KPIs in education
A key performance indicator (KPI) is a type of performance measurement that helps you understand how your
organization, department, or institution is performing and allows you to understand if you're headed in the right
direction with your strategy.
Here are the 5 Key Indicators of School Performance:
Ø Student Achievement
Ø Discipline Referrals
Ø Attendance Rates
Ø Graduation Rates
Ø Teacher Satisfaction
Key Performance Indicators (KPIs) are the elements of your plan that express what you want to achieve
by when. They are the quantifiable, outcome-based statements you'll use to measure if you're on track to
meet your goals or objectives. Good plans use 5-7 KPIs to manage and track the progress of their plan.
Differences
Conceptualization VS Operationalization
Ø Conceptualization is the process of defining or specifying concepts; operationalization is the process by which
a researcher precisely specifies how a concept will be measured.
Ø Conceptualization involves defining or specifying what we mean when using certain terms; operationalization
involves developing specific research definitions that will bring about empirical observations representing those
concepts in the real world.
Ø The main purpose of conceptualization is refining and specifying abstract concepts; the main purpose of
operationalization is removing vagueness and making sure that concepts are measurable.
Ø Conceptualization is the first step in the measurement process; operationalization is the second step.
Why do we need to study conceptualization, operationalization, measurement?
Research is always based on reliable data and the methods used to capture this data. Scientific methods
facilitate this process to obtain quality output in research. Formulation of research problem is the first step to
begin with research. It is at this stage, the researcher should have a clear understanding of the words and terms
used in the research such that there are no conflicts arising later regarding their interpretation and
measurements. This necessitates the understanding of the conceptualization process.
For many fields, such as social science, which often use ordinal measurements, operationalization is essential. It
determines how the researchers are going to measure an emotion or concept, such as the level of distress or
aggression.
Measure is important in research. Measure aims to ascertain the dimension, quantity, or capacity of the
behaviors or events that researchers want to explore. Thus, researchers can interpret the data with
quantitative conclusion which leads to more accurate and standardized outcomes.
CHARACTERISTICS OF AN INDEX
Both Babbie (2011:169) and Spector (1992:1) make reference to various characteristics of index
variables.
• Firstly, an index is derived from multiple items. This means that the items are summated or combined,
thereby converting a specific procedure into a single measurement or scale.
• Secondly, the individual items that form the basis of the index measure something that is underlying,
quantitative and on a measurement continuum. Index variables are therefore typically ordinal in nature.
• Thirdly, an answer or response to an item cannot be classified in terms of ‘right’ or ‘wrong’. An index
variable therefore constitutes a scale measurement that is indicative of some hypothetical construct that can
typically not be measured by a single question or item. Higher index values might indicate ‘more of’ the construct
and lower values ‘less of’ it, with neither being ‘right’ or ‘wrong’.
• Lastly, a good index is evaluated in terms of its reliability and validity. Both these aspects are
considered as part of the last step in index construction.
Index Scoring
The third step in index construction is scoring the index. After you have finalized the items you are including in
your index, you then assign scores for particular responses, thereby making a composite variable out of your
several items. For example, let’s say you are measuring religious ritual participation among Catholics and the
items included in your index are church attendance, confession, communion, and daily prayer, each with a
response choice of "yes, I regularly participate" or "no, I do not regularly participate." You might assign a 0 for
"does not participate" and a 1 for "participates." Therefore, a respondent could receive a final composite score of
0, 1, 2, 3, or 4 with 0 being the least engaged in Catholic rituals and 4 being the most engaged.
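The scoring step above can be sketched as follows. The item names mirror the example; the helper function is ours.

```python
# Composite index from four yes/no ritual-participation items
# (1 = regularly participates, 0 = does not).
ITEMS = ["church_attendance", "confession", "communion", "daily_prayer"]

def ritual_index(responses):
    """Sum the 0/1 item scores into a composite score from 0 to 4."""
    return sum(responses[item] for item in ITEMS)

respondent = {"church_attendance": 1, "confession": 0,
              "communion": 1, "daily_prayer": 1}
print(ritual_index(respondent))  # → 3
```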
Index Scoring
• After finalizing the items to be included, scores/weights are assigned for particular responses, thereby
making a composite variable out of the several items.
• Unweighted aggregate index - each item score is weighted equally.
• Multivariate statistical techniques, such as exploratory factor analysis and principal component analysis
could be considered in the construction of the index.
• Both methods work by assigning different weights to items through the calculation of factor scores.
• Weight assignment can be done in 4 ways:
Ø equal weights among items;
Ø theoretically categorized weights;
Ø schematic weights; and
Ø variable weights.
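The difference between an unweighted aggregate index and a weighted one can be sketched with hypothetical item scores and theoretically assigned weights:

```python
# Three item scores on the same 0-4 continuum (hypothetical data).
scores = [3, 2, 4]

# Unweighted aggregate index: each item counts equally.
unweighted = sum(scores)

# Weighted aggregate: theoretically categorized weights summing to 1.
weights = [0.5, 0.3, 0.2]
weighted = sum(s * w for s, w in zip(scores, weights))

print(unweighted, round(weighted, 2))  # → 9 2.9
```

In practice the weights would come from theory or from factor scores produced by techniques such as principal component analysis, not be chosen by hand as here.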
Index Validation
The final step in constructing an index is validating it. Just like you need to validate each item that goes into the
index, you also need to validate the index itself to make sure that it measures what it is intended to measure.
There are several methods for doing this. One is called item analysis in which you examine the extent to which
the index is related to the individual items that are included in it. Another important indicator of an index’s validity
is how well it accurately predicts related measures. For example, if you are measuring political conservatism,
those who score the most conservative in your index should also score conservative in other questions included
in the survey.
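Item analysis can be sketched by correlating each item with the composite total; items that correlate weakly with the total may not belong in the index. The 0/1 response data below is invented for illustration.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical index data: rows = respondents, columns = three 0/1 items.
data = [[1, 1, 0],
        [0, 0, 0],
        [1, 1, 1],
        [0, 1, 0],
        [1, 0, 1]]
totals = [sum(row) for row in data]

# Item-total correlation for each item in the index.
for j in range(3):
    item = [row[j] for row in data]
    print(f"item {j}: r = {pearson(item, totals):+.2f}")
```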
Broad aims and objectives envisaged for the construction of a specific index
Ø Aim one: To construct an index that is a measure of a specific construct. In other words, to present an index
that is one-dimensional.
Ø Aim two: To construct an index using a combination of variables that could measure the construct better than
any single variable.
Ø Aim three: To construct an index that is a direct measure of the construct, based on non-monetary descriptive
indicators.
Ø Aim four: Items identified for the construction of the index should, on face value, relate to the construct being
measured. This suggests that secondary data can be utilised as source for the construction of the index, given
that the data is evaluated to be valid in the context of the study.
Ø Aim five: To present an index that is reliable and valid. In other words, the commercial farming sophistication
index should measure what it is supposed to measure. This relates to construct validity, which can be
decomposed into the assessments of convergent, discriminant and nomological validity. In addition, it should
provide scores that are consistent across repeated measures. This relates to the reliability of measurement.
Ø Aim six: To construct an index that has broad application value across the full spectrum of the market,
allowing for sub-group analysis, including examination of individual groups and comparisons between groups.
Ø Aim seven: To present a measurement process that is useful in future surveys from which separate samples
are drawn. In other words, the calculation of index scores should be a simple procedure, and easily replicated by
a wide range of researchers and survey practitioners across other surveys conducted in the market. Scores
should be readily interpretable.
Ø Aim eight: To construct an index that is stable over time, but sensitive enough to register changes. In other
words, to provide scores that would make trend analysis possible.
Ø Aim nine: To present a standard set of index score intervals that segments the market. These intervals will
provide a practical and standardised procedure that other researchers can follow in future to segment the market.
Scales in Research
A scale is a type of composite measure that is composed of several items that have a logical or empirical
structure among them. In other words, scales take advantage of differences in intensity among the indicators of a
variable. The most commonly used scale is the Likert scale, which contains response categories such as
"strongly agree," "agree," "disagree," and "strongly disagree." Other scales used in social science research
include the Thurstone scale, Guttman scale, Bogardus social distance scale, and the semantic differential scale.
For example, a researcher interested in measuring prejudice against women could use a Likert scale to do so.
The researcher would first create a series of statements reflecting prejudiced ideas, each with the response
categories of "strongly agree," "agree," "neither agree nor disagree," "disagree," and "strongly disagree." One of
the items might be "women shouldn’t be allowed to vote," while another might be "women can’t drive as well as
men." We would then assign each of the response categories a score of 0 to 4 (0 for "strongly disagree," 1 for
"disagree," 2 for "neither agree or disagree," etc.). The scores for each of the statements would then be added for
each respondent to create an overall score of prejudice. If a respondent answered "strongly agree" to five
statements expressing prejudiced ideas, his or her overall prejudice score would be 20, indicating a very high
degree of prejudice against women.
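The scoring just described can be sketched as follows; the response coding follows the example, while the function name is ours.

```python
# Code the five Likert response categories 0-4 and sum across items.
CODES = {"strongly disagree": 0, "disagree": 1,
         "neither agree nor disagree": 2, "agree": 3, "strongly agree": 4}

def prejudice_score(answers):
    """Sum coded responses; five items give a total from 0 to 20."""
    return sum(CODES[a] for a in answers)

# A respondent who strongly agrees with all five prejudiced statements:
print(prejudice_score(["strongly agree"] * 5))  # → 20
```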
Ordinal Scale
The ordinal scale is the 2nd level of measurement that reports the ordering and ranking of data without
establishing the degree of variation between them. Ordinal represents the “order.” Ordinal data is known as
qualitative data or categorical data. It can be grouped, named and also ranked.
Characteristics of the Ordinal Scale:
• The ordinal scale shows the relative ranking of the variables
• It identifies and describes the magnitude of a variable
• Along with the information provided by the nominal scale, ordinal scales give the rankings of those variables
• The interval properties are not known
• The surveyors can quickly analyse the degree of agreement concerning the identified order of variables
Example:
• Ranking of school students – 1st, 2nd, 3rd, etc.
• Ratings in restaurants
• Evaluating the frequency of occurrences
• Very often
• Often
• Not often
• Not at all
• Assessing the degree of agreement
• Totally agree
• Agree
• Neutral
• Disagree
• Totally disagree
Interval Scale
The interval scale is the 3rd level of measurement scale. It is defined as a quantitative measurement scale in
which the difference between two values is meaningful. The variables are measured on a common numerical
scale, but the zero point is arbitrary rather than absolute.
Characteristics of Interval Scale:
• The interval scale is quantitative as it can quantify the difference between the values
• It allows calculating the mean and median of the variables
• To understand the difference between the variables, you can subtract the values between the variables
• The interval scale is the preferred scale in Statistics as it helps to assign numerical values to arbitrary
assessments such as feelings, calendar types, etc.
Example:
• Likert Scale
• Net Promoter Score (NPS)
• Bipolar Matrix Table
Ratio Scale
The ratio scale is the 4th level of measurement scale, which is quantitative. It is a type of variable measurement
scale. It allows researchers to compare the differences or intervals. The ratio scale has a unique feature. It
possesses the character of the origin or zero points.
Characteristics of Ratio Scale:
• Ratio scale has a feature of absolute zero
• It doesn’t have negative numbers, because of its zero-point feature
• It affords unique opportunities for statistical analysis. The variables can be orderly added, subtracted,
multiplied, divided. Mean, median, and mode can be calculated using the ratio scale.
• Ratio scale has unique and useful properties. One such feature is that it allows meaningful unit conversions,
for example between kilograms and grams or between calories and kilocalories.
Example:
An example of a ratio scale is:
What is your weight in Kgs?
• Less than 55 kgs
• 55 – 75 kgs
• 76 – 85 kgs
• 86 – 95 kgs
• More than 95 kgs
Conclusion
An index is a way of compiling one score from a variety of questions or statements that represents a belief,
feeling, or attitude. Scales, on the other hand, measure levels of intensity at the variable level, like how much a
person agrees or disagrees with a particular statement.
A scale is an index that in some sense only measures one thing. For example, a final exam in a given course
could be thought of as a scale: it measures competence in a single subject. In contrast, a person's GPA can be
thought of as an index: it is a combination of a number of separate, independent competencies.
Identity
Identity refers to the assignment of numbers to the values of each variable in a data set. Consider a
questionnaire that asks for a respondent's gender with the options Male and Female for instance. The values 1
and 2 can be assigned to Male and Female respectively.
Arithmetic operations cannot be performed on these values because they are just for identification purposes.
This is a characteristic of a nominal scale.
Magnitude
The magnitude is the size of a measurement scale, where numbers (the identity) have an inherent order from
least to highest. They are usually represented on the scale in ascending or descending order. The position in a
race, for example, is arranged from the 1st, 2nd, 3rd to the least.
This example is measured on an ordinal scale because it has both identity and magnitude.
Equal intervals
Equal Intervals means that the scale has a standardized order. I.e., the difference between each level on the
scale is the same. This is not the case for the ordinal scale example highlighted above.
Each position does not have an equal interval difference. In a race, the 1st position may complete the race in 20
secs, 2nd position in 20.8 seconds while the 3rd in 30 seconds.
A variable that has an identity, magnitude, and the equal interval is measured on an interval scale.
Absolute zero
Absolute zero is a feature that is unique to a ratio scale. It means that there is an existence of zero on the scale,
and is defined by the absence of the variable being measured (e.g. no qualification, no money, does not identify
as any gender, etc.).
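Because the four properties are cumulative, a variable's level of measurement can be sketched as a simple classifier (the helper function is hypothetical):

```python
# Classify a variable's level of measurement from its four properties.
def measurement_level(identity, magnitude, equal_intervals, absolute_zero):
    if not identity:
        return "unmeasured"
    if not magnitude:
        return "nominal"        # identity only
    if not equal_intervals:
        return "ordinal"        # identity + magnitude
    if not absolute_zero:
        return "interval"       # identity + magnitude + equal intervals
    return "ratio"              # all four properties

print(measurement_level(True, True, False, False))  # → ordinal
print(measurement_level(True, True, True, True))    # → ratio
```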
Comparative Scales
In comparative scaling, respondents are asked to make a comparison between one object and the other. When
used in market research, customers are asked to evaluate one product in direct comparison to the others.
Comparative scales can be further divided into the pair comparison, rank order, constant sum and q-sort scales.
Non-Comparative Scales
In non-comparative scaling, customers are asked to only evaluate a single object. This evaluation is totally
independent of the other objects under investigation. Sometimes called a monadic or metric scale, a
non-comparative scale can be further divided into continuous and itemized rating scales.
Continuous Rating Scale
In a continuous rating scale, respondents are asked to rate the objects by placing a mark appropriately on a line
running from one extreme of the criterion variable to the other. Also called the graphic rating scale, it gives the
respondent the freedom to place the mark anywhere based on personal preference. Once the ratings are
obtained, the researcher splits the line into several categories and then assigns scores depending on the
category in which each rating falls. This rating can be visualized in both horizontal and vertical form.
Although easy to construct, the continuous rating scale has some major setbacks, giving it limited usage in
market research.
Conclusion
In a nutshell, scales of measurement refers to the various measures used in quantifying the variables
researchers use in performing data analysis. They are an important aspect of research and statistics because the
level of data measurement is what determines the data analysis technique to be used.
Understanding the concept of scales of measurement is a prerequisite to working with data and performing
statistical analysis. Because the different measurement scales share some properties, it is important to examine
the data carefully and determine its measurement scale before choosing a technique to use for analysis.
A number of scaling techniques are available for the measurement of the same measurement scale. Therefore,
there is no unique way of selecting a scaling technique for research purposes.
Typology
Typologies are well-established analytic tools in the social sciences. They can be “put to work” in forming
concepts, refining measurement, exploring dimensionality, and organizing explanatory claims. Yet some critics,
basing their arguments on what they believe are relevant norms of quantitative measurement, consider
typologies old-fashioned and unsophisticated. This critique is methodologically unsound, and research based on
typologies can and should proceed according to high standards of rigor and careful measurement. These
standards are summarized in guidelines for careful work with typologies, and an illustrative inventory of
typologies, as well as a brief glossary, are included online.
Typologies Critique
As with stereotypes, typologies do not accurately reflect anyone and provide oversimplifications of everyone.
One should use typologies mainly to organize one’s thinking as part of exploratory research. It is extremely
difficult to analyze a typology as a dependent variable because too much variation exists within each category.
Perhaps even more important is the fact that action research helps educators be more effective at
what they care most about—their teaching and the development of their students. Seeing students
grow is probably the greatest joy educators can experience. When teachers have convincing
evidence that their work has made a real difference in their students' lives, the countless hours and
endless efforts of teaching seem worthwhile.
(a) helps teachers develop new knowledge directly related to their classrooms,
(e) reinforces the link between practice and student achievement,(f) fosters an openness toward
new ideas and learning new things, and (g) gives teachers ownership of effective practices.
Moreover, action research workshops can be used to replace traditional, ineffective teacher in-service
training (Barone et al., 1996) as a means for professional development activities (Johnson,
2012). To be effective, teacher in-service training needs to be extended over multiple sessions,
contain active learning to allow teachers to manipulate the ideas and enhance their assimilation of
the information, and align the concepts presented with the current curriculum, goals, or teaching
concerns (Johnson, p. 22). Therefore, providing teachers with the necessary skills, knowledge, and
focus to engage in meaningful inquiry about their professional practice will enhance this practice,
and effect positive changes concerning the educative goals of the learning community.
The action research process can help you understand what is happening in your classroom and
identify changes that improve teaching and learning. Action research can help answer questions
you have about the effectiveness of specific instructional strategies, the performance of specific
students, and classroom management techniques.
Educational research often seems removed from the realities of the classroom. For many
classroom educators, formal experimental research, including the use of a control group, seems to
contradict the mandate to improve learning for all students. Even quasi-experimental research with
no control group seems difficult to implement, given the variety of learners and diverse learning
needs present in every classroom. Action research gives you the benefits of research in the
classroom without these obstacles. Believe it or not, you are probably doing some form of research
already. Every time you change a lesson plan or try a new approach with your students, you are
engaged in trying to figure out what works. Even though you may not acknowledge it as formal
research, you are still investigating, implementing, reflecting, and refining your approach.
Qualitative research acknowledges the complexity of the classroom learning environment. While
quantitative research can help us see that improvements or declines have occurred, it does not
help us identify the causes of those improvements or declines. Action research provides qualitative
data you can use to adjust your curriculum content, delivery, and instructional practices to improve
student learning. Action research helps you implement informed change!
The term “action research” was coined by Kurt Lewin in 1944 to describe a process of investigation
and inquiry that occurs as action is taken to solve a problem. Today we use the term to describe a
practice of reflective inquiry undertaken with the goal of improving understanding and practice.
You might consider “action” to refer to the change you are trying to implement and “research” to
refer to your improved understanding of the learning environment.
Action research also helps you take charge of your personal professional development. As you
reflect on your own actions and observe other master teachers, you will identify the skills and
strategies you would like to add to your own professional toolbox. As you research potential
solutions and are exposed to new ideas, you will identify the skills, management, and instructional
training needed to make the changes you want to see.
Learning to develop the right questions takes time. Your ability to identify these key
questions will improve with each iteration of the research cycle. You want to select a
question that isn’t so broad it is almost impossible to answer or so narrow that the only answer is
yes or no. Choose questions that can be answered within the context of your daily teaching. In
other words, choose a question that is both answerable and worthy of the time investment
required to learn the answer.
Questions you could ask might involve management issues, curriculum implementation,
instructional strategies, or specific student performance. For example, you might consider:
• Will increasing the amount of feedback I provide improve students’ writing skills?
Before you can start collecting data, you need to have a clear vision of what success looks like.
Start by brainstorming words that describe the change you want to see. What strategies do you
already know that might help you get there? Which of these ideas do you think might work better
than what you are currently doing?
To find out if a new instructional strategy is worth trying, conduct a review of literature. This
doesn’t have to mean writing up a formal lit review like you did in graduate school. The important
thing is to explore a range of articles and reports on your topic and capitalize on the research and
experience of others. Your classroom responsibilities are already many and may be overwhelming.
A review of literature can help you identify useful strategies and locate information that helps you
justify your action plan.
The Web makes literature reviews easier to accomplish than ever before. Even if the full text of an
article, research paper, or abstract is not available online, you will be able to find citations to help
you locate the source materials at your local library. Collect as much information on your problem
as you can find. As you explore the existing literature, you will certainly find solutions and
strategies that others have implemented to solve this problem. You may want to create a visual
map or a table of your problems and target performances with a list of potential solutions and
supporting citations in the middle.
How can you implement these techniques? Translate these solutions into concrete
steps you can and will take in your classroom. Write a description of how you will implement each
idea and the time frame in which you will do it.
Once you have a clear vision of a potential solution to the problem, explore factors you think might
be keeping you and your students from your vision of success. Recognize and accept those factors
you do not have the power to change–they are the constants in your equation. Focus your attention
on the variables–the parts of the formula you believe your actions can impact.
Develop a plan that shows how you will implement your solution and how your behavior,
management style, and instruction will address each of the variables.
Sometimes an action research cycle simply helps you identify variables you weren’t even aware of,
so you can better address your problem during the next cycle!
Collect Data
Before you begin to implement your plan of action, you need to determine what data will help you
understand if your plan succeeds, and how you will collect that data. Your target performances will
help you determine what you want to achieve. What results or other indicators will help you know if
you achieved it? For example, if your goal is improved attendance, data can easily be collected
from your attendance records. If the goal is increased time on task, the data may include
classroom and student observations.
There are many options for collecting data. Choosing the best methodologies for collecting
information will result in more accurate, meaningful, and reliable data.
Obvious sources of data include observation and interviews. As you observe, you will want to type
or write notes or dictate your observations into a cell phone, iPod, or PDA. You may want to keep a
journal during the process, or even create a blog or wiki to practice your technology skills as you
collect data.
Reflective journals are often used as a source of data for action research. You can also collect
meaningful data from other records you deal with daily, including attendance logs, grade reports,
and student portfolios. You could distribute questionnaires, watch videotapes of your classroom,
and administer surveys. Examples of student work are also performances you can evaluate to see
if your goal is being met.
Create a plan for data collection and follow it as you perform your research. If you are going to
interview students or other teachers, how many times will you do it? At what times during the day?
How will you ensure your respondents are representative of the student population you are
studying, including gender, ability level, experience, and expertise?
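One informal way to keep interview respondents representative is stratified sampling: group students by a relevant attribute and draw the same number from each group. The following is a minimal Python sketch under that assumption; the roster, names, and ability groupings are invented for illustration, not drawn from any real class.

```python
import random

# Hypothetical roster: each student is tagged with an attribute
# relevant to the study (here, ability level).
roster = [
    {"name": "A", "ability": "high"}, {"name": "B", "ability": "high"},
    {"name": "C", "ability": "middle"}, {"name": "D", "ability": "middle"},
    {"name": "E", "ability": "middle"}, {"name": "F", "ability": "emerging"},
    {"name": "G", "ability": "emerging"}, {"name": "H", "ability": "emerging"},
]

def stratified_sample(students, key, per_group):
    """Pick the same number of interviewees from each group so that no
    subgroup is over- or under-represented in the interview sample."""
    groups = {}
    for s in students:
        groups.setdefault(s[key], []).append(s)
    sample = []
    for members in groups.values():
        sample.extend(random.sample(members, min(per_group, len(members))))
    return sample

interviewees = stratified_sample(roster, key="ability", per_group=1)
print([s["ability"] for s in interviewees])
```

The same grouping idea works for gender, experience, or any other attribute in your plan; the point is simply that the sample is chosen by rule rather than by convenience.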
Your plan will help you ensure that you have collected data from many different sources. Each
source of data provides additional information that will help you answer the questions in your
research plan.
You may also want to have students collect data on their own learning. Not only does this provide
you with additional research assistants, it empowers students to take control of their own learning.
As students keep a journal during the process, they are also reflecting on the learning environment
and their own learning process.
Analyze the Data
Analyzing the data also helps you reflect on what actually happened. Did you achieve the
outcomes you were hoping for? Were you able to carry out your actions as planned? Were any of
your assumptions about the problem incorrect?
Adding data such as opinions, attitudes, and grades to tables can help you identify trends
(relationships and correlations). For example, if you are completing action research to determine if
project-based learning is impacting student motivation, graphing attendance and disruptive
behavior incidents may help you answer the question. A graph that shows an increase in
attendance and a decrease in the number of disruptive incidents over the implementation period
would lead you to believe that motivation was improved.
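A trend like this can be checked without any special software by computing a simple least-squares slope over the weekly numbers. Here is a sketch in Python; the weekly attendance and incident figures below are invented for illustration, not real classroom data.

```python
import statistics

# Hypothetical weekly data collected during a project-based learning unit:
# attendance rate (%) and count of disruptive incidents, one value per week.
attendance = [88, 90, 91, 93, 95, 96]
incidents = [7, 6, 6, 4, 3, 2]

def slope(values):
    """Least-squares slope of the values against week number (0, 1, 2, ...).
    Positive means the measure is rising; negative means it is falling."""
    weeks = range(len(values))
    mean_w = statistics.mean(weeks)
    mean_v = statistics.mean(values)
    num = sum((w - mean_w) * (v - mean_v) for w, v in zip(weeks, values))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

print(f"Attendance slope: {slope(attendance):+.2f} points/week")
print(f"Incident slope:   {slope(incidents):+.2f} incidents/week")
```

A positive attendance slope together with a negative incident slope over the implementation period supports, but does not prove, the motivation hypothesis; triangulate it with your journal entries and observation notes.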
Draw tentative conclusions from your analysis. Since the goal of action research is positive change,
you want to try to identify specific behaviors that move you closer to your vision of success. That
way you can adjust your actions to better achieve your goal of improved student learning.
Action research is an iterative process. The data you collect and your analysis of it will affect how
you approach the problem and implement your action plan during the next cycle.
Even as you begin drawing conclusions, continue collecting data. This will help you confirm your
conclusions or revise them in light of new information. While you can plan how long and often you
will collect data, you may also want to continue collecting until the trends have been identified and
new data becomes redundant.
As you are analyzing your data and drawing conclusions, share your findings. Discussing your
results with another teacher can often yield valuable feedback. You might also share your findings
with your students, who can add further insight. If they agree with your conclusions, you
have added credibility to your data collection plan and analysis. If they disagree, you will know to
reevaluate your conclusions or refine your data collection plan.
You can report your findings in many different ways. You most certainly will want to share the
experience with your students, parents, teachers, and principal. Provide them with an overview of
the process and share highlights from your research journal. Because each of these audiences is
different, you will need to adjust the content and delivery of the information each time you share.
You may also want to present your process at a conference so educators from other districts can
benefit from your work.
As your skill with the action research cycle gets stronger, you may want to develop an abstract and
submit an article to an educational journal. To write an abstract, state the problem you were trying
to solve, describe your context, detail your action plan and methods, summarize your findings,
state your conclusions, and explain your revised action plan.
If your question focused on the implementation of an action plan to improve the performance of a
particular student, what better way to show the process and results than through digital
storytelling? Using a tool like Wixie, you can share images, audio, artifacts and more to show the
student’s journey. Action research is outside-the-box thinking… so find similarly unique ways to
report your findings!
In Summary
All teachers want to reach their students more effectively and help them become better learners
and citizens. Action research provides a reflective process you can use to implement changes in
your classroom and determine if those changes result in the desired outcome.
The components put into an action research report largely coincide with the steps used in the
action research process. This process usually starts with a question or an observation about a
current problem. After identifying the problem area and narrowing it down to make it more
manageable for research, the development process continues as you devise an action plan to
investigate your question. This will involve gathering data and evidence to support your solution.
Common data collection methods include observation of individual or group behavior, taking audio
or video recordings, distributing questionnaires or surveys, conducting interviews, asking for peer
observations and comments, taking field notes, writing journals, and studying the work samples of
your own and your target participants. You may choose to use more than one of these data
collection methods. After you have selected your method and are analyzing the data you have
collected, you will also reflect upon your entire process of action research. You may have a better
solution to your question now, given the additional evidence available. You may also think
about the steps you will try next, or decide that the practice needs to be observed again with
modifications. If so, the whole action research process starts all over again.
In brief, action research is a cyclical process, with the reflection upon your action and
research findings affecting changes in your practice, which may lead to extended questions and
further action. This brings us back to the essential steps of action research: identifying the
problem, devising an action plan, implementing the plan, and finally, observing and reflecting upon
the process. Your action research report should comprise all of these essential steps. Feldman and
Weiss (n.d.) summarized them as five structural elements, which do not have to be written in a
particular order. Your report should:
• Describe the context where the action research takes place. This could be, for example, the
school in which you teach. Both features of the school and the population associated with it (e.g.,
students and parents) would be illustrated as well.
• Contain a statement of your research focus. This would explain where your research
questions come from, the problem you intend to investigate, and the goals you want to achieve.
You may also mention prior research studies you have read that are related to your action research
study.
• Detail the method(s) used. This part includes the procedures you used to collect data, types
of data in your report, and justification of your used strategies.
• Highlight the research findings. This is the part in which you observe and reflect upon your
practice. By analyzing the evidence you have gathered, you will come to understand whether the
initial problem has been solved or not, and what research you have yet to accomplish.
• Suggest implications. You may discuss how the findings of your research will affect your
future practice, or explain any new research plans you have that have been inspired by this
report’s action research.
The overall structure of your paper will actually look more or less the same as what we commonly
see in traditional research papers.
Peters and Waterman (1982) in their landmark book, In Search of Excellence, called the
achievement of focus “sticking to the knitting.” When a faculty shares a commitment to achieving
excellence with a specific focus—for example, the development of higher-order thinking, positive
social behavior, or higher standardized test scores—then collaboratively studying their practice will
not only contribute to the achievement of the shared goal but will also have a powerful impact on
team building and program development. Focusing the combined time, energy, and creativity of a
group of committed professionals on a single pedagogical issue will inevitably lead to program
improvements, as well as to the school becoming a “center of excellence.” As a result, when a
faculty chooses to focus on one issue and all the teachers elect to enthusiastically participate in
action research on that issue, significant progress on the schoolwide priorities cannot help but
occur.
Building Professional Cultures
Often an entire faculty will share a commitment to student development, yet the group finds itself
unable to adopt a single common focus for action research. This should not be viewed as indicative
of a problem. Just as the medical practitioners working at a “quality” medical center will hold a
shared vision of a healthy adult, it is common for all the faculty members at a school to share a
similar perspective on what constitutes a well-educated student. However, like the doctors at the
medical center, the teachers in a “quality” school may well differ on which specific aspects of the
shared vision they are most motivated to pursue at any point in time.
Schools whose faculties cannot agree on a single research focus can still use action research as a
tool to help transform themselves into a learning organization. They accomplish this in the same
manner as do the physicians at the medical center. It is common practice in a quality medical
center for physicians to engage in independent, even idiosyncratic, research agendas. However, it
is also common for medical researchers to share the findings obtained from their research with
colleagues (even those engaged in other specialties).
School faculties who wish to transform themselves into “communities of learners” often empower
teams of colleagues who share a passion about one aspect of teaching and learning to conduct
investigations into that area of interest and then share what they've learned with the rest of the
school community. This strategy allows an entire faculty to develop and practice the discipline that
Peter Senge (1990) labeled “team learning.” In these schools, multiple action research inquiries
occur simultaneously, and no one is held captive to another's priority, yet everyone knows that all
the work ultimately will be shared and will consequently contribute to organizational learning.
If ever there were a time and a strategy that were right for each other, the time is now and the
strategy is action research! This is true for a host of reasons, with none more important than the
need to accomplish the following:
• Professionalize teaching.
• Students who are experiencing difficulties in learning may benefit from the administration of
a diagnostic test, which can detect learning issues such as reading comprehension
problems, an inability to remember written or spoken words, hearing or speech difficulties, and
problems with hand–eye coordination.
• Students generally complete a summative assessment after completing the study of a topic.
The teacher can determine their level of achievement and provide them with feedback on their
strengths and weaknesses. For students who didn’t master the topic or skill, teachers can use data
from the assessment to create a plan for remediation.
• Teachers may also want to use informal assessment techniques. Using self-assessment,
students express what they think about their learning process and what they should work on. Using
peer assessment, students get information from their classmates about what areas they should
revise and what areas they’re good at.
Assessment for learning is ongoing assessment that allows teachers to monitor students on a
day-to-day basis and modify their teaching based on what the students need to be successful. This
assessment provides students with the timely, specific feedback that they need to make
adjustments to their learning.
After teaching a lesson, we need to determine whether the lesson was accessible to all students
while still challenging to the more capable; what the students learned and still need to know; how
we can improve the lesson to make it more effective; and, if necessary, what other lesson we
might offer as a better alternative.
This continual evaluation of instructional choices is at the heart of improving our teaching
practice (Burns, 2005).
Assessment for learning:
• Checks learning to determine what to do next and then provides suggestions of what to
do—teaching and learning are indistinguishable from assessment.
• Is designed to assist educators and students in improving learning.
• Is used continually by providing descriptive feedback.
• Usually uses detailed, specific and descriptive feedback—in a formal or informal report.
• Is not reported as part of an achievement grade.
• Usually focuses on improvement, compared with the student's "previous best"
(self-referenced, making learning more personal).
• Involves the student.
Assessment of learning:
• Checks what has been learned to date.
• Is designed for the information of those not directly involved in daily learning and teaching
(school administration, parents, school board, Alberta Education, post-secondary institutions)
in addition to educators and students.
• Is presented in a periodic report.
• Usually compiles data into a single number, score or mark as part of a formal report.
• Is reported as part of an achievement grade.
• Usually compares the student's learning either with other students' learning
(norm-referenced, making learning highly competitive) or with the standard for a grade level
(criterion-referenced, making learning more collaborative and individually focused).
• Does not always involve the student.
Assessment as Learning
Assessment as learning develops and supports students' metacognitive skills. This form of
assessment is crucial in helping students become lifelong learners. As students engage in peer and
self-assessment, they learn to make sense of information, relate it to prior knowledge and use it for
new learning. Students develop a sense of ownership and efficacy when they use teacher, peer and
self-assessment feedback to make adjustments, improvements and changes to what they
understand.
It is important to determine how the data will be collected and who will be responsible for data
collection.
Results are always reported in aggregate format to protect the confidentiality of the students
assessed.
Step 4: Adjust or improve programs following the results of the learning outcomes assessed
Assessment results are worthless if they are not used. This is a critical step of the
assessment process. The process has failed if the results do not lead to adjustments or
improvements in programs. The results of assessments should be disseminated widely to faculty in
the department in order to seek their input on how to improve programs from the assessment
results. In some instances, changes will be minor and easy to implement. In other instances,
substantial changes will be necessary and recommended and may require several years to be fully
implemented.
Teachers’ Roles in Assessment of Learning
Because the consequences of assessment of learning are often far-reaching and affect students
seriously, teachers have the responsibility of reporting student learning accurately and fairly,
based on evidence obtained from a variety of contexts and applications. Effective assessment of
learning requires that teachers provide
• processes that make it possible for students to demonstrate their competence and skill
The purpose of assessment of learning is to measure, certify, and report the level of students’
learning, so that reasonable decisions can be made about students. There are many potential users
of the information:
• teachers (who can use the information to communicate with parents about their children’s
proficiency and progress)
• parents and students (who can use the results for making educational and vocational decisions)
• potential employers and post-secondary institutions (who can use the information to make
decisions about hiring or acceptance)
• principals, district or divisional administrators, and teachers (who can use the information to
review and revise programming)
What am I assessing?
Assessment of learning requires the collection and interpretation of information about students’
accomplishments in important curricular areas, in ways that represent the nature and complexity
of the intended learning. Because genuine learning for understanding is much more than just
recognition or recall of facts or algorithms, assessment of learning tasks need to enable students to
show the complexity of their understanding. Students need to be able to apply key concepts,
knowledge, skills, and attitudes in ways that are authentic and consistent with current thinking in
the knowledge domain.
In assessment of learning, the methods chosen need to address the intended curriculum outcomes
and the continuum of learning that is required to reach the outcomes. The methods must allow all
students to show their understanding and produce sufficient information to support credible and
defensible statements about the nature and quality of their learning, so that others can use the
results in appropriate ways. Assessment of learning methods include not only tests and
examinations, but also a rich variety of products and demonstrations of learning—portfolios,
exhibitions, performances, presentations, simulations, multimedia projects, and a variety of other
written, oral, and visual methods.
What assessment method should I use?
Assessment of learning needs to be very carefully constructed so that the information upon which
decisions are made is of the highest quality. Assessment of learning is designed to be summative,
and to produce defensible and accurate descriptions of student competence in relation to defined
outcomes and, occasionally, in relation to other students’ assessment results. Certification of
students’ proficiency should be based on a rigorous, reliable, valid, and equitable process of
assessment and evaluation.
Reliability
Reliability in assessment of learning depends on how
accurate, consistent, fair, and free from bias and distortion the assessment is.
• Was the information collected in a way that gives all students an equal chance to show their
learning?
• Would I make the same decision if I considered this information at another time or in another
way?
Reference Points
Typically, the reference points for assessment of learning are the learning
outcomes as identified in the curriculum that make up the course of study. Assessment tasks
include measures of these learning outcomes, and a student’s performance is interpreted and
reported in relation to these learning outcomes. In some situations where selection decisions need
to be made for limited positions (e.g., university entrance, scholarships, employment
opportunities), assessment of learning results are used to rank students. In such norm-referenced
situations, what is being measured needs to be clear, and the way it is being measured needs to be
transparent to anyone who might use the assessment results.
Validity
Because assessment of
learning results in statements about students’ proficiency in wide areas of study, assessment of
learning tasks must reflect the key knowledge, concepts, skills, and dispositions set out in the
curriculum, and the statements and inferences that emerge must be upheld by the evidence
collected.
Record-Keeping
Whichever approaches teachers choose for assessment of learning, it is their records that provide
details about the quality of the measurement. Detailed records of the various components of the
assessment of learning are essential, with a description of what each component measures, with
what accuracy and against what criteria and reference points, and should include supporting
evidence related to the outcomes as justification. When teachers keep records that are detailed
and descriptive, they are in an excellent position to provide meaningful reports to parents and
others. Merely a symbolic representation of a student’s accomplishments (e.g., a letter grade or
percentage) is inadequate. Reports to parents and others should identify the intended learning that
the report covers, the assessment methods used to gather the supporting information, and the
criteria used to make the judgement.
Feedback to Students Because assessment of learning comes most often at the end of a unit or
learning cycle, feedback to students has a less obvious effect on student learning than assessment
for learning and assessment as learning. Nevertheless, students do rely on their marks and on
teachers’ comments as indicators of their level of success, and to make decisions about their
future learning endeavours.
Differentiating Learning
In assessment of learning, differentiation occurs in the assessment itself. It would make little sense
to ask a near-sighted person to demonstrate driving proficiency without glasses. When the driver
uses glasses, it is possible for the examiner to get an accurate picture of the driver’s ability, and to
certify him or her as proficient. In much the same way, differentiation in assessment of learning
requires that the necessary accommodations be in place that allow students to make the particular
learning visible. Multiple forms of assessment offer multiple pathways for making student learning
transparent to the teacher. A particular curriculum outcome requirement, such as an
understanding of the social studies notion of conflict, for example, might be demonstrated through
visual, oral, dramatic, or written representations. As long as writing were not an explicit component
of the outcome, students who have difficulties with written language, for example, would then have
the same opportunity to demonstrate their learning as other students. Although assessment of
learning does not always lead teachers to differentiate instruction or resources, it has a profound
effect on the placement and promotion of students and, consequently, on the nature and
differentiation of the future instruction and programming that students receive. Therefore,
assessment results need to be accurate and detailed enough to allow for wise recommendations.
Reporting
There are many possible approaches to reporting student proficiency. Reporting assessment of
learning needs to be appropriate for the audiences for whom it is intended, and should provide all
of the information necessary for them to make reasoned decisions. Regardless of the form of the
reporting, however, it should be honest, fair, and provide sufficient detail and contextual
information so that it can be clearly understood. Traditional reporting, which relies only on a
student’s average score, provides little information about that student’s skill development or
knowledge. One alternate mechanism, which recognizes many forms of success and provides a
profile of a student's level of performance on an emergent-proficient continuum, is the
parent-student-teacher conference. This forum provides parents with a great deal of information, and
reinforces students’ responsibility for their learning.
The purpose of assessment that typically comes at the end of a course or unit of instruction is to
determine the extent to which the instructional goals have been achieved and for grading or
certification of student achievement. (Linn and Gronlund, Measurement and Assessment in
Teaching)