
Chapter 2

REVIEW OF RELATED LITERATURE

Introduction
In the present age, measurement has influenced progress in education and psychology as well. The age of purely theoretical education is over, and efforts are being made to make education and psychology increasingly practical. Education and psychology study various human behaviours and problems, and for this it becomes necessary to measure human behaviour.

Educational measurement is not a new concept. Teachers have been testing students since time immemorial in order to know their progress in studies, to see what types of behavioural changes have occurred in students, whether those changes are optimal, and what direction they have taken. A teacher also wants to know the shortcomings of the teaching method he uses, for which these tests have proved very important. Such tests have become all the more important in the present age, and the future teacher or pupil-teacher is expected to gain skill in constructing several types of tools of measurement and evaluation.

The introduction of evaluation into the educational world is comparatively new. In fact, it was introduced in this field in order to get rid of the serious shortcomings of measurement. From the beginning of the twentieth century, three important developments were recorded in the field of educational measurement: testing, measurement and evaluation. This progress in education is known as progress in measurement. In the present times, however, evaluation is considered an important contribution of psychology and education, and the attention of educationists and psychologists has recently been drawn towards it.

Almost all the Commissions on education, as also the National Policy on Education (1986), the Programme of Action (1992) and the National Curriculum Framework (2000) of NCERT, have stressed the need for a more continuous and comprehensive evaluation of students in order to pass sounder judgments about students’ learning and growth. Regular assessment throughout pupils’ lives in school is considered essential for remedial treatment of those who fall below the acceptable performance level. It is in this context that the role of Continuous and Comprehensive Evaluation (C.C.E.) is highlighted in appraising the whole child, and that C.C.E. is recognised as a complement to external examination by being given due weightage in the final assessment of the total development of the child.

Identity has emerged as a topic of keen interest in management and organizational studies. Scholars across myriad disciplines, including organizational psychology and organizational behavior, are intrigued by the construct of identity and consider it a prominent factor behind many work-related behaviors. Sociologists and psychologists have considered various aspects of individuals’ self-concepts for more than a century, but organizational researchers have turned their attention to the topic only relatively recently (Sveningsson & Alvesson, 2003). Because of the various theoretical perspectives available to study identity, the field of work identity studies has developed in somewhat haphazard fashion. As a result, there is an increasingly vast, heterogeneous, and fragmented body of literature. Even more troubling, researchers working with a specific conceptualization of identity rarely consider the work of colleagues who have adopted different theoretical lenses (Alvesson, 2010). If we are to advance the field, there must be a recognized need to engage in ‘‘informed conversations across field and paradigmatic boundaries’’ (Brown, 2015, p. 23).

In the present review of the work identity literature we attempt to make sense of and structure the studies published thus far to encourage and facilitate these conversations. We acknowledge that several insightful reviews have been published on the topic of work identity and identification in the last decade (e.g., Alvesson, Ashcraft, & Thomas, 2008; Ashforth, Harrison, & Corley, 2008; Brown, 2015; Haslam & Ellemers, 2005; Ramarajan, 2014; van Dick, 2004; van Knippenberg, van Knippenberg, De Cremer, & Hogg, 2004). Nonetheless, these reviews either offer in-depth accounts of certain aspects of work identity (e.g., identity work) or consider just one of the available theoretical lenses to study work identity (e.g., social identity theory). What we hope to accomplish is to provide the reader with insight into the various topics and theories discussed to date in the literature and to facilitate future work across topical and theoretical boundaries. We accomplish this aim by reviewing peer-reviewed journal articles published on the topic of work identity and identification.

We first introduce the concept of identity and its main characteristics, then present the framework used to structure the review, and suggest directions for future research. This framework is based on two dimensions: levels of identity inclusiveness (individual, interpersonal, collective) and static versus dynamic approaches to work identity. In a concluding section, we broaden the lens to review theory and research that consider multiple identities, which presents a more complex picture of identities at work. (Miscenko & Day, 2016)
This toolkit is designed to guide educators in developing and improving practical measurement instruments for use in networked improvement communities (NICs) and other education contexts in which principles of continuous improvement are applied. Measures for continuous improvement should be closely aligned to student learning goals and implementation of instructional strategies driving the continuous improvement effort, and they should be practical to use in a classroom setting. Continuous improvement includes distinct repeating processes: understanding the problem, identifying specific targets for improvement, determining the change to introduce, implementing the change, and evaluating if and how the change led to improvements (Langley et al., 1996).

This toolkit is intended to help a team of educators with the final step in the cycle, which includes collecting data to measure implementation of changes and intended outcomes and using those data to inform future action. A team of educators can use this toolkit to proceed through a series of steps to identify what to measure, consider existing instruments, draft instruments, evaluate and refine instruments, plan data collection routines, and plan for data discussions to interpret the data and inform action. Appendix A includes supporting tools associated with each of the steps.

Regional Educational Laboratory (REL) Southwest developed resources in the toolkit in partnership with the Oklahoma State Department of Education (OSDE) team working with the Oklahoma Excel (OK Excel) NICs. Through the Southwest Networked Improvement Communities Research Partnership, REL Southwest has supported OSDE’s OK Excel project with a series of coaching and training projects. These projects were designed to build state capacity for implementing content-area NICs at the district level to test and scale up high-impact, evidence-based instructional strategies selected by the NICs. The projects were also intended to increase state and district understanding of improvement science and the use of data for continuous improvement. Examples of how the toolkit resources are applied in the OK Excel NICs’ work are in appendix B. (Walston & Conley, 2022)

One final issue related to the two identification dimensions is their temporal
sequence. Is cognitive identification a necessary precondition for affective
identification? In other words, must one think of oneself as identifying with the
social group before one can feel oneness with the group? Some research points in
this direction. For example, Carmeli, Gilat, and Weisberg (2006) suggested that
cognitive identification preceded affective commitment in external organizational
audiences (customers, suppliers, competitors).
In contrast, research on motivated cognition may suggest that the causal pathway
runs in the opposite direction.

The concept of motivated cognition is that affective states and expectations impact how people seek out and process information (Chen, Shechter, & Chaiken, 1996). Similarly, the affect-as-information model (Clore, Gasper, & Garvin, 2001) suggests that people use emotions as embodied information in their thinking about people, objects, situations, and groups. Together, these theories might suggest that affective identification is a necessary precondition for cognitive identification. Because theory does not point in a definitive causal direction between the two dimensions, we investigate their reciprocal relationships in an exploratory manner. (Gottman et al., 1998)

Researchers have noted that the cultural value system espoused in a certain national context has critical implications for the effects of identity/identification in the organization (Baker, Carson, & Carson, 2009; Erez & Earley, 1993), but no cumulative evidence, to our knowledge, exists in the literature. Building upon the notion from cross-cultural research that national culture provides considerable inputs in shaping individual attitudes and behaviors (Gelfand, Erez, & Aycan, 2007; Taras, Kirkman, & Steel, 2010), we expect that the effects of organizational identification will vary depending on whether the national culture wherein the organizational setting is embedded concurrently values pursuing an identity overlap between an individual and a social collective (i.e., organization). In this study, we thus seek meta-analytic evidence of how national culture moderates the relations between organizational identification and its attitudinal/behavioral outcomes. We believe that our consideration of national culture will substantially contribute to the organizational identification literature by providing a comprehensive picture of organizational identification effects. (Lee et al., 2015)

Creating a sense of connection (i.e., identification) between organizations and their members has, however, become a growing challenge in an age where employees have increasing numbers of potential attachment targets and organizations appear less loyal to their members (Scott, 2001). Trying to keep members constantly identified with an organization may exert substantial costs on all involved (Gossett, 2002). What may matter in terms of efforts to motivate and retain members is when a person is identified. If opportunities and situations can be created that foster strong identifications at appropriate times, those positive outcomes may be possible even amid more fleeting attachments.


This activity- and communication-based view of identification is put forth most clearly in Scott et al.’s (1998) structurational model of identification, where they connect organizational members’ various identities to corresponding identification targets and then link both concepts to workplace activities. In this view, the issue is not so much if one is identified, but more precisely when various identifications are strong or weak. In other words, key workplace outcomes (e.g., satisfaction, turnover intentions) may be heavily shaped by the relevant identification during communicative interactions associated with certain others. Similarly, these more situated forms of attachment may be differentially influenced by organizational antecedents thought to shape identification. (Scott & Stephens, 2009)

Which identity or identities become operative at a given time is a joint function of stable and dynamic forces (Kondo, 1990; Markus & Kunda, 1986; Markus & Wurf, 1987). For researchers in the self-categorization tradition, emphasis is placed on the intrinsically variable nature of identity, and how it shifts dynamically with changes in the comparative context (Turner, Oakes, Haslam, & McGarty, 1994). While not denying the significance of situational variability in personal and social identity salience, the present research will emphasize the dispositional tendency toward self-definition at either the personal or social level of identification. We maintain that the importance assigned to either of these two levels helps to define the self across situations, as perceivers actively select self-categorizations that are central, relevant, and useful. Just as research on the structure and content of the self-concept has documented individual variations in cognitive complexity (Linville, 1985), self-schemas (Markus, 1977), the tendency to focus one’s attention toward private versus public self standards (Fenigstein, Scheier, & Buss, 1975), and the storage of private and collective self-cognitions (Trafimow, Triandis, & Goto, 1991), we suggest that there are individual differences in the centrality and importance of personal and social identity domains. (Koopmans, 2014)

Concept of Measurement
Generally, to measure and express the weight, length or volume of an object in definite units is called measurement; for example, showing the weight of a person in kilograms, the length of cloth in metres and the volume of milk in litres. But the field of measurement is very wide: it includes defining any characteristic of any object, person or activity in words, symbols or units.

As far as explaining the qualities of objects, persons and activities is concerned, it has been in vogue from very ancient times, of course without any definite basis of measurement. In the present times, the bases of most of the qualities of objects, persons and activities have been defined; their standards and units have been specified; measuring tools and methods have been devised; and methods to present the results of measurement in brief have been decided. Now, a characteristic of an object, person or activity is described briefly in definite words, symbols and units. Many scholars have attempted to delimit the definition of this process. Most scholars are in agreement with the definition given by James M. Bradfield. In his words :

Measurement is the process of assigning symbols to the dimensions of a phenomenon in order to characterise the status of the phenomenon as precisely as possible.

This definition of measurement includes only the measurement of the qualities of objects and activities, and not the measurement of the qualities of persons. Though persons are included among the objects of the universe, ‘objects’ here is taken to mean only concrete materials, so it is necessary to mention the measurement of the qualities of persons separately. Bradfield’s definition also does not point to any definite basis of measurement. We are of the opinion that this, too, should necessarily be included, in which case measurement should be defined as :

Measurement is the process by which a characteristic of an object, person or activity is perceived and understood against specific standards and is described in standard words, symbols or definite units.

Factors of Measurement

The above definition of measurement shows that there are four factors of measurement :

(1) The object, person or activity whose characteristic is to be measured.

(2) The characteristic of that object, person or activity which is to be measured.

(3) The tools and devices for measuring that characteristic.

(4) The person who measures it.

Measuring Variables and Their Types

From the viewpoint of measurement, variables are those characteristics of objects, persons and activities in which the objects, persons and activities of a group are not uniform; for example, the weight, intelligence and personality of persons. It is possible that the weights of a few members may be equal; however, all persons of a group cannot have equal weight, and if a group is so formed that the weight of all members is equal, then the weight of those persons becomes a constant in place of a variable. It is evident that a characteristic can be a variable for one group while it is a constant for another. We can clarify it further with another illustration. Sex in a group of boys and girls is a variable, and they can be divided into two groups of boys and girls on the basis of their sex; however, in a separate group of only boys or only girls, sex is not a variable.

Variables are of two types—Qualitative variables and Quantitative variables.

Qualitative Variables


Some qualities of objects and persons can only be perceived; they cannot be measured in definite units. Examples are the complexion, caste, religion and sex of people. These qualities or characteristics are called qualitative variables. The level or class of students is another example of a qualitative variable: on the basis of this variable, they can be classified into students of primary, middle and higher classes or levels. The students of higher classes can also be classified on the basis of their subjects — arts, commerce, engineering, medicine, etc. At this level, the subjects (disciplines) also function as a qualitative variable.

Quantitative Variables

Some qualities of objects and persons can be measured in definite units or quantities; for example, the height, weight and I.Q. of persons. Such qualities are called quantitative variables. Proficiency of students in a particular subject is also a quantitative variable because it can be measured in definite units by testing. Quantitative variables are of two types — Continuous variables and Discrete variables.

1. Continuous Variables : Continuous variables are those quantitative variables which can take any value between two consecutive whole numbers; for example, the height of a person. It is not necessary for a person’s height to be a whole number, such as 171 cm following 170 cm. It can also be 170.1 cm, 170.2 cm, 170.3 cm, 170.4 cm, 170.5 cm, 170.6 cm, etc., or even 170.11 cm or 170.12 cm. The values of continuous variables are never exact whole numbers by themselves; rather, they are approximate numbers with a fractional part.
2. Discrete Variables : Discrete variables are those quantitative variables which are always measured in whole numbers; for example, the number of students in a class. This number is always a whole number (40, 41, 50, etc.) and never fractional (40.5, 41.51, 45.52, etc.). Since discrete variables are always whole numbers, their units of measurement are always exact numbers. (A short code sketch of both kinds follows below.)
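To make the distinction concrete, here is a minimal sketch in Python; the variable names are illustrative, not from the source. Qualitative values appear as categories, continuous values as fractional numbers, and discrete values as whole-number counts.

```python
# Illustrative sketch of the three kinds of variables described above.
# All names here are hypothetical.

qualitative = {"sex": "female", "stream": "commerce"}  # perceived categories, no units

continuous = {"height_cm": 170.4, "weight_kg": 61.25}  # may fall between whole numbers

discrete = {"students_in_class": 41}                   # whole numbers only

# A discrete count must be a whole number; a fractional count would signal an error.
assert isinstance(discrete["students_in_class"], int)

# A continuous reading is an approximation: 170.4 cm really means
# "somewhere between 170.35 cm and 170.45 cm" at this level of precision.
print(f"Recorded height: {continuous['height_cm']} cm (approximate)")
```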

Qualitative and Quantitative Measurement

Qualitative Measurement

Perceiving the characteristics of an object, person or activity in the form of a quality is called qualitative measurement; for example, describing a student as very intelligent or dull is qualitative measurement.

Quantitative Measurement

Measuring the characteristics of an object, person or activity in the form of a quantity is called quantitative measurement; for example, measuring the I.Q. (Intelligence Quotient) of a student as 140, 120 or 110 is quantitative measurement.

Steps of Measurement in Education

The process of measurement is completed in four steps in any field, including the field of education. These steps are :

1. Determining and Defining the Traits or Aims and Objectives to be Measured : First, the measurer determines which quality of which person (teacher, student, administrator, guardian, etc.), or which educational achievements of the students, he has to measure. Having determined this, he gives it a definite form and defines it. For example, if he has to provide educational guidance, he has to determine the traits to be measured, such as intelligence, interest, aptitude and attitude. In case he has to measure educational achievements, he has to determine the aims and objectives that he kept in mind while teaching a subject or training in an activity, and the extent to which they are to be achieved.
2. Selection of Suitable Measurement Tools or Methods : After having determined the traits or aims and objectives to be measured, the measurer selects suitable tools or techniques for their measurement. For example, if he has to measure the intelligence of students, he has to select an intelligence test according to the age of the students; and if he has to measure the educational achievements of the students, he has to select a suitable performance test according to the aims and objectives. Without clear knowledge of the traits or aims and objectives to be measured, he would not be able to select a suitable measurement tool or technique; and without that, he cannot measure the traits or educational achievements of the students.
3. Use or Administration of Measurement Tool or Technique : After selecting the suitable measurement tool or technique, the measurer uses or administers it. For example, if he has selected an intelligence test to measure the students’ intelligence, he will administer it; or if he has selected or constructed an achievement test for the measurement of educational achievement, he will administer that. He will be able to administer it properly only when he is familiar with its administration.
4. Results and Records : This is the last step of the measurement process. At this step, the measurer obtains the results of measurement and records them. For example, if he has administered an intelligence test on students, he will calculate the intelligence quotient (IQ) on the basis of the obtained results and record it. In case he has administered a performance test, he will award marks on it and record the scores. (Pal, n.d.)

The Effects of Organizational Identification

As a dominant psychological approach to identity and identification, social identity theory explains how individuals construct their self-concepts from the identity of the collectives they belong to (Tajfel, 1978; Tajfel & Turner, 1985). Tajfel (1978) defined social identity as “that part of an individual’s self-concept which derives from his knowledge of his membership of a social group (or groups) together with the value and emotional significance attached to that membership” (p. 63). Social identities are shared by members and accentuate members’ perceived similarity. Members share the group’s prototypical traits, and thus depersonalize their self-concepts (Hogg, Terry, & White, 1995; Turner, Hogg, Oakes, Reicher, & Wetherell, 1987); through this process of categorizing the self into a more inclusive social entity, “I becomes we” (Brewer, 1991).

As a salient social domain in modern society, organizations provide a significant social identity (Haslam, 2004; Hogg & Terry, 2000). Organizational identification thus roots individuals in the organization, leading organizational attributes such as espoused values, goals, and norms to become salient and self-defining for individuals; through organizational identification, the identity boundary between individual and organization becomes blurred. In turn, an organizationally identified employee, as a “microcosm of the organization” (Ashforth et al., 2008, p. 333), is likely to have attitudes and take actions that benefit the whole organization rather than individual self-interest (Ashforth & Mael, 1989; Haslam & Ellemers, 2005; Pratt, 1998; van Knippenberg, 2000). To illustrate, when a Google employee describes herself as creative and innovative, which are the attributes she ascribes to the Google organization (i.e., categorizing herself as a “Googler”), it reflects her organizational identification, and she is likely to think, feel, and behave in ways that are expected among prototypical Googlers. Below, we detail the theoretical rationales for how an individual’s work attitudes and behaviors are shaped by organizational identification. (Lee et al., 2015)

Evaluate and refine instruments

The next step is to collect evidence that your measure will elicit reliable and valid data that answer your research questions and inform your conversations about whether there is improvement in teaching and student learning. This section includes tools to support three types of pretesting activities: collecting reviewer feedback, conducting cognitive interviews, and evaluating interrater reliability.

Collect reviewer feedback

Reviewers may provide valuable insights about how best to capture important constructs in your measures, what important items might be missing, or how to improve the wording of the instrument. For example, the validity of an instrument to measure student learning can be verified through expert review of the instrument in light of the learning outcomes or standards it is intended to measure. Depending on the context, local curriculum experts might include district- or school-based instructional coaches or content-area leaders, consultants working with a district or school, and regional or state education agency experts. For a teacher survey, you also might want to collect feedback from teachers about the wording of the items. The likelihood of collecting useful feedback will be improved if you provide your reviewers with information about the intended purpose of the instrument and a set of questions.

Conduct cognitive interviews

A common method for evaluating drafted survey items and other instruments is to conduct cognitive interviews, which are one-on-one interviews designed to find out how respondents understand, interpret, and respond to the items. The respondent completes the survey items with the interviewer present. The interviewer asks the respondent in real time to explain how they came up with their responses to each item and asks additional probing questions to uncover any misconceptions or areas that need more clarity (Willis, 2005).

Cognitive interviews can include:

 A concurrent “think-aloud” process during which the participant is asked to verbalize what they are thinking as they consider and select answers to questions.
 Unscripted probes that the interviewer might ask in response to something the participant says or does. For example, the interviewer might ask a teacher participant who seems to be having difficulty deciding between two rating points in scoring a student’s work to explain how they interpret the two ratings and what made it difficult to choose.
 Scripted probes that are prepared in advance to target predetermined potential problems. For example, if the team has drafted a survey item that it thinks might be hard to understand, it may prepare a question such as: “In your own words, what is this question asking about?”

Cognitive interviews are often used to test survey items but can also be used
to test any kind of instrument that requires someone to respond to questions or
follow instructions to provide information (such as rating scales, checklists, or
assessment items).

You will want to recruit different kinds of respondents for cognitive interviews so that you can gather a variety of perspectives. For example, if you are testing teacher survey items, you will want to include new and veteran teachers and teachers from relevant grade levels. In general, three or four interviews can provide adequate feedback for an instrument used in a continuous improvement context when time and resources are limited; additional interviews may be beneficial if time and resources allow.

Notes taken during the cognitive interviews can be used to inform revisions to
improve the clarity and wording of the items. To ensure a complete record of
what was said during the interviews, the team should assign someone to take
notes or audio record the interviews (with respondents’ permission).

The results of the cognitive interviews will provide insight into how well the
items are interpreted as intended. Instruments that are clearly and accurately
understood by each type of potential respondent will provide more reliable
data. Common misconceptions about an item across multiple interviews
would indicate a strong need to improve clarity, but revisions might be
considered even if a single interviewee encounters a problem. The team
should spend adequate time reviewing the results to reflect on the implications
of the feedback and make decisions about potential revisions.

Plan data collection routines

Plan for timing and logistics of data collection

The timing and logistics for administering each practical measure should be planned well in advance to minimize burden and ensure consistency across classrooms. You should include repeated measures across multiple timepoints to address questions about improvement over time. Networked improvement communities (NICs) may structure their measurement routines in different ways. NICs implementing relatively simple change ideas may use the same measure over multiple Plan-Do-Study-Act (PDSA) cycles. Some NICs may use a larger-scale measure aligned to the aim statement at the start and end of a series of PDSA cycles and use a more focused set of measures within the PDSA cycles aligned to the discrete changes in a change package that builds toward that ultimate outcome.

A measurement plan is key to creating routines that ensure that measurement activities occur on the same schedule and in the same way for all participants. The measurement plan is organized by instrument; a sketch of such a plan follows the list below.

For each instrument, the plan will describe:

 Who will collect the data and from whom. (These might be the same
person.)
 What action steps are needed to ensure that the instrument is
administered and that data are recorded correctly.
 When and how frequently measurement will occur (for example, daily,
at the end of each week, every two weeks, at the beginning and end of
the cycle) and a plan to ensure that there is sufficient time in the
schedule to collect the data.
 What other resources, training, and instructions related to data
collection will be needed.
 Plans to obtain parental consent for student surveys or other new data
collection from students, if needed.
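As an illustration only, a team might record these elements for each instrument in a simple structure such as the Python sketch below. The toolkit itself provides supporting templates in appendix A rather than code, and every instrument and field name here is hypothetical.

```python
# Hypothetical measurement plan organized by instrument, mirroring the
# bullet points above. All instrument and field names are illustrative.
measurement_plan = {
    "teacher_self_report_log": {
        "who_collects": "classroom teacher",
        "collected_from": "classroom teacher",  # collector and source may be the same person
        "action_steps": ["complete log after each lesson", "file log in shared folder"],
        "frequency": "end of each week",
        "resources_needed": ["log template", "brief training on using the log"],
        "parental_consent_required": False,
    },
    "student_survey": {
        "who_collects": "instructional coach",
        "collected_from": "grade 1 and grade 2 students",
        "action_steps": ["administer during morning block", "enter responses the same day"],
        "frequency": "beginning and end of the PDSA cycle",
        "resources_needed": ["paper surveys", "administration script"],
        "parental_consent_required": True,  # new data collection from students
    },
}

# A quick check of the schedule for every instrument.
for instrument, plan in measurement_plan.items():
    print(f"{instrument}: collected {plan['frequency']} by {plan['who_collects']}")
```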

Plan for data discussions

Create data displays

During the Study phase of the Plan-Do-Study-Act (PDSA) improvement cycle, you will want to ensure that the team is guided by clear and organized representations of the data that are designed to inform actionable discussion about your research questions. People tend to understand information better when it is presented visually. Data visualizations are most effective when they accurately reflect the data, using clear labels, uncluttered design, and the graph type that is best suited to the data and the questions that you want the data to help answer (Evergreen, 2017).

Typically, a subset of the networked improvement community (NIC) who are experienced in working with data and graphics will plan for and create the data displays to share with the full NIC team. To begin planning for data displays, organize your data by research question. Identify which data (instruments and item numbers) will help answer each research question. Some questions may be answered by just one item at one timepoint, and others may be answered by multiple items within an instrument, across instruments, or across timepoints. For example, a question about how often the teacher implemented an element of the change idea might be examined by looking at data from a teacher’s self-report log, a student survey, an observer’s checklist, or a combination of all three. Data about student academic outcomes may come from student assessments and one or more items on a teacher survey.

Group all data sources that relate to each research question. Data from separate instruments should be shown in separate data displays, except for instances where an item has exactly the same wording and response options (or scale) across instruments. For example, if the same item is asked of teachers and students, you could display a summary of both sets of responses in the same graphic, clearly labeling the teacher and student data.

For each research question and set of related data, make notes about how best to display the data to facilitate NIC members’ exploration. Consider what kinds of display are appropriate. For example, if you are looking at change over time, graph the values with consistent scales so that comparisons are easy to see. Include the exact wording of a survey item with the data and use labels for response categories.
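As a minimal sketch of these recommendations (assuming matplotlib and invented data; neither is prescribed by the toolkit), one might display the same survey item over time with a consistent scale and the exact item wording:

```python
import matplotlib.pyplot as plt

# Hypothetical data: average rating on one survey item at four timepoints.
timepoints = ["Week 1", "Week 2", "Week 3", "Week 4"]
teacher_avg = [2.1, 2.6, 3.0, 3.4]  # invented values for illustration
student_avg = [1.9, 2.4, 2.9, 3.1]  # same wording and scale, so one display is acceptable

fig, ax = plt.subplots()
ax.plot(timepoints, teacher_avg, marker="o", label="Teachers")
ax.plot(timepoints, student_avg, marker="s", label="Students")
ax.set_ylim(1, 4)  # keep the scale consistent across displays so comparisons are easy
ax.set_ylabel("Average rating (1 = never, 4 = daily)")
ax.set_title('"How often did you use the new strategy this week?"')  # exact item wording
ax.legend()
plt.show()
```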

Plan data inquiry activities

The PDSA cycle includes the Study phase in which NIC members examine
and interpret the data to inform decisions to act on for further implementation
of the change idea. Data inquiries support the data interpretation part of the
Study phase. This step will include planned activities to examine and interpret
the data for evidence of expected changes, considerations for how subgroup
comparisons can help team members understand factors that may be related to
changes, and examination of data from different perspectives for a fuller
picture of the changes.

Discussions about how to interpret the data should be semistructured. It is helpful to have some structure and planned activities to make the most of data discussions for informing next steps and for discussing implications for continued implementation of the change ideas. Using a structured inquiry process can be key to building capacities for school- and classroom-level improvements (Copland, 2003; Timperley, 2008). Keep in mind, though, that continuous improvement efforts can yield unexpected discoveries about how the quantitative data on drivers interact and that unexpected research questions can arise from team members’ experiences in implementing their change ideas. (Walston & Conley, 2022)
Where do errors and uncertainties come from?
Many things can undermine a measurement. Flaws in the measurement may
be visible or invisible. Because real measurements are never made under
perfect conditions, errors and uncertainties can come from:

The measuring instrument – instruments can suffer from errors including bias, changes due to ageing, wear, or other kinds of drift, poor readability, noise (for electrical instruments) and many other problems.

The item being measured – which may not be stable. (Imagine trying to measure the size of an ice cube in a warm room.)

The measurement process – the measurement itself may be difficult to make. For example, measuring the weight of small but lively animals presents particular difficulties in getting the subjects to co-operate.

‘Imported’ uncertainties – calibration of your instrument has an uncertainty which is then built into the uncertainty of the measurements you make. (But remember that the uncertainty due to not calibrating would be much worse.)

Operator skill – some measurements depend on the skill and judgement of the operator. One person may be better than another at the delicate work of setting up a measurement, or at reading fine detail by eye. The use of an instrument such as a stopwatch depends on the reaction time of the operator. (But gross mistakes are a different matter and are not to be accounted for as uncertainties.)

Sampling issues – the measurements you make must be properly representative of the process you are trying to assess. If you want to know the temperature at the work-bench, don’t measure it with a thermometer placed on the wall near an air conditioning outlet. If you are choosing samples from a production line for measurement, don’t always take the first ten made on a Monday morning.

The environment – temperature, air pressure, humidity and many other conditions can affect the measuring instrument or the item being measured.

Where the size and effect of an error are known (e.g. from a calibration
certificate) a correction can be applied to the measurement result. But, in
general, uncertainties from each of these sources, and from other sources,
would be individual ‘inputs’ contributing to the overall uncertainty in the
measurement.
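The excerpt stops short of the arithmetic, but standard practice in uncertainty budgets is to correct for known errors and then combine the remaining independent standard uncertainties in quadrature (root sum of squares). The sketch below assumes that rule and uses invented values:

```python
import math

# Hypothetical standard uncertainties (all in the same units) from independent inputs.
u_instrument = 0.10  # e.g., calibration uncertainty imported from the certificate
u_process = 0.05     # e.g., difficulty of making the measurement itself
u_operator = 0.08    # e.g., skill in reading fine detail by eye

# A known error (e.g., stated on a calibration certificate) is corrected,
# not treated as an uncertainty.
reading = 17.25
known_bias = -0.03
corrected = reading - known_bias

# Independent inputs combine as the root sum of squares.
u_combined = math.sqrt(u_instrument**2 + u_process**2 + u_operator**2)
print(f"Result: {corrected:.2f} ± {u_combined:.2f}")
```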
The general kinds of uncertainty in any measurement

Random or systematic

The effects that give rise to uncertainty in measurement can be either:

Random – where repeating the measurement gives a randomly different result. If so, the more measurements you make, and then average, the better estimate you generally can expect to get. (Bell, n.d.)
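A quick simulation (illustrative values only) shows why averaging helps with random effects: the spread of ten-reading averages is roughly 1/√10 of the spread of single readings.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 17.0

def reading():
    # One reading = the true value plus random noise.
    return TRUE_VALUE + random.gauss(0, 2.0)

# Compare the scatter of single readings with the scatter of 10-reading averages.
singles = [reading() for _ in range(1000)]
averages = [statistics.mean(reading() for _ in range(10)) for _ in range(1000)]

print(f"Spread of single readings:  {statistics.stdev(singles):.2f}")
print(f"Spread of 10-reading means: {statistics.stdev(averages):.2f}")  # about 1/sqrt(10) as large
```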

Basic statistical calculations

You can increase the amount of information you get from your
measurements by taking a number of readings and carrying out
some basic statistical calculations. The two most important
statistical calculations are to find the average or arithmetic mean,
and the standard deviation for a set of numbers.

Getting the best estimate – taking the average of a number of readings

If repeated measurements give different answers, you may not be doing anything wrong. It may be due to natural variations in what is going on. (For example, if you measure a wind speed outdoors, it will not often have a steady value.) Or it may be because your measuring instrument does not behave in a completely stable way. (For example, tape measures may stretch and give different results.)

If there is variation in readings when they are repeated, it is best to take many readings and take an average. An average gives you an estimate of the ‘true’ value. An average or arithmetic mean is usually shown by a symbol with a bar above it, e.g. x̄ (‘x-bar’) is the mean value of x.

How many readings should you average?

Broadly speaking, the more measurements you use, the better the estimate you will have of the ‘true’ value. The ideal would be to find the mean from an infinite set of values. The more results you use, the closer you get to that ideal estimate of the mean. But performing more readings takes extra effort, and yields ‘diminishing returns’. What is a good number? Ten is a popular choice because it makes the arithmetic easy. Using 20 would only give a slightly better estimate than 10. Using 50 would be only slightly better than 20. As a rule of thumb, usually between 4 and 10 readings is sufficient.

Example 1. Taking the average or arithmetic mean of a number of values

Suppose you have a set of 10 readings. To find the average, add them together and divide by the number of values (10 in this case).

The readings are: 16, 19, 18, 16, 17, 19, 20, 15, 17 and 13.

The sum of these is: 16 + 19 + 18 + 16 + 17 + 19 + 20 + 15 + 17 + 13 = 170

The average of the 10 readings is: 170/10 = 17
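The same calculation, together with the standard deviation mentioned earlier in this section, takes a few lines of Python using the readings from Example 1:

```python
import statistics

readings = [16, 19, 18, 16, 17, 19, 20, 15, 17, 13]

mean = sum(readings) / len(readings)  # 170 / 10 = 17.0
print(f"Mean: {mean}")

# The sample standard deviation summarizes the spread of the readings.
print(f"Standard deviation: {statistics.stdev(readings):.2f}")  # about 2.11
```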

Types of Standardized Tests for School-Age Children

Parents always wonder whether their child might have a gifted and talented mind. They also wonder how their child stacks up against the competition. From early education through high school, we constantly measure academic progress. Standardized testing, in many forms, is the most common way of measuring progress and intelligence. These tests are as follows:

Intelligence Tests

Intelligence tests are standardized tests that aim to determine how well a person can handle problem solving using higher-level cognitive thinking. Commonly just called an IQ test, a typical intelligence test asks problems involving pattern recognition and logical reasoning. It then takes into account the time needed and how many questions the person completes correctly, with penalties for guessing. The specific tests used and how the results are applied change from district to district, but intelligence testing is common during the early years of schooling.

Academic Progress

Standardized tests of academic progress and of intelligence are not the same, although they use similar questions and methodologies. Academic progress tests such as the Iowa Tests of Basic Skills give schools an idea of how their students perform on a national level in core areas and how well the school has taught certain subjects. While intelligence tests are often used for gifted and talented programs, academic progress tests usually identify poor performance among students and the effectiveness of teaching.

College Entrance Exams

Colleges often require results from a standardized test, such as the SAT or ACT, to measure college readiness. College entrance exams are similar to other academic progress exams but require a higher level of reading and mathematics. The SAT and ACT allow colleges to measure the aptitude of different applicants, instead of having to compare the scores of many tests, classes and grades from different schools.

Planning Procedure of a Test

Once the teacher or the test constructor is aware of the characteristics that a good test must possess, s/he can proceed to construct a test, which may be either a unit test or a full-fledged question paper covering all aspects of the syllabus. Planning for every type of test is almost the same: whether the test is a unit test for use in classroom testing or a question paper for use in final examinations, the steps of test construction are the same, which are as follows :

Prepare a Design

The first step in preparing a test is to construct a design. A test is not merely a collection of assorted questions. To be of any effective use, it has to be planned in advance, keeping in view the objectives and the content of the course and the forms of questions to be used for testing these. For this, weightage to different objectives, to different areas of content, and to different forms of questions has to be decided, along with the scheme of options and sections; these dimensions are what is known as the design of a test. (Pal, n.d.)
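As an illustration (the percentages and mark total below are invented, not from the source), the weightages that make up such a design can be tabulated and checked so that each dimension accounts for the whole test:

```python
# Hypothetical design (blueprint) for a 50-mark test. The weightages given to
# objectives, content areas, and forms of questions are illustrative only.
TOTAL_MARKS = 50

design = {
    "objectives": {"knowledge": 0.30, "understanding": 0.40, "application": 0.30},
    "content": {"unit 1": 0.20, "unit 2": 0.50, "unit 3": 0.30},
    "question_forms": {"objective type": 0.40, "short answer": 0.40, "essay": 0.20},
}

for dimension, weights in design.items():
    # Each dimension's weightages must account for the full test.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, f"{dimension} must total 100%"
    print(dimension)
    for category, weight in weights.items():
        print(f"  {category}: {weight * TOTAL_MARKS:.0f} marks")
```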

Identify what to measure

Start with a completed driver diagram and change idea description

Before developing practical measures to use in your Plan-Do-Study-Act (PDSA) cycles, you will want to make sure that you have consensus among members of your team about your driver diagram, including the aim statement, primary and secondary drivers, and the elements of the change idea (for example, specific classroom activities and strategies) that will be implemented to achieve the expected improvements.

Write research questions

To clarify what should be measured, networked improvement communities (NICs) will want to identify what questions to answer during the Study phase of the PDSA cycle. NICs should prioritize questions about how the change idea was implemented and about the status of, or change in, the student or educator outcomes the NIC has as its aim. To develop specific research questions, you will want to spend time thinking about which aspects of the change idea, drivers, and aim you want to test first and what you might wait to test in another PDSA cycle. Consider what questions are most important to help your team understand what is working and make decisions about what should be changed for the next cycle of implementation and testing.

Implementation questions are about how the change idea was implemented. These questions may be about how well or how often the teacher integrated the elements and activities of the change idea into classroom activities or about whether there were roadblocks to implementing the change idea, such as inadequate resources. These kinds of questions are sometimes called process or formative questions. Implementation questions might refer to implementation during a specific period or a change in implementation over time, or they might explore differences in implementation between groups (for example, did the implementation of a new instructional strategy occur with the same frequency among grade 1 and grade 2 teachers?).

Outcome questions address the extent to which the change idea was associated with improvements in the primary drivers, student or teacher behaviors, attitudes, knowledge, or skills. Continuous improvement efforts often focus on short- and medium-term outcomes in a single or small series of PDSA cycles. Longer-term outcomes may be measured annually or after a longer series of PDSA cycles over an appropriate amount of time to fully implement the change and realize results. Like implementation questions, outcome questions might refer to a single time period or change over time, or they might address differences between groups.


Discussion

The main goal of the current study was to gain consensus on how to measure IWP, which would enable the development of a standardized, generic, short instrument. Four broad, generic dimensions of IWP were used as a theoretical basis: task performance, contextual performance, adaptive performance, and counterproductive work behavior. Using a multi-disciplinary approach, possible employee behaviors or actions (indicators) were identified for each dimension via a review of the literature, existing questionnaires, and data from interviews with experts. In total, 128 unique IWP indicators were identified, of which 23 were considered most relevant for measuring IWP, based on notable consensus among experts. On average, task performance received the greatest weight when rating an employee’s work performance. Contextual performance, adaptive performance, and counterproductive work behavior received almost equal weightings. There was agreement on 84% of the indicators between experts from different professional backgrounds. Furthermore, experts agreed on the relative weight of each IWP dimension in rating work performance. However, researchers weighed task performance slightly higher than managers did. Almost half of the experts believed in the possibility of developing a completely generic questionnaire of IWP.

The empirical exploration of relational and collective identification in organizational settings is still in its infancy. We believe that the present research makes an important contribution to the literature by developing valid measures for relational and collective identification, revealing their motivational underpinnings, and unpacking their differential antecedents and consequences in organizational workgroups. Clarifying the distinct bases of social identification in the organizational context, we hope, will encourage further pursuit of knowledge of the identification processes in the workplace.

References

Bell, S. (n.d.). A beginner’s guide to uncertainty of measurement (Measurement Good Practice Guide No. 11). National Physical Laboratory.

Gottman, J. M., Coan, J., Carrere, S., & Swanson, C. (1998). Predicting marital happiness and stability from newlywed interactions. Journal of Marriage and Family, 60(1), 5–22.

Koopmans, L. (2014). Measuring individual work performance.

Lee, E. S., Park, T. Y., & Koo, B. (2015). Identifying organizational identification as a basis for attitudes and behaviors: A meta-analytic review. Psychological Bulletin, 141(5), 1049–1080. https://doi.org/10.1037/bul0000012

Miscenko, D., & Day, D. V. (2016). Identity and identification at work. Organizational Psychology Review, 6(3), 215–247. https://doi.org/10.1177/2041386615584009

Pal, K. (n.d.). Educational measurement and evaluation.

Scott, C. R., & Stephens, K. K. (2009). It depends on who you’re talking to...: Predictors and outcomes of situated measures of organizational identification. Western Journal of Communication, 73(4), 370–394. https://doi.org/10.1080/10570310903279075

Walston, J., & Conley, M. (2022). Practical measurement for continuous improvement in the classroom: A toolkit for educators. Regional Educational Laboratory Southwest.
