Introduction
In the present age, measurement has influenced progress in education and psychology as well. The age of purely theoretical education is over, and efforts are being made to make education and psychology increasingly practical. Education and psychology study various human behaviours and problems, and for this it becomes necessary to measure human behaviour.
Educational measurement is not a new concept. The teacher has been testing students since time immemorial in order to know their progress in studies, to see what types of behavioural changes have occurred in them, whether those changes are optimal, and what direction they have taken. A teacher also wants to know the shortcomings of the teaching method he uses, and for this purpose such tests have proved very important. These tests have become all the more important in the present age, and it is expected that the future teacher or pupil-teacher will gain skill in constructing several types of tools of measurement and evaluation.
In the present review of work identity literature we attempt to make sense of and
structure the studies published thus far to encourage and facilitate these
conversations. We acknowledge that several insightful reviews have been published on the topic of work identity and identification in the last decade (e.g.,
Alvesson, Ashcraft, & Thomas, 2008; Ashforth, Harrison, & Corley, 2008;
Brown, 2015; Haslam & Ellemers, 2005; Ramarajan, 2014; van Dick, 2004; van
Knippenberg, van Knippenberg, De Cremer, & Hogg, 2004). Nonetheless, these
reviews either offer in-depth accounts of certain aspects of work identity (e.g.,
identity work) or consider just one of the available theoretical lenses to study
work identity (e.g., social identity theory). What we hope to accomplish is to
provide the reader with insight into the various topics and theories discussed to
date in the literature and facilitate future work across topical and theoretical
boundaries. We accomplish this aim by reviewing peer-reviewed journal articles
published on the topic of work identity and identification.
We first introduce the concept of identity and its main characteristics, then present
the framework used to structure the review, and suggest directions for future
research. This framework is based on two dimensions: levels of identity
inclusiveness (individual, interpersonal, collective) and static versus dynamic
approaches to work identity. In a concluding section, we broaden the lens to
review theory and research that consider multiple identities, which presents a
more complex picture of identities at work. (Miscenko & Day, 2016)
This toolkit is designed to guide educators in developing and improving practical
measurement instruments for use in networked improvement communities
(NICs) and other education contexts in which principles of continuous
improvement are applied. Measures for continuous improvement should be
closely aligned to student learning goals and implementation of instructional
strategies driving the continuous improvement effort, and they should be practical
to use in a classroom setting. Continuous improvement includes distinct repeating
processes: understanding the problem, identifying specific targets for
improvement, determining the change to introduce, implementing the change, and
evaluating if and how the change led to improvements (Langley et al., 1996).
This toolkit is intended to help a team of educators with the final step in the cycle,
which includes collecting data to measure implementation of changes and
intended outcomes and using those data to inform future action. A team of
educators can use this toolkit to proceed through a series of steps to identify what
to measure, consider existing instruments, draft instruments, evaluate and refine
instruments, plan data collection routines, and plan for data discussions to
interpret the data and inform action. Appendix A includes supporting tools
associated with each of the steps.
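To make the cycle concrete, the sketch below (our illustration, not part of the toolkit) shows how a team might record one pass through these steps as a simple data structure; all field names and values are invented.

```python
# Minimal sketch of recording one pass through the continuous improvement
# cycle described above. All field names and values are illustrative
# assumptions, not part of the toolkit itself.
from dataclasses import dataclass, field

@dataclass
class ImprovementCycle:
    problem: str                 # understanding the problem
    target: str                  # specific target for improvement
    change_idea: str             # the change to introduce
    implemented: bool = False    # whether the change has been implemented
    evidence: list = field(default_factory=list)  # data for evaluating the change

cycle = ImprovementCycle(
    problem="Students struggle with multi-step word problems",
    target="Raise the share of correct solutions on weekly exit tickets",
    change_idea="Open each class with a worked-example routine",
)
cycle.implemented = True
cycle.evidence.append({"week": 1, "percent_correct": 62})
```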
One final issue related to the two identification dimensions is their temporal
sequence. Is cognitive identification a necessary precondition for affective
identification? In other words, must one think of oneself as identifying with the
social group before one can feel oneness with the group? Some research points in
this direction. For example, Carmeli, Gilat, and Weisberg (2006) suggested that
cognitive identification preceded affective commitment in external organizational
audiences (customers, suppliers, competitors).
In contrast, research on motivated cognition may suggest that the causal pathway
runs in the opposite direction.
Researchers have noted that the cultural value system espoused in a certain national context has critical implications for the effects of identity/identification in the organization (Baker, Carson, & Carson, 2009; Erez & Earley, 1993), but no cumulative evidence, to our knowledge, exists in the literature. Building upon the notion from cross-cultural research that national culture provides considerable inputs in shaping individual attitudes and behaviors (Gelfand, Erez, & Aycan, 2007; Taras, Kirkman, & Steel, 2010), we expect that the effects of organizational identification will vary depending on whether the national culture, wherein the organizational setting is embedded, concurrently values pursuing an identity overlap between an individual and a social collective (i.e., organization). In this study, we thus seek meta-analytic evidence of how national culture moderates the relations between organizational identification and its attitudinal/behavioral outcomes. We believe that our consideration of national culture will substantially contribute to the organizational identification literature by providing a comprehensive picture of organizational identification effects. (Lee et al., 2015)
Concept of Measurement
Generally, measuring and expressing the weight, length or volume of an object in definite units is called measurement; for example, expressing the weight of a person in kilograms, the length of cloth in metres and the volume of milk in litres. But the field of measurement is very wide: it includes defining any characteristic of any object, person or activity in words, symbols or units.
Factors of Measurement
The above definition of measurement shows that there are four factors of measurement:
(1) The object, person or activity whose characteristic is to be measured.
(2) The characteristic of that object, person or activity which is to be measured.
(3) The tools and devices for measuring that characteristic.
(4) The words, symbols or units in which the result of the measurement is expressed.
Qualitative Variables
Some qualities of objects and persons can only be perceived; they cannot be measured in definite units. Examples are the complexion, caste, religion and sex of people. Such qualities or characteristics are called qualitative variables. The level or class of students is another example of a qualitative variable: on the basis of this variable, they can be classified into students of primary, middle and higher classes or levels. Students of higher classes can also be classified on the basis of their subjects (arts, commerce, engineering, medicine, etc.); at this level, the subjects (disciplines) also function as a qualitative variable.
Quantitative Variables
Some qualities of objects and persons are such which can be measured in definite
units or quantity; for example, height, weight and I.Q. of persons. Such qualities
of persons are called quantitative variables. Proficiency of students in a particular
subject is also included in quantitative variable because it can be measured in
definite units by testing. Quantitative variables are of two types — Continuous
variables and Discrete variables.
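As a rough illustration (ours, with invented data), the difference between the two kinds of variables shows in how a small data table is handled: qualitative columns can only be classified and counted, while quantitative columns can also be averaged and compared.

```python
# Illustrative sketch, assuming pandas; all values are invented.
import pandas as pd

students = pd.DataFrame({
    "level": ["primary", "middle", "higher", "higher"],   # qualitative
    "subject": ["-", "-", "commerce", "engineering"],     # qualitative
    "height_cm": [121.5, 140.2, 163.0, 158.4],            # quantitative, continuous
    "siblings": [2, 0, 1, 3],                             # quantitative, discrete
})

# Qualitative variables support classification and counting only.
print(students["level"].value_counts())

# Quantitative variables additionally support arithmetic such as averaging.
print(students["height_cm"].mean())
```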
Qualitative Measurement
Quantitative Measurement
Organizational membership can provide a significant social identity (Haslam, 2004; Hogg & Terry, 2000).
Organizational identification thus roots individuals in the organization,
leading organizational attributes such as espoused values, goals, and norms to
become salient and self-defining for individuals; through organizational
identification, the identity boundary between individual and organization
becomes blurred. In turn, an organizationally identified employee, as a
“microcosm of the organization” (Ashforth et al., 2008, p. 333), is likely to
have attitudes and take actions that benefit the whole organization rather than
benefitting individual self-interest (Ashforth & Mael, 1989; Haslam &
Ellemers, 2005; Pratt, 1998; van Knippenberg, 2000). To illustrate, when a
Google employee describes herself as creative and innovative, which are the
attributes she ascribes to the Google organization (i.e., categorizing herself as
a “Googler”), it reflects her organizational identification, and she is likely to
think, feel, and behave in ways that are expected among prototypical
Googlers. Below, we detail the theoretical rationales for how an individual's work attitudes and behaviors are shaped by organizational identification. (Lee et al., 2015)
The next step is to collect evidence that your measure will elicit reliable and
valid data that answer your research questions and inform your conversations
about whether there is improvement in teaching and student learning. This
section includes tools to support three types of pretesting activities: collecting
reviewer feedback, conducting cognitive interviews, and evaluating
interrater reliability.
Reviewers may provide valuable insights about how best to capture important
constructs in your measures, what important items might be missing, or how
to improve the wording of the instrument. For example, the validity of an
instrument to measure student learning can be verified through expert review
of the instrument in light of the learning outcomes or standards it is intended
to measure. Depending on the context, local curriculum experts might include
district- or school-based instructional coaches or content-area leaders, consultants working with a district or school, and regional or state education
agency experts. For a teacher survey, you also might want to collect feedback
from teachers about the wording of the items. The likelihood of collecting
useful feedback will be improved if you provide your reviewers with
information about the intended purpose of the instrument and a set of
questions.
A common method for evaluating drafted survey items and other instruments
is to conduct cognitive interviews, which are one-on-one interviews that are
designed to find out how respondents understand, interpret, and respond to the
items. The respondent completes the survey items with the interviewer
present. The interviewer asks the respondent in real time to explain how they
came up with their responses to each item and asks additional probing
questions to uncover any misconceptions or areas that need more clarity
(Willis, 2005).
Cognitive interviews are often used to test survey items but can also be used
to test any kind of instrument that requires someone to respond to questions or
follow instructions to provide information (such as rating scales, checklists, or
assessment items).
Notes taken during the cognitive interviews can be used to inform revisions to
improve the clarity and wording of the items. To ensure a complete record of
what was said during the interviews, the team should assign someone to take
notes or audio record the interviews (with respondents’ permission).
The results of the cognitive interviews will provide insight into how well the
items are interpreted as intended. Instruments that are clearly and accurately
understood by each type of potential respondent will provide more reliable
data. Common misconceptions about an item across multiple interviews
would indicate a strong need to improve clarity, but revisions might be
considered even if a single interviewee encounters a problem. The team
should spend adequate time reviewing the results to reflect on the implications
of the feedback and make decisions about potential revisions.
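For the third pretesting activity, interrater reliability, a common approach is to have two raters score the same set of responses and compute an agreement statistic such as Cohen's kappa. The sketch below is our illustration with invented ratings; the toolkit does not prescribe a particular statistic or library.

```python
# Minimal sketch of checking interrater reliability with Cohen's kappa,
# assuming two raters scored the same ten responses on a 0-2 rubric.
# Ratings are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 1, 0, 2, 2, 1, 0, 1, 2, 1]
rater_b = [2, 1, 1, 2, 2, 1, 0, 0, 2, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```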
The timing and logistics for administering each practical measure should be
planned well in advance to minimize burden and ensure consistency across
classrooms. You should include repeated measures across multiple timepoints
to address questions about improvement over time. Networked improvement
communities (NICs) may structure their measurement routines in different
ways. NICs implementing relatively simple change ideas may use the same
measure over multiple Plan-Do-Study-Act (PDSA) cycles. Some NICs may
use a larger-scale measure aligned to the aim statement at the start and end of
a series of PDSA cycles and use a more focused set of measures within the
PDSA cycles aligned to the discrete changes in a change package that builds toward that ultimate outcome. Whatever the structure, the plan should specify:
Who will collect the data and from whom. (These might be the same
person.)
What action steps are needed to ensure that the instrument is
administered and that data are recorded correctly.
When and how frequently measurement will occur (for example, daily,
at the end of each week, every two weeks, at the beginning and end of
the cycle) and a plan to ensure that there is sufficient time in the
schedule to collect the data.
What other resources, training, and instructions related to data
collection will be needed.
Plans to obtain parental consent for student surveys or other new data
collection from students, if needed.
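As one way to keep these elements together, a team might capture the plan in a single structured record, as in the sketch below; every entry is an invented example, not a recommendation from the toolkit.

```python
# Illustrative sketch of a data collection plan covering the elements above.
# All entries are invented examples.
collection_plan = {
    "collector": "classroom teacher",            # who collects the data
    "respondents": "all students in period 2",   # from whom
    "action_steps": [                            # steps to administer and record
        "print exit tickets on Monday",
        "enter responses into the shared spreadsheet by Friday",
    ],
    "schedule": "end of each week for six weeks",  # when and how frequently
    "resources": ["scoring rubric", "15 minutes of scoring time per week"],
    "parental_consent_needed": False,            # for new student data collection
}
```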
Group all data sources that relate to each research question. Data from
separate instruments should be shown in separate data displays, except for
instances where an item has exactly the same wording and response options
(or scale) across instruments. For example, if the same item is asked of
teachers and students, you could display a summary of both sets of responses
in the same graphic, clearly labeling the teacher and student data.
For each research question and set of related data, make notes about how best to display the data to facilitate NIC members' exploration. Consider what kinds of display are appropriate. For example, if you are looking at change over time, graph the values with consistent scales so that comparisons are easy to see. Include the exact wording of a survey item with the data and use labels for response categories.
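For instance, a change-over-time display might look like the following sketch (our illustration, assuming matplotlib, with invented data); note the fixed 0 to 100 scale and the exact item wording in the title.

```python
# Sketch of plotting one survey item over time with a consistent scale.
# The item wording and all values are invented.
import matplotlib.pyplot as plt

weeks = [1, 2, 3, 4, 5, 6]
percent_agree = [54, 58, 61, 60, 66, 70]  # percent responding "agree"

fig, ax = plt.subplots()
ax.plot(weeks, percent_agree, marker="o")
ax.set_ylim(0, 100)  # consistent scale so comparisons across displays are easy
ax.set_xlabel("Week of PDSA cycle")
ax.set_ylabel('Percent responding "agree"')
ax.set_title('Item: "I can explain my strategy to a classmate"')
plt.show()
```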
The PDSA cycle includes the Study phase in which NIC members examine
and interpret the data to inform decisions to act on for further implementation
of the change idea. Data inquiries support the data interpretation part of the
Study phase. This step will include planned activities to examine and interpret
the data for evidence of expected changes, considerations for how subgroup
comparisons can help team members understand factors that may be related to
changes, and examination of data from different perspectives for a fuller
picture of the changes.
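A subgroup comparison of this kind can be as simple as the sketch below (our illustration, assuming pandas, with invented scores).

```python
# Sketch of a Study-phase subgroup comparison: mean exit-ticket scores by grade.
# Data are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "score": [3, 4, 2, 5, 4, 1, 3, 5],
    "grade": ["6", "6", "7", "7", "6", "7", "6", "7"],
})

# Comparing subgroup means and counts can surface factors related to the change.
print(results.groupby("grade")["score"].agg(["mean", "count"]))
```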
The item being measured - which may not be stable. (Imagine trying to
measure the size of an ice cube in a warm room.)
Operator skill - some measurements depend on the skill and judgement of the
operator. One person may be better than another at the delicate work of setting
up a measurement, or at reading fine detail by eye. The use of an instrument
such as a stopwatch depends on the reaction time of the operator. (But gross
mistakes are a different matter and are not to be accounted for as
uncertainties.)
Where the size and effect of an error are known (e.g. from a calibration
certificate) a correction can be applied to the measurement result. But, in
general, uncertainties from each of these sources, and from other sources,
would be individual ‘inputs’ contributing to the overall uncertainty in the
measurement.
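When the inputs are independent, the usual rule is to combine their standard uncertainties in quadrature, that is, as the square root of the sum of squares, as in this brief sketch with invented values.

```python
# Combining independent standard uncertainties in quadrature.
# The three input uncertainties are invented for illustration.
import math

u_inputs = [0.05, 0.02, 0.10]  # standard uncertainty from each source

u_combined = math.sqrt(sum(u ** 2 for u in u_inputs))
print(f"combined standard uncertainty = {u_combined:.3f}")
```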
The general kinds of uncertainty in any measurement
Random or systematic
You can increase the amount of information you get from your
measurements by taking a number of readings and carrying out
some basic statistical calculations. The two most important
statistical calculations are to find the average or arithmetic mean,
and the standard deviation for a set of numbers.
Broadly speaking, the more measurements you use, the better the
estimate you will have of the ‘true’ value. The ideal would be to
find the mean from an infinite set of values. The more
results you use, the closer you get to that ideal estimate of the
mean. But performing more readings takes extra effort, and yields
‘diminishing returns’. What is a good number? Ten is a popular
choice because it makes the arithmetic easy. Using 20 would only
give a slightly better estimate than 10. Using 50 would be only
slightly better than 20. As a rule of thumb usually between 4 and
10 readings is sufficient.
The readings are: 16, 19, 18, 16, 17, 19, 20, 15, 17 and 13. The sum of these is 170, so the average (arithmetic mean) is 170/10 = 17.
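The same calculation, together with the estimated standard deviation for the spread of these readings, can be checked in a few lines of Python:

```python
# Mean and estimated (sample) standard deviation of the ten readings above.
import statistics

readings = [16, 19, 18, 16, 17, 19, 20, 15, 17, 13]

mean = statistics.mean(readings)    # 170 / 10 = 17
stdev = statistics.stdev(readings)  # divides by n - 1; approximately 2.1

print(mean, round(stdev, 2))
```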
Parents always wonder if their child might have a gifted and talented mind. They also wonder how their child stacks up against the competition. From early education through high school, we constantly measure academic progress. Standardized testing, in many forms, is the most common way of measuring progress and intelligence. These tests are as follows:
Intelligence test
Academic Progress
Prepare a Design
Discussion
The main goal of the current study was to gain consensus on how to
measure IWP, which would enable the development of a standardized,
generic, short instrument. Four broad, generic dimensions of IWP
were used as a theoretical basis: task performance, contextual
performance, adaptive performance, and counterproductive work
behavior. Using a multi-disciplinary approach, possible employee
behaviors or actions (indicators) were identified for each dimension,
via a review of the literature, existing questionnaires, and data from
interviews with experts. In total, 128 unique IWP indicators were
identified, of which 23 were considered most relevant for measuring
IWP, based on notable consensus among experts. On average, task performance received the greatest weight when rating an employee's work
performance. Contextual performance, adaptive performance, and
counterproductive work behavior received almost equal weightings.
There was agreement on 84% of the indicators between experts from
different professional backgrounds. Furthermore, experts agreed on
the relative weight of each IWP dimension in rating work
performance. However, researchers weighted task performance slightly higher than managers did. Almost half of the experts believed in the
possibility of developing a completely generic questionnaire of IWP.
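To make the weighting idea concrete, the hypothetical sketch below scores overall IWP as a weighted sum of the four dimensions; the weights and ratings are invented, since the study reports only that task performance received the greatest weight and the other dimensions roughly equal weights.

```python
# Hypothetical sketch: overall IWP as a weighted sum of dimension ratings.
# Weights and ratings are invented for illustration only.
dimension_ratings = {          # ratings on a common 1-5 scale
    "task": 4.0,
    "contextual": 3.5,
    "adaptive": 3.0,
    "counterproductive": 4.5,  # reverse-scored: higher = fewer such behaviors
}
weights = {
    "task": 0.40,              # greatest weight, per the expert consensus
    "contextual": 0.20,
    "adaptive": 0.20,
    "counterproductive": 0.20,
}

overall_iwp = sum(weights[d] * r for d, r in dimension_ratings.items())
print(f"overall IWP score = {overall_iwp:.2f}")
```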
References
Bell, S. (n.d.). A beginner's guide to uncertainty of measurement (Measurement Good Practice Guide No. 11). National Physical Laboratory.
Gottman, J. M., Coan, J., Carrere, S., & Swanson, C. (1998). Predicting marital happiness and stability from newlywed interactions. Journal of Marriage and Family, 60(1), 5–22.
Koopmans, L. (2014). Measuring individual work performance.
Lee, E. S., Park, T. Y., & Koo, B. (2015). Identifying organizational
identification as a basis for attitudes and behaviors: A meta-analytic review. Psychological Bulletin, 141(5), 1049–1080.
https://doi.org/10.1037/bul0000012
Miscenko, D., & Day, D. V. (2016). Identity and identification at
work. Organizational Psychology Review, 6(3), 215–247.
https://doi.org/10.1177/2041386615584009
Pal, K. (n.d.). Educational Measurement and Evaluation.
Scott, C. R., & Stephens, K. K. (2009). It depends on who you’re
talking to...: Predictors and outcomes of situated measures of
organizational identification. Western Journal of Communication,
73(4), 370–394. https://doi.org/10.1080/10570310903279075
Walston, J., & Conley, M. (2022). Practical Measurement for
Continuous Improvement in the Classroom: A Toolkit for
Educators. Regional Educational Laboratory Southwest.