Name : Levia Salsabillah
NIM : 210210401074
Chapter 7
Monitoring and Assessment
Types of Monitoring and Assessment
The purpose of the monitoring and assessment part of curriculum design is to make sure that the
learners will get the most benefit from the course.
Assessment is a major source of information for the evaluation of a course and thus its gradual
improvement. Assessment also contributes significantly to the teacher’s and learners’ sense of
achievement in a course and thus is important for motivation.
1. Placement assessment. The learners are assessed at the beginning of a course, usually under environment constraints, to decide what level of the course a learner should enter, what class the learner should join, and whether the learner should join the course at all.
Possible placement tests:
1) The Eurocentres Vocabulary Test. This test takes about ten minutes to sit and is
automatically scored by the computer that administers it (Meara and Buxton, 1987).
2) The Vocabulary Levels Test (Nation, 1990; Schmitt et al., 2001). Designed to show where
learners need to develop their vocabulary knowledge, and thus basically a diagnostic
test.
3) Structured interviews. In a structured interview, the learners are interviewed individually.
The interviewer has a series of questions, beginning with common short questions and
moving gradually to more complex questions or commands.
4) A cloze test. Particularly one where the deleted words are selected by the test maker.
Although the cloze is considered a reasonable test of general language proficiency, a
selective cloze can focus on particular aspects of vocabulary and grammar.
5) Sentence completion. Allen’s (1970) “thumbnail test of English competence” is a modest
example of this. Example:
I have been here ___.
Since you are late we ___.
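The selective cloze described above can be sketched in code. The passage and target-word list below are hypothetical examples for illustration, not items from any real placement test:

```python
# Selective cloze sketch: the deleted words are chosen by the test maker
# (here a hypothetical target list), not deleted at fixed intervals.
# Target words are replaced with numbered blanks, and an answer key
# records what each blank stands for.

def make_selective_cloze(text, target_words):
    blanks = []
    out = []
    for word in text.split():
        stripped = word.rstrip(".,;:!?")  # keep trailing punctuation
        if stripped.lower() in target_words:
            blanks.append(stripped.lower())
            out.append(f"({len(blanks)})_____" + word[len(stripped):])
        else:
            out.append(word)
    return " ".join(out), blanks

passage = "Assessment is a major source of information for the evaluation of a course."
cloze, key = make_selective_cloze(passage, {"assessment", "evaluation"})
print(cloze)  # blanks replace the two target words
print(key)    # ['assessment', 'evaluation']
```

Because the test maker supplies the target list, the same routine can focus the cloze on vocabulary, grammar words, or any other aspect being tested.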
2. Observation of learning. While the course is running, the activities that the learners do are carefully
monitored to see if each particular activity is likely to achieve its learning goal. This involves
technique analysis and classroom observation. It does not assess the learners but is directed towards
the tasks that they do. The purpose of the monitoring is to see if it is necessary to make changes to the
learning activities in order to encourage learning.
There are four questions that should be asked when observing learning activities (Nation, 2001: 60–
74).
1) What is the learning goal of the activity?
2) What are the learning conditions that would lead to the achievement of this goal?
3) What are the observable signs that these learning conditions are occurring?
4) What are the design features of the activity that set up the learning conditions or that need
to be changed to set up the learning conditions?
3. Short-term achievement assessment. It is called “achievement” assessment because it examines
items and skills drawn from the course. At regular intervals during the course, the learners may be
monitored to see what they are learning from the course. The purpose of this assessment is to see if
the learners are making progress on a daily or weekly basis.
4. Diagnostic assessment. In order to plan a programme, it is useful to know where learners’ strengths
and weaknesses lie and where there are gaps in their knowledge. It is used to find what learners know
well so that time is not wasted on teaching that. The findings of diagnostic assessment are used to
determine what goes into a course, so good diagnostic assessment is accurate and easy to interpret in
terms of what should be done as a result. The Vocabulary Levels Test (Schmitt et al., 2001) is an
example of a diagnostic test. This test helps a teacher decide whether learners should be focusing on
high-frequency vocabulary, academic vocabulary or low-frequency vocabulary.
5. Achievement assessment. Usually at the end of a course, and perhaps at one or two other points
during the course, the learners are assessed on what they have learned from the course. The
characteristics of achievement assessment:
1. They are based on material taught in the course.
2. Learners usually know what kinds of questions will be asked and what material will be
covered.
3. They are criterion referenced. That means that there will be a standard or criterion set which
will indicate whether learners have achieved enough to be given a pass for the course.
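The criterion-referenced pass decision can be illustrated with a small sketch. The 80% criterion below is a hypothetical value, not a standard given in the chapter:

```python
# Criterion-referenced decision sketch: each learner is compared against
# a fixed standard, not ranked against the other learners.

PASS_CRITERION = 0.8  # hypothetical criterion: 80% of the maximum score

def passes(score, max_score):
    """Return True if the learner's score meets the set criterion."""
    return score / max_score >= PASS_CRITERION

print(passes(42, 50))  # True  (0.84 >= 0.8)
print(passes(38, 50))  # False (0.76 <  0.8)
```

The point of the sketch is that every learner could pass, or every learner could fail; the decision depends only on the criterion, not on how the rest of the class performed.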
6. Proficiency assessment. This tries to measure a learner’s language knowledge in relation to other learners
who may have studied different courses, or in relation to areas of language knowledge that are based
upon an analysis of the language. The purpose of a proficiency test is to show how much the learners
know of the language or a particular part of the language. Sometimes, a proficiency test, such as the
TOEFL or IELTS test, awaits a learner at the end of a course. This test may be working as a criterion-
referenced test to determine whether a learner goes into an English-medium university or not. An
important value of proficiency tests is that they are one source of evaluation data for a programme.
They represent an independent measure of the relevance and adequacy of a language course.
Good Assessment: Reliability, Validity and Practicality
Most investigative procedures, including the tools for needs analysis, course evaluation
procedures, and tests and other measures for assessment can be examined by considering three criteria
– reliability, validity and practicality.
Reliability
A reliable test gives results that are not greatly upset by conditions that the test is not intended to
measure. Reliability is measured by having the learners sit the test twice, or more commonly, by
splitting the scores on the individual test items into two equal groups and seeing if the learners get the
same score on both groups. A test is more reliable if:
1. it is always given under the same conditions
2. it is consistently marked
3. it has a large number of points of assessment, that is, many questions or, as in a dictation, many
items that are marked
4. its questions and instructions are clear and unambiguous.
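The split-half procedure for estimating reliability can be sketched as follows. The learner scores below are invented for illustration; the Spearman–Brown formula used to adjust the half-test correlation is the standard one, but everything else here is an assumption:

```python
# Split-half reliability sketch (hypothetical scores, not from the chapter).
# Each row is one learner's item scores (1 = correct, 0 = wrong).
# Odd-numbered items form one half, even-numbered items the other;
# the two half-test totals are correlated, and the Spearman-Brown
# formula adjusts that correlation to estimate full-test reliability.

from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(score_matrix):
    half_a = [sum(row[0::2]) for row in score_matrix]  # odd-numbered items
    half_b = [sum(row[1::2]) for row in score_matrix]  # even-numbered items
    r = pearson(half_a, half_b)
    return (2 * r) / (1 + r)  # Spearman-Brown correction

scores = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 0],
]
print(round(split_half_reliability(scores), 2))
```

This also shows why a large number of assessment points matters: with more items in each half, chance agreement or disagreement between the halves evens out and the estimate becomes more stable.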
Validity
The most practical ways for a teacher or curriculum designer to check the validity of a test are
to look at its face validity and content validity.
Face validity simply means that if the test is called a reading test, does it look like a reading
test? If it is called a vocabulary test, does it look like a vocabulary test? There is nothing very
scientific about deciding on face validity, but face validity is important because it reflects how the
learners, and perhaps their parents and other teachers, will react to the test. Content validity is a little
like face validity, except that the decision-making about validity is not made by looking at the test’s
“face”, but by analysing the test and comparing it to what it is supposed to test.
One of the major obstacles in examining content validity is finding a well-supported
description of what language skills like reading and writing involve, or what knowledge of
language items like vocabulary and grammar involves.
Practicality
Practicality is examined by looking at:
1. the cost involved in administering and scoring the test,
2. the time taken to administer and sit the test,
3. the time taken to mark the test,
4. the number of people needed to administer and mark the test,
5. the ease in interpreting and applying the results of the test.
Tests can be made more practical by having reusable test papers, by being carefully formatted
for easy marking, by not being too long, and by using objectively scored items such as true/false or
multiple-choice questions.
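The practicality gain from objectively scored items can be seen in a sketch like this. With a fixed answer key (hypothetical data below), marking needs no judgement from the marker and can be done in a single pass:

```python
# Objective scoring sketch: a multiple-choice answer key lets the test be
# marked mechanically, which cuts marking time and the number of markers
# needed, two of the practicality factors listed above.

ANSWER_KEY = ["b", "d", "a", "a", "c"]  # hypothetical five-item key

def mark(responses):
    """Count how many of a learner's responses match the key."""
    return sum(1 for given, right in zip(responses, ANSWER_KEY) if given == right)

print(mark(["b", "d", "c", "a", "c"]))  # 4
```

Subjectively marked tasks such as essays cannot be reduced to a key like this, which is exactly why they score lower on practicality even when they score well on validity.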