Evaluating web-supported learning versus lecture-based teaching: quantitative and qualitative perspectives
Norah Frederickson et al. (2005)
DOI 10.1007/s10734-004-6370-0
Abstract. A graduate level research methods and statistics course offered on the World-Wide Web was evaluated relative to the traditional lecture version of the course. With their consent, course members were randomly assigned to the two versions of the course for the first block of sessions. For the second block of sessions the groups crossed over to access the alternative version of the course. Quantitative and qualitative outcome data were collected to sample cognitive and affective domains. Improvements in knowledge and reductions in anxiety were apparent following both versions, with no significant differences between versions being detected. Analysis of course member comments indicated less satisfaction with the teaching input on the web-based version but more satisfaction with the peer collaboration that was stimulated. An activity theory framework is applied in conceptualising the findings and generating recommendations for further course development and evaluation.
The course members could work through this material at their own pace during computer lab sessions run in parallel to the lectures, and follow added links to other sites of relevance to the topic. Both the web-supported group and the lecture-based group continued to have access to face-to-face seminar and workshop sessions where questions could be raised with the lecturer responsible for the module. This lecturer was also the principal author of the web sessions. Hence attempts were made to control for a number of variables that are typically confounded in comparisons between web-based and traditional teaching formats: staff involved, presence of other learners, length of study time.
However more detailed analysis of the teaching context in the lecture and web programmes highlights a number of differences. These focus on the first part of each timetabled session, where the group was split and the objective was to enable the students to acquire knowledge and develop their understanding of the curriculum content. Those receiving the lecture version went to a teaching room with the lecturer and received an oral presentation supported by overhead projector transparencies (also copied as handouts). Those receiving the web version went to a computer cluster room with a graduate student demonstrator and logged into the associated section of the course materials.
It should be noted that during the second part of each timetabled session, after a short break, the two groups came back together either in the lecture room or in the computer cluster room and undertook together the face-to-face seminar or workshop sessions referred to above, the objective of which was to provide an opportunity for application and reflection. This part was typically activity and discussion-based. For example, a session held in the lecture room might ask students to apply knowledge of reliability and validity acquired during the first part of the session in evaluating the data presented in some psychometric test manuals and discuss their conclusions. A session held in the computer cluster room might ask students to analyse some data using SPSS and discuss the conclusions they could draw from the analysis.
An important difference between the lecture and web sessions concerns the specificity of the elaboration provided and the questioning that was possible. In both programmes the written information initially provided for the students was very similar, presented in overhead projector transparencies (also copied as handouts) in the lecture version as opposed to web pages. In the lecture version these formed the basis for highly specific oral elaboration in the presentation of the lecture and written elaboration by the students in annotating the handout. In the web version a search for elaboration would be instigated at the student's
Method
Participants
Procedure
With their consent the participants were randomly divided into two groups (n = 8) and each group was then randomly assigned for the first of the two teaching blocks to one of two learning environments (Web or lecture). Prior to the first teaching block, all participants were given a paper-and-pencil multiple-choice test covering the material to be taught in the first block of teaching, as well as a statistics anxiety test. One group then received six one-hour sessions of teaching via a traditional lecture format. The other group received the same material presented through six one-hour web-sessions, which were conducted at the same time as the lectures in a nearby computer cluster room. Following these sessions a paper-and-pencil multiple-choice test on the material covered was administered and the anxiety test repeated. A measure of self-perceived confidence and competence with research methods and statistics and a feedback questionnaire designed to elicit participants' satisfaction with and responses to their learning experiences were also completed.
Prior to the second block of six one-hour sessions, all of the participants were given a paper-and-pencil multiple-choice test with questions relating to the material to be covered in the next block of teaching, and an anxiety test. The group that had received web-based teaching in the first block now received lecture teaching, and the group that had received lecture teaching now received the web-course. Following the three weeks of teaching on the second block, conducted as described above, all participants received a paper-and-pencil multiple-choice test on the material covered in these sessions. They were also given an anxiety test, a measure of self-perceived confidence and competence with research methods and statistics, and asked for feedback on satisfaction with and other aspects of their learning experiences. The design of the study is schematically represented in Figure 1.
[Figure 1: schematic of the crossover design schedule for Group 1 and Group 2.]
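To make the crossover structure concrete, the design can be summarised in a few lines of code. This is a minimal sketch, assuming the schedule described above; the paper does not state which numbered group received the web version first, so the group labels are illustrative.

    # Minimal sketch of the crossover design. Group labels are
    # illustrative: the text does not say which group had the web
    # version first.
    DESIGN = {
        "Group 1": {"Block 1": "Web", "Block 2": "Lecture"},
        "Group 2": {"Block 1": "Lecture", "Block 2": "Web"},
    }

    # Measures administered around each block of six one-hour sessions.
    BEFORE_EACH_BLOCK = ["knowledge test", "anxiety test"]
    AFTER_EACH_BLOCK = ["knowledge test", "anxiety test",
                        "confidence/competence measure",
                        "feedback questionnaire"]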
Measures
Knowledge
Course member knowledge of Research Methods and Statistics was assessed by means of two multiple-choice paper-and-pencil tests. These tests each contained 30 questions that covered the material in the two course blocks. For each item one point was given for a correct answer, giving a maximum score of 30 points. The tests were administered to the group as a whole, and 45 minutes was allowed for completion. The tests were closed book, and the course members were not given any indication that they needed to revise for the tests beforehand.
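As a worked illustration of this scoring rule (one point per correct answer, maximum 30), a test could be scored as in the sketch below; the response format is an assumption for illustration, not taken from the paper.

    def score_knowledge_test(responses, answer_key):
        """One point per correct answer; a 30-item key gives a maximum of 30.

        `responses` and `answer_key` are equal-length sequences of option
        labels such as "a"-"d" (format assumed for illustration).
        """
        if len(responses) != len(answer_key):
            raise ValueError("one response is required per item")
        return sum(given == correct
                   for given, correct in zip(responses, answer_key))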
Anxiety
Course members completed the short form of the Mathematics Anxiety Rating Scale (Plake and Parker 1982). Ashcraft et al. (1998) report that the full 98 item version of the Mathematics Anxiety Rating Scale (MARS) (Richardson and Suinn 1972) has become the standard assessment for the maths anxiety construct. The 24 item shortened version was developed specifically to measure class-related anxiety in statistics courses and hence was particularly appropriate to the purpose of the present study. Plake and Parker (1982) report alpha reliability of 0.98 and a correlation of 0.97 with the full scale MARS.
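The alpha reliability reported above is Cronbach's coefficient alpha. As background, it can be computed from item-level ratings with the standard formula, sketched below; the data layout is an assumption for illustration.

    def cronbach_alpha(item_scores):
        """Cronbach's alpha from item-level data.

        `item_scores[i][j]` is respondent j's rating on item i.
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
        """
        k = len(item_scores)        # number of items
        n = len(item_scores[0])     # number of respondents

        def variance(xs):
            mean = sum(xs) / len(xs)
            return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

        totals = [sum(item[j] for item in item_scores) for j in range(n)]
        item_var_sum = sum(variance(item) for item in item_scores)
        return k / (k - 1) * (1 - item_var_sum / variance(totals))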
The items on the short form of the MARS consist of statements that describe a statistics-related situation which may produce anxiety, e.g. 'Signing up for a course in statistics', 'Being told how to interpret probability statements'. Each situation is rated using a 1–5 scale where '1' indicates low anxiety and '5' indicates high anxiety. Scores on each item were summed and averaged to produce a summary score for the test ranging between 1 and 5.
Self-confidence/competence
A measure of course member perceived confidence/competence with research methods and statistics was developed. It consisted of 6 items where course members were asked to rate their confidence/competence on a five point scale from 'not at all' (scored 1) to 'very much' (scored 5). The first three items were: How confident are you with statistics? How well do you feel you are understanding statistics? How competently could you apply statistics in practice? The next three items asked the same questions about research methods that had been asked about statistics. Scores on each item were summed and averaged to produce a summary score for the measure ranging between 1 and 5.
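Both rating scales, the 24-item MARS short form and this 6-item measure, are scored the same way: the 1–5 item ratings are summed and divided by the number of items, so the summary score stays on the 1–5 metric. A minimal sketch:

    def scale_summary_score(item_ratings):
        """Average 1-5 item ratings into a 1-5 summary score.

        Applies equally to the 24-item MARS short form and the 6-item
        confidence/competence measure.
        """
        if not item_ratings or any(not 1 <= r <= 5 for r in item_ratings):
            raise ValueError("each item must be rated on the 1-5 scale")
        return sum(item_ratings) / len(item_ratings)

    # Example: six confidence/competence ratings give a summary score of 3.0.
    assert scale_summary_score([3, 2, 4, 3, 3, 3]) == 3.0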
Results
Table 1. Means and standard deviations on evaluation measures for each type of course presentation

                                  Lecture                     Web
                            Pre           Post          Pre           Post
                            M      SD     M      SD     M      SD     M      SD
Knowledge quiz              13.14  3.94   17.24  5.04   13.19  4.00   16.63  4.27
Anxiety test                2.56   0.75   2.28   0.50   2.40   0.76   2.10   0.62
Perceived confidence/       –      –      2.07   0.56   –      –      2.36   0.58
  competence
Satisfaction                –      –      1.75   0.74   –      –      2.05   0.60
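The comparisons behind Table 1 are changes over each teaching block and differences between the two presentations. The sketch below shows paired t-tests of the kind such pre/post designs typically use; this is an assumption for illustration, as the paper's exact statistical procedure is not reproduced here. Because of the crossover, every course member has scores under both versions, so the version comparison is also paired.

    # Hedged sketch of pre/post and between-version comparisons; the
    # paper's own analysis may have used a different procedure
    # (e.g. a repeated-measures ANOVA).
    from scipy import stats

    def block_change(pre_scores, post_scores):
        """Paired t-test: did scores change over a teaching block?"""
        return stats.ttest_rel(pre_scores, post_scores)

    def version_effect(scores_web, scores_lecture):
        """Paired t-test: the crossover means each course member has a
        score under both the web and the lecture version."""
        return stats.ttest_rel(scores_web, scores_lecture)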
comments about the way in which the quality of teaching had helped learning: 'Good lectures – examples and explanations.' 'Having information "delivered" rather than fruitless search.' By contrast, the web-supported course attracted criticism because it was a substitute for lecturer-structured teaching, which some students felt would help them use their learning time more efficiently and effectively, e.g. 'The lack of a structured lesson/lecture to package the learning objectives to maximise my learning time.' 'Wasting time looking through long websites for relevant examples/information.' Course members were also concerned that they were unclear about what they should be learning from the web course: 'Not knowing/being told what exactly I was supposed to know.'
Peer collaboration among members of the group working in the computer cluster room was consistently highlighted as a helpful aspect of the web-based version that occurred very little in the lecture-based version. This was a familiar way of working for course members, who had undertaken professional placement casework in pairs in schools under tutor supervision. However some qualifications were raised. There was one suggestion that peer collaboration was a compensatory rather than a preferred strategy: 'Discussing with peers when the website/handouts weren't sufficient.' Also, the one negative comment expressed some concern that peer collaboration may not produce correct answers: 'In pairs we'd decide what we thought something might mean by following logic and working with that definition – which could easily be completely wrong.'
Discussion
and these may mask any differences between the conditions. There are also questions about the validity of quiz-type assessments as opposed to more authentic tasks that essentially sample performance in the activities for which the course members will need to utilise the learning (Friedrich and Armer 1999). The quiz did incorporate a number of scenario questions where the course members were required to apply knowledge in solving problems, and it was sufficiently sensitive to detect significant differences in scores over time for both groups. However the results reported by Jacobson and Spiro (1993) suggest that web-based learning may give a particular advantage on the kind of complex, ill-structured problem-solving tasks that educational psychologists face in their work (Monsen et al. 1998). It is therefore a limitation of this study that no such measures were used. Educational psychology course members are required to apply their knowledge of research methods and statistics in undertaking a research project, but this was done well after the end of both blocks of the course being evaluated here.
The qualitative analyses highlighted a number of issues which also reflected concerns raised by course representatives on behalf of the group about the web-based version, as follows. They wanted a clearer statement of what they were supposed to know and be able to do at the end of each session, so that they could check whether they had achieved this. They pointed out that the statement of objectives at the start of each session was more like a list of session contents and could helpfully be re-written as a set of learning objectives (such as those provided for other curriculum areas in the educational psychology training programme) that would specify what they would know and be able to do if successful learning had taken place. They considered that the explanations on the web pages were too brief and insufficiently illustrated with examples. They wanted specific links to examples, exercises and sections on other web-sites relevant to the specific topic they were studying. They considered that too much time was wasted searching through general recommended web-sites. They wanted a much more interactive learning experience. In particular they were concerned to be able to assess the extent to which they had understood the content of a page. They wanted self-checks so they could see if they were on the right track or whether they needed to seek additional explanation/examples. An exercise on one of the web sites to which they were directed was cited with approval where a correct response received a 'big green tick'! Clearly the importance of feedback and reinforcement should not be underestimated. It was also suggested that each session needed to be set in the context of the answer to a broader set of questions about the place of
[Figure: activity system model (after Engeström 1987), showing mediating tools and the transformation process.]
Acknowledgement
References
Ashcraft, M.H., Kirk, E.P. and Hopko, D. (1998). ‘On the cognitive consequences of
mathematics anxiety’, in Donlan, C. (ed.), The Development of Mathematical Skills.
Hove: Psychology Press, pp. 175–196.
Chen, C. and Rada, R. (1996). ‘Interacting with hypertext: A meta-analysis of experi-
mental studies’, Human–Computer Interaction 11, 125–156.
Daniel, J.S. (1998). Mega-Universities and Knowledge Media: Technology Strategies for Higher Education. London: Kogan Page.
Dillon, A. and Gabbard, R. (1998). 'Hypermedia as an educational technology: A review of the quantitative research literature on learner comprehension, control and style', Review of Educational Research 68, 322–349.
Engeström, Y. (1987). Learning by Expanding: An Activity-Theoretical Approach to
Developmental Research. Helsinki: Orienta-Konsultit.
Engeström, Y. (2001). Expansive Learning at Work. Towards an Activity-Theoretical
Reconceptualisation. Lifelong Learning Group, Occasional Paper No. 1. London:
Institute of Education.
Friedrich, K.R. and Armer, L. (1999). ‘The instructional and technological challenges of
a web based course in educational statistics and measurement’. Presented at the
Society for Educational Technology and Teacher Education 10th International
Conference, San Antonio, TX, February 28–March 4 1999.
Harding, R.D., Lay, S.W., Moule, H. and Quinney, D.A. (1995). 'A mathematical toolkit for interactive hypertext courseware: Part of the mathematics experience within the Renaissance project', Computers & Education 24, 127–135.
Hewson, L. and Hughes, C. (2001). ‘Generic structures for on-line teaching and
learning’, in Lockwood, F. and Gooley, A. (eds.), Innovation in Open and Distance
Learning. London: Kogan Page, pp. 76–87.
Huang, H-M. (2002). ‘Towards constructivism for adult learners in online learning
environments’, British Journal of Educational Technology 33, 27–37.
Schön, D.A. (1987). Educating the Reflective Practitioner. San Francisco, CA: Jossey-Bass.
Vaughn, S., Schumm, J.S. and Sinagub, J. (1996). Focus Group Interviews in Education
and Psychology. Thousand Oaks, CA: Sage.
Vygotsky, L. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Whalley, P. (1993). ‘An alternative rhetoric for hypertext’, in McKnight, C., Dillon, A.
and Richardson, J. (eds.), Hypertext: A Psychological Perspective. London: Ellis
Horwood.
Williams, W.M. and Ceci, S.J. (1997). '"How'm I doing?" Problems with student ratings of instructors and courses', Change 29 (Sept/Oct), 13–23.
Wood, D., Bruner, J.S. and Ross, G. (1976). ‘The role of tutoring in problem-solving’,
Journal of Child Psychology and Psychiatry 17, 149–161.
Zimitat, C. (2001). 'Designing effective on-line continuing medical education', Medical Teacher 23, 117–122.