Higher Education (2005) 50: 645–664. © Springer 2005

DOI 10.1007/s10734-004-6370-0

Evaluating web-supported learning versus lecture-based teaching: Quantitative and qualitative perspectives

NORAH FREDERICKSON, PHIL REED & VIV CLIFFORD

Department of Psychology, University College London, Gower Street, London WC1E 6BT, UK

Abstract. A graduate level research methods and statistics course offered on the World-
Wide Web was evaluated relative to the traditional lecture version of the course. With
their consent, course members were randomly assigned to the two versions of the course
for the first block of sessions. For the second block of sessions the groups crossed over
to access the alternative version of the course. Quantitative and qualitative outcome
data were collected to sample cognitive and affective domains. Improvements in
knowledge and reductions in anxiety were apparent following both versions, with no
significant differences between versions being detected. Analysis of course member
comments indicated less satisfaction with the teaching input on the web-based version
but more satisfaction with the peer collaboration that was stimulated. An activity theory
framework is applied in conceptualising the findings and generating recommendations
for further course development and evaluation.

Keywords: activity theory, evaluation, learning outcomes, statistics course, student satisfaction, web-based.

The impact of Information and Communication Technology (ICT) on teaching and learning is an issue that has risen rapidly up the agenda of
higher education providers world-wide in recent years (Daniel 1998;
King 2001). Lockwood (2001) argued that the sector was poised on the
threshold of a revolution in this area and highlighted the importance of
evaluating educational innovations, and learning from the experience of
others. However subsequent investigations of practice have suggested
evolutionary rather than revolutionary development (Roberts 2003)
while particular criticism has been directed at the lack of attention to
evaluation in this area. Laurillard (2002) suggests that ICT development
programmes in the UK over the past 20 years ‘have always paid lip
service to evaluation, but very little has ever been carried out, as
development costs expand to usurp the entire budget’ (p. 239). Among
the range of approaches subsumed under the ICT umbrella (Katz 2002
lists: radio, television, interactive video, electronic mail, World-Wide
Web), concerns about evaluation have been specifically highlighted in
relation to hypermedia applications. Hypermedia are defined, following
Whalley (1993), as comprising machine supported links which allow
some measure of interaction by users between blocks of text (hypertext)
and graphic components of animation, simulation and video forms.
Dillon and Gabbard (1998) characterise the literature on the use of
hypermedia as ‘strong on claims, but so far short on supporting evi-
dence from studies of learners’ (p. 323).
Among the educational benefits frequently claimed for hypermedia
technologies in general and the World-Wide Web in particular are: non-
linear access to vast amounts of information and international resources
is facilitated; 24 h access to the learning environment is possible;
information can be explored in depth on demand; different levels of
support or scaffolding can be offered in parallel; the pace of interaction
with instructional material is controlled by the learner; and opportu-
nities for facilitating a range of learning strategies such as small group
discussion and collaborative projects are offered (Dillon and Gabbard
1998; Lockyer et al. 1999).
Hypothesised benefits range from those that are theoretically based to
those which appear to be regarded as self-evident. For example, Oliver
and McLoughlin (2001) present a theoretically grounded argument that
Web-based environments can ‘scaffold’ learning in a unique way. The
term ‘scaffolding’ was introduced by Wood et al. (1976) to describe the
effective intervention by a more competent person in the learning of an-
other person. It involves the more competent person in providing support
for some elements of a task that is initially too difficult for the learner to
complete unaided, so enabling the learner to focus on elements within
their competence and to succeed overall. Learning from successive
experiences of task completion allows support to be reduced over time.
Much scaffolding may be carried out by peers who are slightly more
competent in particular tasks. Oliver and McLoughlin (2001) discuss Web
tools that can be used to support learning in this way, for example through
enabling learners to search for peers with relevant skills or information
who can then provide support in the task that the learner is attempting.
There are many ways in which ICT can be used to support teaching
and learning and the hypothesised benefits might be expected to vary
with the approach adopted. At one end of the spectrum is classroom-
based teaching, which is supplemented by lecture notes posted on a Web
site or by electronic communication such as e-mail. At the other end of
the spectrum, materials are made available and interactions occur
exclusively through networked technologies (Salmon 2000). In some
cases the technology may simply provide additional forums to support
collaboration that is already happening within more traditional, face-to-
face contexts (such as discussion, action group meetings, tutorial,
revision groups). In other cases it may offer new or greatly enhanced
opportunities to facilitate collaboration (such as tutor monitoring of
and contribution to simultaneous discussion groups).
The hypothesised benefits might also be expected to vary with the
characteristics of the learners and it has been argued that hypermedia
learning environments are likely to offer particular advantages to adult
learners (Huang 2002), especially those undertaking professional edu-
cation (Friedrich and Armer 1999; Hewson and Hughes 2001; Zimitat
2001). These arguments have been heavily influenced by adult learning
theory (Knowles 1990) and the concept of the reflective practitioner
(Schön 1987). Knowles’ theory is based on four central assumptions, the
first of which embodies the concept of the adult learner as inherently self-
directed. The other assumptions refer to the importance of the learner’s
experiences as a source of material for learning activities, of life tasks as
sources of motivation for learning and of a problem-centred rather than
a subject centred orientation to learning. It is also argued that for the
busy professional practitioner easy access is offered to the support and
challenge of peer ‘critical friends’ to enhance reflection on practice.
At present these benefits are largely hypothetical. A number of re-
views have concluded that very few of the evaluative studies conducted
in this area meet minimally acceptable standards of rigour. While much
on-line educational research does not utilise scientific designs, the small
number of quantitative, experimental studies carried out have produced
weak and inconclusive evidence for any superiority of hypermedia in
achieving learning outcomes (Landauer 1995; Chen and Rada 1996;
Dillon and Gabbard 1998). However in discussing the inconsistencies in
the literature Dillon and Gabbard (1998) suggest that they may be
substantially attributable to differential effectiveness of hypermedia for
different learners and types of learning tasks. For example the ability to
control pace and level of information was found to offer most advan-
tage to learners of high ability/levels of prior knowledge (Recker and
Pirolli 1995). Jacobson and Spiro (1993) examined recall of factual
knowledge and the solving of problems that were conceptually ill
structured by virtue of their multidisciplinary, diverse and dynamic
content. They reported an advantage of hypermedia when learners were
asked to solve complex, ill-structured problems which was not apparent
in the recall of factual knowledge.
Conclusions drawn from studies of learner satisfaction are generally
more enthusiastically supportive of the benefits of hypermedia based
learning (Friedrich and Armer 1999; Harding et al. 1995). However
learner satisfaction and learning outcomes may not be related in a
straightforward way. Maki et al. (2000) reported that course satisfaction
and learning were dissociated in the evaluation of a web-based intro-
ductory psychology course relative to traditional lecture format course
delivery. Better content learning was found to result from the web-based
version while the lecture course received higher satisfaction ratings. The
authors suggest that the more active learning format of the web-based
version (which involved mastery-quizzes and computerised exercises)
may have led to better learning of the textbook material that was
presented in both formats. Satisfaction ratings may have been influ-
enced by the absence of an enthusiastic lecturer in the web-based version
as all aspects of course satisfaction ratings have been found to be
influenced by instructor enthusiasm (Williams and Ceci 1997). Support
for this view comes from a study which compared interactive
synchronous video conferencing with web-based distance learning (Katz
2002). Students who participated in the video conferencing system
which was highly interactive and very similar to conventional lecture-
style teaching had higher scores on satisfaction with learning, level of
control of learning and study motivation than those who participated in
the web-based distance learning. However students who participated in
the web-based system had higher scores on independence in the learning
process.
At this stage in the development of hypermedia use in teaching and
learning, there is a need for well-designed and carefully controlled
evaluation studies of learner outcomes as well as the systematic col-
lection of information about learner satisfaction with aspects of the
experience. Evaluation needs to be planned for as an integral part of the
process of innovation, rather than being seen as secondary to the
development work, introduced as an afterthought or squeezed out by
time pressures in the manner described by Laurillard (2002). This paper
describes an attempt to implement a multi-level evaluation as part of a
small-scale innovation. It presents the evaluation of a web-supported
research methods and statistics module compared with the traditional,
lecture-based module from which it had been developed.
The topics covered were: Measurement Theory, Observational
Techniques, Surveys and Correlations, Experimental Design and
Hypothesis Testing, n = 1 Designs, and Qualitative Methods. The web
materials developed for the current course were chosen so as to be very
simple. They comprised the material covered during the lectures pre-
sented in a similar sequence in the form of menu-driven electronic pages.
The course members could work through this material at their own pace
during computer lab sessions run in parallel to the lectures, and follow
added links to other sites of relevance to the topic. Both the web-sup-
ported group and the lecture-based group continued to have access to
face-to-face seminar and workshop sessions where questions could be
raised with the lecturer responsible for the module. This lecturer was
also the principal author of the web sessions. Hence attempts were made
to control for a number of variables that are typically confounded in
comparisons between web-based and traditional teaching formats: staff
involved, presence of other learners, length of study time.
However more detailed analysis of the teaching context in the lecture
and web programmes highlights a number of differences. These focus on
the first part of each timetabled session where the group was split and
the objective was to enable the students to acquire knowledge and de-
velop their understanding of the curriculum content. Those receiving the
lecture version went to a teaching room with the lecturer and received
an oral presentation supported by overhead projector transparencies
(also copied as handouts). Those receiving the web version went to a
computer cluster room with a graduate student demonstrator and log-
ged into the associated section of the course materials.
It should be noted that during the second part of each timetabled
session, after a short break, the two groups came back together either in
the lecture room or in the computer cluster room and undertook to-
gether the face-to-face seminar or workshop sessions referred to above,
the objective of which was to provide an opportunity for application
and reflection. This part was typically activity and discussion-based. For
example, a session held in the lecture room might ask students to apply
knowledge of reliability and validity acquired during the first part of the
session in evaluating the data presented in some psychometric test
manuals and discuss their conclusions. A session held in the computer
cluster room might ask students to analyse some data using SPSS and
discuss the conclusions they could draw from the analysis.
An important difference between the lecture and web sessions con-
cerns the specificity of the elaboration provided and the questioning that
was possible. In both programmes the written information initially
provided for the students was very similar, presented in overhead pro-
jector transparencies (also copied as handouts) in the lecture version as
opposed to web pages. In the lecture version these formed the basis for
highly specific oral elaboration in the presentation of the lecture and
written elaboration by the students in annotating the handout. In the
web version a search for elaboration would be instigated at the student’s
discretion through accessing links to other web-sites recommended for a
particular section, but not containing highly specific elaboration of
every point.
In the lecture, specific questions could be asked at any point, whereas students working on the web version had either to institute a search for the information they sought (starting from the recommended sites, which might not contain the information required), to ask their fellow students, or to wait until the second part of the session when they could ask the lecturer. Students could talk freely to each other in the computer cluster
room during the web session but in the lecture they were not expected to
speak other than to ask the lecturer a question. The potential for peer
collaboration and support therefore also differed between the two ver-
sions of the course.
Given reports that many students enrolled in statistics courses
experience something akin to ‘maths anxiety’ (Friedrich and Armer
1999), affective outcomes were examined as well as cognitive outcomes.
The course members were undertaking a graduate professional training
programme in educational (school) psychology. They were all psy-
chology graduates (who had passed compulsory undergraduate statis-
tics courses) and qualified school teachers with a minimum of two years’
teaching experience. Because of this background and experience it was
considered that they would be both likely to give informed consent to
participation in a controlled experimental design based on an under-
standing of the scientific principles involved and able to contribute rich
qualitative data as informed participant observers of their experiences
with the different educational media.

Method

Participants

The 16 first-term students on a graduate professional training course in


educational psychology at University College London were invited to
participate in the project. The course members were assured that none
of the research assessments would count in any way towards their
course assessment. All 16 course members agreed to take part. There
were 3 male and 13 female course members, with an age range of 25–46
years (mean 30 years, sd 6.4 years). An IT skills audit conducted to
assess pre-requisite skills indicated that all course members reported
adequate levels of basic general IT skills (e.g. use of mouse and
keyboard) and ability to use e-mail. The majority reported adequate
skills in gathering information from the web (15 out of 16), basic word
processing (14 out of 16) and use of electronic data bases such as Psy-
chInfo and ERIC (12 out of 16). A minority reported adequate skills in
evaluating the quality of web-sites (7 out of 16), using statistical pack-
ages such as SPSS (2 out of 16) and using computer conferencing (2 out
of 16).

Procedure

With their consent the participants were randomly divided into two
groups (n = 8) and each group was then randomly assigned for the first
of the two teaching blocks to one of two learning environments (Web or
lecture). Prior to the first teaching block, all participants were given a
paper-and-pencil multiple-choice test covering the material to be taught
in the first block of teaching, as well as a statistics anxiety test. One
group then received six one-hour sessions of teaching via a traditional lecture format. The other group received the same material presented through six one-hour web sessions, which were conducted at the same
time as the lectures in a nearby computer cluster room. Following these
sessions a paper-and-pencil multiple-choice test on the material cov-
ered was administered and the anxiety test repeated. A measure of self-
perceived confidence and competence with research methods and
statistics and a feedback questionnaire designed to elicit participants’
satisfaction with and responses to their learning experiences were also
completed.
Prior to the second block of six one-hour sessions, all of the partic-
ipants were given a paper-and-pencil multiple-choice test with questions
relating to the material to be covered in the next block of teaching, and
an anxiety test. The group that had received web-based teaching in the
first block now received lecture teaching, and the group that had re-
ceived lecture teaching now received the web-course. Following the
three weeks of teaching on the second block, conducted as described
above, all participants received a paper-and-pencil multiple-choice test
on the material covered in these sessions. They were also given an
anxiety test, a measure of self-perceived confidence and competence
with research methods and statistics, and asked for feedback on satis-
faction with and other aspects of their learning experiences. The design
of the study is schematically represented in Figure 1.
Pre-Test 1 (both groups): Multiple Choice Knowledge Test; Anxiety Test
Teaching Block 1 (6 hours over 3 weeks): Group 1: Web teaching; Group 2: Lectures
   Topics: Measurement Theory; Observation; Surveys & Correlations
Post-Test 1 (both groups): Multiple Choice Knowledge Test; Anxiety Test; Self-confidence Rating Scale; Course Feedback Scale
Pre-Test 2 (both groups): Multiple Choice Knowledge Test; Anxiety Test
Teaching Block 2 (6 hours over 3 weeks): Group 1: Lectures; Group 2: Web teaching
   Topics: Hypothesis Testing; n = 1 Designs; Qualitative Methods
Post-Test 2 (both groups): Multiple Choice Knowledge Test; Anxiety Test; Self-confidence Rating Scale; Course Feedback Scale

Figure 1. Schematic representation of the design of the study.
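For readers who wish to mirror the counterbalanced crossover arrangement shown in Figure 1, a minimal sketch in Python is given below. It illustrates the random split into two equal groups and the block-by-block crossover; the participant labels, random seed and function name are illustrative and are not taken from the study.

import random

def assign_crossover(participants, seed=None):
    # Split participants into two equal groups at random and randomly decide
    # which group receives the web format in Block 1; the groups cross over
    # to the other format in Block 2 (an AB/BA design).
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    groups = {'Group 1': shuffled[:half], 'Group 2': shuffled[half:]}
    web_first = rng.choice(['Group 1', 'Group 2'])
    lecture_first = 'Group 2' if web_first == 'Group 1' else 'Group 1'
    schedule = {
        'Block 1': {web_first: 'Web', lecture_first: 'Lecture'},
        'Block 2': {web_first: 'Lecture', lecture_first: 'Web'},
    }
    return groups, schedule

# Illustrative use with 16 anonymised course members (P01-P16).
members = ['P%02d' % i for i in range(1, 17)]
groups, schedule = assign_crossover(members, seed=1)
print({name: len(people) for name, people in groups.items()}, schedule)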

Measures

Knowledge
Course member knowledge of Research Methods and Statistics was
assessed by means of two multiple-choice paper-and-pencil tests. These
tests each contained 30 questions that covered the material in the two
course blocks. For each item one point was given for a correct answer,
giving a maximum score of 30 points. The tests were administered to the
group as a whole, and 45 minutes was allowed for completion. The tests
were closed book, and the course members were not given any indica-
tion that they needed to revise for the tests beforehand.

Anxiety
Course members completed the short form of the Mathematics Anxiety
Rating Scale (Plake and Parker 1982). Ashcraft et al. (1998) report that
the full 98 item version of the Mathematics Anxiety Rating Scale
(MARS) (Richardson and Suinn 1972) has become the standard
assessment for the maths anxiety construct. The 24 item shortened
version was developed specifically to measure class-related anxiety in
statistics courses and hence was particularly appropriate to the purpose
of the present study. Plake and Parker (1982) report alpha reliability of
0.98 and a correlation of 0.97 with the full scale MARS.
The items on the short form of the MARS consist of statements that
describe a statistics-related situation which may produce anxiety, e.g.
‘Signing up for a course in statistics’, ‘Being told how to interpret
probability statements’. Each situation is rated using a 1–5 scale where
‘1’ indicates low anxiety and ‘5’ indicates high anxiety. Scores on each
item were summed and averaged to produce a summary score for the
test ranging between 1 and 5.
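As a concrete illustration of the scoring rule just described (the same summed-and-averaged rule is used for the self-confidence and satisfaction measures below), a short sketch follows; the item ratings shown are invented for illustration only.

def summary_score(item_ratings):
    # Average a set of 1-5 item ratings into a single summary score between
    # 1 and 5, as described for the short-form MARS and the later scales.
    if not item_ratings:
        raise ValueError('no item ratings supplied')
    if any(rating < 1 or rating > 5 for rating in item_ratings):
        raise ValueError('ratings must lie between 1 and 5')
    return sum(item_ratings) / len(item_ratings)

# Invented example: 24 short-form MARS item ratings for one course member.
example_items = [2, 3, 1, 2, 4, 2, 3, 2, 1, 2, 3, 2,
                 2, 3, 2, 1, 2, 3, 4, 2, 2, 3, 2, 2]
print(round(summary_score(example_items), 2))  # prints 2.29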

Self-confidence/competence
A measure of course member perceived confidence/competence with
research methods and statistics was developed. It consisted of 6 items
where course members were asked to rate their confidence/competence
on a five point scale from ‘not at all’ (scored 1) to ‘very much’ (scored 5).
The first three items were: How confident are you with statistics? How
well do you feel you are understanding statistics? How competently
could you apply statistics in practice? The next three items asked the same questions about research methods that had been asked about statistics. Scores on each item were summed and averaged to produce a
summary score for the measure ranging between 1 and 5.

Feedback on Learning Experiences


Course member perceptions of aspects of their learning experience were
obtained in two ways. First, five items were presented in a rating scale
questionnaire format to assess aspects of satisfaction with the course.
The five-point scale described above was adopted, which ranges from
‘not at all’ (scored 1) to ‘very much’ (scored 5). Course members were
asked to consider their experience of statistics/research methods during
the previous three weeks and rate their enjoyment, interest, motivation,
sense of achievement and effectiveness of learning. Scores on each item
were summed and averaged to produce a summary score for satisfaction
ranging between 1 and 5. Secondly, open-ended feedback was invited in
response to questions about what had contributed to and what had
hindered the effectiveness of their learning of statistics/research methods
during the previous three weeks.

Results

Table 1 shows the means and standard deviations on evaluation measures for each type of course presentation. The first row displays the
mean scores on the knowledge test before and after the blocks of
Table 1. Means and standard deviations on evaluation measures for each type of course presentation

Measure                            Web-based                   Lecture-based
                                   Before        After         Before        After
                                   Mean (sd)     Mean (sd)     Mean (sd)     Mean (sd)
Knowledge quiz                     13.14 (3.94)  17.24 (5.04)  13.19 (4.00)  16.63 (4.27)
Anxiety test                       2.56 (0.75)   2.28 (0.50)   2.40 (0.76)   2.10 (0.62)
Perceived confidence/competence    –             2.07 (0.56)   –             2.36 (0.58)
Satisfaction                       –             1.75 (0.74)   –             2.05 (0.60)

learning with either web-based or lecture-based presentation. Inspection of these data shows that there was an increase in the test scores after the
blocks of teaching as compared to the initial test scores. This represented a mean increase of approximately 30% over baseline across both conditions.
There was little difference in the scores either before or after teaching
across the two types of teaching.
These observations were corroborated by means of a three-factor
mixed-model analysis of variance (ANOVA), with counterbalancing as
a between-subject factor, and teaching condition (web versus lecture)
and time (before versus after), as within-subject factors. A rejection
criterion of p < 0.05 was adopted for this and all subsequent analyses.
The analysis revealed a statistically significant main effect of time,
F(1,12) = 33.77. No other main effects, nor any interaction proved to be
statistically significant (all Fs < 1).
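The paper does not state which software was used for these analyses. As one hedged illustration of how a counterbalancing × condition × time design of this kind can be analysed, the sketch below fits a mixed linear model with a random intercept per participant using the Python statsmodels package; this approximates, rather than exactly reproduces, the repeated-measures ANOVA reported above, and the column names and simulated scores are invented for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build an illustrative long-format dataset: 16 participants, each with a
# knowledge score before and after both the web and lecture conditions.
# The scores are simulated; per-participant data are not published.
rng = np.random.default_rng(0)
rows = []
for subject in range(16):
    order = 'web_first' if subject < 8 else 'lecture_first'  # counterbalancing
    for condition in ('web', 'lecture'):
        for time, gain in (('before', 0.0), ('after', 4.0)):
            rows.append(dict(subject=subject, order=order, condition=condition,
                             time=time, score=13 + gain + rng.normal(0, 4)))
data = pd.DataFrame(rows)

# Mixed linear model with a random intercept for each participant; the fixed
# effects mirror the counterbalancing x condition x time factorial structure.
model = smf.mixedlm('score ~ order * condition * time', data,
                    groups=data['subject'])
print(model.fit().summary())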
The second row of Table 1 displays the mean anxiety scores before
and after the blocks of teaching that were either web- or lecture-based.
Inspection of these data shows that there was a decrease in the anxiety
scores after the blocks of teaching as compared to before the teaching. A
mean 15% decrease was found for both conditions. There was little
difference in these scores either before or after blocks across the two
types of teaching. These observations were corroborated by means of a
three-factor mixed-model ANOVA (counterbalancing × condition × time). The analysis revealed a statistically significant main effect of time, F(1,13) = 8.53. No other main effects, nor any interaction proved to be
statistically significant (all ps > 0.15).
The third row of Table 1 displays the mean perceived confidence/
competence scores after the sessions where teaching was either web- or
lecture-based. A two-factor mixed-model ANOVA (counterbalancing × condition) revealed that neither main effect nor the interaction between the factors was statistically significant, all ps > 0.70.
The bottom row of Table 1 displays the mean satisfaction scores
after each type of teaching. Inspection of these data shows that there
was little difference between the types of teaching in the aspects of
satisfaction assessed. A two-factor mixed-model ANOVA (counterbalancing × condition) revealed that neither main effect nor the interaction
between the factors was statistically significant, all ps > 0.10.
Written qualitative feedback on the two methods of teaching was
also analysed. Course members had been invited to write about what
had contributed to and what had hindered their learning of statistics/
research methods in the previous three weeks. The written comments
were analysed using a qualitative approach based on procedures de-
scribed by Vaughn et al. (1996) as follows:
1. Identification of key themes or ‘big ideas’ within the data, following
reading and re-reading of the comments.
2. Identification and highlighting of units of information (phrases and/
or sentences) relevant to the research purposes. In this study each
unit consisted of a statement about something that had contributed
to or hindered learning of research methods and statistics.
3. Selection of category headings to sort and group these units of
information.
4. Coding of units of information according to the category headings, so that most of the units can be placed within a category.
5. Negotiation between the researchers to agree the category headings
that most economically accommodate the relevant units of infor-
mation.
6. Review, revision and definition of the categories generated in the first phase of data analysis, followed by a final categorisation of each unit.
In this study the final categorisation was carried out by the first
author and, as the number of statements was small, the complete
categorisation was checked by the second author. At this point
queries were raised about the categorisation of only two statements,
one of which was re-categorised following discussion.
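To make the tallying of coded units concrete, a minimal sketch of how they can be counted into the category by helped/hindered by course-version layout reported in Table 2 below is given here; the coded statements shown are invented examples, not the study's data.

from collections import Counter

# Each coded unit of information is recorded as (category, version, effect).
# The tuples below are invented examples standing in for actual statements.
coded_units = [
    ('Peer collaboration', 'Web', 'helped'),
    ('Resource material provided', 'Web', 'helped'),
    ('Teaching input', 'Web', 'hindered'),
    ('Teaching input', 'Lecture', 'helped'),
    ('Examples/practical applications', 'Lecture', 'hindered'),
]

# Tally the units into counts per (category, version, effect), mirroring the
# rows and columns of Table 2 (What Helped? / What Hindered? by Web / Lecture).
counts = Counter(coded_units)
for (category, version, effect), number in sorted(counts.items()):
    print('%-35s %-8s %-9s %d' % (category, version, effect, number))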

The statements about what helped or hindered learning were categorised under the same six headings. The number of statements in
each category is shown in Table 2, together with the definitions agreed
for each category and exemplar statements. Inspection of the number of
statements that fell into each of the categories highlighted three notable
Table 2. Categorisation of open-ended feedback

Category                            What Helped?        What Hindered?
                                    Web     Lecture     Web     Lecture
Resource material provided          8       2           7       0
Resource material sought out        2       3           0       2
Examples/practical applications     2       4           3       4
Questions answered                  5       5           2       0
Teaching input                      0       5           14      9
Peer collaboration                  9       2           1       0

Category definitions and exemplars.

Resource material provided – comments about the learning resources provided by staff. E.g. ‘Handouts provided.’
Resource material sought out – comments about learning resources located by course members on their own initiative. E.g. ‘Borrowing books from library and reading up before lectures.’
Examples/practical applications – responses such as the following were included: ‘Going through concrete examples of where a specific statistical method has been applied.’
Questions answered – included responses such as ‘Asking questions in class and getting immediate feedback. This clarified points of confusion to some extent.’
Teaching input – comments about how material was organised and/or delivered. E.g. ‘Initial revision of terms and concepts I haven’t been familiar with in a long time.’ ‘Good lectures – examples and explanations.’
Peer collaboration – comments about working with other course members. E.g. ‘Peer learning and support, discussion of difficult areas and interactive web-based examples from feedback.’

features. The majority of the comments on the resource material provided referred to the web-supported version of the course. Comments
were quite evenly split between identifying features that helped and
those that hindered. An example of a comment on helpful web resource
materials was ‘Going into websites that gave me real life, relevant
examples that help me apply statistics and see the value of it.’ An example
of a comment about web resource material as a hindrance was ‘Frus-
trating on the Web as you can spend a long time looking only to find the
site is unhelpful, not tailored to my needs.’
The teaching input received a variable response in the lecture-based
version. Some aspects were found to represent a hindrance, e.g. ‘the pace
of session, i.e. number of research designs presented in one session. I need
time to process them as most of material covered is new to me.’ ‘Lecture-
style format, lack of discussion.’ However there were also positive
comments about the way in which the quality of teaching had helped
learning, e.g. ‘Good lectures – examples and explanations.’ ‘Having informa-
tion ‘delivered’ rather than fruitless search.’ By contrast the web-
supported course attracted criticism because it substituted for the lecturer-structured teaching which some students felt would help them
use their learning time more efficiently and effectively. E.g. ‘The lack of a
structured lesson/lecture to package the learning objectives to maximise
my learning time.’ ‘Wasting time looking through long websites for rele-
vant examples/information.’ Course members also were concerned that
they were unclear about what they should be learning from the web
course: ‘Not knowing/being told what exactly I was supposed to know.’
Peer collaboration among members of the group working in the
computer cluster room was consistently highlighted as a helpful aspect
of the web-based version that occurred very little in the lecture-based
version. This was a familiar way of working for course members who
had undertaken professional placement casework in pairs in schools
under tutor supervision. However some qualifications were raised.
There was one suggestion that peer collaboration was a compensatory,
rather than a preferred strategy: ‘Discussing with peers when the website/
handouts weren’t sufficient.’ Also the one negative comment expressed
some concerns that peer collaboration may not produce correct an-
swers: ‘In pairs we’d decide what we thought something might mean by
following logic and working with that definition – which could easily be
completely wrong.’

Discussion

The current results demonstrate that similar gains in knowledge occurred whether the educational psychology research methods and sta-
tistics course was presented electronically or by means of traditional
lectures. Anxiety about research methods and using statistical tech-
niques declined similarly after teaching input whether the input was
web-based or lecture-based. In terms of course member perceptions of
their self-confidence and competence in using statistical methods there
was no difference between the two forms of teaching. Analysis of
quantitative feedback on aspects of the learning experience such as
enjoyment, interest and a sense of achievement did not show any sig-
nificant differences between the web- and lecture-based versions.
However this should be contrasted with the results of the qualitative
feedback, which indicated that course members were more critical of the
teaching input on the web-based version and identified more helpful
aspects of the traditional lecture input. The one distinctive feature of the
web-based version which received clear endorsement from course
members was peer-collaboration. In this respect the hypothesised ben-
efits of web-based learning were supported.
In considering the quantitative results of this evaluation, it must be
acknowledged that the small sample size and the short duration of the
study represent important limitations. Since the whole cohort of stu-
dents on the post-graduate professional training programme in educa-
tional psychology was included, there was no scope for increasing the
sample size. As all the time devoted to statistics teaching was utilised, it
was not possible to extend the intervention. However both these fea-
tures may have operated to limit the likelihood of detecting differences
between groups. In a small sample the statistical power of the analyses is
likely to be limited, as is the confidence with which any generalisations
can be drawn. The larger the sample, the less likely it is that the presence of a few individuals who are unrepresentative of the population as a whole in some important respects will have a substantial impact on the
results. It would therefore be desirable to treat the present study as a
pilot investigation and to identify ways in which data might be collected
from a larger sample in future studies.
It must also be acknowledged that this initial version of the web
course was essentially restricted to the hypermedia feature. This study
did not attempt to evaluate the full range of approaches, reviewed
earlier, that are available with electronic delivery, such as scaffolding,
conferencing and self-testing. It is of some interest that gains in
knowledge resulting from this rather rudimentary form of web-based
course were as good as traditional face-to-face methods. However it
must be acknowledged that there was no independent assessment of the
quality of design for either version of the programme, yet programme
quality might be expected to affect the achievement of learning out-
comes. Furthermore it is possible that poor design is particularly
problematic where the absence of face-to-face contact with students
prevents monitoring of responses and fine-tuning of explanations.
A further possible limitation of the study relates to some of the
measures used. While all have satisfactory face validity, only the anxiety
test had available information on reliability and construct validity.
Problems may be identified with the knowledge quiz, the principal
measure of learning outcome in this study, and the way in which it was
used. The problem with its repeated use is that increases in scores over
time could be attributable to practice effects, familiarity, alerting etc.
and these may mask any differences between the conditions. There are
also questions about the validity of quiz-type assessments as opposed to
more authentic tasks that essentially sample performance in the activi-
ties for which the course members will need to utilise the learning
(Friedrich and Armer 1999). The quiz did incorporate a number of
scenario questions where the course members were required to apply
knowledge in solving problems and it was sufficiently sensitive to detect
significant differences on scores over time for both groups. However the
results reported by Jacobson and Spiro (1993) suggest that web-based
learning may give a particular advantage on the kind of complex, ill-
structured problem solving tasks that educational psychologists face in
their work (Monsen et al. 1998). It is therefore a limitation of this study
that no such measures were used. Educational psychology course
members are required to apply their knowledge of research methods and
statistics in undertaking a research project. This was done well after the
end of both blocks of the course being evaluated here.
The qualitative analyses highlighted a number of issues which also
reflected concerns raised by course representatives on behalf of the
group about the web-based version, as follows. They wanted a clearer
statement of what they were supposed to know and be able to do at the
end of each session, so that they could check whether they had achieved
this. They pointed out that the statement of objectives at the start of
each session was more like a list of session contents and could helpfully
be re-written as a set of learning objectives (such as those provided for
other curriculum areas in the educational psychology training pro-
gramme) that would specify what they would know and be able to do if
successful learning had taken place. They considered that the explana-
tions on the web pages were too brief and insufficiently illustrated with
examples. They wanted specific links to examples, exercises and sections
on other web-sites relevant to the specific topic they were studying. They
considered that too much time was wasted searching through general
recommended web-sites. They wanted a much more interactive learning
experience. In particular they were concerned to be able to assess the
extent to which they had understood the content of a page. They wanted
self-checks so they could see if they were on the right track or whether
they needed to seek additional explanation/examples. An exercise on
one of the web sites to which they were directed was cited with approval
where a correct response received a ‘big green tick’! Clearly the
importance of feedback and reinforcement should not be underesti-
mated. It was also suggested that each session needed to be set in the
context of the answer to a broader set of questions about the place of
the content in professional training in educational psychology. This was
expressed as ‘we need to be told why we are learning the statistics’.
One interesting reflection on these concerns is that many of them
might equally have been leveled at the lecture-based version of the
course. For example the way in which objectives were stated at that time
was the same for the lecture sessions as for the web sessions. Likewise
the students following the lectures had no way of assessing how much
they were understanding of the content being presented. Yet these issues
were raised only in relation to the web-based course. Following Issroff
and Scanlon (2002) a possible account of these findings may be provided
by applying an activity theory analysis. Activity theory derives from the
work of Vygotsky (1978) and Leont’ev (1981). It is ‘a philosophical and
cross-disciplinary framework for studying different forms of human
practices as development processes, with both individual and social
levels interlinked at the same time.’ (Kuutti 1996, p. 25). The basic unit
of analysis is an activity, which is defined as a form of doing by a subject directed at an object using tools in order to transform it into an outcome. Engeström (1987, 2001) represents activity systems as shown in
Figure 2 in terms of the relationships between an individual (subject),
an object in their environment and the community. The relationships
between these components are mediated in different ways. The relation-
ship between a subject and an object is mediated by cultural artifacts or
tools. These cultural artifacts can be material objects or symbol systems
or procedures, anything that is used in the transformation process. The
relationship between a subject and a community is mediated by rules
such as norms and conventions while the relationship between an object
and a community is mediated by the division of labour. The division of

Figure 2. Structure of a human activity system (Subject, Object and Community, with the mediating elements Tools, Rules and Division of Labour; the transformation process turns the Object into Outcomes).


labour describes formal and informal ways in which the community is
organised in relation to the transformation process. Activity systems are
typically in flux as contradictions result from the operation of external
influences. As Issroff and Scanlon (2002) suggest, the introduction of a
new tool is likely to create a contradiction where a community does not
have rules of practice to make effective use of it.
In the present study, course member reaction to the web-supported
approach suggested that the introduction of this new tool for learning
had substantially altered their role as a learner and the division of
responsibility between themselves and the tutor for their learning of the
statistics course material. The lecture-based and web-based versions of
the course had the same session objectives, stated in the same teacher-
focused way to describe the content that would be covered in the
session. Only in the web-based version did course members express
dissatisfaction with the objectives and suggest that they should be stated
in a learner-focused way to describe what they should know and be able
to do following participation in the session. Again, only in the web-
based version did course members request self-check activities that
would allow them to assess the extent to which they had successfully
understood the session content. It might be hypothesised that these
differences are consistent with the view that the introduction of the more
learner-controlled web-based learning tool created contradictions with
existing definitions of roles and division of responsibility. Participants in
the web-supported sessions seemed motivated to take responsibility for
directing and assessing their own learning while participants in the
lecture sessions appeared, without question and despite the lecturer’s
best endeavors, to vest these roles and responsibilities in the lecturer.
Laurillard (2002) identifies the control features for ICT interface
design that are important to students’ ability to maintain control of
their own learning. The present rudimentary web-based course does
offer a few of these such as a structured map of the content and an
indication of the amount of material in each section to allow planning
for self-pacing. However a number of others that are absent are the very
features for which our course members identified a need: ‘Concealed
multiple choice questions with keyword analysis to allow student to
express their conception and obtain extrinsic feedback on it… Access to
statement of objectives for program and for section of content so they
know what counts as achieving the topic goal… Clear task goals, so that
they know when they have achieved them… Intrinsic feedback that is
meaningful, accompanied by access to extrinsic feedback (such as a help
option) that interprets it’. (p. 193). Revision of the web-supported
version of the educational psychology research methods and statistics
course to incorporate more of these recommended and requested fea-
tures is currently being undertaken. Further evaluation of learner out-
comes and satisfaction will then be sought in the next phase of an action
research cycle (Kemmis 1993). Such an approach offers opportunities to
learn from even small-scale innovations about ways in which benefits
for both learning and teaching can be realized.

Acknowledgement

We would like to acknowledge the stimulating critique offered by Professor Lewis Elton, which assisted our learning from this piece of work and our planning of future developments.

References

Ashcraft, M.H., Kirk, E.P. and Hopko, D. (1998). ‘On the cognitive consequences of
mathematics anxiety’, in Donlan, C. (ed.), The Development of Mathematical Skills.
Hove: Psychology Press, pp. 175–196.
Chen, C. and Rada, R. (1996). ‘Interacting with hypertext: A meta-analysis of experi-
mental studies’, Human–Computer Interaction 11, 125–156.
Daniel, J.S. (1998). Mega Universities and the Knowledge Media: Technology Strategies
for Higher Education. London: Kogan Page.
Dillon, A. and Gabbard, R. (1998). ‘Hypermedia as an educational technology: A review of the quantitative research literature on learner comprehension, control and style’, Review of Educational Research 68, 322–349.
Engeström, Y. (1987). Learning by Expanding: An Activity-Theoretical Approach to
Developmental Research. Helsinki: Orienta-Konsultit.
Engeström, Y. (2001). Expansive Learning at Work. Towards an Activity-Theoretical
Reconceptualisation. Lifelong Learning Group, Occasional Paper No. 1. London:
Institute of Education.
Friedrich, K.R. and Armer, L. (1999). ‘The instructional and technological challenges of
a web based course in educational statistics and measurement’. Presented at the
Society for Educational Technology and Teacher Education 10th International
Conference, San Antonio, TX, February 28–March 4 1999.
Harding, R.D., Lay, S.W., Moule, H. and Quinney, D.A. (1995). ‘A mathematical tool-
kit for interactive hypertext courseware: Part of the mathematics experience within
the Renaissance project’, Computers & Education 24, 127–135.
Hewson, L. and Hughes, C. (2001). ‘Generic structures for on-line teaching and
learning’, in Lockwood, F. and Gooley, A. (eds.), Innovation in Open and Distance
Learning. London: Kogan Page, pp. 76–87.
Huang, H-M. (2002). ‘Towards constructivism for adult learners in online learning
environments’, British Journal of Educational Technology 33, 27–37.
Issroff, K. and Scanlon, E. (2002). ‘Using technology in higher education: An activity
theory perspective’, Journal of Computer Assisted Learning 18, 77–83.
Jacobson, M.J. and Spiro, R.J. (1993). Hypertext Learning Environments, Cognitive
Flexibility and Transfer of Complex Knowledge: An Empirical Investigation. Tech Report, Champaign, IL: Center for the Study of Reading, University of Illinois.
Katz, Y.J. (2002). ‘Attitudes affecting college students’ preferences for distance learn-
ing’, Journal of Computer Assisted Learning 18, 2–9.
Kemmis, S. (1993). ‘Action research’, in Hammersley, M. (ed.), Educational Research:
Current Issues. London: Chapman.
King, B. (2001). ‘Making a virtue of necessity – a low-cost, comprehensive online
teaching and learning environment’, in Lockwood, F. and Gooley, A. (eds.), Inno-
vation in Open and Distance Learning. London: Kogan Page, pp. 51–62.
Knowles, M. (1990). The Adult Learner: A Neglected Species, 4th edition. Houston:
Gulf Publishing.
Kuutti, K. (1996). ‘Activity theory as a potential framework for human–computer
interaction research’, in Nardi, B. (ed.), Context and Consciousness: Activity Theory
and Human–Computer Interaction. Cambridge, MA: MIT Press, pp. 17–44.
Landauer, T.K. (1995). The Trouble with Computers. Cambridge, MA: MIT Press.
Laurillard, D. (2002). Rethinking University Teaching: A Conversational Framework for
the Effective use of Educational Technology, 2nd edition. London: Routledge.
Leont’ev, A.N. (1981). Problems of Development of the Mind. Moscow: Progress.
Lockwood, F. (2001). ‘Innovation in distributed learning: Creating the environment’, in
Lockwood, F. and Gooley, A. (eds.), Innovation in Open and Distance Learning.
London: Kogan Page, pp. 1–14.
Lockyer, L., Patterson, J. and Harper, B. (1999). ‘Measuring effectiveness of health
education in a web-based learning environment: A preliminary report’, Higher
Education Research and Development 18, 233–246.
Maki, R.H., Maki, W.S., Patterson, M. and Whittaker, P.D. (2000). ‘Evaluation of a
web-based introductory psychology course: I. Learning and satisfaction on-line
versus lecture courses’, Behavior Research Methods, Instruments and Computers 32,
230–239.
Monsen, J., Graham, B., Frederickson, N. and Cameron, R.J. (1998). ‘Problem analysis
and professional training in educational psychology: An accountable model of
practice’, Educational Psychology in Practice 13, 234–249.
Oliver, R. and McLoughlin, C. (2001). ‘Using networking tools to support online
learning’, in Lockwood, F. and Gooley, A. (eds.), Innovation in Open and Distance
Learning. London: Kogan Page, pp. 148–159.
Plake, B.S. and Parker, C.S. (1982). ‘The development and validation of a revised
version of the Mathematics Anxiety Rating Scale’, Educational and Psychological
Measurement 42, 551–557.
Recker, M. and Pirolli, P. (1995). ‘Modelling individual differences in students’ learning
strategies’, Journal of the Learning Sciences 4, 1–38.
Richardson, F.C. and Suinn, R.M. (1972). ‘The Mathematics Anxiety Rating Scale:
Psychometric data’, Journal of Counseling Psychology 19, 551–554.
Roberts, G. (2003). ‘Teaching using the Web: Conceptions and approaches from a
phenomenographic perspective’. Instructional Science 31, 127–150.
Salmon, G. (2000). E-Moderating: The Key to Teaching and Learning Online. London:
Kogan Page.
Schön, D.A. (1987). Educating the Reflective Practitioner. San Francisco, CA: Jossey-
Bass.
Vaughn, S., Schumm, J.S. and Sinagub, J. (1996). Focus Group Interviews in Education
and Psychology. Thousand Oaks, CA: Sage.
Vygotsky, L. (1978). Mind in Society: The Development of Higher Psychological
Processes. Cambridge: Cambridge University Press.
Whalley, P. (1993). ‘An alternative rhetoric for hypertext’, in McKnight, C., Dillon, A.
and Richardson, J. (eds.), Hypertext: A Psychological Perspective. London: Ellis
Horwood.
Williams, W.M. and Ceci, S.J. (1997). ‘“How’m I doing?” Problems with student ratings of instructors and courses’, Change 29 (Sept/Oct), 13–23.
Wood, D., Bruner, J.S. and Ross, G. (1976). ‘The role of tutoring in problem-solving’,
Journal of Child Psychology and Psychiatry 17, 149–161.
Zimitat, C. (2001). ‘Designing effective on-line continuing medical education’, Medical
Teacher 23, 117–122.

Address for correspondence: Norah Frederickson, Department of Psychology, University College London, Gower Street, London WC1E 6BT, UK
Phone: +44-20-7679-7555; Fax: +44-20-7679-5354; E-mail: [email protected]
