
Original article

Formative student-authored question bank: perceptions, question quality and association with summative performance

Jason L Walsh,1 Benjamin H L Harris,1 Paul Denny,2 Phil Smith1

1 Centre for Medical Education, Cardiff University, Cardiff, UK
2 Department of Computer Science, University of Auckland, Auckland, New Zealand

Correspondence to Professor Phil Smith, Centre for Medical Education, Cardiff University, Heath Park, Cardiff, South Glamorgan CF14 4XW, UK; smithpe@cardiff.ac.uk

Received 15 April 2017
Revised 31 July 2017
Accepted 31 July 2017
Published Online First 2 September 2017

To cite: Walsh JL, Harris BHL, Denny P, et al. Postgrad Med J 2018;94:97–103.

Abstract
Purpose of the study There are few studies on the value of authoring questions as a study method, the quality of the questions produced by students and student perceptions of student-authored question banks. Here we evaluate PeerWise, a widely used and free online resource that allows students to author, answer and discuss multiple-choice questions.
Study design We introduced two undergraduate medical student cohorts to PeerWise (n=603). We looked at their patterns of PeerWise usage; identified associations between student engagement and summative exam performance; and used focus groups to assess student perceptions of the value of PeerWise for learning. We undertook item analysis to assess question difficulty and quality.
Results Over two academic years, the two cohorts wrote 4671 questions, answered questions 606 658 times and posted 7735 comments. Question writing frequency correlated most strongly with summative performance (Spearman's rank: 0.24, p<0.001). Student focus groups found that: (1) students valued curriculum specificity; and (2) students were concerned about student-authored question quality. Only two questions of the 300 'most-answered' questions analysed had an unacceptable discriminatory value (point-biserial correlation <0.2).
Conclusions Item analysis suggested acceptable question quality despite student concerns. Quantitative and qualitative methods indicated that PeerWise is a valuable study tool.

Introduction
Multiple-choice questions (MCQs) are widely used to assess medical student knowledge, resulting in a demand for formative questions from students. However, faculty members rarely have time or incentives to develop formative questions and instead focus primarily on developing material for high-stakes assessments. Student demand for formative MCQs is reflected by the growing use of commercial question databases among medical students.1
A potential solution is to involve students in creating formative questions. A few small-scale approaches have involved medical students in question writing to produce banks of formative questions, with the assumption that the question writing itself is a valuable learning activity.2–4 PeerWise is a freely and globally available online platform that allows students to write, share, answer, rate and discuss MCQs pertinent to their course. It is a non-commercial product created and maintained by the University of Auckland, New Zealand.
We introduced PeerWise to Cardiff University School of Medicine in October 2013 to first-year medical students (2013–2014, year 1; n=297). Examination of its usage data over the first 6 months suggested it was a popular resource.5 These students continued to use PeerWise during their second year (2014–2015, year 2; n=273). Subsequently, in October 2014, we introduced PeerWise to the new cohort of first-year students (2014–2015, year 1; n=306). A separate PeerWise course was created for each academic year, and each course was only accessible to students within that year group.
There has been no formal evaluation of the use of PeerWise within medicine. Here, we describe the introduction of PeerWise to medical students; present descriptive statistics on its usage; examine if there are associations between question writing, answering and commenting frequency with summative exam performance; and gauge student perceptions of the value of PeerWise, using focus groups and subsequent thematic analysis. We assessed the quality of questions using item analysis.

Methods
We obtained ethical approval for the project from Cardiff School of Medicine Research Ethics Committee.

Introduction of PeerWise
We delivered a 1-hour session to introduce PeerWise to the entire cohort of first-year Cardiff medical students in 2013 (2013–2014, year 1, n=297). All students were asked to attend with an internet-connected device (eg, smart phone, tablet or laptop). The session began with a brief 10 min description of PeerWise. We then asked all students to access the PeerWise website and helped them to register on to a PeerWise course that we had previously created. Next, we asked students to write one question each. After allowing approximately 20 min, students were asked to answer, rate and if appropriate comment on the questions written by their peers (20 min). Facilitators circulated offering technical support and question-writing advice. Students were subsequently free to use PeerWise at their discretion. We repeated this introductory session in the following year to the new cohort of first-year students (2014–2015, year 1, n=306).

Walsh JL, et al. Postgrad Med J 2018;94:97–103. doi:10.1136/postgradmedj-2017-135018
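The study's headline association (reported in the abstract) is a Spearman rank correlation between per-student activity counts and mean summative percentage. As a minimal illustrative sketch of how such a coefficient is computed, using invented data rather than the study's: Spearman's rho is simply the Pearson correlation of the ranks. In practice a library routine such as scipy.stats.spearmanr would be used, since it also handles tied values; the simple version below assumes no ties.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    This simple version assumes no tied values; ties would need
    average ranks, as scipy.stats.spearmanr provides.
    """
    rx = np.argsort(np.argsort(x))  # rank of each value (0 = smallest)
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical data: questions written by 8 students and their exam %.
questions_written = np.array([0, 1, 3, 5, 8, 12, 20, 55])
exam_score = np.array([48.0, 55.0, 52.0, 60.0, 58.0, 66.0, 63.0, 71.0])

rho = spearman_rho(questions_written, exam_score)
print(round(rho, 2))  # → 0.93
```

Because activity counts contain many ties (for example, many students who wrote exactly one question in the introductory session), an implementation with average ranks is preferable for real data.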
Faculty input
In the inaugural year, two staff members (clinical academics) administrated the course. Principally, they responded to emails related to technical difficulties. In the following year, as the popularity of the resource increased, two medical school academics volunteered to give feedback on the questions for their specific specialty (immunology and biochemistry). This involved reading and commenting on student written questions, specifically commenting on question accuracy, relevance and whether the difficulty was appropriate for the course.

Descriptive statistics of usage
PeerWise automatically collects data on user activity. For both cohorts, we examined:
►► number of student-written questions,
►► number of answers to questions,
►► number of student comments on questions,
►► number of students writing questions,
►► temporal relationship of writing and answering questions in relation to summative examination results.
We studied usage data from two cohorts, following one cohort over two academic years (2013–2014, year 1; 2014–2015, year 2) and one cohort over one academic year (2014–2015, year 1).

Associations of PeerWise activity with summative examination performance
The main aspects of PeerWise activity are question writing, answering and commenting. We recorded the frequency of these three activities for each student over each academic year and correlated the frequency of each activity with summative exam performance. At the end of each academic year, students sat two summative examinations. The mean raw score over these two assessments was converted to a percentage and correlated with question writing, answering and commenting frequency by Spearman's rank correlation coefficient. We excluded from the respective correlation calculation those students who did not engage with question writing, answering and/or commenting following the introductory sessions.
Additionally, for all academic years, we divided students into categories determined by their level of usage (table 1). Categories were devised after consulting usage data and discussion with students and faculty. We compared the summative performance of students in these groups using one-way analysis of variance (ANOVA) and subsequent independent t-tests. We also compared the summative performance of PeerWise users versus non-users across all 3 years (t-tests).

Table 1  Writing, answering and commenting frequency categories (over one academic year)

Writing                            Answering              Commenting
Prolific (≥50)                     Prolific (≥1000)       Prolific (≥50)
Frequent (11–49)                   Frequent (301–999)     Frequent (11–49)
Occasional (1–10)                  Occasional (101–300)   Occasional (1–10)
PeerWise user but non-writer (0)   Rare (1–100)           PeerWise user but non-commenter (0)
Non-users (0)                      Non-users (0)          Non-users (0)

Item analysis
We examined the 100 most answered questions in each cohort, looking at the discriminatory ability of each question, measured using the Pearson point-biserial correlation (r-pbis), and the difficulty of each question, calculating a 'P value'. The 100 most answered questions were sampled for analysis as most students had attempted these questions in every cohort. The analyses were carried out using Iteman software (V.4.3, 2013; Assessment Systems, Woodbury, Minnesota, USA). Where students answered an item more than once, we used only the first attempt in the analysis.

Discrimination measure
The Pearson r-pbis was used as a measure of discriminatory ability for each of the 100 most answered questions in each of the three academic years (2013–2014, year 1; 2014–2015, year 2; and 2014–2015, year 1). It is the correlation between item scores and total scores on all questions in the set. The r-pbis can range between −1.0 and 1.0; the higher the r-pbis, the better the item is discriminating between students; it is typically desired that the r-pbis be as high as possible. Locally (at Cardiff University School of Medicine), an r-pbis of >0.2 is the threshold above which questions are considered appropriately discriminatory to be used/reused in summative medical school examinations.

Question difficulty
To measure the difficulty of each item, we calculated a p value ranging from 0 to 1, representing the proportion of examinees answering correctly. A p value of 1 indicates that all candidates answered the question correctly, and a value of 0 indicates that no candidates answered correctly. Very high or very low values might indicate that an item was too easy or too hard.

Student perceptions
Preliminary usage data indicated that PeerWise is popular.5 However, these data do not explain the reasons for its popularity or if students perceived it as valuable for learning. We conducted four focus groups to gather student perceptions on the value of PeerWise.

Focus groups and thematic analysis
In order to recruit participants, we sent a circular email to each of the two cohorts. The email invited volunteers to attend focus groups, including those who did not use the resource often. We asked volunteers to reply with their availability and to indicate if they used PeerWise rarely, sometimes, often or very often.
Four semistructured focus groups were held with 3–10 participants in each group. Before focus groups commenced, the purpose of the study was explained, and students were informed about measures to maximise confidentiality and their right to withdraw.
Thematic analysis was used to analyse focus group data, as described by Braun and Clarke.6

Results
Descriptive statistics of usage
The high usage of PeerWise was notable. The two cohorts produced a bank of 4671 questions, answered questions 606 658 times and posted 7735 comments discussing questions (table 2). Spikes in question writing and answering activity invariably coincided with exam periods (figure 1).
The maximum number of questions written by a single student over a single academic year was 297. In the year groups, 55% (2013–2014, year 1), 40% (2014–2015, year 2) and 57% (2014–2015, year 1) of students that used PeerWise wrote at least one question outside of the introductory sessions. Approximately 20% of students authored 90% of questions across all
year groups. The absolute number of students writing questions in the 2013 cohort dropped from first to second year by 33% (table 2). However, activity on PeerWise increased overall for this cohort. Question writing, answering and commenting activity increased by 32%, 13% and 44%, respectively (table 2).
In all cohorts, there was a clear increase in both question writing and answering activity coinciding with the period of 1–2 weeks immediately before summative examinations. There were also smaller spikes in activity before formative examinations. Figure 1 illustrates this effect.

Table 2  Number of questions written, answers submitted, comments made and students that contributed

                              Questions generated   Questions answered   Comments made   Students contributing questions
2013 Cohort year 1 (n=297)    1551                  185 703              2381            162
2013 Cohort year 2 (n=273)    1751                  245 818              3432            108
2014 Cohort year 1 (n=306)    1369                  175 137              1922            175
Total                         4671                  606 658              7735            468

Associations of PeerWise activity with summative examination performance
Mean raw scores over two summative assessments (taken at the end of each academic year) were converted to percentages and correlated with question writing, answering and commenting frequency by Spearman's rank correlation coefficient. There were significant correlations between writing, answering and commenting frequency with summative examination performance (p<0.001, R=0.24, 0.13 and 0.15, respectively). Comparison of summative performance between PeerWise users and non-users showed that users performed significantly better (p<0.001; figure 2A).
The summative performance of students in the different writing, answering and commenting frequency groups (table 1) was compared. One-way ANOVA showed that there were significant differences in mean summative examination performance between the writing, answering and commenting frequency groups (p<0.0001). Independent t-tests were subsequently performed.
For question writing, mean summative score increased as question writing frequency increased. There was a significant difference between the mean summative scores of all frequency groups (p<0.05), except between frequent and prolific writer groups. Figure 2B illustrates this trend.
For question answering, the mean summative score of non-users was significantly lower than all other groups (p<0.05). Prolific answerers scored significantly higher than all other groups (p<0.0001). There were no significant differences between the mean summative scores of the rare, occasional and frequent question answering frequency groups (figure 2C).
For question commenting, mean summative score increased as commenting frequency increased. The differences between mean summative scores were significant between all groups (p<0.05), except between occasional and frequent commenters (figure 2D).
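The group comparison above (one-way ANOVA across usage categories, followed by pairwise t-tests) can be sketched as follows. This is an illustrative reimplementation of the F statistic with invented scores, not the study's data or analysis code:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of 1-D score arrays.

    F = (between-group mean square) / (within-group mean square);
    a large F indicates the group means differ by more than would be
    expected from within-group variation alone.
    """
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    k = len(groups)           # number of groups
    n = all_scores.size       # total number of students
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical summative scores (%) for three usage categories.
non_users  = np.array([52.0, 55.0, 49.0, 58.0])
occasional = np.array([57.0, 61.0, 60.0, 55.0])
prolific   = np.array([66.0, 70.0, 64.0, 68.0])

f_stat = one_way_anova_f([non_users, occasional, prolific])
print(round(f_stat, 2))  # → 19.24
```

In practice scipy.stats.f_oneway and scipy.stats.ttest_ind give the F and t statistics together with their p values.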

Item analysis
Discrimination marker
The mean r-pbis for the top 100 most answered questions for each academic year were: 0.485 (2013–2014, year 1), 0.446 (2014–2015, year 2) and 0.480 (2014–2015, year 1). The year 2 questions were significantly less discriminatory than those questions generated by year 1 students (p<0.05).

Question difficulty
The mean difficulty (p value) in the three groups was 0.370 (2013–2014, year 1), 0.438 (2014–2015, year 2) and 0.362 (2014–2015, year 1). The year 2 questions were significantly easier compared with both year 1 academic years (p=0.001).
Two questions out of the 300 questions analysed had an r-pbis of <0.20. All questions analysed in the year 2 2014–2015 cohort had an r-pbis of >0.20.
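The two indices reported here are standard classical test theory statistics. A minimal sketch with invented 0/1 responses is below; the study itself used Iteman software, so this is illustrative only. Note that, following the paper's description, the point-biserial is taken against the total score on all questions in the set, which includes the item itself:

```python
import numpy as np

def item_statistics(responses):
    """Classical item statistics for a 0/1 response matrix.

    responses: shape (n_students, n_items); 1 = correct, 0 = incorrect.
    Returns (difficulty, r_pbis), one value per item:
      * difficulty -- the 'P value': proportion answering correctly
      * r_pbis -- point-biserial correlation between each item score and
        the total score on all items in the set (the item itself included,
        as in the Methods' description).
    """
    difficulty = responses.mean(axis=0)
    totals = responses.sum(axis=1)
    n_items = responses.shape[1]
    r_pbis = np.empty(n_items)
    for i in range(n_items):
        r_pbis[i] = np.corrcoef(responses[:, i], totals)[0, 1]
    return difficulty, r_pbis

# Illustrative (made-up) data: 6 students answering 3 items.
resp = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
])
difficulty, r_pbis = item_statistics(resp)
print(np.round(difficulty, 2))   # proportion correct per item
print(r_pbis > 0.2)              # the local threshold; all pass here
```

A corrected item-total correlation (excluding the item from the total) avoids the slight inflation the inclusive version introduces, especially for short question sets.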

Student perceptions
Focus groups and thematic analysis
Four semistructured focus groups were held gauging student perceptions of PeerWise, containing a total of 23 participants. Focus group duration ranged from 44 to 62 min. Table 3 shows the composition of the focus groups. Two, five, eight and eight of the students reported using PeerWise rarely, sometimes, often and very often, respectively.
Thematic analysis of focus group transcripts generated 25 initial codes, which were refined into 16 key themes (figure 3).

Figure 1  PeerWise activity for the 2013 cohort, year 1 (n=297). Examination periods are indicated by arrows (formative examination=green; summative examination period, containing two examinations=red). (A) shows student writing frequency and (B) shows student answering frequency. Each blue bar represents 1 day.
Figure 2  Box plots illustrating student summative examination performance (y-axis) by: engagement (users vs non-users) (A); question writing frequency category (B); answering frequency category (C); and commenting frequency category (D).

Discussion
We took a mixed methods approach to evaluate the use of PeerWise at a UK medical school. We looked for associations between PeerWise engagement and summative examination performance and undertook item analysis to investigate student-authored question quality. In addition, we used focus groups to gauge student perceptions of the resource.
The usage data showed that question writing and answering on PeerWise increased prior to formative and summative exams: this was reflected in the focus groups. Students often reported using PeerWise more frequently when closer to exams:

I tend to do a lot of questions during the exam period.—Year 1 student

This could suggest students find PeerWise most useful in the period when they are seeking to reinforce their knowledge.

Table 3  Focus group demographics

          Focus group 1 (n=3)   Focus group 2 (n=4)   Focus group 3 (n=6)   Focus group 4 (n=10)
Male      1                     2                     1                     3
Female    2                     2                     5                     7
Year 1    1                     1                     3                     6
Year 2    2                     3                     3                     4

The use of answering questions for learning is supported by a large body of evidence, suggesting that repeated retrieval practice (testing) is effective for enhancing learning.7–10 This finding might also suggest that there is value in increasing assessment frequency at medical schools to drive learning.11 Surprisingly, question writing frequency also increased around examinations. This suggests a proportion of students found writing questions a worthwhile revision technique, despite the time commitment:

Writing questions is a great way to learn things.—Year 1 student

There were weak but significant correlations between writing, answering and commenting frequency with summative performance. Question writing frequency showed the strongest correlation. This trend was also reflected in the stepwise increase in mean summative score between subsequent question writing groups (figure 2B). In line with the focus group data, this may suggest that question writing is a valuable study method. This supports the emerging literature advocating question writing for learning.12–14 However, in this study, it is difficult to pick apart the impact of question writing on summative exam score from other confounders; for instance, question writing frequency could be a marker of student work ethic. Similar to writing frequency, there was a stepwise increase in mean summative exam score with increasing commenting frequency (figure 2D). This may suggest that online discussion of questions supports
learning, but again, there may be other potential confounders, such as commenting being a marker of conscientiousness and knowledge. Students frequently reported that they often found discussions on PeerWise informative:

I think you learn more from the discussions than the question sometimes.—Year 2 student

Answering frequency demonstrated the weakest (but significant) correlation with summative performance. Interestingly, there was no stepwise improvement of summative exam score between the rare, occasional and frequent answerer groups. However, prolific question answerers did do better than all other groups. This may suggest there is a threshold effect of answering questions on examination performance. Answering was by far the most common activity on PeerWise, with students reporting it to be particularly useful for reinforcing knowledge and identifying knowledge gaps.
The joint most prevalent theme arising during focus groups was the curriculum specificity of PeerWise content. Students frequently indicated that the questions on PeerWise were relevant to their course and that this was a very positive feature:

Questions are written by people that are in the same [teaching] sessions and they know what is relevant.—Year 1 student

One way this specificity appeared to be manifested is that questions on PeerWise tended to resemble or predict questions in summative assessments:

[in the recent summative exam] there were a number of questions I thought, I have literally answered this on PeerWise.—Year 2 student

This curriculum specificity of PeerWise was also frequently cited as being an advantage over commercially available online question banks aimed at medical students.

Figure 3  Thematic map of key themes raised during focus groups on student perceptions of PeerWise. Bracketed numbers indicate number of extracts identified in focus group transcripts relevant to the theme.

We identified two major themes relating to question quality. These were (1) faculty question review is highly valued (joint most prevalent theme) and (2) students had concerns over question quality, for example:

on PeerWise the questions are written by students, so you can't always trust the answer.—Year 2 student

Students felt strongly that faculty input helped to ensure that questions were relevant (curriculum specific) and factually accurate. However, in practice, the proportion of questions reviewed by faculty was relatively small (<5%). Despite concerns around question quality, item analysis of the top 100 questions from each year indicated that most of the student-authored questions had adequate discriminatory ability and appropriate difficulty for inclusion into local summative examinations. This may indicate that the most answered questions are of high quality. One could posit that, although highly valued, faculty review (of the most popular questions at least) may not be necessary. However, a high r-pbis and appropriate difficulty do not necessarily mean the questions are well structured. Further subjective item analysis may be appropriate to assess student-authored question quality. Perhaps incorporating a formal review process of questions and/or question writing training may improve subjective and objective question quality.3 15 16 In addition, the top 100 questions from each year may not represent all 4671 questions available. However, this finding does raise the possibility that incorporating student-authored questions into summative exams may be appropriate.
The fourth most prevalent theme identified was that students felt using PeerWise was a fun/enjoyable experience. This was attributed to the interactivity of PeerWise:

It's nicer than just going through a textbook as it's more interactive.


Original article
Year 2 student, and the use of humour: In conclusion, PeerWise is well used and well received by
medical students. Some interesting observations arose including:
Personally, it sounds really sad, but I really enjoy doing PeerWise.
There is bare [[a lot of]] banter on there, it’s good fun. (1) engaging with question writing and a higher frequency of
question writing is associated with better summative perfor-
Questions about Billy the Bacterium, Nigel Farage and medical mance; (2) answering questions was by far the most popular
student lifestyle were particularly well received. Certainly, enjoy- activity on PeerWise, with students invariably reporting that
ment has been linked to engagement with study and learning.17–20 they found it useful for learning. However, the association with
Another aspect of PeerWise students often referred to as ‘fun’ answering frequency and summative performance is less clear
were virtual badges. Virtual badges are an example of gamifi- cut. (3) Commenting frequency was weakly associated with
cation, which involves integrating elements of game design better summative performance, students often finding discussion
in non-game contexts.21 There is a growing body of evidence of questions motivational and informative. However, trolling on
to show virtual rewards enhance engagement in educational PeerWise was identified as a negative aspect of the commenting
activities.21–24 function. (4) Item analysis indicated acceptable question quality
We love the badges…She was going on answering questions after
(of the most popular questions) despite student concern; and (5)
the exams had finished, when we didn’t need to go on it anymore, students valued the curriculum specificity of the generated ques-
tions, faculty review, virtual rewards and overall found PeerWise

Downloaded from https://academic.oup.com/pmj/article/94/1108/97/6984023 by guest on 09 May 2025


just to get the badge for answering questions 30 days in a row!—
Year 2 student to be an enjoyable study tool. This evaluation justifies the use of
student-authored question banks at medical schools.
PeerWise uses 26 distinct badges, awarded for achievements
related to writing, answering, commenting on and rating ques- Main messages
tions. A randomised controlled trial showed that badges signifi-
cantly increased student engagement with PeerWise.24 Another ►► Quantitative and qualitative methods indicated that the
motivating feature of PeerWise was the ability for students to student-authored question bank is a valuable study tool.
compare their performance with one another. Performance ►► Item analysis suggested acceptable question quality of
comparison has been shown to increase medical student engage- student-authored questions.
ment with an e-learning module in a randomised controlled
trial.25
An interesting and unanticipated negative theme was the Current research questions
phenomenon referred to by students as ‘PeerWise trolling’.
Trolling has been defined as ‘disruptive online deviant behaviour ►► Answering questions is known to improve recall via the
directed towards individuals or groups’.26 On PeerWise, trolling phenomenon of test-enhanced learning.
manifests as posts that unnecessarily attack questions or previous ►► There are few studies on the value of authoring questions
comments, or aggressive critiques of questions lacking social as a study method, the quality of the questions produced
etiquette. Students generally viewed this as demotivational: by students and student perceptions of student-authored
question banks.
I spoke to somebody yesterday and he said he wrote a question and
got loads of abuse so doesn’t write any anymore.—Year 2 student
Acknowledgements We thank Professor David Harris, Dr Lee Coombes and Dr
Students believe that PeerWise trolling is precipitated by Saadia Tayyaba for their support and advice on the project; and Dr James Matthews
anonymity, and this is supported by the literature.27–29 Three for his input as an expert reviewer of student questions and for contributing an
common interventions to reduce trolling include: defining clear undergraduate prize to incentivise student PeerWise engagement in Cardiff.
rules for online communities, moderators enforcing standards Contributors All named authors have contributed the following: JLW: contributed
and having persistent identifiers for individuals available to substantially to the conception and design of the study, acquisition, analysis and
moderators while maintaining anonymity to other users.26 These interpretation of the data; wrote the first draft and revised and critically reviewed
all subsequent drafts; gives final approval for the submitted version to be published;
interventions could be used on PeerWise. Perhaps faculty could police comments and remove students who persistently offend. Persistent anonymous identifiers already exist on PeerWise but could be made more visible. Additionally, it may be appropriate to make clear in the introductory sessions that critique and freedom of expression on PeerWise are strongly encouraged but that students must maintain social etiquette. Recently, trolling has been associated with a higher likelihood of possessing negative traits such as sadism and psychopathy.30 Therefore, perhaps identifying trolls on PeerWise could be used as a novel mechanism to identify individuals likely to exhibit unprofessional behaviours.

Conversely, comments were often viewed as having positive value, due to being perceived as motivational and/or informative: 'It is really uplifting an amazing feeling when someone gives you a positive comment like "amazing question" and it makes you want to write more' (year one student). Examining the ratio of positive or informative comments to negative comments would be interesting. We suggest students and facilitators be encouraged to write positive comments where appropriate, to reinforce question-writing behaviour and to minimise the impact of trolling.

…and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. BHLH: contributed to the conception and design of the study, acquisition, analysis and interpretation of the data; cowrote the second draft and revised and critically reviewed all subsequent drafts; gives final approval for the submitted version to be published; and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. PD: contributed to the conception and design of the study, analysis and interpretation of the data; cowrote the third draft and revised and critically reviewed all subsequent drafts; gives final approval for the submitted version to be published; and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. PS: contributed to the conception and design of the study, analysis and interpretation of the data; critically reviewed all drafts; gives final approval for the submitted version to be published; and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Competing interests None declared.

Ethics approval Cardiff School of Medicine Research Ethics Committee.

Provenance and peer review Not commissioned; externally peer reviewed.

Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which

102 Walsh JL, et al. Postgrad Med J 2018;94:97–103. doi:10.1136/postgradmedj-2017-135018

permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

References
1 Rughani G. Online question banks. Student BMJ 2013;21:f3500.
2 Bobby Z, Radhika MR, Nandeesha H, et al. Formulation of multiple choice questions as a revision exercise at the end of a teaching module in biochemistry. Biochem Mol Biol Educ 2012;40:169–73.
3 Harris BH, Walsh JL, Tayyaba S, et al. A novel student-led approach to multiple-choice question generation and online database creation, with targeted clinician input. Teach Learn Med 2015;27:182–8.
4 Jobs A, Twesten C, Göbel A, et al. Question-writing as a learning tool for students: outcomes from curricular exams. BMC Med Educ 2013;13:89.
5 Walsh JL, Denny P, Smith PE. Encouraging maximal learning with minimal effort using PeerWise. Med Educ 2015;49:521–2.
6 Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3:77–101.
7 Larsen DP, Butler AC, Aung WY, et al. The effects of test-enhanced learning on long-term retention in AAN annual meeting courses. Neurology 2015;84:748–54.
8 Cook DA, Thompson WG, Thomas KG. Test-enhanced web-based learning: optimizing the number of questions (a randomized crossover trial). Acad Med 2014;89:169–75.
9 Baghdady M, Carnahan H, Lam EW, et al. Test-enhanced learning and its effect on comprehension and diagnostic accuracy. Med Educ 2014;48:181–8.
10 Pan SC, Wong CM, Potter ZE, et al. Does test-enhanced learning transfer for triple associates? Mem Cognit 2016;44:24–36.
11 Devine OP, Harborne AC, McManus IC. Assessment at UK medical schools varies substantially in volume, type and intensity and correlates with postgraduate attainment. BMC Med Educ 2015;15:146.
12 Foos PW. Effects of student-written questions on student test performance. Teach Psychol 1989;16:77–8.
13 Walsh J, Harris B, Tayyaba S, et al. Student-written single-best answer questions predict performance in finals. Clin Teach 2016;13:352–6.
14 Gooi AC, Sommerfeld CS. Medical school 2.0: how we developed a student-generated question bank using small group learning. Med Teach 2015;37:892–6.
15 Malau-Aduli BS, Zimitat C. Peer review improves the quality of MCQ examinations. Assess Eval High Educ 2012;37:919–31.
16 Abdulghani HM, Ahmad F, Irshad M, et al. Faculty development programs improve the quality of multiple choice questions items' writing. Sci Rep 2015;5:9556.
17 Csikszentmihályi M. Flow: The Psychology of Optimal Experience. New York: Harper & Row, 1990.
18 Goetz T, Hall NC, Frenzel AC, et al. A hierarchical conceptualization of enjoyment in students. Learn Instr 2006;16:323–38.
19 Schüler J. Arousal of flow experience in a learning setting and its effects on exam performance and affect. Z Padagog Psychol 2007;21:217–27.
20 Blunsdon B, Reed K, McNeil N, et al. Experiential learning in social science theory: an investigation of the relationship between student enjoyment and learning. HERD 2003;22:43–56.
21 Deterding S, Dixon D, Khaled R, et al. From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. Tampere, Finland: ACM, 2011:9–15.
22 Anderson A, Huttenlocher D, Kleinberg J, et al. Steering user behavior with badges. Proceedings of the 22nd International Conference on World Wide Web. Rio de Janeiro, Brazil: ACM, 2013:95–106.
23 Crumlish C, Malone E. Designing Social Interfaces: Principles, Patterns, and Practices for Improving the User Experience. Sebastopol, USA: O'Reilly Media, 2009.
24 Denny P. The effect of virtual achievements on student engagement. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Paris, France: ACM, 2013:763–72.
25 Worm BS, Buch SV. Does competition work as a motivating factor in e-learning? A randomized controlled trial. PLoS One 2014;9:e85434.
26 Fichman P, Sanfilippo MR. Online Trolling and Its Perpetrators: Under the Cyberbridge. 1st edn. London: Rowman and Littlefield, 2016.
27 Fichman P, Sanfilippo MR. The bad boys and girls of cyberspace: how gender and context impact perception of and reaction to trolling. Soc Sci Comput Rev 2015;33:163–80.
28 Hardaker C. 'I refuse to respond to this obvious troll': an overview of responses to (perceived) trolling. Corpora 2015;10:201–29.
29 Suler J, Phillips P. The bad boys of cyberspace: deviant behavior in a multimedia chat community. Cyberpsychol Behav 1998;1:275–94.
30 Buckels EE, Trapnell PD, Paulhus DL. Trolls just want to have fun. Pers Individ Dif 2014;67:97–102.