
EXPLORING THE NEGATIVE ACADEMIC IMPACTS OF DIGITAL TECHNOLOGY: A QUALITATIVE STUDY ON UNIVERSITY STUDENTS’ PERCEPTIONS AND EXPERIENCES

THE NAME OF THE COURSE

TUTOR

THE NAME OF THE SCHOOL

THE DATE


Abstract

The integration of digital technology in higher education has transformed the academic
landscape, offering both opportunities and challenges for university students. While digital
tools such as learning management systems, online libraries, productivity apps, and artificial
intelligence (AI) platforms have enhanced access to academic content and supported flexible
learning, emerging evidence suggests that these tools also present significant negative
impacts. This research proposal outlines a qualitative study aimed at exploring how
undergraduate students perceive and experience the negative effects of digital technology in
their academic lives. Guided by Neil Selwyn’s (2016) framework of “digital downsides,” the
study focuses on four main themes: digital distractions, diminished depth of study, cognitive
overload, and academic integrity risks.

The research will employ a qualitative design using semi-structured interviews with a
purposive sample of undergraduate students from diverse academic backgrounds. The
interviews will be audio-recorded, transcribed, and analyzed through thematic analysis to
identify common patterns and themes in students’ narratives. Ethical considerations will
include informed consent, data anonymization, voluntary participation, and approval from the
university’s ethics committee.

Expected findings include a detailed understanding of how students navigate the challenges
posed by digital technology in academic settings. The study anticipates identifying key digital
stressors and disruptions that affect students’ learning processes, emotional well-being, and
academic engagement. By extending Selwyn’s (2016) typology in a contemporary university
context, particularly post-pandemic, the research aims to contribute to more balanced and
student-centered digital education strategies.

Ultimately, this study seeks to inform higher education stakeholders, including educators, curriculum designers, and institutional policymakers, on how to mitigate the adverse effects of
educational technologies while promoting digital tools that support meaningful learning
experiences. The outcomes of this study will offer practical recommendations to reduce
technostress and improve digital literacy and academic resilience among university students.

Keywords: Digital technology in higher education, Digital downsides, University students’ perceptions, Technostress, Cognitive overload, Academic distractions, Academic integrity and AI, Digital learning challenges, Student well-being, Qualitative research, Thematic analysis.


INTRODUCTION

1.1 Background of the study

The integration of digital technology into higher education has revolutionised the teaching
and learning environment, redefining how students interact with academic content, faculty,
and one another. Technological advancements such as virtual learning environments (VLEs), artificial intelligence (AI)-powered tools, online databases, and communication platforms like
Microsoft Teams and Zoom have enhanced access, flexibility, and engagement in academic
settings. Digital tools are now considered essential to the modern university experience,
especially in the wake of the COVID-19 pandemic, which necessitated a rapid shift to online
and hybrid learning models. While digitalization has undeniably enabled new forms of
academic participation, collaboration, and content delivery, it has also introduced several
unintended consequences that warrant critical exploration.

Despite the benefits of digital learning, a growing body of research highlights its adverse
effects on students' academic well-being and performance. University students increasingly
report digital fatigue, reduced attention spans, cognitive overload, and academic stress
associated with excessive screen time, multiple platform usage, and the pressure to be
constantly connected. This paradox, in which digital technology is both a facilitator and an inhibitor of academic success, raises important questions about the quality of student engagement in
increasingly digitised learning environments. The pervasive use of educational technology,
while often seen as a solution to institutional and pedagogical challenges, may, in practice,
hinder deep learning, concentration, and academic motivation.

Neil Selwyn’s (2016) concept of “digital downsides” serves as a valuable theoretical lens to
examine these issues. His framework identifies four primary categories of negative student
experiences with digital technology: distractions and interruptions, diminished depth of study,
cognitive overload, and academic integrity risks. These categories underscore how
technology can undermine rather than support students' educational goals. For instance,
constant notifications from messaging apps, social media platforms, and digital multitasking
can divert attention from academic tasks, contributing to fragmented learning. Similarly,
over-reliance on digital shortcuts and AI-generated content can compromise students’ critical
thinking, reflection, and integrity.

In addition, the pandemic exacerbated many of these digital challenges. Research by Guerra,
Manríquez, and Sierra (2022) indicates that technostress intensified during COVID-19 as
students were forced to adapt to full-time remote learning with minimal preparation. The shift
exposed gaps in digital literacy, infrastructure inequalities, and mental health strains
stemming from constant virtual engagement. Upadhyaya and Vrinda (2021) further argue that
the rise in technostress correlates with reduced academic productivity, highlighting a need for more comprehensive support systems that address the emotional and cognitive burdens
students experience in digital learning environments.

Moreover, the emergence of AI technologies in education has added new complexities. Tools
such as ChatGPT and paraphrasing software raise concerns over academic integrity, with
students facing ethical dilemmas in distinguishing between legitimate academic assistance
and misconduct. Chan (2023) explores how students perceive “AI-giarism” and the blurred
lines between human and machine-generated content. These developments demand a re-
evaluation of institutional policies and pedagogical strategies to ensure academic honesty
while acknowledging the evolving digital landscape.

Despite increasing awareness of these issues, there is still a lack of in-depth qualitative
understanding of how students perceive and internalise these negative digital experiences.
Most existing research relies on quantitative metrics that fail to capture the nuanced,
everyday realities of digital academic life. A qualitative approach, through direct engagement
with students' voices, offers a richer, contextually grounded insight into the emotional,
behavioural, and cognitive dimensions of their digital struggles.

This study, therefore, seeks to explore university students’ lived experiences of the negative
impacts of digital technology on their academic lives. Grounded in Selwyn’s (2016) typology
and supported by current research, it will investigate how students navigate challenges such
as distraction, cognitive fatigue, shallow engagement, and ethical uncertainty. By examining
these issues through semi-structured interviews and thematic analysis, the research aims to
generate a more student-centred understanding of digital learning challenges.

The findings will contribute to a growing discourse on digital wellbeing in higher education,
supporting educators and institutions in designing more balanced digital learning ecosystems.
Ultimately, the research intends to shift the focus from simply integrating more technology
into classrooms to critically assessing its academic and psychological implications for
students.

1.2 Research Aim and Objectives

Research Aim


The primary aim of this research is to explore how university students perceive and
experience the negative effects of digital technology on their academic lives. While digital
tools are widely promoted as facilitators of academic success, this study seeks to examine the
lesser-discussed challenges that students face in increasingly digitised learning environments.
The research will be guided by Selwyn’s (2016) “digital downsides” typology and will aim to
extend his framework within a contemporary, post-pandemic context. Through in-depth
qualitative inquiry, the research seeks to uncover recurring patterns of distraction, cognitive
strain, academic disengagement, and ethical dilemmas experienced by students in relation to
digital technology.

This study also aims to contribute to the development of more balanced and student-centred
digital learning strategies in higher education institutions. By giving voice to students’ lived
experiences, the research will offer fresh insights that can inform curriculum design,
institutional policy, and digital infrastructure planning. It will identify the nuanced ways in
which technology may hinder, rather than help, students’ academic progress and mental well-
being, and propose recommendations that prioritise both innovation and care in educational
technology adoption.

Research Objectives

To achieve the research aim, the following specific objectives will guide the study:

1. To explore students’ lived experiences of digital learning challenges.
This objective seeks to gather rich, first-hand accounts of the struggles students face when engaging with digital tools, platforms, and technologies in their academic routines.

2. To identify perceived negative consequences of using educational technology.
The research will investigate how students describe the psychological, behavioural, and academic effects of prolonged exposure to and reliance on digital devices and platforms.

3. To categorise common types of digital disruptions and their impacts on academic performance and well-being.
The aim here is to synthesise recurring themes such as distractions, technostress, cognitive overload, and compromised academic depth from the collected qualitative data.

4. To extend Selwyn’s (2016) typology of “digital downsides” by applying it in a contemporary higher education setting.
By comparing existing categories with present-day student experiences, the research will assess the relevance of Selwyn’s framework and propose possible extensions or refinements based on current realities.

5. To generate practical recommendations for institutions and educators on how to mitigate digital learning challenges.
The findings will inform actionable strategies that enhance digital learning while safeguarding students’ academic integrity, cognitive health, and emotional resilience.

1.3 Research Question

Primary Research Question:

“How do university students perceive and experience the negative effects of digital
technology in their academic lives?”

This question aims to investigate not only what students find problematic about educational
technologies but also how these issues manifest in their day-to-day academic routines. It
encourages participants to reflect on the emotional, cognitive, and behavioural impacts of
digital engagement.

LITERATURE REVIEW

The integration of digital technology into higher education has become indispensable,
offering flexibility, speed, and accessibility in academic delivery and participation. However,
as Selwyn (2016) critically highlights in his work Digital Downsides, the rise of digital tools
in academia is accompanied by a range of student challenges that are often overlooked.
Selwyn’s qualitative study explores the unintended consequences of educational technology,
establishing a foundational typology consisting of four key negative categories: distractions
and interruptions, diminished depth of study, cognitive overload, and academic integrity risks. This section reviews Selwyn’s framework and integrates recent studies that support and
extend his observations in contemporary contexts, particularly post-pandemic.

2.1 Theoretical Review

This study is anchored in Neil Selwyn’s (2016) framework of “digital downsides,” which
identifies four primary categories of negative engagement university students often have with
educational technology: distractions and interruptions, diminished depth of study, cognitive
overload, and academic integrity risks. These categories offer a clear and comprehensive lens
through which the study examines students’ experiences of digital challenges in academic
contexts.

Selwyn’s model provides structure for both the interview design and the thematic analysis in
this research. His qualitative approach and focus on student narratives make the framework
particularly relevant to this study’s methodology. However, the study also draws on recent
literature to contextualise and extend his categories within the current digital learning
environment.

Digital Stress and Cognitive Load

Upadhyaya and Vrinda (2021) expand on Selwyn’s concept of cognitive overload through
their exploration of technostress, which they define as mental strain caused by digital
overexposure. Their findings show that technostress negatively impacts academic
performance, aligning with Selwyn’s concern about how excessive platform use can hinder
student focus and productivity. Similarly, Biggins and Holley (2022) link poor learning
design and platform complexity to digital fatigue, supporting the view that stress from digital
learning is often built into systems rather than student behaviours.

These works confirm that cognitive overload is both a design issue and a psychological
experience, reinforcing the need for qualitative investigation into how students perceive and
manage such burdens.

Ethical Risks and the Role of AI

Selwyn’s fourth category, academic integrity risks, has evolved significantly with the advent of AI tools. Chan (2023) introduces the concept of “AI-giarism,” exploring students’ ethical concerns around using AI-generated content. The study shows that while students often rely on such tools for assistance, many feel conflicted about whether their use constitutes
misconduct.

Kundu et al. (2024) add a further dimension by examining biometric surveillance


technologies, such as keystroke dynamics, proposed to detect AI use or cheating. Their work
highlights ethical tensions and the psychological impact of monitoring systems, showing that
digital integrity now involves institutional trust and student autonomy, not just plagiarism
policies.

Distractions and Shallow Engagement

Selwyn’s categories of distraction and diminished depth are supported by Pérez-Juárez et al.
(2023), who found that students experience continuous digital interruptions that undermine
focus. Their study also notes that many students rationalise multitasking as necessary, despite
clear evidence it reduces learning quality. This behaviour illustrates how students adapt to
digital overload in ways that may become normalised but are ultimately harmful.

Guerra et al. (2022) also document how the pandemic exacerbated digital fatigue, showing
that overexposure to screens and platforms led to disengagement and isolation. Their findings
suggest that even digitally proficient students struggled when learning environments offered
no physical or cognitive boundaries.

2.2 Digital Distractions and Interruptions

Digital distractions remain a persistent concern in academic environments. Selwyn (2016)


emphasises how students are frequently diverted by non-academic content, such as social
media notifications, messages, and multimedia consumption, during study sessions or online
lectures. These distractions lead to fragmented attention and reduced learning retention.
Pérez-Juárez et al. (2023) confirm that such interruptions are a prevalent concern for higher
education students, who report being unable to sustain focus for extended periods due to the
immediacy and omnipresence of digital platforms. Their study in Sustainability further
highlights that students often rationalise multitasking as efficient, despite clear evidence that
it hinders academic performance.

The constant influx of digital notifications, tab-switching behaviour, and platform hopping contributes to what Biggins and Holley (2022) term "technological clutter", a phenomenon in which the educational value of digital tools is diminished by excessive digital noise. This clutter undermines students’ ability to remain mentally present and fully engaged in learning
tasks, compounding stress and confusion.

2.3 Diminished Depth of Study

Another critical dimension in Selwyn’s (2016) framework is the diminished depth of study,
referring to how digital tools may discourage deep, reflective learning. While technology
facilitates access to vast information, it can also promote superficial engagement. Selwyn
notes that the ease of retrieving summaries, video explanations, or AI-generated responses
often discourages critical thinking and comprehensive analysis. Guerra, Manríquez, and
Sierra (2022) extend this concern by discussing how students, during the COVID-19
pandemic, increasingly relied on quick digital solutions as coping mechanisms,
unintentionally cultivating shallow learning habits.

In the same vein, Chan (2023) interrogates the evolving academic culture surrounding AI-powered tools. The findings suggest that while students are aware of the learning benefits AI
tools can offer, they often struggle to maintain academic rigour when such tools become
shortcuts rather than supplements. The resulting reliance on algorithmic outputs over personal
analysis reflects a broader shift toward convenience-driven study patterns that inhibit
academic growth and intellectual independence.

2.4 Cognitive Overload and Technostress

Cognitive overload, a situation in which students experience mental fatigue due to excessive information input and tool complexity, is another key theme in Selwyn’s (2016) typology. The
modern digital learner is exposed to a multitude of applications, tabs, files, and interfaces,
each demanding attention and decision-making. Selwyn contends that this saturation leads to
poor time management, decreased motivation, and stress.

Upadhyaya and Vrinda (2021) directly link cognitive overload to technostress, a term that
encapsulates the anxiety and strain caused by prolonged digital engagement. Their empirical
study demonstrates that higher levels of technostress negatively correlate with academic
productivity among university students. Symptoms such as concentration difficulties,
exhaustion, and emotional disconnection from learning tasks are common. Biggins and
Holley (2022) reinforce this by identifying "critical learning design factors" that must be
considered to prevent digital environments from becoming overwhelming. Their study highlights the need for structured, intuitive, and student-centred digital platforms that align
with cognitive capacity.

Additionally, Guerra et al. (2022) show that technostress increased significantly during the
pandemic due to the abrupt shift to online learning without sufficient orientation or
psychological support. Students reported feeling constantly “plugged in,” yet disconnected
from meaningful academic dialogue and peer support, further intensifying emotional and
cognitive fatigue.

2.5 Academic Integrity Risks in the Age of AI

The final dimension in Selwyn’s (2016) framework concerns academic integrity. He argues
that digital technologies, while expanding educational possibilities, simultaneously offer
more opportunities for dishonesty. The copy-paste culture, online essay banks, and now AI-
generated content have made it easier for students to bypass genuine academic effort.

This concern is heightened in the work of Chan (2023), who explores the concept of “AI-giarism”, academic misconduct arising from the misuse of AI tools. Students’ attitudes towards
AI-generated assignments vary, but the overall perception is one of ambiguity and
temptation. The challenge lies in distinguishing between appropriate AI use and unethical
practices, a grey area that many students struggle to navigate without explicit institutional
guidance.

Kundu et al. (2024) propose biometric tools such as keystroke dynamics as a countermeasure,
allowing institutions to track students’ writing patterns to detect discrepancies. While
innovative, such technologies introduce ethical concerns around surveillance, data privacy,
and the psychological pressure they may place on learners. These tensions illustrate the
growing complexity of maintaining academic integrity in a digital age, where students are
both empowered and endangered by technological advancement.
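To make the mechanism concrete, the brief sketch below shows the kind of inter-key timing features a keystroke-dynamics system typically works from. It is an assumption-laden illustration in Python, not the method reported by Kundu et al. (2024); the event data and feature names are invented for the example.

    # Illustrative sketch only: derives simple keystroke-dynamics features
    # (hold time and flight time) from timestamped key events. The event
    # format and thresholds are assumptions, not the method of Kundu et al.
    from statistics import mean, pstdev

    # Each event: (key, press_time_ms, release_time_ms)
    events = [("t", 0, 95), ("h", 180, 260), ("e", 320, 410), (" ", 560, 640)]

    hold_times = [release - press for _, press, release in events]
    flight_times = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

    profile = {
        "mean_hold_ms": mean(hold_times),
        "sd_hold_ms": pstdev(hold_times),
        "mean_flight_ms": mean(flight_times),
    }
    print(profile)
    # A detector would compare such a profile against a student's historical
    # typing pattern; large deviations, or long pasted passages with almost no
    # typing, might flag a submission for human review.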

2.6 Summary and Research Gap

Together, these studies validate and expand upon Selwyn’s (2016) digital downsides
framework, revealing that university students continue to face considerable challenges in
navigating their academic lives amidst an increasingly digital ecosystem. While tools and platforms evolve, the core concerns of distraction, shallow learning, technostress, and
integrity issues persist and, in some cases, have worsened.

Despite the growing body of literature, a significant gap remains in qualitative, student-
centred research that captures the lived experiences of these challenges. Most studies apply
quantitative methods, missing the nuanced, personal stories that illustrate how these digital
downsides manifest in day-to-day academic routines. This proposal, therefore, aims to fill
this gap by using thematic analysis of semi-structured interviews to explore students'
narratives. Through this, it seeks to provide updated insights that can inform more humane,
practical, and context-sensitive digital education strategies.

METHODOLOGY

This section outlines the methodological design adopted for the study, focusing on the use of
qualitative approaches, participant selection, semi-structured interviews, thematic analysis,
and ethical considerations. The objective is to capture the depth, diversity, and complexity of
university students’ experiences with the negative effects of digital technology on their
academic lives.

3.1 Qualitative Research Approach

Given the exploratory nature of the research question (“How do university students perceive and experience the negative effects of digital technology in their academic lives?”), a qualitative research methodology is the most suitable. Qualitative research is effective in
uncovering rich, contextualised data that illuminate subjective perceptions and lived
experiences (Selwyn, 2016). It enables a deeper understanding of how students interact with
digital technologies and how these interactions impact their academic performance, cognitive
well-being, and learning behaviours.

Selwyn (2016), whose framework forms the core theoretical underpinning of this study,
emphasised the need for qualitative inquiry in understanding the nuanced “digital downsides”
experienced by students, such as distractions, cognitive overload, diminished study depth,
and integrity risks. This approach is echoed in the works of Biggins and Holley (2022), who
argue that student wellbeing issues linked to technostress can only be fully understood
through student narratives and direct engagement.


3.2 Participant Selection

This study will use purposive sampling to select participants with relevant experience and
exposure to digital technology in academic settings. The target population will consist of 12
to 15 undergraduate students, drawn from various faculties within a university setting. This
sample size is appropriate for a small-scale qualitative study and allows for data saturation
while maintaining depth in individual accounts.

Inclusion criteria include:

• Full-time enrolment in undergraduate programmes.
• Frequent use of digital platforms for academic purposes (e.g., LMS, research databases, productivity tools).
• Experience with academic challenges or frustrations linked to digital technologies.
• Willingness to participate in a recorded interview and provide informed consent.

Participants will be recruited via email invitations, online student portals, and university
noticeboards. Care will be taken to ensure a diverse sample in terms of gender, year of study,
and course discipline, to reflect varied experiences with digital tools.

3.3 Interview Design

The primary method of data collection will be semi-structured interviews, which are well-
suited for exploring participants’ experiences while allowing for flexibility and in-depth
discussion. This format is particularly effective when examining emotional, behavioural, and
ethical dimensions of academic technology use (Guerra, Manríquez & Sierra, 2022).

An interview guide will be developed based on Selwyn’s (2016) four “digital downside”
categories:

1. Distractions and interruptions (e.g., social media, notifications),

2. Diminished depth of study (e.g., over-reliance on summaries and AI tools),

3. Cognitive overload (e.g., multitasking fatigue, digital overwhelm),

4. Academic integrity risks (e.g., use of AI, plagiarism tools).

Questions will be open-ended to encourage reflection. For example:

• “Can you describe how digital tools have impacted your ability to concentrate on academic tasks?”
• “What feelings do you associate with using educational technology in your daily routine?”
• “Have you encountered ethical dilemmas related to AI or digital shortcuts?”

Each interview will last 45–60 minutes and will be conducted either in person or via a secure
online platform, depending on participants’ preferences. Interviews will be audio-recorded
with consent, transcribed verbatim, and supplemented with observational notes.

3.4 Data Analysis – Thematic Analysis

The data will be analysed using thematic analysis, a flexible method that enables researchers
to identify, organise, and interpret key patterns (themes) within qualitative data. This method
is particularly effective in educational research where lived experience and meaning making
are central.

Following Braun and Clarke’s (2006) six-phase model, the analysis will include:

1. Familiarisation with transcripts through repeated reading;

2. Initial coding to label relevant data extracts;

3. Searching for themes by grouping codes;

4. Reviewing themes for internal consistency and distinction;

5. Defining and naming themes;

6. Producing the report to link themes with the research question and literature.

The thematic coding will be both inductive (emerging from the data) and deductive (informed
by Selwyn’s categories). This hybrid approach ensures that while Selwyn’s (2016) model
provides structure, the findings are grounded in contemporary student voices.
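As a minimal sketch of how the deductive side of this hybrid coding could be operationalised, the example below tags interview extracts against Selwyn’s four categories before inductive refinement; the codebook keywords, function name, and sample extract are purely illustrative and not a prescribed analysis tool.

    # Minimal, illustrative sketch of a deductive first pass that tags interview
    # extracts against Selwyn's (2016) four "digital downside" categories.
    # The codebook keywords are placeholders; in practice coding is done
    # reflexively by the researcher (e.g., in NVivo), not by keyword matching.
    CODEBOOK = {
        "distractions_and_interruptions": ["notification", "social media", "scrolling"],
        "diminished_depth_of_study": ["summary", "skim", "shortcut"],
        "cognitive_overload": ["overwhelmed", "too many tabs", "exhausted"],
        "academic_integrity_risks": ["chatgpt", "paraphrasing tool", "plagiarism"],
    }

    def first_pass_codes(extract: str) -> list[str]:
        """Return the candidate categories whose keywords appear in an extract."""
        text = extract.lower()
        return [theme for theme, words in CODEBOOK.items() if any(w in text for w in words)]

    extract = "I get overwhelmed juggling too many tabs, and every notification pulls me away."
    print(first_pass_codes(extract))
    # ['distractions_and_interruptions', 'cognitive_overload']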

Biggins and Holley (2022) and Upadhyaya and Vrinda (2021) both stress the importance of
understanding technostress and cognitive fatigue as emerging themes in higher education. Their findings support the decision to use thematic analysis to uncover emotional and
behavioural responses to digital overload. Additionally, Pérez-Juárez et al. (2023) advocate
for thematic approaches when exploring digital distraction narratives among students.

3.5 Ethical Considerations

Ethical approval for the study will be obtained from the university’s research ethics
committee prior to data collection. The study will strictly adhere to the principles of research
ethics, ensuring respect for participants’ rights and dignity throughout.

Key ethical measures include:

• Informed Consent: Participants will receive an information sheet outlining the study’s purpose, procedures, risks, and benefits. They will be required to sign a consent form before participating.
• Voluntary Participation: All participation is voluntary, with the option to withdraw at any point without penalty.
• Confidentiality: Participants’ identities will be anonymised using pseudonyms. No personally identifiable information will be shared in any reports or publications.
• Data Protection: Interview recordings and transcripts will be stored securely on encrypted devices. Only the researcher and supervisor will have access.
• Minimising Harm: The research design considers emotional well-being, especially when discussing stress or ethical conflict. Participants will be reminded they may skip questions or stop the interview at any point.

Kundu et al. (2024) raise valid concerns about digital surveillance and academic monitoring,
which indirectly relate to participant trust in research environments. As such, extra care will
be taken to explain how data will be used, stored, and protected to ensure full transparency
and build trust.

3.6 Detailed Ethical Procedures

Conducting research involving human participants, especially on sensitive and experience-based topics, requires a rigorous ethical framework. This study, which explores university students’ perceptions and experiences of the negative effects of digital technology in their academic lives, involves direct engagement with individuals who may be revealing emotional, cognitive, or ethically complex experiences. As such, strict attention to ethical principles is vital to ensure the study is respectful, non-intrusive, and compliant with established research ethics standards.
established research ethics standards.

Informed Consent

Informed consent is a cornerstone of ethical research. It ensures that participants are fully
aware of the research purpose, procedures, potential risks, and their rights before agreeing to
take part. In this study, each participant will receive a detailed participant information sheet
outlining:

• The title and purpose of the research
• What participation involves (e.g., interview duration, topics)
• How the data will be used and stored
• The voluntary nature of participation
• Contact information for queries or complaints

This information will be delivered both electronically and in print (where applicable), using
clear, jargon-free language. Following this, participants will sign a consent form that
confirms they understand their rights, including the right to withdraw at any point without
any consequences.

This process is supported by Upadhyaya and Vrinda (2021), who stress that transparent
communication in studies involving digital stress helps participants feel psychologically
secure. Similarly, Guerra, Manríquez, and Sierra (2022) emphasise that students dealing with
technostress and cognitive overload should be approached with sensitivity, ensuring they
understand the implications of their involvement in the study.

Informed consent will also cover audio recording, making it explicit that interviews will be
recorded for transcription and analysis purposes. Participants will be informed that all
recordings will be securely stored and used only for research purposes, in compliance with
data protection regulations.

Anonymisation and Confidentiality

Ensuring participant confidentiality is essential in fostering trust and upholding ethical integrity. All identifying information collected during the study, including names, email addresses, course names, and specific institutional references, will be anonymised during transcription.

Participants will be assigned pseudonyms, which will be used throughout data analysis,
reporting, and publication. This step is crucial in protecting individuals’ identities, especially
when discussing sensitive issues such as academic dishonesty, overuse of AI tools, or digital
stress. As Chan (2023) notes in an investigation into AI-related academic misconduct, many
students are hesitant to speak freely about ethically ambiguous behaviours unless
confidentiality is explicitly guaranteed. Anonymisation allows them to reflect honestly
without fear of reprisal or judgment.

In addition, the anonymised transcripts will remove or generalise any references that might
indirectly identify the participants, such as specific academic modules, distinctive personal
experiences, or university systems. These measures will help minimise re-identification risks
while maintaining the richness of the data.
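A minimal sketch of how pseudonym assignment and basic redaction might be handled during transcription is given below; the participant name, module-code pattern, and helper names are hypothetical and intended only to illustrate the safeguards described above.

    # Illustrative sketch of pseudonym assignment and basic redaction during
    # transcription. The name, module-code pattern, and example data are
    # placeholders; real anonymisation would also involve manual checking.
    import re

    PSEUDONYMS = {}  # real name -> pseudonym, stored separately from transcripts

    def pseudonym_for(real_name: str) -> str:
        """Assign a stable pseudonym (Participant 1, 2, ...) to each participant."""
        if real_name not in PSEUDONYMS:
            PSEUDONYMS[real_name] = f"Participant {len(PSEUDONYMS) + 1}"
        return PSEUDONYMS[real_name]

    def redact(transcript: str, real_name: str) -> str:
        """Replace the participant's name and generalise module codes."""
        text = transcript.replace(real_name, pseudonym_for(real_name))
        return re.sub(r"\b[A-Z]{2,4}\d{3,4}\b", "[module]", text)

    print(redact("Jane Doe said the EDU3042 portal sends constant notifications.", "Jane Doe"))
    # Participant 1 said the [module] portal sends constant notifications.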

Data Protection and Security

This research will strictly comply with the university’s data protection policies and relevant
national legislation (e.g., the UK Data Protection Act 2018 and GDPR standards, where
applicable). Ensuring data security involves both technical and procedural safeguards to
prevent unauthorised access, loss, or misuse of participants’ information.

Audio recordings will be stored on an encrypted digital device and then uploaded to a secure,
university-approved cloud storage platform. Transcripts will be saved in password-protected
Word or PDF documents with access restricted solely to the researcher and supervisor.
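As one possible way of meeting these storage requirements, the sketch below encrypts a transcript at rest with a symmetric key using the Python cryptography library; the proposal itself does not prescribe a specific tool, so the library choice and file names are illustrative assumptions.

    # Illustrative sketch: encrypting a transcript at rest with a symmetric key
    # using the cryptography library (pip install cryptography). File names are
    # placeholders; the key must be stored apart from the data it protects.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()         # generate once; keep in a secure key store
    cipher = Fernet(key)

    transcript = "Participant 1: the constant notifications made it hard to focus.".encode()
    token = cipher.encrypt(transcript)  # ciphertext is safe to upload to cloud storage

    with open("participant01_transcript.enc", "wb") as out:
        out.write(token)

    # Only someone holding the key can recover the text:
    assert cipher.decrypt(token) == transcript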

All physical documents (e.g., signed consent forms) will be stored in a locked cabinet within
a secure academic office and will not be taken off-site. Digital data will be backed up
regularly, and redundant copies will be deleted once no longer needed.

In line with ethical research practice, all data will be retained for five years post-study
completion (as per university policy), after which it will be securely destroyed. Kundu et al.
(2024) highlight the increased sensitivity of student-related data in research involving digital
monitoring, and while this study does not employ surveillance techniques, it is imperative to
safeguard data due to the personal nature of students' disclosures about digital overload and
ethical concerns.


Participants will also be informed about how their data will be used in academic
dissemination, such as conference presentations or journal publications. They will be assured
that no identifiable information will be included in any such outputs.

Ethical Approval Process

Before any data collection takes place, this study will seek formal ethical approval from the
university’s Research Ethics Committee (REC). The ethics application will include the full
proposal, interview guide, informed consent forms, participant information sheets, and data
protection plan.

The ethical review process ensures that the research design:

• Minimises harm and maximises benefit to participants
• Demonstrates fairness and respect
• Includes risk mitigation strategies
• Protects the privacy and autonomy of individuals

Particular attention will be given to the emotional risks participants may face, especially
when recounting distressing or ethically complicated digital experiences. While the study
does not involve vulnerable groups or high-risk activities, the REC will be asked to review
whether additional support mechanisms (e.g., referral to counselling services) should be
provided in case a participant becomes distressed during the interview.

Researcher Reflexivity and Ethical Mindfulness

Ethical research also involves ongoing reflexivity, where the researcher is critically aware of
their own influence on the research process and power dynamics in the interview setting.
Throughout the study, the researcher will maintain an ethics log to document ethical
decisions, adjustments, and reflections that arise during data collection and analysis. This
adds a layer of transparency and accountability to the research process.

As Selwyn (2016) and Pérez-Juárez et al. (2023) highlight, qualitative research into student
experiences of digital learning is inherently interpretive and must be conducted with
sensitivity to personal narratives. Ethical mindfulness will therefore be maintained from
recruitment through reporting.


3.7 Expected Outcomes

This study anticipates several meaningful outcomes that will contribute to a deeper and more
nuanced understanding of how university students perceive and experience the negative
effects of digital technology on their academic lives. These outcomes will not only affirm and
extend existing frameworks, particularly Selwyn’s (2016) “digital downsides”, but also offer
practical, student-informed recommendations for improving digital learning environments in
higher education.

Rich, Contextualised Understanding of Student Experiences

The first expected outcome is the development of a rich, contextualised picture of how digital
technologies impact students’ academic routines, mental wellbeing, and ethical behaviour.
Through semi-structured interviews, participants will provide first-hand narratives of their
daily struggles with digital tools. These may include feelings of being overwhelmed by
constant notifications, difficulty maintaining focus due to platform switching, or reliance on
AI tools that create ethical dilemmas.

As demonstrated by Biggins and Holley (2022), technostress is deeply tied to students’ emotional wellbeing, often manifesting in anxiety, burnout, and disengagement. By capturing students’ reflections on these symptoms, the study will contribute fresh qualitative insights into the psychological and cognitive burdens of digital learning. Similarly, Guerra, Manríquez, and Sierra (2022) observed that even students who are digitally literate experience fatigue and isolation due to prolonged digital immersion, an insight likely to be echoed in this study’s findings.

This qualitative richness will go beyond numbers and statistics, offering thick descriptions
that convey the depth of student experiences. This is especially significant given that much of
the existing literature on digital learning stressors is quantitative, leaving gaps in our
understanding of the human side of educational technology.

Contemporary Application and Extension of Selwyn’s Framework

A second expected outcome is the application and possible extension of Selwyn’s (2016)
typology of digital downsides. Selwyn categorised negative student experiences into four
domains: distractions and interruptions, diminished depth of study, cognitive overload, and
academic integrity risks. While this framework provides a solid foundation, it is likely that contemporary student experiences, especially in a post-pandemic and AI-integrated academic landscape, will highlight new or evolved forms of these challenges.

For instance, Chan (2023) highlights how AI-generated content is blurring the boundaries
between support and plagiarism, introducing new ethical tensions that may not have been
fully anticipated in Selwyn’s original work. The study may therefore suggest additional
subcategories or refinements within the existing four themes, offering an updated model that
reflects current digital realities.

Thematic analysis of student interviews will allow for the mapping of Selwyn’s framework
against lived experience, validating where it remains relevant and revealing where it may
need updating. By doing so, the research will provide a modernised interpretation of
Selwyn’s digital downsides, grounded in real-world student narratives.

Patterns of Coping and Resistance

Beyond challenges, the study is also expected to uncover patterns of coping, adaptation, and
resistance that students use to manage the negative effects of digital technology. For example,
some participants may describe strategies such as disabling notifications, using website
blockers, or scheduling offline study sessions to avoid distraction. Others may reflect on their
personal growth in navigating ethical issues related to AI use, echoing findings from Pérez-
Juárez et al. (2023), who noted that students often develop informal codes of conduct when
formal institutional policies are lacking.

These insights will be important in identifying resilience factors that can be scaled or
supported by institutional interventions. They may also point to opportunities for peer-led
digital literacy training, mental health support tailored to technostress, or clearer policy
communication around acceptable AI use in coursework.

Practical Recommendations for Higher Education

A major outcome of the research will be a set of practical, evidence-based recommendations for universities, educators, and digital learning designers. These recommendations will aim to create more student-centred digital learning environments that mitigate harm while maintaining the benefits of technology.

Recommendations may include:

• Curriculum design that incorporates digital detox periods, blended offline activities, or fewer simultaneous online platform requirements;
• Staff training on recognising signs of digital fatigue and implementing flexible teaching strategies;
• Student workshops on managing technostress, setting digital boundaries, and ethical use of AI tools;
• Policy development that clearly defines acceptable digital practices and promotes digital wellbeing;
• Infrastructure improvements such as simplified learning platforms or customisable notification settings.

Upadhyaya and Vrinda (2021) emphasise that educational institutions have a duty to
acknowledge and respond to the cognitive and emotional toll of digital learning. This study’s
findings will support that obligation by equipping decision-makers with qualitative evidence
drawn from students’ voices. Furthermore, Kundu et al. (2024) point to the importance of balancing innovation (e.g., biometric and AI tracking systems) with ethical considerations, a balance that this study will help contextualise from the user perspective.

Contribution to Research and Institutional Practice

Finally, this research is expected to contribute to the growing academic conversation around
digital wellbeing in higher education, particularly by demonstrating the value of qualitative
inquiry in this domain. It will fill a noted gap in the literature by providing empirical, student-led
insights into the complexities of digital learning that go beyond surface-level assumptions.

It will also encourage institutions to reframe their approach to digitalisation not merely as a
technological upgrade but as a pedagogical and psychological shift that must be carefully
managed to support rather than undermine academic success.

3.8 Timeline

A structured timeline is essential for effective project management, especially for research
that involves multiple stages such as proposal writing, participant recruitment, data
collection, analysis, and reporting. The proposed study will span a period of five months,
ensuring adequate time for each phase of the research process, from ethical approval to the
final submission of findings.


The table below outlines the major activities and estimated time allocations for each phase:

Activity | Description | Timeframe | Duration
1. Proposal Finalisation & Ethics Submission | Final edits to research proposal; submission to university ethics committee. | Week 1–2 | 2 weeks
2. Ethics Review and Approval | Wait period for feedback/approval from the ethics board. | Week 3–5 | 3 weeks
3. Interview Guide Design & Pilot Testing | Development and testing of interview questions with 1–2 trial participants. | Week 6–7 | 2 weeks
4. Participant Recruitment | Distribution of recruitment notices via email, noticeboards, and course forums. | Week 7–8 | 2 weeks (overlap)
5. Data Collection – Semi-structured Interviews | Conducting interviews with 12–15 students (in-person or online). | Week 9–12 | 4 weeks
6. Transcription of Interviews | Verbatim transcription of audio recordings and anonymisation of data. | Week 10–13 | 4 weeks (overlap)
7. Thematic Data Analysis | Coding, theme development, and synthesis based on Braun & Clarke’s model. | Week 13–16 | 4 weeks
8. Drafting of Findings and Discussion | Writing up key themes, linking results to Selwyn (2016) and related sources. | Week 17–18 | 2 weeks
9. Final Editing and Proofreading | Reviewing the full report, refining arguments, formatting, and reference checks. | Week 19 | 1 week
10. Submission of Research Report | Submission of the complete, finalised report. | Week 20 | 1 week


CONCLUSION

This research proposal presents a qualitative investigation into how university students
perceive and experience the negative effects of digital technology in their academic lives.
While educational technologies have brought about substantial improvements in flexibility,
access, and resource delivery, they also introduce a host of challenges that are often
overshadowed by the dominant narrative of innovation. Drawing on Selwyn’s (2016)
influential framework of “digital downsides,” the study focuses on four primary themes (distractions, diminished depth of study, cognitive overload, and academic integrity risks) that collectively highlight the darker side of digital learning.

Through semi-structured interviews and thematic analysis, this research aims to uncover the
nuanced realities of students’ digital academic lives. The study will not only reaffirm existing
concerns outlined by scholars such as Selwyn (2016), Biggins and Holley (2022), and Guerra
et al. (2022) but will also bring to light contemporary developments such as AI-related
misconduct and digital fatigue, as discussed by Chan (2023) and Pérez-Juárez et al. (2023).

The expected outcomes include a richer understanding of students’ lived experiences, a re-
evaluation and potential extension of Selwyn’s typology, and practical recommendations for
creating more balanced and humane digital learning environments. Ethical integrity and
participant well-being are central to the research design, with measures in place to ensure
informed consent, anonymity, and data security.

Ultimately, this study intends to contribute meaningful, student-led insights that support
institutional decision-makers in higher education. It urges a shift from tech-centric policies to more student-centred digital strategies, ones that not only embrace innovation but also acknowledge and mitigate the academic and psychological challenges posed by digital
learning environments.


REFERENCES

Biggins, D. and Holley, D., 2022. Student wellbeing and technostress: Critical learning design factors. Journal of Learning Development in Higher Education, (25). Available at: https://doi.org/10.47408/jldhe.vi25.985 [Accessed 8 May 2025].

Chan, T., 2023. Is AI changing the rules of academic misconduct? An in-depth look at students' perceptions of ‘AI-giarism’. arXiv preprint. Available at: https://arxiv.org/pdf/2306.03358 [Accessed 8 May 2025].

Guerra, J.F., Manríquez, M.R. and Sierra, C.A.M., 2022. Technostress in university students
as an effect of the Coronavirus pandemic. In: V. Vila and M. Marquez, eds. Research in
Administrative Sciences Under COVID-19. Bingley: Emerald Publishing Limited, pp.117–
133.

Kundu, D., Mehta, A., Kumar, R., Lal, N., Anand, A., Singh, A. and Shah, R.R., 2024.
Keystroke dynamics against academic dishonesty in the age of LLMs. In: 2024 IEEE
International Joint Conference on Biometrics (IJCB). IEEE, pp.1–10.


Pérez-Juárez, M.Á., González-Ortega, D. and Aguiar-Pérez, J.M., 2023. Digital distractions from the point of view of higher education students. Sustainability, 15(7), p.6044. Available at: https://arxiv.org/pdf/2402.05249 [Accessed 8 May 2025].

Selwyn, N., 2016. Digital downsides: Exploring university students’ negative engagements
with digital technology. Teaching in Higher Education, 21(8), pp.1006–1021.

Upadhyaya, P. and Vrinda, 2021. Impact of technostress on academic productivity of university students. Education and Information Technologies, 26(2), pp.1647–1664.
