AI-Literature Review 4.docx
Turnitin
Document Details
Submission ID: trn:oid:::3117:457631178
Pages: 25
Download Date
File Name: Literature_Review_4.docx
File Size: 30.6 KB
The percentage indicates the combined amount of likely AI-generated text as well as likely AI-generated text that was also likely AI-paraphrased. It is essential to understand the limitations of AI detection before making decisions about a student’s work. We encourage you to learn more about Turnitin’s AI detection capabilities before using the tool.
Detection Groups
24 AI-generated only: 48%
Likely AI-generated text from a large-language model.
Disclaimer
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (it may misidentify
writing that is likely AI generated as AI generated and AI paraphrased or likely AI generated and AI paraphrased writing as only AI generated) so it should not be used as the sole basis for
adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an organization's application of its specific academic policies to determine whether any
academic misconduct has occurred.
False positives (incorrectly flagging human-written text as AI-generated) are a possibility in AI models.
AI detection scores under 20%, which we do not surface in new reports, have a higher likelihood of false positives. To reduce the
likelihood of misinterpretation, no score or highlights are attributed and are indicated with an asterisk in the report (*%).
The AI writing percentage should not be the sole basis to determine whether misconduct has occurred. The reviewer/instructor
should use the percentage as a means to start a formative conversation with their student and/or use it to examine the submitted
assignment in accordance with their school's policies.
Non-qualifying text, such as bullet points, annotated bibliographies, etc., will not be processed and can create disparity between the submission highlights and the
percentage shown.
TUTOR
THE DATE
Abstract
The integration of digital technology in higher education has transformed the academic
landscape, offering both opportunities and challenges for university students. While digital
tools such as learning management systems, online libraries, productivity apps, and artificial
intelligence (AI) platforms have enhanced access to academic content and supported flexible
learning, emerging evidence suggests that these tools also present significant negative
impacts. This research proposal outlines a qualitative study aimed at exploring how
undergraduate students perceive and experience the negative effects of digital technology in
their academic lives. Guided by Neil Selwyn’s (2016) framework of “digital downsides,” the
study focuses on four main themes: digital distractions, diminished depth of study, cognitive
overload, and academic integrity risks.
The research will employ a qualitative design using semi-structured interviews with a
purposive sample of undergraduate students from diverse academic backgrounds. The
interviews will be audio-recorded, transcribed, and analyzed through thematic analysis to
identify common patterns and themes in students’ narratives. Ethical considerations will
include informed consent, data anonymization, voluntary participation, and approval from the
university’s ethics committee.
Expected findings include a detailed understanding of how students navigate the challenges
posed by digital technology in academic settings. The study anticipates identifying key digital
stressors and disruptions that affect students’ learning processes, emotional well-being, and
academic engagement. By extending Selwyn’s (2016) typology in a contemporary university
context, particularly post-pandemic, the research aims to contribute to more balanced and
student-centered digital education strategies.
Ultimately, this study seeks to inform higher education stakeholders, including educators, curriculum designers, and institutional policymakers, on how to mitigate the adverse effects of
educational technologies while promoting digital tools that support meaningful learning
experiences. The outcomes of this study will offer practical recommendations to reduce
technostress and improve digital literacy and academic resilience among university students.
INTRODUCTION
The integration of digital technology into higher education has revolutionised the teaching
and learning environment, redefining how students interact with academic content, faculty,
and one another. Technological advancements such as virtual learning environments (VLEs),
artificial intelligence (AI)-powered tools, online databases, and communication platforms like
Microsoft Teams and Zoom have enhanced access, flexibility, and engagement in academic
settings. Digital tools are now considered essential to the modern university experience,
especially in the wake of the COVID-19 pandemic, which necessitated a rapid shift to online
and hybrid learning models. While digitalization has undeniably enabled new forms of
academic participation, collaboration, and content delivery, it has also introduced several
unintended consequences that warrant critical exploration.
Despite the benefits of digital learning, a growing body of research highlights its adverse
effects on students' academic well-being and performance. University students increasingly
report digital fatigue, reduced attention spans, cognitive overload, and academic stress
associated with excessive screen time, multiple platform usage, and the pressure to be
constantly connected. This paradox, in which digital technology is both a facilitator and an inhibitor of academic success, raises important questions about the quality of student engagement in
increasingly digitised learning environments. The pervasive use of educational technology,
while often seen as a solution to institutional and pedagogical challenges, may, in practice,
hinder deep learning, concentration, and academic motivation.
Neil Selwyn’s (2016) concept of “digital downsides” serves as a valuable theoretical lens to
examine these issues. His framework identifies four primary categories of negative student
experiences with digital technology: distractions and interruptions, diminished depth of study,
cognitive overload, and academic integrity risks. These categories underscore how
technology can undermine rather than support students' educational goals. For instance,
constant notifications from messaging apps, social media platforms, and digital multitasking
can divert attention from academic tasks, contributing to fragmented learning. Similarly,
over-reliance on digital shortcuts and AI-generated content can compromise students’ critical
thinking, reflection, and integrity.
In addition, the pandemic exacerbated many of these digital challenges. Research by Guerra,
Manríquez, and Sierra (2022) indicates that technostress intensified during COVID-19 as
students were forced to adapt to full-time remote learning with minimal preparation. The shift
exposed gaps in digital literacy, infrastructure inequalities, and mental health strains
stemming from constant virtual engagement. Upadhyaya and Vrinda (2021) further argue that
the rise in technostress correlates with reduced academic productivity, highlighting a need for
more comprehensive support systems that address the emotional and cognitive burdens
students experience in digital learning environments.
Moreover, the emergence of AI technologies in education has added new complexities. Tools
such as ChatGPT and paraphrasing software raise concerns over academic integrity, with
students facing ethical dilemmas in distinguishing between legitimate academic assistance
and misconduct. Chan (2023) explores how students perceive “AI-giarism” and the blurred
lines between human and machine-generated content. These developments demand a re-
evaluation of institutional policies and pedagogical strategies to ensure academic honesty
while acknowledging the evolving digital landscape.
Despite increasing awareness of these issues, there is still a lack of in-depth qualitative
understanding of how students perceive and internalise these negative digital experiences.
Most existing research relies on quantitative metrics that fail to capture the nuanced,
everyday realities of digital academic life. A qualitative approach, through direct engagement
with students' voices, offers a richer, contextually grounded insight into the emotional,
behavioural, and cognitive dimensions of their digital struggles.
This study, therefore, seeks to explore university students’ lived experiences of the negative
impacts of digital technology on their academic lives. Grounded in Selwyn’s (2016) typology
and supported by current research, it will investigate how students navigate challenges such
as distraction, cognitive fatigue, shallow engagement, and ethical uncertainty. By examining
these issues through semi-structured interviews and thematic analysis, the research aims to
generate a more student-centred understanding of digital learning challenges.
The findings will contribute to a growing discourse on digital wellbeing in higher education,
supporting educators and institutions in designing more balanced digital learning ecosystems.
Ultimately, the research intends to shift the focus from simply integrating more technology
into classrooms to critically assessing its academic and psychological implications for
students.
Research Aim
The primary aim of this research is to explore how university students perceive and
experience the negative effects of digital technology on their academic lives. While digital
tools are widely promoted as facilitators of academic success, this study seeks to examine the
lesser-discussed challenges that students face in increasingly digitised learning environments.
The research will be guided by Selwyn’s (2016) “digital downsides” typology and will aim to
extend his framework within a contemporary, post-pandemic context. Through in-depth
qualitative inquiry, the research seeks to uncover recurring patterns of distraction, cognitive
strain, academic disengagement, and ethical dilemmas experienced by students in relation to
digital technology.
This study also aims to contribute to the development of more balanced and student-centred
digital learning strategies in higher education institutions. By giving voice to students’ lived
experiences, the research will offer fresh insights that can inform curriculum design,
institutional policy, and digital infrastructure planning. It will identify the nuanced ways in
which technology may hinder, rather than help, students’ academic progress and mental well-
being, and propose recommendations that prioritise both innovation and care in educational
technology adoption.
Research Objectives
To achieve the research aim, the study will be guided by the following central research question:
“How do university students perceive and experience the negative effects of digital
technology in their academic lives?”
This question aims to investigate not only what students find problematic about educational
technologies but also how these issues manifest in their day-to-day academic routines. It
encourages participants to reflect on the emotional, cognitive, and behavioural impacts of
digital engagement.
LITERATURE REVIEW
The integration of digital technology into higher education has become indispensable,
offering flexibility, speed, and accessibility in academic delivery and participation. However,
as Selwyn (2016) critically highlights in his work Digital Downsides, the rise of digital tools
in academia is accompanied by a range of student challenges that are often overlooked.
Selwyn’s qualitative study explores the unintended consequences of educational technology,
establishing a foundational typology consisting of four key negative categories: distractions
and interruptions, diminished depth of study, cognitive overload, and academic integrity
risks. This section reviews Selwyn’s framework and integrates recent studies that support and
extend his observations in contemporary contexts, particularly post-pandemic.
This study is anchored in Neil Selwyn’s (2016) framework of “digital downsides,” which
identifies four primary categories of negative engagement university students often have with
educational technology: distractions and interruptions, diminished depth of study, cognitive
overload, and academic integrity risks. These categories offer a clear and comprehensive lens
through which the study examines students’ experiences of digital challenges in academic
contexts.
Selwyn’s model provides structure for both the interview design and the thematic analysis in
this research. His qualitative approach and focus on student narratives make the framework
particularly relevant to this study’s methodology. However, the study also draws on recent
literature to contextualise and extend his categories within the current digital learning
environment.
Upadhyaya and Vrinda (2021) expand on Selwyn’s concept of cognitive overload through
their exploration of technostress, which they define as mental strain caused by digital
overexposure. Their findings show that technostress negatively impacts academic
performance, aligning with Selwyn’s concern about how excessive platform use can hinder
student focus and productivity. Similarly, Biggins and Holley (2022) link poor learning
design and platform complexity to digital fatigue, supporting the view that stress from digital
learning is often built into systems rather than student behaviours.
These works confirm that cognitive overload is both a design issue and a psychological
experience, reinforcing the need for qualitative investigation into how students perceive and
manage such burdens.
Selwyn’s fourth category, academic integrity risks, has evolved significantly with the advent
of AI tools. Chan (2023) introduces the concept of “AI-giarism,” exploring students’ ethical
concerns around using AI-generated content. His study shows that while students often rely
on such tools for assistance, many feel conflicted about whether their use constitutes
misconduct.
Selwyn’s categories of distraction and diminished depth are supported by Pérez-Juárez et al.
(2023), who found that students experience continuous digital interruptions that undermine
focus. Their study also notes that many students rationalise multitasking as necessary, despite
clear evidence it reduces learning quality. This behaviour illustrates how students adapt to
digital overload in ways that may become normalised but are ultimately harmful.
Guerra et al. (2022) also document how the pandemic exacerbated digital fatigue, showing
that overexposure to screens and platforms led to disengagement and isolation. Their findings
suggest that even digitally proficient students struggled when learning environments offered
no physical or cognitive boundaries.
The constant influx of digital notifications, tab-switching behaviour, and platform hopping contribute to what Biggins and Holley (2022) term "technological clutter": a phenomenon in which the educational value of digital tools is diminished by excessive digital noise. This
clutter undermines students’ ability to remain mentally present and fully engaged in learning
tasks, compounding stress and confusion.
Another critical dimension in Selwyn’s (2016) framework is the diminished depth of study,
referring to how digital tools may discourage deep, reflective learning. While technology
facilitates access to vast information, it can also promote superficial engagement. Selwyn
notes that the ease of retrieving summaries, video explanations, or AI-generated responses
often discourages critical thinking and comprehensive analysis. Guerra, Manríquez, and
Sierra (2022) extend this concern by discussing how students, during the COVID-19
pandemic, increasingly relied on quick digital solutions as coping mechanisms,
unintentionally cultivating shallow learning habits.
In the same vein, Chan (2023) interrogates the evolving academic culture in light of AI-powered tools. His findings suggest that while students are aware of the learning benefits AI
tools can offer, they often struggle to maintain academic rigour when such tools become
shortcuts rather than supplements. The resulting reliance on algorithmic outputs over personal
analysis reflects a broader shift toward convenience-driven study patterns that inhibit
academic growth and intellectual independence.
Cognitive overload, a situation in which students experience mental fatigue due to excessive information input and tool complexity, is another key theme in Selwyn’s (2016) typology. The
modern digital learner is exposed to a multitude of applications, tabs, files, and interfaces,
each demanding attention and decision-making. Selwyn contends that this saturation leads to
poor time management, decreased motivation, and stress.
Upadhyaya and Vrinda (2021) directly link cognitive overload to technostress, a term that
encapsulates the anxiety and strain caused by prolonged digital engagement. Their empirical
study demonstrates that higher levels of technostress negatively correlate with academic
productivity among university students. Symptoms such as concentration difficulties,
exhaustion, and emotional disconnection from learning tasks are common. Biggins and
Holley (2022) reinforce this by identifying "critical learning design factors" that must be
considered to prevent digital environments from becoming overwhelming. Their study
highlights the need for structured, intuitive, and student-centred digital platforms that align
with cognitive capacity.
Additionally, Guerra et al. (2022) show that technostress increased significantly during the
pandemic due to the abrupt shift to online learning without sufficient orientation or
psychological support. Students reported feeling constantly “plugged in,” yet disconnected
from meaningful academic dialogue and peer support, further intensifying emotional and
cognitive fatigue.
The final dimension in Selwyn’s (2016) framework concerns academic integrity. He argues
that digital technologies, while expanding educational possibilities, simultaneously offer
more opportunities for dishonesty. The copy-paste culture, online essay banks, and now AI-
generated content have made it easier for students to bypass genuine academic effort.
This concern is heightened in the work of Chan (2023), who explores the concept of “AI-giarism”: academic misconduct arising from the misuse of AI tools. Students’ attitudes towards
AI-generated assignments vary, but the overall perception is one of ambiguity and
temptation. The challenge lies in distinguishing between appropriate AI use and unethical
practices, a grey area that many students struggle to navigate without explicit institutional
guidance.
Kundu et al. (2024) propose biometric tools such as keystroke dynamics as a countermeasure,
allowing institutions to track students’ writing patterns to detect discrepancies. While
innovative, such technologies introduce ethical concerns around surveillance, data privacy,
and the psychological pressure they may place on learners. These tensions illustrate the
growing complexity of maintaining academic integrity in a digital age, where students are
both empowered and endangered by technological advancement.
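The keystroke-dynamics idea surveyed by Kundu et al. (2024) can be illustrated with a minimal sketch. This is not their system: the features, timings, and tolerance below are hypothetical assumptions, and real biometric tools model far richer signals.

```python
import statistics

# Minimal, hypothetical sketch of keystroke dynamics: compare a writing
# session's typing rhythm against a student's baseline profile.
# All timings and the 50% tolerance are illustrative assumptions.

def mean_interval(intervals_ms):
    """Mean inter-keystroke interval in milliseconds."""
    return statistics.mean(intervals_ms)

def is_consistent(baseline_ms, session_ms, tolerance=0.5):
    """Return True if the session's mean interval is within `tolerance`
    (as a fraction of the baseline mean) of the student's usual rhythm."""
    base = mean_interval(baseline_ms)
    session = mean_interval(session_ms)
    return abs(session - base) / base <= tolerance

baseline = [180, 210, 195, 220, 205]  # student's usual typing rhythm
pasted = [40, 35, 50, 45, 38]         # burst more consistent with pasted text
```

A real detector would use many more features (key dwell times, digraph latencies) and, as noted above, raises surveillance and privacy concerns of its own.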
Together, these studies validate and expand upon Selwyn’s (2016) digital downsides
framework, revealing that university students continue to face considerable challenges in
navigating their academic lives amidst an increasingly digital ecosystem. While tools and
platforms evolve, the core concerns of distraction, shallow learning, technostress, and
integrity issues persist and, in some cases, have worsened.
Despite the growing body of literature, a significant gap remains in qualitative, student-
centred research that captures the lived experiences of these challenges. Most studies apply
quantitative methods, missing the nuanced, personal stories that illustrate how these digital
downsides manifest in day-to-day academic routines. This proposal, therefore, aims to fill
this gap by using thematic analysis of semi-structured interviews to explore students'
narratives. Through this, it seeks to provide updated insights that can inform more humane,
practical, and context-sensitive digital education strategies.
METHODOLOGY
This section outlines the methodological design adopted for the study, focusing on the use of
qualitative approaches, participant selection, semi-structured interviews, thematic analysis,
and ethical considerations. The objective is to capture the depth, diversity, and complexity of
university students’ experiences with the negative effects of digital technology on their
academic lives.
Given the exploratory nature of the research question ("How do university students perceive and experience the negative effects of digital technology in their academic lives?"), a
qualitative research methodology is the most suitable. Qualitative research is effective in
uncovering rich, contextualised data that illuminate subjective perceptions and lived
experiences (Selwyn, 2016). It enables a deeper understanding of how students interact with
digital technologies and how these interactions impact their academic performance, cognitive
well-being, and learning behaviours.
Selwyn (2016), whose framework forms the core theoretical underpinning of this study,
emphasised the need for qualitative inquiry in understanding the nuanced “digital downsides”
experienced by students, such as distractions, cognitive overload, diminished study depth,
and integrity risks. This approach is echoed in the works of Biggins and Holley (2022), who
argue that student wellbeing issues linked to technostress can only be fully understood
through student narratives and direct engagement.
This study will use purposive sampling to select participants with relevant experience and
exposure to digital technology in academic settings. The target population will consist of 12
to 15 undergraduate students, drawn from various faculties within a university setting. This
sample size is appropriate for a small-scale qualitative study and allows for data saturation
while maintaining depth in individual accounts.
Inclusion criteria will include frequent use of digital platforms for academic purposes (e.g., LMS, research databases, productivity tools).
Participants will be recruited via email invitations, online student portals, and university
noticeboards. Care will be taken to ensure a diverse sample in terms of gender, year of study,
and course discipline, to reflect varied experiences with digital tools.
The primary method of data collection will be semi-structured interviews, which are well-
suited for exploring participants’ experiences while allowing for flexibility and in-depth
discussion. This format is particularly effective when examining emotional, behavioural, and
ethical dimensions of academic technology use (Guerra, Manríquez & Sierra, 2022).
An interview guide will be developed based on Selwyn’s (2016) four “digital downside” categories. Sample questions will include:
“Can you describe how digital tools have impacted your ability to concentrate on academic tasks?”
“What feelings do you associate with using educational technology in your daily routine?”
Each interview will last 45–60 minutes and will be conducted either in person or via a secure
online platform, depending on participants’ preferences. Interviews will be audio-recorded
with consent, transcribed verbatim, and supplemented with observational notes.
The data will be analysed using thematic analysis, a flexible method that enables researchers
to identify, organise, and interpret key patterns (themes) within qualitative data. This method
is particularly effective in educational research where lived experience and meaning making
are central.
Following Braun and Clarke’s (2006) six-phase model, the analysis will include:
1. Familiarisation with the data through repeated reading of transcripts.
2. Generating initial codes across the dataset.
3. Searching for themes by collating related codes.
4. Reviewing themes against the coded extracts and the full dataset.
5. Defining and naming themes.
6. Producing the report to link themes with the research question and literature.
The thematic coding will be both inductive (emerging from the data) and deductive (informed
by Selwyn’s categories). This hybrid approach ensures that while Selwyn’s (2016) model
provides structure, the findings are grounded in contemporary student voices.
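As a minimal sketch of how this hybrid coding might work in practice, the fragment below applies Selwyn’s four categories as deductive codes via keyword matching and sets aside unmatched excerpts for inductive (open) coding. The keyword lists and excerpts are hypothetical placeholders, not the study’s actual codebook; in the real analysis, coding will be interpretive, not automated.

```python
# Hypothetical deductive codebook derived from Selwyn's (2016) categories.
DEDUCTIVE_CODES = {
    "distractions_and_interruptions": ["notification", "distract", "interrupt"],
    "diminished_depth_of_study": ["skim", "summary", "superficial"],
    "cognitive_overload": ["overwhelm", "fatigue", "exhaust"],
    "academic_integrity_risks": ["plagiar", "chatgpt", "cheat"],
}

def code_excerpt(excerpt):
    """Return the deductive codes whose keywords appear in an excerpt."""
    text = excerpt.lower()
    return [code for code, keywords in DEDUCTIVE_CODES.items()
            if any(kw in text for kw in keywords)]

def tally_codes(excerpts):
    """Count code occurrences; unmatched excerpts feed inductive coding."""
    counts = {code: 0 for code in DEDUCTIVE_CODES}
    uncoded = []
    for excerpt in excerpts:
        codes = code_excerpt(excerpt)
        if not codes:
            uncoded.append(excerpt)  # candidate for a new, inductive theme
        for code in codes:
            counts[code] += 1
    return counts, uncoded

excerpts = [
    "Every notification pulls me away from my reading.",
    "I just skim the summary instead of the full article.",
    "I worry that using ChatGPT counts as cheating.",
    "I prefer studying in the library with friends.",
]
counts, uncoded = tally_codes(excerpts)
```

The unmatched excerpt in this toy example is exactly the kind of data that would prompt a new inductive theme rather than being forced into Selwyn’s categories.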
Biggins and Holley (2022) and Upadhyaya and Vrinda (2021) both stress the importance of
understanding technostress and cognitive fatigue as emerging themes in higher education.
Their findings support the decision to use thematic analysis to uncover emotional and
behavioural responses to digital overload. Additionally, Pérez-Juárez et al. (2023) advocate
for thematic approaches when exploring digital distraction narratives among students.
Ethical approval for the study will be obtained from the university’s research ethics
committee prior to data collection. The study will strictly adhere to the principles of research
ethics, ensuring respect for participants’ rights and dignity throughout.
Informed Consent: Participants will receive an information sheet outlining the study’s
purpose, procedures, risks, and benefits. They will be required to sign a consent form
before participating.
Kundu et al. (2024) raise valid concerns about digital surveillance and academic monitoring,
which indirectly relate to participant trust in research environments. As such, extra care will
be taken to explain how data will be used, stored, and protected to ensure full transparency
and build trust.
Informed Consent
Informed consent is a cornerstone of ethical research. It ensures that participants are fully aware of the research purpose, procedures, potential risks, and their rights before agreeing to take part. In this study, each participant will receive a detailed participant information sheet covering these points.
This information will be delivered both electronically and in print (where applicable), using
clear, jargon-free language. Following this, participants will sign a consent form that
confirms they understand their rights, including the right to withdraw at any point without
any consequences.
This process is supported by Upadhyaya and Vrinda (2021), who stress that transparent
communication in studies involving digital stress helps participants feel psychologically
secure. Similarly, Guerra, Manríquez, and Sierra (2022) emphasise that students dealing with
technostress and cognitive overload should be approached with sensitivity, ensuring they
understand the implications of their involvement in the study.
Informed consent will also cover audio recording, making it explicit that interviews will be
recorded for transcription and analysis purposes. Participants will be informed that all
recordings will be securely stored and used only for research purposes, in compliance with
data protection regulations.
Participants will be assigned pseudonyms, which will be used throughout data analysis,
reporting, and publication. This step is crucial in protecting individuals’ identities, especially
when discussing sensitive issues such as academic dishonesty, overuse of AI tools, or digital
stress. As Chan (2023) notes in his investigation into AI-related academic misconduct, many
students are hesitant to speak freely about ethically ambiguous behaviours unless
confidentiality is explicitly guaranteed. Anonymisation allows them to reflect honestly
without fear of reprisal or judgment.
In addition, the anonymised transcripts will remove or generalise any references that might
indirectly identify the participants, such as specific academic modules, distinctive personal
experiences, or university systems. These measures will help minimise re-identification risks
while maintaining the richness of the data.
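A minimal sketch of this anonymisation step is shown below, assuming a simple name-to-pseudonym map and a hypothetical module-code pattern; a real pipeline would handle many more identifier types and would be checked manually.

```python
import re

# Illustrative transcript pseudonymisation. The names, the module-code
# pattern, and the replacement scheme are hypothetical examples, not the
# study's actual data-handling procedure.

def pseudonymise(transcript, name_map, module_pattern=r"\b[A-Z]{2,4}\d{3,4}\b"):
    """Replace known participant names with pseudonyms and generalise
    module codes that could indirectly identify a participant."""
    for real_name, pseudonym in name_map.items():
        transcript = re.sub(re.escape(real_name), pseudonym, transcript)
    # Replace module codes like "COMP1021" with a neutral placeholder.
    transcript = re.sub(module_pattern, "[module]", transcript)
    return transcript

raw = "Amina said the COMP1021 portal kept crashing during exams."
clean = pseudonymise(raw, {"Amina": "P01"})
```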
This research will strictly comply with the university’s data protection policies and relevant
national legislation (e.g., the UK Data Protection Act 2018 and GDPR standards, where
applicable). Ensuring data security involves both technical and procedural safeguards to
prevent unauthorised access, loss, or misuse of participants’ information.
Audio recordings will be stored on an encrypted digital device and then uploaded to a secure,
university-approved cloud storage platform. Transcripts will be saved in password-protected
Word or PDF documents with access restricted solely to the researcher and supervisor.
All physical documents (e.g., signed consent forms) will be stored in a locked cabinet within
a secure academic office and will not be taken off-site. Digital data will be backed up
regularly, and redundant copies will be deleted once no longer needed.
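Two of the procedural safeguards described above can be illustrated in code: restricting file permissions and recording a SHA-256 checksum so that later tampering with a stored transcript can be detected. The path and content below are hypothetical, and the real study would store files on the university-approved encrypted platform rather than a local temporary directory.

```python
import hashlib
import os
import tempfile

# Illustrative safeguards for stored transcripts (hypothetical paths/content).

def store_transcript(path, text):
    """Write a transcript, restrict access to the owner, and return a
    SHA-256 checksum for later integrity verification."""
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(text)
    os.chmod(path, 0o600)  # owner read/write only
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def verify_transcript(path, expected_digest):
    """Return True if the stored file still matches its recorded checksum."""
    with open(path, "r", encoding="utf-8") as fh:
        text = fh.read()
    return hashlib.sha256(text.encode("utf-8")).hexdigest() == expected_digest

tmp = os.path.join(tempfile.mkdtemp(), "interview_P01.txt")
digest = store_transcript(tmp, "Interviewer: How do notifications affect you?")
```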
In line with ethical research practice, all data will be retained for five years post-study
completion (as per university policy), after which it will be securely destroyed. Kundu et al.
(2024) highlight the increased sensitivity of student-related data in research involving digital
monitoring, and while this study does not employ surveillance techniques, it is imperative to
safeguard data due to the personal nature of students' disclosures about digital overload and
ethical concerns.
Participants will also be informed about how their data will be used in academic
dissemination, such as conference presentations or journal publications. They will be assured
that no identifiable information will be included in any such outputs.
Before any data collection takes place, this study will seek formal ethical approval from the
university’s Research Ethics Committee (REC). The ethics application will include the full
proposal, interview guide, informed consent forms, participant information sheets, and data
protection plan.
Particular attention will be given to the emotional risks participants may face, especially
when recounting distressing or ethically complicated digital experiences. While the study
does not involve vulnerable groups or high-risk activities, the REC will be asked to review
whether additional support mechanisms (e.g., referral to counselling services) should be
provided in case a participant becomes distressed during the interview.
Ethical research also involves ongoing reflexivity, where the researcher is critically aware of
their own influence on the research process and power dynamics in the interview setting.
Throughout the study, the researcher will maintain an ethics log to document ethical
decisions, adjustments, and reflections that arise during data collection and analysis. This
adds a layer of transparency and accountability to the research process.
As Selwyn (2016) and Pérez-Juárez et al. (2023) highlight, qualitative research into student
experiences of digital learning is inherently interpretive and must be conducted with
sensitivity to personal narratives. Ethical mindfulness will therefore be maintained from
recruitment through reporting.
EXPECTED OUTCOMES
This study anticipates several meaningful outcomes that will contribute to a deeper and more
nuanced understanding of how university students perceive and experience the negative
effects of digital technology on their academic lives. These outcomes will not only affirm and
extend existing frameworks, particularly Selwyn’s (2016) “digital downsides”, but also offer
practical, student-informed recommendations for improving digital learning environments in
higher education.
The first expected outcome is the development of a rich, contextualised picture of how digital
technologies impact students’ academic routines, mental wellbeing, and ethical behaviour.
Through semi-structured interviews, participants will provide first-hand narratives of their
daily struggles with digital tools. These may include feelings of being overwhelmed by
constant notifications, difficulty maintaining focus due to platform switching, or reliance on
AI tools that create ethical dilemmas.
This qualitative richness will go beyond numbers and statistics, offering thick descriptions
that convey the depth of student experiences. This is especially significant given that much of
the existing literature on digital learning stressors is quantitative, leaving gaps in our
understanding of the human side of educational technology.
A second expected outcome is the application and possible extension of Selwyn’s (2016)
typology of digital downsides. Selwyn categorised negative student experiences into four
domains: distractions and interruptions, diminished depth of study, cognitive overload, and
academic integrity risks. While this framework provides a solid foundation, it is likely that
new digital challenges have emerged since its publication.
For instance, Chan (2023) highlights how AI-generated content is blurring the boundaries
between support and plagiarism, introducing new ethical tensions that may not have been
fully anticipated in Selwyn’s original work. The study may therefore suggest additional
subcategories or refinements within the existing four themes, offering an updated model that
reflects current digital realities.
Thematic analysis of student interviews will allow for the mapping of Selwyn’s framework
against lived experience, validating where it remains relevant and revealing where it may
need updating. By doing so, the research will provide a modernised interpretation of
Selwyn’s digital downsides, grounded in real-world student narratives.
Beyond challenges, the study is also expected to uncover patterns of coping, adaptation, and
resistance that students use to manage the negative effects of digital technology. For example,
some participants may describe strategies such as disabling notifications, using website
blockers, or scheduling offline study sessions to avoid distraction. Others may reflect on their
personal growth in navigating ethical issues related to AI use, echoing findings from Pérez-
Juárez et al. (2023), who noted that students often develop informal codes of conduct when
formal institutional policies are lacking.
These insights will be important in identifying resilience factors that can be scaled or
supported by institutional interventions. They may also point to opportunities for peer-led
digital literacy training, mental health support tailored to technostress, or clearer policy
communication around acceptable AI use in coursework.
The study is also expected to inform practical recommendations for higher education
institutions, such as:
Curriculum design that incorporates digital detox periods, blended offline activities,
or fewer simultaneous online platform requirements;
Policy development that clearly defines acceptable digital practices and promotes
digital wellbeing.
Upadhyaya and Vrinda (2021) emphasise that educational institutions have a duty to
acknowledge and respond to the cognitive and emotional toll of digital learning. This study’s
findings will support that obligation by equipping decision-makers with qualitative evidence
drawn from students’ voices. Furthermore, Kundu et al. (2024) point to the importance of
balancing innovation (e.g., biometric and AI tracking systems) with ethical considerations, a
balance that this study will help contextualise from the user perspective.
Finally, this research is expected to contribute to the growing academic conversation around
digital wellbeing in higher education, particularly by demonstrating the value of qualitative
inquiry in this domain. It will fill a noted gap in the literature by providing empirical, student-led
insights into the complexities of digital learning that go beyond surface-level assumptions.
It will also encourage institutions to reframe their approach to digitalisation not merely as a
technological upgrade but as a pedagogical and psychological shift that must be carefully
managed to support rather than undermine academic success.
3.8 Timeline
A structured timeline is essential for effective project management, especially for research
that involves multiple stages such as proposal writing, participant recruitment, data
collection, analysis, and reporting. The proposed study will span a period of five months,
ensuring adequate time for each phase of the research process, from ethical approval to the
final submission of findings.
The table below outlines the major activities and estimated time allocations for each phase:
5. Data Collection – Semi-structured Interviews | Week 9–12 | 4 weeks | Conducting
interviews with 12–15 students (in-person or online).
CONCLUSION
This research proposal presents a qualitative investigation into how university students
perceive and experience the negative effects of digital technology in their academic lives.
While educational technologies have brought about substantial improvements in flexibility,
access, and resource delivery, they also introduce a host of challenges that are often
overshadowed by the dominant narrative of innovation. Drawing on Selwyn’s (2016)
influential framework of “digital downsides,” the study focuses on four primary themes:
distractions, diminished depth of study, cognitive overload, and academic integrity risks, which
collectively highlight the darker side of digital learning.
Through semi-structured interviews and thematic analysis, this research aims to uncover the
nuanced realities of students’ digital academic lives. The study will not only reaffirm existing
concerns outlined by scholars such as Selwyn (2016), Biggins and Holley (2022), and Guerra
et al. (2022) but will also bring to light contemporary developments such as AI-related
misconduct and digital fatigue, as discussed by Chan (2023) and Pérez-Juárez et al. (2023).
The expected outcomes include a richer understanding of students’ lived experiences, a re-
evaluation and potential extension of Selwyn’s typology, and practical recommendations for
creating more balanced and humane digital learning environments. Ethical integrity and
participant well-being are central to the research design, with measures in place to ensure
informed consent, anonymity, and data security.
Ultimately, this study intends to contribute meaningful, student-led insights that support
institutional decision-makers in higher education. It urges a shift from tech-centric policies to
more student-centred digital strategies: ones that not only embrace innovation but also
acknowledge and mitigate the academic and psychological challenges posed by digital
learning environments.
REFERENCES
Biggins, D. and Holley, D., 2022. Student wellbeing and technostress: Critical learning
design factors. Journal of Learning Development in Higher Education, (25). Available at:
https://doi.org/10.47408/jldhe.vi25.985 [Accessed 8 May 2025].
Chan, T., 2023. Is AI changing the rules of academic misconduct? An in-depth look at
students' perceptions of ‘AI-giarism’. arXiv preprint. Available at:
https://arxiv.org/pdf/2306.03358 [Accessed 8 May 2025].
Guerra, J.F., Manríquez, M.R. and Sierra, C.A.M., 2022. Technostress in university students
as an effect of the Coronavirus pandemic. In: V. Vila and M. Marquez, eds. Research in
Administrative Sciences Under COVID-19. Bingley: Emerald Publishing Limited, pp.117–
133.
Kundu, D., Mehta, A., Kumar, R., Lal, N., Anand, A., Singh, A. and Shah, R.R., 2024.
Keystroke dynamics against academic dishonesty in the age of LLMs. In: 2024 IEEE
International Joint Conference on Biometrics (IJCB). IEEE, pp.1–10.
Selwyn, N., 2016. Digital downsides: Exploring university students’ negative engagements
with digital technology. Teaching in Higher Education, 21(8), pp.1006–1021.