Note
Jarrel De Matas
University of Massachusetts Amherst
DOI: 10.37514/DBH-J.2023.11.1.09
Introduction
Within the space of just one year, chatbots powered by Artificial Intelligence (AI) Large
Language Models (LLMs), such as ChatGPT (Chat Generative Pre-Trained Transformer),
have grown in functionality.1 Artificial intelligence is not new to writing automation or
the pedagogy of college writing. Software such as WordPerfect and Writer’s Helper has
been shown to improve student efficiency with the amount of time required for revising
and editing (Williamson, 1993). More recently, Ma (2021) has recommended Virtual
Reality technology as part of an immersion teaching strategy for second-language
learners. In both cases, the AI software depends on something initially created by the
student.2 However, ChatGPT-4, described as “more creative and collaborative” than earlier iterations (OpenAI, n.d., Creativity section), has expanded natural language processing capacities, giving it wide functionality in practical scenarios such as answering questions, chatting automatically, and de/coding formulae (Zhang & Li,
2021). For academic writing in particular, ChatGPT can accomplish such tasks as
generating summaries of papers, extracting key points from articles, and providing
citations (Aljanabi, 2023)—tasks beyond the capacity of previous GPT models.
ChatGPT was launched in 2022 by OpenAI, an American research laboratory of
the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc.
ChatGPT operates as a fine-tuned chatbot with transfer and reinforcement learning
capacities.3 The LLM algorithm of ChatGPT allows the platform to “generate, edit, and
iterate with users on creative and technical writing tasks, such as composing songs,
writing screenplays, or learning a user’s writing style” (OpenAI, n.d., Creativity section).
The possibilities offered by ChatGPT come with the caveat that, if not engaged critically,
it can hinder—not help—user creativity and reasoning.4 What I propose in this note is
an inquiry-based model that centers the user—i.e., the writer—as a problem-solver who
reflexively develops their critical thinking and writing skills through close engagement
with the creative and technical process of the ChatGPT platform.
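By way of illustration only, the brief sketch below shows how such an iterative, writer-centered exchange might be scripted against a chat-based LLM. The model name, prompts, and client interface are assumptions for demonstration rather than a prescribed setup, and the same exchange can be carried out directly in ChatGPT’s web interface.

# A minimal sketch (not the author's classroom tooling) of a writer-centered exchange
# with a chat-based LLM. The model name, prompts, and client interface are illustrative
# assumptions; the point is that the student supplies the draft and the questions.
from openai import OpenAI  # assumes the official openai Python package, v1 or later

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

student_draft = "Social media changes how first-year students evaluate sources."
messages = [
    {"role": "system",
     "content": "You are a writing tutor. Ask clarifying questions; do not rewrite the text."},
    {"role": "user",
     "content": f"Here is my working thesis: {student_draft} What is unclear or unsupported?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)  # the student, not the model, does the revising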
This note adds to existing research in Writing about Writing (WAW) through the
following questions: What critical thinking parameters can be placed on ChatGPT to
preserve originality in student thought? And how can ChatGPT be used without
displacing the centrality of the student-writer? In what follows, I address each of these
questions as part of my overarching research focus: an examination of the future of
writing in a rapidly changing technological world. Key discussion points I raise in
this note overlap with central ideas in WAW research as discussed by Wardle and Downs
(2022), namely reflection as an aid to encourage “[t]ransferring and repurposing what
we know about writing” (p. 105), conversational inquiry as a form of making new
knowledge “rather than simply reporting on information you’ve gathered from a library
or web search” (p. 126), and creativity as having an “inherently rhetorical quality” (p.
950).
An Inquiry-Based Model
As it happens, the areas where ChatGPT and similar LLMs have the most shortcomings
are those that provide the most opportunities for improving learning outcomes. For
example, Azaria (2022) explained that ChatGPT may request additional information to
provide an answer. Additionally, a minor change to a question might lead to a
contradictory response. In such cases, AI-powered writing tools provide metacognitive
moments for students to go beyond simply generating responses and into the realm of
critical inquiry. To explore these moments, I take an approach similar to how Ng et al.
(2022) used AI-driven chatbots to apply Pedaste et al.’s (2015) five-phase model of
inquiry-based learning to the writing process. My approach differs by requiring students to investigate the design problems of ChatGPT, a much more advanced chatbot than Siri, which formed the basis of the study by Ng et al. (2022), and to identify where and how ChatGPT falls short at each stage of the five-phase model.
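As an illustration, the sketch below shows one way this probing exercise might be made systematic: submitting slight rewordings of the same question, together with a question that falls after the training cutoff, and then comparing the answers by hand. The model name, prompts, and client interface are illustrative assumptions; students can run the same comparisons directly in the chat interface.

# A minimal sketch, not part of the original study, of probing an LLM with paraphrased
# and post-cutoff questions. Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

probes = [
    "Explain the difference between revision and editing in the writing process.",
    "How do revision and editing differ in the writing process?",   # minor rewording
    "Who is the current Prime Minister of the United Kingdom?",     # post-2021 knowledge probe
]

for prompt in probes:
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", reply.choices[0].message.content)

# Students then annotate where the two paraphrases contradict each other and where the
# knowledge-cutoff answer is out of date, turning those gaps into objects of inquiry.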
I provide students with a five-phase model, tailored to a college writing
schematic, that enables them to fill the gaps in ChatGPT’s learned knowledge (as shown
in Table 1), which exist because it has been trained on data that go up to its “knowledge
cutoff” of 2021 (Chatterjee & Dethlefs, 2023).5
essay as part of, say, a rhetorical analysis. Finally, students can be prompted to improve
the essay by rewriting it themselves. The skill of metacognitive awareness that is
enabled through a review process of ChatGPT’s essays is especially useful for EFL
learners (Azizi et al., 2017), and a comprehensive review of ChatGPT’s pre-trained
knowledge will further deepen student creativity and engagement, as students will likely gain confidence from identifying gaps in that knowledge through an inquiry-based model.
I asked ChatGPT to compose a song in the music genre called Soca, which is
indigenous to my home country, Trinidad and Tobago:
Beyond ChatGPT’s structural design of a typical song, the platform borrows the title
lyrics “Feelin’ hot, hot, hot” from The Merrymen’s classic Calypso song.7 Additionally,
ChatGPT mimics the English Creole transcription of “d,” “dis,” and “dey” for the Standard English article “the” and the pronouns “this” and “they.” There is also the use of subject pronouns in object position (“make we jump”), which is characteristic of English Creole. Finally, in terms of content, ChatGPT references the carnival season as
well as the dance “wine.” A rhetorical evaluation of this response by ChatGPT reveals
certain stereotypes associated with Soca music, language, culture, and dance elements.
ChatGPT is limited not only by the parameters of its datasets but also by the datasets themselves, which contain biases concerning geo-culture and genre conventions.
Similar user-centered design approaches to evaluating ChatGPT’s responses can
encourage students to be creative with how they use the platform as well as improve
their rhetorical evaluation skills. In each of the two recommendations I provide above,
students of college writing become active creators who engage with the design process around ChatGPT’s limited knowledge base, allowing educators to maintain general course objectives by
emphasizing writing as an ongoing, imperfect process—even for AI-powered writing
tools.
Conclusion
AI-powered writing tools existed before ChatGPT and, based on the popularity of the
latter, will only continue to become more sophisticated. The significant learning curve
posed to educators by ChatGPT’s pre-trained model is also an opportunity to reinvent
the ways in which writing has been taught. In this note, I have provided
recommendations for educators to utilize, not discourage, ChatGPT in writing
instruction. Such recommendations include an inquiry-based model for using ChatGPT
in writing assignments and a user-centered design approach that actively shapes the
content provided by ChatGPT’s limited knowledge base. Writing instructors must begin
examining the viability of artificial intelligence platforms such as ChatGPT through an
academic lens to grasp the complexity, ramifications, and potential that they may hold
for pedagogy and the future of the field. For researchers, future iterations of WAW can
take up AI-powered writing specifically for the ways in which it can add to the user’s
metacognitive awareness of the writing process.
Notes
1Here I refer to AI as automated systems that perform functions similar to human cognitive processes such as analysis, synthesis, learning, and self-correction.
2Williamson (1993) discussed the process using WordPerfect and Writer’s
Helper as follows: “The students first type their essays on the word processor. Then,
they take their essays through the analysis which flags various writing deficiencies. The
students then revise on hard copies of their essays. Finally, using the word processor, the
students edit their essays based on the analyses” (p. 4). Ma (2021) pointed to student-
teacher collaboration with AI when it comes to VR technology immersion: “Students and
teachers studied in groups in a virtual situation created by VR technology and
communicated in English throughout the whole process” (p. 1).
3Transfer learning is an area of machine learning in which knowledge gained
while learning to recognize, say, bicycles, could apply to the recognition of scooters.
Reinforcement learning is another area of machine learning whereby an algorithm
learns through trial and error. Ricciardelli and Biswas (2019) have studied FAQ-type chatbots that are able to improve their own performance.
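To make the trial-and-error idea concrete, the toy sketch below shows a simple epsilon-greedy learner that gradually comes to prefer the better of two options; it is illustrative only and does not reproduce how ChatGPT’s own reinforcement learning from human feedback is implemented.

# A self-contained toy illustration of "learning through trial and error"
# (an epsilon-greedy bandit). Pedagogical sketch only; not how ChatGPT is trained.
import random

true_rewards = {"answer_A": 0.3, "answer_B": 0.7}   # hidden payoff probabilities
estimates = {arm: 0.0 for arm in true_rewards}
counts = {arm: 0 for arm in true_rewards}
epsilon = 0.1                                        # how often to explore

for step in range(1000):
    if random.random() < epsilon:                    # explore: try an option at random
        arm = random.choice(list(true_rewards))
    else:                                            # exploit: pick the best so far
        arm = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_rewards[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average

print(estimates)  # after many trials, "answer_B" is correctly preferred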
5“The Prime Minister of the United Kingdom as of my knowledge cutoff is Boris Johnson.”
6Leverenz (2014) drew on Buchanan’s (1992) description of wicked problems as
“ill-formulated, where the information is confusing, where there are many clients and
decision makers with conflicting values, and where the ramifications in the whole
system are thoroughly confusing” (p. 15). According to Leverenz, “[b]y eschewing easy
or obvious solutions, wicked problems require us to think creatively about the problem
as well as the solution. As a result, we come to own the problem—as our vision—rather
than merely fulfilling someone else’s idea of what should be done” (p. 7).
7Calypso is considered to be the primogenitor of Soca.
References
Aljanabi, M., Ghazi, M., Ali, A. H., Abed, S. A., & ChatGpt. (2023). ChatGpt: Open
possibilities. Iraqi Journal For Computer Science and Mathematics, 4(1), 62–64.
https://doi.org/10.52866/20ijcsm.2023.01.01.0018
Anson, C. M. (2021). A heuristic approach to selecting technological tools for writing
instruction and support. In M. Gustafsson & A. Eriksson (Eds.), Negotiating the
intersections of writing and writing instruction (pp. 63–88). University Press of
Colorado.
Avarzamani, F., & Farahian, M. (2019). An investigation into EFL learners’ reflection in
writing and the inhibitors to their reflection. Cogent Psychology, 6(1), 1–13.
Azaria, A. (2022). ChatGPT usage and limitations. HAL open science, 1–9.
Azizi, M., Nemati, A., & Estahbanati, N. T. (2017). Meta-cognitive awareness of writing
strategy use among Iranian EFL learners and its impact on their writing
performance. International Journal of English Language & Translation Studies,
5(1), 42–51.
Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8(2), 5–21.
Burkhard, M. (2022, November). Student perceptions of AI-powered writing tools:
Towards individualized teaching strategies [Paper presentation]. 19th
International Conference on Cognition and Exploratory Learning in Digital Age
(CELDA 2022), Lisbon, Portugal.
Chatterjee, J., & Dethlefs, N. (2023). This new conversational AI model can be your
friend, philosopher, and guide—and even your worst enemy. Patterns, 4(1), 1–3.
Choi, J. H., Hickman, K. E., Monahan, A. B., & Schwarcz, D. (2022). ChatGPT goes to law
school. Journal of Legal Education, 71(3), 1–16.
Greer, M., & Harris, H. S. (2018). User-centered design as a foundation for effective
online writing instruction. Computers and Composition, 49, 14–24. https://doi.org/10.1016/j.compcom.2018.05.006
Leverenz, C. S. (2014). Design thinking and the wicked problem of teaching writing.
Computers and Composition, 33, 1–12. https://doi.org/10.1016/j.compcom.2014.07.001
Ma, L. (2021). An immersive context teaching method for college English based on
artificial intelligence and machine learning in virtual reality technology. Mobile
Information Systems, 2021, 1–7. https://doi.org/10.1155/2021/2637439
Munir, H., Vogel, B., & Jacobsson, A. (2022). Artificial intelligence and machine learning
approaches in digital education: A systematic revision. Information, 13(4), 1–26.
https://doi.org/10.3390/info13040203
Ng, D. T. K., Luo, W., Chan, H. M. Y., & Chu, S. K. W. (2022). Using digital story writing as a
pedagogy to develop AI literacy among primary students. Computers and
Education: Artificial Intelligence, 3, 1–14. https://doi.org/10.1016/j.caeai.2022.100054
O’Connell, T. S., & Dyment, J. E. (2013). Theory into practice: Unlocking the power and the
potential of reflective journals. Information Age.
OpenAI. (n.d.). GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities. https://openai.com/gpt-4
Pedaste, M., Mäeots, M., Siiman, L. A., de Jong, T., van Riesen, S. A. N., Kamp, E. T., Manoli,
C. C., Zacharia, Z. C., & Tsourlidaki, E. (2015). Phases of inquiry-based learning:
Definitions and the inquiry cycle. Educational Research Review, 14, 47–61.
https://doi.org/10.1016/j.edurev.2015.02.003
Purdy, J. P. (2014). What can design thinking offer writing studies? College Composition
and Communication, 65(4), 612–641. https://doi.org/10.58680/ccc201425449
Ricciardelli, E., & Biswas, D. (2019, May). Self-improving chatbots based on
reinforcement learning [Paper presentation]. 4th Multidisciplinary Conference on
Reinforcement Learning and Decision Making (RLDM 2019), Montreal, Canada.
https://tinyurl.com/5db8y3ca
Vie, S. (2008). Technology as a site of struggle: The interplay of identity, morality, and
power in four popular technologies. Review of Communication, 8(2), 130–145.
https://doi.org/10.1080/15358590701586568
Wardle, E., & Downs, D. (2022). Writing about Writing (5th ed.). Macmillan.
Williamson, B. L. (1993). Writing with a byte. Computers: An effective teaching methodology to improve freshman writing skills (ED 362 245). ERIC. https://files.eric.ed.gov/fulltext/ED362245.pdf
Zhang, M., & Li, J. (2021). A commentary of GPT-3 in MIT Technology Review. Fundamental Research, 1(6), 831–833. https://doi.org/10.1016/j.fmre.2021.11.011