
Generative AI in Research: A Practical Guide for Universities on Balancing Risks and Benefits
Enago - leading in author services

Enago is a trusted name in author services for the global research community with offices in Tokyo, Seoul, Beijing, Shanghai, Istanbul, and New York. A preferred partner for leading publishers, societies, and universities since 2005, we have worked with researchers in over 125 countries.

The ChatGPT Guide was published in October 2023 and is based on GPT-3.5, the latest free version available at the time. It contains independent views and opinions of the authors. We recommend that readers follow the latest relevant guidelines to comply with ethical use of generative AI or GPT.

Authors: Uttkarsha Bhosale, Gayatri Phadke, and Anupama Kapadia
Reach us at: [email protected] | www.enago.com

Table of Contents
Objective of the Guide
ChatGPT's Capabilities and Limitations
High-Risk and Safe Applications of ChatGPT and Other Generative AI Models
Potential Negative Consequences for an Organization/Institution
Reported Risks of Using ChatGPT
Guiding Principles for Framing Policies for AI Use in Academia
Recommendations to Ensure Ethical and Responsible AI Use in Research
Concluding Remarks
AI Solutions With Enago
Appendices
Learn with Expert Resources

1. This publication is available in Open Access under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC-BY-NC-SA 4.0) license.
2. The users accept to be bound by Enago Academy's terms of use, including but not limited to re-use of this publication and inclusion of third-party links and information.
3. Enago is not liable for any loss or damage, including indirect or consequential loss, arising from the use of this publication.

All rights reserved by Enago (Crimson Interactive Inc.)


Objective of the Guide

Artificial Intelligence (AI) has been in the global spotlight with the increased accessibility of ChatGPT
and other generative AI tools. While the academic community remains apprehensive about the use of generative AI in research, these tools continue to impact academic research and writing processes.
Consequently, it becomes essential for universities, professional organizations, and publishers to
establish clear and detailed policies to embrace AI responsibly.

Enago has curated this comprehensive guide with the following objectives:

1. Outline the benefits and risks associated with ChatGPT
2. Discuss potential use cases in education and research
3. Guide universities and institutions in setting effective policies for generative AI use

This guide aims to equip decision-makers with a balanced and informed approach to integrating
generative AI technologies responsibly in education and research settings.



ChatGPT's Capabilities and Limitations

Let’s quickly understand some of ChatGPT’s capabilities and limitations at the time this
guide was written:

Capabilities of ChatGPT

Multi-turn Contextual Text Generation

ChatGPT excels at generating human-like text responses. More importantly, it can provide informative and contextually relevant answers to various prompts. It allows users to specify desired formats, such as bullet points or code snippets, giving control over the presentation of generated text.

Enhanced Context Handling

Users can provide a system message to guide the model's behavior throughout the conversation, enabling more structured interactions. It also supports interactive and dynamic conversations, allowing users to engage in back-and-forth exchanges with the model (a minimal example is sketched below).
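For illustration only, the following is a minimal sketch of how a system message and a multi-turn exchange can be set up through OpenAI's chat completions API. It assumes the pre-1.0 openai Python client that was current when this guide was written; the model name, prompts, and environment variable are placeholder assumptions, not recommendations.

# Minimal, illustrative sketch; assumes `pip install openai` (pre-1.0 client)
# and an API key stored in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the free-tier model family discussed in this guide
    messages=[
        # The system message steers the model's behavior for the whole conversation
        # and can request a specific output format (for example, bullet points).
        {"role": "system",
         "content": "You are a cautious research assistant. Answer in concise bullet points."},
        # User (and assistant) turns are appended to support multi-turn exchanges.
        {"role": "user",
         "content": "List common limitations of large language models for literature review."},
    ],
)

# Print the model's reply; outputs should still be verified by domain experts.
print(response["choices"][0]["message"]["content"])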

Expanded Language Support

ChatGPT can converse in multiple languages, facilitating global accessibility and communication. However, text generation in less widely used languages may be limited by the lack of extensive training data.



Limitations of ChatGPT (1/2)

Limited Domain Understanding

ChatGPT struggles with domain-specific nuances, requiring caution in interpreting its responses for complex academic research.1 Validation by domain experts is essential.

Misinformation Risk

ChatGPT's training data also included unreliable and inaccurate information sources.1 This can lead to propagation of misinformation within its responses.

Lack of Source Evaluation

ChatGPT lacks the ability to assess sources or verify facts, and its opaque responses make it hard to understand its reasoning or identify citations.

Potential Biases in Training Data

The training data also contains biased information (such as underrepresentation of women in research).2 Thus, societal and cultural stereotypes may affect ChatGPT's responses.

1. Sabzalieva E, Valentini A. ChatGPT and artificial intelligence in higher education: quick start guide (2023) [Internet]. UNESCO International Institute for Higher Education in Latin America and the Caribbean [cited 2023 April 28]. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000385146.locale=en

2. OpenAI. How should AI systems behave, and who should decide? (February 16, 2023) [Internet]. OpenAI. [cited 2023 August 28]. Available from: https://openai.com/blog/how-should-ai-systems-behave



Limitations of ChatGPT (2/2)

Repeating and Reusing Existing Data (or Regurgitated Content)

ChatGPT recycles information as it creates responses based on statistical probabilities of which word may appear next. So, it may produce the text with different words without changing the essence of the response. This may limit innovation, advancements, creativity, and originality.

Imaginative but Inaccurate Output (or Hallucinations)

ChatGPT sometimes generates responses that seem plausible but are factually wrong or unrelated to the context. These hallucinations are often a result of training data biases, a lack of real-world comprehension, and the AI model's technical limitations.3 When addressing unfamiliar topics, it tends to provide inaccurate responses.

Did you know?

In March 2023, a group of researchers assessed if ChatGPT could reliably generate accurate references for literature searches.4 Out of the 35 citations provided by ChatGPT, only 2 were accurate references.

[Bar chart: number of citations by category - Fake: 21, Slightly erroneous: 12, Real: 2]

3. Gungor A. ChatGPT: What are hallucinations and why are they a problem for AI systems [Internet]. Bernard Marr. 2023 [cited 2023 May 23]. Available from: https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/

4. McGowan A, Yunlai G, Dobbs M, Shuster S, Cotter M, Selloni A, Goodman M, Srivastava A, Cecchi GA, Corcoran CM. Psychiatry Research. 2023;326:115334. Available from: https://doi.org/10.1016/j.psychres.2023.115334



High-Risk and Safe Applications of ChatGPT and Other Generative AI Models (1/2)

We provide below a summary of use cases to help researchers and educators make informed decisions about the application of ChatGPT and other generative AI models:

Conducting Research

Hypothesis Generation
Safe applications: Initial research idea discovery or inspiration for new avenues
High risk: Full hypothesis generation, as ideas may be incomplete, plagiarized, or outdated

Research Design
Safe applications: Refining research design for enhanced clarity and coherence
High risk: End-to-end experimental design due to potentially impractical or biased recommendations

Literature Review & Meta-analysis
Safe applications: Automation of vast information search and statistical data analysis
High risk: Unauthorized use of copyright data or falsification of data sources to support study results

Summarization
Safe applications: Extraction of key points for quick review
High risk: Oversimplification and non-contextual summarization of complex research findings

Data Interpretation
Safe applications: Generation of initial hypotheses or identification of data patterns
High risk: Misinterpretation of complex datasets without context

Note: Established low-risk applications of generative AI, such as writing emails, essays, and application letters, are not included here.



High-Risk and Safe Applications of ChatGPT and Other Generative AI Models (2/2)

Academic Writing, Editing, and Publishing

Manuscript Writing
Safe applications: Generation of titles and keywords, and structuring the research outline
High risk: Generation of results and discussion prompting misleading conclusions

Language Translation
Safe applications: Aids with specific vocabulary and expressions
High risk: Translations may lack precision due to language complexity and cultural references

Plagiarism Checker and Paraphrasing
Safe applications: Assists with rewording of sentences or checking for verbatim text copy
High risk: Unintentional plagiarism if the output closely resembles the original text

Reference Management
Safe applications: Basic guidance on how to format and structure citations
High risk: May not align with specific citation styles or the guidelines provided by journals or academic institutions

Grammar Check and Editing
Safe applications: Correction of basic grammar and syntax errors in large amounts of text in less time
High risk: Inadequate compliance with academic requirements, fact-checking, and understanding discipline-specific terms

Journal Finder
Safe applications: Find a wide range of journals on broader subjects
High risk: Overlook relevant publishing opportunities and ignore key factors influencing journal selection process

Peer Review
Safe applications: Identification of typical language and grammar errors
High risk: No subject matter expertise and inability to address ethical considerations



Potential Negative Consequences for an
Organization/Institution

ChatGPT's capabilities seem to tempt users to rely heavily on its responses without conducting
thorough evaluations. This can inadvertently lead to the dissemination of plagiarized content or
neglect of proper attribution and citation practices. Preventing these issues demands rigorous
policies: validating outputs, correcting biases, upholding legal and ethical norms, and balancing
AI with human expertise.
Threat to Critical Thinking
Overreliance on ChatGPT or similar AI systems without human oversight may erode
expertise and judgment, impairing research quality and tarnishing reputation.

Risk of Publishing Inaccurate or Misleading Information


ChatGPT may generate responses that are factually incorrect or misleading, delivered in a confident and plausible tone. Relying on such outputs without proper verification can harm the credibility of the organization.

Perpetuation of Biased Content


The responses can be biased and may amplify existing societal biases. If unaddressed, this can cause reputational damage to the organization by inadvertently propagating such biased content.

Legal and Compliance Issues


ChatGPT may produce unlawful content or make unsubstantiated claims. Failure to
recognize and rectify such outputs can risk legal consequences and financial losses.

Privacy and Data Security


Inadequate data protection measures in the system could expose confidential information,
compromising the privacy and security of individuals or the organization.



Reported Risks of Using ChatGPT

Questions of Accountability and Intellectual Ownership

ChatGPT, like other AI algorithms, can create new inventions or products, leading to questions about who owns the intellectual property rights. Researchers must ensure that they have the legal right to use the AI technologies they employ. The accountability for ChatGPT's output rests with its users.5

Ethical Concerns

Any data users add into prompts for ChatGPT are stored in its dataset; this poses ethical concerns as users may add confidential and sensitive information. A particularly concerning area of use is the integration of ChatGPT or generative AI in healthcare. For example, the use of generative AI in a mental health app has already drawn intense scrutiny. Medical applications will require stricter adherence to informed consent laws and ethical guidelines to protect users' rights and privacy.6

Did you know?
Several organizations and publishers have already restricted the addition of AI as an author, reiterating that researchers must take full ownership of their work.6 (Source: nature.com)

5. Tsigaris P, Teixeira da Silva JA. The role of ChatGPT in scholarly editing and publishing. Eur Sci Ed [Internet]. 2023 [cited 2023 May 15];49:e101121. Available from: https://ese.arphahub.com/article/101121/list/8/

6. Quarles S. Online mental health company uses ChatGPT to help users respond to experiment - raising ethical concerns [Internet]. Business News. 2023 [cited 2023 Jul 20]. Available from: https://biz.crast.net/online-mental-health-company-uses-chatgpt-to-help-users-respond-to-experiment-raising-ethical-concerns-around-healthcare-and-ai-technology/



Guiding Principles for Framing Policies for
AI Use in Academia
While it is difficult to provide a universally applicable blueprint for developing policies related to AI use in research, administrators should consider some key principles that can be adapted to fit each institution's needs. These principles allow a standardized approach to defining an achievable policy and set the trajectory of AI integration at the organizational level.

These principles cover crucial steps in ensuring a successful policy as follows:


1. Define the scope of AI adoption, specifying where, when, and how AI use is allowed across educational and research activities.
2. Establish ethical guidelines based on existing frameworks. Provide access to ethics committees.
3. Identify and plan for IT and non-IT support, including data storage, validation tools, and access to funding.
4. Assign points of contact for information as well as for reporting and investigating misuse.
5. Consult diverse stakeholders, including researchers, AI ethicists, funders, and public representatives, while building policies.
6. Communicate transparently and avoid grey areas in policies for users.

7. Adopt a culture of continuous learning and training to stay updated with the latest technology
and ethics in AI research.
These guiding principles can serve as a foundation for developing comprehensive policies that address ethical,
technical, and social considerations, while also fostering responsible and innovative AI research practices.



Recommendations to Ensure Ethical and Responsible AI Use in Research (1/2)

Re-think Performance Evaluations

Personalize assignments and diversify assessment formats
Create individualized assignments, making it challenging for students to utilize AI-generated model answers. Integrate various assessment types such as handwritten essays, in-person discussions, oral presentations, and hands-on projects. This diversity reduces students' reliance on AI-generated responses.

Provide timely and constructive feedback
Build open lines of interaction and approachability with students. Offer feedback on assignments promptly, allowing students to improve their work and learn from mistakes. This feedback loop discourages cheating by highlighting the significance of authentic effort and learning.

Facilitate Ethical AI Usage

Leverage technology to detect plagiarism
Utilize plagiarism detection software to identify instances of copied or AI-generated content in student assignments. These tools act as deterrents and help identify potential cheating cases.

Ensure informed consent
Establish protocols to obtain informed consent from participants involved in AI-enabled research, ensuring they are aware of the data collection, usage, and potential implications.

Secure data protection
Implement robust data security measures to safeguard sensitive and confidential information, adhering to relevant privacy regulations and industry best practices.

Broaden Research Strategies

Encourage interdisciplinary collaboration
Foster partnerships between AI experts and domain-specific researchers to ensure AI technologies are applied appropriately and effectively in research projects.

Establish clear objectives
Clearly define the goals and intended outcomes of AI implementation in research to guide decision-making. Ensure alignment with ethical considerations for handling sensitive data and medical information.

Establish validation procedures
Implement mechanisms for independent validation and verification of AI-generated outputs to ensure accuracy and reliability and to mitigate the risks of misinformation.



Recommendations to Ensure Ethical and Responsible AI Use in Research (2/2)

Establish Training Programs

AI literacy
Regularly assess AI use in research for impact, effectiveness, and policy adherence. Additionally, provide training on AI technologies, limitations, risks, and ethics to researchers, faculty, and staff.

Ethical awareness
Foster a culture of ethical awareness and responsible AI use by promoting discussions, workshops, and seminars on AI ethics and responsible research practices.

Strengthen Compliance Measures

Facilitate ethical integration
Create policies that outline ethical principles and standards for AI usage, emphasizing fairness, transparency, and accountability. Expand review boards to evaluate AI-based research proposals, ensure compliance, and provide guidance.

Data governance and access
Define protocols for data sharing, access, and storage, ensuring legal and ethical compliance while protecting intellectual property rights.

Did you know?
Although many universities have developed policies to regulate AI use, very few have a definite stance on research and academic writing. Our analysis of the policies set up by the top 25 universities (QS Rankings 2023) highlights the need for standardizing key components to be considered when establishing such guidelines. Read our insights here!



Concluding Remarks

The limitations of generative AI do create opportunities for inaccurate and fake research; however,
universities should not expect individual researchers to completely avoid this technology in light of
its widespread use. Instead, institutions can focus on setting well-defined guidelines that outline requirements and workflows and that set up regulations to ensure responsible AI use.

Utilizing ChatGPT and other generative AI tools will require continuous attention and
adherence to evolving AI guidelines. Our key recommendations are as follows:

1. Continuous Vigilance: Generative AI technology evolves through ongoing training and optimization. It is crucial to remain vigilant and up-to-date with AI advancements and best practices.
2. Human Expertise is Invaluable: In complex domains like academic research and writing, generative AI cannot replace human skills and expertise. While AI accelerates processes, unchecked use poses risks to research integrity and scholarly reputation.
3. Safeguarding Research Integrity: To safeguard the credibility and reputation of institutions, it is imperative to recognize the limitations of these tools and plan ahead.

To achieve this, university administrators could utilize a four-step process for setting up sustainable policies:

Step 01: Set Guiding Principles
Step 02: Define Policy Frameworks
Step 03: Demonstrate Safe Use Cases
Step 04: Re-visit and Update Policies Regularly



AI Solutions With Enago

Enago is here to support your journey with AI and help you implement the right solutions.
Advisory and Consulting Service
Our AI consultants can help you identify the right use cases as well as provide you a clear
forward-looking direction for your Gen AI needs, keeping in mind your strategic goals.

AI-powered Writing and Reading Assistants

Our AI-powered Writing and Reading Assistants, Trinka AI and Enago Read, provide a comprehensive solution to researchers, helping them be more efficient and productive.

Publication Optimization
Enago Reports is a one-stop solution to ensure superior language quality and technical compliance, eliminate bias, automatically proofread, facilitate journal submission, and identify plagiarism or AI-generated content.

Education and Training Solutions


We provide personalized training on the use of generative AI to upskill users so that they are more effective.

Control the future of knowledge creation and dissemination with smart AI solutions that redefine
the boundaries of what's possible in the world of academia.

Reach out to us for a personalized exploration of AI solutions.


Email: [email protected]



Appendices

Table of Contents
• Brief Introduction to ChatGPT
• Ongoing Legal Concerns Around ChatGPT and Other Generative AI Tools
• Comparative Example of AI-assisted and Human-assisted Scholarly Editing
• Need for Guidance From Universities: Researchers' Perspectives From Enago's Poll



Brief Introduction to ChatGPT
ChatGPT is an advanced large language model developed by OpenAI, trained on all freely-available
internet sources until September of 2021.1* Its generative pre-trained transformer (GPT) models
were trained to analyze patterns, relationships, and context within this immense training data.
Responses by ChatGPT are the result of statistical patterns and associations within that data,
rather than a direct experience from the physical world.2 Essentially, the model predicts the most
likely words or sentences that should follow, by making educated guesses based on the statistical
patterns learned from the training data.
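As a purely illustrative sketch of this "predict the next likely word" idea, consider a toy model that samples the next word from a set of learned probabilities. The word list and probability values below are invented for illustration; this is not ChatGPT's actual architecture or vocabulary.

# Toy illustration of next-word prediction by sampling from learned probabilities.
# Deliberately simplified sketch; the words and probabilities are hypothetical.
import random

# Hypothetical probabilities a model might assign to the next word,
# given the context "The results of the study were ..."
next_word_probs = {
    "significant": 0.42,
    "inconclusive": 0.23,
    "consistent": 0.18,
    "surprising": 0.12,
    "fabricated": 0.05,  # unlikely but possible choices are one source of errors
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The model "writes" by repeatedly choosing a likely continuation;
# it never checks that choice against the physical world or a primary source.
next_word = random.choices(words, weights=weights, k=1)[0]
print("The results of the study were", next_word)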

ChatGPT created a buzz due to its user experience and functionality:

1. Generation of human-like, coherent, and contextually relevant responses
2. Easy-to-use chat interface where a text "prompt" can be input
3. Fine-tuning capabilities that enable development of targeted applications

*OpenAI introduced web browsing in GPT-4 for Plus and Enterprise users in late September 2023, thereby eliminating the knowledge cutoff limitation. However, this is currently not accessible in the free version.

1. ChatGPT in Academic Writing and Publishing: A Comprehensive Guide [Internet]. ResearchGate. [cited 2023 May 3]. Available from: https://www.researchgate.net/publication/369817340_Chapter_2_ChatGPT_in_Academic_Writing_and_Publishing_A_Comprehensive_Guide

2. Torres JL. Tec de Monterrey recomienda a su comunidad uso inteligente de ChatGPT [Tec de Monterrey recommends intelligent use of ChatGPT to its community] [Internet]. Tec.mx. [cited 2023 May 11]. Available from: https://conecta.tec.mx/es/noticias/nacional/institucion/tec-de-monterrey-recomienda-su-comunidad-uso-inteligente-de-chatgpt



Ongoing Legal Concerns Around ChatGPT and Other Generative AI Tools (1/2)
Lack of transparency in the training data used for generative AI tools and their capability to generate human-like text have raised concerns from multiple fields about potential violations of academic integrity policies in non-academic writing and research reporting.

1. Nature Prohibits AI-generated Images or Videos:


Nature journal has made the decision to refrain from publishing visual content sourced from
generative AI applications, such as photography, videos, or illustrations, due to concerns
regarding integrity, attribution to original sources, consent for re-use of copyrighted materials,
and privacy.3 Nature will, however, allow the incorporation of AI-assisted text, provided
appropriate acknowledgments are made and sources are provided.
2. That’s a No to Using Generative AI for Peer Review, say Funding Agencies:
Funding agencies like the National Institutes of Health (NIH) and the Australian Research Council
(ARC) are banning the use of generative AI tools for peer-review of grant proposals.4 Primary
concerns include confidentiality, errors in identifying novelty, bias, lack of creativity, and
accountability. The use of AI-written reviews could also compromise originality of thought, lead
to generalized feedback, and even constitute plagiarism.

3. Nature Editorials. Why Nature will not allow the use of generative AI in images and videos [Internet]. Nature. Vol 618, 8 Jun 2023 [cited 2023 July 28]. Available from: https://www.nature.com/articles/d41586-023-01546-4

4. Kaiser J. Science funding agencies say no to using AI for peer review [Internet]. ScienceInsider. 14 Jul 2023 [cited 2023 July 28]. Available from: https://www.science.org/content/article/science-funding-agencies-say-no-using-ai-peer-review



Ongoing Legal Concerns Around ChatGPT and Other Generative AI Tools (2/2)
3. Class Action Lawsuit Against OpenAI and Microsoft for Disregard of Privacy:
A lawsuit against OpenAI and Microsoft claims that several generative AI products such as
ChatGPT, Dall-E, and Vall-E involve the unauthorized scraping of personal data.5 Claims include
past and continued use of private and personally identifiable information, from millions of
internet users, including children, without their consent or knowledge. Furthermore, the lawsuit
claims that such breach of privacy has increased since OpenAI became a for-profit business.
4. Renowned Authors Write an Open Letter for Protection of Their Content Rights:
Nearly 8,000 prominent writers (including Nora Roberts, Viet Thanh Nguyen, Michael Chabon, and Margaret Atwood) have urged AI companies, in a recently published letter, to stop using their work in training data without explicit permission or defined compensation, as concerns grow about an impingement on writers' livelihoods.6 Text-based generative AI applications, which scrape authors' content and could generate new content in their writing styles, have heightened these concerns.
5. AI Detection Tools Found Biased Against Non-native English Writers:
As AI tools have gained popularity, so have AI detection tools. While relying solely on generative AI detectors to identify instances of academic misconduct is not recommended, they are anticipated to assist in flagging potential issues. However, a recent study calls into question the fairness and robustness of such tools, as some GPT detectors misclassified non-native English writing as AI-generated.7 It is essential to address the biases in these detectors to avoid marginalizing and perhaps even penalizing non-native English speakers as publishers and educators look to increase implementation of such detection tools.

5. Hill C. OpenAI and Microsoft face class action lawsuit for allegedly violating copyright and privacy laws [Internet]. Legal IT Insider. 29 Jun 2023 [cited 2023 July 28]. Available from: https://legaltechnology.com/2023/06/29/openai-and-microsoft-face-class-action-lawsuit-for-allegedly-violating-copyright-and-privacy-laws/

6. Knight L. Authors call for AI companies to stop using their work without consent [Internet]. The Guardian. 20 Jul 2023 [cited 2023 July 28]. Available from: https://www.theguardian.com/books/2023/jul/20/authors-call-for-ai-companies-to-stop-using-their-work-without-consent

7. Liang W, Yuksekgonul M, Mao Y, Wu E, Zou J. GPT detectors are biased against non-native English writers. Patterns. 2023;4(7):100779. Available from: https://doi.org/10.1016/j.patter.2023.100779



Comparative Example of AI-assisted and Human-assisted Scholarly Editing
While ChatGPT appears to provide well-edited text, its capabilities fall short of expectations in academic editing. Moreover, expert intervention does improve the text by providing suitable subject-matter-specific editing. Aside from being unable to understand technical nuances, generative AI's editing has been inefficient at several other important aspects of academic editing, such as comprehending the full scope of academic writing requirements, including context, citation, consistency, tone, logic, clarity, cultural sensitivity, plagiarism detection, subject-specific expertise, and ethical considerations. Read more about this here.

Original Text
In current study we have single stock whose price observe a switching geometric Brownian motion. Also, the stockpay no dividends.

Edited by ChatGPT
In current study, we have a single stock whose price follows a switching geometric Brownian motion. Additionally, the stock pays no dividends.

Edited by a Human (Enago Editor) - Publication-ready Statement
This study examines a no-dividend stock whose prices exhibit regime-switching geometric Brownian motion.

Error Explanation
The human editor combined the two sentences with the use of technical jargon. In contrast, the ChatGPT version could not revise "stock pays no dividends" and "switching geometric Brownian motion" to "no-dividend stock" and "regime-switching geometric Brownian motion", respectively.

Expert Tip
Aside from being unable to understand technical nuances, generative AI's editing has been inefficient at several other important aspects of academic research and writing. Please find more examples here: https://www.enago.com/academy/limitations-of-deepl-write-chatgpt-editing/



Need for Guidance From Universities: Researchers' Perspectives From Enago's Poll

Even six months after the disruptive release of ChatGPT, universities are struggling to propose cohesive policies for its usage in research settings. In the meantime, our recent researcher poll highlights a clear need for guidance.

While a majority of researchers acknowledged individual responsibility, more than 60% of researchers held funding bodies (24%), peer reviewers (20%), and even ChatGPT itself (15%) responsible for defining how to use generative AI responsibly.

Poll details:
1. The survey was conducted on English, Japanese, and Korean websites.
2. A total of 7,748 researchers answered the poll question.

Poll question: Who should be responsible for ensuring the accuracy and ethical standards of research content created using ChatGPT?

[Bar chart of responses: The researcher/s - 37%; Funding body - 24%; Peer reviewers - 20%; ChatGPT developers - 15%; Regulatory bodies - 4%]


Learn with Expert Resources

Resources for more information

1. https://www.enago.com/academy/manuscript-preparation-with-ai/

2. https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf

3. https://www.enago.com/academy/human-editors-vs-chatgpt-publication-ready-research/

4. https://www.enago.com/academy/negative-costs-of-using-chatgpt-to-edit-research-manuscript/

5. https://research.aimultiple.com/generative-ai-in-life-sciences/

6. https://www.enago.com/academy/why-ai-alone-is-not-enough/

7. https://research.aimultiple.com/chatgpt-survey/

8. https://www.researchgate.net/publication/369359524_ChatGPT_and_AI-Written_Research_Papers_Ethical_Considerations_for_Scholarly_Publishing

9. https://www.cam.ac.uk/stories/ChatGPT-and-education

10. https://axial.acs.org/publishing/ai-in-publishing-the-ghost-writer-in-the-machine

11. https://www.enago.com/academy/generative-ai-ethics-in-academic-writing/

12. https://www.enago.com/academy/university-policies-for-AI-use-education-research/

13. https://www.enago.com/academy/chatgpt-and-ai-tools-in-academic-publishing/



www.enago.com
