
Large Language Models for qualitative research in Software Engineering: Exploring opportunities and challenges


Abstract
The recent surge in integrating Large Language Models (LLMs) like ChatGPT into qualitative research in software engineering, much like in other professional domains, demands closer inspection. This vision paper explores the opportunities of using LLMs in qualitative research to address many of its legacy challenges, as well as potential new concerns and pitfalls arising from the use of LLMs. We share our vision for the evolving role of the qualitative researcher in the age of LLMs and contemplate how they may utilize LLMs at various stages of their research experience.

Keywords
Large language models - LLMs - Qualitative research - Software engineering

Contents

Introduction
1 Addressing legacy challenges of SE qualitative research with LLMs
1.1 Time-intensive work
1.2 Generalizability
1.3 Consistency
1.4 Subjectivity
2 New frontiers, new challenges
2.1 Ethical and privacy concerns
2.2 Model biases
2.3 Lack of contextual and philosophical understanding
2.4 Dependency on technology
2.5 Quality control
2.6 Reproducibility
2.7 Context of related work
2.8 Critical thinking
2.9 Intellectual property (IP)
3 The evolving role of the human researcher
3.1 Ensuring ethical practices
3.2 Prompt engineering
3.3 Defining research questions
3.4 Data collection
3.5 Quality checking
3.6 Theorizing
4 The Promise of LLMs across varied research expertise
5 Conclusion
References

Introduction

The advent of Large Language Models (LLMs) such as OpenAI's ChatGPT and Google's Bard has been nothing short of a paradigm shift in academia, much like in many other professions. Within a year of their inception, these models have become a focal point of academic scrutiny, with researchers exploring their potential across a plethora of domains (Bano et al. 2023). From analyzing their pivotal role in research and academia to understanding their transformative potential in educational settings, the emerging body of literature paints an intriguing picture of the far-reaching implications of LLMs. We found increased interest from researchers in the use of LLMs in Software Engineering (SE), and the work is characterized by a multifaceted evaluation of potential benefits and inherent challenges. Ozkaya (2023) projects an AI-augmented software development lifecycle, with AI assistants contributing to various SE tasks like specification generation and legacy code translation.

Concurrently, Jalil et al. (2023) analyze ChatGPT's role in software testing education, revealing its potential and the risks of overreliance due to varying response accuracy. Ebert and Louridas (2023) discuss how generative AI can automate SE tasks, urging a balanced integration considering ethical and privacy concerns. Scoccia (2023) gathers early adopter experiences with ChatGPT's code generation, indicating its significant impact yet mixed usage outcomes.
The empirical study by Kuhail et al. (2023) presents a
nuanced perspective on AI's role in SE, suggesting increased
trust with frequent use but also heightened job security
concerns. Arora et al. (2023) propose a SWOT analysis for LLMs in Requirements Engineering, suggesting a cautiously optimistic view towards AI's role in elicitation and validation. Nguyen-Duc et al. (2023) outline a research agenda for Generative AI, emphasizing its potential for partial automation in SE tasks.

Finally, Hou et al. (2023) provide a systematic literature review of LLMs in SE, offering a comprehensive roadmap for future research and practical applications in SE. Each of these contributions underscores the transformative potential of LLMs in SE, while also acknowledging the complexity of their integration and the need for ongoing research to navigate the challenges they present. However, augmenting research processes with LLMs is yet to be fully investigated. In the evolving landscape of SE research, the intertwining of technological advancements with human-driven insights has been a constant. While no one can claim they saw the exact nature and shape of LLMs emerging, predictions have been made about "further advancements in technology and artificial intelligence" offering "unexplored potential in supplementing, augmenting, and automating parts of qualitative data analysis to ease the human effort and improve both the quality and scale of theory development". The work of Byun et al. (2023) shows that LLMs like GPT-3 can produce text that is comparable to that written by humans, even in qualitative analysis, which traditionally relies heavily on human insight. Their work demonstrates that AI can not only generate text but also identify themes and provide detailed analysis similar to that of human researchers. They suggest that AI could potentially match human capabilities in interpreting qualitative data. Their findings indicate a promising avenue for using AI in qualitative research, where it could serve as a tool to both augment and potentially replace human analysis, raising important questions about the future role of humans in research processes. Bano et al. (2023) challenge these claims. They acknowledge the potential of AI to align with human analysis in some cases but caution against an overreliance on AI due to significant disparities between AI and human reasoning. Their study reveals that while AI, specifically LLMs like ChatGPT 3.5 and GPT-4, can sometimes provide logical classifications, there is often a lack of consensus between AI-generated and human-generated insights, raising questions about the AI's capability to fully grasp the complexities of human language and the contextual nuances important in qualitative research.

Despite preliminary progress, there remains a significant lack of clarity on how LLMs compare to human intelligence in qualitative research (Bender et al. 2021). The role of LLMs in SE qualitative research presents both unprecedented opportunities and inherent risks, especially when viewed from the perspective of researchers at different stages of their academic journey.

1. Addressing legacy challenges of SE qualitative research with LLMs

Historically, qualitative research in SE has grappled with challenges like the time-intensive nature of the work, limitations to scalability due to its manual nature, and the inherent subjectivity that qualitative methodologies can sometimes entail.

1.1 Time-intensive work
Conducting qualitative research often requires intensive data analysis, which can be time-consuming. LLMs can help to automate or expedite parts of these processes. For example, they could help to make sense of large amounts of textual data, identify themes and patterns within data, and generate initial codes or categories. Such technical assistance could significantly speed up the data analysis process and allow researchers to handle larger datasets, thereby allowing them to scale qualitative analysis in ways hardly possible through a commensurate amount of manual effort.

1.2 Generalizability
Qualitative research is hard to generalize universally or to wider populations outside the originally studied context, which is typically a relatively narrow phenomenon. Based on the constructivist worldview, generalization may even be undesirable. However, the use of AI-based models and advanced natural language processing, such as those offered by LLMs, can help improve the relevance and generalizability of qualitative findings, such as descriptive findings, taxonomies, and theories, by expanding the contexts studied (Hoda 2021).

1.3 Consistency
Variations in qualitative data analysis are expected to exist across different researchers, but consistency can still be an issue for individual researchers. Depending on several factors, including external and personal circumstances, achieving high levels of human consistency is a known challenge (Gentles et al. 2015; Watson 2006). On the other hand, LLMs, being computing entities, can process and analyze data consistently, provided the prompts themselves are consistent. Improved consistency is likely to lend itself to better repeatability of the process and higher reproducibility of the research outcomes. This may be particularly desirable from a positivist perspective.
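Consistency between coders, whether two humans or a human and an LLM, can also be checked empirically with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch in Python; the code labels below are invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same excerpts."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: fraction of excerpts both coders labelled identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned to four interview excerpts by a human and an LLM.
human = ["usability", "performance", "usability", "security"]
llm   = ["usability", "performance", "security", "security"]
print(round(cohens_kappa(human, llm), 3))  # → 0.636
```

Values near 1 indicate agreement well beyond chance, while values near 0 suggest the apparent agreement could be coincidental, flagging prompts or codebooks that need tightening.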
1.4 Subjectivity
While it may be impossible or even undesirable to eliminate human subjectivity from qualitative research, LLMs could potentially add a layer to the analysis. For example, the use of LLMs can help a team of qualitative researchers discuss and agree on the concepts emerging from their analyses. Furthermore, the concepts generated by LLMs can act as a 'third party' reference to help address and reconcile differences emerging from personal beliefs, experiences, or emotions. It seems early and somewhat naive to suggest that an LLM can act as an objective baseline or a source of a deciding 'expert opinion'. LLMs, like humans, are known to harbor their own set of biases based on the training data and parameters that can influence their inference logic when it comes to qualitative research (Navigli et al. 2023). With rapid enhancements in LLM capabilities, these aspects can be re-examined in the future.

2. New frontiers, new challenges

While LLMs may seem to be the panacea for many traditional qualitative research issues, they bring with them a set of unique challenges. We summarize these below:

2.1 Ethical and privacy concerns
Incorporating LLMs into data analysis poses ethical and privacy challenges, especially with sensitive data. Ethical issues include ensuring data consent, proper anonymization to enable de-identification, and addressing biases that AI may perpetuate (Arora et al. 2023; Ebert and Louridas 2023). These concerns necessitate a responsible AI framework that respects individual privacy and data rights. For ethical usage, Nguyen-Duc et al. (2023) recommend integrating AI with an awareness of ethical implications and privacy risks, such as by using AI to enhance rather than replace human decision-making, and keeping sensitive raw data local to avoid exposure. Ozkaya (2023) further suggests robust data governance to ensure AI applications adhere to ethical standards and privacy regulations, balancing AI's potential with necessary oversight.

2.2 Model biases
Like all machine learning models, LLMs can have inherent biases based on the data they were trained on, which can be flawed or insufficient. This could potentially skew the analysis or conclusions drawn from their use in qualitative research. For example, in SE qualitative research, if an LLM is trained on data that predominantly consists of contributions from male developers, it may inadvertently downplay or overlook the communication styles, coding preferences, or problem-solving approaches more common among female developers or those from underrepresented groups. In such cases, researchers have the responsibility to be aware of and acknowledge the inherent biases in the underlying data on which the LLMs are trained, as part of the limitations of their research.

2.3 Lack of contextual and philosophical understanding
While LLMs can process and generate text based on patterns learned, they lack a true understanding of the context, which is crucial in qualitative research. This could lead to oversights and misinterpretations. For example, in a SE qualitative study analyzing developer communication on issue trackers, an LLM might interpret technical jargon or project-specific slang literally, missing the nuanced meaning intended by the developers. While LLMs could identify and summarize discussions on a given research topic from various sources, articles, and grey literature, they might not fully grasp the subtleties of concerns that require a deeper philosophical understanding and contextual awareness, which human researchers provide. In such cases, researchers should pay special attention to any missing or misinterpreted contexts.

2.4 Dependency on technology
There is a risk of becoming overly dependent on technology for research. While LLMs can assist in data analysis, they should not replace the human element of research, which includes critical thinking, contextual understanding, and ethical judgment (Bano et al. 2023). To educate and train the next generation of qualitative researchers, it is important not to rely overly on augmented research technologies such as LLMs. We elaborate further on the level of expertise of researchers later in this paper.

2.5 Quality control
Ensuring the quality and accuracy of the results generated by LLMs can be challenging. Researchers need to be vigilant and critical when interpreting the outputs of LLMs. For example, ChatGPT is known to be prone to hallucinations, instances where LLMs generate inaccurate or entirely fabricated information. Not checking for inaccurate and fake information generated by LLMs can land researchers in trouble. To address the issue of hallucinations, the involvement of human researchers is imperative. As pointed out by Rudolph et al. (2023) and Alkaissi and McFarlane (2023), these hallucinations can lead to misinterpretation of research outcomes, compromise the validity of results, and introduce bias or error. To counteract
this, researchers must scrutinize, verify, and interpret the outputs of LLMs meticulously, ensuring that the conclusions are aligned with the actual context and maintain the integrity of the research. This human intervention is necessary not only for validation but also to continually refine and calibrate the models, thereby improving their understanding and minimizing potential drawbacks (Watkins 2023).

2.6 Reproducibility
As LLMs are continuously updated and old models are deprecated, the ability to reproduce an analysis with the same precision diminishes over time, a phenomenon known as model drift. Researchers may provide exhaustive details on their methodology, including data sets, prompts, parameters, and the versions of models used, but this does not guarantee that the same analysis can be reproduced in the future by LLMs. Unlike human researchers, whose insights and analytical reasoning can be revisited or clarified, LLMs do not offer the possibility to revisit the reasoning behind their outputs once the model version is no longer available.

2.7 Context of related work
Integrating an LLM's data analysis within the broader context of related work poses a significant challenge, primarily because the model cannot access the entirety of potentially relevant literature due to constraints on data availability and access rights such as paywalls. This limitation hampers the LLM's ability to draw comprehensive connections and insights that are informed by the existing research, potentially narrowing the scope and depth of its analytical outputs. In the future, if LLMs are capable of handling large quantities of raw data from literature along with the context of related work, this could lead to augmenting systematic literature reviews (Kitchenham 2004) with LLMs.

2.8 Critical thinking
Developing critical thinking in LLMs is a complex challenge, as it involves the model's exposure to a variety of data, including incorrect statements, to enhance its evaluative capabilities (Emmert-Streib 2023). To ensure LLMs are exposed to such a range of data, researchers could deliberately include datasets with known errors or contradictory information during the training phase. This method could potentially help LLMs learn to discern and evaluate the accuracy of the information they analyze. However, this approach also raises concerns about how to effectively teach LLMs to recognize and appropriately handle incorrect information without perpetuating or amplifying these errors in their outputs. Currently, it is unclear how critical thinking might be incorporated in LLMs when analyzing qualitative data.

2.9 Intellectual property (IP)
IP concerns are another dimension to consider in the use of LLMs in research. The contribution of LLMs' responses and analyses to the creation of a research output could raise questions about authorship, such as whether ChatGPT should be credited as a co-author, reflecting the model's role in data processing and knowledge generation. Another layer of complication is the copyright and IP of the data on which LLMs are trained. Determining the extent of LLMs' contribution, and that of underlying sources, and the implications for IP rights and academic recognition is an ongoing debate in the research community.

3. The evolving role of the human researcher

Amid the LLM revolution, the role of the human researcher is undergoing a nuanced shift.

3.1 Ensuring ethical practices
Researchers must ensure that their studies are conducted ethically. This includes obtaining informed consent from participants, ensuring privacy and confidentiality, and treating the data in a way that respects the rights and dignity of the participants/sources.

3.2 Prompt engineering
Prompt engineering is emerging as a crucial skill, underscoring the fact that the quality of LLM outputs hinges significantly on the inputs they receive. It is important to note that prompt engineering can also be a stage where researchers might unintentionally introduce bias, as the way questions are framed can influence the direction and nature of the LLM's response, potentially reinforcing certain perspectives or excluding others.

3.3 Defining research questions
Although LLMs can be used to brainstorm research topics and ideas, the researcher must define the research questions and objectives. An LLM can help process data, but LLMs do not have the intellectual curiosity, intention, motivation, or enough information to set research directions, which will depend on the researcher.

3.4 Data collection
While LLMs can help process and analyze large amounts of data, and now, with web searchability, can collect data as well, it is still the researcher's responsibility to collect the data in certain qualitative research contexts such as interviews or surveys. However, in some instances where it is extremely difficult to recruit real participants for research, e.g. patients with chronic ailments in the health domain, LLMs can be used to simulate and role-play certain personas for data collection. The known limitations of using personas in research, as well as the lack of lived human experience in simulated data, will continue to be a challenge.
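The persona role-play idea above amounts to encoding the persona and the elicitation protocol directly in the prompt. A minimal sketch of how such a prompt might be assembled; the persona fields, wording, and instruction style are entirely hypothetical, and any study relying on simulated participants would still need the ethics scrutiny discussed earlier:

```python
def build_persona_prompt(persona: dict, question: str) -> str:
    """Assemble a role-play prompt for simulated interview data collection."""
    traits = "; ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        "You are role-playing a research interview participant.\n"
        f"Persona — {traits}.\n"
        "Answer in first person, consistently with the persona, "
        "and say 'I don't know' rather than inventing specifics.\n\n"
        f"Interviewer: {question}"
    )

# Hypothetical persona for a hard-to-recruit participant group.
persona = {"role": "patient with a chronic ailment",
           "context": "uses a symptom-tracking app daily"}
prompt = build_persona_prompt(persona, "How do you decide what to log in the app?")
print(prompt)
```

Keeping prompt assembly in a small, versioned function like this also makes it easier to report the exact persona wording alongside the simulated data.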
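Whether data comes from participants or simulated personas, the reproducibility concerns raised in Section 2.6 suggest storing each LLM output together with the exact conditions that produced it, so the analysis can be audited even after a model version is retired. A minimal sketch of such an audit record; every field name here is illustrative rather than a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model: str, params: dict, prompt: str, output: str) -> dict:
    """Bundle an LLM interaction with enough metadata to audit it later."""
    return {
        "model": model,                    # provider + version string used
        "params": params,                  # temperature, max tokens, etc.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical interaction from an LLM-assisted coding session.
record = audit_record(
    model="hypothetical-llm-2024-01",
    params={"temperature": 0.0},
    prompt="Suggest initial codes for this interview excerpt: ...",
    output="candidate codes: onboarding friction; tooling trust",
)
print(json.dumps(record, indent=2))
```

Appending such records to a log (one JSON object per interaction) gives later quality checks a concrete artifact to work against, even though, as noted above, it cannot fully guarantee reproducibility once the model itself changes.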
3.5 Quality checking
It is important for researchers to check the quality of the work done by the LLM. For instance, they need to look for biases in the analysis and ensure that the LLM is correctly interpreting and coding the data.

3.6 Theorizing
Developing rich theories that are grounded in evidence requires a deep understanding of the data, the ability to see connections and patterns, and the creativity to formulate a theory. These are all skills that are currently beyond the reach of LLMs.

4. The Promise of LLMs across varied research expertise

Qualitative research is often rooted in a constructivist paradigm emphasizing the non-replicable human capacity to understand and contextualize social phenomena. The constructivist paradigm in SE research is concerned with socio-technical realities that are not objective but constructed through human experiences and contexts. This paradigm values the researcher's role in interpreting data, where their involvement and perspective are considered integral to the analysis, especially in methods like ethnography, participant observation, and grounded theory.

Qualitative research in SE also offers unique advantages in exploring complex socio-technical processes and aiding in theory construction. It can reveal underlying reasons behind intricate socio-technical dynamics and is often used to generate new research questions and insights. These aspects underscore the necessity of the human element in data interpretation, despite the analytical capabilities of LLMs.

The expertise of a researcher is crucial across all research modalities, including the application of LLMs, as it guides the critical interpretation of data, the strategic questioning that leads to deeper insights, and the contextual understanding that LLMs alone cannot provide.

Further to the opportunities and challenges presented by LLMs in SE qualitative research discussed above, we present our collective thoughts on how these may vary by the experience level of the researchers. Firstly, and most importantly, with the introduction of LLMs, ethical considerations come to the fore. Researchers at all stages must understand and uphold ethical practices, especially concerning data privacy, possible plagiarism, and potential biases that the LLMs might introduce or perpetuate.

For novices in qualitative research in SE, LLMs can be both an assistive tool and a challenge. LLMs can be used to sift through extensive datasets, identify initial patterns, and assist in some basic data coding, making the initiation phases smoother. However, novice researchers must be cautious. Relying heavily on LLMs without understanding the underlying domain of inquiry or the principles of qualitative data analysis can compromise the quality of research outputs and their capabilities as researchers. It is essential to strike a balance to ensure data integrity and true learning of the research process.

Intermediate researchers will find LLMs useful as they dive into more complex data. LLMs can aid in identifying recurring themes and intricate patterns, potentially elevating the quality of the analysis through their comprehensive approach. However, there is a potential risk of overreliance on the technology, leading to overconfidence in automated outputs. Researchers must maintain a critical eye, ensuring that their growing reliance on LLMs does not overshadow the need for rigorous human oversight and contextual interpretation that their increasing experience affords them.

For seasoned qualitative researchers, LLMs present an opportunity to explore new breadth and depth within data analysis. For example, LLMs can be used to scale qualitative research beyond what is typically possible through human effort. Experienced qualitative researchers can boost their practice by taking on larger datasets for analysis, training bespoke LLMs where accessible, and developing descriptive findings, taxonomies, and theories that capture a wider range of contexts and are, therefore, more widely generalizable. But with this deeper dive comes a heightened responsibility for research integrity and accountability.

The research outputs, while possibly enhanced by LLMs, must be thoroughly reviewed for inadvertent errors or biases. Furthermore, while LLMs can handle the heavy lifting of data analysis, experts must remain fully accountable for the interpretations and conclusions drawn.

For all levels of researchers, LLMs can expedite the data processing phase, but it is paramount that researchers do not bypass the essential learning and understanding phases of the research process. LLMs should be tools to enhance the process, not shortcuts that diminish the depth and richness of qualitative research in software engineering. The use of LLMs should not eclipse the importance of human judgment and insight.

5. Conclusion

As LLMs entrench themselves into most disciplines, SE research will not remain untouched. For qualitative SE research, LLMs offer a landscape rife with opportunities and challenges. Researchers, whether novices, intermediates, or experts, can embrace the potential of LLMs while remaining vigilant and anchored in the core tenets of qualitative inquiry.

Amidst the rising discourse on the potential threats of AI and LLMs, accentuated by media narratives, there exists a
palpable concern within professional communities about AI's capability to replace human roles.

Contrarily, empirical findings (Bano et al. 2023), rooted in an understanding of LLM capabilities and extant research, debunk the AI doomsday notion, particularly for qualitative researchers in software engineering. We project a harmonious future where LLMs and human researchers collaboratively further qualitative research. However, while LLMs, like GPT-4 and ChatGPT, show promise, the irreplaceable role of the human researcher in ensuring ethical conduct, well-motivated studies, the validity and reliability of research findings, and appropriate dissemination remains pivotal.

Considering the broader interaction between humans and LLMs, while the latter's adeptness in qualitative data analysis can optimize certain facets of research, it is imperative to note their limitations in capturing the intricate nuances inherent to human researchers. This sentiment is echoed in seminal anthropological and sociological works that emphasize the human touch in interpreting and understanding data. Critically, the ethical considerations surrounding LLM use, ranging from data privacy to intellectual property rights, call for rigorous scrutiny.

References

[1] Alkaissi, H., McFarlane, S.I.: Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 15, 192 (2023)
[2] Arora, C., John, G., Mohamed, A.: Advancing requirements engineering through generative AI: assessing the role of LLMs. arXiv preprint arXiv:2310.13976 (2023)
[3] Balel, Y.: The role of artificial intelligence in academic paper writing and its potential as a co-author. Eur. J. Ther. (2023)
[4] Bano, M., Didar, Z., Jon, W.: Exploring qualitative research using LLMs. arXiv preprint arXiv:2306.13298 (2023)
[5] Bender, E.M., Timnit, G., Angelina, M.-M., Shmargaret, S.: On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623 (2021)
[6] Byun, C., Piper, V., Kevin, S.: Dispensing with humans in human-computer interaction research. In: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–26 (2023)
[7] Easterbrook, S., Singer, J., Storey, M.A., Damian, D.: Selecting empirical methods for software engineering research. Guide to Advanced Empirical Software Engineering 8, 285–311 (2008)
[8] Ebert, C., Louridas, P.: Generative AI for software practitioners. IEEE Software 40, 30–38 (2023)
[9] Emmert-Streib, F.: Importance of critical thinking to understand ChatGPT. Europ. J. Human Genet. 15, 1–2 (2023)
[10] Gentles, S.J., Cathy, C., Jenny, P., Ann McKibbon, K.: Sampling in qualitative research: insights from an overview of the methods literature. Qual. Rep. 20, 1772–1789 (2015)
[11] Hoda, R.: Socio-technical grounded theory for software engineering. IEEE Transactions on Software Engineering 48, 3808–3832 (2021)
[12] Hou, X., Yanjie, Z., Yue, L., Zhou, Y., Kailong, W., Li, L., Xiapu, L., David, L., John, G., Haoyu, W.: Large language models for software engineering: a systematic literature review. arXiv preprint arXiv:2308.10620 (2023)
[13] Jalil, S., Suzzana, R., Thomas, D.L., Kevin, M., Wing, L.: ChatGPT and software testing education: promises & perils. In: 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 4130–4137. IEEE (2023)
[14] Jiang, D., Xiang, R., Bill, Y.-L.: LLM-Blender: ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561 (2023)
[15] Kitchenham, B.: Procedures for performing systematic reviews. Keele University, Keele, UK 33, 1–26 (2004)
[16] Kuhail, M.A., Sujith, S.M., Ashraf, K., Jose, B., Syed, J.S.: Will I be replaced? Assessing ChatGPT's effect on software development and programmer perceptions of AI tools (2023)
[17] Navigli, R., Simone, C., Björn, R.: Biases in large language models: origins, inventory, and discussion. ACM J. Data Inform. Qual. (2023)
[18] Nguyen-Duc, A., Beatriz, C.-D., Adam, P., Chetan, A., Dron, K., Tomas, H., Usman, R., Jorge, M., Eduardo, G., Kai-Kristian, K.: Generative artificial intelligence for software engineering: a research agenda. arXiv preprint arXiv:2310.18648 (2023)
[19] Ozkaya, I.: Application of large language models to software engineering tasks: opportunities, risks, and implications. IEEE Software 40, 4–8 (2023)
[20] Polonsky, M.J., Jeffrey, D.R.: Should artificial intelligent agents be your co-author? Arguments in favor, informed by ChatGPT. pp. 91–96. SAGE Publications, London (2023)
[21] Rudolph, J., Tan, S., Tan, S.: ChatGPT: bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 24, 6 (2023)
[22] Scoccia, G.L.: Exploring early adopters' perceptions of ChatGPT as a code generation tool. In: 2023 38th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), pp. 88–93 (2023)
[23] Treude, C., Hideaki, H.: She elicits requirements and he tests: software engineering gender bias in large language models. arXiv preprint arXiv:2303.10131 (2023)
[24] Watkins, R.: Guidance for researchers and peer-reviewers on the ethical use of large language models (LLMs) in scientific research workflows. AI Ethics 16, 1–6 (2023)