
Volume 9, Issue 11, November – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Summarization and Visualization of Files based on GenAI

S R Abhiram; Suhas L; Tejas S; Tejaswini K. P.
Department of Information Science
RNS Institute of Technology, Bengaluru

Abstract:- This survey examines advancements in augmenting language models (LMs) with enhanced reasoning abilities and tool-usage capabilities. Reasoning in this context involves breaking down complex tasks into simpler subtasks, while tool use refers to engaging with external modules, such as a code interpreter. LMs can apply these capabilities independently or together through heuristics or through learning from example demonstrations. By utilizing various, often non-parametric external modules, these enhanced LMs expand their ability to process context, shifting beyond traditional language modeling. This type of model is referred to as an Augmented Language Model (ALM). The standard missing token objective enables ALMs to develop reasoning skills, utilize tools, and even perform actions, while still handling typical language tasks—and in some cases, outperforming standard LMs in benchmark tests. This survey concludes that ALMs could potentially overcome significant limitations found in traditional LMs, including issues with interpretability, consistency, and scalability.

Keywords:- Reasoning, Tool Use, Non-Parametric Module, Missing Token Prediction, Heuristics, Demonstrations, Interpretability, Consistency, Scalability.

I. INTRODUCTION

The survey investigates recent developments in enhancing language models (LMs) by adding reasoning skills and the ability to use external tools. Reasoning refers to breaking down complex tasks into simpler parts, while tool usage involves integrating with modules like code interpreters to extend functionality. These enhancements allow LMs to apply reasoning and tool-usage abilities independently or jointly, often learned through heuristics or demonstrations. Referred to as Augmented Language Models (ALMs), these models can utilize various external, non-parametric modules to broaden their context capabilities. ALMs retain the core missing token prediction objective, enabling them to perform typical language tasks while also outperforming many conventional LMs in benchmarks. The survey concludes that ALMs offer a promising approach to address key challenges in traditional LMs, including limitations in interpretability, consistency, and scalability.

A growing trend in research has emerged aimed at addressing the challenges associated with large language models (LLMs), moving slightly away from traditional statistical language modeling approaches. For instance, one line of research enhances the relevance of LLMs by incorporating information from pertinent external documents, effectively mitigating the limitations posed by their constrained context size. By integrating a retrieval module that extracts relevant documents from a database based on the given context, it becomes feasible to achieve capabilities comparable to some of the largest LLMs while utilizing fewer parameters (Borgeaud et al., 2022; Izacard et al., 2022). This results in a non-parametric model capable of querying external data sources. Furthermore, LLMs can enhance their context through reasoning strategies (Wei et al., 2022c; Taylor et al., 2022; Yang et al., 2022c), producing a more relevant context by investing additional computational resources prior to generating responses.

Another approach involves enabling LLMs to utilize external tools (Press et al., 2022; Gao et al., 2022; Liu et al., 2022b) to fill in critical gaps in information not captured within the model's weights. While many of these studies target specific shortcomings of LLMs, it is clear that a systematic integration of both reasoning and tools could yield significantly more powerful models. We will refer to these models as Augmented Language Models (ALMs). As this trend continues to grow, it becomes increasingly challenging to monitor and comprehend the breadth of results, highlighting the need for a taxonomy of ALM research and clear definitions of the technical terms that are sometimes used interchangeably.

II. REASONING

Previous studies have indicated that while LLMs can tackle simple reasoning tasks, they struggle with more complex ones (Creswell et al., 2022). Consequently, this section explores various strategies aimed at enhancing the reasoning capabilities of LMs.

A significant challenge for LMs when faced with complex reasoning problems is accurately deriving solutions by combining the correct answers predicted for sub-problems. For instance, a language model might accurately predict a celebrity's birth and death dates but fail to calculate their age correctly. This issue has been identified by Press et al. (2022) as the compositionality gap in LMs. In the remainder of this section, we examine three prominent approaches to eliciting reasoning in LMs. It is worth noting that Huang and Chang (2022) have conducted a survey on reasoning within language models, while Qiao et al. (2022) have focused on reasoning through prompting.
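To make the compositionality-gap discussion concrete, the following minimal Python sketch (an illustration added here, not code from any of the cited works) answers a composite question by querying an LM for sub-answers and combining them with an exact computation outside the model. The ask_lm() function is a hypothetical stand-in for any text-completion call and is stubbed with canned answers so the example runs on its own.

```python
# Minimal sketch of decomposition-style prompting.
# Assumption: ask_lm() is a placeholder for any LM completion call;
# it is stubbed with canned answers so the example is self-contained.
def ask_lm(prompt: str) -> str:
    canned = {
        "In what year was Ada Lovelace born?": "1815",
        "In what year did Ada Lovelace die?": "1852",
    }
    return canned.get(prompt, "unknown")

def age_at_death(person: str) -> int:
    # Step 1: break the composite question into simpler sub-questions.
    birth = int(ask_lm(f"In what year was {person} born?"))
    death = int(ask_lm(f"In what year did {person} die?"))
    # Step 2: combine the sub-answers with an exact external computation
    # instead of trusting the LM to do the arithmetic itself.
    return death - birth

print(age_at_death("Ada Lovelace"))  # prints 37 (the year difference)
```

Press et al. (2022) elicit a similar decompose-then-combine behavior through prompting alone, which motivates the approaches surveyed below.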


Several studies aim to elicit intermediate reasoning steps by explicitly breaking down problems into sub-problems, facilitating a divide-and-conquer approach. This recursive strategy is particularly beneficial for complex tasks, as compositional generalization can pose significant challenges for language models (Lake and Baroni, 2018; Keysers et al., 2019; Li et al., 2022a). Approaches that utilize problem decomposition can then address the sub-problems independently.

Despite their impressive outcomes, prompting methods have notable drawbacks, particularly their reliance on model scale. Specifically, they necessitate the identification of effective prompts that can elicit step-by-step reasoning, as well as the manual provision of examples for few-shot learning on new tasks. Additionally, using long prompts can be computationally intensive, and the limited context size of models restricts the ability to take advantage of a large number of examples. Recent research proposes addressing these challenges by training language models (LMs) to utilize a form of working memory.

Reasoning can generally be understood as the process of breaking down a problem into a series of sub-problems, approached either iteratively or recursively. However, exploring numerous reasoning pathways can be challenging, and there is no assurance that the intermediate steps are valid. One method to create reliable reasoning traces involves generating pairs of questions and their corresponding answers for each reasoning step (Creswell and Shanahan, 2022), but this still does not guarantee the accuracy of those intermediate steps.

Ultimately, a reasoning language model aims to enhance its context independently to improve its likelihood of producing the correct answer. The extent to which language models actually utilize the identified reasoning steps to inform their final predictions is still not well understood (Yu et al., 2022).

Often, certain reasoning steps contain errors that negatively impact the correctness of the final output. For instance, errors in complex mathematical calculations during a reasoning step can result in an incorrect conclusion. Similarly, mistakes regarding well-known facts, such as identifying a president during a specific year, can lead to inaccuracies. Some of the studies mentioned earlier (Yao et al., 2022b; Press et al., 2022) have begun to explore the use of simple external tools, such as search engines or calculators, to verify these intermediate steps. The following section of this survey delves into the various tools that language models can query to enhance their chances of generating correct answers.

Similarly, Khot et al. (2022) use prompts to break down tasks into specific operations but permit each sub-problem to be addressed by a library of specialized handlers, each designed for a particular sub-task (e.g., retrieval).

III. USING TOOLS AND ACTING

A. Iterative LM Calling
A growing body of research explores how LMs can access knowledge beyond their internal parameters by interacting with external tools for tasks like precise computations or retrieving information. These tools allow models to "act" when their outputs affect external environments. For example, LMs can be configured to call another model or external tool to refine a generated response iteratively, or to connect with modules trained on diverse data types. This multimodal approach expands the model's ability to perform actions or use other resources, such as search engines, web browsers, and virtual or physical agent control, allowing ALMs to perform a broader range of tasks with increased reliability.

Incorporating diverse modalities can enhance the effectiveness of language models (LMs), especially in tasks where context is crucial. For instance, the tone of a question—whether serious or ironic—can significantly affect the type of response required. Recent studies by Hao et al. (2022) and Alayrac et al. (2022) highlight the potential of using LMs as universal interfaces for models that have been pre-trained on various modalities. Hao et al. (2022) integrate several pre-trained encoders that can process different forms of data, such as text and images, into an LM that acts as a universal task layer. This integration, achieved through semi-causal language modeling, combines the advantages of both causal and non-causal approaches, facilitating in-context learning and open-ended generation while also allowing for easy fine-tuning of the encoders.

Language models can be improved through memory units, such as neural caches that store recent inputs (Grave et al., 2017; Merity et al., 2017), which bolster their reasoning capabilities. Alternatively, knowledge can be retrieved from external sources, offloading it from the LM. These memory augmentation strategies help the LM avoid generating outdated information.

Two types of retrievers can enhance LMs: dense and sparse. Sparse retrievers rely on bag-of-words representations for documents and queries (Robertson and Zaragoza, 2009), whereas dense neural retrievers utilize dense vectors generated from neural networks (Asai et al., 2021). Both types evaluate the relevance of documents to information-seeking queries through either (i) term overlap or (ii) semantic similarity. Sparse retrievers excel at the former, while dense retrievers perform better at the latter (Luan et al., 2021).
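As a rough illustration of the two retriever families just described (not the specific systems cited), the sketch below scores documents by term overlap for the sparse case and by cosine similarity of vector representations for the dense case. The embed() function is a hypothetical placeholder for a trained neural encoder and is implemented here as a toy hashed bag-of-words so the example remains self-contained.

```python
import math
from collections import Counter

def sparse_score(query: str, doc: str) -> int:
    # Sparse retrieval: bag-of-words term overlap between query and document.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[t], d[t]) for t in q)

def embed(text: str) -> list:
    # Hypothetical stand-in for a neural encoder: a tiny hashed
    # bag-of-words vector, used only to keep the sketch self-contained.
    vec = [0.0] * 16
    for token in text.lower().split():
        vec[hash(token) % 16] += 1.0
    return vec

def dense_score(query: str, doc: str) -> float:
    # Dense retrieval: cosine similarity between embedding vectors.
    a, b = embed(query), embed(doc)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

docs = ["the cat sat on the mat", "retrieval augmented language models"]
query = "augmented language models"
print(max(docs, key=lambda d: sparse_score(query, d)))
print(max(docs, key=lambda d: dense_score(query, d)))
```

In practice, the highest-scoring documents would be appended to the LM's context, as discussed next.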


When augmenting LMs with dense retrievers, various studies have found success by appending retrieved documents to the existing context (Chen et al., 2017; Clark and Gardner, 2017; Lee et al., 2019; Guu et al., 2020; Khandelwal et al., 2020; Lewis et al., 2020; Izacard and Grave, 2020; Zhong et al., 2022; Borgeaud et al., 2022; Izacard et al., 2022). While retrieving documents for question answering is not a novel concept, retrieval-augmented LMs have recently shown strong performance in other knowledge-intensive tasks beyond Q&A, effectively narrowing the performance gap with larger LMs that require significantly more parameters. REALM (Guu et al., 2020) was the first approach to jointly train a retrieval system with an encoder LM end-to-end. RAG (Lewis et al., 2020) fine-tunes both the retriever and a sequence-to-sequence model together. Izacard and Grave (2020) introduced a modified seq2seq architecture designed to efficiently handle multiple retrieved documents. Borgeaud et al. (2022) developed an auto-regressive LM named RETRO, demonstrating that combining a large corpus with pre-trained, frozen BERT embeddings for the retriever can yield performance comparable to GPT-3 on various downstream tasks without requiring additional training for the retriever.

Overall Limitations Across Solutions: Fragmented Analysis: most existing AI tools focus on specific evidence types (e.g., digital, genetic, or medicolegal), resulting in fragmented forensic investigations. Lack of Transparency: many AI models are "black boxes", making it difficult to interpret and validate their results, which affects legal acceptability. Slow, Sequential Processing: AI models often analyse evidence sequentially rather than simultaneously, resulting in longer investigation times and delayed insights. These tools also operate in isolation, focusing solely on digital artifacts and struggling to integrate findings across other forensic domains such as DNA or physical evidence, and their "black box" nature makes it challenging to present transparent, legally acceptable outputs.

B. Acting on the Virtual and Physical World
Recent research has shown that language models (LMs) can effectively control virtual agents in both 2D and 3D simulated environments by generating executable functions. For instance, Li et al. (2022b) fine-tuned a pre-trained GPT-2 (Radford et al., 2019) to handle sequential decision-making tasks by encoding goals and observations as a sequence of embeddings and predicting subsequent actions. This framework demonstrated strong combinatorial generalization across various domains, including a simulated household environment. It indicates that LMs can generate representations beneficial for modeling not only language but also sequential objectives and plans, enhancing their learning and generalization capabilities in tasks beyond mere language processing.

In a similar vein, Huang et al. (2022a) explored whether LMs can leverage world knowledge to execute specific actions in response to high-level tasks articulated in natural language, such as "make breakfast." This study was pioneering in demonstrating that, if the LM is sufficiently large and appropriately prompted, it can decompose high-level tasks into a series of simple commands without needing additional training. However, the agent is limited to a predefined set of actions, meaning not all natural language instructions can be executed within the environment. To overcome this limitation, the authors proposed using the cosine similarity function to map the LM-generated commands to feasible actions for the agent. This approach was evaluated in a virtual household setting, where it showed enhanced task execution capabilities compared to relying solely on the LM-generated plans.

While these studies highlight the potential of LMs in controlling virtual robots, other research has focused on physical robots. Zeng et al. (2022) integrated an LM with a visual-language model (VLM) and a pre-trained language-conditioned policy to control a simulated robotic arm. Here, the LM functions as a multi-step planner that decomposes high-level tasks into subgoals, while the VLM describes the objects in the environment. The results are passed to the policy, which executes actions based on the specified goals and the observed state of the world. Dasgupta et al. (2023) employed the 7B and 70B Chinchilla models as planners for an agent that acts and observes results in a PycoLab environment. A reporter module was also utilized to translate actions and observations from pixel data to text format. Lastly, in Carta et al. (2023), an agent employs an LM to generate action policies for text-based tasks, with interactive learning through online reinforcement learning helping to ground the LM's internal representations in the environment, moving away from solely relying on the statistical surface structure of pre-training.

Liang et al. (2022) utilized an LM to create robot policy code based on natural language commands by prompting the model with several demonstrations. By integrating traditional logic structures and referencing external libraries for tasks such as arithmetic operations, LMs can develop policies that exhibit spatial and geometric reasoning, generalize to novel instructions, and provide precise values for ambiguous descriptions. This method proved effective across various real robot platforms. LMs possess common-sense knowledge about the world, which can facilitate robots in handling complex tasks.
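To illustrate the grounding step described for Huang et al. (2022a), the following sketch (a simplified illustration for this survey, not the authors' implementation) maps a free-form, LM-generated command onto the closest entry in a predefined action set using cosine similarity. The embed() function is again a hypothetical placeholder for a sentence encoder, implemented here as a toy hashed bag-of-words so the example runs on its own.

```python
import math

# Assumption: a small, predefined action vocabulary for the agent.
FEASIBLE_ACTIONS = ["walk to kitchen", "open fridge", "grab eggs", "turn on stove"]

def embed(text: str) -> list:
    # Hypothetical placeholder for a sentence encoder (e.g., a frozen LM);
    # a hashed bag-of-words keeps the sketch self-contained.
    vec = [0.0] * 32
    for token in text.lower().split():
        vec[hash(token) % 32] += 1.0
    return vec

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def ground(command: str) -> str:
    # Map a free-form LM-generated step to the closest feasible agent action.
    return max(FEASIBLE_ACTIONS, key=lambda a: cosine(embed(command), embed(a)))

# An LM prompted with "make breakfast" might propose steps such as these:
for step in ["go to the kitchen", "grab some eggs"]:
    print(step, "->", ground(step))
```

The same pattern extends to the planner-plus-policy pipelines described above, where each grounded step is handed to a lower-level controller for execution.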


Fig 1: Block Diagram of Proposed Analysis System

Table 1: Traditional Method vs Proposed Method

Traditional Method                         | Proposed Method
Direct rule-based programming.             | Fine-tuning of language models (LMs) for specific tasks.
Sequential command execution.              | Multi-step planning and contextual understanding.
Static algorithms without learning.        | Interactive learning through online reinforcement learning.
Limited flexibility and context-awareness. | Decomposes high-level tasks into simpler subgoals.

IV. CONCLUSION

The development of Augmented Language Models (ALMs) addresses several inherent limitations in traditional language models (LMs). By equipping LMs with reasoning abilities and the capacity to interact with external tools, ALMs demonstrate a notable improvement in handling complex, multi-step tasks. This enhancement allows them to apply structured reasoning and retrieve or compute missing information in ways that traditional LMs cannot achieve within a limited context. The integration of tools—whether for retrieving information, performing computations, or controlling virtual and physical agents—provides ALMs with a more adaptable, context-sensitive approach. However, the field still faces challenges, such as achieving greater interpretability and optimizing models for scalability. The interplay between reasoning and tool use also raises questions about finding the right balance for improved generalization and task efficiency.

This survey concludes that ALMs represent a significant advance in natural language processing, enhancing model versatility and task performance across benchmarks. By moving away from the limitations of purely parametric models, ALMs leverage external modules to expand contextual understanding and real-time adaptability. This shift provides a foundation for LMs to become more autonomous and capable agents, potentially applicable to a wide range of real-world scenarios. Continued research is needed to further refine these capabilities, address current scalability challenges, and explore the broader applications of ALMs in fields that demand robust reasoning and decision-making. In summary, the GenAI-Based Forensic Simulation System sets a new benchmark for AI-driven forensics, offering a holistic approach that enhances the speed, accuracy, and integrity of forensic investigations. It not only bridges the gaps present in existing solutions but also redefines the potential of AI in aiding the criminal justice system.

REFERENCES

[1]. Deborah Cohen, Moonkyung Ryu, Yinlam Chow, Orgad Keller, Ido Greenberg, Avinatan Hassidim, Michael Fink, Yossi Matias, Idan Szpektor, Craig Boutilier, et al. Dynamic planning in open-ended dialogue using reinforcement learning. arXiv preprint.
[2]. Antonia Creswell and Murray Shanahan. Faithful reasoning using large language models.
[3]. Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
[4]. Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning.
[5]. Ishita Dasgupta, Andrew K Lampinen, Stephanie CY Chan, Antonia Creswell, Dharshan Kumaran, James L McClelland, and Felix Hill. Language models show human-like content effects on reasoning.
[6]. Ishita Dasgupta, Christine Kaeser-Chen, Kenneth Marino, Arun Ahuja, Sheila Babayan, Felix Hill, and Rob Fergus. Collaborating with language models for embodied reasoning, 2023.
[7]. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
[8]. Pierre L Dognin, Inkit Padhi, Igor Melnyk, and Payel Das. ReGen: Reinforcement learning for text and knowledge base generation using pretrained language models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
[9]. Chris Donahue, Mina Lee, and Percy Liang. Enabling language models to fill in the blanks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2020.


[10]. Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32), 2022.
[11]. Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. Compositional semantic parsing with large language models.
[12]. Dheeru Dua, Shivanshu Gupta, Sameer Singh, and Matt Gardner. Successive prompting for decomposing complex questions. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
[13]. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: Program-aided language models, 2022.
[14]. Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving Rubik's Cube with a robot hand.
[15]. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems (NeurIPS), 2022.
[16]. Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019.
[17]. Akari Asai, Xinyan Yu, Jungo Kasai, and Hannaneh Hajishirzi. One question answering model for many languages with cross-lingual dense passage retrieval. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[18]. Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, and Wen-tau Yih. Task-aware retrieval with instructions.
[19]. Lalit R. Bahl, Frederick Jelinek, and Robert L. Mercer. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5(2):179–190, 1983.
[20]. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.
