work_flow.png

Prompt Generation with LLM to improve Proactiveness

Python 3.9+ Jupyter Notebook BM25 Transformers

 

Problem Statement

Current large language models (LLMs) have three limitations: they return generic results for all users, they offer no efficient way to involve a human when the generated output falls short, and they struggle to decide when to personalize a response versus when a generic answer is appropriate in a conversation. This project addresses these limitations by making prompt generation more proactive, so that interactions with users are more tailored and contextually relevant.

For example:

problem.png

Getting started

Proposed Solution

1. Obtain an extended output from a general pre-trained LLM (e.g., ChatGPT, LLaMA).
2. Use a BERT-GPT encoder-decoder model to summarize the generated content.
3. Use a BART sequence-to-sequence model to generate candidate prompts.
4. Evaluate and rank all generated prompts, presenting the top 10 options to the user for obtaining more specific and detailed results.
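The README does not spell out the scoring used in step 4, but the BM25 badge above suggests BM25-style relevance ranking of candidate prompts against the user's query. A minimal self-contained sketch (function name, tokenization, and parameters are illustrative assumptions, not the repository's actual code):

```python
import math
from collections import Counter

def bm25_rank(query, candidate_prompts, k1=1.5, b=0.75):
    """Rank candidate prompts against the user's query with a basic
    BM25 score. Illustrative sketch only; the project's real scorer,
    tokenizer, and parameters may differ."""
    tokenized = [p.lower().split() for p in candidate_prompts]
    query_terms = query.lower().split()
    n_docs = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n_docs
    # Document frequency of each query term across candidate prompts
    df = {t: sum(1 for d in tokenized if t in d) for t in query_terms}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue  # term appears in no candidate; contributes nothing
            idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * num / den
        scores.append(score)
    # Highest-scoring prompts first
    ranked = sorted(zip(candidate_prompts, scores), key=lambda x: -x[1])
    return [p for p, _ in ranked]
```

From the ranked list, the top 10 prompts would then be surfaced to the user as in step 4.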

Setup

Clone the repo:

git clone https://github.com/satyanshu404/Prompt-Generation-with-LLM.git

Install the dependencies:

pip install -r requirements.txt

You are all set! 🎉

 

Example

Consider a user with metabolic acidosis who seeks guidance on reversing the condition but inadvertently gives the pre-trained LLM too little information, so the model returns only generic responses. In our solution, the model suggests additional prompts to the user; these supplementary prompts elicit more details and enhance the specificity of the generated results.
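Once the user answers one or more of the suggested prompts, those answers can be folded back into the original query before re-querying the LLM. A hypothetical helper (not part of the repository) sketching that composition:

```python
def refine_query(original_query, clarifying_answers):
    """Combine the user's original query with answers to suggested
    follow-up prompts, producing a more specific request to send back
    to the LLM. Hypothetical helper for illustration only."""
    details = "; ".join(f"{q}: {a}" for q, a in clarifying_answers.items())
    return f"{original_query} Additional context: {details}."
```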

For example:

example.png
