This application aims to provide a template for building generative UI applications with LangChain.js. It comes pre-built with a few UI features you can use to experiment with generative UI. The UI components are built using Shadcn.
First, clone the repository and install dependencies:

```shell
git clone https://github.com/bracesprou/gen-ui.git
cd gen-ui
yarn install
```
Next, if you plan on using the existing pre-built UI components, you'll need to set a few environment variables:
Copy the `.env.example` file to `.env`:
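On macOS/Linux this can be done with:

```shell
# From the repository root: copy the example env file, then fill in your keys
cp .env.example .env
```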
The `OPENAI_API_KEY` is required. LangSmith keys are optional, but highly recommended if you plan on developing this application further.
Get your OpenAI API key from the OpenAI dashboard.
Sign up/in to LangSmith and get your API key.
Create a new GitHub PAT (Personal Access Token) with the `repo` scope.
Create a free Geocode account.
```shell
# ------------------LangSmith tracing------------------
LANGCHAIN_API_KEY=...
LANGCHAIN_CALLBACKS_BACKGROUND=true
LANGCHAIN_TRACING_V2=true
# -----------------------------------------------------
GITHUB_TOKEN=...
OPENAI_API_KEY=...
GEOCODE_API_KEY=...
```
To run the application in development mode, run:

```shell
yarn dev
```
This will start the application on http://localhost:3000.
To run in production mode, build the application first, then start it:

```shell
yarn build
yarn start
```
If you're interested in ways to take this demo application further, I'd consider the following:
- A custom LangGraph agent instead of the default `AgentExecutor` and `createToolCallingAgent`
- Adding a "classifier" step before the tool call, with a small fast model (e.g., Claude 3 Haiku) that selects the tool call/component first. This would improve overall latency for the time to first UI (with the loading state component).
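The classifier-first idea above could be sketched roughly as follows. This is a hypothetical outline, not code from this repo: the component names, the `renderWithEarlyLoadingState` function, and its callback parameters are all illustrative assumptions, with the actual model call (e.g., a Claude 3 Haiku invocation via LangChain.js) left abstract behind a `Classify` function type:

```typescript
// Illustrative component names; a real app would use its own set.
type Component = "github-repo" | "weather" | "invoice" | "none";

// Stand-in for a call to a small, fast model (e.g., Claude 3 Haiku)
// that only decides which UI component the request maps to.
type Classify = (input: string) => Promise<Component>;

async function renderWithEarlyLoadingState(
  input: string,
  classify: Classify,
  runTool: (component: Component, input: string) => Promise<unknown>,
  showLoading: (component: Component) => void
): Promise<unknown> {
  // 1. Cheap classifier call decides which component will render.
  const component = await classify(input);
  // 2. Immediately show that component's loading state, improving
  //    time-to-first-UI.
  showLoading(component);
  // 3. Run the slower tool call; its result fills in the component.
  return runTool(component, input);
}
```

The key design point is that step 2 happens before the expensive tool call, so the user sees the correct loading skeleton almost immediately instead of waiting for the full agent round trip.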