# PromptMux

Compare LLM responses side-by-side. Send one prompt to multiple models, see all responses, rate and save them.
PromptMux is a visual tool for comparing responses from different Large Language Models (LLMs). Enter a prompt once, send it to multiple models simultaneously, and compare the results in an intuitive canvas interface.
```
[Prompt Input] ──→ [GPT-4o]     ──→ Response + Metrics
               ├─→ [Claude 4]   ──→ Response + Metrics
               └─→ [Gemini 2.5] ──→ Response + Metrics
```
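The fan-out above boils down to one concurrent map over the selected models. A minimal sketch with mocked backends (names like `fanOut` and the mock model table are illustrative, not PromptMux's actual API):

```typescript
// Sketch of the fan-out pattern: one prompt is sent to several (here mocked)
// model backends concurrently, and each result carries the response text plus
// a basic latency metric.

type ModelResult = { model: string; text: string; latencyMs: number };

// Mock backends standing in for real provider calls.
const backends: Record<string, (prompt: string) => Promise<string>> = {
  'gpt-4o': async (p) => `gpt-4o answer to: ${p}`,
  'claude-4': async (p) => `claude-4 answer to: ${p}`,
  'gemini-2.5': async (p) => `gemini-2.5 answer to: ${p}`,
};

async function fanOut(prompt: string, models: string[]): Promise<ModelResult[]> {
  // Promise.allSettled means one failing provider does not sink the others.
  const settled = await Promise.allSettled(
    models.map(async (model) => {
      const start = Date.now();
      const text = await backends[model](prompt);
      return { model, text, latencyMs: Date.now() - start };
    })
  );
  return settled.flatMap((s) => (s.status === 'fulfilled' ? [s.value] : []));
}

fanOut('What is a monad?', ['gpt-4o', 'claude-4', 'gemini-2.5']).then((results) => {
  for (const r of results) console.log(`${r.model}: ${r.text} (${r.latencyMs} ms)`);
});
```

`Promise.allSettled` preserves input order, so responses line up with the model nodes that requested them even when some providers error out.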
## Features

- **Visual Canvas** – Drag-and-drop interface for organizing prompts and responses
- **Multi-Provider Support** – OpenAI, Anthropic, Google (Gemini), and more
- **BYOK Model** – Bring Your Own API Keys (securely encrypted)
- **Library System** – Save, organize, and search your prompt comparisons
- **Response Metrics** – Track latency, token usage, and estimated costs
- **Like & Rate** – Mark favorite responses and add notes
## Tech Stack

- **Frontend**: SvelteKit 2 + Svelte 5 + Tailwind CSS v4
- **UI Components**: shadcn-svelte
- **Backend**: SvelteKit API routes
- **Database**: Supabase (Auth + PostgreSQL)
- **LLM Integration**: Vercel AI SDK
## Prerequisites

- Node.js 18+
- pnpm (`npm install -g pnpm`)
- A Supabase project (create one free at supabase.com)
## Installation

1. **Clone the repository**

   ```bash
   git clone https://github.com/lucaderumier/promptmux.git
   cd promptmux
   ```

2. **Install dependencies**

   ```bash
   pnpm install
   ```
3. **Set up environment variables**

   ```bash
   cp .env.example .env
   ```

   Edit `.env` with your values:

   ```
   PUBLIC_SUPABASE_URL=your_supabase_project_url
   PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
   ENCRYPTION_KEY=your_64_character_hex_encryption_key
   ```

   Generate an encryption key:

   ```bash
   openssl rand -hex 32
   ```
4. **Set up the database**

   Run the SQL migrations in your Supabase SQL Editor:

   - Copy the contents of the files in `supabase/migrations/`
   - Execute them in order (001, 002, etc.)
5. **Start the development server**

   ```bash
   pnpm dev
   ```

   Open http://localhost:5173 in your browser.
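The `ENCRYPTION_KEY` generated above is 32 bytes of hex, which matches AES-256. PromptMux's actual key-sealing code isn't shown in this README; below is a minimal sketch of how a BYOK provider key could be encrypted at rest with Node's built-in `crypto` module, assuming AES-256-GCM (the cipher choice and function names are assumptions, not the project's real implementation):

```typescript
// Hedged sketch: encrypting a provider API key with a 32-byte key like the
// ENCRYPTION_KEY from .env. NOT PromptMux's actual implementation.
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = Buffer.from(
  // In the app this would come from process.env.ENCRYPTION_KEY.
  'a'.repeat(64), // 64 hex chars = 32 bytes -> AES-256
  'hex'
);

export function encrypt(plaintext: string): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Store iv + auth tag alongside the ciphertext so decryption can verify integrity.
  return [iv, cipher.getAuthTag(), ciphertext].map((b) => b.toString('hex')).join(':');
}

export function decrypt(stored: string): string {
  const [iv, tag, ciphertext] = stored.split(':').map((h) => Buffer.from(h, 'hex'));
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```

Keeping the IV and auth tag next to the ciphertext means a tampered record fails authentication in `decipher.final()` instead of silently decrypting to garbage.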
## Supabase Setup

1. Create a new project at supabase.com
2. Enable Email auth under Authentication > Providers
3. (Optional) Enable OAuth providers (Google, GitHub)
4. Run the database migrations
5. Copy your project URL and anon key to `.env`
## Usage

1. **Add API Keys** – Go to Settings > API Keys and add your provider keys
2. **Create Prompts** – Type your prompt in the canvas
3. **Add Models** – Click "+" to add response nodes and select models
4. **Generate** – Click "Get Response" to query the models
5. **Compare** – View responses side-by-side with metrics
6. **Save** – Save interesting comparisons to your library
## Project Structure

```
src/
├── routes/
│   ├── app/                 # Main application routes
│   │   ├── +page.svelte     # Canvas page
│   │   ├── library/         # Library page
│   │   └── api-keys/        # API keys management
│   └── api/
│       ├── llm/generate/    # LLM proxy endpoint
│       └── library/         # Library CRUD endpoints
├── lib/
│   ├── components/
│   │   ├── canvas/          # Canvas UI components
│   │   ├── sidebar/         # Navigation sidebar
│   │   └── ui/              # shadcn-svelte components
│   ├── llm/
│   │   ├── types.ts         # Types + model configs
│   │   └── providers/       # Provider implementations
│   └── services/            # Business logic
└── hooks.server.ts          # Auth middleware
```
## Adding a New Provider

1. Create `src/lib/llm/providers/{provider}.ts`:

   ```typescript
   import { createProvider } from '@ai-sdk/{provider}';
   import { generateText } from 'ai';
   import type { ProviderInstance } from './base';

   export function createMyProvider({ apiKey }: { apiKey: string }): ProviderInstance {
     const provider = createProvider({ apiKey });
     return {
       provider: 'myprovider',
       generateText: async ({ model, prompt, system, maxTokens, temperature }) => {
         const result = await generateText({
           model: provider(model),
           prompt,
           system,
           maxTokens,
           temperature
         });
         return { text: result.text, usage: result.usage };
       }
     };
   }
   ```

2. Export it from `src/lib/llm/providers/index.ts`
3. Add your models to `AVAILABLE_MODELS` in `src/lib/llm/types.ts`
4. Add pricing to `MODEL_PRICING` in the same file
5. Add a case in `createProvider()` in `src/routes/api/llm/generate/+server.ts`
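The exact shape of `MODEL_PRICING` isn't shown in this README. A plausible sketch of per-token pricing and the cost estimate behind the Response Metrics feature follows; the field names and prices are illustrative assumptions, not the real values in `src/lib/llm/types.ts`:

```typescript
// Hypothetical sketch of MODEL_PRICING and a cost estimator. The real table
// lives in src/lib/llm/types.ts; the numbers below are made up.
type Pricing = { inputPerMTok: number; outputPerMTok: number }; // USD per 1M tokens

const MODEL_PRICING: Record<string, Pricing> = {
  'gpt-4o': { inputPerMTok: 2.5, outputPerMTok: 10 }, // illustrative only
  'my-model': { inputPerMTok: 1, outputPerMTok: 3 },
};

function estimateCost(
  model: string,
  usage: { promptTokens: number; completionTokens: number }
): number {
  const p = MODEL_PRICING[model];
  if (!p) return 0; // unknown model: better to show nothing than a wrong number
  return (
    (usage.promptTokens * p.inputPerMTok + usage.completionTokens * p.outputPerMTok) /
    1_000_000
  );
}

// estimateCost('my-model', { promptTokens: 1_000, completionTokens: 500 })
//   -> (1000 * 1 + 500 * 3) / 1e6 = 0.0025 USD
```

Keeping pricing in the same file as the model list makes step 3 and step 4 above a single diff when a new provider lands.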
## Scripts

```bash
pnpm dev      # Start development server
pnpm build    # Build for production
pnpm preview  # Preview production build
pnpm check    # TypeScript type checking
pnpm lint     # Run linter
pnpm format   # Format code with Prettier
```

## Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.
## Security

For security concerns, please see SECURITY.md.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- shadcn-svelte for the beautiful UI components
- Vercel AI SDK for the unified LLM interface
- Supabase for the backend infrastructure
