This document describes the multi-database and storage architecture used by Trigger.dev, including PostgreSQL for transactional data, ClickHouse for analytics and logs, Redis for caching and queuing, and object storage (S3/R2) for large payloads and artifacts. For information about the task execution engine's interaction with these databases, see Task Execution Engine. For details on deployment infrastructure, see Development and Deployment.
Trigger.dev employs a three-database strategy plus object storage, with each component optimized for a specific workload.
Sources: Diagram 4 from high-level architecture, apps/webapp/app/db.server.ts1-395 apps/webapp/app/env.server.ts116-260 apps/webapp/package.json33-38
The system uses separate Prisma clients for write and read operations to enable horizontal scaling through read replicas.
The primary client is initialized at apps/webapp/app/db.server.ts101-106:
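A minimal sketch of the pattern, assuming standard PrismaClient construction (the real initialization in db.server.ts also wires up logging and monitoring):

```typescript
import { PrismaClient } from "@prisma/client";

// Sketch only: the primary client points at DATABASE_URL.
export const prisma = new PrismaClient({
  datasources: { db: { url: process.env.DATABASE_URL } },
});

// $replica targets the read replica when one is configured, and
// falls back to the primary client otherwise (see below).
export const $replica: PrismaClient = process.env.DATABASE_READ_REPLICA_URL
  ? new PrismaClient({
      datasources: { db: { url: process.env.DATABASE_READ_REPLICA_URL } },
    })
  : prisma;
```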
If DATABASE_READ_REPLICA_URL is not configured, $replica falls back to the primary prisma client.
Sources: apps/webapp/app/db.server.ts101-227 apps/webapp/app/db.server.ts229-349
Database connections are configured via environment variables with URL query parameters for connection pooling:
| Parameter | Environment Variable | Default | Description |
|---|---|---|---|
| connection_limit | DATABASE_CONNECTION_LIMIT | 10 | Maximum number of connections in the pool |
| pool_timeout | DATABASE_POOL_TIMEOUT | 60 | Connection pool timeout in seconds |
| connection_timeout | DATABASE_CONNECTION_TIMEOUT | 20 | Individual connection timeout in seconds |
Connection URLs are extended with these parameters at apps/webapp/app/db.server.ts112-116:
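The idea, sketched as a standalone helper (the helper name is illustrative; the real logic is inline in db.server.ts):

```typescript
// Sketch: append pooling parameters from the table above to the
// connection URL. Defaults mirror the documented values.
function extendConnectionUrl(urlString: string): string {
  const url = new URL(urlString);
  url.searchParams.set(
    "connection_limit",
    process.env.DATABASE_CONNECTION_LIMIT ?? "10"
  );
  url.searchParams.set(
    "pool_timeout",
    process.env.DATABASE_POOL_TIMEOUT ?? "60"
  );
  url.searchParams.set(
    "connection_timeout",
    process.env.DATABASE_CONNECTION_TIMEOUT ?? "20"
  );
  return url.toString();
}
```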
Sources: apps/webapp/app/db.server.ts108-227 apps/webapp/app/env.server.ts51-66 apps/webapp/app/db.server.ts351-362
Prisma clients are configured with event-based logging for structured output. Query performance monitoring is enabled through the queryPerformanceMonitor helper, detailed in the query performance monitoring section below.
The monitoring system tracks slow queries when VERBOSE_PRISMA_LOGS or VERY_SLOW_QUERY_THRESHOLD_MS environment variables are set.
Sources: apps/webapp/app/db.server.ts126-220 apps/webapp/app/db.server.ts217-219
The codebase provides a custom $transaction wrapper that enhances Prisma's native transaction support with OpenTelemetry spans, automatic retry logic, and structured error handling.
The transaction function is defined at apps/webapp/app/db.server.ts27-97 and supports OpenTelemetry spans, automatic retries for serialization failures, and a swallowPrismaErrors option. A usage sketch follows the options table below.
Transaction Options:
| Option | Type | Description |
|---|---|---|
| maxWait | number | Max time (ms) to acquire a transaction from the pool (default: 2000) |
| timeout | number | Max transaction runtime (ms) before rollback (default: 5000) |
| isolationLevel | TransactionIsolationLevel | Transaction isolation level |
| swallowPrismaErrors | boolean | Return undefined instead of throwing on errors |
| maxRetries | number | Max retry attempts for serialization failures (default: 0) |
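A hedged usage sketch based on the options above; the exact call signature is defined at apps/webapp/app/db.server.ts27-97, and the import path and model name here are illustrative:

```typescript
import { prisma, $transaction } from "~/db.server"; // import path assumed

// Retries up to 3 times on serialization failures; rolls back if the
// body runs longer than 5 seconds.
const run = await $transaction(
  prisma,
  async (tx) => {
    return tx.taskRun.findFirst({ where: { status: "PENDING" } });
  },
  { maxWait: 2000, timeout: 5000, maxRetries: 3 }
);
```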
Sources: apps/webapp/app/db.server.ts27-97 internal-packages/database/src/transaction.ts59-112
The transaction system implements automatic retry logic for specific Prisma error codes at internal-packages/database/src/transaction.ts38-46:
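A simplified sketch of the retry shape; the code shown is an assumption based on Prisma's documented P2034 write-conflict/deadlock error, and the authoritative list lives in transaction.ts:

```typescript
import { Prisma } from "@prisma/client";

// Assumed retryable code: P2034 is Prisma's documented "write conflict
// or deadlock" error. The authoritative list lives at
// internal-packages/database/src/transaction.ts38-46.
const RETRYABLE_CODES = ["P2034"];

async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const retryable =
        error instanceof Prisma.PrismaClientKnownRequestError &&
        RETRYABLE_CODES.includes(error.code) &&
        attempt < maxRetries;
      if (!retryable) throw error;
    }
  }
}
```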
These error codes represent transient serialization failures, such as write conflicts and deadlocks, that are safe to retry.
Sources: internal-packages/database/src/transaction.ts33-112 apps/webapp/app/db.server.ts66-96
ClickHouse is used for storing time-series analytics data and logs. Migrations are managed using Goose and run during container startup.
ClickHouse migrations are executed during container startup by the Docker entrypoint at docker/scripts/entrypoint.sh13-37.
The Goose binary is installed during the Docker build at docker/Dockerfile3-4 and docker/Dockerfile59-61.
Sources: docker/scripts/entrypoint.sh13-37 docker/Dockerfile3-4 docker/Dockerfile59-61
Redis is logically partitioned into seven specialized instances, each optimized for a specific use case. All instances can share the same physical Redis cluster but are configured independently via environment variables.
Each Redis instance can be configured with its own connection parameters, falling back to the base REDIS_* environment variables when specific overrides aren't provided.
Configuration Pattern (example for Cache Redis at apps/webapp/app/env.server.ts157-187):
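A sketch of the fallback pattern using zod, which the env schema is built with (field names follow the table below; the exact schema is at env.server.ts157-187):

```typescript
import { z } from "zod";

// Sketch: each instance-specific variable defaults to the base
// REDIS_* value when no override is provided.
const EnvSchema = z
  .object({
    REDIS_HOST: z.string().optional(),
    REDIS_PORT: z.coerce.number().optional(),
    CACHE_REDIS_HOST: z.string().optional(),
    CACHE_REDIS_PORT: z.coerce.number().optional(),
  })
  .transform((env) => ({
    ...env,
    CACHE_REDIS_HOST: env.CACHE_REDIS_HOST ?? env.REDIS_HOST,
    CACHE_REDIS_PORT: env.CACHE_REDIS_PORT ?? env.REDIS_PORT,
  }));

const env = EnvSchema.parse(process.env);
```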
Sources: apps/webapp/app/env.server.ts116-260 Diagram 4 from high-level architecture
| Instance | Purpose | Key Package | Environment Prefix |
|---|---|---|---|
| Cache | Application-level caching | @unkey/cache | CACHE_REDIS_* |
| Rate Limit | API and JWT rate limiting | @upstash/ratelimit | RATE_LIMIT_REDIS_* |
| Realtime Streams | Event streaming for UI updates | @s2-dev/streamstore | REALTIME_STREAMS_REDIS_* |
| PubSub | Socket.IO adapter for scaling | @socket.io/redis-adapter | PUBSUB_REDIS_* |
| Queue | Run Engine task queues | Custom MARQS | Base REDIS_* |
| Lock | Distributed locking | redlock | Base REDIS_* |
| Presence | Developer session tracking | Custom | Base REDIS_* |
Each instance supports optional read replica configuration via *_READER_HOST and *_READER_PORT variables for read-heavy workloads.
Sources: apps/webapp/app/env.server.ts116-260 apps/webapp/package.json122-124 apps/webapp/package.json107
Each Redis instance can be configured for cluster mode via its *_CLUSTER_MODE_ENABLED environment variable:
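A sketch of the selection logic, assuming an ioredis-style client (the client library, accepted flag values, and the prefix-based lookup are assumptions for illustration):

```typescript
import { Redis, Cluster } from "ioredis";

// Sketch: return a cluster-aware client when the instance's
// *_CLUSTER_MODE_ENABLED flag is set. The accepted value ("1")
// is an assumption.
function createRedisClient(prefix: string) {
  const host = process.env[`${prefix}_HOST`] ?? "localhost";
  const port = Number(process.env[`${prefix}_PORT`] ?? 6379);

  if (process.env[`${prefix}_CLUSTER_MODE_ENABLED`] === "1") {
    return new Cluster([{ host, port }]);
  }
  return new Redis({ host, port });
}
```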
When enabled, the Redis client will use cluster-aware connection logic for horizontal scaling.
Sources: apps/webapp/app/env.server.ts187 apps/webapp/app/env.server.ts155 apps/webapp/app/env.server.ts221
PostgreSQL schema migrations are managed by Prisma and executed during application startup. The migration command runs in both local development and production Docker containers.
The entrypoint migration step lives at docker/scripts/entrypoint.sh8-11.
The corresponding package.json script is defined at package.json14.
Prisma schema location: internal-packages/database/prisma/schema.prisma
Sources: docker/scripts/entrypoint.sh8-11 package.json14 docker/Dockerfile32-33
Sources: docker/scripts/entrypoint.sh1-52 docker/Dockerfile59-61 docker/Dockerfile93-94
The system uses graphile-worker for background job processing, configured via the ZodWorker wrapper.
Worker initialization at apps/webapp/app/services/worker.server.ts124-148:
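A sketch using graphile-worker's run() API directly, driven by the environment variables from the table below; ZodWorker layers zod-validated payloads on top of this, and the task name here is illustrative:

```typescript
import { run } from "graphile-worker";

// Sketch: a graphile-worker runner configured from the documented
// environment variables. ZodWorker wraps this with typed payloads.
const runner = await run({
  connectionString: process.env.DATABASE_URL,
  concurrency: Number(process.env.WORKER_CONCURRENCY ?? 10),
  pollInterval: Number(process.env.WORKER_POLL_INTERVAL ?? 1000),
  taskList: {
    // Illustrative task name and handler.
    "events.deliver": async (payload, helpers) => {
      helpers.logger.info(`received ${JSON.stringify(payload)}`);
    },
  },
});
```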
Worker Environment Variables:
| Variable | Default | Description |
|---|---|---|
| WORKER_SCHEMA | graphile_worker | PostgreSQL schema for worker tables |
| WORKER_CONCURRENCY | 10 | Number of concurrent jobs |
| WORKER_POLL_INTERVAL | 1000 | Polling interval in ms |
| WORKER_ENABLED | true | Enable/disable worker processing |
| GRACEFUL_SHUTDOWN_TIMEOUT | 60000 | Shutdown timeout in ms |
The worker is initialized in the application entry point at apps/webapp/app/entry.server.tsx190-192.
Sources: apps/webapp/app/services/worker.server.ts124-148 apps/webapp/app/env.server.ts108-112 apps/webapp/app/entry.server.tsx190-192
The system includes a migration helper to handle graphile-worker schema conflicts during upgrades.
The helper is invoked at apps/webapp/app/services/worker.server.ts124-131:
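A shape sketch with hypothetical names (the real helper and its invocation live at apps/webapp/app/services/worker.server.ts124-131):

```typescript
// Hypothetical names for illustration only; the real helper and its
// invocation are at apps/webapp/app/services/worker.server.ts124-131.
declare function migrateGraphileWorkerSchema(): Promise<void>;
declare function initializeWorker(): Promise<void>;

export async function startWorker() {
  // Reconcile any graphile-worker schema changes from a previous
  // version before the worker begins polling for jobs.
  await migrateGraphileWorkerSchema();
  await initializeWorker();
}
```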
Sources: apps/webapp/app/services/worker.server.ts124-131 apps/webapp/app/entry.server.tsx201-203
The system includes a query performance monitoring utility that tracks slow database queries and logs them for analysis.
Query monitoring is configured at apps/webapp/app/db.server.ts217-219 for the primary client and at apps/webapp/app/db.server.ts339-341 for the replica:
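A sketch of what attachment amounts to, using Prisma's event API (the import path and threshold handling are assumptions; the real helper is queryPerformanceMonitor):

```typescript
import { prisma } from "~/db.server"; // import path assumed

// Sketch: listen to Prisma "query" events and flag the slow ones.
const threshold = Number(
  process.env.VERY_SLOW_QUERY_THRESHOLD_MS ?? Number.POSITIVE_INFINITY
);

prisma.$on("query", (event) => {
  if (event.duration >= threshold) {
    console.warn(`very slow query (${event.duration}ms): ${event.query}`);
  }
});
```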
The monitoring activates when either:
- VERBOSE_PRISMA_LOGS="1" - Logs all queries to stdout
- VERY_SLOW_QUERY_THRESHOLD_MS is set - Logs queries exceeding the threshold

Query Event Configuration at apps/webapp/app/db.server.ts158-167:
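A sketch of conditionally emitting query events, using Prisma's documented log configuration (the exact condition lives at db.server.ts158-167):

```typescript
import { PrismaClient } from "@prisma/client";

// Sketch: only emit "query" events when monitoring is requested, so
// production avoids the overhead of logging every query.
const monitoringEnabled =
  process.env.VERBOSE_PRISMA_LOGS === "1" ||
  process.env.VERY_SLOW_QUERY_THRESHOLD_MS !== undefined;

const prisma = new PrismaClient({
  log: monitoringEnabled
    ? [
        { emit: "event", level: "query" },
        { emit: "stdout", level: "error" },
      ]
    : [{ emit: "stdout", level: "error" }],
});
```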
This allows selective performance monitoring in production without the overhead of logging all queries.
Sources: apps/webapp/app/db.server.ts158-220 apps/webapp/app/db.server.ts281-341
The Prisma schema is located in internal-packages/database/prisma/schema.prisma and organized into logical groups; see the dedicated pages for detailed information about specific models.
The schema defines the multi-tenant data model with proper isolation at the organization and environment levels. The DATABASE_SCHEMA constant provides the PostgreSQL schema name (default: public) for use in raw SQL queries at apps/webapp/app/db.server.ts376-395:
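A hedged sketch of how the constant might be used in a raw query (the import path, table, and column names are illustrative):

```typescript
import { Prisma } from "@prisma/client";
import { prisma, DATABASE_SCHEMA } from "~/db.server"; // import path assumed

// Sketch: qualify table names with the configured schema so raw SQL
// works when the app runs outside the default "public" schema.
const pending = await prisma.$queryRaw(
  Prisma.sql`
    SELECT count(*)
    FROM ${Prisma.raw(`"${DATABASE_SCHEMA}"."TaskRun"`)}
    WHERE status = 'PENDING'
  `
);
```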
Sources: apps/webapp/app/db.server.ts376-395 docker/scripts/entrypoint.sh40-41