PsyQueue supports three storage backends. All implement the same BackendAdapter interface, so you can swap between them without changing application code.
| Backend | Best For | Tradeoffs |
|---|---|---|
| SQLite | Development, prototyping, single-process apps, edge/embedded | No network setup. Not suited for multi-process workers. |
| Redis | High-throughput production, multi-worker setups | Requires Redis server. No ACID transactions across jobs. |
| Postgres | Enterprise, audit requirements, ACID compliance, complex queries | Slightly higher latency than Redis. Requires Postgres server. |
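Because every backend sits behind the same `BackendAdapter` interface, application code can stay backend-agnostic. A minimal sketch of the idea, assuming an illustrative two-method interface (the library's real `BackendAdapter` surface is larger and may differ):

```typescript
// Sketch: application code depends only on the adapter interface, so the
// concrete backend can be swapped without touching handlers. The
// interface shape and MemoryAdapter are illustrative assumptions.
interface Job {
  name: string
  payload: unknown
}

interface BackendAdapter {
  enqueue(job: Job): void
  dequeue(): Job | undefined
}

// A minimal in-memory stand-in for any real backend.
class MemoryAdapter implements BackendAdapter {
  private jobs: Job[] = []
  enqueue(job: Job): void {
    this.jobs.push(job)
  }
  dequeue(): Job | undefined {
    return this.jobs.shift()
  }
}

// Works identically whatever concrete adapter `backend` is.
function roundTrip(backend: BackendAdapter): string | undefined {
  backend.enqueue({ name: 'send-email', payload: { to: 'user@example.com' } })
  return backend.dequeue()?.name
}
```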
## SQLite

Embedded storage using better-sqlite3. No external infrastructure required.

```bash
npm install @psyqueue/backend-sqlite
```

```ts
import { sqlite } from '@psyqueue/backend-sqlite'

q.use(sqlite({
  path: './jobs.db', // File path, or ':memory:' for in-memory
}))
```

| Option | Type | Default | Description |
|---|---|---|---|
| `path` | `string` | required | SQLite database file path. Use `':memory:'` for in-memory databases (useful for testing). |
**Best for:**

- Local development and prototyping
- Single-process applications
- Edge computing or embedded scenarios
- Tests (use `:memory:` for fast, isolated tests)

**Limitations:**

- Single-writer: only one process can write at a time
- Not suitable for horizontally-scaled worker pools
- Data lives on a single machine
## Redis

High-performance backend using ioredis. Designed for production multi-worker deployments.

```bash
npm install @psyqueue/backend-redis
```

```ts
import { redis } from '@psyqueue/backend-redis'

// Option 1: Individual settings
q.use(redis({
  host: 'localhost',
  port: 6379,
  password: 'secret',
  db: 0,
  keyPrefix: 'psyqueue:',
}))

// Option 2: Connection URL
q.use(redis({
  url: 'redis://:secret@localhost:6379/0',
}))
```

| Option | Type | Default | Description |
|---|---|---|---|
| `host` | `string` | `'localhost'` | Redis host |
| `port` | `number` | `6379` | Redis port |
| `password` | `string` | - | Redis password |
| `db` | `number` | `0` | Redis database number |
| `url` | `string` | - | Full Redis connection URL. Overrides host/port/password/db. |
| `keyPrefix` | `string` | `'psyqueue:'` | Prefix for all Redis keys |
**Best for:**

- Production multi-worker deployments
- High-throughput job processing (7,989 jobs/sec with concurrency: 10 -- 1.29x faster than BullMQ)
- When you already have Redis in your infrastructure
- Real-time applications needing sub-millisecond dequeue latency
The Redis backend uses a hybrid list + sorted-set model optimized for throughput:
**Hybrid dequeue model:**
- Default-priority jobs (priority = 0) use a Redis LIST with RPUSH/RPOP -- O(1) enqueue and dequeue.
- Priority jobs (priority > 0) use a sorted set for ordering, then get promoted to the front of the ready list via LPUSH. This means the hot path (most jobs) avoids sorted-set overhead entirely.
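The hybrid model above can be sketched in plain TypeScript. This is an in-memory model, not the backend's actual code -- the real implementation lives in Redis Lua scripts, and this sketch promotes priority jobs immediately, which the real backend may schedule differently:

```typescript
// In-memory model of the hybrid dequeue path: a JS array stands in for
// the Redis LIST and a sorted array for the sorted set.
type QueuedJob = { id: string; priority: number }

class HybridQueue {
  private ready: QueuedJob[] = []       // stands in for the Redis LIST
  private prioritized: QueuedJob[] = [] // stands in for the sorted set

  enqueue(job: QueuedJob): void {
    if (job.priority === 0) {
      // Hot path: default-priority jobs go straight onto the ready list.
      this.ready.push(job)
      return
    }
    // Priority jobs are ordered in the sorted structure (highest first)...
    this.prioritized.push(job)
    this.prioritized.sort((a, b) => b.priority - a.priority)
    // ...then promoted to the front of the ready list.
    this.ready.unshift(this.prioritized.shift()!)
  }

  dequeue(): QueuedJob | undefined {
    return this.ready.shift()
  }
}
```

The point of the split is that the common case (priority 0) touches only the list, so most jobs never pay the sorted-set cost.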
**BRPOPLPUSH for blocking dequeue:**
- When used with `startWorker()`, the Redis backend uses BRPOPLPUSH for blocking dequeue -- the connection blocks until a job arrives, eliminating polling overhead.
- A dedicated blocking client connection is created for this purpose, separate from the command connection.
**Hash field packing (hot/cold split):**
- Each job is stored as a Redis hash with 13 fields instead of ~30.
- Hot fields (id, queue, name, payload, status, priority, attempt, max_retries, completion_token, created_at, started_at, completed_at) stay as individual hash fields for fast Lua script access.
- Cold fields (backoff settings, workflow IDs, tenant IDs, trace IDs, metadata, etc.) are packed into a single `_ext` JSON blob.
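A minimal sketch of the hot/cold packing. The hot field list matches the one above; the cold field names used in the example (`tenant_id`, `trace_id`) are illustrative:

```typescript
// Sketch of the hot/cold split: hot fields become individual hash fields
// (fast to read from Lua scripts); everything else is packed into one
// `_ext` JSON blob.
type JobRecord = Record<string, unknown>

const HOT_FIELDS = new Set([
  'id', 'queue', 'name', 'payload', 'status', 'priority', 'attempt',
  'max_retries', 'completion_token', 'created_at', 'started_at',
  'completed_at',
])

// Produce the flat field map that would be written to the Redis hash.
function packJob(job: JobRecord): Record<string, string> {
  const hash: Record<string, string> = {}
  const cold: JobRecord = {}
  for (const [key, value] of Object.entries(job)) {
    if (HOT_FIELDS.has(key)) {
      hash[key] = String(value) // hot: individual hash field
    } else {
      cold[key] = value         // cold: collected for the blob
    }
  }
  hash['_ext'] = JSON.stringify(cold) // one blob for all cold fields
  return hash
}
```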
**ackAndFetch fusion:**
- A single Lua script acks the current job AND dequeues the next job atomically. This reduces per-job Redis round-trips from 3 to 2, similar to BullMQ's `moveToFinished` optimization.
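The fused control flow can be simulated in memory. The real backend performs this atomically inside one Lua script; the method names here are illustrative:

```typescript
// Simulation of ackAndFetch fusion: one call both acks the finished job
// and returns the next one, instead of separate ack and dequeue
// round-trips.
class FusedQueue {
  private ready: string[] = []
  private active = new Set<string>() // models the SADD/SREM active set

  enqueue(id: string): void {
    this.ready.push(id)
  }

  // Plain dequeue: pop the next ready job and mark it active.
  fetch(): string | undefined {
    const next = this.ready.shift()
    if (next !== undefined) this.active.add(next)
    return next
  }

  // Fused operation: ack the finished job, then fetch the next one.
  ackAndFetch(doneId: string): string | undefined {
    this.active.delete(doneId) // SREM: job is done
    return this.fetch()
  }

  isActive(id: string): boolean {
    return this.active.has(id)
  }
}
```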
**Active set uses SADD/SREM:**
- Active job tracking uses a plain Redis set (SADD/SREM) instead of a sorted set (ZADD/ZREM), since active jobs don't need ordering.
| Metric | PsyQueue Redis | BullMQ Redis |
|---|---|---|
| Processing throughput | 7,989 jobs/sec | 6,187 jobs/sec |
Benchmark: 5,000 jobs, concurrency:10, no-op handler, measured after ack.
- Atomic operations via Lua scripts (enqueue, dequeue, ack, nack, ackAndFetch)
- Distributed locking for cron leader election
- Hybrid list + sorted set for priority-based dequeue
- Blocking dequeue (BRPOPLPUSH) for zero-latency job pickup
- Batch dequeue for reduced round-trips
- Pub/sub for real-time notifications
## Postgres

ACID-compliant relational backend using pg. Best for enterprise workloads with audit requirements.

```bash
npm install @psyqueue/backend-postgres
```

```ts
import { postgres } from '@psyqueue/backend-postgres'

// Option 1: Connection string
q.use(postgres({
  connectionString: 'postgresql://user:pass@localhost:5432/psyqueue',
}))

// Option 2: Individual settings
q.use(postgres({
  host: 'localhost',
  port: 5432,
  database: 'psyqueue',
  user: 'psyqueue',
  password: 'secret',
  ssl: true,
  max: 20,
}))
```

| Option | Type | Default | Description |
|---|---|---|---|
| `connectionString` | `string` | - | Full Postgres connection string. Overrides other connection options. |
| `host` | `string` | `'localhost'` | Postgres host |
| `port` | `number` | `5432` | Postgres port |
| `database` | `string` | - | Database name |
| `user` | `string` | - | Database user |
| `password` | `string` | - | Database password |
| `ssl` | `boolean` | `false` | Enable SSL |
| `max` | `number` | `10` | Connection pool size |
**Best for:**

- Enterprise applications with ACID requirements
- Audit compliance (SQL-queryable job history)
- Complex reporting queries on job data
- When you already have Postgres and want to avoid adding Redis

**Features:**

- ACID transactions for atomic operations
- `SELECT ... FOR UPDATE SKIP LOCKED` for safe concurrent dequeue
- Auto-creates schema on first connect
- Connection pooling via `pg.Pool`
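The `SKIP LOCKED` dequeue semantics can be illustrated with an in-memory sketch: a worker claims the first pending row whose lock is free, skipping (rather than waiting on) rows locked by other workers. This models the SQL behavior only; it is not the backend's actual code:

```typescript
// In-memory model of SELECT ... FOR UPDATE SKIP LOCKED semantics.
type Row = { id: number; status: 'pending' | 'active' }

function dequeueSkipLocked(rows: Row[], locked: Set<number>): Row | undefined {
  for (const row of rows) {
    if (row.status !== 'pending') continue
    if (locked.has(row.id)) continue // SKIP LOCKED: move on, don't wait
    locked.add(row.id)               // FOR UPDATE: take the row lock
    row.status = 'active'
    return row
  }
  return undefined // no claimable work right now
}
```

Skipping locked rows is what lets many workers poll the same table concurrently without serializing on each other.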
## Migrating Between Backends

Use the PsyQueue CLI to migrate jobs between backends:

```bash
npx psyqueue migrate \
  --from sqlite:./jobs.db \
  --to redis://localhost:6379 \
  --dry-run
```

The migration tool:
1. Connects to the source backend
2. Reads all jobs (pending, scheduled, dead)
3. Connects to the destination backend
4. Bulk-enqueues jobs to the destination
5. Verifies counts match
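The steps above can be sketched against a minimal assumed adapter interface. The method names (`readAll`, `bulkEnqueue`, `count`) are illustrative assumptions, not the library's actual API:

```typescript
// Sketch of the migration flow: read everything from the source backend,
// bulk-enqueue into the destination, then verify counts.
interface MigratableBackend {
  readAll(): { id: string }[]
  bulkEnqueue(jobs: { id: string }[]): void
  count(): number
}

// Minimal in-memory backend used to exercise the flow.
class MemoryBackend implements MigratableBackend {
  constructor(private jobs: { id: string }[] = []) {}
  readAll() { return [...this.jobs] }
  bulkEnqueue(jobs: { id: string }[]) { this.jobs.push(...jobs) }
  count() { return this.jobs.length }
}

// Assumes an empty destination, mirroring a fresh target backend.
function migrate(from: MigratableBackend, to: MigratableBackend): boolean {
  const jobs = from.readAll()       // steps 1-2: connect and read all jobs
  to.bulkEnqueue(jobs)              // steps 3-4: connect and bulk-enqueue
  return to.count() === jobs.length // step 5: verify counts match
}
```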
For a zero-downtime migration:

1. Deploy with both backends configured (read from old, write to both)
2. Drain the old backend (process remaining jobs)
3. Switch reads to the new backend
4. Remove the old backend configuration
Since all backends implement the same interface, your handlers and middleware work without changes.