A NestJS learning project for building a realistic core banking or digital wallet backend with:
- Event sourcing for the write model
- CQRS for the write/read split
- Snapshotting for faster aggregate loads
- Kafka-driven live projections for query APIs
- PostgreSQL or in-memory infrastructure behind shared interfaces
- Create, deposit into, withdraw from, and freeze accounts
- Persist account changes as immutable domain events
- Deduplicate account command retries with a Postgres-backed idempotency store
- Guard deposit and withdrawal business transactions with a write-side transaction registry
- Rehydrate aggregates from event history, using snapshots every 50 versions
- Maintain read models for account details, balances, and statement history
- Publish persisted events through a transactional outbox
- Update live account projections through Kafka consumers
- Keep manual replay and rebuild tooling backed by the event store
- Run lightweight versioned SQL migrations in Postgres mode
Longer design context lives in:
- docs/spec/spec-001.md for the baseline event-sourced write model, snapshots, and initial query side before outbox/Kafka live projection
- docs/spec/spec-002.md for the outbox, Kafka live projections, projection gap handling, and rebuild tooling update
- docs/spec/spec-003.md for hot-account concurrency hardening: server-side retry, per-account in-process mutex, snapshot tuning, and outbox poll interval reduction
This repo uses a modular monolith with clear boundaries:
- `accounts` for account commands, aggregate logic, projections, and queries
- `transfers` for transfer orchestration scaffolding
- `infrastructure` for database access, event store, snapshots, projections, and messaging
High-level flow:
```
Client
  |
  v
HTTP Controller
  |
  v
Command Handler
  |
  v
Load Aggregate (snapshot + tail events)
  |
  v
Domain Decision
  |
  v
Append Events to Event Store
  |
  +--> Transactional Outbox --> Kafka --> Live Projection Consumer --> Read Tables
  |
  +--> Manual Projection Replay / Rebuild From Event Store
```
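The outbox branch of this flow can be sketched with in-memory stand-ins. This is an illustrative sketch only; names like `appendWithOutbox` and `publishPending` are assumptions, not the repo's actual API.

```typescript
// Minimal in-memory sketch of the transactional-outbox step in the diagram.
// In Postgres, appendWithOutbox would be a single transaction so the event
// append and the outbox insert commit (or roll back) together.
interface DomainEvent { eventId: string; type: string; payload: unknown }
interface OutboxRow { event: DomainEvent; publishedAt?: Date }

const events: DomainEvent[] = [];   // stand-in for the events table
const outbox: OutboxRow[] = [];     // stand-in for the outbox_events table

function appendWithOutbox(batch: DomainEvent[]): void {
  events.push(...batch);
  outbox.push(...batch.map((event) => ({ event })));
}

// Poller: hand unpublished rows to a publisher (Kafka in the real app),
// then mark them published so they are not sent again.
function publishPending(publish: (e: DomainEvent) => void): number {
  const pending = outbox.filter((row) => !row.publishedAt);
  for (const row of pending) {
    publish(row.event);
    row.publishedAt = new Date();
  }
  return pending.length;
}
```

Because the outbox row commits atomically with the event, a crash between commit and publish only delays delivery; it never loses the event.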
```
src/
  common/
  cqrs/
  domain/
  infrastructure/
    db/
    event-store/
    messaging/
    projections/
    snapshots/
  modules/
    accounts/
      application/
      domain/
      query/
    transfers/
```
The write side is event sourced:
- the `AccountAggregate` enforces business rules
- the `AccountRepository` loads the aggregate from its stream
- new events are appended to the `events` table
- optimistic concurrency is enforced with `expectedVersion`
Example account stream:
`AccountCreated` -> `MoneyDeposited` -> `MoneyWithdrawn` -> `AccountFrozen`
The current account state is rebuilt from those facts, not from a mutable accounts row.
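Rebuilding state from facts amounts to folding the event stream. The sketch below uses simplified event shapes derived from the examples in this README, not the repo's exact domain types.

```typescript
// Fold an account's event stream into current state.
type AccountEvent =
  | { type: 'AccountCreated'; currency: string }
  | { type: 'MoneyDeposited'; amount: number }
  | { type: 'MoneyWithdrawn'; amount: number }
  | { type: 'AccountFrozen'; reason: string };

interface AccountState {
  balance: number;
  status: 'ACTIVE' | 'FROZEN';
  version: number;
}

// Apply one event to the state; every event bumps the stream version.
function evolve(state: AccountState, event: AccountEvent): AccountState {
  const next = { ...state, version: state.version + 1 };
  switch (event.type) {
    case 'AccountCreated': return { ...next, status: 'ACTIVE' };
    case 'MoneyDeposited': return { ...next, balance: next.balance + event.amount };
    case 'MoneyWithdrawn': return { ...next, balance: next.balance - event.amount };
    case 'AccountFrozen':  return { ...next, status: 'FROZEN' };
  }
}

const initial: AccountState = { balance: 0, status: 'ACTIVE', version: 0 };

function rehydrate(stream: AccountEvent[]): AccountState {
  return stream.reduce(evolve, initial);
}
```

No mutable row is ever updated: the same stream always folds to the same state.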
To avoid replaying very long account streams from version 1 every time, the repo stores snapshots:
- the snapshot interval is currently `50` versions
- snapshots are a performance optimization only
- the event stream remains the source of truth
Aggregate load flow:
- load the latest snapshot for `account-{id}`
- restore aggregate state from the snapshot
- read only events after the snapshot version
- replay the remaining tail events
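The load flow above can be sketched as follows; the snapshot and event shapes are simplified assumptions (deposits only), not the repo's real store types.

```typescript
// Restore from the latest snapshot, then replay only the tail events.
interface Snapshot { version: number; balance: number }
interface StoredEvent { streamVersion: number; amount: number } // deposits only, for brevity

function loadAggregate(
  snapshot: Snapshot | undefined,
  stream: StoredEvent[],
): { balance: number; version: number } {
  // Start from the snapshot if one exists, otherwise from version 0.
  let balance = snapshot?.balance ?? 0;
  let version = snapshot?.version ?? 0;
  // Read only events AFTER the snapshot version (the "tail").
  for (const event of stream.filter((e) => e.streamVersion > version)) {
    balance += event.amount;
    version = event.streamVersion;
  }
  return { balance, version };
}
```

With a 50-version interval, a stream at version 1,000 replays at most 49 tail events instead of all 1,000.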
The query side is served from projections, not from aggregate rehydration during reads.
Current projection tables:
- `account_summary`
- `account_statement`
- `projection_checkpoints`
Live account projection updates are Kafka-driven.
The manual projection runner remains in the codebase for:
- replay
- rebuild
- repair from source of truth
This means the query side is eventually consistent with the write side.
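A minimal sketch of how a checkpointed projection can stay consistent under Kafka redelivery and out-of-order delivery, in the spirit of the gap handling described in spec-002. The maps stand in for `projection_checkpoints` and `account_summary`; the function name and return values are illustrative.

```typescript
// Apply an event only when it is the next expected version for its account.
interface ProjectionEvent { accountId: string; streamVersion: number; type: string }

const checkpoints = new Map<string, number>();              // projection_checkpoints stand-in
const summaries = new Map<string, { version: number }>();   // account_summary stand-in

function project(event: ProjectionEvent): 'applied' | 'duplicate' | 'gap' {
  const seen = checkpoints.get(event.accountId) ?? 0;
  if (event.streamVersion <= seen) return 'duplicate'; // safe to skip on redelivery
  if (event.streamVersion > seen + 1) return 'gap';    // trigger replay from the event store
  summaries.set(event.accountId, { version: event.streamVersion });
  checkpoints.set(event.accountId, event.streamVersion);
  return 'applied';
}
```

Duplicates are dropped idempotently, and a detected gap falls back to the source of truth rather than applying events out of order.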
`EVENT_STORE_KIND` controls which infrastructure implementation is used:

- `in-memory` for fast local learning and tests
- `postgres` for a persistent event store, snapshots, and read models
In Postgres mode, the app uses:
- `events` for the append-only event log
- `snapshots` for aggregate snapshots
- `account_summary` for current account state
- `account_statement` for account history
- `projection_checkpoints` for projection progress
- `outbox_events` for reliable event publication
- `idempotency_records` for API-level command deduplication
- `transaction_records` for business transaction deduplication
- `schema_migrations` for versioned SQL migrations
```bash
docker compose up -d postgres zookeeper kafka
npm install
docker compose exec -T postgres psql -U banking -d banking -f /docker-entrypoint-initdb.d/init.sql
```

`init.sql` is useful for first-time container initialization. After that, incremental schema changes should be added as versioned migrations in `src/infrastructure/db/migrations/migrations.ts`.
```bash
npm run start:dev
```

Base URL: `http://localhost:3000/api`
Health check:
`GET /health`
All account command endpoints now require an Idempotency-Key header.
Use one unique key per logical operation, and reuse the same key only when retrying that exact same request.
Create account:
```bash
curl -X POST http://localhost:3000/api/accounts \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: 2d3c5e2f-54fc-42b1-98d0-65f5fd4d6448" \
  -d "{\"accountId\":\"acc-1\",\"ownerId\":\"user-1\",\"currency\":\"USD\"}"
```

Deposit money:
```bash
curl -X POST http://localhost:3000/api/accounts/acc-1/deposits \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: 4e8cb2bc-e8cf-4407-979d-b1e8ac25fe65" \
  -d "{\"amount\":1000,\"currency\":\"USD\",\"transactionId\":\"txn-1\"}"
```

Withdraw money:
```bash
curl -X POST http://localhost:3000/api/accounts/acc-1/withdrawals \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: cbcf3754-2e55-4ab2-b788-0ef2bbac7776" \
  -d "{\"amount\":200,\"currency\":\"USD\",\"transactionId\":\"txn-2\"}"
```

Freeze account:
```bash
curl -X POST http://localhost:3000/api/accounts/acc-1/freeze \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: 197f5140-4516-4eb4-b10f-cf2cdfa4c906" \
  -d "{\"reason\":\"compliance review\"}"
```

Transfer scaffolding:
```bash
curl -X POST http://localhost:3000/api/transfers \
  -H "Content-Type: application/json" \
  -d "{\"sourceAccountId\":\"acc-1\",\"destinationAccountId\":\"acc-2\",\"amount\":150,\"currency\":\"USD\"}"
```

Get account details from `account_summary`:
```bash
curl http://localhost:3000/api/accounts/acc-1
```

Example response:

```json
{
  "accountId": "acc-1",
  "ownerId": "user-1",
  "currency": "USD",
  "status": "ACTIVE",
  "balance": 800,
  "version": 3,
  "createdAt": "2026-03-22T10:00:00.000Z",
  "updatedAt": "2026-03-22T10:05:00.000Z"
}
```

Get current balance:

```bash
curl http://localhost:3000/api/accounts/acc-1/balance
```

Get account history from `account_statement`:

```bash
curl "http://localhost:3000/api/accounts/acc-1/history?limit=50&offset=0"
```

Example response:
```json
{
  "accountId": "100000",
  "entries": [
    {
      "eventId": "ab2db266-6610-4a7f-8ad4-fe10851523fc",
      "accountId": "100000",
      "streamVersion": 1,
      "eventType": "AccountCreated",
      "occurredAt": "2026-03-22T21:22:08.718Z"
    },
    {
      "eventId": "f5eecd0f-2981-4327-9652-83a1772c5424",
      "accountId": "100000",
      "streamVersion": 2,
      "eventType": "MoneyDeposited",
      "amount": 15000,
      "currency": "NGN",
      "transactionId": "txn-1",
      "occurredAt": "2026-03-22T21:26:04.083Z"
    },
    {
      "eventId": "fb917c12-f9e4-4f52-98ae-1dce314a7809",
      "accountId": "100000",
      "streamVersion": 3,
      "eventType": "MoneyDeposited",
      "amount": 25000,
      "currency": "NGN",
      "transactionId": "txn-1",
      "occurredAt": "2026-03-22T21:26:47.177Z"
    },
    {
      "eventId": "b4cbbcab-a5ba-424a-8e4a-cebd4eb3f074",
      "accountId": "100000",
      "streamVersion": 4,
      "eventType": "MoneyWithdrawn",
      "amount": 5000,
      "currency": "NGN",
      "transactionId": "txn-1",
      "occurredAt": "2026-03-22T21:31:35.785Z"
    }
  ]
}
```

This repo now includes a mixed-workload load runner for local stress testing.
It is intentionally scenario-driven instead of a pure raw-HTTP benchmark so it can:
- create fresh accounts
- generate unique `Idempotency-Key` values
- generate unique deposit and withdrawal `transactionId` values
- mix reads and writes against the same account pool
- concentrate traffic on a hot subset of accounts to simulate real contention
Start the app first, then run:
```bash
npm run load:test
```

Useful flags:

- `--duration=60` total test length in seconds
- `--workers=20` number of concurrent request loops
- `--accounts=100` seed account count before the run starts
- `--seed-concurrency=10` concurrent account creation during setup
- `--initial-deposit=10000` opening balance per seeded account
- `--min-amount=100`
- `--max-amount=1500`
- `--base-url=http://localhost:3000/api`
- `--hot-account-ratio=0.1` top slice of accounts treated as hot
- `--hot-selection-rate=0.8` chance a request targets the hot slice
- `--deposit-weight=35`
- `--withdraw-weight=30`
- `--balance-weight=20`
- `--history-weight=10`
- `--create-weight=5`
Example heavier run:
```bash
npm run load:test -- --duration=120 --workers=50 --accounts=250 --seed-concurrency=25 --initial-deposit=25000 --hot-account-ratio=0.05 --hot-selection-rate=0.9
```

What to watch for:

- `400` responses can be expected under aggressive withdrawal pressure, because some requests will hit insufficient funds
- `409` responses from deposits and withdrawals now indicate a genuine write conflict that exhausted all server-side retry attempts; this should be rare under normal load
- `404` responses on balance or history reads can occur briefly after account creation because projections are asynchronous; they resolve quickly as the outbox publisher and Kafka consumer catch up
- `500` responses are unexpected and indicate a system fault worth investigating
If you want a pure endpoint throughput benchmark as a separate experiment, you can also use autocannon externally against a single route, but the built-in script is the better fit for realistic account workflow stress.
If the read model falls behind or becomes inconsistent, rebuild it from the event store.
HTTP admin endpoints:
- `POST /api/admin/projections/accounts/:accountId/rebuild`
- `POST /api/admin/projections/accounts/rebuild-all`
Examples:
```bash
curl -X POST http://localhost:3000/api/admin/projections/accounts/acc-1/rebuild
curl -X POST http://localhost:3000/api/admin/projections/accounts/rebuild-all
```

CLI rebuild script:

```bash
npm run projections:rebuild -- account acc-1
npm run projections:rebuild -- all
```

Write endpoints return once the event is appended to the event store. Query endpoints read from projections updated asynchronously. Because of that:
- a write can succeed before the read model reflects it
- reads are usually very fast
- reads may lag briefly behind writes
That tradeoff is intentional in CQRS systems.
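This repo has no built-in read-your-own-writes strategy yet (see the limitations section), but a common client-side mitigation is to poll the read side until it reflects the write. The sketch below assumes the client can learn the expected aggregate version (for example from the command response, if exposed); `waitForVersion` is an illustrative name, not a repo API.

```typescript
// Poll a read-side version until it catches up with an expected write version.
// readVersion would typically wrap GET /api/accounts/:id and return `version`.
async function waitForVersion(
  readVersion: () => Promise<number>,
  expected: number,
  attempts = 10,
  delayMs = 100,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if ((await readVersion()) >= expected) return true; // projection caught up
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false; // still lagging after all attempts; caller decides what to do
}
```

Bounded polling keeps the UX simple without coupling the query side to the write path.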
Account command endpoints use a Postgres-backed `idempotency_records` table.
Behavior:
- the first request with a new `Idempotency-Key` is processed normally
- a retry with the same key and same payload returns the original success response
- retry with the same key while the first request is still in progress returns a conflict
- reusing the same key for a different payload returns a conflict
Client guidance:
- generate a UUID on the client for each logical command
- reuse that exact UUID only when retrying the same request
- do not generate a fresh key for retries, or the server will treat it as a new command
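The guidance above can be sketched as a small client-side retry wrapper. The `send` callback and `sendCommand` name are illustrative assumptions; the only real requirement is that the key is generated once per logical command, never per attempt.

```typescript
import { randomUUID } from 'node:crypto';

// Retry a command with the SAME Idempotency-Key so the server can deduplicate.
async function sendCommand(
  send: (idempotencyKey: string) => Promise<{ ok: boolean }>,
  maxAttempts = 3,
): Promise<boolean> {
  const key = randomUUID(); // generated once per logical command, NOT per attempt
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await send(key);
      if (res.ok) return true;
    } catch {
      // Network failure: retry with the same key. If the first attempt actually
      // reached the server, the retry returns the original response instead of
      // executing the command a second time.
    }
  }
  return false;
}
```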
For deposits and withdrawals, the app also uses a Postgres-backed `transaction_records` table.
That means:
- `Idempotency-Key` protects the API request
- `transactionId` protects the business money movement
Even if a caller changes the `Idempotency-Key`, reusing the same `transactionId` for a deposit or withdrawal will not create a second business transaction.
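A minimal sketch of that business-level deduplication, with an in-memory set standing in for `transaction_records` (in Postgres this would be an `INSERT ... ON CONFLICT DO NOTHING` on the transaction ID); `applyDeposit` is an illustrative name, not the repo's handler.

```typescript
// Deduplicate money movements by transactionId, independently of the API key.
const processedTransactions = new Set<string>(); // transaction_records stand-in

function applyDeposit(
  transactionId: string,
  amount: number,
  balance: { value: number },
): boolean {
  if (processedTransactions.has(transactionId)) {
    return false; // duplicate business transaction: money moves at most once
  }
  processedTransactions.add(transactionId);
  balance.value += amount;
  return true;
}
```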
This repo does not use an ORM.
Instead it uses:
- `pg` for direct SQL access
- a Nest provider called `PG_POOL` for shared connections
- a lightweight migration runner on app startup in Postgres mode
Migration flow:
- app starts
- if `EVENT_STORE_KIND=postgres`, the migration runner checks `schema_migrations`
- pending migrations are applied in order
- outbox, projections, and repositories use the resulting schema
For new schema changes:
- add a new migration object with the next version in `src/infrastructure/db/migrations/migrations.ts`
- do not edit old migrations that may already be applied in real environments
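The migration flow above can be sketched like this. The `Migration` shape and `runMigrations` name are assumptions for illustration; see `src/infrastructure/db/migrations/migrations.ts` for the real definitions.

```typescript
// Apply pending migrations in version order, skipping already-applied ones.
interface Migration { version: number; up: () => void }

const applied = new Set<number>(); // schema_migrations stand-in

function runMigrations(migrations: Migration[]): number[] {
  const pending = migrations
    .filter((m) => !applied.has(m.version))
    .sort((a, b) => a.version - b.version); // strictly in version order
  for (const migration of pending) {
    migration.up();                 // in Postgres: run the SQL in a transaction
    applied.add(migration.version); // then record it in schema_migrations
  }
  return pending.map((m) => m.version);
}
```

Recording each applied version is what makes startup idempotent: rerunning the app applies nothing new.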
Key environment variables:
- `PORT` (default `3000`)
- `EVENT_STORE_KIND` (either `in-memory` or `postgres`)
- `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`
- `KAFKA_BROKER`, `KAFKA_CLIENT_ID`
See `.env.example`.
Kafka broker values depend on where the app runs:
- host machine with `npm run start:dev` -> `KAFKA_BROKER=localhost:29092`
- Docker Compose app container -> `KAFKA_BROKER=kafka:9092`
- account query projections are currently the only implemented read models
- transfer flow is still scaffolding rather than a full durable saga
- no read-your-own-write strategy yet for query-after-command UX
- rebuild/admin endpoints are not authenticated yet
- Kafka-based live projections still need broader operational hardening such as monitoring, lag visibility, and richer failure handling
- Add transfer status projection and query endpoint.
- Implement the durable transfer saga/process manager.
- Add auth and audit logging for admin rebuild endpoints.
- Add monitoring for outbox lag, consumer lag, and projection failures.
- Add integration tests covering concurrency conflicts, projection gaps, rebuild flows, and transaction/idempotency deduplication.
Contributions are welcome. To keep things smooth and collaborative, please follow these best practices:
- Fork the repository: create your own copy of the repo to work on.
- Create a feature branch:

  ```bash
  git checkout -b feature/your-feature-name
  ```

- Make your changes:
  - Keep commits small and meaningful.
  - Follow the existing code style and conventions.
  - Add or update tests where applicable.
- Run tests locally.
- Update the README, then add a spec file describing your changes (see the existing spec files for inspiration).