
Commit aaa9241

appleboy and claude authored
feat(cache): add OAuth client cache with redis-aside support (#155)
* feat(cache): add OAuth client cache with redis-aside support

  Add a new Cache[OAuthApplication] instance that caches client lookups by client_id using the cache-aside pattern. store.GetClient() is called 20+ times across all OAuth flows (device code, authorization code, token exchange, client credentials) — this was the hottest uncached DB query path.

  Key design decisions:
  - GetClient() returns a cached copy with ClientSecret stripped (defense-in-depth)
  - GetClientWithSecret() bypasses the cache for secret-verification flows
  - Explicit invalidation on all mutations (create, update, delete, approve, reject, secret regeneration)
  - Inject ClientService into DeviceService, TokenService, and AuthorizationService to replace direct store.GetClient() calls

  Configuration: CLIENT_CACHE_TYPE, CLIENT_CACHE_TTL (5m default), CLIENT_CACHE_CLIENT_TTL (30s), CLIENT_CACHE_SIZE_PER_CONN (32MB)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cache): use closure vars in fetchFunc and add DB fallback on cache errors
  - Use clientID/hash closure variables instead of the key param in GetWithFetch fetchFuncs, to avoid using redis-aside prefixed keys for DB lookups
  - Add a cache-error fallback in GetClient to distinguish infrastructure failures from a genuine not-found, mirroring the getAccessTokenByHash pattern
  - Apply the same prefixed-key fix to getAccessTokenByHash in TokenService

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor(cache): move GetClient fallback rationale to doc comment

* refactor(cache): remove unreachable DB fallback in GetClient

  fetchThrough already calls the fetch function on any cache Get error, so the explicit fallback path could never execute for cache backend failures. When the DB itself fails, calling it twice is wasteful. Remove the dead fallback and drop the now-unused gorm import.

* fix(cache): restore DB fallback in GetClient for redis-aside outages
  - Restore the gorm.ErrRecordNotFound check and DB fallback in GetClient
  - RueidisAsideCache.GetWithFetch can return an error without calling fetchFunc when Redis/RESP3 is unavailable, so the fallback is needed to avoid treating infrastructure failures as "client not found"
  - Add tests: secret stripping, cache hit (fetchFunc called once), cache invalidation on UpdateClient and RegenerateSecret

* style(services): wrap long function signature in client_test.go
  - Break the GetWithFetch method signature across multiple lines to satisfy the golines formatter

* fix(services): distinguish store errors from cache-backend errors in GetClient
  - Wrap fetchFunc store errors with a clientFetchErr sentinel to prevent a redundant DB retry
  - Return the original store error instead of masking it as ErrClientNotFound
  - Fix the DB fallback to propagate non-ErrRecordNotFound store errors correctly

* refactor(services): remove unreachable ErrRecordNotFound check in GetClient
  - fetchFunc always wraps store errors in clientFetchErr, so a raw gorm.ErrRecordNotFound can never reach this branch

* style(services): remove redundant inline comment in GetClient fetchFunc

* fix(services): evict corrupted cache entry on ErrInvalidValue in GetClient
  - On cache.ErrInvalidValue (unmarshal failure), delete the bad key before falling back to the DB so subsequent requests re-populate the cache correctly instead of hot-looping through the DB fallback on every call

* fix(services): log Delete errors and fix ErrInvalidValue eviction in token cache
  - Log cache Delete errors on ErrInvalidValue eviction in GetClient (previously silently discarded)
  - Apply the same ErrInvalidValue + eviction pattern to TokenService.getAccessTokenByHash to prevent corrupted token cache entries from hot-looping through the DB fallback

* style(services): mask token hash in eviction log to match the invalidateTokenCache pattern

* refactor(services): add ctx parameter to GetClient and GetClientWithSecret
  - Propagate the caller's context through cache I/O and the DB fallback so request timeouts/cancellation are respected and tracing can propagate
  - Handlers pass c.Request.Context(); service callers pass their ctx; methods without a context fall back to context.Background()

* refactor(services): propagate ctx through GetClientByUserCode, ValidateAuthorizationRequest, AuthenticateClient
  - All three methods are called from HTTP handlers but lacked a ctx parameter; context.Background() is replaced with the actual request context so cancellation/timeouts from handlers flow through to the cache and DB

* fix(services): propagate real DB errors from GetClientWithSecret
  - Preserve non-404 store errors instead of masking them as ErrClientNotFound
  - Remove unnecessary cache invalidation from CreateClient (new clients are never cached)

* fix(services): wrap token store errors to prevent a double DB hit

  Use a tokenFetchErr sentinel (parallel to clientFetchErr) so transient DB errors inside the GetWithFetch fetchFunc are distinguished from cache-backend failures and short-circuited instead of triggering a redundant DB fallback.

* refactor(services): merge clientFetchErr and tokenFetchErr into shared fetchErr

  Both types were identical wrappers used to distinguish store errors from cache-backend errors inside GetWithFetch callbacks. Extract once into errors.go and remove the per-file duplicates.

* test(config): add CLIENT_CACHE_TYPE validation test coverage

  Cover all validation branches: invalid type, redis/redis-aside without REDIS_ADDR, zero CLIENT_CACHE_TTL, and redis-aside with zero CLIENT_CACHE_CLIENT_TTL.

* fix(services): deep-copy RedirectURIs slice when caching OAuthApplication

  Prevent callers from accidentally corrupting cached backing arrays via in-place slice mutations. The cached entry now has its own independent StringArray, so modifications to the returned value cannot affect the cache.

* docs: add Client Cache and Token Cache sections to CONFIGURATION.md
  - Add a ## Client Cache section covering backends, configuration vars, TTL trade-offs, and multi-pod recommendations for the CLIENT_CACHE_* settings
  - Add a ## Token Cache section covering the opt-in token verification cache with TOKEN_CACHE_* settings, revocation invalidation, and RESP3 notes
  - Add both sections to the table of contents
  - Mention CLIENT_CACHE_TYPE and TOKEN_CACHE_TYPE in the README Scalability section

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
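The design decisions the commits converge on (closure variables instead of prefixed cache keys, secret stripping, slice deep-copy, a shared fetch-error sentinel) can be sketched as a small self-contained program. All names here (`cache`, `getWithFetch`, `fetchErr`, `ClientService`) are simplified stand-ins for the project's real `core.Cache` and service types, not the actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// OAuthApplication is a trimmed stand-in for the real model.
type OAuthApplication struct {
	ClientID     string
	ClientSecret string
	RedirectURIs []string
}

var errNotFound = errors.New("client not found") // stand-in for gorm.ErrRecordNotFound

// fetchErr wraps store errors raised inside the cache fetch callback so the
// caller can tell a DB failure apart from a cache-backend failure and skip
// the redundant DB fallback (the role of the shared fetchErr sentinel).
type fetchErr struct{ err error }

func (e fetchErr) Error() string { return "fetch: " + e.err.Error() }
func (e fetchErr) Unwrap() error { return e.err }

// cache is a toy in-memory stand-in for core.Cache[OAuthApplication].
type cache map[string]OAuthApplication

func (c cache) getWithFetch(key string, fetch func() (OAuthApplication, error)) (OAuthApplication, error) {
	if v, ok := c[key]; ok {
		return v, nil // cache hit: no DB access
	}
	v, err := fetch()
	if err != nil {
		return OAuthApplication{}, err
	}
	c[key] = v
	return v, nil
}

// ClientService sketches the cache-aside GetClient described above.
type ClientService struct {
	cache   cache
	store   map[string]OAuthApplication // stand-in for the DB
	fetches int                         // counts DB hits, for the demo
}

func (s *ClientService) GetClient(clientID string) (OAuthApplication, error) {
	return s.cache.getWithFetch("authgate:clients:"+clientID, func() (OAuthApplication, error) {
		// Use the clientID closure variable, not the prefixed cache key.
		s.fetches++
		app, ok := s.store[clientID]
		if !ok {
			return OAuthApplication{}, fetchErr{errNotFound}
		}
		app.ClientSecret = ""                                         // never cache secrets
		app.RedirectURIs = append([]string(nil), app.RedirectURIs...) // deep-copy so the cache owns its backing array
		return app, nil
	})
}

func main() {
	svc := &ClientService{
		cache: cache{},
		store: map[string]OAuthApplication{
			"abc": {ClientID: "abc", ClientSecret: "s3cret", RedirectURIs: []string{"https://cb"}},
		},
	}
	a, _ := svc.GetClient("abc")
	svc.GetClient("abc") // second lookup is served from cache
	fmt.Println(a.ClientSecret == "", svc.fetches) // prints "true 1"
}
```

The real service additionally handles cache-backend outages (DB fallback) and corrupted entries (evict, then re-fetch), which the later commits describe.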
1 parent 5643380 commit aaa9241

38 files changed

Lines changed: 909 additions & 159 deletions

.env.example

Lines changed: 19 additions & 0 deletions
@@ -205,6 +205,25 @@ EXPIRED_TOKEN_CLEANUP_INTERVAL=1h # How often to run the cleanup (default:
 # Client-side cache size per connection in MB for redis-aside mode only (default: 32MB)
 # CLIENT_COUNT_CACHE_SIZE_PER_CONN=32
 
+# ============================================================
+# Client Cache Settings (caches OAuth client lookups by client_id)
+# ============================================================
+# Cache backend for OAuth client lookups. Every OAuth flow (device code, authorization code,
+# token exchange) queries the client record. Caching reduces DB load significantly.
+# In single-instance deployments "memory" is sufficient.
+# In multi-pod deployments use "redis" or "redis-aside" for shared cache with invalidation.
+# CLIENT_CACHE_TYPE=memory
+
+# Server-side cache lifetime for client records (default: 5m)
+# Mutations (create, update, delete, approve, reject, secret regeneration) always invalidate immediately.
+# CLIENT_CACHE_TTL=5m
+
+# Client-side cache TTL for redis-aside mode only (default: 30s)
+# CLIENT_CACHE_CLIENT_TTL=30s
+
+# Client-side cache size per connection in MB for redis-aside mode only (default: 32MB)
+# CLIENT_CACHE_SIZE_PER_CONN=32
+
 # ============================================================
 # Token Cache Settings
 # ============================================================

README.md

Lines changed: 1 addition & 1 deletion
@@ -549,7 +549,7 @@ docker run -d \
 
 - **SQLite**: Suitable for < 1000 concurrent devices, single-instance deployments
 - **PostgreSQL**: Recommended for production, supports horizontal scaling
-- **Multi-Pod**: Use PostgreSQL + Redis for rate limiting and user cache across pods (`RATE_LIMIT_STORE=redis`, `USER_CACHE_TYPE=redis` or `redis-aside`). Note: `redis-aside` requires Redis >= 7.0.
+- **Multi-Pod**: Use PostgreSQL + Redis for rate limiting, user cache, client cache, and token cache across pods (`RATE_LIMIT_STORE=redis`, `USER_CACHE_TYPE=redis` or `redis-aside`, `CLIENT_CACHE_TYPE=redis` or `redis-aside`, `TOKEN_CACHE_TYPE=redis` or `redis-aside`). Note: `redis-aside` requires Redis >= 7.0.
 
 **[Performance Guide →](docs/PERFORMANCE.md)**

docs/CONFIGURATION.md

Lines changed: 147 additions & 0 deletions
@@ -12,6 +12,8 @@ This guide covers all configuration options for AuthGate, including environment
 - [Service-to-Service Authentication](#service-to-service-authentication)
 - [HTTP Retry with Exponential Backoff](#http-retry-with-exponential-backoff)
 - [User Cache](#user-cache)
+- [Client Cache](#client-cache)
+- [Token Cache](#token-cache)
 - [Rate Limiting](#rate-limiting)
 - [CORS (Cross-Origin Resource Sharing)](#cors-cross-origin-resource-sharing)

@@ -722,6 +724,151 @@ USER_CACHE_SIZE_PER_CONN=32 # Adjust based on available memory per pod
 
 ---
 
+## Client Cache
+
+Every OAuth flow (device code, authorization code, token exchange, client credentials) queries the `OAuthApplication` record to validate the client. Caching these lookups reduces database pressure on busy deployments.
+
+The cache is always enabled with no feature flag required. Mutations (create, update, delete, secret regeneration, approve/reject) always invalidate the cache entry immediately.
+
+### How It Works
+
+The cache uses a **cache-aside pattern**:
+
+1. On the first request for a client ID, the DB is queried and the result is stored in cache with a TTL
+2. Client secrets are **stripped before caching** (defense-in-depth — secrets are never stored in the cache backend)
+3. Cache entries are invalidated immediately on any write operation (create, update, delete, secret rotation)
+
+### Cache Backends
+
+| Backend     | Env value          | Use case                                                                          |
+| ----------- | ------------------ | --------------------------------------------------------------------------------- |
+| Memory      | `memory` (default) | Single-instance, zero external dependencies                                       |
+| Redis       | `redis`            | 2–5 pods, shared cache across instances                                           |
+| Redis-aside | `redis-aside`      | 5+ pods, client-side caching with stampede protection — **requires Redis >= 7.0** |
+
+### Configuration
+
+```bash
+# Cache backend: memory (default), redis, or redis-aside
+CLIENT_CACHE_TYPE=memory
+
+# How long a cached client record is valid (default: 5m); must be > 0
+# Mutations always invalidate immediately, so this is only a fallback TTL.
+CLIENT_CACHE_TTL=5m
+
+# Client-side TTL for redis-aside mode only (default: 30s); must be > 0
+CLIENT_CACHE_CLIENT_TTL=30s
+
+# Client-side cache size per connection in MB for redis-aside mode only (default: 32MB)
+# Total memory per pod = cache_size × connections (~10 based on GOMAXPROCS) → default ~320MB
+CLIENT_CACHE_SIZE_PER_CONN=32
+```
+
+Redis-based backends also require the shared Redis settings:
+
+```bash
+REDIS_ADDR=localhost:6379
+REDIS_PASSWORD=
+REDIS_DB=0
+```
+
+### Multi-Pod Recommendation
+
+```bash
+# 2–5 pods: Redis shared cache
+CLIENT_CACHE_TYPE=redis
+REDIS_ADDR=redis-service:6379
+
+# 5+ pods or DDoS protection: redis-aside with client-side caching
+CLIENT_CACHE_TYPE=redis-aside
+REDIS_ADDR=redis-service:6379
+CLIENT_CACHE_CLIENT_TTL=30s
+CLIENT_CACHE_SIZE_PER_CONN=32 # Adjust based on available memory per pod
+```
+
+> **Note**: `redis-aside` uses RESP3 client-side caching for automatic invalidation across all pods and requires **Redis >= 7.0**. Memory usage per pod is `CLIENT_CACHE_SIZE_PER_CONN × ~10 connections` (default ~320MB).
+
+---
+
+## Token Cache
+
+`/oauth/tokeninfo` and every request protected by token-based auth call `GetAccessTokenByHash`, which hits the database on every validation. The token cache absorbs these lookups, reducing DB load significantly on high-traffic deployments.
+
+The token cache is **disabled by default** (`TOKEN_CACHE_ENABLED=false`). Enable it for production deployments with significant token validation traffic.
+
+### How It Works
+
+The cache uses a **cache-aside pattern**:
+
+1. On the first validation of a token hash, the DB is queried and the result is stored in cache with a TTL
+2. Subsequent validations within the TTL window are served from cache
+3. Token revocation, rotation, and status changes always **explicitly invalidate** the cache entry — the TTL is a fallback only
+
+### Cache Backends
+
+| Backend     | Env value          | Use case                                                                                   |
+| ----------- | ------------------ | ------------------------------------------------------------------------------------------ |
+| Memory      | `memory` (default) | Single-instance, zero external dependencies                                                |
+| Redis       | `redis`            | 2–5 pods, shared cache across instances                                                    |
+| Redis-aside | `redis-aside`      | 5+ pods, client-side caching with RESP3 real-time invalidation — **requires Redis >= 7.0** |
+
+### Configuration
+
+```bash
+# Enable token verification cache (default: false)
+TOKEN_CACHE_ENABLED=false
+
+# Cache backend: memory (default), redis, or redis-aside
+TOKEN_CACHE_TYPE=memory
+
+# Cache lifetime (default: 10h — matches JWT_EXPIRATION)
+# Revocation uses explicit cache invalidation; this TTL is a fallback for rare missed invalidations.
+TOKEN_CACHE_TTL=10h
+
+# Client-side TTL for redis-aside mode only (default: 1h)
+# RESP3 handles real-time invalidation; this TTL is a safety net for missed notifications.
+TOKEN_CACHE_CLIENT_TTL=1h
+
+# Client-side cache size per connection in MB for redis-aside mode only (default: 32MB)
+# Total memory per pod = cache_size × connections (~10 based on GOMAXPROCS) → default ~320MB
+TOKEN_CACHE_SIZE_PER_CONN=32
+```
+
+Redis-based backends also require the shared Redis settings:
+
+```bash
+REDIS_ADDR=localhost:6379
+REDIS_PASSWORD=
+REDIS_DB=0
+```
+
+### TTL Trade-offs
+
+| Setting                     | Behaviour                                                                       |
+| --------------------------- | ------------------------------------------------------------------------------- |
+| `TOKEN_CACHE_TTL=10h`       | Default — matches JWT expiry; cached tokens expire naturally alongside the JWT  |
+| `TOKEN_CACHE_CLIENT_TTL=1h` | redis-aside client-side TTL; RESP3 invalidation fires immediately on revocation |
+
+### Multi-Pod Recommendation
+
+```bash
+# Enable with Redis for multi-pod deployments
+TOKEN_CACHE_ENABLED=true
+TOKEN_CACHE_TYPE=redis
+REDIS_ADDR=redis-service:6379
+
+# Or redis-aside for real-time invalidation across all pods (requires Redis >= 7.0)
+TOKEN_CACHE_ENABLED=true
+TOKEN_CACHE_TYPE=redis-aside
+REDIS_ADDR=redis-service:6379
+TOKEN_CACHE_CLIENT_TTL=1h
+TOKEN_CACHE_SIZE_PER_CONN=32
+```
+
+> **Note**: `redis-aside` uses RESP3 client-side caching with **real-time invalidation** — when a token is revoked, all pods drop their client-side cache entry immediately via RESP3 push notifications. This requires **Redis >= 7.0**. Memory usage per pod is `TOKEN_CACHE_SIZE_PER_CONN × ~10 connections` (default ~320MB).
+
+---
+
 ## Rate Limiting
 
 AuthGate includes built-in rate limiting to protect against brute force attacks, credential stuffing, and API abuse. The rate limiting system is production-ready with support for both single-instance and distributed deployments.
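The eviction-then-fallback flow the token cache commits describe (on an unreadable cached value: evict the bad key, fall back to the DB, re-populate) can be sketched in isolation. The types below are toy stand-ins; `errInvalidValue` simulates `cache.ErrInvalidValue`, and a plain map stands in for the store:

```go
package main

import (
	"errors"
	"fmt"
)

var errInvalidValue = errors.New("cache: invalid value") // stand-in for cache.ErrInvalidValue

// tokenCache stores raw bytes so an unmarshal failure can be simulated.
type tokenCache struct{ m map[string][]byte }

func (c *tokenCache) get(key string) (string, error) {
	raw, ok := c.m[key]
	if !ok {
		return "", errors.New("cache miss")
	}
	if string(raw) == "corrupt" {
		return "", errInvalidValue // simulated unmarshal failure
	}
	return string(raw), nil
}

func (c *tokenCache) delete(key string) { delete(c.m, key) }

// getAccessTokenByHash sketches the flow: on ErrInvalidValue, delete the bad
// key so the next request re-populates the cache instead of hot-looping
// through the DB fallback on every call (real code also logs Delete errors
// with a masked token hash).
func getAccessTokenByHash(c *tokenCache, db map[string]string, hash string) (string, error) {
	v, err := c.get(hash)
	if err == nil {
		return v, nil
	}
	if errors.Is(err, errInvalidValue) {
		c.delete(hash) // evict the corrupted entry
	}
	tok, ok := db[hash] // DB fallback
	if !ok {
		return "", errors.New("token not found")
	}
	c.m[hash] = []byte(tok) // re-populate the cache
	return tok, nil
}

func main() {
	c := &tokenCache{m: map[string][]byte{"h1": []byte("corrupt")}}
	db := map[string]string{"h1": "token-record"}
	v, _ := getAccessTokenByHash(c, db, "h1")
	fmt.Println(v, string(c.m["h1"])) // prints "token-record token-record"
}
```

Without the eviction step, the corrupted entry would shadow the DB row forever and every validation would pay the fallback cost, which is exactly the hot-loop the commits fix.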

internal/bootstrap/bootstrap.go

Lines changed: 10 additions & 0 deletions
@@ -33,6 +33,8 @@ type Application struct {
 	UserCacheCloser        func() error
 	ClientCountCache       core.Cache[int64]
 	ClientCountCacheCloser func() error
+	ClientCache            core.Cache[models.OAuthApplication]
+	ClientCacheCloser      func() error
 	TokenCache             core.Cache[models.AccessToken]
 	TokenCacheCloser       func() error
 	RateLimitRedisClient   *redis.Client
@@ -116,6 +118,12 @@ func (app *Application) initializeInfrastructure(ctx context.Context) error {
 		return err
 	}
 
+	// Client Cache (caches OAuth client lookups by client_id)
+	app.ClientCache, app.ClientCacheCloser, err = initializeClientCache(ctx, app.Config)
+	if err != nil {
+		return err
+	}
+
 	// Token Cache
 	app.TokenCache, app.TokenCacheCloser, err = initializeTokenCache(ctx, app.Config)
 	if err != nil {
@@ -154,6 +162,7 @@ func (app *Application) initializeBusinessLayer() {
 		app.MetricsRecorder,
 		app.UserCache,
 		app.ClientCountCache,
+		app.ClientCache,
 		app.TokenProvider,
 		app.TokenCache,
 	)
@@ -206,6 +215,7 @@ func (app *Application) startWithGracefulShutdown() {
 	addCacheCleanupJob(m, app.MetricsCache, app.Config)
 	addUserCacheCleanupJob(m, app.UserCache, app.Config)
 	addClientCountCacheCleanupJob(m, app.ClientCountCache, app.Config)
+	addClientCacheCleanupJob(m, app.ClientCache, app.Config)
 	addTokenCacheCleanupJob(m, app.TokenCache, app.Config)
 	addDatabaseShutdownJob(m, app.DB, app.Config)
 	addAuditLogCleanupJob(m, app.Config, app.AuditService)

internal/bootstrap/cache.go

Lines changed: 14 additions & 0 deletions
@@ -131,6 +131,20 @@ func initializeTokenCache(
 	})
 }
 
+// initializeClientCache initializes the OAuth client cache (always enabled, defaults to memory)
+func initializeClientCache(
+	ctx context.Context,
+	cfg *config.Config,
+) (core.Cache[models.OAuthApplication], func() error, error) {
+	return initializeCache[models.OAuthApplication](ctx, cfg, cacheOpts{
+		cacheType:   cfg.ClientCacheType,
+		keyPrefix:   "authgate:clients:",
+		clientTTL:   cfg.ClientCacheClientTTL,
+		sizePerConn: cfg.ClientCacheSizePerConn,
+		label:       "Client",
+	})
+}
+
 // initializeUserCache initializes the user cache (always enabled, defaults to memory)
 func initializeUserCache(
 	ctx context.Context,

internal/bootstrap/server.go

Lines changed: 12 additions & 0 deletions
@@ -264,6 +264,18 @@ func addClientCountCacheCleanupJob(
 	addNamedCacheShutdownJob(m, "client count cache", clientCountCache.Close, cfg.CacheCloseTimeout)
 }
 
+// addClientCacheCleanupJob adds OAuth client cache cleanup on shutdown
+func addClientCacheCleanupJob(
+	m *graceful.Manager,
+	clientCache core.Cache[models.OAuthApplication],
+	cfg *config.Config,
+) {
+	if clientCache == nil {
+		return
+	}
+	addNamedCacheShutdownJob(m, "client cache", clientCache.Close, cfg.CacheCloseTimeout)
+}
+
 // addTokenCacheCleanupJob adds token cache cleanup on shutdown
 func addTokenCacheCleanupJob(
 	m *graceful.Manager,

internal/bootstrap/services.go

Lines changed: 20 additions & 4 deletions
@@ -27,6 +27,7 @@ func initializeServices(
 	prometheusMetrics core.Recorder,
 	userCache core.Cache[models.User],
 	clientCountCache core.Cache[int64],
+	clientCache core.Cache[models.OAuthApplication],
 	tokenProvider core.TokenProvider,
 	tokenCache core.Cache[models.AccessToken],
 ) serviceSet {
@@ -45,7 +46,18 @@
 		userCache,
 		cfg.UserCacheTTL,
 	)
-	deviceService := services.NewDeviceService(db, cfg, auditService, prometheusMetrics)
+	clientService := services.NewClientService(
+		db, auditService,
+		clientCountCache, cfg.ClientCountCacheTTL,
+		clientCache, cfg.ClientCacheTTL,
+	)
+	deviceService := services.NewDeviceService(
+		db,
+		cfg,
+		auditService,
+		prometheusMetrics,
+		clientService,
+	)
 	tokenService := services.NewTokenService(
 		db,
 		cfg,
@@ -54,11 +66,15 @@
 		auditService,
 		prometheusMetrics,
 		tokenCache,
+		clientService,
 	)
-	clientService := services.NewClientService(
-		db, auditService, clientCountCache, cfg.ClientCountCacheTTL,
+	authorizationService := services.NewAuthorizationService(
+		db,
+		cfg,
+		auditService,
+		tokenService,
+		clientService,
 	)
-	authorizationService := services.NewAuthorizationService(db, cfg, auditService, tokenService)
 	dashboardService := services.NewDashboardService(db, auditService)
 
 	return serviceSet{

internal/config/config.go

Lines changed: 30 additions & 0 deletions
@@ -185,6 +185,12 @@ type Config struct {
 	ClientCountCacheClientTTL   time.Duration // CLIENT_COUNT_CACHE_CLIENT_TTL for redis-aside (default: 10m)
 	ClientCountCacheSizePerConn int           // CLIENT_COUNT_CACHE_SIZE_PER_CONN for redis-aside in MB (default: 32MB)
 
+	// Client Cache settings (caches OAuth client lookups by client_id)
+	ClientCacheType        string        // CLIENT_CACHE_TYPE: memory|redis|redis-aside (default: memory)
+	ClientCacheTTL         time.Duration // CLIENT_CACHE_TTL: cache lifetime (default: 5m)
+	ClientCacheClientTTL   time.Duration // CLIENT_CACHE_CLIENT_TTL for redis-aside client-side TTL (default: 30s)
+	ClientCacheSizePerConn int           // CLIENT_CACHE_SIZE_PER_CONN: client-side cache size per connection in MB for redis-aside (default: 32MB)
+
 	// Token Cache settings (reduces DB queries for token verification)
 	TokenCacheEnabled bool   // TOKEN_CACHE_ENABLED: enable token verification cache (default: false)
 	TokenCacheType    string // TOKEN_CACHE_TYPE: memory|redis|redis-aside (default: memory)
@@ -389,6 +395,12 @@ func Load() *Config {
 			32,
 		), // 32MB default
 
+		// Client Cache settings
+		ClientCacheType:        getEnv("CLIENT_CACHE_TYPE", CacheTypeMemory),
+		ClientCacheTTL:         getEnvDuration("CLIENT_CACHE_TTL", 5*time.Minute),
+		ClientCacheClientTTL:   getEnvDuration("CLIENT_CACHE_CLIENT_TTL", 30*time.Second),
+		ClientCacheSizePerConn: getEnvInt("CLIENT_CACHE_SIZE_PER_CONN", 32), // 32MB default
+
 		// Token Cache settings
 		TokenCacheEnabled: getEnvBool("TOKEN_CACHE_ENABLED", false),
 		TokenCacheType:    getEnv("TOKEN_CACHE_TYPE", CacheTypeMemory),
@@ -598,6 +610,24 @@ func (c *Config) Validate() error {
 		)
 	}
 
+	// Client Cache validation
+	if err := validateCacheType("CLIENT_CACHE_TYPE", c.ClientCacheType, c.RedisAddr); err != nil {
+		return err
+	}
+	if c.ClientCacheTTL <= 0 {
+		return fmt.Errorf(
+			"CLIENT_CACHE_TTL must be a positive duration (got %s)",
+			c.ClientCacheTTL,
+		)
+	}
+	if c.ClientCacheType == CacheTypeRedisAside && c.ClientCacheClientTTL <= 0 {
+		return fmt.Errorf(
+			"CLIENT_CACHE_CLIENT_TTL must be a positive duration when CLIENT_CACHE_TYPE=%q (got %s)",
+			CacheTypeRedisAside,
+			c.ClientCacheClientTTL,
+		)
+	}
+
 	// Token cache validation (only when enabled)
 	if c.TokenCacheEnabled {
 		if err := validateCacheType("TOKEN_CACHE_TYPE", c.TokenCacheType, c.RedisAddr); err != nil {
