feat: Upstash Redis shared caching + cache key contamination fixes#232

Merged
koala73 merged 8 commits into main from feat/redis-shared-caching
Feb 23, 2026
Conversation

koala73 (Owner) commented Feb 22, 2026

Summary

  • Add Upstash Redis L2 shared caching (getCachedJson/setCachedJson) to 28 RPC handlers across all service domains — enables cross-instance cache sharing on Vercel serverless
  • Fix 10 P1 cache key contamination bugs where under-specified keys caused wrong data for filtered/parameterized requests
  • Restructure list-internet-outages to cache-then-filter pattern (cache stores unfiltered data, filters always applied after read)
  • Add write_lock mutex to PersistentCache in main.rs to prevent desktop cache write-race conditions
  • Document FMP (Financial Modeling Prep) as Yahoo Finance fallback TODO in market/v1/_shared.ts
  • Add Sentry noise filters for 5 non-actionable error patterns
  • Fix settings stale key status and Yahoo gate concurrency issues

Cache key contamination fixes (P1)

| Handler | Issue | Fix |
|---|---|---|
| list-internet-outages | Cached data returned before filters applied | Cache-then-filter restructure |
| list-cyber-threats | Key omitted type/source/minSeverity | Added to key |
| get-displacement-summary | Key omitted countryLimit/flowLimit | Added to key |
| get-fred-series | Key omitted limit | Added to key |
| list-world-bank-indicators | Key omitted year | Added to key |
| list-prediction-markets | Key omitted pageSize | Added to key |
| list-arxiv-papers | Key omitted pageSize | Added to key |
| list-trending-repos | Key omitted pageSize | Added to key |
| list-unrest-events | Key omitted timeRange | Added to key |
| list-acled-events | Key omitted timeRange.end | Added to key |
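Every one of these fixes has the same shape: the cache key must encode every request parameter that changes the response. A minimal sketch of the fixed pattern, using a hypothetical list-cyber-threats request type and an in-memory Map standing in for the PR's Upstash-backed getCachedJson/setCachedJson helpers (names illustrative, not the actual handler code):

```typescript
// In-memory stand-in for the Upstash Redis helpers used by the PR;
// the real getCachedJson/setCachedJson are network-backed.
const store = new Map<string, string>();

async function getCachedJson<T>(key: string): Promise<T | null> {
  const raw = store.get(key);
  return raw === undefined ? null : (JSON.parse(raw) as T);
}

async function setCachedJson(key: string, value: unknown): Promise<void> {
  store.set(key, JSON.stringify(value));
}

// Hypothetical request shape for list-cyber-threats.
interface ListCyberThreatsRequest {
  type?: string;
  source?: string;
  minSeverity?: number;
}

// The fix: fold type/source/minSeverity into the key so two differently
// filtered requests can never collide on one cache entry.
function cyberThreatsCacheKey(req: ListCyberThreatsRequest): string {
  return [
    'cyber-threats',
    req.type ?? 'all',
    req.source ?? 'all',
    req.minSeverity ?? 0,
  ].join(':');
}
```

Before the fix, an under-specified key meant whichever request populated the cache first dictated everyone else's response until expiry.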

Test plan

  • tsc --noEmit passes
  • Verify cached responses return correct data for filtered requests
  • Verify Redis cache hit/miss via Upstash dashboard
  • Verify desktop persistent cache survives concurrent writes

Filter dynamic import alt phrasing, script parse errors, maplibre
style/WebGL crashes, and CustomEvent promise rejections. Also fix
beforeSend to catch short Firefox null messages like "E is null".
P1: Replace async background thread cache write with synchronous fs::write
to prevent out-of-order writes and dirty flag cleared before persistence.

P2: Add WorldMonitorTab.refresh() called after loadDesktopSecrets() so
the API key badge reflects actual keychain state.

P3: Replace timestamp-based Yahoo gate with promise queue to ensure
sequential execution under concurrent callers.
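The promise-queue gate described in P3 can be sketched as follows (names are illustrative, not the PR's actual code): each caller chains onto the previous caller's completion, which a timestamp comparison cannot guarantee when callers arrive concurrently.

```typescript
// Serialize async callers: each task starts only after the previous
// task settles, replacing a racy timestamp-based gate.
let gate: Promise<void> = Promise.resolve();

function withGate<T>(task: () => Promise<T>): Promise<T> {
  const run = gate.then(task);
  // Advance the gate on success AND failure so one rejected task
  // does not wedge every later caller.
  gate = run.then(() => undefined, () => undefined);
  return run;
}
```

A Yahoo fetch would then be wrapped as `withGate(() => fetchQuote(symbol))`, guaranteeing at most one in-flight request regardless of how many callers race.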
…e key contamination

- Add Redis L2 cache (getCachedJson/setCachedJson) to 28 RPC handlers
  across all service domains (market, conflict, cyber, economic, etc.)
- Fix 10 P1 cache key contamination bugs where under-specified keys
  caused cross-request data pollution (e.g. filtered requests returning
  unfiltered cached data)
- Restructure list-internet-outages to cache-then-filter pattern so
  country/timeRange filters always apply after cache read
- Add write_lock mutex to PersistentCache in main.rs to prevent
  desktop cache write-race conditions
- Document FMP (Financial Modeling Prep) as Yahoo Finance fallback TODO
  in market/v1/_shared.ts
vercel bot commented Feb 22, 2026

The latest updates on your projects.

| Project | Deployment | Actions | Updated (UTC) |
|---|---|---|---|
| worldmonitor | Ready | Preview, Comment | Feb 23, 2026 6:06am |
| worldmonitor-finance | Ready | Preview, Comment | Feb 23, 2026 6:06am |
| worldmonitor-startup | Ready | Preview, Comment | Feb 23, 2026 6:06am |

chatgpt-codex-connector bot left a comment:
💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: dbace4e7fd


list-tech-events.ts (diff context excerpt):

```ts
): Promise<ListTechEventsResponse> {
  try {
    return await fetchTechEvents(req);
// …
const cacheKey = `${REDIS_CACHE_KEY}:${req.type || 'all'}:${req.mappable ? 1 : 0}:${req.days || 0}`;
```


P1: Include limit in tech-events cache key

listTechEvents now caches by type, mappable, and days, but fetchTechEvents applies req.limit before returning. If a low-limit request (for example limit=10) populates Redis first, later requests with a higher limit will incorrectly receive only that truncated subset until cache expiry, which is a data-integrity regression introduced by this cache layer.


get-pizzint-status.ts (diff context excerpt):

```ts
}>;
};
if (raw.success && raw.data) {
// …
if (!resp.ok) throw new Error(`PizzINT API returned ${resp.status}`);
```


P1: Keep GDELT path resilient to PizzINT upstream failures

Throwing immediately on non-OK PizzINT responses causes the handler to exit before the includeGdelt branch runs, so getPizzintStatus({ includeGdelt: true }) now fails entirely whenever only the PizzINT endpoint is degraded. This regresses behavior from partial-success (GDELT still available) to full failure for a single-upstream outage.


koala73 (Owner, Author) left a comment:

Review — Codex Findings Verified

Verified both findings against the current diff. Both are real bugs.


1. [P1] list-tech-events.ts — Cache key missing limit → data truncation

Confirmed. fetchTechEvents() applies events.slice(0, limit) at line 330-331 before returning. The cache key at line 356 omits limit:

```ts
const cacheKey = `${REDIS_CACHE_KEY}:${req.type || 'all'}:${req.mappable ? 1 : 0}:${req.days || 0}`;
//                                                                                   ^ no limit
```

Scenario: request with limit=10 populates cache → subsequent limit=50 request returns only 10 events from cache.

Recommended fix: Cache-then-filter pattern (consistent with list-internet-outages in this same PR):

  • Call fetchTechEvents with limit=0 (fetch all)
  • Cache the full unfiltered result
  • Apply limit slice after cache read

This way one cache entry serves all limit values. The tech events dataset is small enough that caching all is fine.
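Under those assumptions, the recommendation might look roughly like this (handler shape and types are hypothetical sketches of the discussion above, with a Map standing in for Redis):

```typescript
// Cache-then-filter sketch: one unfiltered cache entry serves every
// limit value, because the slice happens after the cache read.
const redis = new Map<string, string>();

interface TechEvent { id: number; title: string }
interface ListTechEventsRequest { limit?: number }

// Stand-in for the upstream fetch; limit=0 means "fetch everything".
async function fetchTechEvents(_req: ListTechEventsRequest): Promise<TechEvent[]> {
  return [
    { id: 1, title: 'a' },
    { id: 2, title: 'b' },
    { id: 3, title: 'c' },
  ];
}

async function listTechEvents(req: ListTechEventsRequest): Promise<TechEvent[]> {
  const key = 'tech-events:all'; // limit deliberately NOT part of the key
  const cached = redis.get(key);
  let events: TechEvent[];
  if (cached) {
    events = JSON.parse(cached) as TechEvent[];
  } else {
    events = await fetchTechEvents({ ...req, limit: 0 }); // fetch all
    redis.set(key, JSON.stringify(events)); // cache the full result
  }
  // Apply limit only after the cache read, never before the write.
  return req.limit && req.limit > 0 ? events.slice(0, req.limit) : events;
}
```

With this shape, a limit=10 request can no longer poison the cache for a later limit=50 request.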


2. [P1] get-pizzint-status.ts — PizzINT throw kills GDELT path

Confirmed. The old code wrapped PizzINT in a try-catch — if PizzINT returned 500, execution continued to the GDELT fetch. The new code (line 43-48) removes the try-catch and throws on non-OK:

```ts
{   // bare block — no try-catch
    const resp = await fetch(PIZZINT_API, ...);
    if (!resp.ok) throw new Error(`PizzINT API returned ${resp.status}`);
    // ...
}
// GDELT fetch at line 118 — never reached if PizzINT threw
```

When includeGdelt: true, a PizzINT outage now causes total failure — no GDELT tension pairs returned either. This regresses from partial-success to full-failure.

The comment says "throw on failure so sidecar returns non-OK and the runtime fetch patch falls back to cloud" — but this is too aggressive when GDELT is independently available.

Recommended fix: Restore try-catch around PizzINT:

```ts
try {
    const resp = await fetch(PIZZINT_API, ...);
    if (!resp.ok) throw new Error(`PizzINT API returned ${resp.status}`);
    // ... parse pizzint
} catch { /* pizzint unavailable — continue to GDELT */ }
```

The cache already guards against caching incomplete responses (if (pizzint) { setCachedJson(...) } at line 155), so a PizzINT-down response with only GDELT data won't pollute the cache. The sidecar fallback-to-cloud behavior can be implemented at the caller level instead of inside this handler.


Both are data-integrity regressions that should be fixed before merge.

…sion

- tech-events: fetch with limit=0 and cache full result, apply limit
  slice after cache read to prevent low-limit requests poisoning cache
- pizzint: restore try-catch around PizzINT fetch so GDELT tension
  pairs are still returned when PizzINT API is down
/api/register-interest must reach cloud without a WorldMonitor API key,
otherwise desktop users can never register (circular dependency).
Keep PR's in-memory PersistentCache and background_color fix for settings window.
koala73 merged commit 1014168 into main Feb 23, 2026
5 checks passed