Commit bb2f695 (1 parent: 872864c)
ZhuoranYang and claude committed

docs: deploy T4/T5 rollout status callouts

Added rollout status notes for Dream Consolidation (T4) and Team Memory Sync (T5) in the memory hierarchy post.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

2 files changed: 28 additions & 2 deletions

static/inside-claude-code/17-memory-hierarchy.html (26 additions & 0 deletions)
@@ -1035,6 +1035,19 @@ <h3 class="anchored" data-anchor-id="coalescing-and-trailing-runs">Coalescing an
 <section id="tier-4-dream-consolidation-nightly-batch-archival" class="level2">
 <h2 class="anchored" data-anchor-id="tier-4-dream-consolidation-nightly-batch-archival">Tier 4: Dream Consolidation – Nightly Batch Archival</h2>
 <p>The fourth tier is <span class="kw-slate">autoDream</span> – a background consolidation pass that reviews memories accumulated across multiple sessions and reorganizes them. The name is apt: like sleep consolidation in neuroscience, it operates when the agent is not actively processing tasks, synthesizing fragmented observations into coherent long-term knowledge.</p>
+<div class="callout callout-style-default callout-note callout-titled">
+<div class="callout-header d-flex align-content-center">
+<div class="callout-icon-container">
+<i class="callout-icon"></i>
+</div>
+<div class="callout-title-container flex-fill">
+Note
+</div>
+</div>
+<div class="callout-body-container callout-body">
+<p><strong>Rollout status:</strong> T4 Dream Consolidation is fully implemented in v2.1.88 but in progressive rollout. It is gated by a server-side GrowthBook flag (<code>tengu_onyx_plover</code>) that defaults to off. Users can opt in via the <code>autoDreamEnabled</code> setting in settings.json.</p>
+</div>
+</div>
 <section id="triple-gate" class="level3">
 <h3 class="anchored" data-anchor-id="triple-gate">Triple Gate</h3>
 <p>AutoDream fires only when all three gates pass, evaluated in cheapest-first order to minimize per-turn cost:</p>
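The cheapest-first gate ordering can be sketched as follows. This is a minimal illustration, not the actual implementation: `GateInput` and `shouldConsolidate` are hypothetical names, and only the `DEFAULTS` values (24 hours, 5 sessions) come from the source.

```typescript
// Sketch of the cheapest-first triple gate. Gates are checked in order of
// increasing cost: one stat call, then a (throttled) directory scan, then a
// lock probe. Names here are illustrative.

interface GateInput {
  hoursSinceLastConsolidation: number; // derived from a stat on the lock file
  recentSessionCount: number;          // transcripts with mtime > lastConsolidatedAt
  lockHeldByLiveProcess: boolean;      // lock file PID is alive and lock is fresh
}

const DEFAULTS = { minHours: 24, minSessions: 5 };

function shouldConsolidate(g: GateInput): boolean {
  if (g.hoursSinceLastConsolidation < DEFAULTS.minHours) return false; // time gate
  if (g.recentSessionCount < DEFAULTS.minSessions) return false;       // session gate
  if (g.lockHeldByLiveProcess) return false;                           // lock gate
  return true;
}
```

The ordering matters because the time gate fails on most turns, so the more expensive session scan and lock probe are rarely reached.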
@@ -1068,6 +1081,19 @@ <h3 class="anchored" data-anchor-id="rollback-on-failure">Rollback on Failure</h
 <section id="tier-5-team-memory-sync-the-distributed-cache" class="level2">
 <h2 class="anchored" data-anchor-id="tier-5-team-memory-sync-the-distributed-cache">Tier 5: Team Memory Sync – The Distributed Cache</h2>
 <p>The outermost tier is <span class="kw-slate">team memory sync</span> (<code>teamMemorySync/index.ts</code>), a server-backed system that shares memories across all authenticated organization members working on the same repository. This is the <span class="kw-sage">distributed shared cache</span> of the hierarchy – the slowest, most broadly scoped, and most complex tier.</p>
+<div class="callout callout-style-default callout-note callout-titled">
+<div class="callout-header d-flex align-content-center">
+<div class="callout-icon-container">
+<i class="callout-icon"></i>
+</div>
+<div class="callout-title-container flex-fill">
+Note
+</div>
+</div>
+<div class="callout-body-container callout-body">
+<p><strong>Rollout status:</strong> T5 Team Memory Sync is fully implemented in v2.1.88 but in progressive rollout. It is double-gated: a compile-time <code>feature('TEAMMEM')</code> flag (enabled in this build) and a server-side GrowthBook flag (<code>tengu_herring_clock</code>) that defaults to off. It also requires OAuth authentication and a GitHub remote, limiting it to Pro/Team/Enterprise users. The codebase contains production bug fixes (e.g., a device that emitted 167K push events over 2.5 days), confirming it has been active for early access users.</p>
+</div>
+</div>
 <section id="sync-semantics" class="level3">
 <h3 class="anchored" data-anchor-id="sync-semantics">Sync Semantics</h3>
 <p>The API contract is built around a single endpoint:</p>
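The pull half of the sync semantics this commit documents (pull is server-wins; deletions do not propagate) can be sketched minimally, assuming entries are modeled as a key-to-content map. `MemoryStore` and `applyPull` are illustrative names, not the real module's API.

```typescript
// Sketch of server-wins pull semantics (illustrative names).
type MemoryStore = Map<string, string>; // entry key -> file content

// Pull is server-wins: every remote entry unconditionally overwrites the
// local copy. Local-only keys are left alone – deletions do not propagate,
// so an entry deleted locally but still on the server reappears on pull.
function applyPull(local: MemoryStore, remote: MemoryStore): void {
  remote.forEach((content, key) => local.set(key, content));
}
```

Push takes the opposite stance (local-wins-on-conflict), which is why the two directions can be implemented with such different amounts of machinery.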

static/inside-claude-code/search.json (2 additions & 2 deletions)
@@ -2426,7 +2426,7 @@
 "href": "17-memory-hierarchy.html#tier-4-dream-consolidation-nightly-batch-archival",
 "title": "Memory Hierarchy",
 "section": "Tier 4: Dream Consolidation – Nightly Batch Archival",
-"text": "Tier 4: Dream Consolidation – Nightly Batch Archival\nThe fourth tier is autoDream – a background consolidation pass that reviews memories accumulated across multiple sessions and reorganizes them. The name is apt: like sleep consolidation in neuroscience, it operates when the agent is not actively processing tasks, synthesizing fragmented observations into coherent long-term knowledge.\n\nTriple Gate\nAutoDream fires only when all three gates pass, evaluated in cheapest-first order to minimize per-turn cost:\n\nTime gate: hours since last consolidation &gt;= minHours (default: 24 hours). This requires only one stat call on the lock file.\nSession gate: number of transcript files with mtime &gt; lastConsolidatedAt &gt;= minSessions (default: 5 sessions). This requires a directory scan, so it is gated behind a 10-minute scan throttle to avoid repeated scanning when the time gate passes but the session gate does not.\nLock gate: no other process is mid-consolidation. The lock file (~/.claude/projects/&lt;slug&gt;/memory/.consolidate-lock) stores the holder’s PID. A stale-PID reclamation mechanism handles crashes: if the PID is dead or the lock is older than 1 hour, the next process reclaims it.\n\nconst DEFAULTS: AutoDreamConfig = {\n minHours: 24,\n minSessions: 5,\n}\n\n\nConsolidation Prompt\nThe consolidation prompt (consolidationPrompt.ts) instructs the forked agent through four phases:\n\nOrient – ls the memory directory, read MEMORY.md, skim existing topic files\nGather recent signal – review daily logs, check for drifted memories, grep transcripts narrowly\nConsolidate – merge new signal into existing topic files (not create duplicates), convert relative dates to absolute, delete contradicted facts\nPrune and index – keep MEMORY.md under 200 lines and 25KB, demote verbose entries, resolve contradictions\n\nThe agent operates under the same createAutoMemCanUseTool permission sandbox as T3 extraction. Bash is restricted to read-only commands – the prompt explicitly states this to prevent the agent from probing.\n\n\nRollback on Failure\nIf the forked agent fails, the lock file’s mtime is rolled back to its pre-acquisition value via rollbackConsolidationLock. This ensures the time gate passes again on the next turn, rather than the failure appearing as a successful consolidation that delays the next attempt by 24 hours.",
+"text": "Tier 4: Dream Consolidation – Nightly Batch Archival\nThe fourth tier is autoDream – a background consolidation pass that reviews memories accumulated across multiple sessions and reorganizes them. The name is apt: like sleep consolidation in neuroscience, it operates when the agent is not actively processing tasks, synthesizing fragmented observations into coherent long-term knowledge.\n\n\n\n\n\n\nNote\n\n\n\nRollout status: T4 Dream Consolidation is fully implemented in v2.1.88 but in progressive rollout. It is gated by a server-side GrowthBook flag (tengu_onyx_plover) that defaults to off. Users can opt in via the autoDreamEnabled setting in settings.json.\n\n\n\nTriple Gate\nAutoDream fires only when all three gates pass, evaluated in cheapest-first order to minimize per-turn cost:\n\nTime gate: hours since last consolidation &gt;= minHours (default: 24 hours). This requires only one stat call on the lock file.\nSession gate: number of transcript files with mtime &gt; lastConsolidatedAt &gt;= minSessions (default: 5 sessions). This requires a directory scan, so it is gated behind a 10-minute scan throttle to avoid repeated scanning when the time gate passes but the session gate does not.\nLock gate: no other process is mid-consolidation. The lock file (~/.claude/projects/&lt;slug&gt;/memory/.consolidate-lock) stores the holder’s PID. A stale-PID reclamation mechanism handles crashes: if the PID is dead or the lock is older than 1 hour, the next process reclaims it.\n\nconst DEFAULTS: AutoDreamConfig = {\n minHours: 24,\n minSessions: 5,\n}\n\n\nConsolidation Prompt\nThe consolidation prompt (consolidationPrompt.ts) instructs the forked agent through four phases:\n\nOrient – ls the memory directory, read MEMORY.md, skim existing topic files\nGather recent signal – review daily logs, check for drifted memories, grep transcripts narrowly\nConsolidate – merge new signal into existing topic files (not create duplicates), convert relative dates to absolute, delete contradicted facts\nPrune and index – keep MEMORY.md under 200 lines and 25KB, demote verbose entries, resolve contradictions\n\nThe agent operates under the same createAutoMemCanUseTool permission sandbox as T3 extraction. Bash is restricted to read-only commands – the prompt explicitly states this to prevent the agent from probing.\n\n\nRollback on Failure\nIf the forked agent fails, the lock file’s mtime is rolled back to its pre-acquisition value via rollbackConsolidationLock. This ensures the time gate passes again on the next turn, rather than the failure appearing as a successful consolidation that delays the next attempt by 24 hours.",
 "crumbs": [
 "Series Home",
 "III. Context Engineering",
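The stale-PID reclamation described in the indexed text above (a lock whose holder PID is dead, or which is over an hour old, is reclaimed by the next process) can be sketched as a pure predicate. `LockInfo`, `canReclaim`, and the injected liveness check are illustrative names; the source does not show the real signatures.

```typescript
// Sketch of stale-PID lock reclamation (illustrative names).
// The lock file stores the holder's PID; a lock is reclaimable when that
// PID is dead or the lock file is older than one hour.

interface LockInfo {
  pid: number;     // PID recorded in .consolidate-lock
  mtimeMs: number; // lock file modification time
}

const STALE_LOCK_MS = 60 * 60 * 1000; // 1 hour

function canReclaim(
  lock: LockInfo,
  nowMs: number,
  isPidAlive: (pid: number) => boolean,
): boolean {
  if (!isPidAlive(lock.pid)) return true;                // holder crashed
  if (nowMs - lock.mtimeMs > STALE_LOCK_MS) return true; // lock went stale
  return false;
}
```

In a real Node.js process the liveness check is commonly implemented with `process.kill(pid, 0)` inside a try/catch (signal 0 probes existence without killing); injecting it as a parameter keeps the predicate testable.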
@@ -2438,7 +2438,7 @@
 "href": "17-memory-hierarchy.html#tier-5-team-memory-sync-the-distributed-cache",
 "title": "Memory Hierarchy",
 "section": "Tier 5: Team Memory Sync – The Distributed Cache",
-"text": "Tier 5: Team Memory Sync – The Distributed Cache\nThe outermost tier is team memory sync (teamMemorySync/index.ts), a server-backed system that shares memories across all authenticated organization members working on the same repository. This is the distributed shared cache of the hierarchy – the slowest, most broadly scoped, and most complex tier.\n\nSync Semantics\nThe API contract is built around a single endpoint:\nGET /api/claude_code/team_memory?repo={owner/repo} → pull\nGET ...?repo={owner/repo}&view=hashes → checksums only\nPUT /api/claude_code/team_memory?repo={owner/repo} → push (upsert)\nThe semantics are deliberately asymmetric:\n\nPull is server-wins: remote entries overwrite local files. If a teammate pushed a correction, the next pull replaces the local version unconditionally.\nPush is local-wins-on-conflict: if a 412 (Precondition Failed) occurs during push, the system probes server checksums, recomputes the delta excluding keys where the teammate’s push matches ours, and retries. This preserves the local user’s active edit.\nDeletions do not propagate: deleting a local file does not remove it from the server; the next pull restores it locally.\n\n\n\nDelta Upload and Batching\nPush does not upload all local files. It computes a delta by comparing sha256:&lt;hex&gt; content hashes of local files against serverChecksums (populated from the server’s entryChecksums response). Only keys whose hash differs are included in the PUT. This is analogous to a cache write-back policy: only dirty lines are flushed.\nLarge deltas are split into batches under a 200KB body-size cap (MAX_PUT_BODY_BYTES) to stay under the API gateway’s limit. Each batch is an independent PUT with upsert semantics – if batch N fails, batches 1..N-1 are already committed. The serverChecksums map is updated after each successful batch, so a retry naturally resumes from the uncommitted tail.\n\n\nSecret Scanning\nThe upload format is JSON, not the local markdown files directly. Memory entries are serialized into a JSON payload before transmission, which decouples the on-disk representation from the wire protocol and allows the server to store and index entries in a structured format.\nBefore any file is uploaded, it passes through a secret scanner (secretScanner.ts) using patterns derived from gitleaks. Files containing detected secrets are silently excluded from the upload – they never leave the machine. Only the rule ID (not the secret value or file path) is logged for analytics.\nconst secretMatches = scanForSecrets(content)\nif (secretMatches.length &gt; 0) {\n skippedSecrets.push({\n path: relPath,\n ruleId: firstMatch.ruleId,\n label: firstMatch.label,\n })\n return // file excluded from upload\n}\n\n\nConflict Resolution\nOn a 412 conflict, the push logic executes a lightweight resolution cycle:\n\nProbe GET ?view=hashes to refresh per-key checksums (no bodies – saves bandwidth)\nRecompute the delta against the refreshed checksums (keys where a teammate’s concurrent push matches ours are naturally excluded)\nRetry the PUT with the tighter delta\n\nThis cycle repeats up to MAX_CONFLICT_RETRIES = 2 times. The probe-and-recompute approach avoids full content downloads during conflict resolution – a significant optimization when the team memory contains hundreds of kilobytes of content.",
+"text": "Tier 5: Team Memory Sync – The Distributed Cache\nThe outermost tier is team memory sync (teamMemorySync/index.ts), a server-backed system that shares memories across all authenticated organization members working on the same repository. This is the distributed shared cache of the hierarchy – the slowest, most broadly scoped, and most complex tier.\n\n\n\n\n\n\nNote\n\n\n\nRollout status: T5 Team Memory Sync is fully implemented in v2.1.88 but in progressive rollout. It is double-gated: a compile-time feature('TEAMMEM') flag (enabled in this build) and a server-side GrowthBook flag (tengu_herring_clock) that defaults to off. It also requires OAuth authentication and a GitHub remote, limiting it to Pro/Team/Enterprise users. The codebase contains production bug fixes (e.g., a device that emitted 167K push events over 2.5 days), confirming it has been active for early access users.\n\n\n\nSync Semantics\nThe API contract is built around a single endpoint:\nGET /api/claude_code/team_memory?repo={owner/repo} → pull\nGET ...?repo={owner/repo}&view=hashes → checksums only\nPUT /api/claude_code/team_memory?repo={owner/repo} → push (upsert)\nThe semantics are deliberately asymmetric:\n\nPull is server-wins: remote entries overwrite local files. If a teammate pushed a correction, the next pull replaces the local version unconditionally.\nPush is local-wins-on-conflict: if a 412 (Precondition Failed) occurs during push, the system probes server checksums, recomputes the delta excluding keys where the teammate’s push matches ours, and retries. This preserves the local user’s active edit.\nDeletions do not propagate: deleting a local file does not remove it from the server; the next pull restores it locally.\n\n\n\nDelta Upload and Batching\nPush does not upload all local files. It computes a delta by comparing sha256:&lt;hex&gt; content hashes of local files against serverChecksums (populated from the server’s entryChecksums response). Only keys whose hash differs are included in the PUT. This is analogous to a cache write-back policy: only dirty lines are flushed.\nLarge deltas are split into batches under a 200KB body-size cap (MAX_PUT_BODY_BYTES) to stay under the API gateway’s limit. Each batch is an independent PUT with upsert semantics – if batch N fails, batches 1..N-1 are already committed. The serverChecksums map is updated after each successful batch, so a retry naturally resumes from the uncommitted tail.\n\n\nSecret Scanning\nThe upload format is JSON, not the local markdown files directly. Memory entries are serialized into a JSON payload before transmission, which decouples the on-disk representation from the wire protocol and allows the server to store and index entries in a structured format.\nBefore any file is uploaded, it passes through a secret scanner (secretScanner.ts) using patterns derived from gitleaks. Files containing detected secrets are silently excluded from the upload – they never leave the machine. Only the rule ID (not the secret value or file path) is logged for analytics.\nconst secretMatches = scanForSecrets(content)\nif (secretMatches.length &gt; 0) {\n skippedSecrets.push({\n path: relPath,\n ruleId: firstMatch.ruleId,\n label: firstMatch.label,\n })\n return // file excluded from upload\n}\n\n\nConflict Resolution\nOn a 412 conflict, the push logic executes a lightweight resolution cycle:\n\nProbe GET ?view=hashes to refresh per-key checksums (no bodies – saves bandwidth)\nRecompute the delta against the refreshed checksums (keys where a teammate’s concurrent push matches ours are naturally excluded)\nRetry the PUT with the tighter delta\n\nThis cycle repeats up to MAX_CONFLICT_RETRIES = 2 times. The probe-and-recompute approach avoids full content downloads during conflict resolution – a significant optimization when the team memory contains hundreds of kilobytes of content.",
 "crumbs": [
 "Series Home",
 "III. Context Engineering",