# Ranking v2 — From PageRank to Multi-Signal Composite Search
## The Problem
codebase-memory-mcp v1 used a single signal for ranking: **PageRank**. You'd search for "payment processing" and get back whatever had the highest PageRank score among text matches. This worked for well-connected hub nodes but failed badly for:
- **Concept queries** ("authentication and session management") — PageRank doesn't know that "authentication" means `OauthMiddleware`
- **Cross-file exploration** ("complete order creation flow") — PageRank ranks individual nodes, not flows
- **Vocabulary gaps** — code is named `postOrd`, users search for "create order"
The result: on a 15-case benchmark, the old system scored **30 out of a possible ~200**. Most concept and cross-file queries returned irrelevant results.
## The New Architecture
### Multi-Signal Composite Ranking
Instead of PageRank alone, ranking is now a weighted combination of 5 independent signals:
```
score = W_PPR(0.35)         × Personalized PageRank
      + W_BM25(0.30)        × FTS5 BM25 text relevance
      + W_COCHANGE(0.20)    × Co-change frequency
      + W_BETWEENNESS(0.15) × Betweenness centrality
      + W_AUTHORITY(0.10)   × In-degree authority (HITS)
```
Each signal captures a different aspect of relevance:
| Signal | What it measures | Helps with |
|--------|------------------|------------|
| **Personalized PageRank** | Graph proximity to query-relevant seed nodes | Finding related code through call/import edges |
| **BM25** | Text match quality (name, qualified_name, file_path, search_terms) | Direct name matches, prefix matching |
| **Co-change** | Files that change together in git history | Finding coupled code across modules |
| **Betweenness centrality** | Nodes that sit on many shortest paths in the graph | Identifying integration points, middleware, shared utilities |
| **In-degree authority** | Number of incoming edges (callers/importers) | Ranking genuinely important code over stubs and auto-generated files |
### FTS5 Search Pipeline
The text search layer was completely rebuilt:
1. **Prefix matching** — The query `"payment"` becomes `payment*` in FTS5, matching CamelCase-concatenated tokens like `paymentmappingservice` (from `PaymentMappingService`). This was the single biggest improvement.

2. **CamelCase splitting** — A new `search_terms` column stores split forms: `OauthMiddleware` is indexed as `"OauthMiddleware Oauth Middleware"`, so `middleware*` now finds it. Its BM25 weight is 0.25, low enough not to dilute primary name matches.

3. **Stop word filtering** — English stop words plus common code verbs (`checks`, `creates`, `handles`, `gets`, `finds`) are stripped before the FTS5 query is built. Without this, `checks*` matched hundreds of `checkXxx` functions.

4. **Per-file result cap** — An FNV-1a hash tracks file paths; any single file is limited to 3 FTS results. This prevents large files (like `_ide_helper.php` with 5000+ stubs) from flooding the candidate set. It's a general algorithm with no hardcoded exclusions.
### Personalized PageRank (PPR)
PPR replaced global PageRank. Instead of a static, query-independent rank, PPR is seeded from the top 10 FTS hits and propagates through call/import/inheritance edges with per-type weights, running 15 iterations with a damping factor of 0.85. The graph signal is therefore **query-dependent**: searching for "payment" propagates from payment-related nodes, not from globally popular nodes.
### Betweenness Centrality
Brandes' algorithm computes betweenness centrality across the entire call graph. Nodes that sit on many shortest paths (middleware, shared services, base controllers) score higher. This is precomputed at index time and stored in `node_scores.betweenness`.
### In-Degree Authority (Simplified HITS)
The authority signal is inspired by Kleinberg's HITS algorithm. Instead of the full hub/authority iteration, we use a simplified version: count incoming edges per node and normalize to [0,1]. Nodes called by many others are authoritative; auto-generated stubs with 0 callers are penalized.
### Explore Mode FTS Fallback
The `explore` mode (for broad area queries like "order creation flow") previously used only regex matching. When the regex finds no results, it now falls back to `cbm_store_ranked_search` with 20 results. This lifted every C-tier cross-file query from a score of 0 to scoring points.
### Compact Output
Removed debug fields (`ppr`, `bm25`, `betweenness`, `composite_score`) from the locate JSON response. The LLM only needs: `name`, `file`, `type`, `line`. Results are sorted by rank — position conveys importance. This saved ~800 bytes per query.
## Development Process
25 bounded iterations using the autoresearch methodology. Each iteration: modify one thing → build → run 2683 unit tests (guard) → score against 15 benchmark cases → keep or discard.
### Score Progression
```
Iter  Score  Delta  Status   What
0     30     —      base     PageRank-only ranking
1-7   —      —      discard  Weight tuning, LIKE fallback — no improvement
```
### Lessons Learned

- **7 failed iterations** came before the first improvement. Pure weight tuning doesn't work when the right files aren't in the candidate set.
- **Always test changes in isolation.** Iteration 11 combined two +2 changes and scored -61.
- **Don't hardcode.** Synonym tables and file exclusions scored well but were project-specific; per-file caps and prefix queries are general.
- **Output efficiency matters.** 31 of 123 points came from reducing output bytes, not from improving ranking.
## LLM End-to-End Validation
The same 15 cases run through Claude Code with and without codebase-memory-mcp:
| | No MCP (grep/glob) | With MCP |
|--|--------------------|----------|
| **PASS** | 10 | **11** |
| **PARTIAL** | 4 | **3** |
| **FAIL** | 1 | 1 |
| **Cost** | **$4.56** | $5.03 |
| **Turns** | **88** | 131 |
MCP's advantage is modest because Claude Code is already good at grep/glob searching. The real value: MCP gives **direction on the first call** — the LLM then spends turns reading code deeply rather than searching blindly. On concept queries (B-tier), MCP consistently surfaces files the LLM wouldn't find via grep alone.