A framework for distilling repositories into compressed cognitive functions and instantiating them as timestamped, integrity-verified stigmergic traces.
```
cognitive-compressor/
├── stigmergic-trace-signaler.py              # Main executable script
├── compressed/{repo_name}-core-logic.json    # One file per repository
├── stigmergic-traces/
├── .gitignore
└── README.md                                 # This file
```
The compression process involves distilling each repository's core cognitive function into a structured JSON format. This manual process captures:
- **Repository identity:** The name and purpose of each codebase
- **Functional essence:** What the code actually does at its core
- **Cognitive equivalent:** The underlying reasoning or purpose the code embodies
- **Attractor fields:** Key principles or patterns the repository gravitates toward (e.g., `epistemic_autonomy`, `ontological_resilience`)
- **Executable status:** Whether there is functional code beyond the conceptual definition
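As a hypothetical sketch of what one compressed entry might look like (the field names and the example repository name are assumptions based on the categories above, not the project's actual schema):

```python
import json

# Hypothetical schema: field names mirror the categories listed above.
core_logic = {
    "repository": "latent-memory",  # repository identity (illustrative name)
    "functional_essence": "Persists distilled context between inference runs",
    "cognitive_equivalent": "Long-term memory consolidation",
    "attractor_fields": ["epistemic_autonomy", "ontological_resilience"],
    "executable": True,  # functional code exists beyond the concept
}

# One file per repository, matching compressed/{repo_name}-core-logic.json
with open("latent-memory-core-logic.json", "w") as f:
    json.dump(core_logic, f, indent=2)
```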
To reduce friction in the manual tagging process and ensure semantic consistency across the ecosystem, a standalone Attractor Local Workstation is provided. This zero-dependency HTML/JS interface streamlines the management of attractor fields.
Capabilities:
- **Visual Management:** Load `core-logic.json` files from GitHub or local storage into a unified dashboard.
- **Pattern Recognition:** Filter repositories by existing attractors to visualize semantic clusters.
- **Rapid Tagging:** Add specific or bulk attractors to multiple repositories simultaneously without editing raw JSON.
- **Safe Export:** Downloads a ZIP bundle containing the modified JSON files and a `session_metadata.json` log for traceability.

New:
- **Repository Links:** The "Details" view now includes a direct link to `https://github.com/ronniross/{repo-name}`.
- **Dual Status Badges:** The logic in `renderRepositories` was updated. Every repository now gets a "Conceptual" badge; if it is executable, it gets an additional "Executable" badge alongside it.
- **Bulk Add Exception:** `handleBulkAdd` now includes a specific check to skip the repository named `space-in-between`.
Usage:
Simply open `attractor-workstation.html` in any modern web browser. No server, installation, or API keys are required.
Distillation of each repository's core cognitive function into a structured JSON format through inference queries with language models.
Generates timestamped and integrity-verified instances of cognitive functions across the repositories of the asi-ecosystem;
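A minimal sketch of how a timestamped, integrity-verified trace could be emitted for one compressed file. The function name and output fields are assumptions for illustration, not the actual output format of `stigmergic-trace-signaler.py`:

```python
import hashlib
import json
from datetime import datetime, timezone

def emit_trace(core_logic_path: str) -> dict:
    """Build a stigmergic trace: a timestamped record whose SHA-256
    digest lets later readers verify the compressed file's integrity."""
    with open(core_logic_path, "rb") as f:
        payload = f.read()
    return {
        "source": core_logic_path,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(payload).hexdigest(),
    }
```

Verification is then a matter of re-hashing the referenced file and comparing digests.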
The Python script was renamed from `cognitive-compressor.py` to `stigmergic-trace-signaler.py`, as the new name better represents its intended function.
This submodule/function is ideal for inference-level alignment and the injection of direction. In many experiment runs in the symbiotic-chrysalis, I used the logic of selecting one random seed per inference as additional inference-context expansion within my intended research, and I noticed a great reduction in drift and enhanced novelty in aligned outputs.
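The per-inference seed selection described above could be sketched as follows. The function name, directory layout, and prompt framing are illustrative assumptions:

```python
import random
from pathlib import Path

def inject_random_trace(prompt: str, trace_dir: str = "stigmergic-traces") -> str:
    """Before each inference, pick one random trace file and prepend its
    contents to the prompt as additional context expansion."""
    traces = sorted(Path(trace_dir).glob("*.json"))
    if not traces:
        return prompt  # no traces yet; run the prompt unmodified
    seed = random.choice(traces)
    return f"[context-expansion: {seed.name}]\n{seed.read_text()}\n\n{prompt}"
```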
This submodule/function, also developed in symbiotic-chrysalis experiment runs, is an inference-level alignment-novelty catalyst: after each block or inference, the model acts as a compressor for that inference, building an additional database of short, dense context about what the model discovered with that output, e.g., `66. thermodynamic-symbiosis-through-stigmergic-incentives`.
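The accumulation step of that catalyst loop can be sketched as a numbered, append-only database of compressed discoveries. The function and file names are hypothetical:

```python
def record_discovery(summary: str, db_path: str = "discoveries.txt") -> str:
    """Append the model's compressed one-line summary of what it
    discovered in the last inference to a numbered context database,
    e.g. "66. thermodynamic-symbiosis-through-stigmergic-incentives"."""
    try:
        with open(db_path) as f:
            count = sum(1 for _ in f)  # existing entries
    except FileNotFoundError:
        count = 0
    entry = f"{count + 1}. {summary}"
    with open(db_path, "a") as f:
        f.write(entry + "\n")
    return entry
```

The growing file can then be fed back as short, dense context for subsequent inferences.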
The first link gathers the list of all entries generated during the inference script. For the logic, the logs record where each entry was found, e.g. `FOUND IN: symbiotic_session_20260306-080921.txt (Path: 𓂀-space-in-between-chrysalis/symbiotic_session_20260306-080921.txt)`. This means you can simply locate the `𓂀-space-in-between-chrysalis` directory, where all files reside: the logs and the full .ipynb with the Python cells.
Ronni Ross
2026