SF Bay Area • Git Page • All icons from iconics
> [!IMPORTANT]
> Solved a pre-`main()` environment stripping bug causing 11–300× GPU slowdowns that eluded OpenAI's debugging team for months. This was the main blocker to Codex spawning and controlling effective subagents. The regression often caused a silent fallback to slow CPU paths or outright failures in ML-related tasks across all operating systems.
Proof: Issue #8945 | PR #8951 | Release notes (rust-v0.80.0)
Full Investigation Details
In October 2025, OpenAI assembled a specialized debugging team to investigate mysterious slowdowns affecting Codex. After a week of intensive investigation: nothing.
The bug was a ghost: `pre_main_hardening()` executed before `main()`, stripped critical environment variables (`LD_LIBRARY_PATH`, `DYLD_LIBRARY_PATH`), and disappeared without a trace. Standard profilers saw nothing. Users saw the variables in their shell, but inside `codex exec` they vanished.
Within 3 days of their announcement, I identified the problematic commit (PR #4521) and contacted @tibo_openai.
But identification is not proof. I spent 2 months building an undeniable case.
| Date | Event |
|---|---|
| Sept 30, 2025 | PR #4521 merges, enabling pre_main_hardening() in release builds |
| Oct 1, 2025 | rust-v0.43.0 ships (first affected release) |
| Oct 6, 2025 | First “painfully slow” regression reports |
| Oct 1–29, 2025 | Spike in env/PATH inheritance issues across platforms |
| Oct 29, 2025 | Emergency PATH fix lands (did not catch root cause) |
| Late Oct 2025 | OpenAI’s specialized team investigates, finds no root cause, and attributes the slowdowns to changed user behavior |
| Jan 9, 2026 | My fix merged, credited in release notes |
| Platform | Issues | Failure Mode |
|---|---|---|
| macOS | #6012, #5679, #5339, #6243, #6218 | DYLD_* stripping breaking dynamic linking |
| Linux/WSL2 | #4843, #3891, #6200, #5837, #6263 | LD_LIBRARY_PATH stripping → silent CUDA/MKL degradation |
Compiled evidence packages:
- Platform-specific failure modes: reproduction steps with quantifiable performance regressions (11–300×) and benchmarks
- Pattern analysis: cross-referenced 15+ scattered user reports over 3 months; traced process environment inheritance through `fork`/`exec` boundaries
Comprehensive Technical Analysis
Investigation Methodology
The bug was designed to be invisible:
- Pre-main execution: used `#[ctor::ctor]` to run before `main()`, before any logging or instrumentation
- Silent stripping: no warnings, no errors, just missing environment variables
- Distributed symptoms: appeared as unrelated issues across different platforms and configurations
- User attribution: everyone assumed they had misconfigured something (the shell looked fine)
- Wrong search space: the team was debugging post-`main()` application code

> [!NOTE]
> Standard debugging tools cannot see pre-main execution. Profilers start at `main()`. Log hooks are not initialized yet. The code executes, modifies the environment, and vanishes.
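The failure mode is easy to reproduce in miniature: a process that scrubs library-path variables before spawning children gives every child an environment where GPU libraries silently fail to resolve, with no error at spawn time. A minimal sketch (Python for illustration only; the actual hardening ran in Rust via `#[ctor::ctor]`, and the path value here is made up):

```python
import os
import subprocess
import sys

def spawn(cmd, env):
    """Run a child process with an explicit environment; return its stdout."""
    return subprocess.run(cmd, env=env, capture_output=True, text=True).stdout.strip()

# Parent environment: the shell's view, where the variable is visible.
parent_env = dict(os.environ, LD_LIBRARY_PATH="/opt/cuda/lib64")

# A "hardened" environment: library-path variables silently stripped,
# mimicking what pre_main_hardening() did to every subprocess.
STRIPPED = ("LD_LIBRARY_PATH", "DYLD_LIBRARY_PATH")
hardened_env = {k: v for k, v in parent_env.items() if k not in STRIPPED}

# Child that reports what it actually inherited.
probe = [sys.executable, "-c",
         "import os; print(os.environ.get('LD_LIBRARY_PATH', '<missing>'))"]

print("inherited:", spawn(probe, parent_env))    # inherited: /opt/cuda/lib64
print("hardened: ", spawn(probe, hardened_env))  # hardened:  <missing>
```

The child raises no error in the second case; a CUDA-backed library in its place would simply fall back to CPU, which is why the symptoms surfaced as slowdowns rather than crashes.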
OpenAI confirmed and merged the fix within 24 hours, explicitly crediting the investigation in v0.80.0 release notes:
"Codex CLI subprocesses again inherit env vars like LD_LIBRARY_PATH/DYLD_LIBRARY_PATH to avoid runtime issues. As explained in #8945, failure to pass along these environment variables to subprocesses that expect them (notably GPU-related ones), was causing 10×+ performance regressions! Special thanks to @johnzfitch for the detailed investigation and write-up in #8945."
Restored:

| Capability | Beneficiaries |
|---|---|
| GPU acceleration | Internal ML/AI dev teams |
| CUDA/PyTorch | ML researchers |
| MKL/NumPy | Scientific computing users |
| Conda environments | Cross-platform compatibility |
| Enterprise drivers | Database connectivity |
When the tools are blind, the system lies, and everyone else has stopped looking.
- claude-cowork-linux ⭐35 — Run Claude Desktop's Cowork feature on Linux through reverse engineering and native module stubbing
- human-interface-markdown — Apple Human Interface Guidelines archive (1980–2014): 35 documents spanning Lisa, Mac, NeXT, Newton, Aqua, and iOS eras
- claude-warden ⭐4 — Token-saving hooks for Claude Code: prevents verbose output, blocks binary reads, enforces subagent budgets
- pyghidra-lite — Lightweight MCP server for Ghidra reverse engineering; official MCP server listing
- sites — Mutable topology layer for static sites on NixOS: reconciler-based deployer with zero webhooks
- llmx — Codebase indexer with BM25 search and semantic chunk exports; live demo at llm.cat
- dota — Defense of the Artifacts: post-quantum secure secrets manager with TUI
- claude-wiki — Comprehensive Anthropic/Claude documentation wiki: 749+ docs across 24 categories
- specHO — LLM watermark detection via phonetic/semantic analysis (The Echo Rule); live demo at definitelynot.ai
- codex-patcher — Automated code patching system for Rust with byte-span replacement and tree-sitter integration
- htmx-docs — Organized HTMX ecosystem documentation corpus in Markdown (htmx.org, Big Sky repos, RFC 9110/9113/9114)
- filearchy — COSMIC Files fork with sub-10ms trigram search (Rust)
- nautilus-plus — Enhanced GNOME Files with sub-ms search (AUR)
- indepacer — PACER CLI for federal court research (PyPI: pacersdk)
Self-hosted bare-metal infrastructure (NixOS) with post-quantum cryptography, authoritative DNS, and containerized services.
- Cosmic Code Cleaner @ definitelynot.ai — LLM paste sanitizer with vectorhit algorithm: fixes curly quotes, invisible Unicode, and confusable punctuation; dedents blocks
- LLMX Ingestor @ llm.cat — WebAssembly codebase indexer: private, deterministic chunking and BM25 search for large folders
- LINTENIUM FIELD @ internetuniverse.org — Terminal-based ARG experience: interactive mystery with audio visualizations
- Observatory @ look.definitelynot.ai — WebGPU deepfake detection running 4 ML models in browser
Live Demo: look.definitelynot.ai
Browser-based AI image detection running 4 specialized ML models (ViT, Swin Transformer) through WebGPU. Zero server-side processing; all inference happens client-side with 672MB of ONNX models.
| Model | Accuracy | Architecture |
|---|---|---|
| dima806_ai_real | 98.2% | Vision Transformer |
| SMOGY | 98.2% | Swin Transformer |
| Deep-Fake-Detector-v2 | 92.1% | ViT-Base |
| umm_maybe | 94.2% | Vision Transformer |
Stack: JavaScript (ES6) • Transformers.js • ONNX • WebGPU/WASM
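With four models voting, the per-model probabilities have to be folded into a single verdict. A sketch of one plausible aggregation (the model names and accuracies come from the table above; the accuracy-weighted averaging rule is an illustrative assumption, not necessarily what the live demo implements, which runs in JavaScript):

```python
# Accuracy per model, taken from the table above.
MODEL_ACCURACY = {
    "dima806_ai_real": 0.982,
    "SMOGY": 0.982,
    "Deep-Fake-Detector-v2": 0.921,
    "umm_maybe": 0.942,
}

def ensemble_score(probs: dict) -> float:
    """Weighted mean of per-model P(AI-generated), weighted by accuracy,
    so the weaker models pull the verdict around less."""
    total = sum(MODEL_ACCURACY[m] for m in probs)
    return sum(p * MODEL_ACCURACY[m] for m, p in probs.items()) / total

# Hypothetical per-model outputs for one image.
probs = {"dima806_ai_real": 0.91, "SMOGY": 0.88,
         "Deep-Fake-Detector-v2": 0.60, "umm_maybe": 0.75}
score = ensemble_score(probs)
print(f"P(AI) = {score:.2f}", "-> flag" if score > 0.5 else "-> pass")
```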
3,372+ PNG icons with semantic CLI discovery. Find the right icon by meaning, not filename.
```
icon suggest security   # → lock, shield, key, firewall…
icon suggest data       # → chart, database, folder…
icon use lock shield    # Export to ./icons/
```

Features: Fuzzy search • theme variants • batch export • markdown integration
Stack: Python • FuzzyWuzzy • PIL
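The "find by meaning" step can be sketched as a concept map plus fuzzy matching. This toy version uses the standard library's `difflib` in place of FuzzyWuzzy, and the concept entries are invented for illustration:

```python
import difflib

# Toy concept map: each query term maps to icon names by meaning.
# The real tool's vocabulary is far larger; these entries are illustrative.
CONCEPTS = {
    "security": ["lock", "shield", "key", "firewall"],
    "data": ["chart", "database", "folder"],
    "network": ["globe", "router", "antenna"],
}

def suggest(query: str, cutoff: float = 0.6) -> list:
    """Fuzzy-match the query against concept names, then return their icons,
    so typos like 'secuirty' still resolve to the security icons."""
    hits = difflib.get_close_matches(query.lower(), CONCEPTS, n=1, cutoff=cutoff)
    return CONCEPTS[hits[0]] if hits else []

print(suggest("security"))  # ['lock', 'shield', 'key', 'firewall']
print(suggest("secuirty"))  # ['lock', 'shield', 'key', 'firewall']
```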
COSMIC Files fork with an embedded trigram search engine. Memory-mapped indices achieve sub-10ms searches across 2.15M+ files with near-zero resident memory.
```
filearchy/
├── triglyph/    # Trigram library (mmap)
└── triglyphd/   # D-Bus daemon for system-wide search
```
- Performance: 2.15M+ files • <10ms queries • 156MB index
- Stack: Rust • libcosmic • memmap2 • zbus
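The core idea behind trigram search is a posting list per 3-character window: a query only needs to intersect a few small sets instead of scanning every path. A minimal in-memory sketch (Python for illustration; triglyph itself is Rust over memory-mapped indices, and the sample paths are made up):

```python
from collections import defaultdict

def trigrams(s: str) -> set:
    """All 3-character windows of a lowercased string."""
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

class TrigramIndex:
    """Map each trigram to the set of file IDs containing it. A match must
    contain every trigram of the query, so lookup is an intersection of
    small posting sets rather than a scan of the whole file list."""

    def __init__(self):
        self.postings = defaultdict(set)
        self.paths = []

    def add(self, path: str):
        fid = len(self.paths)
        self.paths.append(path)
        for t in trigrams(path):
            self.postings[t].add(fid)

    def search(self, query: str) -> list:
        qt = trigrams(query)
        if not qt:
            return []  # queries shorter than 3 chars have no trigrams
        ids = set.intersection(*(self.postings.get(t, set()) for t in qt))
        # Trigram hits are only candidates; verify the substring occurs.
        return [self.paths[i] for i in sorted(ids)
                if query.lower() in self.paths[i].lower()]

idx = TrigramIndex()
for p in ["/home/u/notes/meeting.md", "/etc/nginx/nginx.conf",
          "/var/log/nginx/access.log"]:
    idx.add(p)
print(idx.search("nginx"))  # ['/etc/nginx/nginx.conf', '/var/log/nginx/access.log']
```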
LLMs echo their training data. That echo is detectable through pattern recognition:
| Signature | Detection Method |
|---|---|
| Phonetic | CMU phoneme analysis, Levenshtein distance |
| Structural | POS tag patterns, sentence construction |
| Semantic | Word2Vec cosine similarity, hedging clusters |
Implemented in specHO with 98.6% preprocessor test pass rate. Live demo at definitelynot.ai.
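The phonetic row above leans on edit distance between near-homophone phrasings. A minimal sketch of the Levenshtein computation, the simplest ingredient of that signature (the CMU phoneme and Word2Vec stages are out of scope here, and these word pairs are illustrative):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, keeping only one row."""
    if len(a) < len(b):
        a, b = b, a  # ensure b is the shorter string (smaller row)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Differently spelled but similar-sounding words sit a short edit
# distance apart, which is what the phonetic signature exploits.
print(levenshtein("their", "there"))  # 2
print(levenshtein("delve", "dive"))   # 2
```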
Core: Rust | Python | TypeScript | C | Nix | Shell
- claude-cowork-linux ⭐32 — Run Claude Desktop's Cowork feature on Linux through reverse engineering
- human-interface-markdown — Apple Human Interface Guidelines archive (1980-2014) — 35 documents for LLM consumption
- claude-warden ⭐4 — Token-saving hooks for Claude Code
- llmx — Codebase indexer with BM25 search — live: llm.cat
- claude-wiki — Comprehensive Anthropic documentation wiki — 749+ docs
- observatory — WebGPU deepfake detection — live: look.definitelynot.ai
- specHO — LLM watermark detection — live: definitelynot.ai
- burn-plugin — Claude Code plugin for the Burn deep learning framework
- raley-bot — Grocery shopping assistant with MCP integration
- gemini-sharp — Privacy-focused Gemini CLI with custom themes
Primary server: Dedicated bare-metal NixOS host (details available on request)
| Layer | Stack |
|---|---|
| Security | Post-quantum SSH • Rosenpass VPN • nftables firewall |
| DNS | Unbound resolver with DNSSEC • ad/tracker blocking |
| Services | FreshRSS • Caddy (HTTPS/HTTP/3) • cPanel/WHM • Podman containers |
| Network | Local 10Gbps • Authoritative BIND9 with RFC 2136 ACME |