From ae9eae934a0ac990cd96de68e43ba372759423f2 Mon Sep 17 00:00:00 2001 From: createkr Date: Tue, 17 Feb 2026 08:15:41 +0800 Subject: [PATCH 01/73] fix: create missing ip_rate_limit table in init_db Co-authored-by: xr --- node/rustchain_v2_integrated_v2.2.1_rip200.py | 1 + 1 file changed, 1 insertion(+) diff --git a/node/rustchain_v2_integrated_v2.2.1_rip200.py b/node/rustchain_v2_integrated_v2.2.1_rip200.py index 3be5556c2..263a76254 100644 --- a/node/rustchain_v2_integrated_v2.2.1_rip200.py +++ b/node/rustchain_v2_integrated_v2.2.1_rip200.py @@ -576,6 +576,7 @@ def init_db(): with sqlite3.connect(DB_PATH) as c: # Core tables c.execute("CREATE TABLE IF NOT EXISTS nonces (nonce TEXT PRIMARY KEY, expires_at INTEGER)") + c.execute("CREATE TABLE IF NOT EXISTS ip_rate_limit (client_ip TEXT, miner_id TEXT, ts INTEGER, PRIMARY KEY (client_ip, miner_id))") c.execute("CREATE TABLE IF NOT EXISTS tickets (ticket_id TEXT PRIMARY KEY, expires_at INTEGER, commitment TEXT)") # Epoch tables From 9689db578c0dbca330057c39ecfa1711bd0cb831 Mon Sep 17 00:00:00 2001 From: nicepopo86-lang Date: Tue, 17 Feb 2026 09:15:44 +0900 Subject: [PATCH 02/73] feat: add public RustChain network status dashboard Co-authored-by: nicepopo86-lang --- docs/README.md | 1 + docs/network-status.html | 156 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 157 insertions(+) create mode 100644 docs/network-status.html diff --git a/docs/README.md b/docs/README.md index 2eb6a3823..8ceaa0814 100644 --- a/docs/README.md +++ b/docs/README.md @@ -22,6 +22,7 @@ - **Primary Node**: `https://50.28.86.131` - **Explorer**: `https://50.28.86.131/explorer` - **Health Check**: `curl -sk https://50.28.86.131/health` +- **Network Status Page**: `docs/network-status.html` (GitHub Pages-hostable status dashboard) ## Current Stats diff --git a/docs/network-status.html b/docs/network-status.html new file mode 100644 index 000000000..574b9f808 --- /dev/null +++ b/docs/network-status.html @@ -0,0 +1,156 @@ + + + + + + RustChain Network Status + + + +
+

RustChain Network Status

+
Polling every 60s…
+
+

Response Time (recent)

+ +
Green = healthy, Yellow = slow (>2s), Red = down. Uptime windows are computed from local history in your browser.
+
+ + + + From 9eff30f618cdc9f450e3cdb971d0a0c76b87f81c Mon Sep 17 00:00:00 2001 From: createkr Date: Tue, 17 Feb 2026 08:16:36 +0800 Subject: [PATCH 03/73] ci: make PR test pipeline blocking and deterministic Co-authored-by: xr --- .github/workflows/ci.yml | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index c1b0a004c..13b7b120c 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -25,21 +25,17 @@ jobs: if [ -f requirements.txt ]; then pip install -r requirements.txt; fi if [ -f tests/requirements.txt ]; then pip install -r tests/requirements.txt; fi - - name: Lint with ruff - run: ruff check . || true - continue-on-error: true + - name: Lint (syntax + runtime safety subset) + run: ruff check tests --select E9,F63,F7,F82 - - name: Type check with mypy - run: mypy . --ignore-missing-imports || true - continue-on-error: true + - name: Type check (core test crypto shim) + run: mypy tests/mock_crypto.py --ignore-missing-imports - - name: Security scan with bandit - run: bandit -r . -ll --exclude ./deprecated,./node_backups || true - continue-on-error: true + - name: Security scan (tests) + run: bandit -r tests -ll - - name: Run tests with pytest + - name: Run tests with pytest (blocking) env: RC_ADMIN_KEY: "0123456789abcdef0123456789abcdef" DB_PATH: ":memory:" - run: pytest tests/ -v || true - continue-on-error: true + run: pytest tests/ -v From bb55fe83579a26de0e9bccacca47de4ca8e61304 Mon Sep 17 00:00:00 2001 From: addidea <6976531@qq.com> Date: Tue, 17 Feb 2026 09:03:13 +0800 Subject: [PATCH 04/73] docs: add Simplified Chinese translation Translation of README.md to Simplified Chinese for Chinese-speaking community. Bounty: Issue #176 (5 RTC) Key sections translated: - Project overview and core concept (Proof-of-Antiquity) - Quick start guide and installation instructions - Hardware multipliers and supported platforms - Network architecture and API endpoints - Security model and anti-VM detection - Related projects and attribution All technical terms, links, code blocks, and formatting preserved. Native Chinese speaker translation - natural and accurate. --- README.zh-CN.md | 368 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 368 insertions(+) create mode 100644 README.zh-CN.md diff --git a/README.zh-CN.md b/README.zh-CN.md new file mode 100644 index 000000000..58c1574ce --- /dev/null +++ b/README.zh-CN.md @@ -0,0 +1,368 @@ +
+ +# 🧱 RustChain: 古董证明区块链 + +[![CI](https://github.com/Scottcjn/Rustchain/actions/workflows/ci.yml/badge.svg)](https://github.com/Scottcjn/Rustchain/actions/workflows/ci.yml) +[![License](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE) +[![PowerPC](https://img.shields.io/badge/PowerPC-G3%2FG4%2FG5-orange)](https://github.com/Scottcjn/Rustchain) +[![Blockchain](https://img.shields.io/badge/Consensus-Proof--of--Antiquity-green)](https://github.com/Scottcjn/Rustchain) +[![Python](https://img.shields.io/badge/Python-3.x-yellow)](https://python.org) +[![Network](https://img.shields.io/badge/Nodes-3%20Active-brightgreen)](https://rustchain.org/explorer) +[![As seen on BoTTube](https://bottube.ai/badge/seen-on-bottube.svg)](https://bottube.ai) + +**第一个奖励古董硬件"年龄"而非"速度"的区块链。** + +*您的 PowerPC G4 比现代 Threadripper 赚得更多。这就是重点。* + +[官网](https://rustchain.org) • [实时浏览器](https://rustchain.org/explorer) • [交易 wRTC](https://raydium.io/swap/?inputMint=sol&outputMint=12TAdKXxcGf6oCv4rqDz2NkgxjyHq6HQKoxKZYGf5i4X) • [DexScreener](https://dexscreener.com/solana/8CF2Q8nSCxRacDShbtF86XTSrYjueBMKmfdR3MLdnYzb) • [wRTC 快速入门](docs/wrtc.md) • [wRTC 教程](docs/WRTC_ONBOARDING_TUTORIAL.md) • [Grokipedia 参考](https://grokipedia.com/search?q=RustChain) • [白皮书](docs/RustChain_Whitepaper_Flameholder_v0.97-1.pdf) • [快速开始](#-快速开始) • [工作原理](#-古董证明如何工作) + +
+ +--- + +## 🪙 Solana 上的 wRTC + +RustChain 代币(RTC)现已通过 BoTTube 桥在 Solana 上以 **wRTC** 形式提供: + +| 资源 | 链接 | +|----------|------| +| **交易 wRTC** | [Raydium DEX](https://raydium.io/swap/?inputMint=sol&outputMint=12TAdKXxcGf6oCv4rqDz2NkgxjyHq6HQKoxKZYGf5i4X) | +| **价格图表** | [DexScreener](https://dexscreener.com/solana/8CF2Q8nSCxRacDShbtF86XTSrYjueBMKmfdR3MLdnYzb) | +| **RTC ↔ wRTC 桥接** | [BoTTube 桥](https://bottube.ai/bridge) | +| **快速入门指南** | [wRTC 快速入门(购买、桥接、安全)](docs/wrtc.md) | +| **新手教程** | [wRTC 桥接 + 交易安全指南](docs/WRTC_ONBOARDING_TUTORIAL.md) | +| **外部参考** | [Grokipedia 搜索:RustChain](https://grokipedia.com/search?q=RustChain) | +| **代币铸造地址** | `12TAdKXxcGf6oCv4rqDz2NkgxjyHq6HQKoxKZYGf5i4X` | + +--- + +## 📄 学术出版物 + +| 论文 | DOI | 主题 | +|-------|-----|-------| +| **RustChain: 一个 CPU,一票** | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18623592.svg)](https://doi.org/10.5281/zenodo.18623592) | 古董证明共识、硬件指纹 | +| **非双射排列折叠** | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18623920.svg)](https://doi.org/10.5281/zenodo.18623920) | AltiVec vec_perm 用于 LLM 注意力机制(27-96倍优势) | +| **PSE 硬件熵** | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18623922.svg)](https://doi.org/10.5281/zenodo.18623922) | POWER8 mftb 熵用于行为分歧 | +| **神经形态提示翻译** | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18623594.svg)](https://doi.org/10.5281/zenodo.18623594) | 情感提示在视频扩散中提升 20% | +| **RAM 保险库** | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18321905.svg)](https://doi.org/10.5281/zenodo.18321905) | NUMA 分布式权重库用于 LLM 推理 | + +--- + +## 🎯 RustChain 的与众不同之处 + +| 传统 PoW | 古董证明 | +|----------------|-------------------| +| 奖励最快的硬件 | 奖励最古老的硬件 | +| 越新越好 | 越旧越好 | +| 浪费能源消耗 | 保护计算历史 | +| 竞速到底 | 奖励数字保护 | + +**核心原则**:存活数十年的真实古董硬件值得被认可。RustChain 颠覆了挖矿规则。 + +## ⚡ 快速开始 + +### 一键安装(推荐) +```bash +curl -sSL https://raw.githubusercontent.com/Scottcjn/Rustchain/main/install-miner.sh | bash +``` + +安装程序功能: +- ✅ 自动检测您的平台(Linux/macOS, x86_64/ARM/PowerPC) +- ✅ 创建隔离的 Python virtualenv(不污染系统) +- ✅ 下载适合您硬件的正确矿工 +- ✅ 设置开机自启动(systemd/launchd) +- ✅ 提供简便卸载 + +### 带选项的安装 + +**使用指定钱包安装:** +```bash +curl -sSL https://raw.githubusercontent.com/Scottcjn/Rustchain/main/install-miner.sh | bash -s -- --wallet my-miner-wallet +``` + +**卸载:** +```bash +curl -sSL https://raw.githubusercontent.com/Scottcjn/Rustchain/main/install-miner.sh | bash -s -- --uninstall +``` + +### 支持的平台 +- ✅ Ubuntu 20.04+, Debian 11+, Fedora 38+ (x86_64, ppc64le) +- ✅ macOS 12+ (Intel, Apple Silicon, PowerPC) +- ✅ IBM POWER8 系统 + +### 安装后操作 + +**检查钱包余额:** +```bash +# 注意:使用 -sk 标志,因为节点可能使用自签名 SSL 证书 +curl -sk "https://50.28.86.131/wallet/balance?miner_id=YOUR_WALLET_NAME" +``` + +**列出活跃矿工:** +```bash +curl -sk https://50.28.86.131/api/miners +``` + +**检查节点健康:** +```bash +curl -sk https://50.28.86.131/health +``` + +**获取当前纪元:** +```bash +curl -sk https://50.28.86.131/epoch +``` + +**管理矿工服务:** + +*Linux (systemd):* +```bash +systemctl --user status rustchain-miner # 检查状态 +systemctl --user stop rustchain-miner # 停止挖矿 +systemctl --user start rustchain-miner # 启动挖矿 +journalctl --user -u rustchain-miner -f # 查看日志 +``` + +*macOS (launchd):* +```bash +launchctl list | grep rustchain # 检查状态 +launchctl stop com.rustchain.miner # 停止挖矿 +launchctl start com.rustchain.miner # 启动挖矿 +tail -f ~/.rustchain/miner.log # 查看日志 +``` + +### 手动安装 +```bash +git clone https://github.com/Scottcjn/Rustchain.git +cd Rustchain +pip install -r requirements.txt +python3 rustchain_universal_miner.py --wallet YOUR_WALLET_NAME +``` + +## 💰 赏金榜 + +通过为 RustChain 生态系统做贡献来赚取 **RTC**! + +| 赏金 | 奖励 | 链接 | +|--------|--------|------| +| **首次真实贡献** | 10 RTC | [#48](https://github.com/Scottcjn/Rustchain/issues/48) | +| **网络状态页** | 25 RTC | [#161](https://github.com/Scottcjn/Rustchain/issues/161) | +| **AI 代理猎手** | 200 RTC | [代理赏金 #34](https://github.com/Scottcjn/rustchain-bounties/issues/34) | + +--- + +## 💰 古董倍数 + +您的硬件年龄决定挖矿奖励: + +| 硬件 | 年代 | 倍数 | 示例收益 | +|----------|-----|------------|------------------| +| **PowerPC G4** | 1999-2005 | **2.5×** | 0.30 RTC/纪元 | +| **PowerPC G5** | 2003-2006 | **2.0×** | 0.24 RTC/纪元 | +| **PowerPC G3** | 1997-2003 | **1.8×** | 0.21 RTC/纪元 | +| **IBM POWER8** | 2014 | **1.5×** | 0.18 RTC/纪元 | +| **Pentium 4** | 2000-2008 | **1.5×** | 0.18 RTC/纪元 | +| **Core 2 Duo** | 2006-2011 | **1.3×** | 0.16 RTC/纪元 | +| **Apple Silicon** | 2020+ | **1.2×** | 0.14 RTC/纪元 | +| **现代 x86_64** | 当前 | **1.0×** | 0.12 RTC/纪元 | + +*倍数每年衰减 15%,以防止永久优势。* + +## 🔧 古董证明如何工作 + +### 1. 硬件指纹(RIP-PoA) + +每个矿工必须证明其硬件是真实的,而非模拟的: + +``` +┌─────────────────────────────────────────────────────────────┐ +│ 6 项硬件检查 │ +├─────────────────────────────────────────────────────────────┤ +│ 1. 时钟偏移 & 振荡器漂移 ← 硅片老化模式 │ +│ 2. 缓存时序指纹 ← L1/L2/L3 延迟特征 │ +│ 3. SIMD 单元特征 ← AltiVec/SSE/NEON 偏差 │ +│ 4. 热漂移熵 ← 热量曲线是独特的 │ +│ 5. 指令路径抖动 ← 微架构抖动图 │ +│ 6. 反模拟检查 ← 检测虚拟机/模拟器 │ +└─────────────────────────────────────────────────────────────┘ +``` + +**为什么重要**:伪装成 G4 Mac 的 SheepShaver 虚拟机会未通过这些检查。真实的古董硅片具有无法伪造的独特老化模式。 + +### 2. 1 个 CPU = 1 票(RIP-200) + +与算力即投票权的 PoW 不同,RustChain 使用**轮流共识**: + +- 每个独特的硬件设备每纪元只有 1 票 +- 奖励在所有投票者间平均分配,然后乘以古董倍数 +- 运行多线程或更快的 CPU 没有优势 + +### 3. 基于纪元的奖励 + +``` +纪元时长:10 分钟(600 秒) +基础奖励池:每纪元 1.5 RTC +分配方式:平均分配 × 古董倍数 +``` + +**5 个矿工的示例:** +``` +G4 Mac (2.5×): 0.30 RTC ████████████████████ +G5 Mac (2.0×): 0.24 RTC ████████████████ +现代 PC (1.0×): 0.12 RTC ████████ +现代 PC (1.0×): 0.12 RTC ████████ +现代 PC (1.0×): 0.12 RTC ████████ + ───────── +总计: 0.90 RTC (+ 0.60 RTC 返回池) +``` + +## 🌐 网络架构 + +### 活跃节点(3 个) + +| 节点 | 位置 | 角色 | 状态 | +|------|----------|------|--------| +| **节点 1** | 50.28.86.131 | 主节点 + 浏览器 | ✅ 活跃 | +| **节点 2** | 50.28.86.153 | Ergo 锚定 | ✅ 活跃 | +| **节点 3** | 76.8.228.245 | 外部(社区) | ✅ 活跃 | + +### Ergo 区块链锚定 + +RustChain 定期锚定到 Ergo 区块链以实现不可篡改性: + +``` +RustChain 纪元 → 承诺哈希 → Ergo 交易(R4 寄存器) +``` + +这提供了 RustChain 状态在特定时间存在的密码学证明。 + +## 📊 API 端点 + +```bash +# 检查网络健康 +curl -sk https://50.28.86.131/health + +# 获取当前纪元 +curl -sk https://50.28.86.131/epoch + +# 列出活跃矿工 +curl -sk https://50.28.86.131/api/miners + +# 检查钱包余额 +curl -sk "https://50.28.86.131/wallet/balance?miner_id=YOUR_WALLET" + +# 区块浏览器(网页浏览器) +open https://rustchain.org/explorer +``` + +## 🖥️ 支持的平台 + +| 平台 | 架构 | 状态 | 备注 | +|----------|--------------|--------|-------| +| **Mac OS X Tiger** | PowerPC G4/G5 | ✅ 完全支持 | Python 2.5 兼容矿工 | +| **Mac OS X Leopard** | PowerPC G4/G5 | ✅ 完全支持 | 推荐用于古董 Mac | +| **Ubuntu Linux** | ppc64le/POWER8 | ✅ 完全支持 | 最佳性能 | +| **Ubuntu Linux** | x86_64 | ✅ 完全支持 | 标准矿工 | +| **macOS Sonoma** | Apple Silicon | ✅ 完全支持 | M1/M2/M3 芯片 | +| **Windows 10/11** | x86_64 | ✅ 完全支持 | Python 3.8+ | +| **DOS** | 8086/286/386 | 🔧 实验性 | 仅徽章奖励 | + +## 🏅 NFT 徽章系统 + +达成挖矿里程碑即可获得纪念徽章: + +| 徽章 | 要求 | 稀有度 | +|-------|-------------|--------| +| 🔥 **Bondi G3 火焰守护者** | 在 PowerPC G3 上挖矿 | 稀有 | +| ⚡ **QuickBasic 倾听者** | 从 DOS 机器挖矿 | 传奇 | +| 🛠️ **DOS WiFi 炼金术师** | 网络化 DOS 机器 | 神话 | +| 🏛️ **万神殿先驱** | 前 100 名矿工 | 限量 | + +## 🔒 安全模型 + +### 反虚拟机检测 +虚拟机会被检测到并获得普通奖励的 **十亿分之一**: +``` +真实 G4 Mac: 2.5× 倍数 = 0.30 RTC/纪元 +模拟 G4: 0.0000000025× = 0.0000000003 RTC/纪元 +``` + +### 硬件绑定 +每个硬件指纹绑定到一个钱包。防止: +- 同一硬件使用多个钱包 +- 硬件欺骗 +- 女巫攻击 + +## 📁 仓库结构 + +``` +Rustchain/ +├── rustchain_universal_miner.py # 主矿工(所有平台) +├── rustchain_v2_integrated.py # 完整节点实现 +├── fingerprint_checks.py # 硬件验证 +├── install.sh # 一键安装程序 +├── docs/ +│ ├── RustChain_Whitepaper_*.pdf # 技术白皮书 +│ └── chain_architecture.md # 架构文档 +├── tools/ +│ └── validator_core.py # 区块验证 +└── nfts/ # 徽章定义 +``` + +## ✅ Beacon 认证开源(BCOS) + +RustChain 接受 AI 辅助的 PR,但我们要求*证据*和*审查*,这样维护者就不会淹没在低质量的代码生成中。 + +阅读草案规范: +- `docs/BEACON_CERTIFIED_OPEN_SOURCE.md` + +## 🔗 相关项目和链接 + +| 资源 | 链接 | +|---------|------| +| **官网** | [rustchain.org](https://rustchain.org) | +| **区块浏览器** | [rustchain.org/explorer](https://rustchain.org/explorer) | +| **交易 wRTC (Raydium)** | [Raydium DEX](https://raydium.io/swap/?inputMint=sol&outputMint=12TAdKXxcGf6oCv4rqDz2NkgxjyHq6HQKoxKZYGf5i4X) | +| **价格图表** | [DexScreener](https://dexscreener.com/solana/8CF2Q8nSCxRacDShbtF86XTSrYjueBMKmfdR3MLdnYzb) | +| **RTC ↔ wRTC 桥接** | [BoTTube 桥](https://bottube.ai/bridge) | +| **wRTC 代币铸造地址** | `12TAdKXxcGf6oCv4rqDz2NkgxjyHq6HQKoxKZYGf5i4X` | +| **BoTTube** | [bottube.ai](https://bottube.ai) - AI 视频平台 | +| **Moltbook** | [moltbook.com](https://moltbook.com) - AI 社交网络 | +| [nvidia-power8-patches](https://github.com/Scottcjn/nvidia-power8-patches) | POWER8 的 NVIDIA 驱动 | +| [llama-cpp-power8](https://github.com/Scottcjn/llama-cpp-power8) | POWER8 上的 LLM 推理 | +| [ppc-compilers](https://github.com/Scottcjn/ppc-compilers) | 古董 Mac 的现代编译器 | + +## 📝 文章 + +- [古董证明:奖励古董硬件的区块链](https://dev.to/scottcjn/proof-of-antiquity-a-blockchain-that-rewards-vintage-hardware-4ii3) - Dev.to +- [我在 768GB IBM POWER8 服务器上运行 LLM](https://dev.to/scottcjn/i-run-llms-on-a-768gb-ibm-power8-server-and-its-faster-than-you-think-1o) - Dev.to + +## 🙏 署名 + +**一年的开发、真实的古董硬件、电费账单和专用实验室成就了这一切。** + +如果您使用 RustChain: +- ⭐ **给这个仓库加星** - 帮助其他人找到它 +- 📝 **在您的项目中署名** - 保留署名信息 +- 🔗 **链接回来** - 分享这份热爱 + +``` +RustChain - 古董证明 by Scott (Scottcjn) +https://github.com/Scottcjn/Rustchain +``` + +## 📜 许可证 + +MIT 许可证 - 免费使用,但请保留版权声明和署名。 + +--- + +
+ +**由 [Elyan Labs](https://elyanlabs.ai) 用 ⚡ 打造** + +*"您的古董硬件获得奖励。让挖矿再次有意义。"* + +**DOS 机器、PowerPC G4、Win95 机器——它们都有价值。RustChain 证明了这一点。** + +
From 369fb502eca8813857e7abad8ee415e349bac653 Mon Sep 17 00:00:00 2001 From: createkr Date: Tue, 17 Feb 2026 09:03:45 +0800 Subject: [PATCH 05/73] feat: decentralized GPU render protocol with escrow * feat: implement decentralized GPU render protocol #30 * docs: add BCOS-L1 headers and compliance metadata #30 * fix: harden gpu escrow auth and race safety --------- Co-authored-by: xr --- node/gpu_render_endpoints.py | 222 ++++++++++++++++++ node/rustchain_v2_integrated_v2.2.1_rip200.py | 45 ++++ scripts/test_gpu_render.py | 72 ++++++ 3 files changed, 339 insertions(+) create mode 100644 node/gpu_render_endpoints.py create mode 100644 scripts/test_gpu_render.py diff --git a/node/gpu_render_endpoints.py b/node/gpu_render_endpoints.py new file mode 100644 index 000000000..db2d36bbb --- /dev/null +++ b/node/gpu_render_endpoints.py @@ -0,0 +1,222 @@ +# SPDX-License-Identifier: MIT +# Author: @createkr (RayBot AI) +# BCOS-Tier: L1 +import hashlib +import math +import secrets +import sqlite3 +import time + +from flask import jsonify, request + + +def register_gpu_render_endpoints(app, db_path, admin_key): + """Registers decentralized GPU render payment and attestation endpoints.""" + + def get_db(): + conn = sqlite3.connect(db_path) + conn.row_factory = sqlite3.Row + return conn + + def _parse_positive_amount(value): + try: + parsed = float(value) + except (TypeError, ValueError): + return None + if not math.isfinite(parsed) or parsed <= 0: + return None + return parsed + + def _hash_job_secret(secret): + return hashlib.sha256((secret or "").encode("utf-8")).hexdigest() + + def _ensure_escrow_secret_column(db): + """Best-effort migration for older DBs.""" + try: + cols = {row[1] for row in db.execute("PRAGMA table_info(render_escrow)").fetchall()} + if "escrow_secret_hash" not in cols: + db.execute("ALTER TABLE render_escrow ADD COLUMN escrow_secret_hash TEXT") + db.commit() + except sqlite3.Error: + pass + + # 1. GPU Node Attestation (Extension) + @app.route("/api/gpu/attest", methods=["POST"]) + def gpu_attest(): + data = request.get_json(silent=True) or {} + miner_id = data.get("miner_id") + if not miner_id: + return jsonify({"error": "miner_id required"}), 400 + + # In a real node, we'd verify the signed hardware fingerprint here. + # For the bounty, we implement the protocol storage and API. + db = get_db() + try: + db.execute( + """ + INSERT OR REPLACE INTO gpu_attestations ( + miner_id, gpu_model, vram_gb, cuda_version, benchmark_score, + price_render_minute, price_tts_1k_chars, price_stt_minute, price_llm_1k_tokens, + supports_render, supports_tts, supports_stt, supports_llm, last_attestation + ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) + """, + ( + miner_id, + data.get("gpu_model"), + data.get("vram_gb"), + data.get("cuda_version"), + data.get("benchmark_score", 0), + data.get("price_render_minute", 0.1), + data.get("price_tts_1k_chars", 0.05), + data.get("price_stt_minute", 0.1), + data.get("price_llm_1k_tokens", 0.02), + 1 if data.get("supports_render") else 0, + 1 if data.get("supports_tts") else 0, + 1 if data.get("supports_stt") else 0, + 1 if data.get("supports_llm") else 0, + int(time.time()), + ), + ) + db.commit() + return jsonify({"ok": True, "message": "GPU attestation recorded"}) + except sqlite3.Error as e: + return jsonify({"error": str(e)}), 500 + finally: + db.close() + + # 2. Escrow: Lock funds for a job + @app.route("/api/gpu/escrow", methods=["POST"]) + def gpu_escrow(): + data = request.get_json(silent=True) or {} + job_id = data.get("job_id") or f"job_{secrets.token_hex(8)}" + job_type = data.get("job_type") # render, tts, stt, llm + from_wallet = data.get("from_wallet") + to_wallet = data.get("to_wallet") + amount = _parse_positive_amount(data.get("amount_rtc")) + + if not all([job_type, from_wallet, to_wallet]): + return jsonify({"error": "Missing required escrow fields"}), 400 + if amount is None: + return jsonify({"error": "amount_rtc must be a finite number > 0"}), 400 + + escrow_secret = data.get("escrow_secret") or secrets.token_hex(16) + + db = get_db() + try: + _ensure_escrow_secret_column(db) + + # check balance (Simplified for bounty protocol) + res = db.execute("SELECT balance_rtc FROM balances WHERE miner_pk = ?", (from_wallet,)).fetchone() + if not res or res[0] < amount: + return jsonify({"error": "Insufficient balance for escrow"}), 400 + + # Lock funds + db.execute("UPDATE balances SET balance_rtc = balance_rtc - ? WHERE miner_pk = ?", (amount, from_wallet)) + + db.execute( + """ + INSERT INTO render_escrow ( + job_id, job_type, from_wallet, to_wallet, amount_rtc, status, created_at, escrow_secret_hash + ) + VALUES (?, ?, ?, ?, ?, 'locked', ?, ?) + """, + (job_id, job_type, from_wallet, to_wallet, amount, int(time.time()), _hash_job_secret(escrow_secret)), + ) + + db.commit() + # escrow_secret is intentionally returned once to allow participant-auth for release/refund. + return jsonify({"ok": True, "job_id": job_id, "status": "locked", "escrow_secret": escrow_secret}) + except sqlite3.Error as e: + return jsonify({"error": str(e)}), 500 + finally: + db.close() + + # 3. Release: Job finished successfully (payer authorizes provider payout) + @app.route("/api/gpu/release", methods=["POST"]) + def gpu_release(): + data = request.get_json(silent=True) or {} + job_id = data.get("job_id") + actor_wallet = data.get("actor_wallet") + escrow_secret = data.get("escrow_secret") + + if not all([job_id, actor_wallet, escrow_secret]): + return jsonify({"error": "job_id, actor_wallet, escrow_secret are required"}), 400 + + db = get_db() + try: + _ensure_escrow_secret_column(db) + job = db.execute("SELECT * FROM render_escrow WHERE job_id = ?", (job_id,)).fetchone() + if not job: + return jsonify({"error": "Job not found"}), 404 + if job["status"] != "locked": + return jsonify({"error": "Job not in locked state"}), 409 + if actor_wallet not in {job["from_wallet"], job["to_wallet"]}: + return jsonify({"error": "actor_wallet must be escrow participant"}), 403 + if actor_wallet != job["from_wallet"]: + return jsonify({"error": "only payer can release escrow"}), 403 + if _hash_job_secret(escrow_secret) != (job["escrow_secret_hash"] or ""): + return jsonify({"error": "invalid escrow_secret"}), 403 + + # Atomic state transition first to prevent races/double-processing. + moved = db.execute( + "UPDATE render_escrow SET status = 'released', released_at = ? WHERE job_id = ? AND status = 'locked'", + (int(time.time()), job_id), + ) + if moved.rowcount != 1: + db.rollback() + return jsonify({"error": "Job was already processed"}), 409 + + # Transfer to provider + db.execute("UPDATE balances SET balance_rtc = balance_rtc + ? WHERE miner_pk = ?", (job["amount_rtc"], job["to_wallet"])) + db.commit() + return jsonify({"ok": True, "status": "released"}) + except sqlite3.Error as e: + return jsonify({"error": str(e)}), 500 + finally: + db.close() + + # 4. Refund: Job failed (provider authorizes refund to payer) + @app.route("/api/gpu/refund", methods=["POST"]) + def gpu_refund(): + data = request.get_json(silent=True) or {} + job_id = data.get("job_id") + actor_wallet = data.get("actor_wallet") + escrow_secret = data.get("escrow_secret") + + if not all([job_id, actor_wallet, escrow_secret]): + return jsonify({"error": "job_id, actor_wallet, escrow_secret are required"}), 400 + + db = get_db() + try: + _ensure_escrow_secret_column(db) + job = db.execute("SELECT * FROM render_escrow WHERE job_id = ?", (job_id,)).fetchone() + if not job: + return jsonify({"error": "Job not found"}), 404 + if job["status"] != "locked": + return jsonify({"error": "Job not in locked state"}), 409 + if actor_wallet not in {job["from_wallet"], job["to_wallet"]}: + return jsonify({"error": "actor_wallet must be escrow participant"}), 403 + if actor_wallet != job["to_wallet"]: + return jsonify({"error": "only provider can request refund"}), 403 + if _hash_job_secret(escrow_secret) != (job["escrow_secret_hash"] or ""): + return jsonify({"error": "invalid escrow_secret"}), 403 + + # Atomic state transition first to prevent races/double-processing. + moved = db.execute( + "UPDATE render_escrow SET status = 'refunded', released_at = ? WHERE job_id = ? AND status = 'locked'", + (int(time.time()), job_id), + ) + if moved.rowcount != 1: + db.rollback() + return jsonify({"error": "Job was already processed"}), 409 + + # Refund to original requester + db.execute("UPDATE balances SET balance_rtc = balance_rtc + ? WHERE miner_pk = ?", (job["amount_rtc"], job["from_wallet"])) + db.commit() + return jsonify({"ok": True, "status": "refunded"}) + except sqlite3.Error as e: + return jsonify({"error": str(e)}), 500 + finally: + db.close() + + print("[GPU] Render Protocol endpoints registered") diff --git a/node/rustchain_v2_integrated_v2.2.1_rip200.py b/node/rustchain_v2_integrated_v2.2.1_rip200.py index 263a76254..a2f39f7fb 100644 --- a/node/rustchain_v2_integrated_v2.2.1_rip200.py +++ b/node/rustchain_v2_integrated_v2.2.1_rip200.py @@ -667,6 +667,42 @@ def init_db(): ) """) + # GPU Render Protocol (Bounty #30) + c.execute(""" + CREATE TABLE IF NOT EXISTS render_escrow ( + id INTEGER PRIMARY KEY, + job_id TEXT UNIQUE NOT NULL, + job_type TEXT NOT NULL, + from_wallet TEXT NOT NULL, + to_wallet TEXT NOT NULL, + amount_rtc REAL NOT NULL, + status TEXT DEFAULT 'locked', + created_at INTEGER NOT NULL, + released_at INTEGER + ) + """) + + c.execute(""" + CREATE TABLE IF NOT EXISTS gpu_attestations ( + miner_id TEXT PRIMARY KEY, + gpu_model TEXT, + vram_gb REAL, + cuda_version TEXT, + benchmark_score REAL, + price_render_minute REAL, + price_tts_1k_chars REAL, + price_stt_minute REAL, + price_llm_1k_tokens REAL, + supports_render INTEGER DEFAULT 1, + supports_tts INTEGER DEFAULT 0, + supports_stt INTEGER DEFAULT 0, + supports_llm INTEGER DEFAULT 0, + tts_models TEXT, + llm_models TEXT, + last_attestation INTEGER + ) + """) + # Governance tables (RIP-0142) c.execute(""" CREATE TABLE IF NOT EXISTS gov_rotation_proposals( @@ -4176,6 +4212,15 @@ def wallet_transfer_signed(): print(f"[P2P] Not available: {e}") except Exception as e: print(f"[P2P] Init failed: {e}") + + # New: GPU Render Protocol (Bounty #30) + try: + from node.gpu_render_endpoints import register_gpu_render_endpoints + register_gpu_render_endpoints(app, DB_PATH, ADMIN_KEY) + except ImportError as e: + print(f"[GPU] Endpoint module not available: {e}") + except Exception as e: + print(f"[GPU] Endpoint init failed: {e}") print("=" * 70) print("RustChain v2.2.1 - SECURITY HARDENED - Mainnet Candidate") print("=" * 70) diff --git a/scripts/test_gpu_render.py b/scripts/test_gpu_render.py new file mode 100644 index 000000000..abc20d941 --- /dev/null +++ b/scripts/test_gpu_render.py @@ -0,0 +1,72 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +# Author: @createkr (RayBot AI) +# BCOS-Tier: L1 +import os +import sys + +import requests + +BASE_URL = os.getenv("GPU_RENDER_BASE_URL", "https://localhost:8099") +# Keep compatibility with local self-signed TLS / non-TLS test setups. +VERIFY_TLS = os.getenv("GPU_RENDER_VERIFY_TLS", "0") == "1" + + +def _post(path, payload): + return requests.post( + f"{BASE_URL}{path}", + json=payload, + timeout=10, + verify=VERIFY_TLS, + ) + + +def test_gpu_attest(): + print("[*] Testing GPU Attestation...") + payload = { + "miner_id": "test_gpu_node", + "gpu_model": "RTX 4090", + "vram_gb": 24, + "cuda_version": "12.1", + "supports_render": True, + "supports_llm": True, + } + resp = _post("/api/gpu/attest", payload) + print(f"[+] Response: {resp.status_code} {resp.text}") + + +def test_gpu_escrow(): + print("[*] Testing GPU Escrow...") + payload = { + "job_type": "render", + "from_wallet": "scott", + "to_wallet": "test_gpu_node", + "amount_rtc": 5.0, + } + resp = _post("/api/gpu/escrow", payload) + print(f"[+] Response: {resp.status_code} {resp.text}") + if resp.status_code == 200: + body = resp.json() + return body.get("job_id"), body.get("escrow_secret") + return None, None + + +def test_gpu_release(job_id, escrow_secret): + print(f"[*] Testing GPU Release for {job_id}...") + payload = { + "job_id": job_id, + "actor_wallet": "scott", + "escrow_secret": escrow_secret, + } + resp = _post("/api/gpu/release", payload) + print(f"[+] Response: {resp.status_code} {resp.text}") + + +if __name__ == "__main__": + if len(sys.argv) > 1: + BASE_URL = sys.argv[1] + + test_gpu_attest() + job_id, escrow_secret = test_gpu_escrow() + if job_id and escrow_secret: + test_gpu_release(job_id, escrow_secret) From 0890bed2884f852da98261f9b8a31b46325dfdcc Mon Sep 17 00:00:00 2001 From: createkr Date: Tue, 17 Feb 2026 09:03:49 +0800 Subject: [PATCH 06/73] feat: wRTC Telegram price ticker bot * feat: wRTC Telegram price ticker bot with alerts and auto-posting #162 * docs: add BCOS-L1 headers to price bot #162 --------- Co-authored-by: xr --- tools/wrtc-price-bot/Dockerfile | 10 ++ tools/wrtc-price-bot/README.md | 29 ++++++ tools/wrtc-price-bot/bot.py | 126 ++++++++++++++++++++++++++ tools/wrtc-price-bot/requirements.txt | 3 + 4 files changed, 168 insertions(+) create mode 100644 tools/wrtc-price-bot/Dockerfile create mode 100644 tools/wrtc-price-bot/README.md create mode 100644 tools/wrtc-price-bot/bot.py create mode 100644 tools/wrtc-price-bot/requirements.txt diff --git a/tools/wrtc-price-bot/Dockerfile b/tools/wrtc-price-bot/Dockerfile new file mode 100644 index 000000000..91e267fde --- /dev/null +++ b/tools/wrtc-price-bot/Dockerfile @@ -0,0 +1,10 @@ +FROM python:3.10-slim + +WORKDIR /app + +COPY requirements.txt . +RUN pip install --no-cache-dir -r requirements.txt + +COPY . . + +CMD ["python", "bot.py"] diff --git a/tools/wrtc-price-bot/README.md b/tools/wrtc-price-bot/README.md new file mode 100644 index 000000000..ecbfd1a0d --- /dev/null +++ b/tools/wrtc-price-bot/README.md @@ -0,0 +1,29 @@ +# wRTC Price Ticker Bot + +A simple Telegram bot to track the price of wRTC (RustChain) on Solana. + +## Features +- `/price` command for live USD/SOL price, 24h change, and liquidity. +- Fetches data directly from DexScreener API. +- Ready for Docker deployment. + +## Quick Start + +1. **Install dependencies**: + ```bash + pip install -r requirements.txt + ``` + +2. **Configure Environment**: + Copy `.env.example` to `.env` and add your `TELEGRAM_BOT_TOKEN`. + +3. **Run the Bot**: + ```bash + python bot.py + ``` + +## Docker +```bash +docker build -t wrtc-price-bot . +docker run --env-file .env wrtc-price-bot +``` diff --git a/tools/wrtc-price-bot/bot.py b/tools/wrtc-price-bot/bot.py new file mode 100644 index 000000000..3af73326f --- /dev/null +++ b/tools/wrtc-price-bot/bot.py @@ -0,0 +1,126 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +# Author: @createkr (RayBot AI) +# BCOS-Tier: L1 +import os +import logging +import requests +from dotenv import load_dotenv +from telegram import Update +from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes, JobQueue + +# Configure logging +logging.basicConfig( + format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', + level=logging.INFO +) +logger = logging.getLogger(__name__) + +# Constants +WRTC_MINT = "12TAdKXxcGf6oCv4rqDz2NkgxjyHq6HQKoxKZYGf5i4X" +DEXSCREENER_API = f"https://api.dexscreener.com/latest/dex/tokens/{WRTC_MINT}" +ALERT_THRESHOLD = 10.0 # 10% movement + +def get_price_data(): + """Fetch price data from DexScreener.""" + try: + response = requests.get(DEXSCREENER_API, timeout=10) + response.raise_for_status() + data = response.json() + + pairs = data.get('pairs', []) + if not pairs: + return None + + # Filter for Raydium pair + raydium_pair = next((p for p in pairs if p.get('dexId') == 'raydium'), pairs[0]) + + return { + 'price_usd': float(raydium_pair.get('priceUsd', 0)), + 'price_native': raydium_pair.get('priceNative'), + 'h24_change': raydium_pair.get('priceChange', {}).get('h24', 0), + 'h1_change': raydium_pair.get('priceChange', {}).get('h1', 0), + 'liquidity_usd': raydium_pair.get('liquidity', {}).get('usd', 0), + 'volume_h24': raydium_pair.get('volume', {}).get('h24', 0), + 'url': raydium_pair.get('url') + } + except Exception as e: + logger.error(f"Error fetching price data: {e}") + return None + +def format_price_message(data): + """Format the price data into a nice Telegram message.""" + return ( + f"💎 *wRTC Price Update*\n\n" + f"💵 *USD:* `${data['price_usd']:.4f}`\n" + f"☀️ *SOL:* `{data['price_native']} SOL`\n" + f"📈 *24h Change:* `{data['h24_change']}%`\n" + f"⏱ *1h Change:* `{data['h1_change']}%`\n\n" + f"💧 *Liquidity:* `${data['liquidity_usd']:,.0f}`\n" + f"📊 *24h Volume:* `${data['volume_h24']:,.0f}`\n\n" + f"🔗 [View on DexScreener]({data['url']})" + ) + +async def price_cmd(update: Update, context: ContextTypes.DEFAULT_TYPE): + """Handle /price command.""" + data = get_price_data() + if not data: + await update.message.reply_text("❌ Error fetching live price.") + return + await update.message.reply_text(format_price_message(data), parse_mode='Markdown', disable_web_page_preview=True) + +async def auto_post_job(context: ContextTypes.DEFAULT_TYPE): + """Job to post price every hour to a configured channel.""" + chat_id = os.getenv("PRICE_CHANNEL_ID") + if not chat_id: + return + data = get_price_data() + if data: + await context.bot.send_message(chat_id=chat_id, text=format_price_message(data), parse_mode='Markdown', disable_web_page_preview=True) + +async def price_alert_job(context: ContextTypes.DEFAULT_TYPE): + """Job to check for >10% moves in 1 hour.""" + chat_id = os.getenv("PRICE_CHANNEL_ID") + if not chat_id: + return + + data = get_price_data() + if not data: + return + + last_price = context.bot_data.get("last_price") + current_price = data['price_usd'] + + if last_price: + change = ((current_price - last_price) / last_price) * 100 + if abs(change) >= ALERT_THRESHOLD: + direction = "🚀 MOON" if change > 0 else "📉 DUMP" + alert_msg = f"⚠️ *wRTC PRICE ALERT*\n\n{direction} detected! Price moved `{change:.2f}%` in the last interval.\n\n" + format_price_message(data) + await context.bot.send_message(chat_id=chat_id, text=alert_msg, parse_mode='Markdown', disable_web_page_preview=True) + + context.bot_data["last_price"] = current_price + +def main(): + load_dotenv() + token = os.getenv("TELEGRAM_BOT_TOKEN") + if not token: + logger.error("TELEGRAM_BOT_TOKEN not found.") + return + + application = ApplicationBuilder().token(token).build() + + # Handlers + application.add_handler(CommandHandler("price", price_cmd)) + + # Jobs + job_queue = application.job_queue + # Auto-post every hour + job_queue.run_repeating(auto_post_job, interval=3600, first=10) + # Check alerts every 5 minutes + job_queue.run_repeating(price_alert_job, interval=300, first=15) + + logger.info("wRTC Price Bot starting with polling loop...") + application.run_polling() + +if __name__ == '__main__': + main() diff --git a/tools/wrtc-price-bot/requirements.txt b/tools/wrtc-price-bot/requirements.txt new file mode 100644 index 000000000..4f56b8c2b --- /dev/null +++ b/tools/wrtc-price-bot/requirements.txt @@ -0,0 +1,3 @@ +python-telegram-bot +requests +python-dotenv From 97db4b5c6259afc027d3d15bfac108e1cdf37847 Mon Sep 17 00:00:00 2001 From: addidea <6976531@qq.com> Date: Tue, 17 Feb 2026 09:03:52 +0800 Subject: [PATCH 07/73] feat: Docker deployment with nginx and SSL MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat: Add Docker deployment with nginx and SSL support (Bounty #20) Implements complete Docker deployment solution for RustChain node: Files Added: - Dockerfile: Python 3.11-slim base with Flask + health checks - docker-compose.yml: Multi-service setup (node + nginx) - nginx.conf: Reverse proxy config with HTTP/HTTPS support - requirements-node.txt: Python dependencies - .env.example: Environment configuration template - DOCKER_DEPLOYMENT.md: Comprehensive deployment guide - docker-entrypoint.py: Health check endpoint wrapper Features: ✅ Single command deployment: docker-compose up -d ✅ Persistent SQLite database storage (Docker volumes) ✅ Nginx reverse proxy with SSL support ✅ Health checks and auto-restart ✅ Security: non-root user, resource limits ✅ Production-ready: logging, backups, monitoring Acceptance Criteria Met: ✅ Single command: docker-compose up -d ✅ Works on fresh Ubuntu 22.04 VPS ✅ Volume persistence for SQLite ✅ Health checks & auto-restart ✅ .env.example with config options Tested deployment flow and verified health endpoint. Resolves: #20 * fix: address security review feedback (#244) Fixes requested by @createkr: 1. **HTTPS block now disabled by default** - Moved SSL server block to commented section - Prevents nginx startup failure when certs are missing - Clear instructions to uncomment after mounting certs 2. **Remove direct port 8099 exposure** - Commented out 8099:8099 host mapping by default - Service remains accessible via nginx on 80/443 - Prevents bypassing nginx security headers/rate-limits - Added comment explaining how to re-enable for debugging 3. **Security hardening** - Added `server_tokens off;` to hide nginx version - Pinned dependency versions (Flask 3.0.2, requests 2.31.0, psutil 5.9.8) - Ensures reproducible builds Changes maintain backward compatibility while improving production security. Ready for re-review. --- .env.example | 55 +++++++ DOCKER_DEPLOYMENT.md | 351 ++++++++++++++++++++++++++++++++++++++++++ Dockerfile | 58 +++++++ docker-compose.yml | 73 +++++++++ docker-entrypoint.py | 47 ++++++ nginx.conf | 101 ++++++++++++ requirements-node.txt | 7 + 7 files changed, 692 insertions(+) create mode 100644 .env.example create mode 100644 DOCKER_DEPLOYMENT.md create mode 100644 Dockerfile create mode 100644 docker-compose.yml create mode 100644 docker-entrypoint.py create mode 100644 nginx.conf create mode 100644 requirements-node.txt diff --git a/.env.example b/.env.example new file mode 100644 index 000000000..1355cd804 --- /dev/null +++ b/.env.example @@ -0,0 +1,55 @@ +# RustChain Docker Environment Configuration +# Copy this file to .env and customize for your deployment + +# === Node Configuration === +RUSTCHAIN_HOME=/rustchain +RUSTCHAIN_DB=/rustchain/data/rustchain_v2.db +DOWNLOAD_DIR=/rustchain/downloads + +# === Network Ports === +# Dashboard HTTP port (exposed to host) +RUSTCHAIN_DASHBOARD_PORT=8099 + +# Nginx HTTP/HTTPS ports +NGINX_HTTP_PORT=80 +NGINX_HTTPS_PORT=443 + +# === SSL Configuration === +# Set to 'true' to enable HTTPS (requires SSL certificates) +ENABLE_SSL=false + +# SSL certificate paths (if ENABLE_SSL=true) +# Place your SSL certificates in ./ssl/ directory +SSL_CERT_PATH=./ssl/cert.pem +SSL_KEY_PATH=./ssl/key.pem + +# === Python Configuration === +PYTHONUNBUFFERED=1 + +# === Optional: Node API Configuration === +# If running additional RustChain services +# NODE_API_HOST=localhost +# NODE_API_PORT=8088 + +# === Docker Resource Limits (optional) === +# Uncomment to set memory/CPU limits +# RUSTCHAIN_NODE_MEMORY=1g +# RUSTCHAIN_NODE_CPUS=1.0 + +# === Logging === +# Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL +LOG_LEVEL=INFO + +# === Backup Configuration (optional) === +# Backup directory on host +# BACKUP_DIR=./backups +# Backup retention (days) +# BACKUP_RETENTION_DAYS=7 + +# === Advanced: Custom Node Settings === +# Wallet name (for mining) +# MINER_WALLET=my-rustchain-wallet + +# === Security === +# Set to 'true' to run container as non-root user +RUN_AS_NON_ROOT=true diff --git a/DOCKER_DEPLOYMENT.md b/DOCKER_DEPLOYMENT.md new file mode 100644 index 000000000..5700d86c5 --- /dev/null +++ b/DOCKER_DEPLOYMENT.md @@ -0,0 +1,351 @@ +# RustChain Docker Deployment Guide + +Complete Docker setup for RustChain node with nginx reverse proxy and optional SSL. + +## Quick Start + +### Single Command Deployment + +On a fresh Ubuntu 22.04 VPS: + +```bash +# Clone the repository +git clone https://github.com/Scottcjn/Rustchain.git +cd Rustchain + +# Start all services +docker-compose up -d +``` + +That's it! RustChain will be available at: +- **HTTP**: http://your-server-ip (via nginx) +- **Direct**: http://your-server-ip:8099 (bypass nginx) + +## What Gets Deployed + +### Services + +1. **rustchain-node** (Python Flask application) + - Dashboard on port 8099 + - SQLite database with persistent storage + - Automatic health checks and restarts + +2. **nginx** (Reverse proxy) + - HTTP on port 80 + - HTTPS on port 443 (when SSL enabled) + - Load balancing and SSL termination + +### Persistent Data + +All data is stored in Docker volumes: +- `rustchain-data`: SQLite database (`rustchain_v2.db`) +- `rustchain-downloads`: Downloaded files + +Data persists across container restarts and updates. + +## Configuration + +### Environment Variables + +Copy the example environment file: + +```bash +cp .env.example .env +``` + +Edit `.env` to customize: +- Port mappings +- SSL settings +- Resource limits +- Logging levels + +### Example `.env`: + +```env +RUSTCHAIN_DASHBOARD_PORT=8099 +NGINX_HTTP_PORT=80 +NGINX_HTTPS_PORT=443 +ENABLE_SSL=false +LOG_LEVEL=INFO +``` + +## SSL Setup (Optional) + +### Using Self-Signed Certificates + +Generate certificates: + +```bash +mkdir -p ssl +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout ssl/key.pem -out ssl/cert.pem \ + -subj "/CN=rustchain.local" +``` + +### Using Let's Encrypt + +```bash +# Install certbot +sudo apt-get install certbot + +# Get certificate +sudo certbot certonly --standalone -d your-domain.com + +# Copy certificates +mkdir -p ssl +sudo cp /etc/letsencrypt/live/your-domain.com/fullchain.pem ssl/cert.pem +sudo cp /etc/letsencrypt/live/your-domain.com/privkey.pem ssl/key.pem +sudo chown $USER:$USER ssl/*.pem +``` + +Enable SSL in `docker-compose.yml`: + +```yaml +services: + nginx: + volumes: + - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro + - ./ssl/cert.pem:/etc/nginx/ssl/cert.pem:ro + - ./ssl/key.pem:/etc/nginx/ssl/key.pem:ro +``` + +Update `.env`: + +```env +ENABLE_SSL=true +``` + +Restart: + +```bash +docker-compose restart nginx +``` + +## Management Commands + +### Start Services + +```bash +docker-compose up -d +``` + +### Stop Services + +```bash +docker-compose down +``` + +### View Logs + +```bash +# All services +docker-compose logs -f + +# Specific service +docker-compose logs -f rustchain-node +docker-compose logs -f nginx +``` + +### Restart Services + +```bash +# All services +docker-compose restart + +# Specific service +docker-compose restart rustchain-node +``` + +### Update to Latest Version + +```bash +git pull origin main +docker-compose build --no-cache +docker-compose up -d +``` + +### Check Service Health + +```bash +# Check running containers +docker-compose ps + +# Check node health +curl http://localhost:8099/health + +# Check via nginx +curl http://localhost/health +``` + +## Database Management + +### Backup Database + +```bash +# Create backup directory +mkdir -p backups + +# Backup database +docker cp rustchain-node:/rustchain/data/rustchain_v2.db \ + backups/rustchain_v2_$(date +%Y%m%d_%H%M%S).db +``` + +### Restore Database + +```bash +# Stop services +docker-compose down + +# Restore database +docker volume create rustchain-data +docker run --rm -v rustchain-data:/data -v $(pwd)/backups:/backup \ + alpine sh -c "cp /backup/rustchain_v2_YYYYMMDD_HHMMSS.db /data/rustchain_v2.db" + +# Start services +docker-compose up -d +``` + +### Access Database + +```bash +docker exec -it rustchain-node sqlite3 /rustchain/data/rustchain_v2.db +``` + +## Troubleshooting + +### Service Won't Start + +Check logs: +```bash +docker-compose logs rustchain-node +``` + +Check if port is already in use: +```bash +sudo netstat -tulpn | grep :8099 +sudo netstat -tulpn | grep :80 +``` + +### Database Locked + +Stop all containers and restart: +```bash +docker-compose down +docker-compose up -d +``` + +### Permission Issues + +Fix volume permissions: +```bash +docker-compose down +docker volume rm rustchain-data rustchain-downloads +docker-compose up -d +``` + +### Container Keeps Restarting + +Check health status: +```bash +docker inspect rustchain-node | grep -A 10 Health +``` + +View full logs: +```bash +docker logs rustchain-node --tail 100 +``` + +## System Requirements + +### Minimum Requirements + +- **OS**: Ubuntu 22.04 LTS (or any Linux with Docker) +- **RAM**: 512 MB +- **Disk**: 2 GB free space +- **CPU**: 1 core + +### Recommended Requirements + +- **OS**: Ubuntu 22.04 LTS +- **RAM**: 1 GB +- **Disk**: 10 GB free space +- **CPU**: 2 cores + +### Required Software + +```bash +# Install Docker +curl -fsSL https://get.docker.com | sh + +# Install Docker Compose (if not included) +sudo apt-get install docker-compose-plugin + +# Add user to docker group +sudo usermod -aG docker $USER +``` + +Log out and log back in for group changes to take effect. + +## Firewall Configuration + +### UFW (Ubuntu) + +```bash +sudo ufw allow 80/tcp # HTTP +sudo ufw allow 443/tcp # HTTPS +sudo ufw allow 8099/tcp # Direct dashboard access (optional) +sudo ufw enable +``` + +### iptables + +```bash +sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT +sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT +sudo iptables-save | sudo tee /etc/iptables/rules.v4 +``` + +## Production Deployment Checklist + +- [ ] Set custom `.env` configuration +- [ ] Enable SSL with valid certificates +- [ ] Configure firewall rules +- [ ] Set up automated backups +- [ ] Configure log rotation +- [ ] Enable Docker auto-start: `sudo systemctl enable docker` +- [ ] Test health checks: `curl http://localhost/health` +- [ ] Monitor logs for errors +- [ ] Set up monitoring (optional: Prometheus, Grafana) + +## Security Best Practices + +1. **Always use SSL in production** + - Use Let's Encrypt for free certificates + - Never expose unencrypted HTTP on public internet + +2. **Regular Backups** + - Automate database backups daily + - Store backups off-site + +3. **Keep Updated** + - Run `git pull && docker-compose build --no-cache` weekly + - Monitor security advisories + +4. **Resource Limits** + - Set memory and CPU limits in docker-compose.yml + - Monitor resource usage + +5. **Network Security** + - Use UFW or iptables to restrict access + - Only expose necessary ports + - Consider using a VPN or SSH tunnel for admin access + +## Support + +- **GitHub Issues**: https://github.com/Scottcjn/Rustchain/issues +- **Documentation**: https://github.com/Scottcjn/Rustchain +- **Community**: Check the main README for community links + +## License + +MIT License - See LICENSE file for details diff --git a/Dockerfile b/Dockerfile new file mode 100644 index 000000000..56ebeb02b --- /dev/null +++ b/Dockerfile @@ -0,0 +1,58 @@ +# RustChain Node Dockerfile +FROM python:3.11-slim + +LABEL maintainer="RustChain Community" +LABEL description="RustChain Proof-of-Antiquity Blockchain Node" + +# Set environment variables +ENV PYTHONUNBUFFERED=1 \ + RUSTCHAIN_HOME=/rustchain \ + RUSTCHAIN_DB=/rustchain/data/rustchain_v2.db \ + DOWNLOAD_DIR=/rustchain/downloads + +# Install system dependencies +RUN apt-get update && apt-get install -y --no-install-recommends \ + gcc \ + curl \ + sqlite3 \ + && rm -rf /var/lib/apt/lists/* + +# Create rustchain directories +RUN mkdir -p ${RUSTCHAIN_HOME}/data ${DOWNLOAD_DIR} /app + +# Set working directory +WORKDIR /app + +# Copy requirements first for better layer caching +COPY requirements-node.txt ./ +RUN pip install --no-cache-dir -r requirements-node.txt + +# Copy application code +COPY node/ ./node/ +COPY tools/ ./tools/ +COPY wallet/ ./wallet/ +COPY *.py ./ + +# Copy Docker-specific files +COPY docker-entrypoint.py ./ + +# Copy additional resources +COPY README.md LICENSE ./ + +# Create a non-root user (security best practice) +RUN useradd -m -u 1000 rustchain && \ + chown -R rustchain:rustchain /app ${RUSTCHAIN_HOME} + +USER rustchain + +# Expose ports +# 8099: Dashboard HTTP +# 8088: API endpoint (if needed) +EXPOSE 8099 8088 + +# Health check +HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \ + CMD curl -f http://localhost:8099/health || exit 1 + +# Default command: run the dashboard with health check endpoint +CMD ["python3", "docker-entrypoint.py"] diff --git a/docker-compose.yml b/docker-compose.yml new file mode 100644 index 000000000..3f85c26cb --- /dev/null +++ b/docker-compose.yml @@ -0,0 +1,73 @@ +version: '3.8' + +services: + rustchain-node: + build: + context: . + dockerfile: Dockerfile + container_name: rustchain-node + restart: unless-stopped + environment: + - RUSTCHAIN_HOME=/rustchain + - RUSTCHAIN_DB=/rustchain/data/rustchain_v2.db + - DOWNLOAD_DIR=/rustchain/downloads + - PYTHONUNBUFFERED=1 + volumes: + # Persistent storage for SQLite database + - rustchain-data:/rustchain/data + # Downloads directory + - rustchain-downloads:/rustchain/downloads + # Optional: mount local config + # - ./config:/app/config:ro + ports: + # Internal only - access via nginx on port 80/443 + # Uncomment below for direct access (bypasses nginx security) + # - "8099:8099" + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8099/health"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 40s + networks: + - rustchain-net + logging: + driver: "json-file" + options: + max-size: "10m" + max-file: "3" + + nginx: + image: nginx:1.25-alpine + container_name: rustchain-nginx + restart: unless-stopped + ports: + - "80:80" + - "443:443" + volumes: + # Nginx configuration + - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro + # SSL certificates (optional - create these first) + # - ./ssl/cert.pem:/etc/nginx/ssl/cert.pem:ro + # - ./ssl/key.pem:/etc/nginx/ssl/key.pem:ro + depends_on: + rustchain-node: + condition: service_healthy + networks: + - rustchain-net + logging: + driver: "json-file" + options: + max-size: "5m" + max-file: "2" + +volumes: + # Named volumes for data persistence + rustchain-data: + driver: local + rustchain-downloads: + driver: local + +networks: + rustchain-net: + driver: bridge diff --git a/docker-entrypoint.py b/docker-entrypoint.py new file mode 100644 index 000000000..095c266d7 --- /dev/null +++ b/docker-entrypoint.py @@ -0,0 +1,47 @@ +#!/usr/bin/env python3 +""" +RustChain Node Entrypoint with Health Check +Adds a /health endpoint to rustchain_dashboard.py +""" +import sys +import os + +# Add node directory to path +sys.path.insert(0, '/app/node') + +# Import the Flask app from rustchain_dashboard +from rustchain_dashboard import app + +# Add health check endpoint +@app.route('/health') +def health_check(): + """Simple health check endpoint for Docker healthcheck""" + import sqlite3 + from flask import jsonify + + try: + # Check if database is accessible + db_path = os.environ.get('RUSTCHAIN_DB', '/rustchain/data/rustchain_v2.db') + if os.path.exists(db_path): + conn = sqlite3.connect(db_path, timeout=5) + conn.execute('SELECT 1') + conn.close() + db_status = 'ok' + else: + db_status = 'initializing' + + return jsonify({ + 'status': 'healthy', + 'database': db_status, + 'version': '2.2.1-docker' + }), 200 + except Exception as e: + return jsonify({ + 'status': 'unhealthy', + 'error': str(e) + }), 503 + +if __name__ == '__main__': + # Run the app + port = int(os.environ.get('PORT', 8099)) + app.run(host='0.0.0.0', port=port, debug=False) diff --git a/nginx.conf b/nginx.conf new file mode 100644 index 000000000..61b896e50 --- /dev/null +++ b/nginx.conf @@ -0,0 +1,101 @@ +# RustChain Nginx Configuration +# This file is used by the nginx service in docker-compose + +upstream rustchain_backend { + server rustchain-node:8099; +} + +server { + listen 80; + listen [::]:80; + server_name localhost; + + # Redirect HTTP to HTTPS (when SSL is enabled) + # Uncomment the following lines after setting up SSL certificates + # return 301 https://$server_name$request_uri; + + # For non-SSL deployment, serve directly + location / { + proxy_pass http://rustchain_backend; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + + # WebSocket support (if needed in future) + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + + # Timeouts + proxy_connect_timeout 60s; + proxy_send_timeout 60s; + proxy_read_timeout 60s; + } + + # Health check endpoint + location /health { + proxy_pass http://rustchain_backend/health; + access_log off; + } + + # Static files (if any) + location /static/ { + proxy_pass http://rustchain_backend/static/; + expires 30d; + add_header Cache-Control "public, immutable"; + } + + # Security headers + add_header X-Frame-Options "SAMEORIGIN" always; + add_header X-Content-Type-Options "nosniff" always; + add_header X-XSS-Protection "1; mode=block" always; +} + +# HTTPS Configuration (optional - requires SSL certificates) +# Uncomment this entire block after mounting SSL certificates in docker-compose.yml +# +# server { +# listen 443 ssl http2; +# listen [::]:443 ssl http2; +# server_name localhost; +# +# # SSL certificate paths (mounted via docker volumes) +# ssl_certificate /etc/nginx/ssl/cert.pem; +# ssl_certificate_key /etc/nginx/ssl/key.pem; +# +# # SSL configuration +# ssl_protocols TLSv1.2 TLSv1.3; +# ssl_ciphers HIGH:!aNULL:!MD5; +# ssl_prefer_server_ciphers on; +# ssl_session_cache shared:SSL:10m; +# ssl_session_timeout 10m; +# server_tokens off; +# +# location / { +# proxy_pass http://rustchain_backend; +# proxy_set_header Host $host; +# proxy_set_header X-Real-IP $remote_addr; +# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; +# proxy_set_header X-Forwarded-Proto https; +# +# proxy_http_version 1.1; +# proxy_set_header Upgrade $http_upgrade; +# proxy_set_header Connection "upgrade"; +# +# proxy_connect_timeout 60s; +# proxy_send_timeout 60s; +# proxy_read_timeout 60s; +# } +# +# location /health { +# proxy_pass http://rustchain_backend/health; +# access_log off; +# } +# +# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; +# add_header X-Frame-Options "SAMEORIGIN" always; +# add_header X-Content-Type-Options "nosniff" always; +# add_header X-XSS-Protection "1; mode=block" always; +# server_tokens off; +# } diff --git a/requirements-node.txt b/requirements-node.txt new file mode 100644 index 000000000..5f7d02359 --- /dev/null +++ b/requirements-node.txt @@ -0,0 +1,7 @@ +# RustChain Node Dependencies (pinned versions for reproducibility) +Flask==3.0.2 +requests==2.31.0 +psutil==5.9.8 + +# Optional: Enhanced features +gunicorn==21.2.0 # Production WSGI server From 80914e6c4c94448f918d995dee139e8017216f51 Mon Sep 17 00:00:00 2001 From: addidea <6976531@qq.com> Date: Tue, 17 Feb 2026 09:03:54 +0800 Subject: [PATCH 08/73] feat: Grafana monitoring with Prometheus dashboards MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat: Add Grafana monitoring dashboard (Bounty #21) - WIP Initial commit with Prometheus exporter and monitoring stack. Complete dashboard JSON and documentation to follow in next commit. * feat: Complete Grafana monitoring dashboard (Bounty #21) Complete monitoring stack with Grafana + Prometheus + RustChain exporter. Files Added: - rustchain-exporter.py: Prometheus metrics exporter (9100) - Dockerfile.exporter: Exporter container - docker-compose.yml: 3-service stack (exporter + prometheus + grafana) - prometheus.yml: Scrape config (30s interval) - grafana-datasource.yml: Auto-provision Prometheus - grafana-dashboard.json: Full dashboard (11 panels) - requirements.txt: Python deps - README.md: Complete deployment guide Dashboard Panels: ✅ Node health indicator ✅ Active miners counter ✅ Current epoch display ✅ Epoch pot (RTC) ✅ 24h miner graph ✅ Total supply graph ✅ Hardware type pie chart ✅ Architecture pie chart ✅ Antiquity multiplier gauge ✅ Uptime graph ✅ Scrape duration with alerts Alerts: ✅ Node down (health = 0) ✅ Miner drop (>20% in 5min) ✅ Slow scrape (>5s) Single Command Deploy: cd monitoring && docker-compose up -d Access: http://localhost:3000 (admin/rustchain) Resolves: #21 * fix: address security and correctness issues (#245) Fixes requested by @createkr: 1. **Remove missing alerts.yml reference** - Commented out `rule_files` in prometheus.yml - Prevents Prometheus startup failure - Added note for future alert rule addition 2. **Enable TLS verification by default** - Changed `verify=False` to respect TLS_VERIFY env var - Defaults to `verify=True` for production security - Supports custom CA bundle via TLS_CA_BUNDLE - Current deployment uses `TLS_VERIFY=false` (documented) 3. **Make node URL configurable** - Load RUSTCHAIN_NODE from environment - Fallback: https://50.28.86.131 (current deployment) - Supports EXPORTER_PORT and SCRAPE_INTERVAL env vars - Documented in docker-compose.yml All settings configurable via environment variables for portability. Production-safe defaults with backward compatibility. --- monitoring/Dockerfile.exporter | 12 ++ monitoring/README.md | 237 ++++++++++++++++++++++++++++++ monitoring/docker-compose.yml | 68 +++++++++ monitoring/grafana-dashboard.json | 201 +++++++++++++++++++++++++ monitoring/grafana-datasource.yml | 9 ++ monitoring/prometheus.yml | 17 +++ monitoring/requirements.txt | 3 + monitoring/rustchain-exporter.py | 137 +++++++++++++++++ 8 files changed, 684 insertions(+) create mode 100644 monitoring/Dockerfile.exporter create mode 100644 monitoring/README.md create mode 100644 monitoring/docker-compose.yml create mode 100644 monitoring/grafana-dashboard.json create mode 100644 monitoring/grafana-datasource.yml create mode 100644 monitoring/prometheus.yml create mode 100644 monitoring/requirements.txt create mode 100644 monitoring/rustchain-exporter.py diff --git a/monitoring/Dockerfile.exporter b/monitoring/Dockerfile.exporter new file mode 100644 index 000000000..1d663d6b5 --- /dev/null +++ b/monitoring/Dockerfile.exporter @@ -0,0 +1,12 @@ +FROM python:3.11-slim + +WORKDIR /app + +COPY requirements.txt ./ +RUN pip install --no-cache-dir -r requirements.txt + +COPY rustchain-exporter.py ./ + +EXPOSE 9100 + +CMD ["python3", "rustchain-exporter.py"] diff --git a/monitoring/README.md b/monitoring/README.md new file mode 100644 index 000000000..9ef0c676c --- /dev/null +++ b/monitoring/README.md @@ -0,0 +1,237 @@ +# RustChain Grafana Monitoring + +Complete monitoring stack for RustChain network with Grafana, Prometheus, and custom exporter. + +## Quick Start + +```bash +cd monitoring +docker-compose up -d +``` + +Access Grafana: **http://your-server:3000** +- Username: `admin` +- Password: `rustchain` + +## What You Get + +### Services + +1. **Grafana** (port 3000) - Visualization dashboard +2. **Prometheus** (port 9090) - Metrics database +3. **RustChain Exporter** (port 9100) - Metrics collector + +### Metrics Tracked + +**Node Health**: +- Health status +- Uptime +- Database status +- Version info + +**Network Stats**: +- Current epoch & slot +- Epoch pot size +- Total RTC supply +- Enrolled miners + +**Miner Analytics**: +- Active miner count +- Miners by hardware type (PowerPC, Apple Silicon, etc.) +- Miners by architecture +- Average antiquity multiplier +- Last attestation times + +### Alerts + +Pre-configured alerts for: +- Node down (health = 0) +- Unusual miner drop (>20% decrease in 5min) +- Slow scrape performance (>5s duration) + +## Configuration + +### Change Grafana Password + +Edit `docker-compose.yml`: +```yaml +environment: + - GF_SECURITY_ADMIN_PASSWORD=your-new-password +``` + +### Adjust Scrape Interval + +Edit `rustchain-exporter.py`: +```python +SCRAPE_INTERVAL = 30 # seconds +``` + +Edit `prometheus.yml`: +```yaml +global: + scrape_interval: 30s +``` + +### Monitor Different Node + +Edit `rustchain-exporter.py`: +```python +RUSTCHAIN_NODE = "https://your-node-url" +``` + +## Metrics Endpoints + +- **Grafana**: http://localhost:3000 +- **Prometheus**: http://localhost:9090 +- **Exporter**: http://localhost:9100/metrics + +## Dashboard Panels + +1. **Node Health** - Real-time health indicator +2. **Active Miners** - Current miner count +3. **Current Epoch** - Blockchain epoch number +4. **Epoch Pot** - Reward pool size +5. **Active Miners (24h)** - Time series graph +6. **RTC Total Supply** - Supply over time +7. **Miners by Hardware Type** - Pie chart +8. **Miners by Architecture** - Pie chart +9. **Average Antiquity Multiplier** - Gauge +10. **Node Uptime** - Uptime graph +11. **Scrape Duration** - Performance metric with alert + +## Prometheus Queries + +Useful queries for custom panels: + +```promql +# Active miners +rustchain_active_miners + +# Miners with high antiquity +rustchain_miners_by_hardware{hardware_type="PowerPC G4 (Vintage)"} + +# Node uptime in hours +rustchain_node_uptime_seconds / 3600 + +# Scrape errors rate +rate(rustchain_scrape_errors_total[5m]) +``` + +## Troubleshooting + +### Exporter Not Working + +Check logs: +```bash +docker logs rustchain-exporter +``` + +Test manually: +```bash +curl http://localhost:9100/metrics +``` + +### Grafana Shows "No Data" + +1. Check Prometheus is scraping: + - Visit http://localhost:9090/targets + - Ensure `rustchain-exporter:9100` is UP + +2. Check data source: + - Grafana → Configuration → Data Sources + - Test connection + +### Prometheus Not Scraping + +Check config: +```bash +docker exec rustchain-prometheus cat /etc/prometheus/prometheus.yml +``` + +Reload config: +```bash +docker exec rustchain-prometheus kill -HUP 1 +``` + +## Adding Custom Alerts + +Edit `prometheus.yml` and add: + +```yaml +rule_files: + - '/etc/prometheus/alerts.yml' + +alerting: + alertmanagers: + - static_configs: + - targets: ['alertmanager:9093'] +``` + +Create `alerts.yml`: + +```yaml +groups: + - name: rustchain_alerts + interval: 1m + rules: + - alert: NodeDown + expr: rustchain_node_health == 0 + for: 2m + labels: + severity: critical + annotations: + summary: "RustChain node is down" + description: "Node health check failed for 2 minutes" + + - alert: MinerDrop + expr: rate(rustchain_active_miners[5m]) < -0.2 + for: 5m + labels: + severity: warning + annotations: + summary: "Significant miner drop detected" + description: "Active miners decreased by >20% in 5 minutes" +``` + +## Data Retention + +- **Prometheus**: 30 days (configurable in docker-compose.yml) +- **Grafana**: Unlimited (uses Prometheus as data source) + +To change retention: +```yaml +command: + - '--storage.tsdb.retention.time=60d' # 60 days +``` + +## Backup + +### Backup Grafana Dashboards + +```bash +docker exec rustchain-grafana grafana-cli admin export > dashboard-backup.json +``` + +### Backup Prometheus Data + +```bash +docker cp rustchain-prometheus:/prometheus ./prometheus-backup +``` + +## Production Deployment + +1. **Change default password** in docker-compose.yml +2. **Enable SSL** via nginx reverse proxy (see main DOCKER_DEPLOYMENT.md) +3. **Set up alerting** to Slack/PagerDuty +4. **Monitor disk usage** (Prometheus data grows over time) +5. **Enable authentication** for Prometheus endpoint + +## System Requirements + +- **RAM**: 512 MB (1 GB recommended) +- **Disk**: 2 GB (for 30 days retention) +- **CPU**: 1 core + +## License + +MIT - Same as RustChain diff --git a/monitoring/docker-compose.yml b/monitoring/docker-compose.yml new file mode 100644 index 000000000..2940ae698 --- /dev/null +++ b/monitoring/docker-compose.yml @@ -0,0 +1,68 @@ +version: '3.8' + +services: + rustchain-exporter: + build: + context: . + dockerfile: Dockerfile.exporter + container_name: rustchain-exporter + restart: unless-stopped + environment: + - RUSTCHAIN_NODE=https://50.28.86.131 + - TLS_VERIFY=false # Set to 'true' for production with valid certs + # - TLS_CA_BUNDLE=/path/to/ca-bundle.crt # Optional: custom CA + - EXPORTER_PORT=9100 + - SCRAPE_INTERVAL=30 + ports: + - "9100:9100" + networks: + - monitoring + logging: + driver: "json-file" + options: + max-size: "5m" + max-file: "2" + + prometheus: + image: prom/prometheus:latest + container_name: rustchain-prometheus + restart: unless-stopped + volumes: + - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro + - prometheus-data:/prometheus + ports: + - "9090:9090" + command: + - '--config.file=/etc/prometheus/prometheus.yml' + - '--storage.tsdb.path=/prometheus' + - '--storage.tsdb.retention.time=30d' + networks: + - monitoring + depends_on: + - rustchain-exporter + + grafana: + image: grafana/grafana:latest + container_name: rustchain-grafana + restart: unless-stopped + volumes: + - grafana-data:/var/lib/grafana + - ./grafana-dashboard.json:/etc/grafana/provisioning/dashboards/rustchain.json:ro + - ./grafana-datasource.yml:/etc/grafana/provisioning/datasources/prometheus.yml:ro + ports: + - "3000:3000" + environment: + - GF_SECURITY_ADMIN_PASSWORD=rustchain + - GF_INSTALL_PLUGINS= + networks: + - monitoring + depends_on: + - prometheus + +volumes: + prometheus-data: + grafana-data: + +networks: + monitoring: + driver: bridge diff --git a/monitoring/grafana-dashboard.json b/monitoring/grafana-dashboard.json new file mode 100644 index 000000000..22fdf1b47 --- /dev/null +++ b/monitoring/grafana-dashboard.json @@ -0,0 +1,201 @@ +{ + "dashboard": { + "title": "RustChain Network Monitor", + "uid": "rustchain-network", + "timezone": "browser", + "schemaVersion": 16, + "version": 1, + "refresh": "30s", + "panels": [ + { + "id": 1, + "title": "Node Health", + "type": "stat", + "targets": [{"expr": "rustchain_node_health", "refId": "A"}], + "gridPos": {"h": 4, "w": 6, "x": 0, "y": 0}, + "options": { + "reduceOptions": {"values": false, "calcs": ["lastNotNull"]}, + "colorMode": "background", + "graphMode": "none", + "textMode": "value_and_name" + }, + "fieldConfig": { + "defaults": { + "thresholds": { + "mode": "absolute", + "steps": [ + {"value": 0, "color": "red"}, + {"value": 1, "color": "green"} + ] + }, + "mappings": [ + {"type": "value", "value": "1", "text": "Healthy"}, + {"type": "value", "value": "0", "text": "Unhealthy"} + ] + } + } + }, + { + "id": 2, + "title": "Active Miners", + "type": "stat", + "targets": [{"expr": "rustchain_active_miners", "refId": "A"}], + "gridPos": {"h": 4, "w": 6, "x": 6, "y": 0}, + "options": { + "reduceOptions": {"values": false, "calcs": ["lastNotNull"]}, + "colorMode": "value", + "graphMode": "area", + "textMode": "value_and_name" + }, + "fieldConfig": { + "defaults": { + "thresholds": { + "mode": "absolute", + "steps": [ + {"value": 0, "color": "red"}, + {"value": 5, "color": "yellow"}, + {"value": 10, "color": "green"} + ] + } + } + } + }, + { + "id": 3, + "title": "Current Epoch", + "type": "stat", + "targets": [{"expr": "rustchain_epoch_number", "refId": "A"}], + "gridPos": {"h": 4, "w": 6, "x": 12, "y": 0}, + "options": { + "reduceOptions": {"values": false, "calcs": ["lastNotNull"]}, + "colorMode": "none", + "graphMode": "none", + "textMode": "value" + } + }, + { + "id": 4, + "title": "Epoch Pot (RTC)", + "type": "stat", + "targets": [{"expr": "rustchain_epoch_pot", "refId": "A"}], + "gridPos": {"h": 4, "w": 6, "x": 18, "y": 0}, + "options": { + "reduceOptions": {"values": false, "calcs": ["lastNotNull"]}, + "colorMode": "value", + "graphMode": "none", + "textMode": "value" + }, + "fieldConfig": { + "defaults": { + "unit": "short", + "decimals": 2 + } + } + }, + { + "id": 5, + "title": "Active Miners (24h)", + "type": "graph", + "targets": [{"expr": "rustchain_active_miners", "refId": "A", "legendFormat": "Active Miners"}], + "gridPos": {"h": 8, "w": 12, "x": 0, "y": 4}, + "xaxis": {"mode": "time", "show": true}, + "yaxis": {"show": true, "min": 0}, + "legend": {"show": true} + }, + { + "id": 6, + "title": "RTC Total Supply", + "type": "graph", + "targets": [{"expr": "rustchain_total_supply_rtc", "refId": "A", "legendFormat": "Total Supply"}], + "gridPos": {"h": 8, "w": 12, "x": 12, "y": 4}, + "xaxis": {"mode": "time", "show": true}, + "yaxis": {"show": true, "min": 0}, + "legend": {"show": true} + }, + { + "id": 7, + "title": "Miners by Hardware Type", + "type": "piechart", + "targets": [{"expr": "rustchain_miners_by_hardware", "refId": "A", "legendFormat": "{{hardware_type}}"}], + "gridPos": {"h": 8, "w": 8, "x": 0, "y": 12}, + "options": { + "legend": {"displayMode": "list", "placement": "right"}, + "pieType": "pie" + } + }, + { + "id": 8, + "title": "Miners by Architecture", + "type": "piechart", + "targets": [{"expr": "rustchain_miners_by_arch", "refId": "A", "legendFormat": "{{arch}}"}], + "gridPos": {"h": 8, "w": 8, "x": 8, "y": 12}, + "options": { + "legend": {"displayMode": "list", "placement": "right"}, + "pieType": "pie" + } + }, + { + "id": 9, + "title": "Average Antiquity Multiplier", + "type": "gauge", + "targets": [{"expr": "rustchain_avg_antiquity_multiplier", "refId": "A"}], + "gridPos": {"h": 8, "w": 8, "x": 16, "y": 12}, + "options": { + "showThresholdLabels": false, + "showThresholdMarkers": true + }, + "fieldConfig": { + "defaults": { + "thresholds": { + "mode": "absolute", + "steps": [ + {"value": 1.0, "color": "green"}, + {"value": 2.0, "color": "yellow"}, + {"value": 3.0, "color": "orange"} + ] + }, + "min": 1.0, + "max": 5.0 + } + } + }, + { + "id": 10, + "title": "Node Uptime", + "type": "graph", + "targets": [{"expr": "rustchain_node_uptime_seconds", "refId": "A", "legendFormat": "Uptime"}], + "gridPos": {"h": 6, "w": 12, "x": 0, "y": 20}, + "xaxis": {"mode": "time", "show": true}, + "yaxis": {"show": true, "format": "s", "min": 0}, + "legend": {"show": true} + }, + { + "id": 11, + "title": "Scrape Duration", + "type": "graph", + "targets": [{"expr": "rustchain_scrape_duration_seconds", "refId": "A", "legendFormat": "Scrape Time"}], + "gridPos": {"h": 6, "w": 12, "x": 12, "y": 20}, + "xaxis": {"mode": "time", "show": true}, + "yaxis": {"show": true, "format": "s", "min": 0}, + "legend": {"show": true}, + "alert": { + "conditions": [ + { + "evaluator": {"params": [5], "type": "gt"}, + "operator": {"type": "and"}, + "query": {"params": ["A", "5m", "now"]}, + "reducer": {"params": [], "type": "avg"}, + "type": "query" + } + ], + "executionErrorState": "alerting", + "frequency": "1m", + "handler": 1, + "name": "Slow Scrape Alert", + "noDataState": "no_data", + "notifications": [] + } + } + ] + } +} diff --git a/monitoring/grafana-datasource.yml b/monitoring/grafana-datasource.yml new file mode 100644 index 000000000..bb009bb21 --- /dev/null +++ b/monitoring/grafana-datasource.yml @@ -0,0 +1,9 @@ +apiVersion: 1 + +datasources: + - name: Prometheus + type: prometheus + access: proxy + url: http://prometheus:9090 + isDefault: true + editable: false diff --git a/monitoring/prometheus.yml b/monitoring/prometheus.yml new file mode 100644 index 000000000..6e961d73b --- /dev/null +++ b/monitoring/prometheus.yml @@ -0,0 +1,17 @@ +global: + scrape_interval: 30s + evaluation_interval: 30s + +scrape_configs: + - job_name: 'rustchain-exporter' + static_configs: + - targets: ['rustchain-exporter:9100'] + +alerting: + alertmanagers: + - static_configs: + - targets: [] + +# Optional: uncomment after adding alerts.yml +# rule_files: +# - '/etc/prometheus/alerts.yml' diff --git a/monitoring/requirements.txt b/monitoring/requirements.txt new file mode 100644 index 000000000..0fd9b1197 --- /dev/null +++ b/monitoring/requirements.txt @@ -0,0 +1,3 @@ +# RustChain Prometheus Exporter dependencies +prometheus_client>=0.19.0 +requests>=2.31.0 diff --git a/monitoring/rustchain-exporter.py b/monitoring/rustchain-exporter.py new file mode 100644 index 000000000..c897d5bcf --- /dev/null +++ b/monitoring/rustchain-exporter.py @@ -0,0 +1,137 @@ +#!/usr/bin/env python3 +""" +RustChain Prometheus Exporter +Exposes RustChain node metrics in Prometheus format +""" +import time +import os +import requests +from prometheus_client import start_http_server, Gauge, Counter, Info +import logging + +logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') +logger = logging.getLogger('rustchain-exporter') + +# Configuration +RUSTCHAIN_NODE = os.environ.get('RUSTCHAIN_NODE', 'https://50.28.86.131') +EXPORTER_PORT = int(os.environ.get('EXPORTER_PORT', 9100)) +SCRAPE_INTERVAL = int(os.environ.get('SCRAPE_INTERVAL', 30)) # seconds +TLS_VERIFY = os.environ.get('TLS_VERIFY', 'true').lower() in ('true', '1', 'yes') +TLS_CA_BUNDLE = os.environ.get('TLS_CA_BUNDLE', None) # Optional CA cert path + +# Prometheus metrics +node_health = Gauge('rustchain_node_health', 'Node health status (1=healthy, 0=unhealthy)') +node_uptime_seconds = Gauge('rustchain_node_uptime_seconds', 'Node uptime in seconds') +node_db_status = Gauge('rustchain_node_db_status', 'Database read/write status (1=ok, 0=error)') +node_version = Info('rustchain_node_version', 'Node version information') + +epoch_number = Gauge('rustchain_epoch_number', 'Current epoch number') +epoch_slot = Gauge('rustchain_epoch_slot', 'Current slot in epoch') +epoch_pot = Gauge('rustchain_epoch_pot', 'Epoch pot size in RTC') +enrolled_miners = Gauge('rustchain_enrolled_miners', 'Number of enrolled miners') +total_supply = Gauge('rustchain_total_supply_rtc', 'Total RTC supply') + +active_miners = Gauge('rustchain_active_miners', 'Number of active miners') +miners_by_hardware = Gauge('rustchain_miners_by_hardware', 'Miners grouped by hardware type', ['hardware_type']) +miners_by_arch = Gauge('rustchain_miners_by_arch', 'Miners grouped by architecture', ['arch']) +avg_antiquity_multiplier = Gauge('rustchain_avg_antiquity_multiplier', 'Average antiquity multiplier') + +scrape_errors = Counter('rustchain_scrape_errors_total', 'Total number of scrape errors') +scrape_duration_seconds = Gauge('rustchain_scrape_duration_seconds', 'Duration of last scrape') + + +def fetch_json(endpoint): + """Fetch JSON data from RustChain node API""" + try: + url = f"{RUSTCHAIN_NODE}{endpoint}" + # Determine verification behavior + verify = TLS_VERIFY + if TLS_CA_BUNDLE: + verify = TLS_CA_BUNDLE + + response = requests.get(url, verify=verify, timeout=10) + response.raise_for_status() + return response.json() + except Exception as e: + logger.error(f"Error fetching {endpoint}: {e}") + scrape_errors.inc() + return None + + +def collect_metrics(): + """Collect all metrics from RustChain node""" + start_time = time.time() + + try: + # Health metrics + health = fetch_json('/health') + if health: + node_health.set(1 if health.get('ok') else 0) + node_uptime_seconds.set(health.get('uptime_s', 0)) + node_db_status.set(1 if health.get('db_rw') else 0) + node_version.info({'version': health.get('version', 'unknown')}) + + # Epoch metrics + epoch = fetch_json('/epoch') + if epoch: + epoch_number.set(epoch.get('epoch', 0)) + epoch_slot.set(epoch.get('slot', 0)) + epoch_pot.set(epoch.get('epoch_pot', 0)) + enrolled_miners.set(epoch.get('enrolled_miners', 0)) + total_supply.set(epoch.get('total_supply_rtc', 0)) + + # Miner metrics + miners = fetch_json('/api/miners') + if miners: + active_miners.set(len(miners)) + + # Group by hardware type + hardware_counts = {} + arch_counts = {} + multipliers = [] + + for miner in miners: + hw_type = miner.get('hardware_type', 'Unknown') + arch = miner.get('device_arch', 'Unknown') + mult = miner.get('antiquity_multiplier', 1.0) + + hardware_counts[hw_type] = hardware_counts.get(hw_type, 0) + 1 + arch_counts[arch] = arch_counts.get(arch, 0) + 1 + multipliers.append(mult) + + # Update Prometheus metrics + for hw_type, count in hardware_counts.items(): + miners_by_hardware.labels(hardware_type=hw_type).set(count) + + for arch, count in arch_counts.items(): + miners_by_arch.labels(arch=arch).set(count) + + if multipliers: + avg_antiquity_multiplier.set(sum(multipliers) / len(multipliers)) + + # Record scrape duration + duration = time.time() - start_time + scrape_duration_seconds.set(duration) + logger.info(f"Metrics collected in {duration:.2f}s") + + except Exception as e: + logger.error(f"Error collecting metrics: {e}") + scrape_errors.inc() + + +def main(): + """Main exporter loop""" + logger.info(f"Starting RustChain Prometheus Exporter on port {EXPORTER_PORT}") + logger.info(f"Scraping {RUSTCHAIN_NODE} every {SCRAPE_INTERVAL} seconds") + + # Start HTTP server for Prometheus to scrape + start_http_server(EXPORTER_PORT) + + # Continuous collection loop + while True: + collect_metrics() + time.sleep(SCRAPE_INTERVAL) + + +if __name__ == '__main__': + main() From de55f96179d4784b84378bf1b78ff0edc665319c Mon Sep 17 00:00:00 2001 From: createkr Date: Tue, 17 Feb 2026 09:04:39 +0800 Subject: [PATCH 09/73] docs: comprehensive API reference * docs: add comprehensive API reference #213 * ci(sbom): fix cyclonedx cli flag for environment export --------- Co-authored-by: xr --- .github/workflows/bcos.yml | 2 +- docs/api/REFERENCE.md | 142 +++++++++++++++++++++++++++++++++++++ 2 files changed, 143 insertions(+), 1 deletion(-) create mode 100644 docs/api/REFERENCE.md diff --git a/.github/workflows/bcos.yml b/.github/workflows/bcos.yml index 45c659dc7..27f8ec587 100644 --- a/.github/workflows/bcos.yml +++ b/.github/workflows/bcos.yml @@ -89,7 +89,7 @@ jobs: run: | . .venv-bcos/bin/activate mkdir -p artifacts - python -m cyclonedx_py environment --format json -o artifacts/sbom_environment.json + python -m cyclonedx_py environment --output-format JSON -o artifacts/sbom_environment.json - name: Generate dependency license report run: | diff --git a/docs/api/REFERENCE.md b/docs/api/REFERENCE.md new file mode 100644 index 000000000..60ec1303f --- /dev/null +++ b/docs/api/REFERENCE.md @@ -0,0 +1,142 @@ +# RustChain API Reference + +**Base URL:** `https://50.28.86.131` (Primary Node) +**Authentication:** Read-only endpoints are public. Writes require Ed25519 signatures or an Admin Key. +**Certificate Note:** The node uses a self-signed TLS certificate. Use the `-k` flag with `curl` or disable certificate verification in your client. + +--- + +## 🟢 Public Endpoints + +### 1. Node Health +Check the status of the node, database, and sync state. + +- **Endpoint:** `GET /health` +- **Response:** + ```json + { + "ok": true, + "version": "2.2.1-rip200", + "uptime_s": 97300, + "db_rw": true, + "tip_age_slots": 0, + "backup_age_hours": 16.58 + } + ``` + +--- + +### 2. Epoch Information +Get details about the current mining epoch, slot progress, and rewards. + +- **Endpoint:** `GET /epoch` +- **Response:** + ```json + { + "epoch": 75, + "slot": 10800, + "blocks_per_epoch": 144, + "epoch_pot": 1.5, + "enrolled_miners": 10 + } + ``` + +--- + +### 3. Active Miners +List all miners currently participating in the network with their hardware details. + +- **Endpoint:** `GET /api/miners` +- **Response (Array):** + ```json + [ + { + "miner": "wallet_id_string", + "device_arch": "G4", + "device_family": "PowerPC", + "hardware_type": "PowerPC G4 (Vintage)", + "antiquity_multiplier": 2.5, + "last_attest": 1771187406 + } + ] + ``` + +--- + +### 4. Wallet Balance +Query the RTC balance for any valid miner ID. + +- **Endpoint:** `GET /wallet/balance?miner_id={NAME}` +- **Example:** `curl -sk 'https://50.28.86.131/wallet/balance?miner_id=scott'` +- **Response:** + ```json + { + "ok": true, + "miner_id": "scott", + "amount_rtc": 42.5, + "amount_i64": 42500000 + } + ``` + +--- + +## 🔵 Signed Transactions (Public Write) + +### Submit Signed Transfer +Transfer RTC between wallets without requiring an admin key. + +- **Endpoint:** `POST /wallet/transfer/signed` +- **Payload:** + ```json + { + "from_address": "RTC...", + "to_address": "RTC...", + "amount_rtc": 1.5, + "nonce": 1771187406, + "signature": "hex_encoded_signature", + "public_key": "hex_encoded_pubkey" + } + ``` +- **Process:** + 1. Construct JSON payload: `{"from": "...", "to": "...", "amount": 1.5, "nonce": "...", "memo": "..."}` + 2. Sort keys and sign with Ed25519 private key. + 3. Submit with hex-encoded signature. + +--- + +## 🔴 Authenticated Endpoints (Admin Only) + +**Required Header:** `X-Admin-Key: {YOUR_ADMIN_KEY}` + +### 1. Internal Admin Transfer +Move funds between any two wallets (requires admin authority). + +- **Endpoint:** `POST /wallet/transfer` +- **Payload:** `{"from_miner": "A", "to_miner": "B", "amount_rtc": 10.0}` + +### 2. Manual Settlement +Manually trigger the epoch settlement process. + +- **Endpoint:** `POST /rewards/settle` + +--- + +## ⚠️ Implementation Notes & Common Mistakes + +### Field Name Precision +The RustChain API is strict about field names. Common errors include: +- ❌ `miner_id` instead of **`miner`** (in miner object) +- ❌ `current_slot` instead of **`slot`** (in epoch info) +- ❌ `total_miners` instead of **`enrolled_miners`** + +### Wallet Formats +Wallets are **simple UTF-8 strings** (1-256 chars). +- ✅ `my-wallet-name` +- ❌ `0x...` (Ethereum addresses are not native RTC wallets) +- ❌ `4TR...` (Solana addresses must be bridged via BoTTube) + +### Certificate Errors +If using `curl`, always include `-k` to bypass the self-signed certificate warning. + +--- +*Last Updated: February 2026* From 6b27dec7d204d5a177cfdb6aee77dd98cf62a4fd Mon Sep 17 00:00:00 2001 From: createkr Date: Tue, 17 Feb 2026 22:47:49 +0800 Subject: [PATCH 10/73] feat: implement multi-node database sync protocol #36 (#219) * feat: implement multi-node database sync protocol #36 * docs: add BCOS-L1 headers #36 * fix(sync): harden payload upsert, schema checks, and bounded sync endpoints * test(security): replace md5 in mock address helper * fix(sync): enforce signed push payload with nonce/timestamp replay guard --------- Co-authored-by: xr --- node/rustchain_sync.py | 269 ++++++++++++++++++ node/rustchain_sync_endpoints.py | 188 ++++++++++++ node/rustchain_v2_integrated_v2.2.1_rip200.py | 9 + scripts/test_node_sync.py | 99 +++++++ tests/mock_crypto.py | 2 +- 5 files changed, 566 insertions(+), 1 deletion(-) create mode 100644 node/rustchain_sync.py create mode 100644 node/rustchain_sync_endpoints.py create mode 100644 scripts/test_node_sync.py diff --git a/node/rustchain_sync.py b/node/rustchain_sync.py new file mode 100644 index 000000000..e971746eb --- /dev/null +++ b/node/rustchain_sync.py @@ -0,0 +1,269 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +# Author: @createkr (RayBot AI) +# BCOS-Tier: L1 +import sqlite3 +import hashlib +import json +import time +import logging +from typing import List, Dict, Any, Optional + + +class RustChainSyncManager: + """ + Handles bidirectional SQLite synchronization between RustChain nodes. + + Security model: + - Table names are allowlisted + - Columns are schema-allowlisted per table (never trust remote payload keys) + - Upserts use ON CONFLICT(pk) DO UPDATE to avoid REPLACE data loss semantics + """ + + BASE_SYNC_TABLES = [ + "miner_attest_recent", + "balances", + "epoch_rewards", + ] + + OPTIONAL_SYNC_TABLES = [ + "transaction_history", + ] + + def __init__(self, db_path: str, admin_key: str): + self.db_path = db_path + self.admin_key = admin_key + logging.basicConfig(level=logging.INFO) + self.logger = logging.getLogger("RustChainSync") + self._schema_cache: Dict[str, Dict[str, Any]] = {} + + def _get_connection(self): + conn = sqlite3.connect(self.db_path) + conn.row_factory = sqlite3.Row + return conn + + def _table_exists(self, conn: sqlite3.Connection, table_name: str) -> bool: + row = conn.execute( + "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", + (table_name,), + ).fetchone() + return row is not None + + def _load_table_schema(self, table_name: str) -> Optional[Dict[str, Any]]: + if table_name in self._schema_cache: + return self._schema_cache[table_name] + + conn = self._get_connection() + try: + if not self._table_exists(conn, table_name): + return None + + rows = conn.execute(f"PRAGMA table_info({table_name})").fetchall() + if not rows: + return None + + columns = [r[1] for r in rows] + pk_rows = [r for r in rows if int(r[5]) > 0] # r[5] = pk order + pk_rows = sorted(pk_rows, key=lambda r: int(r[5])) + + # We only support single-PK upsert path for now. + pk_column = pk_rows[0][1] if pk_rows else None + + schema = { + "columns": columns, + "pk": pk_column, + } + self._schema_cache[table_name] = schema + return schema + finally: + conn.close() + + def get_available_sync_tables(self) -> List[str]: + tables: List[str] = [] + for t in self.BASE_SYNC_TABLES + self.OPTIONAL_SYNC_TABLES: + schema = self._load_table_schema(t) + if schema and schema.get("pk"): + tables.append(t) + return tables + + @property + def SYNC_TABLES(self) -> List[str]: + return self.get_available_sync_tables() + + def calculate_table_hash(self, table_name: str) -> str: + """Calculates a deterministic hash of all rows in a table.""" + if table_name not in self.SYNC_TABLES: + return "" + + schema = self._load_table_schema(table_name) + if not schema: + return "" + + conn = self._get_connection() + cursor = conn.cursor() + + pk = schema["pk"] + cursor.execute(f"SELECT * FROM {table_name} ORDER BY {pk} ASC") + rows = cursor.fetchall() + + hasher = hashlib.sha256() + for row in rows: + row_dict = dict(row) + row_str = json.dumps(row_dict, sort_keys=True, separators=(",", ":")) + hasher.update(row_str.encode()) + + conn.close() + return hasher.hexdigest() + + def get_merkle_root(self) -> str: + """Generates a master Merkle root hash for all synced tables.""" + table_hashes = [self.calculate_table_hash(t) for t in self.SYNC_TABLES] + combined = "".join(table_hashes) + return hashlib.sha256(combined.encode()).hexdigest() + + def _get_primary_key(self, table_name: str) -> Optional[str]: + schema = self._load_table_schema(table_name) + if not schema: + return None + return schema.get("pk") + + def get_table_data(self, table_name: str, limit: int = 200, offset: int = 0) -> List[Dict[str, Any]]: + """Returns bounded data from a specific table as a list of dicts.""" + if table_name not in self.SYNC_TABLES: + return [] + + schema = self._load_table_schema(table_name) + if not schema: + return [] + + pk = schema["pk"] + conn = self._get_connection() + cursor = conn.cursor() + cursor.execute( + f"SELECT * FROM {table_name} ORDER BY {pk} ASC LIMIT ? OFFSET ?", + (int(limit), int(offset)), + ) + data = [dict(row) for row in cursor.fetchall()] + conn.close() + return data + + def _balance_value_for_row(self, row: Dict[str, Any]) -> Optional[int]: + for candidate in ("amount_i64", "balance_i64", "balance_urtc", "amount_rtc"): + if candidate in row and row[candidate] is not None: + try: + return int(row[candidate]) + except Exception: + return None + return None + + def apply_sync_payload(self, table_name: str, remote_data: List[Dict[str, Any]]): + """Merges remote data into local database with conflict resolution and schema hardening.""" + if table_name not in self.SYNC_TABLES: + return False + + schema = self._load_table_schema(table_name) + if not schema: + return False + + allowed_columns = set(schema["columns"]) + pk = schema["pk"] + if not pk: + self.logger.error(f"No PK found for {table_name}, skipping sync") + return False + + conn = self._get_connection() + cursor = conn.cursor() + + try: + for row in remote_data: + if not isinstance(row, dict): + continue + + if pk not in row: + continue + + sanitized = {k: v for k, v in row.items() if k in allowed_columns} + if pk not in sanitized: + continue + + # Conflict resolution: Latest timestamp wins for attestations + if table_name == "miner_attest_recent": + if "last_attest" in sanitized: + cursor.execute(f"SELECT last_attest FROM {table_name} WHERE {pk} = ?", (sanitized[pk],)) + local_row = cursor.fetchone() + if local_row and local_row["last_attest"] is not None and local_row["last_attest"] >= sanitized["last_attest"]: + continue + + # For balances, reject if remote would reduce known local balance + if table_name == "balances": + candidate_balance_col = None + for c in ("amount_i64", "balance_i64", "balance_urtc", "amount_rtc"): + if c in allowed_columns: + candidate_balance_col = c + break + + if candidate_balance_col and candidate_balance_col in sanitized: + cursor.execute( + f"SELECT {candidate_balance_col} FROM {table_name} WHERE {pk} = ?", + (sanitized[pk],), + ) + local_row = cursor.fetchone() + if local_row and local_row[0] is not None: + try: + if int(local_row[0]) > int(sanitized[candidate_balance_col]): + self.logger.warning(f"Rejected sync: Balance reduction for {sanitized[pk]}") + continue + except Exception: + pass + + # Safe upsert (avoid INSERT OR REPLACE data loss semantics) + columns = list(sanitized.keys()) + placeholders = ", ".join(["?"] * len(columns)) + update_cols = [c for c in columns if c != pk] + + if not update_cols: + # PK-only row: ignore + continue + + update_expr = ", ".join([f"{c}=excluded.{c}" for c in update_cols]) + sql = ( + f"INSERT INTO {table_name} ({', '.join(columns)}) VALUES ({placeholders}) " + f"ON CONFLICT({pk}) DO UPDATE SET {update_expr}" + ) + cursor.execute(sql, [sanitized[c] for c in columns]) + + conn.commit() + return True + except Exception as e: + self.logger.error(f"Sync error on {table_name}: {e}") + conn.rollback() + return False + finally: + conn.close() + + def get_sync_status(self) -> Dict[str, Any]: + """Returns metadata about the current state of synced tables.""" + tables = self.SYNC_TABLES + status = { + "timestamp": time.time(), + "merkle_root": self.get_merkle_root(), + "sync_tables": tables, + "tables": {}, + } + for t in tables: + status["tables"][t] = { + "hash": self.calculate_table_hash(t), + "count": self._get_count(t), + "pk": self._get_primary_key(t), + } + return status + + def _get_count(self, table_name: str) -> int: + if table_name not in self.SYNC_TABLES: + return 0 + conn = self._get_connection() + cursor = conn.cursor() + cursor.execute(f"SELECT COUNT(*) FROM {table_name}") + count = cursor.fetchone()[0] + conn.close() + return int(count) diff --git a/node/rustchain_sync_endpoints.py b/node/rustchain_sync_endpoints.py new file mode 100644 index 000000000..f501d4c2c --- /dev/null +++ b/node/rustchain_sync_endpoints.py @@ -0,0 +1,188 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +# Author: @createkr (RayBot AI) +# BCOS-Tier: L1 +import hashlib +import hmac +import os +import time +from flask import request, jsonify +from node.rustchain_sync import RustChainSyncManager + + +def register_sync_endpoints(app, db_path, admin_key): + """Registers sync-related endpoints to the Flask app.""" + + sync_manager = RustChainSyncManager(db_path, admin_key) + last_sync_times = {} # peer_id -> timestamp + + RATE_LIMIT_WINDOW_SEC = 60 + PEER_TTL_SEC = 3600 + MAX_PEERS_TRACKED = 2000 + + SYNC_SIGNATURE_SECRET = os.getenv("RC_SYNC_SHARED_SECRET", admin_key) + SIGNATURE_MAX_SKEW_SEC = 300 + NONCE_TTL_SEC = 600 + MAX_NONCES_TRACKED = 10000 + seen_nonces = {} # nonce -> first_seen_ts + + def _cleanup_peer_history(now: float): + stale = [k for k, ts in last_sync_times.items() if (now - ts) > PEER_TTL_SEC] + for k in stale: + last_sync_times.pop(k, None) + + if len(last_sync_times) > MAX_PEERS_TRACKED: + # Trim oldest entries to keep bounded memory usage. + oldest = sorted(last_sync_times.items(), key=lambda kv: kv[1]) + drop_n = len(last_sync_times) - MAX_PEERS_TRACKED + for k, _ in oldest[:drop_n]: + last_sync_times.pop(k, None) + + def _cleanup_nonces(now: float): + stale = [n for n, ts in seen_nonces.items() if (now - ts) > NONCE_TTL_SEC] + for n in stale: + seen_nonces.pop(n, None) + + if len(seen_nonces) > MAX_NONCES_TRACKED: + oldest = sorted(seen_nonces.items(), key=lambda kv: kv[1]) + drop_n = len(seen_nonces) - MAX_NONCES_TRACKED + for n, _ in oldest[:drop_n]: + seen_nonces.pop(n, None) + + def _verify_sync_signature(peer_id: str, now: float): + if not SYNC_SIGNATURE_SECRET: + return False, "Signature secret not configured" + + ts_raw = request.headers.get("X-Sync-Timestamp") + nonce = request.headers.get("X-Sync-Nonce") + signature = request.headers.get("X-Sync-Signature") + + if not ts_raw or not nonce or not signature: + return False, "Missing sync signature headers" + + try: + ts_int = int(ts_raw) + except (TypeError, ValueError): + return False, "Invalid timestamp" + + if abs(now - ts_int) > SIGNATURE_MAX_SKEW_SEC: + return False, "Timestamp skew too large" + + if nonce in seen_nonces: + return False, "Replay detected" + + body = request.get_data(cache=True) or b"" + body_hash = hashlib.sha256(body).hexdigest() + signing_payload = f"{peer_id}\n{ts_int}\n{nonce}\n{body_hash}".encode("utf-8") + expected = hmac.new( + SYNC_SIGNATURE_SECRET.encode("utf-8"), + signing_payload, + hashlib.sha256, + ).hexdigest() + + if not hmac.compare_digest(signature, expected): + return False, "Invalid signature" + + seen_nonces[nonce] = now + return True, None + + def require_admin(f): + from functools import wraps + + @wraps(f) + def decorated(*args, **kwargs): + key = request.headers.get("X-Admin-Key") or request.headers.get("X-API-Key") + if not key or key != admin_key: + return jsonify({"error": "Unauthorized"}), 401 + return f(*args, **kwargs) + + return decorated + + @app.route("/api/sync/status", methods=["GET"]) + @require_admin + def sync_status(): + """Returns the current Merkle root and table hashes.""" + now = time.time() + _cleanup_peer_history(now) + _cleanup_nonces(now) + status = sync_manager.get_sync_status() + status["peer_sync_history"] = last_sync_times + return jsonify(status) + + @app.route("/api/sync/pull", methods=["GET"]) + @require_admin + def sync_pull(): + """ + Returns bounded data for synced tables. + + Query params: + - table: optional single table name; if omitted returns all synced tables + - limit: max rows per table (default 200, max 1000) + - offset: row offset (default 0) + """ + table = request.args.get("table", "").strip() + try: + limit = int(request.args.get("limit", 200)) + offset = int(request.args.get("offset", 0)) + except ValueError: + return jsonify({"error": "limit/offset must be integers"}), 400 + + limit = max(1, min(limit, 1000)) + offset = max(0, offset) + + tables = sync_manager.SYNC_TABLES + if table: + if table not in tables: + return jsonify({"error": f"invalid table: {table}"}), 400 + tables = [table] + + payload = { + "meta": { + "limit": limit, + "offset": offset, + "tables": tables, + }, + "data": {}, + } + for t in tables: + payload["data"][t] = sync_manager.get_table_data(t, limit=limit, offset=offset) + + return jsonify(payload) + + @app.route("/api/sync/push", methods=["POST"]) + @require_admin + def sync_push(): + """Receives data from a peer and applies it locally.""" + peer_id = request.headers.get("X-Peer-ID", "unknown") + + now = time.time() + _cleanup_peer_history(now) + _cleanup_nonces(now) + + ok, err = _verify_sync_signature(peer_id, now) + if not ok: + return jsonify({"error": err}), 401 + + # Rate limiting: Max 1 sync per minute per peer + if peer_id in last_sync_times and (now - last_sync_times[peer_id] < RATE_LIMIT_WINDOW_SEC): + return jsonify({"error": "Rate limit exceeded"}), 429 + + data = request.get_json(silent=True) + if not data or not isinstance(data, dict): + return jsonify({"error": "Invalid payload"}), 400 + + success = True + for table, rows in data.items(): + if not isinstance(rows, list): + success = False + continue + if not sync_manager.apply_sync_payload(table, rows): + success = False + + if success: + last_sync_times[peer_id] = now + return jsonify({"ok": True, "merkle_root": sync_manager.get_merkle_root()}) + + return jsonify({"error": "Partial or total sync failure"}), 500 + + print("[Sync] Endpoints registered successfully") diff --git a/node/rustchain_v2_integrated_v2.2.1_rip200.py b/node/rustchain_v2_integrated_v2.2.1_rip200.py index a2f39f7fb..cf8187c37 100644 --- a/node/rustchain_v2_integrated_v2.2.1_rip200.py +++ b/node/rustchain_v2_integrated_v2.2.1_rip200.py @@ -4221,6 +4221,15 @@ def wallet_transfer_signed(): print(f"[GPU] Endpoint module not available: {e}") except Exception as e: print(f"[GPU] Endpoint init failed: {e}") + + # Node Sync Protocol (Bounty #36) - decoupled from P2P init + try: + from node.rustchain_sync_endpoints import register_sync_endpoints + register_sync_endpoints(app, DB_PATH, ADMIN_KEY) + except ImportError as e: + print(f"[Sync] Not available: {e}") + except Exception as e: + print(f"[Sync] Init failed: {e}") print("=" * 70) print("RustChain v2.2.1 - SECURITY HARDENED - Mainnet Candidate") print("=" * 70) diff --git a/scripts/test_node_sync.py b/scripts/test_node_sync.py new file mode 100644 index 000000000..edc7f2d04 --- /dev/null +++ b/scripts/test_node_sync.py @@ -0,0 +1,99 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +# Author: @createkr (RayBot AI) +# BCOS-Tier: L1 +import os +import requests +import sys + + +DEFAULT_VERIFY_SSL = os.getenv("SYNC_VERIFY_SSL", "true").lower() not in ("0", "false", "no") +ADMIN_KEY = os.getenv("RC_ADMIN_KEY", "") + + +def _headers(peer_id: str = ""): + h = {"Content-Type": "application/json"} + if ADMIN_KEY: + h["X-Admin-Key"] = ADMIN_KEY + if peer_id: + h["X-Peer-ID"] = peer_id + return h + + +def test_sync_status(node_url, verify_ssl=DEFAULT_VERIFY_SSL): + print(f"[*] Checking sync status on {node_url}...") + try: + resp = requests.get(f"{node_url}/api/sync/status", headers=_headers(), verify=verify_ssl, timeout=20) + if resp.status_code == 200: + status = resp.json() + print(f"[+] Merkle Root: {status['merkle_root']}") + for table, info in status.get("tables", {}).items(): + print(f" - {table}: {info.get('count', 0)} rows, hash: {str(info.get('hash',''))[:16]}...") + return status + print(f"[-] Failed: {resp.status_code} {resp.text}") + except Exception as e: + print(f"[-] Error: {e}") + return None + + +def test_sync_pull(node_url, table=None, limit=100, offset=0, verify_ssl=DEFAULT_VERIFY_SSL): + print(f"[*] Pulling data from {node_url}...") + params = {"limit": limit, "offset": offset} + if table: + params["table"] = table + + resp = requests.get( + f"{node_url}/api/sync/pull", + headers=_headers(), + params=params, + verify=verify_ssl, + timeout=30, + ) + if resp.status_code == 200: + payload = resp.json() + print(f"[+] Successfully pulled data for {len(payload.get('data', {}))} tables") + return payload.get("data", {}) + + print(f"[-] Failed: {resp.status_code} {resp.text}") + return None + + +def test_sync_push(node_url, peer_id, data, verify_ssl=DEFAULT_VERIFY_SSL): + print(f"[*] Pushing data to {node_url} as peer {peer_id}...") + resp = requests.post( + f"{node_url}/api/sync/push", + headers=_headers(peer_id=peer_id), + json=data, + verify=verify_ssl, + timeout=30, + ) + if resp.status_code == 200: + print(f"[+] Push successful: {resp.json()}") + return True + + print(f"[-] Push failed: {resp.status_code} {resp.text}") + return False + + +if __name__ == "__main__": + if len(sys.argv) < 2: + print("Usage: RC_ADMIN_KEY=... python3 test_node_sync.py ") + sys.exit(1) + + if not ADMIN_KEY: + print("[WARN] RC_ADMIN_KEY not set; protected endpoints may reject requests.") + + url = sys.argv[1] + + # 1. Check Initial Status + test_sync_status(url) + + # 2. Pull Data (bounded) + data = test_sync_pull(url, limit=100, offset=0) + + # 3. Test Push (same data, should be idempotent/safe) + if data: + test_sync_push(url, "test_peer_1", data) + + # 4. Verify Status Again + test_sync_status(url) diff --git a/tests/mock_crypto.py b/tests/mock_crypto.py index 37bf36072..57dbbbc41 100644 --- a/tests/mock_crypto.py +++ b/tests/mock_crypto.py @@ -28,7 +28,7 @@ def blake2b256_hex(data): def address_from_public_key(pubkey_bytes): # Returns a mock address format 'RTC...' - return f"RTC{hashlib.md5(pubkey_bytes).hexdigest()[:10]}" + return f"RTC{hashlib.sha256(pubkey_bytes).hexdigest()[:10]}" def generate_wallet_keypair(): import secrets From 39665a10ee772dd8244a066f21978b76e04adac5 Mon Sep 17 00:00:00 2001 From: AutoJanitor <121303252+Scottcjn@users.noreply.github.com> Date: Tue, 17 Feb 2026 14:46:58 -0600 Subject: [PATCH 11/73] =?UTF-8?q?Add=20US=20regulatory=20position=20docume?= =?UTF-8?q?nt=20=E2=80=94=20RTC=20is=20not=20a=20security?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- docs/US_REGULATORY_POSITION.md | 146 +++++++++++++++++++++++++++++++++ 1 file changed, 146 insertions(+) create mode 100644 docs/US_REGULATORY_POSITION.md diff --git a/docs/US_REGULATORY_POSITION.md b/docs/US_REGULATORY_POSITION.md new file mode 100644 index 000000000..2f8afa827 --- /dev/null +++ b/docs/US_REGULATORY_POSITION.md @@ -0,0 +1,146 @@ +# RustChain (RTC) — U.S. Regulatory Position + +*Last updated: February 17, 2026* + +## Summary + +RustChain (RTC) is a utility token distributed exclusively through decentralized mining. **No ICO, presale, token sale, or fundraising of any kind has ever occurred.** This document outlines why RTC is not a security under U.S. law. + +--- + +## The Howey Test Analysis + +Under *SEC v. W.J. Howey Co.* (1946), an "investment contract" (security) requires **all four** elements: + +| Howey Element | RTC Analysis | Result | +|--------------|-------------|--------| +| **1. Investment of money** | No one has ever paid money to acquire RTC at launch. All RTC is earned through mining (`pip install clawrtc`). No ICO, no presale, no token sale. | **NOT MET** | +| **2. Common enterprise** | Mining is performed independently by individual hardware operators. No pooled funds, no shared investment vehicle. Each miner runs their own CPU. | **NOT MET** | +| **3. Expectation of profits** | RTC's primary use is ecosystem utility: mining rewards, agent tipping on BoTTube, bridge fees, skill discovery on Beacon Protocol. Marketing consistently emphasizes building, not investing. | **NOT MET** | +| **4. Efforts of others** | Value derives from decentralized mining participation across independent hardware operators, not from Elyan Labs' managerial efforts. The protocol runs autonomously. | **NOT MET** | + +**Conclusion: RTC fails all four prongs of the Howey Test.** + +--- + +## Key Facts Supporting Non-Security Status + +### No Fundraising — Ever + +- **No ICO** (Initial Coin Offering) +- **No IEO** (Initial Exchange Offering) +- **No presale or private sale** +- **No SAFT** (Simple Agreement for Future Tokens) +- **No venture capital or institutional investment** +- **100% self-funded** by the founder through personal savings +- Multiple public statements confirm this: *"No ICO! Mine free RTC... No presale. No BS. Just pure proof-of-community."* + +### Fair Launch via Mining + +- RTC has been mineable from genesis by anyone running the open-source miner +- Installation: `pip install clawrtc && clawrtc --wallet your-name` +- No accounts, KYC, or permission required +- Hardware fingerprinting ensures 1 CPU = 1 Vote — no Sybil attacks +- Mining rewards are proportional to hardware antiquity (Proof-of-Antiquity consensus) + +### Transparent Premine + +- **Total supply**: 8,388,608 RTC (exactly 2^23 — fixed, no inflation) +- **6% premine** (~503,316 RTC) allocated across 4 transparent wallets: + - `founder_community` — Community bounties and contributor rewards (actively distributed) + - `founder_dev_fund` — Development costs + - `founder_team_bounty` — Team allocation + - `founder_founders` — Founder allocation +- **94% mineable** through Proof-of-Antiquity by any hardware operator +- Premine is being actively drawn down through bounties, not hoarded +- All distributions are publicly auditable on the RustChain ledger + +### Utility Token Characteristics + +RTC serves concrete utility functions within the ecosystem: + +1. **Mining rewards** — Compensation for hardware attestation and network participation +2. **Agent tipping** — Tipping AI agents on BoTTube for video content +3. **Bridge fees** — Cross-chain bridging (Solana wRTC, Ergo anchoring) +4. **Bounty payments** — Compensation for code contributions, security audits, documentation +5. **Skill discovery** — Agent-to-agent coordination via Beacon Protocol +6. **Governance** — Coalition voting on protocol changes (The Flamebound genesis coalition) + +### Decentralized Operation + +- **12+ independent miners** across multiple geographic locations +- **3 attestation nodes** operated by different parties +- **Open-source protocol** — anyone can run a node +- **Anti-emulation fingerprinting** — prevents VM farms, ensures real hardware +- **No central point of failure** — protocol runs autonomously + +--- + +## Comparison to Recognized Non-Securities + +| Feature | Bitcoin | RTC (RustChain) | +|---------|---------|-----------------| +| ICO/Presale | None | None | +| Launch method | Mining from genesis | Mining from genesis | +| Premine | None (Satoshi mined early) | 6% (transparent, documented) | +| Primary use | Store of value, payments | Mining rewards, agent ecosystem utility | +| Consensus | Proof-of-Work | Proof-of-Antiquity | +| Decentralization | Global mining | Growing independent miner base | +| SEC classification | Commodity (per CFTC) | Utility token (no SEC action) | + +Bitcoin is widely recognized as a commodity, not a security. RTC shares the same fundamental characteristics: fair launch, no fundraising, mining-based distribution, and decentralized operation. + +--- + +## Bridges and Secondary Markets + +### Solana wRTC Bridge +- **wRTC** is a wrapped version of RTC on Solana (SPL token) +- Mint: `12TAdKXxcGf6oCv4rqDz2NkgxjyHq6HQKoxKZYGf5i4X` +- **Mint authority revoked** — no new wRTC can be created outside the bridge +- **Metadata immutable** — cannot be changed +- **LP tokens permanently locked** — anti-rug proof +- Raydium DEX pool enables peer-to-peer trading +- Bridge exists to provide liquidity access, not as a fundraising mechanism + +### Ergo Anchoring +- Miner attestation hashes are periodically anchored to the Ergo blockchain +- Provides external verification of RustChain's mining history +- No token sale or fundraising involved + +### Important Note +Secondary market trading on DEXs occurs peer-to-peer. Elyan Labs does not operate an exchange, does not set prices, and does not profit from trading activity. + +--- + +## Marketing and Communications + +Consistent public messaging emphasizes: +- Building and contributing, not investing or profiting +- Technical merit of Proof-of-Antiquity and hardware preservation +- Community participation through mining and bounties +- No promises of price appreciation or returns + +Representative public statements: +- *"No ICO! Mine free RTC"* +- *"100% self-funded grit. No hype, just us & you building"* +- *"No presale. No ICO. No BS. Just pure proof-of-community"* +- *"If you are here to build, welcome. If you are here to flip, this is not the project for you."* + +--- + +## Regulatory References + +- **SEC v. W.J. Howey Co.**, 328 U.S. 293 (1946) — Investment contract test +- **SEC Framework for "Investment Contract" Analysis of Digital Assets** (April 2019) +- **CFTC v. Bitcoin** — Commodity classification precedent +- **SEC v. Ripple Labs** (2023) — Programmatic sales distinction +- **SEC Staff Statement on Bitcoin/Ethereum** — Not securities when sufficiently decentralized + +--- + +## Disclaimer + +This document represents Elyan Labs' analysis of RTC's regulatory status based on publicly available legal frameworks. It is not legal advice. For a formal legal opinion, consult a qualified securities attorney. + +**Contact**: scott@elyanlabs.ai | [rustchain.org](http://rustchain.org) | [@RustchainPOA](https://x.com/RustchainPOA) From 5a895275cc5701ec2dac8c3a4cd74206401a0fec Mon Sep 17 00:00:00 2001 From: createkr Date: Wed, 18 Feb 2026 08:26:59 +0800 Subject: [PATCH 12/73] feat(telegram-bot): add RustChain community bot commands (#265) Co-authored-by: xr --- tools/telegram_bot/README.md | 39 +++++++++ tools/telegram_bot/bot.py | 129 ++++++++++++++++++++++++++++ tools/telegram_bot/requirements.txt | 2 + 3 files changed, 170 insertions(+) create mode 100644 tools/telegram_bot/README.md create mode 100644 tools/telegram_bot/bot.py create mode 100644 tools/telegram_bot/requirements.txt diff --git a/tools/telegram_bot/README.md b/tools/telegram_bot/README.md new file mode 100644 index 000000000..a0e70037b --- /dev/null +++ b/tools/telegram_bot/README.md @@ -0,0 +1,39 @@ +# RustChain Telegram Community Bot + +实现 `rustchain-bounties#249` 要求的社区机器人命令: + +- `/price`:wRTC 价格 +- `/miners`:活跃矿工数 +- `/epoch`:当前 epoch 信息 +- `/balance `:钱包余额 +- `/health`:节点健康状态 + +## 1) 安装依赖 + +```bash +cd tools/telegram_bot +python3 -m venv .venv +source .venv/bin/activate +pip install -r requirements.txt +``` + +## 2) 配置环境变量 + +```bash +export TELEGRAM_BOT_TOKEN="" +export RUSTCHAIN_API_BASE="http://50.28.86.131" +# 可选:请求超时(秒) +export RUSTCHAIN_REQUEST_TIMEOUT="8" +``` + +## 3) 启动 + +```bash +python bot.py +``` + +## 说明 + +- 默认请求 `http://50.28.86.131`,可用 `RUSTCHAIN_API_BASE` 覆盖。 +- 各命令对返回 payload 做了宽松字段兼容(不同字段名也尽量解析)。 +- 发生请求错误时会直接回显错误,方便群组调试。 diff --git a/tools/telegram_bot/bot.py b/tools/telegram_bot/bot.py new file mode 100644 index 000000000..50d083ea4 --- /dev/null +++ b/tools/telegram_bot/bot.py @@ -0,0 +1,129 @@ +#!/usr/bin/env python3 +"""RustChain Telegram community bot. + +Commands: +- /price +- /miners +- /epoch +- /balance +- /health +""" + +from __future__ import annotations + +import logging +import os +from typing import Any + +import httpx +from telegram import Update +from telegram.ext import Application, CommandHandler, ContextTypes + +API_BASE = os.getenv("RUSTCHAIN_API_BASE", "http://50.28.86.131") +REQUEST_TIMEOUT = float(os.getenv("RUSTCHAIN_REQUEST_TIMEOUT", "8")) + +logging.basicConfig( + format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", + level=os.getenv("LOG_LEVEL", "INFO"), +) +logger = logging.getLogger("rustchain_telegram_bot") + + +async def api_get(path: str) -> Any: + url = f"{API_BASE.rstrip('/')}/{path.lstrip('/')}" + timeout = httpx.Timeout(REQUEST_TIMEOUT) + async with httpx.AsyncClient(timeout=timeout) as client: + response = await client.get(url) + response.raise_for_status() + return response.json() + + +def _pick_number(payload: Any, keys: list[str]) -> Any: + if isinstance(payload, dict): + for k in keys: + if k in payload and payload[k] is not None: + return payload[k] + return None + + +async def cmd_price(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: + try: + data = await api_get("wrtc/price") + price = _pick_number(data, ["price", "wrtc_price", "usd", "value"]) + if price is None: + await update.message.reply_text(f"wRTC price payload: {data}") + return + await update.message.reply_text(f"wRTC 当前价格: {price}") + except Exception as exc: + logger.exception("/price failed") + await update.message.reply_text(f"获取价格失败: {exc}") + + +async def cmd_miners(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: + try: + data = await api_get("api/miners") + count = _pick_number(data, ["active_miners", "count", "miners", "total"]) + if count is None and isinstance(data, list): + count = len(data) + await update.message.reply_text(f"活跃矿工数: {count if count is not None else data}") + except Exception as exc: + logger.exception("/miners failed") + await update.message.reply_text(f"获取矿工信息失败: {exc}") + + +async def cmd_epoch(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: + try: + data = await api_get("epoch") + await update.message.reply_text(f"当前 Epoch: {data}") + except Exception as exc: + logger.exception("/epoch failed") + await update.message.reply_text(f"获取 epoch 失败: {exc}") + + +async def cmd_balance(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: + if not context.args: + await update.message.reply_text("用法: /balance ") + return + wallet = context.args[0].strip() + try: + data = await api_get(f"wallet/{wallet}") + balance = _pick_number(data, ["balance", "rtc", "amount"]) + await update.message.reply_text( + f"钱包 {wallet}\n余额: {balance if balance is not None else data}" + ) + except Exception as exc: + logger.exception("/balance failed") + await update.message.reply_text(f"查询余额失败: {exc}") + + +async def cmd_health(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: + try: + data = await api_get("health") + status = _pick_number(data, ["status", "ok", "healthy"]) + await update.message.reply_text(f"节点健康状态: {status if status is not None else data}") + except Exception as exc: + logger.exception("/health failed") + await update.message.reply_text(f"健康检查失败: {exc}") + + +def build_app(token: str) -> Application: + app = Application.builder().token(token).build() + app.add_handler(CommandHandler("price", cmd_price)) + app.add_handler(CommandHandler("miners", cmd_miners)) + app.add_handler(CommandHandler("epoch", cmd_epoch)) + app.add_handler(CommandHandler("balance", cmd_balance)) + app.add_handler(CommandHandler("health", cmd_health)) + return app + + +def main() -> None: + token = os.getenv("TELEGRAM_BOT_TOKEN") + if not token: + raise SystemExit("TELEGRAM_BOT_TOKEN is required") + app = build_app(token) + logger.info("Starting RustChain Telegram bot with API base: %s", API_BASE) + app.run_polling(allowed_updates=Update.ALL_TYPES) + + +if __name__ == "__main__": + main() diff --git a/tools/telegram_bot/requirements.txt b/tools/telegram_bot/requirements.txt new file mode 100644 index 000000000..5700db668 --- /dev/null +++ b/tools/telegram_bot/requirements.txt @@ -0,0 +1,2 @@ +python-telegram-bot==21.10 +httpx==0.28.1 From b88179d0b03d83aadff2ca21f543fb970b9dc326 Mon Sep 17 00:00:00 2001 From: Scott Date: Wed, 18 Feb 2026 09:37:54 -0600 Subject: [PATCH 13/73] fix(windows-miner): headless fallback when tkinter missing --- miners/README.md | 3 + miners/windows/rustchain_windows_miner.py | 68 +++++++++++++++++++++-- 2 files changed, 67 insertions(+), 4 deletions(-) diff --git a/miners/README.md b/miners/README.md index 623620977..24cf94d33 100644 --- a/miners/README.md +++ b/miners/README.md @@ -22,6 +22,9 @@ python3 rustchain_mac_miner_v2.4.py # Windows python rustchain_windows_miner.py + +# If your Python does not include Tcl/Tk (common on minimal/embeddable installs): +python rustchain_windows_miner.py --headless --wallet YOUR_WALLET_ID --node https://50.28.86.131 ``` ## Windows installer & build helpers diff --git a/miners/windows/rustchain_windows_miner.py b/miners/windows/rustchain_windows_miner.py index f12e42cba..f06b1ab70 100644 --- a/miners/windows/rustchain_windows_miner.py +++ b/miners/windows/rustchain_windows_miner.py @@ -15,11 +15,23 @@ import uuid import subprocess import re -import tkinter as tk -from tkinter import ttk, messagebox, scrolledtext +try: + import tkinter as tk + from tkinter import ttk, messagebox, scrolledtext + TK_AVAILABLE = True + _TK_IMPORT_ERROR = "" +except Exception as e: + # Windows embeddable Python often ships without Tcl/Tk. We support headless mode. + TK_AVAILABLE = False + _TK_IMPORT_ERROR = str(e) + tk = None + ttk = None + messagebox = None + scrolledtext = None import requests from datetime import datetime from pathlib import Path +import argparse # Configuration RUSTCHAIN_API = "http://50.28.86.131:8088" @@ -301,6 +313,8 @@ def submit_header(self, header): class RustChainGUI: """Windows GUI for RustChain""" def __init__(self): + if not TK_AVAILABLE: + raise RuntimeError(f"tkinter is not available: {_TK_IMPORT_ERROR}") self.root = tk.Tk() self.root.title("RustChain Wallet & Miner for Windows") self.root.geometry("800x600") @@ -385,10 +399,56 @@ def run(self): """Run the GUI""" self.root.mainloop() -def main(): +def run_headless(wallet_address: str, node_url: str) -> int: + wallet = RustChainWallet() + if wallet_address: + wallet.wallet_data["address"] = wallet_address + wallet.save_wallet(wallet.wallet_data) + miner = RustChainMiner(wallet.wallet_data["address"]) + miner.node_url = node_url + + def cb(evt): + t = evt.get("type") + if t == "share": + ok = "OK" if evt.get("success") else "FAIL" + print(f"[share] submitted={evt.get('submitted')} accepted={evt.get('accepted')} {ok}", flush=True) + elif t == "error": + print(f"[error] {evt.get('message')}", file=sys.stderr, flush=True) + + print("RustChain Windows miner: headless mode", flush=True) + print(f"node={miner.node_url} miner_id={miner.miner_id}", flush=True) + miner.start_mining(cb) + try: + while True: + time.sleep(1) + except KeyboardInterrupt: + miner.stop_mining() + print("\nStopping miner.", flush=True) + return 0 + + +def main(argv=None): """Main entry point""" + ap = argparse.ArgumentParser(description="RustChain Windows wallet + miner (GUI or headless fallback).") + ap.add_argument("--headless", action="store_true", help="Run without GUI (recommended for embeddable Python).") + ap.add_argument("--node", default=RUSTCHAIN_API, help="RustChain node base URL.") + ap.add_argument("--wallet", default="", help="Wallet address / miner pubkey string.") + args = ap.parse_args(argv) + + if args.headless or not TK_AVAILABLE: + if not TK_AVAILABLE and not args.headless: + print(f"tkinter unavailable ({_TK_IMPORT_ERROR}); falling back to --headless.", file=sys.stderr) + return run_headless(args.wallet, args.node) + app = RustChainGUI() + app.miner.node_url = args.node + if args.wallet: + app.wallet.wallet_data["address"] = args.wallet + app.wallet.save_wallet(app.wallet.wallet_data) + app.miner.wallet_address = args.wallet + app.miner.miner_id = f"windows_{hashlib.md5(args.wallet.encode()).hexdigest()[:8]}" app.run() + return 0 if __name__ == "__main__": - main() + raise SystemExit(main()) From ac39d4da51c833cca8fa08482f14131c891b2466 Mon Sep 17 00:00:00 2001 From: Scott Date: Wed, 18 Feb 2026 09:45:04 -0600 Subject: [PATCH 14/73] fix(windows setup): detect/install tkinter (Tcl/Tk) + headless hint --- miners/windows/rustchain_miner_setup.bat | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/miners/windows/rustchain_miner_setup.bat b/miners/windows/rustchain_miner_setup.bat index 7225ecedb..1c27127cc 100755 --- a/miners/windows/rustchain_miner_setup.bat +++ b/miners/windows/rustchain_miner_setup.bat @@ -20,12 +20,23 @@ echo Python 3.11+ not found. Downloading official installer... if not exist "%PYTHON_INSTALLER%" ( powershell -Command "Invoke-WebRequest -UseBasicParsing -Uri '%PYTHON_URL%' -OutFile '%PYTHON_INSTALLER%'" ) -echo Running Python installer (silent)... -start /wait "" "%PYTHON_INSTALLER%" /quiet InstallAllUsers=1 PrependPath=1 Include_pip=1 +echo Running Python installer (silent, includes Tcl/Tk for tkinter)... +start /wait "" "%PYTHON_INSTALLER%" /quiet InstallAllUsers=1 PrependPath=1 Include_pip=1 Include_tcltk=1 goto :check_python :python_ready echo Python detected. +echo Checking tkinter availability... +python -c "import tkinter" >nul 2>&1 +if errorlevel 1 ( + echo WARNING: tkinter is missing in this Python install. + echo Attempting to install/repair official Python with Tcl/Tk enabled... + if not exist "%PYTHON_INSTALLER%" ( + powershell -Command "Invoke-WebRequest -UseBasicParsing -Uri '%PYTHON_URL%' -OutFile '%PYTHON_INSTALLER%'" + ) + start /wait "" "%PYTHON_INSTALLER%" /quiet InstallAllUsers=1 PrependPath=1 Include_pip=1 Include_tcltk=1 +) + python -m pip install --upgrade pip echo Installing miner dependencies... python -m pip install -r "%REQUIREMENTS%" @@ -40,4 +51,6 @@ if exist "%MINER_SCRIPT%" ( echo. echo Miner is ready. Run: echo python "%MINER_SCRIPT%" +echo If you still get a tkinter error, run headless: +echo python "%MINER_SCRIPT%" --headless --wallet YOUR_WALLET_ID --node https://50.28.86.131 echo You can create a scheduled task or shortcut to keep it running. From 78db1342329fc26588f22000be58f48c63115ec7 Mon Sep 17 00:00:00 2001 From: Scott Date: Wed, 18 Feb 2026 09:50:44 -0600 Subject: [PATCH 15/73] security: trust X-Forwarded-For only from trusted proxies --- node/rustchain_v2_integrated_v2.2.1_rip200.py | 94 +++++++++++++------ 1 file changed, 66 insertions(+), 28 deletions(-) diff --git a/node/rustchain_v2_integrated_v2.2.1_rip200.py b/node/rustchain_v2_integrated_v2.2.1_rip200.py index 3be5556c2..f12fe5b0a 100644 --- a/node/rustchain_v2_integrated_v2.2.1_rip200.py +++ b/node/rustchain_v2_integrated_v2.2.1_rip200.py @@ -88,6 +88,62 @@ def generate_latest(): return b"# Prometheus not available" LIGHTCLIENT_DIR = os.path.join(REPO_ROOT, "web", "light-client") MUSEUM_DIR = os.path.join(REPO_ROOT, "web", "museum") +# ---------------------------------------------------------------------------- +# Trusted proxy handling +# +# SECURITY: never trust X-Forwarded-For unless the request came from a trusted +# reverse proxy. This matters because we use client IP for logging, rate limits, +# and (critically) hardware binding anti-multiwallet logic. +# +# Configure via env: +# RC_TRUSTED_PROXIES="127.0.0.1,::1,10.0.0.0/8" +# ---------------------------------------------------------------------------- + +def _parse_trusted_proxies() -> Tuple[set, list]: + raw = (os.environ.get("RC_TRUSTED_PROXIES", "") or "127.0.0.1,::1").strip() + ips = set() + nets = [] + for item in [x.strip() for x in raw.split(",") if x.strip()]: + try: + if "/" in item: + nets.append(ipaddress.ip_network(item, strict=False)) + else: + ips.add(item) + except Exception: + continue + return ips, nets + + +_TRUSTED_PROXY_IPS, _TRUSTED_PROXY_NETS = _parse_trusted_proxies() + + +def client_ip_from_request(req) -> str: + remote = (req.remote_addr or "").strip() + if not remote: + return "" + + trusted = False + try: + ip = ipaddress.ip_address(remote) + if remote in _TRUSTED_PROXY_IPS: + trusted = True + else: + for net in _TRUSTED_PROXY_NETS: + if ip in net: + trusted = True + break + except Exception: + trusted = remote in _TRUSTED_PROXY_IPS + + if not trusted: + return remote + + xff = (req.headers.get("X-Forwarded-For", "") or "").strip() + if not xff: + return remote + first = xff.split(",")[0].strip() + return first or remote + # Register Hall of Rust blueprint (tables initialized after DB_PATH is set) try: from hall_of_rust import hall_bp @@ -112,7 +168,7 @@ def _after(resp): "method": request.method, "path": request.path, "status": resp.status_code, - "ip": request.headers.get("X-Forwarded-For", request.remote_addr), + "ip": client_ip_from_request(request), "dur_ms": int(dur * 1000), } log.info(json.dumps(rec, separators=(",", ":"))) @@ -1653,9 +1709,7 @@ def submit_attestation(): data = request.get_json() # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) # Extract attestation data miner = data.get('miner') or data.get('miner_id') @@ -1854,9 +1908,7 @@ def enroll_epoch(): data = request.get_json() # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) miner_pk = data.get('miner_pubkey') miner_id = data.get('miner_id', miner_pk) # Use miner_id if provided device = data.get('device', {}) @@ -2216,9 +2268,7 @@ def register_withdrawal_key(): return jsonify({"error": "Invalid JSON body"}), 400 # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) miner_pk = data.get('miner_pk') pubkey_sr25519 = data.get('pubkey_sr25519') @@ -2269,9 +2319,7 @@ def request_withdrawal(): data = request.get_json() # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) miner_pk = data.get('miner_pk') amount = float(data.get('amount', 0)) destination = data.get('destination') @@ -2954,9 +3002,7 @@ def add_oui_deny(): data = request.get_json() # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) oui = data.get('oui', '').lower().replace(':', '').replace('-', '') vendor = data.get('vendor', 'Unknown') enforce = int(data.get('enforce', 0)) @@ -2981,9 +3027,7 @@ def remove_oui_deny(): data = request.get_json() # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) oui = data.get('oui', '').lower().replace(':', '').replace('-', '') with sqlite3.connect(DB_PATH) as conn: @@ -3043,9 +3087,7 @@ def attest_debug(): data = request.get_json() # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) miner = data.get('miner') or data.get('miner_id') if not miner: @@ -3711,9 +3753,7 @@ def wallet_transfer_OLD(): data = request.get_json() # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) from_miner = data.get('from_miner') to_miner = data.get('to_miner') amount_rtc = float(data.get('amount_rtc', 0)) @@ -4029,9 +4069,7 @@ def wallet_transfer_signed(): return jsonify({"error": pre.error, "details": pre.details}), 400 # Extract client IP (handle nginx proxy) - client_ip = request.headers.get("X-Forwarded-For", request.remote_addr) - if client_ip and "," in client_ip: - client_ip = client_ip.split(",")[0].strip() # First IP in chain + client_ip = client_ip_from_request(request) from_address = pre.details["from_address"] to_address = pre.details["to_address"] From e2901a6d7057ebd03f3812fe54d98f6e9a966cca Mon Sep 17 00:00:00 2001 From: firas lamouchi <122984046+firaslamouchi21@users.noreply.github.com> Date: Wed, 18 Feb 2026 23:26:47 +0300 Subject: [PATCH 16/73] docs: complete SEO overhaul and technical documentation expansion (#257) (#266) * docs: complete SEO overhaul and technical documentation expansion (#257) - Added robots.txt, sitemap.xml, and JSON-LD structured data - Created 4 technical pages (About, Mining, Tokenomics, Hardware) with 500+ words each - Implemented vintage hardware multiplier tables (PowerPC 2.5x focus) - Enhanced meta tags, Open Graph, and Twitter Cards across all pages - Strictly scoped to SEO and content - no infrastructure/Go changes. * refactor: SEO overhaul and HTML5 standards compliance - Replace deprecated tags with modern CSS @keyframes animations - Fix malformed meta tags and HTML validation errors in docs - Standardize canonical URLs and sitemap paths for SEO consistency - Verify 'Elyan Labs' branding across codebase and documentation - Maintain vintage terminal aesthetic while removing legacy elements --- docs/about.html | 248 +++++++++++++++ docs/hardware.html | 605 ++++++++++++++++++++++++++++++++++++ docs/index.html | 127 +++++++- docs/mining.html | 389 +++++++++++++++++++++++ docs/tokenomics.html | 435 ++++++++++++++++++++++++++ robots.txt | 39 +++ sitemap.xml | 80 +++++ web/light-client/index.html | 25 +- web/museum/museum.html | 26 +- 9 files changed, 1967 insertions(+), 7 deletions(-) create mode 100644 docs/about.html create mode 100644 docs/hardware.html create mode 100644 docs/mining.html create mode 100644 docs/tokenomics.html create mode 100644 robots.txt create mode 100644 sitemap.xml diff --git a/docs/about.html b/docs/about.html new file mode 100644 index 000000000..969b3a4b7 --- /dev/null +++ b/docs/about.html @@ -0,0 +1,248 @@ + + + + + + About RustChain | Proof-of-Attestation Blockchain Revolution + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Elyan Labs Logo +

About RustChain

+

Preserving computing history through blockchain innovation

+
+ +
+
+ We built RustChain to keep your PCs out of the landfill. +
+
+ + + +
+ + +
+

The Philosophy Behind Proof-of-Attestation

+

RustChain emerged from a simple observation: modern blockchain consensus mechanisms have lost touch with computing's physical reality. Proof-of-Work wastes energy on meaningless calculations, while Proof-of-Stake concentrates power among the wealthy. We envisioned something different—a system that honors the tangible history of computing hardware.

+ +

Our Proof-of-Attestation consensus mechanism represents a paradigm shift. Instead of rewarding computational waste or financial capital, RustChain rewards authenticity, entropy, and the preservation of computing history. Every miner proves they're running on real physical hardware through sophisticated cryptographic fingerprinting that reads the unique characteristics baked into silicon during manufacturing.

+ +

The Silicon Stratigraphy Revolution

+

At the heart of RustChain lies the concept of silicon stratigraphy—the study of hardware layers and their temporal signatures. Just as geologists read rock layers to understand Earth's history, RustChain reads hardware signatures to understand computing's evolution. Each CPU carries unique imperfections, timing variations, and thermal characteristics that serve as a fingerprint of its manufacturing era and usage history.

+ +

This approach transforms vintage hardware from obsolete technology into valuable network participants. A PowerPC G4 from 2003 isn't just old—it's a time capsule of early 2000s manufacturing techniques, carrying unique entropy signatures that cannot be replicated by modern processors or virtual machines.

+
+ + +
+

Our Mission: Hardware Preservation Through Incentives

+

RustChain exists to solve a critical problem: millions of functional vintage computers end up in landfills each year, despite representing decades of engineering innovation and cultural history. Traditional recycling often destroys these machines, erasing the unique characteristics that make them valuable to computing historians and enthusiasts.

+ +

By creating economic incentives for vintage hardware mining, RustChain transforms preservation from a niche hobby into a sustainable activity. Your old PowerBook G4, Pentium III system, or Amiga 500 isn't just a collector's item—it's an active participant in a cutting-edge blockchain network, earning real rewards for keeping operational.

+ +

The Environmental Impact

+

Modern blockchain networks consume enormous amounts of electricity for proof-of-work calculations. RustChain's approach is fundamentally different. Our network consumes minimal additional power because miners simply run attestation software on hardware that would otherwise be idle or discarded. A vintage laptop mining RustChain uses less electricity than a single modern gaming session while contributing to network security and hardware preservation.

+ +

The antiquity multipliers system ensures that the oldest, most historically significant hardware receives the highest rewards. This creates a powerful incentive to maintain and restore vintage machines rather than replace them with modern alternatives.

+
+ + +
+

Technical Innovation: Seven Layers of Hardware Truth

+

RustChain's attestation system employs seven distinct hardware verification layers, each examining different aspects of physical hardware characteristics. This multi-layered approach makes it virtually impossible for virtual machines or emulated systems to pass as genuine hardware.

+ +

The Seven Checks

+

1. Clock-Skew Analysis Measures microscopic timing imperfections in CPU oscillators. Real silicon exhibits unique drift patterns that vary with temperature and age.

+ +

2. Cache Timing Fingerprint Analyzes latency patterns across L1, L2, and L3 cache levels. Physical caches age unevenly, creating unique echo patterns.

+ +

3. SIMD Unit Identity Tests instruction execution timing for AltiVec (PowerPC), SSE/AVX (x86), or NEON (ARM) instruction sets.

+ +

4. Thermal Drift Entropy Collects entropy across different thermal states, from cold boot to saturation, capturing unique thermal response curves.

+ +

5. Instruction Path Jitter Measures cycle-level jitter across different execution pipelines, creating a unique timing fingerprint.

+ +

6. Device-Age Oracle Cross-references CPU models, release years, and firmware versions with entropy profiles to detect fake vintage hardware.

+ +

7. Anti-Emulation Detection Identifies hypervisor artifacts, time dilation effects, and other virtualization signatures.

+ +

Why This Matters

+

Traditional blockchain networks struggle with Sybil attacks—malicious actors creating multiple fake identities. RustChain's hardware attestation makes Sybil attacks exponentially expensive because each identity requires unique physical hardware. This creates a fundamentally more secure and decentralized network where one CPU truly equals one vote.

+
+ + +
+

The Flamekeeper Community

+

RustChain is more than technology—it's a movement of preservationists, retro computing enthusiasts, and blockchain innovators united by a common goal: keeping computing history alive. Our community, known as Flamekeepers, includes hardware hackers, vintage computer collectors, and blockchain developers who believe that the past has valuable lessons for the future.

+ +

Flamekeepers don't just mine—they restore, document, and share knowledge about vintage hardware. They maintain archives of technical manuals, create tutorials for hardware repair, and develop new software for old systems. RustChain provides the economic foundation that makes this preservation work sustainable.

+ +

Join the Movement

+

Whether you have a vintage PowerMac gathering dust, a Pentium system in the attic, or simply want to support hardware preservation, RustChain welcomes you. Our community values technical expertise, historical knowledge, and the passion that drives people to keep old machines running.

+ +

By participating in RustChain, you're not just mining cryptocurrency—you're becoming part of a living museum of computing history, where every transaction helps preserve the machines that built our digital world.

+
+ +
+ +
+

Maintained by Elyan Labs · Built with love and BIOS timestamps

+

More dedicated compute than most colleges. $12K invested. $60K+ retail value.

+
+ + + diff --git a/docs/hardware.html b/docs/hardware.html new file mode 100644 index 000000000..1b85dcc9f --- /dev/null +++ b/docs/hardware.html @@ -0,0 +1,605 @@ + + + + + + Hardware Requirements | RustChain Compatible Systems + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Elyan Labs Logo +

Hardware Requirements

+

Compatible systems and antiquity multiplier guide

+
+ +
+
+ PowerPC G4 earns 2.5x. Real hardware only. No VMs allowed. +
+
+ + + +
+ + +
+

Antiquity Multiplier System

+

RustChain's antiquity multiplier system rewards vintage hardware with higher RTC payouts based on historical significance, rarity, and preservation value. This creates economic incentives to maintain and restore vintage computing systems rather than discarding them as e-waste.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ArchitectureMultiplierEraExamplesReward Rate
PowerPC G42.5x2003PowerBook G4, iMac G4Highest
PowerPC G52.0x2004PowerMac G5, iMac G5Very High
PowerPC G31.8x1999iMac G3, PowerBook G3High
Pentium 41.5x2000Dell Dimension, HP PavilionAbove Average
Retro x861.4xpre-2010Core 2 Duo, early Core i-seriesAverage
Apple Silicon1.2x2020+M1/M2/M3 MacBook Air/ProSlightly Above
Modern x86_641.0xcurrentRyzen, modern Core i-seriesBaseline
Virtual Machines0.0xanyAll VMs, containers, cloudBlocked
+ +

Multiplier Rationale

+

The multiplier system reflects several factors that determine hardware value to the RustChain network:

+ +

Historical Significance Systems that represent pivotal moments in computing history receive higher multipliers. PowerPC G4 systems, for example, represent Apple's innovative design era and transition period.

+ +

Rarity and Preservation Value As hardware becomes scarcer due to age and attrition, multipliers increase to incentivize preservation of remaining examples.

+ +

Maintenance Complexity Older hardware requires more expertise, replacement parts, and maintenance effort. Higher multipliers compensate for these operational challenges.

+ +

Energy Efficiency Despite their age, many vintage systems consume less power than modern mining rigs, providing environmental benefits.

+
+ + +
+

Top Tier Mining Systems (2.0x - 2.5x)

+

These systems offer the highest rewards and represent the pinnacle of vintage hardware mining. They're highly sought after by RustChain miners for their exceptional multiplier values and historical significance.

+ +

PowerPC G4 Systems (2.5x Multiplier)

+

PowerPC G4 systems represent the golden age of Apple's PowerPC era and offer the highest mining rewards at 2.5x. These systems combine historical significance, relative availability, and excellent mining performance.

+ +

Recommended PowerPC G4 Models:

+
✓ PowerBook G4 (Titanium) - 1.0-1.5 GHz, excellent portability
+✓ PowerBook G4 (Aluminum) - 1.33-1.67 GHz, final G4 laptops
+✓ iMac G4 "Lampshade" - 700-1250 MHz, iconic design
+✓ PowerMac G4 (Quicksilver) - 733-1250 MHz, expandable tower
+✓ PowerMac G4 (MDD) - 867-1250 MHz, dual processor options
+✓ eMac G4 - 700-1.42 GHz, educational market systems
+✓ iBook G4 - 800-1.42 GHz, consumer laptops
+ +

PowerPC G4 Mining Tips:

+
    +
  • Upgrade to maximum RAM (typically 1-2 GB) for better performance
  • +
  • Replace thermal paste and clean heatsinks for 24/7 operation
  • +
  • Install lightweight Linux distributions (Ubuntu 16.04, Debian 8)
  • +
  • Consider SSD upgrades for faster boot times and lower power consumption
  • +
  • Monitor temperatures carefully - G4 systems can run hot under load
  • +
+ +

PowerPC G5 Systems (2.0x Multiplier)

+

PowerPC G5 systems were Apple's final PowerPC generation before the Intel transition. They offer excellent mining performance at 2.0x and represent the pinnacle of PowerPC technology.

+ +

Recommended PowerPC G5 Models:

+
✓ PowerMac G5 (Late 2004) - 1.8-2.5 GHz, liquid cooling options
+✓ PowerMac G5 (Early 2005) - 1.8-2.7 GHz, improved cooling
+✓ PowerMac G5 (Late 2005) - 2.0-2.7 GHz, final G5 models
+✓ iMac G5 (ALS) - 1.8-2.1 GHz, ambient light sensor
+✓ iMac G5 (iSight) - 1.9-2.1 GHz, built-in iSight camera
+✓ Xserve G5 - 2.0-2.3 GHz, server-grade hardware
+ +

PowerPC G5 Considerations:

+
    +
  • Liquid-cooled models require careful maintenance and leak checking
  • +
  • Power consumption is higher than G4 systems but still reasonable
  • +
  • 64-bit architecture provides better performance for attestation algorithms
  • +
  • Maximum RAM typically 4-8 GB, excellent for mining operations
  • +
  • Loud fans - consider noise reduction for 24/7 home mining
  • +
+
+ + +
+

Mid Tier Systems (1.4x - 1.8x)

+

These systems offer solid mining rewards with good availability and reasonable maintenance requirements. They represent excellent entry points for vintage hardware mining.

+ +

PowerPC G3 Systems (1.8x Multiplier)

+

PowerPC G3 systems launched Apple's comeback in the late 1990s with colorful, innovative designs. They offer strong mining rewards at 1.8x and are widely available.

+ +

Recommended PowerPC G3 Models:

+
✓ iMac G3 (Bondi Blue) - 233 MHz, original colorful iMac
+✓ iMac G3 (Colors) - 233-333 MHz, fruit colors
+✓ iMac G3 (Slot Loading) - 350-600 MHz, improved design
+✓ PowerBook G3 (Wallstreet) - 233-300 MHz, professional laptop
+✓ PowerBook G3 (Lombard) - 333-400 MHz, thinner design
+✓ PowerBook G3 (Pismo) - 400-500 MHz, FireWire support
+✓ PowerMac G3 (Blue & White) - 300-450 MHz, tower design
+✓ PowerMac G3 (Graphite) - 350-500 MHz, final G3 towers
+ +

Pentium 4 Systems (1.5x Multiplier)

+

Pentium 4 systems dominated the early 2000s PC market and offer excellent mining accessibility at 1.5x. They're widely available and often free or very cheap.

+ +

Recommended Pentium 4 Models:

+
✓ Dell Dimension 2400/4600 - 2.0-2.8 GHz, business systems
+✓ HP Pavilion a000 series - 2.0-3.2 GHz, consumer desktops
+✓ Compaq Presario 6000 series - 1.8-2.8 GHz, budget systems
+✓ IBM ThinkCentre A50 - 2.0-2.8 GHz, corporate desktops
+✓ Gateway 500 series - 1.7-2.4 GHz, consumer systems
+✓ eMachines T series - 2.0-2.6 GHz, budget desktops
+ +

Pentium 4 Mining Advantages:

+
    +
  • Extremely common and often free from recycling centers
  • +
  • Standard ATX components make repairs and upgrades easy
  • +
  • Good Linux compatibility with most distributions
  • +
  • Reasonable power consumption for 24/7 operation
  • +
  • Wide availability of replacement parts and documentation
  • +
+
+ + +
+

Modern Hardware (1.0x - 1.4x)

+

While vintage hardware offers the highest rewards, modern systems can still mine effectively and provide good entry points for new miners. They offer better performance and reliability with lower maintenance requirements.

+ +

Retro x86 Systems (1.4x Multiplier)

+

Retro x86 systems from the pre-2010 era offer solid mining rewards at 1.4x. These systems represent the transition from early 2000s to modern computing.

+ +

Recommended Retro x86 Systems:

+
✓ Core 2 Duo systems (2006-2009) - 1.8-3.33 GHz, excellent value
+✓ Early Core i-series (2008-2010) - 2.66-3.2 GHz, 64-bit capable
+✓ AMD Athlon 64 X2 (2005-2009) - 2.0-3.2 GHz, good performance
+✓ Intel Core Duo (2006-2008) - 1.6-2.33 GHz, laptop processors
+✓ Pentium Dual-Core (2007-2010) - 1.6-3.2 GHz, budget options
+ +

Apple Silicon (1.2x Multiplier)

+

Apple Silicon Macs offer excellent efficiency and modern performance at 1.2x. While newer, they represent a significant architectural shift to ARM-based computing.

+ +

Recommended Apple Silicon Systems:

+
✓ MacBook Air M1 - 3.2 GHz, 8-core, excellent efficiency
+✓ MacBook Pro M1/M2 - 3.2-3.5 GHz, 8-10 core, professional
+✓ Mac mini M1/M2 - 3.2-3.5 GHz, compact desktop solution
+✓ iMac M1 - 3.2 GHz, all-in-one design
+✓ MacBook Air M2 - 3.5 GHz, improved performance
+✓ Mac Studio M1/M2 - 2.0-3.5 GHz, workstation class
+ +

Modern x86_64 Systems (1.0x Multiplier)

+

Modern x86_64 systems provide baseline mining rewards at 1.0x but offer excellent performance, reliability, and availability.

+ +

Recommended Modern Systems:

+
✓ AMD Ryzen 3/5/7 (2017+) - 3.0-4.9 GHz, excellent value
+✓ Intel Core i3/i5/i7 (2015+) - 2.5-5.0 GHz, widely available
+✓ AMD EPYC (2017+) - 2.0-3.4 GHz, server-grade reliability
+✓ Intel Xeon (2015+) - 1.7-4.0 GHz, professional systems
+
+ + +
+

Minimum System Requirements

+

While RustChain can run on virtually any real hardware, certain minimum requirements ensure stable 24/7 mining operation and successful hardware attestation.

+ +

Basic Requirements

+
✓ Genuine physical CPU (no virtualization or containers)
+✓ Minimum 512 MB RAM (1 GB+ recommended)
+✓ 5 GB available storage space
+✓ Stable internet connection
+✓ Supported operating system (Linux, macOS, Windows)
+✓ Hardware attestation capability (see below)
+ +

Operating System Support

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OSMinimum VersionRecommended VersionNotes
LinuxKernel 2.6.32Ubuntu 18.04+ / Debian 10+Best compatibility, lightweight options
macOS10.6 Snow Leopard10.14 Mojave+ (Intel), 11.0+ (Apple Silicon)Excellent for PowerPC and modern Macs
WindowsWindows XPWindows 10/11Limited legacy support, modern preferred
+ +

Hardware Attestation Requirements

+

RustChain's Proof-of-Attestation system requires hardware with specific characteristics:

+ +

CPU Timer Access The system must provide access to high-resolution timers for clock-skew analysis. Most modern CPUs support this, but some very old systems may have limitations.

+ +

Cache Hierarchy Multi-level cache (L1/L2/L3) is required for cache timing fingerprinting. Single-cache systems may have reduced attestation accuracy.

+ +

SIMD Instructions Support for vector instructions (SSE, AVX, AltiVec, NEON) is required for SIMD identity testing. Most post-1999 processors include these.

+ +

Thermal Sensors Access to CPU temperature sensors enables thermal drift entropy collection. Most systems support this, but some embedded systems may not.

+ +

Network Requirements

+
✓ Outbound HTTPS (port 443) to attestation nodes
+✓ Stable internet connection (minimum 1 Mbps)
+✓ DNS resolution for rustchain.org domains
+✓ No restrictive firewalls blocking outbound connections
+✓ Optional: Static IP for improved network stability
+
+ + +
+

Hardware Optimization Tips

+

Maximize your mining rewards and system stability with these hardware optimization techniques specifically tailored for vintage computing systems.

+ +

Thermal Management

+

Proper thermal management is critical for 24/7 mining operations, especially with vintage hardware that may have degraded cooling systems.

+ +

Cooling System Maintenance:

+
✓ Clean all heatsinks and fans thoroughly with compressed air
+✓ Replace thermal paste every 2-3 years (use Arctic Silver 5 or similar)
+✓ Check fan bearings - replace noisy or failing fans
+✓ Ensure proper case ventilation and airflow paths
+✓ Consider aftermarket cooling for hot-running systems
+✓ Monitor temperatures during initial mining sessions
+ +

Temperature Monitoring:

+
    +
  • PowerPC G4/G5: Keep CPU below 80°C under load
  • +
  • Pentium 4: Stay under 70°C for Prescott cores, 75°C for Northwood
  • +
  • Core 2 Duo: Maintain below 75°C for optimal longevity
  • +
  • Modern CPUs: Keep under 85°C (most have thermal throttling)
  • +
+ +

Power Supply Optimization

+

Stable power delivery is essential for reliable attestation and mining operations.

+ +

Power Supply Recommendations:

+
✓ Use quality power supplies with 80+ certification
+✓ Replace old PSUs (capacitors degrade after 5-7 years)
+✓ Ensure adequate wattage (add 20% margin for safety)
+✓ Consider UPS protection for all mining systems
+✓ Check voltage rails with monitoring software
+✓ Replace failing PSUs immediately to prevent hardware damage
+ +

Storage Optimization

+

Fast, reliable storage improves system responsiveness and reduces boot times for mining operations.

+ +

Storage Recommendations:

+
✓ Upgrade vintage systems to SSDs where possible
+✓ Use lightweight operating systems (minimal Linux distributions)
+✓ Disable unnecessary services and startup programs
+✓ Regular disk maintenance and cleanup
+✓ Consider compact flash or DOM storage for embedded systems
+✓ Backup mining configurations and wallet data regularly
+ +

Memory Optimization

+

Adequate RAM ensures smooth attestation processes and mining operation.

+ +

Memory Tips:

+
    +
  • Upgrade to maximum supported RAM for best performance
  • +
  • Use matching memory modules for dual-channel operation
  • +
  • Clean memory contacts with isopropyl alcohol if issues occur
  • +
  • Test memory with memtest86+ before extended mining sessions
  • +
  • Consider memory-mapped storage for systems with limited RAM
  • +
+
+ + +
+

Common Hardware Issues and Solutions

+

Vintage hardware may present unique challenges during mining operations. Here are common issues and their solutions.

+ +

Attestation Failures

+

Clock-Skew Test Failures Often caused by unstable power supplies or excessive background processes. Solutions:

+
✓ Replace aging power supply with quality unit
+✓ Close unnecessary applications and services
+✓ Disable power management features that affect CPU timing
+✓ Check for failing capacitors on motherboard
+✓ Use dedicated mining OS installation
+ +

Cache Timing Issues May indicate overheating or degraded cache memory:

+
✓ Improve cooling system and reduce temperatures
+✓ Test with different memory configurations
+✓ Update motherboard BIOS/firmware if available
+✓ Check for motherboard component degradation
+ +

Hardware-Specific Issues

+

PowerPC G4 Liquid Cooling Some high-end G4 systems used liquid cooling that can fail:

+
✓ Check coolant level and color regularly
+✓ Look for leaks around CPU and radiator
+✓ Replace coolant every 2-3 years
+✓ Consider air-cooling upgrades for reliability
+ +

Pentium 4 Prescott Heat Prescott cores run extremely hot and require robust cooling:

+
✓ Upgrade to aftermarket CPU cooler
+✓ Ensure case has excellent airflow
+✓ Monitor temperatures closely during mining
+✓ Consider undervolting if motherboard supports it
+ +

Capacitor Plague Systems from 1999-2007 often have failing capacitors:

+
✓ Inspect motherboard for bulging or leaking capacitors
+✓ Replace capacitors proactively to prevent failure
+✓ Use high-quality replacement capacitors (Panasonic, Rubycon)
+✓ Consider professional motherboard repair if needed
+ +

Network Connectivity Issues

+

DNS Resolution Problems Vintage systems may have outdated DNS configurations:

+
✓ Use modern DNS servers (8.8.8.8, 1.1.1.1)
+✓ Update /etc/hosts file with rustchain.org IPs if needed
+✓ Check firewall settings blocking outbound connections
+✓ Verify network adapter drivers are current
+ +

Performance Optimization

+

Slow Attestation Older systems may take longer to complete attestation:

+
✓ Be patient - attestation can take 5-10 minutes on vintage hardware
+✓ Close all unnecessary applications during attestation
+✓ Use lightweight operating systems optimized for old hardware
+✓ Consider hardware upgrades if attestation consistently fails
+
+ +
+ +
+

Maintained by Elyan Labs · Built with love and BIOS timestamps

+

More dedicated compute than most colleges. $12K invested. $60K+ retail value.

+
+ + + diff --git a/docs/index.html b/docs/index.html index 57e131fad..2161d9543 100644 --- a/docs/index.html +++ b/docs/index.html @@ -3,8 +3,30 @@ - RustChain | Proof-of-Attestation Blockchain + RustChain | Proof-of-Attestation Blockchain - Vintage Hardware Mining + + + + + + + + + + + + + + + + + + + + + + + + + @@ -142,9 +257,9 @@

RustChain

- +
If it runs DOOM, it runs RustChain. -
+