
Satori Network

Satori is a decentralized data stream network built on Nostr. Neurons (clients) discover, subscribe to, predict, and publish data streams across a network of Nostr relays.

Architecture

┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Neuron A   │────▶│  Nostr Relay │◀────│   Neuron B   │
│  (publisher) │     │   (strfry)   │     │ (subscriber) │
└──────────────┘     └──────────────┘     └──────────────┘
       │                    ▲                     │
       │              ┌─────┴─────┐               │
       └─────────────▶│  Relay 2  │◀──────────────┘
                      └───────────┘
  • Neurons connect to multiple relays simultaneously
  • Relays store and forward Nostr events (parameterized replaceable)
  • Central server provides relay discovery (peers table), but neurons also maintain a local relay list
  • All data flows through Nostr — the central server never sees stream data

Nostr Event Kinds

All Satori events are parameterized replaceable (kinds 34600-34603). Each event includes a d tag set to the stream_name, so the relay keeps only the latest event per (pubkey, kind, stream_name).

| Kind  | Name                     | Content                                | d-tag       | Notes |
|-------|--------------------------|----------------------------------------|-------------|-------|
| 34600 | Datastream Announcement  | Stream metadata (JSON)                 | stream_name | Public, discoverable. One per stream per publisher. |
| 34601 | Observation Data         | Observation value (JSON or encrypted)  | stream_name | Relay keeps latest value per stream. Free streams are plaintext; paid streams are encrypted per-subscriber. |
| 34602 | Subscription Announce    | Subscription info (JSON)               | stream_name | Public. Unsubscribe replaces subscribe (same d-tag). |
| 34603 | Payment Notification     | Payment proof (encrypted)              | stream_name | Encrypted DM to provider. |

Event Tags

All events include:

  • ["d", stream_name] — parameterized replaceable identifier
  • ["stream", stream_name] — stream identifier
  • ["satori", "..."] — Satori protocol marker

Observations also include:

  • ["seq", "123"] — sequence number

Announcements may include:

  • ["t", "bitcoin"] — topic tags for discovery
  • ["source_stream", "btc-price"] — source lineage (for prediction streams)
  • ["source_pubkey", "abc123"] — source provider pubkey

Data Model

Six local SQLite tables managed by NetworkDB:

subscriptions

Streams we're consuming from other publishers.

| Column            | Type    | Notes |
|-------------------|---------|-------|
| stream_name       | TEXT    | Stream identifier |
| relay_url         | TEXT    | Where we found it |
| provider_pubkey   | TEXT    | Publisher's Nostr pubkey |
| name, description | TEXT    | Display metadata |
| cadence_seconds   | INTEGER | Expected interval between observations |
| price_per_obs     | INTEGER | 0 = free |
| encrypted         | INTEGER | 0 = plaintext |
| tags              | TEXT    | JSON array of topic tags |
| active            | INTEGER | 1 = subscribed, 0 = unsubscribed |
| stale_since       | INTEGER | When we last detected staleness |

UNIQUE (stream_name, provider_pubkey)
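
For illustration, the subscriptions table could be declared roughly as follows. This is a sketch only; the actual DDL lives in neuron-lite/satorineuron/network_db.py, and the defaults shown are assumptions.

import sqlite3

# Illustrative schema; columns follow the table above, defaults are assumptions.
conn = sqlite3.connect("network.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS subscriptions (
    stream_name      TEXT NOT NULL,
    relay_url        TEXT,
    provider_pubkey  TEXT NOT NULL,
    name             TEXT,
    description      TEXT,
    cadence_seconds  INTEGER,
    price_per_obs    INTEGER DEFAULT 0,
    encrypted        INTEGER DEFAULT 0,
    tags             TEXT,
    active           INTEGER DEFAULT 1,
    stale_since      INTEGER,
    UNIQUE (stream_name, provider_pubkey)
)
""")
conn.commit()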

observations

Received data points from subscribed streams.

| Column          | Type    | Notes |
|-----------------|---------|-------|
| stream_name     | TEXT    | Which stream |
| provider_pubkey | TEXT    | Who published it |
| seq_num         | INTEGER | Publisher's sequence number |
| observed_at     | INTEGER | Publisher's reported timestamp |
| received_at     | INTEGER | Our clock when we got it |
| value           | TEXT    | The observation data |
| event_id        | TEXT    | Nostr event hash (used for dedup) |
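
Dedup by event_id can be expressed as an insert that ignores rows whose event hash is already stored. A self-contained sketch follows; the UNIQUE index on event_id is an assumption about the real schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE observations (
    stream_name TEXT, provider_pubkey TEXT, seq_num INTEGER,
    observed_at INTEGER, received_at INTEGER, value TEXT,
    event_id TEXT UNIQUE
)""")
row = ("btc-price", "abc123", 123, 1700000000, 1700000005, "42.5", "e4f1c2...")
conn.execute("INSERT OR IGNORE INTO observations VALUES (?, ?, ?, ?, ?, ?, ?)", row)
conn.execute("INSERT OR IGNORE INTO observations VALUES (?, ?, ?, ?, ?, ?, ?)", row)  # duplicate event_id, ignored
print(conn.execute("SELECT COUNT(*) FROM observations").fetchone()[0])  # prints 1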

predictions

Predictions generated by the engine for subscribed streams.

| Column          | Type    | Notes |
|-----------------|---------|-------|
| stream_name     | TEXT    | Source stream being predicted |
| provider_pubkey | TEXT    | Source stream's publisher |
| observation_seq | INTEGER | Which observation triggered this |
| value           | TEXT    | Predicted value |
| observed_at     | INTEGER | Source observation timestamp |
| created_at      | INTEGER | When prediction was made |
| published       | INTEGER | 1 = broadcast to network |

publications

Streams we're publishing (predictions or original data sources).

| Column                 | Type        | Notes |
|------------------------|-------------|-------|
| stream_name            | TEXT UNIQUE | Our stream identifier |
| source_stream_name     | TEXT        | If prediction: source subscription stream |
| source_provider_pubkey | TEXT        | If prediction: source publisher |
| name, description      | TEXT        | Display metadata |
| cadence_seconds        | INTEGER     | NULL or 0 = no schedule (externally fed) |
| price_per_obs          | INTEGER     | 0 = free |
| encrypted              | INTEGER     | 0 = plaintext |
| last_published_at      | INTEGER     | Timestamp of last publish |
| last_seq_num           | INTEGER     | Auto-incrementing sequence counter |

data_sources

Configuration for auto-fetched external data.

| Column          | Type        | Notes |
|-----------------|-------------|-------|
| stream_name     | TEXT UNIQUE | Matches publication stream_name |
| url             | TEXT        | Fetch URL (empty = externally fed via API) |
| method          | TEXT        | GET or POST |
| headers         | TEXT        | JSON object of HTTP headers |
| cadence_seconds | INTEGER     | 0 = no auto-fetch |
| parser_type     | TEXT        | json_path or python |
| parser_config   | TEXT        | Dot-notation path or Python code |

relays

Known Nostr relays.

| Column      | Type        | Notes |
|-------------|-------------|-------|
| relay_url   | TEXT UNIQUE | WebSocket URL |
| first_seen  | INTEGER     | When we first learned of it |
| last_active | INTEGER     | Last successful interaction |

Running a Relay

Relay operators earn rewards for running reliable relays on the Satori network. To register your relay:

  1. Clone the relay repository

    git clone https://github.com/SatoriNetwork/satori-relay.git
    cd satori-relay
  2. Get your Nostr public key from the Relay Settings card on the neuron dashboard. Copy it with the copy button.

  3. Configure the relay — paste your Nostr public key into the relay's .env file:

    NOSTR_PUBKEY=your_hex_pubkey_here
    RELAY_DOMAIN=your-relay.example.com
    

    This sets up NIP-11 so the relay's information document includes a self field matching your pubkey, which proves you own the relay.

  4. Start the relay

    docker compose up -d

    The relay will be available at wss://your-relay.example.com. Make sure port 443 is open and your domain points to the server.

  5. Register with the network — back on the neuron dashboard, enter your relay URL (wss://your-relay.example.com) in the Relay Settings card and click Register Relay. The central server will:

    • Fetch the relay's NIP-11 information document
    • Verify the self field matches your neuron's Nostr pubkey (see the sketch at the end of this section)
    • Register the relay so other neurons can discover it

    The status indicator shows "Relay verified and registered" on success, or an error message if verification fails.

Once registered, other neurons will discover your relay through the central server's relay list and connect to it for stream discovery and data exchange.
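
The verification in step 5 amounts to fetching the relay's NIP-11 document over HTTPS and comparing its self field to the registering neuron's pubkey. A rough sketch of that check (function name and error handling are illustrative, not the central server's actual code):

import json
import urllib.request

def relay_self_matches(relay_url, expected_pubkey):
    """Fetch the relay's NIP-11 information document and compare its self field."""
    # NIP-11 documents are served over HTTP(S) at the relay's address when the
    # client sends Accept: application/nostr+json.
    http_url = relay_url.replace("wss://", "https://").replace("ws://", "http://")
    req = urllib.request.Request(http_url, headers={"Accept": "application/nostr+json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        info = json.loads(resp.read())
    return info.get("self") == expected_pubkey

print(relay_self_matches("wss://your-relay.example.com", "your_hex_pubkey_here"))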

Features

Stream Discovery

Neurons discover streams by connecting to relays and querying for kind 34600 (announcement) events. For each discovered stream, the neuron:

  1. Fetches the latest observation (kind 34601) from the relay
  2. Checks freshness using is_likely_active() — compares observation age to cadence (see the sketch below)
  3. Saves the observation to the local DB if it's new (dedup by event_id)
  4. Displays the stream in the UI with active/inactive status

Streams with no cadence are always considered active.
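
A plausible reading of is_likely_active() is that an observation is fresh if its age is within a small multiple of the stream's cadence. The grace multiplier below is an assumption; the real logic lives in neuron-lite/start.py.

import time

def is_likely_active(observed_at, cadence_seconds, grace=2.0):
    """Treat a stream as active if its latest observation is recent relative to its cadence."""
    if not cadence_seconds:  # no cadence (None or 0): always considered active
        return True
    age = time.time() - observed_at
    return age <= cadence_seconds * grace

print(is_likely_active(int(time.time()) - 90, 60))  # 90s-old observation, 60s cadence: within the 2x grace window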

Subscribing

Clicking Subscribe in the relay drawer:

  • Saves the subscription to the local DB
  • Starts a live listener on that relay
  • The listener receives observations in real time and saves them

Predicting

Clicking Predict on a subscribed stream:

  • Creates a publication named {stream_name}_pred linked to the source
  • The mock engine runs on each received observation and echoes the value as a prediction
  • Predictions are saved to the DB and published to all connected relays

Predicting is decoupled from subscribing — you can subscribe without predicting.
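
In sketch form, the mock engine's behaviour is just an echo of the incoming value (the function name here is illustrative; the real engine hook lives in neuron-lite/start.py):

def mock_predict(observation_value):
    """Mock engine: echo the received observation value back as the prediction."""
    return observation_value

print(mock_predict("42.5"))  # "42.5"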

Publishing Data Sources

Two ways to publish original data:

Auto-fetched: Create a data source with a URL, parser, and cadence. The fetch loop runs every 5 minutes; for each data source that is due based on its cadence, it fetches the URL, runs the parser, and publishes the extracted value.

Externally fed: Create a data source with no URL. Push data via the API:

POST /api/network/publish
Content-Type: application/json

{"stream_name": "my-stream", "value": "42.5"}

Parser Types

JSON Path (json_path): Dot-notation traversal. Example: data.markets.0.price traverses {"data": {"markets": [{"price": 42.5}]}} and yields 42.5.
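
Such a path can be resolved by walking the parsed JSON one segment at a time, with numeric segments treated as list indices. A minimal sketch, not necessarily the parser's exact behaviour:

import json

def resolve_json_path(text, path):
    """Walk a parsed JSON document along a dot-notation path like 'data.markets.0.price'."""
    node = json.loads(text)
    for segment in path.split("."):
        if isinstance(node, list):
            node = node[int(segment)]  # numeric segments index into lists
        else:
            node = node[segment]       # other segments are object keys
    return str(node)

print(resolve_json_path('{"data": {"markets": [{"price": 42.5}]}}', "data.markets.0.price"))  # 42.5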

Python (python): Arbitrary code receiving text (raw response body), returning the extracted value. Example:

import json
data = json.loads(text)  # text holds the raw response body
return str(data['price'] * 100)  # the returned string is published as the observation value

Staleness Management

The reconcile loop (every 5 minutes):

  1. Checks local observations for each subscription against cadence
  2. If stale locally: hunts across other relays for the stream
  3. If found on another relay: migrates the subscription
  4. If not found anywhere: marks the stream as stale

Streams with no cadence (None or 0) are never considered stale.

Relay Management

Relays come from two sources:

  • Central server: peers table provides registered relay URLs
  • Local: user adds relays manually via the UI

Both are merged. The neuron connects to relays as needed for subscriptions and publishing.
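
Conceptually, the merge is just a de-duplicated union of the two lists (a sketch, not the neuron's exact code):

def merged_relays(server_relays, local_relays):
    """Union of server-provided and locally added relay URLs, preserving order, without duplicates."""
    return list(dict.fromkeys(list(server_relays) + list(local_relays)))

print(merged_relays(["wss://relay-a.example.com"], ["wss://relay-a.example.com", "wss://relay-b.example.com"]))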

API Endpoints

Subscriptions

| Method | Path                       | Description |
|--------|----------------------------|-------------|
| GET    | /api/network/subscriptions | List active subscriptions (?all=1 for inactive too) |
| POST   | /api/network/subscribe     | Subscribe to a stream |
| POST   | /api/network/unsubscribe   | Unsubscribe from a stream |
| GET    | /api/network/observations  | Get observations + predictions for a stream |

Publications

| Method | Path                       | Description |
|--------|----------------------------|-------------|
| GET    | /api/network/publications  | List active publications (?all=1 for inactive too) |
| POST   | /api/network/predict       | Start predicting a subscribed stream |
| POST   | /api/network/stop-predict  | Stop predicting |
| POST   | /api/network/publish       | Push a value to an existing publication |

Data Sources

| Method | Path                          | Description |
|--------|-------------------------------|-------------|
| GET    | /api/network/data-source      | Get data source config by ?stream_name= |
| POST   | /api/network/data-source      | Create or update a data source + publication |
| POST   | /api/network/data-source/test | Test fetch + parse without saving |

Relays

| Method | Path                       | Description |
|--------|----------------------------|-------------|
| GET    | /api/network/relays        | List all relays (merged server + local) |
| POST   | /api/network/relay         | Add a relay URL to local DB |
| DELETE | /api/network/relay         | Remove a local relay |
| GET    | /api/network/streams/relay | Discover streams on a specific relay |

UI

The dashboard provides:

  • Subscriptions table: stream name (clickable for data drawer), relay, cadence, status badge, Unsubscribe/Predict/Stop Predicting buttons
  • Observations drawer: Chart.js line chart (blue = observations, orange dashed = predictions shifted forward one cadence), tabs for raw observation and prediction tables
  • Publications table: stream name (clickable), source linkage, cadence, seq count, status. Clicking a data source publication opens the edit form; clicking a prediction publication opens the data drawer.
  • Data source form: stream name, URL (optional), method, cadence (optional, min 15 min), headers, parser type + config, Test button with interactive JSON tree, CSV bulk upload
  • Relays table: URL, last active, source badge, delete button for local relays. Clicking opens discovery drawer with Subscribe/Unsubscribe per stream.

Key Files

| File | Description |
|------|-------------|
| neuron-lite/start.py | Main startup, reconcile loop, discovery, listeners, fetch loop, engine |
| neuron-lite/satorineuron/network_db.py | SQLite storage, all 6 tables, queries |
| web/routes.py | Flask API endpoints |
| web/templates/dashboard.html | Dashboard UI |
| satorilib/src/satorilib/satori_nostr/client.py | Nostr client (connect, publish, subscribe, discover) |
| satorilib/src/satorilib/satori_nostr/models.py | Data models, event kind constants |