
# API Reference

This repo currently contains two score-schema lineages:

- Canonical public/product read surface: `scores`
- Legacy engine/SQLAlchemy lineage: `an_scores` + `dimension_scores`

Unless explicitly noted otherwise, public/product-facing read flows read from `scores`. See docs/CANONICAL-SCORE-CONTRACT.md and docs/SCORE-CONTRACT-CONSUMER-AUDIT.md.

## Scoring Endpoint

### POST /v1/score

Legacy/internal scoring-engine endpoint.

Calculates an AN Score from explicit dimension inputs. In the legacy engine lineage, this path persists score records via the SQLAlchemy-backed scoring layer rather than the canonical public scores read surface.

Request body:

```json
{
  "service_slug": "stripe",
  "dimensions": {
    "I1": 9.5,
    "I2": 9.0,
    "I3": 8.5,
    "I4": 9.5,
    "I5": 9.0,
    "I6": 8.0,
    "I7": 9.0,
    "F1": 9.0,
    "F2": 9.5,
    "F3": 9.5,
    "F4": 8.5,
    "F5": 10.0,
    "F6": 9.0,
    "F7": 9.0,
    "O1": 9.0,
    "O2": 9.0,
    "O3": 8.0
  },
  "access_dimensions": {
    "A1": 6.0,
    "A2": 5.5,
    "A3": 6.0,
    "A4": 7.5,
    "A5": 8.0,
    "A6": 8.0
  },
  "evidence_count": 72,
  "freshness": "12 minutes ago",
  "probe_types": ["health", "auth", "schema", "load", "idempotency"],
  "production_telemetry": true,
  "probe_freshness": "18 minutes ago",
  "probe_latency_distribution_ms": {"p50": 120, "p95": 340, "p99": 620, "samples": 9},
  "hydrate_probe_telemetry": true
}
```

Response body:

```json
{
  "service_slug": "stripe",
  "score": 8.9,
  "execution_score": 9.1,
  "access_readiness_score": 8.4,
  "aggregate_recommendation_score": 8.9,
  "an_score_version": "0.2",
  "confidence": 0.98,
  "tier": "L4",
  "tier_label": "Native",
  "explanation": "Stripe scores 8.9 because idempotency supports safe retries, but auth flow friction interrupts agent autonomy.",
  "dimension_snapshot": {
    "dimensions": { "I1": 9.5, "...": 9.0 },
    "raw_weights": { "I1": 0.1, "...": 0.03 },
    "normalized_weights": { "I1": 0.1, "...": 0.03 },
    "category_scores": {
      "infrastructure": 8.9,
      "interface": 9.1,
      "operational": 8.7
    }
  },
  "score_id": "uuid",
  "calculated_at": "2026-03-03T22:11:00+00:00"
}
```

`hydrate_probe_telemetry` is optional. When `true`, the API auto-hydrates `probe_freshness` and `probe_latency_distribution_ms` from the latest stored probe result when those fields are omitted.
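The hydration fallback can be pictured as a small merge step. This is an illustrative sketch, not the actual server code; the function name and probe-record shape are assumptions.

```python
# Sketch of the hydrate_probe_telemetry fallback: fields explicitly present in
# the request win; omitted telemetry fields are backfilled from the latest
# stored probe result. Names and shapes here are illustrative assumptions.

def hydrate_request(request: dict, latest_probe: dict) -> dict:
    hydrated = dict(request)
    if not hydrated.get("hydrate_probe_telemetry"):
        return hydrated  # hydration not requested; pass through untouched
    for field in ("probe_freshness", "probe_latency_distribution_ms"):
        if field not in hydrated and field in latest_probe:
            hydrated[field] = latest_probe[field]
    return hydrated

req = {"service_slug": "stripe", "hydrate_probe_telemetry": True}
probe = {
    "probe_freshness": "18 minutes ago",
    "probe_latency_distribution_ms": {"p50": 120, "p95": 340},
}
print(hydrate_request(req, probe)["probe_freshness"])  # → 18 minutes ago
```

Note that an explicitly supplied `probe_freshness` would not be overwritten, matching the "when those fields are omitted" wording above.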

In v0.2, `score` remains a backward-compatible alias of `aggregate_recommendation_score`.

### GET /v1/services/{slug}/score

Fetch the latest persisted score for a service from the current product-facing score surface. For the initial calibration set (`stripe`, `hubspot`, `sendgrid`, `resend`, `github`), this route can bootstrap from hand-scored fixtures when no DB row exists yet.

## Search Endpoint

### GET /v1/search?q=&lt;query&gt;&amp;limit=&lt;n&gt;

Search indexed services by free-text query. Used by `rhumb find <query>`.

Response body:

```json
{
  "data": {
    "query": "payment routing",
    "results": [
      {
        "service_slug": "stripe",
        "name": "Stripe",
        "aggregate_recommendation_score": 8.9,
        "tier": "L4",
        "confidence": 0.95,
        "why": "Best default for payment flows with strong reliability."
      }
    ]
  },
  "error": null
}
```

`limit` is optional and caps the number of results returned.

## Pricing Endpoint

### GET /v1/pricing

Returns Rhumb's current machine-readable public pricing contract.

Response body:

```json
{
  "data": {
    "pricing_version": "2026-03-18",
    "canonical_api_base_url": "https://api.rhumb.dev/v1",
    "free_tier": {
      "included_executions_per_month": 1000
    },
    "modes": {
      "rhumb_managed": {
        "margin_percent": 20
      },
      "x402": {
        "margin_percent": 15,
        "network": "Base",
        "token": "USDC"
      },
      "byok": {
        "upstream_passthrough": true,
        "margin_percent": 0
      }
    }
  },
  "error": null
}
```

The pricing contract intentionally omits unfinished volume-discount tiers.
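A consumer of the pricing contract might apply `margin_percent` to an upstream cost like this. This is a sketch under an assumed interpretation (margin added multiplicatively on top of upstream cost); only the mode names and `margin_percent` values come from the contract above.

```python
# Apply a mode's margin_percent to an upstream cost. The composition formula
# cost * (1 + margin/100) is an assumption about how margins are meant to be
# applied; the MODES values mirror the example pricing contract.

MODES = {
    "rhumb_managed": {"margin_percent": 20},
    "x402": {"margin_percent": 15},
    "byok": {"margin_percent": 0, "upstream_passthrough": True},
}

def effective_cost(upstream_cost: float, mode: str) -> float:
    margin = MODES[mode]["margin_percent"]
    return round(upstream_cost * (1 + margin / 100), 6)

print(effective_cost(0.01, "rhumb_managed"))  # → 0.012
```

Under this reading, `byok` with `margin_percent: 0` is a pure pass-through of the upstream price.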

## Probe Endpoints

### POST /v1/probes/run

Run and persist one internal probe.

Example:

```json
{
  "service_slug": "stripe",
  "probe_type": "schema",
  "target_url": "https://status.stripe.com/api/v2/status.json",
  "sample_count": 3,
  "trigger_source": "internal"
}
```

### POST /v1/probes/schedule/run

Execute a batch run from seed specs (Stripe/OpenAI/HubSpot).

Example:

```json
{
  "service_slugs": ["stripe", "openai"],
  "sample_count": 3,
  "base_interval_minutes": 30,
  "dry_run": false
}
```

The response includes `cadence_by_service` guardrails with:

- `base_interval_minutes` (clamped to a minimum of 5 and maximum of 1440)
- `next_interval_minutes` (failure-aware exponential backoff)
- `consecutive_failures`
- `jitter_seconds` (deterministic per service)
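The guardrails above can be sketched as a small pure function. The clamp bounds come from this document; the exact backoff curve and the jitter derivation are assumptions for illustration.

```python
# Sketch of the cadence guardrails: clamp the base interval to [5, 1440]
# minutes, grow the next interval with failure-aware exponential backoff
# (capped at the same ceiling), and derive deterministic per-service jitter
# from a hash of the slug. Backoff doubling and the hash-based jitter are
# illustrative assumptions, not the documented algorithm.
import hashlib

def cadence(slug: str, base_interval_minutes: int, consecutive_failures: int) -> dict:
    base = min(max(base_interval_minutes, 5), 1440)                 # clamp to [5, 1440]
    next_interval = min(base * (2 ** consecutive_failures), 1440)   # exponential backoff
    digest = hashlib.sha256(slug.encode()).digest()
    jitter_seconds = int.from_bytes(digest[:2], "big") % 60         # deterministic per slug
    return {
        "base_interval_minutes": base,
        "next_interval_minutes": next_interval,
        "consecutive_failures": consecutive_failures,
        "jitter_seconds": jitter_seconds,
    }

print(cadence("stripe", 30, 2))
```

Because the jitter depends only on the slug, repeated schedule runs keep each service's probes offset by the same amount, spreading load without randomizing cadence.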

### GET /v1/services/{slug}/probes/latest

Fetch the latest persisted probe result for a service (optional `probe_type` query param).

For `probe_type=schema`, metadata includes `schema_signature_version=v2` and `schema_fingerprint_v2`, which are derived from nested response-shape descriptors (a semantic-drift guardrail that goes beyond comparing top-level key lists).
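To make "nested response-shape descriptor" concrete, here is an illustrative fingerprint in the same spirit. The descriptor format and hashing scheme are assumptions; the real `schema_fingerprint_v2` derivation may differ.

```python
# Illustrative nested shape-descriptor fingerprint: recursively record key
# names and value *types* (not values), canonicalize, and hash. A type change
# deep inside the response changes the fingerprint, whereas a top-level
# key-list comparison would miss it. Format and hashing are assumptions.
import hashlib
import json

def shape_descriptor(value):
    if isinstance(value, dict):
        return {k: shape_descriptor(v) for k, v in sorted(value.items())}
    if isinstance(value, list):
        return [shape_descriptor(value[0])] if value else []
    return type(value).__name__

def fingerprint_v2(response: dict) -> str:
    canon = json.dumps(shape_descriptor(response), sort_keys=True)
    return hashlib.sha256(canon.encode()).hexdigest()

a = fingerprint_v2({"status": {"indicator": "none", "description": "ok"}})
b = fingerprint_v2({"status": {"indicator": "none", "description": 1}})
print(a != b)  # → True: nested type drift changes the fingerprint
```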

### GET /v1/alerts

Fetch probe-derived drift alerts.

Current primitive alert types:

- `schema_drift` — the latest schema fingerprint differs from the previous schema probe's
- `latency_regression` — p95 health latency regressed beyond a threshold versus the previous probe
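The `latency_regression` check reduces to comparing consecutive p95 samples against a relative threshold. The 1.5x ratio below is an illustrative assumption; the document does not specify the actual threshold.

```python
# Sketch of the latency_regression alert condition: flag when the latest
# health probe's p95 exceeds the previous p95 by more than a relative
# threshold. The default 1.5x ratio is a hypothetical value for illustration.

def latency_regressed(prev_p95_ms: float, latest_p95_ms: float,
                      threshold_ratio: float = 1.5) -> bool:
    return latest_p95_ms > prev_p95_ms * threshold_ratio

print(latency_regressed(340, 620))  # → True  (620 > 340 * 1.5)
print(latency_regressed(340, 360))  # → False (within threshold)
```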

Optional query params:

- `limit` (default 50, max 100)