BlockRun SDK for XRPL - Pay-per-request AI via x402 with RLUSD


BlockRun XRPL SDK

Pay-per-request access to GPT-5.2, GPT-5.2 Codex, Claude Opus 4.6, Gemini 3 Pro, Grok 4, and 38+ models via x402 micropayments on XRPL with RLUSD.

Installation

```bash
pip install blockrun-llm-xrpl
```

Quick Start

```python
from blockrun_llm_xrpl import LLMClient

client = LLMClient()  # Uses BLOCKRUN_XRPL_SEED from env
response = client.chat("openai/gpt-4o-mini", "Hello!")
print(response)
```

That's it. The SDK handles x402 payment with RLUSD automatically.

Smart Routing (ClawRouter)

Let the SDK automatically pick the cheapest capable model for each request:

```python
from blockrun_llm_xrpl import LLMClient

client = LLMClient()

# Auto-routes to cheapest capable model
result = client.smart_chat("What is 2+2?")
print(result.response)  # '4'
print(result.model)     # 'nvidia/kimi-k2.5' (cheap, fast)
print(f"Saved {result.routing.savings * 100:.0f}%")  # 'Saved 94%'

# Complex reasoning task -> routes to reasoning model
result = client.smart_chat("Prove the Riemann hypothesis step by step")
print(result.model)  # 'xai/grok-4-1-fast-reasoning'
```

Routing Profiles

| Profile | Description | Best For |
| --- | --- | --- |
| `free` | nvidia/gpt-oss-120b only (FREE) | Testing, development |
| `eco` | Cheapest models per tier (DeepSeek, xAI) | Cost-sensitive production |
| `auto` | Best balance of cost/quality (default) | General use |
| `premium` | Top-tier models (OpenAI, Anthropic) | Quality-critical tasks |

```python
# Use premium models for complex tasks
result = client.smart_chat(
    "Write production-grade async Python code",
    routing_profile="premium"
)
print(result.model)  # 'openai/gpt-5.2-codex' (coding) or 'anthropic/claude-opus-4.6' (architecture)
```

How It Works

ClawRouter uses a 14-dimension rule-based classifier to analyze each request:

  • Token count - Short vs long prompts
  • Code presence - Programming keywords
  • Reasoning markers - "prove", "step by step", etc.
  • Technical terms - Architecture, optimization, etc.
  • Creative markers - Story, poem, brainstorm, etc.
  • Agentic patterns - Multi-step, tool use indicators

The classifier runs in <1ms, 100% locally, and routes to one of four tiers:

| Tier | Example Tasks | Auto Profile Model |
| --- | --- | --- |
| SIMPLE | "What is 2+2?", definitions | moonshot/kimi-k2.5 |
| MEDIUM | Code snippets, explanations | xai/grok-code-fast-1 |
| COMPLEX | Architecture, long documents | google/gemini-3-pro-preview |
| REASONING | Proofs, multi-step reasoning | xai/grok-4-1-fast-reasoning |
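
As a rough illustration, this routing can be sketched as a tiny rule-based classifier. This is not ClawRouter's actual implementation: the marker lists and thresholds below are invented, and only a few of the 14 dimensions are modeled. The tier-to-model mapping is taken from the table above.

```python
# Simplified sketch of a rule-based tier classifier (hypothetical; the real
# ClawRouter uses 14 dimensions and different thresholds).
TIER_MODELS = {
    "SIMPLE": "moonshot/kimi-k2.5",
    "MEDIUM": "xai/grok-code-fast-1",
    "COMPLEX": "google/gemini-3-pro-preview",
    "REASONING": "xai/grok-4-1-fast-reasoning",
}

REASONING_MARKERS = ("prove", "step by step", "derive", "theorem")
CODE_MARKERS = ("def ", "class ", "function", "python", "code")
TECHNICAL_MARKERS = ("architecture", "optimization", "microservice", "design a")

def classify(prompt: str) -> str:
    text = prompt.lower()
    tokens = len(text.split())  # crude stand-in for the token-count dimension
    if any(m in text for m in REASONING_MARKERS):
        return "REASONING"
    if any(m in text for m in TECHNICAL_MARKERS) or tokens > 400:
        return "COMPLEX"
    if any(m in text for m in CODE_MARKERS):
        return "MEDIUM"
    return "SIMPLE"

def route(prompt: str) -> str:
    # Map the classified tier to the `auto` profile's model for that tier.
    return TIER_MODELS[classify(prompt)]
```

Because the classification is a handful of string checks, it costs essentially nothing compared to a model call, which is how a router like this can stay under a millisecond and fully local.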

How Payments Work

  1. You send a request to BlockRun's XRPL API
  2. The API returns a 402 Payment Required with the price
  3. The SDK automatically signs an RLUSD payment on XRPL
  4. The request is retried with the payment proof
  5. The t54.ai facilitator settles the payment
  6. You receive the AI response

Your seed never leaves your machine - it's only used for local signing.
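
The six steps above can be sketched as a small retry loop. Everything here is hypothetical: `sign_payment`, the `x-payment-required` header name, and the fake transport are stand-ins for the SDK's real internals; the sketch only illustrates the 402-then-retry shape of x402.

```python
# Sketch of the x402 handshake (hypothetical names, not the SDK's real code).

def sign_payment(price: str) -> str:
    # Placeholder for local RLUSD signing on XRPL; the seed never leaves
    # this process.
    return f"signed:{price}"

def request_with_x402(send):
    """`send` performs an HTTP call and returns (status, headers, body)."""
    status, headers, body = send(payment=None)
    if status == 402:
        # Server quoted a price; sign an RLUSD payment and retry with proof.
        proof = sign_payment(headers["x-payment-required"])
        status, headers, body = send(payment=proof)
    return status, body

# Fake transport simulating the 402 handshake end to end:
def fake_send(payment=None):
    if payment is None:
        return 402, {"x-payment-required": "0.001 RLUSD"}, ""
    return 200, {}, "AI response"
```

From the caller's perspective the 402 round-trip is invisible, which is why `client.chat(...)` in the examples above needs no payment-related arguments.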

Environment Variables

| Variable | Description | Required |
| --- | --- | --- |
| BLOCKRUN_XRPL_SEED | Your XRPL wallet seed | Yes (or pass to constructor) |

Setting Up Your Wallet

  1. Create an XRPL wallet (or use existing one)
  2. Fund it with XRP for transaction fees (~1 XRP is plenty)
  3. Set up a trust line to RLUSD issuer
  4. Get some RLUSD for API payments
  5. Export your seed and set it as BLOCKRUN_XRPL_SEED

```bash
# .env file
BLOCKRUN_XRPL_SEED=sEd...your_seed_here
```
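
One detail worth knowing for step 3: because "RLUSD" is longer than the standard three characters, XRPL trust lines reference it by a 40-character hex currency code (uppercase hex of the ASCII bytes, zero-padded). A small helper shows the conversion:

```python
def xrpl_currency_code(code: str) -> str:
    """Return the XRPL on-ledger form of a currency code.

    Standard 3-character codes (e.g. "USD") are used as-is; longer codes
    like "RLUSD" become 40-char uppercase hex of the ASCII bytes,
    right-padded with zeros.
    """
    if len(code) <= 3:
        return code
    return code.encode("ascii").hex().upper().ljust(40, "0")

print(xrpl_currency_code("RLUSD"))  # '524C555344' followed by 30 zeros
```

This is the currency value your wallet tooling expects when you create the TrustSet transaction toward the RLUSD issuer.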

Create a New Wallet

```python
from blockrun_llm_xrpl import create_wallet

address, seed = create_wallet()
print(f"Address: {address}")
print(f"Seed: {seed}")  # Save this securely!
```

Check Balances

```python
from blockrun_llm_xrpl import LLMClient

client = LLMClient()
print(f"RLUSD Balance: {client.get_balance()}")
```

Usage Examples

Simple Chat

```python
from blockrun_llm_xrpl import LLMClient

client = LLMClient()

response = client.chat("openai/gpt-4o", "Explain quantum computing")
print(response)

# Use Codex for coding (cost-effective)
response = client.chat(
    "openai/gpt-5.2-codex",
    "Write a binary search tree in Python"
)

# With system prompt
response = client.chat(
    "anthropic/claude-opus-4.6",
    "Design a microservices architecture",
    system="You are a senior software architect."
)
```

Full Chat Completion

```python
from blockrun_llm_xrpl import LLMClient

client = LLMClient()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How do I read a file in Python?"}
]

result = client.chat_completion("openai/gpt-4o-mini", messages)
print(result.choices[0].message.content)
```

Check Spending

```python
from blockrun_llm_xrpl import LLMClient

client = LLMClient()

response = client.chat("openai/gpt-4o-mini", "Hello!")
print(response)

spending = client.get_spending()
print(f"Spent ${spending['total_usd']:.4f} across {spending['calls']} calls")
```

Async Usage

```python
import asyncio
from blockrun_llm_xrpl import AsyncLLMClient

async def main():
    async with AsyncLLMClient() as client:
        response = await client.chat("openai/gpt-4o-mini", "Hello!")
        print(response)

        # Multiple requests concurrently
        tasks = [
            client.chat("openai/gpt-4o-mini", "What is 2+2?"),
            client.chat("openai/gpt-4o-mini", "What is 3+3?"),
        ]
        responses = await asyncio.gather(*tasks)
        for r in responses:
            print(r)

asyncio.run(main())
```

Available Models

All 38+ models from BlockRun are available:

  • OpenAI: gpt-5.2, gpt-5.2-codex, gpt-4o, gpt-4o-mini, o1, o3, o4-mini
  • Anthropic: claude-opus-4.6, claude-opus-4.5, claude-opus-4, claude-sonnet-4.6, claude-sonnet-4, claude-haiku-4.5
  • Google: gemini-3-pro-preview, gemini-2.5-pro, gemini-2.5-flash
  • DeepSeek: deepseek-chat, deepseek-reasoner
  • xAI: grok-4-1-fast-reasoning, grok-4-fast-reasoning, grok-3, grok-3-mini, grok-code-fast-1
  • NVIDIA: gpt-oss-120b (FREE), kimi-k2.5
  • Moonshot: kimi-k2.5 (256k context, great for coding)

Latest Additions:

  • Claude Opus 4.6 - Latest flagship with 64k output
  • GPT-5.2 Codex - Optimized for code generation
  • Kimi K2.5 - 256k context window, excellent for coding tasks

Error Handling

```python
from blockrun_llm_xrpl import LLMClient, APIError, PaymentError

client = LLMClient()

try:
    response = client.chat("openai/gpt-4o-mini", "Hello!")
except PaymentError as e:
    print(f"Payment failed: {e}")
    # Check your RLUSD balance
except APIError as e:
    print(f"API error ({e.status_code}): {e}")
```

Security

  • Seed stays local: Your seed is only used for signing on your machine
  • No custody: BlockRun never holds your funds
  • Verify transactions: All payments are on-chain and verifiable on XRPL

License

MIT
