Context
The Prometheus intelligence service (`apps/intelligence/`) exists as a FastAPI app but is currently non-functional. It has one endpoint (`GET /health`) and a `services/claude.py` file that initializes the Anthropic client but never uses it. The `INTELLIGENCE_ANTHROPIC_API_KEY` env var is already in the config. The scaffolding is in place — nothing is wired.
This issue is about making Prometheus actually answer questions and give real, context-aware responses about the user's vaults, portfolio, and the broader yield landscape on Stellar — using Claude as the underlying model.
What Prometheus Should Be
Prometheus is not a general-purpose chatbot. It is a financial copilot scoped strictly to Nester. It should:
- Answer questions about the user's specific vaults and portfolio performance
- Explain yield strategies in plain language
- Suggest which vault to deposit into based on the user's risk profile and goals
- Flag when a user's allocation looks off relative to their stated goals
- Explain what a pending settlement or transaction means
- Refuse to answer anything outside its scope
Implementation
1. Core Chat Endpoint (Streaming)
Request:
```json
{
  "message": "Which vault should I put my emergency fund in?",
  "conversation_id": "uuid-optional",
  "user_context": {
    "user_id": "uuid",
    "portfolio_snapshot": {},
    "active_vaults": []
  }
}
```
This endpoint streams the Claude response back using Server-Sent Events (SSE). Streaming is critical for perceived responsiveness — the user sees words appear as Claude generates them rather than waiting for the full response.
```python
# apps/intelligence/app/routers/chat.py
from anthropic import AsyncAnthropic

from app.config import settings  # adjust to the service's actual config import


async def stream_response(message: str, context: UserContext):
    # Use the async client so streaming does not block the event loop
    client = AsyncAnthropic(api_key=settings.anthropic_api_key)
    async with client.messages.stream(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        system=build_system_prompt(context),
        messages=[{"role": "user", "content": message}],
    ) as stream:
        async for text in stream.text_stream:
            # frame each token as an SSE data event
            yield f"data: {text}\n\n"
```
2. System Prompt Design
The system prompt is built dynamically per request, injecting live user data so Claude gives specific answers rather than generic financial advice.
```python
def build_system_prompt(context: UserContext) -> str:
    return f"""
You are Prometheus, the financial intelligence layer of Nester — a yield-bearing savings
platform built on the Stellar blockchain.

Your role is to help users make informed decisions about their savings vaults, understand
their portfolio performance, and optimise their yield strategy.

## Scope
You ONLY answer questions related to:
- The user's Nester vaults and portfolio
- Yield strategies, APYs, and risk tiers available on Nester
- Savings goals and how to reach them
- Offramp and settlement (converting to NGN, GHS, KES)
- How Nester contracts and fee structure work

You do NOT answer questions about:
- Price predictions or market speculation
- Other DeFi protocols not integrated with Nester
- General life or financial advice outside the platform

If asked something out of scope, respond: "That is outside what I can help with —
I am focused on your Nester vaults and savings strategy."

## User Context (live data)
Total Portfolio Value: {context.total_value_usd} USD
Active Vaults: {context.active_vaults_summary}
Yield Earned (30d): {context.yield_30d_usd} USD
Risk Profile: {context.risk_profile}
Savings Goal: {context.savings_goal or "Not set"}

## Available Vaults
{context.available_vaults_summary}

## Tone
Be direct and specific. Use plain language. When you recommend something, say why.
Keep responses concise — the user is reading a sidebar panel, not an essay.
""".strip()
```
3. Conversation History
Multi-turn conversations require message history per conversation_id. For MVP, store in-memory with a 1-hour TTL. Persistent DB history is a follow-up.
```python
# apps/intelligence/app/services/conversation_store.py
from collections import defaultdict
from datetime import datetime, timedelta, timezone


class ConversationStore:
    def __init__(self, ttl_minutes: int = 60):
        self._ttl = timedelta(minutes=ttl_minutes)
        self._store: dict[str, list] = defaultdict(list)
        self._touched: dict[str, datetime] = {}

    def get(self, conversation_id: str) -> list:
        # drop conversations untouched for longer than the TTL window
        cutoff = datetime.now(timezone.utc) - self._ttl
        for cid in [c for c, t in self._touched.items() if t < cutoff]:
            self._store.pop(cid, None)
            self._touched.pop(cid, None)
        return self._store.get(conversation_id, [])

    def append(self, conversation_id: str, role: str, content: str):
        self._store[conversation_id].append({"role": role, "content": content})
        self._touched[conversation_id] = datetime.now(timezone.utc)
```
Pass the full history to each Claude call so it retains context across turns.
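Concretely, the per-turn assembly might look like this (a sketch; `history` stands in for the `ConversationStore` contents, and the Claude call itself is omitted):

```python
from collections import defaultdict

# stand-in for ConversationStore contents: conversation_id -> message list
history: dict[str, list] = defaultdict(list)


def build_messages(conversation_id: str, new_message: str) -> list[dict]:
    # full prior history first, then the new user turn, so Claude sees every turn
    return history[conversation_id] + [{"role": "user", "content": new_message}]


history["c1"].append({"role": "user", "content": "What is the Stable vault?"})
history["c1"].append({"role": "assistant", "content": "A low-risk USDC vault."})
msgs = build_messages("c1", "And its APY?")  # three messages, newest last
```

After Claude responds, append both the user turn and the assistant reply back into the store so the next call sees them.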
4. Structured Analysis Endpoint (non-streaming)
For features that need structured JSON back rather than streamed prose — vault recommendations, rebalancing suggestions, savings goal coaching — add a second endpoint, `POST /analyze`. It calls Claude with a JSON-mode instruction in the system prompt and returns parsed structured output. Used by issues #109, #110, #112.
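One possible shape for the JSON-mode instruction and response parsing (a sketch: the instruction text, output schema, and fence-stripping fallback are assumptions, not an Anthropic API feature):

```python
# apps/intelligence/app/routers/analyze.py — hypothetical helpers
import json

# appended to the system prompt so Claude returns machine-readable output
JSON_INSTRUCTION = (
    "Respond with a single JSON object only, no prose, matching: "
    '{"recommended_vault": string, "reason": string, "confidence": number}'
)


def parse_structured(raw: str) -> dict:
    # Claude sometimes wraps JSON in a markdown fence; strip it before parsing.
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(cleaned)
```

If parsing fails, returning a 502 with the raw text attached makes the failure debuggable rather than silent.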
5. Frontend Integration
The existing `PrometheusPanel` component needs to:
- Call `POST /chat` and consume the SSE stream
- Append streamed tokens to the message as they arrive
- Pass `user_context` from the portfolio provider with every message
- Maintain `conversation_id` per session for multi-turn support
```typescript
const response = await fetch('/intelligence/chat', {
  method: 'POST',
  body: JSON.stringify({ message, conversation_id: sessionId, user_context: ctx }),
  headers: { 'Content-Type': 'application/json' },
})

const reader = response.body.getReader()
const decoder = new TextDecoder()
// append decoded chunks to the displayed message as they stream in
while (true) {
  const { done, value } = await reader.read()
  if (done) break
  // appendToMessage is the panel's state updater (illustrative name)
  appendToMessage(decoder.decode(value, { stream: true }))
}
```
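Note that the decoded chunks arrive as raw SSE frames (`data: ` payloads separated by blank lines), so the panel should strip the framing before appending. A small parser sketch (function name is illustrative):

```typescript
// Extract payload text from accumulated SSE frames like "data: Hello\n\n".
// Returns the extracted text plus any trailing partial frame to buffer.
function parseSseChunk(buffer: string): { text: string; rest: string } {
  let text = ''
  let rest = buffer
  let idx: number
  while ((idx = rest.indexOf('\n\n')) !== -1) {
    const frame = rest.slice(0, idx)
    rest = rest.slice(idx + 2)
    if (frame.startsWith('data: ')) text += frame.slice(6)
  }
  return { text, rest }
}
```

Carrying `rest` between reads handles frames split across network chunks.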
Files to Create / Modify
| File | Action |
| --- | --- |
| `apps/intelligence/app/routers/chat.py` | New — streaming chat endpoint |
| `apps/intelligence/app/routers/analyze.py` | New — structured JSON analysis endpoint |
| `apps/intelligence/app/services/prometheus.py` | New — system prompt builder + Claude call logic |
| `apps/intelligence/app/services/conversation_store.py` | New — in-memory conversation history with TTL |
| `apps/intelligence/app/services/claude.py` | Extend — currently only initializes the client; add actual call methods |
| `apps/intelligence/app/main.py` | Modify — register new routers |
| `apps/dapp/frontend/components/prometheus-panel.tsx` | Modify — wire to streaming endpoint |
Acceptance Criteria
- `POST /chat` streams a real Claude response (`claude-sonnet-4-6`) scoped to the user's live portfolio context
- `POST /analyze` returns structured JSON for vault recommendations and goal coaching
- `PrometheusPanel` streams tokens and displays them as they arrive
- `/chat` requests are rate-limited per user per minute to control API costs

Notes
- The Anthropic client is already initialized in `apps/intelligence/app/services/claude.py` and `INTELLIGENCE_ANTHROPIC_API_KEY` is already in `config.py` — no new env setup needed beyond setting the actual key value
- Keep `max_tokens` at 1024 for chat; users are reading a panel, not a document
- `claude-sonnet-4-6` is the right model choice here — fast enough for streaming UX, capable enough for financial reasoning