TrendReef is a multi-tenant SaaS backend for AI content intelligence and autonomous market research.
- `docs/README.md` - full documentation index
- `docs/ARCHITECTURE.md` - platform architecture and runtime flow
- `docs/API_REFERENCE.md` - API contracts
- `docs/OPERATIONS.md` - runbook and troubleshooting
- `docs/FILE_GUIDE.md` - file-by-file ownership map
- `docs/AGENT_ONBOARDING.md` - coding-agent handoff guide
- `docs/architecture.mmd` - system architecture diagram (Mermaid)
- `docs/research-sequence.mmd` - research request sequence diagram
- `docs/workflow-sequence.mmd` - workflow execution sequence diagram
- FastAPI async API with tenant-scoped endpoints.
- Multi-provider LLM layer:
  - `BaseLLMProvider`, `OllamaProvider`, `OpenAIProvider`, `AnthropicProvider`, `AzureProvider`
  - `ModelRouter` supports per-tenant/per-agent/per-task routing.
  - `LLMService` enforces:
    - fallback provider logic
    - circuit breaker
    - rate limits
    - token/latency/cost usage logging
- LangGraph orchestration with agents:
  - `SourceAgent`
  - `WebResearchAgent` (TinyFish SSE)
  - `IntelligenceAgent`
  - `StrategyAgent`
  - `ContentGeneratorAgent`
  - `PublisherAgent` (stub)
  - `AnalyticsAgent`
- PostgreSQL + pgvector + SQLAlchemy async models.
- Celery worker for async workflow execution.
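The circuit-breaker idea behind the `LLMService` resilience features can be sketched as follows. This is a minimal illustration, not TrendReef's actual implementation; the class name, thresholds, and half-open behavior here are assumptions.

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker: opens after repeated failures,
    then allows a probe call once the reset timeout elapses."""

    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the breaker opened

    def allow(self) -> bool:
        # Closed: calls pass through. Open: block until reset_timeout elapses.
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            # Half-open: let one probe call through
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()
```

A provider call would check `allow()` before dispatching, and on denial the service would move to the next provider in the fallback chain.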
```
app/
  agents/
  api/
    routes/
  core/
  models/
  providers/
  repositories/
  services/
  workflows/
  workers/
frontend/
  src/
Dockerfile
docker-compose.yml
requirements.txt
.env.example
```
`tenants`, `users`, `sources`, `raw_posts`, `processed_trends`, `embeddings`, `generated_content`, `analytics_events`, `external_research_jobs`, `model_providers`, `model_configs`, `tenant_model_preferences`, `model_usage_logs`
pgvector extension is enabled at startup.
Base prefix: `/api/v1`
- `GET /health`
- `GET /provider-health`
- `GET|POST /sources`
- `GET /trends`
- `POST /generate`
- `GET|POST /analytics`
- `POST /run-workflow`
- `GET|POST /research`
All business endpoints require the header `X-Tenant-ID`.
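The header check can be illustrated with a small sketch. The project likely enforces this via a FastAPI dependency; the function and exception names below are hypothetical, used only to show the contract.

```python
class MissingTenantHeader(Exception):
    """Raised when a business endpoint is called without X-Tenant-ID."""

def require_tenant(headers: dict) -> str:
    """Return the tenant id from request headers, or raise if absent."""
    tenant_id = headers.get("X-Tenant-ID", "").strip()
    if not tenant_id:
        raise MissingTenantHeader("X-Tenant-ID header is required")
    return tenant_id
```

In a real FastAPI app this logic would live in a dependency so every tenant-scoped route receives a validated tenant id.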
TinyFishAutomationClient calls:
```
POST https://agent.tinyfish.ai/v1/automation/run-sse
```
Features implemented:
- SSE stream parsing (`data:` lines)
- partial chunk buffering
- safe JSON parse fallback
- timeout + retries
- structured logging
- graceful errors
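The first three features above can be sketched roughly like this. The class name and shape are illustrative assumptions; the real logic lives inside `TinyFishAutomationClient`.

```python
import json

class SSEBuffer:
    """Illustrative SSE parser: buffers partial chunks, extracts `data:`
    lines, and falls back to the raw string when JSON parsing fails."""

    def __init__(self) -> None:
        self._buffer = ""

    def feed(self, chunk: str) -> list:
        """Feed a raw network chunk; return payloads from complete lines."""
        self._buffer += chunk
        # Everything before the last newline is complete; the tail is buffered
        *complete, self._buffer = self._buffer.split("\n")
        events = []
        for line in complete:
            if not line.startswith("data:"):
                continue
            payload = line[len("data:"):].strip()
            try:
                events.append(json.loads(payload))
            except json.JSONDecodeError:
                # Safe fallback: keep the raw string instead of crashing
                events.append(payload)
        return events
```

Buffering on the last newline is what makes partial chunks safe: a `data:` line split across two chunks is only parsed once its terminating newline arrives.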
WebResearchAgent now calls a crawler abstraction with routing/fallback:
- `tinyfish` provider
- `playwright` provider (local browser crawl)
Provider selection:
- request-level via the `/research` payload field `provider_preference`: `auto | tinyfish | playwright`
- `auto` tries TinyFish first, then falls back to Playwright
- tenant feature flags can disable providers (`tinyfish_enabled`, `playwright_enabled`)
`WebResearchAgent` stores lifecycle and results in `external_research_jobs`.
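The selection rules above can be condensed into a small sketch. The function name and flag-dict shape are assumptions for illustration, not the actual routing code.

```python
def resolve_providers(preference: str, flags: dict) -> list:
    """Return crawl providers to try, in order, honoring tenant flags.

    preference: "auto" | "tinyfish" | "playwright"
    flags: tenant feature flags, e.g. {"tinyfish_enabled": False}
    """
    order = {
        "auto": ["tinyfish", "playwright"],  # TinyFish first, then fallback
        "tinyfish": ["tinyfish"],
        "playwright": ["playwright"],
    }[preference]
    # Providers default to enabled unless a tenant flag disables them
    return [p for p in order if flags.get(f"{p}_enabled", True)]
```

The caller would attempt providers in the returned order and record each attempt's lifecycle in `external_research_jobs`.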
For Playwright local crawling, install Chromium once in the runtime environment:

```shell
playwright install chromium
```

Docker builds already install Chromium for the Playwright provider. If Chromium runtime libraries are missing on your host/container, keep the provider as `auto` or `tinyfish`.
- Copy env:

  ```shell
  cp .env.example .env
  ```

- Start stack:

  ```shell
  docker compose up --build
  ```

- Frontend: http://localhost:5173
- API: http://localhost:8000/docs
Apply migrations:
```shell
alembic upgrade head
```

Recommended with this compose setup (no DB host-port mapping): run Alembic inside the API container so the `db` hostname resolves:

```shell
docker compose run --rm api alembic upgrade head
```

If you run Alembic locally from your host shell, expose the DB port yourself and point `DATABASE_URL` at that host port.
Rollback one revision:
```shell
alembic downgrade -1
```

Seed a demo tenant, providers, model configs, and a default routing preference:

```shell
python -m scripts.seed_defaults
```

With compose networking, run:

```shell
docker compose run --rm api python -m scripts.seed_defaults
```

This creates the tenant slug `demo`.
The Celery worker runs as the `worker` service in `docker-compose.yml`.
Manual run:
```shell
celery -A app.workers.celery_app.celery_app worker --loglevel=info
```

Provider secrets/config are resolved in this order:
- Tenant DB config (`model_providers`)
- `.env` fallback
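A minimal sketch of that two-tier lookup, assuming the DB config arrives as a dict; the function name and field names are illustrative, not the project's actual API.

```python
import os

def resolve_provider_secret(db_config: dict, env_key: str, field: str):
    """Return a provider credential: tenant DB config wins, env is fallback."""
    if db_config.get(field):
        # Tenant-level override from the model_providers table
        return db_config[field]
    # .env fallback (loaded into the process environment)
    return os.environ.get(env_key)
```

This keeps per-tenant credentials authoritative while letting a single `.env` supply platform-wide defaults.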
Routing preferences are read from `tenant_model_preferences` by:

- `tenant_id`
- `agent_name`
- `task_type`
If missing, router defaults to:
- primary: `ollama`
- secondary: `openai`
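The lookup-with-default behavior can be sketched like this; the tuple key and return shape are assumptions for illustration, not the actual repository API.

```python
# Documented defaults when no tenant preference matches
DEFAULT_ROUTE = {"primary": "ollama", "secondary": "openai"}

def resolve_route(preferences: dict, tenant_id: str,
                  agent_name: str, task_type: str) -> dict:
    """Look up a routing preference by (tenant, agent, task); fall back
    to the platform default route when no row matches."""
    return preferences.get((tenant_id, agent_name, task_type), DEFAULT_ROUTE)
```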
Structured logs capture:
- model latency
- token usage
- cost estimates
- TinyFish job duration
- trend scoring breakdown
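One way such structured fields can be emitted is a JSON log formatter; this is a sketch of the pattern, not TrendReef's actual logging setup, and the helper names are hypothetical.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render a log record as one JSON line, merging structured usage fields."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {"message": record.getMessage()}
        payload.update(getattr(record, "usage", {}))
        return json.dumps(payload)

def format_usage_line(message: str, usage: dict) -> str:
    # Attach metrics (latency, tokens, cost, ...) as a structured field
    record = logging.LogRecord("llm", logging.INFO, __file__, 0, message, None, None)
    record.usage = usage
    return JsonFormatter().format(record)
```

Keeping metrics as JSON fields rather than interpolated strings is what makes latency/token/cost queries practical downstream.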
Provider status is exposed via `/api/v1/provider-health`.
Phase 1:
- Core app structure, config, logging, DB models.
Phase 2:
- Provider abstraction, router, LLM service resilience.
Phase 3:
- TinyFish SSE research integration + external jobs tracking.
Phase 4:
- LangGraph multi-agent orchestration and workflow endpoint.
Phase 5:
- Dockerization, worker integration, docs and ops ergonomics.
- Publisher integrations are intentionally plugin-ready stubs.
- Embedding generation is currently placeholder logic and should be connected to a provider embedding API for production rollout.
- Initial Alembic migration is included in `migrations/versions/0001_initial_trendreef_schema.py`.
Run tests:
```shell
pytest -q
```

Coverage focus in the current suite:
- core resilience primitives (circuit breaker)
- TinyFish SSE line parsing safety
- intelligence scoring + dedupe logic
- API smoke + tenant header enforcement