
feat: Orchestra MCP Platform — Complete Implementation (Phases 1-6)#5

Merged
fadymondy merged 32 commits into master from feat/phase-3-go-mcp-tools on Mar 28, 2026
Conversation


@fadymondy fadymondy commented Mar 28, 2026

Orchestra MCP Platform — Complete Implementation

This PR contains the full implementation of the Orchestra MCP platform across all 6 phases.

Go MCP Server (57 tools)

  • SSE protocol with session management
  • Token auth via PostgreSQL function
  • 11 tool categories: agents, tasks, projects, memory, activity, decisions, sessions, notes, skills, workflows, specs
  • 8 GitHub tools (repo CRUD, branches, PRs, CLAUDE.md generation)
  • Rate limiting (token bucket per org/plan)
  • Realtime hub (org-scoped pub/sub)
  • Slack notifications
  • Usage tracking

Laravel Web App

  • Login/Register with GitHub OAuth
  • Password reset flow
  • 3-step onboarding wizard (Company, Team, Connect Claude)
  • Token management (create/revoke)
  • Dashboard with usage stats
  • Pricing, Features, Docs pages
  • Settings page
  • Brand theme (dark, cyan/purple gradient)

Supabase Studio Fork

  • Orchestra branding (logos, favicons, titles)
  • GoTrue JWT auth with admin role checking
  • Pro features unlocked for self-hosted
  • 20+ API stubs for org/billing/notifications
  • Edge Function Secrets management (vault)
  • Custom Orchestra pages (Dashboard, Agents, Tokens)
  • Orchestra sidebar navigation
  • User avatar, profile, logout in dropdown
  • New Project page customized for self-hosted

Database

  • 10 idempotent migrations (21 tables)
  • Works with supabase db push

Edge Functions

  • Stripe webhook handler
  • Session cleanup cron
  • Agent trigger cron

Infrastructure

  • Caddyfile, supervisor configs
  • setup.sh, deploy.sh
  • Docker Compose overlay
  • create-admin.sh script

Summary by CodeRabbit

  • New Features

    • Orchestra product added: Dashboard (live metrics), Agents, Tokens (view/revoke), and Service Logs pages with filters and auto-refresh.
    • New Orchestra navigation and product menu across desktop and mobile; dedicated layouts and pages for Dashboard, Agents, Tokens, Logs.
  • Backend / Ops

    • Added rate limiting, Slack notifications, Stripe webhook handling, realtime hub, and scheduled functions for triggering agents and cleaning sessions.
    • Token revocation and usage/analytics endpoints added.
  • Documentation

    • Expanded docs and redesigned Features, Pricing, Usage, and Settings pages.

…7 tools)

Rate Limiting:
- Token bucket per org with plan-based limits (free: 10, pro: 60, team: 200, enterprise: 1000 requests/min)
- X-RateLimit-Limit/Remaining/Retry-After headers
- 429 Too Many Requests when exceeded
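
The per-org token bucket described above can be sketched in Go as follows. This is a minimal illustration of the technique, not the server's actual implementation; the `RateLimiter` and `bucket` names and the refill math are assumptions:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// planRPM mirrors the plan-based limits above
// (free: 10, pro: 60, team: 200, enterprise: 1000 requests/min).
var planRPM = map[string]int{
	"free":       10,
	"pro":        60,
	"team":       200,
	"enterprise": 1000,
}

// bucket holds one org's current token count and last refill time.
type bucket struct {
	tokens float64
	last   time.Time
}

// RateLimiter keeps one token bucket per organization ID.
type RateLimiter struct {
	mu      sync.Mutex
	buckets map[string]*bucket
}

func NewRateLimiter() *RateLimiter {
	return &RateLimiter{buckets: make(map[string]*bucket)}
}

// Allow refills the org's bucket at the plan's per-minute rate, then tries
// to consume one token and reports whether the request may proceed.
func (rl *RateLimiter) Allow(orgID, plan string) bool {
	rpm, ok := planRPM[plan]
	if !ok {
		rpm = planRPM["free"] // unknown plans fall back to the free tier
	}
	rl.mu.Lock()
	defer rl.mu.Unlock()

	now := time.Now()
	b, ok := rl.buckets[orgID]
	if !ok {
		b = &bucket{tokens: float64(rpm), last: now}
		rl.buckets[orgID] = b
	}
	// Refill proportionally to elapsed time, capped at the burst size (rpm).
	b.tokens += now.Sub(b.last).Minutes() * float64(rpm)
	if b.tokens > float64(rpm) {
		b.tokens = float64(rpm)
	}
	b.last = now

	if b.tokens < 1 {
		return false // caller should respond 429 Too Many Requests
	}
	b.tokens--
	return true
}

func main() {
	rl := NewRateLimiter()
	allowed := 0
	for i := 0; i < 15; i++ {
		if rl.Allow("org-1", "free") {
			allowed++
		}
	}
	fmt.Println(allowed) // the free plan permits 10 requests before throttling
}
```

In middleware, a denied request would get a 429 with the Retry-After header; the review thread below notes the remaining-count header should be computed after, not before, the token is consumed.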

Realtime Hub:
- Org-scoped pub/sub with Subscribe/Unsubscribe/Broadcast
- 64-message buffered channels, non-blocking fan-out
- SubscriberCount helper
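
A minimal sketch of the org-scoped hub with buffered channels and non-blocking fan-out; the `Hub`/`Subscriber` shapes here are assumptions for illustration, not the file's exact API:

```go
package main

import (
	"fmt"
	"sync"
)

// Message is a payload broadcast to all subscribers of an org.
type Message struct {
	Event   string
	Payload string
}

// Subscriber receives messages on a buffered channel; Done signals removal.
type Subscriber struct {
	OrgID    string
	Messages chan Message
	Done     chan struct{}
}

// Hub fans messages out to org-scoped subscribers.
type Hub struct {
	mu   sync.RWMutex
	subs map[string]map[*Subscriber]struct{}
}

func NewHub() *Hub {
	return &Hub{subs: make(map[string]map[*Subscriber]struct{})}
}

func (h *Hub) Subscribe(orgID string) *Subscriber {
	sub := &Subscriber{
		OrgID:    orgID,
		Messages: make(chan Message, 64), // 64-message buffer, as above
		Done:     make(chan struct{}),
	}
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.subs[orgID] == nil {
		h.subs[orgID] = make(map[*Subscriber]struct{})
	}
	h.subs[orgID][sub] = struct{}{}
	return sub
}

func (h *Hub) Unsubscribe(sub *Subscriber) {
	h.mu.Lock()
	defer h.mu.Unlock()
	set, ok := h.subs[sub.OrgID]
	if !ok {
		return
	}
	if _, present := set[sub]; !present {
		return // already unsubscribed; avoid double-close
	}
	delete(set, sub)
	close(sub.Done)
	close(sub.Messages) // unblocks readers ranging over Messages
}

// Broadcast delivers msg to every subscriber of orgID without blocking:
// a slow subscriber with a full buffer simply drops the message.
func (h *Hub) Broadcast(orgID string, msg Message) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for sub := range h.subs[orgID] {
		select {
		case sub.Messages <- msg:
		default: // buffer full: drop rather than block the publisher
		}
	}
}

func (h *Hub) SubscriberCount(orgID string) int {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return len(h.subs[orgID])
}

func main() {
	h := NewHub()
	sub := h.Subscribe("org-1")
	h.Broadcast("org-1", Message{Event: "task.created", Payload: "{}"})
	h.Broadcast("org-2", Message{Event: "other", Payload: "{}"}) // different org: not delivered
	fmt.Println(h.SubscriberCount("org-1"), len(sub.Messages))
}
```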

Slack Notifications:
- SlackClient with SendMessage + SendDM (users.lookupByEmail)
- Gracefully disabled when SLACK_BOT_TOKEN not set
- slack_notify MCP tool (channel, message, mention_user)

Usage Tracking:
- usage_get MCP tool — tasks/month, active agents/tokens, memory MB, API calls/day

Wiring:
- Rate limiter chained after auth middleware on /mcp routes
- Hub, SlackClient initialized in main.go

Total: 57 MCP tools registered.

Co-Authored-By: Fady Mondy <info@3x1.io>
…age, settings)

Pricing page: 4 plans with comparison table, Pro highlighted
Features page: 5 sections (MCP Tools, AI Agents, Team Sync, GitHub, Self-Hosted)
Docs page: Getting Started, 55 tools reference, Self-Hosting, API Reference
Usage dashboard: Livewire stats with progress bars, plan limits
Settings page: Profile (name/timezone/language), org info, danger zone
Navigation: Active state highlighting, all routes wired
DB fix: Correct password + port for Supabase Postgres

Co-Authored-By: Fady Mondy <info@3x1.io>
…D, not int)

auth()->id() returns an integer from the Laravel users table, but organizations.owner_id
references auth.users(id), which is a UUID. Now uses supabase_user_id or generates a UUID.

Co-Authored-By: Fady Mondy <info@3x1.io>
…t trigger

stripe-webhook: HMAC-SHA256 signature verification, handles checkout.session.completed,
  subscription.updated/deleted, invoice.payment_failed. Updates org plan + limits.

cron-session-cleanup: Calls cleanup_stale_sessions() to mark inactive sessions offline.

cron-agent-trigger: Finds todo tasks with assigned agents, logs trigger events.

Co-Authored-By: Fady Mondy <info@3x1.io>
Laravel users table uses integer IDs, not auth.users UUIDs. The FK
constraints on owner_id and user_id prevent creating orgs from Laravel.
Removed REFERENCES auth.users(id) — data integrity handled at app level.

Co-Authored-By: Fady Mondy <info@3x1.io>
…column)

Co-Authored-By: Fady Mondy <info@3x1.io>
…ebar nav

3 new pages under /project/[ref]/orchestra/:
- Dashboard: 4 metric cards (connections, tasks, agents, memories) + activity feed + active sessions
- Agents Monitor: table with name, role, status badges, team, last active
- Tokens Admin: table with prefix, user, usage count, revoke action

Infrastructure:
- OrchestraLayout with ProductMenu sidebar (Dashboard, Agents, Tokens)
- 6 data layer files (React Query hooks querying via pg-meta executeSql)
- Token revoke mutation
- Orchestra nav section in sidebar + mobile nav
- 5 new navigation tests (all 32 pass)

Co-Authored-By: Fady Mondy <info@3x1.io>

coderabbitai bot commented Mar 28, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Adds an Orchestra product: UI pages/components (Dashboard, Agents, Tokens, Logs) with React Query data layers and mutations, navigation/layout integration, MCP server realtime, rate limiting, Slack/usage tools, Supabase Edge functions, schema migrations, and Laravel/Supabase integration plus Livewire settings/usage UIs.

Changes

  • Orchestra UI Components
    (apps/studio/components/interfaces/Orchestra/OrchestraAgents.tsx, .../OrchestraDashboard.tsx, .../OrchestraTokens.tsx, .../OrchestraLogs.tsx)
    New React components for Agents, Dashboard, Tokens, and Logs. Each uses the route ref, React Query hooks, renders loading/empty states, formats timestamps/badges, and includes token revoke, log filtering, and UI controls.
  • Orchestra Pages & Layout
    (apps/studio/pages/project/[ref]/orchestra/index.tsx, .../agents.tsx, .../tokens.tsx, .../logs.tsx, apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.tsx, .../OrchestraLayout.utils.ts)
    New Next.js pages and an authenticated OrchestraLayout with OrchestraProductMenu and a menu-generation hook. Pages wired via getLayout to DefaultLayout + OrchestraLayout.
  • Navigation & Mobile Menu
    (apps/studio/components/layouts/Navigation/NavigationBar/NavigationBar.utils.tsx, .../NavigationBar.utils.test.tsx, apps/studio/components/interfaces/Sidebar.tsx, .../MobileMenuContent.tsx, .../mobileProductMenuRegistry.tsx)
    Added generateOrchestraRoutes, tests, and integrated Orchestra into the desktop sidebar and mobile product menu registry.
  • Orchestra Data Layer
    (apps/studio/data/orchestra/keys.ts, orchestra-dashboard-query.ts, orchestra-agents-query.ts, orchestra-tokens-query.ts, orchestra-activity-log-query.ts, orchestra-active-sessions-query.ts, orchestra-service-logs-query.ts, orchestra-token-revoke-mutation.ts)
    Typed query keys, SQL fetchers via executeSql, React Query hooks for dashboard/agents/tokens/activity/active-sessions/service-logs, and a revoke-token mutation (toasts + cache invalidation).
  • Studio API Endpoints (profile & tokens)
    (apps/studio/pages/api/platform/profile/index.ts, .../profile/access-tokens.ts, .../profile/access-tokens/[id].ts, .../profile/audit.ts, apps/studio/pages/api/platform/organizations/[slug]/audit.ts)
    Profile endpoint now DB-backed with PATCH; access-tokens GET/POST/DELETE implemented against public.mcp_tokens; audit endpoints added/stubbed.
  • MCP Server — Realtime & Rate Limiting
    (mcp-server/internal/realtime/hub.go, mcp-server/internal/auth/ratelimit.go, mcp-server/cmd/server/main.go)
    Hub refactored to an org-scoped Subscriber/Message model with subscriber management; new per-organization token-bucket RateLimiter with middleware; main.go wires the hub, rate limiter, and Slack/tools registration.
  • MCP Server — Notifications & Tools
    (mcp-server/internal/notifications/slack.go, mcp-server/internal/tools/slack.go, mcp-server/internal/tools/usage.go)
    Slack client added (post message / DM helpers) and MCP tools registered: slack_notify (uses the Slack client) and usage_get (DB-backed usage summary).
  • Supabase Edge Functions
    (supabase/functions/cron-agent-trigger/index.ts, supabase/functions/cron-session-cleanup/index.ts, supabase/functions/stripe-webhook/index.ts)
    Edge functions for scheduled agent triggers, session cleanup RPC, and a Stripe webhook handler with signature verification, org plan updates, and activity_log writes.
  • Service Logs Table & Query
    (supabase/migrations/20260328000011_service_logs.sql, apps/studio/data/orchestra/orchestra-service-logs-query.ts)
    New public.service_logs table migration and a service-logs query combining service logs + activity_log with filters and periodic refetch.
  • Database Migrations (FK removals)
    (supabase/migrations/20260328_*_*.sql; multiple files: profiles/organizations, mcp_tokens, projects/agents/skills, tasks, memory/activity, notes/specs/github)
    Series of migrations removing many FK constraints to auth.users (and similar) by converting columns to plain UUIDs across multiple tables.
  • Web App — Supabase SDK & Services
    (web/packages/saeedvir/supabase/*, web/app/Services/*, web/config/supabase.php, web/bootstrap/providers.php, web/composer.json, web/config/services.php)
    Added the Supabase client package, services (Auth/DB/Storage/Realtime), Facade & Provider, config, and replaced the custom HTTP DB layer with SDK-backed service accessors.
  • Web App — Laravel auth & user changes
    (web/app/Http/Controllers/Auth/*, web/app/Services/SupabaseAuthService.php, web/app/Models/User.php, web/database/migrations/*)
    Login/registration/GitHub flows now integrate Supabase (persisting supabase_user_id), add timezone/language user columns, and provide an orchestraId() helper; Supabase session/token handling added.
  • Web App — Livewire & Views
    (web/app/Livewire/*, web/resources/views/*, web/routes/web.php, web/app/Models/*, web/app/Services/TokenService.php)
    Added Livewire settings/usage components and views, updated the TokenService storage format, model timestamp/cast tweaks, new dashboard usage/settings routes, and blade view changes (docs, features, pricing, etc.).
  • Scripts & Config
    (scripts/start-mcp.sh, .mcp.json, .requests/*)
    New MCP start script, .mcp.json updated to the orchestra SSE server, and new request documentation files.
  • Tests & Small UI Copy
    (apps/studio/components/layouts/Navigation/NavigationBar/NavigationBar.utils.test.tsx, apps/studio/components/interfaces/Account/Preferences/*.tsx)
    Added tests for the route generator and small UI text adjustments replacing "Supabase" with "Orchestra".

Sequence Diagram(s)

sequenceDiagram
  participant Stripe
  participant Webhook as StripeWebhookFunction
  participant SupabaseDB as SupabaseDB
  participant Activity as activity_log

  Stripe->>Webhook: POST / (signed event)
  Webhook->>Webhook: verify signature (t, v1) HMAC-SHA256
  Webhook->>SupabaseDB: update organization (plan/limits/subscription) / write activity_log
  SupabaseDB-->>Webhook: OK
  Webhook-->>Stripe: 200 { received: true }

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 I nibbled code in twilight's glow,

Dashboards sprouted, logs in tow.
Tokens hop and crons go ping,
Slack sends cheers — the rabbits sing.
A burrow built where features grow.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
supabase/migrations/20260328000002_profiles_organizations.sql (1)

68-73: ⚠️ Potential issue | 🟠 Major

Ship the FK removal as a new migration, not by editing an old one.

Because this file is already versioned, environments that previously applied 20260328000002_profiles_organizations.sql will not see these definition changes; they will keep the old auth.users foreign keys. The updated onboarding flow can still fail on upgrade unless you add a follow-up ALTER TABLE ... DROP CONSTRAINT ... migration.

Possible follow-up migration
+-- supabase/migrations/<next>_drop_auth_user_fks.sql
+ALTER TABLE public.organizations
+  DROP CONSTRAINT IF EXISTS organizations_owner_id_fkey;
+
+ALTER TABLE public.team_members
+  DROP CONSTRAINT IF EXISTS team_members_user_id_fkey;

Also applies to: 115-123

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/migrations/20260328000002_profiles_organizations.sql` around lines
68 - 73, Don't edit the already-committed CREATE TABLE in organizations; instead
add a new migration that drops the existing foreign key(s) to auth.users so
environments that already ran 20260328000002_profiles_organizations.sql get the
change. Create a new migration which runs ALTER TABLE public.organizations DROP
CONSTRAINT IF EXISTS <constraint_name>; (and any other FK constraints referenced
to auth.users found around lines 115-123) — locate the constraint names on the
existing DB or in the original migration (e.g., the FK on organizations.owner_id
referencing auth.users) and drop them safely in the follow-up migration.
🟡 Minor comments (8)
mcp-server/internal/auth/ratelimit.go-115-117 (1)

115-117: ⚠️ Potential issue | 🟡 Minor

X-RateLimit-Remaining is off by one on successful requests.

It is computed before Allow() consumes the token, so callers see one more remaining request than they actually have.

Return post-charge remaining budget
-		remaining := rl.Remaining(uc.OrgID, uc.Plan)
-		w.Header().Set("X-RateLimit-Limit", fmt.Sprintf("%d", rl.rpmForPlan(uc.Plan)))
-		w.Header().Set("X-RateLimit-Remaining", fmt.Sprintf("%d", remaining))
-
-		if !rl.Allow(uc.OrgID, uc.Plan) {
+		allowed := rl.Allow(uc.OrgID, uc.Plan)
+		remaining := rl.Remaining(uc.OrgID, uc.Plan)
+		w.Header().Set("X-RateLimit-Limit", fmt.Sprintf("%d", rl.rpmForPlan(uc.Plan)))
+		w.Header().Set("X-RateLimit-Remaining", fmt.Sprintf("%d", remaining))
+
+		if !allowed {

Also applies to: 119-127

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp-server/internal/auth/ratelimit.go` around lines 115 - 117, The
X-RateLimit-Remaining header is computed before the token is consumed, so change
the logic in the handler that calls rl.Remaining(uc.OrgID, uc.Plan) and
rl.Allow(...) to first call rl.Allow(uc.OrgID, uc.Plan) (or otherwise
consume/charge the token via Allow) and then call rl.Remaining(uc.OrgID,
uc.Plan) to compute the post-charge remaining budget; update where headers are
set (X-RateLimit-Limit and X-RateLimit-Remaining) to use rl.rpmForPlan(uc.Plan)
and the post-consumption remaining value so callers see the correct remaining
count, and apply the same change to the other similar block around lines 119-127
that uses Allow()/Remaining().
web/resources/views/pages/docs.blade.php-42-42 (1)

42-42: ⚠️ Potential issue | 🟡 Minor

Verify the MCP tool count.

The documentation mentions "55 MCP tools" here and on line 50, but the PR summary indicates 57 tools are registered. Please verify and update the documentation to reflect the accurate count.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/resources/views/pages/docs.blade.php` at line 42, The doc text in
pages/docs.blade.php currently says "55 MCP tools" but the PR registers 57
tools; verify the actual registered MCP tool count and update the string
"Restart Claude Code and you should see 55 MCP tools available." (and the other
occurrence on the same page) to the correct number (e.g., 57) so the docs match
the codebase. Ensure both mentions on the page are changed consistently.
apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts-4-6 (1)

4-6: ⚠️ Potential issue | 🟡 Minor

Handle undefined ref to avoid generating invalid URLs.

When ref is undefined, baseUrl becomes /project/undefined/orchestra, which produces invalid navigation links. Consider returning an empty menu or guarding against this case.

🛡️ Proposed fix
 export function generateOrchestraMenu(ref?: string): ProductMenuGroup[] {
+  if (!ref) return []
+
   const baseUrl = `/project/${ref}/orchestra`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts`
around lines 4 - 6, The generateOrchestraMenu function builds URLs using ref but
doesn't handle ref === undefined, causing links like
/project/undefined/orchestra; update generateOrchestraMenu to guard for a
missing ref (e.g., if (!ref) return [] or throw/return a safe default menu) and
avoid constructing baseUrl when ref is falsy, ensuring any references to baseUrl
use the validated ref value so no invalid navigation links are produced.
apps/studio/components/interfaces/Orchestra/OrchestraAgents.tsx-83-85 (1)

83-85: ⚠️ Potential issue | 🟡 Minor

Guard against missing created_at value.

Unlike last_active_at which has a null check on line 81, created_at is formatted directly without a guard. If created_at is unexpectedly null or undefined, this could render "Invalid Date".

🛡️ Proposed fix
 <TableCell className="text-foreground-lighter">
-  {dayjs(agent.created_at).format('MMM D, YYYY')}
+  {agent.created_at ? dayjs(agent.created_at).format('MMM D, YYYY') : '--'}
 </TableCell>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/interfaces/Orchestra/OrchestraAgents.tsx` around lines
83 - 85, Wrap the dayjs formatting call for agent.created_at in a null/undefined
guard so you don't render "Invalid Date"; in OrchestraAgents.tsx update the
TableCell that currently contains {dayjs(agent.created_at).format('MMM D,
YYYY')} to conditionally render a fallback (e.g., '-' or empty string) when
agent.created_at is falsy and only call dayjs(...).format when agent.created_at
exists. Ensure you reference agent.created_at and the dayjs(...).format
expression so the check is applied to that exact value.
web/resources/views/pages/features.blade.php-44-46 (1)

44-46: ⚠️ Potential issue | 🟡 Minor

Minor inconsistency: Tool count discrepancy.

The hero states "55 MCP tools" but the PR objectives mention "57 tools". Consider verifying the accurate count to maintain consistency across marketing materials and documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/resources/views/pages/features.blade.php` around lines 44 - 46, The hero
paragraph currently reads "55 MCP tools..." which conflicts with other PR text
claiming "57 tools"; confirm the correct tool count and update the string in the
features.blade.php paragraph (the paragraph containing "55 MCP tools, AI agent
teams, real-time collaboration, and full self-hosting -- all in one platform.")
to the verified number so the marketing copy is consistent across the site.
apps/studio/components/interfaces/Orchestra/OrchestraTokens.tsx-93-95 (1)

93-95: ⚠️ Potential issue | 🟡 Minor

Guard against undefined usage_count.

If usage_count is undefined or null, calling .toLocaleString() will throw. Add a fallback.

🛡️ Proposed fix
 <TableCell className="text-foreground-lighter tabular-nums">
-  {token.usage_count.toLocaleString()}
+  {(token.usage_count ?? 0).toLocaleString()}
 </TableCell>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/interfaces/Orchestra/OrchestraTokens.tsx` around lines
93 - 95, Guard against undefined token.usage_count before calling toLocaleString
by providing a fallback; update the TableCell rendering (the cell that currently
uses token.usage_count.toLocaleString()) to either coerce with a default number
like (token.usage_count ?? 0).toLocaleString() or conditionally render a
placeholder when null/undefined (e.g., token.usage_count != null ?
token.usage_count.toLocaleString() : '—'), ensuring you reference
token.usage_count and the surrounding TableCell component.
web/resources/views/pages/pricing.blade.php-273-285 (1)

273-285: ⚠️ Potential issue | 🟡 Minor

Add accessible text for feature availability icons.

The checkmark and X icons in the comparison table lack accessible text. Screen reader users won't understand what these icons mean.

♿ Suggested fix for checkmark icons
 <td class="py-3 px-4 text-center text-brand-cyan">
-  <svg class="w-5 h-5 mx-auto" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"/></svg>
+  <svg class="w-5 h-5 mx-auto" fill="none" stroke="currentColor" viewBox="0 0 24 24" aria-hidden="true"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"/></svg>
+  <span class="sr-only">Included</span>
 </td>

Apply similarly for X icons with "Not included" text.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/resources/views/pages/pricing.blade.php` around lines 273 - 285, The
checkmark SVGs inside the table cells (td elements with class "py-3 px-4
text-center text-brand-cyan" containing svg.w-5.h-5.mx-auto) lack accessible
text; add an accessible label for screen readers by either giving the svg
role="img" and aria-label="Included" or keeping aria-hidden="true" on the svg
and inserting a visually-hidden sibling span (e.g., class "sr-only") with the
text "Included", and apply the same pattern for the X icons using "Not included"
so screen readers convey feature availability.
mcp-server/internal/realtime/hub.go-51-74 (1)

51-74: ⚠️ Potential issue | 🟡 Minor

Messages channel is never closed, which may leave readers blocked.

Unsubscribe closes sub.Done to signal termination but leaves sub.Messages open. Subscribers that loop over range sub.Messages without selecting on sub.Done will block indefinitely. Consider closing sub.Messages after removing the subscriber to unblock any such readers.

Proposed fix
 	// Signal the subscriber that it has been removed.
 	select {
 	case <-sub.Done:
 		// Already closed.
 	default:
 		close(sub.Done)
 	}
+
+	// Close Messages so range loops exit.
+	close(sub.Messages)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp-server/internal/realtime/hub.go` around lines 51 - 74, Unsubscribe
currently removes the subscriber and closes sub.Done but leaves sub.Messages
open, which can leave readers blocked; update Unsubscribe (in Hub.Unsubscribe)
to close(sub.Messages) after removing the subscriber from h.subscribers (while
still holding h.mu) and before closing sub.Done so any goroutines ranging over
sub.Messages are unblocked; use the same removal code path that manipulates
h.subscribers[sub.OrgID] and then call close(sub.Messages) (only if sub.Messages
!= nil) prior to closing sub.Done to avoid double-close races.
🧹 Nitpick comments (6)
web/resources/views/livewire/dashboard/usage-stats.blade.php (1)

37-42: Consider adding ARIA attributes to progress bars for accessibility.

The progress bars convey important usage information but lack ARIA attributes (role="progressbar", aria-valuenow, aria-valuemin, aria-valuemax) that would make them accessible to screen reader users.

♿ Example fix for one progress bar
 <div class="w-full h-3 bg-brand-surface rounded-full overflow-hidden">
-    <div class="h-full rounded-full transition-all duration-500 {{ $this->getUsagePercentage($tokensUsed, $tokensMax) >= 90 ? 'bg-red-500' : 'bg-brand-cyan' }}"
-         style="width: {{ $this->getUsagePercentage($tokensUsed, $tokensMax) }}%"></div>
+    <div class="h-full rounded-full transition-all duration-500 {{ $this->getUsagePercentage($tokensUsed, $tokensMax) >= 90 ? 'bg-red-500' : 'bg-brand-cyan' }}"
+         style="width: {{ $this->getUsagePercentage($tokensUsed, $tokensMax) }}%"
+         role="progressbar"
+         aria-valuenow="{{ $tokensUsed }}"
+         aria-valuemin="0"
+         aria-valuemax="{{ $tokensMax }}"
+         aria-label="Active tokens usage"></div>
 </div>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/resources/views/livewire/dashboard/usage-stats.blade.php` around lines 37
- 42, The progress bar markup in the Livewire view is missing ARIA attributes;
update the outer or inner div used for the visual bar to include
role="progressbar" and set aria-valuenow to the numeric value returned by
$this->getUsagePercentage($tokensUsed, $tokensMax) and aria-valuemin="0" and
aria-valuemax="100" (or compute aria-valuemax from $tokensMax if you prefer
absolute tokens), ensuring the values reference the getUsagePercentage(...) call
(and/or tokensUsed/tokensMax) so screen readers receive current, min and max
values for the progress (keep the existing class and style logic intact).
apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts (1)

4-22: Consider adding unit tests for generateOrchestraMenu.

This is a pure transformation function that should be unit tested with vitest. Based on learnings: "Extract pure transformation functions (parse, format, validate, compute) to .utils.ts files and unit test them with vitest."

Would you like me to generate unit tests for this utility function?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts`
around lines 4 - 22, Add vitest unit tests for the pure transformation function
generateOrchestraMenu in OrchestraLayout.utils.ts: write tests that call
generateOrchestraMenu with a sample ref (e.g., "my-ref") and with undefined, and
assert the returned array shape (two groups with titles "Overview" and
"Manage"), that items exist with the expected keys
('orchestra','agents','tokens'), and that each url is constructed correctly
(baseUrl === `/project/${ref}/orchestra` and `${baseUrl}/agents`,
`${baseUrl}/tokens`) and that each item has an items array; include at least one
snapshot or deep-equality assertion to catch regressions.
mcp-server/cmd/server/main.go (1)

155-158: Rate limiting is bypassed when auth middleware is disabled.

When DATABASE_URL is not set, both auth and rate limiting are skipped. This is likely intentional for local development, but consider adding rate limiting unconditionally to prevent abuse in misconfigured deployments.

♻️ Suggested change
 if authMiddleware != nil {
   mux.Handle("GET /mcp", authMiddleware.Middleware(rateLimiter.Middleware(http.HandlerFunc(server.HandleSSE))))
   mux.Handle("POST /mcp", authMiddleware.Middleware(rateLimiter.Middleware(http.HandlerFunc(server.HandleMessage))))
 } else {
-  mux.HandleFunc("GET /mcp", server.HandleSSE)
-  mux.HandleFunc("POST /mcp", server.HandleMessage)
+  // Still apply rate limiting even without auth
+  mux.Handle("GET /mcp", rateLimiter.Middleware(http.HandlerFunc(server.HandleSSE)))
+  mux.Handle("POST /mcp", rateLimiter.Middleware(http.HandlerFunc(server.HandleMessage)))
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp-server/cmd/server/main.go` around lines 155 - 158, Current logic skips
both auth and rate limiting when DATABASE_URL is unset, allowing requests routed
via mux.HandleFunc("GET /mcp", server.HandleSSE) and mux.HandleFunc("POST /mcp",
server.HandleMessage) to bypass rate limits; update the routing so rate limiting
is applied unconditionally even if the auth middleware is disabled: keep the
conditional for attaching auth middleware but always wrap server.HandleSSE and
server.HandleMessage with the rate-limiter middleware (or insert the
rate-limiter into the top-level handler chain) so requests hit the limiter
regardless of DATABASE_URL configuration.
web/resources/views/pages/features.blade.php (1)

64-78: Consider extracting categories data to a view composer or controller.

The inline $categories array works for a static page, but if this data is reused elsewhere (docs, API), extracting it to a shared location would improve maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/resources/views/pages/features.blade.php` around lines 64 - 78, The
inline $categories array in the features view should be moved out to a
controller or view composer so the data is reusable and testable; create a
provider for the data (e.g., add a method in FeaturesController::index that
builds the $categories array and passes it to the view with return
view('pages.features', compact('categories')) or register a View::composer for
'pages.features' that injects $categories, or load the array from a dedicated
config/data class, then remove the `@php` block from the blade and use the
injected $categories variable.
apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.tsx (1)

14-16: Path segment extraction is fragile and may break silently.

The hardcoded index pathSegments[4] assumes a fixed URL structure (/project/[ref]/orchestra/[page]). If the route structure changes or the component is mounted at a different path depth, this will silently return incorrect values.

♻️ Suggested improvement — derive page from pathname relative to 'orchestra'
 const pathSegments = router.pathname.split('/')
-// The page key is the segment after /orchestra/, or 'orchestra' for the index
-const page = pathSegments[4] ?? 'orchestra'
+// Find the segment after 'orchestra', or default to 'orchestra' for index
+const orchestraIndex = pathSegments.indexOf('orchestra')
+const page = orchestraIndex !== -1 && pathSegments[orchestraIndex + 1]
+  ? pathSegments[orchestraIndex + 1]
+  : 'orchestra'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.tsx` around
lines 14 - 16, The current extraction in OrchestraLayout uses a brittle fixed
index (pathSegments[4]) to compute page from router.pathname; change it to find
the 'orchestra' segment dynamically and take the next segment (or default to
'orchestra') — e.g., split router.pathname into pathSegments, locate const idx =
pathSegments.indexOf('orchestra'), then set page = (idx >= 0 && pathSegments[idx
+ 1]) ? pathSegments[idx + 1] : 'orchestra'; ensure you handle idx === -1 and
empty/trailing-slash segments so the fallback remains 'orchestra'.
mcp-server/internal/tools/usage.go (1)

111-125: Inefficient counting: fetches all rows to count client-side.

countRows retrieves every matching row as JSON and counts the array length. For tables with many rows, this wastes bandwidth and memory. Consider a server-side count query, or a HEAD request with the PostgREST Prefer: count=exact header.

Alternative using COUNT aggregation

If the db.Client supports custom queries, use a SELECT count(*) FROM table WHERE ... approach. Alternatively, if using PostgREST, set the Prefer: count=exact header and read the Content-Range response header for the count without transferring row data.
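With the header approach, the count arrives in the Content-Range response header (e.g. `0-24/3573`), so only the header needs parsing. A minimal sketch of that parsing step, assuming the hypothetical helper name parseContentRangeCount (the actual db.Client method for issuing the HEAD request is up to the implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseContentRangeCount extracts the total row count from a PostgREST
// Content-Range header such as "0-24/3573" (or "*/0" for an empty table).
func parseContentRangeCount(header string) (int, error) {
	parts := strings.SplitN(header, "/", 2)
	if len(parts) != 2 {
		return 0, fmt.Errorf("malformed Content-Range: %q", header)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	n, err := parseContentRangeCount("0-24/3573")
	if err != nil {
		panic(err)
	}
	fmt.Println(n)
}
```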

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp-server/internal/tools/usage.go` around lines 111 - 125, countRows
currently fetches all matching rows via dbClient.Get and counts them
client-side; change it to perform a server-side count instead (either send a
HEAD request with the Prefer: count=exact header and read the Content-Range
header, or execute a SELECT count(*) style query if db.Client supports it) so
you don't transfer/marshal all rows. Update the implementation in countRows to
call the appropriate db.Client method (e.g., a HEAD variant or a Count-like
method) and parse the returned count from the response headers/body rather than
unmarshalling into []json.RawMessage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/studio/components/interfaces/Orchestra/OrchestraDashboard.tsx`:
- Around line 24-37: The three data hooks (useOrchestraDashboardQuery,
useOrchestraActivityLogQuery, useOrchestraActiveSessionsQuery) currently ignore
their error return values and treat undefined data as an empty/healthy state;
update the component to read the error from each hook, classify it using the
project's ErrorMatcher per studio-error-handling.instructions.md, and render a
dedicated error state/UI for each query (or a combined error banner) before
falling back to empty defaults like 0, “No recent activity”, or “No active
sessions”; ensure you check error and isPending alongside data in the rendering
paths inside OrchestraDashboard so failed backend responses are shown as errors
rather than empty data.

In `@apps/studio/components/interfaces/Orchestra/OrchestraTokens.tsx`:
- Around line 107-116: The Revoke button currently calls handleRevoke(token.id)
directly; wrap that action in a user confirmation step so destructive
revocations require explicit confirmation. Change the Button onClick to open a
confirmation dialog/modal (or use window.confirm) that shows a clear message
including the token identifier (token.id or token.name) and only calls
handleRevoke(token.id) when the user confirms; preserve existing
loading/disabled behavior (revokingId and isRevoking) and ensure the dialog is
dismissible/cancelable so accidental clicks do not revoke the token.
- Around line 34-41: The revoke flow in handleRevoke lacks error handling: when
calling revokeToken({ projectRef: ref, tokenId }, ...), add an onError callback
to the mutation options that clears the revoking state (setRevokingId(null)) and
surfaces user feedback (e.g., show a toast or set an error state) so users know
the revoke failed; ensure you still clear revoking in onSettled or onError to
avoid stuck UI, and reference handleRevoke, revokeToken, setRevokingId and ref
when applying the change.

In `@apps/studio/data/orchestra/orchestra-token-revoke-mutation.ts`:
- Around line 13-19: The SQL in revokeOrchestraToken splices tokenId directly
into the query, which is injectable; change the implementation to use a
parameterized query/bound variable for tokenId (and projectRef if used) instead
of string interpolation when building the SQL string that updates mcp_tokens
(revoked_at = NOW()); locate the SQL template in function revokeOrchestraToken
and replace the template literal interpolation with a parameter placeholder and
pass tokenId as a bound parameter to the DB client or, alternatively, move the
revoke logic into a protected stored RPC and call that from
revokeOrchestraToken.
- Around line 21-27: The mutation currently returns the result of executeSql
without checking whether any rows were updated, so an UPDATE ... RETURNING that
affects zero rows still counts as success; update the logic in the
orchestra-token-revoke-mutation where executeSql is called (the calls with
queryKey ['orchestra','revoke-token', tokenId] and the similar block at lines
38-44) to inspect the returned result (e.g., result.length or result.rowCount)
and if it is zero/empty, throw an Error (with a clear message like "No token
revoked" or include tokenId) so the mutation treats zero-row updates as failures
and triggers onError instead of onSuccess.

In `@apps/studio/data/orchestra/orchestra-tokens-query.ts`:
- Around line 23-36: The SQL selects non-existent columns and lacks org scoping:
change t.prefix to t.token_prefix, remove or replace t.user_email (or join the
users table if you need email), and add organization filtering (e.g., WHERE
t.organization_id = $1 AND t.user_id = $2 or at minimum WHERE t.organization_id
= :organizationId) so the query respects organization-level isolation and RLS
policies; update the code that builds the sql variable and its call site to
accept and bind an organizationId parameter accordingly, and keep the existing
COALESCE(t.usage_count, 0)::int AS usage_count and ORDER BY t.created_at DESC.

In `@mcp-server/cmd/server/main.go`:
- Around line 91-94: The realtime hub created with realtime.NewHub() is unused
and should be wired into tool registration instead of discarded; pass the hub
variable (hub) into the tool registration functions that need broadcasting
(e.g., update calls like tools.RegisterAgentTools or other Register* functions
to accept a hub parameter) and propagate the hub into those implementations so
handlers can call hub.Broadcast/related methods; remove the blank identifier
assignment (_ = hub) once the hub is threaded through and ensure any changed
Register... signatures (e.g., RegisterAgentTools(registry, dbClient, hub)) are
updated and implemented where the hub is required.

In `@mcp-server/internal/auth/ratelimit.go`:
- Around line 44-55: The current code only sets bucket.ratePerSec and
bucket.maxTokens when creating a new bucket so plan changes don't update
existing buckets; fix by, whenever accessing rl.buckets[orgID] (the code that
uses rl.buckets, orgID, rpmForPlan and the bucket struct with fields tokens,
lastFill, ratePerSec, maxTokens), recompute rpm := rl.rpmForPlan(plan) and
update the existing bucket's ratePerSec and maxTokens to float64(rpm)/60.0 and
float64(rpm) respectively; also clamp bucket.tokens to the new maxTokens (or
scale proportionally if desired) and preserve lastFill, ensuring the same update
logic is applied in the other bucket-access path referenced (the similar block
around where bucket is handled later).
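The refresh-on-access described above can be sketched as a small method on the bucket struct. Field names follow the review comment; the surrounding RateLimiter type, its mutex, and rpmForPlan are assumed, and clamping (rather than proportional scaling) is one reasonable choice:

```go
package main

import (
	"fmt"
	"time"
)

// bucket mirrors the limiter state described above: tokens refill at
// ratePerSec up to maxTokens.
type bucket struct {
	tokens     float64
	lastFill   time.Time
	ratePerSec float64
	maxTokens  float64
}

// applyPlan recomputes the bucket's rate and capacity from the plan's
// requests-per-minute, and clamps the stored tokens so a downgrade takes
// effect immediately instead of waiting for the old balance to drain.
// lastFill is left untouched.
func (b *bucket) applyPlan(rpm int) {
	b.ratePerSec = float64(rpm) / 60.0
	b.maxTokens = float64(rpm)
	if b.tokens > b.maxTokens {
		b.tokens = b.maxTokens
	}
}

func main() {
	b := &bucket{tokens: 600, lastFill: time.Now(), ratePerSec: 10, maxTokens: 600}
	b.applyPlan(60) // org downgraded from 600 rpm to 60 rpm
	fmt.Println(b.tokens, b.maxTokens)
}
```

Calling applyPlan on every bucket access (in both code paths the comment mentions) keeps existing buckets in sync with plan changes.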

In `@mcp-server/internal/notifications/slack.go`:
- Around line 96-99: In SlackClient.lookupUserByEmail, the email is concatenated
raw into the users.lookupByEmail URL which breaks for characters like +, %, &;
change the request construction to build the query using url.Values (or
url.QueryEscape) to properly encode the "email" value and append it to
slackAPIBase+"/users.lookupByEmail" before calling http.NewRequestWithContext;
ensure the code references the same SlackClient.lookupUserByEmail method and
slackAPIBase constant so the encoded query replaces the current string
concatenation.
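The encoding fix is a one-liner with net/url. A sketch, assuming the slackAPIBase constant matches the one in slack.go:

```go
package main

import (
	"fmt"
	"net/url"
)

const slackAPIBase = "https://slack.com/api" // assumed to match the constant in slack.go

// lookupUserByEmailURL builds the users.lookupByEmail request URL with the
// email query-encoded, so addresses containing '+', '%', or '&' survive intact.
func lookupUserByEmailURL(email string) string {
	q := url.Values{}
	q.Set("email", email)
	return slackAPIBase + "/users.lookupByEmail?" + q.Encode()
}

func main() {
	fmt.Println(lookupUserByEmailURL("dev+ops@example.com"))
}
```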

In `@mcp-server/internal/tools/slack.go`:
- Around line 80-83: The current mention uses a mailto: link and won't notify
the user; change the code to resolve the Slack user ID and use the mention
format <@USERID>. Either make SlackClient.lookupUserByEmail public (rename to
LookupUserByEmail) and call it here, or inline the lookup logic, then when
input.MentionUser is non-empty replace the mailto line with a lookup that
returns the Slack user ID and set text = fmt.Sprintf("cc <@%s> — %s", userID,
text); ensure you handle lookup errors (fall back to the email link or omit the
mention) and reference SlackClient, lookupUserByEmail/LookupUserByEmail,
input.MentionUser, and the text variable when making the change.
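The mention-building portion of that change can be sketched as a pure function. The lookup itself (LookupUserByEmail) and its error handling are assumed to happen before this point; an empty userID models a failed lookup:

```go
package main

import "fmt"

// mentionText prepends a Slack mention when a user ID could be resolved,
// falling back to a plain email link when the lookup failed, and to the
// bare text when no mention was requested at all.
func mentionText(userID, email, text string) string {
	if userID != "" {
		return fmt.Sprintf("cc <@%s> — %s", userID, text) // real mention: notifies the user
	}
	if email != "" {
		return fmt.Sprintf("cc <mailto:%s|%s> — %s", email, email, text) // fallback: link only
	}
	return text
}

func main() {
	fmt.Println(mentionText("U12345", "", "deploy finished"))
}
```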

In `@mcp-server/internal/tools/usage.go`:
- Around line 76-79: The TasksThisMonth count is currently counting all
activity_log rows; update the query passed to countRows so it filters only
task-related actions (e.g., action IN
('task_created','task_started','task_completed','task_failed')) in addition to
organization_id and created_at; modify the fmt.Sprintf call that builds the
query for resp.TasksThisMonth (using orgID and startOfMonth) to include an
action=in.(task_created,task_started,task_completed,task_failed) clause matching
the dashboard filter.

In `@supabase/functions/cron-agent-trigger/index.ts`:
- Around line 28-35: The current code selects todo tasks then writes activities,
allowing duplicate processing; replace the two-step select+write with an atomic
claim step so each task is processed once. Use Supabase to run an atomic SQL
update that marks/claims the tasks and returns them (e.g. an UPDATE on tasks
setting claimed_by/claimed_at or status='processing' with a WHERE status='todo'
AND assigned_agent_id IS NOT NULL AND claimed_at IS NULL ORDER BY priority LIMIT
10 RETURNING *), or call an RPC that performs that UPDATE ... RETURNING or uses
FOR UPDATE SKIP LOCKED; then iterate the returned rows to emit the
agent_triggered activity. Target the code around the tasks query (the
supabase.from('tasks').select(...).eq('status','todo')... block) and replace it
with the atomic claim+return logic before writing activity events.
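The essential property of the claim step is that check-and-mark happens atomically, so two concurrent cron runs can never both receive the same task. An in-memory analogue of the UPDATE ... RETURNING semantics (the Task shape and status strings here are illustrative, not the actual schema):

```go
package main

import (
	"fmt"
	"sync"
)

// store holds task statuses; claimTasks marks tasks 'processing' and returns
// them under a single lock, mirroring the atomic UPDATE ... WHERE
// status = 'todo' ... RETURNING * claim described above.
type store struct {
	mu     sync.Mutex
	status map[string]string // task id -> status
}

func (s *store) claimTasks(limit int) []string {
	s.mu.Lock()
	defer s.mu.Unlock()
	var claimed []string
	for id, st := range s.status {
		if st == "todo" && len(claimed) < limit {
			s.status[id] = "processing" // marked before any other caller can see it
			claimed = append(claimed, id)
		}
	}
	return claimed
}

func main() {
	s := &store{status: map[string]string{"t1": "todo", "t2": "todo"}}
	first := s.claimTasks(10)
	second := s.claimTasks(10)
	fmt.Println(len(first), len(second)) // every task claimed exactly once
}
```

In the real function the lock is the database's row lock (FOR UPDATE SKIP LOCKED), and activity events are emitted only for the rows the UPDATE actually returned.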

In `@supabase/functions/stripe-webhook/index.ts`:
- Around line 165-168: The helper extractPlanFromSubscription currently returns
'free' when the plan metadata is missing which causes immediate downgrades;
change it to return no plan (e.g., undefined/null) instead of 'free' so callers
like updateOrgPlan can decide on downgrade behavior — update the return
expression in extractPlanFromSubscription (and the identical logic elsewhere
noted around lines 243-245) to return the plan only if it exists AND is in
PLAN_LIMITS, otherwise return undefined (or null) so missing metadata does not
trigger an automatic downgrade.
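The "no plan" vs "free plan" distinction maps naturally onto Go's comma-ok idiom. A sketch under the assumption that PLAN_LIMITS is an allowlist of known plan names (the keys below are illustrative):

```go
package main

import "fmt"

// planLimits stands in for the webhook's PLAN_LIMITS allowlist; keys are illustrative.
var planLimits = map[string]bool{"pro": true, "team": true, "enterprise": true}

// extractPlan returns (plan, true) only when subscription metadata carries a
// known plan; (_, false) means "metadata missing or unknown", which callers
// should treat as "make no change" rather than as a downgrade to free.
func extractPlan(metadata map[string]string) (string, bool) {
	plan, ok := metadata["plan"]
	if !ok || !planLimits[plan] {
		return "", false
	}
	return plan, true
}

func main() {
	if plan, ok := extractPlan(map[string]string{"plan": "pro"}); ok {
		fmt.Println(plan)
	}
	if _, ok := extractPlan(map[string]string{}); !ok {
		fmt.Println("no plan in metadata; leaving org unchanged")
	}
}
```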
- Around line 296-303: The current catch in the webhook handler always returns
200 which suppresses transient failures from getOrgByCustomerId() and
updateOrgPlan() so Stripe won't retry; change the error handling so
unexpected/transient errors bubble up or return a 5xx (allowing Stripe retries)
while only acknowledging known non-retryable cases with a 200. Concretely: wrap
calls to getOrgByCustomerId and updateOrgPlan so they throw distinct error types
or attach an isRetryable flag, catch known non-retryable errors and return the
JSON 200 response there, and in the outer catch (the block handling event.type)
return a 500 (or rethrow) for all other errors so Stripe will retry the webhook.
Ensure you reference the event.type in logs and preserve existing logging
behavior.
- Around line 153-158: The activity_log insert currently ignores the Supabase
response so failures are hidden; update the call to capture the returned result
(e.g. const { data, error } = await supabase.from('activity_log').insert({...}))
and if error is present, propagate it (throw or return an HTTP error) so the
Stripe webhook does not return 200 on insert failure; locate the insert near the
variables orgId, action, summary, details and ensure the surrounding webhook
handler (the function processing the Stripe event) surfaces the error to the
caller or logs and returns a non-200 response.

In `@web/app/Livewire/Dashboard/Settings.php`:
- Around line 65-76: Protect deleteAccount by enforcing server-side confirmation
and password re-authentication: in the deleteAccount() method check that the
Livewire flag (showDeleteConfirmation) is set and validate a submitted
current_password, then verify the password against the authenticated user's
hashed password (use Hash::check or Auth::validate) before proceeding to
$user->delete(), logout and redirect; if confirmation is missing or the password
fails validation, abort/throw a validation error and do not perform deletion.
- Around line 69-70: The comment claiming a soft delete is misleading: the
App\Models\User model lacks the SoftDeletes trait and no deleted_at column
exists, so the call to $user->delete() in Livewire\Dashboard\Settings (the
$user->delete() invocation) performs a hard delete; either enable soft deletes
by adding Illuminate\Database\Eloquent\SoftDeletes to App\Models\User and
creating a migration that adds $table->softDeletes() (and run migrations), or
update/remove the comment in Livewire\Dashboard/Settings.php to accurately state
that $user->delete() will permanently remove the user; pick one and make the
corresponding change so code and comment agree.

In `@web/app/Livewire/Dashboard/UsageStats.php`:
- Around line 53-65: The current usage calculations in
Livewire\Dashboard\UsageStats (variables $this->tokensUsed, $this->agentsUsed,
$this->tasksUsed, $this->memoryUsedMb) are scoped to the current user and not
the organization; change these to aggregate organization-level data instead:
query tokens, agents, and tasks through the organisation/team relationship off
the current user (e.g., $user->organization or $user->currentTeam and their
relations) rather than McpToken::where('user_id', ...), teamMemberships(), or
summing user-scoped McpToken.usage_count; compute tokensUsed by counting tokens
that belong to the org, agentsUsed by counting the org's agents relationship,
tasksUsed by summing the org's tasks usage_count, and then derive memoryUsedMb
from that org-scoped tasks count.

In `@web/app/Livewire/Onboarding/CompanySetup.php`:
- Around line 32-33: The code currently falls back to generating a random UUID
for $ownerId when auth()->user()->supabase_user_id is null, which creates
incorrect ownership records; update the logic in CompanySetup (the place where
$ownerId is set and the subsequent team/company creation code referenced at the
other occurrences around lines 38 and 59-61) to instead validate the presence of
auth()->user()->supabase_user_id and fail fast (return a validation/error
response or throw an exception) when it's missing, or explicitly use a real
authenticated user identifier (e.g., auth()->id() or the canonical user UUID) if
that is the intended link—do not generate a new UUID as a silent fallback.
Ensure the guard is added before any use of $ownerId / owner team membership
creation so corrupted ownership cannot be created.

---

Outside diff comments:
In `@supabase/migrations/20260328000002_profiles_organizations.sql`:
- Around line 68-73: Don't edit the already-committed CREATE TABLE in
organizations; instead add a new migration that drops the existing foreign
key(s) to auth.users so environments that already ran
20260328000002_profiles_organizations.sql get the change. Create a new migration
which runs ALTER TABLE public.organizations DROP CONSTRAINT IF EXISTS
<constraint_name>; (and any other FK constraints referenced to auth.users found
around lines 115-123) — locate the constraint names on the existing DB or in the
original migration (e.g., the FK on organizations.owner_id referencing
auth.users) and drop them safely in the follow-up migration.

---

Minor comments:
In `@apps/studio/components/interfaces/Orchestra/OrchestraAgents.tsx`:
- Around line 83-85: Wrap the dayjs formatting call for agent.created_at in a
null/undefined guard so you don't render "Invalid Date"; in OrchestraAgents.tsx
update the TableCell that currently contains
{dayjs(agent.created_at).format('MMM D, YYYY')} to conditionally render a
fallback (e.g., '-' or empty string) when agent.created_at is falsy and only
call dayjs(...).format when agent.created_at exists. Ensure you reference
agent.created_at and the dayjs(...).format expression so the check is applied to
that exact value.

In `@apps/studio/components/interfaces/Orchestra/OrchestraTokens.tsx`:
- Around line 93-95: Guard against undefined token.usage_count before calling
toLocaleString by providing a fallback; update the TableCell rendering (the cell
that currently uses token.usage_count.toLocaleString()) to either coerce with a
default number like (token.usage_count ?? 0).toLocaleString() or conditionally
render a placeholder when null/undefined (e.g., token.usage_count != null ?
token.usage_count.toLocaleString() : '—'), ensuring you reference
token.usage_count and the surrounding TableCell component.

In `@apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts`:
- Around line 4-6: The generateOrchestraMenu function builds URLs using ref but
doesn't handle ref === undefined, causing links like
/project/undefined/orchestra; update generateOrchestraMenu to guard for a
missing ref (e.g., if (!ref) return [] or throw/return a safe default menu) and
avoid constructing baseUrl when ref is falsy, ensuring any references to baseUrl
use the validated ref value so no invalid navigation links are produced.

In `@mcp-server/internal/auth/ratelimit.go`:
- Around line 115-117: The X-RateLimit-Remaining header is computed before the
token is consumed, so change the logic in the handler that calls
rl.Remaining(uc.OrgID, uc.Plan) and rl.Allow(...) to first call
rl.Allow(uc.OrgID, uc.Plan) (or otherwise consume/charge the token via Allow)
and then call rl.Remaining(uc.OrgID, uc.Plan) to compute the post-charge
remaining budget; update where headers are set (X-RateLimit-Limit and
X-RateLimit-Remaining) to use rl.rpmForPlan(uc.Plan) and the post-consumption
remaining value so callers see the correct remaining count, and apply the same
change to the other similar block around lines 119-127 that uses
Allow()/Remaining().
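The ordering fix amounts to charging the token before reading the remaining budget. A minimal stand-in limiter showing the sequence (the real limiter's signatures take orgID and plan):

```go
package main

import "fmt"

// limiter is a minimal stand-in: Allow consumes one token, Remaining reports
// what is left. The point is ordering: charge first, then read Remaining, so
// X-RateLimit-Remaining reflects the request currently being served.
type limiter struct{ tokens int }

func (l *limiter) Allow() bool {
	if l.tokens <= 0 {
		return false
	}
	l.tokens--
	return true
}

func (l *limiter) Remaining() int { return l.tokens }

func main() {
	l := &limiter{tokens: 5}
	if l.Allow() { // consume the token for this request first...
		fmt.Println("X-RateLimit-Remaining:", l.Remaining()) // ...then report 4, not 5
	}
}
```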

In `@mcp-server/internal/realtime/hub.go`:
- Around line 51-74: Unsubscribe currently removes the subscriber and closes
sub.Done but leaves sub.Messages open, which can leave readers blocked; update
Unsubscribe (in Hub.Unsubscribe) to close(sub.Messages) after removing the
subscriber from h.subscribers (while still holding h.mu) and before closing
sub.Done so any goroutines ranging over sub.Messages are unblocked; use the same
removal code path that manipulates h.subscribers[sub.OrgID] and then call
close(sub.Messages) (only if sub.Messages != nil) prior to closing sub.Done to
avoid double-close races.
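The close ordering can be demonstrated with a reduced subscriber shape (field names follow the hub; the mutex-guarded removal from h.subscribers is elided):

```go
package main

import (
	"fmt"
	"sync"
)

// subscriber mirrors the hub's per-subscriber state: Messages carries events,
// Done signals shutdown. Closing Messages before Done unblocks any goroutine
// still ranging over Messages.
type subscriber struct {
	Messages chan string
	Done     chan struct{}
}

func unsubscribe(sub *subscriber) {
	// Removal from h.subscribers would happen here, under the hub mutex.
	if sub.Messages != nil {
		close(sub.Messages) // range loops over Messages now terminate
	}
	close(sub.Done)
}

func main() {
	sub := &subscriber{Messages: make(chan string, 1), Done: make(chan struct{})}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for msg := range sub.Messages { // exits when Messages is closed
			fmt.Println("got:", msg)
		}
	}()
	sub.Messages <- "hello"
	unsubscribe(sub)
	wg.Wait() // returns because the reader was unblocked
	fmt.Println("reader exited cleanly")
}
```

Guarding close with a single removal path (only the goroutine that actually removed the subscriber closes the channels) is what prevents the double-close race the comment mentions.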

In `@web/resources/views/pages/docs.blade.php`:
- Line 42: The doc text in pages/docs.blade.php currently says "55 MCP tools"
but the PR registers 57 tools; verify the actual registered MCP tool count and
update the string "Restart Claude Code and you should see 55 MCP tools
available." (and the other occurrence on the same page) to the correct number
(e.g., 57) so the docs match the codebase. Ensure both mentions on the page are
changed consistently.

In `@web/resources/views/pages/features.blade.php`:
- Around line 44-46: The hero paragraph currently reads "55 MCP tools..." which
conflicts with other PR text claiming "57 tools"; confirm the correct tool count
and update the string in the features.blade.php paragraph (the paragraph
containing "55 MCP tools, AI agent teams, real-time collaboration, and full
self-hosting -- all in one platform.") to the verified number so the marketing
copy is consistent across the site.

In `@web/resources/views/pages/pricing.blade.php`:
- Around line 273-285: The checkmark SVGs inside the table cells (td elements
with class "py-3 px-4 text-center text-brand-cyan" containing
svg.w-5.h-5.mx-auto) lack accessible text; add an accessible label for screen
readers by either giving the svg role="img" and aria-label="Included" or keeping
aria-hidden="true" on the svg and inserting a visually-hidden sibling span
(e.g., class "sr-only") with the text "Included", and apply the same pattern for
the X icons using "Not included" so screen readers convey feature availability.

---

Nitpick comments:
In `@apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.tsx`:
- Around line 14-16: The current extraction in OrchestraLayout uses a brittle
fixed index (pathSegments[4]) to compute page from router.pathname; change it to
find the 'orchestra' segment dynamically and take the next segment (or default
to 'orchestra') — e.g., split router.pathname into pathSegments, locate const
idx = pathSegments.indexOf('orchestra'), then set page = (idx >= 0 &&
pathSegments[idx + 1]) ? pathSegments[idx + 1] : 'orchestra'; ensure you handle
idx === -1 and empty/trailing-slash segments so the fallback remains
'orchestra'.

In `@apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts`:
- Around line 4-22: Add vitest unit tests for the pure transformation function
generateOrchestraMenu in OrchestraLayout.utils.ts: write tests that call
generateOrchestraMenu with a sample ref (e.g., "my-ref") and with undefined, and
assert the returned array shape (two groups with titles "Overview" and
"Manage"), that items exist with the expected keys
('orchestra','agents','tokens'), and that each url is constructed correctly
(baseUrl === `/project/${ref}/orchestra` and `${baseUrl}/agents`,
`${baseUrl}/tokens`) and that each item has an items array; include at least one
snapshot or deep-equality assertion to catch regressions.

In `@mcp-server/cmd/server/main.go`:
- Around line 155-158: Current logic skips both auth and rate limiting when
DATABASE_URL is unset, allowing requests routed via mux.HandleFunc("GET /mcp",
server.HandleSSE) and mux.HandleFunc("POST /mcp", server.HandleMessage) to
bypass rate limits; update the routing so rate limiting is applied
unconditionally even if the auth middleware is disabled: keep the conditional
for attaching auth middleware but always wrap server.HandleSSE and
server.HandleMessage with the rate-limiter middleware (or insert the
rate-limiter into the top-level handler chain) so requests hit the limiter
regardless of DATABASE_URL configuration.

In `@mcp-server/internal/tools/usage.go`:
- Around line 111-125: countRows currently fetches all matching rows via
dbClient.Get and counts them client-side; change it to perform a server-side
count instead (either send a HEAD request with the Prefer: count=exact header
and read the Content-Range header, or execute a SELECT count(*) style query if
db.Client supports it) so you don't transfer/marshal all rows. Update the
implementation in countRows to call the appropriate db.Client method (e.g., a
HEAD variant or a Count-like method) and parse the returned count from the
response headers/body rather than unmarshalling into []json.RawMessage.

In `@web/resources/views/livewire/dashboard/usage-stats.blade.php`:
- Around line 37-42: The progress bar markup in the Livewire view is missing
ARIA attributes; update the outer or inner div used for the visual bar to
include role="progressbar" and set aria-valuenow to the numeric value returned
by $this->getUsagePercentage($tokensUsed, $tokensMax) and aria-valuemin="0" and
aria-valuemax="100" (or compute aria-valuemax from $tokensMax if you prefer
absolute tokens), ensuring the values reference the getUsagePercentage(...) call
(and/or tokensUsed/tokensMax) so screen readers receive current, min and max
values for the progress (keep the existing class and style logic intact).

In `@web/resources/views/pages/features.blade.php`:
- Around line 64-78: The inline $categories array in the features view should be
moved out to a controller or view composer so the data is reusable and testable;
create a provider for the data (e.g., add a method in FeaturesController::index
that builds the $categories array and passes it to the view with return
view('pages.features', compact('categories')) or register a View::composer for
'pages.features' that injects $categories, or load the array from a dedicated
config/data class, then remove the `@php` block from the blade and use the
injected $categories variable.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d320129d-5a7c-4083-9ec4-a57bf631c53e

📥 Commits

Reviewing files that changed from the base of the PR and between d21e8a9 and b2e2f0c.

📒 Files selected for processing (45)
  • apps/studio/components/interfaces/Orchestra/OrchestraAgents.tsx
  • apps/studio/components/interfaces/Orchestra/OrchestraDashboard.tsx
  • apps/studio/components/interfaces/Orchestra/OrchestraTokens.tsx
  • apps/studio/components/interfaces/Sidebar.tsx
  • apps/studio/components/layouts/Navigation/NavigationBar/NavigationBar.utils.test.tsx
  • apps/studio/components/layouts/Navigation/NavigationBar/NavigationBar.utils.tsx
  • apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.tsx
  • apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts
  • apps/studio/components/layouts/ProjectLayout/LayoutHeader/MobileMenuContent/MobileMenuContent.tsx
  • apps/studio/components/layouts/ProjectLayout/LayoutHeader/MobileMenuContent/mobileProductMenuRegistry.tsx
  • apps/studio/data/orchestra/keys.ts
  • apps/studio/data/orchestra/orchestra-active-sessions-query.ts
  • apps/studio/data/orchestra/orchestra-activity-log-query.ts
  • apps/studio/data/orchestra/orchestra-agents-query.ts
  • apps/studio/data/orchestra/orchestra-dashboard-query.ts
  • apps/studio/data/orchestra/orchestra-token-revoke-mutation.ts
  • apps/studio/data/orchestra/orchestra-tokens-query.ts
  • apps/studio/pages/project/[ref]/orchestra/agents.tsx
  • apps/studio/pages/project/[ref]/orchestra/index.tsx
  • apps/studio/pages/project/[ref]/orchestra/tokens.tsx
  • mcp-server/cmd/server/main.go
  • mcp-server/internal/auth/ratelimit.go
  • mcp-server/internal/notifications/slack.go
  • mcp-server/internal/realtime/hub.go
  • mcp-server/internal/tools/slack.go
  • mcp-server/internal/tools/usage.go
  • supabase/functions/cron-agent-trigger/index.ts
  • supabase/functions/cron-session-cleanup/index.ts
  • supabase/functions/stripe-webhook/index.ts
  • supabase/migrations/20260328000002_profiles_organizations.sql
  • web/app/Livewire/Dashboard/Settings.php
  • web/app/Livewire/Dashboard/UsageStats.php
  • web/app/Livewire/Onboarding/CompanySetup.php
  • web/app/Models/McpToken.php
  • web/app/Models/TeamMember.php
  • web/app/Models/User.php
  • web/resources/views/components/layouts/app.blade.php
  • web/resources/views/dashboard/settings.blade.php
  • web/resources/views/dashboard/usage.blade.php
  • web/resources/views/livewire/dashboard/settings.blade.php
  • web/resources/views/livewire/dashboard/usage-stats.blade.php
  • web/resources/views/pages/docs.blade.php
  • web/resources/views/pages/features.blade.php
  • web/resources/views/pages/pricing.blade.php
  • web/routes/web.php

Comment on lines +24 to +37
const { data: metrics, isPending: isMetricsLoading } = useOrchestraDashboardQuery(
{ projectRef: ref },
{ enabled: !!ref }
)

const { data: activityLog, isPending: isActivityLoading } = useOrchestraActivityLogQuery(
{ projectRef: ref },
{ enabled: !!ref }
)

const { data: activeSessions, isPending: isSessionsLoading } = useOrchestraActiveSessionsQuery(
{ projectRef: ref },
{ enabled: !!ref }
)

⚠️ Potential issue | 🟠 Major

Don’t render failed queries as healthy empty states.

All three queries ignore error, then the UI falls back to 0, “No recent activity”, and “No active sessions” whenever data is undefined. A backend failure will look identical to an empty dashboard. Render an error state separately before applying these defaults.

As per coding guidelines, "Follow error handling guidelines from studio-error-handling.instructions.md for error classification and ErrorMatcher usage in Supabase Dashboard code".

Also applies to: 43-89, 99-176

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/interfaces/Orchestra/OrchestraDashboard.tsx` around
lines 24 - 37, The three data hooks (useOrchestraDashboardQuery,
useOrchestraActivityLogQuery, useOrchestraActiveSessionsQuery) currently ignore
their error return values and treat undefined data as an empty/healthy state;
update the component to read the error from each hook, classify it using the
project's ErrorMatcher per studio-error-handling.instructions.md, and render a
dedicated error state/UI for each query (or a combined error banner) before
falling back to empty defaults like 0, “No recent activity”, or “No active
sessions”; ensure you check error and isPending alongside data in the rendering
paths inside OrchestraDashboard so failed backend responses are shown as errors
rather than empty data.

Comment on lines +34 to +41
const handleRevoke = (tokenId: string) => {
if (!ref) return
setRevokingId(tokenId)
revokeToken(
{ projectRef: ref, tokenId },
{ onSettled: () => setRevokingId(null) }
)
}

⚠️ Potential issue | 🟠 Major

Add error handling for the revoke mutation.

The handleRevoke function doesn't handle errors. If the mutation fails, users receive no feedback and may not realize the revoke operation failed.

🛡️ Proposed fix to add error handling
 const handleRevoke = (tokenId: string) => {
   if (!ref) return
   setRevokingId(tokenId)
   revokeToken(
     { projectRef: ref, tokenId },
-    { onSettled: () => setRevokingId(null) }
+    {
+      onSettled: () => setRevokingId(null),
+      onError: (error) => {
+        // Consider using a toast notification
+        console.error('Failed to revoke token:', error)
+      },
+    }
   )
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/interfaces/Orchestra/OrchestraTokens.tsx` around lines
34 - 41, The revoke flow in handleRevoke lacks error handling: when calling
revokeToken({ projectRef: ref, tokenId }, ...), add an onError callback to the
mutation options that clears the revoking state (setRevokingId(null)) and
surfaces user feedback (e.g., show a toast or set an error state) so users know
the revoke failed; ensure you still clear revoking in onSettled or onError to
avoid stuck UI, and reference handleRevoke, revokeToken, setRevokingId and ref
when applying the change.

Comment on lines +107 to +116
{!isRevoked && (
<Button
type="warning"
size="tiny"
loading={revokingId === token.id && isRevoking}
onClick={() => handleRevoke(token.id)}
>
Revoke
</Button>
)}

⚠️ Potential issue | 🟠 Major

Add confirmation dialog before revoking tokens.

Token revocation is a destructive, irreversible operation. Per coding guidelines, destructive operations should require confirmation to prevent accidental data loss.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/interfaces/Orchestra/OrchestraTokens.tsx` around lines
107 - 116, The Revoke button currently calls handleRevoke(token.id) directly;
wrap that action in a user confirmation step so destructive revocations require
explicit confirmation. Change the Button onClick to open a confirmation
dialog/modal (or use window.confirm) that shows a clear message including the
token identifier (token.id or token.name) and only calls handleRevoke(token.id)
when the user confirms; preserve existing loading/disabled behavior (revokingId
and isRevoking) and ensure the dialog is dismissible/cancelable so accidental
clicks do not revoke the token.
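The guard can be expressed as a small pure helper, shown here as a sketch: `confirmFn` and `revokeFn` are hypothetical stand-ins for `window.confirm` and the component's `handleRevoke`, and a real implementation would more likely use a modal component than a blocking confirm.

```typescript
// Sketch of a confirm-before-revoke guard. `confirmFn` / `revokeFn` are
// hypothetical stand-ins for window.confirm and handleRevoke.
function revokeWithConfirmation(
  tokenId: string,
  tokenName: string,
  confirmFn: (message: string) => boolean,
  revokeFn: (id: string) => void
): boolean {
  const confirmed = confirmFn(`Revoke token "${tokenName}"? This cannot be undone.`)
  // Only trigger the destructive mutation after explicit confirmation.
  if (confirmed) revokeFn(tokenId)
  return confirmed
}
```

Keeping the guard as a separate function makes the cancel path easy to unit-test without rendering the button.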

Comment on lines +13 to +19
export async function revokeOrchestraToken({ projectRef, tokenId }: RevokeOrchestraTokenVariables) {
const sql = `
UPDATE mcp_tokens
SET revoked_at = NOW()
WHERE id = '${tokenId}' AND revoked_at IS NULL
RETURNING id
`

⚠️ Potential issue | 🔴 Critical

Parameterize tokenId instead of interpolating it into SQL.

tokenId comes from the client, and splicing it into this UPDATE makes a destructive query injectable. Bind the value or move the revoke into a protected RPC instead of building raw SQL with a template literal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/data/orchestra/orchestra-token-revoke-mutation.ts` around lines
13 - 19, The SQL in revokeOrchestraToken splices tokenId directly into the
query, which is injectable; change the implementation to use a parameterized
query/bound variable for tokenId (and projectRef if used) instead of string
interpolation when building the SQL string that updates mcp_tokens (revoked_at =
NOW()); locate the SQL template in function revokeOrchestraToken and replace the
template literal interpolation with a parameter placeholder and pass tokenId as
a bound parameter to the DB client or, alternatively, move the revoke logic into
a protected stored RPC and call that from revokeOrchestraToken.
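A minimal sketch of the parameterized form, assuming the SQL client accepts a `parameters` array alongside the statement (if `executeSql` only takes a raw string, moving the revoke into a protected Postgres RPC is the safer option):

```typescript
// Sketch: keep user input out of the SQL text entirely and bind it instead.
// The { sql, parameters } shape is an assumption about the DB client's API.
function buildRevokeQuery(tokenId: string): { sql: string; parameters: string[] } {
  return {
    sql: `
      UPDATE mcp_tokens
      SET revoked_at = NOW()
      WHERE id = $1 AND revoked_at IS NULL
      RETURNING id
    `,
    // tokenId travels as a bound value, never as part of the statement text.
    parameters: [tokenId],
  }
}
```

With this shape, a hostile `tokenId` such as `'; DROP TABLE mcp_tokens; --` is just an unmatched id, not executable SQL.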

Comment on lines +21 to +27
const { result } = await executeSql<{ id: string }[]>({
projectRef,
sql,
queryKey: ['orchestra', 'revoke-token', tokenId],
})

return result

⚠️ Potential issue | 🟠 Major

Treat a zero-row revoke as a failure.

If the token is already revoked or the id is invalid, UPDATE ... RETURNING yields an empty result but the mutation still reaches onSuccess and shows a success toast. Check the returned row count and throw when nothing was updated.

Also applies to: 38-44

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/data/orchestra/orchestra-token-revoke-mutation.ts` around lines
21 - 27, The mutation currently returns the result of executeSql without
checking whether any rows were updated, so an UPDATE ... RETURNING that affects
zero rows still counts as success; update the logic in the
orchestra-token-revoke-mutation where executeSql is called (the calls with
queryKey ['orchestra','revoke-token', tokenId] and the similar block at lines
38-44) to inspect the returned result (e.g., result.length or result.rowCount)
and if it is zero/empty, throw an Error (with a clear message like "No token
revoked" or include tokenId) so the mutation treats zero-row updates as failures
and triggers onError instead of onSuccess.
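The zero-row check can be a one-line guard between `executeSql` and the return, sketched here under the assumption that the result is the array of `RETURNING` rows:

```typescript
// Sketch: an empty RETURNING set means nothing was revoked (unknown id or
// already revoked), so throw and let the mutation's onError handle it.
function assertRevoked(
  result: { id: string }[] | undefined,
  tokenId: string
): { id: string }[] {
  if (!result || result.length === 0) {
    throw new Error(`No token revoked for id ${tokenId} (already revoked or not found)`)
  }
  return result
}
```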

Comment on lines +296 to +303
} catch (err) {
console.error(`Error handling ${event.type}:`, err)
// Return 200 to Stripe to prevent retries on application-level errors.
// The error is logged for investigation.
return new Response(JSON.stringify({ received: true, error: 'Processing error logged' }), {
status: 200,
headers: { 'Content-Type': 'application/json' },
})

⚠️ Potential issue | 🔴 Critical

Let Stripe retry transient processing failures.

This catch-all 200 also swallows Supabase/network errors from getOrgByCustomerId() and updateOrgPlan(), so a brief outage can permanently drop a billing state change. Only explicit non-retryable cases should be acknowledged.

Suggested direction
   } catch (err) {
     console.error(`Error handling ${event.type}:`, err)
-    // Return 200 to Stripe to prevent retries on application-level errors.
-    // The error is logged for investigation.
-    return new Response(JSON.stringify({ received: true, error: 'Processing error logged' }), {
-      status: 200,
-      headers: { 'Content-Type': 'application/json' },
-    })
+    if (err instanceof ApplicationError) {
+      return new Response(JSON.stringify({ received: true, error: 'Processing error logged' }), {
+        status: 200,
+        headers: { 'Content-Type': 'application/json' },
+      })
+    }
+
+    return new Response(JSON.stringify({ error: 'Temporary processing failure' }), {
+      status: 500,
+      headers: { 'Content-Type': 'application/json' },
+    })
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/functions/stripe-webhook/index.ts` around lines 296 - 303, The
current catch in the webhook handler always returns 200 which suppresses
transient failures from getOrgByCustomerId() and updateOrgPlan() so Stripe won't
retry; change the error handling so unexpected/transient errors bubble up or
return a 5xx (allowing Stripe retries) while only acknowledging known
non-retryable cases with a 200. Concretely: wrap calls to getOrgByCustomerId and
updateOrgPlan so they throw distinct error types or attach an isRetryable flag,
catch known non-retryable errors and return the JSON 200 response there, and in
the outer catch (the block handling event.type) return a 500 (or rethrow) for
all other errors so Stripe will retry the webhook. Ensure you reference the
event.type in logs and preserve existing logging behavior.
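The retryable/non-retryable split in the suggested direction can be sketched as a marker class plus a status mapper; `ApplicationError` is a hypothetical type for known, permanently-failed events, and everything else (Supabase/network outages) maps to 500 so Stripe redelivers:

```typescript
// Hypothetical marker for non-retryable, application-level failures.
class ApplicationError extends Error {}

// Map a caught error to the HTTP status the webhook should return:
// 200 acknowledges and stops retries; 500 tells Stripe to retry later.
function webhookStatusFor(err: unknown): number {
  return err instanceof ApplicationError ? 200 : 500
}
```

Handlers like `getOrgByCustomerId()` would then throw plain errors on transient failures and `ApplicationError` only for cases where a retry can never succeed.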

Comment on lines +65 to +76
public function deleteAccount(): void
{
$user = auth()->user();

// Soft delete: mark as deleted but preserve data
$user->delete();

auth()->logout();
session()->invalidate();
session()->regenerateToken();

$this->redirect(route('home'));

⚠️ Potential issue | 🟠 Major

Enforce account-deletion confirmation on the server.

deleteAccount() can be called directly without checking showDeleteConfirmation or re-authenticating the user, so the destructive action is only protected by the frontend flow. Validate a server-side confirmation step here, and ideally require the current password before deleting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Livewire/Dashboard/Settings.php` around lines 65 - 76, Protect
deleteAccount by enforcing server-side confirmation and password
re-authentication: in the deleteAccount() method check that the Livewire flag
(showDeleteConfirmation) is set and validate a submitted current_password, then
verify the password against the authenticated user's hashed password (use
Hash::check or Auth::validate) before proceeding to $user->delete(), logout and
redirect; if confirmation is missing or the password fails validation,
abort/throw a validation error and do not perform deletion.

Comment on lines +69 to +70
// Soft delete: mark as deleted but preserve data
$user->delete();

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Expect both the SoftDeletes trait on the model and a deleted_at column in a migration.
sed -n '1,220p' web/app/Models/User.php
rg -n "softDeletes|deleted_at" web/database/migrations

Repository: orchestra-mcp/core

Length of output: 1822


This is a hard delete, not a soft delete. Remove or correct the misleading comment.

The App\Models\User model does NOT use the SoftDeletes trait, and no deleted_at column exists in the database schema. The call to $user->delete() will permanently remove the data, contradicting the comment "// Soft delete: mark as deleted but preserve data". Either implement soft deletes by adding the SoftDeletes trait to the User model and creating a migration with $table->softDeletes(), or remove/correct the comment to reflect the actual hard delete behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Livewire/Dashboard/Settings.php` around lines 69 - 70, The comment
claiming a soft delete is misleading: the App\Models\User model lacks the
SoftDeletes trait and no deleted_at column exists, so the call to
$user->delete() in Livewire\Dashboard\Settings (the $user->delete() invocation)
performs a hard delete; either enable soft deletes by adding
Illuminate\Database\Eloquent\SoftDeletes to App\Models\User and creating a
migration that adds $table->softDeletes() (and run migrations), or update/remove
the comment in Livewire\Dashboard/Settings.php to accurately state that
$user->delete() will permanently remove the user; pick one and make the
corresponding change so code and comment agree.

Comment on lines +53 to +65
// Calculate current usage
$this->tokensUsed = McpToken::where('user_id', $user->id)
->whereNull('revoked_at')
->count();

$this->agentsUsed = $user->teamMemberships()->count();

$this->tasksUsed = McpToken::where('user_id', $user->id)
->whereNull('revoked_at')
->sum('usage_count') ?? 0;

// Memory: approximate from task count
$this->memoryUsedMb = round($this->tasksUsed * 0.5, 1);

⚠️ Potential issue | 🟠 Major

These usage counters don’t match the org quotas above.

The limits are organization-level, but the usage values come from the current user’s tokens, teamMemberships(), and McpToken.usage_count. That undercounts shared usage and makes “agents” / “tasks” represent unrelated records. Aggregate from the actual organization-scoped agent/task/memory sources instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Livewire/Dashboard/UsageStats.php` around lines 53 - 65, The current
usage calculations in Livewire\Dashboard\UsageStats (variables
$this->tokensUsed, $this->agentsUsed, $this->tasksUsed, $this->memoryUsedMb) are
scoped to the current user and not the organization; change these to aggregate
organization-level data instead: query tokens, agents, and tasks through the
organisation/team relationship off the current user (e.g., $user->organization
or $user->currentTeam and their relations) rather than
McpToken::where('user_id', ...), teamMemberships(), or summing user-scoped
McpToken.usage_count; compute tokensUsed by counting tokens that belong to the
org, agentsUsed by counting the org's agents relationship, tasksUsed by summing
the org's tasks usage_count, and then derive memoryUsedMb from that org-scoped
tasks count.

Comment on lines +32 to +33
// owner_id must be UUID (references auth.users). Use supabase_user_id or generate one.
$ownerId = auth()->user()->supabase_user_id ?? (string) \Illuminate\Support\Str::uuid();

⚠️ Potential issue | 🔴 Critical

Don't invent a new owner UUID when the account link is missing.

If supabase_user_id is null, this silently creates an organization and owner team membership tied to a random UUID instead of the authenticated user. That turns a missing identity link into corrupted ownership data.

Safer guard
+use Illuminate\Validation\ValidationException;
+
         // Create organization
-        // owner_id must be UUID (references auth.users). Use supabase_user_id or generate one.
-        $ownerId = auth()->user()->supabase_user_id ?? (string) \Illuminate\Support\Str::uuid();
+        $ownerId = auth()->user()->supabase_user_id;
+        if (! $ownerId) {
+            throw ValidationException::withMessages([
+                'name' => 'Your account is not linked to Supabase yet. Please reconnect and try again.',
+            ]);
+        }

Also applies to: 38-38, 59-61

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Livewire/Onboarding/CompanySetup.php` around lines 32 - 33, The code
currently falls back to generating a random UUID for $ownerId when
auth()->user()->supabase_user_id is null, which creates incorrect ownership
records; update the logic in CompanySetup (the place where $ownerId is set and
the subsequent team/company creation code referenced at the other occurrences
around lines 38 and 59-61) to instead validate the presence of
auth()->user()->supabase_user_id and fail fast (return a validation/error
response or throw an exception) when it's missing, or explicitly use a real
authenticated user identifier (e.g., auth()->id() or the canonical user UUID) if
that is the intended link—do not generate a new UUID as a silent fallback.
Ensure the guard is added before any use of $ownerId / owner team membership
creation so corrupted ownership cannot be created.

… + access tokens API

- Remove REFERENCES auth.users(id) from all 7 migration files
- Drop FK constraints on mcp_tokens at runtime
- ConnectClaude uses supabase_user_id instead of integer id
- Access tokens API queries mcp_tokens table via pg-meta
- Profile API returns real admin data from profiles table

Co-Authored-By: Fady Mondy <info@3x1.io>
…","write"])

Co-Authored-By: Fady Mondy <info@3x1.io>
…ra tables

The core issue: Laravel users have integer IDs but Orchestra tables use UUID.
Added orchestraId() method that returns supabase_user_id or generates UUID.
Updated all Livewire components and TokenService to use orchestraId().

Co-Authored-By: Fady Mondy <info@3x1.io>
Co-Authored-By: Fady Mondy <info@3x1.io>
… table

Co-Authored-By: Fady Mondy <info@3x1.io>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
supabase/migrations/20260328000002_profiles_organizations.sql (1)

68-72: ⚠️ Potential issue | 🟠 Major

Deleting the owner can orphan the organization.

public.user_org_ids() in supabase/migrations/20260328000008_rls_policies.sql:29-38 grants org access via team membership or a matching owner_id. Once owner_id can point at a deleted auth user, a single-owner org can become unreachable through normal RLS. Add reassignment/cleanup on user deletion or keep the FK/cascade here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/migrations/20260328000002_profiles_organizations.sql` around lines
68 - 72, The organizations table's owner_id can point to a deleted auth user and
orphan orgs; add a foreign key on organizations.owner_id referencing
auth.users(id) with an appropriate ON DELETE action (either ON DELETE CASCADE to
remove orgs when their owner is deleted or ON DELETE SET NULL and enforce
reassignment) and/or implement cleanup logic in the user-deletion flow to
reassign or delete affected organizations; update or document the related
function public.user_org_ids() to account for the chosen behavior.
supabase/migrations/20260328000007_notes_specs_github.sql (1)

56-65: ⚠️ Potential issue | 🟠 Major

Don't retain GitHub OAuth credentials after user deletion.

This table still holds access_token_encrypted, but user_id no longer has any delete-time cleanup. A deleted account can now leave a live GitHub credential behind indefinitely. Keep the cascade on this credentials table or add explicit revocation/cleanup tied to auth-user deletion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/migrations/20260328000007_notes_specs_github.sql` around lines 56 -
65, The github_connections table retains access_token_encrypted after a user is
deleted; add automatic cleanup by changing the user_id column to reference the
auth users table with ON DELETE CASCADE (or add an explicit FK/trigger that
deletes rows when the auth user is removed). Update the github_connections DDL
(column user_id) to add a foreign key constraint referencing auth.users(id) ON
DELETE CASCADE (or implement an equivalent trigger/function to revoke/cleanup
credentials on auth-user deletion) so credentials are removed when the user is
deleted.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/studio/pages/api/platform/organizations/`[slug]/audit.ts:
- Line 5: The export currently calls apiWrapper(req, res, handler) without
enabling auth; update the call to pass the auth option so the route requires
authentication (e.g. apiWrapper(req, res, handler, { withAuth: true })) so that
the organization audit handler (handler) is protected.

In `@apps/studio/pages/api/platform/profile/access-tokens.ts`:
- Around line 28-31: The SELECT and UPDATE against public.mcp_tokens are missing
authorization filters and currently expose every unrevoked token; scope both SQL
statements to the caller by adding WHERE clauses that constrain by the current
user and/or organization (e.g. user_id = :currentUserId OR organization_id =
:currentOrgId) and pass those values as query parameters; specifically update
the read query (the SELECT in this file) and the revoke query referenced around
lines 76-80 so they include the same user/org predicate instead of relying on id
alone.
- Around line 54-68: The INSERT into public.mcp_tokens is missing required
columns (user_id, organization_id, token_hash) and currently returns a 200 on
fallback even when the write failed; update the INSERT used where tokenId, name,
tokenPrefix are referenced to include and bind the correct user_id and
organization_id values and store the actual hashed secret into token_hash (the
same value validate_mcp_token() expects), await the DB commit and only return a
success response after the insert succeeds, and on DB error return an
appropriate non-200 error response instead of the current fallback that
fabricates a token.

In `@apps/studio/pages/api/platform/profile/audit.ts`:
- Line 5: The endpoint exports apiWrapper without authentication — update the
default export so apiWrapper is invoked with authentication enabled by passing
the options object { withAuth: true } (i.e., change the call that currently uses
apiWrapper(req, res, handler) to apiWrapper(req, res, handler, { withAuth: true
}) or the correct apiWrapper signature used across the repo); ensure the same
change is applied where the exported symbol handler is passed into apiWrapper so
/api/platform/profile/audit requires auth.

In `@apps/studio/pages/api/platform/profile/index.ts`:
- Around line 30-33: Change the INNER JOIN to a LEFT JOIN in the SQL query that
constructs the admin profile (the query string selecting p.id, p.full_name,
p.username, p.avatar_url, u.email from public.profiles p JOIN auth.users u ...)
so missing auth.users rows do not filter out admin profiles; keep the selected
u.email as nullable and ensure downstream code that reads the query result (the
handler that consumes this query result) treats email as optional (e.g., falls
back to an empty string or placeholder) rather than treating a NULL email as a
missing profile.

In `@supabase/migrations/20260328000003_mcp_tokens.sql`:
- Around line 5-6: The migration removed foreign-key cleanup so deleted
users/orgs retain valid MCP tokens; update the change for the mcp_tokens table
to ensure tokens are removed or revoked when their user or organization is
deleted: either restore foreign keys on mcp_tokens.user_id -> auth.users and
mcp_tokens.organization_id -> organizations with ON DELETE CASCADE, or add
explicit delete/revoke triggers/cleanup hooks that delete or mark tokens revoked
on auth.users and organizations deletions; ensure this aligns with
validate_mcp_token()’s expectation (it authenticates using the token row plus
organization join) so tokens cannot persist for deleted users/orgs.

In `@supabase/migrations/20260328000005_workflows_tasks.sql`:
- Line 47: The column assigned_user_id in the workflows_tasks table currently
stores a UUID without cleanup when a user is deleted; add logic to clear it on
user deletion by either adding a foreign key constraint referencing users(id)
with ON DELETE SET NULL (ALTER TABLE workflows_tasks ADD CONSTRAINT ... FOREIGN
KEY (assigned_user_id) REFERENCES users(id) ON DELETE SET NULL) or, if you must
avoid a FK, implement a DELETE trigger on the users table that updates
workflows_tasks SET assigned_user_id = NULL WHERE assigned_user_id = OLD.id;
ensure you reference the assigned_user_id column and the users.id primary key in
the chosen change.

In `@web/app/Livewire/Onboarding/ConnectClaude.php`:
- Around line 17-21: The code uses a generated fallback UUID for $userId which
isn't persisted, so tokens created by TokenService::generate() can become
orphaned; update the ConnectClaude Livewire component (ConnectClaude.php) to
detect when $user->supabase_user_id is null, generate a stable UUID, assign it
to $user->supabase_user_id, save the User model (persist) and then pass that
persisted value into TokenService::generate(userId: $userId, organizationId:
$user->organization_id) so the stored token references a stable, persisted user
ID.

---

Outside diff comments:
In `@supabase/migrations/20260328000002_profiles_organizations.sql`:
- Around line 68-72: The organizations table's owner_id can point to a deleted
auth user and orphan orgs; add a foreign key on organizations.owner_id
referencing auth.users(id) with an appropriate ON DELETE action (either ON
DELETE CASCADE to remove orgs when their owner is deleted or ON DELETE SET NULL
and enforce reassignment) and/or implement cleanup logic in the user-deletion
flow to reassign or delete affected organizations; update or document the
related function public.user_org_ids() to account for the chosen behavior.

In `@supabase/migrations/20260328000007_notes_specs_github.sql`:
- Around line 56-65: The github_connections table retains access_token_encrypted
after a user is deleted; add automatic cleanup by changing the user_id column to
reference the auth users table with ON DELETE CASCADE (or add an explicit
FK/trigger that deletes rows when the auth user is removed). Update the
github_connections DDL (column user_id) to add a foreign key constraint
referencing auth.users(id) ON DELETE CASCADE (or implement an equivalent
trigger/function to revoke/cleanup credentials on auth-user deletion) so
credentials are removed when the user is deleted.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 806cc046-98b0-447b-b926-c4077dcc5cac

📥 Commits

Reviewing files that changed from the base of the PR and between b2e2f0c and f596744.

📒 Files selected for processing (14)
  • apps/studio/components/interfaces/Account/Preferences/AccountConnections.tsx
  • apps/studio/components/interfaces/Account/Preferences/AccountDeletion.tsx
  • apps/studio/components/interfaces/Account/Preferences/AccountIdentities.tsx
  • apps/studio/pages/api/platform/organizations/[slug]/audit.ts
  • apps/studio/pages/api/platform/profile/access-tokens.ts
  • apps/studio/pages/api/platform/profile/audit.ts
  • apps/studio/pages/api/platform/profile/index.ts
  • supabase/migrations/20260328000002_profiles_organizations.sql
  • supabase/migrations/20260328000003_mcp_tokens.sql
  • supabase/migrations/20260328000004_projects_agents_skills.sql
  • supabase/migrations/20260328000005_workflows_tasks.sql
  • supabase/migrations/20260328000006_memory_activity.sql
  • supabase/migrations/20260328000007_notes_specs_github.sql
  • web/app/Livewire/Onboarding/ConnectClaude.php
✅ Files skipped from review due to trivial changes (3)
  • apps/studio/components/interfaces/Account/Preferences/AccountIdentities.tsx
  • apps/studio/components/interfaces/Account/Preferences/AccountDeletion.tsx
  • apps/studio/components/interfaces/Account/Preferences/AccountConnections.tsx


import apiWrapper from 'lib/api/apiWrapper'

export default (req: NextApiRequest, res: NextApiResponse) => apiWrapper(req, res, handler)

⚠️ Potential issue | 🔴 Critical

Require auth on the organization audit route as well.

At Line 5, this handler also omits { withAuth: true }, making /api/platform/organizations/[slug]/audit accessible without authentication.

🔒 Proposed fix
-export default (req: NextApiRequest, res: NextApiResponse) => apiWrapper(req, res, handler)
+export default (req: NextApiRequest, res: NextApiResponse) =>
+  apiWrapper(req, res, handler, { withAuth: true })
As per coding guidelines `**/*.{ts,tsx,js,jsx}`: “Flag security vulnerabilities including authentication/authorization flaws” and “Identify API contract violations including ... missing auth headers.”
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
export default (req: NextApiRequest, res: NextApiResponse) => apiWrapper(req, res, handler)
export default (req: NextApiRequest, res: NextApiResponse) =>
apiWrapper(req, res, handler, { withAuth: true })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/organizations/`[slug]/audit.ts at line 5, The
export currently calls apiWrapper(req, res, handler) without enabling auth;
update the call to pass the auth option so the route requires authentication
(e.g. apiWrapper(req, res, handler, { withAuth: true })) so that the
organization audit handler (handler) is protected.

Comment on lines +28 to +31
query: `SELECT id, name, token_prefix as token_hash, created_at
FROM public.mcp_tokens
WHERE revoked_at IS NULL
ORDER BY created_at DESC`,

⚠️ Potential issue | 🔴 Critical

Scope both token queries to the caller's org/user.

The read path returns every unrevoked row, and the revoke path updates by id alone. Because neither statement includes a user_id or organization_id predicate, any signed-in user can enumerate and revoke tokens outside their own org. Add authorization filters directly in both SQL statements.

As per coding guidelines: Flag security vulnerabilities including auth bypass.

Also applies to: 76-80

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/access-tokens.ts` around lines 28 -
31, The SELECT and UPDATE against public.mcp_tokens are missing authorization
filters and currently expose every unrevoked token; scope both SQL statements to
the caller by adding WHERE clauses that constrain by the current user and/or
organization (e.g. user_id = :currentUserId OR organization_id = :currentOrgId)
and pass those values as query parameters; specifically update the read query
(the SELECT in this file) and the revoke query referenced around lines 76-80 so
they include the same user/org predicate instead of relying on id alone.
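A sketch of scoping both statements to the caller, where `userId`/`orgId` come from the authenticated session; the column names follow the `mcp_tokens` schema described in this PR and the `{ query, parameters }` shape is an assumption about the pg-meta client:

```typescript
// Sketch: every read and revoke carries the same ownership predicate, so a
// signed-in user can only see or revoke tokens in their own user/org scope.
function scopedTokenQueries(userId: string, orgId: string, tokenId?: string) {
  const list = {
    query: `SELECT id, name, token_prefix, created_at
            FROM public.mcp_tokens
            WHERE revoked_at IS NULL
              AND (user_id = $1 OR organization_id = $2)
            ORDER BY created_at DESC`,
    parameters: [userId, orgId],
  }
  const revoke = tokenId
    ? {
        query: `UPDATE public.mcp_tokens
                SET revoked_at = NOW()
                WHERE id = $3
                  AND (user_id = $1 OR organization_id = $2)
                RETURNING id`,
        parameters: [userId, orgId, tokenId],
      }
    : null
  return { list, revoke }
}
```

Sharing one predicate between the read and the revoke keeps the two paths from drifting apart.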

Comment on lines +54 to +68
query: `INSERT INTO public.mcp_tokens (id, name, token_prefix, created_at)
VALUES ($1, $2, $3, NOW())
RETURNING id, name, token_prefix as token_hash, created_at`,
parameters: [tokenId, name, tokenPrefix],
})

if (error) {
// Fallback if table doesn't exist
return res.status(200).json({
id: tokenId,
name,
token_hash: tokenPrefix,
created_at: new Date().toISOString(),
})
}

⚠️ Potential issue | 🔴 Critical

This create path never persists a usable token.

public.mcp_tokens requires user_id, organization_id, and token_hash, but this INSERT only writes id, name, and token_prefix. That makes the write fail against the current schema, then the fallback still returns a 200 for a token that does not exist. It also never stores the hashed secret that validate_mcp_token() needs to authenticate. Persist the required columns and only return success after the write commits.

As per coding guidelines: Prioritize comments on logic errors and bugs, and flag data loss risks including missing error handling on write operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/access-tokens.ts` around lines 54 -
68, The INSERT into public.mcp_tokens is missing required columns (user_id,
organization_id, token_hash) and currently returns a 200 on fallback even when
the write failed; update the INSERT used where tokenId, name, tokenPrefix are
referenced to include and bind the correct user_id and organization_id values
and store the actual hashed secret into token_hash (the same value
validate_mcp_token() expects), await the DB commit and only return a success
response after the insert succeeds, and on DB error return an appropriate
non-200 error response instead of the current fallback that fabricates a token.
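A sketch of an INSERT that persists every column the schema requires, hashing the secret server-side; the exact algorithm must match whatever `validate_mcp_token()` computes, so the SHA-256 here is an assumption, as is the 8-character prefix convention:

```typescript
import { createHash, randomUUID } from 'crypto'

// Sketch: write user_id, organization_id, and token_hash alongside the
// display fields, so the row both satisfies NOT NULL constraints and can
// later be authenticated by validate_mcp_token().
function buildTokenInsert(userId: string, orgId: string, name: string, secret: string) {
  const tokenId = randomUUID()
  // Assumption: validate_mcp_token() compares a hex SHA-256 of the secret.
  const tokenHash = createHash('sha256').update(secret).digest('hex')
  return {
    query: `INSERT INTO public.mcp_tokens
              (id, user_id, organization_id, name, token_prefix, token_hash, created_at)
            VALUES ($1, $2, $3, $4, $5, $6, NOW())
            RETURNING id, name, token_prefix, created_at`,
    parameters: [tokenId, userId, orgId, name, secret.slice(0, 8), tokenHash],
  }
}
```

The caller should await this insert and return 4xx/5xx on failure rather than fabricating a success payload.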


import apiWrapper from 'lib/api/apiWrapper'

export default (req: NextApiRequest, res: NextApiResponse) => apiWrapper(req, res, handler)

⚠️ Potential issue | 🔴 Critical

Enable authentication for this platform audit endpoint.

At Line 5, apiWrapper is called without { withAuth: true }, so auth is skipped and /api/platform/profile/audit is publicly callable.

🔒 Proposed fix
-export default (req: NextApiRequest, res: NextApiResponse) => apiWrapper(req, res, handler)
+export default (req: NextApiRequest, res: NextApiResponse) =>
+  apiWrapper(req, res, handler, { withAuth: true })
As per coding guidelines `**/*.{ts,tsx,js,jsx}`: “Flag security vulnerabilities including authentication/authorization flaws” and “Identify API contract violations including ... missing auth headers.”
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
export default (req: NextApiRequest, res: NextApiResponse) => apiWrapper(req, res, handler)
export default (req: NextApiRequest, res: NextApiResponse) =>
apiWrapper(req, res, handler, { withAuth: true })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/audit.ts` at line 5, The endpoint
exports apiWrapper without authentication — update the default export so
apiWrapper is invoked with authentication enabled by passing the options object
{ withAuth: true } (i.e., change the call that currently uses apiWrapper(req,
res, handler) to apiWrapper(req, res, handler, { withAuth: true }) or the
correct apiWrapper signature used across the repo); ensure the same change is
applied where the exported symbol handler is passed into apiWrapper so
/api/platform/profile/audit requires auth.

Comment on lines +5 to +6
user_id UUID NOT NULL,
organization_id UUID NOT NULL,

⚠️ Potential issue | 🔴 Critical

Deleted users will keep working MCP tokens.

validate_mcp_token() still authenticates off the token row plus the organization join, so once user_id is no longer cleaned up by auth.users deletion, removing an account no longer revokes its bearer tokens. Dropping the org FK also leaves orphan token rows behind after org deletion. Keep delete-time cleanup for this table or add explicit revoke hooks on user/org deletion before release.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/migrations/20260328000003_mcp_tokens.sql` around lines 5 - 6, The
migration removed foreign-key cleanup so deleted users/orgs retain valid MCP
tokens; update the change for the mcp_tokens table to ensure tokens are removed
or revoked when their user or organization is deleted: either restore foreign
keys on mcp_tokens.user_id -> auth.users and mcp_tokens.organization_id ->
organizations with ON DELETE CASCADE, or add explicit delete/revoke
triggers/cleanup hooks that delete or mark tokens revoked on auth.users and
organizations deletions; ensure this aligns with validate_mcp_token()’s
expectation (it authenticates using the token row plus organization join) so
tokens cannot persist for deleted users/orgs.

assigned_agent_id UUID REFERENCES public.agents(id) ON DELETE SET NULL,
assigned_user_id UUID REFERENCES auth.users(id) ON DELETE SET NULL,
created_by UUID NOT NULL REFERENCES auth.users(id),
assigned_user_id UUID,

⚠️ Potential issue | 🟠 Major

Clear assigned_user_id when the assignee is deleted.

assigned_user_id is live workflow state, not just audit history. Leaving it as a plain UUID means deleted users remain attached to open tasks, so assignee resolution and filtering will keep pointing at a non-existent account. If the FK must stay removed, add explicit cleanup that sets this column to NULL on user deletion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/migrations/20260328000005_workflows_tasks.sql` at line 47, The
column assigned_user_id in the workflows_tasks table currently stores a UUID
without cleanup when a user is deleted; add logic to clear it on user deletion
by either adding a foreign key constraint referencing users(id) with ON DELETE
SET NULL (ALTER TABLE workflows_tasks ADD CONSTRAINT ... FOREIGN KEY
(assigned_user_id) REFERENCES users(id) ON DELETE SET NULL) or, if you must
avoid a FK, implement a DELETE trigger on the users table that updates
workflows_tasks SET assigned_user_id = NULL WHERE assigned_user_id = OLD.id;
ensure you reference the assigned_user_id column and the users.id primary key in
the chosen change.

Comment on lines +17 to 21
$userId = $user->supabase_user_id ?? (string) \Illuminate\Support\Str::uuid();

$result = $tokenService->generate(
userId: $user->id,
userId: $userId,
organizationId: $user->organization_id,

⚠️ Potential issue | 🟠 Major

Persist a stable supabase_user_id before token creation

At Line 17, using a non-persisted Str::uuid() fallback can generate a different user ID on subsequent mounts when supabase_user_id is null. Since TokenService::generate() stores user_id as provided, this can orphan tokens and create identity mismatches.

Suggested fix
+use Illuminate\Support\Str;
+
 public function mount(TokenService $tokenService): void
 {
     $user = auth()->user();
-    $userId = $user->supabase_user_id ?? (string) \Illuminate\Support\Str::uuid();
+    $userId = $user->supabase_user_id;
+    if (!$userId) {
+        $userId = (string) Str::uuid();
+        $user->update(['supabase_user_id' => $userId]);
+    }

     $result = $tokenService->generate(
         userId: $userId,
         organizationId: $user->organization_id,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Livewire/Onboarding/ConnectClaude.php` around lines 17 - 21, The code
uses a generated fallback UUID for $userId which isn't persisted, so tokens
created by TokenService::generate() can become orphaned; update the
ConnectClaude Livewire component (ConnectClaude.php) to detect when
$user->supabase_user_id is null, generate a stable UUID, assign it to
$user->supabase_user_id, save the User model (persist) and then pass that
persisted value into TokenService::generate(userId: $userId, organizationId:
$user->organization_id) so the stored token references a stable, persisted user
ID.

…aId()

Root cause: orchestraId() generated random UUID on every request, so data
never matched. Now:
- Registration assigns a permanent UUID
- orchestraId() auto-persists if missing (for existing users)

Co-Authored-By: Fady Mondy <info@3x1.io>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
web/app/Livewire/Dashboard/DashboardHome.php (1)

30-30: Missing import for TeamMember model.

The TeamMember class is referenced with a fully-qualified name inline. Add it to the imports for consistency with how McpToken is imported.

♻️ Proposed fix

Add the import at the top of the file:

use App\Models\TeamMember;

Then simplify line 30:

-        $this->totalAgents = \App\Models\TeamMember::where('user_id', $user->orchestraId())->count();
+        $this->totalAgents = TeamMember::where('user_id', $user->orchestraId())->count();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Livewire/Dashboard/DashboardHome.php` at line 30, The file uses a
fully-qualified reference to App\Models\TeamMember inline instead of importing
it like McpToken; add an import statement for TeamMember (use
App\Models\TeamMember;) at the top of the DashboardHome class file and replace
the inline reference on the line setting $this->totalAgents (currently using
\App\Models\TeamMember::where(...)) with the short class name
TeamMember::where(...) to match the existing import style.
web/app/Services/TokenService.php (1)

31-31: PostgreSQL array literal lacks proper escaping for special characters.

The scopes are concatenated directly without escaping. If any scope value contains special characters (commas, braces, quotes, backslashes), the PostgreSQL array literal will be malformed or incorrectly parsed.

While the current defaults (read, write) are safe, this becomes a correctness issue if scopes are extended or user-influenced.

♻️ Proposed fix using proper escaping or parameterized casting

Option 1: Escape values properly

-            'scopes' => '{' . implode(',', $scopes) . '}',
+            'scopes' => '{' . implode(',', array_map(fn($s) => '"' . addcslashes($s, '"\\') . '"', $scopes)) . '}',

Option 2: Use Laravel's DB expression with PostgreSQL casting (preferred)

+use Illuminate\Support\Facades\DB;
 ...
-            'scopes' => '{' . implode(',', $scopes) . '}',
+            'scopes' => DB::raw("ARRAY[" . implode(',', array_map(fn($s) => "'" . addslashes($s) . "'", $scopes)) . "]::text[]"),

Option 3: Store as JSON and use a PostgreSQL JSON column instead

-            'scopes' => '{' . implode(',', $scopes) . '}',
+            'scopes' => json_encode($scopes),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Services/TokenService.php` at line 31, The current construction of
the PostgreSQL array literal ('scopes' => '{' . implode(',', $scopes) . '}') in
TokenService uses raw concatenation of $scopes and can break with commas,
braces, quotes or backslashes; update TokenService to build the scopes value
safely by either escaping each scope for PostgreSQL array literals before
joining or, preferably, replace the raw string with a parameterized DB
expression that casts a JSON/text array to text[] (e.g., use Laravel's
DB::raw/DB::query with a bound parameter representing the JSON-encoded $scopes
and cast to text[]), ensuring you reference the 'scopes' key and the $scopes
variable in the change.
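The escaping option called out above can be illustrated language-agnostically. This TypeScript version is a sketch of the escaping rule only, not the PHP fix itself: every element is double-quoted, with backslashes and quotes escaped, so commas, braces, and quotes inside a scope cannot malform the PostgreSQL array literal.

```typescript
// Build a safe PostgreSQL text[] literal from arbitrary scope strings.
// Double-quoting each element neutralizes commas and braces; escaping
// backslashes and double quotes keeps the quoting itself intact.
function toPgArrayLiteral(items: string[]): string {
  const quoted = items.map(
    (s) => '"' + s.replace(/\\/g, '\\\\').replace(/"/g, '\\"') + '"'
  )
  return '{' + quoted.join(',') + '}'
}
```

With the current defaults this yields `{"read","write"}`, and a hostile scope like `a,b` stays a single element instead of splitting into two.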

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b3ef5b20-4a90-41ff-a5ed-598978e8efb1

📥 Commits

Reviewing files that changed from the base of the PR and between f596744 and d12ed4e.

📒 Files selected for processing (8)
  • web/app/Http/Controllers/Auth/RegisterController.php
  • web/app/Livewire/Dashboard/DashboardHome.php
  • web/app/Livewire/Dashboard/UsageStats.php
  • web/app/Livewire/Tokens/TokenList.php
  • web/app/Models/McpToken.php
  • web/app/Models/User.php
  • web/app/Services/TokenService.php
  • web/database/migrations/2026_03_28_100000_add_timezone_language_to_users_table.php
✅ Files skipped from review due to trivial changes (1)
  • web/database/migrations/2026_03_28_100000_add_timezone_language_to_users_table.php
🚧 Files skipped from review as they are similar to previous changes (2)
  • web/app/Models/McpToken.php
  • web/app/Livewire/Dashboard/UsageStats.php

Comment on lines 36 to 40
$result = $tokenService->generate(
userId: auth()->id(),
userId: auth()->user()->orchestraId(),
organizationId: auth()->user()->organization_id ?? '',
name: $this->newTokenName,
);

⚠️ Potential issue | 🟡 Minor

Empty string fallback for organization_id may cause issues.

If organization_id is null, passing an empty string could violate database constraints (if NOT NULL or FK exists) or create semantically incorrect data. An empty string is not equivalent to null in PostgreSQL and may cause unexpected query behavior when filtering by organization.

🛡️ Proposed fix: Require organization or handle null explicitly

Option 1: Throw if organization is missing (recommended if org is required)

+        $user = auth()->user();
+        if (!$user->organization_id) {
+            $this->addError('newTokenName', 'You must belong to an organization to create tokens.');
+            return;
+        }
+
         $tokenService = app(TokenService::class);
         $result = $tokenService->generate(
-            userId: auth()->user()->orchestraId(),
-            organizationId: auth()->user()->organization_id ?? '',
+            userId: $user->orchestraId(),
+            organizationId: $user->organization_id,
             name: $this->newTokenName,
         );

Option 2: Make organizationId nullable in TokenService::generate() if truly optional

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Livewire/Tokens/TokenList.php` around lines 36 - 40, The call to
TokenService::generate in TokenList.php passes auth()->user()->organization_id
?? '' which substitutes an empty string for a missing organization and can
violate DB constraints; update the call to either (a) require the organization
and throw or return a validation error when auth()->user()->organization_id is
null (i.e. validate before calling TokenService::generate and do not call it
without an organization), or (b) pass null instead of '' and update
TokenService::generate signature/logic to accept ?string $organizationId and
handle nulls safely (adjust any DB inserts/queries to use NULL rather than empty
string); locate the invocation in TokenList.php and the generate method in
TokenService to implement the chosen fix.

Comment thread web/app/Models/User.php
Comment on lines +21 to +28
public function orchestraId(): string
{
if (!$this->supabase_user_id) {
$this->supabase_user_id = (string) \Illuminate\Support\Str::uuid();
$this->saveQuietly();
}
return $this->supabase_user_id;
}

⚠️ Potential issue | 🟡 Minor

Potential race condition when lazily generating supabase_user_id.

If two concurrent requests invoke orchestraId() on a user without an existing supabase_user_id, both may generate different UUIDs. One will overwrite the other, potentially orphaning any data (tokens, team memberships) created with the discarded UUID.

Since RegisterController now generates the UUID at registration (line 39), the lazy generation here is a fallback for existing users. Consider using an atomic update to avoid the race.

🛡️ Proposed fix using atomic update
 public function orchestraId(): string
 {
     if (!$this->supabase_user_id) {
-        $this->supabase_user_id = (string) \Illuminate\Support\Str::uuid();
-        $this->saveQuietly();
+        // Atomic update to prevent race conditions
+        $uuid = (string) \Illuminate\Support\Str::uuid();
+        $affected = static::where('id', $this->id)
+            ->whereNull('supabase_user_id')
+            ->update(['supabase_user_id' => $uuid]);
+        
+        // Refresh to get the actual value (ours or another concurrent request's)
+        $this->refresh();
+        
+        // If no row was updated, another request won the race - that's fine
+        if ($affected === 0 && !$this->supabase_user_id) {
+            throw new \RuntimeException('Failed to generate orchestraId');
+        }
     }
     return $this->supabase_user_id;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Models/User.php` around lines 21 - 28, The lazy UUID assignment in
orchestraId() can race under concurrency; replace the current read-modify-save
approach with an atomic conditional database update: generate the UUID, attempt
to set supabase_user_id only where it is NULL using a conditional update via the
users table (referencing the orchestraId() method and
$this->id/$this->supabase_user_id), then if the update affected a row assign the
new UUID to the model instance, otherwise reload the existing supabase_user_id
from the DB and return it; this avoids saveQuietly overwrite races and ensures
only one UUID wins.

Register: Creates user in GoTrue first, gets real UUID, then creates local user
Login: Authenticates via GoTrue signInWithPassword(), stores JWT in session,
  syncs UUID to local user. Backfills supabase_user_id for existing users.
GitHub OAuth: Checks/creates GoTrue user, syncs UUID

New SupabaseAuthService methods:
- signInWithPassword(email, password) — GoTrue token endpoint
- findUserByEmail(email) — Admin API user lookup

Eliminates the dual-auth UUID/integer mismatch problem. All users now
have real Supabase UUIDs that match auth.users.

Co-Authored-By: Fady Mondy <info@3x1.io>
…vel 13)

SDK vendored locally (requires illuminate/support ^11|^12, we're on 13).
Fixed PHP 8.4 deprecation in SupabaseClient.php.

Auth controllers now use Supabase facade:
- Register: Supabase::auth()->signUp() → real GoTrue UUID
- Login: Supabase::auth()->signIn() → JWT in session + UUID sync
- Logout: Supabase::auth()->signOut()

Services updated:
- SupabaseService delegates to Supabase::db()
- SupabaseAuthService delegates to Supabase::auth() where possible
- Admin API calls (createUser, findUserByEmail) kept as direct HTTP

Config: supabase.php published, .env updated with real keys from docker/.env

Co-Authored-By: Fady Mondy <info@3x1.io>
New service_logs table for structured logging from Go/Laravel services.
Orchestra Logs page at /project/[ref]/orchestra/logs with:
- Service tabs (All/Go MCP/Laravel/Orchestra)
- Level filter (Fatal/Error/Warning/Info/Debug)
- Time range selector (1h/6h/24h/7d)
- Auto-refresh (10s) with pause toggle
- Color-coded service + level badges
- Expandable JSON context viewer
- Unions with activity_log for Orchestra events

Added "Observability > Logs" to Orchestra sidebar.

Co-Authored-By: Fady Mondy <info@3x1.io>
…eed data

Removed FK constraints from agents, projects, tasks, notes, specs, skills,
memories, activity_log, decisions, agent_sessions, github_connections tables.
Fixed migration files to not reference auth.users.
Added .mcp.json for Claude Code connection.
Inserted seed agents + activity logs + service logs for demo.

Co-Authored-By: Fady Mondy <info@3x1.io>
- agents query: team → team_id, last_active_at → updated_at
- tokens query: prefix → token_prefix, user_email → NULL
- logs query: add public schema prefix
- .mcp.json: correct Claude Code mcpServers format

Co-Authored-By: Fady Mondy <info@3x1.io>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

🧹 Nitpick comments (1)
web/packages/saeedvir/supabase/src/SupabaseServiceProvider.php (1)

19-25: Bind the class and facade key to the same singleton.

The facade resolves 'supabase', but the class binding below constructs a second SupabaseService instance with its own client and child services. Bind the class once and alias the facade key to it instead.

Suggested change
-        $this->app->singleton('supabase', function ($app) {
-            return new SupabaseService();
-        });
-
-        $this->app->bind('Saeedvir\Supabase\Services\SupabaseService', function ($app) {
-            return new SupabaseService();
-        });
+        $this->app->singleton(SupabaseService::class, fn () => new SupabaseService());
+        $this->app->alias(SupabaseService::class, 'supabase');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/packages/saeedvir/supabase/src/SupabaseServiceProvider.php` around lines
19 - 25, The code currently creates two separate SupabaseService instances: a
singleton under the container key 'supabase' and a separate binding for the
class Saeedvir\Supabase\Services\SupabaseService which constructs another
SupabaseService; change the class binding to reference the same singleton by
aliasing the class to the 'supabase' singleton (or register the singleton using
the class name and alias the 'supabase' key to it) so both the facade key
'supabase' and the type-hint Saeedvir\Supabase\Services\SupabaseService resolve
to the same singleton SupabaseService instance (update the registration where
new SupabaseService() is created to only occur once and add an alias for the
other key).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@web/app/Http/Controllers/Auth/GithubAuthController.php`:
- Around line 65-78: The code is creating/updating a local User even when
$supabaseUserId is null which lets the User model persist a fallback UUID;
instead, require a real Supabase UUID before creating a new local user: if
$supabaseUserId is present, call User::updateOrCreate(...) including
'supabase_user_id' in the attributes (or update existing and backfill as you
already do); if $supabaseUserId is null, do NOT call updateOrCreate — lookup the
user by email (User::where('email', $email)->first()) and only proceed if an
existing user is found (otherwise fail/auth error), so you never create a new
User without a real $supabaseUserId (affecting GithubAuthController and the
update/backfill logic around $supabaseUserId).
- Around line 34-37: The GitHub OAuth flow in GithubAuthController currently
proceeds when Socialite::driver('github')->user() returns a null email; add an
explicit null check immediately after $email = $githubUser->getEmail() and
fail-fast (throw an exception or return a redirect with an error) before any
local user creation or Supabase sync occurs so we never create a local user
without a Supabase UUID; update the method that performs user creation/sync (the
GitHub callback handler in GithubAuthController) to abort when $email is null
and surface a clear error to the caller.

In `@web/app/Http/Controllers/Auth/RegisterController.php`:
- Around line 52-55: The warning currently logs the entire Supabase response
($supabaseResult) in RegisterController; change both Log::warning calls to avoid
serializing the full auth payload and instead log only a narrow, sanitized field
such as an error message or code (e.g. $supabaseResult['error']['message'] or
$supabaseResult['error']['code']) alongside the user's email
($validated['email']); update both occurrences that reference $supabaseResult so
only the specific error string/code and minimal context are included in the log.
- Around line 41-45: The Supabase signUp() call can succeed while the local
User::create() later fails, leaving an orphaned Supabase account; wrap the local
DB write in a try/catch (or DB::transaction) immediately after
Supabase::auth()->signUp(...) and if the local create fails, call the Supabase
admin delete (using the created supabase user id from $supabaseResult->user->id)
to roll back the remote account, rethrow or return a proper error; mirror the
same reconcile/rollback pattern for the other registration path around the block
that includes lines 77-84 so remote and local state remain consistent.

In `@web/app/Services/SupabaseAuthService.php`:
- Around line 96-113: The findUserByEmail() implementation currently GETs
/auth/v1/admin/users and scans the first page, which misses users beyond the
default page size; update the HTTP call in findUserByEmail() to include the
Supabase GoTrue query parameter to search by email (use ?filter=<email> or the
documented filter syntax) instead of iterating the returned users array, and
optionally add per_page=1000 to the request for completeness; ensure you pass
the $email variable into the query string and return the single matching user
(or null) as before.

In `@web/packages/saeedvir/supabase/src/Services/DatabaseService.php`:
- Around line 87-105: The update and delete methods in DatabaseService currently
allow unfiltered requests when $filters is empty; add a guard in the
update(string $table, array $filters, array $data) and the delete(string $table,
array $filters) methods to refuse/abort when $filters is empty (either throw an
InvalidArgumentException or return an error array) so we never call PATCH/DELETE
on /rest/v1/{table} with no predicate; ensure the check occurs before building
$url (or before making the HTTP request) and include a clear error message like
"Refusing unfiltered update/delete: filters required" to make the failure
explicit.
- Around line 30-38: The filter-building logic in DatabaseService.php
concatenates raw strings into $url (inside the foreach over $filters and the
other two similar blocks) which breaks PostgREST encoding rules; update the code
that constructs the query string (the foreach handling $filters, and the two
analogous locations around lines 93–100 and 125–132) to URL-encode all scalar
values (use rawurlencode or http_build_query) and build in.(...) lists with
percent-encoded double quotes around string members (%22value%22) instead of
single quotes; ideally switch to building an associative array of query
parameters and call http_build_query (or encode each operator+value with
rawurlencode) so operators like "in.(%22a%22,%22b%22)" and scalar "eq." values
are properly encoded before appending to $url.

In `@web/packages/saeedvir/supabase/src/Services/RealtimeService.php`:
- Around line 73-75: The JS block currently emits Blade syntax and also
reassigns the global supabase symbol causing a TDZ; replace the Blade-style "{{
config('...') }}" literals with PHP string interpolation (use the PHP
variables/array keys that hold supabase.url and supabase.key so supabaseUrl and
supabaseKey are actual values in the output) and stop shadowing the global
supabase by destructuring its factory instead—extract createClient from the
global (e.g., const { createClient } = supabase) and call it to produce a new
client variable (e.g., client) rather than assigning back to supabase or calling
supabase.createClient directly in a const that shadows the global.

In `@web/packages/saeedvir/supabase/src/Services/StorageService.php`:
- Around line 43-48: The methods upload(), uploadContent(), createSignedUrl(),
publicUrl(), and authenticatedUrl() are embedding raw object keys into URLs; fix
by URL-encoding the object path on a segment-by-segment basis before building
the URL (use rawurlencode on each segment: explode the path by '/', rawurlencode
each segment, then implode with '/'), and use that encoded path when
interpolating into "/storage/v1/object/{$bucket}/{$path}" (and similar URL
templates) so spaces, #, ?, and unicode are correctly encoded; apply the same
transformation wherever $path is used to build signed, public, or authenticated
URLs.
- Around line 27-28: The code currently assigns $fileContents =
file_get_contents($filePath) and $mimeType = mime_content_type($filePath)
without checking for failure; update the StorageService logic that builds the
upload request to first verify file_exists($filePath) and that file_get_contents
returned a non-false value (e.g., throw an exception or return an error when
reading fails), then compute mime_content_type only after a successful read and
if that returns false use a safe default like 'application/octet-stream'; ensure
the validated $fileContents and $mimeType are used when setting the request body
and Content-Type header so no false values are passed to the upload request.

In `@web/packages/saeedvir/supabase/src/Services/SupabaseClient.php`:
- Around line 51-55: The request() method currently json_decodes the response
body unconditionally which returns null for empty POST/PATCH/DELETE responses
and breaks the declared array return type; update the logic in the request()
function (around the $this->client->request(...) call and subsequent
json_decode) to check the response body string first and, if empty, return an
empty array (or explicitly return null and widen the method return type
accordingly), alternatively ensure mutation requests include the "Prefer:
return=representation" header before sending; implement the guard by reading
$response->getBody()->getContents(), returning [] when that string is empty,
otherwise json_decode(..., true) as before.

In `@web/packages/saeedvir/supabase/src/Services/SupabaseService.php`:
- Around line 7-10: The typed properties on SupabaseService ($db, $auth,
$storage, $realtime) are non-nullable but only conditionally initialized in
__construct(); make each property nullable (e.g. ?DatabaseService $db = null,
?AuthService $auth = null, etc.) and update the accessors auth(), db(),
storage(), realtime() to guard access: if the corresponding property is null
throw a clear RuntimeException (or InvalidArgumentException) explaining the
service is disabled/not initialized; keep the existing initialization in
__construct() but assign into the nullable properties when enabled. Ensure you
update all four accessor methods (auth(), db(), storage(), realtime()) to
perform the null check before returning the property.

---

Nitpick comments:
In `@web/packages/saeedvir/supabase/src/SupabaseServiceProvider.php`:
- Around line 19-25: The code currently creates two separate SupabaseService
instances: a singleton under the container key 'supabase' and a separate binding
for the class Saeedvir\Supabase\Services\SupabaseService which constructs
another SupabaseService; change the class binding to reference the same
singleton by aliasing the class to the 'supabase' singleton (or register the
singleton using the class name and alias the 'supabase' key to it) so both the
facade key 'supabase' and the type-hint
Saeedvir\Supabase\Services\SupabaseService resolve to the same singleton
SupabaseService instance (update the registration where new SupabaseService() is
created to only occur once and add an alias for the other key).
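
A minimal sketch of the alias approach described above, assuming the provider's register() method and the Saeedvir\Supabase\Services\SupabaseService class already shown (constructor arguments omitted here):

```php
// Sketch: register one singleton under the class name, then alias the
// facade key 'supabase' to it, so both resolution paths share one instance.
$this->app->singleton(\Saeedvir\Supabase\Services\SupabaseService::class, function ($app) {
    return new \Saeedvir\Supabase\Services\SupabaseService(/* config as before */);
});

$this->app->alias(\Saeedvir\Supabase\Services\SupabaseService::class, 'supabase');
```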
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: af1e25ba-19e6-4cbf-b51d-371600700fde

📥 Commits

Reviewing files that changed from the base of the PR and between d12ed4e and 33d2af2.

📒 Files selected for processing (23)
  • .requests/2026-03-28-studio-pro-features.md
  • .requests/2026-03-28-supabase-laravel-package.md
  • web/app/Http/Controllers/Auth/GithubAuthController.php
  • web/app/Http/Controllers/Auth/LoginController.php
  • web/app/Http/Controllers/Auth/RegisterController.php
  • web/app/Models/User.php
  • web/app/Services/SupabaseAuthService.php
  • web/app/Services/SupabaseService.php
  • web/bootstrap/providers.php
  • web/composer.json
  • web/config/services.php
  • web/config/supabase.php
  • web/packages/saeedvir/supabase/config/supabase.php
  • web/packages/saeedvir/supabase/src/Facades/Supabase.php
  • web/packages/saeedvir/supabase/src/Services/AuthService.php
  • web/packages/saeedvir/supabase/src/Services/DatabaseService.php
  • web/packages/saeedvir/supabase/src/Services/RealtimeService.php
  • web/packages/saeedvir/supabase/src/Services/StorageService.php
  • web/packages/saeedvir/supabase/src/Services/SupabaseClient.php
  • web/packages/saeedvir/supabase/src/Services/SupabaseService.php
  • web/packages/saeedvir/supabase/src/SupabaseServiceProvider.php
  • web/resources/views/dashboard/billing.blade.php
  • web/resources/views/dashboard/team.blade.php
✅ Files skipped from review due to trivial changes (8)
  • web/resources/views/dashboard/team.blade.php
  • web/resources/views/dashboard/billing.blade.php
  • web/bootstrap/providers.php
  • web/composer.json
  • .requests/2026-03-28-studio-pro-features.md
  • web/config/supabase.php
  • .requests/2026-03-28-supabase-laravel-package.md
  • web/packages/saeedvir/supabase/config/supabase.php
🚧 Files skipped from review as they are similar to previous changes (1)
  • web/app/Models/User.php

Comment on lines 34 to +37
$githubUser = Socialite::driver('github')->user();
$email = $githubUser->getEmail();
$name = $githubUser->getName() ?? $githubUser->getNickname();
$randomPassword = Str::random(32);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "GithubAuthController.php" 2>/dev/null

Repository: orchestra-mcp/core

Length of output: 118


🏁 Script executed:

find . -type f -name "User.php" -path "*/Models/*" 2>/dev/null

Repository: orchestra-mcp/core

Length of output: 87


🏁 Script executed:

find . -type f -path "*/Http/Controllers/Auth/*" 2>/dev/null | head -20

Repository: orchestra-mcp/core

Length of output: 346


🏁 Script executed:

wc -l ./web/app/Http/Controllers/Auth/GithubAuthController.php

Repository: orchestra-mcp/core

Length of output: 121


🏁 Script executed:

cat -n ./web/app/Http/Controllers/Auth/GithubAuthController.php

Repository: orchestra-mcp/core

Length of output: 3530


🏁 Script executed:

wc -l ./web/app/Models/User.php

Repository: orchestra-mcp/core

Length of output: 91


🏁 Script executed:

cat -n ./web/app/Models/User.php | head -80

Repository: orchestra-mcp/core

Length of output: 2772


🏁 Script executed:

find . -type f -name "SupabaseAuthService.php" 2>/dev/null

Repository: orchestra-mcp/core

Length of output: 104


🏁 Script executed:

rg "class SupabaseAuthService" -A 30

Repository: orchestra-mcp/core

Length of output: 2473


🏁 Script executed:

cat -n ./web/app/Services/SupabaseAuthService.php

Repository: orchestra-mcp/core

Length of output: 5462


🏁 Script executed:

find . -path "*/database/migrations/*" -name "*.php" -type f | head -20

Repository: orchestra-mcp/core

Length of output: 429


🏁 Script executed:

find . -path "*/database/*" -name "*users*" -type f

Repository: orchestra-mcp/core

Length of output: 296


🏁 Script executed:

cat -n ./web/database/migrations/0001_01_01_000000_create_users_table.php

Repository: orchestra-mcp/core

Length of output: 1877


🏁 Script executed:

cat -n ./web/database/migrations/2026_03_28_000001_add_supabase_fields_to_users_table.php

Repository: orchestra-mcp/core

Length of output: 1118


🌐 Web query:

Laravel Socialite GitHub OAuth getEmail() nullable

💡 Result:

Yes, in Laravel Socialite's GitHub OAuth provider, the getEmail method can return null even with the user:email scope. This is because Socialite fetches user data from GitHub's /user endpoint, which only includes the user's publicly visible email address (null if not set in profile). Private emails require a separate /user/emails endpoint call, which standard Socialite does not make. To get emails reliably, configure your GitHub OAuth app with "Email addresses" read permission, ensure users have a public email or grant permission, or fetch emails manually using the access token ($user->token) via Guzzle to https://api.github.com/user/emails.


Fail fast when GitHub doesn't return an email.

getEmail() is nullable for GitHub accounts without a public email address. Passing null to Supabase sync silently fails and logs a warning, while the local user is still created and logged in without a supabase_user_id. This breaks the stated architectural requirement that "Supabase UUID is always the primary identifier" and creates permanently split identities between the local database and Supabase GoTrue.

Suggested change
         $githubUser = Socialite::driver('github')->user();
         $email = $githubUser->getEmail();
-        $name = $githubUser->getName() ?? $githubUser->getNickname();
+        if (! $email) {
+            return redirect()->route('login')
+                ->withErrors(['email' => 'GitHub did not return an email address for this account.']);
+        }
+
+        $name = $githubUser->getName() ?? $githubUser->getNickname() ?? $email;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Http/Controllers/Auth/GithubAuthController.php` around lines 34 - 37,
The GitHub OAuth flow in GithubAuthController currently proceeds when
Socialite::driver('github')->user() returns a null email; add an explicit null
check immediately after $email = $githubUser->getEmail() and fail-fast (throw an
exception or return a redirect with an error) before any local user creation or
Supabase sync occurs so we never create a local user without a Supabase UUID;
update the method that performs user creation/sync (the GitHub callback handler
in GithubAuthController) to abort when $email is null and surface a clear error
to the caller.

Comment on lines +65 to +78
        // Create or update local Laravel user
        $user = User::updateOrCreate(
-            ['email' => $githubUser->getEmail()],
-            [
-                'name' => $githubUser->getName() ?? $githubUser->getNickname(),
-                'password' => Hash::make(Str::random(24)),
-            ]
+            ['email' => $email],
+            array_filter([
+                'name' => $name,
+                'password' => Hash::make($randomPassword),
+                'supabase_user_id' => $supabaseUserId,
+            ]),
        );

+        // If local user existed but had no supabase_user_id, backfill it
+        if ($supabaseUserId && ! $user->supabase_user_id) {
+            $user->update(['supabase_user_id' => $supabaseUserId]);
+        }

⚠️ Potential issue | 🟠 Major

Require the real Supabase UUID before updateOrCreate().

If $supabaseUserId is null, array_filter() silently drops the field and the callback still authenticates a local user. web/app/Models/User.php:19-39 then persists a fallback UUID, which splits any Orchestra data created in that session from the eventual GoTrue identity.

Suggested change
-        $user = User::updateOrCreate(
-            ['email' => $email],
-            array_filter([
-                'name' => $name,
-                'password' => Hash::make($randomPassword),
-                'supabase_user_id' => $supabaseUserId,
-            ]),
-        );
-
-        // If local user existed but had no supabase_user_id, backfill it
-        if ($supabaseUserId && ! $user->supabase_user_id) {
-            $user->update(['supabase_user_id' => $supabaseUserId]);
-        }
+        if (! $supabaseUserId) {
+            return redirect()->route('login')
+                ->withErrors(['email' => 'Could not link your GitHub account. Please try again.']);
+        }
+
+        $user = User::updateOrCreate(
+            ['email' => $email],
+            [
+                'name' => $name ?? $email,
+                'password' => Hash::make($randomPassword),
+                'supabase_user_id' => $supabaseUserId,
+            ],
+        );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Http/Controllers/Auth/GithubAuthController.php` around lines 65 - 78,
The code is creating/updating a local User even when $supabaseUserId is null
which lets the User model persist a fallback UUID; instead, require a real
Supabase UUID before creating a new local user: if $supabaseUserId is present,
call User::updateOrCreate(...) including 'supabase_user_id' in the attributes
(or update existing and backfill as you already do); if $supabaseUserId is null,
do NOT call updateOrCreate — lookup the user by email (User::where('email',
$email)->first()) and only proceed if an existing user is found (otherwise
fail/auth error), so you never create a new User without a real $supabaseUserId
(affecting GithubAuthController and the update/backfill logic around
$supabaseUserId).

Comment on lines +41 to +45
$supabaseResult = Supabase::auth()->signUp(
$validated['email'],
$validated['password'],
['data' => ['full_name' => $validated['name']]],
);

⚠️ Potential issue | 🟠 Major

Recover the local write after GoTrue succeeds.

Line 36's uniqueness check does not close the race between signUp() and User::create(). If the local insert loses that race or the DB write fails, registration 500s after the Supabase account already exists. Add a reconcile or rollback path around the local write.

Also applies to: 77-84

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Http/Controllers/Auth/RegisterController.php` around lines 41 - 45,
The Supabase signUp() call can succeed while the local User::create() later
fails, leaving an orphaned Supabase account; wrap the local DB write in a
try/catch (or DB::transaction) immediately after Supabase::auth()->signUp(...)
and if the local create fails, call the Supabase admin delete (using the created
supabase user id from $supabaseResult->user->id) to roll back the remote
account, rethrow or return a proper error; mirror the same reconcile/rollback
pattern for the other registration path around the block that includes lines
77-84 so remote and local state remain consistent.
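
One possible shape for the reconcile path, sketched under the assumption that the package exposes an admin user-deletion call (the deleteUser() name and the $supabaseResult array shape here are hypothetical, not a confirmed API):

```php
try {
    $user = User::create([
        'name' => $validated['name'],
        'email' => $validated['email'],
        'password' => Hash::make($validated['password']),
        'supabase_user_id' => $supabaseResult['user']['id'],
    ]);
} catch (\Throwable $e) {
    // Local write lost the race or failed: remove the remote GoTrue
    // account so the next registration attempt can succeed cleanly.
    Supabase::auth()->deleteUser($supabaseResult['user']['id']); // hypothetical admin call
    throw $e;
}
```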

Comment on lines +52 to +55
Log::warning('Supabase SDK user registration failed', [
'email' => $validated['email'],
'response' => $supabaseResult,
]);

⚠️ Potential issue | 🟠 Major

Sanitize the Supabase failure logs.

These warning blocks serialize the full auth payload. Raw identity-provider responses should not be persisted to logs; keep only the narrow error string or code.

Suggested change
-            Log::warning('Supabase SDK user registration failed', [
-                'email' => $validated['email'],
-                'response' => $supabaseResult,
-            ]);
+            Log::warning('Supabase SDK user registration failed', [
+                'email' => $validated['email'],
+                'error' => data_get($supabaseResult, 'error.message')
+                    ?? data_get($supabaseResult, 'message')
+                    ?? 'unknown',
+            ]);

Apply the same sanitization to the second warning block.

Also applies to: 67-70

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Http/Controllers/Auth/RegisterController.php` around lines 52 - 55,
The warning currently logs the entire Supabase response ($supabaseResult) in
RegisterController; change both Log::warning calls to avoid serializing the full
auth payload and instead log only a narrow, sanitized field such as an error
message or code (e.g. $supabaseResult['error']['message'] or
$supabaseResult['error']['code']) alongside the user's email
($validated['email']); update both occurrences that reference $supabaseResult so
only the specific error string/code and minimal context are included in the log.

Comment on lines +96 to +113
$response = \Illuminate\Support\Facades\Http::withHeaders([
'apikey' => $this->serviceKey,
'Authorization' => 'Bearer '.$this->serviceKey,
])->get(rtrim($this->baseUrl, '/').'/auth/v1/admin/users');

if ($response->failed()) {
return null;
}

$users = $response->json('users', []);

foreach ($users as $user) {
if (($user['email'] ?? '') === $email) {
return $user;
}
}

return null;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does Supabase GoTrue GET /auth/v1/admin/users paginate results, and what pagination or search parameters are available for locating a user by email?

💡 Result:

Yes, the Supabase GoTrue GET /auth/v1/admin/users endpoint paginates results by default (50 users per page). Available pagination parameters are page and per_page (or perPage in SDKs). There is also a filter parameter that supports searching by email (e.g., ?filter=email@example.com for partial matches on email). For exact email lookup, use a high per_page value (e.g., 1000) combined with filter, or create a custom RPC function for efficiency with large user bases, as direct filtering may have limitations and pagination applies to filtered results.


Use the Supabase filter parameter for email lookup instead of scanning the first page.

findUserByEmail() fetches /auth/v1/admin/users without pagination parameters. The Supabase GoTrue admin API paginates by default (50 users per page), so users beyond the first page will appear missing—breaking the OAuth and backfill flows in this PR. Use the filter parameter to search by email (e.g., ?filter=email@example.com), optionally combined with per_page=1000 for a more complete initial fetch. Alternatively, increase per_page as a simpler interim fix, but filtering by email is more reliable and efficient.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/app/Services/SupabaseAuthService.php` around lines 96 - 113, The
findUserByEmail() implementation currently GETs /auth/v1/admin/users and scans
the first page, which misses users beyond the default page size; update the HTTP
call in findUserByEmail() to include the Supabase GoTrue query parameter to
search by email (use ?filter=<email> or the documented filter syntax) instead of
iterating the returned users array, and optionally add per_page=1000 to the
request for completeness; ensure you pass the $email variable into the query
string and return the single matching user (or null) as before.
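
A sketch of the fix under the pagination behavior confirmed above, assuming Laravel's Http::get() second argument (a query-parameter array) for passing filter and per_page:

```php
// Sketch: let GoTrue filter by email server-side instead of scanning page 1.
$response = \Illuminate\Support\Facades\Http::withHeaders([
    'apikey' => $this->serviceKey,
    'Authorization' => 'Bearer '.$this->serviceKey,
])->get(rtrim($this->baseUrl, '/').'/auth/v1/admin/users', [
    'filter' => $email,   // partial match, narrows the result set
    'per_page' => 1000,   // defensive: filtered results are still paginated
]);

if ($response->failed()) {
    return null;
}

// The filter is a partial match, so still verify exact equality.
foreach ($response->json('users', []) as $user) {
    if (($user['email'] ?? '') === $email) {
        return $user;
    }
}

return null;
```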

Comment on lines +73 to +75
const supabaseUrl = "{{ config('supabase.url') }}";
const supabaseKey = "{{ config('supabase.key') }}";
const supabase = supabase.createClient(supabaseUrl, supabaseKey);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In Laravel, are {{ ... }} expressions inside a PHP string returned at runtime evaluated by Blade, and in JavaScript does const supabase = supabase.createClient(...) hit the temporal dead zone because it shadows the global supabase identifier?

💡 Result:

No, {{ ... }} expressions inside a PHP string (like double-quoted strings, heredoc, or nowdoc) in a Laravel Blade template are NOT evaluated by Blade at compile time. Blade parses the entire .blade.php file as text and replaces standalone {{ }} with PHP echo statements (e.g., <?php echo e($var); ?>), but when {{ }} appears inside PHP string delimiters (which become PHP string literals after parsing), it remains literal text like "{{ $var }}" in the compiled PHP. Blade does not perform nested parsing or interpolation inside PHP strings. To output a variable inside a PHP string in Blade, use PHP variable interpolation directly, e.g. $jsCode = "alert('Hello, {$name}');"; or concatenation: $jsCode = 'alert("Hello, ' . $name . '");'; For complex cases like Blade components in strings, use Blade::render explicitly.

Yes, in JavaScript, const supabase = supabase.createClient(...) hits the Temporal Dead Zone (TDZ) if there's a global supabase object (common when loading @supabase/supabase-js via CDN as a <script> tag, which exposes a global supabase namespace containing createClient). The const declaration hoists but is uninitialized until the line executes, so referencing supabase (the same name) on the right-hand side throws a ReferenceError due to the TDZ. Official Supabase docs and examples recommend destructuring to avoid this: const { createClient } = supabase; const client = createClient(url, key); This uses the global supabase without shadowing it during evaluation.


🏁 Script executed:

# First, check if the file exists and its exact location/extension
find . -type f -name "RealtimeService.php" 2>/dev/null | head -20

Repository: orchestra-mcp/core

Length of output: 127


🏁 Script executed:

# Once located, read the file around lines 73-75 to confirm the code snippet
cat -n web/packages/saeedvir/supabase/src/Services/RealtimeService.php | sed -n '60,85p'

Repository: orchestra-mcp/core

Length of output: 1090


🏁 Script executed:

# Also check the overall file structure to understand what it's doing
wc -l web/packages/saeedvir/supabase/src/Services/RealtimeService.php

Repository: orchestra-mcp/core

Length of output: 128


This JS snippet has two critical bugs that prevent it from executing.

The {{ config(...) }} syntax on lines 73-74 is Blade template syntax, but this is a plain .php file returning a raw string via heredoc. Blade only processes .blade.php files at compile time, so the browser will receive literal {{ config('supabase.url') }} text instead of the actual config values. Additionally, line 75 declares const supabase = supabase.createClient(...), which shadows the global supabase object exposed by the CDN script on line 71, triggering the Temporal Dead Zone before createClient can be accessed—throwing a ReferenceError.

Fix by using PHP variable interpolation (e.g., "{$config['supabase.url']}") for lines 73-74 and destructuring the global object on line 75 (e.g., const { createClient } = supabase; const client = createClient(...)).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/packages/saeedvir/supabase/src/Services/RealtimeService.php` around lines
73 - 75, The JS block currently emits Blade syntax and also reassigns the global
supabase symbol causing a TDZ; replace the Blade-style "{{ config('...') }}"
literals with PHP string interpolation (use the PHP variables/array keys that
hold supabase.url and supabase.key so supabaseUrl and supabaseKey are actual
values in the output) and stop shadowing the global supabase by destructuring
its factory instead—extract createClient from the global (e.g., const {
createClient } = supabase) and call it to produce a new client variable (e.g.,
client) rather than assigning back to supabase or calling supabase.createClient
directly in a const that shadows the global.
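
A sketch of both fixes together, assuming the snippet is emitted from plain PHP via heredoc with config('supabase.url') / config('supabase.key') as the value sources (the CDN URL is illustrative):

```php
$url = config('supabase.url');
$key = config('supabase.key');

return <<<HTML
<script src="https://cdn.jsdelivr.net/npm/@supabase/supabase-js@2"></script>
<script>
    // Destructure the CDN global instead of shadowing it with `const supabase = ...`
    const { createClient } = supabase;
    const client = createClient("{$url}", "{$key}");
</script>
HTML;
```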

Comment on lines +27 to +28
$fileContents = file_get_contents($filePath);
$mimeType = mime_content_type($filePath);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In PHP, can file_get_contents() and mime_content_type() return false for unreadable or unknown files, and how should callers handle that before building request headers and bodies?

💡 Result:

Yes, both file_get_contents and mime_content_type can return false for unreadable or unknown files. file_get_contents returns false on failure, such as when the file cannot be found, is unreadable due to permissions, or other errors like invalid offsets or stream failures; an E_WARNING is generated for issues like missing files or failed seeks. mime_content_type returns false on failure, such as when the file cannot be accessed or analyzed (e.g., unreadable file, invalid path), and emits an E_WARNING.

Callers should handle these by checking the return values with strict equality (=== false) before using the results to build request headers or bodies. Do not use loose comparisons (== false), because an empty string (valid for an empty file from file_get_contents) evaluates to false loosely but is not an error. Always verify both functions succeed and log/handle warnings appropriately. Example handling:

$contents = file_get_contents($filename);
if ($contents === false) {
    // Handle error: file unreadable or missing
    return;
}
$mime = mime_content_type($filename);
if ($mime === false) {
    // Handle error: MIME detection failed
    return;
}
// Safe to use $contents and $mime for headers/bodies
$headers = ['Content-Type: ' . $mime];
$body = $contents;


🏁 Script executed:

# First, locate and read the StorageService.php file
fd -i 'StorageService.php' --type f

Repository: orchestra-mcp/core

Length of output: 124


🏁 Script executed:

# Read the StorageService.php file to understand the context and usage
cat -n web/packages/saeedvir/supabase/src/Services/StorageService.php

Repository: orchestra-mcp/core

Length of output: 6961


Fail fast when the source file can't be read.

file_get_contents() and mime_content_type() return false on failure (missing/unreadable file, permission denied, etc.). Currently, false is passed directly to the request body (line 46) and Content-Type header (line 31), creating an invalid request. Add guards to validate the file before reading, and fall back to a default MIME type only after a successful read.

📦 Minimal guard
+        if (! is_file($filePath) || ! is_readable($filePath)) {
+            throw new \InvalidArgumentException("Unable to read file [{$filePath}]");
+        }
+
         $fileContents = file_get_contents($filePath);
-        $mimeType = mime_content_type($filePath);
+        $mimeType = mime_content_type($filePath) ?: 'application/octet-stream';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/packages/saeedvir/supabase/src/Services/StorageService.php` around lines
27 - 28, The code currently assigns $fileContents = file_get_contents($filePath)
and $mimeType = mime_content_type($filePath) without checking for failure;
update the StorageService logic that builds the upload request to first verify
file_exists($filePath) and that file_get_contents returned a non-false value
(e.g., throw an exception or return an error when reading fails), then compute
mime_content_type only after a successful read and if that returns false use a
safe default like 'application/octet-stream'; ensure the validated $fileContents
and $mimeType are used when setting the request body and Content-Type header so
no false values are passed to the upload request.

Comment on lines +43 to +48
return $this->client->raw(
'POST',
"/storage/v1/object/{$bucket}/{$path}",
$fileContents,
$headers
);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

When calling Supabase Storage REST endpoints, should object keys in /storage/v1/object/{bucket}/{path} be URL-encoded segment-by-segment, including keys with spaces or #?

💡 Result:

Yes, object keys (paths) in the /storage/v1/object/{bucket}/{path} endpoint must be URL-encoded segment-by-segment when they contain special characters like spaces or #. This follows standard HTTP URL path encoding rules where path segments are percent-encoded individually (e.g., space as %20, # as %23), as unencoded special characters can be misinterpreted by clients, servers, or proxies (e.g., # as URL fragment).


🏁 Script executed:

find . -type f -name "StorageService.php" | head -20

Repository: orchestra-mcp/core

Length of output: 126


🏁 Script executed:

wc -l ./web/packages/saeedvir/supabase/src/Services/StorageService.php

Repository: orchestra-mcp/core

Length of output: 130


🏁 Script executed:

cat -n ./web/packages/saeedvir/supabase/src/Services/StorageService.php

Repository: orchestra-mcp/core

Length of output: 6961


Encode object keys before embedding them in storage URLs.

These methods interpolate raw object paths into HTTP URLs without URL encoding. Keys containing spaces, #, ?, or unicode will be routed incorrectly. The Supabase Storage REST endpoints require segment-by-segment URL encoding for path parameters. The bug affects upload, signed URL creation, and the public/authenticated URL builders.

Affected locations:

  • upload() at lines 43–48
  • uploadContent() at lines 76–81
  • createSignedUrl() at line 148
  • publicUrl() at line 164
  • authenticatedUrl() at line 176
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/packages/saeedvir/supabase/src/Services/StorageService.php` around lines
43 - 48, The methods upload(), uploadContent(), createSignedUrl(), publicUrl(),
and authenticatedUrl() are embedding raw object keys into URLs; fix by
URL-encoding the object path on a segment-by-segment basis before building the
URL (use rawurlencode on each segment: explode the path by '/', rawurlencode
each segment, then implode with '/'), and use that encoded path when
interpolating into "/storage/v1/object/{$bucket}/{$path}" (and similar URL
templates) so spaces, #, ?, and unicode are correctly encoded; apply the same
transformation wherever $path is used to build signed, public, or authenticated
URLs.
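
The segment-by-segment encoding can be factored into a small helper (encodePath() is a hypothetical name for this sketch):

```php
// Encode each path segment while preserving the "/" separators.
private function encodePath(string $path): string
{
    return implode('/', array_map('rawurlencode', explode('/', $path)));
}
```

Used as "/storage/v1/object/{$bucket}/{$this->encodePath($path)}", a key like reports/Q1 #final.pdf becomes reports/Q1%20%23final.pdf, so the # can no longer be misread as a URL fragment.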

Comment on lines +51 to +55
while ($attempt <= $retries) {
try {
$response = $this->client->request($method, $uri, $options);
return json_decode($response->getBody()->getContents(), true);
} catch (RequestException $e) {

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In Supabase/PostgREST, what is the default response body for POST/PATCH/DELETE requests, and is Prefer: return=representation required to get JSON rows back?

💡 Result:

In Supabase/PostgREST, the default response body for POST, PATCH, and DELETE requests is empty (no JSON rows returned). The requests succeed with appropriate HTTP status codes (e.g., 201 Created for POST, 204 No Content for PATCH/DELETE), but without a body unless specified otherwise. To get the affected JSON rows (full representation) back in the response body, the Prefer: return=representation header is required. Without it, or with Prefer: return=minimal, no body is returned. For POST (if the table has a primary key), omitting the Prefer header may return a Location header pointing to the new resource, but still no body.


Empty response bodies from POST/PATCH/DELETE will cause return-type violations.

request() unconditionally json_decodes the response body and declares an array return type, but PostgREST mutations return empty bodies by default unless the caller sends Prefer: return=representation. Since json_decode('') yields null (not array), successful insert/update/delete operations will raise a PHP TypeError instead of returning normally.

Either guard against empty responses before decoding, always include Prefer: return=representation in mutation requests, or allow null in the return type.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/packages/saeedvir/supabase/src/Services/SupabaseClient.php` around lines
51 - 55, The request() method currently json_decodes the response body
unconditionally which returns null for empty POST/PATCH/DELETE responses and
breaks the declared array return type; update the logic in the request()
function (around the $this->client->request(...) call and subsequent
json_decode) to check the response body string first and, if empty, return an
empty array (or explicitly return null and widen the method return type
accordingly), alternatively ensure mutation requests include the "Prefer:
return=representation" header before sending; implement the guard by reading
$response->getBody()->getContents(), returning [] when that string is empty,
otherwise json_decode(..., true) as before.
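
The guard described in the prompt is small; a sketch against the request() loop shown above, choosing the return-[] variant:

```php
$response = $this->client->request($method, $uri, $options);
$body = $response->getBody()->getContents();

// PostgREST mutations return an empty body unless the caller sends
// "Prefer: return=representation", and json_decode('') yields null,
// which would violate the declared array return type.
if ($body === '') {
    return [];
}

return json_decode($body, true);
```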

Comment on lines +7 to +10
public DatabaseService $db;
public AuthService $auth;
public StorageService $storage;
public RealtimeService $realtime;

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "SupabaseService.php" | head -20

Repository: orchestra-mcp/core

Length of output: 166


🏁 Script executed:

wc -l ./web/packages/saeedvir/supabase/src/Services/SupabaseService.php

Repository: orchestra-mcp/core

Length of output: 130


🏁 Script executed:

cat -n ./web/packages/saeedvir/supabase/src/Services/SupabaseService.php

Repository: orchestra-mcp/core

Length of output: 3092


Guard disabled services before accessing typed properties.

Non-nullable typed properties db, auth, storage, and realtime (lines 7–10) are conditionally initialized in __construct() based on config flags (lines 18–32). Calling accessor methods when a service is disabled will fatally error with "Typed property must not be accessed before initialization."

Make properties nullable with null defaults, then guard accessors:

Suggested change
-    public DatabaseService $db;
-    public AuthService $auth;
-    public StorageService $storage;
-    public RealtimeService $realtime;
+    public ?DatabaseService $db = null;
+    public ?AuthService $auth = null;
+    public ?StorageService $storage = null;
+    public ?RealtimeService $realtime = null;
     public function auth(): AuthService
     {
-        return $this->auth;
+        return $this->auth ?? throw new \LogicException('Supabase auth service is disabled.');
     }

Apply the same pattern to db(), storage(), and realtime() (lines 56–75).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/packages/saeedvir/supabase/src/Services/SupabaseService.php` around lines
7 - 10, The typed properties on SupabaseService ($db, $auth, $storage,
$realtime) are non-nullable but only conditionally initialized in __construct();
make each property nullable (e.g. ?DatabaseService $db = null, ?AuthService
$auth = null, etc.) and update the accessors auth(), db(), storage(), realtime()
to guard access: if the corresponding property is null throw a clear
RuntimeException (or InvalidArgumentException) explaining the service is
disabled/not initialized; keep the existing initialization in __construct() but
assign into the nullable properties when enabled. Ensure you update all four
accessor methods (auth(), db(), storage(), realtime()) to perform the null check
before returning the property.

…cs/deletion

Profile: PATCH endpoint already exists, verified working
Tokens: Split into access-tokens/index.ts + [id].ts for proper DELETE routing
Token CREATE: Return {token, token_alias} matching frontend schema
Identities: Fallback to profile email for self-hosted (skip GoTrue query)
Analytics: Hidden for self-hosted (causes ad-blocker error)
Danger Zone: Hidden for self-hosted (submits Supabase support ticket)

Co-Authored-By: Fady Mondy <info@3x1.io>
Problem: getSession() ran during SSR where localStorage is unavailable,
returned null, and triggered redirect to login before client hydration.

Fix:
- orch-auth.tsx: Skip auth init during SSR (typeof window === 'undefined')
- withAuth.tsx: Add isMounted guard to prevent redirect before hydration

Flow: SSR keeps isLoading=true → no redirects → client hydrates → reads
session from localStorage → auth state resolves correctly.
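The guard pattern can be sketched as follows (the function name and storage key are assumptions, not the actual orch-auth.tsx code):

```typescript
// Minimal sketch of the SSR guard described above: during server-side
// rendering there is no window (and thus no localStorage), so return null
// instead of triggering a redirect.
function getStoredSession(): string | null {
  const w = (globalThis as any).window
  if (typeof w === 'undefined') return null // SSR: localStorage is unavailable
  return w.localStorage.getItem('orchestra-session')
}
```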

Co-Authored-By: Fady Mondy <info@3x1.io>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

♻️ Duplicate comments (4)
apps/studio/pages/api/platform/profile/index.ts (1)

32-35: ⚠️ Potential issue | 🟠 Major

Use LEFT JOIN for auth.users to handle missing user records.

The inner JOIN excludes admin profiles that lack a corresponding auth.users row, causing the query to return no rows and the handler to fall back to placeholder data.

🐛 Suggested fix
     query: `SELECT p.id, p.full_name, p.username, p.avatar_url, u.email
             FROM public.profiles p
-            JOIN auth.users u ON u.id = p.id
+            LEFT JOIN auth.users u ON u.id = p.id
             WHERE p.is_admin = true LIMIT 1`,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/index.ts` around lines 32 - 35, The
SQL in apps/studio/pages/api/platform/profile/index.ts uses an inner JOIN which
excludes admin profiles without an auth.users row; change the JOIN in the query
string (the `SELECT p.id, p.full_name, p.username, p.avatar_url, u.email ...
JOIN auth.users u ON u.id = p.id`) to a LEFT JOIN so admin profiles are returned
even when u.email is missing, and ensure downstream code tolerates a nullable
u.email value.
apps/studio/pages/api/platform/profile/access-tokens/index.ts (3)

64-68: ⚠️ Potential issue | 🔴 Critical

INSERT missing required columns for token validation.

The insert only writes id, name, token_prefix, and created_at, but the schema and validate_mcp_token() function require user_id, organization_id, and token_hash. Without token_hash, the generated token cannot be validated on subsequent requests.
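A sketch of what a complete insert might look like, assuming a SHA-256-of-full-token hashing scheme (the scheme validate_mcp_token() actually expects is not shown here) and hypothetical userId/orgId inputs:

```typescript
import { createHash, randomBytes } from 'crypto'

// Hypothetical sketch: persist every column the validator needs.
// Column names come from the review comment; the hashing scheme is assumed.
function buildTokenInsert(name: string, userId: string, orgId: string) {
  const secret = randomBytes(16).toString('hex')
  const fullToken = `orch_${secret}` // shown to the caller exactly once
  const tokenHash = createHash('sha256').update(fullToken).digest('hex')
  const tokenPrefix = secret.slice(0, 8) // safe to store and display
  return {
    query: `INSERT INTO public.mcp_tokens
              (id, name, token_prefix, token_hash, user_id, organization_id, created_at)
            VALUES (gen_random_uuid(), $1, $2, $3, $4, $5, NOW())
            RETURNING id, name, token_prefix, created_at`,
    parameters: [name, tokenPrefix, tokenHash, userId, orgId],
    fullToken,
  }
}
```

Only the hash and prefix are persisted; the full token never touches the database, which is why it must be returned from this single response.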

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/access-tokens/index.ts` around lines
64 - 68, The INSERT into public.mcp_tokens currently only sets id, name,
token_prefix, created_at but must also persist user_id, organization_id and
token_hash required by validate_mcp_token(); update the SQL in the block that
builds the query (referencing variables tokenId, name, tokenPrefix) to include
columns user_id, organization_id, token_hash and add the corresponding values to
the parameters array (e.g., pass the current userId, organizationId and the
generated tokenHash) so the row contains the data validate_mcp_token() expects.

26-30: ⚠️ Potential issue | 🔴 Critical

Token queries lack authorization scope.

The SELECT has no user_id or organization_id filter, exposing all unrevoked tokens to any authenticated user.
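One possible parameterized, ownership-scoped version; the user-OR-org scoping rule is an assumption about the intended policy:

```typescript
// Sketch of the scoped query the comment asks for: the caller's identity is
// passed as bound parameters and never concatenated into the SQL string.
function scopedTokenList(userId: string, orgId: string) {
  return {
    query: `SELECT id, name, token_prefix, created_at
            FROM public.mcp_tokens
            WHERE revoked_at IS NULL
              AND (user_id = $1 OR organization_id = $2)
            ORDER BY created_at DESC`,
    parameters: [userId, orgId],
  }
}
```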

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/access-tokens/index.ts` around lines
26 - 30, The SELECT query in the API handler returns all unrevoked tokens
because it lacks any user or organization scoping; update the SQL in the query
that currently selects from "public.mcp_tokens" (the query string shown) to
include a parameterized filter for ownership (e.g., add "AND (user_id = $1 OR
organization_id = $2)" or similar) and pass the current caller's user id and/or
organization id from the request/session into the query parameters; keep the
existing "revoked_at IS NULL" and ORDER BY created_at DESC but ensure the
handler that builds/executes this query (the request handler around this query)
retrieves and supplies the authenticated user's id and org id securely and uses
parameterized values to prevent exposure of other users' tokens.

70-88: ⚠️ Potential issue | 🔴 Critical

POST returns 200 with fabricated token data even when INSERT fails.

When executeQuery returns an error (lines 74-76), the handler logs a warning but still returns HTTP 200 with the generated token values. The client receives a token that doesn't exist in the database, causing authentication failures on use.

🐛 Proposed fix to return error on failure
   const { data, error } = await executeQuery<{
     id: string
     name: string
     token_prefix: string
     created_at: string
   }>({
     query: `INSERT INTO public.mcp_tokens (id, name, token_prefix, created_at)
             VALUES ($1, $2, $3, NOW())
             RETURNING id, name, token_prefix, created_at`,
     parameters: [tokenId, name, tokenPrefix],
   })

-  const row = data?.[0]
-  const createdAt = row?.created_at || new Date().toISOString()
-  const prefix = row?.token_prefix || tokenPrefix
-
   if (error) {
     console.warn('access-tokens create error (table may not exist):', error.message)
+    return res.status(500).json({ error: { message: 'Failed to create token' } })
   }

+  const row = data?.[0]
+  if (!row) {
+    return res.status(500).json({ error: { message: 'Token creation returned no data' } })
+  }
+
   // Return CreateAccessTokenResponse schema:
   return res.status(200).json({
-    id: row?.id || tokenId,
+    id: row.id,
     name,
     token: fullToken,
-    token_alias: `orch_...${prefix}`,
-    created_at: createdAt,
+    token_alias: `orch_...${row.token_prefix}`,
+    created_at: row.created_at,
     expires_at: null,
     last_used_at: null,
   })

As per coding guidelines: Flag data loss risks including missing error handling on write operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/access-tokens/index.ts` around lines
70 - 88, The handler currently logs executeQuery failures (error) but still
returns HTTP 200 with fabricated token fields (using tokenId, fullToken,
tokenPrefix and row fallbacks); change the control flow in the POST handler so
that if executeQuery returns an error or the DB did not return a created row
(data?.[0] is falsy) you return an error response (e.g., res.status(500).json({
error: 'Failed to create access token' })) instead of returning the generated
token; reference the variables row, data, error, tokenId, fullToken, tokenPrefix
and the response call res.status(...) to implement the check and early return,
and keep logging the error via console.warn/processLogger before sending the 5xx
response.
🧹 Nitpick comments (3)
scripts/start-mcp.sh (1)

21-21: SUPABASE_ANON_KEY is exported but not consumed by the Go server.

Per mcp-server/cmd/server/main.go:40-41, the server only reads SUPABASE_URL and SUPABASE_SERVICE_KEY. Exporting SUPABASE_ANON_KEY adds no value and may confuse future maintainers. Consider removing it or adding a comment explaining its purpose if it's intentionally reserved for future use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/start-mcp.sh` at line 21, Remove the unused environment export
SUPABASE_ANON_KEY (the line exporting SUPABASE_ANON_KEY="${ANON_KEY}") from the
startup script since the server only consumes SUPABASE_URL and
SUPABASE_SERVICE_KEY in main.go, or if you intend to reserve it, add a one-line
comment next to the SUPABASE_ANON_KEY export explaining it is intentionally
unused/reserved for future use so maintainers aren’t confused; reference the
SUPABASE_ANON_KEY, SUPABASE_URL and SUPABASE_SERVICE_KEY symbols when making the
change.
apps/studio/components/interfaces/Orchestra/OrchestraLogs.tsx (1)

133-135: Consider simplifying handleRefresh callback.

Wrapping a single function call in useCallback with [refetch] as its dependency adds unnecessary overhead. Since refetch is stable from React Query, you can pass it directly.

♻️ Proposed simplification
-  const handleRefresh = useCallback(() => {
-    refetch()
-  }, [refetch])

Then on line 205:

-          onClick={handleRefresh}
+          onClick={() => refetch()}

Or pass refetch directly if the signature matches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/interfaces/Orchestra/OrchestraLogs.tsx` around lines
133 - 135, The handleRefresh callback currently wraps a single call to refetch
with useCallback (function name: handleRefresh, dependency: [refetch]); since
refetch is stable from React Query, remove the unnecessary useCallback wrapper
and either pass refetch directly where handleRefresh is used or replace the
declaration with a direct assignment (e.g., set handleRefresh to refetch) so
callers use the stable refetch function without the extra hook overhead.
apps/studio/pages/api/platform/profile/access-tokens/[id].ts (1)

12-14: Missing Allow header on 405 response.

Other API routes in this codebase (e.g., index.ts) set the Allow header when returning 405. This helps clients discover supported methods.

♻️ Suggested fix
     default:
+      res.setHeader('Allow', ['DELETE'])
       return res.status(405).json({ error: { message: `Method ${req.method} not allowed` } })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/access-tokens/[id].ts` around lines 12
- 14, The 405 response in the switch default branch (the line using
res.status(405).json(...)) is missing the Allow header; update the default case
in the request handler to set the Allow header to the list of methods this
endpoint supports (e.g., call res.setHeader('Allow', 'GET,PUT,DELETE') or
programmatically build the list from the handled cases) before returning the 405
JSON response so clients can discover supported methods.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.mcp.json:
- Around line 6-8: Remove the hardcoded token in .mcp.json: replace the
Authorization header value under "headers" -> "Authorization" with a non-secret
placeholder (e.g. YOUR_MCP_TOKEN_HERE) and update the app to read the real token
from a secure source (environment variable or a local secrets file) rather than
committing it; also add a short README/CLAUDE.md note instructing developers to
generate a token via the Laravel app (Settings → Tokens) and place it into their
local config or export it as the env var used by the code.

In `@apps/studio/components/interfaces/Orchestra/OrchestraLogs.tsx`:
- Around line 92-98: The toggle button controlling context visibility in
OrchestraLogs (the JSX button that calls onClick={() => setExpanded(!expanded)})
is missing the accessibility attribute aria-expanded; update that button element
to include aria-expanded={expanded} so assistive technologies receive the
current open/closed state (use the existing expanded state variable and keep the
onClick behavior intact).

In `@apps/studio/data/orchestra/orchestra-service-logs-query.ts`:
- Around line 48-56: The SQL in the activityLogSql string references
non-existent columns (entity_type, entity_id, metadata); update the SELECT to
use the actual columns from the migration: replace the message expression to use
COALESCE(action, '') || ': ' || COALESCE(summary, '') (or similar combining
action and summary) and replace metadata::text with details::text (or
COALESCE(details::text, '') if needed) so the query uses action, summary, and
details columns; keep the rest of the query (service, level, request_id,
created_at, ORDER and LIMIT) unchanged to preserve behavior.
- Around line 37-44: The query construction using serviceLogsSql interpolates
hours, service, and level directly and is vulnerable to SQL injection; update
the code to build the SQL with parameterized placeholders (do not
string-concatenate user values) and pass values via the DB client's parameter
array, and additionally perform runtime validation: ensure hours is a safe
integer (e.g., parseInt and bounds check) and validate service against the
ServiceFilter enum (or an allowed-values set) before including it, and only
append level if it matches an allowed level value; reference the serviceLogsSql
variable and the service/level/hours parameters when making these changes.

In `@apps/studio/pages/api/platform/profile/access-tokens/[id].ts`:
- Around line 24-29: The UPDATE allows revoking any token by id and ignores
failures; modify the revoke flow in the handler that calls executeQuery so the
SQL WHERE clause restricts to the caller’s scope (e.g., include owner_user_id or
organization_id = current user's id/org id) instead of just id = $1, verify the
query result/row count and return 404 if no row was affected (or 403 if the
token belongs to a different org), and wrap the executeQuery call in try/catch
to return a 500 on DB errors rather than always returning res.status(200).
Ensure the change references the existing tokenId param, the mcp_tokens table
and revoked_at update, and update the response logic accordingly.

In `@apps/studio/pages/api/platform/profile/index.ts`:
- Around line 65-84: The handler currently reads primary_email from req.body but
never persists it; either explicitly reject email changes or persist them via
auth.users. Fix by checking primary_email in the request: if your policy is to
disallow email updates, return a 400 error (e.g., use res.status(400).json(...))
when primary_email is present; otherwise, run an additional authenticated query
via executeQuery to update auth.users.email (an UPDATE on auth.users for the
profile id) before returning the profile—ensure you update auth.users rather
than only UPDATE public.profiles and keep the same returned email value in the
response.

In `@scripts/start-mcp.sh`:
- Around line 14-16: The grep+cut pipeline truncates values that contain '='
because it uses cut -d= -f2; update the assignments for PW, SERVICE_KEY, and
ANON_KEY to use cut -d= -f2- so each variable captures everything after the
first '=' (e.g., change occurrences of cut -d= -f2 in the PW, SERVICE_KEY, and
ANON_KEY assignment lines to cut -d= -f2-).
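The difference is easy to demonstrate with a value that itself contains '=', as base64-padded keys often do:

```shell
# -f2 keeps only the second field; -f2- keeps everything after the first '='.
line='SERVICE_KEY=abc=def=='
printf '%s\n' "$line" | cut -d= -f2    # truncated:  abc
printf '%s\n' "$line" | cut -d= -f2-   # intact:     abc=def==
```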

---

Duplicate comments:
In `@apps/studio/pages/api/platform/profile/access-tokens/index.ts`:
- Around line 64-68: The INSERT into public.mcp_tokens currently only sets id,
name, token_prefix, created_at but must also persist user_id, organization_id
and token_hash required by validate_mcp_token(); update the SQL in the block
that builds the query (referencing variables tokenId, name, tokenPrefix) to
include columns user_id, organization_id, token_hash and add the corresponding
values to the parameters array (e.g., pass the current userId, organizationId
and the generated tokenHash) so the row contains the data validate_mcp_token()
expects.
- Around line 26-30: The SELECT query in the API handler returns all unrevoked
tokens because it lacks any user or organization scoping; update the SQL in the
query that currently selects from "public.mcp_tokens" (the query string shown)
to include a parameterized filter for ownership (e.g., add "AND (user_id = $1 OR
organization_id = $2)" or similar) and pass the current caller's user id and/or
organization id from the request/session into the query parameters; keep the
existing "revoked_at IS NULL" and ORDER BY created_at DESC but ensure the
handler that builds/executes this query (the request handler around this query)
retrieves and supplies the authenticated user's id and org id securely and uses
parameterized values to prevent exposure of other users' tokens.
- Around line 70-88: The handler currently logs executeQuery failures (error)
but still returns HTTP 200 with fabricated token fields (using tokenId,
fullToken, tokenPrefix and row fallbacks); change the control flow in the POST
handler so that if executeQuery returns an error or the DB did not return a
created row (data?.[0] is falsy) you return an error response (e.g.,
res.status(500).json({ error: 'Failed to create access token' })) instead of
returning the generated token; reference the variables row, data, error,
tokenId, fullToken, tokenPrefix and the response call res.status(...) to
implement the check and early return, and keep logging the error via
console.warn/processLogger before sending the 5xx response.

In `@apps/studio/pages/api/platform/profile/index.ts`:
- Around line 32-35: The SQL in apps/studio/pages/api/platform/profile/index.ts
uses an inner JOIN which excludes admin profiles without an auth.users row;
change the JOIN in the query string (the `SELECT p.id, p.full_name, p.username,
p.avatar_url, u.email ... JOIN auth.users u ON u.id = p.id`) to a LEFT JOIN so
admin profiles are returned even when u.email is missing, and ensure downstream
code tolerates a nullable u.email value.

---

Nitpick comments:
In `@apps/studio/components/interfaces/Orchestra/OrchestraLogs.tsx`:
- Around line 133-135: The handleRefresh callback currently wraps a single call
to refetch with useCallback (function name: handleRefresh, dependency:
[refetch]); since refetch is stable from React Query, remove the unnecessary
useCallback wrapper and either pass refetch directly where handleRefresh is used
or replace the declaration with a direct assignment (e.g., set handleRefresh to
refetch) so callers use the stable refetch function without the extra hook
overhead.

In `@apps/studio/pages/api/platform/profile/access-tokens/[id].ts`:
- Around line 12-14: The 405 response in the switch default branch (the line
using res.status(405).json(...)) is missing the Allow header; update the default
case in the request handler to set the Allow header to the list of methods this
endpoint supports (e.g., call res.setHeader('Allow', 'GET,PUT,DELETE') or
programmatically build the list from the handled cases) before returning the 405
JSON response so clients can discover supported methods.

In `@scripts/start-mcp.sh`:
- Line 21: Remove the unused environment export SUPABASE_ANON_KEY (the line
exporting SUPABASE_ANON_KEY="${ANON_KEY}") from the startup script since the
server only consumes SUPABASE_URL and SUPABASE_SERVICE_KEY in main.go, or if you
intend to reserve it, add a one-line comment next to the SUPABASE_ANON_KEY
export explaining it is intentionally unused/reserved for future use so
maintainers aren’t confused; reference the SUPABASE_ANON_KEY, SUPABASE_URL and
SUPABASE_SERVICE_KEY symbols when making the change.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f5ce61da-8799-4254-8327-8181f5d2f911

📥 Commits

Reviewing files that changed from the base of the PR and between 33d2af2 and bf52d0a.

📒 Files selected for processing (16)
  • .mcp.json
  • .requests/2026-03-28-orchestra-pages-style-refactor.md
  • apps/studio/components/interfaces/Account/Preferences/AccountIdentities.tsx
  • apps/studio/components/interfaces/Orchestra/OrchestraLogs.tsx
  • apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts
  • apps/studio/data/orchestra/keys.ts
  • apps/studio/data/orchestra/orchestra-agents-query.ts
  • apps/studio/data/orchestra/orchestra-service-logs-query.ts
  • apps/studio/data/orchestra/orchestra-tokens-query.ts
  • apps/studio/pages/account/me.tsx
  • apps/studio/pages/api/platform/profile/access-tokens/[id].ts
  • apps/studio/pages/api/platform/profile/access-tokens/index.ts
  • apps/studio/pages/api/platform/profile/index.ts
  • apps/studio/pages/project/[ref]/orchestra/logs.tsx
  • scripts/start-mcp.sh
  • supabase/migrations/20260328000011_service_logs.sql
✅ Files skipped from review due to trivial changes (3)
  • .requests/2026-03-28-orchestra-pages-style-refactor.md
  • supabase/migrations/20260328000011_service_logs.sql
  • apps/studio/components/layouts/OrchestraLayout/OrchestraLayout.utils.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/studio/components/interfaces/Account/Preferences/AccountIdentities.tsx
  • apps/studio/data/orchestra/keys.ts
  • apps/studio/data/orchestra/orchestra-tokens-query.ts

Comment thread .mcp.json Outdated
Comment on lines +6 to +8
"headers": {
"Authorization": "Bearer orch_c17dc45cceea7ffa4888775186438a4d"
}

⚠️ Potential issue | 🟠 Major

Hardcoded API token committed to repository.

Committing secrets to version control is a security risk, even for demo/development tokens. If this repository is public (or becomes public), this token is exposed. Additionally, developers may accidentally use this in production setups.

Consider using a placeholder value with setup instructions, or document that users should generate their own token via the Laravel app before using this config.

🔐 Suggested approach
       "headers": {
-        "Authorization": "Bearer orch_c17dc45cceea7ffa4888775186438a4d"
+        "Authorization": "Bearer YOUR_MCP_TOKEN_HERE"
       }

Add a comment in README or CLAUDE.md explaining:

  1. Generate a token via the web dashboard (Settings → Tokens)
  2. Replace YOUR_MCP_TOKEN_HERE with the generated token
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.mcp.json around lines 6 - 8, Remove the hardcoded token in .mcp.json:
replace the Authorization header value under "headers" -> "Authorization" with a
non-secret placeholder (e.g. YOUR_MCP_TOKEN_HERE) and update the app to read the
real token from a secure source (environment variable or a local secrets file)
rather than committing it; also add a short README/CLAUDE.md note instructing
developers to generate a token via the Laravel app (Settings → Tokens) and place
it into their local config or export it as the env var used by the code.

Comment on lines +92 to +98
<button
onClick={() => setExpanded(!expanded)}
className="flex items-center gap-1 text-xs text-foreground-lighter hover:text-foreground transition-colors"
>
{expanded ? <ChevronDown size={12} /> : <ChevronRight size={12} />}
<span>{Object.keys(parsed).length} fields</span>
</button>

⚠️ Potential issue | 🟡 Minor

Add aria-expanded attribute to the expandable button.

The toggle button controls visibility of the context details but lacks aria-expanded to communicate state to assistive technologies.

♿ Proposed fix
       <button
         onClick={() => setExpanded(!expanded)}
+        aria-expanded={expanded}
         className="flex items-center gap-1 text-xs text-foreground-lighter hover:text-foreground transition-colors"
       >
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/components/interfaces/Orchestra/OrchestraLogs.tsx` around lines
92 - 98, The toggle button controlling context visibility in OrchestraLogs (the
JSX button that calls onClick={() => setExpanded(!expanded)}) is missing the
accessibility attribute aria-expanded; update that button element to include
aria-expanded={expanded} so assistive technologies receive the current
open/closed state (use the existing expanded state variable and keep the onClick
behavior intact).

Comment on lines +37 to +44
WHERE created_at > now() - interval '${hours} hours'
`
if (service !== 'all') {
serviceLogsSql += ` AND service = '${service}'`
}
if (level) {
serviceLogsSql += ` AND level = '${level}'`
}

⚠️ Potential issue | 🟠 Major

SQL injection vulnerability via string interpolation.

The service, level, and hours parameters are interpolated directly into SQL without parameterization. While hours is numeric and service is constrained by the ServiceFilter type, the type constraint only applies at compile-time; a malicious caller or runtime bypass could inject arbitrary SQL.

Use parameterized queries or escape/validate inputs server-side. At minimum, ensure service is validated against the allowed enum values at runtime.

🛡️ Proposed fix using runtime validation
+const VALID_SERVICES = ['all', 'go_mcp', 'laravel', 'orchestra'] as const
+const VALID_LEVELS = ['debug', 'info', 'warning', 'error', 'fatal'] as const
+
 export async function getOrchestraServiceLogs(
   {
     projectRef,
     service = 'all',
     level,
     hours = 24,
   }: {
     projectRef: string
     service?: ServiceFilter
     level?: string
     hours?: number
   },
   signal?: AbortSignal
 ) {
+  // Runtime validation to prevent SQL injection
+  if (!VALID_SERVICES.includes(service as any)) {
+    throw new Error(`Invalid service filter: ${service}`)
+  }
+  if (level && !VALID_LEVELS.includes(level as any)) {
+    throw new Error(`Invalid level filter: ${level}`)
+  }
+  if (!Number.isInteger(hours) || hours < 1 || hours > 720) {
+    throw new Error(`Invalid hours value: ${hours}`)
+  }
+
   // Query service_logs table
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/data/orchestra/orchestra-service-logs-query.ts` around lines 37 -
44, The query construction using serviceLogsSql interpolates hours, service, and
level directly and is vulnerable to SQL injection; update the code to build the
SQL with parameterized placeholders (do not string-concatenate user values) and
pass values via the DB client's parameter array, and additionally perform
runtime validation: ensure hours is a safe integer (e.g., parseInt and bounds
check) and validate service against the ServiceFilter enum (or an allowed-values
set) before including it, and only append level if it matches an allowed level
value; reference the serviceLogsSql variable and the service/level/hours
parameters when making these changes.

Comment on lines +48 to +56
const activityLogSql = `
SELECT id, 'orchestra' as service, 'info' as level,
COALESCE(action, '') || ': ' || COALESCE(entity_type, '') || ' ' || COALESCE(entity_id, '') as message,
metadata::text as context, NULL as request_id, created_at
FROM public.activity_log
WHERE created_at > now() - interval '${hours} hours'
ORDER BY created_at DESC
LIMIT 50
`

⚠️ Potential issue | 🔴 Critical

Query references non-existent columns in activity_log table — will cause runtime SQL errors.

The query references entity_type, entity_id, and metadata columns, but according to the migration schema (supabase/migrations/20260328000006_memory_activity.sql), the actual activity_log columns are action, summary, and details.

This will produce a PostgreSQL error when service === 'all' or service === 'orchestra'.

🐛 Proposed fix to match actual schema
   const activityLogSql = `
     SELECT id, 'orchestra' as service, 'info' as level,
-           COALESCE(action, '') || ': ' || COALESCE(entity_type, '') || ' ' || COALESCE(entity_id, '') as message,
-           metadata::text as context, NULL as request_id, created_at
+           COALESCE(action, '') || ': ' || COALESCE(summary, '') as message,
+           details::text as context, NULL as request_id, created_at
     FROM public.activity_log
     WHERE created_at > now() - interval '${hours} hours'
     ORDER BY created_at DESC
     LIMIT 50
   `
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/data/orchestra/orchestra-service-logs-query.ts` around lines 48 -
56, The SQL in the activityLogSql string references non-existent columns
(entity_type, entity_id, metadata); update the SELECT to use the actual columns
from the migration: replace the message expression to use COALESCE(action, '')
|| ': ' || COALESCE(summary, '') (or similar combining action and summary) and
replace metadata::text with details::text (or COALESCE(details::text, '') if
needed) so the query uses action, summary, and details columns; keep the rest of
the query (service, level, request_id, created_at, ORDER and LIMIT) unchanged to
preserve behavior.
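Beyond the column names, the query also interpolates `hours` straight into the SQL string. A parameterized variant is possible (a sketch, assuming `executeQuery` binds `$1` positionally the way other handlers in this PR do; the text-to-interval cast is standard PostgreSQL):

```typescript
// Sketch: same SELECT as the proposed fix, but with the time window
// bound as a parameter instead of template-string interpolation.
const activityLogSql = `
  SELECT id, 'orchestra' as service, 'info' as level,
         COALESCE(action, '') || ': ' || COALESCE(summary, '') as message,
         details::text as context, NULL as request_id, created_at
  FROM public.activity_log
  WHERE created_at > now() - ($1 || ' hours')::interval
  ORDER BY created_at DESC
  LIMIT 50
`
// Hypothetical call shape:
// executeQuery({ query: activityLogSql, parameters: [String(hours)] })
```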

Comment on lines +24 to +29
await executeQuery({
query: `UPDATE public.mcp_tokens SET revoked_at = NOW() WHERE id = $1`,
parameters: [tokenId],
})

return res.status(200).json({})

⚠️ Potential issue | 🔴 Critical

Missing authorization scope and error handling on token revocation.

The UPDATE query doesn't restrict to the caller's user/organization, allowing any authenticated user to revoke any token by ID. Additionally, the result isn't checked—if the token doesn't exist or the query fails, the endpoint still returns 200.

🔒 Proposed fix to add authorization and result verification
 async function handleDelete(req: NextApiRequest, res: NextApiResponse) {
   const tokenId = req.query.id as string
+  // TODO: Extract userId/orgId from the authenticated session
+  const userId = req.user?.id // adjust based on your auth context

   if (!tokenId) {
     return res.status(400).json({ error: { message: 'Token ID is required' } })
   }

-  await executeQuery({
-    query: `UPDATE public.mcp_tokens SET revoked_at = NOW() WHERE id = $1`,
-    parameters: [tokenId],
+  const { data, error } = await executeQuery<{ id: string }>({
+    query: `UPDATE public.mcp_tokens 
+            SET revoked_at = NOW() 
+            WHERE id = $1 AND user_id = $2 AND revoked_at IS NULL
+            RETURNING id`,
+    parameters: [tokenId, userId],
   })

+  if (error) {
+    console.error('Token revocation error:', error.message)
+    return res.status(500).json({ error: { message: 'Failed to revoke token' } })
+  }
+
+  if (!data?.length) {
+    return res.status(404).json({ error: { message: 'Token not found or already revoked' } })
+  }

   return res.status(200).json({})
 }

As per coding guidelines: Flag security vulnerabilities including auth bypass, and flag data loss risks including missing error handling on write operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/access-tokens/[id].ts` around lines 24
- 29, The UPDATE allows revoking any token by id and ignores failures; modify
the revoke flow in the handler that calls executeQuery so the SQL WHERE clause
restricts to the caller’s scope (e.g., include owner_user_id or organization_id
= current user's id/org id) instead of just id = $1, verify the query result/row
count and return 404 if no row was affected (or 403 if the token belongs to a
different org), and wrap the executeQuery call in try/catch to return a 500 on
DB errors rather than always returning res.status(200). Ensure the change
references the existing tokenId param, the mcp_tokens table and revoked_at
update, and update the response logic accordingly.

Comment on lines +65 to +84
const { first_name, last_name, username, primary_email } = req.body || {}

const fullName = [first_name, last_name].filter(Boolean).join(' ') || null

// Update the admin profile in the database
const { data, error } = await executeQuery<{
id: string
full_name: string | null
username: string | null
email: string | null
}>({
query: `UPDATE public.profiles
SET full_name = COALESCE($1, full_name),
username = COALESCE($2, username),
updated_at = NOW()
WHERE is_admin = true
RETURNING id, full_name, username,
(SELECT email FROM auth.users WHERE id = profiles.id) as email`,
parameters: [fullName, username || null],
})

⚠️ Potential issue | 🟠 Major

primary_email is accepted but silently ignored.

The handler destructures primary_email from the request body (line 65) and echoes it back in the response (line 96), but the UPDATE query never modifies auth.users.email. Users will see their email "change" in the UI while the database remains unchanged.

Either update auth.users.email (requires admin privileges) or reject email changes with an appropriate error.

🛡️ Option 1: Reject email updates explicitly
 const handlePatch = async (req: NextApiRequest, res: NextApiResponse) => {
   const { first_name, last_name, username, primary_email } = req.body || {}

+  if (primary_email !== undefined) {
+    return res.status(400).json({ 
+      error: { message: 'Email updates are not supported through this endpoint' } 
+    })
+  }
+
   const fullName = [first_name, last_name].filter(Boolean).join(' ') || null
🛡️ Option 2: Update email in auth.users (if permitted)
   // Update the admin profile in the database
-  const { data, error } = await executeQuery<{
+  const { error: profileError } = await executeQuery({
+    query: `UPDATE public.profiles
+            SET full_name = COALESCE($1, full_name),
+                username = COALESCE($2, username),
+                updated_at = NOW()
+            WHERE is_admin = true`,
+    parameters: [fullName, username || null],
+  })
+
+  if (primary_email) {
+    // Note: This requires appropriate permissions on auth.users
+    await executeQuery({
+      query: `UPDATE auth.users SET email = $1 
+              WHERE id = (SELECT id FROM public.profiles WHERE is_admin = true)`,
+      parameters: [primary_email],
+    })
+  }
+
+  const { data, error } = await executeQuery<{
     id: string
     full_name: string | null
     username: string | null
     email: string | null
   }>({
-    query: `UPDATE public.profiles
-            SET full_name = COALESCE($1, full_name),
-                username = COALESCE($2, username),
-                updated_at = NOW()
-            WHERE is_admin = true
-            RETURNING id, full_name, username,
-              (SELECT email FROM auth.users WHERE id = profiles.id) as email`,
-    parameters: [fullName, username || null],
+    query: `SELECT p.id, p.full_name, p.username, u.email
+            FROM public.profiles p
+            LEFT JOIN auth.users u ON u.id = p.id
+            WHERE p.is_admin = true`,
   })

As per coding guidelines: Prioritize comments on logic errors and bugs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/studio/pages/api/platform/profile/index.ts` around lines 65 - 84, The
handler currently reads primary_email from req.body but never persists it;
either explicitly reject email changes or persist them via auth.users. Fix by
checking primary_email in the request: if your policy is to disallow email
updates, return a 400 error (e.g., use res.status(400).json(...)) when
primary_email is present; otherwise, run an additional authenticated query via
executeQuery to update auth.users.email (an UPDATE on auth.users for the profile
id) before returning the profile—ensure you update auth.users rather than only
UPDATE public.profiles and keep the same returned email value in the response.

Comment thread: scripts/start-mcp.sh
Comment on lines +14 to +16
PW=$(grep "^POSTGRES_PASSWORD=" "$DOCKER_ENV" | head -1 | cut -d= -f2)
SERVICE_KEY=$(grep "^SERVICE_ROLE_KEY=" "$DOCKER_ENV" | head -1 | cut -d= -f2)
ANON_KEY=$(grep "^ANON_KEY=" "$DOCKER_ENV" | head -1 | cut -d= -f2)

⚠️ Potential issue | 🟡 Minor

Use cut -d= -f2- to avoid truncating values containing =.

If POSTGRES_PASSWORD (or any other value) contains = characters, cut -d= -f2 will only return the portion up to the next =, truncating the actual value. Use f2- to capture everything after the first delimiter.

🔧 Proposed fix
-PW=$(grep "^POSTGRES_PASSWORD=" "$DOCKER_ENV" | head -1 | cut -d= -f2)
-SERVICE_KEY=$(grep "^SERVICE_ROLE_KEY=" "$DOCKER_ENV" | head -1 | cut -d= -f2)
-ANON_KEY=$(grep "^ANON_KEY=" "$DOCKER_ENV" | head -1 | cut -d= -f2)
+PW=$(grep "^POSTGRES_PASSWORD=" "$DOCKER_ENV" | head -1 | cut -d= -f2-)
+SERVICE_KEY=$(grep "^SERVICE_ROLE_KEY=" "$DOCKER_ENV" | head -1 | cut -d= -f2-)
+ANON_KEY=$(grep "^ANON_KEY=" "$DOCKER_ENV" | head -1 | cut -d= -f2-)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/start-mcp.sh` around lines 14 - 16, The grep+cut pipeline truncates
values that contain '=' because it uses cut -d= -f2; update the assignments for
PW, SERVICE_KEY, and ANON_KEY to use cut -d= -f2- so each variable captures
everything after the first '=' (e.g., change occurrences of cut -d= -f2 in the
PW, SERVICE_KEY, and ANON_KEY assignment lines to cut -d= -f2-).
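
The difference is easy to verify in isolation (plain POSIX shell; the sample value is made up):

```shell
line='POSTGRES_PASSWORD=p@ss=word=1'

# -f2 stops at the next '=' and silently truncates the value
truncated=$(printf '%s\n' "$line" | cut -d= -f2)

# -f2- keeps everything after the first '=', preserving embedded '='
full=$(printf '%s\n' "$line" | cut -d= -f2-)

echo "truncated: $truncated"   # p@ss
echo "full:      $full"        # p@ss=word=1
```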

MCP Protocol Fix:
- Register ALL 58 tools (was only 6 due to missing calls in main.go)
- Fix SSE session: statusRecorder now implements http.Flusher
- Fix GitHub tools: access_token → access_token_encrypted
- Fix project_link_repo: correct column names

Orchestra Pages Style Refactor:
- Dashboard: proper Card/Table components, Activity + Sessions tables
- Agents: fix Badge variants, proper empty state with icon
- Tokens: font-mono prefix, destructive revoke button, empty state
- Logs: fix squished tabs (gap-2 + border-b), proper empty state

Go MCP DB Logger:
- New logging.DBLogger with 1000-entry buffered channel
- Background worker inserts to service_logs table
- Logs: server start/stop, client connect/disconnect, tool calls, auth failures
- Graceful shutdown drains remaining entries

Audit Logs (full implementation):
- Org audit: queries activity_log with limit/offset/action filter
- Profile audit: scoped to admin user
- Tool calls auto-logged to activity_log after every execution

Co-Authored-By: Fady Mondy <info@3x1.io>
Full rewrite of transport layer for all Claude clients (Code, Desktop, Mobile, Web):

Streamable HTTP (new):
- Single /mcp endpoint handles GET, POST, DELETE
- POST initialize → creates session, returns MCP-Session-Id header
- POST with MCP-Session-Id → direct JSON responses (no SSE relay)
- GET with MCP-Session-Id → SSE stream for server-initiated messages
- DELETE → terminates session
- Protocol version: 2025-11-25
- Session timeout: 30 min idle cleanup

Backwards Compatible (old SSE 2024-11-05):
- GET without MCP-Session-Id → legacy SSE with event:endpoint
- POST with ?sessionId= query param → legacy relay
- Existing .mcp.json with type:sse still works

CORS: MCP-Session-Id + MCP-Protocol-Version headers allowed/exposed
Auth: session context persisted across requests

Co-Authored-By: Fady Mondy <info@3x1.io>
Claude Code expects JSON responses. Changed:
- Auth 401: Content-Type application/json with JSON body
- Catch-all 404: JSON {error, path} instead of plain "404 page not found"

Co-Authored-By: Fady Mondy <info@3x1.io>
Updated 16 files to match Studio's exact design language:
- Colors: #1c1c1c bg, #252525 cards, #333 borders, #ededed text
- Font: switched to Inter (matching Studio)
- Layouts: narrower sidebar (240px), compact nav, user avatar at bottom
- Auth pages: Studio-style inputs, rounded-md, inline forgot link
- Dashboard: Studio stat cards, proper table styling, dark modals
- Onboarding: checkmarks for completed steps, Studio code blocks
- Brand gradient (#00E5FF→#A900FF) preserved throughout

Co-Authored-By: Fady Mondy <info@3x1.io>
The critical fix: POST requests with sessionId or Mcp-Session-Id now
bypass the auth middleware. The auth context is stored in the session
from the initial SSE/initialize connection.

Changes:
- legacySession stores userCtx from SSE connection
- Auth middleware skips validation for POST with session params
- Legacy POST injects stored userCtx into request context

Co-Authored-By: Fady Mondy <info@3x1.io>
…TTP transport

Claude's API returns 400 for schemas with oneOf/allOf/anyOf at top level.
Removed oneOf from 5 tool schemas (agent_get, project_get, skill_get,
spec_get, workflow_get) — both id and slug are now optional params.

Changed .mcp.json to type:http (Streamable HTTP, not SSE).

Co-Authored-By: Fady Mondy <info@3x1.io>
The regex oneOf removal left orphaned JSON fragments (trailing },
{"required": [...]}, ]) in 5 tool schemas. Fixed all to be valid JSON.
Verified: all schemas pass JSON parse, tools/list returns 58 tools.

Co-Authored-By: Fady Mondy <info@3x1.io>
@fadymondy fadymondy merged commit 3ac9113 into master Mar 28, 2026
4 of 14 checks passed
@github-actions

github-actions bot commented Mar 28, 2026

Braintrust eval report

Evals failed: Error: Command failed with exit code 1
