Conversation
Check if auth is already set up before attempting to create auth files. This prevents errors when running the auth setup command multiple times.
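The idempotency check can be sketched like this; the file path and the injected callbacks are illustrative, not the actual CLI code:

```typescript
// Hypothetical sketch: a setup step that is safe to run twice.
// `fileExists` and `writeFile` are injected so the logic is easy to
// test; the real command would wrap fs.existsSync on the generated
// auth file path.
type ExistsFn = (path: string) => boolean;
type WriteFn = (path: string) => void;

function setupAuth(
  authFilePath: string,
  fileExists: ExistsFn,
  writeFile: WriteFn,
): "created" | "skipped" {
  if (fileExists(authFilePath)) {
    return "skipped"; // idempotent: do not overwrite an existing setup
  }
  writeFile(authFilePath);
  return "created";
}
```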
- .gitignore: Removed unused 'cli-auth-page/' entry
- CODEBASE_MAP.md: Fixed env var inconsistency: aligned STORAGE_ACCESS_KEY_ID/STORAGE_SECRET_ACCESS_KEY
- CODEBASE_MAP.md: Replaced placeholder clone command with generic <repository-URL>
- CODEBASE_MAP.md: Separated consecutive requireAuth/optionalAuth middleware into distinct examples
- CODEBASE_MAP.md: Added missing 'eq' import for database query
…t emission
- Add onchange() method to DatabaseConnection interface in types.ts
- Implement CDC for Turso/SQLite: wrap execute() to emit DBEvent after INSERT/UPDATE/DELETE
- Implement CDC for PostgreSQL/Neon/Supabase: set up LISTEN/NOTIFY pattern
- Add CDC support placeholder for PlanetScale (MySQL, no native CDC)
- Update RealtimeServer in templates/base to connect CDC events to WebSocket
- Webhooks integrator continues to receive db:change events correctly

This enables automatic realtime events without manual broadcast() calls.
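The Turso/SQLite wrapping step described above can be sketched as follows; `DBEvent`'s shape, `extractEvent`, and `CdcConnection` are simplified stand-ins for the real interfaces, not the actual BetterBase API:

```typescript
// Sketch: wrap execute() so a change event is emitted after a write.
type DBEvent = {
  table: string;
  event: "INSERT" | "UPDATE" | "DELETE";
};

type ChangeCallback = (event: DBEvent) => void;

// Derive the event type and table name from the SQL verb. This toy
// version handles optionally-quoted bare identifiers only.
function extractEvent(sql: string): DBEvent | null {
  const m = sql
    .trim()
    .match(/^(insert\s+into|update|delete\s+from)\s+["'`\[]?([A-Za-z_][A-Za-z0-9_]*)/i);
  if (!m) return null;
  const verb = m[1].toLowerCase();
  const event = verb.startsWith("insert")
    ? "INSERT"
    : verb.startsWith("update")
      ? "UPDATE"
      : "DELETE";
  return { table: m[2], event };
}

class CdcConnection {
  private callbacks: ChangeCallback[] = [];

  constructor(private rawExecute: (sql: string) => void) {}

  onchange(cb: ChangeCallback): void {
    this.callbacks.push(cb);
  }

  execute(sql: string): void {
    this.rawExecute(sql); // run the statement first
    const event = extractEvent(sql);
    if (event) this.callbacks.forEach((cb) => cb(event)); // emit after success
  }
}
```

SELECT statements produce no event, so only writes reach subscribers.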
- Add event type (INSERT|UPDATE|DELETE|*) to subscription messages
- Server now filters events by table+event before broadcasting
- Client SDK updated to send event type with subscribe/unsubscribe
- Only matching subscribers receive events (no more broadcast to all)
- Supports wildcard '*' for receiving all event types on a table
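The filtering rule amounts to a small predicate; `Subscription` here is an assumed shape for illustration, not the actual message type:

```typescript
// A subscriber receives an event only when the table matches and the
// event type either matches exactly or the subscription used '*'.
type EventType = "INSERT" | "UPDATE" | "DELETE";

interface Subscription {
  table: string;
  event: EventType | "*";
}

function matchesSubscription(
  sub: Subscription,
  table: string,
  event: EventType,
): boolean {
  if (sub.table !== table) return false;
  return sub.event === "*" || sub.event === event;
}
```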
- Create packages/core/src/auto-rest.ts with mountAutoRest function
- Auto-generates CRUD routes: GET/POST /api/:table, GET/PATCH/DELETE /api/:table/:id
- Add autoRest config to BetterBaseConfigSchema with enabled and excludeTables
- Integrate with templates/base to call mountAutoRest at startup
- Routes support pagination (limit, offset query params)
- Returns BetterBaseResponse<T> format including count and pagination
- RLS integration point added (middleware hooks ready)
- Manual routes can override auto-generated routes
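The pagination handling might look like the following sketch; the default of 25 and the cap of 100 are assumptions, not the documented defaults:

```typescript
// Parse limit/offset query params defensively: fall back to a default,
// clamp the limit to a maximum, and never allow a negative offset.
interface Pagination {
  limit: number;
  offset: number;
}

function parsePagination(
  query: Record<string, string | undefined>,
  maxLimit = 100,
): Pagination {
  const limit = Math.min(
    Math.max(parseInt(query.limit ?? "25", 10) || 25, 1),
    maxLimit,
  );
  const offset = Math.max(parseInt(query.offset ?? "0", 10) || 0, 0);
  return { limit, offset };
}
```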
- Create packages/core/src/rls/evaluator.ts
- Implement evaluatePolicy() to parse and evaluate policy expressions
- Supports: auth.uid() = column_name, auth.role() = 'value', true, false
- Apply RLS to SELECT: fetch rows first, then filter through evaluator
- Apply RLS to INSERT/UPDATE/DELETE: evaluate before execution, throw UnauthorizedError if denied
- Create middleware factory createRLSMiddleware() for easy integration
- Uses UnauthorizedError from @betterbase/shared
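A minimal sketch of such an evaluator for the listed grammar, with the regexes anchored so anything outside the grammar is denied (fail-closed); the real implementation may differ:

```typescript
interface AuthContext {
  uid: string;
  role: string;
}

// Evaluate one of the four supported policy forms against a row.
// Unknown or partially-matching expressions are denied by default.
function evaluatePolicy(
  expression: string,
  auth: AuthContext,
  row: Record<string, unknown>,
): boolean {
  const expr = expression.trim();
  if (expr === "true") return true;
  if (expr === "false") return false;

  // auth.uid() = column_name (anchored: no trailing clauses allowed)
  const uidMatch = expr.match(/^auth\.uid\(\)\s*=\s*(\w+)$/);
  if (uidMatch) return row[uidMatch[1]] === auth.uid;

  // auth.role() = 'value'
  const roleMatch = expr.match(/^auth\.role\(\)\s*=\s*'([^']+)'$/);
  if (roleMatch) return auth.role === roleMatch[1];

  return false; // unsupported expression: deny
}
```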
- Add StoragePolicy type to packages/core/src/storage/types.ts
- Create packages/core/src/storage/policy-engine.ts with evaluateStoragePolicy()
- Add storagePolicies to BetterBaseConfigSchema in packages/core/src/config/schema.ts
- Integrate storage policy checks into templates/base/src/routes/storage.ts
- Export storage types and functions from packages/core/src/index.ts
- Fail-closed by default: deny access if no policies match
- Supports expressions: 'true' (public), auth.uid() = path.split('/')[1], path.startsWith('prefix')
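The fail-closed evaluation loop can be sketched as follows; `StoragePolicySketch` and its `check` callback are illustrative stand-ins, not the real StoragePolicy type:

```typescript
interface StoragePolicyContext {
  uid: string;
  path: string;
}

interface StoragePolicySketch {
  check: (ctx: StoragePolicyContext) => boolean;
}

// Grant access if any policy allows it; with no matching policy the
// request is denied (fail-closed).
function evaluateStoragePolicies(
  policies: StoragePolicySketch[],
  ctx: StoragePolicyContext,
): boolean {
  for (const policy of policies) {
    if (policy.check(ctx)) return true;
  }
  return false; // no matching policy, no access
}

// The expression forms listed above translate to checks such as:
const publicRead: StoragePolicySketch = { check: () => true };
const ownFolderOnly: StoragePolicySketch = {
  // mirrors auth.uid() = path.split('/')[1]
  check: (ctx) => ctx.path.split("/")[1] === ctx.uid,
};
```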
- Add magicLink plugin to templates/base/src/auth/index.ts
- Configure SMTP env vars: SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASS, SMTP_FROM
- Implement dev mode logging instead of sending real emails
- Add client methods in packages/client/src/auth.ts:
  - sendMagicLink(email), verifyMagicLink(token)
  - sendOtp(email), verifyOtp(email, code)
- Add routes in templates/auth/src/routes/auth.ts:
  - POST /api/auth/magic-link/send
  - GET /api/auth/magic-link/verify
  - POST /api/auth/otp/send
  - POST /api/auth/otp/verify
- Dev mode logs tokens and OTPs to stdout
- Add twoFactor plugin to templates/base/src/auth/index.ts
- Add MFA routes in templates/auth/src/routes/auth.ts:
  - POST /api/auth/mfa/enable: returns QR URI and backup codes
  - POST /api/auth/mfa/verify: verify and activate MFA
  - POST /api/auth/mfa/disable: disable MFA
  - POST /api/auth/mfa/challenge: complete login with TOTP code
- Add requiresMFA to Session interface in packages/client/src/auth.ts
- Add client MFA methods: mfaEnable(), mfaVerify(), mfaDisable(), mfaChallenge()
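A sketch of the backup-code half of the enable flow; the code count and format here are assumptions for illustration, not the plugin's actual output:

```typescript
import { randomBytes } from "node:crypto";

// Generate one-time backup codes of the kind an MFA enable endpoint
// returns alongside the QR URI. Each code is 8 hex characters drawn
// from a cryptographically secure source.
function generateBackupCodes(count = 8): string[] {
  return Array.from({ length: count }, () => randomBytes(4).toString("hex"));
}
```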
- Add phone routes in templates/auth/src/routes/auth.ts:
  - POST /api/auth/phone/send: send SMS code (accepts phone in E.164 format)
  - POST /api/auth/phone/verify: verify SMS code and create session
- Env vars: TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, TWILIO_PHONE_NUMBER
- Dev mode logs SMS codes instead of sending
- Add client methods: sendPhoneOtp(phone), verifyPhoneOtp(phone, code)
- Phone stored in E.164 format (e.g., +15555555555)
- Code expires after 10 minutes
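A simple E.164 guard of the kind the send route would need; this is an illustrative sketch, not the template's actual validation:

```typescript
// E.164 numbers are '+' followed by up to 15 digits, first digit 1-9.
function isE164(phone: string): boolean {
  return /^\+[1-9]\d{1,14}$/.test(phone);
}
```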
- Add AllowedMimeTypes and BucketConfig types to packages/core/src/storage/types.ts
- Add MIME type validation function: validateMimeType()
- Add file size validation function: validateFileSize()
- Add env vars: STORAGE_ALLOWED_MIME_TYPES, STORAGE_MAX_FILE_SIZE
- Validate MIME types on upload (supports wildcards like 'image/*')
- Validate file size on upload (default 50MB)
- Export new types from packages/core/src/storage/index.ts
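The two validators can be sketched as below; the wildcard handling and the 50MB default follow the description above, while everything else (signatures, exact semantics) is assumed:

```typescript
// Match a MIME type against an allow-list; 'image/*' accepts any
// subtype of image, and '*/*' accepts everything.
function validateMimeType(mime: string, allowed: string[]): boolean {
  return allowed.some((pattern) => {
    if (pattern === "*/*") return true;
    if (pattern.endsWith("/*")) {
      return mime.startsWith(pattern.slice(0, -1)); // keep the "type/" prefix
    }
    return mime === pattern;
  });
}

const DEFAULT_MAX_FILE_SIZE = 50 * 1024 * 1024; // assumed 50MB default

// Reject empty and oversized uploads.
function validateFileSize(bytes: number, max = DEFAULT_MAX_FILE_SIZE): boolean {
  return bytes > 0 && bytes <= max;
}
```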
- Add vector types for embedding providers (OpenAI, Cohere, HuggingFace)
- Add embedding generation utilities with provider abstraction
- Add vector similarity search functions (cosine, euclidean, inner_product)
- Add pgvector index creation support
- Add comprehensive tests for vector operations
- Export vector module from @betterbase/core
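For reference, the cosine metric mentioned above reduces to the computation below; pgvector evaluates it in the database, so this only shows the math:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```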
- Add branching types and interfaces for preview environments
- Add DatabaseBranching class for PostgreSQL database cloning
- Add StorageBranching class for S3-compatible storage bucket isolation
- Add BranchManager for orchestrating preview environment lifecycle
- Add configuration schema for branching options (maxPreviews, sleepTimeout)
- Add comprehensive tests for branching operations
- Export branching module from @betterbase/core
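The lifecycle orchestration can be sketched as a per-instance registry; the field names, the limit default, and the class name are illustrative, not the actual BranchManager API. Keeping the map on the instance (rather than at module scope) means two projects in one process cannot see each other's branches:

```typescript
interface BranchInfo {
  name: string;
  status: "active" | "sleeping";
}

class BranchRegistry {
  // Per-instance store: no module-level shared state.
  private branches = new Map<string, BranchInfo>();

  constructor(private maxPreviews = 5) {}

  create(name: string): BranchInfo {
    if (this.branches.has(name)) throw new Error(`branch exists: ${name}`);
    if (this.branches.size >= this.maxPreviews) {
      throw new Error("preview limit reached");
    }
    const info: BranchInfo = { name, status: "active" };
    this.branches.set(name, info);
    return info;
  }

  list(): BranchInfo[] {
    return [...this.branches.values()];
  }

  sleep(name: string): void {
    const branch = this.branches.get(name);
    if (branch) branch.status = "sleeping";
  }
}
```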
- Add bb branch create <name> to create preview environments
- Add bb branch list to list all preview environments
- Add bb branch delete <name> to delete preview environments
- Add bb branch sleep <name> to pause preview environments
- Add bb branch wake <name> to resume preview environments
- Add generateVectorSearchResolver function for creating GraphQL vector search resolvers
- Support both vector embedding search and text-to-embedding search
- Integrate with existing resolver patterns and GraphQL context
- Add GraphQL resolvers tests
- Add GraphQL schema generator tests
- Add GraphQL SDL exporter tests
- Add GraphQL server tests
- Add RLS auth bridge tests
- Add RLS evaluator tests
- Add RLS generator tests
- Add RLS scanner tests
- Add RLS types tests
- Add storage policy engine tests
- Add storage S3 adapter tests
- Add storage types tests
- Add storage core tests
- Update main feature documentation for T-14 (Vector Search) and T-15 (Branching)
- Add test results documentation
- Fix CLI auth command test timeout
Removes deprecated internal task documents (BetterBase_Core_Tasks.docx.md, betterbase_backend_rebuild.md, betterbase_real_world_project_creation.md, betterbase_test_suite_v3.md) that served as development planning artifacts. Updates CODEBASE_MAP.md and README.md to reflect current feature set including Vector Search, Branching/Preview Environments, Auto-REST, Magic Link Auth, MFA, and Phone Auth capabilities.
This is a pretty massive PR; I have no idea why CodeRabbit did not start reviewing it. Anyway, it might be because the lint/test/typecheck did not pass.

Okay, it started late. CodeRabbit is on now.

Ugh, that will take forever to finish.
- Update CODEBASE_MAP.md with latest codebase structure
- Update README.md with new information
- Add core task issues.md tracking document
- Remove outdated new update March 7th 2026.md file

- Update database branching with improved connection handling
- Enhance branching index with new features
- Improve storage branching implementation
- Add new types for branching operations

- Enhance Neon provider with new features
- Fix issues in PlanetScale provider
- Improve PostgreSQL provider implementation
- Update Supabase provider with new configurations
- Fix and improve Turso provider

- Significantly improve embeddings implementation
- Update vector search with better algorithms
- Add new vector types and interfaces
- Enhance vector index functionality

- Fix issues in S3 adapter for storage operations
- Improve RLS evaluator with better policy handling

- Major update to auto-rest functionality with new endpoints
- Update schema configuration with new options

- Fix issues in CLI main index file
- Update client authentication implementation

- Update branching tests
- Fix graphql-resolvers tests
- Fix graphql-sdl-exporter tests
- Fix rls-evaluator tests
- Fix rls-scanner tests
- Fix storage-s3-adapter tests

- Enhance base template index with new features
- Add realtime functionality to base template
…roperking/Betterbase into feature/core-tasks-march-2026
Delete outdated task tracking and PR documentation files that are no longer needed:
- bb dev hot reload documentation
- Error message improvement documentation
- PR #31 changes documentation
- Core task issues documentation
- March 2026 update documentation
Actionable comments posted: 6
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
♻️ Duplicate comments (16)
README.md (2)
986-990: ⚠️ Potential issue | 🟠 Major: Storage env docs still conflict with the config example.
This block uses `STORAGE_*` vars (Lines 986-990), but the config snippet still references `process.env.S3_BUCKET`, `process.env.AWS_ACCESS_KEY_ID`, and `process.env.AWS_SECRET_ACCESS_KEY` (Lines 929-933). Readers will misconfigure unless you unify the names or explicitly document aliases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 986 - 990, The README shows conflicting environment variable names: the storage example uses STORAGE_PROVIDER/REGION/ACCESS_KEY_ID/SECRET_ACCESS_KEY/BUCKET while the config snippet reads process.env.S3_BUCKET, process.env.AWS_ACCESS_KEY_ID and process.env.AWS_SECRET_ACCESS_KEY; update the docs so the variable names are consistent—either change the example to use AWS-prefixed vars (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, S3_BUCKET) or update the config snippet to read process.env.STORAGE_BUCKET, process.env.STORAGE_ACCESS_KEY_ID and process.env.STORAGE_SECRET_ACCESS_KEY, and add a brief note listing the supported aliases if both names should be accepted (mention the symbols process.env.S3_BUCKET, process.env.AWS_ACCESS_KEY_ID, process.env.AWS_SECRET_ACCESS_KEY and STORAGE_BUCKET, STORAGE_ACCESS_KEY_ID, STORAGE_SECRET_ACCESS_KEY).
806-832: ⚠️ Potential issue | 🟠 Major: Route prefixes are still inconsistent with earlier docs.
Line 828 documents Auto-REST under `/api/:table`, but Line 195 still says REST is `/rest/v1/*`, and there's no mapping note. Please either standardize on one canonical prefix or add an explicit "gateway vs direct" mapping to avoid 404-prone setups.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 806 - 832, Documentation shows conflicting REST prefixes: Auto-REST examples use "/api/:table" while earlier docs reference "/rest/v1/*"; update README to either standardize to one canonical prefix (e.g., change all Auto-REST examples and endpoint tables to use "/rest/v1/*" or vice versa) or add a clear "Gateway vs Direct REST" mapping section that explains the two routes and how they map (e.g., "/api/* -> gateway proxy" vs "/rest/v1/* -> direct DB rest endpoint"), and update the Method/Endpoint tables (symbols: `/api/:table`, `/rest/v1/*`, `/api/auth/*`, "Auto-REST") and any examples so they consistently reflect the chosen approach.

packages/core/test/graphql-sdl-exporter.test.ts (1)
51-59: ⚠️ Potential issue | 🔴 Critical: Switch SDL test expectations from plural to singular names.

These assertions are currently mismatched with the generated SDL (e.g., `User`, `createUser`, `CreateUserInput`), which is why CI is failing at Lines 56/65/75/188 and why `exportTypeSDL` lookups fail for plural type names.

🐛 Proposed fix
 test("should include Mutation type in SDL", () => {
   const schema = createTestSchema();
   const sdl = exportSDL(schema);
   expect(sdl).toContain("type Mutation");
-  expect(sdl).toContain("createUsers");
-  expect(sdl).toContain("updateUsers");
-  expect(sdl).toContain("deleteUsers");
+  expect(sdl).toContain("createUser");
+  expect(sdl).toContain("updateUser");
+  expect(sdl).toContain("deleteUser");
 });

 test("should include Object types in SDL", () => {
   const schema = createTestSchema();
   const sdl = exportSDL(schema);
-  expect(sdl).toContain("type Users");
+  expect(sdl).toContain("type User");
   expect(sdl).toContain("id");
   expect(sdl).toContain("name");
   expect(sdl).toContain("email");
 });

 test("should include Input types in SDL", () => {
   const schema = createTestSchema();
   const sdl = exportSDL(schema);
-  expect(sdl).toContain("input CreateUsersInput");
-  expect(sdl).toContain("input UpdateUsersInput");
-  expect(sdl).toContain("input UsersWhereInput");
+  expect(sdl).toContain("input CreateUserInput");
+  expect(sdl).toContain("input UpdateUserInput");
+  expect(sdl).toContain("input UserWhereInput");
 });
@@
 describe("exportTypeSDL", () => {
   test("should export specific Object type", () => {
     const schema = createTestSchema();
-    // The type name is pluralized (Users, not User)
-    const typeSdl = exportTypeSDL(schema, "Users");
+    // The type name is singular (User, not Users)
+    const typeSdl = exportTypeSDL(schema, "User");
     expect(typeSdl).toBeDefined();
-    expect(typeSdl).toContain("type Users");
+    expect(typeSdl).toContain("type User");
     expect(typeSdl).toContain("id");
   });

   test("should export specific Input type", () => {
     const schema = createTestSchema();
     // Export the Input type and verify it contains the expected SDL
-    const typeSdl = exportTypeSDL(schema, "CreateUsersInput");
+    const typeSdl = exportTypeSDL(schema, "CreateUserInput");
     expect(typeSdl).toBeDefined();
-    expect(typeSdl).toContain("input CreateUsersInput");
+    expect(typeSdl).toContain("input CreateUserInput");
     expect(typeSdl).toContain("name");
     expect(typeSdl).toContain("email");
   });
@@
   test("should respect includeDescriptions option", () => {
     const schema = createTestSchema();
-    const typeSdl = exportTypeSDL(schema, "Users", { includeDescriptions: true });
+    const typeSdl = exportTypeSDL(schema, "User", { includeDescriptions: true });
     expect(typeSdl).toBeDefined();
   });
@@
 describe("SDL output validation", () => {
   test("should produce valid SDL syntax", () => {
     const schema = createTestSchema();
     const sdl = exportSDL(schema);
     // Check for basic SDL structure
     expect(sdl).toMatch(/type Query \{/);
     expect(sdl).toMatch(/type Mutation \{/);
-    expect(sdl).toMatch(/type Users \{/);
+    expect(sdl).toMatch(/type User \{/);
   });

Also applies to: 61-69, 71-78, 128-136, 139-146, 158-161, 185-189
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/graphql-sdl-exporter.test.ts` around lines 51 - 59, The test expectations use plural names but the generated SDL uses singular names; update assertions in graphql-sdl-exporter.test.ts to expect singular types and field names (e.g., change "createUsers"/"updateUsers"/"deleteUsers" to "createUser"/"updateUser"/"deleteUser", and "Users"/"User" where applicable) when asserting against exportSDL(exported by exportSDL) and any exportTypeSDL lookups; locate the assertions around createTestSchema(), exportSDL(), and any exportTypeSDL(...) calls and replace plural identifiers with their singular counterparts throughout the file (also mirror the same changes in the other failing blocks noted: 61-69, 71-78, 128-136, 139-146, 158-161, 185-189).

packages/core/src/providers/neon.ts (1)
58-85: ⚠️ Potential issue | 🔴 Critical: CDC still never reaches registered callbacks.

`pollForChanges()` only sends a synthetic `pg_notify` payload; nothing in this flow consumes notifications or calls `NeonConnection._notifyChange()`. As written, `onchange()` can register callbacks, but this class never dispatches a real `DBEvent` to them.

Also applies to: 108-116
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/providers/neon.ts` around lines 58 - 85, The polling loop in pollForChanges is only sending synthetic pg_notify payloads and never dispatches real DBEvent notifications to registered onchange callbacks; modify pollForChanges (or the notify handling path on notifyConnection) to actually listen for notifications from the connection and call NeonConnection._notifyChange(event) whenever a notification is received (or convert the synthetic payload into a DBEvent and pass it to _notifyChange), ensuring registered callbacks via onchange are invoked; locate pollForChanges, NeonConnection._notifyChange, onchange, and the notifyConnection usage and wire the notify listener -> parse payload -> call _notifyChange(path) flow so events reach subscribers.

packages/core/src/rls/evaluator.ts (1)
33-60: ⚠️ Potential issue | 🔴 Critical: Anchor the supported policy regexes before evaluating access.

`match()` only needs a prefix here, so an expression like `auth.uid() = user_id AND is_active = true` is treated as a supported uid policy and can allow rows the full expression should deny. Trim the input and anchor both regexes so anything outside the tiny grammar falls through to the existing deny path.

🐛 Minimal fail-closed fix
+  const expr = policyExpression.trim();
-  if (policyExpression === "true") {
+  if (expr === "true") {
     return true;
   }
-  if (policyExpression === "false") {
+  if (expr === "false") {
     return false;
   }
@@
-  const uidMatch = policyExpression.match(/auth\.uid\(\)\s*=\s*(\w+)/);
+  const uidMatch = expr.match(/^auth\.uid\(\)\s*=\s*(\w+)$/);
@@
-  const roleMatch = policyExpression.match(/auth\.role\(\)\s*=\s*'([^']+)'/);
+  const roleMatch = expr.match(/^auth\.role\(\)\s*=\s*'([^']+)'$/);

#!/bin/bash
node - <<'NODE'
const expr = "auth.uid() = user_id AND is_active = true";
console.log(expr.match(/auth\.uid\(\)\s*=\s*(\w+)/)?.[0] ?? "no match");
NODE

Expected output: `auth.uid() = user_id`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/rls/evaluator.ts` around lines 33 - 60, Trim policyExpression before evaluating and change the uid and role regex checks to require a full-match anchor so only exact supported policies are accepted: replace the current uidMatch = policyExpression.match(/auth\.uid\(\)\s*=\s*(\w+)/) with something like policyExpression.trim().match(/^auth\.uid\(\)\s*=\s*(\w+)$/) and similarly replace the roleMatch regex with /^auth\.role\(\)\s*=\s*'([^']+)'$/; keep these checks in the same location (the evaluator logic that handles uid and role comparisons) so anything not exactly matching falls through to deny.

packages/core/src/auto-rest.ts (1)
218-223: ⚠️ Potential issue | 🔴 Critical: Don't collapse composite primary keys to the first column.

Returning `columns[0].name` means the generated `/:id` routes target only one component of a composite key. PATCH/DELETE can then hit every row sharing that first value. Skip composite-key tables here instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/auto-rest.ts` around lines 218 - 223, The getPrimaryKey function currently returns tableMeta.primaryKey.columns[0].name which collapses composite keys; change its logic to only return a primary key name when the primaryKey.columns array has exactly one column, and return null for composite keys (i.e., when columns.length !== 1). Update the check on tableMeta?.primaryKey?.columns to enforce columns.length === 1 and return that single column name, otherwise return null so routes for composite-key tables are skipped.

packages/core/src/graphql/resolvers.ts (1)
673-678: ⚠️ Potential issue | 🟠 Major: Apply `defaultOptions.threshold` in both resolver branches.

`limit` and `metric` honor configured defaults, but `threshold` still only reads `args`. That makes `defaultOptions` partially ignored and leaves the earlier behavior gap unresolved.

Possible fix
-  const threshold = args.threshold as number | undefined;
+  const threshold =
+    (args.threshold as number | undefined) ?? config.defaultOptions?.threshold;
@@
-  const threshold = args.threshold as number | undefined;
+  const threshold =
+    (args.threshold as number | undefined) ?? config.defaultOptions?.threshold;

Also applies to: 714-719
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/graphql/resolvers.ts` around lines 673 - 678, The threshold variable currently only reads args.threshold and ignores config.defaultOptions, so update the resolver branches where threshold is derived (the assignments that set limit/metric/threshold) to compute threshold similarly to limit and metric: use (args.threshold as number | undefined) ?? config.defaultOptions?.threshold ?? undefined (or a sensible fallback), ensuring both resolver branches reference config.defaultOptions?.threshold when args.threshold is not provided; update the variable named threshold in the resolver functions to follow this pattern so defaultOptions.threshold is honored everywhere.

packages/core/src/branching/index.ts (1)
34-38: ⚠️ Potential issue | 🔴 Critical: Move `branchStore` off the module scope.

Every `BranchManager` instance shares the same map, so separate projects in the same process bleed into each other's preview limits and list/delete/sleep/wake operations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/index.ts` around lines 34 - 38, The module-level Map named branchStore is shared across all BranchManager instances; remove the top-level const branchStore and instead add a per-instance storage (e.g., this.branchStore: Map<string, BranchConfig>) on the BranchManager class or accept a Map via the BranchManager constructor, then update all references in methods (list/delete/sleep/wake/create/etc.) to use this.branchStore; ensure the Map is typed as Map<string, BranchConfig> and initialized in the BranchManager constructor so each instance gets its own independent store.

packages/core/src/vector/search.ts (4)
160-166: ⚠️ Potential issue | 🔴 Critical: Use Drizzle's `eq()` helper instead of calling `.eq()` on columns.

`PgColumn` does not expose an instance `eq()` API here, so Line 165 either fails at runtime or relies on `any` to bypass typing. This should build conditions with Drizzle's comparison helpers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/search.ts` around lines 160 - 166, The code currently calls an instance .eq() on a PgColumn (via (column as any).eq(value)), which isn't exposed; replace that with Drizzle's comparison helper by importing eq from drizzle-orm and build conditions using eq(column, value) instead of (column as any).eq(value) within the map over Object.entries(filter) (references: variables: conditions, filter, table.columns and the instance .eq call). Ensure the eq helper is imported and used for all column comparisons so typings and runtime behavior are correct.
68-73: ⚠️ Potential issue | 🔴 Critical: The vector literal is emitted as a tuple, not a pgvector literal.

Lines 70-73 generate `($1::float8, $2::float8, ...)::vector`, which PostgreSQL treats as a record constructor. pgvector expects array-literal syntax like `[1,2,3]::vector`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/search.ts` around lines 68 - 73, The current return in search.ts builds a tuple literal `(${sql.join(...)})::vector` which Postgres treats as a record; change it to emit a pgvector array literal using square brackets so the result is `[val1,val2,...]::vector`. Specifically, update the code that uses sql.join over queryEmbedding (the return that references column, sql.raw(operator), and sql.join) to wrap the joined values with `[` and `]` instead of `(` and `)`, preserving the per-value cast to `::float8` so the final expression becomes an array-literal cast to `::vector`.
283-316: ⚠️ Potential issue | 🔴 Critical: `createVectorIndex()` emits executable DDL from unvalidated input.

Lines 306-315 splice table/column names and numeric options straight into SQL. Invalid identifiers or out-of-range `lists`/`connections` values can break migrations or allow injection if this helper is fed external input.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/search.ts` around lines 283 - 316, createVectorIndex currently interpolates tableName, columnName and numeric options directly into SQL; validate and sanitize these inputs before building the DDL in createVectorIndex: ensure indexType is one of "hnsw"|"ivfflat" and metric maps to ops (reuse ops lookup), validate tableName and columnName against a strict identifier regex (e.g. /^[A-Za-z_][A-Za-z0-9_]*$/) and throw if invalid, clamp/validate numeric options (lists and connections) to safe bounds (e.g. lists between 1 and 10000, connections between 1 and 1024) and throw on out-of-range values, and only then construct the SQL using properly quoted identifiers (double-quote the validated identifiers) and the validated numeric values so untrusted input cannot break migrations or inject SQL.
243-257: ⚠️ Potential issue | 🔴 Critical: Validate or quote identifiers before building raw search SQL.

`tableName`, `vectorColumn`, and filter keys are interpolated directly into the query string on Lines 244-253. Any user-controlled value here becomes SQL injection.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/search.ts` around lines 243 - 257, The query construction in search.ts interpolates unvalidated identifiers (tableName, vectorColumn and filter keys) into the raw SQL, risking SQL injection; update the code to validate or safely quote these identifiers before interpolation and avoid placing user-controlled keys directly into the SQL. Specifically, add a validation/whitelist or use a proper identifier-quoting helper for tableName and vectorColumn (the variables used when building query) and for each filter key before appending to filterConditions/whereClause, and continue to use parameterized values in params for the filter values (as currently done) so only identifiers are sanitized while values remain bound parameters. Ensure the helper is applied wherever whereClause, tableName, or vectorColumn are used to build the final query string.

packages/core/src/branching/storage.ts (1)
47-64: ⚠️ Potential issue | 🔴 Critical: `createPreviewBucket()` never creates the bucket it returns.

Line 56 assumes S3-compatible providers create buckets on first upload, but the standard API does not. The method returns `initialized: true` on Line 63, then `copyFilesToPreview()` starts uploading into a bucket that may not exist.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/storage.ts` around lines 47 - 64, createPreviewBucket currently assumes the preview bucket exists and returns initialized: true without ensuring creation; update createPreviewBucket to verify and create the bucket before returning by using the storage client (e.g., call a head/getBucket call and if not found call the client's createBucket/createBucketIfNotExists), handle and propagate errors from that operation, and only set initialized: true after successful creation/verification; reference generatePreviewBucketName to compute the name, getPublicUrl for the returned URL, and ensure copyFilesToPreview will no longer race against a missing bucket.

packages/core/src/branching/database.ts (3)
239-254: ⚠️ Potential issue | 🔴 Critical: The row-copy SQL uses unsupported postgres.js/PostgreSQL syntax.

Line 240 references `schema:table`, and Lines 249-253 build MySQL-style `?` placeholders. Neither is valid here, so data copy will fail as soon as a table has rows.

294-300: ⚠️ Potential issue | 🔴 Critical: Sequence copy is using invalid qualified-name syntax.

Lines 295-300 again build `schema:sequence` references, and `setval()` expects a sequence name/regclass rather than stitched identifier fragments. Sequence sync will fail even if table cloning succeeds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/database.ts` around lines 294 - 300, The sequence synchronization builds an invalid qualified name by concatenating schema and sequence fragments with ':'; instead, construct and pass a single qualified sequence identifier/regclass to setval so Postgres recognizes the sequence. Change the setval call in the sequence-sync code that uses sourceDb, targetDb, schemaName and seqName to supply a proper qualified identifier (e.g. schemaName.seqName via the DB client's identifier interpolation/helper) and pass currentValue.value as the numeric argument to setval rather than the stitched fragments.
223-225: ⚠️ Potential issue | 🔴 Critical: `pg_get_tabledef()` is not portable PostgreSQL.

Line 224 depends on a function that is not present in standard PostgreSQL installations, so schema cloning fails before any tables are created unless the environment separately defines it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/database.ts` around lines 223 - 225, The code currently calls a non-standard function pg_get_tabledef via mainDb to fetch table DDL (see createTableResult, schemaName, tableName), which will fail on vanilla PostgreSQL; replace this with a portable approach—either invoke pg_dump --schema-only -t "<schema>.<table>" (spawn/exec) and use its output as the DDL, or build the DDL from standard catalog queries (pg_catalog.pg_class/pg_attribute/pg_constraint/pg_indexes) as a fallback; update the logic that sets createTableResult to use the chosen portable method and ensure error handling/logging remains consistent.
🟠 Major comments (24)
packages/client/src/auth.ts-306-310 (1)
306-310: ⚠️ Potential issue | 🟠 Major: POST requests may fail without an explicit `Content-Type: application/json` header.

All new `fetch`-based POST methods rely on `this._headers` including `Content-Type`. If the caller doesn't provide it, JSON bodies may not be parsed correctly by the server. This affects `sendMagicLink`, `sendOtp`, `verifyOtp`, and all MFA/phone methods.

🔧 Proposed fix: Ensure Content-Type is set for JSON requests
   const response = await this.fetchImpl(`${this.url}/api/auth/magic-link/send`, {
     method: "POST",
-    headers: this._headers,
+    headers: {
+      ...this._headers,
+      "Content-Type": "application/json",
+    },
     body: JSON.stringify({ email }),
   });

Apply the same pattern to all other POST requests (`sendOtp`, `verifyOtp`, `mfaEnable`, `mfaVerify`, `mfaDisable`, `mfaChallenge`, `sendPhoneOtp`, `verifyPhoneOtp`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/client/src/auth.ts` around lines 306 - 310, The POST calls building JSON bodies (e.g., in sendMagicLink) rely on this._headers but may lack a Content-Type; update each POST method (sendMagicLink, sendOtp, verifyOtp, mfaEnable, mfaVerify, mfaDisable, mfaChallenge, sendPhoneOtp, verifyPhoneOtp) to ensure headers include "Content-Type": "application/json" before calling this.fetchImpl, by merging/overriding this._headers with the Content-Type entry so JSON is parsed correctly by the server.

packages/core/src/providers/turso.ts-36-49 (1)
36-49: ⚠️ Potential issue | 🟠 Major: Common valid SQL forms bypass CDC entirely.

`extractTableName()` only matches bare identifiers, so valid statements like `insert into "users"` or `update main.users ...` return `null`. Because Line 97 requires a truthy table name, those writes never emit a `DBEvent`.

Also applies to: 97-97
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/providers/turso.ts` around lines 36 - 49, extractTableName currently only matches unquoted bare identifiers so statements like insert into "users" or update main.users return null and CDC never emits DBEvent; update extractTableName to accept quoted identifiers and schema-qualified names (double quotes, backticks, square brackets and dotted schema.table forms), capture the table portion (last identifier) and return it after unquoting/stripping surrounding quotes/brackets; keep behavior for INSERT, UPDATE, DELETE and ensure callers that expect a truthy table name (used when creating DBEvent) receive the normalized table name.

packages/core/src/providers/neon.ts-29-33 (1)
29-33: ⚠️ Potential issue | 🟠 Major

Reuse the configured `connectionString` for the CDC connection. `getConnectionString()` ignores the constructor argument and falls back to `process.env.DATABASE_URL`. That can point CDC at the wrong database—or an empty URL—whenever the caller passes a different config.

💡 Proposed fix

```diff
 class NeonConnection implements NeonDatabaseConnection {
   readonly provider = "neon" as const;
   readonly neon: NeonClient;
   // Store the drizzle-compatible client for use with drizzle-orm
   readonly drizzle: NeonClient;
+  private readonly _connectionString: string;
   private _isConnected = false;
   private _changeCallbacks: ((event: DBEvent) => void)[] = [];
   private _listening = false;

   constructor(connectionString: string) {
+    this._connectionString = connectionString;
     this.neon = neon(connectionString);
     this.drizzle = this.neon;
     this._isConnected = true;
@@
   private getConnectionString(): string {
-    return process.env.DATABASE_URL || "";
+    return this._connectionString;
   }
```

Also applies to: 48-48, 98-103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/providers/neon.ts` around lines 29 - 33, The getConnectionString logic is ignoring the constructor's connectionString and falling back to process.env.DATABASE_URL, so update getConnectionString (and any places that build the CDC client) to prefer the instance-stored constructor value used to initialize this.neon (from the NeonProvider constructor) before falling back to environment variables; ensure methods like the constructor, getConnectionString, and any CDC connection code (the code paths referenced around lines 48 and 98-103 that create the CDC client) use the same stored connectionString property so CDC connects to the same database the provider was initialized with.

packages/core/src/providers/turso.ts-113-119 (1)
113-119: ⚠️ Potential issue | 🟠 Major

Don't log full row payloads on callback failures. `event.record` can contain application data, so logging the entire `event` object here risks leaking PII into server logs. Log stable metadata only.

🛡️ Proposed fix

```diff
-  console.error("[CDC] Callback error:", callbackError, "Event:", event);
+  console.error("[CDC] Callback error:", callbackError, {
+    table: event.table,
+    type: event.type,
+    timestamp: event.timestamp,
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/providers/turso.ts` around lines 113 - 119, The catch block is currently logging the entire event (including event.record which may contain PII) when a callback in this._changeCallbacks fails; change the console.error call inside the try/catch that surrounds callback(event) to only log stable metadata (e.g., event.type, event.table or event.tableName, event.key/id, event.timestamp) and the callbackError, and do not serialize or include event.record or the full event object to avoid leaking application data.

packages/core/src/providers/turso.ts-97-121 (1)
97-121: ⚠️ Potential issue | 🟠 Major

CDC events are not emitted for writes without a RETURNING clause. The event dispatch is nested under `for (const record of records)`, making it depend on returned rows rather than successful execution. For INSERT, UPDATE, and DELETE operations without a RETURNING clause, `result.rows` will be empty while `rowsAffected` is populated—so successful writes will be invisible to CDC. Move the callback dispatch outside the record loop and emit events based on `result.rowsAffected` instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/providers/turso.ts` around lines 97 - 121, The CDC dispatch currently occurs inside the for (const record of records) loop and thus only emits events when result.rows is populated; move the callback dispatch out of that loop in the method that processes writes (where tableName, operation, result, and this._changeCallbacks are available) so you emit events even when no RETURNING rows exist by using result.rowsAffected to detect successful writes. Keep the existing per-row behavior when result.rows is non-empty (creating DBEvent with record populated), but when result.rows is empty and result.rowsAffected > 0 produce events (using DBEvent, eventType, tableName, timestamp) with record and/or old_record set to undefined (or null) and call each callback from this._changeCallbacks for each affected row (or at least once per operation) wrapped in the same try/catch error logging.

packages/core/test/rls-evaluator.test.ts-188-204 (1)
188-204: ⚠️ Potential issue | 🟠 Major

This case is passing mixed-table policies into a table-scoped API. `applyRLSSelect()` is documented to receive policies "for the table", but this test feeds `"posts"` and `"other"` together and then expects `"posts"` to win. That's why Line 203 currently gets 2 rows. Either pre-filter to `"posts"` here, or extend the evaluator API to accept a table name and scope internally.

✅ Minimal fix for the current contract

```diff
-    const policy2 = definePolicy("other", {
-      select: "true",
-    });
-
-    const result = applyRLSSelect(rows, [policy1, policy2], "user-123");
+    const result = applyRLSSelect(rows, [policy1], "user-123");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/rls-evaluator.test.ts` around lines 188 - 204, The test is passing mixed-table policies into applyRLSSelect which expects policies scoped to a single table; fix by ensuring only policies for the target table are passed or by extending applyRLSSelect to accept a table name. Minimal fix: in this test, filter the policies array (policy1, policy2) to only include policies whose table equals "posts" before calling applyRLSSelect; alternative fix: add an overload/param to applyRLSSelect to accept a tableName and have it internally ignore policies not matching that table. Reference: applyRLSSelect, definePolicy, and the test case that constructs policy1/policy2.

packages/core/test/storage-s3-adapter.test.ts-57-60 (1)
57-60: ⚠️ Potential issue | 🟠 Major

Update these URL assertions to match the adapter's key-encoding contract. The suite is currently red because `getPublicUrl()` is returning percent-encoded object keys, so the expectations on Lines 59, 104, and 120 no longer match (`/` is emitted as `%2F`). Pin one behavior here and assert it consistently across all three cases.

Also applies to: 102-120
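If the intended contract is instead to keep `/` separators readable while still escaping other reserved characters, one common approach is per-segment encoding. The sketch below is illustrative only — the `getPublicUrl` endpoint/bucket layout shown is an assumption, not the adapter's actual code:

```typescript
// Hypothetical helper: percent-encode each path segment of an object key while
// preserving "/" separators, so "path/to/my file.txt" stays slash-delimited.
function encodeObjectKey(key: string): string {
  return key.split("/").map(encodeURIComponent).join("/");
}

// Sketch of how a public-URL builder could use it (layout assumed).
function getPublicUrl(endpoint: string, bucket: string, key: string): string {
  return `${endpoint}/${bucket}/${encodeObjectKey(key)}`;
}
```

Whichever behavior is chosen, pinning it in one helper keeps the three test cases asserting the same contract.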
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/storage-s3-adapter.test.ts` around lines 57 - 60, Tests currently expect raw slashes in object keys but adapter.getPublicUrl() returns percent-encoded keys; update the three assertions that compare returned URLs to use percent-encoded object keys (e.g., replace "/" with "%2F" in the expected URLs) so they match adapter.getPublicUrl("my-bucket", "path/to/file.txt") and the other two cases; keep the assertions consistent across all tests that call adapter.getPublicUrl.

packages/core/src/config/schema.ts-87-91 (1)
87-91: ⚠️ Potential issue | 🟠 Major

Make Auto-REST opt-in instead of default-on. `templates/base/src/index.ts` reads `config.autoRest?.enabled ?? true`, so leaving this block out now mounts a new CRUD surface automatically after upgrade. That is a backwards-compat/security surprise for existing apps; default this to `false` or require an explicit `autoRest` block.

Possible fix

```diff
 autoRest: z
   .object({
-    enabled: z.boolean().default(true),
+    enabled: z.boolean().default(false),
     excludeTables: z.array(z.string()).default([]),
   })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/config/schema.ts` around lines 87 - 91, The autoRest schema currently defaults autoRest.enabled to true which causes Auto-REST to be mounted by default; update the zod schema for the autoRest object (the autoRest: z.object({...}) block with enabled: z.boolean().default(true)) to make Auto-REST opt-in by changing the default to false (enabled: z.boolean().default(false)) or alternatively require callers to provide an explicit autoRest object (remove the .default and make the field optional/required accordingly) so that templates/base check (config.autoRest?.enabled ?? true) no longer enables Auto-REST implicitly.

packages/core/src/graphql/resolvers.ts-726-730 (1)
726-730: ⚠️ Potential issue | 🟠 Major

Don't replace the validated `text` arg with `args[textColumn]`. `searchByText` already requires `args.text`, but when `config.textColumn` is set it ignores that value and looks for another GraphQL arg instead. With the example config (`textColumn: "content"`), normal `text` queries will always fail on Line 729.

Possible fix

```diff
-  // Use textColumn if specified, otherwise use the text directly
-  const textToEmbed = config.textColumn ? (args[config.textColumn] as string) : text;
-  if (!textToEmbed) {
-    throw new Error(`textColumn "${config.textColumn}" not found in args`);
-  }
+  const textToEmbed = text;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/graphql/resolvers.ts` around lines 726 - 730, The code replaces the validated text argument with args[config.textColumn], causing valid text queries to fail; change the selection logic for textToEmbed so it prefers the validated text (variable text) first, then falls back to args[config.textColumn] only if text is empty and config.textColumn is set, and only throw an error if neither is available; update the error message to reference both the missing text and the configured textColumn.

packages/core/src/vector/index.ts-69-79 (1)
69-79: ⚠️ Potential issue | 🟠 Major

`vectorColumn()` accepts `nullable` and `default` but doesn't apply them. The function signature (lines 72-75) documents and accepts both options, yet the builder returned (lines 77-79) only passes `dimensions` to Drizzle's `vector()` function. Drizzle's vector column builder supports `.nullable()` and `.default()` method chaining, so either wire these through or remove them from the public API to prevent callers from expecting functionality that won't be applied.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/index.ts` around lines 69 - 79, vectorColumn accepts nullable and default in its config but currently only forwards dimensions to Drizzle's vector() builder; update the function (vectorColumn) to apply config.nullable and config.default to the returned builder (call .nullable() when config.nullable is true and .default(config.default) when config.default is provided) before returning, so callers get the expected behavior, or alternatively remove those options from the vectorColumn signature if you intend not to support them.

packages/cli/src/commands/branch.ts-28-34 (1)
28-34: ⚠️ Potential issue | 🟠 Major

Convert the config path to a file URL before dynamic import. `import(configPath)` fails on Windows because Node.js ESM does not support raw absolute filesystem paths; it requires a `file://` URL. Converting `configPath` with `pathToFileURL()` makes this cross-platform compatible and prevents branch commands from failing to load `betterbase.config.ts` on Windows.

Suggested fix

```diff
 import { readFile } from "node:fs/promises";
 import { resolve } from "node:path";
+import { pathToFileURL } from "node:url";
@@
   try {
-    const configContent = await readFile(configPath, "utf-8");
+    await readFile(configPath, "utf-8");
     // Extract the config object from the file
-    const configModule = await import(configPath);
+    const configModule = await import(pathToFileURL(configPath).href);
     return configModule.default || configModule.config || null;
   } catch {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/branch.ts` around lines 28 - 34, loadConfig currently calls import(configPath) which fails on Windows because Node ESM requires a file:// URL; update loadConfig to convert configPath to a file URL using pathToFileURL from the 'url' module (referencing loadConfig, configPath, CONFIG_FILE_NAME, BetterBaseConfig) and pass that URL (or its href) to import. Also add the pathToFileURL import at top of the module so dynamic imports of betterbase.config.ts work cross-platform.

packages/core/src/auto-rest.ts-283-287 (1)
283-287: ⚠️ Potential issue | 🟠 Major

Fix row counting logic to properly count table rows and respect RLS filters. The current implementation `select({ count: () => 0 }).limit(1)` does not count rows. The incorrect syntax `count: () => 0` is a function literal that returns 0, and `.limit(1)` causes `countResult.length` to be 0 or 1 regardless of table size. This breaks pagination metadata on line 296.

Additionally, the count query does not apply RLS filtering. When RLS is enabled (lines 274-277 filter the rows query), the total count should also be filtered to match only the user's accessible rows, not the unfiltered table size.

Use Drizzle's proper count function: `import { count } from 'drizzle-orm'` and apply RLS filtering to the count query when enabled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/auto-rest.ts` around lines 283 - 287, Replace the bogus row-count logic with a proper COUNT using Drizzle: import { count } from 'drizzle-orm', run a count query like db.select({ total: count() }).from(table) (no .limit) and extract total as a number (e.g. Number(countResult[0]?.total || 0)); additionally apply the same RLS filtering used for the rows query (the filter built/applied around lines 274-277) to this count query so the total reflects only the accessible rows for the user. Ensure you reference the same db and table variables and remove use of count: () => 0 and countResult.length.

packages/core/src/auto-rest.ts-98-112 (1)
98-112: ⚠️ Potential issue | 🟠 Major

Replace internal `table.config.columns` access with Drizzle's supported `getTableConfig()` API. `table.config.columns` is an undocumented internal API with no stability guarantee. If column discovery fails silently, `sanitizeInputBody()` will filter out all user-supplied fields, causing POST/PATCH to accept requests but strip all input. Drizzle 0.44.5 provides `getTableConfig(table)` as the supported alternative, though it requires dialect-specific imports. Consider either detecting the dialect at runtime, making columns explicit via configuration, or documenting the internal API dependency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/auto-rest.ts` around lines 98 - 112, The getTableColumns function currently reads the undocumented internal table.config.columns which can break sanitizeInputBody and strip all input; replace that access with Drizzle's supported getTableConfig(table) API (used in getTableColumns) or otherwise obtain columns explicitly; update getTableColumns to call getTableConfig(table) to read the columns (or fallback to a provided explicit columns config) and handle dialect-specific imports by either runtime-detecting the dialect or accepting a caller-supplied function/mapper so column discovery remains stable and documented.

packages/core/src/vector/embeddings.ts-96-104 (1)
96-104: ⚠️ Potential issue | 🟠 Major

`timeout` is dropped by the config normalizer. `createEmbeddingConfig()` never copies `config.timeout`, but both `generateEmbedding()` and `generateEmbeddings()` normalize before constructing the provider. Any caller-supplied timeout is silently lost before it reaches `OpenAIEmbeddingProvider` or `CohereEmbeddingProvider`.

Also applies to: 495-514
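The fix amounts to carrying the field through the normalizer. A sketch of that shape, where the config field names and defaults are assumptions based on the discussion above, not the package's real definitions:

```typescript
// Hypothetical input type mirroring the review discussion.
interface EmbeddingConfigInput {
  provider: string;
  model?: string;
  dimensions?: number;
  timeout?: number; // milliseconds
}

const PROVIDER_DEFAULT_TIMEOUT_MS = 30_000; // assumed default

// Normalizer that preserves the caller's timeout instead of dropping it.
function createEmbeddingConfig(config: EmbeddingConfigInput): Required<EmbeddingConfigInput> {
  return {
    provider: config.provider,
    model: config.model ?? "text-embedding-3-small",
    dimensions: config.dimensions ?? 1536,
    timeout: config.timeout ?? PROVIDER_DEFAULT_TIMEOUT_MS,
  };
}
```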
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/embeddings.ts` around lines 96 - 104, createEmbeddingConfig currently drops config.timeout so caller timeouts never reach providers; update createEmbeddingConfig (and the equivalent normalizer at lines ~495-514) to copy timeout from the input config into the returned EmbeddingConfig object (e.g., include timeout: config.timeout || providerDefaults.timeout), ensuring generateEmbedding() and generateEmbeddings() pass the timeout through to OpenAIEmbeddingProvider and CohereEmbeddingProvider; reference createEmbeddingConfig, generateEmbedding, generateEmbeddings, OpenAIEmbeddingProvider and CohereEmbeddingProvider when making the change.

packages/core/src/branching/index.ts-401-415 (1)
401-415: ⚠️ Potential issue | 🟠 Major

`getPreviewEnvironment()` returns fabricated database metadata. Lines 412-414 hardcode `provider: "postgres"` and an empty database name. Callers get incorrect connection metadata for Neon/Supabase/managed previews, and even postgres previews never include the actual DB name.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/index.ts` around lines 401 - 415, getPreviewEnvironment currently returns fabricated DB metadata (it hardcodes provider: "postgres" and an empty database name), which yields incorrect connection info; update getPreviewEnvironment to derive real values from the branch instead of faking them: use branch.databaseConnectionString to parse and extract the actual database name and detect provider (or read provider from branch metadata if available), populate PreviewEnvironment.database.provider with the proper ProviderType value and PreviewEnvironment.database.database with the parsed DB name, and if neither can be determined return null/omit those fields rather than hardcoding; refer to the getPreviewEnvironment function, PreviewEnvironment type, and branch.databaseConnectionString fields when implementing the parsing or lookup.

packages/core/src/branching/storage.ts-171-176 (1)
171-176: ⚠️ Potential issue | 🟠 Major

Empty preview buckets are reported as missing. Line 174 equates existence with `listObjects().length > 0`, so a newly created or fully cleaned bucket returns `false`. This helper needs an explicit exists/head-bucket capability or a less misleading contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/storage.ts` around lines 171 - 176, The previewBucketExists implementation incorrectly treats empty buckets as non-existent by returning listObjects(bucketName).length > 0; change it to use an explicit existence check on the adapter (e.g. call this.mainStorageAdapter.bucketExists or headBucket) if available, and if not available implement a safe fallback: attempt an adapter head/bucket metadata call and only fall back to listObjects when metadata is unavailable, treating an empty list as "exists" rather than "missing"; update the previewBucketExists function to prefer the adapter's existence/head method and only use listObjects for compatibility, returning true for empty but present buckets and false only for adapter-not-found errors.

packages/core/src/branching/index.ts-53-77 (1)
53-77: ⚠️ Potential issue | 🟠 Major

The constructor ignores the project's branching config. Lines 54-55 always load `DEFAULT_BRANCHING_CONFIG`, and the constructor never merges `betterbaseConfig.branching`. `enabled`, `maxPreviews`, `defaultSleepTimeout`, and `storageEnabled` from user config are therefore ignored.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/index.ts` around lines 53 - 77, The constructor currently overwrites branching settings with DEFAULT_BRANCHING_CONFIG and ignores user-supplied betterbaseConfig.branching; update the constructor to merge the defaults with the provided config (e.g., set this.config = { ...DEFAULT_BRANCHING_CONFIG, ...(betterbaseConfig.branching || {}) }) so that enabled, maxPreviews, defaultSleepTimeout, and storageEnabled are honored; keep subsequent checks (like the storage initialization that reads this.config.storageEnabled and the database/storage creation that call resolveStorageAdapter/createStorageBranching) unchanged so they act on the merged this.config.

packages/core/src/vector/embeddings.ts-227-251 (1)
227-251: ⚠️ Potential issue | 🟠 Major

Timeout handling is inconsistent outside the happy path. `OpenAIEmbeddingProvider.generateBatch()` and `CohereEmbeddingProvider.generate()` never attach an `AbortController`, so those calls can hang indefinitely even though both classes expose a timeout setting.

Also applies to: 313-331
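The AbortController pattern described above can be factored into a small helper so every provider method uses it uniformly. This is a sketch under assumed names (`createTimeoutSignal`, `this.config.timeout`), not the package's actual code:

```typescript
// Hypothetical helper: produce an AbortSignal that fires after timeoutMs,
// plus a cancel() to clear the timer once the request settles.
function createTimeoutSignal(timeoutMs?: number): { signal?: AbortSignal; cancel: () => void } {
  if (!timeoutMs) return { cancel: () => {} };
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  return { signal: controller.signal, cancel: () => clearTimeout(timer) };
}

// Usage shape inside a provider method (fetch call and error mapping assumed):
//   const { signal, cancel } = createTimeoutSignal(this.config.timeout);
//   try {
//     const res = await fetch(url, { method: "POST", body, signal });
//     ...
//   } catch (err) {
//     if (err instanceof Error && err.name === "AbortError") throw new Error("embedding request timed out");
//     throw err;
//   } finally {
//     cancel();
//   }
```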
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/embeddings.ts` around lines 227 - 251, OpenAIEmbeddingProvider.generateBatch and CohereEmbeddingProvider.generate currently call fetch without an AbortController so requests can hang; modify each fetch call to create an AbortController, pass controller.signal to fetch, and if a timeout value is set on the instance (e.g., this.timeout) start a timer that calls controller.abort() after the timeout, then clear the timer after fetch resolves; update error handling to treat aborted requests consistently (inspect for DOMException/AbortError) and record/throw the timeout error as appropriate; apply the same pattern to the other fetch usage referenced (the block around lines 313-331) so all network calls respect the provider timeout settings.

packages/core/src/branching/database.ts-185-277 (1)
185-277: ⚠️ Potential issue | 🟠 Major

Tear down the preview database on partial clone failures. After `CREATE DATABASE` on Line 187, any later exception just closes the clients and rethrows. That leaves an orphaned preview DB behind, which will leak databases and eventually trip preview quotas.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/database.ts` around lines 185 - 277, A preview DB created by the initial mainDb`CREATE DATABASE ${mainDb(previewDbName)}` can be left orphaned if any error occurs during subsequent cloning (in the block that creates schemas/tables, uses previewDb, calls copySequences/copyIndexes); wrap that cloning block in a try/catch and on any error execute a cleanup DROP DATABASE IF EXISTS ${previewDbName} via the original mainDb connection (before rethrowing) and ensure previewDb.end() and mainDb.end() still run in finally; adjust logic around previewConnectionString/previewDb creation and the inner try/finally so the catch can drop the preview DB (use previewDbName and mainDb to identify the DB to remove).

packages/core/src/vector/search.ts-152-155 (1)
152-155: ⚠️ Potential issue | 🟠 Major

Threshold filtering breaks when `includeScore` is `false`. When `includeScore` is disabled, Line 154 drops `_score` from the select, but Lines 191-214 still filter on it by substituting `0`. That makes `threshold` return incorrect results even though the query itself was ordered with the real distance.

Also applies to: 188-214
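One way to express the selection rule the comment implies — always materialize the distance when a threshold must be applied, even if the caller hid the score. This sketch uses plain SQL expression strings in place of Drizzle column objects, and the names (`_score`, `distanceExpr`, `selectColumns`) are taken from the review text as assumptions:

```typescript
// Hypothetical column-selection builder for the vector search query.
function buildSelectColumns(
  baseColumns: Record<string, string>,
  distanceExpr: string,
  includeScore: boolean,
  threshold?: number,
): Record<string, string> {
  const columns = { ...baseColumns };
  // Threshold filtering needs the real distance even when the caller
  // asked to omit the score from the result shape.
  if (includeScore || threshold !== undefined) {
    columns._score = distanceExpr;
  }
  return columns;
}
```

When `includeScore` is false but a threshold was supplied, the caller can strip `_score` from the rows after filtering so the public result shape is unchanged.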
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/search.ts` around lines 152 - 155, The threshold filter logic breaks when includeScore is false because the code drops the _score column from the SELECT (controlled by includeScore/selectColumns) but subsequent threshold filtering (the block around where the code checks threshold between lines 188-214) still references _score (substituting 0), producing wrong results; fix by ensuring the distance expression is always available for filtering: either always include the _score alias in the SELECT when threshold is provided (i.e. if threshold != null force adding _score: distanceExpr to selectColumns) or change the threshold filter to use the raw distanceExpr (not the _score alias) in the WHERE/HAVING so filtering uses the same expression used for ordering; update references to includeScore, selectColumns, distanceExpr and the threshold filtering block accordingly.

packages/core/src/vector/embeddings.ts-472-483 (1)
472-483: ⚠️ Potential issue | 🟠 Major

The public API advertises providers that always throw. The file header and `DEFAULT_EMBEDDING_CONFIGS` claim HuggingFace/custom support, but Lines 478-483 throw for both. `packages/core/src/graphql/resolvers.ts:732-752` calls `generateEmbedding()` directly, so a configured `"huggingface"` or `"custom"` provider becomes a runtime 500 instead of a clear configuration error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/embeddings.ts` around lines 472 - 483, createEmbeddingProvider currently advertises "huggingface"/"custom" but always throws, causing runtime 500s when generateEmbedding is called; update createEmbeddingProvider to either return a real provider or fail with an explicit configuration error: for "huggingface" and "custom" throw a clear, descriptive Error that names the offending provider and where it surfaced (e.g., generateEmbedding), and suggest the remediation (install/implement a HuggingFaceEmbeddingProvider or extend EmbeddingProviderBase); reference createEmbeddingProvider, EmbeddingProviderBase, DEFAULT_EMBEDDING_CONFIGS and generateEmbedding so the check is easy to locate and ensure API docs/defaults match the actual supported providers.

packages/core/src/branching/database.ts-86-93 (1)
86-93: ⚠️ Potential issue | 🟠 Major

Preview DB naming exceeds PostgreSQL's identifier limit and DSN reconstruction drops connection parameters. Line 93 generates names without length validation using `preview_${sanitized}_${timestamp}`. Since `Date.now().toString(36)` produces ~9–10 characters and "preview_" adds 8 more, only ~44 characters remain for the sanitized branch name before exceeding PostgreSQL's 63-byte identifier limit. PostgreSQL truncates identifiers silently on creation, but line 140 constructs the DSN using the untruncated name, causing connection failures when branch names are long.

Additionally, `parseConnectionString` (lines 101–130) discards all URL query parameters, and `createConnectionString` (lines 138–140) reconstructs the connection string without them. This breaks connections to managed PostgreSQL services (Neon, Supabase) that require `sslmode=require` or other connection options.
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/database.ts` around lines 86 - 93, generatePreviewDatabaseName can produce identifiers longer than PostgreSQL's 63-byte limit and DSN rebuilding uses the full (untruncated) name, and parseConnectionString/createConnectionString drop URL query parameters; fix by truncating the sanitized branch portion so the final name `preview_${sanitized}_${timestamp}` fits within 63 bytes (use byte-length/UTF-8 safe truncation on sanitized branch before joining) inside generatePreviewDatabaseName, and update parseConnectionString to preserve the URL's query/search parameters (e.g., keep URL.search or parsed query object) and have createConnectionString append those preserved query parameters back into the reconstructed DSN so SSL and other service-specific options are not lost.

packages/core/src/branching/storage.ts-17-25 (1)
17-25: ⚠️ Potential issue | 🟠 Major

Add length validation to preview bucket names. S3-compatible bucket names are limited to 63 characters. The `generatePreviewBucketName` function concatenates `${mainBucket}-preview-${sanitized}-${timestamp}` without any length checks. Projects with moderately long bucket or branch names will fail to create preview buckets. For example, a 55-character main bucket produces a 93-character name, exceeding the limit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/branching/storage.ts` around lines 17 - 25, The generated preview bucket name in generatePreviewBucketName can exceed the 63-char S3 limit; update the function to calculate the available space by computing the suffix length (the literal "-preview-" plus the "-" and the timestamp), then enforce a maximum length for the sanitized branch segment so that `${mainBucket}-preview-${sanitized}-${timestamp}` is at most 63 chars; if mainBucket itself is too long, truncate mainBucket first (keeping at least a tiny prefix) and then truncate sanitized accordingly, ensuring the timestamp remains intact to preserve uniqueness (use the existing sanitized and timestamp variables and return the truncated combined string).

packages/core/src/vector/embeddings.ts-187-190 (1)
187-190: ⚠️ Potential issue | 🟠 Major

Add `dimensions` to OpenAI embedding request bodies. The OpenAI embeddings API supports the `dimensions` parameter for text-embedding-3-small and text-embedding-3-large models, but lines 187-190 and 247-250 omit it from the request body. When users configure custom dimensions, the API returns embeddings with the default dimensions (1536), causing the validation at lines 210 and 265 to fail even though the remote request succeeded.

Affected request bodies

Lines 187-190:

```ts
body: JSON.stringify({
  input: input.text,
  model: this.config.model,
}),
```

Lines 247-250:

```ts
body: JSON.stringify({
  input: batch.map((b) => b.text),
  model: this.config.model,
}),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/embeddings.ts` around lines 187 - 190, The request bodies for single and batch embeddings in packages/core/src/vector/embeddings.ts (inside the OpenAI embeddings implementation used by methods that build the body with input and model) must include the optional dimensions field so custom dimensions are honored; update both bodies that currently do JSON.stringify({ input: ..., model: this.config.model }) to include dimensions: this.config.dimensions (or omit it only if undefined) so the payload becomes JSON.stringify({ input: ..., model: this.config.model, ...(this.config.dimensions !== undefined && { dimensions: this.config.dimensions })}) for the single-input and batch-input code paths.
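The conditional-spread shape the prompt describes can be sketched as a shared body builder (a sketch only — the helper name and the config plumbing are assumptions; `dimensions` itself is a real OpenAI embeddings API parameter):

```typescript
// Hypothetical request-body builder: include `dimensions` only when configured,
// so models that don't accept the parameter still get a clean payload.
function buildEmbeddingBody(input: string | string[], model: string, dimensions?: number): string {
  return JSON.stringify({
    input,
    model,
    ...(dimensions !== undefined && { dimensions }),
  });
}
```

Using one builder for both the single-input and batch paths also keeps the two request bodies from drifting apart again.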
🟡 Minor comments (3)
README.md-568-590 (1)
568-590: ⚠️ Potential issue | 🟡 Minor

CLI command count/list is now stale after adding `bb branch`. With `bb branch` added here, "The Betterbase CLI (bb) provides 11 commands" (Line 368) and the package-architecture command list (Lines 257-258) are outdated. Please update those to keep the README internally consistent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 568 - 590, Update README references to include the newly added bb branch commands: increment the Betterbase CLI (`bb`) command count (the text saying "provides 11 commands") to reflect the new total and add the bb branch entries into the package-architecture command list so the command listing (the section that enumerates package CLI commands) includes bb branch create/delete/list/status/wake/sleep; search for the phrases "The Betterbase CLI (`bb`)" and the package-architecture command list to locate the spots to edit and ensure the README is internally consistent with the new bb branch subcommands.

packages/client/src/auth.ts-314-319 (1)
314-319: ⚠️ Potential issue | 🟡 Minor

Inconsistent error checking: missing `data.error` check. `sendMagicLink` and `sendOtp` only check `!response.ok`, while all other new methods check `!response.ok || data.error`. If the server returns HTTP 200 with an error payload, these methods will incorrectly return success.

🔧 Proposed fix

```diff
-if (!response.ok) {
+if (!response.ok || data.error) {
   return {
     data: null,
     error: new AuthError(data.error?.message ?? "Failed to send magic link", data),
   };
 }
```

Apply the same fix to `sendOtp` at line 409.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/client/src/auth.ts` around lines 314 - 319, The response handling in sendMagicLink (and similarly sendOtp) only checks !response.ok and misses cases where the server returns HTTP 200 with an error payload; update the conditional in sendMagicLink to check for (!response.ok || data.error) and return the same AuthError construction using data.error?.message when either condition is true, and apply the identical change to sendOtp so both methods mirror the other new methods' error handling.packages/core/src/vector/index.ts-129-133 (1)
129-133: ⚠️ Potential issue | 🟡 Minor

Reject non-finite numbers in SQL defaults.

This guard excludes `NaN` but still allows `Infinity` and `-Infinity`, which will generate invalid vector literals at Line 141. Use `Number.isFinite()` here.

Possible fix

```diff
-      if (typeof val !== "number" || Number.isNaN(val)) {
+      if (typeof val !== "number" || !Number.isFinite(val)) {
         throw new Error(`Invalid default value: ${val}. Default values must be numbers.`);
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/vector/index.ts` around lines 129 - 133, The current sanitize step in the options.default mapping (sanitizedDefaults) only rejects NaN but still permits Infinity/-Infinity; update the guard to reject non-finite values by using Number.isFinite instead of Number.isNaN so only real finite numbers pass (i.e., check typeof val === "number" && Number.isFinite(val)), and keep throwing the same Error with the invalid value when the check fails to prevent generating invalid vector literals later.
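The gap between the two guards can be seen directly; the standalone TypeScript snippet below uses made-up values purely for illustration:

```typescript
// Number.isNaN only rejects NaN; ±Infinity passes and would still
// produce an invalid vector literal downstream. Number.isFinite
// rejects NaN and both infinities in one check.
const candidates = [1.5, NaN, Infinity, -Infinity, 0];

const nanGuard = candidates.filter((v) => typeof v === "number" && !Number.isNaN(v));
const finiteGuard = candidates.filter((v) => typeof v === "number" && Number.isFinite(v));

console.log(nanGuard);    // [1.5, Infinity, -Infinity, 0] (Infinity slips through)
console.log(finiteGuard); // [1.5, 0]
```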
```ts
let query = db
  .select()
  .from(table)
  .where((table as any)[primaryKey].eq(id))
  .limit(1);

if (enableRLS && userId && ownerColumn) {
  // Apply per-row RLS filtering
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  query = query.where((table as any)[ownerColumn].eq(userId));
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, find and examine the auto-rest.ts file
fd auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 96

🏁 Script executed:

```shell
# Get the file size to determine if we can read it entirely
wc -l packages/core/src/auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 100

🏁 Script executed:

```shell
# Read the specific sections mentioned in the review
sed -n '320,335p' packages/core/src/auto-rest.ts
echo "---"
sed -n '440,455p' packages/core/src/auto-rest.ts
echo "---"
sed -n '495,510p' packages/core/src/auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 1530

🏁 Script executed:

```shell
# Check the broader context and the complete GET/PATCH/DELETE implementation
sed -n '310,360p' packages/core/src/auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 1362

🏁 Script executed:

```shell
# Check if there's an import of Drizzle's and() function
grep -n "import.*and\|from.*drizzle" packages/core/src/auto-rest.ts | head -20
```

Repository: weroperking/Betterbase
Length of output: 48

🏁 Script executed:

```shell
# Check the entire query building pattern to understand the logic flow
sed -n '400,470p' packages/core/src/auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 1959

🏁 Script executed:

```shell
# Check what's imported from drizzle
head -50 packages/core/src/auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 1640

🏁 Script executed:

```shell
# Search for any usage of Drizzle's and() function in the codebase
rg "and\(" packages/core/src/ -B2 -A2
```

Repository: weroperking/Betterbase
Length of output: 3267

🏁 Script executed:

```shell
# Check if there are tests for RLS functionality
fd -e test -e spec packages/core | head -20
```

Repository: weroperking/Betterbase
Length of output: 423

🏁 Script executed:

```shell
# Search for where and() is imported from in the codebase
rg "import.*\band\b|from.*drizzle" packages/core/src/ | grep -v node_modules | head -20
```

Repository: weroperking/Betterbase
Length of output: 1380

🏁 Script executed:

```shell
# See the full PATCH implementation to understand the query rebuilding logic
sed -n '400,460p' packages/core/src/auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 1739

🏁 Script executed:

```shell
# Check the DELETE section to see the full pattern
sed -n '475,530p' packages/core/src/auto-rest.ts
```

Repository: weroperking/Betterbase
Length of output: 1523
🌐 Web query:

Drizzle ORM multiple where clauses and predicate composition pattern

💡 Result:

In Drizzle ORM, multiple WHERE clauses are combined with logical operators such as `and` and `or` from `drizzle-orm`, not by chaining `.where()` calls: subsequent `.where()` calls override previous ones, even in dynamic mode.

Basic multiple conditions (AND):

```ts
import { and, eq } from "drizzle-orm";

await db.select().from(users).where(
  and(eq(users.id, 42), eq(users.name, "Dan")),
);
// Generated SQL: SELECT ... FROM users WHERE "id" = 42 AND "name" = 'Dan'
```

Basic multiple conditions (OR) use `or` similarly, e.g. `or(gt(table.column, 5), lt(table.column, 7))`.

The recommended pattern for composing predicates dynamically (e.g., optional filters) is to collect SQL conditions in an array and spread them into `and` or `or`:

```ts
import { SQL, and, eq, ilike } from "drizzle-orm";

function getUsers(filters: { name?: string; ageMin?: number }) {
  const conditions: SQL[] = [];
  if (filters.name) conditions.push(ilike(users.name, `%${filters.name}%`));
  if (filters.ageMin) conditions.push(eq(users.age, filters.ageMin)); // or gt, etc.
  return db.select().from(users).where(conditions.length ? and(...conditions) : undefined);
}
```

If no filters are set, this returns all users (no WHERE clause); Drizzle ignores `undefined` inside `and`/`or`.

Dynamic query building (`$dynamic()`) allows calling methods like `.where()` and `.orderBy()` multiple times without TypeScript errors, but `.where()` still overrides prior WHEREs; use array composition for true merging.

Relational queries (`db.query`) support nested WHERE via callbacks, e.g. `and(or(eq(users.name, 'John'), eq(users.name, 'Jane')), eq(users.age, 30))`. Official docs emphasize array-based composition for dynamic predicates over chaining.
Citations:
- 1: https://orm.drizzle.team/docs/select
- 2: https://orm.drizzle.team/docs/guides/conditional-filters-in-query
- 3: https://orm.drizzle.team/docs/operators
- 4: https://orm.drizzle.team/docs/data-querying
- 5: https://orm.drizzle.team/docs/dynamic-query-building
🏁 Script executed:

```shell
# Check how and() is being used elsewhere in the codebase to understand the pattern
sed -n '1,50p' packages/core/src/vector/search.ts | grep -A5 -B5 "and("
```

Repository: weroperking/Betterbase
Length of output: 48

🏁 Script executed:

```shell
# Look at the graphql resolvers to see another example
sed -n '1,100p' packages/core/src/graphql/resolvers.ts | grep -A5 -B5 "and("
```

Repository: weroperking/Betterbase
Length of output: 48
Use Drizzle's `and()` function to compose multiple predicates.

The RLS branches incorrectly chain conditions. In the GET handler, calling `.where()` twice causes the second call to override the first, silently dropping the primary key filter. When RLS is enabled, the query will incorrectly return all rows owned by the user instead of filtering by both the ID and owner. The PATCH and DELETE handlers use `.eq().and()`, which is not Drizzle's standard predicate composition pattern.

Import `and` from `drizzle-orm` and compose conditions as `and(eq(...), eq(...))` instead.

Also applies to: 445-449, 498-500
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/auto-rest.ts` around lines 323 - 332, The GET, PATCH and
DELETE query predicates currently call .where() or use .eq().and(), which causes
the primary key filter to be dropped when RLS is applied; import and from
"drizzle-orm" and replace chained .where/.eq().and() usage by composing
predicates with and((table as any)[primaryKey].eq(id), (table as
any)[ownerColumn].eq(userId)) so the queries (the local variable query built
from db.select().from(table) and the update/delete predicates) always include
both the id and owner predicates when enableRLS and userId/ownerColumn are
present.
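The override-versus-compose semantics can be illustrated without Drizzle itself. The sketch below is a plain-TypeScript stand-in (the `Row` shape and the `byId`/`byOwner` helpers are hypothetical, not the real query builder), showing why dropping one predicate over-permits:

```typescript
// A second .where() in Drizzle replaces the first, so only the last
// predicate applies. Composing with an AND keeps both. This mimics
// that behavior with plain row predicates.
type Row = { id: number; owner: string };
type Pred = (row: Row) => boolean;

// Mimics Drizzle's and(): every predicate must hold.
const and = (...preds: Pred[]): Pred => (row) => preds.every((p) => p(row));

const byId = (id: number): Pred => (row) => row.id === id;
const byOwner = (owner: string): Pred => (row) => row.owner === owner;

const rows: Row[] = [
  { id: 1, owner: "alice" },
  { id: 2, owner: "alice" },
  { id: 1, owner: "bob" },
];

// Buggy pattern: the later owner filter replaces the PK filter.
const overridden = rows.filter(byOwner("alice"));

// Correct pattern: both predicates enforced, like and(eq(pk, id), eq(owner, userId)).
const composed = rows.filter(and(byId(1), byOwner("alice")));

console.log(overridden.length); // 2 (PK filter silently dropped)
console.log(composed.length);   // 1 (both id and owner enforced)
```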
```ts
// Security: If owner column is specified and we have a user, auto-set it
if (ownerColumn && userId && !sanitizedBody[ownerColumn]) {
  sanitizedBody[ownerColumn] = userId;
}
```
Enforce owner-column control by key presence, not truthiness.

On create, a client can send another user's ID and keep it, because Line 382 only stamps `userId` when the field is falsy. On update, `""`/`0` values slip past Line 429 for the same reason. Always overwrite the owner on create, and always delete the key on update whenever it is present.
Possible fix

```diff
-  // Security: If owner column is specified and we have a user, auto-set it
-  if (ownerColumn && userId && !sanitizedBody[ownerColumn]) {
+  // Security: If owner column is specified and we have a user, always stamp it
+  if (ownerColumn && userId) {
     sanitizedBody[ownerColumn] = userId;
   }
@@
-  if (ownerColumn && sanitizedBody[ownerColumn]) {
+  if (ownerColumn && Object.prototype.hasOwnProperty.call(sanitizedBody, ownerColumn)) {
     delete sanitizedBody[ownerColumn];
   }
```

Also applies to: 428-430
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/auto-rest.ts` around lines 381 - 384, On create, always set
the owner column by key presence rather than truthiness: in the create handling
branch, unconditionally assign sanitizedBody[ownerColumn] = userId (no check for
falsy value) so clients cannot override owner; in the update handling branch, if
the request body contains the owner key (i.e. ownerColumn in sanitizedBody or
Object.prototype.hasOwnProperty.call(sanitizedBody, ownerColumn)), remove it
(delete sanitizedBody[ownerColumn]) to prevent updates to ownership. Update the
logic around sanitizedBody, ownerColumn, and userId accordingly (both the create
assignment and the update deletion spots referenced in the diff).
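A presence-based version of both branches can be sketched as standalone functions. `stampOwnerOnCreate` and `stripOwnerOnUpdate` are hypothetical helper names for illustration, not part of auto-rest.ts:

```typescript
type Body = Record<string, unknown>;

// On create: always stamp the owner, overwriting whatever the client sent.
function stampOwnerOnCreate(body: Body, ownerColumn: string, userId: string): Body {
  return { ...body, [ownerColumn]: userId };
}

// On update: strip the owner key whenever it is PRESENT, even if "" or 0,
// so clients cannot reassign ownership via falsy values.
function stripOwnerOnUpdate(body: Body, ownerColumn: string): Body {
  const copy = { ...body };
  if (Object.prototype.hasOwnProperty.call(copy, ownerColumn)) {
    delete copy[ownerColumn];
  }
  return copy;
}

const created = stampOwnerOnCreate({ title: "hi", user_id: "attacker" }, "user_id", "alice");
console.log(created.user_id); // "alice" (client-supplied owner overwritten)

const updated = stripOwnerOnUpdate({ title: "hi", user_id: "" }, "user_id");
console.log("user_id" in updated); // false (falsy value no longer slips through)
```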
```ts
// Get all schemas except system schemas
const schemas = await mainDb`
  SELECT schema_name
  FROM information_schema.schemata
  WHERE schema_name NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
`;

// Clone each schema
for (const schema of schemas) {
  const schemaName = schema.schema_name;

  // Create schema
  await previewDb`CREATE SCHEMA ${previewDb(schemaName)}`;
```
🧩 Analysis chain

🏁 Script executed:

```shell
wc -l packages/core/src/branching/database.ts
```

Repository: weroperking/Betterbase
Length of output: 109

🏁 Script executed:

```shell
sed -n '197,215p' packages/core/src/branching/database.ts
```

Repository: weroperking/Betterbase
Length of output: 675

🏁 Script executed:

```shell
# Check if there's any filtering of 'public' schema in the context around these lines
sed -n '190,230p' packages/core/src/branching/database.ts
```

Repository: weroperking/Betterbase
Length of output: 1373

🏁 Script executed:

```shell
# Search for any IF NOT EXISTS usage in this file
rg "IF NOT EXISTS" packages/core/src/branching/database.ts -A 2 -B 2
```

Repository: weroperking/Betterbase
Length of output: 48

🏁 Script executed:

```shell
# Check if 'public' is mentioned elsewhere in the file
rg "public" packages/core/src/branching/database.ts -i
```

Repository: weroperking/Betterbase
Length of output: 48
Skip the `public` schema in the cloning loop.

The query at lines 200-204 excludes system schemas but not `public`, which exists by default in every PostgreSQL database. When cloning any normal database, line 209 will attempt `CREATE SCHEMA public` on a fresh database that already has it, causing the operation to fail. Either add `'public'` to the `NOT IN` clause or use `CREATE SCHEMA IF NOT EXISTS`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/branching/database.ts` around lines 197 - 210, The cloning
loop queries schemas and then runs previewDb`CREATE SCHEMA
${previewDb(schemaName)}` for each schema; because the SELECT doesn't exclude
the default 'public' schema you'll attempt to create an existing schema and
fail. Fix by either adding 'public' to the NOT IN list in the schemas query (the
variable referenced as schema.schema_name) or change the creation call to use
"CREATE SCHEMA IF NOT EXISTS" when executing previewDb (the call site is
previewDb`CREATE SCHEMA ${previewDb(schemaName)}`) so existing public won't
cause an error.
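Both suggested fixes can be sketched as small helpers; `buildCreateSchemaSQL` and `shouldCloneSchema` are illustrative names, not the actual branching API:

```typescript
// Option A: make creation idempotent so a pre-existing public schema
// (or any rerun) does not abort the clone.
function buildCreateSchemaSQL(name: string): string {
  // Quote the identifier, doubling embedded quotes, and tolerate existing schemas.
  return `CREATE SCHEMA IF NOT EXISTS "${name.replace(/"/g, '""')}"`;
}

// Option B: exclude public (alongside the system schemas) from the loop.
const SKIP = new Set(["pg_catalog", "information_schema", "pg_toast", "public"]);
function shouldCloneSchema(name: string): boolean {
  return !SKIP.has(name);
}

console.log(buildCreateSchemaSQL("public")); // CREATE SCHEMA IF NOT EXISTS "public"
console.log(shouldCloneSchema("public"));    // false
console.log(shouldCloneSchema("analytics")); // true
```

Option A is the more robust of the two, since it also survives partially cloned databases on retry.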
```ts
this._listening = true;

try {
  await this.postgres.listen("db_changes", (payload: string) => {
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Expect one LISTEN/NOTIFY channel name to be used consistently by producers and consumers.
rg -n -C2 'betterbase_changes|db_changes|pg_notify|LISTEN'
```

Repository: weroperking/Betterbase
Length of output: 5375

🏁 Script executed:

```shell
#!/bin/bash
# Search for database triggers, migrations, or NOTIFY statements
# Look for SQL files, trigger definitions, and any producer of notifications
fd -e sql -e migration | head -20
rg -n "NOTIFY|trigger|CREATE TRIGGER" -i
```

Repository: weroperking/Betterbase
Length of output: 2759
Fix CDC channel name mismatch across providers.

The postgres provider now listens on `db_changes`, matching supabase at line 46, but neon still uses `betterbase_changes` (line 51) and is the only provider with a matching producer (line 64 emits to `betterbase_changes`). No code emits notifications to `db_changes`, so the CDC feature will silently fail for postgres and supabase: they will listen indefinitely without receiving any changes. Either unify all three providers on the same channel name or add producer code that emits to `db_changes`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/providers/postgres.ts` at line 45, Postgres and Supabase
are listening on "db_changes" while Neon listens/emits on "betterbase_changes",
so CDC listeners will never receive events; update the providers to use a single
consistent channel name or make the producer emit to the channel the listeners
expect. Specifically, change the channel string used in
this.postgres.listen("db_changes", ...) and the Supabase listener to match
Neon’s "betterbase_changes" (or vice versa), and/or update the producer code
that currently emits to "betterbase_changes" so it emits to "db_changes"; ensure
the channel string is the same across the Postgres provider
(this.postgres.listen), the Neon provider, and the producer emit call so all
listeners receive notifications.
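One way to prevent this drift is a single shared channel constant imported by every listener and producer. The in-memory bus below only simulates pg LISTEN/NOTIFY, and `CDC_CHANNEL` is a suggested name, not an existing export:

```typescript
// Single source of truth for the LISTEN/NOTIFY channel name.
const CDC_CHANNEL = "betterbase_changes";

type Handler = (payload: string) => void;
const channels = new Map<string, Handler[]>();

// Stand-ins for postgres.listen(...) and pg_notify(...).
function listen(channel: string, handler: Handler): void {
  channels.set(channel, [...(channels.get(channel) ?? []), handler]);
}
function notify(channel: string, payload: string): void {
  for (const h of channels.get(channel) ?? []) h(payload);
}

// Every provider subscribes to the same constant...
const received: string[] = [];
listen(CDC_CHANNEL, (p) => received.push(p));

// ...and the trigger/producer emits to it, so events actually arrive.
notify(CDC_CHANNEL, JSON.stringify({ table: "posts", event: "INSERT" }));

console.log(received.length); // 1
```

With a hardcoded string in each provider, a one-character typo reproduces exactly the silent failure the review describes.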
```ts
export function applyRLSUpdate(
  policy: string | undefined,
  userId: string | null,
  record: Record<string, unknown>,
): void {
  // If no policy, check authentication requirement
  if (!policy) {
    if (userId === null) {
      throw new UnauthorizedError("Update requires authentication");
    }
    return; // Allow authenticated users
  }

  // Evaluate the policy - use "using" or "withCheck" expression
  const policyExpr = policy;
  const allowed = evaluatePolicy(policyExpr, userId, "update", record);

  if (!allowed) {
    throw new UnauthorizedError("Update denied by RLS policy");
  }
```
`withCheck` never runs on UPDATE.

`PolicyDefinition.withCheck` is part of the UPDATE DSL, but the update path only passes `policy?.update || policy?.using` into `applyRLSUpdate()`. A policy like `definePolicy("posts", { withCheck: "auth.uid() = user_id" })` is therefore treated as if there were no update check at all. Pass `withCheck` through explicitly and evaluate it alongside the UPDATE/USING condition; otherwise UPDATE semantics diverge from the policy model and can over-permit writes.

Also applies to: 233-236
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/rls/evaluator.ts` around lines 152 - 171, The update path
never evaluates PolicyDefinition.withCheck because applyRLSUpdate only accepts a
single policy string; change applyRLSUpdate to accept a separate withCheck
expression (e.g., add a parameter like withCheck?: string) and in the function
evaluate both the update/using expression (via evaluatePolicy(policyExpr,
userId, "update", record)) and the withCheck expression
(evaluatePolicy(withCheck, userId, "withCheck", record) or similar) and deny the
update (throw UnauthorizedError("Update denied by RLS policy")) if either
evaluation returns false; update call sites that currently pass policy?.update
|| policy?.using to instead pass policy?.update or policy?.using as the main
policy and pass policy?.withCheck as the new withCheck argument so withCheck is
enforced on UPDATE.
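A sketch of the proposed signature follows, with a deliberately tiny `evaluatePolicy` that supports only the expression forms named in the PR summary (`auth.uid() = column`, `true`, `false`); the real evaluator in packages/core/src/rls/evaluator.ts is richer, so treat this as an illustration of the control flow only:

```typescript
class UnauthorizedError extends Error {}

// Minimal stand-in for the real policy evaluator.
function evaluatePolicy(
  expr: string,
  userId: string | null,
  record: Record<string, unknown>,
): boolean {
  if (expr === "true") return true;
  if (expr === "false") return false;
  const m = expr.match(/^auth\.uid\(\)\s*=\s*(\w+)$/);
  if (m) return userId !== null && record[m[1]] === userId;
  return false;
}

function applyRLSUpdate(
  policy: string | undefined,
  withCheck: string | undefined, // new explicit parameter
  userId: string | null,
  record: Record<string, unknown>,
): void {
  if (!policy && !withCheck) {
    if (userId === null) throw new UnauthorizedError("Update requires authentication");
    return; // Allow authenticated users
  }
  // Deny if EITHER the update/USING expression or withCheck fails.
  if (policy && !evaluatePolicy(policy, userId, record)) {
    throw new UnauthorizedError("Update denied by RLS policy");
  }
  if (withCheck && !evaluatePolicy(withCheck, userId, record)) {
    throw new UnauthorizedError("Update denied by RLS withCheck");
  }
}

// A withCheck-only policy is now enforced instead of silently ignored.
let denied = false;
try {
  applyRLSUpdate(undefined, "auth.uid() = user_id", "alice", { user_id: "bob" });
} catch (e) {
  denied = e instanceof UnauthorizedError;
}
console.log(denied); // true
```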
```ts
// Apply ordering based on metric
// For cosine and euclidean, lower distance = more similar
// For inner product, higher (less negative) = more similar
const orderFn = metric === "inner_product" ? desc : asc;
queryBuilder = queryBuilder.orderBy(orderFn(distanceExpr));

// Apply limit
queryBuilder = queryBuilder.limit(limit);

// Execute query
const results = await queryBuilder.execute();

// Filter by threshold if provided and transform results
return results
  .map((row: Record<string, unknown>) => {
    const score = includeScore ? (row._score as number) : 0;
    const { _score, ...item } = row;
    return {
      item: item as TItem,
      score,
    };
  })
  .filter((result: VectorSearchResult<TItem>) => {
    if (threshold === undefined) return true;

    // For cosine, threshold is minimum similarity (0-1)
    if (metric === "cosine") {
      const similarity = 1 - result.score;
      return similarity >= threshold;
    }

    // For euclidean, threshold is max distance
    if (metric === "euclidean") {
      return result.score <= threshold;
    }

    // For inner product, higher (less negative) is more similar
    return result.score >= threshold;
  });
```
🧩 Analysis chain

🏁 Script executed:

```shell
cat -n packages/core/src/vector/search.ts | sed -n '150,220p'
```

Repository: weroperking/Betterbase
Length of output: 2536

🏁 Script executed:

```shell
cat -n packages/core/src/vector/search.ts | sed -n '80,150p'
```

Repository: weroperking/Betterbase
Length of output: 2764

🏁 Script executed:

```shell
cat -n packages/core/src/vector/search.ts | sed -n '1,80p'
```

Repository: weroperking/Betterbase
Length of output: 3119
🌐 Web query:
pgvector <#> operator inner product PostgreSQL documentation
💡 Result:
In pgvector, the `<#>` operator computes the negative inner product (dot product) between two vectors. This design allows efficient maximum inner product search using PostgreSQL's ASC-order index scans, as smaller (more negative) values indicate higher similarity. Key details from the official pgvector documentation:

- Usage example: `SELECT * FROM items ORDER BY embedding <#> '[3,1,2]' LIMIT 5;`
- To get the positive inner product: `SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM items;`
- Supported distances: `<->` (L2), `<#>` (negative inner product), `<=>` (cosine), `<+>` (L1).
- For indexing: `CREATE INDEX ON items USING hnsw (embedding vector_ip_ops);`

This is consistently documented across pgvector sources, with no changes noted up to version 0.8.2 (late 2025). PostgreSQL core docs do not cover pgvector operators, as it is an extension.
Citations:
- 1: https://github.com/pgvector/pgvector
- 2: https://www.bytebase.com/blog/pgvector/
- 3: https://techdocs.broadcom.com/us/en/vmware-tanzu/data-solutions/tanzu-greenplum/7/greenplum-database/ref_guide-modules-pgvector-pgvector.html
- 4: https://pgt.dev/extensions/pgvector
- 5: https://www.linkedin.com/posts/mydba-dev_postgresql-pgvector-vectorsearch-activity-7433813328923168768-N0RK
- 6: https://severalnines.com/blog/vector-similarity-search-with-postgresqls-pgvector-a-deep-dive/
Fix inverted inner_product ranking and threshold comparison.

pgvector's `<#>` operator returns the negative inner product, where more negative values indicate higher similarity. Line 179 should use `asc` instead of `desc` to rank the most similar results first. Additionally, line 213 should negate the score before comparing against the threshold to properly filter by similarity: `return (-result.score) >= threshold;`. As written, the results are ranked in reverse order and thresholding fails for positive threshold values.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/vector/search.ts` around lines 176 - 214, The inner_product
branch currently orders results incorrectly and compares thresholds against the
raw (negative) pgvector score; change the ordering so inner_product uses asc
(not desc) on distanceExpr (adjust the orderFn assignment where metric ===
"inner_product"), and in the threshold check for inner_product negate the stored
score when comparing (use (-result.score) >= threshold) so similarity is
compared correctly; reference metric, orderFn, distanceExpr, queryBuilder,
result.score and the VectorSearchResult mapping when making these edits.
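The corrected ordering and thresholding can be checked with plain numbers; the scores below are made-up stand-ins for values returned by the `<#>` operator:

```typescript
// score = embedding <#> query, i.e. the NEGATIVE inner product,
// so ascending order puts the most similar rows first, and
// thresholds must be compared against the negated score.
type Scored = { id: number; score: number };

const results: Scored[] = [
  { id: 1, score: -0.9 }, // inner product 0.9 (most similar)
  { id: 2, score: -0.2 }, // inner product 0.2
  { id: 3, score: -0.5 }, // inner product 0.5
];

// Rank ascending: most negative (= highest inner product) first.
const ranked = [...results].sort((a, b) => a.score - b.score);

// threshold = minimum inner-product similarity; negate before comparing.
const threshold = 0.4;
const filtered = ranked.filter((r) => -r.score >= threshold);

console.log(ranked.map((r) => r.id));   // [1, 3, 2]
console.log(filtered.map((r) => r.id)); // [1, 3]
```

Comparing the raw score against a positive threshold, as the current code does, would reject every row here, since all raw scores are negative.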
- Add proper state machine for dev server with async stop/restart
- Implement graceful shutdown with timeout-based forced kill
- Fix SDL exporter to handle Input types without field arguments
- Update tests to reflect correct singular naming conventions
- Fix URL encoding to properly encode forward slashes in S3 paths
- Correct RLS evaluator test expectations for "any policy allows" logic