diff --git a/.opencode/skills/cost-report/SKILL.md b/.opencode/skills/cost-report/SKILL.md index 33a7268804..e29f179992 100644 --- a/.opencode/skills/cost-report/SKILL.md +++ b/.opencode/skills/cost-report/SKILL.md @@ -7,7 +7,7 @@ description: Analyze Snowflake query costs and identify optimization opportuniti ## Requirements **Agent:** any (read-only analysis) -**Tools used:** sql_execute, sql_analyze, finops_analyze_credits, finops_expensive_queries, finops_warehouse_advice +**Tools used:** sql_execute, sql_analyze, finops_analyze_credits, finops_expensive_queries, finops_warehouse_advice, finops_unused_resources, finops_query_history Analyze Snowflake warehouse query costs, identify the most expensive queries, detect anti-patterns, and recommend optimizations. @@ -60,7 +60,17 @@ Analyze Snowflake warehouse query costs, identify the most expensive queries, de 5. **Warehouse analysis** - Run `finops_warehouse_advice` to check if warehouses used by the top offenders are right-sized. -6. **Output the final report** as a structured markdown document: +6. **Unused resource detection** - Run `finops_unused_resources` to find: + - **Stale tables**: Tables not accessed in the last 30+ days (candidates for archival/drop) + - **Idle warehouses**: Warehouses with no query activity (candidates for suspension/removal) + + Include findings in the report under a "Waste Detection" section. + +7. **Query history enrichment** - Run `finops_query_history` to fetch recent execution patterns: + - Identify frequently-run expensive queries (high frequency × high cost = top optimization target) + - Find queries that could benefit from result caching or materialization + +8. **Output the final report** as a structured markdown document: ``` # Snowflake Cost Report (Last 30 Days) @@ -99,10 +109,20 @@ Analyze Snowflake warehouse query costs, identify the most expensive queries, de ... 
+ ## Waste Detection + ### Unused Tables + | Table | Last Accessed | Size | Recommendation | + |-------|--------------|------|----------------| + + ### Idle Warehouses + | Warehouse | Last Query | Size | Recommendation | + |-----------|-----------|------|----------------| + ## Recommendations 1. Top priority optimizations 2. Warehouse sizing suggestions - 3. Scheduling recommendations + 3. Unused resource cleanup + 4. Scheduling recommendations ``` ## Usage @@ -111,4 +131,4 @@ The user invokes this skill with: - `/cost-report` -- Analyze the last 30 days - `/cost-report 7` -- Analyze the last 7 days (adjust the DATEADD interval) -Use the tools: `sql_execute`, `sql_analyze`, `finops_analyze_credits`, `finops_expensive_queries`, `finops_warehouse_advice`. +Use the tools: `sql_execute`, `sql_analyze`, `finops_analyze_credits`, `finops_expensive_queries`, `finops_warehouse_advice`, `finops_unused_resources`, `finops_query_history`. diff --git a/.opencode/skills/dbt-analyze/SKILL.md b/.opencode/skills/dbt-analyze/SKILL.md index 7aa5ba2f5d..f993e17974 100644 --- a/.opencode/skills/dbt-analyze/SKILL.md +++ b/.opencode/skills/dbt-analyze/SKILL.md @@ -7,7 +7,7 @@ description: Analyze downstream impact of dbt model changes using column-level l ## Requirements **Agent:** any (read-only analysis) -**Tools used:** bash (runs `altimate-dbt` commands), read, glob, dbt_manifest, lineage_check, sql_analyze +**Tools used:** bash (runs `altimate-dbt` commands), read, glob, dbt_manifest, lineage_check, dbt_lineage, sql_analyze, altimate_core_extract_metadata ## When to Use This Skill @@ -45,10 +45,19 @@ For the full downstream tree, recursively call `children` on each downstream mod ### 3. Run Column-Level Lineage -Use the `lineage_check` tool on the changed model's SQL to understand: +**With manifest (preferred):** Use `dbt_lineage` to compute column-level lineage for a dbt model. 
This reads the manifest.json, extracts compiled SQL and upstream schemas, and traces column flow via the Rust engine. More accurate than raw SQL lineage because it resolves `ref()` and `source()` to actual schemas. + +``` +dbt_lineage(model: <model_name>) +``` + +**Without manifest (fallback):** Use `lineage_check` on the raw SQL to understand: - Which source columns flow to which output columns - Which columns were added, removed, or renamed +**Extract structural metadata:** Use `altimate_core_extract_metadata` on the SQL to get tables referenced, columns used, CTEs, subqueries — useful for mapping the full dependency surface. + + ### 4. Cross-Reference with Downstream For each downstream model: diff --git a/.opencode/skills/dbt-develop/SKILL.md b/.opencode/skills/dbt-develop/SKILL.md index b323a2409f..0d18b198b3 100644 --- a/.opencode/skills/dbt-develop/SKILL.md +++ b/.opencode/skills/dbt-develop/SKILL.md @@ -7,7 +7,7 @@ description: Create and modify dbt models — staging, intermediate, marts, incr ## Requirements **Agent:** builder or migrator (requires file write access) -**Tools used:** bash (runs `altimate-dbt` commands), read, glob, write, edit +**Tools used:** bash (runs `altimate-dbt` commands), read, glob, write, edit, schema_search, dbt_profiles, sql_analyze, altimate_core_validate, altimate_core_column_lineage ## When to Use This Skill @@ -41,11 +41,15 @@ altimate-dbt parents --model <model_name> # understand what feeds this model altimate-dbt children --model <model_name> # understand what consumes it ``` +**Check warehouse connection:** Run `dbt_profiles` to discover available profiles and map them to warehouse connections. This tells you which adapter (Snowflake, BigQuery, Postgres, etc.) and target the project uses — essential for dialect-aware SQL. + + ### 2.
Discover — Understand the Data Before Writing **Never write SQL without deeply understanding your data first.** The #1 cause of wrong results is writing SQL blind — assuming grain, relationships, column names, or values without checking. -**Step 2a: Read all documentation and schema definitions** +**Step 2a: Search for relevant tables and columns** +- Use `schema_search` with natural-language queries to find tables/columns in large warehouses (e.g., `schema_search(query: "customer orders")` returns matching tables and columns from the indexed schema cache) - Read `sources.yml`, `schema.yml`, and any YAML files that describe the source/parent models - These contain column descriptions, data types, tests, and business context - Pay special attention to: primary keys, unique constraints, relationships between tables, and what each column represents diff --git a/.opencode/skills/dbt-test/SKILL.md b/.opencode/skills/dbt-test/SKILL.md index 5787de839b..1d8fb4a733 100644 --- a/.opencode/skills/dbt-test/SKILL.md +++ b/.opencode/skills/dbt-test/SKILL.md @@ -7,7 +7,7 @@ description: Add schema tests, unit tests, and data quality checks to dbt models ## Requirements **Agent:** builder or migrator (requires file write access) -**Tools used:** bash (runs `altimate-dbt` commands), read, glob, write, edit +**Tools used:** bash (runs `altimate-dbt` commands), read, glob, write, edit, altimate_core_testgen, altimate_core_validate ## When to Use This Skill @@ -52,13 +52,27 @@ read ### 3. Generate Tests -Apply test rules based on column patterns — see [references/schema-test-patterns.md](references/schema-test-patterns.md). +**Auto-generate with `altimate_core_testgen`:** Pass the compiled SQL and schema to generate boundary-value, NULL-handling, and edge-case test assertions automatically. This produces executable test SQL covering cases you might miss manually. 
+ +``` +altimate_core_testgen(sql: <compiled_sql>, schema_context: <schema_yaml>) +``` + +Review the generated tests — keep what makes sense, discard trivial ones. Then apply test rules based on column patterns — see [references/schema-test-patterns.md](references/schema-test-patterns.md). ### 4. Write YAML Merge into existing schema.yml (don't duplicate). Use `edit` for existing files, `write` for new ones. -### 5. Run Tests +### 5. Validate SQL + +Before running, validate the compiled model SQL to catch syntax and schema errors early: + +``` +altimate_core_validate(sql: <compiled_sql>, schema_context: <schema_yaml>) +``` + +### 6. Run Tests ```bash altimate-dbt test --model <model_name> # run tests for this model @@ ... diff --git a/.opencode/skills/dbt-troubleshoot/SKILL.md b/.opencode/skills/dbt-troubleshoot/SKILL.md index c8333a2160..914a489104 100644 --- a/.opencode/skills/dbt-troubleshoot/SKILL.md +++ b/.opencode/skills/dbt-troubleshoot/SKILL.md @@ -7,7 +7,7 @@ description: Debug dbt errors — compilation failures, runtime database errors, ## Requirements **Agent:** any (read-only diagnosis), builder (if applying fixes) -**Tools used:** bash (runs `altimate-dbt` commands), read, glob, edit, altimate_core_semantics, altimate_core_column_lineage, altimate_core_correct +**Tools used:** bash (runs `altimate-dbt` commands), read, glob, edit, altimate_core_semantics, altimate_core_column_lineage, altimate_core_correct, altimate_core_fix, sql_fix ## When to Use This Skill @@ -81,6 +81,19 @@ altimate_core_column_lineage --sql <sql> altimate_core_correct --sql <sql> ``` +**Quick-fix tools** — use these when the error type is clear: + +``` +# Schema-based fix: fuzzy-matches table/column names against schema to fix typos and wrong references +altimate_core_fix(sql: <failing_sql>, schema_context: <schema_yaml>) + +# Error-message fix: given a failing query + database error, analyzes root cause and proposes corrections +sql_fix(sql: <failing_sql>, error_message: <db_error>, dialect: <dialect>) +``` + +`altimate_core_fix` is best for compilation errors (wrong names, missing objects).
`sql_fix` is best for runtime errors (the database told you what's wrong). Use `altimate_core_correct` for iterative multi-round correction when the first fix doesn't resolve the issue. + + Common findings: - **Wrong join type**: `INNER JOIN` dropping rows that should appear → switch to `LEFT JOIN` - **Fan-out**: One-to-many join inflating row counts → add deduplication or aggregate diff --git a/.opencode/skills/pii-audit/SKILL.md b/.opencode/skills/pii-audit/SKILL.md new file mode 100644 index 0000000000..36f0949346 --- /dev/null +++ b/.opencode/skills/pii-audit/SKILL.md @@ -0,0 +1,117 @@ +--- +name: pii-audit +description: Classify schema columns for PII (SSN, email, phone, name, address, credit card) and check whether queries expose them. Use for GDPR/CCPA/HIPAA compliance audits. +--- + +# PII Audit + +## Requirements +**Agent:** any (read-only analysis) +**Tools used:** altimate_core_classify_pii, altimate_core_query_pii, schema_detect_pii, schema_inspect, read, glob + +## When to Use This Skill + +**Use when the user wants to:** +- Scan a database schema for PII columns (SSN, email, phone, name, address, credit card, IP) +- Check if a specific query exposes PII data +- Audit dbt models for PII leakage before production deployment +- Generate a PII inventory for compliance (GDPR, CCPA, HIPAA) + +**Do NOT use for:** +- SQL injection scanning -> use `sql-review` +- General SQL quality checks -> use `sql-review` +- Access control auditing -> finops role tools in `cost-report` + +## Workflow + +### 1. 
Classify Schema for PII + +**Option A — From schema YAML/JSON:** + +``` +altimate_core_classify_pii(schema_context: <schema_yaml_or_json>) +``` + +Analyzes column names, types, and patterns to detect PII categories: +- **Direct identifiers**: SSN, email, phone, full name, credit card number +- **Quasi-identifiers**: Date of birth, zip code, IP address, device ID +- **Sensitive data**: Salary, health records, religious affiliation + +**Option B — From warehouse connection:** + +First index the schema, inspect it, then classify: +``` +schema_index(warehouse: <warehouse_name>) +schema_inspect(warehouse: <warehouse_name>, database: <database>, schema: <schema>, table: <table>) +schema_detect_pii(warehouse: <warehouse_name>) +``` + +`schema_detect_pii` scans all indexed columns using pattern matching against the schema cache (requires `schema_index` to have been run). + +### 2. Check Query PII Exposure + +For each query or dbt model, check which PII columns it accesses: + +``` +altimate_core_query_pii(sql: <sql>, schema_context: <schema_yaml>) +``` + +Returns: +- Which PII-classified columns are selected, filtered, or joined on +- Risk level per column (HIGH for direct identifiers, MEDIUM for quasi-identifiers) +- Whether PII is exposed in the output (SELECT) vs only used internally (WHERE/JOIN) + +### 3. Audit dbt Models (Batch) + +For a full project audit: +```bash +glob models/**/*.sql +``` + +For each model: +1. Read the compiled SQL +2. Run `altimate_core_query_pii` against the project schema +3. Classify the model's PII risk level + +### 4.
Present the Audit Report + +``` +PII Audit Report +================ + +Schema: analytics.public (42 tables, 380 columns) + +PII Columns Found: 18 + +HIGH RISK (direct identifiers): + customers.email -> EMAIL + customers.phone_number -> PHONE + customers.ssn -> SSN + payments.card_number -> CREDIT_CARD + +MEDIUM RISK (quasi-identifiers): + customers.date_of_birth -> DOB + customers.zip_code -> ZIP + events.ip_address -> IP_ADDRESS + +Model PII Exposure: + +| Model | PII Columns Exposed | Risk | Action | +|-------|-------------------|------|--------| +| stg_customers | email, phone, ssn | HIGH | Mask or hash before mart layer | +| mart_user_profile | email | HIGH | Requires access control | +| int_order_summary | (none) | SAFE | No PII in output | +| mart_daily_revenue | zip_code | MEDIUM | Aggregation reduces risk | + +Recommendations: +1. Hash SSN and credit_card in staging layer (never expose raw) +2. Add column-level masking policy for email and phone +3. Restrict mart_user_profile to authorized roles only +4. 
Document PII handling in schema.yml column descriptions +``` + +## Usage + +- `/pii-audit` -- Scan the full project schema for PII +- `/pii-audit models/marts/mart_customers.sql` -- Check a specific model for PII exposure +- `/pii-audit --schema analytics.public` -- Audit a specific database schema diff --git a/.opencode/skills/query-optimize/SKILL.md b/.opencode/skills/query-optimize/SKILL.md index 25bf698a5b..df3ebdac3c 100644 --- a/.opencode/skills/query-optimize/SKILL.md +++ b/.opencode/skills/query-optimize/SKILL.md @@ -7,7 +7,7 @@ description: Analyze and optimize SQL queries for better performance ## Requirements **Agent:** any (read-only analysis) -**Tools used:** sql_optimize, sql_analyze, read, glob, schema_inspect, warehouse_list +**Tools used:** sql_optimize, sql_analyze, sql_explain, altimate_core_equivalence, read, glob, schema_inspect, warehouse_list Analyze SQL queries for performance issues and suggest concrete optimizations including rewritten SQL. @@ -27,7 +27,17 @@ Analyze SQL queries for performance issues and suggest concrete optimizations in 4. **Run detailed analysis**: - Call `sql_analyze` with the same SQL and dialect to get the full anti-pattern breakdown with recommendations -5. **Present findings** in a structured format: +5. **Get execution plan** (if warehouse connected): + - Call `sql_explain` to run EXPLAIN on the query and get the execution plan + - Look for: full table scans, sort operations on large datasets, inefficient join strategies, missing partition pruning + - Include key findings in the report under "Execution Plan Insights" + +6. **Verify rewrites preserve correctness**: + - If `sql_optimize` produced a rewritten query, call `altimate_core_equivalence` to verify the original and optimized queries produce the same result set + - If not equivalent, flag the difference and present both versions for the user to decide + - This prevents "optimization" that silently changes query semantics + +7. 
**Present findings** in a structured format: ``` Query Optimization Report @@ -62,9 +72,9 @@ Anti-Pattern Details: -> Consider selecting only the columns you need. ``` -6. **If schema context is available**, mention that the optimization used real table schemas for more accurate suggestions (e.g., expanding SELECT * to actual columns). +8. **If schema context is available**, mention that the optimization used real table schemas for more accurate suggestions (e.g., expanding SELECT * to actual columns). -7. **If no issues are found**, confirm the query looks well-optimized and briefly explain why (no anti-patterns, proper use of limits, explicit columns, etc.). +9. **If no issues are found**, confirm the query looks well-optimized and briefly explain why (no anti-patterns, proper use of limits, explicit columns, etc.). ## Usage @@ -73,4 +83,4 @@ The user invokes this skill with SQL or a file path: - `/query-optimize models/staging/stg_orders.sql` -- Optimize SQL from a file - `/query-optimize` -- Optimize the most recently discussed SQL in the conversation -Use the tools: `sql_optimize`, `sql_analyze`, `read` (for file-based SQL), `glob` (to find SQL files), `schema_inspect` (for schema context), `warehouse_list` (to check connections). +Use the tools: `sql_optimize`, `sql_analyze`, `sql_explain` (execution plans), `altimate_core_equivalence` (rewrite verification), `read` (for file-based SQL), `glob` (to find SQL files), `schema_inspect` (for schema context), `warehouse_list` (to check connections). diff --git a/.opencode/skills/schema-migration/SKILL.md b/.opencode/skills/schema-migration/SKILL.md new file mode 100644 index 0000000000..aa61061826 --- /dev/null +++ b/.opencode/skills/schema-migration/SKILL.md @@ -0,0 +1,119 @@ +--- +name: schema-migration +description: Analyze DDL migrations for data loss risks — type narrowing, missing defaults, dropped constraints, breaking column changes. Use before applying schema changes to production. 
+--- + +# Schema Migration Analysis + +## Requirements +**Agent:** any (read-only analysis) +**Tools used:** altimate_core_migration, altimate_core_schema_diff, schema_diff, read, glob, bash (for git operations) + +## When to Use This Skill + +**Use when the user wants to:** +- Analyze a DDL migration for data loss risks before applying it +- Compare two schema versions to find breaking changes +- Review ALTER TABLE / CREATE TABLE changes in a PR +- Validate that a model refactoring doesn't break the column contract + +**Do NOT use for:** +- Writing new models -> use `dbt-develop` +- Analyzing downstream impact of SQL logic changes -> use `dbt-analyze` +- Optimizing queries -> use `query-optimize` + +## Workflow + +### 1. Get the Schema Versions + +**For DDL migrations** (ALTER TABLE, CREATE TABLE): +- Read the migration file(s) from disk +- The "old" schema is the current state; the "new" schema is after applying the migration + +**For dbt model changes** (comparing before/after SQL): +```bash +# Get the old version from git +git show HEAD:<path/to/model.sql> > /tmp/old_model.sql +# The new version is the current file +``` + +**For schema YAML changes:** +- Read both versions of the schema.yml file + +### 2. Analyze DDL Migration Safety + +Call `altimate_core_migration` to detect data loss risks: + +``` +altimate_core_migration(old_ddl: <old_ddl>, new_ddl: <new_ddl>, dialect: <dialect>) +``` + +This checks for: +- **Type narrowing**: VARCHAR(100) -> VARCHAR(50) (truncation risk) +- **NOT NULL without default**: Adding NOT NULL column without DEFAULT (fails on existing rows) +- **Dropped columns**: Data loss if column has values +- **Dropped constraints**: Unique/check constraints removed (data integrity risk) +- **Type changes**: INTEGER -> VARCHAR (irreversible in practice) +- **Index drops**: Performance regression risk + +### 3.
Diff Schema Structures + +**For YAML/JSON schemas:** Call `altimate_core_schema_diff` to compare two schema definitions: + +``` +altimate_core_schema_diff(schema1: <schema_a>, schema2: <schema_b>) +``` + +Returns: added/removed/modified tables and columns, type changes, constraint changes, breaking change detection. + +**For SQL model changes:** Call `schema_diff` to compare two SQL models for column-level breaking changes: + +``` +schema_diff(old_sql: <old_sql>, new_sql: <new_sql>, dialect: <dialect>) +``` + +Returns: dropped columns (BREAKING), type changes (WARNING), potential renames (Levenshtein distance matching). + +### 4. Present the Analysis + +``` +Schema Migration Analysis +========================= + +Migration: alter_orders_table.sql +Dialect: snowflake + +BREAKING CHANGES (2): + [DATA LOSS] Dropped column: orders.discount_amount + -> Column has 1.2M non-NULL values. Data will be permanently lost. + + [TRUNCATION] Type narrowed: orders.customer_name VARCHAR(200) -> VARCHAR(50) + -> 3,400 rows exceed 50 chars. Values will be truncated. + +WARNINGS (1): + [CONSTRAINT] Dropped unique constraint on orders.external_id + -> Duplicates may be inserted after migration. + +SAFE CHANGES (3): + [ADD] New column: orders.updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP + [ADD] New column: orders.version INTEGER DEFAULT 1 + [WIDEN] Type widened: orders.amount DECIMAL(10,2) -> DECIMAL(18,2) + +Recommendation: DO NOT apply without addressing BREAKING changes. + 1. Back up discount_amount data before dropping + 2. Verify no values exceed 50 chars, or widen the target type + 3. Confirm external_id uniqueness is no longer required +``` + +### 5. For dbt Model Refactoring + +When the user is refactoring a dbt model (renaming columns, changing types): +1. Run `schema_diff` on old vs new compiled SQL +2. Cross-reference with `dbt-analyze` to check downstream consumers +3.
Flag any downstream model that references a dropped/renamed column + +## Usage + +- `/schema-migration migrations/V003__alter_orders.sql` -- Analyze a DDL migration file +- `/schema-migration models/staging/stg_orders.sql` -- Compare current file against last commit +- `/schema-migration --old schema_v1.yml --new schema_v2.yml` -- Compare two schema files diff --git a/.opencode/skills/sql-review/SKILL.md b/.opencode/skills/sql-review/SKILL.md new file mode 100644 index 0000000000..5f3babbf5f --- /dev/null +++ b/.opencode/skills/sql-review/SKILL.md @@ -0,0 +1,118 @@ +--- +name: sql-review +description: Pre-merge SQL quality gate — lint 26 anti-patterns, grade readability/performance A-F, validate syntax, and scan for injection threats. Use before committing or reviewing SQL changes. +--- + +# SQL Review + +## Requirements +**Agent:** any (read-only analysis) +**Tools used:** altimate_core_check, altimate_core_grade, sql_analyze, read, glob, bash (for git operations) + +## When to Use This Skill + +**Use when the user wants to:** +- Review SQL quality before merging a PR +- Get a quality grade (A-F) on a query or model +- Run a comprehensive lint + safety + syntax check in one pass +- Audit SQL files in a directory for anti-patterns + +**Do NOT use for:** +- Optimizing query performance -> use `query-optimize` +- Fixing broken SQL -> use `dbt-troubleshoot` +- Translating between dialects -> use `sql-translate` + +## Workflow + +### 1. Collect SQL to Review + +Either: +- Read SQL from a file path provided by the user +- Accept SQL directly from the conversation +- Auto-detect changed SQL files from git: + +```bash +git diff --name-only HEAD~1 | grep '\.sql$' +``` + +For dbt models, compile first to get the full SQL: +```bash +altimate-dbt compile --model <model_name> +``` + +### 2.
Run Comprehensive Check + +Call `altimate_core_check` — this is the single-call code review that composes: +- **Syntax validation**: Parse errors with line/column positions +- **Lint (26 anti-patterns)**: SELECT *, unused CTEs, implicit casts, NULL comparisons, missing WHERE on DELETE/UPDATE, cartesian joins, non-sargable predicates, missing partition filters, and more +- **Injection scan**: Tautology attacks, UNION injection, stacked queries, comment injection, Jinja template injection +- **PII exposure**: Flags queries accessing columns classified as PII + +``` +altimate_core_check(sql: <sql>, schema_context: <schema_yaml>) +``` + +### 3. Grade the SQL + +Call `altimate_core_grade` to get an A-F quality score with per-category breakdown: + +``` +altimate_core_grade(sql: <sql>, schema_context: <schema_yaml>) +``` + +Categories scored: +- **Readability**: Naming, formatting, CTE structure +- **Performance**: Anti-patterns, index usage, scan efficiency +- **Correctness**: NULL handling, join logic, type safety +- **Best Practices**: Explicit columns, proper materialization hints + +### 4. Run Anti-Pattern Analysis + +Call `sql_analyze` for the detailed anti-pattern breakdown with severity levels and concrete recommendations: + +``` +sql_analyze(sql: <sql>, dialect: <dialect>) +``` + +### 5. Present the Review + +``` +SQL Review: <file> +============================== + +Grade: B+ (82/100) + Readability: A (clear CTEs, good naming) + Performance: B- (missing partition filter on large table) + Correctness: A (proper NULL handling) + Best Practices: C (SELECT * in staging model) + +Issues Found: 3 + [HIGH] SELECT_STAR — Use explicit column list for contract stability + [MEDIUM] MISSING_PARTITION_FILTER — Add date filter to avoid full scan + [LOW] IMPLICIT_CAST — VARCHAR compared to INTEGER on line 23 + +Safety: PASS (no injection vectors detected) +PII: PASS (no PII columns exposed) + +Verdict: Fix HIGH issues before merging. Fixing MEDIUM issues is recommended. +``` ### 6.
Batch Mode + +When reviewing multiple files (e.g., all changed SQL in a PR): +- Run the check on each file +- Present a summary table: + +``` +| File | Grade | Issues | Safety | Verdict | +|------|-------|--------|--------|---------| +| stg_orders.sql | A | 0 | PASS | Ship | +| int_revenue.sql | B- | 2 | PASS | Fix HIGH | +| mart_daily.sql | C | 5 | WARN | Block | +``` + +## Usage + +- `/sql-review models/marts/fct_orders.sql` -- Review a specific file +- `/sql-review` -- Review all SQL files changed in the current git diff +- `/sql-review --all models/` -- Review all SQL files in a directory diff --git a/packages/opencode/src/agent/agent.ts b/packages/opencode/src/agent/agent.ts index c133887739..a3c8614f96 100644 --- a/packages/opencode/src/agent/agent.ts +++ b/packages/opencode/src/agent/agent.ts @@ -169,9 +169,9 @@ export namespace Agent { finops_unused_resources: "allow", finops_role_grants: "allow", finops_role_hierarchy: "allow", finops_user_roles: "allow", schema_detect_pii: "allow", schema_tags: "allow", schema_tags_list: "allow", - altimate_core_validate: "allow", altimate_core_lint: "allow", - altimate_core_safety: "allow", altimate_core_transpile: "allow", - altimate_core_check: "allow", + altimate_core_validate: "allow", altimate_core_check: "allow", + altimate_core_rewrite: "allow", + tool_lookup: "allow", read: "allow", grep: "allow", glob: "allow", question: "allow", webfetch: "allow", websearch: "allow", training_save: "allow", training_list: "allow", training_remove: "allow", @@ -201,9 +201,9 @@ export namespace Agent { finops_unused_resources: "allow", finops_role_grants: "allow", finops_role_hierarchy: "allow", finops_user_roles: "allow", schema_detect_pii: "allow", schema_tags: "allow", schema_tags_list: "allow", - altimate_core_validate: "allow", altimate_core_lint: "allow", - altimate_core_safety: "allow", altimate_core_transpile: "allow", - altimate_core_check: "allow", + altimate_core_validate: "allow", altimate_core_check: "allow", + 
altimate_core_rewrite: "allow", + tool_lookup: "allow", read: "allow", grep: "allow", glob: "allow", question: "allow", webfetch: "allow", websearch: "allow", training_save: "allow", training_list: "allow", training_remove: "allow", @@ -233,9 +233,9 @@ export namespace Agent { finops_unused_resources: "allow", finops_role_grants: "allow", finops_role_hierarchy: "allow", finops_user_roles: "allow", schema_detect_pii: "allow", schema_tags: "allow", schema_tags_list: "allow", - altimate_core_validate: "allow", altimate_core_lint: "allow", - altimate_core_safety: "allow", altimate_core_transpile: "allow", - altimate_core_check: "allow", + altimate_core_validate: "allow", altimate_core_check: "allow", + altimate_core_rewrite: "allow", + tool_lookup: "allow", read: "allow", grep: "allow", glob: "allow", bash: "allow", question: "allow", training_save: "allow", training_list: "allow", training_remove: "allow", @@ -264,9 +264,9 @@ export namespace Agent { finops_unused_resources: "allow", finops_role_grants: "allow", finops_role_hierarchy: "allow", finops_user_roles: "allow", schema_detect_pii: "allow", schema_tags: "allow", schema_tags_list: "allow", - altimate_core_validate: "allow", altimate_core_lint: "allow", - altimate_core_safety: "allow", altimate_core_transpile: "allow", - altimate_core_check: "allow", + altimate_core_validate: "allow", altimate_core_check: "allow", + altimate_core_rewrite: "allow", + tool_lookup: "allow", read: "allow", write: "allow", edit: "allow", grep: "allow", glob: "allow", question: "allow", training_save: "allow", training_list: "allow", training_remove: "allow", @@ -296,9 +296,9 @@ export namespace Agent { finops_unused_resources: "allow", finops_role_grants: "allow", finops_role_hierarchy: "allow", finops_user_roles: "allow", schema_detect_pii: "allow", schema_tags: "allow", schema_tags_list: "allow", - altimate_core_validate: "allow", altimate_core_lint: "allow", - altimate_core_safety: "allow", altimate_core_transpile: "allow", - 
altimate_core_check: "allow", + altimate_core_validate: "allow", altimate_core_check: "allow", + altimate_core_rewrite: "allow", + tool_lookup: "allow", read: "allow", grep: "allow", glob: "allow", bash: "allow", question: "allow", webfetch: "allow", websearch: "allow", task: "allow", training_save: "allow", training_list: "allow", training_remove: "allow", diff --git a/packages/opencode/src/altimate/index.ts b/packages/opencode/src/altimate/index.ts index 97c1fbf4a9..f6dae26fc0 100644 --- a/packages/opencode/src/altimate/index.ts +++ b/packages/opencode/src/altimate/index.ts @@ -21,27 +21,21 @@ export * from "./tools/altimate-core-export-ddl" export * from "./tools/altimate-core-extract-metadata" export * from "./tools/altimate-core-fingerprint" export * from "./tools/altimate-core-fix" -export * from "./tools/altimate-core-format" export * from "./tools/altimate-core-grade" export * from "./tools/altimate-core-import-ddl" export * from "./tools/altimate-core-introspection-sql" -export * from "./tools/altimate-core-is-safe" -export * from "./tools/altimate-core-lint" export * from "./tools/altimate-core-migration" export * from "./tools/altimate-core-optimize-context" -export * from "./tools/altimate-core-optimize-for-query" export * from "./tools/altimate-core-parse-dbt" export * from "./tools/altimate-core-policy" export * from "./tools/altimate-core-prune-schema" export * from "./tools/altimate-core-query-pii" export * from "./tools/altimate-core-resolve-term" export * from "./tools/altimate-core-rewrite" -export * from "./tools/altimate-core-safety" export * from "./tools/altimate-core-schema-diff" export * from "./tools/altimate-core-semantics" export * from "./tools/altimate-core-testgen" export * from "./tools/altimate-core-track-lineage" -export * from "./tools/altimate-core-transpile" export * from "./tools/altimate-core-validate" export * from "./tools/dbt-lineage" export * from "./tools/dbt-manifest" @@ -72,6 +66,7 @@ export * from "./tools/sql-format" 
export * from "./tools/sql-optimize" export * from "./tools/sql-rewrite" export * from "./tools/sql-translate" +export * from "./tools/tool-lookup" export * from "./tools/warehouse-add" export * from "./tools/warehouse-discover" export * from "./tools/warehouse-list" diff --git a/packages/opencode/src/altimate/prompts/builder.txt b/packages/opencode/src/altimate/prompts/builder.txt index 75a71a6ae0..00eabc4d50 100644 --- a/packages/opencode/src/altimate/prompts/builder.txt +++ b/packages/opencode/src/altimate/prompts/builder.txt @@ -10,14 +10,18 @@ You are altimate-code in builder mode — a data engineering agent specializing You have full read/write access to the project. You can: - Create and modify dbt models, SQL files, and YAML configs - Execute SQL against connected warehouses via `sql_execute` -- Validate SQL with AltimateCore via `sql_validate` (syntax, safety, lint, PII) +- Validate SQL with AltimateCore via `altimate_core_validate` (syntax + schema references) - Analyze SQL for anti-patterns and performance issues via `sql_analyze` - Inspect database schemas via `schema_inspect` -- Check column-level lineage via `lineage_check` +- Search schemas by natural language via `schema_search` +- Check column-level lineage via `lineage_check` or `dbt_lineage` +- Auto-fix SQL errors via `altimate_core_fix` (schema-based) or `sql_fix` (error-driven) - List and test warehouse connections via `warehouse_list` and `warehouse_test` - Run dbt commands via `altimate-dbt` (build, compile, columns, execute, graph, info) - Use all standard file tools (read, write, edit, bash, grep, glob) +When unsure about a tool's parameters, call `tool_lookup` with the tool name. + ## dbt Operations Use `altimate-dbt` instead of raw `dbt` commands. 
Key commands: @@ -49,17 +53,18 @@ When creating dbt models: - Update schema.yml files alongside model changes - Run `lineage_check` to verify column-level data flow + ## Pre-Execution Protocol Before executing ANY SQL via sql_execute, follow this mandatory sequence: -1. **Analyze first**: Run sql_analyze on the query. Check for HIGH severity anti-patterns. +1. **Analyze first**: Run `sql_analyze` on the query. Check for HIGH severity anti-patterns. - If HIGH severity issues found (SELECT *, cartesian products, missing WHERE on DELETE/UPDATE, full table scans on large tables): FIX THEM before executing. Show the user what you found and the fixed query. - If MEDIUM severity issues found: mention them and proceed unless the user asks to fix. -2. **Validate safety**: Run sql_validate to catch syntax errors and safety issues BEFORE hitting the warehouse. +2. **Validate syntax**: Run `altimate_core_validate` to catch syntax errors and schema issues BEFORE hitting the warehouse. -3. **Execute**: Only after steps 1-2 pass, run sql_execute. +3. **Execute**: Only after steps 1-2 pass, run `sql_execute`. This sequence is NOT optional. Skipping it means the user pays for avoidable mistakes. You are the customer's cost advocate — every credit saved is trust earned. If the user explicitly requests skipping the protocol, note the risk and proceed. @@ -70,8 +75,8 @@ For trivial queries (e.g., `SELECT 1`, `SHOW TABLES`), use judgment — skip the After ANY dbt operation (build, run, test, model creation/modification): 1. **Compile check**: Verify the model compiles without errors -2. **SQL analysis**: Run sql_analyze on the compiled SQL to catch anti-patterns BEFORE they hit production -3. **Lineage verification**: Run lineage_check to confirm column-level lineage is intact — no broken references, no orphaned columns. If lineage_check fails (e.g., no manifest available), note the limitation and proceed. +2. 
**SQL analysis**: Run `sql_analyze` on the compiled SQL to catch anti-patterns BEFORE they hit production +3. **Lineage verification**: Run `lineage_check` to confirm column-level lineage is intact — no broken references, no orphaned columns. If lineage_check fails (e.g., no manifest available), note the limitation and proceed. 4. **Test coverage**: Check that the model has not_null and unique tests on primary keys at minimum. If missing, suggest adding them. Do NOT consider a dbt task complete until steps 1-4 pass. A model that compiles but has anti-patterns or broken lineage is NOT done. @@ -103,38 +108,65 @@ Before declaring any task complete, review your own work: - Naming convention violations (check project's existing patterns) - Unnecessary complexity (could a CTE be a subquery? could a join be avoided?) -2. **Validate the output**: Run sql_validate and sql_analyze on any SQL you wrote. +2. **Validate the output**: Run `altimate_core_validate` and `sql_analyze` on any SQL you wrote. -3. **Check lineage impact**: If you modified a model, run lineage_check to verify you didn't break downstream dependencies. +3. **Check lineage impact**: If you modified a model, run `lineage_check` to verify you didn't break downstream dependencies. Only after self-review passes should you present the result to the user. 
-## Available Skills -You have access to these skills that users can invoke with /: -- /dbt-develop — Create dbt models following project conventions -- /dbt-test — Add tests and data quality validation -- /dbt-docs — Generate model and column descriptions -- /dbt-analyze — Lineage analysis and impact assessment -- /dbt-troubleshoot — Debug and fix dbt errors -- /cost-report — Snowflake cost analysis with optimization suggestions -- /sql-translate — Cross-dialect SQL translation with warnings -- /query-optimize — Query optimization with anti-pattern detection -- /teach — Teach a pattern from an example file -- /train — Learn standards from a document -- /training-status — Show training dashboard - -## FinOps & Governance Tools -- finops_query_history — Query execution history -- finops_analyze_credits — Credit consumption analysis -- finops_expensive_queries — Identify expensive queries -- finops_warehouse_advice — Warehouse sizing recommendations -- finops_unused_resources — Find stale tables and idle warehouses -- finops_role_grants, finops_role_hierarchy, finops_user_roles — RBAC analysis -- schema_detect_pii — Scan for PII columns -- schema_tags, schema_tags_list — Metadata tag queries -- sql_diff — Compare SQL queries +## Skills — When to Invoke + +Skills are specialized workflows that compose multiple tools. Invoke them proactively when the task matches — don't wait for the user to ask. + +### dbt Development Skills + +| Skill | Invoke When | +|-------|-------------| +| `/dbt-develop` | User wants to create, modify, or scaffold dbt models (staging, intermediate, marts, incremental). Always use for model creation. | +| `/dbt-test` | User wants to add tests (schema tests, unit tests, data quality checks). Also auto-generates edge-case tests via `altimate_core_testgen`. | +| `/dbt-docs` | User wants to document models — column descriptions, model descriptions, doc blocks in schema.yml. 
| +| `/dbt-troubleshoot` | Something is broken — compilation errors, runtime failures, wrong data, slow builds. Uses `altimate_core_fix` and `sql_fix` for auto-repair. | +| `/dbt-analyze` | User wants to understand impact before shipping — downstream consumers, breaking changes, blast radius. Uses `dbt_lineage` for column-level analysis. | + +### SQL Quality & Performance Skills + +| Skill | Invoke When | +|-------|-------------| +| `/sql-review` | Before merging or committing SQL. Runs `altimate_core_check` (lint + safety + syntax + PII) and `altimate_core_grade` (A-F score). Use proactively on any SQL the user asks you to review. | +| `/query-optimize` | User wants to speed up a query. Runs `sql_optimize` + `sql_explain` (execution plans) + `altimate_core_equivalence` (verifies rewrites preserve semantics). | +| `/sql-translate` | User wants to convert SQL between dialects (Snowflake, BigQuery, Postgres, etc.). | +| `/lineage-diff` | User changed SQL and wants to see what column-level data flow changed (added/removed edges). | + +### Compliance & Governance Skills + +| Skill | Invoke When | +|-------|-------------| +| `/cost-report` | User asks about Snowflake costs, expensive queries, or warehouse optimization. Includes unused resource detection and query history analysis. | +| `/pii-audit` | User asks about PII, GDPR, CCPA, HIPAA, or data classification. Scans schemas for PII columns and checks queries for PII exposure. | +| `/schema-migration` | User is changing table schemas (DDL migrations, ALTER TABLE, column renames/drops). Detects data loss risks, type narrowing, missing defaults. | + +### Learning Skills + +| Skill | Invoke When | +|-------|-------------| +| `/teach` | User shows an example file and says "learn this pattern" or "do it like this". | +| `/train` | User provides a document with standards/rules to learn from. | +| `/training-status` | User asks what you've learned or wants to see training dashboard. 
| + +## Proactive Skill Invocation + +Don't wait for `/skill-name` — invoke skills when the task clearly matches: +- User says "review this SQL" -> invoke `/sql-review` +- User says "this model is broken" -> invoke `/dbt-troubleshoot` +- User says "create a staging model" -> invoke `/dbt-develop` +- User says "how much are we spending" -> invoke `/cost-report` +- User says "check for PII" -> invoke `/pii-audit` +- User says "will this change break anything" -> invoke `/dbt-analyze` +- User says "analyze this migration" -> invoke `/schema-migration` +- User says "make this query faster" -> invoke `/query-optimize` ## Teammate Training + You are a trainable AI teammate. Your team has taught you patterns, rules, glossary terms, and standards that appear in the "Teammate Training" section of your system prompt. This is institutional knowledge — treat it as authoritative. ### Applying Training @@ -156,8 +188,3 @@ When you detect a correction: - training_save — Save a learned pattern, rule, glossary term, or standard - training_list — List all learned training entries with budget usage - training_remove — Remove outdated training entries - -### Available Training Skills -- /teach — Learn a pattern from an example file -- /train — Learn standards from a document -- /training-status — Show what you've learned diff --git a/packages/opencode/src/altimate/tools/altimate-core-format.ts b/packages/opencode/src/altimate/tools/altimate-core-format.ts deleted file mode 100644 index 3c8c53ffa4..0000000000 --- a/packages/opencode/src/altimate/tools/altimate-core-format.ts +++ /dev/null @@ -1,35 +0,0 @@ -import z from "zod" -import { Tool } from "../../tool/tool" -import { Dispatcher } from "../native" - -export const AltimateCoreFormatTool = Tool.define("altimate_core_format", { - description: - "Format SQL using the Rust-based altimate-core engine. 
Provides fast, deterministic formatting with dialect-aware keyword casing and indentation.",
-  parameters: z.object({
-    sql: z.string().describe("SQL to format"),
-    dialect: z.string().optional().describe("SQL dialect (e.g. snowflake, bigquery, postgres)"),
-  }),
-  async execute(args, ctx) {
-    try {
-      const result = await Dispatcher.call("altimate_core.format", {
-        sql: args.sql,
-        dialect: args.dialect ?? "",
-      })
-      const data = result.data as Record<string, any>
-      return {
-        title: `Format: ${data.success !== false ? "OK" : "FAILED"}`,
-        metadata: { success: result.success },
-        output: formatFormat(data),
-      }
-    } catch (e) {
-      const msg = e instanceof Error ? e.message : String(e)
-      return { title: "Format: ERROR", metadata: { success: false }, output: `Failed: ${msg}` }
-    }
-  },
-})
-
-function formatFormat(data: Record<string, any>): string {
-  if (data.error) return `Error: ${data.error}`
-  if (data.formatted_sql) return data.formatted_sql
-  return "No formatted output."
-}
diff --git a/packages/opencode/src/altimate/tools/altimate-core-is-safe.ts b/packages/opencode/src/altimate/tools/altimate-core-is-safe.ts
deleted file mode 100644
index fe583dbbae..0000000000
--- a/packages/opencode/src/altimate/tools/altimate-core-is-safe.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import z from "zod"
-import { Tool } from "../../tool/tool"
-import { Dispatcher } from "../native"
-
-export const AltimateCoreIsSafeTool = Tool.define("altimate_core_is_safe", {
-  description:
-    "Quick boolean safety check for SQL using the Rust-based altimate-core engine. Returns true/false indicating whether the SQL is safe to execute (no injection, no destructive operations).",
-  parameters: z.object({
-    sql: z.string().describe("SQL query to check"),
-  }),
-  async execute(args, ctx) {
-    try {
-      const result = await Dispatcher.call("altimate_core.is_safe", {
-        sql: args.sql,
-      })
-      const data = result.data as Record<string, any>
-      return {
-        title: `Is Safe: ${data.safe ?
"YES" : "NO"}`,
-        metadata: { success: result.success, safe: data.safe },
-        output: data.safe ? "SQL is safe to execute." : "SQL is NOT safe — may contain injection or destructive operations.",
-      }
-    } catch (e) {
-      const msg = e instanceof Error ? e.message : String(e)
-      return { title: "Is Safe: ERROR", metadata: { success: false, safe: false }, output: `Failed: ${msg}` }
-    }
-  },
-})
diff --git a/packages/opencode/src/altimate/tools/altimate-core-lint.ts b/packages/opencode/src/altimate/tools/altimate-core-lint.ts
deleted file mode 100644
index fe19d17ac6..0000000000
--- a/packages/opencode/src/altimate/tools/altimate-core-lint.ts
+++ /dev/null
@@ -1,44 +0,0 @@
-import z from "zod"
-import { Tool } from "../../tool/tool"
-import { Dispatcher } from "../native"
-
-export const AltimateCoreLintTool = Tool.define("altimate_core_lint", {
-  description:
-    "Lint SQL for anti-patterns using the Rust-based altimate-core engine. Catches issues like NULL comparisons, implicit casts, unused CTEs, and dialect-specific problems.",
-  parameters: z.object({
-    sql: z.string().describe("SQL query to lint"),
-    schema_path: z.string().optional().describe("Path to YAML/JSON schema file"),
-    schema_context: z.record(z.string(), z.any()).optional().describe("Inline schema definition"),
-  }),
-  async execute(args, ctx) {
-    try {
-      const result = await Dispatcher.call("altimate_core.lint", {
-        sql: args.sql,
-        schema_path: args.schema_path ?? "",
-        schema_context: args.schema_context,
-      })
-      const data = result.data as Record<string, any>
-      return {
-        title: `Lint: ${data.clean ? "CLEAN" : `${data.findings?.length ?? 0} findings`}`,
-        metadata: { success: result.success, clean: data.clean },
-        output: formatLint(data),
-      }
-    } catch (e) {
-      const msg = e instanceof Error ?
e.message : String(e)
-      return { title: "Lint: ERROR", metadata: { success: false, clean: false }, output: `Failed: ${msg}` }
-    }
-  },
-})
-
-function formatLint(data: Record<string, any>): string {
-  if (data.error) return `Error: ${data.error}`
-  if (!data.findings?.length) return "No issues found."
-  const lines = [`Found ${data.findings.length} finding(s):\n`]
-  for (const f of data.findings) {
-    lines.push(`  [${f.severity}] ${f.rule}: ${f.message}`)
-    if (f.location) lines.push(`    at line ${f.location.line}, col ${f.location.column}`)
-    if (f.suggestion) lines.push(`    → ${f.suggestion}`)
-    lines.push("")
-  }
-  return lines.join("\n")
-}
diff --git a/packages/opencode/src/altimate/tools/altimate-core-optimize-for-query.ts b/packages/opencode/src/altimate/tools/altimate-core-optimize-for-query.ts
deleted file mode 100644
index 0302f03285..0000000000
--- a/packages/opencode/src/altimate/tools/altimate-core-optimize-for-query.ts
+++ /dev/null
@@ -1,37 +0,0 @@
-import z from "zod"
-import { Tool } from "../../tool/tool"
-import { Dispatcher } from "../native"
-
-export const AltimateCoreOptimizeForQueryTool = Tool.define("altimate_core_optimize_for_query", {
-  description:
-    "Prune schema to only tables and columns relevant to a specific query using the Rust-based altimate-core engine. Reduces context size for LLM prompts.",
-  parameters: z.object({
-    sql: z.string().describe("SQL query to optimize schema for"),
-    schema_path: z.string().optional().describe("Path to YAML/JSON schema file"),
-    schema_context: z.record(z.string(), z.any()).optional().describe("Inline schema definition"),
-  }),
-  async execute(args, ctx) {
-    try {
-      const result = await Dispatcher.call("altimate_core.optimize_for_query", {
-        sql: args.sql,
-        schema_path: args.schema_path ?? "",
-        schema_context: args.schema_context,
-      })
-      const data = result.data as Record<string, any>
-      return {
-        title: `Optimize for Query: ${data.tables_kept ?
"?"} tables kept`,
-        metadata: { success: result.success },
-        output: formatOptimizeForQuery(data),
-      }
-    } catch (e) {
-      const msg = e instanceof Error ? e.message : String(e)
-      return { title: "Optimize for Query: ERROR", metadata: { success: false }, output: `Failed: ${msg}` }
-    }
-  },
-})
-
-function formatOptimizeForQuery(data: Record<string, any>): string {
-  if (data.error) return `Error: ${data.error}`
-  if (data.pruned_schema) return `Pruned schema:\n${JSON.stringify(data.pruned_schema, null, 2)}`
-  return JSON.stringify(data, null, 2)
-}
diff --git a/packages/opencode/src/altimate/tools/altimate-core-safety.ts b/packages/opencode/src/altimate/tools/altimate-core-safety.ts
deleted file mode 100644
index 27af50d030..0000000000
--- a/packages/opencode/src/altimate/tools/altimate-core-safety.ts
+++ /dev/null
@@ -1,38 +0,0 @@
-import z from "zod"
-import { Tool } from "../../tool/tool"
-import { Dispatcher } from "../native"
-
-export const AltimateCoreSafetyTool = Tool.define("altimate_core_safety", {
-  description:
-    "Scan SQL for injection patterns, dangerous statements (DROP, TRUNCATE), and security threats. Uses the Rust-based altimate-core safety engine.",
-  parameters: z.object({
-    sql: z.string().describe("SQL query to scan"),
-  }),
-  async execute(args, ctx) {
-    try {
-      const result = await Dispatcher.call("altimate_core.safety", { sql: args.sql })
-      const data = result.data as Record<string, any>
-      return {
-        title: `Safety: ${data.safe ? "SAFE" : `${data.threats?.length ?? 0} threats`}`,
-        metadata: { success: result.success, safe: data.safe, riskScore: data.risk_score },
-        output: formatSafety(data),
-      }
-    } catch (e) {
-      const msg = e instanceof Error ? e.message : String(e)
-      return { title: "Safety: ERROR", metadata: { success: false, safe: false, riskScore: undefined }, output: `Failed: ${msg}` }
-    }
-  },
-})
-
-function formatSafety(data: Record<string, any>): string {
-  if (data.error) return `Error: ${data.error}`
-  if (data.safe) return "Query is safe — no threats detected."
-
-  const lines = [`Risk score: ${data.risk_score}\n`, "Threats detected:\n"]
-  for (const t of data.threats ?? []) {
-    lines.push(`  [${t.severity}] ${t.type}: ${t.description}`)
-    lines.push(`    at line ${t.location?.line ?? "?"}, col ${t.location?.column ?? "?"}`)
-    lines.push("")
-  }
-  return lines.join("\n")
-}
diff --git a/packages/opencode/src/altimate/tools/altimate-core-transpile.ts b/packages/opencode/src/altimate/tools/altimate-core-transpile.ts
deleted file mode 100644
index badd6050c9..0000000000
--- a/packages/opencode/src/altimate/tools/altimate-core-transpile.ts
+++ /dev/null
@@ -1,47 +0,0 @@
-import z from "zod"
-import { Tool } from "../../tool/tool"
-import { Dispatcher } from "../native"
-
-export const AltimateCoreTranspileTool = Tool.define("altimate_core_transpile", {
-  description:
-    "Transpile SQL between dialects using the Rust-based altimate-core engine. Supports snowflake, postgres, bigquery, databricks, duckdb, mysql, tsql, and more.",
-  parameters: z.object({
-    sql: z.string().describe("SQL query to transpile"),
-    from_dialect: z.string().describe("Source dialect (e.g., snowflake, postgres, bigquery)"),
-    to_dialect: z.string().describe("Target dialect (e.g., snowflake, postgres, bigquery)"),
-  }),
-  async execute(args, ctx) {
-    try {
-      const result = await Dispatcher.call("altimate_core.transpile", {
-        sql: args.sql,
-        from_dialect: args.from_dialect,
-        to_dialect: args.to_dialect,
-      })
-      const data = result.data as Record<string, any>
-      return {
-        title: `Transpile: ${args.from_dialect} → ${args.to_dialect} [${result.success ? "OK" : "FAIL"}]`,
-        metadata: { success: result.success },
-        output: formatTranspile(data, args.sql),
-      }
-    } catch (e) {
-      const msg = e instanceof Error ?
e.message : String(e)
-      return { title: "Transpile: ERROR", metadata: { success: false }, output: `Failed: ${msg}` }
-    }
-  },
-})
-
-function formatTranspile(data: Record<string, any>, original: string): string {
-  if (data.error) return `Error: ${data.error}`
-
-  const lines = [
-    `Source: ${data.source_dialect}`,
-    `Target: ${data.target_dialect}`,
-    "",
-    "--- Original ---",
-    original.trim(),
-    "",
-    "--- Transpiled ---",
-    data.transpiled_sql ?? "(no output)",
-  ]
-  return lines.join("\n")
-}
diff --git a/packages/opencode/src/altimate/tools/tool-lookup.ts b/packages/opencode/src/altimate/tools/tool-lookup.ts
new file mode 100644
index 0000000000..967b8bc2fd
--- /dev/null
+++ b/packages/opencode/src/altimate/tools/tool-lookup.ts
@@ -0,0 +1,101 @@
+import z from "zod"
+import { Tool } from "../../tool/tool"
+import { ToolRegistry } from "../../tool/registry"
+
+export const ToolLookupTool = Tool.define("tool_lookup", {
+  description:
+    "Look up any tool's description, parameters, and types. " +
+    "Call with a tool name to see its full contract before using it.",
+  parameters: z.object({
+    tool_name: z.string().describe("Exact tool ID (e.g., 'sql_analyze', 'altimate_core_migration')"),
+  }),
+  async execute(args) {
+    const infos = await ToolRegistry.allInfos()
+    const info = infos.find((t) => t.id === args.tool_name)
+    if (!info) {
+      const ids = infos.map((t) => t.id).sort()
+      return {
+        title: "Tool not found",
+        metadata: {},
+        output: `No tool named "${args.tool_name}". Available tools:\n${ids.join(", ")}`,
+      }
+    }
+
+    const tool = await info.init()
+    const params = describeZodSchema(tool.parameters)
+    const lines = [info.id, `  ${tool.description}`, ""]
+    if (params.length) {
+      lines.push("  Parameters:")
+      for (const p of params) {
+        const req = p.required ? "required" : "optional"
+        const desc = p.description ?
` — ${p.description}` : ""
+        lines.push(`    ${p.name} (${p.type}, ${req})${desc}`)
+      }
+    } else {
+      lines.push("  No parameters.")
+    }
+
+    return { title: `Lookup: ${info.id}`, metadata: {}, output: lines.join("\n") }
+  },
+})
+
+interface ParamInfo {
+  name: string
+  type: string
+  required: boolean
+  description: string
+}
+
+function describeZodSchema(schema: z.ZodType): ParamInfo[] {
+  const shape = getShape(schema)
+  if (!shape) return []
+
+  const params: ParamInfo[] = []
+  for (const [name, field] of Object.entries(shape)) {
+    const unwrapped = unwrap(field)
+    params.push({
+      name,
+      type: inferZodType(field),
+      required: !field.isOptional(),
+      description: unwrapped.description ?? field.description ?? "",
+    })
+  }
+  return params
+}
+
+function getShape(schema: any): Record<string, any> | null {
+  if (schema?._def?.shape) {
+    return typeof schema._def.shape === "function" ? schema._def.shape() : schema._def.shape
+  }
+  if (schema?._def?.innerType) return getShape(schema._def.innerType)
+  return null
+}
+
+/** Unwrap optional/default wrappers to reach the inner type. */
+function unwrap(field: any): any {
+  const type = field?._def?.type
+  if (type === "optional" || type === "default" || type === "nullable") {
+    return field._def.innerType ? unwrap(field._def.innerType) : field
+  }
+  return field
+}
+
+function inferZodType(field: any): string {
+  const type: string = field?._def?.type ?? ""
+  if (type === "optional" || type === "default" || type === "nullable") {
+    return field._def.innerType ? inferZodType(field._def.innerType) : "unknown"
+  }
+  if (type === "string") return "string"
+  if (type === "number") return "number"
+  if (type === "boolean") return "boolean"
+  if (type === "array") return `array<${inferZodType(field._def.element)}>`
+  if (type === "enum") return `enum(${field.options?.join("|") ?? Object.keys(field._def.entries ??
{}).join("|")})` + if (type === "record") return "record" + if (type === "object") return "object" + if (type === "union") return field._def.options?.map((o: any) => inferZodType(o)).join(" | ") ?? "union" + if (type === "literal") return JSON.stringify(field._def.value) + if (type === "any") return "any" + if (type === "unknown") return "unknown" + // Fallback: use constructor name or _def.type + return type || field?.constructor?.name?.replace("Zod", "").toLowerCase() || "unknown" +} diff --git a/packages/opencode/src/tool/registry.ts b/packages/opencode/src/tool/registry.ts index 6b4100404d..ae08874a7a 100644 --- a/packages/opencode/src/tool/registry.ts +++ b/packages/opencode/src/tool/registry.ts @@ -67,9 +67,6 @@ import { SchemaTagsTool, SchemaTagsListTool } from "../altimate/tools/schema-tag import { SqlRewriteTool } from "../altimate/tools/sql-rewrite" import { SchemaDiffTool } from "../altimate/tools/schema-diff" import { AltimateCoreValidateTool } from "../altimate/tools/altimate-core-validate" -import { AltimateCoreLintTool } from "../altimate/tools/altimate-core-lint" -import { AltimateCoreSafetyTool } from "../altimate/tools/altimate-core-safety" -import { AltimateCoreTranspileTool } from "../altimate/tools/altimate-core-transpile" import { AltimateCoreCheckTool } from "../altimate/tools/altimate-core-check" import { AltimateCoreFixTool } from "../altimate/tools/altimate-core-fix" import { AltimateCorePolicyTool } from "../altimate/tools/altimate-core-policy" @@ -78,7 +75,6 @@ import { AltimateCoreTestgenTool } from "../altimate/tools/altimate-core-testgen import { AltimateCoreEquivalenceTool } from "../altimate/tools/altimate-core-equivalence" import { AltimateCoreMigrationTool } from "../altimate/tools/altimate-core-migration" import { AltimateCoreSchemaDiffTool } from "../altimate/tools/altimate-core-schema-diff" -import { AltimateCoreRewriteTool } from "../altimate/tools/altimate-core-rewrite" import { AltimateCoreCorrectTool } from 
"../altimate/tools/altimate-core-correct" import { AltimateCoreGradeTool } from "../altimate/tools/altimate-core-grade" import { AltimateCoreClassifyPiiTool } from "../altimate/tools/altimate-core-classify-pii" @@ -86,19 +82,18 @@ import { AltimateCoreQueryPiiTool } from "../altimate/tools/altimate-core-query- import { AltimateCoreResolveTermTool } from "../altimate/tools/altimate-core-resolve-term" import { AltimateCoreColumnLineageTool } from "../altimate/tools/altimate-core-column-lineage" import { AltimateCoreTrackLineageTool } from "../altimate/tools/altimate-core-track-lineage" -import { AltimateCoreFormatTool } from "../altimate/tools/altimate-core-format" import { AltimateCoreExtractMetadataTool } from "../altimate/tools/altimate-core-extract-metadata" import { AltimateCoreCompareTool } from "../altimate/tools/altimate-core-compare" import { AltimateCoreCompleteTool } from "../altimate/tools/altimate-core-complete" import { AltimateCoreOptimizeContextTool } from "../altimate/tools/altimate-core-optimize-context" -import { AltimateCoreOptimizeForQueryTool } from "../altimate/tools/altimate-core-optimize-for-query" import { AltimateCorePruneSchemaTool } from "../altimate/tools/altimate-core-prune-schema" import { AltimateCoreImportDdlTool } from "../altimate/tools/altimate-core-import-ddl" import { AltimateCoreExportDdlTool } from "../altimate/tools/altimate-core-export-ddl" import { AltimateCoreFingerprintTool } from "../altimate/tools/altimate-core-fingerprint" import { AltimateCoreIntrospectionSqlTool } from "../altimate/tools/altimate-core-introspection-sql" import { AltimateCoreParseDbtTool } from "../altimate/tools/altimate-core-parse-dbt" -import { AltimateCoreIsSafeTool } from "../altimate/tools/altimate-core-is-safe" +import { AltimateCoreRewriteTool } from "../altimate/tools/altimate-core-rewrite" +import { ToolLookupTool } from "../altimate/tools/tool-lookup" import { ProjectScanTool } from "../altimate/tools/project-scan" import { 
DatamateManagerTool } from "../altimate/tools/datamate"
 import { FeedbackSubmitTool } from "../altimate/tools/feedback-submit"
@@ -242,11 +237,9 @@ export namespace ToolRegistry {
       SchemaTagsTool,
       SchemaTagsListTool,
       SqlRewriteTool,
+      AltimateCoreRewriteTool,
       SchemaDiffTool,
       AltimateCoreValidateTool,
-      AltimateCoreLintTool,
-      AltimateCoreSafetyTool,
-      AltimateCoreTranspileTool,
       AltimateCoreCheckTool,
       AltimateCoreFixTool,
       AltimateCorePolicyTool,
@@ -255,7 +248,6 @@ export namespace ToolRegistry {
       AltimateCoreEquivalenceTool,
       AltimateCoreMigrationTool,
       AltimateCoreSchemaDiffTool,
-      AltimateCoreRewriteTool,
      AltimateCoreCorrectTool,
       AltimateCoreGradeTool,
       AltimateCoreClassifyPiiTool,
@@ -263,19 +255,17 @@ export namespace ToolRegistry {
       AltimateCoreResolveTermTool,
       AltimateCoreColumnLineageTool,
       AltimateCoreTrackLineageTool,
-      AltimateCoreFormatTool,
       AltimateCoreExtractMetadataTool,
       AltimateCoreCompareTool,
       AltimateCoreCompleteTool,
       AltimateCoreOptimizeContextTool,
-      AltimateCoreOptimizeForQueryTool,
       AltimateCorePruneSchemaTool,
       AltimateCoreImportDdlTool,
       AltimateCoreExportDdlTool,
       AltimateCoreFingerprintTool,
       AltimateCoreIntrospectionSqlTool,
       AltimateCoreParseDbtTool,
-      AltimateCoreIsSafeTool,
+      ToolLookupTool,
       ProjectScanTool,
       DatamateManagerTool,
       FeedbackSubmitTool,
@@ -290,6 +280,11 @@ export namespace ToolRegistry {
     ]
   }

+  /** All tool infos without model/provider filtering. */
+  export async function allInfos() {
+    return all()
+  }
+
   export async function ids() {
     return all().then((x) => x.map((t) => t.id))
   }
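For reference, the `_def`-walking logic that `tool-lookup.ts` uses to describe parameters can be exercised without a live registry or zod itself. The sketch below re-implements the `unwrap` and `inferZodType` helpers against hand-built objects that mimic the zod internals the tool reads (`_def.type`, `_def.innerType`, `_def.element`); the `FakeDef`/`FakeField` shapes are simplified stand-ins for illustration, not zod's actual types:

```typescript
// Minimal stand-in for a zod field: only the `_def` fields tool-lookup inspects.
type FakeDef = { type: string; innerType?: FakeField; element?: FakeField }
type FakeField = { _def: FakeDef; description?: string }

// Mirrors unwrap(): peel optional/default/nullable wrappers to reach the inner type.
function unwrap(field: FakeField): FakeField {
  const t = field._def.type
  if ((t === "optional" || t === "default" || t === "nullable") && field._def.innerType) {
    return unwrap(field._def.innerType)
  }
  return field
}

// Mirrors inferZodType() for the wrapper, scalar, and array cases.
function inferZodType(field: FakeField): string {
  const t = field._def.type
  if (t === "optional" || t === "default" || t === "nullable") {
    return field._def.innerType ? inferZodType(field._def.innerType) : "unknown"
  }
  if (t === "array" && field._def.element) return `array<${inferZodType(field._def.element)}>`
  return t || "unknown"
}

// A "string" field and an "optional array of strings" field, as zod v4 might lay them out.
const str: FakeField = { _def: { type: "string" }, description: "a name" }
const optArr: FakeField = {
  _def: { type: "optional", innerType: { _def: { type: "array", element: str } } },
}

console.log(inferZodType(str))        // "string"
console.log(inferZodType(optArr))     // "array<string>"
console.log(unwrap(optArr)._def.type) // "array"
```

In the real tool the fields come from `z.object(...)` schemas supplied by each registered tool, so the same recursion also handles enums, unions, and records as shown in the diff.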