
feat(wren): extend standalone CLI with MySQL support and auto-discovery#1476

Open
douenergy wants to merge 10 commits into Canner:main from douenergy:feat/wren-standalone-cli

Conversation


@douenergy douenergy commented Mar 24, 2026

Summary

  • Add MySQL datasource tests (tests/connectors/test_mysql.py) using testcontainers + TPCH data loaded via pymysql — all 10 suite tests pass
  • Update CLI to auto-discover mdl.json and conn.json from the current directory, so wren --sql '...' works with no extra flags
  • Read datasource from conn.json — no more required --datasource flag
  • Add bare wren --sql '...' as a default command (no subcommand needed)
  • Add testcontainers[mysql] to dev extra and test-mysql CI job
  • Rewrite README with quick-start, full CLI reference, and conn.json formats per datasource
  • Add .claude/skills/wren-query/SKILL.md Claude Code skill for running wren queries

Usage after this PR

pip install wren[mysql]

Drop two files in your project directory:

conn.json

{
  "datasource": "mysql",
  "host": "localhost",
  "port": 3306,
  "database": "mydb",
  "user": "root",
  "password": "secret"
}

mdl.json — your semantic model

Then just run:

wren --sql 'SELECT * FROM "orders" LIMIT 10'
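For illustration, the auto-discovery flow described above can be sketched as a small helper. This is a minimal sketch only: the function name, error wording, and return shape are assumptions, not the actual CLI code.

```python
import json
from pathlib import Path


def discover_config(directory: str = ".") -> dict:
    """Hypothetical sketch of conn.json/mdl.json auto-discovery.

    Mirrors the behavior described in this PR: both files are looked up
    in the working directory, and the datasource is read from conn.json.
    """
    base = Path(directory)
    conn_path = base / "conn.json"
    mdl_path = base / "mdl.json"

    # Clear error message when either file is missing (see "Test plan")
    missing = [p.name for p in (conn_path, mdl_path) if not p.exists()]
    if missing:
        raise FileNotFoundError(f"missing config file(s): {', '.join(missing)}")

    conn = json.loads(conn_path.read_text())
    # The datasource comes from conn.json, so no --datasource flag is needed
    datasource = conn.pop("datasource", None)
    if datasource is None:
        raise ValueError("conn.json must include a 'datasource' key")

    return {
        "datasource": datasource,
        "connection_info": conn,  # remaining keys: host, port, user, ...
        "mdl": json.loads(mdl_path.read_text()),
    }
```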

Test plan

  • just test-mysql — all 10 connector tests pass locally
  • just test-postgres — existing tests unaffected
  • wren transpile --sql '...' auto-discovers datasource from conn.json
  • Clear error messages when mdl.json or conn.json are missing

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Add Wren SQL skill for interactive queries, translate/validate modes, and top-level CLI --sql execution with config auto-discovery and output options; MySQL connector support added.
  • Documentation

    • New CLI and connection reference, expanded quick-start and Python SDK examples, and a Claude skills catalog entry.
  • Tests / CI

    • New MySQL integration tests and pytest marker, new test target, and parameterized CI matrix for multiple connectors.
  • Chores

    • Packaging extras and dev dependencies adjusted; project header/branding updated.

@github-actions github-actions bot added documentation Improvements or additions to documentation dependencies Pull requests that update a dependency file ci labels Mar 24, 2026

coderabbitai bot commented Mar 24, 2026

Important

Review skipped

Review was skipped due to path filters

⛔ Files ignored due to path filters (1)
  • wren/uv.lock is excluded by !**/*.lock

CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters. For example, including **/dist/** will override the default block on the dist directory by removing the pattern from both lists.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b192232e-245c-42cc-b986-80abab4dec97

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Adds a Claude wren-query skill and skills README; refactors the Wren CLI to support root --sql, MDL/connection auto-discovery/loaders, and context-managed engine lifecycle; introduces MySQL connector tests and CI matrix; updates docs (CLI, connections), packaging, and the wren README/justfile.

Changes

Cohort / File(s) Summary
Claude skills
.claude/skills/wren-query/SKILL.md, .claude/skills/README.md
Add wren-query skill manifest and skills README documenting usage, required files, tools, command mappings, schemas, overrides, and error guidance.
CLI refactor & engine lifecycle
wren/src/wren/cli.py
Introduce root callback invoked with --sql; add _require_mdl, _load_manifest, _load_conn, _resolve_datasource, _build_engine; make datasource/mdl optional; use with-managed WrenEngine; update command signatures and output handling.
Docs: CLI & connections
wren/docs/cli.md, wren/docs/connections.md
Add CLI reference for default --sql invocation and subcommands (query, dry-plan, dry-run, validate); document connection_info.json formats and inline --connection-info usage with connector examples.
Tests & CI
.github/workflows/wren-ci.yml, wren/tests/conftest.py, wren/tests/connectors/test_mysql.py, wren/justfile
Replace single Postgres job with matrix test-connector (postgres/mysql); conditional MySQL system deps; register mysql pytest marker; add MySQL integration test using testcontainers+TPCH; add just test-mysql.
Packaging & README
wren/pyproject.toml, wren/README.md
Rebrand README to “wren-engine”, update Quick Start and Python SDK examples; reformat extras, change dev extra to include testcontainers[postgres,mysql], and change all extra to wren[...].

Sequence Diagram(s)

sequenceDiagram
  participant User as User
  participant CLI as Wren CLI
  participant FS as File System
  participant Engine as WrenEngine
  participant DB as DataSource

  User->>CLI: run `wren --sql "<SQL>"` or `wren query|dry-plan|dry-run|validate`
  CLI->>FS: check `~/.wren/mdl.json` and `~/.wren/connection_info.json` (or use flags/inline JSON)
  alt missing required files
    FS-->>CLI: missing -> CLI returns setup guidance / error
  else files present or overrides provided
    FS-->>CLI: provide mdl and connection info
    CLI->>Engine: _build_engine(datasource, mdl, connection_info)
    activate Engine
    Engine->>DB: connect (if command requires live DB)
    DB-->>Engine: query/validation response
    Engine-->>CLI: results
    deactivate Engine
    CLI-->>User: format and print output (table/csv/json)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

python, ibis

Suggested reviewers

  • wwwy3y3

Poem

🐰 Hopping through manifests and tests anew,

I sniff out mdl and conn tucked in view,
CLI hums queries, dry-plans, validates with glee,
Containers spin MySQL to join the spree,
A rabbit’s wiggle for the changes made true.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 56.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'feat(wren): extend standalone CLI with MySQL support and auto-discovery' directly aligns with the main changes: MySQL connector tests added, CLI auto-discovery for mdl.json and connection_info.json implemented, and datasource read from the config file.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (5)
wren/src/wren/cli.py (3)

39-49: Clarify the logic for non-.json file handling.

The manifest loading logic handles .json files by base64-encoding their content, but for non-.json files that exist, it decodes as UTF-8 and strips whitespace. This assumes such files contain pre-encoded base64 strings, which may not be obvious.

Consider adding a comment to clarify:

     if path.exists():
         import base64  # noqa: PLC0415
 
         content = path.read_bytes()
         if mdl.endswith(".json"):
             return base64.b64encode(content).decode()
+        # Non-.json file: assume it contains a base64-encoded manifest string
         return content.decode().strip()
+    # Not a file path: treat as a raw base64 string
     return mdl
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 39 - 49, The _load_manifest function
currently base64-encodes files ending with ".json" but for other existing files
returns decoded UTF-8 text (assuming they already contain a base64 MDL string);
update the function by adding a concise clarifying comment (inside
_load_manifest, near the path.exists() branch) that explains: ".json files are
treated as raw JSON and are base64-encoded, while non-.json existing files are
expected to already contain a base64-encoded MDL string and are decoded/stripped
accordingly" (referencing symbols: function _load_manifest, parameter mdl, local
variable path and the two return branches) so future readers understand the
intent.
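For reference, the two-branch behavior this comment describes can be written as a self-contained sketch. The names mirror the review's description of _load_manifest; this is not the actual wren/src/wren/cli.py code.

```python
import base64
from pathlib import Path


def load_manifest(mdl: str) -> str:
    """Return a base64-encoded MDL string (sketch of the reviewed logic)."""
    path = Path(mdl)
    if path.exists():
        content = path.read_bytes()
        if mdl.endswith(".json"):
            # .json files hold raw JSON: encode them to base64
            return base64.b64encode(content).decode()
        # Non-.json file: assume it already contains a base64-encoded string
        return content.decode().strip()
    # Not a file path: treat the argument itself as a raw base64 string
    return mdl
```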

92-104: _resolve_datasource mutates the conn_dict parameter.

The function modifies the input dictionary by popping the datasource key. This side effect is intentional (to prevent passing datasource to WrenEngine), but it's worth noting this mutation isn't obvious from the function signature.

Consider adding a brief docstring note about the mutation, or returning a tuple (datasource, cleaned_dict) to make the side effect explicit:

def _resolve_datasource(explicit: str | None, conn_dict: dict) -> str:
    """Return datasource: use explicit --datasource arg first, then pop from conn dict.
    
    Note: Mutates conn_dict by removing the 'datasource' key if present.
    """
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 92 - 104, The helper _resolve_datasource
currently mutates the input conn_dict by popping the 'datasource' key, which is
not obvious from its signature; update the _resolve_datasource docstring to
explicitly state this side-effect (e.g., "Note: Mutates conn_dict by removing
the 'datasource' key if present.") or change the function to return an explicit
tuple (datasource, cleaned_conn_dict) so callers of _resolve_datasource (search
for uses of _resolve_datasource and any place that passes conn_dict into
WrenEngine) handle the cleaned dict instead of relying on in-place mutation.

212-220: Consider using WrenEngine as a context manager.

Since WrenEngine implements __enter__/__exit__ (per wren/src/wren/engine.py lines 147-157), using a with statement would be cleaner and safer than manual try/finally with engine.close(). This pattern appears in both main() and query().

♻️ Proposed fix for main()
-    engine = _build_engine(datasource, mdl, connection_info, connection_file)
-    try:
-        result = engine.query(sql, limit=limit)
-    except Exception as e:
-        typer.echo(f"Error: {e}", err=True)
-        raise typer.Exit(1)
-    finally:
-        engine.close()
-    _print_result(result, output)
+    with _build_engine(datasource, mdl, connection_info, connection_file) as engine:
+        try:
+            result = engine.query(sql, limit=limit)
+        except Exception as e:
+            typer.echo(f"Error: {e}", err=True)
+            raise typer.Exit(1)
+    _print_result(result, output)

The same pattern could be applied to query(), dry_run(), and validate() commands.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 212 - 220, Replace manual engine lifecycle
management in query() (and similarly in main(), dry_run(), validate()) by using
WrenEngine as a context manager: instead of calling _build_engine(...) then
try/except/finally with engine.close(), construct the engine via "with
WrenEngine(...)" (or "with _build_engine(...)" if it returns a WrenEngine) so
__enter__/__exit__ handle closing; keep the existing exception handling (catch
Exception as e and raise typer.Exit(1)) inside the with-block and move
_print_result(result, output) after or inside as appropriate. Locate uses of
_build_engine and the functions query(), main(), dry_run(), validate() to apply
the change consistently.
wren/tests/connectors/test_mysql.py (1)

31-39: Consider using a context manager for the DuckDB connection.

If an exception occurs between duckdb.connect() and duck.close(), the connection will leak. Using a context manager ensures cleanup even on failure.

♻️ Proposed fix
-    duck = duckdb.connect()
-    duck.execute("INSTALL tpch; LOAD tpch; CALL dbgen(sf=0.01)")
-
-    orders_rows = duck.execute(
-        "SELECT o_orderkey, o_custkey, o_orderstatus, "
-        "cast(o_totalprice as double), o_orderdate FROM orders"
-    ).fetchall()
-    customer_rows = duck.execute("SELECT c_custkey, c_name FROM customer").fetchall()
-    duck.close()
+    with duckdb.connect() as duck:
+        duck.execute("INSTALL tpch; LOAD tpch; CALL dbgen(sf=0.01)")
+
+        orders_rows = duck.execute(
+            "SELECT o_orderkey, o_custkey, o_orderstatus, "
+            "cast(o_totalprice as double), o_orderdate FROM orders"
+        ).fetchall()
+        customer_rows = duck.execute("SELECT c_custkey, c_name FROM customer").fetchall()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/tests/connectors/test_mysql.py` around lines 31 - 39, Replace the manual
connect/close pattern with a context manager to guarantee the DuckDB connection
is closed on errors: wrap duckdb.connect() in a with statement (i.e., with
duckdb.connect() as duck:) and move the subsequent duck.execute(...), fetchall()
calls and duck.close() inside that block so the connection is automatically
cleaned up even if an exception occurs; update the code regions referencing duck
= duckdb.connect(), duck.execute(...), and duck.close() accordingly.
.github/workflows/wren-ci.yml (1)

123-124: Consider running apt-get update before installing packages.

Running apt-get install without first running apt-get update can cause failures if the package index is stale, which can happen depending on the GitHub Actions runner image age.

♻️ Proposed fix
       - name: Install system dependencies for mysqlclient
-        run: sudo apt-get install -y default-libmysqlclient-dev pkg-config
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y default-libmysqlclient-dev pkg-config
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/wren-ci.yml around lines 123 - 124, Add an apt-get update
before installing mysqlclient deps: modify the CI step named "Install system
dependencies for mysqlclient" to run an update first (e.g., run sudo apt-get
update && sudo apt-get install -y default-libmysqlclient-dev pkg-config) so the
package index is fresh before invoking apt-get install.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In @.github/workflows/wren-ci.yml:
- Around line 123-124: Add an apt-get update before installing mysqlclient deps:
modify the CI step named "Install system dependencies for mysqlclient" to run an
update first (e.g., run sudo apt-get update && sudo apt-get install -y
default-libmysqlclient-dev pkg-config) so the package index is fresh before
invoking apt-get install.

In `@wren/src/wren/cli.py`:
- Around line 39-49: The _load_manifest function currently base64-encodes files
ending with ".json" but for other existing files returns decoded UTF-8 text
(assuming they already contain a base64 MDL string); update the function by
adding a concise clarifying comment (inside _load_manifest, near the
path.exists() branch) that explains: ".json files are treated as raw JSON and
are base64-encoded, while non-.json existing files are expected to already
contain a base64-encoded MDL string and are decoded/stripped accordingly"
(referencing symbols: function _load_manifest, parameter mdl, local variable
path and the two return branches) so future readers understand the intent.
- Around line 92-104: The helper _resolve_datasource currently mutates the input
conn_dict by popping the 'datasource' key, which is not obvious from its
signature; update the _resolve_datasource docstring to explicitly state this
side-effect (e.g., "Note: Mutates conn_dict by removing the 'datasource' key if
present.") or change the function to return an explicit tuple (datasource,
cleaned_conn_dict) so callers of _resolve_datasource (search for uses of
_resolve_datasource and any place that passes conn_dict into WrenEngine) handle
the cleaned dict instead of relying on in-place mutation.
- Around line 212-220: Replace manual engine lifecycle management in query()
(and similarly in main(), dry_run(), validate()) by using WrenEngine as a
context manager: instead of calling _build_engine(...) then try/except/finally
with engine.close(), construct the engine via "with WrenEngine(...)" (or "with
_build_engine(...)" if it returns a WrenEngine) so __enter__/__exit__ handle
closing; keep the existing exception handling (catch Exception as e and raise
typer.Exit(1)) inside the with-block and move _print_result(result, output)
after or inside as appropriate. Locate uses of _build_engine and the functions
query(), main(), dry_run(), validate() to apply the change consistently.

In `@wren/tests/connectors/test_mysql.py`:
- Around line 31-39: Replace the manual connect/close pattern with a context
manager to guarantee the DuckDB connection is closed on errors: wrap
duckdb.connect() in a with statement (i.e., with duckdb.connect() as duck:) and
move the subsequent duck.execute(...), fetchall() calls and duck.close() inside
that block so the connection is automatically cleaned up even if an exception
occurs; update the code regions referencing duck = duckdb.connect(),
duck.execute(...), and duck.close() accordingly.


📥 Commits

Reviewing files that changed from the base of the PR and between 5fd5e76 and f202add.

⛔ Files ignored due to path filters (1)
  • wren/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • .claude/skills/wren-query/SKILL.md
  • .github/workflows/wren-ci.yml
  • wren/README.md
  • wren/justfile
  • wren/pyproject.toml
  • wren/src/wren/cli.py
  • wren/tests/conftest.py
  • wren/tests/connectors/test_mysql.py

@douenergy douenergy force-pushed the feat/wren-standalone-cli branch from f202add to 62a5bab on March 25, 2026 05:54
@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
wren/src/wren/cli.py (1)

324-329: ⚠️ Potential issue | 🟡 Minor

JSON output format is inconsistent between main path and fallback.

The main path uses to_json(orient="records", lines=True) which outputs JSON Lines format (one JSON object per line), while the fallback uses json.dumps(table.to_pydict()) which outputs a single JSON object with arrays of column values. This could break downstream consumers expecting consistent format.

♻️ Suggested fix for consistent JSON output
     if output == "json":
         try:
             df = table.to_pandas()
             typer.echo(df.to_json(orient="records", lines=True))
         except Exception:
-            typer.echo(json.dumps(table.to_pydict()))
+            # Convert column-oriented dict to list of row dicts for consistency
+            pydict = table.to_pydict()
+            rows = [dict(zip(pydict.keys(), vals)) for vals in zip(*pydict.values())]
+            for row in rows:
+                typer.echo(json.dumps(row))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 324 - 329, The JSON output is
inconsistent: the try path uses table.to_pandas() and
df.to_json(orient="records", lines=True) (JSON Lines), while the except path
json.dumps(table.to_pydict()) emits a single object of column arrays. In the
except block (around the output == "json" branch handling, referencing
table.to_pandas(), table.to_pydict(), and typer.echo), convert the
dict-of-columns from table.to_pydict() into a list/iterator of record objects
(rows) and emit them in the same JSON Lines format (one JSON object per line) so
the fallback matches the primary path's output format.
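The dict-of-columns to JSON Lines conversion the fix describes can be isolated into a standalone helper; the function name here is made up for illustration.

```python
import json


def columns_to_jsonl(pydict: dict) -> str:
    """Convert a column-oriented dict (e.g. pyarrow Table.to_pydict())
    into JSON Lines: one JSON object per row."""
    keys = list(pydict.keys())
    rows = (dict(zip(keys, vals)) for vals in zip(*pydict.values()))
    return "\n".join(json.dumps(row) for row in rows)
```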
🧹 Nitpick comments (5)
wren/tests/connectors/test_redshift.py (1)

47-56: Consider using a context manager for DuckDB consistency.

For consistency with test_mysql.py, consider using a context manager for the DuckDB connection instead of manual close().

♻️ Suggested refactor
 def _load_tpch(conn_str: str) -> None:
     """Generate TPCH sf=0.01 via DuckDB and bulk-load into PostgreSQL."""
-    duck = duckdb.connect()
-    duck.execute("INSTALL tpch; LOAD tpch; CALL dbgen(sf=0.01)")
-    orders_rows = duck.execute(
-        "SELECT o_orderkey, o_custkey, o_orderstatus, "
-        "cast(o_totalprice as double), o_orderdate FROM orders"
-    ).fetchall()
-    customer_rows = duck.execute("SELECT c_custkey, c_name FROM customer").fetchall()
-    duck.close()
+    with duckdb.connect() as duck:
+        duck.execute("INSTALL tpch; LOAD tpch; CALL dbgen(sf=0.01)")
+        orders_rows = duck.execute(
+            "SELECT o_orderkey, o_custkey, o_orderstatus, "
+            "cast(o_totalprice as double), o_orderdate FROM orders"
+        ).fetchall()
+        customer_rows = duck.execute("SELECT c_custkey, c_name FROM customer").fetchall()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/tests/connectors/test_redshift.py` around lines 47 - 56, The _load_tpch
helper currently opens a DuckDB connection with duckdb.connect() and calls
duck.close() manually; modify _load_tpch to use a context manager (with
duckdb.connect() as duck:) so the connection is automatically closed and
consistent with test_mysql.py, updating the uses of duck.execute()/fetchall()
inside that with-block and removing the explicit duck.close() call.
wren/tests/connectors/test_bigquery.py (1)

67-74: Consider using a context manager for DuckDB consistency.

For consistency with test_mysql.py, consider using a context manager for the DuckDB connection.

♻️ Suggested refactor
 def _load_tpch(host: str, port: int) -> None:
     """Generate TPCH sf=0.01 via DuckDB and load into the BigQuery emulator."""
     import google.auth.credentials  # noqa: PLC0415
     from google.api_core.client_options import ClientOptions  # noqa: PLC0415
     from google.cloud import bigquery  # noqa: PLC0415

-    duck = duckdb.connect()
-    duck.execute("INSTALL tpch; LOAD tpch; CALL dbgen(sf=0.01)")
-    orders_rows = duck.execute(
-        "SELECT o_orderkey, o_custkey, o_orderstatus, "
-        "cast(o_totalprice as double), cast(o_orderdate as varchar) FROM orders"
-    ).fetchall()
-    customer_rows = duck.execute("SELECT c_custkey, c_name FROM customer").fetchall()
-    duck.close()
+    with duckdb.connect() as duck:
+        duck.execute("INSTALL tpch; LOAD tpch; CALL dbgen(sf=0.01)")
+        orders_rows = duck.execute(
+            "SELECT o_orderkey, o_custkey, o_orderstatus, "
+            "cast(o_totalprice as double), cast(o_orderdate as varchar) FROM orders"
+        ).fetchall()
+        customer_rows = duck.execute("SELECT c_custkey, c_name FROM customer").fetchall()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/tests/connectors/test_bigquery.py` around lines 67 - 74, Replace the
explicit duckdb.connect(), manual duck.close() pattern with a context manager to
ensure the connection is always closed and consistent with test_mysql.py;
specifically wrap the creation/usage of the DuckDB connection (duck =
duckdb.connect()) in a with-statement (using with duckdb.connect() as duck) and
move the execute/fetchall calls inside that block so orders_rows and
customer_rows are produced while the context is active and no explicit
duck.close() is needed.
wren/src/wren/cli.py (2)

95-111: Consider returning a tuple instead of mutating conn_dict in-place.

The function mutates the input dictionary (documented in docstring), which can be a source of subtle bugs if callers aren't careful. A slightly safer pattern would return a tuple (datasource, filtered_dict):

def _resolve_datasource(explicit: str | None, conn_dict: dict) -> tuple[str, dict]:
    filtered = {k: v for k, v in conn_dict.items() if k != "datasource"}
    ds = explicit or conn_dict.get("datasource")
    if not ds:
        # error handling...
    return ds, filtered

That said, the current implementation works correctly for all call sites since they consume conn_dict immediately after.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 95 - 111, The _resolve_datasource function
currently mutates its conn_dict by popping "datasource"; change it to return a
tuple (datasource, filtered_conn_dict) instead to avoid in-place mutation:
compute ds = explicit or conn_dict.get("datasource"), create filtered_conn = {k:
v for k, v in conn_dict.items() if k != "datasource"}, handle the error path by
printing and raising typer.Exit if ds is None, then return (ds, filtered_conn);
update all call sites that use _resolve_datasource to accept the tuple and use
the returned filtered_conn instead of relying on the original dict being
mutated.

219-248: Optional: Extract shared query-execution logic to reduce duplication.

The main callback (lines 219-225) and query subcommand (lines 243-248) have identical execution logic. You could extract this into a helper:

♻️ Suggested refactor
def _run_query(
    sql: str,
    datasource: str | None,
    mdl: str | None,
    connection_info: str | None,
    connection_file: str | None,
    limit: int | None,
    output: str,
) -> None:
    with _build_engine(datasource, mdl, connection_info, connection_file) as engine:
        try:
            result = engine.query(sql, limit=limit)
        except Exception as e:
            typer.echo(f"Error: {e}", err=True)
            raise typer.Exit(1)
    _print_result(result, output)

Then both main and query would simply call _run_query(...).

This is optional since the duplication is minimal and self-contained.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 219 - 248, Extract the duplicated
query-execution logic into a small helper function (e.g., _run_query) that
accepts the same parameters as main and query (sql, datasource, mdl,
connection_info, connection_file, limit, output), calls _build_engine(... ) as a
context manager, performs engine.query(sql, limit=limit) with the existing
exception handling (typer.echo + raise typer.Exit(1)), and then calls
_print_result(result, output); then replace the bodies of both main and query to
simply call _run_query(...) so both use the single shared implementation.
.claude/skills/README.md (1)

15-19: Add a language identifier to improve documentation clarity.

The code block at lines 15–19 lacks a language identifier. Adding `text` would satisfy linting checks without misrepresenting the content as executable code.

📝 Proposed improvement
-```
+```text
 /wren-query SELECT order_id FROM "orders" LIMIT 5
 /wren-query --transpile SELECT * FROM "orders"
 /wren-query --validate SELECT * FROM "NonExistent"

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @.claude/skills/README.md around lines 15 - 19, Add a language identifier to
the fenced code block containing the three /wren-query examples (the lines
starting with "/wren-query SELECT order_id...", "/wren-query --transpile ...",
and "/wren-query --validate ...") by changing the opening triple backticks to
```text so linting recognizes it as non-executable plain text; leave the
contents unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@wren/src/wren/cli.py`:
- Around line 324-329: The JSON output is inconsistent: the try path uses
table.to_pandas() and df.to_json(orient="records", lines=True) (JSON Lines),
while the except path json.dumps(table.to_pydict()) emits a single object of
column arrays. In the except block (around the output == "json" branch handling,
referencing table.to_pandas(), table.to_pydict(), and typer.echo), convert the
dict-of-columns from table.to_pydict() into a list/iterator of record objects
(rows) and emit them in the same JSON Lines format (one JSON object per line) so
the fallback matches the primary path's output format.

---

Nitpick comments:
In @.claude/skills/README.md:
- Around line 15-19: Add a language identifier to the fenced code block
containing the three /wren-query examples (the lines starting with "/wren-query
SELECT order_id...", "/wren-query --transpile ...", and "/wren-query --validate
...") by changing the opening triple backticks to ```text so linting recognizes
it as non-executable plain text; leave the contents unchanged.

In `@wren/src/wren/cli.py`:
- Around line 95-111: The _resolve_datasource function currently mutates its
conn_dict by popping "datasource"; change it to return a tuple (datasource,
filtered_conn_dict) instead to avoid in-place mutation: compute ds = explicit or
conn_dict.get("datasource"), create filtered_conn = {k: v for k, v in
conn_dict.items() if k != "datasource"}, handle the error path by printing and
raising typer.Exit if ds is None, then return (ds, filtered_conn); update all
call sites that use _resolve_datasource to accept the tuple and use the returned
filtered_conn instead of relying on the original dict being mutated.
- Around line 219-248: Extract the duplicated query-execution logic into a small
helper function (e.g., _run_query) that accepts the same parameters as main and
query (sql, datasource, mdl, connection_info, connection_file, limit, output),
calls _build_engine(... ) as a context manager, performs engine.query(sql,
limit=limit) with the existing exception handling (typer.echo + raise
typer.Exit(1)), and then calls _print_result(result, output); then replace the
bodies of both main and query to simply call _run_query(...) so both use the
single shared implementation.

In `@wren/tests/connectors/test_bigquery.py`:
- Around line 67-74: Replace the explicit duckdb.connect(), manual duck.close()
pattern with a context manager to ensure the connection is always closed and
consistent with test_mysql.py; specifically wrap the creation/usage of the
DuckDB connection (duck = duckdb.connect()) in a with-statement (using with
duckdb.connect() as duck) and move the execute/fetchall calls inside that block
so orders_rows and customer_rows are produced while the context is active and no
explicit duck.close() is needed.

In `@wren/tests/connectors/test_redshift.py`:
- Around line 47-56: The _load_tpch helper currently opens a DuckDB connection
with duckdb.connect() and calls duck.close() manually; modify _load_tpch to use
a context manager (with duckdb.connect() as duck:) so the connection is
automatically closed and consistent with test_mysql.py, updating the uses of
duck.execute()/fetchall() inside that with-block and removing the explicit
duck.close() call.


📥 Commits

Reviewing files that changed from the base of the PR and between f202add and 62a5bab.

⛔ Files ignored due to path filters (1)
  • wren/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (16)
  • .claude/skills/README.md
  • .claude/skills/wren-query/SKILL.md
  • .github/workflows/wren-ci.yml
  • wren/README.md
  • wren/docs/cli.md
  • wren/docs/connections.md
  • wren/justfile
  • wren/pyproject.toml
  • wren/src/wren/cli.py
  • wren/src/wren/connector/bigquery.py
  • wren/src/wren/connector/redshift.py
  • wren/src/wren/model/__init__.py
  • wren/tests/conftest.py
  • wren/tests/connectors/test_bigquery.py
  • wren/tests/connectors/test_mysql.py
  • wren/tests/connectors/test_redshift.py
✅ Files skipped from review due to trivial changes (2)
  • wren/docs/connections.md
  • wren/docs/cli.md
🚧 Files skipped from review as they are similar to previous changes (4)
  • wren/justfile
  • .claude/skills/wren-query/SKILL.md
  • wren/pyproject.toml
  • wren/tests/conftest.py

@douenergy douenergy force-pushed the feat/wren-standalone-cli branch from 62a5bab to 4e837de on March 25, 2026 06:02
@douenergy douenergy requested a review from goldmedal March 26, 2026 06:33
@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
wren/README.md (1)

66-66: Consider PEP 8 import style.

While functionally correct, PEP 8 recommends separate import statements on different lines for clarity.

♻️ Suggested import formatting
-import base64, orjson
+import base64
+
+import orjson
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wren/README.md` at line 66, The import statement "import base64, orjson"
should be split into separate PEP 8–compliant lines; replace the combined import
with two separate imports (one for base64 and one for orjson) where the current
combined statement appears (look for the line containing "import base64,
orjson") so each module is imported on its own line.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@wren/README.md`:
- Line 66: The import statement "import base64, orjson" should be split into
separate PEP 8–compliant lines; replace the combined import with two separate
imports (one for base64 and one for orjson) where the current combined statement
appears (look for the line containing "import base64, orjson") so each module is
imported on its own line.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bb789174-e587-4cdf-a388-0e05df5c14ee

📥 Commits

Reviewing files that changed from the base of the PR and between 9c70021 and 6c458ac.

📒 Files selected for processing (3)
  • .github/workflows/wren-ci.yml
  • wren/README.md
  • wren/docs/cli.md
✅ Files skipped from review due to trivial changes (1)
  • wren/docs/cli.md

@douenergy douenergy force-pushed the feat/wren-standalone-cli branch from 6c458ac to 33e8072 on March 26, 2026 09:55
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
.claude/skills/README.md (1)

15-19: Consider adding a language identifier to the code block.

The code block lacks a language identifier, which reduces readability. While retrieved learnings indicate notification messages in blockquotes should omit language identifiers to avoid security scanner alerts, this is a regular usage example—not a notification message.

📋 Proposed improvement

````diff
-```
+```bash
 /wren-query SELECT order_id FROM "orders" LIMIT 5
 /wren-query --dry-plan SELECT * FROM "orders"
 /wren-query --validate SELECT * FROM "NonExistent"
````

Or use `text` if bash syntax highlighting is not desired for slash commands.

Based on learnings which clarify that only blockquoted notification messages should omit language identifiers for security scanner reasons.

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @.claude/skills/README.md around lines 15 - 19, Add a language identifier to
the fenced code block that contains the three slash-commands (the lines starting
with "/wren-query") so the example renders with highlighting; update the opening
fence from "```" to "```bash" (or "```text" if you prefer no shell highlighting) to
improve readability while keeping the exact command lines unchanged.


</details>

<details>
<summary>wren/src/wren/cli.py (2)</summary><blockquote>

`216-222`: **Minor: Consider moving `_print_result` inside the context manager.**

`_print_result` is called after the `with` block exits, meaning the engine is already closed. While this works because `result` is a PyArrow table that doesn't need the engine, keeping the print inside the context would be more defensive if future changes need engine access during formatting.


<details>
<summary>💡 Suggested adjustment</summary>

```diff
     with _build_engine(datasource, mdl, connection_info, connection_file) as engine:
         try:
             result = engine.query(sql, limit=limit)
         except Exception as e:
             typer.echo(f"Error: {e}", err=True)
             raise typer.Exit(1)
-    _print_result(result, output)
+        _print_result(result, output)
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 216 - 222, The call to _print_result
currently happens after the with _build_engine(...) context ends so the engine
is closed; move the _print_result(result, output) call inside the with block
just after the try/except (i.e., after result = engine.query(...) succeeds) so
formatting runs while the engine is still open; keep the same exception handling
(typer.echo and raise typer.Exit(1)) and ensure you only call _print_result when
result is set (inside the try block) to avoid referencing an uninitialized
variable.
```

</details>

---

`34-47`: **Consider checking file content or path extension consistently.**

The logic checks `mdl.endswith(".json")` (the original argument) rather than `path` to determine if base64 encoding is needed. If a user has a JSON file without `.json` extension (e.g., `my-model`), it will be incorrectly treated as already base64-encoded, causing a cryptic error.


<details>
<summary>💡 Suggested fix</summary>

```diff
 def _load_manifest(mdl: str) -> str:
     """Load MDL from a file path or treat as base64 string directly."""
     path = Path(mdl)
     if path.exists():
         import base64  # noqa: PLC0415

         content = path.read_bytes()
-        if mdl.endswith(".json"):
+        if path.suffix == ".json":
             # Raw JSON file — base64-encode it for WrenEngine
             return base64.b64encode(content).decode()
         # Non-.json file — assume it already contains a base64-encoded MDL string
         return content.decode().strip()
     # Not a file path — treat as a raw base64 string passed directly
     return mdl
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@wren/src/wren/cli.py` around lines 34 - 47, In _load_manifest, the code uses
mdl.endswith(".json") (the original argument) instead of inspecting the resolved
Path, which misclassifies files; change the logic to check path.suffix.lower()
== ".json" to decide when to base64-encode the file contents, and for extra
resilience when the file has no .json extension try to detect JSON by decoding
content to text and running json.loads (or alternately detect valid base64 by
attempting base64.b64decode and verifying re-encoding), then return the
base64-encoded JSON or the stripped base64 string accordingly; update references
to path, content, and base64 handling inside the _load_manifest function.
```

</details>

</blockquote></details>
<details>
<summary>wren/tests/connectors/test_mysql.py (1)</summary><blockquote>

`76-92`: **Minor: Generator return type annotation.**

The fixture uses `yield` but has return type `-> WrenEngine`. While this works at runtime (pytest fixtures are special), the accurate type would be `Generator[WrenEngine, None, None]`. The `# type: ignore[override]` suggests awareness of this.


<details>
<summary>💡 More accurate type hint</summary>

```diff
+from collections.abc import Generator
+
 class TestMySQL(WrenQueryTestSuite):
     manifest = make_tpch_manifest(table_catalog=None, table_schema=_SCHEMA)

    @pytest.fixture(scope="class")
-    def engine(self) -> WrenEngine:  # type: ignore[override]
+    def engine(self) -> Generator[WrenEngine, None, None]:
         with MySqlContainer("mysql:8") as mysql:
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@wren/tests/connectors/test_mysql.py` around lines 76 - 92, The fixture
`engine` is annotated as `-> WrenEngine` but yields, so update its signature to
`def engine(self) -> Generator[WrenEngine, None, None]:` (import `Generator`
from `typing`) and remove the `# type: ignore[override]` comment; keep the body
(using `MySqlContainer`, `WrenEngine`, `DataSource.mysql`, `conn_info`, and
`manifest_str`) unchanged so the type now correctly reflects a generator
fixture.
```

</details>

</blockquote></details>


<details>
<summary>🤖 Prompt for all review comments with AI agents</summary>

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/skills/README.md:

  • Line 9: The README entry for wren-query uses the incorrect term "dry-plan";
    update that phrase to "dry-run" so it matches the actual command documented in
    SKILL.md and the CLI usage /wren-query [sql]—specifically replace "dry-plan"
    with "dry-run" in the table row for wren-query.

Nitpick comments:
In @.claude/skills/README.md:

  • Around line 15-19: Add a language identifier to the fenced code block that
    contains the three slash-commands (the lines starting with "/wren-query") so the
    example renders with highlighting; update the opening fence from "```" to "```bash"
    (or "```text" if you prefer no shell highlighting) to improve readability while
    keeping the exact command lines unchanged.

In @wren/src/wren/cli.py:

  • Around line 216-222: The call to _print_result currently happens after the
    with _build_engine(...) context ends so the engine is closed; move the
    _print_result(result, output) call inside the with block just after the
    try/except (i.e., after result = engine.query(...) succeeds) so formatting runs
    while the engine is still open; keep the same exception handling (typer.echo and
    raise typer.Exit(1)) and ensure you only call _print_result when result is set
    (inside the try block) to avoid referencing an uninitialized variable.
  • Around line 34-47: In _load_manifest, the code uses mdl.endswith(".json") (the
    original argument) instead of inspecting the resolved Path, which misclassifies
    files; change the logic to check path.suffix.lower() == ".json" to decide when
    to base64-encode the file contents, and for extra resilience when the file has
    no .json extension try to detect JSON by decoding content to text and running
    json.loads (or alternately detect valid base64 by attempting base64.b64decode
    and verifying re-encoding), then return the base64-encoded JSON or the stripped
    base64 string accordingly; update references to path, content, and base64
    handling inside the _load_manifest function.

In @wren/tests/connectors/test_mysql.py:

  • Around line 76-92: The fixture engine is annotated as -> WrenEngine but
    yields, so update its signature to def engine(self) -> Generator[WrenEngine, None, None]: (import Generator from typing) and remove the # type: ignore[override] comment; keep the body (using MySqlContainer, WrenEngine,
    DataSource.mysql, conn_info, and manifest_str) unchanged so the type now
    correctly reflects a generator fixture.

</details>

<details>
<summary>🪄 Autofix (Beta)</summary>

Fix all unresolved CodeRabbit comments on this PR:

- [ ] Push a commit to this branch (recommended)
- [ ] Create a new PR with the fixes

</details>

---

<details>
<summary>ℹ️ Review info</summary>

<details>
<summary>⚙️ Run configuration</summary>

**Configuration used**: defaults

**Review profile**: CHILL

**Plan**: Pro

**Run ID**: `9eaee14c-5693-4c65-8958-85ea47d892ba`

</details>

<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between 6c458ace888637c39f53af0061e625eeae3a093b and 33e8072850df745e4325eaeee9e42111aec3d7a9.

</details>

<details>
<summary>⛔ Files ignored due to path filters (1)</summary>

* `wren/uv.lock` is excluded by `!**/*.lock`

</details>

<details>
<summary>📒 Files selected for processing (11)</summary>

* `.claude/skills/README.md`
* `.claude/skills/wren-query/SKILL.md`
* `.github/workflows/wren-ci.yml`
* `wren/README.md`
* `wren/docs/cli.md`
* `wren/docs/connections.md`
* `wren/justfile`
* `wren/pyproject.toml`
* `wren/src/wren/cli.py`
* `wren/tests/conftest.py`
* `wren/tests/connectors/test_mysql.py`

</details>

<details>
<summary>✅ Files skipped from review due to trivial changes (3)</summary>

* wren/docs/connections.md
* wren/docs/cli.md
* .claude/skills/wren-query/SKILL.md

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (3)</summary>

* wren/justfile
* wren/tests/conftest.py
* wren/pyproject.toml

</details>

</details>


douenergy and others added 9 commits March 26, 2026 22:13
- Add tests/connectors/test_mysql.py with full WrenQueryTestSuite via testcontainers
- Register mysql pytest marker in conftest.py
- Add testcontainers[mysql] to dev extra in pyproject.toml
- Add test-mysql command to justfile
- Add test-mysql CI job with mysqlclient system deps in wren-ci.yml

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Default command (no subcommand) runs query when --sql is given
- Auto-discover mdl.json and conn.json from cwd when flags are omitted
- Read datasource from conn.json 'datasource' field; --datasource overrides
- transpile reads datasource from conn.json without requiring connection

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Rewrite README with quick-start, full CLI reference, conn.json formats
  per datasource, Python SDK example, and updated test table (mysql added)
- Add .claude/skills/wren-query/SKILL.md for auto-discovering and running
  wren CLI commands from within Claude Code

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Auto-discovery now looks in ~/.wren/ so `wren --sql '...'` works from
any directory without needing config files in the current path.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Quick-start steps now direct users to create ~/.wren/mdl.json and
~/.wren/conn.json; all "current directory" references updated.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…trix

- Add "not yet integration-tested" note for BigQuery and Redshift in
  README test table
- Merge test-postgres and test-mysql CI jobs into a single
  test-connector matrix job; mysql system deps step is conditional

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Only list connectors that have integration tests in the wren package.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…spile→dry-plan

- Rename default conn file from conn.json to connection_info.json to match
  mcp-server's ~/.wren/connection_info.json default
- Rename `wren transpile` CLI command to `wren dry-plan` (aligns with
  PR Canner#1479 which removed WrenEngine.transpile() in favour of dry_plan())
- Update all references in cli.py, README.md, docs/cli.md,
  docs/connections.md, and .claude/skills/

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@douenergy douenergy force-pushed the feat/wren-standalone-cli branch from 33e8072 to e07e6a1 on March 26, 2026 14:13
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
.github/workflows/wren-ci.yml (1)

75-85: Consider fail-fast: false for independent connector tests.

Since postgres and mysql tests are independent, you may want to add fail-fast: false to ensure both jobs run to completion even if one fails. This provides a complete picture of test status across connectors.

     strategy:
+      fail-fast: false
       matrix:
         include:

This is optional—the default fail-fast: true is also reasonable if you prefer faster feedback on failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/wren-ci.yml around lines 75 - 85, The CI matrix under the
strategy: matrix: include: block currently runs connector jobs with the default
fail-fast behavior; add fail-fast: false to the strategy configuration so
independent connector jobs (the postgres and mysql entries in the matrix) both
run to completion even if one fails. Update the workflow's strategy block to
include fail-fast: false so the matrix execution will not stop after the first
failing job.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@wren/src/wren/cli.py`:
- Around line 169-171: The CLI currently accepts any --output value and falls
back to table for unknown strings; update the handling for the OutputOpt
annotated option so the code validates the value against the allowed set
{"json","csv","table"} and rejects unsupported values instead of silently using
table. Implement a small validator function (or use typer.Argument/Option
callback) referenced from OutputOpt that raises typer.BadParameter (or
typer.Exit with a clear error) when the provided value is not one of the allowed
formats; apply the same validator/logic to the other identical output handling
block at the second occurrence so both places consistently reject invalid
formats and produce a helpful error message.
- Around line 34-45: Expand and validate user-supplied paths before treating
them as inline data: in _load_manifest(), call Path(mdl).expanduser() (or use
os.path.expanduser) and assign that to path, then check path.exists() against
the expanded path; only if the expanded path exists read and (for .json)
base64-encode it, otherwise treat the original input as inline base64 MDL. Apply
the same expand-and-validate change to the connection-file loading code (the
loader handling the connection info at the later block) so explicit overrides
like --mdl and --connection-file that use ~ are correctly resolved before
falling back to inline content.
- Around line 62-80: When decoding JSON for connection_info or from the
connection_file in _resolve_datasource, ensure the result of json.loads(...) is
a mapping (dict) before returning; if it's not a dict (e.g., list, string,
number) raise a typer.Exit with a clear CLI error via typer.echo stating "Error:
--connection-info must be a JSON object" (and similarly for file paths: "Error:
connection file {path_str} must contain a JSON object"). Apply the same check
and error behavior for the parallel block at lines 92-103 so callers of
_resolve_datasource never receive non-dict types.
- Around line 277-280: The code currently always calls
_load_conn(connection_file, required=False) which makes wren dry-plan fail if
the default connection file is malformed; change the logic so you only call
_load_conn when an explicit datasource is not provided. In practice, keep
manifest_str = _load_manifest(_require_mdl(mdl)), then set conn_dict = None
unless datasource is falsy, and only call _load_conn(connection_file,
required=False) to populate conn_dict when datasource is None/empty; finally
pass that conn_dict into _resolve_datasource(datasource, conn_dict) as before
(refer to _load_conn, _resolve_datasource, _load_manifest, _require_mdl, and the
variables manifest_str/conn_dict/ds_str).

---

Nitpick comments:
In @.github/workflows/wren-ci.yml:
- Around line 75-85: The CI matrix under the strategy: matrix: include: block
currently runs connector jobs with the default fail-fast behavior; add
fail-fast: false to the strategy configuration so independent connector jobs
(the postgres and mysql entries in the matrix) both run to completion even if
one fails. Update the workflow's strategy block to include fail-fast: false so
the matrix execution will not stop after the first failing job.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6d54b797-d5aa-4cd5-9a7e-cf582798329d

📥 Commits

Reviewing files that changed from the base of the PR and between 33e8072 and e07e6a1.

⛔ Files ignored due to path filters (1)
  • wren/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
  • .claude/skills/README.md
  • .claude/skills/wren-query/SKILL.md
  • .github/workflows/wren-ci.yml
  • wren/README.md
  • wren/docs/cli.md
  • wren/docs/connections.md
  • wren/justfile
  • wren/pyproject.toml
  • wren/src/wren/cli.py
  • wren/tests/conftest.py
  • wren/tests/connectors/test_mysql.py
✅ Files skipped from review due to trivial changes (5)
  • wren/justfile
  • .claude/skills/README.md
  • wren/docs/connections.md
  • wren/docs/cli.md
  • .claude/skills/wren-query/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • wren/tests/connectors/test_mysql.py
  • wren/pyproject.toml

Two platform-specific wheel entries (arm64 + x86_64 macos) caused
`uv lock` to fail on CI: "Dependency wren-core-py has missing source
field but has more than one matching package". Keep only the arm64
entry; CI replaces it at build time via --upgrade-package.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Labels

ci, dependencies, documentation

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants