diff --git a/README-zh.md b/README-zh.md index badf524..87f0e34 100644 --- a/README-zh.md +++ b/README-zh.md @@ -64,52 +64,37 @@ MySQL 在当前实现中支持 `explain_query`,但不支持 ## 快速开始 -`sql-query-mcp` 提供两种官方支持的 PyPI 接入方式。两种都适合正式使用,不 -只是本地试跑。 +如果你想先把服务跑起来,之后再继续了解其余部分,可以按以下步骤进行。 -1. 先决定让 MCP 客户端如何启动服务。 - -如果你希望先安装一次,之后在客户端里直接调用命令,可以使用安装命令模式。 +1. 创建虚拟环境并安装项目。 ```bash -pipx install sql-query-mcp +python3.10 -m venv .venv +source .venv/bin/activate +python -m pip install --upgrade pip +pip install -e . ``` -如果你希望把包来源直接写进 MCP 配置,让客户端通过 `pipx` 启动服务,可以使 -用托管启动模式。 +2. 复制示例连接配置。 ```bash -pipx run --spec sql-query-mcp sql-query-mcp +cp config/connections.example.json config/connections.json ``` -如果你想固定版本,可使用 `pipx install 'sql-query-mcp==X.Y.Z'`,或者使用 -`pipx run --spec 'sql-query-mcp==X.Y.Z' sql-query-mcp`。已安装版本可通过 -`pipx upgrade sql-query-mcp` 升级。 - -2. 创建配置文件。 - -无论你选择哪种启动方式,都建议把服务配置文件放在仓库之外,便于统一维护。 +3. 设置用于保存数据库 DSN 的环境变量。 ```bash -mkdir -p ~/.config/sql-query-mcp +export PG_CONN_CRM_PROD_MUQIAO_RO='postgresql://user:password@host:5432/dbname' +export MYSQL_CONN_CRM_PROD_MUQIAO_RO='mysql://user:password@host:3306/crm' ``` -然后把本节后面的示例 JSON 保存为 -`~/.config/sql-query-mcp/connections.json`。 - -3. 在你的 MCP 客户端中注册这个服务。 +4. 
在你的 MCP 客户端中注册这个服务。 - Codex: `docs/codex-setup.md` - OpenCode: `docs/opencode-setup.md` -安装命令模式表示客户端直接运行 `sql-query-mcp`。托管启动模式表示客户端通 -过 `pipx run` 启动服务。 - -无论使用哪种方式,都建议把 `SQL_QUERY_MCP_CONFIG` 和真实数据库 DSN 放在 -MCP 客户端的 `env` 或 `environment` 配置里,而不是单独在 shell 中导出。 - -对于 `pipx install` 和 `pipx run`,建议显式设置 `SQL_QUERY_MCP_CONFIG`。 -默认的 `config/connections.json` 更适合源码 checkout 和本地开发场景。 +默认配置路径是 `config/connections.json`。如果你需要使用其他位置,请设置 +`SQL_QUERY_MCP_CONFIG`。 示例配置如下。 @@ -122,24 +107,24 @@ MCP 客户端的 `env` 或 `environment` 配置里,而不是单独在 shell }, "connections": [ { - "connection_id": "crm_prod_main_ro", + "connection_id": "crm_prod_muqiao_ro", "engine": "postgres", - "label": "CRM PostgreSQL production / Main / read-only", + "label": "CRM PostgreSQL production / Muqiao / read-only", "env": "prod", - "tenant": "main", + "tenant": "muqiao", "role": "ro", - "dsn_env": "PG_CONN_CRM_PROD_MAIN_RO", + "dsn_env": "PG_CONN_CRM_PROD_MUQIAO_RO", "enabled": true, "default_schema": "public" }, { - "connection_id": "crm_mysql_prod_main_ro", + "connection_id": "crm_mysql_prod_muqiao_ro", "engine": "mysql", - "label": "CRM MySQL production / Main / read-only", + "label": "CRM MySQL production / Muqiao / read-only", "env": "prod", - "tenant": "main", + "tenant": "muqiao", "role": "ro", - "dsn_env": "MYSQL_CONN_CRM_PROD_MAIN_RO", + "dsn_env": "MYSQL_CONN_CRM_PROD_MUQIAO_RO", "enabled": true, "default_database": "crm" } diff --git a/README.md b/README.md index e33a98e..954408f 100644 --- a/README.md +++ b/README.md @@ -72,61 +72,51 @@ implementation. ## Quick start -`sql-query-mcp` supports two official PyPI-based setup modes. Both are intended -for real usage, not just local testing. +If you want to get the server running first and explore the rest later, follow +these steps. -1. Choose how you want your MCP client to start the server. - -Use installed command mode if you want a simple local command after one -install. +1. Create a virtual environment and install the project. 
```bash -pipx install sql-query-mcp +python3.10 -m venv .venv +source .venv/bin/activate +python -m pip install --upgrade pip +pip install sql-query-mcp ``` -Use managed launch mode if you want the package source declared directly in -your MCP client config. +Install a specific release with `pip install sql-query-mcp==X.Y.Z` if you want +to pin a version. Published release artifacts are also attached to each GitHub +Release. + +2. Copy the example connection config. ```bash -pipx run --spec sql-query-mcp sql-query-mcp +cp config/connections.example.json config/connections.json ``` -Pin a version with `pipx install 'sql-query-mcp==X.Y.Z'` or -`pipx run --spec 'sql-query-mcp==X.Y.Z' sql-query-mcp`. Upgrade installed -command mode with `pipx upgrade sql-query-mcp`. +3. Export your database DSNs as environment variables. -2. Create a config file. - -The server configuration should live outside the repository so the same file -works with either startup mode. +These example names match the copied `config/connections.example.json` file. ```bash -mkdir -p ~/.config/sql-query-mcp +export PG_CONN_CRM_PROD_MUQIAO_RO='postgresql://user:password@host:5432/dbname' +export MYSQL_CONN_CRM_PROD_MUQIAO_RO='mysql://user:password@host:3306/crm' +export SQL_QUERY_MCP_CONFIG='/absolute/path/to/sql-query-mcp/config/connections.json' ``` -Then save the example JSON later in this section as -`~/.config/sql-query-mcp/connections.json`. - -3. Register the server in your MCP client. +4. Register the server in your MCP client. - Codex: `docs/codex-setup.md` (Chinese) - OpenCode: `docs/opencode-setup.md` (Chinese) -Installed command mode means your client runs `sql-query-mcp` directly. -Managed launch mode means your client starts the server through `pipx run`. - -In both modes, put `SQL_QUERY_MCP_CONFIG` and your real database DSNs in the -MCP client's environment block instead of exporting them in your shell. 
- The console entry point is `sql-query-mcp`, which maps to `sql_query_mcp.app:main`. The PyPI install name is `sql-query-mcp`, and the Python package import path is `sql_query_mcp`. -For `pipx install` and `pipx run`, set `SQL_QUERY_MCP_CONFIG` explicitly to -your config file path. The default `config/connections.json` path is mainly for -source checkouts and local development. +The default config path is `config/connections.json`. If you need a different +location, set `SQL_QUERY_MCP_CONFIG`. The example config looks like this. @@ -139,24 +129,24 @@ The example config looks like this. }, "connections": [ { - "connection_id": "crm_prod_main_ro", + "connection_id": "crm_prod_muqiao_ro", "engine": "postgres", - "label": "CRM PostgreSQL production / Main / read-only", + "label": "CRM PostgreSQL production / Muqiao / read-only", "env": "prod", - "tenant": "main", + "tenant": "muqiao", "role": "ro", - "dsn_env": "PG_CONN_CRM_PROD_MAIN_RO", + "dsn_env": "PG_CONN_CRM_PROD_MUQIAO_RO", "enabled": true, "default_schema": "public" }, { - "connection_id": "crm_mysql_prod_main_ro", + "connection_id": "crm_mysql_prod_muqiao_ro", "engine": "mysql", - "label": "CRM MySQL production / Main / read-only", + "label": "CRM MySQL production / Muqiao / read-only", "env": "prod", - "tenant": "main", + "tenant": "muqiao", "role": "ro", - "dsn_env": "MYSQL_CONN_CRM_PROD_MAIN_RO", + "dsn_env": "MYSQL_CONN_CRM_PROD_MUQIAO_RO", "enabled": true, "default_database": "crm" } diff --git a/config/connections.example.json b/config/connections.example.json index 23afaa2..2085dff 100644 --- a/config/connections.example.json +++ b/config/connections.example.json @@ -6,24 +6,24 @@ }, "connections": [ { - "connection_id": "crm_prod_main_ro", + "connection_id": "crm_prod_muqiao_ro", "engine": "postgres", - "label": "CRM PostgreSQL 生产库 / Main / 只读", + "label": "CRM PostgreSQL 生产库 / 穆桥 / 只读", "env": "prod", - "tenant": "main", + "tenant": "muqiao", "role": "ro", - "dsn_env": "PG_CONN_CRM_PROD_MAIN_RO", + 
"dsn_env": "PG_CONN_CRM_PROD_MUQIAO_RO", "enabled": true, "default_schema": "public" }, { - "connection_id": "crm_mysql_prod_main_ro", + "connection_id": "crm_mysql_prod_muqiao_ro", "engine": "mysql", - "label": "CRM MySQL 生产库 / Main / 只读", + "label": "CRM MySQL 生产库 / 穆桥 / 只读", "env": "prod", - "tenant": "main", + "tenant": "muqiao", "role": "ro", - "dsn_env": "MYSQL_CONN_CRM_PROD_MAIN_RO", + "dsn_env": "MYSQL_CONN_CRM_PROD_MUQIAO_RO", "enabled": true, "default_database": "crm" } diff --git a/docs/codex-setup.md b/docs/codex-setup.md index 0208556..14cf245 100644 --- a/docs/codex-setup.md +++ b/docs/codex-setup.md @@ -10,47 +10,38 @@ 辑的 `~/.codex/config.toml` 文件。 - Python 3.10+ -- `pipx` +- `sql-query-mcp` 仓库本地副本 - PostgreSQL 或 MySQL 只读 DSN - Codex 本地配置权限 -## 第一步:选择运行方式 +## 第一步:安装服务 -`sql-query-mcp` 在 Codex 中支持两种正式接入方式。它们的区别不在于是否可 -用于生产,而在于你希望把启动责任放在哪一层。 - -- 安装命令模式:你先执行一次 `pipx install`,之后 Codex 直接运行 - `sql-query-mcp` -- 托管启动模式:你不预先暴露命令,而是在 Codex 配置里写入 `pipx run` - 启动链路 - -安装命令模式先执行下面的安装命令。 +先在仓库目录中创建虚拟环境并安装项目。 ```bash -pipx install sql-query-mcp +cd /absolute/path/to/sql-query-mcp +python3.10 -m venv .venv +source .venv/bin/activate +python -m pip install --upgrade pip +pip install -e . 
``` -托管启动模式不要求预先安装 `sql-query-mcp`,但要求本机已安装 `pipx`。 -如果你想先在终端验证启动链路,可以运行: +安装完成后,可执行文件通常位于以下路径。 ```bash -pipx run --spec sql-query-mcp sql-query-mcp +/absolute/path/to/sql-query-mcp/.venv/bin/sql-query-mcp ``` -如果你需要固定版本,可以把 `sql-query-mcp` 替换为 -`'sql-query-mcp==X.Y.Z'`。 - ## 第二步:准备连接配置 -通过 PyPI 安装后,服务不再依赖仓库副本。更直接的方式是把配置文件放到你 -自己的目录里,并通过环境变量告诉 MCP 服务去哪里读取。 +服务默认读取 `config/connections.json`。你可以先复制示例文件,再按实际连 +接修改。 ```bash -mkdir -p ~/.config/sql-query-mcp +cp config/connections.example.json config/connections.json ``` -把下面这份最小配置保存为 -`~/.config/sql-query-mcp/connections.json`。 +下面是一个最小配置示例。 ```json { @@ -99,63 +90,48 @@ mkdir -p ~/.config/sql-query-mcp 这里的超时是数据库会话级超时,不是 Codex 客户端超时。 -## 第三步:注册到 Codex +## 第三步:准备环境变量 -连接配置里的 `dsn_env` 存放的是环境变量名,不是真实连接串。因此你需要在 -Codex 的 MCP 配置里同时提供配置文件路径和真实 DSN。 +连接配置里的 `dsn_env` 存放的是环境变量名,不是真实连接串。因此你还需要 +为对应 DSN 设置环境变量。 -注意下面这些规则。 +```bash +export PG_CONN_CRM_PROD_MAIN_RO='postgresql://username:password@host:5432/dbname' +export MYSQL_CONN_CRM_PROD_MAIN_RO='mysql://username:password@host:3306/crm' +export SQL_QUERY_MCP_CONFIG='/absolute/path/to/sql-query-mcp/config/connections.json' +``` + +注意下面这些配置规则。 - `engine` 必须明确写成 `postgres` 或 `mysql` - PostgreSQL 使用 `default_schema` - MySQL 使用 `default_database` - 不要把真实 DSN 写回 `connections.json` -打开 `~/.codex/config.toml`,然后选择下面任意一种配置。 - -### 安装命令模式 - -如果你已经执行过 `pipx install sql-query-mcp`,推荐使用这段配置。它的启 -动更直接,也更容易和 `pipx upgrade` 配合。 - -```toml -[mcp_servers.sql_query_mcp] -command = "sql-query-mcp" -startup_timeout_sec = 20 -tool_timeout_sec = 60 - -[mcp_servers.sql_query_mcp.env] -SQL_QUERY_MCP_CONFIG = "/Users/yourname/.config/sql-query-mcp/connections.json" -PG_CONN_CRM_PROD_MAIN_RO = "postgresql://username:password@host:5432/dbname" -MYSQL_CONN_CRM_PROD_MAIN_RO = "mysql://username:password@host:3306/crm" -``` - -### 托管启动模式 +## 第四步:注册到 Codex -如果你希望把包来源直接写在 Codex 配置中,可以使用这段配置。它的效果和 -`npx ...` 风格类似,只是这里使用的是 `pipx run`。 +打开 `~/.codex/config.toml`,把下面这段配置加入 MCP servers。 ```toml 
[mcp_servers.sql_query_mcp] -command = "pipx" -args = ["run", "--spec", "sql-query-mcp", "sql-query-mcp"] -startup_timeout_sec = 20 -tool_timeout_sec = 60 +command = "/absolute/path/to/sql-query-mcp/.venv/bin/sql-query-mcp" +type = "stdio" +startup_timeout_ms = 20000 [mcp_servers.sql_query_mcp.env] -SQL_QUERY_MCP_CONFIG = "/Users/yourname/.config/sql-query-mcp/connections.json" +SQL_QUERY_MCP_CONFIG = "/absolute/path/to/sql-query-mcp/config/connections.json" PG_CONN_CRM_PROD_MAIN_RO = "postgresql://username:password@host:5432/dbname" MYSQL_CONN_CRM_PROD_MAIN_RO = "mysql://username:password@host:3306/crm" ``` -这两种方式都由 `env` 注入配置路径和真实 DSN。你只需要保留其中一种,不要 -同时配置两份同名 server。 +这段配置里,`command` 指向 MCP 可执行文件,`env` 注入配置路径和真实 +DSN。 -## 第四步:重启 Codex +## 第五步:重启 Codex 保存配置后,重启 Codex 或新开一个会话,让新的 MCP 服务注册生效。 -## 第五步:验证接入 +## 第六步:验证接入 接入完成后,建议先用简单问题确认服务可用,再逐步进入真实查询。 @@ -176,7 +152,7 @@ MYSQL_CONN_CRM_PROD_MAIN_RO = "mysql://username:password@host:3306/crm" 载是否成功。 - `SQL_QUERY_MCP_CONFIG` 是否指向正确文件 -- `SQL_QUERY_MCP_CONFIG` 指向的文件是否是合法 JSON +- `config/connections.json` 是否是合法 JSON - `connections.json` 是否至少有一个 `enabled: true` 的连接 ### 提示缺少 DSN 环境变量 diff --git a/docs/opencode-setup.md b/docs/opencode-setup.md index 550966e..c02cc06 100644 --- a/docs/opencode-setup.md +++ b/docs/opencode-setup.md @@ -9,46 +9,37 @@ 开始前,请先确认你已经具备本地运行 MCP 服务的基础条件。 - Python 3.10+ -- `pipx` +- `sql-query-mcp` 仓库本地副本 - PostgreSQL 或 MySQL 只读账号 - 可编辑的 `~/.config/opencode/opencode.json` -## 第一步:选择运行方式 +## 第一步:安装服务 -`sql-query-mcp` 在 OpenCode 中支持两种正式接入方式。它们都适合日常使用, -区别只在于你希望先安装命令,还是把启动链路直接写进配置。 - -- 安装命令模式:先执行 `pipx install`,然后在 OpenCode 中直接运行 - `sql-query-mcp` -- 托管启动模式:在 OpenCode 配置中直接写入 `pipx run` 命令链 - -安装命令模式先执行下面的安装命令。 +先在本地仓库中创建虚拟环境并安装项目。 ```bash -pipx install sql-query-mcp +cd /absolute/path/to/sql-query-mcp +python3.10 -m venv .venv +source .venv/bin/activate +python -m pip install --upgrade pip +pip install -e . 
``` -托管启动模式不要求预先安装 `sql-query-mcp`,但要求本机已经可用 `pipx`。 -如果你想先在终端验证启动链路,可以运行: +安装完成后,可执行文件通常位于以下路径。 ```bash -pipx run --spec sql-query-mcp sql-query-mcp +/absolute/path/to/sql-query-mcp/.venv/bin/sql-query-mcp ``` -如果你需要固定版本,可以把 `sql-query-mcp` 替换为 -`'sql-query-mcp==X.Y.Z'`。 - ## 第二步:准备连接配置 -通过 PyPI 安装后,服务不依赖本地仓库目录。更实用的做法是自己维护一个独立 -的配置文件,并通过环境变量指向它。 +服务默认读取 `config/connections.json`。你可以从示例文件开始。 ```bash -mkdir -p ~/.config/sql-query-mcp +cp config/connections.example.json config/connections.json ``` -把下面这份最小配置保存为 -`~/.config/sql-query-mcp/connections.json`。 +下面是一个最小配置示例。 ```json { @@ -97,49 +88,27 @@ mkdir -p ~/.config/sql-query-mcp 这个超时作用在数据库会话层,不是 OpenCode 自己的会话超时。 -## 第三步:注册到 OpenCode +## 第三步:准备环境变量 -`connections.json` 只存环境变量名,因此你需要在 OpenCode 的 MCP 配置里一并 -提供配置文件路径和真实 DSN。 +`connections.json` 只存环境变量名,因此你还需要准备真实 DSN 和配置路径。 + +```bash +export PG_CONN_CRM_PROD_MAIN_RO='postgresql://username:password@host:5432/dbname' +export MYSQL_CONN_CRM_PROD_MAIN_RO='mysql://username:password@host:3306/crm' +export SQL_QUERY_MCP_CONFIG='/absolute/path/to/sql-query-mcp/config/connections.json' +``` 请保持下面这些规则一致。 - `engine` 必须显式配置 -- PostgreSQL 默认命名空间使用 `default_schema` -- MySQL 默认命名空间使用 `default_database` +- PostgreSQL 使用 `schema` +- MySQL 使用 `database` - `dsn_env` 必须和真实环境变量名一致 -OpenCode 的全局配置文件通常位于 `~/.config/opencode/opencode.json`。按你 -选择的运行方式加入对应配置即可。 - -### 安装命令模式 - -如果你已经执行过 `pipx install sql-query-mcp`,可以直接让 OpenCode 调用 -`sql-query-mcp`。 - -```json -{ - "$schema": "https://opencode.ai/config.json", - "mcp": { - "sql_query_mcp": { - "type": "local", - "command": ["sql-query-mcp"], - "enabled": true, - "timeout": 20000, - "environment": { - "SQL_QUERY_MCP_CONFIG": "/Users/yourname/.config/sql-query-mcp/connections.json", - "PG_CONN_CRM_PROD_MAIN_RO": "postgresql://username:password@host:5432/dbname", - "MYSQL_CONN_CRM_PROD_MAIN_RO": "mysql://username:password@host:3306/crm" - } - } - } -} -``` - -### 托管启动模式 +## 第四步:注册到 OpenCode -如果你希望像 `npx` 风格那样,把包来源和启动方式一起写在配置里,可以改成 -下面这样。 
+OpenCode 的全局配置文件通常位于 `~/.config/opencode/opencode.json`。在文件 +中加入下面这段 MCP 配置。 ```json { @@ -148,16 +117,11 @@ OpenCode 的全局配置文件通常位于 `~/.config/opencode/opencode.json`。 "sql_query_mcp": { "type": "local", "command": [ - "pipx", - "run", - "--spec", - "sql-query-mcp", - "sql-query-mcp" + "/absolute/path/to/sql-query-mcp/.venv/bin/sql-query-mcp" ], "enabled": true, - "timeout": 20000, "environment": { - "SQL_QUERY_MCP_CONFIG": "/Users/yourname/.config/sql-query-mcp/connections.json", + "SQL_QUERY_MCP_CONFIG": "/absolute/path/to/sql-query-mcp/config/connections.json", "PG_CONN_CRM_PROD_MAIN_RO": "postgresql://username:password@host:5432/dbname", "MYSQL_CONN_CRM_PROD_MAIN_RO": "mysql://username:password@host:3306/crm" } @@ -167,13 +131,13 @@ OpenCode 的全局配置文件通常位于 `~/.config/opencode/opencode.json`。 ``` 如果你的 `opencode.json` 已经包含其他字段,只合并 `mcp` 节点即可,不要整 -个文件覆盖。两种方式只保留一种即可。 +个文件覆盖。 -## 第四步:重启 OpenCode +## 第五步:重启 OpenCode 保存配置后,重启 OpenCode 或新开会话,让 MCP 服务重新加载。 -## 第五步:验证接入 +## 第六步:验证接入 建议先跑一组最小验证动作,确认注册、连接和权限都正常。 @@ -207,7 +171,7 @@ OpenCode 的全局配置文件通常位于 `~/.config/opencode/opencode.json`。 - `dsn_env` 和 `environment` 中的变量名是否完全一致 - 对应数据库账号是否具备目标 schema 或 database 的只读权限 -- 是否把 `default_schema` 和 `default_database` 配反了 +- 是否把 PostgreSQL 的 `schema` 和 MySQL 的 `database` 用反了 ### 查询被安全规则拦截 diff --git a/docs/project-overview.md b/docs/project-overview.md index 4110475..c668d6c 100644 --- a/docs/project-overview.md +++ b/docs/project-overview.md @@ -20,7 +20,7 @@ - 连接配置和真实 DSN 分离,DSN 只通过环境变量注入 - `engine` 必须显式声明,不从 `connection_id` 推断 -- PostgreSQL 和 MySQL 保留各自原生的命名空间概念与默认值字段 +- PostgreSQL 使用 `schema`,MySQL 使用 `database` - 工具执行前先过只读 SQL 校验 - 每次调用都记录审计日志 @@ -33,8 +33,8 @@ 每个连接都由 `connection_id` 唯一标识,并声明自己的数据库引擎、环境、租 户、角色和 DSN 环境变量名。 -源码运行时,服务默认读取 `config/connections.json`。通过 PyPI 接入时,推荐 -显式使用 `SQL_QUERY_MCP_CONFIG` 指向你自己的配置文件路径。 +服务默认读取 `config/connections.json`,也支持通过 +`SQL_QUERY_MCP_CONFIG` 指向自定义路径。 实现上,这部分由 `sql_query_mcp/config.py` 负责加载和校验。 @@ -43,8 +43,8 @@ 项目不会把 PostgreSQL 和 MySQL 
粗暴映射成同一个抽象命名空间,而是保留 各自的原生概念。 -- PostgreSQL: 使用 `schema`,默认值字段为 `default_schema` -- MySQL: 使用 `database`,默认值字段为 `default_database` +- PostgreSQL: 使用 `schema` +- MySQL: 使用 `database` 服务会在进入数据库前完成参数合法性校验和默认值回退。 diff --git a/docs/release-process.md b/docs/release-process.md index e22b917..8c56a9e 100644 --- a/docs/release-process.md +++ b/docs/release-process.md @@ -50,8 +50,7 @@ git push origin vX.Y.Z PyPI 和 GitHub Release。 - 在 PyPI 上确认目标版本已经可见。 -- 确认 `pipx install 'sql-query-mcp==X.Y.Z'` 可用,或确认 - `pipx run --spec 'sql-query-mcp==X.Y.Z' sql-query-mcp` 可启动。 +- 确认 `pip install sql-query-mcp==X.Y.Z` 可用。 - 在 GitHub 上确认 `vX.Y.Z` Release 已创建。 - 确认 Release 附件包含 `sdist` 和 `wheel`。 diff --git a/docs/superpowers/plans/2026-03-19-repo-polish-and-awesome-mcp-plan.md b/docs/superpowers/plans/2026-03-19-repo-polish-and-awesome-mcp-plan.md deleted file mode 100644 index 4c4e140..0000000 --- a/docs/superpowers/plans/2026-03-19-repo-polish-and-awesome-mcp-plan.md +++ /dev/null @@ -1,559 +0,0 @@ -# Repository polish and Awesome MCP submission Implementation Plan - -> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. - -**Goal:** Make `sql-query-mcp` presentation-ready for external discovery and community contributions, then prepare accurate submission copy for `awesome-mcp-servers`. - -**Architecture:** Keep runtime behavior unchanged and focus on repository surface area. Use an English-first `README.md` as the external landing page, add a Chinese documentation entry point, add contribution and roadmap docs, verify `docs/git-workflow.md` against Git Flow-adjacent best practices, and align package metadata and license details with the new public-facing positioning. 
- -**Tech Stack:** Markdown docs, MIT license text, Python packaging metadata in `pyproject.toml`, existing `unittest` test suite, GitHub repository conventions. - ---- - -## File map - -- `README.md` - - Main English-first landing page for external users. -- `docs/README.zh-CN.md` - - Chinese documentation entry point for existing Chinese-speaking users. -- `CONTRIBUTING.md` - - Public contribution entry point that links to `docs/git-workflow.md`. -- `docs/roadmap.md` - - Current support, candidate adapters, and contribution opportunities. -- `docs/adapter-development.md` - - Practical guide for adding a new database adapter. -- `docs/git-workflow.md` - - Source-of-truth Git workflow doc, revised only if concrete inconsistencies or risky examples are found. -- `LICENSE` - - MIT license text. -- `pyproject.toml` - - Package metadata aligned with the new public-facing documentation. -- `.github/ISSUE_TEMPLATE/bug_report.md` - - Bug report template for external users. -- `.github/ISSUE_TEMPLATE/feature_request.md` - - Feature request template for adapter requests and roadmap input. 
- -## Task 1: Audit public claims and implementation boundaries - -**Files:** -- Read: `README.md` -- Read: `docs/project-overview.md` -- Read: `docs/api-reference.md` -- Read: `docs/git-workflow.md` -- Read: `pyproject.toml` -- Read: `sql_query_mcp/app.py` -- Read: `sql_query_mcp/validator.py` -- Read: `sql_query_mcp/registry.py` -- Test: `tests/test_validator.py` -- Test: `tests/test_registry.py` - -- [ ] **Step 1: Re-read the source-of-truth implementation files** - -Confirm the exact current claims that public docs may make: - -```text -Current support: PostgreSQL, MySQL -Core safety claims: read-only validation, AST-based parsing, explicit engine handling, audit logging -``` - -- [ ] **Step 2: Write a short claim checklist in your scratch notes** - -Use this checklist while editing docs: - -```text -- Do not claim support beyond PostgreSQL/MySQL -- Do not imply write capabilities -- Do not imply generic DB support is already shipped -- Keep future adapters labeled as candidate/community direction -``` - -- [ ] **Step 3: Run the current test suite before doc edits** - -Run: `PYTHONPATH=. python3 -m unittest discover -s tests` -Expected: all existing tests pass - -- [ ] **Step 4: Record any existing documentation mismatches** - -Focus on mismatches that will affect the new README, roadmap, and contribution -docs. - -- [ ] **Step 5: Commit the audit checkpoint if you created any helper notes in tracked files** - -```bash -git add -git commit -m "docs: add repo polish planning notes" -``` - -If no tracked files changed, skip this commit. - -## Task 2: Rewrite `README.md` as the English-first landing page - -**Files:** -- Modify: `README.md` -- Read: `docs/project-overview.md` -- Read: `docs/api-reference.md` -- Read: `docs/codex-setup.md` -- Read: `docs/opencode-setup.md` - -- [ ] **Step 1: Replace the title and opening section with English-first positioning** - -Use copy shaped like this: - -```md -# sql-query-mcp - -Read-only SQL MCP server for AI assistants. 
It currently supports PostgreSQL -and MySQL, with an adapter-oriented path toward more database engines. -``` - -- [ ] **Step 2: Add a quick-start section that stays implementation-accurate** - -Keep the installation and configuration path short: - -```md -## Quick start - -1. Choose `pipx install sql-query-mcp` or `pipx run --spec sql-query-mcp sql-query-mcp`. -2. Save a config file outside the repository. -3. Put `SQL_QUERY_MCP_CONFIG` and real DSNs in the MCP client's environment block. -4. Register the server in your MCP client. -``` - -- [ ] **Step 3: Add a capability section with current tools only** - -Summarize only the existing MCP tools and engines already implemented. - -- [ ] **Step 4: Add a safety section with verifiable claims only** - -Include only claims backed by current code and docs: - -```md -- Explicit engine configuration -- Read-only SQL validation with `sqlglot` -- Engine-specific namespace handling -- Audit logging -``` - -- [ ] **Step 5: Add documentation, roadmap, and contributing links** - -Link to: - -```md -- `docs/README.zh-CN.md` -- `docs/project-overview.md` -- `docs/api-reference.md` -- `docs/roadmap.md` -- `CONTRIBUTING.md` -``` - -- [ ] **Step 6: Add a short license section** - -```md -## License - -MIT -``` - -- [ ] **Step 7: Re-read the final README for tone and scope control** - -Check that future direction never reads like shipped support. 
- -- [ ] **Step 8: Commit the README rewrite** - -```bash -git add README.md -git commit -m "docs: rewrite README for external users" -``` - -## Task 3: Add the Chinese documentation entry point - -**Files:** -- Create: `docs/README.zh-CN.md` -- Read: `README.md` -- Read: `docs/project-overview.md` -- Read: `docs/api-reference.md` - -- [ ] **Step 1: Create `docs/README.zh-CN.md` with a short Chinese overview** - -Include: - -```md -# sql-query-mcp 中文说明 - -`sql-query-mcp` 是一个面向 AI 助手的只读 SQL MCP 服务。 -主 README 采用英文优先,本文提供中文入口。 -``` - -- [ ] **Step 2: Link Chinese readers to the most relevant docs** - -At minimum include: - -```md -- `README.md` -- `docs/project-overview.md` -- `docs/api-reference.md` -- `docs/codex-setup.md` -- `docs/opencode-setup.md` -``` - -- [ ] **Step 3: Add a short note about current support and future direction** - -Use wording that separates: - -```text -当前支持: PostgreSQL / MySQL -未来方向: 候选 adapters,不代表已支持 -``` - -- [ ] **Step 4: Self-review for concise Chinese wording** - -Keep it as an entry page, not a second full README. - -- [ ] **Step 5: Commit the Chinese entry doc** - -```bash -git add docs/README.zh-CN.md -git commit -m "docs: add Chinese documentation entry" -``` - -## Task 4: Add contribution guidance that points to `docs/git-workflow.md` - -**Files:** -- Create: `CONTRIBUTING.md` -- Read: `docs/git-workflow.md` -- Read: `AGENT.md` -- Read: `README.md` - -- [ ] **Step 1: Create the opening section and contribution scope** - -Start with: - -```md -# Contributing - -This repository welcomes documentation improvements, bug fixes, and new -database adapter contributions. 
-``` - -- [ ] **Step 2: Add issue and pull request guidance** - -Cover: - -```md -- Open an issue for significant changes -- Keep changes focused -- Update docs when behavior changes -- Run the relevant tests before submitting a PR -``` - -- [ ] **Step 3: Add the Git workflow source-of-truth rule** - -Use explicit wording like: - -```md -The authoritative Git workflow for this repository is documented in -`docs/git-workflow.md`. -``` - -- [ ] **Step 4: Add a short section for new database adapter contributions** - -Link contributors to `docs/adapter-development.md` and `docs/roadmap.md`. - -- [ ] **Step 5: Add the test command used by this repository** - -```bash -PYTHONPATH=. python3 -m unittest discover -s tests -``` - -- [ ] **Step 6: Re-read the file to confirm it references, not duplicates, Git rules** - -Do not repeat detailed branch strategy, release flow, or hotfix flow. - -- [ ] **Step 7: Commit the contribution guide** - -```bash -git add CONTRIBUTING.md -git commit -m "docs: add contribution guide" -``` - -## Task 5: Add roadmap and adapter contribution docs - -**Files:** -- Create: `docs/roadmap.md` -- Create: `docs/adapter-development.md` -- Read: `docs/project-overview.md` -- Read: `docs/api-reference.md` -- Read: `sql_query_mcp/adapters/postgres.py` -- Read: `sql_query_mcp/adapters/mysql.py` -- Read: `tests/test_metadata.py` -- Read: `tests/test_executor.py` - -- [ ] **Step 1: Create `docs/roadmap.md` with support taxonomy headings** - -Use sections shaped like this: - -```md -## Supported now -## Candidate adapters -## Open for contribution -``` - -- [ ] **Step 2: Fill `docs/roadmap.md` with accurate current support** - -Current support must only mention PostgreSQL and MySQL. - -- [ ] **Step 3: Add candidate adapter direction without promising delivery** - -Use wording like: - -```text -Candidate adapters may include SQLite, SQL Server, or ClickHouse, but they are -not officially supported until implemented, tested, and documented. 
-``` - -- [ ] **Step 4: Create `docs/adapter-development.md` with the adapter boundary** - -Explain where adapters live and what they interact with: - -```text -sql_query_mcp/adapters/ -sql_query_mcp/registry.py -sql_query_mcp/introspection.py -sql_query_mcp/executor.py -``` - -- [ ] **Step 5: Add minimum expectations for a new adapter** - -Cover areas such as connection handling, metadata inspection, query execution, -namespace behavior, and tests. - -- [ ] **Step 6: Link both docs back to `CONTRIBUTING.md` and `docs/git-workflow.md` where appropriate** - -Keep Git policy centralized. - -- [ ] **Step 7: Commit roadmap and adapter docs** - -```bash -git add docs/roadmap.md docs/adapter-development.md -git commit -m "docs: add roadmap and adapter guide" -``` - -## Task 6: Add MIT license and align package metadata - -**Files:** -- Create: `LICENSE` -- Modify: `pyproject.toml` -- Read: `README.md` - -- [ ] **Step 1: Add the standard MIT license text to `LICENSE`** - -Use the standard MIT template with the correct copyright holder. - -- [ ] **Step 2: Update `pyproject.toml` metadata to reflect the license** - -Add or adjust metadata such as: - -```toml -[project] -license = { text = "MIT" } - -classifiers = [ - "License :: OSI Approved :: MIT License", -] -``` - -- [ ] **Step 3: Verify the package description still matches the public README positioning** - -Keep the description aligned with current support. - -- [ ] **Step 4: Run a lightweight packaging validation step** - -Run one of the following, depending on the available tooling: - -```bash -python3 -m build -``` - -Expected: build metadata resolves successfully - -If `build` is not installed and you do not want to add tooling, use a minimal -syntax validation path that confirms `pyproject.toml` remains valid. 
- -- [ ] **Step 5: Commit the license and metadata changes** - -```bash -git add LICENSE pyproject.toml -git commit -m "chore: add MIT license metadata" -``` - -## Task 7: Review and minimally fix `docs/git-workflow.md` - -**Files:** -- Modify: `docs/git-workflow.md` -- Read: `AGENT.md` -- Read: `CONTRIBUTING.md` - -- [ ] **Step 1: Compare the current workflow doc against the approved scope** - -Check for concrete issues only: - -```text -- rule/example mismatch -- ambiguous release handling -- ambiguous hotfix handling -- advice that conflicts with PR-first expectations -``` - -- [ ] **Step 2: Fix the `main` rule/example consistency if needed** - -If the doc says `main` should not be directly pushed to, adjust examples or -clarify that merges happen via protected-branch workflow rather than normal -direct development. - -- [ ] **Step 3: Clarify hotfix synchronization when an active `release/*` branch exists** - -Add a brief note only if needed. Keep it minimal and specific. - -- [ ] **Step 4: Add short advisory notes for branch protection or CI only if they reduce ambiguity** - -Do not redesign repository governance. - -- [ ] **Step 5: Re-read the full file to confirm it is still your repo-specific workflow, not a generic template** - -- [ ] **Step 6: Commit the workflow doc changes if any were made** - -```bash -git add docs/git-workflow.md -git commit -m "docs: refine git workflow guidance" -``` - -If no edits were needed, skip this commit and record that the doc was retained -as-is. - -## Task 8: Optionally add lightweight GitHub issue templates - -**Files:** -- Create: `.github/ISSUE_TEMPLATE/bug_report.md` -- Create: `.github/ISSUE_TEMPLATE/feature_request.md` -- Read: `docs/roadmap.md` -- Read: `CONTRIBUTING.md` - -- [ ] **Step 1: Decide whether to include issue templates in this pass** - -Include them only if the main documentation tasks are already complete and the -scope remains small. If not, skip this task without blocking the rest of the -plan. 
- -- [ ] **Step 2: Create a bug report template** - -Include fields for environment, MCP client, database engine, reproduction steps, -expected behavior, and actual behavior. - -- [ ] **Step 3: Create a feature request template** - -Include fields for use case, requested capability, target database engine, and -whether the request is for current support or a candidate adapter. - -- [ ] **Step 4: Keep both templates aligned with the new support taxonomy** - -Use terms like `Supported now` and `Candidate adapters` where useful. - -- [ ] **Step 5: Commit the issue templates** - -```bash -git add .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md -git commit -m "docs: add issue templates" -``` - -If this task was skipped, do not create this commit. - -## Task 9: Verify links, run tests, and prepare Awesome MCP copy - -**Files:** -- Modify: `README.md` -- Modify: `CONTRIBUTING.md` -- Modify: `docs/roadmap.md` -- Modify: `docs/adapter-development.md` -- Modify: `docs/git-workflow.md` -- Read: `docs/superpowers/specs/2026-03-19-repo-polish-and-awesome-mcp-design.md` - -- [ ] **Step 1: Check all internal links across modified docs** - -Verify every referenced file exists and uses the final path in each edited or -newly created document: - -```text -- README.md -- docs/README.zh-CN.md -- CONTRIBUTING.md -- docs/roadmap.md -- docs/adapter-development.md -- docs/git-workflow.md -``` - -Pay special attention to backlinks among `CONTRIBUTING.md`, -`docs/git-workflow.md`, `docs/README.zh-CN.md`, `docs/roadmap.md`, and -`docs/adapter-development.md`. - -- [ ] **Step 2: Run the repository test suite again** - -Run: `PYTHONPATH=. 
python3 -m unittest discover -s tests` -Expected: all tests pass - -- [ ] **Step 3: Run a cross-doc consistency review** - -Compare these artifacts side by side: - -```text -- README.md -- docs/README.zh-CN.md -- CONTRIBUTING.md -- docs/roadmap.md -- Awesome MCP submission copy -``` - -Confirm they all preserve the same support boundary: - -```text -- PostgreSQL/MySQL only for current support -- read-only only -- future adapters clearly labeled as not yet supported -``` - -- [ ] **Step 4: Draft the Awesome MCP entry line in a scratch note or final response** - -Use copy shaped like this: - -```md -- [andyWang1688/sql-query-mcp](https://github.com/andyWang1688/sql-query-mcp) 🐍 🏠 - Read-only SQL MCP server for AI assistants with schema inspection, read-only query execution, AST-based validation, and audit logging. Currently supports PostgreSQL and MySQL, with an adapter-oriented path toward more database engines. -``` - -- [ ] **Step 5: Draft the Awesome MCP PR body** - -Keep it short and fact-based: - -```md -## Summary -- Add `sql-query-mcp` to the Databases section -- Read-only SQL MCP server for AI assistants -- Current support: PostgreSQL and MySQL -``` - -- [ ] **Step 6: Re-read the draft submission copy against the codebase and final docs** - -Remove any wording that overstates current support. - -- [ ] **Step 7: Commit the final doc polish if verification required edits** - -```bash -git add README.md CONTRIBUTING.md docs/roadmap.md docs/adapter-development.md docs/git-workflow.md pyproject.toml LICENSE docs/README.zh-CN.md -git commit -m "docs: finalize repo polish for external discovery" -``` - -If issue templates were created in Task 8, add them in the same commit: - -```bash -git add .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md -``` - -If there are no remaining changes after previous commits, skip this commit. 
diff --git a/docs/superpowers/plans/2026-03-20-release-and-pypi-automation-plan.md b/docs/superpowers/plans/2026-03-20-release-and-pypi-automation-plan.md deleted file mode 100644 index 750139a..0000000 --- a/docs/superpowers/plans/2026-03-20-release-and-pypi-automation-plan.md +++ /dev/null @@ -1,954 +0,0 @@ -# Release and PyPI automation Implementation Plan - -> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. - -**Goal:** Add a production-ready release pipeline that publishes `sql-query-mcp` -to PyPI from `release/vX.Y.Z` tags, creates GitHub Releases, and opens the -back-merge PRs to `main` and `develop`. - -**Architecture:** Keep the runtime MCP server behavior unchanged and add release -automation around the existing Python package. Put naming and version parsing in -a small testable Python helper module, use GitHub Actions for CI and tagged -release orchestration, and update repository docs so PyPI installation and the -new release flow become the source of truth. - -**Tech Stack:** Python 3.10+, `unittest`, `setuptools`, `build`, `twine`, -`actionlint`, GitHub Actions, `gh` CLI in Actions runners, Markdown docs, -PyPI API token. - ---- - -## File map - -This file map locks down the implementation boundaries before any edits start. -The release pipeline stays small by keeping pure parsing logic in Python, -orchestration in workflow YAML, and operator guidance in docs. - -- `sql_query_mcp/release_metadata.py` - - New helper module for parsing `vX.Y.Z` tags, deriving - `release/vX.Y.Z`, reading `pyproject.toml`, resolving the effective release - tag in normal and recovery runs, and exposing a small CLI for - workflow-friendly outputs. -- `tests/test_release_metadata.py` - - Unit tests for version parsing, branch naming, and CLI output from the new - helper module. 
-- `pyproject.toml` - - Package metadata for PyPI, including license and author details, plus any - packaging metadata cleanup needed for publishing. -- `LICENSE` - - Repository license text included in source distributions and wheel metadata. -- `.gitignore` - - Ignore local build artifacts such as `dist/` and `build/`. -- `.github/workflows/ci.yml` - - Branch and pull request validation workflow for `feature/*`, `develop`, - `release/*`, `main`, and PRs, including `actionlint` validation for the - workflow files themselves. -- `.github/workflows/release.yml` - - Tag-driven publish workflow with permissions, concurrency, validation, - build, PyPI upload, GitHub Release create-or-update, and back-merge PR - creation. -- `README.md` - - Primary user install path updated to `pipx install sql-query-mcp` and - `pipx run --spec sql-query-mcp sql-query-mcp`, with linked release - guidance. -- `docs/release-process.md` - - Human-facing release runbook for PyPI secrets, release branch prep, tagging, - failure recovery, and post-publish checks. -- `docs/git-workflow.md` - - Formalize `release/vX.Y.Z` as the repository rule and align examples with - the automated flow. - -## Task 1: Verify release prerequisites and capture the current baseline - -This task reduces avoidable surprises before any code or workflow changes. It -confirms the package name is publishable, records the current test baseline, -and ensures later failures are caused by new work rather than pre-existing -breakage. - -**Files:** -- Read: `pyproject.toml` -- Read: `README.md` -- Read: `docs/git-workflow.md` -- Read: `docs/superpowers/specs/2026-03-20-release-and-pypi-automation-design.md` -- Test: `tests/` - -- [ ] **Step 1: Run the current unit test suite before changes** - -Run: `PYTHONPATH=. python3 -m unittest discover -s tests` -Expected: all current tests pass with no new failures. 
- -- [ ] **Step 2: Check the intended PyPI distribution name** - -Run: `python3 - <<'PY' -import json -import urllib.error -import urllib.request - -url = 'https://pypi.org/pypi/sql-query-mcp/json' -try: - with urllib.request.urlopen(url) as response: - payload = json.load(response) - print('exists', sorted(payload['releases'].keys())[-1] if payload['releases'] else 'no-releases') -except urllib.error.HTTPError as exc: - if exc.code == 404: - print('missing') - else: - raise -PY` -Expected: either `exists ...` or `missing`. - -Interpret the result using this rule: - -```text -- exists: manually confirm the PyPI project is already owned by this repository before continuing -- missing: treat the name as apparently available, but verify it again in PyPI before the first real publish -``` - -If the name is already owned by an unrelated project, stop implementation and -create a follow-up rename plan before changing workflows, docs, or package -metadata. - -- [ ] **Step 3: Record the release invariants in scratch notes** - -Keep this checklist beside the implementation work: - -```text -- distribution name: sql-query-mcp -- import path: sql_query_mcp -- release branch: release/vX.Y.Z -- release tag: vX.Y.Z -- project.version: X.Y.Z -- public license text: confirm before Task 3 -- maintainer display name: confirm before Task 3 -``` - -- [ ] **Step 4: Install local verification tools** - -Run: `python3 -m pip install --upgrade build twine` -Expected: `build` and `twine` install successfully into the active -environment. - -Install `actionlint` using any supported local method for your environment. -Then run: `actionlint -version` -Expected: `actionlint` becomes available locally for pre-push workflow checks. 
-
-- [ ] **Step 5: Stop early if license or maintainer facts are still unclear**
-
-If the repository still does not provide unambiguous license text or maintainer
-display name after the prerequisite audit, stop and get those values from the
-user before starting Task 3.
-
-- [ ] **Step 6: Skip committing unless a tracked prerequisite note was added**
-
-```bash
-git add <path-to-prerequisite-note>
-git commit -m "chore: record release prerequisites"
-```
-
-If no tracked files changed, do not create a commit.
-
-## Task 2: Add a testable release metadata helper module
-
-This task creates the only new Python logic needed for the release pipeline.
-Keeping tag and branch parsing in a focused module makes the rules easy to test
-and avoids burying fragile string parsing inside workflow YAML.
-
-**Files:**
-- Create: `sql_query_mcp/release_metadata.py`
-- Create: `tests/test_release_metadata.py`
-- Read: `pyproject.toml`
-
-- [ ] **Step 1: Write the failing tests for version and branch parsing**
-
-Create `tests/test_release_metadata.py` with coverage shaped like this:
-
-```python
-import tempfile
-import unittest
-from pathlib import Path
-
-from sql_query_mcp.release_metadata import (
-    build_release_context,
-    decide_backmerge_action,
-    parse_version_tag,
-    resolve_effective_tag,
-    should_skip_pypi_upload,
-)
-
-
-class ReleaseMetadataTestCase(unittest.TestCase):
-    def test_parse_version_tag_accepts_v_prefix(self) -> None:
-        self.assertEqual("0.2.0", parse_version_tag("v0.2.0"))
-
-    def test_parse_version_tag_rejects_missing_v_prefix(self) -> None:
-        with self.assertRaises(ValueError):
-            parse_version_tag("0.2.0")
-
-    def test_parse_version_tag_rejects_non_semver_tags(self) -> None:
-        with self.assertRaises(ValueError):
-            parse_version_tag("vnext")
-
-    def test_build_release_context_derives_release_branch(self) -> None:
-        with tempfile.TemporaryDirectory() as temp_dir:
-            pyproject = Path(temp_dir) / "pyproject.toml"
-            pyproject.write_text("[project]\nversion = '0.2.0'\n", encoding="utf-8")
- context = build_release_context("v0.2.0", pyproject) - - self.assertEqual("0.2.0", context.version) - self.assertEqual("release/v0.2.0", context.release_branch) - - def test_resolve_effective_tag_prefers_dispatch_input(self) -> None: - self.assertEqual( - "v0.2.0", - resolve_effective_tag( - event_name="workflow_dispatch", - github_ref_name="develop", - input_tag="v0.2.0", - ), - ) - - def test_should_skip_pypi_upload_requires_recovery_confirmation(self) -> None: - self.assertFalse( - should_skip_pypi_upload( - is_recovery_run=False, - pypi_version_exists=True, - recovery_confirmed=False, - ) - ) - - def test_decide_backmerge_action_for_main_never_skips(self) -> None: - self.assertEqual("create", decide_backmerge_action("main", False, True)) -``` - -- [ ] **Step 2: Run the targeted test file to verify it fails first** - -If your local interpreter is Python 3.10, install the fallback parser before -running the tests: - -Run: `python3 -m pip install tomli` -Expected: `tomli` installs successfully for local Python 3.10 test runs. - -Run: `PYTHONPATH=. python3 -m unittest tests.test_release_metadata -v` -Expected: FAIL with `ModuleNotFoundError` or missing symbol errors for -`sql_query_mcp.release_metadata`. 
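The `tomli` fallback matters because `tomllib` only entered the standard library in Python 3.11; on Python 3.10 the third-party `tomli` package provides the same API. A standalone sketch of the import pattern the helper module relies on:

```python
# tomllib is stdlib from Python 3.11 onward; tomli backports the same
# API to Python 3.10 (pip install tomli).
try:
    import tomllib
except ModuleNotFoundError:  # Python 3.10
    import tomli as tomllib

# Parse a minimal pyproject fragment the way read_project_version will.
data = tomllib.loads("[project]\nversion = '0.2.0'\n")
print(data["project"]["version"])  # 0.2.0
```

Because the fallback is aliased to the `tomllib` name, the rest of the module never needs to care which parser was imported.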
- -- [ ] **Step 3: Implement the minimal helper module and CLI** - -Create `sql_query_mcp/release_metadata.py` with focused pieces only: - -```python -from __future__ import annotations - -try: - import tomllib -except ModuleNotFoundError: # Python 3.10 - import tomli as tomllib - -from dataclasses import dataclass -from pathlib import Path -import argparse -import re - - -TAG_PATTERN = re.compile(r"^v\d+\.\d+\.\d+$") -VERSION_PATTERN = re.compile(r"^\d+\.\d+\.\d+$") - - -@dataclass(frozen=True) -class ReleaseContext: - tag: str - version: str - release_branch: str - - -def resolve_effective_tag(event_name: str, github_ref_name: str, input_tag: str | None) -> str: - if event_name == "workflow_dispatch": - if not input_tag: - raise ValueError("workflow_dispatch requires an explicit tag") - return input_tag - return github_ref_name - - -def parse_version_tag(tag: str) -> str: - if not TAG_PATTERN.match(tag): - raise ValueError("release tags must match vX.Y.Z") - return tag[1:] - - -def read_project_version(pyproject_path: Path) -> str: - data = tomllib.loads(pyproject_path.read_text(encoding="utf-8")) - version = data.get("project", {}).get("version") - if not version: - raise ValueError("missing project.version") - if not VERSION_PATTERN.match(version): - raise ValueError("project.version must match X.Y.Z") - return version - - -def build_release_context(tag: str, pyproject_path: Path) -> ReleaseContext: - version = parse_version_tag(tag) - project_version = read_project_version(pyproject_path) - if version != project_version: - raise ValueError("tag version does not match pyproject version") - return ReleaseContext(tag=tag, version=version, release_branch=f"release/{tag}") - - -def should_skip_pypi_upload( - is_recovery_run: bool, - pypi_version_exists: bool, - recovery_confirmed: bool, -) -> bool: - return is_recovery_run and pypi_version_exists and recovery_confirmed - - -def decide_backmerge_action(target: str, has_open_pr: bool, has_diff: bool) -> str: - if 
has_open_pr: - return "reuse" - if target == "main": - return "create" - if has_diff: - return "create" - return "skip" -``` - -Add a tiny CLI entry that prints workflow-friendly lines such as: - -```text -version=0.2.0 -release_branch=release/v0.2.0 -tag=v0.2.0 -``` - -- [ ] **Step 4: Re-run the targeted helper tests** - -Run: `PYTHONPATH=. python3 -m unittest tests.test_release_metadata -v` -Expected: PASS. - -- [ ] **Step 5: Run the helper CLI against the current repository version** - -Run: `CURRENT_TAG="v$(python3 - <<'PY' -from pathlib import Path -try: - import tomllib -except ModuleNotFoundError: - import tomli as tomllib -data = tomllib.loads(Path('pyproject.toml').read_text(encoding='utf-8')) -version = data['project']['version'] -print(version) -PY -)" && PYTHONPATH=. python3 -m sql_query_mcp.release_metadata --tag "$CURRENT_TAG" --pyproject pyproject.toml` -Expected: prints key/value lines including the current `version=` and matching -`release_branch=release/v`. - -- [ ] **Step 6: Commit the helper module and tests** - -```bash -git add sql_query_mcp/release_metadata.py tests/test_release_metadata.py -git commit -m "test: add release metadata helpers" -``` - -## Task 3: Tighten package metadata for PyPI publication - -This task makes the package credible as a real published distribution. The goal -is to improve metadata, include a license file, and ensure local build outputs -match what the release workflow will later upload. - -**Files:** -- Modify: `pyproject.toml` -- Create: `LICENSE` -- Modify: `.gitignore` -- Test: `tests/test_release_metadata.py` - -- [ ] **Step 1: Add a failing metadata test if helper coverage needs it** - -If `tests/test_release_metadata.py` does not yet check project version reads, -add one more assertion like this: - -```python -def test_read_project_version_reads_project_table(self) -> None: - ... 
- self.assertEqual("0.2.0", read_project_version(pyproject)) -``` - -- [ ] **Step 2: Update `pyproject.toml` with PyPI-facing metadata** - -Before editing, confirm the exact public license choice and author or -maintainer display name from existing repository facts or direct user guidance. -Do not invent values. - -Keep the existing build backend and script entry point, then add only the -missing publish-facing fields: - -```toml -[project] -# Preserve all existing runtime dependencies. -# Append this conditional dependency only if it is not already present. -# tomli>=2; python_version < '3.11' -license = { text = "" } -authors = [{ name = "" }] -classifiers = [ - "License :: OSI Approved :: ", - "Programming Language :: Python :: 3", - "Topic :: Database", -] - -[tool.setuptools] -license-files = ["LICENSE"] -``` - -Do not switch away from `setuptools` in this task. - -- [ ] **Step 3: Add the repository `LICENSE` file** - -Create the exact confirmed license text file named `LICENSE`. - -- [ ] **Step 4: Ignore local build artifacts** - -Update `.gitignore` with at least: - -```gitignore -build/ -dist/ -``` - -- [ ] **Step 5: Build the package locally** - -Run: `rm -rf build dist ./*.egg-info && python3 -m build` -Expected: PASS and create both `dist/*.tar.gz` and `dist/*.whl`. - -- [ ] **Step 6: Validate the generated metadata** - -Run: `python3 -m twine check dist/*` -Expected: PASS with valid metadata and long description checks. 
- -- [ ] **Step 7: Inspect built artifacts for the license file** - -Run: - -```bash -python3 - <<'PY' -import glob -import tarfile -import zipfile - -sdist = glob.glob('dist/*.tar.gz')[0] -wheel = glob.glob('dist/*.whl')[0] - -with tarfile.open(sdist, 'r:gz') as archive: - sdist_names = archive.getnames() -with zipfile.ZipFile(wheel) as archive: - wheel_names = archive.namelist() - -assert any(name.endswith('/LICENSE') or name.endswith('LICENSE') for name in sdist_names) -assert any(name.endswith('/LICENSE') or name.endswith('LICENSE') for name in wheel_names) -PY -``` - -Expected: PASS and confirm `LICENSE` is present in both sdist and wheel. - -- [ ] **Step 8: Commit the packaging metadata changes** - -```bash -git add pyproject.toml LICENSE .gitignore -git commit -m "build: prepare package metadata for PyPI" -``` - -## Task 4: Add the branch and pull request CI workflow - -This task adds the always-on validation workflow that catches problems before a -tagged release. Keep the workflow small: set up Python, install tooling, run -tests, and prove the package builds. 
- -**Files:** -- Create: `.github/workflows/ci.yml` -- Read: `pyproject.toml` -- Read: `README.md` -- Test: `tests/` - -- [ ] **Step 1: Create `.github/workflows/ci.yml` with the correct triggers** - -Start with a workflow skeleton like this: - -```yaml -name: CI - -on: - push: - branches: - - main - - develop - - 'feature/**' - - 'release/**' - pull_request: - -jobs: - lint-workflows: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - uses: rhysd/actionlint@v1 - - test: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - uses: actions/setup-python@v5 - with: - python-version: '3.10' -``` - -- [ ] **Step 2: Add the exact validation commands used by this repository** - -Keep the workflow split into two focused jobs: - -```text -- lint-workflows: validate `.github/workflows/*.yml` with `actionlint` -- test: run Python tests and package build checks -``` - -Make the Python validation job run these commands in order: - -```bash -python -m pip install --upgrade pip build -pip install -e . -PYTHONPATH=. python3 -m unittest discover -s tests -python -m build -``` - -- [ ] **Step 3: Run the same commands locally before trusting the workflow** - -Run: `python3 -m pip install --upgrade pip build && pip install -e . && PYTHONPATH=. python3 -m unittest discover -s tests && python3 -m build` -Expected: PASS locally with a fresh `dist/` directory. - -Run: `actionlint` -Expected: PASS with no workflow syntax or expression errors. - -- [ ] **Step 4: Re-read the YAML to keep the job focused** - -Do not add matrix builds, release publishing, or PR creation to `ci.yml`. -Keep `actionlint` in CI so workflow changes fail fast before release day. - -- [ ] **Step 5: Commit the CI workflow** - -```bash -git add .github/workflows/ci.yml -git commit -m "ci: add branch validation workflow" -``` - -## Task 5: Add the tagged release workflow with publish and back-merge PRs - -This task is the center of the feature. 
It must stay strict about version -validation, publish sequencing, and idempotent recovery after partial success. - -**Files:** -- Create: `.github/workflows/release.yml` -- Read: `sql_query_mcp/release_metadata.py` -- Read: `pyproject.toml` -- Test: `tests/test_release_metadata.py` - -- [ ] **Step 1: Keep workflow-facing helper tests mandatory** - -Add or retain tests for every workflow-facing branch the release logic depends -on, including CLI output, dispatch-tag validation, recovery skip gating, -tag/version mismatch failure, and back-merge decisions for both `main` and -`develop`. Use a test shaped like this: - -```python -def test_cli_prints_release_context(self) -> None: - ... - self.assertIn("release_branch=release/v0.2.0", stdout) -``` - -- [ ] **Step 2: Create the `release.yml` trigger, permissions, and concurrency** - -Start with this structure: - -```yaml -name: Release - -on: - push: - tags: - - 'v*' - workflow_dispatch: - inputs: - tag: - description: Recovery tag such as v0.2.0 - required: true - pypi_already_published: - description: Confirm this is a post-publish recovery run - required: true - type: boolean - -permissions: - contents: write - pull-requests: write - -concurrency: - group: release-${{ inputs.tag || github.ref_name }} - cancel-in-progress: false - -jobs: - publish: - runs-on: ubuntu-latest - env: - PYPI_API_TOKEN: ${{ secrets.PYPI_API_TOKEN }} - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} - steps: - - uses: actions/checkout@v4 - with: - fetch-depth: 0 - ref: ${{ github.event_name == 'workflow_dispatch' && inputs.tag || github.ref }} - - uses: actions/setup-python@v5 - with: - python-version: '3.10' -``` - -- [ ] **Step 3: Add the validation and build steps before upload** - -The workflow must do all of the following before publishing: - -```bash -test -n "$GITHUB_TOKEN" -test -n "$GH_TOKEN" -python -m pip install --upgrade pip build twine -rm -rf build dist ./*.egg-info -pip install -e . 
-PYTHONPATH=. python3 -m unittest discover -s tests -python -m build -python -m twine check dist/* -python -m venv .release-smoke -. .release-smoke/bin/activate -pip install dist/*.whl -sql-query-mcp --help -deactivate -rm -rf .release-smoke -``` - -If GitHub auth tokens are missing, fail immediately with a clear error message -before building or editing releases or PRs. Check `PYPI_API_TOKEN` only in the -branch that will actually upload to PyPI. - -Also use the helper CLI to derive `version` and `release_branch`, then fetch -the matching release branch and verify the tagged commit is reachable from it. - -Use a bounded retry loop for the branch fetch so tag/branch push timing does -not create false negatives: - -```bash -for attempt in 1 2 3 4 5; do - git fetch origin "refs/heads/$RELEASE_BRANCH:refs/remotes/origin/$RELEASE_BRANCH" && break - sleep 3 -done -git merge-base --is-ancestor "$TAG" "origin/$RELEASE_BRANCH" -``` - -If the branch still cannot be fetched after the last retry, fail the workflow -with a clear message and stop before publishing. Treat a zero exit code from -`git merge-base --is-ancestor` as PASS; any non-zero exit code is a hard fail. - -- [ ] **Step 4: Add the PyPI publish step with guarded rerun behavior** - -Implement the publish logic so it behaves like this: - -```bash -test -n "$PYPI_API_TOKEN" -python -m twine upload --non-interactive -u __token__ -p "$PYPI_API_TOKEN" dist/* -``` - -If a rerun happens after a confirmed successful publish of the same tag, -recognize that state and skip the duplicate upload instead of failing the whole -workflow. 
- -Use this stricter rerun rule: - -```text -- first run: always attempt the upload normally -- compensation rerun: only skip upload if the same tag is already confirmed as published for this repository -- uncertain state: fail and require manual confirmation instead of guessing -``` - -Use this decision procedure as the v1 source of truth: - -```text -- tag push path: publish normally and write a clear success marker into the workflow summary -- compensation path: support `workflow_dispatch` recovery that requires the maintainer to provide the tag and confirm `pypi_already_published=true` -- if that confirmation is missing, do not skip upload just because a version exists remotely -``` - -Back the decision with an explicit existence check against PyPI before skipping -upload in a recovery run: - -```bash -VERSION_TO_CHECK="$VERSION" python - <<'PY' -import json -import os -import urllib.error -import urllib.request - -package = "sql-query-mcp" -version = os.environ["VERSION_TO_CHECK"] -url = f"https://pypi.org/pypi/{package}/json" -try: - with urllib.request.urlopen(url) as response: - payload = json.load(response) -except urllib.error.HTTPError as exc: - if exc.code == 404: - raise SystemExit(1) - raise -raise SystemExit(0 if version in payload["releases"] else 1) -PY -``` - -Only skip upload when all three conditions are true: recovery run, explicit -maintainer confirmation, and the version exists in the PyPI JSON response. - -- [ ] **Step 5: Add GitHub Release create-or-update behavior** - -Use `gh` commands in the workflow so both first runs and compensation reruns are -safe: - -First generate `release-notes.md` inside the workflow, then create or update the -GitHub Release from that file. 
- -```bash -gh release view "$TAG" >/dev/null 2>&1 || gh release create "$TAG" dist/* --generate-notes -gh release view "$TAG" --json body --jq '.body' > auto-notes.md -cp auto-notes.md release-notes.md -printf '\n\n## Install\n\n`pipx install '\''sql-query-mcp==%s'\''`\n' "$VERSION" >> release-notes.md -gh release upload "$TAG" dist/* --clobber -gh release edit "$TAG" --title "$TAG" --notes-file release-notes.md -``` - -Pass GitHub authentication explicitly in the workflow job: - -```bash -export GH_TOKEN="$GITHUB_TOKEN" -``` - -The final notes file must include at least one install command shaped like: - -```text -pipx install 'sql-query-mcp==X.Y.Z' -``` - -- [ ] **Step 6: Add idempotent PR creation for `main` and `develop`** - -Implement PR logic in the workflow with these branches: - -```text -source: release/vX.Y.Z -targets: main, develop -``` - -Behavior rules: - -```text -- main: always create or reuse `release/vX.Y.Z -> main` -- develop: create or reuse `release/vX.Y.Z -> develop` -- develop may skip PR creation only when an explicit compare shows no release-only changes remain -- fail loudly when API errors or unresolved branch state require manual action -``` - -Use this exact decision order for each target: - -```text -1. Check whether an open PR already exists for the same source and target. -2. If one exists, reuse it and record the URL. -3. If target is main and no PR exists, create one. -4. If target is develop, run an explicit compare against the release branch. -5. If develop has no remaining release-only diff, record "no PR needed". -6. Otherwise create the develop PR. -7. On API failure, ambiguous branch state, or compare failure, stop and require manual recovery. -``` - -Use a concrete compare command for `develop`, for example: - -```bash -git fetch origin develop -git rev-list --count "origin/develop..origin/$RELEASE_BRANCH" -``` - -Treat a result of `0` as "no remaining release-only commits" and allow the -skip. 
Any positive count means create the PR. Any fetch or compare error is a -hard failure, not a skip. - -- [ ] **Step 7: Re-run the helper tests and local verification commands** - -Run: `rm -rf build dist ./*.egg-info && PYTHONPATH=. python3 -m unittest tests.test_release_metadata -v && PYTHONPATH=. python3 -m unittest discover -s tests && python3 -m build && python3 -m twine check dist/*` -Expected: PASS, including helper coverage for effective tag resolution, -recovery-upload skip rules, and PR reuse or skip decisions. - -- [ ] **Step 8: Commit the release workflow** - -```bash -git add .github/workflows/release.yml sql_query_mcp/release_metadata.py tests/test_release_metadata.py -git commit -m "ci: automate tagged releases to PyPI" -``` - -## Task 6: Update user-facing docs for PyPI install and release operations - -This task aligns the repository surface with the new workflow. Users need a -PyPI-first install path, and maintainers need a release runbook that matches -the automated branch and tag rules exactly. - -**Files:** -- Modify: `README.md` -- Create: `docs/release-process.md` -- Modify: `docs/git-workflow.md` -- Read: `.github/workflows/release.yml` -- Read: `pyproject.toml` - -- [ ] **Step 1: Rewrite the README quick start around PyPI installation** - -Replace the top install path with content shaped like this: - -```md -## Quick start - -1. Choose installed command mode with `pipx install sql-query-mcp`, or managed - launch mode with `pipx run --spec sql-query-mcp sql-query-mcp`. -2. Save your config file outside the repository. -3. Put `SQL_QUERY_MCP_CONFIG` and real DSNs in the MCP client's environment - block. -4. Register the server in your MCP client. -``` - -Keep editable install instructions in the development section, not the main -user path. 
- -Also add one short clarification near install or development setup: - -```text -PyPI install name: sql-query-mcp -Python import path: sql_query_mcp -``` - -Add one short note that points users to versioned installs and GitHub Releases, -for example: - -```text -Install a specific release with `pipx install 'sql-query-mcp==X.Y.Z'`. -Published release artifacts are also attached to each GitHub Release. -``` - -- [ ] **Step 2: Add `docs/release-process.md` as the release runbook** - -Cover these sections in order: - -```md -## Prerequisites -## Prepare `release/vX.Y.Z` -## Push branch, then push tag -## Watch the release workflow -## Verify PyPI and GitHub Release -## Merge the back-merge PRs -## Recovery when publish succeeds but follow-up steps fail -``` - -Include the required secret name exactly: `PYPI_API_TOKEN`. - -- [ ] **Step 3: Update `docs/git-workflow.md` to formalize `release/vX.Y.Z`** - -Make the abstract rule and examples match each other. The release section must -say the branch format is `release/vX.Y.Z`, not a vague `release/`. - -- [ ] **Step 4: Re-read links and command examples across the edited docs** - -Check that these references all exist and agree with the workflow behavior: - -```text -README.md -docs/release-process.md -docs/git-workflow.md -.github/workflows/release.yml -``` - -- [ ] **Step 5: Run the full verification stack after the doc updates** - -Run: `rm -rf build dist ./*.egg-info && PYTHONPATH=. python3 -m unittest discover -s tests && python3 -m build && python3 -m twine check dist/*` -Expected: PASS. - -- [ ] **Step 6: Commit the documentation updates** - -```bash -git add README.md docs/release-process.md docs/git-workflow.md -git commit -m "docs: document PyPI install and release flow" -``` - -## Task 7: Perform final release-readiness verification - -This final task checks the whole feature as a maintainer would. 
It is the last -chance to catch mismatches between code, workflow assumptions, and human docs -before opening a PR. - -**Files:** -- Read: `.github/workflows/ci.yml` -- Read: `.github/workflows/release.yml` -- Read: `README.md` -- Read: `docs/release-process.md` -- Read: `docs/git-workflow.md` -- Test: `tests/` - -- [ ] **Step 1: Run the full local validation sequence from a clean shell** - -Run: `python3 -m pip install --upgrade pip build twine && rm -rf build dist ./*.egg-info && pip install -e . && PYTHONPATH=. python3 -m unittest discover -s tests && python3 -m build && python3 -m twine check dist/*` -Expected: PASS. - -- [ ] **Step 2: Run the installed CLI smoke test from the built wheel** - -Run: `python3 -m venv .release-final && . .release-final/bin/activate && pip install dist/*.whl && sql-query-mcp --help && deactivate && rm -rf .release-final` -Expected: PASS and print command help instead of hanging or failing import. - -- [ ] **Step 3: Dry-run the helper CLI for the current repository version** - -Run: `CURRENT_TAG="v$(python3 - <<'PY' -from pathlib import Path -try: - import tomllib -except ModuleNotFoundError: - import tomli as tomllib -data = tomllib.loads(Path('pyproject.toml').read_text(encoding='utf-8')) -version = data['project']['version'] -print(version) -PY -)" && PYTHONPATH=. python3 -m sql_query_mcp.release_metadata --tag "$CURRENT_TAG" --pyproject pyproject.toml` -Expected: PASS and print the current release branch mapping. - -- [ ] **Step 4: Verify workflow validation after push** - -After pushing the branch that contains the workflow files, confirm in GitHub -Actions that the `CI` workflow starts and that the `lint-workflows` job passes. -Run `actionlint` locally before pushing: - -```bash -actionlint -``` - -Expected: local `actionlint` passes if installed, and GitHub shows a successful -`lint-workflows` job for the pushed branch. 
-
-- [ ] **Step 5: Review the final diff for v1-only scope discipline**
-
-Confirm the diff contains only:
-
-```text
-- metadata and license changes
-- helper module and tests
-- ci.yml and release.yml
-- README/release-process/git-workflow doc updates
-```
-
-Do not add TestPyPI, changelog automation, multi-version matrix builds, or
-hotfix release automation in this PR.
-
-- [ ] **Step 6: Create the final implementation commit or PR-ready commit stack**
-
-```bash
-git add pyproject.toml LICENSE .gitignore .github/workflows/ci.yml .github/workflows/release.yml sql_query_mcp/release_metadata.py tests/test_release_metadata.py README.md docs/release-process.md docs/git-workflow.md
-git commit -m "feat: add automated PyPI release workflow"
-```
-
-If you already committed task-by-task, keep the logical commit stack and skip a
-duplicate squash-style commit.
diff --git a/docs/superpowers/specs/2026-03-20-release-and-pypi-automation-design.md b/docs/superpowers/specs/2026-03-20-release-and-pypi-automation-design.md
deleted file mode 100644
index 4f176ac..0000000
--- a/docs/superpowers/specs/2026-03-20-release-and-pypi-automation-design.md
+++ /dev/null
@@ -1,422 +0,0 @@
-# Release automation and PyPI distribution design
-
-This document defines the standardized release scheme for `sql-query-mcp`. The
-goal is to upgrade the project from "manually clone the repository and install
-locally" to "installable directly from PyPI, with a clear, traceable release
-history on GitHub."
-
-This design follows the Git Flow direction the repository already uses:
-development changes land on `develop` first, release preparation converges on
-`release/vX.Y.Z`, and the official release is triggered by a `vX.Y.Z` tag
-through an automated pipeline. After a successful release, the system
-automatically opens back-merge PRs that sync the published code to `main` and
-`develop`.
-
-It also settles the repository's release branch naming: from now on,
-`release/vX.Y.Z` is the single format. This does not introduce a new branching
-model; it formalizes the `v`-prefixed naming the repository already uses in
-practice, and reflects it in the Git workflow docs and the automation.
-
-## Background
-
-The repository already has the basic structure of a Python package.
-`pyproject.toml` declares the project name, version, dependencies, and a CLI
-entry point, so the project is close to a distributable Python package. From an
-open-source user's perspective, however, several gaps remain.
-
-- `README.md` has not yet converged on a `pipx`-based onboarding path for end
-  users.
-- The repository has no GitHub Actions workflows; releasing still relies mostly
-  on manual steps.
-- The Git conventions define `release/` and tag rules, but they are not yet
-  reliably wired to PyPI or GitHub Releases.
-- The repository is public, but users still lack unified, clear PyPI
-  installation instructions.
-
-This causes three problems: a high first-use barrier, non-standardized version
-artifacts, and an incomplete, hard-to-trace online release history.
-
-## Pre-implementation checks
-
-Before implementation starts, confirm that the PyPI distribution name is
-available and matches the install commands you plan to publish to users.
-Otherwise the workflows, README, and GitHub Releases could all be built around
-an unusable distribution name.
-
-The checks include:
-
-- Confirm the PyPI distribution name `sql-query-mcp` is registrable or already
-  held by this project.
-- Confirm user onboarding keeps `pipx install sql-query-mcp` and
-  `pipx run --spec sql-query-mcp sql-query-mcp`.
-- Confirm the docs clearly explain the difference between the import path
-  `sql_query_mcp` and the distribution name.
-
-If the PyPI name is unavailable, adjust the distribution name first, then
-continue with the automated release work.
-
-## Goals
-
-This work focuses on building a long-lived, reusable official release
-pipeline. Once the design is implemented, the repository needs a clear source
-of version truth, a stable build process, and a standard user-facing install
-path.
-
-- Let users adopt the project via `pipx install sql-query-mcp` or
-  `pipx run --spec sql-query-mcp sql-query-mcp`.
-- Keep the `vX.Y.Z` tag, the GitHub Release, and the PyPI version in
-  one-to-one correspondence.
-- Keep `release/vX.Y.Z` as the release convergence branch, matching the
-  existing Git conventions.
-- After a successful release, automatically open back-merge PRs instead of
-  rewriting protected branches directly.
-- Decouple release failures from branch synchronization so unpublished code
-  never reaches `main`.
-- Add the docs, CI validation, and credential-handling conventions the release
-  needs.
-
-## Non-goals
-
-This work does not turn version governance into a fully automated semantic
-release platform, and it does not introduce tooling beyond the repository's
-current scale.
-
-- No release bot that computes version numbers automatically.
-- No automatic merges into `main` or `develop`.
-- No mandatory `CHANGELOG.md` generation mechanism in the first version.
-- No switch to a new Python build backend; only refine the existing
-  `setuptools` configuration.
-- No changes to `sql-query-mcp` runtime behavior or the MCP tool surface.
-- No `hotfix/*` release automation in the first version.
-
-## Approach overview
-
-The release pipeline uses a "tag-driven automated publish plus automatic
-back-merge PRs" model. To prepare a release, the team still finishes version
-bumps, doc syncs, and final testing on the `release/vX.Y.Z` branch. Once the
-branch is confirmed releasable, create the `vX.Y.Z` tag on the release commit
-and push it to the remote.
-
-GitHub Actions listens for `v*` tags. The workflow first verifies that the tag
-exactly matches `project.version` in `pyproject.toml`, then runs tests, builds
-the `sdist` and `wheel`, uploads to the real PyPI, and creates the GitHub
-Release. Only after all of those publish steps succeed does it open two PRs:
-
-- `release/vX.Y.Z -> main`
-- `release/vX.Y.Z -> develop`
-
-These PRs sync the published code; they do not replace the publish itself.
-This keeps `main` positioned as "a backup branch of successfully published
-code" while preserving the team's existing PR review boundaries.
-
-## Release flow
-
-This section defines the order and ownership of the release pipeline in daily
-use. Every step needs explicit inputs and outputs, to avoid state drift such
-as "the tag exists but the package was never published" or "the package is
-published but the main branch was never synced."
-
-1. Create `release/vX.Y.Z` from `develop`.
-2. On `release/vX.Y.Z`, bump the version in `pyproject.toml` and update any
-   docs that need to reflect the version change.
-3. Finish final testing and pre-release checks on `release/vX.Y.Z`.
-4. Create the `vX.Y.Z` tag on the release commit and push the branch and the
-   tag.
-5. GitHub Actions runs the release workflow automatically.
-6. If tests, the build, the PyPI upload, and the GitHub Release all succeed,
-   the workflow automatically opens the back-merge PRs.
-7. The team reviews and merges the `release/vX.Y.Z -> main` and
-   `release/vX.Y.Z -> develop` PRs to complete branch synchronization.
-
-One important constraint: back-merge PR creation depends on a successful
-release, not merely on the tag existing. As long as the official publish has
-not succeeded, `main` does not receive the corresponding code.
-
-To reduce the chance that the tag fires before the release branch is visible,
-the operating order is fixed: push the `release/vX.Y.Z` branch first, then
-push the `vX.Y.Z` tag. The workflow still retries branch discovery briefly,
-but this order is the team's default operating rule.
-
-## Repository change design
-
-This section defines the main files to add and their responsibility
-boundaries. The goal is not to introduce many release tools at once, but to
-split release responsibilities across the smallest maintainable set of files.
-
-### `.github/workflows/ci.yml`
-
-This workflow handles continuous validation for `feature/*`, `develop`,
-`release/*`, `main`, and pull requests. Its goal is to surface problems before
-a tag is cut, and to keep validating `main` after release PRs merge.
-
-Suggested responsibilities:
-
-- Install the project and its dependencies.
-- Run the existing test commands.
-- Run a basic packaging check to confirm the project builds.
-
-### `.github/workflows/release.yml`
-
-This workflow handles the official release. It listens for `v*` tags and
-serially performs version validation, tests, the build, the PyPI upload,
-GitHub Release creation, and back-merge PR creation.
-
-It must also configure workflow-level concurrency keyed on the tag, so that at
-most one official release job for a given `vX.Y.Z` runs at any time. This
-prevents duplicate PyPI uploads, duplicate GitHub Releases, and races during
-compensation runs.
-
-Suggested stages:
-
-1. Parse the tag, for example `v0.2.0`.
-2. Fetch the remote `release/vX.Y.Z` branch ref and confirm it exists on the
-   remote.
-3. Read `project.version` from `pyproject.toml` in the repository.
-4. Verify that the tag, with the `v` prefix stripped, matches the project
-   version exactly.
-5. Verify the commit the tag points at is reachable from the remote
-   `release/vX.Y.Z` branch.
-6. Run tests.
-7. Build the `sdist` and `wheel`.
-8. Run pre-publish checks, for example `twine check` and a CLI smoke test from
-   an install of the built wheel.
-9. Upload to the real PyPI.
-10. Create the GitHub Release with the build artifacts attached.
-11.
自动创建 `release/vX.Y.Z -> main` 与 `release/vX.Y.Z -> develop` 的 PR。 - -### `pyproject.toml` - -这个文件继续作为版本号和包元数据的唯一来源。Git tag 是对该版本的外部标 -识,而不是新的版本事实来源。所有发布前检查都以这里的版本值为准。 - -在现有基础上,建议补齐以下信息: - -- 更完整的 license 声明。 -- 作者或维护者信息。 -- 更准确的 classifiers。 -- 对 PyPI 展示更友好的项目元数据。 - -### `README.md` - -主 `README.md` 需要把安装入口改成面向最终用户的分发路径。当前把 clone 仓 -库作为第一入口,不利于开源用户快速试用,也无法体现 PyPI 分发价值。 - -建议调整为: - -- Quick start 提供 `pipx install` 和 `pipx run` 两种正式接入方式。 -- 保留本地 clone + editable install 作为开发者路径。 -- 补充 GitHub Release 和版本安装的简单说明。 - -### 发布说明文档 - -仓库还需要一份独立的发布文档,例如 `docs/release-process.md`。它负责说明 -团队如何准备 release branch、何时打 tag、需要配置哪些 secrets,以及如果 -发布失败该怎么处理。 - -这份文档不重复 Git branch 基础规则,而是专注回答“这个仓库具体怎么发版”。 - -它还需要明确列出受影响的协作文档,例如 `docs/git-workflow.md`,确保其中的 -release 分支命名示例同步为 `release/vX.Y.Z`。 - -## 版本与分支规则 - -自动发布的前提是版本号、分支名和 tag 命名保持一致。这样可以让 workflow -在机器层面做硬校验,而不是依赖人工记忆。 - -- release 分支格式固定为 `release/vX.Y.Z`。 -- Git tag 格式固定为 `vX.Y.Z`。 -- `pyproject.toml` 中的 `project.version` 固定为 `X.Y.Z`。 - -发布 workflow 需要显式验证以下关系: - -- 当前 tag 是否以 `v` 开头。 -- tag 版本与 `project.version` 是否一致。 -- tag 所在提交是否可从对应 `release/vX.Y.Z` 分支解析到。 - -如果这三者之一不成立,workflow 必须失败并停止发布。这样可以防止有人在错 -误分支上误打 tag,或者 tag 与包版本不一致。 - -为了让这个校验可以真正落地,workflow 需要在 tag 触发后显式 fetch 对应的 -`release/vX.Y.Z` 远端引用。如果远端不存在该分支,或者无法证明 tagged -commit 来自该分支,workflow 必须失败,并给出明确错误信息。 - -考虑到 branch 和 tag push 之间可能存在短暂时序差,workflow 可以在读取远端 -`release/vX.Y.Z` 时做有限次重试。例如重试几次并带短间隔等待。如果重试后仍 -无法解析分支,则认定为失败,而不是继续发布。 - -## develop 版本管理规则 - -回合并到 `develop` 时,最容易产生歧义的是 `pyproject.toml` 的版本号状态。 -如果这件事不先定清楚,自动创建 PR 之后很容易在合并阶段发生版本回退争议。 - -第一版建议采用简单且稳定的规则: - -- `release/vX.Y.Z` 在发版前持有正式版本 `X.Y.Z`。 -- `main` 合并 release 后保持 `X.Y.Z`,作为已发布版本归档。 -- `develop` 在 release 回合并完成后,不要求自动立即升到下一个版本。 -- 下一次准备发版时,再从当前 `develop` 创建新的 `release/vA.B.C`,并在那 - 个 release 分支上调整正式版本号。 - -这条规则的核心目的是避免把“下一版本规划”耦合进本次发布自动化。后续如果 -你想采用 `develop` 永远领先一个版本的策略,可以作为单独迭代再设计。 - -## GitHub Actions 权限与凭证 - -自动发布需要最小但明确的仓库权限。凭证设计的目标是让发布链路可运行,同 -时避免把长期密钥散落到代码仓库中。 - -建议使用以下配置: - -- 仓库 
secret:`PYPI_API_TOKEN` -- workflow permissions:`contents: write`、`pull-requests: write` -- 如果使用 `gh` CLI 或 GitHub API 创建 PR,则使用 Actions 提供的 - `GITHUB_TOKEN` - -PyPI token 只用于上传包,不在任何文档示例中明文展示。文档只说明配置位置 -和权限要求,不记录真实值。 - -## 错误处理与失败边界 - -发布自动化必须优先保证“失败即停止”,而不是尽量继续执行。对于分发系统来 -说,半成功状态比完全失败更难处理。 - -需要明确以下规则: - -- 如果版本校验失败,workflow 立即结束,不运行测试和上传。 -- 如果测试或构建失败,workflow 不上传 PyPI,不创建 Release,不创建 PR。 -- 如果 PyPI 上传失败,workflow 不创建回合并 PR。 -- 如果 GitHub Release 创建失败,workflow 也不创建回合并 PR。 -- 如果发布已经成功,但 PR 创建失败,workflow 需要明确报错,并把这类问题 - 视为“发布后同步失败”,由维护者手工补建 PR。 - -这个边界的核心原则是:只有在“包已经成功对外发布”之后,才允许开始主干 -同步动作。 - -还需要单独处理“部分成功”的恢复路径。PyPI 上的版本天然不可覆盖,所以如 -果已经上传成功,但后续 GitHub Release 或 PR 创建失败,维护者不能简单重跑 -整个发布并期待得到同样结果。 - -建议恢复策略如下: - -- 把“PyPI 已成功发布”视为正式发布已经成立。 -- 如果后续步骤失败,优先进入补偿流程,而不是重新上传同一版本。 -- release workflow 明确采用“同一 workflow 内检测已发布版本并跳过重复上传” - 的恢复模型。 -- 当 workflow 重跑时,如果检测到目标版本已经存在于 PyPI,则把发布步骤视为 - 已完成,并继续执行 GitHub Release 补建或 PR 补建逻辑。 -- 文档必须提供人工恢复步骤,例如手工创建 GitHub Release、手工补建回合并 - PR、检查 tag 与分支状态。 - -这样可以避免在 PyPI 已经存在目标版本时,维护者因为重复上传而卡在不可恢 -复的错误状态。 - -为了避免把“PyPI 上已经有这个版本”误判成任意场景下都安全,workflow 还需 -要把这条恢复路径限制在同一个 tag 的补偿执行上。也就是说,只有当维护者已 -确认同一个 `vX.Y.Z` tag 之前已经成功上传过正确版本,后续重跑才允许跳过 -PyPI 上传,进入 GitHub Release 或 PR 的补偿步骤。 - -第一版不要求 workflow 去证明 PyPI 上现有文件与当前 commit 的二进制完全等 -价,而是把这类判断留给维护者确认,并在发布文档中写清楚这个人工检查边界。 - -## 测试与验证策略 - -第一版发布自动化不需要引入复杂矩阵,但需要覆盖最关键的正确性检查。目的 -是保证包能装、测试能过、元数据可信。 - -建议最少包含以下验证: - -- 运行现有单元测试。 -- 执行标准构建命令,产出 `dist/*.tar.gz` 和 `dist/*.whl`。 -- 对构建结果执行 `twine check`,确认元数据和 long description 合法。 -- 在 fresh virtualenv 中从构建出的 wheel 执行一次安装验证,而不是只确认文 - 件存在。 -- 对安装后的 CLI 执行 `sql-query-mcp --help` 作为最小 smoke test,确认入口 - 可以被解析。 - -如果后续需要提高可信度,可以再追加 Python 多版本矩阵,但不作为这次设计 -的必选项。 - -## GitHub Release 内容策略 - -GitHub Release 既是用户查看版本记录的入口,也是排查线上版本来源的关键节 -点。因此第一版就需要具备可读性,但不必过度追求复杂模板。 - -建议策略如下: - -- title 使用 `vX.Y.Z` -- 附件包含 `sdist` 和 `wheel` -- body 优先使用自动生成的 release notes -- body 额外补充一个固定安装示例:`pipx install 'sql-query-mcp==X.Y.Z'` -- 创建逻辑采用 create-or-update:如果 release 已存在,则补齐缺失的 notes - 或 
assets,而不是直接失败 - -这样可以先建立清晰的版本归档能力,后续如果你决定维护 changelog,再升级 -到自定义 release notes 模板。 - -## 回合并 PR 策略 - -发布成功后自动创建回合并 PR,是为了把“已对外发布”的事实同步回长期分支。 -但 `develop` 往往会比 release 分支继续前进,所以这里需要明确边界,避免把 -自动化假设成总能无冲突完成。 - -建议规则如下: - -- `release/vX.Y.Z -> main` 是必建 PR,因为 `main` 承担已发布代码的归档角色。 -- `release/vX.Y.Z -> develop` 也应自动尝试创建,用于回合并 release 阶段产生 - 的版本调整和发布修复。 -- PR 创建逻辑必须具备幂等性。如果目标 PR 已存在,workflow 记录并复用该结 - 果,而不是报错退出。 -- 如果 `develop` 已前进但仍可创建 PR,则保留 PR 让维护者正常审查与合并。 -- 如果 `develop` 上没有差异,允许 PR 不创建,并把结果记录为“无需回合并”。 -- 如果因为分支冲突或 API 限制无法创建 PR,workflow 需要明确报错,并要求维 - 护者手工处理冲突后再补建 PR。 - -这个策略的重点不是强求自动化覆盖所有 merge 场景,而是让“自动尝试、失败 -可恢复、结果可见”成为默认行为。 - -为了让这部分实现可操作,针对 `develop` 需要进一步采用以下判定顺序: - -1. 先查询是否已存在同一 source 和 target 的 open PR;如果存在,则复用并记 - 录结果。 -2. 如果不存在 open PR,则比较 `release/vX.Y.Z` 与 `develop` 的差异。 -3. 如果 release 分支相对 `develop` 没有新增提交,记录为“无需回合并”,不创 - 建 PR。 -4. 如果存在可提交的差异,则创建 PR。 -5. 如果比较或创建 PR 的过程中遇到 GitHub API 限制、分支异常或需要人工处 - 理的冲突信号,则 workflow 失败并提示维护者手工补建。 - -这里的目标不是在 CI 中自动解决 merge conflict,而是把“已有 PR”“无需 PR” -“可自动建 PR”“必须人工介入”这四种状态区分清楚。 - -## 风险与缓解 - -这个方案的主要风险不在 Python 打包本身,而在自动化边界是否清晰。只有边界 -足够清楚,自动化才会降低成本,而不是制造新的排查负担。 - -- 风险一:tag 与版本号不一致,导致错误版本被发布。缓解方式是把版本一致性 - 校验做成 workflow 的第一步。 -- 风险二:发布成功但主干未同步。缓解方式是把回合并 PR 自动创建纳入发布 - workflow,并对失败给出明确告警。 -- 风险三:主分支被 workflow 直接改写。缓解方式是只创建 PR,不做自动 merge。 -- 风险四:README 仍然把本地 clone 当成主安装路径。缓解方式是调整 README, - 让 PyPI 安装成为默认入口。 -- 风险五:凭证管理不清晰。缓解方式是把 PyPI token 严格限制在 GitHub - secrets 中使用。 - -## 实施范围建议 - -为了把第一版控制在高价值、低复杂度的范围内,实施阶段建议只覆盖这几类改 -动:发布 workflow、CI workflow、`pyproject.toml` 元数据补齐、README 安装 -说明调整,以及新增发布流程文档。 - -像自动 changelog、TestPyPI 演练通道、多 Python 版本矩阵、自动语义化发版 -等增强项,可以放到后续迭代,而不是在第一版一起引入。 - -## v1 必选项与后续增强项 - -为了避免实施阶段把 hardening 想法和第一版必交付混在一起,这里把范围再明 -确拆成两层。 - -v1 必选项: - -- 新增 `ci.yml` 与 `release.yml` -- 基于 tag 的版本一致性校验与 release 分支可达性校验 -- `release.yml` 的 concurrency 控制 -- 单元测试、构建、`twine check`、fresh virtualenv 安装 wheel、 - `sql-query-mcp --help` smoke test -- 上传正式 PyPI -- GitHub Release 的 create-or-update 行为 -- `main` 与 `develop` 的回合并 
PR 自动创建或跳过判定 -- `pyproject.toml` 元数据补齐 -- `README.md` 安装路径更新 -- 新增发布流程文档 - -后续增强项: - -- 自动 changelog 管理 -- TestPyPI 演练通道 -- 多 Python 版本矩阵 -- 更强的发布补偿自动化 -- hotfix 分支的独立自动发布链路 - -## 范围边界说明 - -为了保证第一版可落地,这次自动化明确只覆盖标准 release 分支发版路径: -`develop -> release/vX.Y.Z -> vX.Y.Z tag -> 自动发布 -> 回合并 PR`。 - -`hotfix/*` 分支依然保留在 Git 规范中,但暂时不接入这套 PyPI 自动发布流 -程。等标准 release 流程稳定后,再单独设计 hotfix 发版自动化,避免在第一 -版就把两套分支路径耦合到同一个 workflow 中。 - -## 下一步 - -在本设计确认并完成 review 后,下一阶段应编写 implementation plan,把需 -要修改的 workflow、文档和项目元数据拆成可执行步骤,再开始实际实现。
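
附:上文《版本与分支规则》要求 workflow 显式校验 tag 前缀,以及 tag 与
`project.version` 的一致性。下面是一个最小 shell 示意(函数名
`check_tag_version` 为示例假设;这里用 `sed` 粗略提取 version 行仅作演示,
正式实现建议改用 `python -c "import tomllib; ..."` 做严格的 TOML 解析):

```shell
# 校验 vX.Y.Z tag 与 pyproject.toml 中 project.version 的一致性。
# 函数名 check_tag_version 为示例假设,供 release workflow 第一步调用。
check_tag_version() {
  local tag="$1" pyproject="$2"

  # 校验一:tag 必须以 v 开头
  case "$tag" in
    v*) ;;
    *) echo "error: tag '$tag' must start with 'v'" >&2; return 1 ;;
  esac

  # 校验二:去掉前缀 v 后必须与 project.version 完全一致。
  # 这里用 sed 粗略提取 version 行,仅作示意。
  local tag_version="${tag#v}"
  local project_version
  project_version=$(sed -n 's/^version *= *"\(.*\)"/\1/p' "$pyproject" | head -n 1)

  if [ "$tag_version" != "$project_version" ]; then
    echo "error: tag version '$tag_version' != project.version '$project_version'" >&2
    return 1
  fi
  echo "ok: $tag matches project.version $project_version"
}
```

第三条校验(tagged commit 可从远端 `release/vX.Y.Z` 到达)依赖已 fetch 的
远端引用,通常用 `git merge-base --is-ancestor` 实现,不在此片段中展示。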
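
附:前文《错误处理与失败边界》中"同一 workflow 内检测已发布版本并跳过重
复上传"的恢复模型,可以按如下方式示意(函数名 `ensure_published` 与
`dist/` 路径为示例假设;依赖 `curl` 和 `twine`,利用 PyPI 的 JSON API 对
已存在版本返回 200 这一行为):

```shell
# 补偿执行时检测目标版本是否已在 PyPI 上存在,存在则跳过重复上传。
# 函数名 ensure_published 与 dist/ 路径为示例假设。
ensure_published() {
  local name="$1" version="$2"

  # PyPI 的 JSON API 对已存在的版本返回 200;-f 让 curl 在 404 时
  # 返回非零,从而区分"已发布"与"未发布"。
  if curl -fsS "https://pypi.org/pypi/${name}/${version}/json" >/dev/null 2>&1; then
    echo "skip: ${name} ${version} already on PyPI"
    return 0
  fi

  # 未发布时才执行真正的上传动作
  twine upload dist/*
}
```

按前文的约束,这条跳过路径只应在维护者确认同一 tag 曾成功上传过正确版本
后的补偿重跑中启用。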
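
附:前文《回合并 PR 策略》的判定顺序可以用 `gh` CLI 大致示意,区分"已有
PR""无需 PR""可自动建 PR""必须人工介入"四种状态(函数名
`backmerge_pr`、PR 标题与 body 文案均为示例假设;依赖 `gh` 与 `git`,且
假设 origin 的相关引用已 fetch 到本地):

```shell
# 发布成功后,对单个目标分支(main 或 develop)做回合并 PR 的幂等判定。
backmerge_pr() {
  local source="$1" target="$2"

  # 状态一:已存在同 source/target 的 open PR,复用并记录
  local existing
  existing=$(gh pr list --head "$source" --base "$target" \
    --state open --json number --jq '.[].number' | head -n 1)
  if [ -n "$existing" ]; then
    echo "reuse: PR #$existing"
    return 0
  fi

  # 状态二:release 分支相对目标分支没有新增提交,无需回合并
  if [ -z "$(git rev-list "origin/${target}..origin/${source}")" ]; then
    echo "skip: no backmerge needed for $target"
    return 0
  fi

  # 状态三:存在差异,自动创建 PR;
  # 状态四:创建失败时明确报错,交由维护者手工补建
  gh pr create --head "$source" --base "$target" \
    --title "chore: back-merge $source into $target" \
    --body "Sync released code after successful release." ||
    { echo "error: manual PR needed for $target" >&2; return 1; }
}
```

这个片段只负责把四种状态区分清楚并留下可见结果,不尝试在 CI 中自动解决
merge conflict,与前文的策略边界一致。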