This document provides comprehensive examples for using Plenum's three commands: connect, introspect, and query. All examples show both the command invocation and the expected JSON output.
Launch the interactive connection picker when no arguments are provided:
plenum connect

This will:
- Display existing named connections
- Allow selection via numbered input
- Include "--- Create New ---" option
- Prompt for connection details interactively
plenum connect \
--name prod-db \
--engine postgres \
--host db.production.example.com \
--port 5432 \
--user readonly_user \
--password secretpassword \
--database app_production \
--save global

Success Output:
{
"ok": true,
"engine": "postgres",
"command": "connect",
"data": {
"connection_name": "prod-db",
"engine": "postgres",
"saved_to": "global",
"message": "Connection 'prod-db' saved successfully"
},
"meta": {
"execution_ms": 245
}
}

plenum connect \
--name local-mysql \
--engine mysql \
--host localhost \
--port 3306 \
--user root \
--password rootpass \
--database test_db \
--save local

Success Output:
{
"ok": true,
"engine": "mysql",
"command": "connect",
"data": {
"connection_name": "local-mysql",
"engine": "mysql",
"saved_to": "local",
"message": "Connection 'local-mysql' saved successfully"
},
"meta": {
"execution_ms": 198
}
}

plenum connect \
--name app-cache \
--engine sqlite \
--file /var/lib/app/cache.db \
--save local

Success Output:
{
"ok": true,
"engine": "sqlite",
"command": "connect",
"data": {
"connection_name": "app-cache",
"engine": "sqlite",
"saved_to": "local",
"message": "Connection 'app-cache' saved successfully"
},
"meta": {
"execution_ms": 42
}
}

plenum connect --name prod-db

This re-validates an existing connection and displays connection metadata.
plenum introspect --name prod-db

Success Output:
{
"ok": true,
"engine": "postgres",
"command": "introspect",
"data": {
"tables": [
{
"name": "users",
"schema": "public",
"columns": [
{
"name": "id",
"data_type": "integer",
"nullable": false,
"default": "nextval('users_id_seq'::regclass)"
},
{
"name": "email",
"data_type": "character varying",
"nullable": false,
"default": null
},
{
"name": "created_at",
"data_type": "timestamp with time zone",
"nullable": false,
"default": "CURRENT_TIMESTAMP"
}
],
"primary_key": ["id"],
"foreign_keys": [],
"indexes": [
{
"name": "users_pkey",
"columns": ["id"],
"unique": true
},
{
"name": "idx_users_email",
"columns": ["email"],
"unique": true
}
]
}
]
},
"meta": {
"execution_ms": 523
}
}

plenum introspect --name prod-db --schema analytics

Filters introspection to only the analytics schema (PostgreSQL/MySQL).
plenum introspect \
--engine postgres \
--host localhost \
--port 5432 \
--user admin \
--password adminpass \
--database myapp

Use explicit connection parameters without saving a named connection.
plenum introspect --engine sqlite --file ./app.db

Success Output:
{
"ok": true,
"engine": "sqlite",
"command": "introspect",
"data": {
"tables": [
{
"name": "products",
"schema": null,
"columns": [
{
"name": "id",
"data_type": "INTEGER",
"nullable": false,
"default": null
},
{
"name": "name",
"data_type": "TEXT",
"nullable": false,
"default": null
},
{
"name": "price",
"data_type": "REAL",
"nullable": true,
"default": "0.0"
}
],
"primary_key": ["id"],
"foreign_keys": [],
"indexes": []
}
]
},
"meta": {
"execution_ms": 38
}
}

plenum query \
--name prod-db \
--sql "SELECT id, email, created_at FROM users WHERE id = 1"

Success Output:
{
"ok": true,
"engine": "postgres",
"command": "query",
"data": {
"columns": ["id", "email", "created_at"],
"rows": [
{
"id": 1,
"email": "admin@example.com",
"created_at": "2024-01-15T10:30:00Z"
}
],
"rows_affected": null
},
"meta": {
"execution_ms": 87,
"rows_returned": 1
}
}

plenum query \
--name prod-db \
--sql "SELECT * FROM large_table" \
--max-rows 1000

Limits the result set to 1000 rows, even if more exist.
plenum query \
--name prod-db \
--sql "SELECT * FROM expensive_view" \
--timeout-ms 5000

Sets a 5-second timeout for query execution.
Plenum is strictly read-only. Write operations are rejected:
plenum query \
--name prod-db \
--sql "INSERT INTO logs (level, message) VALUES ('INFO', 'Application started')"

Error Output:
{
"ok": false,
"engine": "postgres",
"command": "query",
"error": {
"code": "CAPABILITY_VIOLATION",
"message": "Plenum is read-only and cannot execute this query. Please run this query manually:\n\nINSERT INTO logs (level, message) VALUES ('INFO', 'Application started')"
}
}

Agent Workflow: When a write operation is needed:
- Use Plenum to introspect schema and read current data
- Construct the SQL statement
- Present it to the user for manual execution
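The steps above can be sketched as agent-side glue code. This is an illustrative Python sketch, not part of Plenum itself — the `run_plenum` helper, `propose_insert`, and the naive quoting are all hypothetical:

```python
import json
import subprocess

def run_plenum(*args):
    """Invoke the plenum CLI and parse its JSON output (hypothetical helper)."""
    result = subprocess.run(["plenum", *args], capture_output=True, text=True)
    return json.loads(result.stdout)

def propose_insert(table, values):
    """Construct an INSERT statement for the user to run manually.

    Plenum is read-only, so the agent only *builds* the SQL here;
    the quoting below is naive and for illustration only.
    """
    cols = ", ".join(values)
    vals = ", ".join(f"'{v}'" for v in values.values())
    return f"INSERT INTO {table} ({cols}) VALUES ({vals});"

# After reading schema and data via run_plenum("introspect", ...),
# hand the write statement back to the user:
sql = propose_insert("logs", {"level": "INFO", "message": "Application started"})
print("Please run this statement manually:\n" + sql)
```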
DDL operations are also rejected:
plenum query \
--name prod-db \
--sql "CREATE TABLE temp_results (id SERIAL PRIMARY KEY, value TEXT)"

Error Output:
{
"ok": false,
"engine": "postgres",
"command": "query",
"error": {
"code": "CAPABILITY_VIOLATION",
"message": "Plenum is read-only and cannot execute this query. Please run this query manually:\n\nCREATE TABLE temp_results (id SERIAL PRIMARY KEY, value TEXT)"
}
}

plenum query \
--name prod-db \
--sql-file ./queries/report.sql

Reads SQL from a file instead of the command line.
plenum query \
--name prod-db \
--sql "
SELECT u.id, u.email, COUNT(p.id) as post_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
GROUP BY u.id, u.email
ORDER BY post_count DESC
LIMIT 10
"

# Step 1: Create connection
plenum connect \
--name analytics-db \
--engine postgres \
--host analytics.example.com \
--port 5432 \
--user readonly \
--password secret123 \
--database analytics \
--save global
# Step 2: Validate it works
plenum introspect --name analytics-db
# Step 3: Run a test query
plenum query --name analytics-db --sql "SELECT version()"

# Step 1: Connect and save
plenum connect --name unknown-db --engine mysql --host localhost \
--port 3306 --user root --password rootpass --database mystery_db --save local
# Step 2: Discover schema
plenum introspect --name unknown-db > schema.json
# Step 3: Examine table contents
plenum query --name unknown-db --sql "SELECT * FROM table_name LIMIT 10"

# Check source row count
plenum query --name source-db \
--sql "SELECT COUNT(*) as count FROM users"
# Check destination row count
plenum query --name dest-db \
--sql "SELECT COUNT(*) as count FROM users"
# Verify data integrity
plenum query --name dest-db \
--sql "SELECT id, email FROM users WHERE id NOT IN (SELECT id FROM source_users)"

# Plenum is read-only - complex queries should be constructed and run manually
# Use Plenum to understand the schema first
plenum introspect --name work-db > schema.json
# Then construct your analysis query
# Present this to the user for manual execution:
# CREATE TEMPORARY TABLE analysis_tmp (id INT, score REAL);
# INSERT INTO analysis_tmp SELECT user_id, AVG(rating) FROM reviews GROUP BY user_id;
# After the user runs it manually, use Plenum to query results
plenum query --name work-db \
--sql "SELECT * FROM analysis_tmp WHERE score > 4.5"

The wildcard database feature allows agents to discover available databases before selecting one.
# Step 1: Create connection with wildcard database
plenum connect \
--name mysql-discovery \
--engine mysql \
--host localhost \
--port 3306 \
--user root \
--password rootpass \
--database "*" \
--save local
# Step 2: Discover available databases
plenum query --name mysql-discovery \
--sql "SHOW DATABASES"

Example Output:
{
"ok": true,
"engine": "mysql",
"command": "query",
"data": {
"columns": ["Database"],
"rows": [
{"Database": "information_schema"},
{"Database": "mysql"},
{"Database": "performance_schema"},
{"Database": "sys"},
{"Database": "app_production"},
{"Database": "app_staging"},
{"Database": "app_development"}
],
"rows_affected": null
},
"meta": {
"execution_ms": 12,
"rows_returned": 7
}
}

# Step 3: Query a specific database using fully qualified table names
plenum query --name mysql-discovery \
--sql "SELECT * FROM app_production.users LIMIT 10"
# Or use USE statement to select a database
plenum query --name mysql-discovery \
--sql "USE app_production"
# Then query tables in that database
plenum query --name mysql-discovery \
--sql "SELECT COUNT(*) as user_count FROM users"

# Step 1: Create connection with wildcard database
plenum connect \
--name pg-discovery \
--engine postgres \
--host localhost \
--port 5432 \
--user postgres \
--password pgpass \
--database "*" \
--save local
# Step 2: Discover available databases
plenum query --name pg-discovery \
--sql "SELECT datname FROM pg_catalog.pg_database WHERE datistemplate = false"

Example Output:
{
"ok": true,
"engine": "postgres",
"command": "query",
"data": {
"columns": ["datname"],
"rows": [
{"datname": "postgres"},
{"datname": "app_production"},
{"datname": "app_staging"},
{"datname": "app_development"}
],
"rows_affected": null
},
"meta": {
"execution_ms": 8,
"rows_returned": 4
}
}

Note: PostgreSQL requires reconnecting to switch databases. Create separate connections for each database:
# After discovering databases, create specific connections
plenum connect \
--name app-prod \
--engine postgres \
--host localhost \
--port 5432 \
--user postgres \
--password pgpass \
--database app_production \
--save local
plenum connect \
--name app-dev \
--engine postgres \
--host localhost \
--port 5432 \
--user postgres \
--password pgpass \
--database app_development \
--save local

When using wildcard mode, you must specify the --schema parameter for introspection:
MySQL:
# Introspect specific database by name
plenum introspect --name mysql-discovery --schema app_production

PostgreSQL:
# Introspect specific schema in the default "postgres" database
plenum introspect --name pg-discovery --schema public

Error without schema:
# This will fail with wildcard database
plenum introspect --name mysql-discovery

Error Output:
{
"ok": false,
"engine": "mysql",
"command": "introspect",
"error": {
"code": "ENGINE_ERROR",
"message": "No database selected. When using wildcard database (\"*\"), you must specify --schema parameter."
}
}

Plenum rejects all write operations:
plenum query --name prod-db \
--sql "UPDATE users SET email = 'test@example.com' WHERE id = 1"

Error Output:
{
"ok": false,
"engine": "postgres",
"command": "query",
"error": {
"code": "CAPABILITY_VIOLATION",
"message": "Plenum is read-only and cannot execute this query. Please run this query manually:\n\nUPDATE users SET email = 'test@example.com' WHERE id = 1"
}
}

Plenum rejects all DDL operations:
plenum query --name prod-db \
--sql "DROP TABLE users"

Error Output:
{
"ok": false,
"engine": "postgres",
"command": "query",
"error": {
"code": "CAPABILITY_VIOLATION",
"message": "Plenum is read-only and cannot execute this query. Please run this query manually:\n\nDROP TABLE users"
}
}

plenum query --name prod-db \
--sql "SELCT * FROM users"

Error Output:
{
"ok": false,
"engine": "postgres",
"command": "query",
"error": {
"code": "QUERY_FAILED",
"message": "Failed to prepare query: syntax error at or near \"SELCT\""
}
}

plenum query --name prod-db \
--sql "SELECT * FROM nonexistent_table"

Error Output:
{
"ok": false,
"engine": "postgres",
"command": "query",
"error": {
"code": "QUERY_FAILED",
"message": "Failed to execute query: relation \"nonexistent_table\" does not exist"
}
}

plenum connect --name bad-connection \
--engine postgres --host invalid.host.example.com \
--port 5432 --user test --password test --database test

Error Output:
{
"ok": false,
"engine": "postgres",
"command": "connect",
"error": {
"code": "CONNECTION_FAILED",
"message": "Failed to connect to database: connection refused"
}
}

plenum query --name prod-db

Error Output:
{
"ok": false,
"engine": "",
"command": "query",
"error": {
"code": "INVALID_INPUT",
"message": "Either --sql or --sql-file is required"
}
}

Create a connection that references an environment variable:
export DB_PASSWORD="super_secret_password"
plenum connect \
--name secure-db \
--engine postgres \
--host db.example.com \
--port 5432 \
--user appuser \
--password-env DB_PASSWORD \
--database production \
--save global

The password is read from $DB_PASSWORD at runtime, not stored in the config file.
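As a rough sketch of how a `password_env` reference might be resolved at query time (the resolution code is illustrative, not Plenum's actual implementation; the config keys mirror the config format shown later in this document):

```python
import os

def resolve_password(conn_config):
    """Resolve a connection's password, preferring an env-var reference.

    A connection entry may carry either a literal `password` or a
    `password_env` key naming an environment variable (illustrative).
    """
    if "password_env" in conn_config:
        return os.environ[conn_config["password_env"]]
    return conn_config.get("password")

os.environ["DB_PASSWORD"] = "super_secret_password"
conn = {"engine": "postgres", "user": "appuser", "password_env": "DB_PASSWORD"}
print(resolve_password(conn))  # secret comes from the environment, not the config
```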
Important: Plenum does NOT provide SQL parameterization. The calling agent MUST sanitize inputs:
// Agent-side example (pseudo-code)
const userId = sanitize(userInput); // Agent must sanitize!
const sql = `SELECT * FROM users WHERE id = ${userId}`;
exec(`plenum query --name db --sql "${sql}"`);

Plenum assumes all SQL passed to it is safe and validated by the agent.
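A minimal agent-side sanitization sketch in Python (the validation rules here are examples only, not a complete defense; use real parameterized queries wherever your agent runtime offers them):

```python
import re

def sanitize_int(value):
    """Allow only integer literals, e.g. for numeric id predicates."""
    if not re.fullmatch(r"\d+", str(value)):
        raise ValueError(f"not a valid integer: {value!r}")
    return str(value)

def sanitize_identifier(name):
    """Allow only simple table/column identifiers."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"not a valid identifier: {name!r}")
    return name

user_id = sanitize_int("42")  # raises on inputs like "42; DROP TABLE users"
sql = f"SELECT * FROM {sanitize_identifier('users')} WHERE id = {user_id}"
print(sql)  # SELECT * FROM users WHERE id = 42
```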
Using jq to extract specific fields from Plenum output:
# Get just the row count
plenum query --name db --sql "SELECT COUNT(*) as count FROM users" \
| jq -r '.data.rows[0].count'
# Extract error code for conditional logic
RESULT=$(plenum query --name db --sql "SELECT * FROM users")
ERROR_CODE=$(echo "$RESULT" | jq -r '.error.code // empty')
if [ "$ERROR_CODE" == "CAPABILITY_VIOLATION" ]; then
  echo "Write operation blocked - Plenum is read-only"
fi

# Introspect source
plenum introspect --name source-db > source-schema.json
# Introspect destination
plenum introspect --name dest-db > dest-schema.json
# Compare (using external tool)
diff source-schema.json dest-schema.json

- Always check the `ok` field first - Don't assume success
- Match on `error.code` for programmatic error handling
- Parse JSON output - All Plenum output is valid JSON
- Sanitize user inputs - Plenum passes SQL verbatim to the database
- Use named connections - Avoid repeating connection parameters
- Plenum is read-only - Use it to introspect and query, then present write/DDL SQL to users for manual execution
- Set max-rows for unknown queries - Prevent accidentally fetching millions of rows
- Use timeouts for expensive queries - Protect against long-running operations
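The first three practices can be folded into a single agent-side wrapper. This sketch assumes nothing beyond the JSON shapes shown above; the `PlenumError` type and `handle_output` helper are hypothetical:

```python
import json

class PlenumError(Exception):
    """Carries Plenum's error.code for programmatic handling (hypothetical)."""
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def handle_output(raw_json):
    """Check the `ok` field first, then surface `error.code` or return data."""
    payload = json.loads(raw_json)
    if not payload["ok"]:
        err = payload["error"]
        raise PlenumError(err["code"], err["message"])
    return payload["data"]

# Works against the output shapes shown in this document:
ok_out = '{"ok": true, "engine": "sqlite", "command": "query", "data": {"columns": ["n"], "rows": [{"n": 1}]}, "meta": {"execution_ms": 1}}'
print(handle_output(ok_out)["columns"])  # ['n']
```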
Connections are organized by project path, with named connections and an explicit default pointer:
{
"projects": {
"/home/user/myproject": {
"connections": {
"local": {
"engine": "sqlite",
"file": "./app.db"
},
"staging": {
"engine": "postgres",
"host": "staging.example.com",
"port": 5432,
"user": "readonly",
"password_env": "STAGING_DB_PASSWORD",
"database": "staging"
},
"prod": {
"engine": "postgres",
"host": "db.example.com",
"port": 5432,
"user": "readonly",
"password_env": "PROD_DB_PASSWORD",
"database": "production"
}
},
"default": "local"
}
}
}

The global config uses the same structure, allowing you to store connections for multiple projects:
{
"projects": {
"/home/user/project1": {
"connections": {
"dev": {
"engine": "postgres",
"host": "localhost",
"port": 5432,
"user": "postgres",
"password": "localdev",
"database": "project1_dev"
}
},
"default": "dev"
},
"/home/user/project2": {
"connections": {
"dev": {
"engine": "mysql",
"host": "localhost",
"port": 3306,
"user": "root",
"password": "rootpass",
"database": "project2_dev"
},
"prod": {
"engine": "mysql",
"host": "prod.example.com",
"port": 3306,
"user": "readonly",
"password_env": "MYSQL_PASSWORD",
"database": "project2_prod"
}
},
"default": "dev"
}
}
}

When running commands from within a project directory, Plenum automatically uses the current project path:
# From /home/user/myproject directory:
# Uses the connection specified by the project's default pointer
plenum query --sql "SELECT * FROM users"
# Use a different named connection within the same project
plenum query --name staging --sql "SELECT * FROM users"
# Query a different project's connection
plenum query --project-path /home/user/project2 --sql "SELECT * FROM products"
# Query a specific named connection in a different project
plenum query --project-path /home/user/project2 --name prod --sql "SELECT * FROM products"

- Connection overhead: Each command opens and closes a connection (stateless design)
- Large result sets: Use `--max-rows` to limit memory usage
- Complex introspection: Databases with 100+ tables may take several seconds to introspect
- Network latency: Remote database connections will be slower than local
- SQLite is fastest: No network overhead, synchronous driver
Run benchmarks to measure performance on your workload:
cargo bench --features sqlite