
Commit 3f95e11

feat: add execute/executions CLI commands and refactor generate (#180)
## Summary

- **New `vlmrun execute` command**: runs agent executions via `/v1/agent/execute` with support for skills (`--skill`, `--skill-id`), toolsets (`--toolset`), file inputs (`-i`), schema (`--schema`), and full agent names (`--name <agent>:<version>`). Supports `--wait` with polling and `--format json` output.
- **New `vlmrun executions list/get` subcommands**: list and retrieve agent execution results with rich formatted output (panel layout, colorized status, duration computed from timestamps).
- **Refactored `vlmrun generate`**: replaced the separate `image` and `document` subcommands with a single unified command (`vlmrun generate -i <file>`) that auto-detects the file type from its extension and routes to the appropriate API endpoint (`/v1/document/generate` for images and documents, `/v1/video/generate` for video, `/v1/audio/generate` for audio). Supports skills, schema, batch/wait modes, and JSON output.
- **Improved `vlmrun predictions list`**: formatted output matching the executions style, with panel layout, colorized status, and duration computed from timestamps. Removed the N+1 API-call overhead (no longer fetches each prediction individually).
- **Fixed truthiness checks** in `predictions get` and `executions get` so that `0` values for usage fields (`elements_processed`, `credits_used`, etc.) are displayed correctly.
- **Updated `CreditUsage` model** with `steps`, `message`, and `duration_seconds` fields.
- **Version bump** to 0.6.0.
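The extension-based routing described above can be sketched as follows. This is a minimal illustration, not the SDK's actual implementation: the function name `route_endpoint` and the specific extension sets are assumptions; only the three endpoint paths come from the summary.

```python
from pathlib import Path

# Assumed extension sets for illustration; the real CLI may accept more.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}
DOC_EXTS = {".pdf"}
VIDEO_EXTS = {".mp4", ".mov"}
AUDIO_EXTS = {".mp3", ".wav", ".m4a"}


def route_endpoint(file: str) -> str:
    """Pick a generate endpoint from the input file's extension."""
    ext = Path(file).suffix.lower()
    if ext in IMAGE_EXTS or ext in DOC_EXTS:
        # Images and documents share the document endpoint.
        return "/v1/document/generate"
    if ext in VIDEO_EXTS:
        return "/v1/video/generate"
    if ext in AUDIO_EXTS:
        return "/v1/audio/generate"
    raise ValueError(f"Unsupported file type: {ext}")
```

With this scheme, `vlmrun generate -i photo.jpg` and `vlmrun generate -i doc.pdf` both route to `/v1/document/generate`, matching the test plan below.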
## Test plan

- [ ] `vlmrun execute -i <file> --skill-id <skill> --wait` completes and displays the result
- [ ] `vlmrun executions list` shows a formatted table with duration
- [ ] `vlmrun executions get <id>` shows full details, including credits
- [ ] `vlmrun generate -i photo.jpg --domain image.caption` auto-detects the image type
- [ ] `vlmrun generate -i doc.pdf --domain document.invoice` routes to the document endpoint
- [ ] `vlmrun predictions list` is fast (no N+1 API calls) and shows formatted output
- [ ] `vlmrun predictions get <id>` shows `0` values correctly (not hidden)

Made with [Cursor](https://cursor.com)
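The truthiness bug fixed in `predictions get` and `executions get` is the classic pattern below. This is an illustrative sketch, not the CLI's actual code; the field names come from the summary.

```python
# Usage fields where 0 is a legitimate value, per the summary.
usage = {"elements_processed": 0, "credits_used": 0}

# Buggy check: `0` is falsy in Python, so zero-valued fields were hidden.
shown_buggy = [key for key, value in usage.items() if value]

# Fixed check: test for None explicitly, so `0` still gets displayed.
shown_fixed = [key for key, value in usage.items() if value is not None]
```

The buggy filter drops every field here, while the fixed filter keeps both, which is what the last test-plan item verifies.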
Parent commit: f870419

8 files changed: 1184 additions & 71 deletions

## tests/cli/test_cli_generate.py (4 additions & 4 deletions)
```diff
@@ -7,24 +7,24 @@


 def test_generate_image(runner, mock_client, config_file, tmp_path):
-    """Test generate image command."""
+    """Test generate command with an image file."""
     path: Path = download_artifact(
         "https://storage.googleapis.com/vlm-data-public-prod/hub/examples/document.invoice/invoice_1.jpg",
         format="file",
     )
     result = runner.invoke(
-        app, ["generate", "image", str(path), "--domain", "document.invoice"]
+        app, ["generate", "-i", str(path), "--domain", "document.invoice"]
     )
     assert result.exit_code == 0


 def test_generate_document(runner, mock_client, config_file, tmp_path):
-    """Test generate document command."""
+    """Test generate command with a document file."""
     path: Path = download_artifact(
         "https://storage.googleapis.com/vlm-data-public-prod/hub/examples/document.bank-statement/lending_bankstatement.pdf",
         format="file",
     )
     result = runner.invoke(
-        app, ["generate", "document", str(path), "--domain", "document.bank-statement"]
+        app, ["generate", "-i", str(path), "--domain", "document.bank-statement"]
     )
     assert result.exit_code == 0
```
