Prompt Management
tmam includes a full prompt management system with versioning, publishing, and a live multi-model comparison playground called OpenGround.
Concept | Description
-- | --
Prompt | A named prompt template with one or more versions
Version | A snapshot of a prompt's content, tags, and metadata
Draft | An unpublished version being worked on
Published | A released version available for production use
Prompt Hub | The SDK endpoint for fetching prompts at runtime
OpenGround | The live playground for comparing model outputs
To create a prompt:
- Go to Prompt Management → Prompts
- Click New Prompt
- Choose a prompt type:
  - Text — a single text prompt
  - Chat — a sequence of messages with roles (system, user, assistant, developer, placeholder)
- Write your prompt content
- Add tags and metadata properties if needed
- Save as Draft or publish immediately
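The two prompt types above can be pictured as plain data structures. A minimal sketch, assuming a hypothetical shape for illustration (this is not the actual tmam storage schema):

```python
# Illustrative only: hypothetical shapes for the two prompt types.
text_prompt = {
    "type": "text",
    "content": "Summarize the following document in three bullet points.",
}

chat_prompt = {
    "type": "chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
}

# Chat prompts allow these roles, per the list above.
ALLOWED_ROLES = {"system", "user", "assistant", "developer", "placeholder"}
assert all(m["role"] in ALLOWED_ROLES for m in chat_prompt["messages"])
```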
Every save creates a new version. Versions are numbered sequentially (1, 2, 3...) and can be:
- Draft — work in progress, not yet available via the Prompt Hub
- Published — the live version served by the SDK
You can view the full version history, compare versions, and roll back to any previous version.
Each version stores:
- Version number and label
- Prompt content (roles + messages for chat prompts)
- Tags
- Custom meta properties (arbitrary key/value pairs)
- Prompt references (references to other prompts)
- Who last updated it and when
Use get_prompt() to fetch a prompt from the Prompt Hub at runtime:

```python
from tmam import init, get_prompt

init(
    url="http://localhost:5050/api/sdk",
    public_key="pk-tmam-xxxxxxxx",
    secret_key="sk-tmam-xxxxxxxx",
)

# Fetch the latest published version by name
prompt = get_prompt(name="customer-support-v2")

# Fetch a specific version
prompt = get_prompt(name="customer-support-v2", version=3)

# Fetch by label
prompt = get_prompt(name="customer-support-v2", label="stable")
```
```python
from tmam import get_prompt

prompt = get_prompt(
    name="prompt-name",      # str: prompt name (use name OR prompt_id)
    prompt_id="xxxxxxxx",    # str: prompt ID (alternative to name)
    label="stable",          # str: version label to fetch (optional)
    version=3,               # int: specific version number (optional)
)
```
Returns the compiled prompt data, or `None` on error.
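Because the call returns None on error, production code should guard the result before using it. A minimal sketch, using a stub in place of the real SDK call (the stub and the fallback text are illustrative assumptions, not part of tmam):

```python
# Stand-in for the real SDK call (real code would use: from tmam import get_prompt).
# The stub simulates a fetch failure by returning None.
def get_prompt(name=None, **kwargs):
    return None

FALLBACK_PROMPT = "You are a helpful assistant."

def fetch_prompt_or_fallback(name):
    # get_prompt() returns the compiled prompt data, or None on error,
    # so fall back to a baked-in default instead of passing None downstream.
    prompt = get_prompt(name=name)
    return prompt if prompt is not None else FALLBACK_PROMPT
```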
The `source` field is automatically set to `"Python"` to identify SDK-fetched prompts in usage analytics.
Chat prompts support `{{variable}}` placeholders that the server compiles at fetch time:

```
You are a helpful assistant for {{company_name}}.
Answer the user's question about {{topic}}.
```
Pass variable values when fetching:

```python
prompt = get_prompt(
    name="support-agent",
    variables={"company_name": "Acme Corp", "topic": "billing"},
)
```
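The substitution itself is straightforward: each `{{name}}` placeholder is replaced by its value. A rough local illustration of that behavior (not the server's actual implementation):

```python
import re

def compile_template(template, variables):
    # Replace each {{name}} placeholder with its value; leave unknown names intact.
    def sub(match):
        key = match.group(1).strip()
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", sub, template)

template = "You are a helpful assistant for {{company_name}}. Answer about {{topic}}."
compiled = compile_template(template, {"company_name": "Acme Corp", "topic": "billing"})
print(compiled)
```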
OpenGround lets you run the same prompt against multiple models simultaneously and compare their responses side by side.
OpenAI · Anthropic · Cohere · Mistral · Gemini · Grok · Llama · DeepSeek
To run a comparison:
- Go to Prompt Management → OpenGround
- Click New Comparison
- Select the providers and models to compare
- Write your prompt (or load from the Prompt Hub)
- Set parameters: temperature, max tokens, system message
- Click Run to execute against all selected models
Results appear side by side with response text, token counts, cost, and latency for each model.
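Those per-model metrics also make results easy to rank once exported. A hypothetical sketch (the field names and values are invented for illustration, not the OpenGround export format):

```python
# Hypothetical comparison results (values invented for illustration).
results = [
    {"model": "gpt-4o", "tokens": 412, "cost_usd": 0.0062, "latency_ms": 1210},
    {"model": "claude-3-5-sonnet", "tokens": 388, "cost_usd": 0.0054, "latency_ms": 980},
    {"model": "mistral-large", "tokens": 430, "cost_usd": 0.0031, "latency_ms": 1430},
]

# Rank by cost, then latency, to shortlist candidate models.
ranked = sorted(results, key=lambda r: (r["cost_usd"], r["latency_ms"]))
for r in ranked:
    print(f'{r["model"]}: ${r["cost_usd"]:.4f}, {r["latency_ms"]} ms')
```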
Comparison results are saved and can be reviewed later from the OpenGround list view. This is useful for:
- A/B testing different prompt strategies
- Benchmarking new model releases against your current model
- Sharing comparison results with your team
For OpenGround to call external APIs, you need to configure model credentials in Settings → Models:
- Go to Settings → Models
- Click Add Model
- Select a provider and model
- Link a Vault secret containing the API key for that provider
- Toggle active/inactive as needed
Model configurations are stored per organization/project and used by OpenGround and the AI Arbiter evaluator.
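Conceptually, each entry pairs a provider/model with a Vault secret reference and an active flag. A minimal sketch of what one configuration might look like (field names are assumptions, not the tmam schema):

```python
# Hypothetical shape of one model configuration entry (illustrative only).
model_config = {
    "provider": "OpenAI",
    "model": "gpt-4o",
    "vault_secret": "openai-api-key",  # name of the Vault secret holding the API key
    "active": True,                    # toggled on/off in Settings -> Models
}
```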