Prompt Management

Niccanor Dhas edited this page Feb 22, 2026 · 1 revision

tmam includes a full prompt management system with versioning, publishing, and a live multi-model comparison playground called OpenGround.


Concepts

Concept | Description
-- | --
Prompt | A named prompt template with one or more versions
Version | A snapshot of a prompt's content, tags, and metadata
Draft | An unpublished version being worked on
Published | A released version available for production use
Prompt Hub | The SDK endpoint for fetching prompts at runtime
OpenGround | The live playground for comparing model outputs

Creating a Prompt

  1. Go to Prompt Management → Prompts
  2. Click New Prompt
  3. Choose a prompt type:
    • Text — a single text prompt
    • Chat — a sequence of messages with roles (system, user, assistant, developer, placeholder)
  4. Write your prompt content
  5. Add tags and metadata properties if needed
  6. Save as Draft or publish immediately
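To make the two prompt types concrete, here is a rough sketch of their shapes. The field names below are illustrative assumptions, not the actual tmam storage schema:

```python
# Hypothetical shapes for the two prompt types; field names are
# illustrative, not tmam's real storage format.
text_prompt = {
    "type": "text",
    "content": "Summarize the following document in three bullet points.",
}

chat_prompt = {
    "type": "chat",
    "messages": [
        {"role": "system", "content": "You are a concise support agent."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
}

# Roles allowed for chat messages, per the list above.
ALLOWED_ROLES = {"system", "user", "assistant", "developer", "placeholder"}
assert all(m["role"] in ALLOWED_ROLES for m in chat_prompt["messages"])
```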

Prompt Versioning

Every save creates a new version. Versions are numbered sequentially (1, 2, 3...) and can be:

  • Draft — work in progress, not yet available via the Prompt Hub
  • Published — the live version served by the SDK

You can view the full version history, compare versions, and roll back to any previous version.

Version metadata

Each version stores:

  • Version number and label
  • Prompt content (roles + messages for chat prompts)
  • Tags
  • Custom meta properties (arbitrary key/value pairs)
  • Prompt references (references to other prompts)
  • Who last updated it and when
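Taken together, one stored version might look like the record below. This is a sketch only; the key names are assumptions made for illustration:

```python
# Illustrative shape of a single version record; not tmam's real schema.
version = {
    "version": 3,
    "label": "stable",
    "content": [
        {"role": "system", "content": "You are a helpful assistant."},
    ],
    "tags": ["support", "v2"],
    "meta_properties": {"team": "cx", "reviewed": "true"},   # arbitrary key/value pairs
    "prompt_references": ["escalation-policy"],              # names of other prompts
    "updated_by": "alice@example.com",
    "updated_at": "2026-02-22T10:15:00Z",
}
```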

Fetching Prompts in the SDK

Use get_prompt() to fetch a prompt from the Prompt Hub at runtime:

```python
from tmam import init, get_prompt

init(
    url="http://localhost:5050/api/sdk",
    public_key="pk-tmam-xxxxxxxx",
    secret_key="sk-tmam-xxxxxxxx",
)

# Fetch the latest published version by name
prompt = get_prompt(name="customer-support-v2")

# Fetch a specific version
prompt = get_prompt(name="customer-support-v2", version=3)

# Fetch by label
prompt = get_prompt(name="customer-support-v2", label="stable")
```


Full get_prompt() Reference

```python
from tmam import get_prompt

prompt = get_prompt(
    name="prompt-name",     # str: prompt name (use name OR prompt_id)
    prompt_id="xxxxxxxx",   # str: prompt ID (alternative to name)
    label="stable",         # str: version label to fetch (optional)
    version=3,              # int: specific version number (optional)
)
# Returns the compiled prompt data, or None on error
```

The source field is automatically set to "Python" to identify SDK-fetched prompts in usage analytics.
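Because `get_prompt()` returns `None` on error, production code should guard the result before use. A minimal sketch — `get_prompt` is stubbed here so the example runs without a tmam server, and the fallback string is an assumption, not part of the SDK:

```python
# Stub standing in for `from tmam import get_prompt`, so this sketch
# runs offline; in real code, import it from the SDK instead.
def get_prompt(name, **kwargs):
    return None  # simulate an error (server unreachable, unknown prompt, ...)

# Hypothetical local fallback; choose something safe for your application.
FALLBACK_PROMPT = "You are a helpful assistant."

fetched = get_prompt(name="customer-support-v2")
prompt_text = fetched if fetched is not None else FALLBACK_PROMPT
```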


Template Variables

Chat prompts support {{variable}} placeholders that the server compiles at fetch time:

Prompt template:

```
You are a helpful assistant for {{company_name}}.
Answer the user's question about {{topic}}.
```

Pass variable values when fetching:

```python
prompt = get_prompt(
    name="support-agent",
    variables={"company_name": "Acme Corp", "topic": "billing"},
)
```
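The substitution itself happens server-side at fetch time. Conceptually it is a simple `{{name}}` replacement, which can be sketched locally like this (the helper below is illustrative only, not part of the SDK; it leaves unknown placeholders untouched, which is an assumption about the server's behavior):

```python
import re

def compile_template(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values; unknown names are left as-is."""
    def substitute(match):
        key = match.group(1).strip()
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", substitute, template)

template = (
    "You are a helpful assistant for {{company_name}}.\n"
    "Answer the user's question about {{topic}}."
)
compiled = compile_template(
    template, {"company_name": "Acme Corp", "topic": "billing"}
)
# compiled == ("You are a helpful assistant for Acme Corp.\n"
#              "Answer the user's question about billing.")
```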

OpenGround — Model Comparison Playground

OpenGround lets you run the same prompt against multiple models simultaneously and compare their responses side by side.

Supported providers in OpenGround

OpenAI · Anthropic · Cohere · Mistral · Gemini · Grok · Llama · DeepSeek

Using OpenGround

  1. Go to Prompt Management → OpenGround
  2. Click New Comparison
  3. Select the providers and models to compare
  4. Write your prompt (or load from the Prompt Hub)
  5. Set parameters: temperature, max tokens, system message
  6. Click Run to execute against all selected models

Results appear side by side with response text, token counts, cost, and latency for each model.
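Each per-model result can be thought of as a small record combining the response with its metrics. A hypothetical shape (all field names and values below are assumptions for illustration):

```python
# Illustrative per-model comparison result; not OpenGround's real schema.
result = {
    "provider": "OpenAI",
    "model": "gpt-4o",
    "response": "To reset your password, open Settings and ...",
    "tokens": {"prompt": 120, "completion": 340},
    "cost_usd": 0.0042,
    "latency_ms": 870,
}
```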

Saving comparisons

Comparison results are saved and can be reviewed later from the OpenGround list view. This is useful for:

  • A/B testing different prompt strategies
  • Benchmarking new model releases against your current model
  • Sharing comparison results with your team

Model Configuration

For OpenGround to call external APIs, you need to configure model credentials in Settings → Models:

  1. Go to Settings → Models
  2. Click Add Model
  3. Select a provider and model
  4. Link a Vault secret containing the API key for that provider
  5. Toggle active/inactive as needed

Model configurations are stored per organization/project and used by OpenGround and the AI Arbiter evaluator.
