Add Prompts API #14
Conversation
```rust
/// A single chat message.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[non_exhaustive]
pub struct PromptChatMessage {
```
I personally think we should just avoid doing this until we can fetch prompts in lingua format (or use lingua to convert whatever we get from the API to this). I don't think we should have prompt typespecs redundantly defined both here and in lingua.
I can delay until lingua is ready, if you like? I agree that if the types are likely to change then we should probably hold off. That said, could lingua be used as an internal implementation detail here, or would we want its types exported directly as the publicly visible ones?
I've updated to use Lingua. It's using a git dependency at the moment. How should we proceed from there?
```toml
[dependencies]
anyhow = "1"
base64 = "0.22"
lingua = { git = "https://github.com/braintrustdata/lingua.git" }
```
We can't publish a crate with git dependencies, so we'll need to fix this before that happens.
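For reference, crates.io rejects `git` dependencies outright, so before publishing, the dependency would need to point at a published version (a `rev` pin only helps for development, not publishing). A sketch of what the manifest might look like, assuming lingua is eventually released to crates.io; note the version number is hypothetical, and the name may need to differ since a `lingua` crate already exists on crates.io:

```toml
[dependencies]
anyhow = "1"
base64 = "0.22"
# Hypothetical: requires a crates.io release of lingua first.
lingua = "0.1"
```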
```json
    "max_tokens": 1000
    }
  },
  "template_format": "mustache"
```
`template_format` is in the tests, but we don't actually implement this in the PR. Should we do that?
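If we did implement it, the simplest mustache case is plain `{{name}}` interpolation. A minimal sketch (the function name is hypothetical, and a real implementation would more likely use a mustache crate and handle escaping and sections):

```rust
use std::collections::HashMap;

/// Minimal sketch of mustache-style variable substitution.
/// Only handles `{{name}}` placeholders; no escaping or sections.
fn render_mustache(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // Build the literal `{{key}}` placeholder and replace it.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}
```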
```rust
/// Use `client.prompt_builder_with_credentials()` to create a builder,
/// then configure it and call `build()` to fetch the prompt.
#[allow(private_bounds)]
pub struct PromptBuilder<S: PromptFetcher> {
```
Ideally we can avoid the `#[allow(private_bounds)]`.
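One common way to drop the allow is the sealed-trait pattern: make the bound trait public (so it is nameable and the lint goes away) but give it a supertrait in a private module, so downstream crates can't implement it. A sketch under that assumption; only `PromptBuilder` and `PromptFetcher` come from the PR, the rest is illustrative:

```rust
mod sealed {
    /// Supertrait in a private module: only this crate can implement it.
    pub trait Sealed {}
}

/// Publicly nameable, so `PromptBuilder<S: PromptFetcher>` no longer
/// needs `#[allow(private_bounds)]`, yet unimplementable downstream.
pub trait PromptFetcher: sealed::Sealed {}

/// Hypothetical concrete fetcher to show the pattern end to end.
pub struct HttpFetcher;
impl sealed::Sealed for HttpFetcher {}
impl PromptFetcher for HttpFetcher {}

pub struct PromptBuilder<S: PromptFetcher> {
    #[allow(dead_code)]
    fetcher: S,
}

impl<S: PromptFetcher> PromptBuilder<S> {
    pub fn new(fetcher: S) -> Self {
        Self { fetcher }
    }
}
```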
```rust
.json::<PromptFetchResponse>()
.await
.map_err(|e| BraintrustError::Api {
    status: 200,
```
We should pick a better status than 200 for the error — the request succeeded; it's the response body that failed to decode, so no HTTP status really applies.
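One option would be a dedicated decode variant instead of reusing `Api` with a fake status. A sketch; the variant name and shape here are hypothetical, the real `BraintrustError` in the PR differs:

```rust
/// Hypothetical error type sketch; the PR's real BraintrustError differs.
#[derive(Debug)]
pub enum BraintrustError {
    /// An HTTP-level failure with a genuine status code.
    Api { status: u16, message: String },
    /// The response body could not be deserialized;
    /// no HTTP status is implied, so none is stored.
    Decode(String),
}

/// Map a deserialization failure to the decode variant.
fn map_decode_err(e: impl std::fmt::Display) -> BraintrustError {
    BraintrustError::Decode(e.to_string())
}
```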
Prompts let you version and manage LLM prompts centrally, with support for variable substitution and pinning to specific versions.