Added:
- Added a `waitForUserInput` property (defaults to `true`) within the `llmConnector` attribute - if set to `false`, the LLM prompt is fired upon entering the llm block (i.e. it uses the user input from the previous block instead of waiting for the next user input)
Fixed:
- Fixed an issue with autofocus applying even when embedded chatbot is out of view
Fixed:
- Fixed an issue with the OpenAI Provider not working with `responseFormat` set to `json`
Added:
- Added an optional `debug` property to all 3 default providers that prints more verbose logs that may help during development
Fixed:
- Fixed an issue where the @wllama/wllama package was causing issues for some users
Note:
WllamaProvider is no longer shipped by default with the plugin, primarily because packaging it into the plugin causes issues that are hard to resolve plugin-side. There's also a lack of practical use case for it currently, though the default implementation is still available for users to copy into their project here.
Fixed:
- Fixed an issue where GeminiProvider's `responseFormat` field was required instead of optional
- Fixed an issue where stop conditions did not abort bot streaming responses
- Fixed an issue where error messages did not respect the output type
Added:
- Added an `initialMessage` property within the `llmConnector` attribute to allow users to specify an initial message easily
Added:
- Initial Release