This is a lightweight mock backend used to simulate AGUI streaming behavior while the real backend is under development.
The goal of this server is to:
- Enable UI QA testing
- Simulate realistic streaming responses
- Mimic backend success and failure scenarios
- Keep the implementation simple and easy to modify
The server exposes two endpoints:

- `POST /orgs/:orgId/agents/:agentId/answer`
- `POST /orgs/:orgId/agents/:agentId/follow-up`
Both endpoints stream events using Server-Sent Events (SSE) following the AGUI protocol.
The responses are hard-coded but dynamically adapted based on the input question.
The mock server supports simple keyword-based behavior to simulate real backend conditions.
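Each streamed event is a standard SSE `data:` frame carrying a JSON payload. A minimal sketch of that framing (the helper name `sseFrame` and the exact field names are illustrative, not the AGUI schema itself):

```javascript
// Serialize one event as a Server-Sent Events frame: a "data:" line
// holding the JSON payload, terminated by a blank line.
function sseFrame(event) {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Example: the first event of any run.
const frame = sseFrame({ type: 'RUN_STARTED', threadId: 'thread-123' });
```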
If the question (`q` in the request body) contains the word `error`, the server will:

- Emit `RUN_STARTED`
- Emit the `header` custom event
- Emit `RUN_ERROR`
- End the stream
This allows QA to validate error states and UI error handling.
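The keyword check can be a plain substring match on the question; a sketch (the function name and scenario labels are my own, not taken from the mock):

```javascript
// Map the incoming question to a mock scenario via keyword matching.
function scenarioFor(q) {
  const text = (q || '').toLowerCase();
  if (text.includes('error')) return 'error';       // stream RUN_ERROR
  if (text.includes('disabled')) return 'disabled'; // followUpEnabled: false
  return 'success';                                 // normal streamed answer
}
```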
If the question contains the word `disabled`, the `/answer` endpoint will send:

```json
{
  "followUpEnabled": false
}
```

inside the `header` custom event. Otherwise, `followUpEnabled` defaults to `true`.
This allows QA to verify UI behavior when follow-up functionality is disabled.
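A sketch of building that `header` event, assuming a generic custom-event envelope (the mock's actual event shape may differ):

```javascript
// Build the "header" custom event; followUpEnabled flips to false only
// when the question mentions "disabled".
function headerEvent(q) {
  return {
    type: 'CUSTOM',
    name: 'header',
    value: { followUpEnabled: !/disabled/i.test(q || '') },
  };
}
```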
The generated response:

- Is hard-coded
- Streams content in small chunks
- Simulates step progression (`searching`, `thinking`)

This makes the UI behave as if it is connected to a real AI backend.
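The chunked streaming above can be sketched as splitting the hard-coded answer and writing one `TEXT_MESSAGE_CHUNK` frame per piece, with a short pause between writes (chunk size and delay below are illustrative, not the server's actual values):

```javascript
// Split the answer text into fixed-size chunks.
function chunkText(text, size = 12) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// Write one SSE frame per chunk, pausing between writes to simulate
// token-by-token generation.
async function streamAnswer(res, text, delayMs = 40) {
  for (const delta of chunkText(text)) {
    res.write(`data: ${JSON.stringify({ type: 'TEXT_MESSAGE_CHUNK', delta })}\n\n`);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```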
`POST /answer` generates the initial (head) answer. It:

- Creates a new `threadId`
- Sends, in order:
  - `RUN_STARTED`
  - `header` (with `followUpEnabled`)
  - `STEP_STARTED` / `STEP_FINISHED` (`searching`, `thinking`)
  - `TEXT_MESSAGE_CHUNK` (streamed answer)
  - `RUN_FINISHED`
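That event order for a head answer can be written down as a plain array before being framed and streamed; a sketch (event shapes are assumptions, not the exact AGUI schema):

```javascript
// Ordered event list for a successful head answer.
function headAnswerEvents(threadId, followUpEnabled, chunks) {
  return [
    { type: 'RUN_STARTED', threadId },
    { type: 'CUSTOM', name: 'header', value: { followUpEnabled } },
    { type: 'STEP_STARTED', stepName: 'searching' },
    { type: 'STEP_FINISHED', stepName: 'searching' },
    { type: 'STEP_STARTED', stepName: 'thinking' },
    { type: 'STEP_FINISHED', stepName: 'thinking' },
    ...chunks.map((delta) => ({ type: 'TEXT_MESSAGE_CHUNK', delta })),
    { type: 'RUN_FINISHED', threadId },
  ];
}
```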
`POST /follow-up` generates a follow-up answer in an existing conversation. It:

- Uses `conversationId` from the request body as the `threadId`
- Sends, in order:
  - `RUN_STARTED`
  - `header`
  - `STEP_STARTED` / `STEP_FINISHED` (`thinking`)
  - `TEXT_MESSAGE_CHUNK`
  - `RUN_FINISHED`

This endpoint does not control `followUpEnabled`.
To run the mock server:

```
npm install
node server.js
```

The server listens at `http://localhost:3000`.
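For quick QA checks against the running server, the raw SSE stream can be parsed back into events. A sketch assuming each event arrives as a single `data:` line, as this mock emits (a production SSE client should handle multi-line `data:` fields too):

```javascript
// Parse raw SSE text into an array of event objects.
function parseSse(raw) {
  return raw
    .split('\n\n')
    .filter((block) => block.startsWith('data: '))
    .map((block) => JSON.parse(block.slice('data: '.length)));
}
```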