[Problem]
AI agents often have the capability to perform "High-Stakes" actions (e.g., transferring funds, deleting records, or sending external emails). Executing these actions autonomously without a mechanism for human oversight poses a significant security and operational risk.
[Why]
Trust is the biggest barrier to AI adoption. We need a standardized way to pause execution, request permission from a human, and resume—without the module logic itself needing to know about the UI or the approval mechanism.
[How]
- Pipeline Injection: Inserted "Step 4.5: Approval Gate" into the Executor pipeline.
- Declarative Metadata: Added the requires_approval annotation to the protocol, allowing the AI to know before calling that a human will be involved.
- Approval Protocol: Defined ApprovalHandler and ApprovalRequest structures, supporting both synchronous (Phase A) and asynchronous (Phase B) approval flows.
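The pieces above can be sketched together in Python. The names requires_approval, ApprovalHandler, and ApprovalRequest come from the design; the specific fields, signatures, and the approval_gate helper are illustrative assumptions showing only the synchronous (Phase A) flow.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Protocol
import uuid

@dataclass
class ApprovalRequest:
    """Describes a pending high-stakes action awaiting a human decision.

    Fields here are assumptions, not the canonical structure.
    """
    action_name: str
    payload: dict[str, Any]
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class ApprovalHandler(Protocol):
    """Pluggable approval mechanism; module logic never sees the UI."""
    def request_approval(self, request: ApprovalRequest) -> bool:
        """Block until a human approves (True) or rejects (False)."""
        ...

def requires_approval(fn: Callable) -> Callable:
    """Declarative metadata: mark an action as needing human sign-off."""
    fn.requires_approval = True
    return fn

def approval_gate(fn: Callable, payload: dict,
                  handler: ApprovalHandler) -> Any:
    """The 'Step 4.5' gate: pause before any annotated action."""
    if getattr(fn, "requires_approval", False):
        req = ApprovalRequest(action_name=fn.__name__, payload=payload)
        if not handler.request_approval(req):
            raise PermissionError(f"Rejected: {req.action_name}")
    return fn(**payload)

# Hypothetical usage: a high-stakes action and a stand-in handler.
@requires_approval
def transfer_funds(amount: int, to: str) -> str:
    return f"sent {amount} to {to}"

class ConsoleApprover:
    """Stand-in for a real human prompt; always approves."""
    def request_approval(self, request: ApprovalRequest) -> bool:
        return True

result = approval_gate(transfer_funds,
                       {"amount": 100, "to": "acct-1"},
                       ConsoleApprover())
```

Because the annotation lives on the action itself and the handler is injected, the module body stays free of any knowledge of how (or by whom) approval is granted; an asynchronous Phase B handler could persist the ApprovalRequest and resume later without touching the action code.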