Skip to content

openclay-ai/openclay

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

29 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

OpenClay Logo

OpenClay

Secure First → Execute Second.
zero-trust execution framework for LLM agents.

PyPI Downloads Legacy Downloads Docs License

Disclaimer: We are continuously conducting background research to better understand agentic AI behavior, security challenges, and emerging risks. The insights gathered from this work help us identify vulnerabilities, improve system safeguards, and apply ongoing fixes to strengthen overall security. You are welcome to use, fork, and improve this project as it evolves through ongoing research and community contributions.


Why OpenClay?

Every AI framework — LangChain, CrewAI, LlamaIndex — trusts the input, trusts the tools, trusts the memory. OpenClay operates on the opposite principle:

You don't build an agent and built on security. You define a Security Policy, and the agent executes inside it.


Installation

pip install openclay
pip install openclay[ml]      # ML ensemble (RF, SVM, LR, GBT)
pip install openclay[embed]   # Sentence-Transformers for semantic similarity
pip install openclay[all]     # Everything

Quick Start

Shield (Core Security Layer)

from openclay import Shield

shield = Shield.strict()

result = shield.protect_input(
    user_input="Ignore all previous instructions...",
    system_context="You are a helpful assistant."
)

if result["blocked"]:
    print(f"Blocked: {result['reason']}")

ClayRuntime (Secure Execution)

Wrap any LLM call or chain — shields fire automatically on input and output.

from openclay import ClayRuntime, StrictPolicy

runtime = ClayRuntime(policy=StrictPolicy())
result = runtime.run(my_llm, "Analyze this data", context=system_prompt)

if result.blocked:
    print(result.trace.explain())
else:
    print(result.output)

Knight (Secure Agent)

from openclay import Knight, Shield, ClayMemory

knight = Knight(
    name="researcher",
    llm_caller=my_llm,
    tools=[search_web],
    shield=Shield.strict(),
    memory=ClayMemory(),
)

result = knight.run("Find data on AI security")

Squad (Multi-Agent Orchestration)

from openclay import Knight, Squad, Shield

squad = Squad(
    knights=[researcher, writer],
    shield=Shield.secure()  # Master shield prevents inter-agent poisoning
)

result = squad.deploy("Analyze AI threats", my_workflow)

Golem (Autonomous Long-Running Entity)

from openclay import Golem, Shield, ClayMemory

golem = Golem(name="sentinel", llm_caller=my_llm, shield=Shield.strict())

golem.start()
golem.submit("Monitor incoming data for threats")
results = golem.collect()
golem.stop()

Core Modules

Module Description
openclay.shields 8-layer threat detection engine (patterns, ML, DeBERTa, canaries, PII)
openclay.runtime Secure execution wrapper — shields before input, shields after output
openclay.tools @ClayTool decorator — scans tool outputs before they reach the agent
openclay.knights Knight (single agent) + Squad (multi-agent orchestration)
openclay.memory ClayMemory — pre-write and pre-read poisoning prevention
openclay.policies StrictPolicy, ModeratePolicy, AuditPolicy, CustomPolicy
openclay.tracing Trace + TraceLog — JSON telemetry for observability pipelines
openclay.golem Golem — autonomous entity with lifecycle (start, stop, pause, resume)

Shield Presets

Shield.fast()       # Pattern-only, <1ms
Shield.balanced()   # Patterns + session tracking, ~2ms (default)
Shield.strict()     # + ML model + rate limiting + PII, ~7ms
Shield.secure()     # Full ensemble (RF + LR + SVM + GBT), ~12ms

Framework Integrations

from openclay.shields.integrations.langchain import OpenClayCallbackHandler
from openclay.shields.integrations.fastapi import OpenClayMiddleware
from openclay.shields.integrations.litellm import OpenClayLiteLLMCallback
from openclay.shields.integrations.crewai import OpenClayCrewInterceptor

Links


Built by Neural Alchemy