Commit 496bb65

feat: Migrated to support LangGraph + LangChain v 1.0 (#13)
1 parent 480104e commit 496bb65

15 files changed

Lines changed: 1464 additions & 283 deletions

File tree

.gitignore

Lines changed: 1 addition & 1 deletion
@@ -219,5 +219,5 @@ __marimo__/
 /.pre-commit-cache/
 
 # Agents
-# codex-instructions
+codex-instructions
 local

codex-instructions/sdk-production-refactor.md

Lines changed: 0 additions & 174 deletions
This file was deleted.

instrumentation-packages/codon-instrumentation-langgraph/AGENTS.md

Lines changed: 27 additions & 27 deletions
@@ -14,7 +14,7 @@ This document explains how to convert an existing LangGraph `StateGraph` into a
 ## Why Wrap a LangGraph Graph?
 - **Zero instrumentation boilerplate:** every LangGraph node is auto-wrapped with `track_node`, producing OpenTelemetry spans without manual decorators.
 - **Stable identifiers:** nodes become `NodeSpec`s with deterministic SHA-256 IDs, and the overall graph gets a logic ID for caching, retries, and provenance.
-- **Audit-first runtime:** executions use Codon’s token scheduler, producing a detailed ledger (token enqueue/dequeue, node completions, custom events) for compliance.
+- **Telemetry-first runtime:** executions use native LangGraph semantics while Codon emits spans for each node invocation and workload run metadata for downstream analysis.
 - **Drop-in ergonomics:** call `LangGraphWorkloadAdapter.from_langgraph(graph, ...)` and keep your existing LangGraph code unchanged.
 
 ---
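The deterministic-ID bullet above can be sketched in plain Python. This is an illustration only; the helper name `node_spec_id` and the exact fields hashed are assumptions, not the SDK's actual implementation:

```python
import hashlib
import json

def node_spec_id(workload: str, version: str, node: str) -> str:
    # Hash a canonical JSON encoding (sorted keys, no whitespace) so the
    # ID is stable across runs and processes. The real NodeSpec may hash
    # additional fields; this sketch only shows the technique.
    payload = json.dumps(
        {"workload": workload, "version": version, "node": node},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# The same inputs always yield the same 64-char hex ID, which is what
# makes the IDs usable for caching, retries, and provenance.
a = node_spec_id("ResearchAgent", "1.0.0", "writer")
b = node_spec_id("ResearchAgent", "1.0.0", "writer")
assert a == b and len(a) == 64
```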
@@ -33,7 +33,7 @@ from myproject.langgraph import build_graph
 
 langgraph = build_graph()  # returns StateGraph or compiled graph
 
-workload = LangGraphWorkloadAdapter.from_langgraph(
+graph = LangGraphWorkloadAdapter.from_langgraph(
     langgraph,
     name="ResearchAgent",
     version="1.0.0",
@@ -42,16 +42,14 @@ workload = LangGraphWorkloadAdapter.from_langgraph(
 )
 
 initial_state = {"topic": "Sustainable cities"}
-report = workload.execute({"state": initial_state}, deployment_id="dev")
-print(report.node_results("writer")[-1])
-print(f"Ledger entries: {len(report.ledger)}")
+result = graph.invoke({"topic": "Sustainable cities"})
+print(result)
 ```
 
 ### What Happened?
-1. Every LangGraph node was registered as a Codon node via `add_node`, producing a `NodeSpec`.
-2. Edges in the LangGraph became workload edges, so `runtime.emit` drives execution.
-3. `execute` seeded tokens with the provided state, ran the graph in token order, and captured telemetry & audit logs.
-4. You can inspect `report.ledger` for compliance, or `report.node_results(...)` for business outputs.
+1. Every LangGraph node was registered as a Codon `NodeSpec` for deterministic IDs.
+2. The adapter returned an instrumented graph that preserves native LangGraph execution semantics.
+3. Telemetry spans are emitted via callbacks during normal LangGraph invocation (no `execute` call required).
 
 ---
 
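The "instrumented graph that preserves native execution semantics" idea in the hunk above can be sketched as a thin proxy that forwards `invoke` while injecting telemetry callbacks into the config. All class names here are stand-ins invented for illustration; the adapter's real wrapper is more involved:

```python
class SpanRecorder:
    """Stand-in for a telemetry callback: records events it observes."""
    def __init__(self):
        self.events = []

class InstrumentedGraph:
    """Forwards invoke() to the compiled graph, adding callbacks."""
    def __init__(self, compiled, callbacks):
        self._compiled = compiled
        self._callbacks = callbacks

    def invoke(self, state, config=None):
        # Merge caller-supplied callbacks with the adapter's telemetry
        # callbacks, then delegate to the native compiled graph.
        config = dict(config or {})
        config["callbacks"] = list(config.get("callbacks", [])) + self._callbacks
        return self._compiled.invoke(state, config=config)

class FakeCompiled:
    """Stand-in for a compiled LangGraph graph (not the real API)."""
    def invoke(self, state, config=None):
        for cb in (config or {}).get("callbacks", []):
            cb.events.append("run")
        return {**state, "done": True}

recorder = SpanRecorder()
graph = InstrumentedGraph(FakeCompiled(), [recorder])
result = graph.invoke({"topic": "urban gardens"})
assert result["done"] and recorder.events == ["run"]
```

The point of the design is that callers never change their invocation code; only the config passed through to the underlying graph grows extra callbacks.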
@@ -63,7 +61,7 @@ The adapter inspects your graph to extract:
 
 Need finer control? Provide a `node_overrides` mapping where each entry is either a plain dict or `NodeOverride` object. You can specify the role, callable used for `NodeSpec` introspection, model metadata, and explicit schemas:
 ```python
-workload = LangGraphWorkloadAdapter.from_langgraph(
+graph = LangGraphWorkloadAdapter.from_langgraph(
     langgraph,
     name="SupportBot",
     version="2.3.0",
@@ -83,9 +81,7 @@ Any fields you omit fall back to the adapter defaults. Overrides propagate to te
 ---
 
 ## Handling State
-- The adapter expects your token payload to contain a dictionary under the `"state"` key.
-- Each LangGraph node receives that state, invokes the original runnable, and emits updated state to successors.
-- Shared run-level data lives in `runtime.state`; you can read it from within nodes for cross-node coordination.
+- The adapter does not alter LangGraph state semantics. Nodes receive the same state and return the same updates they would without Codon instrumentation.
 
 Example node signature inside your LangGraph graph:
 ```python
@@ -95,21 +91,21 @@ async def researcher(state):
     insights = await fetch_insights(plan)
     return {"insights": insights}
 ```
-When wrapped by the adapter, the Codon node sees `message["state"]` and merges the returned dict with the existing state.
+When wrapped by the adapter, the original LangGraph node callable is preserved and invoked as usual.
 
 ---
 
 ## Entry Nodes
 By default the adapter infers entry nodes as those with no incoming edges. You can override this by supplying `entry_nodes`:
 ```python
-workload = LangGraphWorkloadAdapter.from_langgraph(
+graph = LangGraphWorkloadAdapter.from_langgraph(
     langgraph,
     name="OpsAgent",
     version="0.4.1",
     entry_nodes=["bootstrap"],
 )
 ```
-At execution time you can still override entry nodes via `workload.execute(..., entry_nodes=[...])` if needed.
+Entry nodes are still inferred from the LangGraph structure; override them by changing the graph itself before compilation.
 
 ---
 
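The entry-node inference rule stated in the hunk above (nodes with no incoming edges) can be sketched as a simple in-degree check. This is illustrative only; how the adapter actually inspects `StateGraph` internals may differ:

```python
def infer_entry_nodes(nodes, edges):
    # An entry node is any node that never appears as an edge target,
    # i.e. its in-degree is zero.
    targets = {dst for _, dst in edges}
    return [n for n in nodes if n not in targets]

nodes = ["bootstrap", "plan", "act"]
edges = [("bootstrap", "plan"), ("plan", "act")]
assert infer_entry_nodes(nodes, edges) == ["bootstrap"]
```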
@@ -146,33 +142,34 @@ langgraph.add_edge("writer", "critic")
 langgraph.add_edge("critic", "writer")  # feedback loop
 langgraph.add_edge("critic", "finalize")
 
-workload = LangGraphWorkloadAdapter.from_langgraph(
+graph = LangGraphWorkloadAdapter.from_langgraph(
     langgraph,
     name="ReflectiveAgent",
     version="0.1.0",
 )
-result = workload.execute({"state": {"topic": "urban gardens"}}, deployment_id="demo")
-print(result.node_results("finalize")[-1])
+result = graph.invoke({"topic": "urban gardens"})
+print(result)
 ```
-The ledger records each iteration through the loop, and `runtime.state` tracks iteration counts for auditing.
+Each iteration is reflected in node spans and the graph snapshot span ties the run to the full graph definition.
 
 ---
 
 ## Adapter Options & Artifacts
 - Use `compile_kwargs={...}` when calling `LangGraphWorkloadAdapter.from_langgraph(...)` to compile your graph with checkpointers, memory stores, or any other LangGraph runtime extras. The adapter still inspects the pre-compiled graph for node metadata while compiling with the provided extras so the runtime is ready to go.
-- Set `return_artifacts=True` to receive a `LangGraphAdapterResult` containing the `CodonWorkload`, the original state graph, and the compiled graph. This makes it easy to hand both artifacts to downstream systems (e.g., background runners) without re-compiling.
-- Provide `runtime_config={...}` during adaptation to establish default invocation options (e.g., base callbacks, tracing settings). At execution time, pass `langgraph_config={...}` to `workload.execute(...)` to layer per-run overrides; both configs are merged and supplied alongside Codon’s telemetry callback.
-- Regardless of the return value, the resulting workload exposes `langgraph_state_graph`, `langgraph_compiled_graph`, `langgraph_compile_kwargs`, and `langgraph_runtime_config` for quick access to the underlying LangGraph objects.
+- Set `return_artifacts=True` to receive a `LangGraphAdapterResult` containing the `CodonWorkload`, the original state graph, the compiled graph, and the instrumented graph wrapper. This makes it easy to hand both artifacts to downstream systems (e.g., background runners) without re-compiling.
+- Provide `runtime_config={...}` during adaptation to establish default invocation options (e.g., base callbacks, tracing settings). At invocation time, pass `config={...}` to `graph.invoke(...)` (or `graph.ainvoke(...)`) to layer per-run overrides; both configs are merged and supplied alongside Codon’s telemetry callbacks.
+- The returned graph exposes `workload`, `langgraph_state_graph`, `langgraph_compiled_graph`, `langgraph_compile_kwargs`, and `langgraph_runtime_config` for quick access to the underlying LangGraph objects and metadata.
 
 ---
 
 ## Telemetry & Audit Integration
 - Call `initialize_telemetry(service_name=...)` once during process startup to export spans via OTLP. The initializer now lives in the core SDK (`codon_sdk.instrumentation.initialize_telemetry`) and is re-exported here. It defaults the endpoint to `https://ingest.codonops.ai:4317`, injects `x-codon-api-key` from the argument or `CODON_API_KEY` env, and respects `OTEL_EXPORTER_OTLP_ENDPOINT`/`OTEL_SERVICE_NAME` overrides. If you already have an OTEL tracer provider (e.g., via auto-instrumentation), set `CODON_ATTACH_TO_EXISTING_OTEL_PROVIDER=true` or pass `attach_to_existing=True` to add Codon’s exporter to the existing provider instead of replacing it.
 - Each node span now carries workload metadata (`codon.workload.id`, `codon.workload.run_id`, `codon.workload.logic_id`, `codon.workload.deployment_id`, `codon.organization.id`) so traces can be rolled up by workload, deployment, or organization without joins.
+- Each graph invocation emits a single graph snapshot span (`agent.graph`) with node/edge structure serialized in `codon.graph.definition_json` for full topology visibility.
 - `LangGraphTelemetryCallback` is attached automatically when invoking LangChain runnables; it captures model vendor/identifier, token usage (prompt, completion, total), and response metadata, all of which is emitted as span attributes (`codon.tokens.*`, `codon.model.*`, `codon.node.raw_attributes_json`).
 - Instrumentation writes into the shared `NodeTelemetryPayload` (`runtime.telemetry`) defined by the SDK so future mixins collect the same schema-aligned fields without reimplementing bookkeeping.
 - Node inputs/outputs and latency are recorded alongside status codes, enabling the `trace_events` schema to be populated directly from exported span data.
-- The audit ledger still covers token enqueue/dequeue, node completions, custom events (`runtime.record_event`), and stop requests for replay and compliance workflows.
+- Telemetry spans cover node inputs/outputs, latency, model usage, and workload/run identifiers without altering LangGraph execution.
 
 ### Analytics Alignment
 - The span attribute set is designed to satisfy the MVP telemetry tables in `docs/design/Codon Telemetry Data Schema - MVP Version.txt`. You can aggregate by `nodespec_id` or `logic_id` to compute token totals, error rates, or latency buckets per node.
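The config layering described in the Adapter Options hunk above can be sketched as a shallow merge in which per-invoke values win and callback lists are concatenated rather than replaced. The precedence and merge rules here are assumptions for illustration; the SDK's actual behavior may differ:

```python
def merge_configs(defaults, overrides):
    # Per-run overrides shadow adaptation-time defaults key by key.
    merged = {**defaults, **overrides}
    # Callbacks are additive: the adapter's telemetry callbacks must
    # survive even when the caller supplies their own.
    merged["callbacks"] = (
        list(defaults.get("callbacks", []))
        + list(overrides.get("callbacks", []))
    )
    return merged

defaults = {"tags": ["codon"], "callbacks": ["telemetry_cb"]}
per_run = {"tags": ["demo"], "callbacks": ["my_cb"]}
cfg = merge_configs(defaults, per_run)
assert cfg["tags"] == ["demo"]
assert cfg["callbacks"] == ["telemetry_cb", "my_cb"]
```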
@@ -181,9 +178,12 @@ The ledger records each iteration through the loop, and `runtime.state` tracks i
 ---
 
 ## Limitations & Roadmap
-- Conditional edges: currently you emit along every registered edge; to mimic conditionals, have your node wrapper decide which edges receive tokens. Future versions aim to map LangGraph’s conditional constructs directly.
-- Streaming tokens / concurrency: not yet supported; the adapter processes tokens sequentially (though you can extend it for concurrency).
-- Persistence: the workload runtime is in-memory today. Roadmap includes pluggable stores for tokens/state/audit (see `docs/vision/codon-workload-design-philosophy.md`).
+- Conditional edges: telemetry spans are emitted only for nodes actually executed; the graph snapshot span provides the full topology for downstream analysis.
+- Streaming tokens: relies on LangGraph/LangChain streaming support; Codon captures model usage when providers expose usage metadata.
+- Persistence: execution remains native to LangGraph; persistence is governed by your graph checkpointers and stores.
+- Direct SDK calls: if a node bypasses LangChain runnables and calls provider SDKs directly, token usage callbacks are not emitted.
+- Custom runnables: objects that do not implement `invoke/ainvoke` cannot be auto-wrapped for config injection.
+- Async context boundaries: background tasks may lose ContextVar state, preventing automatic config propagation.
 
 ---
 