Declarative data assembly for Pydantic: eliminate N+1 queries with minimal code.
pydantic-resolve helps you assemble nested response data with Pydantic models. The easiest way to learn it is in two stages: start with resolve_* and post_* for one endpoint, then move to ER Diagram + AutoLoad only when relationship wiring starts repeating across many models. The same ERD can also power GraphQL queries and MCP services.
```mermaid
flowchart TB
    business["**Business Model**<br/>- Entity Relationships<br/>- Aggregate Root Methods"]
    manual["**Manual Assembly**<br/>resolve / post / expose /<br/>collector ..."]
    graphql["**Auto Generation**<br/>GraphQL"]
    api["**Scenario**<br/>API Integration"]
    mcp["**Scenario**<br/>MCP Service"]
    ops["**Scenario**<br/>Query / Debug / Test /<br/>Admin UI"]
    business --> manual
    business --> graphql
    manual --> api
    graphql --> mcp
    graphql --> ops
```
We will reuse one example from start to finish:
- `Sprint` has many `Task`
- `Task` has one `owner`
- The API also wants derived fields such as `task_count` and `contributors`
The concepts appear in this order on purpose:
- `resolve_*`: fetch related data
- `post_*`: compute fields after nested data is ready
- `ExposeAs` / `SendTo`: pass data across layers when parent and child need to coordinate
- ER Diagram + `AutoLoad`: remove repeated relationship wiring once the model graph grows
If you only need to solve a few endpoint-level N+1 issues, stop after the Core API sections. ERD mode is useful, but it is not the entry point.
| Need | What you write | What the framework does |
|---|---|---|
| Load related data | `resolve_*` + `Loader(...)` | Batch lookups and map results back |
| Compute derived fields | `post_*` | Run after descendants are fully resolved |
| Share data across layers | `ExposeAs`, `SendTo`, `Collector` | Pass context down or aggregate data up |
| Reuse relationship declarations | ER Diagram + `AutoLoad` | Centralize relationship wiring for many models |
```bash
pip install pydantic-resolve
pip install pydantic-resolve[mcp]  # with MCP support
```

Start with the smallest useful case: each task has an `owner_id`, and you want an `owner` object on the response model.
```python
from typing import Optional
from pydantic import BaseModel
from pydantic_resolve import Loader, Resolver, build_object

class UserView(BaseModel):
    id: int
    name: str

async def user_loader(user_ids: list[int]):
    users = await db.query(User).filter(User.id.in_(user_ids)).all()
    return build_object(users, user_ids, lambda user: user.id)

class TaskView(BaseModel):
    id: int
    title: str
    owner_id: int
    owner: Optional[UserView] = None

    def resolve_owner(self, loader=Loader(user_loader)):
        return loader.load(self.owner_id)

tasks = [TaskView.model_validate(task) for task in raw_tasks]
tasks = await Resolver().resolve(tasks)
```

That is the core idea of the library:
- `owner` is missing data, so you describe how to fetch it.
- `user_loader` receives all requested `owner_id` values together.
- `Resolver().resolve(...)` walks the model tree and fills the field.
A useful mental model is: resolve_* means "this field needs data from outside the current node."
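If `build_object`'s role is unclear, here is a simplified pure-Python sketch of the same idea (illustrative only, not the library's implementation): the loader fetches rows in one batch, and the helper aligns them back with the requested keys.

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
K = TypeVar("K")

def build_object_sketch(items: list[T], keys: list[K],
                        key_fn: Callable[[T], K]) -> list[Optional[T]]:
    # Index the fetched rows by key, then emit one entry per requested key,
    # preserving request order; keys with no matching row become None.
    by_key = {key_fn(item): item for item in items}
    return [by_key.get(key) for key in keys]

# Rows can come back from the database in any order.
users = [{"id": 2, "name": "bob"}, {"id": 1, "name": "alice"}]
print(build_object_sketch(users, [1, 2, 3], lambda u: u["id"]))
# → [{'id': 1, 'name': 'alice'}, {'id': 2, 'name': 'bob'}, None]
```

This is why the loader body can return rows in any order: mapping each result back to the `owner_id` that asked for it is handled for you.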
Now add one more relationship: Sprint -> tasks. TaskView already knows how to load owner, so the resolver can keep walking the tree recursively.
```python
from typing import List
from pydantic_resolve import build_list

async def task_loader(sprint_ids: list[int]):
    tasks = await db.query(Task).filter(Task.sprint_id.in_(sprint_ids)).all()
    return build_list(tasks, sprint_ids, lambda task: task.sprint_id)

class SprintView(BaseModel):
    id: int
    name: str
    tasks: List[TaskView] = []

    def resolve_tasks(self, loader=Loader(task_loader)):
        return loader.load(self.id)

sprints = [SprintView.model_validate(sprint) for sprint in raw_sprints]
sprints = await Resolver().resolve(sprints)
```

Result: one query per loader, regardless of how many sprints or tasks you load.
This is why resolve_* is the best place to start. You can get value from the library before learning any advanced features.
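For one-to-many relations, `build_list` plays the same role as `build_object`, but groups rows instead of picking one per key. A simplified sketch of the idea (illustrative, not the library's code):

```python
from collections import defaultdict
from typing import Callable, TypeVar

T = TypeVar("T")
K = TypeVar("K")

def build_list_sketch(items: list[T], keys: list[K],
                      key_fn: Callable[[T], K]) -> list[list[T]]:
    # Group fetched rows by key; each requested key gets a (possibly empty) list.
    grouped: dict[K, list[T]] = defaultdict(list)
    for item in items:
        grouped[key_fn(item)].append(item)
    return [grouped.get(key, []) for key in keys]

tasks = [{"id": 10, "sprint_id": 1},
         {"id": 11, "sprint_id": 1},
         {"id": 12, "sprint_id": 2}]
print(build_list_sketch(tasks, [1, 2, 3], lambda t: t["sprint_id"]))
# → [[{'id': 10, 'sprint_id': 1}, {'id': 11, 'sprint_id': 1}], [{'id': 12, 'sprint_id': 2}], []]
```

Sprint 3 has no tasks, so it receives an empty list rather than being dropped, which keeps the output aligned with the input keys.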
post_* is the part that usually feels abstract at first. The simplest way to read it is this:
- Use `resolve_*` when the field needs external data.
- Use `post_*` when the field can be computed after the current subtree is already assembled.
In the same sprint example, task_count and contributor_names are not fetched from another table. They are derived from already resolved tasks and owner.
```python
class SprintView(BaseModel):
    id: int
    name: str
    tasks: List[TaskView] = []
    task_count: int = 0
    contributor_names: list[str] = []

    def resolve_tasks(self, loader=Loader(task_loader)):
        return loader.load(self.id)

    def post_task_count(self):
        return len(self.tasks)

    def post_contributor_names(self):
        return sorted({task.owner.name for task in self.tasks if task.owner})
```

Execution order for one sprint looks like this:
1. `resolve_tasks` loads the sprint's tasks.
2. Each `TaskView.resolve_owner` loads its owner.
3. `post_task_count` and `post_contributor_names` run after those nested fields are ready.
That timing is the key idea. post_* is not another way to fetch nested data. It is the place to finalize, summarize, or clean up data that is already available.
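Because `post_*` runs only after the subtree is assembled, each hook is plain Python over in-memory data. A standalone illustration using hypothetical, already-resolved task data (no library involved):

```python
# Hypothetical task data, shaped like what the resolver has already assembled
# by the time post_* hooks run: owners loaded, some possibly missing.
tasks = [
    {"id": 1, "title": "design", "owner": {"name": "alice"}},
    {"id": 2, "title": "build", "owner": {"name": "bob"}},
    {"id": 3, "title": "triage", "owner": None},
]

# The same derivations the post_* hooks above perform:
task_count = len(tasks)
contributor_names = sorted({t["owner"]["name"] for t in tasks if t["owner"]})

print(task_count, contributor_names)
# → 3 ['alice', 'bob']
```

No IO, no awaiting: the hooks only finalize what `resolve_*` already fetched.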
A short rule of thumb:
| Question | `resolve_*` | `post_*` |
|---|---|---|
| Needs external IO? | Yes | Usually no |
| Runs before descendants are ready? | Yes | No |
| Good for counts, sums, labels, formatting? | Sometimes | Yes |
| Return value gets resolved again? | Yes | No |
post_* can also accept context, parent, ancestor_context, and collector, but you do not need those to understand the basic pattern.
Most users can skip this on the first read. Reach for these tools only when parent and child nodes need to coordinate without hard-coding references to each other.
- `ExposeAs`: send ancestor data downward
- `SendTo` + `Collector`: send child data upward
```python
from typing import Annotated
from pydantic_resolve import Collector, ExposeAs, SendTo

class SprintView(BaseModel):
    id: int
    name: Annotated[str, ExposeAs('sprint_name')]
    tasks: List[TaskView] = []
    contributors: list[UserView] = []

    def resolve_tasks(self, loader=Loader(task_loader)):
        return loader.load(self.id)

    def post_contributors(self, collector=Collector('contributors')):
        return collector.values()

class TaskView(BaseModel):
    id: int
    title: str
    owner_id: int
    owner: Annotated[Optional[UserView], SendTo('contributors')] = None
    full_title: str = ""

    def resolve_owner(self, loader=Loader(user_loader)):
        return loader.load(self.owner_id)

    def post_full_title(self, ancestor_context):
        return f"{ancestor_context['sprint_name']} / {self.title}"
```

Use this only when the shape of the tree matters:
- A child needs ancestor context, such as a sprint name or permissions.
- A parent needs to aggregate values from many descendants, such as all contributors or tags.
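The collector mechanics can be pictured with a tiny stand-in class (illustrative only, not the library's internals): descendants push values as they finish, and the ancestor's `post_*` hook reads the accumulated values once the subtree is done.

```python
class SimpleCollector:
    # Stand-in for Collector('contributors'): an accumulator scoped to one ancestor node.
    def __init__(self) -> None:
        self._items: list[dict] = []

    def add(self, value: dict) -> None:
        self._items.append(value)

    def values(self) -> list[dict]:
        return self._items

collector = SimpleCollector()

# Each TaskView whose owner field carries SendTo('contributors')
# effectively pushes its resolved owner upward (None owners are skipped).
for owner in [{"name": "alice"}, {"name": "bob"}, None]:
    if owner is not None:
        collector.add(owner)

# SprintView.post_contributors then reads the accumulated values.
print([o["name"] for o in collector.values()])
# → ['alice', 'bob']
```

The real `Collector` also handles scoping (which ancestor receives which values) and deduplication options; this sketch only shows the push-up / read-after pattern.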
Up to this point, the Core API is enough. Stay there until relationship declarations start repeating across many response models.
A common signal is when you see the same relation described again and again:
- `TaskCard.resolve_owner`
- `TaskDetail.resolve_owner`
- `SprintBoard.resolve_tasks`
- `SprintReport.resolve_tasks`
At that point, the problem is no longer "how do I load this field?" but "where is the source of truth for relationships?"
| Question | Hand-written Core API | ER Diagram + AutoLoad |
|---|---|---|
| First endpoint | Faster | Slower |
| Upfront setup | Low | Medium |
| Reusing the same relation in many models | Repetitive | Centralized |
| Changing a relationship later | Update many `resolve_*` methods | Update one ERD declaration |
| GraphQL / MCP generation | Separate work | Natural extension |
ERD mode asks for more discipline up front:
- Define entity classes.
- Declare relationships explicitly.
- Create `AutoLoad` from the same `diagram` used by the resolver.
That setup cost is real. The payoff is that relationship knowledge moves into one place.
Here is the same Sprint -> Task -> User example after moving relationship wiring into an ER Diagram:
```python
from typing import Annotated, Optional
from pydantic import BaseModel
from pydantic_resolve import Relationship, base_entity, config_global_resolver

BaseEntity = base_entity()

class UserEntity(BaseModel, BaseEntity):
    id: int
    name: str

class TaskEntity(BaseModel, BaseEntity):
    __relationships__ = [
        Relationship(fk='owner_id', name='owner', target=UserEntity, loader=user_loader)
    ]
    id: int
    title: str
    owner_id: int

class SprintEntity(BaseModel, BaseEntity):
    __relationships__ = [
        Relationship(fk='id', name='tasks', target=list[TaskEntity], loader=task_loader)
    ]
    id: int
    name: str

diagram = BaseEntity.get_diagram()
AutoLoad = diagram.create_auto_load()
config_global_resolver(diagram)

class TaskView(TaskEntity):
    owner: Annotated[Optional[UserEntity], AutoLoad()] = None

class SprintView(SprintEntity):
    tasks: Annotated[list[TaskView], AutoLoad()] = []
    task_count: int = 0

    def post_task_count(self):
        return len(self.tasks)
```

Compared with the Core API version:
- `resolve_owner` disappears.
- `resolve_tasks` disappears.
- The relationship definitions live in one place.
- `post_*` still works exactly the same.
If you want to hide internal FK fields such as owner_id, add DefineSubset on top of the ERD setup:
```python
from pydantic_resolve import DefineSubset

class TaskSummary(DefineSubset):
    __subset__ = (TaskEntity, ('id', 'title'))

    owner: Annotated[Optional[UserEntity], AutoLoad()] = None
```

Once ERD mode makes sense conceptually, you can let the ORM describe the relationships for you and import them into the application-layer ERD.
```python
from pydantic_resolve import ErDiagram
from pydantic_resolve.integration.mapping import Mapping
from pydantic_resolve.integration.sqlalchemy import build_relationship

entities = build_relationship(
    mappings=[
        Mapping(entity=SprintEntity, orm=SprintORM),
        Mapping(entity=TaskEntity, orm=TaskORM),
        Mapping(entity=UserEntity, orm=UserORM),
    ],
    session_factory=session_factory,
)

diagram = ErDiagram(entities=[]).add_relationship(entities)
AutoLoad = diagram.create_auto_load()
config_global_resolver(diagram)
```

`build_relationship` supports SQLAlchemy, Django, and Tortoise ORM. This is a good later optimization when your ORM metadata is already stable and you want to avoid duplicating relationship declarations.
- Start with hand-written `resolve_*` and `post_*` on one endpoint.
- Move repeated relations into ERD when multiple models need the same wiring.
- Let `build_relationship()` read ORM metadata when the ORM is already the source of truth.
ERD mode is a good fit when:
- The project has 3+ related entities reused across multiple response models.
- The team wants one shared place to inspect and discuss relationships.
- You want GraphQL or MCP generated from the same model graph.
- You want to hide FK fields while keeping relationship definitions centralized.
Core API is usually enough when:
- You only have a few loading requirements.
- You want each endpoint to stay maximally explicit.
- The response shape is still changing quickly.
Generate GraphQL schema from ERD and execute queries:
```python
from pydantic_resolve.graphql import GraphQLHandler

handler = GraphQLHandler(diagram)
result = await handler.execute("{ users { id name posts { title } } }")
```

Expose GraphQL APIs to AI agents (requires `pip install pydantic-resolve[mcp]`):
```python
from pydantic_resolve import AppConfig, create_mcp_server

mcp = create_mcp_server(apps=[AppConfig(name="blog", er_diagram=diagram)])
mcp.run()
```

Interactive ERD exploration with fastapi-voyager:
```python
from fastapi_voyager import create_voyager

app.mount('/voyager', create_voyager(app, er_diagram=diagram))
```

| Feature | GraphQL | pydantic-resolve |
|---|---|---|
| N+1 Prevention | Manual DataLoader setup | Built-in automatic batching |
| Type Safety | Separate schema files | Native Pydantic types |
| Learning Curve | Steep (Schema, Resolvers, Loaders) | Gentle (just Pydantic) |
| Debugging | Complex introspection | Standard Python debugging |
| Integration | Requires dedicated server | Works with any framework |
| Query Flexibility | Any client can query anything | Explicit API contracts |
- Full Documentation
- Example Project
- Live Demo
- Live Demo - GraphQL
- API Reference
MIT License
tangkikodo (allmonday@126.com)