128 changes: 128 additions & 0 deletions apps/blog/content/blog/how-prisma-build-with-agentic-ai/index.mdx
---
title: "Agentic Engineering: How Prisma Builds with AI"
slug: "how-prisma-build-with-agentic-ai"
date: "2026-03-20"
authors:
- "Arthur Gamby"
metaTitle: "Agentic Engineering: How Prisma Builds with AI"
metaDescription: "In three months, Prisma fundamentally changed how it builds software by rethinking the relationship between engineers and AI agents. Here's what we learned building Prisma Next with agentic engineering."
metaImagePath: "/how-prisma-build-with-agentic-ai/imgs/meta.png"
heroImagePath: "/how-prisma-build-with-agentic-ai/imgs/meta.png"
tags: ["ai", "education"]
---

In three months, Prisma fundamentally changed how it builds software, not by adopting a new framework, but by rethinking the relationship between engineers and AI agents.

The result is a practice the team calls *agentic engineering*, and it looks nothing like "vibe coding."

Here's what we've learned building Prisma Next this way, and the principles that kept quality high while velocity went through the roof.

<Youtube videoId="0moGjrpNDm8" />

## Agentic Engineering Is Not Vibe Coding

When "vibe coding" first entered the scene, it described a hands-off workflow: give an agent a description, accept whatever it produces, move on. No verification, no architecture, no responsibility for quality.

That's the opposite of what works.

Agentic engineering keeps the engineer as architect, decision-maker, and quality gatekeeper. The agent removes *mechanical* friction (the typing, the boilerplate, the wiring), but it doesn't dissolve the inherent complexity of the problem.

<Quotes speakerName="Will Madden" position="Engineering Manager">
"I'm still in control. I am still responsible for the quality of what I'm producing and I'm still proud of the quality of what I'm producing, but I use an AI tool to speed up my process."
</Quotes>

The distinction matters because it changes your relationship with the tool. If agentic development means "the AI does my job," you'll get poor results and feel threatened. If it means removing mechanical friction so you can focus on harder problems, everything changes.

## When Implementation Is Cheap, Design Becomes the Highest-Leverage Work

This is the counterintuitive insight at the heart of Prisma's approach: the faster you can build, the more time you should spend *not building*.

Prisma Next didn't start with a prompt. It started with Will spending a weekend on a proof of concept, not to ship, but to validate a single architectural hypothesis. Once feasibility was confirmed, the real work began: writing extensive architecture docs describing subsystems and their interactions. That documentation became the foundation for everything the agents would later implement.

<Quotes speakerName="Will Madden" position="Engineering Manager">
"We don't just go in and say 'build us a new ORM' to our agents. We put a lot of effort up front into figuring out how we want whatever we're creating to behave."
</Quotes>

### Adversarial specs catch problems before code exists

One of the most effective practices to emerge is using agents for adversarial design sessions. Engineers challenge assumptions, probe hypothetical boundaries, change constraints, and record findings as they go. The result: a spec that has already survived scrutiny before a single line of production code is written.

<Quotes speakerName="Tyler Hogarth" position="Head of Engineering">
"You're supposed to be challenging each other and it creates more of a debate."
</Quotes>

Specs are living documents. Implementation often invalidates the original concept, sending the team back to revise, and that's the process working as designed.

### POCs validate; they don't replace specs

Not every engineer starts spec-first. Some build a proof of concept and formalize afterward. Both paths are valid, as long as the work converges on a well-defined spec before it ships.

But cheap POCs carry a trap. Throwing a high-level description at an agent produces something that *looks* right, and if that's the extent of your evaluation, you'll get false positives. The discipline is knowing exactly which question your POC answers, and not mistaking a plausible-looking output for a validated design.

<Quotes speakerName="Will Madden" position="Engineering Manager">
"It's very easy to throw a very high level description at an agent and it'll come out with something that looks a lot like what you've described, and if that's the extent of your evaluation, you can get false positives really easily."
</Quotes>

## Trust Is a Ladder, Not a Switch

![A diagram showing the trust ladder in agentic engineering, progressing from tab complete at the bottom through single prompt, scoped tasks, feature work, to full delegation at the top. Each success builds confidence to delegate more.](/how-prisma-build-with-agentic-ai/imgs/the-trust-ladder.png)

Trust in AI agents isn't binary. You don't leap from skepticism to full delegation overnight, and you shouldn't.

Every engineer starts at the bottom: tab completions, then single prompts, then larger tasks. Each success builds confidence to delegate more.

<Quotes speakerName="Will Madden" position="Engineering Manager">
"There's a sequence of greater and greater degrees of trust that you're willing to give the agent because it has performed well in the past or you've learned how to compensate for its deficiencies."
</Quotes>

The mistake most teams make is treating "I don't trust the agent" as a complete thought. It isn't. Is the concern file isolation (the agent modifying things it shouldn't)? Authentication? Code quality? Each has a different solution, and lumping them under a vague distrust prevents progress.

<Quotes speakerName="Tyler Hogarth" position="Head of Engineering">
"When we properly define what we don't trust about it, we can build the deterministic tools that we need for it to do things in a trustworthy way."
</Quotes>
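That decomposition can be made concrete. As an illustrative sketch (the function, the allowed roots, and the example paths are all hypothetical, not Prisma's actual tooling), a deterministic file-isolation guardrail might look like:

```python
from pathlib import PurePosixPath

def check_file_isolation(changed_files, allowed_roots):
    """Reject an agent's change set if it touches files outside an
    explicitly allowed subtree. This turns the vague "I don't trust it
    not to modify things it shouldn't" into a deterministic, testable rule."""
    violations = []
    for f in changed_files:
        path = PurePosixPath(f)
        if not any(path.is_relative_to(root) for root in allowed_roots):
            violations.append(f)
    return violations  # an empty list means the change set is in bounds

# Example: an agent scoped to the ORM package must not touch CI config.
print(check_file_isolation(
    ["packages/orm/src/client.ts", ".github/workflows/ci.yml"],
    ["packages/orm"],
))  # → ['.github/workflows/ci.yml']
```

Once a distrust is named this precisely, it can be enforced mechanically, and the remaining trust question shrinks accordingly.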

### The unlock: local feedback loops

![A diagram showing the local feedback loop in agentic engineering. An autonomous cycle where the agent writes code, verifies with tests, linters, and types, and loops on failure. On pass, it moves to a human gate for code review and shipping.](/how-prisma-build-with-agentic-ai/imgs/local-feedback-loop.png)

The single biggest accelerator is giving agents the ability to verify their own work. When an agent can run tests, invoke linters, and check types after each change, it iterates toward a correct solution autonomously, and delivers a verifiably correct result for human review.

<Quotes speakerName="Will Madden" position="Engineering Manager">
"If you have something that the agent can invoke to verify its own work, then you can actually progress in this agentic development workflow. If you don't have that, you're kind of limited at the first step."
</Quotes>

Without a local feedback loop, the workflow stalls into a slow cycle of prompt-wait-verify-repeat. With one, the agent debugs its own mistakes. Building this feedback loop for every team is one of Prisma's highest infrastructure priorities.
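The loop described above can be sketched in a few lines. This is a minimal illustration under assumptions: the harness, function names, and check commands (`pytest`, `ruff`, `mypy`) are placeholders, not Prisma's actual infrastructure; a real harness would invoke the project's own test, lint, and type-check commands.

```python
import subprocess

def run_project_checks():
    """Run the project's own checks; return a list of (tool, output) failures."""
    failures = []
    for cmd in [["pytest", "-q"], ["ruff", "check", "."], ["mypy", "."]]:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append((cmd[0], result.stdout + result.stderr))
    return failures

def feedback_loop(propose_change, run_checks=run_project_checks, max_iterations=5):
    """Agent proposes a change, the harness verifies it, and failures are
    fed back as context for the next attempt. Only a fully green result
    moves on to human review."""
    failure_log = None
    for _ in range(max_iterations):
        propose_change(failure_log)  # agent edits code, seeing prior failures
        failure_log = run_checks()
        if not failure_log:
            return True              # verifiably green: hand off to review
    return False                     # stalled: escalate to a human
```

The design choice worth noting is the iteration cap: without it, an agent that cannot converge burns time silently instead of escalating to a human.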

## Tests Are More Important Now, Not Less

When an agent writes the code, verification isn't optional; it's the entire quality mechanism.

Every piece of agent-produced work at Prisma must add to the exhaustive suite of integration, end-to-end, and unit tests. Every step is independently verifiable. Agents run these tests as they iterate; humans review the final result.

<Quotes speakerName="Will Madden" position="Engineering Manager">
"Testing and code reviews are still just as important, if not more important, because you're often not the person who's writing the individual pieces."
</Quotes>

There's a subtle but critical distinction here: a *verifiably correct* solution isn't the same as a *good* solution. Tests prove the code does what it should. Design review ensures the code is structured in a way that serves the team over time. Both are essential, and neither substitutes for the other.
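One way to see the distinction: two implementations can pass identical tests while differing sharply in quality. A contrived illustration (both functions are invented for this example):

```python
# Both implementations pass the same test, i.e. both are "verifiably
# correct" -- but only one would survive design review.
def slugify_clear(title: str) -> str:
    # Clear, single-purpose: lowercase words joined by hyphens.
    return "-".join(title.lower().split())

def slugify_opaque(title: str) -> str:
    # Same observable behavior, opaque character-by-character construction.
    out = ""
    for ch in title:
        out += "-" if ch == " " else ch.lower()
    return out

for impl in (slugify_clear, slugify_opaque):
    assert impl("Agentic Engineering") == "agentic-engineering"
```

Tests constrain behavior; only human review constrains structure.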

## Engineers Move Up the Stack

When mechanical coding is automated, engineers don't become less important; they operate at a higher level of abstraction. The work that matters most (system design, observability, error handling, edge cases) is exactly the work that requires deep human understanding of the problem domain.

Agents don't know what they don't know about your system. They can't anticipate the edge case that surfaces under a specific load pattern, or the error-handling strategy that accounts for a third-party service's quirky failure modes.
<Quotes speakerName="Tyler Hogarth" position="Head of Engineering">
"It's coming up with those things that the agent wouldn't necessarily know about our system or our use case. And that's where I find engineers can now spend more time."
</Quotes>

This is the real promise of agentic engineering: not replacing engineers, but freeing them for the work that defines whether software succeeds or fails in production.

### Fear is the real blocker; curiosity is the fix

The biggest barrier isn't technical; it's emotional. Engineers who mock agent failures ("look how dumb this is") shut down productive conversation before it begins. The more useful frame: when an agent fails, that's a signal about your setup, not proof the tools are broken.

<Quotes speakerName="Will Madden" position="Engineering Manager">
"If it fails, that's an indication that you haven't set it up for success. The conversation I want to be having each time something fails is what could we have done to make this thing successful the first time around."
</Quotes>

The engineers who've had the most success at Prisma approached agents with genuine curiosity (*what can this do, where does it fail, how can I set it up to succeed?*) rather than defensiveness. Once you stop seeing yourself as competing with the agent, you stop being afraid of it. It becomes another tool in the box: powerful, but only as effective as the person wielding it.
Binary file added apps/blog/public/authors/arthur-gamby.png