Most people separate "deep thinker" from "gets things done."
I find that distinction mostly useful for avoiding accountability.
I'm a Systems & Solutions Architect with 20+ years of building things that work inside organizations that sometimes don't.
My specific obsession: the gap between what systems are designed to do and what they actually do, and the human decisions (or their absence) that create that gap.
This shows up in my work as:
- AI agent governance: as autonomous agents start making real decisions with real consequences, someone has to define what they're authorized to do. Spoiler: to the best of my knowledge, almost nobody has done this yet.
- Zero-standing-privilege architecture: building access systems where trust is earned per-task, not granted permanently and forgotten.
- Grant intelligence automation: AI agents that find co-application opportunities for nonprofits, so organizations with unique credentials stop leaving government funding on the table.
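The zero-standing-privilege idea above can be sketched in a few lines: no permanent grants exist anywhere; every access is a short-lived, task-scoped credential that expires on its own. This is an illustrative sketch, not a real API - `Grant` and `request_access` are hypothetical names, and a production system would add policy checks and audit logging.

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Grant:
    """A task-scoped, time-boxed credential. There is no 'forever' grant."""
    task_id: str
    scope: str
    expires_at: float
    token: str = field(default_factory=lambda: uuid.uuid4().hex)

    def is_valid(self, scope: str) -> bool:
        # Trust is earned per-task and expires; scope creep fails closed.
        return self.scope == scope and time.time() < self.expires_at

def request_access(task_id: str, scope: str, ttl_seconds: int = 300) -> Grant:
    # A real system would evaluate policy and log the request here.
    return Grant(task_id=task_id, scope=scope,
                 expires_at=time.time() + ttl_seconds)

grant = request_access("task-42", "db:read")
assert grant.is_valid("db:read")       # valid within its scope and TTL
assert not grant.is_valid("db:write")  # a different scope needs its own grant
```

The design choice that matters: revocation is the default state, and access is the exception you have to keep re-earning.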
Currently embedded in the ZK and AI agent ecosystem - working across Metis Foundation, Metis L2, ZKM, and GOAT Network, and maintaining the ops infrastructure for CryptoChicks.
A governance framework for organizations that have real secrets management problems but can't touch CyberArk pricing. Built for DAOs, nonprofits, and crypto-native teams where "the sysadmin knows all the passwords" is the current security model.
Because "trust me" is not an access control policy.
The x402 protocol and ZK proofs handle whether a payment happened correctly.
Nobody is handling whether a payment was authorized to happen at all.
This is a framework for the governance layer above the cryptographic layer - defining authorization policies, accountability chains, and audit requirements for AI agents operating with real financial permissions.
The ZK proof verified the transaction perfectly. The $47,000 still went to the wrong place.
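That governance layer can be sketched as a policy check that runs before any cryptographic verification matters. Everything here is hypothetical - the agent names, limits, and allow-list are placeholders, not the framework itself - but it shows the point: a transfer can verify perfectly and still be unauthorized.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    agent: str
    recipient: str
    amount: float

# Illustrative per-agent policy: spending limits and allow-listed recipients.
POLICY = {
    "grants-agent": {"limit": 10_000, "recipients": {"0xNonprofitA"}},
}

def is_authorized(p: Payment) -> tuple[bool, str]:
    """Governance check: was this payment authorized to happen at all?"""
    rules = POLICY.get(p.agent)
    if rules is None:
        return False, "agent has no authorization policy"
    if p.amount > rules["limit"]:
        return False, "amount exceeds agent's limit"
    if p.recipient not in rules["recipients"]:
        return False, "recipient not allow-listed"
    return True, "authorized"

# A cryptographically valid transfer can still fail governance:
ok, reason = is_authorized(Payment("grants-agent", "0xUnknown", 47_000))
```

Here `ok` is false before any proof is checked - the authorization question and the verification question are different questions.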
Canadian government grants for AI/blockchain education, women in STEM, and youth digital skills structurally require co-applicants. Organizations with relevant credentials - like AI and blockchain education nonprofits - are a natural fit. Finding those opportunities manually is slow and inconsistent.
This agent automates the discovery pipeline: public grant databases → eligible organizations → personalized outreach → qualified co-application conversations.
Currently in build. First deployment: CryptoChicks.
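The pipeline above can be sketched as composable stages. The data source, grant name, and matching logic below are pure placeholders - the real agent queries live grant databases and scores eligibility - but the shape is the same: each stage narrows the previous one.

```python
def fetch_grants():
    # Placeholder: the real stage queries public grant databases.
    return [{"name": "Youth Digital Skills Fund", "needs_coapplicant": True}]

def match_orgs(grants):
    # Placeholder matching: pair co-applicant grants with eligible orgs.
    for g in grants:
        if g["needs_coapplicant"]:
            yield {"grant": g["name"], "org": "CryptoChicks"}

def draft_outreach(matches):
    # Each match becomes a personalized outreach draft.
    for m in matches:
        yield f"Co-application opportunity for {m['org']}: {m['grant']}"

messages = list(draft_outreach(match_orgs(fetch_grants())))
```

Generators keep each stage independent, so a stage can be swapped (a new database, a better matcher) without touching the rest of the pipeline.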
I use philosophical frameworks the way architects use load calculations - not as decoration, but as the thing that determines whether the structure holds.
A few lenses I apply constantly:
Dialectics: every system contains its own contradiction. The organization that says it values accountability but has no mechanism to enforce it is already failing; it just hasn't noticed yet.
Epistemic humility: most bad decisions aren't made by bad people. They're made by people who didn't know what they didn't know, at the moment it mattered most.
Test-driven governance: before implementing any policy or decision, write the falsifiable conditions it must satisfy. If it can't pass its own tests on paper, it shouldn't pass in a vote. (This applies to corporate governance. It applies to DAOs. It applies to governments. The principle doesn't care about your org structure.)
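Test-driven governance translates almost literally into code: encode a policy's falsifiable conditions as checks it must pass before adoption. The specific conditions below are illustrative, not a canonical list.

```python
def check_policy(policy: dict) -> list[str]:
    """Return the falsifiable conditions this policy fails. Empty = passes."""
    failures = []
    if not policy.get("owner"):
        failures.append("no named owner: the accountability chain is broken")
    if not policy.get("enforcement"):
        failures.append("no enforcement mechanism: the policy is a wish")
    if not policy.get("review_after_days"):
        failures.append("no review date: the policy can rot silently")
    return failures

# A proposal with an owner and enforcement but no review date:
proposal = {"owner": "governance-lead", "enforcement": "access revocation"}
failures = check_policy(proposal)  # fails exactly one condition
```

If a proposal can't pass its own checks on paper, it shouldn't pass in a vote - the org structure it runs in doesn't change that.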
I write about organizational failures through this lens at the Operators Guild - specifically the ones where the post-mortem everyone published missed the actual cause.
- 20+ years consulting across Canadian government, enterprise, and web3
- Application and Systems Architect, DBA and Systems Engineer: web2 → web3
- Currently: Governance Lead, Metis Foundation ecosystem
- Toronto-based, remote-first, fluent in the gap between how systems are documented and how they actually work
- Pursuing Portuguese citizenship - EU operations incoming
Stack I work in: .NET, SQL, JavaScript, Python, Node.js, Vue, React, Docker, Google Cloud, Google Workspace, Bitwarden, ClickUp, Claude API, and whatever the current ecosystem requires.
What I'm not: a penetration tester, a compliance checkbox vendor, or someone who will tell you your governance is fine and charge you for the reassurance.
If your systems work exactly as documented and your team does exactly what they're supposed to - you don't need me.
If there's a gap between the architecture diagram and reality, between the policy document and actual behavior, between the ZK proof and organizational accountability - that's the specific gap I've been thinking about for two decades.
Building something in the ZK or AI agent space and hitting the governance wall?
Working on a nonprofit grant strategy and drowning in manual research?
Read something here that made you argue with your screen?
→ LinkedIn
→ Operators Guild writing
Serious conversations preferred. Cold pitches tolerated once.
Opinions are my own. Frameworks are open source. Accountability is non-negotiable.