"The world of AI prompting is currently a jungle. There are many beast trails, but no paved roads. I got tired of the chaos, so I built a highway."
You are likely here because you are tired of:
- "Magic spells" that work today but break tomorrow.
- Prompts that look like a stream of consciousness.
- Debugging AI hallucinations with no clear methodology.
This repository is not a collection of "tips and tricks." It is a Syntax Standard (RFC) for Prompt Engineering.
Just as we have HTTP for the web and PEP8 for Python, we need a standard for communicating with LLMs.
Read the Standards (v0.3.0) — Pre-release
- Markdown-Native: Use the format LLMs were trained on.
- The "Markdown + XML" Hybrid: Use Markdown for instructions, XML for data boundaries.
- Explicit Variable Declaration: `{variable}: """value"""`.
- CO-STAR Framework: Enterprise-grade 6-element prompt structuring (Context, Objective, Style, Tone, Audience, Response).
- Structured Reasoning: SCoT (Structured Chain of Thought), CoVe (Chain of Verification), CoD (Chain of Density).
- LLMOps: Version control, observability, evaluation testing, and governance frameworks for prompts.
This standard is open source (CC-BY-SA 4.0). It is designed to be evolved by the community.
See CONTRIBUTING.md to propose changes (PEPs).
I am a nameless senior architect who has spent years building complex systems. Currently, I coach engineers on AI implementation while developing an autonomous code-review agent in which 11 nodes collaborate organically.
I realized that 90% of "prompt engineering" problems are actually just syntax errors. We are trying to write code in natural language without a compiler.
This standard is my attempt to provide that compiler. It is the "Paved Road" through the jungle.
Walk with me.
— The Architect