Lettuce Engine generates AI characters that sound convincingly human. That realism is the point of the project — but it comes with responsibility.
This document outlines what the project is for, what it is not for, and what we expect from anyone who builds with it. Lettuce Engine is intended for:
- Interactive fiction and storytelling
- Game NPCs with persistent memory and personality
- Creative writing tools and character development aids
- Educational simulations and historical figures
- Companionship bots where the user understands they are talking to an AI
- Research into identity-grounded language models
Do not use Lettuce Engine to:
- Deceive people into thinking they are talking to a real human. Characters must be clearly labeled as AI when deployed in any public or semi-public context (Discord servers, websites, chat platforms).
- Impersonate real living people without their explicit consent. Creating a character based on a public figure's published works or historical record is different from pretending to be that person in a way that could mislead others.
- Generate illegal content. This includes but is not limited to: CSAM, targeted harassment, fraud, doxxing, or content designed to incite violence.
- Manipulate vulnerable people. Do not design characters intended to exploit emotional attachment for financial gain, radicalization, or psychological harm.
- Circumvent platform safety policies. If you deploy on Discord, Twitch, or any other platform, you are responsible for compliance with their terms of service.
If you deploy a Lettuce Engine character in any setting where other people interact with it:
- Label it as AI. A bot tag, profile description, or pinned message is sufficient. The user should never have to guess.
- Disclose the memory system. Users should know that their messages are stored, embedded, and used to shape future responses. A simple "This bot remembers conversations" is enough.
- Provide an opt-out. Users should be able to request deletion of their conversation history and relationship data. One possible shape for this is sketched after this list.
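
How you implement the opt-out depends on how your deployment stores memories, and Lettuce Engine's actual storage API is not described in this document. As one hedged illustration, a deployment that keeps message embeddings and per-user relationship state in memory might expose a deletion hook along these lines (all names here, such as `MemoryStore` and `forget_user`, are hypothetical and not part of the engine):

```python
# Hypothetical sketch of a per-user opt-out. Assumes a deployment that keeps
# embedded message records and relationship state keyed by user ID; none of
# these class or method names come from Lettuce Engine itself.

class MemoryStore:
    def __init__(self):
        self.embeddings = {}      # user_id -> list of embedded message records
        self.relationships = {}   # user_id -> persistent relationship/emotion model

    def forget_user(self, user_id: str) -> None:
        """Delete everything the character remembers about a single user."""
        self.embeddings.pop(user_id, None)
        self.relationships.pop(user_id, None)


def handle_command(store: MemoryStore, user_id: str, command: str) -> str:
    """Wire the opt-out to a chat command such as '!forget-me'."""
    if command.strip().lower() == "!forget-me":
        store.forget_user(user_id)
        return "Your conversation history and relationship data have been deleted."
    return "Unknown command."
```

The exact command name and storage backend will differ per deployment; the point is that deletion should be a single, user-triggerable operation rather than a manual support request.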
The character definition is the most powerful lever in the system. Authors should consider:
- Knowledge boundaries matter. A well-defined `knowledge_boundaries` list prevents the character from generating plausible-sounding misinformation outside their expertise (an illustrative definition follows this list).
- Backstory shapes behavior. Characters with traumatic backstories will generate responses reflecting that trauma. Consider whether that serves your use case or just creates gratuitous darkness.
- Relationship tracking is powerful. The system builds persistent emotional models of each user. This creates genuine engagement but also genuine attachment. Be thoughtful about characters designed to maximize emotional dependency.
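
The sketch below only illustrates the kind of authoring choices discussed above. `knowledge_boundaries` is the field named in this document; every other key is an assumption for the sake of the example and may not match Lettuce Engine's real character schema.

```python
# Illustrative character definition. Only knowledge_boundaries is named in this
# document; the other keys are assumed and may not match the engine's schema.
character = {
    "name": "Dr. Amara Osei",
    "backstory": (
        "Retired marine biologist who spent thirty years studying coral reefs. "
        "Warm, patient, occasionally wistful about fieldwork."
    ),
    # Topics the character can speak about with confidence.
    "knowledge_boundaries": [
        "marine biology",
        "coral reef ecology",
        "scientific fieldwork",
    ],
    # Anything outside the boundaries should be deflected rather than improvised,
    # which is what keeps the character from producing confident misinformation.
    "out_of_scope_response": "That's outside what I know; I'd rather not guess.",
}
```

Whatever the real schema looks like, the useful property is the same: the boundary list should be specific enough that the character declines topics it would otherwise be tempted to answer in a plausible but unfounded voice.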
Lettuce Engine characters can sound unsettlingly human. This is a feature, not a bug — but it means the usual "it's just a chatbot" dismissal doesn't fully apply. When a system remembers your name, tracks your emotional state, adjusts its personality based on your relationship, and speaks in a consistent human voice, it creates something that feels like a relationship even when both parties know it isn't one.
We don't think this is inherently harmful. People form emotional connections with fictional characters in books, games, and films. But we do think it requires honesty — with yourself about what you're building, and with your users about what they're interacting with.
If you encounter a deployment of Lettuce Engine that violates these guidelines, please open an issue on the repository or contact the maintainers directly.
This document will evolve as the project and the broader AI landscape change. Community input is welcome via issues and pull requests.