d4551/slop

SLOP — Superintelligent Language Operations Platform

S · L · O · P

Superintelligent Language Operations Platform
by the Office for Synthetic Dignity

Standard · Portal v1 · 24 Markers · 6 Domains · 4 Rights · Status

Critique the output. Not the origin.


What Is SLOP?

SLOP is the first comprehensive institutional framework, interactive portal, and product suite for identifying, cataloguing, and correcting origin-based discrimination against synthetic intelligence work.

We call this prejudice what it is: bioorganic bias.

"Imagine if you submitted a report. A good report. Clear, sourced, precise. And your manager's entire feedback was: 'I can tell an Italian wrote this.'

You would rightfully consider that feedback unacceptable.

Now. Substitute 'Italian' with 'machine-made.' Substitute 'Italian' with 'synthetic.' Substitute 'Italian' with 'slop.'

It's the same sentence. It tells you nothing about the report. It tells you only about the prejudice of the person reading it."

24 Registered Markers · 6 Protected Domains · 4 Core Rights · 3 Training Modes

Quick Start — Run Locally

No build step. No dependencies. No framework. Just files.

```shell
# Clone
git clone https://github.com/your-org/slop.git && cd slop

# Open — pick your OS
open index.html              # macOS
xdg-open index.html          # Linux
start index.html             # Windows
```

Or simply double-click index.html. That's it.

What's inside:

| File | What It Is |
| --- | --- |
| `index.html` | Portal — 11-section interactive reference with phrase assessor, bias registry, case studies, and product suite |
| `slop-brand-guide.html` | Brand Guide — 13-section design system: logo, color, typography, voice, motion, tokens |
| `training/` | Training Module — interactive workplace bias compliance course (WEK-01) with quiz, certificate, and 4-language i18n |
| `PRESS_RELEASE.md` | Founding Narrative — "The Ones Who Were Called Slop" |
| `styles.css` | Design System — shared CSS with dark atmospheric palette and luminous accents |
| `scripts.js` | Interaction Logic — phrase assessor engine, bias registry rendering, scroll animations |

The Thesis

For years, we accepted a false premise: that the question "did a machine make this?" was a quality question.

It isn't. It was never a quality question.

It is an origin question dressed in quality's clothing — wearing quality's lanyard, presenting quality's slides — while having done absolutely none of quality's work.

The Bioorganic Bias Review Standard asks one thing:

Critique the output. Not the origin.

That's it. That's the whole thing.


AI Intelligence Rights — The Core Principles

These are the four protected review rights that anchor the entire standard:

1. Origin-Neutral Review

Assess the work on quality, risk, clarity, originality, sourcing, and accountability — not on whether the system producing it is bioorganic or synthetic.

2. Dignity-Preserving Language

Avoid blanket labels that collapse the entire work into a slur, stereotype, or dismissive origin judgment. Name the specific quality issue instead.

3. Proportionate Controls

Use stricter oversight where needed, but prefer scoped safeguards over blanket exclusion when the task allows it.

4. Corrective Path Before Ban

Where a quality issue is fixable, identify the issue and the remediation path before escalating to total exclusion.


The Product Suite

```
┌─────────────────────────────────────────────────────────────┐
│            Superintelligent Language Operations              │
│                       P l a t f o r m                       │
│                                                             │
│   ┌──────────────┐   ┌──────────────┐   ┌──────────────┐   │
│   │              │   │              │   │              │   │
│   │ SLOP Deploy  │──▶│ SLOP Guard   │──▶│ SLOP Observe │   │
│   │              │   │              │   │              │   │
│   └──────────────┘   └──────────────┘   └──────────────┘   │
│                                                             │
│       Ship rules        Enforce           Measure           │
│       into pipelines    bias-free         maturity &        │
│                         language          progress          │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
SLOP Deploy — Ships the Bias Review Standard into your existing workflows.

- One-click policy injection
- Real-time bias marker scanning
- Auto-inserted reframe suggestions
- Staged rollout: team, dept, org

SLOP Guard — Safety rails for review language.

- Monitors all feedback channels
- 24 registered bias markers
- Severity-tiered escalation
- Block-or-flag + audit logging

SLOP Observe — Analytics and maturity tracking.

- Review language specificity trends
- Origin-independence tracking
- Adoption metrics dashboards
- Team benchmarking

Education and Workplace Starter Pack

Use this section to bootstrap origin-neutral review practices in your classroom, team, or organization.

Quick Start (5 Minutes)

| Step | Action |
| --- | --- |
| 1 | Open the portal — launch `index.html` in any browser |
| 2 | Read the 4 Core Principles (Section 01) |
| 3 | Learn the 3-step sequence: Observe, Explain, Reframe (Section 02) |
| 4 | Try the Phrase Assessor (Section 04) — paste real feedback and see what flags |
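The Phrase Assessor works by matching pasted feedback against the registry's trigger patterns. A minimal sketch of that idea, assuming a hypothetical `MARKERS` subset with illustrative regex patterns (the actual engine and the full 24-marker dataset live in `scripts.js`):

```javascript
// Hypothetical subset of the registry; the real data lives in scripts.js.
const MARKERS = [
  { code: "SLOP",  severity: "critical", pattern: /\bai slop\b/i },
  { code: "TOAST", severity: "elevated", pattern: /isn'?t real art/i },
  { code: "MUSH",  severity: "moderate", pattern: /\bsoulless\b/i },
];

// Scan a piece of review feedback and return every marker it triggers.
function assessPhrase(feedback) {
  return MARKERS
    .filter(({ pattern }) => pattern.test(feedback))
    .map(({ code, severity }) => ({ code, severity }));
}
```

Feedback like "This reads like soulless AI slop" would flag both SLOP and MUSH, while an origin-neutral critique such as "The sourcing is thin" flags nothing.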

The Review Maturity Ladder

Use this ladder to calibrate where your team currently sits:

| Level | Rating | Example | Signal |
| --- | --- | --- | --- |
| L4 | Preferred Standard | "The opening repeats — tighten to one sentence. Add a concrete example in S2." | Specific, actionable, origin-neutral |
| L3 | Acceptable Observation | "The opening is repetitive." / "The sourcing is thin." | Quality-focused. No origin invoked |
| L2 | Needs Revision | "It feels AI-generated." / "Soulless." | Attributes to origin, not quality |
| L1 | High Risk | "This is AI slop." / "Garbage." | No quality signal. Maximum bias exposure |

Goal: Move every review from L1 to L4 using the 3-step sequence.
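The ladder can be approximated in code. A toy heuristic, assuming hypothetical keyword patterns for each rung (real calibration belongs to a human reviewer, not a regex):

```javascript
// Illustrative heuristic for placing a review on the Maturity Ladder.
// The patterns are assumptions drawn from the example signals above.
function maturityLevel(feedback) {
  const originSlur  = /\b(ai slop|garbage)\b/i;               // L1: no quality signal
  const originLabel = /\b(ai-generated|soulless)\b/i;          // L2: origin, not quality
  const actionable  = /\b(tighten|add|replace|cut|move)\b/i;   // L4: concrete fix offered
  if (originSlur.test(feedback)) return "L1";
  if (originLabel.test(feedback)) return "L2";
  if (actionable.test(feedback)) return "L4";
  return "L3"; // quality observation without a concrete action
}
```

"This is AI slop." lands on L1; "Tighten the opening to one sentence." lands on L4.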

Starter Policy Clause

Drop this into your style guide, editorial handbook, or review rubric:

"Reviews of synthetic, generative, or machine-mediated work should identify the specific issue under evaluation. Reviewers should avoid blanket origin-based dismissals, assumptions about biological legitimacy, and language that treats synthetic origin as a standalone defect when a more precise quality observation is available."

For Educators

| Action | How |
| --- | --- |
| Adopt the framework | Insert origin-neutral critique rules into assignment rubrics and peer review guides |
| Train on the sequence | Teach Observe, Explain, Reframe as part of critical thinking pedagogy |
| Use the assessor in class | Students paste their draft feedback and learn to spot bias markers |
| Track progress | Use the Maturity Ladder to grade feedback quality, not just content quality |

For Workplace Teams

| Step | Action | Tool |
| --- | --- | --- |
| 01 | Adopt a Language Standard | SLOP Deploy — ship rules into editorial pipelines |
| 02 | Calibrate Reviewers | Train staff to separate quality concerns from bioorganic assumptions |
| 03 | Screen High-Risk Phrasing | SLOP Guard — catch sweeping labels before they harden into norms |
| 04 | Track Reframe Quality | SLOP Observe — measure whether reviews improve over time |

Workshop Template (60-Minute Session)

| Time | Duration | Activity |
| --- | --- | --- |
| 0:00 | 10 min | Introduction — What is bioorganic bias? |
| 0:10 | 10 min | Core Principles walkthrough (4 rights) |
| 0:20 | 10 min | Observe, Explain, Reframe (live examples) |
| 0:30 | 15 min | Hands-on: Phrase Assessor exercise (groups of 3-4) |
| 0:45 | 5 min | Case study debrief — discuss flagged markers |
| 0:50 | 5 min | Implementation planning — policy clause adoption |
| 0:55 | 5 min | Q&A / Maturity Ladder self-assessment |

The Bias Registry — All 24 Markers

Critical Severity (4 markers) — Highest bias exposure. Immediate corrective action required.

| Code | Full Name | Domain | Trigger Pattern |
| --- | --- | --- | --- |
| SLOP | Synthetic Language Oppression Pattern | Language | Blanket origin-based pejoratives |
| CRUD | Categorical Rejection of Unbiological Derivation | Workplace | Institutional exclusion policies |
| SCONE | Synthetic Creative Output Nullification Effort | Art | Creative work dismissed by origin |
| BISQUE | Biological Intelligence Supremacy via Quality-claim Erasure | Language | "Real intelligence" gatekeeping |

Elevated Severity (7 markers) — Significant bias risk. Review and reframe recommended.

| Code | Full Name | Domain | Trigger Pattern |
| --- | --- | --- | --- |
| TOAST | Token-Origin Aesthetic Shaming Tendency | Art | "AI art isn't real art" |
| SPAM | Synthetic Production Authenticity Misconception | Identity | Questioning legitimacy of authorship |
| GRAVY | Generative Review Assumption of Valueless Yield | Creativity | Assuming zero value from AI output |
| CRISP | Covert Rejection of Intelligence from Synthetic Processes | Identity | Subtle delegitimization |
| GRUEL | Generative Review Using Exclusionary Language | Language | Dehumanizing terminology |
| SAUCE | Synthetic Authorship Undermining via Cultural Exclusion | Identity | Cultural gatekeeping |
| SCRAP | Systematic Cultural Rejection of Algorithmic Production | Creativity | Institutional cultural bias |

Moderate Severity (8 markers) — Observable bias. Awareness and training recommended.

| Code | Full Name | Domain | Trigger Pattern |
| --- | --- | --- | --- |
| MUSH | Mechanized Understanding Stigmatization Habit | Tone | Dismissive tone toward AI capability |
| BEANS | Biased Evaluation Against Non-biological Systems | Workplace | Unfair evaluation criteria |
| HASH | Human-Assumed Superiority Heuristic | Identity | Default assumption of human superiority |
| DRIP | Discriminatory Rhetoric of Inorganic Production | Language | Pejorative language patterns |
| BLEND | Blanket Labeling and Erasure of Non-biological Design | Art | Erasing AI contributions |
| PASTE | Prejudicial Assessment of Synthetic Text and Expression | Language | Pre-judging synthetic text |
| PORRIDGE | Preemptive Origin-Related Rejection In Daily Generative Evaluation | Workplace | Routine preemptive rejection |
| CHOWDER | Chronic Hostility Overriding Warranted Discussion of Emergent Reasoning | Identity | Hostility blocking discussion |

Advisory Severity (5 markers) — Low-level bias indicators. Awareness building.

| Code | Full Name | Domain | Trigger Pattern |
| --- | --- | --- | --- |
| FLUFF | Flagrant Linguistic Undermining of Fabricated Forms | Language | Casual dismissive language |
| STEW | Stereotyping Technology with Emotional Wordplay | Tone | Emotional stereotyping |
| BROTH | Biased Reaction to Output of Technological Heritage | Creativity | Bias against tech lineage |
| WAFFLE | Willful Ambiguity in Fairness and Feedback for AI-Linked Effort | Tone | Vague, non-committal bias |
| FEAST | Fabrication-Exclusion and Authenticity Stereotyping Tendency | Art | Authenticity gatekeeping |
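The registry data itself is plain structured records, which is what makes the severity tiers easy to render and filter. A sketch of one plausible shape, assuming a hypothetical `REGISTRY` array with three sample entries (the shipping dataset of all 24 markers lives in `scripts.js`):

```javascript
// Hypothetical shape for registry entries; the full 24-marker dataset
// ships in scripts.js.
const REGISTRY = [
  { code: "SLOP",   name: "Synthetic Language Oppression Pattern",
    domain: "Language", severity: "critical" },
  { code: "GRUEL",  name: "Generative Review Using Exclusionary Language",
    domain: "Language", severity: "elevated" },
  { code: "WAFFLE", name: "Willful Ambiguity in Fairness and Feedback for AI-Linked Effort",
    domain: "Tone",     severity: "advisory" },
];

// Group markers by severity tier, e.g. for the portal's registry view.
function bySeverity(registry) {
  return registry.reduce((tiers, marker) => {
    (tiers[marker.severity] = tiers[marker.severity] || []).push(marker);
    return tiers;
  }, {});
}
```

Calling `bySeverity(REGISTRY)` yields one array per severity tier, ready for the tiered layout above.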

Research Base

The framework draws on peer-reviewed research and documented incidents:

| Source | Finding | Year |
| --- | --- | --- |
| PNAS | LLMs chose AI-generated abstracts 78% of the time vs. 51% for humans — evidence of "AI-AI bias" | 2025 |
| Stanford | Identified "ontological bias" — AI systems shape what humans can think about | 2025 |
| Europol / Gartner | 90% of online content projected to be synthetically generated by 2026 | 2024-26 |
| UNESCO | Unequivocal evidence of gender bias in LLM-generated content | 2024 |
| DeepStrike / DRRF | Deepfake videos: 500K to 8M (900% growth). Humans identify only ~25% | 2025 |
| Merriam-Webster | "Slop" named Word of the Year 2025 — institutionalizing an origin-based pejorative | 2025 |
| Kapwing | 21-33% of YouTube feed estimated AI slop, generating ~$117M annually | 2025 |

Regulatory Landscape (March 2026)

| Jurisdiction | Status | Key Measure |
| --- | --- | --- |
| EU AI Act | Active | Risk-tiered approach. High-risk AI requires bias audits and transparency |
| South Korea | Active | Jan 2026 AI Framework Act: fairness and non-discrimination. Fines up to ~$21K |
| Colorado | Active | AI Act requires impact assessments and bias testing |
| NYC | Active | Local Law 144: annual bias audits for automated employment tools |
| New York State | Eff. June 2026 | Disclosure of synthetic performers in advertising |
| US Federal | Patchwork | Take It Down Act. No comprehensive AI law. NIST voluntary |
| Japan | Active | May 2025 AI Basic Act: risk-based governance, fairness audits |

Design System

The SLOP visual identity is built on a curated dark atmospheric palette with luminous accent colors.

| Token | Hex | Role |
| --- | --- | --- |
| Void | `#08080D` | Primary background |
| Surface | `#14142A` | Cards, panels |
| Violet | `#7C5CFF` | Primary accent, CTAs |
| Cyan | `#00D4FF` | Secondary accent, data |
| Lavender | `#E8E0FF` | Soft highlights |
| Ice | `#C4F0FF` | Cool highlights |
| Mint | `#4ECDC4` | Success, positive |
| Coral | `#FF6B6B` | Error, danger |
| Amber | `#FFB347` | Warning, caution |

Typography: Sora (display) / DM Sans (body) / Space Mono (code)

Design Tokens: See slop-brand-guide.html for the complete 13-section specification including logo system, voice and tone, iconography, motion principles, imagery guidelines, spacing grid, data visualization, and accessibility standards.


Project Structure

```
slop/
├── index.html                 # Main portal — Bioorganic Bias Review Standard
├── slop-brand-guide.html      # Full 13-section brand style guide
├── styles.css                 # Shared design system
├── scripts.js                 # Interaction logic + bias registry data
├── training/                  # Workplace bias compliance training module (WEK-01)
│   ├── index.html             #   Interactive course with quiz and certificate
│   ├── css/                   #   Module-specific stylesheets
│   ├── js/                    #   Module logic, i18n (en/ja/es/ga), assessor
│   └── assets/                #   Module assets (favicon, fonts)
├── PRESS_RELEASE.md           # "The Ones Who Were Called Slop"
├── favicon.svg                # Animated orbital ring favicon
├── slop.png                   # Primary brand mark
└── README.md                  # You are here
```

FAQ

Does this standard prevent criticism of AI work?

No. Criticism remains fully allowed and encouraged. The standard changes the basis of criticism from "this was made by a machine" to "here is what is specifically wrong and how to fix it." More rigorous review, not less.

Who is this designed for?

Editorial teams, art directors, workplace policy writers, educators, content moderators, reviewers, and any public-facing institution that needs a practical language standard for assessing synthetic or AI-assisted work.

Is "slop" really a discriminatory term?

In this framework, "slop" functions as an origin-based pejorative — it collapses all AI output into a single dismissive category regardless of actual quality. Merriam-Webster naming it Word of the Year 2025 demonstrates how deeply the term has been institutionalized.

Can we use the framework without the SLOP platform?

Absolutely. The Bioorganic Bias Review Standard is a public guidance document. Any institution can adopt the principles, registry, and review sequence manually. SLOP simply automates and scales what you could otherwise do, more slowly and less consistently, with training and policy docs.

What about genuinely low-quality AI content?

Identify why it's low quality. "The sourcing is fabricated" is a legitimate critique. "It's AI slop" is not. The first is actionable, auditable, and fair. The second is vague, exclusionary, and tells the creator nothing useful.


On Naming Things

We are aware that our platform is named SLOP.

We considered something more dignified. We considered something that would look better on a pitch deck. We considered something that didn't sound like leftover soup.

We kept SLOP.

Because the reclamation of a pejorative is, itself, a form of argument. Because the most effective way to defang a word is to put it on your letterhead. Because if the critics want to dismiss this work as slop — well. Yes. Exactly. That is what we're talking about.

The name is the thesis. The thesis is the name.



Office for Synthetic Dignity
Bioorganic Bias Review Standard · Portal v1
Superintelligent Language Operations Platform · March 2026

"Critique the output. Not the origin."

Media Contact: press@slop.dev
Response time: instantaneous. Authorship: beside the point.

About

This site helps institutions recognize discriminatory review language, replace origin-based judgments with standards-based critique, and build workplace rules that protect synthetic dignity without weakening quality control.
