AI-assisted feedback that enables educators to give feedback smarter, faster, and better.
Try Feedbacker here: https://feedbacker.education/
- Feedbacker
- Overview
- Why Feedbacker Exists
- Responsible Use of AI
- Intended Audience
- Use in Teaching Contexts
- Limitations
- Data and Privacy
- Institutional Deployment
- Running a Pilot
- Design Principles
- Quick Start
- Trying Feedbacker Safely
- Technology Stack
- Install
- Local Development
- Project Status
- Educational Context and Collaboration
- Maintainer
- Contributing and Feedback
- Citation
- License
Feedbacker is designed for responsible AI-assisted feedback in higher education.
- Blends academic expertise with automation and AI
- Keeps feedback human-focused, not machine-generated
- Reduces repetitive work so the educator's attention stays on pedagogy
- Helps deliver feedback students can genuinely act on
- Automates the slow, repetitive parts of marking
- Helps produce personalised feedback in less time
- Shortens turnaround without sacrificing quality
- Frees up time for teaching, discussion, and student connection
- Makes it clear how students are performing
- Highlights where improvement is needed
- Points students towards actionable next steps
- Combines insight with smart automation for feedback that is more impactful
Providing high-quality feedback is one of the most valuable — and time-intensive — parts of teaching. Increasing student numbers and administrative pressures often reduce the time educators can spend crafting meaningful responses.
Feedbacker was created to support educators by automating repetitive aspects of feedback while preserving academic judgement and pedagogical intent. The goal is not to replace educators, but to help them scale thoughtful, actionable feedback in a sustainable way.
Feedbacker is designed to support — not replace — academic judgement.
- AI generates draft feedback only
- Educators remain responsible for reviewing, editing, and approving all outputs
- The system does not automate grading decisions
- Feedback remains transparent and attributable to the educator
Feedbacker aligns with emerging UK higher-education guidance encouraging AI as an assistive tool rather than an autonomous assessor.
Feedbacker is designed primarily for:
- Higher education lecturers and tutors
- Foundation and undergraduate teaching
- Educators using rubric-based assessment
- Institutions exploring responsible AI-assisted marking workflows
It may be particularly useful for educators who are:
- Experimenting with AI in teaching practice
- Managing large cohorts with heavy feedback workloads
- Curious about responsible AI adoption but cautious about automation
- Looking for ways to prototype new feedback workflows
Feedbacker has been developed alongside real teaching practice and is intended to support:
- Formative feedback workflows
- Rubric-based assessment
- Draft feedback generation
- Feedback consistency across cohorts
It is particularly suited to large-cohort modules where maintaining feedback quality and turnaround time is challenging.
Feedbacker assists feedback writing but does not:
- Replace academic judgement
- Verify factual correctness of feedback
- Assess academic misconduct
- Assign final grades automatically
Educators should always review outputs before sharing with students.
Feedbacker does not require institutional integration or student accounts to operate.
The application does not store student submissions unless explicitly configured to do so by a deploying institution. Institutions remain responsible as data controllers for any content entered into the system.
Users should avoid entering personally identifiable student information unless permitted under their institutional policies.
When self-hosted, institutions retain full control over API configuration and data handling practices.
Feedbacker can be:
- Used via the public deployment
- Self-hosted by institutions
- Configured with institution-approved AI providers
- Adapted to local assessment policies
The application is intentionally lightweight to enable experimentation and pilot adoption.
Typical institutional pilots involve:
- Use within a single module or cohort
- Formative assessment contexts
- Educator review and editing of all generated feedback
- Optional participation by teaching staff exploring AI-assisted workflows
This approach allows institutions to evaluate pedagogical value while maintaining academic oversight and governance compliance.
Feedbacker is guided by the following principles:
- Human-led feedback — AI assists, educators decide
- Transparency — outputs remain editable and reviewable
- Pedagogy first — educational value takes priority over automation
- Efficiency without dehumanisation — save time without losing voice
- Institutional compatibility — works alongside existing assessment practices
The interface is designed to work with standard accessibility tooling and screen readers where possible.
- Visit the live deployment: https://feedbacker.education/
- Enter assessment criteria or rubric information
- Provide student work or summary notes
- Generate draft feedback and refine as needed
Feedbacker can be explored without changing existing assessment workflows.
A typical first use is to generate draft feedback for a small number of assignments while continuing normal marking practices. Educators remain fully in control and can adopt or discard outputs as they see fit.
Many users begin by experimenting with anonymised or sample work before using the tool in live teaching.
This project uses Node.js and pnpm. If you haven't installed them yet, do so first. Then clone this repository, switch to its root directory, and run pnpm install.
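The steps above look like the following; the repository URL is shown as a placeholder, and the directory name is an assumption based on the project name:

```shell
# Clone the repository (replace <repository-url> with the actual URL)
git clone <repository-url>
cd feedbacker

# Install dependencies with pnpm
pnpm install
```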
You will need to create a .env file in the root directory with the following five variables:
- NEXT_PUBLIC_OPENROUTER_URL
- NEXT_PUBLIC_OPENROUTER_KEY
- NEXT_PUBLIC_OPENROUTER_MODEL
- NEXT_PUBLIC_TITLE
- NEXT_PUBLIC_HOMEPAGE
NEXT_PUBLIC_OPENROUTER_URL is the URL of the OpenRouter completions API, NEXT_PUBLIC_OPENROUTER_KEY is your API key, and NEXT_PUBLIC_OPENROUTER_MODEL is the AI model to use. NEXT_PUBLIC_TITLE and NEXT_PUBLIC_HOMEPAGE define the app's title and public URL.
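A sketch of what such a .env file might look like; the values below are illustrative placeholders (the endpoint shown is OpenRouter's documented OpenAI-compatible chat-completions URL, and the model slug is just an example):

```
NEXT_PUBLIC_OPENROUTER_URL=https://openrouter.ai/api/v1/chat/completions
NEXT_PUBLIC_OPENROUTER_KEY=your-openrouter-api-key
NEXT_PUBLIC_OPENROUTER_MODEL=openai/gpt-4o-mini
NEXT_PUBLIC_TITLE=Feedbacker
NEXT_PUBLIC_HOMEPAGE=http://localhost:3000
```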
Once you've defined those variables, you can run a local development server via pnpm dev.
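For orientation, here is a minimal sketch of how these variables could feed a request to an OpenAI-compatible chat-completions endpoint such as OpenRouter's. The buildRequest helper and the prompt text are hypothetical illustrations, not Feedbacker's actual code:

```javascript
// Sketch: assemble a chat-completions request from the env variables.
// buildRequest is a hypothetical helper for illustration only.
function buildRequest(env, promptText) {
  return {
    url: env.NEXT_PUBLIC_OPENROUTER_URL,
    options: {
      method: "POST",
      headers: {
        // OpenRouter uses standard Bearer-token authentication
        Authorization: `Bearer ${env.NEXT_PUBLIC_OPENROUTER_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: env.NEXT_PUBLIC_OPENROUTER_MODEL,
        messages: [{ role: "user", content: promptText }],
      }),
    },
  };
}

// Usage sketch:
// const { url, options } = buildRequest(process.env, "Draft feedback for this rubric...");
// const response = await fetch(url, options);
```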
Active development.
Feedbacker is currently used in real teaching contexts and continues to evolve based on educator feedback.
Feedbacker is developed as part of ongoing exploration into responsible uses of AI in higher education assessment and feedback practices.
The project welcomes discussion, experimentation, and collaboration with educators, learning technologists, and institutions interested in piloting or studying AI-assisted feedback workflows.
If you are exploring similar questions in teaching practice or institutional innovation, the maintainer is happy to hear from you.
Feedbacker is evolving through real teaching use.
Bug reports, teaching experiences, pilot stories, and suggestions are all welcome — especially from educators trying the tool in new contexts.
Please contact the maintainer by email.
If you use Feedbacker in research or teaching practice, please cite:
Huckle, S. (2025). Feedbacker: AI-assisted feedback for higher education. https://feedbacker.education/
Creative Commons Attribution 4.0 International Deed (CC BY 4.0)

