
Please add a demo of Retrieval-Augmented Generation with embedding and text-generation model. #79

@EmranAhmed

Description


RAG Data Not Following System Prompt / Not Restricting to Retrieved Context

Could you add a demo/example of a Retrieval-Augmented Generation (RAG) pipeline that:

Uses Markdown content as the knowledge base

Converts it into embeddings via an embedding model

Retrieves relevant chunks

Sends them to a text-generation model

Ensures the model strictly answers ONLY from retrieved context
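A rough sketch of what such a demo could look like, using pure-Python placeholders (the bag-of-words `embed` function stands in for a real embedding model, and the prompt would be sent to whatever text-generation model the demo uses — both are assumptions, not this project's API):

```python
import math
import re

def chunk_markdown(text, max_chars=300):
    """Naively split Markdown into chunks on blank lines; real demos often split by heading."""
    paras = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    chunks, current = [], ""
    for p in paras:
        if current and len(current) + len(p) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += p + "\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def embed(text):
    """Placeholder embedding: bag-of-words term counts. Swap in a real embedding model here."""
    vec = {}
    for tok in re.findall(r"\w+", text.lower()):
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by cosine similarity to the query embedding; keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Tiny example knowledge base
kb = "# Shipping\n\nOrders ship within 2 business days.\n\n# Returns\n\nReturns are accepted within 30 days."
chunks = chunk_markdown(kb)
question = "How long do returns take?"
context = "\n---\n".join(retrieve(question, chunks))

# The retrieved chunks become the only material the model is told to use.
prompt = (
    "Answer ONLY from the context below. If the context does not contain the answer, "
    'reply exactly: "I don\'t know based on the provided documents."\n\n'
    f"Context:\n{context}\n\nQuestion: {question}"
)
```

The `prompt` string would then be passed to the text-generation model; the instruction-following problems described below happen at that last step.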

Currently, the model:

❌ Does not follow the system prompt strictly

❌ Generates answers outside of retrieved (RAG) data

❌ Hallucinates or adds extra knowledge

✅ Expected Behavior

The Text Generation Model should:

Answer only from the retrieved context

Refuse to answer if context is insufficient

Follow the system prompt strictly
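One way the demo could enforce the refusal case is to gate on the retrieval score before ever calling the model. A minimal sketch (the message shape and `min_score` threshold are assumptions, not this library's API):

```python
SYSTEM_PROMPT = (
    "You are a retrieval-grounded assistant.\n"
    "Rules:\n"
    "1. Answer using ONLY the text inside <context>...</context>.\n"
    '2. If the context does not contain the answer, reply exactly: '
    '"I can\'t answer that from the provided documents."\n'
    "3. Never use outside knowledge, even if you know the answer."
)

REFUSAL = "I can't answer that from the provided documents."

def build_messages(context_chunks, question, scores=None, min_score=0.2):
    """Return chat messages for the model, or None when retrieval found nothing
    relevant enough -- in that case the caller should emit REFUSAL without
    invoking the model at all, so a weak instruction-follower can't hallucinate."""
    if scores is not None and (not scores or max(scores) < min_score):
        return None
    context = "\n---\n".join(context_chunks)
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"<context>\n{context}\n</context>\n\nQuestion: {question}"},
    ]
```

Skipping generation entirely when retrieval scores are low is more reliable than hoping the model honors rule 2 on its own.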

❌ Current Behavior

Model ignores restriction instructions

Uses general knowledge instead of retrieved chunks

Responses are not grounded in RAG data.
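Since small models often ignore the system prompt, the demo could also verify grounding after generation and fall back to a refusal. This is a naive lexical-overlap check of my own devising (real pipelines often use an NLI or embedding-based faithfulness model instead):

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "in", "of", "to", "and", "or", "it", "was", "be"}

def grounded(answer, context, threshold=0.6):
    """Return True when enough of the answer's content words appear in the
    retrieved context. A crude proxy for faithfulness, but cheap and model-free."""
    ans_words = [w for w in re.findall(r"\w+", answer.lower()) if w not in STOPWORDS]
    if not ans_words:
        return True
    ctx_words = set(re.findall(r"\w+", context.lower()))
    overlap = sum(1 for w in ans_words if w in ctx_words)
    return overlap / len(ans_words) >= threshold

def guard(answer, context, refusal="I can't answer that from the provided documents."):
    """Replace an ungrounded model answer with the refusal string."""
    return answer if grounded(answer, context) else refusal
```

For example, with context "Returns are accepted within 30 days of purchase.", an on-topic answer passes through `guard` unchanged, while "The capital of France is Paris." is replaced by the refusal.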
