Optimize for reduction of LLM calls #2

@ghost-pep

Description

Currently, we make redundant LLM calls: steps like claim extraction run multiple times across different evaluators even when the inputs are identical. We should cache these results so each unique input is processed once, and clean up the surrounding code while we're at it.

Metadata

    Labels

    enhancement (New feature or request)
