llm_rerank chunks data unnecessarily and uses hard 2048 batch size limit #168

@anasdorbani

Description

llm_rerank always chunks input tuples, even when the entire set would fit within the model's context window. Additionally, the implementation uses a hard-coded limit of 2048 tuples per batch, even though in practice the LLM can handle more tuples as long as their combined token length fits in the context window.
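A token-aware batching strategy could address both points: skip chunking entirely when the whole input fits, and otherwise pack batches by token budget rather than a fixed tuple count. The sketch below is illustrative only, not the library's actual API; `estimate_tokens`, `CONTEXT_TOKEN_BUDGET`, and the string-per-tuple serialization are all assumed placeholders.

```python
from typing import Iterable, Iterator

# Assumed budget: tokens the model can accept per rerank call, after
# reserving room for the prompt template and the model's response.
CONTEXT_TOKEN_BUDGET = 120_000


def estimate_tokens(text: str) -> int:
    # Placeholder heuristic (~4 characters per token); a real
    # implementation would use the target model's tokenizer.
    return max(1, len(text) // 4)


def pack_by_token_budget(
    docs: Iterable[str],
    budget: int = CONTEXT_TOKEN_BUDGET,
) -> Iterator[list[str]]:
    """Yield batches whose combined token estimate fits the budget.

    If every doc fits in one batch, no chunking occurs; otherwise
    batches are sized by token length instead of a hard tuple-count
    cap like 2048.
    """
    batch: list[str] = []
    used = 0
    for doc in docs:
        cost = estimate_tokens(doc)
        if batch and used + cost > budget:
            yield batch
            batch, used = [], 0
        batch.append(doc)
        used += cost
    if batch:
        yield batch
```

With this approach, a small input yields a single batch (and a single LLM call), while very large inputs still chunk, but at the token boundary rather than at an arbitrary 2048-tuple cutoff.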
