Support batched generation for llama.cpp #102

@the-crypt-keeper

Description

Batched generation support has landed upstream: ggml-org/llama.cpp#3228

This should make our test suite ~10x faster on GGUF models.
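The speedup comes from grouping prompts so the backend decodes them together instead of making one call per prompt. A minimal sketch of how a test harness might batch its requests; the `generate_batch` callable here is a hypothetical stand-in for whatever backend call ultimately drives the GGUF model, not the llama.cpp API itself:

```python
from typing import Callable, List

def run_batched(prompts: List[str],
                generate_batch: Callable[[List[str]], List[str]],
                batch_size: int = 8) -> List[str]:
    """Run prompts through a batch-capable backend in fixed-size chunks."""
    results: List[str] = []
    for i in range(0, len(prompts), batch_size):
        # One backend call decodes the whole chunk in parallel,
        # amortizing per-call model overhead across the batch.
        results.extend(generate_batch(prompts[i:i + batch_size]))
    return results

# Stub backend for illustration only: echoes each prompt uppercased.
outputs = run_batched([f"q{i}" for i in range(20)],
                      lambda batch: [p.upper() for p in batch])
```

With a real batch-decoding backend, the per-prompt cost drops to roughly the cost of the slowest sequence in each chunk, which is where the estimated ~10x gain on a large test suite would come from.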

Labels

enhancement (New feature or request)
