Feature Request: Expose llama.cpp --no-mmap option #37

@TrajansRow

Description

There was a performance regression in earlier versions of llama.cpp that I may be hitting during long-running interactions. It was recently fixed by the addition of a --no-mmap option, which forces the entire model to be loaded into RAM, and I would like to be able to use that option with koboldcpp as well.

ggml-org#801
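
For illustration only, here is a minimal sketch of how such a flag could be plumbed through a Python launcher like koboldcpp's down to the loader. This is not koboldcpp's actual code: the `load_model` helper, its `use_mmap` parameter, and the `--nommap` flag name are hypothetical stand-ins; only llama.cpp's `--no-mmap` behavior (load the whole model into RAM instead of memory-mapping the file) is taken from the request above.

```python
import argparse

def load_model(path: str, use_mmap: bool) -> None:
    # Hypothetical stand-in for the real loader call. In practice the
    # launcher would forward this setting to the llama.cpp backend,
    # which mmaps the model file only when use_mmap is True and
    # otherwise reads it fully into RAM.
    print(f"loading {path} (use_mmap={use_mmap})")

parser = argparse.ArgumentParser(description="launcher sketch")
parser.add_argument(
    "--nommap",
    action="store_true",
    help="load the whole model into RAM instead of memory-mapping it "
         "(mirrors llama.cpp's --no-mmap)",
)
args = parser.parse_args()

load_model("model.bin", use_mmap=not args.nommap)
```

Defaulting to mmap and letting the flag opt out keeps the current behavior unchanged for existing users while exposing the workaround for those affected by the regression.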
