
Phi-4-mini-instruct-GGUF-Q3_K_M

This is a 3-bit quantized GGUF of Microsoft's Phi-4-mini-instruct, a 3.8B-parameter small language model (SLM).

Description

The weights are quantized with llama.cpp to 3-bit precision (Q3_K_M). The screenshot below shows the simple CLI interface of this quantized SLM running locally in a Linux terminal on a computer with an Intel Core i5 CPU, without an internet connection.
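To give a sense of why 3-bit quantization matters for running locally on a CPU, here is a minimal sketch estimating the on-disk model size. The ~3.9 bits-per-weight average for Q3_K_M is an assumption (k-quants mix block sizes and store scales, so the effective rate is above 3 bits), not a figure from this repository.

```python
# Rough estimate of model file size under different precisions.
# Assumption (not from the repo): Q3_K_M averages ~3.91 bits per weight;
# FP16 is exactly 16 bits per weight.

def estimate_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Return approximate model file size in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

PARAMS = 3.8e9  # Phi-4-mini-instruct parameter count

fp16 = estimate_size_gb(PARAMS, 16.0)   # unquantized half precision
q3km = estimate_size_gb(PARAMS, 3.91)   # assumed Q3_K_M average

print(f"FP16:   ~{fp16:.1f} GB")   # ~7.6 GB
print(f"Q3_K_M: ~{q3km:.1f} GB")   # ~1.9 GB
```

Under these assumptions the quantized file is roughly a quarter of the FP16 size, which is what makes it practical to load on a modest desktop.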

Getting Started

Dependencies

  • llama.cpp, built locally so that the llama-cli binary is available

Installation

Executing program

  • Download the "Phi-4-mini-instruct-GGUF-Q3_K_M.gguf" file directly, or clone the repository:

```
git clone https://github.com/harisnae/Phi-4-mini-instruct-GGUF-Q3_K_M
```

  • To run this language model in a simple CLI interface, provide the full paths to the llama-cli binary and to "Phi-4-mini-instruct-GGUF-Q3_K_M.gguf" in the terminal:

```
(Path to llama CLI)/llama.cpp/build/bin/llama-cli --color --conversation --model (Path to model GGUF file)/Phi-4-mini-instruct-GGUF-Q3_K_M.gguf
```
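Before launching llama-cli, it can be worth sanity-checking that the download completed correctly. Per the GGUF specification, every GGUF file starts with the 4-byte magic `GGUF` followed by a little-endian uint32 version. The sketch below writes a tiny synthetic header so it is self-contained; point `check_gguf` at your real downloaded file instead.

```python
# Minimal sanity check that a file is a GGUF container.
# A GGUF file begins with the magic bytes b"GGUF" followed by a
# little-endian uint32 format version.
import struct

def check_gguf(path: str) -> int:
    """Return the GGUF version if the header looks valid, else raise."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
        return version

# Synthetic example so this snippet runs without the real model file:
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(check_gguf("demo.gguf"))  # prints 3
```

A truncated or corrupted download will typically fail this check (or fail to load in llama-cli with a parse error), which is quicker to diagnose than a crash mid-load.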

Author

Haris Naeem

Version History

  • 0.1
    • Initial Release

License

This project is licensed under the MIT License; see the LICENSE file for details.

Acknowledgments

Screenshot

[Screenshot: Phi-4-mini-instruct-GGUF-Q3_K_M running in the llama-cli terminal interface]
