Commit 8f75f99

Update README.md (#172)
add two FAQs for Windows build questions.
1 parent 0e7dadb commit 8f75f99

File tree

1 file changed: +34 additions, −2 deletions


README.md

Lines changed: 34 additions & 2 deletions
@@ -2,7 +2,7 @@
 [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
 ![version](https://img.shields.io/badge/version-1.0-blue)

-<img src="./assets/header_model_release.png" alt="BitNet Model on Hugging Face" width="800"/>
+[<img src="./assets/header_model_release.png" alt="BitNet Model on Hugging Face" width="800"/>](https://huggingface.co/microsoft/BitNet-b1.58-2B-4T)

 bitnet.cpp is the official inference framework for 1-bit LLMs (e.g., BitNet b1.58). It offers a suite of optimized kernels that support **fast** and **lossless** inference of 1.58-bit models on CPU (with NPU and GPU support coming next).

@@ -158,7 +158,7 @@ This project is based on the [llama.cpp](https://github.com/ggerganov/llama.cpp)
 ### Build from source

 > [!IMPORTANT]
-> If you are using Windows, please remember to always use a Developer Command Prompt / PowerShell for VS2022 for the following commands
+> If you are using Windows, please remember to always use a Developer Command Prompt / PowerShell for VS2022 for the following commands. Please refer to the FAQs below if you see any issues.

 1. Clone the repo
 ```bash
@@ -278,4 +278,36 @@ python utils/generate-dummy-bitnet-model.py models/bitnet_b1_58-large --outfile
 # Run benchmark with the generated model, use -m to specify the model path, -p to specify the prompt processed, -n to specify the number of tokens to generate
 python utils/e2e_benchmark.py -m models/dummy-bitnet-125m.tl1.gguf -p 512 -n 128
 ```
+### FAQ (Frequently Asked Questions)📌

+#### Q1: The build fails with std::chrono-related errors in llama.cpp's log.cpp. How do I fix it?
+
+**A:**
+This is an issue introduced in a recent version of llama.cpp. Please refer to this [commit](https://github.com/tinglou/llama.cpp/commit/4e3db1e3d78cc1bcd22bcb3af54bd2a4628dd323) in the [discussion](https://github.com/abetlen/llama-cpp-python/issues/1942) to fix it.
+#### Q2: How do I build with clang in a conda environment on Windows?
+
+**A:**
+Before building the project, verify your clang installation and access to the Visual Studio tools by running:
+```
+clang -v
+```
+
+This command checks that you are using the correct version of clang and that the Visual Studio tools are available. If you see an error message such as:
+```
+'clang' is not recognized as an internal or external command, operable program or batch file.
+```
+
+it means your command-line window has not been properly initialized for the Visual Studio tools.
+
+• If you are using Command Prompt, run:
+```
+"C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\Tools\VsDevCmd.bat" -startdir=none -arch=x64 -host_arch=x64
+```
+
+• If you are using Windows PowerShell, run the following commands:
+```
+Import-Module "C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
+Enter-VsDevShell 3f0e31ad -SkipAutomaticLocation -DevCmdArguments "-arch=x64 -host_arch=x64"
+```
+
+These steps will initialize your environment and allow you to use the correct Visual Studio tools.
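If you also have a POSIX shell available (e.g. Git Bash), the verification step above can be scripted. A minimal sketch (a hypothetical helper, not part of the repository; the tool list is illustrative) that reports whether each required tool resolves on PATH:

```shell
#!/bin/sh
# Report where each required build tool was found on PATH,
# or flag it as missing so you know to initialize the
# VS2022 developer environment first.
for tool in clang cmake; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found at $(command -v "$tool")"
    else
        echo "$tool: MISSING"
    fi
done
```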
