Building llama.cpp with BLAS support is highly recommended, as it has been shown to provide performance improvements. Make sure to have OpenBLAS installed in your environment.
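
As a quick sanity check before configuring the build, you can ask the dynamic linker whether OpenBLAS is visible. This is a minimal sketch using only the Python standard library; library naming varies by distribution, so treat the result as a heuristic:

```python
from ctypes.util import find_library

# find_library returns a soname string such as "libopenblas.so.0"
# when the library can be located, or None when it cannot.
lib = find_library("openblas")
print(lib if lib else "OpenBLAS not found - install it before configuring")
```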

For debug builds:

```bash
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Debug \
    -DGGML_BLAS=ON \
    -DGGML_BLAS_VENDOR=OpenBLAS

cmake --build build --config Debug -j $(nproc)
```

For static builds, add `-DBUILD_SHARED_LIBS=OFF`:

```bash
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_BLAS=ON \
    -DGGML_BLAS_VENDOR=OpenBLAS \
    -DBUILD_SHARED_LIBS=OFF

cmake --build build --config Release -j $(nproc)
```

## Getting GGUF Models

All models need to be converted to Big-Endian. You can achieve this in three ways:

1. **Use pre-converted models verified for use on IBM Z & LinuxONE (easiest)**

    You can find popular models pre-converted and verified at [s390x Ready Models](https://huggingface.co/collections/taronaeo/s390x-ready-models-672765393af438d0ccb72a08).

    These models and their respective tokenizers are verified to run correctly on IBM Z & LinuxONE.

2. **Convert safetensors model to GGUF Big-Endian directly (recommended)**

    The model you are trying to convert must be in `safetensors` file format (for example [IBM Granite 3.3 2B](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct)). Make sure you have downloaded the model repository for this case.

    ```bash
    python3 convert_hf_to_gguf.py \
        --outfile model-name-be.f16.gguf \
        --outtype f16 \
        model-directory/
    ```

3. **Convert existing GGUF Little-Endian model to Big-Endian**

    The model you are trying to convert must be in `gguf` file format (for example [IBM Granite 3.3 2B](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct-GGUF)). Make sure you have downloaded the model file for this case.

    ```bash
    python3 gguf-py/gguf/scripts/gguf_convert_endian.py model-name.f16.gguf BIG
    ```

    For example:

    ```bash
    python3 gguf-py/gguf/scripts/gguf_convert_endian.py granite-3.3-2b-instruct-le.f16.gguf BIG
    ```

    **Note:** The GGUF endian conversion script may not support all data types at the moment and may fail for some models/quantizations. When that happens, please try manually converting the `safetensors` model to GGUF Big-Endian via Step 2.

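
To make the endianness issue concrete, the sketch below (an illustration, not part of the llama.cpp tooling) shows what per-element byte-swapping does to float32 data, and how a mismatched header reads on a big-endian host: GGUF stores a `uint32` version right after the `GGUF` magic, so version 3 read with the wrong byte order appears as 50331648.

```python
import struct
from array import array

# Per-element byte-swap: what "converting to Big-Endian" means for
# fixed-width tensor data such as float32.
le = array("f", [1.0, 2.0, 3.0])
be = array("f", le.tobytes())
be.byteswap()
# Each 4-byte element is reversed relative to the little-endian form.
assert be.tobytes()[:4] == le.tobytes()[3::-1]

# Header check: a GGUF file starts with the magic "GGUF" followed by a
# uint32 version stored in the file's byte order.
def header_version(header: bytes) -> int:
    assert header[:4] == b"GGUF"
    return struct.unpack(">I", header[4:8])[0]  # read as big-endian

print(header_version(b"GGUF" + struct.pack(">I", 3)))  # 3 (correct)
print(header_version(b"GGUF" + struct.pack("<I", 3)))  # 50331648 (mismatch)
```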
## IBM Accelerators
### 1. SIMD Acceleration

Only available on IBM z15 or later systems with the `-DGGML_VXE=ON` compile flag (turned on by default). No hardware acceleration is possible with llama.cpp on older systems, such as IBM z14 or EC13. On such systems, the APIs can still run but will use a scalar implementation.

### 2. zDNN Accelerator

*Only available on IBM z16 or later systems. No direction at the moment.*

### 3. Spyre Accelerator
*No direction at the moment.*
## Performance Tuning

It is strongly recommended to disable SMT via the kernel boot parameters as it negatively affects performance.

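
To confirm that the running kernel was booted with SMT disabled, you can inspect `/proc/cmdline` for the standard `nosmt` kernel parameter. The helper below is an illustrative sketch, not an official tool:

```python
def smt_disabled_in_cmdline(cmdline: str) -> bool:
    # The stock kernel parameter for disabling SMT is "nosmt"
    # (optionally "nosmt=force").
    return any(tok == "nosmt" or tok.startswith("nosmt=")
               for tok in cmdline.split())

# Check the boot parameters of the running kernel (Linux only).
try:
    with open("/proc/cmdline") as f:
        print(smt_disabled_in_cmdline(f.read()))
except FileNotFoundError:
    print("not a Linux system")
```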
IBM VXE/VXE2 SIMD acceleration depends on the BLAS implementation. It is strongly recommended to use BLAS.
## Getting Help on IBM Z & LinuxONE
1. **Bugs, Feature Requests**
Please reach out directly to [aionz@us.ibm.com](mailto:aionz@us.ibm.com).