
Commit 07230fd

Add ROCm information to README
1 parent c5d1dfe commit 07230fd

2 files changed: +28 −0 lines changed

README.md

Lines changed: 15 additions & 0 deletions
@@ -22,17 +22,32 @@ Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0.
In some cases it can happen that you need to compile from source. If this happens, please consider submitting a bug report with the output of `python -m bitsandbytes`. The short instructions below may work out of the box if `nvcc` is installed; if they do not, see the compilation guide further below.

Compilation quickstart:

```bash
git clone https://github.com/timdettmers/bitsandbytes.git
cd bitsandbytes
```

For CUDA

```bash
# CUDA_VERSIONS in {110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120}
# make argument in {cuda110, cuda11x, cuda12x}
# if you do not know which CUDA version you have, check the output of: python -m bitsandbytes
CUDA_VERSION=117 make cuda11x
python setup.py install
```
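
As a quick sanity check after installing, a minimal sketch along these lines (an assumption, not part of this commit; it uses the library's `bnb.optim.Adam8bit` optimizer and assumes a CUDA-capable GPU) can confirm the compiled binary loads and runs:

```python
# Hedged sanity-check sketch (not part of this commit): confirm the freshly
# compiled library imports and can take an 8-bit optimizer step on the GPU.
import torch
import bitsandbytes as bnb

p = torch.nn.Parameter(torch.randn(64, 64, device="cuda"))
opt = bnb.optim.Adam8bit([p], lr=1e-3)
p.grad = torch.randn_like(p)
opt.step()
print("bitsandbytes 8-bit Adam step succeeded")
```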

For ROCm

```bash
# Requires ROCm 5.6+
# Check whether your GPU supports Wave32 with: rocminfo | grep "Wavefront Size"
# If this prints 64 instead of 32, this library will not work
# Your ROCm target can be found with: rocminfo | grep gfx
ROCM_TARGET=gfx1030 make hip
pip install .
```
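
The same checks can be scripted; a small sketch (an assumption, not part of this commit) that parses `rocminfo` for the wavefront size and gfx target mentioned above:

```python
# Hedged sketch (not part of this commit): parse `rocminfo` for the Wave32
# requirement and the gfx target, mirroring the shell commands above.
import re
import subprocess

info = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
wavefront_sizes = set(re.findall(r"Wavefront Size:\s*(\d+)", info))
gfx_targets = sorted(set(re.findall(r"gfx[0-9a-f]+", info)))

print("wavefront sizes:", wavefront_sizes)
print("ROCm targets:", gfx_targets)
if "32" not in wavefront_sizes:
    print("Warning: no Wave32 device found; this library will not work here.")
```
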
**Using Int8 inference with HuggingFace Transformers**

```python
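# Minimal sketch (an assumption, not this commit's content) of Int8 inference
# via the HuggingFace Transformers integration; the model name and generation
# settings are illustrative, and `transformers` plus `accelerate` are assumed
# to be installed alongside this library.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-350m"  # hypothetical example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",      # place weights automatically via accelerate
    load_in_8bit=True,      # quantize linear layers to Int8 with bitsandbytes
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```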

compile_from_source.md

Lines changed: 13 additions & 0 deletions
@@ -38,3 +38,16 @@ If you have problems compiling the library with these instructions from source,
Since 0.39.1, bitsandbytes installed via pip no longer provides Kepler binaries; these need to be compiled from source. Follow the steps above, but instead of `cuda11x_nomatmul` etc. use `cuda11x_nomatmul_kepler`.

## Compilation with ROCm

Since this library requires hipBLASLt, it only supports **ROCm 5.6+**.
It works well with these Docker images:
- [rocm/pytorch](https://hub.docker.com/r/rocm/pytorch)
- [rocm/pytorch-nightly](https://hub.docker.com/r/rocm/pytorch-nightly)

To install, run:
```bash
make hip ROCM_TARGET=gfx1030
pip install .
```

See https://www.llvm.org/docs/AMDGPUUsage.html#processors to find your ROCM_TARGET (e.g. gfx1030 for the 6800XT/6900XT), or run `rocminfo | grep gfx`.
