Replies: 3 comments 1 reply
-
I plan to make a Docker image that would just run, without having to install gigabytes of ROCm packages. (The image would still be huge, but easier to manage.) And please post feedback to #1087.
-
Hi, did you ever make the Docker image?
-
My own $0.02 on Fedora: as of this post, Fedora 39 lacks support for this, but Rawhide has everything you need. In my case the core packages didn't actually conflict with the deps in Fedora 39, so I was able to install a few dependencies and pull in just the Rawhide packages to get a build working. Hopefully hipblas will get native support when Fedora 40 launches. In the meantime I wouldn't recommend mixing packages with Rawhide, but if you're on F39 and want to try at your own risk:
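The command block appears to have been lost here; a hedged sketch of what pulling the ROCm bits from Rawhide on F39 might look like (the package names and options are assumptions, check what your repos actually ship):

```shell
# Sketch: install the hipBLAS stack from Rawhide onto F39 (assumed package names).
# --releasever=rawhide resolves the named packages against the Rawhide repos;
# review the transaction summary carefully so no core F39 packages get replaced.
sudo dnf install --releasever=rawhide \
    hipblas hipblas-devel rocm-hip-devel rocminfo
```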
If you try this, make sure no core dependencies are replaced. I found it pretty smooth, actually. The catch is that my test card (Radeon VII Pro) turned out to be out of support for ROCm... #winning. Anyway, the build seems to work fine and OpenCL still works on the device.
-
Thanks #1087. Here is what worked for me:
1. Install ROCm: search docs.amd.com for the ROCm installation guide. Better to use 5.4.2, for PyTorch support anyway.
2. After it finishes, check that hipblas, hipcc, and everything else mentioned in the pull request are present.
3. Download any current master. Also grab master 12b5900, and replace ggml.c, ggml-cuda.cu, and ggml-cuda.h with the current versions.
4. Apply the doc changes the pull request made, and set hipblas to ON in the CMake files.
5. In a terminal:
   export CXX=hipcc
   cd to the llama.cpp dir
   mkdir build && cd build
   CMAKE_PREFIX_PATH=/opt/rocm cmake ..
   make
6. Done.
If you hit any runtime problem related to gfx????, try
export HSA_OVERRIDE_GFX_VERSION=9.0.0 / 9.0.6 / 9.0.8 / 9.0.a / 10.3.0
and test the values one by one.
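The build steps above can be sketched as a single shell session (a sketch, not the author's exact commands: the `-DLLAMA_HIPBLAS=ON` flag, binary location, and model path are assumptions, adjust to your checkout and ROCm install):

```shell
# Sketch of the build steps above; assumes ROCm lives under /opt/rocm and
# you have a llama.cpp checkout with the hipblas changes from the PR applied.
export CXX=hipcc
cd llama.cpp
mkdir -p build && cd build
CMAKE_PREFIX_PATH=/opt/rocm cmake -DLLAMA_HIPBLAS=ON ..
make -j"$(nproc)"

# If you hit gfx-related runtime errors, override the detected GPU target,
# testing each candidate value one by one:
export HSA_OVERRIDE_GFX_VERSION=10.3.0   # or 9.0.0 / 9.0.6 / 9.0.8 / 9.0.a
./bin/main -m /path/to/model.bin -p "hello"   # hypothetical test invocation
```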