A script that automatically installs everything required to run selected AI interfaces on the AMD Radeon 7900XTX. It should also work with 7900XT cards. For other cards, change the `HSA_OVERRIDE_GFX_VERSION` and `GFX` values at the beginning of the script (not tested).
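As a sketch of such an override (the variable names come from the description above; the example values target a gfx1030 / RX 6000-series card and are illustrative assumptions, not tested settings):

```shell
# Hypothetical override for an RX 6000-series (gfx1030) card.
# Check your card's actual target with: rocminfo | grep gfx
export HSA_OVERRIDE_GFX_VERSION=10.3.0
GFX=gfx1030
```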
> [!NOTE]
> Ubuntu 24.04.2 LTS is recommended. Version 7.x is not tested on older systems.
| Name | Info |
|---|---|
| CPU | AMD Ryzen 9950X3D (iGPU disabled in BIOS) |
| GPU | AMD Radeon 7900XTX |
| RAM | 64GB DDR5 6600MHz |
| Motherboard | ASRock B650E PG Riptide WiFi (BIOS 3.25) |
| OS | Ubuntu 24.04.2 LTS |
| Kernel | 6.11.0-26-generic |
| ROCm | 6.4.1 |
| Name | Links | Additional information |
|---|---|---|
| KoboldCPP | https://github.com/YellowRoseCx/koboldcpp-rocm | Supports GGML and GGUF models. |
| Text generation web UI | https://github.com/oobabooga/text-generation-webui<br>https://github.com/ROCm/bitsandbytes.git<br>https://github.com/turboderp/exllamav2 | Supports ExLlamaV2 and Transformers using ROCm, and llama.cpp using Vulkan. |
| SillyTavern | https://github.com/SillyTavern/SillyTavern | |
| llama.cpp | https://github.com/ggerganov/llama.cpp | 1. Put model.gguf into the llama.cpp folder.<br>2. Change the context size in the run.sh file (default: 32768).<br>3. Set the GPU offload layers in the run.sh file (default: 1). |
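As a sketch of how the two llama.cpp settings above map onto llama.cpp's standard server flags (the actual run.sh generated by the installer may differ; the echo line shows the command it would launch):

```shell
#!/bin/bash
# Hypothetical sketch of run.sh -- adjust the values below.
CTX=32768   # context size (step 2 above)
NGL=1       # number of layers offloaded to the GPU (step 3 above)
# llama.cpp's llama-server takes -c for context and -ngl for GPU layers:
echo "./llama-server -m model.gguf -c $CTX -ngl $NGL"
```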
| Name | Links | Additional information |
|---|---|---|
| ComfyUI | https://github.com/comfyanonymous/ComfyUI | Workflow templates are in the workflows folder. |
| Artist | https://github.com/songrise/Artist/ | |
> [!IMPORTANT]
> For GGUF models:
> 1. Accept the conditions to access the files and content on the Hugging Face website: https://huggingface.co/black-forest-labs/FLUX.1-schnell
> 2. A Hugging Face token is required during installation.
| Name | Links | Additional information |
|---|---|---|
| Cinemo | https://huggingface.co/spaces/maxin-cn/Cinemo<br>https://github.com/maxin-cn/Cinemo | Interface uses PyTorch 2.4.0. |
| Name | Links | Additional information |
|---|---|---|
| ACE-Step | https://github.com/ace-step/ACE-Step | |
| Name | Links | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui<br>https://github.com/collabora/WhisperSpeech | |
| F5-TTS | https://github.com/SWivid/F5-TTS | Remember to select the voice file when using the interface. |
| Matcha-TTS | https://github.com/shivammehta25/Matcha-TTS | |
| Dia | https://github.com/nari-labs/dia<br>https://github.com/tralamazza/dia/tree/optional-rocm-cuda | Script uses the optional-rocm-cuda fork by tralamazza. |
| Orpheus-TTS | https://huggingface.co/spaces/MohamedRashad/Orpheus-TTS/tree/main<br>https://github.com/canopyai/Orpheus-TTS | If the GPU is not detected, change the HIP_VISIBLE_DEVICES value. |
| IMS-Toucan | https://github.com/DigitalPhonetics/IMS-Toucan.git | Interface uses PyTorch 2.4.0. |
| Chatterbox | https://github.com/resemble-ai/chatterbox<br>https://huggingface.co/spaces/ResembleAI/Chatterbox | |
| HierSpeech++ | https://github.com/sh-lee-prml/HierSpeechpp<br>http://huggingface.co/spaces/LeeSangHoon/HierSpeech_TTS | Interface uses PyTorch 2.4.0. |
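HIP_VISIBLE_DEVICES controls which GPUs ROCm applications can see; on systems with more than one GPU (or an active iGPU) the index may need adjusting. A minimal sketch, assuming the first device is the intended one:

```shell
# List ROCm devices to find the right index (requires ROCm installed):
#   rocminfo | grep -E "Name|Device"
# Then expose only the chosen GPU, e.g. device 0:
export HIP_VISIBLE_DEVICES=0
```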
| Name | Links | Additional information |
|---|---|---|
| TripoSG | https://github.com/VAST-AI-Research/TripoSG | Added a custom simple UI. Sometimes there are problems with the preview, but the model should still be available for download. |
| Name | Links | Additional information |
|---|---|---|
| Fastfetch | https://github.com/fastfetch-cli/fastfetch | Custom Fastfetch configuration with GPU memory info. The script supports not only AMD but also NVIDIA graphics cards (nvidia-smi needed). If you change the number or order of graphics cards, you must run the installer again. |
> [!NOTE]
> The first startup after installation of a selected interface may take longer.
> [!IMPORTANT]
> This script does not download any models. If the interface does not have defaults, download your own.
> [!CAUTION]
> If you update, back up your settings and models first. Reinstallation deletes the previous directories.
1. Add the user to the required groups:
```shell
sudo adduser `whoami` video
sudo adduser `whoami` render
```
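To confirm the membership took effect (it applies only after logging back in or rebooting), you can list the current user's groups:

```shell
# The output should include "video" and "render" after re-login:
groups `whoami`
```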
2. Reboot:
```shell
sudo reboot
```
3. Clone the repository:
```shell
git clone https://github.com/Mateusz-Dera/ROCm-AI-Installer.git
```
4. Run the installer:
```shell
bash ./install.sh
```
5. Select installation path.
6. Select ROCm installation if you are upgrading or running the script for the first time.
7. Install the selected interfaces.
8. Go to the installation path with the selected interface and run:
```shell
./run.sh
```