LLM Local Experiments

This repository documents my experiments running large language models (LLMs) locally using an AMD RX 6700 XT GPU.

🧠 Purpose

Explore ways of running LLMs locally with GPU acceleration (ROCm) on AMD hardware.

🛠️ Setup

  • GPU: AMD RX 6700 XT
  • OS: Windows 11 with WSL2 (Ubuntu 24.04)
  • Backends tested: ROCm

📊 Benchmarks

Below are the results from a basic performance test measuring tokens per second (TPS):

| Model      | GPU        | Operating System | Environment    | Tokens Generated | TPS            |
|------------|------------|------------------|----------------|------------------|----------------|
| qwen3:0.6b | RX 6700 XT | Windows 11       | Ollama (local) | 200              | 39.65 tokens/s |
| qwen3:4.0b | RX 6700 XT | Windows 11       | Ollama (local) | 200              | 14.47 tokens/s |

⚠️ Note: This benchmark was run using unofficial support via likelovewant/ollama-for-amd, and results may vary depending on drivers, thermal limits, and background activity.
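The TPS figures above can be recomputed from the raw counters that Ollama's /api/generate response reports: eval_count (tokens generated) and eval_duration (generation time in nanoseconds). A minimal sketch; the ~5.044 s duration below is an illustrative value chosen to match the table, not a logged measurement:

```python
# Sketch: derive tokens-per-second from the counters exposed by Ollama's
# /api/generate response: eval_count (tokens) and eval_duration (nanoseconds).
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Tokens generated divided by generation time in seconds."""
    return eval_count / (eval_duration_ns / 1e9)

# Illustrative numbers: 200 tokens in ~5.044 s gives the ~39.65 TPS row above.
print(round(tokens_per_second(200, 5_044_000_000), 2))  # → 39.65
```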

📌AMD GPU Compatibility Notes

Official ROCm support for RDNA2-based GPUs (such as the RX 6700 XT) remains limited.

To work around these limitations, I used the community build likelovewant/ollama-for-amd, which adds support for AMD GPUs that the official Ollama/ROCm releases do not cover.
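Beyond the community build, a widely used workaround for RDNA2 cards under ROCm on Linux/WSL2 is to override the reported GPU architecture so the gfx1030 kernels are used; this is an assumption about the usual setup, not something this repository documents:

```shell
# Sketch under assumptions: ROCm is installed in WSL2 and the RX 6700 XT
# (gfx1031) is not officially supported, so we tell the ROCm runtime to
# use the gfx1030 code path instead. Community workaround, not guaranteed
# by AMD; set it before launching the LLM backend.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "HSA override set to: $HSA_OVERRIDE_GFX_VERSION"
```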
