-
I compared a 4060 Ti 16GB eGPU (self-built with a TH3P4G2 dock), a 7840HS mini PC (Firebat MN56), and the newly purchased Ryzen AI HX370 (Beelink SER9). I'm comparing them because the combined price of the eGPU setup is similar to the price of the HX370 machine. I also use the eGPU with the Firebat for DeepSeek R1 and IBM Granite tests, and the HX370 has faster memory than the Firebat (7500 vs. 5600 MT/s). Even though it is a bit slower than the RTX eGPU combination when running on the ROCm GPU backend, I ended up buying it because I was swayed by AMD's flashy advertising (bought it on discount for $899)... The test environment is the first default ComfyUI workflow (512x512 px, SD1.5 base model, v1-5-pruned-emaonly.ckpt).
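(If anyone wants to reproduce roughly the same load outside ComfyUI, here is a minimal sketch using the diffusers library. It is only an approximation of that workflow: the Hugging Face model ID, the prompt, and the 20-step count are my assumptions rather than the exact ComfyUI defaults, and my actual numbers above came from the ComfyUI workflow itself.)

```python
# Rough stand-in for the default ComfyUI SD1.5 workflow (512x512).
# Assumes a working torch + diffusers install; adjust the model ID if the
# Hugging Face mirror of v1-5-pruned-emaonly has moved.
import time

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed model ID
    torch_dtype=dtype,
).to(device)

start = time.time()
image = pipe(
    "a photo of a cat on a windowsill",  # placeholder prompt
    height=512,
    width=512,
    num_inference_steps=20,
).images[0]
elapsed = time.time() - start

print(f"{elapsed:.1f} s for one 512x512 image ({20 / elapsed:.2f} it/s)")
image.save("benchmark.png")
```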
Be honest (at today's prices):
Which is better for a typical user right now? I'll share my setup recipe, and I'd love to hear your creative and challenging ideas! (to my shame, $899.00 😭)
I've also been thinking: if I bought four used 7840HS machines ($150.00 x 4 = $600.00) and tried Kubernetes on them, which is known to be challenging, would that be a more cost-effective way to experiment?
-
Looks like the AMD HX370 is a dud as far as AI tasks are concerned: https://www.reddit.com/r/LocalLLaMA/comments/1i7cj11/amd_hx370_llm_performance/ . Neither the NPU nor the GPU is supported by the official AMD ROCm drivers at this time.
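For anyone who wants to check what their install actually sees, here is a quick sketch, assuming a ROCm build of PyTorch is installed (ROCm reuses the `torch.cuda` API, so "cuda" here means HIP):

```python
# Quick check of whether the ROCm build of PyTorch recognizes the HX370's
# iGPU at all. If this prints False, ComfyUI will fall back to the CPU.
import torch

print("HIP/ROCm available:", torch.cuda.is_available())
print("HIP runtime version:", torch.version.hip)  # None on CUDA-only builds
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```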
-
Can you try the 4060 Ti without offloading to CPU/RAM at all? I'm curious how it would perform using only the Nvidia GPU. That only applies if the model actually fits on the GPU, of course, but if it can fully fit in VRAM, it would be interesting.
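If it helps, I believe ComfyUI has `--gpu-only` / `--highvram` launch flags that keep everything on the GPU. Outside ComfyUI, a rough sketch like this (same hypothetical diffusers setup as in the opening post, with an assumed model ID) would show whether the whole SD1.5 pipeline stays under the 16 GB:

```python
# Sketch: generate one 512x512 image fully on the GPU and report peak VRAM.
# If the peak stays well below 16 GB, no CPU/RAM offload should be needed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed model ID
    torch_dtype=torch.float16,
).to("cuda")

torch.cuda.reset_peak_memory_stats()
pipe("test prompt", height=512, width=512, num_inference_steps=20)
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak VRAM during generation: {peak_gb:.2f} GB")
```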
-
Seen this?
-
Are these speeds on the iGPU (the 780M, I believe)? I was wondering how you got it working. I have a 5750G with a Vega 8 iGPU and I struggled to get it working (it crashes as soon as CLIP should be loaded).
-
I failed to run ComfyUI on Linux on the 780M, and now I use it on Windows via TheRock. It can use fp16, which is faster than fp32.
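On Linux, the workaround people usually suggest for unsupported iGPUs is to override the reported GFX target so the ROCm runtime treats the chip as a supported card. I am not sure it actually helps on the Vega 8 or the 780M, and the override values below are the ones commonly cited in community threads, nothing official:

```python
# Common (unofficial) workaround for unsupported AMD iGPUs under ROCm:
# spoof the reported GFX target before the HIP runtime initializes.
# Commonly cited community values (not guaranteed to be correct):
#   "9.0.0"  for older Vega iGPUs (e.g. the 5750G's Vega 8, gfx90c)
#   "11.0.0" for RDNA3 iGPUs such as the 780M (gfx1103)
import os

os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "11.0.0")

import torch  # must be imported after the override is set

print("HIP available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```

In practice people usually export the variable in the shell before launching ComfyUI; the snippet just shows the same idea in-process.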