[ROCm] Fix HIP version check for HIPBLAS V2 API compatibility #14744


Open · wants to merge 2 commits into master

Conversation

@danielholanda danielholanda commented Jul 17, 2025

Description

This PR updates the HIP version check from:

#if defined(__HIP_PLATFORM_AMD__) && HIP_VERSION >= 70000000

to:

#if defined(__HIP_PLATFORM_AMD__) && HIP_VERSION >= 50600000

🛠 Why

The original condition appears intended to enable support for the HIPBLAS V2 API types (hipblasComputeType_t, HIPBLAS_COMPUTE_*, hipDataType), which replaced the deprecated hipblasDatatype_t and HIPBLAS_R_*. However, the check for HIP_VERSION >= 70000000 likely overshoots the required version.

I believe HIP_VERSION >= 50600000 is a more appropriate threshold:

  • HIPBLAS V2 features were introduced around ROCm 5.x.
  • Version 50600000 is already used throughout the codebase for similar feature gating.
  • This change aligns the version check with actual usage and enables proper builds on current ROCm versions.

This change allowed me to build llama.cpp with ROCm and HIPBLAS V2 support on bleeding-edge ROCm releases from TheRock.

@ggerganov

@github-actions github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels Jul 17, 2025
@ggerganov ggerganov requested a review from IMbackK July 17, 2025 17:55
@CISC
Collaborator

CISC commented Jul 17, 2025

A few preliminary searches seem to suggest that ROCm 6.0.0 is the (at least official) release where HIPBLAS V2 was introduced.

@IMbackK
Collaborator

IMbackK commented Jul 17, 2025

So this won't compile: AFAIK, across the entire ROCm 6.x series, HIPBLAS_R_32F and friends are of type hipblasDatatype_t unless the hipblas header is included with HIPBLAS_V2 defined.

ROCm 7.0 (or current git) is the earliest release where this will work as-is.

see https://raw.githubusercontent.com/ROCm/hipBLAS/refs/heads/release-staging/rocm-rel-6.4/library/include/hipblas.h

#ifdef HIPBLAS_V2

// Replacing use of hipblasDatatype_t with hipDataType which will be used in a future release.
typedef hipDataType hipblasDatatype_t;

#define HIPBLAS_R_16F HIP_R_16F
...

Collaborator

@IMbackK IMbackK left a comment


This won't compile on ROCm < 7.0 unless HIPBLAS_V2 is visible during inclusion of the hipblas header.
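One way to reconcile the two constraints raised here would be to define HIPBLAS_V2 before the header is pulled in on the 5.6–6.x range, so those headers also expose the hipDataType-based names. This is only a hedged sketch of that idea, not a committed patch, and the include path is assumed:

```cpp
// Hypothetical sketch: opt in to the V2 names on ROCm 5.6 through 6.x,
// where hipblas.h only exposes them when HIPBLAS_V2 is defined; on 7.0+
// they are the default.
#if defined(__HIP_PLATFORM_AMD__) && HIP_VERSION >= 50600000 && HIP_VERSION < 70000000
#define HIPBLAS_V2
#endif
#include <hipblas/hipblas.h>
```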
