
Commit 88ed956

luoyu-intel authored and arthw committed
[SYCL] Add oneDNN primitive support (ggml-org#9091)
* add onednn
* add sycl_f16
* add dnnl stream
* add engine map
* use dnnl for intel only
* use fp16fp16fp16
* update doc
1 parent 05164f5 commit 88ed956

File tree

6 files changed: +185 -10 lines changed

CMakePresets.json

Lines changed: 4 additions & 1 deletion
@@ -28,6 +28,7 @@
     { "name": "release", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "Release" } },
     { "name": "reldbg", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "RelWithDebInfo" } },
     { "name": "static", "hidden": true, "cacheVariables": { "GGML_STATIC": "ON" } },
+    { "name": "sycl_f16", "hidden": true, "cacheVariables": { "GGML_SYCL_F16": "ON" } },

     {
         "name": "arm64-windows-msvc", "hidden": true,
@@ -60,6 +61,8 @@
     { "name": "x64-windows-msvc+static-release", "inherits": [ "base", "reldbg", "static" ] },

     { "name": "x64-windows-sycl-debug"  , "inherits": [ "sycl-base", "debug" ] },
-    { "name": "x64-windows-sycl-release", "inherits": [ "sycl-base", "release" ] }
+    { "name": "x64-windows-sycl-debug-f16", "inherits": [ "sycl-base", "debug", "sycl_f16" ] },
+    { "name": "x64-windows-sycl-release", "inherits": [ "sycl-base", "release" ] },
+    { "name": "x64-windows-sycl-release-f16", "inherits": [ "sycl-base", "release", "sycl_f16" ] }
   ]
 }

docs/backend/SYCL.md

Lines changed: 5 additions & 8 deletions
@@ -20,18 +20,14 @@
 **oneAPI** is an open ecosystem and a standard-based specification, supporting multiple architectures including but not limited to intel CPUs, GPUs and FPGAs. The key components of the oneAPI ecosystem include:

 - **DPCPP** *(Data Parallel C++)*: The primary oneAPI SYCL implementation, which includes the icpx/icx Compilers.
-- **oneAPI Libraries**: A set of highly optimized libraries targeting multiple domains *(e.g. oneMKL - Math Kernel Library)*.
+- **oneAPI Libraries**: A set of highly optimized libraries targeting multiple domains *(e.g. oneMKL and oneDNN)*.
 - **oneAPI LevelZero**: A high performance low level interface for fine-grained control over intel iGPUs and dGPUs.
 - **Nvidia & AMD Plugins**: These are plugins extending oneAPI's DPCPP support to SYCL on Nvidia and AMD GPU targets.

 ### Llama.cpp + SYCL

 The llama.cpp SYCL backend is designed to support **Intel GPU** firstly. Based on the cross-platform feature of SYCL, it could support other vendor GPUs: Nvidia GPU (*AMD GPU coming*).

-When targeting **Intel CPU**, it is recommended to use llama.cpp for [Intel oneMKL](README.md#intel-onemkl) backend.
-
-It has the similar design of other llama.cpp BLAS-based paths such as *OpenBLAS, cuBLAS, etc..*. In beginning work, the oneAPI's [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) open-source migration tool (Commercial release [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) was used for this purpose.
-
 ## Recommended Release

 The SYCL backend would be broken by some PRs due to no online CI.
@@ -47,6 +43,9 @@ For CI and performance test summary, please refer to [llama.cpp CI for SYCL Back
 ## News

+- 2024.8
+  - Use oneDNN as the default GEMM library, improve the compatibility for new Intel GPUs.
+
 - 2024.6-7
   - Performance is increased: 37 -> 42.9 tokens/s of llama-2-7b.Q4_0 on Arc770.

@@ -201,7 +200,7 @@ Please follow the instructions for downloading and installing the Toolkit for Li
 Following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable.

-Upon a successful installation, SYCL is enabled for the available intel devices, along with relevant libraries such as oneAPI MKL for intel GPUs.
+Upon a successful installation, SYCL is enabled for the available intel devices, along with relevant libraries such as oneAPI oneDNN for Intel GPUs.

 - **Adding support to Nvidia GPUs**

@@ -260,8 +259,6 @@ or
 # Export relevant ENV variables
 source /opt/intel/oneapi/setvars.sh

-# Build LLAMA with MKL BLAS acceleration for intel GPU
-
 # Option 1: Use FP32 (recommended for better performance in most cases)
 cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

ggml/src/CMakeLists.txt

Lines changed: 10 additions & 0 deletions
@@ -549,6 +549,13 @@ if (GGML_SYCL)
     file(GLOB GGML_SOURCES_SYCL "ggml-sycl/*.cpp")
     list(APPEND GGML_SOURCES_SYCL "ggml-sycl.cpp")

+    find_package(DNNL)
+    message("-- DNNL found:"${DNNL_FOUND})
+    if (GGML_SYCL_TARGET STREQUAL "INTEL")
+        add_compile_definitions(GGML_SYCL_DNNL=${DNNL_FOUND})
+    else()
+        add_compile_definitions(GGML_SYCL_DNNL=0)
+    endif()
     if (WIN32)
         find_package(IntelSYCL REQUIRED)
         find_package(MKL REQUIRED)
@@ -561,6 +568,9 @@ if (GGML_SYCL)
            set(GGML_EXTRA_LIBS ${GGML_EXTRA_LIBS} -fsycl pthread m dl onemkl)
        endif()
    endif()
+    if (${DNNL_FOUND} AND GGML_SYCL_TARGET STREQUAL "INTEL")
+        list(APPEND GGML_EXTRA_LIBS DNNL::dnnl)
+    endif()
 endif()

 if (GGML_RPC)

ggml/src/ggml-sycl.cpp

Lines changed: 15 additions & 1 deletion
@@ -40,6 +40,7 @@
 #include "ggml-sycl/presets.hpp"
 #include "ggml-sycl/sycl_device.hpp"

+#include "ggml-sycl/gemm.hpp"

 void ggml_sycl_free_data(struct ggml_tensor * tensor);
 void ggml_sycl_copy_to_device(struct ggml_tensor * tensor);
@@ -2349,6 +2350,7 @@ inline void ggml_sycl_op_mul_mat_sycl(

         const sycl::half alpha_f16 = 1.0f;
         const sycl::half beta_f16 = 0.0f;
+#if !GGML_SYCL_DNNL
         SYCL_CHECK(CHECK_TRY_ERROR(dpct::gemm(
             *stream, oneapi::mkl::transpose::trans,
             oneapi::mkl::transpose::nontrans, row_diff, src1_ncols, ne10,
@@ -2358,6 +2360,13 @@ inline void ggml_sycl_op_mul_mat_sycl(
             dpct::library_data_t::real_half)));
         const to_fp32_sycl_t to_fp32_sycl = ggml_get_to_fp32_sycl(GGML_TYPE_F16);
         to_fp32_sycl(dst_f16.get(), dst_dd_i, row_diff*src1_ncols, stream);
+#else
+        auto dnnl_stream = ctx.stream_dnnl(stream);
+        DnnlGemmWrapper::row_gemm(dnnl_stream, false, true, src1_ncols, row_diff, ne10, src1_ptr, DnnlGemmWrapper::to_dt<sycl::half>(),
+            src0_ptr, DnnlGemmWrapper::to_dt<sycl::half>(), dst_f16.get(), DnnlGemmWrapper::to_dt<sycl::half>());
+        const to_fp32_sycl_t to_fp32_sycl = ggml_get_to_fp32_sycl(GGML_TYPE_F16);
+        to_fp32_sycl(dst_f16.get(), dst_dd_i, row_diff* src1_ncols, stream);
+#endif
     }
     else {
         // GGML_SYCL_DEBUG("ggml_sycl_op_mul_mat_sycl - fp32 path\n");
@@ -2380,13 +2389,18 @@ inline void ggml_sycl_op_mul_mat_sycl(

         const float alpha = 1.0f;
         const float beta = 0.0f;
-
+#if !GGML_SYCL_DNNL
         SYCL_CHECK(CHECK_TRY_ERROR(oneapi::mkl::blas::column_major::gemm(
             *stream, oneapi::mkl::transpose::trans,
             oneapi::mkl::transpose::nontrans, row_diff, src1_ncols, ne10,
             dpct::get_value(&alpha, *stream), src0_ddf_i, ne00,
             src1_ddf1_i, ne10, dpct::get_value(&beta, *stream),
             dst_dd_i, ldc)));
+#else
+        auto dnnl_stream = ctx.stream_dnnl(stream);
+        DnnlGemmWrapper::row_gemm(dnnl_stream, false, true, src1_ncols, row_diff, ne10, src1_ddf1_i, DnnlGemmWrapper::to_dt<float>(),
+            src0_ddf_i, DnnlGemmWrapper::to_dt<float>(), dst_dd_i, DnnlGemmWrapper::to_dt<float>());
+#endif
     }
     (void) dst;
     (void) src1_ddq_i;

ggml/src/ggml-sycl/common.hpp

Lines changed: 50 additions & 0 deletions
@@ -22,6 +22,10 @@
 #include "presets.hpp"
 #include "sycl_hw.hpp"
 #include "sycl_device.hpp"
+#if GGML_SYCL_DNNL
+#include "dnnl.hpp"
+#include "dnnl_sycl.hpp"
+#endif

 #define GGML_COMMON_DECL_SYCL
 #define GGML_COMMON_IMPL_SYCL
@@ -255,6 +259,52 @@ struct ggml_backend_sycl_context {
         return stream(device, 0);
     }

+#if GGML_SYCL_DNNL
+    dnnl::engine make_engine(sycl::queue* q) {
+        // Get the device associated with the queue
+        sycl::device dev = q->get_device();
+        // Get the context associated with the queue
+        sycl::context ctx = q->get_context();
+        const dnnl::engine eng = dnnl::sycl_interop::make_engine(dev, ctx);
+        return eng;
+    }
+
+    std::unordered_map<sycl::queue*, dnnl::stream> stream_map;
+    std::unordered_map<sycl::queue*, dnnl::engine> engine_map;
+
+    dnnl::stream stream_dnnl(int device, int _stream) {
+        auto q = stream(device, _stream);
+        return stream_dnnl(q);
+    }
+
+    dnnl::engine engine_dnnl(sycl::queue* qptr) {
+        auto it = engine_map.find(qptr);
+        if (it == engine_map.end()) {
+            auto eng = make_engine(qptr);
+            engine_map[qptr] = eng;
+            return eng;
+        } else {
+            return it->second;
+        }
+    }
+
+    dnnl::stream stream_dnnl(sycl::queue* qptr) {
+        auto it = stream_map.find(qptr);
+        if (it == stream_map.end()) {
+            auto eng = engine_dnnl(qptr);
+            auto stream = dnnl::sycl_interop::make_stream(eng, *qptr);
+            stream_map[qptr] = stream;
+            return stream;
+        } else {
+            return it->second;
+        }
+    }
+
+    dnnl::stream stream_dnnl() {
+        return stream_dnnl(device, 0);
+    }
+#endif
+
     // pool
     std::unique_ptr<ggml_sycl_pool> pools[GGML_SYCL_MAX_DEVICES];

ggml/src/ggml-sycl/gemm.hpp

Lines changed: 101 additions & 0 deletions
@@ -0,0 +1,101 @@ (new file)

//
// MIT license
// Copyright (C) 2024 Intel Corporation
// SPDX-License-Identifier: MIT
//

//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//

#ifndef GGML_SYCL_GEMM_HPP
#define GGML_SYCL_GEMM_HPP

#include <fstream>
#include <iostream>

#include "ggml-sycl.h"

#if GGML_SYCL_DNNL

#include "dnnl.hpp"
#include "dnnl_sycl.hpp"

class DnnlGemmWrapper {
public:
    using dt = dnnl::memory::data_type;
    using tag = dnnl::memory::format_tag;

    template<typename T>
    static constexpr dt to_dt() {
        if constexpr (std::is_same_v<T, float>) return dt::f32;
        else if constexpr (std::is_same_v<T, sycl::half>) return dt::f16;
        else static_assert(0);
    }

    static inline void row_gemm(sycl::queue& q, bool a_trans,
        bool b_trans, int m, int n, int k,
        const void* a, dt at, const void* b, dt bt, void* c, dt ct)
    {
        // Get the device associated with the queue
        sycl::device dev = q.get_device();
        // Get the context associated with the queue
        sycl::context ctx = q.get_context();
        const dnnl::engine eng = dnnl::sycl_interop::make_engine(dev, ctx);
        const dnnl::stream stream = dnnl::sycl_interop::make_stream(eng, q);
        dnnl::memory::dims a_dims = { m, k };
        dnnl::memory::dims b_dims = { k, n };
        dnnl::memory::dims c_dims = { m, n };
        const auto a_in_md = dnnl::memory::desc(a_dims, at, a_trans ? tag::ba : tag::ab);
        const auto b_in_md = dnnl::memory::desc(b_dims, bt, b_trans ? tag::ba : tag::ab);
        const auto c_md = dnnl::memory::desc(c_dims, ct, tag::ab);
        auto a_mem = dnnl::memory(a_in_md, eng, (void*)a);
        auto b_mem = dnnl::memory(b_in_md, eng, (void*)b);
        auto matmul_pd = dnnl::matmul::primitive_desc(eng, a_in_md, b_in_md, c_md);
        auto c_mem = dnnl::memory(matmul_pd.dst_desc(), eng, c);

        // Create the primitive.
        auto matmul_prim = dnnl::matmul(matmul_pd);
        // Primitive arguments.
        std::unordered_map<int, dnnl::memory> matmul_args;
        matmul_args.insert({ DNNL_ARG_SRC, a_mem });
        matmul_args.insert({ DNNL_ARG_WEIGHTS, b_mem });
        matmul_args.insert({ DNNL_ARG_DST, c_mem });

        matmul_prim.execute(stream, matmul_args);
    }

    static inline void row_gemm(const dnnl::stream& stream, bool a_trans,
        bool b_trans, int m, int n, int k,
        const void* a, dt at, const void* b, dt bt, void* c, dt ct)
    {
        auto const eng = stream.get_engine();
        dnnl::memory::dims a_dims = { m, k };
        dnnl::memory::dims b_dims = { k, n };
        dnnl::memory::dims c_dims = { m, n };
        const auto a_in_md = dnnl::memory::desc(a_dims, at, a_trans ? tag::ba : tag::ab);
        const auto b_in_md = dnnl::memory::desc(b_dims, bt, b_trans ? tag::ba : tag::ab);
        const auto c_md = dnnl::memory::desc(c_dims, ct, tag::ab);
        auto a_mem = dnnl::memory(a_in_md, eng, (void*)a);
        auto b_mem = dnnl::memory(b_in_md, eng, (void*)b);
        auto matmul_pd = dnnl::matmul::primitive_desc(eng, a_in_md, b_in_md, c_md);
        auto c_mem = dnnl::memory(matmul_pd.dst_desc(), eng, c);

        // Create the primitive.
        auto matmul_prim = dnnl::matmul(matmul_pd);
        // Primitive arguments.
        std::unordered_map<int, dnnl::memory> matmul_args;
        matmul_args.insert({ DNNL_ARG_SRC, a_mem });
        matmul_args.insert({ DNNL_ARG_WEIGHTS, b_mem });
        matmul_args.insert({ DNNL_ARG_DST, c_mem });

        matmul_prim.execute(stream, matmul_args);
    }
};

#endif

#endif // GGML_SYCL_GEMM_HPP
