Commit 3eba16c (parent a8e1ff8)

docs: improve the Developer manual
1 file changed: docs/developers.md (+97 −48 lines)
# Developer manual

## Source code layout

```
OpenBLAS/
...
```
A call tree for `dgemm` looks as follows:

```
interface/gemm.c
driver/level3/level3.c
gemm assembly kernels at kernel/
```
To find the kernel currently used for a particular supported CPU, please check the corresponding `kernel/$(ARCH)/KERNEL.$(CPU)` file.

Here is an example for `kernel/x86_64/KERNEL.HASWELL`:

```
...
DTRMMKERNEL = dtrmm_kernel_4x8_haswell.c
DGEMMKERNEL = dgemm_kernel_4x8_haswell.S
...
```

According to the above `KERNEL.HASWELL`, the OpenBLAS Haswell `dgemm` kernel file is `dgemm_kernel_4x8_haswell.S`.
## Optimizing GEMM for a given hardware

!!! abstract "Read the Goto paper to understand the algorithm"

    Goto, Kazushige; van de Geijn, Robert A. (2008).
    ["Anatomy of High-Performance Matrix Multiplication"](http://delivery.acm.org/10.1145/1360000/1356053/a12-goto.pdf?ip=155.68.162.54&id=1356053&acc=ACTIVE%20SERVICE&key=A79D83B43E50B5B8%2EF070BBE7E45C3F17%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1517932837_edfe766f1e295d9a7830812371e1d173).
    ACM Transactions on Mathematical Software 34 (3): Article 12.

    (The above link is available only to ACM members, but this and many related
    papers are also available on [the pages of van de Geijn's FLAME project](http://www.cs.utexas.edu/~flame/web/FLAMEPublications.html).)

`driver/level3/level3.c` is the implementation of Goto's algorithm. Meanwhile, you can look at `kernel/generic/gemmkernel_2x2.c`, which is a naive
`2x2` register blocking `gemm` kernel in C. Then:

* Write optimized assembly kernels. Consider the instruction pipeline, available registers, and memory/cache access.
* Tune cache block sizes (`Mc`, `Kc`, and `Nc`).

Note that not all of the CPU-specific parameters in `param.h` are actively used in algorithms.
`DNUMOPT` only appears as a scale factor in profiling output of the level3 `syrk` interface code,
while its counterpart `SNUMOPT` (aliased as `NUMOPT` in `common.h`) is not used anywhere at all.

`SYMV_P` is only used in the generic kernels for the `symv` and `chemv`/`zhemv` functions -
at least some of those are usually overridden by CPU-specific implementations, so if you start
by cloning the existing implementation for a related CPU you need to check its `KERNEL` file
to see if tuning `SYMV_P` would have any effect at all.
`GEMV_UNROLL` is only used by some older x86-64 kernels, so not all sections in `param.h` define it.
Similarly, not all of the CPU parameters like L2 or L3 cache sizes are necessarily used in current
kernels for a given model - by all indications the CPU identification code was imported from some
other project originally.

## Running OpenBLAS tests

We use tests for Netlib BLAS, CBLAS, and LAPACK. In addition, we use
OpenBLAS-specific regression tests. They can be run with Make:

* `make -C test` for BLAS tests
* `make -C ctest` for CBLAS tests
* `make -C utest` for OpenBLAS regression tests
* `make lapack-test` for LAPACK tests

We also use the [BLAS-Tester](https://github.com/xianyi/BLAS-Tester) tests for regression testing.
It is basically the ATLAS test suite adapted for building with OpenBLAS.

The project makes use of several Continuous Integration (CI) services
conveniently interfaced with GitHub to automatically run tests on a number of
platforms and build configurations.

Also note that the test suites included with "numerically heavy" projects like
Julia, NumPy, SciPy, Octave or QuantumEspresso can be used for regression
testing, when those projects are built such that they use OpenBLAS.


## Benchmarking

A number of benchmarking methods are used by OpenBLAS:

- Several simple C benchmarks for performance testing individual BLAS functions
  are available in the `benchmark` folder. They can be run locally through the
  `Makefile` in that directory. The `benchmark/scripts` subdirectory
  contains similar benchmarks that use OpenBLAS via NumPy, SciPy, Octave and R.
- On pull requests, a representative set of functions is tested for performance
  regressions with Codspeed; results can be viewed at
  [https://codspeed.io/OpenMathLib/OpenBLAS](https://codspeed.io/OpenMathLib/OpenBLAS).
- The [OpenMathLib/BLAS-Benchmarks](https://github.com/OpenMathLib/BLAS-Benchmarks) repository
  contains an [Airspeed Velocity](https://github.com/airspeed-velocity/asv/)-based benchmark
  suite which is run on several CPU architectures in cron jobs. Results are published
  to a dashboard: [http://www.openmathlib.org/BLAS-Benchmarks/](http://www.openmathlib.org/BLAS-Benchmarks/).

Benchmarking code for BLAS libraries, and specific performance analysis results, can be found
in a number of places. For example:

* [MatlabJuliaMatrixOperationsBenchmark](https://github.com/RoyiAvital/MatlabJuliaMatrixOperationsBenchmark)
  (various matrix operations in Julia and Matlab)
* [mmperf/mmperf](https://github.com/mmperf/mmperf/) (single-core matrix multiplication)

## Adding autodetection support for a new revision or variant of a supported CPU

Especially relevant for x86-64, a new CPU model may be a "refresh" (die shrink and/or different number of cores) within an existing
model family without significant changes to its instruction set (e.g., Intel Skylake and Kaby Lake still are fundamentally the same architecture as Haswell,
and low-end Goldmont etc. are Nehalem). In this case, compilation with the appropriate older `TARGET` will already lead to a satisfactory build.

To achieve autodetection of the new model, its CPUID (or an equivalent identifier) needs to be added in the `cpuid_<architecture>.c`
relevant for its general architecture, with the returned name for the new type set appropriately. For x86, which has the most complex
`cpuid` file, there are two functions that need to be edited: `get_cpuname()` to return, e.g., `CPUTYPE_HASWELL` and `get_corename()` for the (broader)
core family returning, e.g., `CORE_HASWELL`.[^1]

[^1]:
    This information ends up in the `Makefile.conf` and `config.h` files generated by `getarch`. Failure to
    set either will typically lead to a missing definition of the `GEMM_UNROLL` parameters later in the build,
    as `getarch_2nd` will be unable to find a matching parameter section in `param.h`.

For architectures where `DYNAMIC_ARCH` builds are supported, a similar but simpler code section for the corresponding
runtime detection of the CPU exists in `driver/others/dynamic.c` (for x86), and `driver/others/dynamic_<arch>.c` for other architectures.
Note that for x86 the CPUID is compared after splitting it into its family, extended family, model and extended model parts, so the single decimal
number returned by Linux in `/proc/cpuinfo` for the model has to be converted back to hexadecimal before splitting into its constituent
digits. For example, `142 == 0x8E` translates to extended model 8, model 14.

## Adding dedicated support for a new CPU model

Usually it will be possible to start from an existing model, clone its `KERNEL` configuration file to the new name to use for this
`TARGET` and eventually replace individual kernels with versions better suited for peculiarities of the new CPU model.
In addition, it is necessary to add (or clone at first) the corresponding section of `GEMM_UNROLL` parameters in the top-level `param.h`,
and possibly to add definitions such as `USE_TRMM` (governing whether `TRMM` functions use the respective `GEMM` kernel or a separate source file)
to the `Makefile`s (and `CMakeLists.txt`) in the kernel directory. The new CPU name needs to be added to `TargetList.txt`,
and the CPU auto-detection code used by the `getarch` helper program - contained in
the `cpuid_<architecture>.c` file - amended to include the CPUID (or equivalent) information processing required (see preceding section).
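A cloned `param.h` section follows the pattern below. The macro names follow the existing convention in `param.h` (the `P`/`Q`/`R` values are the `Mc`/`Kc`/`Nc` cache-blocking sizes of the Goto algorithm), but the `NEWCPU` guard and all numeric values here are hypothetical placeholders to be tuned, not values from the real file:

```c
/* Hypothetical param.h section for a new TARGET "NEWCPU" - names follow
 * the existing pattern, values are placeholders to be tuned. */
#ifdef NEWCPU
#define SNUMOPT  2
#define DNUMOPT  2

/* Register-blocking sizes: must match the chosen GEMM kernel
 * (e.g. a 4x8 dgemm kernel needs UNROLL_M = 4, UNROLL_N = 8). */
#define DGEMM_DEFAULT_UNROLL_M  4
#define DGEMM_DEFAULT_UNROLL_N  8

/* Cache-blocking sizes (the Mc/Kc/Nc of the Goto algorithm). */
#define DGEMM_DEFAULT_P  256
#define DGEMM_DEFAULT_Q  256
#define DGEMM_DEFAULT_R  4096
#endif
```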

## Adding support for an entirely new architecture

This endeavour is best started by cloning the entire support structure for 32-bit ARM, and within that the ARMv5 CPU in particular,
as this is implemented through plain C kernels only. An example providing a convenient "shopping list" can be seen in pull request
[#1526](https://github.com/OpenMathLib/OpenBLAS/pull/1526).

0 commit comments

Comments
 (0)