# MAX78000 Model Training and Synthesis

_May 13, 2021_

The Maxim Integrated AI project consists of four repositories:

[…]

```
$ cd examples/manifold
$ yarn
# ignore warnings
npm run start
```

The actual code runs in JavaScript inside the browser (this may cause warnings that the web page is consuming a lot of resources).

##### Integration into PyTorch code

The easiest way to integrate Manifold is to generate three CSV files during or after training and load them into the demo application (started with the `npm run start` command shown above). For example, a batch tensor can be saved to a CSV file using

```python
import numpy as np

def save_tensor(t, f):
    """ Save tensor `t` to file handle `f` in CSV format """
    np.savetxt(f, t.reshape(t.shape[0], t.shape[1], -1).mean(axis=2).cpu().numpy(),
               delimiter=",")
```

This example assumes that the shape of the tensor is `(batch_size, features, [feature dimensions])` and averages each feature individually.

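As a quick sanity check of the shape handling, here is a hypothetical numpy-only variant (`save_tensor_np` is an invented name, and the `.cpu()` call is dropped since no GPU tensor is involved) applied to a small dummy batch:

```python
import io
import numpy as np

def save_tensor_np(t, f):
    """Numpy-only illustration of save_tensor (no .cpu() needed)."""
    np.savetxt(f, t.reshape(t.shape[0], t.shape[1], -1).mean(axis=2),
               delimiter=",")

# Dummy batch: 2 samples, 3 features, 4 time steps per feature
t = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
buf = io.StringIO()
save_tensor_np(t, buf)
# buf now holds one CSV row per sample with one averaged value per feature
```

Each of the two CSV rows contains three values, the per-feature means over the four time steps.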
To create the CSV files, open the files and write the field name(s) to the first line:

```python
    print('Saving x/ypred/ytrue to CSV...')
    f_ytrue = open('ytrue.csv', 'w')
    f_ytrue.write('hr\n')
    f_ypred = open('ypred.csv', 'w')
    f_ypred.write('hr\n')
    f_x = open('x.csv', 'w')
    f_x.write(','.join(data_fields) + '\n')
```

Then, where appropriate during test, save features/predictions/truth values to CSV:

```python
    save_tensor(local_batch_val, f_x)
    save_tensor(outputs, f_ypred)
    save_tensor(local_label_val, f_ytrue)
```

Finally, close the files:

```python
    f_ytrue.close()
    f_ypred.close()
    f_x.close()
```

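Alternatively, the three explicit `open()`/`close()` pairs can be collapsed into a `contextlib.ExitStack`, which closes all files even if an exception interrupts the test loop. This is a sketch, not part of the original code; `data_fields` is a placeholder here:

```python
from contextlib import ExitStack

data_fields = ['feat0', 'feat1']  # placeholder field names for illustration

with ExitStack() as stack:
    f_x = stack.enter_context(open('x.csv', 'w'))
    f_ypred = stack.enter_context(open('ypred.csv', 'w'))
    f_ytrue = stack.enter_context(open('ytrue.csv', 'w'))
    f_x.write(','.join(data_fields) + '\n')
    f_ypred.write('hr\n')
    f_ytrue.write('hr\n')
    # ... test loop calling save_tensor(...) would go here ...
# all three files are guaranteed closed at this point
```
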
Note that performance will suffer when there are more than about 20,000 records in the CSV file. Subsampling the data is one way to avoid this problem.

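One simple way to subsample is to keep a bounded, random subset of rows before writing them to CSV. The helper below is hypothetical (not part of the original toolchain):

```python
import numpy as np

def subsample_rows(rows, max_records=20000, seed=0):
    """Randomly keep at most `max_records` rows so Manifold stays responsive.

    The original relative order of the surviving rows is preserved.
    """
    if len(rows) <= max_records:
        return rows
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(rows), size=max_records, replace=False)
    return [rows[i] for i in sorted(keep)]
```

The same subset must be used for all three files (`x.csv`, `ypred.csv`, `ytrue.csv`) so that the rows stay aligned.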
#### Windows Systems

Windows/MS-DOS is not supported for training networks at this time. *This includes the Windows Subsystem for Linux (WSL) since it currently lacks CUDA support.*

[…]

For minor updates, pull the latest code and install the updated wheels:

```
(ai8x-training) $ pip3 install -U -r requirements.txt # or requirements-cu11.txt with CUDA 11.x
```

##### Updates on Windows

On Windows, please *also* use the Maintenance Tool as documented in the [Maxim Micro SDK (MaximSDK) Installation and Maintenance User Guide](https://pdfserv.maximintegrated.com/en/an/ug7219.pdf). The Maintenance Tool updates the SDK.

##### Python Version Updates

Updating Python may require updating `pyenv` first. Should `pyenv install 3.8.9` fail,

[…]

The following table describes the most important command line arguments for `ai8…`:

| Argument | Description | Example |
| --- | --- | --- |
| `--softmax` | Add software Softmax functions to generated code | |
| `--boost` | Turn on a port pin to boost the CNN supply | `--boost 2.5` |
| `--timer` | Insert code to time the inference using a timer | `--timer 0` |
| `--no-wfi` | Do not use WFI instructions when waiting for CNN completion | |
| *File names* | | |
| `--c-filename` | Main C file name base (default: main.c) | `--c-filename main.c` |
| `--api-filename` | API C file name (default: cnn.c) | `--api-filename cnn.c` |

[…]

#### Debugging Techniques

There can be many reasons why the known-answer test (KAT) for a given network fails with an error message, or never completes at all. The following techniques may help in narrowing down where in the network, or in the YAML description of the network, the error occurs:

* For very short and small networks, disable the use of WFI instructions while waiting for completion of the CNN computations by using the command line option `--no-wfi`. *Explanation: In these cases, the network terminates more quickly than the time it takes between testing for completion and executing the WFI instruction, so the WFI instruction is never interrupted and the code may appear to hang.*

* For very large and deep networks, enable the boost power supply using the `--boost` command line option. On the EVkit, the boost supply is connected to port pin P2.5, so the command line option is `--boost 2.5`.

* The default compiler optimization level is `-O2`, and incorrect code may be generated under rare circumstances. Lower the optimization level in the generated `Makefile` to `-O1`, clean (`make distclean && make clean`), and rebuild the project (`make`). If this solves the problem, one possible reason is that the code is missing the `volatile` keyword for certain variables.
  To permanently adjust the default compiler optimization level, modify `MXC_OPTIMIZE_CFLAGS` in `assets/embedded-ai85/templateMakefile` for Arm code and in `assets/embedded-riscv-ai85/templateMakefile.RISCV` for RISC-V code.

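To illustrate the `volatile` point, here is a hypothetical sketch (the handler and variable names are invented, not taken from the generated code): a completion flag written by an interrupt handler and polled by the main loop. Without the `volatile` qualifier, `-O2` may cache the flag in a register, and the wait loop can spin forever:

```c
#include <stdint.h>

/* Hypothetical completion flag set by the CNN interrupt handler.
   `volatile` forces the compiler to re-read the variable from memory
   on every pass through the wait loop below; without it, -O2 may keep
   the value in a register and never observe the handler's write. */
static volatile uint32_t cnn_done = 0;

void CNN_IRQHandler(void)      /* hypothetical interrupt handler name */
{
    cnn_done = 1;              /* written from interrupt context */
}

void wait_for_cnn(void)
{
    while (cnn_done == 0) { }  /* fresh load each iteration, thanks to volatile */
}
```
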