
Commit 6d774ea

Upgrade opset 16 version related doc (#1953)
* upgrade opset 16 status doc

Signed-off-by: Deyu Huang <deyhuang@microsoft.com>
1 parent 16eb4b4 commit 6d774ea

File tree

3 files changed: +270 −268 lines changed


README.md

Lines changed: 6 additions & 6 deletions
@@ -17,8 +17,8 @@ The common issues we run into we try to document here [Troubleshooting Guide](Tr
 
 | Build Type | OS | Python | TensorFlow | ONNX opset | Status |
 | --- | --- | --- | --- | --- | --- |
-| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.7-3.9 | 1.13-1.15, 2.1-2.8 | 9-15 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=main) |
-| Unit Test - Full | Linux, MacOS, Windows | 3.7-3.9 | 1.13-1.15, 2.1-2.8 | 9-15 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=main) | |
+| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.7-3.9 | 1.13-1.15, 2.1-2.8 | 9-16 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=main) |
+| Unit Test - Full | Linux, MacOS, Windows | 3.7-3.9 | 1.13-1.15, 2.1-2.8 | 9-16 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=main) | |
 <br/>
 
 ## Supported Versions
@@ -27,7 +27,7 @@ The common issues we run into we try to document here [Troubleshooting Guide](Tr
 
 tf2onnx will use the ONNX version installed on your system and installs the latest ONNX version if none is found.
 
-We support and test ONNX opset-9 to opset-15. opset-6 to opset-8 should work but we don't test them.
+We support and test ONNX opset-9 to opset-16. opset-6 to opset-8 should work but we don't test them.
 By default we use ```opset-13``` for the resulting ONNX graph.
 
 If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 13```.
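The opset chosen with ```--opset``` can also be passed through tf2onnx's Python API rather than the CLI. A minimal sketch, assuming a Keras model and a tf2onnx release that provides `tf2onnx.convert.from_keras`; the toy model and file names below are placeholders:

```python
import tensorflow as tf
import tf2onnx

# Placeholder toy model used only for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
spec = (tf.TensorSpec((None, 8), tf.float32, name="input"),)

# opset mirrors the --opset CLI flag; omit it to fall back to the default (13).
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=16, output_path="model.onnx"
)
```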
@@ -100,7 +100,7 @@ To get started with `tensorflow-onnx`, run the `t2onnx.convert` command, providi
 
 The above command uses a default of `13` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset then you can provide the `--opset` argument to the command. If you are unsure about which opset to use, refer to the [ONNX operator documentation](https://github.com/onnx/onnx/releases).
 
-```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 15 --output model.onnx```
+```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 16 --output model.onnx```
 
 If your TensorFlow model is in a format other than `saved model`, then you need to provide the inputs and outputs of the model graph.
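After conversion, the exported file can be sanity-checked independently of tf2onnx. A small sketch, assuming the `onnx` and `onnxruntime` packages are installed and that `model.onnx` is the file produced above; the input shape and dtype used for the test run are assumptions that depend on your model:

```python
import numpy as np
import onnx
import onnxruntime as ort

# Structural check, plus a look at the opset the exported graph declares.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)
print([(imp.domain, imp.version) for imp in model.opset_import])

# One inference pass; the input name, shape, and dtype depend on your model,
# so the float32 zero tensor below is only a placeholder.
sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = sess.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print([o.shape for o in outputs])
```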

@@ -118,7 +118,7 @@ You find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSD
 
 We recently added support for tflite. You convert ```tflite``` models via command line, for example:
 
-```python -m tf2onnx.convert --opset 15 --tflite tflite--file --output model.onnx```
+```python -m tf2onnx.convert --opset 16 --tflite tflite--file --output model.onnx```
 
 ## CLI reference
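Recent tf2onnx releases also expose a Python entry point for tflite conversion. A hedged sketch, assuming the installed version provides `tf2onnx.convert.from_tflite`; the `model.tflite` and `model.onnx` names are placeholders:

```python
import tf2onnx

# Convert a .tflite flatbuffer directly; opset mirrors the --opset CLI flag.
model_proto, _ = tf2onnx.convert.from_tflite(
    "model.tflite",          # placeholder path to the flatbuffer
    opset=16,
    output_path="model.onnx",
)
print(model_proto.graph.name)
```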

@@ -187,7 +187,7 @@ ONNX requires default values for graph inputs to be constant, while Tensorflow's
 
 #### --opset
 
-By default we use the opset 13 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 15``` would create a onnx graph that uses only ops available in opset 15. Because older opsets have in most cases fewer ops, some models might not convert on a older opset.
+By default we use the opset 13 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 16``` would create a onnx graph that uses only ops available in opset 16. Because older opsets have in most cases fewer ops, some models might not convert on a older opset.
 
 #### --dequantize
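To see whether a given operator exists at a particular opset, the ONNX op schemas can be queried directly. A small sketch, assuming the `onnx` package is installed; GridSample, introduced in opset 16, is used purely as an example:

```python
import onnx.defs

# since_version reports the opset of the latest schema revision for each op;
# GridSample first appears at opset 16, so older opsets cannot express it.
for op in ("Relu", "GridSample"):
    schema = onnx.defs.get_schema(op)
    print(f"{op}: latest schema defined at opset {schema.since_version}")

# Highest opset the installed onnx package itself knows about.
print("installed onnx supports opset", onnx.defs.onnx_opset_version())
```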
