
Commit 2b679ff

poweiw authored and james-p-xu committed
TensorRT 10.10 OSS Release (#4437)
Signed-off-by: Po-Wei Wang (Vincent) <poweiw@nvidia.com>
1 parent 0ae0d08 commit 2b679ff

File tree

182 files changed: +1823 / -2856 lines


.github/workflows/blossom-ci.yml

Lines changed: 31 additions & 42 deletions

@@ -1,56 +1,59 @@
-# Copyright (c) 2020-2021, NVIDIA CORPORATION.
+#
+# SPDX-FileCopyrightText: Copyright (c) 1993-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
-#     http://www.apache.org/licenses/LICENSE-2.0
+# http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+#
 
 # A workflow to trigger ci on hybrid infra (github + self hosted runner)
 name: Blossom-CI
 on:
   issue_comment:
     types: [created]
   workflow_dispatch:
-      inputs:
-          platform:
-            description: 'runs-on argument'
-            required: false
-          args:
-            description: 'argument'
-            required: false
+    inputs:
+      platform:
+        description: "runs-on argument"
+        required: false
+      args:
+        description: "argument"
+        required: false
 jobs:
   Authorization:
     name: Authorization
-    runs-on: blossom
+    runs-on: blossom
     outputs:
       args: ${{ env.args }}
-
+
     # This job only runs for pull request comments
     if: |
-        github.event.comment.body == '/blossom-ci' &&
-        (
-          github.actor == 'rajeevsrao' ||
-          github.actor == 'kevinch-nv' ||
-          github.actor == 'ttyio' ||
-          github.actor == 'samurdhikaru' ||
-          github.actor == 'zerollzeng' ||
-          github.actor == 'nvpohanh'
-        )
+      github.event.comment.body == '/blossom-ci' &&
+      (
+        github.actor == 'rajeevsrao' ||
+        github.actor == 'kevinch-nv' ||
+        github.actor == 'ttyio' ||
+        github.actor == 'samurdhikaru' ||
+        github.actor == 'zerollzeng' ||
+        github.actor == 'nvpohanh'
+      )
     steps:
       - name: Check if comment is issued by authorized person
         run: blossom-ci
         env:
-          OPERATION: 'AUTH'
+          OPERATION: "AUTH"
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          REPO_KEY_DATA: ${{ secrets.BLOSSOM_KEY }}
-
+
   Vulnerability-scan:
     name: Vulnerability scan
     needs: [Authorization]
@@ -61,21 +64,7 @@ jobs:
         with:
           repository: ${{ fromJson(needs.Authorization.outputs.args).repo }}
           ref: ${{ fromJson(needs.Authorization.outputs.args).ref }}
-          lfs: 'true'
-
-      # repo specific steps
-      #- name: Setup java
-      #  uses: actions/setup-java@v1
-      #  with:
-      #    java-version: 1.8
-
-      # add blackduck properties https://synopsys.atlassian.net/wiki/spaces/INTDOCS/pages/631308372/Methods+for+Configuring+Analysis#Using-a-configuration-file
-      #- name: Setup blackduck properties
-      #  run: |
-      #    PROJECTS=$(mvn -am dependency:tree | grep maven-dependency-plugin | awk '{ out="com.nvidia:"$(NF-1);print out }' | grep rapids | xargs | sed -e 's/ /,/g')
-      #    echo detect.maven.build.command="-pl=$PROJECTS -am" >> application.properties
-      #    echo detect.maven.included.scopes=compile >> application.properties
-
+          lfs: "true"
       - name: Run blossom action
         uses: NVIDIA/blossom-action@main
         env:
@@ -85,7 +74,7 @@ jobs:
           args1: ${{ fromJson(needs.Authorization.outputs.args).args1 }}
           args2: ${{ fromJson(needs.Authorization.outputs.args).args2 }}
           args3: ${{ fromJson(needs.Authorization.outputs.args).args3 }}
-
+
   Job-trigger:
     name: Start ci job
     needs: [Vulnerability-scan]
@@ -94,18 +83,18 @@ jobs:
       - name: Start ci job
         run: blossom-ci
         env:
-          OPERATION: 'START-CI-JOB'
+          OPERATION: "START-CI-JOB"
          CI_SERVER: ${{ secrets.CI_SERVER }}
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-
+
   Upload-Log:
     name: Upload log
     runs-on: blossom
-    if : github.event_name == 'workflow_dispatch'
+    if: github.event_name == 'workflow_dispatch'
     steps:
       - name: Jenkins log for pull request ${{ fromJson(github.event.inputs.args).pr }} (click here)
         run: blossom-ci
         env:
-          OPERATION: 'POST-PROCESSING'
+          OPERATION: "POST-PROCESSING"
          CI_SERVER: ${{ secrets.CI_SERVER }}
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
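The `Authorization` job in this workflow gates the pipeline on an exact trigger comment and a hard-coded allowlist of GitHub actors. The same logic can be restated in plain Python (illustrative only; the real check is the `if:` expression in the YAML above):

```python
# Plain-Python restatement of the workflow's authorization gate.
# The allowlist names are taken from the diff; the function itself is
# only a sketch of the YAML `if:` expression, not part of the commit.
AUTHORIZED_ACTORS = {
    "rajeevsrao", "kevinch-nv", "ttyio",
    "samurdhikaru", "zerollzeng", "nvpohanh",
}

def is_authorized(comment_body: str, actor: str) -> bool:
    """True only for the exact trigger comment from an allowlisted actor."""
    return comment_body == "/blossom-ci" and actor in AUTHORIZED_ACTORS

print(is_authorized("/blossom-ci", "ttyio"))         # True
print(is_authorized("/blossom-ci please", "ttyio"))  # False: body must match exactly
```

Note the equality check rather than a substring match: a comment that merely contains `/blossom-ci` does not trigger CI.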

CHANGELOG.md

Lines changed: 20 additions & 0 deletions

@@ -1,5 +1,25 @@
 # TensorRT OSS Release Changelog
 
+## 10.10.0 GA - 2025-4-28
+
+Key Features and Updates:
+
+- Plugin changes
+  - Deprecated the enum classes [PluginVersion](https://docs.nvidia.com/deeplearning/tensorrt/latest/_static/c-api/namespacenvinfer1.html#a6fb3932a2896d82a94c8783e640afb34) & [PluginCreatorVersion](https://docs.nvidia.com/deeplearning/tensorrt/latest/_static/c-api/namespacenvinfer1.html#a43c4159a19c23f74234f3c34124ea0c5). `PluginVersion` & `PluginCreatorVersion` are used only in relation to `IPluginV2`-descendent plugin interfaces, which are all deprecated.
+  - Added the following APIs that enable users to obtain a list of all Plugin Creators hierarchically registered to a TensorRT `IPluginRegistry` (`C++`, `Python`) instance.
+    - C++ API: `IPluginRegistry::getAllCreatorsRecursive()`
+    - Python API: `IPluginRegistry.all_creators_recursive`
+- Demo changes
+  - demoDiffusion
+    - Added FP16 and FP8 LoRA support for the SDXL and FLUX pipelines.
+    - Added FP16 ControlNet support for the SDXL pipeline.
+- Sample changes
+  - Added support for the [python_plugin](https://github.com/NVIDIA/TensorRT/tree/release/10.9/samples/python/python_plugin) sample to compile targets to Blackwell.
+- Parser changes
+  - Cleaned up log spam when the ONNX network contained a mixture of Plugins and LocalFunctions.
+  - UINT8 constants are now properly imported for `QuantizeLinear` & `DequantizeLinear` nodes.
+  - Plugin fallback importer now also reads its namespace from a Node's domain field.
+
 ## 10.9.0 GA - 2025-3-10
 
 Key Features and Updates:
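The changelog names a new recursive creator-lookup API (`IPluginRegistry::getAllCreatorsRecursive()` in C++, `IPluginRegistry.all_creators_recursive` in Python). As a rough illustration of what a "hierarchical" lookup means, here is a plain-Python mock: the registry class, its fields, and the creator names are invented for this sketch and are not the TensorRT implementation:

```python
# Mock of a plugin registry whose lookup recurses into parent registries.
# Everything here (class, fields, creator names) is invented to illustrate
# the idea; only the real API names are taken from the changelog above.
class MockRegistry:
    def __init__(self, creators, parent=None):
        self.creators = list(creators)  # creators registered directly here
        self.parent = parent            # enclosing (parent) registry, if any

    @property
    def all_creators_recursive(self):
        """Creators from this registry, then from every ancestor registry."""
        found = list(self.creators)
        if self.parent is not None:
            found.extend(self.parent.all_creators_recursive)
        return found

root = MockRegistry(["GridAnchor", "NMS"])            # hypothetical built-ins
child = MockRegistry(["MyCustomPlugin"], parent=root) # hypothetical sub-registry

print(child.all_creators_recursive)  # ['MyCustomPlugin', 'GridAnchor', 'NMS']
```

The point of the recursive variant is that a query against a child registry also surfaces creators registered at outer levels, which a flat listing would miss.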
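For the parser fix to `QuantizeLinear`/`DequantizeLinear` imports, recall that ONNX `DequantizeLinear` computes `y = (x - x_zero_point) * x_scale`, with UINT8 inputs ranging over 0..255. A minimal sketch of that formula (the scale and zero-point values below are made up for illustration, not taken from the commit):

```python
# ONNX DequantizeLinear semantics: y = (x - x_zero_point) * x_scale.
# The constant, scale, and zero point here are illustrative values only.
def dequantize_linear(x, scale, zero_point):
    return [(v - zero_point) * scale for v in x]

uint8_constant = [0, 128, 255]  # raw UINT8 weight values (0..255)
print(dequantize_linear(uint8_constant, scale=0.5, zero_point=128))
# [-64.0, 0.0, 63.5]
```

Importing UINT8 constants correctly matters because misreading them as signed INT8 would shift every dequantized value.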

CMakeLists.txt

Lines changed: 1 addition & 1 deletion

@@ -177,7 +177,7 @@ if (DEFINED GPU_ARCHS)
   message(STATUS "GPU_ARCHS defined as ${GPU_ARCHS}. Generating CUDA code for SM ${GPU_ARCHS}")
   separate_arguments(GPU_ARCHS)
   foreach(SM IN LISTS GPU_ARCHS)
-    list(APPEND CMAKE_CUDA_ARCHITECTURES "${SM}")
+    list(APPEND CMAKE_CUDA_ARCHITECTURES ${SM})
   endforeach()
 else()
   list(APPEND CMAKE_CUDA_ARCHITECTURES 72 75 80 86 87 89 90)

LICENSE

Lines changed: 51 additions & 1 deletion

@@ -305,7 +305,7 @@
 > plugin/multiscaleDeformableAttnPlugin/multiscaleDeformableAttn.cu
 > plugin/multiscaleDeformableAttnPlugin/multiscaleDeformableAttn.h
 > plugin/multiscaleDeformableAttnPlugin/multiscaleDeformableIm2ColCuda.cuh
-
+
 Copyright 2020 SenseTime
 
 Licensed under the Apache License, Version 2.0 (the "License");
@@ -399,3 +399,53 @@
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
+
+> plugin/scatterElementsPlugin/atomics.cuh
+> plugin/scatterElementsPlugin/reducer.cuh
+> plugin/scatterElementsPlugin/scatterElementsPluginKernel.cu
+> plugin/scatterElementsPlugin/scatterElementsPluginKernel.h
+> plugin/scatterElementsPlugin/TensorInfo.cuh
+
+Copyright (c) 2020 Matthias Fey <matthias.fey@tu-dortmund.de>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
+> plugin/roiAlignPlugin/roiAlignKernel.cu
+
+MIT License
+
+Copyright (c) Microsoft Corporation
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

README.md

Lines changed: 32 additions & 30 deletions

@@ -32,15 +32,17 @@ To build the TensorRT-OSS components, you will first need the following software
 
 **TensorRT GA build**
 
-- TensorRT v10.9.0.34
+- TensorRT v10.10.0.31
   - Available from direct download links listed below
 
 **System Packages**
 
 - [CUDA](https://developer.nvidia.com/cuda-toolkit)
   - Recommended versions:
-  - cuda-12.8.0 + cuDNN-8.9
-  - cuda-11.8.0 + cuDNN-8.9
+  - cuda-12.9.0
+  - cuda-11.8.0
+- [CUDNN (optional)](https://developer.nvidia.com/cudnn)
+  - cuDNN 8.9
 - [GNU make](https://ftp.gnu.org/gnu/make/) >= v4.1
 - [cmake](https://github.com/Kitware/CMake/releases) >= v3.13
 - [python](https://www.python.org/downloads/) >= v3.8, <= v3.10.x
@@ -84,24 +86,24 @@ To build the TensorRT-OSS components, you will first need the following software
 
 Else download and extract the TensorRT GA build from [NVIDIA Developer Zone](https://developer.nvidia.com) with the direct links below:
 
-- [TensorRT 10.9.0.34 for CUDA 11.8, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.9.0/tars/TensorRT-10.9.0.34.Linux.x86_64-gnu.cuda-11.8.tar.gz)
-- [TensorRT 10.9.0.34 for CUDA 12.8, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.9.0/tars/TensorRT-10.9.0.34.Linux.x86_64-gnu.cuda-12.8.tar.gz)
-- [TensorRT 10.9.0.34 for CUDA 11.8, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.9.0/zip/TensorRT-10.9.0.34.Windows.win10.cuda-11.8.zip)
-- [TensorRT 10.9.0.34 for CUDA 12.8, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.9.0/zip/TensorRT-10.9.0.34.Windows.win10.cuda-12.8.zip)
+- [TensorRT 10.10.0.31 for CUDA 11.8, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.10.0/tars/TensorRT-10.10.0.31.Linux.x86_64-gnu.cuda-11.8.tar.gz)
+- [TensorRT 10.10.0.31 for CUDA 12.9, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.10.0/tars/TensorRT-10.10.0.31.Linux.x86_64-gnu.cuda-12.9.tar.gz)
+- [TensorRT 10.10.0.31 for CUDA 11.8, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.10.0/zip/TensorRT-10.10.0.31.Windows.win10.cuda-11.8.zip)
+- [TensorRT 10.10.0.31 for CUDA 12.9, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.10.0/zip/TensorRT-10.10.0.31.Windows.win10.cuda-12.9.zip)
 
-**Example: Ubuntu 20.04 on x86-64 with cuda-12.8**
+**Example: Ubuntu 20.04 on x86-64 with cuda-12.9**
 
 ```bash
 cd ~/Downloads
-tar -xvzf TensorRT-10.9.0.34.Linux.x86_64-gnu.cuda-12.8.tar.gz
-export TRT_LIBPATH=`pwd`/TensorRT-10.9.0.34
+tar -xvzf TensorRT-10.10.0.31.Linux.x86_64-gnu.cuda-12.9.tar.gz
+export TRT_LIBPATH=`pwd`/TensorRT-10.10.0.31
 ```
 
-**Example: Windows on x86-64 with cuda-12.8**
+**Example: Windows on x86-64 with cuda-12.9**
 
 ```powershell
-Expand-Archive -Path TensorRT-10.9.0.34.Windows.win10.cuda-12.8.zip
-$env:TRT_LIBPATH="$pwd\TensorRT-10.9.0.34\lib"
+Expand-Archive -Path TensorRT-10.10.0.31.Windows.win10.cuda-12.9.zip
+$env:TRT_LIBPATH="$pwd\TensorRT-10.10.0.31\lib"
 ```
 
 ## Setting Up The Build Environment
@@ -110,34 +112,34 @@ For Linux platforms, we recommend that you generate a docker container for build
 
 1. #### Generate the TensorRT-OSS build container.
 
-   **Example: Ubuntu 20.04 on x86-64 with cuda-12.8 (default)**
+   **Example: Ubuntu 20.04 on x86-64 with cuda-12.9 (default)**
 
    ```bash
-   ./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.8
+   ./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.9
   ```
 
-   **Example: Rockylinux8 on x86-64 with cuda-12.8**
+   **Example: Rockylinux8 on x86-64 with cuda-12.9**
 
   ```bash
-   ./docker/build.sh --file docker/rockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda12.8
+   ./docker/build.sh --file docker/rockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda12.9
  ```
 
-   **Example: Ubuntu 22.04 cross-compile for Jetson (aarch64) with cuda-12.8 (JetPack SDK)**
+   **Example: Ubuntu 22.04 cross-compile for Jetson (aarch64) with cuda-12.9 (JetPack SDK)**
 
  ```bash
-   ./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda12.8
+   ./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda12.9
  ```
 
-   **Example: Ubuntu 22.04 on aarch64 with cuda-12.8**
+   **Example: Ubuntu 22.04 on aarch64 with cuda-12.9**
 
  ```bash
-   ./docker/build.sh --file docker/ubuntu-22.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu22.04-cuda12.8
+   ./docker/build.sh --file docker/ubuntu-22.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu22.04-cuda12.9
  ```
 
 2. #### Launch the TensorRT-OSS build container.
    **Example: Ubuntu 20.04 build container**
  ```bash
-   ./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.8 --gpus all
+   ./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.9 --gpus all
  ```
   > NOTE:
   > <br> 1. Use the `--tag` corresponding to build container generated in Step 1.
@@ -149,7 +151,7 @@ For Linux platforms, we recommend that you generate a docker container for build
 
 - Generate Makefiles and build
 
-  **Example: Linux (x86-64) build with default cuda-12.8**
+  **Example: Linux (x86-64) build with default cuda-12.9**
 
  ```bash
   cd $TRT_OSSPATH
@@ -158,7 +160,7 @@ For Linux platforms, we recommend that you generate a docker container for build
   make -j$(nproc)
  ```
 
-  **Example: Linux (aarch64) build with default cuda-12.8**
+  **Example: Linux (aarch64) build with default cuda-12.9**
 
  ```bash
   cd $TRT_OSSPATH
@@ -167,27 +169,27 @@ For Linux platforms, we recommend that you generate a docker container for build
   make -j$(nproc)
  ```
 
-  **Example: Native build on Jetson (aarch64) with cuda-12.8**
+  **Example: Native build on Jetson (aarch64) with cuda-12.9**
 
  ```bash
   cd $TRT_OSSPATH
   mkdir -p build && cd build
-  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.8
+  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.9
   CC=/usr/bin/gcc make -j$(nproc)
  ```
 
  > NOTE: C compiler must be explicitly specified via CC= for native aarch64 builds of protobuf.
 
-  **Example: Ubuntu 22.04 Cross-Compile for Jetson (aarch64) with cuda-12.8 (JetPack)**
+  **Example: Ubuntu 22.04 Cross-Compile for Jetson (aarch64) with cuda-12.9 (JetPack)**
 
  ```bash
   cd $TRT_OSSPATH
   mkdir -p build && cd build
-  cmake .. -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=12.8 -DCUDNN_LIB=/pdk_files/cudnn/usr/lib/aarch64-linux-gnu/libcudnn.so -DCUBLAS_LIB=/usr/local/cuda-12.8/targets/aarch64-linux/lib/stubs/libcublas.so -DCUBLASLT_LIB=/usr/local/cuda-12.8/targets/aarch64-linux/lib/stubs/libcublasLt.so -DTRT_LIB_DIR=/pdk_files/tensorrt/lib
+  cmake .. -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=12.9 -DCUDNN_LIB=/pdk_files/cudnn/usr/lib/aarch64-linux-gnu/libcudnn.so -DCUBLAS_LIB=/usr/local/cuda-12.9/targets/aarch64-linux/lib/stubs/libcublas.so -DCUBLASLT_LIB=/usr/local/cuda-12.9/targets/aarch64-linux/lib/stubs/libcublasLt.so -DTRT_LIB_DIR=/pdk_files/tensorrt/lib
   make -j$(nproc)
  ```
 
-  **Example: Native builds on Windows (x86) with cuda-12.8**
+  **Example: Native builds on Windows (x86) with cuda-12.9**
 
  ```bash
   cd $TRT_OSSPATH
@@ -197,7 +199,7 @@ For Linux platforms, we recommend that you generate a docker container for build
   msbuild TensorRT.sln /property:Configuration=Release -m:$env:NUMBER_OF_PROCESSORS
  ```
 
-> NOTE: The default CUDA version used by CMake is 12.8.0. To override this, for example to 11.8, append `-DCUDA_VERSION=11.8` to the cmake command.
+> NOTE: The default CUDA version used by CMake is 12.9.0. To override this, for example to 11.8, append `-DCUDA_VERSION=11.8` to the cmake command.
 
 - Required CMake build arguments are:
   - `TRT_LIB_DIR`: Path to the TensorRT installation directory containing libraries.

VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-10.9.0.34
+10.10.0.31

0 commit comments