
Commit 4226ab8

Update README.md for r25.05 release (#8200)

Parent: 198985f


README.md (9 additions, 9 deletions)
@@ -55,7 +55,7 @@ Major features include:
 - Provides [Backend API](https://github.com/triton-inference-server/backend) that
   allows adding custom backends and pre/post processing operations
 - Supports writing custom backends in python, a.k.a.
-  [Python-based backends.](https://github.com/triton-inference-server/backend/blob/main/docs/python_based_backends.md#python-based-backends)
+  [Python-based backends.](https://github.com/triton-inference-server/backend/blob/r25.04/docs/python_based_backends.md#python-based-backends)
 - Model pipelines using
   [Ensembling](docs/user_guide/architecture.md#ensemble-models) or [Business
   Logic Scripting
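
The "Python-based backends" link retargeted in the hunk above refers to backends written against Triton's Python backend interface. As a rough sketch of what such a backend looks like (the tensor names `INPUT0`/`OUTPUT0` and the doubling logic are hypothetical placeholders; the authoritative interface is documented in the python_backend repository):

```python
# model.py: minimal sketch of a Python-based Triton backend.
# The module must define a class named TritonPythonModel; Triton calls
# execute() with a batch of requests and expects one response per request.
import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # "INPUT0"/"OUTPUT0" are placeholder names from a model config
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy() * 2.0)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```
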
@@ -167,10 +167,10 @@ configuration](docs/user_guide/model_configuration.md) for the model.
   [Python](https://github.com/triton-inference-server/python_backend), and more
 - Not all the above backends are supported on every platform supported by Triton.
   Look at the
-  [Backend-Platform Support Matrix](https://github.com/triton-inference-server/backend/blob/main/docs/backend_platform_support_matrix.md)
+  [Backend-Platform Support Matrix](https://github.com/triton-inference-server/backend/blob/r25.04/docs/backend_platform_support_matrix.md)
   to learn which backends are supported on your target platform.
 - Learn how to [optimize performance](docs/user_guide/optimization.md) using the
-  [Performance Analyzer](https://github.com/triton-inference-server/perf_analyzer/blob/main/README.md)
+  [Performance Analyzer](https://github.com/triton-inference-server/perf_analyzer/blob/r25.04/README.md)
   and
   [Model Analyzer](https://github.com/triton-inference-server/model_analyzer)
 - Learn how to [manage loading and unloading models](docs/user_guide/model_management.md) in
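
For the Performance Analyzer whose link moves in the hunk above, a typical invocation is along the lines of `perf_analyzer -m my_model --concurrency-range 1:4` (the model name `my_model` is a placeholder), which sweeps client-side request concurrency and reports measured throughput and latency for the served model.
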
@@ -184,14 +184,14 @@ A Triton *client* application sends inference and other requests to Triton. The
 [Python and C++ client libraries](https://github.com/triton-inference-server/client)
 provide APIs to simplify this communication.
 
-- Review client examples for [C++](https://github.com/triton-inference-server/client/blob/main/src/c%2B%2B/examples),
-  [Python](https://github.com/triton-inference-server/client/blob/main/src/python/examples),
-  and [Java](https://github.com/triton-inference-server/client/blob/main/src/java/src/main/java/triton/client/examples)
+- Review client examples for [C++](https://github.com/triton-inference-server/client/blob/r25.04/src/c%2B%2B/examples),
+  [Python](https://github.com/triton-inference-server/client/blob/r25.04/src/python/examples),
+  and [Java](https://github.com/triton-inference-server/client/blob/r25.04/src/java/src/main/java/triton/client/examples)
 - Configure [HTTP](https://github.com/triton-inference-server/client#http-options)
   and [gRPC](https://github.com/triton-inference-server/client#grpc-options)
   client options
 - Send input data (e.g. a jpeg image) directly to Triton in the [body of an HTTP
-  request without any additional metadata](https://github.com/triton-inference-server/server/blob/main/docs/protocol/extension_binary_data.md#raw-binary-request)
+  request without any additional metadata](https://github.com/triton-inference-server/server/blob/r25.04/docs/protocol/extension_binary_data.md#raw-binary-request)
 
 ### Extend Triton
 
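To make the client-library links in the hunk above concrete, here is a sketch using the Python HTTP client from the `tritonclient` package; the model name `my_model` and tensor names `INPUT0`/`OUTPUT0` are placeholders that depend on the actual model configuration:

```python
# Sketch: run inference against a running Triton server over HTTP.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request; names, shape, and dtype must match the model config.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))
out = httpclient.InferRequestedOutput("OUTPUT0")

result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0"))
```

The raw-binary bullet refers to the binary data extension's raw request form: when the linked protocol documentation's conditions hold (roughly, the server can interpret the request without a JSON inference header), the HTTP body can be just the bytes, with `Inference-Header-Content-Length: 0` signaling that no JSON header precedes them. A sketch with the `requests` library, assuming a server on the default HTTP port:

```python
# Sketch: send a jpeg directly as the HTTP body (raw binary request).
import requests

with open("image.jpg", "rb") as f:
    body = f.read()

resp = requests.post(
    "http://localhost:8000/v2/models/my_model/infer",  # my_model is a placeholder
    headers={"Inference-Header-Content-Length": "0"},  # no JSON header in body
    data=body,
)
print(resp.status_code, resp.content[:200])
```
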
@@ -200,7 +200,7 @@ designed for modularity and flexibility
 
 - [Customize Triton Inference Server container](docs/customization_guide/compose.md) for your use case
 - [Create custom backends](https://github.com/triton-inference-server/backend)
-  in either [C/C++](https://github.com/triton-inference-server/backend/blob/main/README.md#triton-backend-api)
+  in either [C/C++](https://github.com/triton-inference-server/backend/blob/r25.04/README.md#triton-backend-api)
   or [Python](https://github.com/triton-inference-server/python_backend)
 - Create [decoupled backends and models](docs/user_guide/decoupled_models.md) that can send
   multiple responses for a request or not send any responses for a request
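
The decoupled-models bullet in the hunk above covers the case where a model returns more or fewer responses than it received requests. A sketch of the Python-backend flavor of this, assuming the model config enables the decoupled transaction policy (`OUTPUT0` is again a placeholder name):

```python
# model.py: sketch of a decoupled Python model streaming several responses.
import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        for request in requests:
            sender = request.get_response_sender()
            for i in range(3):  # stream three partial results per request
                out = pb_utils.Tensor("OUTPUT0", np.array([i], dtype=np.int32))
                sender.send(pb_utils.InferenceResponse(output_tensors=[out]))
            # Tell Triton that no further responses follow for this request.
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)
        return None  # in decoupled mode, responses go through the sender
```
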
@@ -209,7 +209,7 @@ designed for modularity and flexibility
   decryption, or conversion
 - Deploy Triton on [Jetson and JetPack](docs/user_guide/jetson.md)
 - [Use Triton on AWS
-  Inferentia](https://github.com/triton-inference-server/python_backend/tree/main/inferentia)
+  Inferentia](https://github.com/triton-inference-server/python_backend/tree/r25.04/inferentia)
 
 ### Additional Documentation
 