Commit 4a70a68

Update README for Release 24.10
1 parent 122168d commit 4a70a68


README.md

Lines changed: 9 additions & 9 deletions
@@ -57,7 +57,7 @@ Major features include:
 - Provides [Backend API](https://github.com/triton-inference-server/backend) that
   allows adding custom backends and pre/post processing operations
 - Supports writing custom backends in python, a.k.a.
-  [Python-based backends.](https://github.com/triton-inference-server/backend/blob/main/docs/python_based_backends.md#python-based-backends)
+  [Python-based backends.](https://github.com/triton-inference-server/backend/blob/r24.10/docs/python_based_backends.md#python-based-backends)
 - Model pipelines using
   [Ensembling](docs/user_guide/architecture.md#ensemble-models) or [Business
   Logic Scripting
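
As context for the Python-based backends link retargeted above, here is a minimal sketch of what a Python backend's `model.py` looks like; the tensor names and the doubling "computation" are illustrative placeholders, and `pb_utils` is the in-process module Triton provides to Python backends.

```python
# Minimal Python-based backend sketch (model.py). INPUT0/OUTPUT0 and the
# computation are placeholders; config.pbtxt must declare matching tensors.
import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        # Triton calls execute() with a batch of requests; return one
        # InferenceResponse per request, in order.
        responses = []
        for request in requests:
            input0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            result = input0.as_numpy() * 2.0  # stand-in for real model logic
            out = pb_utils.Tensor("OUTPUT0", result.astype(np.float32))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```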
@@ -170,10 +170,10 @@ configuration](docs/user_guide/model_configuration.md) for the model.
   [Python](https://github.com/triton-inference-server/python_backend), and more
 - Not all the above backends are supported on every platform supported by Triton.
   Look at the
-  [Backend-Platform Support Matrix](https://github.com/triton-inference-server/backend/blob/main/docs/backend_platform_support_matrix.md)
+  [Backend-Platform Support Matrix](https://github.com/triton-inference-server/backend/blob/r24.10/docs/backend_platform_support_matrix.md)
   to learn which backends are supported on your target platform.
 - Learn how to [optimize performance](docs/user_guide/optimization.md) using the
-  [Performance Analyzer](https://github.com/triton-inference-server/perf_analyzer/blob/main/README.md)
+  [Performance Analyzer](https://github.com/triton-inference-server/perf_analyzer/blob/r24.10/README.md)
   and
   [Model Analyzer](https://github.com/triton-inference-server/model_analyzer)
 - Learn how to [manage loading and unloading models](docs/user_guide/model_management.md) in
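
For the model-management link in the last context line above: a small sketch of loading and unloading a model from the Python HTTP client, assuming Triton was started with `--model-control-mode=explicit`; the model name is illustrative.

```python
# Sketch: explicit model control over HTTP. Requires a server started with
# --model-control-mode=explicit; "densenet_onnx" is an illustrative name.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
client.load_model("densenet_onnx")
assert client.is_model_ready("densenet_onnx")
client.unload_model("densenet_onnx")
```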
@@ -187,14 +187,14 @@ A Triton *client* application sends inference and other requests to Triton. The
 [Python and C++ client libraries](https://github.com/triton-inference-server/client)
 provide APIs to simplify this communication.
 
-- Review client examples for [C++](https://github.com/triton-inference-server/client/blob/main/src/c%2B%2B/examples),
-  [Python](https://github.com/triton-inference-server/client/blob/main/src/python/examples),
-  and [Java](https://github.com/triton-inference-server/client/blob/main/src/java/src/main/java/triton/client/examples)
+- Review client examples for [C++](https://github.com/triton-inference-server/client/blob/r24.10/src/c%2B%2B/examples),
+  [Python](https://github.com/triton-inference-server/client/blob/r24.10/src/python/examples),
+  and [Java](https://github.com/triton-inference-server/client/blob/r24.10/src/java/src/main/java/triton/client/examples)
 - Configure [HTTP](https://github.com/triton-inference-server/client#http-options)
   and [gRPC](https://github.com/triton-inference-server/client#grpc-options)
   client options
 - Send input data (e.g. a jpeg image) directly to Triton in the [body of an HTTP
-  request without any additional metadata](https://github.com/triton-inference-server/server/blob/main/docs/protocol/extension_binary_data.md#raw-binary-request)
+  request without any additional metadata](https://github.com/triton-inference-server/server/blob/r24.10/docs/protocol/extension_binary_data.md#raw-binary-request)
 
 ### Extend Triton
 
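
To make the client links in this hunk concrete: a minimal Python client sketch; the model name, tensor names, and shape are assumptions, and `binary_data=True` is what places tensor contents in the raw binary portion of the HTTP request per the binary-data extension linked above.

```python
# Sketch of a Python HTTP client request. The model "simple" with a [1, 16]
# FP32 INPUT0/OUTPUT0 pair is an assumption, not a model this diff defines.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
data = np.random.rand(1, 16).astype(np.float32)

inputs = [httpclient.InferInput("INPUT0", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data, binary_data=True)  # raw binary body
outputs = [httpclient.InferRequestedOutput("OUTPUT0", binary_data=True)]

result = client.infer(model_name="simple", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT0"))
```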

@@ -203,7 +203,7 @@ designed for modularity and flexibility
 
 - [Customize Triton Inference Server container](docs/customization_guide/compose.md) for your use case
 - [Create custom backends](https://github.com/triton-inference-server/backend)
-  in either [C/C++](https://github.com/triton-inference-server/backend/blob/main/README.md#triton-backend-api)
+  in either [C/C++](https://github.com/triton-inference-server/backend/blob/r24.10/README.md#triton-backend-api)
   or [Python](https://github.com/triton-inference-server/python_backend)
 - Create [decoupled backends and models](docs/user_guide/decoupled_models.md) that can send
   multiple responses for a request or not send any responses for a request
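
For the decoupled bullet in the hunk above: a sketch of the decoupled pattern in a Python backend, assuming the model's transaction policy is set to decoupled in config.pbtxt; the `_generate` helper is hypothetical.

```python
# Decoupled Python backend sketch: each request gets a response sender, so a
# model may send zero, one, or many responses per request.
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        for request in requests:
            sender = request.get_response_sender()
            for out in self._generate(request):  # hypothetical tensor producer
                sender.send(pb_utils.InferenceResponse(output_tensors=[out]))
            # Signal that no further responses follow for this request.
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)
        return None  # decoupled execute() returns None
```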
@@ -212,7 +212,7 @@ designed for modularity and flexibility
   decryption, or conversion
 - Deploy Triton on [Jetson and JetPack](docs/user_guide/jetson.md)
 - [Use Triton on AWS
-  Inferentia](https://github.com/triton-inference-server/python_backend/tree/main/inferentia)
+  Inferentia](https://github.com/triton-inference-server/python_backend/tree/r24.10/inferentia)
 
 ### Additional Documentation
 
