Release 2.57.0 corresponding to NGC container 25.04

@dmitry-tokarev-nv released this 12 May 18:13
d79c4f1

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

New Features and Improvements

  • Exposed gRPC infer thread count as a server option.
  • Improved server stability during gRPC client cancellation.
  • Improved server stability in tracing mode.
  • Added BLS decoupled request cancellation in the Python Backend.
  • GenAI-Perf now offers a new configuration file alongside the command line.
  • GenAI-Perf now supports the Hugging Face TGI generate endpoint.
  • GenAI-Perf added a tokens-per-second-per-user (TPS/user) metric.
  • GenAI-Perf metric parsing speed was increased by 60%.

Known Issues

  • vLLM backend for 25.04 might be unstable with the vLLM V1 architecture. We recommend switching to V0 for this release by setting the VLLM_USE_V1 environment variable to 0 (see the sketch after this list). However, users should be aware that vLLM's V0 API is affected by vulnerabilities.

  • vLLM containers include vLLM version 0.8.1, which is affected by new vulnerabilities. Until a fixed version is available, workarounds include:
    • Do not expose the vLLM host to a network where untrusted connections may reach the host.
    • Ensure that only other vLLM hosts are able to connect to the TCP port used for the XPUB socket. Note that the port used is random.
  • The core Python binding may incur an additional D2H and H2D copy if the backend and frontend both specify device memory to be used for response tensors.

  • A segmentation fault related to DCGM and NSCQ may be encountered during server shutdown on NVSwitch systems. A possible workaround is to disable the collection of GPU metrics by launching the server with tritonserver --allow-gpu-metrics false ...

  • vLLM backend currently does not take advantage of the vLLM v0.6 performance improvement when metrics are enabled.

  • When using TensorRT models, if auto-complete configuration is disabled and is_non_linear_format_io:true is not provided in the model configuration for reformat-free tensors, the model may not load successfully (see the configuration sketch after this list).

  • When using Python models in decoupled mode, users need to ensure that the ResponseSender goes out of scope or is properly cleaned up before unloading the model to guarantee that the unloading process executes correctly.

  • Restart support was temporarily removed for Python models.

  • Triton Inference Server with the vLLM backend currently does not support running vLLM models with tensor parallelism sizes greater than 1 together with the default "distributed_executor_backend" setting when using explicit model control mode. When attempting to load a vLLM model (tp > 1) in explicit mode, users may see a failure at the initialize step: could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads. In the default model control mode, vLLM-related sub-processes are not killed after server shutdown. Related vLLM issue: vllm-project/vllm#6766. Please specify "distributed_executor_backend":"ray" in model.json when deploying vLLM models with tensor parallelism > 1 (see the sketch after this list).

  • When loading models with file override, multiple model configuration files are not supported. Users must provide the model configuration by setting the parameter "config" : "<JSON>" instead of supplying a custom configuration file in the format "file:configs/<model-config-name>.pbtxt" : "<base64-encoded-file-content>" (see the sketch after this list).

  • The TensorRT-LLM backend provides limited support for Triton extensions and features.

  • The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and will not impact inferencing.

  • The Java CAPI is known to have intermittent segfaults.

  • Some malloc() implementations may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. TCMalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD (see the sketch after this list). NVIDIA recommends experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and their datatypes. Related PyTorch bug: https://github.com/pytorch/pytorch/issues/38273

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI, and pip will install an incorrect Jetson version of the Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices.

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

  • When cloud storage (AWS, GCS, Azure) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model's folder in a temporary directory, which is deleted upon server shutdown.

  • Python backend support for Windows is limited and does not currently support the following features:

    • GPU tensors
    • CPU and GPU-related metrics
    • Custom execution environments
    • The model load/unload APIs
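
Workaround sketches for several of the issues above follow. They are illustrative only; model names, paths, and values that do not appear in the issue text are placeholders.

To pin the vLLM backend to the V0 architecture, the environment variable can be set when launching the server (assuming it propagates to the backend processes; the model repository path is a placeholder):

VLLM_USE_V1=0 tritonserver --model-repository=/path/to/model_repository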
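
For TensorRT models with reformat-free tensors and auto-complete disabled, the flag is set per tensor in config.pbtxt. The tensor name, datatype, and dims below are placeholders:

input [
  {
    name: "INPUT0"
    data_type: TYPE_FP16
    dims: [ 3, 224, 224 ]
    is_non_linear_format_io: true
  }
]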
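
For vLLM models with tensor parallelism > 1 under explicit model control mode, "distributed_executor_backend" can be added to the backend's model.json. The model name and other engine arguments below are placeholders:

{
  "model": "meta-llama/Llama-2-7b-hf",
  "tensor_parallel_size": 2,
  "distributed_executor_backend": "ray"
}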
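
When loading a model with file override, the configuration is passed as a JSON string through the "config" parameter of the model repository load API. A minimal sketch using the HTTP repository extension (model name and configuration contents are placeholders):

curl -X POST localhost:8000/v2/repository/models/mymodel/load \
  -d '{"parameters": {"config": "{\"max_batch_size\": 8}"}}'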
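
To experiment with an alternative malloc implementation, the corresponding library can be preloaded when starting the server. Library paths vary by container version and should be verified inside the container; the locations below are typical:

tcmalloc:
LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libtcmalloc.so.4:${LD_PRELOAD} tritonserver --model-repository=/path/to/model_repository

jemalloc:
LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so:${LD_PRELOAD} tritonserver --model-repository=/path/to/model_repository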

Client Libraries and Examples

Ubuntu 24.04 builds of the client libraries and examples are included in this release in the attached v2.57.0_ubuntu2404.clients.tar.gz file. The SDK is also available as an Ubuntu 24.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
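
For example, the Python client components can be installed from PyPI; the [all] extra below pulls in both HTTP and gRPC support:

python3 -m pip install tritonclient[all]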

Windows Support

[!NOTE]
There is no Windows release for 25.04; the latest Windows release is 25.02.

Jetson iGPU Support

A release of Triton for IGX is provided in the attached tar file: tritonserver2.57.0-igpu.tar.

  • This release supports TensorRT 10.9.0.34, ONNX Runtime 1.21.0, PyTorch 2.7.0a0+79aa17489c.nv25.4, and Python 3.12, as well as ensembles.
  • ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries, as well as the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack, refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.57.0-py3-none-manylinux2014_aarch64.whl[all]

Triton TRT-LLM Container Support Matrix

The Triton TensorRT-LLM container is built from the 25.03 image nvcr.io/nvidia/tritonserver:25.03-py3-min. Please refer to the support matrix and compatibility.md for all dependency versions related to 25.03. However, the packages listed below have different versions than those specified in the support matrix.

Dependency       Version
TensorRT-LLM     0.18.2
TensorRT         10.9.0.34
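
For reference, the matching Triton TRT-LLM container can be pulled from NGC. The tag below follows the usual <yy.mm>-trtllm-python-py3 naming convention and should be verified against the NGC catalog:

docker pull nvcr.io/nvidia/tritonserver:25.04-trtllm-python-py3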