
Dynamo Release v0.3.2

@nv-anants nv-anants released this 18 Jul 05:21
50f3636

Dynamo is a high-performance, low-latency inference framework designed to serve generative AI models across any framework, architecture, or deployment scale. It is an open-source project under the Apache 2.0 license and is available for installation via pip wheels and as containers from NVIDIA NGC.

As a vendor-neutral serving framework, Dynamo supports multiple large language model (LLM) inference engines to varying degrees:

  • NVIDIA TensorRT-LLM
  • vLLM
  • SGLang

Major Features and Improvements

Engine Support and Routing

  • An example standalone router was added for use outside of Dynamo (#1409).
  • The new SLA-based planner dynamically manages resource allocation based on service-level objectives (#1420).
  • Data-parallel vLLM worker setups are now supported (#1513).
  • SGLang support was extended for DeepEP deployments (#1120).
  • Clean shutdown is now available for vllm_v1 and SGLang engines (#1562, #1764).
  • Experimental support for WideEP with EPLB aggregation and disaggregation is now available for TRTLLM (#1652, #1690).
  • The router now tracks approximate KV cache residency and predicts active KV blocks for improved routing efficiency (#1636, #1638, #1731).
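
The KV-cache-aware routing items above can be illustrated with a small sketch. This is not Dynamo's actual router API; the `Worker` record and `score` function below are hypothetical, showing only the general idea of preferring workers whose approximate KV cache already holds blocks for a request while penalizing predicted load.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    # Approximate set of KV block hashes believed resident on this worker.
    resident_blocks: set
    # Predicted number of KV blocks active for in-flight requests.
    predicted_active_blocks: int
    capacity_blocks: int

def score(worker: Worker, request_blocks: set) -> float:
    """Higher is better: reward cache overlap, penalize predicted load."""
    overlap = len(worker.resident_blocks & request_blocks)
    load = worker.predicted_active_blocks / worker.capacity_blocks
    return overlap - load * len(request_blocks)

def route(workers: list, request_blocks: set) -> Worker:
    # Pick the worker with the best cache-overlap-versus-load trade-off.
    return max(workers, key=lambda w: score(w, request_blocks))

workers = [
    Worker("w0", {"a", "b"}, predicted_active_blocks=90, capacity_blocks=100),
    Worker("w1", {"a", "b", "c"}, predicted_active_blocks=10, capacity_blocks=100),
]
best = route(workers, {"a", "b", "c", "d"})
```

Here `w1` wins despite `w0` also holding some blocks, because `w1` has both more overlap and far lower predicted load; the real router's residency and load estimates are necessarily approximate, which is what the PRs above refine.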

Observability and Metrics

  • Native DCGM and Prometheus integration enables hardware metrics collection and export. Optional Grafana dashboards are provided (#1488, #1701, #1788).
  • New Grafana dashboards offer composite software and hardware system visibility (#1788).
  • Batch /completions endpoint and speculative decoding metrics are now supported for vLLM (#1626, #1549).

Deployment, Kubernetes, and CLI

  • The Kubernetes operator now supports custom entrypoints, command overrides, and simplified graph deployments (#1396, #1708, #1877, #1893).
  • Example manifests for multimodal and minimal deployments were added (#1836, #1872).
  • Graph Helm chart logic, resource requests, and health probes were improved (#1877, #1888).
  • Two new Helm charts, dynamo-platform and dynamo-crds, enable modular and robust Kubernetes deployments across a variety of topologies and operational requirements.
  • The dynamo-run command line interface now supports a --version flag, improved error handling, and request validation (#1596, #1674, #1623).
  • Docker and Kubernetes deployment workflows were streamlined. Helm charts and container images were improved (#1742, #1796, #1840, #1841).
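
A typical installation of the two charts named above would apply the CRDs first, then the platform. The chart repository URL, namespace, and release names below are placeholders, not confirmed values from this release; consult the charts' own documentation for the real source.

```shell
# Placeholder repository URL -- substitute the actual chart source.
helm repo add dynamo https://example.com/dynamo-charts
helm repo update

# Install CRDs first so the platform chart's custom resources are recognized.
helm install dynamo-crds dynamo/dynamo-crds \
  --namespace dynamo-system --create-namespace

# Then install the platform components into the same namespace.
helm install dynamo-platform dynamo/dynamo-platform \
  --namespace dynamo-system
```

Splitting CRDs into their own chart is a common Helm pattern: it lets the custom resource definitions be upgraded or retained independently of the platform release.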

Developer Experience

  • Embedding request handling was improved with frontend tokenization (#1494).
  • OpenAI API request validation is now available (#1674).
  • Batch embedding and parallel tokenization improve efficiency for batch inference workloads (#1657).
  • The /responses endpoint and additional API features were added (#1694).
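
The parallel tokenization change can be sketched generically. The tokenizer below is a stand-in (a real deployment would use the model's own tokenizer); the point is fanning tokenization of a batch across a thread pool before inference, preserving input order.

```python
from concurrent.futures import ThreadPoolExecutor

def tokenize(text: str) -> list:
    # Stand-in tokenizer: one fake "token id" per whitespace-separated word.
    return [hash(w) % 50_000 for w in text.split()]

def tokenize_batch(texts: list, max_workers: int = 8) -> list:
    # Tokenize all inputs in parallel; pool.map preserves input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(tokenize, texts))

batch = tokenize_batch(["hello world", "dynamo serves models"])
```

Because tokenization of independent requests has no shared state, it parallelizes cleanly, which is what makes it a cheap win for batch embedding endpoints.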

Bug Fixes

  • Issues related to GPU resource specifications in deployments, container builds, and runtime were fixed (#1826, #1792, #1546).
  • Helm chart logic, resource requests, and health probes were corrected (#1877, #1893).
  • Error handling and model loading were improved for multimodal and distributed deployments (#1545).
  • Metrics publishing and logging were fixed for vLLM, SGLang, and OpenAI endpoints (#1864, #1649, #1639).
  • Process cleanup issues were resolved in tests (#1801).

Documentation

  • Documentation updates include new guides for Ray setup, architecture diagrams, and deployment modes (#1947, #1697).
  • Benchmarking, troubleshooting, and advanced usage scenario documentation was enhanced.
  • Notes were added to deprecate outdated connectors (#1964, #1959).

Build, CI, and Test

  • Dependency upgrades include protobuf, nats, and etcd (#1876, #1744).
  • CI coverage now includes GPU-based and multi-engine tests.
  • Container builds now use distroless images for improved security and efficiency (#1570, #1569).
  • Fault tolerance tests were added (#1444).

Known Issues

  • KVBM is supported only with Python 3.12.

Release Assets

Release assets are published in the following categories:

  • Python Wheels
  • Rust Crates
  • Containers
  • Helm Charts

Contributors

Thank you to all contributors for this release. For a full list, refer to the changelog.