
feat: Report histogram metrics to Triton metrics server #57


Closed
wants to merge 28 commits into from

Changes from all commits (28 commits):
0686a7c
Add first supported metrics
yinggeh Jul 29, 2024
21e2356
Update comments
yinggeh Jul 30, 2024
d95bb2c
Minor update
yinggeh Aug 1, 2024
321faa0
Add metrics test
yinggeh Aug 3, 2024
468539f
Fix copyright
yinggeh Aug 5, 2024
8eba2f0
Remove unused metrics and update comments
yinggeh Aug 6, 2024
6f97f6f
Minor update
yinggeh Aug 6, 2024
bf7669e
Minor updates
yinggeh Aug 6, 2024
e9d0dbb
Minor fix
yinggeh Aug 7, 2024
7d0dc5b
Remove unused module
yinggeh Aug 7, 2024
979dc02
Fix "metrics not supported error" when building with TRITON_ENABLE_ME…
yinggeh Aug 8, 2024
3dd04c5
Fix "metrics not supported error" when building with TRITON_ENABLE_ME…
yinggeh Aug 8, 2024
07f2575
Simply test
yinggeh Aug 8, 2024
2135145
Completely turn off metrics
yinggeh Aug 9, 2024
56aea05
Add vLLM disable_log_stats config test
yinggeh Aug 9, 2024
0dadc8e
Test metrics are enabled by default if disable_log_stats is not set.
yinggeh Aug 9, 2024
8d8fd2a
Update tests based on comments
yinggeh Aug 9, 2024
4f2e217
Remove _log_gauge
yinggeh Aug 9, 2024
d22fd03
Resolve comments
yinggeh Aug 9, 2024
c8bdb6e
Merge branch 'main' of github.com:triton-inference-server/vllm_backen…
yinggeh Aug 9, 2024
8280d26
Update
yinggeh Aug 9, 2024
6fa7ae3
Change temp directory
yinggeh Aug 9, 2024
89ca6f4
Disable metrics report by default. Controlled by parameter "REPORT_ME…
yinggeh Aug 15, 2024
1158fee
Test server option set --allow-metrics=false
yinggeh Aug 15, 2024
a99d38b
Add docs
yinggeh Aug 15, 2024
de8f25b
Minor update
yinggeh Aug 15, 2024
b1333ce
Both args checking
yinggeh Aug 15, 2024
f15658e
feat: Report histogram metrics to Triton metrics server (#56)
yinggeh Aug 16, 2024
69 changes: 67 additions & 2 deletions README.md
@@ -111,7 +111,8 @@ container with the following commands:

```
mkdir -p /opt/tritonserver/backends/vllm
wget -P /opt/tritonserver/backends/vllm https://raw.githubusercontent.com/triton-inference-server/vllm_backend/main/src/model.py
git clone https://github.com/triton-inference-server/vllm_backend.git /tmp/vllm_backend
cp -r /tmp/vllm_backend/src/* /opt/tritonserver/backends/vllm
```

## Using the vLLM Backend
@@ -194,14 +195,78 @@ starting from 23.10 release.

You can use `pip install ...` within the container to upgrade the vLLM version.


## Running Multiple Instances of Triton Server

If you are running multiple instances of Triton server with a Python-based backend,
you need to specify a different `shm-region-prefix-name` for each server. See
[here](https://github.com/triton-inference-server/python_backend#running-multiple-instances-of-triton-server)
for more information.
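
For example, a minimal sketch of running two servers side by side might look like the
following (the ports and prefix names here are illustrative; consult the linked
documentation for the exact flags):

```bash
# First Triton instance
tritonserver --model-repository=/models_a \
    --backend-config=python,shm-region-prefix-name=prefix_a \
    --http-port 8000 --grpc-port 8001 --metrics-port 8002

# Second Triton instance with a distinct shared-memory prefix and its own ports
tritonserver --model-repository=/models_b \
    --backend-config=python,shm-region-prefix-name=prefix_b \
    --http-port 9000 --grpc-port 9001 --metrics-port 9002
```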

## Triton Metrics
Starting with the 24.08 release of Triton, users can obtain specific
vLLM metrics by querying the Triton metrics endpoint (see the complete list of vLLM
metrics [here](https://docs.vllm.ai/en/latest/serving/metrics.html)). This can be
accomplished by launching a Triton server in any of the ways described above
(ensuring the build code / container is 24.08 or later) and querying the server.
Once the server has responded successfully to a request, you can query the metrics
endpoint with:
```bash
curl localhost:8002/metrics
```
vLLM stats are reported by the metrics endpoint in fields that are prefixed with
`vllm:`. Triton currently supports reporting the following metrics from vLLM.
```bash
# Number of prefill tokens processed.
counter_prompt_tokens
# Number of generation tokens processed.
counter_generation_tokens
# Histogram of time to first token in seconds.
histogram_time_to_first_token
# Histogram of time per output token in seconds.
histogram_time_per_output_token
```
Your output for these fields should look similar to the following:
```bash
# HELP vllm:prompt_tokens_total Number of prefill tokens processed.
# TYPE vllm:prompt_tokens_total counter
vllm:prompt_tokens_total{model="vllm_model",version="1"} 10
# HELP vllm:generation_tokens_total Number of generation tokens processed.
# TYPE vllm:generation_tokens_total counter
vllm:generation_tokens_total{model="vllm_model",version="1"} 16
# HELP vllm:time_to_first_token_seconds Histogram of time to first token in seconds.
# TYPE vllm:time_to_first_token_seconds histogram
vllm:time_to_first_token_seconds_count{model="vllm_model",version="1"} 1
vllm:time_to_first_token_seconds_sum{model="vllm_model",version="1"} 0.03233122825622559
vllm:time_to_first_token_seconds_bucket{model="vllm_model",version="1",le="0.001"} 0
vllm:time_to_first_token_seconds_bucket{model="vllm_model",version="1",le="0.005"} 0
...
vllm:time_to_first_token_seconds_bucket{model="vllm_model",version="1",le="+Inf"} 1
# HELP vllm:time_per_output_token_seconds Histogram of time per output token in seconds.
# TYPE vllm:time_per_output_token_seconds histogram
vllm:time_per_output_token_seconds_count{model="vllm_model",version="1"} 15
vllm:time_per_output_token_seconds_sum{model="vllm_model",version="1"} 0.04501533508300781
vllm:time_per_output_token_seconds_bucket{model="vllm_model",version="1",le="0.01"} 14
vllm:time_per_output_token_seconds_bucket{model="vllm_model",version="1",le="0.025"} 15
...
vllm:time_per_output_token_seconds_bucket{model="vllm_model",version="1",le="+Inf"} 15
```
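To quickly check which vLLM metrics are currently being exported, you can filter the
endpoint output for the `vllm:` prefix (a simple illustration, assuming the default
metrics port 8002):
```bash
# Fetch the metrics endpoint and keep only the vLLM-specific samples
curl -s localhost:8002/metrics | grep "^vllm:"
```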
To enable vLLM engine metrics collection, the "disable_log_stats" option must either be set to false
or left unset (false by default) in [model.json](https://github.com/triton-inference-server/vllm_backend/blob/main/samples/model_repository/vllm_model/1/model.json).
```bash
"disable_log_stats": false
```
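If the field is missing from your `model.json`, one way to set it explicitly is with
`jq` (a sketch, assuming `jq` is installed and the sample model repository layout):
```bash
# Explicitly set "disable_log_stats" to false in the model's model.json
MODEL_JSON=samples/model_repository/vllm_model/1/model.json
jq '. += {"disable_log_stats": false}' "$MODEL_JSON" > temp.json && mv temp.json "$MODEL_JSON"
```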
*Note:* vLLM metrics are not reported to the Triton metrics server by default
due to potential performance slowdowns. To enable metrics reporting for a vLLM model,
please also add the following lines to its config.pbtxt.
```bash
parameters: {
  key: "REPORT_CUSTOM_METRICS"
  value: {
    string_value:"yes"
  }
}
```
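For example, the parameter block can be appended to an existing `config.pbtxt` from
the shell (the path below assumes the sample model repository and is illustrative only):
```bash
# Append the REPORT_CUSTOM_METRICS parameter to the model's config.pbtxt
cat <<'EOF' >> samples/model_repository/vllm_model/config.pbtxt
parameters: {
  key: "REPORT_CUSTOM_METRICS"
  value: {
    string_value:"yes"
  }
}
EOF
```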

## Referencing the Tutorial

You can read further in the
248 changes: 248 additions & 0 deletions ci/L0_backend_vllm/metrics_test/test.sh
@@ -0,0 +1,248 @@
#!/bin/bash
# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

source ../../common/util.sh

TRITON_DIR=${TRITON_DIR:="/opt/tritonserver"}
SERVER=${TRITON_DIR}/bin/tritonserver
BACKEND_DIR=${TRITON_DIR}/backends
SERVER_ARGS="--model-repository=$(pwd)/models --backend-directory=${BACKEND_DIR} --model-control-mode=explicit --load-model=vllm_opt --log-verbose=1"
SERVER_LOG="./vllm_metrics_server.log"
CLIENT_LOG="./vllm_metrics_client.log"
TEST_RESULT_FILE='test_results.txt'
CLIENT_PY="./vllm_metrics_test.py"
SAMPLE_MODELS_REPO="../../../samples/model_repository"
EXPECTED_NUM_TESTS=1

# Helpers =======================================
function copy_model_repository {
    rm -rf models && mkdir -p models
    cp -r ${SAMPLE_MODELS_REPO}/vllm_model models/vllm_opt
    # `vllm_opt` model will be loaded on server start and stay loaded throughout
    # unit testing. To ensure that vllm's memory profiler will not error out
    # on `vllm_load_test` load, we reduce "gpu_memory_utilization" for `vllm_opt`,
    # so that at least 60% of GPU memory is available for other models.
    sed -i 's/"gpu_memory_utilization": 0.5/"gpu_memory_utilization": 0.4/' models/vllm_opt/1/model.json
}

RET=0

# Test disabling vLLM metrics reporting without parameter "REPORT_CUSTOM_METRICS" in config.pbtxt
copy_model_repository
run_server
if [ "$SERVER_PID" == "0" ]; then
cat $SERVER_LOG
echo -e "\n***\n*** Failed to start $SERVER\n***"
exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled FAILED. \n***"
RET=1
else
check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Test Result Verification FAILED.\n***"
RET=1
fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test disabling vLLM metrics reporting with parameter "REPORT_CUSTOM_METRICS" set to "no" in config.pbtxt
copy_model_repository
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"no\"
}
}
" >> models/vllm_opt/config.pbtxt

run_server
if [ "$SERVER_PID" == "0" ]; then
cat $SERVER_LOG
echo -e "\n***\n*** Failed to start $SERVER\n***"
exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled FAILED. \n***"
RET=1
else
check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Test Result Verification FAILED.\n***"
RET=1
fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test vLLM metrics reporting with parameter "REPORT_CUSTOM_METRICS" set to "yes" in config.pbtxt
copy_model_repository
cp ${SAMPLE_MODELS_REPO}/vllm_model/config.pbtxt models/vllm_opt
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"yes\"
}
}
" >> models/vllm_opt/config.pbtxt

run_server
if [ "$SERVER_PID" == "0" ]; then
cat $SERVER_LOG
echo -e "\n***\n*** Failed to start $SERVER\n***"
exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics FAILED. \n***"
RET=1
else
check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Test Result Verification FAILED.\n***"
RET=1
fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test enabling vLLM metrics reporting in config.pbtxt but disabling in model.json
copy_model_repository
jq '. += {"disable_log_stats" : true}' models/vllm_opt/1/model.json > "temp.json"
mv temp.json models/vllm_opt/1/model.json
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"yes\"
}
}
" >> models/vllm_opt/config.pbtxt

run_server
if [ "$SERVER_PID" == "0" ]; then
cat $SERVER_LOG
echo -e "\n***\n*** Failed to start $SERVER\n***"
exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled FAILED. \n***"
RET=1
else
check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Test Result Verification FAILED.\n***"
RET=1
fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test enabling vLLM metrics reporting in config.pbtxt while disabling in server option
copy_model_repository
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"yes\"
}
}
" >> models/vllm_opt/config.pbtxt
SERVER_ARGS="${SERVER_ARGS} --allow-metrics=false"
run_server
if [ "$SERVER_PID" == "0" ]; then
cat $SERVER_LOG
echo -e "\n***\n*** Failed to start $SERVER\n***"
exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_refused -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_refused FAILED. \n***"
RET=1
else
check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
if [ $? -ne 0 ]; then
cat $CLIENT_LOG
echo -e "\n***\n*** Test Result Verification FAILED.\n***"
RET=1
fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID
rm -rf "./models" "temp.json"

if [ $RET -eq 1 ]; then
    cat $CLIENT_LOG
    cat $SERVER_LOG
    echo -e "\n***\n*** vLLM test FAILED. \n***"
else
    echo -e "\n***\n*** vLLM test PASSED. \n***"
fi

collect_artifacts_from_subdir
exit $RET