Create a benchmark job 'sleep' to leverage Benchpress's features to monitor non-DCPerf workloads #98

Open · wants to merge 2 commits into base branch v2-beta
5 changes: 5 additions & 0 deletions benchpress/config/benchmarks.yml
@@ -146,3 +146,8 @@ health_check:
cleanup_script: ./packages/health_check/cleanup_health_check.sh
path: ./benchmarks/health_check/run.sh
metrics: []

sleep:
path: /usr/bin/sleep
parser: returncode
metrics: []
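The new `sleep` benchmark declares `parser: returncode`, which suggests a parser that derives success purely from the process exit status rather than from stdout. The sketch below is a guess at what such a parser looks like; the class name `ReturncodeParser` and the `parse()` signature are assumptions modeled on Benchpress's parser-plugin style, not the repository's actual API.

```python
# Hypothetical sketch of a "returncode"-style parser. The name
# ReturncodeParser and the parse() signature are assumptions, not
# Benchpress's real interface.
class ReturncodeParser:
    """Report only whether the benchmark process exited cleanly."""

    def parse(self, stdout, stderr, returncode):
        # sleep produces no metrics on stdout; success is simply
        # a zero exit status.
        return {"success": returncode == 0}


parser = ReturncodeParser()
print(parser.parse([], [], 0))  # {'success': True}
print(parser.parse([], [], 1))  # {'success': False}
```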
10 changes: 10 additions & 0 deletions benchpress/config/jobs.yml
@@ -806,3 +806,13 @@
client:
args:
- '-r client'

- benchmark: sleep
name: sleep
  description: >
    Sleep for a period of time. This can be used with Benchpress's Perf
    Monitoring Hook to monitor external (non-DCPerf) workloads.
args:
- '{duration}'
vars:
- 'duration'
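The job lists `duration` under `vars` and references it as `{duration}` in `args`, so the user-supplied value is substituted into the command line at run time. A minimal sketch of that substitution, assuming it behaves like Python's `str.format` (the helper `render_args` is hypothetical, not Benchpress code):

```python
def render_args(arg_templates, user_vars):
    # Substitute each {name} placeholder with the user-supplied value,
    # e.g. "{duration}" -> "30" for a 30-second sleep.
    return [template.format(**user_vars) for template in arg_templates]


args = render_args(["{duration}"], {"duration": "30"})
print(args)  # ['30']
```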
3 changes: 3 additions & 0 deletions benchpress/lib/util.py
@@ -53,6 +53,9 @@ def issue_background_command(cmd, stdout, stderr, env=None):


def verify_install(install_script):
if not install_script:
# install_script not set means this "benchmark" does not need installation
return True
if os.path.exists("benchmark_installs.txt"):
with open("benchmark_installs.txt", "r") as benchmark_installs:
for benchmark_install in benchmark_installs:
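The `util.py` change adds an early return so a benchmark with no `install_script` (such as the new `sleep` job, which points straight at `/usr/bin/sleep`) is treated as already installed. A self-contained sketch of the whole function under that change; the tail that checks `benchmark_installs.txt` is truncated in the diff, so its matching logic here is an assumption about its shape, not the actual implementation:

```python
import os


def verify_install(install_script):
    if not install_script:
        # install_script not set means this "benchmark" does not
        # need installation (e.g. the sleep job).
        return True
    # Assumed tail: completed installs are recorded one per line in
    # benchmark_installs.txt, so look for a matching entry.
    if os.path.exists("benchmark_installs.txt"):
        with open("benchmark_installs.txt", "r") as benchmark_installs:
            for benchmark_install in benchmark_installs:
                if benchmark_install.strip() == install_script:
                    return True
    return False
```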
3 changes: 2 additions & 1 deletion perfutils/generate_amd_perf_report.py
@@ -9,6 +9,7 @@
import io
import itertools
import subprocess
import typing

import click
import pandas as pd
@@ -2376,7 +2377,7 @@ def get_memory_info():
)
def main(
amd_perf_csv_file: click.Path,
series: click.File,
series: typing.TextIO,
format: click.Choice,
arch: click.Choice,
) -> None:
3 changes: 2 additions & 1 deletion perfutils/generate_arm_perf_report.py
@@ -8,6 +8,7 @@
import functools
import io
import itertools
import typing

import click
import pandas as pd
@@ -804,7 +805,7 @@ def nvidia_scf_mem_latency_ns(grouped_df):
)
def main(
perf_csv_file: click.Path,
series: click.File,
series: typing.TextIO,
format: click.Choice,
) -> None:
df = read_csv(perf_csv_file)
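Both perf-report scripts change the annotation of `series` from `click.File` to `typing.TextIO`. The reasoning: `click.File(...)` is the *parameter declaration* passed to the `@click.option` decorator, and at call time click hands the function an already-opened text stream, so the annotation should name the runtime type. A minimal stdlib-only illustration (no click involved) that a `typing.TextIO`-annotated function accepts any open text stream:

```python
import io
import typing


def first_line(series: typing.TextIO) -> str:
    # `series` is any open text stream -- which is what a
    # click.File("r") option actually passes in at runtime.
    return series.readline().rstrip("\n")


print(first_line(io.StringIO("cycles,1234\ninstructions,5678\n")))
# cycles,1234
```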