
Commit a0109a3

Fixed readme, removed error for --help (#89)
Signed-off-by: Ira <IRAR@il.ibm.com>
1 parent 9f3d093 commit a0109a3

2 files changed: +7, -3 lines changed


README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -85,7 +85,7 @@ API responses contain a subset of the fields provided by the OpenAI API.
 For more details see the <a href="https://docs.vllm.ai/en/stable/getting_started/quickstart.html#openai-completions-api-with-vllm">vLLM documentation</a>
 
 ## Command line parameters
-- `config`: the path to a yaml configuration file
+- `config`: the path to a yaml configuration file that can contain the simulator's command line parameters. If a parameter is defined in both the config file and the command line, the command line value overwrites the configuration file value. An example configuration file can be found at `manifests/config.yaml`
 - `port`: the port the simulator listens on, default is 8000
 - `model`: the currently 'loaded' model, mandatory
 - `served-model-name`: model names exposed by the API (a list of space-separated strings)
```
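The new README text spells out the precedence rule: values from the yaml config file are applied first, and any flag given on the command line wins. Below is a minimal Go sketch of that pattern using `pflag` (which the simulator already uses) together with an assumed `gopkg.in/yaml.v3` decoder; the `config` struct and the flag set are illustrative assumptions, not the simulator's actual code.

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
	"gopkg.in/yaml.v3" // assumed yaml library, for illustration only
)

// config mirrors a couple of the README's parameters; illustrative only.
type config struct {
	Port  int    `yaml:"port"`
	Model string `yaml:"model"`
}

func main() {
	cfg := config{Port: 8000} // built-in default

	f := pflag.NewFlagSet("llm-d-inference-sim", pflag.ContinueOnError)
	configPath := f.String("config", "", "path to a yaml configuration file")
	port := f.Int("port", cfg.Port, "port the simulator listens on")
	model := f.String("model", "", "the currently 'loaded' model")
	if err := f.Parse(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	// Apply the config file first, if one was given.
	if *configPath != "" {
		data, err := os.ReadFile(*configPath)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := yaml.Unmarshal(data, &cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	// Command line wins: only flags the user explicitly set overwrite the file.
	if f.Changed("port") {
		cfg.Port = *port
	}
	if f.Changed("model") {
		cfg.Model = *model
	}

	fmt.Printf("port=%d model=%q\n", cfg.Port, cfg.Model)
}
```

With this pattern, `--config manifests/config.yaml --port 9000` would keep the model from the file but listen on port 9000.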
````diff
@@ -135,7 +135,7 @@ The following environment variables can be used to change the image tag: `REGIST
 ### Running
 To run the vLLM Simulator image under Docker, run:
 ```bash
-docker run --rm --publish 8000:8000 ghcr.io/llm-d/llm-d-inference-sim:dev --port 8000 --model "Qwen/Qwen2.5-1.5B-Instruct" --lora "tweet-summary-0,tweet-summary-1"
+docker run --rm --publish 8000:8000 ghcr.io/llm-d/llm-d-inference-sim:dev --port 8000 --model "Qwen/Qwen2.5-1.5B-Instruct" --lora-modules '{"name":"tweet-summary-0"}' '{"name":"tweet-summary-1"}'
```
 **Note:** To run the vLLM Simulator with the latest release version, in the above docker command replace `dev` with the current release which can be found on [GitHub](https://github.com/llm-d/llm-d-inference-sim/pkgs/container/llm-d-inference-sim).
````
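Once the container is up, you can smoke-test it with any OpenAI-compatible client. A minimal Go sketch is below; it assumes the simulator serves the standard OpenAI chat completions endpoint described in the vLLM documentation linked earlier, and the URL and payload are illustrative.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Assumed OpenAI-compatible endpoint; see the vLLM docs linked above.
	url := "http://localhost:8000/v1/chat/completions"
	body := `{"model": "Qwen/Qwen2.5-1.5B-Instruct",
	          "messages": [{"role": "user", "content": "Hello"}]}`

	resp, err := http.Post(url, "application/json", strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```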

pkg/llm-d-inference-sim/simulator.go

Lines changed: 5 additions & 1 deletion
```diff
@@ -159,7 +159,7 @@ func (s *VllmSimulator) parseCommandParamsAndLoadConfig() error {
 
 	// These values were manually parsed above in getParamValueFromArgs, we leave this in order to get these flags in --help
 	var dummyString string
-	f.StringVar(&dummyString, "config", "", "The configuration file")
+	f.StringVar(&dummyString, "config", "", "The path to a yaml configuration file. The command line values overwrite the configuration file values")
 	var dummyMultiString multiString
 	f.Var(&dummyMultiString, "served-model-name", "Model names exposed by the API (a list of space-separated strings)")
 	f.Var(&dummyMultiString, "lora-modules", "List of LoRA adapters (a list of space-separated JSON strings)")
```
```diff
@@ -172,6 +172,10 @@ func (s *VllmSimulator) parseCommandParamsAndLoadConfig() error {
 	f.AddGoFlagSet(flagSet)
 
 	if err := f.Parse(os.Args[1:]); err != nil {
+		if err == pflag.ErrHelp {
+			// --help - exit without printing an error message
+			os.Exit(0)
+		}
 		return err
 	}
 
```
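This hunk is the fix referenced in the commit title: with `pflag.ContinueOnError`, `Parse` prints the usage text and returns `pflag.ErrHelp` when `--help` is passed, so treating that error as a clean exit stops the simulator from reporting it as a failure. A standalone sketch of the pattern (flag names are illustrative):

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

func main() {
	f := pflag.NewFlagSet("llm-d-inference-sim", pflag.ContinueOnError)
	port := f.Int("port", 8000, "port the simulator listens on")

	if err := f.Parse(os.Args[1:]); err != nil {
		if err == pflag.ErrHelp {
			// pflag has already printed the usage text; exit cleanly.
			os.Exit(0)
		}
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	fmt.Println("listening on port", *port)
}
```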
