README.md — 1 addition & 2 deletions
```diff
@@ -122,7 +122,6 @@ OpenLLM supports a wide range of state-of-the-art open-source LLMs. You can also
 </tr>
 </table>
 
-
 For the full model list, see the [OpenLLM models repository](https://github.com/bentoml/openllm-models).
 
 ## Start an LLM server
@@ -252,7 +251,7 @@ OpenLLM supports LLM cloud deployment via BentoML, the unified model serving fra
 [Sign up for BentoCloud](https://www.bentoml.com/) for free and [log in](https://docs.bentoml.com/en/latest/bentocloud/how-tos/manage-access-token.html). Then, run `openllm deploy` to deploy a model to BentoCloud:
```
src/openllm/__main__.py — 3 additions & 2 deletions
```diff
@@ -240,12 +240,13 @@ def deploy(
     instance_type: typing.Optional[str] = None,
     repo: typing.Optional[str] = None,
     verbose: bool = False,
+    env: typing.Optional[list[str]] = typer.Option(None, "--env", help="Environment variables to pass to the deployment command. Format: NAME or NAME=value. Can be specified multiple times.")
```
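The added `--env` option accepts repeatable entries of the form `NAME` or `NAME=value`. As a minimal sketch of how such entries might be turned into environment-variable pairs (the helper `parse_env` is hypothetical and not part of OpenLLM; resolving a bare `NAME` from the current process environment is an assumption, mirroring a common CLI convention):

```python
import os

def parse_env(entries):
    """Turn --env values (NAME or NAME=value) into a dict.

    Hypothetical helper for illustration only. A bare NAME is
    resolved from the current process environment (an assumed
    convention); NAME=value is split at the first '='.
    """
    result = {}
    for entry in entries or []:
        if "=" in entry:
            name, _, value = entry.partition("=")
        else:
            name, value = entry, os.environ.get(entry, "")
        result[name] = value
    return result

print(parse_env(["FOO=bar", "DEBUG="]))  # {'FOO': 'bar', 'DEBUG': ''}
```

Splitting at the first `=` keeps values that themselves contain `=` intact, and passing the option multiple times simply appends to the list Typer collects.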