Commit 195ae05

Fix code example in MLFlow section of deployment docs (#8229)
* fix(documentation): mlflow deployment code example
* add caveat callout

Co-authored-by: chenmoneygithub <chen.qian@databricks.com>
1 parent 015f795 commit 195ae05

File tree

1 file changed: +37 −4 lines changed


docs/docs/tutorials/deployment/index.md

Lines changed: 37 additions & 4 deletions
@@ -120,7 +120,13 @@ print(response.json())
 You should see the response like below:
 
 ```json
-{'status': 'success', 'data': {'reasoning': 'The capital of France is a well-known fact, commonly taught in geography classes and referenced in various contexts. Paris is recognized globally as the capital city, serving as the political, cultural, and economic center of the country.', 'answer': 'The capital of France is Paris.'}}
+{
+    "status": "success",
+    "data": {
+        "reasoning": "The capital of France is a well-known fact, commonly taught in geography classes and referenced in various contexts. Paris is recognized globally as the capital city, serving as the political, cultural, and economic center of the country.",
+        "answer": "The capital of France is Paris."
+    }
+}
 ```
 
 ## Deploying with MLflow
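The substance of this first hunk is replacing a Python-dict repr (single quotes) with valid JSON. A minimal stdlib-only sketch of why that matters (the payload strings here are abbreviated versions of the docs example, not live server output):

```python
import json

# Pre-fix style: single-quoted keys/strings are a Python dict repr,
# not valid JSON, so json.loads rejects them.
try:
    json.loads("{'status': 'success'}")
    parsed_old = True
except json.JSONDecodeError:
    parsed_old = False

# Post-fix style: double-quoted JSON parses cleanly.
doc = json.loads('{"status": "success", "data": {"answer": "The capital of France is Paris."}}')

print(parsed_old)             # False: the single-quoted form fails to parse
print(doc["data"]["answer"])  # The capital of France is Paris.
```

This is why the hunk above rewrites the sample response with double quotes (and pretty-prints it): readers frequently copy such snippets into `json.loads` or an API client, where the old form would fail.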
@@ -140,7 +146,14 @@ Let's spin up the MLflow tracking server, where we will store our DSPy program.
 ```
 
 Then we can define the DSPy program and log it to the MLflow server. "log" is an overloaded term in MLflow, basically it means
-we store the program information along with environment requirements in the MLflow server. See the code below:
+we store the program information along with environment requirements in the MLflow server. This is done via the `mlflow.dspy.log_model()`
+function, please see the code below:
+
+> [!NOTE]
+> As of MLflow 2.22.0, there is a caveat that you must wrap your DSPy program in a custom DSPy Module class when deploying with MLflow.
+> This is because MLflow requires positional arguments while DSPy pre-built modules disallow positional arguments, e.g., `dspy.Predict`
+> or `dspy.ChainOfThought`. To work around this, create a wrapper class that inherits from `dspy.Module` and implement your program's
+> logic in the `forward()` method, as shown in the example below.
 
 ```python
 import dspy
@@ -151,7 +164,16 @@ mlflow.set_experiment("deploy_dspy_program")
 
 lm = dspy.LM("openai/gpt-4o-mini")
 dspy.settings.configure(lm=lm)
-dspy_program = dspy.ChainOfThought("question -> answer")
+
+class MyProgram(dspy.Module):
+    def __init__(self):
+        super().__init__()
+        self.cot = dspy.ChainOfThought("question -> answer")
+
+    def forward(self, question):
+        return self.cot(question=question)
+
+dspy_program = MyProgram()
 
 with mlflow.start_run():
     mlflow.dspy.log_model(
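The positional-vs-keyword mismatch named in the caveat can be illustrated without DSPy or MLflow at all. Below is a toy stand-in, not the real libraries: `prebuilt_module` mimics a keyword-only DSPy module, and `Wrapper.forward` mirrors the role of `MyProgram` in the hunk above:

```python
# Toy stand-in for a pre-built DSPy module: inputs are keyword-only,
# so a positional call (which an MLflow-style caller would make) fails.
def prebuilt_module(*, question):
    return f"answer to: {question}"

class Wrapper:
    # Accepts a positional argument and forwards it by keyword,
    # mirroring the MyProgram.forward() pattern in the diff.
    def forward(self, question):
        return prebuilt_module(question=question)

try:
    prebuilt_module("What is 2 + 2?")  # positional call
    positional_ok = True
except TypeError:
    positional_ok = False

print(positional_ok)                         # False: keyword-only rejects positional
print(Wrapper().forward("What is 2 + 2?"))   # answer to: What is 2 + 2?
```

The wrapper class does nothing except bridge calling conventions, which is exactly why the caveat callout recommends it.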
@@ -188,7 +210,18 @@ After the program is deployed, you can test it with the following command:
 You should see the response like below:
 
 ```json
-{"choices": [{"index": 0, "message": {"role": "assistant", "content": "{\"reasoning\": \"The question asks for the sum of 2 and 2. To find the answer, we simply add the two numbers together: 2 + 2 = 4.\", \"answer\": \"4\"}"}, "finish_reason": "stop"}]}
+{
+    "choices": [
+        {
+            "index": 0,
+            "message": {
+                "role": "assistant",
+                "content": "{\"reasoning\": \"The question asks for the sum of 2 and 2. To find the answer, we simply add the two numbers together: 2 + 2 = 4.\", \"answer\": \"4\"}"
+            },
+            "finish_reason": "stop"
+        }
+    ]
+}
 ```
 
 For complete guide on how to deploy a DSPy program with MLflow, and how to customize the deployment, please refer to the
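One detail the reformatted response makes easier to see: the DSPy prediction arrives JSON-encoded inside `message.content`, so a client has to decode twice. A stdlib-only sketch (the response body below is copied from the docs example above, not fetched from a live server):

```python
import json

# Response body as shown in the example above.
raw = ('{"choices": [{"index": 0, "message": {"role": "assistant", '
       '"content": "{\\"reasoning\\": \\"The question asks for the sum of 2 and 2. '
       'To find the answer, we simply add the two numbers together: 2 + 2 = 4.\\", '
       '\\"answer\\": \\"4\\"}"}, "finish_reason": "stop"}]}')

body = json.loads(raw)                            # first decode: the envelope
content = body["choices"][0]["message"]["content"]
prediction = json.loads(content)                  # second decode: the embedded prediction
print(prediction["answer"])                       # 4
```

Skipping the second `json.loads` and treating `content` as a dict is a common mistake with this response shape; `content` is a string until decoded.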
