-To build a Nucleus model server, you will need a directory which contains your source code as well as a model server configuration file (e.g. `model-server-config.yaml`).
+To build a Nucleus model server, you will need a directory which contains your source code as well as a model server configuration file (e.g. `nucleus.yaml`).

## Model server configuration schema
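At minimum, `nucleus.yaml` declares the handler type and the path to the handler implementation. The sketch below is illustrative and uses only the fields that appear in the examples later in this file; everything else is governed by the configuration schema below.

```yaml
# nucleus.yaml (minimal sketch; only fields shown elsewhere in this file)

type: python       # handler type (python or tensorflow)
path: handler.py   # handler implementation, relative to the project directory
```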
@@ -160,7 +160,7 @@ For example:

```text
./my-classifier/
-├── model-server-config.yaml
+├── nucleus.yaml
├── handler.py
├── ...
└── requirements.txt
@@ -189,7 +189,7 @@ Nucleus supports installing Conda packages. We recommend only using Conda when y

```text
./my-classifier/
-├── model-server-config.yaml
+├── nucleus.yaml
├── handler.py
├── ...
└── conda-packages.txt
@@ -228,7 +228,7 @@ Python packages can also be installed by providing a `setup.py` that describes y

```text
./my-classifier/
-├── model-server-config.yaml
+├── nucleus.yaml
├── handler.py
├── ...
├── mypkg
@@ -251,7 +251,7 @@ Nucleus looks for a file named `dependencies.sh` in the top level project direct

```text
./my-classifier/
-├── model-server-config.yaml
+├── nucleus.yaml
├── handler.py
├── ...
└── dependencies.sh
@@ -284,7 +284,7 @@ Here is a sample project directory

```text
./my-classifier/
-├── nucleus-model-server-config.yaml
+├── nucleus.yaml
├── handler.py
├── my-data.json
├── ...
@@ -344,20 +344,20 @@ When deploying a Nucleus model server to a [Cortex cluster](https://github.com/c

When deploying a Nucleus model server with the tensorflow type on a generic Kubernetes pod (not within Cortex), there are some additional things to keep in mind:

* A shared volume (at `/mnt`) must exist between the handler container and the TensorFlow Serving container.
-* The host of the TensorFlow Serving container has to be specified in the model server configuration (`model-server-config.yaml`) so that the handler container can connect to it.
+* The host of the TensorFlow Serving container has to be specified in the model server configuration (`nucleus.yaml`) so that the handler container can connect to it.

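For a plain Kubernetes deployment, both requirements above can be met with a two-container pod that mounts a shared `emptyDir` at `/mnt`. The sketch below is illustrative: the image names and the serving port are assumptions, not part of Nucleus itself; only the shared `/mnt` volume and the need to point the handler at the TensorFlow Serving container come from the bullets above.

```yaml
# Sketch of a generic Kubernetes pod running a tensorflow-type Nucleus server outside Cortex.
# Image names and the port are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: my-classifier
spec:
  volumes:
    - name: shared
      emptyDir: {}                 # shared volume between the two containers
  containers:
    - name: handler
      image: my-registry/my-classifier-handler:latest    # hypothetical handler image
      volumeMounts:
        - name: shared
          mountPath: /mnt          # must be mounted at /mnt in both containers
    - name: tensorflow-serving
      image: tensorflow/serving:latest                    # stock TF Serving image (illustrative)
      ports:
        - containerPort: 8500      # TF Serving's default gRPC port
      volumeMounts:
        - name: shared
          mountPath: /mnt
```

Since containers in one pod share a network namespace, the TensorFlow Serving host set in `nucleus.yaml` would typically resolve to something like `localhost:8500`; the exact field used to set it is defined by the model server configuration schema above.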
## Multi-model

### Python Handler

#### Specifying models in Nucleus configuration

-##### `nucleus-model-server-config.yaml`
+##### `nucleus.yaml`

The directory `s3://cortex-examples/sklearn/mpg-estimator/linreg/` contains 4 different versions of the model.

```yaml
-# nucleus-model-server-config.yaml
+# nucleus.yaml

type: python
path: handler.py
@@ -391,10 +391,10 @@ class Handler:

#### Without specifying models in Nucleus configuration

-##### `nucleus-model-server-config.yaml`
+##### `nucleus.yaml`

```yaml
-# nucleus-model-server-config.yaml
+# nucleus.yaml

type: python
path: handler.py
@@ -429,10 +429,10 @@ class Handler:

### TensorFlow Handler

-#### `nucleus-model-server-config.yaml`
+#### `nucleus.yaml`

```yaml
-# nucleus-model-server-config.yaml
+# nucleus.yaml

type: tensorflow
path: handler.py
@@ -1180,7 +1180,7 @@ Whenever a model path is specified in an Nucleus configuration file, it should b

The most common pattern is to serve a single model per Nucleus server. The path to the model is specified in the `path` field in the `multi_model_reloading` configuration. For example:

```yaml
-# nucleus-model-server-config.yaml
+# nucleus.yaml

type: python
multi_model_reloading:
@@ -1192,7 +1192,7 @@ multi_model_reloading:

It is possible to serve multiple models from a single Nucleus server. The paths to the models are specified in the Nucleus configuration, either via the `multi_model_reloading.paths` or `multi_model_reloading.dir` field in the configuration. For example:

```yaml
-# nucleus-model-server-config.yaml
+# nucleus.yaml

type: python
multi_model_reloading:
@@ -1205,7 +1205,7 @@ multi_model_reloading:

or:

```yaml
-# nucleus-model-server-config.yaml
+# nucleus.yaml

type: python
multi_model_reloading:
@@ -1346,7 +1346,7 @@ Whenever a model path is specified in a Nucleus configuration file, it should be

The most common pattern is to serve a single model per Nucleus server. The path to the model is specified in the `path` field in the `models` configuration. For example:

```yaml
-# nucleus-model-server-config.yaml
+# nucleus.yaml

type: tensorflow
models:
@@ -1358,7 +1358,7 @@ models:

It is possible to serve multiple models from a single Nucleus server. The paths to the models are specified in the Nucleus configuration, either via the `models.paths` or `models.dir` field in the Nucleus configuration. For example:
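Mirroring the `multi_model_reloading` examples earlier in this file, a sketch of the two variants for the TensorFlow handler follows; the model names, S3 paths, and the key names used inside each `paths` entry are illustrative assumptions rather than part of the documented schema.

```yaml
# nucleus.yaml (sketch; model names, S3 paths, and per-entry keys are assumptions)

type: tensorflow
models:
  paths:
    - name: iris-classifier
      path: s3://my-bucket/models/iris-classifier/
    - name: mpg-estimator
      path: s3://my-bucket/models/mpg-estimator/
```

or, pointing at a directory that contains one subdirectory per model:

```yaml
# nucleus.yaml (sketch; the S3 path is illustrative)

type: tensorflow
models:
  dir: s3://my-bucket/models/
```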