Commit cda0cbd: Update version to 0.3.0

1 parent: ca6e84d

6 files changed (+8 additions, -8 deletions)

README.md (3 additions, 3 deletions)
````diff
@@ -60,7 +60,7 @@ Some of Nucleus's features include:
 <!-- UPDATE THIS VERSION ON EACH RELEASE (it's better than using "master") -->
 
 ```bash
-pip install git+https://github.com/cortexlabs/nucleus.git@0.2.2
+pip install git+https://github.com/cortexlabs/nucleus.git@0.3.0
 ```
 
 ## Example usage
````
````diff
@@ -1136,7 +1136,7 @@ class Handler:
     # define any handler methods for HTTP/gRPC workloads here
 ```
 
-When explicit model paths are specified in the Python handler's Nucleus configuration, Nucleus provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/nucleus/tree/master/src/cortex/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.
+When explicit model paths are specified in the Python handler's Nucleus configuration, Nucleus provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/nucleus/tree/0.3/src/cortex/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.
 
 When multiple models are defined using the Handler's `multi_model_reloading` field, the `model_client.get_model()` method expects an argument `model_name` which must hold the name of the model that you want to load (for example: `self.client.get_model("text-generator")`). There is also an optional second argument to specify the model version.
````
````diff
@@ -1310,7 +1310,7 @@ class Handler:
     # define any handler methods for HTTP/gRPC workloads here
 ```
 
-Nucleus provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/nucleus/tree/master/src/cortex/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.
+Nucleus provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/nucleus/tree/0.3/src/cortex/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.
 
 When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
````
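The TensorFlow handler described in the hunk above can be sketched similarly; the constructor shape and `predict(payload, "text-generator")` call follow the README, while the `handle_post` name and the pre/postprocessing steps are illustrative assumptions:

```python
# Illustrative TensorFlow handler; tensorflow_client is injected by
# Nucleus and forwards predictions to a TensorFlow Serving container.
# The handler method name and payload shape are assumptions.

class Handler:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client
        self.config = config

    def handle_post(self, payload):
        # Optional preprocessing of the JSON payload goes here.
        model_input = {"text": payload["text"]}
        # With multiple models under the `models` field, pass the model
        # name as the second argument (a third selects the version).
        prediction = self.client.predict(model_input, "text-generator")
        # Optional postprocessing of the prediction goes here.
        return {"prediction": prediction}
```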

nucleus/templates/handler.Dockerfile (1 addition, 1 deletion)
```diff
@@ -1,6 +1,6 @@
 # to replace when building the dockerfile
 FROM $BASE_IMAGE
-ENV CORTEX_MODEL_SERVER_VERSION=master
+ENV CORTEX_MODEL_SERVER_VERSION=0.3.0
 
 RUN apt-get update -qq && apt-get install -y -q \
     build-essential \
```

nucleus/templates/tfs.Dockerfile (1 addition, 1 deletion)
```diff
@@ -1,2 +1,2 @@
 FROM $BASE_IMAGE
-ENV CORTEX_MODEL_SERVER_VERSION=master
+ENV CORTEX_MODEL_SERVER_VERSION=0.3.0
```
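The `$BASE_IMAGE` placeholder and the "to replace when building the dockerfile" comment in these templates suggest they are rendered by substitution before `docker build`. A sketch of that rendering step (the actual build tooling is not shown in this diff; the base image value below is an illustrative assumption):

```python
# Sketch: render a Dockerfile template by substituting $BASE_IMAGE.
# TEMPLATE mirrors the two-line tfs.Dockerfile after this commit; the
# tensorflow/serving base image is an assumption for illustration.
TEMPLATE = "FROM $BASE_IMAGE\nENV CORTEX_MODEL_SERVER_VERSION=0.3.0\n"

def render(template: str, base_image: str) -> str:
    # Plain string substitution; the real build script may differ.
    return template.replace("$BASE_IMAGE", base_image)

dockerfile = render(TEMPLATE, "tensorflow/serving:2.3.0")
print(dockerfile.splitlines()[0])  # -> FROM tensorflow/serving:2.3.0
```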

setup.py (1 addition, 1 deletion)
```diff
@@ -14,7 +14,7 @@
 
 import setuptools
 
-CORTEX_MODEL_SERVER_VERSION = "master"
+CORTEX_MODEL_SERVER_VERSION = "0.3.0"
 
 with open("requirements.txt") as fp:
     install_requires = fp.read()
```

src/cortex/cortex_internal/consts.py (1 addition, 1 deletion)
```diff
@@ -13,4 +13,4 @@
 # limitations under the License.
 
 SINGLE_MODEL_NAME = "_cortex_default"
-MODEL_SERVER_VERSION = "master"
+MODEL_SERVER_VERSION = "0.3.0"
```

src/cortex/setup.py (1 addition, 1 deletion)
```diff
@@ -17,7 +17,7 @@
 import pkg_resources
 from setuptools import setup, find_packages
 
-CORTEX_MODEL_SERVER_VERSION = "master"
+CORTEX_MODEL_SERVER_VERSION = "0.3.0"
 
 with pathlib.Path("cortex_internal.requirements.txt").open() as requirements_txt:
     install_requires = [
```
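This commit updates the same version string in four places (two Dockerfile templates, two `setup.py` files, and `consts.py`). A release check like the following can catch a missed file; the regex and the script itself are assumptions for illustration, not part of the repository:

```python
# Sketch: verify the duplicated version strings stay in sync on release.
# The pattern matches both the Python form (VERSION = "0.3.0") and the
# Dockerfile form (ENV ...VERSION=0.3.0) seen in this commit's diff.
import re

EXPECTED = "0.3.0"

def extract_version(text: str) -> str:
    match = re.search(
        r'(?:CORTEX_)?MODEL_SERVER_VERSION\s*=\s*"?([^"\n]+)"?', text
    )
    if match is None:
        raise ValueError("no version found")
    return match.group(1)

# Example inputs mirroring lines from this commit:
assert extract_version('ENV CORTEX_MODEL_SERVER_VERSION=0.3.0') == EXPECTED
assert extract_version('MODEL_SERVER_VERSION = "0.3.0"') == EXPECTED
```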

0 commit comments