docs/source/user_guide/cli/opctl/localdev/condapack.rst (+10, -3)
@@ -24,6 +24,14 @@ create
Build conda packs from your workstation using the ``ads opctl conda create`` subcommand.

.. admonition:: Tip

    To publish a conda pack that is natively installed on an Oracle Linux host (compute or laptop), use the ``NO_CONTAINER`` environment variable to remove the dependency on the ml-job container image:

- Publish conda pack to the object storage bucket from your laptop or workstation. You can use this conda pack inside ``OCI Data Science Jobs``, ``OCI Data Science Notebooks`` and ``OCI Data Science Model Deployment``
+ Publish conda pack to the object storage bucket from your laptop or workstation. You can use this conda pack inside ``OCI Data Science Service`` or ``Data Flow service``.

install
-------
@@ -43,4 +50,4 @@ Install conda pack using its URI. The conda pack can be used inside the docker i
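The command that the tip above refers to was not captured in this diff view. Below is a minimal sketch of the idea, written as a ``python3`` block to match the code-block style used elsewhere in these docs; the ``-s <slug>`` flag and the slug placeholder are assumptions, so confirm the exact options with ``ads opctl conda publish --help``:

.. code-block:: python3

    import os
    import subprocess

    # NO_CONTAINER=1 asks ads opctl to run natively instead of inside the ml-job container image.
    env = dict(os.environ, NO_CONTAINER="1")

    # "-s <slug-of-the-conda-pack>" is an assumed flag/placeholder; replace with your pack's slug.
    subprocess.run(
        ["ads", "opctl", "conda", "publish", "-s", "<slug-of-the-conda-pack>"],
        env=env,
        check=True,
    )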
docs/source/user_guide/model_registration/frameworks/lightgbmmodel.rst (+122, -0)
@@ -121,6 +121,128 @@ Run Prediction against Endpoint
[1,0,...,1]

Run Prediction with oci raw-request command
===========================================

Model deployment endpoints can be invoked with the OCI CLI. The examples below invoke a model deployment through the CLI with different payload types: ``json``, ``numpy.ndarray``, ``pandas.core.frame.DataFrame`` or ``dict``.
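The CLI examples that follow in the original file were not captured in this diff view. As a stand-in, here is a hedged sketch of one way to build such payloads and send them through ``oci raw-request`` from Python; the endpoint URI is a placeholder, and the ``{"data": ..., "data_type": ...}`` body layout is an assumption about what the generated ``score.py`` expects, so adapt the body to your own ``predict()`` signature:

.. code-block:: python3

    import json
    import subprocess

    import pandas as pd

    # Placeholder endpoint URI; copy the real one from your model deployment's details page.
    uri = "https://modeldeployment.<region>.oci.customer-oci.com/<model-deployment-ocid>/predict"

    features = pd.DataFrame([[0.1, 1.2, 3.4]], columns=["f0", "f1", "f2"])  # sample row

    # Possible request bodies for the payload types mentioned above (key names are assumptions).
    body_ndarray = json.dumps({"data": features.to_numpy().tolist(), "data_type": "numpy.ndarray"})
    body_dataframe = json.dumps({"data": features.to_json(), "data_type": "pandas.core.frame.DataFrame"})
    body_dict = json.dumps({"data": features.to_dict(), "data_type": "dict"})

    # oci raw-request signs the call with the credentials from your OCI CLI config.
    subprocess.run(
        ["oci", "raw-request", "--http-method", "POST", "--target-uri", uri,
         "--request-body", body_ndarray],
        check=True,
    )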
docs/source/user_guide/model_registration/frameworks/pytorchmodel.rst (+137, -3)
@@ -267,17 +267,151 @@ Predict with Image
Predict Image by passing a uri, which can be http(s), a local path, or other URLs
(e.g. starting with “oci://”, “s3://”, and “gcs://”), of the image or a PIL.Image.Image object
- using the `image` argument in `predict()` to predict a single image.
+ using the ``image`` argument in ``predict()`` to predict a single image.
The image will be converted to a tensor and then serialized so it can be passed to the endpoint.
- You can catch the tensor in `score.py` to perform further transformation.
+ You can catch the tensor in ``score.py`` to perform further transformation.

.. note::

    The payload size limit is 10 MB. Read more about invoking a model deployment `here <https://docs.oracle.com/iaas/data-science/using/model-dep-invoke.htm#model_dep_invoke>`_.

Given the size limitation, the example below uses a resized image. To pass an image and invoke prediction, additional code inside ``score.py`` is required to preprocess the data. Open ``pytorch_model_artifact/score.py`` and update the ``pre_inference()`` method. The edits are highlighted:
.. code-block:: python3
    :emphasize-lines: 15-26

    def pre_inference(data, input_schema_path):
        """
        Preprocess json-serialized data to feed into predict function.

        Parameters
        ----------
        data: Data format as expected by the predict API of the core estimator.
        input_schema_path: path of input schema.

        Returns
        -------
        data: Data format after any processing.
        """
        data = deserialize(data, input_schema_path)
        import torchvision.transforms as transforms
        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.Normalize(
                mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]
            ),
        ])
        input_tensor = preprocess(data)
        input_batch = input_tensor.unsqueeze(0)
        return input_batch
Save ``score.py`` and verify prediction works:

.. code-block:: python3

    >>> uri = ("https://github.com/oracle-samples/oci-data-science-ai-samples/tree/master/model_deploy_examples/images/dog_resized.jpg")
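The rest of the verification snippet is cut off in this diff view. A minimal sketch of how it might continue, assuming the ``PyTorchModel`` instance built earlier in this guide is named ``pytorch_model`` and that ``verify()`` accepts the same ``image`` argument as ``predict()``:

.. code-block:: python3

    >>> # Hypothetical continuation: pytorch_model is the PyTorchModel from the earlier steps.
    >>> result = pytorch_model.verify(image=uri)
    >>> print(result)  # the local prediction produced by score.py, before deploying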