Commit e094c74

Commit message: ODSC-38627: update docs
1 parent 55530ee · commit e094c74

File tree: 4 files changed (+89 −39 lines)

docs/source/user_guide/model_registration/framework_specific_instruction.rst

Lines changed: 2 additions & 1 deletion
@@ -9,6 +9,7 @@
     frameworks/sparkpipelinemodel
     frameworks/lightgbmmodel
     frameworks/xgboostmodel
+    frameworks/huggingfacemodel
     frameworks/automlmodel
     frameworks/genericmodel
-
+

docs/source/user_guide/model_registration/frameworks/huggingface.rst renamed to docs/source/user_guide/model_registration/frameworks/huggingfacemodel.rst

Lines changed: 25 additions & 38 deletions
@@ -10,7 +10,7 @@ See `API Documentation <../../../ads.model_framework.html#ads.model.framework.hu
 Overview
 ========
 
-The ``ads.model.framework.huggingface_model.HuggingFacePipelineModel`` class in ADS is designed to allow you to rapidly get a HuggingFace Pipeline into production. The ``.prepare()`` method creates the model artifacts that are needed to deploy a functioning pipeline without you having to configure it or write code. However, you can customize the required ``score.py`` file.
+The ``ads.model.framework.huggingface_model.HuggingFacePipelineModel`` class in ADS is designed to allow you to rapidly get a HuggingFace pipeline into production. The ``.prepare()`` method creates the model artifacts that are needed to deploy a functioning pipeline without you having to configure it or write code. However, you can customize the required ``score.py`` file.
 
 .. include:: ../_template/overview.rst

@@ -22,15 +22,13 @@ Load a `ImageSegmentationPipeline <https://huggingface.co/docs/transformers/main
 
 .. code-block:: python3
 
-    from transformers import pipeline
-
-    segmenter = pipeline(task="image-segmentation", model="facebook/detr-resnet-50-panoptic", revision="fc15262")
-    preds = segmenter(
-        "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
-    )
-
-    preds
+    >>> from transformers import pipeline
+    >>> segmenter = pipeline(task="image-segmentation", model="facebook/detr-resnet-50-panoptic", revision="fc15262")
+    >>> preds = segmenter(
+    ...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
+    ... )
+    >>> preds
     [{'score': 0.987885,
       'label': 'LABEL_184',
       'mask': <PIL.Image.Image image mode=L size=960x686>},
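The hunk above shows that an image-segmentation pipeline returns a list of ``{'score': ..., 'label': ..., 'mask': ...}`` dicts, which the docs later post-process with a list comprehension. A minimal, dependency-free sketch of that post-processing pattern follows; the sample values and the 0.9 threshold are illustrative stand-ins, not real model output.

```python
# Hypothetical pipeline-style predictions, mirroring the
# [{'score': ..., 'label': ...}] format shown in the diff above.
preds = [
    {"score": 0.987885, "label": "LABEL_184"},
    {"score": 0.997345, "label": "snow"},
    {"score": 0.693342, "label": "cat"},
]

# Round scores for display, as the docs' list comprehension does.
summary = [{"score": round(p["score"], 4), "label": p["label"]} for p in preds]

# Keep only confident predictions (0.9 is an arbitrary illustrative threshold).
confident = [p for p in summary if p["score"] >= 0.9]
print(confident)
```

The same comprehension works unchanged on the real pipeline output, since only the ``score`` and ``label`` keys are touched.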
@@ -65,17 +63,10 @@ Prepare Model Artifact
     # More info here - https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
 
 
-Instantiate a ``HuggingFacePipelineModel()`` object with a HuggingFace Pipelines model. Each instance accepts the following parameters:
-
-* ``artifact_dir: str``. Artifact directory to store the files needed for deployment.
-* ``auth: (Dict, optional)``: Defaults to ``None``. The default authentication is set using the ``ads.set_auth`` API. To override the default, use ``ads.common.auth.api_keys()`` or ``ads.common.auth.resource_principal()`` and create the appropriate authentication signer and the ``**kwargs`` required to instantiate the ``IdentityClient`` object.
-* ``estimator: Callable``. Any model object generated by the PyTorch framework.
-* ``properties: (ModelProperties, optional)``. Defaults to ``None``. The ``ModelProperties`` object required to save and deploy model.
+Instantiate a ``HuggingFacePipelineModel()`` object with a HuggingFace pipeline. All the pipeline-related files are saved under the ``artifact_dir``.
 
 For more detailed information on the parameters that ``HuggingFacePipelineModel`` takes, refer to the `API Documentation <../../../ads.model_framework.html#ads.model.framework.huggingface_model.HuggingFacePipelineModel>`__
-All the pipelines related files are saved under the ``artifact_dir``.
 
-.. include:: ../_template/initialize.rst
 
 
 Summary Status
@@ -106,35 +97,31 @@ Deploy and Generate Endpoint
 
 >>> # Deploy and create an endpoint for the huggingface_pipeline_model
 >>> huggingface_pipeline_model.deploy(
-        display_name="HuggingFace Pipeline Model For Image Segmentation",
-        deployment_log_group_id="ocid1.loggroup.oc1.xxx.xxxxx",
-        deployment_access_log_id="ocid1.log.oc1.xxx.xxxxx",
-        deployment_predict_log_id="ocid1.log.oc1.xxx.xxxxx",
-    )
+...     display_name="HuggingFace Pipeline Model For Image Segmentation",
+...     deployment_log_group_id="ocid1.loggroup.oc1.xxx.xxxxx",
+...     deployment_access_log_id="ocid1.log.oc1.xxx.xxxxx",
+...     deployment_predict_log_id="ocid1.log.oc1.xxx.xxxxx",
+... )
 >>> print(f"Endpoint: {huggingface_pipeline_model.model_deployment.url}")
-.. parsed-literal::
 https://modeldeployment.{region}.oci.customer-oci.com/ocid1.datasciencemodeldeployment.oc1.xxx.xxxxx
 
 Run Prediction against Endpoint
 ===============================
 
 .. code-block:: python3
 
-    # Download an image
-    import PIL.Image
-    import requests
-    import cloudpickle
-    image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
-
-    image = PIL.Image.open(requests.get(image_url, stream=True).raw)
-    image_bytes = cloudpickle.dumps(image)
-
-    # Generate prediction by invoking the deployed endpoint
-    preds = huggingface_pipeline_model.predict(image)["prediction"]
-    print([{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds['prediction']])
+    >>> # Download an image
+    >>> import PIL.Image
+    >>> import requests
+    >>> import cloudpickle
+    >>> image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
+
+    >>> image = PIL.Image.open(requests.get(image_url, stream=True).raw)
+    >>> image_bytes = cloudpickle.dumps(image)
+
-.. parsed-literal::
+    >>> # Generate prediction by invoking the deployed endpoint
+    >>> preds = huggingface_pipeline_model.predict(image)["prediction"]
+    >>> print([{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds['prediction']])
     [{'score': 0.9879, 'label': 'LABEL_184'},
      {'score': 0.9973, 'label': 'snow'},
      {'score': 0.9972, 'label': 'cat'}]
@@ -157,6 +144,7 @@ Predict with Multiple Arguments
 If your model takes more than one argument, you can pass the arguments in through a dictionary with the keys as the argument names and the values as the argument values.
 
 .. code-block:: python3
+
 >>> your_huggingface_pipeline_model.verify({"parameter_name_1": "parameter_value_1", ..., "parameter_name_n": "parameter_value_n"})
 >>> your_huggingface_pipeline_model.predict({"parameter_name_1": "parameter_value_1", ..., "parameter_name_n": "parameter_value_n"})
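The dictionary form above works because a payload whose keys are argument names can be expanded into keyword arguments when the pipeline is invoked. A minimal sketch under that assumption follows; ``toy_pipeline`` and its parameters are hypothetical stand-ins, not the ADS or transformers implementation.

```python
# Stand-in for a multi-argument pipeline callable (hypothetical, for
# illustration only).
def toy_pipeline(images=None, candidate_labels=None):
    # Echo back which arguments arrived, one entry per candidate label.
    return [{"label": label, "images": images} for label in (candidate_labels or [])]

# A dict payload keyed by argument name...
payload = {"images": "parrots.png", "candidate_labels": ["animals", "humans"]}

# ...is equivalent to calling the pipeline with keyword arguments.
result = toy_pipeline(**payload)
print(result)
```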
@@ -166,7 +154,7 @@ Run Prediction with oci sdk
 
 Model deployment endpoints can be invoked with the oci sdk. This example invokes a model deployment with the oci sdk with a ``bytes`` payload:
 
-`bytes` payload example
+``bytes`` payload example
 ------------------------------
 
 .. code-block:: python3
@@ -183,7 +171,6 @@ Model deployment endpoints can be invoked with the oci sdk. This example invokes
 
 >>> preds = requests.post(endpoint, data=image_bytes, auth=ads.common.auth.default_signer()['signer'], headers=headers).json()
 >>> print([{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds['prediction']])
-.. parsed-literal::
 [{'score': 0.9879, 'label': 'LABEL_184'},
  {'score': 0.9973, 'label': 'snow'},
  {'score': 0.9972, 'label': 'cat'}]
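The ``bytes`` payload POSTed above is produced by ``cloudpickle.dumps()`` and reconstructed server-side. A dependency-free sketch of that round trip follows, using the standard-library ``pickle`` module, which shares the ``dumps()``/``loads()`` interface that ``cloudpickle`` extends; the payload contents are illustrative.

```python
import pickle

# Illustrative payload, shaped like the dict used elsewhere in these docs.
data = {"images": "pipeline-cat-chonk.jpeg", "candidate_labels": ["animals", "landscape"]}

# Serialize: this is what gets POSTed as application/octet-stream.
body = pickle.dumps(data)
assert isinstance(body, bytes)

# Deserialize: what the scoring endpoint can reconstruct from the body.
restored = pickle.loads(body)
print(restored == data)
```

Note that real image payloads (PIL objects) need ``cloudpickle`` rather than plain ``pickle`` in some environments; the round-trip shape is the same.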

docs/source/user_guide/model_registration/quick_start.rst

Lines changed: 61 additions & 0 deletions
@@ -257,6 +257,67 @@ Create a model, prepare it, verify that it works, save it to the model catalog,
     #Register TensorFlow model
     model_id = tf_model.save(display_name="TensorFlow Model")
 
+HuggingFace Pipelines
+---------------------
+
+.. code-block:: python3
+
+    from transformers import pipeline
+    from ads.model.framework.huggingface_model import HuggingFacePipelineModel
+
+    import tempfile
+    import PIL.Image
+    import ads
+    import requests
+    import cloudpickle
+
+    ## download the image
+    image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
+    image = PIL.Image.open(requests.get(image_url, stream=True).raw)
+    image_bytes = cloudpickle.dumps(image)
+
+    ## download the pretrained model
+    classifier = pipeline(model="openai/clip-vit-large-patch14")
+    classifier(
+        images=image,
+        candidate_labels=["animals", "humans", "landscape"],
+    )
+
+    ## Instantiate a HuggingFacePipelineModel instance
+    zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())
+
+    ## Prepare a model artifact
+    conda = "oci://bucket@namespace/path/to/conda/pack"
+    python_version = "3.8"
+    zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version=python_version, force_overwrite=True)
+
+    ## Test data
+    data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
+    body = cloudpickle.dumps(data)  # convert the payload to bytes
+
+    ## Verify
+    zero_shot_image_classification_model.verify(data=data)
+    zero_shot_image_classification_model.verify(data=body)
+
+    ## Save
+    zero_shot_image_classification_model.save()
+
+    ## Deploy
+    log_group_id = "<log_group_id>"
+    log_id = "<log_id>"
+    zero_shot_image_classification_model.deploy(deployment_bandwidth_mbps=100,
+                                                wait_for_completion=False,
+                                                deployment_log_group_id=log_group_id,
+                                                deployment_access_log_id=log_id,
+                                                deployment_predict_log_id=log_id)
+    zero_shot_image_classification_model.predict(image)
+    zero_shot_image_classification_model.predict(body)
+
+    ### Invoke the model by sending bytes
+    auth = ads.common.auth.default_signer()['signer']
+    endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
+    headers = {"Content-Type": "application/octet-stream"}
+    requests.post(endpoint, data=body, auth=auth, headers=headers).json()
+
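The quick-start example above creates its ``artifact_dir`` with ``tempfile.mkdtemp()``, and ``.prepare()`` then populates that directory with artifacts such as the ``score.py`` scoring script. A small sketch of that directory pattern follows; the file written here is a placeholder to illustrate the layout, not real ADS output.

```python
import os
import tempfile

# mkdtemp() creates a fresh, writable directory unique to this run.
artifact_dir = tempfile.mkdtemp()

# Placeholder standing in for an artifact that .prepare() would generate.
placeholder = os.path.join(artifact_dir, "score.py")
with open(placeholder, "w") as f:
    f.write("# generated scoring script would live here\n")

print(os.path.isdir(artifact_dir), os.listdir(artifact_dir))
```

Using a temp directory keeps repeated ``prepare()`` runs from colliding; for a persistent artifact location, pass any writable path instead.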
 Other Frameworks
 ----------------

docs/source/user_guide/model_serialization/index.rst

Lines changed: 1 addition & 0 deletions
@@ -13,6 +13,7 @@ Model Serialization
     quick_start
     automlmodel
     genericmodel
+    huggingfacemodel
     lightgbmmodel
     pytorchmodel
     sklearnmodel
