Commit 55530ee
ODSC-38627: adding doc
1 parent 071979a commit 55530ee

2 files changed: +251 -1 lines changed

Lines changed: 250 additions & 0 deletions
@@ -0,0 +1,250 @@
.. HuggingFacePipelineModel:

HuggingFacePipelineModel
************************

.. versionadded:: 2.8.2

See `API Documentation <../../../ads.model_framework.html#ads.model.framework.huggingface_model.HuggingFacePipelineModel>`__

Overview
========

The ``ads.model.framework.huggingface_model.HuggingFacePipelineModel`` class in ADS is designed to allow you to rapidly get a HuggingFace pipeline into production. The ``.prepare()`` method creates the model artifacts that are needed to deploy a functioning pipeline without you having to configure it or write code. However, you can customize the generated ``score.py`` file if needed.

.. include:: ../_template/overview.rst

The following steps take your trained ``HuggingFacePipelineModel`` model and deploy it into production with a few lines of code.

**Create a HuggingFace Pipeline**

Load an `ImageSegmentationPipeline <https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.ImageSegmentationPipeline>`_ pretrained model.

.. code-block:: python3

    from transformers import pipeline

    segmenter = pipeline(task="image-segmentation", model="facebook/detr-resnet-50-panoptic", revision="fc15262")
    preds = segmenter(
        "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
    )
    preds

.. parsed-literal::

    [{'score': 0.987885,
      'label': 'LABEL_184',
      'mask': <PIL.Image.Image image mode=L size=960x686>},
     {'score': 0.997345,
      'label': 'snow',
      'mask': <PIL.Image.Image image mode=L size=960x686>},
     {'score': 0.997247,
      'label': 'cat',
      'mask': <PIL.Image.Image image mode=L size=960x686>}]

Prepare Model Artifact
======================

.. code-block:: python3

    >>> from ads.common.model_metadata import UseCaseType
    >>> from ads.model import HuggingFacePipelineModel

    >>> # Prepare the model
    >>> artifact_dir = "huggingface_pipeline_model_artifact"
    >>> huggingface_pipeline_model = HuggingFacePipelineModel(segmenter, artifact_dir=artifact_dir)
    >>> huggingface_pipeline_model.prepare(
    ...     inference_conda_env="<your-conda-pack-path>",
    ...     inference_python_version="<your-python-version>",
    ...     training_conda_env="<your-conda-pack-path>",
    ...     use_case_type=UseCaseType.OTHER,
    ...     force_overwrite=True,
    ... )
    >>> # You don't need to modify the generated score.py. The model can be loaded by transformers.pipeline.
    >>> # More info here - https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline

Instantiate a ``HuggingFacePipelineModel()`` object with a HuggingFace Pipelines model. Each instance accepts the following parameters:

* ``artifact_dir: str``. Artifact directory to store the files needed for deployment.
* ``auth: (Dict, optional)``. Defaults to ``None``. The default authentication is set using the ``ads.set_auth`` API. To override the default, use ``ads.common.auth.api_keys()`` or ``ads.common.auth.resource_principal()`` to create the appropriate authentication signer and the ``**kwargs`` required to instantiate the ``IdentityClient`` object.
* ``estimator: Callable``. Any pipeline object created with the HuggingFace transformers framework.
* ``properties: (ModelProperties, optional)``. Defaults to ``None``. The ``ModelProperties`` object required to save and deploy the model.

For more detailed information on the parameters that ``HuggingFacePipelineModel`` takes, refer to the `API Documentation <../../../ads.model_framework.html#ads.model.framework.huggingface_model.HuggingFacePipelineModel>`__.

All the pipeline-related files are saved under ``artifact_dir``.

.. include:: ../_template/initialize.rst
Summary Status
==============

.. include:: ../_template/summary_status.rst

.. figure:: ../figures/summary_status.png
    :align: center

Register Model
==============

.. code-block:: python3

    >>> # Register the model
    >>> model_id = huggingface_pipeline_model.save()

.. parsed-literal::

    Model is successfully loaded.
    ['.model-ignore', 'score.py', 'config.json', 'runtime.yaml', 'preprocessor_config.json', 'pytorch_model.bin']

    'ocid1.datasciencemodel.oc1.xxx.xxxxx'

Deploy and Generate Endpoint
============================

.. code-block:: python3

    >>> # Deploy and create an endpoint for the huggingface_pipeline_model
    >>> huggingface_pipeline_model.deploy(
            display_name="HuggingFace Pipeline Model For Image Segmentation",
            deployment_log_group_id="ocid1.loggroup.oc1.xxx.xxxxx",
            deployment_access_log_id="ocid1.log.oc1.xxx.xxxxx",
            deployment_predict_log_id="ocid1.log.oc1.xxx.xxxxx",
        )
    >>> print(f"Endpoint: {huggingface_pipeline_model.model_deployment.url}")

.. parsed-literal::

    https://modeldeployment.{region}.oci.customer-oci.com/ocid1.datasciencemodeldeployment.oc1.xxx.xxxxx

Run Prediction against Endpoint
===============================

.. code-block:: python3

    # Download an image
    import PIL.Image
    import requests
    import cloudpickle

    image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
    image = PIL.Image.open(requests.get(image_url, stream=True).raw)
    image_bytes = cloudpickle.dumps(image)

    # Generate prediction by invoking the deployed endpoint
    preds = huggingface_pipeline_model.predict(image)["prediction"]
    print([{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds])

.. parsed-literal::

    [{'score': 0.9879, 'label': 'LABEL_184'},
     {'score': 0.9973, 'label': 'snow'},
     {'score': 0.9972, 'label': 'cat'}]
Predict with Image
------------------

Pass a ``PIL.Image.Image`` object using the ``data`` argument in ``predict()`` to predict a single image.
The image is converted to bytes using cloudpickle so that it can be passed to the endpoint.
It is loaded back into a ``PIL.Image.Image`` in ``score.py`` before being passed into the pipeline.

.. note::

    - The payload size limit is 10 MB. Read more about invoking a model deployment `here <https://docs.oracle.com/iaas/data-science/using/model-dep-invoke.htm#model_dep_invoke>`_.
    - Model deployments currently do not have internet access (coming soon), so you cannot pass in a URL.
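The serialize-then-restore flow described above can be sketched as a round trip. This is a minimal sketch using the standard ``pickle`` module, which shares its ``dumps``/``loads`` interface with cloudpickle; the payload dictionary below is an illustrative stand-in for a real ``PIL.Image.Image``:

```python
import pickle  # the docs use cloudpickle; dumps/loads behave the same way here

# Stand-in payload -- a real call would serialize a PIL.Image.Image instead.
payload = {"candidate_labels": ["animals", "humans", "landscape"]}

body = pickle.dumps(payload)   # client side: object -> bytes for the endpoint
restored = pickle.loads(body)  # score.py side: bytes -> object for the pipeline

assert restored == payload
# The deployment rejects payloads over 10 MB, so check before sending.
assert len(body) < 10 * 1024 * 1024
```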
Predict with Multiple Arguments
-------------------------------

If your model takes more than one argument, you can pass the arguments in as a dictionary, with the keys as the argument names and the values as the argument values.

.. code-block:: python3

    >>> your_huggingface_pipeline_model.verify({"parameter_name_1": "parameter_value_1", ..., "parameter_name_n": "parameter_value_n"})
    >>> your_huggingface_pipeline_model.predict({"parameter_name_1": "parameter_value_1", ..., "parameter_name_n": "parameter_value_n"})
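Such a dictionary maps onto the keyword arguments of the underlying pipeline call. A minimal sketch, using a hypothetical stand-in function in place of a real pipeline:

```python
# Hypothetical stand-in for a pipeline that takes several arguments.
def fake_pipeline(images, candidate_labels):
    return {"n_images": len(images), "n_labels": len(candidate_labels)}

# The dictionary keys must match the pipeline's argument names.
payload = {"images": ["parrots.png"], "candidate_labels": ["animals", "humans", "landscape"]}

# Equivalent to fake_pipeline(images=["parrots.png"], candidate_labels=[...]).
result = fake_pipeline(**payload)
```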
Run Prediction with the OCI SDK
===============================

Model deployment endpoints can be invoked with the OCI SDK. This example invokes a model deployment with the OCI SDK using a ``bytes`` payload:

``bytes`` payload example
-------------------------

.. code-block:: python3

    >>> # The OCI SDK must be installed for this example to function properly.
    >>> # Installation instructions can be found here: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/pythonsdk.htm

    >>> import requests
    >>> import oci
    >>> import ads
    >>> import cloudpickle

    >>> headers = {"Content-Type": "application/octet-stream"}
    >>> endpoint = huggingface_pipeline_model.model_deployment.url + "/predict"

    >>> preds = requests.post(endpoint, data=image_bytes, auth=ads.common.auth.default_signer()['signer'], headers=headers).json()
    >>> print([{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds['prediction']])

.. parsed-literal::

    [{'score': 0.9879, 'label': 'LABEL_184'},
     {'score': 0.9973, 'label': 'snow'},
     {'score': 0.9972, 'label': 'cat'}]
Example
=======

.. code-block:: python3

    from ads.model import HuggingFacePipelineModel
    from transformers import pipeline

    import ads
    import cloudpickle
    import PIL.Image
    import requests
    import tempfile

    ## Download the image
    image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
    image = PIL.Image.open(requests.get(image_url, stream=True).raw)
    image_bytes = cloudpickle.dumps(image)

    ## Download the pretrained model
    classifier = pipeline(model="openai/clip-vit-large-patch14")
    classifier(
        images=image,
        candidate_labels=["animals", "humans", "landscape"],
    )

    ## Initiate a HuggingFacePipelineModel instance
    zero_shot_image_classification_model = HuggingFacePipelineModel(classifier, artifact_dir=tempfile.mkdtemp())

    ## Prepare a model artifact
    conda = "oci://bucket@namespace/path/to/conda/pack"
    python_version = "3.8"
    zero_shot_image_classification_model.prepare(inference_conda_env=conda, inference_python_version=python_version, force_overwrite=True)

    ## Test data
    data = {"images": image, "candidate_labels": ["animals", "humans", "landscape"]}
    body = cloudpickle.dumps(data)  # convert the payload to bytes

    ## Verify
    zero_shot_image_classification_model.verify(data=data)
    zero_shot_image_classification_model.verify(data=body)

    ## Save
    zero_shot_image_classification_model.save()

    ## Deploy
    log_group_id = "<log_group_id>"
    log_id = "<log_id>"
    zero_shot_image_classification_model.deploy(
        deployment_bandwidth_mbps=100,
        wait_for_completion=False,
        deployment_log_group_id=log_group_id,
        deployment_access_log_id=log_id,
        deployment_predict_log_id=log_id,
    )
    zero_shot_image_classification_model.predict(data)
    zero_shot_image_classification_model.predict(body)

    ### Invoke the model by sending bytes
    auth = ads.common.auth.default_signer()['signer']
    endpoint = zero_shot_image_classification_model.model_deployment.url + "/predict"
    headers = {"Content-Type": "application/octet-stream"}
    requests.post(endpoint, data=body, auth=auth, headers=headers).json()

docs/source/user_guide/model_registration/frameworks/pytorchmodel.rst

Lines changed: 1 addition & 1 deletion

@@ -215,7 +215,7 @@ Deploy and Generate Endpoint

     .. code-block:: python3

-    >>> # Deploy and create an endpoint for the TensorFlow model
+    >>> # Deploy and create an endpoint for the PyTorch model
     >>> pytorch_model.deploy(
             display_name="PyTorch Model For Classification",
             deployment_log_group_id="ocid1.loggroup.oc1.xxx.xxxxx",
