By default, ``serialize=True`` in the ``GenericModel`` class, and the model is serialized using cloudpickle. However, you can set ``serialize=False`` to disable this and serialize the model on your own; you then only need to copy the serialized model into the ``.artifact_dir``. This example shows step by step how to do that.
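The manual step amounts to writing the serialized model file into the artifact directory yourself. A minimal sketch, using a hypothetical ``DummyModel`` and the standard ``pickle`` module as a stand-in for cloudpickle, and a temporary directory in place of the real ``.artifact_dir``:

```python
import os
import pickle
import tempfile

# Stand-in estimator; in this example it would be the AutoMLx model.
class DummyModel:
    def predict(self, X):
        return [0 for _ in X]

# Hypothetical artifact directory; normally this is the .artifact_dir
# that GenericModel created for you.
artifact_dir = os.path.join(tempfile.mkdtemp(), "model_artifact_folder")
os.makedirs(artifact_dir, exist_ok=True)

# With serialize=False, ADS skips serialization, so we serialize the model
# ourselves and place the file in the artifact directory next to score.py.
model_path = os.path.join(artifact_dir, "model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(DummyModel(), f)
```

Any serializer works here, as long as the ``load_model`` function in ``score.py`` (shown below) knows how to read the file back.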
The example is illustrated using an AutoMLx model.

.. code-block:: python3

    import automl
    import ads
    from automl import init
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
Now copy the ``model.pkl`` file into the ``model_artifact_folder`` folder. Then open the ``score.py`` in the ``model_artifact_folder`` folder and implement the ``load_model`` function. You can also edit the ``pre_inference`` and ``post_inference`` functions. Below is an example implementation of the ``score.py``.
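The copy step can be done with ``shutil``. A short sketch, with hypothetical paths (substitute the location of your serialized model and your actual artifact folder):

```python
import os
import shutil
import tempfile

# Hypothetical working directory and paths for illustration only.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "model.pkl")
model_artifact_folder = os.path.join(workdir, "model_artifact_folder")
os.makedirs(model_artifact_folder, exist_ok=True)

# Placeholder file standing in for the real serialized model.
with open(src, "wb") as f:
    f.write(b"serialized-model-bytes")

# Copy the serialized model next to score.py inside the artifact folder.
shutil.copy(src, model_artifact_folder)
```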
Replace your score.py with the code below.

.. code-block:: python3
    :emphasize-lines: 28, 29, 30, 31, 122

    # score.py 1.0 generated by ADS 2.8.2 on 20230301_065458
    import os
    import sys
    import json
    from functools import lru_cache

    model_name = 'model.pkl'


    """
    Inference script. This script is used for prediction by scoring server when schema is known.
    """

    @lru_cache(maxsize=10)
    def load_model(model_file_name=model_name):
        """
        Loads model from the serialized format

        Returns
        -------
        model: a model instance on which predict API can be invoked
        Returns data type information fetched from input_schema.json.

        Parameters
        ----------
        input_schema_path: path of input schema.

        Returns
        -------
        data_type: data type fetched from input_schema.json.

        """
        data_type = {}
        if os.path.exists(input_schema_path):
            schema = json.load(open(input_schema_path))
            for col in schema['schema']:
                data_type[col['name']] = col['dtype']
        else:
            print("input_schema has to be passed in in order to recover the same data type. Pass `X_sample` in the `ads.model.framework.sklearn_model.SklearnModel.prepare` function to generate the input_schema. Otherwise, the data type might be changed after serialization/deserialization.")
        return data_type

    def deserialize(data, input_schema_path):
        """
        Deserialize json serialization data to data in original type when sent to predict.

        Parameters
        ----------
        data: serialized input data.
        input_schema_path: path of input schema.

        Returns
        -------
        data: deserialized input data.

        """
        import pandas as pd
        import numpy as np
        import base64
        from io import BytesIO
        if isinstance(data, bytes):
            return data

        data_type = data.get('data_type', '') if isinstance(data, dict) else ''
        json_data = data.get('data', data) if isinstance(data, dict) else data
        Returns prediction given the model and data to predict

        Parameters
        ----------
        model: Model instance returned by load_model API.
        data: Data format as expected by the predict API of the core estimator. For example, in case of scikit-learn models it could be a numpy array, a list of lists, or a pandas DataFrame.
        input_schema_path: path of input schema.

        Returns
        -------
        predictions: Output from scoring server
            Format: {'prediction': output from model.predict method}

        """
        features = pre_inference(data, input_schema_path)
        yhat = post_inference(
            model.predict(features)
        )
        return {'prediction': yhat}
Save the ``score.py`` and call ``verify`` to check that it works locally.
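Before running the verification, you can also sanity-check the ``score.py`` contract by hand: load the pickle once (cached) and confirm that predictions come back under the ``'prediction'`` key. A self-contained sketch with a hypothetical dummy model (this mimics the ``load_model``/``predict`` shape of the generated ``score.py``, it is not the ADS ``verify`` call itself):

```python
import os
import pickle
import tempfile
from functools import lru_cache

# Hypothetical model standing in for the real serialized estimator.
class DummyModel:
    def predict(self, X):
        return [sum(row) for row in X]

artifact_dir = tempfile.mkdtemp()
model_name = "model.pkl"
with open(os.path.join(artifact_dir, model_name), "wb") as f:
    pickle.dump(DummyModel(), f)

# Mirrors the score.py contract: load the pickle once (lru_cache) and
# return predictions wrapped in a {'prediction': ...} dict.
@lru_cache(maxsize=10)
def load_model(model_file_name=model_name):
    with open(os.path.join(artifact_dir, model_file_name), "rb") as f:
        return pickle.load(f)

def predict(data, model=None):
    model = model or load_model()
    return {"prediction": model.predict(data)}

print(predict([[1, 2], [3, 4]]))  # → {'prediction': [3, 7]}
```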