By default, ``serialize`` in the ``GenericModel`` class is ``True``, and the model is serialized using cloudpickle. However, you can set ``serialize=False`` to disable this and serialize the model on your own; you then just need to copy the serialized model into the ``.artifact_dir``. This example shows, step by step, how to do that.
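The idea can be sketched without ADS at all: serialize the estimator yourself and drop the file into the artifact directory. In this minimal sketch, ``TinyModel``, the temp directory, and plain ``pickle`` (standing in for cloudpickle) are all illustrative stand-ins, not the real training flow.

```python
import os
import pickle
import tempfile

# Stand-in estimator; in the real flow this is your trained model.
class TinyModel:
    def predict(self, x):
        return [v * 2 for v in x]

# Stand-in for the artifact directory created with serialize=False.
artifact_dir = tempfile.mkdtemp()

# Serialize the model yourself and copy the result into the artifact dir.
model_path = os.path.join(artifact_dir, "model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(TinyModel(), f)

# The scoring side later deserializes from the same path.
with open(model_path, "rb") as f:
    restored = pickle.load(f)
print(restored.predict([1, 2, 3]))
```

The only contract that matters is that ``score.py``'s ``load_model`` knows how to read back whatever format you wrote.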
The example is illustrated using an AutoMLx model.

.. code-block:: python3

    import automl
    import ads
    from automl import init
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
Now copy the ``model.pkl`` file into the ``model_artifact_folder`` folder. Then open the ``score.py`` in the ``model_artifact_folder`` folder and implement the ``load_model`` function. You can also edit the ``pre_inference`` and ``post_inference`` functions. Below is an example implementation of the ``score.py``.
Replace your score.py with the code below.

.. code-block:: python3
    :emphasize-lines: 28, 29, 30, 31, 122

    # score.py 1.0 generated by ADS 2.8.2 on 20230301_065458
    import os
    import sys
    import json
    from functools import lru_cache

    model_name = 'model.pkl'


    """
    Inference script. This script is used for prediction by scoring server when schema is known.
    """

    @lru_cache(maxsize=10)
    def load_model(model_file_name=model_name):
        """
        Loads model from the serialized format

        Returns
        -------
        model: a model instance on which predict API can be invoked
        """
        model_dir = os.path.dirname(os.path.realpath(__file__))
        if model_dir not in sys.path:
            sys.path.insert(0, model_dir)
        contents = os.listdir(model_dir)
        if model_file_name in contents:
            import automl
            import cloudpickle
            with open(os.path.join(model_dir, model_file_name), "rb") as file:
                return cloudpickle.load(file)
        else:
            raise Exception(f"{model_file_name} is not found in model directory {model_dir}")


    def fetch_data_type_from_schema(input_schema_path=os.path.join(os.path.dirname(os.path.realpath(__file__)), "input_schema.json")):
        """
        Returns data type information fetch from input_schema.json.

        Parameters
        ----------
        input_schema_path: path of input schema.

        Returns
        -------
        data_type: data type fetch from input_schema.json.

        """
        data_type = {}
        if os.path.exists(input_schema_path):
            schema = json.load(open(input_schema_path))
            for col in schema['schema']:
                data_type[col['name']] = col['dtype']
        else:
            print("input_schema has to be passed in in order to recover the same data type. pass `X_sample` in `ads.model.framework.sklearn_model.SklearnModel.prepare` function to generate the input_schema. Otherwise, the data type might be changed after serialization/deserialization.")
        return data_type

    def deserialize(data, input_schema_path):
        """
        Deserialize json serialization data to data in original type when sent to predict.

        Parameters
        ----------
        data: serialized input data.
        input_schema_path: path of input schema.

        Returns
        -------
        data: deserialized input data.

        """
        import pandas as pd
        import numpy as np
        import base64
        from io import BytesIO
        if isinstance(data, bytes):
            return data

        data_type = data.get('data_type', '') if isinstance(data, dict) else ''
        json_data = data.get('data', data) if isinstance(data, dict) else data

        if "numpy.ndarray" in data_type:
            load_bytes = BytesIO(base64.b64decode(json_data.encode('utf-8')))
            return np.load(load_bytes, allow_pickle=True)
        if "pandas.core.series.Series" in data_type:
            return pd.Series(json_data)
        if "pandas.core.frame.DataFrame" in data_type or isinstance(json_data, str):
            return pd.read_json(json_data, dtype=fetch_data_type_from_schema(input_schema_path))
        if isinstance(json_data, dict):
            return pd.DataFrame.from_dict(json_data)
        return json_data

    def pre_inference(data, input_schema_path):
        """
        Preprocess data

        Parameters
        ----------
        data: Data format as expected by the predict API of the core estimator.
        input_schema_path: path of input schema.

        Returns
        -------
        data: Data format after any processing.

        """
        return deserialize(data, input_schema_path)

    def post_inference(yhat):
        """
        Post-process the model results

        Parameters
        ----------
        yhat: Data format after calling model.predict.

        Returns
        -------
        yhat: Data format after any processing.

        """
        return yhat.tolist()

    def predict(data, model=load_model(), input_schema_path=os.path.join(os.path.dirname(os.path.realpath(__file__)), "input_schema.json")):
        """
        Returns prediction given the model and data to predict

        Parameters
        ----------
        model: Model instance returned by load_model API.
        data: Data format as expected by the predict API of the core estimator. For example, for scikit-learn models it could be a numpy array, a list of lists, or a pandas DataFrame.
        input_schema_path: path of input schema.

        Returns
        -------
        predictions: Output from scoring server
            Format: {'prediction': output from model.predict method}

        """
        features = pre_inference(data, input_schema_path)
        yhat = post_inference(
            model.predict(features)
        )
        return {'prediction': yhat}
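The ``@lru_cache`` decorator on ``load_model`` means the serialized file is deserialized at most once per scoring-server process; subsequent calls return the cached instance. A stdlib-only sketch of that behavior, where a plain dict and ``pickle`` stand in for a real model and cloudpickle:

```python
import os
import pickle
import tempfile
from functools import lru_cache

# Stand-in serialized "model" written to a temp artifact directory.
model_dir = tempfile.mkdtemp()
with open(os.path.join(model_dir, "model.pkl"), "wb") as f:
    pickle.dump({"weights": [0.5, 1.5]}, f)

calls = {"count": 0}

@lru_cache(maxsize=10)
def load_model(model_file_name="model.pkl"):
    # Deserialization runs once per file name; repeat calls hit the cache.
    calls["count"] += 1
    with open(os.path.join(model_dir, model_file_name), "rb") as f:
        return pickle.load(f)

m1 = load_model()
m2 = load_model()
print(calls["count"], m1 is m2)  # file read once; both names point at one object
```

This is why expensive model loading belongs inside ``load_model`` rather than at module import time.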
Save the ``score.py`` file, and now call ``verify()`` to check that it works locally.
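What the local check amounts to is importing the artifact's ``score.py`` and invoking its ``predict`` before anything is deployed. A rough manual sketch of that idea, where the miniature ``score.py`` written below is a stand-in, not the generated one (``verify()`` additionally handles serialization and schema checks):

```python
import importlib.util
import os
import tempfile

# Write a stripped-down, stand-in score.py into a temp artifact directory.
artifact_dir = tempfile.mkdtemp()
score_src = '''
def load_model(model_file_name="model.pkl"):
    # Stand-in: a callable instead of a deserialized model.
    return lambda x: [v + 1 for v in x]

def predict(data, model=load_model()):
    return {"prediction": model(data)}
'''
with open(os.path.join(artifact_dir, "score.py"), "w") as f:
    f.write(score_src)

# Import score.py from the artifact directory and call predict locally.
spec = importlib.util.spec_from_file_location(
    "score", os.path.join(artifact_dir, "score.py")
)
score = importlib.util.module_from_spec(spec)
spec.loader.exec_module(score)

print(score.predict([1, 2, 3]))
```

If this round trip fails locally, it will also fail on the model deployment, so it is worth running before saving the model to the catalog.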