aisquared.serving package

Submodules

aisquared.serving.deploy_model module

aisquared.serving.deploy_model.deploy_model(saved_model: str, model_type: str, host: str = '127.0.0.1', port: int = 2244, custom_objects: dict | None = None, additional_functions_file: str | None = None)[source]

Deploy a model to a Flask server on the specified host and port

Parameters:
  • saved_model (str) – The path to the saved model directory or model file

  • model_type (str) – The type of model being deployed, for example 'keras'

  • host (str (default '127.0.0.1')) – The host to deploy to

  • port (int (default 2244)) – The port to deploy to

  • custom_objects (dict or None (default None)) – Any custom objects to load when using a BeyondML model

  • additional_functions_file (str or None (default None)) – Path to a Python file containing additional functions used during the prediction process; if supplied, these functions must be named preprocess and postprocess (see the sketch following this list)
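
For illustration, a minimal additional_functions_file might look like the following. The function names preprocess and postprocess are required by the parameter contract above, but the file name and the function bodies are hypothetical:

# additional_functions.py (hypothetical file name)
import numpy as np

def preprocess(data):
    # Hypothetical: cast incoming data to float32 before prediction
    return np.asarray(data, dtype='float32')

def postprocess(predictions):
    # Hypothetical: convert raw model output to a plain Python list
    return np.asarray(predictions).tolist()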

aisquared.serving.deploy_model.load_beyondml_model(model: str, custom_objects: dict)[source]

Load a BeyondML model with custom objects

Parameters:
  • model (str) – The path to the saved BeyondML model

  • custom_objects (dict) – Custom objects to load with the model
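
A brief usage sketch; the file name 'my_model.h5' and the MyCustomLayer class are hypothetical placeholders for a saved BeyondML model and its custom objects:

>>> # MyCustomLayer is a hypothetical custom layer class defined elsewhere
>>> from aisquared import serving
>>> model = serving.load_beyondml_model(
    'my_model.h5',
    custom_objects = {'MyCustomLayer': MyCustomLayer}
)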

aisquared.serving.get_remote_prediction module

aisquared.serving.get_remote_prediction.get_remote_prediction(data: dict | str | ndarray | list, host: str = '127.0.0.1', port: int = 2244) list[source]

Send data to a deployed model and return the resulting predictions

Parameters:
  • data (dict, str, np.ndarray, or list) – The data to be predicted on

  • host (str (default '127.0.0.1')) – The host to use

  • port (int (default 2244)) – The port to use

Notes

  • If data is a dictionary or a string, it is expected to already be correctly formatted

Returns:

predictions – The predictions from the deployed model

Return type:

list
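
A brief usage sketch against a model deployed with the defaults above; the input shape is hypothetical and must match what the deployed model expects:

>>> # Assumes a model is already deployed locally on the default host and port
>>> import numpy as np
>>> from aisquared import serving
>>> predictions = serving.get_remote_prediction(np.random.random((10, 4)))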

Module contents

The aisquared.serving package contains utilities to serve models to a local REST endpoint.

Here is an example of how to serve a simple Keras model using these utilities:

>>> # Assume model is already trained and stored in memory as model
>>> from aisquared import serving
>>> serving.save_keras_model(model, 'my_model')
>>> serving.deploy_model(
    'my_model',
    'keras',
    additional_functions_file = '<optional file containing `preprocess` and `postprocess` functions, if applicable>'
)
App created successfully. Serving and awaiting requests

And to retrieve predictions from the model:

>>> # From a separate terminal, assume data is already loaded
>>> from aisquared import serving
>>> serving.get_remote_prediction(data) # No need to change host or port when predicting from the same machine
*predictions*
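
Deploying a BeyondML model with custom objects follows the same pattern. In this hedged sketch, the model_type string 'beyondml' and the MyCustomLayer class are illustrative assumptions, not values confirmed by this documentation:

>>> # 'beyondml' is an assumed model_type string; MyCustomLayer is hypothetical
>>> from aisquared import serving
>>> serving.deploy_model(
    'my_beyondml_model',
    'beyondml',
    custom_objects = {'MyCustomLayer': MyCustomLayer}
)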