Models

Get all models

Get a list of all models you have access to.

models = client.get_models()

These models are divided into three groups:

  • public: models that are available to everyone;
  • owned: models you created;
  • shared: models that are shared privately with you.
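The three groups above can be derived from flags on each model object. A minimal sketch of the grouping logic, using a simple stand-in class instead of the real `seeme.types.Model` (field names match the property table below; the `group_models` helper is hypothetical, for illustration only):

```python
from dataclasses import dataclass

# Stand-in for seeme.types.Model with just the fields used for grouping.
@dataclass
class Model:
    name: str
    public: bool
    user_id: str
    shared_with_me: bool

def group_models(models, current_user_id):
    """Split a list of models into the public / owned / shared groups."""
    return {
        "public": [m for m in models if m.public],
        "owned": [m for m in models if m.user_id == current_user_id],
        "shared": [m for m in models if m.shared_with_me],
    }

models = [
    Model("cats", public=True, user_id="u2", shared_with_me=False),
    Model("dogs", public=False, user_id="u1", shared_with_me=False),
    Model("birds", public=False, user_id="u2", shared_with_me=True),
]
groups = group_models(models, current_user_id="u1")
```

Note that the groups can overlap: a model you created may also be public.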

Public models

Public models are provided by SeeMe.ai, other users, or your own organisation.

public_models = [model for model in models if model.public]

Your own models

A list of the models you created:

own_models = [model for model in models if model.user_id == client.user_id]

Shared models

A list of models that others have shared privately with you.

shared_with_me = [model for model in models if model.shared_with_me]

Create a model

from seeme.types import Framework, Model  # Framework assumed to be exported alongside Model

application_id = client.get_application_id(
    Framework.PYTORCH,
    Framework.FASTAI,
    "2.1.0",
    "2.7.13"
)

my_model = Model(
    name="Cats and dogs",
    description="Recognize cats and dogs in pictures.",
    privacy_enabled=False,
    auto_convert=True,
    application_id=application_id
)

my_model = client.create_model(my_model)
| Parameter | Type | Description |
| --- | --- | --- |
| model | Model | The entire model object |

Every model has the following properties:

| Property | Type | Description |
| --- | --- | --- |
| id | str | Unique id |
| created_at | str | The creation date |
| updated_at | str | Last updated date |
| name | str | The model name |
| description | str | The model description |
| notes | str | Notes on the model |
| user_id | str | The user id of the model creator |
| can_inference | bool | Flag indicating whether the model can make predictions |
| kind | str | Type of AI application. Possible values: "image_classification", "object_detection", "text_classification", "structured", "language_model", "ner", "llm" |
| has_logo | bool | Flag indicating whether the model has a logo |
| logo | str | Name and extension of the logo file (mostly for internal use) |
| public | bool | Flag indicating whether the model is public |
| config | str | Additional config stored as a JSON string |
| active_version_id | str | The id of the current model version (see versions below) |
| application_id | str | The application id of the active model version (see applications) |
| has_ml_model | bool | Flag indicating whether the model has a Core ML model |
| has_onnx_model | bool | Flag indicating whether the model has an ONNX model |
| has_onnx_int8_model | bool | Flag indicating whether the model has an 8-bit quantized ONNX model |
| has_tflite_model | bool | Flag indicating whether the model has a TensorFlow Lite model |
| has_labels_file | bool | Flag indicating whether a file with all the labels (classes) is available |
| shared_with_me | bool | Flag indicating whether the model has been shared with you |
| auto_convert | bool | Flag indicating whether the model will be automatically converted to the supported model formats (see applications). Default: True |
| privacy_enabled | bool | Flag indicating whether privacy is enabled. If set to True, no inputs (images, text files, ...) are stored on the server or the mobile/edge device. Default: False |
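The `has_*` flags above can be used to check which converted formats a model offers. A minimal sketch, using a plain dict as a stand-in for the model object (the `available_formats` helper is hypothetical, not part of the SDK):

```python
# Map each has_* flag on the model to a human-readable format name.
FORMAT_FLAGS = {
    "has_ml_model": "Core ML",
    "has_onnx_model": "ONNX",
    "has_onnx_int8_model": "ONNX (8-bit quantized)",
    "has_tflite_model": "TensorFlow Lite",
}

def available_formats(model_flags):
    """Return the names of the converted formats a model offers."""
    return [name for flag, name in FORMAT_FLAGS.items() if model_flags.get(flag)]

flags = {"has_onnx_model": True, "has_tflite_model": True, "has_ml_model": False}
print(available_formats(flags))  # ONNX and TensorFlow Lite
```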

Get a model

Use the model id to get all the metadata of the model:

client.get_model(my_model.id)
| Parameter | Type | Description |
| --- | --- | --- |
| model_id | str | Unique id for the model |

Update a model

Update any property of the model:

my_model = client.get_model(my_model.id)
my_model.description = "Updated for documentation purposes"

client.update_model(my_model)
| Parameter | Type | Description |
| --- | --- | --- |
| model | Model | The entire model object |

Delete a model

Delete a model using its id.

client.delete_model(my_model.id)
| Parameter | Type | Description |
| --- | --- | --- |
| model_id | str | Unique id for the model |

Upload a model file

You can upload the model file by calling upload_model. Make sure the application_id is set to the desired AI application, framework, and version.

my_model = client.upload_model(
    my_model.id,
    folder="directory/to/model",
    filename="your_exported_model_file.pkl"
)
| Parameter | Type | Description |
| --- | --- | --- |
| model_id | str | Unique id for the model |
| folder | str | Name of the folder that contains the model file (without trailing '/'). Default: "data" |
| filename | str | Name of the file to be uploaded. Default: "export.pkl" |

This returns an updated my_model; if the upload succeeds, can_inference is set to True.

If auto_convert is enabled, all possible conversions for the selected application_id (see Applications below) will be available.

Download model file(s)

Download the file(s) associated with the current active (production) model.

from seeme.types import AssetType  # AssetType assumed to be exported alongside Model

client.download_active_model(
    my_model,
    asset_type=AssetType.PKL,
    download_folder="."
)
| Parameter | Type | Description |
| --- | --- | --- |
| model | Model | The entire model object |
| asset_type | AssetType | The model type you want to download. Default: PKL. Possible values: PKL, MLMODEL, TFLITE, ONNX, ONNX_INT8, LABELS, NAMES, WEIGHTS, CFG, CONVERSION_CFG, LOGO |
| download_folder | str | The folder where you would like to download the model. Default: "." (the current directory) |

If the asset_type exists, the model file is downloaded to {my_model.active_version_id}.{asset_type}. One exception: the labels file receives a .txt extension.
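The naming rule above can be sketched as a small helper (hypothetical, for illustration only; the labels file is the one exception and gets a .txt extension):

```python
def download_filename(active_version_id, asset_type):
    """Build the local filename the download would produce.

    Hypothetical helper for illustration: every asset is named after the
    active version id plus its asset type, except labels, which get .txt.
    """
    extension = "txt" if asset_type == "labels" else asset_type
    return f"{active_version_id}.{extension}"

print(download_filename("abc123", "onnx"))    # abc123.onnx
print(download_filename("abc123", "labels"))  # abc123.txt
```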

If you want to download a specific model version, have a look at Download a model version.

Upload model logo

my_model = client.upload_logo(
    my_model.id,
    folder="directory/to/logo", 
    filename="logo_filename.jpg"
)
| Parameter | Type | Description |
| --- | --- | --- |
| model_id | str | Unique id for the model |
| folder | str | Name of the folder that contains the logo file (without trailing '/'). Default: "data" |
| filename | str | Name of the file to be uploaded. Default: "logo.jpg". Supported formats: jpg, jpeg, png |

Download model logo

client.get_logo(my_model)
| Parameter | Type | Description |
| --- | --- | --- |
| model | Model | The entire model object |