Models
Get all models
Get a list of all models you have access to.
models = client.get_models()
These models are divided into three groups:
- public: models that are available to everyone;
- owned: models you created;
- shared: models that are shared privately with you.
Public models
Public models are provided by SeeMe.ai, other users, or your own organisation.
public_models = [model for model in models if model.public]
Your own models
A list of the models you created:
own_models = [model for model in models if model.user_id == client.user_id]
Shared models
A list of models that others have shared privately with you.
shared_with_me = [model for model in models if model.shared_with_me]
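For a quick overview, a minimal sketch (assuming each model exposes the name property listed in the table under Create a model) prints the three groups side by side:

print("Public:", [model.name for model in public_models])
print("Owned:", [model.name for model in own_models])
print("Shared with me:", [model.name for model in shared_with_me])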
Create a model
First, look up the application_id that matches the framework and versions your model was trained with:

from seeme.types import Framework  # assumed to live alongside Model (imported below)

application_id = client.get_application_id(
    Framework.PYTORCH,
    Framework.FASTAI,
    "2.1.0",
    "2.7.13"
)
from seeme.types import Model

my_model = Model(
    name="Cats and dogs",
    description="Recognize cats and dogs in pictures.",
    privacy_enabled=False,
    auto_convert=True,
    application_id=application_id
)

my_model = client.create_model(my_model)
Parameter | Type | Description |
---|---|---|
model | Model | Entire model object |
Every model has the following properties:
Property | Type | Description |
---|---|---|
id | str | Unique id |
created_at | str | The creation date |
updated_at | str | Last updated date |
name | str | The model name |
description | str | The model description |
notes | str | Notes on the model |
user_id | str | The user id of the model creator |
can_inference | bool | Flag indicating whether the model can make predictions or not |
kind | str | Type of AI application, possible values: “image_classification”, “object_detection”, “text_classification”, “structured”, “language_model”, “ner”, “llm”. |
has_logo | bool | Flag indicating whether the model has a logo or not |
logo | str | Name and extension of the logo file (mostly for internal use) |
public | bool | Flag indicating whether the model is public or not |
config | str | Additional config stored in a JSON string |
active_version_id | str | The id of the current model version (see versions below) |
application_id | str | The application id of the active model version (see applications) |
has_ml_model | bool | Flag indicating whether the model has a Core ML model |
has_onnx_model | bool | Flag indicating whether the model has an ONNX model |
has_onnx_int8_model | bool | Flag indicating whether the model has an 8-bit quantized model |
has_tflite_model | bool | Flag indicating whether the model has a TensorFlow Lite model |
has_labels_file | bool | Flag indicating whether a file with all the labels (classes) is available |
shared_with_me | bool | Flag indicating whether the model has been shared with you |
auto_convert | bool | Flag indicating whether the model will be automatically converted to the supported model formats (see applications). Default value: True. |
privacy_enabled | bool | Flag indicating whether privacy is enabled. If set to True, no inputs (images, text files, …) will be stored on the server or the mobile/edge device. Default value: False. |
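After client.create_model returns, the server-assigned properties above are filled in on the returned object. A short sketch, purely for illustration, inspects a few of them:

print(my_model.id)             # unique id assigned by the server
print(my_model.created_at)     # creation date
print(my_model.can_inference)  # whether the model can already make predictions (see Upload a model file)
print(my_model.auto_convert)   # True, as requested when creating the model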
Get a model
Use the model id to get all the metadata of the model:
client.get_model(my_model.id)
Parameter | Type | Description |
---|---|---|
model_id | str | Unique id for the model |
Update a model
Update any property of the model:
my_model = client.get_model(my_model.id)
my_model.description = "Updated for documentation purposes"
client.update_model(my_model)
Parameter | Type | Description |
---|---|---|
model | Model | The entire model object |
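To verify that the change was persisted, a quick sketch re-fetches the model and checks the updated fields:

refreshed = client.get_model(my_model.id)
print(refreshed.description)   # "Updated for documentation purposes"
print(refreshed.updated_at)    # last updated date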
Delete a model
Delete a model using its id.
client.delete_model(my_model.id)
Parameter | Type | Description |
---|---|---|
model_id | str | Unique id for the model |
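Once deleted, the model no longer appears in the list returned by get_models. A minimal check:

remaining = client.get_models()
assert my_model.id not in [model.id for model in remaining]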
Upload a model file
You can upload the model file by calling upload_model. Make sure the application_id is set to the desired AI application, framework, and version.
my_model = client.upload_model(
    my_model.id,
    folder="directory/to/model",
    filename="your_exported_model_file.pkl"
)
Parameter | Type | Description |
---|---|---|
model_id | str | Unique id for the model |
folder | str | Name of the folder that contains the model file (without trailing ‘/’), default value “data” |
filename | str | Name of the file to be uploaded, default value “export.pkl” |
This returns an updated my_model; if successful, can_inference will be set to True.
If auto_convert is enabled, all possible conversions for the selected application_id (see Applications below) will be available.
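A short sketch that checks the result of the upload, using the flags from the model properties table (which formats are produced depends on the selected application_id):

print(my_model.can_inference)     # True if the upload succeeded
print(my_model.has_onnx_model)    # True when an ONNX conversion is available
print(my_model.has_tflite_model)  # True when a TensorFlow Lite conversion is available
print(my_model.has_ml_model)      # True when a Core ML conversion is available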
Download model file(s)
Download the file(s) associated with the current active (production) model.
from seeme.types import AssetType  # assumed to live alongside Model

client.download_active_model(
    my_model,
    asset_type=AssetType.PKL,
    download_folder="."
)
Parameter | Type | Description |
---|---|---|
model | Model | The entire model object |
asset_type | AssetType | The model type you want to download. Default: PKL. Possible values: PKL, MLMODEL, TFLITE, ONNX, ONNX_INT8, LABELS, NAMES, WEIGHTS, CFG, CONVERSION_CFG, LOGO. |
download_folder | str | The folder where you would like to download the model. Default: . (i.e. current directory) |
If the asset_type exists, the model file will be downloaded to my_model.active_model_id.{asset_type}. One exception: the labels file will receive a .txt extension.
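For example, a hedged sketch that downloads the ONNX model and the labels file into the current directory, using the asset types listed above:

from seeme.types import AssetType  # assumed location, as above

client.download_active_model(my_model, asset_type=AssetType.ONNX, download_folder=".")
client.download_active_model(my_model, asset_type=AssetType.LABELS, download_folder=".")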
If you want to download a specific model version, have a look at Download a model version.
Upload model logo
my_model = client.upload_logo(
    my_model.id,
    folder="directory/to/logo",
    filename="logo_filename.jpg"
)
Parameter | Type | Description |
---|---|---|
model_id | str | Unique id for the model |
folder | str | Name of the folder that contains the logo file (without trailing ‘/’), default value “data” |
filename | str | Name of the file to be uploaded, default value “logo.jpg”. Supported formats: jpg, jpeg, png. |
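Assuming upload_logo returns the updated model, as the snippet above suggests, the logo-related properties can be used to confirm the upload:

print(my_model.has_logo)  # True once a logo has been uploaded
print(my_model.logo)      # name and extension of the logo file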
Download model logo
client.get_logo(my_model)
Parameter | Type | Description |
---|---|---|
model | Model | The entire model object |