List

Gets basic information about all the versions of a model.

If you expect that a model has many versions, or if you need to handle only a limited number of results at a time, you can request that the list be retrieved in batches (called pages).

If there are no versions that match the request parameters, the list request returns an empty response body: {}
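
As a point of reference, here is a minimal sketch of a single list call made with the Google API Python client; the project and model names, and the use of application-default credentials, are assumptions for illustration only:

# Minimal sketch: list the versions of a model with the Google API Python client.
# 'my-project' and 'my_model' are hypothetical names.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
parent = 'projects/my-project/models/my_model'

response = ml.projects().models().versions().list(parent=parent).execute()

# An empty response body ({}) means no versions matched the request.
for version in response.get('versions', []):
    print(version['name'], version.get('state'))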

Authorization

To use this building block you will have to grant access to at least one of the following scopes:

  • View and manage your data across Google Cloud Platform services

Input

This building block consumes 4 input parameters

parent STRING Required

Required. The name of the model for which to list versions

pageToken STRING

Optional. A page token to request the next page of results.

You get the token from the next_page_token field of the response from the previous call

pageSize INTEGER

Optional. The number of versions to retrieve per "page" of results. If there are more remaining results than this number, the response message will contain a valid value in the next_page_token field.

The default value is 20, and the maximum page size is 100
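
A sketch of how pageToken and pageSize work together when paging through a long version list; client setup and names follow the earlier sketch and are assumptions, not part of this reference:

# Sketch: collect all versions, 50 per page, following nextPageToken.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
parent = 'projects/my-project/models/my_model'

versions = []
page_token = None
while True:
    kwargs = {'parent': parent, 'pageSize': 50}
    if page_token:
        kwargs['pageToken'] = page_token
    response = ml.projects().models().versions().list(**kwargs).execute()
    versions.extend(response.get('versions', []))
    page_token = response.get('nextPageToken')
    if not page_token:
        break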

filter STRING

Optional. Specifies the subset of versions to retrieve

Output

This building block provides 24 output parameters

versions[] OBJECT

Represents a version of the model.

Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list

versions[].description STRING

Optional. The description specified for the version when it was created

versions[].framework ENUMERATION

Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater.

Do not specify a framework if you're deploying a custom prediction routine

versions[].etag BINARY

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates in order to avoid race conditions: An etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change will be applied to the model as intended

versions[].isDefault BOOLEAN

Output only. If true, this version will be used to handle prediction requests that do not specify a version.

You can change the default version by calling projects.models.versions.setDefault
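
For illustration, a small helper that pulls the default version out of a list response; the helper itself is an assumption, while the field names follow this reference:

# Sketch: return the name of the version that serves unversioned requests.
def default_version_name(response):
    for version in response.get('versions', []):
        if version.get('isDefault'):
            return version['name']
    return None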

versions[].state ENUMERATION

Output only. The state of a version

versions[].manualScaling OBJECT

Options for manually scaling a model

versions[].manualScaling.nodes INTEGER

The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * number of hours since last billing cycle plus the cost for each prediction performed

versions[].name STRING

Required. The name specified for the version when it was created.

The version name must be unique within the model it is created in

versions[].serviceAccount STRING

Optional. Specifies the service account for resource access control

versions[].pythonVersion STRING

Optional. The version of Python used in prediction. If not set, the default version is '2.7'. Python '3.5' is available when runtime_version is set to '1.4' and above. Python '2.7' works with all supported runtime versions

versions[].lastUseTime ANY

Output only. The time the version was last used for prediction

versions[].predictionClass STRING

Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field.

Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater.

The following code sample provides the Predictor interface:

class Predictor(object):
    """Interface for constructing custom predictors."""

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list must
            be JSON serializable.
        """
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating the
                version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        raise NotImplementedError()

Learn more about the Predictor interface and custom prediction routines
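
Purely as an illustration (not part of this reference), a concrete predictor for a pickled scikit-learn model might look roughly like the following; the model file name and the absence of pre- and post-processing are assumptions:

import os
import pickle


class MyPredictor(object):
    """Illustrative implementation of the Predictor interface above."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # Run the loaded model on the decoded request instances and return
        # a plain list so the result is JSON serializable.
        return self._model.predict(instances).tolist()

    @classmethod
    def from_path(cls, model_dir):
        # 'model.pkl' is an assumed file name inside the deployed model files.
        with open(os.path.join(model_dir, 'model.pkl'), 'rb') as f:
            model = pickle.load(f)
        return cls(model)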

versions[].packageUris[] STRING

versions[].deploymentUri STRING

Required. The Cloud Storage location of the trained model used to create the version. See the guide to model deployment for more information.

When passing Version to projects.models.versions.create the model service uses the specified location as the source of the model. Once deployed, the model version is hosted by the prediction service, so this location is useful only as a historical record. The total number of model files can't exceed 1000
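
For context, a hedged sketch of how deploymentUri is passed when creating a version with the Google API Python client; the project, model, bucket, and version names are assumptions:

# Sketch: create a version from a trained model exported to Cloud Storage.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
operation = ml.projects().models().versions().create(
    parent='projects/my-project/models/my_model',
    body={
        'name': 'v2',
        'deploymentUri': 'gs://my-bucket/exported_model/',
        'runtimeVersion': '1.4',
        'pythonVersion': '3.5',
    },
).execute()
# versions.create is long-running; the returned operation can be polled.
print(operation['name'])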

versions[].autoScaling OBJECT

Options for automatically scaling a model

versions[].autoScaling.minNodes INTEGER

Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least rate * min_nodes * number of hours since last billing cycle, where rate is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed.

Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least min_nodes. You will be charged for the time in which additional nodes are used.

If not specified, min_nodes defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes.

You can set min_nodes when creating the model version, and you can also update min_nodes for an existing version:

update_body.json:

{ 'autoScaling': { 'minNodes': 5 } }

HTTP request:

PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
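
The same update can also be issued through the Google API Python client instead of a raw HTTP PATCH; the version name below is an assumption:

# Sketch: raise minNodes to 5 on an existing version.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
operation = ml.projects().models().versions().patch(
    name='projects/my-project/models/my_model/versions/v1',
    body={'autoScaling': {'minNodes': 5}},
    updateMask='autoScaling.minNodes',
).execute()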

versions[].createTime ANY

Output only. The time the version was created

versions[].labels OBJECT

Optional. One or more labels that you can add, to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels

versions[].labels.customKey.value STRING

Optional. One or more labels that you can add, to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels

versions[].errorMessage STRING

Output only. The details of a failure or a cancellation

versions[].machineType STRING

Optional. The type of machine on which to serve the model. Currently only applies to online prediction service.

  • mls1-c1-m2: The default machine type, with 1 core and 2 GB RAM. The deprecated name for this machine type is "mls1-highmem-1".
  • mls1-c4-m2: In Beta. This machine type has 4 cores and 2 GB RAM. The deprecated name for this machine type is "mls1-highcpu-4".

versions[].runtimeVersion STRING

Optional. The AI Platform runtime version to use for this deployment. If not set, AI Platform uses the default stable version, 1.0. For more information, see the runtime version list and how to manage runtime versions

nextPageToken STRING

Optional. Pass this token as the page_token field of the request for a subsequent call