Patch

Updates a specific model resource



Currently the only supported fields to update are description and default_version.name

Authorization

To use this building block you will have to grant access to at least one of the following scopes:

  • View and manage your data across Google Cloud Platform services

Input

This building block consumes 33 input parameters

  = Parameter name
  = Format

name STRING Required

Required. The project name

updateMask ANY

Required. Specifies the path, relative to Model, of the field to update.

For example, to change the description of a model to "foo" and set its default version to "version_1", the update_mask parameter would be specified as description, default_version.name, and the PATCH request body would specify the new value, as follows:

<pre> { "description": "foo", "defaultVersion": { "name": "version_1" } } </pre>

Currently the supported update masks are description and default_version.name
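As a sketch, the update mask and request body for the example above can be assembled before issuing the PATCH call. Only the dict structure and field paths come from this page; the helper function below is purely illustrative:

```python
# Sketch: assemble the updateMask string and PATCH body for models.patch.
# Only description and default_version.name are currently updatable.

def build_patch_request(description=None, default_version_name=None):
    """Return (update_mask, body) for a models.patch call."""
    mask_paths = []
    body = {}
    if description is not None:
        mask_paths.append("description")
        body["description"] = description
    if default_version_name is not None:
        mask_paths.append("default_version.name")
        body["defaultVersion"] = {"name": default_version_name}
    return ",".join(mask_paths), body

mask, body = build_patch_request(description="foo",
                                 default_version_name="version_1")
print(mask)  # description,default_version.name
print(body)  # {'description': 'foo', 'defaultVersion': {'name': 'version_1'}}
```

The comma-joined mask goes in the updateMask query parameter, while the body is sent as the JSON request payload.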

onlinePredictionConsoleLogging BOOLEAN

Optional. If true, online prediction nodes send stderr and stdout streams to Stackdriver Logging. These can be more verbose than the standard access logs (see onlinePredictionLogging) and can incur higher cost. However, they are helpful for debugging. Note that Stackdriver logs may incur a cost, especially if your project receives prediction requests at a high QPS. Estimate your costs before enabling this option.

Default is false

regions[] STRING

description STRING

Optional. The description specified for the model when it was created

onlinePredictionLogging BOOLEAN

Optional. If true, online prediction access logs are sent to Stackdriver Logging. These logs are like standard server access logs, containing information like timestamp and latency for each request. Note that Stackdriver logs may incur a cost, especially if your project receives prediction requests at a high queries per second (QPS) rate. Estimate your costs before enabling this option.

Default is false

etag BINARY

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates in order to avoid race conditions: An etag is returned in the response to GetModel, and systems are expected to put that etag in the request to UpdateModel to ensure that their change will be applied to the model as intended
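The read-modify-write cycle described here can be simulated in miniature. The in-memory store and the get_model/patch_model helpers below are hypothetical stand-ins for GetModel and UpdateModel, but the etag check mirrors the intent:

```python
import uuid

# Minimal in-memory stand-in for the model service, to illustrate how an
# etag guards the read-modify-write cycle. get_model/patch_model are
# hypothetical helpers, not real client methods.
_store = {"description": "old", "etag": "v1"}

def get_model():
    return dict(_store)

def patch_model(update, etag):
    if etag != _store["etag"]:
        raise RuntimeError("etag mismatch: model changed since it was read")
    _store.update(update)
    _store["etag"] = uuid.uuid4().hex  # server assigns a fresh etag
    return dict(_store)

# Read, modify, write: pass the etag from the read back with the update.
model = get_model()
patch_model({"description": "foo"}, etag=model["etag"])

# A second writer still holding the stale etag is rejected.
try:
    patch_model({"description": "bar"}, etag=model["etag"])
except RuntimeError as err:
    print(err)  # etag mismatch: model changed since it was read
```

The rejected writer should re-read the model, reapply its change, and retry with the fresh etag.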

labels OBJECT

Optional. One or more labels that you can add, to organize your models. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels

labels.customKey.value STRING Required

Optional. One or more labels that you can add, to organize your models. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels

name STRING

Required. The name specified for the model when it was created.

The model name must be unique within the project it is created in

defaultVersion OBJECT

Represents a version of the model.

Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list

defaultVersion.description STRING

Optional. The description specified for the version when it was created

defaultVersion.framework ENUMERATION

Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater.

Do not specify a framework if you're deploying a custom prediction routine

defaultVersion.etag BINARY

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates in order to avoid race conditions: An etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change will be applied to the model as intended

defaultVersion.isDefault BOOLEAN

Output only. If true, this version will be used to handle prediction requests that do not specify a version.

You can change the default version by calling projects.models.versions.setDefault

defaultVersion.state ENUMERATION

Output only. The state of a version

defaultVersion.manualScaling OBJECT

Options for manually scaling a model

defaultVersion.manualScaling.nodes INTEGER

The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * number of hours since last billing cycle plus the cost for each prediction performed
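The cost relationship described above can be written as a small calculation. The node-hour rate below is a placeholder, not a published price, and per-prediction charges are billed on top:

```python
# Always-on node cost for manual scaling: proportional to
# nodes * hours since the last billing cycle. The rate is a placeholder.
def manual_scaling_node_cost(nodes, hours, rate_per_node_hour):
    """Cost of keeping `nodes` nodes up for `hours` hours."""
    return nodes * hours * rate_per_node_hour

print(manual_scaling_node_cost(nodes=3, hours=24, rate_per_node_hour=0.25))  # 18.0
```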

defaultVersion.name STRING

Required. The name specified for the version when it was created.

The version name must be unique within the model it is created in

defaultVersion.serviceAccount STRING

Optional. Specifies the service account for resource access control

defaultVersion.pythonVersion STRING

Optional. The version of Python used in prediction. If not set, the default version is '2.7'. Python '3.5' is available when runtime_version is set to '1.4' and above. Python '2.7' works with all supported runtime versions

defaultVersion.lastUseTime ANY

Output only. The time the version was last used for prediction

defaultVersion.predictionClass STRING

Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field.

Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater.

The following code sample provides the Predictor interface:

class Predictor(object):
    """Interface for constructing custom predictors."""

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list must
            be JSON serializable.
        """
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating the
                version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        raise NotImplementedError()

Learn more about the Predictor interface and custom prediction routines

defaultVersion.packageUris[] STRING

defaultVersion.deploymentUri STRING

Required. The Cloud Storage location of the trained model used to create the version. See the guide to model deployment for more information.

When passing Version to projects.models.versions.create the model service uses the specified location as the source of the model. Once deployed, the model version is hosted by the prediction service, so this location is useful only as a historical record. The total number of model files can't exceed 1000

defaultVersion.autoScaling OBJECT

Options for automatically scaling a model

defaultVersion.autoScaling.minNodes INTEGER

Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least rate * min_nodes * number of hours since last billing cycle, where rate is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed.

Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least min_nodes. You will be charged for the time in which additional nodes are used.

If not specified, min_nodes defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes.

You can set min_nodes when creating the model version, and you can also update min_nodes for an existing version:

<pre> update_body.json: { 'autoScaling': { 'minNodes': 5 } } </pre>

HTTP request:

<pre> PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json </pre>

defaultVersion.createTime ANY

Output only. The time the version was created

defaultVersion.labels OBJECT

Optional. One or more labels that you can add, to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels

defaultVersion.labels.customKey.value STRING Required

Optional. One or more labels that you can add, to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels

defaultVersion.errorMessage STRING

Output only. The details of a failure or a cancellation

defaultVersion.machineType STRING

Optional. The type of machine on which to serve the model. Currently only applies to online prediction service.

  • mls1-c1-m2: The default machine type, with 1 core and 2 GB RAM. The deprecated name for this machine type is "mls1-highmem-1".
  • mls1-c4-m2: In Beta. This machine type has 4 cores and 2 GB RAM. The deprecated name for this machine type is "mls1-highcpu-4".

defaultVersion.runtimeVersion STRING

Optional. The AI Platform runtime version to use for this deployment. If not set, AI Platform uses the default stable version, 1.0. For more information, see the runtime version list and how to manage runtime versions

Output

This building block provides 11 output parameters

  = Parameter name
  = Format

response OBJECT

The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse

response.customKey.value ANY

The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse

name STRING

The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}

error OBJECT

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details.

You can find out more about this error model and how to work with it in the API Design Guide

error.message STRING

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client

error.details[] OBJECT

error.details[].customKey.value ANY

error.code INTEGER

The status code, which should be an enum value of google.rpc.Code

metadata OBJECT

Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any

metadata.customKey.value ANY

Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any

done BOOLEAN

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available
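A client typically polls the returned operation until done is true, then reads either response or error. The get_operation helper below is a hypothetical stand-in for the operations.get call, fed by a canned sequence of responses:

```python
import time

# Hypothetical stand-in for operations.get: returns an in-progress
# operation twice, then a completed one carrying the patched model.
_responses = iter([
    {"name": "projects/p/operations/op1", "done": False},
    {"name": "projects/p/operations/op1", "done": False},
    {"name": "projects/p/operations/op1", "done": True,
     "response": {"name": "projects/p/models/m", "description": "foo"}},
])

def get_operation(name):
    return next(_responses)

def wait_for_operation(name, poll_interval=0.01):
    """Poll until done is true, then return response or raise on error."""
    while True:
        op = get_operation(name)
        if op.get("done"):
            if "error" in op:
                raise RuntimeError(op["error"].get("message", "operation failed"))
            return op.get("response")
        time.sleep(poll_interval)

result = wait_for_operation("projects/p/operations/op1")
print(result["description"])  # foo
```

A production client would add a timeout and exponential backoff rather than polling at a fixed interval.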