Async Batch Annotate
Run asynchronous image detection and annotation for a list of images. Progress and results can be retrieved through the google.longrunning.Operations interface: Operation.metadata contains OperationMetadata (metadata), and Operation.response contains AsyncBatchAnnotateImagesResponse (results).
This service writes image annotation outputs to JSON files in the customer's GCS bucket, each JSON file containing one BatchAnnotateImagesResponse proto.
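As a rough sketch, the request body for this building block pairs a list of per-image requests with an output configuration. The bucket names, image URI, and feature choice below are placeholder assumptions; the field names follow the Vision API v1 REST shape described in the Input table below.

```python
import json

# Minimal asyncBatchAnnotate request body (placeholder bucket/object names).
request_body = {
    "requests": [
        {
            # Annotate one image stored in GCS, asking for up to 10 labels.
            "image": {"source": {"imageUri": "gs://my-bucket/image.jpg"}},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
        }
    ],
    "outputConfig": {
        # Results are written as JSON files under this GCS prefix.
        "gcsDestination": {"uri": "gs://my-bucket/vision-output/"},
        "batchSize": 20,
    },
}

# This body would be POSTed to
# https://vision.googleapis.com/v1/images:asyncBatchAnnotate
# with an OAuth 2.0 bearer token carrying one of the scopes listed
# under Authorization.
print(json.dumps(request_body, indent=2))
```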
Authorization
To use this building block, you must grant access to at least one of the following scopes:
- View and manage your data across Google Cloud Platform services
- Apply machine learning models to understand and label images
Input
This building block consumes 25 input parameters
| Name | Format | Description |
|---|---|---|
| outputConfig | OBJECT | The desired output location and metadata |
| outputConfig.gcsDestination | OBJECT | The Google Cloud Storage location where the output will be written |
| outputConfig.gcsDestination.uri | STRING | Google Cloud Storage URI prefix where the results will be stored. Results will be in JSON format and preceded by their corresponding input URI prefix. This field can represent either a GCS file prefix or a GCS directory; in either case, the URI should be unique, because retrieving all of the output files requires a wildcard GCS search on the URI prefix you provide. If there are multiple outputs, each response is still an AnnotateFileResponse, each of which contains some subset of the full list of AnnotateImageResponse. Multiple outputs can happen if, for example, the output JSON is too large and overflows into multiple sharded files |
| outputConfig.batchSize | INTEGER | The maximum number of response protos to put into each output JSON file on Google Cloud Storage. The valid range is [1, 100]; if not specified, the default value is 20. For example, for one PDF file with 100 pages, 100 response protos will be generated; with a batchSize of 20, five JSON files, each containing 20 response protos, will be written under the gcsDestination.uri prefix. Currently, batchSize only applies to GcsDestination, with potential future support for other output configurations |
| requests[] | OBJECT | Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features and context information |
| requests[].image | OBJECT | Client image to perform Google Cloud Vision API tasks over |
| requests[].image.content | BINARY | Image content, represented as a stream of bytes. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64 |
| requests[].image.source | OBJECT | External image source (Google Cloud Storage or web URL image location) |
| requests[].image.source.gcsImageUri | STRING | Use imageUri instead. The Google Cloud Storage URI of the form gs://bucket_name/object_name. Object versioning is not supported |
| requests[].image.source.imageUri | STRING | The URI of the source image. Can be either a Google Cloud Storage URI of the form gs://bucket_name/object_name (object versioning is not supported) or a publicly accessible image HTTP/HTTPS URL. When both gcsImageUri and imageUri are specified, imageUri takes precedence |
| requests[].features[] | OBJECT | The type of Google Cloud Vision API detection to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list |
| requests[].features[].type | ENUMERATION | The feature type |
| requests[].features[].maxResults | INTEGER | Maximum number of results of this type. Does not apply to TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS |
| requests[].features[].model | STRING | Model to use for the feature. Supported values: "builtin/stable" (the default if unset) and "builtin/latest" |
| requests[].imageContext | OBJECT | Image context and/or feature-specific parameters |
| requests[].imageContext.cropHintsParams | OBJECT | Parameters for crop hints annotation request |
| requests[].imageContext.cropHintsParams.aspectRatios[] | FLOAT | Aspect ratios in floats, representing the ratio of the width to the height of the image. For example, if the desired aspect ratio is 4/3, the corresponding float value should be 1.33333. If not specified, the best possible crop is returned |
| requests[].imageContext.productSearchParams | OBJECT | Parameters for a product search request |
| requests[].imageContext.productSearchParams.productCategories[] | STRING | The list of product categories to search in |
| requests[].imageContext.productSearchParams.filter | STRING | The filtering expression. This can be used to restrict search results based on product labels. We currently support an AND of ORs of key-value expressions, where each expression within an OR must have the same key. An '=' should be used to connect the key and value. For example, "(color = red OR color = blue) AND brand = Google" is acceptable, but "(color = red OR brand = Google)" is not. "color: red" is not acceptable because it uses a ':' instead of an '=' |
| requests[].imageContext.productSearchParams.productSet | STRING | The resource name of a ProductSet to be searched for similar images. Format is projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID |
| requests[].imageContext.languageHints[] | STRING | List of languages to use for text detection. In most cases, an empty value yields the best results, since it enables automatic language detection |
| requests[].imageContext.webDetectionParams | OBJECT | Parameters for web detection request |
| requests[].imageContext.webDetectionParams.includeGeoResults | BOOLEAN | Whether to include results derived from the geo information in the image |
| requests[].imageContext.latLongRect | OBJECT | Rectangle determined by min and max LatLng pairs |
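The batchSize parameter above determines how the responses are sharded into output files. As a small illustrative sketch (the helper name is ours, not part of the API), the number of JSON files written to GCS can be computed as:

```python
import math

def expected_output_shards(num_responses: int, batch_size: int = 20) -> int:
    """Number of output JSON files on GCS, given batchSize (valid range [1, 100])."""
    if not 1 <= batch_size <= 100:
        raise ValueError("batchSize must be in [1, 100]")
    # Each file holds up to batch_size response protos.
    return math.ceil(num_responses / batch_size)

# 100 response protos with the default batchSize of 20 -> 5 output files.
print(expected_output_shards(100, 20))  # 5
```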
Output
This building block provides 11 output parameters
| Name | Format | Description |
|---|---|---|
| done | BOOLEAN | If the value is false, the operation is still in progress. If true, the operation is completed, and either error or response is available |
| response | OBJECT | The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. For this method, the response contains an AsyncBatchAnnotateImagesResponse |
| response.customKey.value | ANY | The normal response of the operation in case of success (arbitrary fields of the response message) |
| name | STRING | The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id} |
| error | OBJECT | The error result of the operation in case of failure or cancellation. You can find out more about this error model and how to work with it in the API Design Guide |
| error.code | INTEGER | The status code, which should be an enum value of google.rpc.Code |
| error.message | STRING | A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client |
| error.details[] | OBJECT | A list of messages that carry the error details. There is a common set of message types for APIs to use |
| error.details[].customKey.value | ANY | Arbitrary fields of an error detail message |
| metadata | OBJECT | Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any |
| metadata.customKey.value | ANY | Arbitrary fields of the service-specific operation metadata |
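Since this building block returns a google.longrunning Operation, a caller typically polls it until done is true and then reads either response or error. The sketch below assumes the Operation is available as a plain dict (for example, from a GET on its name); the getter is injected so the loop itself carries no network code.

```python
import time

def wait_for_operation(get_operation, poll_interval=2.0, timeout=600.0):
    """Poll a google.longrunning Operation until done is true.

    `get_operation` is any callable returning the Operation resource as a
    dict; it is injected here so the polling loop stays testable. Returns
    the response on success and raises on error or timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = get_operation()
        if op.get("done"):
            # Exactly one of `error` and `response` is set once done is true.
            if "error" in op:
                raise RuntimeError(
                    f"operation failed: {op['error'].get('message')}"
                )
            return op.get("response")
        time.sleep(poll_interval)
    raise TimeoutError("operation did not complete before the deadline")
```

In practice the metadata field can also be inspected on each poll for progress information, as described in the table above.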