List
Lists workflows that match the specified filter in the request
Authorization
To use this building block, you must grant access to at least one of the following scopes:
- View and manage your data across Google Cloud Platform services
Input
This building block consumes 3 input parameters
| Name | Format | Description |
|---|---|---|
| parent | STRING | Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names, of the form projects/{project_id}/regions/{region} |
| pageToken | STRING | Optional. The page token, returned by a previous call, to request the next page of results |
| pageSize | INTEGER | Optional. The maximum number of results to return in each response |
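The pageToken/pageSize pair follows the standard list-pagination pattern: pass the nextPageToken from one response as the pageToken of the next request, until a response comes back without a token. A minimal sketch of that loop, with `fetch_page` standing in for the actual HTTP call (the stub function and its sample data are hypothetical, not part of the API):

```python
def list_all_templates(fetch_page, parent, page_size=100):
    """Collect every template by following nextPageToken until it is absent."""
    templates, page_token = [], None
    while True:
        response = fetch_page(parent=parent, pageSize=page_size, pageToken=page_token)
        templates.extend(response.get("templates", []))
        page_token = response.get("nextPageToken")
        if not page_token:  # no token means the last page has been reached
            return templates

# Hypothetical in-memory stub standing in for the workflowTemplates.list call;
# it pages through 5 fake templates using the offset as the opaque token.
def fake_fetch_page(parent, pageSize, pageToken):
    data = [{"id": f"tpl-{i}"} for i in range(5)]
    start = int(pageToken or 0)
    next_start = start + pageSize
    page = {"templates": data[start:next_start]}
    if next_start < len(data):
        page["nextPageToken"] = str(next_start)
    return page

all_templates = list_all_templates(
    fake_fetch_page, "projects/my-project/regions/us-central1", page_size=2
)
```

A real token is opaque and must be passed back unchanged; only the stub here interprets it as an offset.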
Output
This building block provides 35 output parameters
| Name | Format | Description |
|---|---|---|
| nextPageToken | STRING | Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent request |
| templates[] | OBJECT | A Cloud Dataproc workflow template resource |
| templates[].placement | OBJECT | Specifies the workflow execution target. Either managed_cluster or cluster_selector is required |
| templates[].placement.clusterSelector | OBJECT | A selector that chooses the target cluster for jobs based on metadata |
| templates[].placement.clusterSelector.clusterLabels | OBJECT | Required. The cluster labels. A cluster must have all labels to match |
| templates[].placement.clusterSelector.clusterLabels.customKey.value | STRING | Required. The cluster labels. A cluster must have all labels to match |
| templates[].placement.clusterSelector.zone | STRING | Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used |
| templates[].placement.managedCluster | OBJECT | Cluster that is managed by the workflow |
| templates[].placement.managedCluster.labels | OBJECT | Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long and must conform to the PCRE regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the PCRE regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster |
| templates[].placement.managedCluster.labels.customKey.value | STRING | Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long and must conform to the PCRE regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the PCRE regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster |
| templates[].placement.managedCluster.clusterName | STRING | Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lowercase letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with a hyphen. Must consist of between 2 and 35 characters |
| templates[].updateTime | ANY | Output only. The time the template was last updated |
| templates[].parameters[] | OBJECT | A configurable parameter that replaces one or more fields in the template. Parameterizable fields: labels, file URIs, job properties, job arguments, script variables, main class (in HadoopJob and SparkJob), zone (in ClusterSelector) |
| templates[].parameters[].validation | OBJECT | Configuration for parameter validation |
| templates[].parameters[].fields[] | STRING |  |
| templates[].parameters[].name | STRING | Required. Parameter name. The parameter name is used as the key and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters |
| templates[].parameters[].description | STRING | Optional. Brief description of the parameter. Must not exceed 1024 characters |
| templates[].name | STRING | Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names, of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id} |
| templates[].version | INTEGER | Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update flow fetches the current template with a GetWorkflowTemplate request, which returns the template with the version field filled in with the current server version; the user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request |
| templates[].id | STRING | Required. The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of between 3 and 50 characters |
| templates[].jobs[] | OBJECT | A job executed by the workflow |
| templates[].jobs[].hadoopJob | OBJECT | A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html) |
| templates[].jobs[].hiveJob | OBJECT | A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN |
| templates[].jobs[].prerequisiteStepIds[] | STRING |  |
| templates[].jobs[].labels | OBJECT | Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job |
| templates[].jobs[].labels.customKey.value | STRING | Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job |
| templates[].jobs[].sparkJob | OBJECT | A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN |
| templates[].jobs[].sparkSqlJob | OBJECT | A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries |
| templates[].jobs[].pysparkJob | OBJECT | A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN |
| templates[].jobs[].scheduling | OBJECT | Job scheduling options |
| templates[].jobs[].pigJob | OBJECT | A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN |
| templates[].jobs[].stepId | STRING | Required. The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the goog-dataproc-workflow-step-id job label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of between 3 and 50 characters |
| templates[].createTime | ANY | Output only. The time the template was created |
| templates[].labels | OBJECT | Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but if present must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template |
| templates[].labels.customKey.value | STRING | Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but if present must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template |
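The id and stepId fields share the same constraint (3 to 50 characters; letters, numbers, underscores, and hyphens; no leading or trailing underscore or hyphen), so it can be checked client-side before instantiating a template. A rough sketch of that rule as stated above; the helper name is ours, not part of the API:

```python
import re

# First and last characters must be alphanumeric; up to 48 characters of the
# wider set in between, giving a total length of 3 to 50 and guaranteeing the
# id never begins or ends with '_' or '-'.
_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_-]{1,48}[A-Za-z0-9]$")

def is_valid_template_id(template_id: str) -> bool:
    """Return True if template_id satisfies the documented id/stepId rule."""
    return bool(_ID_RE.fullmatch(template_id))
```

Note that a two-character id is rejected by design: the rule requires at least 3 characters.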
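The managedCluster.clusterName prefix has its own, stricter rule (2 to 35 characters; lowercase letters, numbers, and hyphens; must begin with a letter; cannot end with a hyphen). A hedged client-side check of that rule as stated in the table; the function name is an illustration, not part of the API:

```python
import re

# A lowercase letter first, up to 33 characters of [a-z0-9-] in the middle,
# and an alphanumeric last character: length 2..35, no trailing hyphen.
_CLUSTER_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,33}[a-z0-9]$")

def is_valid_cluster_name_prefix(name: str) -> bool:
    """Return True if name satisfies the documented clusterName prefix rule."""
    return bool(_CLUSTER_NAME_RE.fullmatch(name))
```

Because the service appends a random suffix to form the final cluster name, keeping the prefix short leaves room for that suffix.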