List

Lists regions/{region}/jobs in a project

Authorization

To use this building block, you must grant access to at least one of the following scopes:

  • View and manage your data across Google Cloud Platform services

Input

This building block consumes 7 input parameters

Each entry below lists the parameter name, its format, and a description.

projectId STRING Required

Required. The ID of the Google Cloud Platform project that the job belongs to

region STRING Required

Required. The Cloud Dataproc region in which to handle the request

jobStateMatcher ENUMERATION

Optional. Specifies enumerated categories of jobs to list (default = match ALL jobs). If filter is provided, jobStateMatcher will be ignored

pageToken STRING

Optional. The page token, returned by a previous call, to request the next page of results

pageSize INTEGER

Optional. The number of results to return in each response

clusterName STRING

Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster

filter STRING

Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax:

field = value AND field = value ...

where field is status.state or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be either ACTIVE or NON_ACTIVE. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator.

Example filter:

status.state = ACTIVE AND labels.env = staging AND labels.starred = *
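The filter grammar above can be assembled programmatically. Below is a minimal sketch (the helper name build_jobs_filter is an illustration, not part of any client library) that joins clauses with the only supported operator, AND:

```python
# Hypothetical helper (not part of the API) that assembles a jobs.list
# filter string from the pieces described above.
def build_jobs_filter(state=None, labels=None):
    """Build a case-sensitive filter: `field = value AND field = value ...`."""
    clauses = []
    if state is not None:
        if state not in ("ACTIVE", "NON_ACTIVE"):
            raise ValueError("status.state must be ACTIVE or NON_ACTIVE")
        clauses.append(f"status.state = {state}")
    for key, value in (labels or {}).items():
        # A value of "*" matches all values for the label key.
        clauses.append(f"labels.{key} = {value}")
    # Only the logical AND operator is supported.
    return " AND ".join(clauses)

print(build_jobs_filter(state="ACTIVE", labels={"env": "staging", "starred": "*"}))
# → status.state = ACTIVE AND labels.env = staging AND labels.starred = *
```

Clause order follows the input; since only implicit/explicit AND exists, order does not change the filter's meaning.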

Output

This building block provides 101 output parameters

Each entry below lists the parameter name, its format, and a description.

jobs[] OBJECT

A Cloud Dataproc job resource

jobs[].labels OBJECT

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job

jobs[].labels.customKey.value STRING

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job
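The label constraints above can be checked client-side before submission. This sketch uses a simplified reading of RFC 1035 labels (a lowercase letter first, then lowercase letters, digits, or hyphens, ending with a letter or digit, 1 to 63 characters); the service performs the authoritative validation:

```python
import re

# Sketch of client-side validation for the label rules described above;
# the Dataproc service performs its own authoritative validation.
_RFC1035 = re.compile(r"^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$")  # 1-63 chars

def validate_job_labels(labels):
    if len(labels) > 32:
        raise ValueError("no more than 32 labels can be associated with a job")
    for key, value in labels.items():
        if not _RFC1035.match(key):
            raise ValueError(f"invalid label key: {key!r}")
        # Values may be empty, but if present must also conform.
        if value and not _RFC1035.match(value):
            raise ValueError(f"invalid label value: {value!r}")

validate_job_labels({"env": "staging", "starred": ""})  # passes silently
```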

jobs[].driverOutputResourceUri STRING

Output only. A URI pointing to the location of the stdout of the job's driver program

jobs[].statusHistory[] OBJECT

Cloud Dataproc job status

jobs[].statusHistory[].state ENUMERATION

Output only. A state message specifying the overall job state

jobs[].statusHistory[].details STRING

Output only. Optional job state details, such as an error description if the state is ERROR

jobs[].statusHistory[].stateStartTime ANY

Output only. The time when this state was entered

jobs[].statusHistory[].substate ENUMERATION

Output only. Additional state information, which includes status reported by the agent

jobs[].sparkSqlJob OBJECT

A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries

jobs[].sparkSqlJob.queryList OBJECT

A list of queries to run on a cluster

jobs[].sparkSqlJob.queryList.queries[] STRING

jobs[].sparkSqlJob.queryFileUri STRING

The HCFS URI of the script that contains SQL queries

jobs[].sparkSqlJob.scriptVariables OBJECT

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";)

jobs[].sparkSqlJob.scriptVariables.customKey.value STRING

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";)
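As a quick illustration of the SET equivalence described above, a hypothetical helper could render a scriptVariables map into the corresponding Spark SQL commands:

```python
# Hypothetical helper showing how scriptVariables entries map onto
# Spark SQL SET commands, per the description above.
def script_variables_to_sql(script_variables):
    return [f'SET {name}="{value}";' for name, value in script_variables.items()]

print(script_variables_to_sql({"env": "staging"}))
# → ['SET env="staging";']
```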

jobs[].sparkSqlJob.jarFileUris[] STRING

jobs[].sparkSqlJob.loggingConfig OBJECT

The runtime logging config of the job

jobs[].sparkSqlJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].sparkSqlJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].sparkSqlJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten

jobs[].sparkSqlJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten

jobs[].sparkJob OBJECT

A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN

jobs[].sparkJob.args[] STRING

jobs[].sparkJob.fileUris[] STRING

jobs[].sparkJob.mainClass STRING

The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris

jobs[].sparkJob.archiveUris[] STRING

jobs[].sparkJob.mainJarFileUri STRING

The HCFS URI of the jar file that contains the main class

jobs[].sparkJob.jarFileUris[] STRING

jobs[].sparkJob.loggingConfig OBJECT

The runtime logging config of the job

jobs[].sparkJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].sparkJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].sparkJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

jobs[].sparkJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

jobs[].yarnApplications[] OBJECT

A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto. Beta Feature: This report is available for testing purposes only. It may be changed before final release

jobs[].yarnApplications[].state ENUMERATION

Required. The application state

jobs[].yarnApplications[].name STRING

Required. The application name

jobs[].yarnApplications[].trackingUrl STRING

Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access

jobs[].yarnApplications[].progress FLOAT

Required. The numerical progress of the application, from 1 to 100

jobs[].pysparkJob OBJECT

A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN

jobs[].pysparkJob.jarFileUris[] STRING

jobs[].pysparkJob.loggingConfig OBJECT

The runtime logging config of the job

jobs[].pysparkJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].pysparkJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].pysparkJob.properties OBJECT

Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

jobs[].pysparkJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

jobs[].pysparkJob.args[] STRING

jobs[].pysparkJob.fileUris[] STRING

jobs[].pysparkJob.pythonFileUris[] STRING

jobs[].pysparkJob.mainPythonFileUri STRING

Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file

jobs[].pysparkJob.archiveUris[] STRING

jobs[].reference OBJECT

Encapsulates the full scoping used to reference a job

jobs[].reference.projectId STRING

Required. The ID of the Google Cloud Platform project that the job belongs to

jobs[].reference.jobId STRING

Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server
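The job ID rules above reduce to a single character-class check. A hedged client-side sketch (the service still enforces the real rules):

```python
import re

# Client-side check of the job ID rules described above: letters, digits,
# underscores, or hyphens, with a maximum length of 100 characters.
_JOB_ID = re.compile(r"^[A-Za-z0-9_-]{1,100}$")

def is_valid_job_id(job_id):
    return bool(_JOB_ID.match(job_id))

is_valid_job_id("wordcount_2024-01")   # valid
is_valid_job_id("bad id with spaces")  # invalid: spaces not allowed
```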

jobs[].hadoopJob OBJECT

A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html)

jobs[].hadoopJob.args[] STRING

jobs[].hadoopJob.fileUris[] STRING

jobs[].hadoopJob.mainClass STRING

The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris

jobs[].hadoopJob.archiveUris[] STRING

jobs[].hadoopJob.mainJarFileUri STRING

The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'

jobs[].hadoopJob.jarFileUris[] STRING

jobs[].hadoopJob.loggingConfig OBJECT

The runtime logging config of the job

jobs[].hadoopJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].hadoopJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].hadoopJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code

jobs[].hadoopJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code

jobs[].placement OBJECT

Cloud Dataproc job config

jobs[].placement.clusterName STRING

Required. The name of the cluster where the job will be submitted

jobs[].placement.clusterUuid STRING

Output only. A cluster UUID generated by the Cloud Dataproc service when the job is submitted

jobs[].status OBJECT

Cloud Dataproc job status

jobs[].status.state ENUMERATION

Output only. A state message specifying the overall job state

jobs[].status.details STRING

Output only. Optional job state details, such as an error description if the state is ERROR

jobs[].status.stateStartTime ANY

Output only. The time when this state was entered

jobs[].status.substate ENUMERATION

Output only. Additional state information, which includes status reported by the agent

jobs[].driverControlFilesUri STRING

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri

jobs[].scheduling OBJECT

Job scheduling options

jobs[].scheduling.maxFailuresPerHour INTEGER

Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10
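The thrashing heuristic described above (4 non-zero driver exits inside a 10-minute window) can be sketched with a sliding window. This illustrates the stated rule only; it is not the service's actual implementation:

```python
from collections import deque

# Sliding-window sketch of the thrashing heuristic described above: a job
# may be reported as thrashing if the driver exits with a non-zero code
# 4 times within a 10-minute (600 s) window.
def is_thrashing(failure_times, window_seconds=600, max_failures=4):
    """failure_times: ascending timestamps (seconds) of non-zero driver exits."""
    window = deque()
    for t in failure_times:
        window.append(t)
        # Drop failures that fell out of the trailing window.
        while window and t - window[0] > window_seconds:
            window.popleft()
        if len(window) >= max_failures:
            return True
    return False

is_thrashing([0, 100, 200, 300])    # 4 failures within 600 s: thrashing
is_thrashing([0, 700, 1400, 2100])  # failures spread out: not thrashing
```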

jobs[].pigJob OBJECT

A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN

jobs[].pigJob.loggingConfig OBJECT

The runtime logging config of the job

jobs[].pigJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].pigJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

jobs[].pigJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code

jobs[].pigJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code

jobs[].pigJob.continueOnFailure BOOLEAN

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries

jobs[].pigJob.queryFileUri STRING

The HCFS URI of the script that contains the Pig queries

jobs[].pigJob.queryList OBJECT

A list of queries to run on a cluster

jobs[].pigJob.queryList.queries[] STRING

jobs[].pigJob.jarFileUris[] STRING

jobs[].pigJob.scriptVariables OBJECT

Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value])

jobs[].pigJob.scriptVariables.customKey.value STRING

Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value])

jobs[].jobUuid STRING

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time

jobs[].hiveJob OBJECT

A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN

jobs[].hiveJob.continueOnFailure BOOLEAN

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries

jobs[].hiveJob.queryFileUri STRING

The HCFS URI of the script that contains Hive queries

jobs[].hiveJob.queryList OBJECT

A list of queries to run on a cluster

jobs[].hiveJob.queryList.queries[] STRING

jobs[].hiveJob.jarFileUris[] STRING

jobs[].hiveJob.scriptVariables OBJECT

Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";)

jobs[].hiveJob.scriptVariables.customKey.value STRING

Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";)

jobs[].hiveJob.properties OBJECT

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code

jobs[].hiveJob.properties.customKey.value STRING

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code

nextPageToken STRING

Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListJobsRequest
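Putting pageSize, pageToken, and nextPageToken together, a caller drains all pages by looping until the response omits nextPageToken. The sketch below abstracts the actual jobs.list call behind a fetch_page callable (hypothetical; any HTTP or client-library call with the same shape would do):

```python
# Sketch of the pagination loop implied by pageToken/nextPageToken.
# `fetch_page` stands in for the actual jobs.list call; it takes a page
# token (or None for the first page) and returns a response dict with
# "jobs" and, when more results remain, "nextPageToken".
def list_all_jobs(fetch_page):
    jobs, token = [], None
    while True:
        response = fetch_page(token)
        jobs.extend(response.get("jobs", []))
        token = response.get("nextPageToken")
        if not token:  # an absent token means the listing is complete
            return jobs

# Usage with a fake two-page backend:
pages = {None: {"jobs": ["a", "b"], "nextPageToken": "t1"},
         "t1": {"jobs": ["c"]}}
list_all_jobs(pages.__getitem__)
# → ['a', 'b', 'c']
```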