Cancel

Starts a job cancellation request


Starts a job cancellation request. To access the job resource after cancellation, call regions/{region}/jobs.list or regions/{region}/jobs.get
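The cancel call is a POST against the job resource, and the job remains readable afterwards via jobs.get. The sketch below only builds the request URLs, assuming the standard Dataproc v1 REST endpoint shape; actually issuing the requests still requires credentials for the scope listed under Authorization.

```python
# Minimal sketch: build Dataproc v1 REST URLs for jobs.cancel and jobs.get.
# The base URL and path shape are assumptions based on the standard
# Dataproc v1 REST API; sending the requests needs OAuth credentials.
DATAPROC_BASE = "https://dataproc.googleapis.com/v1"

def cancel_job_url(project_id: str, region: str, job_id: str) -> str:
    """URL for the jobs.cancel POST request."""
    return (f"{DATAPROC_BASE}/projects/{project_id}"
            f"/regions/{region}/jobs/{job_id}:cancel")

def get_job_url(project_id: str, region: str, job_id: str) -> str:
    """URL for reading the job resource after cancellation (jobs.get)."""
    return (f"{DATAPROC_BASE}/projects/{project_id}"
            f"/regions/{region}/jobs/{job_id}")
```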

Authorization

To use this building block, you must grant access to at least one of the following scopes:

  • View and manage your data across Google Cloud Platform services

Input

This building block consumes 3 input parameters

  Parameter name | Format

projectId STRING Required

Required. The ID of the Google Cloud Platform project that the job belongs to

region STRING Required

Required. The Cloud Dataproc region in which to handle the request

jobId STRING Required

Required. The job ID

Output

This building block provides 99 output parameters

  Parameter name | Format

labels OBJECT

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job

labels.customKey.value STRING

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job
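The key and value constraints above can be checked locally before submitting a job. A rough sketch (the function name and the simplified RFC 1035 pattern here are illustrative, not part of the API):

```python
import re

# Rough validation sketch of the stated label constraints: RFC 1035-style
# names, keys 1-63 chars, values empty or 1-63 chars, at most 32 labels.
# The regex is a simplified approximation of an RFC 1035 label.
_LABEL_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")

def validate_labels(labels: dict) -> bool:
    if len(labels) > 32:
        return False
    for key, value in labels.items():
        if not (1 <= len(key) <= 63 and _LABEL_RE.match(key)):
            return False
        if value and not (len(value) <= 63 and _LABEL_RE.match(value)):
            return False
    return True
```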

driverOutputResourceUri STRING

Output only. A URI pointing to the location of the stdout of the job's driver program

statusHistory[] OBJECT

Cloud Dataproc job status

statusHistory[].state ENUMERATION

Output only. A state message specifying the overall job state

statusHistory[].details STRING

Output only. Optional job state details, such as an error description if the state is ERROR

statusHistory[].stateStartTime ANY

Output only. The time when this state was entered

statusHistory[].substate ENUMERATION

Output only. Additional state information, which includes status reported by the agent

sparkSqlJob OBJECT

A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries

sparkSqlJob.queryList OBJECT

A list of queries to run on a cluster

sparkSqlJob.queryList.queries[] STRING

sparkSqlJob.queryFileUri STRING

The HCFS URI of the script that contains SQL queries

sparkSqlJob.scriptVariables OBJECT

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";)

sparkSqlJob.scriptVariables.customKey.value STRING

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";)
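Since the description says the mapping is equivalent to the Spark SQL command `SET name="value";`, the substitution can be sketched as rendering one SET statement per entry (the helper name is made up for illustration):

```python
# Sketch: render a scriptVariables mapping as the Spark SQL SET commands
# the description says it is equivalent to.
def script_variables_to_sql(variables: dict) -> list:
    return [f'SET {name}="{value}";' for name, value in variables.items()]
```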

sparkSqlJob.jarFileUris[] STRING

sparkSqlJob.loggingConfig OBJECT

The runtime logging config of the job

sparkSqlJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

sparkSqlJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
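Putting the example levels from the description into the driverLogLevels shape gives a fragment like the following (a sketch; the level names follow the usual log4j set):

```python
# Sketch of a loggingConfig fragment using the example per-package levels
# from the description: 'com.google = FATAL', 'root = INFO',
# 'org.apache = DEBUG'. "root" configures the rootLogger.
logging_config = {
    "driverLogLevels": {
        "com.google": "FATAL",
        "root": "INFO",
        "org.apache": "DEBUG",
    }
}
```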

sparkSqlJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten

sparkSqlJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten

sparkJob OBJECT

A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN

sparkJob.args[] STRING

sparkJob.fileUris[] STRING

sparkJob.mainClass STRING

The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris

sparkJob.archiveUris[] STRING

sparkJob.mainJarFileUri STRING

The HCFS URI of the jar file that contains the main class

sparkJob.jarFileUris[] STRING

sparkJob.loggingConfig OBJECT

The runtime logging config of the job

sparkJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

sparkJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

sparkJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

sparkJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

yarnApplications[] OBJECT

A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto. Beta Feature: This report is available for testing purposes only. It may be changed before final release

yarnApplications[].state ENUMERATION

Required. The application state

yarnApplications[].name STRING

Required. The application name

yarnApplications[].trackingUrl STRING

Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access

yarnApplications[].progress FLOAT

Required. The numerical progress of the application, from 1 to 100

pysparkJob OBJECT

A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN

pysparkJob.jarFileUris[] STRING

pysparkJob.loggingConfig OBJECT

The runtime logging config of the job

pysparkJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

pysparkJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

pysparkJob.properties OBJECT

Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

pysparkJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code

pysparkJob.args[] STRING

pysparkJob.fileUris[] STRING

pysparkJob.pythonFileUris[] STRING

pysparkJob.mainPythonFileUri STRING

Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file

pysparkJob.archiveUris[] STRING

reference OBJECT

Encapsulates the full scoping used to reference a job

reference.projectId STRING

Required. The ID of the Google Cloud Platform project that the job belongs to

reference.jobId STRING

Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server
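The stated jobId character and length rules translate directly into a regular expression, so a client can validate an ID before submission. A sketch (the pattern is a reading of the constraints above, not an API-published regex):

```python
import re

# Sketch of the stated jobId constraints: only letters, numbers,
# underscores, or hyphens, with a maximum length of 100 characters.
_JOB_ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,100}$")

def is_valid_job_id(job_id: str) -> bool:
    return bool(_JOB_ID_RE.match(job_id))
```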

hadoopJob OBJECT

A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html)

hadoopJob.args[] STRING

hadoopJob.fileUris[] STRING

hadoopJob.mainClass STRING

The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris

hadoopJob.archiveUris[] STRING

hadoopJob.mainJarFileUri STRING

The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
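The three example URIs above use different filesystem schemes (Cloud Storage, HDFS, local file). A small sketch showing how the scheme can be pulled out of each example with the standard library:

```python
from urllib.parse import urlparse

# Sketch: classify an HCFS URI by its scheme, using the example
# mainJarFileUri values from the description.
def uri_scheme(uri: str) -> str:
    return urlparse(uri).scheme
```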

hadoopJob.jarFileUris[] STRING

hadoopJob.loggingConfig OBJECT

The runtime logging config of the job

hadoopJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

hadoopJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

hadoopJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code

hadoopJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code

placement OBJECT

Cloud Dataproc job config

placement.clusterName STRING

Required. The name of the cluster where the job will be submitted

placement.clusterUuid STRING

Output only. A cluster UUID generated by the Cloud Dataproc service when the job is submitted

status OBJECT

Cloud Dataproc job status

status.state ENUMERATION

Output only. A state message specifying the overall job state

status.details STRING

Output only. Optional job state details, such as an error description if the state is ERROR

status.stateStartTime ANY

Output only. The time when this state was entered

status.substate ENUMERATION

Output only. Additional state information, which includes status reported by the agent

driverControlFilesUri STRING

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri

scheduling OBJECT

Job scheduling options

scheduling.maxFailuresPerHour INTEGER

Optional. The maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. The maximum value is 10
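The thrashing rule stated above (4 non-zero exits within a 10-minute window) can be sketched as a sliding-window check. This is an illustrative reconstruction of the described rule, not the service's actual implementation:

```python
# Sketch: a job may be reported as thrashing if the driver exits with a
# non-zero code 4 times within a 10-minute (600 s) window. Timestamps
# are failure times in seconds; any window of `max_failures` consecutive
# failures spanning <= window_s triggers the rule.
def is_thrashing(failure_times: list, window_s: float = 600.0,
                 max_failures: int = 4) -> bool:
    times = sorted(failure_times)
    for i in range(len(times) - max_failures + 1):
        if times[i + max_failures - 1] - times[i] <= window_s:
            return True
    return False
```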

pigJob OBJECT

A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN

pigJob.loggingConfig OBJECT

The runtime logging config of the job

pigJob.loggingConfig.driverLogLevels OBJECT

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

pigJob.loggingConfig.driverLogLevels.customKey.value ENUMERATION

The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

pigJob.properties OBJECT

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code

pigJob.properties.customKey.value STRING

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code

pigJob.continueOnFailure BOOLEAN

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries

pigJob.queryFileUri STRING

The HCFS URI of the script that contains the Pig queries

pigJob.queryList OBJECT

A list of queries to run on a cluster

pigJob.queryList.queries[] STRING

pigJob.jarFileUris[] STRING

pigJob.scriptVariables OBJECT

Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value])

pigJob.scriptVariables.customKey.value STRING

Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value])
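The Pig form of the same substitution is `name=[value]`, per the description. A sketch rendering that form (the helper name is made up for illustration):

```python
# Sketch: render a Pig scriptVariables mapping as the `name=[value]`
# declarations the description says it is equivalent to.
def script_variables_to_pig(variables: dict) -> list:
    return [f"{name}=[{value}]" for name, value in variables.items()]
```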

jobUuid STRING

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time

hiveJob OBJECT

A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN

hiveJob.continueOnFailure BOOLEAN

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries

hiveJob.queryFileUri STRING

The HCFS URI of the script that contains Hive queries

hiveJob.queryList OBJECT

A list of queries to run on a cluster

hiveJob.queryList.queries[] STRING

hiveJob.jarFileUris[] STRING

hiveJob.scriptVariables OBJECT

Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";)

hiveJob.scriptVariables.customKey.value STRING

Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";)

hiveJob.properties OBJECT

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code

hiveJob.properties.customKey.value STRING

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code