Update

Updates the state of an existing Cloud Dataflow job.

To update the state of an existing job, we recommend using projects.locations.jobs.update with a [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). Using projects.jobs.update is not recommended, as you can only update the state of jobs that are running in us-central1.

Authorization

To use this building block you will have to grant access to at least one of the following scopes:

  • View and manage your data across Google Cloud Platform services
  • View and manage your Google Compute Engine resources
  • View your Google Compute Engine resources
  • View your email address

Input

This building block consumes 151 input parameters

Each parameter below is listed with its name, format, and whether it is required.

projectId STRING Required

The ID of the Cloud Platform project that the job belongs to

jobId STRING Required

The job ID

location STRING

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job

requestedState ENUMERATION

The job's requested state.

UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state
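
For illustration, here is a minimal sketch of such a state change using the google-api-python-client library against the recommended projects.locations.jobs.update endpoint. It assumes Application Default Credentials are configured; the project, location, and job IDs are placeholders.

```python
# Minimal sketch, assuming Application Default Credentials and the
# google-api-python-client library; all IDs below are placeholders.
from googleapiclient.discovery import build

PROJECT_ID = "my-project"    # hypothetical project ID
LOCATION = "us-central1"     # regional endpoint that contains the job
JOB_ID = "my-job-id"         # hypothetical job ID

dataflow = build("dataflow", "v1b3")

# Setting requestedState to JOB_STATE_CANCELLED irrevocably terminates
# the job unless it has already reached a terminal state.
body = {"requestedState": "JOB_STATE_CANCELLED"}

job = (
    dataflow.projects()
    .locations()
    .jobs()
    .update(projectId=PROJECT_ID, location=LOCATION, jobId=JOB_ID, body=body)
    .execute()
)
print(job.get("currentState"))
```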

clientRequestId STRING

The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it

id STRING

The unique ID of this job.

This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job

currentStateTime ANY

The timestamp associated with the current state

transformNameMapping OBJECT

The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job

transformNameMapping.customKey.value STRING Required

The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job
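
For example, a hypothetical mapping (the prefix names are invented) that renames two transform prefixes when a job is replaced might look like this:

```python
# Hypothetical transform name prefixes: the old job's prefixes on the
# left, the replacement job's corresponding prefixes on the right.
transform_name_mapping = {
    "ReadInput": "ReadInputV2",
    "WriteOutput": "WriteOutputV2",
}
```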

environment OBJECT

Describes the environment in which a Dataflow Job runs

environment.clusterManagerApiService STRING

The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com"

environment.tempStoragePrefix STRING

The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is:

Google Cloud Storage:

storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

environment.experiments[] STRING

The list of experiments to enable

environment.version OBJECT

A structure describing which components and their versions of the service are required in order to run the job

environment.version.customKey.value ANY Required

A structure describing which components and their versions of the service are required in order to run the job

environment.serviceAccountEmail STRING

Identity to run virtual machines as. Defaults to the default account

environment.sdkPipelineOptions OBJECT

The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way

environment.sdkPipelineOptions.customKey.value ANY Required

The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way

environment.flexResourceSchedulingGoal ENUMERATION

Which Flexible Resource Scheduling mode to run in

environment.workerPools[] OBJECT

Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job

environment.workerPools[].workerHarnessContainerImage STRING

Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry

environment.workerPools[].machineType STRING

Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].diskType STRING

Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].kind STRING

The kind of the worker pool; currently only harness and shuffle are supported

environment.workerPools[].dataDisks[] OBJECT

Describes the data disk used by a workflow job

environment.workerPools[].subnetwork STRING

Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK"

environment.workerPools[].ipConfiguration ENUMERATION

Configuration for VM IPs

environment.workerPools[].taskrunnerSettings OBJECT

Taskrunner configuration settings

environment.workerPools[].taskrunnerSettings.workflowFileName STRING

The file to store the workflow in

environment.workerPools[].taskrunnerSettings.languageHint STRING

The suggested backend language

environment.workerPools[].taskrunnerSettings.commandlinesFileName STRING

The file to store preprocessing commands in

environment.workerPools[].taskrunnerSettings.tempStoragePrefix STRING

The prefix of the resources the taskrunner should use for temporary storage.

The supported resource type is:

Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

environment.workerPools[].taskrunnerSettings.baseTaskDir STRING

The location on the worker for task-specific subdirectories

environment.workerPools[].taskrunnerSettings.baseUrl STRING

The base URL for the taskrunner to use when accessing Google Cloud APIs.

When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators".

If not specified, the default value is "http://www.googleapis.com/"
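
That resolution algorithm is what Python's standard urljoin implements (per RFC 3986, which supersedes RFC 1808), so the behavior can be previewed locally:

```python
from urllib.parse import urljoin

base_url = "http://www.googleapis.com/"  # the documented default

# A relative API path resolves against the base URL.
print(urljoin(base_url, "dataflow/v1b3/projects"))
# -> http://www.googleapis.com/dataflow/v1b3/projects
```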

environment.workerPools[].taskrunnerSettings.logToSerialconsole BOOLEAN

Whether to send taskrunner log info to Google Compute Engine VM serial console

environment.workerPools[].taskrunnerSettings.continueOnException BOOLEAN

Whether to continue taskrunner if an exception is hit

environment.workerPools[].taskrunnerSettings.taskUser STRING

The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root"

environment.workerPools[].taskrunnerSettings.vmId STRING

The ID string of the VM

environment.workerPools[].taskrunnerSettings.alsologtostderr BOOLEAN

Whether to also send taskrunner log info to stderr

environment.workerPools[].taskrunnerSettings.taskGroup STRING

The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel"

environment.workerPools[].taskrunnerSettings.harnessCommand STRING

The command to launch the worker harness

environment.workerPools[].taskrunnerSettings.logDir STRING

The directory on the VM to store logs

environment.workerPools[].taskrunnerSettings.oauthScopes[] STRING

The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API

environment.workerPools[].taskrunnerSettings.dataflowApiVersion STRING

The API version of the endpoint, e.g. "v1b3"

environment.workerPools[].taskrunnerSettings.logUploadLocation STRING

Indicates where to put logs. If this is not specified, the logs will not be uploaded.

The supported resource type is:

Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

environment.workerPools[].taskrunnerSettings.streamingWorkerMainClass STRING

The streaming worker main class name

environment.workerPools[].autoscalingSettings OBJECT

Settings for WorkerPool autoscaling

environment.workerPools[].autoscalingSettings.algorithm ENUMERATION

The algorithm to use for autoscaling

environment.workerPools[].autoscalingSettings.maxNumWorkers INTEGER

The maximum number of workers to cap scaling at
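
As a hedged sketch of what an autoscaling block might look like in the request body (the worker cap is an arbitrary example value):

```python
# AUTOSCALING_ALGORITHM_BASIC is one of the documented algorithm values;
# maxNumWorkers is an arbitrary example cap.
autoscaling_settings = {
    "algorithm": "AUTOSCALING_ALGORITHM_BASIC",
    "maxNumWorkers": 10,
}
```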

environment.workerPools[].metadata OBJECT

Metadata to set on the Google Compute Engine VMs

environment.workerPools[].metadata.customKey.value STRING Required

Metadata to set on the Google Compute Engine VMs

environment.workerPools[].defaultPackageSet ENUMERATION

The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language

environment.workerPools[].network STRING

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default"

environment.workerPools[].zone STRING

Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].numWorkers INTEGER

Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].numThreadsPerWorker INTEGER

The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming)

environment.workerPools[].diskSourceImage STRING

Fully qualified source image for disks

environment.workerPools[].packages[] OBJECT

The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool.

This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user's code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run

environment.workerPools[].teardownPolicy ENUMERATION

Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down.

If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs.

If unknown or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].onHostMaintenance STRING

The action to take on host maintenance, as defined by the Google Compute Engine API

environment.workerPools[].poolArgs OBJECT

Extra arguments for this worker pool

environment.workerPools[].poolArgs.customKey.value ANY Required

Extra arguments for this worker pool

environment.workerPools[].diskSizeGb INTEGER

Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default

environment.dataset STRING

The dataset for the current project where various workflow related tables are stored.

The supported resource type is:

Google BigQuery: bigquery.googleapis.com/{dataset}

environment.internalExperiments OBJECT

Experimental settings

environment.internalExperiments.customKey.value ANY Required

Experimental settings

environment.serviceKmsKeyName STRING

If set, contains the Cloud KMS key identifier used to encrypt data at rest, also known as a Customer-Managed Encryption Key (CMEK).

Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
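
Building that key name from its parts is straightforward; the resource names below are placeholders:

```python
# Placeholder resource names -- substitute your own.
project_id, location, key_ring, key = "my-project", "us-central1", "my-ring", "my-key"

service_kms_key_name = (
    f"projects/{project_id}/locations/{location}"
    f"/keyRings/{key_ring}/cryptoKeys/{key}"
)
print(service_kms_key_name)
# -> projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```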

environment.userAgent OBJECT

A description of the process that generated the request

environment.userAgent.customKey.value ANY Required

A description of the process that generated the request

stageStates[] OBJECT

A message describing the state of a particular execution stage

stageStates[].executionStageName STRING

The name of the execution stage

stageStates[].currentStateTime ANY

The time at which the stage transitioned to this state

stageStates[].executionStageState ENUMERATION

Execution stage states allow the same set of values as JobState

jobMetadata OBJECT

Metadata available primarily for filtering jobs. Will be included in the ListJob response and Job SUMMARY view

jobMetadata.datastoreDetails[] OBJECT

Metadata for a Datastore connector used by the job

jobMetadata.datastoreDetails[].projectId STRING

ProjectId accessed in the connection

jobMetadata.datastoreDetails[].namespace STRING

Namespace used in the connection

jobMetadata.sdkVersion OBJECT

The version of the SDK used to run the job

jobMetadata.sdkVersion.version STRING

The version of the SDK used to run the job

jobMetadata.sdkVersion.versionDisplayName STRING

A readable string describing the version of the SDK

jobMetadata.sdkVersion.sdkSupportStatus ENUMERATION

The support status for this SDK version

jobMetadata.fileDetails[] OBJECT

Metadata for a File connector used by the job

jobMetadata.fileDetails[].filePattern STRING

File Pattern used to access files by the connector

jobMetadata.bigqueryDetails[] OBJECT

Metadata for a BigQuery connector used by the job

jobMetadata.bigqueryDetails[].dataset STRING

Dataset accessed in the connection

jobMetadata.bigqueryDetails[].projectId STRING

Project accessed in the connection

jobMetadata.bigqueryDetails[].query STRING

Query used to access data in the connection

jobMetadata.bigqueryDetails[].table STRING

Table accessed in the connection

jobMetadata.pubsubDetails[] OBJECT

Metadata for a PubSub connector used by the job

jobMetadata.pubsubDetails[].topic STRING

Topic accessed in the connection

jobMetadata.pubsubDetails[].subscription STRING

Subscription used in the connection

jobMetadata.bigTableDetails[] OBJECT

Metadata for a BigTable connector used by the job

jobMetadata.bigTableDetails[].instanceId STRING

InstanceId accessed in the connection

jobMetadata.bigTableDetails[].tableId STRING

TableId accessed in the connection

jobMetadata.bigTableDetails[].projectId STRING

ProjectId accessed in the connection

jobMetadata.spannerDetails[] OBJECT

Metadata for a Spanner connector used by the job

jobMetadata.spannerDetails[].projectId STRING

ProjectId accessed in the connection

jobMetadata.spannerDetails[].databaseId STRING

DatabaseId accessed in the connection

jobMetadata.spannerDetails[].instanceId STRING

InstanceId accessed in the connection

type ENUMERATION

The type of Cloud Dataflow job

projectId STRING

The ID of the Cloud Platform project that the job belongs to

createdFromSnapshotId STRING

If this is specified, the job's initial state is populated from the given snapshot

pipelineDescription OBJECT

A descriptive representation of the submitted pipeline as well as the executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow provided metrics

pipelineDescription.originalPipelineTransform[] OBJECT

Description of the type, names/ids, and input/outputs for a transform

pipelineDescription.originalPipelineTransform[].id STRING

SDK generated id of this transform instance

pipelineDescription.originalPipelineTransform[].displayData[] OBJECT

Data provided with a pipeline or transform to provide descriptive info

pipelineDescription.originalPipelineTransform[].outputCollectionName[] STRING

User names for all collection outputs to this transform

pipelineDescription.originalPipelineTransform[].kind ENUMERATION

Type of transform

pipelineDescription.originalPipelineTransform[].inputCollectionName[] STRING

User names for all collection inputs to this transform

pipelineDescription.originalPipelineTransform[].name STRING

User provided name for this transform instance

pipelineDescription.displayData[] OBJECT

Data provided with a pipeline or transform to provide descriptive info

pipelineDescription.displayData[].key STRING

The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system

pipelineDescription.displayData[].shortStrValue STRING

A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip

pipelineDescription.displayData[].label STRING

An optional label to display in a dax UI for the element

pipelineDescription.displayData[].url STRING

An optional full URL

pipelineDescription.displayData[].timestampValue ANY

Contains value if the data is of timestamp type

pipelineDescription.displayData[].javaClassValue STRING

Contains value if the data is of java class type

pipelineDescription.displayData[].boolValue BOOLEAN

Contains value if the data is of a boolean type

pipelineDescription.displayData[].strValue STRING

Contains value if the data is of string type

pipelineDescription.displayData[].durationValue ANY

Contains value if the data is of duration type

pipelineDescription.displayData[].int64Value INTEGER

Contains value if the data is of int64 type

pipelineDescription.displayData[].namespace STRING

The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering

pipelineDescription.displayData[].floatValue FLOAT

Contains value if the data is of float type

pipelineDescription.executionPipelineStage[] OBJECT

Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning

pipelineDescription.executionPipelineStage[].componentSource[] OBJECT

Description of an interstitial value between transforms in an execution stage

pipelineDescription.executionPipelineStage[].kind ENUMERATION

Type of transform this stage is executing

pipelineDescription.executionPipelineStage[].outputSource[] OBJECT

Description of an input or output of an execution stage

pipelineDescription.executionPipelineStage[].name STRING

Dataflow service generated name for this stage

pipelineDescription.executionPipelineStage[].inputSource[] OBJECT

Description of an input or output of an execution stage

pipelineDescription.executionPipelineStage[].id STRING

Dataflow service generated id for this stage

pipelineDescription.executionPipelineStage[].componentTransform[] OBJECT

Description of a transform executed as part of an execution stage

replaceJobId STRING

If this job is an update of an existing job, this field is the job ID of the job it replaced.

When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job

tempFiles[] STRING

A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion

name STRING

The user-specified Cloud Dataflow job name.

Only one Job with a given name may exist in a project at any given time. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job.

The name must match the regular expression [a-z]([-a-z0-9]{0,38}[a-z0-9])?
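
A quick client-side check of a candidate name against that pattern (the sample names are arbitrary):

```python
import re

JOB_NAME_RE = re.compile(r"[a-z]([-a-z0-9]{0,38}[a-z0-9])?")

for name in ["wordcount-2019", "My_Job", "a"]:  # arbitrary samples
    print(name, "->", bool(JOB_NAME_RE.fullmatch(name)))
# wordcount-2019 -> True  (lowercase letters, digits, hyphens)
# My_Job         -> False (uppercase and underscore are not allowed)
# a              -> True  (the trailing group is optional)
```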

steps[] OBJECT

Defines a particular step within a Cloud Dataflow job.

A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job.

Here's an example of a sequence of steps which together implement a Map-Reduce job:

  • Read a collection of data from some source, parsing the collection's elements.

  • Validate the elements.

  • Apply a user-defined function to map each element to some value and extract an element-specific key value.

  • Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection.

  • Write the elements out to some data sink.

Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce
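
As a loose illustration of the shape only, a steps[] list for the Map-Reduce sequence above might look like the following; the step kinds and property keys are invented placeholders, not documented values:

```python
# Purely illustrative -- kinds and property keys are hypothetical.
# The {"value": ...} wrapping mirrors the steps[].properties.customKey.value
# convention used in this parameter listing.
steps = [
    {"name": "read-source", "kind": "ExampleRead",
     "properties": {"source": {"value": "storage.googleapis.com/my-bucket/input"}}},
    {"name": "validate", "kind": "ExampleParDo",
     "properties": {"user_fn": {"value": "ValidateFn"}}},
    {"name": "group-by-key", "kind": "ExampleGroupByKey", "properties": {}},
    {"name": "write-sink", "kind": "ExampleWrite",
     "properties": {"sink": {"value": "storage.googleapis.com/my-bucket/output"}}},
]
```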

steps[].name STRING

The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job

steps[].kind STRING

The kind of step in the Cloud Dataflow job

steps[].properties OBJECT

Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL

steps[].properties.customKey.value ANY Required

Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL

replacedByJobId STRING

If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job

executionInfo OBJECT

Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job

executionInfo.stages OBJECT

A mapping from each stage to the information about that stage

executionInfo.stages.customKey OBJECT

Add additional named properties

executionInfo.stages.customKey.stepName[] STRING

The steps associated with the execution stage. Note that stages may have several steps, and that a given step might be run by more than one stage

currentState ENUMERATION

The current state of the job.

Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified.

A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made.

This field may be mutated by the Cloud Dataflow service; callers cannot mutate it

location STRING

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job

startTime ANY

The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service

stepsLocation STRING

The GCS location where the steps are stored

createTime ANY

The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service

labels OBJECT

User-defined labels for this job.

The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions:

  • Keys must conform to regexp: \p{Ll}\p{Lo}{0,62}
  • Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
  • Both keys and values are additionally constrained to be <= 128 bytes in size.
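
These constraints can be validated client-side before submission; below is a sketch using the third-party regex module (the standard re module does not support \p{...} classes), applying the documented patterns verbatim:

```python
# pip install regex  -- third-party module with \p{...} support.
import regex

KEY_RE = regex.compile(r"\p{Ll}\p{Lo}{0,62}")
VALUE_RE = regex.compile(r"[\p{Ll}\p{Lo}\p{N}_-]{0,63}")

def labels_ok(labels: dict) -> bool:
    if len(labels) > 64:          # no more than 64 entries
        return False
    return all(
        KEY_RE.fullmatch(k) is not None
        and VALUE_RE.fullmatch(v) is not None
        and len(k.encode("utf-8")) <= 128   # byte-size limits
        and len(v.encode("utf-8")) <= 128
        for k, v in labels.items()
    )

print(labels_ok({"k": "prod-1"}))  # True under a literal reading of the patterns
```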

labels.customKey.value STRING Required

User-defined labels for this job.

The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions:

  • Keys must conform to regexp: \p{Ll}\p{Lo}{0,62}
  • Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
  • Both keys and values are additionally constrained to be <= 128 bytes in size.

Output

This building block provides 148 output parameters

Each parameter below is listed with its name and format.

requestedState ENUMERATION

The job's requested state.

UpdateJob may be used to switch between the JOB_STATE_STOPPED and JOB_STATE_RUNNING states, by setting requested_state. UpdateJob may also be used to directly set a job's requested state to JOB_STATE_CANCELLED or JOB_STATE_DONE, irrevocably terminating the job if it has not already reached a terminal state

clientRequestId STRING

The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it

id STRING

The unique ID of this job.

This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job

currentStateTime ANY

The timestamp associated with the current state

transformNameMapping OBJECT

The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job

transformNameMapping.customKey.value STRING

The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job

environment OBJECT

Describes the environment in which a Dataflow Job runs

environment.clusterManagerApiService STRING

The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com"

environment.tempStoragePrefix STRING

The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is:

Google Cloud Storage:

storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

environment.experiments[] STRING

The list of experiments to enable

environment.version OBJECT

A structure describing which components and their versions of the service are required in order to run the job

environment.version.customKey.value ANY

A structure describing which components and their versions of the service are required in order to run the job

environment.serviceAccountEmail STRING

Identity to run virtual machines as. Defaults to the default account

environment.sdkPipelineOptions OBJECT

The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way

environment.sdkPipelineOptions.customKey.value ANY

The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way

environment.flexResourceSchedulingGoal ENUMERATION

Which Flexible Resource Scheduling mode to run in

environment.workerPools[] OBJECT

Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job

environment.workerPools[].workerHarnessContainerImage STRING

Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry

environment.workerPools[].machineType STRING

Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].diskType STRING

Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].kind STRING

The kind of the worker pool; currently only harness and shuffle are supported

environment.workerPools[].dataDisks[] OBJECT

Describes the data disk used by a workflow job

environment.workerPools[].subnetwork STRING

Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK"

environment.workerPools[].ipConfiguration ENUMERATION

Configuration for VM IPs

environment.workerPools[].taskrunnerSettings OBJECT

Taskrunner configuration settings

environment.workerPools[].taskrunnerSettings.workflowFileName STRING

The file to store the workflow in

environment.workerPools[].taskrunnerSettings.languageHint STRING

The suggested backend language

environment.workerPools[].taskrunnerSettings.commandlinesFileName STRING

The file to store preprocessing commands in

environment.workerPools[].taskrunnerSettings.tempStoragePrefix STRING

The prefix of the resources the taskrunner should use for temporary storage.

The supported resource type is:

Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

environment.workerPools[].taskrunnerSettings.baseTaskDir STRING

The location on the worker for task-specific subdirectories

environment.workerPools[].taskrunnerSettings.baseUrl STRING

The base URL for the taskrunner to use when accessing Google Cloud APIs.

When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators".

If not specified, the default value is "http://www.googleapis.com/"

environment.workerPools[].taskrunnerSettings.logToSerialconsole BOOLEAN

Whether to send taskrunner log info to Google Compute Engine VM serial console

environment.workerPools[].taskrunnerSettings.continueOnException BOOLEAN

Whether to continue taskrunner if an exception is hit

environment.workerPools[].taskrunnerSettings.taskUser STRING

The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root"

environment.workerPools[].taskrunnerSettings.vmId STRING

The ID string of the VM

environment.workerPools[].taskrunnerSettings.alsologtostderr BOOLEAN

Whether to also send taskrunner log info to stderr

environment.workerPools[].taskrunnerSettings.taskGroup STRING

The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel"

environment.workerPools[].taskrunnerSettings.harnessCommand STRING

The command to launch the worker harness

environment.workerPools[].taskrunnerSettings.logDir STRING

The directory on the VM to store logs

environment.workerPools[].taskrunnerSettings.oauthScopes[] STRING

The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API

environment.workerPools[].taskrunnerSettings.dataflowApiVersion STRING

The API version of the endpoint, e.g. "v1b3"

environment.workerPools[].taskrunnerSettings.logUploadLocation STRING

Indicates where to put logs. If this is not specified, the logs will not be uploaded.

The supported resource type is:

Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}

environment.workerPools[].taskrunnerSettings.streamingWorkerMainClass STRING

The streaming worker main class name

environment.workerPools[].autoscalingSettings OBJECT

Settings for WorkerPool autoscaling

environment.workerPools[].autoscalingSettings.algorithm ENUMERATION

The algorithm to use for autoscaling

environment.workerPools[].autoscalingSettings.maxNumWorkers INTEGER

The maximum number of workers to cap scaling at

environment.workerPools[].metadata OBJECT

Metadata to set on the Google Compute Engine VMs

environment.workerPools[].metadata.customKey.value STRING

Metadata to set on the Google Compute Engine VMs

environment.workerPools[].defaultPackageSet ENUMERATION

The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language

environment.workerPools[].network STRING

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default"

environment.workerPools[].zone STRING

Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].numWorkers INTEGER

Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].numThreadsPerWorker INTEGER

The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming)

environment.workerPools[].diskSourceImage STRING

Fully qualified source image for disks

environment.workerPools[].packages[] OBJECT

The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool.

This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user's code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run

environment.workerPools[].teardownPolicy ENUMERATION

Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down.

If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs.

If unknown or unspecified, the service will attempt to choose a reasonable default

environment.workerPools[].onHostMaintenance STRING

The action to take on host maintenance, as defined by the Google Compute Engine API

environment.workerPools[].poolArgs OBJECT

Extra arguments for this worker pool

environment.workerPools[].poolArgs.customKey.value ANY

Extra arguments for this worker pool

environment.workerPools[].diskSizeGb INTEGER

Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default

environment.dataset STRING

The dataset for the current project where various workflow related tables are stored.

The supported resource type is:

Google BigQuery: bigquery.googleapis.com/{dataset}

environment.internalExperiments OBJECT

Experimental settings

environment.internalExperiments.customKey.value ANY

Experimental settings

environment.serviceKmsKeyName STRING

If set, contains the Cloud KMS key identifier used to encrypt data at rest, also known as a Customer-Managed Encryption Key (CMEK).

Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY

environment.userAgent OBJECT

A description of the process that generated the request

environment.userAgent.customKey.value ANY

A description of the process that generated the request

stageStates[] OBJECT

A message describing the state of a particular execution stage

stageStates[].executionStageName STRING

The name of the execution stage

stageStates[].currentStateTime ANY

The time at which the stage transitioned to this state

stageStates[].executionStageState ENUMERATION

Execution stage states allow the same set of values as JobState

jobMetadata OBJECT

Metadata available primarily for filtering jobs. Will be included in the ListJob response and Job SUMMARY view

jobMetadata.datastoreDetails[] OBJECT

Metadata for a Datastore connector used by the job

jobMetadata.datastoreDetails[].projectId STRING

ProjectId accessed in the connection

jobMetadata.datastoreDetails[].namespace STRING

Namespace used in the connection

jobMetadata.sdkVersion OBJECT

The version of the SDK used to run the job

jobMetadata.sdkVersion.version STRING

The version of the SDK used to run the job

jobMetadata.sdkVersion.versionDisplayName STRING

A readable string describing the version of the SDK

jobMetadata.sdkVersion.sdkSupportStatus ENUMERATION

The support status for this SDK version

jobMetadata.fileDetails[] OBJECT

Metadata for a File connector used by the job

jobMetadata.fileDetails[].filePattern STRING

File Pattern used to access files by the connector

jobMetadata.bigqueryDetails[] OBJECT

Metadata for a BigQuery connector used by the job

jobMetadata.bigqueryDetails[].dataset STRING

Dataset accessed in the connection

jobMetadata.bigqueryDetails[].projectId STRING

Project accessed in the connection

jobMetadata.bigqueryDetails[].query STRING

Query used to access data in the connection

jobMetadata.bigqueryDetails[].table STRING

Table accessed in the connection

jobMetadata.pubsubDetails[] OBJECT

Metadata for a PubSub connector used by the job

jobMetadata.pubsubDetails[].topic STRING

Topic accessed in the connection

jobMetadata.pubsubDetails[].subscription STRING

Subscription used in the connection

jobMetadata.bigTableDetails[] OBJECT

Metadata for a BigTable connector used by the job

jobMetadata.bigTableDetails[].instanceId STRING

InstanceId accessed in the connection

jobMetadata.bigTableDetails[].tableId STRING

TableId accessed in the connection

jobMetadata.bigTableDetails[].projectId STRING

ProjectId accessed in the connection

jobMetadata.spannerDetails[] OBJECT

Metadata for a Spanner connector used by the job

jobMetadata.spannerDetails[].projectId STRING

ProjectId accessed in the connection

jobMetadata.spannerDetails[].databaseId STRING

DatabaseId accessed in the connection

jobMetadata.spannerDetails[].instanceId STRING

InstanceId accessed in the connection

type ENUMERATION

The type of Cloud Dataflow job

projectId STRING

The ID of the Cloud Platform project that the job belongs to

createdFromSnapshotId STRING

If this is specified, the job's initial state is populated from the given snapshot

pipelineDescription OBJECT

A descriptive representation of the submitted pipeline as well as the executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow provided metrics

pipelineDescription.originalPipelineTransform[] OBJECT

Description of the type, names/ids, and input/outputs for a transform

pipelineDescription.originalPipelineTransform[].id STRING

SDK generated id of this transform instance

pipelineDescription.originalPipelineTransform[].displayData[] OBJECT

Data provided with a pipeline or transform to provide descriptive info

pipelineDescription.originalPipelineTransform[].outputCollectionName[] STRING

User names for all collection outputs to this transform

pipelineDescription.originalPipelineTransform[].kind ENUMERATION

Type of transform

pipelineDescription.originalPipelineTransform[].inputCollectionName[] STRING

User names for all collection inputs to this transform

pipelineDescription.originalPipelineTransform[].name STRING

User provided name for this transform instance

pipelineDescription.displayData[] OBJECT

Data provided with a pipeline or transform to provide descriptive info

pipelineDescription.displayData[].key STRING

The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system

pipelineDescription.displayData[].shortStrValue STRING

A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip

pipelineDescription.displayData[].label STRING

An optional label to display in a dax UI for the element

pipelineDescription.displayData[].url STRING

An optional full URL

pipelineDescription.displayData[].timestampValue ANY

Contains value if the data is of timestamp type

pipelineDescription.displayData[].javaClassValue STRING

Contains value if the data is of java class type

pipelineDescription.displayData[].boolValue BOOLEAN

Contains value if the data is of a boolean type

pipelineDescription.displayData[].strValue STRING

Contains value if the data is of string type

pipelineDescription.displayData[].durationValue ANY

Contains value if the data is of duration type

pipelineDescription.displayData[].int64Value INTEGER

Contains value if the data is of int64 type

pipelineDescription.displayData[].namespace STRING

The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering

pipelineDescription.displayData[].floatValue FLOAT

Contains value if the data is of float type

pipelineDescription.executionPipelineStage[] OBJECT

Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning

pipelineDescription.executionPipelineStage[].componentSource[] OBJECT

Description of an interstitial value between transforms in an execution stage

pipelineDescription.executionPipelineStage[].kind ENUMERATION

Type of transform this stage is executing

pipelineDescription.executionPipelineStage[].outputSource[] OBJECT

Description of an input or output of an execution stage

pipelineDescription.executionPipelineStage[].name STRING

Dataflow service generated name for this stage

pipelineDescription.executionPipelineStage[].inputSource[] OBJECT

Description of an input or output of an execution stage

pipelineDescription.executionPipelineStage[].id STRING

Dataflow service generated id for this stage

pipelineDescription.executionPipelineStage[].componentTransform[] OBJECT

Description of a transform executed as part of an execution stage

replaceJobId STRING

If this job is an update of an existing job, this field is the job ID of the job it replaced.

When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job

tempFiles[] STRING

A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion

name STRING

The user-specified Cloud Dataflow job name.

Only one Job with a given name may exist in a project at any given time. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job.

The name must match the regular expression [a-z]([-a-z0-9]{0,38}[a-z0-9])?

steps[] OBJECT

Defines a particular step within a Cloud Dataflow job.

A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job.

Here's an example of a sequence of steps which together implement a Map-Reduce job:

  • Read a collection of data from some source, parsing the collection's elements.

  • Validate the elements.

  • Apply a user-defined function to map each element to some value and extract an element-specific key value.

  • Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection.

  • Write the elements out to some data sink.

Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce

steps[].name STRING

The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job

steps[].kind STRING

The kind of step in the Cloud Dataflow job

steps[].properties OBJECT

Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL

steps[].properties.customKey.value ANY

Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL

replacedByJobId STRING

If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job

executionInfo OBJECT

Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job

executionInfo.stages OBJECT

A mapping from each stage to the information about that stage

executionInfo.stages.customKey OBJECT

Add additional named properties

executionInfo.stages.customKey.stepName[] STRING

The steps associated with the execution stage. Note that stages may have several steps, and that a given step might be run by more than one stage

currentState ENUMERATION

The current state of the job.

Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified.

A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made.

This field may be mutated by the Cloud Dataflow service; callers cannot mutate it
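
Since the response mirrors the Job resource, a caller can watch currentState until a terminal state is reached. A hedged sketch follows (the terminal-state set is assumed from the JobState enum; the IDs are placeholders):

```python
import time
from googleapiclient.discovery import build

PROJECT_ID, LOCATION, JOB_ID = "my-project", "us-central1", "my-job-id"  # placeholders

dataflow = build("dataflow", "v1b3")
jobs = dataflow.projects().locations().jobs()

# Assumed terminal states, after which no further updates are possible.
TERMINAL = {
    "JOB_STATE_DONE", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED",
    "JOB_STATE_UPDATED", "JOB_STATE_DRAINED",
}

while True:
    job = jobs.get(projectId=PROJECT_ID, location=LOCATION, jobId=JOB_ID).execute()
    state = job.get("currentState")
    print("currentState:", state)
    if state in TERMINAL:
        break
    time.sleep(30)  # arbitrary poll interval
```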

location STRING

The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job

startTime ANY

The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service

stepsLocation STRING

The GCS location where the steps are stored

createTime ANY

The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service

labels OBJECT

User-defined labels for this job.

The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions:

  • Keys must conform to regexp: \p{Ll}\p{Lo}{0,62}
  • Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
  • Both keys and values are additionally constrained to be <= 128 bytes in size.

labels.customKey.value STRING

User-defined labels for this job.

The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions:

  • Keys must conform to regexp: \p{Ll}\p{Lo}{0,62}
  • Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
  • Both keys and values are additionally constrained to be <= 128 bytes in size.