Aggregated
List the jobs of a project across all regions
Authorization
To use this building block, you must grant access to at least one of the following scopes:
- View and manage your data across Google Cloud Platform services
- View and manage your Google Compute Engine resources
- View your Google Compute Engine resources
- View your email address
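As a minimal sketch (not the only way to authenticate), these scopes can be requested through Application Default Credentials when calling the underlying Dataflow REST method, `projects.jobs.aggregated`. The client library, the scope choice, and the `my-project` ID below are illustrative assumptions, not part of this building block:

```python
# Sketch: authenticate with one of the scopes above and call the
# Dataflow v1b3 projects.jobs.aggregated method. Assumes Application
# Default Credentials are configured (e.g. via
# `gcloud auth application-default login`).
import google.auth
from googleapiclient.discovery import build

# The read-only Compute Engine scope is the narrowest option listed above.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/compute.readonly"]
)

dataflow = build("dataflow", "v1b3", credentials=credentials)

# "my-project" is a placeholder project ID.
response = dataflow.projects().jobs().aggregated(projectId="my-project").execute()
for job in response.get("jobs", []):
    print(job["id"], job.get("name"), job.get("currentState"))
```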
Input
This building block consumes 6 input parameters
Name | Format | Description |
---|---|---|
projectId (required) | STRING | The project which owns the jobs |
filter | ENUMERATION | The kind of filter to use |
location | STRING | The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job |
pageToken | STRING | Set this to the `next_page_token` field of a previous response to request additional results in a long list |
pageSize | INTEGER | If there are many jobs, limit the response to at most this many. The actual number of jobs returned will be the lesser of max_responses and an unspecified server-defined limit |
view | ENUMERATION | Level of information requested in response. Default is JOB_VIEW_SUMMARY |
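The paging inputs work together: cap each page with `pageSize`, then feed each response's `nextPageToken` back in as `pageToken` until it is absent. A sketch, reusing the `dataflow` client from the authorization example (the `filter` and `view` values are illustrative enumeration choices):

```python
def list_all_jobs(dataflow, project_id, page_size=100):
    """Yield every job in the project across all regions, page by page."""
    page_token = None
    while True:
        params = {
            "projectId": project_id,
            "filter": "ALL",             # the kind of filter to use
            "view": "JOB_VIEW_SUMMARY",  # level of information requested
            "pageSize": page_size,       # the server may return fewer
        }
        if page_token:
            # Continue a long listing from the previous response.
            params["pageToken"] = page_token
        response = dataflow.projects().jobs().aggregated(**params).execute()
        yield from response.get("jobs", [])
        # nextPageToken is set only if there may be more results.
        page_token = response.get("nextPageToken")
        if not page_token:
            return
```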
Output
This building block provides 68 output parameters
Name | Format | Description |
---|---|---|
jobs[] | OBJECT | Defines a job to be run by the Cloud Dataflow service |
jobs[].requestedState | ENUMERATION | The job's requested state |
jobs[].clientRequestId | STRING | The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it |
jobs[].id | STRING | The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job |
jobs[].currentStateTime | ANY | The timestamp associated with the current state |
jobs[].transformNameMapping | OBJECT | The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job |
jobs[].transformNameMapping.customKey.value | STRING | The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job |
jobs[].environment | OBJECT | Describes the environment in which a Dataflow Job runs |
jobs[].environment.clusterManagerApiService | STRING | The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com" |
jobs[].environment.tempStoragePrefix | STRING | The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource types are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object} |
jobs[].environment.experiments[] | STRING | The list of experiments to enable |
jobs[].environment.version | OBJECT | A structure describing which components of the service, and which versions, are required in order to run the job |
jobs[].environment.version.customKey.value | ANY | A structure describing which components of the service, and which versions, are required in order to run the job |
jobs[].environment.serviceAccountEmail | STRING | Identity to run virtual machines as. Defaults to the default account |
jobs[].environment.sdkPipelineOptions | OBJECT | The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language-agnostic and platform-independent way |
jobs[].environment.sdkPipelineOptions.customKey.value | ANY | The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language-agnostic and platform-independent way |
jobs[].environment.flexResourceSchedulingGoal | ENUMERATION | Which Flexible Resource Scheduling mode to run in |
jobs[].environment.workerPools[] | OBJECT | Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job |
jobs[].environment.dataset | STRING | The dataset for the current project where various workflow-related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset} |
jobs[].environment.internalExperiments | OBJECT | Experimental settings |
jobs[].environment.internalExperiments.customKey.value | ANY | Experimental settings |
jobs[].environment.serviceKmsKeyName | STRING | If set, contains the Cloud KMS key identifier used to encrypt data at rest, also known as a Customer-Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY |
jobs[].environment.userAgent | OBJECT | A description of the process that generated the request |
jobs[].environment.userAgent.customKey.value | ANY | A description of the process that generated the request |
jobs[].stageStates[] | OBJECT | A message describing the state of a particular execution stage |
jobs[].stageStates[].executionStageName | STRING | The name of the execution stage |
jobs[].stageStates[].currentStateTime | ANY | The time at which the stage transitioned to this state |
jobs[].stageStates[].executionStageState | ENUMERATION | Execution stage states allow the same set of values as JobState |
jobs[].jobMetadata | OBJECT | Metadata available primarily for filtering jobs. Will be included in the ListJob response and Job SUMMARY view |
jobs[].jobMetadata.datastoreDetails[] | OBJECT | Metadata for a Datastore connector used by the job |
jobs[].jobMetadata.sdkVersion | OBJECT | The version of the SDK used to run the job |
jobs[].jobMetadata.sdkVersion.version | STRING | The version of the SDK used to run the job |
jobs[].jobMetadata.sdkVersion.versionDisplayName | STRING | A readable string describing the version of the SDK |
jobs[].jobMetadata.sdkVersion.sdkSupportStatus | ENUMERATION | The support status for this SDK version |
jobs[].jobMetadata.fileDetails[] | OBJECT | Metadata for a File connector used by the job |
jobs[].jobMetadata.bigqueryDetails[] | OBJECT | Metadata for a BigQuery connector used by the job |
jobs[].jobMetadata.pubsubDetails[] | OBJECT | Metadata for a Pub/Sub connector used by the job |
jobs[].jobMetadata.bigTableDetails[] | OBJECT | Metadata for a Bigtable connector used by the job |
jobs[].jobMetadata.spannerDetails[] | OBJECT | Metadata for a Spanner connector used by the job |
jobs[].type | ENUMERATION | The type of Cloud Dataflow job |
jobs[].projectId | STRING | The ID of the Cloud Platform project that the job belongs to |
jobs[].createdFromSnapshotId | STRING | If this is specified, the job's initial state is populated from the given snapshot |
jobs[].pipelineDescription | OBJECT | A descriptive representation of the submitted pipeline as well as the executed form. This data is provided by the Dataflow service for ease of visualizing the pipeline and interpreting Dataflow-provided metrics |
jobs[].pipelineDescription.originalPipelineTransform[] | OBJECT | Description of the type, names/ids, and input/outputs for a transform |
jobs[].pipelineDescription.displayData[] | OBJECT | Data provided with a pipeline or transform to provide descriptive info |
jobs[].pipelineDescription.executionPipelineStage[] | OBJECT | Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning |
jobs[].replaceJobId | STRING | If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job |
jobs[].tempFiles[] | STRING | A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion |
jobs[].name | STRING | The user-specified Cloud Dataflow job name. Only one Job with a given name may exist in a project at any given time. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression `[a-z]([-a-z0-9]{0,38}[a-z0-9])?` |
jobs[].steps[] | OBJECT | Defines a particular step within a Cloud Dataflow job. A job consists of multiple steps, each of which performs some specific operation as part of the overall job, and data is typically passed from one step to another; a sequence of steps might, for example, together implement a Map-Reduce job. Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce |
jobs[].steps[].name | STRING | The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job |
jobs[].steps[].kind | STRING | The kind of step in the Cloud Dataflow job |
jobs[].steps[].properties | OBJECT | Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL |
jobs[].steps[].properties.customKey.value | ANY | Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL |
jobs[].replacedByJobId | STRING | If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job |
jobs[].executionInfo | OBJECT | Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job |
jobs[].executionInfo.stages | OBJECT | A mapping from each stage to the information about that stage |
jobs[].executionInfo.stages.customKey | OBJECT | Add additional named properties |
jobs[].currentState | ENUMERATION | The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state; after a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it |
jobs[].location | STRING | The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job |
jobs[].startTime | ANY | The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service |
jobs[].stepsLocation | STRING | The Cloud Storage location where the steps are stored |
jobs[].createTime | ANY | The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service |
jobs[].labels | OBJECT | User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF-8 strings subject to key and value character-set and size restrictions |
jobs[].labels.customKey.value | STRING | User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF-8 strings subject to key and value character-set and size restrictions |
nextPageToken | STRING | Set if there may be more results than fit in this response |
failedLocation[] | OBJECT | Indicates which [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) failed to respond to a request for data |
failedLocation[].name | STRING | The name of the [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that failed to respond |
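Because this is an aggregated listing, a response can succeed overall while some regions fail: `failedLocation[]` names the regional endpoints that did not respond, so the `jobs[]` list may be incomplete. A sketch of consuming one response page (plain field access on the documented outputs; assumes the page comes from the paging example above):

```python
def report_page(response):
    """Print a short summary of one aggregated-list response page."""
    # Regions listed here failed to respond, so results may be partial.
    for failed in response.get("failedLocation", []):
        print(f"warning: no data from regional endpoint {failed['name']}")

    for job in response.get("jobs", []):
        sdk = job.get("jobMetadata", {}).get("sdkVersion", {})
        print(
            job.get("location", "-"),
            job.get("name", "-"),
            job.get("currentState", "-"),
            sdk.get("versionDisplayName", "unknown SDK"),
        )
```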