Get

Returns information about a specific job

Returns information about a specific job. Job information is available for a six-month period after creation. Requires that you're the person who ran the job or that you have the Is Owner project role

Authorization

To use this building block you must grant access to at least one of the following scopes (a minimal credential sketch follows the list):

  • View and manage your data in Google BigQuery
  • View and manage your data across Google Cloud Platform services
  • View your data across Google Cloud Platform services
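For illustration only, here is a minimal Python sketch of obtaining credentials that carry one of these scopes, assuming the google-auth library and Application Default Credentials; the scope URL shown is the standard BigQuery scope and is an assumption mapped from the first entry above, not part of this reference.

    # Minimal sketch, not part of this reference. Assumes `pip install google-auth`
    # and Application Default Credentials (e.g. `gcloud auth application-default login`).
    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    # Assumed scope URL for "View and manage your data in Google BigQuery".
    BIGQUERY_SCOPE = "https://www.googleapis.com/auth/bigquery"

    credentials, default_project = google.auth.default(scopes=[BIGQUERY_SCOPE])
    session = AuthorizedSession(credentials)  # attaches OAuth tokens to outgoing requests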

Input

This building block consumes 3 input parameters

Each entry below lists the parameter name and its format, followed by a description

projectId STRING Required

[Required] Project ID of the requested job

jobId STRING Required

[Required] Job ID of the requested job

location STRING

The geographic location of the job. Required except for US and EU. See details at https://cloud.google.com/bigquery/docs/locations#specifying_your_location
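For illustration only, a minimal Python sketch of calling this building block with the three input parameters over the BigQuery v2 REST endpoint. The endpoint URL and the identifiers are assumptions (verify against the current BigQuery API reference); the authorized session is built as in the sketch under Authorization.

    # Minimal sketch, not part of this reference. Identifiers below are hypothetical.
    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/bigquery"])
    session = AuthorizedSession(credentials)

    project_id = "my-project"   # projectId: project that owns the job
    job_id = "job_abc123"       # jobId: ID of the job to look up
    location = "EU"             # location: required except for the US and EU multi-regions

    # Assumed BigQuery v2 jobs.get endpoint: projectId and jobId in the path,
    # location as a query parameter.
    url = f"https://bigquery.googleapis.com/bigquery/v2/projects/{project_id}/jobs/{job_id}"
    response = session.get(url, params={"location": location})
    response.raise_for_status()
    job = response.json()       # dict holding the output parameters documented below

The `job` dict returned by this call is what the later sketches in this section inspect.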

Output

This building block provides 246 output parameters

Each entry below lists the parameter name and its format, followed by a description

jobReference OBJECT

jobReference.location STRING

The geographic location of the job. See details at https://cloud.google.com/bigquery/docs/locations#specifying_your_location

jobReference.jobId STRING

[Required] The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters

jobReference.projectId STRING

[Required] The ID of the project containing this job

status OBJECT

status.errorResult OBJECT

status.errorResult.reason STRING

A short error code that summarizes the error

status.errorResult.message STRING

A human-readable description of the error

status.errorResult.location STRING

Specifies where the error occurred, if present

status.errorResult.debugInfo STRING

Debugging information. This property is internal to Google and should not be used

status.errors[] OBJECT

status.errors[].reason STRING

A short error code that summarizes the error

status.errors[].message STRING

A human-readable description of the error

status.errors[].location STRING

Specifies where the error occurred, if present

status.errors[].debugInfo STRING

Debugging information. This property is internal to Google and should not be used

status.state STRING

[Output-only] Running state of the job
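As a non-authoritative sketch, the status fields above can be inspected like this, assuming `job` is the parsed response dict from the request sketch under Input:

    # Sketch: interpret status.state, status.errorResult and status.errors[].
    def summarize_status(job):
        status = job.get("status", {})
        state = status.get("state")                     # e.g. PENDING, RUNNING, DONE
        if state != "DONE":
            return f"job is {state}"
        error = status.get("errorResult")
        if error is None:
            return "job completed successfully"
        details = "; ".join(e.get("message", "") for e in status.get("errors", []))
        return f"job failed ({error.get('reason')}): {error.get('message')}. {details}"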

statistics OBJECT

statistics.completionRatio NUMBER

[TrustedTester] [Output-only] Job progress (0.0 -> 1.0) for LOAD and EXTRACT jobs

statistics.startTime INTEGER

[Output-only] Start time of this job, in milliseconds since the epoch. This field will be present when the job transitions from the PENDING state to either RUNNING or DONE

statistics.totalBytesProcessed INTEGER

[Output-only] [Deprecated] Use the bytes processed in the query statistics instead

statistics.query OBJECT

statistics.query.modelTrainingCurrentIteration INTEGER

[Output-only, Beta] Deprecated; do not use

statistics.query.numDmlAffectedRows INTEGER

[Output-only] The number of rows affected by a DML statement. Present only for DML statements INSERT, UPDATE or DELETE

statistics.query.totalBytesProcessed INTEGER

[Output-only] Total bytes processed for the job

statistics.query.billingTier INTEGER

[Output-only] Billing tier for the job

statistics.query.ddlOperationPerformed STRING

The DDL operation performed, possibly dependent on the pre-existence of the DDL target. Possible values (new values might be added in the future):

  • "CREATE": The query created the DDL target
  • "SKIP": No-op. Example cases: the query is CREATE TABLE IF NOT EXISTS while the table already exists, or the query is DROP TABLE IF EXISTS while the table does not exist
  • "REPLACE": The query replaced the DDL target. Example case: the query is CREATE OR REPLACE TABLE, and the table already exists
  • "DROP": The query deleted the DDL target

statistics.query.statementType STRING

The type of query statement, if valid. Possible values (new values might be added in the future):

  • "SELECT": SELECT query
  • "INSERT": INSERT query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
  • "UPDATE": UPDATE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
  • "DELETE": DELETE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
  • "MERGE": MERGE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
  • "ALTER_TABLE": ALTER TABLE query
  • "ALTER_VIEW": ALTER VIEW query
  • "CREATE_FUNCTION": CREATE FUNCTION query
  • "CREATE_MODEL": CREATE [OR REPLACE] MODEL ... AS SELECT ...
  • "CREATE_PROCEDURE": CREATE PROCEDURE query
  • "CREATE_TABLE": CREATE [OR REPLACE] TABLE without AS SELECT
  • "CREATE_TABLE_AS_SELECT": CREATE [OR REPLACE] TABLE ... AS SELECT ...
  • "CREATE_VIEW": CREATE [OR REPLACE] VIEW ... AS SELECT ...
  • "DROP_FUNCTION": DROP FUNCTION query
  • "DROP_PROCEDURE": DROP PROCEDURE query
  • "DROP_TABLE": DROP TABLE query
  • "DROP_VIEW": DROP VIEW query

statistics.query.totalSlotMs INTEGER

[Output-only] Slot-milliseconds for the job

statistics.query.totalBytesProcessedAccuracy STRING

[Output-only] For dry-run jobs, totalBytesProcessed is an estimate and this field specifies the accuracy of the estimate. Possible values: UNKNOWN: accuracy of the estimate is unknown. PRECISE: estimate is precise. LOWER_BOUND: estimate is a lower bound of what the query would cost. UPPER_BOUND: estimate is an upper bound of what the query would cost

statistics.query.totalBytesBilled INTEGER

[Output-only] Total bytes billed for the job

statistics.query.modelTraining OBJECT

statistics.query.modelTraining.expectedTotalIterations INTEGER

[Output-only, Beta] Expected number of iterations for the create model query job specified as num_iterations in the input query. The actual total number of iterations may be less than this number due to early stop

statistics.query.modelTraining.currentIteration INTEGER

[Output-only, Beta] Index of current ML training iteration. Updated during create model query job to show job progress

statistics.query.timeline[] OBJECT

statistics.query.timeline[].totalSlotMs INTEGER

Cumulative slot-ms consumed by the query

statistics.query.timeline[].activeUnits INTEGER

Total number of units currently being processed by workers. This does not correspond directly to slot usage. This is the largest value observed since the last sample

statistics.query.timeline[].completedUnits INTEGER

Total parallel units of work completed by this query

statistics.query.timeline[].elapsedMs INTEGER

Milliseconds elapsed since the start of query execution

statistics.query.timeline[].pendingUnits INTEGER

Total parallel units of work remaining for the active stages

statistics.query.reservationUsage[] OBJECT

statistics.query.reservationUsage[].name STRING

[Output-only] Reservation name or "unreserved" for on-demand resources usage

statistics.query.reservationUsage[].slotMs INTEGER

[Output-only] Slot-milliseconds the job spent in the given reservation

statistics.query.cacheHit BOOLEAN

[Output-only] Whether the query result was fetched from the query cache

statistics.query.undeclaredQueryParameters[] OBJECT

statistics.query.undeclaredQueryParameters[].name STRING

[Optional] If unset, this is a positional parameter. Otherwise, should be unique within a query

statistics.query.queryPlan[] OBJECT

statistics.query.queryPlan[].readMsMax INTEGER

Milliseconds the slowest shard spent reading input

statistics.query.queryPlan[].shuffleOutputBytes INTEGER

Total number of bytes written to shuffle

statistics.query.queryPlan[].parallelInputs INTEGER

Number of parallel input segments to be processed

statistics.query.queryPlan[].status STRING

Current status for the stage

statistics.query.queryPlan[].name STRING

Human-readable name for stage

statistics.query.queryPlan[].computeRatioMax NUMBER

Relative amount of time the slowest shard spent on CPU-bound tasks

statistics.query.queryPlan[].startMs INTEGER

Stage start time represented as milliseconds since epoch

statistics.query.queryPlan[].writeMsMax INTEGER

Milliseconds the slowest shard spent on writing output

statistics.query.queryPlan[].shuffleOutputBytesSpilled INTEGER

Total number of bytes written to shuffle and spilled to disk

statistics.query.queryPlan[].readMsAvg INTEGER

Milliseconds the average shard spent reading input

statistics.query.queryPlan[].waitMsAvg INTEGER

Milliseconds the average shard spent waiting to be scheduled

statistics.query.queryPlan[].recordsRead INTEGER

Number of records read into the stage

statistics.query.queryPlan[].writeMsAvg INTEGER

Milliseconds the average shard spent on writing output

statistics.query.queryPlan[].waitRatioMax NUMBER

Relative amount of time the slowest shard spent waiting to be scheduled

statistics.query.queryPlan[].waitMsMax INTEGER

Milliseconds the slowest shard spent waiting to be scheduled

statistics.query.queryPlan[].writeRatioAvg NUMBER

Relative amount of time the average shard spent on writing output

statistics.query.queryPlan[].computeRatioAvg NUMBER

Relative amount of time the average shard spent on CPU-bound tasks

statistics.query.queryPlan[].completedParallelInputs INTEGER

Number of parallel input segments completed

statistics.query.queryPlan[].waitRatioAvg NUMBER

Relative amount of time the average shard spent waiting to be scheduled

statistics.query.queryPlan[].recordsWritten INTEGER

Number of records written by the stage

statistics.query.queryPlan[].readRatioMax NUMBER

Relative amount of time the slowest shard spent reading input

statistics.query.queryPlan[].readRatioAvg NUMBER

Relative amount of time the average shard spent reading input

statistics.query.queryPlan[].id INTEGER

Unique ID for stage within plan

statistics.query.queryPlan[].endMs INTEGER

Stage end time represented as milliseconds since epoch

statistics.query.queryPlan[].writeRatioMax NUMBER

Relative amount of time the slowest shard spent on writing output

statistics.query.queryPlan[].computeMsAvg INTEGER

Milliseconds the average shard spent on CPU-bound tasks

statistics.query.queryPlan[].inputStages[] INTEGER

statistics.query.queryPlan[].computeMsMax INTEGER

Milliseconds the slowest shard spent on CPU-bound tasks
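As a non-authoritative sketch, the queryPlan fields above can be used to locate the stage whose slowest shard spent the most CPU time, assuming `job` is the parsed response dict for a query job (numeric values arrive as strings in the JSON):

    # Sketch: pick the query-plan stage with the largest computeMsMax.
    def slowest_stage(job):
        plan = job.get("statistics", {}).get("query", {}).get("queryPlan", [])
        if not plan:
            return None
        return max(plan, key=lambda stage: int(stage.get("computeMsMax", 0)))

    # Example: stage = slowest_stage(job)
    #          print(stage["name"], stage["computeMsMax"], stage["recordsRead"])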

statistics.query.ddlTargetRoutine OBJECT

statistics.query.ddlTargetRoutine.datasetId STRING

[Required] The ID of the dataset containing this routine

statistics.query.ddlTargetRoutine.routineId STRING

[Required] The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters

statistics.query.ddlTargetRoutine.projectId STRING

[Required] The ID of the project containing this routine

statistics.query.ddlTargetTable OBJECT

statistics.query.ddlTargetTable.tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

statistics.query.ddlTargetTable.projectId STRING

[Required] The ID of the project containing this table

statistics.query.ddlTargetTable.datasetId STRING

[Required] The ID of the dataset containing this table

statistics.query.totalPartitionsProcessed INTEGER

[Output-only] Total number of partitions processed from all partitioned tables referenced in the job

statistics.query.schema OBJECT

statistics.query.modelTrainingExpectedTotalIteration INTEGER

[Output-only, Beta] Deprecated; do not use

statistics.query.estimatedBytesProcessed INTEGER

[Output-only] The original estimate of bytes processed for the job

statistics.query.referencedTables[] OBJECT

statistics.query.referencedTables[].tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

statistics.query.referencedTables[].projectId STRING

[Required] The ID of the project containing this table

statistics.query.referencedTables[].datasetId STRING

[Required] The ID of the dataset containing this table

statistics.numChildJobs INTEGER

[Output-only] Number of child jobs executed

statistics.totalSlotMs INTEGER

[Output-only] Slot-milliseconds for the job

statistics.parentJobId STRING

[Output-only] If this is a child job, the ID of the parent job

statistics.quotaDeferments[] STRING

statistics.reservationUsage[] OBJECT

statistics.reservationUsage[].slotMs INTEGER

[Output-only] Slot-milliseconds the job spent in the given reservation

statistics.reservationUsage[].name STRING

[Output-only] Reservation name or "unreserved" for on-demand resources usage

statistics.creationTime INTEGER

[Output-only] Creation time of this job, in milliseconds since the epoch. This field will be present on all jobs

statistics.load OBJECT

statistics.load.badRecords INTEGER

[Output-only] The number of bad records encountered. Note that if the job has failed because of more bad records encountered than the maximum allowed in the load job configuration, then this number can be less than the total number of bad records present in the input data

statistics.load.inputFileBytes INTEGER

[Output-only] Number of bytes of source data in a load job

statistics.load.inputFiles INTEGER

[Output-only] Number of source files in a load job

statistics.load.outputRows INTEGER

[Output-only] Number of rows imported in a load job. Note that while an import job is in the running state, this value may change

statistics.load.outputBytes INTEGER

[Output-only] Size of the loaded data in bytes. Note that while a load job is in the running state, this value may change

statistics.extract OBJECT

statistics.extract.inputBytes INTEGER

[Output-only] Number of user bytes extracted into the result. This is the byte count as computed by BigQuery for billing purposes

statistics.extract.destinationUriFileCounts[] INTEGER

statistics.endTime INTEGER

[Output-only] End time of this job, in milliseconds since the epoch. This field will be present whenever a job is in the DONE state
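As a non-authoritative sketch, the creationTime, startTime and endTime fields (milliseconds since the epoch, returned as strings) can be combined into queue and run durations, assuming `job` is the parsed response dict:

    # Sketch: derive how long the job was queued and how long it ran.
    def job_durations_seconds(job):
        stats = job.get("statistics", {})
        creation_ms = int(stats["creationTime"])             # present on all jobs
        start_ms = int(stats.get("startTime", creation_ms))  # present once the job leaves PENDING
        end_ms = int(stats.get("endTime", start_ms))         # present once the job is DONE
        return (start_ms - creation_ms) / 1000.0, (end_ms - start_ms) / 1000.0

    # Example: queued_s, ran_s = job_durations_seconds(job)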

selfLink STRING

[Output-only] A URL that can be used to access this resource again

id STRING

[Output-only] Opaque ID field of the job

configuration OBJECT

configuration.load OBJECT

configuration.load.allowQuotedNewlines BOOLEAN

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false

configuration.load.hivePartitioningOptions OBJECT

configuration.load.hivePartitioningOptions.sourceUriPrefix STRING

[Optional, Trusted Tester] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter)

configuration.load.hivePartitioningOptions.mode STRING

[Optional, Trusted Tester] When set, what mode of hive partitioning to use when reading data. Two modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet

configuration.load.useAvroLogicalTypes BOOLEAN

[Optional] If sourceFormat is set to "AVRO", indicates whether to enable interpreting logical types into their corresponding types (i.e. TIMESTAMP), instead of only using their raw types (i.e. INTEGER)

configuration.load.skipLeadingRows INTEGER

[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped

configuration.load.timePartitioning OBJECT

configuration.load.timePartitioning.field STRING

[Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED

configuration.load.timePartitioning.expirationMs INTEGER

[Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value

configuration.load.timePartitioning.type STRING

[Required] The only type supported is DAY, which will generate one partition per day

configuration.load.timePartitioning.requirePartitionFilter BOOLEAN

configuration.load.autodetect BOOLEAN

[Optional] Indicates if we should automatically infer the options and schema for CSV and JSON sources

configuration.load.destinationEncryptionConfiguration OBJECT

configuration.load.destinationEncryptionConfiguration.kmsKeyName STRING

[Optional] Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key

configuration.load.schemaUpdateOptions[] STRING

configuration.load.schemaInline STRING

[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT"

configuration.load.rangePartitioning OBJECT

configuration.load.rangePartitioning.range OBJECT

[TrustedTester] [Required] Defines the ranges for range partitioning

configuration.load.rangePartitioning.range.end INTEGER

[TrustedTester] [Required] The end of range partitioning, exclusive

configuration.load.rangePartitioning.range.interval INTEGER

[TrustedTester] [Required] The width of each interval

configuration.load.rangePartitioning.range.start INTEGER

[TrustedTester] [Required] The start of range partitioning, inclusive

configuration.load.rangePartitioning.field STRING

[TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64

configuration.load.nullMarker STRING

[Optional] Specifies a string that represents a null value in a CSV file. For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value

configuration.load.schema OBJECT

configuration.load.schemaInlineFormat STRING

[Deprecated] The format of the schemaInline property

configuration.load.quote STRING

[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true

configuration.load.writeDisposition STRING

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_APPEND. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion

configuration.load.destinationTableProperties OBJECT

configuration.load.destinationTableProperties.labels OBJECT

[Optional] The labels associated with this table. You can use these to organize and group your tables. This will only be used if the destination table is newly created. If the table already exists and labels different from the current labels are provided, the job will fail

configuration.load.destinationTableProperties.labels.customKey.value STRING

[Optional] The labels associated with this table. You can use these to organize and group your tables. This will only be used if the destination table is newly created. If the table already exists and labels different from the current labels are provided, the job will fail

configuration.load.destinationTableProperties.friendlyName STRING

[Optional] The friendly name for the destination table. This will only be used if the destination table is newly created. If the table already exists and a value different than the current friendly name is provided, the job will fail

configuration.load.destinationTableProperties.description STRING

[Optional] The description for the destination table. This will only be used if the destination table is newly created. If the table already exists and a value different than the current description is provided, the job will fail

configuration.load.ignoreUnknownValues BOOLEAN

[Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns. JSON: Named values that don't match any column names

configuration.load.sourceFormat STRING

[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". The default value is CSV

configuration.load.destinationTable OBJECT

configuration.load.destinationTable.tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

configuration.load.destinationTable.projectId STRING

[Required] The ID of the project containing this table

configuration.load.destinationTable.datasetId STRING

[Required] The ID of the dataset containing this table

configuration.load.encoding STRING

[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties

configuration.load.clustering OBJECT

configuration.load.clustering.fields[] STRING

configuration.load.hivePartitioningMode STRING

[Optional, Trusted Tester] If hive partitioning is enabled, which mode to use. Two modes are supported: - AUTO: automatically infer partition key name(s) and type(s). - STRINGS: automatically infer partition key name(s); all types are interpreted as strings. Not all storage formats support hive partitioning; requesting hive partitioning on an unsupported format will lead to an error

configuration.load.createDisposition STRING

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion

configuration.load.sourceUris[] STRING

configuration.load.maxBadRecords INTEGER

[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV and JSON. The default value is 0, which requires that all records are valid

configuration.load.allowJaggedRows BOOLEAN

[Optional] Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats

configuration.load.fieldDelimiter STRING

[Optional] The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',')

configuration.load.projectionFields[] STRING

configuration.labels OBJECT

The labels associated with this job. You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key

configuration.labels.customKey.value STRING

The labels associated with this job. You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key

configuration.dryRun BOOLEAN

[Optional] If set, don't actually run this job. A valid query will return a mostly empty response with some processing statistics, while an invalid query will return the same error it would if it wasn't a dry run. Behavior of non-query jobs is undefined

configuration.jobType STRING

[Output-only] The type of the job. Can be QUERY, LOAD, EXTRACT, COPY or UNKNOWN

configuration.extract OBJECT

configuration.extract.printHeader BOOLEAN

[Optional] Whether to print out a header row in the results. Default is true

configuration.extract.compression STRING

[Optional] The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro

configuration.extract.destinationUris[] STRING

configuration.extract.sourceTable OBJECT

configuration.extract.sourceTable.tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

configuration.extract.sourceTable.projectId STRING

[Required] The ID of the project containing this table

configuration.extract.sourceTable.datasetId STRING

[Required] The ID of the dataset containing this table

configuration.extract.destinationFormat STRING

[Optional] The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO. The default value is CSV. Tables with nested or repeated fields cannot be exported as CSV

configuration.extract.fieldDelimiter STRING

[Optional] Delimiter to use between fields in the exported data. Default is ','

configuration.extract.destinationUri STRING

[Pick one] DEPRECATED: Use destinationUris instead, passing only one URI as necessary. The fully-qualified Google Cloud Storage URI where the extracted table should be written

configuration.copy OBJECT

configuration.copy.destinationTable OBJECT

configuration.copy.destinationTable.tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

configuration.copy.destinationTable.projectId STRING

[Required] The ID of the project containing this table

configuration.copy.destinationTable.datasetId STRING

[Required] The ID of the dataset containing this table

configuration.copy.sourceTables[] OBJECT

configuration.copy.sourceTables[].tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

configuration.copy.sourceTables[].projectId STRING

[Required] The ID of the project containing this table

configuration.copy.sourceTables[].datasetId STRING

[Required] The ID of the dataset containing this table

configuration.copy.destinationEncryptionConfiguration OBJECT

configuration.copy.destinationEncryptionConfiguration.kmsKeyName STRING

[Optional] Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key

configuration.copy.createDisposition STRING

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion

configuration.copy.sourceTable OBJECT

configuration.copy.sourceTable.tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

configuration.copy.sourceTable.projectId STRING

[Required] The ID of the project containing this table

configuration.copy.sourceTable.datasetId STRING

[Required] The ID of the dataset containing this table

configuration.copy.writeDisposition STRING

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_EMPTY. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion

configuration.jobTimeoutMs INTEGER

[Optional] Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job

configuration.query OBJECT

configuration.query.maximumBillingTier INTEGER

[Optional] Limits the billing tier for this job. Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default

configuration.query.preserveNulls BOOLEAN

[Deprecated] This property is deprecated

configuration.query.writeDisposition STRING

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_EMPTY. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion

configuration.query.timePartitioning OBJECT

configuration.query.timePartitioning.field STRING

[Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED

configuration.query.timePartitioning.expirationMs INTEGER

[Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value

configuration.query.timePartitioning.type STRING

[Required] The only type supported is DAY, which will generate one partition per day

configuration.query.timePartitioning.requirePartitionFilter BOOLEAN

configuration.query.query STRING

[Required] SQL query text to execute. The useLegacySql field can be used to indicate whether the query uses legacy SQL or standard SQL

configuration.query.userDefinedFunctionResources[] OBJECT

configuration.query.userDefinedFunctionResources[].resourceUri STRING

[Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path)

configuration.query.userDefinedFunctionResources[].inlineCode STRING

[Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code

configuration.query.destinationTable OBJECT

configuration.query.destinationTable.tableId STRING

[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

configuration.query.destinationTable.projectId STRING

[Required] The ID of the project containing this table

configuration.query.destinationTable.datasetId STRING

[Required] The ID of the dataset containing this table

configuration.query.queryParameters[] OBJECT

configuration.query.queryParameters[].name STRING

[Optional] If unset, this is a positional parameter. Otherwise, should be unique within a query

configuration.query.useLegacySql BOOLEAN

Specifies whether to use BigQuery's legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ When useLegacySql is set to false, the value of flattenResults is ignored; query will be run as if flattenResults is false

configuration.query.clustering OBJECT

configuration.query.clustering.fields[] STRING

configuration.query.destinationEncryptionConfiguration OBJECT

configuration.query.destinationEncryptionConfiguration.kmsKeyName STRING

[Optional] Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key

configuration.query.createDisposition STRING

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion

configuration.query.maximumBytesBilled INTEGER

[Optional] Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default

configuration.query.schemaUpdateOptions[] STRING

configuration.query.priority STRING

[Optional] Specifies a priority for the query. Possible values include INTERACTIVE and BATCH. The default value is INTERACTIVE

configuration.query.allowLargeResults BOOLEAN

[Optional] If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size

configuration.query.rangePartitioning OBJECT

configuration.query.rangePartitioning.range OBJECT

[TrustedTester] [Required] Defines the ranges for range partitioning

configuration.query.rangePartitioning.range.end INTEGER

[TrustedTester] [Required] The end of range partitioning, exclusive

configuration.query.rangePartitioning.range.interval INTEGER

[TrustedTester] [Required] The width of each interval

configuration.query.rangePartitioning.range.start INTEGER

[TrustedTester] [Required] The start of range partitioning, inclusive

configuration.query.rangePartitioning.field STRING

[TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64

configuration.query.parameterMode STRING

Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query

configuration.query.useQueryCache BOOLEAN

[Optional] Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true

configuration.query.tableDefinitions OBJECT

[Optional] If querying an external data source outside of BigQuery, describes the data format, location and other properties of the data source. By defining these properties, the data source can then be queried as if it were a standard BigQuery table

configuration.query.tableDefinitions.customKey OBJECT

Add additional named properties

configuration.query.tableDefinitions.customKey.ignoreUnknownValues BOOLEAN

[Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns. JSON: Named values that don't match any column names. Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored

configuration.query.tableDefinitions.customKey.autodetect BOOLEAN

Try to detect schema and format options automatically. Any option specified explicitly will be honored

configuration.query.tableDefinitions.customKey.sourceFormat STRING

[Required] The data format. For CSV files, specify "CSV". For Google sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE"

configuration.query.tableDefinitions.customKey.compression STRING

[Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats

configuration.query.tableDefinitions.customKey.hivePartitioningMode STRING

[Optional, Trusted Tester] If hive partitioning is enabled, which mode to use. Two modes are supported: - AUTO: automatically infer partition key name(s) and type(s). - STRINGS: automatically infer partition key name(s); all types are interpreted as strings. Not all storage formats support hive partitioning; requesting hive partitioning on an unsupported format will lead to an error. Note: this setting is in the process of being deprecated in favor of hivePartitioningOptions

configuration.query.tableDefinitions.customKey.sourceUris[] STRING

configuration.query.tableDefinitions.customKey.maxBadRecords INTEGER

[Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats

configuration.query.flattenResults BOOLEAN

[Optional] If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened

configuration.query.defaultDataset OBJECT

configuration.query.defaultDataset.projectId STRING

[Optional] The ID of the project containing this dataset

configuration.query.defaultDataset.datasetId STRING

[Required] A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters

user_email STRING

[Output-only] Email address of the user who ran the job

kind STRING

[Output-only] The type of the resource

etag STRING

[Output-only] A hash of this resource