Detect Intent

Processes a natural language query and returns structured, actionable data as a result

Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect the results of future queries.

Authorization

To use this building block, you will have to grant access to at least one of the following scopes:

  • View and manage your data across Google Cloud Platform services
  • View, manage and query your Dialogflow agents

Input

This building block consumes 48 input parameters

  = Parameter name
  = Format

session STRING Required

Required. The name of the session this query is sent to. Format: projects/<Project ID>/agent/sessions/<Session ID>. It's up to the API caller to choose an appropriate session ID. It can be a random number or some type of user identifier (preferably hashed). The length of the session ID must not exceed 36 bytes
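
A session name can be assembled from the project ID and a hashed user identifier. The following sketch uses placeholder values (`my-project` and the user ID are assumptions, not values from this document); hashing keeps raw identifiers out of the session name and truncating respects the 36-byte limit:

```python
import hashlib

# Placeholder values; substitute your own GCP project ID and user identifier.
project_id = "my-project"
user_id = "user@example.com"

# Hash the identifier so no raw PII appears in the session name, then
# truncate to 36 hex characters to stay within the 36-byte session ID limit.
session_id = hashlib.sha256(user_id.encode("utf-8")).hexdigest()[:36]
session = f"projects/{project_id}/agent/sessions/{session_id}"

assert len(session_id.encode("utf-8")) <= 36
assert session.startswith("projects/my-project/agent/sessions/")
```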

queryInput OBJECT

Represents the query input. It can contain either:

  1. An audio config which instructs the speech recognizer how to process the speech audio.

  2. A conversational query in the form of text.

  3. An event that specifies which intent to trigger.
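
Only one of the three forms may be set in a single `queryInput`. A minimal sketch of each shape, using the JSON field names documented below (the text, event, and sample-rate values are illustrative):

```python
# Exactly one top-level key per query input: text, event, or audioConfig.
text_query = {"text": {"text": "book a table for two", "languageCode": "en"}}
event_query = {
    "event": {
        "name": "welcome_event",
        "parameters": {"name": "Sam"},
        "languageCode": "en",
    }
}
audio_query = {
    "audioConfig": {
        "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
        "sampleRateHertz": 16000,
        "languageCode": "en",
    }
}

for query_input in (text_query, event_query, audio_query):
    assert len(query_input) == 1  # the three forms are mutually exclusive
```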

queryInput.audioConfig OBJECT

Instructs the speech recognizer how to process the audio content

queryInput.audioConfig.audioEncoding ENUMERATION

Required. Audio encoding of the audio content to process

queryInput.audioConfig.singleUtterance BOOLEAN

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance

queryInput.audioConfig.languageCode STRING

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language

queryInput.audioConfig.phraseHints[] STRING

queryInput.audioConfig.sampleRateHertz INTEGER

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details

queryInput.audioConfig.modelVariant ENUMERATION

Optional. Which variant of the Speech model to use

queryInput.event OBJECT

Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?"

queryInput.event.languageCode STRING

Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language

queryInput.event.name STRING

Required. The unique identifier of the event

queryInput.event.parameters OBJECT

Optional. The collection of parameters associated with the event

queryInput.event.parameters.customKey.value ANY Required

The value for a custom parameter key in the event's parameters collection. Arbitrary JSON values are supported

queryInput.text OBJECT

Represents the natural language text to be processed

queryInput.text.text STRING

Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters

queryInput.text.languageCode STRING

Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language
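
Since over-long text is rejected by the API, a small client-side helper can enforce the 256-character limit before sending. `make_text_input` is a hypothetical helper name, not part of the API:

```python
def make_text_input(text: str, language_code: str = "en") -> dict:
    """Build a queryInput.text payload, rejecting over-long text up front."""
    if len(text) > 256:
        raise ValueError("text length must not exceed 256 characters")
    return {"text": {"text": text, "languageCode": language_code}}

query_input = make_text_input("what's the weather tomorrow?")
assert query_input["text"]["languageCode"] == "en"
```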

queryParams OBJECT

Represents the parameters of the conversational query

queryParams.sessionEntityTypes[] OBJECT

Represents a session entity type.

Extends or replaces a developer entity type at the user session level (we refer to the entity types defined at the agent level as "developer entity types").

Note: session entity types apply to all queries, regardless of the language

queryParams.sessionEntityTypes[].name STRING

Required. The unique identifier of this session entity type. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>.

<Entity Type Display Name> must be the display name of an existing entity type in the same agent that will be overridden or supplemented

queryParams.sessionEntityTypes[].entityOverrideMode ENUMERATION

Required. Indicates whether the additional data should override or supplement the developer entity type definition

queryParams.sessionEntityTypes[].entities[] OBJECT

An entity entry for an associated entity type
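
A session entity type that supplements a developer entity type might look like the sketch below. The session name, the `fruit` display name, and the entity values are all hypothetical; `fruit` stands in for the display name of an entity type that already exists in the agent:

```python
session = "projects/my-project/agent/sessions/123"  # placeholder session name

session_entity_type = {
    # "fruit" must be the display name of an existing entity type in the agent.
    "name": f"{session}/entityTypes/fruit",
    "entityOverrideMode": "ENTITY_OVERRIDE_MODE_SUPPLEMENT",
    "entities": [
        {"value": "dragon fruit", "synonyms": ["dragon fruit", "pitaya"]},
    ],
}

assert session_entity_type["name"].endswith("/entityTypes/fruit")
```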

queryParams.payload OBJECT

Optional. This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported

queryParams.payload.customKey.value ANY Required

The value for a custom key in the payload object. Arbitrary JSON values are supported

queryParams.geoLocation OBJECT

An object representing a latitude/longitude pair. This is expressed as a pair of doubles representing degrees latitude and degrees longitude. Unless specified otherwise, this must conform to the WGS84 standard. Values must be within normalized ranges

queryParams.geoLocation.longitude NUMBER

The longitude in degrees. It must be in the range [-180.0, +180.0]

queryParams.geoLocation.latitude NUMBER

The latitude in degrees. It must be in the range [-90.0, +90.0]

queryParams.resetContexts BOOLEAN

Optional. Specifies whether to delete all contexts in the current session before the new ones are activated

queryParams.contexts[] OBJECT

Represents a context

queryParams.contexts[].lifespanCount INTEGER

Optional. The number of conversational query requests after which the context expires. If set to 0 (the default) the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries

queryParams.contexts[].name STRING

Required. The unique identifier of the context. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>.

The Context ID is always converted to lowercase, may only contain characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long

queryParams.contexts[].parameters OBJECT

Optional. The collection of parameters associated with this context. Refer to this doc for syntax

queryParams.contexts[].parameters.customKey.value ANY Required

The value for a custom parameter key in this context's parameters collection
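
The context ID rules above (lowercasing, the `[a-zA-Z0-9_-%]` character set, the 250-byte limit) can be checked client-side. `make_context` is a hypothetical helper, shown as a sketch of the documented constraints:

```python
import re

def make_context(session, context_id, lifespan_count=5, parameters=None):
    """Build a context dict (sketch). IDs are lowercased, then checked
    against the documented character set and the 250-byte limit."""
    cid = context_id.lower()
    if not re.fullmatch(r"[a-z0-9_%-]+", cid):
        raise ValueError("context ID may only contain [a-zA-Z0-9_-%]")
    if len(cid.encode("utf-8")) > 250:
        raise ValueError("context ID must be at most 250 bytes")
    context = {"name": f"{session}/contexts/{cid}",
               "lifespanCount": lifespan_count}
    if parameters:
        context["parameters"] = parameters
    return context

ctx = make_context("projects/p/agent/sessions/s", "Booking-Flow",
                   parameters={"date": "tomorrow"})
assert ctx["name"].endswith("/contexts/booking-flow")
```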

queryParams.sentimentAnalysisRequestConfig OBJECT

Configures the types of sentiment analysis to perform

queryParams.sentimentAnalysisRequestConfig.analyzeQueryTextSentiment BOOLEAN

Optional. Instructs the service to perform sentiment analysis on query_text. If not provided, sentiment analysis is not performed on query_text

queryParams.timeZone STRING

Optional. The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used
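
Taken together, the query parameters might be assembled as below. The time zone and coordinates are illustrative values, and the asserts restate the documented normalized ranges:

```python
query_params = {
    "timeZone": "America/New_York",
    "geoLocation": {"latitude": 40.7128, "longitude": -74.0060},
    "sentimentAnalysisRequestConfig": {"analyzeQueryTextSentiment": True},
    "resetContexts": False,
}

geo = query_params["geoLocation"]
assert -90.0 <= geo["latitude"] <= 90.0      # documented latitude range
assert -180.0 <= geo["longitude"] <= 180.0   # documented longitude range
```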

outputAudioConfig OBJECT

Instructs the speech synthesizer on how to generate the output audio content

outputAudioConfig.audioEncoding ENUMERATION

Required. Audio encoding of the synthesized audio content

outputAudioConfig.synthesizeSpeechConfig OBJECT

Configuration of how speech should be synthesized

outputAudioConfig.synthesizeSpeechConfig.speakingRate NUMBER

Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other value outside [0.25, 4.0] will return an error

outputAudioConfig.synthesizeSpeechConfig.effectsProfileId[] STRING

outputAudioConfig.synthesizeSpeechConfig.volumeGainDb NUMBER

Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not to exceed +10 (dB) as there's usually no effective increase in loudness for any value greater than that

outputAudioConfig.synthesizeSpeechConfig.pitch NUMBER

Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch

outputAudioConfig.synthesizeSpeechConfig.voice OBJECT

Description of which voice to use for speech synthesis

outputAudioConfig.synthesizeSpeechConfig.voice.name STRING

Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and gender

outputAudioConfig.synthesizeSpeechConfig.voice.ssmlGender ENUMERATION

Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request

outputAudioConfig.sampleRateHertz INTEGER

Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality)

inputAudio BINARY

Optional. The natural language speech audio to be processed. This field should be populated if and only if query_input is set to an input audio config. A single request can contain up to 1 minute of speech audio data
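
Putting the pieces together, the request body is a single JSON object keyed by the input parameters above (a sketch; no network call is made here, and the REST details are an assumption based on the v2 API's `projects.agent.sessions.detectIntent` method, which is invoked as a POST on the session resource):

```python
import json

request_body = {
    "queryInput": {"text": {"text": "hi there", "languageCode": "en"}},
    "queryParams": {"timeZone": "Europe/Paris"},
    # inputAudio is only populated when queryInput carries an audioConfig;
    # JSON transports the raw bytes as a base64-encoded string.
}

payload = json.dumps(request_body)
assert "queryInput" in json.loads(payload)
```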

Output

This building block provides 109 output parameters

  = Parameter name
  = Format

outputAudioConfig OBJECT

Instructs the speech synthesizer on how to generate the output audio content

outputAudioConfig.audioEncoding ENUMERATION

Required. Audio encoding of the synthesized audio content

outputAudioConfig.synthesizeSpeechConfig OBJECT

Configuration of how speech should be synthesized

outputAudioConfig.synthesizeSpeechConfig.speakingRate NUMBER

Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other value outside [0.25, 4.0] will return an error

outputAudioConfig.synthesizeSpeechConfig.effectsProfileId[] STRING

outputAudioConfig.synthesizeSpeechConfig.volumeGainDb NUMBER

Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not to exceed +10 (dB) as there's usually no effective increase in loudness for any value greater than that

outputAudioConfig.synthesizeSpeechConfig.pitch NUMBER

Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch

outputAudioConfig.synthesizeSpeechConfig.voice OBJECT

Description of which voice to use for speech synthesis

outputAudioConfig.synthesizeSpeechConfig.voice.name STRING

Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and gender

outputAudioConfig.synthesizeSpeechConfig.voice.ssmlGender ENUMERATION

Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request

outputAudioConfig.sampleRateHertz INTEGER

Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality)

queryResult OBJECT

Represents the result of a conversational query or event processing

queryResult.webhookSource STRING

If the query was fulfilled by a webhook call, this field is set to the value of the source field returned in the webhook response

queryResult.fulfillmentText STRING

The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, fulfillment_messages should be preferred

queryResult.parameters OBJECT

The collection of extracted parameters

queryResult.parameters.customKey.value ANY

The collection of extracted parameters

queryResult.sentimentAnalysisResult OBJECT

The result of sentiment analysis as configured by sentiment_analysis_request_config

queryResult.sentimentAnalysisResult.queryTextSentiment OBJECT

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text

queryResult.sentimentAnalysisResult.queryTextSentiment.score FLOAT

Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment)

queryResult.sentimentAnalysisResult.queryTextSentiment.magnitude FLOAT

A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative)

queryResult.intentDetectionConfidence FLOAT

The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). If there are multiple knowledge_answers messages, this value is set to the greatest knowledgeAnswers.match_confidence value in the list

queryResult.allRequiredParamsPresent BOOLEAN

This field is set to:

  • false if the matched intent has required parameters and not all of the required parameter values have been collected.
  • true if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
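
A client typically branches on this flag to drive slot-filling. The sketch below uses a hypothetical helper name; "prompt" means replying with `fulfillmentText`, which during slot-filling usually asks the user for a missing required parameter:

```python
def next_action(query_result):
    """Decide how to proceed after detectIntent (sketch).

    Returns "prompt" while required parameters are still being collected,
    and "fulfill" once the matched intent is ready to act on.
    """
    if query_result.get("allRequiredParamsPresent"):
        return "fulfill"
    return "prompt"

assert next_action({"allRequiredParamsPresent": True}) == "fulfill"
assert next_action({"allRequiredParamsPresent": False}) == "prompt"
```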

queryResult.speechRecognitionConfidence FLOAT

The Speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is not guaranteed to be accurate or set. In particular this field isn't set for StreamingDetectIntent since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult

queryResult.queryText STRING

The original conversational query text:

  • If natural language text was provided as input, query_text contains a copy of the input.
  • If natural language speech audio was provided as input, query_text contains the speech recognition result. If the speech recognizer produced multiple alternatives, a particular one is picked.
  • If automatic spell correction is enabled, query_text will contain the corrected user input.

queryResult.diagnosticInfo OBJECT

The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice

queryResult.diagnosticInfo.customKey.value ANY

The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice

queryResult.outputContexts[] OBJECT

Represents a context

queryResult.outputContexts[].lifespanCount INTEGER

Optional. The number of conversational query requests after which the context expires. If set to 0 (the default) the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries

queryResult.outputContexts[].name STRING

Required. The unique identifier of the context. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>.

The Context ID is always converted to lowercase, may only contain characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long

queryResult.outputContexts[].parameters OBJECT

Optional. The collection of parameters associated with this context. Refer to this doc for syntax

queryResult.outputContexts[].parameters.customKey.value ANY

Optional. The collection of parameters associated with this context. Refer to this doc for syntax

queryResult.intent OBJECT

Represents an intent. Intents convert a number of user expressions or patterns into an action. An action is an extraction of a user command or sentence semantics

queryResult.intent.outputContexts[] OBJECT

Represents a context

queryResult.intent.outputContexts[].lifespanCount INTEGER

Optional. The number of conversational query requests after which the context expires. If set to 0 (the default) the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries

queryResult.intent.outputContexts[].name STRING

Required. The unique identifier of the context. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>.

The Context ID is always converted to lowercase, may only contain characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long

queryResult.intent.outputContexts[].parameters OBJECT

Optional. The collection of parameters associated with this context. Refer to this doc for syntax

queryResult.intent.outputContexts[].parameters.customKey.value ANY

Optional. The collection of parameters associated with this context. Refer to this doc for syntax

queryResult.intent.defaultResponsePlatforms[] ENUMERATION

queryResult.intent.action STRING

Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces

queryResult.intent.name STRING

The unique identifier of this intent. Required for Intents.UpdateIntent and Intents.BatchUpdateIntents methods. Format: projects/<Project ID>/agent/intents/<Intent ID>

queryResult.intent.messages[] OBJECT

Corresponds to the Response field in the Dialogflow console

queryResult.intent.messages[].payload OBJECT

Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform

queryResult.intent.messages[].payload.customKey.value ANY

Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform

queryResult.intent.messages[].platform ENUMERATION

Optional. The platform that this message is intended for

queryResult.intent.inputContextNames[] STRING

queryResult.intent.webhookState ENUMERATION

Optional. Indicates whether webhooks are enabled for the intent

queryResult.intent.followupIntentInfo[] OBJECT

Represents a single followup intent in the chain

queryResult.intent.followupIntentInfo[].followupIntentName STRING

The unique identifier of the followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>

queryResult.intent.followupIntentInfo[].parentFollowupIntentName STRING

The unique identifier of the followup intent's parent. Format: projects/<Project ID>/agent/intents/<Intent ID>

queryResult.intent.displayName STRING

Required. The name of this intent

queryResult.intent.rootFollowupIntentName STRING

Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output.

Format: projects/<Project ID>/agent/intents/<Intent ID>

queryResult.intent.mlDisabled BOOLEAN

Optional. Indicates whether Machine Learning is disabled for the intent. Note: If the ml_disabled setting is set to true, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off

queryResult.intent.isFallback BOOLEAN

Optional. Indicates whether this is a fallback intent

queryResult.intent.parameters[] OBJECT

Represents intent parameters

queryResult.intent.parameters[].entityTypeDisplayName STRING

Optional. The name of the entity type, prefixed with @, that describes values of the parameter. If the parameter is required, this must be provided

queryResult.intent.parameters[].prompts[] STRING

queryResult.intent.parameters[].defaultValue STRING

Optional. The default value to use when the value yields an empty result. Default values can be extracted from contexts by using the following syntax: #context_name.parameter_name

queryResult.intent.parameters[].mandatory BOOLEAN

Optional. Indicates whether the parameter is required. That is, whether the intent cannot be completed without collecting the parameter value

queryResult.intent.parameters[].name STRING

The unique identifier of this parameter

queryResult.intent.parameters[].isList BOOLEAN

Optional. Indicates whether the parameter represents a list of values

queryResult.intent.parameters[].value STRING

Optional. The definition of the parameter value. It can be:

  • a constant string,
  • a parameter value defined as $parameter_name,
  • an original parameter value defined as $parameter_name.original,
  • a parameter value from some context defined as #context_name.parameter_name.

queryResult.intent.parameters[].displayName STRING

Required. The name of the parameter

queryResult.intent.resetContexts BOOLEAN

Optional. Indicates whether to delete all contexts in the current session when this intent is matched

queryResult.intent.trainingPhrases[] OBJECT

Represents an example that the agent is trained on

queryResult.intent.trainingPhrases[].timesAddedCount INTEGER

Optional. Indicates how many times this example was added to the intent. Each time a developer adds an existing sample by editing an intent or training, this counter is increased

queryResult.intent.trainingPhrases[].type ENUMERATION

Required. The type of the training phrase

queryResult.intent.trainingPhrases[].name STRING

Output only. The unique identifier of this training phrase

queryResult.intent.parentFollowupIntentName STRING

Read-only after creation. The unique identifier of the parent intent in the chain of followup intents. You can set this field when creating an intent, for example with CreateIntent or BatchUpdateIntents, in order to make this intent a followup intent.

It identifies the parent followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>

queryResult.intent.events[] STRING

queryResult.intent.priority INTEGER

Optional. The priority of this intent. Higher numbers represent higher priorities. If this is zero or unspecified, we use the default priority 500000.

Negative numbers mean that the intent is disabled

queryResult.languageCode STRING

The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes

queryResult.webhookPayload OBJECT

If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response

queryResult.webhookPayload.customKey.value ANY

If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response

queryResult.fulfillmentMessages[] OBJECT

Corresponds to the Response field in the Dialogflow console

queryResult.fulfillmentMessages[].listSelect OBJECT

The card for presenting a list of options to select from

queryResult.fulfillmentMessages[].listSelect.title STRING

Optional. The overall title of the list

queryResult.fulfillmentMessages[].quickReplies OBJECT

The quick replies response message

queryResult.fulfillmentMessages[].quickReplies.quickReplies[] STRING

queryResult.fulfillmentMessages[].quickReplies.title STRING

Optional. The title of the collection of quick replies

queryResult.fulfillmentMessages[].card OBJECT

The card response message

queryResult.fulfillmentMessages[].card.title STRING

Optional. The title of the card

queryResult.fulfillmentMessages[].card.subtitle STRING

Optional. The subtitle of the card

queryResult.fulfillmentMessages[].card.imageUri STRING

Optional. The public URI to an image file for the card

queryResult.fulfillmentMessages[].basicCard OBJECT

The basic card message. Useful for displaying information

queryResult.fulfillmentMessages[].basicCard.title STRING

Optional. The title of the card

queryResult.fulfillmentMessages[].basicCard.formattedText STRING

Required, unless image is present. The body text of the card

queryResult.fulfillmentMessages[].basicCard.subtitle STRING

Optional. The subtitle of the card

queryResult.fulfillmentMessages[].carouselSelect OBJECT

The card for presenting a carousel of options to select from

queryResult.fulfillmentMessages[].linkOutSuggestion OBJECT

The suggestion chip message that allows the user to jump out to the app or website associated with this agent

queryResult.fulfillmentMessages[].linkOutSuggestion.destinationName STRING

Required. The name of the app or site this chip is linking to

queryResult.fulfillmentMessages[].linkOutSuggestion.uri STRING

Required. The URI of the app or site to open when the user taps the suggestion chip

queryResult.fulfillmentMessages[].simpleResponses OBJECT

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse

queryResult.fulfillmentMessages[].image OBJECT

The image response message

queryResult.fulfillmentMessages[].image.imageUri STRING

Optional. The public URI to an image file

queryResult.fulfillmentMessages[].image.accessibilityText STRING

Optional. A text description of the image to be used for accessibility, e.g., screen readers

queryResult.fulfillmentMessages[].payload OBJECT

Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform

queryResult.fulfillmentMessages[].payload.customKey.value ANY

Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform

queryResult.fulfillmentMessages[].text OBJECT

The text response message

queryResult.fulfillmentMessages[].text.text[] STRING

queryResult.fulfillmentMessages[].platform ENUMERATION

Optional. The platform that this message is intended for

queryResult.fulfillmentMessages[].suggestions OBJECT

The collection of suggestions

queryResult.action STRING

The action name from the matched intent

outputAudio BINARY

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty
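
Because the response is JSON, this BINARY field arrives as a base64-encoded string and must be decoded before saving or playback. A sketch with stand-in bytes (real synthesized audio replaces `b"RIFF....WAVE"`):

```python
import base64

# Stand-in for the JSON response; real audio bytes replace b"RIFF....WAVE".
response = {"outputAudio": base64.b64encode(b"RIFF....WAVE").decode("ascii")}

audio_bytes = base64.b64decode(response["outputAudio"])
assert audio_bytes.startswith(b"RIFF")  # e.g. a LINEAR16 WAV container
```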

webhookStatus OBJECT

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details.

You can find out more about this error model and how to work with it in the API Design Guide

webhookStatus.code INTEGER

The status code, which should be an enum value of google.rpc.Code

webhookStatus.message STRING

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client

webhookStatus.details[] OBJECT

webhookStatus.details[].customKey.value ANY
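
A failing webhook can be detected by checking the status code against `google.rpc.Code`, where `OK` is 0. A sketch with a hypothetical helper name (14 is the `UNAVAILABLE` code, used here purely as an example):

```python
def webhook_failed(webhook_status):
    # google.rpc.Code.OK == 0; an absent or zero code means success.
    return webhook_status.get("code", 0) != 0

assert not webhook_failed({"code": 0, "message": ""})
assert webhook_failed({"code": 14, "message": "webhook call failed"})
```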

responseId STRING

The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues