Using Dialogflow for NLP in Smartcalls


Natural language processing (NLP) in Smartcalls can now be implemented with Google Dialogflow. The Dialogflow Connector streams media between Smartcalls and Dialogflow in real time and keeps latency as low as possible, so your voice bot responds quickly and feels as close to a live person as possible. On top of that, the connector is very easy to use.

Setting up an agent

  1. Speech synthesis is disabled by default for a Dialogflow agent, so you need to enable it manually.
  2. Open the Speech tab in the agent settings to set up the speech synthesis options.

3. Enable Automatic Text to Speech by clicking the toggle, choose MP3 or OGG in the Output Audio Encoding dropdown (IMPORTANT: only MP3 and OGG are currently supported), and pick one of the available voices – we highly recommend the WaveNet-powered voices, since they sound much better than the standard ones. Save the settings by clicking the Save button in the top right corner.
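If you also work with the agent programmatically, the same speech settings can be passed per request via the Dialogflow v2 API. The following is a minimal sketch using the official google-cloud-dialogflow Python client; the project ID, session ID, language code and voice name are placeholders, not values from this article:

    from google.cloud import dialogflow_v2 as dialogflow

    PROJECT_ID = "my-gcp-project"  # placeholder
    SESSION_ID = "test-session"    # placeholder

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(PROJECT_ID, SESSION_ID)

    # Ask Dialogflow to synthesize the reply as MP3 with a WaveNet voice,
    # mirroring the Automatic Text to Speech settings from the Speech tab.
    output_audio_config = dialogflow.OutputAudioConfig(
        audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_MP3,
        synthesize_speech_config=dialogflow.SynthesizeSpeechConfig(
            voice=dialogflow.VoiceSelectionParams(name="en-US-Wavenet-D")
        ),
    )

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text="Hello", language_code="en-US")
    )

    response = session_client.detect_intent(
        request={
            "session": session,
            "query_input": query_input,
            "output_audio_config": output_audio_config,
        }
    )

    # response.output_audio holds the MP3 bytes of the agent's spoken reply.
    with open("reply.mp3", "wb") as f:
        f.write(response.output_audio)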

Adding an agent

  1. After logging in to Smartcalls, open the Integrations section in the main menu and select the Dialogflow tab.

2. Click the Add Agent button and choose the service account JSON file of your agent that you previously created and downloaded from Google Cloud (a quick way to check the key locally is shown after this list).

3. If everything went well, you will see the uploaded agent in the list.

4. Now the agent can be used in both outbound and inbound call scenarios.
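Before uploading the key, you can sanity-check it locally. The sketch below assumes the downloaded file is saved as agent-sa.json (a hypothetical filename) and that the google-auth and google-cloud-dialogflow Python packages are installed:

    import json

    from google.oauth2 import service_account
    from google.cloud import dialogflow_v2 as dialogflow

    KEY_FILE = "agent-sa.json"  # hypothetical filename for the downloaded key

    # The file must be a service account key, not an OAuth client secret.
    with open(KEY_FILE) as f:
        key = json.load(f)
    assert key.get("type") == "service_account", "Not a service account key"
    print("Project:", key["project_id"], "| Account:", key["client_email"])

    # Optionally confirm the key can actually reach the Dialogflow API
    # (the account needs Dialogflow permissions for this call to succeed).
    credentials = service_account.Credentials.from_service_account_file(KEY_FILE)
    client = dialogflow.AgentsClient(credentials=credentials)
    agent = client.get_agent(parent=f"projects/{key['project_id']}")
    print("Agent found:", agent.display_name)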

Using an agent in call scenario

  1. Open the SC scenario editor.
  2. In the left panel with building blocks you will see the Dialogflow Connector block.

3. Drag and drop the block onto the editor area.

4. Click on the block to see its settings.

5. In the Select Agent dropdown choose the agent you’ve uploaded previously.

6. There are 5 checkboxes available:

a) Execute sendQuery after connection to the agent – sends a query with the specified parameters to the agent right after the connection is established. This lets you push the agent so that it starts the conversation first (the equivalent Dialogflow request is sketched after this list).

b) Process Synthesize Speech Response – if a Telephony response of the “Synthesize Speech” type is specified for the intent in the Responses section, you can use one of the built-in Smartcalls synthesis options to say the text returned by the agent.

On the Dialogflow Agent side it looks as follows.

c) Process Audio Response – if checked, Smartcalls will play the specified audio file when a Telephony “Play Audio” response is specified for the intent in the Responses section.

d) Process Transfer Call Response – if checked, the block gets an additional output.

The specified variable (DF_TRANSFER by default; you can set your own name) stores the number the call should be transferred to. You can then use this variable in the Call forwarding block.

e) Process Dialogflow Response Parameters – every time the Dialogflow agent returns a result, the extracted parameters are stored in the specified variable (DF_PARAMS by default; you can set your own name) as a JSON object. If the variable already contains some parameters, they are merged with the new ones (a small merge example is shown after this list).

  7. The Dialogflow Connector’s out port is triggered when the caller/callee reaches an intent that is marked as the end of conversation in the Responses section.
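Behind the Execute sendQuery checkbox is an ordinary Dialogflow query. The usual way to make an agent speak first is to send an event instead of user text, for example the event bound to the agent’s Default Welcome Intent. A minimal sketch with the Dialogflow v2 Python client; the project ID, session ID, language code and event name are placeholders and must match your own agent’s configuration:

    from google.cloud import dialogflow_v2 as dialogflow

    PROJECT_ID = "my-gcp-project"  # placeholder
    SESSION_ID = "test-session"    # placeholder

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(PROJECT_ID, SESSION_ID)

    # Trigger the welcome intent without any caller input, so the bot
    # opens the conversation. "Welcome" must match the event name set
    # on the intent in the Dialogflow console.
    query_input = dialogflow.QueryInput(
        event=dialogflow.EventInput(name="Welcome", language_code="en-US")
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    print(response.query_result.fulfillment_text)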
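The merging behaviour described for Process Dialogflow Response Parameters can be pictured as a shallow JSON merge; the assumption that the newest values overwrite existing keys is ours, the article only says the parameters are merged. A small illustration using the default variable name:

    import json

    # Parameters already accumulated in DF_PARAMS after earlier turns.
    df_params = {"name": "Alice", "city": "Berlin"}

    # Parameters extracted by the agent on the latest turn.
    new_params = {"city": "Munich", "date": "2020-03-01"}

    # Assumed behaviour: new values replace existing keys, the rest is kept.
    df_params.update(new_params)
    print(json.dumps(df_params))
    # {"name": "Alice", "city": "Munich", "date": "2020-03-01"}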
