Announcing Worldwide Availability of Inference Studio 6.1

Extending our Support for Google Dialogflow

There has been a wave of recent partnership announcements supporting Google’s Contact Center AI (CCAI). CCAI is being adopted by leading contact center software platforms, system integrators and software developers at an increasingly rapid pace. According to analyst Dave Michels, Google now claims to have 74% of the contact center market as partners and, at last week’s Google NEXT event, Google Cloud Product Manager Shantanu Misra explained that developer adoption for Google’s Dialogflow has grown by close to 600% in just two years. 

Natural Language Processing and Conversational AI are driving the next wave of call center innovation. At Inference, we’re committed to delivering the tools necessary to take full advantage of these emerging technologies. That’s why we released support for Dialogflow in Studio 6.0 just a few short weeks ago, and it’s why we’re excited to unveil Studio 6.1, which brings to market powerful new features to help businesses harness the power of Google’s Conversational AI solutions.

Making Google Speech-to-Text and Dialogflow Even More Powerful

With Studio 6.0, we introduced support for real-time streaming of speech to API providers like Google. We also deployed a Dialogflow node that enables our partners and customers to call Google Dialogflow agents directly from within Studio.

For those who are new to Inference Studio, nodes are elements that you can drag and drop onto a canvas to design the flow of a dialog. There are dozens of nodes you can use to build voice, Web, SMS and email interactions. For example, the Prompt node plays pre-recorded prompts or Text-to-Speech to the user, and the Standard Form node collects information submitted by a user via touch-tone input or a spoken utterance. With the 6.0 release, we introduced a Dialogflow node that allows you to connect directly to a Google Dialogflow agent.

With 6.1, we’ve introduced a new Open Form node which offers three powerful new capabilities:

1. The Open Form node extends closed grammar functionality to support foreign languages.

2. It improves the Dialogflow integration by adding a streaming interface, reducing latency and making interactions more natural. Previously, you had to use the Cloud Speech-to-Text node to get a raw transcription and then pass that transcription to your Dialogflow node; with the Open Form node, this is now a single-step process.

3. It lets you use Dialogflow agents you’ve built yourself or pre-built agents from the Inference library.
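To give a sense of why the single-step flow matters: under the hood, sending audio straight to Dialogflow corresponds to a single detectIntent request that carries both the audio configuration and the audio itself, rather than one Speech-to-Text call followed by a second text-based Dialogflow call. Below is a rough sketch of what such a request body looks like; the audio bytes, sample rate and language code are illustrative assumptions, not Studio internals.

```python
import base64
import json

# Hypothetical raw audio captured from the caller (placeholder bytes).
audio_bytes = b"\x00\x01\x02\x03"

# Dialogflow v2 detectIntent request body: the audio goes directly into
# the request, so no separate Speech-to-Text step is needed.
request_body = {
    "queryInput": {
        "audioConfig": {
            "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
            "sampleRateHertz": 8000,   # typical telephony sample rate
            "languageCode": "en-US",
        }
    },
    "inputAudio": base64.b64encode(audio_bytes).decode("ascii"),
}

print(json.dumps(request_body, indent=2))
```

The two-step approach would instead POST the audio to the Speech-to-Text API, extract the transcript, and send that text in a separate detectIntent request – the extra round trip is where the added latency came from.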

Watch this brief demo to see how the Open Form node enables you to use agents you create in Dialogflow.

Take a look at this tutorial to learn how to use Open Forms.

Improved Accuracy for Google Speech-to-Text with Phrase Hints

Google Speech-to-Text now supports open grammars, which means that virtually anything spoken can be transcribed into text. Previously, voice user interfaces used closed grammars: developers had to predict what a caller might say and then build a set of domain-specific grammars to match variations of those requests – this is how our Standard Form node has traditionally been used. In most cases today, building closed grammars is no longer necessary because Google’s speech engine has been trained to transcribe almost anything a caller might say.

Keep in mind, however, that Google’s models have been trained on data from millions of customer interactions – not on your domain-specific application. For those use cases you can either use closed grammars or Phrase Hints: a list of phrases that act as "hints" to boost the probability that particular words or phrases will be recognized.

We’ve now added Phrase Hints to our Cloud Speech-to-Text node, greatly improving the likelihood that your proprietary vocabulary is recognized reliably.

It’s Now Easier to Give your Virtual Agents a More Human Voice

With Studio 6.0 we introduced support for Speech Synthesis Markup Language (SSML), which allows you to customize the way your Virtual Agents speak – for example, by controlling the rate, pitch, volume or emphasis of your text-to-speech. With Studio 6.1 we’ve added an SSML Editor: a graphical interface that makes SSML easier to write because standard options are at your fingertips, and you can now preview your work without having to call your virtual agent to hear the result.
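As a simple illustration of the kind of markup the editor helps you compose (the wording and attribute values here are arbitrary examples), the snippet below builds an SSML string with a slowed prosody rate and an emphasized phrase, then sanity-checks that it is well-formed using Python’s standard library:

```python
import xml.etree.ElementTree as ET

# Illustrative SSML: slow down one phrase and emphasize another.
ssml = (
    "<speak>"
    "Your balance is "
    '<prosody rate="slow">forty two dollars</prosody>. '
    '<emphasis level="strong">Thank you</emphasis> for calling.'
    "</speak>"
)

# Quick well-formedness check before handing the markup to a TTS engine.
root = ET.fromstring(ssml)
print(root.tag)  # speak
```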

Send Information from Virtual Agent to Live Agent

With Studio 6.1 we’ve added a new Screen Pop node that allows you to forward relevant customer information to a call center agent or sales representative when a call is transferred from a Virtual Agent. Because the screen pop displays relevant details from the previous conversation, your live agent is fully prepared to take the call.

We now offer three ways to pass the data through a URL. These options are explained in detail here.
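One common pattern for URL-based data passing – shown here generically, not as Studio’s exact mechanism – is to append the details the virtual agent collected as query parameters, so the agent desktop can read them back out when the page loads. The field names and endpoint below are hypothetical:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical details gathered by the virtual agent before transfer.
caller_details = {
    "caller_name": "Jane Doe",
    "account_id": "A-1042",
    "intent": "billing question",
}

# Build a screen-pop URL by encoding the details as query parameters.
base_url = "https://crm.example.com/screenpop"  # illustrative endpoint
screen_pop_url = base_url + "?" + urlencode(caller_details)
print(screen_pop_url)

# The receiving page can decode the same parameters back into a dict.
received = parse_qs(urlparse(screen_pop_url).query)
```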

Virtual Agents Now Support Refunds and Pre-Authorized Payments 

Our Virtual Agents are able to collect PCI-compliant payments from customers – either as a standalone self-service application or to assist a live agent. With Studio 6.1, this just became even more powerful: Virtual Agents can now authorize and capture payments as well as process refunds.

For more information about building a PCI-Compliant Virtual Agent, take a look at this tutorial.

Learn More

·     You can learn more about our support for Google Speech and Dialogflow here.

·     If you’d like to try Inference for yourself, sign up for a trial account.

·     We’d also be happy to give you a personal demo.

Callan Schebella