Inference Virtual Agents Have a New Set of Skills to Elevate Self Service
Today we announced the latest release of Inference Studio, Studio 6.3, which includes a suite of new features to help businesses enhance their self-service applications. The new features promise benefits such as improved information sharing between virtual agents and live agents, greater speech recognition accuracy and upgraded answering machine detection (AMD) capabilities. Here's a look at what's new.
For some time, Inference has offered a queue callback feature called QforMe, which has been especially valuable to our partners that host and deploy Cisco BroadWorks. QforMe extends BroadWorks UC and Contact Center capabilities by allowing callers to request a callback while holding their place in the queue, a feature that has become increasingly essential for supporting customers. With Studio 6.3 we have released a new version of QforMe. Callers can now speak to the virtual agent, request a callback and record a message that will be relayed to the live agent along with the call. The virtual agent can also perform functions such as a CRM lookup (based on the caller’s phone number) to retrieve and pass useful customer data to the live agent, including the caller’s name, location, loyalty status and special preferences. This information prepares the live agent for a more efficient and personalized callback, reducing handle time and improving the likelihood that the customer’s problem will be resolved quickly.
The new version of QforMe complements our portfolio of features that can also be used to extend Cisco BroadWorks, including:
· Intelligent Network Routing
· Natural Language Call Steering
· Virtual Agent Dialing
· Predictive, Progressive and Power Dialing
· PCI and HIPAA Compliance
· Many additional tasks that can be used to offer service automation
With Studio 6.3, we continue to enhance our support for Google’s conversational AI APIs, which were released earlier this year. Recently Google announced many new advancements to the technologies that underpin their Contact Center AI. I described those in a previous blog post.
The advancements have enabled us to improve our Natural Language Understanding functionality. For example, the Cloud Speech-to-Text node employs an enhanced phone model when using US English as the recognition language. This model improves transcription accuracy over the default models by up to 40 percent. Additionally, Inference has increased the maximum number of phrase hints that can be passed from 500 to 5,000 and has included support for Dialogflow’s auto speech adaptation feature, which improves the speech recognition accuracy of virtual agents.
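To make the configuration above concrete, here is a minimal sketch of a Cloud Speech-to-Text request body that opts in to the enhanced phone model and supplies phrase hints. The field names follow Google's public Speech-to-Text v1 REST API; the phrases and audio URI are illustrative placeholders, not values from Studio itself.

```python
# Hypothetical sketch: a Speech-to-Text v1 REST request body using the
# enhanced phone_call model and phrase hints. Placeholder values throughout.
import json

request_body = {
    "config": {
        "languageCode": "en-US",
        # Opt in to the enhanced model trained on phone-call audio.
        "useEnhanced": True,
        "model": "phone_call",
        # Phrase hints bias recognition toward expected terms
        # (Studio 6.3 raises the limit it passes from 500 to 5,000).
        "speechContexts": [
            {"phrases": ["loyalty status", "callback", "QforMe"]}
        ],
    },
    "audio": {"uri": "gs://example-bucket/caller-audio.wav"},  # placeholder
}

print(json.dumps(request_body, indent=2))
```

Selecting `"model": "phone_call"` together with `"useEnhanced": true` is what activates the enhanced model; without the enhanced flag, the standard phone model is used instead.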
We have also updated AMD handling in Studio 6.3 by splitting the AMD handler into two separate event handlers: Machine detection and Beep detection. The Machine detection event handler triggers within three seconds of the call being answered, giving users the option to branch to a different call flow when a machine is detected. The Beep detection event handler is ideal for leaving voicemail messages, as it is triggered after the voicemail beep signal is detected.
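The split described above can be pictured as two independent callbacks keyed by event type. This is an illustrative sketch only: Studio's AMD handlers are configured in its visual call-flow editor, not in code, and the event names and dispatcher here are hypothetical.

```python
# Hypothetical dispatcher mirroring the two-handler AMD split: one handler
# fires shortly after answer when a machine is detected, the other after
# the voicemail beep, so each can route the call differently.

def on_machine_detected(call):
    # Branch to an alternate call flow as soon as a machine answers.
    call["flow"] = "machine_flow"

def on_beep_detected(call):
    # The beep marks the safe point to start playing a voicemail message.
    call["flow"] = "leave_voicemail"

HANDLERS = {
    "machine_detected": on_machine_detected,  # fires within ~3 s of answer
    "beep_detected": on_beep_detected,        # fires after the beep signal
}

def dispatch(event, call):
    handler = HANDLERS.get(event)
    if handler:
        handler(call)
    return call

call = {"flow": "default"}
dispatch("machine_detected", call)
print(call["flow"])  # machine_flow
```

Keeping the two events separate means a flow can, for example, hang up immediately on machine detection for one campaign while waiting for the beep to leave a message in another.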
Additional features and improvements in Studio 6.3 include:
· Service Providers can clear stale calls from the Studio interface based on a dynamically set time.
· Send SMS, Reply SMS and Conversation nodes will display the number of SMS blocks that will be consumed based on the text.
· Data Stores now support a retention policy. You can specify the number of days, weeks or months for which you want to retain the data.
· Service Providers can self-provision QforMe waiters.
· Workflow execution has been optimized to improve performance.
· Messaging tasks now include infinite-loop protection.
· Inbound and Outbound SMS logs have unique identifiers.
The Studio 6.3 enhancements will be available in late October to Inference’s current partners as well as to enterprises using Cisco’s on-premises solutions. For more information, please visit the resource center.
To learn more, I encourage you to attend our upcoming webinar where our Chief Product Officer will demonstrate the new features and answer questions.