Today, VoiceLabs is announcing support for Microsoft Cortana. VoiceLabs is fortunate to have an amazing customer community, and to our delight our first Cortana integration happened without us knowing about it!
Tim McElreath is a Senior Technical Architect at Scripps Networks, and he and his team have built one of the most innovative VoiceFirst applications in the world: The Food Network Voice Assistant for Amazon Alexa, Google Home, and Microsoft Cortana.
One of the coolest aspects of this app is that it takes advantage of your TV to deliver value. For example, while watching Barefoot Contessa, you can ask Alexa to list the ingredients for the recipe she is cooking. You can also ask Alexa what is on Food Network right now.
It is one of the first compelling Voice + Visual use cases, and the app is extremely popular because of it.
Here’s how to get Actionable Analytics for your Cortana Skill:
- Initialize as usual:
const VoiceLabs = require('voicelabs')('xxxxxxxxxxxxxxxxxxxxx');
- We use API.ai as our NLP service, so we can leverage the same language model we built for Google Home/Assistant. Before calling API.ai, we inspect the Bot Framework request to determine whether we received a user utterance or one of the built-in intents (like Microsoft.Launch, similar to Amazon's built-ins). If it is a built-in, we resolve it to an utterance ('launch') that maps to an intent in API.ai.
- After getting the response from our service tier, we format the message object and then determine if VoiceLabs is enabled (VOICELABS_ON is an environment variable managed in the Lambda config). We then create the session and metadata objects using the Bot Framework request as well as the response from API.ai, which has the intent name and parameters.
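The flow in the two steps above can be sketched roughly as follows. This is an illustrative outline, not the actual Food Network source: the function names, the activity shape, and the built-in intent map are assumptions, and the API.ai and VoiceLabs calls are left as comments so the mapping logic stands on its own.

```javascript
// Built-in Bot Framework intents we resolve to plain utterances before
// sending them to API.ai, so one language model serves every platform.
const BUILT_IN_INTENTS = {
  'Microsoft.Launch': 'launch'
  // ...other built-ins would map here
};

// Turn an incoming Bot Framework activity into the utterance sent to API.ai.
function resolveUtterance(activity) {
  if (activity.type === 'intent' && BUILT_IN_INTENTS[activity.name]) {
    return BUILT_IN_INTENTS[activity.name];
  }
  return activity.text; // a normal user utterance passes through unchanged
}

// Analytics are gated by an environment variable set in the Lambda config.
function voiceLabsEnabled(env) {
  return env.VOICELABS_ON === 'true';
}

// After the service tier (API.ai) responds, format the reply and,
// if analytics are on, track the turn.
function handleTurn(activity, apiAiResponse, env) {
  const message = { speak: apiAiResponse.fulfillment }; // format the reply
  if (voiceLabsEnabled(env)) {
    // Hypothetical shape: build the session and metadata objects from the
    // Bot Framework request plus the API.ai intent name and parameters, e.g.
    // VoiceLabs.track(session, apiAiResponse.intentName,
    //                 apiAiResponse.parameters, message.speak);
  }
  return message;
}
```

Gating on `VOICELABS_ON` keeps analytics easy to switch off per environment without a code change, since the variable lives in the Lambda configuration.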
By doing this, we can track events and start seeing the same apples-to-apples data visualizations we see for our Amazon Alexa and Google Assistant apps. We are already seeing some fascinating differences across Amazon, Google, and Microsoft, and thanks to VoiceLabs we can now make smarter decisions around adding features, fixing conversational ‘dead ends,’ and evolving our natural language model.