Alexa Skill - Translation and Polly Voices (Lexicons)

By Sanjay Rohila, Dec 11, 2018

Managing translation in Alexa seems a bit easier than managing it in Lex. Alexa has language settings where we can have the same model in different languages: a different wake word and different utterances for each language. And the best part is that, based on the active/selected language, Alexa automatically selects the Polly voice.

Language Settings and Wake Word

From the language selection settings we can create a separate model for each language, each with its own wake word. To clone a model from one language to another, just use the JSON Editor and change the utterances to match the target language.

(Screenshots: Alexa language selection settings; Spanish and English wake words for the skill.)

For example, I have an intent called book_car, with English utterances such as:

"I want to book a car"
"book a car"

And in the Spanish model I have different utterances for the same intent (see the JSON Editor sketch after this list):

"Quiero reservar un coche para"
"reservar un coche"

Alexa includes a locale parameter in the request which tells us which language the request is coming from, so we can translate our response based on that, and the Polly voice will automatically handle the accent and pronunciation. As illustrated in the blog about a single controller for Alexa and Lex, when we are using one controller it might be a good idea to leave translation to the big translation players and manage content in English only. So we keep only English responses in our Lambda and do the translation at runtime based on the locale in the request.
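Here is a minimal sketch of that idea as a plain Lambda handler (no ASK SDK), using Amazon Translate via boto3 for the runtime translation; the RESPONSES table and intent name are illustrative:

    import boto3

    translate = boto3.client('translate')

    # Content is managed in English only.
    RESPONSES = {
        'book_car': 'Sure, how many days do you want to rent it for?',
    }

    def lambda_handler(event, context):
        # Every Alexa request carries the skill locale, e.g. 'en-US' or 'es-ES'.
        locale = event['request']['locale']
        # Assuming an IntentRequest; a real handler would branch on request type.
        intent = event['request']['intent']['name']
        text = RESPONSES[intent]

        # Translate at runtime for non-English locales; Polly then handles
        # accent and pronunciation for the selected language automatically.
        if not locale.startswith('en'):
            result = translate.translate_text(
                Text=text,
                SourceLanguageCode='en',
                TargetLanguageCode=locale.split('-')[0],  # 'es-ES' -> 'es'
            )
            text = result['TranslatedText']

        return {
            'version': '1.0',
            'response': {
                'outputSpeech': {'type': 'PlainText', 'text': text},
                'shouldEndSession': True,
            },
        }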

Controlling Polly Voice in Response

The above is an example of Alexa handling the Polly voice for us, as per the language selected in the skill. But we can also control the voice ourselves by using SSML in outputSpeech. Let's say we have an introduction tour in our skill. Instead of explaining it in one flat voice, we can present a conversation-style response, which makes for much better UX. Below is an example use case:

User: "Alexa, Ask my assistant how does it work?"
Alexa: "Well, you can rent a car with my assistant. This is how you can do so:
<Brian>: 'Alexa, launch my assistant'.
<Amy>: 'Hi, welcome to my assistant. How can I help you?'
<Brian>: 'Rent a car'.
<Amy>: 'Sure, How many days you want to rent it for?'
<Brian>: 'Today only'.
<Amy>: 'Great!, In a bit rental service will call you and get it delivered to you.'
"

To achieve something like this, we have to use an SSML response with explicit voice tags:
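Something along these lines, using the SSML voice tag with the Polly voices Brian and Amy (a sketch of the idea rather than the skill's exact response):

    <speak>
        Well, you can rent a car with my assistant. This is how you can do so:
        <voice name="Brian">Alexa, launch my assistant.</voice>
        <voice name="Amy">Hi, welcome to my assistant. How can I help you?</voice>
        <voice name="Brian">Rent a car.</voice>
        <voice name="Amy">Sure, how many days do you want to rent it for?</voice>
        <voice name="Brian">Today only.</voice>
        <voice name="Amy">Great! In a bit, the rental service will call you and get it delivered to you.</voice>
    </speak>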

This gives the user a much better idea of how the conversation should go.

Alexa Skills Kit Sound Library (Fun Part)

The sound library provides a wide range of cool sounds which we can use in our SSML messages, so we don't have to host or manage mp3 files anywhere. Here is a fun intent I have with sounds:

order_status
"<speak>On the way <audio src='soundbank://soundlibrary/transportation/amzn_sfx_motorcycle_engine_idle_01'/></speak>"