An Alexa listening session stays open for 8 seconds. If the user doesn't ask anything within those 8 seconds, Alexa ends the session (which is good enough to make us think it's not always listening. ;) )
Sometimes, after fulfilment, we want to confirm whether the user got a satisfying answer or wants to ask something else. We have the reprompt for that. What the reprompt does is: it waits for the initial 8 seconds, then responds with the reprompt message and opens the session for another 8 seconds of input. If the user doesn't respond within this second 8-second window, Alexa ends the session.
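As a minimal sketch of that behaviour (assuming a raw Alexa custom-skill JSON response rather than the ASK SDK; the `build_response` helper name is our own), a response that keeps the session open and registers a reprompt could look like this:

```python
# Sketch: build a raw Alexa response dict with an optional reprompt.
# Field names follow the Alexa custom-skill JSON interface.
def build_response(speech, reprompt=None, end_session=True):
    response = {
        "outputSpeech": {"type": "PlainText", "text": speech},
        "shouldEndSession": end_session,
    }
    if reprompt:
        # Alexa speaks this only after the user stays silent for ~8 seconds,
        # then listens for another ~8 seconds before closing the session.
        response["reprompt"] = {
            "outputSpeech": {"type": "PlainText", "text": reprompt}
        }
    return {"version": "1.0", "response": response}

resp = build_response(
    "Yeah, sure. You can claim expenses through Keka.",
    reprompt="Do you want to know anything else?",
    end_session=False,
)
```

Note that `shouldEndSession` must be `false`, otherwise the session closes immediately and the reprompt is never spoken.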
I have an FAQ skill where users ask a question and Alexa answers. But closing the session right after the answer doesn't seem very user-friendly, so I have added a reprompt which asks the user whether they want to continue or got their answer. This is how the conversation looks:
User: 'Alexa, ask faq assistant Can I claim expenses as contractor?'
Alexa: 'Yeah, sure. You can claim expenses through Keka.'
(Alexa waits for 8 seconds here, then asks)
Alexa: 'Do you want to know anything else?'
User: 'Yes'
Alexa: 'Go ahead, ask me'
(or, if the user says 'No')
Alexa: 'Thank you. It's a pleasure to help you.'
To achieve this we can use reprompt. Following is the response code I am sending:
"text": "Yeah, sure. You can claim expenses through Keka."
"text": "Do you want to know anything else?"
Mostly, the reprompt is there to confirm whether the user got what they wanted, and the user will usually respond with Yes or No. To handle the reprompt response we can use the built-in AMAZON intents.
There are lots of built-in intents, but we are specifically interested in AMAZON.YesIntent and AMAZON.NoIntent for the reprompt purpose. So when the user responds with Yes or No, these intents get triggered and we respond accordingly (close the conversation or keep it open).
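A small sketch of that routing (the handler shape and reply texts are illustrative, not the exact production code):

```python
# Sketch: dispatch on the built-in confirmation intents.
def handle_intent(intent_name):
    if intent_name == "AMAZON.YesIntent":
        # Keep the session open so the user can ask the next question.
        return {"text": "Go ahead, ask me", "shouldEndSession": False}
    if intent_name == "AMAZON.NoIntent":
        # Close the conversation politely.
        return {"text": "Thank you. It's a pleasure to help you.",
                "shouldEndSession": True}
    # Anything else falls through to the normal FAQ handling.
    return {"text": "Sorry, I didn't get that.", "shouldEndSession": False}
```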
Reprompt in Echo Show display
The above example works fine with voice-only responses. Now let's see how we can do a similar thing with an Echo Show display template. As we saw with display directives in the earlier post [link], we can also add rich text (https://developer.amazon.com/docs/custom-skills/display-interface-reference.html#supported-markup) in templates. There is an action tag, which we are going to use to mimic the reprompt feature. For the reprompt case, we can add tertiaryText in textContent with action tags. Read more about textContent: https://developer.amazon.com/docs/custom-skills/display-interface-reference.html#textcontent-object-specifications. Below is our display response for the same scenario.
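As a sketch, the textContent object with the on-screen reprompt could look like this (the close_session / open_session tokens match the ones used later; the exact template wiring around textContent is omitted):

```python
# Sketch: textContent for a display template, using RichText <action> markup
# so the tertiaryText behaves like tappable reprompt links.
text_content = {
    "primaryText": {
        "type": "RichText",
        "text": "Yeah, sure. You can claim expenses through Keka.",
    },
    "tertiaryText": {
        "type": "RichText",
        # Each <action> carries a token that comes back in the touch
        # selection request when the user taps it.
        "text": ("Do you want to know anything else? "
                 "<action token='open_session'>Yes</action> / "
                 "<action token='close_session'>No</action>"),
    },
}
```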
A touch selection event (https://developer.amazon.com/docs/custom-skills/display-interface-reference.html#touch-selection-events) is triggered when a user selects an action element on the screen. So in our code, we will get this request with the token value close_session or open_session, and we can respond accordingly (close the conversation or keep it open).
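A sketch of handling that touch selection request (I'm assuming the incoming request carries the tapped element's token in a `token` field; the reply texts are illustrative):

```python
# Sketch: respond to a touch selection event based on the action token.
def handle_element_selected(request):
    token = request.get("token")
    if token == "close_session":
        # The user tapped "No": end the conversation.
        return {"text": "Thank you. It's a pleasure to help you.",
                "shouldEndSession": True}
    if token == "open_session":
        # The user tapped "Yes": keep listening for the next question.
        return {"text": "Go ahead, ask me", "shouldEndSession": False}
    return {"text": "Sorry, I didn't get that.", "shouldEndSession": False}
```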
This is how the Lambda pseudocode looks (this is not the final code I am using in production. I have a templateBuilder function between the final response and the handleIntent functions, and the handleIntent function triggers the core controller [https://www.srijan.net/blog/alexa-n-lex-one-controller-to-rule-both] to get the templateType and content):
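A condensed sketch of what such a handler could look like, with the templateBuilder / core-controller steps stubbed out as placeholders (helper names and the canned answer are assumptions, not the production code):

```python
# Sketch of the Lambda entry point: voice intents and touch selections
# funnel into the same open/close-session decision.
def speak(text, reprompt=None, end_session=True):
    response = {
        "outputSpeech": {"type": "PlainText", "text": text},
        "shouldEndSession": end_session,
    }
    if reprompt:
        response["reprompt"] = {
            "outputSpeech": {"type": "PlainText", "text": reprompt}
        }
    return {"version": "1.0", "response": response}

def answer_for(intent_name):
    # Placeholder for the core controller call that returns the
    # templateType and content for the matched FAQ intent.
    return "Yeah, sure. You can claim expenses through Keka."

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "AMAZON.YesIntent":
            return speak("Go ahead, ask me", end_session=False)
        if intent == "AMAZON.NoIntent":
            return speak("Thank you. It's a pleasure to help you.")
        # FAQ intent: answer, then reprompt to keep the conversation going.
        return speak(answer_for(intent),
                     reprompt="Do you want to know anything else?",
                     end_session=False)
    if request["type"] == "Display.ElementSelected":
        # Touch selection from the display template's action links.
        if request["token"] == "close_session":
            return speak("Thank you. It's a pleasure to help you.")
        return speak("Go ahead, ask me", end_session=False)
    return speak("Welcome to the FAQ assistant.", end_session=False)
```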
Alexa reprompt/confirmation with voice and screen: the final response, with the prompt and action links, looks like this: