In this video, Emmanuel Henri gives a brief introduction to what it means to understand a user and the main factors behind successfully comprehending the user.
- [Instructor] Now that we have explored examples of devices and frameworks, you hopefully have a good understanding of the basics of voice UX. Let's get to our trifecta of voice UX so we can establish how to plan a good voice system. The first item in our trifecta is to properly understand the user. Without that, no conversations or actions will occur. We've all experienced a situation where we talk to a person or a machine and are misunderstood. This can lead to the wrong interaction or action, sometimes simply because the point didn't come across.
Talking to a machine and being misunderstood can lead to frustration and, more often than not, to abandoning the device and buying a different brand. We often lack the patience with devices that we have with human beings, simply because we don't have to be patient with machines. So the challenge for anyone building a voice-controlled device is to make sure the user is understood as accurately as possible from the beginning, or, when that doesn't work, to provide ways to mitigate the misunderstanding.
So how do we make this part efficient and eliminate as many potential sticking points as possible when a conversation occurs with a machine? There are three things that need to happen when a human interacts with a machine. First, confirm the understanding of what the user requested. Second, if the request was misunderstood, present an opportunity for the user to quickly correct the machine, or keep the response short enough that the user has a chance to ask again. Third, as with any conversation between people, offer an opportunity to expand on the topic or continue the conversation in context.
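To make the three steps concrete, here is a minimal sketch of how a voice response handler might apply them. This is purely illustrative: the function name, the `confidence` score (standing in for a speech recognizer's output), and the threshold value are all assumptions, not part of any specific voice platform's API.

```python
def handle_request(transcript, confidence, threshold=0.75):
    """Illustrative sketch of the three-step pattern: confirm the request,
    offer a quick correction path, and carry context for follow-ups.

    `confidence` is a hypothetical recognizer score between 0 and 1.
    """
    if confidence >= threshold:
        # Step 1: confirm understanding by echoing the request back.
        reply = f"Sure - {transcript}. Anything else?"
        understood = True
    else:
        # Step 2: keep the response short so the user can quickly
        # correct the machine or simply ask again.
        reply = f"Did you mean '{transcript}'? Say yes, or ask again."
        understood = False

    # Step 3: return conversation context so a follow-up request can
    # continue the topic instead of starting from scratch.
    context = {"last_request": transcript, "understood": understood}
    return reply, context


reply, context = handle_request("play some jazz", 0.9)
```

The key design choice is that every turn returns both a reply and a small context object; keeping the last request around is what lets a follow-up like "a little louder" make sense in context.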
We'll explore these in more detail soon, but as a general guideline, when a machine nails these factors, we feel more comfortable asking more questions and getting into a conversation.