Join Chris Nodder for an in-depth discussion in this video Exploring some example questions, part of UX Foundations: Usability Testing.
- Because we want study participants to act like they would in the real world, we typically phrase our questions as tasks, and then get the participants to complete the tasks with the interface we're testing. Let's spend some time turning a couple of questions the team might have into usability tasks. One big question that we often want to ask is, "Can users find the right place to carry out an action?" This is easy to do in a study. We just give them a suitable task that involves carrying out the action, either on the way to completing the task or as the end goal of the task.
For example, if we want to know whether people can find the filtering function of a search engine, we might give them a task to search for a specific item from a very large range of similar items. If we designed our interface well, there's a good chance they'll use the filter function to complete that task. Actually, if they don't use it, that's an interesting piece of data in its own right. Another type of question comes up when you add new functionality or options within an existing process. You might ask, "How many distractions are there?" or, "What issues have we introduced by changing the flow?" Give participants an exploratory task that requires them to think for themselves, and then sit back and watch.
An example might be adding an interest-rate calculator to a mortgage quote screen. You think the calculator will add value, but you might be worried about whether it distracts visitors from their primary task of getting a mortgage. Just giving study participants a task as broad as getting a mortgage on a particular home allows you to watch for issues during the flow. You won't necessarily be testing how well the interest-rate calculator works, because that isn't your primary aim. If some people use it, that's an added bonus.
But your primary research question is about distraction from the flow, not about use of the calculator. Your server logs might show that people hit errors or abandon at a certain place on your site, or your help desk might be getting lots of calls about a certain screen. You know what and where the problem is, but you don't know why it's a problem. You can observe the behavior that leads to errors or abandonment by asking people to perform a directed task that takes them through that point in the flow.
This will give you the "why" data that will allow you to make a design change. An example might be help desk calls from people getting unrealistic answers back from the mortgage interest calculator in the previous example. Running participants through a specific task to use the calculator, you might find that the terminology on one of the fields requesting data is so vague that people often type in the home value rather than the monthly payment value. The help desk could tell you what the problem was, but not why it was a problem.
Watching people working with the calculator gives you the "why" answer, and also some ideas for how to fix it. Another type of question that development teams ask all the time is, "Will they like our new functionality?" The best way to test this is before you've even written the code, using paper mock-ups of the design. Instead of using the real code, you show participants paper mock-ups of the screens, and they "click" through each screen using a pen as their mouse. The tasks for this type of study are just the same as for studies using actual code.
Users' tasks don't change much over time, and watching them complete the tasks with your paper prototype will show you where the issues are before you've spent any money on development work. So, different question types need different types of tasks, some exploratory and some directed. There are also some questions that you can't answer by observation alone, and those are what we'll cover next.