- A dry run of the process is a great way of finding out what you've forgotten. That's especially true when you are new to usability testing, but it's always good practice to iron out potential pitfalls by running a pilot study. You'll find out which tasks have strange or confusing wording, which areas of the system still have development bugs that might throw users off, and whether you have all the documents and information you need to run the study. Do a dry run close to the time you'll be running the actual sessions, so that you work from the same code base.
The easiest way to do this is just to have a team member play the role of the participant. Preferably, choose someone who isn't intimately aware of how the software works, so you can catch things like terminology in your tasks that you would use every day but which end users may not be aware of. Do a dry run of every stage of the process. Meet the pilot participant in your reception area and finish back at the same place. This gives you a chance to practice what you're going to say to people at every stage.
Someone who works at your company is likely to be able to do the tasks faster than your real participants, but you will still get an indication of whether you have enough time for the tasks you had planned. Remember that although you have an hour and a half scheduled for the session, quite a bit of that time will be taken up with paperwork, getting the participants settled, and wrapping up at the end. You'll probably end up with about one hour of actual task time. Make sure you have a little bit of time after this dry run session to go back and make any necessary changes.
Running the pilot study will give you a lot more confidence when you greet your first real participant in the reception area.