Note: Because this is an ongoing series, viewers will not receive a certificate of completion.
Skill Level: Beginner
- We're all biased, whether we want to believe it or not. Knowing what some of these biases are will help you minimize them in your user research. Research bias is caused by mistakes in how you set up and run your study that end up affecting what you learn. To show you how hard it can be to identify bias, here's a quick quiz for you. Imagine you're working for the Navy during World War II. You've been given a diagram showing all the damage that aircraft suffered after returning from bombing sorties. Seeing this diagram, where would you add extra armor? Pause the video if you'd like to think about it for a while. This was the problem given to Abraham Wald, a statistician. The Navy had already decided to add armor to the areas that showed the most damage, but Wald told them to do the complete opposite. Why? Well, the Navy had only been able to analyze the bombers that returned. Wald realized that this meant those planes were still capable of flying despite their damage. The Navy hadn't analyzed any of the bombers that were shot down. It followed that being shot anywhere other than where the damage was observed was probably fatal for the plane and its crew. This is called survivorship bias, and it's something you face when you recruit people for usability studies. For instance, if you're doing work with people inside your company, be careful if you let managers pick participants from their departments for you. Are they picking the most experienced people? The highest performing? Or the ones they just want to get rid of for a couple of hours? In each case, there's a kind of survivorship bias at play. If you're doing eCommerce work, do you only present surveys and recruiting questions after checkout? Obviously you don't want to interrupt the flow, but if you wait until someone's checked out, then you're only recruiting survivors, people who made it through the process.
You're missing the ones who dropped off somewhere along the way, who, by the way, might have much more interesting stories to tell you. There are other kinds of bias too. Environmental bias is caused by putting people in unrealistic environments or making them interact with unrealistic tasks and systems. I once ran a study with kids where I asked them what type of games they liked playing on the computer. The first kid said flying games. The second child said the same thing. When the third one said games with airplanes, I started getting concerned. When I walked back to the lobby with that child, I saw the big poster for Flight Simulator up on the wall. All the kids had been sitting staring at it while they waited for their session. That poster ruined my study, and it was all my fault because I hadn't checked the environment for things that could bias my study. You can also add bias in the tasks you choose for user studies. If you just cherry-pick tasks that you know will work in the shaky version of the app that you're testing, you can't ask for satisfaction ratings afterwards. You've biased your participants' view of the app. Unless participants experience the whole end-to-end task, you won't truly know how the new features fit in with the rest of the process. Facilitator bias is the bias you add through how you interact with participants, for instance in how you ask questions or even the way you sit and act. The first step towards removing bias in your research sessions is to be aware of the types of bias that exist. You can find out more about how to reduce bias in your sessions by watching my usability testing course.