Competitor usability testing lets you learn from the issues you see users face with competitor software, so you can avoid those same issues in your products. Find out how this type of testing can help your team, and learn about some of the issues to avoid if you run this type of usability study.
- I've found that product teams often get worried about the idea of observing users working with competitor products. But unless your lawyers have specifically instructed you to steer clear, there's a lot to be learned from a competitor usability study. Let's be clear: you don't need to run a usability study if you're just trying to copy a competitor's interface. You can copy it just by playing with it at your desk. But that's the point here; you're not trying to copy. By running the study, you'll learn from the issues you see users face with the software as they work through it. Interaction design, terminology, task flow, navigation mechanisms, and layout can all cause issues. Why not find them in someone else's design before you bake the same problems into yours? If anything, competitor usability testing gives you reasons not to copy somebody else's designs. When I worked at Microsoft, we evaluated the out-of-the-box experience, the part from receiving the device to getting it up and running, for several peripherals and PCs, including ones with other operating systems. It was interesting to see users struggling with the same concepts regardless of the operating system or manufacturer logo. I know that no copying happened as a result of these user tests; the Microsoft interfaces were typically already too far along to be unduly influenced by the results. Instead, the studies gave us a great understanding of users' comfort level with technology. They helped us understand whether problems with the software were more likely due to our implementation or to users' level of familiarity with key concepts. Surprisingly, they also gave the team some morale boosts when it became clear that other operating systems that were supposed to "just work" didn't always do what it said on the box. There are some particular points you'll need to bear in mind if you run competitor studies.
If you can't watch users working with the software in their own environment, choose a neutral location, like a shared workspace or a hotel conference room, to host the study. If you run the study in your office, participants may think that you want them to criticize the software. Make sure you've got the product set up properly; it won't be as easy to troubleshoot someone else's product on the fly, and you probably don't want to turn this into a test of someone else's help desk. Use the same tasks as you would for usability tests of your own software. This gives you a good comparison point. Good usability test tasks won't include any terminology that's specific to your application anyway, so they should be equally applicable to any other product with the same functions. Capture solid metrics, like time on task or success rate, so that you can make clear, impartial statements about the product. Hiring a neutral third party to run the sessions is a good way to make sure no bias creeps in. And never lie: if participants ask who you work for, you must tell them and be prepared to answer their follow-up questions. When you get the results, never ever let marketing use them in advertisements. Your internal numbers just won't be trusted by potential customers. If marketing wants numbers to share externally, they should hire an independent third party to run a summative benchmarking study of your competitors. If your legal group says there's just no way you can run this type of competitor study, don't worry; you can still learn a lot by doing a heuristic evaluation or cognitive walkthrough of the main task flows in the competing product. I've got a video dedicated to each of these techniques that you can watch in the UX Insights Weekly course.
Note: Because this is an ongoing series, viewers will not receive a certificate of completion.