This video discusses analyzing the data that comes out of your user experience research and a general methodology you can use to break down research data: gather it, mine it for key takeaways, organize the takeaways with a card sort, and map them for context.
- Once a round of research is complete, you'll need to know how to interpret the data and make recommendations for the rest of the team. You'll need to tie all of the individual pieces of data together to create meaningful, actionable insights. Each methodology has particular data analysis procedures. For instance, you can't necessarily analyze quantitative survey data in the same way that you would moderated interview notes. However, the onus is on the user experience researcher to not just report back facts, but also synthesize data into information the team can use.
Depending on the phase and type of research, that might take the form of personas, recommended changes to an interface, or even hypotheses for the next round of research. Regardless of the type of research, here are some general tips, and a rough process that you can use to ensure that you get the most out of your data. While it may sound obvious, the first thing you need to do after finishing the study is to gather and organize all the data. If you've done something like a survey or an A/B test, you'll probably be able to export the raw data.
Most quantitative tools have simple analysis and charting functions that you can use. For instance, most survey tools will be able to tell you the average of a rating question, or give you a breakdown of how many participants answered a certain way. While this information is valuable, you won't get the full picture if you stop there. For instance, let's say that you ask participants to rate their experience with an e-commerce site on a scale of one to seven. Maybe the average is right in the middle, at four, but none of the participants actually rated it that way.
They all rated the site exceptionally well or exceptionally poorly. This tells you that some participants had a great experience while others were very dissatisfied, which is a very different story than everybody having just an okay experience. You need to make sure that you look at the full distribution of the data before you draw any conclusions. If you did moderated research, like interviews or usability tests, it's likely that you'll have a combination of handwritten and typed notes from a variety of people, digital audio and video files, and possibly forms filled out by the participants.
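The polarized-ratings pitfall described above can be sketched in a few lines of Python. The ratings here are hypothetical, assuming an even split between 1s and 7s on the one-to-seven scale:

```python
from collections import Counter
from statistics import mean

# Hypothetical 1-7 ratings from ten participants: half loved the site, half hated it.
ratings = [1, 1, 1, 1, 1, 7, 7, 7, 7, 7]

# The average alone suggests a middling experience...
print(mean(ratings))     # 4

# ...but the distribution shows that nobody actually had a middling experience.
print(Counter(ratings))  # Counter({1: 5, 7: 5})
```

Reporting only the mean would hide the split entirely, which is why checking the distribution matters.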
When analyzing moderated research, I recommend making a big spreadsheet with a row for every participant. Include their general demographic information, the notes from each session, and links to any other files or information pertaining to the research. Having one big overview helps you see the big picture. The next step is to start breaking down the huge amount of information, mining the notes for pertinent facts, quotes, or points that relate to the key goals of your research. Let's say that you were looking through the notes of a moderated usability test of a new mobile application, and you were testing users' reactions to the navigation.
You might look through each participant's notes, and notice that none of them were able to find the main menu the first time. That would be a key element to note. I recommend writing each of these individual findings on a single sticky note. If you are able to have team members observe sessions, I recommend having this breakdown process occur in the debriefing session with as much of the project team as possible. Remind each team member of the key goals and hypotheses of the study, and have everyone mine their own notes and write up the main things they observed.
Including everyone helps to make sure that all parties are invested in the research process, and understand the full breadth of work. It also ensures that you don't miss any points of view. If you're analyzing data on your own, I recommend creating one set of takeaways for each participant, so you can spot trends across people. Once you've mined the notes for key points, the next step is to organize. I actually like to use one of the research methodologies, called card sorting, to help organize my findings.
Typically, I find that it makes sense to do a closed card sort, and make the main categories match the main goals of the research. Essentially, you look at each of the key points you and your team have identified, and sort them into the predefined goal categories. For instance, if you were usability testing a mobile application's ease of use, you might have had goals that sounded something like this: Do users know how to enter their credentials? Do users understand the hamburger menu icon? Can users easily log out? Given this, your finding categories would be Credentials, Hamburger Icon, and Logout.
You'd also want to create an Other category for additional findings that don't directly correlate to your predefined goals. Throughout the card sorting process, you'll be able to easily spot trends and potential anomalies. For instance, if every one of 10 participants had trouble finding something, it's very likely that there's a usability and findability issue that you need to address. There are several digital tools for card sorting, but I find that the physical process of sorting sticky notes helps visualize the data and spot trends more quickly.
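As a rough illustration, the closed card sort can be mimicked in code: each sticky-note finding is tagged with one of the predefined goal categories, then the notes are grouped so trends stand out. The categories mirror the study goals above; the individual findings are made up for illustration:

```python
# Each tuple is one "sticky note": (predefined goal category, finding).
# Categories come from the study goals; the findings are hypothetical.
findings = [
    ("Credentials", "P1 typed their email into the username field"),
    ("Hamburger Icon", "P1 never noticed the hamburger icon"),
    ("Hamburger Icon", "P2 called the icon 'those three lines'"),
    ("Logout", "P2 found the logout link quickly"),
    ("Other", "P3 asked whether the app has a dark mode"),
]

# Group the notes by category, like clustering sticky notes under column headers.
piles = {}
for category, note in findings:
    piles.setdefault(category, []).append(note)

for category, notes in piles.items():
    print(f"{category}: {len(notes)} finding(s)")
```

A pile with many notes in it is exactly the kind of trend the physical sort surfaces: here, two separate participants struggled with the hamburger icon.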
If you're able to do this with the team, you can take the time to discuss why each finding is important, what it means in the context of the product or project, and potential solutions or recommendations. Doing this process on your own can also help you think through the context, and crystallize findings into meaningful insights. If you're having trouble understanding or articulating the context of your findings, I recommend mapping the main takeaways across two main dimensions that are important to your project, and examining the relationship of where the findings fall in the matrix.
For instance, if you are analyzing the data from a usability test of an e-commerce site, you might map findings on a grid with the X axis being impact to users, and the Y axis being impact to the business. Anything that severely impacts both the user and the business, like users not being able to find a product they need or complete checkout, would be mapped in the top right corner. Anything that is not important to either would be mapped in the bottom left corner, and so on. Going through the process of determining where each finding falls will help you frame the context, and once you've mapped everything, you'll get an image of the highest priorities.
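A minimal sketch of that two-dimensional mapping, with hypothetical findings scored 1 to 5 on each axis (here, a score of 3 or above counts as high impact):

```python
# Hypothetical findings scored 1-5 for (user impact, business impact).
findings = {
    "Can't complete checkout": (5, 5),
    "Footer links slightly misaligned": (1, 1),
    "Search buries the product users need": (4, 3),
}

def quadrant(user_impact, business_impact, threshold=3):
    """Place a finding in the 2x2 matrix: x = user impact, y = business impact."""
    vertical = "top" if business_impact >= threshold else "bottom"
    horizontal = "right" if user_impact >= threshold else "left"
    return f"{vertical}-{horizontal}"

for finding, (user, business) in findings.items():
    print(f"{quadrant(user, business):12} {finding}")
```

Findings that land in the top-right quadrant, like a broken checkout, are the highest priorities; bottom-left items can usually wait.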
You can make the two dimensions anything that makes sense for the context of your research. For instance, if you were investigating the ease of use of a new signup process, you might make one axis the phase of signup and the other the severity of the problem. Remember, it's not enough to just report back hard facts. You need to give your team a deep understanding of what you've found, and what it means in the full picture of the product or project.
This course introduces the fundamentals of user experience research so that anyone can understand the benefits and start integrating research into their everyday design and development process. Start watching to learn how to use UX research to find the answers to the most basic questions about your customers—who, what, when, why, and how—and drive better user experiences and business outcomes.
- An overview of research methods, including usability testing, interviewing, eye tracking, surveys, and many more
- A review of the main types of research, including quantitative and qualitative, behavioral and attitudinal, and moderated vs. unmoderated
- Determining the right methodologies based on organizational environment, client type, and project stage
- Targeting the right research participants
- Crafting the right questions in the right way
- Analyzing and presenting your data