
Foundations of UX: Usability Testing

with Chris Nodder

 


Run your own basic usability study to find out just what your users need from your website, application, or device—and learn where to focus design improvements to have the biggest impact. Author Chris Nodder shows how to design a study so that it answers your questions, how to recruit the right participants, and how to set up the test environment. The course also teaches you how to moderate and observe a usability session, interact with participants and ask the right kind of questions, and then analyze the results and share them with your team in a meaningful way.
Topics include:
  • What is usability testing?
  • Finding the right participants
  • Making a screener
  • Asking the right questions
  • Avoiding bias
  • Making a task list
  • Creating the test environment
  • Running a pilot study
  • Moderating sessions
  • Capturing real-time observations
  • Analyzing and reporting your results


Author: Chris Nodder
Subject: Web, User Experience, Web Design, Web Foundations
Level: Beginner
Duration: 1h 29m
Released: Jun 07, 2013




Introduction
Welcome
00:00 (music playing)
00:04 Hello, I'm Chris Nodder. Welcome to Foundations of UX: Usability Testing.
00:10 Usability Testing is an activity that every software development team should
00:14 perform early and often with their sites and products.
00:17 It's the fastest and easiest way to find out whether the thing you plan on
00:21 delivering will meet your users' needs. That includes whether it behaves the way
00:26 they expect. Whether it gives them what they wanted.
00:28 And whether they can even work their way through your screens.
00:33 There's a big difference between having a good idea for a product or service and
00:36 actually delivering it in a way that people want to use.
00:39 Usability Testing makes sure you're on track.
00:44 This course doesn't make any assumptions about your background, although it's
00:47 primarily aimed at people who work in or are learning about software design and development.
00:54 Usability Testing is a highly sought-after skill.
00:56 And this course provides a solid grounding in how to run a typical study.
01:02 This course has practical tips on how to schedule, design, and run your own
01:06 Usability Tests, and then apply what you learned to your own product.
01:10 Now, I want to help you learn how to run your own Usability Test, so that you can
01:15 make your users even happier. So with that, let's get started.
Using the exercise files
00:00 To help you conduct your own usability tests, we've included several documents
00:04 in the Exercise Files directory that accompanies this course.
00:09 We've made these free and available to all users.
00:13 Unlike most other lynda.com courses, you don't need to have the documents open
00:17 while you watch. Instead, they'll be useful to you when
00:20 you're planning your own study later on. The documents take the concepts we
00:25 discuss, and either expand on them with more examples than what we show in the
00:29 course, or give you generic templates that you can customize for your own use
00:33 when you run your own studies. Feel free to use them and modify them to
00:37 suit your needs.
1. What Is Usability Testing?
What is usability testing?
00:00 Usability testing involves watching representative users working with your
00:04 product so that you can make improvements based on what you see.
00:07 Usability testing gives you invaluable feedback about how your users behave with
00:13 your product. Knowing how users behave helps you create
00:17 a much more suitable site or application. Rather than just guessing about what
00:21 people might like or need, you can see their reactions firsthand and then make
00:25 sure your product contains just the right features.
00:29 Learning about issues early in the process saves development time and money.
00:34 Rather than spending time developing the wrong thing, a quick usability test will
00:38 tell you whether you're on the right track or not.
00:41 Don't be afraid to run usability tests using paper prototypes before you start
00:45 writing any actual code. Getting the data directly from your users
00:50 is a much better way of doing design work than arguing about features among the team.
00:54 In fact, usability testing is a great way of stopping arguments.
00:59 Rather than spending time fighting over whose idea is best, put the concept in
01:03 front of the real users to see how well they can work with it.
01:06 There's nothing quite like real user feedback to help you determine the best
01:10 foot forward. The thing that separates usability
01:12 testing from many other methods is that you're seeing real
01:16 behavior, what people do, rather than just asking what people think.
01:21 Often, watching people do something is the only way to really understand where
01:27 their issues lie and how to fix the problems you see.
What you can usability test
00:00 You can usability test pretty much any product on computers, tablets or phones,
00:06 or even using paper. You can evaluate hardware, from coffee
00:09 machines to nuclear power stations, or software, from a simple mobile app to a
00:13 full office productivity suite. You can do it at any stage of the
00:17 process, from early paper prototype concepts through to the finished code.
00:23 User testing helps you understand the interaction between hardware, software,
00:27 and business processes, from the perspective of the people who work with
00:31 the product every day. Testing early, and testing often, means
00:36 that you stay in touch with your users' needs throughout the development process.
00:41 The earlier in the process you start user testing, the more likely it is that
00:45 you'll be able to make changes based on what you find, and the cheaper those
00:48 changes will be to implement. For that reason, usability testing is a
00:52 bit of a misnomer. It makes people think about testing
00:55 finished things. Actually, usability sessions are much
00:59 more useful for helping you work out what you should even build in the first place.
Planning your first test
00:00 Usability Testing can be broken down into three stages.
00:04 You have pre-test tasks, like finding and scheduling participants, and working out
00:08 what questions you need answers to. You have the test sessions themselves,
00:13 and then, you have the post session analysis and reporting, where you work
00:16 out what you learned and decide what to do about it.
00:20 There are three things you need to focus on getting right for the study.
00:23 Your participants, the tasks you'll ask them to complete for you, and the
00:27 environment you'll use for running the study.
00:30 By environment, I mean both the physical location, like a conference room or even
00:35 somewhere like your user's home or office, and also the technology
00:38 environment, what type of device, operating system and so forth you plan on using.
00:45 By splitting the work out this way, you can easily focus on the important bits at
00:49 each point in the process without getting overwhelmed.
00:52 The rest of this course will take you through the pre-test, test session and
00:56 post-test stages. The pre-test pieces will tell you what
01:00 you need to do to make sure you have good participants, useful tasks and a suitable
01:05 environment for running the study. The test session pieces will tell you how
01:10 to be a good session moderator, what your teammates can do to observe the sessions,
01:14 and how to make participants feel at ease.
01:18 The post-test pieces will show you how to analyze the information you got from the
01:21 study and turn it into actionable data that you can use with your team in order
01:25 to make changes to your product.
2. Recruiting Participants
How many people should you study?
00:01 The great thing about usability testing is that after running sessions with five
00:05 participants, you'll have seen about 80% of the issues that exist in your product.
00:10 You can run more people, but the benefits, the number of extra issues you
00:14 find, drops off quite quickly. Also, even after five people, it's likely
00:19 that you'll have seen enough severe issues to keep you busy fixing things for
00:23 a while. It's better to run a small study, make
00:27 some changes, and then test again to confirm your changes had the right
00:30 effect, than it is to run one big study and just get more confident that you
00:35 found all the issues. Remember, you aren't looking for any kind
00:39 of statistical significance. Instead, you're looking to identify and
00:43 fix barriers to people using your product.
00:48 If even just three of your study participants have an issue with part of
00:51 your UI, it's worth investigating solutions, because that indicates that
00:55 many more people would have that same issue in the real world.
01:00 It doesn't matter exactly how many people.
01:01 The issue is likely to be severe enough that it needs fixing anyway.
01:07 If you're used to recruiting for surveys or other quantitative studies, you may be
01:11 tempted to run a whole bunch more people through your study.
01:14 Remember though, even if large numbers may give you confidence intervals, and
01:17 predict the exact number of customers who had that problem in real life, in this
01:22 situation, we really don't care how many people have the problem.
01:25 You just want to know it exists and what triggers it, so that you can fix it.
01:30 Five people is enough to know it wasn't completely due to some random act that
01:34 one individual undertook. If three or more people in the study had
01:38 the same issue, then it really merits extra attention.
01:42 Because you'll be running several studies during the course of your product
01:45 development process, you'll be gathering more and more data about users and the
01:49 issues they have. So, although you might only run five
01:52 participants in each study, you'll end up with 20 or 30 data points by the time you
01:58 ship the product.
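As a back-of-the-envelope check on that 80% figure, here's a minimal sketch of the issue-discovery model commonly associated with the "five users" guideline; the 31% per-participant discovery rate is an assumption drawn from Nielsen and Landauer's published model, not a figure from this course.

    # Expected share of usability issues seen after n participants,
    # assuming each participant independently reveals a fraction p of
    # the issues that exist (p = 0.31 is the commonly cited rate).
    def share_of_issues_found(n, p=0.31):
        return 1 - (1 - p) ** n

    for n in range(1, 9):
        print(f"{n} participants: {share_of_issues_found(n):.0%}")
    # Five participants surface roughly 84% of issues, and each extra
    # participant adds progressively less, matching the drop-off above.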
Finding the right participants
00:01 There are two ways of finding participants for your study.
00:03 One is to do it yourself, the other is to pay somebody to recruit them for you.
00:09 If you have more time on your hands than available money, you'll probably be doing
00:12 it yourself. First, you have to work out what
00:16 attributes your study participants should have.
00:18 Then you need to find a large number of people who are interested in helping you
00:22 out and match them against your attributes.
00:25 We'll talk in detail about the participant attributes that you should
00:29 look for in the next video. But for now, just remember it probably
00:33 won't work very well just dragging people in off the streets to be participants in
00:36 your usability study. You'll have specific recruiting
00:40 requirements based on your product, the questions you have, your location, and
00:45 whether you're looking at making your existing customers happy or at acquiring
00:49 new ones. There are many ways of finding suitable
00:53 people to be participants. You can use classified ads on sites like
00:57 Craigslist. You can run online ads using Google AdWords.
01:00 Or physical ads in locations like supermarkets, libraries, and other places
01:04 with bulletin boards. You can use online forums or social media.
01:09 Here it's best to direct people to a page on your site so they know that the
01:13 posting is legitimate. If you've planned ahead, you can add an
01:17 "I'll give feedback" product-improvement checkbox to your registration and
01:22 contact-us forms. You might be able to persuade your sales
01:25 team to let you contact some existing customers who would be open to this sort
01:29 of research. You could obviously also advertise for
01:32 participants on your site. But be aware that if you do this, you'll
01:36 be introducing what's called selection bias.
01:39 In other words, the people who come to your site have already self selected
01:43 themselves as being interested in your company.
01:46 So they have more knowledge of your products than the general population will.
01:49 And that can mean they behave differently than people who've never been exposed to
01:52 your products before. You'll need to start recruiting at least
01:56 one to two weeks before the scheduled study.
01:57 It'll take time to find enough people. Even if you end up with a long list of
02:02 potential participants, it's hard to find individuals who will be available at the
02:05 exact times you need them to turn up. I normally work on the assumption that
02:10 I'll be calling about ten people for each participant I end up scheduling for any
02:14 given study. That means you'll need a participant
02:17 database of at least 50 and preferably many more individuals to draw from.
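The arithmetic behind that pool size is straightforward; here's a quick sketch using the rates just mentioned (the numbers are the ones above, nothing more):

    # Roughly ten calls for each participant you actually schedule.
    participants_to_schedule = 5   # one typical study
    calls_per_scheduled = 10
    minimum_pool = participants_to_schedule * calls_per_scheduled
    print(f"Plan on a pool of at least {minimum_pool} people.")  # 50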
02:23 If you have the money it's much easier to use an existing recruiting company.
02:28 It's not cheap. Costs for recruiting each participant can
02:31 run from $100 to $300, and that doesn't include the participant gratuity.
02:37 However, the company does everything for you, from initially finding suitable
02:41 participants, through giving them directions on how to get to your
02:43 location, and even calling them to remind them to show up.
02:48 If you add up all the time it will take you to do these tasks, that cost can seem
02:51 very worthwhile. Here's one tip if you decide to use a
02:55 recruiting company. They're often used to scheduling focus
02:59 groups, so it helps if you use the industry jargon and tell them you're
03:02 running individual in-depth interviews, so they know to schedule only one
03:06 participant for each slot. For your first couple of studies, until
03:11 you've convinced someone to give you the budget to use an external recruiter,
03:15 you'll most likely be doing all the recruiting yourself.
03:18 The rest of this course makes the assumption that you're recruiting
03:20 participants yourself. Even if you do use an external recruiter,
03:24 you'll still need to be aware of the same issues and you'll still need to give your
03:28 recruiter a set of participant attributes and potential recruiting questions.
Creating a list of participant attributes
00:01 Different Usability Tests will have different participant characteristics.
00:04 For instance, if you're testing advanced features, it's likely you'll want to
00:09 recruit people who've been using your product for a while.
00:12 If instead you're interested in how easily people can sign up for your
00:15 service, you'll want to recruit people who aren't already members.
00:18 For every Usability Test, you'll have to ask who is your audience and what subset
00:24 of this audience do you care about for the current set of questions you want to answer.
00:28 Lots of development teams build their software to satisfy the requirements of a
00:32 set of personas. Personas are fake people who have all the
00:36 important attributes the team cares about.
00:40 If you have personas, you can use them as the basis of your recruiting process.
00:43 Just work out which personas would be performing the tasks that you care about
00:48 and then recruit people who share your persona's primary characteristics.
00:54 If you don't have personas already, just write down a list of the attributes you
00:58 think your users are likely to have. Remember, each additional attribute you
01:03 add to your recruitment wish list, will reduce the number of people who could
01:06 potentially take part in your Usability study.
01:09 So make sure to keep it to the most important things.
01:13 Some examples of attributes you'd care about might be the age range, gender,
01:17 experience levels and habits, and whether they're existing users or not.
01:22 Make sure that each attribute you list is measurable.
01:25 Don't say old, say 65 plus. Don't say experienced, say, can describe
01:32 how to use at least three advanced features, and then, list what those
01:36 advanced features might be. It's best to create this attribute list
01:39 along with all the other members of the team, so everyone is bought in.
01:43 That's because a common excuse people on the development team give for not wanting
01:47 to fix issues is that participants somehow weren't indicative of your real users.
01:54 If everybody was involved in determining the characteristics that you're
01:57 recruiting for, then they won't have that excuse.
Making a screener
00:00 Now you have to find out whether each person who answers your advert is
00:04 qualified to take part. You do that by having a set of questions
00:08 that you go through with each potential participant.
00:12 We mentioned that each of the attributes that you've listed has to be measurable.
00:15 However, you also don't want to make it clear from the questions you ask, what
00:20 you want the answer to be. Let's go through some examples of good
00:24 questions for some common attributes you might be recruiting for.
00:28 Let's say you want someone who's moderately active online.
00:31 You might specify that as meaning they spend between 30 minutes to two hours
00:35 online each day. So, what's the best way of asking that question?
00:39 Obviously, if we just said do you spend between 30 minutes and 2 hours online
00:44 each day, we'd be giving away the answer we want.
00:47 Instead it's best to get your potential participant to pick from a range of numbers.
00:52 So you might ask which of the following best describes how much time you spend
00:57 online each day? Less than 10 minutes, 10 to 30 minutes,
01:01 30 to 60 minutes, 1 to 2 hours, 2 to 4 hours, or more than 4 hours.
01:08 Then you'd accept anyone who chose the third or fourth answer.
01:12 To ask how active someone is at online shopping, you might ask about purchase
01:21 frequency using a similar scale. Maybe less than once a month, once or
01:21 twice a month, once or twice a week, or several times a week?
01:26 Alternatively, you might care more about how much time someone spends browsing
01:30 online stores, rather than the number of purchases they make.
01:35 Obviously the questions you ask will depend upon the description you came up
01:38 with of which user types you care most about.
01:43 Sometimes it will be too hard to ask a question directly, so you need to find a
01:47 proxy for it. For instance, what if you're looking for
01:50 people who love to listen to music. You could ask, do you love to listen to
01:55 music, but that doesn't quantify the response.
01:58 Someone could answer yes, whether they just listen to the radio when they're
02:02 driving or whether they download 20 new songs each week.
02:07 Writing your questions out in a way that makes them answerable also helps you work
02:11 out exactly what it is you care about. When you ask on a sliding scale, or give
02:16 several options, it's not clear to the respondent what the right answer is.
02:20 You don't even necessarily have to read out all the options.
02:24 If it's an easy enough question to answer, you can just take the answer and see if
02:29 it falls within your acceptable range. So, for each attribute you need to work
02:34 out the question you'll ask, the range of options you'll provide to respondents,
02:38 and also what range of answers you'll accept.
02:43 If you're using an external company to do your recruiting, they will expect this
02:46 type of list from you. A good company will help you determine
02:50 the list so that you ask the right questions and get the right people
02:53 showing up for your study.
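If it helps to see those three pieces side by side, here's a minimal sketch of how one screener attribute might be captured; the structure and names are illustrative, not a template from the exercise files.

    # One hypothetical screener attribute: the question to ask, the
    # ranges offered to the respondent, and which answers qualify.
    time_online = {
        "question": "Which of the following best describes how much "
                    "time you spend online each day?",
        "options": ["Less than 10 minutes", "10 to 30 minutes",
                    "30 to 60 minutes", "1 to 2 hours",
                    "2 to 4 hours", "More than 4 hours"],
        "accepted": {"30 to 60 minutes", "1 to 2 hours"},
    }

    def qualifies(answer):
        # Accept only the third and fourth options, without ever
        # telling the respondent which answers those are.
        return answer in time_online["accepted"]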
Qualifying questions
00:00 There are some other qualifying questions you'll want to ask each potential participant.
00:05 Unless you're running remote usability studies, the participant will need to be
00:09 in the same city as you, so they can easily show up for the study.
00:12 They obviously also need to be available at the times you're running your study.
00:19 It's good to check if people have any accessibility requirements like using a
00:23 wheelchair or not being able to climb stairs so that you can accommodate them.
00:28 If the person does qualify, you'll also need to make sure that you have a phone
00:32 number and email address, so that you can send study details out, and so that you
00:37 can contact the person if there are any changes to your schedule.
00:41 It's also best to put the questions in the order that lets you disqualify the
00:45 most people first. That means you'll waste less of your time
00:49 and the potential participant's time going through a whole set of questions
00:53 only to turn them down near the end. For instance, if you care about
00:58 smartphone owning music lovers who've downloaded more than five apps in the
01:02 last two months, ask the smartphone question first because all the rest hinge
01:09 on that. If someone doesn't have a smartphone,
01:10 they won't be downloading apps and it doesn't matter how much they like music.
01:15 And normally, you'll want to ask whether someone's available at the scheduled
01:18 study times before you even go into the recruiting questions at all.
01:23 Remember, that even if a participant doesn't have the right attributes for
01:27 your current study, they may still be a great fit for a future one.
01:31 If you have to decline a participant, tell them that they don't meet your
01:35 criteria for your current study. But ask them if it's okay to keep their
01:39 details on record for future studies. And remember, this is also an opportunity
01:43 to find out if they know anyone else who might be a good participant for you.
01:47 Typically, someone who qualifies or nearly qualifies will know other people
01:52 who meet the same criteria. We've put an example screener in the
01:56 exercise files that go along with this course.
01:59 It shows how a call might progress, the order in which you might want to ask
02:03 questions to qualify participants, and the style of questions that keeps
02:06 participants from guessing the answer you want.
02:09 It also contains post-qualification instructions.
02:13 If you want, you can substitute your own questions and then use this as the basis
02:18 for your own recruiting process.
Convincing participants to show up
00:01 Finding qualified participants is only half of the battle. Once you've found them,
00:05 you have to make sure they're motivated to show up for your study.
00:09 Even with the best of intentions, sometimes it's impossible for people to
00:12 show up. So although we said five people is enough
00:15 per study, it's worth scheduling a few more to account for no-shows,
00:20 or having someone available on short notice to cover any gaps in your schedule.
00:24 There are two main things that you can do to help ensure that people show up.
00:29 One is to schedule your sessions at a suitable time and the other is to reward
00:34 people for showing up either with cash or something else with value.
00:39 Think of the times that your participants are most likely to be able to come in for
00:43 a usability session. For instance, evenings might be better
00:47 for people who find it hard to take time off work.
00:49 Day times may be better for parents because the kids will still be in school.
00:55 Often, the nature of the participant profile you want to recruit will
00:58 determine when you need to run your sessions.
01:01 Also, be careful not to schedule sessions during national holidays, religious
01:06 festivals, or school vacations. People may forget about those dates when
01:10 they initially agree to be a participant. But when the day comes, they'll realize
01:14 they have much more important things to do than to sit in a room answering your
01:18 questions, and they'll cancel, or worse, just not show up.
01:23 People typically like giving feedback, but that's not a good enough reason for
01:27 them to show up for your study. You'll need to offer participants an
01:30 incentive as well. What's in it for them?
01:35 Most frequently, that incentive is money. The going rate will depend upon your location.
01:40 Recruiting people with no particular skills may require an incentive of around
01:44 $75 for one and a half hours. Recruiting skilled workers may cost you
01:49 quite a bit more. At a certain point, people stop caring
01:53 about the money, and do it more because they're interested in giving their opinion.
01:57 For instance, in the past, I've recruited executives based purely on their interest
02:02 in the product we were testing. Sure, I paid them, but they really didn't
02:06 care about the cash. It may be that you have something else
02:09 you can give people instead of cash. Something that costs you less, but that
02:14 people value more. For instance, a number of free downloads,
02:18 a subscription to your product, or, like Microsoft does for their user testing, a
02:23 choice of gratuity from a list of software the company makes.
02:28 If you work in a large organization and you're using internal staff members as
02:32 your participants, you don't have to pay them, but it's still good to give them
02:36 something to say thanks for their time. When I worked in a bank we gave internal
02:41 participants a coffee mug with the usability team's logo and phone number on it.
02:46 That way, the participant got a token gift and we got additional recruits when
02:50 other people in their office got a cool new mug and called to see how they can
02:54 get one too. You should let potential participants
02:58 know how long the session is during the initial recruiting call.
03:03 Mention these points again when you send confirmation details.
03:06 This is where you sell people on the concept of your study and also convince
03:10 them that it's worth their while to attend.
Planning your study schedule
00:01 Before you start calling people to recruit for your study, you need to know
00:04 what times you want them to show up. You're going to need to spread the
00:08 usability sessions out during the day with enough time between each one for you
00:12 to put the room and system back into its initial state ready for the next person.
00:18 You need to work that out beforehand, so that you can give each participant a firm
00:22 time slot to show up for their study session.
00:25 It's theoretically possible to run five participant sessions of one and a half
00:30 hours in one day. But you'll be completely drained by the
00:33 end of it. Instead of cramming all the sessions into
00:37 one day, split them over two days. Use the morning of the first day to set
00:42 up the study environment and do a run-through to check that everything's working.
00:46 Then, run two sessions in the afternoon. The next day, run three sessions with
00:51 time between them to tidy up, put things back to their default settings, and
00:55 discuss observations with the rest of the team.
00:58 Leave enough time that afternoon to run an additional session if you had a
01:02 no-show participant. You'll also need to schedule time for the
01:07 team to get together and discuss what they saw and what they plan to do about it.
01:12 This debrief time should be as close as possible to the sessions, because
01:16 otherwise, people will forget what they saw and they may be tempted to go off and
01:21 make changes without discussing them first.
01:25 Being focused and responsive in front of participants takes a lot of energy.
01:29 Make sure you give yourself sufficient time to recover between each one.
01:34 Even after many years of doing this work, I still feel worn out by the end of a day
01:39 of usability sessions.
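To make that two-day split concrete, here's one hypothetical timetable for five 90-minute sessions; the exact times are placeholders rather than a schedule from the course.

    # Sketch of the two-day plan: pilot first, two sessions on day one,
    # three on day two, with reset and debrief time built in.
    schedule = {
        "Day 1": ["09:00  set up the environment and run a pilot",
                  "13:00  session 1",
                  "15:00  session 2"],
        "Day 2": ["09:00  session 3",
                  "11:00  session 4",
                  "14:00  session 5",
                  "16:00  spare slot for a no-show make-up",
                  "16:45  team debrief while memories are fresh"],
    }
    for day, slots in schedule.items():
        print(day)
        for slot in slots:
            print("  " + slot)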
Calling and calling again
00:01 Recruiting participants involves spending a lot of time on the phone.
00:05 Although it's possible to do some of the recruiting process online, via email and
00:09 using survey tools, there's something about speaking directly to people that
00:13 helps you understand whether they're likely to be a good participant or not.
00:16 You'll need to call people to go through the screener,
00:21 send them confirmation details, and then call them back again the day before
00:24 they're due to show up, so that you can reconfirm that they'll attend.
00:28 The first call will be the recruiting call.
00:31 You'll explain who you are, why you're calling, remind the person how you got
00:36 their details, and then check they're still interested in participating before
00:40 going through your screener questions. If the individual meets your profile, you
00:46 can arrange a session time with them and then check their email details so that
00:50 you can send a confirmation email. Immediately after the call, send out the
00:55 confirmation email with all the necessary information in it.
00:59 Start by thanking the individual for agreeing to participate.
01:02 Recap what it is they'll be doing, in general terms, and then give them
01:06 instructions on how to find your location.
01:08 Also, if you asked people to bring something specific with them, remind them
01:14 of that in the email, too. I found it really helps if you tell
01:19 people to bring the email with them to the session.
01:22 You really don't care if they do or not, but it will mean they have the
01:26 instructions with them when they realize they don't know where they're going
01:29 or when they get to your building's reception desk and don't know who to ask for.
01:34 It's also really important to call every participant on the day before the study
01:38 and reconfirm that they'll be attending. This is a good time to reiterate how
01:43 important the study is to you and how thankful you are that they agreed to attend.
01:48 Your aim at this point is to make sure that people remember their commitment.
01:53 This call is really helpful to ensure that everybody shows up or at least you
01:57 have time to reschedule or find a replacement if they tell you they can't
02:00 attend the session. Once you've been through this recruiting
02:05 process even a single time you'll realize why recruiting companies charge what they
02:10 do to find participants for you. The time and effort involved can take you
02:14 away from your normal job. You might have to make the calls in the
02:18 evenings in order to reach your potential participants.
02:21 Ensuring that everybody turns up when they said they would can turn into a nightmare.
02:26 However, it's all worth it when you end up with well-qualified participants who
02:31 give you great product feedback.
3. Working Out What Questions to Ask
Asking the right questions
00:01 Asking your users direct questions doesn't always work very well.
00:04 People are normally okay answering questions that relate to things they've
00:08 done in the past or tasks they perform regularly; these are called behavioral questions.
00:13 On the other hand, people are not very good at answering questions that are forward
00:17 looking and speculative, like do you think you'd use this product or how would
00:22 you like to be able to do a certain thing?
00:25 More to the point, they'll still give you an answer, but those answers aren't very believable.
00:31 There are a couple of reasons for this. One is that peoples' visualization of the
00:35 thing you're talking about might be very different from your intentions.
00:38 For instance, say you were talking about remote working.
00:42 They may see a future with personal jet packs to fly between meetings, when
00:46 instead, you were thinking about teleconferencing.
00:49 Another reason is that people just don't know what the future holds.
00:52 And what they say they'll do is often at odds with what they actually end up doing.
00:57 For instance, most people would say they're very concerned about their online security.
01:02 But then, those same people end up reusing the same weak password on
01:06 multiple websites. Or, people might tell you they intend to
01:10 save money rather than spend it. But they still often give in to
01:14 short-term temptations. Why are we talking about this?
01:19 Well, many types of research questions actually ask people to predict what
01:23 they'll do in the future. Focus groups often show people screens
01:27 and say, do you think you'd use this feature?
01:30 Surveys ask people to say which of five potential features they'd find most useful.
01:36 Market research interviews often ask people to visualize themselves using a product.
01:42 Because what people say and what they end up doing are often two different things,
01:46 The answers they give to these speculative questions aren't as
01:50 trustworthy as we'd like. Also, people sometimes just tell white
01:55 lies in order to impress. Think for example about how truthful the
01:58 profiles on dating sites are. What we need to do instead, and what
02:04 Usability Tests are really good at is find questions that rely on people's real
02:09 actions and demonstrated behaviors. Then, rather than asking people to
02:13 predict what they'd do in the future, we can watch them doing it right now.
02:19 The best way is to give people tasks to perform with our product and then see how
02:23 well they can perform those tasks. That tells us whether we've chosen the
02:27 right features to implement. And whether we did it in a way that
02:30 people can really use.
Collecting valuable metrics
00:01 Before you run a usability study, you really need to know what you're
00:04 trying to find out. Different types of answers you need will
00:08 require different types of participant tasks within the study.
00:13 You might have general questions, like how well can people work with the
00:17 shopping pages on your site? Or you might have specific questions,
00:20 like how long it takes someone to find your contact information if they need to
00:23 call you? Knowing how you'll use the answers you
00:27 get from a usability study to improve the product ensures that you ask the right
00:30 questions in a measurable way. There's nothing worse than finishing a
00:35 Usability Test and then realizing you can't do much with the findings.
00:38 Doing the planning upfront means that you can move straight from the Usability Test
00:42 findings to making positive changes to your product.
00:46 When we get people to perform tasks with a product, we can capture three distinct
00:50 types of metrics. Efficiency, that's how long it took them
00:54 to do the task. Effectiveness, which is how many errors
00:57 they made. And satisfaction, how they felt about the
01:01 task, frustrated or happy with the outcome.
01:05 Pretty much any question you want to answer will fall into one of these three categories.
01:08 Also, you'll soon find that it's pretty easy to plug dollar values into each of
01:14 these answers. For instance, a certain number of errors
01:17 will lead to abandonment which has a defined cost.
01:21 Some errors will lead to people calling the help desk which has an average cost
01:25 per call. High satisfaction leads to people telling
01:28 others about their good experiences which has a knock on effect on sales.
01:32 Conversely, low satisfaction ends up with lower engagement.
01:36 And so, less revenue from repeat sales or ad impressions, for instance.
01:40 Being able to define problems in terms of their relative costs gives you a good
01:45 reason to fix them and also a way to prioritize which ones to work on first.
01:50 Frustrating user issues with a high dollar value that have quick and easy
01:54 fixes are the first things most teams work on, because it gives them the most
01:59 return on their effort. These measures of usability come in very
02:03 handy when you're doing cost-benefit analysis for a set of features or bug fixes
02:08 you want to implement. The data from your usability studies can
02:11 help you prioritize future work items based on user need and potential dollar value.
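To make the dollar-value idea concrete, here's a minimal sketch of a cost estimate built from study findings; every rate and price below is a made-up placeholder, not data from the course.

    # Hypothetical monthly cost of one usability issue, combining
    # abandonment losses and help desk calls per the categories above.
    monthly_users = 50_000
    abandonment_rate = 0.03       # share who hit the issue and give up
    value_per_abandonment = 40.0  # revenue lost per abandonment ($)
    call_rate = 0.02              # share who phone the help desk
    cost_per_call = 12.0          # average cost of one support call ($)

    monthly_cost = monthly_users * (
        abandonment_rate * value_per_abandonment
        + call_rate * cost_per_call
    )
    print(f"Estimated monthly cost of this issue: ${monthly_cost:,.0f}")
    # 50,000 x (0.03 x 40 + 0.02 x 12) = 50,000 x 1.44 = $72,000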
Sometimes the best question is no question
00:01 A big benefit of usability studies is that, as well as quantitative data, which
00:06 is percentages, proportions, and other hard numbers, these studies also provide
00:10 lots of qualitative data, that is, participants' words, actions, and attitudes.
00:17 Although qualitative data can sometimes be harder to analyze, the real advantage of
00:21 it is that it lets you find out about issues you didn't know you had.
00:26 If you think for a second about your knowledge of the product you work on, you
00:30 can split it into four quadrants. Things that you know you know, things
00:34 that you don't know that you know, things you know that you don't know, and things
00:36 that you don't know that you don't know. Things that you know you know are the
00:46 data you already have about your customers and how they like to work with
00:49 your product. Things that you don't know that you know
00:53 are some of the implicit ideas that you use when you build that product.
00:57 Most types of research that companies perform center on things that we know we
01:01 don't know. In other words, stuff we know we need to
01:05 find out in order to be successful. That tends to be quantitative information.
01:10 But often the real problems and the things that will make customers really
01:14 super happy if we resolve them lie in the quadrant of things we don't know that we
01:19 don't know. When you run regular research like
01:23 surveys, you get answers to the questions you asked, but that's about it.
01:27 You have to know what it is that you don't know in order to write good survey questions.
01:32 When you watch someone working with your product, all sorts of serendipitous
01:36 things can happen in front of you. You'll undoubtedly be surprised to learn
01:40 about the attitudes that users bring to the interaction, the mental models
01:43 they've built up about the task, or the terminology they use to describe the
01:47 things they see and do. All of these observations give you
01:51 additional rich information about things you didn't know you didn't know.
01:56 You can use that information to help you build a better and more user-centric product.
02:01 Of course, because usability tests also give you lots of quantitative data to
02:05 help you ask the questions you already knew you had, you end up getting a lot of
02:09 value from a relatively small investment.
Exploring some example questions
00:01 Because we want study participants to act like they would in the real world, we
00:05 typically phrase our questions as tasks. And then get the participants to complete
00:10 the tasks with the interface we're testing.
00:13 Let's spend some time turning a couple of questions that a team might have into
00:16 usability tasks. One big question that we often want to
00:20 ask, is can users find the right place to carry out an action?
00:24 This is easy to do in a study. We just give them a suitable task that
00:28 involves carrying out the action, either on the way to completing the task, or as
00:33 the end goal of the task. For example, if we want to know whether
00:36 people can find the filtering function of a search engine, we might give them a
00:40 task to search for a specific item from a very large range of similar items.
00:46 If we designed our interface well, there's a large chance they'll use the
00:49 Filter function to complete that task. Actually, if they don't use it, that's an
00:53 interesting piece of data in its own right.
00:57 Another type of question comes up when you add new functionality or options
01:00 within an existing process. You might ask, how many distractions are
01:05 there, or what issues have we introduced by changing the flow.
01:10 Give participants an exploratory task that requires them to think for
01:13 themselves, and then sit back and watch. An example might be adding an interest
01:19 rate calculator to a mortgage quote screen.
01:21 You'd think the calculator would add value.
01:24 You might be worried about whether it distracts visitors from their primary
01:27 task of getting a mortgage. Just giving study participants a task as
01:31 broad as getting a mortgage on a particular home, allows you to watch for
01:36 issues during the flow. You won't necessarily be testing how well
01:40 the interest rate calculator works, because that isn't your primary aim.
01:45 If some people use it, that's an added bonus.
01:47 But your primary research question, is about distraction from the flow, not
01:52 about use of the calculator. Your server logs might show that people
01:57 have errors or abandon at a certain place on your site.
02:00 Or, your help desk might be getting lots of calls around a certain screen.
02:04 You know what and where the problem is, but you don't know why it's a problem.
02:10 You can observe the behavior that leads to errors or abandonment, by asking
02:14 people to perform a directed task, that takes them through that point in the flow.
02:19 This will give you the "why" data that will allow you to make a design change.
02:25 An example might be help desk calls from people getting unrealistic answers back
02:30 from the mortgage interest calculator in the previous example.
02:34 Running participants through a specific task to use the calculator,
02:38 you might find that the terminology on one of the fields requesting data is so
02:42 vague, that people often type in the home value, rather than the monthly payment value.
02:47 The help desk could tell you what the problem was, but not why it was a problem.
02:54 Watching people working with the calculator gives you the "why" answer, and
02:58 also some ideas for how to fix it. Another type of question that development
03:03 teams ask all the time is, will they like our new functionality?
03:08 The best way to test this, is before you've even written the code, using paper
03:12 mock ups of the design. Instead of using the real code, you show
03:17 participants paper mockups of the screens, and they click through each
03:20 screen using a pen as their mouse. The tasks for this type of study are
03:25 just the same as for studies using actual code.
03:28 Users' tasks don't change much over time, and watching them complete the task with
03:33 your paper prototype, will show you where the issues are before you've spent any
03:36 money on development work. So, different question types need
03:42 different types of tasks, some exploratory and some directed.
03:46 There are also some questions that you can't answer by observation alone, and
03:50 those are what we'll cover next.
Writing post-session questions
00:00 After a usability session, you're likely to have a couple of questions for the participant.
00:06 Some of your questions may be to do with what the participant said or did during
00:10 the session. Others may be about how the participant
00:13 does this task in their daily life. You may also want to ask questions about
00:18 the participant's satisfaction with the task they performed.
00:23 These are all things that you obviously can't create tasks for.
00:26 Instead, the questions are more interview style.
00:29 However, it's important that your questions stay behavioral.
00:34 In other words, remember to only ask about things the participant has already
00:38 done, not about things they may do in the future.
00:41 There really isn't any point in asking questions that talk about future states
00:46 or maybes. You'll get answers, but they aren't
00:48 necessarily believable. If you want to know more about what a
00:53 participant was saying or doing while they went through a task, it's best to
00:56 take them back through that task, either in their heads or using the computer.
01:02 When you get to the part you were interested in, ask them what they
01:05 remember about that part of the task. The less prompting you give them, the
01:09 more likely it is that the words they use will be their own rather than something
01:13 you guided them into saying. You may instead be interested in how this
01:19 task compares to what they do in their normal lives.
01:22 It's fine to ask questions that get people talking about their regular
01:25 approach to a task. Often, they'll draw comparisons without
01:30 you even needing to ask them. If you do need to ask, be sure to use
01:34 neutral terms. Don't say, did you prefer the task today
01:39 to your normal approach? Instead ask, how did the task today
01:42 compare to your normal approach? This is much less leading and it allows
01:47 the participant to tell you what they really think, rather than what they think
01:51 you want to hear. If you want to gather satisfaction
01:55 metrics, it's best to ask for satisfaction at the end of the relevant task.
01:59 Normally, we ask people to give us a rating on a scale using words like can
02:04 you rate your experience on a five-point scale where 1 is very dissatisfied and 5
02:09 is very satisfied? It's important to follow up with a
02:12 question like, can you tell me why you gave that task the rating you did, so
02:16 that you get at the reason behind their satisfaction or dissatisfaction.
02:22 It's funny how many times the reasons behind people's ratings have nothing to
02:25 do with the features you just implemented.
02:27 It's important to know this so that you don't continue building things you think
02:31 people love, only to find out it's because of
02:33 something silly like the shape of the logo on your Home screen.
02:36 Post-session interviews are not a good time to start trying to teach things to
02:42 participants or to try and convince them to change their opinion or attitudes
02:46 towards your product. Even if they just told you they hate your
02:50 product, at this point, the best thing you can do is to ask them why rather than
02:55 try to sell them on its virtues.
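For the satisfaction ratings described above, it helps to record the number and the participant's stated reason together, since the reason is often more informative than the score. Here's a minimal sketch with invented example data:

    # Hypothetical post-task satisfaction records: each 1-5 rating
    # paired with the participant's follow-up reason.
    ratings = [
        (4, "Liked the search, but the filter labels were confusing"),
        (2, "Couldn't tell whether my booking actually went through"),
        (5, "Found the checkout flow immediately"),
    ]
    mean = sum(score for score, _ in ratings) / len(ratings)
    print(f"Mean satisfaction: {mean:.1f} / 5")
    for score, reason in ratings:
        print(f"  {score}: {reason}")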
Avoiding bias
00:01 Whenever you ask questions, you have to work very hard not to introduce bias.
00:07 Bias is where you tell people your expectation of the answer in the question
00:11 you ask. For instance, an obviously biased
00:14 question might be, you liked that experience, didn't you?
00:19 Slightly less obvious, but still biased, would be, tell me how much you liked that experience.
00:24 What if they didn't like it at all? A bias-free question might be, tell me
00:29 how you feel about the task you just completed.
00:32 Often, it's hard to ask a question in an unbiased way.
00:37 You have to look at the reason why you're asking the question.
00:41 Are you trying to get people to say things you wish they'd said, or to say
00:45 nice things about your product, or to fit in with your view of the world?
00:50 Participants want to please you, so they'll pick up on tiny cues in your
00:54 questions in order to give you the answers they think you'll like.
00:58 One trick to ensure you don't ask biased questions is not to ask questions at all.
01:04 Often, it's sufficient to get the participant talking about the topic
01:08 you're interested in, and then gently guide them towards the area you care most about.
01:14 In other words, ask a very general question, and then listen, probe and validate.
01:22 Once you've got the participant talking by asking the very general question, it's
01:26 time to be quiet, and let the participant think about that answer and then speak.
01:32 Don't add more information, or ask five questions in one long sentence, or just
01:37 continue to talk because you haven't thought through how to ask your question properly.
01:40 Instead, ask your question, then listen. Once you have an answer, it's okay to
01:48 probe by asking follow-on questions. Use simple probing questions, like, tell
01:55 me more about that, or, does that happen in other situations as well?
01:59 Then, validate what you heard by summarizing it, or repeating it back to
02:04 the participant. You can say something like, what I think
02:08 I hear you saying is, or so let me check, you're saying.
02:13 By listening, probing and validating, you can make sure you've captured the
02:18 information you need, and that you truly understood the participant's perspective.
4. Making a Task List
Turning questions into tasks
00:01 Once we understand what questions we have, we need to turn those into tasks
00:05 that participants can perform for us. Remember, because we're doing behavioral
00:11 research, we're using these tasks as a way of getting people to show us how they
00:14 behave in certain circumstances. In turn, that behavior answers our questions.
00:21 Because the tasks are written down for the participant to read out loud, it's
00:25 best to keep the instructions quite short.
00:27 Normally, we do this by making the task quite broad.
00:33 We don't want to give step-by-step directions,
00:36 because that would just lead participants through the process.
00:39 And we wouldn't learn much from them. Instead, it's normally best to set a
00:44 scene where you describe the output you want or the end result you're looking for.
00:48 That way, participants can choose their own method to get to the end result
00:53 without feeling like they're being guided through.
00:57 This type of task is called an exploratory task.
01:01 A typical exploratory task on a travel website might be you want to book a three
01:06 day trip to Seattle for two people anytime in June.
01:11 Your travel dates are flexible. Your budget is $1500 for travel and accommodation.
01:17 Use this site to find a suitable option. You can see how there are multiple ways
01:23 that somebody could complete this task. Each thing they do, each exploration,
01:28 tells us something about how people try to perform the task, and whether the site
01:33 meets their expectations. There are some times when you might want
01:37 participants to use a specific piece of the application, or to take a certain
01:41 route through your site. For instance, you might have a prototype
01:45 that only has a couple of working task flows.
01:47 Or you might know from weblog information that there's a problem with a specific
01:53 area, and now you want to see people using that area, in order to work out why
01:57 the problem occurs. In those situations, we use directed
02:02 tasks instead. Rather than giving broad end goals we
02:07 instead say where we want someone to start.
02:10 And what we want them to achieve. As an example, on the same travel
02:14 website, we might say use the flexible flight feature to find the cheapest
02:19 flight from LA to Seattle during the second week of June.
02:24 By telling our participants exactly where to start, we aren't getting much
02:28 realistic behavior, but we're ensuring that they use the feature we care about.
02:34 Another type of question is one that we typically only use once during a session.
02:37 When people first see the interface they'll be working with.
02:42 This is when we ask first impression questions.
02:45 This type of question is just what it sounds like.
02:48 It's a way of seeing what parts of the interface jump out at people, and what
02:52 they think they can do with the product. Typically, we ask people this question as
02:57 soon as they've opened an app or navigated to a web page.
03:01 If their answers are different to our expectations, it suggests that our
03:05 marketing messaging or our interface isn't getting across the key elements of
03:09 the product. During a study, we'd normally start with
03:14 a first impressions task, then move on to exploratory tasks before finishing with
03:19 some directed tasks if we needed to. The beauty of a first impressions task is
03:25 that there are no right or wrong answers, so even if the participant is nervous
03:28 when they start off, we lead them in with a task they can't possibly have problems with.
03:34 That sets the scene for the rest of the study.
03:37 Exploratory tasks often take some time to complete, just because of their open
03:41 nature. However, they are good at giving people an overview of the product they're
03:46 working with. So, assuming that the task is realistic,
03:49 this is very much in keeping with how your users might work with the product in
03:53 real life. The directed tasks, if you use them, are
03:58 normally better done after somebody has familiarized themselves with the whole product.
04:03 So running them after an exploratory task or two gives people a chance to get up to
04:07 speed before being taken down a particular route.
04:12 Using first impression, exploratory, and directed tasks will allow you to get
04:17 greater insight into your participants' behavior.
Task list logistics
00:01 We normally present tasks to participants on a piece of paper for them to read;
00:06 that way, the participant can refer back to the task if they get confused.
00:10 And we make sure that every participant gets the same instructions, which keeps
00:14 the study variables the same between participants.
00:18 We put each task on its own piece of paper for a couple of reasons.
00:21 One is that it prevents participants from reading ahead and maybe finding a way to
00:26 do something from clues in subsequent tasks.
00:30 Also, by handing tasks to someone one by one, it stops them from feeling overwhelmed.
00:37 If you have a particularly slow participant, they won't leave the session
00:41 feeling like they failed because they only got through a couple of tasks.
00:46 For the same reason, it's sensible not to number the tasks, and not to give them
00:50 labels like Task or Instructions. A good task should feel like it's setting
00:56 the scene for someone to carry out an action that they would happily do in
00:59 their normal lives. It's important to make sure the wording
01:03 of your tasks is different from the terms used in the product's interface.
01:07 For instance, if you were usability testing Apple's iTunes software, you
01:11 probably shouldn't have a task to create a playlist.
01:15 Instead, this becomes something like, "Group some songs so you can play them
01:19 together whenever you want." It sometimes leads to painful phrasing, but it's
01:24 important not to give participants too many clues in your task that might lead
01:29 them to the right place in the interface. When this happens, you haven't really
01:33 learned anything about your users' behavior, other than how well they can
01:37 match words on the task with words on the screen.
01:41 It's also good to create a version of the task list for your observers to use as a reference.
01:46 The observer copy can include the task goals and the reason for including the task.
01:52 And it can leave space for the observers to write notes.
01:57 The task list is central to a good usability study.
02:00 It's your primary way of communicating with each participant while the study is
02:04 in progress. Writing good tasks that set the scene but
02:08 don't give away critical information is hard, and it's something you'll get
02:12 better at over time. You'll quickly find out if your task list
02:16 leads people to the answer, or is overly confusing.
02:20 If that ends up being the case, it's okay to get participants back on track.
02:24 But be sure to change the task wording before your next participant.
02:29 Once you've come up with good wording for tasks, you can reuse the same task in
02:33 future studies. User's tasks don't change much over time.
02:38 They normally want to do the same, relatively limited set of things with
02:41 your software, even if your product features change.
02:46 Using the same tasks across multiple studies lets you measure improvements in
02:50 your software. For instance, if the task starts taking
02:53 less time, or people can complete it with fewer errors or questions.
02:57
Collapse this transcript
5. Room and Equipment Preparation
Creating the test environment
00:01 For most of your usability sessions it's likely that you'll be bringing
00:04 participants into your workspace to take part.
00:08 So what do you need to do to set the space up appropriately? First off, let's forget
00:14 the fancy setups you see in research facilities.
00:17 One-way mirrors and multiple ceiling- and wall-mounted cameras are nice tools.
00:21 But they really aren't necessary for the vast majority of usability tests.
00:27 Instead, what you need is a quiet place where you can interact with people from
00:31 outside the company without being disturbed.
00:34 Somewhere close to your reception area with easy access to restrooms tends to
00:38 work best. That way, you don't have to bring people
00:41 through your working environment or down miles of corridors to get to the place
00:45 where you'll run the study. Try not to have too many distractions,
00:49 like marketing posters, toy collections and so on.
00:54 A small conference room works best. Also take a look around and see whether
00:58 you'd be happy bringing someone important into the room.
01:01 Have you tidied up all the bits of dead computer that are lying in the corner?
01:05 Have you cleaned the whiteboard off? Have you emptied the trash?
01:10 Often because you use the space all the time you get used to the mess.
01:15 Someone coming in for the first time might be quite shocked.
01:18 This is the image they'll have of your organization, so make sure it's a good one.
01:24 If you don't have a suitable public space, let's say you're a startup working
01:28 out of your parents' basement, you'll need to find a location that
01:31 wouldn't be too creepy for someone to visit.
01:34 Hotels typically rent out conference rooms.
01:36 Or you might find a local hot desking service that rents out individual offices
01:41 or conference rooms by the day or week. A real advantage of this type of
01:46 location, is that they have reception staff who greet and keep track of participants.
01:51 They also have clean toilets and sometimes even free coffee.
01:55 These locations offer a neutral environment with no indication of your
01:59 company's brand, and that in itself can be very useful.
02:03 One way to avoid the headaches of setting up a suitable environment is to go to
02:07 your participant's location, that could be their office, their home, or even
02:12 their commute to work, if that's where they'll be using your product.
02:16 However, the additional randomness that this introduces and the lack of space for
02:20 observers to watch means you should really consider using an environment
02:24 that's under your control for at least your first couple of studies.
02:27
Collapse this transcript
Making do with what you've got
00:01 The minimum setup you need for running a usability test is access to participants,
00:05 the system you want to test, and a pencil and paper to record notes.
00:10 You don't need fancy screen capture software or video cameras.
00:13 Sometimes those extra things can get in the way of capturing the true behavior
00:17 you want to observe. Depending upon how many teammates you'll
00:22 have watching each session, you may need to give them a way to see what's on the
00:25 participant's screen via a second monitor. If you have more than two extra
00:31 observers, it's best to hide them in an adjoining conference room, so that you
00:35 don't freak out your participant. Think about it.
00:38 Having lots of people watching your every move can be unnerving.
00:42 However, this is easy to do with very little technology.
00:45 You can use your phone system as an audio link, so long as you mute the observer
00:49 side phone. And you can run a cable or use screen
00:52 sharing conferencing software to give the observers a view of the participants computer.
00:59 So it's likely that you won't have to spend any money in order to create a
01:02 suitable test location within your office.
01:05 The tools that you use for your regular work can be repurposed to run a usability study.
01:09
Collapse this transcript
Running studies in real locations
00:01 Sometimes you're going to want to watch users working with a system in its actual environment.
00:07 For instance, people use their mobile phones when they're on a train, in a
00:10 waiting room, or sitting in front of the TV.
00:14 They don't use them so much in an office, where they have other tools available to
00:17 them with bigger screens and faster data access.
00:21 Early in a product development cycle, you might need to see naturalistic behaviors
00:24 so you understand how people's environment affects how they work with
00:28 your product. Then, near release time, you'll
00:32 want to see how well a system performs in its real location.
00:36 That means, taking the usability test to the user, rather than bringing the user
00:41 into your artificial environment. It's a good idea to save these types of
00:46 site-based studies until you've done a couple in a more controlled environment.
00:51 Get some practice at moderating and observing a session before you add the
00:55 extra randomness of a live environment into the mix.
00:59 However, there are some real benefits to seeing how people behave in a
01:03 naturalistic setting. So I really encourage you to do this type
01:06 of study once you've got some experience. When you go into users' environments,
01:11 less is more. What is important is to note down
01:15 behaviors and quotes. And all you really need for that is a
01:19 notepad and pen. Any other equipment just becomes a burden.
01:25 Sometimes it's good to have a still camera with you, so that you can take
01:28 photos of the things that participants interact with:
01:30 the paper forms they use, or the other systems they work with.
01:34 But audio and video recording are very unlikely to help you.
01:38 And trying to take notes on a computer is going to be very unwieldy, especially if
01:42 you're balancing on the corner of someone's cubicle, or you're joining them
01:45 on their journey to work. Running studies in real locations is one
01:50 of the best ways to find the surprising things that can really lead to great
01:53 product insights. The things you didn't know that you
01:56 didn't know. Finding solutions to even just one or two
02:00 real world problems that users face can be enough to differentiate your product
02:04 in the marketplace, and really delight your customers.
02:07
Collapse this transcript
6. Ready to Test!
Creating a test plan
00:01 A test plan is a great reminder and to-do list.
00:05 It's a way to keep track of everything you need to do to make sure you have the
00:08 right participants, at the right time, in the right location with the right setup,
00:13 and the right set of tasks to perform. Having a test plan is also important when
00:18 you communicate with the rest of the team.
00:21 It's one document that tells everyone involved what's going on, when, and why.
00:26 The term test plan sounds quite formal, but really, all that your plan is doing
00:33 is listing out the decisions you've made and the things that need to happen for
00:36 the usability test to take place. This helps you keep track of what's going
00:41 on and what still needs to be done. Most of the information in the test plan
00:46 is stuff that comes from other documents you used during the planning and
00:49 execution of the study. For instance, your participant profile
00:53 and recruiting criteria, the study schedule, and your task list.
00:58 You're also pulling information that might not be written down anywhere, such
01:02 as the research questions that led to your task list.
01:06 Most often, the test plan is a placeholder document that contains a bare
01:10 minimum of original information and instead links off to other documents.
01:15 That way you aren't creating extra work for yourself, but you still have
01:18 something that makes sure you did all the work you need to do.
01:22 The main parts of the test plan are: a description of your participant profile;
01:26 a link to the screener used to recruit participants; who will be doing the
01:31 recruiting; how participants will be rewarded;
01:34 the research questions you want to answer; a link to the task list you'll
01:39 use to answer those questions; a link to your post-session question list; where
01:44 the test will take place; how the room will be set up;
01:47 what equipment will be used to show the product to participants (for instance, a
01:51 PC, smartphone, or paper prototype) and how it will be configured;
01:55 and finally, a link to the test schedule: when the sessions will take place and
02:00 when the team will meet to discuss the findings.
02:03 We've put an example test plan in the exercise files that go along with this course.
02:07 It lists out the common tasks that need to be performed before you can run a
02:10 study, and it's been filled in with information from an example study.
02:13 If you want, you can substitute your own study information and then use this as
02:17 the basis for your own test plan document (see the sketch after this transcript).
02:19
Collapse this transcript
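Here is that sketch: one way to capture the test plan's structure as a simple Python dictionary. Everything in it — the field names, file paths, and example values — is an illustrative assumption for this sketch, not a copy of the course's exercise files.

    # A hypothetical test plan as a plain Python dictionary.
    # Every field name, path, and value here is an illustrative assumption.
    test_plan = {
        "participant_profile": "First-time online shoppers, ages 25-55",
        "screener": "docs/screener.doc",
        "recruiter": "External recruiting agency",
        "gratuity": "$75 gift card per session",
        "research_questions": [
            "Can shoppers find the checkout?",
            "Do the error messages make sense?",
        ],
        "task_list": "docs/task_list.doc",
        "post_session_questions": "docs/post_session_questions.doc",
        "location": "Small conference room near reception",
        "room_setup": "Participant PC, screen share to observer room",
        "equipment": "Windows laptop running build 0.9.2",
        "schedule": "docs/test_schedule.xls",
    }

    # Quick completeness check before the study starts.
    missing = [field for field, value in test_plan.items() if not value]
    print("Still to do:", ", ".join(missing) if missing else "nothing -- ready to test!")

The point of the structure is exactly what the video describes: mostly links and decisions, with just enough original information to confirm nothing was forgotten.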
Usability test formalities
00:01 Because your participants will be putting a lot of trust in you, there's a certain
00:04 level of responsibility that you take on. Even if they're just employees from the
00:09 office down the hall, participants have given you their time to help you out.
00:14 The usability study environment means that you know a lot more about what's
00:18 going on than they do, and as a result, each participant is relying on you to
00:23 guide them through the process. Messing up will reflect badly on you and
00:28 your organization. It's actually quite easy to do the right thing.
00:34 Everyone involved in the study must understand that the participant is the
00:37 most important person in the room. Once everyone grasps that concept, everything
00:41 else falls into place. You need a way to make sure the
00:44 participants are comfortable. That means treating them like a guest but
00:48 also a bit more. Because they're in an unfamiliar
00:51 environment, you'll have to guide their actions.
00:55 Offer them refreshments and use of the restroom before you start.
00:58 And make it clear that they can grab a drink or use the restroom at any point
01:01 during the session. Also tell them up front exactly how long
01:06 you expect the session to last. That way they have a good expectation of
01:09 the timings involved. Time moves very differently when you feel
01:13 like you're being watched. Participants will be looking to you to
01:17 take the initiative. They may feel bad asking for something
01:20 they need, so it's up to you to offer it to them before they need it.
01:24 It's also important to make sure that participants don't feel like they're
01:27 being evaluated. They really aren't.
01:30 Each participant is helping you to evaluate the system.
01:34 To that end you need to make sure that your tasks are clear and that all the
01:39 observers are quiet and respectful. You need to make sure you don't use words
01:43 like test and evaluate in conjunction with a participant.
01:47 For instance, it's okay to say that you want them to work through some tasks to
01:51 help you work out how well the system meets their needs.
01:54 But it's not okay to say that they'll be doing a test to see how well they can use
01:58 the system. It's also essential that you don't ask
02:03 participants to do anything they wouldn't do in normal life.
02:06 For instance, entering their personal information on a site.
02:09 Paying for things with their own credit card.
02:11 Signing up for offers that might have a future cost you're not going to cover, or
02:16 even going to certain locations on the web that they would feel uncomfortable visiting.
02:21 Participants should feel that they're contributing something useful.
02:24 That means that they deserve good attention from the moderator during the
02:28 study so that they know their comments are being noted and their actions are
02:31 useful to you. However, you should never keep a
02:34 participant longer than you said the study would take, even if they aren't finished.
02:39 It's your fault that you tried to cram too much into the study, not the
02:42 participant's fault for taking time on their tasks.
02:45 At the end of the study, participants should be confident that they did a good
02:49 job of helping you evaluate the system. And they should leave with a reward or
02:53 gratuity for their time. If participants leave feeling this way,
02:57 they'll tell other people that they had a good experience working with you.
03:00 Then, you'll have more willing participants for future studies.
03:03
Collapse this transcript
Run a pilot study
00:01 A dry run of the process is a great way of finding out what you've forgotten.
00:06 That's especially true when you're new to usability testing but it's always good
00:10 practice to iron out potential pitfalls by running a pilot study.
00:15 You'll find out which tasks have strange or confusing wording, which areas of the
00:19 system still have development bugs that might throw users off, and whether you
00:23 have all the documents and information you need to run the study.
00:28 Do a dry run close to the time you'll be running the actual sessions so that you
00:32 work from the same code base. The easiest way to do this is just to
00:37 have a team member play the role of the participant.
00:40 Preferably, choose someone who isn't intimately aware of how the software
00:44 works, so you can catch things like terminology in your tasks that you would
00:48 use every day, but which end users may not be aware of.
00:52 Do a dry run of every stage of the process.
00:55 Meet the pilot participant in your reception area and finish back at the
00:59 same place. This gives you a chance to practice what
01:02 you're going to say to people at every stage.
01:06 Someone who works at your company is likely to be able to do the tasks faster
01:10 than your real participants. But you'll still have an indication of
01:13 whether you have enough time for the task you have planned.
01:17 Remember that although you have an hour and a half scheduled for the session,
01:20 quite a bit of that time will be taken up with paperwork,
01:24 getting the participant settled, and wrap-up at the end.
01:27 You'll probably end up with about one hour of actual task time.
01:32 Make sure you have a little bit of time after this dry-run session to go back and
01:36 make any necessary changes. Running the pilot study will give you a
01:40 lot more confidence when you greet your first real participant in the reception area.
01:44
Collapse this transcript
7. Moderating a Session
The moderator's role
00:01 It's incredibly hard to moderate usability sessions well.
00:05 It requires a combination of patience, empathy, professionalism and
00:09 relationship-building that only comes with practice.
00:14 On top of that, you must also be a good observer, recording participants' actions
00:18 and thinking ahead, so you know how you'll deal with their next moves.
00:23 My best advice to you as you start moderating is to be humble.
00:28 Create a relationship where you play the role of apprentice to the participant.
00:33 This has several advantages. It immediately creates a respectful
00:37 environment rather than an evaluative one.
00:40 It also forces you to be observant and patient (the participant is the master,
00:45 after all), and empathetic, because you're really
00:48 trying to put yourself in their shoes and learn from them.
00:53 When you're the developer of a system that you think is the best thing in the
00:55 world, it's hard to be humble. It's easy to see participants as stupid
01:00 if they don't understand something, or as lacking in taste if they tell you that
01:04 your interface isn't satisfying to use. However, once you get past your initial
01:09 pride, watching how participants behave with your interface will give you amazing
01:13 insights into how to make your product truly useful and turn it into something
01:17 that truly is the best thing in the world for the people who use it.
01:21 My other suggestion is to not be nervous. That's easy to say and hard to do.
01:28 When you start out as a moderator, you'll be playing a role that's new to you.
01:33 If you're nervous, your nervousness will spread to your participant, because they
01:37 won't trust that you know what you're doing.
01:40 There's only one way to reduce nervousness and that's through practice.
01:45 That's why you run pilot studies: so you've practiced what you'll be doing
01:49 in advance. I'd encourage you to use a written script
01:54 for much of your interaction with your participants.
01:56 This way you make sure that you aren't introducing bias just with the words you
02:01 use or with the variation between what you tell one participant and the next.
02:06 For instance, if you tell one person that you're looking for their feedback, but
02:09 tell another that you're running a test, they might have different impressions of
02:13 what they've let themselves in for and respond differently to you.
02:18 We've included an example Moderator script in the course exercise files.
02:23 The key points that we want to get across to participants are: that the study isn't
02:27 a test of them, but instead, they're helping us to test the product;
02:32 that their involvement is voluntary and they can stop at any time;
02:38 that they should not feel obliged to perform a task that they don't want to or
02:41 give personal information to us; and most importantly, that we're very
02:45 thankful to them for giving us their time.
02:49 By using a combination of dry run sessions, written scripted interactions,
02:54 and a humble, apprentice-like attitude, you'll go a long way towards putting your
02:58 participants at ease. Once they're relaxed, they'll work more
03:02 naturally and you'll see their true behaviors.
03:05 That way, you'll gain better insights into what it is that you need to fix to
03:09 make them truly happy with your product.
03:11
Collapse this transcript
Think-aloud protocol and minimal questions
00:01 Once you're in the room with the participant, almost every interaction you
00:05 have with them should follow a script. That may sound strange considering you
00:09 hope to see interesting new behaviors in your participants.
00:13 There's a difference between seeing those behaviors and reacting to them.
00:19 I've already mentioned that you should use a script for the introductory part of
00:22 the study, to make sure you cover all the important points.
00:25 But what does a moderator do after these opening formalities?
00:29 My advice to you is don't do anything. In most usability studies, I tend to use
00:36 the think-aloud protocol. That is, I ask participants to think out
00:42 loud as they go through the tasks they're performing for me.
00:46 This removes the need for most questions you might feel like asking.
00:51 Any question you ask interrupts the participant's concentration; it might
00:55 even make them change their behavior. For instance, if you were to ask, "Which
01:01 option are you thinking of choosing?" you draw the participant's attention to
01:04 the options. They may have been looking at and
01:07 thinking about a completely different part of the screen.
01:10 Even when participants are following the think-aloud protocol, there will be
01:14 quiet times. That's okay.
01:17 Your idea of an uncomfortable silence is based on regular conversations.
01:22 When a participant is thinking, you should wait quite a bit longer than you
01:26 normally would before interrupting them. The participant may even have forgotten
01:30 that you're in the room. I tend to sit to the side of the
01:33 participant and slightly behind them, so they don't feel inclined to try to engage
01:37 me in conversation. You introduce the think-aloud protocol
01:41 as part of the initial script. Then, when you hand the participant
01:45 their first task, say, "Please read this out loud and then go ahead and do it, and
01:51 remember to think out loud as you go through the session."
01:55 If the participant is quiet and has obviously forgotten to think out loud,
02:00 remind them by saying, "Please remember to think out loud."
02:04 Don't ask "What are you thinking?" or other variations.
02:08 They may not be thinking at all at that point, but before you say anything, be
02:13 sure they really have forgotten to think out loud.
02:17 You don't want to disturb them while they're paying attention to something on
02:20 the screen. Or focusing on a problem.
02:24 If the participant asks you a question during the study, your first reaction
02:28 may be to answer it in order to help them out.
02:31 However, before you do that try redirecting them with your own questions,
02:35 like "What do you think you should do?" or "What would you normally do here?"
02:42 Once you have that answer you may choose to provide some information, for instance
02:46 what the frequently asked questions page might say.
02:48 or what the help desk might tell them. But doing so changes things.
02:52 Because now you've given the participant help they normally wouldn't have had.
02:57 Obviously, it's completely fine to help the participant out if they're asking you
03:00 a procedural question, or whether they can take a break.
03:05 You'll undoubtedly have questions as a result of what you see.
03:09 However, by just writing down the question and waiting, you'll get answers
03:13 to many of those questions based on what the participant does next.
03:17 It's normally best to save the rest of your questions until the end of the task.
03:20 Or better still, until the end of the session.
03:23
Collapse this transcript
Session wrap-up: Participant questions, thanks, and gratuity
00:01 At the end of each session, be sure to ask the participant whether they have any questions.
00:06 The things that they ask at this point are a good indication of whether they
00:09 understood what was going on during the study and what they think of the thing they
00:13 were testing. For instance, it's always gratifying when
00:16 participants ask how soon they'll be able to have the features they worked with.
00:21 That shows they were engaged with the functionality you showed them.
00:25 On the other hand, if they ask questions that show they didn't really understand
00:29 the philosophy behind the application, then you probably still have a way to go.
00:35 Remember though, this isn't the time to be selling your product to someone.
00:39 They've just given you some really useful feedback.
00:42 And most likely they'd see the disconnect between what they worked on and any sales
00:47 pitch that you might give them at this point.
00:51 Always thank the participant again for their time and give them that gratuity.
00:56 The gratuity is something you have to give people just for showing up, even if
01:00 you subsequently don't use them, or if you find they didn't meet your criteria.
01:05 It's really important to treat every participant well because, otherwise, you
01:09 can give your whole organization a bad reputation.
01:13 Once you've escorted the participant back out of the building, it's time to do a
01:16 quick wrap-up with your team members who were observing.
01:21 This is where you find out if there are any showstopper issues that you need to
01:24 change for the next session, either in your task wording, your task order, or
01:29 changes to the product itself. Although big changes are probably not a
01:33 good idea, if something is very obviously not working, and the problem is stopping
01:38 you from getting good data, then, if the fix is fast and simple, it
01:42 makes sense to get the change in place before the next participant shows up.
01:47 This is also a good time to remind team members who are observing that they
01:51 should not run back to their desks and start making random changes.
01:55 It's important that they stay and watch all the participants before they decide
01:59 on design changes. So they can benefit from seeing how
02:03 several different people approach the same task.
02:05
Collapse this transcript
8. Observing a Session
The observer's role: Active observation
00:01 Observers have a very specific role in usability test sessions.
00:05 In fact, you might want to show them this video so that they understand what's
00:09 required of them. There's a big difference between passive
00:13 and active observation. Passive observation is what we do when
00:18 we're watching TV. We sit back and let the images on the
00:21 screen entertain us. Active observation is different.
00:25 When you're actively observing, it's more like being in a classroom where you're
00:29 learning things and taking notes. Usability sessions are all about active observation.
00:37 You're watching for certain behaviors and taking notes on the things that you see
00:40 the participant doing. At the end of a usability session your
00:44 writing hand should hurt. And you should have several pages of notes.
00:48 Your head should be full of all the interesting things you've seen.
00:53 The best way of doing active observation is just to write down what you see
00:57 without processing it too much. Focus on the participants' actions and
01:02 quotes, what they do and what they say. Stay away from trying to write down your
01:08 interpretation of the reasons they did things or any potential fixes.
01:13 If you're thinking about fixes, your mind isn't on the session anymore, and you're
01:17 no longer actively observing. There'll be plenty of time to talk about
01:21 solutions as a team after the sessions. Everyone on the team can and should
01:28 observe usability sessions: developers, designers, project managers, marketing
01:34 people and especially managers should all be present.
01:37 The information you get from watching real users is priceless.
01:43 It lets you know where the pain points are in a product and it also suggests
01:47 ways of resolving them. Usability sessions give you real data
01:51 rather than just relying on team members' opinions.
01:55 If there are only a couple of people observing, it might be okay for you to be
01:59 in the same room as the moderator and the participant.
02:02 However, you have to take a vow of silence.
02:06 The moderator is the only person who should be interacting with the participant.
02:12 This is because the moderator knows the overall test plan and knows what types of
02:16 interaction might cause problems or introduce bias into the study.
02:22 If observers have questions, they should write them down and then at the end of
02:26 the session, they should hand them to the moderator to ask the participants.
02:30 The reason for this is that even if observers think they're asking good questions,
02:35 in reality the questions are typically driven by a product idea or philosophy
02:39 they have, rather than by the behavior they've observed.
02:43 So they tend to ask leading questions. Leading questions are ones like "Don't you
02:50 think that such and such?" Polite participants might find it hard to
02:54 say, "No, I don't." Another common one is "Tell me what you
02:59 liked about such and such a thing." Well, what if they didn't like anything
03:03 about it? If there are more than two extra
03:07 observers, it's best for the observers to sit in a different room with a video and
03:11 audio feed. That allows the observers to come and go
03:16 without interrupting the participant. This also takes the stress off the
03:20 participant because there are fewer pairs of eyes watching their every move.
03:24
Collapse this transcript
Capturing real-time observations
00:01 There are three main ways of helping observers track the things they see.
00:04 A notepad and pen, an observer copy of the task sheet, or a shared online document.
00:12 If you let observers use their own notepads then they can transfer those
00:15 observations to Post-it notes so that you can stick the notes up on the wall.
00:20 By working as a team to rearrange those notes into themes you can identify common
00:25 issues that several observers noticed. Pulling observations from the notes into
00:31 Post-it notes is also a great exercise for getting all of your observers to work
00:35 together and discuss what they saw during the study sessions.
00:38 Alternatively, you might decide to print off copies of the task sheets for
00:43 observers to use. Often in this situation, you show the
00:48 task wording, then add what you hope to learn from this task, and leave some
00:52 space for observers to write their findings.
00:56 This helps organize the findings in people's minds because they know what the
00:59 aim of the task is, so they can keep their notes task-centric.
01:03 Be very careful not to mix up the observer and participant copies of the
01:08 task list. There's no point running a study if
01:11 you've given participants explanations of what you want them to do in the tasks.
01:17 Another note-taking option is to use sharing software, like a Google Docs
01:20 document, that all your observers can access at once.
01:24 The benefit of this is that all the observations are captured digitally,
01:29 which makes storing and manipulating them much easier.
01:31 The downside is that everyone is working in the same document, which might lead to groupthink.
01:38 In other words, if one person starts writing an observation, other people may
01:42 choose not to, even if their observation might have been subtly different.
01:46 That, in turn, might mean you have fewer creative ideas for fixing the issues you find.
01:53 Obviously, each one of these methods has its pros and cons.
01:57 Digital note-taking gives you an electronic record of everything that your
02:01 observers wrote down, but I've found that typically, handwriting notes works best,
02:06 because it's a faster way to go back and add more notes, or draw links between
02:10 different notes. Also, there's no excuse for observers to
02:14 have their laptops open, which is always a big temptation to start
02:17 checking email. I also strongly suggest that you use a
02:21 method that lets you group your findings into themes.
02:24
Collapse this transcript
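And here is the sketch mentioned above: a small Python illustration of grouping digitally captured notes into themes. The "theme: note" tagging convention and all of the sample notes are invented for this example; the course doesn't prescribe any particular format.

    from collections import defaultdict

    # Hypothetical observer notes, each tagged "theme: observation".
    notes = [
        "navigation: P1 couldn't find the checkout link",
        "terminology: P2 asked what 'sync' means",
        "navigation: P3 used search instead of the menu",
        "terminology: P4 hesitated over 'library' vs. 'playlist'",
    ]

    themes = defaultdict(list)
    for note in notes:
        theme, _, text = note.partition(":")
        themes[theme.strip()].append(text.strip())

    # Show the biggest themes first, like sorting sticky notes on a wall.
    for theme, items in sorted(themes.items(), key=lambda kv: -len(kv[1])):
        print(f"{theme} ({len(items)} observations)")
        for item in items:
            print("  -", item)

This is the digital equivalent of the sticky-notes-on-a-wall exercise: the grouping itself matters less than the conversation it prompts among observers.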
Video: Record or not record?
00:01 Don't get carried away with the idea of audio or video recording usability sessions.
00:07 In my experience, you'll seldom have the time to go back and look at the videos.
00:12 Sometimes it can be useful to make a highlights reel to show people who didn't
00:15 attend what the problems are, but typically the hassle of setting up video
00:19 recording just isn't worth it. Recording on video also makes the
00:24 observers lazy. You think that the video camera is doing
00:27 all of the watching, so you don't have to.
00:29 That leads to poor quality notes, so you'll miss important observations.
00:34 The other downside to video, is that it makes participants more nervous.
00:40 The video camera is a barrier between you and them.
00:42 And it just makes an already stressful situation even worse.
00:46 Do yourself a favor. Unless you know for a fact that you'll
00:50 need a video feed to a different room for observers, or you'll need to make a
00:54 highlights tape to convince people who couldn't attend,
00:57 don't bother with video cameras. If you do record video, make sure you
01:02 tell your participants that this will be happening,
01:05 and get their consent on a consent form, much like a model release form.
01:09
Collapse this transcript
9. Analyzing and Reporting Your Results
Analyzing with your team
00:01 As soon as each session is over, each observer should go back through their
00:04 notes, and make sure that everything they've written is understandable.
00:09 Working together, the observers and the moderator should quickly discuss what
00:12 they saw, and whether there's anything that needs to change before the next
00:15 participant arrives. Normally we try and keep the environment
00:19 the same for all participants in the study.
00:21 But if there's an obvious problem with the wording of a task, or something in
00:25 the product that's prevented participants from continuing, then it makes sense to
00:30 fix it if possible. After each participant, you can also
00:34 write up on a white board or a flip chart, any of the quantitative metrics
00:38 that you're tracking for the study. For instance, you might be keeping count
00:43 of the number of times participants have errors in certain tasks.
00:46 Or, the number of times participants refer to help texts.
00:51 You might be capturing rough timings for specific tasks, or asking participants to
00:55 give you a satisfaction rating. Writing those numbers down while everyone
00:59 is in the room means you have to reach consensus on what things actually were
01:03 errors, or how long a task actually took (a tally sketch follows this transcript). After you've run all of your participant
01:10 sessions, it's time to gather the qualitative information together.
01:14 This information is all the observation notes that your observers took.
01:18 Include both what went well and what needs improvement.
01:21 Now's the time to pull out all the quotes and behavior descriptions, and write down
01:28 each one on an individual sticky note. Then start grouping them into themes.
01:33 As you do this, each observer will be reminded of what they saw during the
01:37 sessions, and will be able to recreate in their minds, the issues and events that
01:41 led to the user quote or behavior. There's a reason why I like using sticky
01:45 notes and a blank piece of wall for this task.
01:48 It encourages conversation, and allows everyone who observed to take part in
01:52 creating the groups and themes. The conversations that happen during this
01:56 exercise, are the start of potential solutions that you could implement in
02:00 your product. Now that the study is over, it's time to
02:05 have those conversations, and get people thinking about how to fix the pain points
02:09 they saw. This discussion might get heated at
02:12 times, because two individuals may interpret what they saw differently.
02:15 However, as long as people are always speaking from participant data, rather
02:20 than from their own opinion, it's likely that some good potential solutions will emerge.
02:26 Sometimes you might end up with several possible solutions, but not be quite sure
02:30 which one is the best. It's okay if you don't have data from
02:34 this usability session to help you decide.
02:37 You can use a task in your next usability study, to get more information, or to try
02:41 out a potential solution. It's important to hold this data analysis
02:46 meeting as soon as possible after the sessions have finished.
02:50 It's best to do it as soon as the final participant has left.
02:53 Leaving it any longer means that people on the team start going off and creating
02:57 their own solutions. And they'll quickly forget many of the
03:00 observations they made.
03:01
Collapse this transcript
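Here is the tally sketch mentioned above: a minimal Python illustration of the whiteboard numbers described in the transcript, with one row per participant per task. All task names and figures are invented for illustration.

    from statistics import mean

    # Hypothetical whiteboard tally: (participant, task, errors, seconds, satisfaction 1-5).
    observations = [
        ("P1", "create playlist", 2, 140, 3),
        ("P2", "create playlist", 0,  75, 5),
        ("P3", "create playlist", 1, 110, 4),
        ("P1", "share a song",    3, 200, 2),
        ("P2", "share a song",    2, 180, 3),
        ("P3", "share a song",    4, 260, 2),
    ]

    for task in sorted({row[1] for row in observations}):
        rows = [r for r in observations if r[1] == task]
        print(f"{task}: "
              f"avg errors {mean(r[2] for r in rows):.1f}, "
              f"avg time {mean(r[3] for r in rows):.0f}s, "
              f"avg satisfaction {mean(r[4] for r in rows):.1f}/5")

Comparing these per-task numbers from one study to the next, as the retest video later suggests, is what lets you show measurable improvement in the product.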
Creating a minimal report
00:01 Although usability testing gives every observer a better appreciation of what
00:05 real users are like, people's memory of what happened in each participant session
00:09 and each usability study will get cloudy over time.
00:14 I personally don't like writing formal usability reports because I know they
00:18 seldom get read. However, I do see a lot of value in a
00:22 quick summary document that describes what happened and what the team plans on
00:26 doing about it. What I've found over the years is that
00:29 when you want to look back on old studies you ran, what you're normally trying to
00:34 remember is what the interface looked like when it was tested and what the
00:37 participants did with it. For that reason, the best report is
00:42 sometimes just a set of screenshots of the interface with problem areas circled
00:46 and called out. Participant quotes always resonate well
00:50 with people, so it's worth including those as well.
00:54 If the team also decided during the results analysis meeting what they would
00:58 do about the problems they saw, then the report is a good place to document that information.
01:03 It's also worth quickly describing the participant profiles, so that later on
01:07 you can easily work out whether they were novice or expert users or what other
01:11 attributes they had that might have made them behave the way they did.
01:16 Most of the other stuff that someone would need to know is included in your
01:19 test plan. Assuming you followed my advice and wrote
01:22 your test plan out, you can link to it or attach it to this brief report for people
01:27 who really want the details of what tasks were performed and what the study
01:31 rationale was. You do need to write the report, though,
01:35 and do it while the information is still fresh in your mind.
01:38 The most important thing is that the team members attending the sessions see things
01:42 for themselves. Reports are most useful as historical
01:46 documents, not as a good way of persuading people to make changes.
01:51 It's easy enough for people to ignore a report, but it's much harder for them to
01:55 ignore a poster on the wall that calls out the issues and lists the things that
01:59 they said they'd fix. I suggest you print out the important
02:02 pages from your report in a poster format, and put them on the wall in the
02:06 team area. You can even check off the action items
02:09 as each thing finds its way into the interface.
02:12
Collapse this transcript
Remember to retest!
00:00 User testing isn't a one-off thing. And it's not something you should
00:04 do only just before you release, because you won't have time to make any changes.
00:09 Instead, run multiple sessions with a small number of participants throughout
00:14 the development cycle to give the team feedback on how well things are going.
00:18 Re-test problem areas once the team has made changes to the code.
00:23 But also, run some regular exploratory tasks during each test session too, just
00:28 to make sure that the changes didn't introduce new issues to areas that were
00:32 working well before. User-centered teams start to rely on the
00:37 information from the usability test sessions to see how well they're meeting
00:41 their release goals for efficiency, effectiveness, and satisfaction.
00:46 You can encourage your team to be user-centered by putting up a poster with
00:49 these measures on the wall of the team area, and tracking changes in the key
00:53 measures from test to test. Once teams see the benefits of usability
00:58 testing, they typically start scheduling regular tests, either every month or
01:02 every couple of iterations on agile projects.
01:06 This frequency gives them enough time to make changes between test cycles, but not
01:10 go so long without user feedback that they get too carried away making untested
01:14 design changes.
01:15
Collapse this transcript
Capturing metrics on the impact of usability testing
00:01 By now, you're already sold on the idea of doing Usability Testing.
00:05 However, there will most likely be other people in your company who aren't so sure.
00:10 That's why if you want good job security, it's important to keep track of the
00:14 changes that were implemented as a result of user testing.
00:18 Even more importantly, you want to track how much money those changes potentially
00:22 save the company. It's hard for anyone in the organization
00:26 to argue against a process that saves money and makes the product better.
00:32 Tracking this isn't too hard. You just have to extrapolate from your
00:37 usability test findings to your whole user
00:40 For instance, if you found that a usability fix saved an average of ten
00:43 minutes, or eliminated a certain proportion of help desk calls, those figures have dollar
00:48 values associated with them. Extrapolating to the whole user
00:52 population and turning the figure into dollars per year is likely to yield some
00:56 impressive statistics (a worked example follows this transcript). It also helps if you have quotes from
01:00 well-respected people on the development team saying how usability helped them.
01:05 And from important customers who've seen the benefits of Usability Testing and
01:08 improvements in the product. The best thing to do is to seek out this
01:13 information by asking people what benefits they've seen from Usability Testing.
01:18 Archive that information away in a presentation deck that's always evolving:
01:22 more recent, more awesome comments replace older remarks.
01:27 Having the data in presentation format, means you can quickly share it at a
01:30 moment's notice, and you can easily copy and paste it into your annual performance
01:34 review or resume. You might also find that this information
01:39 is useful when the team needs to produce a business case for making changes to the product.
01:44 Having quick access to the benefits you gained from previous usability
01:47 enhancements will help demonstrate that the team cares deeply about users and
01:51 will be making the changes for the right reasons.
01:54
Collapse this transcript
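Here is the worked example mentioned above, in Python. Every figure in it is a hypothetical assumption; substitute your own user counts, hourly costs, and call volumes.

    # Worked example: extrapolating one usability fix to annual dollar savings.
    # All figures below are invented assumptions for illustration.
    users          = 5_000   # active users affected by the fix
    minutes_saved  = 10      # average minutes saved per user per week
    hourly_cost    = 30.0    # loaded cost of one user-hour, in dollars
    weeks_per_year = 48

    # 5,000 users * (10/60) hours * $30/hour * 48 weeks = $1,200,000 per year
    time_savings = users * (minutes_saved / 60) * hourly_cost * weeks_per_year

    calls_avoided = 200      # help desk calls eliminated per month
    cost_per_call = 15.0     # average cost to handle one call
    call_savings  = calls_avoided * cost_per_call * 12   # $36,000 per year

    print(f"Estimated annual savings: ${time_savings + call_savings:,.0f}")

Even with deliberately conservative assumptions, numbers like these make the return on a usability fix easy for anyone in the organization to grasp.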
Conclusion
Usability testing: Any team can do it
00:00 Usability testing is a skill. Remaining neutral during a session is
00:05 hard and only comes with practice. Making participants feel at ease requires
00:11 that you feel relaxed yourself, and that can be hard when you're just starting out.
00:15 However, the benefits of running usability sessions are so great that it's
00:20 worth going through the learning curve. Development teams almost always have at
00:25 least one team member with sufficient empathy to become a good session moderator.
00:31 The main benefits of usability testing are to make the product work better for
00:34 your targeted users and to improve their satisfaction so that existing customers
00:39 use the product more, and so that more new people start using it.
00:43 However, there are other benefits to usability testing.
00:48 It makes the whole team more aware of who their end users truly are.
00:51 It gives them examples to use in their conversations rather than just saying, "My
00:55 mum could do it." Watching users struggling or succeeding
00:59 with a product also brings the team closer together, and gives them a focus
01:03 that's more customer-centric than code-centric.
01:06 It's important to involve the whole team, even if they're initially reluctant.
01:10 Give each of your other team members roles
01:14 so they become active observers. Put the session times in their schedules
01:18 and get them excited about seeing real users working with their code.
01:22 It's also important to include the whole team in analyzing the study findings.
01:27 Work together to understand the issues and propose solutions.
01:30 Then bring everyone back together to usability test those solutions after
01:34 they've been coded.
01:35
Collapse this transcript
Next steps
00:01 So, what's next? First, get a study scheduled. No matter
00:06 what stage of the development process you're at, you'll learn things that will
00:09 improve your product. Even if that improvement has to wait
00:13 until the next release. Even if you have to improvise everything
00:17 about the sessions and even if you have to lure the rest of the team in with a
00:21 promise of cupcakes, cookies or pizza. The things they see will convince them
00:25 that there's room for more studies in the future.
00:28 And, with each study you'll get better at moderating, the team will get better at
00:33 observing, and the number of issues you find and resolve will grow.
00:37 Usability studies are something that any team can do.
00:42 And the more you do them, the more you'll learn about your users' needs.
00:46 That means you'll end up developing more focused software.
00:49 In turn, because it works the way they expect, your users will love your product more.
00:57 Regardless of the other benefits you get, running usability sessions will put the
01:00 whole team more in touch with your customers, which can only mean good
01:04 things for the quality and suitability of your product in the future.
01:08 So, thanks for watching this course, and good luck with your first usability test.
01:13
Collapse this transcript


Suggested courses to watch next:

Creating a Responsive Web Design (1h 31m)
Chris Converse


Responsive Design Fundamentals (2h 15m)
James Williamson

Foundations of UX: Content Strategy (46m 23s)
Patrick Nichols

