Usability Testing


What is Usability Testing?

Usability testing is the practice of testing how easy a design is to use with a group of representative users. It usually involves observing users as they attempt to complete tasks and can be done for different types of designs. It is often conducted repeatedly, from early development until a product’s release.

“It’s about catching customers in the act, and providing highly relevant and highly contextual information.”

— Paul Maritz, CEO at Pivotal

Video transcript:
  1. 00:00:00 --> 00:00:32

    If you just focus on the evaluation activity typically with usability testing, you're actually doing *nothing* to improve the usability of your process. You are still creating bad designs. And just filtering them out is going to be fantastically wasteful in terms of the amount of effort. So, you know, if you think about it as a production line, we have that manufacturing analogy and talk about screws. If you decide that your products aren't really good enough

  2. 00:00:32 --> 00:01:02

    for whatever reason – they're not consistent or they break easily or any number of potential problems – and all you do to *improve* the quality of your product is to up the quality checking at the end of the assembly line, then guess what? You just end up with a lot of waste because you're still producing a large number of faulty screws. And if you do nothing to improve the actual process in the manufacturing of the screws, then just tightening the evaluation process

  3. 00:01:02 --> 00:01:17

    – raising the hurdle, effectively – is really not the way to go. Usability evaluations are a *very* important tool. Usability testing, in particular, is a very important tool in our toolbox. But really it cannot be the only one.


Usability Testing Leads to the Right Products

Through usability testing, you can find design flaws you might otherwise overlook. When you watch how test users behave while they try to execute tasks, you’ll get vital insights into how well your design/product works. Then, you can leverage these insights to make improvements. Whenever you run a usability test, your chief objectives are to:

1) Determine whether testers can complete tasks successfully and independently.

2) Assess their performance and mental state as they try to complete tasks, to see how well your design works.

3) See how much users enjoy using it.

4) Identify problems and their severity.

5) Find solutions.

While usability tests can help you create the right products, they shouldn’t be the only tool in your UX research toolbox. As the video above explains, if you focus only on evaluation, you’ll merely filter out bad designs without improving the process that produces them.

There are different methods for usability testing. Which one you choose depends on your product and where you are in your design process.

Usability Testing is an Iterative Process

To make usability testing work best, you should:

1) Plan

a. Define what you want to test. Ask yourself questions about your design/product: which aspects of it do you want to test? Turn each answer into a hypothesis; a clear hypothesis pins down the exact aspect you want to test.

b. Decide how to conduct your test – e.g., remotely. Define the scope of what to test (e.g., navigation) and stick to it throughout the test. When you test aspects individually, you’ll eventually build a broader view of how well your design works overall.

2) Set user tasks

a. Prioritize the tasks that are most important to your objectives (e.g., completing checkout). Set no more than 5 tasks per participant, and allow a 60-minute timeframe per session.

b. Clearly define tasks with realistic goals.

c. Create scenarios where users can try to use the design naturally. That means you let them get to grips with it on their own rather than direct them with instructions.

3) Recruit testers – Know who your users are as a target group. Use screening questionnaires (e.g., via Google Forms) to find suitable candidates. You can advertise and offer incentives, and you can also find contacts through community groups, etc. Testing with only 5 users can still reveal around 85% of core issues (see the worked example after this section).

4) Facilitate/Moderate testing – Set up testing in a suitable environment. Observe and interview users, and note issues: see if users fail to see things, go in the wrong direction or misinterpret rules. When you record usability sessions, you can more easily count the number of times users become confused. Ask users to think aloud and tell you how they feel as they go through the test. From this, you can check whether your mental model as a designer is accurate: does what you think users can do with your design match what these test users show?

If you choose remote testing, you can moderate sessions via video-call tools such as Google Hangouts, or run unmoderated tests. Dedicated remote-testing software supports both moderated and unmoderated studies and offers the benefit of tools such as heatmaps.

Keep usability tests smooth by following these guidelines:

5) Assess user behavior – Use these metrics (a short aggregation sketch follows this list):

Quantitative – time users take on a task, success and failure rates, effort (how many clicks users make, instances of confusion, etc.)

Qualitative – users’ stress responses (facial reactions, body-language changes, squinting, etc.), subjective satisfaction (which they give through a post-test questionnaire) and perceived level of effort/difficulty

6) Create a test report – Review video footage and the data you’ve analyzed. Clearly define design issues and best practices. Involve the entire team.
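
In code terms, the quantitative metrics above reduce to simple aggregates once each task attempt is logged. Here is a minimal Python sketch, assuming an illustrative record format (the field names are invented for this example, not taken from any particular tool):

    # Aggregate the quantitative metrics listed above from per-participant
    # task records. All field names here are illustrative.
    from statistics import mean, median

    attempts = [
        {"participant": "P1", "task": "checkout", "success": True,  "seconds": 94,  "clicks": 12},
        {"participant": "P2", "task": "checkout", "success": False, "seconds": 210, "clicks": 31},
        {"participant": "P3", "task": "checkout", "success": True,  "seconds": 121, "clicks": 15},
    ]

    success_rate = mean(1 if a["success"] else 0 for a in attempts)
    print(f"Success rate: {success_rate:.0%}")                       # 67%
    print(f"Median time on task: {median(a['seconds'] for a in attempts)} s")
    print(f"Mean clicks (effort): {mean(a['clicks'] for a in attempts):.1f}")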

Overall, you should test not your design’s functionality, but users’ experience of it. Some users may be too polite to be entirely honest about problems. So, always examine all data carefully.
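
The “5 users reveal about 85% of core issues” figure mentioned in the recruiting step comes from the problem-discovery model popularized by Nielsen and Landauer: the share of problems found by n users is 1 - (1 - p)^n, where p is the average probability that a single user encounters a given problem (about 0.31 in their data). A quick check of the arithmetic in Python:

    # Problem-discovery curve behind the "5 users find ~85%" rule of thumb
    # (Nielsen & Landauer). p = 0.31 is their reported average probability
    # that a single test user encounters a given usability problem.
    def problems_found(n_users: int, p: float = 0.31) -> float:
        return 1 - (1 - p) ** n_users

    for n in (1, 3, 5, 10, 15):
        print(f"{n:>2} users -> {problems_found(n):.0%} of problems")
    # 5 users -> 84%, the basis of the ~85% figure quoted above

The curve flattens quickly, which is why several small, iterative rounds of about 5 users each tend to uncover more than one large test.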

Learn More about Usability Testing

Take our course on usability testing.

Video transcript:
  1. 00:00:00 --> 00:00:31

    When you create something, you design something,  you need to know if it works for your users. And you need to get that design in front of them. And the only way that you can make sure that it meets their expectations is to have them actually *play with it*. Usability testing is *the number one* technique for determining how usable your product is. We want to see how *successful* our users are, see if they're on the right track and if we're getting the reactions that we *want* from a design.

  2. 00:00:31 --> 00:01:04

    'Ah... I'm not really sure what the users will think!'  *Better test it.* 'Uh... too much fighting with our team internally over what to do!' *Better test it!* Usability testing helps you check in with your user expectations. And it's a way of you making  sure that you're not just stuck in your own ideas about a design and actually bringing in an  end-user from the outside to get some *more clarity and focus*. And the reason why this class is going to  help you is you'll benefit from the 15 years of my

  3. 00:01:04 --> 00:01:32

    personal experience and *hundreds and hundreds of  usability tests* that I've conducted over the years. We're going to start from the very beginning of  *how to create a test plan and recruit participants*, and then go into *moderation skills, tips and techniques*. You'll also learn *how to report on tests* so you can take that data and represent it in a way that makes sense, you can communicate to your team and learn how to *change your design based on the data that you get from usability tests*,

  4. 00:01:32 --> 00:01:36

    most importantly. I hope you can join me on this class. I look forward to working with you!

Here’s a quick-fire method to conduct usability testing.

See some real-world examples of usability testing.

Pick up some helpful usability testing tips.

How to conduct usability testing?

To conduct usability testing effectively:

  1. Start by defining clear, objective goals and recruiting representative users.

  2. Develop realistic tasks for participants to perform and set up a controlled, neutral environment for testing.

  3. Observe user interactions, noting difficulties and successes, and gather qualitative and quantitative data.

  4. After testing, analyze the results to identify areas for improvement.

For a comprehensive understanding and step-by-step guidance on conducting usability testing, refer to our specialized course on Conducting Usability Testing.
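
To make these steps concrete, here is a hypothetical sketch of a test plan captured as data: an objective, a screener for representative users, and realistic tasks with observable success criteria. Every name, count, and criterion in it is invented for illustration:

    # Hypothetical test plan as data: a clear objective, a screener for
    # representative users, and tasks with observable success criteria.
    test_plan = {
        "objective": "Can first-time visitors complete checkout unaided?",
        "participants": {"count": 5, "screener": "shopped online in the last month"},
        "tasks": [
            {
                "scenario": "You want to buy a gift card for a friend.",
                "success": "reaches order confirmation without moderator help",
                "time_limit_seconds": 600,
            },
        ],
    }

    for number, task in enumerate(test_plan["tasks"], start=1):
        print(f"Task {number}: {task['scenario']} (success = {task['success']})")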

When to do usability testing?

Conduct usability testing early and often, from the design phase to development and beyond. Early design testing uncovers issues when they are easier and less costly to fix. Regular assessments throughout the project lifecycle ensure continued alignment with user needs and preferences. Usability testing is crucial for new products and when redesigning existing ones to verify improvements and discover new problem areas. Dive deeper into optimal timing and methods for usability testing in our detailed article “Usability: A part of the User Experience.”


William Hudson, CEO of Syntagm, recommends techniques like tree testing and first-click testing in early design phases to scrutinize navigation frameworks. These methods isolate and evaluate specific components without visual distractions, so they focus strictly on users’ understanding of navigation. Because they are quantitative, they rapidly produce actionable numbers and statistics, and you can apply them at any project stage. Suitable for both new and existing solutions, they help identify problem areas and assess design elements effectively.

How to do usability testing for mobile applications?

To conduct usability testing for a mobile application:

  1. Start by identifying the target users and creating realistic tasks for them.

  2. Collect data on their interactions and experiences to uncover issues and areas for improvement.

  3. For instance, consider the concept of ‘tappability’ as explained by Frank Spillers, CEO of Experience Dynamics: focusing on creating task-oriented, clear, and easily tappable elements is crucial.

Employing correct affordances and signifiers, like animations, can clarify interactions, enhance the user experience, and prevent user frustration and errors. Dive deeper into mobile usability testing techniques and insights by watching our video with Frank Spillers.
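
As a rough illustration of auditing tappability, the sketch below flags tap targets smaller than a commonly cited minimum (Apple’s Human Interface Guidelines suggest about 44×44 pt; Material Design suggests 48×48 dp). The elements and the threshold are assumptions for the example, not rules from the video:

    # Flag tap targets below a commonly cited minimum size. Element
    # dimensions here are made up for the example.
    MIN_TAP_PT = 44  # assumed threshold, per Apple's HIG recommendation

    elements = [
        {"id": "buy-button", "width_pt": 88, "height_pt": 44},
        {"id": "close-icon", "width_pt": 24, "height_pt": 24},
    ]

    for el in elements:
        if min(el["width_pt"], el["height_pt"]) < MIN_TAP_PT:
            print(f"{el['id']}: below {MIN_TAP_PT} pt - likely hard to tap")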


How many participants do you need for most usability tests?

For most usability tests, the right number of participants depends on your project’s scope and goals: around 5 representative users per round is enough for qualitative testing (as noted above, 5 users can reveal roughly 85% of core issues), while quantitative studies typically need 20 or 30. Just as important as the number is the quality of the participants; our video featuring William Hudson, CEO of Syntagm, explains how participant quality significantly impacts the usability test’s results.

Video transcript:
  1. 00:00:00 --> 00:00:32

    I wanted to say a bit more about this important issue of recruiting participants. The quality of the results hinges entirely on the quality of the participants. If you're asking participants to do things and they're not paying attention or they're simply skipping through as quickly as they can – which does happen – then you're going to be very disappointed with the results

  2. 00:00:32 --> 00:01:01

    and possibly simply have to write off the whole thing as an expensive waste of time. So, recruiting participants is a very important topic, but it's surprisingly difficult. Or, certainly, it can be. You have the idea that these people might want to help you improve your interactive solution – whatever it is; a website, an app, what have you – and lots of people *are* very motivated to do that. And you simply pay them a simple reward and everyone goes away quite happy.

  3. 00:01:01 --> 00:01:32

    But it's certainly true with *online research* that there are people who would simply take part in order to get the reward and do very little for it. And it comes as quite a shock, I'm afraid, if you're a trusting person, that this kind of thing happens. I was involved in a fairly good-sized study in the U.S. – a university, who I won't name – and we had as participants in a series of studies students, their parents and the staff of the university.

  4. 00:01:32 --> 00:02:05

    And, believe it or not, the students were the best behaved of the lot in terms of actually being conscientious in answering the questions or performing the tasks as required or as requested. Staff were possibly even the worst. And I think their attitude was "Well, you're already paying me, so why won't you just give me this extra money without me having to do much for it?" I really don't understand the background to that particular issue.

  5. 00:02:05 --> 00:02:32

    And the parents, I'm afraid, were not a great deal better. So, we had to throw away a fair amount of data. Now, when I say "a fair amount", throwing away 10% of your data is probably pretty extreme. Certainly, 5% you might want to plan for. But the kinds of things that these participants get up to – particularly if you're talking about online panels, and you'll often come across panels if you go to the tool provider, if you're using, say for example, a card-sorting tool

  6. 00:02:32 --> 00:03:03

    or a first-click test tool and they offer you respondents for a price each, then be aware that those respondents have signed up for this purpose, for the purpose of doing studies and getting some kind of reward. And some of them are a little bit what you might call on the cynical side. They do as little as possible. We've even on card sort studies had people log in, do nothing for half an hour and then log out and claim that they had done the study.

  7. 00:03:03 --> 00:03:31

    So, it can be as vexing as that, I'm afraid. So, the kinds of things that people get up to: They do the minimum necessary; that was the scenario I was just describing. They can answer questions in a survey without reading them. So, they would do what's called *straightlining*. Straightlining is where they are effectively just answering every question the same in a straight line down the page or down the screen. And they also could attempt to perform tasks without understanding them.

  8. 00:03:31 --> 00:04:04

    So, if you're doing a first-click test and you ask them, "Go and find this particular piece of apparel, where would you click first?", they'd just click. They're not reading it; they didn't really read the question. They're not looking at the design mockup being offered; they're just clicking, so as to get credit for doing this. Like I say, I don't want to paint all respondents with this rather black brush, but it's *some* people do this. And we just have to work out how to keep those people from polluting our results. So, the reward is sometimes the issue, that if you are too generous in the reward

  9. 00:04:04 --> 00:04:30

    that you're offering, you will attract the wrong kind of participant. Certainly I've seen that happen within organizations doing studies on intranets, where somebody decided to give away a rather expensive piece of equipment at the time: a DVD reader, which was – when this happened – quite a valuable thing to have. And the quality of the results plummeted. Happily, it was something where we could actually look at the quality of the results and

  10. 00:04:30 --> 00:05:01

    simply filter out those people who really hadn't been paying much attention to what they were supposed to be doing. So, like I say, you can expect for online studies to discard between 5 and 10% of your participants' results. You also – if you're doing face-to-face research – and you're trying to do quantitative sorts of numbers, say, you'd be having 20 or 30 participants, you probably won't have a figure quite as bad as that, but I still have seen, even in face-to-face card sorts, for example,

  11. 00:05:01 --> 00:05:33

    people literally didn't *understand* what they were supposed to be doing, or didn't get what they were supposed to be doing, and consequently their results were not terribly useful. So, you're not going to get away with 100% valuable participation, I'm afraid. And so, I'm going to call these people who aren't doing it, and some of them are not doing it because they don't understand, but the vast majority are not doing it because they don't want to spend the time or the effort; I'm going to call them *failing participants*. And the thing is, we actually need to be able to *find* them in the data and take them out.

  12. 00:05:33 --> 00:06:01

    You have to be careful how you select participants, how you filter them and how you actually measure the quality of their output, as it were. And one of the big sources of useful information are the actual tools that you are using. In an online survey, you can see how long people have spent, you can see how many questions they have answered. And, similarly, with first-click testing, you can see how many of the tasks they completed; you can see how long they spent doing it.

  13. 00:06:01 --> 00:06:30

    And with some of these, we actually can also see how successful they were. In both of the early-design testing methods – card sorting and first-click testing – we are allowed to nominate "correct" answers – which is, I keep using the term in double-quotes here because there are no actually correct answers in surveys, for example; so, I'm using "correct" in a particular way: "Correct" is what we think they should be doing when they're doing a card sort, *approximately*, or, in particular, when they're doing a *first-click test*,

  14. 00:06:30 --> 00:07:03

    that we think they ought to be clicking around about here. Surveys as a group are a completely different kettle of fish, as it were. There are really no correct answers when you start. You've got your list of research questions – things that you want to *know* – but what you need to do is to incorporate questions and answers in such a way that you can check that people are indeed *paying attention* and *answering consistently*. So, you might for example change the wording of a question and reintroduce it later on

  15. 00:07:03 --> 00:07:33

    to see if you get the same answer. The idea is to be able to get a score for each participant. And the score is your own score, about basically how much you trust them or maybe the *inverse* of how much you trust them. So, as the score goes up, your trust goes down. So, if these people keep doing inconsistent or confusing things, like replying to questions with answers that aren't actually real answers – you've made them up – or not answering two questions which are effectively the same, the same way, etc.,

  16. 00:07:33 --> 00:08:02

    then you would get to a point where you'd say, "Well, I just don't trust this participant," and you would yank their data from your results. Happily, most of these tools do make it easy for you to yank individual results. So, we have to design the studies to *find* these failing participants. And, as I say, for some of these tools – online tools we'll be using – that is relatively straightforward, but tedious. But with surveys, in particular, you are going to have to put quite a bit of effort into that kind of research.

  17. 00:08:02 --> 00:08:32

    Steps we can take in particular: Provide consistency checks between tasks or questions. To catch "straightlined" results – where people are always answering in the same place on each and every question down the page – ask the same question again in slightly different wording or with the answers in a different order. Now, I wouldn't go around changing the order of answers on a regular basis. You might have one part of the questionnaire where "good" is on the right and "bad" is on the left;

  18. 00:08:32 --> 00:09:00

    and you might decide to change it in a completely different part of the questionnaire and make it really obvious that you've changed it to those who are paying attention. But whatever it is that you do, what you're *trying* to do is to find people who really aren't paying much attention to the directions on the survey or whatever the research tool is, and catch them out and pull them out of your results. And one of the issues you should be aware of if you're paying for participants from something

  19. 00:09:00 --> 00:09:30

    like your research tool *supplier* is that you can go back to them and say, "These people did not do a very good job of completing this survey, this study." And ask them to refund you for the cost of those. You tell them that you're having to pull their data out of your results. Also, it helps to tidy up their respondent pool. Perhaps it's not your particular concern, but if you do end up using them again, it would be nice to know that some of these people who are simply gaming the system have been removed from the respondent pool.

  20. 00:09:30 --> 00:09:45

    So, reporting them – getting them removed from the pool – is a sensible thing to be doing. And, finally, devising a scoring system to check the consistency and also checking for fake responses and people who are just not basically doing the research as you need them to do it.

He shares experiences from real studies and stresses careful selection and recruitment of participants to ensure constructive, reliable feedback. The process involves meticulous planning and execution to identify and discard data from non-contributing participants, so that the insights you gather to improve the interactive solution, be it an app or a website, are meaningful and trustworthy. Remember his emphasis on participants’ attentiveness and consistency while performing tasks; inattentive participants compromise the results. Watch the full video for a more comprehensive understanding of participant recruitment and usability testing.
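
A minimal sketch of the screening William Hudson describes: score each respondent for straightlining and for inconsistency on a repeated, reworded question, then drop anyone you no longer trust. The question keys, thresholds, and data are illustrative, not taken from the video:

    # Flag "failing participants": straightliners (same answer to every
    # rating question) and inconsistent answers to a reworded twin question.
    # Question keys, thresholds and data are illustrative.
    def quality_flags(answers: dict, twin_pair: tuple = ("q3", "q9")) -> list:
        flags = []
        ratings = list(answers.values())
        if len(set(ratings)) == 1:               # straightlining
            flags.append("straightlining")
        if abs(answers[twin_pair[0]] - answers[twin_pair[1]]) > 1:
            flags.append("inconsistent")         # reworded twins disagree
        return flags

    participants = {
        "P1": {"q1": 4, "q2": 5, "q3": 4, "q9": 4},
        "P2": {"q1": 3, "q2": 3, "q3": 3, "q9": 3},   # straightliner
        "P3": {"q1": 2, "q2": 5, "q3": 1, "q9": 5},   # inconsistent twins
    }
    kept = {p: a for p, a in participants.items() if not quality_flags(a)}
    print("Trusted participants:", sorted(kept))      # ['P1']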

How to analyze usability test results?

To analyze usability test results effectively, first collate the data meticulously. Next, identify patterns and recurrent issues that indicate areas needing improvement. Utilize quantitative data for measurable insights and qualitative data for understanding user behavior and experience. Prioritize findings based on their impact on user experience and the feasibility of implementation. For a deeper understanding of analysis methods and to ensure thorough interpretation, refer to our comprehensive guides on Analyzing Qualitative Data and Usability Testing. These resources provide detailed insights, aiding in systematically evaluating and optimizing user interaction and interface design.
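
One common way to prioritize findings, assumed here rather than prescribed by this article, is to score each issue by frequency times impact so that widespread, damaging problems rise to the top:

    # Rank findings by frequency x impact (severity). Issues and scores
    # are invented for the example; both scales run 1 (low) to 5 (high).
    findings = [
        {"issue": "Coupon field hidden below the fold", "frequency": 4, "impact": 2},
        {"issue": "Unclear error message at payment",   "frequency": 3, "impact": 5},
        {"issue": "Search ignores plural keywords",     "frequency": 2, "impact": 3},
    ]

    for f in sorted(findings, key=lambda f: f["frequency"] * f["impact"], reverse=True):
        print(f"severity {f['frequency'] * f['impact']:>2}: {f['issue']}")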

Is usability testing qualitative or quantitative?

Usability testing is predominantly qualitative, focusing on understanding users' thoughts and experiences, as highlighted in our video featuring William Hudson, CEO of Syntagm. 


It offers insights into users’ minds: why something didn’t work and what was going through users’ heads during the test. However, specific methods, like tree testing and first-click testing, add quantitative aspects, providing hard numbers and statistics on user performance. These methods can be executed at any design stage, providing actionable feedback and revealing how well navigation and visual design work.
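
When you do lean on these quantitative methods, small samples deserve honest error bars. Below is a minimal sketch, with invented data, that turns first-click results into a success rate with a 95% Wilson score confidence interval:

    # First-click success rate with a 95% Wilson score interval - a sturdier
    # choice than the normal approximation at small sample sizes.
    from math import sqrt

    def wilson_interval(successes: int, n: int, z: float = 1.96):
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - half, centre + half

    lo, hi = wilson_interval(successes=14, n=20)  # 14 of 20 clicked correctly
    print(f"First-click success: {14/20:.0%} (95% CI {lo:.0%}-{hi:.0%})")

The Wilson interval behaves better than the naive normal approximation when n is small, which is the usual situation in usability work.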

How to do remote usability testing?

To conduct remote usability testing effectively, establish clear objectives, select the right tools, and recruit participants fitting your user profile. Craft tasks that mirror real-life usage and prepare concise instructions. During the test, observe users’ interactions and note their challenges and behaviors. For an in-depth understanding and guide on performing unmoderated remote usability testing, refer to our comprehensive article, Unmoderated Remote Usability Testing (URUT): Every Step You Take, We Won’t Be Watching You.

User testing vs usability testing - what's the difference?

Some people use the two terms interchangeably, but User Testing and Usability Testing, while closely related, serve distinct purposes. User Testing focuses on understanding users’ perceptions, values, and experiences, primarily exploring the ‘why’ behind users’ actions. It is crucial for gaining insights into user needs, preferences, and behaviors, as elucidated by Ann Blandford, an HCI professor, in our video.


She elaborates on the significance of semi-structured interviews in capturing users' attitudes and explanations regarding their actions. Usability Testing primarily assesses users' ability to achieve their goals efficiently and complete specific tasks with satisfaction, often emphasizing the ease of interface use. Balancing both methods is pivotal for comprehensively understanding user interaction and product refinement.

What are the benefits of usability testing?

Usability testing is crucial as it determines how usable your product is, ensuring it meets user expectations. It allows creators to validate designs and make informed improvements by observing real users interacting with the product. Benefits include:

  • Clarity and focus on user needs.

  • Avoiding internal bias.

  • Providing valuable insights to achieve successful, user-friendly designs. 


By enrolling in our Conducting Usability Testing course, you’ll gain insights from the extensive experience of Frank Spillers, CEO of Experience Dynamics, and learn to develop test plans, recruit participants, and convey findings effectively.

Where to learn about usability testing?

Explore our dedicated Usability Expert Learning Path at Interaction Design Foundation to learn Usability Testing. We feature a specialized course, Conducting Usability Testing, led by Frank Spillers, CEO of Experience Dynamics. This course imparts proven methods and practical insights from Frank's extensive experience, guiding you through creating test plans, recruiting participants, moderation, and impactful reporting to refine designs based on the results. Engage with our quality learning materials and expert video lessons to become proficient in usability testing and elevate user experiences!




Learn more about Usability Testing

Take a deep dive into Usability Testing with our course User Research – Methods and Best Practices.

How do you plan to design a product or service that your users will love, if you don't know what they want in the first place? As a user experience designer, you shouldn't leave it to chance to design something outstanding; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as the largest part of user experience design.

In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required, and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, subsequently, outperforms your competitors’.

This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!

By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!

We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!


