Usability Evaluation

Your constantly updated definition of Usability Evaluation and collection of videos and articles.

What is Usability Evaluation?

Usability evaluation assesses how easy and enjoyable it is for users to achieve their goals while using a product. Designers use qualitative and quantitative research methods to identify User Experience (UX) issues.

"If a User is having a problem, it's our problem."
– Steve Jobs

Usability evaluation is important in design and essential for user satisfaction, but it's not enough by itself. In this quick video, you'll find out why usability evaluation is only one part of the whole picture. When you use it along with the other tools designers have, you can come up with better and more useful solutions.

Video transcript
  1. 00:00:00 --> 00:00:32

If you just focus on the evaluation activity, typically with usability testing, you're actually doing *nothing* to improve the usability of your product. You are still creating bad designs. And just filtering them out is going to be fantastically wasteful in terms of the amount of effort. So, you know, if you think about it as a production line, we have that manufacturing analogy and talk about screws. If you decide that your products aren't really good enough

  2. 00:00:32 --> 00:01:02

    for whatever reason – they're not consistent or they break easily or any number of potential problems – and all you do to *improve* the quality of your product is to up the quality checking at the end of the assembly line, then guess what? You just end up with a lot of waste because you're still producing a large number of faulty screws. And if you do nothing to improve the actual process in the manufacturing of the screws, then just tightening the evaluation process

  3. 00:01:02 --> 00:01:17

    – raising the hurdle, effectively – is really not the way to go. Usability evaluations are a *very* important tool. Usability testing, in particular, is a very important tool in our toolbox. But really it cannot be the only one.

To fully understand usability evaluation, it’s necessary to grasp the concept of usability first. The International Organization for Standardization (ISO) defines usability as:

“The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.”

– ISO 9241-11, Ergonomics of human-system interaction—Part 11, Guidance on usability

Usability evaluation measures (or, in some cases, predicts) this effectiveness, efficiency and satisfaction. You can use usability evaluation methods at any stage of design or development.

  • Effectiveness checks how accurately users achieve goals in specific situations.

  • Efficiency looks at the resources used to accomplish goals.

  • Satisfaction examines how comfortable and pleasant the system is to users.

Let's take an online fitness tracker app as an example. You've downloaded the app to log your daily workouts. Effectiveness measures how accurately the app records the type and duration of your exercises. If it consistently gets this right, that's effective tracking.

Efficiency looks at the time and effort it takes you to reach your goal. How many taps does it take to log a workout, and how long does the whole process take?

Satisfaction refers to how users feel about the entire experience. For example, how easy is it to set your fitness goals and track your progress?

In this example, usability evaluation helps designers create a fitness app that is easy to use and meets users' needs.
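
To make the three measures concrete, here is a minimal sketch (in Python, with made-up data from the fitness-app example) of how they are commonly quantified in a usability test: completion rate for effectiveness, time on task for efficiency, and a mean post-task rating for satisfaction. The data structure and numbers are illustrative assumptions, not a standard format.

```python
# Hypothetical results from five participants asked to log a workout.
# Each record: whether the participant succeeded, how long it took (seconds),
# and a post-task satisfaction rating on a 1-7 scale (e.g., a Single Ease Question).
results = [
    {"success": True,  "seconds": 42,  "rating": 6},
    {"success": True,  "seconds": 55,  "rating": 5},
    {"success": False, "seconds": 120, "rating": 2},
    {"success": True,  "seconds": 38,  "rating": 7},
    {"success": True,  "seconds": 61,  "rating": 6},
]

# Effectiveness: share of participants who achieved the goal.
effectiveness = sum(r["success"] for r in results) / len(results)

# Efficiency: average time spent by those who succeeded.
successful_times = [r["seconds"] for r in results if r["success"]]
mean_time = sum(successful_times) / len(successful_times)

# Satisfaction: average post-task rating across all participants.
satisfaction = sum(r["rating"] for r in results) / len(results)

print(f"Effectiveness: {effectiveness:.0%}")           # 80%
print(f"Efficiency:    {mean_time:.0f}s per success")  # 49s
print(f"Satisfaction:  {satisfaction:.1f}/7")          # 5.2
```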

When to Use Usability Evaluation

Designers who learn how to use usability evaluation throughout the design process will find themselves at an advantage.

Evaluate early on in your design process: An architect will always check that a building's foundation is solid before construction begins. Similarly, usability evaluation can be a powerful foundational step in the early design and prototyping stages.

For example, a design team might use usability tests on an early prototype of a smartphone to catch potential issues like unclear navigation. You can save time and resources when issues like this are found and fixed early.

William Hudson, UX strategist and educator, explains why tree testing and first-click testing can be useful early in your design process.

Video transcript
  1. 00:00:00 --> 00:00:34

    We're going to be looking at a couple  of testing tools or testing techniques, one called tree testing and the other called first-click testing, which are *admirably suitable* for early design. So, you're working on – you're thinking of how your website should be navigated or you're working on some very rough wireframes or  general ideas about how that navigation should be

  2. 00:00:34 --> 00:01:05

    presented in the context of your project, and you'd like to get some feedback. And these two techniques are really absolutely ideal for that scenario. But, having said that, both of these techniques: tree testing and first-click testing can be used at  *any stage* in project development, so all the way from early design through to final buffing and  polishing, say, of the navigation framework. If you want to just make a tiny change and see  how that affects the site *before* you launch,

  3. 00:01:05 --> 00:01:31

    then something like tree testing may be absolutely ideal. And one of the real strengths of these tools is that they're pretty  isolated in the sense that you're only testing one or two very specific components of your overall solution. So, tree testing in fact is absolutely laser-focused on your navigation; that is all you're presenting, and you're not presenting it with any kind of visual component.

  4. 00:01:31 --> 00:02:02

    It's presented, it's simulated by the testing tool. So, you just upload a navigation tree that you want to test and you give your participants some goals and off they go; and you're testing now  how they understand those words – nothing to distract them; there are no visual distractions at all. But they do lose a little bit of the context that way, as we'll see a bit later on, too. So, early-design testing allows design components to be evaluated *prior* to extensive development efforts.

  5. 00:02:02 --> 00:02:32

    So, you could be doing this in the first week of a new project. There would be no problem at all in that. It might take a little while to *organize*, but certainly you could be seeing results only  a couple of weeks into your project. There are *many* forms of early-design testing. The particular attraction of these is that they are what we call *quantitative* – they produce numbers where we get to  actually see some statistics on how people perform in these particular tasks.

  6. 00:02:32 --> 00:03:01

    Paper prototyping I list as an alternative here. That is not a quantitative process *typically*. We're not actually looking mostly at success when we're talking about paper prototyping. We're looking at a *qualitative* focus, so we're trying to understand what's going through people's minds and we ask them about  why things didn't work and what's going on inside their head. That's very similar to usability testing, and usability testing is predominantly a qualitative process.

  7. 00:03:01 --> 00:03:34

    So, these are almost entirely quantitative; they're done *online*, and that means that we're not really going to be able to get  inside our participants' heads, but that is both a benefit and a drawback. The benefit is that we're going to get some hard numbers out of it and it's very quick to do. We will be typically testing with dozens, scores of participants – certainly, 100 participants wouldn't be difficult to imagine. Whereas in usability testing, a day's work is represented by about seven

  8. 00:03:34 --> 00:04:00

    participants, maybe up to 90-minute sessions, so they are quite different animals from that point of view. And I won't be talking any more about usability testing in this. It was just by way of comparison. And usability testing, of course, as I mentioned is a qualitative technique. The *reasons* that you might consider early-design testing – well, it's relatively quick and inexpensive; done online, primary costs with that are actually *recruitment costs*

  9. 00:04:00 --> 00:04:30

    – getting people to come and take our studies; so, there isn't the renting of a lab for any kind of qualitative research; there isn't the hiring of researchers; You set up the project; you leave it to run; you recruit participants to it, and you check on your results a few days later. It can be as straightforward as that. Only minimal navigation or visual design is needed; you don't have to have thought things through absolutely to the end of the process.

  10. 00:04:30 --> 00:05:01

You can get some really good feedback about what's working and what isn't from the very earliest stages, and again that's really one of the main strengths of the whole approach in both of these cases. Very effective for early design, and – as I mentioned already – can be used at any stage in a project. And I've hinted at this, but I will now put it in very precise terms that it's a *goal-oriented focus*; in all these cases we present our navigation or our visual design along with some tasks

  11. 00:05:01 --> 00:05:32

    – a list of tasks that we would like users to try to pursue. And we're looking at how they *perform*; so, that is the quantitative aspect of it. It is purely *what they do*. There is no element of asking them how they feel about it or why they've made those particular decisions – although those are optional extras in some cases. Certainly, the *why* aspect of it can be; but *predominantly* we're talking about the numbers that come out of the end. And that's why we need at least several score participants

  12. 00:05:32 --> 00:06:00

    – something around the 30–50 mark as a starting point. For existing solutions, you could use early-design testing for identifying problem areas. So, if you've got some problems on a page or with a site in general but you just don't really have a handle on what those problems are, then trying to give people some specific  tasks and then giving them your navigation structure may produce some *really* very straightforward feedback on that issue.

  13. 00:06:00 --> 00:06:31

    You also can use it for collecting data for improving designs, along similar lines; you would try out things that you know people seem to be struggling with or that are really important to your organization's goals with a particular solution and see how people fare. *New solutions* – well, evaluating and improving design elements; it's really more or less exactly the same process, but you might be at the beginning of a new solution, be trying to understand what was good, what was bad about the *old solution*.

  14. 00:06:31 --> 00:07:01

    Certainly, knowing what worked is a good situation to be in when you're setting off to revamp a design. *Alternatives* – well, I already hinted at paper prototyping, and I've already said that that is primarily qualitative; we're talking about very small numbers of people for a fair bit of researcher effort. But it is different and it is qualitative, so we won't be going into detail about that here. Web app or analytics – the problem there is that we know what people do but we don't really know why they've done it.

  15. 00:07:01 --> 00:07:31

So, we might find that people were bouncing between two pages in a website or a mobile app, and not really understand *why*. So, we've got numbers that say "There's something going on here" but no way of finding out. So, if you have suspicions that it's navigation-related – or to do with the visual design, which we test with first-click testing – then we can try that out pretty quickly and relatively cheaply, compared with some other approaches – particularly the qualitative ones, which do tend to be more expensive.

  16. 00:07:31 --> 00:07:57

    And, finally, card sorting is another technique you might consider. It can be used both qualitatively and quantitatively. And we won't be talking about that in these modules, but  you will find information on card sorting on the Interaction Design Foundation website. There's a very extensive encyclopedia article that I wrote about card sorting several years ago, which you should find pretty useful in that respect.
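
Because tree testing and first-click testing are quantitative, each task's result is essentially a success rate, and with the 30–50+ participants Hudson suggests you can put a confidence interval around it. Below is a minimal sketch using the standard Wilson score interval; the task and the numbers are made up for illustration.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a task success rate."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical first-click task: 34 of 50 participants clicked in the
# "correct" region for "Find the returns policy".
low, high = wilson_interval(successes=34, n=50)
print(f"Success rate: {34 / 50:.0%}, 95% CI: {low:.0%} to {high:.0%}")
# Roughly 54% to 79% -- wide enough to show why sample size matters.
```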

Evaluate throughout your design process: Continuous user feedback keeps a product on the right track throughout the design process. Designers who get user feedback throughout product development can rest assured that their product meets user needs and expectations.

Usability evaluation happens at every step of a project, including during the requirements, analysis and design, implementation, testing and deployment stages.

© Interaction Design Foundation, CC BY-SA 4.0

Evaluate before you launch a new product or relaunch an existing product: Always conduct usability evaluation when you're getting ready to make big changes or launch a new product. You can think of it like taste-testing a new recipe before you decide to use it for a special occasion.

For example, if you’ve been hired to redesign an online store, usability evaluation will help you ensure the site is user-friendly.

You’d use remote usability testing to make sure the process of buying products is smooth and error-free, which makes it easier for more people to complete their purchases.

Next, we’ll look at the aesthetic usability effect and learn about the importance of visual elements in your design.

Video transcript
  1. 00:00:00 --> 00:00:31

There is something called the aesthetic usability effect, which means that if something looks good, it's perceived by people as being more usable, even if it's not. So those visuals are important and we really, really need to keep that in mind. A lot of the users have now become very tech savvy. A lot of the people, you know, using apps and using websites have been using them for 15 or 20 years now, and they are very skilled.

  2. 00:00:31 --> 00:01:02

So they understand the typical patterns. They know that checkout icons should be in the top right corner. They know how a good login and registration works. So all things like that are kind of predetermined and there's pretty little innovation here. So you can do research to test the viability of the product. There isn't really much way to innovate in the user interface. The visuals can be the differentiator. And of course we can use a design system like Material Design, but that would be

  3. 00:01:02 --> 00:01:21

a horrible world to live in because those apps would all look boring and soulless, and people don't want that. That's why people buy Apple products, among other things, because they look good and they are different than everything else, at least sometimes. So people buy with their eyes and we really need to remember that.

Why Usability Evaluation is Important in User Experience (UX) Design

Usability evaluation acts as a sense checker and keeps User Experience (UX) designers user-focused. Usability, a subset of UX design, makes sure that products are simple to use, work well, and meet users’ expectations.

User Experience (UX) encompasses the overall emotional and psychological response a user has when using a product.

Let’s look at why usability evaluation is important in UX design:

  1. User-centered design: Usability evaluation keeps the design focused on the people who will use the product. It ensures the user’s needs drive design decisions. This leads to a more effective and user-friendly interface.

  2. Problem identification: Usability evaluation methods help find potential problems within the user experience. Feedback from real users gives the design team useful insights that help them identify opportunities.

  3. Iterative improvement: Regular usability testing helps designers make continuous improvements. They test, improve, and test designs again, which leads to small but valuable improvements and a better user experience.

  4. Reduced costs: Design teams can use early-stage usability checks to save time and resources. It's a cost-friendly way to prevent expensive changes after the product is released.

5. Competitive advantage: Products that go through usability testing often do better than their competition. It can result in users sticking around more and help the brand build a solid reputation.

Alan Dix, professor and bestselling author, walks through three non-negotiable usability guidelines.

Video transcript
  1. 00:00:00 --> 00:00:32

    One of the early standards that  mentioned usability was ISO 9241. And it talked about three crucial  issues for user interfaces. One of them was *effectiveness* – does it do the right thing? Does it get things done that are important? The second was *efficiency* – does it do that with the minimum effort? The minimum mental effort?

  2. 00:00:32 --> 00:01:03

    The minimum physical effort?  Or is it taking extraneous effort that's unnecessary? And very often, people only quote those two because there was a third one as well, which is *satisfaction*: Does it make you feel good? Do you feel happy having used this system or used this piece of software? And so, that last one is often missed entirely. And that's all about the *emotion*, the way you feel.

  3. 00:01:03 --> 00:01:32

    And it was often ignored, often missed in the  past. What's now happened is that's become perhaps in some ways more important than the other two. Emotion is important because it's good to feel emotion. But also, *emotion affects the bottom line in business*. If your employees are happy, they tend  to be more productive. So, if you're designing a production line or  an office or wherever the environment,  

  4. 00:01:32 --> 00:02:09

    if you can have software and systems that  make people feel good, they'll tend to work better. And certainly you want your customers to feel happy because they're the people who  are usually going to buy your goods. So, if you've not made your  customers happy, they don't buy anything. So, emotion is important to us as humans, but  it's also important from a business point of view.

Usability Testing vs Usability Evaluation - What’s the difference?

Usability testing involves observing the behavior of real users and is the most widely used usability evaluation method.

Take, for example, a group of people trying out a new mobile app. UX researchers observe them to see if any issues come up. These could include a menu that’s difficult to find or a confusing sign-in process.

Usability evaluation includes usability testing, asking users for feedback (also known as inquiry) and examining the product's design (also known as inspection).

The Three Main Types of Usability Evaluation

The three main types of usability evaluation are usability testing, usability inquiry, and usability inspection. Together, these methods provide rich qualitative user insights. Researchers use these insights to gain a better understanding of user interactions, preferences, and challenges.

Often conducted before quantitative research, the qualitative research methods used in usability evaluation provide insight into user attitudes and behaviors. Qualitative research methods are typically done with 10 participants or fewer. Researchers use interviews and focus groups in combination with usability testing to obtain these qualitative user insights.

© Interaction Design Foundation, CC BY-SA 4.0

1. Usability Testing

Like watching people trying to cook a new recipe, usability testing involves observing real users. In the same way a chef in the kitchen can see if an ingredient is hard to find, UX designers can spot issues by watching how users behave. This qualitative hands-on approach provides actionable insights that ultimately help make the product easier to use.

2. Usability Inquiry

Usability inquiry involves talking to users to find out what they expect and what they need. It's important to get information from users to make designs people like and help them achieve their goals. Two common ways to do this are focus groups and interviews.

In focus groups, participants come together to discuss their experiences. For example, you can organize a focus group to learn how a group of gamers feels about a new mobile game’s interface.

Interviews involve one-on-one conversations. An example of this would be when a designer talks directly to a smartphone user to find out what they like and what their challenges are.

Ann Blandford, Professor of Human-Computer Interaction at University College London, explains the pros and cons of user interviews.

Video transcript
  1. 00:00:00 --> 00:00:35

    So, semi-structured interviews – well, any  interview, semi-structured or not, gets at people's perceptions, their values, their experiences as they see it, their explanations about why they do the things that they do, why they hold the attitudes that they do. And so, they're really good at getting at  the *why* of what people do,

  2. 00:00:35 --> 00:01:02

    but not the *what* of what people do. That's much better addressed with *observations* or *combined methods* such as contextual inquiry  where you both observe people working and also interview them, perhaps in an interleaved way about why they're doing the things that they're doing or getting them to explain more about how things work and what they're trying to achieve.

  3. 00:01:02 --> 00:01:32

    So, what are they *not* good for? Well, they're not good for the kinds of questions where people have difficulty recalling or where people might have  some strong motivation for saying something that perhaps isn't accurate. I think of those two concerns, the first is probably the bigger in HCI

  4. 00:01:32 --> 00:02:00

    – that... where things are unremarkable, people are often *not aware* of what they do; they have a lot of *tacit knowledge*. If you ask somebody how long something took, what you'll get is their *subjective impression* of that, which probably bears very little relation to the actual time something took, for example. I certainly remember doing a set of interviews some years ago

  5. 00:02:00 --> 00:02:32

    where we were asking people about how they performed a task. And they told us that it was  like a three- or four-step task. And then, when we got them to show us how they did it, it actually had about 20, 25 steps to it. And the rest of the steps they just completely took for granted; you know – they were: 'Of course we do that! Of course we—' – you know – 'Of course that's the way it works! Of course we have to turn it on!' And they just took that so much for granted that *it would never have come out in an interview*.

  6. 00:02:32 --> 00:03:11

    I mean, I literally can't imagine the interview that would really have got that full task sequence. And there are lots of things that people do or things that they assume that the interviewer knows about, that they just won't say and won't  express at all. So, interviews are not good for those things; you really need to *observe* people to get that kind of data. So, it's good to be aware of what interviews are good for and also what they're less well-suited for. That's another good example of a kind of  question that people are really bad at answering,

  7. 00:03:11 --> 00:03:31

    not because they're intentionally deceiving usually, but because we're *not* very good at *anticipating what we might do in the future*, or indeed our *attitudes to future products*, unless you can give somebody a very faithful kind of mock-up

  8. 00:03:31 --> 00:03:56

    and help them to really  imagine the scenario in which they might use it. And then you might get slightly more reliable  information. But that's not information I would ever really rely on, which is why *anticipating future product design is such a challenge* and interviewing isn't the best way  of getting that information.

Both focus groups and interviews can help UX designers understand what users want and need.

3. Usability Inspection

Usability inspection involves expert assessments to find usability problems. Heuristic evaluations and cognitive and pluralistic walkthroughs are methods used to test how easy a product is to use.

Heuristic evaluations use predefined principles. For example, think about testing a mobile app. You'd have a list of guidelines or rules for making a good app, like "clear navigation" and "simple registration." If you had trouble using the menu, you'd use these rules to recommend changes to the app.

The ten Nielsen-Molich usability heuristics are: visibility of system status; match between the system and the real world; user control and freedom; consistency and standards; error prevention; recognition rather than recall; flexibility and efficiency of use; aesthetic and minimalist design; help users recognize, diagnose and recover from errors; and help and documentation. These usability heuristics help UX designers measure how user-friendly a digital product is.

© Interaction Design Foundation, CC BY-SA 4.0
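
Inspectors usually record each finding against the heuristic it violates, plus a severity rating; Nielsen's widely used severity scale runs from 0 (not a problem) to 4 (usability catastrophe). Here is a minimal sketch of how findings from a heuristic evaluation of a hypothetical mobile app could be captured and prioritized; the findings themselves are invented for illustration.

```python
from dataclasses import dataclass

# Nielsen's 0-4 severity ratings for usability problems.
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "catastrophe"}

@dataclass
class Finding:
    heuristic: str   # which of the ten heuristics is violated
    location: str    # where in the product the issue appears
    severity: int    # 0-4 on Nielsen's severity scale
    note: str

findings = [
    Finding("Visibility of system status", "Checkout", 3,
            "No feedback that a selected size is out of stock."),
    Finding("Error prevention", "Registration", 2,
            "Password rules shown only after a failed attempt."),
    Finding("Recognition rather than recall", "Menu", 4,
            "Key actions hidden behind unlabeled icons."),
]

# Fix the most severe problems first.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{SEVERITY[f.severity]}] {f.heuristic} @ {f.location}: {f.note}")
```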

In cognitive walkthroughs, experts put themselves in the users' shoes. They go through the interface one step at a time to find any problems with how user-friendly it is.

An example of a cognitive walkthrough is when a team of UX specialists, engineers, and experts evaluate a new mobile app:

  • They plan which specific user activities they want to check. This includes tasks like signing in, finding a product, and completing a purchase.

  • They perform each step while putting themselves in the mindset of a first-time user of the app.

  • The goal of this exercise is to identify any problems with how easy the mobile app is to use.

Pluralistic walkthroughs are similar to cognitive walkthroughs. A group of experts, users, and other stakeholders work together to share their perspectives.

For example, a user might say, "I can't figure out how to buy that Nike shoe in size 38." The product manager might interpret that as, "This design doesn't yet match our project goals. It doesn’t give the user the feedback that this particular size is out of stock." This gives a broader view of what needs fixing.

These techniques provide a structured approach to evaluating usability. They also provide useful ideas for improving the design.

Quick Comparison Table: The Three Usability Evaluation Methods

Usability Testing

  • Pros: real user interactions; identifies actual user issues; provides direct user feedback.

  • Cons: requires user recruitment and coordination; can be resource-intensive; limited to the skills and insights of the test users.

  • Example: UX researchers observe users completing a list of tasks on their new food-ordering app. They make notes of potential usability issues.

  • Helpful tools: UserTesting, Maze.

Usability Inquiry

  • Pros: in-depth user opinions; uncovers user preferences and expectations; facilitates open-ended discussions.

  • Cons: highly dependent on user availability; biased by users’ personal perspectives; may not uncover all usability issues.

  • Example: A UX designer wants to gather user feedback and opinions about their new e-commerce website. They conduct user interviews, focus groups, and surveys.

  • Helpful tools: SurveyMonkey, Zoom for remote interviews, Microsoft Forms.

Usability Inspection

  • Pros: expert-driven assessments; identifies potential issues; cost-effective and quicker.

  • Cons: may not catch all real user issues; limited to the expertise of the inspectors; less user-focused than testing and inquiry methods.

  • Example: A group of UX designers and developers evaluate their new mobile game UI using predefined criteria like heuristics.

  • Helpful tools: Nielsen's 10 Usability Heuristics, Lyssna (previously UsabilityHub), UXCheck.

How to Recruit Users for Usability Evaluation

You’ll gain useful, valuable insights into the usability of your product if you take a well-thought-out approach to recruitment for usability evaluation.

Recruitment Planning Guidelines

Understand Your Users: Before you start recruiting, figure out who your ideal users are. Take the time you need to get clear about details like their age, what they like, and why they might use your product. This will help you find the right people for the usability evaluation.

Look for people who are genuinely interested in what your product is about. Think about this as if you’re starting a sci-fi book club. Would you invite sci-fi enthusiasts or romance readers? Sci-fi readers who already enjoy the genre are a better fit. You can expect them to engage and provide relevant feedback during book club meetings. In the same way, feedback from users who are likely to use your product in their real lives will provide the valuable insights you’re after.

Refine Your Recruitment Messages: First impressions matter. Your recruitment message is the first point of contact with potential participants. When writing your message, tell users why you're doing this usability evaluation and what's in it for them. Make it easy for users to express interest, learn more and sign up to participate.

Offer Incentives: Incentives can be a powerful tool in user recruitment. Experiment with rewards you offer participants, such as gift cards and discounts. Incentives show your appreciation for their time.

Prioritize Diversity: You’ll gain a better understanding of what real users may think of your product if your recruitment efforts are inclusive. Inclusive recruitment results in a healthy mix of backgrounds and experiences and contributes to more in-depth usability insights.

Instructor William Hudson talks about user research recruitment and how to identify participants who aren’t a good fit.

Video transcript
  1. 00:00:00 --> 00:00:32

    I wanted to say a bit more about this important issue of recruiting participants. The quality of the results hinges entirely on the quality of the participants. If you're asking participants to do things and they're not paying attention or they're simply skipping through as quickly as they can – which does happen – then you're going to be very disappointed with the results

  2. 00:00:32 --> 00:01:01

    and possibly simply have to write off the whole thing as an expensive waste of time. So, recruiting participants is a very important topic, but it's surprisingly difficult. Or, certainly, it can be. You have the idea that these people might want to help you improve your interactive solution – whatever it is; a website, an app, what have you – and lots of people *are* very motivated to do that. And you simply pay them a simple reward and everyone goes away quite happy.

  3. 00:01:01 --> 00:01:32

    But it's certainly true with *online research* that there are people who would simply take part in order to get the reward and do very little for it. And it comes as quite a shock, I'm afraid, if you're a trusting person, that this kind of thing happens. I was involved in a fairly good-sized study in the U.S. – a university, who I won't name – and we had as participants in a series of studies students, their parents and the staff of the university.

  4. 00:01:32 --> 00:02:05

    And, believe it or not, the students were the best behaved of the lot in terms of actually being conscientious in answering the questions or performing the tasks as required or as requested. Staff were possibly even the worst. And I think their attitude was "Well, you're already paying me, so why won't you just give me this extra money without me having to do much for it?" I really don't understand the background to that particular issue.

  5. 00:02:05 --> 00:02:32

    And the parents, I'm afraid, were not a great deal better. So, we had to throw away a fair amount of data. Now, when I say "a fair amount", throwing away 10% of your data is probably pretty extreme. Certainly, 5% you might want to plan for. But the kinds of things that these participants get up to – particularly if you're talking about online panels, and you'll often come across panels if you go to the tool provider, if you're using, say for example, a card-sorting tool

  6. 00:02:32 --> 00:03:03

    or a first-click test tool and they offer you respondents for a price each, then be aware that those respondents have signed up for this purpose, for the purpose of doing studies and getting some kind of reward. And some of them are a little bit what you might call on the cynical side. They do as little as possible. We've even on card sort studies had people log in, do nothing for half an hour and then log out and claim that they had done the study.

  7. 00:03:03 --> 00:03:31

So, it can be as vexing as that, I'm afraid. So, the kinds of things that people get up to: They do the minimum necessary; that was the scenario I was just describing. They can answer questions in a survey without reading them. So, they would do what's called *straightlining*. Straightlining is where they are effectively just answering every question the same in a straight line down the page or down the screen. And they also could attempt to perform tasks without understanding them.

  8. 00:03:31 --> 00:04:04

    So, if you're doing a first-click test and you ask them, "Go and find this particular piece of apparel, where would you click first?", they'd just click. They're not reading it; they didn't really read the question. They're not looking at the design mockup being offered; they're just clicking, so as to get credit for doing this. Like I say, I don't want to paint all respondents with this rather black brush, but it's *some* people do this. And we just have to work out how to keep those people from polluting our results. So, the reward is sometimes the issue, that if you are too generous in the reward

  9. 00:04:04 --> 00:04:30

that you're offering, you will attract the wrong kind of participant. Certainly I've seen that happen within organizations doing studies on intranets, where somebody decided to give away a rather expensive piece of equipment at the time: a DVD reader, which was – when this happened – quite a valuable thing to have. And the quality of the results plummeted. Happily, it was something where we could actually look at the quality of the results and

  10. 00:04:30 --> 00:05:01

simply filter out those people who really hadn't been paying much attention to what they were supposed to be doing. So, like I say, you can expect for online studies to discard between 5 and 10% of your participants' results. You also – if you're doing face-to-face research – and you're trying to do quantitative sorts of numbers, say, you'd be having 20 or 30 participants, you probably won't have a figure quite as bad as that, but I still have seen, even in face-to-face card sorts, for example,

  11. 00:05:01 --> 00:05:33

    people literally didn't *understand* what they were supposed to be doing, or didn't get what they were supposed to be doing, and consequently their results were not terribly useful. So, you're not going to get away with 100% valuable participation, I'm afraid. And so, I'm going to call these people who aren't doing it, and some of them are not doing it because they don't understand, but the vast majority are not doing it because they don't want to spend the time or the effort; I'm going to call them *failing participants*. And the thing is, we actually need to be able to *find* them in the data and take them out.

  12. 00:05:33 --> 00:06:01

    You have to be careful how you select participants, how you filter them and how you actually measure the quality of their output, as it were. And one of the big sources of useful information are the actual tools that you are using. In an online survey, you can see how long people have spent, you can see how many questions they have answered. And, similarly, with first-click testing, you can see how many of the tasks they completed; you can see how long they spent doing it.

  13. 00:06:01 --> 00:06:30

    And with some of these, we actually can also see how successful they were. In both of the early-design testing methods – card sorting and first-click testing – we are allowed to nominate "correct" answers – which is, I keep using the term in double-quotes here because there are no actually correct answers in surveys, for example; so, I'm using "correct" in a particular way: "Correct" is what we think they should be doing when they're doing a card sort, *approximately*, or, in particular, when they're doing a *first-click test*,

  14. 00:06:30 --> 00:07:03

    that we think they ought to be clicking around about here. Surveys as a group are a completely different kettle of fish, as it were. There are really no correct answers when you start. You've got your list of research questions – things that you want to *know* – but what you need to do is to incorporate questions and answers in such a way that you can check that people are indeed *paying attention* and *answering consistently*. So, you might for example change the wording of a question and reintroduce it later on

  15. 00:07:03 --> 00:07:33

    to see if you get the same answer. The idea is to be able to get a score for each participant. And the score is your own score, about basically how much you trust them or maybe the *inverse* of how much you trust them. So, as the score goes up, your trust goes down. So, if these people keep doing inconsistent or confusing things, like replying to questions with answers that aren't actually real answers – you've made them up – or not answering two questions which are effectively the same the same way, etc.,

  16. 00:07:33 --> 00:08:02

then you would get to a point where you'd say, "Well, I just don't trust this participant," and you would yank their data from your results. Happily, most of these tools do make it easy for you to yank individual results. So, we have to design the studies to *find* these failing participants. And, as I say, for some of these tools – online tools we'll be using – that is relatively straightforward, but tedious. But with surveys, in particular, you are going to have to put quite a bit of effort into that kind of research.

  17. 00:08:02 --> 00:08:32

Steps we can take in particular: Provide consistency checks between tasks or questions. To catch "straightlined" results – where people are always answering in the same place on each and every question down the page – ask the same question again in slightly different wording or with the answers in a different order. Now, I wouldn't go around changing the order of answers on a regular basis. You might have one part of the questionnaire where "good" is on the right and "bad" is on the left;

  18. 00:08:32 --> 00:09:00

and you might decide to change it in a completely different part of the questionnaire and make it really obvious that you've changed it to those who are paying attention. But whatever it is that you do, what you're *trying* to do is to find people who really aren't paying much attention to the directions on the survey or whatever the research tool is, and catch them out and pull them out of your results. And one of the issues you should be aware of if you're paying for participants from something

  19. 00:09:00 --> 00:09:30

    like your research tool *supplier* is that you can go back to them and say, "These people did not do a very good job of completing this survey, this study." And ask them to refund you for the cost of those. You tell them that you're having to pull their data out of your results. Also, it helps to tidy up their respondent pool. Perhaps it's not your particular concern, but if you do end up using them again, it would be nice to know that some of these people who are simply gaming the system have been removed from the respondent pool.

  20. 00:09:30 --> 00:09:45

    So, reporting them – getting them removed from the pool – is a sensible thing to be doing. And, finally, devising a scoring system to check the consistency and also checking for fake responses and people who are just not basically doing the research as you need them to do it.
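
Parts of the participant screening Hudson describes can be automated. Below is a minimal sketch, assuming survey-style data where each participant has Likert answers, a completion time, and one deliberately repeated question pair; the field names and thresholds are illustrative assumptions, and flagged participants should still be reviewed by hand.

```python
def is_failing(participant: dict,
               min_seconds: float = 120,
               repeated_pair: tuple[str, str] = ("q3", "q12")) -> bool:
    """Flag likely low-effort participants for manual review."""
    answers = participant["answers"]  # e.g., {"q1": 4, "q2": 2, ...}
    # Straightlining: every Likert answer identical.
    straightlined = len(set(answers.values())) == 1
    # Speeding: finished implausibly fast.
    too_fast = participant["seconds"] < min_seconds
    # Consistency check: two rewordings of the same question disagree badly.
    a, b = repeated_pair
    inconsistent = abs(answers[a] - answers[b]) > 1
    return straightlined or too_fast or inconsistent

participants = [
    {"id": "p01", "seconds": 410, "answers": {"q1": 4, "q2": 2, "q3": 5, "q12": 4}},
    {"id": "p02", "seconds": 55,  "answers": {"q1": 3, "q2": 3, "q3": 3, "q12": 3}},
]

kept = [p for p in participants if not is_failing(p)]
print([p["id"] for p in kept])  # p02 is flagged: too fast and straightlined
```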

Where to Find Usability Evaluation Participants

Existing Users: Your existing user base is a goldmine for usability feedback. It's best to avoid assuming they'll join in. Invite your users to join through emails, website pop-ups, social media groups, or even have sales and customer service teams reach out.

Online Platforms: You can tap into the large pool of potential participants available online via social media platforms, user forums, and online communities.

Think about where your target users might naturally gather online. Platforms like Reddit, LinkedIn groups or Discord channels provide spaces where users share their experiences. Many of the users found in these online spaces are willing to take part in usability studies.

Collaborate with User Research Platforms: User research platforms connect UX designers with potential participants. These platforms make it easy to find a lot of different users, which can save time and give you a more diverse group to learn from.

The Nielsen Norman Group discusses five ways to recruit participants for user research.

How to Deal with Common Usability Evaluation Challenges

A lack of budget, time and buy-in from stakeholders can derail even the best-laid usability evaluation plan. Experienced UX designers understand the importance of forward planning. Instead of viewing these challenges as obstacles, try to see them as opportunities.

How to Handle Small Budgets and Tight Timelines

Tight budgets and timelines can limit resources available for usability evaluations. You can still get valuable insights into user experiences if you learn to work with these challenges.

Strategies you can use to deal with a restrictive usability testing budget and time constraints:

  1. Focus on Key Objectives: Identify the aspects of usability that match project goals. You can then allocate limited resources to the most impactful areas of usability.

    For example, an online clothing store would focus on making it easy to buy clothing and search for specific items. They’d focus their resources on testing the checkout and search functionality.

  2. Lean Methodologies: Efficient usability testing methods like guerrilla testing are quick, informal approaches that provide valuable insights.

  3. Open-Source Tools: Open-source usability testing tools keep costs down. These tools will help you conduct usability assessments without a big financial investment.

Managing Resistance to Usability Changes

Resistance to usability changes, often from stakeholders, poses another challenge for many UX designers. It’s necessary to overcome this resistance if you want to improve certain aspects of the product based on usability findings.

Strategies you can use to overcome resistance to usability changes:

  1. Keep Communication Open: Establish open lines of communication between researchers and stakeholders. Clearly explain why you suggest the changes you’ve proposed.

  2. Show Clear Proof: Back up the suggested changes with data, feedback from users, and usability metrics.

3. Highlight Long-Term Benefits: Point out the good that will happen in the long run because of user-centric design. Teach stakeholders how valuable it is to fix usability issues early in the design process. Emphasize benefits like more satisfied users, spending less money, and building stronger loyalty to the brand.

Get your free template for “Good Questions for Stakeholder Interviews”

Learn More about Usability Evaluation

Read the IxDF’s open-access textbook entry on usability evaluation.

Take our course Interaction Design for Usability.

Read How to Involve Stakeholders in your User Research.

Watch our How to Get Started with Usability Testing Master Class.

Take our course The Practical Guide to Usability.

Read UX research on a budget.

Watch a video about Formative vs. Summative Evaluations on the Nielsen Norman Group website.

Read and learn more about current usability standards.

Read How to Recruit Users for Usability Studies.

Get your free template for “Usability Test Checklist”

How do you evaluate the usability of a website?

UX design professionals use a combination of methods to evaluate how easy a website is to use:

1. Recruit real users to conduct usability testing.

2. Use heuristic evaluation, such as Nielsen's 10 heuristics, to identify potential usability problems.

3. Perform cognitive walkthroughs to step through key tasks as a first-time user would.

4. Analyze user feedback and analytics from feedback forms, support tickets, and web analytics.

Accessibility and usability are closely related in web design. Watch the video below and read the article, Usability for All, to learn more.

Video transcript
  1. 00:00:00 --> 00:00:31

There are two main reasons. The first is that it's a legal requirement in almost all countries that you make websites that anyone can use, even if they have reduced abilities. This isn't the best reason, but for organizations where legal compliance is a priority, it can be a very powerful one. The second issue is that we find that accessibility is closely connected with general usability and search engine optimization (SEO). When we do things to improve accessibility,

  2. 00:00:31 --> 00:01:01

    we end up improving both of these other topics. It turns out that search engine optimization and accessibility have more in common than you might think. They both need to deal with technology that's trying to understand the pages. In the case of accessibility, assistive technology needs to present it to users with reduced abilities who perhaps cannot see it or hear it. So assistive technologies will attempt to present web pages in an appropriate form for those users. For search engines,

  3. 00:01:01 --> 00:01:33

the Web crawlers need to understand the contents of the pages so they can be indexed correctly. So the structure of the content needs to make sense in both cases. Cascading style sheets can work wonders in making a messy HTML page look brilliant, but style sheets are complex to interpret. Assistive technology and search crawlers may simply ignore them. That means if you're relying on style sheets to present your content in a meaningful order, those adjustments go away. Here are some general guidelines for implementing accessibility in web design.

  4. 00:01:33 --> 00:02:04

One of the keys to accessibility is to design for assistive technologies or to at least be aware of assistive technologies. When you're designing, if you're looking at visual impairment then screen readers are the main assistive technology there. Screen readers take the contents of the screen and read it out to you. They are now built into most platforms by default, including smartphones. But if you want to ensure your website works well with a screen reader, you should try it out. An accessibility specialist may be helpful in that respect.

  5. 00:02:04 --> 00:02:31

Screen readers deal with written content, but an important issue is that we need to provide non-text content in alternative media. This is the dreaded ALT text that you are frequently prompted for when creating images. This is because screen readers are great for reading out text in HTML, but if it happens to be text embedded in an image, it has little hope. And for meaningful pictorial images, despite the rise of AI, users are likely to get a description

  6. 00:02:31 --> 00:03:01

of what an image shows rather than what it was intended to mean. And just as a quick side note, ALT text for decorative images should always be empty; an empty ALT tag tells assistive technology that the image is not important and can be ignored. If you've got video clips or audio recordings on your site, you need to provide text alternatives for those. Closed captions and transcripts are best and are now provided automatically by many tools. Of course, one of the real usability advantages of text

  7. 00:03:01 --> 00:03:24

alternatives is that you can search them. So if you were looking on your intranet or on a website for something that somebody said, then you could find that in the transcript and have the entire text available to you there. But most changes for accessibility do benefit all users, especially when you start to think about: how can we simplify this layout? How can we make the whole thing easier to use?

What is a usability checklist?

A usability checklist outlines the steps needed during a usability test. The researcher is the only person who uses the usability checklist; they do so to ensure they remember to do and say everything important during the test.

Get your free template for “Usability Test Checklist”

To learn more, read this article for a semi-structured qualitative study (SSQS) checklist.

What is the difference between usability and heuristic evaluation?

Heuristic evaluation is a specific method within usability evaluation. Experts, often UX professionals, test the product using some predetermined rules or guidelines.

The heuristics are rules of thumb for good design and usability, like Jakob Nielsen’s 10 usability heuristics. Read how to conduct a heuristic evaluation for usability to learn more.

William Hudson, author and instructor in user-centered design, shares what it means to conduct a heuristic evaluation.

Video transcript
  1. 00:00:00 --> 00:00:33

    In this session, I'm going to be talking about something that's referred to either as *expert evaluation* or *heuristic evaluation*. It's an evaluation done by one or more experts  using a set of guidelines, and evaluating whether a solution meets those guidelines, how well it meets the guidelines, where it is deficient. So, expert or heuristic evaluations rely on the experience and the expertise of the evaluator.

  2. 00:00:33 --> 00:01:00

    So, you can't really do these things without  understanding some of the basic concepts of interaction design and usability. I mentioned at the outset that you would be using guidelines, but those guidelines are *not* self-explanatory, so you have to understand what a good solution to a particular problem, what you're trying to achieve, would look like because as you're doing evaluations and as the industry changes on a  regular basis,

  3. 00:01:00 --> 00:01:36

    then you have to appreciate whether or not the solutions you're seeing actually conform to the guidelines in front of you. Heuristics are these rules of thumb based on *good  practice and known problems in design*. And they can be used from the very early design through to finished solutions. And you can even do expert or heuristic evaluations on just sketches if that would be helpful. It probably is more sensible a little bit later in the process, but certainly there's no impediment to looking at maybe the general layout of screens

  4. 00:01:36 --> 00:02:00

and saying, well, this screen is quite possibly overly complicated for the problem in hand and the customers or users that you're trying to target. It is relatively inexpensive in that hiring in a consultant for one or two days is actually very much cheaper than conducting usability testing.

  5. 00:02:00 --> 00:02:37

    But immediately following, you'll notice that I mentioned that it's *not as effective as testing with real users*. And that is certainly the case. However, if you had a lot of novel designs and you wanted to get some idea about whether they were going to be effective, then inviting people in who actually do  usability testing who are experts in the field will get you a lot of feedback without nearly so  much cost as a lot of usability testing, which can get quite expensive just because of having  to recruit, reward, hire facilities, and so on.

  6. 00:02:37 --> 00:03:03

Jakob Nielsen published his book on Usability Engineering back in the early 1990s, and these are his 10 basic UI (user interface) heuristics. And they haven't really changed, although when we actually go out to do something like benchmarking, we have a very much more detailed set of heuristics.

  7. 00:03:03 --> 00:03:30

    But these are a useful starting point, and they're talking about fairly generic concepts like *visibility of system status* and making sure that people understand where they are in the process. And that, of course, is  a good thing no matter what you're doing. And detailed design does actually flow out of that – for example, letting people know that they've got things in their shopping basket. That is an example of the visibility of system status.

  8. 00:03:30 --> 00:04:00

    *Match between the system and the real world* – and that's something I've already alluded to when I was referring to terminology. The mapping sometimes is also physical. If you're talking about the natural tendency for increasing the quantity of something, it tends to be *up*. So, if you've got a slider, then up or to the right is 'more' and down or to the left is 'less'. And that's just what we call *natural mapping*. *User control and freedom* – being flexible, allowing people to go back and fix things.

  9. 00:04:00 --> 00:04:33

    Bear in mind that when user  interface design was relatively new on sort of the large scale back in the 1990s when Windows 3.1, which was kind of the very first successful version of Windows, and of course the World Wide Web came around about the same time, it was uncommon, it was very unusual to have Undo functions. If you made a mistake and you needed to fix it, then you had to fix it yourself. There was no Control-Z or any kind of undo facility.

  10. 00:04:33 --> 00:05:04

    It was something that you had to do, and we take that for granted now, but it was not the case in the early days. *Consistency and standards* – users should not have to wonder whether different words, situations or actions mean the same thing. And this continues to be a problem in some areas. Certainly on intranets within large organizations, you would find that one department had its own set of visual guidelines with its own visual language which was totally different to the next department.

  11. 00:05:04 --> 00:05:30

    And if you were unlucky enough to have to move between those departments on the intranet, then you were in a bit of trouble. It doesn't happen so much these days with the web – e-commerce, for example; people do try very hard to make sure that users are going to have a fairly painless experience, and so we do tend to see things laid out with very *similar terminology and visual language* between totally different e-commerce sites. And to be honest, there,

  12. 00:05:30 --> 00:06:01

    Amazon, because they are so large and popular, has been something of a yardstick. And most people, when they're asking for advice on how to do something in e-commerce, I would refer them to the Amazon site and usually for very good reason. *Error prevention* is much more successful than dealing with errors. Certainly if you're having to discard data or reject data because users did not understand how you wanted it formatted, you should *not insist that people punctuate things exactly the way you need them*.

  13. 00:06:01 --> 00:06:30

    You can do whatever you like with the punctuation once you've got the basic data from them. If you want the phone numbers without punctuation, then take the punctuation out of the phone number after you've got it. If you don't like the spaces in the credit card numbers, then take the spaces out of the credit card numbers. So, that isn't something that Jakob talks about here, but it is a different form of error prevention, and I wholeheartedly recommend it: presenting users with errors and telling them they've done bad

  14. 00:06:30 --> 00:07:00

    and should do it over is *not good user experience*. *Recognition rather than recall* – and this is the basic premise of *all* user interfaces these days. That's the way that we've moved. Back in the 1970s and 1980s, most systems were command line based and you had to remember the syntax and spelling of the next command you wanted to enter. And when Windows and the Mac came along, both having stolen their designs from Xerox PARC,

  15. 00:07:00 --> 00:07:31

    then we got what we used to refer to as *WYSIWYG* – What You See Is What You Get. We don't talk about that much these days, but it was all about *recognition*, which people are very much better at than recall; so, you can *recognize things much more easily* than you can recall them from scratch. *Flexibility and efficiency of use* – and usually there is a trade-off between what you might call *design for learning* and *design for efficiency*. That is all tied up with flexibility and efficiency of use.

  16. 00:07:31 --> 00:08:06

    By making things *flexible and efficient*, you're often making them *harder to use*. So, that's where the tension in the design comes in. *Aesthetic and minimalist design* – people like websites that look attractive and that they trust from a visual design perspective. And it is important that we *do not put too much in front of users at once*. And so, that's what we mean by minimalist design. *Help users recognize, diagnose and recover from errors* – something that's actually these days largely overlooked,

  17. 00:08:06 --> 00:08:33

    but it's still extremely important on more complex systems; things like Microsoft Office, most of the Adobe apps do have behind them a huge body of *help and documentation* – usually pretty awfully organized and presented, I have to say. It used to be better ten years ago, and we've just for some reason stopped worrying too much about that. So, it used to be that if you were looking at a dialog and you wanted help

  18. 00:08:33 --> 00:08:54

    with that dialog, you could click on a button and you would get help on that dialog. The best you can hope for these days is that you click on Help and you get taken to a website, and you now have to work out how you're going to find out about this specific issue that you are having with this specific dialog. So, things have gone a little bit backwards in recent years on that front.
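
To ground the "normalize, don't reject" advice from the transcript, here is a minimal sketch in TypeScript – the helper names are ours, not from any standard library – showing how a form can accept whatever punctuation users naturally type and strip it afterwards:

```typescript
// Error prevention by normalization: accept the user's punctuation,
// then remove it once you have the raw value. Helper names below are
// illustrative only.

/** Remove spaces, dots, dashes and parentheses from a phone number. */
function normalizePhoneNumber(input: string): string {
  return input.replace(/[\s().-]/g, "");
}

/** Remove the spaces or dashes people often type into card numbers. */
function normalizeCardNumber(input: string): string {
  return input.replace(/[\s-]/g, "");
}

// All of these inputs normalize to the same stored value, so none of
// them needs to be bounced back to the user as a formatting "error":
console.log(normalizePhoneNumber("(555) 123-4567"));     // "5551234567"
console.log(normalizePhoneNumber("555.123.4567"));       // "5551234567"
console.log(normalizeCardNumber("4111 1111 1111 1111")); // "4111111111111111"
```

Any real validation – a digit count, a checksum – then runs on the normalized value rather than on the user's formatting.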

What are some highly cited scientific publications about usability evaluation?

1. Nielsen, J. (1993). Usability Engineering. Academic Press.

You’ll find a framework for usability engineering and improving the usability of interactive systems in this book by Jakob Nielsen.

2. Brooke, J. (1996). SUS: A quick and dirty usability scale. In Usability Evaluation in Industry (pp. 189-194). CRC Press.

Designers use John Brooke’s System Usability Scale (SUS) to evaluate the usability of products and services; a scoring sketch follows this list.

3. Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.

Lewis’s paper presents IBM’s computer usability satisfaction questionnaires, including the Computer System Usability Questionnaire (CSUQ). These are well-established, standardized instruments for assessing user satisfaction.

4. Tullis, T., & Stetson, J. (2004). A comparison of questionnaires for assessing website usability. Paper presented at the Usability Professionals Association Conference.

This conference paper compares various questionnaires for assessing website usability.

5. Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24(6), 574-594.

This paper critically evaluates the System Usability Scale (SUS).
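
Since several of the entries above concern the SUS, a worked example of its scoring may help. The following is a minimal sketch in TypeScript, following Brooke's standard scheme: ten items rated 1–5, odd-numbered (positively worded) items contribute (rating − 1), even-numbered (negatively worded) items contribute (5 − rating), and the sum is multiplied by 2.5. The function name is ours:

```typescript
// Minimal sketch of standard SUS scoring (Brooke, 1996).
// `ratings` holds the ten responses, each 1-5, in questionnaire order.
function susScore(ratings: number[]): number {
  if (ratings.length !== 10) {
    throw new Error("SUS requires exactly 10 item ratings");
  }
  const sum = ratings.reduce((acc, rating, i) => {
    // Index 0 is item 1: even indices are the positively worded items.
    return acc + (i % 2 === 0 ? rating - 1 : 5 - rating);
  }, 0);
  return sum * 2.5; // scale the 0-40 raw sum to a 0-100 score
}

// Example: a fairly positive response set scores 72.5.
console.log(susScore([4, 2, 4, 2, 4, 3, 4, 2, 4, 2]));
```

Keep in mind the result is not a percentage; large-scale analyses, including Bangor et al.'s, put typical averages roughly in the high 60s to low 70s.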

If you’d like to cite content from the IxDF website, click the ‘cite this article’ button near the top of your screen.

Is usability testing formative or summative?

Usability testing can be both formative and summative. How you classify it depends on when the testing takes place and for what reason.

Designers use formative usability testing during the initial stages of the design process to guide iteration. You can then use summative testing once the product is complete or released to see how well it works.

Learn more about formative vs summative usability evaluation on the Nielsen Norman Group website.

What are the 5 usability evaluation criteria?

The five usability evaluation criteria are:

1. how easily you can learn it,

2. how efficiently you can use it,

3. how well you remember how to use it,

4. how often you make mistakes,

5. and how much you like using it.

These five factors aren’t always equally important for every project.

Short on time or resources? These 5 Simple Usability Tips are easy to implement and won’t break the bank.

What is usability in HCI (Human Computer Interaction)?

Usability is a cornerstone concept in Human-Computer Interaction (HCI). The five usability factors in HCI are how easy it is to learn, how efficiently it works, how well you remember how to use it, how often mistakes happen, and how satisfied you are with the experience.

You can learn more about the relatively new discipline of Human-Computer Interaction with our foundational course.

Show Hide video transcript
  1. 00:00:00 --> 00:00:30

    How many times have you heard there's been a big accident, whether it's a plane accident, a train accident or something like that, and people say, "Oh, it was human error."? Right? It was due to human error. The person didn't do the right thing at the right point; they didn't notice something that was important, and things went wrong. So, just imagine instead the wing falls off the plane because there's metal fatigue where the wing joined the plane. Now, you would say it was due to the metal fatigue, but you wouldn't say, "It was metal error."

  2. 00:00:30 --> 00:01:01

    You would say it's a *design error* because the designer of the plane, the engineers, the detail designers *should have* understood the nature of metal and the fact that you do get metal fatigue after a while. You should either design it so that where there's metal fatigue it *doesn't fundamentally mean the plane will crash*, or you design it so that you can *detect* when that metal fatigue is happening and then *take preventive maintenance*. There are a number of strategies you've got because you *understand*

  3. 00:01:01 --> 00:01:34

    metal as a material *has known ways of failing*. We as humans have *limits and constraints* and *ways that we fail* in the sense we don't always do things in the perfect way – just like a piece of metal doesn't. As a designer, your job is to *understand those limitations* of people as actors in the system and *ensure the design of the system as a whole works even when those happen*.

  4. 00:01:34 --> 00:02:02

    So, whenever you hear about human error, it was human error! But typically it wasn't the operator or the pilot or the nurse or the doctor in the hospital; it was typically the *designer of the system* that's there. If you treat users *as well* as a piece of metal, you probably are dealing with them a lot better than they usually are dealt with. The user is at the *heart* of what you do. But *understand* those users – understand the nature of them.

  5. 00:02:02 --> 00:02:07

    And so, *then* you'll start to treat them far better, hopefully, than a piece of metal.

Can I conduct usability testing remotely?

Remote usability testing is a widely used research method. UX researchers use online tools to capture how test participants use a digital product. The most common types of data collected are screen and voice recordings.

In moderated remote testing, UX researchers follow along in real time and talk to users as they work through specific tasks. Moderated testing works well for complex tasks, where conversation and follow-up questions add depth to the findings.

In unmoderated testing, the researcher shares a set list of tasks, and the participant completes them on their own. Remote usability testing is a practical, cost-effective way to observe real people doing real tasks.
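
As an illustration, an unmoderated test plan boils down to a fixed, self-explanatory task list. Here is a hypothetical sketch in TypeScript of how such a plan might be structured; the type and field names are ours, not from any particular testing tool:

```typescript
// Hypothetical structure for an unmoderated remote test plan.
interface TestTask {
  prompt: string;           // exactly what the participant is asked to do
  successCriterion: string; // how the researcher will judge completion
  maxMinutes: number;       // time box before the tool moves to the next task
}

const checkoutTestPlan: TestTask[] = [
  {
    prompt: "Find a pair of running shoes under $100 and add them to your basket.",
    successCriterion: "A qualifying item appears in the basket",
    maxMinutes: 5,
  },
  {
    prompt: "Change the delivery address and complete the checkout.",
    successCriterion: "The order confirmation page is reached",
    maxMinutes: 7,
  },
];
```

Because no moderator is present to clarify, each prompt has to stand on its own – one reason unmoderated testing suits straightforward tasks better than tricky ones.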

You can learn more by taking our popular Conducting Usability Testing UX design course.

Show Hide video transcript
  1. 00:00:00 --> 00:00:31

    When you create something, you design something, you need to know if it works for your users. And you need to get that design in front of them. And the only way that you can make sure that it meets their expectations is to have them actually *play with it*. Usability testing is *the number one* technique for determining how usable your product is. We want to see how *successful* our users are, see if they're on the right track and if we're getting the reactions that we *want* from a design.

  2. 00:00:31 --> 00:01:04

    'Ah... I'm not really sure what the users will think!' *Better test it.* 'Uh... too much fighting with our team internally over what to do!' *Better test it!* Usability testing helps you check in with your user expectations. And it's a way of you making sure that you're not just stuck in your own ideas about a design and actually bringing in an end-user from the outside to get some *more clarity and focus*. And the reason why this class is going to help you is you'll benefit from the 15 years of my

  3. 00:01:04 --> 00:01:32

    personal experience and *hundreds and hundreds of usability tests* that I've conducted over the years. We're going to start from the very beginning of *how to create a test plan and recruit participants*, and then go into *moderation skills, tips and techniques*. You'll also learn *how to report on tests* so you can take that data, represent it in a way that makes sense, communicate it to your team and, most importantly, learn how to *change your design based on the data that you get from usability tests*,

  4. 00:01:32 --> 00:01:36

    most importantly. I hope you can join me on this class. I look forward to working with you!

How will AI affect UX research?

UX researchers can use AI to streamline tedious work processes. AI tools can analyze data, spot trends and gauge how users feel about your product. UX designers who make use of the available AI tools will have more time and mental energy to focus on higher-value tasks.

It’s important to know that AI tools can carry human biases into their output, and those biases follow when people use AI-powered results in decision-making. Designers must apply critical thinking to any type of AI-generated content. Read ‘How Can Designers Adapt to New Technologies?’ to learn more.

AI will eventually impact all areas of UX design. Let’s hear what Don Norman, author and co-founder of the Nielsen Norman Group, thinks about AI.

Show Hide video transcript
  1. 00:00:00 --> 00:00:31

    Don't forget the word 'A': Artificial. It isn't intelligent – it's pattern matching. It's doing things, but it has actually no deep understanding at all. None at all. But it can do wonderful things. So I've tried using it. Well, you know, I'm not so good at sketching. Maybe I can ask it to draw the pictures that illustrate my concepts. And it works reasonably well, except I can't just say, 'Oh, here's a picture I want'

  2. 00:00:31 --> 00:01:02

    and I get a nice, wonderful result. I have to say, 'Here's a picture I want,' then look at it and say, 'That's not what I want.' And then I have to figure out a way of describing what I want and then go back and forth. We don't do the technical drawings anymore; we do it with the aid of a computer program that can do the stress analysis and tell us whether it's strong enough, whether it's going to work. That simply makes your work more powerful.

  3. 00:01:02 --> 00:01:30

    Let it do that stuff. So it's a collaboration, because we think of the idea and then we have to judge what it produces – to say whether that's at all what we thought of. And sometimes – and this is a great thing about collaboration – it will produce something that's so weird and strange, and we sit and look at it and say, 'Oh, wow, I would never have thought of that. Maybe that's a really important direction to go.' And then you can spend a few more days shaping it with it.

  4. 00:01:30 --> 00:01:54

    So we're going to have to learn to think and design in a very different way. But that's been true of every advanced technology over the ages. Every time a new technology comes in, it changes the way we behave – and most of the time in positive ways. But it takes a while for us to get used to it. And that's what I think will happen with AI.



Learn more about Usability Evaluation

Take a deep dive into Usability Evaluation with our course The Practical Guide to Usability.

Every product or website should be easy and pleasurable to use, but designing an effective, efficient and enjoyable product is hardly the result of good intentions alone. Only through careful execution of certain usability principles can you achieve this and avoid user dissatisfaction, too. This course is designed to help you turn your good intentions into great products through a mixture of teaching both the theoretical guidelines as well as practical applications surrounding usability.

Countless pieces of research have shown that usability is important in product choice, but perhaps not as much as users themselves believe; it may be the case that people have come to expect usability in their products. This growing expectation puts even more pressure on designers to find the sweet spot between function and form. It is meanwhile critical that product and web developers retain their focus on the user; getting too lost within the depths of their creation could lead to users and their usability needs falling by the wayside. By knowing how best to put yourself in the user’s position, you can dodge this hazard. Thanks to that wisdom, your product will end up with usability so good that it goes unnoticed!

Ultimately, a usable website or product that nobody can access isn’t really usable. Usability, for example, is often overlooked when a business considers expansion. Even with the grandest intentions or most “revolutionary” notions, the hard truth is that a usable site will always be the windpipe of commerce – if users can’t spend enough time on the site to buy something, then the business will not survive. Usability is key to growth, user retention and satisfaction, so we must fully incorporate it into anything we design. Learn how to design products with awesome usability as “The Practical Guide to Usability” leads you through the most important concepts, methods, best practices and theories from some of the most successful designers in our industry.




