Your constantly-updated definition of Qualitative Research and collection of videos and articles.
What is Qualitative Research?
Qualitative research is the methodology researchers use to gain deep contextual understandings of users via non-numerical means and direct observations. Researchers focus on smaller user samples—e.g., in interviews—to reveal data such as user attitudes, behaviors and hidden factors: insights which guide better designs.
“There are also unknown unknowns, things we don’t know we don’t know.”
— Donald Rumsfeld, Former U.S. Secretary of Defense
Video transcript
00:00:00 --> 00:00:30
When we want to say whether something's good or not, it's not so obvious. And this unit is all about evaluation. Ah, well, it's a lovely day here in Tiree. I'm looking out the window again. But how do we know it's a lovely day? Well, I won't turn the camera around to show you, because I'll probably never get it pointing back again. But I can tell you the Sun's shining; there's a blue sky.
00:00:30 --> 00:01:00
I could go and measure the temperature. It's probably not that warm because it's not early in the year. But there's a number of metrics or measures I could use. Or perhaps I should go out and talk to people and see if there's people sitting out and saying how lovely it is or they're all huddled inside. Now, for me, this sunny day seems like a good day. But last week it was the Tiree Wave Classic, and there were people windsurfing. The best day for them was not a sunny day.
00:01:00 --> 00:01:31
It was actually quite a dull day, quite a cold day. But it was the day with the best wind. They didn't care about the Sun; they cared about the wind. So, if I'd asked them, I might have got a very different answer than if I'd asked a different visitor to the island or if you'd asked me about it. Evaluation is absolutely crucial to knowing whether something is right. But, you know, the methods of it are important – they are important to do. But they tend to be a bit boring to talk about, to be honest, because you end up with long lists of things to check.
00:01:31 --> 00:02:00
When you're looking at an actual system, though, it becomes more interesting again. But it's not so interesting to talk about. What I want to do is talk more about the broader issues about *how* you choose *what kind* of evaluation to do and some of the issues that surround it. And it *can* be almost a conflict between people within HCI. It's between those who are more quantitative. So, when I was talking about the sunny day, I could go and measure the temperature. I could measure the wind speed if I was a surfer
00:02:00 --> 00:02:33
– a whole lot of numbers about it – as opposed to those who want to take a more qualitative approach. So, instead of measuring the temperature, those are the people who'd want to talk to people to find out more about what it *means* to be a good day. And we could do the same for an interface. I can look at a phone and say, "How long did it take me to make a phone call?" Or I could ask somebody whether they're happy with it: what does the phone make them feel? – different kinds of questions to ask. Also, you can ask those questions – both the qualitative and the quantitative kind – in a controlled setting.
00:02:33 --> 00:03:03
You might take somebody into a room, give them perhaps a new interface to play with. So, you might take the computer, give them a set of tasks to do and see how long they take to do it. *Or* you might go out and watch people in their real lives using some piece of – it might be existing software; it might be new software, or just actually observing how they do things. There's a bit of overlap here – I should have mentioned at the beginning – between evaluation techniques and empirical studies. And you might do empirical studies very, very early on,
00:03:03 --> 00:03:31
and they share a lot of features with evaluation. They're much more likely to be in-the-wild studies. And there are advantages to each. In a laboratory situation when you've brought people in, you can control what they're doing; you can guide them in particular ways. However, that tends to make it more – shall we say – robust, in that you know what's going on, but less about the real situation. In the real world, it's what people often call "ecologically valid"; it's about what they *really* are up to.
00:03:31 --> 00:04:02
But as I said, it's much less controlled, harder to measure – all sorts of things. It's rarer to find more quantitative work in the wild, but you can find both. You can go out and perhaps do a measure of people outside. You might go out on a sunny day and see how many people are smiling. Count the number of smiling people each day and use that as your measure – a very quantitative measure that's in the wild. More often, in the wild you might just go and ask people – it's a more qualitative thing.
00:04:02 --> 00:04:34
Similarly, in the lab, you might do a quantitative thing – some sort of measurement – or you might ask something more qualitative – more open-ended. Also, you might do away with the users entirely. So, you might have users there doing it, or you might actually use what's called an *expert evaluation* method or an analytic method of evaluation. By having a structured set of questions, somebody who's got a bit of expertise, a bit of knowledge,
00:04:34 --> 00:05:05
can often have a very good estimate of whether something is really likely to work or not. So, you can have that sort of expert-based or analytic-based evaluation method, or you can have something where you get real users in. Most people I think would say that in the end you do want to see some real users there; you can't do it all by expert methods. But often the expert methods are cheaper and quicker to do early on in the design process. So, usually both are needed, and in fact that's the general message I think I'd like to give you about this.
00:05:05 --> 00:05:30
That, in general, it's the *combination* of different kinds of methods which tend to be most powerful. So, sometimes at different stages: you might do expert evaluation or analytic evaluation early, more with real users later. Although probably you'll want to see some users at all stages. Particularly, quantitative and qualitative methods, which are often seen as very, very different, and people will tend to focus on one or the other.
00:05:30 --> 00:06:03
Personally, I find they fit together. Quantitative methods tend to tell me whether something happens and how common it is to happen – whether it's something I expect to see in practice commonly. Qualitative methods – the ones which are more about asking people open-ended questions – both tell me new things I hadn't thought about before and give me the "Why?" answers, if I'm trying to understand *why* it is I'm seeing a phenomenon. So, the quantitative things – the measurements – say, "Yeah, there's something happening. People are finding this feature difficult."
00:06:03 --> 00:06:30
The qualitative thing helps me understand *what it is about it that is difficult* and helps me to solve it. So, I find they give you *complementary* things – they work together. The other thing you have to think about when choosing methods is what's appropriate for the particular situation. And these things don't always work. Sometimes, you can't do an in-the-wild experiment. If it's about, for instance, systems for people in outer space, you're going to have to do it in a laboratory.
00:06:30 --> 00:07:05
You're not going to go up there and experiment while people are flying around the planet. So, sometimes you can't do one thing or the other – it doesn't make sense. Similarly, with users – if you're designing something for chief executives of Fortune 100 companies, you're not going to get 20 of them in a room and do a user study with them. That's not practical. So, you have to understand what's practical, what's reasonable and choose your methods accordingly. Key to all of this is understanding the purpose of your experimentation.
00:07:05 --> 00:07:30
Why are you doing the evaluation in the first place? What do you want to get out of it? And there's usually said to be two main kinds of user evaluation. The first of them is what's called *formative evaluation*. And that's about "How can I make something better?". So, you've designed an interface and you're partway through. This is in the iterative process. You're in that iterative process, and you're thinking: "Okay, how do I make it better? How do I find out what's wrong with it?"
00:07:30 --> 00:08:04
In fact, people often focus on what's wrong. The making it better sometimes is a better way to think about it. But very often people look for usability faults or flaws. Maybe you should be looking for *usability opportunities*. But whichever way, your aim is about making this thing better that you have in front of you. So, that's about improving the design. The other kind of evaluation you might do is towards the end of that process, which is "Is it good enough? Does it meet some criteria?". Perhaps somebody's giving you something that says: "I've got to put this into the company, and everybody has got to be able to use this
00:08:04 --> 00:08:30
within ten minutes; otherwise, it's no good." So, you have some sort of criteria you're trying to reach. So, that's more about contractual or sales obligations, and it's an endpoint thing. The two of these will often use very similar methods. You might measure people's performances, do a whole range of things. But in the first of them – the formative one – your aim is about improving things. It's about unpacking what's wrong to make it better.
00:08:30 --> 00:09:02
In the second, your aim is about finding out whether you've done it well enough. Sometimes, people use this to try and *prove* that they've done it well enough. So, there's an interesting tension that goes on there. However, those two are important. But there's a third, which is often missed, which is: In practice people *are* doing things, but often forget and don't realize what they're doing. There isn't a good name for this one. I sometimes call it "explorative", "investigative", "exploratory".
00:09:02 --> 00:09:35
And this is about when you want to understand something. So, I might be giving somebody a new mobile interface to use because that's the interface I'm going to deliver and I want to make it better. But I might give them the interface to use because I want to understand how they would use something like it. So, say it's a life-logging application – it's about health monitoring. You know, "How well are you feeling today?" and stuff like that. I might be more interested in finding out how that would go into their lives,
00:09:35 --> 00:10:05
how it would fit with their lives, how it would make sense to them – the kinds of things they would want to log. Later on, then, I might throw away completely what I've designed. So, it wasn't an early design; it was more an exploratory thing – a thing to find out. Now, you'll certainly do that from an academic research point of view if you're doing a Ph.D. or if you're doing research in HCI. But it's also true early in a design process. Your aim is more to understand the situation than it is to make something that's going to get better
00:10:05 --> 00:10:33
or to say it's good enough. It's very easy to confuse these goals. That's why I'm telling you about them – because your goal, what you're really after, might be investigative but you might address your experiment as if it's summative – a "good enough" answer: "Yes, it was good enough." That doesn't tell you anything. So, if I had this health application and I found that people enjoyed using it,
00:10:33 --> 00:11:02
what does that tell me? What have I learned? So, if you know *what* you're trying to address, you can then tune your evaluation for that. And when does this process end? Evaluation could go on forever, especially if you think about these iterative processes. That's not true of the summative one – there you *do* get to the end. But in these formative evaluations, when do you get to the end of that? Now, there would have been a time when you'd say, "Well, it's when we deliver the product;
00:11:02 --> 00:11:30
when it goes into shrink-wrap and we put it on shelves." Nowadays, you may have heard the term "perpetual beta": the idea that with web applications you're constantly putting them up there, tweaking them, making them better, experimenting effectively, often with real users. So, in some sense, real use is the ultimate evaluation. Because of that, actually as you design one of the things you might want to think about is how you are going to get that information from use
00:11:30 --> 00:12:03
in order to help you design. In fact, last week at the Wave Classic – the surfer event – I've been involved in designing a local history application for the island that I'm on. And we were able, just in time, to get a version of this out for the Wave Classic. I know, because of the number of downloads and access to feeds from logs, that some people were using the application. But I don't know whether they used any of the history things or they just used some of the other facilities on it,
00:12:03 --> 00:12:26
because I was a bit last-minute; I didn't get a chance to get the logging in. So, I'm getting real use, but I wasn't getting information to help improve the future one. So, certainly for future prototypes, we will actually have this in. But when you design, you can actually think about how you're going to gather information from real use to help you improve things.
See how you can use qualitative research to expose hidden truths about users and iteratively shape better products.
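The transcript's closing point – decide at design time how you'll gather information from real use – in practice means building some form of usage logging into the product. Below is a minimal, hypothetical Python sketch of that idea; the event names and the log file are illustrative assumptions, not details of the application described above.

```python
# A hypothetical sketch of in-app usage logging: record which features
# people actually use so that real use can inform the next iteration.

import json
import time
from pathlib import Path

LOG_FILE = Path("usage_events.jsonl")  # one JSON event per line

def log_event(feature: str, **details) -> None:
    """Append a timestamped feature-usage event for later analysis."""
    event = {"ts": time.time(), "feature": feature, **details}
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Example: telling the history features apart from the app's other
# facilities – exactly the question the speaker couldn't answer without
# logging. ("history_view" and "map_open" are made-up event names.)
log_event("history_view", item_id="example_item")
log_event("map_open")
```

Even this much instrumentation would have shown whether people used the history features or only the app's other facilities.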
Qualitative research is a subset of user experience (UX) research and user research. By doing qualitative research, you aim to gain narrowly focused but rich information about why users feel and think the ways they do. Unlike its more statistics-oriented “counterpart”, quantitative research, qualitative research can help expose hidden truths about your users’ motivations, hopes, needs, pain points and more, and so help you keep your project’s focus on track throughout development. UX design professionals typically do qualitative research from early on in projects: because the insights it reveals can alter product development dramatically, it can prevent costly design errors from arising later. Compare and contrast qualitative with quantitative research here:
| | Qualitative research | Quantitative research |
| --- | --- | --- |
| You aim to determine | The “why” – to get behind how users approach their problems in their world | The “what”, “where” & “when” of the users’ needs & problems – to help keep your project’s focus on track during development |
| Methods | Loosely structured (e.g., contextual inquiries) – to learn why users behave how they do & explore their opinions | Highly structured (e.g., surveys) – to gather data about what users do & find patterns in large user groups |
| Number of representative users | Often around 5 | Ideally 30+ |
| Level of contact with users | More direct & less remote (e.g., usability testing to examine users’ stress levels when they use your design) | Less direct & more remote (e.g., analytics) |
| Statistically | You need to take great care with handling non-numerical data (e.g., opinions), as your own opinions might influence findings | Reliable – given enough test users |
Regarding care with opinions, it’s easy to be subjective about qualitative data, which isn’t as comprehensively analyzable as quantitative data. That’s why design teams also apply quantitative research methods, to reinforce the “why” with the “what”.
Qualitative Research Methods You Can Use to Get Behind Your Users
You have a choice of many methods to help gain the clearest insights into your users’ world – which you might want to complement with quantitative research methods. In iterative processes such as user-centered design, you/your design team would use quantitative research to spot design problems, discover the reasons for these with qualitative research, make changes and then test your improved design on users again. The best method/s to pick will depend on the stage of your project and your objectives. Here are some:
Diary studies – You ask users to document their activities, interactions, etc. over a defined period. This empowers users to deliver context-rich information. Although such studies can be subjective—since users will inevitably be influenced by in-the-moment human issues and their emotions—they’re helpful tools to access generally authentic information.
Interviews
Structured – You ask users specific questions and compare their responses with those of other users.
Semi-structured – You have a more free-flowing conversation with users, but still follow a prepared script loosely.
Ethnographic – You interview users in their own environment to appreciate how they perform tasks and how they view aspects of those tasks.
Get your free template for “How to Structure a User Interview”
Usability testing
Moderated – A facilitator guides users through tasks and asks follow-up questions.
Unmoderated – Users complete tests remotely: e.g., through a video call.
Guerrilla – “Down-the-hall”/“down-and-dirty” testing on a small group of random users or colleagues.
Get your free template for “How to Plan a Usability Test”
User observation – You watch users get to grips with your design and note their actions, words and reactions as they attempt to perform tasks.
Qualitative research can be more or less structured depending on the method.
Qualitative Research – How to Get Reliable Results
Some helpful points to remember are:
Participants – Select your test users carefully (typically around 5): each additional user tends to uncover fewer new problems, as the sketch after this list shows. Observe the finer points such as body language. Remember the difference between what they do and what they say they do.
Moderated vs. unmoderated – You can obtain the richest data from moderated studies, but these can involve considerable time and practice. You can usually conduct unmoderated studies more quickly and cheaply, but you should plan these carefully to ensure instructions are clear, etc.
Types of questions – You’ll learn far more by asking open-ended questions. Avoid leading users’ answers – ask about their experience during, say, the “search for deals” process rather than how easy it was. Try to frame questions so users respond honestly: i.e., so they don’t withhold grievances about their experience because they don’t want to seem impolite. Distorted feedback may also arise in guerrilla testing, as test users may be reluctant to sound negative or to discuss fine details if they lack time.
Location – Think about how users’ surroundings might affect their performance and responses. If, for example, users’ tasks involve running or traveling on a train, select the appropriate method (e.g., diary studies, so they can record aspects of their experience in the environment of a train carriage and the many factors impacting it).
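Why around 5 participants? A common justification is the Nielsen & Landauer problem-discovery model, which assumes each test user independently reveals each usability problem with some fixed probability (about 31% on average in their data). The minimal Python sketch below simply evaluates that model; the 31% figure is their published average, and the discovery rate for your own project will differ.

```python
# Nielsen & Landauer problem-discovery model: with n users and an average
# per-user discovery rate L, the expected share of usability problems
# found is 1 - (1 - L)^n.

def proportion_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n_users."""
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> {proportion_found(n):.0%} of problems found")
# With L = 0.31, 5 users already surface roughly 84% of the problems;
# going from 5 to 15 users adds comparatively little.
```

This diminishing return is why many teams prefer several small rounds of testing (fix, then retest) over one large study.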
Overall, no single research method can help you answer all your questions. Nevertheless, the Nielsen Norman Group advises that if you only conduct one kind of user research, you should pick qualitative usability testing, since even a small sample can yield many cost- and project-saving insights. Always treat users and their data ethically. Finally, remember the importance of complementing qualitative methods with quantitative ones: you gain insights from the former; you test those insights with the latter.
Video transcript
00:00:00 --> 00:00:33
When developing a product or service, it is *essential* to know what problem we are solving for our users. But as designers, we all too easily shift far away from their perspective. Simply put, we forget that *we are not our users*. User research is how we understand what our users *want*, and it helps us design products and services that are *relevant* to people. User research can help you inspire your design,
00:00:33 --> 00:01:00
evaluate your solutions and measure your impact by placing people at the center of your design process. And this is why user research should be a *pillar* of any design strategy. This course will teach you *why* you should conduct user research and *how* it can fit into different work processes. You'll learn to understand your target audience's needs and involve your stakeholders.
00:01:00 --> 00:01:37
We'll look at the most common research techniques, such as semi-structured interviews and contextual inquiry. And we'll learn how to conduct observational studies to *really understand what your target users need*. This course will be helpful for you whether you're just starting out in UX or looking to advance your UX career with additional research techniques. By the end of the course, you'll have an industry-recognized certificate – trusted by leading companies worldwide. More importantly, you'll master *in-demand research skills* that you can start applying to your projects straight away
00:01:37 --> 00:01:44
and confidently present your research to clients and employers alike. Are you ready? Let's get started!
How do you plan to design a product or service that your users will love if you don't know what they want in the first place? As a user experience designer, you shouldn't leave it to chance to design something outstanding; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as a cornerstone of user experience design.
In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, consequently, outperforms your competitors’.
This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!
By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!
We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!