User Interviews

User interviews are a qualitative research method where researchers engage in a dialogue with participants to understand their mental models, motivations, pain points, and latent needs.
“To find ideas, find problems. To find problems, talk to people.”
– Julie Zhuo, former VP, Product Design at Facebook and author of The Making of a Manager
Research is the first step in the design process. It helps you understand what your users feel, want, and value, gain insights for future designs, and identify the pain points in the current solution. User interviews will help you structure your design process and deliver optimized solutions that resonate with users.
Design teams typically perform user interviews with the potential users of a design as part of the empathize phase of the design thinking process. User interviews follow a structured methodology whereby the interviewer prepares several topics to cover, records what is said, and systematically analyzes the conversation after the interview.
User interviews are one of the most commonly used methods in user research. They can cover almost all user-related topics and be used, for example, to gather information on users’ feelings, motivations, daily routines, or how they use various products.
The interviews often follow the same methodology as qualitative interviews in other fields but with the specific purpose of informing a design project. Because user interviews typically have to fit into a design or development process, practical concerns such as limited time or resources often play a role when deciding how to conduct such interviews. For instance, user interviews can be conducted over a video or voice call if time is restricted. On the other hand, in projects with sufficient time and resources, researchers can perform the interview in the user’s home, and designers might even be flown overseas if the users reside in another country.
While many interview methods used in design projects are borrowed from other fields, such as ethnography and psychology, some have been created specifically for use in design contexts. An example is contextual interviews in the participant’s everyday environment. Contextual interviews can provide more insights about the environment in which a design will be used. As such, a contextual interview might uncover flaws within a product’s design (e.g., the product is too heavy to be carried around the house by the user) that a typical user interview might not.
User interviews can be used in different stages of product development, from discovery to usability testing. Conducting a user interview is simply a question of choosing the right user or users to interview, asking them pre-determined questions (or free-form questions if used following an observation), and then reporting on their answers to enable further decision-making.
Ann Blandford, Professor of Human-Computer Interaction at University College London, emphasizes the need to keep an open mind when approaching a user interview.
As an interviewer, I think it's important to go in with an attitude of *openness*, with an attitude of *wanting to learn*, of *respecting the expertise* of your interviewee, whatever kind of expertise that is, and wanting to learn from them. I sometimes perhaps take that to extremes.
But I think that an important part of interviewing well is to listen to people and not to come with preconceptions about what they're going to say or what they can share. And the other, of course, is about an interviewing style that's open and respectful and *not aggressive* to people so that they do feel relaxed and able to say things and not feel judged for what they're sharing and what they're saying.
So, quite a lot of my research now is around health technologies, particularly. So, that can lead into some fairly sensitive topics. We've done projects looking at how people might use technology to manage their alcohol consumption or their exercise or their diet. And these are all topics that people can feel very defensive about. You know – "How dare you suggest I drink too much or I'm a couch potato or I eat too many carbs!" or whatever.
You know, they're topics where people *can* feel judged. But if people feel judged, then they're not going to give you a real understanding of how you might design technologies that will *really* help them. And so, it's really important to *not* be judgmental – to be open, to respect where they're coming from
and to understand what matters to *them*, as opposed to what you think should matter to them. Well, I certainly work quite hard with students to try to get them to question their own assumptions and not to expose their assumptions when they're interviewing – so that they are actually open to hearing what people are really saying, what people are really trying to express.
Another point: even if interviewers are not *intentionally* leading, coming with too many assumptions about what you're expecting people to care about or to say can unwittingly lead people quite a long way. And leading questions will result in you hearing what you expected to hear, which doesn't necessarily give you the information that you actually need to gain insight from the study.
And surely the point of doing interviews is to get some insight that you didn't already have. If you already knew the answer, there'd be no point in interviewing people about the topic, whatever it is. D.M.: You always have assumptions, right? Otherwise, you wouldn't do a study. A.B.: Yes. D.M.: So, what's the best way to sort of balance or use your assumptions in a constructive way? A.B.: So, I think what you try to do is to have questions more than assumptions.
A *qualitative* study I think is driven by a *question*. *Quantitative* studies are more often driven by a *hypothesis* – i.e. you have a belief and you want to test it – whereas qualitative studies are much more driven by questions. And I've certainly got partway through several studies and suddenly realized that I had a set of assumptions
that aren't necessarily valid. And so, I'm forced to question my own assumptions.
User interviews come in eight different types, including structured, unstructured, contextual, and group interviews.
© Interaction Design Foundation, CC BY-SA 4.0
User interviews are a versatile and indispensable design research tool, serving as a gateway to invaluable insights that shape user-centric solutions. Depending on the research goals and who the participants are, user interviews fall into eight categories.
Structured interviews adhere to a meticulously planned set of questions, providing a systematic approach to information gathering.
This type is characterized by its rigidity, ensuring a standardized process that makes it easy to compare participants. Designers opt for structured interviews when seeking specific, targeted information, as they provide an organized means of collecting consistent, quantifiable data.
In contrast, unstructured interviews embrace a more open-ended and flexible approach. Participants are encouraged to express themselves freely, leading to a qualitative exploration of their thoughts and experiences.
This type is favored when the goal is to uncover insights that may not emerge through predefined questions, allowing for a deeper understanding of user perspectives and motivations.
Semi-structured interviews balance the rigidity of structured interviews and the flexibility of unstructured ones.
Designers prepare a set of predetermined questions but allow room for participants to elaborate on their responses. This format combines the benefits of both worlds, offering depth and consistency while accommodating the richness of qualitative data.
Here’s Ann Blandford with more on semi-structured interviews and how they differ from structured and unstructured ones.
In a sense, the answer is in the expression 'semi-structured'. So, a completely structured interview is a *conversation where all the questions are pre-scripted* and very often the answers are *closed answers*, so like an option 1 from 5 or a closed question like 'Yes' / 'No'.
And if you're going to do that kind of interview, really it's very much like a *survey*, and you might almost do it – it might be better as a written survey rather than as an interview. At the other end of the spectrum, you just have a *conversation, a chat with somebody*, which might or might not be on a particular topic or might range across a whole spread of topics and cover all sorts of material not necessarily with any particular structure and not with a particularly obvious purpose to it.
*Semi-structured interviews lie between these extremes*, i.e. there is some kind of structure to it; you typically have a *topic guide* or a *semi-structured interview script* that determines the kinds of topics that you're going through and an initial planned order with which you'll cover those topics. But it's also free form in that people are welcome to give long answers, like the answers I'm giving you now.
And you can elaborate on questions; so, if somebody says something particularly interesting that you hadn't anticipated, you might follow up on that. And indeed sometimes people answer the topic that you'd thought of as being the fifth topic – they might introduce that when you thought you were still talking about the second topic; and so, it kind of *free-flows* to some degree and feels natural. But as an interviewer, you have an agenda;
you have a set of topics that you want to cover; you perhaps even have some detailed plans of how you're going to ask some of the questions so that you get through the topics that *you* want to cover even if participants maybe go off topic at times or – you know – divert into other things. So, that's roughly what a semi-structured interview is. It's a *conversation*, but it has a set of *topics* that you want to cover
and is *organized* such that you're reasonably confident that you will cover those topics, but perhaps in a way that is responding to the participant and their interests and to some extent their agenda as well, as well as that of the interviewer, but it's not a completely fixed structure where you're not allowed to deviate from what you've planned ahead of time. In HCI, I think they're the most common form of interview because
we typically have an agenda; we're concerned with finding out from users or from potential users about the requirements for future designs or about the way that people do their work so that we can design new technologies to support that work or that activity better. Or maybe there's already an existing technology that you're testing,
and so you want to know specific things about what works well, what doesn't, and so you have an agenda as an interviewer; it's not just a casual conversation. But you do want to know a set of things about that thing. But you also want to hear what the user has to say; you want to listen to them, especially when they say things that are *unexpected*,
as well as finding out the answers to questions that you've asked. So, as interviews go, they are – I would say – the most common form used in HCI.
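To make the contrast between structured, unstructured, and semi-structured interviews concrete, here is a minimal sketch of a semi-structured topic guide expressed as a data structure. Everything in it – the topic names, prompts, and helper functions – is an illustrative assumption for a hypothetical study, not a prescribed format:

```python
# Minimal sketch of a semi-structured interview topic guide as a data
# structure. All topics and prompts below are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    opening_prompt: str                                   # planned, open-ended question
    follow_ups: list[str] = field(default_factory=list)  # optional probes
    covered: bool = False                                 # ticked off as the conversation flows

# The guide fixes the *topics* and a rough order, not exact wording or
# sequence: the interviewer can re-order topics if a participant raises
# one earlier than planned.
topic_guide = [
    Topic("background", "Can you tell me about your role and a typical day?"),
    Topic("current tools", "How do you handle this task today?",
          ["What works well?", "What gets in the way?"]),
    Topic("workarounds", "Have you ever worked around the tool? Tell me more.",
          ["When did that last happen?"]),
]

def mark_covered(guide: list[Topic], name: str) -> None:
    """Record that a topic was covered, even if it came up out of order."""
    for topic in guide:
        if topic.name == name:
            topic.covered = True

def remaining(guide: list[Topic]) -> list[str]:
    """Topics still to raise before wrapping up the interview."""
    return [t.name for t in guide if not t.covered]
```

A fully structured interview would fix the exact questions and closed answer options in advance; a fully unstructured one would dispense with the guide altogether and simply follow the conversation.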
Contextual interviews unfold in the user’s natural environment, providing a unique perspective on how they interact with products in their day-to-day lives.
This type ventures beyond the controlled setting, uncovering insights that might be missed in a traditional interview setup. Observing users in their context allows designers to identify specific pain points, preferences, and behaviors that influence the user experience in a real-world scenario.
Expert interviews involve engaging with individuals possessing specialized knowledge or experience relevant to the design context.
These individuals could be industry experts or professionals with specific domain expertise. Their insights contribute a layer of knowledge to the design process. These interviews help refine solutions with experienced perspectives that are not apparent through user interviews alone.
In the era of digital connectivity, remote interviews overcome geographical constraints. They leverage technology to facilitate conversations between designers and participants.
This type is particularly convenient when engaging with users located in different regions. Remote interviews ensure accessibility and flexibility, allowing designers to gather insights without the limitations of physical proximity.
Group interviews involve the simultaneous participation of multiple individuals, fostering dynamic interactions. This format encourages participants to build on each other's responses, unveiling shared experiences and diverse perspectives within the group.
Group interviews are beneficial when exploring collective opinions and group dynamics or seeking insights into how individuals influence each other's perspectives.
Stakeholder interviews extend beyond end users to include individuals with a vested interest in the project's success. These could be internal stakeholders, decision-makers, or individuals representing different organizational departments.
Engaging with stakeholders ensures alignment between design goals and broader organizational objectives. This also helps foster a holistic approach that considers the overall impact of the design solution.
Understanding the applications of these eight types of user interviews empowers designers to strategically choose the most fitting approach based on project goals and the specific information sought. Each type brings a unique flavor to the user research process and contributes to creating designs that resonate with user needs and expectations.
Practices like leading questions, biased language, and a lack of empathy should be avoided during a user interview.
© Interaction Design Foundation, CC BY-SA 4.0
User interviews are powerful for extracting meaningful insights, but their success depends on careful planning and execution. To ensure a fruitful user research process, designers must navigate potential hindrances that could compromise the authenticity and depth of the gathered information.
Avoid steering participants toward specific responses with leading questions. Instead, craft questions that are neutral and open-ended; for example, ask "How do you currently share files with your team?" rather than "Don't you find sharing files frustrating?" This encourages genuine and unbiased insights. Leading questions can unknowingly influence participants, compromising the integrity of the data collected.
Be vigilant about using language that may introduce bias into the interview. Phrasing questions in a way that favors a particular response can distort the authenticity of participants' answers. Designers should aim for neutrality and clarity to ensure participants feel comfortable expressing their thoughts.
Resist the temptation to overwhelm participants with excessive information before or during the interview. Providing too much context can lead participants to tailor their responses to what they think the interviewer wants to hear, whereas user interviews aim to elicit participants' natural thoughts and experiences.
Build rapport and create a comfortable environment in a user interview. Refrain from rushing through questions without allowing participants the space to share their thoughts. Acknowledge their experiences and be genuinely interested in their perspective to foster open communication.
Ann Blandford offers practical tips on how to build rapport with interviewees through the structure of questions.
Ditte Hvas Mortensen: Thinking about questions, there's a certain sequence to how you best do it. So, maybe you could start by saying something about what the best way is to start an interview. Ann Blandford: For sure, the best way to start any interview is with opening questions that set people at ease, that assure them of what kinds of topics are going to be covered,
that give them a sense of what will be done with the data, though maybe that will be even before the interview starts, as I think about it. But it's about *setting somebody at ease*, about *helping to build rapport with them*. Obviously, each of us as an interviewer has our own personal style. And also every *interviewee* has their own personal style. And so, no two interviews are actually the same as each other.
I can't imagine ever running two almost identical interviews because they do so much depend on the participants – both the interviewers and the interviewees involved. But it is initially about setting somebody at ease, asking them comparatively innocuous questions – for example, what their role is in the organization if it's about a work system or their experiences of using a technology *like* the one that you're thinking about designing.
D.M.: So, it sounds like it's also pretty concrete questions. A.B.: Usually initial ones – it's best in my experience to make them reasonably concrete. One can move on to more abstract or more speculative questions later. Or, you know, questions that perhaps get at more sensitive feelings and values and emotions will come
later in a conversation when people have settled in and have started to feel comfortable in the situation, as opposed to, you know, starting with "So, how do you *feel about* _____?" – you know. That's not likely to set somebody at ease – if you kind of head straight into those things at the outset. And then, at the end, it's important to wrap up in a way that again leaves people feeling that they've said what
they want to say, that there aren't any topics that *they* thought were important as part of this interview, that they had an agenda, giving them a chance to articulate anything that you might have missed, and also giving them a sense of what's going to happen *afterwards*, you know: Are you going to give them a report back? Are you going to advise their managing director about new technology requirements?
What will be done with the data – what's the *value of the interview* for them? Some people really care about that; others perhaps less so. So, it's about being sensitive to what different people need. And in the middle – I mean, that's obviously the bulk of it – it's really about *planning it well ahead of time* so that you've made sure that you're covering all the topics that you're aiming to cover in
a rational and sensible order. As I've said already, people may sometimes answer and introduce topics that they've thought about already and answer future questions before you've even asked those questions. That requires you to be on your toes and think, "Oh yes, they've already answered that completely," or, "They've already *partly* answered that," and then, you know, picking up on what they've already said so that you're showing that you've been listening to them, and pursuing that topic a bit more later.
But it's about *making it flow as a natural conversation* as far as possible, while also *covering the topics that you want to cover*. So, it's important to get the structure so that it's a natural one that flows for most people – even if some people will run it differently. And part of checking that is about *piloting it* – you know – running through it with a friend or with somebody who at least should be able to *pretend* that they're a participant in that study first, to make sure that
you've got a coherent set of questions that are *comprehensible*, that are *using the user's language*, talking in terms that make sense to them and that they can engage with. So, you know, none of us gets interviews right the first time. It's usually worth trialing them out and going from there. D.M.: So, it sounds like you should try to follow the user's or the participant's lead
– if she brings something up earlier than you had expected, you just go along with it, or...? A.B.: That's my style, certainly, you know, because once somebody's in the flow if they're talking about things that you want to cover *anyway*, it just seems most natural, then, as a conversation to let them carry on on that line; and then, when they've finished, perhaps bring it back to make sure that you are covering everything that you wanted to cover
– because otherwise it starts to feel very disjointed and people may well forget the thing that they'd already half-started to say, and so you'll then have lost it forever. So, it's much easier if people can actually just carry on in the flow, as long as they're not going *wildly off-topic* for too long a time. And actually respecting participants can very often involve them going off-topic for some parts of it.
And you're gently trying to bring them back, and exactly how you do that probably depends on your interviewing style, actually. I personally probably let people run on a bit longer than perhaps some other interviewers would because I want to find an opportune moment to get people back on track. I think probably the worst one I had was – again, it was a little while back –
where somebody wanted to express a lot about the unions in their organization and was determined to tell me about industrial relations even though my focus was on *technology design*. They were seeing the introduction of new technology as being closely linked to other aspects of their relationship with management. And, of course, those weren't directly relevant to me.
But if I had just kept shutting them off, then I don't think they'd have talked properly about their attitudes to the technology, either; so, it was about respecting the other things that they felt they wanted to say that were, in *their* minds, related, even though they were less directly relevant to the user interaction design – to me. It might have taken me slightly longer to get all the information that I needed in that situation than it would have done had they completely stuck to my script.
But on the other hand, it helped me to build rapport. I think it gave me better information. It certainly meant that I had a better relationship with the people I was interviewing in that setting.
Steer clear of assumptions regarding the user's knowledge or familiarity with the product or topic. Clarify terms, avoid industry jargon, and ensure that participants fully understand the context. This prevents misunderstandings that could impact the accuracy of their feedback.
It is vital to balance guiding the conversation and allowing participants to express themselves freely. Avoid dominating the dialogue or interrupting excessively. You should create an environment where participants feel valued, listened to, and are encouraged to share their experiences without interruption.
Explore contradictions or inconsistencies in participants' responses with appropriate sensitivity. There may be occasions where people contradict their statements. These variations can offer valuable insights into the complexity of user experiences. However, when you question them on such discrepancies, they might get defensive or uncomfortable. Approach these contradictions carefully and delve deeper to uncover the nuances that might not be immediately apparent.
User interviews extend beyond verbal communication. Pay attention to non-verbal cues such as body language, facial expressions, and tone of voice. Neglecting these cues may result in overlooking subtle yet significant indicators of a participant's sentiments or attitudes.
Avoid separating user insights from their broader context. Consider external factors influencing participants' responses, such as cultural differences or situational circumstances. It is crucial to account for these factors to avoid misinterpretations and incomplete understandings of the user experience.
After the interview, resist the urge to immediately move on to the next task. Take time for post-interview reflection to analyze the gathered data, identify patterns, and uncover more profound insights. Skipping this critical step may result in overlooking key findings and hinder the overall impact of the user interview process.
In this next video, Ann Blandford shares some precautions to consider during the analysis.
Ditte Hvas Mortensen: Are there some common pitfalls that you should be aware of when you search for themes? Ann Blandford: *Confirmation bias* is, I think, the biggest one: you think you've spotted something fairly early on in the data – particularly if it's something that supports a pre-existing assumption that you had – you know – before starting the study.
And so, it's so much easier to spot data that supports that position than it is to spot data that contradicts it, particularly if the contradiction is a little bit kind of *tangential*. So, that's why I kind of emphasize that you have to look explicitly for contradictory evidence in the data because that can be harder to spot. So, you have to ask yourself specifically "What would contradictory evidence look like?"
or "What *might* it look like?" You know – "Is there any of that? *Why* might it be there?" So, I think confirmation bias is the biggest. Another trap is there's one participant with slightly outlandish but very engaging and exciting views,
and they express themselves in a wonderful way, and you almost start writing the narrative as if that person is the central figure – as opposed to being a slight outlier in the data set. And so, their perceptions and their attitudes and values are given more *weight* than those of perhaps less articulate or less extreme participants. D.M.: Are there any other sort of common problems that you often experience students have when they do analysis?
And do you have some good advice for how to tackle those problems? A.B.: I think the big one is the one I've already mentioned – about getting overwhelmed by the data and not really knowing where to start or when to stop; having so many themes and not being prepared to let go of any of them;
or having difficulty working out which ones matter and which ones don't. And *data gathering* for some people is more exciting than data analysis. In practice, a good analysis takes longer than data gathering.
It takes longer to analyze 10 hours of interview data than it does to gather 10 hours of interview data, *even* when you take into account the time it took to recruit 10 appropriate participants in the first place, which can be non-trivial. But analysis is much more of a solitary activity, whereas data gathering is inherently social. You know – you're interviewing people; you're learning things; it's fun and engaging.
So, some people can get quite bogged down in analysis, I think. And it feels like a huge task, and you need to break it down into *small steps* of – you know – "I'll deal with three pages of data today." – you know – or, "I'll do *this* step of coding." or, "I'll do this little bit of stuff." You've got to break it down into manageable chunks.
And also, just dive in and *do* some, actually, you know. The longer you leave it – and it's just this pile of transcriptions or whatever – you know, the harder it is to get started and the more overwhelming it can feel. So, I think *feeling overwhelmed* is probably the biggest challenge, and not being prepared to drop themes or not being able to quite see how themes fit together to tell a bigger story.
That's of significance for HCI research or for technology design. And going off on a weird tangent in the analysis, and forgetting why you were doing the study in the first place. So, with those, I think the best way to deal with it is to *talk to people about it* – you know – to get somebody else involved,
even in just talking through ideas with you.
By avoiding these pitfalls, designers can elevate the quality of user interviews and ensure the data collected is authentic, unbiased, and representative of users' experiences.
While user interviews are a go-to method in user research, time or budgetary constraints may make it difficult to conduct interviews. Here are some other research methods to consider under such circumstances.
Methods like focus groups and usability tests can be tried as alternatives to user interviews.
© Interaction Design Foundation, CC BY-SA 4.0
Focus groups offer a dynamic alternative to one-on-one interviews. Bringing together a small group of participants fosters interaction, allowing researchers to observe group dynamics and gather collective insights. This method is particularly effective for exploring diverse perspectives, uncovering shared experiences, and understanding group dynamics that might not surface in individual interviews. Remember that your designs will be used by many users who differ from one another.
Usability testing involves evaluating the effectiveness of a product's interface through real-time user interaction. This alternative method employs prototypes or actual product versions, allowing researchers to observe users navigating the system. Usability testing provides insights into user interactions, pain points, and preferences in a controlled environment. By incorporating prototypes, designers can assess the functionality of specific features, ensuring a user-friendly design. This method is especially valuable for refining the user experience iteratively, based on direct user feedback, ultimately leading to more robust and user-centric design solutions.
In-person observation involves directly witnessing users' behaviors and actions in their natural environment. By immersing researchers in the users' context, this method unveils nuances that may be missed in a controlled setting. The in-person approach provides a holistic understanding of how users integrate products or services into their daily lives. Designers should conduct the observation without influencing the user's behavior.
Market research extends the scope beyond individual user experiences to broader market trends and preferences. This alternative leverages quantitative data, surveys, and statistical analysis to uncover patterns at a larger scale. Market research complements user interviews by providing a macro-level understanding that informs strategic decisions and market positioning.
Discovery research focuses on the initial exploration of a problem space or a new product idea. It involves gathering insights from various sources, including user interviews, surveys, and secondary research. By combining diverse methods, discovery research lays the foundation for understanding the landscape before diving into more targeted investigations.
Rather than relying solely on one method, integrate various user research methods to understand user needs comprehensively. You can get insights from different angles by combining interviews, usability testing, and surveys.
This eventually results in more informed and user-centric design decisions.
Take the IxDF course User Research – Methods and Best Practices to learn more about interviews and other qualitative research methods.
For a concise overview of user interviews, watch How To Conduct Effective User Interviews, a Master Class webinar with Joshua Seiden, Co-Author of Lean UX and Founder of Seiden Consulting.
User Interviews for UX Research: Refer to this article to learn how user interviews are incorporated into UX research.
User Interviews 101: Learn about the dos and don'ts of user interviews.
To learn how to make sense of all the qualitative data, see How to Do a Thematic Analysis of User Interviews.
Successful research interviews hinge on a systematic approach. The basic steps include defining clear research objectives, selecting appropriate participants, crafting well-structured and open-ended questions, ensuring a comfortable environment, actively listening to participants, documenting findings accurately, and conducting thoughtful follow-up analysis. These steps collectively contribute to the success of a research interview. They foster genuine engagement, extract meaningful insights, and lay the groundwork for informed design decisions. Designers should also take care to select the research methodology that best suits their objectives, and the choice of sample groups for the interviews is equally vital to preserving the research context.
Watch this video to learn about the interview analysis process.
Ditte Hvas Mortensen: Talk about the steps that are normally involved when you analyze an interview. And if we start at the beginning – when you've done your interviews, you have a lot of data; you have video, audio, notes, and that can be a bit overwhelming! What's the first step that you take? Ann Blandford: Usually, the first step – certainly for audio data – is *transcribing*.
Sometimes, you will transcribe everything. There are some very detailed video-coding methods that involve *annotating* fine details of what's in a video or larger chunks. It's obviously much harder to transcribe video because it's a much more dynamic medium with too many details
often. Unless there are very, very specific things that you're making notes of, that's a much harder thing to do. And photos you might just annotate in various – again, depending on the purpose of the study: what the question is. So, I'll focus on transcribed audio because that's the most common kind of data to use to focus on for interviews. Normally, once you've got a transcription, you would start to familiarize yourself with it.
I personally like – if I've got *time*, which, of course, I don't often have anymore – but if I do have time, I like to transcribe my own data because the very act of transcribing it is starting to get me much more familiar with the data and I'm starting to think about how I'll code it. And I often make notes on the side – of things that I notice when I'm transcribing that I want to look at in
more detail when I actually get around to doing the analysis. Some colleagues don't like doing their own transcribing. They feel that that's not a good use of their time. And so, they then have to invest a bit more time in getting familiar with the data after it's been transcribed by somebody else. And they may do that by starting to code it. And by "coding", what I mean is making notes in the transcription of particular kinds of points that are made
or particular words or phrases that are used, or particular ideas that seem to be expressed in the data. And so, there's a layer of getting familiar with the data that involves a *first level of coding*. I always like doing that first with pen and paper and colored pencils and things like that
– to get myself more familiar with the data. If you have a lot more data, though, and if you've got a bigger data set, it's useful to use qualitative data analysis tools at this point – tools like NVivo or ATLAS or Dedoose or MAXQDA. There are probably others I don't know the names of
– all of which basically help you to code the data, to be consistent in the descriptions of the codes as you're annotating bits of text. You can tell that I think about this visually because my hand is going backwards and forwards in front of the screen here. And you're building up a set of codes that capture the main ideas in the data. If you have some very, very clear questions from the outset, you might have predefined your codes that you're
using and then look through the data to find and flag instances of those codes. If you're doing an *exploratory analysis*, you may be developing a set of codes as you go through the data. And certainly, you will almost always find that you'll get partway through a data set and realize that there's some important idea that you've actually encountered a few times before this point and you've only just noticed that it matters.
So, you'll almost certainly have to go through the whole data set more than once to be sure that it's all coded consistently, or you may find that your mental definition of a code drifts as you go through the data set, and so it started to mean one thing at the beginning, and then by the end it actually means something that's not completely unrelated but is importantly different.
And, of course, you may also realize that you've got two separate codes for things that you *thought* were importantly different, but actually they're both – you realize that they are actually the same basic idea and that the differences don't matter too much. So, you then might combine two codes into one – so, you can separate them out; you can combine them. But what you're trying to do through this process is partly just *get more familiar with the data*.
You develop ways of navigating through the data set, and you're starting to work out what the important themes in the data are and how those themes are related. So, some of it is in the overt actions that you're doing – whether that's with a pencil and paper, or whether it's using qualitative data analysis software. So, the external actions are actually helping the internal – the *cognition*.
Your making sense of the data is a really important part of what's going on in this process. And then, you identify which are the themes that seem to matter to you in the data. Obviously, if you've got a very, very specific question from the outset, then the themes will fall out quite quickly because you've already decided what they are.
But if it's a more exploratory analysis, then that does actually involve creating narratives from codes; working out how codes relate to each other; building stories; looking for supporting evidence, but also for contradictory evidence, and for explanations all in the data; building accounts, building narratives of what you're finding in the data – what the evidence for that is.
If there's contradictory evidence that's suggesting that your narrative isn't actually accurate, then you need to take that into account and think about what's wrong with your story or why some of the data might be contradicting other bits of the data. Maybe it's because you've got one or two participants who have a very different background and very different
experience from other participants – in which case, is it about narrowing the generalizability of your conclusions and going, "It applies to people like this, but it doesn't apply to people like that."? Or it's important to understand why your results are the way they are. If it all points in the same direction and you've looked through the data and there are no contradictions,
then you can be pretty confident. As part of the process of developing a reasonable narrative, you're typically writing it up because the process of writing is very often an important part of actually doing those later stages of sense-making of the data. And very often we use examples from the data:
you know, illustrative quotations to show how we've interpreted the data. So, you're building the report, kind of, as you're doing the late stages of analysis. And by the time you've finished analysis, you've almost finished writing it up, as well. So, I think those are the main stages, but really what matters is that you're making sense of the data
and finding things in the data that perhaps weren't immediately obvious to somebody else. And sometimes, you find things in the data where there aren't quotations from the data that actually say what it is you've realized, because your insights go beyond what's actually overtly available in the data to things that underlie that that are actually what's driving the things that people are saying
and the account of the use of technology or the way that technology is designed. D.M.: It sounds like an analysis is typically a very iterative process where you go from data to coding to themes and maybe even back to gathering more data. And I guess sort of *the searching for something that opposes your themes* is really an important part
both in checking your data and in telling you when you are done with the analysis. A.B.: Yes, and methods such as Grounded theory talk about *theoretical saturation*, which is the idea that actually you get more data and you're not learning anything new. So, you analyze new data, and all the themes in it are themes that are already familiar to you.
And that's the ideal stopping point for a study. But as I've already said, you know – pragmatically, realistically we often have to stop a study earlier than the point where we've got saturation, and that may mean that the theory or the insights are more bounded or have a narrower scope or less certainty underpinning them.
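To make the mechanics of coding and theoretical saturation more tangible, here is a minimal, illustrative sketch. The codebook, the keyword matching, and the sample snippets are all invented for this example; real qualitative coding is interpretive and is typically done with dedicated tools such as NVivo or ATLAS.ti rather than by keyword matching:

```python
# Illustrative sketch of a first coding pass and a rough saturation check.
# The codebook and transcripts are invented; this is not a real analysis method.

from collections import Counter

# Hypothetical codebook mapping keywords to code labels.
CODEBOOK = {
    "workaround": "workarounds",
    "confusing": "usability_friction",
    "trust": "trust_in_system",
    "manual": "manual_effort",
}

def code_transcript(transcript: str) -> Counter:
    """Tag each sentence with every code whose keyword appears in it."""
    codes = Counter()
    for sentence in transcript.lower().split("."):
        for keyword, code in CODEBOOK.items():
            if keyword in sentence:
                codes[code] += 1
    return codes

def new_codes_per_interview(transcripts: list[str]) -> list[set]:
    """Which codes appear for the *first* time in each successive interview?
    Empty sets for later interviews suggest you are approaching saturation."""
    seen: set = set()
    first_appearances = []
    for transcript in transcripts:
        codes = set(code_transcript(transcript))
        first_appearances.append(codes - seen)
        seen |= codes
    return first_appearances

# Invented interview snippets:
interviews = [
    "I built a workaround because the export was confusing.",
    "The manual steps are confusing. I do not trust the sync.",
    "Another workaround: I trust the spreadsheet more than the tool.",
]
print(new_codes_per_interview(interviews))
# The third interview introduces no first-time codes - the themes recur.
```

This mirrors Blandford's stopping criterion: when analyzing new data teaches you nothing new, you have a principled reason to stop interviewing, even if, pragmatically, studies often have to stop earlier.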
A user interview focus framework is a practical plan that ensures we create things people genuinely need. It aligns with the idea that people don't just buy products; they buy solutions to their problems. This framework consists of five straightforward steps: find the right people looking for a solution, share your idea with them, check if they're willing to pay, make sure your solution works, and, if successful, grow and automate. It's a smart way to use user interviews to build things that truly solve real problems.
Conducting user interviews involves a systematic approach. Begin by defining clear research objectives and identifying the target audience. Craft open-ended questions to encourage participants to share their experiences openly. The next step is to choose a suitable interview format. Depending on the research goals, it could be structured, unstructured, or semi-structured. Ensure a comfortable environment for participants, whether in-person or remote. Listen to responses, ask follow-up questions for depth, and document findings meticulously. Post-interview, analyze the data, identify patterns and iterate the design process based on the insights gathered. By following this structured approach, you can conduct an insightful user interview.
The number of user interviews needed depends on the research goals and the project’s complexity. While there's no one-size-fits-all answer, a common rule of thumb is to conduct at least five to eight interviews per user segment – for instance, a product with three distinct segments would call for roughly 15 to 24 interviews in total. This typically uncovers recurring patterns and provides a solid foundation for decision-making. However, the ideal number may vary based on the project's scope, the diversity of the user base, and the level of detail required. Iterative testing and continuous feedback loops may prompt additional interviews as the design evolves. If you continuously upgrade your designs based on your research insights, consider conducting another round of interviews after every significant update.
User interviews are generally safe when conducted ethically and with the well-being of participants in mind. Researchers should prioritize informed consent, clearly communicate the purpose of the interview, and ensure participants' anonymity when necessary. Protect sensitive information and adhere to data privacy regulations. Remote interviews should be conducted on secure platforms, and any incentives offered should be reasonable and ethical. By following ethical guidelines, user interviews can provide valuable insights while respecting participants' rights and privacy. Questions should be carefully curated to avoid hurting any participant’s personal, emotional, or cultural sentiments. Once these factors are accounted for, user interviews can be safely used for design research.
User interviews offer a personalized and in-depth understanding of user experiences, allowing designers to uncover insights beyond quantitative data. They facilitate empathy by connecting directly with users, leading to more human-centered designs. The qualitative nature of user interviews is invaluable for exploring motivations, pain points, and emotions. Additionally, the adaptability of user interviews, whether in person or remote, and their ability to uncover contextual insights make them a versatile tool for various stages of the design process. User interviews can be used in multiple types and areas of design research. Choosing user interviews empowers designers to create solutions that authentically resonate with user needs.
In this final video, Ann Blandford explains what interviews are good for – and what they are not.
So, semi-structured interviews – well, any interview, semi-structured or not, gets at people's perceptions, their values, their experiences as they see it, their explanations about why they do the things that they do, why they hold the attitudes that they do. And so, they're really good at getting at the *why* of what people do,
but not the *what* of what people do. That's much better addressed with *observations* or *combined methods* such as contextual inquiry where you both observe people working and also interview them, perhaps in an interleaved way about why they're doing the things that they're doing or getting them to explain more about how things work and what they're trying to achieve.
So, what are they *not* good for? Well, they're not good for the kinds of questions where people have difficulty recalling or where people might have some strong motivation for saying something that perhaps isn't accurate. Of those two concerns, I think the first is probably the bigger one in HCI
– that... where things are unremarkable, people are often *not aware* of what they do; they have a lot of *tacit knowledge*. If you ask somebody how long something took, what you'll get is their *subjective impression* of that, which probably bears very little relation to the actual time something took, for example. I certainly remember doing a set of interviews some years ago
where we were asking people about how they performed a task. And they told us that it was like a three- or four-step task. And then, when we got them to show us how they did it, it actually had about 20, 25 steps to it. And the rest of the steps they just completely took for granted; you know – they were: 'Of course we do that! Of course we—' – you know – 'Of course that's the way it works! Of course we have to turn it on!' And they just took that so much for granted that *it would never have come out in an interview*.
I mean, I literally can't imagine the interview that would really have got that full task sequence. And there are lots of things that people do or things that they assume that the interviewer knows about, that they just won't say and won't express at all. So, interviews are not good for those things; you really need to *observe* people to get that kind of data. So, it's good to be aware of what interviews are good for and also what they're less well-suited for. That's another good example of a kind of question that people are really bad at answering,
not because they're intentionally deceiving usually, but because we're *not* very good at *anticipating what we might do in the future*, or indeed our *attitudes to future products*, unless you can give somebody a very faithful kind of mock-up
and help them to really imagine the scenario in which they might use it. And then you might get slightly more reliable information. But that's not information I would ever really rely on, which is why *anticipating future product design is such a challenge* and interviewing isn't the best way of getting that information.
Here's the entire UX literature on User Interviews by the Interaction Design Foundation, collated in one place:
Take a deep dive into User Interviews with our course User Research – Methods and Best Practices.
How do you plan to design a product or service that your users will love, if you don't know what they want in the first place? As a user experience designer, you shouldn't leave it to chance to design something outstanding; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as the largest part of user experience design.
In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required, and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, subsequently, outperforms your competitors’.
This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!
By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!
We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!
We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.
If you want this to change, link to us or join us to help us democratize design knowledge!