The Basics of Recruiting Participants for User Research

Participant screening is the process of filtering candidates for user research and market research. Researchers find the right participants with screener questionnaires.
User researchers use screener questions to gather relevant data and make informed decisions about who should participate in a study. Selecting the right participants is essential to obtain accurate and useful insights. Participants should represent potential users and have relevant knowledge and user experience (UX).
In the video, Human-Computer Interaction and UX Expert William Hudson explains how best to screen research participants.
Participant screening is a foundational step in UX research. With thorough screening processes, researchers can guarantee their work effectively informs and improves the design process, which ultimately leads to better user experiences. Below are a few reasons why teams should invest in participant screening:
When researchers select participants who reflect the target user base, they can gather insights that are directly applicable to the users they design for—crucial for informed design decisions.
Screening focuses research on individuals whose experiences and feedback are most valuable. It’s important in studies with limited time and resources.
Participant screening contributes to the overall validity and reliability of the research findings. If the study includes a representative sample of the user population, researchers can be more confident their results are accurate and apply them to the broader user base.
Proper screening helps mitigate selection bias. This is critical to create inclusive and equitable designs.
In research that involves meaningful comparisons of different user groups—e.g., experts vs. novices—participant screening makes sure participants accurately represent these groups.
With a clear understanding of participants’ characteristics, researchers can tailor research methods and questions to elicit deeper insights. For example, understanding participants' familiarity with social media platforms can inform how to structure a user interview or usability test.
“You do research because you want to have impact. […] If you don’t have high-quality participants, that’s kind of a non-starter […] If you're not talking to the right people, it's really hard to make the right decisions.”
— John-Henry Forster, Former Senior Vice President of Product at User Interviews
It can be a challenge for researchers to recruit participants for user research studies. They must find interested individuals, arrange the study appointments, incentivize prompt attendance and reliably organize it all.
Additionally, researchers must filter out attendees who lack the right type of experience to provide valuable feedback or insights; subpar participants can adversely affect research quality and design choices.
Researchers should follow the steps below to screen participants efficiently and effectively:
The first step is to decide whom to screen out. Behaviors, experiences and attitudes are the best criteria to define user groups. These are psychographics. Other information, such as gender, age or location, is demographic.
User researchers often prefer psychographics over demographics. Good criteria to screen participants might include:
How familiar they are with a product: Screening participants based on their familiarity with the product helps researchers understand different levels of user experience, from novices to experts. This can shed light on how intuitive a product is for new users versus the additional functionalities that more seasoned users may leverage.
What they use a product for: Researchers need to understand what users use the product for to identify different use cases and the specific needs of each. This can help customize the product to meet varied user expectations and use conditions.
How often they use a product: This can help differentiate between occasional users and power users and provide insights into how the product fits into daily routines and its perceived value to different user segments.
Where they use a product (also called "context of use"): The knowledge of where users typically interact with the product—e.g., at home, at work, in transit—can uncover contextual factors that impact the user experience.
With the careful selection of criteria based on psychographics, researchers can create more effective screener questionnaires to identify the most relevant participants for their studies. This ensures realistic results grounded in the actual user behavior and attitudes, which leads to more informed and actionable design decisions.
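As a concrete illustration, psychographic criteria like these can be encoded as simple predicates and applied to screener responses. This is a hypothetical sketch: the field names, answer options and thresholds are invented for the example, not taken from any particular screening tool.

```python
# Hypothetical screener filter: each psychographic criterion from the text
# (familiarity, use case, frequency, context of use) becomes a predicate.
# All field names and thresholds are invented for illustration.

CRITERIA = {
    # any stated familiarity level is acceptable; we just need an answer
    "familiarity": lambda r: r.get("familiarity") in {"novice", "intermediate", "expert"},
    # this study targets people who use the product to find events
    "use_case": lambda r: r.get("use_case") == "find_events",
    # regular users only: at least 3 sessions per week
    "frequency": lambda r: r.get("sessions_per_week", 0) >= 3,
    # context of use must include mobile
    "context": lambda r: "mobile" in r.get("contexts", []),
}

def passes_screen(response: dict) -> bool:
    """A candidate qualifies only if every criterion holds."""
    return all(check(response) for check in CRITERIA.values())

candidates = [
    {"familiarity": "expert", "use_case": "find_events",
     "sessions_per_week": 5, "contexts": ["mobile", "desktop"]},
    {"familiarity": "novice", "use_case": "messaging",
     "sessions_per_week": 1, "contexts": ["desktop"]},
]

qualified = [c for c in candidates if passes_screen(c)]  # only the first matches
```

A real screener would of course use many more criteria, but the principle is the same: define the user group in terms of behaviors and attitudes first, then filter.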
The objective of screener questions is to obtain genuine and unbiased insights from potential participants. Researchers should take into account that participants often, unknowingly, give answers they think a researcher wants to hear. That’s why researchers should ask non-leading, open-ended questions.
For example, to test how often users use Facebook to search and find musical events, researchers would not want to ask:
“Is Facebook a good way to find musical events?”
But rather:
“How do you use social media?”
This is because they do not want to give away the purpose of the study upfront. The goal is for participants to answer honestly, so questions should not lead to a particular answer. Some participants may join a study only for the incentive and are therefore willing to answer dishonestly.
For example, the following screener question reveals to participants the objective the researchers might be looking for:
“Do you play mobile games at least once a week?”
Instead, a more neutral question is:
“What do you typically do in your free time on a weekly basis?”
If asked the first question, participants might guess the research is about mobile games and respond in a way they believe aligns with the researcher's expectations or their interpretation of the “right” answer.
Open-ended and non-leading questions encourage participants to share their experiences and behaviors without bias. This type of question does not imply that there is a correct answer or a particular perspective that the researcher wants. As a result, participants are more likely to provide candid and varied responses, which can offer deeper and more meaningful insights into their habits, needs and preferences.
Overall, these questions allow user researchers to select participants who truly represent the user group for further UX research methods like card sorting, user testing and so on. Ultimately, the goal is to select participants who can provide the most relevant and insightful feedback for the particular study.
As the team completes their screener questions, they can start to think about participant recruitment. Researchers should use the screener questions to filter out participants who do not match the target audience necessary for testing new or existing designs.
The success of UX research relies on the recruitment of the right participants. Researchers can employ various methods based on the study's needs, timeline and budget. There are several options to recruit the right participants:
It’s imperative that user researchers filter out unsuitable participants, so the results are accurate and relevant to the design project.
© Interaction Design Foundation, CC BY-SA 4.0
Use a recruiting agency: Specialized agencies can streamline the recruitment process via their networks of potential participants. They can quickly identify and screen individuals based on the study’s requirements. While this method can save time and guarantee a high-quality pool of participants, it is often more expensive than other options.
Use an automated recruiting platform: Platforms automate the recruitment and screening processes using algorithms and databases. They offer access to a broad audience and can filter participants based on specific criteria. This method is efficient and cost-effective but may require additional screening to ensure participant quality.
Use existing users: Recruit from your product's current user base for valuable insights, especially feedback on new features or usability testing. These participants already engage with and are familiar with the product, which can lead to more relevant feedback. However, to maintain balance and avoid bias, it’s best to include new users, too.
Use hallway recruiting or guerrilla testing: Researchers approach people in public spaces or within their own organizations for quick and informal testing or interviews. It’s fast and low-cost but may not always provide participants who are representative of your target user base.
Use online forums and social media: Platforms such as Reddit, LinkedIn or specific forums can be excellent sources to find participants with particular characteristics or interests. This method is cost-effective and can reach a wide audience, but it requires more effort to screen and validate.
To achieve success, customize screener questions and the recruitment process according to your study's objectives and context. Continuously refine questions to align with research goals and participant profiles. Remain flexible and patient in recruitment to adapt to changing demographics and availability.
After collecting screener responses, researchers analyze the data to identify candidates who match the study criteria and bring diverse perspectives and experiences to the research. Researchers score responses based on relevance to the research objectives and create comprehensive participant profiles. Scoring allows for objective ranking, while profiling delves into nuances like psychographics and demographics.
Researchers compare profiles against study requirements to find candidates who meet the basic criteria and offer diverse insights. Iterative analysis may refine criteria or explore data deeper for a balanced participant group. This careful selection process guarantees the outcomes will be relevant, robust and actionable.
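The scoring step described above can be sketched in a few lines of Python. The questions, weights and candidate data below are assumptions for illustration only; in practice, a team would derive the weights from the study's objectives.

```python
# Minimal screener-scoring sketch: weight each yes/no screener answer by its
# relevance to the research objectives, then rank candidates by total score.
# Question keys and weights are hypothetical.

WEIGHTS = {
    "uses_product_weekly": 3,   # most relevant: regular, recent usage
    "matches_target_role": 2,   # belongs to the target user group
    "available_this_week": 1,   # logistics matter, but least
}

def score(response: dict) -> int:
    """Sum the weights of every criterion the respondent satisfies."""
    return sum(weight for key, weight in WEIGHTS.items() if response.get(key))

responses = [
    {"name": "A", "uses_product_weekly": True,
     "matches_target_role": True, "available_this_week": False},
    {"name": "B", "uses_product_weekly": False,
     "matches_target_role": True, "available_this_week": True},
]

ranked = sorted(responses, key=score, reverse=True)  # highest score first
```

Scoring like this supports objective ranking; profiling then adds the qualitative nuance the numbers cannot capture.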
At this point in the process, researchers can also take the opportunity to choose “floaters” or backup participants even if they don’t fully match the criteria for the target audience.
For example, if the target audience of a study includes university students studying for their PhDs and one drops out of the study at the last minute, researchers could include a university student studying for a Master's degree. While they don’t match the criteria 100%, they might be a convenient replacement to fill the empty spot on short notice.
Researchers may follow up with potential participants to verify their information and confirm their willingness and availability to participate in the study. Researchers should:
Verify information: It's essential to confirm the accuracy of the information participants provided during screening. This can include the reconfirmation of demographic details, previous experiences or specific behaviors relevant to the study. This confirms participants fit the intended profile and will contribute meaningfully to the research outcomes, or signals that researchers should replace them.
Confirm willingness and availability: Researchers need to verify participants remain interested and available to participate. Review session dates and times and any compensation offered, and confirm participants understand the commitment required. At this point, the team should openly discuss and sign any legal documents, such as non-disclosure agreements (NDAs) and privacy permissions.
Check technical requirements: For remote studies, researchers must ensure participants have the necessary technical capabilities. Verify internet connection speed, confirm compatibility with the virtual meeting software and provide troubleshooting support for technical issues.
Schedule sessions and prepare participants: Finally, researchers schedule study sessions and provide participants with all necessary information, such as the study location (for in-person studies), technical requirements (for remote studies) and what to expect during the session—study duration, pre-study questionnaires, the setup of specific apps, etc.
When researchers meticulously follow these steps, they can facilitate a smooth and efficient study process with well-informed and prepared participants who are comfortable with their involvement in the research. Through goodwill, researchers can potentially keep the door open with participants for future research collaborations, as well.
Common pitfalls in participant screening include:
The recruitment of participants who don't fit the profile needed for the study.
The underestimation of the importance of screening questions.
The disregard for the diversity required to obtain comprehensive insights.
Researchers can avoid these pitfalls if they clearly define the target audience and its characteristics before recruitment, create precise, relevant screening questions to filter out unsuitable candidates and recruit a diverse participant pool for varied perspectives.
Actionable steps include: revise screening questionnaires based on initial feedback, with frequent consistency checks; continuously monitor the recruitment process to ensure it attracts the right participants and avoid “failing” participants; vary the questions’ language to avoid “straightlined” results; and report and score respondents who simply try to game the system.
Watch and listen as William Hudson explains the various pitfalls researchers encounter during the recruitment process and how best to avoid them.
For instance, if a study aims to understand the behaviors of experienced scuba divers, the screening process should first include open-ended, non-leading questions to confirm the participants scuba dive. Then it should ask more specific questions about their experience levels, such as the number of training hours they've logged, the depths and locations they've dived, how many years they've been diving or how frequently they dive.
If a participant claims the highest experience level along with the most training hours, the deepest dives, the most locations, the most years and the highest dive frequency, there's a good indication the results might be straightlined.
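A simple automated check can flag straightlined responses like this one. The sketch below is illustrative: the question names and the assumption of a shared 5-point scale are invented for the example, and a real screener would tune both.

```python
# Sketch of a straightlining check: flag respondents whose scale answers all
# sit at the same extreme. Question names and the 5-point scale are assumed.

def is_straightlined(answers: dict, extreme: int = 5) -> bool:
    """True when every scale answer equals the extreme value."""
    values = list(answers.values())
    return len(values) > 1 and all(v == extreme for v in values)

suspicious = {"experience_level": 5, "training_hours": 5,
              "depth_rating": 5, "dive_frequency": 5}
plausible = {"experience_level": 5, "training_hours": 3,
             "depth_rating": 4, "dive_frequency": 5}
```

A flag like this is a prompt for human review, not an automatic disqualification: some genuine experts will score high across the board.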
Designers need to design for all users and keep in mind the user experience looks, feels and sounds different to everyone, especially those with disabilities. It starts with recruitment, screening and serious user research. Accessibility is usability that focuses on people with disabilities.
© Interaction Design Foundation, CC BY-SA 4.0
Researchers recruit and screen participants for accessibility studies through specific strategies tailored to include individuals with disabilities. Examine the approach below:
1. Clearly identify what types of disabilities are relevant to the study. Accessibility includes a wide range of needs, including visual, auditory, motor and cognitive impairments. Researchers should test for exactly who they need—a hard-of-hearing person will most likely not have the same responses as someone with a learning disability, for example.
2. Use platforms and organizations that cater to individuals with disabilities. This could include disability advocacy groups or job fairs, online communities, schools and specialized agencies. The recruitment message should be accessible with clear language and alternative formats as needed.
3. Design accessible screening questionnaires for all potential participants. This includes the use of screen-reader-friendly formats, options for those with visual impairments and clear and straightforward language with language options.
4. Communicate with potential participants to accommodate their needs. For instance, you could provide information in Braille, use sign language interpreters or employ text-based communication for those who have auditory impairments.
5. Be flexible in scheduling sessions and consider the various needs of participants with disabilities. This includes transportation challenges and the need for breaks during sessions.
6. Ensure the screening process respects participants' dignity and privacy. Be sensitive when discussing disabilities. Use gentle language and avoid making assumptions about participants' abilities.
In the video below, UX Expert and CEO of Experience Dynamics, Frank Spillers, discusses more about the recruitment process for accessibility tests in UX design.
Let's talk now about how to actually get users, how to recruit them, and then a little bit more about moderation and how to take action on the feedback that you get from an accessibility test. For recruiting users, you can get some help from disability organizations or schools or disability coordinators, especially if you're testing younger populations.
And then also attending job fairs or groups and organizations where people with disabilities meet. You can attend advocacy groups or, for example, the National Federation of the Blind has chapters across the United States. Maybe there's a disability group in your area. Maybe there's an accessibility group, or somebody you know, a friend or a friend of the family,
or something like that, that could connect you to an organization. So this is a good source of recruiting. And for a lot of people, it starts with somebody who is dyslexic or somebody who is mobility impaired, in a wheelchair, and maybe they go to a work center or a program where they can connect you to somebody. So that's a good source for recruiting.
The other way is to outsource the recruit, so an agency can handle that for you. And when you recruit users, you create a screener. A screener (and we'll include a template here for you) is going to help you target who you need to target. The key questions to add to a regular screener are: What disabilities do you currently have? What assistive technology are you using?
And you can often provide examples, such as a screen reader, voice input, or an alternative keyboard or pointing device. Users are not going to use the term "assistive technology," or certainly not "AT"; that's jargon for you and me. But if you describe some examples of how assistive technology is used, they'll tell you: "Yeah, I turn my contrast up," and that kind of thing. So they might tell you they're legally blind,
or they're not legally blind but they have visual impairments and use this, this and that. It's important to get context for the tool they're using. Identify that upfront. Maybe ask them what version, if they know what that is; a lot of people won't. How long have you used it? How do you currently use it? These are important questions to understand the context of their familiarity with assistive technology tools. One of the dirty little secrets that I realized a long time ago
that no one talks about is that a lot of assistive technology tools require a level of proficiency, a level of understanding, and a level of familiarity with the settings, configuration and so forth. In other words, the usability of assistive technology tools is an issue as well. A lot of users struggle with complex interfaces in these assistive technology tools and devices. And young people have to be taught how to use them,
like in schools: there'll be a program where that person goes through training in how to actually use the tool. This is very common, but not everybody gets the training, and not everybody has the experience of a disability when they're young. Some people acquire it later in life, for example. So "What types of things do you do?" is an important question. And don't ask someone if they're an expert.
That doesn't work; it's all relative. You might be an "expert" in Outlook or email, and I still struggle with it after using it for 20 years. If you want to find out whether someone's technically inclined, ask them if they have a smartwatch, ask them if they like to play with technology. Did you set up your own system? Did you need help? Are you the kind of person who troubleshoots? That's the kind of question to smoke out
someone's expertise with the technology. It's useless asking someone on a scale of 1 to 3 or 1 to 5 whether they're an expert or not. "What types of things do you do?" gets more into a field-study-type question. You can also ask about phone versus desktop and, if they use, for example, a digital Braille reader, what they access on it and what they use their devices for, iPad and so forth.
More questions to ask: users may ask you how long the session is, so you can give them an estimate. For example, you can say we'll need 30 to 60 minutes, no more than an hour. This is where a dry run will help you calibrate the time. Their session shouldn't take any more than an hour, so 60 minutes,
or 60 to 90 minutes at the max, I would say. And this is with, for example, 7 to 10 users; you might have 7 to 10 users at a 60-minute average appointment. That's pretty typical for the accessibility testing that we do. I think the lowest number of users we've done is five, on a kind of agile accessibility test, especially when you go back and do accessibility QA testing and some quick
checking after the optimization has occurred, to make sure you're on the right track. Then you're going to test on a smaller number of users. It's amazing what even seven users can get you in terms of insight; my mind has been blown by seven users. Just unbelievable what you can learn. A few other things, too: if you have different types of users, you need to make sure you're accommodating them.
If you have someone who's hard of hearing: would you like an assistive listening device? (That's more for when you bring them to your lab.) Or do they need a sign language interpreter if they're deaf? These are the kinds of questions to ask. A lot of people may have someone helping them, so you'll be liaising and coordinating with that person on things like arranging a taxi to your facility. The usual types
of politeness apply: offering someone a drink, making sure they're comfortable, making sure they can reach the toilet or the bathroom if they need it. All of that goes with good moderation and is critical once you get them into the task. Now, as you prepare for your tests, it's important that you get familiar with a screen reader. Screen readers are good. Remember, though, that they are contrived.
In other words, if you start getting good at a screen reader, you're not the user, right? Remember that golden rule of UX: you are not the user. You can pretend you're the user, but you're not the user. A screen reader is a nice way to get into how a screen reader "thinks" and to do some quick checking of your own, and it takes a while to get comfortable and familiar with a screen reader. If you're a developer, you should at least choose VoiceOver or TalkBack and get comfortable with it. Here are a couple of videos you can watch that will familiarize
you with it and show you how to set it up and access the settings. Your homework assignment over the next month is to practice using your screen reader and get familiar with basic navigation. So two, three, four times a week, pull out your screen reader, turn it on, and go for it: cruise around a website or your phone, then turn it off and carry on, and so forth.
Maybe if you have a dedicated device, it'll be easier; you can just pick it up and play. So that's your homework. It'll also help you get used to the fast-talking speech engine that blows a lot of people away. Doing usability testing is overwhelming for a lot of people at first, and accessibility testing is like twice as overwhelming to observe. Just as accessibility issues are twice as difficult to experience
as a user with disabilities, it's twice as tricky to observe a session when you're not familiar, but you will gain familiarity and over time you'll get comfortable with it. So getting familiar with the screen reader can help. Some of the logistics we've already talked about: testing in the user's native setup, that's important, right? Be sensitive to users' comfort levels as well. Say you have someone on the autism spectrum, or
someone on the dyslexia spectrum ("dyslexia spectrum," I think, is the latest term). Dyslexia is not just one thing; it's not on and off. There's a spectrum of it. Autism has a spectrum. Some people with autism are very uncomfortable just being around other people. If you're not familiar with the different types of learning disabilities and other cognitive disabilities,
you want to get familiar: ask people, contact an organization. There's a lot of very valuable disability education about different disabilities online, and you can Google or YouTube videos of people talking about them. The autism spectrum can mean intense shyness, and users may shut down in the middle of a session. So you have to be very careful and very gentle when you're moderating; don't be demanding or pushy or rush your users.
Know when to back off. Let them have space. Sometimes just say nothing; let the stress pass. Dyslexia is essentially a stress response, so if you get someone with dyslexia and ask them to start reading stuff to test whether it's dyslexia-friendly, well, no wonder they're stressing out: they know you're looking for information from them. Talking to people on the phone as you recruit them, so they know your voice, is a really good idea.
We found this particular strategy to be extremely helpful. And if they need to have somebody else there to support them, that's totally fine. Tell them what to expect: explain the whole scenario on the phone and go through it a number of times with them so that it's totally clear and they don't have any issues or questions, and they don't feel exploited or taken advantage of. Tell them what you're trying to do. Most people these days will totally understand
the need to improve accessibility and will be very willing to help you, especially with the high level of unemployment in the disability community. Remember, something like 70% of blind users are unemployed, so there's massive marginalization out there. Incentivize your users. Don't give people $10 or $20; in the United States, we're giving people between $50 and $100, an average of about $75, for that one-hour
or hour-and-a-half session, depending on how much work you're making them do. I honestly charge by the pain. I also like the idea of providing a baseline and then saying: if you find an accessibility issue, we'll give you a dollar for each one you find. What I like about that idea is that it incentivizes finding the pain most people go through with accessibility issues, so now you're actually rewarding that. It's the same idea as paying a hacker to find issues in your security model.
So incentivize that; it's the same type of incentivization program. And the other thing: make sure to ask how people want to be paid, whether it's cash, check or whatever. Cash is usually king, so usually cash does the trick, just like with traditional user studies.
Access the Accessibility Template mentioned in the video below:
Dive deeper into user research with our courses:
Data-Driven Design: Quantitative Research for UX
Conducting Usability Testing
User Research – Methods and Best Practices
Read more from the IxDF: How to Screen Research Participants.
Watch the IxDF Master Class How to Get Started with Usability Testing with Cory Lebson for more user research insights.
Usability.gov offers a handy Guide to Recruiting Usability Test Participants.
Michael Margolis, UX Research Partner at GV offers a primer on Finding Participants.
Learn about participant screening in the insightful article, Recruiting and Screening Candidates for User Research Projects, as well as the importance of “floaters” in How and Why to Recruit Backup Participants (aka “Floaters”) in User Research from Nielsen Norman Group.
Read about tips for effective participant screening from this article, 11 tips for effectively screening test participants from User Testing.
Learn how to create screener surveys via User Interviews.
Listen to Former User Interviews’ Senior Vice President, JH Forster, in the Awkward Silences podcast episode Unlocking Innovation with the Right Research Participants.
During participant screening, you should be aware of several ethical considerations to ensure the process is fair, respectful, and protects the rights of participants:
Informed consent: Make sure participants understand the purpose of the user research study, what participation involves and any potential risks or benefits. Obtain their informed consent before they participate in any screening or research activities.
Privacy and confidentiality: Protect participants' personal information. Only collect data necessary for the study and keep it confidential. Use secure methods to store and handle data.
Non-discrimination: Avoid discrimination in the screening process. Verify the criteria do not unjustly exclude individuals based on race, gender, age, disability or other factors irrelevant to the study's objectives.
Transparency: Be clear and honest about the purpose of the screening and the nature of the study. Avoid misleading participants about the study's goals or their role in it.
Voluntary participation: Participation should always be voluntary. Participants should know they can withdraw from the screening or study at any time without penalty.
Respect for participants: Treat all potential and selected participants with respect and dignity. This includes respecting their time and the provision of clear instructions and feedback.
Follow along with Creativity Expert and IxDF Instructor Alan Dix as he discusses building trust during the research process.
To guarantee diversity in your UX research participant group, follow these strategies:
Define what diversity means for your project: Understand the dimensions of diversity relevant to the research, such as age, gender, ethnicity, geographic location and user experience level.
Set clear recruitment goals: Based on the project definition, establish concrete, measurable objectives for participant diversity.
Use varied recruitment methods: Don't rely on a single channel. Use different platforms and methods to reach a wide range of potential participants.
Screen for diversity: Incorporate questions in the screening process so your participants meet the diversity criteria you've set.
Monitor and adjust: Continually assess the diversity of the participant pool and make adjustments to the recruitment strategy as needed.
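The "monitor and adjust" step above can be sketched as a simple quota check. This is a minimal illustration, not a prescribed tool; the group names and target counts are illustrative assumptions:

```python
# Minimal sketch of diversity-quota monitoring during recruitment.
# The age bands and targets below are illustrative assumptions.
from collections import Counter

def remaining_slots(recruited_groups, targets):
    """Return how many more participants each group still needs."""
    counts = Counter(recruited_groups)
    return {group: max(need - counts.get(group, 0), 0)
            for group, need in targets.items()}

targets = {"18-29": 5, "30-49": 5, "50+": 5}   # desired participants per band
recruited = ["18-29", "18-29", "30-49", "50+", "30-49"]
print(remaining_slots(recruited, targets))
# {'18-29': 3, '30-49': 3, '50+': 4}
```

A recruitment team could re-run a check like this after each batch of screener responses and shift outreach toward the groups with the most open slots.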
There may not be books dedicated entirely to participant screening, since the topic usually falls under the wider umbrellas of UX research, market research or psychology. However, several popular books provide substantial insights into participant screening as part of their broader discussion of research methodologies:
Kuniavsky, M. (2003). Observing the User Experience: A Practitioner's Guide to User Research. San Francisco, CA: Morgan Kaufmann. This book provides a comprehensive overview of user experience research, including methods for recruiting and screening participants. It's an excellent resource for anyone involved in UX design and research.
Hall, E. (2013). Just Enough Research. New York, NY: A Book Apart. Erika Hall's book is known for its practical advice and straightforward approach to research in design. It includes sections on how to conduct effective research, which inherently involves participant screening.
Portigal, S. (2023). Interviewing Users: How to Uncover Compelling Insights. Brooklyn, NY: Rosenfeld Media. While this book focuses more on the interviewing aspect, it also delves into how to find and select the right participants for user interviews, which is a crucial part of the screening process.
Buley, L. (2013). The User Experience Team of One: A Research and Design Survival Guide. Brooklyn, NY: Rosenfeld Media. This book is particularly useful for solo researchers or small teams. It covers a range of topics including how to conduct research effectively, which involves identifying and screening the right participants.
Handle sensitive information collected during participant screening with strict adherence to data protection principles of confidentiality and security. Here are steps to properly manage this information:
Keep data anonymous: Remove or alter any personal identifiers from the data as soon as possible. Use codes or pseudonyms instead of real names.
Secure storage: Store sensitive information in secure, encrypted formats. Limit access to this data to only those who need it for the study.
Consent and transparency: Confirm participants are aware of what sensitive information the team will collect, why it is necessary and how the researchers will use it. Obtain participants’ informed consent specifically for the use and storage of sensitive data.
Data minimization: Only collect information that is absolutely necessary for the objectives of the study.
Compliance with laws and regulations: Adhere to local and international data protection laws and regulations, such as the General Data Protection Regulation (GDPR).
Data usage limits: Use the sensitive information only for the purposes stated during the consent process. Do not use the data for unrelated studies or activities without obtaining additional consent.
Data disposal: Once the data is no longer needed or the study concludes, securely dispose of the sensitive information according to proper data destruction protocols.
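The anonymization step above (replacing names with codes) can be sketched as follows. This is a minimal illustration under stated assumptions: the field names, the salt value and the code format are all hypothetical, and a real study would keep the salt secret and apply the project's own data protection policy:

```python
# Minimal sketch of pseudonymizing screener records before storage.
# Field names, the salt and the code format are illustrative assumptions.
import hashlib

def pseudonymize(record, salt="study-42"):
    """Replace directly identifying fields with a stable participant code."""
    identifier = (record["email"] + salt).encode("utf-8")
    code = "P-" + hashlib.sha256(identifier).hexdigest()[:8]
    cleaned = {k: v for k, v in record.items() if k not in ("name", "email")}
    cleaned["participant_code"] = code
    return cleaned

record = {"name": "Ada", "email": "ada@example.com", "age_band": "30-49"}
print(pseudonymize(record))
```

Deriving the code from a salted hash keeps it stable across screener and study sessions without storing the email itself alongside the research data.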
Follow along as William Hudson, UX Research Expert, discusses the difference between qualitative and quantitative research:
Screening for Qualitative Studies:
Screen for participants who can provide rich, detailed responses. Qualitative research seeks in-depth insights, so participants should be comfortable with open-ended questions and discussions.
Select a diverse range of participants who can offer varied viewpoints. While the sample size is generally smaller, the goal is to cover a wide spectrum of experiences and attitudes.
Choose participants who are articulate and willing to share their thoughts and feelings. The ability to communicate experiences and opinions is crucial in qualitative studies.
Screening for Quantitative Studies:
Screen for participants who represent the larger population for the study. Quantitative research aims to generalize results, so the sample should accurately reflect the demographics of the target audience.
Plan for a larger number of participants, as statistical validity requires bigger sample sizes than qualitative research does.
While qualitative research may allow for more flexibility in participant selection, quantitative studies often require stricter adherence to predefined criteria to ensure the sample's relevance to the research questions.
In summary, qualitative screening focuses on finding participants who can provide depth and narrative, while quantitative screening emphasizes representativeness and adherence to specific demographic or behavioral criteria.
Yes, you can use social media for participant screening and recruitment in UX research. Social media platforms like LinkedIn, Reddit, Instagram, TikTok, etc., offer a vast pool of potential participants, which makes them valuable tools for reaching diverse and specific demographics.
To do so, researchers can identify target platforms—for example, LinkedIn might be more suitable for professionals, while Instagram or TikTok could be better for younger audiences. Develop compelling posts or ads that clearly outline what the study requires from participants and what they can expect from it. The message should resonate with the target audience you wish to recruit.
Use relevant hashtags to reach broader audiences, or post in specific groups related to your research area to find participants who are genuinely interested in the topic. Direct interested individuals to a screening survey or form that collects more detailed information and verifies they meet the study criteria.
“Monitor, engage, and be transparent; these have always been the keys to success in the digital space.”
— Dallas Lawrence, Chief Strategy & Communications Officer at Telly
Remote Studies:
Technical requirements: Screen for participants who have the necessary technology, such as a reliable internet connection, a computer or mobile device and any specific software or apps required for the remote study.
Digital proficiency: Guarantee participants are comfortable using digital tools and platforms, as they will need to navigate these independently during the study.
Time zone considerations: Consider participants' time zones to schedule sessions that are convenient for everyone involved.
Communication skills: Look for participants who can communicate clearly and effectively in a remote setting, as misunderstandings can be more common without face-to-face interaction.
Once the screening is complete, teams can continue their remote work with unmoderated remote usability tests. Learn more with IxDF’s 4 Common Types of Usability Tests template:
In-Person Studies:
Geographic location: Screen for participants located near the study venue or willing to travel to the location.
Availability: Confirm participants can come to the study location at the scheduled times.
Health and safety: Depending on current health guidelines, screen for participants who meet certain health criteria or agree to follow safety protocols during in-person sessions.
The number of participants to screen to find the ideal candidate for a research study depends on several factors, such as the specificity of the criteria, the diversity of the target population and the overall size of the desired participant group. Generally, it’s better to screen a larger pool than the target number to account for individuals who may not meet all criteria or may drop out of the study.
For qualitative studies, a smaller, more focused participant group is usually sufficient. In quantitative studies aimed at statistical analysis, you may need to screen many more participants to ensure a representative sample of the broader population. A pool of 40 to 50 participants is a good start, but studies can grow to hundreds or even thousands of individuals, depending on their scale and the prevalence of the target demographic.
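The over-recruitment arithmetic above can be sketched as a back-of-the-envelope calculation. The qualification and drop-out rates here are illustrative assumptions; plug in rates observed in your own past studies:

```python
# Minimal sketch of sizing a screening pool for a target sample.
# The 50% qualification and 15% drop-out rates are illustrative assumptions.
import math

def screening_pool_size(target_n, qualify_rate=0.5, dropout_rate=0.15):
    """Estimate how many people to screen to finish with target_n participants."""
    completed_per_screened = qualify_rate * (1 - dropout_rate)
    return math.ceil(target_n / completed_per_screened)

# To end with 40 participants when half of screened people qualify
# and 15% of qualified participants drop out:
print(screening_pool_size(40))  # 95
```

The point of the sketch is simply that screening pools shrink twice, once at qualification and once at drop-out, so the pool you contact must be noticeably larger than the sample you report on.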
In quantitative research, a good sample size is crucial for reliable results. William Hudson, CEO of Syntagm, talks about the importance of statistical significance with an example in the video below.
Compensation for UX research participants can vary widely depending on the study's length, complexity and target demographic. Whenever possible, give participants options for how they receive compensation. Choices can increase participation rates and participant satisfaction. Some ideas include:
Cash payments
Gift cards
Discounts or vouchers
Product giveaways
Travel reimbursements
Following up with participants after the screening process is paramount to maintain engagement and a smooth execution of your study. Here's how to effectively manage this process:
1. Send a thank-you message immediately after the screening—express gratitude for their time and interest. This sets a positive tone for future interactions.
2. Inform participants about the next steps. Let them know whether they have been selected for the study and provide details about what they should expect if they move forward.
3. For those chosen, arrange the study sessions promptly. Offer multiple time slots and use scheduling tools to make the process easier for both parties.
4. Send any necessary materials or information they need to prepare for the study. This could include consent forms, guidelines or background information about the study.
For guidance in this arena, use the IxDF’s Research Consent Form template:
5. Send reminders as the study date approaches. Include the date, time, location (for in-person studies) or instructions (for remote studies) so participants are well-prepared.
6. After the study, follow up with participants to thank them again and offer an opportunity for them to provide feedback on their experience. This can improve future studies and participant satisfaction.
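The reminder step in the list above can be sketched as a small date calculation. The three-day and one-day offsets are illustrative assumptions, not a recommended cadence:

```python
# Minimal sketch of computing reminder dates before a study session.
# The (3, 1) day offsets are illustrative assumptions.
from datetime import date, timedelta

def reminder_dates(session_date, offsets_days=(3, 1)):
    """Return the dates on which to send reminders before the session."""
    return [session_date - timedelta(days=d) for d in offsets_days]

session = date(2024, 6, 20)
print(reminder_dates(session))  # [date(2024, 6, 17), date(2024, 6, 19)]
```

In practice a scheduling tool or calendar integration would handle this, but the same offset logic applies whichever tool sends the messages.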
Here's the entire UX literature on Participant Screening by the Interaction Design Foundation, collated in one place:
Take a deep dive into Participant Screening with our course Data-Driven Design: Quantitative Research for UX.
Quantitative research is about understanding user behavior at scale. In most cases the methods we’ll discuss are complementary to the qualitative approaches more commonly employed in user experience. In this course you’ll learn what quantitative methods have to offer and how they can help paint a broader picture of your users’ experience of the solutions you provide—typically websites and apps.
Since quantitative methods are focused on numerical results, we’ll also be covering statistical analysis at a basic level. You don’t need any prior knowledge or experience of statistics, and we won’t be threatening you with mathematical formulas. The approach here is very practical, and we’ll be relying instead on the numerous free tools available for analysis using some of the most common statistical methods.
In the “Build Your Portfolio: Research Data Project”, you’ll find a series of practical exercises that will give you first-hand experience of the methods we’ll cover. If you want to complete these optional exercises, you’ll create a series of case studies for your portfolio which you can show your future employer or freelance customers.
Your instructor is William Hudson. He’s been active in interactive software development for around 50 years and HCI/User Experience for 30. He has been primarily a freelance consultant but also an author, reviewer and instructor in software development and user-centered design.
You earn a verifiable and industry-trusted Course Certificate once you’ve completed the course. You can highlight it on your resume, your LinkedIn profile or your website.
We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.
If you want this to change, link to us, or join us to help us democratize design knowledge!