What is Quantitative Research?
Quantitative research is a methodology researchers use to test theories about people’s attitudes and behaviors based on numerical and statistical evidence. Researchers sample large numbers of users (e.g., through surveys) to obtain measurable data about users in relevant situations, with as little bias as possible.
“Quantification clarifies issues which qualitative analysis leaves fuzzy. It is more readily contestable and likely to be contested. It sharpens scholarly discussion, sparks off rival hypotheses, and contributes to the dynamics of the research process.”
— Angus Maddison, notable scholar of quantitative macroeconomic history
See how quantitative research helps reveal cold, hard facts about users which you can interpret and use to improve your designs.
Use Quantitative Research to Find Mathematical Facts about Users
Quantitative research is a subset of user experience (UX) research. Unlike its more individual-oriented counterpart, qualitative research, it involves collecting statistical/numerical data to draw generalized conclusions about users’ attitudes and behaviors. The table below compares and contrasts the two:
| | Quantitative Research | Qualitative Research |
|---|---|---|
| You aim to determine | The “what”, “where” & “when” of users’ needs & problems – to help keep your project’s focus on track during development | The “why” – to get behind how users approach their problems in their world |
| Methods | Highly structured (e.g., surveys) – to gather data about what users do & find patterns in large user groups | Loosely structured (e.g., contextual inquiries) – to learn why users behave as they do & explore their opinions |
| Number of representative users | Ideally 30+ | Often around 5 |
| Level of contact with users | Less direct & more remote (e.g., analytics) | More direct & less remote (e.g., usability testing to examine users’ stress levels when they use your design) |
| Statistical reliability | Reliable – if you have enough test users | Less reliable; non-numerical data (e.g., opinions) needs careful handling, as your own opinions might influence findings |
Quantitative research is often best begun early in a project, since it helps teams direct product development wisely and avoid costly design mistakes later. Because you typically gather this data from a distance—without close physical contact with users—you should also apply qualitative research to investigate why users think and feel the way they do. Indeed, in an iterative design process, quantitative research helps you test the assumptions you and your design team develop from your qualitative research. Whatever method you use, with proper care you can gather objective data—information you can complement with qualitative findings to build a fuller understanding of your target users. From there, you can work toward firmer conclusions and a more realistic picture of how target users will ultimately receive your product.
[Image: Quantitative analysis helps you test your assumptions and establish clearer views of your users in their various contexts. Author/copyright holder: Teo Yu Siang and the Interaction Design Foundation. License: CC BY-NC-SA 3.0.]
Quantitative Research Methods You Can Use to Guide Optimal Designs
There are many quantitative research methods, each uncovering a different type of information about users. Some, such as A/B testing, are typically used on finished products, while others, such as surveys, can be used throughout a project’s design process. Here are some of the most helpful methods:
A/B testing – You test two or more versions of your design on users to find which performs best. Each variation differs by just one feature, which may or may not affect how users respond. A/B testing is especially valuable for testing assumptions you’ve drawn from qualitative research. The main concerns are scale—you’ll typically need thousands of users—and the added complexity of checking whether differences are statistically significant (see the sketch after this list).
Analytics – With tools such as Google Analytics, you measure metrics (e.g., page views, click-through rates) to build a picture of user behavior (e.g., “How many users take how long to complete a task?”).
Desirability Studies – You measure an aspect of your product (e.g., aesthetic appeal), typically by showing it to participants and asking them to select from a menu of descriptive words. Their responses can reveal powerful insights (e.g., 78% of participants associate the product/brand with “fashionable”).
Surveys and Questionnaires – When you ask many users for their opinions, you gain large amounts of data. Keep in mind that this tells you what users say they do, as opposed to what they actually do. You can get more reliable results if you incentivize your participants well and use the right format.
Tree Testing – You strip away the user interface so users must complete tasks by navigating the site’s structure through links alone. This helps you see whether an issue is related to the user interface or to the information architecture.
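To make the statistical-significance concern concrete, here is a minimal sketch of how you might check an A/B result with a chi-square test in Python. The conversion counts are hypothetical, and the SciPy test shown is one common choice rather than the only option:

```python
# Minimal A/B significance check with a chi-square test (SciPy).
# The conversion counts below are hypothetical illustration data.
from scipy.stats import chi2_contingency

# Rows: variants A and B; columns: converted vs. did not convert.
observed = [
    [130, 870],   # variant A: 130 of 1,000 users converted (13.0%)
    [165, 835],   # variant B: 165 of 1,000 users converted (16.5%)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# A small p-value (conventionally < 0.05) suggests the difference is
# unlikely to be chance alone; otherwise, keep testing or gather more data.
```

Note that even a significant result says nothing about why users preferred one variant—that is where qualitative follow-up comes in.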
Another powerful benefit of quantitative research is that you can maintain stakeholder support with hard facts and statistics about your design’s performance—showing what works well and what needs improvement—and demonstrate a good return on investment. You can also produce reports that compare statistics across versions of your product and against competitors’ products.
Most quantitative research methods are relatively cheap. Since no single research method can answer all your questions, it’s vital to judge which method suits your project at its current stage. It’s best to spend appropriately on a combination of quantitative and qualitative research from early in development: design changes can be costly, so statistics suggesting that a change will improve usability help you estimate the value of implementing it. Overall, aim to gather measurements objectively, so your personality, presence, and theories won’t bias the results.
Learn More about Quantitative Research
Take our User Research course to see how to get the most from quantitative research.
Video transcript:
00:00:00 --> 00:00:33
When developing a product or service, it is *essential* to know what problem we are solving for our users. But as designers, we all too easily shift far away from their perspective. Simply put, we forget that *we are not our users*. User research is how we understand what our users *want*, and it helps us design products and services that are *relevant* to people. User research can help you inspire your design,
00:00:33 --> 00:01:00
evaluate your solutions and measure your impact by placing people at the center of your design process. And this is why user research should be a *pillar* of any design strategy. This course will teach you *why* you should conduct user research and *how* it can fit into different work processes. You'll learn to understand your target audience's needs and involve your stakeholders.
00:01:00 --> 00:01:37
We'll look at the most common research techniques, such as semi-structured interviews and contextual inquiry. And we'll learn how to conduct observational studies to *really understand what your target users need*. This course will be helpful for you whether you're just starting out in UX or looking to advance your UX career with additional research techniques. By the end of the course, you'll have an industry-recognized certificate – trusted by leading companies worldwide. More importantly, you'll master *in-demand research skills* that you can start applying to your projects straight away
00:01:37 --> 00:01:44
and confidently present your research to clients and employers alike. Are you ready? Let's get started!
What is the difference between qualitative and quantitative research?
Qualitative and quantitative research differ primarily in the data they produce. Quantitative research yields numerical data to test hypotheses and quantify patterns. It's precise and generalizable. Qualitative research, on the other hand, generates non-numerical data and explores meanings, interpretations, and deeper insights. Watch our video featuring Professor Alan Dix on different types of research methods.
This video elucidates the nuances and applications of both research types in the design field.
What is a good sample size for quantitative research?
In quantitative research, determining a good sample size is crucial for the reliability of the results. William Hudson, CEO of Syntagm, emphasizes the importance of statistical significance with an example in our video.
He illustrates that even when results vary between design choices, we need to discern whether the differences are statistically significant or merely products of chance. This ensures the validity of the results and allows for more accurate interpretations. Statistical tools such as chi-square tests can help you analyze the results effectively (the sketch below shows the related sample-size arithmetic). To delve deeper into these concepts, take William Hudson’s Data-Driven Design: Quantitative Research for UX course.
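As a rough illustration of how sample size, effect size, and power interact, here is a sketch of the standard normal-approximation calculation for comparing two conversion rates. The baseline and target rates are hypothetical:

```python
# Sample size per variant for detecting a lift from 10% to 12% conversion,
# at a 5% significance level with 80% power (hypothetical figures).
from scipy.stats import norm

p1, p2 = 0.10, 0.12                # baseline and hoped-for conversion rates
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value, ≈ 1.96
z_beta = norm.ppf(power)           # power term, ≈ 0.84

# Standard two-proportion formula (normal approximation).
n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

print(f"Users needed per variant: {round(n_per_group)}")  # ≈ 3,838
```

Notice how quickly the numbers grow: detecting a two-percentage-point lift already calls for thousands of users per variant, which is why small qualitative samples cannot answer this kind of question.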
Why is quantitative research important?
Quantitative research is crucial as it provides precise, numerical data that allows for high levels of statistical inference. Our video from William Hudson, CEO of Syntagm, highlights the importance of analytics in examining existing solutions.
Video transcript:
00:00:00 --> 00:00:30
*When and Why to use Analytics* Primarily, we're going to need to be using analytics on existing solutions. So, if you're talking about *green field* – which is a brand-new solution, hasn't been built and delivered yet – versus *brown field* – which is something that's already running but perhaps we want to improve it – then we're decidedly on the brown field side.
00:00:30 --> 00:01:00
So, we're looking at existing solutions because it's only existing solutions that can provide us with the analytics. If you haven't got an existing solution, you're going to have to use another technique. And there are obviously many other techniques, but they're not going to provide you with much in the way of *quantitative data*. We do have early-research methods, which we'll be talking about very briefly as an alternative, but predominantly analytics for existing deployed solutions.
00:01:00 --> 00:01:31
Having said that, then if you're looking at a rework of an existing site or app, then looking at current analytics can tell you a lot about what you might like to address; what questions you might like to raise with your team members, stakeholders, users. So, those are important considerations. A good starting point in organizations or teams with low UX maturity is analytics because analytics are easier to sell – to be honest – than qualitative methods.
00:01:31 --> 00:02:01
If you're new to an organization, if they're only just getting into user experience, then trying to persuade colleagues that they should be making important decisions on the basis of six to eight qualitative sessions, which is typically what we do in the usability lab, then you should find by comparison web analytics a much easier thing to persuade people with. And the other issue particularly relevant to qualitative methods
00:02:01 --> 00:02:33
is that quantitative methods tend to be very, very much cheaper – certainly on the scale of data, you are often having to talk in terms of hundreds of dollars or pounds per participant in a *qualitative* study, for various expenses; whereas a hundred dollars or pounds will get you potentially hundreds or thousands of users. And, in fact, if you're talking about platforms like Google Analytics which are free, there is no cost other than the cost of understanding and using
00:02:33 --> 00:03:01
the statistics that you get out; so, obviously it is very attractive from a cost perspective. Some of the things that we'll be needing to talk about as alternatives to analytics or indeed *in addition* to analytics: Analytics can often *highlight* areas that we might need to investigate, and we would then have to go and consider what alternatives we might use to get to the bottom of that particular problem.
00:03:01 --> 00:03:32
Obviously, *usability testing* because you'll need to establish *why* users are doing what they're doing. You can't know from analytics what users' motivations are. All you can know is that they went to *this* page and then they went to *that* page. So, the way to find out if it isn't obvious when you look at the pages – like there's something wrong or broken or the text makes no sense – is to bring users in and watch them actually doing it, or even use remote sessions – watching users doing the thing that has
00:03:32 --> 00:04:00
come up as a big surprise in your analytics data. A/B testing is another relatively low-cost approach. It's – again – a *quantitative* one, so we're talking about numbers here. And A/B testing, sometimes called *multivariate testing*, is also performed using Google Tools often, but many, many other tools are available as well; and you show users different designs;
00:04:00 --> 00:04:33
and you get statistics on how people behaved and how many converted, for example. And you can then decide "Well, yes, putting that text there with this picture over here is better than the other way around." People do get carried away with this, though; you can do this ad nauseam, to the point where you're starting to change the background color by minute shades to work out which gets you the best result. These kinds of results tend to be fairly temporary. You get a glitch and then things just settle down afterwards.
00:04:33 --> 00:05:03
So, mostly in user experience we're interested in things which actually really change the user experience rather than getting you temporary blips in the analytics results. And then, finally, *contextual inquiry* and *early-design testing*: Contextual inquiry is going out and doing research in the field – so, with real users doing real things to try to find out how they operate in this particular problem domain; what's important to them; what frustrations they have;
00:05:03 --> 00:05:30
how they expect a solution to be able to help them. And early-design testing – mostly in the web field these days but can also be done with software and mobile apps; approaches like *tree testing* which simulate a menu hierarchy. And you don't actually have to do anything other than put your menu hierarchy into a spreadsheet and upload it – it's as simple as that; and then give users tasks and see how they get on.
00:05:30 --> 00:06:00
And you can get some very interesting and useful results from tree testing. And another early-design testing approach is *first-click testing*. So, you ask users to do something and you show them a screenshot – it doesn't have to be of an existing site; it can be just a design that you're considering – and find out where they click, and is where they click helpful to them? Or to you? So, these are examples of early-design testing – things that you can do *before* you start building
00:06:00 --> 00:06:34
a product to work out what the product should look like or what the general shape or terminology or concepts in the product should be. And both of these can be used to find out whether you're on the right track. I have actually tested solutions for customers where users had no idea what the proposition was: "What does this site do?"; "What are they actually trying to sell me?" or "What is the purpose of it?" – and it's a bit late to be finding that out in usability testing towards the end of a project, I have to say. And that was indeed exactly what happened in this particular example
00:06:34 --> 00:07:08
I'm thinking of. So, doing some of these things really early on is very important and, of course, is totally the opposite of trying to use web analytics, which can only be done when you finish. So, do bear in mind that you do need some of these approaches to be sure that you're heading in the right direction *long before* you start building web pages or mobile app screens. Understand your organization's *goals* for the interactive solution that you're building.
00:07:08 --> 00:07:31
Make sure that you know what they're trying to get out of it. Speak to stakeholders – stakeholders are people typically within your organization who have a vested interest in your projects. So, find out what it's supposed to be doing; find out why they're rebuilding this site or why this mobile app is being substantially rewritten. You need to know that; so, don't just jump in and start looking for interesting numbers.
00:07:31 --> 00:08:02
It's not necessarily going to be that useful. Do know the solutions; become familiar with them. Find out how easy it is to use them for the kinds of things which your stakeholders or others have told you are important. Understand how important journeys through the app or website work. And get familiar with the URLs – that's, I'm afraid, something that you're going to be seeing a lot of in analytics reports – the references for the individual pages or screens,
00:08:02 --> 00:08:33
and so that you'll understand, when you actually start looking at reports of user journeys, what that actually means – "What do all these URLs mean in my actual product?" So, you're going to have to do some homework on that front. You're also going to have to know the users – you need to speak to the users; find out what they think is good and bad about your solutions; find out how they think about this problem domain and how it differs from others and what kind of solutions they know work and what kind of problems they have with typical solutions.
00:08:33 --> 00:08:59
Also ask stakeholders and colleagues about known issues and aspirations for current solutions. So, you know, if you're in the process of rebuilding a site or an app, *why* – is it just slow-ish? Is it just the wrong technology? Maybe. Or are there things which were causing real problems in the previous or current version and that you're hoping to address those in the rebuild.
Quantitative methods, like analytics and A/B testing, are pivotal for identifying areas for improvement, understanding user behaviors, and optimizing user experiences based on solid, empirical evidence. This empirical grounding makes the resulting insights reliable and actionable. Perhaps most importantly, numerical data helps secure stakeholder buy-in and defend design decisions and proposals. Explore this approach in our Data-Driven Design: Quantitative Research for UX course and learn from William Hudson’s detailed explanations of when and why to use analytics in the research process.
When to use quantitative research?
Once you’ve established initial requirements, quantitative research provides the statistical data you need to make informed decisions. William Hudson, CEO of Syntagm, sheds light on the role of quantitative research throughout a typical project lifecycle in this video:
Video transcript:
00:00:00 --> 00:00:32
This is a very typical project lifecycle in high-level terms. Generally start off with *requirements* – finding out what's needed, and we go off and talk to stakeholders. And one of the problems we have with *user requirements*, in particular, is that often analysts and requirements researchers in the IT world tend to go off and want to ask *users* what they want.
00:00:32 --> 00:01:02
They don't really understand that users don't quite know what they want, that you actually need to do user research, and that is one of the biggest issues that we face in user experience: is the lack of understanding of user research and the whole field of user experience. From requirements, we might expect to be doing surveys to find out – particularly if we have an existing offering of some kind – we might find out what's good about it, what's not so good about it,
00:01:02 --> 00:01:31
what people would like to do with it. And surveys might be helpful in those particular areas. Now, bear in mind that generally when we're talking about surveys, we already need to have some idea of the questions and the kinds of answers people are going to give us. It is really a very bad plan to launch a large survey without doing some early research on that, doing some qualitative research on how people think about these questions and these topics
00:01:31 --> 00:02:00
and trying to understand it a little bit better before we launch a major initiative in terms of survey research. We can also use surveys in *analysis and design* perhaps to ask people which kinds of things might work better for their particular needs and behaviors. We also can start to employ *early-design testing*, even in the analysis and design phase so that we've got perhaps some wireframes that we're thinking about on the *design* side,
00:02:00 --> 00:02:30
and we can start to *test* them – start to try to find out: "Will people understand this? Will they be able to perform the most important tasks from perspective?" I have been involved in user testing of new product ideas where users had *no idea* what the service being offered was about because it was just presented *so confusingly*; there was no clear message; there was no clear understanding of the concepts behind the message because it wasn't very clear to start with, and so on.
00:02:30 --> 00:03:00
So, early-design testing really has an important role to play there. *Implementation* and *testing* – that's when we can start doing a lot more in terms of evaluating what's going on with our products. There we would employ *usability evaluations*. And the things that I've called "early-design testing", by the way, can be done later on too. It's just they don't really involve the finished product. So, they're perhaps not quite as relevant. But if we've got questions about how the navigation might be changed,
00:03:00 --> 00:03:32
then we might fall back to the tree testing where we're just showing people the navigation hierarchy rather than the whole site and asking them to perform tasks and just tweak the navigation as required to improve that. And one of my big complaints with our whole industry – still, after all these decades! – is that we do tend only to be allowed to do usability evaluations, and we do tend to wait until implementation has taken place
00:03:32 --> 00:04:02
and the product is being tested before we start to try to involve real users, which really is far too late in the whole process. If you want to be able to be confident in the concepts and the terminology that your interactive solution is providing to your users and customers, then that needs to start way back at the beginning of the project cycle. And then, finally, once we've got live solutions available,
00:04:02 --> 00:04:30
we can use *analytics* for websites and apps and we can also use A/B and multivariate testing to make sure that our designs are optimal. If we find problems, we might set up an A/B experiment to see whether this particular alternative would be a better solution or we could go down the multivariate route where we provide permutations of a *number* of different design elements on a particular page and see which of those elements proved to be the most effective.
00:04:30 --> 00:05:00
The fact that if you're doing project development, software development in an iterative environment – like agile, for example – then you might be doing a little bit of this in every single iteration; so, there might be a little bit of work on the requirements at the front and there might be a little bit of design and analysis. Having said that, there is usually some upfront requirements and analysis and design that has to go on so that you know what *shape* your project is
00:05:00 --> 00:05:30
– what *shape and size* I think is perhaps a better or more complete description – because in order for you to be able to even guess at how long this is going to take you, you need to have *scoped* it. And to scope it means to set the boundaries, and to set the boundaries means to understand the requirements and to understand what kind of solutions would be acceptable; so, there will be some of this done always up front. Anybody who sets on a major project *without* doing upfront requirements analysis and design of some sort
00:05:30 --> 00:05:34
is – I'm afraid – probably asking for trouble.
During the analysis and design phases, quantitative research helps validate user requirements and understand user behaviors. Surveys and analytics are standard tools, offering insights into user preferences and design efficacy. Quantitative research can also be used in early design testing, allowing for optimal design modifications based on user interactions and feedback, and it’s fundamental for A/B and multivariate testing once live solutions are available.
How to write a quantitative research question?
To write a compelling quantitative research question:
Create clear, concise, and unambiguous questions that address one aspect at a time.
Use common, short terms and provide explanations for unusual words.
Avoid leading, compound, and overlapping questions, and ensure that questions are not vague or overly broad.
In our video, William Hudson, CEO of Syntagm, emphasizes that question quality and respondent understanding are vital to writing good survey questions.
He emphasizes the importance of addressing specific aspects and avoiding intimidating and confusing elements, such as extensive question grids or ranking questions, to ensure participant engagement and accurate responses. For more insights, see the article Writing Good Questions for Surveys.
Is survey research qualitative or quantitative?
Survey research is typically quantitative: it collects numerical data for statistical analysis and generalizable conclusions. However, it can also have qualitative elements, particularly when it includes open-ended questions that allow free-form responses. Our video featuring William Hudson, CEO of Syntagm, provides in-depth insights into when and how to use surveys effectively in the product or service lifecycle, focusing on user satisfaction and potential improvements.
He emphasizes the importance of surveys in triangulating data to back up qualitative research findings, ensuring we have a complete understanding of the user's requirements and preferences.
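For the quantitative side of survey work, a useful back-of-the-envelope check is the margin of error on a reported proportion. Here is a minimal sketch using hypothetical response counts:

```python
# 95% margin of error for a survey proportion (normal approximation).
# The response counts are hypothetical illustration data.
from math import sqrt

n, agreed = 600, 420
p = agreed / n                       # sample proportion: 0.70
z = 1.96                             # critical value for 95% confidence
margin = z * sqrt(p * (1 - p) / n)

print(f"{p:.0%} agree, ± {margin:.1%}")  # 70% agree, ± 3.7%
```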
Is descriptive research qualitative or quantitative?
Descriptive research focuses on describing the subject being studied and answers questions like the what, where, when, and who of the research question. It doesn’t address the underlying reasons—the “why” behind the answers. We can use both qualitative and quantitative methods to conduct descriptive research: the term refers to the kind of data gathered, not the methods used.
When we use quantitative methods to gather numerical data, we can apply statistical analysis to understand relationships between variables. Here’s William Hudson, CEO of Syntagm, with more on correlation and how to apply tests such as Pearson’s r and the Spearman rank coefficient to your data.
These tests help you interpret phenomena such as user experience by analyzing session lengths and conversion values—revealing, for example, whether time spent on a page is associated with higher checkout values.
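As a minimal sketch of those two tests, here is how you might run them in Python on made-up session data; `scipy.stats` provides both:

```python
# Pearson's r (linear association) and Spearman's rho (rank/monotonic
# association) on hypothetical session data.
from scipy.stats import pearsonr, spearmanr

minutes_on_page = [2.0, 3.5, 1.0, 6.2, 4.8, 0.5, 7.1, 3.0]
checkout_value = [20.0, 35.0, 10.0, 55.0, 42.0, 8.0, 60.0, 28.0]

r, p_r = pearsonr(minutes_on_page, checkout_value)
rho, p_rho = spearmanr(minutes_on_page, checkout_value)

print(f"Pearson's r = {r:.2f} (p = {p_r:.4f})")
print(f"Spearman's rho = {rho:.2f} (p = {p_rho:.4f})")
# Correlation measures association, not causation: a strong r here
# wouldn't prove that longer visits cause bigger checkouts.
```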
Which sampling technique is most desirable in quantitative research?
Random Sampling: Each individual in the population has an equitable opportunity to be chosen, which minimizes biases and simplifies analysis.
Systematic Sampling: Selecting every k-th item from a list after a random start. It's simpler and faster than random sampling when dealing with large populations.
Stratified Sampling: Segregate the population into subgroups or strata according to comparable characteristics. Then, samples are taken randomly from each stratum.
Cluster Sampling: Divide the population into clusters, randomly select some clusters, and include (or sample) the members within them.
Multistage Sampling: Various sampling techniques are used at different stages to collect detailed information from diverse populations.
Convenience Sampling: The researcher selects the sample based on availability and willingness to participate, so it may not represent the whole population.
Quota Sampling: Segment the population into subgroups, and samples are non-randomly selected to fulfill a predetermined quota from each subset.
These are just a few techniques; there isn’t a one-size-fits-all option in quantitative research. Choose a method that aligns with your research question, discipline, available resources, and required level of accuracy (the sketch at the end of this answer contrasts the first three techniques). A well-planned recruitment strategy is also essential to avoid wasting resources and time, as highlighted in our video featuring William Hudson, CEO of Syntagm.
He emphasizes the importance of recruiting participants meticulously to ensure their engagement and the quality of their responses—accurate, thoughtful answers are crucial for reliable results. William also covers screening out participants whose responses fail quality checks and scrutinizing response quality to refine the outcomes.
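To make the first three techniques in the list above concrete, here is a small standard-library sketch contrasting simple random, systematic, and stratified sampling on a hypothetical pool of 10,000 user IDs (the 60/40 mobile/desktop split is a made-up assumption):

```python
# Random vs. systematic vs. stratified sampling (hypothetical user pool).
import random

population = list(range(10_000))   # stand-in for a user database
n = 200                            # desired sample size

# Simple random sampling: every user equally likely to be chosen.
random_sample = random.sample(population, n)

# Systematic sampling: every k-th user after a random start.
k = len(population) // n
start = random.randrange(k)
systematic_sample = population[start::k][:n]

# Stratified sampling: sample each stratum in proportion to its size.
strata = {"mobile": population[:6_000], "desktop": population[6_000:]}
stratified_sample = []
for stratum in strata.values():
    share = round(n * len(stratum) / len(population))
    stratified_sample += random.sample(stratum, share)

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```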
What are the 4 types of quantitative research?
The 4 types of quantitative research are Descriptive, Correlational, Causal-Comparative/Quasi-Experimental, and Experimental Research. Descriptive research aims to depict ‘what exists’ clearly and precisely. Correlational research examines relationships between variables. Causal-comparative research investigates the cause-effect relationship between variables. Experimental research explores causal relationships by manipulating independent variables. To gain deeper insights into quantitative research methods in UX, consider enrolling in our Data-Driven Design: Quantitative Research for UX course.
What are the strengths of quantitative research?
The strength of quantitative research is its ability to provide precise numerical data for analyzing target variables. This allows for generalized conclusions and predictions about future occurrences, which is invaluable in many fields, including user experience. William Hudson, CEO of Syntagm, discusses the role of surveys, analytics, and testing in providing objective insights in our video on quantitative research methods, highlighting how structured methodologies elicit reliable results.
Our Data-Driven Design: Quantitative Research for UX course empowers you to leverage quantitative data to make informed design decisions, with a deep dive into methods like surveys and analytics. Whether you’re a novice or a seasoned professional, this Interaction Design Foundation course offers valuable insights and practical knowledge to help you excel in user experience research. Explore our diverse topics to elevate your understanding of quantitative research methods.
How do you plan to design a product or service that your users will love if you don’t know what they want in the first place? As a user experience designer, you shouldn’t leave an outstanding design to chance; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as one of the most important parts of user experience design.
In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required, and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, subsequently, outperforms your competitors’.
This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!
By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!
We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!