Eye Tracking in UX Design

Eye tracking in user experience (UX) design is a technique that uses specialized technology to track and analyze where users look on a digital interface and for how long. Designers use this data to understand user attention and behavior, and to optimize the placement of elements like buttons and menus to enhance the user experience.
© Interaction Design Foundation, CC BY-SA 4.0
Eye tracking is an essential technique in UX design, especially in usability testing and user research. It offers a detailed real-time view of user engagement and interaction patterns as it records where and how users look at various elements on webpages and mobile apps.
Eye tracking evolved from rudimentary direct-observation studies in the 19th century into a cornerstone of user research. By the late 1990s, marketing research and advertising agencies began leveraging this technology to study how individuals consume online content.
Advancements in areas such as virtual reality (VR) have enhanced eye tracking and its ability to capture user responses such as pupil dilation and gaze patterns. Eye tracking glasses and VR headsets with eye tracking have become sophisticated, affordable and helpful tools.
Eye tracking solutions for UX designers and researchers have also become more accessible and cost-effective. Modern smartphones can serve as accurate eye trackers, making this tool indispensable for mobile UX research. The evolution of hardware and software has enabled diverse applications of eye tracking across marketing, UX design, psychological research and gaming.
Eye tracking leverages equipment that uses specialized infrared light to create reflections in a user's eyes. Cameras then capture these reflections to ascertain the eye's position and movement. Designers or researchers measure fixation points—where the gaze lingers—and saccades—the rapid movements between these points. This data helps them understand the visual paths that users take during tasks such as reading or exploring a webpage. Tools like heat maps and gaze plots visually represent these paths. They provide clear, actionable insights into user behavior. When designers observe and interpret visual patterns, they can make informed decisions to improve digital products and interfaces.
© Interaction Design Foundation, CC BY-SA 4.0
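To make the fixation/saccade distinction above more concrete, here is a minimal sketch of dispersion-based fixation detection (a simplified I-DT approach), assuming gaze samples arrive as (timestamp in milliseconds, x, y) tuples from an eye tracker. The thresholds and sample data are illustrative assumptions, not values from any particular tool; heat maps and gaze plots are then built from fixations like these.

```python
def detect_fixations(samples, dispersion_px=35, min_duration_ms=100):
    """Group consecutive gaze samples into fixations; the jumps between
    successive fixations correspond to saccades."""
    fixations = []
    window = []
    for t, x, y in samples:
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        # Dispersion = (max x - min x) + (max y - min y) over the window
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
            # The window just broke apart: keep it as a fixation if it lasted
            # long enough, then start a new window from the current sample.
            if window[-2][0] - window[0][0] >= min_duration_ms:
                fixations.append(_centroid(window[:-1]))
            window = [(t, x, y)]
    if window and window[-1][0] - window[0][0] >= min_duration_ms:
        fixations.append(_centroid(window))
    return fixations


def _centroid(points):
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    return {
        "start_ms": points[0][0],
        "end_ms": points[-1][0],
        "duration_ms": points[-1][0] - points[0][0],
        "x": sum(xs) / len(xs),  # average gaze position of the fixation
        "y": sum(ys) / len(ys),
    }


# Example: 60 Hz samples dwelling near one element, then jumping to another.
samples = [(i * 16, 100 + (i % 3), 200 + (i % 2)) for i in range(30)]
samples += [(480 + i * 16, 500 + (i % 3), 90 + (i % 2)) for i in range(30)]
print(detect_fixations(samples))  # two fixations, separated by one saccade
```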
In UX design, and particularly in UX research and user testing, eye tracking is a crucial asset for observing natural user behavior without interference. With eye tracking, designers can:
Eye tracking provides invaluable data on where users focus their attention and for how long. This information is crucial for UX designers to understand which elements capture attention and which do not, so they can make targeted improvements for their audience, from the information architecture down to individual interface elements. For instance, after designers analyze fixation and saccade patterns, they can refine the visual hierarchy of their designs to better match user expectations and needs.
Through the detailed analysis of eye movement data, eye tracking helps identify areas within a user interface that cause confusion or frustration. This early detection of usability issues can save significant time and resources in the product design process. Once designers collect this data, they can make adjustments before further development stages.
Watch this video to understand usability in more depth:
What usability is, and basically it's the extent to which a product can be used by specified users to achieve specified goals with three things: effectiveness, efficiency and satisfaction, in a specified context of use. Okay, that's the official definition of usability. It's been around for a really long time. But the *effectiveness* – okay – is it effective? So, if a person comes to a website, an app, you know, anything – can they do what they're supposed to do?
*Efficiency* – can they do it quickly? Do they get stuck? Do they get sidetracked? Do they go in some totally different direction? And *satisfaction* – do they feel good? Okay, that's the more emotive kind of aspect. Do they feel good about their experience? We want to make sure that what we're creating makes sense to our users and meets their needs. Are we meeting their needs?
Eye tracking measures how often—and how long—users look at and engage with certain elements. It can also reveal areas where users flow smoothly through the interface. These insights help guide designers to replicate successful elements across different parts of the interface.
Eye tracking allows designers to examine the effectiveness of visual design elements and optimize the visual hierarchy of a design. With eye tracking, they can see how users navigate a page. From there, designers can fine-tune the visual hierarchy to optimize it for the user’s natural flow.
Layout sets the foundation for effective visual communication. To make more effective layouts, designers can use approaches such as the red square method.
Watch this video where Michal Malewicz, Creative Director and CEO of Hype4, explains the red square method in detail.
If you have spacings between elements, as you can see here, it's really good to have *the same space between similar types of elements*. So, if you're looking at it, if you remove the little red squares, you're going to instantly see that this all makes sense. But it's really easy to destroy this whole concept by just having random distances. And just those distances alone are something that makes or breaks a layout, because this is just looking a lot better.
And this comes from something called the *red square method*, and you can look it up online on your own as well. This is something that I use quite a lot to talk about grids and layouts because I think that this is one of the easier ways for people to actually get a grasp on the layout part. But consistency in space is also important between similar elements. So, if you have two columns of text and a photo, it's really good to have them have the same spacing all the time.
So, if it doesn't really jump all over the place, our brains are just going to be a lot happier about it. And what you can do to learn this better is to try and recreate some other designs by first *block-framing* them. And block-framing means adding just shapes on top of things. So, a font is a rectangle; a photo is like just a circle; a different photo or a title is another rectangle. And then try to *align just those rectangles* – not the fonts, not the photos, not the little icons – just the rectangles because
that's going to give you the understanding of what you're really dealing with. And, as you can see, we're coming back to the whole thing that design is just about moving rectangles around. Once you learn that everything is a rectangle – well, or a circle, but generally is a simple shape and you have to position them well in space that they have the proper spacing from each other, proper distance, proper hierarchy, then it's going to be a lot easier to just switch it over to the actual images.
This is something that is really important for the *hierarchy* and the *layout*. And if they are perfect, then people are going to forgive you for a lot. So, they're like the drum beat, basically, of your design if it was a song. If the drum beat is off, then everything just kind of falls apart.
Designers can also compare different design alternatives effectively. Eye tracking provides concrete data on how different designs perform in terms of user engagement and ease of navigation. Therefore, designers can make evidence-based decisions to enhance interface appeal and functionality. This is particularly useful in A/B testing scenarios where subtle differences in design can have a large impact on user experience.
UX Strategist and Consultant William Hudson explains A/B testing:
A/B testing is all about changes in behavior. We present people with alternative designs and we look to see how much that alters their subsequent response. So in the simple A/B case, we show them design A, we show them design B, and we measure typically a completion goal, which a lot of subject areas in user experience we refer to as conversions.
So signing up to a newsletter, adding an item to a shopping basket, making a donation to a charity. These are all things that are important to their respective organizations. And typically for the interactive technology that we're working on. So websites and apps, for example. So these are the things often that we're measuring, but they're not the only things that we can measure. We can measure really straightforward stuff like time spent on page, time spent in the site and also bounce rates.
For example, we'll be looking at some of those a bit later on. Just a reminder that because A/B testing is done very late in the day with live sites and large numbers of users, you really want to make sure that your solution is sound before you get this far. You're not going to be able to test everything that is possibly worrying you or possibly causing problems to users. It's just too long, involved and potentially expensive in terms
of user loyalty and also the amount of effort you'd have to put into it. So we are looking at using A/B testing to basically polish the solution rather than to rework it. Bear that in mind and make sure that you've done adequate testing up to this point. Also, bear in mind that A/B testing tends to be focused on individual pages, so it is possible to have multi-page tests, but
it's a more complex area than we're going to be looking at in this lesson. So experiments have research questions that basically the things that you're trying to answer and because A/B testing focuses on changes in behavior, the research questions are going to be centered on defined goals. And as I've mentioned already, typically conversions. So will as an example, moving the add button above the fold improve sales conversions? I would imagine it would actually do something. I always find people
are making the mistake of getting too talkative on the first screen of the page and the actual “buy this” or “add to basket” button gets pushed further and further down until users actually don't even see it. Will a more clearly worded charitable purpose increase donations? If people have a better understanding of what your charity's about or where this money is going, would that improve conversions for those users? So both of these can be A/B tested by using goals that you almost
certainly have already defined in your analytic solution. So these are very good candidates for A/B and multivariate testing. But I'll give you some examples of bad questions too. So obviously I will repeat the words “don't ask this” when I've mentioned them because they're not meant as examples that you should be taking away. Conversely, research questions that are not directly related to improved goal completions tend not to be suitable for A/B testing.
And a kind of vague question like “will better product photos reduce questions to customer service?”, don't ask this, is the sort of thing that you simply cannot effectively test in A/B testing. And the reason is that there are all kinds of channels to customer service and only some of them are through the website and only some of them can be effectively measured as goals. So it's just not a suitable scenario for A/B testing. There is a related question you could ask though,
which might be just as good, although not exactly equivalent, and that would be: “Will better product photos improve sales conversions?” Because if it reduces queries to customer service, it's almost certain that people are going to be much more confident about placing orders, adding those things to their basket. So that is a very easily measured outcome in terms of A/B testing, and that is the kind of question that A/B testing is very good at.
So simply rewording or rethinking the question in terms of defined user and business goals is one way of getting to a satisfactory conclusion, even if you have a slightly squiffy question to start with.
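As a concrete illustration of the kind of conversion comparison Hudson describes, here is a minimal sketch of a two-proportion z-test on A/B conversion counts. The visitor and conversion numbers are invented for illustration; in practice they would come from goals already defined in your analytics solution.

```python
from math import sqrt, erf


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (difference in conversion rate, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return p_b - p_a, p_value


# Hypothetical example: A = add-to-basket button below the fold, B = above the fold
diff, p = two_proportion_z_test(conv_a=180, n_a=4000, conv_b=232, n_b=4000)
print(f"Uplift: {diff:.2%}, p-value: {p:.3f}")
```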
Eye tracking offers empirical evidence to validate design choices, and removes guesswork and personal biases from the design process. This data-driven approach helps identify what works and what doesn't in a design, which leads to more successful designs.
Eye tracking can uncover user frustrations through indicators like rapid eye movements or fixed gazes, signaling confusion or annoyance. If designers understand these pain points, they can improve the user experience by addressing areas of frustration.
Eye tracking data informs personalized experiences as it lets designers tailor interactions based on individual user behaviors. This personalization makes users feel understood and valued, which ultimately enhances their interaction with the product.
Eye tracking serves as a valuable learning tool in UX research. It helps designers and researchers understand user behavior, visual perception and cognitive processing. This knowledge contributes to continuously enhancing UX practices.
Ultimately, eye tracking leads to designs that more effectively meet user needs, and results in increased user satisfaction, higher likelihood of return usage and positive product recommendations. It therefore can contribute to the product's success in the market.
In this heat map example, users looked longer at the summer sales banner than anywhere else.
© H&M / Oculid, Fair Use
To leverage eye tracking, UX researchers or designers typically do the following:
Designers begin with a clear research question. From there, they form a hypothesis. They then select the essential metrics and create specific tasks and visual stimuli to facilitate the study. This structured approach helps them tailor their eye tracking work to yield actionable insights that can directly enhance their design.
Designers can conduct eye tracking studies in lab environments or remotely, and can do either moderated or unmoderated studies. This flexibility allows them to gather data in the most suitable context for their specific study needs. It could be to closely observe user behavior in a controlled environment or to capture more natural user interactions in their everyday settings.
For studies that focus primarily on heat maps, it’s better to recruit at least 30 participants to ensure a robust data set. The recruitment process and the study's scale can vary based on the hypothesis and the eye tracking metrics that need analysis. Designers can use software-only solutions, which are significantly more affordable than traditional eye-tracker studies. These enable them to conduct tests globally, which enhances the diversity and volume of data they collect.
These heat maps are from the eye tracking studies of three websites. Red areas are where users looked the most, yellow areas received fewer views, blue areas the fewest, and grey areas had no fixations.
© Nielsen Norman Group, Fair Use
Eye tracking technology provides UX designers with a wide range of visual data, such as gaze plots, heat maps and gaze replays. These tools offer insights into visual attention patterns and user interactions. They help designers understand where users look, for how long and what elements capture their attention and areas of interest. Eye tracking usability testing data is crucial for designers to identify design elements that work well and those that need improvement.
Designers integrate eye tracking with other research methodologies such as surveys and click-rate analysis. This multifaceted approach lets them obtain a comprehensive understanding of user behavior and captures data that users might not report themselves. This integration enriches the collected data and offers a more complete view of user interactions and preferences. The combination of qualitative insights from interviews and quantitative data from eye tracking creates a robust foundation for informed design decisions.
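As one hedged example of this kind of integration, the sketch below joins per-participant eye tracking metrics with post-test survey responses by a shared participant ID. The file names and column names are hypothetical; real exports differ by tool.

```python
# A sketch of combining eye tracking summaries with survey answers,
# assuming both exports are CSV files that share a "participant_id" column.
import pandas as pd

gaze = pd.read_csv("aoi_metrics.csv")         # participant_id, aoi, dwell_ms, ttff_ms (hypothetical)
survey = pd.read_csv("post_test_survey.csv")  # participant_id, ease_rating, comments (hypothetical)

combined = gaze.merge(survey, on="participant_id", how="inner")

# Do participants who dwell longer on the call-to-action also rate the task easier?
print(combined.groupby("aoi")[["dwell_ms", "ease_rating"]].mean())
print(combined[["dwell_ms", "ease_rating"]].corr())
```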
Author and Human-Computer Interaction Expert, Professor Alan Dix explains the difference between qualitative and quantitative research and data:
Ah, well – it's a lovely day here in Tiree. I'm looking out the window again. But how do we know it's a lovely day? Well, I could – I won't turn the camera around to show you, because I'll probably never get it pointing back again. But I can tell you the Sun's shining. It's a blue sky. I could go and measure the temperature. It's probably not that warm, because it's not early in the year. But there's a number of metrics or measures I could use. Or perhaps I should go out and talk to people and see if there's people sitting out and saying how lovely it is
or if they're all huddled inside. Now, for me, this sunny day seems like a good day. But last week, it was the Tiree Wave Classic. And there were people windsurfing. The best day for them was not a sunny day. It was actually quite a dull day, quite a cold day. But it was the day with the best wind. They didn't care about the Sun; they cared about the wind. So, if I'd asked them, I might have gotten a very different answer than if I'd asked a different visitor to the island
or if you'd asked me about it. And it can be almost a conflict between people within HCI. It's between those who are more *quantitative*. So, when I was talking about the sunny day, I could go and measure the temperature. I could measure the wind speed if I was a surfer – a whole lot of *numbers* about it – as opposed to those who want to take a more *qualitative* approach. So, instead of measuring the temperature, those are the people who'd want to talk to people to find out more about what *it means* to be a good day.
And we could do the same for an interface. I can look at a phone and say, "Okay, how long did it take me to make a phone call?" Or I could ask somebody whether they're happy with it: What does the phone make them feel about? – different kinds of questions to ask. Also, you might ask those questions – and you can ask this in both a qualitative and quantitative way – in a sealed setting. You might take somebody into a room, give them perhaps a new interface to play with. You might – so, take the computer, give them a set of tasks to do and see how long they take to do it. Or what you might do is go out and watch
people in their real lives using some piece of – it might be existing software; it might be new software, or just actually observing how they do things. There's a bit of overlap here – I should have mentioned at the beginning – between *evaluation techniques* and *empirical studies*. And you might do empirical studies very, very early on. And they share a lot of features with evaluation. They're much more likely to be wild studies. And there are advantages to each. In a laboratory situation, when you've brought people in,
you can control what they're doing, you can guide them in particular ways. However, that tends to make it both more – shall we say – *robust* that you know what's going on but less about the real situation. In the real world, it's what people often call "ecologically valid" – it's about what they *really* are up to. But it is much less controlled, harder to measure – all sorts of things. Very often – I mean, it's rare or it's rarer to find more quantitative in-the-wild studies, but you can find both.
You can both go out and perhaps do a measure of people outside. You might – you know – well, go out on a sunny day and see how many people are smiling. Count the number of smiling people each day and use that as your measure – a very quantitative measure that's in the wild. More often, you might in the wild just go and ask people. It's a more qualitative thing. Similarly, in the lab, you might do a quantitative thing – some sort of measurement – or you might ask something more qualitative – more open-ended. Particularly quantitative and qualitative methods,
which are often seen as very, very different, and people will tend to focus on one *or* the other. *Personally*, I find that they fit together. *Quantitative* methods tend to tell me whether something happens and how common it is to happen, whether it's something I actually expect to see in practice commonly. *Qualitative* methods – the ones which are more about asking people open-ended questions – either to both tell me *new* things that I didn't think about before,
but also give me the *why* answers if I'm trying to understand *why* it is I'm seeing a phenomenon. So, the quantitative things – the measurements – say, "Yeah, there's something happening. People are finding this feature difficult." The qualitative thing helps me understand what it is about it that's difficult and helps me to solve it. So, I find they give you *complementary things* – they work together. The other thing you have to think about when choosing methods is about *what's appropriate for the particular situation*. And these things don't always work.
Sometimes, you can't do an in-the-wild experiment. If it's about, for instance, systems for people in outer space, you're going to have to do it in a laboratory. You're not going to go up there and experiment while people are flying around the planet. So, sometimes you can't do one thing or the other. It doesn't make sense. Similarly, with users – if you're designing something for chief executives of Fortune 100 companies, you're not going to get 20 of them in a room and do a user study with them.
That's not practical. So, you have to understand what's practical, what's reasonable and choose your methods accordingly.
An important point about eye tracking is that it can go far beyond surface-level analytics to provide deep insights into the subconscious behaviors of users. Eye tracking user testing approaches capture data that users themselves might not be aware of. This includes instinctive reactions to visual stimuli. This level of insight is invaluable for designers to create interfaces that are not only functional but also intuitively align with user expectations and behaviors.
© Interaction Design Foundation, CC BY-SA 4.0
Among UX research methods and testing approaches, eye tracking presents a few challenges and limitations for designers. These include:
Eye tracking may not always indicate user awareness. Users might look at a specific area on the screen but not notice important features. This makes it uncertain whether they have actually perceived something.
Eye tracking can fail to capture what users perceive in their peripheral vision, which makes it difficult to ascertain whether they have overlooked anything. Because eye tracking focuses on fixations and gaze points, it does not account for peripheral vision, where fixations do not occur.
Relying solely on eye tracking does not provide insights into the reasons behind users' visual focus. It indicates the location and duration of user gaze but does not reveal their cognitive processes. To understand the underlying reasons, designers must communicate directly with users or use surveys in their user experience research.
Eye tracking's effectiveness varies among individuals due to factors such as eyewear, pupil size and eye movement. These factors can pose challenges for some UX eye tracking tools and eye tracking devices to accurately account for all users.
Despite the limitations, eye tracking significantly contributes to user understanding. When researchers and designers thoroughly understand its capabilities and constraints, and combine it with other user research methods, they can make optimal use of eye tracking in UX testing and research. Also, when designers test with diverse user groups and use the best eye tracking software for usability testing, they can sharpen their understanding of what their designs need to be more effective.
© Interaction Design Foundation, CC BY-SA 4.0
Eye tracking testing in UX research requires a structured approach. Here are 10 steps to effectively conduct eye tracking tests:
Designers or researchers should choose diverse real users who represent their product's audience from various backgrounds. This can help them gain a comprehensive understanding of how users interact with the product. It’s important to ensure the eye tracking system supports participants who wear glasses or adjust participant selection accordingly.
Test with five or fewer people to efficiently identify most issues without redundancy. Conduct multiple tests at different times to assess the effectiveness of changes.
It’s vital to ensure UX eye tracking tools and computers are ready well in advance of the test day to avoid any last-minute issues. Test the equipment to make sure it’s working and ready, to minimize the chances of real-life glitches affecting the test and eye tracking data.
Conduct tests in a calm and quiet environment. Inform participants about the focus of the product, and that the purpose is to test the product, not them. It’s helpful to utilize viewing rooms with one-way mirrors and soundproofing, if possible. In any case, it’s vital to minimize the impact of the observers on the users.
Provide printed instructions for each test part to help participants remember what to do. This will also help keep the test smooth and focused, since users shouldn’t have to ask questions.
Prompt participants to use their own information when they fill out forms to simulate real usage and identify any form-related issues. Reassure them that their data will be safe, and confirm that they have all the data they need before they start.
Record observations about each participant, including behavior and verbal feedback, to complement eye tracker data and identify usage patterns. For example, users’ facial expressions and utterances can provide powerful insights in user experience research.
Professor of Human-Computer Interaction, UCL, Ann Blandford discusses the advantages and disadvantages of different data-gathering methods like video, audio, and notes.
Ditte Hvas Mortensen: In relation to data gathering, there are obviously different ways of doing it. You can record video or sound, or you can take notes. Can you say something about the advantages or disadvantages of doing it in different ways? Ann Blandford: Yes. So, I think it depends on how the data-gathering method is going to affect what
data you can gather. So, sometimes people are not comfortable being recorded. And they don't *want* to be voice-recorded. And you'll get more out of the conversation if you just take notes. Of course, you don't get quite such high-quality data if you just take notes. On the other hand, it's easier to analyze because you haven't got so much data.
And you can't do as much in-depth analysis if you've only got notes, because you can only analyze what you recognized at the time as being important, and you can't pick up anything more from it later. So, I certainly like to audio-record where possible for the kinds of studies that we do. And different people may have different needs, and therefore that might be more or less important to them.
We also use quite a lot of still photos, particularly in healthcare. We have to have quite a lot of control over what actually features in an image so that it doesn't violate people's privacy. So, using still photos allows us to take photos of technology and make sure that it doesn't include any inappropriate information. Whereas video – well, firstly, video means that you've got a *lot* more data to analyze.
And it can be a lot harder to analyze it. And it depends on the question that you're asking in the study, as to whether or not that effort is merited. And for a lot of us, it's not merited, but also it's harder to control what data is recorded. So, it's more likely to compromise people's privacy in ways that we haven't got ethical clearance for. So, we don't use a lot of video ourselves.
But also, particularly if one is trying to understand the work situation, it's often also valuable to take *real notes*, whether those are diagrams of how things are laid out or other notes about, you know, important features of the context that wouldn't be recorded in an audio stream. And also, video can be quite *off-putting* for people.
You know, it's just that much more intrusive. And people may become much more self-conscious with a video than with audio only. So, it can affect the quality of the data that you get for that reason. So, I think when you're choosing your data-gathering *tools*, you need to think about what impact they will have in the environment.
It may or may not be *practical* to set up a video camera, quite apart from anything else. Audio tends not to be so intrusive. As I say, there are times when just written notes will actually serve the purpose better. But it also depends on what you're going to *do* with the data. You know – how much data do you need? What kinds of analysis are you going to do of that data? And hence, what *depth of data* do you actually need to have access to, anyway?
If you've got more data than you can deal with, then it can feel overwhelming, and that can actually be quite a deterrent to get on with analysis. And analysis can be really slowed down if, as a student or other researcher, you just feel so overwhelmed by what you've got that you don't know where to start! Actually, that's not a good place to be. So, having too much data can often be as difficult as not having enough.
But what matters most is that you've got an *appropriate* kind of data for the questions of the study.
Conduct post-test discussions with participants to gather feedback on their experience and thoughts. Ask them about their feelings and thoughts during the test. It’s important to be mindful as to how one asks questions like these—to avoid leading questions. Honest answers from users will give the best insights to complement the eye tracking data.
Clearly explain test results to clients, including what technical terms and data mean. Also show them all notes and data. That will build their trust and understanding of the testing process.
Design tests around real events using common language to avoid leading participants and accurately assess product usability. For example, if a designer wants to assess how well users can change the expiration date of a credit card on a website, it’s better to ask users to make any needed changes to their payment information on the site—as opposed to “update your credit card’s expiration date.”
This study revealed that the lower a product appeared on the page, the less attention participants paid to it.
© Eye Square / Oculid, Fair Use
In no particular order, here are some examples of popular and helpful eye tracking brands:
1. Eye Square is a specialist provider of neuromarketing research that incorporates eye tracking, facial coding and emotion analytics. This platform captures and analyzes real, explicit and implicit reactions of customers in many settings, and shows brands what occurs and why.
© Mike Stevens, Fair Use
2. Eyes Decide provides online eye tracking solutions, helpful for market research and product design. Their integrated platform supports constructing, running and analyzing studies.
© Mike Stevens, Fair Use
3. Eyezag provides mobile and desktop eye tracking solutions via standard webcams; tests run in-browser. They offer self-service and managed service solutions.
© Mike Stevens, Fair Use
4. RealEye offers screen-based webcam eye tracking, mouse-tracking and facial coding, with a platform that supports 10 languages. The system connects to any panel or survey platform, and basic surveys are also available to add to studies.
© Mike Stevens, Fair Use
Remember, UX eye tracking is a valuable asset for designers and researchers to apply in their product design. However, it takes a thorough understanding of the potential—and potential limitations—of the available technology to capture the most relevant parts of the user behaviors they need. Once a design team has a solid base of accurate data to work with, they can fine-tune outstanding areas and retest the next iteration to see whether users’ gaze lands where intended.
Take our course User Research – Methods and Best Practices.
Read the Nielsen Norman Group’s free report, How to Conduct Eyetracking Studies.
Read Eye Tracking In Mobile UX Research by Mariana Macedo for more insights into measuring eye movement as a type of research.
Consult Eye Tracking in UX: Decoding User Behavior for Seamless Design by Ehsan Jamalzadeh for more details and tips.
Go to Eye Tracking: What Is It & How to Use It for Usability Testing by Madison Zoey Vettorino for further helpful insights and more.
Read The Top 15 Eye Tracking Platforms for Market & User Research by Mike Stevens for valuable examples of tools and further information.
Eye tracking studies measure several key metrics to understand how users view and interact with visual elements. The primary metrics include:
Fixations: These occur when the eyes stop moving and focus on a specific point. The number, duration, and location of fixations provide insights into what attracts attention and holds it.
Saccades: These are rapid eye movements between fixations. Analyzing saccades helps researchers understand how users navigate through information and what elements they skip.
Pupil dilation: Changes in pupil size can indicate interest or cognitive load. Larger pupils often suggest increased attention or mental effort.
Gaze path: This is the sequence of fixations and saccades, showing the path that the eyes travel across a visual field. It helps to map out how a viewer processes information and in what order.
Heatmaps: These visual representations aggregate data from multiple users to show areas of high and low visual attention. High-attention areas appear warmer (redder) on the heatmap.
Time to first fixation: This metric measures how quickly an element attracts attention after exposure to the stimulus. It is useful for determining the immediate impact of a design.
By measuring these metrics, designers can optimize user interfaces, marketers can enhance advertisements, and researchers can understand visual and cognitive processes more deeply.
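To ground a couple of these metrics, here is a minimal sketch that computes time to first fixation for an area of interest (AOI) and aggregates fixation durations into a coarse heatmap grid. The fixation records, AOI bounds and screen size are assumptions made purely for illustration.

```python
import numpy as np

fixations = [
    # start_ms, duration_ms, x, y  (hypothetical data)
    (0,   180, 640, 120),
    (220, 300, 310, 415),
    (560, 250, 325, 430),
    (850, 400, 900, 600),
]


def time_to_first_fixation(fixations, aoi):
    """aoi = (x_min, y_min, x_max, y_max); returns onset in ms, or None."""
    for start, _dur, x, y in fixations:
        if aoi[0] <= x <= aoi[2] and aoi[1] <= y <= aoi[3]:
            return start
    return None


def heatmap_grid(fixations, width=1280, height=800, cell=80):
    """Sum fixation duration per grid cell; warmer cells attract more attention."""
    grid = np.zeros((height // cell, width // cell))
    for _start, dur, x, y in fixations:
        row = min(int(y // cell), grid.shape[0] - 1)
        col = min(int(x // cell), grid.shape[1] - 1)
        grid[row, col] += dur
    return grid


print(time_to_first_fixation(fixations, aoi=(280, 380, 400, 470)))  # 220
print(heatmap_grid(fixations).max())  # duration in the most-viewed cell
```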
Take our course User Research – Methods and Best Practices.
When developing a product or service, it is *essential* to know what problem we are solving for our users. But as designers, we all too easily shift far away from their perspective. Simply put, we forget that *we are not our users*. User research is how we understand what our users *want*, and it helps us design products and services that are *relevant* to people. User research can help you inspire your design,
evaluate your solutions and measure your impact by placing people at the center of your design process. And this is why user research should be a *pillar* of any design strategy. This course will teach you *why* you should conduct user research and *how* it can fit into different work processes. You'll learn to understand your target audience's needs and involve your stakeholders.
We'll look at the most common research techniques, such as semi-structured interviews and contextual inquiry. And we'll learn how to conduct observational studies to *really understand what your target users need*. This course will be helpful for you whether you're just starting out in UX or looking to advance your UX career with additional research techniques. By the end of the course, you'll have an industry-recognized certificate – trusted by leading companies worldwide. More importantly, you'll master *in-demand research skills* that you can start applying to your projects straight away
and confidently present your research to clients and employers alike. Are you ready? Let's get started!
Yes, eye tracking data can predict user behavior to some extent. Here are key ways eye tracking data aids in predicting user behavior:
Attention prediction: By analyzing where users most frequently fix their gaze, designers can predict which parts of a design will attract attention. This helps them to optimize content placement to match user expectations and improve usability.
Interest and engagement: Eye tracking can reveal the elements that hold a user's interest the longest, suggesting engagement levels. This is particularly useful in enhancing features or content that users find appealing.
Usability issues: Tracking how eyes move across a screen helps identify areas where users might struggle. For instance, if many users skip over a crucial navigation button, the designer may not have placed it prominently. This can predict potential usability enhancements.
Conversion optimization: In e-commerce, understanding which products or information capture attention, and for how long, can predict buying behavior. This allows businesses to adjust layouts to increase the likelihood of purchase.
By collecting and analyzing these aspects, eye tracking provides valuable predictions that can influence design decisions and improve user experience. This leads to more effective and intuitive interactions.
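To illustrate the prediction idea in rough code, the sketch below fits a simple logistic regression that relates two per-session gaze features (dwell time on product imagery and time to first fixation on the call-to-action) to whether the session converted. The feature values and labels are entirely invented; a real study would need far more sessions and proper validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per session: [dwell_ms_on_product_images, ttff_ms_on_cta] (invented)
X = np.array([
    [5200,  800], [4100, 1200], [6100,  600], [900, 4000],
    [1200, 3500], [700,  5200], [5800,  750], [1500, 2900],
])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # 1 = session converted (invented labels)

model = LogisticRegression().fit(X, y)

# Predicted conversion probability for a new session's gaze features
print(model.predict_proba([[4500, 900]])[0, 1])
```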
Researchers encounter several challenges when conducting eye tracking tests. First, technical issues such as calibration errors can affect the accuracy of the data collected. Eye trackers must precisely detect where a participant looks, but various factors including lighting conditions and participant movement can lead to errors. Second, participants' differences in physiology, such as eye shape or pupil size, may also complicate data accuracy. Furthermore, the intrusive nature of some eye tracking devices can influence participant behavior, and lead to less natural responses. To tackle these challenges, researchers should ensure proper calibration of the eye tracking device before each session and consider using less intrusive equipment. It is also beneficial to conduct tests in a controlled environment to minimize external variables that could affect the data.
Eye tracking significantly enhances design accessibility by providing insights that help designers create more intuitive and inclusive user interfaces. This technology allows researchers to observe precisely where users focus their gaze when interacting with a design. This data reveals which areas attract attention and which do not. This enables designers to adjust layouts, typography, and colors to improve user engagement and accessibility.
For users with disabilities, eye tracking offers valuable information. Designers can use this data to tailor interfaces that accommodate varying needs, and make digital content more accessible for people with visual impairments or cognitive disorders. For example, they can optimize the placement of essential elements on a page to ensure they are more noticeable to users with limited vision.
Additionally, eye tracking helps identify navigational challenges within a design. When designers understand how users naturally interact with a site or application, they can simplify navigation to reduce the cognitive load on users, making technology usable for a broader audience.
Watch our video on accessibility to understand this important subject in more detail:
Accessibility ensures that digital products, websites, applications, services and other interactive interfaces are designed and developed to be easy to use and understand by people with disabilities. There are 1.85 billion folks around the world who live with a disability, or might live with more than one, and are navigating the world through assistive technology or other augmentations that assist with their interactions with the world around them. That means folks who live with disability, but also their caretakers, their loved ones, their friends. All of this relates to the purchasing power of this community. Disability isn't a stagnant thing. We all have our life cycle. As you age, things change; your eyesight adjusts. All of these relate to disability. Designing for accessibility is also designing for your future self.
People with disabilities want beautiful designs as well. They want a slick interface. They want it to be smooth and an enjoyable experience. And so if you feel like your design has gotten worse after you've included accessibility, it's time to start actually iterating and think: how do I actually make this an enjoyable interface to interact with, while also making sure it sets expectations and actually gives people the amount of information they need, in a way that they can digest it just as everyone else wants to digest that information?
For screen reader users, a lot of it boils down to making sure you're always labeling your interactive elements, whether it be buttons, links or slider components. Just make sure that you're giving enough information that people know how to interact with your website, with your design, with whatever that interaction looks like. Also, dark mode is something that came out of this community, so you might be someone who leverages that quite frequently. Font is a huge aspect to think about in your design. A thin font that meets color contrast can still be a really poor readability experience because of that pixelation aspect or because of how your eye actually perceives the text. What are some tangible things you can start doing to help this user group? Create inclusive and user-friendly experiences for all individuals.
Yes, it is possible to conduct eye tracking studies remotely. Advancements in technology have led to the development of web-based eye tracking software that uses standard webcams to record gaze data. This innovation allows researchers to gather eye tracking data without the need for specialized equipment or face-to-face interaction.
In remote eye tracking, participants can complete tasks on their devices at home while the software captures where they look on the screen in real-time. This method is particularly useful for studies that aim to understand how users interact with websites or digital advertisements.
However, remote eye tracking may present some challenges compared to traditional methods. The accuracy of gaze data can vary depending on the quality of the participant's webcam and lighting conditions. Additionally, researchers have less control over the testing environment, which can introduce variables that affect the data.
Despite these challenges, remote eye tracking offers a flexible and scalable option for conducting research with diverse and geographically dispersed participants. This process and the tools that facilitate it open up opportunities for gathering data in naturalistic settings, providing insights into user behavior in real-world scenarios.
A typical eye tracking session lasts between 15 and 30 minutes. This duration is ideal for gathering sufficient data without causing discomfort or fatigue to participants. The length of a session can vary depending on the study's objectives and the complexity of the tasks involved. Shorter sessions, lasting around 15 minutes, are common for studies that focus on specific aspects of user interaction, such as understanding how users navigate a webpage or interact with an advertisement.
These shorter sessions help keep the participant's engagement high and ensure that the data collected reflects their natural behavior without fatigue influencing their responses. Longer sessions, which may extend up to 30 minutes or more, are necessary for more detailed studies. These might involve complex tasks or require participants to interact with multiple interfaces. In such cases, researchers need more time to observe how users adapt to different design elements over a longer period. Regardless of the session length, it is crucial for researchers to ensure the comfort of participants. This involves setting clear expectations, providing breaks if needed, and using equipment that minimizes physical strain.
Eye tracking can impact participant privacy, primarily because it involves collecting detailed data about where individuals look and for how long. This data, while valuable for research, could potentially reveal sensitive information about a person's interests, habits or even health conditions. To protect privacy, researchers must handle eye tracking data with care. They must ensure that they store and transmit data securely to prevent unauthorized access. It is also essential to anonymize the data, which means removing any information that could identify individual participants.
Participants should always give informed consent before participating in an eye tracking study. This consent process involves explaining how researchers will use the eye tracking data, what they will record, and how they will protect participants' privacy. Researchers must also let participants know that they have the right to withdraw from the study at any time. Following these guidelines helps minimize the impact on privacy and ensures that the study adheres to ethical standards. By being transparent about the use and protection of eye tracking data, researchers can maintain trust and uphold the integrity of their studies.
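As one hedged example of the data-protection practices described above, this sketch replaces direct identifiers with salted hashes before gaze data is stored or shared. The column names, file names and salt handling are illustrative assumptions, not a complete compliance solution.

```python
import hashlib
import pandas as pd

# In practice, load the salt from a secret store, never from source code.
SALT = "example-salt-value"


def pseudonymize(participant_id: str) -> str:
    """Replace a raw participant ID with a short, non-reversible token."""
    return hashlib.sha256((SALT + participant_id).encode()).hexdigest()[:12]


raw = pd.read_csv("gaze_sessions.csv")       # hypothetical export with name, email, participant_id
anon = raw.drop(columns=["name", "email"])   # strip direct identifiers
anon["participant_id"] = anon["participant_id"].astype(str).map(pseudonymize)
anon.to_csv("gaze_sessions_anon.csv", index=False)
```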
Cultural differences significantly affect eye tracking studies because they influence how people view and interpret visual information. For example, individuals from different cultural backgrounds may focus on various elements of a webpage or advertisement due to differing norms and values. Western cultures often follow a left-to-right reading pattern. This influences how participants scan and process information. In contrast, cultures accustomed to right-to-left scripts, like Arabic, might exhibit different eye movement patterns.
These cultural distinctions can lead to different gaze patterns, which researchers need to consider when they design eye tracking studies and interpret their results. For instance, the placement of important content or calls to action might need adjustment based on the predominant reading patterns of the target audience to ensure effectiveness.
Furthermore, cultural differences in non-verbal communication, such as the significance of eye contact, can also impact how participants respond to visual stimuli. In some cultures, direct eye contact is less common. This could affect how people engage with images or videos of faces during studies.
To address these challenges, researchers should design studies that are culturally sensitive and consider these variations when they select participants and create test materials. This approach helps ensure the data accurately reflects the intended audience's behavior and preferences.
Professor Alan Dix explains why it’s important to design with culture in mind:
As you're designing, it's so easy just to design for the people that you know and for the culture that you know. However, cultures differ. Now, that's true of many aspects of the interface; not least, though, the visual layout of an interface and the visual elements. Some aspects are quite easy just to realize, like language; others are much, much more subtle.
You might have come across, there's two... well, actually there's three terms because some of these are almost the same thing, but two terms are particularly distinguished. One is localization and globalization. And you hear them used almost interchangeably and probably also with slight differences because different authors and people will use them slightly differently. So one thing is localization or internationalization. Although the latter probably only used in that sense. So localization is about taking an interface and making it appropriate
for a particular place. So you might change the interface style slightly. You certainly might change the language for it; whereas global – being globalized – is about saying, "Can I make something that works for everybody everywhere?" The latter sounds almost bound to fail and often does. But obviously, if you're trying to create something that's used across the whole global market, you have to try and do that. And typically you're doing a bit of each in each space.
You're both trying to design as many elements as possible so that they are globally relevant. They mean the same everywhere, or at least are understood everywhere. And some elements where you do localization, you will try and change them to make them more specific for the place. There's usually elements of both. But remembering that distinction, you need to think about both of those. The most obvious thing to think about here is just changing language. I mean, that's a fairly obvious thing and there's lots of tools to make that easy.
So if you have... whether it's menu names or labels, you might find this at the design stage or in the implementation technique, there's ways of creating effectively look-up tables that say this menu item, instead of being just a name in the implementation, effectively has an ID or a way of representing it. And that can be looked up so that your menus change, your text changes and everything. Now that sounds like, "Yay, that's it!"
Well, it's not the end of the story, even for text. Visit Finland sometime. If you've never visited Finland, it's a wonderful place to go. The signs are typically in Finnish and in Swedish. Both languages are used. I think roughly equal numbers of people have each as their first language, and most will know both. But because of this, if you look at those signs, they're in two languages.
The Finnish line is usually about twice as long as the Swedish piece of text, because Finnish uses a lot of double letters to represent quite subtle differences in sound. Vowels get lengthened by doubling them. Consonants get separated. So I'll probably pronounce this wrong, but R-I-T-T-A is not "Rita", which would be R-I-T-A, but "Reet-ta". Actually, I overemphasized that, but "Reetta". There's a bit of a stop.
As I said, I won't be doing it right. Talk to a Finnish person; they will help put you right on this. But because of this, the text is twice as long. But of course, suddenly the text isn't going to fit in. So it's going to overlap with icons. It's going to scroll when it shouldn't scroll. So even something like the size of the field becomes something that can change. And then, of course, there are things like left-to-right order. Finnish and Swedish are both left-to-right languages. But if you were to switch something to, say, an Arabic script from a European script,
then you would end up with things going the other way round. So it's more than just changing the names. You have to think much more deeply than that. But again, it's more than the language. There are all sorts of cultural assumptions that we build into things. The majority of interfaces are built... actually the majority are built not even in just one part of the world, but in one country, you know the dominance... I'm not sure what percentage,
but a vast proportion will be built, not just in the USA, but in the West Coast of the USA. Certainly there is a European/US/American centeredness to the way in which things are designed. It's so easy to design things caught in those cultures without realizing that there are other ways of seeing the world. That changes the assumptions, the sort of values that are built into an interaction.
The meanings of symbols, so ticks and crosses, mostly will get understood, and I do continue to use them. However – certainly in the UK, though not universally even across Europe – a tick is a positive symbol; it means "this is good". A cross is a "blah, that's bad". However, there are lots of parts of the world where both mean the same. They're both a check. And in fact, weirdly, if I vote in the UK,
I put a cross, not against the candidate I don't want but against the candidate I do want. So even in the UK a cross can mean the same as a tick. You know – and colors: I said I do often redundantly code my crosses with red and my ticks with green, because red in my culture is negative; I mean, it's not that red itself is negative; I like red (inaudible) – but a red mark has that sense of being a bad mark.
There are many cultures where red is the positive color. And actually it is a positive color in other ways in Western culture. But particularly that idea of the red cross that you get on your schoolwork; this is not the same everywhere. So, you really have to have quite a subtle understanding of these things. Now, the thing is, you probably won't. And so, this is where if you are taking something into a different culture, you almost certainly will need somebody who quite richly understands that culture.
So you design things so that it's possible for somebody to come in and make those adjustments, because you may well not be in a position to do that yourself.
Copyright holder: Tommi Vainikainen. Appearance time: 2:56–3:03. Copyright license and terms: Public domain, via Wikimedia Commons.
Copyright holder: Maik Meid. Appearance time: 2:56–3:03. Copyright license and terms: CC BY 2.0, via Wikimedia Commons. Link: https://commons.wikimedia.org/wiki/File:Norge_93.jpg
Copyright holder: Paju. Appearance time: 2:56–3:03. Copyright license and terms: CC BY-SA 3.0, via Wikimedia Commons. Link: https://commons.wikimedia.org/wiki/File:Kaivokselan_kaivokset_kyltti.jpg
Copyright holder: Tiia Monto. Appearance time: 2:56–3:03. Copyright license and terms: CC BY-SA 3.0, via Wikimedia Commons. Link: https://commons.wikimedia.org/wiki/File:Turku_-_harbour_sign.jpg
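To make the look-up-table idea from the video more concrete, here is a minimal, hypothetical sketch in Python. The label IDs, locales and translations are illustrative assumptions rather than any real product's strings, and in practice you would normally rely on an established internationalization framework instead of a hand-rolled dictionary.

```python
# A hypothetical sketch of the "look-up table" approach to localization:
# UI labels are stored under stable IDs and resolved per locale at run time.
# All IDs, strings and locales below are illustrative assumptions.

UI_STRINGS = {
    "en": {"menu.save": "Save", "menu.open": "Open", "button.add": "Add to basket"},
    "fi": {"menu.save": "Tallenna", "menu.open": "Avaa", "button.add": "Lisää koriin"},
    "sv": {"menu.save": "Spara", "menu.open": "Öppna", "button.add": "Lägg i varukorgen"},
}

# Layout direction matters as well: switching to a right-to-left script
# (e.g. Arabic) affects far more than the strings themselves.
TEXT_DIRECTION = {"en": "ltr", "fi": "ltr", "sv": "ltr", "ar": "rtl"}

def label(locale: str, key: str) -> str:
    """Resolve a label ID for a locale, falling back to English."""
    return UI_STRINGS.get(locale, UI_STRINGS["en"]).get(key, UI_STRINGS["en"][key])

# Translated strings often differ sharply in length, so the layout has to
# cope with text that may not fit the space designed for the source language.
for loc in ("en", "fi", "sv"):
    text = label(loc, "button.add")
    print(f"{loc}: {text!r} ({len(text)} characters)")
```

The point, as Dix notes, is that swapping strings is only the start: field sizes, reading direction and cultural conventions such as symbols and colors all have to flex with the locale.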
Analyzing eye tracking data requires a combination of technical and analytical skills.
Familiarity with eye tracking technology and the software used to collect and analyze the data is crucial. Researchers need to understand how to set up the equipment, calibrate it correctly and troubleshoot common issues.
Statistical skills are also essential for analyzing eye tracking data. Researchers must know how to interpret metrics such as fixation duration, saccade paths and heatmaps. These metrics provide insights into where, how long and how often participants look at specific areas of a screen. Understanding statistical principles helps in determining the significance of observed patterns and in making reliable inferences from the data.
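As a simple illustration of this kind of metric work, the sketch below summarizes fixation durations per area of interest (AOI). The record format, AOI names and durations are assumptions made for illustration; real eye trackers export richer data through their own analysis software.

```python
# A minimal sketch of summarizing fixation metrics per area of interest (AOI).
# The record format, AOI names and durations are illustrative assumptions,
# not the export format of any particular eye tracking system.
from statistics import mean

fixations = [
    {"aoi": "navigation", "duration_ms": 220},
    {"aoi": "hero_image", "duration_ms": 480},
    {"aoi": "navigation", "duration_ms": 180},
    {"aoi": "cta_button", "duration_ms": 350},
    {"aoi": "hero_image", "duration_ms": 300},
]

def summarize_by_aoi(records):
    """Group fixation durations by AOI: count, mean duration and total dwell time."""
    by_aoi = {}
    for rec in records:
        by_aoi.setdefault(rec["aoi"], []).append(rec["duration_ms"])
    return {
        aoi: {
            "fixation_count": len(durations),
            "mean_duration_ms": round(mean(durations), 1),
            "total_dwell_ms": sum(durations),
        }
        for aoi, durations in by_aoi.items()
    }

for aoi, stats in summarize_by_aoi(fixations).items():
    print(aoi, stats)
```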
Attention to detail is another important skill. Eye tracking studies generate large volumes of data, and meticulous attention is necessary to ensure accurate analysis. Researchers must carefully manage and scrutinize the data to avoid errors that could skew results.
Critical thinking is vital. Analysts must not only handle data competently but also interpret the results within the context of the study’s goals. They need to ask the right questions and consider various factors that could influence the data, such as the participant's age, cultural background or familiarity with the tested interface.
Eye tracking integrates well with other UX research methods to provide comprehensive insights into user behavior and preferences. When designers or researchers combine it with techniques like usability testing, surveys and interviews, eye tracking offers a deeper understanding of how users interact with a product.
During usability testing, eye tracking can reveal what users actually look at while trying to complete tasks, and complement the verbal feedback they provide. This method allows researchers to see if users find interfaces intuitive or if their gaze patterns indicate confusion. For example, if users frequently miss a crucial button or link, the design may need adjustment.
Surveys and interviews, on the other hand, gather subjective data about user preferences and satisfaction. Eye tracking adds objective data to these insights. It confirms or questions the accuracy of self-reported information. It can show whether users' stated preferences align with their actual viewing behavior.
Incorporating eye tracking into A/B testing is another powerful combination. Researchers can compare how different design variations influence where users look, which provides concrete evidence to support design decisions.
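For example, a researcher might compare how long participants dwell on a key element under two design variants and check whether the difference is statistically meaningful. The sketch below assumes per-participant dwell times (invented values) and uses SciPy's Welch's t-test; which test is appropriate depends on the study design.

```python
# A sketch of combining eye tracking with A/B testing: compare dwell time
# on a key element between design variants A and B. The dwell times are
# invented, illustrative values in milliseconds.
from scipy.stats import ttest_ind

dwell_variant_a = [410, 520, 390, 610, 480, 450, 530, 400]
dwell_variant_b = [620, 700, 580, 640, 710, 590, 660, 630]

# Welch's t-test (does not assume equal variances between the two groups).
t_stat, p_value = ttest_ind(dwell_variant_a, dwell_variant_b, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the variants genuinely differ in the attention the
# element attracts; interpret it alongside qualitative findings, not in isolation.
```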
Overall, integrating eye tracking with other research methods enriches the data collected, and offers a multi-dimensional view of user experience that helps create more user-friendly and effective designs.
Take our Master Class Design with Data: A Guide to A/B Testing with Zoltan Kollin, Design Principal at IBM.
Consultant Editor and Author William Hudson explains A/B testing in this video:
A/B testing is all about changes in behavior. We present people with alternative designs and we look to see how much that alters their subsequent response. So in the simple A/B case, we show them design A, we show them design B, and we typically measure a completion goal, which in a lot of subject areas in user experience we refer to as conversions.
So signing up to a newsletter, adding an item to a shopping basket, making a donation to a charity. These are all things that are important to their respective organizations, and typically for the interactive technology that we're working on – so websites and apps, for example. So these are often the things that we're measuring, but they're not the only things that we can measure. We can measure really straightforward stuff like time spent on page, time spent in the site and also bounce rates.
For example, we'll be looking at some of those a bit later on. Just a reminder that because A/B testing is done very late in the day, with live sites and large numbers of users, you really want to make sure that your solution is sound before you get this far. You're not going to be able to test everything that is possibly worrying you or possibly causing problems for users. It's just too long, too involved and potentially expensive in terms
of user loyalty and also the amount of effort you'd have to put into it. So we are looking at using A/B testing to basically polish the solution rather than to rework it. Bear that in mind and make sure that you've done adequate testing up to this point. Also, bear in mind that A/B testing tends to be focused on individual pages, so it is possible to have multi-page tests, but
it's a more complex area than we're going to be looking at in this lesson. So experiments have research questions – basically, the things that you're trying to answer – and because A/B testing focuses on changes in behavior, the research questions are going to be centered on defined goals. And as I've mentioned already, typically conversions. So, as an example: will moving the add button above the fold improve sales conversions? I would imagine it would actually do something. I always find people
are making the mistake of getting too talkative on the first screen of the page and the actual “buy this” or “add to basket” button gets pushed further and further down until users actually don't even see it. Will a more clearly worded charitable purpose increase donations? If people have a better understanding of what your charity's about or where this money is going, would that improve conversions for those users? So both of these can be A/B tested by using goals that you almost
certainly have already defined in your analytics solution. So these are very good candidates for A/B and multivariate testing. But I'll give you some examples of bad questions too. So obviously I will repeat the words "don't ask this" when I've mentioned them because they're not meant as examples that you should be taking away. Conversely, research questions that are not directly related to improved goal completions tend not to be suitable for A/B testing.
And a kind of vague question like "Will better product photos reduce questions to customer service?" – don't ask this – is the sort of thing that you simply cannot effectively test in A/B testing. And the reason is that there are all kinds of channels to customer service, and only some of them are through the website, and only some of them can be effectively measured as goals. So it's just not a suitable scenario for A/B testing. There is a related question you could ask, though,
which might be just as good, although not exactly equivalent, and that would be: “Will better product photos improve sales conversions?” Because if it reduces queries to customer service, it's almost certain that people are going to be much more confident about placing orders, adding those things to their basket. So that is a very easily measured outcome in terms of A/B testing, and that is the kind of question that A/B testing is very good at.
So simply rewording or rethinking the question in terms of defined user and business goals is one way of getting to a satisfactory conclusion, even if you have a slightly squiffy question to start with.
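As a rough illustration of the conversion comparisons Hudson describes, the sketch below runs a chi-square test on made-up visitor and conversion counts for two design variants. The numbers and the "above the fold" framing are assumptions for illustration; commercial A/B testing tools perform this kind of significance analysis for you.

```python
# A minimal sketch of testing whether an A/B conversion difference is
# statistically significant, using a chi-square test on a 2x2 table of
# converted vs. not-converted counts. All numbers are invented.
from scipy.stats import chi2_contingency

visitors_a, conversions_a = 5000, 240   # variant A (e.g. add button below the fold)
visitors_b, conversions_b = 5000, 305   # variant B (e.g. add button above the fold)

table = [
    [conversions_a, visitors_a - conversions_a],
    [conversions_b, visitors_b - conversions_b],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Conversion rate A: {conversions_a / visitors_a:.1%}")
print(f"Conversion rate B: {conversions_b / visitors_b:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```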
Gwizdka, J., Dillon, A., & Zhang, Y. (2020). Eye-Tracking as a Method for Enhancing Research on Information Search. In Advances in the Human Side of Service Engineering (pp. 161–181). Springer International Publishing.
This publication discusses how eye tracking can enhance research on information search. The authors argue that the human eye plays a crucial role in information acquisition from the external world, and much of contemporary information technology relies on visual processing. Eye tracking methods are considered to offer theoretically reliable measures of visual attention and search task activities. The paper presents examples of eye tracking tools and how they capture data, and examines how eye tracking data has been used to assess cognitive factors in information search. It provides valuable insights into the potential of eye tracking for advancing research in this area.
Bojko, A. (2013). Eye Tracking the User Experience: A Practical Guide to Research. Rosenfeld Media.
This book is a practical guide to conducting eye tracking studies in UX research. It covers the entire process, from planning and conducting studies to analyzing and interpreting the results. The book has been influential in making eye tracking more accessible to UX researchers by providing a step-by-step approach and best practices. It has helped establish eye tracking as a standard tool in the UX researcher's toolkit.
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (2011). Eye Tracking: A Comprehensive Guide to Methods and Measures. OUP Oxford.
This comprehensive book provides a thorough introduction to eye tracking, covering the underlying theory, hardware, data analysis and applications. It has been influential in establishing a common framework for eye tracking research and has become a standard reference for researchers in the field. The book covers a wide range of topics, from the physiology of the eye to advanced data analysis techniques. This makes it a valuable resource for both novice and experienced eye tracking researchers.
Here's the entire UX literature on Eye-Tracking In UX Design by the Interaction Design Foundation, collated in one place:
Take a deep dive into Eye-Tracking In UX Design with our course User Research – Methods and Best Practices.
How do you plan to design a product or service that your users will love, if you don't know what they want in the first place? As a user experience designer, you shouldn't leave it to chance to design something outstanding; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as the largest part of user experience design.
In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required, and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, subsequently, outperforms your competitors’.
This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!
By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!
We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!
We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.
If you want this to change, link to us, or join us to help us democratize design knowledge!