Games User Research


What is Games User Research?

Games User Research focuses on understanding players' behavior, interactions, and experiences in video games. Researchers use methodologies like observations, interviews, and surveys to gather valuable data. This data helps teams improve games, remove bugs, and enhance the player experience.

Steve Bromley, a games user research expert who has worked with companies such as Sony Interactive Entertainment and EA, gives an overview of Games User Research:

Video transcript
  1. 00:00:00 --> 00:00:30

    So, starting from the beginning: what is games user research, if this is the first time you've come across it? The way I describe it to game teams is that games user researchers run structured studies to make games better. Part of the reason we do this is because game development throws up a whole bunch of questions as teams are building software and thinking about, what is this game or this experience we're making? There's a traditional process for developing games.

  2. 00:00:30 --> 00:01:03

    It's a very waterfall process compared to a lot of software development, and it usually goes through phases: an ideation phase, where teams are wondering, is this idea fun? Is there an idea here that is worth building a game around? And is it getting the type of emotional response and reaction that we're expecting from our games? They then go into a pre-production process, where they're building prototypes to see, can we turn this small idea into something that stands up as a game, and work out what the scope of the game will be,

  3. 00:01:03 --> 00:01:35

    what's going to be in or out. They then go into production, actually building the game and creating all the art and the assets and the levels and the design, and things like tutorials, teaching people how to play the game. And then they go through a phase of post-production at the end, with balancing or tuning. Now, if you do come from a games user research background, you'll recognize that a lot of these are potential research objectives, things where we have a skill set and some methods that we can apply to help game

  4. 00:01:35 --> 00:01:39

    developers make confident decisions around some of those questions they have.

The Role of a Games User Researcher

Games user researchers combine principles of psychology, human-computer interaction, and UX design to study how players interact with video games. Their studies uncover potential issues in the game mechanics, user interface, or any other aspect that could negatively affect the player's experience.

Games user researchers work closely with game designers, producers, and the UX team to ensure the game aligns with the designer’s vision and meets the player's expectations.

Researchers analyze the collected data to identify patterns, trends, and potential usability issues. Then, they communicate these findings to the design team in a way that is easy to understand. Their conclusions create empathy between the design team and the players. Researchers also provide actionable recommendations to improve the game's design and enhance the player experience.

Differences between User Research and Games User Research

While games user research and user research are similar, there are a few key differences.

User Needs: Tasks vs Entertainment

User researchers focus on user needs—what users want to achieve, their challenges, etc.

For example, the design team for an e-commerce website creates a mega menu to help users see all product categories at once. However, the research team finds that users struggle to find products because of the way they are ordered in the menu. The design team incorporates this insight and restructures the navigation. The navigation is a functional element that helps users complete a goal, in this case, to find products quickly.

On the other hand, games are a form of entertainment, an art form, like movies. Scriptwriters write movies with the intention of creating an emotional response in the viewer. Game designers do the same, but for games. Therefore, games user researchers approach their work from the designer’s point of view. Unlike other products, where designers rely on user research to decide what to create, game designers begin with a vision, and researchers evaluate whether the players' experience matches that vision.

For example, if the purpose of a horror game is to make players feel scared and nervous, the research team’s goal is to find out how the players are feeling. If the players are not “scared enough,” the designer can use research insights to reach their vision.

Difficulty

User researchers ask users, “What difficulties do you encounter while using our platform/product/service?” Designers use this research to remove these difficulties.

However, in games, difficulty improves the experience. It's essential as it keeps players interested while enhancing their skills. As players overcome challenges, they have more fun.

A warrior in armor with a sword and shield fighting a red dragon in a castle from the video game Dark Souls Remastered.

An excellent example of difficulty in games is the popular game series Dark Souls. The Dark Souls games are famous for being incredibly difficult to complete. However, this is what has made them popular. Players enjoy the satisfaction of overcoming the challenges of the game.

© Bandai Namco, Fair Use

Researchers need to identify when difficulty is intentional or accidental:

  • When difficulty is intentional, it elevates the gaming experience. Players will become bored if the game does not get progressively more difficult as they move through it. As intentional difficulty increases, so does the skill of the player. Equally, if level 5 is easy compared to level 3, the game will not meet players’ expectation of increasing difficulty.

  • When difficulty is accidental, it reduces the player's enjoyment level. The game does not immerse the player, who may eventually give up. Examples of unintentional difficulty include glitches and bugs, poor balancing, or illogical changes in gameplay.

A collection of player characters in the video game World of Warcraft. They are surrounded by skeletons of the player characters who were infected with the Corrupted Blood spell.

In the massively multiplayer online game World of Warcraft, there was an incident known as the “Corrupted Blood Incident.” The game's developers introduced a new enemy who cast a spell on players, giving them a contagious disease. Only when a player defeated the enemy was the disease healed. Many players left the area where the enemy existed without beating it and spread the disease throughout the game. This unintended consequence resulted in many players having difficulty playing the game. Their characters would get infected, die, respawn, and then catch the disease again, repeating the cycle—an example of accidental difficulty.

© Blizzard Entertainment, Fair Use

Secrecy

Game researchers need to understand secrecy's crucial role in game development. Marketing and advertising are critical to a game’s commercial success. Game studios usually protect their games to prevent leaks that could disrupt marketing strategies.

Secrecy directly impacts research methods, especially those involving the public. Some research methods, like public surveys, are not usable when you must preserve secrecy. Therefore, the research methods in games user research can differ from those used in other industries.

Why Is Games User Research Important?

“Games user researchers bring structure to the playtesting process so that game developers are confident that the game they’re making is experienced by players in the way they want them to be experiencing it.”

 – Steve Bromley, Games user research expert

The video game industry is highly competitive. Studios aim to create games that captivate players and keep them engaged so they don’t switch to a competitor’s game. Games user researchers help achieve this by identifying issues to fix throughout development.

Steve Bromley explains the concept of playtesting and why applying user research practices to the process is important:

Video transcript
  1. 00:00:00 --> 00:00:30

    Game development is an environment where teams already have an idea about how they approach things. And one of the first things you'll encounter if you talk to game developers is the concept of playtesting. Now, playtesting has been a traditional thing in the game industry forever; as long as they have been making games, developers have been putting their game in front of people and seeing what they think. And as researchers, we might have some thoughts about how they're approaching this.

  2. 00:00:30 --> 00:01:01

    But the idea, at least, of seeing what players' experience of the game is, is very familiar and very common as part of the game development process. Putting on our UX hat or user research background, we can notice that the typical approach introduces a number of risks: put some people in front of the game, maybe give them a survey, and just see what they think about it. Often, playtests are run with inappropriate players, with whoever's convenient.

  3. 00:01:01 --> 00:01:31

    They might even be playing it themselves, or asking someone else they work with to play and see what they think. Playtesting often occurs quite late in the development process, as they're coming up to launch. They might then start exposing the game to players, which, as user researchers, we know is often too late to make any changes. The method selection might not be appropriate, either: they might just be running surveys because they're convenient, or putting the game in front of people and simply asking, what do you think, at the end.

  4. 00:01:31 --> 00:02:00

    And again, as user researchers, we know there are better ways of approaching that. Also, because it's not coming from a professional research background, some of the questions, or the way they approach answering those questions, are going to be biased. They're going to ask people, do you like my game? Not recognizing that there might be nuance, or better ways of answering those questions. That's where we step in as games user researchers: we take a lot of the best practice

  5. 00:02:00 --> 00:02:33

    from UX elsewhere, and from the scientific method, to bring some structure to their playtesting process. We go through steps where we'll do proper planning, working out what the research objectives are and what we need to learn from the study. We'll prepare an appropriate study design and some methods. We use rigorous methods of collecting that data, asking unbiased questions to make sure we're getting good quality data back. And we go through a formal analysis process: we look at all that raw data and dive deep into it to work out

  6. 00:02:33 --> 00:03:00

    what it means, what the data represents. Ultimately, the reason we do all of that is because it gives answers to the game teams. They had all those questions: Is our game fun? Do players understand how to get through the tutorial? Is this game too easy or too hard? We can run our studies, come up with reliable answers, and then tell game developers or game designers or game producers

  7. 00:03:00 --> 00:03:10

    some answers so that they can make confident decisions about what they need to change about their game or what they need to prioritize in their development process.

Let’s look at a simple scenario: a game studio is building a game. They know what type of game they want to make and begin developing it. 

1. What Happens If You Don’t Use Games User Research?

Only the game studio employees play the game throughout the development process. Since they’ve built it, they are biased and only identify and fix issues they find themselves. This approach saves them time and money and means they can launch the game sooner. 

When the game launches, they look at the user feedback and discover their game is filled with bugs. Players also find the game too complicated and don’t understand how to play. Ultimately, the game doesn't sell many copies, and the industry considers it a failure.

2. What Happens If You Do Use Games User Research?

Throughout the development process, the game studio employs user research. Since the users testing the game are new to it, they highlight and identify many issues the studio hadn’t noticed.

Using these findings, they fix the issues and continue to employ user research to identify and fix further issues. This approach adds extra time to the development process and costs money, but the studio feels more confident their game will succeed. 

When the game launches, the feedback is positive, and the players enjoy the game. Ultimately, the game sells many copies, and the industry considers it successful.

Games User Research Sets You Up for Success

The process of game design is, of course, much more complex than this. However, these scenarios show how vital games user research is to game development.

Researchers can provide important information to guide the design team when they understand the players' preferences and behaviors. This results in better game experiences, higher player retention rates, and commercially successful games.

Games User Research Methods

Games user researchers use many of the same user research methods employed in other industries, such as observation, interviews, surveys, and analytics.

Once the development team has a working version or demo of the game, researchers will do most of their user research through playtesting. Playtesting is where users play the game, and researchers collect data using qualitative and quantitative approaches.

Researchers conduct playtesting one-to-one, with small groups, or with large groups of 20+ players (also known as mass playtesting or multi-seat testing).

Typically, researchers employ qualitative methods for small groups and quantitative methods for mass playtesting.

Qualitative Research 

Researchers use qualitative research to observe and talk with players in smaller groups to understand them better. Qualitative research helps us discover how players feel, act, and think, leading to better designs.

Qualitative research methods include:

  • User interviews – It's helpful to talk to players throughout game development. In the ideation phase, you can understand their behaviors and preferences. Later on, through playtesting, you can ask about their experience with the game and how it makes them feel.

  • Observation – Watch your players as they play. Record their choices, successes, and struggles.

  • Diary Studies – Participants record their thoughts, feelings, and experiences. This approach allows you to gain insights into users' experiences over a period of time.

Quantitative Research

Researchers use quantitative research to study people's attitudes and behaviors based on statistical data. Due to the importance of secrecy in games, quantitative methods like public surveys may not be available during game development: a public survey is far more likely to result in someone leaking the game than working directly with a smaller group of users.

However, games have many variables and require many users to test them. For this reason, game studios employ mass playtesting, where dozens of players test the game simultaneously. Given these large user groups, researchers can use quantitative methods to collect data. 

Quantitative research methods include:

  • Analytics – Monitor players' actions while they play. Examples of analytics include:

    • Time spent playing.

    • How many times players had to restart a level.

    • How many items players have collected.

  • Surveys and Questionnaires – Gather information from players after they have played the game. Use carefully crafted questions to understand how players felt and what they experienced when they played the game.
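As an illustration, the analytics metrics listed above could be aggregated from raw telemetry with only a few lines of code. The event format here is hypothetical; real studios use their own telemetry pipelines and schemas.

```python
from collections import defaultdict

# Hypothetical event log from a mass playtest: each entry is
# (player_id, event_type, payload). Real telemetry formats vary by studio.
events = [
    ("p1", "session_seconds", 1800),
    ("p1", "level_restart", "level-3"),
    ("p1", "item_collected", "coin"),
    ("p2", "session_seconds", 2400),
    ("p2", "level_restart", "level-3"),
    ("p2", "level_restart", "level-3"),
    ("p2", "item_collected", "coin"),
]

def summarize(events):
    """Aggregate the three example metrics: playtime, restarts, items."""
    playtime = defaultdict(int)   # seconds played, per player
    restarts = defaultdict(int)   # restart count, per level
    items = defaultdict(int)      # items collected, per player
    for player, kind, payload in events:
        if kind == "session_seconds":
            playtime[player] += payload
        elif kind == "level_restart":
            restarts[payload] += 1
        elif kind == "item_collected":
            items[player] += 1
    return playtime, restarts, items

playtime, restarts, items = summarize(events)
print(restarts["level-3"])  # 3 restarts: this level may be accidentally hard
```

A spike in restarts on one level, as in this toy data, is the kind of signal a researcher would investigate qualitatively to decide whether the difficulty is intentional or accidental.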

When Does Games User Research Begin?

Researchers conduct research throughout the whole game development process. It is essential to keep the bridge between designers and players strong. This empathy avoids scenarios where designers build features and find out too late that players don’t like them.

Ideation

In the ideation phase, game designers want to understand:

  • Do users find this idea fun?

  • Is it worth building a game from this idea?

  • Does this idea give users the emotional response and reaction we want?

To answer these questions, researchers speak to target users of the game. Since secrecy is crucial in game design, researchers typically use qualitative methods such as interviews and diary studies. This research helps design teams create mental models of their target users to understand their expectations and how they think.

For example, a design team plans to build a platform game like Super Mario. Before they begin development, they interview users who love platform games. Through this research, they discover players prefer platform games with strong narratives. When the character grows, learns, and changes as the story progresses, players feel the game is worth their time.

Pre-production

As soon as designers have a playable demo, they playtest it. The players will reveal if the basics of the game are fun.

At this stage, researchers can begin using the observation method. By watching how users play and approach the game, they can understand essential information to inform game development.

For example, in the platform game, researchers might discover that players try to explore areas of the level that don’t exist. This discovery can translate into a design element where the team includes secret and hidden areas to add further engagement and player retention to the game.

Production

The production phase is where studios build the whole game. Game studios conduct playtesting regularly to ensure that the game meets the designers' vision and that players enjoy it.

Since mass testing is now possible, researchers can employ quantitative methods like analytics to understand where designers can improve the game.

For example, players may spend far more time on a particular level than on others. These findings may indicate that the level's objectives are unclear and that players struggle to achieve them.

Post-Production

During post-production, studios balance and tune the game. Researchers gather much information from playtests at this stage to inform final design decisions.

For example, a game studio builds a game where you can play multiple characters. The majority of players choose one character in particular. Researchers discover that this is because this character is much stronger than the others. These findings tell the design team that they must rebalance the character so that all characters are equally powerful.

Post-Launch

Before the widespread use of the internet, video game developers could not update games after releasing them. Occasionally, developers released updated versions of popular games with improvements, but this was rare.

Most game developers now have the option to update their games after launch. Since secrecy is no longer a concern, researchers can use quantitative methods such as public surveys and questionnaires to gather vast amounts of player feedback and data. With this new data, studios can fix previously undiscovered issues and improve the player experience.

Some game developers also employ early access, releasing a game to the public before development is complete. Indie developers and crowdfunded games often use this tactic to fund the game's completion. Early access contradicts the need for secrecy in game development but can ultimately be more beneficial for some studios.

Early access is a particularly effective way to gather study participants. Since players are usually fans of the game genre or developer, they will happily play an unfinished game.

A warrior in armor with a large sword on their back standing on a roof top looking out across a castle courtyard filled with people in the game Baldur's Gate 3.

An example of a successful early access release is the role-playing game Baldur’s Gate 3. Developer Larian Studios released the early access version of the game in October 2020. Baldur’s Gate 3 is a sprawling, incredibly complex game with many paths players can take. The early access period lasted almost three years until Larian Studios released the game in September 2023. Early access allowed Larian Studios to collect vast user feedback to improve the game. Baldur’s Gate 3 became one of the best-selling, top-rated games of 2023.

© Larian Studios, Fair Use

How to Approach Games User Research

Researchers use a 4-step process to carry out their study.

1. Plan

The first step in the research process is to define the objectives—what is the purpose of the research? What do you want to find out? Talk to the game designers, producers, and UX designers to understand what they want to discover. 

Researchers may also read through previous research and analysis to understand what others have uncovered. It is also essential they play the current version of the game, as well as similar games from competitors.

Armed with this knowledge, researchers can answer the following questions:

  • What goals do we want to reach with our research, and what do we want to learn from it?

  • What plan do we have to carry out the study?

  • When are we scheduling the study, and who are our targeted participants?

  • Which stakeholders will benefit from the study results? 

Those working individually or on a small design team may already know the purpose of their research. However, anyone conducting research must ask themselves these questions to ensure they get the data they need.

2. Prepare

Depending on the research objectives and the game development stage, researchers choose the appropriate research method(s). 

If the research aims to understand how a game level makes your players feel, they may use a qualitative method like user interviews. On the other hand, they might use a quantitative method like analytics to learn how long players spend on a certain level.

Researchers also need to recruit research participants. Design consultancy IDEO uses a method of recruiting “Extremes” and “Mainstreams.” This method enables researchers to cover the entire spectrum of the target group.

Extremes are users who, for example: 

  • Have minimal gaming experience.

  • Prefer other genres.

If these participants enjoy the game, most other players will, too. 

If you use the extremes and mainstreams method, remember to include mainstream users as well. Mainstream users are the ones who represent the majority of your target group.

A bell curve detailing where extreme users lie. In the example, users that match the targeted height, age, and weight are in the middle of the curve. These users make up most research participants. Users who do not match the targeted audience, also known as extreme users, are on the edges of the curve.

Always include a small number of extreme users in your study. They are more likely to highlight issues only newcomers will encounter with your game.

© Interaction Design Foundation, CC BY-SA 4.0
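The split described by the bell curve can be sketched in code. Everything here is illustrative: the screener variable (weekly hours playing the genre) and the cut-off values are hypothetical, not an IDEO standard, and a real screener would combine several criteria.

```python
# Hypothetical recruiting pool: (name, weekly hours playing platform games).
pool = [
    ("Amara", 0), ("Ben", 6), ("Chen", 12), ("Dana", 25),
    ("Eli", 1), ("Fay", 9),
]

def partition(pool, low=2, high=20):
    """Split recruits into extremes (the tails of the curve)
    and mainstreams (the middle). Thresholds are illustrative."""
    extremes = [p for p in pool if p[1] < low or p[1] > high]
    mainstreams = [p for p in pool if low <= p[1] <= high]
    return extremes, mainstreams

extremes, mainstreams = partition(pool)
# Amara, Eli (barely play) and Dana (plays heavily) are extremes;
# Ben, Chen, and Fay sit in the mainstream middle of the curve.
```

A study roster would then draw mostly from the mainstream group, with a small number of extremes included to surface newcomer issues.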

3. Collect

Many researchers also act as moderators when using qualitative research methods like interviews or observations. Moderators help research participants feel comfortable. A strong ability to empathize and an inquisitive nature are beneficial qualities in a moderator.

4. Action

Once researchers have gathered their data, they analyze it and present their findings. How researchers present their data will depend on the research purpose and methods. However, it is essential to present data in a way the design and development teams can easily understand.

Learn More about Games User Research

Take our course User Research – Methods and Best Practices to build your foundational knowledge of user research.

Watch Steve Bromley’s Master Class, How to Become a Games User Researcher, for insights from a GUR expert.

You can also read Steve’s blog to learn about GUR in more detail.

Learn about the differences and similarities between game and mainstream user research.

Access a library of presentations from the Games User Research Summit, presented by industry experts.

Questions about Games User Research

What are some highly cited scientific research about games user research?

Some highly cited research on games user research and related topics include:

Desurvire, H., & El-Nasr, M. S. (2013). Methods for Game User Research: Studying Player Behavior to Enhance Game Design. IEEE Computer Graphics and Applications, 33(4), 82-87. 

Mirza-Babaei, P., Nacke, L., & Drachen, A. (2018). Games User Research Methods. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts (CHI PLAY '18 Extended Abstracts) (pp. 1-4). Association for Computing Machinery. 

Nacke, L. E. (2017). Games user research and gamification in human-computer interaction. XRDS, 24(1), 48–51.

Smeddinck, J., Krause, M., & Lubitz, K. (2020). Mobile Game User Research: The World as Your Lab? arXiv preprint arXiv:2012.00378.

Shin, Y., Kim, J., Jin, K., & Kim, Y. B. (2020). Playtesting in Match 3 Game Using Strategic Plays via Reinforcement Learning. IEEE Access, 8, 51593-51600.

Lee, I., Kim, H., & Lee, B. (2021). Automated Playtesting with a Cognitive Model of Sensorimotor Coordination. In Proceedings of the 29th ACM International Conference on Multimedia (MM '21) (pp. 4920–4929). Association for Computing Machinery.

Mirza-Babaei, P., Stahlke, S., Wallner, G., & Nova, A. (2020). A Postmortem on Playtesting: Exploring the Impact of Playtesting on the Critical Reception of Video Games. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20) (pp. 1-12). Association for Computing Machinery.

Ariyurek, S., Surer, E., & Betin-Can, A. (2022). Playtesting: What is Beyond Personas. arXiv preprint arXiv:2107.11965.

Sevin, R., & DeCamp, W. (2020). Video Game Genres and Advancing Quantitative Video Game Research with the Genre Diversity Score. The Computer Games Journal, 9, 401–420.

Díaz, C., Ponti, M., Haikka, P., et al. (2020). More than data gatherers: Exploring player experience in a citizen science game. Quality and User Experience, 5, 1.

If you’d like to cite content from the IxDF website, click the ‘cite this article’ button near the top of your screen.

What are some recommended books on games user research?
What are the best practices for conducting playtesting?

Playtesting, a crucial practice in game design, offers researchers insights into how real users interact with a game. Follow these best practices when you conduct playtesting:

  • Define Objectives: Determine your playtest learning goals.

  • Choose Participants: Select a representative target audience.

  • Prepare Materials: Ensure a testable game/product state.

  • Create Environment: Set a conducive play/observation space.

  • Observe and Note: Watch and record participant interactions.

  • Ask Questions: Encourage open feedback from participants.

  • Analyze Feedback: Review and apply session insights.

  • Iterate, Repeat: Continue to improve via multiple tests.
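One simple way to keep a playtest structured, in the spirit of the steps above, is to tie every observation back to a stated objective so that notes do not drift into unfocused feedback. This sketch is illustrative; the class and field names are hypothetical, not part of any standard playtesting tool.

```python
from dataclasses import dataclass, field

@dataclass
class Playtest:
    """A minimal playtest record: stated objectives plus tagged notes."""
    objectives: list
    notes: list = field(default_factory=list)

    def observe(self, objective, note):
        # Refuse notes that don't map to an objective: this forces the
        # moderator to plan objectives up front (step 1 of the list above).
        if objective not in self.objectives:
            raise ValueError(f"'{objective}' was not a stated objective")
        self.notes.append((objective, note))

session = Playtest(objectives=["tutorial clarity", "level-1 difficulty"])
session.observe("tutorial clarity", "P3 skipped the jump prompt")
```

Grouping notes by objective also makes the later analysis step straightforward: each objective's evidence is already collected in one place.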

Researchers will often use interviews to gain insights from users after playtesting. Ann Blandford, Professor of Human-Computer Interaction at University College London, explains the pros and cons of user interviews in this video:

Video transcript
  1. 00:00:00 --> 00:00:35

    So, semi-structured interviews – well, any  interview, semi-structured or not, gets at people's perceptions, their values, their experiences as they see it, their explanations about why they do the things that they do, why they hold the attitudes that they do. And so, they're really good at getting at  the *why* of what people do,

  2. 00:00:35 --> 00:01:02

    but not the *what* of what people do. That's much better addressed with *observations* or *combined methods* such as contextual inquiry  where you both observe people working and also interview them, perhaps in an interleaved way about why they're doing the things that they're doing or getting them to explain more about how things work and what they're trying to achieve.

  3. 00:01:02 --> 00:01:32

    So, what are they *not* good for? Well, they're not good for the kinds of questions where people have difficulty recalling, or where people might have some strong motivation for saying something that perhaps isn't accurate. Of those two concerns, I think the first is probably the bigger one in HCI

  4. 00:01:32 --> 00:02:00

    – that... where things are unremarkable, people are often *not aware* of what they do; they have a lot of *tacit knowledge*. If you ask somebody how long something took, what you'll get is their *subjective impression* of that, which probably bears very little relation to the actual time something took, for example. I certainly remember doing a set of interviews some years ago

  5. 00:02:00 --> 00:02:32

    where we were asking people about how they performed a task. And they told us that it was  like a three- or four-step task. And then, when we got them to show us how they did it, it actually had about 20, 25 steps to it. And the rest of the steps they just completely took for granted; you know – they were: 'Of course we do that! Of course we—' – you know – 'Of course that's the way it works! Of course we have to turn it on!' And they just took that so much for granted that *it would never have come out in an interview*.

  6. 00:02:32 --> 00:03:11

    I mean, I literally can't imagine the interview that would really have got that full task sequence. And there are lots of things that people do or things that they assume that the interviewer knows about, that they just won't say and won't  express at all. So, interviews are not good for those things; you really need to *observe* people to get that kind of data. So, it's good to be aware of what interviews are good for and also what they're less well-suited for. That's another good example of a kind of  question that people are really bad at answering,

  7. 00:03:11 --> 00:03:31

    not because they're intentionally deceiving usually, but because we're *not* very good at *anticipating what we might do in the future*, or indeed our *attitudes to future products*, unless you can give somebody a very faithful kind of mock-up

  8. 00:03:31 --> 00:03:56

    and help them to really  imagine the scenario in which they might use it. And then you might get slightly more reliable  information. But that's not information I would ever really rely on, which is why *anticipating future product design is such a challenge* and interviewing isn't the best way  of getting that information.

Read our article How to Conduct User Interviews to learn more.

What tools are used in games user research?

In games user research, researchers employ various tools to understand player behavior, preferences, and experiences. These tools include:

  • Analytics Software: Track in-game player behavior and metrics.

  • Survey Tools: Gather feedback on game design via surveys.

  • Game Testing Platforms: Collect player data using specialist software.

  • Eye Tracking Technology: Track player gaze for insight into attention focus.

  • Physiological Measurement Tools: Measure heart rate, brain activity, and signals for emotional and physical engagement insights.

  • Heat Maps: Visualize the areas of a game players interact with most, based on aggregated player data.

  • Audio and Video Recording Tools: Capture player reactions during playtesting sessions.
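
To make the heat-map idea concrete, here is a minimal sketch (in Python, using made-up telemetry data and a hypothetical `position_heatmap` helper, not any specific analytics tool) of how recorded player positions might be binned into grid cells and counted:

```python
from collections import Counter

def position_heatmap(positions, cell_size=10):
    """Bin (x, y) player positions into grid cells and count visits.

    positions: iterable of (x, y) tuples from gameplay telemetry.
    cell_size: width/height of each grid cell in world units.
    Returns a Counter mapping (cell_x, cell_y) -> visit count.
    """
    counts = Counter()
    for x, y in positions:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

# Hypothetical telemetry: most players cluster near the origin.
telemetry = [(3, 4), (7, 6), (8, 2), (55, 60), (52, 61)]
heat = position_heatmap(telemetry, cell_size=10)
print(heat.most_common(1))  # the most-visited cell and its count
```

The resulting counts can then be rendered as a color overlay on the level map; hot cells often reveal choke points, popular routes, or areas players never discover.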

William Hudson, User Experience Strategist, teaches you how to ensure you collect good quality data from your research:

Show Hide video transcript
  1. 00:00:00 --> 00:00:32

    We're going to be talking about ensuring quality. One of the really most straightforward  things – and it's quite quick and simple to do – is to walk through the survey with other people. And this is something in HCI referred to as a "cognitive walkthrough", but it's really nothing to be afraid of. I've used the word "cognitive" and it sounds frightening, but it shouldn't be. It's just we want people to tell us what they think we mean by these questions.

  2. 00:00:32 --> 00:01:01

    So, even the most carefully designed surveys can cause confusion, frustration and certainly unexpected responses. And we try to minimize that risk by walking through the surveys with a small number of participants or proxies. And when I say "small number", it might be as few as three – maybe. Just like in usability testing, if  you find on the first walkthrough that you've got a number of problems and then on the second walkthrough that you've *still* got a number of problems,

  3. 00:01:01 --> 00:01:30

    even after you've fixed the first ones, and on the  third walkthrough you have more problems *again*, then you need to keep going until you've actually  had a couple of walkthroughs without any new problems. You might end up conceivably with  seven, but you would have been very unlucky if that actually turns out to be the case; you must  have misunderstood some aspect of your audience in terms of them understanding the questions or  realizing what it is that you're asking. The best way of doing this is to actually get people to  either visit you or you go and visit people

  4. 00:01:30 --> 00:02:03

    with the questionnaire, and you can do this remotely if  you wish; you can walk through the questionnaire on the screen. But it's a lot better to have people  face-to-face in this kind of situation. Certainly when I'm doing user research, being able to see  what people are doing with their faces or maybe just with their *posture* – you know – you can tell when  people are uncomfortable or confused a lot easier, I think. I mean, I recently read an article  that says you can do all that from voice alone, but I'm a bit skeptical, personally, having  sat next to people trying to struggle with a piece of user interface.

  5. 00:02:03 --> 00:02:31

    You can *tell* that they're struggling. It's not just the delay; it's the facial expression, and maybe a little *sigh* – that kind of thing. You might get the sighs online – maybe – if you have a really good microphone. But face-to-face does work a lot better for that kind of thing. Use "think-aloud" protocol. Don't overuse it. I fear that a lot of people believe that think-aloud protocol means that participants should be speaking all  the time – and that is counterproductive.

  6. 00:02:31 --> 00:03:00

    We know that speaking continuously is taxing mentally; so, you're actually detracting from ability of people to understand complicated questions or to think in  any kind of deep way about what they're looking at. So, use "think-aloud" to have participants say what  gives them pause for thought. So, I deliberately prime participants in research that I'm doing to say, "Let me know particularly if something is confusing to you or not the way that you think it should be, or anything that

  7. 00:03:00 --> 00:03:32

    basically makes you stop and think, you should mention to me." And if I notice that somebody is perhaps not thinking aloud as much as would be useful to me, then I do ask them to tell me, but I think it would be better to err on the side of not too much thinking  aloud than to have it as a continuous stream of words. And do, of course, talk to participants about  their responses, especially if they aren't what you were expecting; so, just confirm that they've understood the question and they're answering in a way that is consistent with your expectations.

  8. 00:03:32 --> 00:04:03

    But you could run the surveys with a small number of *proxies*. Proxies are people who aren't actually  in your intended audience, but they're near enough. So, these might be fellow workers – people  from another department who perhaps fit your   demographic in some way, but they aren't familiar  with the survey, or perhaps they aren't familiar with a particular product or service, and so you  can ask them about it as if they were new users – potentially. And just for the purposes of walking  through surveys, that is perfectly acceptable.

  9. 00:04:03 --> 00:04:38

    The next step is to do at least one *trial run*, so  with a very small number of real participants. And I would suggest that you try to aim for maybe 30. One of the main reasons for doing this is that once you launch a survey, most survey tools will not let you change it, which would actually – in some cases – it would be very convenient, especially if it's just a misspelling. But because *anything* that you change could  potentially alter the results in terms of participants' responses,

  10. 00:04:38 --> 00:05:00

    most simply freeze the survey until you're done; and if you need to launch it again, you need to start again. So, you want to make sure it's right before you launch it to your wide audience. So, try to recruit 25 or 30 test participants. Check the responses; see if they make sense to you. Look at the timing; you'll get timing from almost any online survey service.

  11. 00:05:00 --> 00:05:30

    You'll find out how long people spend; you'll actually get start time, finish time and duration, in my experience. And see if it makes sense. If you're getting people going through it really quickly, it means they're not very engaged;  so, you should ignore those; but if your average is considerably longer than you're expecting, then see if you can find some non-essential questions to drop out or expect to increase the incentive to your participants. And then, finally, get your colleagues to take the survey because  they may well answer in different ways

  12. 00:05:30 --> 00:06:03

    than you were expecting and that will just make sure  that you haven't got any technical issues. And if, like me, you typically ask people for their email  addresses so they can be entered into a prize draw of some sort, then you can easily identify your  colleagues in the test run because you know their email addresses – tell them to use your corporate  email address or your organization's email address. Otherwise, you can have just a quick test run  separate from the first in order just to see how these slightly different users react to your questionnaire.

  13. 00:06:03 --> 00:06:35

    Although your screening question should remove *most* unsuitable participants, there will still be the issue of engagement. And poor engagement means that people just aren't reading  the questions thoroughly or they're answering as quickly as they can. Straight-lining is just where you go just down one column with your mouse or pencil. And, in this particular case, it doesn't  make a great deal of sense because some of the questions required you to answer at the other end of the spectrum. So, "I would like being bit by a shark" – and you really ought to strongly disagree with that if you were paying any attention!

  14. 00:06:35 --> 00:07:00

    So, that's just a flippant or frivolous example, but you can get the general idea that if by changing the polarity of some of these things, you can make sure that anybody who is just straight-lining is going to be somewhat obvious. The other way of finding out about straight-lining is to calculate the standard deviation for each  participant in a group of questions;

  15. 00:07:00 --> 00:07:34

    and if their answers vary almost not at all, you  will get a zero – or very close to zero – standard deviation, and that should give you some pause for concern. You should avoid grids, but where they are necessary, that gives you a chance to actually  look at the data and say, "Well, these people aren't answering honestly, or they're not reading the questions." The other issue I mentioned much earlier on – the other *approach* I mentioned much earlier on – was to put *detractors* in, so questions   that you would expect people not to answer  because they're made up or frivolous in some way.

  16. 00:07:34 --> 00:08:03

    And, conversely, you can put in things which you  would expect people to answer quite frequently; and if they don't, then you have to again give  these people an increase in their *suspiciousness score* – you know – their laziness score goes up  so that you actually know that you should dismiss them from your results. Include responses that should have a high incidence and investigate outlying participants. And what I mean by "outlying"  is that you always get a spread of responses

  17. 00:08:03 --> 00:08:33

    and people who are in the sort of outer one standard  deviation or so are doing things somewhat differently to everyone else; so, you can just have a scan through those if they're seeming suspicious to you; you can then take them out. Look for also too many or too few answers, in general. You know – if you're giving people lots of options to choose and they choose everything, then that usually isn't right. Unhelpful open-ended responses are always extremely obvious, but you have to go through and read them;

  18. 00:08:33 --> 00:09:01

    so, if people are just putting nonsense, gibberish – and they'll put gibberish in even if you put in, of course, a minimum word count or letter count. You say that you want at least 10 words or 10 characters, and  they'll just rattle their fingers on the keyboard until they get there; so, those are quite easy to  spot, but it's time-consuming to do that. People with very quick completion time – that actually is  usually a dead giveaway if people – again – if you look at the spreadsheet download, if you can do that on your tool,

  19. 00:09:01 --> 00:09:33

    you would look at the durations and you would do an average and a standard deviation. People who are outside about three standard deviations are doing it really, really quickly or really, really slowly, and you may well not want to include either of those groups in your survey. Also, look for duplicated email addresses; although people who do this kind of semi-professionally tend to know that they shouldn't duplicate email addresses, but you might get people who are  just not that au fait; and occasionally, of course,  you will get people who do it twice for reasons  that are perfectly legitimate.

  20. 00:09:33 --> 00:10:05

    I've had people, when they've known that I was involved in a  survey – because we give contact details, typically –   write to me and say, "I did it twice because I  think I really just misunderstood it the first time through or realized halfway through the first  time that I misunderstood it, so I did it again." So, that will happen from time to time. That is moderately rare. And as I was mentioning earlier on, do report people whose data you have found to be unhelpful to you, partly because this will help the respondent service to improve their panel,

  21. 00:10:05 --> 00:10:35

    and also they should give you the cost of those respondents back, give you credit for those against your invoice for the service that they've provided. Finally, I wanted to mention a potentially very confusing topic. You will probably discover in most online survey tools the option to limit participants to one per IP address – "IP" standing for "internet protocol". And each one of those represents  a computer, a device, not a person.

  22. 00:10:35 --> 00:11:01

    So, the option that a lot of the tools offer you, which is to  limit participants to one response per IP address, only works if you assume that everyone  is using exactly one device. So, if you're anticipating that all your respondents are going  to have a cell phone – a smartphone, I should say – and they're going to fill it out on that and no one else is going to use their cell phone – the other person's smartphone – to do that,

  23. 00:11:01 --> 00:11:30

    then you're fine. But if there is – like there is in our house – a communal computer that we can all easily use – and we find it particularly useful for certain things – then the fact that I've filled out the survey and then someone else in my household wants to fill it out, they would be told that we'd already filled it out and that would be the end of it. And the same thing can happen at work and in other organizations. So, generally don't do this, unless you're really certain that  you want to limit one participant per device.

  24. 00:11:30 --> 00:12:00

    If, when you're looking at your results, you're  a little bit suspicious about something going on like similar email addresses or very  similar response times, then use a spreadsheet like Excel just to sort by IP address and see if you're getting multiple responses in quick succession from a single device, because that probably indicates that somebody is just gaming the questionnaire – they're just doing it as quickly  as possible to get further credits from the panel.

  25. 00:12:00 --> 00:12:09

    And that would show up as a number of very quick completions from the same address in, generally speaking, a short space of time.
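
The screening checks described above lend themselves to simple scripting. The sketch below (Python, with invented participant data; the function names are ours, not from any survey tool) shows the two statistical checks from the transcript: flagging straight-liners via a near-zero per-participant standard deviation, and flagging completion-time outliers relative to the group mean:

```python
import statistics

def straightliners(grid_answers, min_sd=0.25):
    """Flag participants whose answers to a grid of questions barely vary.

    grid_answers: dict of participant_id -> list of numeric Likert answers.
    A (near-)zero standard deviation suggests straight-lining.
    """
    return [pid for pid, answers in grid_answers.items()
            if statistics.pstdev(answers) < min_sd]

def duration_outliers(durations, n_sd=3):
    """Flag participants whose completion time is more than n_sd standard
    deviations from the mean -- suspiciously fast or suspiciously slow."""
    mean = statistics.mean(durations.values())
    sd = statistics.pstdev(durations.values())
    return [pid for pid, t in durations.items()
            if abs(t - mean) > n_sd * sd]

# Hypothetical survey data.
answers = {"p1": [4, 4, 4, 4, 4], "p2": [5, 2, 4, 1, 3]}
print(straightliners(answers))  # p1 never varies an answer

# With only a handful of pilot participants, a looser threshold is needed.
times = {"p1": 45, "p2": 300, "p3": 320, "p4": 310}
print(duration_outliers(times, n_sd=1))  # p1 rushed through
```

Note that with a small pilot sample the three-standard-deviation rule from the transcript flags almost nothing, so a tighter threshold (or simple eyeballing of the spreadsheet) is more practical until the full dataset arrives.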

Read our Ultimate Guide to learn how, when, and why to use surveys.

What role does games user research play in game accessibility?

In games user research, researchers use playtesting, surveys, and behavioral observation to identify accessibility barriers, such as those faced by players with visual or hearing impairments.

They use these insights to develop accessibility features that benefit all players, such as customizable text size and control schemes.

You should integrate accessibility early in game development, involve players with disabilities, and iterate based on their feedback.

Accessibility is a key consideration in inclusive design. Inclusive design is an approach to creating accessible products and experiences that are usable and understandable by as many people as possible. Katrin Suetterlin, UX Content Strategist, provides an overview of inclusive design:

Show Hide video transcript
  1. 00:00:00 --> 00:00:31

    Universal design is a design practice that is generally considered to be a design that has gone through all the design processes while hoping to take everyone into consideration. You just would have to design it in a way that everyone can use it. The *problem* with the approach of universal design is, though, that *you cannot really design for everyone*.

  2. 00:00:31 --> 00:01:02

    To design with 8 billion people in mind is strictly impossible: a design that is utopian or more academic or more a discourse subject rather than something that we can really and truly design. But you can do *inclusive design* as a part of design practices that strive for inclusion. The difference is that inclusive design is the umbrella above,

  3. 00:01:02 --> 00:01:31

    for example, accessible design because accessibility is an accommodation. It is checking boxes whether your design is truly accessible. And if it serves the purpose for your audiences. Inclusive design is the same umbrella. Universal design is more of a juxtaposition, but it's more battling against inclusive design because if you think of everyone, you think of no one.

  4. 00:01:31 --> 00:02:01

    And in this inclusive way, we can look towards architecture, we can look towards city planning and also how buildings are built in the past decades because they have taken aboard the inclusive approach. They do all the accommodations they can think of – for example, no stairs, ramps for those who have to use a wheelchair. Or it could be beneficial to people with a trolley or with something heavy that they have to carry.

  5. 00:02:01 --> 00:02:30

    So, thinking of this benefits everyone. And now imagine the internet being a public space, and how we can learn from city planning and architecture and interior design, that we are *accommodating those who need accommodation*. And that's what the inclusive approach is about. So, to sum this up, your research cannot be universal, because you cannot serve that, but it can be personalized and it can be also tailored to the needs of the audience you are serving.

Learn more in our course Accessibility: How to Design for All.

How to recruit participants for games user research studies?

Recruitment for games user research studies is a crucial task. Researchers identify and engage with individuals who represent their target audience. To successfully recruit participants, you should follow these steps:

  • Define the Target Audience: Age, experience, preferences.

  • Recruit Appropriate Users: Tap into social media, forums, and your existing user base.

  • Encourage Participation: Offer incentives such as game credits, merchandise, or money.

  • Create a Concise Survey: Screen participants efficiently.

  • Ensure Ethical Practices: Transparency, data use, and confidentiality.

  • Plan for Dropouts: Over-recruit for reliability.

  • Maintain Communication: Inform participants about progress.

  • Seek Feedback: Improve future recruitment efforts.
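
The "plan for dropouts" step is simple arithmetic: divide the number of usable sessions you need by the fraction of recruits you expect to keep. A small sketch (Python; the dropout rate is an assumption you estimate from past studies, and the function name is ours):

```python
import math

def participants_to_recruit(target_completes, dropout_rate):
    """How many participants to invite so that, after expected dropouts,
    you still reach your target number of completed sessions.

    dropout_rate: estimated fraction of recruits who won't show up or
    whose data you'll have to discard (e.g. 0.2 for 20%).
    """
    return math.ceil(target_completes / (1 - dropout_rate))

# E.g. for 10 usable playtest sessions with an assumed 20% dropout rate:
print(participants_to_recruit(10, 0.2))  # 13
```

Rounding up rather than to the nearest whole number errs on the side of over-recruiting, which is cheaper than re-running a session short of participants.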

William Hudson, User Experience Strategist, offers practical tips to ensure the effectiveness and efficiency of your recruitment process:

Show Hide video transcript
  1. 00:00:00 --> 00:00:32

    I wanted to say a bit more about this important issue of recruiting participants. The quality of the results hinges entirely on the quality of the participants. If you're asking participants to do things and they're not paying attention or they're simply skipping through as quickly as they can – which does happen – then you're going to be very disappointed with the results

  2. 00:00:32 --> 00:01:01

    and possibly simply have to write off the whole thing as an expensive waste of time. So, recruiting participants is a very important topic, but it's surprisingly difficult. Or, certainly, it can be. You have the idea that these people might want to help you improve your interactive solution – whatever it is; a website, an app, what have you – and lots of people *are* very motivated to do that. And you simply pay them a simple reward and everyone goes away quite happy.

  3. 00:01:01 --> 00:01:32

    But it's certainly true with *online research* that there are people who would simply take part in order to get the reward and do very little for it. And it comes as quite a shock, I'm afraid, if you're a trusting person, that this kind of thing happens. I was involved in a fairly good-sized study in the U.S. – a university, who I won't name – and we had as participants in a series of studies students, their parents and the staff of the university.

  4. 00:01:32 --> 00:02:05

    And, believe it or not, the students were the best behaved of the lot in terms of actually being conscientious in answering the questions or performing the tasks as required or as requested. Staff were possibly even the worst. And I think their attitude was "Well, you're already paying me, so why won't you just give me this extra money without me having to do much for it?" I really don't understand the background to that particular issue.

  5. 00:02:05 --> 00:02:32

    And the parents, I'm afraid, were not a great deal better. So, we had to throw away a fair amount of data. Now, when I say "a fair amount", throwing away 10% of your data is probably pretty extreme. Certainly, 5% you might want to plan for. But the kinds of things that these participants get up to – particularly if you're talking about online panels, and you'll often come across panels if you go to the tool provider, if you're using, say for example, a card-sorting tool

  6. 00:02:32 --> 00:03:03

    or a first-click test tool and they offer you respondents for a price each, then be aware that those respondents have signed up for this purpose, for the purpose of doing studies and getting some kind of reward. And some of them are a little bit what you might call on the cynical side. They do as little as possible. We've even on card sort studies had people log in, do nothing for half an hour and then log out and claim that they had done the study.

  7. 00:03:03 --> 00:03:31

    So, it can be as vexing as that, I'm afraid. So, the kinds of things that people get up to: They do the minimum necessary; that was the scenario I was just describing. They can answer questions in a survey without reading them. So, they would do what's called *straightlining*. Straightlining is where they are effectively just answering every question the same in a straight line down the page or down the screen. And they also could attempt to perform tasks without understanding them.

  8. 00:03:31 --> 00:04:04

    So, if you're doing a first-click test and you ask them, "Go and find this particular piece of apparel, where would you click first?", they'd just click. They're not reading it; they didn't really read the question. They're not looking at the design mockup being offered; they're just clicking, so as to get credit for doing this. Like I say, I don't want to paint all respondents with this rather black brush, but it's *some* people do this. And we just have to work out how to keep those people from polluting our results. So, the reward is sometimes the issue, that if you are too generous in the reward

  9. 00:04:04 --> 00:04:30

    that you're offering, you will attract the wrong kind of participant. Certainly I've seen that happen within organizations doing studies on intranets, where somebody decided to give away a rather expensive piece of equipment at the time: a DVD reader, which was – when this happened – quite a valuable thing to have. And the quality of the results plummeted. Happily, it was something where we could actually look at the quality of the results and

  10. 00:04:30 --> 00:05:01

    simply filter out those people who really hadn't been paying much attention to what they were supposed to be doing. So, like I say, you can expect for online studies to discard between 5 and 10% of your participants' results. You also – if you're doing face-to-face research – and you're trying to do quantitative sorts of numbers, say, you'd be having 20 or 30 participants, you probably won't have a figure quite as bad as that, but I still have seen, even in face-to-face card sorts, for example,

  11. 00:05:01 --> 00:05:33

    people literally didn't *understand* what they were supposed to be doing, or didn't get what they were supposed to be doing, and consequently their results were not terribly useful. So, you're not going to get away with 100% valuable participation, I'm afraid. And so, I'm going to call these people who aren't doing it, and some of them are not doing it because they don't understand, but the vast majority are not doing it because they don't want to spend the time or the effort; I'm going to call them *failing participants*. And the thing is, we actually need to be able to *find* them in the data and take them out.

  12. 00:05:33 --> 00:06:01

    You have to be careful how you select participants, how you filter them and how you actually measure the quality of their output, as it were. And one of the big sources of useful information are the actual tools that you are using. In an online survey, you can see how long people have spent, you can see how many questions they have answered. And, similarly, with first-click testing, you can see how many of the tasks they completed; you can see how long they spent doing it.

  13. 00:06:01 --> 00:06:30

    And with some of these, we actually can also see how successful they were. In both of the early-design testing methods – card sorting and first-click testing – we are allowed to nominate "correct" answers – which is, I keep using the term in double-quotes here because there are no actually correct answers in surveys, for example; so, I'm using "correct" in a particular way: "Correct" is what we think they should be doing when they're doing a card sort, *approximately*, or, in particular, when they're doing a *first-click test*,

  14. 00:06:30 --> 00:07:03

    that we think they ought to be clicking around about here. Surveys as a group are a completely different kettle of fish, as it were. There are really no correct answers when you start. You've got your list of research questions – things that you want to *know* – but what you need to do is to incorporate questions and answers in such a way that you can check that people are indeed *paying attention* and *answering consistently*. So, you might for example change the wording of a question and reintroduce it later on

  15. 00:07:03 --> 00:07:33

    to see if you get the same answer. The idea is to be able to get a score for each participant. And the score is your own score, about basically how much you trust them or maybe the *inverse* of how much you trust them. So, as the score goes up, your trust goes down. So, if these people keep doing inconsistent or confusing things, like replying to questions with answers that aren't actually real answers – you've made them up – or not answering two questions which are effectively the same the same way, etc.,

  16. 00:07:33 --> 00:08:02

    then you would get to a point where you'd say, "Well, I just don't trust this participant," and you would yank their data from your results. Happily, most of these tools do make it easy for you to yank individual results. So, we have to design the studies to *find* these failing participants. And, as I say, for some these tools – online tools we'll be using – that is relatively straightforward, but tedious. But with surveys, in particular, you are going to have to put quite a bit of effort into that kind of research.

  17. 00:08:02 --> 00:08:32

    Steps we can take in particular: Provide consistency checks between tasks or questions. To catch "straightlined" results – where people are always answering in the same place on each and every question down the page – ask the same question again in slightly different wording or with the answers in a different order. Now, I wouldn't go around changing the order of answers on a regular basis. You might have one part of the questionnaire where "good" is on the right and "bad" is on the left;

  18. 00:08:32 --> 00:09:00

    and you might decide to change it in a completely different part of the questionnaire and make it really obvious that you've changed it to those who are paying attention. But whatever it is that you do, what you're *trying* to do is to find people who really aren't paying much attention to the directions on the survey or whatever the research tool is, and catch them out and pull them out of your results. And one of the issues you should be aware of if you're paying for participants from something

  19. 00:09:00 --> 00:09:30

    like your research tool *supplier* is that you can go back to them and say, "These people did not do a very good job of completing this survey, this study." And ask them to refund you for the cost of those. You tell them that you're having to pull their data out of your results. Also, it helps to tidy up their respondent pool. Perhaps it's not your particular concern, but if you do end up using them again, it would be nice to know that some of these people who are simply gaming the system have been removed from the respondent pool.

  20. 00:09:30 --> 00:09:45

    So, reporting them – getting them removed from the pool – is a sensible thing to be doing. And, finally, devising a scoring system to check the consistency and also checking for fake responses and people who are just not basically doing the research as you need them to do it.
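
The scoring system the transcript describes – re-asking a question with reversed polarity and counting inconsistencies – can be sketched in a few lines. This is an illustrative Python snippet with invented question IDs and our own `suspicion_score` name, not a feature of any particular survey tool:

```python
def suspicion_score(answers, reversed_pairs, scale_max=5):
    """Score one participant's answers for internal consistency.

    answers: dict of question_id -> numeric answer on a 1..scale_max scale.
    reversed_pairs: list of (q, q_reversed) where q_reversed re-asks q with
    opposite polarity, so a consistent participant's two answers satisfy
    answers[q] + answers[q_reversed] ~= scale_max + 1.
    Returns a count of inconsistent pairs: the higher the score,
    the less you should trust this participant's data.
    """
    score = 0
    for q, q_rev in reversed_pairs:
        expected = (scale_max + 1) - answers[q_rev]
        if abs(answers[q] - expected) > 1:  # allow one point of slack
            score += 1
    return score

# Hypothetical data: Q1r and Q2r are reversed-polarity rephrasings.
pairs = [("Q1", "Q1r"), ("Q2", "Q2r")]
careful = {"Q1": 5, "Q1r": 1, "Q2": 2, "Q2r": 4}
lazy = {"Q1": 5, "Q1r": 5, "Q2": 5, "Q2r": 5}  # straight-liner
print(suspicion_score(careful, pairs))  # 0
print(suspicion_score(lazy, pairs))  # 2
```

Participants whose score crosses a threshold you set in advance are the ones whose data you would "yank" from the results and, where a panel provider is involved, report for a refund.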

Learn more about qualitative user research in User Research – Methods and Best Practices.

Enroll in Data-Driven Design: Quantitative Research for UX for more on quantitative user research methods.

What ethical considerations are there in games user research?

Games user researchers must prioritize ethical considerations to ensure respect for players. They must maintain integrity in research practices and responsibly use data. Key ethical considerations include:

  • Consent and Privacy: Obtain and respect player consent and privacy.

  • Transparency and Honesty: Ensure openness and honesty in research practices.

  • Data Security: Safeguard data confidentiality and security rigorously.

  • Avoid Bias: Strive to eliminate biases in research.

  • Minimize Harm: Design research to minimize participant harm.

  • Beneficence and Respect: Respect participant autonomy and dignity, and ensure the research benefits participants.

Some games feature violence and other potentially distressing elements. Ann Blandford, Professor of Human-Computer Interaction at University College London, discusses how she asks research participants about emotionally charged and critical incidents:

Show Hide video transcript
  1. 00:00:00 --> 00:00:32

    Ditte Hvas Mortensen: In interviews, you often want to ask people about their past experiences or how they normally do something. But what are good ways of getting them to recollect correctly? Ann Blandford: People are typically not good at recollecting where you give an abstract question.

  2. 00:00:32 --> 00:01:04

    So, I'm going to use a hopefully very, very familiar example for pretty much everybody, which is *going shopping*. If you ask somebody how they go shopping, they'd probably give you a very short answer. You know, for grocery shopping – they'd probably give a very short answer like, you know: "Well I write a list. And then, I go around the shop and I pick things up and I put them in my basket. And then, I go to the checkout." If you ask them to describe exactly what they did the *last time they went shopping*

  3. 00:01:04 --> 00:01:32

    – the *most recent* shopping trip they did – they'll add a whole pile of other details, like, you know, that they got to the bread section and remembered that they hadn't picked up any bananas or whatever. And then, they discovered that they'd planned to cook some meal that had five ingredients, but they couldn't find the fourth ingredient in the shop, so then they had to rethink what it was they were going to cook.

  4. 00:01:32 --> 00:02:05

    And so, they put some other ingredients back, and then – you know. And so, you get a whole load of details that way that show that it's not such a linear process as we would naturally describe it if you just asked somebody a general question about going shopping. And shopping is a very mundane activity that most of us have to do one or more times every week. So, a lot of the activities are kind of things that we take for granted.

  5. 00:02:05 --> 00:02:37

    Therefore, asking people about a very specific instance of it is likely to give you a whole set of details that people wouldn't mention at all if you just asked them about the generalization. So, asking about specific instances – either the most recent or one that's memorable for some other reason often gets a lot of information that you might not have access to otherwise.

  6. 00:02:37 --> 00:03:04

    And the other way to get people to remember things is invoking incidents that are particularly memorable for one reason or another, things that people remember because they had some high emotional content. And often for technology design, they are the *big* things that happen, the *memorable* things that happen.

  7. 00:03:04 --> 00:03:36

    So, a while back, I did a study of how people use diaries and the different kinds of technologies that they used for managing their diary and their to-do lists. And I asked people about critical incidents that had happened in diary management. And people told me things like putting an event in the wrong year so that they *missed* an event because they'd accidentally put it in last year rather than this year.

  8. 00:03:36 --> 00:04:02

    And that caused embarrassment. And then, you could start to explore what it was about the design of the diary that had made that kind of error so easy for people to make. One participant told me about how he had left his digital – his PDA, his organizer – in his back pocket when he'd put it in the wash. That actually didn't tell me much about the interaction design.

  9. 00:04:02 --> 00:04:32

    But it was the thing that *he* remembered as being a particularly critical incident in relation to his PDA. And I guess what it did highlight was the challenges of having a backup and of reconstructing the events that had been in that PDA before it went for its swim in the washing machine. So, those kinds of things can trigger memories, and, you know.

  10. 00:04:32 --> 00:05:02

    You don't know what's going to come out from those, but often they're interesting – they're valuable. They tell you about things that didn't quite work in a design. Another example was when we did a study of ambulance control. We asked people about critical incidents. This is again going back a little way. And some people remembered really *major* incidents where *everybody* would know – you know;

  11. 00:05:02 --> 00:05:34

    it hit the national press, and lots of people would have been aware of that incident. But the controllers could talk about their own personal role and what information they needed and how they got hold of that information – you know: how they used the displays; how they worked together; how the technology supported them in making decisions and, indeed, sometimes how it *didn't* support them in making those decisions. One or two participants told us much more *personally pertinent* incidents

  12. 00:05:34 --> 00:06:03

    of, you know, a particular call that had come in that had had an emotional impact on them – like when a small baby was involved in an incident, you know, in some kind of medical emergency. And they clearly engaged a lot with that, and so could again talk about it in a lot of detail *because* it had an emotional significance for them – that made it more memorable and hence made their memories a bit more reliable.

  13. 00:06:03 --> 00:06:30

    So, critical incidents are often kind of negative in one way or another – and it's much more common that they're negative than that they're positive. And, of course, it's nice to find out about *positive* incidents as well because they tell you about *good design features* rather than just things that didn't work well. But, of course, HCI is often about trying to *improve* design. And so, it *is* about finding out about the things that don't work well for a system

  14. 00:06:30 --> 00:07:03

    and how they would support people through the more challenging situations that they encounter. So, I really start by asking about a *memorable* incident rather than an emotionally charged or challenging incident. And how they define "memorable" is really up to them. But then I will try to follow through with questions that enable *me* to end up feeling like I understand the course of the incident and the consequences,

  15. 00:07:03 --> 00:07:36

    particularly in relation to the use of technology in technology design and how technology might have helped better. So, I suppose I'm working with my agenda of what I'm trying to get out of it. So, I'm not really seeking to understand the depths of the emotions that they went through – because that has *ethical* implications, quite apart from anything else, but it's also *not* what's really generally most relevant for thinking about technology design.

  16. 00:07:36 --> 00:07:48

    So, I'm kind of keeping my interview script and what I'm trying to get out of the interview in mind the whole time, but the questions are free-form and situated.

To learn more, watch the Master Class Webinar Ethics In Design: A Practical Guide from Guthrie Weinschenk, COO of The Team W, Inc.

What is the role of psychology in games user research?

Games user research (GUR) draws on key insights from psychology about player behavior and motivation. Researchers use these insights to enhance gaming experiences.

Researchers focus on player psychology in GUR to improve game design and engagement. They understand player motivation through theories like flow theory, which describes a state of mind in which a person is completely absorbed in and focused on an activity.

You can use psychology to create intuitive interfaces and satisfying game mechanics. To do this, you must study how players behave and how they learn.

GUR includes usability testing, where researchers analyze player interactions to refine game mechanics and improve the player experience.

Psychology in GUR leads to more engaging, rewarding games, offering players more profound, meaningful experiences.
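
The flow-theory idea above can be made concrete in code. The sketch below is a toy model, not anything from the sources cited here: the 0–1 scales, the band width, and the step size are all invented for illustration. It shows how a dynamic-difficulty system might keep the challenge a game presents inside a "flow channel" around the player's skill:

```python
# Toy illustration of flow theory: keep challenge within a band around the
# player's skill, so the player stays absorbed rather than anxious (too hard)
# or bored (too easy). All numbers here are assumptions for the sketch.

def flow_state(skill: float, challenge: float, band: float = 0.2) -> str:
    """Classify a skill/challenge pair (both in [0, 1])."""
    if challenge > skill + band:
        return "anxiety"   # too hard for the player's current skill
    if challenge < skill - band:
        return "boredom"   # too easy
    return "flow"          # inside the flow channel

def adjust_challenge(skill: float, challenge: float,
                     band: float = 0.2, step: float = 0.1) -> float:
    """Nudge difficulty back toward the flow channel."""
    state = flow_state(skill, challenge, band)
    if state == "anxiety":
        return max(0.0, challenge - step)
    if state == "boredom":
        return min(1.0, challenge + step)
    return challenge

# A mid-skill player facing a very hard section gets an easier next encounter.
print(flow_state(0.5, 0.9))        # anxiety
print(adjust_challenge(0.5, 0.9))  # roughly 0.8
```

Real games estimate "skill" indirectly (deaths, completion times, accuracy) and tune far more than one dial, but the same keep-them-in-the-channel logic underlies many dynamic-difficulty systems.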

Alan Dix, Professor and Expert in Human-Computer Interaction, teaches you how psychological principles can enhance player engagement:

Video transcript:
  1. 00:00:00 --> 00:00:31

    I'm going to talk to you now about what I call peak experience. I should explain this is my phrase. So you'll find this... Perhaps if other people talk about it they'll use different words for it. And by peak experience, I don't mean like the most fantastic experience that anybody has ever had in their lives. What I mean is that each of us have different things that for us

  2. 00:00:31 --> 00:01:02

    are the best thing for that moment from the best food to the best film. What I'm going to do is imagine you've got a group of children who are coming for tea. Now, this is going to vary a little bit from country to country. You can fill in the equivalent things for yourself, but I'm going to think about what this might be like in Britain. So I've got the children coming to tea. And so perhaps what I do is I give them tinned [baked] beans. Now, you may or may not like tinned beans and you will get the odd child who really dislikes it.

  3. 00:01:02 --> 00:01:30

    But on the whole most children are happy with baked beans. It works for them. But then afterwards you give them a pound [GBP £1] each perhaps and you go to the local sweet shop. And let's forget about our teeth and what makes us fat. We're just going to give them sweets this time. So you give them a pound each, they go to the sweetshop. Do they all buy the same thing? Well, of course not. They'll all buy something different.

  4. 00:01:30 --> 00:02:04

    The baked beans are good enough for everybody. It probably wasn't any child's favorite food, but it was good enough. For most of them, it wasn't the worst thing. But imagine the chocolate bar that is okay for every child. Maybe the thing you'd buy to take in the car with you is the share pack? Maybe you’d get it for that. But if you give the child a pound each, they will each choose the chocolate bar which is their absolute favorite. The one that is good enough, nobody buys.

  5. 00:02:04 --> 00:02:34

    And that's true of children, but it's also true perhaps of a wedding reception. You'll choose a food where everybody is going to have the same thing, or maybe two choices. So you don't choose the most interesting, exciting food. You often choose something that's a bit bland because then you know that everybody will be happy with it, even if it's not the best thing for them. If you're designing for everybody, you end up with something

  6. 00:02:34 --> 00:03:03

    that is second best for each one of them. Now, sometimes you're designing for that thing that's good enough for everybody. Like if you’re getting an office product that everybody in the office has to use. However, sometimes particularly when an individual’s using it, you need the thing that they want, not the thing everybody's happy with. So baked bean design – this is the good-enough design –

  7. 00:03:03 --> 00:03:33

    is when others choose for us and it's choosing for lots of people at the same time, when we have to share it; that's the sensible thing to do. But when we have to choose something for ourselves, then it has to be the best. So think about *game design*. When you're designing a game, a good enough game is good for no one. So in certain sorts of web services, in certain sorts of entertainment projects, what you want is that thing that some people are going to get absolutely excited about.

  8. 00:03:33 --> 00:04:02

    And actually you don't care if there are other people that hate it so long as enough people think that it's the best thing. This is crucially important because if you design for that average, when individuals are choosing, you will always be wrong. You've heard about this happening when you have perhaps the Olympics or some other major sporting event. And the papers all complain because the selectors

  9. 00:04:02 --> 00:04:30

    chose the person for the national team who's a bit variable in their performance. They're not very reliable. But the reliable person who isn't necessarily the best – the one who always comes fourth – never wins. Sometimes it's worth having somebody who's a bit unreliable, who will sometimes be at the back of the pack, because they're also the person who might sometimes be at the front. So for a race, you get nothing for being in the middle;

  10. 00:04:30 --> 00:05:01

    you only get something for being one of the top few places. It changes the strategy for choice. So if you think about the sort of quality of experience and different users, the good-enough product has got a sort of “okay for everyone.” But if you have a peak product, one that some people really like, those people will like it a lot better than the average product, even if everybody else thinks it's worse. And of course, if there are sufficient of these peak products, one of them will always beat your good-enough product.

  11. 00:05:01 --> 00:05:31

    It's a different design strategy. So for... I was going to say traditional interface design. For the kind of interface design which is about these average products. Then when you're looking, you can look at that user profile. What's a typical user? What's the range of users? You might go for the central persona, the person who gives you a sort of average feel for the whole group. So you're after something that's average, typical.

  12. 00:05:31 --> 00:06:02

    Your methods of doing this are going to be more process-driven, more using heuristics. You're going to go from a need to a solution and the need wants to be one that a lot of people have. When you design for peak experience, you do this differently. You are after individual users. Niches. Sometimes extreme personas that capture some small part of the market. But what you want to do is capture that part of the market really, really well.

  13. 00:06:02 --> 00:06:30

    You might go for the specific or the eclectic design rather than that sort of average one. You're often after ideas and inspiration rather than methods and processes. And often too, you start off with the concept, the idea. In fact, some of the most successful products were originally designed by one person for one purpose, sometimes for a single user who wasn't them.

  14. 00:06:30 --> 00:07:02

    Sometimes even for themselves; the opposite of what I would normally tell you to do when looking at design, and because of that, they’re excellent for that person and therefore a win for that group when otherwise they would fail. So when do you look for peak performance? It's when there's individual choice and when user experience is central. And you can think of this as a long-tail phenomenon, as in you want many applications for small groups

  15. 00:07:02 --> 00:07:09

    as opposed to the single application that works for everybody. And when that's the case, this is when you're trying to design this way.

Celia Hodent, Ph.D., Game UX Strategist and author of The Gamer's Brain, teaches the cognitive science and psychology that support developing engaging video games in her Master Class, How to Design Engaging Products: Insights from Fortnite's UX.

How to create effective user personas for game design?

User personas for game design represent various player types. These personas focus design efforts on a relatable set of characters and avoid the complexity of accounting for endless player possibilities.

To develop user personas: 

  1. Gather user data through field studies, interviews, and observations for rich behavioral insights.

  2. Analyze data to form main player archetypes.

  3. Create detailed, engaging personas for each archetype.

  4. Define goals and scenarios for each persona.

  5. Reference personas in shaping game design elements.

  6. Validate design choices and refine them based on feedback.
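
Steps 2 and 3 above can be sketched computationally. The example below is purely illustrative: the motivation dimensions, ratings, and archetype groupings are all invented. It groups toy player-research responses into archetypes with a minimal k-means clustering pass:

```python
# Hypothetical sketch: cluster player responses (rated 1-5 on a few
# motivation dimensions) into archetypes that personas can be built from.
from math import dist

# Each respondent rated: (challenge, story, social play)
responses = [
    (5, 1, 2), (4, 2, 1), (5, 2, 2),   # challenge-driven players
    (1, 5, 1), (2, 4, 2), (1, 5, 2),   # story-driven players
    (2, 1, 5), (1, 2, 4), (2, 2, 5),   # socially-driven players
]

def kmeans(points, k, iterations=20):
    # Deterministic toy init: k points spaced through the list.
    # A real implementation would use k-means++ or random restarts.
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its members.
        centroids = [
            tuple(sum(c) / len(members) for c in zip(*members))
            if members else centroids[i]
            for i, members in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(responses, k=3)
for centroid, members in zip(centroids, clusters):
    print(centroid, members)   # one archetype per cluster
```

In practice the clusters only seed the personas; the qualitative detail (goals, scenarios, quotes) comes from the interviews and observations behind the numbers.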

Frank Spillers, CEO of Experience Dynamics, teaches you how personas can guide design decisions:

Video transcript:
  1. 00:00:00 --> 00:00:33

    So, you've gone out, you've talked to your users. You've done a field study, basically interviews, user interviews and observations of their tasks, of their environment. And what do you do with that? Personas and journey maps. So, I just wanted to talk a little bit about personas in particular. Journey maps are also important, but I'm just going to cover personas. Where you get your data from is hugely important. Most people just use surveys.

  2. 00:00:33 --> 00:01:03

    Some people use focus groups. Those are both market research techniques. They're not appropriate for UX. Why? Because in UX we're looking at *behavior*, not at opinion. And focus groups and surveys do a lot of opinion elicitation. They're fine for marketing, but they really have no place in UX. So, if you do market research and just use like a segmentation approach or market research approach, you're going to end up with a different type of persona.

  3. 00:01:03 --> 00:01:33

    So, if you do field studies, you'll end up observing their behavior and then creating behavioral profiles or role-based personas, and you'll understand better the context and the conditions under which your mobile app or device mobile content is being consumed. And that's super important. This is an example of a persona, and you can adjust a persona. But basically you have a scenario. I've started writing my personas in first person.

  4. 00:01:33 --> 00:02:01

    So, literally the notes I take, I'll build the persona based on actually what the user said. And it's about 98% verbatim, like what they said with a few corrections of – you know – grammar and stuff like that. I won't change the meaning. I won't remove the words from their mouth and rephrase them in my head. I won't alter or fabricate them. And it's a technique that I learned from Whitney Quesenbery, a UX consultant

  5. 00:02:01 --> 00:02:34

    who's written a book about storytelling and personas in particular, because personas are really about *stories*; they're really about telling stories about that context. Try it right now. So, think about the personas that might, the roles that people, the problems they might solve with your app. And in a narrative kind of style, write one or two sentences on their background to kind of set the context; write two or three more sentences on the problems that they're trying to solve, like the tasks, and two or three sentences on the call to action,

  6. 00:02:34 --> 00:03:00

    like what they need, what their desires are or how the design can basically help them get to their task, how can it empower them? How you can meet their needs, their goals, their tasks, their sub-tasks? If you have a lot of questions, it's probably because you need to spend more time with your users. Once you start hanging out with users doing field studies, you get to know what they need and the context of use becomes very apparent.

  7. 00:03:00 --> 00:03:30

    It's one of the most enlightening things I think about personas. But also you understand their *distractions*, their deeper kind of emotional and social context. You understand what multi-tasking they might be doing and what problem solving or the different states, the varied states that they might use, kind of usage scenarios, if you will, but in different states. So, to create a role-based persona, identify the personas based on your data, so the themes that come out, the different hats that users are trying to wear.

  8. 00:03:30 --> 00:04:03

    And then give them a name, like not just like Sally or Susan or Jim or John or something. But give them an associating adjective such as Support-Seeker Sally or Finance Fiona, I think was that other one there. Okay, so continuing with persona development, think about these things: *environmental impact*, so the context of use; *cognitive impact*, the time, for example, the time pressure or stress that they might be under.

  9. 00:04:03 --> 00:04:35

    Think of the *social impact* – the triggers that might lead to that such as another user or somebody else online or started a chat or tripped off – I don't know – a support call or whatever. And finally the *behavioral impact* – so, for example, the roles that not just the user has but the roles that the user must assume to complete the task. There's also *mental models*. So, mental models being these past expectations

  10. 00:04:35 --> 00:05:02

    that users bring to their designs; they basically influence how a user problem solves. And think about like the mental model for the restaurant experience – you know – what does that look like? For example, you enter the restaurant; there might be waiting, you might have to check in, you may have to order ahead, you might have a pickup situation. Some restaurants have made a separate place for the food delivery people to go

  11. 00:05:02 --> 00:05:33

    so that they're not standing there waiting with the diners that are coming to the restaurant for the "live food". You can conduct *task analysis*, which is asking the user to do the task. You know – you can accompany them, a doctor's visit, restaurant visit, whatever it might be. You can accompany the user and look for the language they use, the way they talk about it, look at the physical, social environment, look at their functional needs and priorities and maybe even the cognitive demands that they have.

  12. 00:05:33 --> 00:06:03

    So, for example, they have to figure something out; they have to fill in a form. They have to submit this. There's a pressure to answer these questions or to know something. Look at the task space, how they're making sense of things, how they're deciding or solving problems for themselves. Remember, we don't just have goals and tasks, but we have *sub-tasks* as well. So, we need all three of those things, not just goals. It's so important that we make sure that we drill deeper down

  13. 00:06:03 --> 00:06:19

    into our users' behavior so we fully understand what it is and bring that out. So, doing that, though, is so well worth it and makes you a much smarter designer and a much smarter design team.

Learn How To Create Actionable Personas in this Master Class from Daniel Rosenberg, UX Professor, Designer, Executive and Early Innovator in HCI.

What are the career paths in games user research?

Career paths in games user research (GUR) are diverse and dynamic. A career in GUR will offer you exciting opportunities if you are passionate about combining the science of user experience with the art of game design. Key career paths in GUR include:

  • User Research Analysts conduct user research and analyze player behavior and feedback. They provide insights to improve game design and player experience.

  • User Experience (UX) Designers for Games focus on designing user interfaces and creating seamless user experiences. They ensure the game is intuitive and enjoyable.

  • Game Data Analysts analyze in-game data to understand player behavior, preferences, and trends. This analysis informs game development and marketing strategies.

  • Playtest Coordinators organize playtesting sessions, gather feedback, and ensure the game meets user expectations.

  • Accessibility Specialists ensure that people with disabilities can play video games. They help to make game design more inclusive.
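
As a concrete (and entirely hypothetical) illustration of the kind of work a game data analyst does, the sketch below computes per-level completion rates from a raw event log. The event schema and player IDs are invented for the example:

```python
# Hypothetical sketch: derive level completion rates from in-game telemetry.
from collections import Counter

events = [
    {"player": "p1", "event": "level_start",    "level": 1},
    {"player": "p1", "event": "level_complete", "level": 1},
    {"player": "p2", "event": "level_start",    "level": 1},
    {"player": "p2", "event": "level_complete", "level": 1},
    {"player": "p3", "event": "level_start",    "level": 1},
    {"player": "p1", "event": "level_start",    "level": 2},
]

def completion_rates(events):
    starts, completes = Counter(), Counter()
    for e in events:
        if e["event"] == "level_start":
            starts[e["level"]] += 1
        elif e["event"] == "level_complete":
            completes[e["level"]] += 1
    # Completion rate = completions / starts, for each level that was started.
    return {lvl: completes[lvl] / n for lvl, n in starts.items()}

rates = completion_rates(events)
print(rates)   # level 1: 2 of 3 starts completed; level 2: 0 of 1
```

A sharp drop in completion rate at one level is a classic signal to hand back to designers and user researchers: the funnel says *where* players stop, and playtesting explains *why*.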

Elena Chapman, Accessibility Research Manager at Fable, explains what accessibility means and why it is crucial to create games that are inclusive for all players:

Video transcript:
  1. 00:00:00 --> 00:00:30

    Accessibility ensures that digital products, websites, applications, services and other interactive interfaces are designed and developed to be easy to use and understand by people with disabilities. There are 1.85 billion folks around the world who live with a disability – or might live with more than one – and are navigating the world through assistive technology or other augmentations that assist with their interactions with the world around them. That means folks who live with disability, but also their caretakers,

  2. 00:00:30 --> 00:01:01

    their loved ones, their friends. All of this relates to the purchasing power of this community. Disability isn't a stagnant thing. We all have our life cycle. As you age, things change, your eyesight adjusts. All of these relate to disability. Designing for accessibility is also designing for your future self. People with disabilities want beautiful designs as well. They want a slick interface. They want it to be smooth and an enjoyable experience. And so if you feel like

  3. 00:01:01 --> 00:01:30

    your design has gotten worse after you've included accessibility, it's time to start actually iterating and think, How do I actually make this an enjoyable interface to interact with while also making sure it sets expectations and actually gives people the amount of information they need, in a way that they can digest it, just as everyone else wants to digest that information? For screen reader users, a lot of it boils down to making sure you're always labeling

  4. 00:01:30 --> 00:02:02

    your interactive elements, whether it be buttons, links, slider components. Just making sure that you're giving enough information that people know how to interact with your website, with your design, with whatever that interaction looks like. Also, dark mode is something that came out of this community. So if you're someone who leverages that quite frequently. Font is a huge kind of aspect to think about in your design. A thin font that meets color contrast

  5. 00:02:02 --> 00:02:20

    can still be a really poor readability experience because of that pixelation aspect or because of how your eye actually perceives the text. What are some tangible things you can start doing to help this user group? Create inclusive and user-friendly experiences for all individuals.

Learn more about accessibility in Elena’s Master Class Introduction to Digital Accessibility.

How to use games user research in virtual reality (VR) game development?

Developers employ games user research (GUR) to enhance the player experience when building virtual reality (VR) games. Researchers use methods such as playtesting and biometric analysis to understand player interactions in VR.

Researchers observe player behavior and reactions in VR playtesting to identify usability issues and gauge engagement. They combine surveys and interviews with biometric tools like eye tracking. Together, these methods help researchers understand player emotions.

Researchers apply GUR methods in VR to make informed decisions that improve user interfaces and game immersion. Insights from playtesting and biometric analysis help them to:

  • Refine game mechanics.

  • Enhance player emotional engagement.
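
To make the biometric side concrete, here is a minimal sketch of turning raw eye-tracking samples into dwell time per on-screen UI region. Everything here is an assumption for illustration: the region names and coordinates, the sample data, and the fixed 16 ms sampling interval (real trackers report their own timestamps and rates):

```python
# Illustrative sketch: total gaze dwell time per UI region, from
# timestamped (t_ms, x, y) eye-tracking samples. All values invented.

# Regions as (name, x0, y0, x1, y1) in screen pixels.
regions = [
    ("health_bar", 0, 0, 200, 50),
    ("minimap",    1700, 0, 1920, 220),
]

# Gaze samples at an assumed even 16 ms interval.
samples = [
    (0, 50, 25), (16, 60, 30), (32, 1800, 100),
    (48, 1750, 50), (64, 960, 540),
]

def dwell_time(samples, regions, sample_ms=16):
    totals = {name: 0 for name, *_ in regions}
    for _, gx, gy in samples:
        for name, x0, y0, x1, y1 in regions:
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                totals[name] += sample_ms   # credit one sample interval
    return totals

print(dwell_time(samples, regions))   # ms spent on each region
```

Aggregated over a playtest, numbers like these show whether players ever look at the HUD element the design assumes they will, which is exactly the kind of evidence that feeds the interface refinements described above.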

In this video, researcher and professor Mel Slater explores design considerations for VR. He provides valuable context for applying games user research in VR game development:

Video transcript:
  1. 00:00:00 --> 00:00:33

    So, I think the whole domain of 2D user interfaces and 2D design is really well understood, but 3D design or design for virtual reality is not very well understood. So, a very basic thing is, 'How do I get from A to B? How do I move through the environment?' So, people say, 'Okay, it's very easy; you get the controller and you press a button on the controller and it moves you forward in the direction you're pointing or the direction you're looking.'

  2. 00:00:33 --> 00:01:03

    But if you do that, you get sick. So, okay – then you need a treadmill so that you really walk; the treadmill is expensive and it takes a lot of space and just people are not going to do it. And then there's another idea about, for walking – walking in place. So, you simulate walking by just walking in place, the system recognizes that, and it moves you in the direction. That's better, certainly better than pointing and clicking.

  3. 00:01:03 --> 00:01:31

    ... There's another paradigm, where you just point at an area, you click, and you are there, but then you lose complete orientation about where you are in the environment. So, a very basic thing like moving through an environment – a virtual environment which is bigger than the real space you're in, where you can't really walk (inaudible) – this is unsolved; it's still there after 30 years. So, the most important thing in terms of design in virtual reality, as I said,

  4. 00:01:31 --> 00:02:02

    is this new media and our natural inclination is to use it to simulate things that happen in the real world. And that's important. Like if you're learning to use some complex machinery which is dangerous, learn it first of all in virtual reality – like with flight simulators, for example – and then you do it in physical reality. ... That's important, but virtual reality is much more than that. This is an example of something you can't really do in physical reality;

  5. 00:02:02 --> 00:02:32

    you can only really do it in virtual reality. And this has always been my interest in virtual reality, which is how you can use it to push the boundaries and get experiences that you just can't have in physical reality but which nevertheless have a positive outcome. Many examples have shown that when you change the body, you also change the mind. So, one example is racial bias. When you're embodied in the body of a minority group or a discriminated-against group,

  6. 00:02:32 --> 00:03:00

    it reduces your bias against that group; it can change how you move; it can be used to reduce pain; it can induce cognitive changes – if you get embodied in a body that looks like Einstein, then you do better on a cognitive test, and we've even used it in a context of fear of death. So, the first main example I want to talk about is using this idea of embodiment for self-counseling.

  7. 00:03:00 --> 00:03:34

    So, as I said, this is about narrative. This is a narrative with yourself. Let me explain how it works. So, it's based on something called *Solomon's paradox* from social psychology, where it's known that we're much better at giving advice to a friend than we are to ourselves. But suppose you could objectify the you that you talk to when you talk with yourself on the inside. So, you can make it as if your self is the friend – the other person. So, maybe talking with yourself as if with another person might be helpful for personal problem solving.

  8. 00:03:34 --> 00:04:01

    So, here's how it works: We take the person, we scan them, and we make a virtual body that looks like them – we put them in virtual reality; so, if he looks in the mirror in virtual reality, he'll see a replica of his own body. But also in virtual reality we can embody you as anybody – in particular, with *first-person perspective* – you look down; you see the body – and *real-time motion capture* – you move and the body moves with you.

  9. 00:04:01 --> 00:04:34

    We can embody you as anybody – in this particular case, as Sigmund Freud. So, what happens is that first of all you are as yourself and you explain a problem to Sigmund Freud. Then you become Sigmund Freud; you're in his body, and you see and hear yourself explain the problem – now as Sigmund Freud, remember, change the body, change the mind, and also seeing yourself from the outside as a friend, you can maybe come up with some ideas to help this person – you – with the problem.

  10. 00:04:34 --> 00:05:02

    So, embodied as Freud, you could talk back to yourself, and this process keeps changing back and forward until you've reached a resolution. So, basically you're having a conversation with yourself. A journalist from The New Yorker came and he tried this, and he wrote something very interesting afterwards. He spoke with Freud about a problem he had about guilt that he writes about in this article – a guilt to do with his mother being in

  11. 00:05:02 --> 00:05:33

    a care home, and he said, 'Soon I fell into a rhythm. Freud and I talked for about 20 minutes.' Of course, it's himself talking. 'He was insightful. He said many things I'd never said to myself in ordinary life. When I took off the headset, I was moved. I wanted to tell myself, "Good talk." From his perspective, I'd seemed different, sadder, more ordinary and comprehensible. I told myself to remember that version of me.' In VR, we're at that stage where the paradigms for how to do things

  12. 00:05:33 --> 00:05:47

    and operate in VR are still not invented, other than, 'Oh, let's put up a menu.' This is the wrong way to go. So, for designers interested in this field, it's wide open, I think.

Enroll in UX Design for Virtual Reality to develop your understanding of UX in VR.

Do you need a PhD to be a user researcher?

You don't need a PhD to become a user researcher. Practical skills and experience are often more valued than a PhD.

User researchers use diverse methods to understand behaviors, needs, and motivations. Essential skills include empathy, observation, critical thinking, and communication.

Ann Blandford, Professor of Human-Computer Interaction at University College London, offers valuable insights into effective user research techniques:

Video transcript:
  1. 00:00:00 --> 00:00:35

    Ditte Hvas Mortensen: Is there anything else you can do to create a good rapport with your participants? Ann Blandford: Well, obviously there are things that we have control over like the way we dress, our demeanor, the way that we interact, the way that we engage with other people. Being realistic, though, there are also things that we *don't* have control over.

  2. 00:00:35 --> 00:01:03

    So, sometimes my students – who are mostly younger than me and have different demeanors, and they look different; they have different accents; they have different cultures – you know. Sometimes they can get data that I would never ever manage to get.

  3. 00:01:03 --> 00:01:33

    One of my PhD students found out a lot about people's sexual activities related to technology use. I would have found that quite hard to do, frankly, because I'm old enough to be their mother, possibly, and, you know, people would feel less comfortable perhaps talking to me about some topics than others. So, each of us has things where it might be about things that we have *in common* with our interviewees

  4. 00:01:33 --> 00:02:04

    or things where they feel comfortable with us or less comfortable with us for other reasons – about shared culture or contrasting culture or common age or different age or common sex or different sex, you know. Those things you can't actually control. D.M.: What about something like body language? How important is that? Or how much do you think about that when you do an interview?

  5. 00:02:04 --> 00:02:31

    A.B.: Again, well, I think rapport is about *being open*, about not being inappropriately intimate or inappropriately cold. But, actually, every individual relationship is different. So, it's about behaving professionally because – you know – you're not setting up a lifelong friendship here. You are interviewing somebody for a particular purpose about a technology.

  6. 00:02:31 --> 00:03:02

    It may be part of a study series where that individual is involved in some other part of an activity like combining interviews with observations of the same people or having initial interviews and debrief interviews and having a diary study or something in between. So, it's not literally a one-off moment of relationship, but it's also unlikely to lead to a lifelong friendship.

  7. 00:03:02 --> 00:03:35

    So, it is about being professional but not standoffish. It's about thinking about how other people will perceive you and feel comfortable with you without it being inappropriately intimate or inappropriately cool. You know, it's one of many, many social situations that we find ourselves in, and we carry ourselves differently according to the social context.

  8. 00:03:35 --> 00:04:00

    I think each of us will behave slightly differently and respond differently even to interviews on different subjects, interviews with different people in different contexts. Yes. It's about *being human* – you know – not being too cold and too detached about it, but also not being inappropriately overly friendly.

  9. 00:04:00 --> 00:04:33

    D.M.: What about if you have to do interviews in a culture that's different from your own? A.B.: So, I think there are many ways in which a culture is different from your own. I mean, it may be a very obvious way like – you know – I'm very clearly White and Anglo-Saxon, and so there is a very obvious difference if I'm interviewing people from a different race or a different religion or culture.

  10. 00:04:33 --> 00:05:01

    And that's really about being appropriately prepared *if* those things matter – because if it's literally just about the design of a technology that is used by a broad range of people, in a way, you know, those elements of culture probably don't matter *too much*.

  11. 00:05:01 --> 00:05:38

    And it's about *responding appropriately*. My research team is not made up of all White Anglo-Saxon middle-aged people. It's made up of a delightful variety of people from different races, cultures and creeds. And we work together, and it's not an issue. And very often in interviews, it's not an issue. It might *be* an issue where the reason for the interview is driven by that difference.

  12. 00:05:38 --> 00:06:00

    So, right now we're doing a study on the design of technologies for helping people to self-manage with HIV – both, you know, people in *at-risk populations*

  13. 00:06:00 --> 00:06:30

    and by definition that is people who are culturally different from me, because I'm *not* in a high-risk group for HIV. And because it's specifically around that technology, it does mean talking to people about a lifestyle that is different from mine and ways of using technology that are different from something that I would personally use.

  14. 00:06:30 --> 00:07:04

    And so, frankly, the first interviews have been quite *exploratory*. And they've been with people we already know and who are prepared to tell us if we put our foot in it and get things wrong. And they will share with us about how we should use language where we're talking about slightly – you know – topics where there is still a stigma around conditions like that in the broader population.

  15. 00:07:04 --> 00:07:37

    And I think it's about *being open to learning* – but also recognizing that, you know, there may be people who we *can't* work with or where we need to actually recruit an interviewer who has an appropriate background and appropriate qualifications and expertise, to work better, you know. So far in this study, we feel that by being open and by being prepared and ready to learn,

  16. 00:07:37 --> 00:08:00

    we're making good progress. We hope that it will continue that way. And it's just one example. But sometimes you might actually have to choose your interviewer, to match well, to be somebody who has appropriate kind of insider knowledge or empathy with a particular community.

  17. 00:08:00 --> 00:08:19

    But where it's around technology, you know, a lot of us design technologies that are for people who are different from ourselves, and so we actually have to learn to empathize with people from different cultures, from different backgrounds.

To learn more about being a user researcher, read our 15 Guiding Principles.

How do I become a UX researcher with no experience?

To become a UX researcher with no experience, you should focus on UX research fundamentals. Learn UX design principles, user-centered processes, and research methodologies to understand user needs and behaviors.

Engage in self-learning with courses and materials on:

  • UX research methods.

  • Empathy skills.

  • Critical thinking skills. 

Developing these skills is essential for a career in UX research.

Volunteer, intern, or carry out research projects to gain practical experience. A portfolio displaying your projects and findings is vital if you aspire to be a UX researcher.

Networking is crucial for entering the UX field. Join communities, attend events, and seek mentorship from experienced UX researchers for industry insights and career advice.

A perfect place to start is the IxDF course, User Research – Methods and Best Practices.



Learn more about Games User Research

Take a deep dive into Games User Research with our course User Research – Methods and Best Practices.

How do you plan to design a product or service that your users will love, if you don't know what they want in the first place? As a user experience designer, you shouldn't leave it to chance to design something outstanding; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as the largest part of user experience design.

In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required, and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, subsequently, outperforms your competitors’.

This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!

By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!

We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!


Open Access—Link to us!

We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.

If you want this to change, link to us, or join us to help us democratize design knowledge!


Cite according to academic standards

Simply copy and paste the text below into your bibliographic reference list, onto your blog, or anywhere else. You can also just hyperlink to this page.

Interaction Design Foundation - IxDF. (2023, December 14). What is Games User Research? Interaction Design Foundation - IxDF.

New to UX Design? We're Giving You a Free eBook!

The Basics of User Experience Design

Download our free ebook “The Basics of User Experience Design” to learn about core concepts of UX design.

In 9 chapters, we'll cover: conducting user interviews, design thinking, interaction design, mobile UX design, usability, UX research, and many more!
