Artificial Intelligence (AI) is exceedingly effective at processing large volumes of data; even so, it still calls for human guidance when applied in the real world, not least because its impacts are ones that humans will feel. Watch as AI Product Designer Ioana Teleanu introduces the two main types of AI research tools, insight generators and collaborators, and explains how you can apply them in UX research.
Video Transcript
00:00:00 --> 00:00:31
Before showing you an end-to-end AI-powered design process, one that still happens under human guidance, which is the key point in this conversation, let's go quickly over how AI can help us. For one, it can decrease cognitive load and aid decision making by processing large volumes of data, and it can help us automate repetitive tasks such as formatting images or resizing text.
00:00:31 --> 00:01:03
It can help us by providing more insights into human behavior and usage patterns. It can also help with creating prototypes, mockups, and all kinds of visual assets; it can spot usability issues; and so on. AI can help us at every step of the design process: covering our blind spots, augmenting our thinking, and making us more efficient if we use it correctly. Here's how an end-to-end design process augmented by AI may look. But the key to reading this schema, and how I prefer to look at it, is this:
00:01:03 --> 00:01:30
there's always a person orchestrating this. A future in which AI could generate a cohesive, coherent, reliable, and relevant design process end to end is a very distant one for now, and there's a pressing need for someone governing this process, applying critical thinking and showing intentionality at every stage of the design process. That's also because AI shows multiple limitations
00:01:30 --> 00:02:00
on each of the steps shown in the schema. Nielsen Norman Group published an article unpacking the limits that currently surround the use of AI in research. To understand the context of their analysis, you need to know that there are currently two types of AI-powered research tools on the market: insight generators and collaborators. AI insight generators: these tools summarize user research sessions based only on the transcripts.
00:02:00 --> 00:02:31
Since they don't accept any kind of additional information (context, past research, background information about the product and users, and so on), they can be highly problematic in how they generate and present those summaries. While there are some workarounds, like uploading background information as session notes to be added to the analysis, that's not the right framing for the source, and it won't be reflected correctly in the analysis and generation. Humans are much better at the scoping and systems
00:02:31 --> 00:03:02
thinking required to understand the interpretation landscape. AI collaborators: these work similarly to insight generators, but they're slightly better because they accept some contextual information provided by the researcher. For instance, the researcher might show the AI some human-generated interpretation to train it. The tool can then recommend tags for the thematic analysis of the data. In addition to session transcripts, collaborators can also analyze researchers' notes
00:03:02 --> 00:03:30
and then create themes and insights based on input from multiple sources. But even though they appear to be a bit better, they're still significantly limited and pose a lot of problems if not used with the right mindset and caution. The limitations they've identified and expand on in detail are these. First, most AI tools can't process visual input, and the biggest problem with that is that no human or AI tool can analyze usability testing sessions from the transcript alone.
00:03:30 --> 00:04:03
Usability testing is a method that inherently relies on observing how the user interacts with the product. Participants often think aloud, describing what they're doing and thinking. Their words do provide valuable information; however, you should never analyze usability tests based only on what participants say. Transcript-only analysis misses important context in user tests because participants don't verbalize all their actions, don't describe every element in the product, and don't always have a clear understanding or mental model of the product.
00:04:03 --> 00:04:30
So, for now, Nielsen Norman Group's recommendation is: do not trust AI tools that claim to be able to analyze usability testing sessions from transcripts alone. Future tools able to process video visuals will be much more useful for this method. Another problem is limited understanding of context, and this remains a major one. AI insight generators don't yet accept the study goals or research questions, insights or tags from previous
00:04:30 --> 00:05:01
rounds of research, background information about a product or the user groups, contextual information about each participant (new user versus existing user), or the list of tasks or interview questions. There is also a problem with the lack of citation and validation, which raises multiple concerns. The tools aren't able to differentiate between the researcher's notes and the actual session transcript, which is a major ethical concern: we must always clearly separate our own interpretations or assumptions from what the participants said or did.
00:05:01 --> 00:05:33
Another problem with the lack of citation is that it makes verifying accuracy very difficult. AI systems can sometimes produce information that sounds very plausible but is actually incorrect. Unstable performance and usability issues are another problem: none of the tools they tested had solid usability or performance, and they reported outages, errors, and unstable performance in general. And then there's the problem of bias. According to Reva Schwartz and her colleagues, AI systems and applications can involve biases
00:05:33 --> 00:06:02
at three levels: systemic, statistical and computational, and human biases. AI must be trained on data, which can introduce systemic biases (such as historical and institutional biases) and statistical biases (like a dataset sample that isn't representative). When people use AI-powered results in decision making, they can bring in human biases like anchoring bias. So bias can creep into research efforts on multiple levels,
00:06:02 --> 00:06:30
and these tools don't yet have the mechanisms in place to prevent that. I wanted to discuss the limitations reported in the article in detail because I believe we can easily extrapolate and expand them beyond just research tools. Most of these problems will be observable in other types of AI companions in the design process: biases in image generation, limitations in being offered context, other kinds of input limitations (not accepting files or images),
00:06:30 --> 00:06:51
output vagueness, generic results, and so on. So I think this is a necessary frame to keep in mind when interacting and designing with the help of AI. These tools are not very reliable or accurate yet, so take everything they produce with a grain of salt and apply critical thinking at all times.
As Ioana explained, what insight generators have as their primary goal is to provide concise and informative summaries of user research sessions. They’re a form of narrow AI, and they can analyze the transcripts of a research session. With that said—and here’s the especially important bit—they don’t take any additional information into account, like context, past research, or background details about the product or its users. That means that insight generators can’t interpret the complete picture of user interactions and experiences.
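To make that limitation concrete, here's a minimal, hypothetical sketch of why a transcript-only view loses usability signal. The session events, field names, and utterances are invented for illustration; no real tool or dataset is implied.

```python
# Hypothetical usability session log: each event pairs an observed action
# with what the participant said (None = the participant acted silently).
session = [
    {"action": "clicks 'Export'",  "utterance": "I'll try exporting this."},
    {"action": "opens wrong menu", "utterance": None},   # silent action
    {"action": "hovers over icon", "utterance": None},   # silent action
    {"action": "closes dialog",    "utterance": "Hmm, that wasn't it."},
]

# What a transcript-only insight generator sees vs. what it never learns.
transcript_only = [e["utterance"] for e in session if e["utterance"]]
silent_actions = [e["action"] for e in session if e["utterance"] is None]

print(len(transcript_only), "utterances seen;", len(silent_actions), "actions lost")
# -> 2 utterances seen; 2 actions lost
```

The participant opening the wrong menu, arguably the key usability finding here, never appears in the transcript at all, which is exactly why transcript-only analysis can't substitute for observation.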
Collaborators are more advanced: they can be trained with human-generated interpretations and context, including research goals, questions, and product background. Despite these advanced capabilities, though, they're still an example of narrow AI, not general AI. Collaborators can recommend thematic analysis tags and generate insights based on the transcripts and contextual data. They can also analyze researchers' notes to create more nuanced themes and insights. Even so, they have difficulties handling visual data, as well as issues with citation and validation, and then there's the potential for bias to creep into research results.
Bias in AI can come from training data (systemic bias), data collection and sampling (statistical bias), algorithms (computational bias), or human interactions (human bias). Bias distorts results and can cause real problems, not to mention the ethical implications of biased decision-making in design. So, to lessen bias, it's vital to use diverse and representative data, test and audit AI systems, and give very clear guidelines for ethical use; that's how to aim for fair and unbiased AI decisions that benefit everyone.
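As one small illustration of the statistical-bias point, a researcher can check how representative a recruited sample is before feeding sessions to any AI tool. This sketch is an assumption-laden example: the segment names, population shares, and the 15-point flagging threshold are all made up for illustration.

```python
# Illustrative check for unrepresentative sampling (one source of
# statistical bias). Segments, shares, and threshold are assumptions.

population = {"new_user": 0.30, "existing_user": 0.70}  # known user-base mix
sample = {"new_user": 9, "existing_user": 3}            # recruited participants

def sampling_skew(sample, population):
    """Each segment's share of the sample minus its share of the population."""
    total = sum(sample.values())
    return {seg: sample.get(seg, 0) / total - share
            for seg, share in population.items()}

# Flag segments over- or under-represented by more than 15 percentage points.
flagged = {seg: round(d, 2)
           for seg, d in sampling_skew(sample, population).items()
           if abs(d) > 0.15}
print(flagged)  # -> {'new_user': 0.45, 'existing_user': -0.45}
```

Here new users make up 75% of the sample but only 30% of the user base, so any AI-generated "insights" from these sessions would skew heavily toward one group, a bias no downstream tool can correct on its own.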
The Take Away
AI research tools are powerful assistants that can reduce cognitive load, support decision-making (by processing large volumes of data), automate tasks (like image formatting and text resizing), offer deeper insights into human behavior and usage patterns, create prototypes and a whole variety of visual assets, and excel at zeroing in on usability issues.
There are two types of AI research tools: insight generators and collaborators. Insight generators summarize user research sessions by analyzing transcripts. That said, they can't consider additional context, a point that limits their understanding of user interactions and experiences. Collaborators deliver more context-aware insights through researcher input, but they still struggle with visual data, citation, validation, and potential biases.
You've got to work around these tools' limitations; so, it's imperative that you exercise caution, maintain human oversight, critically evaluate outputs, be mindful of potential biases that can arise, and use AI as a supplementary, not your sole, decision-making source.