An Introduction to Action Research

Action research is a methodology that emphasizes collaboration between researchers and participants to identify problems, develop solutions and implement changes. Designers plan, act, observe and reflect, aiming to drive positive change in a specific context. Unlike traditional methods, which prioritize knowledge generation, action research prioritizes practical solutions and the improvement of practice.
© New Mexico State University, Fair Use
Action research stands out as a unique approach in user experience design (UX design), among other types of research methodologies and fields. It has a hands-on, practical focus, so UX designers and researchers who engage in it devise and execute research that not only gathers data but also leads to actionable insights and solid real-world solutions.
© Interaction Design Foundation, CC BY-SA 4.0
The concept of action research dates back to the 1940s, with its roots in the work of social psychologist Kurt Lewin. Lewin emphasized the importance of action in understanding and improving human systems. The approach rapidly gained popularity across various fields, including education, healthcare, social work and community development.
Kurt Lewin, often called the founder of modern social psychology.
© Wikimedia Commons, Fair Use
In UX design, action research took hold with the rise of human-centered design principles. As UX design started to focus more on users' needs and experiences, the participatory and problem-solving nature of action research became increasingly significant. Action research bridges the gap between theory and practice in UX design. It enables designers to move beyond hypothetical assumptions and base their design decisions on concrete, real-world data. This not only enhances the effectiveness of the design but also boosts its credibility and acceptance among users—vital bonuses for product designers and service designers.
At its core, action research is a systematic, participatory and collaborative approach to research. It emphasizes direct engagement with specific issues or problems and aims to bring about positive change within a particular context. Traditional research methodologies tend to focus solely on the generation of theoretical knowledge; action research, in contrast, aims to solve real-world problems and generate knowledge simultaneously.
Action research helps designers and design teams gather first-hand insights so they can deeply understand their users' needs, preferences and behaviors. With it, they can devise solutions that genuinely address their users’ problems—and so design products or services that will resonate with their target audiences. As designers actively involve users in the research process, they can gather authentic insights and co-create solutions that are both effective and user-centric.
Moreover, the iterative nature of action research aligns perfectly with the UX design process. It allows designers to continuously learn from users' feedback, adapt their designs accordingly, and test their effectiveness in real-world contexts. This iterative loop of planning, acting, observing and reflecting ensures that the final design solution is user-centric. It also ensures that actual user behavior and feedback validate the solution a design team produces, which helps to make action research studies particularly rewarding for some brands.
Designers can continuously learn from users’ feedback in action research and iterate accordingly.
© Fauxels, Fair Use
Action research in UX design involves several stages. Each stage contributes to the ultimate goal: to create effective and user-centric design solutions. Here is a step-by-step breakdown of the process:
1. Identify the problem: This could be a particular pain point users are facing, a gap in the current UX design, or an opportunity for improvement.
2. Plan an action: Designers might need to devise new design features, modify existing ones or implement new user interaction strategies.
3. Act: Designers put their planned actions into practice. They might prototype the new design, implement the new features or test the new user interaction strategies.
4. Observe: As designers implement the action they’ve decided upon, it's crucial to observe its effects and collect data. This could mean that designers track user behaviors, collect user feedback, conduct usability tests or use other data collection methods.
5. Reflect: From the collected data, designers reflect on the results, analyze the effectiveness of the action and draw insights. If the action has led to positive outcomes, they can further refine it and integrate it into the final design. If not, they can go back to plan new actions and repeat the cycle (sketched in code below).
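To make the loop concrete, here is a minimal sketch (in TypeScript, with all field names invented for illustration) of how a team might record each plan-act-observe-reflect cycle; it is one possible format, not a prescribed one.

```typescript
// Hypothetical record of one action research cycle. Field names are
// illustrative only; teams adapt this to their own tooling.
interface CycleRecord {
  cycle: number;                    // which iteration this is
  problem: string;                  // the issue identified at the start
  plannedAction: string;            // what the team intends to change
  observations: string[];           // raw notes, feedback, test results
  reflection?: string;              // insights drawn after analysis
  nextStep?: "refine" | "adopt" | "replan"; // decision feeding the next cycle
}

const cycle1: CycleRecord = {
  cycle: 1,
  problem: "Users abandon the checkout flow at the payment step",
  plannedAction: "Reduce the payment form from three screens to one",
  observations: [
    "Usability test: 7 of 10 participants completed checkout",
    "Interviews: two users were unsure which fields are required",
  ],
  reflection: "A single-screen form helps, but field labels need work",
  nextStep: "refine",
};

console.log(`Cycle ${cycle1.cycle}: next step is to ${cycle1.nextStep}`);
```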
An action research example could be where designers do the following:
© Interaction Design Foundation, CC BY-SA 4.0
Outcome: The design team notices a significant decrease in checkout abandonment, which leads to higher conversion rates as more users successfully purchase goods.
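As a purely illustrative sketch of how such an outcome might be quantified, the snippet below computes checkout abandonment before and after a change; the figures and names are invented, not taken from a real study.

```typescript
// Invented numbers for illustration: quantify the drop in checkout
// abandonment between two rounds of an action research cycle.
const before = { checkoutsStarted: 1000, checkoutsCompleted: 430 };
const after  = { checkoutsStarted: 1000, checkoutsCompleted: 610 };

function abandonmentRate(completed: number, started: number): number {
  return started === 0 ? 0 : 1 - completed / started;
}

const rateBefore = abandonmentRate(before.checkoutsCompleted, before.checkoutsStarted); // 0.57
const rateAfter  = abandonmentRate(after.checkoutsCompleted, after.checkoutsStarted);   // 0.39

console.log(
  `Checkout abandonment: ${(rateBefore * 100).toFixed(0)}% -> ${(rateAfter * 100).toFixed(0)}%`
);
```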
Action research splits into three main types: technical, collaborative and critical reflection.
Technical action research focuses on improving the efficiency and effectiveness of a system or process. Designers often use it in organizational contexts to address specific issues or enhance operations. This could be where designers improve the usability of a website, optimize the load time of an application or enhance the accessibility of a digital product.
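For instance, optimizing load time starts with measuring it. The browser-only sketch below uses the standard Navigation Timing API to capture a simple load-time figure for before/after comparison; the reporting endpoint and payload shape are hypothetical.

```typescript
// Browser-only sketch: collect a simple page-load measurement using the
// standard Navigation Timing API. The endpoint and payload are hypothetical.
function reportLoadTime(endpoint: string): void {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return; // API unavailable

  const payload = {
    page: location.pathname,
    loadTimeMs: Math.round(nav.duration),                 // navigation start to load event end
    domReadyMs: Math.round(nav.domContentLoadedEventEnd), // DOMContentLoaded finished
  };

  navigator.sendBeacon(endpoint, JSON.stringify(payload)); // non-blocking send
}

// Wait until the load event has finished so nav.duration is complete.
window.addEventListener("load", () => {
  setTimeout(() => reportLoadTime("/analytics/load-time"), 0);
});
```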
Learn why accessibility is so important in design in this video:
Accessibility ensures that digital products, websites, applications, services and other interactive interfaces are designed and developed to be easy to use and understand by people with disabilities. There are 1.85 billion folks around the world who live with a disability – or might live with more than one – and who navigate the world through assistive technology or other augmentations that assist with their interactions with the world around them. Meaning folks who live with disability, but also their caretakers,
their loved ones, their friends. All of this relates to the purchasing power of this community. Disability isn't a stagnant thing. We all have our life cycle. As you age, things change, your eyesight adjusts. All of these relate to disability. Designing accessibility is also designing for your future self. People with disabilities want beautiful designs as well. They want a slick interface. They want it to be smooth and an enjoyable experience. And so if you feel like
your design has gotten worse after you've included accessibility, it's time to start actually iterating and think, how do I actually make this an enjoyable interface to interact with, while also making sure it sets expectations and gives people the amount of information they need, in a way that they can digest it just as everyone else wants to digest that information? For screen reader users, a lot of it boils down to making sure you're always labeling
your interactive elements, whether it be buttons, links, slider components. Just making sure that you're giving enough information that people know how to interact with your website, with your design, with whatever that interaction looks like. Also, dark mode is something that came out of this community. So if you're someone who leverages that quite frequently. Font is a huge kind of aspect to think about in your design. A thin font that meets color contrast
can still be a really poor readability experience because of that pixelation aspect or because of how your eye actually perceives the text. What are some tangible things you can start doing to help this user group? Create inclusive and user-friendly experiences for all individuals.
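One tangible step from the video, labeling interactive elements, can even be spot-checked in code. The simplified sketch below flags controls that expose neither visible text nor an ARIA label; it is a rough development aid, not a full accessibility audit (it ignores, for example, labels associated via the label element).

```typescript
// Simplified development aid: flag interactive elements that expose neither
// visible text nor an ARIA label. Not a full audit (it ignores <label for>,
// alt text on image buttons, and other valid labelling techniques).
function findUnlabelledControls(root: ParentNode = document): Element[] {
  const controls = root.querySelectorAll(
    "button, a[href], input:not([type='hidden']), [role='slider']"
  );
  return Array.from(controls).filter((el) => {
    const hasText = (el.textContent ?? "").trim().length > 0;
    const hasAriaLabel = el.hasAttribute("aria-label") || el.hasAttribute("aria-labelledby");
    return !hasText && !hasAriaLabel;
  });
}

// Log offenders during development so they can be given accessible names.
findUnlabelledControls().forEach((el) => console.warn("Unlabelled control:", el));
```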
Collaborative action research emphasizes the active participation of stakeholders in the research process. It's about working together to identify issues, co-create solutions and implement changes. In the context of UX design, this could mean that designers collaborate with users to co-design a new feature, work with developers to optimize a process, or partner with business stakeholders to align the UX strategy with business goals.
Critical reflection action research aims to challenge dominant power structures and social injustices within a particular context. It emphasizes the importance of designers and design teams reflecting on the underlying assumptions and values that drive research and decision-making processes. In UX design, this could be where designers question design biases, challenge stereotypes, and promote inclusivity and diversity in design decisions.
Like any UX research method or approach, action research comes with its own set of benefits and challenges.
Action research focuses on solving real-world problems. This quality makes it highly relevant and practical. It allows UX designers to create solutions that are not just theoretically sound but also valid in real-world contexts.
Action research involves users in the research process, which lets designers gather first-hand insights into users' needs, preferences and behaviors. This not only enhances the accuracy and reliability of the research but also fosters user engagement and ownership long before user testing of high-fidelity prototypes.
The iterative nature of action research promotes continuous learning and improvement. It enables designers to adapt their designs based on users' feedback and learn from their successes and failures. They can fine-tune better tools and deliverables, such as more accurate user personas, from their findings.
Author and Human-Computer Interaction Expert, Professor Alan Dix explains personas and why they are important:
Personas are one of these things that gets used in very, very many ways during design. A persona is a rich description of a user. It's similar, in some sense, to an example user – somebody that you're going to talk about. But it usually is not a particular person. And that's sometimes for reasons of confidentiality.
Sometimes it's you want to capture about something slightly more generic than the actual user you talked to, that in some ways represents the group, but is still particular enough that you can think about it. Typically, not one persona, you usually have several personas. We'll come back to that. You use this persona description, it's a description of the example user, in many ways during design. You can ask questions like "What would Betty think?"
You've got a persona called Betty: "What would Betty think?" or "How would Betty feel about using this aspect of the system? Would Betty understand this? Would Betty be able to do this?" So we can ask questions by letting those personas seed our understanding, seed our imagination. Crucially, the details matter here. You want to make the persona real. So what we want to do is take this persona, an image of this example user, and be able to ask those questions: will this user..., what will this user feel about
this feature? How will this user use this system in order to be able to answer those questions? It needs to seed your imagination well enough. It has to feel realistic enough to be able to do that. Just like when you read that book and you think, no, that person would never do that. You've understood them well enough that certain things they do feel out of character. You need to understand the character of your persona.
For different purposes actually, different levels of detail are useful. So I'm going to sort of start off with the least and go to the ones which I think are actually seeding that rich understanding. So at one level, you can just look at your demographics. You're going to design for warehouse managers, maybe. For a new system that goes into warehouses. So you look at the demographics, you might have looked at their age. It might be that on the whole that they're older. Because they're managers, the older end. So there's only a small number under 35. The majority
are over 35, split roughly 50:50 between the two older groups. So that's about 40 percent of them in the 35 to 50 age group, and about half of them are older than 50. So, on the whole, it's sort of towards the older end. About two thirds are male, a third are female. Education-wise, the vast majority have not got any sort of further education beyond school. About 57 percent we've got here are school-educated only.
We've got a certain number that have done basic college-level education and a small percentage of warehouse managers have had a university education. That gives some sense of things. These are invented, by the way, I should say – not real demographics. Do they have children at home? You might have got this from some big survey, or from existing knowledge of the world, or by asking the employer that you're dealing with to give you the statistics. So perhaps about a third of them have got children at home, but two thirds of them haven't.
And what about disability? About three quarters of them have no disability whatsoever. About one quarter do. Actually, in society it's surprising. If you think of disability in terms of major disability – perhaps having a missing limb or being completely blind or completely deaf – then you're talking about relatively small numbers. But if you include a wider range of disabilities, typically it gets bigger. And in fact can become
very, very large. If you include, for instance, using corrective vision with glasses, then actually these numbers will start to look quite small. Within this, in whatever definition they've used, they've got up to about 17 percent with the minor disability and about eight percent with a major disability. So far, so good. So now, can you design for a warehouse manager given this? Well,
you might start to fill in examples for yourself. So you might sort of almost like start to create the next stage. But it's hard. So let's look at a particular user profile. Again, this could be a real user, but let's imagine this as a typical user in a way. So here's Betty Wilcox. So she's here as a typical user. And in fact, actually, if you look at her, she's on the younger end. She's not necessarily the only one, you usually have several of these. And she's female as well. Notice only up to a third of our warehouse ones are female. So
she's not necessarily the center one. We'll come back to this in a moment, but she is an example user. One example user. This might have been based on somebody you've talked to, and then you're sort of abstracting in a way. So, Betty Wilcox. Thirty-seven, female, college education. She's got children at home, one's seven, one's 15. And she does have a minor disability – a slight problem in her left hand.
Can you design, can you ask, what would Betty think? You're probably doing a bit better at this now. You start to picture her a bit. And you've probably got almost like an image in your head as we talk about Betty. So it's getting better. So now let's go to a different one. You know, this is now Betty. Betty is 37 years old. She's been a warehouse manager for five years and worked for Simpkins Brothers Engineering for 12 years. She didn't go to university, but has studied in her evenings for a business diploma.
That was her college education. She has two children aged 15 and seven and does not like to work late. Presumably because we put it here, because of the children. But she did part of an introductory in-house computer course some years ago. But it was interrupted when she was promoted, and she can no longer afford to take the time. Her vision is perfect, but a left hand movement, remember from the description a moment ago, is slightly restricted because of an industrial accident three years ago.
She's enthusiastic about her work and is happy to delegate responsibility and to take suggestions from the staff. Actually, we're seeing somebody who is confident in her overall abilities, otherwise she wouldn't be somebody happy to take suggestions. If you're not confident, you don't. We sort of see that, we start to see a picture of her. However – although she is confident in general – she does feel threatened by the introduction of yet another computer system. The third since she's been working at Simpkins Brothers. So now, when we think about that, do you have a better vision of Betty?
Do you feel you might be in a position to start talking about..."Yeah, if I design this sort of feature, is this something that's going to work with Betty? Or not"? By having a rich description, she becomes a person. Not just a set of demographics. But then you can start to think about the person, design for the person and use that rich human understanding you have in order to create a better design.
So it's an example of a user, as I said not necessarily a real one. You're going to use this as a surrogate and these details really, really matter. You want Betty to be real to you as a designer, real to your clients as you talk to them. Real to your fellow designers as you talk to them. To the developers around you, to different people. Crucially, though, I've already said this, there's not just one. You usually want several different personas because the users you deal with are all different.
You know, we're all different. And the user group – warehouse managers, quite a narrow and constrained set of users – will all be different. Now, you can't have one persona for every user, but you can try and spread them. You can look at the range of users. So in that demographics picture I gave, we actually asked, what's their level of education? That's one way to look at that range. You can think of it as a broad range of users.
The obvious thing to do is to have the absolute average user. So you almost look for them: "What's the typical thing? Yes, okay." In my original demographics, the majority have no college education – they were school-educated only – so that would be your education choice; two thirds of them are male, so I'd have gone for somebody who was male. Go down the list – bang in the centre. Now it's useful to have that center one, but if that's the only person you deal with, you're not thinking about the range. But certainly you want people who in some sense
cover the range, that give you a sense of the different kinds of people. And hopefully also, having several reminds you constantly that they are a range and have different sets of characteristics – that there are different people, not just a generic user.
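Teams sometimes capture the kind of persona detail Professor Dix describes as structured data so it can be shared and referenced alongside the narrative description. The TypeScript sketch below is one hypothetical way to do that, using Betty's details from the video; the field names are illustrative.

```typescript
// Hypothetical structure for sharing persona details; the narrative
// description still carries most of the value. Data follows the Betty
// Wilcox example from the video.
interface Persona {
  name: string;
  age: number;
  role: string;
  education: string;
  household: string;
  disability?: string;
  attitudes: string[]; // the "character" details that seed the imagination
}

const betty: Persona = {
  name: "Betty Wilcox",
  age: 37,
  role: "Warehouse manager for 5 years; 12 years at Simpkins Brothers Engineering",
  education: "Business diploma, studied in the evenings",
  household: "Two children, aged 15 and 7; prefers not to work late",
  disability: "Slightly restricted left-hand movement after an industrial accident",
  attitudes: [
    "Enthusiastic about her work; happy to delegate and take suggestions",
    "Feels threatened by the introduction of yet another computer system",
  ],
};
```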
Action research involves multiple iterations of planning, acting, observing and reflecting, which can be time- and resource-intensive.
It can be difficult to implement changes and observe their effects in real-world contexts. This is due to the complexity and unpredictability of real-world situations.
Since action research involves close collaboration with stakeholders, there's a risk of subjectivity and bias influencing the research outcomes. It's crucial for designers to maintain objectivity and integrity throughout the research process.
It can be a challenge to ensure all participants understand the nature of the research and agree to participate willingly. Also, it’s vital to safeguard the privacy of participants and sensitive data.
The iterative nature of action research might lead to expanding goals, and make the project unwieldy.
The contextual focus of action research may limit the extent to which designers can generalize findings from field studies to other settings.
To begin, designers should define clear objectives. They should ask the following:
What is the problem to try to solve?
What change is desirable as an outcome?
Clear objectives will guide the research process and help designers stay focused.
It’s vital to involve users in the research process. Designers should collaborate with them to identify issues, co-create solutions and implement changes in real time. This will not only enhance the relevance of the research but also foster user engagement and ownership.
To conduct action research means to observe the effects of changes in real-world contexts. This requires a variety of data collection methods. Designers should use methods like surveys, user interviews, observations and usability tests to gather diverse and comprehensive data.
UX Strategist and Consultant, William Hudson explains the value of usability testing in this video:
If you just focus on the evaluation activity typically with usability testing, you're actually doing *nothing* to improve the usability of your process. You are still creating bad designs. And just filtering them out is going to be fantastically wasteful in terms of the amount of effort. So, you know, if you think about it as a production line, we have that manufacturing analogy and talk about screws. If you decide that your products aren't really good enough
for whatever reason – they're not consistent or they break easily or any number of potential problems – and all you do to *improve* the quality of your product is to up the quality checking at the end of the assembly line, then guess what? You just end up with a lot of waste because you're still producing a large number of faulty screws. And if you do nothing to improve the actual process in the manufacturing of the screws, then just tightening the evaluation process
– raising the hurdle, effectively – is really not the way to go. Usability evaluations are a *very* important tool. Usability testing, in particular, is a very important tool in our toolbox. But really it cannot be the only one.
Action research is all about learning from action. Designers should reflect on the outcomes of their actions, analyze the effectiveness of their solutions and draw insights. They can use these insights to inform their future actions and continuously improve the design.
Lastly, designers should communicate and share their findings with all stakeholders. This not only fosters transparency and trust but also facilitates collective learning and improvement.
Action research involves both qualitative and quantitative data, but it's important to remember to place emphasis on qualitative data. While quantitative data can provide useful insights, designers who rely too heavily on it may end up with a less holistic view of the user experience.
Professor Alan Dix explains the difference between quantitative and qualitative data in this video:
Ah, well – it's a lovely day here in Tiree. I'm looking out the window again. But how do we know it's a lovely day? Well, I could – I won't turn the camera around to show you, because I'll probably never get it pointing back again. But I can tell you the Sun's shining. It's a blue sky. I could go and measure the temperature. It's probably not that warm, because it's not early in the year. But there's a number of metrics or measures I could use. Or perhaps I should go out and talk to people and see if there's people sitting out and saying how lovely it is
or if they're all huddled inside. Now, for me, this sunny day seems like a good day. But last week, it was the Tiree Wave Classic. And there were people windsurfing. The best day for them was not a sunny day. It was actually quite a dull day, quite a cold day. But it was the day with the best wind. They didn't care about the Sun; they cared about the wind. So, if I'd asked them, I might have gotten a very different answer than if I'd asked a different visitor to the island
or if you'd asked me about it. And it can be almost a conflict between people within HCI. It's between those who are more *quantitative*. So, when I was talking about the sunny day, I could go and measure the temperature. I could measure the wind speed if I was a surfer – a whole lot of *numbers* about it – as opposed to those who want to take a more *qualitative* approach. So, instead of measuring the temperature, those are the people who'd want to talk to people to find out more about what *it means* to be a good day.
And we could do the same for an interface. I can look at a phone and say, "Okay, how long did it take me to make a phone call?" Or I could ask somebody whether they're happy with it: How does the phone make them feel? – different kinds of questions to ask. Also, you might ask those questions – and you can ask this in both a qualitative and quantitative way – in a sealed setting. You might take somebody into a room, give them perhaps a new interface to play with. You might – so, take the computer, give them a set of tasks to do and see how long they take to do it. Or what you might do is go out and watch
people in their real lives using some piece of – it might be existing software; it might be new software, or just actually observing how they do things. There's a bit of overlap here – I should have mentioned at the beginning – between *evaluation techniques* and *empirical studies*. And you might do empirical studies very, very early on. And they share a lot of features with evaluation. They're much more likely to be wild studies. And there are advantages to each. In a laboratory situation, when you've brought people in,
you can control what they're doing, you can guide them in particular ways. However, that tends to make it both more – shall we say – *robust* that you know what's going on but less about the real situation. In the real world, it's what people often call "ecologically valid" – it's about what they *really* are up to. But it is much less controlled, harder to measure – all sorts of things. Very often – I mean, it's rare or it's rarer to find more quantitative in-the-wild studies, but you can find both.
You can both go out and perhaps do a measure of people outside. You might – you know – well, go out on a sunny day and see how many people are smiling. Count the number of smiling people each day and use that as your measure – a very quantitative measure that's in the wild. More often, you might in the wild just go and ask people. It's a more qualitative thing. Similarly, in the lab, you might do a quantitative thing – some sort of measurement – or you might ask something more qualitative – more open-ended. Particularly quantitative and qualitative methods,
which are often seen as very, very different, and people will tend to focus on one *or* the other. *Personally*, I find that they fit together. *Quantitative* methods tend to tell me whether something happens and how common it is to happen, whether it's something I actually expect to see in practice commonly. *Qualitative* methods – the ones which are more about asking people open-ended questions – either to both tell me *new* things that I didn't think about before,
but also give me the *why* answers if I'm trying to understand *why* it is I'm seeing a phenomenon. So, the quantitative things – the measurements – say, "Yeah, there's something happening. People are finding this feature difficult." The qualitative thing helps me understand what it is about it that's difficult and helps me to solve it. So, I find they give you *complementary things* – they work together. The other thing you have to think about when choosing methods is about *what's appropriate for the particular situation*. And these things don't always work.
Sometimes, you can't do an in-the-wild experiment. If it's about, for instance, systems for people in outer space, you're going to have to do it in a laboratory. You're not going to go up there and experiment while people are flying around the planet. So, sometimes you can't do one thing or the other. It doesn't make sense. Similarly, with users – if you're designing something for chief executives of Fortune 100 companies, you're not going to get 20 of them in a room and do a user study with them.
That's not practical. So, you have to understand what's practical, what's reasonable and choose your methods accordingly.
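One hedged way to keep the two kinds of data together, echoing the video's point that quantitative measures show that something happens while qualitative notes suggest why, is to record them side by side. The shapes and values below are invented for illustration.

```typescript
// Invented shapes and values: keep the quantitative signal and the
// qualitative explanation for a feature side by side.
interface FeatureFinding {
  feature: string;
  taskSuccessRate: number;    // quantitative: how often something happens
  medianTimeOnTaskSec: number;
  qualitativeNotes: string[]; // qualitative: why it might be happening
}

const discountCodeFinding: FeatureFinding = {
  feature: "Discount-code entry",
  taskSuccessRate: 0.62,
  medianTimeOnTaskSec: 48,
  qualitativeNotes: [
    '"I was not sure whether the code had been applied."',
    "Several participants scrolled past the field without noticing it.",
  ],
};
```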
Designers should focus action research on understanding user needs and preferences. If they ignore these in favor of more technical considerations, the resulting design solutions may not meet users' expectations or provide them with a satisfactory experience.
It's important to seek user feedback at each stage of the action research process. Without this feedback, designers may not optimize design solutions for user needs. For example, users may find the information architecture confusing. Additionally, without user feedback, it can be difficult to identify any unexpected problems that may arise during the research process.
Action research requires time and effort to ensure successful outcomes. If designers or design teams don’t permit enough time for the research process, it can lead to rushed decisions and sloppy results. It's crucial to plan ahead and set aside enough time for each stage of the action research process—and ensure that stakeholders understand the time-consuming nature of research and digesting research findings, and don’t push for premature results.
Contextual factors such as culture, environment and demographics play an important role in UX design. If designers ignore these factors, it can lead to ineffective design solutions that don't properly address users' needs and preferences or consider their context.
Professor Alan Dix explains the need to consider users’ culture in design, in this video:
As you're designing, it's so easy just to design for the people that you know and for the culture that you know. However, cultures differ. Now, that's true of many aspects of the interface; not least, though, the visual layout of an interface and the visual elements. Some aspects are quite easy to notice, like language; others are much, much more subtle.
You might have come across... there's two – well, actually there's three terms, because some of these are almost the same thing – but two terms are particularly distinguished: localization and globalization. And you hear them used almost interchangeably, and probably also with slight differences, because different authors and people will use them slightly differently. So one thing is localization, or internationalization – although the latter is probably only used in that sense. So localization is about taking an interface and making it appropriate
for a particular place. So you might change the interface style slightly. You certainly might change the language for it; whereas global – being globalized – is about saying, "Can I make something that works for everybody everywhere?" The latter sounds almost bound to fail and often does. But obviously, if you're trying to create something that's used across the whole global market, you have to try and do that. And typically you're doing a bit of each in each space.
You're both trying to design as many elements as possible so that they are globally relevant. They mean the same everywhere, or at least are understood everywhere. And some elements where you do localization, you will try and change them to make them more specific for the place. There's usually elements of both. But remembering that distinction, you need to think about both of those. The most obvious thing to think about here is just changing language. I mean, that's a fairly obvious thing and there's lots of tools to make that easy.
So if you have... whether it's menu names or labels, you might find this at the design stage or in the implementation technique, there's ways of creating effectively look-up tables that says this menu item instead of being just a name in the implementation, effectively has an idea or a way of representing it. And that can be looked up so that your menus change, your text changes and everything. Now that sounds like, "Yay, that's it!"
But that's not the end of the story, even for text. Visit Finland sometime. If you've never visited Finland, it's a wonderful place to go. The signs are typically in Finnish and in Swedish. Both languages are used – I think almost equal numbers of people use each as their first language, and most will know both. But because of this, if you look at those lines, they're in two languages.
The Finnish line is usually about twice as long as the Swedish piece of text, because Finnish uses a lot of double letters to represent quite subtle differences in sound. Vowels get lengthened by doubling them. Consonants get separated. So I'll probably pronounce this wrong, but R-I-T-T-A is not "Rita", which would be R-I-T-A, but "Reet-ta". Actually, I overemphasized that, but "Reetta". There's a bit of a stop.
And I said I won't be doing it right. Talk to a Finnish person, they will help put you right on this. But because of this, the text is twice as long. But of course, suddenly the text isn't going to fit in. So it's going to overlap with icons. It's going to scroll when it shouldn't scroll. So even something like the size of the field becomes something that can change. And then, of course, there's things like left-to-right order. Finnish and Swedish both are left-to-right languages. But if you were going to have, switch something say to an Arabic script from a European script,
then you would end up with things going the other way round. So it's more than just changing the names. You have to think much more deeply than that. But again, it's more than the language. There are all sorts of cultural assumptions that we build into things. The majority of interfaces are built... actually the majority are built not even in just one part of the world, but in one country, you know the dominance... I'm not sure what percentage,
but a vast proportion will be built, not just in the USA, but in the West Coast of the USA. Certainly there is a European/US/American centeredness to the way in which things are designed. It's so easy to design things caught in those cultures without realizing that there are other ways of seeing the world. That changes the assumptions, the sort of values that are built into an interaction.
The meanings of symbols – so, ticks and crosses – mostly will get understood, and I do continue to use them. However – certainly in the UK, though not universally even across Europe – a tick is a positive symbol; it means "this is good". A cross is a "blah, that's bad". However, there are lots of parts of the world where both mean the same: they're both a check mark. And in fact, weirdly, if I vote in the UK,
I put a cross, not against the candidate I don't want but against the candidate I do want. So even in the UK a cross can mean the same as a tick. You know – and colors: I said I often redundantly code my crosses with red and my ticks with green, because red in my culture is negative – I mean, it's not negative; I like red (inaudible) – but it has that sense of a red mark being a bad mark.
There are many cultures where red is the positive color. And actually it is a positive color in other ways in Western culture. But particularly that idea of the red cross that you get on your schoolwork; this is not the same everywhere. So, you really have to have quite a subtle understanding of these things. Now, the thing is, you probably won't. And so, this is where if you are taking something into a different culture, you almost certainly will need somebody who quite richly understands that culture.
So you design things so that they are possible for somebody to come in and do those adjustments because you probably may well not be in the position to be able to do that yourself.
Copyright holder: Tommi Vainikainen. Appearance time: 2:56–3:03. Copyright license and terms: Public domain, via Wikimedia Commons.
Copyright holder: Maik Meid. Appearance time: 2:56–3:03. Copyright license and terms: CC BY 2.0, via Wikimedia Commons. Link: https://commons.wikimedia.org/wiki/File:Norge_93.jpg
Copyright holder: Paju. Appearance time: 2:56–3:03. Copyright license and terms: CC BY-SA 3.0, via Wikimedia Commons. Link: https://commons.wikimedia.org/wiki/File:Kaivokselan_kaivokset_kyltti.jpg
Copyright holder: Tiia Monto. Appearance time: 2:56–3:03. Copyright license and terms: CC BY-SA 3.0, via Wikimedia Commons. Link: https://commons.wikimedia.org/wiki/File:Turku_-_harbour_sign.jpg
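The look-up table idea Professor Dix mentions can be sketched very simply: UI strings are referenced by key rather than hard-coded, so labels can change per locale. The keys, locales and translations below are invented for illustration, and, as the video warns, swapping strings is only part of the job because translated text can be much longer.

```typescript
// Minimal look-up-table sketch: UI strings referenced by key, not hard-coded.
// Locales, keys and translations are invented for illustration.
type Locale = "en" | "fi" | "sv";

const strings: Record<Locale, Record<string, string>> = {
  en: { "menu.checkout": "Checkout", "menu.help": "Help" },
  fi: { "menu.checkout": "Kassalle", "menu.help": "Ohjeet" },
  sv: { "menu.checkout": "Till kassan", "menu.help": "Hjälp" },
};

function t(locale: Locale, key: string): string {
  // Fall back to English, then to the raw key, if a translation is missing.
  return strings[locale][key] ?? strings.en[key] ?? key;
}

console.log(t("fi", "menu.checkout")); // "Kassalle"
// Layout caveat from the video: translated strings can be far longer or
// flow right-to-left, so components must tolerate more than swapped text.
```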
Overall, in the ever-evolving field of UX design, action research can serve as a powerful research tool for driving positive change and promoting continuous learning. Because it actively involves users in the research process and focuses on real-world problem-solving, it allows designers to create more user-centered designs: digital solutions and services that are more likely to resonate with their target users and deliver exceptional user experiences.
Despite its challenges, the benefits of action research far outweigh the risks. Action research is therefore a valuable approach for UX designers who are keen on creating a wide range of impactful and sustainable design solutions. The biggest lesson with action research is to ensure that user needs and preferences are at the center of the research process.
Take our User Research: Methods and Best Practices course.
Take our Master Class Radical Participatory Design: Insights From NASA’s Service Design Lead with Victor Udoewa, Service Design Lead, NASA SBIR/STTR Program.
Read more in-depth information in 3 things design thinking can learn from action research by Amin Mojtahedi, PhD.
Find additional insights in What Technical Communicators and UX Designers Can Learn From Participatory Action Research by Guiseppe.
Discover more insights and tips in Action Research: Steps, Benefits, and Tips by Lauren Stewart.
Action research and design thinking are both methodologies to solve problems and implement changes, but they have different approaches and emphases. Here's how they differ:
Objectives
Action research aims to solve specific problems within a community or organization through a cycle of planning, action, observation and reflection. It focuses on iterative learning and solving real-world problems through direct intervention.
Design thinking focuses on addressing complex problems by understanding the user's needs, re-framing the problem in human-centric ways, creating many ideas in brainstorming sessions, and adopting a hands-on approach in prototyping and testing. It emphasizes innovation and the creation of solutions that are desirable, feasible and viable.
Process
Action research involves a cyclic process that includes:
- Identify a problem.
- Plan an action.
- Implement the action.
- Observe and evaluate the outcomes.
- Reflect on the findings and plan the next cycle.
Design thinking follows a non-linear, iterative process that typically includes five phases:
- Empathize: Understand the needs of those you're designing for.
- Define: Clearly articulate the problem you want to solve.
- Ideate: Brainstorm a range of creative solutions.
- Prototype: Build a representation of one or more of your ideas.
- Test: Return to your original user group and test your idea for feedback.
User Involvement
Action research actively involves participants in the research process. The participants are co-researchers and have a direct stake in the problem at hand.
Design thinking prioritizes empathy with users and stakeholders to ensure that the solutions are truly user-centered. While users are involved, especially in the empathy and testing phases, they may not be as deeply engaged in the entire process as they are in action research.
Outcome
Action research typically aims for practical outcomes that directly improve practices or address issues within the specific context studied. Its success is measurable by the extent of problem resolution or improvement.
Design thinking seeks to generate innovative solutions that may not only solve the identified problem but also provide a basis for new products, services or ways of thinking. The success is often measurable in terms of innovation, user satisfaction and feasibility of implementation.
In summary, while both action research and design thinking are valuable in addressing problems, action research is more about participatory problem-solving within specific contexts, and design thinking is about innovative solution-finding with a strong emphasis on user needs.
Take our Design Thinking: The Ultimate Guide course.
To define the research question in an action research project, start by identifying a specific problem or area of interest in your practice or work setting. Reflect on this issue deeply to understand its nuances and implications. Then, narrow your focus to a question that is both actionable and researchable. This question should aim to explore ways to improve, change or understand the problem better. Ensure the question is clear, concise and aligned with the goals of your project. It must invite inquiry and suggest a path towards finding practical solutions or gaining deeper insights.
For instance, if you notice a decline in user engagement with a product, your research question could be, "How can we modify the user interface of our product to enhance user engagement?" This question clearly targets an improvement, focuses on a specific aspect (the user interface) and implies actionable outcomes (modifications to enhance engagement).
Take our Master Class Radical Participatory Design: Insights From NASA’s Service Design Lead with Victor Udoewa, Service Design Lead, NASA SBIR/STTR Program.
And then if you ask, 'Well, when did participatory design emerge?', you'll hear answers like, 'Well, in the 1970s, I think, with collaborative design in Scandinavia.' Some people might say, 'Okay, 1960 with Karl Linn, the father of participatory architecture,' or Jane Jacobs, with a wonderful book that critiqued the centralized version of urban planning and planted the seeds for participatory urban planning. Some people might even say, 'Well, you could go into the 1940s with the participatory action research models from Europe,'
but notice: whether it's the forties, sixties or seventies, all three of those eras on the right are right on top of each other, because we're saying that for 300,000 years there has been no participatory design. Really? What if design, in its simplest form, is just some kind of time of taking in information,
some time of coming up with an idea or multiple ideas, and then some time where you implement those ideas or test them, and that when you do that in community, it's participatory design. Well, if that's the case, I'm willing to bet that there are many, many more examples of participatory design before the 1940s. We could go to ancient Mesopotamia, where civilization was booming and burgeoning, and they were dealing with the problem of how do we grow enough food for all these people?
And we keep increasing the numbers. And when it rains too much, the crops are ruined. And when it doesn't rain enough, the crops are ruined. So they built these large storage basins and then they dug these things called canals that connected these large storage basins of water to each of the farms. And they built up the banks and sides of the river. But even if you go back to 3000 years ago with the actual first codification of the Ayurvedic or the Siddha traditional systems of medicine, they were only encoding folk medicine which had been practiced for thousands of years
before that. So when we come back and we look at this timeline that we had, we realize, well, wait a minute, participatory design has been happening the entire time. There is no separate start of participatory design, different from the start of communities. In fact, when we look at the animals who also practice design or even when we look at the hidden life of trees, where trees below ground are locking roots and sending signals through mycelia warning of impending danger using this particular design in order to grow greater resilience.
This even goes beyond Homo sapiens. We could even go before Homo sapiens, 2.6 million years ago: the first sharpened stones. If you look on the left, people would take these rocks and just hit them to create these sharpened stones. That eventually evolved into stone hand axes about 1.6 million years ago, and then what we call knapping tools, about 400,000 to 200,000 years ago. And eventually, on the right, what you see there are cutting blades that emerged around 80,000 to 40,000 years ago,
but it didn't stop there. They then went on to sharp micro-blades around 11,000 to 17,000 years ago. And then eventually, 12,000 years ago, we have the first axes and chisels. All of this, which you might call experimentation, is a type of research: they're trying things to see if they can make a tool better at doing what they needed it to do. And when we look at the history of participatory research, you realize: wait a minute, there is no separate beginning of participatory research that is separate from the beginning of communities;
communities have always been doing this, even before the emergence of Homo sapiens. We can be encouraged by the words of research justice, which says there are different types of knowledge. And I think one of the problems, and one of the reasons we think, 'Oh, participatory research, participatory design is new', is that we don't understand all the different types of knowledge that exist besides mainstream institutional knowledge, or third-person knowing: we have lived experiential knowledge, we have cultural knowledge, spiritual knowledge – and research justice says all of these knowledges are equal.
And when we begin to understand the plethora, the diversity, of knowledge, it actually begins to transform our definition of research from investigation to a pluriverse of definitions. And the purpose of research – to establish a fact or to reach a conclusion – then becomes transformed into a *pluriverse* of purposes.
Designers use several tools and methods in action research to explore problems and implement solutions. Surveys allow them to gather feedback from a broad audience quickly. Interviews offer deep insights through personal conversations, focusing on users' experiences and needs. Observations help designers understand how people interact with products or services in real environments. Prototyping enables the testing of ideas and concepts through tangible models, and allows for immediate feedback and iteration. Finally, case studies provide detailed analysis of specific instances and offer valuable lessons and insights.
These tools and methods empower designers to collect data, analyze findings and make informed decisions. When designers employ a combination of these approaches, they ensure a comprehensive understanding of the issues at hand and develop effective solutions.
CEO of Experience Dynamics, Frank Spillers explains the need to be clear about the problem that designers should address:
When developing a product or service, it is *essential* to know what problem we are solving for our users. But as designers, we all too easily shift far away from their perspective. Simply put, we forget that *we are not our users*. User research is how we understand what our users *want*, and it helps us design products and services that are *relevant* to people. User research can help you inspire your design,
evaluate your solutions and measure your impact by placing people at the center of your design process. And this is why user research should be a *pillar* of any design strategy. This course will teach you *why* you should conduct user research and *how* it can fit into different work processes. You'll learn to understand your target audience's needs and involve your stakeholders.
We'll look at the most common research techniques, such as semi-structured interviews and contextual inquiry. And we'll learn how to conduct observational studies to *really understand what your target users need*. This course will be helpful for you whether you're just starting out in UX or looking to advance your UX career with additional research techniques. By the end of the course, you'll have an industry-recognized certificate – trusted by leading companies worldwide. More importantly, you'll master *in-demand research skills* that you can start applying to your projects straight away
and confidently present your research to clients and employers alike. Are you ready? Let's get started!
To engage stakeholders in an action research project, first identify all individuals or groups with an interest in the project's outcome. These might include users, team members, clients or community representatives. Clearly communicate the goals, benefits and expected outcomes of the project to them. Use presentations, reports, or informal meetings to share your vision and how their involvement adds value.
Involve stakeholders early and often by soliciting their feedback through surveys, interviews or workshops. This inclusion not only provides valuable insights but also fosters a sense of ownership and commitment to the project. Establish regular update meetings or newsletters to keep stakeholders informed about progress, challenges and successes. Finally, ensure there are clear channels for stakeholders to share their input and concerns throughout the project.
This approach creates a collaborative environment where stakeholders feel valued and engaged, leading to more meaningful and impactful outcomes.
Author, Speaker and Leadership Coach, Todd Zaki Warfel explains how to present to clients and stakeholders in this video:
This new narrative starts with *identifying your audience and intent*. The way you pitch an idea to a client, peer or executive requires an *adjustment to your language and approach*. Client – they're more of an occasional traveler. They don't know the system; they don't know the ins and outs. They're less likely to share your language. I mean, you're probably speaking design. They speak business and outcomes and results. So, you may need to establish a basic level of understanding.
Clients and executives are also less patient and don't want to waste 20 minutes going through every single detail just to get to the answer. Karen's what you would call an occasional traveler. She expected high-fidelity visual comps and just had a bottom-liner approach. Karen's one of these busy executives. She doesn't have time for – nor does she need to or want to hear – all the details. She just needs to know *why she should care*, *why it matters to her line of business*,
and then she can decide to support your proposal ...or not. Now, the last time the team had presented to Karen, they spent the *entire 60-minute meeting* walking her through *their process* and *justifying their decisions*. If you know anything about executives – a 60-minute meeting; *not* a good idea. Here's the rub. They never addressed the value to the business, and the team didn't come in with a clear ask.
In the team's own words, it was the most grueling 60 minutes of their entire careers at this company. So, what do we do? Well, this time around, we started with audience and intent. We changed the story and wrote a new narrative and developed a new plan. We started with the *intended outcome* with Karen, shared a few *stories* and then highlighted the *value* that our approach would bring to her business – and quickly gained approval from her. And I'll never forget the moment; it was like eight minutes into my presentation.
I actually looked down at my watch to check, when Karen interrupted me mid-sentence and said, 'Okay, Todd – I get it. You've done your homework; you've clearly shown how the solution solves the problem and how it's better than my original idea. What do we need to do to move forward? What do you need from *me* to deliver this?'
To measure the impact of an action research project, start by defining clear, measurable objectives at the beginning. These objectives should align with the goals of your project and provide a baseline against which you can measure progress. Use quantitative metrics such as increased user engagement, sales growth or improved performance scores for a tangible assessment of impact. Incorporate qualitative data as well, such as user feedback and case studies, to understand the subjective experiences and insights gained through the project.
Conduct surveys or interviews before and after the project to compare results and identify changes. Analyze this data to assess how well the project met its objectives and what effect it had on the target issue or audience. Document lessons learned and unexpected outcomes to provide a comprehensive view of the project's impact. This approach ensures a holistic evaluation, combining numerical data and personal insights to gauge the success and influence of your action research project effectively.
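As a hypothetical sketch of that before/after comparison, the snippet below reports the change in a few metrics collected in both rounds; the metric names and numbers are invented.

```typescript
// Invented metrics and numbers: report the change between the "before"
// and "after" rounds of measurement.
interface Measurement {
  label: string;
  before: number;
  after: number;
}

function percentChange(m: Measurement): number {
  return ((m.after - m.before) / m.before) * 100;
}

const impact: Measurement[] = [
  { label: "Task success rate (%)", before: 62, after: 81 },
  { label: "Average satisfaction score", before: 64, after: 74 },
  { label: "Weekly active users", before: 5200, after: 5900 },
];

impact.forEach((m) =>
  console.log(`${m.label}: ${m.before} -> ${m.after} (${percentChange(m).toFixed(1)}% change)`)
);
```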
Take our Master Class Design KPIs: From Insights to Impact with Vitaly Friedman, Senior UX consultant, European Parliament, and Creative Lead, Smashing Magazine.
When unexpected results or obstacles emerge during action research, first, take a step back and assess the situation. Identify the nature of the unexpected outcome or obstacle and analyze its potential impact on your project. This step is crucial for understanding the issue at hand.
Next, communicate with your team and stakeholders about the situation. Open communication ensures everyone understands the issue and can contribute to finding a solution.
Then, consider adjusting your research plan or design strategy to accommodate the new findings or to overcome the obstacles. This might involve revisiting your research questions, methods or even the design problem you are addressing.
Always document these changes and the reasons behind them. This documentation will be valuable for understanding the project's evolution and for future reference.
Finally, view these challenges as learning opportunities. Unexpected results can lead to new insights and innovations that strengthen your project in the long run.
By remaining flexible, communicating effectively, and being willing to adjust your approach, you can navigate the uncertainties of action research and continue making progress towards your goals.
Professor Alan Dix explains externalization, a creative process that can help designers to adapt to unexpected roadblocks and find a good way forward:
What I want to focus on is four different reasons why externalization can be powerful. Some of you might remember – this was quite a few years ago now, so some of you might be too young to remember, but during the war in Afghanistan Rumsfeld talked about three kinds of knowns. He talked about the *known knowns*:
the things you know you know; the *known unknowns*: the things you know you don't know, and the *unknown unknowns*: the things you don't know – you don't even know that you don't know them. Now, Rumsfeld's answer to the last of these was just to blow everyone to pieces. We're not going to suggest this as a general user experience and user interface design strategy. However, what I want to focus on is – he mentioned three things here, and there's a missing one. So, we have the known knowns,
the known unknowns, the unknown unknowns, what about *unknown knowns*? Now, I'm going to say these are the most interesting things. These are things you know – you or perhaps people you're working with know – *tacit things*, but they *aren't aware that you know*. So, this is true of your own understanding of a problem area. Or it might be true as you talk to your users or your stakeholders or your clients,
to actually uncover the things that they know and yet don't know that they know. And externalization is about that process that takes the unknown knowns *and turns them into known knowns*. There's all sorts of different kinds of externalization used in design, ways we put our knowledge into the world. There's – I mean, classic is *drawing and sketches*. You can have *models* and actually models in multiple senses
– models in the sense of physical models of buildings and things like that, but also mathematical models and models of that kind; *diagrams*, formal diagrams; *mathematical formulae* – again, I mentioned before my first love was mathematics, so I like my formulae; *spoken words* – so, things we say; criteria words, dimension words, abstract ideas – sort of adjectives often, even, or adverbs. People will use those words to talk about their area, but not
– if you ask them what are the critical features, what are the critical criteria, they wouldn't necessarily be able to name them. So, often the words we speak externalize things; so, listening to other people. Also learning to listen to yourself – often easier with *written words*, whether again it's your own or somebody else's, because then you can analyze this. It's a classic thing to do – traditionally in requirements engineering, but also when working to understand a new situation – to look at the written documentation, the manuals,
the procedures – perhaps things that have been written about an area. And people often do things like noun-verb analysis, of pulling out, trying to pull out critical things. Do that to your own writing about something in order to try and understand what you know about it. *Computer programs*, software embodies tacit knowledge in physical things or at least digital things that you can analyze. *Acting out* – talking about the sort of internet-enabled Swiss Army Knife,
and it was only as I acted that I made it external, did something physically, that I saw there was a problem with covering where you might decide to put your little screen. So, why are you doing this? I've sort of already given this a little bit away with the Rumsfeld one, but I'm going to give you four different reasons for why externalization can be powerful. So, we're going to talk about the *informational use*, which is about existing ideas; *formational use* – the way that externalization creates new ideas;
*transformational*, which is actually using the external thing to do stuff; and in the end *transcendental*, but not necessarily in the way that you might hear the word 'transcendental' used. So, first of all, *informational*. Now, the informational way of using ideas is, shall we say, the *classic* form of externalization. I've got books behind me, so I'm going to pull a book off the shelf. So, this is *Human-Computer Interaction* by Dix, Finlay, Abowd and Beale.
So, the informational model is probably the way you'd normally think about writing. You know – we knew things about human-computer interaction. We wrote them into the book; people buy the book; people read the book, and then into their heads... human-computer interaction. So, an idea in this person's head – the person on the left here is our
font of knowledge on the subject; writes it down in some way, and then the person on the right reads or sees that, or it may be a video lecture like this – it doesn't have to be necessarily written. And the information is passed on. You notice I've drawn them slightly different in each state because typically when things pass on, the idea that gets written down may be not quite the abstract thought in the head, and what the person understands when they read what's written down
may be different again. But the idea is that you're passing knowledge from mind to mind. There are the things I know and then I want to pass on to you. That's a classic model of information transfer model of stuff. So, that happens – you know – so, that is a thing that occurs. And in some sense that's what I'm doing here. I've thought about these issues; I've drawn slides. So, probably things are happening dynamically, but a lot of it I knew already;
I've transferred into slides, into ideas that I'm going to pass on and talking to them. So, that's the informational model, a classic model of what you do with external representations. However, if as a designer you've sketched things, if you've ever done creative writing and you've had a character, you know it's not all that. So... you sort of want the character to do something and you can't get the character to do something, because you know the character wouldn't do it that way.
Suddenly, as things flow out, they become more concrete, more clear, more explicit. So, as you draw your sketch of that user interface, you suddenly start – things become clearer that before were very sort of fuzzy in your head. You sort of half knew them, and the act of putting them out there makes them explicit. So, when you – even in something like this video, I've drawn slides for it, and in drawing those slides I have made my thoughts much more explicit.
You've probably been in that situation where you write something down and then you suddenly think, 'I didn't know that before.' Perhaps you notice afterwards; you've written it down; you read it back and you think, 'Did I write that? I didn't know that.' A classic thing – I said, talk to any writer, but you'll have almost certainly experienced it yourself during your design in sketching or whatever if you're more of a
graphic designer, if you're an interaction designer, the way you've done it. So, here what's happening is you've got this fuzzy thought in your head; you externalize it in some way – you write it down, and during that process of externalization, you're forced to both think about the idea and make it more explicit, more concrete. So, actually what's happening is the idea in your head is being transformed from something a bit fuzzy to being something that you've actually got a bit more of a handle on.
So, that's *formational understanding* coming through externalization. So, let's go one step further – *transformational*. So, having externalized... and during that process almost certainly made some of your thoughts much more concrete and explicit, can you use those materials to do stuff? One that I've been using recently which is in this – that is using again the *physical representation*
as a way of *thinking* about stuff is with producing slides for talks and for videos. So, what I've been doing is I'll print out my initial ideas of slides. So, what I then do is – I normally do this with scissors and I forgot to bring my scissors with me into the... perhaps I'll tear it and hopefully tear it without... and on the line; I'll tear it again. So, I end up with each slide on a piece of paper. And in true Blue Peter fashion...
here are some I prepared earlier. And then what I find is that when I've done that, it's much easier – I certainly find it easier to spread them all out on the table top and then sort them in and put them into different categories. And then, once I'm sort of satisfied with them, I might – and I was trying to see if any of these
have got notes on – I might write little notes on them. I'm just thumbing through to see if I can find one with a note on – like there, I've scribbled some notes and comments into that one. Sometimes, I'll (inaudible) – I might cut out a few blank ones, and if there are gaps, I might write some notes for a new slide to go in there, put them into there, or once I've organized them, put them into little packets like this, and suddenly now I have my talk all organized. I know what I've got; I know where the gaps are. I don't feel worried about slides
that I think I might use, because if they're sitting at the side of the table, I've not lost them, even if I decide not to put them in. Very powerful – well, I find it powerful: *reasoning using the physical representation*, chopping it up. Now, in principle, I could do this all in PowerPoint. I could pull slides around, sort them around. But there's something about having them there that certainly makes it feel easier to do and for a lot of purposes. So, you're thinking actually physically using the materials
in order to do stuff. So, that's *transformational use of externalization*. Oh yes, that's right – I almost forgot; you might have heard of – if not, I'll explain it to you – the ideas of *external distributed cognition*. Some people talk about *embodied thinking* or *embodied cognition*: the idea that when we think, our thoughts are part of the world *outside*; they're not just in our heads. They're part of our interactions with the world. Now, that's something important as a designer, to think about in relation to your users – you know – that they're not necessarily
a pure cognitive creature. They're a creature with hands and arms, and there's a world they're living in. And we often *offload* – people talk about 'offloading cognition'. Sometimes that's about *memory*: you know – the fact that you don't remember all your telephone numbers but they're in your telephone. Sometimes it's about offloading *thought processes*. So, if I want to do a complex sum, I don't try and do it all in my head. It's complex. I write it down and scribble and draw,
sometimes draw little pictures and stuff like that. And so, effectively our cognition is not just in our head, but it's part of the way we relate to our environment. So, that's something to think about as a designer because your users are doing it, but also as *part of your design*. You are thinking externally embodied, and you can deliberately create techniques to help you do that.
So, the last kind of externalization we'll talk about is *transcendental externalization*. So, the idea that the – well, actually, I say it's 'kind'; it's not 'kind', really – they're all different things that happen as we externalize. So, this is where our *internal* thoughts and ideas become the *object* of thought. By naming it, by being able to talk about it, suddenly now I can talk about that as an issue. If it's in graphic design, at the point at which you label it and you say, 'Oh, actually, the reason why
I find that's – I dislike that image, I don't find that a powerful one; the reason I like that one instead is because of *balance*.' Or... back to the HCI book. Now, when we were first shown the example cover for this book, it was identical, except that the hand was the other way around. So, I was going to get the right one here. So, the hand is doing... *that* is what you're seeing the hand do.
But actually it did it that way round and at a slight angle. And I remember looking at it and thinking... there was something wrong with it. It didn't *feel* right. But I realized one of the things I didn't like was the fact that because the hand did that, it was sort of pointing
bottom right towards top left. In Western tradition – I mean, I'm not sure how this looks to somebody whose writing goes the other way around, but certainly in left-to-right writing, things that go up like that are seen as *dynamic* – going up, in fact. Things that do *that*... And the arrow was sort of – I mean, it wasn't so much pointing down, but it was doing the wrong (inaud.). Having *articulated* that, I was able to then take that back to design.
But having been able to *articulate*, to *say*, to give a *name* to what was going on, it was about dynamism, it was about the left-to-right movement, then we were able to discuss it. Now, *I* wasn't able to produce a better cover, but the person – actually, or probably could have done this (inaud.); all the person did, I think, was do a mirror flip of the image. But it was the ability to articulate it. So, once that vague concept in your head that just didn't feel right
was *articulated*, then it was something you could communicate to others, you could think about yourself and think about alternative ways of doing it. Now, those of you if you've... worked in a sort of more ... a design area that isn't necessarily user experience design, you might have come across the writings of Schon, *The Reflective Practitioner*. So, Don Schon, I mean it's a classic in the design literature; he looked at a number of designers in different areas. There was sort of urban planning design,
architectural design. And in particular, some of this was about the relation with each other, but some of it was the relation with students, and the way in which really top-end designers were *reflective*; they didn't just say, 'Ah, that's a good idea,' but they said, 'This is good *because*'; 'There's a problem here *because*' and they were looking in at their own thinking and their own way they're working in order to
– partly, I said, in a teaching context, to be able to pass that on to a student, or possibly to be able to talk to others. But also, by being – and go for this word – 'Reflective Practitioners', looking in at their own practice, they were also able to lift their own practice to a different level. They were able to think about what they were doing, about what was good about it, what was bad about it
to improve it, to be able to go to new situations and perhaps do a little bit more handle turning in the sense of saying, 'Have I thought about these issues?' But by being reflective, taking what had been a tacit, albeit successful practice and lifting it up to a higher plane.
Action research can significantly contribute to inclusive and accessible design by directly involving users with diverse needs in the research and design process. When designers engage individuals from various backgrounds, abilities and experiences, they can gain a deeper understanding of the wide range of user requirements and preferences. This approach ensures that the products or services they develop cater to a broader audience, including those with disabilities.
Furthermore, action research allows for iterative testing and feedback loops with users. This quality enables designers to identify and address accessibility challenges early in the design process. The continuous engagement helps in refining designs to be more user-friendly and inclusive.
Additionally, action research fosters a culture of empathy and understanding within design teams, as it emphasizes the importance of seeing the world from the users' perspectives. This empathetic approach leads to more thoughtful and inclusive design decisions, ultimately resulting in products and services that are accessible to everyone.
By prioritizing inclusivity and accessibility through action research, designers can create more equitable and accessible solutions that enhance the user experience for all.
Take our Master Class How to Design for Neurodiversity: Inclusive Content and UX with Katrin Suetterlin, UX Content Strategist, Architect and Consultant.
If we ask ourselves: 'Is our inclusion enough? Do we have methods that apply for every kind of audience?' Hi, my name is Katrin Suetterlin. I have been a UX practitioner for roughly a decade, a writer for more than two decades. And I want to bring you this Master Class where I think you will highly, highly benefit from the insights that I can share with you, being a neurodivergent designer myself. If you are looking into UX design, UI design, interaction design,
that should be neuro-inclusive for all kinds of scenarios and user groups. Most of the time we see that it's not enough. At the end of this masterclass, you will be walking away with hands-on practices that you can incorporate in your daily work, but also personal and subjective experience as a neurodivergent person navigating a world with a late diagnosis, what it means to be masking in a world where neurotypicals are the majority.
Many people might be neurodivergent and not even know it. They might be navigating some struggles, and this masterclass can teach you how to be more inclusive in all of what you're doing. I can't wait to dive into this topic with you and show you a way where you can be inclusive on the go. I'll see you there!
To ensure the reliability and validity of data in action research, follow these steps:
Define clear research questions: Start with specific, clear research questions to guide your data collection. This clarity helps in gathering relevant and focused data.
Use multiple data sources: Collect data from various sources to cross-verify information. This triangulation strengthens the reliability of your findings.
Apply consistent methods: Use consistent data collection methods throughout your research. If conducting surveys or interviews, keep questions consistent across participants to ensure comparability.
Engage in peer review: Have peers or experts review your research design and data analysis. Feedback can help identify biases or errors, and enhance the validity of your findings.
Document the process: Keep detailed records of your research process, including how you collected and analyzed data. Documentation allows others to understand and validate your research methodology.
Test and refine instruments: If you’re using surveys or assessment tools, test them for reliability and validity before using them extensively. Pilot testing helps refine these instruments and ensures they accurately measure what they intend to (see the sketch after this list).
When you adhere to these principles, you can enhance the reliability and validity of your action research data, leading to more trustworthy and impactful outcomes.
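To make the last step more concrete, here is a minimal sketch, assuming Likert-style responses from a small pilot group, that computes Cronbach's alpha as a quick internal-consistency check before a survey is rolled out more widely. The pilot data are invented for illustration.

```python
# A minimal pilot-test sketch: Cronbach's alpha for a short survey.
# The response matrix below is invented; rows are participants, columns are items.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Internal-consistency estimate for a set of survey items."""
    k = responses.shape[1]                               # number of items
    item_variances = responses.var(axis=0, ddof=1)       # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 6 participants answering 4 items on a 1–5 scale.
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])

alpha = cronbach_alpha(pilot)
# Values around 0.7 or higher are commonly treated as acceptable consistency.
print(f"Cronbach's alpha: {alpha:.2f}")
```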
Take our Data-Driven Design: Quantitative Research for UX course.
The big question – *why design with data?* There are a number of benefits, though, to quantitative methods. We can get a better understanding of our design issues because it's a different way of looking at the issues. So, different perspectives often lead to better understanding. If you're working in project teams or within organizations who really don't have
a good understanding of *qualitative methods*, being able to supplement those with quantitative research is very important. You might be in a big organization that's very technology-focused. You might just be in a little team that's technology-focused, or you might just be working with a developer who just doesn't get qualitative research. So, in all of these cases, big, small and in between, having different tools in your bag is going to be really, really important. We can get greater confidence in our design decisions.
Overall, that means that we are making much more *persuasive justifications* for design choices.
To analyze data collected during an action research project, follow these steps:
Organize the data: Begin by organizing your data, categorizing information based on types, sources or research questions. This organization makes the data manageable and prepares you for in-depth analysis.
Identify patterns and themes: Look for patterns, trends and themes within your data. This might involve coding qualitative data or using statistical tools for quantitative data to uncover recurring elements or significant findings (see the sketch after this list).
Compare findings to objectives: Match your findings against the research objectives. Assess how the data answers your research questions or addresses the issues you set out to explore.
Use software tools: Consider using data analysis software, especially for complex or large data sets. Tools like NVivo for qualitative data or SPSS for quantitative data can simplify analysis and help in identifying insights.
Draw conclusions: Based on your analysis, draw conclusions about what the data reveals. Look for insights that answer your research questions or offer solutions to the problem you are investigating.
Reflect and act: Reflect on the implications of your findings. Consider how they impact your understanding of the research problem and what actions they suggest for improvement or further investigation.
This approach to data analysis ensures a thorough understanding of the collected data, allowing you to draw meaningful conclusions and make informed decisions based on your action research project.
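As a small illustration of the qualitative side of this process, the sketch below assumes your team has already tagged interview excerpts with codes and simply tallies how often each code appears and how many participants raised it. The codes and excerpts are invented; dedicated tools such as NVivo support this kind of analysis in far more depth.

```python
# Hypothetical sketch: tallying codes applied to interview excerpts.
# All codes, participants and excerpts below are invented for illustration.
from collections import Counter

coded_excerpts = [
    {"participant": "P1", "code": "navigation-confusion",
     "excerpt": "I couldn't find my way back to the saved items."},
    {"participant": "P2", "code": "navigation-confusion",
     "excerpt": "The back button took me somewhere unexpected."},
    {"participant": "P2", "code": "trust",
     "excerpt": "I wasn't sure my payment details were safe."},
    {"participant": "P3", "code": "navigation-confusion",
     "excerpt": "Too many menus; I got lost."},
    {"participant": "P3", "code": "positive-feedback",
     "excerpt": "Checkout itself was quick once I got there."},
]

# How often each code appears, and how many distinct participants raised it.
code_counts = Counter(item["code"] for item in coded_excerpts)
participants_per_code = {
    code: len({item["participant"] for item in coded_excerpts if item["code"] == code})
    for code in code_counts
}

for code, count in code_counts.most_common():
    print(f"{code}: {count} excerpts from {participants_per_code[code]} participant(s)")
```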
Professor Ann Blandford, Professor of Human-Computer Interaction at UCL, explains valuable aspects of data collection in this video:
Ditte Hvas Mortensen: In relation to data gathering, there are obviously different ways of doing it. You can record video or sound, or you can take notes. Can you say something about the advantages or disadvantages of doing it in different ways? Ann Blandford: Yes. So, I think it depends on how the data-gathering method is going to affect what
data you can gather. So, sometimes people are not comfortable being recorded. And they don't *want* to be voice-recorded. And you'll get more out of the conversation if you just take notes. Of course, you don't get quite such high-quality data if you just take notes. On the other hand, it's easier to analyze because you haven't got so much data.
And you can't do as much in-depth analysis if you've only got notes, because you can only analyze what you recognized at the time as being important, and you can't pick up anything more from it later. So, I certainly like to audio-record where possible for the kinds of studies that we do. And different people may have different needs, and therefore that might be more or less important to them.
We also use quite a lot of still photos, particularly in healthcare. We have to have quite a lot of control over what actually features in an image so that it doesn't violate people's privacy. So, using still photos allows us to take photos of technology and make sure that it doesn't include any inappropriate information. Whereas video – well, firstly, video means that you've got a *lot* more data to analyze.
And it can be a lot harder to analyze it. And it depends on the question that you're asking in the study, as to whether or not that effort is merited. And for a lot of us, it's not merited, but also it's harder to control what data is recorded. So, it's more likely to compromise people's privacy in ways that we haven't got ethical clearance for. So, we don't use a lot of video ourselves.
But also, particularly if one is trying to understand the work situation, it's often also valuable to take *real notes*, whether those are diagrams of how things are laid out or other notes about, you know, important features of the context that wouldn't be recorded in an audio stream. And also, video can be quite *off-putting* for people.
You know, it's just that much more intrusive. And people may become much more self-conscious with a video than with audio only. So, it can affect the quality of the data that you get for that reason. So, I think when you're choosing your data-gathering *tools*, you need to think about what impact they will have in the environment.
It may or may not be *practical* to set up a video camera, quite apart from anything else. Audio tends not to be so intrusive. As I say, there are times when just written notes will actually serve the purpose better. But it also depends on what you're going to *do* with the data. You know – how much data do you need? What kinds of analysis are you going to do of that data? And hence, what *depth of data* do you actually need to have access to, anyway?
If you've got more data than you can deal with, then it can feel overwhelming, and that can actually be quite a deterrent to get on with analysis. And analysis can be really slowed down if, as a student or other researcher, you just feel so overwhelmed by what you've got that you don't know where to start! Actually, that's not a good place to be. So, having too much data can often be as difficult as not having enough.
But what matters most is that you've got an *appropriate* kind of data for the questions of the study.
Baskerville, R. L., & Wood-Harper, A. T. (1996). A critical perspective on action research as a method for information systems research. Journal of Information Technology, 11(3), 235-246.
This influential paper examines the philosophical underpinnings of action research and its application in information systems research, which is closely related to UX design. It highlights the strengths of action research in addressing complex, real-world problems, as well as the challenges in maintaining rigor and achieving generalizability. The paper helped establish action research as a valuable methodology in the information systems and UX design fields.
Di Mascio, T., & Tarantino, L. (2015). New Design Techniques for New Users: An Action Research-Based Approach. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (pp. 83-96). ACM.
This paper describes an action research project that aimed to develop a novel data gathering technique for understanding the context of use of a technology-enhanced learning system for children. The authors argue that traditional laboratory experiments struggle to maintain relevance to the real world, and that action research, with its focus on solving practical problems, is better suited to addressing the needs of new ICT products and their users. The paper provides insights into the action research process and reflects on its value in defining new methods for solving complex, real-world problems. The work is influential in demonstrating the applicability of action research in the field of user experience design, particularly for designing for new and underserved user groups.
Villari, B. (2014). Action research approach in design research. In Proceedings of the 5th STS Italia Conference A Matter of Design: Making Society through Science and Technology (pp. 306-316). STS Italia Publishing.
This paper explores the application of action research in the field of design research. The author argues that design is a complex practice that requires interdisciplinary skills and the ability to engage with diverse communities. Action research is presented as a research strategy that can effectively merge theory and practice, linking the reflective dimension to practical activities. The key features of action research highlighted in the paper are its context-dependent nature, the close relationship between researchers and the communities involved, and the iterative process of examining one's own practice and using research insights to inform future actions. The paper is influential in demonstrating the value of action research in addressing the challenges of design research, particularly in terms of bridging the gap between theory and practice and fostering collaborative, user-centered approaches to design.
Brandt, E. (2004). Action research in user-centred product development. AI & Society, 18(2), 113-133.
This paper reports on the use of action research to introduce new user-centered work practices in two commercial product development projects. The author argues that the growing complexity of products and the increasing importance of quality, usability, and customization demand new collaborative approaches that involve customers and users directly in the development process. The paper highlights the value of using action research to support these new ways of working, particularly in terms of creating and reifying design insights in representations that can foster collaboration and continuity throughout the project. The work is influential in demonstrating the applicability of action research in the context of user-centered product development, where the need to bridge theory and practice and engage diverse stakeholders is paramount. The paper provides valuable insights into the practical challenges and benefits of adopting action research in this domain.
Reason, P., & Bradbury, H. (Eds.). (2001). Handbook of action research: Participative inquiry and practice. SAGE Publications.
This comprehensive handbook is considered a seminal work in the field of action research. It provides a thorough overview of the history, philosophical foundations, and diverse approaches to action research. The book features contributions from leading scholars and practitioners, covering topics such as participatory inquiry, critical action research, and the role of action research in organizational change and community development. It has been highly influential in establishing action research as a rigorous and impactful research methodology across various disciplines.
Stringer, E. T. (2013). Action Research (4th ed.). SAGE Publications.
This book by Ernest T. Stringer is a widely recognized and accessible guide to conducting action research. It provides clear, step-by-step instructions on the action research process, including gathering information, interpreting and explaining findings, and taking action to address practical problems. The book is particularly valuable for novice researchers and practitioners in fields such as education, social work, and community development, where action research is commonly applied. Its practical approach and real-life examples have made it a go-to resource for those seeking to engage in collaborative, solution-oriented research.
McNiff, J. (2017). Action Research: All You Need to Know (1st ed.). SAGE Publications.
This book by Jean McNiff provides a comprehensive guide to conducting action research projects. It covers the key steps of the action research process, including identifying a problem, developing an action plan, implementing changes, and reflecting on the outcomes. The book is influential in the field of action research as it offers practical advice and strategies for practitioners across various disciplines, such as education, healthcare, and organizational development. It emphasizes the importance of critical reflection, collaboration, and the integration of theory and practice, making it a valuable resource for those seeking to engage in rigorous, transformative research.
Take a deep dive into Action Research with our course User Research – Methods and Best Practices.
How do you plan to design a product or service that your users will love, if you don't know what they want in the first place? As a user experience designer, you shouldn't leave it to chance to design something outstanding; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as the largest part of user experience design.
In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required, and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, subsequently, outperforms your competitors’.
This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!
By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!
We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!
We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.
If you want this to change, link to us, or join us to help us democratize design knowledge!