Human-Computer Interaction (HCI)

Human-computer interaction (HCI) is a multidisciplinary field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers. While initially concerned with computers, HCI has since expanded to cover almost all forms of information technology design.
I'm guessing the reason you're watching this video right now is that you've got an interest in human-computer interaction. However, perhaps that doesn't mean you know exactly what it is. Maybe you just guessed. Or you just think it sounds like a good idea. So what is human-computer interaction? Well, it's got two sides to it. On one side, there's an academic discipline, which is about studying the way people interact with technology, and in particular computer technology.
Nowadays, computers are in virtually everything. It's quite hard sometimes to tell the difference. But then there's another side to it, which is the design discipline and I think probably quite a lot of you watching this video will be from a design side – you're interested in user experience design and interaction design. One side of Human-Computer Interaction is the academic discipline and the other is the applied design discipline, which is about how you create interventions with
technology that make a difference to people. So, one side is studying computer technology and the impact it has on people – the way in which it works. The other is more about asking: how do we practically turn that academic study, that interesting information we have about the way people work, into action? The two of course interact, so on one hand the professional experience
informs the academic discipline. You'll probably notice that I use lots of examples – some from my own experience, some from stories I know from elsewhere – at all sorts of times, in order to inform my general understanding. And it works the other way around: the academic understanding, that more generic understanding, feeds back into the design discipline too. So, if I understand how people interact with individual
computers, how people interact together in a group when they're using technology, how environments change the way people are if those environments have technology in them – then I'll be in a better position to design things effectively for them. So, these two are intermingled. So, if you want to understand Human-Computer Interaction, if you want to be part of this, what kind of things would I like you to know about? What would I hope you would learn from studying HCI?
Well, first of all, there are bare facts: facts about the nature of computers, facts about the nature of human psychology and physiology, social interactions – a whole range of things. There are facts that you can read in a book – I've got my own textbook that I produced on this, and there are many, many others – and on the web there are vast amounts of material. So, in some sense the
facts are easy to get and then you will get some of those. Now hopefully when I talk about HCI you get some of those facts. However, perhaps there are other things which are more important. The second thing I hope you learn from HCI is about analysis. It's about looking at a situation and trying to make sense of it: what's going on there. Because if you understand a situation then you can apply the facts to the situation.
So, it's about analysis, about picking some things apart. Picking the problems that you're having or picking the opportunities apart. And then of course, once you've done that analysis, together with the facts and knowledge from that, you can then do a design job. So you can bring these things together in order to synthesize them, in order to create something that will be a solution to somebody's problem. But perhaps more important almost than all that: obviously you need
the knowledge side and you need the skills to do it. But perhaps the most important thing within HCI, particularly if you think of it in terms of moving on into design, is an attitude of mind: an attitude that focuses on people, that puts real users and real situations center stage; that seeks to understand people, however different they are from you, and to do things which are good for them and make sense in their lives. So, that's sort of what
our purpose is in a way, particularly as we move from just studying people to actually asking, "How do we design something for them?" So, what kinds of things might you want to know, and what sort of things will I often talk about in this context? One set of things is the basics of design: this goes from the processes that people use when they're doing design – which you may be using yourself – to methods of evaluation to understand the nature
of human experience. However, undergirding that is fundamental underlying knowledge and theories. Some of those are particularly about the human – about human perception, cognition, emotion. That also leaks into understanding the way computers work, to the extent that it's important from the point of view of their interactions with people. And then, from a practical point of view,
you also need to think about and be aware of the issues that arise when systems are implemented. That's partly about the way they're implemented in the computer, but also about the way they get deployed into the world. This will vary from place to place, but there are general principles for how you make sure that this wonderful system you designed actually gets used by people in the real world and makes a real difference to them.
In the video above, Professor Alan Dix explains the roots of HCI and which areas are particularly important to it.
HCI surfaced in the 1980s with the advent of personal computing, just as machines such as the Apple Macintosh, IBM PC 5150 and Commodore 64 started turning up in homes and offices in society-changing numbers. For the first time, sophisticated electronic systems were available to general consumers for uses such as word processors, games units and accounting aids. Consequently, as computers were no longer room-sized, expensive tools exclusively built for experts in specialized environments, the need to create human-computer interaction that was also easy and efficient for less experienced users became increasingly vital. From its origins, HCI would expand to incorporate multiple disciplines, such as computer science, cognitive science and human-factors engineering.
HCI soon became the subject of intense academic investigation. Those who studied and worked in HCI saw it as a crucial instrument to popularize the idea that the interaction between a computer and the user should resemble a human-to-human, open-ended dialogue. Initially, HCI researchers focused on improving the usability of desktop computers (i.e., practitioners concentrated on how easy computers are to learn and use). However, with the rise of technologies such as the Internet and the smartphone, computer use would increasingly move away from the desktop to embrace the mobile world. Also, HCI has steadily encompassed more fields:
“…it no longer makes sense to regard HCI as a specialty of computer science; HCI has grown to be broader, larger and much more diverse than computer science itself. HCI expanded from its initial focus on individual and generic user behavior to include social and organizational computing, accessibility for the elderly, the cognitively and physically impaired, and for all people, and for the widest possible spectrum of human experiences and activities. It expanded from desktop office applications to include games, learning and education, commerce, health and medical applications, emergency planning and response, and systems to support collaboration and community. It expanded from early graphical user interfaces to include myriad interaction techniques and devices, multi-modal interactions, tool support for model-based user interface specification, and a host of emerging ubiquitous, handheld and context-aware interactions.”
— John M. Carroll, author and a founder of the field of human-computer interaction.
HCI is a broad field which overlaps with areas such as user-centered design (UCD), user interface (UI) design and user experience (UX) design. In many ways, HCI was the forerunner to UX design.
Despite that, some differences remain between HCI and UX design. Practitioners of HCI tend to be more academically focused. They're involved in scientific research and developing empirical understandings of users. Conversely, UX designers are almost invariably industry-focused and involved in building products or services—e.g., smartphone apps and websites. Regardless of this divide, the practical considerations for products that we as UX professionals concern ourselves with have direct links to the findings of HCI specialists about users’ mindsets. With the broader span of topics that HCI covers, UX designers have a wealth of resources to draw from, although much research remains suited to academic audiences. Those of us who are designers also lack the luxury of time which HCI specialists typically enjoy. So, we must stretch beyond our industry-dictated constraints to access these more academic findings. When you do that well, you can leverage key insights into achieving the best designs for your users. By “collaborating” in this way with the HCI world, designers can drive impactful changes in the market and society.
The Interaction Design Foundation’s encyclopedia chapter on Human-Computer Interaction, by John M. Carroll, a founder of HCI, is an ideal source for gaining a solid understanding of HCI as a field of study.
Keep up to date with the latest developments in HCI at the international society for HCI, SIGCHI.
Learn the tools of HCI with our courses on HCI, taught by Professor Alan Dix, author of one of the most well-known textbooks on HCI:
Human-Computer Interaction: The Foundations of UX Design
Cognition in human-computer interaction refers to the mental processes involved when humans interact with computers. These encompass perceiving inputs from the computer, processing them in the brain, and producing outputs such as physical actions, speech and facial expressions.
If we want to design effectively for humans working with computers, clearly we need to understand humans. So let's talk a little bit about the nature of humans – the way we operate, from our senses through to our actions. One way to think about this is almost like a computational analogy. If you think about the computer, there are inputs to the computer: the things that you type at it,
the presses you do on its surface. It does some work and it produces outputs. And one of the models for human psychology is to think like that about the human. So on one side, you've got perception – our senses, our sight, our hearing and things like that – feeding in. Then you've got your brain in the middle that does lots of chuntering and processing. And then it creates outputs: physical actions, but also speech and anything else that we're using – facial expressions.
So that's a model of the human, which is the input-output engine model. That can be quite powerful. You can look at those parts individually, and a lot of psychology is about focusing on one part or another part at a time and trying to make sense of what's going on there. However, that doesn't give you, shall we say, a rounded picture of the human. Because we don't just act as an output; we act in order to live our lives.
There's actually a loop that goes on – a complete picture – so we might perceive something... So I might spot something I'm interested in. I might then think, "Oh, I'll go and look at it." And then perhaps I start to move off in that direction to go and have a look at the thing. If you're dealing with a computer, of course, the thing you act on typically is a computer. So maybe I'll look at my computer. I see something. I think, "Ah, I want to read my mail". I reach out and,
you know, if it's a phone, I might tap the icon. If it's on my computer, I might use my touchpad and move the mouse and click the icon. And then, of course, when I click the icon, something happens. There the computer does something back. If it's the world, if I pick something up, if I throw a stone, the stone moves through the world, and I perceive the impact of my actions on the world. I might perceive that by seeing it; I might perceive it by feeling it.
So there's this complete loop that goes from action through to the world, through to perception, to cognition, back to action. And when I drew it as an input/output loop, we started off with perception, as if perceptions drove things. However, there's also a good argument for starting with action. Actions might arise because of perceptions, but actually, what are we about as humans, as any animal? We're there to do things –
to get food for ourselves, to help us keep comfortable. So it's about *doing things in the world*. One way to look at this picture is as *input to output*, but another is to focus on that action – on really achieving the things that we want to achieve. Now that's true of our physical existence in the world, but crucially, it's also true when we design digital interactions.
The video above looks at cognition as a continuous loop that runs from action, through the world, to perception (input through our senses), to cognition (mental processing), and back to action (the output). Although one might see this process as starting with perception, at their core humans and animals are focused on doing things in the world; actions, not just perceptions, drive the loop. This understanding is crucial for the design of effective digital interactions.
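To make that loop concrete, here is a minimal, purely illustrative TypeScript sketch of passes around perceive → decide → act. All the type and function names are invented for this example; they don't come from any real framework.

```typescript
// A toy model of the perception–cognition–action loop described above.

type Percept = { kind: "notification" | "icon" | "result"; detail: string };
type Action = { kind: "tap" | "ignore"; target?: string };

// Cognition: decide on an action given what was just perceived.
function decide(percept: Percept): Action {
  if (percept.kind === "notification" && percept.detail === "new mail") {
    return { kind: "tap", target: "mail-icon" }; // "Ah, I want to read my mail"
  }
  return { kind: "ignore" };
}

// The world (here, the computer) responds to the action, producing
// the next thing we perceive — which is what closes the loop.
function world(action: Action): Percept {
  if (action.kind === "tap" && action.target === "mail-icon") {
    return { kind: "result", detail: "inbox opened" };
  }
  return { kind: "icon", detail: "home screen" };
}

// Run a few iterations: perceive → decide (cognition) → act → perceive…
let percept: Percept = { kind: "notification", detail: "new mail" };
for (let step = 0; step < 3; step++) {
  const action = decide(percept); // cognition turns a percept into an action
  percept = world(action);        // the action changes the world; we perceive the result
  console.log(step, action.kind, "→", percept.detail);
}
```

Note that the loop can be read equally well starting from `decide` (perception-first) or from `world` (action-first), mirroring the point that the loop has no privileged starting point.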
So we know we're not just designing interfaces; we're designing interactions. We're not just designing interactions; we're designing interventions. That's your job. However, what do we mean by design? What is design? This is one of those questions to which there will be 50,000 answers. I'm going to give you one, but hopefully it's one that's helpful.
So I'm going to say, first of all, that design is about achieving goals within constraints. There's some sort of goal or purpose that you're after. In interaction design that might be an enjoyment goal for people – enabling somebody to see a film, listen to music or engage in a social relationship. It might be a work goal, like achieving something efficiently,
being able to produce videos easily. But there's a goal there. And this is true probably of design in general: even if you're designing pure art, you have some sort of goal, which, again, might be aesthetic or might be about helping people understand meaning. There's a purpose that you have there, and it's about trying to achieve that purpose. However, there are constraints on that. You do not usually have total freedom; otherwise you'd be a magician, not a designer. Those constraints are critical.
Some of those design constraints might be about the kind of medium you have to work with – for a painter, whether you're using oils or watercolors; for an interaction designer, the computers you're using, what kind of device somebody is likely to have, what kind of platform they're on. Is it for Apple or Android? Is it for a phone or something else? These are broad questions, and sometimes you have choices on them, so that becomes part of your design remit: to make those choices.
Other times they're given to you. This is going to run in this organization and everybody has this kind of computer, full stop [period]. There's also constraints about time and money. What's available to you? You do not usually design with unlimited money or time. You make choices there. So because there are constraints, you have to make choices and trade-offs between some of your constraints. But constraints are usually given.
So the trade-off you often have to make is between different goals and purposes. Which of multiple goals are you going to achieve? If I'm designing some video editing software, I want to make it obviously as pleasurable and enjoyable to use by the person editing as I can. I don't want them to have a horrible job. However, I also might want to produce the highest-quality video that is possible, because that's going to improve the experience ultimately of a person like yourself watching this video.
It could be that I have to trade these off: accept something that takes more effort – and possibly less interesting and enjoyable effort – from the person doing the video editing, in order to produce better quality for you. There is a trade-off. I cannot usually achieve all of my goals within my constraints. Trade-offs are essential to design. One of the core constraints you have is your materials.
In an art setting that might be about the kind of paint you're using, or whether you're painting or sculpting. Clearly that makes a difference: if you're a sculptor, whether you're using stone or wood is going to change the nature of what you produce. This is also true of physical design, and it's also true of interaction design. So in a physical sense, I often – I won't do it now – lift up chairs and things like this and say: ah, look, this chair is made of metal.
A chair designed in metal will often have thin legs; if you made that design in wood, the legs would break. Similarly, if you took a wooden chair – a much more solid design – and made it in metal, it would probably be too heavy to move. The materials used change the fundamental nature of a design. That's true of physical design, but it's also true of digital design. You have to understand the nature of the materials you're using. So if you design something initially for
a desktop computer and just take the same design and squash it into a phone, it won't work. If you take a design that is designed for your phone and then try and put it without sufficient changes onto a voice interaction, it won't work. You have to understand your materials. So what are your materials? I've already given you some of them, the kind of platform you're on. Your computer is part of the materials. You have to understand
the nature of what's possible. Some of that's obvious, like the screen size – the thing I was just talking about. But also: what computation is possible? Say you're the designer, not a builder yourself. You might say to the person who is actually constructing this: "Oh, this has to work like this. Is that possible?" Or are you making things so difficult that you will cause problems elsewhere? What are the fundamental capacities? You cannot, for instance, demand that an interactive video has instant, millisecond
timing between two distant places on Earth, because the speed of light constrains you. There are constraints and capacities like that, but also storage capacity: how much video can you store on a computer? That's a limited amount, depending on the kind of device. The kinds of tools you've got to use, the kind of platforms you're on – all of these are part of the material that's available to you as a designer.
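As a back-of-the-envelope check on that speed-of-light point, here is a small TypeScript calculation. The distance figure is approximate and chosen purely for illustration.

```typescript
// Why "instant" millisecond timing across the planet is physically impossible.

const SPEED_OF_LIGHT_KM_PER_S = 300_000; // ~3 × 10^5 km/s in a vacuum
const LONDON_TO_SYDNEY_KM = 17_000;      // rough great-circle distance

// One-way travel time at the speed of light, in milliseconds:
const oneWayMs = (LONDON_TO_SYDNEY_KM / SPEED_OF_LIGHT_KM_PER_S) * 1000;

console.log(oneWayMs.toFixed(1));        // ≈ 56.7 ms one way
console.log((2 * oneWayMs).toFixed(1));  // ≈ 113.3 ms for a round trip

// Real networks (fibre paths, routing, congestion) are slower still, so a
// design that demands millisecond synchrony between distant places must be
// rethought to absorb that latency instead.
```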
Now, it might be that some of the details of that are done by other people, but you have to design something that works within those constraints. You have to understand the material – the digital material. But of course, you also have to understand people; that's the other crucial aspect. You have to understand the nature of people, otherwise you can't design for them. People are part of your materials. You have to understand their psychology, their social nature, and of course these extra things which are complicated
about the interaction between the people and technology, and between people and each other. So you have a rich picture of materials. Now, you might be starting to think – as I say "materials" and then put people into that picture – you might have been comfortable with me saying "your computer is a material". But people as your material? Surely that's a bit of a functional way to think about people?
Well, it is. People are not a material in the same sense as the paint you choose when you're painting, or the stone or wood you choose when you're carving. It's not the same; people have individuality. However, what we say is that if you only treated people as well as you treat materials, you would probably treat them better than they are often treated in design. So we should treat people at least as well as materials. I'll explain why. How many times have you heard there's been a big accident, whether it's a plane
accident, a train accident or something like that, and people say, "Oh, it was human error, it was due to human error"? The person didn't do the right thing at the right point; they didn't notice something that was important, and things went wrong. You might have said it yourself. It might be a hospital situation or an industrial situation. Now, just imagine instead that the wing falls off the plane because there's metal fatigue where the
wing joins the plane. You would say it was due to the metal fatigue, but you wouldn't say "it was metal error". You would say "it's a design error". Because the designer of the plane – the engineers, the detail designers – should have understood the nature of metal and the fact that you do get metal fatigue after a while. You either design it so that metal fatigue doesn't fundamentally mean the plane will crash.
Or you design it so that you can detect when that metal fatigue is happening and then take preventive maintenance. There are a number of strategies you've got because you understand that metal as a material has known ways of failing. We as humans have limits and constraints and ways that we fail in the sense we don't always do things in the perfect way. Just like a piece of metal doesn't. As a designer your job is to understand those limitations of people as actors in the system.
And ensure that the design of the system as a whole works even when those failures happen. So whenever you hear "it was human error", typically it wasn't really the operator or the pilot or the nurse or the doctor in the hospital; it was the designer of the system. If you treat users as well as a piece
of metal, you are probably treating them a lot better than they usually are treated. So, having said that, let's roll back to the central message here. The central message is that for you as a designer, the user is at the heart of what you do. You have to understand the technology you work with, but you must also understand your users – understand the nature of them. And as I said, then you'll start to treat them far better, hopefully, than a piece of metal.
Design in human-computer interaction, as discussed in the video, is about achieving goals within constraints. It involves understanding the purpose or goal, like enjoyment or work efficiency, and navigating the constraints, such as medium, platform, time, and money, to achieve that purpose.
It is essential to understand the materials, both digital and human, and to make trade-offs between different goals and constraints. Ultimately, the central message is that the user is at the heart of what you do as a designer. Understanding the users and the technology you work with is crucial for successful design.
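One small illustration of "respecting the digital material": rather than assuming a desktop design will simply fit a phone, the interface can ask the browser what it is running on. This sketch uses the standard matchMedia browser API; the breakpoint and layout names are assumptions made up for the example.

```typescript
// Let the layout follow the device ("the material") instead of
// squashing a desktop design onto a phone.

type Layout = "desktop-multi-column" | "phone-single-column";

function chooseLayout(): Layout {
  // Treat narrow viewports as the phone "material".
  return window.matchMedia("(max-width: 600px)").matches
    ? "phone-single-column"
    : "desktop-multi-column";
}

// React when the material changes (window resized, device rotated),
// rather than deciding once and assuming it holds forever.
const narrowViewport = window.matchMedia("(max-width: 600px)");
narrowViewport.addEventListener("change", () => {
  console.log("Switching layout to:", chooseLayout());
});
```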
Ergonomics in Human-Computer Interaction (HCI) refers to the design and implementation of interfaces that ensure user comfort, efficiency, and effectiveness. In this video, HCI expert Prof Alan Dix discusses touch and haptics in user interfaces, highlighting the importance of ergonomics in device design.
There are more and less successful uses of touch and haptics in the user interface; I'll give a few examples of both. I imagine most of you will have a mobile phone, probably within reach of you at this moment. It obviously uses vibration alerts to say a phone call's coming, or you've had a text message or WhatsApp message – some sort of notification.
Also, it's quite likely that as you touch that phone, you might get... Some keyboards give you a little vibration back as you click keys. I know I've had to turn it off on some of mine, though, because if it doesn't get it dead right, it can actually be quite confusing. And as I said, certainly one of the virtual keyboards I use on my phone, I tried to use it and I've just turned the vibration off.
But as I said, that can work well or not so well. And similarly, there are techniques to give you a sense of texture on phones by using vibration. In cars, if you've got ABS brake control – which, unless you've got a classic car, you probably will have – when you press down the brake and the ABS kicks in – if there's a bit of a skid, even the slightest slipperiness – you'll feel a vibration come through your pedal. Now, in the early days,
I think that was actually a physical effect of the brakes going on and off, but nowadays it isn't a side effect: the vibration is deliberately generated, in order to train people to try and stop just short of that skidding point. You put the vibration on, which helps you know when it's happened. So that's a really positive use of deliberate vibration – haptic feedback – in the car. A slightly less successful one, although still really cool and really nice, was a system
called iDrive that BMW put into their top-end cars in the mid-2000s. It was a knob, but it used a motor to give a sense of physical movement, so you got this clicky feeling. The number of clicks depended on what it was controlling: if it was controlling the volume and there were 14 settings to the volume, there would be 14 clicks. If you were controlling your menu and there were four items, there would be
four clicks, and then it would stop. You couldn't go any further. Now I'm assuming it was because the technology was early and they couldn't quite get it right, and as I said, if you don't get these things right, they're very, very sensitive. They actually had to abandon this in a relatively short time. And so, later versions actually reverted to having a knob that really does have click stops, even though it can't do that thing about stopping and starting and changing the number.
It still has haptic feedback, but it's generated physically rather than digitally. A place where haptic feedback is used incredibly successfully is in games. Even simple controllers will often have some sort of vibration in them. And if you're a real pro games player, you'll probably have a steering wheel or force-feedback joystick. So as you steer, you actually feel the resistance of the car, the vibration of the motor – all generated.
The same applies in virtual reality. Closely related to games, there's been very positive use of haptics in virtual reality for surgery training, because in surgery the feel of an instrument as you drill or cut or push is really crucial. Surgeons feel the difference between different organs, so as they work, they can tell the difference. Just coming into technology at the moment – something you might see, but probably
still at the edge of research moving into application – is a thing called *Ultrahaptics*. This could be used in virtual reality; it could also be used in other sorts of settings. It uses ultrasound to give a sense of feeling in mid-air. The idea is you have lots and lots of little ultrasound speakers. They generate ultrasound, which creates little points in space
where the waves all add up together and make a big peak – a bit like when water waves come together and make a big splash, while at other times they cancel each other out. So you design those splash points to give a sense of feeling. You can have your VR glasses on and see, perhaps, a globe in front of you; as you reach out, you can actually *feel* that globe, even though there's nothing there. So these things are coming. We've got both things that are working already,
but also new technologies that are finding their way through. And a little further downstream, there are a number of materials that change their shape programmatically, currently still very much at the research stage. But I think it won't be that long before we start to see this kind of thing making its way through into different kinds of interactions.
Copyright holder: On Demand News-April Brown. Appearance time: 04:42–04:57. Link: https://www.youtube.com/watch?v=LGXMTwcEqA4
Copyright holder: Ultraleap. Appearance time: 05:08–05:15. Link: https://www.youtube.com/watch?v=GDra4IJmJN0&ab_channel=Ultraleap
For example, mobile phones and cars use haptic feedback to provide users with intuitive and engaging experiences. However, poorly implemented haptic feedback can confuse users. This underscores the importance of ergonomics in HCI to ensure that interfaces are user-friendly, intuitive, and do not cause strain or discomfort, ultimately enhancing the user's overall experience with a device or application.
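For web-based interfaces, the browser's Vibration API offers a simple way to experiment with the kind of haptic feedback discussed above. A minimal sketch follows; support varies by platform (iOS Safari, notably, does not implement it), so vibration must be treated as an optional enhancement, and – per Prof Dix's warning about mistimed keyboard buzzes – users should be able to turn it off.

```typescript
// Hedged sketch of haptic feedback on the web using navigator.vibrate.

let hapticsEnabled = true; // always give users an off switch

function notifyWithHaptics(message: string): void {
  console.log("Notification:", message);
  // Feature-detect: never assume the capability exists on this device.
  if (hapticsEnabled && "vibrate" in navigator) {
    // Pattern: vibrate 100 ms, pause 50 ms, vibrate 100 ms.
    navigator.vibrate([100, 50, 100]);
  }
}

function onKeyTap(): void {
  if (hapticsEnabled && "vibrate" in navigator) {
    navigator.vibrate(10); // a very short tick per keypress
  }
}

notifyWithHaptics("You've had a WhatsApp message");
```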
Human-Computer Interaction (HCI) is crucial due to its direct impact on the user experience.
Given that satisfaction aspect that was in the ISO standards and in early talk of HCI – though it was ignored for so many years – why is it that now you could well have a title like User Experience Designer or something like that? User experience has become an intimate part, and often the dominant part,
of the way in which we look at people interacting with computer technology. So why is that the case? Well, part of it was the shift to service orientation, and that's quite crucial for what you're doing today. If you're service oriented, user experience matters in a different way than it does for things you pay for once and then keep. The Internet pushed this movement towards service orientation: digital goods,
paying on a subscription basis rather than paying once and for all. As soon as something becomes a service, there are multiple choice points. If these are domestic products, then the choice is made by the actual user, rather than by a client on behalf of the user – perhaps their boss. And if there are more choice points, then usability and user experience increase in importance. Because if people aren't enjoying what they're doing,
if they're not feeling it's fulfilling them, then they're going to choose another service and swap. It's easier to swap services than to swap products bought once and for all. So there's a sort of criticality to user experience that grew out of this service orientation. However, that's not the end of the story. In some ways – and you might have heard that phrase,
"You've never had it so good." – there's an element of this towards user experience. Some of you will probably come across Maslow's hierarchy of needs. There are multiple memes with variations of this triangle going round social media. You've probably seen the ones with Wi-Fi at the bottom. What Maslow said was that there are different levels of need. At the very base there are things like our physical need for food. Are you hungry? Are you thirsty? Are you cold or too hot?
Above that, there are slightly higher-level needs for safety and security. And this involves shelter. Is your roof leaking? Is it a solid house you're in? Do you feel secure where you're at or do you feel in danger? Next up the hierarchy are what's called love needs or social needs, for your children, your parents, your partner, your friends.
Next in the hierarchy again is esteem, or ego: the things that make you feel good – prestige in your job, status, self-worth, self-confidence. Then at the top – although there are variants, sometimes with additional layers – Maslow placed self-actualization: things like being creative, autonomy, a sense of personal growth and identity, and learning, like you're doing now.
What Maslow suggested was we fulfill these needs in a fairly strict order. So we don't try and worry about higher-level needs when the lower-level ones are unsatisfied. So first of all, [if] we're hungry, that dominates everything. After that, there's safety. After that, there's love needs. That was Maslow's suggestion about the way these worked, and these have been used quite a lot in the psychology literature in order to try and
understand the kinds of decision making that people do. And there's certainly a truth to that. Now you can sort of see an equivalence of this for user interfaces. In one sense there's raw functionality. Does it do the job? Does my phone help me make a phone call or help me connect to the Internet? Once you've got that bare functionality, you know, does my light come on when I want it to come on? You then start [thinking] about usability. You know, is it easy to phone somebody? Is it easy to use the Internet? Is it easy to get my lights to come on when I want to?
And then, once you've got that usability, you can start to worry about user experience. Is it a joy, the way my lights come on – perhaps automatically, subtly, in just the right way? Or perhaps when I make the faintest suggestion that it's getting dark here, my household assistant realizes what that means. And there's a suggestion that, in a way, you can think of these a little bit like Maslow's levels.
The functionality comes first. Once you've got functionality, you then worry about usability, and then you worry about user experience. You can think of it this way: over the years, we've sorted out how to get functionality, and technologies have improved; our ability to work out what people need has improved. Then we've sorted out usability, and then of course user experience becomes important at the top of the cake. And again, there is a truth to that: in some sense we need the lower-level needs to be satisfied.
Certainly, in certain sorts of situations – say, a safety-critical situation – functionality and usability are probably more significant than user experience. So again, there's some truth to it, but it's not entirely true, I should say; there are contradictions. Still, it's worth thinking about and realizing, when you make design decisions, where you're putting your focus:
is it on user experience, is it on usability, is it on functionality? Think about which is *most critical* for people. However, if you look at Maslow's hierarchy and then think about this in terms of user experience, there are clear contradictions. The Western obsession with slimming, which is arguably about ego and self-actualization, puts prestige against hunger
and sort of saying, “I'm prepared to be hungry in order to be the ideal shape that society says I should be.” If I'm talking too long, if my video is too long and you get hungry, does your need or your desire for learning, self-actualization, overcome your desire for that hunger? Now there'll be a point when that probably won't be the case. But we do trade these things off.
Apple, of course, is known for its user experience. It creates delightful products that make people feel committed to them; it's about identity as well as joy. If you unwrap an Apple box, the packaging itself is an experience. However, often over the years – and this is going back a long way, not a recent thing – Apple has sacrificed usability and functionality in order to get something that looks good, feels good,
but sometimes may be a little harder to use. And sometimes doesn't even do the right thing. And yet the computer I'm using now is an Apple computer. So clearly they get something right, even though I know that they lose things as well along the way. So, there's both a truth to this hierarchy and not, but this certainly helps explain
one of the reasons why experience is so crucial. If everything else is right, then it becomes the key differentiator. The thing that says somebody wants to use your product, that you've designed, rather than somebody else's.
As highlighted in the video, the shift towards service orientation, prompted by the internet and digital goods, has made usability and user experience increasingly important. Users now have multiple choice points and can easily swap services if they are not satisfied, which underscores the criticality of user experience. Prof Alan Dix uses the analogy of Maslow’s hierarchy of needs in the context of user interfaces, stating that once the basic needs of functionality and usability are addressed, user experience becomes the key differentiator.
User experience is the factor that will make someone choose your product over another. Therefore, optimizing the HCI is paramount to ensure the success and competitiveness of a product or service.
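As a toy encoding of that functionality → usability → experience ordering – an analogy only, not a formal model – one might write:

```typescript
// Lower "layers" gate the higher ones: polish cannot rescue a product
// that fails at the level below. (Illustrative names, invented here.)

interface ProductAssessment {
  functional: boolean;  // does it do the job at all?
  usable: boolean;      // is it easy to learn and use?
  delightful: boolean;  // is it a joy to use?
}

function nextPriority(p: ProductAssessment): string {
  if (!p.functional) return "Fix functionality first";
  if (!p.usable) return "Improve usability next";
  if (!p.delightful) return "Invest in user experience";
  return "Experience is now your key differentiator";
}

console.log(nextPriority({ functional: true, usable: true, delightful: false }));
// → "Invest in user experience"
```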
HCI does not require any knowledge of coding. While coding can be a part of the design process and implementation, it is not necessary for understanding and applying the principles of human-computer interaction.
By the 1950s, the first computers as we know them today had arrived. At that time, computers were room-sized and cost millions of dollars, pounds or euros in current terms. Thomas Watson of IBM is famously misquoted as predicting that five computers would be enough forever – a line that nonetheless reflected the sentiment of the time. Over the decades, the cost and size of computers have drastically reduced, making them accessible to the general public. By the mid-70s, the first personal computers were coming through, and today, the total number of computers and smartphones exceeds the number of people in the world.
For a detailed evolution of computer technology, watch the video below:
What I'd like to do now is take you on a very high-level view of the evolution of computer technology. The reason for that is to try and expose some of the changes we have seen in technology, which obviously influence where we are today. At different points during the history of computer technology, changes happened which influence the way that people
work with technology. Most of those still persist today: newer issues emerge, but often the old ones are still pertinent. So, hopefully, by giving a sense of the way in which the field developed, we are in a better position to do things now as well. It's not just a historical exercise; it's about trying to understand where we are. I want to focus on how many computers there are – a nice numerical count.
The reason I'm doing that isn't that it's the only way of thinking about technology, but that it acts as a bit of a proxy for other things, like the actual physical size of computing. As the number of computers increased, their physical size reduced: from rooms full, to things you could hold in your arms, to things you can hold in your hand, to things that are now so small you can't see them. So, there's a sense of the physical size. Also,
the cost. When computers are massively expensive, you don't have very many of them, but when they've reduced to perhaps pence or cents each, then suddenly you can have vast numbers of them. So, I'm going to go back to the 1950s first of all, when we're talking about room-sized computers costing, in current terms, millions of dollars or pounds or euros. There is a famous misquote of Thomas Watson, who was in charge of IBM at that time, saying that
five computers of the kind they had at that point would be enough forever. In fact, he didn't quite say that – you can look at the history of how that emerged as a misquote – but it wasn't so far from the zeitgeist of the time: this belief that a few computers – physically huge, but far less powerful than what you'd hold in your hand today – would be enough for anything that any large corporation, any government,
any country would ever want. So, think about five smartphones for all of the world, or perhaps all of America, or all of Europe, or all of India, and simply saying, "Well, that will be enough. Who could ever want more computing than that?" And that was said. It was partly about the cost, but also about this conception: what would you use a computer for anyway? Move forward to the mid-70s and we have a different type of picture. At that point,
the first personal computers were coming through, and you had about one personal computer for every 100,000 people in the world – so, still not very many. It was something very specialized, but now something coming down to human size, and the cost had reduced: we're thinking probably still several thousand pounds or dollars in current terms, but not so crazily
different from [the prices] laptops are now – more expensive, but not orders of magnitude more. That starts to change your conception of this device when it becomes human sized. I do struggle to find the industry figures, and it depends on what you count as a computer and what you count as a microprocessor, but it's something of the order of a couple of hundred million PCs and over a billion smartphones produced each year.
Certainly, in terms of the numbers of computers out there – not that everybody has one – the total number exceeds the number of people, and the number of smartphones is commensurate with the number of people in the world. If you look at microprocessors rather than computers in boxes or in phones – actual microprocessor units, the thing that's in the video camera that I'm looking at now; in fact, there are probably
several in the camera I'm looking at now, let alone within a laptop or computer – then we're talking about tens or hundreds of microprocessors for each person on the planet. It does depend a bit on where you draw the line between something being a computer and just being a piece of electronics – there's a bit of debate there. But some of it is things you can start to get a feel for, like a smart bulb: "Oh yes, there's a microprocessor in there". But also
if you go into a modern railway carriage, there will be hundreds of microprocessors embedded in it, doing everything from the lighting to the doors and environmental control. These things are so tiny you don't even know they are there, but they're doing more and more of the functions that you might in the past have used a switch for – just in slightly different ways. An example of this that I worked on in the
wild – I should say, actually being used – is a system we called Firefly. It has a different commercial name, but basically, imagine a hotel where you see all those white lights in the trees outside. We thought: what if all of those little lights in the trees could become displays? If there were a wedding about to happen in the hotel, what if all those tree lights gave the name of the couple about to get married, moving gradually through the lights, twirling
round? Wouldn't that be lovely? We investigated a number of ways this could be done, and in the end settled on putting a single microprocessor behind every LED. In fact, in the commercial version there are 1.2 microprocessors behind every LED, but the first prototype had one microprocessor – one computer – per pixel. That sounds like crazy, massive overkill. There's an installation of this in Zurich railway station with – I'm not sure of the exact count – tens of thousands, maybe not
far short of 100,000, lights. Vast numbers of lights, so in a cubic meter you could see thousands of lights. We had a Christmas tree with this in it: a thousand lights in a tree the height of a person. So, thousands of lights, but that meant thousands of computers just in front of you. And we imagined
that maybe at some point this would end up in domestic environments. So, at Christmas, when you turned on your Christmas tree, you might have a thousand computers turning on in your house. Lots and lots of computers because they're cheap, commoditized volume products.
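To give a feel for the "one computer per pixel" idea, here is a hypothetical sketch – not the actual Firefly design, whose details aren't given here – in which each light independently computes its own color from nothing but its position and the current time, so no central controller needs to address thousands of pixels individually:

```typescript
// Each pixel runs the same tiny program, in parallel, with no shared state.

type RGB = [number, number, number];

// A slow wave of brightness moving through space — the kind of effect
// that "the couple's name moving through the tree lights" generalizes.
function pixelProgram(x: number, y: number, z: number, tMs: number): RGB {
  const phase = (x + y + z) * 0.5 + tMs / 1000;
  const brightness = Math.round(127 * (1 + Math.sin(phase)));
  return [brightness, brightness, 255]; // a blue-white twinkle
}

// Simulate a handful of independent "one-computer" pixels at one instant.
const positions: ReadonlyArray<readonly [number, number, number]> =
  [[0, 0, 0], [1, 0, 0], [0, 1, 0]];
const now = Date.now();
for (const [x, y, z] of positions) {
  console.log(`pixel(${x},${y},${z}) →`, pixelProgram(x, y, z, now));
}
```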
Copyright holder: Tim Colegrove. Appearance time: 3:02–3:09. Copyright license and terms: CC BY-SA 4.0, via Wikimedia Commons. Link: https://commons.wikimedia.org/wiki/File:Trinity77.jpg
Copyright holder: Mk Illuminations. Appearance time: 6:30–6:40. Link: https://www.youtube.com/watch?v=4DD5qLvHANs
If you are looking to study Human-Computer Interaction (HCI), the Interaction Design Foundation (IxDF) is the most authoritative online learning platform. IxDF offers three comprehensive online HCI courses:
HCI: Foundations of UX Design: This course provides a solid foundation in HCI principles and how they apply to UX design.
HCI: Design for Thought and Emotion: Unlock the secrets of the human mind and learn how to apply these insights to your work.
HCI: Perception and Memory: Learn about the role of perception and memory in HCI and how to design interfaces that align with human cognitive capabilities.
Enroll in these courses to enhance your HCI knowledge and skills from the comfort of your home.
Remember, the more you learn about design, the more you make yourself valuable.
Improve your UX / UI Design skills and grow your career! Join IxDF now!
Here's the entire UX literature on Human-Computer Interaction (HCI) by the Interaction Design Foundation, collated in one place:
Take a deep dive into Human-Computer Interaction (HCI) with our course Human-Computer Interaction: The Foundations of UX Design.
Interactions between products/designs/services on one side and humans on the other should be as intuitive as conversations between two humans—and yet many products and services fail to achieve this. So, what do you need to know so as to create an intuitive user experience? Human psychology? Human-centered design? Specialized design processes? The answer is, of course, all of the above, and this course will cover them all.
Human-Computer Interaction (HCI) will give you the skills to properly understand, and design, the relationship between the “humans”, on one side, and the “computers” (websites, apps, products, services, etc.), on the other side. With these skills, you will be able to build products that work more efficiently and therefore sell better. In fact, the Bureau of Labor Statistics predicts that IT and design-related occupations will grow by 12% from 2014 to 2024, faster than the average for all occupations. This goes to show the immense demand in the market for professionals equipped with the right design skills.
Whether you are a newcomer to the subject of HCI or a professional, by the end of the course you will have learned how to implement user-centered design for the best possible results.
In the “Build Your Portfolio: Interaction Design Project”, you’ll find a series of practical exercises that will give you first-hand experience of the methods we’ll cover. If you want to complete these optional exercises, you’ll create a series of case studies for your portfolio which you can show your future employer or freelance customers.
This in-depth, video-based course is created with the amazing Alan Dix, the co-author of the internationally best-selling textbook Human-Computer Interaction and a superstar in the field of Human-Computer Interaction. Alan is currently professor and Director of the Computational Foundry at Swansea University.
We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.
If you want this to change, link to us, or join us to help us democratize design knowledge!