Number of co-authors: 113
Number of publications with 3 favourite co-authors: Gordon Kurtenbach (16), George W. Fitzmaurice (9), Abigail Sellen (9)
Bill Buxton's 3 most productive colleagues in number of publications: Ben Shneiderman (225), Brad A. Myers (154), Saul Greenberg (140)
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
-- Antoine de Saint Exupéry
Personal Homepage: http://www.billbuxton.com
Bill Buxton is an interaction designer and researcher. He is Principal Researcher at Microsoft Research and prior to that, he was Principal of his own Toronto-based boutique design and consulting firm, Buxton Design.
Bill is one of the pioneers in computer music, and has played an important role in the development of computer-based tools for film, industrial design, graphics and animation. As a researcher, he has had a long history with Xerox Palo Alto Research Center and the University of Toronto (where he is still an Associate Professor in the Department of Computer Science, and Visiting Professor at the Knowledge Media Design Institute). During the fall of 2004, he was a lecturer in the Department of Industrial Design at the Ontario College of Art and Design, and during the spring of 2005, he was a Visiting Scientist at Microsoft Research, Cambridge.
From 1994 until December 2002, he was Chief Scientist of Alias|Wavefront and, from 1995, of its parent company SGI Inc. In 2001, the Hollywood Reporter named him one of the 10 most influential innovators in Hollywood. In 2002, Time Magazine named him one of the top 5 designers in Canada, and he was elected to the ACM's CHI Academy.
Publications by Bill Buxton (bibliography)
Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Manual deskterity: an exploration of simultaneous pen + touch direct input. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2793-2802.
Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.
© All rights reserved Hinckley et al. and/or their publisher
Venkatacharya, Patanjali S., Kessler, Jonathan, Hardeman, Tami, Seiber, Ed and Buxton, Bill (2010): What makes a good design critic?: food design vs. product design criticism. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3131-3134.
This panel will bring together leading food design and product design critics. The panelists will include: a leading Atlanta-based food critic and writer, a food stylist, a restaurant architect & designer, and a well-known product design critic familiar with the field of user experience. Together, the panel will compare and contrast how design experts from these two disciplines provide design criticism, and whether there are any novel learning points from each perspective.
© All rights reserved Venkatacharya et al. and/or their publisher
Pahud, Michel, Inkpen, Kori, Benko, Hrvoje, Tang, John C. and Buxton, Bill (2010): Three's company: understanding communication channels in three-way distributed collaboration. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 271-280.
We explore the design of a system for three-way collaboration over a shared visual workspace, specifically in how to support three channels of communication: person, reference, and task-space. In two studies, we explore the implications of extending designs intended for dyadic collaboration to three-person groups, and the role of each communication channel. Our studies illustrate the utility of multiple configurations of users around a distributed workspace, and explore the subtleties of traditional notions of identity, awareness, spatial metaphor, and corporeal embodiments as they relate to three-way collaboration.
© All rights reserved Pahud et al. and/or their publisher
Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Pen + touch = new tools. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 27-36.
We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. We articulate how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.
© All rights reserved Hinckley et al. and/or their publisher
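The pen-writes/touch-manipulates division of labor the abstract describes can be sketched as a small input dispatcher. This is a hypothetical illustration, not the authors' implementation; the event kinds and object names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PenEvent:
    kind: str  # hypothetical kinds: "stroke", "drag_off", "cross", "tap"

def interpret(pen: PenEvent, held: Optional[str]) -> str:
    """Dispatch pen input according to whether a touch is holding an object.

    The pen alone writes; while a touch holds an object, the hold acts as
    the "glue" that switches the pen's mode and phrases both inputs into a
    single multimodal gesture.
    """
    if held is None:
        return "ink"              # unimodal pen: the pen writes
    if pen.kind == "drag_off":
        return f"copy:{held}"     # hold a photo + drag off with pen -> copy
    if pen.kind == "cross":
        return f"cut:{held}"      # hold a photo + cross it with pen -> slice
    if pen.kind == "tap":
        return f"staple:{held}"   # hold photos + tap one with pen -> staple
    return "ink"                  # anything else falls back to writing

print(interpret(PenEvent("stroke"), None))     # -> ink
print(interpret(PenEvent("cross"), "photo1"))  # -> cut:photo1
```

The key design point carried over from the abstract is that touch does not select a mode explicitly; holding an object is itself the mode switch, released automatically when the touch lifts.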
Khan, Azam, Matejka, Justin, Fitzmaurice, George, Kurtenbach, Gord, Burtnyk, Nicolas and Buxton, Bill (2009): Toward the Digital Design Studio: Large Display Explorations. In Human-Computer Interaction, 24 (1) pp. 9-47.
Inspired by our automotive and product design customers using large displays in design centers, visualization studios, and meeting rooms around the world, we have been exploring the use and potential of large display installations for almost a decade. Our research has touched on many aspects of this rich design space, from individual tools to complete systems, and has generally moved through the life cycle of a design artifact: from the creation phase, through communication and collaboration, to presentation and dissemination. As we attempt to preserve creative flow through the phases, we introduce social structures and constraints that drive the design of possible point solutions in the larger context of a digital design studio trial environment built in the lab. Although many of the interactions presented are viable across several design phases, this article focuses primarily on facilitating collaboration. We conclude with critical lessons learned of both what avenues have been fruitful and which roads to avoid. This article lightly covers the whole design process and attempts to inform readers of key factors to consider when designing for designers.
© All rights reserved Khan et al. and/or Taylor and Francis
Greenberg, Saul and Buxton, Bill (2008): Usability evaluation considered harmful (some of the time). In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 111-120.
Current practice in Human-Computer Interaction, as encouraged by educational institutes, academic review processes, and institutions with usability groups, advocates usability evaluation as a critical part of every design process. This is for good reason: usability evaluation has a significant role to play when conditions warrant it. Yet evaluation can be ineffective and even harmful if naively done 'by rule' rather than 'by thought'. If done during early stage design, it can mute creative ideas that do not conform to current interface norms. If done to test radical innovations, the many interface issues that would likely arise from an immature technology can quash what could have been an inspired vision. If done to validate an academic prototype, it may incorrectly suggest a design's scientific worthiness rather than offer a meaningful critique of how it would be adopted and used in everyday practice. If done without regard to how cultures adopt technology over time, then today's reluctant reactions by users will forestall tomorrow's eager acceptance. The choice of evaluation methodology -- if any -- must arise from and be appropriate for the actual problem or research question under consideration.
© All rights reserved Greenberg and Buxton and/or ACM Press
Baecker, Ronald M., Harrison, Steve, Buxton, Bill, Poltrock, Steven and Churchill, Elizabeth F. (2008): Media spaces: past visions, current realities, future promise. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 2245-2248.
Established researchers and practitioners active in the development and deployment of media spaces review what seemed to be promised twenty years ago, what has actually been achieved, and what we might anticipate over the next twenty years.
© All rights reserved Baecker et al. and/or ACM Press
Izadi, Shahram, Butler, Alex, Hodges, Steve, West, Darren, Hall, Malcolm, Buxton, Bill and Molloy, Mike (2008): Experiences with building a thin form-factor touch and tangible tabletop. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 181-184.
Buxton, Bill (2008): The Role of the Artist in the Laboratory. In: (ed.). "Meisterwerke der Computer Kunst". pp. 29-32
Hodges, Steve, Izadi, Shahram, Butler, Alex, Rrustemi, Alban and Buxton, Bill (2007): ThinSight: versatile multi-touch sensing for thin form-factor displays. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 259-268.
ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple fingers placed on or near the display surface. We describe this new hardware in detail, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without degradation of display capability. With our approach, fingertips and hands are clearly identifiable through the display. The approach of optical sensing also opens up the exciting possibility for detecting other physical objects and visual markers through the display, and some initial experiments are described. We also discuss other novel capabilities of our system: interaction at a distance using IR pointing devices, and IR-based communication with other electronic devices through the display. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, thin form factor, making such systems even more deployable. We therefore envisage using ThinSight to capture rich sensor data through the display which can be processed using computer vision techniques to enable both multi-touch and tangible interaction.
© All rights reserved Hodges et al. and/or ACM Press
Buxton, Bill (2007): Sketching User Experiences: Getting the Design Right and the Right Design. Morgan Kaufmann
Bill Buxton and I share a common belief that design leadership together with technical leadership drives innovation. Sketching, prototyping, and design are essential parts of the process we use to create new products. Bill Buxton brings design leadership and creativity to Microsoft. Through his thought-provoking personal examples he is inspiring others to better understand the role of design in their own companies. --Bill Gates, Chairman, Microsoft
"Informed design is essential." While it might seem that Bill Buxton is exaggerating or kidding with this bold assertion, neither is the case. In an impeccably argued and sumptuously illustrated book, design star Buxton convinces us that design simply must be integrated into the heart of business. --Roger Martin, Dean, Rotman School of Management, University of Toronto
Design is explained, with the means and manner for successes and failures illuminated by engaging stories, true examples and personal anecdotes. In Sketching User Experiences, Bill Buxton clarifies the processes and skills of design from sketching to experience modeling, in a lively and informative style that is rich with stories and full of his own heart and enthusiasm. At the start we are lost in mountain snows and northern seas, but by the end we are equipped with a deep understanding of the tools of creative design. --Bill Moggridge, Cofounder of IDEO and author of Designing Interactions
"Like any secret society, the design community has its strange rituals and initiation procedures. Bill opens up the mysteries of the magical process of design, taking us through a land in which story-telling, orange squeezers, the Wizard of Oz, iPods, avalanche avoidance, bicycle suspension sketching, and faking it are all points on the design pilgrim's journey. There are lots of ideas and techniques in this book to feed good design and transform the way we think about creating useful stuff." --Peter Gabriel
I love this book. There are very few resources available that see across and through all of the disciplines involved in developing great experiences. This is complex stuff and Buxton's work is both informed and insightful. He shares the work in an intimate manner that engages the reader, and you will find yourself nodding with agreement and smiling at the poignant relevance of his examples. --Alistair Hamilton, Symbol Technologies, NY
Books that have proposed bringing design into HCI are aplenty, though books that propose bringing software into design are less common. Nevertheless, Bill manages to skilfully steer a course between the excesses of the two approaches and offers something truly in between. It could be a real boon to the innovation business by bringing the best of both worlds: design and HCI. --Richard Harper, Microsoft Research, Cambridge
There is almost a fervor in the way that new products, with their rich and dynamic interfaces, are being released to the public, typically promising to make lives easier, solve the most difficult of problems, and maybe even make the world a better place. The reality is that few survive, much less deliver on their promise. The folly? An absence of design, and an over-reliance on technology alone as the solution.
We need design. But design as described here depends on different skillsets: each essential, but on their own, none sufficient. In this rich ecology, designers are faced with new challenges, challenges that build on, rather than replace, existing skills and practice. Sketching User Experiences approaches design and design thinking as something distinct that needs to be better understood, by both designers and the people with whom they need to work, in order to achieve success with new products and systems. So while the focus is on design, the approach is holistic. Hence, the book speaks to designers, usability specialists, the HCI community, product managers, and business executives.
There is an emphasis on balancing the back-end concern with usability and engineering excellence (getting the design right) with an up-front investment in sketching and ideation (getting the right design). Overall, the objective is to build the notion of informed design: molding emerging technology into a form that serves our society and reflects its values. Grounded in both practice and scientific research, Bill Buxton's engaging work aims to spark the imagination while encouraging the use of new techniques, breathing new life into user experience design.
* Covers sketching and early prototyping design methods suitable for dynamic product capabilities: cell phones that communicate with each other and other embedded systems, "smart" appliances, and things you only imagine in your dreams;
* Thorough coverage of the design sketching method which helps easily build experience prototypes, without the effort of engineering prototypes which are difficult to abandon;
* Reaches out to a range of designers, including user interface designers, industrial designers, software engineers, usability engineers, product managers, and others;
* Full of case studies, examples, exercises, and projects, and access to video clips (www.mkp.com/sketching) that demonstrate the principles and methods.
About the Author: Trained as a musician, Bill Buxton began using computers over thirty years ago in his art. This early experience, both in the studio and on stage, helped develop a deep appreciation of both the positive and negative aspects of technology and its impact. This increasingly drew him into both design and research, with a very strong emphasis on interaction and the human aspects of technology. He first came to prominence for his work at the University of Toronto on digital musical instruments and the novel interfaces that they employed. This work in the late 70s gained the attention of Xerox PARC, where Buxton participated in pioneering work in collaborative work, interaction techniques and ubiquitous computing. He then went on to become Chief Scientist of SGI and Alias|Wavefront, where he had the opportunity to work with some of the top filmmakers and industrial designers in the world. He is now a principal researcher at Microsoft Corp., where he splits his time between research and helping make design a fundamental pillar of the corporate culture.
© All rights reserved Buxton and/or Morgan Kaufmann
Tohidi, Maryam, Buxton, Bill, Baecker, Ronald M. and Sellen, Abigail (2006): Getting the right design and the design right. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1243-1252.
We present a study comparing usability testing of a single interface versus three functionally equivalent but stylistically distinct designs. We found that when presented with a single design, users gave significantly higher ratings and were more reluctant to criticize than when presented with the same design in a group of three. Our results imply that presenting users with alternative design solutions makes subjective ratings less prone to inflation and gives rise to more and stronger criticisms when appropriate. Contrary to our expectations, our results also suggest that usability testing by itself, even when multiple designs are presented, is not an effective vehicle for soliciting constructive suggestions about how to improve the design from end users. It is a means to identify problems, not provide solutions.
© All rights reserved Tohidi et al. and/or ACM Press
Tohidi, Maryam, Buxton, Bill, Baecker, Ronald M. and Sellen, Abigail (2006): User sketches: a quick, inexpensive, and effective way to elicit more reflective user feedback. In: Proceedings of the Fourth Nordic Conference on Human-Computer Interaction 2006. pp. 105-114.
Our aim is to introduce techniques that allow for active involvement of users throughout the design process, starting with the very early stages of ideation and exploration. The approach discussed in this study augments conventional usability testing with a user sketching component. We found that enabling users to sketch their ideas facilitated reflection, and provided a rich medium for discovery and communication of design ideas. We believe that this technique has the potential to complement usability testing in general, in order to generate "reflective" as opposed to purely "reactive" user feedback.
© All rights reserved Tohidi et al. and/or ACM Press
Owen, Russell, Kurtenbach, Gordon, Fitzmaurice, George W., Baudel, Thomas and Buxton, Bill (2005): When it gets more difficult, use both hands: exploring bimanual curve manipulation. In: Graphics Interface 2005 May 9-11, 2005, Victoria, British Columbia, Canada. pp. 17-24.
In this paper we investigate the relationship between bimanual (two-handed) manipulation and the cognitive aspects of task integration, divided attention and epistemic action. We explore these relationships by means of an empirical study comparing a bimanual technique versus a unimanual (one-handed) technique for a curve matching task. The bimanual technique was designed on the principle of integrating the visual, conceptual and input device space domain of both hands. We provide evidence that the bimanual technique has better performance than the unimanual technique and, as the task becomes more cognitively demanding, the bimanual technique exhibits even greater performance benefits. We argue that the design principles and performance improvements are applicable to other task domains.
© All rights reserved Owen et al. and/or their publisher
Buxton, Bill (2005): Piloting through the maze. In Interactions, 12 (6) p. 10.
Buxton, Bill, Baecker, Ronald M., Clark, Wesley, Richardson, Fontaine, Sutherland, Ivan, Sutherland, W. R. Bert and Henderson, Austin (2005): Interaction at Lincoln laboratory in the 1960's: looking forward -- looking back. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1162-1167.
The activity centered around the TX-2 computer at Lincoln Laboratory in the 1960's laid the foundation for much of HCI. Through the use of archival film footage, and live presentations by some of the key protagonists, this panel is intended to contribute to a more general awareness of this work, its historical importance to HCI, and its relevance to research today.
© All rights reserved Buxton et al. and/or ACM Press
Buxton, Bill (2005): Sketching and Experience Design. In: Proceedings of IFIP INTERACT05: Human-Computer Interaction 2005. p. 1.
Among others, Hummels, Djajadiningrat and Overbeeke (Knowing, Doing and Feeling: Communication with your Digital Products. Interdisziplinares Kolleg Kognitions und Neurowissenschaften, Gunne am Mohnesee, March 2-9 2001, 289-308.), have expressed the notion that the real product of design is the resultant "context for experience" rather than the object or software that provokes that experience. This closely corresponds to what I refer to as a transition in focus from a materialistic to an experiential view of design. Paraphrasing what I have already said, it is not the physical entity or what is in the box (the "material" product) that is the true outcome of the design process. Rather, it is the behavioural, experiential and emotional responses that come about as a result of its existence and use in the "wild". Designing for experience comes with a whole new level of complexity. This is especially true in this emerging world of information appliances, reactive environments and ubiquitous computing, where, along with those of their users, we have to factor in the convoluted behaviours of the products themselves. Doing this effectively requires both a different mind-set, as well as different techniques. This talk is motivated by a concern that, in general, our current training and work practices are not adequate to meet the demands of this level of design. This is true for those coming from a computer science background, since they do not have sufficient grounding in design, at least in the sense that would be recognized by an architect or industrial designer. Conversely, those from the design arts, while they have the design skills, do not generally have the technical skills to adequately address the design issues relating to the complex embedded behaviours of such devices and systems. Hence, in this talk, we discuss the design process itself, from the perspective of methods, organization, and composition.
Fundamental to our approach is the notion that sketching is a fundamental component of design, and is especially critical at the early ideation phase. Yet, due to the temporal nature of what we are designing, conventional sketching is not - on its own - adequate. Hence, if we are to design experience or interaction, we need to adopt something that is to our process what traditional sketching is to the process of conventional industrial design. It is the motivation and exploration of such a sketching process that is the foundation of this presentation.
© All rights reserved Buxton and/or Springer Verlag
Buxton, Bill (2005): The renaissance is over: long live the renaissance. In: Proceedings of the 2005 Conference on Creativity and Cognition 2005. p. 3.
Fitzmaurice, George W., Khan, Azam, Pieke, Robert, Buxton, Bill and Kurtenbach, Gordon (2003): Tracking menus. In: Proceedings of the 16th annural ACM Symposium on User Interface Software and Technology November, 2-5, 2003, Vancouver, Canada. pp. 71-79.
We describe a new type of graphical user interface widget, known as a "tracking menu." A tracking menu consists of a cluster of graphical buttons, and as with traditional menus, the cursor can be moved within the menu to select and interact with items. However, unlike traditional menus, when the cursor hits the edge of the menu, the menu moves to continue tracking the cursor. Thus, the menu always stays under the cursor and close at hand. In this paper we define the behavior of tracking menus, show unique affordances of the widget, present a variety of examples, and discuss design characteristics. We examine one tracking menu design in detail, reporting on usability studies and our experience integrating the technique into a commercial application for the Tablet PC. While user interface issues on the Tablet PC, such as preventing round trips to tool palettes with the pen, inspired tracking menus, the design also works well with a standard mouse and keyboard configuration.
© All rights reserved Fitzmaurice et al. and/or ACM Press
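The tracking behavior described above, where the menu follows the cursor once it crosses an edge, amounts to clamping the menu's bounding box to the cursor position. A minimal sketch of that idea (the rectangle model and coordinates are hypothetical, not taken from the paper):

```python
def track_menu(menu_x, menu_y, width, height, cursor_x, cursor_y):
    """Return the menu's new top-left corner after a cursor move.

    While the cursor stays inside the menu's bounding box, the menu does
    not move; when the cursor hits an edge, the menu translates just enough
    to keep the cursor on that edge, so it always stays close at hand.
    """
    dx = dy = 0
    if cursor_x < menu_x:
        dx = cursor_x - menu_x             # cursor pushed past left edge
    elif cursor_x > menu_x + width:
        dx = cursor_x - (menu_x + width)   # pushed past right edge
    if cursor_y < menu_y:
        dy = cursor_y - menu_y             # pushed past top edge
    elif cursor_y > menu_y + height:
        dy = cursor_y - (menu_y + height)  # pushed past bottom edge
    return menu_x + dx, menu_y + dy

# Cursor inside the 100x100 menu at (0, 0): the menu stays put.
print(track_menu(0, 0, 100, 100, 50, 50))   # -> (0, 0)
# Cursor dragged to x=150: the menu follows 50 px to the right.
print(track_menu(0, 0, 100, 100, 150, 50))  # -> (50, 0)
```

Running this per input event reproduces the described behavior: small cursor motions select items in place, while larger motions drag the whole menu along.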
Grossman, Tovi, Balakrishnan, Ravin, Kurtenbach, Gordon, Fitzmaurice, George W., Khan, Azam and Buxton, Bill (2002): Creating principal 3D curves with digital tape drawing. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 121-128.
Tsang, Michael, Fitzmaurice, George W., Kurtenbach, Gordon, Khan, Azam and Buxton, Bill (2002): Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display. In: Beaudouin-Lafon, Michel (ed.) Proceedings of the 15th annual ACM symposium on User interface software and technology October 27-30, 2002, Paris, France. pp. 111-120.
We introduce the Boom Chameleon, a novel input/output device consisting of a flat-panel display mounted on a tracked mechanical boom. The display acts as a physical window into 3D virtual environments, through which a one-to-one mapping between real and virtual space is preserved. The Boom Chameleon is further augmented with a touch-screen and a microphone/speaker combination. We present a 3D annotation application that exploits this unique configuration in order to simultaneously capture viewpoint, voice and gesture information. Design issues are discussed and results of an informal user study on the device and annotation software are presented. The results show that the Boom Chameleon annotation facilities have the potential to be an effective, easy to learn and operate 3D design review system.
© All rights reserved Tsang et al. and/or ACM Press
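The one-to-one mapping between real and virtual space that the abstract describes can be expressed compactly: if the virtual scene is registered to the room, the virtual camera's view matrix is just the inverse of the tracked display's pose. A sketch under that assumption (the 4x4 pose format is hypothetical, not the paper's actual tracker interface):

```python
import numpy as np

def view_from_display_pose(pose: np.ndarray) -> np.ndarray:
    """Given the boom-mounted display's pose in room coordinates as a 4x4
    rigid transform, return the virtual camera's view matrix. Because the
    scene is pinned to the room, moving the display through real space
    moves the viewpoint through virtual space one-to-one."""
    return np.linalg.inv(pose)

# Display translated 1 m along x: the view matrix shifts the scene by -1 m.
pose = np.eye(4)
pose[0, 3] = 1.0
view = view_from_display_pose(pose)
print(view[0, 3])  # -> -1.0
```

Updating the view matrix from the tracker every frame is what makes the display behave as a "physical window" rather than a screen showing a detached camera.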
Buxton, Bill (2001): Less is More (More or Less): Uncommon Sense and the Design of Computers. In: Denning, Peter J. (ed.). "The Invisible Future: The Seamless Integration of Technology Into Everyday Life". McGraw-Hill Companies
Fitzmaurice, George W., Balakrishnan, Ravin, Kurtenbach, Gordon and Buxton, Bill (1999): An Exploration into Supporting Artwork Orientation in the User Interface. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 167-174.
Rotating a piece of paper while drawing is an integral and almost subconscious part of drawing with pencil and paper. In a similar manner, the advent of lightweight pen-based computers allows digital artwork to be rotated while drawing by rotating the entire computer. Given this type of manipulation we explore the implications for the user interface to support artwork orientation. First we describe an exploratory study to further motivate our work and characterize how artwork is manipulated while drawing. After presenting some possible UI approaches to support artwork orientation, we define a new solution called a rotating user interface (RUI). We then discuss design issues and requirements for RUIs based on our exploratory study.
© All rights reserved Fitzmaurice et al. and/or ACM Press
Balakrishnan, Ravin, Fitzmaurice, George W., Kurtenbach, Gordon and Buxton, Bill (1999): Digital Tape Drawing. In: Zanden, Brad Vander and Marks, Joe (eds.) Proceedings of the 12th annual ACM symposium on User interface software and technology November 07 - 10, 1999, Asheville, North Carolina, United States. pp. 161-169.
Tape drawing is the art of creating sketches on large scale upright surfaces using black photographic tape. Typically used in the automotive industry, it is an important part of the automotive design process that is currently not computerized. We analyze and describe the unique aspects of tape drawing, and use this knowledge to design and implement a digital tape drawing system. Our system retains the fundamental interaction and visual affordances of the traditional media while leveraging the power of the digital media. Aside from the practical aspect of our work, the interaction techniques developed have interesting implications for current theories of human bimanual interaction.
© All rights reserved Balakrishnan et al. and/or ACM Press
Leganchuk, Andrea, Zhai, Shumin and Buxton, Bill (1998): Manual and Cognitive Benefits of Two-Handed Input: An Experimental Study. In ACM Transactions on Computer-Human Interaction, 5 (4) pp. 326-359.
One of the recent trends in computer input is to utilize users' natural bimanual motor skills. This article further explores the potential benefits of such two-handed input. We have observed that bimanual manipulation may bring two types of advantages to human-computer interaction: manual and cognitive. Manual benefits come from increased time-motion efficiency, due to twice as many degrees of freedom being simultaneously available to the user. Cognitive benefits arise as a result of reducing the load of mentally composing and visualizing the task at an unnaturally low level which is imposed by traditional unimanual techniques. Area sweeping was selected as our experimental task. It is representative of what one encounters, for example, when sweeping out the bounding box surrounding a set of objects in a graphics program. Such tasks cannot be modeled by Fitts' Law alone and have not been previously studied in the literature. In our experiments, two bimanual techniques were compared with the conventional one-handed GUI approach. Both bimanual techniques employed the two-handed "stretchy" technique first demonstrated by Krueger in 1983. We also incorporated the "Toolglass" technique introduced by Bier et al. in 1993. Overall, the bimanual techniques resulted in significantly faster performance than the status quo one-handed technique, and these benefits increased with the difficulty of mentally visualizing the task, supporting our bimanual cognitive advantage hypothesis. There was no significant difference between the two bimanual techniques. This study makes two types of contributions to the literature. First, practically, we studied yet another class of transaction where significant benefits can be realized by applying bimanual techniques. Furthermore, we have done so using easily available commercial hardware, contributing to our understanding of why bimanual interaction techniques have an advantage over unimanual techniques.
A literature review on two-handed computer input and some of the most relevant bimanual human motor control studies is also included.
© All rights reserved Leganchuk et al. and/or ACM Press
Kurtenbach, Gordon, Fitzmaurice, George W., Baudel, Thomas and Buxton, Bill (1997): The Design of a GUI Paradigm Based on Tablets, Two-Hands, and Transparency. In: Pemberton, Steven (ed.) Proceedings of the ACM CHI 97 Human Factors in Computing Systems Conference March 22-27, 1997, Atlanta, Georgia. pp. 35-42.
© All rights reserved Kurtenbach et al. and/or ACM Press
Cooperstock, Jeremy R., Fels, Sidney, Buxton, Bill and Smith, Kenneth C. (1997): Reactive Environments. In Communications of the ACM, 40 (9) pp. 65-73.
Zhai, Shumin, Milgram, Paul and Buxton, Bill (1996): The Influence of Muscle Groups on Performance of Multiple Degree-of-Freedom Input. In: Tauber, Michael J., Bellotti, Victoria, Jeffries, Robin, Mackinlay, Jock D. and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 96 Human Factors in Computing Systems Conference April 14-18, 1996, Vancouver, Canada. pp. 308-315.
The literature has long suggested that the design of computer input devices should make use of the fine, smaller muscle groups and joints in the fingers, since they are richly represented in the human motor and sensory cortex and they have higher information processing bandwidth than other body parts. This hypothesis, however, has not been conclusively verified with empirical research. The present work studied such a hypothesis in the context of designing 6 degree-of-freedom (DOF) input devices. The work attempts to address both a practical need -- designing efficient 6 DOF input devices -- and the theoretical issue of muscle group differences in input control. Two alternative 6 DOF input devices, one including and the other excluding the fingers from the 6 DOF manipulation, were designed and tested in a 3D object docking experiment. Users' task completion times were significantly shorter with the device that utilised the fingers. The results of this study strongly suggest that the shape and size of future input device designs should constitute affordances that invite finger participation in input control.
© All rights reserved Zhai et al. and/or ACM Press
Matias, Edgar, MacKenzie, I. Scott and Buxton, Bill (1996): One-Handed Touch Typing on a QWERTY Keyboard. In Human-Computer Interaction, 11 (1) pp. 1-27.
"Half-QWERTY" is a new, one-handed typing technique designed to facilitate the transfer of two-handed touch-typing skill to the one-handed condition. It is performed on a standard keyboard with modified software or on a special half-keyboard with full-size keys. In an experiment using touch typists, hunt-and-peck typing speeds were surpassed after 3 to 4 hr of practice. Subjects reached 50% of their two-handed typing speed after about 8 hr. After 10 hr, all subjects typed between 41% and 73% of their two-handed speed, ranging from 23.8 to 42.8 words per minute (wpm). In extended testing, subjects achieved average one-handed speeds as high as 60 wpm and 83% of their two-handed rate. These results are important for providing access to disabled users and for designing compact computers.
© All rights reserved Matias et al. and/or Taylor and Francis
Zhai, Shumin, Buxton, Bill and Milgram, Paul (1996): The Partial-Occlusion Effect: Utilizing Semitransparency in 3D Human-Computer Interaction. In ACM Transactions on Computer-Human Interaction, 3 (3) pp. 254-284.
This study investigates human performance when using semitransparent tools in interactive 3D computer graphics environments. The article briefly reviews techniques for presenting depth information and examples of applying semitransparency in computer interface design. We hypothesize that when the user moves a semitransparent surface in a 3D environment, the "partial-occlusion" effect introduced through semitransparency acts as an effective cue in target localization -- an essential component in many 3D interaction tasks. This hypothesis was tested in an experiment in which subjects were asked to capture dynamic targets (virtual fish) with two versions of a 3D box cursor, one with and one without semitransparent surfaces. Results showed that the partial-occlusion effect through semitransparency significantly improved users' performance in terms of trial completion time, error rate, and error magnitude in both monoscopic and stereoscopic displays. Subjective evaluations supported the conclusions drawn from performance measures. The experimental results and their implications are discussed, with emphasis on the relative, discrete nature of the partial-occlusion effect and on interactions between different depth cues. The article concludes with proposals of a few future research issues and applications of semitransparency in human-computer interaction.
© All rights reserved Zhai et al. and/or ACM Press
Cooperstock, Jeremy R., Tanikoshi, Koichiro, Beirne, Garry, Narine, Tracy and Buxton, Bill (1995): Evolution of a Reactive Environment. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 170-177.
A basic tenet of "Ubiquitous computing" (Weiser, 1993) is that technology should be distributed in the environment (ubiquitous), yet invisible, or transparent. In practice, resolving the seeming paradox arising from the joint demands of ubiquity and transparency is less than simple. This paper documents a case study of attempting to do just that. We describe our experience in developing a working conference room which is equipped to support a broad class of meetings and media. After laying the groundwork and establishing the context in the Introduction, we describe the evolution of the room. Throughout, we attempt to document the rationale and motivation. While derived from a limited domain, we believe that the issues that arise are of general importance, and have strong implications for future research.
© All rights reserved Cooperstock et al. and/or ACM Press
Kabbash, Paul and Buxton, Bill (1995): The "Prince" Technique: Fitts' Law and Selection Using Area Cursors. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 273-279.
In most GUIs, selection is effected by placing the point of the mouse-driven cursor over the area of the object to be selected. Fitts' law is commonly used to model such target acquisition, with the term A representing the amplitude, or distance, of the target from the cursor, and W the width of the target area. As the W term gets smaller, the index of difficulty of the task increases. The extreme case of this is when the target is a point. In this paper, we show that selection in such cases can be facilitated if the cursor is an area, rather than a point. Furthermore, we show that when the target is a point and the width of the cursor is W, Fitts' law still holds. An experiment is presented and the implications of the technique are discussed for both 2D and 3D interfaces.
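The abstract above turns on Fitts' index of difficulty (ID) and the observation that an area cursor lets the cursor's width play the role of W. A minimal sketch of that computation, assuming the Shannon formulation ID = log2(A/W + 1) and illustrative regression coefficients a and b (the paper's actual values are not reproduced here):

```python
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Shannon-formulation Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1.0)

def predicted_movement_time(a: float, b: float,
                            amplitude: float, width: float) -> float:
    """Predicted movement time MT = a + b * ID (a, b fit by regression)."""
    return a + b * index_of_difficulty(amplitude, width)

# A point target acquired with a point cursor has W -> 0, so ID -> infinity.
# With an area cursor of width W, the same selection has a finite ID:
print(index_of_difficulty(amplitude=256, width=32))  # log2(9) ≈ 3.17 bits
```

The symmetry the paper exploits is visible in the formula: ID depends only on the ratio A/W, so it does not matter whether W belongs to the target or to the cursor.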
© All rights reserved Kabbash and Buxton and/or ACM Press
Harrison, Beverly L., Ishii, Hiroshi, Vicente, Kim J. and Buxton, Bill (1995): Transparent Layered User Interfaces: An Evaluation of a Display Design to Enhance Focused and Divided Attention. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 317-324.
This paper describes a new research program investigating graphical user interfaces from an attentional perspective (as opposed to a more traditional visual perception approach). The central research issue is how we can better support both focusing attention on a single interface object (without distraction from other objects) and dividing or time sharing attention between multiple objects (to preserve context or global awareness). This attentional trade-off seems to be a central but as yet comparatively ignored issue in many interface designs. To this end, this paper proposes a framework for classifying and evaluating user interfaces with semi-transparent windows, menus, dialogue boxes, screens, or other objects. Semi-transparency fits into a more general proposed display design space of "layered" interface objects. We outline the design space, task space, and attentional issues which motivated our research. Our investigation comprises both empirical evaluation and more realistic application usage. This paper reports on the empirical results and summarizes some of the application findings.
© All rights reserved Harrison et al. and/or ACM Press
Fitzmaurice, George W., Ishii, Hiroshi and Buxton, Bill (1995): Bricks: Laying the Foundations for Graspable User Interfaces. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 442-449.
We introduce the concept of Graspable User Interfaces that allow direct control of electronic or virtual objects through physical handles for control. These physical artifacts, which we call "bricks," are essentially new input devices that can be tightly coupled or "attached" to virtual objects for manipulation or for expressing action (e.g., to set parameters or for initiating processes). Our bricks operate on top of a large horizontal display surface known as the "ActiveDesk." We present four stages in the development of Graspable UIs: (1) a series of exploratory studies on hand gestures and grasping; (2) interaction simulations using mock-ups and rapid prototyping tools; (3) a working prototype and sample application called GraspDraw; and (4) the initial integration of the Graspable UI concepts into a commercial application. Finally, we conclude by presenting a design space for Bricks which lays the foundation for further exploring and developing Graspable User Interfaces.
© All rights reserved Fitzmaurice et al. and/or ACM Press
Buxton, Bill (1995): Proximal Sensing: Supporting Context Sensitive Interaction. In: Robertson, George G. (ed.) Proceedings of the 8th annual ACM symposium on User interface and software technology November 15 - 17, 1995, Pittsburgh, Pennsylvania, United States. p. 169.
This talk addresses the issue of increasing complexity for the user that accompanies new functionality. Briefly, we discuss how complexity can, through appropriate design, be off-loaded to the system -- at least for secondary commands. Consider photography, for example. The 35 mm SLR of a decade ago was analogous to MS-DOS. You could do everything in theory, but in practice, were unlikely to do anything without making an error. When we think of photography, however, we see that there are only two primary decisions: "what" and "when", which correspond to the two primary actions: "point" and "click". By embedding domain-specific knowledge, modern cameras off-load all other decisions to the computer (a.k.a. camera) with the option of overriding the defaults. The net result is that the needs of the novice and expert are met with a single device. What we do in this presentation is talk about how this type of off-loading can be supported, and why this should be done. We do this by example, drawing mainly on the experiences of the Ontario Telepresence Project.
© All rights reserved Buxton and/or ACM Press
Buxton, Bill (1995): Integrating the periphery and context: A new model of telematics. In: Graphics Interface 95 May 17-19, 1995, Quebec, Quebec, Canada. pp. 239-246.
Baecker, Ronald M., Grudin, Jonathan, Buxton, Bill and Greenberg, Saul (eds.) (1995): Readings in Human-Computer Interaction: Toward the Year 2000. Morgan Kaufmann Publishers
Kurtenbach, Gordon and Buxton, Bill (1994): User Learning and Performance with Marking Menus. In: Adelson, Beth, Dumais, Susan and Olson, Judith S. (eds.) Proceedings of the ACM CHI 94 Human Factors in Computing Systems Conference April 24-28, 1994, Boston, Massachusetts. pp. 258-264.
A marking menu is designed to allow a user to perform a menu selection by either popping-up a radial (or pie) menu, or by making a straight mark in the direction of the desired menu item without popping-up the menu. Previous evaluations in laboratory settings have shown the potential for marking menus. This paper reports on a case study of user behavior with marking menus in a real work situation. The study demonstrates the following: First, marking menus are used as designed. When users become expert with the menus, marks are used extensively. However, the transition to using marks is not one way. Expert users still switch back to menus to refresh their memory of menu layout. Second, marking is an extremely efficient interaction technique. Using a mark on average was 3.5 times faster than using the menu. Finally, design principles can be followed that make menu item/mark associations easier to learn, and interaction efficient.
© All rights reserved Kurtenbach and Buxton and/or ACM Press
Bier, Eric A., Stone, Maureen C., Fishkin, Ken, Buxton, Bill and Baudel, Thomas (1994): A Taxonomy of See-Through Tools. In: Adelson, Beth, Dumais, Susan and Olson, Judith S. (eds.) Proceedings of the ACM CHI 94 Human Factors in Computing Systems Conference April 24-28, 1994, Boston, Massachusetts. pp. 358-364.
In current interfaces, users select objects, apply operations, and change viewing parameters in distinct steps that require switching attention among several screen areas. Our See-Through Interface software reduces steps by locating tools on a transparent sheet that can be moved over applications with one hand using a trackball, while the other hand controls a mouse cursor. The user clicks through a tool onto application objects, simultaneously selecting an operation and an operand. Tools may include graphical filters that display a customized view of application objects. Compared to traditional interactors, these tools save steps, require no permanent screen space, reduce temporal modes, apply to multiple applications, and facilitate customization. This paper presents a taxonomy of see-through tools that considers variations in each of the steps they perform. As examples, we describe particular see-through tools that perform graphical editing and text editing operations.
© All rights reserved Bier et al. and/or ACM Press
Kabbash, Paul, Buxton, Bill and Sellen, Abigail (1994): Two-Handed Input in a Compound Task. In: Adelson, Beth, Dumais, Susan and Olson, Judith S. (eds.) Proceedings of the ACM CHI 94 Human Factors in Computing Systems Conference April 24-28, 1994, Boston, Massachusetts. pp. 417-423.
Four techniques for performing a compound drawing/color selection task were studied: a unimanual technique, a bimanual technique where different hands controlled independent subtasks, and two other bimanual techniques in which the action of the right hand depended on that of the left. We call this latter class of two-handed technique "asymmetric dependent," and predict that because tasks of this sort most closely conform to bimanual tasks in the everyday world, they would give rise to the best performance. Results showed that one of the asymmetric bimanual techniques, called the Toolglass technique, did indeed give rise to the best overall performance. Reasons for the superiority of the technique are discussed in terms of their implications for design. These are contrasted with other kinds of two-handed techniques, and it is shown how, if designed inappropriately, two hands can be worse than one.
© All rights reserved Kabbash et al. and/or ACM Press
Zhai, Shumin, Buxton, Bill and Milgram, Paul (1994): The "Silk Cursor": Investigating Transparency for 3D Target Acquisition. In: Adelson, Beth, Dumais, Susan and Olson, Judith S. (eds.) Proceedings of the ACM CHI 94 Human Factors in Computing Systems Conference April 24-28, 1994, Boston, Massachusetts. pp. 459-464.
This study investigates dynamic 3D target acquisition. The focus is on the relative effect of specific perceptual cues. A novel technique is introduced and we report on an experiment that evaluates its effectiveness. There are two aspects to the new technique. First, in contrast to normal practice, the tracking symbol is a volume rather than a point. Second, the surface of this volume is semi-transparent, thereby affording occlusion cues during target acquisition. The experiment shows that the volume/occlusion cues were effective in both monocular and stereoscopic conditions. For some tasks where stereoscopic presentation is unavailable or infeasible, the new technique offers an effective alternative.
© All rights reserved Zhai et al. and/or ACM Press
MacKenzie, I. Scott and Buxton, Bill (1994): Prediction of Pointing and Dragging Times in Graphical User Interfaces. In Interacting with Computers, 6 (2) pp. 213-227.
An experiment is described which demonstrates that the point-drag sequence common on interactive systems can be modelled as two separate Fitts law tasks -- a point-select task followed by a drag-select task. Strong prediction models were built; however, comparisons with previous models were not as close as the standard error coefficients implied. Caution is therefore warranted in follow-up applications of models built in research settings. Additionally, the previous claim that target height is the appropriate substitute for target width in calculating Fitts' index of difficulty in dragging tasks was not supported. The experiment described varied the dragging target's width and height independently. Models using the horizontal width of the drag target or the smaller of the target's width or height outperformed the target height model.
© All rights reserved MacKenzie and Buxton and/or Elsevier Science
Kurtenbach, Gordon, Moran, Thomas P. and Buxton, Bill (1994): Contextual animation of gestural commands. In: Graphics Interface 94 May 18-20, 1994, Banff, Alberta, Canada. pp. 83-90.
Kurtenbach, Gordon, Moran, Thomas P. and Buxton, Bill (1994): Contextual Animation of Gestural Commands. In Comput. Graph. Forum, 13 (5) pp. 305-314.
Matias, Edgar, MacKenzie, I. Scott and Buxton, Bill (1993): Half-QWERTY: A One-Handed Keyboard Facilitating Skill Transfer from QWERTY. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. pp. 88-94.
Half-QWERTY is a new one-handed typing technique, designed to facilitate the transfer of two-handed typing skill to the one-handed condition. It is performed on a standard keyboard, or a special half keyboard (with full-sized keys). In an experiment using touch typists, hunt-and-peck typing speeds were surpassed after 3-4 hours of practice. Subjects reached 50% of their two-handed typing speed after about 8 hours. After 10 hours, all subjects typed between 41% and 73% of their two-handed speed, ranging from 23.8 to 42.8 wpm. These results are important in providing access to disabled users, and for the design of compact computers. They also bring into question previous research claiming finger actions of one hand map to the other via spatial congruence rather than mirror image.
© All rights reserved Matias et al. and/or ACM Press
Hardock, Gary, Kurtenbach, Gordon and Buxton, Bill (1993): A Marking Based Interface for Collaborative Writing. In: Hudson, Scott E., Pausch, Randy, Zanden, Brad Vander and Foley, James D. (eds.) Proceedings of the 6th annual ACM symposium on User interface software and technology 1993, Atlanta, Georgia, United States. pp. 259-266.
We describe a system to support a particular model of document creation. In this model, the document flows from the primary author to one or more collaborators. They annotate it, then return it to the author who makes the final changes. Annotations are made using conventional marks, typically using a stylus. The intent is to match the flow and mark-up of paper documents observed in the everyday world. The system is very much modeled on Wang Freestyle (Perkins, Watt, Workman and Ehrlich, 1989; Francik and Akagi, 1989; Levine and Ehrlich, in press). Our contribution is to incorporate mark recognition into the system and to explore some novel navigation tools that are enabled by the higher-level data structures that we use. The system is described and the results of initial user-testing are reported.
© All rights reserved Hardock et al. and/or ACM Press
Kurtenbach, Gordon, Sellen, Abigail and Buxton, Bill (1993): An Empirical Evaluation of Some Articulatory and Cognitive Aspects of Marking Menus. In Human-Computer Interaction, 8 (1) pp. 1-23.
We describe marking menus, an extension of pie menus, which are well suited for stylus-based interfaces. Pie menus are circular menus subdivided into sectors, each of which might correspond to a different command. One moves the cursor from the center of the pie into the desired sector. Marking menus are invisible pie menus in which the movement of the cursor during a selection leaves an "ink trail" similar to a pen stroke on paper. The combination of a pie menu and a marking menu supports an efficient transition from novice to expert performance. Novices can "pop-up" a pie menu and make a selection, whereas experts can simply make the corresponding mark without waiting for the menu to appear. This article describes an experiment in which we explored both articulatory and cognitive aspects of marking menus for different numbers of items per menu and using different input devices (mouse, trackball, and stylus). The articulatory aspects are how well subjects could execute the physical actions necessary to select from pie marking menus. Articulatory aspects were investigated by presenting one group of subjects with the task of selecting from fully visible menus. Because one feature of marking menus is that users should be able to select from them without seeing the menus (by making a mark), we also ran two groups of subjects with invisible pie menus: one group with an ink trail and one without. These subjects were therefore faced with the task of either mentally representing the menu or associating marks with the commands they invoked through practice. These then are the cognitive aspects to which we refer. Our results indicate that subjects' performance degraded as the number of items increased. When menus were hidden, however, subjects' performance did not degrade as rapidly when menus contained even numbers of items. We also found subjects performed better with the mouse and stylus than with the trackball.
© All rights reserved Kurtenbach et al. and/or Taylor and Francis
Buxton, Bill (1993): HCI and the Inadequacies of Direct Manipulation Systems. In ACM SIGCHI Bulletin, 25 (1) pp. 21-22.
The Direct Manipulation (DM) style of user interface made popular by the Macintosh is becoming a de facto standard. Rather than being taken as a point of departure, it appears to be taken more as a standard to achieve. Using the specification of scope as an example, DM interfaces are shown to be deficient in supporting a transaction fundamental to word processing, information retrieval and CAD. This essay is a plea for designers to break out of the complacency that surrounds the DM approach. It also calls into question the methodologies of HCI for the very limited degree to which they have challenged the DM approach and their paucity of ideas for generating strong new alternatives.
© All rights reserved Buxton and/or ACM Press
MacKenzie, I. Scott and Buxton, Bill (1993): A Tool for the Rapid Evaluation of Input Devices Using Fitts' Law Models. In ACM SIGCHI Bulletin, 25 (3) pp. 58-63.
A tool for building Fitts' law models is described. MODEL BUILDER runs on the Apple Macintosh using any device which connects to the Apple Desktop Bus. After 16 blocks of trials taking about 4-5 minutes, the program provides an immediate (albeit tentative) statistical analysis, showing the coefficients in the prediction equation, the coefficient of correlation, and a regression line with scatter points. MODEL BUILDER can be retrieved anonymously by researchers, educators, developers, or anyone with access to INTERNET through file-transfer-protocol (FTP).
© All rights reserved MacKenzie and Buxton and/or ACM Press
Gaver, William W., Moran, Thomas P., MacLean, Allan, Lovstrand, Lennart, Dourish, Paul, Carter, Kathleen and Buxton, Bill (1992): Realizing a Video Environment: EuroPARC's RAVE System. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 27-35.
At EuroPARC, we have been exploring ways to allow physically separated colleagues to work together effectively and naturally. In this paper, we briefly discuss several examples of our work in the context of three themes that have emerged: the need to support the full range of shared work; the desire to ensure privacy without giving up unobtrusive awareness; and the possibility of creating systems which blur the boundaries between people, technologies and the everyday world.
© All rights reserved Gaver et al. and/or ACM Press
MacKenzie, I. Scott and Buxton, Bill (1992): Extending Fitts' Law to Two-Dimensional Tasks. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 219-226.
Fitts' law, a one-dimensional model of human movement, is commonly applied to two-dimensional target acquisition tasks on interactive computing systems. For rectangular targets, such as words, it is demonstrated that the model can break down and yield unrealistically low (even negative!) ratings for a task's index of difficulty (ID). The Shannon formulation is shown to partially correct this problem, since ID is always >= 0 bits. As well, two alternative interpretations of "target width" are introduced that accommodate the two-dimensional nature of tasks. Results of an experiment are presented that show a significant improvement in the model's performance using the suggested changes.
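The abstract above contrasts the original Fitts formulation, which can yield negative IDs for wide, close targets, with the Shannon formulation, which is bounded below by zero. A hedged sketch of that contrast, plus the "smaller-of" two-dimensional substitution the paper discusses (amplitude and width values here are illustrative, not the paper's data):

```python
import math

def id_classic(a: float, w: float) -> float:
    """Original Fitts formulation: log2(2A / W). Can go negative when W > 2A."""
    return math.log2(2 * a / w)

def id_shannon(a: float, w: float) -> float:
    """Shannon formulation: log2(A / W + 1). Always >= 0 bits."""
    return math.log2(a / w + 1.0)

def id_smaller_of(a: float, w: float, h: float) -> float:
    """A 2D interpretation: substitute min(W, H) for target width."""
    return id_shannon(a, min(w, h))

# A wide, close rectangular target (e.g. a word near the cursor):
print(id_classic(a=20, w=80))   # log2(0.5) = -1.0  (unrealistic, negative)
print(id_shannon(a=20, w=80))   # log2(1.25) ≈ 0.32 (non-negative)
```

The negative classic ID for the first call is exactly the breakdown the abstract describes; the Shannon term A/W + 1 can never fall below 1, so its logarithm can never fall below 0.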
© All rights reserved MacKenzie and Buxton and/or ACM Press
Sellen, Abigail, Buxton, Bill and Arnott, John (1992): Using Spatial Cues to Improve Videoconferencing. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 651-652.
Sellen, Abigail, Kurtenbach, Gordon and Buxton, Bill (1992): The Prevention of Mode Errors Through Sensory Feedback. In Human-Computer Interaction, 7 (2) pp. 141-164.
The use of different kinds of feedback in preventing mode errors was investigated. Two experiments examined the frequency of mode errors in a text-editing task where a mode error was defined as an attempt to issue navigational commands while in insert mode, or an attempt to insert text while in command mode. In Experiment 1, the effectiveness of kinesthetic versus visual feedback was compared in four different conditions: the use of keyboard versus foot pedal for changing mode (kinesthetic feedback), crossed with the presence or absence of visual feedback to indicate mode. The results showed both kinesthetic and visual feedback to be effective in reducing mode errors. However, kinesthetic feedback was more effective than visual feedback, both in reducing errors and in reducing the cognitive load associated with mode changes. Experiment 2 tested the hypothesis that the superiority of this kinesthetic feedback was due to the fact that the foot pedal required subjects actively to maintain insert mode. The results confirmed that the use of a nonlatching foot pedal for switching modes provided a more salient source of information on mode state than the use of a latching pedal. On the basis of these results, we argue that user-maintained mode states prevent mode errors more effectively than system-maintained mode states.
© All rights reserved Sellen et al. and/or Taylor and Francis
Buxton, Bill (1992): Telepresence: Integrating shared task and person spaces. In: Graphics Interface 92 May 11-15, 1992, Vancouver, British Columbia, Canada. pp. 123-129.
MacKenzie, I. Scott, Sellen, Abigail and Buxton, Bill (1991): A Comparison of Input Devices in Elemental Pointing and Dragging Tasks. In: Robertson, Scott P., Olson, Gary M. and Olson, Judith S. (eds.) Proceedings of the ACM CHI 91 Human Factors in Computing Systems Conference April 28 - June 5, 1991, New Orleans, Louisiana. pp. 161-166.
An experiment is described comparing three devices (a mouse, a trackball, and a stylus with tablet) in the performance of pointing and dragging tasks. During pointing, movement times were shorter and error rates were lower than during dragging. It is shown that Fitts' law can model both tasks, and that within devices the index of performance is higher when pointing than when dragging. Device differences also appeared. The stylus displayed a higher rate of information processing than the mouse during pointing but not during dragging. The trackball ranked third for both tasks.
© All rights reserved MacKenzie et al. and/or ACM Press
Mantei, Marilyn, Baecker, Ronald M., Sellen, Abigail, Buxton, Bill, Milligan, Thomas and Wellman, Barry (1991): Experiences in the Use of a Media Space. In: Robertson, Scott P., Olson, Gary M. and Olson, Judith S. (eds.) Proceedings of the ACM CHI 91 Human Factors in Computing Systems Conference April 28 - June 5, 1991, New Orleans, Louisiana. pp. 203-208.
A media space is a system that uses integrated video, audio, and computers to allow individuals and groups to work together despite being distributed spatially and temporally. Our media space, CAVECAT (Computer Audio Video Enhanced Collaboration And Telepresence), enables a small number of individuals or groups located in separate offices to engage in collaborative work without leaving their offices. This paper presents and summarizes our experiences during initial use of CAVECAT, including unsolved technological obstacles we have encountered, and the psychological and social impact of the technology. Where possible we discuss relevant findings from the psychological literature, and implications for design of the next-generation media space.
© All rights reserved Mantei et al. and/or ACM Press
Baecker, Ronald M., Mantei, Marilyn, Buxton, Bill and Fiume, Eugene (1991): The University of Toronto Dynamic Graphics Project. In: Robertson, Scott P., Olson, Gary M. and Olson, Judith S. (eds.) Proceedings of the ACM CHI 91 Human Factors in Computing Systems Conference April 28 - June 5, 1991, New Orleans, Louisiana. pp. 467-468.
Kurtenbach, Gordon and Buxton, Bill (1991): Issues in Combining Marking and Direct Manipulation Techniques. In: Rhyne, James R. (ed.) Proceedings of the 4th annual ACM symposium on User interface software and technology Hilton Head, South Carolina, United States, 1991, Hilton Head, South Carolina, United States. pp. 137-144.
The direct manipulation paradigm has been effective in helping designers create easy to use mouse and keyboard based interfaces. The development of flat display surfaces and transparent tablets is now making possible interfaces where a user can write directly on the screen using a special stylus. The intention of these types of interfaces is to exploit users' existing handwriting, mark-up and drawing skills while also providing the benefits of direct manipulation. This paper reports on a test bed program which we are using for exploring hand-marking types of interactions and their integration with direct manipulation interactions.
© All rights reserved Kurtenbach and Buxton and/or ACM Press
Kurtenbach, Gordon and Buxton, Bill (1991): GEdit: A Test Bed for Editing by Contiguous Gestures. In ACM SIGCHI Bulletin, 23 (2) pp. 22-26.
Buxton, Bill (1990): A Three-State Model of Graphical Input. In: Diaper, Dan, Gilmore, David J., Cockton, Gilbert and Shackel, Brian (eds.) INTERACT 90 - 3rd IFIP International Conference on Human-Computer Interaction August 27-31, 1990, Cambridge, UK. pp. 449-456.
A model to help characterize graphical input is presented. It is a refinement of a model first introduced by Buxton, Hill and Rowley (1985). The importance of the model is that it can characterize both many of the demands of interactive transactions, and many of the capabilities of input transducers. Hence, it provides a simple and usable means to aid finding a match between the two. After an introduction, an overview of approaches to categorizing input is presented. The model is then described and discussed in terms of a number of different input technologies and techniques.
© All rights reserved Buxton and/or North-Holland
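The three-state model lends itself naturally to a small state machine. The sketch below is an illustrative rendering, not code from the paper: the state numbers follow the model (State 0 out of range, State 1 tracking, State 2 dragging), while the event names ("touch", "press", and so on) are our own assumptions.

```python
# Minimal sketch of Buxton's three-state model of graphical input.
# State 0 = out of range, State 1 = tracking, State 2 = dragging.
# Event names are illustrative assumptions, not taken from the paper.

TRANSITIONS = {
    (0, "touch"): 1,    # transducer comes into sensing range
    (1, "lift"): 0,     # transducer leaves sensing range
    (1, "press"): 2,    # tip switch / button down: begin dragging
    (2, "release"): 1,  # button up: return to tracking
}

def step(state, event):
    """Return the next state; ignore events that do not apply."""
    return TRANSITIONS.get((state, event), state)
```

The model's value shows up in which states a transducer can reach: a mouse never leaves States 1 and 2 (it is always "in range"), while a plain touch tablet offers only States 0 and 1. Comparing the states a technique demands against the states a device supplies gives the matching test the abstract describes.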
Sellen, Abigail, Kurtenbach, Gordon and Buxton, Bill (1990): The Role of Visual and Kinesthetic Feedback in the Prevention of Mode Errors. In: Diaper, Dan, Gilmore, David J., Cockton, Gilbert and Shackel, Brian (eds.) INTERACT 90 - 3rd IFIP International Conference on Human-Computer Interaction August 27-31, 1990, Cambridge, UK. pp. 667-673.
The use of visual and kinesthetic feedback in preventing mode errors was investigated. Mode errors were defined in the context of text editing as attempting to issue navigation commands while in insert mode, or attempting to insert text while in command mode. Twelve novices and twelve expert users of the Unix-based text editor vi performed a simple text editing task in conjunction with a distractor task in four different conditions. These conditions consisted of comparing the use of keyboard versus foot pedal for changing mode, crossed with the presence or absence of visual feedback to indicate mode. Both visual and kinesthetic feedback were effective in reducing mode errors, although for experts visual feedback was redundant given that they were using a foot pedal. Other measures of system usability indicate the superiority of the use of a foot pedal over visual feedback in delivering system state information for this type of task.
© All rights reserved Sellen et al. and/or North-Holland
Brown, Ed, Buxton, Bill and Murtagh, Kevin (1990): Windows on Tablets as a Means of Achieving Virtual Input Devices. In: Diaper, Dan, Gilmore, David J., Cockton, Gilbert and Shackel, Brian (eds.) INTERACT 90 - 3rd IFIP International Conference on Human-Computer Interaction August 27-31, 1990, Cambridge, UK. pp. 675-681.
Users of computer systems are often constrained by the limited number of physical devices at their disposal. For displays, window systems have proven an effective way of addressing this problem. As commonly used, a window system partitions a single physical display into a number of different virtual displays. It is our objective to demonstrate that the model is also useful when applied to input. We show how the surface of a single input device, a tablet, can be partitioned into a number of virtual input devices. The demonstration makes a number of important points. First, it demonstrates that such usage can improve the power and flexibility of the user interfaces that we can implement with a given set of resources. Second, it demonstrates a property of tablets that distinguishes them from other input devices, such as mice. Third, it shows how the technique can be particularly effective when implemented using a touch-sensitive tablet. And finally, it describes the implementation of a prototype "input window manager" that greatly facilitates our ability to develop user interfaces using the technique. The research described has significant implications for direct manipulation interfaces, rapid prototyping, tailorability, and user interface management systems.
© All rights reserved Brown et al. and/or North-Holland
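The core idea, partitioning one tablet into several virtual input devices, can be sketched in a few lines. This is a hypothetical illustration of the "input window manager" concept; the class and function names are our assumptions, not the paper's API.

```python
# Hypothetical sketch of an "input window manager": the tablet surface is
# partitioned into rectangular windows, each acting as an independent
# virtual input device. Names and structure are illustrative assumptions.

class VirtualDevice:
    def __init__(self, name, x0, y0, x1, y1):
        self.name = name
        self.bounds = (x0, y0, x1, y1)  # half-open rectangle on the tablet

    def contains(self, x, y):
        x0, y0, x1, y1 = self.bounds
        return x0 <= x < x1 and y0 <= y < y1

    def local_coords(self, x, y):
        # Report coordinates relative to the window, so each virtual
        # device behaves like a small tablet of its own.
        x0, y0, _, _ = self.bounds
        return (x - x0, y - y0)

def dispatch(devices, x, y):
    """Route one tablet sample to the virtual device it falls within."""
    for d in devices:
        if d.contains(x, y):
            return d.name, d.local_coords(x, y)
    return None  # sample fell outside every virtual device
```

Note that this scheme depends on the tablet's absolute coordinates: a mouse reports only relative motion, so its samples cannot be routed by position. That is the property distinguishing tablets from mice that the abstract alludes to.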
Mountford, S. Joy, Buxton, Bill, Krueger, Myron W., Laurel, Brenda K. and Vertelney, Laurie (1989): Drama and Personality in User Interface Design. In: Bice, Ken and Lewis, Clayton H. (eds.) Proceedings of the ACM CHI 89 Human Factors in Computing Systems Conference April 30 - June 4, 1989, Austin, Texas. pp. 105-108.
The title of this panel immediately leaps out as being out of place. Of all the things that come to mind when one thinks of computers and user interfaces, drama and personality are among the last. The point here is not to make using computers more dramatic, per se, but to learn and borrow from the performing arts techniques that could improve mainstream interface design. The contributions described in this panel are borrowed from theater, film production, and music. In all the panelists' work, the user is at the very center of creating the actual user interface experience, either through direct user participation or via engaging the individual viewer's personality. The panelists' pioneering research has produced several examples of new user interface experiences and designs. The discussion will focus on which techniques offer the most promise for facilitating the design of genuinely new experiential user interfaces.
© All rights reserved Mountford et al. and/or ACM Press
Buxton, Bill (1989): Introduction to this Special Issue on Nonspeech Audio. In Human-Computer Interaction, 4 (1) pp. 1-9.
Buxton, Bill (1989): On the Road to Brighton. In ACM SIGCHI Bulletin, 20 (4) pp. 16-17.
Thomas, John C., Brown, John Seely, Buxton, Bill, Curtis, Bill, Landauer, Thomas K., Malone, Thomas W. and Shneiderman, Ben (1986): Human Computer Interaction in the Year 2000. In: Mantei, Marilyn and Orbeton, Peter (eds.) Proceedings of the ACM CHI 86 Human Factors in Computing Systems Conference April 13-17, 1986, Boston, Massachusetts. pp. 253-255.
Much of the work in the field of computer human interaction consists of finding out what is wrong with existing interfaces or which of several existing alternatives is better. Over the next few decades, the possibilities for computer human interaction will explode. This will be due to: 1) continued decrease in the costs of processing and memory, 2) new technologies being invented and existing technologies (e.g., handwriting recognition, speech synthesis) being extended, 3) new applications and 4) new ideas about how people can interact with computers. While changes along these lines are bound to occur, we need not take the view that investigators in human-computer interaction are to be passive observers of some uncontrolled and uncontrollable evolution. Indeed, we can help steer this process by visions of what the future of human computer interaction could and should be like.
© All rights reserved Thomas et al. and/or ACM Press
Buxton, Bill, Scadden, Lawrence A., Foulds, Richard, Shein, Fraser, Rosenstein, Mark and Vanderheiden, Gregg C. (1986): Human Interface Design and the Handicapped User. In: Mantei, Marilyn and Orbeton, Peter (eds.) Proceedings of the ACM CHI 86 Human Factors in Computing Systems Conference April 13-17, 1986, Boston, Massachusetts. pp. 291-297.
Buxton, Bill and Myers, Brad A. (1986): A Study in Two-Handed Input. In: Mantei, Marilyn and Orbeton, Peter (eds.) Proceedings of the ACM CHI 86 Human Factors in Computing Systems Conference April 13-17, 1986, Boston, Massachusetts. pp. 321-326.
Two experiments were run to investigate two-handed input. The experimental tasks were representative of those found in CAD and office information systems. Experiment one involved the performance of a compound selection/positioning task. The two sub-tasks were performed by different hands using separate transducers. Without prompting, novice subjects adopted strategies that involved performing the two sub-tasks simultaneously. We interpret this as a demonstration that, in the appropriate context, users are capable of simultaneously providing continuous data from two hands without significant overhead. The results also show that the speed of performing the task was strongly correlated to the degree of parallelism employed. Experiment two involved the performance of a compound navigation/selection task. It compared a one-handed versus two-handed method for finding and selecting words in a document. The two-handed method significantly outperformed the commonly used one-handed method by a number of measures. Unlike experiment one, only two subjects adopted strategies that used both hands simultaneously. The benefits of the two-handed technique, therefore, are interpreted as being due to efficiency of hand motion. However, the two subjects who did use parallel strategies had the two fastest times of all subjects.
© All rights reserved Buxton and Myers and/or ACM Press
Lee, S. K., Buxton, Bill and Smith, K. C. (1985): A Multi-Touch Three Dimensional Touch-Sensitive Tablet. In: Borman, Lorraine and Curtis, Bill (eds.) Proceedings of the ACM CHI 85 Human Factors in Computing Systems Conference April 14-18, 1985, San Francisco, California. pp. 21-25.
A prototype touch-sensitive tablet is presented. The tablet's main innovation is that it is capable of sensing more than one point of contact at a time. In addition to being able to provide position coordinates, the tablet also gives a measure of degree of contact, independently for each point of contact. In order to enable multi-touch sensing, the tablet surface is divided into a grid of discrete points. The points are scanned using a recursive area subdivision algorithm. In order to minimize the resolution lost due to the discrete nature of the grid, a novel interpolation scheme has been developed. Finally, the paper briefly discusses how multi-touch sensing, interpolation, and degree of contact sensing can be combined to expand our vocabulary in human-computer interaction.
© All rights reserved Lee et al. and/or ACM Press
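The recursive area subdivision the abstract mentions can be illustrated with a short sketch. This is a simulation under stated assumptions: the real tablet answers "is there any contact in this region?" with a hardware scan step, which we stand in for with a set of touched grid cells; the function names are ours.

```python
# Sketch of recursive area subdivision for locating multiple simultaneous
# contact points on a discrete sensing grid, in the spirit of the tablet
# described above. The hardware area-scan primitive is simulated by
# `any_contact`; in the real device this is a single scan step.

def any_contact(touched, x0, y0, x1, y1):
    # Stand-in for the tablet's region query (half-open rectangle).
    return any(x0 <= x < x1 and y0 <= y < y1 for x, y in touched)

def find_contacts(touched, x0, y0, x1, y1):
    """Return all touched cells in the region via recursive subdivision."""
    if not any_contact(touched, x0, y0, x1, y1):
        return []  # prune: whole region is empty, no need to scan cells
    if x1 - x0 == 1 and y1 - y0 == 1:
        return [(x0, y0)]  # single cell with contact
    xm = (x0 + x1 + 1) // 2  # split point (degenerates safely at width 1)
    ym = (y0 + y1 + 1) // 2
    results = []
    for (ax0, ay0, ax1, ay1) in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                                 (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        if ax0 < ax1 and ay0 < ay1:  # skip empty quadrants
            results += find_contacts(touched, ax0, ay0, ax1, ay1)
    return results
```

The payoff of subdivision is that large empty regions are dismissed in one query instead of cell by cell, so scan time grows with the number of contacts rather than the size of the grid.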
Buxton, Bill, Bly, Sara A., Frysinger, Steven P., Lunney, David, Mansur, Douglass L., Mezrich, Joseph J. and Morrison, Robert C. (1985): Communicating with Sound. In: Borman, Lorraine and Curtis, Bill (eds.) Proceedings of the ACM CHI 85 Human Factors in Computing Systems Conference April 14-18, 1985, San Francisco, California. pp. 115-119.
The Communicating with Sound panel for CHI'85 will focus on ways of expanding the user interface by using sound as a significant means of output. As communication from the computer to the user has progressed from large (and often smeary) printout to a teletypewriter and, finally, to the multi-window workstation displays of today, the emphasis has remained primarily on visual output. Although many user terminals and workstations have the capability of generating sound, that capability is rarely used for more than audio cues (indicating status such as an error condition or task completion) and simple musical tunes. Research shows that sounds convey meaningful information to users. With examples of such research, the panel members will demonstrate a variety of uses of sound output, discuss issues raised by the work, and suggest further directions. The intent of the panel is to stimulate thinking about expanding the user interface and to discuss areas for future research. In the statements that follow, each panelist will describe his or her own work, including the data and audio dimensions used, the value of the research, remaining issues to be addressed, and suggestions for future research and application. A list of references is included for those who wish further reading.
© All rights reserved Buxton et al. and/or ACM Press
Greif, Irene, Buxton, Bill, MacGregor, Scott, Reed, David R. and Tesler, Larry (1985): Microcomputer User Interface Toolkits: The Commercial State-of-the-Art. In: Borman, Lorraine and Curtis, Bill (eds.) Proceedings of the ACM CHI 85 Human Factors in Computing Systems Conference April 14-18, 1985, San Francisco, California. p. 225.
A well-designed user interface is a very valuable asset: the best available today are based on hundreds of man-years of work combining results of research in human factors, tasteful design reviewed and modified through extensive end-user testing, and many rounds of implementation effort. As a result, the user interface "toolkit" is emerging as the hottest new software item. A toolkit can provide software developers with a programming environment in which the user interface coding is already done so that new applications programs can automatically be integrated with other workstation functions. The panel will evaluate this new trend. Tesler and MacGregor will present the designs of the leading toolkit products from Apple and Microsoft, respectively. Reed will analyze the choices from the point of view of the third party software vendors' requirements. Noting that the effort going into these products may well result in de facto standard setting, Buxton will question the appropriateness of making this commitment based on microcomputer hardware.
© All rights reserved Greif et al. and/or ACM Press
Buxton, Bill, Hill, Rosco and Rowley, Peter (1985): Issues and techniques in touch-sensitive tablet input. In: Graphics Interface 85 May 27-31, 1985, Montreal, Quebec, Canada. pp. 147-149.
Lee, S. K., Buxton, Bill and Smith, K. C. (1985): A multi-touch three dimensional touch-sensitive tablet. In: Graphics Interface 85 May 27-31, 1985, Montreal, Quebec, Canada. pp. 221-222.
Buxton, Bill, Lamb, Martin R., Sherman, David and Smith, K. C. (1983): MENULAY: An automatic program generation module for a user interface management system. In: Graphics Interface 83 May 9-13, 1983, Edmonton, Alberta, Canada. p. 169.
Buxton, Bill, Fiume, Eugene, Hill, Ralph, Lee, Alison and Woo, Carson C. (1983): Continuous hand-gesture driven input. In: Graphics Interface 83 May 9-13, 1983, Edmonton, Alberta, Canada. pp. 191-195.
Buxton, Bill (1982): An informal study of selection-positioning tasks. In: Graphics Interface 82 May 17-21, 1982, Toronto, Ontario, Canada. pp. 323-328.