Eliciting usable gestures for multi-display environments
Seyed, T., Burns, C., Costa Sousa, M., Maurer, F., and Tang, A. (2012). Eliciting usable gestures for multi-display environments. In ITS '12: Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces, 41--50.
Acceptance: 29% - 30/103.
Abstract
Multi-display environments (MDEs) have advanced rapidly in recent years, incorporating multi-touch tabletops, tablets, wall displays and even position tracking systems. Designers have proposed a variety of interesting gestures for use in an MDE, some of which involve a user moving their hands, arms, body or even a device itself. These gestures are often used as part of interactions to move data between the various components of an MDE, which is a longstanding research problem. But designers, not users, have created most of these gestures, and concerns over implementation issues such as recognition may have influenced their design. We performed a user study to elicit these gestures directly from users, but found a low level of convergence among the gestures produced. This lack of agreement is important and we discuss its possible causes and the implication it has for designers. To assist designers, we present the most prevalent gestures and some of the underlying conceptual themes behind them. We also provide analysis of how certain factors such as distance and device type impact the choice of gestures and discuss how to apply them to real-world systems.
Materials
PDF File (http://hcitang.org/papers/2012-its2012-eliciting-usable-gestures.pdf)
DOI (http://doi.acm.org/10.1145/2396636.2396643)
Keywords
Tabletop; gestures; multi-display environments; multi-surface environments; multi-display interaction; cross-device interaction; touch; mobile devices
BibTeX
@inproceedings{seyed2012eliciting,
year = {2012},
type = {conference},
title = {Eliciting usable gestures for multi-display environments},
publisher = {ACM},
pdfurl = {http://hcitang.org/papers/2012-its2012-eliciting-usable-gestures.pdf},
pages = {41--50},
location = {Cambridge, Massachusetts, USA},
keywords = {Tabletop; gestures; multi-display environments; multi-surface environments;
multi-display interaction; cross-device interaction; touch; mobile devices},
isbn = {978-1-4503-1209-7},
doi = {http://doi.acm.org/10.1145/2396636.2396643},
date-modified = {2014-01-11 20:42:55 +0000},
booktitle = {ITS '12: Proceedings of the 2012 ACM international conference on Interactive
tabletops and surfaces},
author = {Seyed, Teddy and Burns, Chris and Costa Sousa, Mario and Maurer, Frank
and Tang, Anthony},
address = {New York, NY, USA},
acceptance = {29% - 30/103},
abstract = {Multi-display environments (MDEs) have advanced rapidly in recent years,
incorporating multi-touch tabletops, tablets, wall displays and even position
tracking systems. Designers have proposed a variety of interesting gestures
for use in an MDE, some of which involve a user moving their hands, arms, body
or even a device itself. These gestures are often used as part of interactions
to move data between the various components of an MDE, which is a longstanding
research problem. But designers, not users, have created most of these gestures,
and concerns over implementation issues such as recognition may have influenced
their design. We performed a user study to elicit these gestures directly from
users, but found a low level of convergence among the gestures produced. This
lack of agreement is important and we discuss its possible causes and the implication
it has for designers. To assist designers, we present the most prevalent gestures
and some of the underlying conceptual themes behind them. We also provide analysis
of how certain factors such as distance and device type impact the choice of
gestures and discuss how to apply them to real-world systems.},
}