KinectArms: a toolkit for capturing and displaying arm embodiments in distributed tabletop groupware

Genest, A., Gutwin, C., Tang, A., Kalyn, M., and Ivkovic, Z. (2013). KinectArms: a toolkit for capturing and displaying arm embodiments in distributed tabletop groupware. In CSCW '13: Proceedings of the 2013 conference on Computer supported cooperative work, 157--166.

Acceptance rate: 35.5% (139 of 390 submissions).

Abstract

Gestures are a ubiquitous part of human communication over tables, but when tables are distributed, gestures become difficult to capture and represent. There are several problems: extracting arm images from video, representing the height of the gesture, and making the arm embodiment visible and understandable at the remote table. Current solutions to these problems are often expensive, complex to use, and difficult to set up. We have developed a new toolkit -- KinectArms -- that quickly and easily captures and displays arm embodiments. KinectArms uses a depth camera to segment the video and determine gesture height, and provides several visual effects for representing arms, showing gesture height, and enhancing visibility. KinectArms lets designers add rich arm embodiments to their systems without undue cost or development effort, greatly improving the expressiveness and usability of distributed tabletop groupware.
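The abstract's core idea -- using a depth camera to segment arms from the tabletop and recover gesture height -- can be illustrated with a minimal depth-thresholding sketch. This is not the KinectArms API; it is a hypothetical example assuming a calibrated depth of the empty table surface and per-pixel depth readings in millimetres, with made-up threshold values:

```python
import numpy as np

def segment_arms(depth_mm, table_depth_mm, min_height_mm=10, max_height_mm=500):
    """Segment pixels above the tabletop and estimate gesture height.

    depth_mm: 2-D array of per-pixel depth readings from the camera (mm).
    table_depth_mm: calibrated depth of the empty table surface (mm).
    Returns a boolean arm mask and each masked pixel's height above the table.
    (Illustrative sketch only; thresholds are assumptions, not KinectArms values.)
    """
    height_mm = table_depth_mm - depth_mm              # distance above the table
    mask = (height_mm > min_height_mm) & (height_mm < max_height_mm)
    return mask, np.where(mask, height_mm, 0.0)

# Toy 2x3 depth frame with the table surface at 1000 mm:
depth = np.array([[1000.0, 960.0, 1000.0],
                  [ 850.0, 800.0, 1000.0]])
mask, heights = segment_arms(depth, table_depth_mm=1000.0)
# mask marks the three pixels closer to the camera than the table surface,
# and heights gives their elevation (40, 150, 200 mm)
```

In a real pipeline the mask would be used to cut the arm pixels out of the colour video, and the per-pixel heights would drive visual effects (e.g. shadows or transparency) that convey how far a gesture is above the remote table.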

Materials

PDF File (http://hcitang.org/papers/2013-cscw2013-kinectarms.pdf)
Video (http://hcitang.org/papers/2013-cscw2013-kinectarms.m4v)
DOI (http://doi.acm.org/10.1145/2441776.2441796)

Keywords

Distributed tabletops, gestures, embodiments, toolkits

BibTeX

@inproceedings{genest2013kinectarms,
  year = {2013},
  videourl = {http://hcitang.org/papers/2013-cscw2013-kinectarms.m4v},
  type = {conference},
  title = {{KinectArms}: a toolkit for capturing and displaying arm embodiments in
distributed tabletop groupware},
  publisher = {ACM},
  pdfurl = {http://hcitang.org/papers/2013-cscw2013-kinectarms.pdf},
  pages = {157--166},
  location = {San Antonio, Texas, USA},
  keywords = {Distributed tabletops, gestures, embodiments, toolkits},
  isbn = {978-1-4503-1331-5},
  doi = {10.1145/2441776.2441796},
  date-modified = {2014-01-11 06:56:05 +0000},
  booktitle = {CSCW '13: Proceedings of the 2013 conference on Computer supported
cooperative work},
  author = {Genest, Aaron M. and Gutwin, Carl and Tang, Anthony and Kalyn, Michael
and Ivkovic, Zenja},
  address = {New York, NY, USA},
  acceptance = {35.5\% - 139/390},
  abstract = {Gestures are a ubiquitous part of human communication over tables,
but when tables are distributed, gestures become difficult to capture and represent.
There are several problems: extracting arm images from video, representing
the height of the gesture, and making the arm embodiment visible and understandable
at the remote table. Current solutions to these problems are often expensive,
complex to use, and difficult to set up. We have developed a new toolkit --
KinectArms -- that quickly and easily captures and displays arm embodiments.
KinectArms uses a depth camera to segment the video and determine gesture height,
and provides several visual effects for representing arms, showing gesture
height, and enhancing visibility. KinectArms lets designers add rich arm embodiments
to their systems without undue cost or development effort, greatly improving
the expressiveness and usability of distributed tabletop groupware.},
}