Remote Embodiment for Shared Activity
Allowing others to see and understand us
When we interact with others at a distance, we are often limited to a shared artefact (e.g. a document). We can augment this with a tool such as Skype, adding both voice and video, though frequently this is not enough – it is easy to misunderstand what someone is saying, or to lose sight of what the other person is doing with respect to that document. What is missing is an embodiment of the remote collaborator: we cannot see their body, or that body's relationship with the shared artefact or space. In this project, we are exploring how different ways of embodying remote participants can support effective interaction.
VideoArms. Many of our explorations focus on embodying remote participants' arms and hands in a shared visual workspace (e.g. Genest et al., CSCW 2013; Yarosh et al., CSCW 2013; Tang, Pahud et al., CSCW 2010; Tang, Genest et al., CSCW 2010; Tang, Neustaedter et al., 2007; Tang, Boyle et al., 2005; Tang and Greenberg, 2005; Tang, Neustaedter et al., 2004). Arms are important because they provide a rich means of expressing intent – both intentionally (e.g. when we explicitly point at things) and unintentionally (e.g. as we simply work, our arms touch the things we care about). We have designed new ways of capturing and visualizing different characteristics of these arms (e.g. using the Kinect to capture height information; Genest et al., CSCW 2013), and demonstrated that they can be effective in three-way interaction (Tang, Pahud et al., CSCW 2010). We have also explored how the fidelity of the representation can be used for expressive purposes (Tang, Genest et al., CSCW 2010).
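The KinectArms-style capture described above rests on a simple idea: with an overhead depth camera, anything closer to the camera than the table surface is an arm or hand, and its height above the table can drive the visualization (e.g. how solid the remote arm appears). As a rough illustration only – this is not the toolkit's actual code, and the function names and thresholds here are assumptions – a minimal sketch in Python/NumPy:

```python
import numpy as np

def segment_arms(depth_mm, table_depth_mm, min_height_mm=20, max_height_mm=600):
    """Segment arm pixels from a top-down depth frame of a tabletop.

    Pixels closer to the camera than the table surface by at least
    min_height_mm are treated as arms/hands over the table. Returns a
    boolean mask and a per-pixel height map (mm above the table).
    """
    height = table_depth_mm - depth_mm                   # height above table surface
    mask = (height >= min_height_mm) & (height <= max_height_mm)
    return mask, np.where(mask, height, 0.0)

def height_to_alpha(height_mm, max_height_mm=600.0):
    """Map height above the table to embodiment opacity: arms resting on
    the surface render solid; arms hovering high render faint."""
    return np.clip(1.0 - height_mm / max_height_mm, 0.0, 1.0)

# Synthetic example: table surface 1000 mm from the camera, with a 2x2
# "arm" region hovering 100 mm above it.
depth = np.full((4, 4), 1000.0)
depth[1:3, 1:3] = 900.0
mask, heights = segment_arms(depth, table_depth_mm=1000.0)
```

In a real pipeline the table depth would be estimated from a calibration frame of the empty table, and the mask would be cleaned up (e.g. with morphological filtering) before the arm pixels are composited into the remote workspace.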
Children and Video. In designing systems to support interaction between children, we have found that simply showing video of a person's head (for example, through Skype) is frequently not enough. Instead, it is useful to scaffold the interaction, for instance by focusing on interaction with a tabletop (Yarosh et al., CSCW 2013; Yarosh et al., CSCW 2013 Video Showcase), or by allowing for full-body play and interaction with digital objects (Cohen et al., CHI 2014; Hunter et al., CHI 2014).
Bodily Representation. We have designed several systems to explore how full-body representation can serve different applications. With OneSpace (Ledo et al., CHI EA 2013; Cohen et al., CHI 2014; Dillman and Tang, 2013; Ledo et al., CHI 2013 workshop; Ledo et al., 2012), we explored high-fidelity video representation and how it impacts interaction. In a separate project involving art therapy, we used simple stickman representations to protect the identity of participants (Jones et al., CHI EA 2014). We have also explored a complete absence of visual representation, conveying a remote partner's presence through vibrotactile, haptic feedback instead (Alizadeh et al., CHI EA 2014).
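OneSpace's "depth-corrected" video interaction merges the scenes from two sites into one shared view in which people occlude one another according to their distance from the camera, as if they stood in the same room. The per-pixel idea can be sketched as follows – an illustrative assumption about the approach, not the system's actual implementation:

```python
import numpy as np

def composite_depth_corrected(color_a, depth_a, color_b, depth_b):
    """Merge two RGB-D frames into one shared scene.

    For each pixel, keep the colour from whichever site's surface is
    nearer its camera, so local and remote bodies occlude each other
    consistently in the combined view.
    """
    nearer_a = depth_a <= depth_b                 # True where site A is in front
    return np.where(nearer_a[..., None], color_a, color_b)

# Tiny synthetic example: site A is red, site B is green; B's surface sits
# at 800 mm, while A's surface is at 1000 mm on the left and 500 mm on the
# right, so B wins the left column and A wins the right.
color_a = np.full((2, 2, 3), [255, 0, 0], dtype=np.uint8)
color_b = np.full((2, 2, 3), [0, 255, 0], dtype=np.uint8)
depth_a = np.array([[1000.0, 500.0], [1000.0, 500.0]])
depth_b = np.full((2, 2), 800.0)
merged = composite_depth_corrected(color_a, depth_a, color_b, depth_b)
```

A real system would also need the two depth streams calibrated to a common scale, and would handle invalid depth readings (sensor dropouts) before comparison.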
Publications
Genest, A., Gutwin, C., Tang, A., Kalyn, M., and Ivkovic, Z. (2013). KinectArms: A toolkit for capturing and displaying arm embodiments in distributed tabletop groupware. In Proceedings of CSCW '13, 157–166.
Yarosh, S., Tang, A., Mokashi, S., and Abowd, G. (2013). "Almost touching": Parent-child remote communication using the ShareTable system. In Proceedings of CSCW '13, 181–192.
Yarosh, S., Tang, A., Mokashi, S., and Abowd, G. (2013). "Almost Touching": Parent-Child Remote Communication Using the ShareTable System – The Video. In CSCW 2013 Video Showcase Program.
Cohen, M., Dillman, K., MacLeod, H., Hunter, S., and Tang, A. (2014). OneSpace: Shared visual scenes for active freeplay. In Proceedings of CHI '14, 2177–2180.
Hunter, S., Maes, P., Tang, A., and Inkpen, K. (2014). WaaZaam! Supporting creative play at a distance in customized video environments. In Proceedings of CHI '14, 1197–1206.
Jones, B., Hankinson, S., Collie, K., and Tang, A. (2014). Supporting non-verbal visual communication in online group art therapy. In CHI EA '14, 1759–1764.
Alizadeh, H., Tang, R., Sharlin, E., and Tang, A. (2014). Haptics in remote collaborative exercise systems for seniors. In CHI EA '14, 2401–2406.
Ledo, D., Aseniero, B., Greenberg, S., Boring, S., and Tang, A. (2013). OneSpace: Shared depth-corrected video interaction. In CHI EA '13, 997–1002.
Ledo, D., Aseniero, B., Boring, S., Greenberg, S., and Tang, A. (2013). OneSpace: Bringing depth to remote interactions. In Future of Personal Video Communications workshop at CHI 2013.
Ledo, D., Aseniero, B., Boring, S., and Tang, A. (2012). OneSpace: Shared depth-corrected video interaction. University of Calgary.
Dillman, K., and Tang, A. (2013). Towards next-generation remote physiotherapy with videoconferencing tools. University of Calgary.
Tang, A., Pahud, M., Inkpen, K., Benko, H., Tang, J., and Buxton, B. (2010). Three's company: Understanding communication channels in three-way distributed collaboration. In Proceedings of CSCW '10, 271–280.
Tang, A., Genest, A., Shoemaker, G., Gutwin, C., Fels, S., and Booth, K. (2010). Enhancing expressiveness in reference space. In New Frontiers in Telepresence workshop at CSCW 2010.
Tang, A., Neustaedter, C., and Greenberg, S. (2007). VideoArms: Embodiments for mixed presence groupware. In People and Computers XX – Engage, 85–102.
Tang, A., Boyle, M., and Greenberg, S. (2005). Understanding and mitigating display and presence disparity in mixed presence groupware. Journal of Research and Practice in Information Technology, 193–210.
Tang, A., and Greenberg, S. (2005). Supporting awareness in mixed presence groupware. In Awareness Systems workshop at CHI 2005.
Tang, A., Neustaedter, C., and Greenberg, S. (2004). VideoArms: Supporting remote embodiment in groupware. In Video Proceedings of CSCW 2004.