“Slurp” Revisited: Using Software Reconstruction to Reflect on Spatial Interactivity and Locative Media

Hand-based gestural interaction in augmented reality (AR) is an increasingly popular mechanism for spatial interaction, but it presents many challenges. For example, most hand gestures work well for interacting with virtual content and interfaces, yet seldom extend to physical devices and the user's environment. To explore this, rather than inventing new paradigms for AR interaction, this paper revisits Zigelbaum, Kumpf, Vazquez, and Ishii's 2008 project ‘Slurp’ - a physical eyedropper for interacting with digital content from IoT devices. We revive this historical work in the new modality of AR through a five-step process: re-presencing, design experimentation, scenario making, expansion through generative engagements with designers, and reflection. For the designers we engaged, looking back and designing with a restored prototype helped increase their understanding of the interactive strategies, intentions, and rationales of the original work. By revisiting Slurp, we also found many new potentials of its metaphorical interactions that could be applied in the context of emerging spatial computing platforms and smart home devices. In doing so, we discuss the value of mining past works in new domains and demonstrate a new way of thinking about designing interactions for emerging platforms.

Publications

Proceedings of the Designing Interactive Systems Conference (DIS), 2022.
Keywords: system re-presencing, affordances, metaphor, software reconstruction, historical precedents, gestural interface, augmented reality, spatial interaction, XR interaction

Cited By

  • Vivian Shen and Chris Harrison. Pull Gestures With Coordinated Graphics on Dual-Screen Devices. Proceedings of the 2022 International Conference on Multimodal Interaction.