Ruofei Du

Ruofei Du serves as the Interactive Perception & Graphics Lead at Google AR, where he productionizes XR innovations. His work focuses on interactive perception, computer graphics, and human-computer interaction. He serves on the program committees of ACM CHI and UIST, and is an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology. He has published over 40 peer-reviewed publications in top venues in HCI, Computer Graphics, and Computer Vision, including CHI, SIGGRAPH, UIST, TVCG, CVPR, and ICCV. His work has won a Distinguished Paper Award from ACM IMWUT, a Best Paper Award at SIGGRAPH Web3D 2016, two CHI Honorable Mention Awards, and a TVCG Honorable Mention Award. Dr. Du holds a Ph.D. in Computer Science from the University of Maryland, College Park. Website: https://duruofei.com

Alternative Bio for Academic Invited Talk

Ruofei Du is a Staff Research Scientist and Manager at Google, where he works on creating novel interactive technologies for virtual and augmented reality. His research focuses on interactive perception, computer graphics, and human-computer interaction. He serves on the program committees of ACM CHI and UIST, and is an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology. He holds six US patents and has published over 40 peer-reviewed publications in top venues in HCI, Computer Graphics, and Computer Vision, including CHI, SIGGRAPH, UIST, TVCG, CVPR, and ICCV. His work has won a Distinguished Paper Award from ACM IMWUT, a Best Paper Award at SIGGRAPH Web3D 2016, two CHI Honorable Mention Awards, and a TVCG Honorable Mention Award. Dr. Du holds a Ph.D. and an M.S. in Computer Science from the University of Maryland, College Park, and a B.S. from the ACM Honored Class at Shanghai Jiao Tong University. Website: https://duruofei.com

Augmented Communication for a Universally Accessible Metaverse

The emerging revolution in generative AI and spatial computing will fundamentally change the way we work and live. However, it remains a challenge to make information universally accessible and, further, to make generative AI and spatial computing useful in our daily lives. In this talk, we will delve into a series of innovations in augmented programming, augmented interaction, and augmented communication that aim to make both the virtual metaverse and the physical world universally accessible.

With Visual Blocks and InstructPipe, we empower novice users to unleash their inner creativity by rapidly building machine learning pipelines through visual programming and generative AI. With Depth Lab, Ad hoc UI, and Finger Switches, we present real-time 3D interactions with depth maps, objects, and micro-gestures. Finally, with CollaboVR, GazeChat, Visual Captions, ThingShare, and ChatDirector, we enrich communication with mid-air sketches, gaze-aware 3D photos, LLM-informed visuals, object-focused views, and co-presented avatars.

We conclude the talk with highlights of the Google I/O Keynote, offering a visionary glimpse into the future of a universally accessible metaverse.