Publications
2020
QwertyRing: Text Entry on Physical Surfaces Using a Ring
Abstract
The software keyboard is widely used on digital devices such as smartphones, computers, and tablets. It operates via touch, which is efficient, convenient, and familiar to users. However, emerging devices such as AR/VR headsets and smart TVs do not support touch-based text entry. In this paper, we present QwertyRing, a technique that enables text entry on physical surfaces using an IMU ring. Our evaluation shows that users can type 20.59 words per minute after five days of training.
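To make the statistical-decoding idea concrete, below is a toy Python sketch: each detected tap yields a noisy 2-D landing estimate on a virtual QWERTY grid, and a word is chosen by combining per-key Gaussian likelihoods with a unigram prior. The grid coordinates, noise level, and two-word lexicon are illustrative assumptions, not the paper's actual decoder.

    # Toy sketch only: the key grid, noise model, and lexicon are assumptions.
    import math

    KEY_POS = {c: (x, y)
               for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
               for x, c in enumerate(row)}
    SIGMA = 0.6  # assumed tap-noise std-dev, in key widths

    def log_likelihood(word, taps):
        """Gaussian log-likelihood of the tap sequence given the word."""
        if len(word) != len(taps):
            return float("-inf")
        return sum(-((tx - KEY_POS[ch][0]) ** 2 + (ty - KEY_POS[ch][1]) ** 2)
                   / (2 * SIGMA ** 2)
                   for ch, (tx, ty) in zip(word, taps))

    def decode(taps, lexicon):
        """Most probable word: argmax P(taps | word) * P(word)."""
        return max(lexicon,
                   key=lambda w: log_likelihood(w, taps) + math.log(lexicon[w]))

    lexicon = {"hello": 0.6, "jelly": 0.4}  # toy unigram priors
    taps = [(5.2, 1.1), (2.1, 0.2), (8.8, 1.2), (8.4, 1.0), (8.0, 0.1)]
    print(decode(taps, lexicon))  # -> "hello"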
Virtual Paving: Rendering a Smooth Path for People with Visual Impairment through Vibrotactile and Audio Feedback
Abstract
We propose Virtual Paving, which aims to support independent navigation by rendering a smooth path to visually impaired people through multi-modal feedback. This work focuses on the feedback design of Virtual Paving. First, we derived design guidelines from an investigation into visually impaired people's current mobility practices. Next, we developed a multi-modal solution through co-design and evaluation with visually impaired users. This solution includes (1) vibrotactile feedback on the shoulders and waist to give directional cues and (2) audio feedback to describe road conditions ahead of the user. Guided by this solution, 16 visually impaired participants successfully completed 127 out of 128 trials on 2.1 m-wide basic paths. Subjective feedback indicated that our rendering of Virtual Paving was easy to learn and enabled users to walk smoothly.
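As a rough illustration of a directional vibrotactile cue, the sketch below maps the user's lateral deviation from the rendered path to left/right vibration intensity. The linear mapping, the cue-on-the-correction-side convention, and the half-width derived from the 2.1 m study paths are assumptions for the demo, not the paper's calibrated design.

    # Hypothetical mapping only; intensities and conventions are assumed.
    def directional_cue(deviation_m, path_half_width_m=1.05):
        """deviation_m < 0 means the user drifted left; > 0, right.
        Returns (left_intensity, right_intensity) in [0, 1], vibrating
        the side the user should move toward."""
        ratio = min(abs(deviation_m) / path_half_width_m, 1.0)
        if deviation_m < 0:   # drifted left -> cue a rightward correction
            return (0.0, ratio)
        if deviation_m > 0:   # drifted right -> cue a leftward correction
            return (ratio, 0.0)
        return (0.0, 0.0)     # centered on the path: no cue

    print(directional_cue(-0.5))  # -> (0.0, 0.476...)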
EarBuddy: Enabling On-Face Interaction via Wireless Earbuds
Abstract
We propose EarBuddy, a real-time system that leverages the microphone in commercial wireless earbuds to detect tapping and sliding gestures near the face and ears, enabling on-body interaction. We developed a design space of 27 valid gestures and selected the eight that were optimal for both user preference and microphone detectability. We collected a dataset of these eight gestures (N=20) and trained deep learning models for gesture detection and classification; our optimized classifier achieved an accuracy of 95.3%. Finally, we evaluated EarBuddy's usability. Our results show that EarBuddy enables novel interactions and provides a new eyes-free, socially acceptable input method that is compatible with commercial wireless earbuds and has the potential for scalability and generalizability.
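For a sense of what the classification stage might look like, here is a minimal PyTorch sketch: a small CNN over log-mel spectrograms of the earbud-microphone signal with eight output classes. The architecture and input shape are illustrative assumptions; the paper's actual models are not reproduced here.

    # Illustrative stand-in model, not the authors' architecture.
    import torch
    import torch.nn as nn

    class GestureCNN(nn.Module):
        def __init__(self, n_classes=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
            )

        def forward(self, x):  # x: (batch, 1, n_mels, time_frames)
            return self.head(self.features(x))

    model = GestureCNN()
    logits = model(torch.randn(4, 1, 64, 100))  # 4 spectrogram clips
    print(logits.shape)                          # torch.Size([4, 8])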
PalmBoard: Leveraging Implicit Touch Pressure in Statistical Decoding for Indirect Text Entry
Abstract
We investigated how to incorporate implicit touch pressure, the finger pressure applied to a touch surface during typing, to improve text entry performance via statistical decoding. Focusing on one-handed touch-typing on an indirect interface as an example scenario, we collected typing data on a pressure-sensitive touchpad and analyzed users' typing behavior. Our investigation revealed distinct pressure patterns for different keys and led to a Markov-Bayesian decoder that incorporates pressure image data into the decoding. This improved top-1 accuracy from 53% to 74% over a naive Bayesian decoder. We then implemented PalmBoard, a text entry method built on the Markov-Bayesian decoder that effectively supports one-handed touch-typing on indirect interfaces. Overall, our investigation shows that incorporating implicit touch pressure is effective in improving text entry decoding.
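The sketch below shows the general shape of such a Markov-Bayesian decoder: a Viterbi search whose per-key score combines a touch-location likelihood, a pressure-given-key likelihood, and a bigram transition prior. The toy distributions in the usage example are placeholders, not the paper's learned models.

    import math

    def viterbi_decode(observations, keys, loc_ll, press_ll, trans_lp):
        """observations: one (location, pressure) pair per touch.
        loc_ll(loc, k) and press_ll(press, k): per-key log-likelihoods.
        trans_lp(prev, k): Markov (bigram) log transition prior."""
        loc, press = observations[0]
        score = {k: loc_ll(loc, k) + press_ll(press, k) for k in keys}
        back = []
        for loc, press in observations[1:]:
            prev_best = {k: max(keys, key=lambda p: score[p] + trans_lp(p, k))
                         for k in keys}
            score = {k: score[prev_best[k]] + trans_lp(prev_best[k], k)
                        + loc_ll(loc, k) + press_ll(press, k)
                     for k in keys}
            back.append(prev_best)
        # Trace the best-scoring key sequence backwards.
        last = max(score, key=score.get)
        path = [last]
        for prev_best in reversed(back):
            path.append(prev_best[path[-1]])
        return "".join(reversed(path))

    # Toy usage: two keys, squared-distance location score, crude pressure model.
    keys = "ab"
    centers = {"a": (0.0, 0.0), "b": (1.0, 0.0)}
    loc_ll = lambda l, k: -((l[0] - centers[k][0]) ** 2 + (l[1] - centers[k][1]) ** 2)
    press_ll = lambda p, k: -abs(p - (0.7 if k == "a" else 0.4))
    trans_lp = lambda p, k: math.log(0.5)  # uniform bigram prior
    obs = [((0.1, 0.0), 0.8), ((0.9, 0.0), 0.3)]
    print(viterbi_decode(obs, keys, loc_ll, press_ll, trans_lp))  # -> "ab"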
Understanding Window Management Interactions in AR Headset + Smartphone Interface
Abstract
We envision a future in which an AR headset and a smartphone are used in combination, simultaneously providing a more extensive display and precise touch input. The input/output interfaces of the two devices can then fuse to redefine how a user manages application windows seamlessly across both. In this work, we conducted formative interviews with ten people to understand how users would prefer to manage multiple windows on the fused interface. The interviews highlighted that participants' desire to use the smartphone as a window management interface shaped their window management practices. This paper reports how that desire is manifested.
Recognizing Unintentional Touch on Interactive Tabletop
Abstract
A multi-touch interactive tabletop is designed to embody the benefits of a digital computer within the familiar surface of a physical tabletop. We leverage gaze direction, head orientation, and screen contact data to identify and filter out unintentional touches, so that users can take full advantage of the physical properties of an interactive tabletop. We first conducted a user study to identify behavioral differences (gaze, head, and touch) between completing everyday tasks on digital versus physical tabletops. We then compiled our findings into five types of spatiotemporal features and trained a machine learning model to recognize unintentional touches. Finally, we evaluated our algorithm in a real-time filtering system. A user study shows that our algorithm is stable and that the improved tabletop effectively screens out unintentional touches, providing a more relaxed and natural user experience. By linking users' gaze and head behavior to their touch behavior, our work sheds light on the possibility of future tabletop technology better understanding users' input intention.
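A minimal sketch of the recognition step follows, with invented features and a stand-in classifier (the paper's five feature types and model are not reproduced here): each screen contact is described by gaze-touch distance, head pitch, contact area, and duration, and a classifier flags unintentional touches.

    # Features and model are illustrative assumptions, not the paper's pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def touch_features(gaze_xy, touch_xy, head_pitch_deg, contact_area, duration_s):
        """Spatiotemporal features for one screen contact."""
        gaze_dist = float(np.linalg.norm(np.asarray(gaze_xy) - np.asarray(touch_xy)))
        return [gaze_dist, head_pitch_deg, contact_area, duration_s]

    X = np.array([
        touch_features((0.52, 0.48), (0.50, 0.50), -35.0, 1.2, 0.08),  # deliberate tap
        touch_features((0.10, 0.20), (0.85, 0.90), -10.0, 6.5, 1.30),  # resting palm
    ])
    y = np.array([0, 1])  # 0 = intentional, 1 = unintentional
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict(X))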
Keep the Phone in Your Pocket: Enabling Smartphone Operation with an IMU Ring for Visually Impaired People
Abstract
We present a ring-based interaction technique that enables in-pocket smartphone operation. Wearing a ring with an inertial measurement unit (IMU) on the index finger, users can perform subtle, one-handed gestures on any surface (e.g., tables, thighs) and receive auditory feedback via earphones. We conducted participatory studies to obtain a set of versatile commands and corresponding gestures. We then trained an SVM model to recognize these gestures, achieving a mean accuracy of 95.5% across 15 gesture classes. Evaluation results showed that our ring interaction is more efficient than baseline phone interactions and is easy, private, and fun to use.
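As an illustration of the SVM step, the sketch below computes simple per-axis statistics over one IMU gesture window and fits a scikit-learn SVM. The feature set, window length, and random training data are assumptions for the demo; only the SVM-over-gestures setup comes from the abstract.

    # Toy features and synthetic data; not the paper's feature engineering.
    import numpy as np
    from sklearn.svm import SVC

    def imu_window_features(accel, gyro):
        """Per-axis mean and std over one gesture window.
        accel, gyro: (n_samples, 3) arrays from the ring's IMU."""
        feats = []
        for sig in (accel, gyro):
            feats += list(sig.mean(axis=0)) + list(sig.std(axis=0))
        return feats  # 12-D feature vector

    rng = np.random.default_rng(0)
    X = np.array([imu_window_features(rng.normal(size=(50, 3)),
                                      rng.normal(size=(50, 3)))
                  for _ in range(30)])
    y = rng.integers(0, 15, size=30)  # toy labels for 15 gesture classes
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:3]))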
MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot
Abstract
Haptic feedback can significantly enhance the realism and immersiveness of virtual reality (VR) systems. In this paper, we propose MoveVR, a technique that enables realistic, multiform force feedback in VR by leveraging a commonplace household cleaning robot. MoveVR can generate tension, resistance, impact, and material-rigidity force feedback at multiple levels of intensity and in multiple directions. It achieves this by changing the robot's moving speed, rotation, and position, as well as the proxies it carries. We demonstrate the feasibility and effectiveness of MoveVR through interactive VR gaming. In our quantitative and qualitative evaluation studies, participants found that MoveVR provides a more realistic and enjoyable user experience than commercially available haptic solutions such as vibrotactile haptic systems.
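A hypothetical sketch of the actuation mapping follows: the force types come from the abstract, but the numeric speed levels, headings, and command interface are invented here purely for illustration.

    # Invented mapping from requested force to a robot motion command.
    def robot_command(force_type, level, direction_deg):
        """level: 1..3 intensity. Returns (speed_m_s, heading_deg).
        Material rigidity is conveyed by the carried proxy, not by motion."""
        speed_for = {
            "tension":    (0.10, 180.0),  # steady pull away from the user
            "resistance": (0.05, 180.0),  # slow push-back against movement
            "impact":     (0.30, 0.0),    # brief fast move toward the user
        }
        base_speed, rel_heading = speed_for[force_type]
        return (base_speed * level, (direction_deg + rel_heading) % 360.0)

    print(robot_command("impact", 3, 90.0))  # -> (0.9, 90.0)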
FrownOnError: Interrupting Responses from Smart Speakers by Facial Expressions
Abstract
In conversations with smart speakers, misunderstandings of users' requests lead to erroneous responses. We propose FrownOnError, a novel interaction technique that enables users to interrupt these responses with intentional yet natural facial expressions. The method leverages the human tendency to change facial expression upon receiving an unexpected response. Our first user study (N=12) revealed significant differences in the frequency and intensity of users' facial expressions between the two conditions. Our second user study (N=16) evaluated the user experience and interruption efficiency of FrownOnError, and our third (N=12) explored suitable conversation recovery strategies after an interruption.
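To illustrate one way an interruption trigger could work, the sketch below thresholds a per-frame frown-intensity score (e.g., a brow-lowerer score from any off-the-shelf face tracker) with a short dwell so brief twitches do not cut off the speaker. The threshold, dwell length, and scoring source are all assumptions, not the paper's detector.

    # Assumed constants; frown_intensity is supplied by an external face tracker.
    def make_interrupt_detector(threshold=0.6, dwell_frames=5):
        run = 0
        def on_frame(frown_intensity):
            nonlocal run
            run = run + 1 if frown_intensity >= threshold else 0
            return run >= dwell_frames  # True -> stop the speaker's response
        return on_frame

    detect = make_interrupt_detector()
    stream = [0.1, 0.7, 0.8, 0.9, 0.8, 0.9, 0.2]
    print([detect(v) for v in stream])  # True only at the 6th frame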