
2021

ReflecTrack: Enabling 3D Acoustic Position Tracking Using Commodity Dual-Microphone Smartphones
(UIST’21) Yuzhou Zhuang, Yuntao Wang*, Yukang Yan, Xuhai Xu, Yuanchun Shi
Abstract
3D position tracking on smartphones has the potential to unlock a variety of novel applications, but has not been made widely available due to limitations in smartphone sensors. In this paper, we propose ReflecTrack, a novel 3D acoustic position tracking method for commodity dual-microphone smartphones. A ubiquitous speaker (e.g., smartwatch or earbud) generates inaudible Frequency Modulated Continuous Wave (FMCW) acoustic signals that are picked up by both smartphone microphones. To enable 3D tracking with two microphones, we introduce a reflective surface that can be easily found in everyday objects near the smartphone. Thus, the microphones can receive sound from the speaker and echoes from the surface for FMCW-based acoustic ranging. To simultaneously estimate the distances from the direct and reflective paths, we propose the echo-aware FMCW technique with a new signal pattern and target detection process. Our user study shows that ReflecTrack achieves a median error of 28.4 mm in the 60 cm × 60 cm × 60 cm space and 22.1 mm in the 30 cm × 30 cm × 30 cm space for 3D positioning. We demonstrate the easy accessibility of ReflecTrack using everyday surfaces and objects with several typical applications of 3D position tracking, including 3D input for smartphones, fine-grained gesture recognition, and motion tracking in smartphone-based VR systems.
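To make the ranging idea concrete, the sketch below shows the core of FMCW-style acoustic distance estimation: correlate the recorded audio against the reference chirp and convert the delay of the correlation peak into a path length. It is a minimal illustration under assumed parameters (an 18–22 kHz sweep at 48 kHz sampling), not ReflecTrack's echo-aware FMCW pattern or its target detection process.

```python
# A minimal sketch of FMCW-style acoustic ranging, not the ReflecTrack
# implementation. The sweep band, chirp duration, and sampling rate are
# illustrative assumptions.
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000          # sampling rate (Hz), assumed
T = 0.04             # chirp duration (s), assumed
C = 343.0            # speed of sound (m/s)

t = np.arange(0, T, 1 / FS)
reference = chirp(t, f0=18_000, f1=22_000, t1=T, method="linear")

def estimate_path_length(recording: np.ndarray) -> float:
    """Estimate the path length (m) from the delay of the strongest
    correlation peak between the recording and the reference chirp."""
    corr = correlate(recording, reference, mode="valid")
    delay_samples = int(np.argmax(np.abs(corr)))
    return delay_samples / FS * C

# Toy usage: a chirp delayed by ~2.8 ms corresponds to roughly 0.96 m.
delay = int(0.0028 * FS)
recording = np.concatenate([np.zeros(delay), reference, np.zeros(FS // 10)])
print(f"estimated path length: {estimate_path_length(recording):.2f} m")
```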
TypeBoard: Identifying Unintentional Touch on Pressure-Sensitive Touchscreen Keyboards
(UIST'21) Yizheng Gu, Chun Yu, Xuanzhong Chen, Zhuojun Li, Yuanchun Shi
Abstract
Text input is essential in tablet computer interaction. However, tablet software keyboards face the problem of misrecognizing unintentional touch, which affects efficiency and usability. In this paper, we propose TypeBoard, a pressure-sensitive touchscreen keyboard that prevents unintentional touches. The TypeBoard allows users to rest their fingers on the touchscreen, which changes the user behavior: on average, users generate 40.83 unintentional touches every 100 keystrokes. The TypeBoard prevents unintentional touch with an accuracy of 98.88%. A typing study showed that the TypeBoard reduced fatigue (p < 0.005) and typing errors (p < 0.01), and improved the touchscreen keyboard's typing speed by 11.78% (p < 0.005). As users could touch the screen without triggering responses, we added tactile landmarks on the TypeBoard, allowing users to locate the keys by the sense of touch. This feature further improves the typing speed, outperforming the ordinary tablet keyboard by 21.19% (p < 0.001). Results show that pressure-sensitive touchscreen keyboards can prevent unintentional touch, improving usability in many aspects, such as avoiding fatigue, reducing errors, and mediating touch typing on tablets.
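As a rough illustration of how pressure can gate keystrokes, the toy sketch below ignores light resting contacts and accepts only touches whose peak pressure crosses a threshold. This is an assumed simplification; TypeBoard's reported 98.88% accuracy comes from its own model, not from a single threshold.

```python
# A toy illustration of pressure-gated keystroke detection, not TypeBoard's
# actual classifier. The threshold value and data layout are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class TouchEvent:
    key: str
    pressure_trace: List[float]   # sampled pressure values during the touch

PRESSURE_THRESHOLD = 0.6          # normalized units, assumed

def is_intentional(touch: TouchEvent) -> bool:
    """Treat a touch as a keystroke only if its peak pressure exceeds the
    threshold; light resting contacts are ignored."""
    return max(touch.pressure_trace) >= PRESSURE_THRESHOLD

taps = [
    TouchEvent("f", [0.1, 0.4, 0.9, 0.5]),   # deliberate press
    TouchEvent("j", [0.1, 0.15, 0.2, 0.1]),  # finger resting on the screen
]
typed = [t.key for t in taps if is_intentional(t)]
print(typed)  # ['f']
```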
Just Speak It: Minimize Cognitive Load for Eyes-Free Text Editing with a Smart Voice Assistant
(UIST'21) Jiayue Fan, Chenning Xu, Chun Yu, Yuanchun Shi
Abstract
When entering text by voice, users may encounter colloquial inserts, inappropriate wording, and recognition errors, which makes voice editing difficult. Users need to locate the errors and then correct them. In eyes-free scenarios, this select-modify mode brings a cognitive burden and a risk of error. This paper introduces neural networks and pre-trained models to understand users' revision intention from semantics, reducing the amount of information users must state explicitly. We present two strategies. One is to remove colloquial inserts automatically. The other is to allow users to edit by just speaking the target words, without having to say the context or the incorrect text. Accordingly, our approach can predict whether to insert or replace, which incorrect text to replace, and where to insert. We implement these strategies in SmartEdit, an eyes-free voice input agent controlled with earphone buttons. The evaluation shows that our techniques reduce cognitive load and decrease the average failure rate by 54.1% compared to descriptive commands or re-speaking.
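The sketch below illustrates the insert-versus-replace decision in a deliberately simplified form: plain string similarity stands in for the semantic matching that the paper's pre-trained models perform. The helper name, the similarity threshold, and the fallback insertion position are all assumptions for illustration.

```python
# A highly simplified sketch of planning a voice edit, not the neural
# model described in the paper.
from difflib import SequenceMatcher

def plan_edit(sentence: str, spoken: str, threshold: float = 0.6):
    """Return ('replace', start, end) for the most similar same-length word
    span, or ('insert', position) if nothing is similar enough."""
    words = sentence.split()
    n = len(spoken.split())
    best_score, best_start = 0.0, None
    for start in range(len(words) - n + 1):
        span = " ".join(words[start:start + n])
        score = SequenceMatcher(None, span, spoken).ratio()
        if score > best_score:
            best_score, best_start = score, start
    if best_start is not None and best_score >= threshold:
        return ("replace", best_start, best_start + n)
    return ("insert", len(words))   # naive fallback: append at the end

print(plan_edit("I will send the week report tomorrow", "the weekly report"))
# ('replace', 3, 6) -> swap "the week report" for "the weekly report"
```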
DualRing: Enabling Subtle and Expressive Hand Interaction with Dual IMU Rings
(IMWUT’21) Chen Liang, Chun Yu*, Yue Qin, Yuntao Wang, Yuanchun Shi
Abstract
We present DualRing, a novel ring-form input device that can capture the state and movement of the user's hand and fingers. With two IMU rings attached to the user's thumb and index finger, DualRing can sense not only the absolute hand gesture relative to the ground but also the relative pose and movement among hand segments. To enable natural thumb-to-finger interaction, we develop a high-frequency AC circuit for on-body contact detection. Based on the sensing information of DualRing, we outline the interaction space and divide it into three sub-spaces: within-hand interaction, hand-to-surface interaction, and hand-to-object interaction. By analyzing the accuracy and performance of our system, we demonstrate the informational advantage of DualRing in sensing comprehensive hand gestures compared with single-ring-based solutions. Through a user study, we found that the interaction space enabled by DualRing is favored by users for its usability, efficiency, and novelty.
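For readers curious how two absolute IMU orientations yield a relative thumb-to-index pose, the sketch below computes it with quaternions. It is a minimal example with made-up orientations, and it omits DualRing's calibration, movement sensing, and the AC contact-detection circuit.

```python
# A minimal sketch of deriving the relative thumb-to-index pose from two
# world-frame IMU orientations; not DualRing's sensing pipeline.
from scipy.spatial.transform import Rotation as R

def relative_pose(q_thumb, q_index):
    """Express the index-finger orientation in the thumb-ring frame.
    Both inputs are world-frame quaternions in (x, y, z, w) order."""
    return R.from_quat(q_thumb).inv() * R.from_quat(q_index)

# Toy usage: index finger pitched 30 degrees relative to the thumb.
q_thumb = R.from_euler("y", 10, degrees=True).as_quat()
q_index = R.from_euler("y", 40, degrees=True).as_quat()
print(relative_pose(q_thumb, q_index).as_euler("xyz", degrees=True))
# ~[0, 30, 0]
```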
Understanding the Design Space of Mouth Microgestures
(DIS’21) Victor Chen, Xuhai Xu, Richard Li, Yuanchun Shi, Shwetak Patel, Yuntao Wang*
Abstract
As wearable devices move toward the face (i.e., smart earbuds, glasses), there is an increasing need to facilitate intuitive interactions with these devices. Current sensing techniques can already detect many mouth-based gestures; however, users' preferences for these gestures are not fully understood. In this paper, we investigate the design space and usability of mouth-based microgestures. We first conducted brainstorming sessions (N=16) and compiled an extensive set of 86 user-defined gestures. Then, with an online survey (N=50), we assessed the physical and mental demand of our gesture set and identified a subset of 14 gestures that can be performed easily and naturally. Finally, we conducted a remote Wizard-of-Oz usability study (N=11) mapping gestures to various daily smartphone operations in sitting and walking contexts. From these studies, we develop a taxonomy for mouth gestures, finalize a practical gesture set for common applications, and provide design guidelines for future mouth-based gesture interactions.
LightGuide: Directing Visually Impaired People along a Path Using Light Cues
(IMWUT’21) Ciyuan Yang, Shuchang Xu, Tianyu Yu, Guanhong Liu, Chun Yu*, Yuanchun Shi
Abstract
This work presents LightGuide, a directional feedback solution that indicates a safe direction of travel via the position of a light within the user’s visual field. We prototyped LightGuide using an LED strip attached to the brim of a cap, and conducted three user studies to explore the effectiveness of LightGuide compared to HapticBag, a state-of-the-art baseline solution that indicates directions through on-shoulder vibrations. Results showed that, with LightGuide, participants turned to target directions in place more quickly and smoothly, and navigated along basic and complex paths more efficiently, smoothly, and accurately than HapticBag. Users’ subjective feedback implied that LightGuide was easy to learn and intuitive to use.
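A simple way to picture the light cue is as a mapping from a target bearing to an LED index along the brim. The sketch below assumes a 24-LED strip spanning 150 degrees; these numbers are illustrative, not the LightGuide prototype's actual geometry.

```python
# A toy mapping from a target heading to an LED position on the cap brim;
# the strip geometry is an assumption, not LightGuide's actual layout.
NUM_LEDS = 24
FIELD_OF_VIEW_DEG = 150.0   # angular span covered by the strip, assumed

def led_for_direction(relative_bearing_deg: float) -> int:
    """Return the index of the LED to light for a target direction given
    relative to the user's heading (negative = left, positive = right).
    Bearings outside the strip's span are clamped to the edge LEDs."""
    half = FIELD_OF_VIEW_DEG / 2
    clamped = max(-half, min(half, relative_bearing_deg))
    fraction = (clamped + half) / FIELD_OF_VIEW_DEG   # 0.0 (left) .. 1.0 (right)
    return round(fraction * (NUM_LEDS - 1))

print(led_for_direction(0))     # 12 -> near the centre of the strip
print(led_for_direction(-75))   # 0  -> leftmost LED
```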
Facilitating Text Entry on Smartphones with QWERTY Keyboard for Users with Parkinson’s Disease
(CHI’21) Yutao Wang, Ao Yu, Xin Yi*, Yuanwei Zhang, Ishan Chatterjee, Shwetak Patel, Yuanchun Shi
Abstract
QWERTY is the primary smartphone text input keyboard configuration. However, insertion and substitution errors caused by hand tremors, often experienced by users with Parkinson's disease, can severely affect typing efficiency and user experience. In this paper, we investigated Parkinson's users' typing behavior on smartphones. In particular, we identified and compared the typing characteristics generated by users with and without Parkinson's symptoms. We then proposed an elastic probabilistic model for input prediction. By incorporating both spatial and temporal features, this model generalized the classical statistical decoding algorithm to correct insertion, substitution, and omission errors, while maintaining direct physical interpretation.
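To give a flavor of statistical decoding, the sketch below scores candidate keys with an isotropic Gaussian around each key centre; widening the variance loosely models less precise, tremor-affected touches. It omits the temporal features, omission handling, and language model that the paper's elastic model combines, and the key coordinates are assumptions.

```python
# A bare-bones sketch of Gaussian spatial decoding for a soft keyboard,
# loosely in the spirit of statistical decoders; not the paper's model.
import math

KEY_CENTERS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "a": (0.25, 1), "s": (1.25, 1)}
SIGMA = 0.6   # larger values model less precise (e.g. tremor-affected) touches

def key_likelihoods(touch_x: float, touch_y: float) -> dict:
    """P(touch | key) for each key, modelled as an isotropic 2D Gaussian
    around the key centre, then normalised over the candidate keys."""
    scores = {
        k: math.exp(-((touch_x - cx) ** 2 + (touch_y - cy) ** 2) / (2 * SIGMA ** 2))
        for k, (cx, cy) in KEY_CENTERS.items()
    }
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

probs = key_likelihoods(0.9, 0.2)
print(max(probs, key=probs.get))   # 'w' -- the nearest key centre wins
```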
FaceSight: Enabling Hand-to-Face Gesture Interaction on AR Glasses with a Downward-Facing Camera Vision
(CHI’21) Yueting Weng, Chun Yu*, Yingtian Shi, Yuhang Zhao, Yukang Yan, Yuanchun Shi
Abstract
We present FaceSight, a computer vision-based hand-to-face gesture sensing technique for AR glasses. FaceSight fixes an infrared camera onto the bridge of the AR glasses to provide extra sensing capability for lower-face and hand behaviors. We identified 21 hand-to-face gestures and demonstrated their potential interaction benefits through five AR applications. We designed and implemented an algorithm pipeline that classifies all gestures with an accuracy of 83.06%, validated on data from 10 users. Given the compact form factor and rich gesture set, we see FaceSight as a practical solution for augmenting the input capability of AR glasses in the future.
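One plausible early stage of such a pipeline is segmenting the bright, IR-lit hand region before gesture classification; the sketch below does only that. It is an assumed simplification, not FaceSight's published algorithm, and the brightness threshold is illustrative.

```python
# A rough sketch of segmenting the hand region in an infrared frame before
# handing the crop to a gesture classifier. Assumed simplification only.
import numpy as np
from scipy import ndimage

BRIGHTNESS_THRESHOLD = 180   # a hand lit by IR appears brighter, assumed

def hand_bounding_box(ir_frame: np.ndarray):
    """Return (row_min, row_max, col_min, col_max) of the largest bright
    blob in an 8-bit IR frame, or None if nothing exceeds the threshold."""
    mask = ir_frame > BRIGHTNESS_THRESHOLD
    labels, count = ndimage.label(mask)
    if count == 0:
        return None
    sizes = ndimage.sum(mask, labels, index=range(1, count + 1))
    largest = int(np.argmax(sizes)) + 1
    rows, cols = np.where(labels == largest)
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())

# Toy usage: a synthetic frame with one bright square "hand".
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:80, 60:110] = 220
print(hand_bounding_box(frame))   # (40, 79, 60, 109)
```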
Revamp: Enhancing Accessible Information Seeking Experience of Online Shopping for Blind or Low Vision Users
(CHI’21) Ruolin Wang, Zixuan Chen, Mingrui “Ray” Zhang, Zhaoheng Li, Zhixiu Liu, Zihan Dang, Chun Yu, Xiang “Anthony” Chen
Abstract
Online shopping has become a valuable modern convenience, but blind or low vision (BLV) users still face significant challenges using it. We propose Revamp, a system that leverages customer reviews for interactive information retrieval. Revamp is a browser integration that supports review-based question-answering interactions on a reconstructed product page. From our interviews, we identified four main aspects (color, logo, shape, and size) that are vital for BLV users to understand the visual appearance of a product. Based on the findings, we formulated syntactic rules to extract review snippets, which were used to generate image descriptions and responses to users' queries.
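The sketch below illustrates what rule-based snippet extraction for the four visual aspects can look like: split a review into sentences and keep those matching per-aspect keyword patterns. The keyword lists are assumptions, not Revamp's actual syntactic rules.

```python
# A small illustration of rule-based review snippet extraction for the
# aspects named in the abstract; the patterns are assumptions.
import re

ASPECT_KEYWORDS = {
    "color": r"\b(colou?r|red|blue|black|white|green)\b",
    "logo":  r"\blogo\b",
    "shape": r"\b(shape|round|square|oval)\b",
    "size":  r"\b(size|small|large|tiny|huge|fits?)\b",
}

def extract_snippets(review: str) -> dict:
    """Return, per aspect, the review sentences whose wording matches that
    aspect's keyword rule."""
    sentences = re.split(r"(?<=[.!?])\s+", review)
    return {
        aspect: [s for s in sentences if re.search(pattern, s, re.IGNORECASE)]
        for aspect, pattern in ASPECT_KEYWORDS.items()
    }

review = ("The bag is a deep navy blue. The logo is stitched on the front. "
          "It fits a 15 inch laptop.")
for aspect, snippets in extract_snippets(review).items():
    print(aspect, snippets)
```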