论文成果 / Publications
2019
Investigating Gesture Typing for Indirect Touch
Abstract
With the development of ubiquitous computing, entering text on HMDs and smart TVs using handheld touchscreen devices (e.g., smartphones and controllers) is becoming increasingly attractive. In these indirect touch scenarios, the touch input surface is decoupled from the visual display. In this paper, we investigate the feasibility of gesture typing for indirect touch, since keeping the finger in contact with the screen during typing makes it possible to provide continuous visual feedback, which is beneficial for input performance. We propose an improved design to address the uncertainty and inaccuracy of the first touch. Evaluation results show that users can quickly learn indirect gesture typing, reaching 22.3 words per minute after 30 phrases, which significantly outperforms numbers previously reported in the literature. Our work provides empirical support for leveraging gesture typing for indirect touch.
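For background, gesture typing decoders are commonly described as matching the finger's drawn path against the ideal key-to-key path of each dictionary word. The sketch below illustrates that general idea with a SHARK2-style template matcher; it is not the paper's improved first-touch design, and the key coordinates, lexicon, and resampling resolution are toy assumptions.

```python
# A minimal sketch of gesture-keyboard decoding: resample the drawn path
# and compare it with ideal key-to-key templates for candidate words.
import math

KEYS = {"c": (0.25, 0.9), "a": (0.05, 0.5), "t": (0.45, 0.1), "r": (0.35, 0.1)}
LEXICON = ["cat", "car"]  # toy lexicon
N = 32                    # resampled points per path (assumed)

def resample(path, n=N):
    """Resample a polyline to n equidistant points."""
    dists = [math.dist(a, b) for a, b in zip(path, path[1:])]
    total = sum(dists) or 1e-9
    step, out, acc, i = total / (n - 1), [path[0]], 0.0, 0
    for k in range(1, n - 1):
        target = k * step
        while acc + dists[i] < target:  # advance to the segment containing target
            acc += dists[i]
            i += 1
        t = (target - acc) / dists[i]
        (x0, y0), (x1, y1) = path[i], path[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    out.append(path[-1])
    return out

def decode(gesture):
    """Pick the word whose ideal key path is closest to the drawn gesture."""
    g = resample(gesture)
    def cost(word):
        template = resample([KEYS[ch] for ch in word])
        return sum(math.dist(p, q) for p, q in zip(g, template))
    return min(LEXICON, key=cost)

# A noisy swipe roughly through c -> a -> t decodes to "cat".
print(decode([(0.26, 0.88), (0.10, 0.52), (0.44, 0.12)]))
```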
"I Bought This for Me to Look More Ordinary": A Study of Blind People Doing Online Shopping
Abstract
Online shopping, by reducing the need to travel, has become an essential part of life for people with visual impairments. However, HCI research on online shopping for them has been limited to the analysis of accessibility and usability issues. To develop a broader and better understanding of how visually impaired people shop online, and to design accordingly, we conducted a qualitative study with twenty blind people. Our study highlighted that blind people's desire to be treated as ordinary has significantly shaped their online shopping practices: they are very attentive to the visual appearance of goods even though they themselves cannot see it, and they take great pains to find and learn which commodities are visually appropriate for them. This paper reports how their efforts to appear ordinary are manifested in online shopping and suggests design implications to support these practices.
Typing on Split Keyboards with Peripheral Vision
Abstract
Split keyboards are widely used on hand-held touchscreen devices (e.g., tablets). However, typing on a split keyboard often requires eye movement and attention switching between the two halves of the keyboard, which slows users down and increases fatigue. We explore peripheral typing, a superior typing mode in which a user focuses her visual attention on the output text and keeps the split keyboard in peripheral vision. Our investigation showed that peripheral typing reduced attention switching, enhanced user experience, and increased overall performance (27 WPM, 28% faster) over the typical eyes-on typing mode. We also designed GlanceType, a text entry system that supports both peripheral and eyes-on typing modes for real typing scenarios.
Clench Interaction: Novel Biting Input Techniques
Abstract
We propose clench interaction that leverages clenching as an actively controlled physiological signal that can facilitate interactions. We conducted a user study to investigate users' ability to control their clench force. We found that users can easily discriminate three force levels, and that they can quickly confirm actions by unclenching (quick release). We developed a design space for clench interaction based on the results and investigated the usability of the clench interface. Participants preferred the clench over baselines and indicated a willingness to use clench-based interactions. This novel technique can provide an additional input method in cases where users' eyes or hands are busy, augment immersive experiences such as virtual/augmented reality, and assist individuals with disabilities.
HandSee: Enabling Full Hand Interaction on Smartphones with Front Camera-based Stereo Vision
Abstract
HandSee is a novel sensing technique that can capture the state and movement of the user's hands while using a smartphone. We place a prism mirror on the front camera to obtain a stereo view of the scene above the touchscreen surface. With this sensing capability, HandSee enables a variety of novel interaction techniques and expands the design space for full hand interaction on smartphones.
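To make the sensing idea concrete, here is a minimal sketch, assuming the prism mirror splits the front camera frame into left and right half-views so that standard stereo matching yields depth above the touchscreen. The file name, split layout, and disparity threshold are illustrative assumptions, not HandSee's actual pipeline.

```python
import cv2
import numpy as np

# Hypothetical captured frame; a prism mirror over the front camera would
# produce two side-by-side views of the same scene above the touchscreen.
frame = cv2.imread("front_camera_frame.png", cv2.IMREAD_GRAYSCALE)

half = frame.shape[1] // 2
left, right = frame[:, :half], frame[:, half : 2 * half]  # the two prism views

# Block-matching stereo: disparity is inversely related to depth, so
# fingers closer to the camera produce larger disparity values.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Segment hand pixels near the screen with an assumed disparity threshold.
hand_mask = (disparity > 20.0).astype(np.uint8) * 255
```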
VIPBoard: Improving Screen-Reader Keyboard for Visually Impaired People with Character-Level Auto Correction
Abstract
Modern touchscreen keyboards are all powered by word-level auto-correction to handle input errors. Unfortunately, visually impaired users are deprived of this benefit because a screen-reader keyboard offers only character-level input and provides no correction ability. In this paper, we present VIPBoard, a smart keyboard for visually impaired people, which aims at improving the underlying keyboard algorithm without altering the current input interaction. Upon each tap, VIPBoard predicts the probability of each key considering both the touch location and a language model, and reads out the most likely key, which saves calibration time when the touch point misses the target key. Meanwhile, the keyboard layout automatically scales according to the user's touch location, which enables easy selection of other keys. A user study shows that, compared with the current keyboard technique, VIPBoard can reduce the touch error rate by 63.0% and increase text entry speed by 12.6%.
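The character-level prediction described above can be read as a Bayesian combination of a touch-location likelihood and a language-model prior. Below is a minimal sketch of that idea; the Gaussian touch model, key coordinates, and bigram probabilities are toy assumptions rather than VIPBoard's actual parameters.

```python
import math

# Hypothetical key centers on a Qwerty row, in normalized screen units.
KEY_CENTERS = {"q": (0.05, 0.1), "w": (0.15, 0.1), "e": (0.25, 0.1)}
SIGMA = 0.05  # assumed standard deviation of touch offsets

def touch_likelihood(touch, key):
    """Gaussian likelihood of a touch point given the intended key."""
    (tx, ty), (kx, ky) = touch, KEY_CENTERS[key]
    d2 = (tx - kx) ** 2 + (ty - ky) ** 2
    return math.exp(-d2 / (2 * SIGMA ** 2))

def predict_key(touch, prev_char, bigram_prob):
    """Return the key maximizing P(touch | key) * P(key | prev_char)."""
    scores = {
        key: touch_likelihood(touch, key) * bigram_prob.get((prev_char, key), 1e-6)
        for key in KEY_CENTERS
    }
    return max(scores, key=scores.get)

# Example: a touch midway between 'w' and 'e' after typing 'h' resolves to
# 'e' because the (toy) language model strongly prefers "he" over "hw".
bigram = {("h", "e"): 0.4, ("h", "w"): 0.01, ("h", "q"): 0.01}
print(predict_key((0.20, 0.1), "h", bigram))  # -> 'e'
```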
EarTouch: Facilitating Smartphone Use for Visually Impaired People in Mobile and Public Scenarios
Abstract
Interacting with a smartphone using touch input and speech output is challenging for blind and visually impaired people in mobile and public scenarios, where only one hand may be available for input (e.g., while holding a cane) and using the loudspeaker for speech output is constrained by environmental noise, privacy, and social concerns. To address these issues, we propose EarTouch, a one-handed interaction technique that allows users to interact with a smartphone by using the ear to tap or draw gestures on the touchscreen and hearing the speech output played privately through the ear speaker. In a broader sense, EarTouch brings us an important step closer to accessible smartphones for all users of all abilities.
软件学报 (Journal of Software) | Cross-Screen Content Sharing Based on Mid-Air Gestures
Abstract
In multi-screen environments, existing techniques for sharing content across screens have overlooked mid-air gestures. To validate mid-air gestures, we implemented and deployed a cross-screen interface sharing system that supports real-time sharing of on-screen content between screens using mid-air gestures. Based on a gesture-elicitation study in which users designed gestures, we implemented two mid-air gestures, grab-drag and grab-pull-release. A user study of the two gestures found that sharing content across screens with mid-air gestures is novel, useful, and easy to learn. Overall, the grab-drag gesture provided a better user experience, while the grab-pull-release gesture generalized better to complex scenarios. The former suits near-screen settings and is recommended when screens are adjacent, the gap between them is small, and the interaction distance is short; the latter suits far-screen settings and is recommended when other screens lie between the targets or the screens are far apart. Combining the two gestures is therefore the more sensible choice.
Exploring Low-Occlusion Qwerty Soft Keyboard Using Spatial Landmarks
Abstract
The Qwerty soft keyboard is widely used on mobile devices. However, keyboards often consume a large portion of the touchscreen space, occluding the application view on the smartphone and requiring a separate input interface on the smartwatch. Such space consumption can affect the user experience of accessing information and the overall performance of text input. In order to free up screen real estate, this article explores the concept of a Sparse Keyboard and proposes two new ways of presenting the Qwerty soft keyboard. The idea is to exploit users' spatial memory and the reference effect of spatial landmarks on the graphical interface. Our final design K3-SGK displays only three keys, while L5-EYOCN displays only five line segments, instead of the entire Qwerty layout.