
2019

Facilitating Temporal Synchronous Target Selection through User Behavior Modeling
(IMWUT ’19) Tengxiang Zhang, Xin Yi*, Ruolin Wang, Jiayuan Gao, Yuntao Wang, Chun Yu, Simin Li, Yuanchun Shi
Abstract
Temporal synchronous target selection is an association-free selection technique: users select a target by generating signals (e.g., finger taps and hand claps) in sync with its unique temporal pattern. However, the pattern set designs and input recognition algorithms of such techniques have not leveraged users' behavioral information, which limits their robustness to imprecise inputs. In this paper, we improve these two key components by modeling users' interaction behavior. We generated pattern sets for up to 22 targets that minimize the possibility of confusion due to imprecise inputs, and validated that the optimized pattern sets reduce the error rate of the classical Correlation recognizer from 23% to 7%. We also tested a novel Bayesian recognizer, which achieved higher selection accuracy than the Correlation recognizer when the input sequence is short.
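To make the selection mechanism concrete, here is a minimal Python sketch of a Correlation-style recognizer (the target names, 8-slot patterns, and slot discretization are hypothetical illustrations, not the paper's actual pattern sets): the tap stream is sampled into fixed time slots, and the target whose pattern correlates most strongly with it is selected.

```python
import numpy as np

def correlation_recognizer(taps, patterns):
    """Select the target whose temporal pattern best matches the tap sequence.

    taps: 1-D binary array sampled at fixed time slots (1 = tap detected).
    patterns: dict mapping target id -> binary pattern array of equal length.
    Returns the id of the target with the highest Pearson correlation.
    """
    best_id, best_r = None, -np.inf
    for target_id, pattern in patterns.items():
        r = np.corrcoef(taps, pattern)[0, 1]
        if r > best_r:
            best_id, best_r = target_id, r
    return best_id

# Hypothetical example: three targets blinking with distinct 8-slot patterns.
patterns = {
    "lamp":    np.array([1, 0, 1, 0, 1, 0, 1, 0]),
    "speaker": np.array([1, 1, 0, 0, 1, 1, 0, 0]),
    "fan":     np.array([1, 0, 0, 1, 0, 0, 1, 0]),
}
taps = np.array([1, 1, 0, 0, 1, 0, 0, 0])      # imprecise input, closest to "speaker"
print(correlation_recognizer(taps, patterns))  # -> "speaker"
```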
AR Assistive System in Domestic Environment Using HMDs: Comparing Visual and Aural Instructions
(HCII’19) Shuang He, Yanhong Jia, Zhe Sun, Chenxin Yu, Xin Yi, Yuanchun Shi, Yingqing Xu
Abstract
Household appliances are becoming more varied. In daily life, people usually refer to printed documents while learning to use different devices, but augmented reality (AR) assistive systems providing visual and aural instructions have been proposed as an alternative. In this work, we evaluated users' understanding of instructions delivered in four ways: (1) baseline paper instructions, (2) visual instructions on head-mounted displays (HMDs), (3) visual instructions on a computer monitor, and (4) aural instructions. In a Wizard of Oz study of making espresso, we found that the helpfulness of visual and aural instructions depends on task complexity: visual instructions are better at showing operation details, while aural instructions are suitable for conveying the intention of an operation. With the same visual instructions on both displays, hardware limitations meant that HMD users completed the task in the longest time and bore the heaviest perceived cognitive load.

2018

Tap-to-Pair: Associating Wireless Devices with Synchronous Tapping
(IMWUT ’18) Tengxiang Zhang, Xin Yi, Ruolin Wang, Yuntao Wang, Chun Yu, Yiqin Lu, and Yuanchun Shi
Abstract
Tap-to-Pair is a spontaneous device association technique that initiates pairing from advertising devices without hardware or firmware modifications. Tapping an area near the advertising device's antenna can change its signal strength. Users can then associate two devices by synchronizing taps on the advertising device with the blinking pattern displayed by the scanning device. By leveraging the wireless transceiver for sensing, Tap-to-Pair does not require additional resources from advertising devices and needs only a binary display (e.g. LED) on scanning devices.
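A minimal sketch of how such a synchrony check could work on the scanning device (the slot length, 3 dB dip threshold, and match criterion are assumed values, not the paper's): per-slot RSSI dips of the advertising device are compared against the blink pattern the scanning device is displaying.

```python
import numpy as np

def taps_match_blinks(rssi, blink_pattern, slot_len, dip_db=3.0, min_match=0.8):
    """Decide whether RSSI dips follow the displayed blink pattern.

    rssi: RSSI samples (dBm) of the advertising device,
          len(rssi) == len(blink_pattern) * slot_len.
    blink_pattern: binary list shown on the scanning device's LED.
    A slot counts as "tapped" if its mean RSSI drops >= dip_db below baseline.
    """
    baseline = np.median(rssi)
    slots = np.asarray(rssi, float).reshape(len(blink_pattern), slot_len)
    tapped = (baseline - slots.mean(axis=1)) >= dip_db
    match = np.mean(tapped == np.asarray(blink_pattern, bool))
    return match >= min_match

# Hypothetical trace: 4 blink slots of 5 samples each, taps in slots 0 and 2.
rssi = [-46] * 5 + [-40] * 5 + [-46] * 5 + [-40] * 5
print(taps_match_blinks(rssi, [1, 0, 1, 0], slot_len=5))  # -> True
```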
HeadGesture: Hands-Free Input Approach Leveraging Head Movements for HMD Devices
(IMWUT ’18) Yukang Yan, Chun Yu, Xin Yi, Yuanchun Shi
Abstract
We propose HeadGesture, a hands-free input approach for interacting with HMD devices. With HeadGesture, users do not need to raise their arms to perform gestures or operate remote controllers in the air; instead, they interact through simple head movements. This keeps users' hands free for other tasks, reduces hand occlusion of the field of view, and alleviates arm fatigue. Evaluation results demonstrate that the performance of HeadGesture is comparable to mid-air hand gestures and that users feel significantly less fatigue.
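The abstract does not detail the recognition pipeline, but as a purely illustrative sketch (the axes, threshold, and two-gesture set are hypothetical), head gestures could be separated by comparing gyroscope energy across rotation axes:

```python
import numpy as np

def classify_head_gesture(gyro):
    """Toy two-gesture classifier over gyroscope samples.

    gyro: (n, 3) angular velocities (rad/s) around pitch, yaw, roll.
    A nod concentrates energy on pitch, a shake on yaw; weak motion is "none".
    """
    pitch, yaw, _ = np.abs(np.asarray(gyro)).sum(axis=0)
    if max(pitch, yaw) < 5.0:           # hypothetical activation threshold
        return "none"
    return "nod" if pitch > yaw else "shake"

print(classify_head_gesture([[0.8, 0.1, 0.0]] * 10))  # -> "nod"
```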
Lip-Interact: Improving Mobile Device Interaction with Silent Speech Commands
(UIST 2018) Ke Sun, Chun Yu, Weinan Shi, Lan Liu, Yuanchun Shi
Abstract
We present Lip-Interact, an interaction technique that allows users to issue commands on their smartphone through silent speech. Lip-Interact repurposes the front camera to capture the user's mouth movements and recognizes the issued commands with an end-to-end deep learning model. Our system supports 44 commands covering both system-level functionality (launching apps, changing system settings, and handling pop-up windows) and application-level functionality (integrated operations for two apps). We verify the feasibility of Lip-Interact with three user experiments: evaluating recognition accuracy, comparing it with touch on input efficiency, and comparing it with voiced commands with regard to personal privacy and social norms. We demonstrate that Lip-Interact helps users access functionality efficiently in one step, enables one-handed input when the other hand is occupied, and assists touch to make interactions more fluent.
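The 44-command vocabulary comes from the paper, but the network itself is not specified in this abstract; here is a toy PyTorch sketch of the video-in, command-out pipeline shape (the layer choices, frame count, and crop size are hypothetical, and the paper's actual model differs):

```python
import torch
import torch.nn as nn

class SilentSpeechNet(nn.Module):
    """Toy end-to-end classifier: a clip of mouth-region frames in,
    one of 44 command classes out."""

    def __init__(self, num_commands=44):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # input (B, 1, T, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global average pool
        )
        self.classifier = nn.Linear(32, num_commands)

    def forward(self, clips):
        return self.classifier(self.features(clips).flatten(1))

# A grayscale clip: batch of 1, 16 frames of 64x64 mouth crops.
logits = SilentSpeechNet()(torch.randn(1, 1, 16, 64, 64))
print(logits.argmax(dim=1))  # predicted command index
```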
CCCF 2018 | Natural Human-Computer Interaction
Yuanchun Shi
Abstract
Human-computer interaction is the process of exchanging information between humans and computers to accomplish a task. As computer form factors and usage contexts grow increasingly complex and diverse, interaction technology has become a core competitive strength for device and application innovation, and natural interaction is the direction of development. We hope that, in many scenarios, people can interact with handheld devices, home appliances, wearables, robots, and driverless vehicles through more natural modalities (such as speech, semantically rich gestures, or even everyday behaviors), and receive feedback that is both understandable and pleasant to perceive. "Natural" here means that information presentation and interactive expression conform as closely as possible to people's existing understanding of the real world, make full use of the available channels, and reduce or even eliminate the cost of learning. On the expression side, it also means that users need not express themselves precisely: input may be vague or approximate, while the machine still understands accurately and serves precisely.
CCCF 2018 | Modeling User Actions in Natural Text Input
Chun Yu, Xin Yi, Yuanchun Shi
Abstract
The human-computer interface is the bridge over which information is passed and exchanged between humans and computers. Over the decades, its evolution has increasingly emphasized naturalness, i.e., interaction behaviors that match users' physiological and cognitive habits. Interaction has progressed from command lines to graphical user interfaces, touch interaction, and three-dimensional mid-air interaction. As a result, naturalness has steadily improved, but constrained interface sizes and limited feedback channels (such as haptics) have reduced the precision of interaction signals and thus interaction efficiency. This tension between naturalness and efficiency has become a hard problem in human-computer interaction research, and how to balance the two is a research question of both theoretical and practical significance.

Text input is the process by which a person enters text into a computer through a human-computer interface, and it is one of the most fundamental interaction tasks. In daily life, the oldest and most widely accepted text input method is the keyboard. With traditional keyboards, including physical keyboards and larger soft keyboards, users can achieve essentially error-free text input after some practice. However, in new-generation natural interaction scenarios such as wearables and virtual/augmented reality, users face challenges such as extremely small interfaces (smartwatches) and a lack of tactile feedback (mid-air interaction), which make accurate input difficult and yield input signals with a low signal-to-noise ratio. Under these conditions, the accuracy of traditional text input techniques with weak error correction drops sharply, to the point where text input fails altogether, or the user becomes tense and fatigued during input, with low efficiency and a poor experience.
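To make the noisy-input decoding problem concrete, here is a minimal sketch of statistical keyboard decoding (the key coordinates, Gaussian touch model, and two-word lexicon are hypothetical illustrations, not the authors' algorithm): each touch is modeled as 2-D Gaussian noise around the intended key center, and the decoder picks the word that maximizes the prior times the touch likelihood.

```python
import math

# Hypothetical key centers in key-width units and a fixed touch scatter.
KEY_CENTERS = {"h": (5.5, 1.0), "i": (7.0, 0.0), "u": (6.0, 0.0)}
SIGMA = 0.6  # standard deviation of touch scatter, in key widths

def log_touch_likelihood(touch, key):
    """Log-likelihood of a touch under a 2-D Gaussian centered on the key."""
    cx, cy = KEY_CENTERS[key]
    dx, dy = touch[0] - cx, touch[1] - cy
    return -(dx * dx + dy * dy) / (2 * SIGMA * SIGMA)

def decode(touches, lexicon):
    """touches: list of (x, y) points; lexicon: dict word -> prior probability."""
    def score(word):
        return math.log(lexicon[word]) + sum(
            log_touch_likelihood(t, ch) for t, ch in zip(touches, word))
    return max((w for w in lexicon if len(w) == len(touches)), key=score)

# Two noisy touches near "h" then between "i" and "u": the prior and the
# geometry together favor "hi".
print(decode([(5.6, 0.9), (6.4, 0.2)], {"hi": 0.7, "hu": 0.3}))  # -> "hi"
```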
TOAST: Ten-Finger Eyes-Free Typing on Touchable Surfaces
(UbiComp 2018) Weinan Shi, Chun Yu, Xin Yi, Zhen Li, Yuanchun Shi
Abstract
Touch typing on flat surfaces (e.g., an interactive tabletop) is challenging due to the lack of tactile feedback and hand drift. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g., location and size) estimation based on users' typing data.
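One plausible reading of that estimation problem, as a sketch (the paper's formalization may differ): fit a uniform scale and offset to labeled touch points by least squares, recovering where the drifted keyboard sits and how large it is.

```python
import numpy as np

def estimate_keyboard_params(touches, intended_centers):
    """Estimate keyboard location (offset) and size (scale) from typing data.

    touches: (n, 2) touch points; intended_centers: (n, 2) nominal centers
    of the keys the user meant to hit. Least-squares fit of a uniform scale
    s and offset t minimizing sum ||s * center + t - touch||^2.
    """
    touches = np.asarray(touches, float)
    centers = np.asarray(intended_centers, float)
    c_mean, t_mean = centers.mean(axis=0), touches.mean(axis=0)
    cc, tc = centers - c_mean, touches - t_mean
    scale = (cc * tc).sum() / (cc * cc).sum()   # optimal uniform scale
    offset = t_mean - scale * c_mean            # optimal translation
    return scale, offset

# Hypothetical typing data: the hands drifted right/down and spread out.
centers = [(0, 0), (1, 0), (2, 0), (0, 1)]
touches = [(0.5, 0.2), (1.6, 0.2), (2.7, 0.2), (0.5, 1.3)]
print(estimate_keyboard_params(touches, centers))  # -> (1.1, [0.5, 0.2])
```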
Eyes-Free Target Acquisition in Interaction Space around the Body for Virtual Reality
(CHI 2018) Yukang Yan, Chun Yu, Xiaojuan Ma, Shuai Huang, Hasan Iqbal, Yuanchun Shi
Abstract
Eyes-free target acquisition, which relies on spatial sense and proprioception, is a basic and important human ability for interacting with the surrounding physical world. In this research, we leverage this ability to improve interaction in virtual reality (VR) by allowing users to acquire a virtual object without looking at it. We expect this eyes-free approach to effectively reduce head movements and focus changes, thereby speeding up interaction and alleviating fatigue and VR sickness.