• Designing a Smart Helmet for Wildland Firefighters to Avoid Dehydration by Monitoring Bio-signals

    (CHI’21) Jiali Zhang, He Feng, Chee Jen Ngeh, John Raiti, Yuntao Wang, Linda Wagner, Paulo Goncalves, Gulnara Sarymbekova, Jenna James, Paul Albee, Jay Thiagarajan
    Smart Helmet is a new wearable device that monitors wildland firefighters’ real-time bio-signal data and alerts them to potential health issues such as dehydration. In this paper, we applied the human-centered design method to develop Smart Helmet for firefighters. We initially conducted multiple rounds of primary research, interviewing 80 firefighters to collect user needs and deployment constraints. Targeting dehydration caused by heat exhaustion and overexertion, we developed a smart helmet prototype, named FireWorks, with an array of sensors collecting the firefighter’s bio-signals, including body temperature, heart rate, and motion. When abnormal bio-signal levels are detected, the alert system notifies the firefighter and their supervisor. The notification is driven by an on-device algorithm that predicts imminent health risks. Further, we designed a mobile application to display real-time and historical bio-signal data and to alert users about potential dehydration issues. Finally, we ran user evaluation studies, iterated the prototype based on user feedback, and ran a functional evaluation to verify that all implemented functions work properly.

  • Integrating a Voice User Interface into a Virtual Therapy Platform

    (CHI’21) Yun Liu, Lu Wang, William R. Kearns, Linda E Wagner, John Raiti, Yuntao Wang, Weichao Yuwen
    More than 1 in 5 adults in the U.S. serve as family caregivers, the backbone of the healthcare system. Caregiving activities significantly affect their physical and mental health, sleep, work, and family relationships over extended periods. Many caregivers tend to downplay their own health needs and have difficulty accessing support. Failure to maintain their own health diminishes their ability to provide high-quality care to their loved ones. Voice user interfaces (VUIs) hold promise in providing the tailored support family caregivers need to maintain their own health, such as flexible access and hands-free interaction. This work integrates VUIs into a virtual therapy platform to promote user engagement in self-care practices. We conducted user research with family caregivers and subject matter experts, and designed multiple prototypes with user evaluations. Advantages, limitations, and design considerations for integrating VUIs into virtual therapy are discussed.

  • Understanding the Design Space of Mouth Microgestures

    (DIS’21) Victor Chen, Xuhai Xu, Richard Li, Yuanchun Shi, Shwetak Patel, Yuntao Wang*
    As wearable devices move toward the face (e.g., smart earbuds, glasses), there is an increasing need to facilitate intuitive interactions with these devices. Current sensing techniques can already detect many mouth-based gestures; however, users’ preferences for these gestures are not fully understood. In this paper, we investigate the design space and usability of mouth-based microgestures. We first conducted brainstorming sessions (N=16) and compiled an extensive set of 86 user-defined gestures. Then, with an online survey (N=50), we assessed the physical and mental demand of our gesture set and identified a subset of 14 gestures that can be performed easily and naturally. Finally, we conducted a remote Wizard-of-Oz usability study (N=11) mapping gestures to various daily smartphone operations in both sitting and walking contexts. From these studies, we develop a taxonomy for mouth gestures, finalize a practical gesture set for common applications, and provide design guidelines for future mouth-based gesture interactions.

  • LightGuide: Directing Visually Impaired People along a Path Using Light Cues

    (IMWUT’21) Ciyuan Yang¹, Shuchang Xu¹, Tianyu Yu, Guanhong Liu, Chun Yu*, Yuanchun Shi
    This work presents LightGuide, a directional feedback solution that indicates a safe direction of travel via the position of a light within the user’s visual field. We prototyped LightGuide using an LED strip attached to the brim of a cap, and conducted three user studies to explore its effectiveness compared to HapticBag, a state-of-the-art baseline that indicates directions through on-shoulder vibrations. Results showed that, with LightGuide, participants turned to target directions in place more quickly and smoothly, and navigated along basic and complex paths more efficiently, smoothly, and accurately than with HapticBag. Users’ subjective feedback implied that LightGuide was easy to learn and intuitive to use.

  • Facilitating Text Entry on Smartphones with QWERTY Keyboard for Users with Parkinson’s Disease

    (CHI’21) Yuntao Wang, Ao Yu, Xin Yi*, Yuanwei Zhang, Ishan Chatterjee, Shwetak Patel, Yuanchun Shi
    QWERTY is the primary smartphone text input keyboard configuration. However, insertion and substitution errors caused by hand tremors, often experienced by users with Parkinson’s disease, can severely affect typing efficiency and user experience. In this paper, we investigated the typing behavior of users with Parkinson’s disease on smartphones. In particular, we identified and compared the typing characteristics generated by users with and without Parkinson’s symptoms. We then proposed an elastic probabilistic model for input prediction. By incorporating both spatial and temporal features, this model generalizes the classical statistical decoding algorithm to correct insertion, substitution, and omission errors, while maintaining direct physical interpretation.

  • FaceSight: Enabling Hand-to-Face Gesture Interaction on AR Glasses with a Downward-Facing Camera Vision

    (CHI’21) Yueting Weng, Chun Yu*, Yingtian Shi, Yuhang Zhao, Yukang Yan, Yuanchun Shi
    We present FaceSight, a computer vision-based hand-to-face gesture sensing technique for AR glasses. FaceSight fixes an infrared camera onto the bridge of the AR glasses to provide extra sensing capability for the lower face and hand behaviors. We designed 21 hand-to-face gestures and demonstrated the potential interaction benefits through five AR applications. We designed and implemented an algorithm pipeline that classifies all gestures with 83.06% accuracy, validated on data from 10 users. Given its compact form factor and rich gesture set, we see FaceSight as a practical solution for augmenting the input capabilities of AR glasses in the future.

  • Revamp: Enhancing Accessible Information Seeking Experience of Online Shopping for Blind or Low Vision Users

    (CHI’21) Ruolin Wang, Zixuan Chen, Mingrui “Ray” Zhang, Zhaoheng Li, Zhixiu Liu, Zihan Dang, Chun Yu, Xiang “Anthony” Chen
    Online shopping has become a valuable modern convenience, but blind or low vision (BLV) users still face significant challenges using it. We propose Revamp, a system that leverages customer reviews for interactive information retrieval. Revamp is a browser integration that supports review-based question-answering interactions on a reconstructed product page. From our interviews, we identified four main aspects (color, logo, shape, and size) that are vital for BLV users to understand the visual appearance of a product. Based on these findings, we formulated syntactic rules to extract review snippets, which we used to generate image descriptions and responses to users’ queries.

  • ProxiMic: Convenient Voice Activation via Close-to-Mic Speech Detected by a Single Microphone

    (CHI’21) Yue Qin, Chun Yu*, Zhaoheng Li, Mingyuan Zhong, Yukang Yan, Yuanchun Shi
    Wake-up-free techniques (e.g., Raise-to-Speak) are important for improving the voice input experience. We present ProxiMic, a close-to-mic (within 5 cm) speech sensing technique using only one microphone. With ProxiMic, a user keeps a microphone-embedded device close to the mouth and speaks directly to the device without wake-up phrases or button presses. To detect close-to-mic speech, we use features from the pop noise observed when a user speaks and blows air onto the microphone. Sound input is first passed through a low-pass adaptive threshold filter, then analyzed by a CNN which detects subtle close-to-mic features (mainly pop noise). Our two-stage algorithm achieves 94.1% activation recall and 12.3 False Accepts per Week per User (FAWU) with a 68 KB memory footprint, and runs at 352 fps on a smartphone. The user study shows that ProxiMic is efficient, user-friendly, and practical.
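    The two-stage idea — a cheap adaptive gate that only wakes the heavier CNN stage on a sudden energy burst (a pop-noise candidate) — can be sketched as follows. This is an illustrative reconstruction, not the paper’s implementation; the frame energy measure, adaptation rate `alpha`, and margin `k` are assumptions.

    ```python
    def frame_energy(frame):
        """Mean squared amplitude of one audio frame."""
        return sum(x * x for x in frame) / len(frame)

    def pop_candidates(frames, alpha=0.95, k=8.0):
        """Return indices of frames whose energy exceeds an adaptive
        noise-floor estimate by a factor of k (candidates for the
        second-stage classifier)."""
        noise_floor = frame_energy(frames[0])
        hits = []
        for i, frame in enumerate(frames):
            e = frame_energy(frame)
            if e > k * noise_floor:
                hits.append(i)  # candidate: hand over to the CNN stage
            else:
                # track the slowly varying ambient noise floor
                noise_floor = alpha * noise_floor + (1 - alpha) * e
        return hits
    ```

    Because the floor adapts only on non-candidate frames, a loud burst does not inflate the noise estimate, so consecutive quiet frames after the burst remain below threshold.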

  • Tactile Compass: Enabling Visually Impaired People to Follow a Path with Continuous Directional Feedback

    (CHI’21) Guanhong Liu¹, Tianyu Yu¹, Chun Yu*, Haiqing Xu, Shuchang Xu, Ciyuan Yang, Feng Wang, Haipeng Mi, Yuanchun Shi
    Accurate and effective directional feedback is crucial for an electronic travel aid that guides visually impaired people along paths. This paper presents Tactile Compass, a hand-held device that provides continuous directional feedback with a rotatable needle pointing toward the planned direction. We conducted two lab studies to evaluate the effectiveness of the feedback solution. Results showed that, using Tactile Compass, participants could reach the target direction in place with a mean deviation of 3.03°, and could smoothly navigate along paths 60cm wide with a mean deviation from the centerline of 12.1cm. Subjective feedback showed that Tactile Compass was easy to learn and use.

  • PTeacher: a Computer-Aided Personalized Pronunciation Training System with Exaggerated Audio-Visual Corrective Feedback

    (CHI’21) Yaohua Bu¹, Tianyi Ma¹, Weijun Li, Hang Zhou, Jia Jia*, Shengqi Chen, Kaiyuan Xu, Dachuan Shi, Haozhe Wu, Zhihan Yang, Kun Li, Zhiyong Wu, Yuanchun Shi, Xiaobo Lu, Ziwei Liu
    Second language (L2) English learners often find it difficult to improve their pronunciation due to the lack of expressive and personalized corrective feedback. We present a Computer-Aided Pronunciation Training system that provides personalized, exaggerated audio-visual corrective feedback for mispronunciations, tailoring the degree of audio and visual exaggeration to the individual learner. To identify appropriate degrees of exaggeration, we propose three critical metrics, informed by studies with 100 learners and 22 teachers. User studies demonstrate that our system rectifies mispronunciations in a more discriminative, understandable, and perceptible manner.

  • Auth+Track: Enabling Authentication Free Interaction on Smartphone by Continuous User Tracking

    (CHI’21) Chen Liang, Chun Yu*, Xiaoying Wei, Xuhai Xu, Yongquan Hu, Yuntao Wang, Yuanchun Shi
    We propose Auth+Track, a novel authentication model that aims to reduce redundant authentication in everyday smartphone usage. To instantiate the Auth+Track model, we present PanoTrack, a prototype that integrates body and near-field hand information for user tracking. We install a fisheye camera on top of the phone to achieve a panoramic vision that captures both the user’s body and on-screen hands. Based on the captured video stream, we develop an algorithm to extract 1) features for user tracking, including body keypoints and their temporal and spatial association and near-field hand status, and 2) features for user identity assignment.

  • HulaMove: Using Commodity IMU for Waist Interaction

    (CHI’21) Xuhai Xu, Jiahao Li, Tianyi Yuan, Liang He, Xin Liu, Yukang Yan, Yuntao Wang, Yuanchun Shi, Jennifer Mankoff, Anind K. Dey
    We present HulaMove, a novel interaction technique that leverages the movement of the waist as a new eyes-free and hands-free input method. We first conducted a study to understand users’ ability to control their waist. We found that users could easily discriminate eight shifting directions and two rotating orientations. We developed a design space with eight gestures. Using a hierarchical machine learning model, our real-time system could recognize gestures at an accuracy of 97.5%. Finally, we conducted a second user study for usability testing in both real-world scenarios and VR settings. Our study indicated that HulaMove significantly reduced interaction time by 41.8%, and greatly improved users’ sense of presence in the virtual world.

  • LightWrite: Teach Handwriting to The Visually Impaired with A Smartphone

    (CHI’21) Zihan Wu, Chun Yu*, Xuhai Xu, Tong Wei, Tianyuan Zou, Ruolin Wang, Yuanchun Shi
    Learning to write is challenging for blind and low vision (BLV) people. We propose LightWrite, a low-cost, easy-to-access smartphone application that uses voice-based descriptive instruction and feedback to teach BLV users to write English lowercase letters and Arabic digits in a specially designed font. A two-stage study with 15 BLV users with little prior writing knowledge shows that LightWrite can successfully teach users to write characters in an average of 1.09 minutes per letter. After initial training and 20-minute daily practice for 5 days, participants were able to write an average of 19.9 out of 26 letters recognizable by sighted raters.


  • QwertyRing: Text Entry on Physical Surfaces Using a Ring

    (IMWUT’20) Yizheng Gu, Chun Yu*, Zhipeng Li, Zhaoheng Li, Xiaoying Wei, and Yuanchun Shi
    The software keyboard is widely used on digital devices such as smartphones, computers, and tablets. The software keyboard operates via touch, which is efficient, convenient, and familiar to users. However, some emerging technology devices such as AR/VR headsets and smart TVs do not support touch-based text entry. In this paper, we present QwertyRing, a technique that supports text entry on physical surfaces using an IMU ring. Evaluation shows that users can type 20.59 words per minute after a five-day training.

  • Virtual Paving: Rendering a Smooth Path for People with Visual Impairment through Vibrotactile and Audio Feedback

    (IMWUT’20) Shuchang Xu, Ciyuan Yang, Wenhao Ge, Chun Yu, and Yuanchun Shi
    We propose Virtual Paving, which aims to assist independent navigation by rendering a smooth path to visually impaired people through multi-modal feedback. This work focuses on the feedback design of Virtual Paving. First, we derived design guidelines from an investigation of visually impaired people’s current mobility practices. Next, we developed a multi-modal solution through co-design and evaluation with visually impaired users. This solution includes (1) vibrotactile feedback on the shoulders and waist to give directional cues and (2) audio feedback to describe road conditions ahead of the user. Guided by this solution, 16 visually impaired participants successfully completed 127 out of 128 trials on 2.1m-wide basic paths. Subjective feedback indicated that our solution was easy to learn and enabled users to walk smoothly.

  • EarBuddy: Enabling On-Face Interaction via Wireless Earbuds

    (CHI’20) Xuhai Xu, Haitian Shi, Xin Yi, Wenjia Liu, Yukang Yan, Yuanchun Shi, Alex Mariakakis, Jennifer Mankoff, Anind K. Dey
    We propose EarBuddy, a real-time system that leverages the microphone in commercial wireless earbuds to detect tapping and sliding gestures near the face and ears. We developed a design space to generate 27 valid gestures and selected the eight that were optimal for both human preference and microphone detectability. We collected a dataset of those eight gestures (N=20) and trained deep learning models for gesture detection and classification. Our optimized classifier achieved an accuracy of 95.3%. Finally, we evaluated EarBuddy’s usability. Our results show that EarBuddy can facilitate novel interaction and provide a new eyes-free, socially acceptable input method that is compatible with commercial wireless earbuds and has the potential for scalability and generalizability.

  • PalmBoard: Leveraging Implicit Touch Pressure in Statistical Decoding for Indirect Text Entry

    (CHI’20) Xin Yi, Chen Wang, Xiaojun Bi, Yuanchun Shi
    We investigated how to incorporate implicit touch pressure, the finger pressure applied to a touch surface during typing, to improve text entry performance via statistical decoding. Taking one-handed touch-typing on an indirect interface as an example scenario, we collected typing data on a pressure-sensitive touchpad and analyzed users’ typing behavior. Our investigation revealed distinct pressure patterns for different keys and led to a Markov-Bayesian decoder that incorporates pressure image data into decoding. It improved the top-1 accuracy from 53% to 74% over a naive Bayesian decoder. We then implemented PalmBoard, a text entry method that implements the Markov-Bayesian decoder and effectively supports one-handed touch-typing on indirect interfaces. Overall, our investigation showed that incorporating implicit touch pressure is effective in improving text entry decoding.
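    The underlying statistical decoding idea can be illustrated with a minimal naive Bayesian key decoder — a stand-in for the paper’s Markov-Bayesian model, without its pressure and Markov terms. The key coordinates, spread `sigma`, and unigram priors below are toy values.

    ```python
    import math

    # Toy keyboard: key centres in keyboard coordinates, plus a
    # unigram language prior over keys (illustrative values only).
    KEYS = {"a": (0.0, 0.0), "s": (1.0, 0.0), "d": (2.0, 0.0)}
    PRIOR = {"a": 0.5, "s": 0.3, "d": 0.2}

    def decode(touch, sigma=0.6):
        """Rank keys by posterior P(key | touch): a Gaussian spatial
        likelihood around each key centre times the language prior."""
        def score(key):
            kx, ky = KEYS[key]
            dx, dy = touch[0] - kx, touch[1] - ky
            likelihood = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
            return likelihood * PRIOR[key]
        return sorted(KEYS, key=score, reverse=True)
    ```

    A touch at x=0.55 is spatially closer to “s”, yet the decoder returns “a” because the prior outweighs the small spatial difference — the same mechanism that lets pressure or language evidence override a noisy touch point.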

  • Understanding Window Management Interactions in AR Headset + Smartphone Interface

    (CHI Extended Abstracts’20) Jie Ren, Yueting Weng, Chengchi Zhou, Chun Yu, Yuanchun Shi
    We envision a combined use of an AR headset and a smartphone that can provide a more extensive display and precise touch input simultaneously. In this way, the input/output interfaces of the two devices can fuse to redefine how a user manages application windows seamlessly across them. In this work, we conducted a formative interview study with ten people to understand how users would prefer to manage multiple windows on the fused interface. Our interviews highlighted that the desire to use the smartphone as a window management interface shaped users’ window management practices. This paper reports how this desire is manifested.

  • Recognizing Unintentional Touch on Interactive Tabletop

    (IMWUT’20) Xuhai Xu, Chun Yu, Yuntao Wang, Yuanchun Shi
    A multi-touch interactive tabletop is designed to embody the benefits of a digital computer within the familiar surface of a physical tabletop. We leverage gaze direction, head orientation, and screen contact data to identify and filter out unintentional touches, so that users can take full advantage of the physical properties of an interactive tabletop. We first conducted a user study to identify behavioral pattern differences (gaze, head, and touch) between completing usual tasks on digital versus physical tabletops. We then compiled our findings into five types of spatiotemporal features and trained a machine learning model to recognize unintentional touches. Finally, we evaluated our algorithm in a real-time filtering system. A user study shows that our algorithm is stable, and that the improved tabletop effectively screens out unintentional touches and provides a more relaxed and natural user experience. By linking gaze and head behavior to touch behavior, our work sheds light on the possibility of future tabletop technology better understanding users’ input intention.

  • Keep the Phone in Your Pocket: Enabling Smartphone Operation with an IMU Ring for Visually Impaired People

    (IMWUT’20) Guanhong Liu, Yizheng Gu, Yiwen Yin, Chun Yu, Yuntao Wang, Haipeng Mi, Yuanchun Shi
    We present a ring-based input interaction that enables in-pocket smartphone operation. By wearing a ring with an Inertial Measurement Unit on the index finger, users can perform gestures on any surface (e.g., tables, thighs) using subtle, one-handed gestures and receive auditory feedback via earphones. We conducted participatory studies to obtain a set of versatile commands and corresponding gestures. We subsequently trained an SVM model to recognize these gestures and achieved a mean accuracy of 95.5% on 15 classifications. Evaluation results showed that our ring interaction is more efficient than some baseline phone interactions and is easy, private, and fun to use.

  • MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot

    (CHI’20) Yuntao Wang, Zichao (Tyson) Chen, Hanchuan Li, Zhengyi Cao, Huiyi Luo, Tengxiang Zhang, Ke Ou, John Raiti, Chun Yu, Shwetak Patel, Yuanchun Shi
    Haptic feedback can significantly enhance the realism and immersiveness of virtual reality (VR) systems. In this paper, we propose MoveVR, a technique that enables realistic, multiform force feedback in VR leveraging commonplace cleaning robots. MoveVR can generate tension, resistance, impact and material rigidity force feedback with multiple levels of force intensity and directions. This is achieved by changing the robot’s moving speed, rotation, position as well as the carried proxies. We demonstrate the feasibility and effectiveness of MoveVR through interactive VR gaming. In our quantitative and qualitative evaluation studies, participants found that MoveVR provides more realistic and enjoyable user experience when compared to commercially available haptic solutions such as vibrotactile haptic systems.

  • FrownOnError: Interrupting Responses from Smart Speakers by Facial Expressions

    (CHI’20) Yukang Yan, Chun Yu, Wengrui Zheng, Ruining Tang, Xuhai Xu, Yuanchun Shi
    In the conversations with smart speakers, misunderstandings of users’ requests lead to erroneous responses. We propose FrownOnError, a novel interaction technique that enables users to interrupt the responses by intentional but natural facial expressions. This method leverages the human nature that the facial expression changes when we receive unexpected responses. We conducted a first user study (N=12) which revealed the significant difference in the frequency of occurrence and intensity of users’ facial expressions between two conditions. Our second user study (N=16) evaluated the user experience and interruption efficiency of FrownOnError and the third user study (N=12) explored suitable conversation recovery strategies after the interruptions.

  • Designing and Evaluating Hand-to-Hand Gestures with Dual Commodity Wrist-Worn Devices

    (IMWUT’20) Yiqin Lu, Bingjian Huang, Chun Yu, Guanhong Liu, Yuanchun Shi
    We explore hand-to-hand gestures, a group of gestures that are performed by touching one hand with the other hand. Hand-to-hand gestures are easy to perform and provide haptic feedback on both hands. Moreover, hand-to-hand gestures generate simultaneous vibration on both hands that can be sensed by dual off-the-shelf wrist-worn devices. Our results show that the recognition accuracy for fourteen gestures is 94.6% when the user is stationary, and the accuracy for five gestures is 98.4% and 96.3% when the user is walking and running, respectively. This is significantly more accurate than a single device worn on either wrist.

  • Investigating Bubble Mechanism for Ray-Casting to Improve 3D Target Acquisition in Virtual Reality

    (IEEE VR’20) Yiqin Lu, Chun Yu, Yuanchun Shi
    We investigate a bubble mechanism for ray-casting in virtual reality. The bubble mechanism selects the target nearest to the ray, so users do not have to shoot the ray precisely through the target. We first design the selection criterion and the visual feedback of the bubble. We then conduct two experiments to evaluate ray-casting techniques with the bubble mechanism in both simple and complicated 3D target acquisition tasks. Results show the bubble mechanism significantly improves ray-casting in both performance and preference, and that our Bubble Ray technique using an angular distance definition is competitive with other target acquisition techniques.
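    The angular-distance selection criterion can be sketched as picking the target whose direction forms the smallest angle with the ray — an illustration of the idea, not the paper’s implementation (which also covers visual feedback and task design). Targets here are reduced to 3D centre points.

    ```python
    import math

    def angular_distance(ray_dir, origin, target):
        """Angle (radians) between the ray direction and the direction
        from the ray origin to the target centre."""
        to_t = tuple(t - o for t, o in zip(target, origin))
        dot = sum(r * t for r, t in zip(ray_dir, to_t))
        norm = (math.sqrt(sum(r * r for r in ray_dir))
                * math.sqrt(sum(t * t for t in to_t)))
        # clamp to guard against floating-point drift outside [-1, 1]
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    def bubble_select(ray_dir, origin, targets):
        """Pick the target nearest to the ray in angular terms."""
        return min(targets, key=lambda t: angular_distance(ray_dir, origin, t))
    ```

    Because the criterion is angular rather than Euclidean, a far-away target slightly off the ray still wins over a nearby target at a wide angle, matching how users aim with a ray.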

  • HeadCross: Exploring Head-Based Crossing Selection on Head-Mounted Displays

    (IMWUT’20) Yukang Yan, Yingtian Shi, Chun Yu, Yuanchun Shi
    We propose HeadCross, a head-based interaction method for selecting targets on VR and AR head-mounted displays (HMDs). Using HeadCross, users control the pointer with head movements and perform a crossing gesture to select targets, without using their hands. We first conduct a user study to identify behavioral differences between performing HeadCross and other head movements. Based on the results, we discuss design implications, extract useful features, and develop the recognition algorithm. In Study 2, we compared HeadCross to a baseline method in two typical target selection tasks on both VR and AR interfaces. In Study 3, we compared HeadCross to three alternative designs of head-only selection methods.


  • FlexTouch: Enabling Large-Scale Interaction Sensing Beyond Touchscreens Using Flexible and Conductive Materials

    (IMWUT ’19) Yuntao Wang, Jianyu Zhou, Hanchuan Li, Tengxiang Zhang, Minxuan Gao, Zhoulin Cheng, Chun Yu, Shwetak Patel, and Yuanchun Shi
    In this paper, we present FlexTouch, a technique that enables large-scale interaction sensing beyond the spatial constraints of capacitive touchscreens using passive low-cost conductive materials. This is achieved by customizing 2D circuit-like patterns with an array of conductive strips that can be easily attached to the sensing nodes on the edge of the touchscreen. FlexTouch requires no hardware modification, and is compatible with various conductive materials (copper foil tape, silver nanoparticle ink, ITO frames, and carbon paint), as well as fabrication methods (cutting, coating, and ink-jet printing).

  • PrivateTalk: Activating Voice Input with Hand-On-Mouth Gesture Detected by Bluetooth Earphones

    (UIST ’19) Yukang Yan, Chun Yu, Yingtian Shi, Minxing Xie
    We introduce PrivateTalk, an on-body interaction technique that allows users to activate voice input by performing a Hand-On-Mouth gesture while speaking. The gesture is performed as a hand partially covering the mouth from one side. PrivateTalk provides two benefits simultaneously. First, it enhances privacy by reducing the spread of voice while also concealing the lip movements from the view of other people in the environment. Second, the simple gesture removes the need for speaking wake-up words and is more accessible than a physical/software button, especially when the device is not in the user’s hands. To recognize the Hand-On-Mouth gesture, we propose a novel sensing technique that leverages the difference between the signals received by the two Bluetooth earphones worn on the left and right ears. Our evaluation shows that the gesture can be accurately detected, and users consistently like PrivateTalk and consider it intuitive and effective.

  • Accurate and Low-Latency Sensing of Touch Contact on Any Surface with Finger-Worn IMU Sensor

    (UIST ’19) Yizheng Gu, Chun Yu, Zhipeng Li, Weiqi Li, Shuchang Xu, Xiaoying Wei, Yuanchun Shi
    Head-mounted Mixed Reality (MR) systems enable touch interaction on any physical surface. However, optical methods (i.e., with cameras on the headset) have difficulty determining touch contact accurately. We show that a finger ring with an Inertial Measurement Unit (IMU) can substantially improve the accuracy of contact sensing from 84.74% to 98.61% (F1 score), with a low latency of 10 ms. We tested different ring-wearing positions and tapping postures (e.g., with different fingers and finger parts). Results show that an IMU-based ring worn on the proximal phalanx of the index finger can accurately sense touch contact for most usable tapping postures. Participants preferred wearing a ring for the better user experience it enabled. Our approach can be combined with optical touch sensing to provide robust, low-latency contact detection.
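    The intuition behind IMU-based contact sensing — a fingertip tap produces a sharp spike in the change of acceleration that optical tracking misses — can be sketched with a simple threshold detector. This is assumed logic, not the paper’s detector; the threshold and refractory window are illustrative values.

    ```python
    def detect_taps(accel, threshold=2.0, refractory=3):
        """Return sample indices where the frame-to-frame change in
        acceleration magnitude exceeds `threshold`. After a hit, skip
        `refractory` samples to suppress the rebound of the same tap."""
        hits, i = [], 1
        while i < len(accel):
            if abs(accel[i] - accel[i - 1]) > threshold:
                hits.append(i)
                i += refractory
            else:
                i += 1
        return hits
    ```

    A real detector would run on 3-axis data at a high sample rate and feed a classifier, but the refractory window shows why a single physical tap is reported once rather than at both edges of the spike.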

  • ProxiTalk: Activate Speech Input by Bringing Smartphone to the Mouth

    (IMWUT ’19) Zhican Yang, Chun Yu, Fengshi Zheng, Yuanchun Shi
    We present ProxiTalk, an interaction technique that allows users to enable smartphone speech input by simply moving it close to their mouths. We study how users use ProxiTalk and systematically investigate the recognition abilities of various data sources (e.g., using a front camera to detect facial features, using two microphones to estimate the distance between phone and mouth). Results show that it is feasible to utilize the smartphone’s built-in sensors and instruments to detect ProxiTalk use and classify gestures. An evaluation study shows that users can quickly acquire ProxiTalk and are willing to use it.

  • Investigating Gesture Typing for Indirect Touch

    (IMWUT ’19) Zhican Yang, Chun Yu, Xin Yi, Yuanchun Shi
    With the development of ubiquitous computing, entering text on HMDs and smart TVs using handheld touchscreen devices (e.g., smartphones and controllers) is becoming more and more attractive. In these indirect touch scenarios, the touch input surface is decoupled from the visual display. In this paper, we investigate the feasibility of gesture typing for indirect touch: keeping the finger in contact with the screen during typing makes it possible to provide continuous visual feedback, which benefits input performance. We propose an improved design to address the uncertainty and inaccuracy of the first touch. Evaluation results show that users can quickly acquire indirect gesture typing and type 22.3 words per minute after 30 phrases, which significantly outperforms previous numbers in the literature. Our work provides empirical support for leveraging gesture typing for indirect touch.

  • “I Bought This for Me to Look More Ordinary”: A Study of Blind People Doing Online Shopping

    (CHI’19) Guanhong Liu, Xianghua Ding, Chun Yu, Lan Gao, Xingyu Chi, Yuanchun Shi
    Online shopping, by reducing the need to travel, has become an essential part of life for people with visual impairments. However, HCI research on online shopping for them has been limited to the analysis of accessibility and usability issues. To develop a broader and better understanding of how visually impaired people shop online, and to design accordingly, we conducted a qualitative study with twenty blind people. Our study highlighted that blind people’s desire to be treated as ordinary significantly shaped their online shopping practices: they were very attentive to the visual appearance of goods even though they themselves could not see it, and took great pains to find and learn which commodities were visually appropriate for them. This paper reports how their effort to appear ordinary is manifested in online shopping and suggests design implications to support these practices.

  • Typing on Split Keyboards with Peripheral Vision

    (CHI’19) Yiqin Lu, Chun Yu, Shuyi Fan, Xiaojun Bi, Yuanchun Shi
    Split keyboards are widely used on hand-held touchscreen devices (e.g., tablets). However, typing on a split keyboard often requires eye movement and attention switching between the two halves of the keyboard, which slows users down and increases fatigue. We explore peripheral typing, a superior typing mode in which a user focuses her visual attention on the output text and keeps the split keyboard in peripheral vision. Our investigation showed that peripheral typing reduced attention switching, enhanced user experience, and increased overall performance (27 WPM, 28% faster) over the typical eyes-on typing mode. We also designed GlanceType, a text entry system that supports both peripheral and eyes-on typing modes for real typing scenarios.

  • Clench Interaction: Novel Biting Input Techniques

    (CHI’19) Xuhai Xu, Chun Yu, Anind K. Dey, Jennifer Mankoff
    We propose clench interaction, which leverages clenching as an actively controlled physiological signal that can facilitate interactions. We conducted a user study to investigate users’ ability to control their clench force. We found that users can easily discriminate three force levels, and that they can quickly confirm actions by unclenching (quick release). We developed a design space for clench interaction based on the results and investigated the usability of the clench interface. Participants preferred the clench over baselines and indicated a willingness to use clench-based interactions. This novel technique can provide an additional input method in cases where users’ eyes or hands are busy, augment immersive experiences such as virtual/augmented reality, and assist individuals with disabilities.

  • HandSee: Enabling Full Hand Interaction on Smartphones with Front Camera-based Stereo Vision

    (CHI ’19) Chun Yu, Xiaoying Wei, Shubh Vachher, Yue Qin, Chen Liang, Yueting Weng, Yizheng Gu, Yuanchun Shi
    HandSee is a novel sensing technique that can capture the state and movement of the user’s hands while using a smartphone. We place a prism mirror on the front camera to achieve stereo vision of the scene above the touchscreen surface. With this sensing ability, HandSee enables a variety of novel interaction techniques and expands the design space for full hand interaction on smartphones.
    [Video] [Paper]

  • VIPBoard: Improving Screen-Reader Keyboard for Visually Impaired People with Character-Level Auto Correction

    (CHI’19) Weinan Shi, Chun Yu, Shuyi Fan, Feng Wang, Tong Wang, Xin Yi, Xiaojun Bi, Yuanchun Shi
    Modern touchscreen keyboards are all powered by word-level auto-correction to handle input errors. Unfortunately, visually impaired users are deprived of such benefit because a screen-reader keyboard offers only character-level input and provides no correction ability. In this paper, we present VIPBoard, a smart keyboard for visually impaired people, which aims at improving the underlying keyboard algorithm without altering the current input interaction. Upon each tap, VIPBoard predicts the probability of each key considering both touch location and a language model, and reads the most likely key, which saves the calibration time when the touchdown point misses the target key. Meanwhile, the keyboard layout automatically scales according to users’ touch point location, which enables them to select other keys easily. A user study shows that compared with the current keyboard technique, VIPBoard can reduce touch error rate by 63.0% and increase text entry speed by 12.6%.
    [Video] [Paper]
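    The character-level correction idea above can be sketched as a naive Bayes decision: each tap is scored by a touch model (how likely the touch point is, given a key) multiplied by a language-model prior over keys. Below is a minimal illustration in Python; the key coordinates, Gaussian touch model, and probabilities are invented for the example and are not the paper's actual models or parameters.

```python
import math

# Hypothetical key centers on a fragment of a Qwerty layout (unit = key widths).
KEY_CENTERS = {
    'q': (0.5, 0.5), 'w': (1.5, 0.5), 'e': (2.5, 0.5),
    'a': (0.75, 1.5), 's': (1.75, 1.5), 'd': (2.75, 1.5),
}

def touch_likelihood(touch, center, sigma=0.45):
    """P(touch | key): an isotropic 2D Gaussian around the key center."""
    dx, dy = touch[0] - center[0], touch[1] - center[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def predict_key(touch, char_lm):
    """Most likely key = argmax over keys of P(touch | key) * P(key | prefix)."""
    scores = {k: touch_likelihood(touch, c) * char_lm.get(k, 1e-6)
              for k, c in KEY_CENTERS.items()}
    return max(scores, key=scores.get)

# A tap landing exactly between 'w' and 'e'; the language model favors 'e',
# so the ambiguity is resolved in its favor.
lm = {'q': 0.01, 'w': 0.04, 'e': 0.80, 'a': 0.05, 's': 0.05, 'd': 0.05}
print(predict_key((2.0, 0.5), lm))  # → e
```

    The same machinery reads out a different key when the language model changes, which is the behavior the abstract describes: the touch point alone is ambiguous, and the prior breaks the tie.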

  • EarTouch: Facilitating Smartphone Use for Visually Impaired People in Mobile and Public Scenarios

    (CHI ’19) Ruolin Wang, Chun Yu, Xing-Dong Yang, Weijie He, Yuanchun Shi
    Interacting with a smartphone using touch input and speech output is challenging for blind and visually impaired people in mobile and public scenarios, where only one hand may be available for input (e.g., while holding a cane) and using the loudspeaker for speech output is constrained by environmental noise, privacy, and social concerns. To address these issues, we propose EarTouch, a one-handed interaction technique that allows users to interact with a smartphone by using the ear to tap or draw gestures on the touchscreen, hearing the speech output played privately via the ear speaker. In a broader sense, EarTouch brings us an important step closer to accessible smartphones for all users of all abilities.

  • Exploring Low-Occlusion Qwerty Soft Keyboard Using Spatial Landmarks

    (ACM Trans. Comput.-Hum. Interact. ’19) Ke Sun, Chun Yu, Yuanchun Shi
    The Qwerty soft keyboard is widely used on mobile devices. However, keyboards often consume a large portion of the touchscreen space, occluding the application view on the smartphone and requiring a separate input interface on the smartwatch. Such space consumption can affect the user experience of accessing information and the overall performance of text input. In order to free up screen real estate, this article explores the concept of the Sparse Keyboard and proposes two new ways of presenting the Qwerty soft keyboard. The idea is to use users’ spatial memory and the reference effect of spatial landmarks on the graphical interface. Our final design K3-SGK displays only three keys, while L5-EYOCN displays only five line segments, instead of the entire Qwerty layout.

  • Facilitating Temporal Synchronous Target Selection through User Behavior Modeling

    (IMWUT’19) Tengxiang Zhang, Xin Yi*, Ruolin Wang, Jiayuan Gao, Yuntao Wang, Chun Yu, Simin Li, Yuanchun Shi
    Temporal synchronous target selection is an association-free selection technique: users select a target by generating signals (e.g., finger taps and hand claps) in sync with its unique temporal pattern. However, the classical pattern set design and input recognition algorithms of such techniques do not leverage users’ behavioral information, which limits their robustness to imprecise inputs. In this paper, we improve these two key components by modeling users’ interaction behavior. We generated pattern sets for up to 22 targets that minimize the possibility of confusion due to imprecise inputs, and validated that the optimized pattern sets could reduce the error rate from 23% to 7% for the classical Correlation recognizer. We also tested a novel Bayesian recognizer, which achieved higher selection accuracy than the Correlation recognizer when the input sequence is short.
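    The Correlation recognizer mentioned above can be illustrated with a small sketch: the user's tap sequence and each target's temporal pattern are sampled on a common frame grid, and the target whose pattern correlates best with the taps is selected. The binary frame encoding, pattern lengths, and target names below are invented for illustration.

```python
def correlation_score(taps, pattern):
    """Pearson-style correlation between a binary tap sequence and a
    target's temporal pattern, both sampled on the same frame grid."""
    n = len(pattern)
    mt, mp = sum(taps) / n, sum(pattern) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(taps, pattern))
    dt = sum((t - mt) ** 2 for t in taps) ** 0.5
    dp = sum((p - mp) ** 2 for p in pattern) ** 0.5
    return num / (dt * dp) if dt and dp else 0.0

def select_target(taps, patterns):
    """Pick the target whose pattern best matches the user's taps."""
    return max(patterns, key=lambda name: correlation_score(taps, patterns[name]))

patterns = {
    'lamp':    [1, 0, 1, 0, 1, 0, 1, 0],
    'speaker': [1, 1, 0, 0, 1, 1, 0, 0],
}
# The user misses the last expected tap, but the sequence still correlates
# best with the lamp's pattern.
print(select_target([1, 0, 1, 0, 1, 0, 0, 0], patterns))  # → lamp
```

    The paper's behavioral modeling goes further (optimizing the pattern sets themselves and adding a Bayesian recognizer); this sketch shows only the baseline matching step.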

  • AR Assistive System in Domestic Environment Using HMDs: Comparing Visual and Aural Instructions

    (HCII’19) Shuang He, Yanhong Jia, Zhe Sun, Chenxin Yu, Xin Yi, Yuanchun Shi, Yingqing Xu
    People usually refer to printed documents while learning to use different devices. However, augmented reality (AR) assistive systems providing visual and aural instructions have been proposed as an alternative. We evaluated users’ performance of instruction understanding in four conditions: (1) baseline paper instructions, (2) visual instructions on head-mounted displays (HMDs), (3) visual instructions on a computer monitor, and (4) aural instructions. In a Wizard of Oz study on the task of making espresso coffee, we found that the helpfulness of visual and aural instructions depends on task complexity. Visual instructions are better at showing operation details, while aural instructions are suitable for presenting the intention of an operation. With the same visual instructions on displays, due to hardware limitations, HMD users completed the task with the longest duration and the heaviest perceived cognitive load.


  • Tap-to-Pair: Associating Wireless Devices with Synchronous Tapping

    (IMWUT ’18) Tengxiang Zhang, Xin Yi, Ruolin Wang, Yuntao Wang, Chun Yu, Yiqin Lu and Yuanchun Shi
    Tap-to-Pair is a spontaneous device association technique that initiates pairing from advertising devices without hardware or firmware modifications. Tapping an area near the advertising device’s antenna can change its signal strength. Users can then associate two devices by synchronizing taps on the advertising device with the blinking pattern displayed by the scanning device. By leveraging the wireless transceiver for sensing, Tap-to-Pair does not require additional resources from advertising devices and needs only a binary display (e.g. LED) on scanning devices.
    [Video] [Paper]
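    The synchronization step can be sketched as follows: the scanning device shows a blinking pattern, and pairing succeeds when the tap-induced signal-strength dips on the advertising device agree with that pattern closely enough. The binary frame representation and agreement threshold here are illustrative assumptions, not the paper's actual detector.

```python
def matches_blink(tap_frames, blink_pattern, min_agreement=0.9):
    """Pairing check: does the sequence of tap-induced signal dips
    (1 = dip observed in that frame) agree with the blinking pattern
    shown by the scanning device?"""
    agree = sum(t == b for t, b in zip(tap_frames, blink_pattern))
    return agree / len(blink_pattern) >= min_agreement

blink = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # pattern displayed on the LED
taps  = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # dips sensed near the antenna
print(matches_blink(taps, blink))  # → True
```

    A random tapper (or no tapper at all) falls below the agreement threshold, which is what lets the technique reject unintended associations.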

  • HeadGesture: Hands-Free Input Approach Leveraging Head Movements for HMD Devices

    (IMWUT’18) Yukang Yan, Chun Yu, Xin Yi, Yuanchun Shi
    We propose HeadGesture, a hands-free input approach to interact with HMD devices. Using HeadGesture, users do not need to raise their arms to perform gestures or operate remote controllers in the air. Instead, they perform simple gestures with head movement to interact. In this way, users’ hands are free to perform other tasks and it reduces the hand occlusion of the field of view and alleviates arm fatigue. Evaluation results demonstrate that the performance of HeadGesture is comparable to mid-air hand gestures and users feel significantly less fatigue.

  • Lip-Interact: Improving Mobile Device Interaction with Silent Speech Commands

    (UIST 2018) Ke Sun, Chun Yu, Weinan Shi, Lan Liu, Yuanchun Shi
    We present Lip-Interact, an interaction technique that allows users to issue commands on their smartphone through silent speech. Lip-Interact repurposes the front camera to capture the user’s mouth movements and recognize the issued commands with an end-to-end deep learning model. Our system supports 44 commands for accessing both system-level functionalities (launching apps, changing system settings, and handling pop-up windows) and application-level functionalities (integrated operations for two apps). We verify the feasibility of Lip-Interact with three user experiments: evaluating the recognition accuracy, comparing with touch on input efficiency, and comparing with voiced commands with regards to personal privacy and social norms. We demonstrate that Lip-Interact can help users access functionality efficiently in one step, enable one-handed input when the other hand is occupied, and assist touch to make interactions more fluent.
    [Video] [Paper]

  • TOAST: Ten-Finger Eyes-Free Typing on Touchable Surfaces

    (Ubicomp 2018) Weinan Shi, Chun Yu, Xin Yi, Zhen Li, Yuanchun Shi
    Touch typing on flat surfaces (e.g. interactive tabletop) is challenging due to the lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users’ typing data.
    [Video] [Paper]
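    One simple instance of estimating keyboard parameters from typing data is to take each key's center as the mean of its labeled touch points. This sketch illustrates only that idea, with made-up touch coordinates; TOAST's actual estimation is more elaborate (it also handles size and drift).

```python
def estimate_keyboard(taps_by_char):
    """Estimate each key's center from labeled typing data as the
    per-key mean of observed touch points (a minimal sketch)."""
    centers = {}
    for ch, pts in taps_by_char.items():
        n = len(pts)
        centers[ch] = (sum(x for x, _ in pts) / n,
                       sum(y for _, y in pts) / n)
    return centers

# Hypothetical touch points (in key-width units) labeled by intended key.
taps = {'f': [(3.4, 1.6), (3.6, 1.4)], 'j': [(6.5, 1.5), (6.7, 1.7)]}
print(estimate_keyboard(taps))
```

    Re-fitting these centers as the user types is one way to absorb hand drift, which is the problem the abstract identifies.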

  • Eyes-Free Target Acquisition in Interaction Space around the Body for Virtual Reality

    (CHI 2018) Yukang Yan, Chun Yu, Xiaojuan Ma, Shuai Huang, Hasan Iqbal, Yuanchun Shi
    Eyes-free target acquisition is a basic and important human ability to interact with the surrounding physical world, relying on the sense of space and proprioception. In this research, we leverage this ability to improve interaction in virtual reality (VR), by allowing users to acquire a virtual object without looking at it. We expect this eyes-free approach can effectively reduce head movements and focus changes, so as to speed up the interaction and alleviate fatigue and VR sickness.

  • VirtualGrasp: Leveraging Experience of Interacting with Physical Objects to Facilitate Digital Object Retrieval

    (CHI 2018) Yukang Yan, Chun Yu, Xiaojuan Ma, Xin Yi, Ke Sun, Yuanchun Shi
    We propose VirtualGrasp, a novel gestural approach to retrieve virtual objects in virtual reality. Using VirtualGrasp, a user retrieves an object by performing a barehanded gesture as if grasping its physical counterpart. The object-gesture mapping under this metaphor is of high intuitiveness, which enables users to easily discover, remember the gestures to retrieve the objects.

  • ForceBoard: Subtle Text Entry Leveraging Pressure

    (CHI 2018) Mingyuan Zhong, Chun Yu, Qian Wang, Xuhai Xu, Yuanchun Shi
    We present ForceBoard, a pressure-based input technique that enables text entry by subtle finger motion. To enter text, users apply pressure to control a multi-letter-wide sliding cursor on a one-dimensional keyboard with alphabetical ordering, and confirm the selection with a quick release. We examined the error model of pressure control for successive and error-tolerant input, which was incorporated into a Bayesian algorithm to infer user input.
    [Video] [Paper]
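    The pressure-to-cursor mapping and quick-release confirmation can be sketched as a small control loop. ForceBoard actually uses a multi-letter-wide cursor with Bayesian inference over noisy pressure; this simplified single-letter version, with invented pressure thresholds and sample rates, only illustrates the mechanism.

```python
ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def cursor_position(pressure, p_max=1.0, n_keys=26):
    """Map normalized pressure [0, p_max] onto the 1D alphabetical keyboard."""
    idx = int(pressure / p_max * n_keys)
    return min(idx, n_keys - 1)

def detect_release(samples, drop=0.25, window=3):
    """A quick release: pressure falls by more than `drop` within `window`
    samples. Returns the index where the release begins, or None.
    (Thresholds here are illustrative, not from the paper.)"""
    for i in range(len(samples) - window):
        if samples[i] - samples[i + window] > drop:
            return i
    return None

# A pressure trace: ramp up, hold near 0.33, then quickly release.
trace = [0.05, 0.2, 0.33, 0.34, 0.34, 0.05, 0.0]
i = detect_release(trace)
print(ALPHABET[cursor_position(trace[i])])  # → i
```

    The selected letter is the one under the cursor just before the release begins; in the paper, a Bayesian model then corrects for imprecision in where the cursor actually stopped.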


  • TouchPower: Interaction-based Power Transfer for Power-as-needed Devices

    (Ubicomp’17, discussion paper) Tengxiang Zhang, Xin Yi, Chun Yu, Yuntao Wang, Nicholas Becker, Yuanchun Shi
    The trend toward ubiquitous deployment of electronic devices demands novel low maintenance power schemes to decrease the burden of maintaining such a large number of devices. In this paper, we propose Interaction-based Power Transfer (IPT): a novel power scheme for power-as-needed devices (i.e., devices that only require power during interaction).
    [Video] [Paper]

  • BlindType: Eyes-Free Text Entry on Handheld Touchpad by Leveraging Thumb’s Muscle Memory

    (UbiComp ’17) Yiqin Lu, Chun Yu, Xin Yi, Yuanchun Shi
    Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen.

  • Is it too small?: Investigating the performances and preferences of users when typing on tiny QWERTY keyboards

    (IJHCS ’17) Xin Yi, Chun Yu, Weinan Shi, Yuanchun Shi
    Typing on tiny QWERTY keyboards on smartwatches is considered challenging or even impractical due to the limited screen space. In this paper, we describe three user studies undertaken to investigate users’ typing abilities and preferences on tiny QWERTY keyboards.

  • COMPASS: Rotational Keyboard on Non-Touch Smartwatches

    (CHI ’17, honorable mention) Xin Yi, Chun Yu, Weijie Xu, Xiaojun Bi, Yuanchun Shi
    COMPASS is a non-touch text entry technique for smartwatches. It positions multiple cursors on the keyboard and dynamically optimizes their positions to minimize rotational distance. Users reached 12.5 WPM after 90 minutes of practice.
    [Video] [Paper]

  • Float: One-Handed and Touch-Free Target Selection on Smartwatches

    (CHI ’17) Ke Sun, Yuntao Wang, Chun Yu, Yukang Yan, Hongyi Wen, Yuanchun Shi
    We present Float, an interaction technique that enables one-handed and touch-free input on smartwatches based on a combination of wrist tilt and PPG finger gestures.

  • Tap, Dwell or Gesture?: Exploring Head-Based Text Entry Techniques for HMDs

    (CHI ’17) Chun Yu, Yizheng Gu, Zhican Yang, Xin Yi, Hengliang Luo, Yuanchun Shi
    We investigated three head-based text entry techniques for HMDs: DwellType, TapType and GestureType. We found that head-based gesture typing on an HMD achieved 25 words per minute.

  • ViVo: Video-Augmented Dictionary for Vocabulary Learning

    (CHI ’17) Yeshuang Zhu, Yuntao Wang, Chun Yu, Shaoyun Shi, Yankai Zhang, Shuang He, Peijun Zhao, Xiaojuan Ma, Yuanchun Shi
    We present ViVo, a novel video-augmented dictionary that provides an inexpensive, convenient, and scalable way to exploit huge online video resources for vocabulary learning. ViVo automatically generates short video clips for learning from existing movies.

  • Word Clarity as a Metric in Sampling Keyboard Test Sets

    (CHI ’17) Xin Yi, Chun Yu, Weinan Shi, Xiaojun Bi, Yuanchun Shi
    We formally define word clarity and show that it yields a 26.4% difference in error rate and a 25% difference in input speed. We propose a Pareto optimization method for sampling test sets of different sizes.

  • CEPT: Collaborative Editing Tool for Non-Native Authors

    (CSCW ’17) Yeshuang Zhu, Shichao Yue, Chun Yu, Yuanchun Shi
    Due to language deficiencies, individual non-native speakers (NNS) face many difficulties while writing. In this paper, we propose to build a collaborative editing system that aims to facilitate the sharing of language knowledge among non-native co-authors, with the ultimate goal of improving writing quality. We describe CEPT, which allows individual co-authors to generate their own revisions as well as incorporating edits from others to achieve mutual inspiration.


  • One-Dimensional Handwriting: Inputting Letters and Words on Smart Glasses

    (CHI ’16, honorable mention) Chun Yu, Ke Sun, Mingyuan Zhong, Xincheng Li, Peijun Zhao, Yuanchun Shi
    We present 1D Handwriting, a unistroke gesture technique enabling text entry on a one-dimensional interface. The challenge is to map two-dimensional handwriting to a reduced one-dimensional space, while achieving a balance between memorability and performance efficiency. After an iterative design, we finally derive a set of ambiguous two-length unistroke gestures, each mapping to 1-4 letters. To input words, we design a Bayesian algorithm that takes into account the probability of gestures and the language model. To input letters, we design a pause gesture allowing users to switch into letter selection mode seamlessly. User studies show that 1D Handwriting significantly outperforms a selection-based technique (a variation of 1Line Keyboard) for both letter input (4.67 WPM vs. 4.20 WPM) and word input (9.72 WPM vs. 8.10 WPM). With extensive training, text entry rate can reach 19.6 WPM. Users’ subjective feedback indicates 1D Handwriting is easy to learn and efficient to use. Moreover, it has several potential applications for other one-dimensional constrained interfaces.

  • Investigating Effects of Post-Selection Feedback for Acquiring Ultra-Small Targets on Touchscreen

    (CHI ’16) Chun Yu, Hongyi Wen, Wei Xiong, Xiaojun Bi, Yuanchun Shi
    In this paper, we investigate the effects of post-selection feedback for acquiring ultra-small (2-4mm) targets on touchscreens. Post-selection feedback shows the contact point on the touchscreen after a user lifts his/her finger, to increase users’ awareness of touching. Three experiments were conducted progressively using a single crosshair target, two reciprocally acquired targets and 2D random targets. Results show that on average post-selection feedback can reduce touch error rates by 78.4%, at a cost of no more than 10% in target acquisition time. In addition, we investigate participants’ adjustment behavior based on the correlation between successive trials. We conclude that the benefit of post-selection feedback is the outcome of both an improved understanding of the finger/point mapping and the dynamic adjustment of finger movement enabled by the visualization of the touch point. [Paper]


  • ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data

    (UIST ’15) Xin Yi, Chun Yu, Mingrui Zhang, Sida Gao, Ke Sun, Yuanchun Shi
    Ten-finger freehand mid-air typing is a potential solution for post-desktop interaction. However, the absence of tactile feedback, as well as the inability to accurately distinguish the tapping finger or target keys, is the major challenge for mid-air typing. In this paper, we present ATK, a novel interaction technique that enables freehand ten-finger typing in the air based on 3D hand tracking data. Our hypothesis is that expert typists are able to transfer their typing ability from physical keyboards to mid-air typing. We followed an iterative approach in designing ATK. We first empirically investigated users’ mid-air typing behavior, and examined fingertip kinematics during tapping, correlated movement among fingers and the 3D distribution of tapping endpoints. Based on the findings, we proposed a probabilistic tap detection algorithm, and augmented Goodman’s input correction model to account for the ambiguity in distinguishing the tapping finger. We finally evaluated the performance of ATK with a 4-block study. Participants typed 23.0 WPM with an uncorrected word-level error rate of 0.3% in the first block, and later achieved 29.2 WPM in the last block without sacrificing accuracy. [Paper][video]

  • ChinAR: Facilitating Chinese Guqin Learning through Interactive Projected Augmentation

    (Chinese CHI ’15, best paper) Yingxue Zhang, Siqi Liu, Lu Tao, Chun Yu, Yuanchun Shi, Yingqing Xu
    The Guqin, a seven-stringed fretless zither, is the most representative traditional musical instrument in China. However, the complexity of its unique notation and theory has severely limited its popularity in the modern world. With the goal of providing an easy and effective way of learning the Guqin, we have created an interactive learning system called ChinAR which employs augmented reality. We make three main contributions in this paper: (1) a systematic method to design for instrumental learning combining eastern and western musical concepts; (2) a primary validation of the effect of augmented reality in facilitating learning of the Chinese Guqin; and (3) a natural user interface for the learning system applying gesture detection. The results of a user study show our design is helpful in providing a better learning experience and enhancing performance and memorization with markedly less time spent learning. This work shows how a new interface helps promote the use of heritage instruments and culture. [Paper]

  • A Tabletop-Centric Smart Space for Emergency Response

    (IEEE Pervasive Computing ’15) Jie Liu, Yongqiang Qin, Qiang Yang, Chun Yu, and Yuanchun Shi
    This article describes the design and implementation of a smart space for emergency response based on five system design guidelines. A large-scale interactive tabletop with dedicated software serves as the center for collaboration. Information on the tabletop and peripheral devices can be shared via a user interface sharing technique. The authors deployed their smart space in a forest fire simulation and evaluated team performance in comparison with a control group. Analysis of the results reveals that this smart space can significantly improve team performance as well as team cognition. This article is part of a special issue on smart spaces. [Paper]


  • FOCUS: enhancing children’s engagement in reading by using contextual BCI training sessions

    (CHI ’14) Jin Huang, Chun Yu, Yuntao Wang, Yuhang Zhao, Siqi Liu, Chou Mo, Jie Liu, Lie Zhang, Yuanchun Shi
    Reading is an important aspect of a child’s development. Reading outcome is heavily dependent on the level of engagement while reading. In this paper, we present FOCUS, an EEG-augmented reading system which monitors a child’s engagement level in real time, and provides contextual BCI training sessions to improve a child’s reading engagement. A laboratory experiment was conducted to assess the validity of the system. Results showed that FOCUS could significantly improve engagement in terms of both EEG-based measurement and teachers’ subjective measure on the reading outcome. [Paper]

  • TangramTheatre: Presenting Children’s Creation on Multimodal Tabletops

    (CHI EA ’14) Zhun Qu, Chun Yu, Yue Shi, Jin Huang, Li Tian, Yuanchun Shi
    The tangram is a jigsaw-like traditional Chinese art form rich in wittiness and expressiveness. However, children and novice users lack efficient support for creating animations after designing tangram characters. Thus, we present TangramTheatre, a performance-driven animation tool that combines both creation and animation of physical and virtual characters. TangramTheatre allows users to create characters using seven physical tangram pieces, as they would in real tangram games, and then edit animations of these characters. In this paper we present our proof-of-concept prototype. A preliminary study was conducted to direct a future empirical study with children. The results show that all of the participants expressed great interest in TangramTheatre. [Paper]

  • BodyRC: Exploring Interaction Modalities Using Human Body as Lossy Signal Transmission Medium

    (UIC, honorable mention) Yuntao Wang, Chun Yu, Lin Du, Jin Huang, Yuanchun Shi
    With the increasing popularity of wearable computing devices, new sensing techniques that enable always-available interaction are in high demand. In this paper, we propose BodyRC, a novel body-based device using the human body as a lossy signal transmission medium. This device supports on-body interaction and body gesture recognition. In particular, BodyRC recognizes on-body interaction operations and body gestures by analyzing the electrical properties when transmitting a single high-frequency analog signal through the human body. We evaluate the capabilities and performance of BodyRC through two controlled experiments showing robust classification of both on-body interactions and body gestures. In addition, we design a real-time recognition system demonstrating the utility of our technique. [Paper]

  • QOOK: enhancing information revisitation for active reading with a paper book

    (TEI ’14) Yuhang Zhao, Yongqiang Qin, Yang Liu, Siqi Liu, Taoshuai Zhang, Yuanchun Shi
    Revisiting information on previously accessed pages is a common activity during active reading. Both physical and digital books have their own benefits in supporting such activity according to their manipulation natures. In this paper, we introduce QOOK, a paper-book based interactive reading system, which integrates the advanced technology of digital books with the affordances of physical books to facilitate people’s information revisiting process. The design goals of QOOK are derived from the literature survey and our field study on physical and digital books respectively. QOOK allows page flipping just like on a real book and enables people to use electronic functions such as keyword searching, highlighting and bookmarking. A user study is conducted and the study results demonstrate that QOOK brings faster information revisiting and better reading experience to readers. [Paper]

  • Enhancing Collaboration in Competitive Games in Multi-Display Environment

    Jie Liu, Taoshuai Zhang, Yuanchun Shi
    In the Multi-Display Environment (MDE), we introduce a mechanism containing private, public and group workspaces for a computer-mediated tabletop board game through a combination of the tabletop and mobile phones. It can sustain the important sociality between players while ensuring privacy and enhancing visual effect. Based on the popular board game Monopoly, we design Copoly on a multi-touch tabletop and mobile phones where players can form groups for collaboration. We explore the patterns of collaboration and its effect on tabletop game experience. Results show that social bonding plays an important role in the frequency and patterns of collaboration in tabletop games, and players gain a more joyful experience through both competition and collaboration. [Paper]

  • Defining and Analyzing a Gesture Set for Interactive TV Remote on Touchscreen Phones

    Yuntao Wang, Chun Yu, Yuhang Zhang, Jin Huang, Yuanchun Shi
    In this paper, we recruited 20 participants performing user-defined gestures on a touchscreen phone for 22 TV remote commands. In total, 440 gestures were recorded, analyzed and paired with think-aloud data for these 22 referents. After analyzing these gestures according to an extended taxonomy of surface gestures and an agreement measure, we present a user-defined gesture set for interactive TV remotes on touchscreen phones. Beyond insights into mental models and the analysis of the gesture set, our findings indicate that people prefer single-handed thumb input and eyes-free gestures that require no attention switch in the TV viewing scenario. Multi-display is useful in text entry and menu access tasks. Our results will contribute to better gesture design for interaction between TVs and touchable mobile phones. [Paper]

  • AirFlow: designing immersive breathing training games for COPD

    Yongqiang Qin, Chris Vincent, Nadia Bianchi-Berthouze, Yuanchun Shi
    Chronic Obstructive Pulmonary Disease (COPD) refers to a collection of lung diseases that result in breathing difficulties. In some cases, the symptoms of COPD can be reduced by engaging in breathing exercises. Technology can support this, and we are developing AirFlow, a suite of interactive computer games, to guide breathing exercises and promote learning. To establish requirements, we interviewed 20 people with COPD in China to understand their use of breathing exercises and learn how technology might fit their lifestyle. The findings informed our design goals. We outline a prototype system where respiration rate, waveform, and amplitude are captured and used to control a virtual environment. The system will guide users through breathing exercises and provide training instructions using a series of games. The immersive environment aims to support a fun and motivating experience, thereby underpinning user confidence. [Paper]

  • uStitchHub: Stitching Multi-Touch Trajectories on Tiled Very Large Tabletops

    Yongqiang Qin, Yue Shi, Yuanchun Shi
    It is common to tile normal-sized units together to obtain very large tabletops or interactive digital walls. Usually cameras are adopted to capture the touch input on the surface, which demands fusing several multi-touch input feeds into a single feed in real time. Common approaches of stitching camera frames into aggregate images do not scale to larger numbers of cameras. We present uStitchHub to address these challenges, specifically in the tabletop domain. It is a fusion mediator which accepts detected blobs from multiple cameras that do and don’t have overlapping fields of view, remaps blobs to the dimensions of the entire large tabletop, matches blobs with real touches on the surface, and concatenates touch trajectories that span two or more units. We conducted a laboratory evaluation of uStitchHub on its stitching success rate as well as latency. The results show that uStitchHub processes input quickly, providing both accurate stitching of touch trajectories and low latency. [Paper]

  • Video avatar-based remote video collaboration

    Siqi Liu, Chun Yu, Yuanchun Shi
    Most existing remote video communication systems are confined to accurately delivering adequate information in real time, but lose sight of communicators’ interaction demands. Meanwhile, traditional 2D video communications cannot make full use of people’s 3D information, which shows great potential for achieving immersive, natural and efficient interactions with 3D techniques. To enhance the sense of immersion and expand interaction modes in video collaboration, a design goal for ‘immersive’ video communication systems was proposed and a novel, avatar-based remote video collaboration system was designed and implemented. Specifically, using a Creative Senz3D depth camera, the proposed system extracts people’s foreground images through background segmentation as their video avatars, and places the avatars together in a common virtual scenario. Natural and immersive interactions among people, and between people and virtual scenes, were also designed, expanding the modes of interaction and collaboration in video communication. Finally, a user study was conducted and the results indicate that the proposed video avatar-based remote collaboration mode can effectively enhance people’s sense of immersion in telecommunication. [Paper]


  • Implicit bookmarking: Improving support for revisitation in within-document reading tasks

    (IJHCS ’13) Chun Yu, Ravin Balakrishnan, Ken Hinckley, Tomer Moscovich, Yuanchun Shi
    We explore improving support for revisitation in documents by automatically generating bookmarks based on users’ reading history. After showing that dwell time and number of visits are not appropriate for predicting revisitations in documents, we model the high-level reading task as a sequence of reading blocks and recognize long-distance scrolls as separators between them. A long-distance scroll is defined as a continuous scrolling action which causes the document to be navigated beyond a one-page distance. We propose a new technique, called the Head–Tail (HT) algorithm, to generate bookmarks at the head and the tail of reading blocks, whose validity is quantitatively verified by log data analysis. Two studies were conducted to investigate this HT implicit bookmarking technique. The first is a controlled experiment that compared the HT algorithm to the widely used simple recency algorithm for generating implicit bookmarks, in terms of revisit coverage ability and distance between bookmarks … [Paper]
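    The Head–Tail idea can be sketched directly from the definition above: a continuous scroll beyond one page separates reading blocks, and bookmarks are placed at each block's head and tail. The log representation (document position after each scroll action) and the page height below are illustrative assumptions.

```python
def ht_bookmarks(scroll_log, page_height=800):
    """Sketch of the Head-Tail (HT) algorithm: a scroll action that moves
    the document more than one page separates reading blocks; bookmarks
    are generated at the head and tail of each block."""
    bookmarks = []
    block_head = scroll_log[0]
    for prev, cur in zip(scroll_log, scroll_log[1:]):
        if abs(cur - prev) > page_height:       # long-distance scroll
            bookmarks += [block_head, prev]     # head and tail of the block
            block_head = cur                    # next block starts here
    bookmarks += [block_head, scroll_log[-1]]   # close the final block
    return bookmarks

# Document positions (px) after each scroll action: read near the top,
# jump far down, read there, then jump back.
log = [0, 300, 600, 5000, 5200, 1000]
print(ht_bookmarks(log))  # → [0, 600, 5000, 5200, 1000, 1000]
```

    Each head/tail pair marks a region the reader dwelled in, which is what makes these positions good revisitation targets compared with dwell time or visit counts.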

  • Facilitating parallel web browsing through multiple-page view

    (CHI ’13) Wenchang Xu, Chun Yu, Songmin Zhao, Jie Liu and Yuanchun Shi
    Parallel web browsing describes the behavior where users visit web pages in multiple concurrent threads. Qualitative studies have observed this activity being performed with multiple browser windows or tabs. However, these solutions are not satisfying since a large amount of time is wasted on switching among windows and tabs. In this paper, we propose the multiple-page view to facilitate parallel web browsing. Specifically, we provide users with the experience of visiting multiple web pages in one browser window and tab with extensions of prevalent desktop web browsers. Through a user study and a survey, we found that 2-4 pages within the window size were preferred for multiple-page view in spite of the diverse screen sizes and resolutions. Analysis of logs from the user study also showed an improvement of 26.3% in users’ efficiency of performing parallel web browsing tasks, compared to traditional browsing with multiple windows or tabs. [Paper]

  • Understanding performance of eyes-free, absolute position control on touchable mobile phones

    (MobileHCI ’13, honorable mention) Yuntao Wang, Chun Yu, Jie Liu, Yuanchun Shi
    Many eyes-free interaction techniques have been proposed for touchscreens, but little research has studied humans’ eyes-free pointing ability on mobile phones. In this paper, we investigate single-handed thumb performance for eyes-free, absolute position control on mobile touch screens. Both 1D and 2D experiments were conducted. We explored the effects of target size and location on eyes-free touch patterns and accuracy. Our findings show that the variance of touch points per target converges as target size decreases. The centroid of touch points per target tends to be offset to the left of the target center in the horizontal direction, and to shift toward the screen center in the vertical direction. Average accuracy drops from 99.6% for the 2×2 layout to 85.0% for the 4×4 layout, and accuracy per target varies with the target’s location. These findings and design implications provide a foundation for future research on eyes-free, absolute position control using the thumb on mobile devices. [Paper]

  • QOOK: a new physical-virtual coupling experience for active reading

    (UIST ’13 Adjunct) Yuhang Zhao, Yongqiang Qin, Yang Liu, Siqi Liu, Yuanchun Shi
    We present QOOK, an interactive reading system that incorporates the benefits of both physical and digital books to facilitate active reading. QOOK uses a top-mounted projector to create digital content on a blank paper book. By detecting markers attached to each page, QOOK allows users to flip pages just as they would with a real book. Electronic functions such as keyword searching, highlighting, and bookmarking provide users with additional digital assistance. With a Kinect sensor that recognizes touch gestures, QOOK enables people to use these electronic functions directly with their fingers. The combination of the virtual interface’s electronic functions with free-form interaction with the physical book creates a natural reading experience, providing an opportunity for faster navigation between pages and better understanding of the book’s contents. [Paper]

  • Hero: designing learning tools to increase parental involvement in elementary education in China

    Yuhang Zhao, Alexis Hope, Jin Huang, Yoel Sumitro, James Landay, Yuanchun Shi
    In this paper, we present the design of Hero, a suite of learning tools that combine teacher-created extracurricular challenges with in-class motivational tools to help parents become more involved in their child’s education, while also engaging students in their own learning. To inform the design, we conducted field studies and interviews involving 7 primary teachers and 15 different families. We analyzed Chinese parenting styles and problems related to parental involvement, and developed three major themes from the data. We then proposed three design goals and created a high-fidelity prototype after several iterations of user testing. A preliminary evaluation showed that teachers, parents, and students could all benefit from the design. [Paper]

  • Exploring the effect of display size on pointing performance

    Yuntao Wang, Chun Yu, Yongqiang Qin, Dan Li, Yuanchun Shi
    In this paper, we studied how display size affects human pointing performance with a mouse, given the same display field of view. In total, four display sizes (10.6, 27, 46, and 55 inches) and three display fields of view (20, 34, and 45) were tested. Our findings show that, given the same display field of view, mouse movement time significantly increases as display size increases, but there is no significant effect of display size on pointing accuracy. This research may contribute a new dimension to the literature describing human pointing performance on large displays. [Paper]


  • Clustering web pages to facilitate revisitation on mobile devices

    Jie Liu, Chun Yu, Wenchang Xu, Yuanchun Shi
    Due to small screens, inaccurate input, and other limitations of mobile devices, revisitation of Web pages takes more time in mobile browsers than in desktop browsers. In this paper, we propose a novel approach to facilitate revisitation. We designed AutoWeb, a system that clusters opened Web pages into different topics based on their contents. Users can quickly find a desired opened Web page by narrowing the search scope to a group of Web pages that share the same topic. Clustering accuracy was evaluated at 92.4%, and computing resource consumption was shown to be acceptable. A user study was conducted to explore the user experience and how much AutoWeb facilitates revisitation. Results showed that AutoWeb could save significant time during revisitation, and participants rated the system highly. [Paper]

  • How much to share: A Repeated Game Model for Peer-to-Peer Streaming under Service Differentiation Incentive

    Xin Xiao*, Qian Zhang, Yuanchun Shi, Yuan Gao.
    In this paper, we propose a service differentiation incentive for P2P streaming systems based on peers’ instant contributions. A repeated game model is then designed to analyze how much each peer should contribute in each round under this incentive. Simulations show that satisfactory streaming quality is achieved in the Nash Equilibrium state. [Paper]

  • A Scalable Distributed Architecture for Intelligent Vision System

    Guojian Wang*, Linmi Tao, Huijun Di, Xiyong Ye, Yuanchun Shi
    The complexity of intelligent computer vision systems demands novel system architectures that can integrate various computer vision algorithms into a working system with high scalability. Real-time human-centered computing applications are based on multiple cameras in current systems, which require a transparent distributed architecture. This paper presents an application-oriented service share model for the generalization of vision processing. Based on this model, a vision system architecture is presented that can readily integrate computer vision processing and let application modules share services and exchange messages transparently. The architecture provides a standard interface for loading various modules and a mechanism for modules to acquire inputs and publish processing results that can be used as inputs by others. Using this architecture, a system can load specific applications without considering the common low-layer data processing. We have implemented a prototype vision system based on the proposed architecture, and tested its latency performance and 3-D tracking function. The architecture is scalable and open, so it will be useful for supporting the development of intelligent vision systems as well as distributed sensor systems. [Paper]

  • Inertial Body-worn Sensor Data Segmentation by Boosting Threshold-based Detectors

    Yue Shi, Yuanchun Shi, Xia Wang
    Using inertial body-worn sensors, we propose a segmentation approach to detect when a user changes actions. We use Adaboost to combine three threshold-based detectors: force/gravity ratios, peaks of autocorrelation, and local minimums of velocity. Experimenting with the CMU Multi-Modal Activity Database, we find that the first two features are the most important, and our combination approach improves performance with an acceptable level of granularity. [Paper]

  • Watching you moving the mouse, I know who you are

    Chun Yu, Yue Shi, Xinliang Wang, Yuanchun Shi
    Previous research on modeling human pointing behavior focuses on user-independent variables such as target width and distance. In this work-in-progress, we investigate a set of user-dependent variables drawn from cursor trajectory data that may represent an individual user’s unique pattern when controlling mouse movement. Using these features, the 8 users in our experiment were recognized with a promising accuracy of 87.5%. [Paper]

  • UI Portals: Sharing Arbitrary Regions of User Interfaces on Traditional and Multi-user Interactive Devices

    Jie Liu, Yuanchun Shi
    This paper introduces UI Portals, a novel approach that helps users share the user interfaces of their off-the-shelf applications across various platforms on traditional and multi-user interactive devices. Users can choose an application window, or select parts of a window, to share. In addition to traditional single-user mouse-and-keyboard interaction, we support simultaneous interactions on large multi-user interactive surfaces such as tabletops and multi-touch vertical surfaces. We describe the concepts and implementation mechanisms of this approach. Furthermore, we implement the UI Portals Toolsets (UIPT), a prototype that demonstrates sharing arbitrary regions of user interfaces among multiple platforms without any change to application source code. In UIPT, we design a windowing tool dedicated to large multi-user interactive surfaces to fully leverage the benefits of simultaneous interaction. Two typical scenarios demonstrate the utility of UIPT and show how it can help users work with familiar software applications on different displays and platforms. [Paper]

  • AutoWeb: automatic classification of mobile web pages for revisitation

    Jie Liu, Wenchang Xu, Yuanchun Shi
    Revisitation in mobile Web browsers takes more time than in desktop browsers due to the limitations of mobile phones. In this paper, we propose AutoWeb, a novel approach to speed up revisitation in mobile Web browsing. In AutoWeb, opened Web pages are automatically classified into groups based on their contents. Users can revisit an opened Web page more quickly by narrowing the search scope to a group of pages that share the same topic. The classification accuracy was evaluated at 92.4%. Three experiments were conducted to investigate revisitation performance in three specific tasks. Results show that AutoWeb can reduce revisitation time by 29.5%, especially for long Web browsing sessions, and that it improves the overall mobile Web revisitation experience. We also compare automatic classification with other revisitation methods. [Paper]

  • Digging unintentional displacement for one-handed thumb use on touchscreen-based mobile devices

    Wenchang Xu, Jie Liu, Chun Yu, Yuanchun Shi
    When users tap on touchscreen-based mobile devices with their fingers, there is usually an unnoticed on-screen displacement between the initial contact and the final lift-off, which may affect users’ target selection accuracy, gesture performance, etc. In this paper, we characterize this phenomenon as unintentional displacement and model it under both static and dynamic scenarios. We then conducted two user studies to understand unintentional displacement for the widely adopted one-handed thumb use on touchscreen-based mobile devices under each scenario. Our findings shed light on four questions: 1) what factors affect unintentional displacement; 2) what the distance range of the displacement is; 3) how the distance varies over time; and 4) how the unintentional points are distributed around the initial contact point. These results not only explain certain touch inaccuracies, but also provide an important reference for the optimization and future design of UI components, gestures, input techniques, etc. [Paper]

  • Fall Detection on Mobile Phones Using Features from a Five-Phase Model

    Yue Shi, Yuanchun Shi, Xia Wang
    Injuries caused by falls are a great threat to elderly people. With its communication and motion-sensing capabilities, the mobile phone is an ideal platform for detecting fall accidents and helping the injured person receive first aid. However, missed detections and false alarms from monitoring software annoy users in real use. In this paper, we present a novel fall detection technique using features from a five-phase model that describes the changes in the user’s motion state during a fall. Experimental results validate the effectiveness of the algorithm and show that features derived from the model, such as the gravity-cross rate and the non-primary maximum and minimum points of the acceleration data, are useful for improving detection precision. Moreover, we implement the technique as uCare, an Android application that helps elderly people with fall prevention, detection, and first-aid seeking. [Paper]

  • PiMarking: co-located collaborative digital annotating on large tabletops

    Yongqiang Qin, Chenjun Wu, Yuanchun Shi
    There are situations in which co-located people need to perform collaborative marking tasks: for example, human resource officers review resumes together, and teachers grade answer sheets after an examination. In this poster, we introduce PiMarking, a collaborative system designed to support user-authenticated marking tasks and face-to-face discussions on large-scale interactive tabletop surfaces. PiMarking makes user differentiation, document sharing, and synchronized marking easy among group members. PiMarking provides user permission management mechanisms, allowing three modes of document sharing: distributed copy, shared display, and synchronized marking. We conducted a preliminary study using a realistic resume-marking task, which demonstrated the effectiveness of the features provided by PiMarking. [Paper]

  • uEmergency: a collaborative system for emergency management on very large tabletop

    Yongqiang Qin, Jie Liu, Chenjun Wu, Yuanchun Shi
    The vertical displays, indirect input, and distant communication of traditional Emergency Management Information Systems make human-computer interaction unintuitive and thus reduce the efficiency of decision-making. This paper presents uEmergency, a multi-user collaborative system for emergency management on a very large-scale interactive tabletop. It allows people to carry out face-to-face communication around a horizontal global map. The real-time situation can be browsed and analyzed directly using fingers and digital pens. In this paper, we also present the results of a study in which two groups carried out a forest-firefighting task with the system. The results suggest that uEmergency can effectively help people manipulate objects, analyze situations, and collaborate when coping with an emergency. [Paper]

  • FloTree: a multi-touch interactive simulation of evolutionary processes

    Kien Chuan Chua, Yongqiang Qin, et al.
    We present FloTree, a multi-user simulation that illustrates key dynamic processes underlying evolutionary change. Our intention is to create an informal learning environment that links micro-level evolutionary processes to macro-level outcomes of speciation and biodiversity. On a multi-touch table, the simulation represents generation-to-generation change in a population of organisms. By placing hands or arms on the surface, visitors can add environmental barriers, interrupting the genetic flow between the separated populations. This results in sub-populations that accumulate genetic differences independently over time, sometimes leading to the formation of new species. Learners can morph the result of the simulation into a corresponding phylogenetic tree. The free-form hand and body touch gestures invite creative input from users, encourage social interaction, and provide an opportunity for deep engagement. [Paper]


  • Enabling Efficient Browsing and Manipulation of Web Tables on Smartphone

    Wenchang Xu and Yuanchun Shi
    Tables are important carriers of the vast information on the Internet and are widely used in web pages. However, most web tables are designed for desktop PCs and focus only on visually and logically presenting large amounts of data, without considering their visual effect on small-screen devices. Users therefore suffer inconvenience when browsing web tables on smartphones. In this paper, we propose enabling efficient browsing and manipulation of web tables on smartphones, in order to solve the problems of both information retrieval and content replication from web tables. We implemented a mobile web browser on the Android 2.1 platform that deals with web tables in three steps: genuine table detection, table understanding, and user interface design. We conducted a user study to evaluate how users used the tool. Experimental results show that the tool increases users’ efficiency in browsing web tables and that the novel browsing and manipulation modes are well accepted by users. [Paper]

  • uPlatform: A Customizable Multi-user Windowing System for Interactive Tabletop

    Chenjun Wu, Yue Suo, Chun Yu, Yuanchun Shi
    Interactive tabletops have shown great potential for facilitating face-to-face collaboration in recent years. Yet, in spite of much promising research, one important area that remains largely unexplored is the windowing system on the tabletop, which can enable users to work with multiple independent or collaborative applications simultaneously. As a consequence, investigation of many scenarios, such as conferencing and planning, has been rather limited. To address this limitation, we present uPlatform, a multi-user windowing system specifically created for interactive tabletops. It is built on three components: 1) an input manager for processing concurrent multi-modal inputs; 2) a window manager for controlling multi-user policies; and 3) a hierarchical structure for organizing multi-task windows. All three components can be customized through a simple, flexible API. Based on uPlatform, three systems, uMeeting, uHome, and uDining, have been implemented, demonstrating its efficiency in building multi-user windowing systems on interactive tabletops. [Paper]

  • uMeeting, an Efficient Co-located Meeting System on the Large-Scale Tabletop

    Jie Liu, Yuanchun Shi
    In this paper, we present uMeeting, a co-located meeting system on a large-scale tabletop. People are used to sitting around a table to hold a meeting; it is natural and intuitive, and the table plays a central role in supporting team activities. Horizontal surfaces, rather than vertical ones, have inherent features that support co-located meetings. However, existing tabletops are not large enough to support meetings of more than four people, and the display area for each person is limited. We therefore developed uTable, a large-scale multi-touch tabletop, and, based on our earlier uTableSDK, built the uMeeting system to support co-located meetings on uTable. [Paper]

  • Surprise Grabber: a co-located tangible social game using phone hand gesture

    Mingming Fan, Li Tian, Xin Li, Yuanchun Shi, Yu Zhong, Hao Wang
    Social network games (SNGs) have recently become among the most popular games. In contrast to asynchronous, online SNGs, we present Surprise Grabber to explore how a tangible gesture interface can benefit synchronous, co-located social games. In Surprise Grabber, users control a virtual grabber’s movement in a 3D game to catch gifts using their camera phones. An efficient algorithm running on the phone detects hand motion, delivers results to a server PC, and provides feedback in real time. Unlike online SNGs, all players stand together in front of a public display. The results of the pilot user studies showed that: 1) the gesture interface was easy to pick up and made the game more immersive; 2) occasional inaccuracy in hand motion detection made the game more competitive instead of frustrating players; 3) players’ performance was clearly influenced by the social atmosphere; and 4) in most cases, players’ performance improved or worsened at the same time. [Paper]

  • A rotation based method for detecting on-body positions of mobile devices

    Yue Shi, Jie Liu, Yuanchun Shi
    We present a novel rotation-based method for detecting where a mobile device is worn on a user’s body, which fuses data from the accelerometer and gyroscope. Detecting the position of a mobile device could improve the performance of on-body-sensor-based human activity recognition and the adaptability of many mobile applications. In our method, the radius and angular velocity for a position are calculated from the data read by the sensors integrated in the mobile device. We evaluated our method with an experiment detecting four commonly used positions: breast pocket, trouser pocket, hip pocket, and hand. [Paper]

  • Smart home on smart phone

    Yu Zhong, Yue Suo, Wenchang Xu, Chun Yu, Xinwei Guo, Yuhang Zhao, Yuanchun Shi
    The mobile phone, with its high accessibility and usability, is regarded as the ideal interface for users to monitor and control the approaching smart home environment. Moreover, networking technologies and protocols have advanced enough to support a universal monitoring and controlling interface on smart phones. This paper presents HouseGenie, an interactive, direct-manipulation mobile application that supports a range of basic home monitoring and controlling functionalities as a replacement for the individual remotes of smart home appliances. HouseGenie also addresses several common requirements behind this vision, such as scenarios, short-delay alarms, area restriction, and so on. We demonstrate that HouseGenie not only provides intuitive presentations and interactions for smart home management, but also improves the user experience compared to existing solutions. [Paper]

  • PicoPet: “Real World” digital pet on a handheld projector

    Yuhang Zhao, Chao Xue, Xiang Cao, Yuanchun Shi
    We created PicoPet, a digital pet game based on mobile handheld projectors. The player can project the pet into physical environments, and the pet behaves and evolves differently according to its physical surroundings. PicoPet creates a new form of gaming experience that is directly blended into the physical world, and could thus become incorporated into the player’s daily life while reflecting their lifestyle. Multiple pets projected by multiple players can also interact with each other, potentially triggering social interactions between the players. In this paper, we present the design and implementation of PicoPet, as well as directions for future exploration. [Paper]

  • A Scalable Passive RFID-Based Multi-User Indoor Location System

    Shang Ma, Yuanchun Shi
    RFID-based indoor location systems have proved to be both accurate and cost-effective. However, current implementations mainly use active tags, which suffer from issues of battery replacement, installation, maintenance, and per-unit cost. Moreover, as the number of users grows, guaranteeing stable, high-speed transmission of location information becomes challenging. To address these challenges, we 1) propose a passive RFID-based system for localizing multiple users, and 2) supplement it by detecting human motion from various types of embedded sensors. In addition, we implement a reliable transmission protocol, based on dynamic PRI, to guarantee location data transmission between RF nodes. According to the performance analysis, the tracking accuracy of our system is well assured. Its quick responsiveness and good scalability, as well as its low energy and infrastructure cost, make the system a cost-effective and easy-to-deploy solution for stable indoor positioning. [Paper]

  • XINS: the anatomy of an indoor positioning and navigation architecture

    Yuan Gao, Qingxuan Yang, Guanfeng Li, Edward Y. Chang, Dong Wang, Chengu Wang, Hang Qu, Pei Dong, Faen Zhang
    Location-Based Service (LBS) is becoming a ubiquitous technology for mobile devices. In this work, we propose a signal-fusion architecture called XINS to perform effective indoor positioning and navigation. XINS uses signals from inertial navigation units as well as WiFi and floor-map constraints to detect turns, estimate travel distances, and predict locations. XINS employs non-intrusive calibration procedures to significantly reduce errors, and fuses signals synergistically to improve computational efficiency, enhance location-prediction accuracy, and conserve power. [Paper]

  • Integrating Smart Classroom and Language Services

    Yue Suo, Yuanchun Shi, Toru Ishida
    The real-time interactive virtual classroom offering a tele-education experience is an important approach in distance learning. However, most current systems fail to meet the new challenges raised by the development of service-oriented architecture. First, learning systems should facilitate easier integration of increasingly dedicated services, such as language services on the Internet. Second, learning systems must open their internal interfaces as web services to other systems, enabling deeper integration of these systems and easier deployment. Third, the systems are expected to provide flexible interfaces to support mobile device interaction. To address these issues, we built a prototype system, called Open Smart Classroom, by upgrading the original Smart Classroom into a service-oriented open system. With the help of Language Grid services, two Open Smart Classrooms deployed at Tsinghua University and Kyoto University were connected, and experimental co-classes have been successfully held. The results of the user study show that integrating Smart Classroom and language services is an interesting and promising approach to building future multicultural distance learning systems. [Paper]

  • uTable: a seamlessly tiled, very large interactive tabletop system

    Yongqiang Qin, Chun Yu, Jie Liu, Yuntao Wang, Yue Shi, Zhouyue Su, Yuanchun Shi
    We present uTable, a very large horizontal interactive surface that accommodates up to ten people sitting around it and interacting in parallel. We identify the key aspects of building such large interactive tabletops and discuss the pros and cons of potential techniques. After several rounds of trials, we chose tiled rear projection to build the very large surface and a DI solution to detect touch inputs. We also present a set of techniques to narrow the interior bezels, make the color and brightness of the surface uniform, and handle multiple input streams. uTable achieves good overall performance in terms of display quality and input capability. [Paper]

  • Air finger: enabling multi-scale navigation by finger height above the surface

    Chun Yu, Xu Tan, Yue Shi, Yuanchun Shi
    We present Air Finger, a novel technique that controls the CD ratio by finger height above the touch surface for multi-scale navigation tasks. Extending previous research on virtual touch, Air Finger divides the space above the surface into two layers and associates the high, medium, and low CD ratios with the touch surface, the lower air, and the higher air, respectively. Users can fluidly switch between the three navigation scales by lifting and pressing the finger. Air Finger enables multi-scale navigation control with one hand. [Paper]

  • RegionalSliding: enhancing target selection on touchscreen-based mobile devices

    (CHI EA ’11) Wenchang Xu, Chun Yu, Jie Liu, Yuanchun Shi
    Target selection on mobile devices with touchscreens often troubles users due to occlusion of the target by the user’s finger and ambiguity about which part of the finger generates the result point. In this paper, we propose a novel technique, named RegionalSliding, to enhance target selection on touchscreen-based mobile devices. When users press down on the screen, RegionalSliding selectively renders the initially “selected” target, as well as its “surrounding” targets, in a non-occluded area, and enables users to complete the selection with sliding gestures guided by the visual feedback from the rendered area. A preliminary user study shows that RegionalSliding increases selection accuracy and provides a good user experience. [Paper]


  • The satellite cursor: achieving MAGIC pointing without gaze tracking using multiple cursors

    Chun Yu, Yuanchun Shi, Ravin Balakrishnan, Xiangliang Meng, Yue Suo, Mingming Fan, Yongqiang Qin.
    We present the satellite cursor – a novel technique that uses multiple cursors to improve pointing performance by reducing input movement. The satellite cursor associates every target with a separate cursor in its vicinity for pointing, which realizes the MAGIC (manual and gaze input cascade) pointing method without gaze tracking. We discuss the problem of visual clutter caused by multiple cursors and propose several designs to mitigate it. Two controlled experiments were conducted to evaluate satellite cursor performance in a simple reciprocal pointing task and a complex task with multiple targets of varying layout densities. Results show the satellite cursor can save significant mouse movement and consequently pointing time, especially for sparse target layouts, and that satellite cursor performance can be accurately modeled by Fitts’ Law. [Paper]

  • Toward Systematical Data Scheduling for Layered Streaming in Peer-to-Peer Networks: Can We Go Farther?

    Xin Xiao, Yuanchun Shi, Qian Zhang, Jianhua Shen, Yuan Gao.
    Layered streaming in P2P networks has recently become a hot topic. However, the “layered” feature makes data scheduling quite different from that for non-layered streaming, and it has not yet been systematically studied. In this paper, we first present, based on the unique characteristics of layered coding, four objectives that scheduling should address: throughput, layer delivery ratio, useless packets ratio, and subscription jitter prevention. We then design a three-stage scheduling approach, LayerP2P, to request data, in which a min-cost flow model, a probability decision mechanism, and a multi-window remedy mechanism are used in the Free Stage, Decision Stage, and Remedy Stage, respectively, to collaboratively achieve these objectives. Building on the basic version of LayerP2P and the corresponding experimental results from our previous work, this paper puts more effort into the mechanism details and an analysis of its unique features; in addition, to further guarantee performance under sharp bandwidth variation, we propose an enhanced approach that improves the Decision Stage strategy. Extensive experiments by simulation and real network implementation indicate that it outperforms other schemes. LayerP2P has also been deployed in the PDEPS Project in China, which is expected to be the first practical layered streaming system for education in P2P networks. [Paper]

  • Structured laser pointer: enabling wrist-rolling movements as a new interactive dimension

    Yongqiang Qin, Yuanchun Shi, Hao Jiang, Chun Yu.
    In this paper, we revisit multi-point laser pointer interaction from a wrist-rolling perspective. First, we propose SLP (Structured Laser Pointer), which detects a laser pointer’s rotation along its emitting axis. SLP adds wrist-rolling gestures as a new interactive dimension to the conventional laser pointer interaction approach. We asked a group of users to perform a set of tasks using SLP and derived from the test results a set of criteria to distinguish incidental from intentional SLP rolling; the experimental results also confirmed the high accuracy, acceptable speed, and throughput of such rolling interaction. [Paper]

  • HyMTO: The Hybrid Mesh/Tree Overlay for Large Scale Multimedia Interactive Applications over the Internet

    Yuan Gao, Yuanchun Shi, Xin Xiao, Jianhua Shen.
    In large-scale multimedia interactive applications over the Internet, data can be categorized into streaming data (e.g., audio and video streams) and sporadic data (e.g., annotations on a whiteboard). The mesh-pull overlay fits streaming data because it is resilient in heterogeneous networks, while the tree-push overlay is more suitable for sporadic data because its easy-to-manage structure can handle packet loss and disorder. To transmit both types of data, this paper proposes a Hybrid Mesh/Tree Overlay (HyMTO), in which sporadic data and streaming data are transmitted through different overlays coordinated by a novel synchronization scheme. HyMTO provides resilient, reliable, and ordered transmission for both types of data. Simulations are presented to evaluate the effectiveness and efficiency of HyMTO, and a practical e-conference system built on HyMTO has been implemented and deployed to demonstrate its practicality. [Paper]

  • HouseGenie: Universal Monitor and Controller of Networked Devices on Touchscreen Phone in Smart Home

    Yue Suo, Chenjun Wu, Yongqiang Qin, Chun Yu, Yu Zhong, Yuanchun Shi.
    A touch screen phone that roams with its user offers high accessibility and direct-manipulation interaction, making it one of the most convenient interfaces for daily life. In this paper, we present HouseGenie, which enables universal monitoring and control of networked devices in a smart home from a touch screen phone. By wirelessly communicating with an OSGi-based portal that manages all devices through various protocols (e.g., the industry standards UPnP and IGRS), HouseGenie facilitates universal home monitoring and control: 1) monitoring current device status in a panoramic view; 2) directly manipulating single or multiple devices using pie menus, a list mode, and drag-and-drop gestures; 3) easily controlling devices in several multimodal ways. [Paper]

  • SHSim: An OSGI-based smart home simulator

    Lei Zhang, Yue Suo, Yu Chen, Yuanchun Shi.
    With the development of pervasive computing, the smart home is increasingly popular. A smart home usually requires the integration of many heterogeneous devices and service applications in deployment, so system testing is difficult and expensive. To cope with this problem, this paper describes SHSim, an OSGi-based smart home simulator suitable for smart home testing. SHSim is built on a dynamic mechanism that allows users to configure the system and test cases easily, and it provides device-transparent simulation, meaning that virtual devices can be replaced by real devices with little or no modification. Practical applications show that SHSim can effectively improve development efficiency and reduce testing cost. [Paper]
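    The "device-transparent simulation" idea can be illustrated with a small sketch: application code depends only on an abstract device service, so a simulated device can later be swapped for a real driver without changing the application. SHSim itself is an OSGi (Java) system; the Python below and all names in it are purely illustrative.

    ```python
    from abc import ABC, abstractmethod

    # Hypothetical sketch of device-transparent simulation: the application
    # sees only the abstract service, never the concrete (virtual or real) device.

    class LampService(ABC):
        @abstractmethod
        def switch(self, on: bool) -> None: ...
        @abstractmethod
        def is_on(self) -> bool: ...

    class VirtualLamp(LampService):
        """Simulated lamp used during testing; it only tracks its own state."""
        def __init__(self) -> None:
            self._on = False
        def switch(self, on: bool) -> None:
            self._on = on
        def is_on(self) -> bool:
            return self._on

    def evening_scene(lamp: LampService) -> None:
        """A smart-home application routine that works with any LampService."""
        lamp.switch(True)
    ```

    Replacing `VirtualLamp` with a class that drives real hardware requires no change to `evening_scene`, which is the transplantation property the abstract describes.
    
    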

  • pPen: enabling authenticated pen and touch interaction on tabletop surfaces

    Yongqiang Qin, Chun Yu, Hao Jiang, Chenjun Wu, Yuanchun Shi.
    This paper introduces pPen, a pressure-sensitive digital pen that enables precise pressure and touch input on vision-based interactive tabletops. With pPen input and feature-matching technology, we implemented a novel method supporting multi-user authenticated interaction in bimanual pen-and-touch scenarios: a user logs in simply by stroking their signature with pPen on the table, which simultaneously creates a binding between the user and the pPen, so that each subsequent pPen command is attributed to that user. We also conducted laboratory user studies, which demonstrated the method’s security and high resistance to shoulder surfing: in the evaluation, no attacker managed to log into another user’s workspace. [Paper]
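    A toy sketch of the login-and-binding flow may help: a signature is reduced to a feature vector, matched against enrolled templates, and on success the pen is bound to the matched user. The real system matches richer stroke features (including pressure profiles); the two-number feature vector, thresholding rule, and all names below are hypothetical.

    ```python
    import math

    # Hypothetical sketch of signature login with pen-user binding.
    # A stroke is a list of (x, y, pressure) samples.

    def features(strokes):
        """Toy feature vector: total path length and mean pen pressure."""
        pts = [p for s in strokes for p in s]
        length = sum(math.dist(a[:2], b[:2]) for s in strokes for a, b in zip(s, s[1:]))
        mean_pressure = sum(p[2] for p in pts) / len(pts)
        return (length, mean_pressure)

    def try_login(strokes, pen_id, templates, bindings, threshold=0.2):
        """Match the signature against enrolled templates; on success, bind
        the pen to the matched user so later pen commands are user-attributed."""
        f = features(strokes)
        for user, tf in templates.items():
            if math.dist(f, tf) <= threshold * max(math.dist(tf, (0, 0)), 1e-9):
                bindings[pen_id] = user
                return user
        return None  # signature matched no enrolled user
    ```

    After a successful `try_login`, every event carrying that `pen_id` can be routed to the bound user's workspace, which is the per-command user differentiation the abstract describes.
    
    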

  • A tabletop multi-touch Dali’s painting appreciation system

    Li Tian, Xiangliang Meng, Yuanchun Shi.
    Painting artworks, especially those with surreal or super-rational elements, are difficult for ordinary people to appreciate deeply in a static exhibition. We believe that bringing interactivity to an existing painting can emphasize its theme, encourage active and collaborative learning, and complement its aura. To this end, in this paper we present DALI (Dali’s Artwork for Learning Interactively), a multi-touch system designed to guide viewers through the process of appreciating painting artworks. The current system runs on a multi-touch tabletop: 30 paintings by the Spanish artist Salvador Dali were digitized, deconstructed, and interactively displayed on the table. Visual effects on the paintings can be triggered by hand and finger gestures, an effective approach to visual arts education. [Paper]

  • iWebImage: Enabling real-time interactive access to web images

    Wenchang Xu, Yuanchun Shi and Xin Yang.
    Images are widely used in web pages; however, most web images can only be viewed passively. It remains inconvenient for users to collect and save web images for further editing, and locating a particular image in pages containing many images, such as image search results, is troublesome and time-consuming, especially on mobile devices with small screens. In this paper, we propose enabling real-time interactive access to web images and design three modes of browsing web images: normal mode, starred mode, and advanced mode. We implement a plug-in for Microsoft Internet Explorer, called iWebImage, which incorporates efficient computer graphics algorithms and provides a customized user interface supporting real-time interactive access to web images. Experimental results illustrate the usage scenarios of the three browsing modes and show that iWebImage is well accepted by users. [Paper]

  • Enhancing browsing experience of table and image elements in web pages

    Wenchang Xu, Xin Yang and Yuanchun Shi.
    With the growing popularity and diversification of both the Internet and its access devices, users’ browsing experience of web pages is in great need of improvement. The traditional browsing mode for web elements such as tables and images is passive, which limits users’ browsing efficiency. In this paper, we propose enhancing the browsing experience of table and image elements in web pages by enabling real-time interactive access to web tables and images. We design new browsing modes that help users improve their browsing efficiency: an operation mode and a record mode for web tables, and normal, starred, and advanced modes for web images. We design and implement a plug-in for Microsoft Internet Explorer, called iWebWidget, which provides a customized user interface supporting real-time interactive access to web tables and images. We also carry out a user study to verify the usefulness of iWebWidget; experimental results show that users are satisfied with and enjoy the new browsing modes for both web tables and images. [Paper]

  • A Policy-Driven Service Composition Method for Adaptation in Pervasive Computing Environment

    Baopeng Zhang, Yuanchun Shi, Xin Xiao.
    Service composition allows a distributed application, such as a multimedia application, to be composed from atomic service units and to adapt dynamically to users’ requirements and environmental conditions in a pervasive computing system; it enlarges the adaptation action space for pervasive computing applications. To meet the multidimensional QoS (Quality of Service) requirements of pervasive computing systems, we propose a comprehensive service composition method to enhance application adaptability. First, based on a hierarchical policy model and a policy specification language strengthened by event calculus, service discovery policy actions that integrate the situations of the user, application, environment, and resources can be triggered. Second, the proposed physical space model supports location-aware service discovery and explicit range queries to improve query efficiency. Finally, an adaptation policy evaluation model is used to maximize an evaluation criterion, the quality of satisfaction of users and the environment, by optimizing the selection of optional services and the composition path. Through experiments and a discussion of the algorithm, the paper further illustrates the great potential of this solution for service composition. [Paper]
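    The final evaluation step, selecting among optional services to maximize satisfaction, can be sketched as a small constrained search. This toy version scores each candidate combination and keeps the best one under a resource budget; the paper's actual evaluation model and criteria are richer, and all names and numbers here are hypothetical.

    ```python
    import itertools

    # Hypothetical sketch of adaptation policy evaluation: among the optional
    # candidate services for each role in the composition, pick the combination
    # that maximizes a toy "quality of satisfaction" score within a cost budget.

    def best_composition(candidates, budget):
        """candidates: one list per role, each entry (name, satisfaction, cost)."""
        best, best_score = None, -1.0
        for combo in itertools.product(*candidates):
            cost = sum(c[2] for c in combo)
            score = sum(c[1] for c in combo)
            if cost <= budget and score > best_score:
                best, best_score = [c[0] for c in combo], score
        return best, best_score
    ```

    Exhaustive enumeration is only viable for small compositions; a real system would prune the search or use heuristics over the composition path.
    
    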