• Tap-to-Pair: Associating Wireless Devices with Synchronous Tapping

    (IMWUT ’18) Tengxiang Zhang, Xin Yi, Ruolin Wang, Yuntao Wang, Chun Yu, Yiqin Lu, Yuanchun Shi
    Tap-to-Pair is a spontaneous device association technique that initiates pairing from advertising devices without hardware or firmware modifications. Tapping an area near the advertising device’s antenna can change its signal strength. Users can then associate two devices by synchronizing taps on the advertising device with the blinking pattern displayed by the scanning device. By leveraging the wireless transceiver for sensing, Tap-to-Pair does not require additional resources from advertising devices and needs only a binary display (e.g. LED) on scanning devices.
    [Video] [Paper]
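
    The synchronization check behind Tap-to-Pair can be sketched as follows: binarize the advertising device’s RSSI trace into tap/no-tap time slots, then score its agreement with the blink pattern shown by the scanning device. This is an illustrative Python sketch, not the paper’s implementation; the function names, the fixed drop threshold, and the assumption of pre-aligned time slots are all assumptions.

```python
def detect_taps(rssi, baseline, drop_db=5.0):
    """Binarize an RSSI trace: 1 where signal strength drops
    noticeably below baseline (a tap near the antenna), else 0."""
    return [1 if baseline - s >= drop_db else 0 for s in rssi]

def match_score(taps, blink_pattern):
    """Fraction of time slots where the tap sequence agrees with
    the blink pattern displayed by the scanning device."""
    n = min(len(taps), len(blink_pattern))
    agree = sum(t == b for t, b in zip(taps, blink_pattern))
    return agree / n

blinks = [1, 0, 1, 1, 0, 1, 0, 0]
rssi   = [-52, -45, -53, -54, -44, -51, -46, -45]  # dBm per time slot
taps = detect_taps(rssi, baseline=-45.0)
print(match_score(taps, blinks))  # → 1.0 when taps follow the blinks
```

    A real system would additionally handle slot alignment and require the score to exceed a confidence threshold before pairing.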

  • HeadGesture: Hands-Free Input Approach Leveraging Head Movements for HMD Devices

    (IMWUT ’18) Yukang Yan, Chun Yu, Xin Yi, Yuanchun Shi
    We propose HeadGesture, a hands-free input approach to interact with HMD devices. Using HeadGesture, users do not need to raise their arms to perform gestures or operate remote controllers in the air. Instead, they perform simple gestures with head movement to interact. In this way, users’ hands are free to perform other tasks and it reduces the hand occlusion of the field of view and alleviates arm fatigue. Evaluation results demonstrate that the performance of HeadGesture is comparable to mid-air hand gestures and users feel significantly less fatigue.

  • Lip-Interact: Improving Mobile Device Interaction with Silent Speech Commands

    (UIST ’18) Ke Sun, Chun Yu, Weinan Shi, Lan Liu, Yuanchun Shi
    We present Lip-Interact, an interaction technique that allows users to issue commands on their smartphone through silent speech. Lip-Interact repurposes the front camera to capture the user’s mouth movements and recognize the issued commands with an end-to-end deep learning model. Our system supports 44 commands for accessing both system-level functionalities (launching apps, changing system settings, and handling pop-up windows) and application-level functionalities (integrated operations for two apps). We verify the feasibility of Lip-Interact with three user experiments: evaluating the recognition accuracy, comparing with touch on input efficiency, and comparing with voiced commands with regards to personal privacy and social norms. We demonstrate that Lip-Interact can help users access functionality efficiently in one step, enable one-handed input when the other hand is occupied, and assist touch to make interactions more fluent.
    [Video] [Paper]

  • TOAST: Ten-Finger Eyes-Free Typing on Touchable Surfaces

    (UbiComp ’18) Weinan Shi, Chun Yu, Xin Yi, Zhen Li, Yuanchun Shi
    Touch typing on flat surfaces (e.g. interactive tabletops) is challenging due to the lack of tactile feedback and hand drifting. In this paper, we present TOAST, an eyes-free keyboard technique for enabling efficient touch typing on touch-sensitive surfaces. We first formalized the problem of keyboard parameter (e.g. location and size) estimation based on users’ typing data.
    [Video] [Paper]

  • Eyes-Free Target Acquisition in Interaction Space around the Body for Virtual Reality

    (CHI ’18) Yukang Yan, Chun Yu, Xiaojuan Ma, Shuai Huang, Hasan Iqbal, Yuanchun Shi
    Eyes-free target acquisition is a basic and important human ability to interact with the surrounding physical world, relying on the sense of space and proprioception. In this research, we leverage this ability to improve interaction in virtual reality (VR), by allowing users to acquire a virtual object without looking at it. We expect this eyes-free approach can effectively reduce head movements and focus changes, so as to speed up the interaction and alleviate fatigue and VR sickness.

  • VirtualGrasp: Leveraging Experience of Interacting with Physical Objects to Facilitate Digital Object Retrieval

    (CHI ’18) Yukang Yan, Chun Yu, Xiaojuan Ma, Xin Yi, Ke Sun, Yuanchun Shi
    We propose VirtualGrasp, a novel gestural approach to retrieving virtual objects in virtual reality. Using VirtualGrasp, a user retrieves an object by performing a barehanded gesture as if grasping its physical counterpart. The object-gesture mapping under this metaphor is highly intuitive, which enables users to easily discover and remember the gestures to retrieve objects.

  • ForceBoard: Subtle Text Entry Leveraging Pressure

    (CHI ’18) Mingyuan Zhong, Chun Yu, Qian Wang, Xuhai Xu, Yuanchun Shi
    We present ForceBoard, a pressure-based input technique that enables text entry by subtle finger motion. To enter text, users apply pressure to control a multi-letter-wide sliding cursor on a one-dimensional keyboard with alphabetical ordering, and confirm the selection with a quick release. We examined the error model of pressure control for successive and error-tolerant input, which was incorporated into a Bayesian algorithm to infer user input.
    [Video] [Paper]


  • TouchPower: Interaction-based Power Transfer for Power-as-needed Devices

    (UbiComp ’17, discussion paper) Tengxiang Zhang, Xin Yi, Chun Yu, Yuntao Wang, Nicholas Becker, Yuanchun Shi
    The trend toward ubiquitous deployment of electronic devices demands novel low-maintenance power schemes to decrease the burden of maintaining such a large number of devices. In this paper, we propose Interaction-based Power Transfer (IPT): a novel power scheme for power-as-needed devices (i.e., devices that only require power during interaction).
    [Video] [Paper]

  • BlindType: Eyes-Free Text Entry on Handheld Touchpad by Leveraging Thumb’s Muscle Memory

    (UbiComp ’17) Yiqin Lu, Chun Yu, Xin Yi, Yuanchun Shi
    Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen.

  • Is it too small?: Investigating the performances and preferences of users when typing on tiny QWERTY keyboards

    (IJHCS ’17) Xin Yi, Chun Yu, Weinan Shi, Yuanchun Shi
    Typing on tiny QWERTY keyboards on smartwatches is considered challenging or even impractical due to the limited screen space. In this paper, we describe three user studies undertaken to investigate users’ typing abilities and preferences on tiny QWERTY keyboards.

  • COMPASS: Rotational Keyboard on Non-Touch Smartwatches

    (CHI ’17, honorable mention) Xin Yi, Chun Yu, Weijie Xu, Xiaojun Bi, Yuanchun Shi
    COMPASS is a non-touch text entry technique for smartwatches. It positions multiple cursors on the keyboard and dynamically optimizes their positions to minimize rotational distance. Users reached 12.5 WPM after 90 minutes of practice.
    [Video] [Paper]

  • Float: One-Handed and Touch-Free Target Selection on Smartwatches

    (CHI ’17) Ke Sun, Yuntao Wang, Chun Yu, Yukang Yan, Hongyi Wen, Yuanchun Shi
    We present Float, an interaction technique that enables one-handed and touch-free input on smartwatches based on a combination of wrist tilt and PPG finger gestures.
    [Video] [Paper]

  • Tap, Dwell or Gesture?: Exploring Head-Based Text Entry Techniques for HMDs

    (CHI ’17) Chun Yu, Yizheng Gu, Zhican Yang, Xin Yi, Hengliang Luo, Yuanchun Shi
    We investigated three head-based text entry techniques for HMDs: DwellType, TapType and GestureType. We found that gesture typing on an HMD using head movement achieved 25 words per minute.
    [Video] [Paper]

  • ViVo: Video-Augmented Dictionary for Vocabulary Learning

    (CHI ’17) Yeshuang Zhu, Yuntao Wang, Chun Yu, Shaoyun Shi, Yankai Zhang, Shuang He, Peijun Zhao, Xiaojuan Ma, Yuanchun Shi
    We present ViVo, a novel video-augmented dictionary that provides an inexpensive, convenient, and scalable way to exploit huge online video resources for vocabulary learning. ViVo automatically generates short video clips for learning from existing movies.
    [Video] [Paper]

  • Word Clarity as a Metric in Sampling Keyboard Test Sets

    (CHI ’17) Xin Yi, Chun Yu, Weinan Shi, Xiaojun Bi, Yuanchun Shi
    We formally define word clarity, and show that it yields a 26.4% difference in error rate and a 25% difference in input speed. We propose a Pareto optimization method for sampling test sets of different sizes.

  • CEPT: Collaborative Editing Tool for Non-Native Authors

    (CSCW ’17) Yeshuang Zhu, Shichao Yue, Chun Yu, Yuanchun Shi
    Due to limited language proficiency, non-native speakers (NNS) face many difficulties while writing. In this paper, we propose a collaborative editing system that aims to facilitate the sharing of language knowledge among non-native co-authors, with the ultimate goal of improving writing quality. We describe CEPT, which allows individual co-authors to generate their own revisions as well as incorporate edits from others for mutual inspiration.


  • One-Dimensional Handwriting: Inputting Letters and Words on Smart Glasses

    (CHI ’16, honorable mention) Chun Yu, Ke Sun, Mingyuan Zhong, Xincheng Li, Peijun Zhao, Yuanchun Shi
    We present 1D Handwriting, a unistroke gesture technique enabling text entry on a one-dimensional interface. The challenge is to map two-dimensional handwriting onto a reduced one-dimensional space while achieving a balance between memorability and performance efficiency. After an iterative design, we derive a set of ambiguous two-length unistroke gestures, each mapping to 1-4 letters. To input words, we design a Bayesian algorithm that takes into account the probability of gestures and the language model. To input letters, we design a pause gesture allowing users to switch into letter selection mode seamlessly. User studies show that 1D Handwriting significantly outperforms a selection-based technique (a variation of 1Line Keyboard) for both letter input (4.67 WPM vs. 4.20 WPM) and word input (9.72 WPM vs. 8.10 WPM). With extensive training, the text entry rate can reach 19.6 WPM. Users’ subjective feedback indicates that 1D Handwriting is easy to learn and efficient to use. Moreover, it has several potential applications for other one-dimensional constrained interfaces. [Paper]
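
    The Bayesian decoding idea — ranking the words consistent with an ambiguous gesture sequence by a language-model prior — can be sketched in Python. The letter groups and word frequencies below are made up for illustration; the paper’s actual gesture set and language model differ.

```python
# Each ambiguous gesture maps to a small letter group (groups are
# illustrative; the paper derives its own two-length gesture set).
GESTURE_GROUPS = {0: "abcd", 1: "efgh", 2: "ijkl", 3: "mnop",
                  4: "qrst", 5: "uvwx", 6: "yz"}

# Toy unigram language model (word priors; frequencies are made up).
LM = {"the": 0.05, "she": 0.01, "tie": 0.002, "rid": 0.001}

def decode(gesture_ids, lexicon=LM):
    """Rank lexicon words by prior P(word) among those consistent
    with the gesture sequence: the likelihood is 1 if every letter
    falls in the corresponding gesture's letter group, else 0."""
    candidates = [(w, p) for w, p in lexicon.items()
                  if len(w) == len(gesture_ids)
                  and all(ch in GESTURE_GROUPS[g]
                          for ch, g in zip(w, gesture_ids))]
    return sorted(candidates, key=lambda wp: -wp[1])

# Gestures [4, 1, 1] are ambiguous between "the" and "she";
# the language-model prior ranks "the" first.
print(decode([4, 1, 1]))  # → [('the', 0.05), ('she', 0.01)]
```

    The paper’s algorithm additionally weights candidates by a gesture-likelihood term rather than the hard 0/1 consistency used here.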

  • Investigating Effects of Post-Selection Feedback for Acquiring Ultra-Small Targets on Touchscreen

    (CHI ’16) Chun Yu, Hongyi Wen, Wei Xiong, Xiaojun Bi, Yuanchun Shi
    In this paper, we investigate the effects of post-selection feedback for acquiring ultra-small (2-4mm) targets on touchscreens. Post-selection feedback shows the contact point on the touchscreen after the user lifts their finger, to increase awareness of the touch location. Three experiments were conducted progressively, using a single crosshair target, two reciprocally acquired targets, and 2D random targets. Results show that on average post-selection feedback can reduce touch error rates by 78.4%, at the cost of an increase in target acquisition time of no more than 10%. In addition, we investigate participants’ adjustment behavior based on the correlation between successive trials. We conclude that the benefit of post-selection feedback is the outcome of both an improved understanding of the finger/point mapping and the dynamic adjustment of finger movement enabled by visualization of the touch point. [Paper]


  • ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data

    (UIST ’15) Xin Yi, Chun Yu, Mingrui Zhang, Sida Gao, Ke Sun, Yuanchun Shi
    Ten-finger freehand mid-air typing is a potential solution for post-desktop interaction. However, the absence of tactile feedback and the inability to accurately distinguish the tapping finger or target keys are the major challenges for mid-air typing. In this paper, we present ATK, a novel interaction technique that enables freehand ten-finger typing in the air based on 3D hand tracking data. Our hypothesis is that expert typists are able to transfer their typing ability from physical keyboards to mid-air typing. We followed an iterative approach in designing ATK. We first empirically investigated users’ mid-air typing behavior, examining fingertip kinematics during tapping, correlated movement among fingers and the 3D distribution of tapping endpoints. Based on the findings, we proposed a probabilistic tap detection algorithm and augmented Goodman’s input correction model to account for the ambiguity in distinguishing the tapping finger. We finally evaluated the performance of ATK with a 4-block study. Participants typed 23.0 WPM with an uncorrected word-level error rate of 0.3% in the first block, and later achieved 29.2 WPM in the last block without sacrificing accuracy. [Paper] [Video]
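
    As a toy illustration of tap detection from hand tracking data, the sketch below finds taps as local minima in a fingertip-height trace that dip below a press threshold. ATK’s actual detector is probabilistic and uses richer 3D kinematic features; the names, units and threshold here are assumptions.

```python
def detect_taps(z, press_depth=0.02):
    """Detect taps in a fingertip-height trace (height above the
    virtual keyboard plane, in metres): a tap is a local minimum
    that dips below press_depth. Returns the frame indices."""
    taps = []
    for i in range(1, len(z) - 1):
        if z[i] < z[i - 1] and z[i] <= z[i + 1] and z[i] < press_depth:
            taps.append(i)
    return taps

# Two dips below 2 cm are detected as taps (frames 2 and 6).
trace = [0.10, 0.06, 0.01, 0.05, 0.09, 0.07, 0.015, 0.08]
print(detect_taps(trace))  # → [2, 6]
```

    A fixed height threshold like this is exactly what a probabilistic detector improves on, since real tapping endpoints are distributed rather than sharply bounded.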

  • ChinAR: Facilitating Chinese Guqin Learning through Interactive Projected Augmentation

    (Chinese CHI ’15, best paper) Yingxue Zhang, Siqi Liu, Lu Tao, Chun Yu, Yuanchun Shi, Yingqing Xu
    The Guqin, a seven-stringed fretless zither, is the most representative traditional musical instrument in China. However, the complexity of its unique notation and theory has severely limited its popularity in the modern world. With the goal of providing an easy and effective way to learn the Guqin, we have created an interactive learning system called ChinAR which employs augmented reality. We make three main contributions in this paper: (1) a systematic method for designing instrumental learning that combines eastern and western musical concepts; (2) a primary validation of the effect of augmented reality in facilitating learning of the Chinese Guqin; and (3) a natural user interface for the learning system based on gesture detection. The results of a user study show that our design helps provide a better learning experience and enhances performance and memorization with markedly less time spent learning. This work shows how a new interface can help promote the use of heritage instruments and culture. [Paper]

  • A Tabletop-Centric Smart Space for Emergency Response

    (IEEE Pervasive Computing ’15) Jie Liu, Yongqiang Qin, Qiang Yang, Chun Yu, and Yuanchun Shi
    This article describes the design and implementation of a smart space for emergency response based on five system design guidelines. A large-scale interactive tabletop with dedicated software serves as the center for collaboration. Information on the tabletop and peripheral devices can be shared via a user interface sharing technique. The authors deployed their smart space in a forest fire simulation and evaluated team performance in comparison with a control group. Analysis of the results reveals that this smart space can significantly improve team performance as well as team cognition. This article is part of a special issue on smart spaces. [Paper]


  • FOCUS: enhancing children’s engagement in reading by using contextual BCI training sessions

    (CHI ’14) Jin Huang, Chun Yu, Yuntao Wang, Yuhang Zhao, Siqi Liu, Chou Mo, Jie Liu, Lie Zhang, Yuanchun Shi
    Reading is an important aspect of a child’s development. Reading outcomes are heavily dependent on the level of engagement while reading. In this paper, we present FOCUS, an EEG-augmented reading system which monitors a child’s engagement level in real time and provides contextual BCI training sessions to improve the child’s reading engagement. A laboratory experiment was conducted to assess the validity of the system. Results showed that FOCUS could significantly improve engagement in terms of both EEG-based measurement and teachers’ subjective measures of reading outcomes. [Paper]

  • TangramTheatre: Presenting Children’s Creation on Multimodal Tabletops

    (CHI EA ’14) Zhun Qu, Chun Yu, Yue Shi, Jin Huang, Li Tian, Yuanchun Shi
    The tangram is a jigsaw-like traditional Chinese art form rich in wittiness and expressiveness. However, children and novice users lack efficient support for creating animations after designing tangram characters. Thus, we present TangramTheatre, a performance-driven animation tool that combines the creation and animation of physical and virtual characters. TangramTheatre allows users to create characters using seven physical tangram pieces, just as they do in real tangram games, and then edit animations of these characters. In this paper we present our proof-of-concept prototype. A preliminary study was conducted to inform a future empirical study with children. The results show that all of the participants expressed great interest in TangramTheatre. [Paper]

  • BodyRC: Exploring Interaction Modalities Using Human Body as Lossy Signal Transmission Medium

    (UIC, honorable mention) Yuntao Wang, Chun Yu, Lin Du, Jin Huang, Yuanchun Shi
    With the increasing popularity of wearable computing devices, new sensing techniques that enable always-available interaction are in high demand. In this paper, we propose BodyRC, a novel body-based device that uses the human body as a lossy signal transmission medium. The device supports on-body interaction and body gesture recognition. In particular, BodyRC recognizes on-body interaction operations and body gestures by analyzing the electrical properties of a single high-frequency analog signal transmitted through the human body. We evaluate the capabilities and performance of BodyRC through two controlled experiments showing robust classification of both on-body interactions and body gestures. In addition, we design a real-time recognition system demonstrating the utility of our technique. [Paper]

  • QOOK: enhancing information revisitation for active reading with a paper book

    (TEI ’14) Yuhang Zhao, Yongqiang Qin, Yang Liu, Siqi Liu, Taoshuai Zhang, Yuanchun Shi
    Revisiting information on previously accessed pages is a common activity during active reading. Physical and digital books each have their own benefits in supporting this activity, owing to how they are manipulated. In this paper, we introduce QOOK, a paper-book-based interactive reading system that integrates the technology of digital books with the affordances of physical books to facilitate information revisitation. The design goals of QOOK are derived from a literature survey and our field studies of physical and digital books. QOOK allows page flipping just like a real book and enables electronic functions such as keyword searching, highlighting and bookmarking. A user study demonstrates that QOOK brings faster information revisiting and a better reading experience to readers. [Paper]

  • Enhancing Collaboration in Competitive Games in Multi-Display Environment

    Jie Liu, Taoshuai Zhang, Yuanchun Shi
    For the Multi-Display Environment (MDE), we introduce a mechanism containing private, public and group workspaces for a computer-mediated tabletop board game that combines a tabletop with mobile phones. The mechanism sustains the important sociality between players while ensuring privacy and enhancing visual effects. Based on the popular board game Monopoly, we designed Copoly on a multi-touch tabletop and mobile phones, where players can form groups for collaboration. We explore the patterns of collaboration and their effect on the tabletop game experience. Results show that social bonding plays an important role in the frequency and patterns of collaboration in tabletop games, and that players gain a more joyful experience through both competition and collaboration. [Paper]

  • Defining and Analyzing a Gesture Set for Interactive TV Remote on Touchscreen Phones

    Yuntao Wang, Chun Yu, Yuhang Zhang, Jin Huang, Yuanchun Shi
    In this paper, we recruited 20 participants to perform user-defined gestures on a touchscreen phone for 22 TV remote commands. In total, 440 gestures were recorded, analyzed and paired with think-aloud data for these 22 referents. After analyzing the gestures according to an extended taxonomy of surface gestures and an agreement measure, we present a user-defined gesture set for interactive TV remotes on touchscreen phones. Beyond insights into users’ mental models and analysis of the gesture set, our findings indicate that people prefer single-handed thumb gestures, and prefer eyes-free gestures that require no attention switch in TV viewing scenarios. Multi-display interaction is useful for text entry and menu access tasks. Our results will contribute to better gesture design for interaction between TVs and touchscreen phones. [Paper]

  • AirFlow: designing immersive breathing training games for COPD

    Yongqiang Qin, Chris Vincent, Nadia Bianchi-Berthouze, Yuanchun Shi
    Chronic Obstructive Pulmonary Disease (COPD) refers to a collection of lung diseases that result in breathing difficulties. In some cases, the symptoms of COPD can be reduced by engaging in breathing exercises. Technology can support these exercises, and we are developing AirFlow, a suite of interactive computer games, to guide breathing exercises and promote learning. To establish requirements, we interviewed 20 people with COPD in China to understand their use of breathing exercises and learn how technology might fit their lifestyle. The findings informed our design goals. We outline a prototype system in which respiration rate, waveform and amplitude are captured and used to control a virtual environment. The system guides users through breathing exercises and provides training instructions using a series of games. The immersive environment aims to support a fun and motivating experience, thereby underpinning user confidence. [Paper]

  • uStitchHub: Stitching Multi-Touch Trajectories on Tiled Very Large Tabletops

    Yongqiang Qin, Yue Shi, Yuanchun Shi
    It is common to tile normal-sized units together to build very large tabletops or interactive digital walls. Cameras are usually adopted to capture touch input on the surface, which demands fusing several multi-touch input feeds into a single feed in real time. Common approaches that stitch camera frames into aggregate images do not scale to larger numbers of cameras. We present uStitchHub to address these challenges in the tabletop domain. It is a fusion mediator that accepts detected blobs from multiple cameras with or without overlapping fields of view, remaps blobs to the coordinate space of the entire large tabletop, matches blobs with real touches on the surface, and concatenates touch trajectories that span two or more units. We conducted a laboratory evaluation of uStitchHub’s stitching success rate and latency. The results show that uStitchHub is fast, stitching touch trajectories accurately with low latency. [Paper]

  • Video avatar-based remote video collaboration

    Siqi Liu, Chun Yu, Yuanchun Shi
    Most existing remote video communication systems focus on delivering adequate information accurately in real time but lose sight of communicators’ interaction demands. Meanwhile, traditional 2D video communication cannot make full use of people’s 3D information, and 3D techniques show great potential for making video communication immersive, natural and efficient. To enhance the sense of immersion and expand interaction modes in video collaboration, we propose design goals for ‘immersive’ video communication systems and design and implement a novel avatar-based remote video collaboration system. Using a Creative Senz3D depth camera, the system extracts each person’s foreground image through background segmentation as their video avatar and places the avatars together in a common virtual scenario. Natural and immersive interactions among people, and between people and virtual scenes, expand the modes of interaction and collaboration in video communication. Finally, a user study indicates that the proposed video-avatar-based collaboration mode can effectively enhance people’s sense of immersion in telecommunication. [Paper]


  • Implicit bookmarking: Improving support for revisitation in within-document reading tasks

    (IJHCS ’13) Chun Yu, Ravin Balakrishnan, Ken Hinckley, Tomer Moscovich, Yuanchun Shi
    We explore improving support for revisitation in documents by automatically generating bookmarks based on users’ reading history. After showing that dwell time and number of visits are not appropriate for predicting revisitations in documents, we model the high-level reading task as a sequence of reading blocks and recognize long-distance scrolls as separators between them. A long-distance scroll is defined as a continuous scrolling action which causes the document to be navigated beyond a one-page distance. We propose a new technique, called the Head–Tail (HT) algorithm, to generate bookmarks at the head and the tail of reading blocks, whose validity is quantitatively verified by log data analysis. Two studies were conducted to investigate this HT implicit bookmarking technique. The first is a controlled experiment that compared the HT algorithm to the widely used simple recency algorithm for generating implicit bookmarks, in terms of revisit coverage ability and distance between bookmarks … [Paper]
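
    A minimal sketch of the Head–Tail idea: segment the scroll trace at long-distance scrolls (jumps beyond a one-page distance) and bookmark the head and tail position of each resulting reading block. This simplifies the paper’s definition, which considers continuous scrolling actions rather than raw position jumps, and the function and parameter names are assumptions.

```python
def ht_bookmarks(positions, page_height=1.0):
    """Segment a scroll trace (document positions, in page units)
    into reading blocks separated by long-distance scrolls, then
    bookmark the head and tail of each block (HT algorithm sketch)."""
    if not positions:
        return []
    blocks, start = [], 0
    for i in range(1, len(positions)):
        if abs(positions[i] - positions[i - 1]) > page_height:
            blocks.append((positions[start], positions[i - 1]))
            start = i
    blocks.append((positions[start], positions[-1]))
    # Bookmarks: head and tail of each reading block, deduplicated.
    marks = []
    for head, tail in blocks:
        for p in (head, tail):
            if p not in marks:
                marks.append(p)
    return marks

trace = [0.0, 0.3, 0.6, 5.0, 5.2, 5.4]   # one long jump from 0.6 to 5.0
print(ht_bookmarks(trace))  # → [0.0, 0.6, 5.0, 5.4]
```

    Bookmarking both ends of a block covers revisits to where reading started as well as where it was interrupted.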

  • Facilitating parallel web browsing through multiple-page view

    (CHI ’13) Wenchang Xu, Chun Yu, Songmin Zhao, Jie Liu and Yuanchun Shi
    Parallel web browsing describes the behavior where users visit web pages in multiple concurrent threads. Qualitative studies have observed this activity being performed with multiple browser windows or tabs. However, these solutions are unsatisfying, since a large amount of time is wasted switching among windows and tabs. In this paper, we propose the multiple-page view to facilitate parallel web browsing. Specifically, we provide users with the experience of visiting multiple web pages in one browser window and tab via extensions to prevalent desktop web browsers. Through a user study and survey, we found that 2-4 pages within the window size were preferred for the multiple-page view, despite diverse screen sizes and resolutions. Analysis of logs from the user study also showed a 26.3% improvement in users’ efficiency at parallel web browsing tasks, compared to traditional browsing with multiple windows or tabs. [Paper]

  • Understanding performance of eyes-free, absolute position control on touchable mobile phones

    (MobileHCI ’13, honorable mention) Yuntao Wang, Chun Yu, Jie Liu, Yuanchun Shi
    Many eyes-free interaction techniques have been proposed for touchscreens, but little research has studied users’ eyes-free pointing ability on mobile phones. In this paper, we investigate single-handed thumb performance for eyes-free, absolute position control on mobile touchscreens. Both 1D and 2D experiments were conducted. We explored the effects of target size and location on eyes-free touch patterns and accuracy. Our findings show that the variance of touch points per target converges as target size decreases. The centroid of touch points per target tends to be offset to the left of the target center horizontally, and to shift toward the screen center vertically. Average accuracy drops from 99.6% for a 2×2 layout to 85.0% for a 4×4 layout, and accuracy per target varies with target location. Our findings and design implications provide a foundation for future research on eyes-free, absolute position control using the thumb on mobile devices. [Paper]

  • QOOK: a new physical-virtual coupling experience for active reading

    (UIST ’13 Adjunct) Yuhang Zhao, Yongqiang Qin, Yang Liu, Siqi Liu, Yuanchun Shi
    We present QOOK, an interactive reading system that incorporates the benefits of both physical and digital books to facilitate active reading. QOOK uses a top-projector to create digital contents on a blank paper book. By detecting markers attached to each page, QOOK allows users to flip pages just like they would with a real book. Electronic functions such as keyword searching, highlighting and bookmarking are included to provide users with additional digital assistance. With a Kinect sensor that recognizes touch gestures, QOOK enables people to use these electronic functions directly with their fingers. The combination of the electronic functions of the virtual interface and free-form interaction with the physical book creates a natural reading experience, providing an opportunity for faster navigation between pages and better understanding of the book contents. [Paper]

  • Hero: designing learning tools to increase parental involvement in elementary education in China

    Yuhang Zhao, Alexis Hope, Jin Huang, Yoel Sumitro, James Landay, Yuanchun Shi
    In this paper, we present the design of Hero, a suite of learning tools that combine teacher-created extracurricular challenges with in-class motivational tools to help parents become more involved in their child’s education, while also engaging students in their own learning. To inform the design, we conducted field studies and interviews involving 7 primary teachers and 15 different families. We analyzed Chinese parenting styles and problems related to parental involvement, and developed three major themes from the data. We then proposed three design goals and created a high-fidelity prototype after several iterations of user testing. A preliminary evaluation showed that teachers, parents, and students could all benefit from the design. [Paper]

  • Exploring the effect of display size on pointing performance

    Yuntao Wang, Chun Yu, Yongqiang Qin, Dan Li, Yuanchun Shi
    In this paper, we studied how display size affects human pointing performance with a mouse, given the same display field of view. In total, four display sizes (10.6, 27, 46 and 55 inches) and three display fields of view (20, 34 and 45) were tested. Our findings show that, given the same display field of view, mouse movement time significantly increases as display size increases, but there is no significant effect of display size on pointing accuracy. This research may contribute a new dimension to the literature describing human pointing performance on large displays. [Paper]


  • Clustering web pages to facilitate revisitation on mobile devices

    Jie Liu, Chun Yu, Wenchang Xu, Yuanchun Shi
    Due to small screens, inaccurate input and other limitations of mobile devices, revisiting web pages in mobile browsers takes more time than in desktop browsers. In this paper, we propose a novel approach to facilitate revisitation. We designed AutoWeb, a system that clusters open web pages into different topics based on their contents. Users can quickly find a desired open web page by narrowing the search scope to a group of web pages that share the same topic. Clustering accuracy was evaluated at 92.4%, and computing resource consumption was shown to be acceptable. A user study was conducted to explore the user experience and how much AutoWeb facilitates revisitation. Results showed that AutoWeb could save significant time on revisitation, and participants rated the system highly. [Paper]
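
    Content-based clustering of open pages might look like the following sketch: TF-IDF bag-of-words vectors plus greedy cosine-similarity grouping. AutoWeb’s actual algorithm is not described here, so the vectorization, the similarity threshold and the greedy assignment are all assumptions for illustration.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors for a list of token lists."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs, threshold=0.2):
    """Greedy topic clustering: each page joins the first existing
    cluster whose representative page it is similar enough to."""
    vecs = tfidf_vectors(docs)
    clusters = []  # lists of page indices
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

pages = [["python", "tutorial", "code"],
         ["python", "code", "examples"],
         ["hotel", "booking", "travel"]]
print(cluster(pages))  # → [[0, 1], [2]]
```

    Real page contents would first be tokenized and stop-word-filtered, and a production system would re-cluster incrementally as pages open and close.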

  • How much to share: A Repeated Game Model for Peer-to-Peer Streaming under Service Differentiation Incentive

    Xin Xiao*, Qian Zhang, Yuanchun Shi, Yuan Gao.
    In this paper, we propose a service differentiation incentive for P2P streaming systems, according to peers’ instant contributions. A repeated game model is designed to analyze how much peers should contribute in each round under this incentive. Simulations show that satisfactory streaming quality is achieved in the Nash equilibrium state. [Paper]

  • A Scalable Distributed Architecture for Intelligent Vision System

    Guojian Wang*, Linmi Tao, Huijun Di, Xiyong Ye, Yuanchun Shi
    The complexity of intelligent computer vision systems demands novel system architectures capable of integrating various computer vision algorithms into a working system with high scalability. Current real-time applications of human-centered computing are based on multiple cameras, which requires a transparent distributed architecture. This paper presents an application-oriented service share model for the generalization of vision processing. Based on the model, a vision system architecture is presented that can readily integrate computer vision processing and let application modules share services and exchange messages transparently. The architecture provides a standard interface for loading various modules and a mechanism for modules to acquire inputs and publish processing results that can be used as inputs by others. Using this architecture, a system can load specific applications without considering the common low-layer data processing. We have implemented a prototype vision system based on the proposed architecture, and tested its latency performance and 3D tracking function. The architecture is scalable and open, so it will be useful for supporting the development of intelligent vision systems as well as distributed sensor systems. [Paper]

  • Inertial Body-worn Sensor Data Segmentation by Boosting Threshold-based Detectors

    Yue Shi, Yuanchun Shi, Xia Wang
    Using inertial body-worn sensors, we propose a segmentation approach to detect when a user changes actions. We use AdaBoost to combine three threshold-based detectors: force/gravity ratios, peaks of autocorrelation, and local minima of velocity. Experimenting with the CMU Multi-Modal Activity Database, we find that the first two features are the most important, and that our combination approach improves performance with an acceptable level of granularity. [Paper]
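
    The boosting idea in this abstract, weighting each threshold-based detector by its training error and combining them in a weighted vote, can be sketched as follows (the error rates below are illustrative, not values from the paper):

    ```python
    import math

    # Illustrative training error rates for the three detectors:
    # force/gravity ratio, autocorrelation peaks, velocity minima.
    ERRORS = [0.20, 0.25, 0.40]

    def adaboost_weights(errors):
        """AdaBoost weights each weak detector by alpha = 0.5 * ln((1 - e) / e),
        so more accurate detectors get a larger say in the vote."""
        return [0.5 * math.log((1 - e) / e) for e in errors]

    def combined_detector(votes, weights):
        """Weighted vote of binary detectors (votes in {-1, +1});
        returns +1 if an action boundary is detected."""
        score = sum(w * v for w, v in zip(weights, votes))
        return 1 if score > 0 else -1
    ```

    With these illustrative error rates, the two more accurate detectors agreeing on a boundary outvote the weakest detector disagreeing.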

  • Watching you moving the mouse, I know who you are

    Chun Yu, Yue Shi, Xinliang Wang, Yuanchun Shi
    Previous research on modeling human’s pointing behavior focuses on user-independent variables such as target width and distance. In this work-in-progress, we investigate a set of user-dependent variables, which are drawn from cursor trajectory data and may represent an individual user’s unique pattern when controlling mouse movement. Using these features, the 8 users in our experiment can be recognized at a promising accuracy as high as 87.5%. [Paper]
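
    As an illustration of the kind of user-dependent trajectory features the abstract refers to, the sketch below computes two simple ones, mean cursor speed and path straightness, from timestamped cursor samples. The paper's actual feature set is not specified here, so these are assumptions:

    ```python
    import math

    def trajectory_features(points):
        """points: timestamped cursor samples (x, y, t).
        Returns (mean speed, straightness); straightness is the ratio of the
        direct start-to-end distance to the traveled path length (1.0 = straight)."""
        steps = [math.dist(p[:2], q[:2]) for p, q in zip(points, points[1:])]
        path_len = sum(steps)
        duration = points[-1][2] - points[0][2]
        direct = math.dist(points[0][:2], points[-1][:2])
        return path_len / duration, direct / path_len
    ```

    Feature vectors like this, collected over many pointing trials, could then feed any standard classifier to distinguish users.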

  • UI Portals: Sharing Arbitrary Regions of User Interfaces on Traditional and Multi-user Interactive Devices

    Jie Liu, Yuanchun Shi
    This paper introduces UI Portals, a novel approach that helps users share the user interfaces of their off-the-shelf applications across various platforms, on both traditional and multi-user interactive devices. Users can share an entire application window or select parts of it. In addition to traditional single-user mouse-and-keyboard interaction, we support simultaneous interaction on large multi-user interactive surfaces, such as tabletops and multi-touch vertical surfaces. We describe the concepts and implementation mechanisms of this approach. Furthermore, we implement UI Portals Toolsets (UIPT), a prototype that demonstrates sharing arbitrary regions of user interfaces across multiple platforms without any change to the application source code. In UIPT, we design a windowing tool dedicated to large multi-user interactive surfaces to fully leverage simultaneous interaction. Two typical scenarios demonstrate the utility of UIPT and show how it can help users work with their familiar software applications on different displays and platforms. [Paper]

  • AutoWeb: automatic classification of mobile web pages for revisitation

    Jie Liu, Wenchang Xu, Yuanchun Shi
    Revisitation in mobile Web browsers takes more time than in desktop browsers due to the limitations of mobile phones. In this paper, we propose AutoWeb, a novel approach to speed up revisitation in mobile Web browsing. In AutoWeb, opened Web pages are automatically classified into groups based on their contents. Users can revisit an opened Web page more quickly by narrowing the search scope to a group of pages that share the same topic. We evaluated the classification accuracy, which reached 92.4%. Three experiments investigated revisitation performance on three specific tasks. Results show that AutoWeb reduces revisitation time by 29.5%, especially for long browsing sessions, and that it improves the overall mobile Web revisitation experience. We also compare automatic classification with other revisitation methods. [Paper]
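
    A minimal sketch of content-based page grouping in the spirit of AutoWeb, using bag-of-words cosine similarity and a greedy single-pass assignment. The similarity measure and threshold are illustrative assumptions, not the paper's actual classifier:

    ```python
    import math
    from collections import Counter

    def cosine_sim(text_a, text_b):
        """Cosine similarity of simple bag-of-words term-frequency vectors."""
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
        na = math.sqrt(sum(c * c for c in va.values()))
        nb = math.sqrt(sum(c * c for c in vb.values()))
        return dot / (na * nb)

    def group_pages(pages, threshold=0.3):
        """Greedy single-pass grouping: put each page into the first group whose
        seed page is similar enough, otherwise start a new group."""
        groups = []
        for page in pages:
            for group in groups:
                if cosine_sim(page, group[0]) >= threshold:
                    group.append(page)
                    break
            else:
                groups.append([page])
        return groups
    ```

    In a real browser the inputs would be extracted page text rather than short strings; this only illustrates the clustering step.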

  • Digging unintentional displacement for one-handed thumb use on touchscreen-based mobile devices

    Wenchang Xu, Jie Liu, Chun Yu, Yuanchun Shi
    When users tap on touchscreen-based mobile devices with their fingers, there is usually an unnoticed on-screen displacement between the initial contact and the final lift-off, which may affect target selection accuracy, gesture performance, etc. In this paper, we term this phenomenon unintentional displacement and model it under both static and dynamic scenarios. We then conducted two user studies to understand unintentional displacement for the widely adopted one-handed thumb use on touchscreen-based mobile devices under both scenarios. Our findings shed light on four questions: 1) which factors affect unintentional displacement; 2) what the distance range of the displacement is; 3) how the distance varies over time; and 4) how the unintentional points are distributed around the initial contact point. These results not only explain certain touch inaccuracies but also provide an important reference for the optimization and future design of UI components, gestures, input techniques, etc. [Paper]

  • Fall Detection on Mobile Phones Using Features from a Five-Phase Model

    Yue Shi, Yuanchun Shi, Xia Wang
    Injuries caused by falls are a great threat to elderly people. With its communication and motion-sensing capabilities, the mobile phone is an ideal platform to detect fall accidents and help the injured person receive first aid. However, missed detections and false alarms in monitoring software annoy users in real use. In this paper, we present a novel fall detection technique using features from a five-phase model that describes how the state of the user's motion changes during a fall. Experimental results validate the effectiveness of the algorithm and show that features derived from the model, such as the gravity-cross rate and the non-primary maximum and minimum points of the acceleration data, are useful for improving detection precision. Moreover, we implement the technique as uCare, an Android application that helps elderly people with fall prevention, detection, and first-aid seeking. [Paper]
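
    One feature named in the abstract, the gravity-cross rate, can be read as the frequency with which the acceleration magnitude crosses the 1 g level within a window. A sketch under that assumption (the exact definition in the paper may differ):

    ```python
    def gravity_cross_rate(acc_mag, sample_rate, g=9.8):
        """Crossings of the 1 g level per second, given a window of
        acceleration-magnitude samples (m/s^2) taken at sample_rate Hz."""
        crossings = sum(
            1 for a, b in zip(acc_mag, acc_mag[1:]) if (a - g) * (b - g) < 0
        )
        return crossings * sample_rate / len(acc_mag)
    ```

    During a fall the magnitude oscillates sharply around 1 g, so a high crossing rate in a short window is a plausible indicator.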

  • PiMarking: co-located collaborative digital annotating on large tabletops

    Yongqiang Qin, Chenjun Wu, Yuanchun Shi
    There are situations in which co-located people need to perform collaborative marking tasks: for example, human resource officers review resumes together, and teachers grade answer sheets after an examination. In this poster, we introduce PiMarking, a collaborative system designed to support user-authenticated marking tasks and face-to-face discussions on large-scale interactive tabletop surfaces. PiMarking facilitates user differentiation, document sharing, and synchronized marking among group members. It introduces user permission management mechanisms, allowing three modes of document sharing: distributed copy, shared display, and synchronized marking. We conducted a preliminary study using a realistic resume-marking task, which demonstrated the effectiveness of the features provided by PiMarking. [Paper]

  • RegionalSliding: enhancing target selection on touchscreen-based mobile devices

    Wenchang Xu, Chun Yu and Yuanchun Shi
    Target selection on touchscreen-based mobile devices is often troublesome because the user's finger occludes the target and it is ambiguous which part of the finger generates the resulting point. In this paper, we propose RegionalSliding, a novel technique to enhance target selection on touchscreen-based mobile devices. It selectively renders the initially "selected" target and its "surrounding" targets in a non-occluded area when users press down on the screen, and lets users complete the selection with sliding gestures guided by the visual feedback from the rendered area. A preliminary user study shows that RegionalSliding increases selection accuracy and provides a good user experience. [Paper]

  • Enabling Efficient Browsing and Manipulation of Web Tables on Smartphone

    Wenchang Xu and Yuanchun Shi
    Tables are important carriers of the vast information on the Internet and are widely used in web pages. However, most web tables are designed only for desktop PCs and focus on how to visually and logically present large amounts of data without considering their visual effect on small-screen devices. As a result, users suffer inconvenience when browsing web tables on smartphones. In this paper, we propose to enable efficient browsing and manipulation of web tables on smartphones to solve the problems of both information retrieval and content replication from web tables. We implemented a mobile web browser on the Android 2.1 platform that handles web tables in three steps: genuine table detection, table understanding, and user interface design. We conducted a user study to evaluate the tool. Experimental results show that it increases users' browsing efficiency for web tables and that the novel browsing and manipulation modes are well accepted by users. [Paper]

  • uPlatform: A Customizable Multi-user Windowing System for Interactive Tabletop

    Chenjun Wu, Yue Suo, Chun Yu, Yuanchun Shi
    Interactive tabletops have shown great potential for facilitating face-to-face collaboration in recent years. Yet, despite much promising research, one important area that remains largely unexplored is the windowing system on tabletops, which can enable users to work with multiple independent or collaborative applications simultaneously. As a consequence, investigation of many scenarios, such as conferencing and planning, has been rather limited. To address this limitation, we present uPlatform, a multi-user windowing system created specifically for interactive tabletops. It is built on three components: 1) an input manager for processing concurrent multi-modal inputs; 2) a window manager for enforcing multi-user policies; 3) a hierarchical structure for organizing multi-task windows. All three components can be customized through a simple, flexible API. Based on uPlatform, three systems, uMeeting, uHome, and uDining, have been implemented, demonstrating its efficiency in building multi-user windowing systems on interactive tabletops. [Paper]

  • uMeeting, an Efficient Co-located Meeting System on the Large-Scale Tabletop

    Jie Liu, Yuanchun Shi
    In this paper, we present uMeeting, a co-located meeting system on a large-scale tabletop. People are used to sitting around a table to hold a meeting; it is natural and intuitive, and the table plays a central role in supporting team activities. Horizontal surfaces, rather than vertical ones, have inherent features that support co-located meetings. However, existing tabletops are not large enough to support meetings of more than four people, and the display area for each person is limited. We therefore developed uTable, a large-scale multi-touch tabletop, and, based on our earlier uTableSDK, built the uMeeting system to support co-located meetings on uTable. [Paper]

  • Surprise Grabber: a co-located tangible social game using phone hand gesture

    Mingming Fan, Li Tian, Xin Li, Yuanchun Shi, Yu Zhong, Hao Wang
    Social network games (SNGs) have recently become among the most popular games. In contrast to asynchronous, online-based SNGs, we present Surprise Grabber to explore how a tangible gesture interface can benefit synchronous co-located social games. In Surprise Grabber, users control a virtual grabber's movement in a 3D game to catch gifts using their camera phones. Efficient code running on the phone detects hand motion, delivers the results to a server PC, and provides feedback in real time. Unlike online SNGs, all players stand together in front of a public display. Pilot user studies showed that: 1) the gesture interface was easy to pick up and made the game more immersive; 2) occasional inaccuracy in hand motion detection made the game more competitive rather than frustrating; 3) players' performance was clearly influenced by the social atmosphere; and 4) in most cases, players' performance improved or worsened at the same time. [Paper]

  • A rotation based method for detecting on-body positions of mobile devices

    Yue Shi, Jie Liu, Yuanchun Shi
    We present a novel rotation-based method for detecting where a mobile device is worn on a user's body, which fuses data from the accelerometer and gyroscope. Detecting the position of a mobile device can improve the performance of on-body sensor-based human activity recognition and the adaptability of many mobile applications. In our method, the radius and angular velocity for a position are calculated from the data read from the sensors integrated in a mobile device. We evaluated our method in an experiment detecting four commonly used positions: breast pocket, trouser pocket, hip pocket, and hand. [Paper]
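
    A minimal sketch of the rotation idea, assuming the radius is recovered from circular-motion kinematics (r = a_c / ω², with centripetal acceleration from the accelerometer and angular velocity from the gyroscope) and then matched against typical per-position radii. The reference radii are hypothetical values, not from the paper:

    ```python
    def rotation_radius(centripetal_acc, angular_velocity):
        """Circular motion gives a_c = r * omega^2, so r = a_c / omega^2.
        A larger radius suggests a body position farther from the rotation axis."""
        return centripetal_acc / angular_velocity ** 2

    def nearest_position(radius, reference):
        """Pick the position whose typical radius is closest to the measured one.
        reference: dict mapping position name -> hypothetical typical radius (m)."""
        return min(reference, key=lambda pos: abs(reference[pos] - radius))
    ```

    For example, during walking the swinging hand traces a larger radius around the shoulder than a device resting in a breast pocket does around the torso.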

  • Smart home on smart phone

    Yu Zhong, Yue Suo, Wenchang Xu, Chun Yu, Xinwei Guo, Yuhang Zhao, Yuanchun Shi
    The mobile phone, with its high accessibility and usability, is regarded as the ideal interface for users to monitor and control the approaching smart home environment. Moreover, networking technologies and protocols are now advanced enough to support a universal monitoring and controlling interface on smartphones. This paper presents HouseGenie, an interactive, direct-manipulation mobile application that supports a range of basic home monitoring and controlling functionalities as a replacement for the individual remotes of smart home appliances. HouseGenie also addresses several common requirements behind this vision, such as scenarios, short-delay alarms, and area restrictions. We demonstrate that HouseGenie not only provides intuitive presentations and interactions for smart home management but also improves the user experience compared to existing solutions. [Paper]

  • PicoPet: “Real World” digital pet on a handheld projector

    Yuhang Zhao, Chao Xue, Xiang Cao, Yuanchun Shi
    We created PicoPet, a digital pet game based on mobile handheld projectors. The player can project the pet into physical environments, and the pet behaves and evolves differently according to its physical surroundings. PicoPet creates a new form of gaming experience that is directly blended into the physical world, and thus could become incorporated into players' daily lives and reflect their lifestyles. Multiple pets projected by multiple players can also interact with each other, potentially triggering social interactions between the players. In this paper, we present the design and implementation of PicoPet, as well as directions for future exploration. [Paper]

  • A Scalable Passive RFID-Based Multi-User Indoor Location System

    Shang Ma, Yuanchun Shi
    RFID-based indoor location systems have proved to be both accurate and cost-effective. However, current implementations mainly use active tags, which suffer from battery replacement, installation, maintenance, and per-unit cost issues. Besides, as the number of users grows, guaranteeing the stability and high-speed transmission of location information becomes challenging. To address these challenges, we 1) propose a passive RFID-based system for localizing multiple users, and 2) supplement it by detecting human motion with various types of embedded sensors. In addition, we implement a reliable transmission protocol based on dynamic PRI to guarantee location data transmission between RF nodes. According to the performance analysis, the tracking accuracy of our system is well assured. Its quick responsiveness and good scalability, as well as its low energy and infrastructure costs, make this system a cost-effective and easy-to-deploy solution for stable indoor positioning. [Paper]

  • XINS: the anatomy of an indoor positioning and navigation architecture

    Yuan Gao, Qingxuan Yang, Guanfeng Li, Edward Y. Chang, Dong Wang, Chengu Wang, Hang Qu, Pei Dong, Faen Zhang
    Location-Based Service (LBS) is becoming a ubiquitous technology for mobile devices. In this work, we propose a signal-fusion architecture called XINS to perform effective indoor positioning and navigation. XINS uses signals from inertial navigation units as well as WiFi and floor-map constraints to detect turns, estimate travel distances, and predict locations. XINS employs non-intrusive calibration procedures to significantly reduce errors, and fuses signals synergistically to improve computational efficiency, enhance location-prediction accuracy, and conserve power. [Paper]

  • The satellite cursor: achieving MAGIC pointing without gaze tracking using multiple cursors

    Chun Yu, Yuanchun Shi, Ravin Balakrishnan, Xiangliang Meng, Yue Suo, Mingming Fan, Yongqiang Qin.
    We present the satellite cursor – a novel technique that uses multiple cursors to improve pointing performance by reducing input movement. The satellite cursor associates every target with a separate cursor in its vicinity for pointing, which realizes the MAGIC (manual and gaze input cascade) pointing method without gaze tracking. We discuss the problem of visual clutter caused by multiple cursors and propose several designs to mitigate it. Two controlled experiments were conducted to evaluate satellite cursor performance in a simple reciprocal pointing task and a complex task with multiple targets of varying layout densities. Results show the satellite cursor can save significant mouse movement and consequently pointing time, especially for sparse target layouts, and that satellite cursor performance can be accurately modeled by Fitts’ Law. [Paper]
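
    The abstract's claim that satellite cursor performance is modeled by Fitts' Law can be illustrated with the Shannon formulation T = a + b * log2(D/W + 1): reducing the effective movement distance D, as the per-target satellite cursors do, reduces the predicted pointing time. The intercept and slope below are illustrative constants, not values fitted in the paper:

    ```python
    import math

    def fitts_time(distance, width, a=0.2, b=0.1):
        """Shannon formulation of Fitts' Law: T = a + b * log2(D / W + 1).
        a is the intercept (s), b the slope (s/bit); both are illustrative."""
        return a + b * math.log2(distance / width + 1)
    ```

    For example, shrinking the distance to a target of the same width from 15 units to 3 units lowers the index of difficulty from 4 bits to 2 bits, halving the movement-dependent part of the predicted time.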

  • Toward Systematical Data Scheduling for Layered Streaming in Peer-to-Peer Networks: Can We Go Farther?

    Xin Xiao, Yuanchun Shi, Qian Zhang, Jianhua Shen, Yuan Gao.
    Layered streaming in P2P networks has recently become a hot topic. However, the "layered" feature makes data scheduling quite different from that of non-layered streaming, and it has not yet been systematically studied. In this paper, guided by the unique characteristics of layered coding, we first present four objectives that scheduling should address: throughput, layer delivery ratio, useless packet ratio, and subscription jitter prevention. We then design LayerP2P, a three-stage scheduling approach for requesting data, in which a min-cost flow model, a probability decision mechanism, and a multi-window remedy mechanism are used in the Free Stage, Decision Stage, and Remedy Stage, respectively, to collaboratively achieve these objectives. Building on the basic version of LayerP2P and the experimental results of our previous work, this paper focuses on its mechanism details and analyzes its unique features; in addition, to further guarantee performance under sharp bandwidth variation, we propose an enhanced approach that improves the Decision Stage strategy. Extensive experiments, through both simulation and real network implementation, indicate that it outperforms other schemes. LayerP2P has also been deployed in the PDEPS Project in China, which is expected to be the first practical layered streaming system for education in P2P networks. [Paper]

  • Structured laser pointer: enabling wrist-rolling movements as a new interactive dimension

    Yongqiang Qin, Yuanchun Shi, Hao Jiang, Chun Yu.
    In this paper, we revisit multi-point laser pointer interaction from a wrist-rolling perspective. We propose SLP, the Structured Laser Pointer, which detects a laser pointer's rotation along its emitting axis. SLP adds wrist-rolling gestures as a new interactive dimension to the conventional laser pointer interaction approach. We asked a group of users to perform a set of tasks using SLP and derived from the test results a set of criteria to distinguish incidental from intentional SLP rolling; the experimental results also confirmed the high accuracy and acceptable speed and throughput of such rolling interaction. [Paper]

  • HyMTO: The Hybrid Mesh/Tree Overlay for Large Scale Multimedia Interactive Applications over the Internet

    Yuan Gao, Yuanchun Shi, Xin Xiao, Jianhua Shen.
    In large-scale multimedia interactive applications over the Internet, data can be categorized into streaming data (e.g., audio and video streams) and sporadic data (e.g., annotations on a whiteboard). The mesh-pull overlay suits streaming data because it is resilient in heterogeneous networks, while the tree-push overlay is more suitable for sporadic data because its easy-to-manage structure can solve the problems of packet loss and disorder. To transmit both types of data, this paper proposes a Hybrid Mesh/Tree Overlay (HyMTO), in which sporadic data and streaming data are transmitted through different overlays with a novel synchronization scheme. HyMTO provides resilient, reliable, and ordered transmission for both types of data. Simulations evaluate the effectiveness and efficiency of HyMTO, and a practical e-conference system based on HyMTO has been implemented and deployed to demonstrate its practicality. [Paper]

  • HouseGenie: Universal Monitor and Controller of Networked Devices on Touchscreen Phone in Smart Home

    Yue Suo, Chenjun Wu, Yongqiang Qin, Chun Yu, Yu Zhong, Yuanchun Shi.
    Touch-screen phones, which travel with their users, offer high accessibility and direct-manipulation interaction, and are regarded as among the most convenient interfaces for daily life. In this paper, we present HouseGenie, which enables universal monitoring and control of networked devices in a smart home from a touch-screen phone. By wirelessly communicating with an OSGi-based portal, which manages all the devices through various protocols (e.g., the industry standards UPnP or IGRS), HouseGenie facilitates universal home monitoring and control: 1) monitoring current status in a panoramic view; 2) directly manipulating single or multiple devices using pie menus, list mode, and drag-and-drop gestures; 3) easily controlling devices in several multimodal ways. [Paper]

  • SHSim: An OSGI-based smart home simulator

    Lei Zhang, Yue Suo, Yu Chen, and Yuanchun Shi
    With the development of pervasive computing, the smart home is increasingly popular. A smart home deployment usually requires integrating many heterogeneous devices and service applications, making system testing hard and expensive. To cope with this problem, this paper describes SHSim, an OSGi-based smart home simulator suitable for smart home testing. SHSim is built on a dynamic mechanism that allows users to configure the system and test cases easily, and it provides device-transparent simulation, meaning that virtual devices can be replaced by real devices with little or no modification. Practical applications show that SHSim can effectively improve development efficiency and reduce testing cost. [Paper]

  • pPen: enabling authenticated pen and touch interaction on tabletop surfaces

    Yongqiang Qin, Chun Yu, Hao Jiang, Chenjun Wu, Yuanchun Shi.
    This paper introduces pPen, a pressure-sensitive digital pen that enables precise pressure and touch input on vision-based interactive tabletops. With pPen input and feature-matching technology, we implemented a novel method supporting multi-user authenticated interaction in a bimanual pen-and-touch scenario: users log in simply by stroking their signature on the table with pPen, which simultaneously creates a binding between the user and the pPen, so that each subsequent pPen command is user-differentiated. We also conducted laboratory user studies, which demonstrated the method's security and high resistance to shoulder surfing: in the evaluation, no attacker was able to log into another user's workspace. [Paper]

  • A tabletop multi-touch Dali’s painting appreciation system

    Li Tian, Xiangliang Meng, Yuanchun Shi.
    Painting artworks, especially those with surreal or super-rational elements, are difficult for ordinary people to appreciate deeply in a static exhibition. We believe that bringing interactivity to an existing painting would emphasize its theme, encourage active and collaborative learning, and complement its aura. To this end, this paper presents DALI (Dali's Artwork for Learning Interactively), a multi-touch system designed to guide viewers in appreciating painting artworks. The current system runs on a multi-touch tabletop: 30 paintings by the Spanish artist Salvador Dali were digitized, deconstructed, and interactively displayed on the table. Visual effects on the paintings can be triggered by hand and finger gestures, an effective way to teach visual arts. [Paper]

  • iWebImage: Enabling real-time interactive access to web images

    Wenchang Xu, Yuanchun Shi and Xin Yang.
    Images are widely used in web pages. However, most web images can only be viewed passively, and it remains inconvenient for users to collect and save web images for further editing. Locating a particular image in web pages with a large number of images, such as image search results, is also troublesome and time-consuming, especially on mobile devices with small screens. In this paper, we propose to enable real-time interactive access to web images and design three browsing modes: normal mode, starred mode, and advanced mode. We implement a plug-in for Microsoft Internet Explorer, called iWebImage, which incorporates efficient computer graphics algorithms and provides a customized user interface supporting real-time interactive access to web images. Experimental results illustrate the usage scenarios of the three browsing modes and show that iWebImage is well accepted by users. [Paper]

  • Enhancing browsing experience of table and image elements in web pages

    Wenchang Xu, Xin Yang and Yuanchun Shi.
    With the growing popularity and diversification of both the Internet and its access devices, users' experience of browsing web pages is in great need of improvement. The traditional browsing mode for web elements such as tables and images is passive, which limits users' browsing efficiency. In this paper, we propose to enhance the browsing experience of table and image elements in web pages by enabling real-time interactive access to web tables and images. We design new browsing modes that help users improve their browsing efficiency: operation mode and record mode for web tables, and normal mode, starred mode, and advanced mode for web images. We design and implement a plug-in for Microsoft Internet Explorer, called iWebWidget, which provides a customized user interface supporting real-time interactive access to web tables and images. We also carried out a user study to verify the usefulness of iWebWidget. Experimental results show that users are satisfied with and really enjoy the new browsing modes for both web tables and images. [Paper]