Ke Sun
Pervasive HCI Group, Tsinghua University

I am a Ph.D. student at the Department of Computer Science and Technology, Tsinghua University. I work in the Pervasive Interaction Group, supervised by Prof. Yuanchun Shi and Prof. Chun Yu.

My research interest is in HCI. I study human input performance, design input techniques for essential interaction tasks in emerging environments, and realize them with computing methods (probability, applied machine learning, signal processing, etc.). I have worked on text entry in AR (CHI'16) and VR (UIST'15) and on smartwatch interactions (CHI'17).

I received my Bachelor's degree from the Department of Computer Science and Technology at Tsinghua University in 2014.

CHI 2017
Float: One-Handed and Touch-Free Target Selection on Smartwatches
Ke Sun, Yuntao Wang, Chun Yu, Yukang Yan, Hongyi Wen, Yuanchun Shi

Float is a wrist-to-finger interaction technique that enables efficient one-handed and touch-free input on smartwatches. With Float, a user tilts the wrist to point and performs an in-air finger tap to click. We realize Float using only commercially available built-in sensors. In particular, we detect finger taps based on the photoplethysmogram (PPG) signal acquired from the heart rate monitor sensor (the supplementary file below provides a detailed description of PPGTap).
[PDF] [Supplementary File: PPGTap]
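As a rough illustration of the idea (this is not the actual PPGTap algorithm — the detection method, window size, and threshold here are my own simplifying assumptions), a finger tap can be viewed as a short, high-frequency disturbance superimposed on the slow, periodic heart-rate waveform of the PPG signal, and detected by thresholding the residual after smoothing:

```python
import math

def moving_average(signal, window):
    """Smooth the signal to estimate the slow heart-rate component."""
    half = window // 2
    return [
        sum(signal[max(0, i - half):i + half + 1])
        / len(signal[max(0, i - half):i + half + 1])
        for i in range(len(signal))
    ]

def detect_taps(signal, window=15, threshold=0.5, refractory=10):
    """Return sample indices where the residual (signal minus its smoothed
    baseline) exceeds `threshold`, suppressing detections closer than
    `refractory` samples apart. All parameters are illustrative."""
    baseline = moving_average(signal, window)
    residual = [abs(s - b) for s, b in zip(signal, baseline)]
    taps, last = [], -refractory
    for i, r in enumerate(residual):
        if r > threshold and i - last >= refractory:
            taps.append(i)
            last = i
    return taps

# Synthetic PPG-like trace: a slow sine wave plus a sharp spike at sample 60
# standing in for the tap-induced disturbance.
trace = [math.sin(2 * math.pi * i / 50) for i in range(120)]
trace[60] += 2.0  # simulated tap disturbance
print(detect_taps(trace))  # [60]
```

A real detector must of course cope with motion artifacts and varying signal quality, which is where the substance of PPGTap lies.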

CHI 2016 | Honorable Mention Award (Top 5%)
One-Dimensional Handwriting: Inputting Letters and Words on Smart Glasses
Chun Yu, Ke Sun* (first student author), Mingyuan Zhong, Xincheng Li, Yuanchun Shi

1D-Handwriting is a unistroke gesture technique that enables text entry on a one-dimensional interface. We map two-dimensional handwriting to a reduced one-dimensional space while balancing memorability and performance. Through iterative design, we derive a set of ambiguous two-length unistroke gestures, each mapping to one to three letters. 1D-Handwriting outperforms a selection-based technique for both letter and word input.
[PDF] [Slides] [Video] [ACM DL]

UbiComp 2016 Workshop: UnderWare
SkinMotion: what does skin movement tell us?
Yuntao Wang, Ke Sun, Lu Sun, Chun Yu, Yuanchun Shi

SkinMotion reconstructs human motions from skin-stretching movements. We discuss the potential applications of SkinMotion. In addition, we experimentally explore one specific instance ‒ finger motion detection using the skin movement on the dorsum of the hand. Results show that SkinMotion achieves an average estimation error of 5.84° for proximal phalanx flexion. We expect SkinMotion to open new possibilities for skin-based interactions.

UIST 2015
ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data
Xin Yi, Chun Yu, Mingrui Zhang, Sida Gao, Ke Sun, Yuanchun Shi

Air Typing Keyboard (ATK) enables freehand ten-finger typing in the air based on 3D hand tracking data. We empirically investigate users' mid-air typing behavior, examining fingertip kinematics, correlated movement among fingers, and the 3D distribution of tapping endpoints. We propose a probabilistic tap detection algorithm and augment Goodman's input correction model to account for the ambiguity in identifying the tapping finger.
[PDF] [Slides] [Video] [ACM DL]
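To sketch the flavor of Goodman-style input correction (this toy decoder is my own illustration, not ATK's model — the layout, touch model, and lexicon are invented), each candidate word is scored by a language-model prior multiplied by the likelihood of the observed tap points under that word's keys:

```python
import math

KEYS = {'a': (0.0, 0.0), 'b': (1.0, 0.0), 'c': (0.0, 1.0)}  # toy key layout

def tap_likelihood(tap, key, sigma=0.3):
    """Isotropic Gaussian likelihood of a tap landing at `tap` when the
    user aimed at `key` (an illustrative touch model)."""
    dx, dy = tap[0] - KEYS[key][0], tap[1] - KEYS[key][1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def decode(taps, lexicon):
    """lexicon: {word: prior probability}. Return the word maximizing
    prior * product of per-tap likelihoods (word length must match
    the number of taps)."""
    best, best_score = None, -1.0
    for word, prior in lexicon.items():
        if len(word) != len(taps):
            continue
        score = prior
        for tap, ch in zip(taps, word):
            score *= tap_likelihood(tap, ch)
        if score > best_score:
            best, best_score = word, score
    return best

# Noisy taps near 'a' then 'b'; the prior arbitrates among close candidates.
taps = [(0.1, -0.05), (0.9, 0.1)]
print(decode(taps, {'ab': 0.6, 'cb': 0.3, 'aa': 0.1}))  # ab
```

ATK's contribution is in how this kind of model is extended to handle uncertainty about which finger tapped at all; the sketch above only shows the baseline Bayesian scoring.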

Other Projects

Here are some of my other HCI-related projects.

Continuous Word-Level Writing in Mid-Air

This is my undergraduate thesis (2014). Handwriting brings expressiveness, and freehand interaction enables natural operations. We present a mid-air text entry technique that lets users write continuously at the word level in the air through 3D handwriting recognition. After analyzing the pattern of mid-air writing, we extend traditional HMM-based 2D handwriting recognition algorithms to 3D space. Users reached a text entry rate of 15.63 WPM; the top recognition candidate achieved an accuracy of 85.77%, and the top-3 candidates 90.63%. This [PDF] gives a brief description.
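As a minimal sketch of one step in such a pipeline (not the thesis implementation — the six-direction quantization is my own simplifying assumption), a 3D trajectory must first be reduced to a discrete observation sequence before an HMM can score it. Here each segment between consecutive samples is quantized to the nearest of the six axis-aligned 3D directions, a 3D analogue of the chain codes used in 2D handwriting recognition:

```python
def quantize_direction(dx, dy, dz):
    """Map a 3D displacement to one of six symbols: +x, -x, +y, -y, +z, -z."""
    axes = [('+x', dx), ('-x', -dx), ('+y', dy), ('-y', -dy),
            ('+z', dz), ('-z', -dz)]
    return max(axes, key=lambda a: a[1])[0]

def stroke_to_observations(points):
    """Turn a list of (x, y, z) samples into an HMM observation sequence,
    collapsing repeated symbols so slow segments do not dominate."""
    symbols = []
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        s = quantize_direction(x1 - x0, y1 - y0, z1 - z0)
        if not symbols or symbols[-1] != s:
            symbols.append(s)
    return symbols

# A stroke that moves right, then up, then toward the viewer.
stroke = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0), (2, 2, 0), (2, 2, 1)]
print(stroke_to_observations(stroke))  # ['+x', '+y', '+z']
```

Per-letter HMMs would then be trained on such sequences, with word-level decoding searching over letter concatenations.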

BodyMirror: Torso-Centric Touch Interactions with Absolute Indirect Interfaces

This is my course project for the HCI class (2015). BodyMirror is a barehanded input approach that leverages the torso (the front trunk of the human body) for spatial-mapping interaction. With BodyMirror, users imagine the torso as a touchable input surface mirrored from a remote graphical output surface and perform absolute indirect touch interactions. BodyMirror brings the potential benefit of eyes-free interaction: users rely on proprioception and tactile feedback for pointing and touch, providing an alternative for mobile and wearable applications. This [PDF] gives a brief description of the design and implementation.

Touch Screen as Spatial Context for Surface-based Selecting and Interacting in MR

An attempt to bring touch-screen interaction into Mixed Reality on HoloLens (2016). We explore the spatial correspondence and touch expressiveness afforded by a smartphone to facilitate 'mixed' interactions with the real-world environment. We assume the environment is composed of many adjoining, structured planar surfaces, which serve as mid-level interactive primitives. We propose an orientation-based technique for selecting surfaces. Once a surface is selected, the user can seamlessly interact on and with it using the touch screen in hand. Our prototype enables users in MR to (1) quickly and accurately place virtual elements at a desired position; (2) easily translate, scale, and rotate 3D elements; and (3) use the surfaces as a reference to position virtual elements in mid-air.
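One simple way to realize orientation-based surface selection (a sketch under my own assumptions, not necessarily the prototype's method) is to cast a ray along the device's pointing direction and pick the nearest planar surface it intersects:

```python
def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Distance along the ray to the plane, or None if parallel or behind."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    return t if t > 0 else None

def select_surface(origin, direction, surfaces):
    """surfaces: list of (name, point_on_plane, normal). Return the name
    of the closest plane intersected by the pointing ray, or None."""
    best = None
    for name, point, normal in surfaces:
        t = ray_plane_hit(origin, direction, point, normal)
        if t is not None and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None

# A user at the origin points along +z toward a wall 3 m away; the floor
# plane is parallel to the ray and is never hit.
surfaces = [('wall', (0, 0, 3), (0, 0, -1)),
            ('floor', (0, -1, 0), (0, 1, 0))]
print(select_surface((0, 0, 0), (0, 0, 1), surfaces))  # wall
```

Real surfaces are bounded, so a full implementation would also test whether the hit point falls inside the surface's polygon, not just its infinite plane.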


Tsinghua University 09/2014 - present
Ph.D. student in Computer Science and Technology
Advisor: Prof. Yuanchun Shi

Tsinghua University 09/2010 - 06/2014
B.Eng. in Computer Science and Technology
GPA rank: 13/129; graduated with honors from the Department of CS


Intern at HiScene, a leading Augmented Reality company in China 10/2016 - 12/2016

I learned from and worked with experienced colleagues in the SLAM group. After a period of code cleanup work, I gained a preliminary understanding of the basic algorithms of Augmented Reality. I also built a pipeline tool that takes an image sequence as input and outputs a 3D mesh model reconstructed from it using open-source software.

Teaching Assistant, Human-Computer Interaction: Theory and Technology, THU 09/2015 - 01/2016
09/2016 - 01/2017

Honors & Awards

National Scholarship by Ministry of Education 2016
Honorable Mention Award by ACM CHI 2016 2016
Outstanding Graduate by the Department of CS, Tsinghua University 2014
Comprehensive Excellence Scholarship by Tsinghua University 2011, 2013
Academic Excellence Scholarship by Tsinghua University 2012


C++, Java, Python, C#, MATLAB, Cooking

Updated in January 2017