Ke Sun
Pervasive HCI Group, Tsinghua University

I am a Ph.D. student in the Department of Computer Science and Technology, Tsinghua University. I work in the Pervasive Interaction Group, supervised by Prof. Yuanchun Shi and Prof. Chun Yu. I received my Bachelor's degree from the Department of Computer Science and Technology at Tsinghua University in 2014.

My research interests are in Human-Computer Interaction (HCI). I study human input performance, design input techniques for essential interaction tasks in emerging environments, and realize them with computational methods (probability, applied machine learning, signal processing, etc.).



Publications

UIST 2018 [to appear]
Lip-Interact: Improving Mobile Device Interaction with Silent Speech Commands
Ke Sun, Chun Yu, Weinan Shi, Lan Liu, Yuanchun Shi

Lip-Interact is a system that allows users to input on a smartphone via silent speech commands. Lip-Interact repurposes the front camera to capture the user's mouth movements and recognizes the issued commands with an end-to-end deep learning model. Lip-Interact can help users access functionality efficiently in one step, enable one-handed input when the other hand is occupied, and assist touch input to make interactions more fluent.

CHI 2018
VirtualGrasp: Leveraging Experience of Interacting with Physical Objects to Facilitate Digital Object Retrieval
Yukang Yan, Chun Yu, Xiaojuan Ma, Xin Yi, Ke Sun, Yuanchun Shi

VirtualGrasp is a novel gestural approach to retrieving virtual objects in virtual reality (VR). Using VirtualGrasp, the user retrieves an object by performing a barehanded gesture as if grasping its physical counterpart. The object-gesture mapping under this metaphor is highly intuitive, enabling users to easily discover and remember the gestures for retrieving objects.

CHI 2017
Float: One-Handed and Touch-Free Target Selection on Smartwatches
Ke Sun, Yuntao Wang, Chun Yu, Yukang Yan, Hongyi Wen, Yuanchun Shi

Float is a wrist-to-finger interaction technique that enables one-handed and touch-free input on smartwatches with high efficiency. With Float, a user tilts the wrist to point and performs an in-air finger tap to click. We realize Float using only commercially available built-in sensors. In particular, we detect the finger taps based on the photoplethysmogram (PPG) signal acquired from the heart rate monitor sensor (the supplementary file below provides a detailed description of PPGTap).
[PDF] [Supplementary File: PPGTap]

CHI 2016 | Honorable Mention Award (Top 5%)
One-Dimensional Handwriting: Inputting Letters and Words on Smart Glasses
Chun Yu, Ke Sun* (first student author), Mingyuan Zhong, Xincheng Li, Yuanchun Shi

1D-Handwriting is a unistroke gesture technique that enables text entry on a one-dimensional interface. We map two-dimensional handwriting to a reduced one-dimensional space, while achieving a balance between memorability and performance. After an iterative design, we derive a set of ambiguous two-length unistroke gestures, each mapping to 1-3 letters. 1D-Handwriting outperforms a selection-based technique for both letter and word input.
[PDF] [Slides] [Video] [ACM DL]

UbiComp 2016 Workshop: UnderWare
SkinMotion: what does skin movement tell us?
Yuntao Wang, Ke Sun, Lu Sun, Chun Yu, Yuanchun Shi

SkinMotion reconstructs human motions from skin-stretching movements. We discuss the potential applications of SkinMotion. In addition, we experimentally explore one specific instance ‒ finger motion detection using the skin movement on the dorsum of the hand. Results show that SkinMotion achieves a 5.84° estimation error on average for proximal phalanx flexion. We expect SkinMotion to open new possibilities for skin-based interactions.

UIST 2015
ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data
Xin Yi, Chun Yu, Mingrui Zhang, Sida Gao, Ke Sun, Yuanchun Shi

Air Typing Keyboard (ATK) enables freehand ten-finger typing in the air based on 3D hand tracking data. We empirically investigate users' mid-air typing behavior, examining fingertip kinematics, correlated movement among fingers, and the 3D distribution of tapping endpoints. We propose a probabilistic tap detection algorithm, augmenting Goodman's input correction model to account for the ambiguity in distinguishing the tapping finger.
[PDF] [Slides] [Video] [ACM DL]


Education

Tsinghua University 09/2014 - present
Ph.D. student in Computer Science and Technology
Advisors: Prof. Yuanchun Shi and Prof. Chun Yu

Tsinghua University 09/2010 - 06/2014
B.Eng. in Computer Science and Technology
GPA rank: 13/129; graduated with honors from the Department of CS


Experience

Intern at , AI Platform and Research 04/2018 - 07/2018

Augmenting Input and Output Interfaces of Chatbot for Customer Service

Intern at HiScene, a leading Augmented Reality company in China 10/2016 - 12/2016

I learned from and worked with experienced colleagues in the SLAM group. After a period of code cleanup work, I gained a preliminary understanding of the basic algorithms of Augmented Reality. I also built a pipeline tool that takes an image sequence as input and reconstructs a 3D mesh model from it using open-source software.

Teaching Assistant, Human-Computer Interaction: Theory and Technology, THU 09/2015 - 01/2016
09/2016 - 01/2017

Honors & Awards

National Scholarship by Ministry of Education 2016
Honorable Mention Award by ACM CHI 2016 2016
Outstanding Graduate by the Department of CS, Tsinghua University 2014
Comprehensive Excellence Scholarship by Tsinghua University 2011, 2013
Academic Excellence Scholarship by Tsinghua University 2012


Skills

C++, Java, Python, C#, MATLAB, Cooking

Updated in July 2018