During the celebration of the 107th anniversary of the founding of Tsinghua University, the 36th “Challenge Cup” College Student Extracurricular Academic Science and Technology Works Competition came to a successful close, and the Department of Computer Science and Technology won the first prize.[Details]
ACM CHI 2018, the top-tier conference on Human-Computer Interaction, was held from April 21 to 26 in Montréal, Canada. Around 3,000 professionals from both academia and industry attended the meeting. Three research articles from the HCI Group were accepted.[Details]
The Pervasive HCI Group had five articles accepted, making it one of the most productive research groups in the world. Remarkably, our group received a nomination for the Honorable Mention Award for the third consecutive year.[Details]
Facilitating Text Entry on Smartphones with QWERTY Keyboard for Users with Parkinson’s Disease
(CHI’21) Yutao Wang, Ao Yu, Xin Yi*, Yuanwei Zhang, Ishan Chatterjee, Shwetak Patel, Yuanchun Shi
QWERTY is the primary smartphone text input keyboard configuration. However, insertion and substitution errors caused by hand tremors, often experienced by users with Parkinson’s disease, can severely affect typing efficiency and user experience. In this paper, we investigated the typing behavior of users with Parkinson’s disease on smartphones. In particular, we identified and compared the typing characteristics of users with and without Parkinson’s symptoms. We then proposed an elastic probabilistic model for input prediction. By incorporating both spatial and temporal features, this model generalizes the classical statistical decoding algorithm to correct insertion, substitution, and omission errors while maintaining a direct physical interpretation.
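The statistical decoding idea underlying this line of work can be illustrated with a toy example. The sketch below is not the paper's elastic model; it is a minimal, hypothetical decoder that combines a Gaussian spatial likelihood over key centers with a word-level language-model prior, and only scores words whose length matches the touch count (handling insertions and omissions, as the elastic model does, would additionally require an edit-distance-style alignment). All key coordinates and priors here are made up for illustration.

```python
import math

# Hypothetical key centers on a unit QWERTY grid (illustrative subset only).
KEY_CENTERS = {
    'q': (0.0, 0.0), 'w': (1.0, 0.0), 'e': (2.0, 0.0), 'r': (3.0, 0.0),
    'a': (0.5, 1.0), 's': (1.5, 1.0), 'd': (2.5, 1.0),
}

def touch_log_likelihood(touch, key, sigma=0.5):
    """Spatial model: isotropic Gaussian noise around the intended key's center."""
    kx, ky = KEY_CENTERS[key]
    tx, ty = touch
    d2 = (tx - kx) ** 2 + (ty - ky) ** 2
    return -d2 / (2 * sigma ** 2) - math.log(2 * math.pi * sigma ** 2)

def decode(touches, vocabulary, word_log_prior):
    """Return the word maximizing spatial likelihood + language-model prior."""
    best_word, best_score = None, float('-inf')
    for word in vocabulary:
        if len(word) != len(touches):
            continue  # no insertion/omission handling in this toy version
        score = word_log_prior.get(word, -20.0)
        score += sum(touch_log_likelihood(t, c) for t, c in zip(touches, word))
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```

A touch sequence landing near the centers of “w” and “e” would decode to “we” over a spatially more distant candidate, and a stronger prior can override a small spatial deficit, which is how such decoders absorb noisy taps.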
FaceSight: Enabling Hand-to-Face Gesture Interaction on AR Glasses with a Downward-Facing Camera Vision
(CHI’21) Yueting Weng, Chun Yu*, Yingtian Shi, Yuhang Zhao, Yukang Yan, Yuanchun Shi
We present FaceSight, a computer vision-based hand-to-face gesture sensing technique for AR glasses. FaceSight mounts an infrared camera on the bridge of the AR glasses to add sensing of the lower face and of hand behaviors. We designed 21 hand-to-face gestures and demonstrated their potential interaction benefits through five AR applications. We designed and implemented an algorithm pipeline that classifies all gestures with 83.06% accuracy, validated with data from 10 users. Given its compact form factor and rich gesture set, we see FaceSight as a practical solution for augmenting the input capability of future AR glasses.
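The final stage of such a pipeline, mapping a per-frame feature vector to a gesture label, can be sketched as follows. This is not FaceSight's actual recognizer (which is learned from camera images); it is a hypothetical stand-in that assumes frames have already been reduced to feature vectors (e.g., hand landmark coordinates) and classifies them by nearest centroid. Gesture names and feature values are invented for illustration.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_features):
    """labeled_features: {gesture_name: [feature_vector, ...]} -> centroid model."""
    return {label: centroid(vecs) for label, vecs in labeled_features.items()}

def classify(model, feature_vector):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], feature_vector))
```

In practice a learned classifier replaces the centroid step, but the interface is the same: per-frame features in, one of the supported gesture labels out.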