Session: "Human Senses"

Multi-Touch Skin: A Thin and Flexible Multi-Touch Sensor for On-Skin Input

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173607

Abstract: Skin-based touch input opens up new opportunities for direct, subtle, and expressive interaction. However, existing skin-worn sensors are restricted to single-touch input and limited by a low resolution. We present the first skin overlay that can capture high-resolution multi-touch input. Our main contributions are: 1) Based on an exploration of functional materials, we present a fabrication approach for printing thin and flexible multi-touch sensors for on-skin interactions. 2) We present the first non-rectangular multi-touch sensor overlay for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. 3) To validate the feasibility and versatility of our approach, we present four application examples and empirical results from two technical evaluations. They confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has a high spatial accuracy even when subjected to strong deformations.

Summary:

The authors developed a skin-worn multi-touch sensor. Where existing on-skin sensors were limited to single-touch input at low resolution, this sensor achieves multi-touch input with a high signal-to-noise ratio and high resolution. They also propose a design tool that generates thin-film sensors in arbitrary shapes and sizes.
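
The technical evaluation measures the sensor's signal-to-noise ratio on the body. As a minimal sketch of how SNR can be computed for a capacitive electrode grid (the grid size, frame counts, and the `snr` helper are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def snr(baseline: np.ndarray, touch: np.ndarray) -> float:
    """SNR of a touch on a capacitive grid: the strongest per-electrode
    signal change divided by the baseline noise floor."""
    delta = touch - baseline.mean(axis=0)   # per-electrode signal change
    signal = np.abs(delta).max()            # strongest touch response
    noise = baseline.std(axis=0).mean()     # idle-frame noise floor
    return float(signal / noise)

# Simulated 8x8 electrode grid: noisy idle frames and one touch frame.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.1, size=(50, 8, 8))  # 50 idle frames
touch = rng.normal(0.0, 0.1, size=(8, 8))
touch[3, 4] += 5.0                                # a touch raises one cell
print(snr(baseline, touch))                       # well above the noise floor
```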

GestureWiz: A Human-Powered Gesture Design Environment for User Interface Prototypes

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173681

Abstract: Designers and researchers often rely on simple gesture recognizers like Wobbrock et al.'s $1 for rapid user interface prototypes. However, most existing recognizers are limited to a particular input modality and/or pre-trained set of gestures, and cannot be easily combined with other recognizers. In particular, creating prototypes that employ advanced touch and mid-air gestures still requires significant technical experience and programming skill. Inspired by $1's easy, cheap, and flexible design, we present the GestureWiz prototyping environment that provides designers with an integrated solution for gesture definition, conflict checking, and real-time recognition by employing human recognizers in a Wizard of Oz manner. We present a series of experiments with designers and crowds to show that GestureWiz can perform with reasonable accuracy and latency. We demonstrate advantages of GestureWiz when recreating gesture-based interfaces from the literature and conducting a study with 12 interaction designers that prototyped a multimodal interface with support for a wide range of novel gestures in about 45 minutes.

Summary:

Existing gesture recognizers demand substantial programming skill, making prototyping laborious. The authors propose a system in which human recognizers return recognition results in real time, Wizard-of-Oz style, and ran user studies showing that it achieves reasonable accuracy and latency.
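The abstract cites Wobbrock et al.'s $1 recognizer as its inspiration. For context, a stripped-down $1-style template matcher can be sketched as follows (this variant omits $1's rotation-invariance search for brevity; all names and the sample strokes are illustrative):

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points ($1 step 1)."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = total / (n - 1)
    pts, out, d, i = list(points), [points[0]], 0.0, 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:        # pad against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Scale to a unit box and centre on the centroid ($1 steps 3-4)."""
    xs, ys = zip(*points)
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in points]

def distance(a, b):
    """Mean point-to-point distance between two normalized strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the template name with the smallest path distance."""
    s = normalize(resample(stroke))
    return min(templates, key=lambda k: distance(s, normalize(resample(templates[k]))))

line = [(0, 0), (10, 1)]
vee = [(0, 0), (5, 10), (10, 0)]
print(recognize([(0, 0), (11, 1)], {"line": line, "vee": vee}))  # → line
```

GestureWiz's point is precisely that such recognizers are tied to one modality and gesture set, which the human-powered approach sidesteps.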

PalmTouch: Using the Palm as an Additional Input Modality on Commodity Smartphones

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173934

Abstract: Touchscreens are the most successful input method for smartphones. Despite their flexibility, touch input is limited to the location of taps and gestures. We present PalmTouch, an additional input modality that differentiates between touches of fingers and the palm. Touching the display with the palm can be a natural gesture since moving the thumb towards the device's top edge implicitly places the palm on the touchscreen. We present different use cases for PalmTouch, including the use as a shortcut and for improving reachability. To evaluate these use cases, we have developed a model that differentiates between finger and palm touch with an accuracy of 99.53% in realistic scenarios. Results of the evaluation show that participants perceive the input modality as intuitive and natural to perform. Moreover, they appreciate PalmTouch as an easy and fast solution to address the reachability issue during one-handed smartphone interaction compared to thumb stretching or grip changes.

Summary:

PalmTouch uses the palm as an additional input modality on smartphones. A model distinguishes finger touches from palm touches with 99.53% accuracy, and the authors devised use cases such as in-app shortcuts. A user study confirmed that the modality feels intuitive and natural.
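The 99.53% figure comes from a trained classifier operating on the touchscreen's raw capacitive image. As a loose illustration of the underlying signal difference (a palm blob covers far more electrodes than a fingertip), a naive area threshold might look like this; the grid size and threshold are made-up values, not the authors' model:

```python
import numpy as np

PALM_AREA_CELLS = 12  # hypothetical threshold, in electrode cells

def classify_touch(cap_image: np.ndarray, noise_floor: float = 0.5) -> str:
    """Rough stand-in for PalmTouch's classifier: count electrodes whose
    capacitance change exceeds the noise floor and threshold on area."""
    area = int((cap_image > noise_floor).sum())
    return "palm" if area >= PALM_AREA_CELLS else "finger"

grid = np.zeros((15, 27))     # a small capacitive mosaic, for illustration
grid[3:5, 4:6] = 2.0          # small 2x2 blob: fingertip
print(classify_touch(grid))   # → finger
grid[6:12, 10:20] = 1.5       # large blob: palm resting on the screen
print(classify_touch(grid))   # → palm
```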

ChromaGlasses: Computational Glasses for Compensating Colour Blindness

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173964

Abstract: Prescription glasses are used by many people as a simple, and even fashionable way, to correct refractive problems of the eye. However, there are other visual impairments that cannot be treated with an optical lens in conventional glasses. In this work we present ChromaGlasses, Computational Glasses using optical head-mounted displays for compensating colour vision deficiency. Unlike prior work that required users to look at a screen in their visual periphery rather than at the environment directly, ChromaGlasses allow users to directly see the environment using a novel head-mounted display design that analyzes the environment in real-time and changes the appearance of the environment with pixel precision to compensate for the impairment of the user. In this work, we present first prototypes for ChromaGlasses and report on the results from several studies showing that ChromaGlasses are an effective method for managing colour blindness.

Summary:

The authors propose a glasses-type device built on an optical see-through HMD to assist users with colour vision deficiency. It detects colours the user cannot distinguish and applies pixel-level correction in real time. Miniaturizing the hardware and adapting to varying lighting conditions remain future work.
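ChromaGlasses' own pipeline is hardware-specific, but the general idea of recolouring for colour vision deficiency can be illustrated with classic daltonization: simulate what the viewer sees, take the error against the original, and redistribute that error into channels the viewer can still perceive. The matrices below are common textbook approximations for protanopia, not taken from the paper:

```python
import numpy as np

# Approximate protanopia simulation in linear RGB.
SIMULATE = np.array([[0.567, 0.433, 0.0],
                     [0.558, 0.442, 0.0],
                     [0.0,   0.242, 0.758]])
# Shift the "lost" red-channel signal into green and blue.
SHIFT = np.array([[0.0, 0.0, 0.0],
                  [0.7, 1.0, 0.0],
                  [0.7, 0.0, 1.0]])

def daltonize(rgb: np.ndarray) -> np.ndarray:
    """Classic daltonization: corrected = rgb + SHIFT @ (rgb - simulated)."""
    simulated = rgb @ SIMULATE.T        # what a protanope would see
    error = rgb - simulated             # information lost to the viewer
    return np.clip(rgb + error @ SHIFT.T, 0.0, 1.0)

red = np.array([1.0, 0.0, 0.0])        # pure red, hard for a protanope
corrected = daltonize(red)
print(corrected)                       # part of the lost signal moves to blue
```

ChromaGlasses applies this kind of correction per pixel as an overlay on the real scene rather than on a screen image, which is what the optical see-through design enables.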