Session:「Personal Object Recognizers: Feasibility and Challenges」

Facade: Auto-generating Tactile Interfaces to Appliances

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025845

Paper abstract: Common appliances have shifted toward flat interface panels, making them inaccessible to blind people. Although blind people can label appliances with Braille stickers, doing so generally requires sighted assistance to identify the original functions and apply the labels. We introduce Facade - a crowdsourced fabrication pipeline to help blind people independently make physical interfaces accessible by adding a 3D printed augmentation of tactile buttons overlaying the original panel. Facade users capture a photo of the appliance with a readily available fiducial marker (a dollar bill) for recovering size information. This image is sent to multiple crowd workers, who work in parallel to quickly label and describe elements of the interface. Facade then generates a 3D model for a layer of tactile and pressable buttons that fits over the original controls. Finally, a home 3D printer or commercial service fabricates the layer, which is then aligned and attached to the interface by the blind person. We demonstrate the viability of Facade in a study with 11 blind participants.

Summary:

Proposes a method for fabricating Braille-labeled covers that let blind people identify the buttons of home appliances. The design aims to remove the need for sighted assistance: the user's only tasks are photographing the control panel and attaching the finished overlay. A trial with blind participants verified the feasibility of the approach.
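To make the size-recovery step concrete, here is a rough sketch, under assumed inputs, of how the known physical width of the dollar-bill fiducial could be turned into a pixel-to-millimetre scale for crowd-labeled button positions. The function names, box format, and example numbers are hypothetical illustrations, not Facade's actual pipeline.

```python
# Rough sketch (not Facade's code): recover physical scale from a fiducial of
# known size, here a US dollar bill (about 156 mm wide), and convert
# crowd-labeled button boxes from pixels to millimetres so a tactile overlay
# can be modelled at the right size.

DOLLAR_BILL_WIDTH_MM = 156.0  # known physical width of the fiducial

def mm_per_pixel(bill_width_px: float) -> float:
    """Scale factor derived from the detected pixel width of the dollar bill."""
    return DOLLAR_BILL_WIDTH_MM / bill_width_px

def buttons_to_mm(button_boxes_px, bill_width_px):
    """Convert crowd-labeled button boxes (x, y, w, h in pixels) to millimetres."""
    s = mm_per_pixel(bill_width_px)
    return [(x * s, y * s, w * s, h * s) for (x, y, w, h) in button_boxes_px]

# Hypothetical usage: the bill spans 450 px in the photo and crowd workers
# marked two 60 px square buttons; an overlay model would be built from the
# returned millimetre-scale boxes.
print(buttons_to_mm([(120, 80, 60, 60), (200, 80, 60, 60)], bill_width_px=450))
```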

People with Visual Impairment Training Personal Object Recognizers: Feasibility and Challenges

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025899

Paper abstract: Blind people often need to identify objects around them, from packages of food to items of clothing. Automatic object recognition continues to provide limited assistance in such tasks because models tend to be trained on images taken by sighted people with different background clutter, scale, viewpoints, occlusion, and image quality than in photos taken by blind users. We explore personal object recognizers, where visually impaired people train a mobile application with a few snapshots of objects of interest and provide custom labels. We adopt transfer learning with a deep learning system for user-defined multi-label k-instance classification. Experiments with blind participants demonstrate the feasibility of our approach, which reaches accuracies over 90% for some participants. We analyze user data and feedback to explore effects of sample size, photo-quality variance, and object shape; and contrast models trained on photos by blind participants to those by sighted participants and generic recognizers.

Summary:

Builds an image classifier for photos taken by visually impaired people. The results show that a classifier trained on a dataset of photos taken by visually impaired users is more accurate on such photos than a generic image recognizer.
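To illustrate the few-shot idea in the abstract (reuse a pretrained network's features and fit a very small per-user classifier on a handful of labeled snapshots), here is a minimal nearest-centroid sketch. It is not the authors' transfer-learning system; embed() is a hypothetical placeholder for a pretrained feature extractor, and the whole thing is only a simplified stand-in for user-defined k-instance classification.

```python
import numpy as np

def embed(image) -> np.ndarray:
    """Hypothetical feature extractor, e.g. the penultimate layer of a
    pretrained CNN; returns a fixed-length embedding vector."""
    raise NotImplementedError  # stand-in for a real pretrained network

def train_personal_recognizer(snapshots):
    """snapshots: dict mapping a user-defined label to a few example images.
    Returns one mean embedding (centroid) per label."""
    return {label: np.mean([embed(img) for img in images], axis=0)
            for label, images in snapshots.items()}

def recognize(image, centroids):
    """Return the user-defined label whose centroid is closest to the photo."""
    query = embed(image)
    return min(centroids, key=lambda label: np.linalg.norm(query - centroids[label]))
```

Nearest-centroid (or k-nearest-neighbor) classification over frozen pretrained features is a common way to get usable accuracy from only a few training photos per object, which matches the "few snapshots plus custom labels" workflow described in the abstract.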

Jackknife: A Reliable Recognizer with Few Samples and Many Modalities

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3026002

Paper abstract: Despite decades of research, there is yet no general rapid prototyping recognizer for dynamic gestures that can be trained with few samples, work with continuous data, and achieve high accuracy that is also modality-agnostic. To begin to solve this problem, we describe a small suite of accessible techniques that we collectively refer to as the Jackknife gesture recognizer. Our dynamic time warping based approach for both segmented and continuous data is designed to be a robust, go-to method for gesture recognition across a variety of modalities using only limited training samples. We evaluate pen and touch, Wii Remote, Kinect, Leap Motion, and sound-sensed gesture datasets as well as conduct tests with continuous data. Across all scenarios we show that our approach is able to achieve high accuracy, suggesting that Jackknife is a capable recognizer and good first choice for many endeavors.

Summary:

Proposes a general-purpose gesture recognizer for rapid prototyping that needs only a small amount of training data. By combining dynamic time warping (DTW) with the inner product of direction vectors along the gesture path, it recognizes gestures from time-series data with high accuracy.
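As a concrete illustration of the DTW-over-direction-vectors idea mentioned above, the sketch below resamples each gesture, takes unit direction vectors between consecutive points, aligns query and template with DTW using one minus the inner product as the local cost, and classifies by the nearest template. It is a simplified, assumed reconstruction, not the authors' Jackknife implementation, and it omits everything needed for continuous, unsegmented input.

```python
import numpy as np

def resample(points, n=16):
    """Resample a gesture path (k x d array) to n points spaced evenly by arc length."""
    points = np.asarray(points, dtype=float)
    dist = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    targets = np.linspace(0.0, dist[-1], n)
    return np.vstack([np.interp(targets, dist, points[:, i])
                      for i in range(points.shape[1])]).T

def direction_vectors(points):
    """Unit direction vectors between consecutive resampled points."""
    v = np.diff(points, axis=0)
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)

def dtw_distance(a, b):
    """DTW where the local cost of matching two directions is 1 - their inner product."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = 1.0 - np.dot(a[i - 1], b[j - 1])
            cost[i, j] = local + min(cost[i - 1, j],
                                     cost[i, j - 1],
                                     cost[i - 1, j - 1])
    return cost[n, m]

def classify(sample, templates):
    """templates: list of (label, gesture path) pairs; returns the nearest label."""
    query = direction_vectors(resample(sample))
    return min(templates,
               key=lambda t: dtw_distance(query, direction_vectors(resample(t[1]))))[0]
```

Comparing unit direction vectors rather than raw positions makes the match invariant to where and how large the gesture was performed, which helps when only a few templates per gesture are available.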

Ubiquitous Accessibility for People with Visual Impairments: Are We There Yet?

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025731

Paper abstract: Ubiquitous access is an increasingly common vision of computing, wherein users can interact with any computing device or service from anywhere, at any time. In the era of personal computing, users with visual impairments required special-purpose, assistive technologies, such as screen readers, to interact with computers. This paper investigates whether technologies like screen readers have kept pace with, or have created a barrier to, the trend toward ubiquitous access, with a specific focus on desktop computing as this is still the primary way computers are used in education and employment. Towards that, the paper presents a user study with 21 visually-impaired participants, specifically involving the switching of screen readers within and across different computing platforms, and the use of screen readers in remote access scenarios. Among the findings, the study shows that, even for remote desktop access - an early forerunner of true ubiquitous access - screen readers are too limited, if not unusable. The study also identifies several accessibility needs, such as uniformity of navigational experience across devices, and recommends potential solutions. In summary, assistive technologies have not made the jump into the era of ubiquitous access, and multiple, inconsistent screen readers create new practical problems for users with visual impairments.

Summary:

Clarifies the problems that the proliferation of assistive technologies for visually impaired people, in particular screen readers, creates for users. The study found that users want consistency across screen readers and versions, and the ability to carry a familiar screen reader with them on a portable device.