Session: "Impaired Vision and Navigation"

Audible Beacons and Wearables in Schools: Helping Young Visually Impaired Children Play and Move Independently

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025518

Abstract: Young children with visual impairments tend to engage less with their surroundings, limiting the benefits from activities at school. We investigated novel ways of using sound from a bracelet, such as speech or familiar noises, to tell children about nearby people, places and activities, to encourage them to engage more during play and help them move independently. We present a series of studies, the first two involving visual impairment educators, that give insight into challenges faced by visually impaired children at school and how sound might help them. We then present a focus group with visually impaired children that gives further insight into the effective use of sound. Our findings reveal novel ways of combining sounds from wearables with sounds from the environment, motivating audible beacons, devices for audio output and proximity estimation. We present scenarios, findings and a design space that show the novel ways such devices could be used alongside wearables to help visually impaired children at school.

Summary:

A study of how audio information can support young visually impaired children in everyday school life. The authors prototyped sound-emitting wearable beacon devices and evaluated them in a series of studies.
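The abstract mentions that the audible beacons combine audio output with proximity estimation, but does not describe how proximity is computed. A common approach for radio beacons is a log-distance path loss model over RSSI; the sketch below is a minimal illustration of that general technique (the function names, calibration values, and zone thresholds are illustrative assumptions, not taken from the paper).

```python
import math

def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Rough distance (m) from an RSSI reading, via the log-distance
    path loss model: RSSI = tx_power - 10 * n * log10(d).
    tx_power_dbm: RSSI measured at 1 m (assumed calibration value).
    path_loss_exponent: ~2.0 in free space, larger indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def proximity_zone(distance_m):
    """Map a distance estimate to a coarse zone that could select
    an audio cue (thresholds are arbitrary examples)."""
    if distance_m < 0.5:
        return "immediate"
    if distance_m < 3.0:
        return "near"
    return "far"

# At the calibration power, an RSSI of -59 dBm corresponds to ~1 m.
print(estimate_distance(-59.0))            # ~1.0
print(proximity_zone(estimate_distance(-79.0)))
```

Coarse zones rather than exact distances are usually preferred in practice, since indoor RSSI is noisy; an audio cue keyed to a zone changes less erratically than one keyed to a raw distance estimate.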

Embracing Errors: Examining How Context of Use Impacts Blind Individuals' Acceptance of Navigation Aid Errors

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025528

Abstract: Prevention of errors has been an orienting goal within the field of Human-Computer Interaction since its inception, with particular focus on minimizing human errors through appropriate technology design. However, there has been relatively little exploration into how designers can best support users of technologies that will inevitably make errors. We present a mixed-methods study in the domain of navigation technology for visually impaired individuals. We examined how users respond to device errors made in realistic scenarios of use. Contrary to conventional wisdom that usable systems must be error-free, we found that 42% of errors were acceptable to users. Acceptance of errors depends on error type, building feature, and environmental context. Further, even when a technical error is acceptable to the user, the misguided social responses of others nearby can negatively impact user experience. We conclude with design recommendations that embrace errors while also supporting user management of errors in technical systems.

Summary:

A study of how errors in an indoor navigation system and its context of use affect visually impaired users, tested with 57 participants across 10 different scenarios. More participants were concerned about the impact on people around them than about handling the errors themselves.

Understanding Low Vision People's Visual Perception on Commercial Augmented Reality Glasses

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025949

Abstract: People with low vision have a visual impairment that affects their ability to perform daily activities. Unlike blind people, low vision people have functional vision and can potentially benefit from smart glasses that provide dynamic, always-available visual information. We sought to determine what low vision people could see on mainstream commercial augmented reality (AR) glasses, despite their visual limitations and the device's constraints. We conducted a study with 20 low vision participants and 18 sighted controls, asking them to identify virtual shapes and text in different sizes, colors, and thicknesses. We also evaluated their ability to see the virtual elements while walking. We found that low vision participants were able to identify basic shapes and read short phrases on the glasses while sitting and walking. Identifying virtual elements had a similar effect on low vision and sighted people's walking speed, slowing it down slightly. Our study yielded preliminary evidence that mainstream AR glasses can be powerful accessibility tools. We derive guidelines for presenting visual output for low vision people and discuss opportunities for accessibility applications on this platform.

Summary:

An experiment with 20 low vision participants and 18 sighted controls on what low vision people can see and identify on commercial AR glasses. Even with limited vision, participants could identify shapes, colors, and short phrases on the glasses.

Synthesizing Stroke Gestures Across User Populations: A Case for Users with Visual Impairments

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025906

Abstract: We introduce a new principled method grounded in the Kinematic Theory of Rapid Human Movements to automatically generate synthetic stroke gestures across user populations in order to support ability-based design of gesture user interfaces. Our method is especially useful when the target user population is difficult to sample adequately and, consequently, when there is not enough data to train gesture recognizers to deliver high levels of accuracy. To showcase the relevance and usefulness of our method, we collected gestures from people without visual impairments and successfully synthesized gestures with the articulation characteristics of people with visual impairments. We also show that gesture recognition accuracy improves significantly when using our synthetic gesture samples for training. Our contributions will benefit researchers and practitioners that wish to design gesture user interfaces for people with various abilities by helping them prototype, evaluate, and predict gesture recognition performance without having to expressly recruit and involve people with disabilities in long, time-consuming gesture collection experiments.

Summary:

A method that synthesizes gesture samples with the articulation characteristics of visually impaired users from gesture data collected from sighted people. Even when gesture data from the target population is scarce, recognizers can be trained on the synthetic samples with significantly improved recognition accuracy.
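The method is grounded in the Kinematic Theory of Rapid Human Movements, whose Sigma-Lognormal model describes a stroke's speed as a sum of lognormal pulses. The sketch below illustrates that underlying model with a single pulse (the parameter values, the one-pulse simplification, and the idea of shifting the timing parameter to emulate slower articulation are illustrative assumptions, not the paper's actual synthesis pipeline).

```python
import math

def lognormal_speed(t, D, t0, mu, sigma):
    """Speed at time t contributed by one stroke component in the
    Sigma-Lognormal model: a lognormal pulse of amplitude (stroke
    length) D that begins at time t0."""
    if t <= t0:
        return 0.0
    x = t - t0
    return (D / (sigma * x * math.sqrt(2 * math.pi))) * \
        math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2))

def synthesize_profile(D=1.0, t0=0.0, mu=-1.5, sigma=0.3,
                       mu_shift=0.0, n=100, dt=0.01):
    """Sample a single-pulse speed profile. A positive mu_shift
    delays and flattens the pulse, a crude stand-in for the slower
    articulation of a different user population."""
    return [lognormal_speed(i * dt, D, t0, mu + mu_shift, sigma)
            for i in range(n)]

baseline = synthesize_profile()
slower = synthesize_profile(mu_shift=0.4)  # later, lower peak
# Integrating speed over time recovers the stroke length D (~1.0).
print(sum(baseline) * 0.01)
```

Varying parameters like `mu` and `sigma` per component is what lets this family of models reshape a captured gesture's kinematics toward another population's articulation characteristics while preserving the gesture's overall shape.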