Paper abstract: Young children with visual impairments tend to engage less with their surroundings, limiting the benefits they gain from activities at school. We investigated novel ways of using sound from a bracelet, such as speech or familiar noises, to tell children about nearby people, places, and activities, to encourage them to engage more during play and to help them move independently. We present a series of studies, the first two involving visual impairment educators, that give insight into the challenges faced by visually impaired children at school and how sound might help them. We then present a focus group with visually impaired children that gives further insight into the effective use of sound. Our findings reveal novel ways of combining sounds from wearables with sounds from the environment, motivating audible beacons: devices for audio output and proximity estimation. We present scenarios, findings, and a design space that show the novel ways such devices could be used alongside wearables to help visually impaired children at school.
Paper abstract: Prevention of errors has been an orienting goal within the field of Human-Computer Interaction since its inception, with particular focus on minimizing human errors through appropriate technology design. However, there has been relatively little exploration into how designers can best support users of technologies that will inevitably make errors. We present a mixed-methods study in the domain of navigation technology for visually impaired individuals, examining how users respond to device errors made in realistic scenarios of use. Contrary to the conventional wisdom that usable systems must be error-free, we found that 42% of errors were acceptable to users. Acceptance of errors depends on error type, building feature, and environmental context. Further, even when a technical error is acceptable to the user, the misguided social responses of others nearby can negatively impact user experience. We conclude with design recommendations that embrace errors while also supporting user management of errors in technical systems.
Paper abstract: People with low vision have a visual impairment that affects their ability to perform daily activities. Unlike blind people, low vision people have functional vision and can potentially benefit from smart glasses that provide dynamic, always-available visual information. We sought to determine what low vision people could see on mainstream commercial augmented reality (AR) glasses, despite their visual limitations and the device's constraints. We conducted a study with 20 low vision participants and 18 sighted controls, asking them to identify virtual shapes and text in different sizes, colors, and thicknesses. We also evaluated their ability to see the virtual elements while walking. We found that low vision participants were able to identify basic shapes and read short phrases on the glasses both while sitting and while walking. Identifying virtual elements had a similar effect on low vision and sighted people's walking speed, slowing it slightly. Our study yields preliminary evidence that mainstream AR glasses can be powerful accessibility tools. We derive guidelines for presenting visual output for low vision people and discuss opportunities for accessibility applications on this platform.
Paper abstract: We introduce a new principled method, grounded in the Kinematic Theory of Rapid Human Movements, to automatically generate synthetic stroke gestures across user populations in order to support ability-based design of gesture user interfaces. Our method is especially useful when the target user population is difficult to sample adequately and, consequently, when there is not enough data to train gesture recognizers to deliver high levels of accuracy. To showcase the relevance and usefulness of our method, we collected gestures from people without visual impairments and successfully synthesized gestures with the articulation characteristics of people with visual impairments. We also show that gesture recognition accuracy improves significantly when our synthetic gesture samples are used for training. Our contributions will benefit researchers and practitioners who wish to design gesture user interfaces for people with various abilities, helping them prototype, evaluate, and predict gesture recognition performance without having to expressly recruit and involve people with disabilities in long, time-consuming gesture collection experiments.