Paper abstract: The combined use of sound and image has a rich history, from audiovisual artworks to research exploring the potential of data visualization and sonification. However, we lack standard tools or guidelines for audiovisual (AV) interaction design, particularly for live performance. We propose the AVUI (AudioVisual User Interface), where sound and image are used together in a cohesive way in the interface; and an enabling technology, the ofxAVUI toolkit. AVUI guidelines and ofxAVUI were developed in a three-stage process, together with AV producers: 1) participatory design activities; 2) prototype development; 3) encapsulation of prototype as a plug-in, evaluation, and roll out. Best practices identified include: reconfigurable interfaces and mappings; object-oriented packaging of AV and UI; diverse sound visualization; flexible media manipulation and management. The toolkit and a mobile app developed using it have been released as open-source. Guidelines and toolkit demonstrate the potential of AVUI and offer designers a convenient framework for AV interaction design.
Notes: A tool for interactively integrating sound and visuals. Because no standard design guidelines or tools existed, the authors developed it through expert interviews and hackathons. It has been released as an openFrameworks addon.
Paper abstract: Programmers, especially novices, often have difficulty learning new APIs (Application Programming Interfaces). Existing research has not fully addressed novice programmers' unawareness of all available API methods. To help novices discover new and appropriate uses for API methods, we designed a system called the Example Guru. The Example Guru suggests context-relevant API methods based on each programmer's code. The suggestions provide contrasting examples to demonstrate how to use the API methods. To evaluate the effectiveness of the Example Guru, we ran a study comparing novice programmers' use of the Example Guru and documentation-inspired API information. We found that twice as many participants accessed the Example Guru suggestions compared to documentation and that participants used more than twice as many new API methods after accessing suggestions than documentation.
Paper abstract: For eye tracking to become a ubiquitous part of our everyday interaction with computers, we first need to understand its limitations outside rigorously controlled labs, and develop robust applications that can be used by a broad range of users and in various environments. Toward this end, we collected eye tracking data from 80 people in a calibration-style task, using two different trackers in two lighting conditions. We found that accuracy and precision can vary between users and targets more than six-fold, and report on differences between lighting, trackers, and screen regions. We show how such data can be used to determine appropriate target sizes and to optimize the parameters of commonly used filters. We conclude with design recommendations and examples how our findings and methodology can inform the design of error-aware adaptive applications.
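The accuracy and precision metrics this abstract refers to are standard in eye-tracking evaluation: accuracy is the mean offset of gaze samples from the true target position, and precision is often taken as the RMS of sample-to-sample dispersion. A minimal sketch of one common way to compute them from screen-space gaze samples (the function name and the RMS-of-successive-distances definition of precision are illustrative assumptions, not the authors' code):

```python
import math

def accuracy_and_precision(samples, target):
    """Compute gaze accuracy and precision for one fixation target.

    samples: list of (x, y) gaze points recorded while looking at `target`.
    Accuracy  = mean Euclidean offset from the target (systematic error).
    Precision = RMS of successive sample-to-sample distances (spread/noise).
    """
    offsets = [math.dist(s, target) for s in samples]
    accuracy = sum(offsets) / len(offsets)

    step_sq = [math.dist(a, b) ** 2 for a, b in zip(samples, samples[1:])]
    precision = math.sqrt(sum(step_sq) / len(step_sq))
    return accuracy, precision
```

Metrics like these, computed per user and per screen region, are what allow an application to pick target sizes large enough to absorb the measured error.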
Paper abstract: HCI is increasingly exploring how temperature can be used as an interaction modality. One challenge is that temperature changes are perceived over the course of seconds. This can be attributed to both the slow response time of skin thermoreceptors and the latency of the technology used to heat and cool the skin. For this reason, thermal cues are typically used to communicate single states, such as an emotion, and then there is a pause of tens of seconds to allow the skin to re-adapt to a neutral temperature before sending another signal. In contrast, this paper presents the first experimental demonstration that continuous temperature changes can guide behaviour: significantly improving performance in a 2D maze navigation task, without having to return to a neutral state before a new signal is sent. We discuss how continuous thermal feedback may be used for real world navigational tasks.
Notes: A study of navigation via temperature changes in a device worn on the arm. In a maze-completion task, continuously varying between just two temperatures, 35°C and 29°C (signalling "on the correct route" or not), was enough to let participants complete the maze efficiently.
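The "continuous" aspect can be illustrated with a hypothetical controller that slews the device's set-point between the two temperatures from the note (29°C off-route, 35°C on-route) instead of jumping between states; the function name, parameters, and the 1°C/s rate limit are assumptions for illustration, not the authors' implementation:

```python
def update_output_temp(current, on_route, dt,
                       off_temp=29.0, on_temp=35.0, max_rate=1.0):
    """Slew-rate-limited thermal feedback controller (hypothetical sketch).

    current:  current set-point in degrees C
    on_route: True if the user is on the correct maze path
    dt:       seconds since the last update
    max_rate: maximum temperature change in degrees C per second,
              reflecting the slow response of skin and hardware
    Returns the new set-point, moved toward the target at a bounded rate,
    so the signal changes continuously rather than stepping.
    """
    target = on_temp if on_route else off_temp
    max_step = max_rate * dt
    delta = max(-max_step, min(max_step, target - current))
    return current + delta
```

Called every control tick, this drifts the skin-contact temperature toward the cue for the current state without ever requiring a return to neutral between signals.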
Paper abstract: Novice programmers often have trouble installing, configuring, and managing disparate tools (e.g., version control systems, testing infrastructure, bug trackers) that are required to become productive in a modern collaborative software development environment. To lower the barriers to entry into software development, we created a prototype IDE for novices called CodePilot, which is, to our knowledge, the first attempt to integrate coding, testing, bug reporting, and version control management into a real-time collaborative system. CodePilot enables multiple users to connect to a web-based programming session and work together on several major phases of software development. An eight-subject exploratory user study found that first-time users of CodePilot spontaneously used it to assume roles such as developer/tester and developer/assistant when creating a web application together in pairs. Users felt that CodePilot could aid in scaffolding for novices, situational awareness, and lowering barriers to impromptu collaboration.