Paper abstract: How much do visual aspects influence users' perception of whether they are conversing with a human being or a machine in a mobile-chat environment? This paper describes a study on the influence of typefaces using a blind Turing-test-inspired approach. The study consisted of two user experiments. First, three different typefaces (OCR, Georgia, Helvetica) and three neutral dialogues between a human and a financial adviser were shown to participants. The second experiment applied the same study design, but the OCR font was replaced by the Bradley font. In each of the two independent experiments, participants were shown three dialogue transcriptions and three typefaces in a counterbalanced order. For each dialogue-typeface pair, participants had to classify the adviser's side of the conversation as human or chatbot-like. The results showed that machine-like typefaces biased users towards perceiving the adviser as a machine but, unexpectedly, handwritten-like typefaces did not have the opposite effect. These effects were, however, influenced by users' familiarity with artificial intelligence and other participant characteristics.
Paper abstract: Bots are estimated to account for well over half of all web traffic, yet they remain an understudied topic in HCI. In this paper we present the findings of an analysis of 2284 submissions across three discussion groups dedicated to the request, creation, and discussion of bots on Reddit. We set out to examine the qualities and functionalities of bots and the practical and social challenges surrounding their creation and use. Our findings highlight widespread misunderstandings about the capabilities of bots, misalignments in discourse between the novices who request bots and the more expert members who create them, and the prevalence of requests deemed inappropriate for the Reddit community. In discussing our findings, we suggest future directions for the design and development of tools that support more carefully guided and reflective approaches to bot development for novices, as well as tools for exploring the consequences of contextually inappropriate bot ideas.
Paper abstract: Artificial subtle expressions (ASEs) are machine-like expressions used to convey a system's confidence level to users intuitively. In this paper, we focus on the cognitive load users incur when interpreting ASEs. Specifically, we assume that a shorter response time indicates lower cognitive load, and we hypothesize that users will show shorter response times when interpreting ASEs than when interpreting speech sounds. We verified this hypothesis in a web-based study that assessed participants' cognitive loads by measuring their response times when interpreting ASEs and speech sounds.
Paper abstract: Users are rapidly turning to social media to request and receive customer service; however, a majority of these requests are not addressed in a timely manner, or not at all. To address this problem, we created a new conversational system that automatically generates responses to users' requests on social media. Our system integrates state-of-the-art deep learning techniques and is trained on nearly 1M Twitter conversations between users and agents from over 60 brands. The evaluation reveals that over 40% of the requests are emotional, and that the system is about as good as human agents at showing empathy to help users cope with emotional situations. Results also show that our system outperforms an information retrieval system on both human judgments and an automatic evaluation metric.