Paper abstract: When human partners attend to peripheral computing devices while interacting with conversational robots, the robots' inability to determine the partners' actual engagement level after a gaze shift may cause communication breakdowns. In this paper, we propose a real-time perception model for robots to estimate human partners' engagement dynamics, and investigate different robot behavior strategies to handle ambiguities in humans' status and ensure the flow of the conversation. In particular, we define four novel types of engagement status and propose a real-time engagement inference model that weighs humans' social signals dynamically according to the involvement of the computing devices. We further design two robot behavior strategies (explicit and implicit) to help resolve uncertainties in engagement inference and mitigate the impact of uncoupling, based on an annotated human-human interaction video corpus. We conducted a within-subject experiment to assess the efficacy and usefulness of the proposed engagement inference model and behavior strategies. Results show that robots equipped with our engagement model deliver better service and smoother conversations as an assistant, and that people find the implicit strategy more polite and appropriate.
Paper abstract: "Remind me to get milk later this afternoon." In communication and planning, people often express uncertainty about time using imprecise temporal expressions (ITEs). Unfortunately, modern virtual assistants often lack system support for capturing the intents behind these expressions. This can result in unnatural interactions and undesirable interruptions (e.g., having a work reminder delivered at 12pm when out at lunch, because the user said "this afternoon"). In this paper we explore existing practices, expectations, and preferences surrounding the use of ITEs. Our mixed-methods approach employs surveys, interviews, and an analysis of a large corpus of written communications. We find that people frequently use a diverse set of ITEs in both communication and planning. These uses reflect a variety of motivations, such as conveying uncertainty or task priority. In addition, we find that people have a variety of expectations about time input and management when interacting with virtual assistants. We conclude with design implications for future virtual assistants.
Paper abstract: With domestic technology on the rise, the quantity and complexity of smart-home devices are becoming an important interaction design challenge. We present a novel design for a home control interface in the form of a social robot, commanded via tangible icons and giving feedback through expressive gestures. We experimentally compare the robot to three common smart-home interfaces: a voice-control loudspeaker, a wall-mounted touch-screen, and a mobile application. Our findings suggest that interfaces rated higher on flow are rated lower on usability, and vice versa. Participants' sense of control is highest with familiar interfaces and lowest with voice control. Situation awareness is highest with the robot, and likewise lowest with voice control. These findings raise questions about voice control as a smart-home interface, and suggest that embodied social robots could provide an engaging interface with high situation awareness, though their usability remains a considerable design challenge.