Paper abstract: The concept of power pose originates from a Psychology study from 2010 which suggested that holding an expansive pose can change hormone levels and increase risk-taking behavior. Follow-up experiments suggested that expansive poses incidentally imposed by the design of an environment lead to more dishonest behaviors. While multiple replication attempts of the 2010 study failed, the follow-up experiments on incidental postures have so far not been replicated. As UI design in HCI can incidentally lead to expansive body postures, we attempted two conceptual replications: we first asked 44 participants to tap areas on a wall-sized display and measured their self-reported sense of power; we then asked 80 participants to play a game on a large touch-screen and measured risk-taking. Based on Bayesian analyses we find that incidental power poses had little to no effect on our measures but could cause physical discomfort. We conclude by discussing our findings in the context of theory-driven research in HCI.
In HCI settings, users can incidentally end up holding a power pose. This work investigates what effect such incidental power poses have on HCI.
Paper abstract: We present an investigation into how hand usage is affected by different body postures (Sitting at a table, Lying down, and Standing) when interacting with smartphones. We theorize a list of factors (smartphone support, body support, and muscle usage) and explore their influence on the tilt and rotation of the smartphone. From this we draw a list of hypotheses that we investigate in a quantitative study. We varied the body postures and grips (Symmetric bimanual, Asymmetric bimanual finger, Asymmetric bimanual thumb, and Single-handed), studying the effects through a dual pointing task. Our results showed that the body posture Lying down had the most movement, followed by Sitting at a table and finally Standing. We additionally generate reports of motions performed using different grips. Our work extends previous research conducted with multiple grips in a sitting position by including other body postures; it is anticipated that UI designers will use our results to inform the development of mobile user interfaces.
Paper abstract: Many recent studies have explored user-defined interactions for touch and gesture-based systems through end-user elicitation. While these studies have facilitated the user-end of the human-computer dialogue, the subsequent designs of gesture representations to communicate gestures to the user vary in style and consistency. Our study explores how users interpret, enact, and refine gesture representations, adapting techniques from recent elicitation studies. To inform our study design, we analyzed gesture representations from 30 elicitation papers and developed a taxonomy of design elements. We then conducted a partnered elicitation study with 30 participants producing 657 gesture representations accompanied by think-aloud data. We discuss design patterns and themes that emerged from our analysis, and supplement these findings with an in-depth look at users' mental models when perceiving and enacting gesture representations. Finally, based on the results, we provide recommendations for practitioners in need of "visual language" guidelines to communicate possible user actions.