Paper abstract: The design of Fitts' historical reciprocal tapping experiment gravely confounds the index of difficulty ID with target distance D: summary statistics for the candidate Fitts model and for a competing model may appear identical, and the validity of Fitts' model for some tasks can be legitimately questioned. We show that the contamination of ID by either target distance D or target width W stems from two common practices: pooling and averaging data from different distance-width (D, W) pairs that share the same ID, and choosing the values of D and W from a geometric progression. We analyze a case study of the validation of Fitts' law in eye-gaze movements, where an unfortunate experimental design misled researchers into believing that eye-gaze movements are not ballistic. We then provide simple guidelines to prevent such confounds: practitioners should carefully design the (D, W) experimental conditions, keep data acquired under different conditions fully separate, and put less emphasis on r² scores. We also recommend investigating stochastic sampling of D and W.
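As a hedged illustration of the confound described above (not taken from the paper itself), the sketch below computes the Shannon formulation of the index of difficulty, ID = log2(D/W + 1), over a geometric progression of D and W values. Many distinct (D, W) pairs collide on the same ID, so pooling data per ID averages away the separate effects of distance and width. The value choices and function name are illustrative assumptions.

```python
import math

def index_of_difficulty(d: float, w: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(d / w + 1)

# Geometric progressions for D and W, a common practice criticized above.
distances = [64, 128, 256, 512]
widths = [8, 16, 32, 64]

# Group all (D, W) conditions by their (rounded) ID value.
by_id: dict[float, list[tuple[int, int]]] = {}
for d in distances:
    for w in widths:
        by_id.setdefault(round(index_of_difficulty(d, w), 3), []).append((d, w))

for id_value, pairs in sorted(by_id.items()):
    print(f"ID = {id_value}: {pairs}")
# e.g. (64, 8), (128, 16), (256, 32), and (512, 64) all share ID = log2(9).
```

Because the D/W ratio repeats across the progression, four very different movement amplitudes share a single ID; distinguishing the (D, W) conditions, as the guidelines recommend, avoids this pooling.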
Paper abstract: This paper investigates a common task requiring temporal precision: selecting a rapidly moving target on a display by invoking an input event while the target is within some selection window. Previous work has explored the relationship between accuracy and precision in this task, but the role of the visual cues available to users has remained unexplained. To extend models of timing performance to multimodal settings, which are common in gaming and music, our model builds on the principle of probabilistic cue integration. Maximum likelihood estimation (MLE) is used to model how different types of cues are integrated into a reliable estimate of the temporal task. The model accounts for temporal structure (repetition, rhythm) and for the perceivable movement of the target on the display. It accurately predicts error rates in a range of realistic tasks. Applications include optimizing difficulty in game-level design.
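A minimal sketch of the cue-integration principle the abstract invokes, assuming the standard MLE formulation for independent Gaussian cues (inverse-variance weighting); the concrete cue values below are hypothetical and not from the paper:

```python
def mle_integrate(estimates: list[float], variances: list[float]) -> tuple[float, float]:
    """Fuse independent Gaussian cue estimates by inverse-variance
    (maximum-likelihood) weighting; returns (fused mean, fused variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Two cues about when the target will be inside the selection window:
# a noisy visual-motion cue and a more reliable rhythmic (temporal) cue.
fused_mean, fused_var = mle_integrate(
    estimates=[1.00, 0.90],   # predicted time to selection window, seconds
    variances=[0.04, 0.01],   # cue noise, s^2 (illustrative values)
)
print(fused_mean, fused_var)  # 0.92 0.008
```

The fused estimate is pulled toward the more reliable cue, and its variance is lower than either cue alone, which is what lets a multimodal model predict lower error rates when extra cues (e.g., rhythm) are available.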
Paper abstract: Physical controls are widely used by professionals such as sound engineers and aircraft pilots; knobs and sliders in particular are the most prevalent controls in such interfaces. They have advantages over touchscreen GUIs, especially when users require quick, eyes-free control. However, the interfaces that host them (e.g., mixing consoles) are often bulky and crowded. To improve on this, we present the results of a formative study with professionals who use physical controllers. Based on their feedback, we propose design requirements for future parameter-control interfaces. We then introduce the design of KnobSlider, which combines the advantages of a knob and a slider in a single shape-changing device. A qualitative study with professionals shows how KnobSlider supports the design requirements and how it inspired new interactions and applications.
Paper abstract: To press a button, a finger must push down and pull up with the right force and timing. How the motor system succeeds at button-pressing, despite neural noise and without direct access to the button's mechanism, is poorly understood. This paper investigates a unifying account based on neuromechanics. Mechanics is used to model the muscles controlling the finger that contacts the button. Neurocognitive principles are used to model how the motor system learns appropriate muscle activations over repeated strokes while relying on degraded sensory feedback. Neuromechanical simulations yield a rich set of predictions for kinematics, dynamics, and user performance, and may aid in understanding and improving input devices. We present a computational implementation and evaluate its predictions for common button types.