Session: "Camera-based Tracking"

EagleSense: Tracking People and Devices in Interactive Spaces using Real-Time Top-View Depth-Sensing

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025562

Abstract: Real-time tracking of people's location, orientation and activities is increasingly important for designing novel ubiquitous computing applications. Top-view camera-based tracking avoids occlusion when tracking people while collaborating, but often requires complex tracking systems and advanced computer vision algorithms. To facilitate the prototyping of ubiquitous computing applications for interactive spaces, we developed EagleSense, a real-time human posture and activity recognition system with a single top-view depth-sensing camera. We contribute our novel algorithm and processing pipeline, including details for calculating silhouette-extremities features and applying gradient tree boosting classifiers for activity recognition optimized for top-view depth sensing. EagleSense provides easy access to the real-time tracking data and includes tools for facilitating the integration into custom applications. We report the results of a technical evaluation with 12 participants and demonstrate the capabilities of EagleSense with application case studies.

Summary:

Using a top-view camera for real-time tracking of people's location, orientation, and activities typically requires complex tracking systems and algorithms. The authors therefore developed EagleSense, which makes real-time tracking data easy to obtain.
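The abstract mentions applying gradient tree boosting classifiers to silhouette-extremities features for activity recognition. A minimal sketch of that classification step, using scikit-learn's GradientBoostingClassifier on synthetic stand-in features (the feature layout, labels, and data are illustrative assumptions, not the paper's actual pipeline):

```python
# Sketch: activity recognition via gradient tree boosting, as in
# EagleSense's pipeline. The features here are synthetic stand-ins for
# the paper's silhouette-extremities features (e.g. distances/angles of
# silhouette extremities relative to the body centroid in a top-view
# depth image).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_features = 600, 8
X = rng.normal(size=(n_samples, n_features))
# Hypothetical activity labels (e.g. 0 = standing, 1 = phone, 2 = tablet),
# derived from the synthetic features so the classifier has something to learn.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

In a real system the feature vectors would come from each depth frame, and the trained model would be queried per frame to expose activities through the tracking API.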

Interactive Visual Calibration of Volumetric Head-Tracked 3D Displays

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025685

Abstract: Head-tracked 3D displays can provide a compelling 3D effect, but even small inaccuracies in the calibration of the participant's viewpoint to the display can disrupt the 3D illusion. We propose a novel interactive procedure for a participant to easily and accurately calibrate a head-tracked display by visually aligning patterns across a multi-screen display. Head-tracker measurements are then calibrated to these known viewpoints. We conducted a user study to evaluate the effectiveness of different visual patterns and different display shapes. We found that the easiest to align shape was the spherical display and the best calibration pattern was the combination of circles and lines. We performed a quantitative camera-based calibration of a cubic display and found visual calibration outperformed manual tuning and generated viewpoint calibrations accurate to within a degree. Our work removes the usual, burdensome step of manual calibration when using head-tracked displays and paves the way for wider adoption of this inexpensive and effective 3D display technology.

Summary:

In head-tracked 3D displays, even small errors in calibrating the user's viewpoint to the display can disrupt the 3D illusion. The authors therefore propose an interactive procedure for calibrating easily and accurately through visual alignment of patterns.
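The abstract says head-tracker measurements are calibrated to viewpoints established by visual alignment. One plausible form such a calibration could take is a least-squares affine correction from raw tracker readings to the known viewpoint positions; the sketch below illustrates that idea (the linear model and all numbers are assumptions, not the paper's procedure):

```python
# Sketch: fitting an affine correction from raw head-tracker readings to
# known viewpoint positions (established via visual alignment). All
# values are made-up illustrations.
import numpy as np

# Known viewpoint positions from the visual-alignment step (metres).
known = np.array([[0.0, 0.0, 0.50],
                  [0.2, 0.0, 0.50],
                  [0.0, 0.2, 0.50],
                  [0.2, 0.2, 0.60],
                  [0.1, 0.1, 0.55]])

# Simulated raw tracker readings: a small rotation/scale plus an offset.
A_true = np.array([[0.98, 0.02, 0.00],
                   [-0.02, 0.98, 0.00],
                   [0.00, 0.00, 1.01]])
b_true = np.array([0.01, -0.02, 0.005])
raw = known @ A_true.T + b_true

# Fit an affine map raw -> known: solve [raw | 1] @ M = known.
H = np.hstack([raw, np.ones((len(raw), 1))])
M, *_ = np.linalg.lstsq(H, known, rcond=None)

corrected = H @ M
residual = np.abs(corrected - known).max()   # worst-case correction error
```

At runtime, each new tracker reading would be passed through the fitted map before being used as the rendering viewpoint.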

Changing the Appearance of Real-World Objects By Modifying Their Surroundings

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025795

Abstract: We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g. highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration on how the appearance of an object can be changed by solely altering the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach to gain in importance with increasing ubiquity of display surfaces.

Summary:

The authors present an approach that changes the perceived appearance of physical objects by controlling the surrounding space. By altering the space around an object rather than the object itself, properties such as its perceived size, color, and shape can be changed.
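The proof-of-concept renders perspective-corrected 3D graphics on a tabletop display around tracked objects. The core geometry is that a virtual point appears correct when the display draws it where the line from the tracked eye through the virtual point meets the table surface; a minimal sketch of that projection, assuming a flat table at z = 0 (all names and coordinates are illustrative):

```python
# Sketch: perspective correction for a tabletop display. A virtual point
# above the table is drawn where the eye-to-point ray intersects the
# table plane z = 0, so it appears at the right place from the tracked
# eye position. Geometry and values are illustrative assumptions.
import numpy as np

def project_to_table(eye, point):
    """Intersect the ray eye -> point with the tabletop plane z = 0."""
    eye, point = np.asarray(eye, float), np.asarray(point, float)
    direction = point - eye
    if direction[2] == 0:
        raise ValueError("ray is parallel to the table")
    t = -eye[2] / direction[2]      # ray parameter where z reaches 0
    return eye + t * direction      # (x, y, 0) position on the table

eye = np.array([0.3, -0.4, 0.6])      # tracked eye above the table (m)
virtual = np.array([0.1, 0.0, 0.05])  # virtual point 5 cm above the surface

on_table = project_to_table(eye, virtual)
```

Re-running this projection whenever the head tracker updates keeps the augmentation glued to the physical object as the user moves.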

HeadPhones: Ad Hoc Mobile Multi-Display Environments through Head Tracking

Paper URL: http://dl.acm.org/citation.cfm?doid=3025453.3025533

Abstract: We present HeadPhones (Headtracking + smartPhones), a novel approach for the spatial registration of multiple mobile devices into an ad hoc multi-display environment. We propose to employ the user's head as external reference frame for the registration of multiple mobile devices into a common coordinate system. Our approach allows for dynamic repositioning of devices during runtime without the need for external infrastructure such as separate cameras or fiducials. Specifically, our only requirements are local network connections and mobile devices with built-in front facing cameras. This way, HeadPhones enables spatially-aware multi-display applications in mobile contexts. A user study and accuracy evaluation indicate the feasibility of our approach.

Summary:

The authors propose HeadPhones, an approach for spatially registering multiple mobile devices into an ad hoc multi-display environment. The method uses the user's head as an external reference frame to register the devices into a common coordinate system.
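The abstract's idea of using the head as the shared reference frame can be sketched with rigid transforms: each device's front camera estimates the head pose in its own coordinates, and inverting that transform places the device in a common head-centred frame, from which device-to-device poses follow by composition (the 4x4 pose values below are made-up examples, not the paper's data):

```python
# Sketch: HeadPhones-style registration via a shared head frame.
# head_in_device maps points from the head frame into a device's frame;
# its inverse places that device in the common head-centred frame.
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid transform [R | t]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def rigid(angle_z, t):
    """Rigid transform: rotation about z by angle_z, then translation t."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# Head pose as estimated by each device's front-facing camera (made up).
head_in_a = rigid(0.3, [0.00, 0.1, 0.40])
head_in_b = rigid(-0.2, [0.05, 0.1, 0.35])

# Device poses in the shared head frame.
a_in_head = invert_rigid(head_in_a)
b_in_head = invert_rigid(head_in_b)

# Relative pose between the two devices, composed via the head frame.
b_in_a = head_in_a @ invert_rigid(head_in_b)
```

Because the head frame is carried by the user, devices can be repositioned at runtime and re-registered from fresh camera estimates, with no external cameras or fiducials.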