Session: VR/AR/Telepresence 2

Remixed Reality: Manipulating Space and Time in Augmented Reality

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173703

Paper abstract: We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.

Summary:

This paper presents Remixed Reality, a novel form of mixed reality. The environment captured by multiple external depth cameras is reconstructed live in 3D, and users can manipulate this virtualized physical space as if it were virtual-reality geometry, for example by copying, erasing, or transforming objects.
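The technical core is a voxel grid that stores per-voxel edit state (such as visibility and a transform) and is applied to the live reconstruction every frame. Below is a minimal sketch of that idea, assuming a point-cloud reconstruction and numpy; the class, method, and parameter names are illustrative and not the authors' implementation.

import numpy as np

VOXEL_SIZE = 0.05  # assumed grid resolution in metres

class EditGrid:
    def __init__(self):
        # voxel index -> (visible flag, 4x4 rigid transform)
        self.state = {}

    def erase(self, idx):
        self.state[idx] = (False, np.eye(4))

    def move(self, idx, transform):
        self.state[idx] = (True, transform)

    def apply(self, points):
        """Apply stored edits to a live point cloud (N x 3), once per frame."""
        out = []
        for p in points:
            idx = tuple(np.floor(p / VOXEL_SIZE).astype(int))
            visible, T = self.state.get(idx, (True, np.eye(4)))
            if not visible:
                continue                       # spatial change: erased object
            q = T @ np.append(p, 1.0)          # spatial change: moved/copied geometry
            out.append(q[:3])
        return np.asarray(out)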

Extracting Regular FOV Shots from 360 Event Footage

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173890

Paper abstract: Video summaries are a popular way to share important events, but creating good summaries is hard. It requires expertise in both capturing and editing footage. While hiring a professional videographer is possible, this is too costly for most casual events. An alternative is to place 360 video cameras around an event space to capture footage passively and then extract regular field-of-view (RFOV) shots for the summary. This paper focuses on the problem of extracting such RFOV shots. Since we cannot actively control the cameras or the scene, it is hard to create "ideal" shots that adhere strictly to traditional cinematography rules. To better understand the tradeoffs, we study human preferences for static and moving camera RFOV shots generated from 360 footage. From the findings, we derive design guidelines. As a secondary contribution, we use these guidelines to develop automatic algorithms that we demonstrate in a prototype user interface for extracting RFOV shots from 360 videos.

Summary:

This paper proposes placing 360-degree cameras around an event space when creating event video summaries; the cameras capture footage passively, so no camera operator is needed. The authors further demonstrate the approach by developing algorithms, based on a study of human shot preferences, that extract regular field-of-view (RFOV) shots from the 360-degree footage.
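The basic building block behind extracting RFOV shots is rendering a rectilinear (pinhole) view from an equirectangular 360 frame for a chosen virtual-camera yaw, pitch, and horizontal FOV. Here is a rough sketch of that projection step, assuming an equirectangular input image, numpy, and nearest-neighbour sampling; the paper's actual contribution is deciding which views to extract, which this snippet does not cover.

import numpy as np

def rfov_from_equirect(img, yaw, pitch, h_fov, out_w=640, out_h=360):
    """Sample a regular-FOV view from an equirectangular frame (angles in radians)."""
    H, W = img.shape[:2]
    f = (out_w / 2) / np.tan(h_fov / 2)              # pinhole focal length in pixels
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    dirs = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # rotate the per-pixel rays by pitch (around x), then yaw (around y)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])           # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))       # latitude in [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = ((lat / np.pi + 0.5) * H).astype(int).clip(0, H - 1)
    return img[v, u]                                 # nearest-neighbour lookup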

Quadcopter-Projected In-Situ Navigation Cues for Improved Location Awareness

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3174007

Paper abstract: Every day people rely on navigation systems when exploring unknown urban areas. Many navigation systems use multimodal feedback like visual, auditory or tactile cues. Although other systems exist, users mostly rely on a visual navigation using their smartphone. However, a problem with visual navigation systems is that the users have to shift their attention to the navigation system and then map the instructions to the real world. We suggest using in-situ navigation instructions that are presented directly in the environment by augmenting the reality using a projector-quadcopter. Through a user study with 16 participants, we show that using in-situ instructions for navigation leads to a significantly higher ability to observe real-world points of interest. Further, the participants enjoyed following the projected navigation cues.

Summary:

This paper proposes a navigation method that uses a projector-equipped quadcopter to project route guidance, which would normally appear on a map, directly onto the real environment. In a user study, participants were significantly better able to observe and remember real-world points of interest than with an existing smartphone-based navigation system.
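The abstract does not spell out how cues are placed, but as an illustration of what an in-situ cue involves, one simple (assumed, not the paper's) strategy is to project the arrow on the ground a fixed distance ahead of the user along the route and hover the quadcopter above that point. All names and parameters below are assumptions.

import numpy as np

CUE_AHEAD_M = 3.0      # project the arrow 3 m in front of the user (assumed)
HOVER_HEIGHT_M = 2.5   # assumed projection height

def cue_and_hover(user_pos, next_waypoint):
    """Return (ground point for the projected arrow, drone hover position)."""
    user_pos = np.asarray(user_pos, dtype=float)        # (x, y) on the ground
    next_waypoint = np.asarray(next_waypoint, dtype=float)
    direction = next_waypoint - user_pos
    direction /= np.linalg.norm(direction)
    cue = user_pos + CUE_AHEAD_M * direction            # where the arrow appears
    hover = np.array([cue[0], cue[1], HOVER_HEIGHT_M])  # drone directly above it
    return cue, hover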

BioFidget: Biofeedback for Respiration Training Using an Augmented Fidget Spinner

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3174187

Paper abstract: This paper presents BioFidget, a biofeedback system that integrates physiological sensing and display into a smart fidget spinner for respiration training. We present a simple yet novel hardware design that transforms a fidget spinner into 1) a nonintrusive heart rate variability (HRV) sensor, 2) an electromechanical respiration sensor, and 3) an information display. The combination of these features enables users to engage in respiration training through designed tangible and embodied interactions, without requiring them to wear additional physiological sensors. The results of this empirical user study prove that the respiration training method reduces stress, and the proposed system meets the requirements of sensing validity and engagement with 32 participants in a practical setting.

Summary:

BioFidget is a respiration training system built around an augmented fidget spinner; it lets users engage in breathing training without wearing additional physiological sensors. A user study with 32 participants showed that this respiration training method reduces stress.
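For context on the sensing side, heart rate variability is commonly computed from inter-beat intervals. The following is a minimal sketch of one standard HRV metric (RMSSD), assuming beat timestamps from the spinner's pulse sensor; it is not the paper's exact pipeline, and the example data are hypothetical.

import numpy as np

def rmssd_ms(beat_times_s):
    """Root mean square of successive differences of inter-beat intervals, in ms."""
    ibi = np.diff(np.asarray(beat_times_s, dtype=float)) * 1000.0  # IBIs in ms
    return float(np.sqrt(np.mean(np.diff(ibi) ** 2)))

# Example: slower, deeper breathing typically raises beat-to-beat variability.
beats = [0.00, 0.82, 1.66, 2.45, 3.31, 4.10]  # hypothetical beat timestamps (s)
print(rmssd_ms(beats))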