Incorporating Geo-Tagged Mobile Videos Into Context-Aware Augmented Reality Applications

In recent years, augmented reality (AR) has been gaining much attention from the research community and industry. With AR, users look at real-world space through an AR browser in which digital content is superimposed on the physical world as objects. AR is even regarded as the next generation of the web browser. However, popularizing AR faces challenges: there is not enough AR content, because creating it is not only time-consuming but also requires expertise. We therefore propose to leverage the abundance of user-generated mobile content by incorporating geo-tagged mobile videos into AR applications. With our framework, any user can generate AR content.

To enhance the user experience, we focus on a context-aware AR solution that uses rich sensor data, including location (from GPS) and direction (from the compass). We propose filtering algorithms that effectively select a set of the most interesting video segments from a large video dataset, so that the selected scenes can be automatically retrieved and displayed in AR applications. For the filtering, we define an interesting video segment as a sequence of video frames that follows a particular camera-movement pattern borrowed from film studies: tracking, panning, zooming, or arcing.
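To make these pattern definitions concrete, here is a minimal Python sketch, not the paper's implementation, of how two of the patterns can be told apart from per-frame sensor metadata. It assumes each frame carries a (timestamp, latitude, longitude, compass heading) sample; the FovSample type, the classify_segment function, and the thresholds are our own illustrative choices. Detecting zooming and arcing would additionally require the camera's viewable angle and a fixed point of interest.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class FovSample:
    """One sensor sample attached to a video frame."""
    t: float        # timestamp in seconds
    lat: float      # GPS latitude in degrees
    lng: float      # GPS longitude in degrees
    heading: float  # compass direction in degrees, [0, 360)

def angle_delta(a: float, b: float) -> float:
    """Smallest signed difference between two compass headings."""
    return (b - a + 180.0) % 360.0 - 180.0

def meters_between(p: FovSample, q: FovSample) -> float:
    """Approximate ground distance via an equirectangular projection."""
    earth_radius_m = 6_371_000.0
    mid_lat = math.radians((p.lat + q.lat) / 2.0)
    dx = math.radians(q.lng - p.lng) * math.cos(mid_lat) * earth_radius_m
    dy = math.radians(q.lat - p.lat) * earth_radius_m
    return math.hypot(dx, dy)

def classify_segment(samples: List[FovSample],
                     move_thresh_m: float = 2.0,
                     sweep_thresh_deg: float = 15.0) -> str:
    """Label a segment as 'panning', 'tracking', or 'static'.

    Panning: the camera stays put while its heading sweeps steadily.
    Tracking: the camera moves while its heading stays roughly fixed.
    The thresholds are illustrative, not values from the paper.
    """
    moved = meters_between(samples[0], samples[-1])
    swept = abs(angle_delta(samples[0].heading, samples[-1].heading))
    if moved < move_thresh_m and swept >= sweep_thresh_deg:
        return "panning"
    if moved >= move_thresh_m and swept < sweep_thresh_deg:
        return "tracking"
    return "static"

# Example: a camera rotating in place from heading 10 to 70 degrees.
segment = [FovSample(t, 34.0224, -118.2851, 10.0 + 20.0 * t)
           for t in range(4)]
print(classify_segment(segment))  # -> "panning"
```

In the same spirit, a real pipeline would slide a window over each video's sensor stream and keep only the windows whose labels match one of the four film-studies patterns, discarding the rest as uninteresting.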

We developed a demo of the integration of AR and geo-tagged user-generated mobile videos, and we conducted experiments to find interesting video segments in a large collection of videos, most of which contain uninteresting content.

Source code

https://github.com/infolab-usc/mediaq-analytics

References

Hien To, Hyerim Park, Seon Ho Kim, and Cyrus Shahabi. "Incorporating Geo-Tagged Mobile Videos Into Context-Aware Augmented Reality Applications." In Proceedings of the Second IEEE International Conference on Multimedia Big Data (IEEE BigMM 2016), Taipei, Taiwan, April 20-22, 2016.
