Interactive Indoor Localization Based on Image Retrieval and Question Response

Xinyun Li, Ryosuke Furuta, Go Irie, Yota Yamamoto, Yukinobu Taniguchi

Research output: Conference article, peer-reviewed

Abstract

Due to the increasing complexity of indoor facilities such as shopping malls and train stations, there is a need for technology that can locate a smartphone user in environments where GPS signals cannot be received. Although many methods have been proposed for location estimation based on image retrieval, their accuracy is unreliable indoors because many locations are architecturally similar and few features are distinctive enough for unambiguous localization. Some methods improve accuracy by increasing the number of query images, but this increases the user's image-capture burden. In this paper, we propose a method for accurately estimating the user's current indoor location through question-response interaction, without imposing a greater image-capture load. Specifically, the proposed method (i) generates questions using object detection and scene text detection, (ii) orders the questions by minimizing conditional entropy, and (iii) filters the candidate locations based on the user's responses to determine the current location.
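The entropy-driven question selection in steps (ii)–(iii) can be illustrated with a minimal sketch. This is not the paper's implementation; the toy locations, yes/no questions, and uniform prior over candidates are all illustrative assumptions. The idea: pick the question whose answer, in expectation, leaves the least uncertainty about the location, then discard candidates inconsistent with the user's response.

```python
import math
from collections import Counter

# Hypothetical toy data: each candidate location is described by the answers
# it would yield to a set of yes/no questions (e.g. "Is there a vending
# machine nearby?"). Location names and questions are illustrative only.
candidates = {
    "gate_A": {"vending_machine": "yes", "exit_sign": "no"},
    "gate_B": {"vending_machine": "yes", "exit_sign": "yes"},
    "hall_C": {"vending_machine": "no",  "exit_sign": "yes"},
}

def conditional_entropy(cands, question):
    """H(location | answer) in bits, assuming a uniform prior over candidates."""
    n = len(cands)
    answer_counts = Counter(ans[question] for ans in cands.values())
    h = 0.0
    for count in answer_counts.values():
        p_answer = count / n
        # Given this answer, the posterior is uniform over `count` locations,
        # so the residual entropy for that branch is log2(count).
        h += p_answer * math.log2(count)
    return h

def best_question(cands, questions):
    """Ask next the question that minimizes the conditional entropy."""
    return min(questions, key=lambda q: conditional_entropy(cands, q))

def filter_candidates(cands, question, answer):
    """Keep only the locations consistent with the user's response."""
    return {loc: ans for loc, ans in cands.items() if ans[question] == answer}

q = best_question(candidates, ["vending_machine", "exit_sign"])
remaining = filter_candidates(candidates, q, "yes")
```

In a full system this loop would repeat, re-selecting the most informative remaining question over the filtered candidate set until a single location (or a small enough set) remains.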

Original language: English
Pages (from-to): 796-803
Number of pages: 8
Journal: Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Volume: 4
DOI
Publication status: Published - 2023
Event: 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2023 - Lisbon, Portugal
Duration: 19 Feb 2023 – 21 Feb 2023

