TY - JOUR
T1 - Interactive Indoor Localization Based on Image Retrieval and Question Response
AU - Li, Xinyun
AU - Furuta, Ryosuke
AU - Irie, Go
AU - Yamamoto, Yota
AU - Taniguchi, Yukinobu
N1 - Publisher Copyright:
© 2023 by SCITEPRESS - Science and Technology Publications, Lda.
PY - 2023
Y1 - 2023
N2 - Due to the increasing complexity of indoor facilities such as shopping malls and train stations, there is a need for new technology that can determine the current location of a user from their smartphone or other device, since GPS signals cannot be received inside such facilities. Although many methods have been proposed for location estimation based on image retrieval, their accuracy is unreliable because many indoor spaces are architecturally similar and offer few features distinctive enough for unambiguous localization. Some methods improve location estimation accuracy by increasing the number of query images, but this adds to the user’s image-capture burden. In this paper, we propose a method for accurately estimating the current indoor location through question-response interaction with the user, without imposing a greater image-capture load. Specifically, the proposed method (i) generates questions using object detection and scene text detection, (ii) orders the questions by minimizing conditional entropy, and (iii) filters the candidate locations based on the user’s responses to identify the current location.
AB - Due to the increasing complexity of indoor facilities such as shopping malls and train stations, there is a need for new technology that can determine the current location of a user from their smartphone or other device, since GPS signals cannot be received inside such facilities. Although many methods have been proposed for location estimation based on image retrieval, their accuracy is unreliable because many indoor spaces are architecturally similar and offer few features distinctive enough for unambiguous localization. Some methods improve location estimation accuracy by increasing the number of query images, but this adds to the user’s image-capture burden. In this paper, we propose a method for accurately estimating the current indoor location through question-response interaction with the user, without imposing a greater image-capture load. Specifically, the proposed method (i) generates questions using object detection and scene text detection, (ii) orders the questions by minimizing conditional entropy, and (iii) filters the candidate locations based on the user’s responses to identify the current location.
KW - Image Recognition
KW - Indoor Localization
KW - Scene Text Information
KW - Similarity Image Search
UR - http://www.scopus.com/inward/record.url?scp=85183598855&partnerID=8YFLogxK
U2 - 10.5220/0011624300003417
DO - 10.5220/0011624300003417
M3 - Conference article
AN - SCOPUS:85183598855
SN - 2184-5921
VL - 4
SP - 796
EP - 803
JO - Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
JF - Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
T2 - 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2023
Y2 - 19 February 2023 through 21 February 2023
ER -