Multimodal learning of geometry-preserving binary codes for semantic image retrieval

Go Irie, Hiroyuki Arai, Yukinobu Taniguchi

Research output: Contribution to journal › Article


This paper presents an unsupervised approach to binary feature coding for efficient semantic image retrieval. Although the majority of existing methods aim to preserve the neighborhood structure of the feature space, semantically similar images do not always lie within such neighborhoods; rather, they are distributed along non-linear low-dimensional manifolds. Moreover, images are rarely found alone on the Internet; they are often surrounded by text data such as tags, attributes, and captions, which tend to carry rich semantic information about the images. On the basis of these observations, the approach presented in this paper aims to learn binary codes for semantic image retrieval from multimodal information sources while preserving the essential low-dimensional structures of the data distributions in the Hamming space. Specifically, after finding the low-dimensional structures of the data with an unsupervised sparse coding technique, our approach learns a set of linear projections for binary coding by solving an optimization problem designed to jointly preserve, as much as possible, both the extracted data structures and the multimodal correlations between images and texts in the Hamming space. We show that the joint optimization problem can readily be transformed into a generalized eigenproblem that can be solved efficiently. Extensive experiments demonstrate that our method yields significant performance gains over several existing methods.
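The core computational pattern described in the abstract, learning linear projections by solving a generalized eigenproblem that trades off geometry preservation against cross-modal correlation, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the sparse-coding-based structure extraction is replaced here by a simple k-NN graph Laplacian, and all data, dimensions, and weightings are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy multimodal data (hypothetical dimensions): paired image and text
# features over the same n samples.
n, d_img, d_txt, n_bits = 200, 32, 16, 8
X = rng.standard_normal((n, d_img))  # image features
Y = rng.standard_normal((n, d_txt))  # text features

# Center both modalities.
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

# Stand-in for the geometry-preserving term: a k-NN graph Laplacian over
# the image features (the paper uses sparse coding to find the
# low-dimensional structure; this is a common simpler approximation).
k = 10
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
idx = np.argsort(d2, axis=1)[:, 1:k + 1]  # k nearest neighbors, excluding self
W = np.zeros((n, n))
for i in range(n):
    W[i, idx[i]] = 1.0
W = np.maximum(W, W.T)             # symmetrize the adjacency
L = np.diag(W.sum(axis=1)) - W     # graph Laplacian

# Objective (sketch): find projections P maximizing the cross-modal
# correlation term trace(P^T X^T Y Y^T X P) relative to the geometric
# distortion term trace(P^T X^T L X P). This ratio-trace form leads to
# the generalized eigenproblem  A p = lambda B p.
C = X.T @ Y                              # image-text cross-covariance
A = C @ C.T                              # correlation term
B = X.T @ L @ X + 1e-3 * np.eye(d_img)   # geometry term (regularized to be PD)

# Eigenvectors with the largest generalized eigenvalues give the
# projection directions (eigh returns eigenvalues in ascending order).
vals, vecs = eigh(A, B)
P = vecs[:, -n_bits:]

# Binary codes: sign of the projected, centered features.
codes = (X @ P > 0).astype(np.uint8)
print(codes.shape)  # (200, 8)
```

Retrieval then reduces to ranking database codes by Hamming distance to the query code, which can be computed with fast bitwise operations.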

Original language: English
Pages (from-to): 600-609
Number of pages: 10
Journal: IEICE Transactions on Information and Systems
Issue number: 4
Publication status: Published - Apr 2017


  • Binary coding
  • Image retrieval
  • Multimodal learning

