Visual concept modeling scheme using early learning of region-based semantics for web images

Yongqing Sun, Satoshi Shimada, Masashi Morimoto, Yukinobu Taniguchi

Research output: Contribution to journal › Article

Abstract

In this paper, we present a novel approach to modeling visual concepts effectively and automatically from web images. The selection of training data (positive and negative samples) strongly affects the quality of the learning algorithm and is an especially crucial step when using noisy web images. In the proposed scheme, images are first represented as regions, from which training samples are selected. Second, the region features that effectively represent a semantic concept are determined, and on that basis the representative regions corresponding to the concept are selected as reliable positive samples. Third, high-quality negative samples are determined using the selected positive samples. Finally, the visual model associated with the semantic concept is built through an unsupervised learning process. The proposed scheme is completely automatic and performs well on generic images because of its robustness in learning from diverse web images. Experimental results demonstrate its effectiveness.
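
As a rough illustration of the steps described in the abstract, the sketch below selects reliable positive regions via a simple compactness-based clustering criterion, picks regions far from the positives as negatives, and fits a Gaussian mixture as the concept model. The feature extraction, the specific clustering and mixture choices, and all function names are assumptions made for illustration only; they are not the method described in the paper.

```python
# Minimal sketch of a region-based concept modeling pipeline, assuming
# region features (e.g. color/texture vectors) are already extracted.
# The clustering criterion, negative selection rule, and model choice
# are illustrative assumptions, not the authors' exact method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def select_positive_regions(region_feats, n_clusters=5):
    """Cluster region features and treat members of the most compact
    cluster as reliable positive samples."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(region_feats)
    best, best_score = None, np.inf
    for c in range(n_clusters):
        members = region_feats[km.labels_ == c]
        if len(members) < 2:
            continue
        # Compactness: mean distance of cluster members to their centroid.
        score = np.mean(np.linalg.norm(members - km.cluster_centers_[c], axis=1))
        if score < best_score:
            best, best_score = c, score
    return region_feats[km.labels_ == best]

def select_negative_regions(region_feats, positives, k):
    """Pick the k regions farthest from the positive centroid as negatives."""
    centroid = positives.mean(axis=0)
    dists = np.linalg.norm(region_feats - centroid, axis=1)
    return region_feats[np.argsort(dists)[-k:]]

def build_concept_model(positives, n_components=3):
    """Model the concept's visual distribution with a Gaussian mixture
    (an unsupervised stand-in for the concept model)."""
    return GaussianMixture(n_components=n_components, random_state=0).fit(positives)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 16))      # placeholder region features
    pos = select_positive_regions(feats)
    neg = select_negative_regions(feats, pos, k=30)
    model = build_concept_model(pos)
    print(model.score_samples(feats[:5]))   # concept likelihood per region
```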

Original language: English
Pages (from-to): 423-434
Number of pages: 12
Journal: Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers
Volume: 64
Issue number: 3
Publication status: Published - 1 Mar 2010

Keywords

  • Image learning
  • Visual concept model
  • Web image mining