Image annotation by searching semantically related regions
Abstract
Motivated by the abundance of partially annotated images on the web, we propose a novel framework for image annotation. By exploiting both the visual and textual knowledge in the publicly available image database ImageNet, the proposed framework first learns a set of weakly labeled visual concept classifiers, and then uses the outputs of these classifiers on image regions as descriptors to conduct a region-based search in a large-scale image database for a query image. Mining and clustering of the search results are then applied to generate annotations for the query image. Compared with image-level representations, the proposed region-based semantic representation better captures the multiple objects and semantics within an image. The framework combines the advantages of traditional classification-based approaches and large-scale data-driven approaches. Experiments on 24 million web images and a challenging image database demonstrate the effectiveness and efficiency of the proposed approach.
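The pipeline described above can be sketched in a few lines. The sketch below is illustrative only: the function names, the linear classifiers with softmax normalization, the Euclidean nearest-neighbor search, and the tag-voting step are all simplifying assumptions standing in for the paper's actual classifier learning, region search, and search-result mining/clustering stages.

```python
import numpy as np
from collections import Counter

def region_semantic_descriptor(region_features, classifier_weights):
    """Score each image region against every weakly labeled concept
    classifier; each row of the result serves as that region's
    semantic descriptor.  (Linear classifiers assumed for brevity.)"""
    scores = region_features @ classifier_weights.T   # (n_regions, n_concepts)
    # Softmax-normalize so descriptors are comparable across regions.
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def annotate(query_descriptors, db_descriptors, db_tags, k=3):
    """For each query region, retrieve the k nearest database regions
    in descriptor space and let their tags vote; the most frequent
    tags become the annotations (a stand-in for the paper's
    search-result mining and clustering)."""
    votes = Counter()
    for q in query_descriptors:
        dists = np.linalg.norm(db_descriptors - q, axis=1)
        for idx in np.argsort(dists)[:k]:
            votes[db_tags[idx]] += 1
    return [tag for tag, _ in votes.most_common()]
```

A toy usage: build descriptors for a small database of regions with hypothetical tags, then annotate a query whose regions resemble some of them; because each query region votes via its nearest neighbors, the dominant tags surface as annotations.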