Automatic Image Annotation Using Auxiliary Text Information

Yansong Feng and Mirella Lapata
School of Informatics, University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, UK
Y.Feng-4@sms.ed.ac.uk  mlap@inf.ed.ac.uk

Abstract

The availability of databases of images labeled with keywords is necessary for developing and evaluating image annotation models. Dataset collection is, however, a costly and time-consuming task. In this paper we exploit the vast resource of images available on the web. We create a database of pictures that are naturally embedded into news articles and propose to use their captions as a proxy for annotation keywords. Experimental results show that an image annotation model can be developed on this dataset alone without the overhead of manual annotation. We also demonstrate that the news article associated with the picture can be used to boost image annotation performance.

1 Introduction

As the number of image collections grows rapidly, so does the need to browse and search them. Recent years have witnessed significant progress in developing methods for image retrieval,[1] many of which are query-based. Given a database of images, each annotated with keywords, the query is used to retrieve relevant pictures under the assumption that the annotations essentially capture their semantics. One stumbling block to the widespread use of query-based image retrieval systems is obtaining the keywords for the images. Since manual annotation is expensive, time-consuming, and practically infeasible for large databases, there has been great interest in automating the image annotation process (see references).

More formally, given an image I with visual features V = {v_1, v_2, ..., v_N} and a set of keywords W = {w_1, w_2, ..., w_M}, the task consists in automatically finding the keyword subset W_I ⊆ W which can appropriately describe the image I. Indeed, several approaches have ...

[1] The approaches are too numerous to list; we refer the interested reader to Datta et al. (2005) for an overview.
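To make the formal task definition above concrete, the following Python sketch frames annotation as selecting the highest-scoring keywords from a vocabulary given an image's visual features. The function and variable names, and the toy scoring function, are our own illustrative assumptions and do not correspond to the authors' model; any model that estimates how well a keyword describes an image could fill the scoring role.

```python
from typing import Callable, List, Sequence


def annotate(visual_features: Sequence[float],
             vocabulary: List[str],
             score: Callable[[str, Sequence[float]], float],
             top_k: int = 5) -> List[str]:
    """Return a keyword subset W_I for an image.

    `visual_features` plays the role of V = {v_1, ..., v_N},
    `vocabulary` the role of W = {w_1, ..., w_M}, and `score(w, V)`
    stands in for whatever model estimates how appropriately keyword
    w describes the image. Here W_I is simply the top_k highest
    scoring keywords.
    """
    ranked = sorted(vocabulary,
                    key=lambda w: score(w, visual_features),
                    reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    # Toy scorer, purely for demonstration: prefers keywords whose
    # length is close to the mean feature value.
    def toy_score(word: str, features: Sequence[float]) -> float:
        mean = sum(features) / len(features)
        return -abs(len(word) - mean)

    print(annotate([4.0, 6.0],
                   ["sky", "water", "politician", "crowd"],
                   toy_score, top_k=2))
```

The sketch only fixes the interface of the task (features in, keyword subset out); the substance of any annotation model lies in how the scoring function is learned.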