H. Glotin, S. Tollari, P. Giraudet, Approximation of Linear Discriminant Analysis for Word Dependent Visual Features Selection, Proc. of Advanced Concepts for Intelligent Vision Systems (ACIVS), LNCS 3708, Springer, pages 170-177, Antwerp, Belgium, September 2005


Automatically determining a set of keywords that describes the content of a given image is a difficult problem, because of (i) the huge dimensionality of the visual space and (ii) the unsolved object segmentation problem. To address issue (i), we present a novel method based on an Approximation of Linear Discriminant Analysis (ALDA), from both a theoretical and a practical point of view. ALDA is more generic than standard LDA because it does not require explicit class labelling of each training sample, yet it still allows efficient estimation of the discriminative power of visual features. This is particularly interesting because of issue (ii) and the expensive manual object segmentation and labelling tasks on large visual databases. In the first step of ALDA, for each word wk, the training set is split in two, according to whether or not images are labelled by wk. Then, under weak assumptions, we show theoretically that the Between and Within variances of these two sets give good estimates of the most discriminative features for wk. Experiments conducted on the COREL database show efficient word-adaptive feature selection and a large enhancement (+37%) of an image Hierarchical Ascendant Classification (HAC), for which ALDA also saves computational cost by reducing the visual feature space by 90%.
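The per-word selection step described above can be sketched as follows. This is a minimal illustration of ranking features by a between/within variance ratio, not the authors' implementation; the feature matrix `X` and the boolean mask `labelled_mask` (images annotated with word wk) are assumed inputs:

```python
import numpy as np

def alda_feature_ranking(X, labelled_mask):
    """Rank visual features for one word wk by an approximate Fisher ratio.

    X: (n_images, n_features) visual feature matrix.
    labelled_mask: boolean array, True if the image is annotated with wk.
    Returns feature indices sorted from most to least discriminative.
    """
    pos, neg = X[labelled_mask], X[~labelled_mask]
    mu, mu_pos, mu_neg = X.mean(0), pos.mean(0), neg.mean(0)
    n_pos, n_neg = len(pos), len(neg)
    # Between variance: weighted squared distance of each set's mean
    # to the global mean, computed per feature.
    between = n_pos * (mu_pos - mu) ** 2 + n_neg * (mu_neg - mu) ** 2
    # Within variance: sum of each set's scatter around its own mean.
    within = ((pos - mu_pos) ** 2).sum(0) + ((neg - mu_neg) ** 2).sum(0)
    ratio = between / np.maximum(within, 1e-12)  # guard against zero scatter
    return np.argsort(ratio)[::-1]
```

Keeping only the top-ranked features for each word is what yields the dimensionality reduction mentioned in the abstract.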


feature selection, Fisher LDA, visual segmentation, image auto-annotation, high dimension problem, word prediction, CBIR, HAC, COREL database, PCA

@inproceedings{GlotinTollariGiraudet2005ACIVS,
        author={H. Glotin and S. Tollari and P. Giraudet},
        title={Approximation of Linear Discriminant Analysis for Word Dependent Visual Features Selection},
        booktitle={Proc. of Advanced Concepts for Intelligent Vision Systems (ACIVS)},
        series={LNCS},
        volume={3708},
        publisher={Springer},
        pages={170--177},
        address={Antwerp, Belgium},
        month={September},
        year={2005}
}