Feature Fusion for Efficient Content-Based Video Retrieval

Author: Visser, M.
Contributor: Van der Maaten, J.P. (mentor)
Faculty: Electrical Engineering, Mathematics and Computer Science
Department: Intelligent Systems
Programme: Pattern Recognition & Bioinformatics group
Date: 2013-03-14

Abstract: Content-based video retrieval is a complex task: single items carry a large amount of information, and video databases can be very large. In this paper we explore a possible solution for efficient retrieval of similar items. In our experiments we combine relevant feature sets with a learned Mahalanobis metric while using an efficient nearest neighbor search algorithm. The efficient nearest neighbor algorithms we compare are Locality Sensitive Hashing and Vantage Point trees; both are compared to several baseline systems within the general video retrieval framework. We used three feature sets to test the system: SURF features, color histograms, and topics, where the topics were extracted with a Latent Dirichlet Allocation topic model. We show that fusing the individual feature sets with a learned metric improves performance over the best individual feature set. The feature fusion can be combined with an efficient nearest neighbor search algorithm to reduce the number of exact distance computations with limited impact on retrieval performance.

Subject: content-based video retrieval; feature fusion; metric learning; efficient retrieval; nearest neighbor search; locality sensitive hashing; vantage point trees

To reference this document use: http://resolver.tudelft.nl/uuid:de8abdb6-2038-4fba-90fd-13667abdd930
Part of collection: Student theses
Document type: master thesis
Rights: (c) 2013 Visser, M.
Files: paper.pdf (PDF, 563.61 KB)
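The abstract's core idea, reducing exact distance computations by hashing feature vectors into buckets before ranking, can be illustrated with random-hyperplane Locality Sensitive Hashing. This is a minimal sketch, not the thesis implementation: it uses plain Python lists as (already fused) feature vectors, Euclidean distance instead of a learned Mahalanobis metric, and illustrative function names.

```python
import math
import random

def lsh_signature(vec, hyperplanes):
    # One bit per hyperplane: which side of the hyperplane the vector falls on.
    return tuple(1 if sum(v * h for v, h in zip(vec, hp)) >= 0 else 0
                 for hp in hyperplanes)

def build_index(vectors, num_bits=8, seed=0):
    # Draw random Gaussian hyperplanes, then bucket every vector by its
    # bit signature. Nearby vectors tend to share a signature.
    rng = random.Random(seed)
    dim = len(vectors[0])
    hyperplanes = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
                   for _ in range(num_bits)]
    buckets = {}
    for i, v in enumerate(vectors):
        buckets.setdefault(lsh_signature(v, hyperplanes), []).append(i)
    return hyperplanes, buckets

def query(q, vectors, hyperplanes, buckets):
    # Exact distances are computed only for items in the query's bucket,
    # which is the saving the abstract refers to.
    candidates = buckets.get(lsh_signature(q, hyperplanes), [])
    return sorted(candidates, key=lambda i: math.dist(q, vectors[i]))
```

A query identical to a stored item always lands in that item's bucket, so it is returned first; dissimilar items usually fall into other buckets and are never compared exactly.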