
An extended sparse classification framework for domain adaptation in video surveillance

Nourbakhsh, Farshad, Granger, Eric and Fumera, Giorgio. 2017. "An extended sparse classification framework for domain adaptation in video surveillance". In Computer Vision – ACCV 2016 Workshops: ACCV 2016 International Workshops, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part III. Lecture Notes in Computer Science, vol. 10118, pp. 360-376. Springer Verlag.
Citation count in Scopus: 1.

PDF: An-extended-sparse-classification-framework-for-domain-adaptation-in-video-surveillance.pdf (741 kB)

Abstract

Still-to-video face recognition (FR) systems used in video surveillance applications capture facial trajectories across a network of distributed video cameras and compare them against stored distributed facial models. Currently, the performance of state-of-the-art systems is severely affected by changes in facial appearance caused by variations in, e.g., pose, illumination and scale across camera viewpoints. Moreover, since an individual is typically enrolled using one or few reference stills captured during enrolment, face models are not robust to intra-class variation. In this paper, the Extended Sparse Representation Classification through Domain Adaptation (ESRC-DA) algorithm is proposed to improve the performance of still-to-video FR. The system's facial models are thereby enhanced by integrating variational information from its operational domain. In particular, robustness to intra-class variations is improved by exploiting: (1) an under-sampled dictionary built from target reference facial stills captured under controlled conditions; and (2) an auxiliary dictionary built from an abundance of unlabelled facial trajectories captured under different conditions, from each camera viewpoint in the surveillance network. Accuracy and efficiency of the proposed technique are compared to state-of-the-art still-to-video FR techniques using videos from the Chokepoint and COX-S2V databases. Results indicate that ESRC-DA with dictionary learning of unlabelled trajectories provides the highest level of accuracy, while maintaining a low complexity.
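For readers unfamiliar with extended sparse representation classification, the sketch below illustrates the classification step that the abstract describes: a probe face descriptor y is sparsely coded over the concatenation of the gallery dictionary of reference stills and an auxiliary dictionary of intra-class variations, and the identity with the smallest class-wise reconstruction residual is returned. This is a minimal illustration only, not the paper's implementation: the function name, the use of scikit-learn's Lasso as the l1 solver and the alpha value are assumptions, and the learning of the auxiliary dictionary from unlabelled trajectories is not shown.

    import numpy as np
    from sklearn.linear_model import Lasso

    def esrc_classify(y, D, labels, V, alpha=0.01):
        """Illustrative extended-SRC decision rule (assumed sketch): y ~ D x + V z.

        y      -- probe descriptor, shape (d,)
        D      -- gallery dictionary of reference stills, shape (d, n), columns l2-normalized
        labels -- class label of each column of D, shape (n,)
        V      -- auxiliary (intra-class variation) dictionary, shape (d, m)
        Returns the label whose class-specific reconstruction has the smallest residual.
        """
        A = np.hstack([D, V])                       # combined dictionary
        solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        solver.fit(A, y)                            # sparse coding of the probe
        coef = solver.coef_
        x, z = coef[:D.shape[1]], coef[D.shape[1]:]

        best_label, best_residual = None, np.inf
        for c in np.unique(labels):
            x_c = np.where(labels == c, x, 0.0)     # keep only class-c gallery coefficients
            residual = np.linalg.norm(y - D @ x_c - V @ z)
            if residual < best_residual:
                best_label, best_residual = c, residual
        return best_label

Note that the auxiliary coefficients z are shared across classes, so appearance variations absorbed by the auxiliary dictionary do not bias the residual toward any particular identity; this mirrors the robustness to intra-class variation that the abstract attributes to the auxiliary dictionary of unlabelled trajectories.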

Document type: Conference proceedings
ISSN: 0302-9743
Professor: Granger, Éric
Affiliation: Génie de la production automatisée
Date deposited: 11 Apr. 2017 14:19
Last modified: 28 Jan. 2020 16:17
URI: https://espace2.etsmtl.ca/id/eprint/15044
