A showcase of ÉTS researchers’ publications and other contributions

No-reference video quality assessment using distortion learning and temporal attention



Kossi, Koffi, Coulombe, Stéphane, Desrosiers, Christian and Gagnon, Ghyslain. 2022. "No-reference video quality assessment using distortion learning and temporal attention". IEEE Access, vol. 10, pp. 41010-41022.

Coulombe-S-2022-24440.pdf - Published Version
Use licence: Creative Commons CC BY.



The rapid growth of video consumption and multimedia applications has increased the interest of academia and industry in building tools that can evaluate perceptual video quality. Since videos may be distorted when they are captured or transmitted, it is imperative to develop reliable methods for no-reference video quality assessment (NR-VQA). To date, most NR-VQA models in the prior art have been proposed for assessing a specific category of distortion, such as authentic distortions or traditional distortions. Moreover, those developed for databases containing both authentic and traditional distortions have so far performed poorly. As a result, service providers are reluctant to adopt multiple NR-VQA approaches, preferring a single algorithm capable of accurately estimating video quality in all situations. Furthermore, many existing NR-VQA methods are computationally complex and therefore impractical for various real-life applications. In this paper, we propose a novel deep learning method for NR-VQA based on multi-task learning, in which the distortion of individual frames and the overall quality of the video are predicted by a single neural network. This allows the network to be trained with a greater amount and variety of data, thereby improving its performance at test time. Additionally, our method leverages temporal attention to select the frames of a video sequence that contribute the most to its perceived quality. The proposed algorithm is evaluated on five publicly available video quality assessment (VQA) databases containing traditional and authentic distortions. Results show that our method outperforms the state of the art on traditional distortion databases such as LIVE VQA and CSIQ video, while also delivering competitive performance on databases containing authentic distortions such as KoNViD-1k, LIVE-Qualcomm and CVD2014.
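The paper's network itself is not reproduced here, but the temporal-attention aggregation idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration: all weight vectors are random placeholders standing in for learned parameters, and the per-frame quality head is a simple linear map, both assumptions for the sake of the example.

```python
import numpy as np

def temporal_attention_pool(frame_features, w_att, w_quality):
    """Aggregate per-frame quality estimates into one video-level score
    using a softmax temporal attention over frames (illustrative only)."""
    # Per-frame quality estimates (linear head for illustration), shape (T,).
    frame_quality = frame_features @ w_quality
    # Attention logits computed from the same frame features, shape (T,).
    logits = frame_features @ w_att
    # Softmax over the temporal axis: frames with higher logits weigh more.
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Video quality = attention-weighted average of the frame scores.
    return float(weights @ frame_quality)

rng = np.random.default_rng(0)
T, D = 8, 16                       # 8 frames, 16-dimensional frame features
feats = rng.normal(size=(T, D))
w_att = rng.normal(size=D)         # placeholder attention parameters
w_q = rng.normal(size=D)           # placeholder quality-head parameters
score = temporal_attention_pool(feats, w_att, w_q)
```

Because the attention weights are a convex combination over frames, the pooled score always lies between the lowest and highest per-frame estimates; frames judged uninformative are simply down-weighted rather than discarded.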

Item Type: Peer reviewed article published in a journal
Coulombe, Stéphane
Desrosiers, Christian
Gagnon, Ghyslain
Affiliation: Software Engineering and Information Technology, Software Engineering and Information Technology, Electrical Engineering
Date Deposited: 02 Jun 2022 19:00
Last Modified: 23 Jun 2022 14:58
