
A generalized graph reduction framework for interactive segmentation of large images

Gueziri, Houssem-Eddine, McGuffin, Michael J. and Laporte, Catherine. 2016. "A generalized graph reduction framework for interactive segmentation of large images". Computer Vision and Image Understanding, vol. 150, pp. 44-57.
Scopus citation count: 10.

PDF: A-Generalized-Graph-Reduction-Framework-for-Interactive-Segmentation-of-Large-Images.pdf (11 MB)
License: Creative Commons CC BY-NC-ND.

Abstract

The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into "layers" (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground / background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors.
Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation.
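The layering step described in the abstract, partitioning pixels by distance from the user-drawn contour into bands whose thicknesses follow a Fibonacci sequence, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, parameters, and the exact banding rule are assumptions, and it uses `scipy`'s Euclidean distance transform as a stand-in for whatever distance computation the paper employs.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fibonacci_layers(contour_mask, num_layers=8):
    """Assign each pixel a layer index (0 = on/nearest the contour),
    with layer thicknesses growing as 1, 1, 2, 3, 5, 8, ... pixels.
    `contour_mask` is a boolean array, True on the drawn contour.
    (Hypothetical sketch of the layering idea from the abstract.)"""
    # Euclidean distance from every pixel to the nearest contour pixel.
    dist = distance_transform_edt(~contour_mask)

    # Fibonacci layer thicknesses, turned into cumulative outer bounds:
    # thicknesses [1, 1, 2, 3, 5, ...] -> bounds [1, 2, 4, 7, 12, ...].
    fib = [1, 1]
    while len(fib) < num_layers:
        fib.append(fib[-1] + fib[-2])
    bounds = np.cumsum(fib)

    # Layer k contains pixels with bounds[k-1] <= dist < bounds[k],
    # so layers get thicker as distance from the contour increases.
    return np.searchsorted(bounds, dist, side='right')
```

In the full method, vertices in layers beyond a selected index would be dropped from the graph before running RW or GC, keeping full resolution only near the sketched boundary.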

Document type: Peer-reviewed journal article
Professors:
McGuffin, Michael John
Laporte, Catherine
Affiliation: Software and Information Technology Engineering, Electrical Engineering
Deposited: 30 May 2016 16:11
Last modified: 17 May 2018 04:00
URI: https://espace2.etsmtl.ca/id/eprint/12654

