ConKeD: multiview contrastive descriptor learning for keypoint-based retinal image registration
Abstract Retinal image registration is of utmost importance due to its wide applications in medical practice. In this context, we propose ConKeD, a novel deep learning approach to learn descriptors for retinal image registration. In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy that exploits additional information from the available training samples, making it possible to learn high-quality descriptors from limited training data. To train and evaluate ConKeD, we combine these descriptors with domain-specific keypoints, particularly blood vessel bifurcations and crossovers, detected using a deep neural network. Our experimental results demonstrate the benefits of the multi-positive multi-negative strategy: it outperforms both the widely used triplet loss (single-positive, single-negative) and the single-positive multi-negative alternative. Additionally, combining ConKeD with the domain-specific keypoints produces results comparable to state-of-the-art methods for retinal image registration, while offering important advantages such as avoiding pre-processing, using fewer training samples, and requiring fewer detected keypoints. ConKeD therefore shows promising potential for facilitating the development and application of deep learning-based methods for retinal image registration.
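The multi-positive multi-negative strategy described in the abstract can be illustrated with a supervised-contrastive-style loss in which every descriptor sharing an anchor's keypoint identity acts as a positive and all remaining batch descriptors act as negatives. The following is a hypothetical sketch for intuition only, not the exact ConKeD formulation; the function name `multi_pos_multi_neg_loss`, the temperature `tau`, and the toy descriptors are assumptions.

```python
import numpy as np

def multi_pos_multi_neg_loss(z, labels, tau=0.1):
    """Hypothetical multi-positive multi-negative contrastive loss
    (SupCon-style sketch; not the exact ConKeD formulation).

    z      : (N, D) keypoint descriptors
    labels : (N,) keypoint identities; descriptors sharing a label
             (e.g. the same bifurcation seen in two views) are positives,
             all other descriptors in the batch are negatives.
    """
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / tau                               # scaled cosine similarities
    n = len(labels)
    total, anchors = 0.0, 0
    for i in range(n):
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not pos:
            continue  # an anchor needs at least one positive
        others = [a for a in range(n) if a != i]
        # log of the denominator over all positives and negatives of anchor i
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # average the InfoNCE-style term over ALL positives of anchor i
        total += -np.mean([sim[i, p] - log_denom for p in pos])
        anchors += 1
    return total / max(anchors, 1)
```

With several positives per anchor (for example, augmented views of the same keypoint), each positive contributes its own term to the loss; this is the additional supervision that a single-positive single-negative triplet loss discards.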
Med Biol Eng Comput
Rivas-Villar, David (author) / Hervella, Álvaro S. (author) / Rouco, José (author) / Novo, Jorge (author)
Medical & Biological Engineering & Computing; vol. 62, pp. 3721-3736
1 December 2024
Journal article
Electronic resource
English
Keypoint based autonomous registration of terrestrial laser point-clouds
Online Contents | 2008
Automatic 3D Surface Co-Registration Using Keypoint Matching
Online Contents | 2017
Keypoint-based 4-Points Congruent Sets – Automated marker-less registration of laser scans
Online Contents | 2014
Automatic point cloud coarse registration using geometric keypoint descriptors for indoor scenes
British Library Online Contents | 2017