Deep Multimodal Neuroimaging Retrieval Based on Adaptive Hash Semantics

  • Wenyin Tao, Feng An

Abstract

Neuroimaging has been widely used in computer-assisted clinical diagnosis and treatment. In particular, multimodal neuroimaging retrieval technology, as an auxiliary tool, can effectively improve the efficiency and accuracy of medical decision-making. However, the rapid growth of neuroimaging libraries poses major challenges to fast and efficient retrieval. Existing image retrieval algorithms frequently fail when applied directly to multimodal neuroimaging databases, because they typically rely on triplet loss functions, which can only capture local semantic similarity between samples rather than high-order semantic associations. Neuroimaging, moreover, usually exhibits a complex semantic distribution, with small inter-class differences and large inter-modal differences, which degrades the performance of existing methods. To address these problems, this paper proposes a deep multimodal neuroimaging retrieval method based on adaptive hash semantics. Specifically, the hash network learns the Hamming semantic space distribution of each neuroimage directly from its semantic tags, thereby avoiding the drawbacks of triplet loss. Because the method learns directly from category semantic tags, it achieves strong learning performance. Extensive experimental results show that our method generates effective hash codes and achieves state-of-the-art multimodal neuroimaging retrieval performance.
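
The abstract contrasts triplet-loss training with learning Hamming codes directly from category semantic tags. Below is a minimal sketch of that general idea, assuming a PyTorch implementation; the paper's exact adaptive-hash-semantics formulation is not described in the abstract, so the names `HashHead`, `class_targets`, and `label_semantic_loss`, and the use of random per-class target codes, are illustrative assumptions rather than the authors' method.

```python
# Sketch only: each neuroimage is pulled toward a Hamming-space target derived
# from its category label, instead of being trained with sample triplets.
# The per-class targets are random here; the paper adapts them semantically.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, CODE_BITS, FEAT_DIM = 4, 64, 512

# Hypothetical per-class target codes in {-1, +1}^CODE_BITS, built only from labels.
torch.manual_seed(0)
class_targets = torch.sign(torch.randn(NUM_CLASSES, CODE_BITS))

class HashHead(nn.Module):
    """Maps backbone features to a relaxed binary code via tanh."""
    def __init__(self, feat_dim: int, code_bits: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, code_bits)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(feats))  # values in (-1, 1); sign() at retrieval time

def label_semantic_loss(codes: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Pull each relaxed code toward its class target code (no triplets needed)."""
    targets = class_targets[labels]                    # (B, CODE_BITS)
    center_loss = F.binary_cross_entropy(
        (codes + 1) / 2, (targets + 1) / 2             # map {-1, 1} -> {0, 1}
    )
    quant_loss = (codes.abs() - 1).pow(2).mean()       # push outputs toward +/- 1
    return center_loss + 0.1 * quant_loss

# Toy usage: features from different modalities would share the same hash head,
# so codes of the same class land near the same Hamming target.
head = HashHead(FEAT_DIM, CODE_BITS)
feats = torch.randn(8, FEAT_DIM)            # stand-in for backbone features
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = label_semantic_loss(head(feats), labels)
loss.backward()
binary_codes = torch.sign(head(feats))      # final Hamming codes used for retrieval
```

A usage note on the design: because every sample of a class shares one label-derived target, the supervision is global over the Hamming space, which is the property the abstract claims triplet loss lacks.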

Published
2021-12-15
How to Cite
Tao, W., & An, F. (2021). Deep Multimodal Neuroimaging Retrieval Based on Adaptive Hash Semantics. Forest Chemicals Review, 944-957. Retrieved from http://forestchemicalsreview.com/index.php/JFCR/article/view/258
Section
Articles