Multi-Modal Medical Image Fusion Based on Improved Parameter Adaptive PCNN and Latent Low-Rank Representation

Keywords

Image fusion; improved parameter-adaptive PCNN; non-subsampled shearlet transform; latent low-rank representation.

How to Cite

Tang, Z., & Zhou, X. (2024). Multi-Modal Medical Image Fusion Based on Improved Parameter Adaptive PCNN and Latent Low-Rank Representation. Instrumentation, 11(2), 53–63. https://doi.org/10.15878/j.instr.202400059

Abstract

Multimodal medical image fusion can help physicians devise more accurate treatment plans for patients, since a single-modality image provides only limited useful information. To address the limited ability of traditional medical image fusion methods to preserve image detail and salient information, a new multimodal medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. First, the high- and low-frequency sub-band coefficients are obtained by decomposing the source images with the non-subsampled shearlet transform (NSST). The low-frequency sub-band coefficients are then fused using a latent low-rank representation (LatLRR) algorithm, and an improved parameter-adaptive PCNN (PAPCNN) algorithm is proposed for fusing the high-frequency sub-band coefficients. The improved PAPCNN model builds on automatic parameter setting, with an optimized configuration method for the time decay factor.
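The pipeline described in the abstract can be sketched in outline form. The sketch below is a minimal, illustrative stand-in, not the authors' implementation: a box blur substitutes for the NSST decomposition, a truncated SVD substitutes for LatLRR on the low-frequency band, and a simple absolute-value (activity-level) selection substitutes for the improved PAPCNN fusion of the high-frequency band. All function names are hypothetical.

```python
import numpy as np

def decompose(img, ksize=9):
    """Stand-in for NSST: box-blur low-pass plus high-pass residual.

    The paper actually uses the non-subsampled shearlet transform,
    which yields multiple directional high-frequency sub-bands.
    """
    pad = ksize // 2
    padded = np.pad(img, pad, mode="reflect")
    low = np.zeros_like(img, dtype=float)
    for dy in range(ksize):
        for dx in range(ksize):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= ksize * ksize
    return low, img - low

def fuse_low(a, b, rank=8):
    """Stand-in for LatLRR-based low-frequency fusion.

    Truncated SVD extracts a low-rank part from each coefficient map;
    the low-rank parts are averaged and the residual (detail) parts
    are max-selected by magnitude.
    """
    def low_rank(x):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        s = s.copy()
        s[rank:] = 0.0
        return (u * s) @ vt
    la, lb = low_rank(a), low_rank(b)
    sa, sb = a - la, b - lb
    return 0.5 * (la + lb) + np.where(np.abs(sa) >= np.abs(sb), sa, sb)

def fuse_high(a, b):
    """Stand-in for improved-PAPCNN fusion: at each pixel, keep the
    high-frequency coefficient with the larger activity level."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse(img1, img2):
    """Decompose both source images, fuse each band, and reconstruct."""
    l1, h1 = decompose(img1.astype(float))
    l2, h2 = decompose(img2.astype(float))
    return fuse_low(l1, l2) + fuse_high(h1, h2)
```

In the actual method, the PAPCNN's parameters (including the time decay factor highlighted in the abstract) are set adaptively from the sub-band statistics rather than hand-tuned, which is what the simple magnitude comparison above elides.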


This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2024 Zirui Tang, Xianchun Zhou
