Three-layer medical image fusion with tensor-based features. (English) Zbl 1460.92105

Summary: Replacing computed tomography (CT) with magnetic resonance imaging (MRI), MRI-positron emission tomography (PET), or MRI-single photon emission computed tomography (SPECT) imaging might offer further advantages owing to the higher soft-tissue contrast of brain structures and the lower dose absorbed by the patient. In this paper, a new three-layer (intensity, detail, and base layers) medical image fusion method with differential features for gray and pseudo-color images is proposed. The method consists of three steps. In the first step, a differential feature derived from the structure tensor is used to decompose the anatomical MRI image into its three-layer representation, while a differential feature derived from the color tensor is used to decompose the functional PET or SPECT image into its three-layer representation. In the second step, a spatial frequency metric is proposed to combine the decomposed intensity layers and detail layers, and the absolute maximum is adopted as the fusion rule for the base layers. In the third step, the fused image is obtained by adding the fused intensity layer, the fused detail layer, and the fused base layer. The superiority of the proposed method is demonstrated by subjective and objective evaluation of the experimental results.
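
The summary above specifies the fusion rules (a spatial frequency metric for the intensity and detail layers, absolute maximum for the base layers, and summation of the three fused layers). The sketch below is a minimal illustration of these rules in Python for two co-registered grayscale inputs; it is not the authors' implementation. In particular, the three-layer decomposition shown is a simple Gaussian stand-in rather than the structure-tensor/color-tensor decomposition of the paper, and the function names, window size, and smoothing scales are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spatial_frequency(layer, size=8):
    """Local spatial frequency: sqrt(mean squared row diffs + mean squared column diffs)."""
    rd = np.zeros_like(layer)
    cd = np.zeros_like(layer)
    rd[:, 1:] = (layer[:, 1:] - layer[:, :-1]) ** 2   # horizontal (row-wise) differences
    cd[1:, :] = (layer[1:, :] - layer[:-1, :]) ** 2   # vertical (column-wise) differences
    return np.sqrt(uniform_filter(rd, size) + uniform_filter(cd, size))

def three_layer_split(img, sigma_fine=2.0, sigma_coarse=10.0):
    """Stand-in decomposition (NOT the paper's tensor-based one):
    base + intensity + detail reconstructs the input exactly."""
    base = gaussian_filter(img, sigma_coarse)   # large-scale brightness
    mid = gaussian_filter(img, sigma_fine)
    intensity = mid - base                      # mid-scale structure
    detail = img - mid                          # fine detail / edges
    return intensity, detail, base

def fuse_three_layer(img_a, img_b):
    img_a = img_a.astype(float)
    img_b = img_b.astype(float)
    ia, da, ba = three_layer_split(img_a)
    ib, db, bb = three_layer_split(img_b)
    # Step two: intensity and detail layers keep the source with the larger
    # local spatial frequency; base layers use the absolute-maximum rule.
    fused_i = np.where(spatial_frequency(ia) >= spatial_frequency(ib), ia, ib)
    fused_d = np.where(spatial_frequency(da) >= spatial_frequency(db), da, db)
    fused_b = np.where(np.abs(ba) >= np.abs(bb), ba, bb)
    # Step three: the fused image is the sum of the three fused layers.
    return fused_i + fused_d + fused_b
```

The stand-in decomposition is built so that intensity + detail + base reconstructs each input, which keeps the final summation in step three consistent with the summary's description.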

MSC:

92C55 Biomedical imaging and signal processing
