
Infrared and visible image fusion method based on rolling guidance filter and NSST. (English) Zbl 1425.94027

Summary: The rolling guidance filter (RGF) can smooth textures while preserving edges, and the non-subsampled shearlet transform (NSST) offers translation invariance and directional selectivity; based on these properties, a new infrared and visible image fusion method is proposed. First, the rolling guidance filter is used to decompose the infrared and visible images into base and detail layers. The NSST is then applied to the base layers to obtain low-frequency and high-frequency coefficients. The low-frequency coefficients are fused using a visual saliency map as the fusion rule, while the high-frequency subband coefficients are fused using gradient domain guided filtering (GDGF) and an improved Laplacian sum. Finally, the detail layers are fused with a rule combining phase congruency and gradient domain guided filtering. As a result, the proposed method not only extracts the infrared targets but also fully preserves the background information of the visible images. Experimental results indicate that the method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
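The following is a minimal Python sketch of the pipeline structure described above, not the authors' implementation. It assumes opencv-contrib-python (for cv2.ximgproc.rollingGuidanceFilter and cv2.ximgproc.guidedFilter); the NSST stage, the visual-saliency and improved-Laplacian rules, and the phase-congruency rule are replaced by simpler stand-ins, since no standard implementations of those are assumed here.

import cv2  # requires opencv-contrib-python for the cv2.ximgproc module
import numpy as np

def rgf_decompose(img, sigma_s=3.0, sigma_r=25.0, iters=4):
    """Split an image into a base layer (RGF-smoothed) and a detail layer."""
    img = img.astype(np.float32)
    base = cv2.ximgproc.rollingGuidanceFilter(
        img, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s, numOfIter=iters)
    return base, img - base

def fuse_detail(det_ir, det_vis, guide, radius=8, eps=100.0):
    """Detail-layer fusion sketch: a binary weight map refined by guided
    filtering (a stand-in for the paper's phase-congruency + GDGF rule)."""
    # Initial weights: pick the source whose detail response is larger.
    w_ir = (np.abs(det_ir) >= np.abs(det_vis)).astype(np.float32)
    # Edge-aware refinement of the weight map, guided by a source image.
    w_ir = cv2.ximgproc.guidedFilter(guide.astype(np.float32), w_ir, radius, eps)
    w_ir = np.clip(w_ir, 0.0, 1.0)
    return w_ir * det_ir + (1.0 - w_ir) * det_vis

# Usage sketch (registered grayscale inputs assumed):
# ir  = cv2.imread("ir.png",  cv2.IMREAD_GRAYSCALE)
# vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
# base_ir,  det_ir  = rgf_decompose(ir)
# base_vis, det_vis = rgf_decompose(vis)
# # The paper applies NSST to the base layers and fuses low/high-frequency
# # coefficients with saliency and improved-Laplacian rules; a simple
# # maximum combination is used here as a placeholder for that stage.
# fused_base   = np.maximum(base_ir, base_vis)
# fused_detail = fuse_detail(det_ir, det_vis, ir)
# fused = np.clip(fused_base + fused_detail, 0, 255).astype(np.uint8)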

MSC:

94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
62H35 Image analysis in multivariate analysis
60H35 Computational methods for stochastic equations (aspects of stochastic analysis)
65C20 Probabilistic models, generic numerical methods in probability and statistics
60H15 Stochastic partial differential equations (aspects of stochastic analysis)
