
Pixel-wise parallel calculation for depth from focus with adaptive focus measure. (English) Zbl 1491.94007

Summary: Depth-from-focus methods estimate depth from a set of images taken with different focus settings. We recently proposed a method that uses the ratio between the luminance value of a target pixel and the mean value of its neighboring pixels, a quantity that follows a Poisson distribution. Despite its good performance, the method requires a large amount of memory and computation time: it must store focus measure values for each depth and each window radius on a pixel-wise basis, and the filtering used to compute the mean value is performed twice, which couples neighboring pixels too strongly for the pixel-wise processing to be parallelized. In this paper, we propose an approximate calculation method that gives almost the same results with a single filtering operation and enables pixel-wise parallelization. This pixel-wise processing does not require the aforementioned focus measure values to be stored, which reduces the memory requirement. In addition, building on the pixel-wise processing, we propose a method for determining the processing window size that improves noise tolerance and depth estimation in texture-less regions. Through experiments, we show that the new method estimates depth values more accurately in a much shorter time.
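The general pipeline the summary describes (a per-pixel focus measure obtained from a single local-mean filtering pass, followed by a per-depth running maximum that avoids storing the full measure stack) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the ratio-based measure, the uniform_filter neighborhood, and all function names are illustrative assumptions standing in for the paper's Poisson-based adaptive focus measure.

```python
# Minimal depth-from-focus sketch (NumPy/SciPy); NOT the authors' method.
# Assumptions: the focal stack is a float array of shape (num_focus, H, W), and a
# simple deviation-from-local-mean measure stands in for the Poisson-based one.
import numpy as np
from scipy.ndimage import uniform_filter

def focus_measure(img, radius=3):
    """Per-pixel focus measure: squared deviation of each pixel's luminance from
    the mean of its neighborhood, normalized by that mean (a hypothetical stand-in
    for the ratio-based measure mentioned in the summary)."""
    size = 2 * radius + 1
    local_mean = uniform_filter(img, size=size)           # single filtering pass
    eps = 1e-8
    return (img - local_mean) ** 2 / (local_mean + eps)   # purely pixel-wise afterwards

def depth_from_focus(stack, radius=3):
    """Estimate a depth-index map from a focal stack of shape (D, H, W).
    Each pixel's depth is the focus setting maximizing its focus measure; keeping a
    running maximum means no per-depth measure stack has to be stored."""
    num_depths, height, width = stack.shape
    best_measure = np.full((height, width), -np.inf)
    depth_map = np.zeros((height, width), dtype=np.int32)
    for d in range(num_depths):
        fm = focus_measure(stack[d], radius)
        better = fm > best_measure
        best_measure[better] = fm[better]
        depth_map[better] = d
    return depth_map

if __name__ == "__main__":
    # Synthetic example: a random focal stack with 10 focus settings, 64x64 pixels.
    rng = np.random.default_rng(0)
    stack = rng.random((10, 64, 64)).astype(np.float32)
    print(depth_from_focus(stack).shape)  # (64, 64)
```

Because the inner update depends only on each pixel's own measure and running maximum, the loop body is independent per pixel, which is the property the paper exploits for pixel-wise parallelization.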

MSC:

94A08 Image processing (compression, reconstruction, etc.) in information and communication theory

Software:

GitHub
