zbMATH — the first resource for mathematics

Improved estimation of defocus blur and spatial shifts in spatial domain: A homotopy-based approach. (English) Zbl 1035.68126
Summary: This paper presents a homotopy-based algorithm for the recovery of depth cues in the spatial domain. The algorithm specifically deals with defocus blur and spatial shifts, that is, 2D motion, stereo disparities, and/or zooming disparities. These cues are estimated from two images of the same scene acquired by a camera evolving in time and/or space. We show that they can be simultaneously computed by solving a system of equations using a homotopy method. The proposed algorithm is tested on synthetic and real images. The results confirm that the use of a homotopy method leads to a dense and accurate estimation of depth cues. This approach has been integrated into an application for relief estimation from remotely sensed images.
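The paper's own blur/shift equations are not reproduced in this record. As a generic, minimal sketch of the numerical idea it relies on, the following toy example tracks a convex homotopy H(x, t) = (1 − t)(x − x₀) + t·F(x) from an easy system at t = 0 to a target nonlinear system F(x) = 0 at t = 1, with a Newton corrector at each step; the system F and all names here are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Toy target system (NOT the paper's equations): a circle and a hyperbola.
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def JF(v):
    # Jacobian of F, needed for the Newton corrector.
    x, y = v
    return np.array([[2.0 * x, 2.0 * y], [y, x]])

def homotopy_solve(F, JF, x0, steps=50, newton_iters=5):
    """Track H(x, t) = (1 - t)(x - x0) + t*F(x) from t=0 to t=1.

    At t=0 the solution is trivially x0; small increments in t keep the
    Newton corrector inside the basin of attraction of the tracked root.
    """
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            H = (1.0 - t) * (x - x0) + t * F(x)
            JH = (1.0 - t) * np.eye(n) + t * JF(x)
            x = x - np.linalg.solve(JH, H)
    return x

root = homotopy_solve(F, JF, np.array([2.0, 0.5]))
```

In this sketch the predictor is trivial (the previous iterate) and the corrector is plain Newton; practical continuation codes add step-size control and tangent prediction, but the tracking principle is the same.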

68U10 Computing methodologies for image processing
Full Text: DOI
[1] Trucco, E.; Verri, A., Introductory techniques for 3-D computer vision, (1998), Prentice-Hall Upper Saddle River, NJ
[2] Julesz, B., Binocular depth perception of computer-generated patterns, Bell syst. tech. J., 39, 5, 1125-1162, (1960)
[3] Julesz, B., Binocular depth perception without familiarity cues, Science, 145, 3629, 356-362, (1964)
[4] Marr, D.; Poggio, T., A theory of human stereo vision, Proc. R. soc. London, ser. B, 204, 301-328, (1979)
[5] Horn, B.K.P., Robot vision, (1986), MIT Press Cambridge, MA
[6] Ma, J.; Olsen, S.I., Depth from zooming, J. opt. soc. amer., 7, 10, 1883-1890, (1990)
[7] Lavest, J.M.; Rives, G.; Dhome, M., 3D reconstruction by zooming, IEEE trans. robot. automat., 9, 2, 196-208, (1993)
[8] Pentland, A.P., A new sense for depth of field, IEEE trans. pattern anal. Mach. intell., 9, 4, 523-531, (1987)
[9] Subbarao, M.; Surya, G., Depth from defocus: a spatial domain approach, Int. J. comput. vis., 13, 3, 271-294, (1994)
[10] Chaudhuri, S.; Rajagopalan, A.N., Depth from defocus: A real aperture imaging approach, (1999), Springer New York
[11] Ziou, D.; Deschenes, F., Depth from defocus estimation in spatial domain, Comput. vision image understanding, 81, 2, 143-165, (2001) · Zbl 1011.68545
[12] Horn, B.K.P.; Schunck, B.G., Determining optical flow, Artif. intell., 17, 185-204, (1981)
[13] R. Manmatha, A framework for recovering affine transforms using points, lines, or image brightness, in: Proceedings of the IEEE, International Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 1994, pp. 141-146.
[14] Myles, Z.; Lobo, N.V., Recovering affine motion and defocus blur simultaneously, IEEE trans. pattern anal. Mach. intell., 20, 6, 652-658, (1998)
[15] F. Deschênes, D. Ziou, P. Fuchs, A unified approach for a simultaneous and cooperative estimation of defocus blur and spatial shifts, Technical Report No. 261, DMI, Université de Sherbrooke, 2001.
[16] R.G. Willson, S.A. Shafer, What is the center of the image? in: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR’93), New York, USA, 1993, pp. 670-671.
[17] R. Enciso, T. Viéville, O. Faugeras, Comment simplifier le processus de calibration?, in: Proceedings of the Fourth International Conference: Interface to Real and Virtual Worlds, Montpellier, France, June 26-30, 1995, pp. 103-112.
[18] S.I. Olsen, Image point motion when zooming and focusing, in: Proceedings of the 10th Scandinavian Conference on Image Analysis, Lappeenranta, Finland, 1997, pp. 65-70.
[19] Papoulis, A., Systems and transforms with applications in optics, (1968), McGraw-Hill New York
[20] M. Subbarao, Spatial-domain convolution/deconvolution transform, Technical Report 91.07.03, Department of Electrical Engineering, State University of New York at Stony Brook, 1991.
[21] D. Ziou, Passive depth from defocus using a spatial domain approach, in: Proceedings of the International Conference on Computer Vision (ICCV), Bombay, 1998, pp. 799-804.
[22] F. Deschênes, D. Ziou, P. Fuchs, Enhanced depth from defocus estimation: tolerance to spatial displacements, in: Proceedings of the International Conference on Image and Signal Processing, Vol. 2, Agadir, Morocco, May 3-5, 2001, pp. 978-985.
[23] Massey, W.S., Basic course in algebraic topology, (1991), Springer New York, US · Zbl 0725.55001
[24] Martínez, J.M., Algorithms for solving nonlinear systems of equations, (), 81-108 · Zbl 0828.90125
[25] Allgower, E.L.; Georg, K., Simplicial and continuation methods for approximating fixed points and solutions to systems of equations, SIAM rev., 22, 1, 29-85, (1980) · Zbl 0432.65027
[26] Melville, R.C.; Trajkovic, L.; Fang, S.C.; Watson, L.T., Artificial parameter homotopy methods for the DC operating point problem, IEEE trans. CAD, 12, 861-877, (1993)
[27] Stonick, V.L.; Alexander, S.T., Global optimal rational approximation using homotopy continuation methods, IEEE trans. signal process., 40, 9, 2358-2361, (1992)
[28] Watson, L.T., Globally convergent homotopy algorithms for nonlinear systems of equations, Nonlinear dynam., 1, 143-191, (1990)
[29] Courant, R.; Robbins, H., What is mathematics?, (1941), Oxford University Press New York, US · JFM 67.0001.05
[30] F.M. Coetzee, V.L. Stonick, Sequential homotopy-based computation of multiple solutions to nonlinear equations, in: Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Detroit, USA, May 1995.
[31] J. Verschelde, Homotopy Continuation Methods for Solving Polynomial Systems, Ph.D. Thesis, Katholieke Universiteit Leuven, 1996.
[32] Li, T.Y., Solving polynomial systems, Math. intell., 9, 3, 33-39, (1987) · Zbl 0637.65047
[33] Press, W.H.; Flannery, B.P.; Teukolsky, S.A.; Vetterling, W.T., Numerical recipes in C: the art of scientific computing, (1988), Cambridge University Press Cambridge · Zbl 0661.65001
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.