zbMATH — the first resource for mathematics

Continuous lunches are free plus the design of optimal optimization algorithms. (English) Zbl 1206.90133
The authors investigate extensions of no-free-lunch theorems to countably infinite and continuous domains, and derive an optimal optimization algorithm under the assumptions of a prior distribution on the fitness functions and a finite number of iterates.
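For intuition, the classical finite-domain no-free-lunch theorem that the paper extends can be checked by brute force. The sketch below (an illustration, not code from the paper) averages the best value found by two different non-revisiting black-box search strategies over every function from a 4-point domain to {0, 1, 2}; the NFL theorem predicts the averages coincide for any query budget.

```python
from itertools import product

DOMAIN = [0, 1, 2, 3]
CODOMAIN = [0, 1, 2]

def run(strategy, f, budget):
    """Run a non-revisiting black-box strategy on f; return the best value seen."""
    seen = {}  # queried point -> observed value
    for _ in range(budget):
        x = strategy(seen)
        seen[x] = f[x]
    return max(seen.values())

def fixed_order(seen):
    # Deterministic sweep: query 0, 1, 2, 3 in order.
    return [x for x in DOMAIN if x not in seen][0]

def greedy_reverse(seen):
    # Adaptive: start at 3; then always query the unseen point
    # farthest from the best point found so far.
    unseen = [x for x in DOMAIN if x not in seen]
    if not seen:
        return 3
    best_x = max(seen, key=seen.get)
    return max(unseen, key=lambda x: abs(x - best_x))

def average_best(strategy, budget):
    # Average the best value found over every f: DOMAIN -> CODOMAIN
    # (i.e., a uniform prior over all 3**4 = 81 functions).
    fs = [dict(zip(DOMAIN, vals))
          for vals in product(CODOMAIN, repeat=len(DOMAIN))]
    return sum(run(strategy, f, budget) for f in fs) / len(fs)

for budget in (1, 2, 3, 4):
    # NFL: averaged over all functions, the two strategies are indistinguishable.
    assert average_best(fixed_order, budget) == average_best(greedy_reverse, budget)
```

The equality holds because, over the set of all functions on a finite domain, any deterministic non-revisiting algorithm observes the same multiset of value sequences; the paper's contribution concerns what survives (and what fails) when the domain becomes countably infinite or continuous.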

90C27 Combinatorial optimization
90C59 Approximation methods and heuristics in mathematical programming
68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)
Full Text: DOI
[1] Auger, A., Schoenauer, M., Teytaud, O.: Local and global order 3/2 convergence of a surrogate evolutionary algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), pp. 857–864. Assoc. Comput. Mach., New York (2005)
[2] Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957) · Zbl 0077.13605
[3] Bertsekas, D.: Dynamic Programming and Optimal Control. Athena Scientific, Belmont (1995) · Zbl 0904.90170
[4] Billingsley, P.: Probability and Measure. Wiley, New York (1986) · Zbl 0649.60001
[5] Booker, A., Dennis, Jr., J.E., Frank, P., Serafini, D., Torczon, V., Trosset, M.: A rigorous framework for optimization of expensive functions by surrogates. Struct. Optim. 17(1), 1–13 (1999)
[6] Conn, A., Scheinberg, K., Toint, L.: Recent progress in unconstrained nonlinear optimization without derivatives. Math. Program. 79, 397–414 (1997) · Zbl 0887.90154
[7] Corne, D.W., Knowles, J.D.: Some multiobjective optimizers are better than others. In: Proceedings of the 2003 IEEE Congress on Evolutionary Computation (CEC 2003), pp. 2506–2512. IEEE Press, New York (2003)
[8] Culberson, J.C.: On the futility of blind search: an algorithmic view of "No Free Lunch". Evol. Comput. 6(2), 109–127 (1998) · Zbl 05412754
[9] Daniell, P.J.: Integrals in an infinite number of dimensions. Ann. Math. 20, 281–288 (1919) · JFM 47.0385.04
[10] Dennis, J., Torczon, V.: Managing approximation models in optimization. In: Alexandrov, N., Hussaini, M.-Y. (eds.) Multidisciplinary Design Optimization: State of the Art, pp. 330–347. SIAM, Philadelphia (1997)
[11] Doob, J.: Stochastic process measurability conditions. Ann. Inst. Fourier Grenoble 25(3–4), 163–176 (1975) · Zbl 0287.60032
[12] Droste, S., Jansen, T., Wegener, I.: Perhaps not a free lunch but at least a free appetizer. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 99), pp. 833–839, San Francisco, CA. Morgan Kaufmann, San Mateo (1999)
[13] Droste, S., Jansen, T., Wegener, I.: Optimization with randomized search heuristics – the (A)NFL theorem, realistic scenarios, and difficult functions. Theor. Comput. Sci. 287(1), 131–144 (2002) · Zbl 1061.68145
[14] Emmerich, M., Giotis, A., Özdemir, M., Bäck, T., Giannakoglou, K.: Metamodel-assisted evolution strategies. In: Parallel Problem Solving from Nature (PPSN 2002). Lecture Notes in Computer Science, vol. 2439, pp. 371–380. Springer, Berlin (2002)
[15] Gelly, S., Hoock, J.B., Rimmel, A., Teytaud, O., Kalemkarian, Y.: The parallelization of Monte-Carlo planning. In: Proceedings of the International Conference on Informatics in Control, Automation and Robotics (ICINCO 2008), pp. 198–203 (2008, to appear)
[16] Gelly, S., Ruette, S., Teytaud, O.: Comparison-based algorithms are robust and randomized algorithms are anytime. Evol. Comput. J. 15(4), 411–434 (2007) · Zbl 05514010
[17] Gelly, S., Silver, D.: Combining online and offline knowledge in UCT. In: Proceedings of the 24th International Conference on Machine Learning (ICML 2007), pp. 273–280. Assoc. Comput. Mach., New York (2007)
[18] Gelly, S., Teytaud, O., Gagné, C.: Resource-aware parameterizations of EDA. In: Proceedings of the 2006 IEEE Congress on Evolutionary Computation (CEC 2006), pp. 2506–2512. IEEE Press, New York (2006)
[19] Geman, D., Jedynak, B.: An active testing model for tracking roads in satellite images. IEEE Trans. Pattern Anal. Mach. Intell. 18(1), 1–14 (1996) · Zbl 05110517
[20] Van Grieken, M.: Optimisation pour l'apprentissage et apprentissage pour l'optimisation. Ph.D. Thesis, Université Paul Sabatier, Toulouse (2004)
[21] Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 9(2), 159–195 (2001) · Zbl 05412812
[22] Hopkins, D., Lavelle, T., Patnaik, S.: Neural network and regression methods demonstrated in the design optimization of a subsonic aircraft. Technical Report, NASA Glen Research Center, Research & Technology (2002)
[23] Husken, M., Jin, Y., Sendhoff, B.: Structure optimization of neural networks for evolutionary design optimization. Soft Comput. 9(1), 21–28 (2005) · Zbl 05036838
[24] Ibric, S., Jovanovic, M., Djuric, Z., Parojcic, J., Petrovic, S., Solomun, L., Stupar, B.: Artificial neural networks in the modeling and optimization of aspirin extended release tablets with Eudragit L100 as matrix substance. AAPS PharmSciTech 4(1), 62–70 (2003)
[25] Igel, C., Toussaint, M.: A No-Free-Lunch theorem for non-uniform distributions of target functions. J. Math. Model. Algorithms 3(4), 313–322 (2004) · Zbl 1079.90111
[26] Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13(4), 455–492 (1998) · Zbl 0917.90270
[27] Keane, A., Nair, P.: Computational Approaches for Aerospace Design: The Pursuit of Excellence. Wiley, New York (2005)
[28] Kern, S., Hansen, N., Koumoutsakos, P.: Local meta-models for optimization using evolution strategies. In: Proceedings of the Parallel Problem Solving from Nature Conference (PPSN 2006), pp. 939–948 (2006)
[29] Kleijnen, J.: Sensitivity analysis of simulation experiments: regression analysis and statistical design. Math. Comput. Simul. 34(3–4), 297–315 (1992) · Zbl 0850.62631
[30] Kocsis, L., Szepesvari, C.: Bandit-based Monte-Carlo planning. In: European Conference on Machine Learning (ECML 2006). Lecture Notes in Computer Science, vol. 4212, pp. 282–293. Springer, Berlin (2006)
[31] Kolmogorov, A.: Foundations of the Theory of Probability (Original: Grundbegriffe der Wahrscheinlichkeitsrechnung). Chelsea, New York (1956) (1933 original)
[32] Leary, S.J., Bhaskar, A., Keane, A.J.: A derivative based surrogate model for approximating and optimizing the output of an expensive computer simulation. J. Glob. Optim. 30(1), 39–58 (2004) · Zbl 1136.90302
[33] Poupart, P., Vlassis, N., Hoey, J., Regan, K.: An analytic solution to discrete Bayesian reinforcement learning. In: Proceedings of the International Conference on Machine Learning (ICML 2006), pp. 697–704. Assoc. Comput. Mach., New York (2006)
[34] Powell, M.J.D.: Unconstrained minimization algorithms without computation of derivatives. Boll. Unione Mat. Ital. 9, 60–69 (1974) · Zbl 0324.65030
[35] Ames Research Center: Aerodynamic design using neural networks: the amount of computation needed to optimize a design is reduced. Technical Report, NASA Tech Briefs Online (2003)
[36] Radcliffe, N., Surry, P.: Fundamental limitations on search algorithms: Evolutionary computing in perspective. In: van Leeuwen, J. (eds.) Computer Science Today. Lecture Notes in Computer Science, pp. 275–291. Springer, Berlin (1995)
[37] Runarsson, T.-P.: Ordinal regression in evolutionary computation. In: Proceedings of the Parallel Problem Solving from Nature Conference (PPSN 2006), pp. 1048–1057 (2006)
[38] Schumacher, C., Vose, M.D., Whitley, L.D.: The No Free Lunch and Problem Description Length. In: Genetic and Evolutionary Computation Conference (GECCO 2001), pp. 565–570 (2001)
[39] VanMarcke, E.: Random Fields: Analysis and Synthesis. MIT Press, Cambridge (1998) · Zbl 1206.60051
[40] Villemonteix, J., Vazquez, E., Walter, E.: An informational approach to the global optimization of expensive-to-evaluate functions. J. Glob. Optim. (to appear). Online First · Zbl 1180.90253
[41] Wang, Y., Gelly, S.: Modifications of UCT and sequence-like simulations for Monte-Carlo Go. In: IEEE Symposium on Computational Intelligence and Games, Honolulu, Hawaii, pp. 175–182 (2007)
[42] Wolpert, D., Macready, W.: No Free Lunch Theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997) · Zbl 05451863
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.