zbMATH — the first resource for mathematics

Algorithm runtime prediction: methods & evaluation. (English) Zbl 1334.68185
Summary: Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm’s runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of existing models, new families of models, and – perhaps most importantly – a much more thorough treatment of algorithm parameters as model inputs. We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to a wide range of runtime modelling techniques from the literature. Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
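The core idea — learning a model that maps problem-specific instance features (and, when treating parameterized algorithms, the algorithm's parameter settings) to runtime — can be sketched with a random forest regressor, one of the model families evaluated in the paper. This is a minimal illustrative sketch, not the authors' implementation: the feature names, the parameter, and the synthetic runtime function below are assumptions chosen only to mimic the shape of the problem (a hardness peak near the SAT phase-transition ratio, runtime growing with instance size).

```python
# Hedged sketch of an empirical runtime model: instance features plus an
# algorithm parameter predict log runtime. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300

# Hypothetical inputs: two instance features and one algorithm parameter.
ratio = rng.uniform(3.0, 6.0, n)        # e.g. clauses-to-variables ratio
size = rng.uniform(50.0, 500.0, n)      # e.g. number of variables
noise_param = rng.uniform(0.0, 1.0, n)  # e.g. a solver noise parameter

X = np.column_stack([ratio, size, noise_param])

# Synthetic log10 runtime: hardest near the phase-transition ratio 4.26,
# growing with instance size, modulated by the parameter, plus noise.
y = (-((ratio - 4.26) ** 2) + 0.004 * size
     + 0.5 * noise_param + rng.normal(0.0, 0.1, n))

# Fit on 250 instances; evaluate generalization on 50 unseen ones.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:250], y[:250])
pred = model.predict(X[250:])
rmse = float(np.sqrt(np.mean((pred - y[250:]) ** 2)))
```

Including the parameter column alongside the instance features is what lets one model generalize across a parameterized algorithm space rather than fitting a single fixed configuration.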

MSC:
68T05 Learning and adaptive systems in artificial intelligence
68Q25 Analysis of algorithms and problem complexity
90C11 Mixed integer programming
90C27 Combinatorial optimization