an:06760809
Zbl 1368.90148
Lombardi, Michele; Gualandi, Stefano
A Lagrangian propagator for artificial neural networks in constraint programming
EN
Constraints 21, No. 4, 435-462 (2016).
00359811
2016
j
90C30 68T05
constraint programming; Lagrangian relaxation; neural networks
Summary: This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. It has been shown that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome this limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead for optimizing the Lagrangian multipliers is kept within a reasonable level via a few simple techniques. This paper is an extended version of the authors' [``A new propagator for two-layer neural networks in empirical model learning'', Lect. Notes Comput. Sci. 8124, 448--463 (2013; \url{doi:10.1007/978-3-642-40627-0_35})], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead reduction techniques, and a thorough experimental evaluation.
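The summary mentions optimizing Lagrangian multipliers with a subgradient algorithm. As a minimal sketch of that general scheme (not the paper's actual propagator, whose relaxation is non-linear and network-specific), the following toy example maximizes the Lagrangian dual of "minimize $x^2$ subject to $x \ge 1$" by subgradient ascent; all names and the step-size choice are illustrative assumptions.

```python
# Hedged sketch: subgradient ascent on a Lagrangian dual for a toy problem,
# illustrating the multiplier-optimization step mentioned in the abstract.
# NOT the paper's method; just the generic subgradient scheme it refers to.
# Primal: minimize x^2 subject to x >= 1 (optimum: x = 1, value 1).
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x); inner min attained at x = lam / 2.

def dual_value(lam):
    """Return the dual function value g(lam) = min_x L(x, lam) and its minimizer."""
    x = lam / 2.0  # unconstrained minimizer of the Lagrangian for fixed lam
    return x**2 + lam * (1.0 - x), x

lam = 0.0  # multiplier for the relaxed constraint x >= 1 (kept non-negative)
for _ in range(200):
    g, x = dual_value(lam)
    subgrad = 1.0 - x  # a subgradient of the (concave) dual at lam
    # Constant step works here because this dual is smooth; general subgradient
    # schemes typically use a diminishing step size instead.
    lam = max(0.0, lam + 0.5 * subgrad)

g, x = dual_value(lam)
# lam converges to 2, the inner minimizer x to 1, and the dual bound g to 1,
# matching the primal optimum (no duality gap for this convex toy problem).
```

In the paper's setting the same loop structure would repeatedly solve the relaxed network subproblem and update the multipliers, with the dual value supplying the bound used for propagation.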