
Logconcave reward functions and optimal stopping rules of threshold form. (English) Zbl 1325.60062

Summary: In the literature, the problem of maximizing the expected discounted reward over all stopping rules has been explicitly solved for a number of reward functions (including \((\max\{x,0\})^{\nu}\), \(\nu>0\), in particular) when the underlying process is either a random walk in discrete time or a Lévy process in continuous time. All such reward functions are increasing and logconcave, while the corresponding optimal stopping rules are of threshold form. In this paper, we explore the close connection between increasing, logconcave reward functions and optimal stopping rules of threshold form. In the discrete case, we show that if a reward function defined on \(\mathbb{Z}\) is nonnegative, increasing and logconcave, then the optimal stopping rule is of threshold form, provided that the underlying random walk is skip-free to the right. In the continuous case, it is shown that for a reward function defined on \(\mathbb{R}\) that is nonnegative, increasing, logconcave and right-continuous, the optimal stopping rule is of threshold form, provided that the underlying process is a spectrally negative Lévy process. Furthermore, we establish the necessity of logconcavity and monotonicity of the reward function for the optimal stopping rule to be of threshold form in the discrete (continuous, resp.) case when the underlying process belongs to the class of Bernoulli random walks (Brownian motions, resp.) with a downward drift.
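The following is a minimal numerical sketch of the discrete setting (not taken from the paper; the up-step probability \(p\), the discount factor, the exponent \(\nu\) and the state-space truncation are illustrative assumptions). It runs value iteration for \(V(x)=\sup_{\tau}\mathbb{E}\bigl[\beta^{\tau}g(X_{\tau})\mid X_0=x\bigr]\) with a Bernoulli random walk with downward drift and the logconcave reward \(g(x)=(\max\{x,0\})^{\nu}\), and checks that the computed stopping region \(\{x:g(x)=V(x)\}\) is an upper half-line, i.e. that the optimal rule is of threshold form.

```python
# Minimal sketch (assumed parameters, grid and truncation; not from the paper):
# value iteration for V(x) = sup_tau E[ beta^tau * g(X_tau) | X_0 = x ],
# where X is a Bernoulli random walk with downward drift (p < 1/2) and
# g(x) = (max(x, 0))**nu is nonnegative, increasing and logconcave.
import numpy as np

p, beta, nu = 0.4, 0.95, 2.0            # up-step probability, discount factor, reward exponent
grid = np.arange(-40, 81)               # truncated state space (illustrative)
g = np.maximum(grid, 0.0) ** nu         # reward g(x) = (x^+)^nu

V = g.copy()
for _ in range(10000):                  # Bellman iteration: V <- max(g, beta * E[V(X_1)])
    cont = beta * (p * np.roll(V, -1) + (1 - p) * np.roll(V, 1))
    cont[0], cont[-1] = V[0], V[-1]     # crude handling of the truncated boundary
    V_new = np.maximum(g, cont)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

interior = (grid >= 0) & (grid <= 60)   # keep away from truncation artifacts
stop = interior & (g >= V - 1e-9)       # stopping region {x : g(x) = V(x)}
x_star = grid[stop].min()
print("threshold x* =", x_star)
print("stopping region is an upper half-line:",
      np.array_equal(grid[stop], np.arange(x_star, 61)))
```

With these illustrative parameters the printed stopping region is indeed \(\{x^{*},x^{*}+1,\dots\}\) on the truncated grid, in line with the threshold-form conclusion of the paper for skip-free-to-the-right random walks.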

MSC:

60G40 Stopping times; optimal stopping problems; gambling theory
60G50 Sums of independent random variables; random walks
60G51 Processes with independent increments; Lévy processes
62L15 Optimal stopping in statistics
60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
60J65 Brownian motion