Recent zbMATH articles in MSC 62Ehttps://www.zbmath.org/atom/cc/62E2021-04-16T16:22:00+00:00WerkzeugPercentage points for testing homogeneity of covariance matrices of several bivariate Gaussian populations.https://www.zbmath.org/1456.620992021-04-16T16:22:00+00:00"Nagar, Daya K."https://www.zbmath.org/authors/?q=ai:nagar.daya-krishna"Zarrazola, Edwin"https://www.zbmath.org/authors/?q=ai:zarrazola.edwinSummary: In this article, the exact distribution and exact percentage points for testing equality of covariance matrices of \(q\) bivariate Gaussian populations are obtained. The distribution has been derived using the inverse Mellin transformation and the residue theorem. The percentage points have been computed for \(q=2(1)5\).A simplified proof of CLT for convex bodies.https://www.zbmath.org/1456.520042021-04-16T16:22:00+00:00"Fresen, Daniel J."https://www.zbmath.org/authors/?q=ai:fresen.daniel-jSummary: We present a short proof of Klartag's central limit theorem for convex bodies, using only the most classical facts about log-concave functions. An appendix is included where we give the proof that the thin shell implies CLT. The paper is accessible to anyone.Characterizations based on certain regression assumptions of adjacent order statistics.https://www.zbmath.org/1456.620212021-04-16T16:22:00+00:00"Huang, Wen-Jang"https://www.zbmath.org/authors/?q=ai:huang.wen-jang"Su, Nan-Cheng"https://www.zbmath.org/authors/?q=ai:su.nan-cheng(no abstract)On the unbiased asymptotic normality of quantile regression with fixed effects.https://www.zbmath.org/1456.622832021-04-16T16:22:00+00:00"Galvao, Antonio F."https://www.zbmath.org/authors/?q=ai:galvao.antonio-f-jun"Gu, Jiaying"https://www.zbmath.org/authors/?q=ai:gu.jiaying"Volgushev, Stanislav"https://www.zbmath.org/authors/?q=ai:volgushev.stanislavSummary: Nonlinear panel data models with fixed individual effects provide an important set of tools for describing microeconometric data. 
In a large class of such models (including probit, proportional hazard and quantile regression to name just a few) it is impossible to difference out the individual effects, and inference is usually justified in a `large \(n\) large \(T\)' asymptotic framework. However, there is a considerable gap in the type of assumptions that are currently imposed in models with smooth score functions (such as probit and proportional hazard) and quantile regression. In the present paper we show that this gap can be bridged and establish unbiased asymptotic normality for fixed effects quantile regression panels under conditions on \(n,T\) that are very close to what is typically assumed in standard nonlinear panels. Our results considerably improve upon existing theory and show that quantile regression is applicable to the same type of panel data (in terms of \(n,T)\) as other commonly used nonlinear panel data models. Thorough numerical experiments confirm our theoretical findings.Asymptotic F tests under possibly weak identification.https://www.zbmath.org/1456.622112021-04-16T16:22:00+00:00"Martínez-Iriarte, Julián"https://www.zbmath.org/authors/?q=ai:martinez-iriarte.julian"Sun, Yixiao"https://www.zbmath.org/authors/?q=ai:sun.yixiao"Wang, Xuexin"https://www.zbmath.org/authors/?q=ai:wang.xuexinSummary: This paper develops asymptotic F tests robust to weak identification and temporal dependence. The test statistics we focus on are modified versions of the S statistic of \textit{J. H. Stock} and \textit{J. H. Wright} [Econometrica 68, No. 5, 1055--1096 (2000; Zbl 1015.62105)] and the K statistic of \textit{F. Kleibergen} [Econometrica 73, No. 4, 1103--1123 (2005; Zbl 1152.91715)]. In the former case, the modification involves only a multiplicative degree-of-freedom adjustment, and the modified S statistic is asymptotically F distributed under fixed-smoothing asymptotics regardless of the strength of the model identification.
In the latter case, the modification involves an additional multiplicative adjustment that uses a J statistic for testing overidentification. We show that the modified K statistic is asymptotically F-distributed when the model parameters are completely unidentified or nearly-weakly identified. When the model parameters are weakly identified, the F approximation for the K statistic can be justified under the conventional asymptotics. The F approximations account for the estimation errors in the underlying heteroskedasticity and autocorrelation robust variance estimators, which the chi-squared approximations ignore. Monte Carlo simulations show that the F approximations are much more accurate than the corresponding chi-squared approximations in finite samples.Time-dependent probability density functions and information geometry in stochastic logistic and Gompertz models.https://www.zbmath.org/1456.620242021-04-16T16:22:00+00:00"Tenkès, Lucille-Marie"https://www.zbmath.org/authors/?q=ai:tenkes.lucille-marie"Hollerbach, Rainer"https://www.zbmath.org/authors/?q=ai:hollerbach.rainer"Kim, Eun-Jin"https://www.zbmath.org/authors/?q=ai:kim.eunjinBayesian estimation of parameters for the bivariate Gompertz regression model with shared gamma frailty under random censoring.https://www.zbmath.org/1456.621042021-04-16T16:22:00+00:00"Hanagal, David D."https://www.zbmath.org/authors/?q=ai:hanagal.david-d"Sharma, Richa"https://www.zbmath.org/authors/?q=ai:sharma.richaSummary: We consider the shared gamma frailty model with Gompertz distribution as baseline hazard for bivariate survival times. The problem of analyzing and estimating parameters of bivariate Gompertz distribution with shared gamma frailty is of interest and the focus of this paper. We solve the inferential problem in a Bayesian framework with the help of a comprehensive simulation study. 
We introduce a Bayesian estimation procedure using the Markov Chain Monte Carlo (MCMC) technique to estimate the parameters involved in the proposed model and then compare the true values of the parameters with the estimated values for different sample sizes. A search of the literature suggests that no previous work has addressed Bayesian estimation of the parameters of the bivariate Gompertz distribution with shared frailty.Saddlepoint approximations for short and long memory time series: a frequency domain approach.https://www.zbmath.org/1456.622012021-04-16T16:22:00+00:00"La Vecchia, Davide"https://www.zbmath.org/authors/?q=ai:la-vecchia.davide"Ronchetti, Elvezio"https://www.zbmath.org/authors/?q=ai:ronchetti.elvezio-mSummary: Saddlepoint techniques provide numerically accurate, small sample approximations to the distribution of estimators and test statistics. Except for a few simple models, these approximations are not available in the framework of stationary time series. We contribute to filling this gap. Under short or long range serial dependence, for Gaussian and non Gaussian processes, we show how to derive and implement saddlepoint approximations for two relevant classes of frequency domain statistics: ratio statistics and Whittle's estimator. We compare our new approximations to the ones obtained by the standard asymptotic theory and by two widely-applied bootstrap methods. The numerical exercises for Whittle's estimator show that our approximations yield improvements in accuracy, while preserving analytical tractability.
A real data example concludes the paper.On interval and point estimators based on a penalization of the modified profile likelihood.https://www.zbmath.org/1456.620372021-04-16T16:22:00+00:00"Ventura, Laura"https://www.zbmath.org/authors/?q=ai:ventura.laura"Racugno, Walter"https://www.zbmath.org/authors/?q=ai:racugno.walterSummary: In the presence of a nuisance parameter, one widely shared approach to likelihood inference on a scalar parameter of interest is based on the profile likelihood and its various modifications. In this paper, we add a penalization to the modified profile likelihood, which is based on a suitable matching prior, and we discuss the frequency properties of interval estimators and point estimators based on this penalized modified profile likelihood. Two simulation studies are presented, and we indicate the improvement of the proposed penalized modified profile likelihood over its counterparts.Nonparametric assessment of hedge fund performance.https://www.zbmath.org/1456.622402021-04-16T16:22:00+00:00"Almeida, Caio"https://www.zbmath.org/authors/?q=ai:almeida.caio"Ardison, Kym"https://www.zbmath.org/authors/?q=ai:ardison.kym"Garcia, René"https://www.zbmath.org/authors/?q=ai:garcia.reneSummary: We propose a new class of performance measures for Hedge Fund (HF) returns based on a family of empirically identifiable stochastic discount factors (SDFs). The SDF-based measures incorporate no-arbitrage pricing restrictions and naturally embed information about higher-order mixed moments between HF and benchmark factor returns. We provide a full asymptotic theory for our SDF estimators to test for the statistical significance of each fund's performance and for the relevance of individual benchmark factors within each proposed measure. We apply our methodology to a panel of 4815 individual hedge funds.
Our empirical analysis reveals that fewer funds have a statistically significant positive alpha compared to Jensen's alpha obtained by the traditional linear regression approach. Moreover, the funds' rankings vary considerably between the two approaches. Performance also varies between the members of our family because funds differ in their exposure to higher-order moments of the benchmark factors, highlighting the potential heterogeneity across investors in evaluating performance.Posterior distribution of nondifferentiable functions.https://www.zbmath.org/1456.622922021-04-16T16:22:00+00:00"Kitagawa, Toru"https://www.zbmath.org/authors/?q=ai:kitagawa.toru"Montiel Olea, José Luis"https://www.zbmath.org/authors/?q=ai:montiel-olea.jose-luis"Payne, Jonathan"https://www.zbmath.org/authors/?q=ai:payne.jonathan-l"Velez, Amilcar"https://www.zbmath.org/authors/?q=ai:velez.amilcarSummary: This paper examines the asymptotic behavior of the posterior distribution of a possibly nondifferentiable function \(g(\theta)\), where \(\theta\) is a finite-dimensional parameter of either a parametric or semiparametric model. The main assumption is that the distribution of a suitable estimator \(\hat{\theta}_n\), its bootstrap approximation, and the Bayesian posterior for \(\theta\) all agree asymptotically.
It is shown that whenever \(g\) is locally Lipschitz, though not necessarily differentiable, the posterior distribution of \(g(\theta)\) and the bootstrap distribution of \(g(\hat{\theta}_n)\) coincide asymptotically. One implication is that Bayesians can interpret bootstrap inference for \(g(\theta)\) as approximately valid posterior inference in a large sample. Another implication -- built on known results about bootstrap inconsistency -- is that credible intervals for a nondifferentiable parameter \(g(\theta)\) cannot be presumed to be approximately valid confidence intervals (even when this relation holds true for \(\theta)\).Nonparametric estimation of the cross ratio function.https://www.zbmath.org/1456.620482021-04-16T16:22:00+00:00"Abrams, Steven"https://www.zbmath.org/authors/?q=ai:abrams.steven"Janssen, Paul"https://www.zbmath.org/authors/?q=ai:janssen.paul"Swanepoel, Jan"https://www.zbmath.org/authors/?q=ai:swanepoel.jan-w-h"Veraverbeke, Noël"https://www.zbmath.org/authors/?q=ai:veraverbeke.noelThe authors study a smooth, nonparametric Bernstein-based estimator for the
cross ratio function \[\theta(t_1, t_2) = \frac{\lambda(t_1|T_2 = t_2)}{\lambda(t_1 | T_2 > t_2)}\]
where \(T_1\), \(T_2\) are absolutely continuous variables and \(\lambda(\cdot| T_2 = t_2)\) and \(\lambda(\cdot| T_2 > t_2)\) are the conditional hazard rate functions for \(T_1\) given \(T_2 = t_2\) and \(T_2 > t_2\).
The estimator is constructed as follows. Let \((T_{1,1}, T_{2,1}),\dots,(T_{1,n},T_{2,n})\) be a sample from \((T_1, T_2)\), and set
\[S_{j,n}(t) = \frac{1}{n}\sum_{i=1}^{n}I (T_{j,i} > t)~(j=1,2),\]
\[S_{n}(t_1,t_2) = \frac{1}{n}\sum_{i=1}^{n}I (T_{1,i} > t_1,T_{2,i} > t_2).\]
The empirical copula \(C_{n}\) is given by
\[C_n(u_1,u_2) = S_{n}\left[S^{-1}_{1,n}(u_1),S^{-1}_{2,n}(u_2)\right],\]
and its Bernstein estimator is
\[C_{m,n}(u_1,u_2) = \sum_{k=0}^{m}\sum_{l=0}^{m}C_{n}\left(\frac{k}{m},\frac{l}{m}\right)\binom{m}{k}u_{1}^{k}(1-u_{1})^{m-k}\binom{m}{l}u_{2}^{l}(1-u_{2})^{m-l}.\]
The Bernstein estimator for \(\frac{\partial }{\partial u_2}C(u_1, u_2)\) is
\[C_{m,n}^{(2)}(u_1,u_2) = \frac{\partial}{\partial u_{2}}C_{m,n}(u_1,u_2).\]
The proposed estimator for the cross ratio function \(\theta(t_1, t_2)\) is \[\hat \theta(t_1, t_2) = \frac{\hat\lambda(t_1|T_2 = t_2)}{\hat\lambda(t_1 | T_2 > t_2)}\] where
\[\hat \lambda(t_1|T_2 = t_2) = \frac{1}{b_{n}}\int_{0}^{\infty}K_{0}\left(\frac{t_1-s}{b_{n}}\right)\hat \Lambda_{m}(ds|T_2=t_2),\]
\[\hat \lambda(t_1|T_2 > t_2) = \frac{1}{b_{n}}\int_{0}^{\infty}K_{0}\left(\frac{t_1-s}{b_{n}}\right)\hat \Lambda_{m}(ds|T_2>t_2)\]
with
\[\hat \Lambda_{m}(t_1|T_2=t_2) = \int_{0}^{t_{1}}\frac{-d_{s}C_{m,n}^{(2)}(S_{1,n}(s),S_{2,n}(t_2))}{C_{m,n}^{(2)}(S_{1,n}(s-),S_{2,n}(t_2))},\]
\[\hat \Lambda_{m}(t_1|T_2>t_2) = \int_{0}^{t_{1}}\frac{-d_{s}C_{m,n}(S_{1,n}(s),S_{2,n}(t_2))}{C_{m,n}(S_{1,n}(s-),S_{2,n}(t_2))},\]
and the smoothing kernel \(K_0\) is a continuous probability density function of bounded variation with bounded support. As \(n \rightarrow \infty\), it is assumed that \(m \rightarrow \infty\) and \(b_{n} \rightarrow 0\) with appropriate speed.
The authors prove that under appropriate smoothness assumptions, as \(n \rightarrow \infty\), \((n\cdot m^{-1/2}\cdot b_n)^{1/2}[\hat \theta(t_1, t_2) - \theta(t_1, t_2)]\) is asymptotically normal, and its mean and variance are given explicitly.
The finite sample performance of the estimator is assessed on simulated data for Clayton, Gumbel and Frank copulas. Its use is illustrated using a dataset on the relationship between food expenditure and net income.
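As a numerical illustration of the first two building blocks of this construction, the following is a minimal sketch of the empirical survival functions \(S_{j,n}\), the empirical copula \(C_n\), and its Bernstein smoother \(C_{m,n}\). The data-generating process, the sample size \(n\) and the degree \(m\) below are arbitrary placeholders, not the settings used by the authors, and the remaining steps (the derivative estimator \(C^{(2)}_{m,n}\) and the kernel-smoothed hazards) are omitted.

```python
import numpy as np
from math import comb

# Placeholder data: independent exponential margins (product copula),
# not the Clayton/Gumbel/Frank setups simulated in the paper.
rng = np.random.default_rng(0)
n = 200
t1 = rng.exponential(size=n)
t2 = rng.exponential(size=n)

def surv_inv(sample, u):
    """Generalized inverse of the empirical survival function:
    smallest observed t with S_n(t) <= u.
    At the i-th order statistic, S_n equals (n - i)/n."""
    xs = np.sort(sample)
    i = int(np.ceil(len(xs) * (1.0 - u)))   # need (n - i)/n <= u
    return xs[min(max(i, 1), len(xs)) - 1]

def C_emp(u1, u2):
    """Empirical copula C_n(u1,u2) = S_n(S_{1,n}^{-1}(u1), S_{2,n}^{-1}(u2))."""
    a, b = surv_inv(t1, u1), surv_inv(t2, u2)
    return float(np.mean((t1 > a) & (t2 > b)))

def C_bern(u1, u2, m=20):
    """Bernstein estimator C_{m,n}: C_n evaluated on the (m+1) x (m+1) grid
    (k/m, l/m), weighted by Bernstein basis polynomials in u1 and u2."""
    grid = np.array([[C_emp(k / m, l / m) for l in range(m + 1)]
                     for k in range(m + 1)])
    b1 = np.array([comb(m, k) * u1**k * (1 - u1)**(m - k) for k in range(m + 1)])
    b2 = np.array([comb(m, l) * u2**l * (1 - u2)**(m - l) for l in range(m + 1)])
    return float(b1 @ grid @ b2)

print(C_bern(0.5, 0.5))  # for independent T_1, T_2 this should be near 0.25
```

From here, \(C^{(2)}_{m,n}\) is available in closed form by differentiating the Bernstein polynomial in \(u_2\), after which the cumulative hazards \(\hat\Lambda_m\) and the kernel-smoothed hazard rates can be assembled as in the displays above.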
Reviewer: Tamás Mátrai (Edinburgh)Convex bound approximations for sums of random variables under multivariate log-generalized hyperbolic distribution and asymptotic equivalences.https://www.zbmath.org/1456.600572021-04-16T16:22:00+00:00"Li, Zihao"https://www.zbmath.org/authors/?q=ai:li.zihao"Luo, Ji"https://www.zbmath.org/authors/?q=ai:luo.ji"Yao, Jing"https://www.zbmath.org/authors/?q=ai:yao.jingSummary: We propose convex bound approximations for the sum of log-multivariate generalized hyperbolic random variables. We derive explicit formulas for the distributions of convex bounds and for the frequently-used risk measures such as Value-at-Risk, Conditional Tail Expectation and stop-loss premium. We present numerical results showing that such approximations are not only accurate but also robust. Moreover, we further prove that there exist asymptotic equivalences between the sum and its convex bounds. To further illustrate the potentials of the convex bound approximations, we provide an application to capital allocation. We show that our formulas can be easily applied to precisely approximate capital allocation rule based on the conditional tail expectation.Robust causality test of infinite variance processes.https://www.zbmath.org/1456.621752021-04-16T16:22:00+00:00"Akashi, Fumiya"https://www.zbmath.org/authors/?q=ai:akashi.fumiya"Taniguchi, Masanobu"https://www.zbmath.org/authors/?q=ai:taniguchi.masanobu"Monti, Anna Clara"https://www.zbmath.org/authors/?q=ai:monti.anna-claraSummary: This paper develops a robust causality test for time series with infinite variance innovation processes. First, we introduce a measure of dependence for vector nonparametric linear processes, and derive the asymptotic distribution of the test statistic by \textit{M. Taniguchi} et al. [J. Multivariate Anal. 56, No. 2, 259--283 (1996; Zbl 0863.62042)] in the infinite variance case. 
Second, we construct a weighted version of the generalized empirical likelihood (GEL) test statistic, called the self-weighted GEL statistic in the time domain. The limiting distribution of the self-weighted GEL test statistic is shown to be the usual chi-squared one regardless of whether the model has finite variance or not. Some simulation experiments illustrate the satisfactory finite-sample performance of the proposed test.The statistical curse of the second half-rank.https://www.zbmath.org/1456.620302021-04-16T16:22:00+00:00"Desbois, Jean"https://www.zbmath.org/authors/?q=ai:desbois.jean"Ouvry, Stéphane"https://www.zbmath.org/authors/?q=ai:ouvry.stephane"Polychronakos, Alexios"https://www.zbmath.org/authors/?q=ai:polychronakos.alexios-pOn the optimality of the aggregate with exponential weights for low temperatures.https://www.zbmath.org/1456.621362021-04-16T16:22:00+00:00"Lecué, Guillaume"https://www.zbmath.org/authors/?q=ai:lecue.guillaume"Mendelson, Shahar"https://www.zbmath.org/authors/?q=ai:mendelson.shahar(no abstract)Non-standard inference for augmented double autoregressive models with null volatility coefficients.https://www.zbmath.org/1456.621982021-04-16T16:22:00+00:00"Jiang, Feiyu"https://www.zbmath.org/authors/?q=ai:jiang.feiyu"Li, Dong"https://www.zbmath.org/authors/?q=ai:li.dong.2"Zhu, Ke"https://www.zbmath.org/authors/?q=ai:zhu.keSummary: This paper considers an augmented double autoregressive (DAR) model, which allows null volatility coefficients to circumvent the over-parameterization problem in the DAR model. Since the volatility coefficients might be on the boundary, the statistical inference methods based on the Gaussian quasi-maximum likelihood estimation (GQMLE) become non-standard, and their asymptotics require the data to have a finite sixth moment, which narrows the applicable scope in studying heavy-tailed data.
To overcome this deficiency, this paper develops a systematic statistical inference procedure based on the self-weighted GQMLE for the augmented DAR model. Except for the Lagrange multiplier test statistic, the Wald, quasi-likelihood ratio and portmanteau test statistics are all shown to have non-standard asymptotics. The entire procedure is valid as long as the data are stationary, and its usefulness is illustrated by simulation studies and one real example.Robust estimation with many instruments.https://www.zbmath.org/1456.623072021-04-16T16:22:00+00:00"Sølvsten, Mikkel"https://www.zbmath.org/authors/?q=ai:solvsten.mikkelSummary: Linear instrumental variables models are widely used in empirical work, but often associated with low estimator precision. This paper proposes an estimator that is robust to outliers and shows that the estimator is minimax optimal in a class of estimators that includes the limited maximum likelihood estimator (LIML). Intuitively, this optimal robust estimator combines LIML with Winsorization of the structural residuals and the Winsorization leads to improved precision under thick-tailed error distributions. Consistency and asymptotic normality of the estimator are established under many instruments asymptotics and a consistent variance estimator which allows for asymptotically valid inference is provided.Regularly varying random fields.https://www.zbmath.org/1456.601272021-04-16T16:22:00+00:00"Wu, Lifan"https://www.zbmath.org/authors/?q=ai:wu.lifan"Samorodnitsky, Gennady"https://www.zbmath.org/authors/?q=ai:samorodnitsky.gennady-pSummary: We study the extremes of multivariate regularly varying random fields. The crucial tools in our study are the tail field and the spectral field, notions that extend the tail and spectral processes of \textit{B. Basrak} and \textit{J. Segers} [Stochastic Processes Appl. 119, No. 4, 1055--1080 (2009; Zbl 1161.60319)]. 
The spatial context requires multiple notions of extremal index, and the tail and spectral fields are applied to clarify these notions and other aspects of extremal clusters. An important application of the techniques we develop is to the Brown-Resnick random fields.Inference in heavy-tailed vector error correction models.https://www.zbmath.org/1456.622162021-04-16T16:22:00+00:00"She, Rui"https://www.zbmath.org/authors/?q=ai:she.rui"Ling, Shiqing"https://www.zbmath.org/authors/?q=ai:ling.shiqingSummary: This paper first studies the full rank least squares estimator (FLSE) of the heavy-tailed vector error correction (VEC) models. It is shown that the rate of convergence of the FLSE related to the long-run parameters is \(n\) (sample size) and its limiting distribution is a stochastic integral in terms of two stable random processes when the tail index \(\alpha\in(0,2)\). Furthermore, we show that the rate of convergence of the FLSE related to the short-term parameters is \(n^{1/\alpha}\widetilde{L}(n)\) and its limiting distribution is a functional of two stable processes when \(\alpha\in(1,2)\), where \(\widetilde{L}(n)\) is a slowly varying function. However, when \(\alpha\in(0,1)\), we show that the rate of convergence of the FLSE related to the short-term parameters is \(n\) and its limiting distribution not only depends on the stationary component itself but also depends on the unit root component. Based on the FLSE, we then study the limiting behavior of the reduced rank LSE (RLSE). The results related to the short-term parameters of both FLSE and RLSE are significantly different from those of heavy-tailed time series in the literature, and it may provide new insights in the area for future research. Simulation study is carried out to demonstrate the performance of both estimators. 
A real example with application to 3-month Treasury Bill rate, 1-year Treasury Bill rate and Federal Fund rate is given.Bootstrapping structural change tests.https://www.zbmath.org/1456.622702021-04-16T16:22:00+00:00"Boldea, Otilia"https://www.zbmath.org/authors/?q=ai:boldea.otilia"Cornea-Madeira, Adriana"https://www.zbmath.org/authors/?q=ai:cornea-madeira.adriana"Hall, Alastair R."https://www.zbmath.org/authors/?q=ai:hall.alastair-rSummary: This paper demonstrates the asymptotic validity of methods based on the wild recursive and wild fixed bootstraps for testing hypotheses about discrete parameter change in linear models estimated via Two Stage Least Squares. The framework allows for the errors to exhibit conditional and/or unconditional heteroscedasticity, and for the reduced form to be unstable. Simulation evidence indicates the bootstrap tests yield reliable inferences in the sample sizes often encountered in macroeconomics. If the errors exhibit unconditional heteroscedasticity and/or the reduced form is unstable then the bootstrap methods are particularly attractive because the limiting distributions of the test statistics are not pivotal.Modeling time series when some observations are zero.https://www.zbmath.org/1456.621912021-04-16T16:22:00+00:00"Harvey, Andrew"https://www.zbmath.org/authors/?q=ai:harvey.andrew-c|harvey.andrew-r"Ito, Ryoko"https://www.zbmath.org/authors/?q=ai:ito.ryokoSummary: Sometimes a significant proportion of observations in a time series are zero, but the remaining observations are positive and measured on a continuous scale. We propose a new dynamic model in which the conditional distribution of the observations is constructed by shifting a distribution for non-zero observations to the left and censoring negative values. The key to generalizing the censoring approach to the dynamic case is to have (the logarithm of) the location/scale parameter driven by a filter that depends on the score of the conditional distribution. 
An exponential link function means that seasonal effects can be incorporated into the model and this is done by means of a cubic spline (which can potentially be time-varying). The model is fitted to daily rainfall in locations in northern Australia and England and compared with a dynamic zero-augmented model.Issues in the estimation of mis-specified models of fractionally integrated processes.https://www.zbmath.org/1456.622122021-04-16T16:22:00+00:00"Martin, Gael M."https://www.zbmath.org/authors/?q=ai:martin.gael-m"Nadarajah, K."https://www.zbmath.org/authors/?q=ai:nadarajah.k"Poskitt, D. S."https://www.zbmath.org/authors/?q=ai:poskitt.donald-stephenSummary: This short paper provides a comprehensive set of new theoretical results on the impact of mis-specifying the short run dynamics in fractionally integrated processes. We show that four alternative parametric estimators -- frequency domain maximum likelihood, Whittle, time domain maximum likelihood and conditional sum of squares -- converge to the same pseudo-true value under common mis-specification, and that they possess a common asymptotic distribution. The results are derived assuming the true data generating mechanism is a fractional linear process driven by a martingale difference innovation. A completely general parametric specification for the short run dynamics of the estimated (mis-specified) fractional model is considered, and with long memory, short memory and antipersistence in both the model and the data generating mechanism accommodated. The paper can be seen as extending an existing line of research on mis-specification in fractional models, important contributions to which have appeared in \textit{Journal of Econometrics}. It also complements a range of existing asymptotic results on estimation in \textit{correctly specified} fractional models. 
Open problems in the area are the subject of the final discussion.The uniform validity of impulse response inference in autoregressions.https://www.zbmath.org/1456.621952021-04-16T16:22:00+00:00"Inoue, Atsushi"https://www.zbmath.org/authors/?q=ai:inoue.atsushi.1"Kilian, Lutz"https://www.zbmath.org/authors/?q=ai:kilian.lutzSummary: Existing proofs of the asymptotic validity of conventional methods of impulse response inference based on higher-order autoregressions are pointwise only. In this paper, we establish the uniform asymptotic validity of conventional asymptotic and bootstrap inference about individual impulse responses and vectors of impulse responses when the horizon is fixed with respect to the sample size. For inference about vectors of impulse responses based on Wald test statistics to be uniformly valid, lag-augmented autoregressions are required, whereas inference about individual impulse responses is uniformly valid under weak conditions even without lag augmentation. We introduce a new rank condition that ensures the uniform validity of inference on impulse responses and show that this condition holds under weak conditions. Simulations show that the highest finite-sample accuracy is achieved when bootstrapping the lag-augmented autoregression using the bias adjustments of the second author [``Finite-sample properties of percentile and percentile-\(t\) bootstrap confidence intervals for impulse responses'', Rev. Econ. Stat. 81, No. 4, 652--660 (1999; \url{doi:10.1162/003465399558517})]. The conventional bootstrap percentile interval for impulse responses based on this approach remains accurate even at long horizons. 
We provide a formal asymptotic justification for this result.Quasi-maximum likelihood estimation and bootstrap inference in fractional time series models with heteroskedasticity of unknown form.https://www.zbmath.org/1456.621822021-04-16T16:22:00+00:00"Cavaliere, Giuseppe"https://www.zbmath.org/authors/?q=ai:cavaliere.giuseppe"Nielsen, Morten Ørregaard"https://www.zbmath.org/authors/?q=ai:nielsen.morten-orregaard"Taylor, A. M. Robert"https://www.zbmath.org/authors/?q=ai:taylor.a-m-robertSummary: We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short memory parameters of the model. The limiting null distributions of these test statistics are shown to be non-pivotal under heteroskedasticity, while that of a robust Wald statistic (based around a sandwich estimator of the variance) is pivotal. We show that wild bootstrap implementations of the tests deliver asymptotically pivotal inference under the null. We demonstrate the consistency and asymptotic normality of the bootstrap estimators, and further establish the global consistency of the asymptotic and bootstrap tests under fixed alternatives. 
Monte Carlo simulations highlight significant improvements in finite sample behavior using the bootstrap in both heteroskedastic and homoskedastic environments. Our theoretical developments and Monte Carlo simulations include two bootstrap algorithms which are based on model estimates obtained either under the null hypothesis or unrestrictedly. Our simulation results suggest that the former is preferable to the latter, displaying superior size control yet largely comparable power.On the Chi-square test of homogeneity in case of a simultaneous increase of the number of observations and the number of interval partitions.https://www.zbmath.org/1456.620822021-04-16T16:22:00+00:00"Babilua, Petre"https://www.zbmath.org/authors/?q=ai:babilua.petre"Nadaraya, Elizbar"https://www.zbmath.org/authors/?q=ai:nadaraya.e-a"Patsatsia, Mzevinar"https://www.zbmath.org/authors/?q=ai:patsatsia.mzevinarSummary: The limiting distribution of the statistic of the homogeneity test of chi-square is established in case of a simultaneous increase of the number of observations and the number of interval partitions in case of ``close'' alternatives of Pitman type. Also, it is compared with another test based on the integral square deviation of a nonparametric kernel estimate of density. 
It is shown that the limiting power of the above-mentioned test is greater than the limiting power of Pearson's Chi-square test.The limiting behavior of some infinitely divisible exponential dispersion models.https://www.zbmath.org/1456.620262021-04-16T16:22:00+00:00"Bar-Lev, Shaul K."https://www.zbmath.org/authors/?q=ai:bar-lev.shaul-k"Letac, Gérard"https://www.zbmath.org/authors/?q=ai:letac.gerard-g(no abstract)Efficient estimation of heterogeneous coefficients in panel data models with common shocks.https://www.zbmath.org/1456.622942021-04-16T16:22:00+00:00"Li, Kunpeng"https://www.zbmath.org/authors/?q=ai:li.kunpeng"Cui, Guowei"https://www.zbmath.org/authors/?q=ai:cui.guowei"Lu, Lina"https://www.zbmath.org/authors/?q=ai:lu.linaSummary: This paper investigates the estimation and inference issues of heterogeneous coefficients in panel data models with common shocks. We propose a novel two-step method to estimate the heterogeneous coefficients. We establish the asymptotic theory of our estimators, including consistency, asymptotic representation, and limiting distribution. Our two-step method can effectively address the limitations of the existing methods, such as the common correlated effects method proposed by \textit{M. H. Pesaran} [ Econometrica 74, No. 4, 967--1012 (2006; Zbl 1152.91718)] and the iterated principal components method proposed by \textit{M. Song}, [Asymptotic theory for dynamic heterogeneous panels with cross-sectional dependence and its applications. Manuscript. Columbia University (2013)]. The two-step estimator is as efficient as the two existing competitors in the basic model, and more efficient in the model with zero restrictions. Intensive Monte Carlo simulations show that the proposed estimator performs robustly in a variety of data setups.Nonuniform bounds in the Poisson approximation with applications to informational distances. 
II.https://www.zbmath.org/1456.620252021-04-16T16:22:00+00:00"Bobkov, Sergey G."https://www.zbmath.org/authors/?q=ai:bobkov.sergey-g"Chistyakov, Gennadiy P."https://www.zbmath.org/authors/?q=ai:chistyakov.gennadiy-p"Götze, Friedrich"https://www.zbmath.org/authors/?q=ai:gotze.friedrich-w
Summary: We explore asymptotically optimal bounds for deviations of distributions of independent Bernoulli random variables from the Poisson limit, in terms of the Shannon relative entropy and Rényi/relative Tsallis distances (including Pearson's \(\chi^2\)). This part generalizes the results obtained in Part I [the authors, IEEE Trans. Inf. Theory 65, No. 9, 5283--5293 (2019; Zbl 1432.62034)] and removes any constraints on the parameters of the Bernoulli distributions.

On the distribution of the \(\operatorname{T}^2\) statistic, used in statistical process monitoring, for high-dimensional data.https://www.zbmath.org/1456.620222021-04-16T16:22:00+00:00"Ahmad, M. Rauf"https://www.zbmath.org/authors/?q=ai:ahmad.muhammad-rauf|rauf-ahmad.m"Ahmed, S. Ejaz"https://www.zbmath.org/authors/?q=ai:ahmed.syed-ejaz
Summary: A modification to the asymptotic distribution of the \(\mathrm{T}^2\)-statistic used in multivariate process monitoring is provided for the case where the dimension of the vectors may exceed the sample size. Under a certain mild condition, a unified limit distribution is obtained that applies to both Phase I and Phase II charts. Furthermore, the limit holds for charts based on individual observations as well as on subgroup means. The limit is easy to apply and does not require any data preprocessing or dimension reduction.
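For context, the classical Hotelling \(\mathrm{T}^2\) statistic whose high-dimensional limit is at issue can be sketched as follows (a textbook illustration, not the modified statistic of the paper; note that the plain version inverts the sample covariance matrix, which is exactly what breaks down when the dimension approaches or exceeds the sample size):

```python
import numpy as np

def hotelling_t2(x, mu0):
    """Classical Hotelling T^2 for testing a mean vector against mu0:
    T^2 = n * (xbar - mu0)' S^{-1} (xbar - mu0), with S the unbiased
    sample covariance. Requires S to be invertible, i.e. p < n."""
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    d = x.mean(axis=0) - np.asarray(mu0, dtype=float)
    s = np.cov(x, rowvar=False)               # unbiased sample covariance
    return float(n * d @ np.linalg.solve(s, d))

# n = 50 observations in p = 3 dimensions, target mean zero
rng = np.random.default_rng(0)
t2 = hotelling_t2(rng.normal(size=(50, 3)), np.zeros(3))
```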
Simulations are used to demonstrate the accuracy of the proposed limit.

Tests for regression coefficients in high dimensional partially linear models.https://www.zbmath.org/1456.620742021-04-16T16:22:00+00:00"Liu, Yan"https://www.zbmath.org/authors/?q=ai:liu.yan.1|liu.yan.7|liu.yan.5|liu.yan|liu.yan.8|liu.yan.3|liu.yan.2|liu.yan.6|liu.yan.4"Zhang, Sanguo"https://www.zbmath.org/authors/?q=ai:zhang.sanguo"Ma, Shuangge"https://www.zbmath.org/authors/?q=ai:ma.shuangge"Zhang, Qingzhao"https://www.zbmath.org/authors/?q=ai:zhang.qingzhao
Summary: We propose a U-statistic test for regression coefficients in high dimensional partially linear models. In addition, the proposed method is extended to testing a subset of the coefficients. Asymptotic distributions of the test statistics are established. Simulation studies demonstrate satisfactory finite-sample performance.

Edgeworth's time series model: not AR(1) but same covariance structure.https://www.zbmath.org/1456.622142021-04-16T16:22:00+00:00"Portnoy, Stephen"https://www.zbmath.org/authors/?q=ai:portnoy.stephen-l
Summary: In an 1886 paper [Phil. Mag. (5) 22, 371--384 (1886; JFM 18.0176.02)], \textit{F. Y. Edgeworth} developed a method for simulating time series processes with substantial dependence. A version of this process with normal errors has the same means and covariance structure as an AR(1) process, but is actually a mixture of a very large number of processes, some of which are not stationary. That is, joint distributions at lag 3 or greater are not normal but are mixtures of normals (even though all successive pairs are bivariate normal). It thus serves as a cautionary example for time series analysis: though the AR(1) process cannot be distinguished from the Edgeworth process by second-order properties, inferences based on an AR(1) assumption can fail under the Edgeworth model.
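The shared second-order structure referred to here is the AR(1) autocovariance, whose lag-\(k\) autocorrelation is \(\rho^k\). A quick simulation check of that standard fact (a generic AR(1) sketch, not Edgeworth's construction):

```python
import numpy as np

# Simulate a stationary AR(1): x_t = rho * x_{t-1} + e_t.
rng = np.random.default_rng(2)
rho, n = 0.8, 200_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0] / np.sqrt(1 - rho**2)   # start in the stationary distribution
for t in range(1, n):
    x[t] = rho * x[t - 1] + e[t]

def acf(series, k):
    """Sample autocorrelation at lag k."""
    c = series - series.mean()
    return (c[:-k] * c[k:]).mean() / (c * c).mean()

lag1, lag3 = acf(x, 1), acf(x, 3)   # should be near rho and rho**3
```

Any process matching these values at every lag is indistinguishable from AR(1) by second-order properties alone, which is what makes the Edgeworth example a useful warning.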
This model has many additional surprising features, among them that it has a Markov structure but is not generated by a one-step transition operator.

Nakagami distribution with heavy tails and applications to mining engineering data.https://www.zbmath.org/1456.623172021-04-16T16:22:00+00:00"Reyes, Jimmy"https://www.zbmath.org/authors/?q=ai:reyes.jimmy"Rojas, Mario A."https://www.zbmath.org/authors/?q=ai:rojas.mario-a"Venegas, Osvaldo"https://www.zbmath.org/authors/?q=ai:venegas.osvaldo"Gómez, Héctor W."https://www.zbmath.org/authors/?q=ai:gomez.hector-w
Summary: In this paper we introduce a new extension of the Nakagami distribution, obtained as the quotient of two independent random variables: a Nakagami-distributed variable divided by a power of a uniform variable on (0,1). The new distribution therefore has a heavier tail than the Nakagami distribution. We obtain the density function and some properties important for inference, such as the moment and maximum likelihood estimators. We examine two sets of real data from the mining industry which demonstrate the usefulness of the new model in analyses with high kurtosis.

On a generalisation of uniform distribution and its properties.https://www.zbmath.org/1456.620232021-04-16T16:22:00+00:00"Jayakumar, K."https://www.zbmath.org/authors/?q=ai:jayakumar.k-r"Sankaran, Kothamangalth Krishnan"https://www.zbmath.org/authors/?q=ai:sankaran.kothamangalth-krishnan
Summary: \textit{S. Nadarajah} et al. [J. Stat. Comput. Simulation 83, No. 8, 1389--1404 (2013; Zbl 1453.62369)] introduced a family of lifetime models using the truncated negative binomial distribution, and derived some properties of this family; it generalizes the Marshall-Olkin family of distributions. In this paper, we introduce the generalized uniform distribution (GUD) using the approach of Nadarajah et al. [loc. cit.].
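The quotient construction in the Nakagami abstract above can be simulated directly. The sketch below uses the standard gamma representation of the Nakagami law and hypothetical parameter names (the abstract does not specify its parameterization); dividing by a power of a uniform variable inflates the right tail:

```python
import numpy as np

rng = np.random.default_rng(1)
size = 100_000

# Nakagami(m, omega) via its gamma representation:
# if G ~ Gamma(shape=m, scale=omega/m), then sqrt(G) ~ Nakagami(m, omega).
m, omega = 2.0, 1.0
x = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

# Quotient construction (hypothetical power parameter q > 1):
# Y = X / U**(1/q) with U ~ Uniform(0, 1) independent of X.
q = 4.0
u = rng.uniform(size=size)
y = x / u ** (1.0 / q)

# E[U**(-1/q)] = q / (q - 1) > 1, so Y is stochastically larger,
# with a heavier right tail than the Nakagami variable itself.
heavier_mean = y.mean() > x.mean()
```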
The shape properties of the density and hazard functions are discussed. Expressions for the moments, order statistics and entropies are obtained, and an estimation procedure is also discussed. The GUD introduced here generalizes the Marshall-Olkin extended uniform distribution studied in [\textit{K. K. Jose} and \textit{E. Krishna}, ProbStat Forum 4, Article No. 08, 78--88 (2011; Zbl 1235.62014)].

Asymptotic inference for the constrained quantile regression process.https://www.zbmath.org/1456.620762021-04-16T16:22:00+00:00"Parker, Thomas"https://www.zbmath.org/authors/?q=ai:parker.thomas-h|parker.thomas-s
Summary: I investigate the asymptotic distribution of linear quantile regression coefficient estimates when the parameter lies on the boundary of the parameter space. To allow for inferences made across many conditional quantiles, I provide a uniform characterization of constrained quantile regression estimates as a stochastic process over an interval of quantile levels. To do this, I pose the process of estimates as solutions to a family of constrained optimization problems indexed by the quantile level. A uniform characterization of the dual solution to these problems -- the so-called regression rankscore process -- is also derived; it can be used for score-type inference in quantile regression.
The asymptotic behavior of the quasi-likelihood ratio, Wald and regression rankscore processes, for inference when the null hypothesis asserts that the parameters lie on a boundary, follows from the features of the constrained solutions.

On mixed \(AR(1)\) time series model with approximated beta marginal.https://www.zbmath.org/1456.622132021-04-16T16:22:00+00:00"Popović, Božidar V."https://www.zbmath.org/authors/?q=ai:popovic.bozidar-v"Pogány, Tibor K."https://www.zbmath.org/authors/?q=ai:pogany.tibor-k"Nadarajah, Saralees"https://www.zbmath.org/authors/?q=ai:nadarajah.saralees
(no abstract)

Generalized inverses of increasing functions and Lebesgue decomposition.https://www.zbmath.org/1456.600482021-04-16T16:22:00+00:00"de la Fortelle, Arnaud"https://www.zbmath.org/authors/?q=ai:de-la-fortelle.arnaud
Summary: The reader should be aware of the explanatory nature of this article. Its main goal is to introduce a broader vision of a topic than a more focused research paper would, demonstrating some new results but mainly starting from general considerations to build an overview of a theme with links to connected problems.
Our original question related to the height of random growing trees. When investigating limit processes, we may consider measures defined by increasing functions and their generalized inverses; this leads to an analysis of the Lebesgue decomposition of generalized inverses. Moreover, since the measures that motivated us initially are stochastic, it is natural to study the continuity of this transform in order to take limits.
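The central object here is the generalized inverse \(f^{-1}(y)=\inf\{x: f(x)\ge y\}\) of an increasing function, which is well defined even where \(f\) is flat or jumps. It can be approximated numerically; a minimal grid-based sketch (my illustration, not the paper's measure-theoretic construction):

```python
import numpy as np

def generalized_inverse(f, grid, y):
    """Numerical generalized inverse of a nondecreasing function f:
    f^{-1}(y) = inf { x : f(x) >= y }, approximated on a grid."""
    values = np.asarray([f(v) for v in grid])   # nondecreasing array
    idx = int(np.searchsorted(values, y, side="left"))
    return float(grid[min(idx, len(grid) - 1)])

def step(x):
    # Increasing, but flat on [1, 2]: f(x) = min(x, 1) + max(x - 2, 0).
    return min(x, 1.0) + max(x - 2.0, 0.0)

grid = np.linspace(0.0, 3.0, 3001)
# On the flat stretch the inverse jumps back to the left endpoint:
inv_at_flat = generalized_inverse(step, grid, 1.0)   # inf{x : f(x) >= 1} = 1
inv_regular = generalized_inverse(step, grid, 0.5)   # plain inverse, = 0.5
```

The flat stretch of `step` is exactly the kind of feature that makes the ordinary inverse undefined and the generalized inverse the right object for the Lebesgue decomposition discussed here.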
When scaling growing processes like trees, the time origin and scale can be replaced by another process. This leads us to a clock metaphor: an increasing function is viewed as a clock reading from a given timeline. This is nothing more than an explanatory vision, not a mathematical concept, but that is the nature of this paper. So we consider an increasing function as a time change between two timelines, which leads to the idea that an increasing function and its generalized inverse play symmetric roles. We then introduce a universal time that symmetrically links an increasing function and its generalized inverse, and we show how both are smoothly defined from this universal time. This allows us to describe the Lebesgue decomposition of both an increasing function and its generalized inverse.

Book review of: K. Krishnamoorthy, Handbook of statistical distributions, with applications.https://www.zbmath.org/1456.000142021-04-16T16:22:00+00:00"Jones, M. C."https://www.zbmath.org/authors/?q=ai:jones.m-christopher-w|jones.michael-chris|jones.michael-c
Review of [Zbl 1111.62011].