
Vol 28, No 2 (2019)

Article

Asymptotic Theory for Longitudinal Data with Missing Responses Adjusted by Inverse Probability Weights

Balan R.M., Jankovic D.

Abstract

In this article, we propose a new method for analyzing longitudinal data that contain responses missing at random. The method consists in solving the generalized estimating equation (GEE) of [8], in which the incomplete responses are replaced by values adjusted using the inverse probability weights proposed in [17]. We show that the root of this estimating equation is consistent and asymptotically normal, essentially under the same conditions on the marginal distribution and the surrogate correlation matrix as those presented in [15] in the case of complete data, and under minimal assumptions on the missingness probabilities. The method is applied to a real-life data set taken from [13], which examines the incidence of respiratory disease in a sample of 250 pre-school-age Indonesian children who were examined every 3 months for 18 months, using age, gender, and vitamin A deficiency as covariates.

Mathematical Methods of Statistics. 2019;28(2):83-103
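As a simplified illustration of the adjustment underlying this abstract (not the paper's full GEE machinery), the following sketch shows the basic inverse-probability-weighting (IPW) correction for a response missing at random; the covariate, the logistic missingness model, and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: response y depends on covariate x, and the chance of
# observing y also depends on x (missing at random given x).
n = 2000
x = rng.normal(size=n)                      # covariate
y = 2.0 + x + rng.normal(size=n)            # complete response, mean 2.0
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))    # P(response observed | x), assumed known
observed = rng.random(n) < p_obs            # observation indicators

# The naive mean over observed cases is biased (observation favors large x,
# hence large y); re-weighting each observed response by 1/p_obs corrects it.
naive_mean = y[observed].mean()
ipw_mean = np.sum(y[observed] / p_obs[observed]) / np.sum(1.0 / p_obs[observed])
```

In the paper this idea enters componentwise inside the estimating equation rather than through a simple mean, but the bias-removal mechanism is the same.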

The Empirical Process of Residuals from an Inverse Regression

Kutta T., Bissantz N., Chown J., Dette H.

Abstract

In this paper we investigate an indirect regression model characterized by the Radon transformation. This model is useful for the recovery of medical images obtained from computed tomography scans. The indirect regression function is estimated using a series estimator motivated by a spectral cut-off technique. Further, we investigate the empirical process of residuals from this regression and show that it satisfies a functional central limit theorem.

Mathematical Methods of Statistics. 2019;28(2):104-126
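The spectral cut-off idea can be sketched in one dimension with a hypothetical diagonal operator in the Fourier domain standing in for the Radon transform (the operator, its singular values, and the cut-off level below are illustrative assumptions, not the paper's construction).

```python
import numpy as np

rng = np.random.default_rng(1)

# Indirect regression y = (A f)(t) + noise, where the assumed operator A
# damps the k-th Fourier coefficient by lam_k = 1 / (1 + |k|).
n = 512
t = np.arange(n) / n
f = np.sin(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)   # true function

freqs = np.fft.fftfreq(n, d=1.0 / n)        # integer frequencies
lam = 1.0 / (1.0 + np.abs(freqs))           # singular values of A (assumed known)

g = np.fft.ifft(lam * np.fft.fft(f)).real   # A f, applied in the Fourier domain
y = g + 0.05 * rng.normal(size=n)           # noisy indirect observations

# Spectral cut-off estimator: invert A coefficient-wise, but keep only
# frequencies |k| <= K, where inversion is stable.
K = 8
keep = np.abs(freqs) <= K
fhat = np.fft.ifft(np.where(keep, np.fft.fft(y) / lam, 0.0)).real

rmse = np.sqrt(np.mean((fhat - f) ** 2))
```

The cut-off K balances the bias from discarded high frequencies against the noise amplification caused by dividing by small singular values.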

On Predictive Density Estimation under α-Divergence Loss

L’Moudden A., Marchand È.

Abstract

Based on X ∼ Nd(θ, σX2Id), we study the efficiency of predictive densities under α-divergence loss Lα for estimating the density of Y ∼ Nd(θ, σY2Id). We identify a large number of cases where improvement on a plug-in density is obtainable by expanding the variance, thus extending earlier findings applicable to Kullback-Leibler loss. The results and proofs are unified with respect to the dimension d, the variances σX2 and σY2, and the choice of loss Lα, α ∈ (−1, 1). The findings also apply to a large number of plug-in densities, as well as to restricted parameter spaces with θ ∈ Θ ⊂ ℝd. The theoretical findings are accompanied by various observations, illustrations, and implications dealing, for instance, with robustness with respect to the model variances and simultaneous dominance with respect to the loss.

Mathematical Methods of Statistics. 2019;28(2):127-143
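For reference, one standard parameterization of the α-divergence loss between a true density p and a predictive density q̂ is the following (normalizations vary across papers, so this should be read as an illustrative convention rather than the paper's exact definition):

```latex
L_\alpha(p,\hat q)
  \;=\; \frac{4}{1-\alpha^{2}}
  \left( 1 - \int \hat q(y)^{\frac{1+\alpha}{2}}\, p(y)^{\frac{1-\alpha}{2}}\, dy \right),
  \qquad \alpha \in (-1,1).
% In the limit \alpha \to -1 this parameterization recovers the
% Kullback--Leibler loss \int p(y)\,\log\bigl(p(y)/\hat q(y)\bigr)\, dy.
```

Under this convention the family interpolates between the two Kullback-Leibler orderings as α runs over (−1, 1), which is why results unified in α cover the classical KL case as a limit.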

On the Asymptotic Power of Tests of Fit under Local Alternatives in Autoregression

Boldin M.V.

Abstract

We consider a stationary AR(p) model. The autoregression parameters are unknown, as is the distribution of the innovations. Based on the residuals from the parameter estimates, an analog of the empirical distribution function is defined, and tests of Kolmogorov and ω2 type are constructed for testing hypotheses on the distribution of the innovations. We obtain the asymptotic power of these tests under local alternatives.

Mathematical Methods of Statistics. 2019;28(2):144-154
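The residual-based test statistic can be sketched for the AR(1) special case (the abstract covers general AR(p)); the sample size, true parameter, and hypothesized N(0, 1) innovation law below are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Simulate a stationary AR(1) series with standard normal innovations.
n = 2000
a = 0.5                                   # true autoregression parameter
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + eps[t]

# Least-squares estimate of a, then residuals as estimated innovations.
a_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
res = np.sort(x[1:] - a_hat * x[:-1])

# Kolmogorov statistic: sup-distance between the residual empirical
# distribution function and the hypothesized N(0, 1) distribution function.
m = len(res)
F0 = np.array([0.5 * (1.0 + erf(r / sqrt(2.0))) for r in res])
D = max(np.max(np.arange(1, m + 1) / m - F0),
        np.max(F0 - np.arange(0, m) / m))
```

The ω2-type statistic replaces the supremum by an integrated squared distance; in both cases the paper's point is how the asymptotic power behaves when the innovation distribution deviates locally from the null.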

A Multiple Hypothesis Testing Approach to Detection of Changes in Distribution

Golubev G., Safarian M.

Abstract

Let X1, X2,... be independent random variables observed sequentially, such that X1,..., Xθ−1 have a common probability density p0, while Xθ, Xθ+1,... are all distributed according to p1 ≠ p0. It is assumed that p0 and p1 are known, but the change time θ ∈ ℤ+ is unknown, and the goal is to construct a stopping time τ that detects the change-point θ as soon as possible. Standard approaches to this problem rely essentially on some prior information about θ. For instance, in the Bayes approach, θ is assumed to be a random variable with a known probability distribution. In methods related to hypothesis testing, this a priori information is hidden in the so-called average run length. The main goal of this paper is to construct stopping times that are free from a priori information about θ. More formally, we propose an approach to solving approximately the following minimization problem:

\(\Delta(\theta;{\tau^\alpha})\rightarrow\min_{\tau^\alpha}\;\;\text{subject}\;\text{to}\;\;\alpha(\theta;{\tau^\alpha})\leq\alpha\;\text{for}\;\text{any}\;\theta\geq1,\)

where α(θ; τ) = Pθ{τ < θ} is the false alarm probability and Δ(θ; τ) = Eθ(τ − θ)+ is the average detection delay computed for a given stopping time τ. In contrast to the standard CUSUM algorithm based on the sequential maximum likelihood test, our approach is related to multiple hypothesis testing methods and permits, in particular, the construction of universal stopping times with nearly Bayes detection delays.

Mathematical Methods of Statistics. 2019;28(2):155-167
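For orientation, here is a sketch of the classical CUSUM baseline that the abstract contrasts with (not the authors' multiple-testing procedure); the Gaussian pre- and post-change densities, the change time, and the threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Known densities: p0 = N(0, 1) before the change, p1 = N(1, 1) from the
# (unknown) change time theta onward.
theta = 200
x = np.concatenate([rng.normal(0.0, 1.0, theta - 1),   # X_1, ..., X_{theta-1} ~ p0
                    rng.normal(1.0, 1.0, 600)])        # X_theta, X_theta+1, ... ~ p1

def cusum_stopping_time(obs, threshold):
    """Return the first time t at which the CUSUM statistic crosses the threshold."""
    s = 0.0
    for t, xt in enumerate(obs, start=1):
        llr = xt - 0.5             # log p1(xt)/p0(xt) for N(1,1) vs N(0,1)
        s = max(0.0, s + llr)      # CUSUM recursion: reset to 0 when negative
        if s >= threshold:
            return t
    return len(obs)                # no alarm within the observed sample

tau = cusum_stopping_time(x, threshold=8.0)
```

Raising the threshold trades a smaller false alarm probability for a longer detection delay; the paper's contribution is a family of stopping times controlling the false alarm probability uniformly in θ without such prior tuning.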