On risk concentration for convex combinations of linear estimators
- Authors: Golubev G.K. (1, 2)
- Affiliations:
  1. Kharkevich Institute for Information Transmission Problems
  2. CNRS
- Issue: Vol 52, No 4 (2016)
- Pages: 344-358
- Section: Methods of Signal Processing
- URL: https://journal-vniispk.ru/0032-9460/article/view/166332
- DOI: https://doi.org/10.1134/S0032946016040037
- ID: 166332
Abstract
We consider the problem of estimating an unknown vector β ∈ R^p in the linear model Y = Xβ + σξ, where ξ ∈ R^n is standard discrete white Gaussian noise and X is a known n × p matrix with n ≥ p. It is assumed that p is large and X is ill-conditioned. To estimate β in this situation, we use a family of spectral regularizations of the maximum likelihood method, β_α(Y) = H_α(X^T X) β°(Y), α ∈ R_+, where β°(Y) is the maximum likelihood estimate of β and {H_α(·): R_+ → [0, 1], α ∈ R_+} is a given ordered family of functions indexed by the regularization parameter α. The final estimate of β is constructed as a convex combination (in α) of the estimates β_α(Y) with weights chosen on the basis of the observations Y. We present inequalities for large deviations of the norm of the prediction error of this method.
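The abstract does not specify the filter family H_α or the data-driven weighting rule, so the sketch below is only illustrative, not the paper's method: it takes the ridge-type (Tikhonov) filter H_α(λ) = λ/(λ + α) and exponential weights driven by a Mallows-Cp-type unbiased estimate of the prediction risk, both hypothetical choices, to show how a convex combination of spectral regularizations of β°(Y) can be assembled in practice.

```python
import numpy as np

def aggregated_spectral_estimate(X, Y, sigma, alphas, temperature=None):
    """Illustrative sketch of aggregating spectral regularizations
    beta_alpha = H_alpha(X^T X) beta_mle by a convex combination in alpha.

    Assumptions not taken from the paper: the ridge-type filter
    H_alpha(lam) = lam / (lam + alpha), Mallows-Cp-type risk estimates,
    and exponential weights with temperature ~ 2*sigma^2.
    """
    n, p = X.shape
    # Eigendecomposition of the symmetric matrix X^T X.
    lam, V = np.linalg.eigh(X.T @ X)
    # X^T Y expressed in the eigenbasis of X^T X.
    z = V.T @ (X.T @ Y)
    if temperature is None:
        temperature = 2.0 * sigma ** 2  # placeholder choice of weight temperature

    betas, risks = [], []
    for a in alphas:  # alphas must be positive
        h = lam / (lam + a)                 # spectral filter H_alpha(lambda)
        # beta_alpha = H_alpha(X^T X) beta_mle; for the ridge filter the
        # eigenbasis coordinates are simply z / (lam + a).
        beta_a = V @ (z / (lam + a))
        betas.append(beta_a)
        resid = Y - X @ beta_a
        # Unbiased (Cp-type) estimate of the prediction risk ||X(beta_alpha - beta)||^2:
        # ||Y - X beta_alpha||^2 + 2 sigma^2 tr(H_alpha) - n sigma^2.
        risks.append(resid @ resid + 2.0 * sigma ** 2 * h.sum() - n * sigma ** 2)

    risks = np.array(risks)
    # Convex combination with exponential weights based on the estimated risks.
    w = np.exp(-(risks - risks.min()) / temperature)
    w /= w.sum()
    return np.sum(w[:, None] * np.array(betas), axis=0)
```

The exponential-weight temperature and the Cp-type risk estimate are placeholders; the paper's large-deviation inequalities concern the norm of the prediction error of the aggregated estimator, not any particular weight rule.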
About the authors
G. K. Golubev
Kharkevich Institute for Information Transmission Problems; CNRS
Author for correspondence.
Email: golubev.yuri@gmail.com
Moscow, Russian Federation; Marseille, France