Description

The dissertation considers three topics pertaining to minimax shrinkage estimation:
1) Minimax estimation of a mean vector with variable selection for classes of spherically symmetric distributions: The results of Zhou and Hwang [31] and Maruyama [22] are extended from the normal case with known scale to scale mixtures of normals, and more generally to spherically symmetric distributions with a residual vector. Slight extensions of the class of estimators to which the results apply are also given.
2) Minimax shrinkage estimators of a location vector under concave loss: In particular, it is shown that, for a wide class of concave loss functions, James-Stein and Baranchik-type estimators that dominate the "usual" estimator under quadratic loss also dominate it under these concave losses. The distributions studied include multivariate normal distributions with covariance equal to a known multiple of the identity, normal distributions with covariance equal to an unknown scale times the identity, and general scale mixtures of multivariate normal distributions with an unknown scale.
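To make the James-Stein dominance claim concrete, the following Monte Carlo sketch compares the quadratic risk of the usual estimator X with the classical James-Stein estimator shrinking toward the origin, in the simplest setting the abstract mentions (multivariate normal, covariance equal to the identity). The dimension, true mean, and replication count are illustrative choices, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_rep = 10, 5000
theta = np.full(p, 0.5)  # hypothetical true mean vector

def james_stein(x):
    # Classical James-Stein estimator shrinking toward 0:
    # (1 - (p - 2) / ||x||^2) * x, for X ~ N_p(theta, I)
    return (1.0 - (len(x) - 2) / np.sum(x**2)) * x

risk_mle = 0.0  # accumulated quadratic loss of X itself
risk_js = 0.0   # accumulated quadratic loss of the JS estimator
for _ in range(n_rep):
    x = theta + rng.standard_normal(p)  # one draw X ~ N_p(theta, I)
    risk_mle += np.sum((x - theta) ** 2)
    risk_js += np.sum((james_stein(x) - theta) ** 2)

# The average JS loss should come out strictly below the average loss
# of X (which is close to p), illustrating dominance at this theta.
print("risk of X:", risk_mle / n_rep)
print("risk of JS:", risk_js / n_rep)
```

The dissertation's point is that this kind of dominance, usually proved under quadratic loss, persists for a wide class of concave functions of the quadratic loss; the sketch above only checks the quadratic baseline.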
3) Combining unbiased and possibly biased correlated estimators of a mean vector under general quadratic loss: The general approach is to use a shrinkage-type estimator that shrinks an unbiased estimator toward a biased one. Conditions under which the combined estimator dominates the original unbiased estimator are given. Models studied include normal models with a known covariance structure, scale mixtures of normals, and, more generally, elliptically symmetric models with a known covariance structure. Elliptically symmetric models with a covariance structure known up to a multiple are also considered.
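A minimal sketch of the combining idea, in the style of Green and Strawderman's classical result: shrink the unbiased estimator X toward the possibly biased estimator Y with a James-Stein-type weight on the difference X - Y. For simplicity the sketch takes X and Y independent normals (the dissertation treats correlated and elliptically symmetric cases); the bias vector, variances, and shrinkage constant below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_rep = 12, 5000
theta = np.zeros(p)          # true mean (hypothetical)
bias = np.full(p, 0.3)       # hypothetical bias of the second estimator

def combine(x, y, c):
    # Shrink the unbiased x toward the (possibly biased) y:
    # delta = y + (1 - c / ||x - y||^2) (x - y)
    d = x - y
    return y + (1.0 - c / np.sum(d**2)) * d

risk_x = 0.0     # accumulated loss of the unbiased estimator alone
risk_comb = 0.0  # accumulated loss of the combined estimator
for _ in range(n_rep):
    x = theta + rng.standard_normal(p)                    # unbiased: N_p(theta, I)
    y = theta + bias + 0.5 * rng.standard_normal(p)       # biased, independent here
    risk_x += np.sum((x - theta) ** 2)
    risk_comb += np.sum((combine(x, y, c=p - 2) - theta) ** 2)

print("risk of X:", risk_x / n_rep)
print("risk of combined:", risk_comb / n_rep)
```

With X ~ N_p(theta, I), shrinking toward any point independent of X with constant 0 < c < 2(p - 2) dominates X under quadratic loss, which is what the simulation reflects; the dissertation's contribution is establishing analogous dominance conditions when the two estimators are correlated and the model is only elliptically symmetric.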