Sankhya: The Indian Journal of Statistics

1998, Volume 60, Series A, Pt. 1, pp. 90-101

A SMALL SAMPLE OPTIMALITY PROPERTY OF THE M.L.E.

By

YANNIS G. YATRACOS, University of Montreal, Canada

SUMMARY. A decision theoretic foundation of the maximum likelihood estimation method is provided. In regular parametric models, the maximum likelihood estimator $\hat \theta$ is shown to be finite sample efficient for the parameter $\theta$, with respect to the mean squared error of the scores and within a large class $C$ of estimates; for some of these models, $C$ includes the unbiased as well as the equivariant estimators of $\theta$. This result may be used as a tool to prove that $\hat \theta$ is optimal within $C$ for the squared error loss without recourse to completeness, and also to provide good estimates of the log-likelihood. Among other applications, a finite sample property of Rao's test is revealed that is shared neither by Wald's test nor by the likelihood ratio test.
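As a purely illustrative reading of the criterion (the exact risk used in the paper may be defined differently), write $\ell'(t;\,x_1,\dots,x_n) = \frac{\partial}{\partial t}\sum_{i=1}^{n} \log f(x_i;\,t)$ for the score of an i.i.d. sample from a regular model $f(\cdot\,;\theta)$, and consider a score-based risk of an estimate $T$,
$$R(\theta, T) = E_\theta\!\left[\,\ell'\bigl(T;\,X_1,\dots,X_n\bigr)^2\,\right].$$
In the normal location model $X_i \sim N(\theta,\sigma^2)$ the score is $\ell'(t;\,x_1,\dots,x_n) = \tfrac{n}{\sigma^2}(\bar x - t)$, so $R(\theta, T) = \tfrac{n^2}{\sigma^4}\,E_\theta\!\left[(\bar X - T)^2\right]$, which the maximum likelihood estimator $T = \bar X$ makes identically zero. This toy case only indicates the flavour of a mean squared error of scores; the class $C$ and the risk treated in the paper are defined there.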

AMS (1991) subject classification. 62A10, 62C99, 62F10, 62F11.

Key words and phrases. Completeness, efficiency, estimating functions, likelihood, maximum likelihood, minimum risk equivariant estimate, minimum score distance estimate, Rao's test, roots of the likelihood equation, uniformly minimum variance unbiased estimate.
