Description

Asymptotic approaches are widely used in statistics. Generally, I recognize two applications of asymptotics. First, asymptotics can solve problems that cannot be solved exactly. For example, the density, mass, and distribution functions of some statistics often cannot be found in closed form. Asymptotic approaches are then used to approximate these functions, with the error of the approximation controlled within a known order, such as O(1/n). Chapter 1 presents this kind of problem. The two-stage Mann-Whitney statistic has known mass and distribution functions, but these exact representations are given only recursively, and the recursion is complicated, so no tractable closed-form expression is available. With the help of an asymptotic method, the Edgeworth expansion, we can approximate the distribution function. Moments and cumulants are required for the Edgeworth expansion, and Chapter 1 focuses on their calculation.
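To illustrate the flavor of such an expansion (not the two-stage statistic itself, whose cumulants are the subject of Chapter 1), here is a minimal sketch of a one-term Edgeworth correction to the normal approximation for the standardized mean of i.i.d. variables with known skewness; the sample size and skewness used below are illustrative:

```python
import math

def edgeworth_cdf(x, n, skew):
    """One-term Edgeworth approximation to the CDF of the standardized
    mean of n i.i.d. variables with the given skewness coefficient."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # normal density
    Phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))          # normal CDF
    # Correction uses the Hermite polynomial He2(x) = x^2 - 1;
    # the leading error of the corrected CDF is O(1/n) rather than O(1/sqrt(n)).
    return Phi - phi * skew * (x * x - 1) / (6 * math.sqrt(n))

# Exponential(1) variables have skewness 2; compare the corrected and
# plain normal values at a point where the correction is nonzero.
print(edgeworth_cdf(0.5, n=20, skew=2.0))
```

At x = 1 the correction vanishes (He2(1) = 0), so the approximation coincides with the normal CDF there; elsewhere the skewness term shifts the normal value by an O(1/√n) amount.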
The second use of asymptotics is to compare two different methods or functions and determine how close they are. When various methods are proposed to approximate the same quantity, one may first ask whether they are asymptotically correct. If they are, the order of the error between them should be determined, as well as how close each is to the truth. Chapter 2 is a typical example of this kind of problem. The traditional approach is the studentized bootstrap and the new one is the tilted bootstrap. We compare the two approaches in the multivariate setting and conclude that, under certain assumptions, the difference between their p-values is o(1).
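To fix ideas, here is a minimal one-dimensional sketch of a studentized bootstrap p-value for a one-sample mean; the data, number of resamples, and seed are illustrative, not taken from Chapter 2:

```python
import math
import random
import statistics

def studentized_bootstrap_pvalue(data, mu0, B=2000, seed=0):
    """Two-sided studentized-bootstrap p-value for H0: mean = mu0 (sketch)."""
    rng = random.Random(seed)
    n = len(data)
    xbar = statistics.fmean(data)
    se = statistics.stdev(data) / math.sqrt(n)
    t_obs = (xbar - mu0) / se
    count = 0
    for _ in range(B):
        # Resample with replacement, centering each replicate at the
        # observed mean and studentizing by the replicate's own SE.
        resample = [rng.choice(data) for _ in range(n)]
        rb_se = statistics.stdev(resample) / math.sqrt(n)
        if rb_se == 0:
            continue
        t_star = (statistics.fmean(resample) - xbar) / rb_se
        if abs(t_star) >= abs(t_obs):
            count += 1
    return count / B

data = [2.1, 1.8, 2.5, 2.2, 1.9, 2.4, 2.0, 2.3]
print(studentized_bootstrap_pvalue(data, mu0=2.0))
```

Studentizing each replicate is what gives this bootstrap its second-order accuracy; the tilted bootstrap of Chapter 2 instead reweights the resampling distribution, and the chapter's result bounds the gap between the two p-values.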
Chapters 3 and 4 discuss a significance test for variable selection in regression, called the covariance test. The test is based on the exponential distribution, but the statistic follows it only asymptotically, not exactly. We investigate the properties of the test statistic and propose another covariance test based on the gamma distribution. This topic combines the two kinds of problems described above: we compare the available methods and provide a better alternative.
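One generic way to obtain a gamma reference distribution for a statistic (not necessarily the construction used in Chapters 3 and 4) is moment matching. This sketch fits Gamma(shape, rate) from a sample's mean and variance; the sample values are illustrative:

```python
def gamma_moment_fit(sample):
    """Method-of-moments fit of Gamma(shape, rate):
    mean = shape / rate, variance = shape / rate**2."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    shape = mean * mean / var
    rate = mean / var
    return shape, rate

# An exact Exp(1) sample would give shape ~ 1 and rate ~ 1; departures of
# the fitted shape from 1 are what a gamma reference can absorb while an
# exponential reference cannot.
print(gamma_moment_fit([0.2, 1.5, 0.8, 2.3, 0.4, 1.1]))
```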
Chapter 5 presents a method for calculating the order of an error numerically. It grows out of the work in Chapters 3 and 4: when the order of an error is too hard to find analytically, we estimate it numerically. Several examples demonstrate the method's effectiveness.
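A common numerical device of this kind, sketched here under the assumption that the error behaves like C·n^(−p), is to regress log error on log n and read the order p off the slope; the synthetic data below are illustrative:

```python
import math

def error_order(ns, errors):
    """Estimate p in error ~ C * n**(-p) by least squares on the
    log-log scale: log(error) = log(C) - p * log(n)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(e) for e in errors]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope  # error decays like n**(-p)

# Synthetic check: errors that decay exactly like 1/n recover p = 1.
ns = [10, 20, 40, 80, 160]
print(error_order(ns, [1.0 / n for n in ns]))  # → 1.0 (up to rounding)
```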