TY - JOUR
TI - Advanced computing methods for statistical inference
DO - https://doi.org/10.7282/t3-mveq-wk31
PY - 2019
AB - In this thesis, we provide new solutions to problems of computational inference. In particular, we address two problems: (1) how to obtain valid confidence sets for parameters from models with no tractable likelihood function, and (2) how to obtain valid exact confidence sets for the odds ratio when the signal is very difficult to detect. Our approach to both problems is to develop algorithmic procedures that produce confidence distributions for the parameters of interest. A confidence distribution can be thought of as a frequentist analog to a Bayesian posterior: it is a distribution estimate for a parameter of interest that provides inferential results with respect to the Repeated Sampling Principle. 1. Most likelihood-free computational methods for statistical inference are performed under a Bayesian paradigm, even though they are driven by the need for inferential results in instances where the likelihood principle may fail. We develop a frequentist computational method that applies in situations where one has an intractable likelihood and that instead relies on the Repeated Sampling Principle to justify its inferential results. Our method expands the applications of approximate Bayesian computing methods and permits faster computation by eliminating the need for any prior information. Rather than attempting to work within a Bayesian framework without a tractable likelihood function, our method creates a special type of estimate, a confidence distribution, for the parameter of interest. 2. Establishing drug safety entails detecting relationships between treatments and rare, but adverse, events. For a 2x2 contingency table of drug treatment and adverse events, this means that we are interested in inference for an odds ratio with a weak signal. In these situations, we will encounter very few adverse events even if the number of patients under study is large. We develop a frequentist computational method for inference on sparse contingency tables that does not rely on large-sample assumptions. Our method works under the assumption that one margin is fixed, enabling us to compare the observed data to simulated data through a data-generating equation and a modified statistic. We make use of a stabilization parameter that allows us to consider smaller potential parameter values even if we have a zero observation in the data; this stabilization parameter distinguishes our method from the standard tail method approach. We show that our method can outperform the overly conservative existing exact methods and a Bayesian method. In both of these problems, the algorithmic approaches we propose capture the sample variability using a known random variable connected to the data through a data-generating equation. To validate the inferential results within a frequentist framework, the algorithmic approaches to both of the above problems work by producing a specific type of estimator, a confidence distribution, for the unknown parameter. We think these two problems illustrate the rich possibilities for incorporating confidence distribution theory into the world of statistical computing.
KW - Statistics and Biostatistics
KW - Bayesian inference
KW - Bayesian statistical decision theory
KW - Mathematical statistics
LA - English
ER -