Title: Essays in macroeconomic forecasting and model evaluation
Date Created: 2018
Other Date: 2018-10 (degree)
Extent: 1 online resource (94 pages) : illustrations
Description: This dissertation studies forecasting model specification, estimation, prediction, and evaluation in big data environments. In an effort to contribute to the discussion of macroeconomic forecasting, I examine the literature on forecasting model specification and forecast accuracy testing and introduce new methodologies within empirical frameworks. The full set-up of forecasting model specification and forecast evaluation is a continuum of decisions, each of which can lead to different forecasting results. In two closely connected papers, I empirically evaluate the implications of using different methodologies at every stage of macroeconomic forecasting and draw conclusions useful for future research in the literature.
Chapter 2 revisits the question of predictive accuracy testing and model selection, and asks: does the loss function really matter, and if so, what can be gained by using loss function-free model comparison and selection tests? In the forecasting literature to date, forecasting results have been compared using moment-based approaches that mostly concern only the first and second moments of forecast errors and require choosing a loss function to begin with, which is an additional decision problem. In Chapter 2, I compare forecasting results using the distributional comparison approach suggested by Jin et al. (2016), which is based on stochastic dominance principles and is robust to the choice of loss function. A series of empirical experiments is carried out using macroeconomic time series data modeled with big data methods, including a large number of dimension reduction, shrinkage, and machine learning techniques. The analysis and ranking of these methods is found to depend crucially on whether their accuracy is evaluated with a loss function-dependent criterion or not.
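The sensitivity of model rankings to the loss function can be illustrated with a minimal sketch. The two error series below are hypothetical, not taken from the dissertation: one model makes mostly tiny errors with an occasional large miss, the other makes uniform moderate errors, and the preferred model flips between squared and absolute loss.

```python
# Minimal sketch with hypothetical forecast-error series: the ranking of two
# models can depend on which loss function is used to evaluate them.
def mean_loss(errors, loss):
    """Average loss of a forecast-error series under a given loss function."""
    return sum(loss(e) for e in errors) / len(errors)

squared = lambda e: e ** 2
absolute = lambda e: abs(e)

# Model A: mostly tiny errors with one large miss; Model B: uniform moderate errors.
errors_a = [0.1] * 9 + [3.0]
errors_b = [0.7] * 10

mse_a, mse_b = mean_loss(errors_a, squared), mean_loss(errors_b, squared)
mae_a, mae_b = mean_loss(errors_a, absolute), mean_loss(errors_b, absolute)

print(mse_a > mse_b)  # True: Model B is preferred under squared loss
print(mae_a < mae_b)  # True: Model A is preferred under absolute loss
```

A distributional (stochastic dominance) comparison of the full error distributions avoids having to make this choice of loss function at all.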
Chapter 3 builds on my first chapter by focusing on the usefulness of so-called "supervised" approaches to forecast model selection in big data environments. When constructing forecasting models using latent factor variables designed to condense the information in large datasets into a small set of useful explanatory variables, standard approaches extract information relevant to the entire dataset rather than targeting the particular variable being forecast. Supervised approaches to model specification instead penalize model specifications according to metrics designed to focus on the particular target variable(s) of interest. To evaluate the efficacy of supervised approaches, I carry out Monte Carlo simulations and empirical exercises; the results suggest that supervised approaches geared toward forecasting do serve their purpose.
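The distinction between the two approaches can be sketched as follows. This is an illustrative example with simulated data, not the dissertation's own design: principal-component factors are extracted from the whole panel without reference to the target, while a supervised ("targeted") step first screens predictors by their marginal correlation with the target.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 50
X = rng.standard_normal((T, N))             # large panel of candidate predictors
y = X[:, 0] + 0.1 * rng.standard_normal(T)  # target driven by one predictor

# Unsupervised: extract principal-component factors from the whole panel,
# ignoring the target variable entirely.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
factors = Xc @ Vt[:2].T                     # first two principal components

# Supervised ("targeted") step: screen predictors by their marginal
# correlation with the target before extracting factors from the survivors.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(N)])
selected = np.argsort(corr)[::-1][:5]       # keep the 5 most correlated predictors

print(0 in selected)  # True: the true driver survives the supervised screen
```

In the unsupervised case the factors summarize common variation across all fifty predictors, whether or not it is relevant for `y`; the supervised screen focuses the subsequent factor extraction on predictors with demonstrated predictive content for the target.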
Note: Ph.D.
Note: Includes bibliographical references
Note: by Sungkyung Lee
Genre: theses, ETD doctoral
Language: eng
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.