Published Papers

In this paper, I study large sample properties of a Bayesian approach to inference about slope parameters in linear regression models with a structural break. In contrast to the conventional approach to inference about the slope parameters, which does not take into account the uncertainty of the unknown break location, the Bayesian approach that I consider incorporates such uncertainty. My main theoretical contribution is a Bernstein-von Mises type theorem (Bayesian asymptotic normality) for the slope parameters under a wide class of priors, which essentially indicates an asymptotic equivalence between conventional frequentist and Bayesian inference. Consequently, a frequentist researcher could look at credible intervals for the slope parameters to check robustness with respect to the uncertainty of the break location. Simulation studies show that the conventional confidence intervals for the slope parameters tend to undercover in finite samples, whereas the credible intervals offer more reasonable coverage in general. As the sample size increases, the two methods coincide, as predicted by my theoretical results. Using data from Paye and Timmermann (2006) on stock return prediction, I illustrate that the traditional confidence intervals on the slope parameters might underrepresent the true sampling uncertainty.
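
As a rough illustration of the mechanics (not the paper's exact procedure), the Python sketch below treats a toy break model with a discrete uniform prior over the break date and a diffuse normal prior on the slopes, so the slope's marginal posterior mixes its conditional Gaussian posteriors over break dates. All priors, parameter values, and the known noise variance are my own illustrative assumptions.

```python
# Minimal sketch, assuming a known noise sd and conjugate priors: linear regression
# with one break in the slope, uniform prior over the break date, diffuse normal
# prior on the slopes. The marginal posterior of the slope averages over break dates.
import numpy as np

rng = np.random.default_rng(0)
T, sigma, true_break = 200, 1.0, 120          # sample size, noise sd, true break date
x = rng.normal(size=T)
beta_pre, beta_post = 0.5, 1.5                # slopes before/after the break
y = np.where(np.arange(T) < true_break, beta_pre * x, beta_post * x) + sigma * rng.normal(size=T)

prior_var = 100.0                             # diffuse N(0, prior_var) prior on each slope
candidates = np.arange(20, T - 20)            # candidate break dates (trimmed)

def regime_posterior(xr, yr):
    """Posterior mean/variance of the slope and log marginal likelihood for one regime."""
    v = 1.0 / (xr @ xr / sigma**2 + 1.0 / prior_var)
    m = v * (xr @ yr) / sigma**2
    logml = (-0.5 * len(yr) * np.log(2 * np.pi * sigma**2)
             - 0.5 * (yr @ yr) / sigma**2 + 0.5 * m**2 / v
             + 0.5 * np.log(v / prior_var))
    return m, v, logml

stats = [regime_posterior(x[:t], y[:t]) + regime_posterior(x[t:], y[t:]) for t in candidates]
logml = np.array([s[2] + s[5] for s in stats])
weights = np.exp(logml - logml.max())
weights /= weights.sum()                      # posterior over break dates

# Marginal posterior of the pre-break slope: mix the conditional normals over break dates
idx = rng.choice(len(candidates), size=5000, p=weights)
draws = rng.normal([stats[i][0] for i in idx], [np.sqrt(stats[i][1]) for i in idx])
print("95% credible interval (pre-break slope):", np.percentile(draws, [2.5, 97.5]))
```

Conditioning instead on the single most probable break date would reproduce an interval that, like the conventional one, ignores break-location uncertainty.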

NSF-NBER Seminar on Bayesian Inference in Econometrics and Statistics at Washington University in St. Louis, 2020
14th RCEA Bayesian Econometrics Workshop at Wilfrid Laurier University, 2020 (postponed)

Bayesian Approaches to Shrinkage and Sparse Estimation (with Dimitris Korobilis, accepted, Foundations and Trends in Econometrics) [Published Version, MATLAB code]
Keywords: Bayesian LASSO, Bayesian Ridge

In all areas of human knowledge, datasets are increasing in both size and complexity, creating the need for richer statistical models. This trend is also true for economic data, where high-dimensional and nonlinear/nonparametric inference is the norm in several fields of applied econometric work. The purpose of this paper is to introduce the reader to the world of Bayesian model determination by surveying modern shrinkage and variable selection algorithms and methodologies. Bayesian inference is a natural probabilistic framework for quantifying uncertainty and learning about model parameters, and this feature is particularly important for inference in modern models of high dimension and increased complexity. We begin with a linear regression setting in order to introduce various classes of priors that lead to shrinkage/sparse estimators of comparable value to popular penalized likelihood estimators (e.g., ridge, lasso). We explore various methods of exact and approximate inference and discuss their pros and cons. Finally, we show how priors developed for the simple regression setting can be extended in a straightforward way to various classes of interesting econometric models. In particular, we consider the following case studies, which demonstrate the application of Bayesian shrinkage and variable selection strategies in popular econometric contexts: i) vector autoregressive models; ii) factor models; iii) time-varying parameter regressions; iv) confounder selection in treatment effects models; and v) quantile regression models. A MATLAB package and an accompanying technical manual allow the reader to replicate many of the algorithms described in this review.
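
As a minimal illustration of the shrinkage idea (the paper's accompanying package is in MATLAB; this Python sketch is mine and purely illustrative), the conjugate case shows the exact correspondence between the Bayesian posterior mean under an independent normal prior and the ridge estimator. The data-generating values below are arbitrary assumptions.

```python
# Minimal sketch: with y = X b + e, e ~ N(0, sigma^2 I) and b_j ~ N(0, tau^2),
# the posterior mean of b equals the ridge estimator with penalty lambda = sigma^2 / tau^2.
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma, tau = 100, 10, 1.0, 0.5
X = rng.normal(size=(n, p))
beta_true = np.concatenate([np.ones(3), np.zeros(p - 3)])    # sparse truth (illustrative)
y = X @ beta_true + sigma * rng.normal(size=n)

lam = sigma**2 / tau**2                                      # implied ridge penalty
post_prec = X.T @ X / sigma**2 + np.eye(p) / tau**2          # posterior precision matrix
post_mean = np.linalg.solve(post_prec, X.T @ y / sigma**2)   # Bayesian posterior mean
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)  # penalized likelihood estimate
print(np.allclose(post_mean, ridge))                         # True: the two coincide
```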


Working Papers

Semiparametric Bayesian Estimation of Dynamic Discrete Choice Models (with Andriy Norets, October 2022)
Keywords: Industrial Organization, Partial Identification, Mixture Model, Hamiltonian Monte Carlo

We propose a tractable semiparametric estimation method for dynamic discrete choice models. The distribution of additive utility shocks is modeled by location-scale mixtures of extreme value distributions with varying numbers of mixture components. Our approach exploits the analytical tractability of extreme value distributions and the flexibility of location-scale mixtures. We implement the Bayesian approach to inference using Hamiltonian Monte Carlo and an approximately optimal reversible jump algorithm from Norets (2020). For the binary dynamic choice model, our approach delivers estimation results that are consistent with the previous literature. We also apply the proposed method to multinomial choice models, for which the previous literature does not provide tractable estimation methods in general settings without distributional assumptions on the utility shocks. We show that our mixture-based model possesses attractive theoretical properties. First, we derive approximation results for location-scale mixtures of extreme value distributions. Second, we show that for any given distribution of utility shocks, a finite mixture of extreme value distributions can deliver exactly the same conditional choice probabilities. Posterior concentration on the identified sets of utility parameters and counterfactuals is an implication of these results.
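
The sketch below (my own, illustrative) shows only the basic building block: a location-scale mixture of type-I extreme value (Gumbel) densities for the utility shocks. In the paper the weights, locations, scales, and the number of components are unknown parameters; here they are fixed to arbitrary values.

```python
# Minimal sketch of a location-scale mixture of extreme value (Gumbel) distributions.
# All component weights, locations, and scales below are illustrative assumptions.
import numpy as np
from scipy.stats import gumbel_r

weights   = np.array([0.5, 0.3, 0.2])     # mixture weights (sum to 1)
locations = np.array([-1.0, 0.5, 2.0])    # component locations
scales    = np.array([0.7, 1.0, 0.4])     # component scales

def mixture_pdf(eps):
    """Density of the location-scale mixture of Gumbel components."""
    eps = np.atleast_1d(eps)
    comps = np.array([w * gumbel_r.pdf(eps, loc=m, scale=s)
                      for w, m, s in zip(weights, locations, scales)])
    return comps.sum(axis=0)

def mixture_rvs(size, rng=np.random.default_rng(2)):
    """Draw utility shocks: pick a component, then draw a Gumbel variate from it."""
    k = rng.choice(len(weights), size=size, p=weights)
    return gumbel_r.rvs(loc=locations[k], scale=scales[k], random_state=rng)

grid = np.linspace(-4, 6, 5)
print(mixture_pdf(grid))                  # a flexible, possibly multimodal shock density
```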

International Association for Applied Econometrics (IAAE) at King’s Business School in London, 2022
Australasia Meeting of the Econometric Society at University of Queensland (Virtual), 2022
12th World Congress of the Econometric Society at Bocconi University, 2020
NSF-NBER Seminar on Bayesian Inference in Econometrics and Statistics at Brown University, 2019
European Seminar on Bayesian Econometrics at the New Orleans Fed, 2018

Scalable Estimation of Multinomial Choice Models with Uncertain Consideration Sets (with Siddhartha Chib, Draft available upon request)
Keywords: Industrial Organization, Marketing, Random Choice Sets, Curse of Dimensionality, Panel Data

Estimation of consideration set models faces a curse of dimensionality because the number of parameters associated with choice set formation processes increases exponentially in the number of available alternatives. In this paper, we propose a scalable estimation method that allows for a flexible dependence structure among considerations. We utilize economic reasoning to introduce sparsity into the collection of choice sets. We estimate our model using Markov chain Monte Carlo (MCMC), with a sampling algorithm built on simple and intuitive steps. We are currently working on a theoretical understanding of our approach as well as an empirical application using scanner data sets.
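
A back-of-the-envelope sketch (not our estimation algorithm) of where the curse of dimensionality comes from: with J alternatives there are 2^J - 1 nonempty consideration sets, and a choice probability averages a within-set logit over all sets containing the chosen alternative. The independent-attention probabilities and utilities below are illustrative assumptions.

```python
# Minimal sketch: consideration set model with independent attention to each item
# and a multinomial logit choice within the realized set. Values are illustrative.
from itertools import combinations
import numpy as np

J = 4
utilities = np.array([0.2, 1.0, -0.5, 0.4])      # deterministic utilities (assumed)
attention = np.array([0.9, 0.6, 0.7, 0.5])       # independent attention probabilities (assumed)

def set_probability(C):
    """P(consideration set = C) under independent attention to each alternative."""
    in_set = np.isin(np.arange(J), C)
    return np.prod(np.where(in_set, attention, 1.0 - attention))

def choice_prob(j):
    """P(choose j) = sum over sets containing j of P(C) * logit(j | C)."""
    total = 0.0
    for size in range(1, J + 1):
        for C in combinations(range(J), size):
            if j not in C:
                continue
            expu = np.exp(utilities[list(C)])
            total += set_probability(C) * expu[list(C).index(j)] / expu.sum()
    return total

# Remaining probability mass corresponds to the empty consideration set (no purchase).
print([round(choice_prob(j), 3) for j in range(J)])
print("nonempty consideration sets with J = 40:", 2**40 - 1)   # already over a trillion
```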

High-Dimensional Limited Attention Models (Draft available upon request)
Keywords: Industrial Organization, Marketing, Machine Learning

NSF-NBER Seminar on Bayesian Inference in Econometrics and Statistics at Washington University in St. Louis, 2022
European Seminar on Bayesian Econometrics at the Paris Lodron University of Salzburg, Austria, 2022