Bayesian econometrics is a branch of econometrics that applies Bayesian principles to economic modelling. Bayesianism is based on a degree-of-belief interpretation of probability, as opposed to a relative-frequency interpretation.

The Bayesian principle relies on Bayes' theorem, which states that the probability of B conditional on A is the joint probability of A and B divided by the probability of A, i.e. $P(B \mid A) = P(A \cap B)/P(A)$. Bayesian econometricians assume that coefficients in the model have prior distributions.

This approach was first propagated by Arnold Zellner.[1]

Basics

Subjective probabilities have to satisfy the standard axioms of probability theory if one wishes to avoid losing a bet regardless of the outcome.[2] Before the data is observed, the parameter $\theta$ is regarded as an unknown quantity and thus a random variable, which is assigned a prior distribution $\pi(\theta)$. Bayesian analysis concentrates on the inference of the posterior distribution $\pi(\theta \mid y)$, i.e. the distribution of the random variable $\theta$ conditional on the observation of the discrete data $y$. The posterior density function can be computed based on Bayes' theorem:

$$\pi(\theta \mid y) = \frac{p(y \mid \theta)\,\pi(\theta)}{p(y)}$$

where $p(y) = \int p(y \mid \theta)\,\pi(\theta)\,d\theta$, yielding a normalized probability function. For continuous data $y$, this corresponds to:

$$\pi(\theta \mid y) = \frac{f(y \mid \theta)\,\pi(\theta)}{f(y)}$$

where $f(y) = \int f(y \mid \theta)\,\pi(\theta)\,d\theta$. This expression is the centerpiece of Bayesian statistics and econometrics. It has the following components:

  • $\pi(\theta \mid y)$: the posterior density function of $\theta$;
  • $f(y \mid \theta)$: the likelihood function, i.e. the density function for the observed data $y$ when the parameter value is $\theta$;
  • $\pi(\theta)$: the prior distribution of $\theta$;
  • $f(y)$: the probability density function of $y$.
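To make these components concrete, the following is a minimal sketch, not taken from the article, that approximates the posterior density on a grid for a Bernoulli likelihood with a hypothetical Beta(2, 2) prior; the data, the choice of prior and likelihood, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Minimal sketch: grid approximation of the posterior pi(theta | y) for a
# Bernoulli likelihood f(y | theta) with a Beta(2, 2) prior pi(theta).
# The data, prior, and names below are illustrative assumptions.

y = np.array([1, 0, 1, 1, 0, 1, 1, 1])        # hypothetical binary observations

theta = np.linspace(0.001, 0.999, 999)        # grid over the parameter space
prior = stats.beta.pdf(theta, a=2, b=2)       # pi(theta)
likelihood = np.array([stats.bernoulli.pmf(y, t).prod() for t in theta])  # f(y | theta)

unnormalized = likelihood * prior             # f(y | theta) * pi(theta)
evidence = np.trapz(unnormalized, theta)      # f(y) = integral of f(y|theta) pi(theta) dtheta
posterior = unnormalized / evidence           # pi(theta | y)

print(np.trapz(posterior, theta))             # ~1.0: the posterior is normalized
print(theta[np.argmax(posterior)])            # posterior mode, near the sample mean
```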

The posterior function is given by $\pi(\theta \mid y) \propto f(y \mid \theta)\,\pi(\theta)$, i.e., the posterior function is proportional to the product of the likelihood function and the prior distribution, and can be understood as a method of updating information, with the difference between $\pi(\theta \mid y)$ and $\pi(\theta)$ being the information gain concerning $\theta$ after observing new data. The choice of the prior distribution is used to impose restrictions on $\theta$, e.g. $0 \le \theta \le 1$, with the beta distribution as a common choice due to (i) being defined between 0 and 1, (ii) being able to produce a variety of shapes, and (iii) yielding a posterior distribution of the standard form if combined with the likelihood function $f(y \mid \theta)$. Based on the properties of the beta distribution, an ever-larger sample size implies that the mean of the posterior distribution approximates the maximum likelihood estimator.

The assumed form of the likelihood function is part of the prior information and has to be justified. Different distributional assumptions can be compared using posterior odds ratios if a priori grounds fail to provide a clear choice. Commonly assumed forms include the beta distribution, the gamma distribution, and the uniform distribution, among others.

If the model contains multiple parameters, the parameter can be redefined as a vector. Applying probability theory to that vector of parameters yields the marginal and conditional distributions of individual parameters or parameter groups. If data generation is sequential, Bayesian principles imply that the posterior distribution for the parameter based on new evidence will be proportional to the product of the likelihood for the new data, given previous data and the parameter, and the posterior distribution for the parameter, given the old data. This provides an intuitive way of allowing new information to influence beliefs about a parameter through Bayesian updating.

If the sample size is large, (i) the prior distribution plays a relatively small role in determining the posterior distribution, (ii) the posterior distribution converges to a degenerate distribution at the true value of the parameter, and (iii) the posterior distribution is approximately normally distributed with mean $\hat{\theta}$, the maximum likelihood estimator.
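As an illustration of the conjugacy, large-sample, and sequential-updating points above, here is a short sketch built on assumed example values, not code from the article: with a Beta(a, b) prior and s successes in n Bernoulli trials, the posterior is Beta(a + s, b + n − s), whose mean approaches the maximum likelihood estimator s/n as n grows, and updating batch by batch reproduces the all-at-once posterior.

```python
import numpy as np

# Sketch of conjugate Beta-binomial updating (illustrative assumptions, not
# code from the article). With a Beta(a, b) prior and s successes in n
# Bernoulli trials, the posterior is Beta(a + s, b + n - s); its mean
# approaches the maximum likelihood estimator s / n as n grows.

rng = np.random.default_rng(0)
true_theta = 0.7                               # hypothetical "true" parameter
a, b = 2, 2                                    # Beta(2, 2) prior

for n in (10, 100, 10_000):
    s = int(rng.binomial(1, true_theta, size=n).sum())
    post_mean = (a + s) / (a + b + n)          # mean of Beta(a + s, b + n - s)
    print(f"n={n:6d}  posterior mean={post_mean:.4f}  MLE={s / n:.4f}")

# Sequential updating: using the posterior from a first batch as the prior
# for a second batch gives the same result as processing all data at once.
y = rng.binomial(1, true_theta, size=50)
s1, s2 = int(y[:25].sum()), int(y[25:].sum())
a1, b1 = a + s1, b + 25 - s1                   # posterior after batch 1
a2, b2 = a1 + s2, b1 + 25 - s2                 # batch-1 posterior as new prior
assert (a2, b2) == (a + s1 + s2, b + 50 - s1 - s2)
```

With the seed above, the posterior mean and the MLE visibly converge as n grows, matching point (iii) on large samples.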

History

The ideas underlying Bayesian statistics were developed by Rev. Thomas Bayes during the 18th century and later expanded by Pierre-Simon Laplace. As early as 1950, the potential of Bayesian inference in econometrics was recognized by Jacob Marschak.[3] The Bayesian approach was first applied to econometrics in the early 1960s by W. D. Fisher, Jacques Drèze, Clifford Hildreth, Thomas J. Rothenberg, George Tiao, and Arnold Zellner. The central motivation behind these early endeavors was the combination of parameter estimators with available uncertain information on the model parameters that was not included in a given model formulation.[4]

From the mid-1960s to the mid-1970s, the reformulation of econometric techniques along Bayesian principles under the traditional structural approach dominated the research agenda, with Zellner's An Introduction to Bayesian Inference in Econometrics (1971) as one of its highlights; it thus closely followed the work of frequentist econometrics. The main technical issues were the difficulty of specifying prior densities without losing either economic interpretation or mathematical tractability, and the difficulty of integral calculation in the context of density functions. The result of the Bayesian reformulation program was to highlight the fragility of structural models to uncertain specification.

This fragility came to motivate the work of Edward Leamer, who emphatically criticized modelers' tendency to indulge in "post-data model construction" and consequently developed a method of economic modelling based on the selection of regression models according to the types of prior density specification, in order to identify explicitly the prior structures underlying modelers' working rules in model selection.[5] Bayesian econometrics also became attractive to Christopher Sims in his attempt to move from structural modeling to VAR modeling, owing to its explicit probability specification of parameter restrictions.

Driven by the rapid growth of computing capacities from the mid-1980s on, the application of Markov chain Monte Carlo simulation to statistical and econometric models, first performed in the early 1990s, enabled Bayesian analysis to drastically increase its influence in economics and econometrics.[6]

Current research topics

Since the beginning of the 21st century, research in Bayesian econometrics has concentrated on:[7]

  • sampling methods suitable for parallelization and GPU calculations;
  • complex economic models accounting for nonlinear effects and complete predictive densities;
  • analysis of implied model features and decision analysis;
  • incorporation of model incompleteness in econometric analysis.

References

  1. Greenberg, Edward (2012). Introduction to Bayesian Econometrics (Second ed.). Cambridge University Press. ISBN 978-1-107-01531-9.
  2. Chapter 3 in de Finetti, B. (1990). Theory of Probability. Chichester: John Wiley & Sons.
  3. Marschak made this acknowledgment in a lecture, which was formalized in Marschak (1954); cf. Marschak, J. (1954). Probability in the Social Sciences. In Marschak, J. (1974). Economic Information, Decision, and Prediction. Selected Essays: Volume I Part I - Economics of Decision. Amsterdam: Springer Netherlands.
  4. Qin, D. (1996). "Bayesian Econometrics: The First Twenty Years". Econometric Theory. 12 (3): 500–516. doi:10.1017/S0266466600006836.
  5. Leamer, Edward E. (1974). "False Models and Post-Data Model Construction". Journal of the American Statistical Association. 69 (345): 122–131. doi:10.1080/01621459.1974.10480138.
  6. Koop, Gary; Korobilis, Dimitris (2010). "Bayesian Multivariate Time Series Methods for Empirical Macroeconomics". Foundations and Trends in Econometrics. 3 (4): 267–358. CiteSeerX 10.1.1.164.7962. doi:10.1561/0800000013.
  7. Basturk, N. (2013). Historical Developments in Bayesian Econometrics after Cowles Foundation Monographs 10, 14. Tinbergen Institute Discussion Paper 191/III.