Abstract
Bayesian model comparison requires the specification of a prior
distribution on the parameter space of each candidate model. In this connection
two concerns arise: on the one hand the elicitation task rapidly becomes
prohibitive as the number of models increases; on the other hand numerous
prior specifications can only exacerbate the well-known sensitivity to prior
assignments, thus producing less dependable conclusions. Within the subjective
framework, both difficulties can be counteracted by linking priors across
models in order to achieve simplification and compatibility; we discuss links
with related objective approaches. Given an encompassing, or full, model
together with a prior on its parameter space, we review and summarize a
few procedures for deriving priors under a submodel, namely marginalization,
conditioning, and Kullback–Leibler projection. These techniques are
illustrated and discussed with reference to variable selection in linear models
adopting a conventional g-prior; comparisons with existing standard approaches
are provided. Finally, the relative merits of each procedure are evaluated
through simulated and real data sets.
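
The abstract names the conventional g-prior and the Kullback–Leibler projection without stating them explicitly; as a purely illustrative sketch (the standard textbook forms, with notation assumed here rather than taken from the article), the two ingredients can be written as

$$
y \mid \beta, \sigma^2 \sim N_n(X\beta,\, \sigma^2 I_n),
\qquad
\beta \mid \sigma^2 \sim N_p\!\big(0,\; g\,\sigma^2 (X^\top X)^{-1}\big),
$$

and, for a submodel with design matrix $X_0$ whose columns are a subset of those of $X$, the Kullback–Leibler projection of $\beta$ is

$$
\beta_0 \;=\; \arg\min_{b}\; \mathrm{KL}\!\left(N(X\beta,\, \sigma^2 I_n) \,\Big\|\, N(X_0 b,\, \sigma^2 I_n)\right)
\;=\; (X_0^\top X_0)^{-1} X_0^\top X \beta,
$$

so the projected prior on the submodel is the distribution that the full-model prior on $\beta$ induces on $\beta_0$.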
Original language | English |
---|---|
Pages (from-to) | 332-353 |
Number of pages | 22 |
Journal | Statistical Science |
Volume | 23 |
DOI | |
Publication status | Published - 2008 |
Keywords
- Compatible prior
- Kullback–Leibler projection