Using a Markov Chain to Construct a Tractable Approximation of an Intractable Probability Distribution

Hobert, James P.; Jones, Galin L.; Robert, Christian P. (2006), Using a Markov Chain to Construct a Tractable Approximation of an Intractable Probability Distribution, Scandinavian Journal of Statistics, 33, 1, p. 37-51. http://dx.doi.org/10.1111/j.1467-9469.2006.00467.x

View/Open: 2004-3.pdf (303.4 KB)
Type: Article accepted for publication or published
Date: 2006
Journal name: Scandinavian Journal of Statistics
Volume: 33
Number: 1
Publisher: Wiley
Pages: 37-51
Publication identifier: http://dx.doi.org/10.1111/j.1467-9469.2006.00467.x
Author(s)
Hobert, James P.
Jones, Galin L.
Robert, Christian P.
Abstract (EN)
Let π denote an intractable probability distribution that we would like to explore. Suppose that we have a positive recurrent, irreducible Markov chain that satisfies a minorization condition and has π as its invariant measure. We provide a method of using simulations from the Markov chain to construct a statistical estimate of π from which it is straightforward to sample. We show that this estimate is ‘strongly consistent’ in the sense that the total variation distance between the estimate and π converges to 0 almost surely as the number of simulations grows. Moreover, we use some recently developed asymptotic results to provide guidance as to how much simulation is necessary. Draws from the estimate can be used to approximate features of π or as intelligent starting values for the original Markov chain. We illustrate our methods with two examples.
Subjects / Keywords
burn-in; Gibbs sampler; minorization condition; mixture representation; Monte Carlo; regeneration; split chain
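
The abstract's key device is the mixture representation of π that a minorization condition makes available. As a rough illustration only (not the paper's estimator or its examples), the Python sketch below assumes a toy 3-state chain with a global minorization P(x, y) ≥ ε ν(y); splitting the kernel as P = ε ν + (1 − ε) R then gives π = Σ_{n≥0} ε(1 − ε)^n ν R^n, so an exact draw from π is obtained by taking a geometric number of residual-kernel steps from a ν start. All numerical choices (the matrix P, ε, ν) are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state transition matrix (rows sum to 1); an illustration only,
# not an example from the paper.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Global (Doeblin) minorization P(x, y) >= eps * nu(y) for every x,
# with nu proportional to the column-wise minimum of P.
col_min = P.min(axis=0)
eps = col_min.sum()                 # here eps = 0.7
nu = col_min / eps

# Residual kernel R from the split P = eps * (1 nu^T) + (1 - eps) * R.
R = (P - eps * np.outer(np.ones(3), nu)) / (1.0 - eps)

# Mixture representation: pi = sum_{n>=0} eps (1 - eps)^n  nu R^n.
# Truncate the series and compare with the stationary eigenvector of P.
pi_mix = np.zeros(3)
term = nu.copy()
for n in range(200):
    pi_mix += eps * (1.0 - eps) ** n * term
    term = term @ R

evals, evecs = np.linalg.eig(P.T)
pi_eig = np.real(evecs[:, np.argmax(np.real(evals))])
pi_eig /= pi_eig.sum()
print("mixture series :", np.round(pi_mix, 6))
print("eigenvector pi :", np.round(pi_eig, 6))

# Exact draw from pi: N ~ Geometric(eps) failures, X0 ~ nu, then N steps of R.
def draw_from_pi():
    n = rng.geometric(eps) - 1
    x = rng.choice(3, p=nu)
    for _ in range(n):
        x = rng.choice(3, p=R[x])
    return x

samples = np.array([draw_from_pi() for _ in range(20_000)])
print("empirical freq :", np.round(np.bincount(samples, minlength=3) / samples.size, 3))
```

In the paper the minorization need only hold on a small set and the approximation of π is built statistically from the simulated chain; the global-minorization toy above simply shows why such a mixture representation makes π straightforward to sample from.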

Related items

Showing items related by title and author.

  • A mixture representation of π with applications in Markov chain Monte Carlo and perfect sampling
    Hobert, James P.; Robert, Christian P. (2004) Article accepted for publication or published
  • Improving the Convergence Properties of the Data Augmentation Algorithm with an Application to Bayesian Mixture Modelling
    Robert, Christian P.; Roy, Vivekananda; Hobert, James P. (2011) Article accepted for publication or published
  • Moralizing perfect sampling
    Robert, Christian P.; Hobert, James P. (2001) Working paper
  • A Short History of Markov Chain Monte Carlo: Subjective Recollections from Incomplete Data
    Casella, George; Robert, Christian P. (2011) Article accepted for publication or published
  • Rao–Blackwellisation in the Markov Chain Monte Carlo Era
    Robert, Christian P.; Roberts, Gareth (2021) Article accepted for publication or published