Reversible jump, birth‐and‐death and more general continuous time Markov chain Monte Carlo samplers

Summary. Reversible jump methods are the most commonly used Markov chain Monte Carlo tool for exploring variable dimension statistical models. Recently, however, an alternative approach based on birth‐and‐death processes has been proposed by Stephens for mixtures of distributions. We show that the birth‐and‐death setting can be generalized to include other types of continuous time jumps like split‐and‐combine moves in the spirit of Richardson and Green. We illustrate these extensions both for mixtures of distributions and for hidden Markov models. We demonstrate the strong similarity of reversible jump and continuous time methodologies by showing that, on appropriate rescaling of time, the reversible jump chain converges to a limiting continuous time birth‐and‐death process. A numerical comparison in the setting of mixtures of distributions highlights this similarity.


Introduction
Markov chain Monte Carlo (MCMC) methods for statistical inference, in particular Bayesian inference, have become standard during the past 10 years (Cappé and Robert, 2000). For variable dimension problems, often arising through model selection, a popular approach is Green's (1995) reversible jump MCMC (RJMCMC) methodology. Recently, however, in the context of mixtures of distributions, Stephens (2000) rekindled interest in the use of continuous time birth-and-death processes for variable dimension problems, following earlier proposals by Ripley (1977), Geyer and Møller (1994), Grenander and Miller (1994) and Phillips and Smith (1996). We shall call this approach birth-and-death MCMC (BDMCMC) sampling and its generalizations continuous time MCMC (CTMCMC) sampling.
In this paper, we investigate the similarity between the reversible jump and birth-and-death methodologies. In particular, it is shown in Section 4 that, for any BDMCMC process satisfying some weak regularity conditions, there is a sequence of RJMCMC processes that converges, in a sense specified later, to the BDMCMC process.
In their application of RJMCMC methods to mixtures of distributions, Richardson and Green (1997) implemented two types of move that could change the number of components of the mixture: one was the birth-and-death move, in which a new component is created or an existing one is deleted, and the other was the split-and-combine move, in which one component is split in two, or two components are combined into one. In contrast, Stephens (2000) only dealt with birth-and-death moves to keep the algorithm within the theory of marked point processes on general spaces, while pointing out that 'one can envision a continuous time version of the general reversible jump formulation'. We show here that continuous time algorithms are not limited to the birth-and-death structure and that convergence of reversible jump to birth-and-death MCMC methodology is much more general. For example, split-and-combine moves could be incorporated, resulting in more general CTMCMC algorithms, and the appropriate theoretical framework is that of Markov jump processes. To complete our study of the connections between RJMCMC and CTMCMC methods, we implemented a full-scale numerical comparison with moves similar to those in Richardson and Green (1997) used in both algorithms: the outcome is the same for both samplers, with a longer execution time for CTMCMC algorithms.
The paper is organized as follows: in Section 2 we review the main features of the BDMCMC methodology, including moves that are more general than birth-and-death moves in Section 2.4 and variance reduction techniques in Section 2.5. This technology is exemplified for hidden Markov models in Section 3. Section 4 addresses a comparison of this approach with RJMCMC methodology, recalling the basics of RJMCMC methods in Section 4.1, establishing convergence of RJMCMC to BDMCMC methods in Section 4.2 and detailing the numerical comparison of both algorithms in Section 4.4. Section 5 concludes the paper with a discussion.

Continuous time Markov chain Monte Carlo methodologies
In this section we review BDMCMC methods in the mixture case that was considered by Stephens (2000) and we discuss the extension of the birth-and-death moves to other continuous time moves. Although Stephens (2000) provided a full description of the method in the specific set-up of mixtures of distributions, CTMCMC sampling is limited neither to birth-and-death moves nor to mixture models. For example, CTMCMC methods may be applied to any of the examples in Green (1995). See also Ripley (1977), Geyer and Møller (1994), Grenander and Miller (1994) and Phillips and Smith (1996), where broader descriptions of continuous time approaches can be found. In particular, Ripley (1977) introduced the concept of simulating a birth-and-death process to approximate its limiting distribution, even though he was interested in a problem of fixed dimension, whereas Geyer and Møller (1994) proposed a Metropolis-Hastings algorithm for spatial point processes and argued the superiority of this scheme compared with a continuous time approach, as did Clifford and Nicholls (1994).

A reference example: mixture models
Our bench-mark is a mixture model, with probability density function of the form

p(y | k, w, φ) = Σ_{j=1}^k w_j f(y | φ_j),

where k is the unknown number of components, w = (w_1, ..., w_k) are the component weights, φ = (φ_1, ..., φ_k) are the component parameters and f(·|φ) is some parametric class of densities indexed by a parameter φ, like the Gaussian, the gamma, the beta or the Poisson family. The component weights are non-negative numbers summing to 1. Mixture models have been extensively considered in the literature but remain a challenging setting for variable dimension techniques.
The above densities are written as conditional on the parameter φ, given the Bayesian perspective of the paper. Hence we need to specify a prior density for (k, w, φ), denoted by r(k, w, φ). Here, r is a density with respect to a product measure, made of the counting measure in the k dimension and of the Lebesgue measure in the (w, φ) dimensions. We make no further assumptions about the prior, except that it is proper and exchangeable for each k, i.e. invariant under permutations of the pairs (w_i, φ_i). In particular, we do not impose any ordering of the φ_i motivated by identifiability concerns (Richardson and Green, 1997). We also denote the likelihood as

L(k, w, φ) = Π_{i=1}^m Σ_{j=1}^k w_j f(y_i | φ_j),

where y = (y_1, ..., y_m) is the observed data. The posterior density, which is our starting point for inference, is thus proportional to r(k, w, φ) L(k, w, φ). More realistic models typically involve hyperparameters, which add no further difficulty. Below we set θ = (w, φ), with k being implicit in this notation, and Θ^(k) denotes the space of k-component parameters.
A feature that is inherent to mixture models is that we may associate with each observation y_i a label or allocation z_i ∈ {1, ..., k}, with P(z_i = j | k, w) = w_j, that indicates from which component y_i was drawn. Given the data, these labels can be sampled independently according to

P(z_i = j | k, w, φ, y_i) = w_j f(y_i | φ_j) / Σ_{l=1}^k w_l f(y_i | φ_l).   (1)

This simulation is called completing the sample as, following EM terminology, (z, y) is referred to as the complete data. As detailed in Section 3 and as demonstrated in Celeux et al. (2000) for mixtures, completion is not necessary from a simulation point of view. Richardson and Green (1997) devised an algorithm that carries the complete data along through all moves of the sampler. In contrast, the algorithm of Stephens (2000) works with the incomplete data, i.e. y alone, in the dimension changing moves, but completes the data at regular intervals to carry out a resampling of all the parameters and hyperparameters except k.
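The completion step in equation (1) is a one-line computation in practice. The following minimal sketch (function names and the generic density argument f are ours, illustrated on a toy two-component Gaussian mixture) draws one allocation vector:

```python
import numpy as np

def allocation_probs(y, w, phi, f):
    # P(z_i = j | k, w, phi, y_i) of equation (1): row i, column j
    like = np.array([[wj * f(yi, pj) for wj, pj in zip(w, phi)] for yi in y])
    return like / like.sum(axis=1, keepdims=True)

def complete_sample(rng, y, w, phi, f):
    # draw one allocation z_i per observation, independently
    probs = allocation_probs(y, w, phi, f)
    return np.array([rng.choice(len(w), p=p) for p in probs])

# toy example: two well-separated Gaussian components (mean, sd) pairs
gauss = lambda y, p: np.exp(-0.5 * ((y - p[0]) / p[1]) ** 2) / (p[1] * np.sqrt(2 * np.pi))
rng = np.random.default_rng(0)
y = np.array([-2.1, -1.9, 2.0, 2.2])
z = complete_sample(rng, y, [0.5, 0.5], [(-2.0, 0.5), (2.0, 0.5)], gauss)
```

With well-separated components the sampled allocations recover the generating component with probability close to 1.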
An important feature of BDMCMC sampling is that a continuous time jump process is associated with the birth-and-death rates: whenever a jump occurs, the corresponding move is always accepted. The acceptance probability of usual MCMC methods is replaced by the differential holding times. In particular, implausible configurations, i.e. configurations θ such that L(θ) r(θ) is small, die quickly.

The Markov jump process view and local balance
The birth-and-death process described in the previous subsection is a Markov jump process: whenever it reaches state θ, it stays there for an exponentially distributed time with expectation depending on θ, and, after expiry of this holding time, jumps to a new state according to a Markov transition kernel. To ensure that a Markov jump process has an invariant density that is proportional to L(θ) r(θ), it is sufficient, although not necessary, that the local balance equations

L(θ) r(θ) q(θ, θ′) = L(θ′) r(θ′) q(θ′, θ) for all θ, θ′,   (3)

are satisfied (Preston, 1976; Ripley, 1977; Geyer and Møller, 1994). Here q(θ, θ′) is the rate of moving from state θ to θ′. Special care is required with such considerations, however, since the transition kernel of the jump chain typically does not have a density with respect to a single dominating measure. For example, after killing a component the new state is completely known given the current state. This problem also occurs for RJMCMC samplers, as exemplified by the measure construction in Green (1995), and we do not detail it further here. Further reading on Markov jump processes may be found in, for example, Preston (1976), Ripley (1977), sections 2 and 4, and Breiman (1992), chapter 15, sections 5 and 6.

Let us now derive equation (2) from equation (3). In the particular case of birth-and-death moves and a k-component configuration θ, equation (3) takes the form

L(θ) r(θ) β(θ) h{θ; (w, φ)} / {(k + 1)! (1 − w)^{k−1}} = L{θ ∪ (w, φ)} r{θ ∪ (w, φ)} δ{θ; (w, φ)} / k!,   (4)

which indeed leads to equation (2), i.e.

δ{θ; (w, φ)} = L(θ) r(θ) β(θ) h{θ; (w, φ)} / [L{θ ∪ (w, φ)} r{θ ∪ (w, φ)} (k + 1) (1 − w)^{k−1}].

The justification for the various factors in equation (4) is as follows: the factorials k! and (k + 1)! arise from the exchangeability assumption on the mixture components. Given that we do not impose an ordering constraint on φ_1, ..., φ_k, there are k! and (k + 1)! equivalent ways of writing θ and θ ∪ (w, φ) respectively. The equivalence is to be understood as giving the same likelihood, prior and posterior densities. The 1/(k + 1)! and 1/k!
terms thus appear as the probabilities of selecting one of the (k + 1)! and k! possible ways of writing θ ∪ (w, φ) and θ in the birth-and-death moves. This selection is immaterial, since it has no relevance for the posterior distribution. Furthermore, β(θ) h{θ; (w, φ)} is the density of proposing a new component (w, φ), and (1 − w)^{k−1} is again a Jacobian arising from renormalization of the weights. This determinant should be associated with the density h, as the (k + 1)-component parameter θ ∪ (w, φ) is not drawn directly from a density on Θ^(k+1), but rather indirectly through first drawing (w, φ) and then renormalizing. To compute the resulting density on Θ^(k+1) we must then calculate a Jacobian. Thus,

q{θ, θ ∪ (w, φ)} = β(θ) h{θ; (w, φ)} / (1 − w)^{k−1}.

Stephens (2000) resampled component weights and parameters with fixed k, as well as hyperparameters, at deterministic times (as opposed to the random occurrences of the birth-and-death moves). This makes the overall process inhomogeneous in time. We can incorporate similar moves into the continuous time sampler by adding a continuous time process in which, in state θ, such moves occur at rate γ(θ). The birth-and-death rates stay the same. The rates for resampling the component weights, parameters and hyperparameters could also be different.
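In algorithmic form, one jump of the resulting process can be sketched as follows. This is a minimal illustration under our own naming conventions: the birth rate, the per-component death rates and a sampler from the birth density h are assumed to be supplied by the user.

```python
import numpy as np

def bdmcmc_jump(rng, state, birth_rate, death_rates, sample_birth):
    """One jump of the birth-and-death process: an exponential holding time
    with total rate lambda, then a move chosen with probability proportional
    to its rate; whichever move is selected is accepted."""
    rates = np.array([birth_rate] + list(death_rates))
    lam = rates.sum()                       # total rate of leaving the state
    holding_time = rng.exponential(1.0 / lam)
    move = rng.choice(rates.size, p=rates / lam)
    if move == 0:
        new_state = sample_birth(state)     # birth drawn from h{theta; (w, phi)}
    else:                                   # death of component move - 1
        new_state = [c for i, c in enumerate(state) if i != move - 1]
    return holding_time, new_state

# toy usage with two components and a dummy birth sampler
rng = np.random.default_rng(1)
t, new = bdmcmc_jump(rng, ["c1", "c2"], 1.0, [2.0, 3.0], lambda th: th + ["new"])
```

Note that no acceptance step appears: implausible proposals are penalized through the rates, not through rejection.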

Generalizations of birth-and-death Markov chain Monte Carlo methods
A further generalization is to introduce other moves, like the split-and-combine moves of Richardson and Green (1997). We consider here the special case where, as in Green (1995), the combine move is deterministic. For simplicity θ_i denotes a generic component of the k-component parameter θ. Thus, in a mixture context, typically θ_i = (w_i, φ_i).
As for the RJMCMC proposal, the split move for a given component θ_i of the k-component vector θ is to split this component so as to give rise to a new parameter vector with k + 1 components, defined as (θ \ θ_i) ∪ T(θ_i, ε), where T = (T_1, T_2) is a differentiable one-to-one mapping that outputs two new components and ε is a random variable with density function p. We also assume that the mapping is symmetric in the sense that

P{T_1(θ_i, ε) ∈ B, T_2(θ_i, ε) ∈ B′} = P{T_1(θ_i, ε) ∈ B′, T_2(θ_i, ε) ∈ B} for all B, B′.   (5)

We denote the total rate of splitting by η(θ) and assume that, in a split move, each component is chosen with equal probability 1/k. Conversely, the local balance equation (3) provides, for any of the k(k − 1)/2 pairs of components of θ, the rate of combining them. In this particular case, equation (3) takes the form

2 L(θ) r(θ) {η(θ)/k} p(ε) |∂T(θ_i, ε)/∂(θ_i, ε)| / (k + 1)! = L{(θ \ θ_i) ∪ T(θ_i, ε)} r{(θ \ θ_i) ∪ T(θ_i, ε)} q{(θ \ θ_i) ∪ T(θ_i, ε), θ} / k!.

As before the factorials arise as probabilities of selecting particular representations of θ and (θ \ θ_i) ∪ T(θ_i, ε), and η(θ)/k is the rate of splitting a particular component, as η(θ) is the overall splitting rate. The coefficient 2 is due to the fact that a component can be split into two pairs that are identical apart from the ordering and that occur with the same probability because of the symmetry assumption (5); otherwise we would have to replace p(ε) with the average of two terms. Thus, the rate of combining the two components T(θ_i, ε) back into θ_i is

q{(θ \ θ_i) ∪ T(θ_i, ε), θ} = 2 L(θ) r(θ) η(θ) p(ε) |∂T(θ_i, ε)/∂(θ_i, ε)| / [k (k + 1) L{(θ \ θ_i) ∪ T(θ_i, ε)} r{(θ \ θ_i) ∪ T(θ_i, ε)}].   (6)

In Section 4.3, we shall derive this rate directly from Richardson and Green's (1997) sampler.

Sampling in continuous time: a new Rao-Blackwellization
The output of a discrete time RJMCMC sampler is typically monitored after each step, or at regular intervals so as to decrease intersample correlation, as in Ripley (1977), section 5, and Richardson and Green (1997).
In continuous time there are more options. For example, the process may be sampled at regular times, as in Stephens (2000), or at instants given by an independent Poisson process. In either case posterior means E[g(θ)|y] are estimated by sample means

(1/n) Σ_{i=1}^n g{θ(τ_i)},

where {θ(t)} is the CTMCMC process and the τ_i are the sampling instants. Under the former sampling scheme, if the sampling interval tends to 0, we effectively put a weight on each state visited by {θ(t)} that is equal to the length of the holding time in that state, when computing the sample mean. Before elaborating further on this idea, we introduce some additional notation.
Let T_n be the time of the nth jump of {θ(t)}, with T_0 = 0. By the jump chain we mean the Markov chain {θ(T_n)} of states that are visited by {θ(t)}. We denote this chain by {θ̃_n}, i.e. θ̃_n = θ(T_n). Let λ(θ) be the total rate of {θ(t)} leaving state θ, i.e. the sum of the birth rate and all death rates, plus the rates of all other kinds of move that there may be. Then the holding time T_n − T_{n−1} of {θ(t)} in its nth state θ̃_{n−1} has a conditional exponential Exp{λ(θ̃_{n−1})} distribution.
Returning to the sampling scheme, we can reduce sampling variability by replacing the weight T_n − T_{n−1} by its expectation 1/λ(θ̃_{n−1}). In this way the variances of estimators built from the sampler output are decreased: both the numerator and the denominator have reduced variance by virtue of the Rao-Blackwell theorem, since

Σ_n g(θ̃_{n−1})/λ(θ̃_{n−1}) = E[Σ_n g(θ̃_{n−1})(T_n − T_{n−1}) | {θ̃_n}],

and likewise for the denominator. The asymptotic variance of the ratio can then be shown to be smaller than when using T_n − T_{n−1} in place of 1/λ(θ̃_{n−1}), following Geweke (1989). When sampling {θ(t)} in this way, we only simulate the jump chain and store each state that it visits and the corresponding expected holding time. Alternatively, the expected holding times may be recomputed when post-processing the sampler output. The transition kernel of the jump chain is as follows: the probability that an event happens is proportional to its rate. For example, the probability of a birth is β(θ)/λ(θ), and if a birth occurs the new component weight and parameter are drawn from h{θ; (w, φ)} as before. Thus we need to compute all rates when simulating the jump chain, just as we do when simulating {θ(t)}. All possible moves are incorporated into the Rao-Blackwellized estimator, not only those that are selected.
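For a posterior expectation this weighting scheme amounts to a simple weighted average over the jump chain; a minimal sketch (function name is ours):

```python
import numpy as np

def rb_posterior_mean(g_values, total_rates):
    """Rao-Blackwellized estimate of E[g | y] from the jump chain:
    each visited state is weighted by its expected holding time
    1 / lambda(state) instead of the simulated exponential one."""
    w = 1.0 / np.asarray(total_rates, dtype=float)  # expected holding times
    g = np.asarray(g_values, dtype=float)
    return float((w * g).sum() / w.sum())

# two visited states with g-values 1 and 3 and total leaving rates 1 and 0.5:
# the second state has expected holding time 2, hence twice the weight
est = rb_posterior_mean([1.0, 3.0], [1.0, 0.5])
```

Replacing the exponential holding times by their expectations removes one source of randomness while leaving the estimator consistent for the same limit.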
This reformulation of the continuous time algorithm has more than practical appeal for the approximation of integrals. Indeed it highlights a point that will be made clearer in Section 4, namely that the continuous time structure is paramount neither for the MCMC algorithm nor for the approximation of integrals.

An illustration for hidden Markov models
Before moving to the comparison with the RJMCMC method, we illustrate the potential of our continuous time extension in the set-up of hidden Markov models.

Setting
In this generalization of the mixture model, the observations y_n are such that, conditional on a hidden Markov chain {z_n} with finite state space {1, ..., k}, y_n is distributed as an N(0, σ²_{z_n}) variate. Therefore, marginally, y_n is distributed from a mixture of normal distributions.
Unlike previous implementations, we choose to parameterize the transition probability matrix P = (p_{ij}) of the Markov chain {z_n} by positive weights ω_{ij} as follows:

p_{ij} = ω_{ij} / Σ_{l=1}^k ω_{il}.

The ω_{ij} are therefore not identified, but this parameterization should facilitate the MCMC moves, provided that a vague proper prior is selected, since it relaxes the constraints on those moves. Further, this reparameterization allows for a point process representation of the problem (Preston, 1976; Ripley, 1977; Geyer and Møller, 1994). The prior model consists of a uniform U{1, ..., M} prior on k, an Exp(1) prior on the ω_{ij}, a uniform U(0, α) prior on the σ_i and a data-dependent Exp(5 max_n |y_n|) prior on the hyperparameter 1/α; Robert et al. (2000) noted that the factor 5 in the exponential distribution was of little consequence. We stress that we impose no identifiability constraint by ordering the variances, in contrast with Robert et al. (2000). Another major difference is that, as in Stephens (2000), we do not use completion to run our algorithm, i.e. the latent Markov chain {z_n} is not simulated by the algorithm. This can be avoided because of both the forward recursive representation of the likelihood for a hidden Markov model (Baum et al., 1970), which has been used before in Robert et al. (1999), and the random-walk proposals as in Hurn et al. (2003). Although it is not strictly necessary from an algorithmic point of view (Robert et al., 1999), this choice facilitates the comparison with Stephens (2000).
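The non-identifiability claim is easy to check numerically, assuming the row-normalization p_ij = ω_ij / Σ_l ω_il (a small self-contained sketch; names are ours):

```python
import numpy as np

def transition_matrix(omega):
    # normalise each row of the positive weights omega into probabilities
    omega = np.asarray(omega, dtype=float)
    return omega / omega.sum(axis=1, keepdims=True)

omega = np.array([[1.0, 2.0],
                  [3.0, 1.0]])
P = transition_matrix(omega)
# rescaling any row of omega leaves P unchanged, so the omega_ij are not identified
P_scaled = transition_matrix(np.array([[2.0], [5.0]]) * omega)
```

The redundancy is deliberate: moves on the unconstrained ω_{ij} need not preserve row sums, which is what relaxes the constraints on the MCMC proposals.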

The moves of the continuous time Markov chain Monte Carlo algorithm
Since Robert et al. (2000) implemented reversible jumps for this model, we focus on the CTMCMC counterpart, extending Stephens (2000) to this framework. In addition to birth-and-death moves, which were enough to provide good mixing in Stephens (2000), we are forced to introduce additional proposals, similar to those in Richardson and Green (1997), because we observed that the birth-and-death moves are not, by themselves, sufficient to ensure fast convergence of the MCMC algorithm. The proposals that we add are split-and-combine moves, as described earlier, and fixed k moves, where the parameters are modified via a regular Metropolis-Hastings step. The latter proposals are essential in ensuring irreducibility and good convergence properties.
The birth-and-death and fixed k moves are simple to implement and are equivalent to those given in Hurn et al. (2003), with fixed k moves relying on random-walk proposals over the transforms log(ω_{ij}) and log{σ_i/(α − σ_i)}.
The split-and-combine move follows the general framework of Section 2.4, with a combine rate given by expression (6). We used an individual splitting rate η_S which is the same for all components, so that the overall rate of a split move for a k-component vector is η(θ) = k η_S. In the practical implementation of the algorithm, we chose η_S = η_B = 2 and η_F = 5, where η_B and η_F correspond to the birth and fixed k move rates respectively.
In the case of the above normal hidden Markov model, a split of state i 0 into states i 1 and i 2 involves four different types of actions.
(a) The first is a split move, in each row j ≠ i_0, for ω_{j,i_0}, as

ω_{j,i_1} = ε_j ω_{j,i_0}, ω_{j,i_2} = (1 − ε_j) ω_{j,i_0},

where ε_j ∼ U(0, 1). This proposal is sensible when thinking that both new states i_1 and i_2 issue from state i_0, and the probabilities of reaching i_0 are thus distributed between the probabilities of reaching the new states i_1 and i_2 respectively.

(b) The second is a split move, in each column i ≠ i_0, for ω_{i_0,i}, as

ω_{i_1,i} = ξ_i ω_{i_0,i}, ω_{i_2,i} = ω_{i_0,i}/ξ_i,

where ξ_i ∼ LN(0, 1). The symmetry constraint (5) is thus satisfied, i.e. ξ_i and 1/ξ_i have the same log-normal distribution. Before this, we tried a half-Cauchy C⁺(0, 1) proposal, which also preserves the distribution under inversion, but this led to very poor mixing properties of the algorithm.

(c) The third action is a split move for ω_{i_0,i_0}, as

ω_{i_1,i_1} = ξ_{i_1} ε_{i_0} ω_{i_0,i_0}, ω_{i_1,i_2} = ξ_{i_1} (1 − ε_{i_0}) ω_{i_0,i_0}, ω_{i_2,i_1} = ξ_{i_2} ε_{i_0} ω_{i_0,i_0}, ω_{i_2,i_2} = ξ_{i_2} (1 − ε_{i_0}) ω_{i_0,i_0},

where ε_{i_0} is uniform on (0, 1) and ξ_{i_1} and ξ_{i_2} are LN(0, 1).

(d) The last is a split move for σ²_{i_0}, as

σ²_{i_1} = ξ σ²_{i_0}, σ²_{i_2} = σ²_{i_0}/ξ,

where ξ ∼ LN(0, 1).

The combine move is chosen in a symmetric way, so that states i_1 and i_2 are combined into state i_0 by taking first the geometric average of rows i_1 and i_2 in the unnormalized transition probability matrix and then adding columns i_1 and i_2. One can check that this sequence of moves also applies to the particular case of ω_{i_0,i_0}. The variance σ²_{i_0} is the geometric average of σ²_{i_1} and σ²_{i_2}. Appendix B details the computation of the corresponding Jacobian.
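The deterministic combine move just described (geometric average of rows, then summation of columns, with the variances combined geometrically) can be sketched as follows, assuming i1 < i2 and an unnormalized weight matrix omega (the helper below is our own illustration, not the paper's code):

```python
import numpy as np

def combine_states(omega, sigma2, i1, i2):
    """Combine hidden states i1 < i2 into one state: replace row i1 by the
    elementwise geometric mean of rows i1 and i2, drop row i2, then add
    column i2 into column i1; the variances are combined geometrically."""
    merged = omega.copy()
    merged[i1] = np.sqrt(omega[i1] * omega[i2])   # geometric mean of rows
    merged = np.delete(merged, i2, axis=0)
    merged[:, i1] += merged[:, i2]                # add the two columns
    merged = np.delete(merged, i2, axis=1)
    s2 = np.delete(sigma2, i2)
    s2[i1] = np.sqrt(sigma2[i1] * sigma2[i2])     # geometric mean of variances
    return merged, s2

# toy 3-state example: combine states 1 and 2
omega = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
merged, s2 = combine_states(omega, np.array([1., 4., 9.]), 1, 2)
```

Note that the row operation is applied before the column operation, so that the diagonal block of the two merged states is handled by the same two rules.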

An illustration
For a comparison with Robert et al. (2000), we consider a single data set studied there, namely the wind intensity in Athens (Francq and Roussignol, 1997). Since the prior distribution on the σ_i is a uniform U(0, α) distribution, α is a hyperparameter that is estimated from the data set in a hierarchical way and updated through a slice sampler (see Robert et al. (2000) for details) via an additional process with intensity η_α, set equal to 1. The variances σ²_i, being constrained to be smaller than α², are updated via a Gaussian random-walk proposal in the α-logit domain, i.e. by using the transform log{σ/(α − σ)} and its inverse. Fig. 1 summarizes the output for this data set. As in Robert et al. (2000), we obtain a mode of the posterior distribution of k at k = 3, although the posterior distribution differs slightly in our case since the posterior probabilities for k = 1, 2, 3, 4 are 0.0064, 0.1848, 0.7584 and 0.0488, to be compared with Table 1 of Robert et al. (2000). Fig. 1 also provides the distribution of the number of moves per time unit (on the continuous time axis). The log-likelihoods cover a wider range than those found in Robert et al. (2000), although the highest values are the same. For instance, the largest log-likelihood for k = 2 is −688, whereas it is −675 for k = 3 and −670 for k = 4. That we find lower log-likelihoods than with RJMCMC techniques is to be expected since, although both RJMCMC and CTMCMC algorithms explore the same target distribution, continuous time algorithms can explore more unlikely regions in the parameter space, like the tails of the target, by downweighting states with shorter lifetimes.

Comparisons of reversible jump Markov chain Monte Carlo with continuous time algorithms
In this section we provide a comparison of reversible jump and continuous time methodologies, starting with a review of RJMCMC methods within the framework of mixtures.
Reversible jump Markov chain Monte Carlo methods
In the mixture setting, a reversible jump birth move from a state θ ∈ Θ^(k) proposes, with probability b(θ), a new component (w, φ) drawn from the density h{θ; (w, φ)}, and accepts it with probability min(1, A), where

A = A{θ; θ ∪ (w, φ)} = [L{θ ∪ (w, φ)} r{θ ∪ (w, φ)} / {L(θ) r(θ)}] × [d{θ ∪ (w, φ)}/(k + 1)] (1 − w)^{k−1} (k + 1)! / [b(θ) h{θ; (w, φ)} k!].

Here the first ratio is the ratio of posterior densities, b(θ) h{θ; (w, φ)} is the density corresponding to proposing a new component (w, φ) and d{θ ∪ (w, φ)}/(k + 1) is the probability of proposing to kill component (w, φ) when in state θ ∪ (w, φ). Finally, (1 − w)^{k−1} is the same Jacobian determinant as above, and the factorial ratio arises from the exchangeability assumption.
Recall that, unlike in Section 3, the w_i sum to 1. If a proposal to kill a component (w, φ) of a (k + 1)-component state θ ∪ (w, φ) is made, the acceptance probability is min(1, 1/A), where A = A{θ; θ ∪ (w, φ)} is as above. RJMCMC sampling typically involves other kinds of move, like fixed k moves resampling the component weights, the parameters φ_i and, possibly, hyperparameters; see, for example, Richardson and Green (1997). A complete sweep of the algorithm consists of the composition of a birth-and-death move with these other fixed k moves. Sampling for a fixed k can be carried out by using a Gibbs move after completing the sample according to equation (1). As noted above, Richardson and Green (1997) designed additional moves for splitting and combining components.

Convergence to birth-and-death Markov chain Monte Carlo sampling
In this section we construct a sequence of RJMCMC samplers converging to the BDMCMC sampler.
For N ∈ ℕ we define an RJMCMC sampler by defining birth-and-death probabilities b_N(θ) = 1 − exp{−β(θ)/N} and d_N(θ) = 1 − b_N(θ), and we speed up time by a factor N, i.e. we consider the process {θ_N(t)} given by the state of this sampler after ⌊Nt⌋ iterations, where ⌊·⌋ denotes the integer part. The state of the BDMCMC sampler at time t ≥ 0 is denoted by θ(t).
We now consider what happens as N → ∞. The probability of proposing a birth in state θ tends to 0 as β(θ)/N. Hence the acceptance ratio A_N tends to ∞, so a birth proposal is eventually always accepted. Since time is speeded up at scale N, on the nominal timescale the limiting process of accepted births in state θ is a Poisson process of rate β(θ). Furthermore, as N → ∞, the scaled probability

N [d_N{θ ∪ (w, φ)}/(k + 1)] min[1, 1/A_N{θ; θ ∪ (w, φ)}]

of deleting component (w, φ) in a state θ ∪ (w, φ) ∈ Θ^(k+1) converges to δ{θ; (w, φ)}, given in equation (2). Considering the rescaled time axis and the independent attempts to create or delete components, in the limit the waiting time until this component is killed has an exponential distribution with rate δ{θ; (w, φ)}, agreeing with the BDMCMC sampler. Thus, as N → ∞, a birth is rarely proposed but always accepted, and a death is almost always proposed but rarely accepted. Both these schemes result in waiting times which are asymptotically exponentially distributed, with rates in accordance with the BDMCMC sampler. Thus, we may expect that, as N → ∞, the processes {θ_N(t)} and {θ(t)} will become increasingly similar.
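The scaling of the birth moves can be checked numerically. Assuming a birth proposal probability of the form b_N = 1 − exp(−β/N), so that N b_N → β, the number of iterations until a birth is proposed is geometric, and rescaling it by N gives an approximately exponential waiting time of rate β (a small self-contained check with β = 2):

```python
import numpy as np

beta, N = 2.0, 10_000
b_N = 1.0 - np.exp(-beta / N)          # birth proposal probability

rng = np.random.default_rng(0)
# geometric number of iterations until a birth proposal, rescaled by N
waits = rng.geometric(b_N, size=50_000) / N
mean_wait = waits.mean()               # should be close to 1/beta = 0.5
```

As N grows the rescaled geometric waiting times converge to the Exp(β) holding times of the limiting birth process.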
We shall now make this reasoning rigorous, starting with the following assumptions.
(a) Φ has a separable topology which can be metrized by a complete metric.
(b) β(θ) is positive and continuous on Θ.
(c) r(θ) and L(θ) are positive and continuous on Θ.
We first note that, since the standard topology on the open unit interval (0, 1) is separable and can be metrized by a complete metric, e.g.

d(x, y) = |log{x/(1 − x)} − log{y/(1 − y)}|,

the simplex S^{k−1} can be viewed as a complete separable metric space. Then Θ, with the induced natural topology, is a space of the same kind. The process {θ(t)} is a Markov process on Θ which we assume has sample paths in D_Θ[0, ∞), the space of Θ-valued functions on [0, ∞) which are right continuous and have left-hand limits everywhere.
We then derive the following result (see Appendix A for a proof).

Theorem 1. Under assumptions (a)-(c), as N → ∞ the processes {θ_N(t)} converge weakly to {θ(t)} in D_Θ[0, ∞).

Convergence to other continuous time processes
Recall again that, in Richardson and Green's (1997) version, the RJMCMC sampler also includes a split-and-combine move. More precisely, using the same notation as in Section 2.4, they proposed to split a randomly chosen component θ_i of the k-component vector θ with probability s_N(θ), to give rise to a new parameter vector with k + 1 components, defined as (θ \ θ_i) ∪ T(θ_i, ε). Conversely, the probability of proposing to combine a randomly chosen pair of components of θ (there are k(k − 1)/2 pairs) is denoted by c_N(θ) = 1 − s_N(θ).
A split move changing the k-component vector θ to (θ \ θ_i) ∪ T(θ_i, ε) is accepted with probability min(1, A_N), where A_N is the acceptance ratio analogous to that of the birth move, now involving p(ε) and the Jacobian |∂T(θ_i, ε)/∂(θ_i, ε)|. If, as above, we let s_N(θ) = 1 − exp{−η(θ)/N} for some η(θ), so that N s_N(θ) → η(θ), and accordingly scale by N the trajectory of the corresponding discrete time sampler, the limiting continuous time process has a rate of moving from (θ \ θ_i) ∪ T(θ_i, ε) to θ by a combine move which is given by expression (6). Convergence of RJMCMC to continuous time processes thus occurs in a broader context than within the birth-and-death framework of Stephens (2000).

A numerical comparison of both methodologies
Although theorem 1 establishes a strong connection between RJMCMC and CTMCMC methodology, by showing that CTMCMC sampling can be arbitrarily well approximated by an RJMCMC algorithm, it does not imply that in practice both approaches perform equivalently, e.g. in terms of computational cost. We thus carried out a numerical comparison of both approaches based on identical moves and identical proposals on both sides. Further implementational details are provided in Appendix C. In this comparison, we chose to remain within the framework of mixtures of distributions, partly because the setting is simpler than hidden Markov models and partly because most of the earlier literature on the topic relates to this area. We use the galaxy data set (Roeder, 1990).

Implementational issues
We first discuss computational aspects of both discrete and continuous time algorithms. In continuous time settings, once a state has been visited, it is necessary to compute the rates of all possible moves leading to an exit from that state, i.e. O(k) and O(k²) computations for birth-and-death and split-and-combine moves respectively. Discrete time settings do not require this exhaustive checking, as the acceptance ratio of a move is not computed until the move has been proposed. This advantage of RJMCMC sampling is, however, mitigated by three facts.
(a) For continuous time moves such as births and splits, the rates are typically very simple (e.g. constant) and it is only the death or combine rates that are expensive to compute.

(b) Except for small data sets, the cost of evaluating the acceptance probability in RJMCMC sampling mainly lies in computing the log-likelihood at the proposed parameters according to

log L(k, w, φ) = Σ_{i=1}^m log{Σ_{j=1}^k w_j f(y_i | φ_j)}.   (8)

For mixture models, the computation that is associated with RJMCMC sampling thus also increases proportionally to k.

(c) At the expense of storing all values f(y_i|φ_j), as in Stephens (2000), it is possible to reduce significantly the cost of repeated evaluations of equation (8). For instance, in a death proposal the only required new computations are the summations in i and j, omitting the index of the component selected. Although this remark also applies to the RJMCMC sampler, it is most profitable when applied to the implementation of the continuous time sampler.
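The caching device of point (c) can be illustrated as follows: with the m × k matrix F of stored values f(y_i|φ_j), the log-likelihood (8) of any death proposal reduces to a matrix-vector product over the retained columns (a sketch under our own naming, not the paper's implementation):

```python
import numpy as np

def loglik_from_cache(F, w):
    # equation (8): sum_i log( sum_j w_j f(y_i | phi_j) ), with F[i, j] = f(y_i | phi_j)
    return float(np.log(F @ np.asarray(w, dtype=float)).sum())

def death_logliks(F, w):
    """Log-likelihoods of the k states obtained by deleting one component,
    reusing the cached densities F instead of recomputing them."""
    w = np.asarray(w, dtype=float)
    out = []
    for j in range(w.size):
        keep = np.arange(w.size) != j
        # drop column j and renormalise the remaining weights
        out.append(loglik_from_cache(F[:, keep], w[keep] / (1.0 - w[j])))
    return out

# toy cache for m = 2 observations and k = 2 components
F = np.array([[0.2, 0.4],
              [0.6, 0.8]])
full = loglik_from_cache(F, [0.5, 0.5])
deaths = death_logliks(F, [0.5, 0.5])
```

This is what makes the exhaustive computation of all death rates affordable in the continuous time sampler.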
Thus, when only birth-and-death moves are used, the average computation times for simulating one jump of the continuous time sampler and one step of the reversible jump sampler are comparable. In our implementation, the former is longer by a factor which varies between 1.5 and 2, depending on the data set. In contrast, the computation time for continuous time simulation with split-and-combine moves is a factor of 3 longer for the galaxy data set.

Birth-and-death samplers
We first contrast the performance of the two types of sampler, RJMCMC and CTMCMC, when only birth-and-death moves are used in addition to moves that do not modify the number of components. Except for the fine details of the proposals that are described in Appendix C and the absence of completion in the fixed k moves, we are thus in the setting considered by Stephens (2000). Note, however, that for CTMCMC sampling we adopted the Rao-Blackwellization device that is discussed in Section 2.5 (weighting each visited configuration by the inverse of the overall rate of leaving it rather than by the corresponding exponentially distributed holding time). We proposed the fixed k moves according to an independent Poisson process of rate η_F, which leaves the overall continuous time process Markovian, whereas Stephens (2000) proposed these moves at fixed regular times. By setting the probability P_F of proposing a fixed k move in RJMCMC sampling equal to the rate η_F = 0.5 at which fixed k moves are proposed in CTMCMC sampling, and likewise P_B = η_B = 0.25 for the birth moves, we guaranteed that the moves were proposed in equal proportions by both samplers. The most important aspect is that both the reversible jump and the continuous time samplers were implemented using exactly the same move proposals, to the point of sharing the same routines, which allows for meaningful comparisons. In what follows, we compare the performance of both samplers when the number of jumps (the number of configurations visited) in CTMCMC sampling is equal to the number of iterations of the RJMCMC algorithm.
The main message here is conveyed by Fig. 2, which shows that there is no significant difference between the samplers: whether for a small (5000) or a large (500000) number of iterations, the accuracy of the estimated posterior probabilities for all allowed values of k is equivalent for both samplers. Other summaries, such as posterior parameter estimates conditional on a fixed k, show even less difference; this is not surprising given that both samplers share the same fixed k moves.
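The posterior probabilities of k compared in Fig. 2 are obtained from the output as weighted frequencies of visits: unit weights for RJMCMC output, Rao-Blackwellized inverse rates (or holding times) for CTMCMC output. A minimal sketch, with a function name of our choosing:

```python
from collections import defaultdict

def posterior_k(path):
    """Estimate P(k | data) from (k, weight) pairs: unit weights for
    RJMCMC output, inverse leaving rates or holding times for CTMCMC."""
    totals = defaultdict(float)
    for k, w in path:
        totals[k] += w
    norm = sum(totals.values())
    return {k: t / norm for k, t in sorted(totals.items())}

# With unit weights this reduces to the plain relative frequencies of an
# RJMCMC run; with inverse rates it is the Rao-Blackwellized estimate.
probs = posterior_k([(1, 0.5), (2, 0.25), (2, 0.25), (1, 1.0)])  # {1: 0.75, 2: 0.25}
```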
Another evaluation of the performance of MCMC samplers is provided by the autocovariance function of simulated traces. To implement this idea for the continuous time sampler, the Rao-Blackwellized continuous time path (i.e. the path of the continuous time process where the inverse rates are substituted for the corresponding holding times) was sampled regularly, with a number of points equal to the number of jumps. Fig. 3 shows the resulting autocovariance for the posterior simulations of k for both RJMCMC and CTMCMC sampling, estimated on 2 million iterations after discarding a burn-in period of 8 million iterations. Once again, both samplers are seen to perform equivalently: although all moves are accepted in the CTMCMC method, the mixing is not significantly improved over RJMCMC sampling because of the weighting mechanism. This is well captured by Fig. 4, which shows that only about 30% of the configurations that are visited by the continuous time sampler are maximally weighted. Conversely, 15% of the configurations have a negligible weight, a situation which occurs when there is at least one death move with a very large rate.

Samplers with split-and-combine moves
Richardson and Green (1997) suggested that for mixture models it is profitable to allow moves that can combine two components into a single one or, conversely, split a component. The inclusion of such moves in the CTMCMC framework is straightforward and has been discussed in Section 4.3. Fig. 5 is the equivalent of Fig. 2 with all types of move enabled; here, P_F = η_F = P_B = η_B = P_S = η_S = 0.2 is used, where P_S and η_S are the probability of proposing a split move in RJMCMC sampling and the split rate in CTMCMC sampling respectively. Looking in greater detail at the plot for 5000 iterations, it is possible to see a small advantage for the continuous time sampler: the reversible jump sampler has a small downward bias for k = 3 and its variability is slightly larger for all bins.
Part of the explanation is that the weights (inverse rates) in the continuous time sampler have a very similar distribution for the death and combine moves, whereas the acceptance probabilities for these moves are very different in the reversible jump sampler, where deaths are accepted about three times more often. This is because, even when k is large, there are always at least one or two pairs of components with a reasonable rate of being combined, and these are selected by the continuous time sampler. In contrast, when k is large, the reversible jump sampler has a low probability of drawing precisely these few pairs.
Another interesting conclusion to be drawn from Fig. 2 and Fig. 5 is that the inclusion of the split-and-combine moves does not significantly improve the accuracy of the results. This is understandable for RJMCMC sampling since split proposals need to be very carefully tuned to maintain reasonable acceptance probabilities (see also Appendix C). For CTMCMC sampling, however, the same conclusion holds despite the advantage that was mentioned above.
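The regular resampling of the Rao-Blackwellized path that underlies the autocovariance comparison above can be sketched as follows; the function names are ours and the autocovariance estimator is the plain empirical one.

```python
def sample_regularly(path, n_points):
    """Sample a piecewise constant path at n_points regular time points.

    path is a list of (value, duration) pairs, the durations being the
    inverse rates substituted for the holding times (Rao-Blackwellization).
    """
    total = sum(d for _, d in path)
    step = total / n_points
    out, t_next = [], step / 2.0   # midpoints of a regular grid
    elapsed = 0.0
    for value, duration in path:
        elapsed += duration
        while len(out) < n_points and t_next <= elapsed:
            out.append(value)
            t_next += step
    return out

def autocovariance(x, max_lag):
    """Empirical autocovariance of a scalar trace at lags 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    return [sum((x[t] - mean) * (x[t + h] - mean) for t in range(n - h)) / n
            for h in range(max_lag + 1)]

xs = sample_regularly([(1, 1.0), (2, 1.0)], 4)  # → [1, 1, 2, 2]
acf = autocovariance([float(v) for v in xs], 2)
```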
In conclusion, if we were to rank all the techniques on the basis of their computation time, as detailed in Section 4.4.1, the optimal choice would be the RJMCMC method with birth and death only, very closely followed by the equivalent CTMCMC sampler, then, at some distance, RJMCMC sampling with both types of dimension changing moves enabled and, finally, CTMCMC sampling under the same conditions, which is unattractive because of its high computational cost.

Discussion
Our work suggests that there is no clear-cut improvement in using CTMCMC algorithms: although discrete time moves can also be implemented in continuous time, this alternative implementation does not bring a visible improvement in the performance of the algorithms. If anything, the continuous time samplers are slower, because they involve a consideration of the whole range of possible moves and their respective rates after each move. Repeated calls to the likelihood function are very costly in computing time and/or memory. The advantage of continuous time samplers is rather their ability to move to unlikely places: given that the split and birth rates are independent of the data, the algorithm can impose moves to low probability regions of the parameter space. Such regions are of little interest for inference, but they can constitute a kind of springboard for the Markov chains, allowing these to move from one mode of the posterior distribution to another. This potentially better mixing behaviour can, however, only be achieved when a wide variety of moves is proposed simultaneously, as illustrated in Fig. 5.
A typical set-up of BDMCMC sampling is to let β(·) be constant, say β(·) = 1, since a different constant only rescales time. Likewise, for RJMCMC sampling b(·) = d(·) = 1/2 is typical, except for states with k = 1, for which b(·) = 1. Under these assumptions, equations (2) and (7) relate as A = (k + 1)/δ. Since both samplers have the same stationary distribution, we find that, if one of the algorithms performs poorly, so does the other. For RJMCMC sampling this is manifested as small values of A (birth proposals are rarely accepted), whereas for BDMCMC sampling it is manifested as large values of δ (new components are indeed born but die again quickly).
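Under the conventions above, rearranging the relation A = (k + 1)/δ makes the duality explicit: for fixed k, the product of the birth acceptance probability and the death rate is constant, so one quantity cannot be small without the other being large.

```latex
A = \frac{k+1}{\delta}
\qquad\Longleftrightarrow\qquad
A\,\delta = k + 1 .
```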
The 'attractive alternative' to Richardson and Green (1997) in terms of mixing over the values of k that is reported in Stephens (2000), section 5.3, is thus not to be sought in the continuous time nature of his algorithm, but rather in the different choices that are made in the sampler: Stephens (2000) used birth-and-death moves only for modifying the dimension of the model, and these moves did not involve the complete data, i.e. the component labels. Richardson and Green (1997), in contrast, used split-and-merge moves as well and carried along the component labels through all moves, including the dimension changing moves. The issue of completion is not directly related to the central theme of this paper, but it may be that the absence of completion explains the different behaviour of the samplers. This was not the case, however, in the fixed k mixture setting that was studied by Celeux et al. (2000).
Finally, we perceive Rao-Blackwellization as an advantage of continuous time algorithms; as noted above, this feature is obtained at no extra cost. Rao-Blackwellization could in principle be carried out in discrete time as well, since holding times then have geometric distributions, but in that setting the expected holding times cannot be computed easily; see equation (9) in the proof of lemma 1 in Appendix A. See also Casella and Robert (1996) for another Rao-Blackwellization of Metropolis algorithms.
We start by looking at the 'birth part' (10) of this expression. We shall prove that it tends to 0 by showing that the integrand tends to 0 for all (w, φ) and by showing that the integrand is dominated, for all sufficiently large N, by an integrable function. The integrand is bounded by the sum of expressions (12) and (13). For β ≥ 0 and N > β the first of these can be bounded; hence, for sufficiently large N, expression (12) is bounded by expression (14). By assumptions (b) and (d) in Section 4.2, for an appropriate G, this expression tends to 0 as N → ∞ and is dominated by an integrable function. Regarding expression (13), it is dominated by an integrable function similar to expression (14) (remove the factor 1/(2N) and the squaring), and it remains to show that it tends to 0 as N → ∞. By assumption (c) in Section 4.2, for each (w, φ), L{θ ∪ (w, φ)} r{θ ∪ (w, φ)} and L(θ) r(θ) are bounded away from ∞ and 0 respectively on a sufficiently small G. Likewise, by assumption (b), d_N{θ ∪ (w, φ)} and b_N(θ) tend to 1 and 0 respectively, uniformly over such a G. Finally, by assumption (d), h{θ; (w, φ)} is bounded on an appropriate G, and we conclude that expression (13) tends to 0 uniformly over G as N → ∞ if G is sufficiently small. We now turn to the 'death part' (11). By arguments that are similar to those above, for large N and sufficiently small G the relevant ratio tends to 1 uniformly over G and, also using arguments as above, one can show that the right-hand side of this expression converges to δ{θ \ (w_i, φ_i); (w_i, φ_i)} as N → ∞, uniformly over a sufficiently small G.
Recall the definitions of the jump times and the jump chain in Section 2.5. The sequence {(θ̃_n, T_n − T_{n−1})} of visited states and holding times forms a Markov renewal process. The transition kernel of this process is denoted by K, i.e. K(θ; A × B) = P(θ̃_n ∈ A, T_n − T_{n−1} ∈ B | θ̃_{n−1} = θ). Since {θ(t)} is Markov, the conditional distribution of T_n − T_{n−1} given θ̃_{n−1} = θ is exponential with rate λ(θ). In addition, θ(T_n) and T_n − T_{n−1} are conditionally independent. Similarly, {θ_N(t)} is a semi-Markov process with jump times {T_n^N} in the lattice {i/N}, and the kernel of the associated Markov renewal process is denoted by K_N. Since {θ̃_n^N} is Markov, θ_N(T_n^N) and T_n^N − T_{n−1}^N are conditionally independent given θ_N(T_{n−1}^N).

A.1. Proof of theorem 1
Using results of Karr (1975), it is sufficient to prove that, for each real-valued uniformly continuous function g on Θ × [0, ∞), (a) Kg(θ) is continuous on Θ and (b) K_N g(θ) → Kg(θ) uniformly on compact subsets of Θ as N → ∞.
We start by showing part (b). By the structure of Θ, it is sufficient to show that, for each θ ∈ Θ^(k), there is a neighbourhood G ⊆ Θ^(k) of θ such that K_N g(θ) → Kg(θ) uniformly on G, and this is what we shall do. For θ ∈ Θ^(k), K_N g(θ) and Kg(θ) can be written in integral form, where ⌈x⌉ denotes the smallest integer that is no smaller than x. We again start by looking at the 'birth parts' of the kernels, bounding the corresponding part of |K_N g(θ) − Kg(θ)|. We wish to prove that this bound tends to 0 as N → ∞. We can do this by showing that the integrand tends to 0 for all u ≥ 0 and all (w, φ) and that there exists a dominating (for all sufficiently large N) integrable function.
To accomplish this, we add and subtract a number of telescoping terms involving differences of the form |g(θ, u) − g(θ′, u′)|, which are controlled by the modulus of continuity of g; here ∆ is a metric making Θ × [0, ∞) separable and complete. All the terms on the right-hand side except the first can be treated as in the proof of lemma 1, with the extra observation that λ(θ) ≥ β(θ) is bounded away from 0 on compact subsets of Θ. Moreover, by lemma 1, the o(1/N) term is uniform over a small G, so that the right-hand side tends to 0 uniformly over such a G. The inequality log(1 − x) ≥ −x − 2x² for 0 ≤ x ≤ 1/2 leads to a reverse inequality which is handled similarly.
The 'death parts' of the kernels, i.e. the corresponding parts of |K_N g(θ) − Kg(θ)|, can be handled by combining arguments for the 'birth parts' with arguments used to prove lemma 1.
Finally requirement (a) above can be proved by using similar techniques.
In the current implementation P_S is constant except for edge effects (P_S(M) = 0). On the galaxy data, the choice of parameters that maximizes the acceptance rate for the split-and-combine move is γ_S = 1, ρ_S = 0.2 and ν_S = 3. However, the acceptance rate is then only 4.3% (compared with 13.3% for the birth-and-death move).