Bayesian Inference for Mixtures of Stable Distributions

In many different fields, such as hydrology, telecommunications, condensed matter physics and finance, the Gaussian model proves unsatisfactory and has difficulty fitting data with skewness, heavy tails and multimodality. The use of stable distributions allows skewness and heavy tails to be modelled, but gives rise to inferential problems related to the estimation of the stable distributions' parameters. Some recent works have proposed characteristic-function-based estimation methods and MCMC-based estimation techniques, such as the MCMC-EM method and Gibbs sampling in a full Bayesian approach. The aim of this work is to generalise the stable distribution framework by introducing a model that also accounts for multimodality. In particular, we introduce a stable mixture model and a suitable reparametrisation of the mixture, which allow us to make inference on the mixture parameters. We use a full Bayesian approach and MCMC simulation techniques for the estimation of the posterior distribution. Finally, we propose some applications of stable mixtures to financial data.


Introduction
In many different fields, such as hydrology, telecommunications, physics and finance, Gaussian models have difficulty fitting data that exhibit a high degree of heterogeneity; thus stable distributions have been introduced as a generalisation of the Gaussian model. Stable distributions also allow for infinite variance, skewness and heavy tails. The tails of a stable distribution decay like a power function, allowing extreme events to carry more probability mass than under the Gaussian model. For a summary of the properties of stable distributions see Zolotarev [42] and Samorodnitsky and Taqqu [36], which provide a good theoretical background on heavy-tailed distributions. The practical use of heavy-tailed distributions in many different fields is well documented in the book of Adler, Feldman and Taqqu [1], which also reviews the estimation techniques.
In finance, the first studies on the hypothesis of stably distributed stock prices can be attributed to Mandelbrot [22], Fama [13], [14] and Fama and Roll [15], [16]. They propose stable distributions and give some statistical instruments for inference on the characteristic exponent. The use of stable distributions has also been motivated by empirical evidence from financial markets. Brenner [5] uses the notion of stationarity of the time series to explain the stability of stock prices. An illuminating work on inference for stable distributions is due to Buckle [6], who also performs an empirical analysis of daily stock prices, using a full Bayesian approach to estimate the stable distribution parameters and finding significant evidence for the stable distribution hypothesis.
There are many recent works on the use of stable distributions in finance. See, for example, Bradley and Taqqu [4] and Mikosch [26] for an introduction to the use of stable distributions in financial risk modelling. The works of Mittnik, Rachev and Paolella [25] and of Rachev and Mittnik [31] provide a fairly complete analysis of the theoretical and empirical aspects of stable distributions in finance.
Other early studies, performing empirical analyses of stock prices, suggest using mixtures of distributions in order to model the heterogeneity of financial markets. Barnes and Downes [2] use the estimation techniques of Fama and Roll [16] to discuss the results of Teichmoeller [39]. They find that for some stocks the property of stability does not hold and that the characteristic exponent varies across stocks. In order to account for this kind of heterogeneity of stock prices, the authors suggest mixtures of stable distributions as an alternative hypothesis. Simkowitz and Beedles [3] perform an empirical analysis focusing on the asymmetry of stock returns. They find that the skewness of stock returns is frequently positive and depends on the level of the characteristic exponent. They conclude that securities distributions may be better modelled through mixtures of stable distributions. Finally, an extensive empirical analysis due to Fielitz and Rozelle [17] shows that mixtures of Gaussian or non-Gaussian stable distributions can better describe stock prices. In particular, the authors suggest using a non-Gaussian stable mixture model with a changing scale parameter, because it directly accounts for skewness. We can conclude that the problem of multimodality and, more generally, of heterogeneity is well documented in the financial literature, even in the earliest studies on stable distributions. Thus an appropriate modelling framework is needed.
Observe that, in order to account for heterogeneity and non-linear dependencies exhibited by the data, stable distributions have already been introduced in different kinds of statistical models. For instance, in survival models the heterogeneity within the survival times of a population is modelled through common latent factors, which follow stable distributions; see for example Qiou, Ravishanker and Dey [29]. Stable distributions are also used to model heterogeneity over time. For an introduction to time series models with stable noise, see Qiou and Ravishanker [30] and Mikosch [26]. Different estimation methods for stable distributions have been proposed in the literature. For a full Bayesian approach see Buckle [6], for a maximum likelihood approach see DuMouchel [11], and for an MCEM approach with application to time series with symmetric stable innovations see Godsill [21]. The first aim of our work is to propose a mixture model of stable distributions in order to capture the heterogeneity of the data. In particular, we want to account for multimodality, which is present, for example, in financial data. The second goal of the work is to provide some inferential tools for mixtures of stable distributions. As suggested in the literature on Gaussian mixtures (see for example Robert [34]), we propose a particular reparameterisation of the mixture model in order to ease statistical inference on the mixture parameters. Furthermore, we use both a full Bayesian approach and MCMC simulation techniques to estimate the parameters. The maximum likelihood approach to the mixture model (see for example McLachlan and Peel [23]) entails numerical difficulties, which stem from the fact that for many parametric density families the likelihood surface has singularities. Furthermore, as pointed out by Stephens [38], the likelihood may have several local maxima, and it is difficult to justify the choice of one of these point estimates.
The presence of several local maxima and of singularities implies that the standard asymptotic theory for maximum likelihood estimation and testing does not apply in the mixture context. The Bayesian approach avoids these problems, as parameters are random variables with prior and posterior distributions defined on the parameter space. Thus it is no longer necessary to choose among several local maxima, because point estimates are obtained by averaging over the parameter space, weighting by the posterior distribution of the parameters or by the simulated posterior distribution. The structure of the work is as follows. Section 2 defines a stable distribution and the method to simulate from it. Section 3 provides an introduction to some basic Markov Chain Monte Carlo (MCMC) methods for mixtures and presents the Bayesian model and the Gibbs sampler for a stable distribution. Section 4 describes the Bayesian model for stable mixtures, with particular attention to the missing data structure of the stable mixture model; the Gibbs sampler for stable mixtures is developed in the case where the number of components is fixed. Section 5 provides some results of the Bayesian stable mixture model on financial datasets. Section 6 concludes.

Simulating from a Stable Distribution
The existence of simulation methods for stable distributions opens the way to Bayesian inference on the parameters of this distribution family. In this section we define a stable random variable and briefly describe the method to simulate from a stable distribution, first proposed by Chambers, Mallows and Stuck [8] and later discussed also in Weron [41]. We use this method in our work to generate datasets for testing the efficiency of the MCMC-based Bayesian inference approach. In the following we denote a stable distribution by S α (β, δ, σ). Stable distributions do not generally have an explicit probability density function and are thus conveniently defined through their characteristic function. The best known parametrisation is defined in Samorodnitsky and Taqqu [36].
The stable distribution is thus completely characterised by the following four parameters: the characteristic exponent α, the skewness parameter β, the location parameter δ and the scale parameter σ. An equivalent parametrisation is proposed by Zolotarev [42]. For a review of all the equivalent definitions of stable distributions and of their properties see Samorodnitsky and Taqqu [36]. The distribution S α (β, 0, 1) is usually called standard stable and, when α ∈ (0, 1) and β = 1, it is called positive stable because the support of the density is the positive half of the real line; in this case the characteristic function reduces to a simpler form. Stable distributions admit an explicit representation of the density function only in the following cases: the Gaussian distribution S 2 (0, σ, δ), the Cauchy distribution S 1 (0, σ, δ) and the Lévy distribution S 1/2 (1, σ, δ). The algorithm we use for simulating a standard stable (see Chambers, Mallows and Stuck [8] and Weron [41]) is given in Eq. (3). Once a value Z from a standard stable S α (β, 0, 1) has been simulated, the following transformation is required in order to obtain a value X from a stable distribution with scale parameter σ and location parameter δ.
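The Chambers-Mallows-Stuck transformation can be sketched in NumPy as follows. This is a minimal illustration in one common parametrisation (conventions for the shift in the α = 1 case vary across parametrisations, so the result should be checked against the convention in use); the function name rstable is ours.

```python
import numpy as np

def rstable(alpha, beta, sigma=1.0, delta=0.0, size=1, rng=None):
    """Chambers-Mallows-Stuck simulation of a stable variate (a sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    W = rng.exponential(1.0, size)                 # unit exponential
    if alpha != 1.0:
        t = beta * np.tan(np.pi * alpha / 2)
        B = np.arctan(t) / alpha
        S = (1 + t ** 2) ** (1 / (2 * alpha))
        Z = (S * np.sin(alpha * (U + B)) / np.cos(U) ** (1 / alpha)
             * (np.cos(U - alpha * (U + B)) / W) ** ((1 - alpha) / alpha))
        X = sigma * Z + delta
    else:
        Z = (2 / np.pi) * ((np.pi / 2 + beta * U) * np.tan(U)
             - beta * np.log((np.pi / 2) * W * np.cos(U)
                             / (np.pi / 2 + beta * U)))
        # the extra log-sigma shift is the alpha = 1 convention of one
        # common parametrisation; other conventions omit it
        X = sigma * Z + (2 / np.pi) * beta * sigma * np.log(sigma) + delta
    return X
```

For α = 2 and β = 0 the construction collapses to a Gaussian with variance 2σ², which provides a quick sanity check of an implementation.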

Bayesian Inference for Stable Distributions
In order to make inference on the parameters of a stable distribution within a Bayesian approach, it is necessary to specify a hierarchical model on the parameters of the distribution. Often the resulting posterior distribution of the Bayesian model cannot be calculated analytically, so it is necessary to choose a numerical approximation method. Monte Carlo simulation techniques provide an appealing solution to the problem because, in high-dimensional spaces, they are more efficient than traditional numerical integration methods and, furthermore, they require the densities involved in the posterior to be known only up to a normalising constant. In the following, the basic Markov Chain Monte Carlo (MCMC) techniques will be introduced and the Gibbs sampler for a stable distribution will be discussed.

MCMC Methods for Bayesian Models
As evidenced in Chapter ??, in Bayesian inference many quantities of interest can be represented in integral form, where π(θ|x) is the posterior distribution of the parameter θ ∈ Θ given the observed data x = (x 1 , . . . , x k ). In many cases finding an analytical solution to the integration problem is difficult, and a numerical approximation is needed. One way to approximate the integral is to simulate from the posterior distribution and to average the simulated values of f (θ). In particular, MCMC methods consist in the construction of a Markov chain {θ (t) } n t=1 and in the approximation of the integral given in Eq. (7) by the ergodic average I n , which is a consistent estimator of the quantity of interest. In some cases, as in mixture models, it is not possible to simulate directly from the posterior distribution and a further simulation step (completion step) is needed. All MCMC algorithms are based on the construction of a discrete-time Markov chain through the specification of its transition kernel. Thus the properties of this kind of stochastic process are useful in order to study the convergence of MCMC simulation algorithms. We recall that the irreducibility of the chain is a sufficient condition to guarantee the convergence of I n to the quantity of interest given in Eq. (7).
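The estimator I n can be illustrated concretely; in this sketch i.i.d. draws from a known distribution stand in for MCMC output, so the target expectation is known exactly and the Monte Carlo error can be seen directly.

```python
import numpy as np

# Approximate E_pi[f(theta)] by the average I_n = (1/n) * sum_t f(theta^(t)).
# Here the "chain" is i.i.d. N(1, 0.5^2) draws, so E[theta^2] = 1 + 0.25 = 1.25.
rng = np.random.default_rng(0)
theta = rng.normal(1.0, 0.5, size=100_000)  # stand-in posterior draws
I_n = np.mean(theta ** 2)                   # Monte Carlo estimate of E[theta^2]
```

With genuine MCMC output the draws are dependent, but the same average remains consistent under the irreducibility condition recalled above.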

Theorem 1 (Law of Large Numbers)
If the Markov chain {Θ (t) } ∞ t=0 is irreducible and has σ-finite invariant measure π, then for all f, g ∈ L 1 (π) with ∫ g(θ)dπ(θ) ≠ 0, the ratio of the ergodic averages of f and g converges almost surely to the ratio of their integrals with respect to π. For a brief introduction to Markov chains and to Markov Chain Monte Carlo methods we refer to Chapter ??. Further details on Markov chains can be found, for example, in Meyn and Tweedie [24]; other theoretical results on convergence are in Tierney [40]; finally, Robert and Casella [35] provide some techniques for monitoring convergence.

The Gibbs Sampler
The Gibbs sampler was introduced in image processing by Geman and Geman [19] (see also Chapter ?? for a general introduction to MCMC methods and to Gibbs sampling). It is a method for constructing a Markov chain {Θ (t) } ∞ t=0 with multivariate stationary distribution π(θ|x), where θ ∈ Θ. This simulation method is particularly useful when the posterior density is defined on a high-dimensional space. If the random vector θ can be written as θ = (θ 1 , . . . , θ p ) and if we can simulate from the full conditional densities, then the associated Gibbs sampling algorithm is given by the following transition kernel from θ (t) to θ (t+1) . Definition 2 (Gibbs Sampler) Given the state Θ (t) = θ (t) at time t, generate the state Θ (t+1) as follows. Under some regularity conditions the Markov chain produced by the algorithm converges to the desired stationary distribution (see Robert and Casella [35]).
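A minimal sketch of the scheme in Definition 2, for a toy target in which both full conditionals are available in closed form: a bivariate normal with correlation ρ, where θ1 | θ2 ∼ N(ρθ2, 1 − ρ²) and symmetrically. The stable-mixture conditionals used later are far less tractable, but the cycling structure is the same.

```python
import numpy as np

rho = 0.8                      # correlation of the bivariate normal target
rng = np.random.default_rng(1)
n_iter = 50_000
chain = np.empty((n_iter, 2))
t1 = t2 = 0.0                  # arbitrary starting state
for t in range(n_iter):
    t1 = rng.normal(rho * t2, np.sqrt(1 - rho ** 2))  # draw theta1 | theta2
    t2 = rng.normal(rho * t1, np.sqrt(1 - rho ** 2))  # draw theta2 | theta1
    chain[t] = (t1, t2)
```

After discarding a short burn-in, the empirical correlation and marginal moments of the chain match those of the target distribution.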

The Gibbs Sampler for Univariate Stable Distributions
In this subsection we describe the Gibbs sampler proposed by Buckle [6] in order to estimate the characteristic exponent α of a stable distribution. It is known (see Section 2) how to simulate values from a stable distribution; furthermore it is possible to represent the stable density in integral form by introducing an auxiliary variable y, as suggested by Buckle [6]. The stable density is obtained by integrating with respect to y the bivariate density function of the pair (x, y), which involves the function τ α,β (y) = [sin(παy + η α,β ) / cos(πy)] [cos(πy) / cos(π(α − 1)y + η α,β )] ^((α−1)/α) , where z = (x − δ)/σ. These elements allow us to perform simulation-based Bayesian inference on the parameters of the stable distribution. The Bayesian model is described through the Directed Acyclic Graph (DAG) in Fig. 2. Suppose we observe n realisations x = (x 1 , . . . , x n ) from a stable distribution S α (β, δ, σ) and simulate a vector of auxiliary variables y = (y 1 , . . . , y n ); then the completed likelihood and the completed posterior distribution follow, where θ = (α, β, δ, σ) is the stable parameter vector varying in the parameter space Θ.
In the following we suppose we observe n values from a standard stable distribution S α (β, 0, 1) and we assume the other parameters to be known. Parameters α and β are estimated by simulating from the corresponding full conditional distributions, with i = 1, . . . , n.
In order to simulate from the density function given in Eq. (19) we apply the accept-reject method (see Devroye [9]), because the density is proportional to a function which has finite support (−1/2, 1/2) and which is bounded, attaining its maximum value of 1 at y ∗ , where y ∗ is such that τ α,β (y ∗ ) = x. To emphasise the numerical problems which arise in making inference on stable distributions, we plot in Fig. 4 the density function of y for different values of x. Note that for all values of α ∈ (0, 1), high values of x make the density function spiked around the mode. Thus the basic accept-reject method performs quite poorly. One way to improve the simulation method is to build a histogram with the rejected values and to use it as an envelope in the accept-reject algorithm. Due to the way the parameter α enters the likelihood, the densities given in Equations (20), (21), (22) and (23) are undulating and rather concentrated; therefore, as suggested by Buckle [6] and Qiou and Ravishanker [30], we introduce reparametrisations which give more manageable forms of the conditional posteriors of α, β and δ; the resulting posteriors are given in Equations (26), (27) and (28). At each step of the reparametrised Gibbs sampler, the Jacobian of the transformation must be evaluated. Due to the complexity of the function τ α,β , its inverse has no analytical expression. Therefore, following Buckle [6], the inverse transformation is determined numerically. We use the modified safeguarded Newton algorithm proposed in Press et al. [28].
In order to simulate from the posteriors given in Equations (26), (27) and (28) we use the Metropolis-Hastings algorithm (see Chapter ?? for an introduction to Markov Chain Monte Carlo methods).
In order to simulate from the full conditional posteriors given in Eqs. (26) and (27), we use a beta distribution Be(a, b) as proposal. The sample generated from the beta distribution is not independent because, in order to simulate the k-th value of the M.-H. chain, we set the mean of the beta distribution equal to the (k − 1)-th value of the chain. In setting a and b, the parameters of the proposal distribution, we distinguish the cases detailed in Appendix B, where α k−1 is the value generated by the Metropolis-Hastings chain at step (k − 1) and v is the variance of the proposal distribution. This parameter choice also allows us to avoid numerical problems related to the evaluation of the Metropolis-Hastings acceptance ratio in the presence of fat-tailed and rather spiked likelihood functions. We use a Gaussian random walk proposal to simulate from the full conditional posterior of the location parameter (Eq. (28)).
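A mean-matched beta proposal of this kind can be sketched as follows. The target here is a simple Be(3, 2) density on (0, 1), standing in for the much less tractable conditional posteriors of α and β; the function names mh_beta and beta_logpdf are ours, and proposals too close to the boundary (where the mean/variance match would give negative shape parameters) are simply rejected.

```python
import math
import numpy as np

def beta_logpdf(x, a, b):
    """Log-density of Be(a, b) at x in (0, 1)."""
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def mh_beta(logtarget, n_iter=20_000, v=0.01, x0=0.5, rng=None):
    """M.-H. on (0, 1) with a Be(a, b) proposal whose mean equals the
    current state and whose variance is fixed at v; a and b solve
    a/(a+b) = m and ab/((a+b)^2 (a+b+1)) = v, which requires m(1-m) > v."""
    rng = np.random.default_rng() if rng is None else rng
    def params(m):
        k = m * (1 - m) / v - 1          # common factor from the moment match
        return m * k, (1 - m) * k
    x = x0
    out = np.empty(n_iter)
    for t in range(n_iter):
        a, b = params(x)
        y = rng.beta(a, b)               # proposal with mean x, variance v
        if y * (1 - y) <= v:             # too close to the boundary: reject
            out[t] = x
            continue
        ay, by = params(y)
        # the proposal is not symmetric, so both proposal densities appear
        log_r = (logtarget(y) - logtarget(x)
                 + beta_logpdf(x, ay, by) - beta_logpdf(y, a, b))
        if math.log(rng.uniform()) < log_r:
            x = y
        out[t] = x
    return out
```

Shrinking v raises the acceptance rate at the price of slower mixing, which is exactly the tuning role the variance parameter plays in the text.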
In order to complete the description of the hierarchical model and of the associated Gibbs sampler, we consider a joint prior distribution in which θ = σ^(α/(α−1)). We use informative priors for the location and scale parameters. For δ we assume a normal distribution. Note that the prior distribution of θ is the inverse gamma distribution IG(a 4 , b 4 ), which is a conjugate prior for the distribution given in Eq. (23). Simulated values of the parameter σ can be obtained from the simulated values of θ by a simple transformation. Finally, for the parameters α and β we assume non-informative priors. We show the efficiency of the MCMC-based Bayesian inference by running the Gibbs sampler on simulated datasets. In the following examples we discuss the numerical results and also some computational remarks related to different values of the characteristic exponent. Note that through the Gibbs sampler it is also possible to obtain credible intervals for the estimated parameters and to perform a goodness-of-fit test.
Example 2 - (α-Stable Distributions with α > 1) Note that in the first dataset α is less than 1; therefore all moments of order greater than or equal to α, including the mean, are infinite. In some applications, as in finance, in order to interpret the results it is preferable to work with at least finite first-order moments. Therefore we verify the efficiency of the Gibbs sampler also on a sample generated from a stable distribution with α ∈ (1, 2]. For each dataset, Table 1 summarises the estimated parameters, the standard deviations and the estimated acceptance rates of the M.-H. steps of the Gibbs sampler. Results are obtained on a PC with an Intel 1063 MHz processor, using routines implemented in C/C++. We validate the MCMC code by checking that, without any data, the estimated joint posterior distribution corresponds to the joint prior.

Bayesian Inference for Mixtures of Stable Distributions
In this section we extend the Bayesian framework introduced in the previous section to mixtures of stable distributions. In many situations data may simultaneously exhibit heavy tails, skewness and multimodality. In time series analysis, the multimodality of the empirical distribution can also be justified by a heterogeneous time evolution of the observed phenomenon. For example, the distribution of financial time series such as prices or price volatility may have many modes because the stochastic process evolves over time following different regimes. Stable distributions allow for skewness and heavy tails, but not for multimodality. Thus one way to model these features of the data is to introduce stable mixtures. Furthermore, the use of stable mixtures is appealing also because they include normal mixtures as a special case, which is a widely studied topic (see for example Stephens [38], Richardson and Green [32]). Other relevant works on the Bayesian approach to mixture model estimation are Diebolt and Robert [10], Escobar and West [12] and Robert [34], [33]. In Appendix C some examples of two-component stable mixtures are exhibited. We simulate stable mixtures with different parameter settings, in order to understand the influence of each parameter on the shape of the mixture distribution.

The Missing Data Model
In the following we define a stable mixture model, assuming the number of mixture components to be known. From a practical point of view, the number of components may be detected by looking at the number of modes in the distribution or by performing a statistical test; see Section 5. Let L be the finite number of mixture components and f (x|α l , β l , δ l , σ l ) the l-th stable distribution in the mixture; then the mixture model is m(x|θ, p) = Σ l p l f (x|α l , β l , δ l , σ l ), with Σ l p l = 1, p l ≥ 0, l = 1, . . . , L, where θ l = (α l , β l , δ l , σ l ), l = 1, . . . , L, are the parameter vectors and θ = (θ 1 , . . . , θ L ). In the following we suppose L to be known. In order to perform Bayesian inference, two steps of completion are needed. First, we adopt the same completion technique used for stable distributions: the auxiliary variable y is introduced in order to obtain an integral representation of the mixture distribution, m(x|θ, p) = Σ l p l ∫ f (x, y|θ l ) dy. The second step of completion is introduced in order to reduce the complexity problem which arises in simulation-based inference for mixtures. The completing variable (or allocation variable), ν = (ν 1 , . . . , ν L ), is defined so as to select the mixture component. The allocation variable is not observable, and this missing data structure can be estimated by following a simulation-based approach. Simulation from the mixture model can be performed in two steps: first, simulating the allocation variable; second, simulating from a mixture component conditionally on the allocation variable. The resulting demarginalised mixture model follows. This completion strategy is now quite popular in Bayesian inference for mixtures (see Robert [33], Robert and Casella [35], Escobar and West [12] and Diebolt and Robert [10]).
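The two-step simulation scheme can be sketched as follows. Gaussian components stand in for the stable ones here, since only the completion mechanism itself is being illustrated; the weights, locations and scales are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.3, 0.7])        # mixture weights p_l (sum to 1)
mu = np.array([-2.0, 4.0])      # component locations
sig = np.array([1.0, 0.5])      # component scales
n = 100_000

nu = rng.choice(2, size=n, p=p)      # step 1: draw the allocation nu_i
x = rng.normal(mu[nu], sig[nu])      # step 2: draw x_i from component nu_i
```

Marginally over nu the draws x follow the mixture density, which is exactly the demarginalisation argument used above.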
For an introduction to Monte Carlo methods in Bayesian inference from data modelled by mixtures of distributions see also Neal [27], and for a discussion of the numerical and identifiability problems in mixture inference see Richardson and Green [32], Stephens [37] and Celeux, Hurn and Robert [7].

The Bayesian Approach
The Bayesian model for inference on stable mixtures is represented through the DAG in Fig. 3. Before specifying the Bayesian model we introduce two distributions that are quite useful in Bayesian inference for mixtures: the multinomial distribution and the Dirichlet distribution.
As suggested in the literature on Gaussian mixtures, in the following we assume a multinomial prior distribution for the completing variable ν: V ∼ f V (ν) = M L (1, p 1 , . . . , p L ).
Observing n independent values x = (x 1 , . . . , x n ) from a stable mixture, the likelihood and the completed likelihood follow, where y = (y 1 , . . . , y n ) and ν = (ν 1 , . . . , ν n ) are respectively the auxiliary variable and allocation variable vectors, and θ = (θ 1 , . . . , θ L ) and p = (p 1 , . . . , p L ) are the mixture parameter vectors. From the completed likelihood and from the priors the completed posterior distribution of the Bayesian mixture model is obtained. Bayesian inference on the mixture parameters requires the calculation of expected values with respect to the posterior distribution. A closed-form solution of this integration problem does not exist, so numerical methods are needed. The introduction of auxiliary variables, which are not observable, simplifies inference for mixtures and also suggests a way to approximate the problem numerically. In fact, the auxiliary variables can be replaced by simulated values, and the simulated completed likelihood can be used for calculating the posterior distributions. Furthermore, in order to approximate the posterior means numerically, it is necessary to simulate from the posterior distributions of the parameters and to average the simulated values.

The Gibbs Sampler for Mixtures of Stable Distributions
Gibbs sampling allows us to simulate from the posterior distribution while avoiding the computational difficulties due to the dimension of the parameter vector. Due to the ergodicity of the Markov chain generated by the Gibbs sampler, the choice of the initial values is arbitrary; in particular, we choose to simulate them from the priors. The steps of the Gibbs sampler for a mixture model can be grouped into simulation of the full conditional distributions and augmentation by the completing variables:
(i) Initialise ν i (0) , i = 1, . . . , n, and p (0) by simulating them from the priors;
(ii) Simulate from the full conditional posterior distributions, in particular π(p 1 , . . . , p L |θ, x, y, ν) = D(δ + n 1 (ν), . . . , δ + n L (ν)) (43);
(iii) Update the completing variables for i = 1, . . . , n.
To simulate from the Dirichlet distribution in Eq. (43) we use the algorithm proposed by Robert and Casella [35], while to draw values from the multinomial posterior distribution of Eq. (45) we use the algorithm proposed by Fishman [18]. In the following examples we verify the efficiency of the Gibbs sampler on some test samples simulated from stable mixtures. For each mixture component we assume the joint prior distribution given in Eq. (29). Furthermore, for the sake of simplicity, we consider L = 2. Because of the quite irregular form of the density f (x i , y i |θ l ), some computational difficulties were encountered during the MCMC-based estimation in evaluating the probability p ∗ l of each mixture component. Thus we introduce a useful reparameterisation and approximation. To conclude this section, we remark that in developing the Gibbs sampler for α-Stable mixtures, and also in the previous Monte Carlo experiments, the number of components of the mixture is assumed to be known. Thus our research framework can be extended in order to make inference on the number of components. For example, Reversible Jump MCMC (RJMCMC) or Birth and Death MCMC (BDMCMC) could be applied in this context.
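The conjugate weight update of Eq. (43) can be sketched as follows: with a symmetric Dirichlet(δ, . . . , δ) prior on the weights, the full conditional given the allocations is Dirichlet with the prior parameter shifted by the allocation counts n l (ν). The allocations here are illustrative draws, not output of the full sampler.

```python
import numpy as np

rng = np.random.default_rng(4)
L, n, delta = 2, 1000, 1.0
nu = rng.choice(L, size=n, p=[0.25, 0.75])   # current allocations nu_i
counts = np.bincount(nu, minlength=L)        # n_l(nu), l = 1, ..., L
p_draw = rng.dirichlet(delta + counts)       # one Gibbs draw of (p_1, ..., p_L)
```

Because the counts dominate the prior parameter for moderate n, the drawn weights concentrate near the empirical allocation frequencies.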

Application to Financial Data
Introducing two levels of auxiliary variables in the stable mixture model allows us to infer all the parameters of the mixture from the data. A Gaussian distribution is usually assumed in modelling financial time series, but it performs poorly when the data are heavy-tailed and skewed. Moreover, the assumption of a unimodal distribution is too restrictive for some financial time series. In this section we illustrate how stable mixtures may prove particularly useful in modelling different kinds of financial variables, and we present estimates obtained with the MCMC-based inferential technique proposed in the previous section.

Example 5 -(Stock Market)
In this example we analyse the return rate of the S&P500 composite index from 01/01/1990 to 27/01/2003. The return on the index is defined as r t = (p t − p t−1 )/p t−1 . Alternatively, logarithmic returns could be used. The number of observations is 3410. Fig. 39 shows the data histogram together with the best-fitting normal density. The QQ-plot in Fig. 40 reveals that the data are not normally distributed. We apply the Gibbs sampler for α-Stable mixtures to this dataset; the results are in Tab. 5. Parameter estimates are ergodic averages over the last 10,000 values of the 15,000 Gibbs sampler realisations. Note that the index return distribution has heavier tails than the Gaussian, since the estimated characteristic exponent is α̂ = 1.674 < 2.
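The two return definitions mentioned above differ only at second order in the relative price change, as a small illustrative computation (with made-up prices) shows:

```python
import numpy as np

prices = np.array([100.0, 102.0, 99.96])
r = np.diff(prices) / prices[:-1]   # simple returns r_t = (p_t - p_{t-1})/p_{t-1}
logr = np.diff(np.log(prices))      # log returns, log(1 + r_t) ~ r_t for small r_t
```

For daily data the two series are nearly indistinguishable, so the choice rarely affects the tail-index estimates discussed here.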

Example 6 -(Bond Market)
Our second dataset (source: DataStream) contains daily price returns on the J.P. Morgan indexes for the following countries: France, Germany, Italy, United Kingdom, USA and Japan, between 01/01/1988 and 13/01/2003. Denoting by p t the price index at time t, the return on the index is defined as r t = (p t − p t−1 )/p t−1 . Fig. 41 in Appendix E exhibits jointly the histogram, the best Gaussian approximation and the density line of the returns distribution. All time series exhibit a certain degree of kurtosis and skewness. The estimation results for the J.P. Morgan Great Britain index are in Tab. 5. Fig. 43 in Appendix E exhibits jointly the histogram, the best Gaussian approximation and the density line of the returns distribution. Almost all time series in this dataset exhibit multimodality. The estimation results for the 3-month interest rate for France are in Tab. 5.

Conclusion
In this work we propose an α-Stable mixture model. As many empirical studies in the literature show, α-Stable distributions are particularly well suited for modelling financial variables. Moreover, some empirical financial studies show that mixture models are needed in many cases, due to the presence of multimodality in asset return distributions. We chose Bayesian inference due to the flexibility of the approach, which allows all the parameters of the model to be estimated simultaneously. Furthermore, we introduce a suitable reparameterisation of the α-Stable mixture in order to perform Bayesian inference. The proposed approach to the estimation of α-Stable mixture models is quite general and worked well in our simulation analysis, but it needs much more evaluation, with particular attention to the case of symmetric stable mixtures. Furthermore, the Bayesian approach used in this work allows goodness-of-fit tests to be performed and RJMCMC and BDMCMC techniques to be used in order to make inference on the number of components of the mixture.

Appendix B -Proposal Distributions for the Metropolis-Hastings Algorithm
The shape of the stable distribution and the presence of skewness suggest using a Beta distribution Be(a, b) as proposal for the Metropolis-Hastings algorithm. We assume that the mean of the distribution is equal to the (k − 1)-th value of the M.-H. chain and set the variance exogenously equal to v. Through the parameter v it is thus possible to control the acceptance rate of the Metropolis-Hastings algorithm. When α ∈ (0, 1) the values of the proposal parameters follow directly from these two constraints. When α ∈ (1, 2] we use a Beta distribution translated to the interval (1, 2).
By imposing the usual constraints on the mean and the variance we obtain the values of the proposal parameters. Also in this case the positivity constraints on the Beta parameters must be taken into account. We proceed in a similar way for the proposal distribution of the skewness parameter β.

C.1 Mixtures with varying α
Observe that for all the datasets exhibited in the histograms, N = 100,000 values from right-skewed (β = 1) standard stable distributions have been simulated.

C.3 Mixtures with varying σ
Simulated samples of N=100,000 stable values are exhibited in the following histograms. In all the samples the location and the skewness parameters of the mixture are: δ 1 = 1, δ 2 = 40, β 1 = β 2 = 1.