
### Estimate for the Rate of Convergence

• We will now show how the upper bounds for the variational distance and for the second largest absolute value of the eigenvalues of the transition matrix derived in Section 2.3 can be used
• in order to determine upper bounds for the distance $d_{TV}(\alpha \mathbf{P}^n, \pi)$ occurring in the $n$th step of the MCMC simulation via the Metropolis algorithm,
• if the simulated distribution $\pi = (\pi_1, \ldots, \pi_\ell)^\top$ satisfies the following conditions.
• Namely we assume
• that $\pi_i \neq \pi_j$ for arbitrary $i, j \in E$ such that $i \neq j$,
• and that the states are ordered such that $\pi_1 > \pi_2 > \ldots > \pi_\ell$.
• We may thus (w.l.o.g.) return to the notation used in Section 2.3 and identify the states with the first $\ell$ natural numbers, i.e. $E = \{1, \ldots, \ell\}$.
• The probabilities $\pi_i$ can thus be written in the following way:

 $\pi_i \;=\; \frac{b^{-f(i)}}{Z}\,, \qquad i \in E$   (56)

• where $f : E \to \mathbb{R}$ is a monotonically increasing function,
• and the basis $b > 1$ is chosen such that for a certain constant $a < 1$

 $\frac{\pi_{i+1}}{\pi_i} \;=\; b^{f(i) - f(i+1)} \;\le\; a \qquad \text{for all } i = 1, \ldots, \ell - 1\,,$   (57)

• and $Z$ is an (in general unknown) normalizing factor.
• Furthermore, the definition of a Metropolis algorithm for the MCMC simulation of $\pi$ requires
• that the basis $b$ and the differences $f(i) - f(j)$ are known for all $i, j \in E$,
• i.e. in particular that the quotients $\pi_i / \pi_j = b^{f(j) - f(i)}$ are known for all $i, j \in E$.
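The practical point is that these quotients do not involve the normalizing factor $Z$. A minimal sketch (Python; the concrete basis $b$ and function $f$ below are illustrative choices, not taken from the text) of how $\pi_i / \pi_j = b^{f(j) - f(i)}$ is evaluated from $b$ and the differences alone:

```python
from fractions import Fraction

b = Fraction(2)          # basis, assumed known (illustrative value)
f = lambda i: i * i      # monotonically increasing function (illustrative choice)

def quotient(i, j):
    """pi_i / pi_j = b^{f(j) - f(i)}: only the basis b and the
    differences of f enter; the unknown factor Z cancels."""
    return b ** (f(j) - f(i))
```

This is exactly why the Metropolis algorithm can be run although $Z$ is unknown: the acceptance step only ever evaluates such quotients.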
• Let the matrix $\mathbf{Q} = (q_{ij})$ of the "potential" transitions be given by

 $q_{ij} \;=\; \begin{cases} 1/2 & \text{if } |i - j| = 1 \text{ or } i = j \in \{1, \ell\}\,, \\ 0 & \text{otherwise}\,. \end{cases}$   (58)

• Let the acceptance probability $\alpha_{ij}$ be defined as in (53), i.e. $\alpha_{ij} = \min\{1, \pi_j q_{ji} / (\pi_i q_{ij})\}$ for $q_{ij} > 0$, which for the symmetric matrix $\mathbf{Q}$ of (58) equals $\min\{1, \pi_j / \pi_i\}$.

• By (56) and (58) the entries $p_{ij}$ of the transition matrix $\mathbf{P} = (p_{ij})$ for the MCMC simulation are thus given as

 $p_{ij} \;=\; q_{ij} \, \alpha_{ij} \;=\; q_{ij} \min\bigl\{1, \, b^{f(i) - f(j)}\bigr\} \qquad \text{for } i \neq j$   (59)

and for $i = j$

 $p_{ii} \;=\; 1 - \sum_{j \neq i} p_{ij}\,.$   (60)
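The construction can be checked on a toy example. The following sketch (Python with NumPy; the size $\ell$, basis $b$ and function $f$ are illustrative assumptions) builds the transition matrix of a Metropolis chain with target $\pi_i \propto b^{-f(i)}$ and a symmetric nearest-neighbour proposal (the form suggested by the proof below, where $p_{ij} = 0$ for $|i - j| > 1$), so that stochasticity and the detailed balance condition $\pi_i p_{ij} = \pi_j p_{ji}$ can be verified directly:

```python
import numpy as np

def metropolis_matrix(b, f, ell):
    """Transition matrix of the Metropolis chain for pi_i ∝ b^{-f(i)}
    with the nearest-neighbour proposal q_{i,i±1} = 1/2 (the remaining
    proposal mass is kept at the two boundary states)."""
    pi = b ** (-np.array([f(i) for i in range(1, ell + 1)], dtype=float))
    pi /= pi.sum()                      # normalizing factor Z
    P = np.zeros((ell, ell))
    for i in range(ell):
        for j in (i - 1, i + 1):        # the only "potential" transitions
            if 0 <= j < ell:
                P[i, j] = 0.5 * min(1.0, pi[j] / pi[i])   # q_ij * alpha_ij
        P[i, i] = 1.0 - P[i].sum()      # diagonal entry, cf. (60)
    return P, pi

P, pi = metropolis_matrix(b=2.0, f=lambda i: i, ell=5)
```

Since $f$ is increasing and $b > 1$, the resulting $\pi$ is strictly decreasing, matching the assumed ordering of the states.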

Theorem 3.15   The second largest eigenvalue $\lambda_2$ of the transition matrix $\mathbf{P}$ defined by (58) - (60) has the following upper bound:

 $\lambda_2 \;\le\; 1 - \frac{1 - a}{2(\ell - 1)}\,,$   (61)

where $a < 1$ is the constant considered in (57).

Proof

• By Theorem 3.14 the pair $(\mathbf{P}, \pi)$ is reversible.
• Hence, Rayleigh's theorem (see Theorem 2.17) yields the following representation formula

 $\lambda_2 \;=\; 1 - \min_{\varphi \in V'} \frac{D(\varphi)}{\operatorname{Var}_\pi(\varphi)}\,,$   (62)

• where $V'$ denotes the subset of vectors $\varphi = (\varphi_1, \ldots, \varphi_\ell)^\top \in \mathbb{R}^\ell$ whose components are not all equal,
• $\operatorname{Var}_\pi(\varphi)$ is the variance of the components of $\varphi$ with respect to $\pi$,
• and $D(\varphi)$ denotes the Dirichlet form of the reversible pair $(\mathbf{P}, \pi)$.
• Due to (62) it is sufficient to show that

 $\operatorname{Var}_\pi(\varphi) \;\le\; A \cdot D(\varphi) \qquad \text{for all } \varphi \in V'$   (63)

for some constant $A > 0$ such that

 $A \;\le\; \frac{2(\ell - 1)}{1 - a}\,.$   (64)

• Similarly to the proof of Theorem 2.18 we obtain, copying the notation used there, that for all $\varphi \in V'$

 $\operatorname{Var}_\pi(\varphi) \;=\; \frac{1}{2} \sum_{x, y \in E} \pi_x \pi_y (\varphi_x - \varphi_y)^2 \;\le\; \frac{1}{2} \sum_{x, y \in E} \pi_x \pi_y \, |\gamma_{xy}| \sum_{e \in \gamma_{xy}} \bigl(\varphi(e_+) - \varphi(e_-)\bigr)^2\,,$

where the "edge probability" $p(e) = \pi_{e_-} p_{e_- e_+}$ is assigned to the "directed" edge $e = (e_-, e_+)$ and $\gamma_{xy}$ denotes the "path" from $x$ to $y$.
• Using the notation $|\gamma_{xy}|$ for the number of edges of the path $\gamma_{xy}$ we thus obtain

 $\operatorname{Var}_\pi(\varphi) \;\le\; \Bigl(\max_{e} \frac{1}{p(e)} \sum_{\gamma_{xy} \ni e} |\gamma_{xy}| \, \pi_x \pi_y \Bigr) \cdot \frac{1}{2} \sum_{e} p(e) \bigl(\varphi(e_+) - \varphi(e_-)\bigr)^2\,.$
• This shows (63) for

 $A \;=\; \max_{e} \frac{1}{p(e)} \sum_{\gamma_{xy} \ni e} |\gamma_{xy}| \, \pi_x \pi_y\,,$   (65)

as we showed in Lemma 2.8 that

 $D(\varphi) \;=\; \frac{1}{2} \sum_{e} p(e) \bigl(\varphi(e_+) - \varphi(e_-)\bigr)^2\,.$
• It is left to show that the constant $A$ considered in (65) satisfies the inequality (64).
• For this purpose we choose for each pair $x, y \in E$ with $x < y$ the path $\gamma_{xy} = (x, x+1, \ldots, y)$ along adjacent states, so that $|\gamma_{xy}| = y - x$.
• Then (56) and (59)-(60) imply for each edge $e = (i, i+1)$

 $p(e) \;=\; \pi_i \, p_{i, i+1} \;=\; \frac{\pi_i}{2} \min\Bigl\{1, \frac{\pi_{i+1}}{\pi_i}\Bigr\} \;=\; \frac{\pi_{i+1}}{2}\,.$
• Thus, the reversibility of the pair $(\mathbf{P}, \pi)$ shown in Theorem 3.14 yields

 $p\bigl((i+1, i)\bigr) \;=\; \pi_{i+1} \, p_{i+1, i} \;=\; \pi_i \, p_{i, i+1} \;=\; \frac{\pi_{i+1}}{2}\,.$
• Because of (56) and (57) we obtain for arbitrary $x, y \in E$ such that $x < y$

 $\pi_y \;=\; \pi_x \, b^{f(x) - f(y)} \;\le\; \pi_x \, a^{\,y - x}\,.$
• Moreover, all edges are of the form $(i, i+1)$ or $(i+1, i)$, as for the entries of the transition matrix $\mathbf{P}$ defined by (58)-(60) we have $p_{ij} = 0$ if $|i - j| > 1$.
• Thus, for $e = (i, i+1)$,

 $\frac{1}{p(e)} \sum_{\gamma_{xy} \ni e} |\gamma_{xy}| \, \pi_x \pi_y \;=\; \frac{2}{\pi_{i+1}} \sum_{x \le i < i+1 \le y} (y - x) \, \pi_x \pi_y \;\le\; \frac{2(\ell - 1)}{1 - a}\,,$

as $|\gamma_{xy}| = y - x \le \ell - 1$ and $\sum_{x \le i} \pi_x \le 1$ and hence

 $\sum_{y \ge i+1} \pi_y \;\le\; \pi_{i+1} \sum_{k=0}^{\infty} a^k \;=\; \frac{\pi_{i+1}}{1 - a}\,.$
• For $A$ as in (65), taking the maximum over all edges $e$, we thus obtain the estimate (64).
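As a numerical sanity check of the path argument (a sketch; $\ell = 6$, $b = 2$ and $f(i) = i$ are illustrative choices): for this target the ratio of successive probabilities is exactly $a = 1/b$, and the spectral gap $1 - \lambda_2$ should then be at least $(1 - a)/(2(\ell - 1))$, i.e. the reciprocal of the path constant:

```python
import numpy as np

ell, b = 6, 2.0
a = 1.0 / b                                  # ratio pi_{i+1}/pi_i for f(i) = i
pi = b ** (-np.arange(1, ell + 1, dtype=float))
pi /= pi.sum()                               # the unknown factor Z only enters here

# Metropolis chain with nearest-neighbour proposal q_{i,i±1} = 1/2
P = np.zeros((ell, ell))
for i in range(ell):
    for j in (i - 1, i + 1):
        if 0 <= j < ell:
            P[i, j] = 0.5 * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()

# (P, pi) is reversible, so D^{1/2} P D^{-1/2} is symmetric and the
# spectrum of P is real; eigvalsh is the numerically safest way to get it
d = np.sqrt(pi)
eig = np.sort(np.linalg.eigvalsh((d[:, None] * P) / d[None, :]))
lam2 = eig[-2]                               # second largest eigenvalue
```

The Poincaré-type bound is typically loose, so the actual $\lambda_2$ should lie well below the bound.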

The following lemma will turn out to be useful in order to derive a lower bound for the smallest eigenvalue of the transition matrix defined by (58) - (60).

Lemma 3.1
• Let $\mathbf{A} = (a_{ij})$ be an arbitrary $\ell \times \ell$-matrix and for all $i \neq j$ let $a_{ij} \ge 0$.
• Let $\lambda$ be an arbitrary real eigenvalue of $\mathbf{A}$, let $\varphi = (\varphi_1, \ldots, \varphi_\ell)$ be a left eigenvector corresponding to $\lambda$ and let $j$ be the number of the component where

 $|\varphi_j| \;=\; \max_{1 \le i \le \ell} |\varphi_i|\,.$
• Then,

 $\lambda \;\ge\; a_{jj} - \sum_{i \neq j} a_{ij}\,.$   (66)

Proof

• By definition of $\lambda$ and $\varphi$ we have $\varphi \mathbf{A} = \lambda \varphi$. In particular

 $\lambda \varphi_j \;=\; \sum_{i=1}^{\ell} \varphi_i \, a_{ij} \;=\; a_{jj} \, \varphi_j + \sum_{i \neq j} \varphi_i \, a_{ij}$

and

 $|\varphi_i| \;\le\; |\varphi_j| \qquad \text{for all } i = 1, \ldots, \ell\,.$

• This implies

 $\lambda \;=\; a_{jj} + \sum_{i \neq j} a_{ij} \, \frac{\varphi_i}{\varphi_j} \;\ge\; a_{jj} - \sum_{i \neq j} a_{ij}\,,$

where we used that $\varphi_j \neq 0$ and that $a_{ij} \ge 0$ for $i \neq j$.
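Lemma 3.1 can be illustrated numerically. The sketch below (an illustrative random matrix, not from the text) uses a symmetric matrix with nonnegative off-diagonal entries, so that all eigenvalues are real and left and right eigenvectors coincide; the bound in (66) then yields in particular $\lambda \ge \min_k \bigl(a_{kk} - \sum_{i \neq k} a_{ik}\bigr)$ for every eigenvalue $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)

# symmetric matrix with nonnegative off-diagonal entries (diagonal arbitrary);
# symmetry keeps the spectrum real, as required by the lemma
A = rng.uniform(0.0, 1.0, size=(6, 6))
A = (A + A.T) / 2.0
np.fill_diagonal(A, rng.uniform(-1.0, 1.0, size=6))

eigvals = np.linalg.eigvalsh(A)

# uniform consequence of the lemma, minimizing over the unknown index j
col_bound = min(A[k, k] - (A[:, k].sum() - A[k, k]) for k in range(6))
```

This is the column version of Gershgorin's lower estimate; the lemma sharpens it by using the maximal component of the eigenvector.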
Theorem 3.16   The smallest eigenvalue $\lambda_\ell$ of the transition matrix $\mathbf{P}$ defined by (58) - (60) has the following lower bound:

 $\lambda_\ell \;\ge\; -a\,,$   (67)

where $a < 1$ is the constant considered in (57).

Proof

• By Lemma 3.1 applied to $\mathbf{A} = \mathbf{P}$ (and to the index $j$ determined for $\mathbf{P}$)

 $\lambda_\ell \;\ge\; p_{jj} - \sum_{i \neq j} p_{ij}\,.$

• Thus, taking into account (59)-(60), we derive for $1 < j < \ell$

 $\lambda_\ell \;\ge\; \Bigl(1 - \frac{1}{2} - \frac{\pi_{j+1}}{2\pi_j}\Bigr) - \Bigl(\frac{\pi_j}{2\pi_{j-1}} + \frac{1}{2}\Bigr) \;=\; -\,\frac{1}{2}\Bigl(\frac{\pi_{j+1}}{\pi_j} + \frac{\pi_j}{\pi_{j-1}}\Bigr) \;\ge\; -a\,;$

the boundary cases $j = 1$ and $j = \ell$ yield the even better bound $\frac{1}{2}(1 - a) \ge -a$.
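A quick numerical check of the lower bound (a sketch; $\ell$, $b$ and $f(i) = i^2$ are illustrative choices): with $a$ taken as the largest successive ratio $\pi_{i+1}/\pi_i$, the smallest eigenvalue of the Metropolis chain should not fall below $-a$:

```python
import numpy as np

ell, b = 7, 1.3
f = lambda i: i * i                     # monotonically increasing, so pi is decreasing
pi = b ** (-np.array([f(i) for i in range(1, ell + 1)], dtype=float))
pi /= pi.sum()
a = max(pi[i + 1] / pi[i] for i in range(ell - 1))   # largest successive ratio

# Metropolis chain with nearest-neighbour proposal q_{i,i±1} = 1/2
P = np.zeros((ell, ell))
for i in range(ell):
    for j in (i - 1, i + 1):
        if 0 <= j < ell:
            P[i, j] = 0.5 * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()

# symmetrize via the similarity transform D^{1/2} P D^{-1/2}
d = np.sqrt(pi)
lam_min = np.linalg.eigvalsh((d[:, None] * P) / d[None, :]).min()
```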
Remark
Summarizing the results of Theorems 3.15 and 3.16 we have shown that

 $\max\{\lambda_2, \, |\lambda_\ell|\} \;\le\; \max\Bigl\{1 - \frac{1 - a}{2(\ell - 1)}, \; a\Bigr\}\,.$   (68)
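Since the rate-of-convergence estimates of Section 2.3 are governed by the second largest absolute value of the eigenvalues, the two theorems together control the geometric decay of the variational distance of the $n$-step distribution. A brute-force sketch (illustrative parameters; the total variation distance is computed directly from the $n$-step distribution):

```python
import numpy as np

ell, b = 6, 2.0
a = 1.0 / b                              # ratio pi_{i+1}/pi_i for f(i) = i
pi = b ** (-np.arange(1, ell + 1, dtype=float))
pi /= pi.sum()

# Metropolis chain with nearest-neighbour proposal q_{i,i±1} = 1/2
P = np.zeros((ell, ell))
for i in range(ell):
    for j in (i - 1, i + 1):
        if 0 <= j < ell:
            P[i, j] = 0.5 * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()

alpha = np.zeros(ell)
alpha[-1] = 1.0                          # start in the least probable state
mu, dists = alpha.copy(), []
for n in range(200):
    mu = mu @ P                          # distribution after n+1 steps
    dists.append(0.5 * np.abs(mu - pi).sum())   # total variation distance

rho = max(1.0 - (1.0 - a) / (2 * (ell - 1)), a)  # combined eigenvalue bound
```

The observed decay should be at least as fast as $\rho^n$ up to a constant depending on the smallest probability of $\pi$.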

Ursa Pantle 2006-07-20