MCMC Estimators; Bias and Fundamental Matrix

In this section we will investigate the characteristics of Monte-Carlo estimators for expectations.

- Examples of similar problems were already discussed in Section 3.1.1,
- when we estimated probabilities by statistical means
- and the value of integrals via Monte-Carlo simulation.

- However, for these purposes we assumed
- that the pseudo-random numbers can be regarded as realizations of independent and identically distributed sampling variables.
- In the present section we assume that the sampling variables form an (appropriately chosen) Markov chain.

- This is the reason why these estimators are called *Markov-Chain-Monte-Carlo estimators* (MCMC estimators).

**Statistical Model**

- Let $E$ be a finite (nonempty) index set and let $X$ be a discrete random vector,
- taking values in the finite state space $E$ with probabilities $\pi_i = P(X = i)$,
- where $E = \{1,\ldots,\ell\}$ is identified with the set of the first $\ell$ natural numbers.
- Furthermore, we assume $\pi_i > 0$ for all $i \in E$, where $\pi = (\pi_1,\ldots,\pi_\ell)^\top$ denotes the probability function of the random vector $X$.

- Our goal is to estimate the expectation $\mu$ via MCMC simulation, where
$$\mu = \mathbb{E}\,\varphi(X) = \sum_{i \in E} \varphi(i)\,\pi_i$$
- and $\varphi \colon E \to \mathbb{R}$ is an arbitrary but fixed function.

- As an estimator for $\mu$ we consider the random variable
$$\hat{\mu}_n = \frac{1}{n} \sum_{k=1}^{n} \varphi(X_k)\,, \qquad (70)$$
- where $X_0, X_1, \ldots$ is a Markov chain with state space $E$, arbitrary but fixed initial distribution $\alpha$ and
- an irreducible and aperiodic transition matrix $P$, such that $\pi$ is the ergodic limit distribution with respect to $P$.
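As a sanity check, the estimator (70) can be simulated directly. The following sketch uses a hypothetical 3-state chain (the matrix `P`, the function `phi`, and the initial distribution `alpha` are invented for illustration) and compares $\hat\mu_n$ for large $n$ against the exact value $\mu = \sum_{i} \varphi(i)\pi_i$:

```python
import numpy as np

# Hypothetical 3-state example (states 0, 1, 2 in 0-based indexing):
# an irreducible, aperiodic transition matrix P and a function phi.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
phi = np.array([1.0, 4.0, 9.0])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

mu = phi @ pi  # target expectation E phi(X) = sum_i phi(i) pi_i

def mcmc_estimate(n, alpha, rng):
    """MCMC estimator (70): sample mean of phi along one chain path."""
    x = rng.choice(3, p=alpha)      # X_0 ~ alpha (initial distribution)
    total = 0.0
    for _ in range(n):
        x = rng.choice(3, p=P[x])   # one transition step X_{k-1} -> X_k
        total += phi[x]
    return total / n

rng = np.random.default_rng(0)
alpha = np.array([1.0, 0.0, 0.0])   # start deterministically in state 0
est = mcmc_estimate(50_000, alpha, rng)
print(mu, est)  # the estimate approaches mu as n grows
```

Note that the chain is started in a fixed state, i.e. $\alpha \neq \pi$, which is exactly the situation in which the bias discussed below arises.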

**Remarks**

- Typically, the initial distribution $\alpha$ does *not* coincide with the simulated distribution $\pi$.
- Consequently, the MCMC estimator $\hat\mu_n$ defined by (70) is not unbiased for fixed (finite) sample size $n$,
- i.e. in general $\mathbb{E}\,\hat\mu_n \neq \mu$ for all $n \ge 1$.

- For determining the *bias* $\mathbb{E}\,\hat\mu_n - \mu$ the following representation formula will be helpful:
$$\mathbb{E}\,\hat\mu_n = \frac{1}{n}\sum_{k=1}^{n} \alpha^\top P^k \varphi\,, \qquad (71)$$
where $\varphi = (\varphi(1),\ldots,\varphi(\ell))^\top$.


**Proof**

- In Theorem 2.3 we proved that for all $n \ge 1$ the distribution of $X_n$ is given by $\alpha_n^\top = \alpha^\top P^n$.
- Thus, by definition (70) of the MCMC estimator $\hat\mu_n$, we get that
$$\mathbb{E}\,\hat\mu_n = \frac{1}{n}\sum_{k=1}^{n} \mathbb{E}\,\varphi(X_k) = \frac{1}{n}\sum_{k=1}^{n} \sum_{i \in E} \varphi(i)\,(\alpha^\top P^k)_i = \frac{1}{n}\sum_{k=1}^{n} \alpha^\top P^k \varphi\,,$$
where $\varphi = (\varphi(1),\ldots,\varphi(\ell))^\top$.
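The representation of $\mathbb{E}\,\hat\mu_n$ via $\alpha^\top P^k \varphi$ can be verified by brute force on a tiny chain. The sketch below (a hypothetical 2-state chain, all values invented for illustration) enumerates every path $(x_0,\ldots,x_n)$, weights it by its probability, and compares the resulting exact expectation of the sample mean with the closed-form expression:

```python
import numpy as np
from itertools import product

# Tiny hypothetical 2-state chain (values chosen for illustration).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
phi = np.array([1.0, 5.0])
alpha = np.array([0.9, 0.1])  # initial distribution of X_0
n = 4

# Closed form from the proof: E muhat_n = (1/n) sum_{k=1}^n alpha^T P^k phi.
Pk = np.eye(2)
closed = 0.0
for _ in range(n):
    Pk = Pk @ P
    closed += alpha @ Pk @ phi
closed /= n

# Brute force: enumerate all paths (x_0, ..., x_n), weight each by its
# path probability and average phi over the visited states x_1, ..., x_n.
brute = 0.0
for path in product(range(2), repeat=n + 1):
    prob = alpha[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]
    brute += prob * np.mean(phi[list(path[1:])])

print(closed, brute)  # the two values agree
```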

**Remarks**

Apart from this, the asymptotic behavior of $\mathbb{E}\,\hat\mu_n$ for $n \to \infty$ can be determined. For this purpose we need the following two lemmata.

- Let $\Pi = \mathbf{1}\pi^\top$ denote the $\ell \times \ell$ matrix each of whose rows equals $\pi^\top$. Then
$$(P - \Pi)^k = P^k - \Pi \qquad (72)$$
for all $k \ge 1$ and in particular $\lim_{k \to \infty} (P - \Pi)^k = \mathbf{0}$.

**Proof**

- Evidently, (72) holds for $k = 1$. If (72) holds for some $k \ge 1$, then
$$(P - \Pi)^{k+1} = (P^k - \Pi)(P - \Pi) = P^{k+1} - P^k\Pi - \Pi P + \Pi^2 = P^{k+1} - \Pi\,,$$
since $P^k\Pi = \Pi P = \Pi^2 = \Pi$, which proves (72) by induction.
- As $P$ is assumed to be irreducible and aperiodic, $\lim_{k \to \infty} P^k = \Pi$, and hence $\lim_{k \to \infty} (P - \Pi)^k = \lim_{k \to \infty} (P^k - \Pi) = \mathbf{0}$.
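The identity (72) and the convergence of the powers $(P - \Pi)^k$ to the zero matrix can be checked numerically. The following sketch uses a hypothetical 3-state transition matrix (invented for illustration):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution pi and the rank-one matrix Pi = 1 pi^T
# (every row of Pi equals pi^T).
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
Pi = np.outer(np.ones(3), pi)

# Check (P - Pi)^k == P^k - Pi for several k ...
for k in range(1, 8):
    lhs = np.linalg.matrix_power(P - Pi, k)
    rhs = np.linalg.matrix_power(P, k) - Pi
    assert np.allclose(lhs, rhs)

# ... and that the powers of P - Pi vanish geometrically.
print(np.abs(np.linalg.matrix_power(P - Pi, 50)).max())
```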

**Remarks**

- As $\lim_{k \to \infty} (P - \Pi)^k = \mathbf{0}$, the matrix $I - (P - \Pi)$ is invertible and
$$Z = \bigl(I - (P - \Pi)\bigr)^{-1} = I + \sum_{k=1}^{\infty} (P^k - \Pi)\,. \qquad (73)$$
- The matrix $Z$ is called the *fundamental matrix* of $P$.

**Proof**

Theorem 3.17 and Lemma 3.3 enable us to give a more detailed description of the asymptotic behavior of the bias $\mathbb{E}\,\hat\mu_n - \mu$.

- Let $c = \alpha^\top (Z - I)\varphi$, where $Z$ denotes the fundamental matrix of $P$ that was introduced in Lemma 3.3.
- Then, for all $n \ge 1$,
$$\mathbb{E}\,\hat\mu_n - \mu = \frac{c}{n} + R_n\,,$$
where $R_n$ is a remainder such that $nR_n \to 0$ for $n \to \infty$.

**Proof**
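The $1/n$-decay of the bias can be illustrated numerically. The sketch below (the same kind of hypothetical 3-state chain as in the earlier examples, values invented for illustration) computes the exact bias $\mathbb{E}\,\hat\mu_n - \mu$ via matrix powers and compares it with the candidate leading term $n^{-1}\,\alpha^\top (Z - I)\varphi$ built from the fundamental matrix:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
phi = np.array([1.0, 4.0, 9.0])
alpha = np.array([1.0, 0.0, 0.0])  # chain started in state 0, so alpha != pi

# Stationary distribution pi and Pi = 1 pi^T.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
Pi = np.outer(np.ones(3), pi)

# Fundamental matrix Z = (I - (P - Pi))^{-1}.
Z = np.linalg.inv(np.eye(3) - (P - Pi))

for n in (10, 100, 1000):
    # Exact bias: (1/n) sum_{k=1}^n alpha^T (P^k - Pi) phi,
    # using mu = alpha^T Pi phi.
    Pk, acc = np.eye(3), 0.0
    for _ in range(n):
        Pk = Pk @ P
        acc += alpha @ (Pk - Pi) @ phi
    exact_bias = acc / n
    approx = (alpha @ (Z - np.eye(3)) @ phi) / n  # leading 1/n term
    print(n, exact_bias, approx)  # n * (exact_bias - approx) -> 0
```

Both quantities shrink like $1/n$, and their difference vanishes faster than $1/n$, in line with the remainder condition $nR_n \to 0$.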