Recursive Representation

- In this section we will show
- how Markov chains can be constructed from sequences of independent and identically distributed random variables,
- that the recursive formulae (9), (10), (11) and (13) are special cases of a general principle for the construction of Markov chains,
- that, vice versa, every Markov chain can be considered as the solution of a recursive stochastic equation.

- As usual let $E$ be a finite (or countably infinite) set.
- Furthermore, let $(D, \mathcal{D})$ be a measurable space; e.g., $D$ could be the $d$-dimensional Euclidean space $\mathbb{R}^d$ and $\mathcal{D}$ the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R}^d)$, or $D$ could be defined as the unit interval $(0,1]$ and $\mathcal{D}$ as the Borel $\sigma$-algebra $\mathcal{B}((0,1])$.
- Let now $Z_1, Z_2, \ldots$ be a sequence of independent and identically distributed random variables mapping into $D$, and let $X_0 : \Omega \to E$ be independent of $Z_1, Z_2, \ldots$.
- Let the random variables $X_1, X_2, \ldots$ be given by the *stochastic recursion equation*

$$X_n = \varphi(X_{n-1}, Z_n), \qquad n = 1, 2, \ldots, \tag{14}$$

where $\varphi : E \times D \to E$ is an arbitrary measurable function.
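As a minimal sketch of this recursion (not from the original text): here $\varphi$ is a hypothetical concrete choice, a random walk on $E = \{0, 1, \ldots, 4\}$ reflected at the boundary, driven by i.i.d. innovations $Z_n$ uniform on $\{-1, +1\}$.

```python
import random

def phi(x, z):
    """Measurable update function phi: E x D -> E (walk reflected at 0 and 4)."""
    return min(max(x + z, 0), 4)

def simulate(n_steps, x0=2, seed=None):
    """Generate X_0, X_1, ..., X_n via the recursion X_n = phi(X_{n-1}, Z_n)."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        z = rng.choice([-1, 1])        # innovation Z_n, i.i.d.
        path.append(phi(path[-1], z))  # stochastic recursion equation (14)
    return path

path = simulate(10, seed=1)
```

Any other measurable $\varphi$ and innovation distribution would fit the same template; only the two ingredients `phi` and the draw of `z` change.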

**Theorem 2.2**
- Let the random variables $X_1, X_2, \ldots$ be given by the recursion (14).
- Then the sequence $X_0, X_1, \ldots$ is a Markov chain whose transition matrix $\mathbf{P} = (p_{ij})$ is given by

$$p_{ij} = P\bigl(\varphi(i, Z_1) = j\bigr), \qquad i, j \in E. \tag{15}$$

**Proof**
- Formula (14) implies that

$$
\begin{aligned}
P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0)
&= P(\varphi(i, Z_{n+1}) = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) \\
&= P(\varphi(i, Z_{n+1}) = j) = P(\varphi(i, Z_1) = j),
\end{aligned}
$$

- where the last equality follows from the transformation theorem for independent and identically distributed random variables (see Theorem WR-3.18),
- as the random variables $X_0, X_1, \ldots, X_n$ are functions of $X_0, Z_1, \ldots, Z_n$ and hence independent of $Z_{n+1}$.

- In the same way one concludes that

$$P(X_{n+1} = j \mid X_n = i) = P\bigl(\varphi(i, Z_1) = j\bigr).$$
**Remarks**
- The proof of Theorem 2.2 yields that the conditional probability $P(X_{n+1} = j \mid X_n = i)$ does not depend on $n$, as the "innovations" $Z_1, Z_2, \ldots$ are identically distributed.
- Moreover, the joint probability $P(X_0 = i_0, X_1 = i_1, \ldots, X_n = i_n)$ is given by

$$P(X_0 = i_0, X_1 = i_1, \ldots, X_n = i_n) = \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n},$$

where $\alpha_i = P(X_0 = i)$.
- Consequently, the sequence $X_0, X_1, \ldots$ of random variables given by the recursive definition (14) is a Markov chain following the definition given in (3).
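The transition probabilities $p_{ij} = P(\varphi(i, Z_1) = j)$ from Theorem 2.2 can be estimated by Monte Carlo; the sketch below does so for the same hypothetical reflected random walk on $E = \{0, \ldots, 4\}$ used above (an illustrative choice, not from the original text).

```python
import random

def phi(x, z):
    """Reflected random walk update on E = {0, ..., 4}."""
    return min(max(x + z, 0), 4)

def estimate_P(n_samples=20000, seed=7):
    """Estimate p_ij = P(phi(i, Z_1) = j) by sampling the innovation Z_1."""
    rng = random.Random(seed)
    counts = [[0] * 5 for _ in range(5)]
    for i in range(5):
        for _ in range(n_samples):
            z = rng.choice([-1, 1])  # Z_1 uniform on {-1, +1}
            counts[i][phi(i, z)] += 1
    return [[c / n_samples for c in row] for row in counts]

P = estimate_P()
```

Each row of the estimate sums to one, and for an interior state such as $i = 2$ the mass concentrates on the neighbours $j = 1$ and $j = 3$, each with probability about $1/2$.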

Our next step will be to show that vice versa, every Markov chain can be regarded as the solution of a recursive stochastic equation.

- Let $X_0, X_1, \ldots$ be a Markov chain with state space $E = \{1, \ldots, \ell\}$, initial distribution $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)$ and transition matrix $\mathbf{P} = (p_{ij})$.
- Based on a recursive equation of the form (14) we will construct a Markov chain $X_0', X_1', \ldots$ with initial distribution $\boldsymbol{\alpha}$ and transition matrix $\mathbf{P}$ such that

$$(X_0, X_1, \ldots, X_n) \stackrel{d}{=} (X_0', X_1', \ldots, X_n') \tag{16}$$

for all $n = 0, 1, \ldots$:
- We start with a sequence $Z_0, Z_1, \ldots$ of independent random variables that are uniformly distributed on the interval $(0,1]$.
- First of all, the $E$-valued random variable $X_0'$ is defined as follows: $X_0' = i$ if and only if $Z_0 \in (\alpha_1 + \ldots + \alpha_{i-1},\, \alpha_1 + \ldots + \alpha_i]$ for all $i \in E$, i.e.

$$X_0' = \sum_{i=1}^{\ell} i\, \mathbb{1}\bigl(Z_0 \in (\alpha_1 + \ldots + \alpha_{i-1},\, \alpha_1 + \ldots + \alpha_i]\bigr). \tag{17}$$

- The random variables $X_1', X_2', \ldots$ are defined by the recursive equation

$$X_n' = \varphi(X_{n-1}', Z_n), \qquad n = 1, 2, \ldots, \tag{18}$$

where the function $\varphi : E \times (0,1] \to E$ is given by

$$\varphi(i, z) = \sum_{j=1}^{\ell} j\, \mathbb{1}\bigl(z \in (p_{i1} + \ldots + p_{i,j-1},\, p_{i1} + \ldots + p_{ij}]\bigr). \tag{19}$$
- It is easy to see that the probabilities for the sequence $X_0', X_1', \ldots$ defined by (17)-(19) are given by (3), i.e., $X_0', X_1', \ldots$ is a Markov chain with initial distribution $\boldsymbol{\alpha}$ and transition matrix $\mathbf{P}$.
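The construction above can be sketched in code. The following is a minimal Python sketch of (17)-(19): each uniform variable is mapped to a state by inversion of the cumulative sums, here implemented with `bisect` (an implementation choice, not from the text). The distribution `alpha` and matrix `P` are illustrative; note also that Python's `random()` samples from $[0,1)$ rather than $(0,1]$, which changes nothing with probability one.

```python
import bisect
import itertools
import random

def inverse_transform(probs, z):
    """Return the state i in {1, ..., l} with cum[i-1] < z <= cum[i], as in (17)/(19)."""
    cum = list(itertools.accumulate(probs))
    # min(...) guards against floating-point rounding in the cumulative sums
    return min(bisect.bisect_left(cum, z), len(probs) - 1) + 1

def simulate_chain(alpha, P, n_steps, seed=None):
    """Simulate X_0', X_1', ... from i.i.d. uniforms Z_0, Z_1, ..."""
    rng = random.Random(seed)
    x = inverse_transform(alpha, rng.random())         # (17): X_0' from Z_0
    path = [x]
    for _ in range(n_steps):
        x = inverse_transform(P[x - 1], rng.random())  # (18)-(19): X_n' = phi(X_{n-1}', Z_n)
        path.append(x)
    return path

# Illustrative initial distribution and transition matrix on E = {1, 2, 3}
alpha = [0.5, 0.3, 0.2]
P = [[0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2],
     [0.3, 0.3, 0.4]]
path = simulate_chain(alpha, P, 1000, seed=3)
```

The half-open intervals in (17) and (19) partition $(0,1]$, so exactly one indicator is nonzero for each draw; `bisect_left` picks out precisely that index.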

**Remarks**
- If (16) holds for two sequences $X_0, X_1, \ldots$ and $X_0', X_1', \ldots$ of random variables, these sequences are called *stochastically equivalent*.
- The construction principle (17)-(19) can be exploited for the Monte-Carlo simulation of Markov chains with given initial distribution $\boldsymbol{\alpha}$ and transition matrix $\mathbf{P}$.
- Markov chains on a countably infinite state space can be constructed and simulated in the same way. However, in this case (17)-(19) need to be modified by considering vectors and matrices of infinite dimensions.
