
### Stationary Initial Distributions

• Recall:
• If $X_0, X_1, \ldots$ is an irreducible and aperiodic Markov chain with (finite) state space $E = \{1, \ldots, \ell\}$ and (quasi-positive) transition matrix $\mathbf{P}$,
• then the limit distribution $\boldsymbol{\pi} = (\pi_1, \ldots, \pi_\ell)^\top$, $\pi_j = \lim_{n \to \infty} P(X_n = j)$, is the uniquely determined probability solution of the following matrix equation (see Theorem 2.5):

$$\boldsymbol{\pi}^\top = \boldsymbol{\pi}^\top \mathbf{P}\,. \qquad (59)$$

• If the Markov chain is not assumed to be irreducible, there can be more than one solution of (59).
• Moreover, if the initial distribution $\boldsymbol{\alpha}_0$ is a solution of (59), then Theorem 2.3 and (59) imply

$$\boldsymbol{\alpha}_n^\top = \boldsymbol{\alpha}_0^\top \mathbf{P}^n = \boldsymbol{\alpha}_0^\top$$

and thus $\boldsymbol{\alpha}_n = \boldsymbol{\alpha}_0$ for all $n \ge 1$.
• Due to this invariance property, every probability solution $\boldsymbol{\alpha}_0$ of (59) is called a stationary initial distribution of the Markov chain $X_0, X_1, \ldots$
• Conversely, it is possible to show that
• there is a unique probability solution of the matrix equation (59) if $\mathbf{P}$ is irreducible.
• However, this solution of (59) is not necessarily the limit distribution, as $\lim_{n \to \infty} P(X_n = j)$ does not exist if $\mathbf{P}$ is not aperiodic.
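For a concrete finite chain, the probability solution of (59) can be computed by solving the linear system together with the normalization constraint. A minimal sketch in Python/NumPy (the 3×3 matrix is an illustrative choice, not taken from the text):

```python
import numpy as np

# Illustrative quasi-positive transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Solve pi^T = pi^T P together with sum(pi) = 1, i.e. the system
# (P^T - I) pi = 0 augmented by the normalization row of ones.
l = P.shape[0]
A = np.vstack([P.T - np.eye(l), np.ones(l)])
b = np.append(np.zeros(l), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                                 # the stationary distribution
print(np.linalg.matrix_power(P, 50)[0])   # rows of P^n approach pi^T
```

Since this matrix is quasi-positive, the rows of $\mathbf{P}^n$ converge to $\boldsymbol{\pi}^\top$, which the last line illustrates.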

Theorem 2.10
• Let $\mathbf{P}$ be an irreducible transition matrix, where $|E| = \ell < \infty$.
• For arbitrary but fixed $n_0 \in \{0, 1, \ldots\}$ the entries of the stochastic $\ell \times \ell$ matrices $\mathbf{P}^{(n)} = (p_{ij}^{(n)})$ where

$$\mathbf{P}^{(n)} = \frac{1}{n+1} \sum_{k=n_0}^{n_0+n} \mathbf{P}^k \qquad (60)$$

converge to a limit

$$\pi_j = \lim_{n \to \infty} p_{ij}^{(n)} > 0\,, \qquad (61)$$

which depends neither on $n_0$ nor on $i$. The vector $\boldsymbol{\pi} = (\pi_1, \ldots, \pi_\ell)^\top$ is a solution of the matrix equation $\boldsymbol{\pi}^\top = \boldsymbol{\pi}^\top \mathbf{P}$ and satisfies $\sum_{j=1}^{\ell} \pi_j = 1$.
• The distribution $\boldsymbol{\pi}$ given by (60)-(61) is the only probability solution of (59).

A proof of Theorem 2.10 can be found in Chapter 7 of E. Behrends (2000) Introduction to Markov Chains, Vieweg, Braunschweig.
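Theorem 2.10 can be illustrated numerically: for an irreducible but periodic chain the powers $\mathbf{P}^n$ oscillate and have no limit, while the averaged matrices of type (60) still converge. A small sketch (the deterministic two-state matrix is an illustrative choice):

```python
import numpy as np

# Irreducible but periodic chain (period 2): the two states alternate
# deterministically, so P^n oscillates between P and the identity matrix.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

n = 1000
powers = [np.linalg.matrix_power(P, k) for k in range(1, n + 1)]

# P^n itself has no limit, but the Cesaro averages of type (60) converge,
# every row tending to the unique probability solution pi = (1/2, 1/2) of (59).
avg = sum(powers) / n
print(avg)
```

This shows why Theorem 2.10 drops the aperiodicity assumption: averaging removes the oscillation while preserving the invariant distribution.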

Remarks

• Besides the invariance property $\boldsymbol{\alpha}_n = \boldsymbol{\alpha}_0$, a Markov chain $X_0, X_1, \ldots$ with stationary initial distribution exhibits another, considerably stronger invariance property concerning all finite-dimensional distributions.
• In this context we consider the following notion of a (strongly) stationary sequence of random variables.

Definition

• Let $X_0, X_1, \ldots$ be an arbitrary sequence of random variables mapping into $E$ (which is not necessarily a Markov chain).
• The sequence $X_0, X_1, \ldots$ of $E$-valued random variables is called stationary if for arbitrary $n, k \in \{0, 1, \ldots\}$ and $i_0, \ldots, i_k \in E$

$$P(X_0 = i_0, \ldots, X_k = i_k) = P(X_n = i_0, \ldots, X_{n+k} = i_k)\,. \qquad (62)$$

Theorem 2.11
• Let $X_0, X_1, \ldots$ be a Markov chain with state space $E$.
• Then $X_0, X_1, \ldots$ is a stationary sequence of random variables if and only if the Markov chain has a stationary initial distribution.

Proof

• The necessity of the condition follows immediately
• from Theorem 2.3 and from the definitions of a stationary initial distribution and of a stationary sequence of random variables, respectively:
• (62) in particular implies that $P(X_n = i) = P(X_0 = i)$ for all $n \ge 1$ and $i \in E$,
• and from Theorem 2.3 we thus obtain $\boldsymbol{\alpha}_0^\top = \boldsymbol{\alpha}_0^\top \mathbf{P}$, i.e., $\boldsymbol{\alpha}_0$ is a stationary initial distribution.
• Conversely, suppose now that $\boldsymbol{\alpha}_0$ is a stationary initial distribution of the Markov chain $X_0, X_1, \ldots$
• Then, by the definition (3) of a Markov chain, we have for arbitrary $n, k \in \{0, 1, \ldots\}$ and $i_0, \ldots, i_k \in E$

$$P(X_n = i_0, \ldots, X_{n+k} = i_k) = P(X_n = i_0)\, p_{i_0 i_1} \cdots p_{i_{k-1} i_k} = P(X_0 = i_0)\, p_{i_0 i_1} \cdots p_{i_{k-1} i_k} = P(X_0 = i_0, \ldots, X_k = i_k)\,,$$

• where the last but one equality is due to the stationarity of the initial distribution and the last equality again uses the definition (3) of the Markov chain.
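The invariance of finite-dimensional distributions in Theorem 2.11 can be checked numerically for a small chain: with a stationary initial distribution the two-dimensional probabilities $P(X_n = i_0, X_{n+1} = i_1)$ do not depend on $n$, while for other initial distributions they generally do. A sketch (the 2×2 matrix is an illustrative choice):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# Stationary initial distribution: solve (59) with normalization.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.append(np.zeros(2), 1.0), rcond=None)[0]

def pair_prob(alpha0, n, i0, i1):
    """P(X_n = i0, X_{n+1} = i1) for initial distribution alpha0."""
    alpha_n = alpha0 @ np.linalg.matrix_power(P, n)
    return alpha_n[i0] * P[i0, i1]

# Started from pi, the two-dimensional distributions do not depend on n:
print([round(pair_prob(pi, n, 0, 1), 6) for n in (0, 1, 5)])
# Started from a non-stationary alpha0, they do:
print([round(pair_prob(np.array([1.0, 0.0]), n, 0, 1), 6) for n in (0, 1, 5)])
```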

Remarks

• For some Markov chains, whose transition matrices exhibit a specific structure, we already calculated their stationary initial distributions in Sections 2.2.2 and 2.2.3.
• Now we will discuss two additional examples of this type.
• In these examples the state space is infinite, which requires an additional condition besides quasi-positivity (or irreducibility and aperiodicity) in order to ensure the ergodicity of the Markov chains.
• Namely, a so-called contraction condition is imposed that prevents the probability mass from "migrating towards infinity".

Examples

1. Queues

See T. Rolski, H. Schmidli, V. Schmidt, J. Teugels (2002) Stochastic Processes for Insurance and Finance. J. Wiley & Sons, Chichester, p. 147 ff.

• We consider the example already discussed in Section 2.1.2
• of the recursively defined Markov chain $X_0, X_1, \ldots$ with state space $E = \{0, 1, \ldots\}$ and

$$X_n = \max\{0,\, X_{n-1} - 1\} + Z_n\,, \qquad (63)$$

• where the random variables $Z_1, Z_2, \ldots$ are independent and identically distributed with $z_k = P(Z_1 = k)$, and the transition matrix $\mathbf{P} = (p_{ij})$ is given by

$$\mathbf{P} = \begin{pmatrix} z_0 & z_1 & z_2 & z_3 & \cdots \\ z_0 & z_1 & z_2 & z_3 & \cdots \\ 0 & z_0 & z_1 & z_2 & \cdots \\ 0 & 0 & z_0 & z_1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \qquad (64)$$

• It is not difficult to show that
• the Markov chain defined by the recursion formula (63) with the corresponding transition matrix (64) is irreducible and aperiodic if

$$P(Z_1 = 0) > 0 \quad\text{and}\quad P(Z_1 = 0) + P(Z_1 = 1) < 1\,, \qquad (65)$$

• for all $n \ge 1$ the solution of the recursion equation (63) can be written as

$$X_n = \max\Bigl\{\, X_0 + \sum_{i=1}^{n} (Z_i - 1),\; \max_{1 \le k \le n} \Bigl( 1 + \sum_{i=k}^{n} (Z_i - 1) \Bigr) \Bigr\}\,, \qquad (66)$$

• the limit probabilities $\pi_j = \lim_{n \to \infty} P(X_n = j)$ exist for all $j \in E$, where $\pi_j = 0$ for all $j \in E$ if $\rho \ge 1$ with $\rho = E\,Z_1$.
• Furthermore, $\sum_{j \in E} \pi_j = 1$ if $\rho < 1$.

• Thus, for Markov chains with (countably) infinite state space,
• irreducibility and aperiodicity do not always imply ergodicity,
• but, additionally, a certain contraction condition needs to be satisfied,
• where in the present example this condition is the requirement of a negative drift $E(Z_1 - 1) < 0$, i.e., $\rho < 1$.
• If the conditions (65) are satisfied and $\rho < 1$, then
• the equation (59) has a uniquely determined probability solution $\boldsymbol{\pi} = (\pi_0, \pi_1, \ldots)^\top$,
• which coincides with the limit distribution of (63) but which in general cannot be determined explicitly.
• However, there is a simple formula for the generating function $g_{\boldsymbol{\pi}}$ of $\boldsymbol{\pi}$, where

$$g_{\boldsymbol{\pi}}(s) = \sum_{k=0}^{\infty} \pi_k s^k$$

and

$$g_Z(s) = \sum_{k=0}^{\infty} z_k s^k\,, \qquad s \in (-1, 1]\,. \qquad (67)$$

• Namely, we have

$$g_{\boldsymbol{\pi}}(s) = \frac{(1 - \rho)(s - 1)\, g_Z(s)}{s - g_Z(s)}\,, \qquad (68)$$

where $\rho = E\,Z_1$ and $g_Z$ is the generating function of $Z_1$.

• Proof of (68)
• By the definition (67) of $g_{\boldsymbol{\pi}}$, we have $g_{\boldsymbol{\pi}}(1) = 1$.
• Furthermore, using the notation $z_k = P(Z_1 = k)$ and the equations $\boldsymbol{\pi}^\top = \boldsymbol{\pi}^\top \mathbf{P}$ with $\mathbf{P}$ given by (64), we obtain

$$g_{\boldsymbol{\pi}}(s) = \sum_{j=0}^{\infty} s^j \Bigl( \pi_0 z_j + \sum_{i=1}^{j+1} \pi_i z_{j-i+1} \Bigr) = \pi_0\, g_Z(s) + \frac{g_Z(s)}{s} \bigl( g_{\boldsymbol{\pi}}(s) - \pi_0 \bigr)\,,$$

i.e.,

$$g_{\boldsymbol{\pi}}(s) = \frac{\pi_0 (s - 1)\, g_Z(s)}{s - g_Z(s)}\,. \qquad (69)$$

• As

$$\lim_{s \uparrow 1} (s - 1) = 0$$

and

$$\lim_{s \uparrow 1} \bigl( s - g_Z(s) \bigr) = 0\,,$$

by L'Hospital's rule we can conclude that

$$\lim_{s \uparrow 1} \frac{s - 1}{s - g_Z(s)} = \lim_{s \uparrow 1} \frac{1}{1 - g_Z'(s)} = \frac{1}{1 - \rho}\,.$$

• Letting $s \uparrow 1$ in (69) thus yields $1 = \pi_0 / (1 - \rho)$, i.e., $\pi_0 = 1 - \rho$. Hence (68) is a consequence of (69).
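Formula (68) can be checked numerically for a simple arrival distribution. The sketch below (the choice $P(Z=0)=0.75$, $P(Z=2)=0.25$, so $\rho = 0.5$, is illustrative and satisfies (65)) iterates the distribution of the chain (63) on a truncated state space and compares the result with the generating-function formula:

```python
import numpy as np

# Illustrative choice satisfying (65): P(Z=0) = 0.75, P(Z=2) = 0.25, rho = 0.5 < 1.
z = np.array([0.75, 0.0, 0.25])
rho = sum(k * p for k, p in enumerate(z))

# Iterate the distribution of X_n = max(0, X_{n-1} - 1) + Z_n on states 0..N-1.
N = 100
alpha = np.zeros(N)
alpha[0] = 1.0
for _ in range(2000):
    new = np.zeros(N)
    for i, a in enumerate(alpha):
        for k, p in enumerate(z):
            j = max(0, i - 1) + k       # next state under recursion (63)
            if j < N:
                new[j] += a * p
    alpha = new

g_Z = lambda s: sum(p * s**k for k, p in enumerate(z))
g_pi = lambda s: (1 - rho) * (s - 1) * g_Z(s) / (s - g_Z(s))   # formula (68)

print(round(alpha[0], 4))                                      # ~ pi_0 = 1 - rho
print(round(sum(p * 0.5**k for k, p in enumerate(alpha)), 4))  # empirical g_pi(0.5)
print(round(g_pi(0.5), 4))                                     # value from (68)
```

The first printed value illustrates the identity $\pi_0 = 1 - \rho$ derived in the proof, and the last two values agree up to the truncation error.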

2. Birth and death processes with one reflecting barrier

• We modify the example of the birth and death process discussed in Section 2.2.3, now considering the infinite state space $E = \{0, 1, \ldots\}$ and the transition matrix

$$\mathbf{P} = \begin{pmatrix} r_0 & p_0 & 0 & 0 & \cdots \\ q_1 & r_1 & p_1 & 0 & \cdots \\ 0 & q_2 & r_2 & p_2 & \cdots \\ \vdots & \vdots & \ddots & \ddots & \ddots \end{pmatrix}, \qquad (70)$$

where $p_i + q_i + r_i = 1$ (with $q_0 = 0$), and $p_i > 0$, $q_{i+1} > 0$ is assumed for all $i \ge 0$.
• The linear equation system $\boldsymbol{\alpha}^\top = \boldsymbol{\alpha}^\top \mathbf{P}$ is of the form

$$\alpha_0 = \alpha_0 r_0 + \alpha_1 q_1\,, \qquad \alpha_j = \alpha_{j-1} p_{j-1} + \alpha_j r_j + \alpha_{j+1} q_{j+1} \quad (j \ge 1)\,. \qquad (71)$$

• Similarly to the birth and death processes with two reflecting barriers one can show that
• the equation system (71) has a uniquely determined probability solution $\boldsymbol{\alpha} = (\alpha_0, \alpha_1, \ldots)^\top$ if

$$\sum_{j=1}^{\infty} \prod_{i=1}^{j} \frac{p_{i-1}}{q_i} < \infty\,, \qquad (72)$$

• in which case the solution of (71) is given by

$$\alpha_j = \alpha_0 \prod_{i=1}^{j} \frac{p_{i-1}}{q_i}\,, \qquad j \ge 1\,,$$

• where $\alpha_0$ is determined by the condition $\sum_{j=0}^{\infty} \alpha_j = 1$, i.e.,

$$\alpha_0 = \Bigl( 1 + \sum_{j=1}^{\infty} \prod_{i=1}^{j} \frac{p_{i-1}}{q_i} \Bigr)^{-1}$$

and, consequently,

$$\alpha_j = \frac{\displaystyle\prod_{i=1}^{j} \frac{p_{i-1}}{q_i}}{\displaystyle 1 + \sum_{k=1}^{\infty} \prod_{i=1}^{k} \frac{p_{i-1}}{q_i}}\,, \qquad j \ge 1\,.$$

• As we assume $p_i > 0$ and $q_{i+1} > 0$ for all $i \ge 0$, birth and death processes with one reflecting barrier are obviously irreducible.
• Furthermore, if $r_i > 0$ for some $i \in E$, then birth and death processes with one reflecting barrier are also aperiodic (as well as ergodic if the contraction condition (72) is satisfied).
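For constant probabilities $p_i \equiv p$ and $q_i \equiv q$ with $p < q$, the products in the formulas above are geometric, and the solution reduces to $\alpha_j = (1 - p/q)(p/q)^j$. A sketch cross-checking this closed form against the equation system (71) on a truncated transition matrix (the values $p = 0.3$, $q = 0.5$ are an illustrative choice satisfying (72)):

```python
import numpy as np

# Constant rates (illustrative): p_i = 0.3, q_i = 0.5, r_i = 0.2, r_0 = 0.7,
# so the contraction condition (72) holds since p/q = 0.6 < 1.
p, q = 0.3, 0.5
x = p / q

# Closed form from the formulas above: alpha_j = (1 - x) x^j (geometric).
alpha = lambda j: (1 - x) * x**j

# Build a truncated version of the tridiagonal matrix (70).
N = 60
P = np.zeros((N, N))
P[0, 0], P[0, 1] = 1 - p, p                   # reflecting barrier at 0
for i in range(1, N - 1):
    P[i, i - 1], P[i, i], P[i, i + 1] = q, 1 - p - q, p
P[N - 1, N - 2], P[N - 1, N - 1] = q, 1 - q   # truncation boundary

a = np.array([alpha(j) for j in range(N)])
print(np.max(np.abs(a - a @ P)))              # ~ 0: alpha solves (71)
```

The tail $x^j$ decays geometrically, so the truncation at $N = 60$ states leaves only a negligible normalization error.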

Ursa Pantle 2006-07-20