The Matrix of the $n$-Step Transition Probabilities

- Let $X_0, X_1, \ldots$ be a Markov chain on the state space $E = \{1, \ldots, \ell\}$ with initial distribution $\boldsymbol{\alpha}$ and transition matrix $\mathbf{P} = (p_{ij})_{i,j \in E}$.
- For arbitrary but fixed $n \ge 1$ and $i_0, i_1, \ldots, i_n \in E$, the product $p_{i_0 i_1} p_{i_1 i_2} \cdots p_{i_{n-1} i_n}$ can be interpreted as the probability of the path $i_0 \to i_1 \to \cdots \to i_n$.
- Consequently, the probability of the transition from state $i$ to state $j$ within $n$ steps is given by the sum
$$p_{ij}^{(n)} = \sum_{i_1, \ldots, i_{n-1} \in E} p_{i i_1} p_{i_1 i_2} \cdots p_{i_{n-1} j}, \tag{20}$$
where
$$p_{ij}^{(n)} = P(X_n = j \mid X_0 = i). \tag{21}$$

**Remarks**
- The matrix $\mathbf{P}^{(n)} = \bigl(p_{ij}^{(n)}\bigr)_{i,j \in E}$ is called the *$n$-step transition matrix* of the Markov chain $X_0, X_1, \ldots$.
- If we introduce the convention $\mathbf{P}^{(0)} = \mathbf{I}$, where $\mathbf{I}$ denotes the $\ell \times \ell$ identity matrix, then $\mathbf{P}^{(n)}$ has the following representation formula.

**Lemma 2.1**
- For every $n = 0, 1, \ldots$ it holds that
$$\mathbf{P}^{(n)} = \mathbf{P}^n. \tag{22}$$
**Proof**
- Equation (22) is an immediate consequence of (20) and the definition of matrix multiplication.
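Equation (22) can be checked numerically: summing path probabilities over all intermediate states as in (20) must reproduce the $n$th matrix power. A minimal NumPy sketch; the 3-state matrix `P` and the choice `n = 4` are arbitrary illustrative values, not from the text:

```python
import itertools

import numpy as np

# Arbitrary example transition matrix (rows sum to 1); not from the text.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
ell = P.shape[0]
n = 4

# p_ij^(n) as a sum over all paths i -> i_1 -> ... -> i_{n-1} -> j, as in (20).
P_n = np.zeros((ell, ell))
for i in range(ell):
    for j in range(ell):
        for inner in itertools.product(range(ell), repeat=n - 1):
            states = (i, *inner, j)
            P_n[i, j] += np.prod([P[a, b] for a, b in zip(states, states[1:])])

# Lemma 2.1 / eq. (22): the path sum equals the matrix power.
assert np.allclose(P_n, np.linalg.matrix_power(P, n))
```

The brute-force sum has $\ell^{n-1}$ terms per entry, so it is only feasible for tiny examples; the point of (22) is precisely that the matrix power avoids this enumeration.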

**Example** (*Weather Forecast*)
- Consider $E = \{1, 2\}$ and, for some $p, q \in (0, 1)$, let
$$\mathbf{P} = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}.$$
- One can show that the $n$-step transition matrix $\mathbf{P}^{(n)} = \mathbf{P}^n$ is given by the formula
$$\mathbf{P}^n = \frac{1}{p+q} \begin{pmatrix} q & p \\ q & p \end{pmatrix} + \frac{(1-p-q)^n}{p+q} \begin{pmatrix} p & -p \\ -q & q \end{pmatrix}.$$
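A quick numerical check of this closed-form expression against the direct matrix power; the values of `p` and `q` below are arbitrary examples:

```python
import numpy as np

p, q = 0.3, 0.5  # arbitrary example values in (0, 1)
P = np.array([[1 - p, p],
              [q, 1 - q]])

def P_n_closed_form(n):
    """Closed-form n-step transition matrix of the two-state chain."""
    A = np.array([[q, p], [q, p]])
    B = np.array([[p, -p], [-q, q]])
    return (A + (1 - p - q) ** n * B) / (p + q)

# Compare with the matrix power for several values of n.
for n in range(6):
    assert np.allclose(P_n_closed_form(n), np.linalg.matrix_power(P, n))
```

Note that $|1-p-q| < 1$ for $p, q \in (0,1)$, so the second term vanishes as $n \to \infty$ and both rows of $\mathbf{P}^n$ converge to $(q, p)/(p+q)$.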
**Remarks**

Furthermore, Lemma 2.1 allows the following representation of the distribution of $X_n$. Recall that $X_n$ denotes the state of the Markov chain at step $n$.

**Theorem 2.3**
- Let $X_0, X_1, \ldots$ be a Markov chain with state space $E = \{1, \ldots, \ell\}$, initial distribution $\boldsymbol{\alpha}$ and one-step transition matrix $\mathbf{P}$.
- Then the vector $\boldsymbol{\alpha}_n = (\alpha_n(1), \ldots, \alpha_n(\ell))^\top$ of the probabilities $\alpha_n(i) = P(X_n = i)$ is given by the equation
$$\boldsymbol{\alpha}_n^\top = \boldsymbol{\alpha}^\top \mathbf{P}^n. \tag{26}$$

**Proof**
- From the formula of total probability (see Theorem WR-2.6) and (21) we conclude that
$$\alpha_n(j) = \sum_{i \in E} P(X_0 = i)\, P(X_n = j \mid X_0 = i) = \sum_{i \in E} \alpha(i)\, p_{ij}^{(n)}.$$
- Now statement (26) follows from Lemma 2.1.

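By Theorem 2.3, the distribution at step $n$ is obtained from a single vector–matrix product with the $n$th matrix power. A sketch; the initial distribution `alpha` and matrix `P` are arbitrary examples:

```python
import numpy as np

# Arbitrary example: initial distribution alpha and transition matrix P.
alpha = np.array([0.2, 0.5, 0.3])
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
n = 10

# Eq. (26): alpha_n^T = alpha^T P^n.
alpha_n = alpha @ np.linalg.matrix_power(P, n)

# The same result, obtained by propagating the distribution one step at a time.
step = alpha.copy()
for _ in range(n):
    step = step @ P

assert np.allclose(alpha_n, step)
assert np.isclose(alpha_n.sum(), 1.0)  # alpha_n is again a probability distribution
```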
**Remarks**
- Due to Theorem 2.3 the probabilities $\alpha_n(i)$ can be calculated via the $n$th power $\mathbf{P}^n$ of the transition matrix $\mathbf{P}$.
- In this context it is often useful to find a so-called *spectral representation* of $\mathbf{P}^n$. It can be constructed by using the eigenvalues and a basis of eigenvectors of the transition matrix $\mathbf{P}$ as follows. Note that there are matrices having no spectral representation.

*A short recapitulation*
- Let $\mathbf{A}$ be an $\ell \times \ell$ (not necessarily stochastic) matrix, let $\boldsymbol{\phi}, \boldsymbol{\psi}$ be two $\ell$-dimensional (column) vectors such that for each of them at least one of their components is different from 0, and let $\lambda$ be an arbitrary (real or complex) number.
- If
$$\mathbf{A} \boldsymbol{\phi} = \lambda \boldsymbol{\phi} \qquad \text{and} \qquad \boldsymbol{\psi}^\top \mathbf{A} = \lambda \boldsymbol{\psi}^\top, \tag{27}$$
then $\lambda$ is an *eigenvalue* of $\mathbf{A}$, and $\boldsymbol{\psi}$ and $\boldsymbol{\phi}$ are left and right *eigenvectors* (for $\lambda$), respectively.
- As (27) is equivalent to $(\mathbf{A} - \lambda \mathbf{I}) \boldsymbol{\phi} = \mathbf{o}$ and $\boldsymbol{\psi}^\top (\mathbf{A} - \lambda \mathbf{I}) = \mathbf{o}^\top$, the number $\lambda$ is an eigenvalue of $\mathbf{A}$ if and only if $\lambda$ is a solution of the so-called *characteristic equation*
$$\det(\mathbf{A} - \lambda \mathbf{I}) = 0. \tag{28}$$

- Note that the determinant in (28) is a polynomial of order $\ell$ in $\lambda$. Thus, the algebraic equation (28) has $\ell$ (possibly complex) solutions $\lambda_1, \ldots, \lambda_\ell$. These solutions might not all be different from each other.
- W.l.o.g. we may assume the eigenvalues $\lambda_1, \ldots, \lambda_\ell$ to be ordered such that $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_\ell|$.
- For every eigenvalue $\lambda_i$, left and right eigenvectors $\boldsymbol{\psi}_i$ and $\boldsymbol{\phi}_i$, respectively, can be found.
- Let $\mathbf{\Phi} = (\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_\ell)$ be the $\ell \times \ell$ matrix consisting of the right eigenvectors $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_\ell$.
- By definition of the eigenvectors,
$$\mathbf{A} \mathbf{\Phi} = \mathbf{\Phi} \operatorname{diag}(\boldsymbol{\lambda}), \tag{29}$$
where $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_\ell)^\top$ and $\operatorname{diag}(\boldsymbol{\lambda})$ denotes the diagonal matrix with diagonal elements $\lambda_1, \ldots, \lambda_\ell$.

- If the eigenvectors $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_\ell$ are linearly independent, the inverse $\mathbf{\Phi}^{-1}$ exists and we can set $\mathbf{\Psi}^\top = \mathbf{\Phi}^{-1}$.
- Moreover, in this case (29) implies $\mathbf{A} = \mathbf{\Phi} \operatorname{diag}(\boldsymbol{\lambda}) \mathbf{\Psi}^\top$.
- This yields the *spectral representation* of $\mathbf{A}^n$:
$$\mathbf{A}^n = \mathbf{\Phi} \operatorname{diag}(\boldsymbol{\lambda}^n) \mathbf{\Psi}^\top = \sum_{i=1}^{\ell} \lambda_i^n\, \boldsymbol{\phi}_i \boldsymbol{\psi}_i^\top, \tag{30}$$
where $\boldsymbol{\lambda}^n = (\lambda_1^n, \ldots, \lambda_\ell^n)^\top$ and $\boldsymbol{\psi}_i^\top$ denotes the $i$th row of $\mathbf{\Psi}^\top = \mathbf{\Phi}^{-1}$.
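The spectral representation (30) can be sketched with NumPy's eigendecomposition. The stochastic matrix `P` is an arbitrary example whose eigenvalues happen to be real and pairwise distinct, so the eigenvector matrix is invertible:

```python
import numpy as np

# Arbitrary example transition matrix; its eigenvalues happen to be real
# and pairwise distinct, so the right eigenvectors are linearly independent.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

lam, Phi = np.linalg.eig(P)  # eigenvalues and right eigenvectors (as columns)
Psi_T = np.linalg.inv(Phi)   # rows of Phi^{-1} serve as left eigenvectors

n = 25
# Spectral representation (30): P^n = Phi diag(lam^n) Psi^T.
# The cost of evaluating this does not grow with n (apart from lam ** n itself).
P_n = Phi @ np.diag(lam ** n) @ Psi_T

assert np.allclose(P_n.real, np.linalg.matrix_power(P, n))
```

For a general matrix `np.linalg.eig` may return complex eigenvalues and eigenvectors; the imaginary parts then cancel in (30) when the original matrix is real, which is why the comparison uses `P_n.real`.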

**Remarks**
- An application of (30) to the transition matrix $\mathbf{P}$ results in a simple algorithm for calculating the $n$th power $\mathbf{P}^n$ needed in (26).
- For the necessary calculation of the eigenvalues and eigenvectors of $\mathbf{P}$, standard software like MAPLE, MATLAB or MATHEMATICA can be used.
- A striking advantage of the spectral representation (30) is that the complexity of the numerical calculation of $\mathbf{P}^n$ stays constant as $n$ is increased.
- However, the derivation of (30) requires the eigenvectors to be linearly independent. The next lemma gives a sufficient condition for the linear independence of eigenvectors.

- If all eigenvalues $\lambda_1, \ldots, \lambda_\ell$ of $\mathbf{A}$ are pairwise distinct, then every family $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_\ell$ of corresponding right eigenvectors is linearly independent.
- Furthermore, if the left eigenvectors $\boldsymbol{\psi}_1, \ldots, \boldsymbol{\psi}_\ell$ are given by the rows of $\mathbf{\Psi}^\top = \mathbf{\Phi}^{-1}$, it holds that
$$\boldsymbol{\psi}_i^\top \boldsymbol{\phi}_j = \delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases} \tag{31}$$

**Proof**
- The first statement will be proved by complete induction.
- As every eigenvector has at least one non-zero component, $c_1 \boldsymbol{\phi}_1 = \mathbf{o}$ implies $c_1 = 0$; this settles the base case.
- Let now all eigenvalues of $\mathbf{A}$ be pairwise different and let the eigenvectors $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_k$ be linearly independent for a certain $k < \ell$.
- In order to show the linear independence of $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_{k+1}$, it suffices to show that
$$\sum_{i=1}^{k+1} c_i \boldsymbol{\phi}_i = \mathbf{o} \tag{32}$$
implies $c_1 = \cdots = c_{k+1} = 0$.
- Let $c_1, \ldots, c_{k+1}$ be such that (32) holds. Multiplying (32) by $\mathbf{A}$, this also implies
$$\sum_{i=1}^{k+1} c_i \lambda_i \boldsymbol{\phi}_i = \mathbf{o}.$$
- The same argument, multiplying (32) by $\lambda_{k+1}$ and subtracting, yields
$$\sum_{i=1}^{k} c_i (\lambda_i - \lambda_{k+1}) \boldsymbol{\phi}_i = \mathbf{o}.$$
- As the eigenvectors $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_k$ are linearly independent and $\lambda_i - \lambda_{k+1} \neq 0$ for all $i \le k$, it follows that $c_1 = \cdots = c_k = 0$.
- Now (32) immediately implies $c_{k+1} = 0$.

- If the eigenvalues $\lambda_1, \ldots, \lambda_\ell$ of $\mathbf{A}$ are pairwise distinct, the matrix $\mathbf{\Phi} = (\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_\ell)$ consists of linearly independent column vectors, and thus $\mathbf{\Phi}$ is invertible.
- Consequently, the matrix $\mathbf{\Psi}^\top$ of the left eigenvectors is simply the inverse $\mathbf{\Phi}^{-1}$, so that $\mathbf{\Psi}^\top \mathbf{\Phi} = \mathbf{I}$. This immediately implies (31).
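The relation (31) is easy to verify numerically; again the matrix `P` is an arbitrary example with pairwise distinct eigenvalues:

```python
import numpy as np

# Arbitrary example matrix with pairwise distinct eigenvalues (checked below).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

lam, Phi = np.linalg.eig(P)
assert len(np.unique(np.round(lam, 9))) == len(lam)  # pairwise distinct

Psi_T = np.linalg.inv(Phi)  # rows: left eigenvectors psi_i^T

# Biorthogonality (31): psi_i^T phi_j = delta_ij ...
assert np.allclose(Psi_T @ Phi, np.eye(len(lam)))
# ... and each row of Psi_T is indeed a left eigenvector of P.
assert np.allclose(Psi_T @ P, np.diag(lam) @ Psi_T)
```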
