
### State Space, Initial Distribution and Transition Probabilities

• The stochastic model of a discrete-time Markov chain with finitely many states consists of three components: state space, initial distribution and transition matrix.
• The model is based on the (finite) set of all possible states, called the state space of the Markov chain. W.l.o.g. the state space can be identified with the set $E = \{1, 2, \ldots, \ell\}$, where $\ell \in \mathbb{N}$ is an arbitrary but fixed natural number.
• For each $i \in E$, let $\alpha_i$ be the probability of the system or object to be in state $i$ at time $0$, where it is assumed that

$$\alpha_i \ge 0 \quad \text{for all } i \in E \qquad \text{and} \qquad \sum_{i=1}^{\ell} \alpha_i = 1. \tag{1}$$

The vector $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)^\top$ of the probabilities $\alpha_i$ defines the initial distribution of the Markov chain.

• Furthermore, for each pair $i, j \in E$ we consider the (conditional) probability $p_{ij}$ for the transition of the object or system from state $i$ to state $j$ within one time step.

• The matrix $\mathbf{P} = (p_{ij})_{i,j \in E}$ of the transition probabilities, where

$$p_{ij} \ge 0 \quad \text{for all } i, j \in E \qquad \text{and} \qquad \sum_{j=1}^{\ell} p_{ij} = 1 \quad \text{for all } i \in E, \tag{2}$$

is called the one-step transition matrix of the Markov chain.
• For each set $E = \{1, \ldots, \ell\}$ and for any vector $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)^\top$ and matrix $\mathbf{P} = (p_{ij})$ satisfying the conditions (1) and (2), the notion of the corresponding Markov chain can now be introduced.
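Conditions (1) and (2) are straightforward to verify numerically. The following sketch checks them for a hypothetical three-state example; the entries of the vector and matrix are illustrative only, and states are indexed $0, \ldots, \ell-1$ instead of $1, \ldots, \ell$:

```python
# Hypothetical initial distribution and transition matrix for ell = 3 states.

alpha = [0.5, 0.3, 0.2]            # alpha_i = probability of starting in state i

P = [[0.1, 0.6, 0.3],              # one-step transition matrix (p_ij);
     [0.4, 0.4, 0.2],              # row i is the conditional distribution of
     [0.5, 0.0, 0.5]]              # the next state given the current state i

def is_distribution(v, tol=1e-12):
    """Condition (1): nonnegative entries summing to 1."""
    return all(x >= 0 for x in v) and abs(sum(v) - 1.0) <= tol

def is_stochastic(M, tol=1e-12):
    """Condition (2): every row of M is a probability distribution."""
    return all(is_distribution(row, tol) for row in M)

print(is_distribution(alpha))      # True
print(is_stochastic(P))            # True
```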

Definition

• Let $X_0, X_1, \ldots : \Omega \to E$ be a sequence of random variables defined on the probability space $(\Omega, \mathcal{F}, P)$ and mapping into the set $E = \{1, \ldots, \ell\}$.
• Then $X_0, X_1, \ldots$ is called a (homogeneous) Markov chain with initial distribution $\boldsymbol{\alpha}$ and transition matrix $\mathbf{P}$, if

$$P(X_0 = i_0,\, X_1 = i_1,\, \ldots,\, X_n = i_n) = \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n} \tag{3}$$

for arbitrary $n \ge 0$ and $i_0, i_1, \ldots, i_n \in E$.
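Formula (3) turns into a simple product of matrix and vector entries. A minimal sketch with hypothetical numbers (0-based states in place of $1, \ldots, \ell$):

```python
# Path probability (3) for a hypothetical 3-state chain.

alpha = [0.5, 0.3, 0.2]
P = [[0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2],
     [0.5, 0.0, 0.5]]

def path_probability(path, alpha, P):
    """P(X_0 = i_0, ..., X_n = i_n) = alpha_{i_0} * p_{i_0 i_1} * ... * p_{i_{n-1} i_n}."""
    prob = alpha[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

# alpha_0 * p_01 * p_12 = 0.5 * 0.6 * 0.2, i.e. approximately 0.06
print(path_probability([0, 1, 2], alpha, P))
```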

Remarks

• A square matrix $\mathbf{P} = (p_{ij})$ satisfying (2) is called a stochastic matrix.
• The following Theorem 2.1 reveals the intuitive meaning of condition (3). In particular, the motivation for the choice of the words "initial distribution" and "transition matrix" will become evident.
• Furthermore, Theorem 2.1 states another (equivalent) definition of a Markov chain that is frequently found in the literature.

Theorem 2.1   The sequence $X_0, X_1, \ldots$ of $E$-valued random variables is a Markov chain if and only if there is a stochastic matrix $\mathbf{P} = (p_{ij})$ such that

$$P(X_n = i_n \mid X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = p_{i_{n-1} i_n} \tag{4}$$

for any $n \ge 1$ and $i_0, \ldots, i_n \in E$ such that $P(X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) > 0$.

Proof

• Clearly, condition (4) is necessary for $X_0, X_1, \ldots$ to be a Markov chain, as (4) follows immediately from (3) and the definition of the conditional probability; see Section WR-2.6.1.
• Let us now assume $X_0, X_1, \ldots$ to be a sequence of $E$-valued random variables such that a stochastic matrix $\mathbf{P} = (p_{ij})$ exists that satisfies condition (4).
• For all $i \in E$ we define $\alpha_i = P(X_0 = i)$ and realize that condition (3) obviously holds for $n = 0$.
• Furthermore,
• $P(X_0 = i_0) = 0$ implies $P(X_0 = i_0, X_1 = i_1) = 0$,
• and in case $P(X_0 = i_0) > 0$, from (4) we can conclude that

$$P(X_1 = i_1 \mid X_0 = i_0) = p_{i_0 i_1}.$$

• Therefore

$$P(X_0 = i_0, X_1 = i_1) = P(X_0 = i_0)\, P(X_1 = i_1 \mid X_0 = i_0) = \alpha_{i_0}\, p_{i_0 i_1},$$

i.e., we showed that (3) also holds for the case $n = 1$.
• Now assume that (3) holds for some $n \ge 1$.
• By the monotonicity of probability measures (see statement 2 in Theorem WR-2.1), $P(X_n = i_n, \ldots, X_0 = i_0) = 0$ immediately implies $P(X_{n+1} = i_{n+1}, X_n = i_n, \ldots, X_0 = i_0) = 0$, so that both sides of (3) vanish for $n + 1$.
• On the other hand, if $P(X_n = i_n, \ldots, X_0 = i_0) > 0$, then

$$P(X_{n+1} = i_{n+1}, \ldots, X_0 = i_0) = P(X_{n+1} = i_{n+1} \mid X_n = i_n, \ldots, X_0 = i_0)\; P(X_n = i_n, \ldots, X_0 = i_0) = p_{i_n i_{n+1}}\, \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n}.$$

• Thus, (3) also holds for $n + 1$ and hence for all $n \ge 0$.
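Condition (4) also yields a simulation recipe: draw $X_0$ from $\boldsymbol{\alpha}$, then repeatedly draw the next state from the row of $\mathbf{P}$ indexed by the current state. A minimal sketch with hypothetical numbers (0-based states):

```python
import random

alpha = [0.5, 0.3, 0.2]
P = [[0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2],
     [0.5, 0.0, 0.5]]

def sample(dist, rng):
    """Draw one index according to the probability vector dist (inverse CDF)."""
    u, acc = rng.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if u < acc:
            return i
    return len(dist) - 1              # guard against rounding error

def simulate(alpha, P, n, seed=0):
    """Simulate X_0, ..., X_n: X_0 ~ alpha, then X_k ~ row X_{k-1} of P, cf. (4)."""
    rng = random.Random(seed)
    path = [sample(alpha, rng)]
    for _ in range(n):
        path.append(sample(P[path[-1]], rng))
    return path

print(simulate(alpha, P, 10))         # a list of 11 states from {0, 1, 2}
```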

Corollary 2.1   Let $X_0, X_1, \ldots$ be a Markov chain. Then,

$$P(X_{n+1} = i_{n+1} \mid X_n = i_n, \ldots, X_0 = i_0) = P(X_{n+1} = i_{n+1} \mid X_n = i_n) = p_{i_n i_{n+1}} \tag{5}$$

holds whenever $P(X_n = i_n, \ldots, X_0 = i_0) > 0$.

Proof

• Let $P(X_n = i_n, \ldots, X_0 = i_0)$ and hence, by monotonicity, also $P(X_n = i_n)$ be strictly positive.
• In this case (3) yields

$$P(X_{n+1} = i_{n+1} \mid X_n = i_n) = \frac{P(X_n = i_n, X_{n+1} = i_{n+1})}{P(X_n = i_n)} = \frac{\displaystyle\sum_{i_0, \ldots, i_{n-1} \in E} \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n}\, p_{i_n i_{n+1}}}{\displaystyle\sum_{i_0, \ldots, i_{n-1} \in E} \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n}} = p_{i_n i_{n+1}}.$$

• This result and (4) imply (5).

Remarks

• Corollary 2.1 can be interpreted as follows:
• The conditional distribution of the (random) state $X_{n+1}$ of the Markov chain at time $n + 1$ is completely determined by the state $X_n$ at the preceding time $n$.
• It is independent of the states observed in the earlier history of the Markov chain.
• The definition of the conditional probability immediately implies the equivalence of (5) and

$$P(X_{n+1} = i_{n+1},\, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0 \mid X_n = i_n) = P(X_{n+1} = i_{n+1} \mid X_n = i_n)\; P(X_{n-1} = i_{n-1}, \ldots, X_0 = i_0 \mid X_n = i_n). \tag{6}$$

• The conditional independence (6) is called the Markov property of $X_0, X_1, \ldots$
• The definitions and results of Section 2.1.1 are still valid if, instead of a finite state space, a countably infinite state space such as the set of all integers or all natural numbers is considered.
• It merely has to be taken into account that in this case $\boldsymbol{\alpha}$ and $\mathbf{P}$ possess an infinite number of components and entries, respectively.
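As an illustration of the countably infinite case, the simple random walk on the integers can be simulated without ever storing the (now infinite) transition matrix; only the transition rule $p_{i, i+1} = p$, $p_{i, i-1} = 1 - p$ is needed. This is a hypothetical example, not taken from the text above:

```python
import random

def random_walk(n, p=0.5, start=0, seed=0):
    """Simple random walk on the integers: from state i the chain moves to
    i + 1 with probability p and to i - 1 with probability 1 - p, i.e. the
    (infinite) transition matrix has p_{i,i+1} = p and p_{i,i-1} = 1 - p."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        step = 1 if rng.random() < p else -1
        path.append(path[-1] + step)
    return path

print(random_walk(5))   # 6 states; each step changes the state by +1 or -1
```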

Ursa Pantle 2006-07-20