State Space, Initial Distribution and Transition Probabilities

- The stochastic model of a discrete-time Markov chain with
finitely many states consists of three components: state space,
initial distribution and transition matrix.
- The model is based on the (finite) set of all possible states, called the *state space* of the Markov chain. W.l.o.g. the state space can be identified with the set $E = \{1, 2, \ldots, \ell\}$, where $\ell \in \mathbb{N}$ is an arbitrary but fixed natural number.
- For each $i \in E$, let $\alpha_i$ be the probability of the system or object to be in state $i$ at time $n = 0$, where it is assumed that

  $$\alpha_i \ge 0 \ \text{ for all } i \in E \quad \text{and} \quad \sum_{i \in E} \alpha_i = 1. \qquad (1)$$

- The vector $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)^\top$ of the probabilities $\alpha_1, \ldots, \alpha_\ell$ defines the *initial distribution* of the Markov chain.
- Furthermore, for each pair $i, j \in E$ we consider the (conditional) probability $p_{ij}$ for the transition of the object or system from state $i$ to state $j$ within one time step.
- The $\ell \times \ell$ matrix $\mathbf{P} = (p_{ij})_{i, j \in E}$ of the transition probabilities, where

  $$p_{ij} \ge 0 \ \text{ for all } i, j \in E \quad \text{and} \quad \sum_{j \in E} p_{ij} = 1 \ \text{ for all } i \in E, \qquad (2)$$

  is called the one-step *transition matrix* of the Markov chain.
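As a minimal numerical sketch (the three-state chain and its numbers are hypothetical, chosen only for illustration), conditions (1) and (2) can be checked directly:

```python
# Hypothetical 3-state example: alpha is a candidate initial distribution,
# P is a candidate one-step transition matrix.
alpha = [0.5, 0.3, 0.2]
P = [
    [0.9, 0.1, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.3, 0.7],
]

def is_distribution(v, tol=1e-12):
    # Condition (1): nonnegative entries summing to 1.
    return all(x >= 0 for x in v) and abs(sum(v) - 1.0) < tol

def is_stochastic(M, tol=1e-12):
    # Condition (2): every row of M is itself a probability distribution.
    return all(is_distribution(row, tol) for row in M)

print(is_distribution(alpha), is_stochastic(P))  # True True
```

The tolerance absorbs floating-point rounding in the row sums.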

- For any set $E = \{1, \ldots, \ell\}$, and for any vector $\boldsymbol{\alpha}$ and matrix $\mathbf{P}$ satisfying the conditions (1) and (2), the notion of the corresponding Markov chain can now be introduced.

**Definition**
- Let $X_0, X_1, \ldots : \Omega \to E$ be a sequence of random variables defined on the probability space $(\Omega, \mathcal{F}, P)$ and mapping into the set $E = \{1, \ldots, \ell\}$.
- Then $X_0, X_1, \ldots$ is called a (homogeneous) *Markov chain* with initial distribution $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)^\top$ and transition matrix $\mathbf{P} = (p_{ij})$, if

  $$P(X_0 = i_0,\, X_1 = i_1,\, \ldots,\, X_n = i_n) = \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n} \qquad (3)$$

  for arbitrary $n = 0, 1, \ldots$ and $i_0, \ldots, i_n \in E$.
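A small self-contained sketch (the three-state $\boldsymbol{\alpha}$ and $\mathbf{P}$ below are hypothetical illustration values) shows how formula (3) assigns a probability to an entire finite path, and how the chain itself can be simulated:

```python
import random

# Hypothetical initial distribution and transition matrix (illustration only).
alpha = [0.5, 0.3, 0.2]
P = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]

def path_probability(path):
    # Formula (3): P(X_0=i_0, ..., X_n=i_n) = alpha_{i_0} p_{i_0 i_1} ... p_{i_{n-1} i_n}
    prob = alpha[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

def simulate(n):
    # Draw one trajectory X_0, ..., X_n of the chain.
    x = random.choices(range(len(alpha)), weights=alpha)[0]
    path = [x]
    for _ in range(n):
        x = random.choices(range(len(alpha)), weights=P[x])[0]
        path.append(x)
    return path

# Exact probability of the path 0 -> 0 -> 1 under formula (3):
print(path_probability([0, 0, 1]))  # 0.5 * 0.9 * 0.1, approximately 0.045
```

Simulating many trajectories and counting how often a given path occurs would recover the same value empirically.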

**Remarks**
- A square matrix $\mathbf{P} = (p_{ij})$ satisfying (2) is called a *stochastic matrix*.
- The following Theorem 2.1 reveals the intuitive meaning of condition (3). In particular, the motivation for the choice of the terms "initial distribution" and "transition matrix" will become evident.
- Furthermore, Theorem 2.1 states another (equivalent) definition of a Markov chain that is frequently found in the literature.

**Theorem 2.1**
- The sequence $X_0, X_1, \ldots$ of $E$-valued random variables is a Markov chain if and only if there exists a stochastic matrix $\mathbf{P} = (p_{ij})$ such that

  $$P(X_{n+1} = i_{n+1} \mid X_n = i_n, \ldots, X_0 = i_0) = p_{i_n i_{n+1}} \qquad (4)$$

  for any $n = 0, 1, \ldots$ and $i_0, \ldots, i_{n+1} \in E$ such that $P(X_n = i_n, \ldots, X_0 = i_0) > 0$.
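The equivalence of (3) and (4) can be spot-checked numerically: computing joint path probabilities by the product formula (3) and then forming the conditional probability in (4) recovers exactly the corresponding matrix entry. The three-state $\boldsymbol{\alpha}$ and $\mathbf{P}$ below are hypothetical illustration values:

```python
# Hypothetical initial distribution and transition matrix (illustration only).
alpha = [0.5, 0.3, 0.2]
P = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]

def joint(path):
    # P(X_0=i_0, ..., X_n=i_n) according to formula (3).
    prob = alpha[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

# Condition (4): P(X_{n+1}=j | X_n=i_n, ..., X_0=i_0) = p_{i_n j},
# whenever the conditioning event has positive probability.
history = [0, 1, 2]   # i_0, i_1, i_2 with joint(history) > 0
j = 1                 # candidate next state
conditional = joint(history + [j]) / joint(history)
print(conditional, P[history[-1]][j])  # the two values coincide
```

Note that the conditional probability depends only on the last state of the history, which is exactly the content of Theorem 2.1.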

**Proof**
- Clearly, condition (4) is necessary for $X_0, X_1, \ldots$ to be a Markov chain, as (4) follows immediately from (3) and the definition of the conditional probability; see Section WR-2.6.1.
- Let us now assume $X_0, X_1, \ldots$ to be a sequence of $E$-valued random variables such that a stochastic matrix $\mathbf{P} = (p_{ij})$ exists that satisfies condition (4).
- For all $i \in E$ we define $\alpha_i = P(X_0 = i)$ and realize that condition (3) obviously holds for $n = 0$.
- Furthermore,
  - $\alpha_{i_0} = 0$ implies $P(X_0 = i_0, X_1 = i_1) = 0$, so that both sides of (3) vanish,
  - and in case $\alpha_{i_0} > 0$, from (4) we can conclude that

    $$P(X_0 = i_0, X_1 = i_1) = P(X_0 = i_0)\, P(X_1 = i_1 \mid X_0 = i_0) = \alpha_{i_0}\, p_{i_0 i_1}.$$

- Therefore (3) holds for $n = 1$.
- Now assume that (3) holds for some $n \ge 1$.
- By the monotonicity of probability measures (see statement 2 in Theorem WR-2.1), $P(X_0 = i_0, \ldots, X_n = i_n) = 0$ immediately implies $P(X_0 = i_0, \ldots, X_{n+1} = i_{n+1}) = 0$, so that both sides of (3) vanish for $n + 1$.
- On the other hand, if $P(X_0 = i_0, \ldots, X_n = i_n) > 0$, then

  $$\begin{aligned} P(X_0 = i_0, \ldots, X_{n+1} = i_{n+1}) &= P(X_{n+1} = i_{n+1} \mid X_n = i_n, \ldots, X_0 = i_0)\, P(X_0 = i_0, \ldots, X_n = i_n) \\ &= p_{i_n i_{n+1}}\, \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n}. \end{aligned}$$

- Thus, (3) also holds for $n + 1$ and hence for all $n = 0, 1, \ldots$. $\square$

**Remarks**
- Corollary 2.1 can be interpreted as follows:
  - The conditional distribution of the (random) state $X_{n+1}$ of the Markov chain at "time" $n + 1$ is completely determined by the state $X_n$ at the preceding time $n$.
  - It is *independent* of the states observed in the earlier history of the Markov chain.

- The definition of the conditional probability immediately implies that (4) can equivalently be written as

  $$P(X_{n+1} = i_{n+1},\, X_n = i_n,\, \ldots,\, X_0 = i_0) = p_{i_n i_{n+1}}\, P(X_n = i_n,\, \ldots,\, X_0 = i_0).$$
- The definitions and results of Section 2.1.1 are still valid if, instead of a finite state space, a countably infinite state space such as the set of all integers or all natural numbers is considered.
- It merely has to be taken into account that in this case $\boldsymbol{\alpha}$ and $\mathbf{P}$ possess an infinite number of components and entries, respectively.
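With a countably infinite state space the transition "matrix" can no longer be stored row by row. A sketch (using a simple random walk on the integers as an assumed example, not one taken from the text) represents the transition probabilities as a function instead:

```python
import random

# Simple random walk on the integers Z: from state i, move to i+1 with
# probability p and to i-1 with probability 1-p. Each "row" of the
# infinite transition matrix has only two nonzero entries.
def p_ij(i, j, p=0.5):
    if j == i + 1:
        return p
    if j == i - 1:
        return 1 - p
    return 0.0

def step(i, p=0.5):
    # Sample X_{n+1} given X_n = i.
    return i + 1 if random.random() < p else i - 1

# Each row still satisfies condition (2): its entries sum to 1
# (here it suffices to sum over the two reachable neighbours).
i = 0
print(sum(p_ij(i, j) for j in range(i - 2, i + 3)))  # 1.0
```

Only the two reachable neighbours carry positive probability, so the infinite row sum reduces to a finite one.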
