State Space, Initial Distribution and
Transition Probabilities
The stochastic model of a discrete-time Markov chain with
finitely many states consists of three components: state space,
initial distribution and transition matrix.
The model is based on the (finite) set of all possible states called
the state space of the Markov chain. W.l.o.g. the state space
can be identified with the set $E = \{1, \ldots, \ell\}$, where
$\ell \in \mathbb{N}$ is an arbitrary but fixed natural number.
For each $i \in E$, let $\alpha_i = P(X_0 = i)$ be the probability of the
system or object to be in state $i$ at time $n = 0$, where it is
assumed that
$$\alpha_i \ge 0 \quad \text{for all } i \in E \qquad \text{and} \qquad \sum_{i \in E} \alpha_i = 1. \tag{1}$$
The vector $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)^\top$ of the
probabilities $\alpha_1, \ldots, \alpha_\ell$ defines the initial
distribution of the Markov chain.
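As a concrete illustration (not part of the original text), an initial distribution satisfying (1) can be represented as a probability vector and used to sample the initial state; the vector `alpha` below is a hypothetical example for $\ell = 3$ states.

```python
import random

# Hypothetical initial distribution alpha = (alpha_1, alpha_2, alpha_3).
alpha = [0.5, 0.3, 0.2]

# Condition (1): nonnegative entries that sum to 1.
assert all(a >= 0 for a in alpha)
assert abs(sum(alpha) - 1.0) < 1e-12

# Sample the initial state X_0 (states labelled 1, ..., l).
x0 = random.choices(range(1, len(alpha) + 1), weights=alpha)[0]
```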
Furthermore, for each pair $i, j \in E$ we consider the (conditional)
probability $p_{ij} = P(X_{n+1} = j \mid X_n = i)$ for the transition of the object or
system from state $i$ to state $j$ within one time step.
The $\ell \times \ell$ matrix $\mathbf{P} = (p_{ij})_{i,j \in E}$
of the transition probabilities, where
$$p_{ij} \ge 0 \quad \text{for all } i, j \in E \qquad \text{and} \qquad \sum_{j \in E} p_{ij} = 1 \quad \text{for all } i \in E, \tag{2}$$
is called the one-step transition matrix of the Markov chain.
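Condition (2) can be verified mechanically. The following sketch (with a hypothetical matrix, not from the original text) checks that every entry is nonnegative and every row sums to one.

```python
# Hypothetical one-step transition matrix P = (p_ij) for l = 3 states.
P = [
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
]

def is_stochastic(P, tol=1e-12):
    """Check condition (2): p_ij >= 0 and each row of P sums to 1."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )
```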
For each set $E = \{1, \ldots, \ell\}$, for any vector
$\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_\ell)^\top$ and matrix
$\mathbf{P} = (p_{ij})$ satisfying the conditions (1) and
(2), the notion of the corresponding Markov chain can
now be introduced.
Definition
Let $X_0, X_1, \ldots$ be a sequence of random variables
defined on the probability space $(\Omega, \mathcal{F}, P)$ and mapping into
the set $E = \{1, \ldots, \ell\}$.
Then $X_0, X_1, \ldots$ is called a (homogeneous) Markov chain
with initial distribution $\boldsymbol{\alpha}$
and transition matrix $\mathbf{P}$, if
$$P(X_0 = i_0,\, X_1 = i_1,\, \ldots,\, X_n = i_n) = \alpha_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n} \tag{3}$$
for arbitrary $n = 0, 1, \ldots$
and $i_0, i_1, \ldots, i_n \in E$.
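The product formula (3) lends itself to a direct numerical check. The following sketch (with a hypothetical two-state chain, not from the original text) computes the joint probability of a given path as the initial probability times the successive transition probabilities.

```python
# Hypothetical data: initial distribution and transition matrix for l = 2 states.
alpha = [0.5, 0.5]
P = [[0.9, 0.1],
     [0.2, 0.8]]

def path_probability(alpha, P, states):
    """P(X_0 = i_0, ..., X_n = i_n) via condition (3):
    alpha_{i_0} * p_{i_0 i_1} * ... * p_{i_{n-1} i_n}.

    `states` lists i_0, ..., i_n with labels 1, ..., l.
    """
    prob = alpha[states[0] - 1]
    for i, j in zip(states, states[1:]):
        prob *= P[i - 1][j - 1]
    return prob
```

For example, the path $i_0 = 1, i_1 = 1, i_2 = 2$ has probability $0.5 \cdot 0.9 \cdot 0.1 = 0.045$ under this hypothetical chain.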
Remarks
A square matrix $\mathbf{P} = (p_{ij})$ satisfying (2) is
called a stochastic matrix.
The following Theorem 2.1 reveals the intuitive
meaning of condition (3). In particular, the
motivation for the choice of the terms ``initial distribution''
and ``transition matrix'' will become evident.
Furthermore, Theorem 2.1 states another (equivalent)
definition of a Markov chain that is frequently found in
the literature.
Theorem 2.1
The sequence $X_0, X_1, \ldots$ of $E$-valued random variables is a Markov
chain if and only if there is a stochastic matrix
$\mathbf{P} = (p_{ij})$ such that
$$P(X_{n+1} = i_{n+1} \mid X_n = i_n,\, \ldots,\, X_0 = i_0) = p_{i_n i_{n+1}} \tag{4}$$
for any $n = 0, 1, \ldots$
and $i_0, \ldots, i_{n+1} \in E$
such that
$P(X_n = i_n, \ldots, X_0 = i_0) > 0$.
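The equivalence of (3) and (4) can also be observed numerically: dividing the joint probability of a path by the probability of its prefix yields exactly the transition probability of the last step, regardless of the earlier states. The sketch below (with a hypothetical two-state chain, not from the original text) checks this for all paths of length two.

```python
import itertools

# Hypothetical two-state chain with strictly positive path probabilities.
alpha = [0.5, 0.5]
P = [[0.9, 0.1],
     [0.2, 0.8]]

def path_probability(alpha, P, states):
    """Joint probability P(X_0 = i_0, ..., X_n = i_n) via condition (3)."""
    prob = alpha[states[0] - 1]
    for i, j in zip(states, states[1:]):
        prob *= P[i - 1][j - 1]
    return prob

# Condition (4): P(X_2 = j | X_1 = i1, X_0 = i0) = p_{i1 j},
# independently of i0, whenever the conditioning event has positive probability.
for i0, i1, j in itertools.product([1, 2], repeat=3):
    joint = path_probability(alpha, P, [i0, i1, j])
    cond = joint / path_probability(alpha, P, [i0, i1])
    assert abs(cond - P[i1 - 1][j - 1]) < 1e-12
```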
Proof
Clearly, condition (4) is necessary for $X_0, X_1, \ldots$
to be a Markov chain, as (4) follows immediately
from (3) and the definition of the conditional
probability; see Section WR-2.6.1.
Let us now assume $X_0, X_1, \ldots$ to be a sequence of $E$-valued random
variables such that a stochastic matrix
$\mathbf{P} = (p_{ij})$ exists
that satisfies condition (4).
For all $i \in E$ we define $\alpha_i = P(X_0 = i)$
and realize that
condition (3) obviously holds for $n = 0$.